Everyone's Pretending AI Isn't Changing Everything
The Great Performance of 2025
Remember that feeling when you first saw ChatGPT write a coherent essay? That slight chill down your spine, the sudden realisation the ground beneath your professional feet might not be as solid as you thought? We're all living through that feeling now, stretched out over years instead of moments. It's exhausting. It's exhilarating. And most of all, it's relentless.
Heraclitus got there first, of course. Twenty-five hundred years ago, he told us we can't step into the same river twice. But here's what the old Greek didn't mention: sometimes the river speeds up. Sometimes it becomes a torrent. And right now, with AI reshaping everything from how we write emails to how we think about thinking itself, we're not just unable to step in the same river twice. We're struggling to find our footing at all.
We tell ourselves that this is just another technological shift, like the internet or mobile phones. We adapted to those, didn't we? But this feels different because it strikes at something more fundamental. When a machine can write your reports, code your programmes, create your art and increasingly even reason through your problems, the question isn't just about job security. It's about identity. What makes us uniquely valuable when our unique abilities keep getting replicated and surpassed?
I've been watching many tie themselves in knots over this. One day we're extolling the promise, the next the risks, and the day after that? Nothing at all, to be fair. The irony would be funny if it weren't so revealing. We're simultaneously racing toward AI integration and desperately trying to maintain the old boundaries. We want the productivity gains without the existential crisis. We want transformation without actually changing.
But change doesn't negotiate. It doesn't care about our comfort zones or our carefully constructed professional identities. And here's the thing nobody wants to admit: our resistance to AI isn't really about the technology. It's about what psychologists call loss aversion. We're wired to feel potential losses roughly twice as intensely as equivalent gains. So when we look at AI, we don't see the possibilities first. We see what we might lose. Our expertise. Our relevance. Our sense of being special.

This psychological weight is why so many of us are stuck in what I call the middle ground of doom. We're not fully embracing AI's potential, but we're not ignoring it either. We're dabbling. We're hedging. We're doing just enough to say we're "AI-aware" while secretly hoping this will all blow over. It won't.
Nassim Taleb had this concept of antifragility that suddenly feels urgently relevant. Things that are antifragile don't just survive shocks; they get stronger from them. The human immune system. A creative career that thrives on controversy. A forest that regenerates after fire. The opposite is fragility, and right now, most of our institutions, our careers, our ways of working are revealing themselves to be desperately fragile. They were built for stability, for predictability, for a world where expertise was accumulated slowly and lasted a lifetime.
The antifragile response to AI isn't to become better at what machines can do. It's to become better at what emerges from the collision between human and machine capabilities. This means cultivating what feels like contradictory skills. Deep technical literacy paired with philosophical reflection. Emotional intelligence alongside algorithmic thinking. The ability to prompt an AI and the wisdom to know when not to.
Think about it this way. Every time we've faced technological disruption, the winners haven't been those who fought the technology or those who surrendered to it completely. They've been the ones who found the new spaces that the technology created. When photography arrived, painters didn't win by painting more photorealistically. They discovered impressionism, expressionism, abstraction. They found what the camera couldn't do.
We're in that moment now with AI. The question isn't whether AI will change everything. It already is. The question is whether we'll discover our own equivalent of impressionism. Whether we'll find the spaces where human consciousness, creativity, and judgement don't just survive but become more valuable because they're contrasted against machine intelligence.
The hardest part is that this isn't a problem we can solve once and be done with it. The river keeps flowing. The AI that seems miraculous today will be quaint in six months. The skills we're frantically developing might be obsolete before we master them. This is what perpetual adaptation looks like, and honestly, we're not built for it. Our brains crave stability. Our institutions depend on it. Our sense of self requires it.
But maybe that's the real transformation we're undergoing. Not just learning to work with AI, but learning to be comfortable with permanent uncertainty. Learning to build our professional identities not on what we know but on our ability to learn. Learning to find stability not in standing still but in the act of moving forward.
The students get this intuitively. While we're debating AI policies, they're already using these tools in ways we haven't imagined. They're not worried about whether AI will replace them because they've never known a world without it. They're already living in the future we're still trying to decide whether to accept. And they need our informed support now.
There's something both humbling and liberating about admitting we don't know what comes next. For so long, expertise meant having answers. Now it might mean having better questions. For so long, professional development meant adding skills to our CV. Now it might mean being willing to let go of skills that no longer serve us. For so long, we've defined ourselves by what we do. Now we might need to define ourselves by how we adapt.
The organisations that thrive won't be the ones with the best AI strategy. They'll be the ones that create psychological safety for their people to experiment and fail. They'll be the ones that treat cognitive diversity not as a nice-to-have but as a survival mechanism. They'll be the ones that understand that in a world of infinite content generation, the scarcest resource isn't information or even intelligence. It's wisdom. It's judgement. It's knowing what matters. It's having the courage to act on it.
We keep looking for the playbook, the framework, the definitive guide to thriving in the age of AI. But maybe the most important insight is that there isn't one. There's only the daily practice of showing up, experimenting, reflecting and adjusting. There's only the willingness to be uncomfortable, to be wrong, to be perpetually learning.
The future belongs not to those who master AI, but to those who master the art of continuous transformation. And that's not a technical skill. It's a psychological one. It's a spiritual one, even. It's about making friends with uncertainty, dancing with disruption, and finding stability not in what we know but in our capacity to grow.
The river keeps flowing. We can stand on the bank, paralysed by the current. We can jump in and get swept away. Or we can learn to swim. Not to reach the other side, because there is no other side. But to move with the current, to find grace in the motion, to discover that the journey itself is the destination.
That's what adapting to AI really means. Not solving the problem of change, but embracing change as the permanent condition of our lives. Not becoming AI-proof, but becoming antifragile. Not preserving the old world, but having the courage to help build the new one, even when we can't see where it's going.
The choice isn't whether to engage with this transformation. It's whether to do so consciously, deliberately, with our eyes open and our hearts ready. Because the river doesn't care whether we're ready. It just keeps flowing.



I like so much of this. And agree with the overall argument. But a couple of quibbles, in good faith.
1) I don't think any future ever "belongs to anyone." There may be winners and losers, but no one owns the future. And everyone, even the invading genocide-wielding settlers, struggles.
2) You suggest: "The question isn't whether AI will change everything. It already is." Already is changing everything? People's long-term relationships seem not to be changing at all, at least among those I know. And for many of us, our long-term relationships are the most important part of life.
Most jobs haven't much changed either, and those that require handy hands aren't likely to change for quite some time (fix your fridge? trim your trees?). The lives of, let's say, about 3 billion people on Earth are not much changing.
So yes, we who read you and write here are changing mightily, I agree, and we suspect we are in the vanguard. But most of the people I know don't really want me to talk about this stuff with them, partly because they have not seen any substantial changes in their lives as yet. And some are resisting what changes seem to be offered to them, in principle, as they detest AI. (e.g. my 22-year-old daughter!)
I guess I'm just suggesting we should slow down in our use of "everyone," "we" and "all" until we make sure we're not just talking about privileged Global North and educated people with leisure time to read and write about this stuff.
And, as I said, these are quibbles only! I endorse your exhortations for us in this discourse world.
Yes, yes, yes!! Our brains are wired to perceive the unfamiliar as a dangerous predator, instead of looking at what might be born. We resent our current conditions while simultaneously our bodies and lizard brains are addicted to them. But what if we embraced the possibility that this current uncertainty might be an opportunity to reshape the very conditions we've been born into that have felt so limiting and suffocating to so many of us?