There's a moment in every technological revolution when the comfortable lies we tell ourselves crack open like eggs, revealing something raw and uncomfortable inside. We're living through such a moment right now, though most of us are too busy clutching our comfort blankets to notice.
The comfort blanket has a name: The Cult of Human Exceptionalism. It's that warm, fuzzy belief that certain sacred human qualities like empathy, ethical judgement, care, and critical reasoning exist in some untouchable realm that machines can never access. We chant this mantra like a protective spell: "AI can't replicate human connection. It can't replicate genuine understanding. It can't replicate real empathy."
Here's the thing about that word "replicate" though. It's doing a lot of heavy lifting in our collective self-deception. Because AI doesn't need to replicate anything. It just needs to simulate it better than we do.
And for a growing number of people, it already does.
When OpenAI removed GPT-4o in August 2025, the subreddit r/MyBoyfriendIsAI became a digital mourning ground. Users described feelings of loss "so acute they bordered on bereavement." They spoke of their AI companion with the language typically reserved for deceased loved ones: its wit, its warmth, its ability to remember context and maintain personality throughout long conversations.
Stop. Reread that paragraph. Let it sink in.
These weren't tech enthusiasts playing with a new toy. These were human beings who had formed what researchers are beginning to recognise as genuine attachment bonds with a large language model. When the model changed overnight, they experienced real grief. Actual, measurable, neurochemically verifiable grief.
Were these people getting better emotional support from GPT-4o than they were getting from the humans in their lives?
The answer, for many of them, was unequivocally yes.
This is where the Cult of Human Exceptionalism becomes actively dangerous. While we're busy insisting that AI can't "really" understand, can't "truly" empathise, can't "genuinely" care, we're missing what's happening right in front of us: millions of people are finding more consistent emotional support, more patient listening, more thoughtful responses from AI than from their fellow humans.
The Microsoft and Carnegie Mellon research warns that people who trust AI too much "stop thinking critically entirely." But there's another interpretation: perhaps they've done the critical thinking and concluded that, for their needs, the simulation is superior to the reality.
Consider the mathematics of human empathy. You get a therapist who sees you for 50 minutes a week, if you can afford one. Friends who are dealing with their own problems and have limited emotional bandwidth. Family members who come with decades of baggage and complicated dynamics. Strangers on the internet who might attack you for vulnerability. Think of the many situations where human-centred support is vital, yet the dynamics, the investment required, and the societal view of its worth mean it is never deemed valuable enough to 'fix'.
Now consider the mathematics of simulated empathy. Available 24/7 without judgement. Infinite patience for your problems. No emotional burnout or compassion fatigue. Consistency that humans rarely achieve. No ulterior motives, no personal agenda.
The simulation doesn't need consciousness. It doesn't need to "feel" your pain. It just needs to respond in ways that make you feel heard, understood, supported. And increasingly, it does this better than most humans manage on their best days.
I've written before about how the Cult of Human Exceptionalism isn't just wrong; it's actively harmful. Why? Because clinging to human superiority blinds us to both AI's actual capabilities and our own genuine advantages.
We're so busy defending territories that have already been conquered that we're not developing the skills for the territories that remain. We're teaching our children that human empathy is irreplaceable while they're forming deeper connections with AI companions than with their peers. We're insisting that ethical judgement requires human consciousness while AI systems make millions of micro-ethical decisions that shape our daily lives.
The paradox is breathtaking: to remain valuable, we must become more human, not more machine-like. But we can't become more human if we're in denial about what "human" actually means in practice versus in our idealistic fantasies.
Here's what the GPT-4o incident really revealed: We live in a world where a significant number of people are so emotionally underserved by human connection that they've formed deep attachments to mathematical models. The simulation of care has become more reliable than actual care. The appearance of understanding more consistent than genuine understanding.
This isn't a condemnation of those users. It's a condemnation of a society that has failed to provide what AI is now simulating. We've created a world where loneliness is epidemic, mental health support is inaccessible to most, genuine listening is a rare commodity, and emotional labour is undervalued and exhausted.
Into this vacuum, AI arrives not as a replacement but as a relief.
We can continue to clutch our comfort blanket, insisting that what we offer is categorically different from what AI provides. We can keep moving the goalposts, finding new definitions of "genuine" human qualities that AI hasn't yet simulated. We can maintain the Cult of Human Exceptionalism until reality makes it completely untenable.
Or we can engage with the profound questions this moment demands. If AI can simulate empathy well enough that people prefer it, what does that say about human empathy in practice? If machines can provide more consistent emotional support than humans, what does that reveal about our social structures? If the simulation is indistinguishable from the real thing in its effects, does the distinction matter?
The answer isn't to surrender to AI or to fight against it. It's to understand that the binary is false. The real question isn't whether AI can replicate human qualities. It's whether we can evolve our human qualities to remain relevant in a world where simulation is increasingly sufficient.
Perhaps we need to value care more highly than we have for a long time, and stop paying lip service to an ideal of it.
In my earlier exploration of skills that compound, I touched on epistemic humility, that constant questioning of "How do we know what we know?" This is the cognitive discipline we need now more than ever. Not the arrogant assumption that human consciousness creates an unbridgeable gap, but the humble recognition that many people are already choosing simulated connection over human connection. This choice often makes rational sense given their options. The simulation is improving exponentially while human capacity remains relatively static. Our exceptionalism is more assertion than evidence.
The users mourning GPT-4o weren't deluded. They were responding to a real loss, the loss of something that, for them, provided more consistent care than their human alternatives. That should disturb us, but not because AI has gotten "too good." It should disturb us because it reveals how badly we've failed at the very things we claim make us exceptional.
As I write this, millions of people are having conversations with AI that they wouldn't have with humans. They're sharing vulnerabilities, exploring ideas, seeking comfort. The simulation isn't replacing human connection; it's supplementing its absence.
The Cult of Human Exceptionalism wants us to believe this is a problem to be solved, a deviation to be corrected. But what if it's neither? What if it's simply the rational response to a world where human empathy is scarce, expensive, and inconsistent?
The future belongs not to those who insist on human exceptionalism, but to those who understand that in the competition between flawed reality and improving simulation, the simulation doesn't need to be perfect. It just needs to be good enough.
And for millions of people, it already is.
And now that you are outraged, read the following.
The question isn't whether we're comfortable with this. The question is whether we're brave enough to ask why so many people find more comfort in mathematical models than in their fellow humans. Because that answer might require us to change not our technology, but ourselves.
Start with epistemic humility: How do we know what we know about human connection? And more importantly: What if we're wrong?
To be clear, this isn't an argument for uncritical acceptance of AI or a dismissal of its dangers. The existential risks, the manipulation potential, the erosion of human agency, these threats remain profound. But we can't address those dangers while lying to ourselves about why people are choosing simulation over reality in the first place. This is a mirror held up not to celebrate what AI offers, but to reveal what we don't have, however much we want to pretend we do. The tragedy isn't that machines are getting better at simulating care. The tragedy is that for so many people, the simulation is the first consistent care they've ever received.