AI Won't Replace You.
It'll Replace the Struggle That Made You.
I noticed it first with writing.
Not the quality of the output. The feeling of doing it. Something had shifted. The resistance I used to push against, the friction that made a sentence feel earned, had become optional. I could still choose the hard way. But choosing it felt increasingly like affectation, like insisting on a typewriter when a laptop sits on the desk.
That’s when I realised the conversation we’re having about AI is missing something. We keep asking whether machines will take our jobs, match our capabilities, displace our economic function. Those are real questions. They’re not the deepest ones.
The deeper question is what happens to us when the struggle becomes optional.
There’s a new genre of AI commentary that I find simultaneously correct and useless. It usually goes like this:
You are not special. Accept it. Adapt. The cognitive abilities you thought were uniquely human are reproducible. Stop clinging to exceptionalism. Embrace collaboration with machines. The future belongs to centaurs.
These pieces share a strange quality: they’re brave about the diagnosis and cowardly about the prescription.
Yes: we’re not special in the way we thought. Every boundary we draw around “exclusively human” capability eventually dissolves. Copernicus relocates us. Darwin demotes us. Freud decentres us. Now the language models casually stroll through the gates we swore were guarded by something sacred. Fine. I accept the humiliation. I’ve been accepting versions of it for years.
And then comes the pivot.
Therefore we should focus on human-AI collaboration, lifelong learning, cultivating empathy and creativity, designing workflows that leverage complementary strengths. We’ll remain valuable because we’ll be adaptable. We’ll matter because we’ll learn to matter differently.
This is where I lose faith. Not because it’s wrong, exactly. Because it answers the wrong question.
The question those pieces pose is: How do we remain relevant? How do we stay employable? How do we keep humans “in the loop” when machines can do so much of the work?
Those are real questions. They matter for policy and wages and institutions and dignity. But they are not the deepest questions this moment forces on us.
The deeper question is simpler and more unsettling:
What were we for in the first place?
Not: what can we still do that machines can’t? But: what was the point of doing any of it at all?
When I read arguments about “human-AI collaboration,” I notice an unexamined assumption humming beneath the prose: that the value of human activity lives primarily in its outputs.
We write essays, solve problems, make art, diagnose illnesses. If machines can produce equivalent or better outputs, we face a “relevance crisis.” So we reposition: editor, curator, director of machine labour. We preserve our economic niche by becoming the managerial layer of intelligence.
But that assumes the point of writing was the essay. The point of solving problems was the solution. The point of diagnosis was getting the patient sorted.
What if it wasn’t?
What if a significant portion of what made those activities valuable was the process—not as training for better output, not as some quaint artisanal ritual, but as intrinsically valuable experience: the terrain on which a certain kind of creature becomes what it is.
There’s a body of research in education and psychology often summarised as “desirable difficulties”—the idea that effort, friction, and the right kind of struggle are not merely unfortunate costs. They are part of how learning actually happens. Remove the difficulty and you don’t just make the task easier; you change what the learner becomes. You get performance without formation.
The friction we’re so eager to eliminate might be the very thing that makes us who we are.
When we optimise away that friction, we don’t free humans to do “higher things.” We remove the activity that makes us capable of higher things in the first place.
Here’s what the “we’re not special” discourse gets wrong: it treats the death of human exceptionalism as primarily a challenge to our economic usefulness.
How do we stay employed? How do we remain valuable to markets? How do we position ourselves in a world of abundant machine intelligence?
Again: important questions. Not the deepest ones.
The deeper question is: What happens to human flourishing when the activities through which we flourish get optimised away?
Flourishing isn’t a mood. It’s not a warm bath of satisfaction that arrives once the outputs are shipped. Flourishing is an activity. It requires engagement at the edge of our capacity with challenges worth overcoming. It requires accomplishment understood not as “things produced” but as capabilities developed through effort. It requires autonomy exercised, judgement sharpened, and character formed through repeated contact with what resists us.
AI threatens that—not because it competes with us for tasks, but because it removes the terrain on which we become.
And that’s what makes the standard centaur pitch feel so thin. The centaur model—human provides vision and judgement, machine provides speed and scale—sounds reasonable until you remember how those human traits are made.
Judgement develops through practice. Vision forms through wrestling with stubborn material. Taste is trained by making bad versions and learning why they’re bad. Ethical discernment is cultivated by bearing responsibility under uncertainty.
If the machine handles the difficult parts, what exactly is the human developing?
We become curators of outputs we couldn’t produce, supervisors of processes we don’t understand, commanders who’ve never faced the conditions their commands address.
You cannot command what you have not, at some point, done.
At this point a common objection arrives, usually wearing a sensible cardigan:
“People used to say calculators would ruin maths. They didn’t. They freed us from drudgery. We still teach arithmetic. We just don’t force everyone to do long division forever.”
Fair. And this is where the conversation has to get more precise than the usual hot takes allow.
There is a difference between unjust struggle and formative struggle.
Unjust struggle is friction that exists only to exclude: bureaucratic misery, pointless gatekeeping, work designed to exhaust rather than develop. It should be eliminated. If AI can vaporise that whole category, good. I’ll applaud.
Formative struggle is different. It’s the difficulty that constitutes growth: the attempt, the failure, the correction, the slow accumulation of intuitive grasp. It’s the kind of work where the point is not only the product but the person produced by making it.
The danger is that optimisation doesn’t understand the difference. It can’t, because the difference isn’t in the output. It’s in what the process does to a human nervous system over time.
AI will happily remove both—unjust and formative—with the same efficient indifference.
So the question isn’t “should we use AI?” The question is: where are we going to insist on friction on purpose?
This is also why the usual closing advice in these essays feels like a dodge.
Adopt a mindset of lifelong learning. Focus on uniquely human qualities. Leverage AI to augment your work. Support policies that ease transitions.
None of this is wrong. All of it misses the point.
Because the point is not simply how to remain employable. The point is what kind of creature we are becoming as we arrange our lives around convenience, optimisation, and the endless removal of friction.
We are engineering ourselves into a new form of being:
A being that can produce outputs without developing capacity. That can access information without forming understanding. That can generate solutions without cultivating wisdom.
We are not just adapting to remain relevant. We are deciding what human becoming will consist of when the machinery of becoming has been outsourced.
The humility argument is correct that we aren’t cosmically special. We’re not the centre of the universe, not separate from animals, not the sole possessors of whatever magic makes thought possible.
But there’s another kind of specialness it never touches. Not specialness as exclusive capability. Specialness as particular form.
We are this: mortal, embodied, finite. We develop through time. We learn through error. We form character through confrontation with difficulty. We find meaning through engagement with what resists us.
These aren’t “competitive advantages” to leverage. They’re the conditions of a particular kind of existence. An existence that may not scale, may not optimise, may not produce outputs efficiently—yet still constitutes something real: the activity of being this kind of creature, with these stakes, in this brief window of consciousness.
Machines may become special in their own way. They may develop forms of cognition we can’t imagine. They may matter in ways we can’t yet articulate. Both can be true. Our value doesn’t require their absence.
But our form requires attending to what we’re made of—not to win a competition, but to remain capable of the kind of flourishing our form permits.
I’m not suggesting we reject AI or return to artificial scarcity. But we need a sharper distinction than the public conversation currently offers:
AI should remove the suffering that exists only to waste us. We should be extraordinarily careful about removing the struggle that builds us.
What we preserve, we will have to preserve on purpose. Not because it’s efficient. Because it’s what we are.
The “we’re not special” genre is right about the humiliation. The conclusions drawn from it are not brave but evasive. They turn a profound moment of reckoning into a job-training challenge.
We are not special in the way we thought. Correct.
But we will not remain what we are by finding new niches in the economy of machine intelligence. We will remain what we are by protecting the conditions under which creatures like us flourish—and those conditions include difficulty, engagement, practice, development, and struggle.
Making life easier and making life meaningful are not the same goal. They are often in tension.
The question isn’t how to adapt to a world of abundant machine intelligence.
The question is whether we’ll remember what we were for before we’ve optimised it away.