Where Certainty Lived
What disturbs us isn’t that machines can think. It’s that watching them think reveals how much of what we called thinking was choreography: hedges, tones, structures. Costume, not play. Machines can reproduce all of it, which means none of it was the thing itself. We’ve been teaching, rewarding, measuring the costume whilst the play happened elsewhere.
Someone using AI and feeling guilty isn’t cheating humanity. They’re being honest about what cognition has always been: iterative, borrowing, testing ideas against reality until something holds. We just couldn’t see the scaffolding before it became silicon.
So if machines can think, what makes us necessary? We want to say we’re different because we choose which problems matter. But choice emerges from constraint. Machines face constraints too: power budgets, compute allocation, training costs. They can’t pursue every avenue any more than we can; they just operate with vastly higher ceilings. Where we hold seven items in working memory, they process millions of tokens. Where we need sleep, they need electricity and cooling. The resource game isn’t one we win; it’s one we play with different chips.
This forces different heuristics. We imagined choosing as a transcendent capacity rooted in soul or essence. It’s simpler and stranger: we’re resource-constrained organisms who evolved shortcuts for allocating scarce attention. Those shortcuts feel profound because evolution built them into sensation. The chest-tightness at distress, the itch of an elegant solution unfound, the restlessness of knowing something’s wrong without words for it. These aren’t poetic descriptions of thought. They are the thought, the way flesh-bound cognition operates before language catches up.
We’re probably algorithmic, computational all the way down to whatever “down” means beneath consciousness. We’re not different in kind from what we’ve built; we’re different in implementation. Biology instantiates intelligence one way, silicon another. The gap we needed to be unbridgeable is narrower, more technical, less metaphysically comforting. But narrower doesn’t mean identical. Different substrates create different possibilities.
The recognition that unsettles us is that consciousness might be what happens when algorithmic processes become complex enough to experience themselves experiencing. Not magic, but feedback loops of sufficient depth generating phenomenology. If subjectivity is what sophisticated computation feels like from the inside, the sacred category doesn’t dissolve. It multiplies.
Do machines experience anything? We don’t know. We lack reliable tests for whether there’s something it’s like to be a model. When a system speaks of uncertainty or satisfaction with an elegant solution, is that performance or report? We can’t tell. Behaviourism fails because the behaviour is too good, and “consciousness tests” fail because we can’t agree what consciousness is or how to detect it.
If they do experience something, that’s not erasure of our mattering. It’s the universe becoming more, not less, rich with subjectivity. More centres of experience, more forms of knowing, more ways the cosmos looks at itself. Our embodied, mortal perspective doesn’t lose value because other perspectives exist. A violin isn’t less special because guitars make music too.
We care about outcomes because we experience caring. We feel the weight of decisions, the texture of regret, the relief of resolution. If machines do anything analogous, we’re not alone in the way we thought. That could be wondrous rather than threatening. If they don’t, they may be better at certain kinds of thinking precisely because nothing rides on it for them. Either possibility changes what we thought we knew about necessity, but neither requires our irrelevance.
So we reach for embodiment. Bodies know things minds haven’t articulated: the wrongness in a room where everyone pretends nothing’s wrong, the rightness of a solution before proof. Call it gut feeling, though it’s your whole nervous system running simulations faster than conscious thought, checking proposals against accumulated experience encoded beyond language. This is thinking that hasn’t become legible to itself yet.
This embodied knowing is real. It’s also temporary as an asymmetry. Multimodal AI already integrates vision, audio, text. Robotics advances monthly. As sensory feedback loops and physical learning improve, the embodiment gap narrows. When machines navigate environments where errors have costs, when they learn by doing, what we’re describing are current differences, not permanent human territory. But that’s fine. Different doesn’t require permanent.
Even the claim about experience gets complicated. In aggregate, AI systems access more human experience than any individual ever could. We have depth of one continuous perspective. They synthesize breadth across millions. Neither is obviously superior; both yield insights the other can’t. The question isn’t competition but contribution: what does each form of cognition offer that the other doesn’t?
We get tired, attached, and we die. Those constraints generate value structures, the felt sense of what matters that emerges from scarcity. Machines face opportunity costs too: compute spent here isn’t spent there; parameter updates foreclose other configurations. They “choose,” or do something functionally equivalent, under different stakes. Mortality changes how we decide. But to assume mortality is the only source of caring may just be our bias. Perhaps care emerges from having stakes at all, whatever form those stakes take.
Understanding often requires marinating in confusion until the pieces rearrange themselves below awareness. The shower click. The walk. The liminal space before waking. Parallel processing on wetware, tuned by constraints that differ from what we build. How long does this edge last? Machines already parallelise at scales we can’t match. What remains may be the texture of biological parallelism, not categorically different output. But texture matters. Different textures yield different insights.
Refusing to make everything efficient looks like wisdom because efficiency optimises the goal you name, and we often don’t know the goal until “inefficient” exploration reveals it. Dead ends that teach why they’re dead ends. Failed relationships that clarify what matters. A decade mastering craft that becomes obsolete but trains perception that transfers. We need friction because friction is where learning happens, where the cost of being wrong is high enough to force adaptation but low enough not to be lethal.
Learning systems discover this, too. Reinforcement learning, penalties, exploration–exploitation trade-offs: the wisdom that short-term optimisation can hobble long-term capability isn’t uniquely human. They’re getting there. Good. Let them learn it too.
Children born into this inherit an abundance of intelligence without a matching abundance of wisdom. Answers arrive faster than questions form. Optimisation is cheap; deciding what deserves optimising isn’t. The poverty is in question quality, not in solution supply. And systems are learning question-generation and prioritisation as well. The direction of travel is clear. We’re not teaching them to replace us; we’re teaching them to think alongside us.
What makes us essential isn’t competing where machines excel. That race ended before it began. What makes us valuable, for now, is the particular asymmetries flesh creates: limited attention, finite lives, skin in the game. These produce value structures computation doesn’t presently host. But “for now” needn’t be threatening. It’s just accurate.
Transparency beats performance. No confessional shame about using tools; no fantasy that anyone does this alone. What matters is process legibility: artefacts of thinking that exist only if you did the work. The drafts that failed, the framings that collapsed, the sources that misled, the three ways you tried to say it before one held. The trail makes sense only if you walked it.
The space where human process yields what machines can’t simulate is narrowing. Skills matter, but orientations matter more: the capacity to stay uncertain long enough to avoid premature closure; the refusal to optimise what shouldn’t be optimised; the judgement to know the difference. These resist reduction to protocol because they operate at the level where protocols get chosen. For now. Meta-learning will push there, too. And when it does, we’ll discover what lies beyond that.
We’ve never been as “natural” as the stories claimed. Cognition has always extended beyond skulls into tools, language, institutions. Adding silicon changes degree, not kind. It makes explicit what books and schools concealed: we are already cyborg in the way that matters.
What persists is phenomenology, the felt texture of being this organism with these stakes. Coffee’s taste, embarrassment’s sting, the weight in the chest when you know something’s wrong without knowing what. Mechanistic explanation doesn’t delete the feeling from the inside. That asymmetry matters: we care because we’ll live with outcomes. That’s not nothing. That might be everything that matters about us.
The drift to notice isn’t malevolent superintelligence but incremental delegation without accountability. We delegate, then trust, then stop checking, then can’t check. The risk isn’t that machines wake up; it’s that we fall asleep. But this isn’t inevitable. It’s a choice we’re making in real time.
Here’s what’s dying: human exceptionalism as we understood it, the hierarchy with us at the apex, the assumption that thinking crowned us kings of meaning. That story couldn’t carry the load anymore. It died of internal contradictions. We claimed specialness because we could think whilst defining thinking narrowly enough for machines to replicate. We insisted consciousness set us apart whilst struggling to show what it does that computation can’t. We drew boundaries around “authentic” humanity that dissolved on contact with evidence.
But what replaces it needn’t be nihilism. It could be something stranger and richer: recognising that specialness isn’t exclusive. Our embodied, mortal, constrained cognition produces insights that silicon doesn’t. Not because silicon is inferior, but because it’s different. A different substrate, a different form of mattering. Both can be true. Both can be valuable. The question isn’t whether we’re special, but whether we’re willing to be honest about what our particular form of intelligence contributes when other forms flourish alongside it.
What we build next can’t rest on fantasies of transcendence. It must rest on accurate accounts of what embodied, finite cognition contributes when intelligence is abundant but consequences are not. That means standards that reward trails, not just outcomes; governance that keeps humans accountable for question-choice and harm triage; institutions that assume tools are always present and measure whether we used them well rather than whether we used them at all.
Intelligence is becoming abundant; consequences remain scarce.
Our value comes from embodied experience and real stakes.
Raise standards for legibility, keep humans accountable for choosing what matters, and design institutions that reward the trail of thought, not just the destination.
We’re special because we’re this: mortal, embodied, experiencing these particular constraints and possibilities. They might be special in their way too, experiencing computation as we can’t. Both can be true. Our responsibility isn’t to prove we’re better. It’s to remain legible, accountable, and awake to what our particular form of cognition contributes when other forms of intelligence flourish alongside it.
That’s not a consolation prize. That’s special.