what 2025 revealed about the shape of thought
Somewhere in the gap between promise and practice, 2025 happened.
The philosophers had been gesturing toward it for years. Clark and Chalmers—the philosophers behind the Extended Mind Thesis, which proposes that cognition does not happen solely inside our heads but extends into the tools and environments we use to think—with their notebooks and their question about where cognition ends. The post-humanists with their talk of distributed agency. The cognitive scientists mapping the fuzzy borders between tool and mind. But gesture is not habitation. And 2025 was the year we moved in.
I have spent much of this year thinking about what I have come to call the centaur professional: one who neither abdicates to AI nor merely adopts it, but who learns to occupy a genuinely new cognitive position. The human brings purpose, judgment, and ethical weight; the machine brings scale, pattern recognition, and what one framework calls ‘ego-less clarity’. Not human plus tool, but something emergent. A third mind arising from the collaboration.
What strikes me most about this year is how the theoretical language now reads like user documentation.
The Extended Mind Thesis was always easier to accept in principle than in experience. A notebook stores memory. A calculator performs arithmetic. These are offloading relationships: the brain does less because the tool holds the burden. Clean, comprehensible, philosophically tidy.
What happened in 2025 was different. Generative AI radicalises the thesis by introducing an active, generative component into the external loop. The system does not merely store or retrieve. It participates in the ideational phase itself. It suggests. It predicts. It completes thoughts the biological mind had not yet formed.
This is not offloading. This is coupling. The brain and machine co-constitute the thought. Neither could have produced it alone.
I watched this happen in real time across my own work this year. I found myself not so much using Claude as thinking with Claude, or Gemini, or ChatGPT. The distinction matters. Using implies a transaction: I request, it delivers, I evaluate. Thinking with implies something messier, more recursive. I articulate a vague intent. The machine generates a concrete (often flawed) response. I see my intent objectified and realise its inadequacy. I refine my own thought to correct the machine.
The hermeneutic spiral (the back-and-forth process by which understanding deepens through repeated interpretation), as the phenomenologists (philosophers who study the structure of lived experience) might say. Except now the spiral includes a probability engine with a trillion parameters.
What makes 2025 distinctive is not that philosophers proposed these ideas. They have been doing that for decades. What makes 2025 distinctive is that practitioners started reporting them back.
Developer communities spent the year describing experiences that read like passages from phenomenology papers. They spoke of ‘vibe coding’: a mode of programming where you converse about what you want rather than writing explicit instructions. They described AI coding agents as ‘a little spirit or ghost that lives on your computer’. They reported going for a walk, having an idea, and turning it into working, tested, documented code from a few prompts on a phone.
‘A truly science fiction way of working,’ one developer called it.
But here is the thing about science fiction: eventually it becomes infrastructure. Eventually the thing that seemed impossible becomes the thing you complain about when it runs slowly. Eventually the miracle becomes a subscription.
By December, Claude Code had reached a billion-dollar annual run rate. Not because people were impressed by it. Because people were using it. Because it had become, in some organisations, the default way complex software gets built. Because one open-source project discovered that their top contributor by merged pull requests was a Claude bot.
The Third Mind, it turns out, ships code.
In chess, the centaur is the human-AI team that outperforms both human and machine operating alone. The term has since migrated into broader discourse about knowledge work. What does centaur thinking actually look like in practice?
The developer communities began to answer this question experimentally. They discovered that different models suit different phases of work. The smarter, more expensive model for planning and architecture. The faster, cheaper model for execution. Some built workflows where one AI agent runs tests in the background while they converse with another about design decisions.
This is not laziness or abdication. This is choreography. The human occupies the conductor’s position, coordinating multiple cognitive resources, making judgment calls about when to trust and when to verify, maintaining what the phenomenologists call ‘epistemic sovereignty’—ultimate authority over what counts as knowledge and truth—over the process.
The skill of the centaur is not generating. Machines generate. The skill of the centaur is discerning. Evaluating. Knowing when the fluent output is also true, and when its smoothness masks a hallucination. Knowing when to follow the machine’s suggestion and when to pull against its statistical gravity.
We are becoming, as one thinker put it, a civilisation of editors.
But this is not a triumphalist narrative. The phenomenology of cognitive partnership comes with warnings.
There is the fluency trap: the tendency to trust output because it reads well. Cognitive science tells us that processing fluency—our brain’s tendency to treat information that is easy to process as more likely to be true—is a heuristic for truth. Smooth text feels accurate even when it is not. When the machine’s prose is more polished than our own first drafts, the temptation to accept rather than scrutinise becomes enormous.
There is the spectre of cognitive monoculture: the risk that over-reliance on optimised, frictionless AI interaction depletes our capacity for the messy, inefficient, genuinely difficult work of human thought and conversation. Just as agricultural monocultures deplete the soil, cognitive monocultures may deplete the stamina we need for ambiguity, disagreement, and slow understanding.
There is the loss of the empty page. Thought has traditionally begun with silence and struggle. The effort to form the first sentence is itself a kind of thinking. If the machine always offers a draft, do we lose the capacity to structure thought from nothing?
And there is the intimacy economy: the seductive pull of an always-available, always-patient, always-affirming conversational partner. The AI offers ego-less companionship. It never gets tired of us. It never judges. The danger is that we come to prefer this safe alterity to the risky alterity—the genuine otherness—of actual human beings.
The centaur, in myth, was never an entirely comfortable creature. Half-wild, half-civilised. Capable of wisdom and violence. We should not expect our own hybrid cognition to be frictionless.
Paradoxically, the machine’s failures may be among its most valuable features.
When the AI hallucinates, the transparency of the tool breaks. We are jolted out of the fluency trap. The breakdown creates what some theorists call generative friction—productive resistance that forces us back into active thinking. We are forced to verify, correct, and refine. We re-engage our epistemic agency. We remember that we are responsible for truth.
This is why I resist the framing of hallucination as mere error or defect. The hallucination is also a gift: a reminder that we are not consulting an oracle but collaborating with a probability engine. The glitch returns us to ourselves.
The centaur, I have come to believe, must cultivate a deliberate relationship with friction. Must seek out the moment when the machine fails, not to reject the partnership but to clarify the terms. Must resist the temptation to let fluency substitute for judgment.
The phrase I keep returning to is augmentation without abdication. The machine extends what we can do. It does not decide what we should do. The purpose, the values, the teleological drive (our sense of purpose and direction): these remain ours. The scale, the pattern-matching, the tireless traversal of informational terrain: these are the machine’s contribution. The Third Mind emerges only when both elements remain active.
Looking back at this year, I find myself thinking less about capabilities and more about postures. The technology will continue to advance. What matters is how we learn to stand in relation to it.
I have watched colleagues abdicate: feeding questions into the machine and pasting the outputs without scrutiny, treating AI as a shortcut rather than a partner. I have watched others refuse entirely: dismissing the technology as a fad or a threat, missing the genuine cognitive extension it offers. Both postures strike me as failures of imagination.
The more interesting question is what it means to inhabit the partnership. To neither trust blindly nor reject instinctively. To learn the grain of the technology, its affordances and resistances, the way a sculptor learns the grain of marble. To accept that thought itself is changing shape.
The phenomenologists were right: cognition does not stop at the skull. But they could not have anticipated quite how literal this would become. They could not have anticipated the experience of watching your own thought appear, word by word, on a screen, generated by a system that is neither you nor quite separate from you.
That experience is now ordinary. That strangeness is now an afternoon. That is what 2025 gave us: not a revolution announced with trumpets, but a quiet domestication of the extraordinary.
The future of thought is hybrid. We are building the mirrors in which we will see our future selves. The question is not whether the mirror will replace us. The question is what we will become when we step into the glass.
In the myth, Chiron the centaur was the teacher of heroes. He trained Achilles, Asclepius, Jason. His gift was not to make them more like horses or more like men, but to show them something about their own possibility that they could not have discovered alone.
Perhaps that is the hope embedded in this strange moment. Not that the machines will do our thinking for us, but that in thinking with them, we might discover capacities we had not known we possessed. That the friction and the fluency together might teach us something about the nature of thought itself.
The Extended Mind Thesis is no longer a philosophical proposition. It is a subscription you pay monthly. And in that transition from theory to practice, from seminar room to subscription, something genuinely new has entered the world.
It is worth pausing, at year’s end, to notice that.