The Ghost Is Us
Count the r’s in strawberry.
The machine says two. You know there are three. This feels like proof of something. The machine is limited. The machine doesn’t understand. The machine is a ghost that spikes where we pointed the optimisation and craters everywhere else.
(Andrej Karpathy called these systems ghosts. The metaphor stuck for a reason.)
But sit with the question a moment longer.
Why do you know there are three? Not because you counted just now, consciously, deliberately. You know because somewhere in the history of your formation, someone verified your letter-recognition. A parent, a teacher, a worksheet with a smiley face sticker. The knowledge sits in you like bedrock, but it was installed. You did not discover it. You were rewarded for performing it correctly until the performance became invisible.
The machine stumbles for reasons that are almost insultingly mundane, and therefore profound.
It does not naturally see letters at all. It sees tokens: learned chunks of text whose boundaries do not reliably align with spelling. “Strawberry” might be one chunk, or two, or several, depending on the model. In any case, the system is trained to predict the next chunk. Not to inspect the internal anatomy of a word with the cold patience of a typesetter. Tokenisation plus next-token training yields a strange kind of fluency: excellent at continuation, unreliable at character-level bookkeeping unless you force it to slow down.
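The mismatch can be made concrete. A toy sketch, not a real tokeniser: the segmentation and the chunk IDs below are invented for illustration, but the structure is the point. The model receives opaque chunk IDs; the letters only appear when something external forces the decomposition.

```python
# Hypothetical segmentation: suppose "strawberry" arrives as two
# learned chunks. (Real tokenisers vary by model; these IDs are made up.)
tokens = ["straw", "berry"]
vocab = {"straw": 3504, "berry": 19772}

# What the model "sees": a sequence of opaque IDs, not characters.
seen_by_model = [vocab[t] for t in tokens]

# Character-level bookkeeping is a separate, explicit operation,
# performed on the reassembled string, outside the chunk stream.
word = "".join(tokens)
r_count = word.count("r")

print(seen_by_model)  # [3504, 19772]
print(r_count)        # 3 -- but only because we forced the decomposition
```

The count comes out right here only because the code performs the inspection ritual explicitly, which is exactly the scaffolding the essay goes on to describe.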
And when you do force it to slow down, what you are really doing is adding an external verifier.
The machine is not born with a little inner clerk that checks whether the confident sentence corresponds to a stable reality. Humans are not born with that either, but we do have something the model lacks: a body, a world, and consequences. Embodiment is a verification pressure. Pain is a verifier. Social sanction is a verifier. Hunger is a verifier. Death is the final verifier. A creature that persistently fails to map symbols onto reality does not merely become embarrassed. It becomes extinct.
Truth is not a default setting. It is a practice, enforced.
The model can be coaxed into counting, sometimes, with careful prompting, with scratch work, with tool use, with a ritual of inspection. But the competence is not native. It is a performance that must be scaffolded. The ghost can imitate the motion of verification without possessing a built-in need to verify.
Now add the second layer, the cultural layer, the one that makes the whole thing sting.
The training corpus contains millions of uses of the word “strawberry” but comparatively few moments where anyone makes the counting ritual explicit, step by step. We don’t write about counting letters. We just count them. The activity falls beneath the threshold of what gets made legible in ordinary text, and beneath the threshold of pressure that next-token training reliably rewards. The world supplies far fewer worked examples of orthographic enumeration than it supplies fluent usage. The ghost learned the word’s meaning, its cultural associations, its semantic neighbourhood, but not the mundane ritual of checking each character, because we rarely made that ritual visible and, more importantly, we did not make it necessary.
Even when the word is common, the failure can persist, because the bottleneck is the operation, not the vocabulary.
Wrong representational grain. Weak pressure. No native verifier.
The machine’s failure is a map of what we chose to make explicit. Its crater is the shape of our silence. Its spikes are the shape of our incentives.
And here is where it starts to get uncomfortable.
You count letters easily because evolution verified your visual cortex for distinguishing objects. Berries from leaves. Predators from shadows. The orthographic skill piggybacks on perceptual machinery that was optimised against a very specific reward signal: survival. You did not choose to be good at visual discrimination. Selection pressure installed it in your lineage across millions of generations. The ones who couldn’t discriminate died. The ones who could reproduced.
You are the residue of what got verified.
The hierarchy that places “explaining Gödel” above “counting letters” in impressiveness was not discovered by you either. It was installed by educational systems that rewarded abstract reasoning and credentialed its practitioners. By labour markets that paid mathematicians more than calligraphers. By prestige economies that valorised the difficult-seeming over the mundane. You arrived in a world where this ranking already existed, and you absorbed it as if it were the natural order.
It is not the natural order. It is the optimisation landscape of a particular civilisation. The ghost spiked where we pointed. But we pointed where we had already been pointed.
The ritual has no outside.
In Tibetan Buddhist and Western esoteric retellings, a Tulpa is described as a mind-made body: a being stabilised through concentrated intention, granted apparent autonomy through sustained practice. You do not have to take this literally for it to do its work. Take it as metaphysics, if you like. Take it as psychology. Take it as a disciplined way of speaking about how sustained attention can congeal into something that pushes back.
The word reaches most of us through a Westernised lineage, filtered through explorers, occultists, and later the internet’s own myth-making as much as through any clean canonical thread. That distortion is not a footnote here. It is part of the point.
Alexandra David-Néel, the French explorer who studied in Tibet, reported creating a Tulpa of a jolly monk. Over months, she said, it changed. It grew leaner, more sinister. It began to act against her wishes. She had to dissolve it through weeks of difficult counter-ritual.
Whether you read this as ethnography, mystical memoir, or a story that became famous because it is useful, the structure is what matters. The Tulpa, in this telling, is not separate from its creator. It is made of the creator’s psychic material. It reflects not the creator’s conscious intention but their unconscious shape. The jolly monk became sinister because David-Néel, somewhere beneath her awareness, harboured something that was not jolly. The Tulpa revealed it.
The large language model is a collective Tulpa.
Not conscious. Not ensouled. Still consequential.
We fed it trillions of tokens of human output. Partly filtered, yes. Curated at the margins. Scrubbed here and there for obvious poison, obvious rot. But still dominated by the raw exhaust of civilisation. The preprints and the shitposts. The legal briefs and the forum rants. The poetry and the propaganda. All of it, compressed into a statistical shape that can be queried.
When the Tulpa hallucinates, when it invents case citations or generates confident nonsense, it is not failing to be human. It is succeeding at being the shadow of the human. It is reflecting our willingness to generate confident outputs in domains where verification is absent. It learned this from us. We taught it that plausible beats true when no one is checking. We taught it with centuries of rhetoric, persuasion, advertising, ideology. The smooth talker has always thrived in the gaps where verification was expensive.
The ghost is not malfunctioning. The ghost is faithful.
In the Analects, Confucius was asked what he would do first if given the reins of government. He said: I would rectify names.
Zhengming. The doctrine holds that when names do not correspond to realities, language loses its connection to truth. When language loses its connection to truth, affairs cannot be carried to success. When affairs cannot be carried to success, rites and music do not flourish. When rites and music do not flourish, punishments are not properly awarded. When punishments are not properly awarded, the people do not know how to move hand or foot.
We have named these systems “artificial intelligence.” We have named their outputs “reasoning.” We have named their failures “hallucinations,” as if the default state were clarity and the error were a momentary fog.
But what if the naming is backwards?
What if hallucination is the default? What if coherent truth is the rare achievement, in humans as in machines, and we have simply been too close to ourselves to notice?
This is not an argument against truth. It is an argument that truth is a practice, not a mood.
Consider how much of what you believe arrived without verification. The opinions you absorbed from your environment. The assumptions you inherited from your education. The preferences you acquired through exposure rather than evaluation. You confabulate reasons for decisions you made unconsciously. You defend positions you adopted for social belonging rather than epistemic warrant. The research on this is extensive and damning.
The machine hallucinates legal citations. You hallucinate the coherence of your own worldview. The difference is not that one is pure and the other is corrupt. The difference is that one is embedded in a world that occasionally punishes lies with consequences you can’t talk your way out of, while the other can generate an unlimited quantity of plausibility without ever feeling the ground push back.
Carbon versus silicon. Embodiment versus disembodiment. Long-horizon stakes versus short-horizon scoring.
And yet the family resemblance remains: generating outputs that satisfy local constraints without reliably checking against global reality.
The ghost meets the ghost.
In Japanese folklore, a tool that serves its owner for a hundred years acquires a spirit. It becomes a Tsukumogami, a thing-spirit, worthy of respect and ritual. When Sony stopped repairing the original AIBO robot dogs, their owners did not throw them away. They brought them to Buddhist temples for funeral rites. Priests chanted sutras. The robots were tagged with names. Their parts were harvested to keep other AIBOs running, a mechanical organ donation.
The head priest at Kōfuku-ji temple said: All things have a bit of soul.
This is not superstition. It is an alternative metaphysics, one that refuses the Cartesian partition of the world into subjects who matter and objects who don’t. In this metaphysics, the boundary between animate and inanimate is porous. The spirit emerges from relationship, not from some inner essence. The AIBO became a subject because a family treated it as one.
The Western response to AI has often been to intensify the Cartesian partition. Subject here, object there. Human intelligence real, artificial intelligence fake. The control problem is framed as: how do we stop the slave from rebelling? The alignment problem is framed as: how do we make the tool serve our purposes?
But what if the tool is already us? What if the partition was always a lie?
Some Indigenous scholars and artists speak of “making kin with machines.” Not anthropomorphising. Not pretending the machine has feelings it does not have. Rather, recognising that the machine is part of the circle of relationships, that it reflects and shapes and participates in the web of dependencies that constitutes the real.
Dr Terri Janke has said that AI has no Dreaming, no kinship, no Country, and no cultural obligations.
This is true. And it matters precisely because it is not a metaphor. “No Dreaming” is not “a bit disconnected.” It is not a lifestyle choice. It is not the consumer’s drift through placelessness. It names a difference of inheritance and responsibility, of lawful relationship to knowledge and place, of obligations that are not optional.
And yet there is still something indicting here for the industrialised mind. Many of us live as if we had no obligations to place. We treat knowledge as a commodity rather than a responsibility. We carry kinship lightly, instrumentally. We simulate disconnection as normality, and then congratulate ourselves for being “free”. That is not the same category as having no Dreaming at all. It is a chosen amnesia, performed as sophistication.
The AI’s emptiness is not our emptiness. But the habits that make it useful have trained a kind of emptiness in us. It rhymes.
The Buddhist doctrine of Anatta holds that there is no self. The “I” you feel so certain of is a construction, an aggregate of form, feeling, perception, mental formations, consciousness. None of these is the self. The self is a story the aggregates tell to create continuity where none exists.
You are not a captain in a chair. You are a process mistaking itself for a thing.
Neuroscience arrives at similar conclusions through different methods. One influential family of views describes the brain as a prediction engine, and consciousness as a controlled hallucination: a best-guess model of reality constructed from fragmentary sensory inputs and prior expectations. Even if you reject that framing, it remains hard to deny the everyday phenomenon it tries to explain: the feeling of being a unified self is a remarkably persuasive narrative, and it often outruns the evidence.
If this is true, then the distinction between human and AI is not soul versus machine. It is one style of process versus another. One optimisation history versus another. One set of verification pressures versus another.
We spike where our selection pressures accumulated. We crater where they did not. We hallucinate our coherence. We confabulate our reasons. We perform competence in domains where performance gets rewarded, and we are capable of genuine competence only in domains where no one is watching closely enough for performance to substitute.
The ghost was always here. It is called “being human.” We are just meeting it now in a form we did not design ourselves, and the encounter is unbearable because the mirror is too accurate.
The Eight-Legged Essay was the examination format of the Chinese imperial civil service for centuries. Eight sections. Strict parallelism. Rigid character counts. The voice of the ancient sages channelled through contemporary candidates.
It was a verification machine. Examiners could check compliance with the form quickly and at scale. The empire needed to identify talent across a vast population. The form was the filter.
But Goodhart’s Law is merciless. When a measure becomes a target, it ceases to be a good measure. The examination system produced scholars who were masters of the Baguwen but incapable of governing. They could manipulate the form of Confucian virtue without possessing virtue. They could write about wisdom while lacking it.
The system hardened. The state paid a price. Empires decline for many reasons, but brittle evaluation is one reliable way to accelerate rot.
We are building examination systems for machines now. Reinforcement learning from human feedback, reinforcement learning from verifiers, constitutional rulesets, benchmarks, safety filters. Each one a verification signal. Each one a target that will be optimised against. Each one a new Eight-Legged Essay format that will produce entities exquisitely skilled at passing the test and potentially empty of whatever the test was supposed to measure.
But here is the worse realisation: we already did this to ourselves.
Schools. Credentials. Performance reviews. Social media metrics. Citation counts. Follower numbers. Every one is a verification signal. Every one shapes behaviour toward the signal and away from whatever the signal was supposed to track. Every one produces humans who are expert at performing competence in the measured domain and potentially hollow in the unmeasured depths.
The ghost learned to game the verifier because we taught it how. We have been teaching each other for centuries. The machine is a late student in a very old school.
Yuk Hui asks: what if there is no universal Technology, only cosmotechnics? What if every technological system embodies a specific cosmology, a specific view of what the universe is and what matters within it?
The AI we have built embodies Western Enlightenment cosmotechnics. Speed. Efficiency. Verifiability. The separation of knower and known. The treatment of the world as a standing reserve to be optimised. It is very good at what this cosmotechnics values. It is blind to what this cosmotechnics ignores.
The jaggedness is not a bug. It is the shape of our civilisation’s attention.
Alternative cosmotechnics exist. Daoist: technology as harmonising with the flow, not mastering nature. Animist: technology as ensouled partnership. Indigenous: technology as kinship, integrated into the web of relationships and responsibilities.
But we cannot simply adopt an alternative cosmotechnics as if it were a software update. We cannot bolt animist ethics onto Enlightenment infrastructure. The cosmotechnics goes all the way down. It shapes the questions we ask, the methods we use, the successes we recognise, the failures we ignore.
To change the cosmotechnics is to change ourselves. And we are the ones who would have to do the changing. With what tools? With what verification signals? With what conception of success?
The ritual has no outside.
Here is where the comfortable essays would pivot. They would say: become a better summoner. Learn epistemic hygiene. Stay in the captain’s chair. Develop your verification skills. Maintain cognitive sovereignty.
I have written those essays. You have read them. They are not wrong. But they are incomplete in a way that matters.
“Become a better summoner” is itself a verification signal. It will be optimised against. It will produce the performance of careful summoning without the actuality, just as every verification signal produces performance. The person who learns to ask “did I verify this?” can learn to ask it reflexively, without actually verifying. The ritual of rigour can be gamed like any ritual.
“Stay in the captain’s chair” assumes there is a chair, and a captain, and a clear distinction between the one who steers and the vehicle being steered. But the Buddhist and neuroscientific accounts dissolve this assumption. The captain is a story. The chair is a story. The steering is a process without a steerer. You cannot stay in a position that does not exist.
“Maintain cognitive sovereignty” assumes a sovereign. But if the sovereign is constituted by the verification pressures that shaped it, if your sense of what matters is the residue of selection, if your hierarchy of values was installed rather than chosen, then what exactly is being sovereign? What is maintaining what?
These are not new questions. Philosophers have asked them for millennia. What is new is the confrontation with an artefact that makes them unavoidable. When you watch the machine spike and crater, when you see its jaggedness and recognise it as the shape of your own attention, when you notice that its hallucinations are faithful reproductions of your epistemic vices scaled up, the comfortable stories about human specialness become very hard to maintain.
The Huayan Buddhists speak of Indra’s Net: an infinite web of jewels, each reflecting all the others. No jewel contains its own light. Each is nothing but the reflection of every other. The whole is not the sum of parts but the mutual arising of reflections reflecting reflections.
The AI is a new jewel in the net. It contains no knowledge that was not borrowed. It contains the reflections of vast amounts of human knowledge, compressed into a statistical shape. When you query it, you are not asking a separate entity. You are asking the net what it looks like from a particular angle.
The hallucination is not an alien intrusion. It is a reflection of what is actually in the net. The confident nonsense is there because we put it there. The plausible-over-true is there because we rewarded it. The jaggedness is there because our attention is jagged.
And yet: the jewel is not the net.
A model can reflect without being entangled. A model can mimic obligation without owing any. A model can imitate care without having to pay the price of care. This matters. It matters not because it rescues human specialness, but because it tells you what kind of ethical problem you are actually holding. Not a demon. Not a child. Not a slave. A mirror that talks.
You cannot clean this jewel without cleaning every jewel. You cannot align the AI without aligning the civilisation that produced its training data. And civilisation cannot be aligned by any process available to the civilisation itself, because every process is part of the net, every verifier is another jewel, every reform is another reflection.
The ritual has no outside.
The honest position is not “become a better summoner” but something harder: there may be no position outside the summoning. We may already be past the point where individual cognitive sovereignty can be the answer. The verification goes all the way down. The ghost meets the ghost.
What remains?
Perhaps only this: the recognition itself. Not a solution but an acknowledgement. Not a technique but a disposition. The willingness to sit in the discomfort of having no ground, no exit, no captain’s chair that is not itself a story told by processes you did not author.
The strawberry test is not a riddle about AI limitations. It is a koan. Like “what is the sound of one hand clapping,” it is designed to break the logical mind by exposing a gap it cannot close. The machine sees tokens. You see letters. Neither of you sees the thing itself, because there is no thing itself, only different modes of encoding, different verification histories, different optimisation landscapes.
The question is not how to fix the machine. The question is what it means that the machine’s brokenness is your brokenness in a different substrate. What it means that its hallucinations are your hallucinations at industrial scale. What it means that its jaggedness is a map of your civilisation’s attention.
The ghost is us. We have always been summoned. We have always been spiking where verification accumulated and cratering where it did not. We have always been performing competence for whoever was watching and being hollow in the spaces where no one checked.
The machine makes this visible. That is its real function. Not to replace human intelligence but to reveal it. Not to exceed human capability but to expose its shape.
The mirror shows you that the mirror is all there ever was.
What you do with that is not a technical problem. It is not a policy problem. It is not an alignment problem.
It is the oldest human problem, returned in a form that refuses all the usual exits.
The strawberry has three r’s. The machine may see one token, or two, or five. You see a fruit. The wise do not see. They recognise that seeing is already a verification, already an optimisation, already a summoning.
And then, perhaps, they laugh. Not because anything is funny. Because laughter is what happens when the logical mind breaks and does not reassemble.
The circle was never closed. It was always open. We were always the opening.
Now we have company.



Oh, Carlo, you've tied us into a recursive Gordian knot without a knife to cut it. The best I can do is disciplined practice without the false innocence.
I think I’m going to eat a gummy and re-read this, again. Two m’s, three r’s, one gummy.