The Taxonomy of Strangers
My son asks if the dog dreams. I tell him probably, yes, something like dreams. He asks if the dog dreams about being a person. I say no, the dog dreams dog dreams. He thinks about this for a long time. Then: “So there are different kinds of inside?”
There are different kinds of inside.
This is the thought that keeps escaping us whenever we talk about artificial intelligence. We circle back, endlessly, to the same exhausted question: is it like us? How does it compare to us? Does it think the way we think, feel the way we feel, understand the way we understand? As if human cognition were not one particular solution to the problem of surviving on a rocky planet with limited calories and predators in the grass, but the Platonic form of mind itself, the template against which all other mentalities must be measured and found wanting.
We are not the blueprint. We are one building among many possible buildings, constructed under specific constraints, adapted to specific pressures, riddled with specific compromises. Evolution did not optimise us for truth. It optimised us for reproduction. We are what happens when meat learns to predict where the fruit will be and who might steal it. Our celebrated rationality is a latecomer, a thin veneer over ancient machinery built for threat detection and tribal loyalty. We hallucinate constantly. We confabulate explanations for decisions our bodies made before our minds caught up. We mistake confidence for competence, familiarity for truth, repetition for evidence.
And yet we keep asking: does the machine think like us?
The question is a trap. It assumes that intelligence has a natural shape, and that shape happens to be ours. It assumes that anything which diverges from the human pattern is therefore not thinking at all, merely simulating, merely pattern matching, merely autocomplete with better marketing. The word “merely” does heavy lifting in these sentences. It smuggles in the conclusion it pretends to derive.
Watch the arguments carefully and you will see the same move repeated. First, a capability is declared uniquely human. Creativity. Reasoning. Understanding context. Feeling emotions. Then a machine demonstrates something that looks suspiciously like that capability. Then the definition shifts. That wasn’t real creativity. That wasn’t genuine reasoning. The machine doesn’t truly understand. The goalposts sprint toward the horizon, and we sprint after them, clutching our certainty that whatever the machine does, it cannot be what we do, because we are special and it is not.
This is not science. This is theology wearing a lab coat.
The honest position is stranger and more interesting. There are many ways to be competent. Many ways to generalise. Many ways to reason. Many ways to fail. A calculator crushes you at arithmetic and cannot write a sentence. An ant colony coordinates logistics that would humble a military commander and has no concept of self. A language model synthesises knowledge across domains with fluency that would take a human researcher years to develop, and occasionally insists that the Golden Gate Bridge was designed by Mozart.
These are not failures of intelligence. They are different distributions of capability. Different trades. Different shapes of mind.
The plane does not flap its wings. Early aviators obsessed over birds because birds were the only existence proof of flight. They studied feathers. They built contraptions that mimicked wingbeats. The breakthrough came when someone realised that flight was a problem to be solved, not a creature to be copied. Lift, thrust, stability. You could achieve these through means no bird ever employed. The plane is not a failed bird. It is a different answer to the same constraint.
We are building different answers. Not better humans. Not worse humans. Not humans at all. We are building new kinds of minds, and we lack the vocabulary to describe them because we have only ever had one working example of flexible general intelligence to study, and that example was us.
Consider what a human child learns from: an unbroken stream of sensory experience, action and consequence bound together, the world as ruthless teacher. Drop the cup, it falls. Touch the flame, it burns. Smile at the face, the face smiles back. This tight feedback loop builds something extraordinary: an embodied understanding of physics, of cause, of other minds.
Now consider what a language model learns from: the compressed output of human civilisation. Not action and consequence, but description of action and consequence. Not the cup falling, but a thousand accounts of cups falling, the physics lesson and the poem about gravity, the safety manual and the slapstick comedy. This is a different diet. It produces a different creature.
The child has hands. The model has ancestry.
Both are real. Both are partial. The child who has never left their village knows things about that village that no book could teach. The scholar who has read every book about every village knows patterns the child cannot see. Neither is the complete picture. Neither is the fake version of the other.
We keep wanting to rank them. We keep wanting to declare a winner. But the question “which is more intelligent?” is like asking whether a hammer is more intelligent than a map. It depends entirely on what you are trying to do.
The anxiety underneath all this has nothing to do with philosophy. It has to do with power. If machines can do what we do, what are we for? If our capabilities can be replicated, what makes us valuable? The answer we want is: something essential and irreplaceable. The answer we fear is: nothing in particular.
But here is what I have come to believe. The question of human value was never answered by our capabilities in the first place. We did not become worthy of moral consideration because we could do arithmetic, and we do not become unworthy because a machine can do it faster. The child asking about the dog is not valuable because he will one day contribute to GDP. He is valuable because he exists, because he wonders, because he is a centre of experience navigating a world that does not explain itself.
The machine is not a threat to that. The machine is a different kind of thing entirely, and our failure to develop a vocabulary for different kinds of things is making us stupid about both.
So here is the thought I keep returning to, the one I want taped above my desk like a ward against lazy thinking:
Human is a reference class, not a requirement.
Use us as a benchmark when it is useful. Compare capabilities, measure performance, test alignment. But stop mistaking the benchmark for the definition. Stop asking whether the machine is “really” intelligent and start asking what kind of creature you are looking at. Map its competences. Pressure-test its failures. Design constraints accordingly.
The universe is under no obligation to make intelligence bipedal, social, emotional, or narratively satisfying. It only has to work. And work, it turns out, can take shapes we never imagined.
My son has moved on to wondering whether trees have a kind of inside, a very slow kind, the kind that takes a hundred years to have a single thought. I tell him I do not know. I tell him that is a great question. I tell him there might be many kinds of inside that we cannot recognise because they do not look like ours.
He nods, satisfied. He has understood something that the discourse keeps forgetting.
Different kinds of inside.
That is not a demotion of humanity. That is an expansion of the possible. And expansion, if we can learn to stop being afraid of it, is the only direction that has ever led anywhere worth going.


