The Thinking Class
What if the real problem was always us?
I’m going to make an argument I don’t want to make. I’m going to hold it longer than feels polite. I’m going to see what it does to my own sense of self.
The argument is this: a lot of the anxiety about AI and “human thinking” isn’t a universal human concern. It’s the specific anxiety of a specific class, my class, and we are not as important as we’ve trained ourselves to believe.
Before anyone (including me) reaches for the torches and pitchforks, a clarification: of course everyone thinks. People plan meals, improvise repairs, read moods, negotiate conflict, raise children, care for the sick, navigate grief, manage risk, build, grow, hunt for meaning. That’s cognition. That’s judgement. That’s intelligence in the real world.
But that isn’t the kind of thinking most of the AI discourse is mourning.
What we’re mourning, what I’m mourning, is a narrower, culturally prestigious form of thinking: abstract, language-heavy, credentialled reasoning that can be made legible to institutions. The kind that becomes essays, strategies, reports, analyses, frameworks, policy memos, research syntheses, thought leadership. The kind that is easy to point at and say: this is my value.
Call it prestige thinking.
Alongside it sit two other kinds: situated thinking, the embodied, local, tacit intelligence that keeps bodies alive and systems working; and reflective thinking, the inward-facing practice of examining your life, your reasons, your responsibilities, your place in the world.
The panic is not really about thinking as such. It’s about prestige thinking losing its price tag.
There’s a question that no one in the comfortable version of this conversation wants to ask directly, so I’ll ask it.
What percentage of humans have ever been paid, credentialled, or socially exalted for prestige thinking?
Not “who had thoughts”. Not “who solved problems”. But who had the social permission, the leisure, the literacy, and the institutional platform to spend their days producing abstract, portable cognition as an output.
Throughout most of human history, that percentage was small. Not because most people were dull, but because most people were busy, and because power has always been stingy with time. Societies ran, and still run, on a division of cognitive labour: some people specialise in making bread, some in making laws, some in making meaning, some in making wars sound necessary.
We tell ourselves a flattering story about this: that the thinkers were the essence and the rest were the background. That the life of the mind is the life most fully lived. That the job of education is to make everyone more like us. That our mode of cognition is not just one specialised activity among many, but the thing that makes a human a human.
And now a machine is walking into our temple and doing the liturgy.
Here’s the history we prefer not to notice.
Universal schooling at national scale is a recent and uneven project, mostly an expansion of the last couple of centuries. Widespread literacy, mass higher education, “knowledge work” as a large fraction of employment, these are historically strange arrangements. The idea that everyone should be trained not merely to follow knowledge but to interrogate it, to perform “critical thinking” as a general civic virtue, became a central educational aspiration relatively late.
That does not mean earlier cultures didn’t value reasoning. Obviously they did. Philosophers argued, clerics debated, merchants calculated, engineers built, parents advised, artisans innovated, midwives made life-or-death judgements. Human beings have always been thinking animals.
The change wasn’t the existence of thought. It was the expansion of a particular kind of thought into an identity and a moral hierarchy: the belief that the highest human calling is the production of explicit reasons, reasons that can be written down, standardised, examined, graded, cited, and turned into institutional authority.
That was not only enlightenment. It was also an economic and administrative project. Modern states need legible citizens. Bureaucracies need legible decisions. Corporations need legible plans. Universities need legible performance. “Thinking”, in its prestige form, became both a cultural ideal and a machine for sorting people.
And because the thinking class helped build those machines, we came to confuse our outputs with the species.
A lot of us also tell ourselves another story: that in earlier eras most people “outsourced their thinking” to institutions, traditions, and authorities because they couldn’t do it themselves.
That’s too smug. People didn’t outsource thinking. They outsourced coordination. They outsourced validation. They outsourced the burden of re-deriving the world from scratch, because re-deriving the world from scratch is costly, in time, attention, risk, and social conflict, not just calories. When you’re trying to get through a winter or keep a toddler alive, “epistemic independence” is not always the first priority.
We call that primitive. We call it unenlightened. We build an educational apparatus to ensure it never happens again.
But what if some version of cognitive delegation is simply how complex societies function? What if distributed cognition, where institutions stabilise shared beliefs and a subset of people specialise in explicit reasoning, is not a moral failure but a structural feature?
And what if the brief cultural moment in which large numbers of people were trained, encouraged, and economically rewarded to do prestige thinking was a kind of historical bloom: real, valuable, and temporary?
Everything in my formation rebels against this. It sounds elitist. Anti-democratic. Like a betrayal of the ideals that shaped my life.
But notice what’s doing the rebelling. It isn’t evidence. It’s identity.
“I am a thinker” isn’t just a job description. It’s a status claim. It’s a story about why my hours matter. It’s an explanation for why my life has weight.
Of course I want to believe thinking, my kind of thinking, is universally essential. What else would I want to believe? The alternative is uncomfortable: that my contribution is specialised, contingent, and possibly less central to human flourishing than I’ve been trained to assume.
Here’s another discomfort: when people say they fear AI will “replace human thinking”, the loudest versions of that fear are rarely coming from the parts of society that have already been mechanised, monitored, optimised, and squeezed. Automation anxiety has been real for factory workers, logistics workers, retail workers, call centre staff, drivers. Whole sectors have lived through “the machine is coming for your job” for generations.
But the particular panic that presents itself as a civilisational crisis, the sudden philosophical grief over “the end of thought”, the apocalyptic rhetoric about human dignity, tends to be authored, amplified, and emotionally centred in the class that sells words and decisions.
Writers. Researchers. Analysts. Consultants. Managers. Academics. The people who were rewarded for producing legible cognition, and who built identities around the idea that this cognition was both rare and holy.
In other words, us.
We are not worried about “humanity” in the abstract. We are worried about the collapse of a premium.
Let me make this concrete.
Picture a hospital ward. A nurse is triaging a patient who doesn’t fit the textbook. The data is messy: the patient’s story is incomplete, the symptoms overlap, the clock is loud. The nurse is doing situated thinking at high stakes: pattern recognition, yes, but also judgement, empathy, escalation, restraint, coordination with others, responsibility under uncertainty.
Down the road, a knowledge worker is preparing a slide deck: “Strategic Options for Q3”. The deck will be read by people who will make decisions that affect budgets and jobs. The worker is doing prestige thinking: synthesis, framing, articulation, justification. The output is legible. It travels.
Now add AI.
AI can draft the slide deck quickly, often surprisingly well. AI can summarise the market report, propose options, suggest risks, rewrite the narrative in three tones, generate the tables, provide counterarguments. It can produce the shape of the output at a fraction of the cost.
AI cannot be the nurse. Not because the nurse is “more human”, but because the nurse’s cognition is entangled with bodies, accountability, trust, and a world that doesn’t politely become text. Maybe one day machines will take more of that too. History warns us not to be complacent. But the point remains: the kind of thinking being cheapened first is the kind that looks like language.
Which is the kind of thinking the thinking class sells.
So when we raise the alarm about AI, we should be honest about what’s being threatened. Not thought itself. Not consciousness. Not “the human spirit”. What’s being threatened is a specific monopoly: the monopoly on the production of certain cognitive outputs that institutions have historically paid humans to produce.
That monopoly was always a little bit of a story. Not entirely false, but inflated.
Here’s the part that really hurts.
What if AI can do so much of our work because a lot of our work was never as special as we believed?
We told ourselves we were doing something uniquely human. That our outputs required deep understanding, lived experience, irreducible consciousness. That the difference between “real thinking” and “mere pattern” was obvious and unbridgeable.
Then a pattern machine arrived and started generating outputs that look, to many consumers, close enough.
Not always. Not in every domain. Not without error. Not without hallucinations and blind spots and the bizarre confidence of a system that cannot be embarrassed. But often enough, cheaply enough, that the old boundary between “human thought” and “mere computation” became harder to locate in everyday practice.
Maybe the machine didn’t reveal that consciousness is fake. Maybe it revealed something more mundane and more humiliating: that a large portion of what we sold as thinking was an institutionalised form of patterning all along. That much of what we called creativity was recombination. That much of what we called insight was retrieval plus framing. That we were, in our own way, also pattern machines, meat with memoirs, convinced our inner narration made our outputs metaphysically different.
There’s an ugly phrase in this discourse, “stochastic parrots”, that people fling around like a purity test. I don’t think it captures the whole truth about minds, and I think it’s often used lazily. But the sting of it lands because it touches a nerve: many of our professional outputs are, in fact, highly patterned. The forms are learned, the styles are imitated, the moves are culturally standardised.
A machine can learn those moves.
And that fact, more than any abstract theory of intelligence, is what is breaking our composure.
I said I’d make this argument and see where it goes. Here’s where it goes.
If prestige thinking is less essential than we claimed, and if machines can now produce a lot of its outputs cheaply, then what’s coming is not necessarily a tragedy for humanity. It may be a correction.
A correction of our class’s inflated self-regard.
A correction of an educational system that quietly treated one kind of cognition as the apex of being human, while devaluing the rest.
A correction of a culture that looked down on embodied, relational, caretaking, practical work, work without which no one gets to have a seminar in the first place.
That does not mean the transition won’t be brutal.
The people who will suffer in it are real. Their suffering matters. I’m one of them. My colleagues are. The graduate students training for roles that may shrink or vanish are. The mid-career professionals whose skills are being commoditised are.
This is not a morality play where the thinking class “deserves” what it gets. It’s a shift in incentives. And incentives don’t care about our self-conceptions.
But let’s be precise about the pain: it is largely the pain of losing a specialness we were paid to embody.
Now comes the turn, the part where I stop sounding certain.
I’ve sustained this argument for a while. I believe about 60% of it.
The other 40% is the portion I can’t quite accept, even when I can’t cleanly refute it.
The part I can’t accept is the implied conclusion that thinking only matters insofar as it is economically valuable. That the examined life is merely a class preference. That reflective consciousness, mind looking at itself, asking why, demanding reasons, refusing easy answers, is just another skill that gets priced and traded like any other.
Maybe that insistence is my formation talking. Maybe it’s a leftover religious impulse dressed in secular language. Maybe I’m so embedded in prestige thinking that I can’t see past its prejudices.
It’s possible that in fifty years people will look back at our current panic and find it funny: a brief tantrum from a class that mistook its job description for a metaphysical truth.
But it’s also possible that the thinking class got the economics wrong and the value right.
Maybe reflective thinking really is intrinsically valuable, not because it produces better quarterly reports, but because it deepens agency, expands empathy, complicates cruelty, and makes room for responsibility. Maybe a society can survive without widespread reflective thought, but survival isn’t the only metric worth caring about.
And maybe the question isn’t “Can AI do our jobs?” but “Did our jobs ever fully capture what mattered about thought?”
I don’t have a verdict. I have a suspicion and a bruise.
So here’s where I land, which is not quite anywhere.
The thinking class is losing its monopoly on prestige thinking. This is probably fine for humanity in the large and definitely painful for those of us whose identities were built on that monopoly. Our anxiety is real, but it is not universal. Our loss is genuine, but it is not everyone’s loss.
And yet something in me refuses the clean, market-shaped conclusion that thinking is merely an economic function, valuable when profitable, dispensable when not.
Something insists that reflection matters even when it’s redundant, that the capacity to ask “why” is not just a workplace competency but a form of human freedom.
I can’t prove that in a way that satisfies a spreadsheet. I can only confess it as a commitment.
The thinking class may be wrong about its centrality. But if we’re wrong, I would rather be wrong in defence of meaning than right in service of emptiness.
At minimum, we can stop flattering ourselves with the easy story, that AI threatens “human dignity” and we must resist, and stop anaesthetising ourselves with the other easy story, that none of this matters and we were never anything special.
Between those comforts is the harder question worth holding: not whether machines can imitate our outputs, but what kinds of thought, and what kinds of life, we decide are worth protecting when imitation is cheap.