The Scarce Thing
The universe is easier to figure out than we wanted it to be.
Not easy. Easier. Easier than the story we told ourselves, the story where human insight was special, where reality would always keep some secrets, where there would always be problems only we could solve.
That story is thinning out.
You were probably told, at some point, that certain things were simply too complex. Protein folding. Weather. Turbulence. The kinds of problems that would take longer than the age of the universe if you tried to brute-force them.
Except proteins fold in milliseconds. They do it constantly, in your body, right now. Nature isn’t “solving” the problem; it’s living inside a shortcut.
The question was never whether these problems could be solved. The question was whether we could find the shortcuts that reality already uses.
Turns out we can, more often than we expected.
There is a pattern here that people avoid saying out loud.
Everything we can point to has persisted long enough to be noticed. Persistence implies constraint. Constraint implies structure. Structure implies regularity. And regularity — when it’s exposed to observation — can usually be modelled well enough to be useful.
This is not, at bottom, a claim about artificial intelligence. It’s a claim about the kind of world we inhabit: a world with more compressible structure than the old story allowed.
We wanted exemptions. We wanted some corner of reality that would stay forever beyond reach. Consciousness. Creativity. The soul. Pick your favourite.
Maybe those exemptions exist. But they are smaller than we hoped. And getting smaller.
The old story about automation went like this: there is a ladder. Routine work at the bottom. Skilled work in the middle. Genius at the top. Machines start at the bottom and climb. First they take the factory jobs. Then the clerical jobs. Then the professional jobs. We retreat upward, rung by rung, until only the highest work remains: the creative work, the truly human work.
It was comforting because ladders have tops. There would always be somewhere to stand.
Here is what is actually happening.
The ladder is not being climbed. It is collapsing in the middle.
Think about what the “middle” actually is. It’s solving problems that someone else defined. It’s expertise within a frame. It’s knowing the rules well enough to apply them: synthesis, analysis, recommendation, compliance, optimisation, the respectable work of the professional class.
The doctor who diagnoses within established categories.
The lawyer who searches within existing case law.
The analyst who models within given assumptions.
The engineer who optimises within specified constraints.
This is the work credentials certify. This is the work most educated people were trained to do. This is the work that felt safe because it required years of difficult learning.
This is also exactly the kind of work pattern-finding systems are increasingly good at.
Not perfectly. Not always. Not yet.
But well enough. And improving fast.
The defensible ground is not in the middle. The defensible ground is at the edges.
One edge is the body: physical presence in messy, local, high-entropy environments. Changing a nappy. Fixing a pipe in a particular building with particular quirks. Being there when someone is afraid. Bodies are hard because the world is not a clean interface and it refuses to behave like a dataset.
The other edge is choosing what matters.
Not solving problems, but deciding which problems deserve a life in the first place. Deciding what to measure. What to ignore. What to refuse to optimise. When to stop. What “success” even means.
We have a clumsy phrase for this: research taste. It sounds academic and precious. It is neither. It is practical direction-finding in an infinite space of possible work.
Let me be concrete.
A machine can now ingest an absurd amount of scientific literature and treat it like working memory. It can surface connections no human would spot simply because no human can hold that much context in their head at once.
That is extraordinary.
But the machine does not know which connections matter. It finds all of them: the significant ones and the trivial ones, the ones that unlock new treatments and the ones that are statistical lint.
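To make that concrete, here is a toy sketch in Python. The papers and terms are invented, and term overlap stands in for whatever similarity measure a real system uses; the point survives the simplification: every connection gets a score, and nothing in the score distinguishes the significant from the trivial.

```python
# Toy sketch, invented data: similarity search ranks connections
# by statistical closeness, and closeness says nothing about significance.
from itertools import combinations

papers = {
    "protein misfolding in neurons":  {"protein", "folding", "neuron", "disease"},
    "kinase inhibitor trial results": {"kinase", "inhibitor", "trial", "disease"},
    "weather sensor calibration":     {"calibration", "sensor", "trial"},
}

def jaccard(a, b):
    # Overlap of shared terms: a crude stand-in for embedding similarity.
    return len(a & b) / len(a | b)

pairs = sorted(
    ((jaccard(papers[x], papers[y]), x, y) for x, y in combinations(papers, 2)),
    reverse=True,
)
for score, x, y in pairs:
    print(f"{score:.2f}  {x}  <->  {y}")
# Every pair gets a score. Which link matters is nowhere in the output.
```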
The human who can tell the difference becomes more valuable, not less.
The machine solves. The human selects.
Selection is harder than solving. That is the thing nobody wants to say.
We built institutions, careers, entire civilisations on the assumption that solving was the bottleneck. Once you knew what to do, doing it was comparatively straightforward. Resistance came from ignorance. Progress meant learning more, calculating better, executing faster.
Now solving is getting cheap, and we are discovering that the real difficulty was upstream all along.
The difficulty was choosing what to solve.
Framing the question.
Deciding what counts as an answer.
Knowing when to stop.
These look simple. They are not simple.
Binary search is a technique from computer science: you have an ordered space of possibilities, you test the middle, the result tells you which half to discard, and you repeat. It’s brutally efficient. With each test you eliminate half of everything. Thirty tests can cut through a billion possibilities.
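A minimal sketch in Python, for the curious (the names are mine, and the list must already be sorted for the halving to work):

```python
def binary_search(items, target):
    """Locate target in a sorted list by discarding half the space per test."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # test the middle
        if items[mid] == target:
            return mid             # found it
        if items[mid] < target:
            lo = mid + 1           # discard the lower half
        else:
            hi = mid - 1           # discard the upper half
    return -1                      # not in the space we chose to search

# Each test halves the space: 2**30 exceeds a billion,
# so thirty tests suffice for a billion sorted possibilities.
```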
Here is what binary search does not do.
It does not choose the space.
Someone has to decide what counts as a possibility in the first place. Where the boundaries are. What “finding the answer” even means. The algorithm is powerful only after those decisions have been made. Before those decisions, there is no search. There is nothing to search.
This “before-the-search” work is where power actually lives.
And no, you cannot automate the framing by creating a larger search that includes all possible framings. That just pushes the question back one level. Someone still has to choose the meta-frame. It is turtles all the way down, except at some point there is a human deciding what game to play.
The people who understood this earliest were the ones building the systems.
The best AI researchers do not talk about intelligence as a single thing to be maximised. They talk about objectives. Reward functions. The specification problem.
You can build a system that optimises almost anything. The question is what you tell it to optimise. Get this wrong and the system will give you solutions you did not want: technically correct, practically useless — or worse, harmful.
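A toy sketch of the failure mode, with invented headlines and numbers: the optimiser does exactly what the objective says, and nothing that it does not.

```python
# Invented data: an optimiser told to maximise a proxy (predicted clicks)
# rather than what we actually wanted (informing readers).
candidates = {
    "Modest effect for drug X in small early trial": {"clicks": 0.02, "informs": 0.9},
    "Doctors HATE this one weird trick":             {"clicks": 0.31, "informs": 0.1},
}

def objective(stats):
    return stats["clicks"]  # the part we managed to specify

best = max(candidates, key=lambda h: objective(candidates[h]))
print(best)  # the clickbait wins: technically correct, practically useless
```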
This is not mainly a failure of capability. It is a failure of translation: a failure to turn what we actually want into terms a system can act on.
That translation is hard because we often do not know what we want until we see what we get. Our values are not explicit. Our goals are not clean. We operate on hunches and felt senses we cannot fully articulate even when the stakes are high.
The machines need articulation. We provide intuition. The gap between these is where the interesting problems live.
I said this would be about the scarce thing. Here it is.
The scarce thing is not intelligence. Intelligence is becoming abundant.
The scarce thing is not information. Information is already abundant.
The scarce thing is not even synthesis, if by synthesis you mean “knowing a lot and connecting it fluently”. Machines are getting good at that too.
The scarce thing is the ability to point.
This sounds mystical. It is not mystical. It is extremely practical.
Every person who has done something significant will tell you the hardest part was not the work itself. The hardest part was choosing what to work on — committing to one direction when there were infinite directions, saying no to everything else.
Discipline is easy once you know what you are disciplined toward. The hard part is the toward.
Most people never develop this. Most people wait to be told what to work on. They follow paths, respond to incentives, solve the problems institutions put in front of them.
This was a workable strategy when solving was the bottleneck. You could have a good career by executing well inside frames other people provided.
It is becoming less workable.
The honest truth is that we do not really know how to teach taste.
We know how to teach facts. We know how to teach methods. We know how to test comprehension. We have centuries of machinery for producing people who can solve defined problems.
We have almost nothing for producing people who can define problems worth solving.
Taste seems to develop through exposure: seeing excellent work up close, watching what good judgement ignores, absorbing instincts through proximity. Apprenticeship rather than instruction. The long, inefficient process of being around someone who knows what matters until you start to feel it too.
This does not scale. It cannot be standardised. It is exactly the kind of learning modern institutions are worst at supporting.
Here is what I think is going to happen.
The people who learn to cultivate taste — in themselves and in others — will become disproportionately valuable. Not because they are smarter, but because they have the scarce skill: direction under uncertainty.
This is not mainly a prediction about unemployment. The economy is complicated and predictions are cheap.
It is a prediction about meaning.
If your sense of purpose comes from solving problems, and the machines that solve them are cheaper than you, faster than you, and tireless, you will need a new source of purpose. The satisfaction of execution will still exist, but it will no longer be what makes you necessary.
The people who thrive will be the ones who find meaning in selection: in curation, in responsibility, in deciding what is worth doing and then signing their name to it.
I keep thinking about a phrase from the philosophy of science: coming up with worthy conjectures is harder than proving them.
Einstein’s magic was not the tensor calculus that formalised general relativity. The calculus was hard, but it was solvable. Other mathematicians could have done it. The leap was the conjecture: that space and time are curved by mass. The proof followed, but the direction came first.
We celebrate proofs because they are checkable. We can verify them. We can teach them. We can put them in textbooks.
Conjectures resist celebration because they are not checkable in advance. Before the proof, they are just stubborn guesses held by people who cannot fully explain why they believe what they believe.
Machines will get very good at proofs. They are already good at proofs.
The conjectures are still ours. For now.
I want to be careful.
I am not saying humans are magical. I am not saying there is an essence machines can never touch. I do not know that. Nobody knows that.
What I am saying is that current systems have a shape. They are extraordinary at extracting patterns from existing data. They are strong at continuation — at extending what is implied by what has been seen. They are less reliable at inventing genuinely new frames that are not already latent in the record.
That may change. The history of AI is a history of “less reliable” becoming “embarrassingly good”.
But right now the gap is real. And that gap is where taste lives: the capacity to choose a direction that cannot yet be justified, and then do the patient work of making it justifiable.
You might be wondering what to do with this.
The situation is too new and too strange for reliable step-by-step advice.
But I believe this: the skills that will matter will be legible in your choices, not your claims.
What you pay attention to.
What you ignore.
What you refuse to optimise.
What you find interesting.
What you find boring.
Where you stop.
Where you keep going.
This is taste, made visible.
And the way you develop it is old-fashioned: exposure to excellent examples, serious study of what mattered and why, sustained contact with people whose judgement you respect, and the slow accumulation of pattern recognition at a level above the patterns.
No hacks. No shortcuts. Just the long craft of learning what is worth doing.
One more thing.
The universe being more learnable than we expected does not eliminate mystery. It relocates it.
The mystery used to feel like it was in the stuff, in the resistance of nature to our understanding, in the apparent refusal of reality to be compressed.
But reality, often, turns out to be structured enough to be found.
So the mystery moves.
It moves into the silence before the question. Into the moment direction is chosen. Into the felt sense of significance that precedes justification.
Why does anything matter? Why do we care about some questions and not others? Why does one person’s intuition point towards fertile ground while another’s points towards desert?
These are not questions a model will answer for you. Not because they are sacred, but because they are not “answers” in the normal sense. They are choices. They are values. They are commitments.
The scarce thing is not intelligence.
The scarce thing is knowing what intelligence is for.