Every Time I Think I Understand 'AI', I Realise I'm Looking at the Wrong Scale
I was halfway through writing a paragraph about AI agents when I realised the product I was describing had been deprecated. Not gradually. Not with a graceful sunset notice. Just gone, folded into something else with a different name and a shinier landing page. The paragraph I’d written was accurate for about eleven days.
This keeps happening. Not the specific embarrassment of outdated prose, though that happens too. The deeper problem: I can’t hold the whole thing at one stable scale.
Up close, it’s whitewater. Models change. Interfaces change. Product strategies change. The best practice of last month becomes the embarrassing mistake of this month. A new capability appears, then gets patched, rate-limited, or bundled into some “suite” nobody asked for. Influencers sprint ahead of evidence. Companies announce futures in glossy verbs. Researchers publish careful increments that get translated into breathless destiny by people who didn’t read the methods section. Meanwhile I’m trying to build a mental picture out of moving parts that are still being machined.
I was in a meeting last week where someone asked, quite sincerely, what an AI strategy should be for the next three years. Three years. In this field, three years is geology. I opened my mouth and realised I wasn’t sure which version of the question I was answering.
Because here’s what happens. Step back a bit from the whitewater and it stops being whitewater. It becomes weather.
The near-term forecast matters. What’s likely to shift in the next six to twelve months? Which workflows will actually change? Which institutions will adopt quietly, without press releases, folding AI into procurement and professional development so gradually that nobody writes the memo? Where will policy land? What will become normal before anyone notices it becoming normal?
This is close enough to the present that you can make plausible guesses. But it’s also close enough that hype can trick you, because everything looks enormous when it’s inches from your face. A new model release feels like an earthquake at close range. Six months later you can barely remember its name.
Step back further and it stops being weather. It becomes climate.
Five years isn’t a product cycle. It’s a retooling cycle. It’s enough time for AI to seep into the plumbing of organisations: training programmes, school assessments, professional standards, workflows you never see from the outside, the tedious-but-decisive infrastructure that actually determines what an institution does as opposed to what it says it does. Five years is enough time for second-order effects. Not “a tool writes text,” but “a tool changes how people decide what counts as good text,” which changes what gets rewarded, which changes what gets produced, which changes what gets believed.
The five-year frame is where AI stops being a tool story and becomes a governance story. Because scale turns individual choices into infrastructure. That’s where I find myself thinking about audits, procurement, standards, licensing, platform rules, public sector capacity and the slow grind of law trying to describe a moving target without accidentally blessing it. Who owns the models and the data? Who gets to set the norms? What counts as “authentic” when generation is easy and detection is broken? What happens to education when the line between learning and outsourcing becomes a permanent negotiation rather than a bright boundary? What happens to creativity when the cost of producing plausible output collapses, but the cost of attention stays brutally high?
These are real questions. They don’t have answers yet. They have interests.
Step back further still and you hit something I can only call deep time. Society-scale geology.
Ten years. Twenty years. The kinds of time horizons where people start talking like prophets or cynics, because concrete details evaporate and we fill the fog with our favourite fears. In that fog it becomes easy to say “AI will do X,” because nobody can falsify it yet. In that fog every argument becomes a Rorschach test: utopia for one person, dystopia for another, and often the same evidence is used to justify both.
My mind keeps switching between these scales, and every time it switches, it changes what I think the problem is.
At the whitewater edge, the problem is tracking a stack of fast-moving variables. Model capabilities that jump in weird, uneven ways: one week better reasoning, another week better voice, another week a new limitation nobody expected. Applications that either explode into usefulness or die from friction: latency, cost, governance, trust, integration. A social layer that turns “possible” into “inevitable” within a single news cycle, then forgets it by the next. A corporate layer that treats ambiguity as a branding problem. A research layer that treats ambiguity as a measurement problem.
Those layers don’t just disagree. They speak different dialects of certainty. Influencer certainty is performative: it moves attention. Corporate certainty is strategic: it moves markets. Research certainty is conditional: it moves understanding. And I’m standing in the middle trying to translate all three while the dictionary is on fire.
When I shift to the six-to-twelve-month view, the questions change shape. I stop asking “what is true?” and start asking “what is likely to become common?” Not “will AI agents take over?” but “which narrow agent-like loops will become normal in everyday tools?” Not “will models get smarter?” but “will they get cheaper, faster, more multimodal, more embedded, more invisible?” Not “will this replace jobs?” but “where will labour be reorganised, who gets amplified, who gets squeezed, what gets deskilled, what gets reskilled?”
In that window my anxiety isn’t about capability. It’s about deployment velocity. A mediocre but ubiquitous tool can reshape a world more thoroughly than a brilliant one that stays expensive, unreliable, or gated. Reality is often shaped by whatever ships into the defaults menu. The features nobody chose but everybody uses. The friction that quietly disappears until one day you notice you’ve stopped doing something you used to do and you can’t quite remember when you stopped.
I think about this every time I watch a student open a blank document. Four years ago that blank page was a confrontation: you and the cursor, no escape. Now a suggestion engine hovers at the margin. The blank page isn’t blank anymore. The default has shifted and the shift happened not through a dramatic announcement but through a software update most people didn’t read.
Then I zoom to five years and my brain stops caring about feature lists. I start caring about institutions, power and incentives. This is where I’ve spent much of the past two years watching universities, libraries and professional bodies try to absorb a technology that keeps changing shape before the committee can finish drafting the policy. Governance frameworks were outdated by the time they reached the printer. I’ve watched organisations develop sophisticated position statements that carefully address a version of AI that no longer exists. Porous coherence, I’ve started calling it: the strange ideal of remaining open enough to learn while structured enough not to dissolve. Most institutions haven’t found the balance. Some have stopped looking. A few have confused rigidity with safety, which is a different kind of failure, quieter and more corrosive.
Then I zoom to ten and twenty years, and the question gets more unsettling still. Is AI even the main character?
This is where I keep catching myself. I’ll be mid-spiral about model roadmaps and then a bigger wave rolls into view: climate shocks, energy transitions, geopolitical fragmentation, demographic shifts, surveillance economics, the fragility of trust. AI doesn’t replace these forces. It interacts with them. It accelerates some, destabilises others, becomes a multiplier in systems that are already under stress.
AI is not a solo act. It is an amplifier playing into a room that was already too loud.
So when I try to come to terms with everything, I’m not just trying to predict AI. I’m trying to think about AI inside a world that is already busy being weird and volatile. And that’s a different problem entirely.
Here is the difficulty I can’t resolve.
If I write from the whitewater edge, the piece risks becoming obsolete before it’s published, or worse, it becomes a breathless catalogue of new things, which feels informative but isn’t wise. If I write from the five-year lens, it risks becoming abstract: governance, institutions, incentives, all true but distant. If I write from the twenty-year lens, it risks becoming mythology: grand narratives untethered from what anyone will actually do on Monday morning.
The problem isn’t that I can’t think about the future. The problem is that the future arrives at different speeds depending on where you stand.
Technology changes fast. Organisations change medium-fast. Institutions change slowly. Culture changes weirdly fast in some ways and glacially slow in others. Law changes slowly until it changes suddenly. People change one funeral at a time.
And my mind keeps trying to compress all of that into a single timeline, as if society were a neat chart and not a pile of interacting systems with feedback loops, delays and occasional panics.
I’ve started treating each time horizon as a different kind of claim. Near-term: what’s shipping, what’s being adopted, what’s real. Mid-term: what incentives will reward. Long-term: what values and power structures will permit. That way I’m not trying to answer every question with the same kind of certainty, which is how the brain ends up doing cartwheels in the fog.
Maybe the honest move isn’t to pick one horizon. Maybe it’s to admit that I’m constantly switching lenses, and to write from that vertigo instead of pretending I’ve got a calm panoramic view.
Because the confusion I feel isn’t only confusion about technology. It’s confusion about scale. Every time I think I’m forming an insight, I realise I was looking at the wrong zoom level for the question I was trying to answer. The insight was real. The framing was wrong. And a real insight in the wrong frame is worse than no insight at all, because it gives you confidence in a direction that doesn’t apply.
I don’t have a tidy resolution for this. I’m suspicious of anyone who does.
What I’m trying to practise, and I use that word deliberately, is something closer to discipline than prediction. Less prophecy. More willingness to say: here are the forces I can see, here are the uncertainties I can’t shrink and here is how I’m trying to keep my footing while the ground moves.
The ground will keep moving. The scale will keep shifting. The vertigo isn’t a symptom of not knowing enough. It’s a symptom of knowing enough to recognise that the comfortable single-frame view was always a lie.
I think about cartographers in the age of exploration, drawing coastlines they’d only glimpsed from a ship’s deck. They got the shapes wrong. They filled the interior with speculation and monsters. But the act of drawing, of committing what little they knew to a surface where it could be examined, corrected, argued about, that was the beginning of seeing clearly. Not the end of confusion. The disciplining of it.
That’s what writing about AI feels like to me right now. Drawing a coastline I can only glimpse, knowing the interior is mostly fog. The honest thing isn’t to pretend the fog has cleared. It’s to keep drawing, keep correcting, keep holding multiple time horizons at once and let the discomfort of that be the starting point rather than the thing I rush to resolve.


