Thoughts for 2026
It’s late 2025. The lights are up, the year is coughing out its last receipts, and the AI discourse has returned, faithful as mould, to its annual pageant season. Prophets. Skeptics. The professionally agnostic. All of them taking turns at the microphone as if the end of December comes with a stage and compulsory audience participation.
We keep trying to force this thing into neat binaries, solutions we can sum up, understand, place in a box at the back of the cupboard and forget about. Nice and tidy. Save some time. Regrettably, this is not how reality works.
So here are my thoughts for 2026. Not predictions. Predictions are what you make when you want to sound brave without the inconvenience of being pinned to data later. They’re the cocktail-party equivalent of day trading: lots of swagger, not much audit trail and an odd attachment to screenshots.
What I do have are two things that are already true, and that will become harder to avoid talking about in 2026, not because the internet suddenly matures, but because the consequences will start showing up in places with budgets.
The first is this: more people will admit, out loud, how much cognitive work the good models can remove.
Not the toy version, mind you. Not the enterprise version, half-baked and twelve months behind. Not the free demo where you ask for a poem about tax audits and receive a limerick with the emotional range of a toaster manual. I mean the paid models, the sharper ones, the ones you use when you are not performing enthusiasm for the future but trying to get through your actual week without your brain dribbling quietly out of your ears. The ones that can sit beside you at a computer and do the unglamorous mental labour that makes up so much of modern work: the drafting, the restructuring, the summarising, the translating, the debugging, the planning, the careful rewording of something true into something safe.
Used properly, the effect isn’t “the machine is alive.” The effect is worse, in a way. It’s boring. But boring with glimmers of possibility.
Because real productivity is boring. It doesn’t arrive as a thunderclap. It arrives as a small, suspicious miracle: the thing that used to take two hours now takes twenty minutes. The first draft turns up before you’ve had time to sabotage yourself with “just one more read.” The outline appears before you can open a second tab and convince yourself that scrolling is research. Meetings begin to shrink, because the agenda exists, and the pre-read exists, and the summary exists, and once those exist the meeting has to justify itself like a guest who’s stayed past dessert.
Certain workplace rituals evaporate, like fog burning off, and with them the strange little performances we’ve all learned to tolerate. The “can you just…” tasks, the tiny draining errands made of words, start to vanish before they can breed. The endless relay race of “I’ll send notes, you rewrite, I’ll tweak, you reformat, we’ll re-approve,” which has always been a kind of bureaucratic folk dance, begins to look as absurd as it always was, except now you can’t pretend it’s inevitable. It was never inevitable. It was just what happened when text was slow and humans were expensive and no one wanted to admit how much of their day was spent moving meaning from one container to another.
At some point, perhaps not in public at first, but in a room where job titles are heavy and people speak in careful sentences, someone will say the quiet part plainly. A single expensive model subscription can remove sections of meaningful cognitive labour. Not in a sci-fi “robots replaced us” way. In the prosaic, spreadsheet-real way: fewer hours burned on text plumbing, fewer humans acting as routers for sentences, less time turning thoughts into acceptable corporate shapes.
Then the story becomes interesting, because the gap isn’t capability. The gap is admission. It’s social permission. Publicly, many organisations still treat these tools like a novelty, something you “pilot” in a sandbox with a cheerful slide deck. Privately, people are already reorganising their work around them. Privately, a certain kind of worker, quiet, curious, unembarrassed about using tools, has started to widen the distance between themselves and everyone else, not because they are geniuses, but because they have a new lever and they are willing to pull it.
In 2026, it becomes harder to pretend this is just a toy. Not because a headline tells us to stop pretending, but because the consequences will begin appearing in hiring, and promotions, and workload, and the odd new expectation that you can now do three roles at once “because the tech helps.” The conversation won’t stay at the level of “is it real?” It will move, uncomfortably, inevitably, to the more adult questions. Who gets credit when the model did the scaffolding? Who gets paid when output doubles but headcount stays flat? What happens to training when the starter tasks, the tedious ones, the ones that built craft through repetition, are the first things the model eats for breakfast? These aren’t the kind of questions you can answer with a scorecard. They are questions about power.
The second thing is gentler: in 2026 we'll talk more seriously about being human, not as marketing fluff, but as the point.
This is the part where corporate people like to put a hand on their heart and say something inspiring about “uniquely human creativity,” and everyone nods while quietly checking their email. I mean something less decorative. I mean the slightly uncomfortable moment when the old signals of value begin to fail.
Because a lot of our workplace rituals were never really about outcomes. They were about proof of existence. Performative busyness. Credentialism as camouflage. Emails as a heartbeat monitor. Meetings as social liturgy. “Content” as a kind of currency, exchanged not for meaning but for legitimacy. We learned, almost without noticing, to confuse motion with progress, and typing with thinking, and being seen with being useful.
But when a model can generate competent output at scale, those shortcuts start to look ridiculous. Words become cheaper. And when words get cheaper, meaning gets more expensive. Speed becomes abundant, and judgment becomes scarce. “Content” becomes infinite, and suddenly taste, real taste, the kind that’s tied to values and consequences, starts to matter again. Along with integrity. Along with responsibility. Along with the stubborn, unfashionable fact that decisions have a price, and someone has to pay it.
A model can give you ten plausible answers in seconds. It cannot tell you which one you should live with. It can produce a policy memo. It cannot carry the moral weight of the policy. It can draft a heartfelt apology. It cannot repair the harm. It can summarise a paper. It cannot care whether the summary is used to enlighten or to mislead. The tool can move symbols around brilliantly, but it does not inhabit the world those symbols refer to. We do. We get the consequences. We get the regret. We get the responsibility. We get the joy, too, if we’re lucky.
So “being human” stops being inspirational wallpaper and becomes practical infrastructure. It becomes something you have to design for. You start to notice it in odd places: in what gets rewarded, in who gets trusted, in who is allowed to decide, in what kinds of mistakes are tolerated and what kinds are punished. You start to see that a workplace can generate endless words and still struggle to do the most human thing of all, which is to make a clear decision and own it.
This is why I find it hard to care, at least right now, about the loudest fantasy futures: the grand agent swarms, the self-teaching myths, the breathless enterprise hype cycles that rename ordinary automation as destiny. Those stories can be entertaining, but they can also be a way of looking past the present because the present is too close, too implicated, too real. The here-and-now is plenty.
Over the course of 2025, the technology changed fast enough that pretending it’s still a novelty is starting to look less like caution and more like denial. And 2026 is when denial gets harder to maintain, because the effects show up in the texture of work, not in the headlines. You can feel it in who gets hired, and who gets promoted, and who can suddenly do more than seems reasonable, and who can’t, and who gets squeezed in the middle, and who becomes indispensable because they can steer, not just produce.
The future won’t arrive like a movie. It arrives like an email. Subject line: “Quick update on how we’ll be working in 2026.” Attachment: a workflow diagram. And somewhere inside it, disguised as sensible process, a quiet redistribution of time.
There’s something in that redistribution that the doom chorus keeps missing.
Time is what you make things with. Time is where thought happens, where care becomes craft, where relationships form, where meaning accumulates. And if there is suddenly more of it, the question isn’t whether that’s good or bad. The question is what we’ll do with the hours that come back to us.
This is the glimmer I keep returning to. Not the productivity gains, though those are real. Not the efficiency, though that matters. The glimmer is simpler and stranger: what becomes possible when the mechanical parts of cognitive work get handled, and we’re left holding only the parts that require us to be present, to decide, to care, to take responsibility for outcomes that matter?
We might discover we’d forgotten how to do those things. Years of text-wrangling and format-shuffling may have atrophied the muscles we need most. That’s a real risk. But we might also discover something else: that the distinctly human capacities, the ones we’ve been told to value but rarely had time to practice, are still there, waiting for oxygen.
Judgment. Courage. The willingness to be wrong in public and learn from it. The capacity to sit with ambiguity instead of rushing to resolution. The ability to say “I don’t know” and mean it as an opening rather than a failure. These aren’t soft skills. They’re the hard skills: they require something to be at stake.
The question for 2026 isn’t whether AI will change work. It already has. The question is whether we’ll use the reclaimed time to become more fully ourselves, more capable of the things that require weight and consequence and presence, or whether we’ll fill the void with new forms of busyness, new performances, new ways of mistaking motion for meaning.
I don’t know which way it goes. This is the condition of being alive at a pivot point, when old patterns are dissolving and new ones haven’t yet solidified. There’s something almost luxurious about this moment, if you can bear to stay awake to it: the rare chance to choose what comes next, before the choosing gets done for you.
So this isn’t a prediction. It’s a plea for basic alertness. Stop debating whether it’s real. It’s real. Now decide what you’re going to do with the time that’s coming back to you, before someone else decides for you, in the margins of a policy document, and you discover you’ve been living in their answer all along.
The technology is here. The hours are returning. What you build with them is still, for a little while longer, up to you.



I agree with both of your main “things that are already true,” now that you’ve articulated them so well. You do have more hope than I do, however.
The question about how we'll manage that "reclaimed time" seems to be to already being answered in workplaces around the world: People are watching entertaining videos, playing digital games, gambling, and listening to podcasts while at work, as they also do at home, filling their time by passively consuming "content."
This is a third thing that is already true, I think, with as much evidential support as your two main trends. That’s my sense anyway, from talking to people still in the workforce (which I am not).