The 2028 Global Intelligence Dividend
The Ghost That Wasn't There
On Monday (23 February 2026), a speculative “memo from the future,” written in the language of macro finance, helped trigger a sharp risk-off move that wiped billions off the value of US-listed firms. Citrini Research’s “The 2028 Global Intelligence Crisis” imagined an AI-driven hollowing-out of white-collar work so severe that the circular flow of the economy stops functioning: GDP and profits rise, but households — cut out of the loop — stop spending. The authors dubbed this “Ghost GDP,” output that appears in national accounts but doesn’t circulate through the people who used to earn it.
Michael Burry amplified it on X with: “And you think I’m bearish.” IBM fell about 13% on the day, with several other high-profile names cited in the piece dropping sharply as well. The Wall Street Journal pointed to the viral report as a key accelerant of investor anxiety, and discussion of it racked up roughly 16 million views on X.
A work of explicitly labelled speculative fiction moved real capital at real speed. That fact alone deserves serious attention, not because the Citrini authors did anything wrong (they were transparent about the exercise), but because it reveals something about the texture of the present moment. We are living through a period in which the gap between “plausible narrative” and “tradeable signal” has collapsed to nearly nothing. When a scenario feels real enough to model, and the underlying anxiety is already there waiting to be organised, fiction and forecast become functionally indistinguishable.
So let me take the Citrini piece seriously. Not as prophecy, but as diagnosis. Because it gets some important things right, and the places where it overreaches are more instructive than the places where it lands.
The strongest element of the memo is its distributional argument. If productivity gains accrue primarily to the owners of compute and capital while labour income stagnates, household demand weakens. This is not invented. The productivity-pay gap is well documented. Labour’s share of income has been declining for decades. Personal consumption accounts for roughly two-thirds of US GDP. These are not contested facts. They are the arithmetic of an economy built on the assumption that most people earn enough to buy what that economy produces. If you break that assumption hard enough and fast enough, the consequences are real.
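The arithmetic can be made concrete with a toy calculation. Every number below (the marginal propensity to consume, the size of the hypothetical wage shock) is an illustrative assumption of mine, not a figure from the Citrini memo:

```python
# Toy back-of-envelope for the circular-flow arithmetic.
# Every number here is an illustrative assumption, not a forecast.

gdp = 28.0                # US GDP, roughly, in $ trillions
consumption_share = 0.68  # personal consumption ~ two-thirds of GDP
mpc = 0.8                 # assumed marginal propensity to consume out of wages

wage_income_lost = 1.0    # hypothetical $1T of wages displaced by automation

# First-round hit to demand: displaced workers stop spending most of it
direct_hit = wage_income_lost * mpc

# Textbook Keynesian multiplier amplifies the first-round loss
multiplier = 1 / (1 - mpc)
total_hit = direct_hit * multiplier

print(f"Consumption base: ${gdp * consumption_share:.1f}T")
print(f"First-round demand loss: ${direct_hit:.2f}T")
print(f"Multiplied demand loss: ${total_hit:.1f}T ({total_hit / gdp:.0%} of GDP)")
```

Even with deliberately crude assumptions, a broad wage shock compounds into a multi-point drag on GDP. That is the sense in which the distributional argument is just arithmetic.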
The Citrini piece also correctly identifies that the current wave of AI capability improvement is not hypothetical. METR’s task-completion time horizon dashboard was updated on February 20 with new estimates for Claude Opus 4.6 and GPT-5.3-Codex. The numbers are striking. METR estimates that Opus 4.6 has a roughly fourteen-and-a-half-hour 50% time horizon, meaning it can succeed at tasks calibrated to that duration of human expert effort about half the time. For anyone whose work consists of well-specified, software-adjacent tasks in the four-to-fifteen-hour range, the directional signal is unmistakable: the frontier is moving.
And the memo resonated precisely because real things are already happening. Indian IT stocks lost roughly fifty billion dollars of market capitalisation in February on AI-disruption fears. Enterprise software companies are facing genuine pricing pressure. Agentic tools and “coworker” products have made automation feel closer to daily operational reality than it did even six months ago. The anxiety was not manufactured by a Substack post. The Substack post gave it a crisp causal structure that investors could act on.
All of this is worth taking seriously. And yet.
The place where the Citrini logic does the most work, and where it is most fragile, is a single assumption that runs beneath the entire scenario like a load-bearing wall: that institutions are inert. That firms will substitute labour with AI at speed, that the displaced wages will vanish into a black hole, that governments will watch the tax base erode without adapting, that no countervailing mechanism will emerge to recirculate the gains. The memo holds three variables fixed while letting capability run free, and that asymmetry is where the argument breaks.
Start with diffusion. Economists have a well-developed framework for why general purpose technologies have implementation lags. The gains from electricity, from computing, from the internet, none of them arrived on the timeline that the capability suggested they should. Realised productivity improvements require complementary investments in processes, organisational redesign, training, and what economists call “intangibles.” Erik Brynjolfsson and colleagues have documented this extensively: there is a J-curve in which measured productivity can actually decline before the gains materialise, because the reorganisation costs are front-loaded and the benefits are delayed.
This is not a comforting abstraction. It is the lived experience of every organisation that has tried to deploy AI at scale. The models improve on a curve that looks exponential. Adoption does not. Firms that want to use AI effectively must rebuild data pipelines, refactor legacy systems, rewrite incentive structures and retrain managers. Those investments are large and uneven. The exponential in capability does not map to an exponential in substitution, and anyone who has watched a large institution try to change anything knows why.
METR itself is unusually explicit about this. Thomas Kwa, one of the principal authors on the time-horizon work, warned in January that observers routinely overstate the precision of the metric and draw conclusions the evidence does not support. The benchmark is noisy. The confidence intervals widen as models improve. A 50% time horizon of X hours does not mean you can delegate tasks under X hours to AI, because many real-world tasks require much higher reliability and because failure modes become more complex as tasks lengthen. METR’s benchmark suite is drawn primarily from software engineering, machine learning and cybersecurity tasks chosen to be self-contained, well-specified and automatically evaluable. The real economy is full of high-context, poorly specified, deeply embedded work that looks nothing like a benchmark.
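To see why "50% horizon of X hours" does not license delegating everything shorter than X, it helps to sketch the general shape METR describes: success probability declining smoothly in log task duration, with the horizon defined as the 50% point. The slope below is a made-up parameter chosen for illustration, not a value from METR's published fits:

```python
import math

def p_success(task_minutes: float, horizon_minutes: float,
              slope: float = 0.6) -> float:
    """Logistic success curve in log2 task duration, in the spirit of
    METR's time-horizon fits. `slope` is an assumed steepness, not a
    number taken from METR's published results."""
    x = math.log2(task_minutes / horizon_minutes)
    return 1 / (1 + math.exp(slope * x))

horizon = 14.5 * 60  # assumed ~14.5-hour 50% horizon, in minutes

# By construction, success at the horizon itself is exactly 50%.
at_horizon = p_success(horizon, horizon)

# Tasks half as long as the horizon still fail roughly a third of the
# time under this assumed slope: nowhere near delegation-grade reliability.
half_horizon = p_success(horizon / 2, horizon)
```

The point is not this specific curve but its flatness: under any smooth fit, "50% at X hours" implies far less than 100% at durations well below X, which is exactly the over-reading METR warns against.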
None of this means the capability signal is wrong. It means the translation from “sometimes works on benchmarks” to “replaced the workforce” involves a series of assumptions about speed, reliability, context and organisational capacity that the Citrini memo largely elides.
Then there is the question of where the money goes. Economist Guy Berger raised this pointedly: if firms save on labour costs, those savings do not vanish. They become profits, lower prices, higher investment, or higher tax revenue depending on how institutions respond. The “Ghost GDP” framing is vivid, but it smuggles in the assumption that productivity gains have no recycling mechanism. In the real economy, capital that is not spent on wages gets spent on something. It flows into AI infrastructure, into energy and construction and grid modernisation, into lower consumer prices that increase real purchasing power. The question is not whether the gains circulate. The question is through which channels and whether those channels sustain broad household demand.
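Berger's point can be put in toy accounting terms. Suppose $1T of labour costs are saved; the channel shares and recirculation rates below are pure assumptions, chosen only to show that the outcome depends on them:

```python
# Toy allocation of $1T in labour-cost savings across recycling channels.
# Shares and recirculation rates are assumptions for illustration only.
savings = 1.0  # $ trillions

channel_share = {
    "retained profits / buybacks": 0.40,
    "lower consumer prices":       0.25,
    "capex (AI infra, energy)":    0.25,
    "taxes":                       0.10,
}

# Assumed fraction of each dollar that finds its way back to household demand
recirculation_rate = {
    "retained profits / buybacks": 0.3,  # dividends, wealth effects
    "lower consumer prices":       0.9,  # direct real purchasing power
    "capex (AI infra, energy)":    0.7,  # construction and energy wages
    "taxes":                       0.8,  # transfers and public spending
}

demand_recovered = sum(
    savings * channel_share[c] * recirculation_rate[c] for c in channel_share
)
print(f"Demand recirculated: ${demand_recovered:.2f}T of ${savings:.1f}T")
```

Under these made-up numbers about 60% recirculates. Shift the shares toward buybacks and it falls sharply; shift them toward prices and taxes and it rises. The channel mix, not the existence of savings, is the live question.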
Dan Hockenmaier offered a different critique, arguing that the memo’s marketplace disruption scenarios underestimate how defensible real businesses are. Building an app is easy. Building liquidity, trust, routing, regulatory compliance, and operational execution is hard. Even when agents reduce some frictions, competing on a two-sided market requires more than code. This does not eliminate disruption risk, but it sharply reduces confidence in the memo’s assumption that code replication equals business replication.
The most rigorous non-doomer argument is not “jobs always come back.” It is that identifiable mechanisms, especially the creation of new tasks, historically counterbalance automation’s displacement effects. Daron Acemoglu and Pascual Restrepo have formalised this: automation reduces labour demand, but the creation of new tasks generates a “reinstatement effect” that can increase it again. What changes with frontier AI is not that new tasks become impossible, but that the direction and speed of task creation becomes contingent on institutions, regulation and competition rather than guaranteed by historical precedent.
This is exactly why a non-doomer reading should not be framed as a guarantee. It should be framed as a scenario with conditions. The conditions are specific and observable.
If AI diffuses unevenly and is slowed by the need for complementary investments, you do not get the overnight substitution the crisis scenario requires. If competition ensures AI becomes broadly available rather than monopolised by a handful of firms, the gains distribute more widely. If governments treat the distributional challenge as a public finance design problem rather than waiting for a crisis to force their hand, tax and transfer policy can sustain household demand even as the composition of income shifts.
None of these conditions is automatic. All of them are politically contested, uncertain and fragile. But the Citrini scenario requires all three to fail simultaneously and completely within roughly two years. That is not impossible. But it is a very specific left-tail outcome and the authors said as much in their preface.
The labour market will change. It is already changing. White-collar work is being de-layered. Entry-level “do the first draft” roles are thinning out. Demand is shifting toward people who can specify objectives clearly, audit and validate outputs, coordinate multi-agent workflows and carry responsibility when things go wrong. This is not “prompt engineering.” It is a new and widely distributed form of operational management, and it is being accompanied by growth in energy, construction, cybersecurity and regulated-industry implementation as the physical infrastructure of AI expands.
The consumer side is instructive too. When friction declines, that can be inflation-reducing rather than demand-destroying. Agents that price-match, negotiate renewals and compress fees produce savings that show up as lower prices and higher real purchasing power. Households do not need machines to spend money. They need goods and services to become cheaper and incomes to remain resilient enough to sustain demand.
The most important variable, and the one the Citrini memo most conspicuously holds fixed, is policy. Labour income is a large tax base. If automation shifts income toward capital, the fiscal system must adapt or it will become pro-cyclical in the worst possible way. There is active work on this. The Brookings Institution has published frameworks for AI-era public finance. Australia’s National AI Plan explicitly aims to share the benefits across the community. The policy tools exist: wage insurance, training subsidies, expanded earned-income credits, tax design that rebalances toward AI-era rents and compute intensity rather than only wages.
Whether those tools get deployed in time, at scale, with political will, is an entirely open question. But the Citrini scenario assumes they do not. It assumes that between now and 2028, the response is essentially nothing. That assumption does a great deal of work.
I am not writing a rebuttal that says everything will be fine. I wrote in October that the AI investment cycle is a fragile hybrid, a solid core wrapped in a speculative shell. I wrote last year that the entry-level career ladder is already being pulled up. The distributional concerns at the heart of the Citrini argument are concerns I share. The question of whether productivity gains circulate broadly or concentrate upward is not a technical question about AI capability. It is a political question about power, and the recent track record of political systems responding wisely to structural economic shifts is not great.
But the specific mechanism the Citrini memo describes, a feedback loop in which AI simultaneously destroys demand, collapses housing, triggers private credit cascades and breaks the circular flow of the economy within twenty-four months, requires a set of assumptions about the speed of adoption, the absence of countervailing forces and the paralysis of institutions that are individually plausible but collectively extreme. The world is messy, sticky, slow and full of people and organisations that resist, adapt, and redirect in ways that macro-models struggle to capture.
The canary is not dead. But it is coughing. And the right response to a coughing canary is not to declare the mine safe and go back to work. It is to check the ventilation, shore up the supports and have an exit plan. The non-doomer version of this fictional story is not “nothing changes.” It is a world in which AI is highly capable and the circulation mechanisms (competition, complements and policy) prevent the left-tail feedback loop from locking in.
That is not a prediction. It is a set of conditions. Whether we meet them is not up to the models. It is up to us.



Your point about all three conditions needing to fail simultaneously is the strongest counterargument I've seen. But it also cuts the other way. Nobody modeled the scenario where two of the three fail and the third holds. Or where they fail sequentially instead of simultaneously. The real gap in this whole debate isn't optimism vs pessimism, it's that everyone's arguing about one scenario when the actual range of outcomes is barely being explored.
Still waiting for someone, anyone, to explain the new roles and new tasks that will emerge as algorithms decimate knowledge work. It's already been 3 years and so far there's no sign of this bright future for the white collar worker, especially since the machines are being explicitly tailored to get better and better at managing themselves. As far as I can tell, the only humans in the loop are the slave wage workers overseas (and increasingly in the west as more laid off tech workers resort to data annotation work).
And it *is* still prompt engineering, no matter how much certain techbros try to dress it up. Ordering outputs from a vending machine is the illusion of work.