Robots Didn't Kill the Internet
The Casino Did
Every few weeks someone publishes an essay about how AI has murdered the internet. The comments fill with people nodding along. The shares accumulate. The thesis is always some version of: we used to have a real internet, then the bots arrived and replaced it with slop, and now more than half of all web traffic is automated, which means the humans have already lost.
It’s a satisfying story. It has a villain, a timeline, and the flattering implication that you, the person reading it, can still tell the difference. The trouble is that the story gets the arsonist wrong.
The better versions of this essay are sharp in places and sloppy in the one place that matters most. They describe how platform incentives turn writers into performance machines. They concede, usually as an aside, that this condition predates AI by decades. And then they carry on blaming the robot anyway.
That aside is the answer. These essays keep nearly solving the puzzle and then blaming the shiniest object in the room.
The internet was not healthy until the robot showed up. Platforms had already spent years training humans to write like engagement machines. The “dead internet theory” phrase is relatively new; the machinery it describes is old enough to have a mortgage. The Atlantic ran a piece on the concept in August 2021, well before today’s AI-slop era. Google had already spent a decade fighting low-quality, manipulative content: its 2011 Panda changes targeted pages that existed to game search, its spam policies have long defined spam as content designed to deceive users or manipulate rankings, and in March 2024 Google tightened rules against “scaled content abuse” whether the content was made by automation, humans, or both. The factory existed before large language models. LLMs just made it cheaper to run.
Same story with bots. Imperva reported bot activity at 51.8% of all web traffic back in 2016. Its 2025 report puts automated traffic at 51% of the web, with 37% classified as bad bots. So the “more than half of internet traffic” line that circulates in these essays is not fabricated. But it is a slippery stat: it describes web traffic as a whole, not a direct measurement of how much of your feed or comment section is synthetic writing, and it lumps together wildly different kinds of automation rather than measuring “AI slop” specifically. Taking a broad infrastructure number and turning it into a cultural X-ray is too cute by half.
The actual machine is uglier and more boring than any conspiracy theory could manage. The OECD describes recommender systems as a personalised newspaper editor for each user. Meta says its Feed ranking predicts what you’re most likely to care about or engage with and literally frames it as an optimisation problem. YouTube says recommendations draw on watch history, search history, subscriptions, likes, dislikes, “not interested” feedback and satisfaction surveys. TikTok says its systems rank eligible content using user interactions, content information and user information. None of these systems is asking “is this wise?” or “is this true?” or “will this leave the person better off?” They are asking a much simpler question: will this hold attention and produce a useful signal?
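That "simpler question" can be made embarrassingly concrete. The sketch below is hypothetical, not any platform's actual code: the `Post` fields, the `engagement_score` function and the example posts are all invented for illustration. It shows only the shape of the objective these systems share, which is that the ranking function contains a term for predicted attention and no term for anything else.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    p_engage: float        # predicted probability the user engages
    dwell_estimate: float  # predicted seconds of attention held

def engagement_score(post: Post) -> float:
    # The only objective: expected attention. Notice what is absent:
    # no term for truth, quality, or whether the user is better off.
    return post.p_engage * post.dwell_estimate

def rank_feed(candidates: list[Post]) -> list[Post]:
    # Rank purely by expected attention, highest first.
    return sorted(candidates, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("calm-explainer", p_engage=0.10, dwell_estimate=120.0),
    Post("outrage-bait", p_engage=0.45, dwell_estimate=60.0),
])
print([p.post_id for p in feed])  # ['outrage-bait', 'calm-explainer']
```

Under this toy objective the outrage post wins (27.0 expected seconds against 12.0) even though the explainer holds a reader twice as long once opened, because the score multiplies attention by the probability of capturing it in the first place.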
That question, applied at scale and compounded over years, is what killed the internet. Not robots. Incentives.
I want to give it a proper name, because “dead internet theory” is a conspiracy frame that lets people feel clever without understanding the economics, and “industrialised attention extraction” is accurate but nobody is going to put it on a t-shirt. So call it what it is.
The Attention Casino.
Casinos are not random. They are precision-engineered environments designed to keep you at the table. The games feel like they contain agency but the odds are baked into the architecture. The beautiful people at the felt are there to make the room feel alive. The house always wins, and the longer you play, the more certain that becomes.
The internet works on the same logic now. Recommendation algorithms are the game design: they rank content not by quality or truth but by holding power. Ad markets are the house take: platforms sell the attention that the games produce, and the numbers are staggering. Australia's online ad market hit $18.4 billion in 2025. The IAB projects U.S. creator ad spend will reach $37 billion the same year. YouTube's Partner Program lets creators share ad revenue from videos. TikTok Shop tells creators to feature products in shoppable videos and livestreams, earn commission, and "turn views into sales." The logic is brutally simple: rank attention, sell attention, recruit creators who can reliably manufacture more of it.
And this is how people get herded toward influencers. Meta says its systems recommend “unconnected” content from accounts you don’t follow, and Facebook described Home as a “discovery engine” for finding new content and creators through recommendations, while the separate Feeds tab is for people and groups you already know. TikTok’s For You feed is the same species of beast: a stream built to help you find content and creators you love. That is the structural shift from social graph to interest graph, from friends to performers, from communities to ranked suppliers of content. Influencers are not a weird side effect of this system. They are the life form best adapted to it.
They are the casino’s dealers.
Why personalities specifically? Because parasocial trust converts better than anonymous content. Research on parasocial relationships finds that social attraction, perceived credibility and informational influence all strengthen purchase intention, and that positive comments reinforce the effect. In plain English: if the audience feels like they know you, trusts you, and sees other people nodding along, they buy more. That makes influencers ideal ad vehicles. They are, functionally, ad inventory with a face, a voice, and a simulated friendship.
This also explains why the “dead internet” feeling predates generative text by years. Long before ChatGPT flooded timelines, metric gaming could already goose visibility. A 2024 paper in Scientific Reports describes rich-get-richer recommendation dynamics on YouTube and warns that fake views, if corrected late, can interfere with both human and algorithmic recommendations and unfairly boost a video’s spread. Once popularity metrics feed ranking, bogus signals can snowball into real distribution. Old scam, new paint.
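The snowball mechanism the paper describes can be sketched as a toy simulation. Everything here is an illustrative assumption, not the paper's model: the view counts, the 500-view fake boost and the `simulate` function are invented. The only point is the preferential-attachment loop, in which each new view goes to a video in proportion to its current count, so a bogus early signal compounds into real distribution.

```python
import random

def simulate(seed_views: dict, boosted: str, steps: int, rng: random.Random) -> dict:
    # Rich-get-richer dynamics: popularity feeds ranking, ranking
    # feeds popularity. One video starts with purchased fake views.
    views = dict(seed_views)
    views[boosted] += 500  # bogus early signal, never corrected
    for _ in range(steps):
        total = sum(views.values())
        r = rng.uniform(0, total)
        acc = 0.0
        # Pick a video with probability proportional to its views.
        for vid, count in views.items():
            acc += count
            if r <= acc:
                views[vid] += 1
                break
    return views

rng = random.Random(7)
result = simulate({"honest": 100, "gamed": 100}, "gamed", steps=5000, rng=rng)
print(result)  # "gamed" ends far ahead despite identical organic starts
```

Both videos start with the same 100 organic views; after the fake boost the gamed one captures roughly six-sevenths of every subsequent view, which is the sense in which an early fake signal, corrected late or never, unfairly boosts a video's spread.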
So the cleanest correction for the AI-killed-the-internet crowd is this: they mistake style tics for source code. Em dashes, polished sludge and that specific flavour of hollow profundity are surface tells. Sometimes they flag AI. Sometimes they just flag a human who has spent too long steeping in platform brine. The deeper cause is incentive design. These essays keep describing the casino’s effect on the player and then blaming the new card-shuffling machine instead of the house rules.
A better framing, which UNSW has articulated clearly, is that the important issue is manipulation for profit and political gain, with engagement farming and ad revenue as the obvious first motive. Fake content is a symptom. No grand conspiracy is required. Ordinary platform economics are already plenty deranged.
Here is what the Attention Casino actually looks like as a system. Recommendation engines steer attention. Ad markets convert that attention into money. Influencers are the most legible units of monetisable trust. Bots and fake engagement juice the metrics. And generative AI lowers the cost of producing endless variants of the same optimised mush. Every layer reinforces every other layer. The machine has no centre and no villain. It has incentives, and the incentives are working exactly as designed.
The internet did not start rotting because robots learned to write. It started rotting when platforms became casinos. The robots are just very efficient casino staff.
And this is where the diagnosis matters, because getting the cause wrong means getting the response wrong. If you think AI killed the internet, you end up in detection wars, watermarking debates, and regulatory schemes aimed at labelling synthetic content. Those might be fine projects for other reasons, but they will not fix the dead-internet feeling, because that feeling was produced by an economic architecture that predates generative AI by a decade. You would be arresting the card shuffler while the house keeps taking its cut.
If you think the casino killed the internet, different questions follow. Questions about recommendation transparency. About whether ad-funded attention markets are compatible with public discourse. About what it means that the FTC’s influencer disclosure rules exist precisely because the boundary between content and advertising has already collapsed, and most users can’t tell the difference. About whether a system designed to maximise time-on-platform will ever, under any regulatory pressure, produce an environment that feels like it was built for humans rather than for the extraction of humans.
Those are harder questions. They don’t have a single villain. They don’t let you feel righteous about spotting the bot in the comment section. They implicate the platforms you still use, the creators you still watch, the economy you still participate in. They are uncomfortable in a way that blaming AI never is.
Which is, of course, why the AI story keeps winning the discourse. It’s a better product. It has a villain, a timeline, and the comforting implication that the solution is technical. The Attention Casino story is worse content by every metric the casino itself would use to rank it. It’s structural, it’s boring, and boring means true.
But naming the machine correctly is the first step toward not being played by it. You can’t fix a casino by banning one brand of playing cards. You have to look at the table, the odds, the house rules, and the fact that you walked in voluntarily and they made it very easy to stay.



Good piece. The casino metaphor works and the core argument holds — incentive architecture built the rot, the robots just made it cheaper to maintain. Blaming AI for killing the internet is like blaming the automatic card shuffler for the house edge.
But the history starts too late.
Hearst was running this game in the 1890s. "You furnish the pictures, I'll furnish the war." Screaming headlines. Moral panic with a side of fabrication. Readers as raw material dressed up as civic duty. The algorithm didn't invent attention extraction. It just fired the editors and kept the business model.
The human editor was always a crude filter — sometimes principled, mostly deadline-driven, occasionally drunk. The recommendation engine is none of those things. It just optimizes, continuously, without coffee breaks or conscience. Same house. Faster croupiers. Better lighting.
Which is why your conclusion lands slightly soft. Naming the machine correctly is necessary but the machine has been running under different names for a hundred and thirty years. We named it yellow journalism. We named it tabloid press. We named it clickbait. Naming it the Attention Casino is accurate and overdue.
It didn't stop the game last time either.
So the real question isn't just about recommendation transparency or platform regulation. It's whether attention-as-product was ever compatible with a functional information environment. Before the algorithm. Before the internet. Before television.
That's the thread worth pulling.
You got closest to it and then stopped for a coffee you didn't finish.