Last week I watched a discussion about climate policy dissolve into chaos in under seven comments. Not because anyone disagreed about the science, but because someone dared suggest an imperfect solution. Within minutes, the conversation had fractured into competing moral performances, each participant claiming higher ground while the original question drowned in a sea of righteousness.
This wasn't unusual. It was Tuesday on the internet.
We've built a communication infrastructure that promises unprecedented connection, yet delivers unprecedented fragmentation. Every platform, every algorithm, every engagement metric conspires to transform dialogue into combat. However, the technology isn't creating these fractures. It's revealing them. The broken conversations we're having online are simply amplified versions of the broken conversations we've always had. Social media didn't invent our inability to handle complexity; it just gave it a megaphone and an audience.
The philosopher and social psychologist George Herbert Mead understood something profound about human communication that we've largely forgotten. He saw conversation not as information transmission but as reality negotiation. Every exchange of words is actually an attempt to build a shared world. When I say something to you, I'm not just conveying data. I'm proposing a version of reality and waiting to see if you'll help me construct it.
This construction project was always fragile. Strip away the body language, the tone of voice, the subtle dance of facial expressions, and you're left with naked words carrying impossible weight. Online discourse operates in this symbolic poverty, where every statement arrives orphaned from its context, ready to be adopted by whatever interpretation serves the reader's existing beliefs.
But the poverty runs deeper than missing emoji reactions or misread sarcasm. We've lost something more fundamental: the ability to imagine that disagreement might be productive rather than destructive.
Henri Tajfel's work on social identity reveals why this loss cuts so deep. Our sense of self isn't some isolated internal experience; it emerges from our group memberships. When someone challenges our position, they're not just disagreeing with an idea. They're threatening the scaffolding of our identity. No wonder we respond as if under attack. We are.
The platforms know this. They've built their entire economic model on it. The algorithm doesn't care about truth or understanding or human flourishing. It cares about engagement, and nothing engages quite like outrage. So it serves us the most extreme versions of every position, the most inflammatory takes, the most simplified strawmen of complex arguments. It's not showing us reality; it's showing us a funhouse mirror designed to maximise emotional response.
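The dynamic is easy to see in a toy model. The sketch below is illustrative Python only, not any real platform's code: it ranks a feed purely on predicted engagement, with outrage as the sole input to the score. Accuracy never enters the objective, so the most inflammatory post rises to the top by construction.

```python
# A toy feed ranker that scores posts purely on predicted engagement.
# In this deliberately simplified model, outrage is the only signal the
# objective sees; accuracy is tracked but never consulted.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    accuracy: float   # 0..1, how faithful to reality (ignored by the ranker)
    outrage: float    # 0..1, how inflammatory

def predicted_engagement(post: Post) -> float:
    # The objective never mentions accuracy: only emotional pull matters.
    base_interest = 0.2
    return base_interest + 0.8 * post.outrage

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first.
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Careful, nuanced analysis of the trade-offs", accuracy=0.9, outrage=0.1),
    Post("THEY are destroying everything you love", accuracy=0.2, outrage=0.95),
    Post("Incremental policy update, modest effects", accuracy=0.8, outrage=0.05),
])
for post in feed:
    print(post.text)
```

Under these assumptions the inflammatory post always wins, and the nuanced analysis sinks, regardless of which is true. That is the funhouse mirror in miniature.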
This would be troubling enough if we were dealing with simple topics. But we're not. We're trying to navigate climate change, artificial intelligence, systemic inequality, global health crises. These aren't problems with clean solutions. They're wicked problems, full of trade-offs and uncertainties, requiring nuanced thinking and patient dialogue. Instead, we get hot takes and purity tests.
The nirvana fallacy has become our default mode. Every proposed solution gets rejected for its imperfections. Every incremental improvement gets dismissed because it doesn't solve everything. We've convinced ourselves that acknowledging complexity is weakness, that admitting uncertainty is failure. So we perform certainty we don't feel, defending positions we haven't examined, fighting battles that shouldn't exist.
John Suler called it the online disinhibition effect, but that clinical term doesn't capture the full horror of what happens when humans shed their social constraints. Watch someone transform from thoughtful colleague to Twitter warrior. Watch nuanced thinkers flatten into caricatures of themselves. Watch entire communities spiral into collective madness, convinced they're saving the world while tearing each other apart.
The echo chambers we've built aren't just comfortable. They're intoxicating. There's a rush that comes from having your beliefs confirmed, your biases validated, your enemies clearly defined. Social media has turned this rush into an endless drip feed. We're all addicts now, scrolling for our next hit of righteous indignation.
And into this already fractured landscape, we've now introduced artificial intelligence.
AI doesn't just participate in our broken discourse; it accelerates every dysfunction. It can generate infinite variations of any argument, craft perfect strawmen, produce evidence for any position. It's the ultimate confirmation bias machine, capable of supporting any belief with seemingly authoritative text. The doomer versus accelerationist debates around AI itself perfectly illustrate this: two camps, absolutely certain of contradictory futures, using the same technology to prove opposite points.
We're using broken conversations about AI to make decisions about AI. We're trying to govern a technology we don't understand through discourse structures that don't work, on platforms designed to prevent the very kind of careful thinking this moment demands.
The moral grandstanding has reached fever pitch. Everyone's an expert now, everyone has the solution, everyone knows exactly what should be done. Except we don't. We're all fumbling in the dark, pretending we can see clearly, afraid to admit our uncertainty because uncertainty has become synonymous with weakness.
I keep thinking about the principle of charity that philosophers talk about. The radical act of interpreting someone's argument in its strongest form rather than its weakest. Imagine if we approached online discourse this way. Imagine if our first instinct was to understand rather than defeat. Imagine if we could sit with complexity without immediately simplifying it into something we can attack.
But this isn't just about being nice. It's about survival. The challenges we face require collective intelligence, not collective combat. They require us to build on each other's ideas, not tear them down. They require the kind of patient, careful thinking that our current discourse actively prevents.
The psychologist Jonathan Haidt mapped our moral foundations and found that we're not even arguing about the same things. Liberals and conservatives aren't just disagreeing; they're operating from different moral matrices entirely. What registers as justice to one side reads as destruction to the other. We're not having the same conversation; we're having different conversations about different things while believing we're talking about the same thing.
And somewhere in all this noise, the real questions get lost. Not "who's right?" but "what works?" Not "who's moral?" but "what helps?" Not "who wins?" but "how do we move forward together?"
The technology isn't going away. The platforms will keep optimising for engagement. The algorithms will keep serving us simplified versions of complex realities. AI will keep getting better at mimicking our worst conversational habits. But we don't have to accept this as our fate.
We could choose to recognise the game being played. We could choose to refuse the false binaries, reject the manufactured outrage, resist the pull of tribal validation. We could choose to embrace uncertainty as strength, complexity as reality, nuance as necessity.
We could choose to have better conversations. Not perfect ones. Just better.
The broken mirror of online discourse shows us distorted versions of ourselves and each other. But mirrors can be fixed. Or better yet, we can learn to see past the distortion, to recognise the fractures for what they are: not features of reality, but artefacts of a system designed to profit from our division.
Every time we choose to steelman instead of strawman, to engage with complexity instead of simplification, to admit uncertainty instead of performing certainty, we're engaging in a small act of resistance. We're refusing to be what the machine wants us to be.
The conversation about AI and society isn't separate from the problem of broken discourse. It's the perfect test case for whether we can evolve past our current limitations. Can we discuss existential challenges without devolving into existential combat? Can we navigate unprecedented complexity without retreating into comfortable simplicities?
I don't know. That uncertainty makes me uncomfortable. But I'm learning to sit with that discomfort, to resist the urge to resolve it with false certainty. Because the alternative, this endless war of all against all, this fractured mirror reflecting our worst selves back at us infinitely, is far worse than uncertainty.
We built these systems. We can build better ones. But first, we need to have a different kind of conversation about what better looks like. A conversation that doesn't pretend to have all the answers. A conversation that admits its own limitations. A conversation that recognises its own participation in the very dynamics it seeks to change.
That's the paradox we're trapped in: we need better discourse to build better discourse. We need to escape the maze while we're still inside it. We need to fix the mirror while we're looking through it.
But paradoxes aren't impossibilities. They're invitations to think differently. And thinking differently, it turns out, is exactly what this moment demands.