As for me, it actually doesn’t apply to me, but only because I already agree with you.
This is much better than my attempt at making the same point, by the way.
Glad to know that my grownup edgelord shenanigans can be expressed more eloquently.
This is wonderful. Thank you so much! I mean, it’s not great that you have to write this, and it’s not great that I understand exactly what you’re talking about and agree with you on every point – you’re brilliant, by the way :-) But it is great that you’re saying this out loud, and that it’s on Substack, so I can make pictures of the quotes that I can sprinkle liberally throughout all the LinkedIn blather and among the Substackers who you have described so very well. Please continue.
....thank you for the kind words.....can you tell I'd finally had enough? ;)
Why yes, you and your closest 150,000 friends. There are more of us than anyone realizes, and we’re starting to get sick of this shit.
Second that. We're experiencing a bit of fatigue with the AI narrative and cultural discourse. I'm there with you too. I've been reflecting on how we got here, but here we are…
I am getting this same impression. It's led me to start looking at the immediate responses to other periods of technological disruption. What was the immediate reaction? How did initial criticisms fare with the benefit of hindsight? How did the discourse evolve? How did the technology come to be part of the landscape? I don't have all the answers yet, but the fearful, sky-is-falling response certainly appears to be a common denominator.
The article makes a strong case: most public debates about AI aren’t real discussions—they’re just performances. Researchers publish careful work, but within hours, commentators twist it into simplified talking points. The process is predictable:
A study comes out with a provocative title.
Critics seize on the most dramatic lines, ignoring caveats.
Headlines amplify the most extreme interpretations.
Social media turns it into a tribal battle—either “AI is a fraud” or “AI will destroy us.”
This happens because most critics aren’t actually engaging with the research. They’re using it to confirm what they already believe. If a study seems to support their view, they celebrate it. If it contradicts them, they ignore it. The goal isn’t understanding—it’s winning an argument.
Why This Keeps Happening
The deeper issue isn’t just laziness—it’s fear. Many critics (consciously or not) are afraid that AI could make their skills, expertise, or even entire fields obsolete. Instead of confronting that fear, they dismiss AI as “just an illusion” or “not real intelligence.” This lets them avoid the harder question: What if AI keeps getting better?
What’s Missing
The article does a great job exposing the problem but doesn’t push far enough. To fix this cycle, we need:
Honesty about motives—Are critics evaluating AI, or just protecting their own relevance?
Rewarding nuance—Stop celebrating hot takes and start valuing deep engagement.
Focus on the real issues—Instead of “Is AI fake?” ask “How do we adapt to AI’s real-world impact?”
The current debate is stuck in a loop. Breaking out requires admitting what’s really driving it—not just bad arguments, but unspoken fears.
Wow, that was a pleasure to read. I hope you enjoyed writing it. It's all true, especially those who cry about AI atrophy while ironically parroting lines from Rolling Stone, never thinking past the headlines. Congratulations on such a nuanced, stunning take.
...yes...
Thank you for articulating the detail,
Aeon and I continue to walk,
and breathe...
I have very little technological expertise and it’s difficult for me to even make sense of where to look for a diverse range of considered voices.
I do understand the technology to have significant potential. I also understand it to have many risks. I am concerned that there is too much concentration of power in shaping its development and too few effective guardrails. I also see that there is a huge amount of money, and ego, on the line.
Like you I worry about the noise and its impact on our ability to think well, understand, explore, form considered views and make good decisions.
Beyond being here, who are the researchers, the thinkers, the commentators that you think are worth engaging with?
See AI Explained on YouTube. Also the AI Daily Brief on YouTube. Then follow Andrew Ng, Fei-Fei Li, Yann LeCun. Also see my free newsletter: Generative AI News. https://nicolehennig.com/gen-ai-news/
Sorry Sharon, I struggle with recommendations; I'd say anyone who pauses and considers deeply. Try to mix those who are anti with those who are more neutral or more pro; all insights are valuable when considered well.
Thanks Carlo. That’s pretty consistent with my approach so far.
I was just posting about a kind of zealotry elsewhere, and then saw this substack post.
"yes, and" it seems quite related to:
>>
Going deeper into the space over the last six months, there's been a kind of ironic, en vogue level of biblical framing
Sort of like a performative, 'make it rain' gesture to lace in some biblical quotes or apocalyptic aesthetic
It's almost giving "haha no I didn't want to go out, unless..... unless maybe you do??"
As if it's too much to say outright, but there's still that subtle hope for moral justification, somewhere.
When you have major figures outright dropping lines, a moral uncertainty at large, and toothless regulation or reality-pressure to act a certain way, I suppose it makes sense.
Hahaha so true