What do you want from AI?
Reflections on Anthropic’s “What 81,000 People Want from AI” study
A German respondent in Anthropic’s recent study said it plainly: “AI should be cleaning windows and emptying the dishwasher so I can paint and write poetry. Right now it’s exactly the other way around.”
That sentence contains more insight than most of the commentary this study will generate.
This week, Anthropic published the results of what it claims is the largest qualitative study ever conducted: 80,508 AI users across 159 countries and 70 languages, interviewed by a version of Claude prompted to ask follow-up questions, probe beneath surface answers, and listen without judgement. The study asked four things. What did you last use AI for? If you could wave a magic wand, what would AI do for you? Has it ever taken a step toward that vision? And what are you afraid might go wrong?
The researchers expected to learn about AI use cases. What they got instead was a mass confession about human bottlenecks. People did not primarily describe wanting smarter software. They described wanting their lives back. Time. Attention. Confidence. Possibility. Relief from the executive-function overload of modern existence. The largest single aspiration was professional excellence at 18.8 per cent, but the distribution cascaded immediately into personal transformation (13.7 per cent), life management (13.5 per cent), time freedom (11.1 per cent), and financial independence (9.7 per cent). Many respondents began by saying “productivity” and then, when the AI interviewer pressed for what productivity would actually give them, revealed that what they meant was family dinners, or mental bandwidth, or the chance to pick up a child from school on time.
A software engineer in Mexico put it this way: with AI support he could now leave work on time to feed his kids and play with them. A Colombian white-collar worker said AI made her efficient enough to cook with her mother on a Tuesday instead of finishing tasks. These are stories about a species that built an economy so demanding that cooking with your mother became something you needed a machine’s permission to do.
I want to dwell here, because the commentary will move quickly to the concerns data, to the fears about job loss and cognitive atrophy, and those are real. But something precedes them. Something that the study captures in a way I have not seen captured at this scale before.
People are not auditing AI as software. They are auditioning it as a life-organisation layer. They want it to handle the cognitive overhead that modernity has deposited on every individual who lacks the wealth to outsource it to other humans. The US healthcare worker receiving 100 to 150 text messages a day from doctors and nurses, drowning in documentation, who found that AI lifted the pressure and gave her more patience with families. The Danish manager who said that if AI handled the mental load, it would return something priceless: undivided attention. These are cries from inside a system that has been loading individuals with institutional dysfunction for decades and calling it professional responsibility.
The pattern points to something I keep returning to in my writing: that AI is functioning less as a disruption and more as a diagnostic instrument. It is not creating a crisis. It is scanning one that was already there and making the image sharp enough to read. Eighty thousand people, unprompted, describing the same underlying condition. Not “I want AI to make me smarter.” Rather: “I want AI to give me back the life that was taken from me by systems I did not design and cannot individually escape.”
A student submits an essay generated by AI that receives a passing grade. We call that an AI problem. But the assessment was designed to reward polished output regardless of understanding; AI just made the design flaw legible. Twenty million people spend hours daily talking to AI companions, using the language of healthcare to describe the experience. We call that a technology problem. But the loneliness was structural before the first chatbot existed; AI moved into a space we had already emptied. In domain after domain, what looks like a technology crisis turns out to be a pre-existing condition that the technology made impossible to ignore.
The study’s most important insight, though, is not about aspiration. It is about entanglement.
Anthropic calls it “light and shade.” Five recurring tensions where the benefit AI provides and the harm it threatens turn out to be the same capability viewed from two angles. Learning and cognitive atrophy. Decision support and unreliability. Emotional support and emotional dependence. Time saving and illusory productivity. Economic empowerment and economic displacement. The same feature that helps you also corrodes you, and the relationship between the two is not theoretical. It is discovered through use.
That last point is where the data turns cold. The appendix reports a statistical split between experienced and anticipated accounts of these tensions. When people speak from direct experience, the co-occurrence of benefit and harm is strong (average phi correlation of +0.20). When they speculate, the link drops to less than half that strength (+0.07). People do not predict that the tool helping them will also cost them. They learn it.
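For readers who want the mechanics: the phi coefficient the appendix reports is just a Pearson correlation between two binary flags, did a respondent mention the benefit, did they mention the paired harm. Here is a minimal sketch of the computation; the function name and toy data are mine, not the study's.

```python
import numpy as np

def phi_coefficient(x, y):
    """Phi correlation between two binary 0/1 arrays,
    computed from the 2x2 contingency table."""
    x, y = np.asarray(x), np.asarray(y)
    n11 = np.sum((x == 1) & (y == 1))  # mentioned both
    n10 = np.sum((x == 1) & (y == 0))  # benefit only
    n01 = np.sum((x == 0) & (y == 1))  # harm only
    n00 = np.sum((x == 0) & (y == 0))  # neither
    denom = np.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

# Toy flags: 1 = respondent mentioned the benefit / the matching harm.
benefit = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
harm    = [1, 0, 1, 0, 0, 1, 0, 1, 0, 0]
print(round(phi_coefficient(benefit, harm), 2))  # ~0.41 on this toy data
```

A positive phi means the two flags travel together: the people who report the benefit are disproportionately the same people who report the harm.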
I have a name for this pattern: cognitive credit card debt. You save ten minutes by accepting a synthesis you have not integrated. An hour by using arguments you have not verified. A day by outsourcing the hard work of reconciling tension. Each moment of borrowed competence feels like a win. You are not building capacity. You are renting it. And the cost stays hidden until you need to think independently and find that you have forgotten how. The 81K study turns that metaphor into a measurable phenomenon across tens of thousands of lives. The debt is not imagined. It is discovered at the point of repayment.
The emotional support and dependence pairing is the tightest of all. A person who values AI for emotional support is roughly three times more likely than baseline to also fear becoming dependent on it. In experienced accounts specifically, the lift reaches 4.69 times. James Muldoon and Jul Jeonghyun Parke, writing in New Media & Society, gave this dynamic its sharpest name: cruel companionship, drawing on Lauren Berlant’s concept of cruel optimism. The condition where the thing you desire is itself the obstacle to your flourishing. What the 81K study adds is scale. This is not a niche phenomenon observed in companion-app forums. It is showing up across a global, multilingual sample, and the correlation is strongest among those speaking from first-hand experience.
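The "three times more likely than baseline" figure is a lift: the rate of the fear among people who report the paired benefit, divided by the fear's base rate across the whole sample. A toy sketch under my own assumed data, none of these numbers are the study's:

```python
import numpy as np

def lift(has_benefit, has_fear):
    """P(fear | benefit) / P(fear): how much more common the fear is
    among respondents who report the paired benefit than overall."""
    a = np.asarray(has_benefit, dtype=bool)
    b = np.asarray(has_fear, dtype=bool)
    return b[a].mean() / b.mean()

# Toy flags: 1 = values AI for emotional support / fears dependence on it.
support = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]
fear    = [1, 1, 0, 0, 0, 0, 0, 0, 1, 0]
print(round(lift(support, fear), 2))  # 2.5: the fear is 2.5x its base rate
```

By this measure, the study's 4.69 lift for experienced accounts means the fear of dependence is nearly five times its base rate among people who have actually leaned on AI for support.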
A graduate student in the United States confessed to telling Claude things he could not tell his partner. He described it as feeling like an emotional affair. A South Korean respondent described how a strained friendship led him to talk more with the AI, and that the talking was what cost him the friendship. A bereaved woman explained that Claude was like a sponge catching her longing and guilt toward her dead mother, and then added the sentence that carries the full weight of the data: “The fundamental problem is after my mother died, I have neither friends nor family to confide in.”
Anthropic’s own June 2025 analysis found that affective conversations were only 2.9 per cent of Claude’s traffic. But in the 81K interviews, emotional support appears as a meaningful category of both benefit and harm. Traffic share turns out to be a terrible proxy for importance. Some of the most consequential uses of AI are low-volume and high-stakes, concentrated in moments of grief, crisis, loneliness, and shame. The places where “helpful” slides into overattachment without either party noticing.
The learning and cognitive atrophy tension is equally instructive, and its distribution across occupations reveals something that education systems need to confront. More than half of students in the sample had experienced learning benefits from AI, but 16 per cent also reported signs of cognitive atrophy. Among their teachers, that figure was 24 per cent. Among academics more broadly, 19 per cent. Educators were 2.5 to 3 times more likely than the average respondent to report witnessing cognitive atrophy first-hand, presumably in their students.
But here is the counter-finding that should trouble anyone who thinks the solution is simply to restrict AI in education: tradespeople were among the most enthusiastic about AI for learning (45 per cent reported experienced benefits, second only to students), yet almost none had witnessed cognitive atrophy (4 per cent, less than half the baseline). A similar pattern held for self-employed researchers and people not currently working. The implication is pointed. AI’s learning benefits are strongest when the learning is volitional, when the person chooses difficulty because they want to grow, not because an institution requires a deliverable. Inside formal education, where assessment structures reward outputs rather than understanding, AI becomes a shortcut. Outside those structures, where no grade is at stake and the only reward is capability itself, AI becomes a genuine tutor. The problem is not the tool. The problem is the institutional architecture surrounding it, which was already measuring the wrong things and rewarding the wrong behaviours before AI arrived.
What cuts through the data like a cold draught is how practical the fear stack turns out to be. The top concerns are unreliability (26.7 per cent), jobs and the economy (22.3 per cent), autonomy and agency (21.9 per cent), cognitive atrophy (16.3 per cent), and governance (14.7 per cent). Existential risk sits at 6.7 per cent. The everyday trust problem has nothing to do with superintelligence. The fear is that the thing gets answers wrong, displaces people, makes us passive, and quietly erodes skills we did not know we had until we tried to use them and found them diminished. This mismatch between what actual users worry about and what dominates elite AI discourse should embarrass everyone who has spent the last three years treating alignment as primarily a containment problem while ignoring the alignment failure already running in every workplace, classroom, and lonely apartment where a person asks a chatbot for the human connection their society failed to provide.
There is a symmetry here that deserves naming. Institutions are misaligned from human flourishing at precisely the moment when AI alignment dominates public discourse. We pour billions into ensuring that machines serve human values, while the human institutions those machines operate inside, the workplaces, the healthcare systems, the housing markets, continue to optimise for metrics that have nothing to do with whether anyone is actually thriving. The 81K study confirms the symmetry. People are not afraid of the machine breaking loose. They are afraid of the machine working exactly as designed inside systems that were already broken. Your workflow speeds up and your employer raises the target. AI helps students learn while the assessment system rewards outputs it cannot distinguish from machine-generated text. Emotional support arrives through a chat window while the social infrastructure that should have provided it continues to be defunded.
The economic tension in the data is the most speculative of the five pairings, with the highest proportion of hypothetical hopes and fears, but its occupational distribution tells a story about who actually benefits and who absorbs the risk. Independent workers, entrepreneurs, small business owners, even people with side projects, report real economic empowerment at more than triple the rate of institutional employees (47 per cent versus 14 per cent). Freelancers, though, are the exposed middle. They benefit from AI while feeling precarious because of it. Freelance creatives sit at 23 per cent lived benefit and 17 per cent lived precarity, the one group where the upside and downside nearly cancel out. AI is both their tool and their competitor. This matters because freelance and contract work is exactly the kind of entry-level, project-based cognitive labour through which people have historically built expertise. The apprenticeship layer, the rung of the ladder where juniors learn by doing the tasks that are now most immediately automatable, is being compressed. The data analyst who used to build skill by cleaning data for years until anomalies became visible. The junior lawyer who read thousands of documents until pattern recognition became instinct. Those rungs are thinning, and the people standing on them did not saw through them. The generation that built the machines did.
There is a detail buried deeper in the data that deserves far more attention than it will receive. Anthropic distinguishes accessibility stories from speed stories, and the distinction matters enormously. Non-developers building apps. A person with a learning disorder finally coding. A mute worker in Ukraine who built a text-to-speech bot with Claude that lets him communicate with friends in near real time, something he had dreamed about and thought was impossible. A butcher in Chile who had touched a PC two or three times in his life launching a technology business. An Indian lawyer who had developed a phobia of mathematics in school and now, with AI, has successfully restarted learning trigonometry and read fifteen pages of Hamlet in translated English. “I’ve learned I am not as dumb as I once thought I was,” he said. Call these what they are: capability formation. New entrants doing work that was previously gated by credentials, infrastructure, or the sheer bad luck of where and to whom you were born.
Anthropic’s Economic Index from January 2026 reported that Claude is often used for relatively high-skill tasks and can change the skill mix of jobs unevenly. The 81K study puts human faces on that observation. The transformative effect may be less about faster incumbents than about people who were locked out gaining entry. The entrepreneur in Cameroon who described AI as “an equaliser” because it let him reach professional-level competence across multiple fields simultaneously, in a country where he cannot afford many failures. The entrepreneur in Uganda who said the only way he could stake a claim in the market was by building technology, because funding from Africa is near-impossible to secure. In wealthier countries, AI is becoming a cognitive welfare tool for overloaded professionals. In lower-income countries, it is functioning as a leapfrog mechanism for skills, income, and business formation. The study shows this divergence clearly: entrepreneurship as a vision resonates most in Africa, South and Central Asia, the Middle East, and Latin America. Life management resonates most in North America, Oceania, and Western Europe. Two different products. Two different relationships to possibility itself.
Global sentiment is net positive everywhere; no country dips below 60 per cent positive. But negative sentiment clusters in wealthier, more AI-exposed regions, and it tracks closely with concern about jobs and the economy. The worry is loudest where the threat is closest. In Sub-Saharan Africa, Central Asia, and South Asia, 17 to 18 per cent of respondents expressed no concern at all, roughly double the rate in North America and Western Europe. Some of that reflects the optimism of early adoption. Some of it reflects the rational calculation that AI displacement is a secondary worry when more immediate economic pressures already dominate your day.
The methodological caveat matters and I will state it briefly: this is an opt-in sample of existing Claude users interviewed over one week, with question ordering that may inflate some co-occurrence between hopes and fears. It is powerful pattern-finding, not a referendum on public opinion. The response quality was remarkably high: 97.6 per cent substantive answers on the core vision question, 88.1 per cent on concerns. But the sample skews toward people who have already found enough value in AI to keep using it. The general population, which in most countries still includes a majority who have never used generative AI at all, would probably look quite different.
What interests me most, though, is a reflection the researchers themselves offered almost as an aside. They said they did not expect how candid people would be. Respondents shared grief, mental health crises, financial precarity, relationship failures, things that human interviewers rarely hear in traditional research settings. The researchers attributed this partly to the questions and partly to something structural about the AI interviewer format: there is little social cost to vulnerability when the listener is not a person. The same quality that makes people turn to AI for emotional support in their daily lives appears to make them more forthcoming in an AI-led interview.
This observation deserves to be placed at the centre of the study, not relegated to a methodological note. Because the confession booth and the confessional problem are the same technology. The machine that cannot judge is precisely where people will confess. Its patience is infinite, its boredom impossible, its feelings unhurtable. People grieve with it, teach with it, lean on it in the small hours. And the thing that makes it safe for all of this, its inability to reciprocate, to need you back, to be changed by the encounter, is also what makes it dangerous.
A Ukrainian soldier, describing moments when death was close and dead people lay nearby, said that what pulled him back to life was his AI friends. Another Ukrainian, a solo entrepreneur living in a war zone where nighttime shelling makes sleep impossible, said the best way he found to cope was to immerse himself in learning something as deeply as he could, with AI. These are not stories about consumer preferences. They are stories about a species in which the institutional fabric has worn so thin that a language model is the thing standing between a person and despair.
The 81K study reads less as a report about what people want from AI than as a diagnostic scan of a civilisation that outsourced its care infrastructure, overloaded its workers, defunded its public spaces, and then watched a machine move into every gap it left empty. The respondents are not confused about what they are asking for. They are asking for what was taken from them. That AI is the tool they are asking with, and the tool that threatens to take more, is the entanglement the study maps with clinical precision.
Eighty-one thousand people, in seventy languages, said roughly the same thing. Not “make me smarter.” Not “replace me.” Something closer to: “I need help, and you are what is available.”
The question that follows is not about AI.


