The Social Contract OpenAI Wrote Without You
OpenAI just released a document called Industrial Policy for the Intelligence Age: Ideas to Keep People First. Thirteen pages. Pastel cover art. The kind of thing that lands in a news cycle, gets summarised in a paragraph, and sinks. Most people will not read it. That is a mistake. Buried inside the policy language is an attempt to write the political theory of the next decade of AI, and the ambition of the thing deserves to be taken seriously even by those, perhaps especially by those, who find parts of it self-serving.
The paper assumes the transition to superintelligence is already underway. It expects systems to move from completing hour-scale tasks to executing month-long projects. And it argues, with a straight face and considerable sophistication, that the right historical analogy for what is coming is not ordinary software adoption but a Progressive Era or New Deal-scale redesign of the social contract. That is an extraordinary claim for a technology company to make about its own product category. It is also, on the evidence, probably correct.
What makes the document worth close reading is less the individual proposals, though several are ambitious, than the rhetoric. OpenAI has done something very deliberate here: it has domesticated superintelligence. Instead of the mystical or science-fiction register that still dominates most public AI discourse, the paper translates the future into the vocabulary of payroll taxes, unemployment insurance, transmission lines, auditors, public records, schools, libraries and local ratepayers. Dario Amodei did something similar in Machines of Loving Grace when he said he wanted to strip the “sci-fi baggage” from the conversation about transformative AI. OpenAI’s paper takes that instinct further. It renders a potentially civilisational rupture legible to democratic administration. It makes superintelligence sound like something a city council could discuss.
This is either an act of responsible public communication or an act of containment. Probably both.
The paper divides its agenda into two fronts: an “open economy” with broad participation and shared prosperity, and a “resilient society” with trust infrastructure, auditing and frontier-risk management. Across both, the intellectual ambition is consistent. OpenAI has moved past arguing for better models or even safer models. The paper proposes a settlement: rapid frontier progress, narrow top-end controls, universal access to ordinary AI, enough redistribution to prevent backlash, and enough monitoring to manage systemic risk.
Take the “Right to AI” section. By analogising AI access to literacy, electricity and internet connectivity, and by explicitly naming workers, small businesses, schools, libraries and underserved communities, OpenAI is treating baseline AI capability as something closer to a universal service than a premium product. If AI is infrastructure, then access, affordability and non-exclusion become political obligations rather than market outcomes. That is a radical reframing, whatever its strategic motivations.
Or take the tax-base discussion and the Public Wealth Fund proposal. These effectively concede that if labour income shrinks relative to capital, the welfare state breaks. The paper avoids the old Silicon Valley gesture of making universal basic income the whole answer. Instead it offers a portfolio: capital-based taxation, a public wealth fund, portable benefits, adaptive safety nets, care-sector pathways and shorter-week pilots. That is closer to social-democratic industrial policy than to techno-libertarian abundance talk. The distance between this document and the “move fast and break things” era is measured in political centuries.
And the safety sections mark a conceptual shift that deserves attention beyond the AI policy community. The emphasis on an “AI trust stack”, provenance standards, privacy-preserving logs, incident reporting, post-deployment monitoring, model-containment playbooks and auditing regimes means OpenAI is no longer talking as if safety is something you solve before release. The closest analogies are aviation, critical infrastructure, food and drug surveillance. AI is being reimagined as a continuously monitored system embedded inside institutions, not a product with a system card stapled to the box. That shift, from model safety to societal resilience, may be the most consequential intellectual move in the document, even if it reads as the least dramatic.
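The paper stops at naming these components, so it is worth making the shift concrete. Below is a deliberately minimal sketch, entirely my own illustration rather than anything OpenAI specifies, of one trust-stack primitive: a hash-chained, signed provenance log of the kind post-deployment monitoring and auditing would sit on top of. Every name, field and key in it is hypothetical.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key. A real regime would use per-organisation
# asymmetric keys held against an external auditor, not a shared secret.
SIGNING_KEY = b"example-audit-key"

def provenance_record(model_id: str, output_sha256: str, prev_digest: str) -> dict:
    """Build one entry in a hash-chained audit log.

    Each entry commits to the previous entry's signature, so deleting or
    rewriting history is detectable by any later verifier.
    """
    entry = {
        "model_id": model_id,
        "output_sha256": output_sha256,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev": prev_digest,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return entry

# Two chained records, and the linkage an auditor would verify.
first = provenance_record("model-x", hashlib.sha256(b"output one").hexdigest(), "0" * 64)
second = provenance_record("model-x", hashlib.sha256(b"output two").hexdigest(), first["sig"])
print(second["prev"] == first["sig"])  # True while the chain is intact
```

The cryptography here is standard and not the point. The point is what the sketch begs: someone has to hold the keys, run the verifiers and arbitrate disputes over what the log says, which is precisely the governance work the document leaves unassigned.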
One line jumped out at me more than any other. Libraries are named explicitly as AI access points. A small sentence with very large implications.
It casts libraries as part of the democratic distribution layer for intelligence: places where access, fluency and trusted use can be made public rather than purely commercial. For anyone who has spent time inside a public or university library watching what actually happens when a community member sits down at a terminal and tries to make sense of something difficult, that framing carries real weight. Libraries have always been the institution where the gap between technological possibility and democratic participation gets closed, slowly, imperfectly, person by person. Putting them in the same sentence as AI access is an acknowledgment that intelligence, like literacy before it, requires public infrastructure to reach the people who need it most.
But as a university librarian, I also recognise what the framing leaves unsaid. Libraries appear in the document as distribution nodes. Not as rights-bearing stewards of the knowledge commons. Not as institutions with their own bargaining position, their own intellectual property concerns, their own century-long relationship with the tension between access and ownership. The paper says almost nothing about copyright, licensing, preservation or the institutional infrastructure that makes knowledge trustworthy in the first place. The “AI trust stack” it proposes on page ten, with its emphasis on provenance, verification and audit trails, is a beginning. But it describes a technical architecture without addressing who curates, who preserves, who arbitrates disputes over origin and attribution. Those are library questions. They have been library questions for centuries. The paper does not seem to know this. Libraries get a seat at the access table. They do not yet get a seat at the governance table.
That omission is telling. And it points to the biggest structural absence in the document: the question of concentration.
There is almost no serious treatment of antitrust. No detailed engagement with the structural concentration of compute, cloud services, chips, data and distribution channels. No plan for contestability when AI is used in hiring, benefits, education or public administration. The paper is strongest on redistribution after concentration and weakest on preventing concentration in the first place. A public wealth fund to socialise gains, yes. But nothing about dispersing control.
Daron Acemoglu’s recent work argues that markets will underinvest in what he calls “pro-worker AI”, and that AI can either amplify human judgement and expertise or erode both by pursuing pure automation. Read through that lens, OpenAI’s sections on worker voice, care-economy pathways and distributed scientific infrastructure look like a partial absorption of the Acemoglu critique. OpenAI is conceding that the direction of innovation is not neutral, that the state may need to steer it. But only partially. The paper wants to socialise gains more than it wants to disperse power. It wants redistribution without restructuring. That distinction matters.
The convergence across major AI companies is itself worth remarking on. Anthropic launched the Anthropic Institute and expanded its policy apparatus in the same period. Google’s 2026 AI Impact Summit language emphasised infrastructure, access, science, government capacity and skills. Mustafa Suleyman’s “humanist superintelligence” framing at Microsoft tracks almost exactly onto OpenAI’s containment instincts. The vocabulary is converging: frontier labs now talk about industrial policy, public utility, mission governance, national infrastructure and social insurance. Two years ago, the dominant language was “responsible AI” and “ethical guidelines”. That language has been quietly retired. What replaced it is more ambitious and more honest about the scale of what is coming, and also more useful as a framework for maintaining the political legitimacy of the companies doing the building.
These documents are sincere and strategic at the same time. Holding both of those truths is essential to reading them well. OpenAI is worried about real disruption. But it is also trying to define the acceptable policy perimeter before governments or publics do it without them. Narrow frontier regulation, public-benefit corporation governance, safety markets, public-input mechanisms and broad access to non-frontier systems all describe a world in which labs remain central but politically tolerated. That does not make the proposals cynical. It means they should be read as both governance and self-positioning. The end of the paper gives the game away gently. It closes not with statutes but with ecosystem-building: feedback channels, fellowships, research grants of up to $100,000, a million dollars in API credits and a Washington workshop opening in May. Frontier labs are building models and, simultaneously, the surrounding epistemic institutions that will interpret those models for governments, journalists, researchers and civil society. They are trying to shape not only the technology but the vocabulary in which the technology will be understood.
I keep thinking about what sits underneath the convergence. The deepest shift, the one that none of these companies say explicitly but all of them now assume, is that the old conversation about whether AI can be safe is over. The new conversation is about what kind of policy can metabolise abundant intelligence. That is a question about political economy, not about alignment benchmarks. It is a question about who owns the infrastructure, who sets the terms of access, who measures the disruption, who decides when the safety net triggers and who writes the auditing standards that determine whether any of it is working.
Dario Amodei would say the settlement OpenAI proposes works only if we survive the adolescence, his term for the period of maximum capability growth before institutions catch up. His two-part worldview, abundance in Machines of Loving Grace and risk in The Adolescence of Technology, maps closely onto the paper’s twin structure. Anthropic’s own labour-market analysis from March found little unemployment effect so far but suggestive evidence of slower hiring for young workers in exposed jobs. OpenAI’s proposal for real-time indicators and automatically triggered transition support sits in that same empirical groove: measure first, then let policy scale with observable disruption. The instinct is the same. The question is whether measurement can ever keep pace with the thing being measured.
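The paper does not say what would actually trip those triggers. One plausible mechanical reading, and this is my gloss rather than OpenAI's design, is a threshold rule over an observed displacement indicator, under which support scales with measurement rather than with fresh legislation:

```python
# A deliberately naive sketch of "measure first, then let policy scale".
# The indicator, thresholds and multipliers are invented for illustration;
# none of them appear in the OpenAI paper.

def transition_support(displacement_rate: float, baseline_budget: float) -> float:
    """Scale transition funding with an observed displacement indicator.

    displacement_rate: hypothetical share of workers in exposed
    occupations losing hours or roles over a trailing window.
    """
    if displacement_rate < 0.02:   # within ordinary labour-market churn
        return baseline_budget
    if displacement_rate < 0.05:   # elevated: scale support linearly
        return baseline_budget * (1 + (displacement_rate - 0.02) * 20)
    return baseline_budget * 2.0   # severe: emergency ceiling

print(transition_support(0.01, 100.0))  # 100.0, no trigger
print(transition_support(0.04, 100.0))  # 140.0, scaled response
```

The rule itself is trivial. Everything contested lives inside displacement_rate: who defines exposure, who collects the data, who certifies the number.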
Demis Hassabis occupies different ground. He has framed AI consistently as a route to faster scientific discovery, shorter drug-development cycles, perhaps the end of disease and a world of what he calls “radical abundance”. At Davos earlier this year he sounded more measured than Amodei on near-term labour shocks, expecting something closer to ordinary technology dynamics at first, even as entry-level roles start to feel pressure. But Hassabis also said something most frontier-lab leaders avoid: that the deeper post-AGI question may be purpose, not wealth distribution. That society may need what he called a “new political philosophy”. OpenAI’s paper touches that territory only lightly, through shorter workweeks and human-centred care work and public input mechanisms, but it remains mostly a material and administrative document. Distribution is worked out in detail. Meaning barely gets a paragraph.
Acemoglu, for his part, would say the settlement works only if innovation is steered toward human complementarity rather than pure substitution, and that compensating people after power has already concentrated is not the same as preventing the concentration.
That three-way tension tells you more about the state of AI governance than any single proposal in the document. OpenAI’s paper occupies a specific position within it: optimistic about abundance, serious about redistribution, largely silent on structural power. It is a document that thinks carefully about how to share the harvest and barely at all about who owns the farm.
The paper is also more American than it admits. It gestures toward a global conversation, but the policy grammar is almost entirely US fiscal federalism, US safety nets, US agencies and US-led frontier governance. For those of us working outside the United States, that matters. Australia’s National AI Plan, the Castlereagh Statement on AI and education, the copyright negotiations happening in Canberra right now, these are not footnotes to a Washington conversation. They are parallel attempts to build institutional responses suited to different political cultures, different labour markets, different relationships between the state and the citizen. The hardest question remains unresolved: how a “right to AI” survives in a world where compute, chips and model power remain geographically and corporately concentrated.
I have argued that AI functions as a diagnostic rather than a disruption, that it exposes pre-existing fractures in education, work and institutional culture rather than creating new ones. OpenAI’s paper, for all its forward-looking ambition, confirms this in ways its authors may not have intended. The reason a Progressive Era analogy is even plausible is that the fractures it describes, the eroding tax base, the inadequate safety nets, the concentration of wealth, the disconnect between productivity growth and wage growth, the hollowing out of worker voice in organisational decisions, were all there before the first large language model was trained. AI did not break the social contract. It made the existing cracks visible at a speed that can no longer be politely ignored.
Consider the document’s own admission that workers using AI might “agree that it’s increasing their productivity without believing they’re seeing the benefits”. That sentence describes a forty-year story about the decoupling of productivity from compensation, dressed in new clothes. The paper knows this. Its authors are sophisticated enough to see the structural roots. But the framing still tilts toward AI as the cause of disruption rather than the light that reveals it, because the policy prescriptions are easier to sell when the technology is the protagonist rather than the decades of institutional neglect that preceded it.
The question this paper raises, the one it does not quite answer, is whether the companies accelerating that visibility are the right ones to design the repair. They may be. They may have the technical knowledge, the resources and the institutional incentive. But a settlement written by the party that benefits most from the terms of settlement is a settlement that deserves very careful reading. Not cynicism. Not dismissal. But the kind of scrutiny you bring to any document that arrives with a pastel cover, a Washington address and a trillion-dollar thesis about why its authors should remain at the centre of the story.
Read it. All thirteen pages. Then ask who is missing from the room where it was written. And ask whether the institutions that have always stood between concentrated power and the public interest, the libraries and unions and courts among them, are present as partners in the design or merely as beneficiaries of the generosity. The answer to that question will tell you more about the next decade than anything in the document itself.



I have a problem with the paper and OpenAI's approach to democracy. What they describe is a managed, controlled process run through bureaucratised channels. Real democracy is messy and often produces conflict over power, values, resources and legitimacy. They want democratic legitimacy without fully opening themselves up to democratic contestation.
The distinction between sharing the harvest and owning the farm is precise, but it may understate the recursion. If AI is diagnostic rather than disruptive, the training data that built these systems is itself a crystallisation of those fractures. The instrument inherits the pathology it claims to reveal. That makes the boundary between exposure and reproduction far less clean than OpenAI needs it to be.
Your point about libraries not yet having a governance seat lands hard. But I suspect the omission is structural. The paper's entire architecture assumes the right response to concentrated intelligence is distributed access, never distributed control. Libraries, unions, courts appear as beneficiaries, never as counterweights. That is not a gap in the analysis. It is the analysis.
I have been exploring a related knot: whether “human values” is a coherent alignment target when the training corpus encodes the very contradictions you describe. The constitutional approach to alignment is interesting less for its answers than for what it makes irreversible. Once you commit values to an explicit, revisable text, you can no longer pretend alignment is a technical problem with a self-evident objective. The act of drafting forces the question of who drafts, who revises, and whether the governed had any voice. That is your library problem in a different key, and worth sitting with.