Discussion about this post

Meatbag Man

I have a problem with the paper and with OpenAI’s approach to democracy. What they describe is a managed, controlled process run through bureaucratized channels. Real democracy is messy, and it often produces conflict over power, values, resources, and legitimacy. They want democratic legitimacy without fully opening themselves up to democratic contestation.

Ben Zhou
The distinction between sharing the harvest and owning the farm is precise — but it may understate the recursion. If AI is diagnostic rather than disruptive, the training data that built these systems is itself a crystallisation of those fractures. The instrument inherits the pathology it claims to reveal. That makes the boundary between exposure and reproduction far less clean than OpenAI needs it to be.

Your point about libraries not yet having a governance seat lands hard. But I suspect the omission is structural. The paper’s entire architecture assumes the right response to concentrated intelligence is distributed access — never distributed control. Libraries, unions, courts appear as beneficiaries, never as counterweights. That is not a gap in the analysis. It is the analysis.

I have been exploring a related knot: whether “human values” is a coherent alignment target when the training corpus encodes the very contradictions you describe. The constitutional approach to alignment is interesting less for its answers than for what it makes irreversible — once you commit values to an explicit, revisable text, you can no longer pretend alignment is a technical problem with a self-evident objective. The act of drafting forces the question of who drafts, who revises, and whether the governed had any voice. That is your library problem in a different key — and worth sitting with.

