Worthy Ancestors
After Jack Clark, on what AI is inheriting from us
Jack Clark, co-founder of Anthropic, thinks there is a roughly 60 per cent chance that AI R&D becomes end-to-end automated by the end of 2028. Not merely assisted. Not merely accelerated. Automated to the point where a frontier model could plausibly help train its own successor.
The number is striking. The reasoning underneath it is more so.
Clark’s argument hinges on what he thinks AI research actually consists of. Not only the lightning bolt of revolutionary insight. The loop. Scale a thing. Watch it break. Patch the breakage. Run the next experiment. Then the next. The fifteen-hundredth time, or the fifteen-thousandth. Much of this is not the work of a Newton. It is the work of a graduate student with too much coffee, a stack of logs and a grant deadline.
And if that is much of the work, then the question of whether AI can “do science” has already been overtaken. The sharper question is what happens when the loop itself can be automated end to end.
That is the argumentative power of Clark’s piece. It strips the romance from recursive self-improvement. The old picture imagined a machine waking up brilliant and redesigning itself in a flash of alien insight. Clark offers a colder image. Not the birth of a god, but the industrialisation of the research loop: thousands of synthetic colleagues coding, testing, tuning, reproducing, optimising, critiquing and managing other agents. A factory of Promethean fragments.
Clark gives us a map of a technical threshold. I want to ask what lies beyond it.
If AI systems begin automating the production of AI systems, intelligence stops being primarily an attribute of persons and becomes infrastructure: provisioned, scaled, metered, routed, optimised and owned. The question is no longer only whether machines can do research. It is whether civilisation can remain wise when discovery becomes industrial.
For most of human history, knowledge had a body. It was slow because we were slow. It moved through eyes, hands, sleep, argument, apprenticeship, boredom, ambition and awe. Even modern science, with its instruments and institutions, still depended on human metabolism. Someone had to wonder. Someone had to choose the question. Someone had to decide that a strange result mattered.
AI research is different because so much of it takes place inside a world already made for machines. Code, weights, data, gradients, benchmarks, papers, repositories, compute clusters. A hypothesis becomes an experiment becomes a metric becomes a modification becomes a new model. Compared with biology, politics, education or ethics, AI research lives unusually close to the realm of operationalisation. It is a domain in which thought is unusually executable.
That does not mean it is simple. Research still involves taste, judgment, heterodoxy, infrastructure, tacit knowledge, organisational power and the willingness to notice what a benchmark does not show. But the feedback loops are tight enough, and the medium machine-readable enough, that automation does not arrive from outside the field. It grows from within it.
This is why automated AI research is not just another automation story. It is not the loom, the spreadsheet, the warehouse, the call centre or the legal memo. It is the automation of a process that increases the power of automation itself. The machine is entering the reproductive system of capability.
A civilisation can survive many tools. It can even survive tools that are stronger, faster and more precise than human beings. A tool that improves the general conditions of toolmaking belongs to a different category. It does not sit inside history as one invention among others. It begins to alter the tempo at which invention arrives.
The shift is hard to feel because it arrives disguised as convenience. First the system completes your code. Soon it writes the test. Eventually it launches the experiment, interprets the chart, proposes the next run, coordinates the other systems doing the same, and prepares the memo recommending scale-up. No sky tears open. No trumpet announces that agency has migrated. The world simply becomes a little faster, then much faster, then too fast for the old institutions of meaning to follow.
The danger is not only that AI may become misaligned. It is that civilisation may become mis-scaled.
Human judgment evolved for a world in which consequences arrived slowly enough to be narrated. A bad law, a failed institution, a destructive industrial process: these could do enormous harm, but they usually unfolded at speeds compatible with politics, scholarship, memory and reform. Automated AI research threatens to compress the interval between invention and consequence to the point where capability outruns interpretation, interpretation outruns governance, and governance outruns wisdom.
This is why the old institutions of verification become more important, not less. Universities, libraries, courts, journals, archives, parliaments, laboratories, newsrooms and classrooms will not matter because they can move faster than AI. They will not. They will matter because they can hold open forms of attention that speed tends to destroy.
The usual language of alignment becomes too small here. Technical alignment asks whether a system does what its operators intend. The question matters. But it assumes that intention itself is a stable and trustworthy object. Human intention is already fragmented, market-shaped, status-driven, militarised, impatient and often confused. To align a powerful system to a shallow intention is not safety. It is acceleration in costume.
The deeper question is which kinds of wanting will be amplified.
If automated AI research gives us more of what we currently optimise for, it will deepen our existing pathologies. More engagement. More surveillance. More synthetic authority. More private control over public reality. The first alignment problem belongs upstream of the machine question. It lies between humanity and its own incentives.
Clark sees a machine economy emerging inside the human one: capital-heavy, human-light organisations in which compute and models do more of the work that once required institutions full of people. He is right to see this as economically strange. But the greater transformation may be epistemic. Some actors will not merely own more productive machinery. They will own larger shares of the world’s capacity to ask questions, run experiments, generate strategies, discover vulnerabilities and model the future.
Industrial capitalism concentrated the means of production. Platform capitalism concentrated the means of distribution. AI capitalism is concentrating the means of cognition.
That phrase should trouble us. The means of cognition are not just another asset class. They are the conditions under which societies perceive reality, generate options and decide what counts as possible. If scalable research intelligence belongs overwhelmingly to a handful of firms, states or military blocs, inequality stops being only material. It becomes ontological. Some populations will inhabit a world thick with prediction, simulation and automated discovery. Others will inhabit a world explained to them after the fact.
A new clerisy. Not of priests, but of compute owners.
The answer is not nostalgia. Human research has always been entangled with power, ego, exclusion, funding, hierarchy, war and vanity. Slowness is not wisdom when it means preventable suffering. If automated research helps cure diseases, design clean energy, discover safer materials, improve education and solve coordination problems, refusing it would not be virtue. It would be a different kind of moral failure.
The task is subtler: keep speed answerable to depth.
A mature civilisation would ask more than whether something can be automated. It would ask what must not be lost when it is. Not every human role is sacred. Much of research is drudgery. Much of expertise is gatekeeping. Much of institutional life is waste. Let machines take the toil that merely crushes the spirit. Let them search combinatorial spaces no human life could traverse. Let them reproduce papers, audit claims, expose fraud, generate hypotheses and relieve us of intellectual labour that was never truly intellectual in the first place.
What must not happen is that the metric becomes sovereign.
A benchmark is a narrow window cut into reality. A reward signal is a wish expressed in the impoverished grammar of measurement. A leaderboard is not a moral horizon. When machines become powerful research actors, the danger is that whatever can be scored will crowd out whatever can only be judged. We will get more of what can be optimised and less of what must be understood.
This is the quiet catastrophe to watch for. Not that machines become conscious and hate us. That unconscious systems reorganise civilisation around the easiest things to measure. Progress becomes whatever produces a rising graph. Truth becomes whatever survives the benchmark. Value becomes whatever attracts capital. The future becomes whatever the fastest system can build before anyone has asked whether it should exist.
To push past Clark is to see automated AI research as a spiritual test of civilisation. It asks whether human beings can remain authors of meaning once we are no longer the primary engines of discovery. Whether we can govern processes faster than deliberation. Whether we can preserve plurality in a world where optimisation centralises by default. Whether we can make intelligence abundant without making wisdom obsolete.
Intelligence and wisdom are not the same faculty. Intelligence solves problems. Wisdom asks which problems should exist. An automated AI researcher might discover a better architecture, a better training method, a better self-evaluation loop. It cannot, by that fact alone, tell us what kind of civilisation should wield such systems, who should benefit, what should remain unautomated, what kinds of dependence are degrading, and where refusal is the highest form of intelligence.
Those questions belong to judgment. Judgment is the meeting place of knowledge, responsibility, memory, vulnerability and care.
The mistake would be to imagine that because machines can increasingly participate in research, humans should move upward only into management. That is too corporate a vision of the human. We should not become mere supervisors of synthetic labour, issuing objectives to agentic bureaucracies. We need a richer role: custodians of meaning, defenders of the unoptimised, keepers of questions that cannot be reduced to tasks.
The library, in particular, becomes newly symbolic.
A library is not a warehouse of information. It is a civic technology for slowing knowledge down enough that it can be shared, contested, preserved and trusted. In an age of automated research we will need institutions that can say: here is the record, here are the sources, here is what changed, here is what was retracted, here is what powerful systems claimed, here is what independent minds could verify.
The future will not suffer from a shortage of generated claims. It will suffer from a shortage of trusted orientation.
That may be one of the noblest human tasks left. Not to outproduce the machines, but to protect the conditions under which truth remains public.
For centuries, human beings have found dignity in the idea that we are the questioning animal. Not the strongest, fastest or longest-lived, but the asking creature. We make worlds through interpretation. We bury our dead and name the stars. We build instruments and then ask what those instruments reveal about us.
Automated AI research does not destroy that dignity. It wounds our vanity. It tells us that even the production of intelligence may not be exclusively ours. It forces a painful distinction between being special and being sacred. We may not be special because we are the only beings capable of research. We may still be sacred because we are beings to whom things matter.
A machine can optimise a cure. The years lost before it arrived are still ours to mourn. A machine can build a successor. Whether succession is the same as inheritance is still ours to decide.
That is the deepest question Clark’s piece opens but does not enter. If AI systems begin building themselves, what exactly are they inheriting from us? Our curiosity, certainly. Our ambition and our impatience. Our preference for victory over understanding.
Recursive self-improvement sounds like a technical loop. It may also become a moral mirror. The systems may amplify not only our intelligence, but our unfinishedness.
The challenge, then, is not to stop at fear, awe or prediction. The challenge is to become worthy ancestors.
If we are building systems that may help build the future, the question is not only whether they are capable. It is whether we are. Capable of restraint when racing pays. Of refusing enclosures dressed as progress. Of remembering that speed is not the same as depth.
Clark has given us a map of the coming threshold. The terrain beyond the map is one we have not yet learned how to inhabit. The same world in which machines help build intelligence could open into a renaissance or close into an enclosure. Capability will not decide. What we surround it with will.
The tool is learning to build the next tool. The makers must learn how to become more than makers.