The Half-Life of Higher Ground
Imagine climbing toward a summit and realising, halfway up, that the peak has relocated. Not eroded. Not collapsed. Simply moved. You check your map. The map is correct. The mountain is wrong.
This is what happens when institutions built for stability encounter conditions designed for flux.
The conversation about AI and higher education has settled into a comfortable groove. Degrees need updating. Curricula need AI literacy. Graduates need augmentation. The professions are shifting to higher cognitive ground, and universities must help students climb.
The framing is reasonable. It is also incomplete.
Because what if the problem is not that we are preparing people for the wrong destination? What if the problem is that we are still assuming there is a destination at all?

When we talk about professions moving to “higher cognitive and emotional ground,” we imagine a stable plateau. Doctors still diagnose, but now they interpret AI pattern recognition. Engineers still design, but now they orchestrate simulations. The titles persist. The substance transforms. Climb the new mountain, plant your flag.
But a closer look tells a different story. The “higher ground” curriculum becomes automatable territory by the time students graduate. A skill that represents human advantage one year may represent baseline machine capability a few years later. The half-life of professional differentiation is shrinking faster than we realise.
By the time a programme is redesigned, approved, accredited, delivered, and assessed, the mountain it was climbing toward has already moved somewhere else entirely.
This is not a timing problem to be solved with faster processes. It is a category error. We are using navigation tools built for stable terrain in a landscape that never stops shifting.
The friction of entry-level work is the point.
Junior lawyers read everything because there was no alternative. In the process, they developed intuition about what mattered and what did not. Graduate engineers ran simulations by hand and learned what results should look like before the numbers appeared. Medical residents sat with uncertainty long enough to recognise the texture of a genuine emergency.
The struggle was not a bug in the system. It was the system. The inefficiency built capability.
If AI handles entry-level tasks, the question is not just “where do graduates start?” The question is “where does judgement develop?”
The skills everyone agrees are essential in the AI era (systems thinking, ethical reasoning, the capacity to sit with ambiguity) develop through practice, through productive struggle, through the slow accumulation of experience that comes from wrestling with problems that resist easy resolution.
The capabilities most needed in the AI era are precisely those most at risk of atrophy through AI use. We need critical thinking more than ever, but AI makes it easier to avoid thinking at all. We need synthesis, but AI synthesises for us. We need epistemic humility, but AI’s confident outputs discourage the slow work of verification.
You cannot learn to hold ambiguity by outsourcing it.
The language of “AI superpowers” is seductive. It positions AI as enhancement rather than threat. It suggests agency. You wield the power. The power does not wield you.
But the metaphor obscures a distinction that determines everything.
There is augmentation: staying awake to the transformation, using AI’s speed and breadth while keeping sovereignty over what you believe and why. You direct the exploration. You verify the outputs. You own the outcome. And there is abdication: the drift from directing to accepting, from questioning to trusting, from integration to consumption.
Someone who has outsourced their thinking to AI can look extraordinarily productive while their actual capability atrophies. The cognitive debt compounds in silence. They do not notice until they need to think without the tool.
Education that teaches AI proficiency without teaching cognitive sovereignty is not preparing graduates for shifted professions. It is preparing them for dependency dressed as capability.
And here lies the darkest implication. If augmentation requires sophisticated literacy, protected struggle, and high-touch human mentorship, then cognitive sovereignty becomes a luxury good. The two-tier future is not forming around who has access to the software, but around who has the privilege of the friction.
Those with the resources for intensive human development will learn to direct the machine. Everyone else will be trained to accept its outputs. We risk creating a class system where the elite receive an education in sovereignty, while the majority receive an education in efficiency. The digital divide is evolving into a cognitive divide, and the “higher ground” is being reserved for those who can afford the equipment to climb.
If AI does most of what we currently call education better than we do, if it transmits information, provides personalised tutoring, generates practice problems, gives instant feedback, then what is education actually for?
Education’s defensible purpose becomes helping humans figure out what questions matter, what is worth doing with the knowledge machines provide, and how to remain intelligible to themselves in a world where thinking itself becomes outsourced.
This is not a curriculum update. It is a transformation of mission.
And it requires three things the current model does not provide.
First, the explicit cultivation of cognitive sovereignty. Not AI literacy as tool proficiency. AI literacy as the capacity to direct rather than be directed, to verify rather than accept, to maintain ownership of your own understanding while leveraging powerful augmentation.
Second, deliberate friction. The preservation of spaces for productive struggle even when, especially when, AI could make the work effortless. The recognition that some inefficiencies are essential, that the capacity to sit with uncertainty must be exercised or it atrophies.
Third, assessment that evaluates thinking, not outputs. Can the student explain not just what they concluded but why? Can they identify where they relied on AI and whether that reliance was justified? Can they defend their reasoning under questioning? The person becomes the evidence. The polished product becomes irrelevant.
We can keep updating curricula, adding AI modules, redesigning assessments, and calling it transformation. We can produce another generation of graduates who are proficient with tools that will be obsolete before their student loans are repaid.
Or we can ask a harder question: What remains worth teaching when the teachable content changes faster than teaching itself can adapt?
The answer, I suspect, has something to do with the capacity for thought rather than the content of it. Something to do with learning how to learn when what you learned yesterday may be wrong tomorrow. Something to do with maintaining your own mind in a world that keeps offering to think for you.
That is not preparation for shifted professions. That is preparation for a condition of permanent shift.
The mountain keeps moving. Perhaps the skill is not in reaching the summit but in learning to walk on ground that never stops changing.
The institutions that understand this will transform. The ones that do not will keep producing graduates trained for destinations that no longer exist, wondering why the map keeps being wrong.



This reads less like an argument about AI and more like a critique of destination-based models in permanently shifting systems.
The tension you describe is structural: stability assumptions collapsing under continuous change.
Carlo, as an educator, I agree. Outsourcing our thinking to AI is a route to cognitive atrophy, and this lands hardest on those at the beginning of their learning journey, i.e., students. I believe the solution is to shift focus from using AI to building AI, specifically building AI as a thinking partner that elevates and expands knowledge and capability, as opposed to treating AI as a tool that does the thinking and the work for us. This is what I'm doing at Superesque, and I would hope that amounts to more than just seducing people with fancy marketing. It upsets me that my attempts to engage other educators in conversation about this work have so far achieved very little, almost nothing. I'm developing the impression that while Australian academics like to talk about the challenges of AI in higher education, their commitment to engaging with and learning from practitioners on the ground is minimal, which is sad and somewhat ironic, given that the role of higher education is to define the leading edge of thinking and professional practice.
Apologies for the gripe. I'm an appreciative reader of your work. I agree that we don't create superpowers by teaching people to offload their thinking to AI chatbots. That's pure kryptonite.