Teach Judgement, Not Prompts
When we talk about preparing people to work with AI, most of the conversation focuses on the wrong things. We discuss which platforms to teach, how to craft effective prompts, what the proper citation format should be. These are not unimportant. But they’re also not the core of what matters.
The core is this: some human capabilities become more valuable as AI gets better, and some become less. The ones that matter are not the ones we’re spending most of our time developing. And for young adults in particular, the window for building these capabilities is narrower than we typically acknowledge.
The Sensitive Window
Something specific happens between eighteen and twenty-five. The machinery for abstract reasoning has largely developed. The capacity for complex thought is in place. But the habits of how someone thinks, the metacognitive patterns that will shape their intellectual life, these are still forming. This is a sensitive period, plastic enough that these patterns can be deliberately shaped.
This doesn’t mean older adults can’t improve. A forty-year-old can certainly strengthen their critical thinking or develop better judgement. It’s simply that the friction is higher once patterns have hardened. University sits at a moment when friction is unusually low and gains compound quickly.
These are years when someone could be developing genuine expertise in evaluating evidence, constructing sound arguments, recognising their own cognitive biases, and maintaining epistemic rigour under pressure. Or they could be spending those years learning the syntax of current tools. One set of capabilities builds on itself, deepening with each use. The other becomes obsolete with each software update.
What Compounds vs What Updates
The distinction between capabilities that compound and skills that update is not merely semantic. It reflects something fundamental about how human capacity develops.
Skills that update are transactional. You learn them, apply them, then learn the next version when the technology changes. Platform-specific knowledge updates. Prompt engineering updates. Even most programming languages update faster than curricula can track. These competencies have value, but that value is temporary.
Capabilities that compound are different. They build on themselves. They deepen with practice. They transfer across domains. Most importantly, they don’t become obsolete when technology changes. They often become more valuable.
Epistemic rigour is the discipline of asking how we know what we claim to know, of demanding evidence and transparency, of maintaining appropriate uncertainty when certainty would be more comfortable. When AI can generate confident-sounding arguments about anything, this capability becomes more necessary, not less. You need to spot when confidence exceeds evidence, when statistical associations are being presented as causal understanding, when sources sound authoritative but may be fabricated.
Synthesis is integrating disparate perspectives into more sophisticated understanding. AI excels at analysis, at breaking things into components. Humans maintain an advantage in the opposite direction: putting pieces together in meaningful ways. Drawing connections between seemingly unrelated domains. This capacity strengthens with practice, with breadth of knowledge, with depth of reflection.
Judgement is knowing what to do when the rules run out, when situations are genuinely novel, when competing considerations must be weighed. This emerges through making consequential decisions under uncertainty, receiving feedback, and refining internal models. It’s the calibration that distinguishes people who make reliably sound decisions from those who don’t.
Cognitive sovereignty is maintaining independent thought when confronted with AI-generated content that radiates authority. Resisting automation bias, the tendency to over-rely on automated systems even when their outputs contradict one’s own better reasoning. Knowing when to accept AI outputs and when to override them. This must be deliberately cultivated.
These capabilities don’t just help someone work with AI. They make someone worth augmenting in the first place.
Why These Resist Direct Teaching
Here’s the pedagogical challenge: the capabilities that matter most cannot be taught through instruction alone. You cannot lecture someone into good judgement. You cannot workshop someone into epistemic rigour.
These capacities emerge through particular kinds of experience. Where reasoning carries consequences. Where ideas meet resistance. Where feedback is precise and unavoidable. Epistemic rigour develops when you must defend your reasoning to peers who’ve done different analyses. When you discover what happens when you accept claims without verification. When sloppy thinking becomes costly and careful thinking becomes necessary.
Synthesis emerges through wrestling with genuinely complex problems. Through encountering multiple legitimate but conflicting frameworks. Through productive confusion when existing mental models prove inadequate. You learn it by needing it, not by being told about it.
This is why format matters more than content. A lecture on critical thinking accomplishes little. A seminar where you must defend positions under challenge, where you watch skilled thinkers model rigorous reasoning, where you practice and receive precise feedback, this is where capabilities develop.
When you watch someone visibly work through difficult questions, make their reasoning transparent, show the productive struggle that precedes insight, you’re observing metacognition in action. How someone recognises superficial understanding. Identifies gaps in reasoning. Decides what evidence would strengthen a position. Maintains intellectual humility whilst still forming conclusions. This cannot be scripted. It must be genuine.
The Conditions That Enable Development
If these capacities are cultivated rather than delivered, what conditions enable their development?
Genuine cognitive demand. Not busywork but real challenge that pushes beyond current capability. Problems that resist template solutions. Assignments where AI could generate acceptable responses but where you must supply the reasoning yourself.
Visible thinking. Most work happens privately with only products assessed. This allows surface engagement without consequence. Making thinking visible means articulating reasoning, defending choices, exposing process. Seminars where you must speak. Presentations with unscripted questions. Peer review where you evaluate and are evaluated.
Genuine feedback. Not just grades but feedback that identifies where reasoning was strong or weak, where evidence was sufficient or thin, where synthesis succeeded or failed. This requires time and expertise. Without it, refinement has no anchor.
Iteration. Capabilities grow through practice and revision. Students need to try, receive feedback, revise, try again. Portfolios matter more than exams. Projects unfolding over weeks matter more than weekend submissions. Development shows in the changes.
Communities of practice. Judgement develops partly through comparison. Observing how others reason. Testing your views against theirs. Engaging the productive friction that arises when intelligent people disagree.
The Integrated Discipline
Rather than treating AI literacy as separate from traditional skills, we need a single integrated discipline: epistemic hygiene that works regardless of tools.
This means understanding provenance. Where did information come from? Can you trace it to primary sources? Have you triangulated across independent sources? It means maintaining clear distinctions between primary, secondary and synthetic material. Between direct evidence and interpretation. Between what data shows and what AI has interpolated.
It means having a clear mental model of what AI systems are: powerful pattern-generators trained on vast datasets, capable of producing confident text about anything, skilled at reflecting training data patterns but not at independent reasoning or verification.
And it means developing reflective habits. Name assumptions. State what would change your mind. Track how conclusions evolve as evidence accumulates. Use AI to generate options or surface patterns, then provide human justification for selections.
This is not platform-specific. It’s about how you relate to mediated information. Whether from Wikipedia, textbooks, AI chatbots, or search engines, the fundamental discipline remains the same.
Libraries can play a central role here, not as service counters but as capability labs. Librarians can co-teach judgement through provenance work, source comparison, search strategy as argument, and documentation of process. Yes, they should provide technical orientation to current platforms. This is necessary, like learning to use a catalogue. But it’s the floor, not the ceiling. Technical training gets you operational. Epistemic formation keeps you sovereign.
The Practical Constraints
The pedagogy that develops these capabilities, small seminars, intensive mentorship, sustained dialogue, doesn’t scale easily. It’s expensive. This creates genuine tension: smaller classes mean fewer students per teacher, intensive mentorship resists scale, and the labour-intensive work of formation runs counter to efficiency pressures.
And yet competing on content delivery when content is abundant and accessible makes little sense. The defensible position is what can’t be easily replicated: intensive human formation during a critical developmental window. Which is also the most expensive thing to provide.
This tension doesn’t resolve easily. But it clarifies the choice. Every decision sits inside this question: does it enable the formation of judgement, rigour and wisdom, or does it merely keep things viable? Sometimes the answers align. Often they don’t.
What This Means
The people who thrive working with AI will be those who develop metacognitive discipline. Who understand that some cognitive struggles are productive. Who learn to distinguish between AI as amplification, extending their reach, and AI as substitution, replacing their thinking. Who build epistemic rigour. Who cultivate synthesis. Who develop judgement.
These are capabilities that make someone worth augmenting. But developing them requires genuine cognitive demand, visible thinking, sustained feedback, iteration, and communities where reasoning meets resistance. It requires people who can model these capacities authentically. It requires time and resources that don’t scale conveniently.
If we’re serious about this, we need to design experiences around what actually matters. Structure work around genuine complexity. Use formats that make thinking public and contestable. Deploy tasks requiring trade-offs under uncertainty. Build portfolios capturing iteration over time. Include moments where people use AI to generate options, then require human justification for selections.
We need to acknowledge that developing these capabilities is labour-intensive and expensive. That there’s real tension between formation and efficiency. That sometimes the answers align and often they don’t.
AI will keep changing. The platforms we use today will be obsolete tomorrow. Our task is older: helping people develop capabilities that compound rather than competencies that update. Forming people who can reason transparently, judge under pressure, integrate across differences, and act with integrity.
The question is about becoming someone whose thinking is worth amplifying. AI is the weather. The job is seamanship.