A provocation to end this week.
The game is over. Not ending, not changing, but definitively over.
While we've been debating detection and prevention, arguing about secure environments and academic integrity, our students have already crossed the Rubicon.
They're not waiting for our permission to use AI in their learning and assessment journey. They're already there, and the only question that matters now is whether we'll meet them in that reality or continue our elaborate performance of denial.
Accepting AI in assessment isn't about capitulation or lowering standards. It's about recognising that we're attempting to measure human capability using metrics designed for a world that no longer exists. We're like cartographers insisting on flat maps while our students navigate in three dimensions. The fundamental error isn't in our execution; it's in our entire conceptual framework of what assessment means and what it's meant to measure.
We’re still clinging to human exceptionalism, desperately cataloguing what makes us "special," what AI can "never" do, as if building higher walls around our uniqueness will somehow preserve education as we know it. But this defensive posture blinds us to the transformation already underway. We need to step off that pedestal and see ourselves as guides who shape and learn via AI. When we make that shift, the technology stops looking like a threat and starts becoming a partner that helps us deepen curiosity, connection and real learning.
When I wrote about my year as a test subject in cognitive amplification, I discovered something that should terrify and exhilarate educators in equal measure. AI doesn't just change how fast we can work or how much we can produce. It fundamentally alters the nature of thought itself. Ideas don't just flow faster; they flow differently. Connections emerge not through linear progression but through a kind of intellectual jazz, where human intention and machine capability create improvisations neither could achieve alone.
This is the reality our students inhabit every day. They're not using AI as a crutch or a shortcut; the sophisticated ones are using it as an intellectual exoskeleton that amplifies their cognitive reach. They're having conversations with AI that push their thinking into new territories, using it to explore counterfactuals, test hypotheses and generate creative possibilities they couldn't access alone. And then we ask them to pretend none of this exists when we assess their "learning."
The profound absurdity of this position becomes clear when you consider what we're actually trying to measure. Traditional assessment assumes that knowledge and capability reside within the individual, that they can be extracted and examined in isolation. But knowledge has always been distributed, networked, social. We've always thought with tools, from paper and pencil to calculators and search engines. AI simply makes this reality impossible to ignore.
What does accepting AI in assessment actually mean? It means abandoning the fiction that we can or should measure isolated human cognition. It means recognising that the unit of intelligence is no longer the individual but the human-AI system. It means assessing not what students know but how they think, not what they can produce but how they produce it, not their memorisation but their orchestration.
This shift towards what some educators are calling a "posthuman framing" of education isn't abstract philosophy; it's practical necessity. We're preparing students for the world we're already entering, not some distant speculative future. Every attempt to preserve traditional models of human-only assessment is like teaching navigation by stars to students who have GPS. It's not just outdated; it's actively harmful to their development.
The implications cascade through every aspect of education. Curriculum design must shift from content delivery to capability development. We need to teach students not facts but frameworks, not answers but approaches, not information but discernment. The skills that matter in an AI-integrated world are precisely those that emerge from sophisticated interaction with AI: critical filtering, creative direction, ethical reasoning and what I call cognitive sovereignty, the ability to maintain autonomous thought while leveraging artificial amplification.
Consider what this means for different disciplines. In literature, it's not whether students can produce literary analysis but whether they can engage AI in sophisticated dialogue about texts, pushing beyond surface readings while maintaining their own interpretive voice. In science, it's not whether they can recall formulas but whether they can use AI to explore hypotheses while understanding the limitations of AI's grasp of empirical method. In history, it's not whether they can memorise dates but whether they can guide AI through complex historical narratives while recognising and correcting its tendency toward oversimplification.
The assessment methods that emerge from this acceptance look nothing like traditional exams or essays. They're dynamic, dialogical and transparent about AI integration. Imagine portfolio assessments where students document their entire journey with AI, annotating decision points, reflecting on moments of acceptance and rejection, articulating their evolving understanding. Picture presentations where students demonstrate their ability to conduct sophisticated AI orchestration in real time, explaining their choices and defending their critical interventions.
This requires us to move beyond seeing AI as something to be managed or contained within existing categories. Generative AI deserves its own category in our educational frameworks, recognised as a partner in the learning process rather than a tool to be restricted or a threat to be neutralised. Yes, the technology will keep evolving, making any fixed policy obsolete almost immediately. That's precisely why we need frameworks based on principles of partnership rather than rules of restriction.
But here's where it gets truly radical. Accepting AI in assessment means accepting that the outputs will be extraordinary. When students can leverage AI effectively, they can produce work that would have been impossible just a few years ago. An undergraduate could generate insights that once required postgraduate study. A secondary student could create artistic works of professional quality. This isn't grade inflation; it's capability amplification. And we need assessment frameworks that can handle this new reality.
The rubrics for this world evaluate not the polish of the final product but the sophistication of the process. They examine strategic thinking, critical discernment, creative vision and ethical reasoning. They reward students who can push AI beyond its default responses, who can identify and correct biases, who can maintain their authentic voice while leveraging artificial capabilities. They penalise not AI use but AI abdication, where students accept outputs uncritically or fail to add uniquely human value.
This transformation also demands new forms of academic integrity. The question isn't "Did you use AI?" but "Did you use AI thoughtfully, critically and creatively?" Plagiarism evolves from copying others' work to failing to contribute your own intellectual value. Cheating becomes not the use of tools but the absence of genuine engagement with them.
The resistance to this shift reveals deep anxieties about human value in an AI age. If students can produce exceptional work with AI, what's the role of education? If knowledge can be accessed instantly, why develop expertise? These fears miss the fundamental point.
AI doesn't diminish the need for human intelligence; it raises the bar for what human intelligence means.
In my own journey with cognitive amplification, I discovered that effective AI use demands more, not less, intellectual rigour. You need a stronger sense of purpose to direct AI effectively. You need deeper critical faculties to filter its outputs. You need more robust creative vision to push beyond its suggestions. You need more sophisticated ethical reasoning to navigate its implications. AI doesn't replace human thought; it reveals its importance.
The students who thrive in this new assessment paradigm won't be those who use AI most but those who use it best. They'll be the ones who understand both its power and its limits, who can exploit its strengths while compensating for its weaknesses, who can maintain their intellectual autonomy while leveraging artificial amplification. They'll be what researchers call "Centaur thinkers," seamlessly blending human and artificial intelligence to achieve outcomes neither could accomplish alone.
The institutional challenge is profound. Accepting AI in assessment requires not just new policies but new philosophies. It demands educators who understand AI deeply enough to assess its sophisticated use. It requires infrastructure that can capture and evaluate complex human-AI interactions. It needs courage to explain to stakeholders why a student's AI-assisted work represents genuine learning rather than academic dishonesty.
But the alternative is educational irrelevance. In maintaining assessment systems that pretend AI doesn't exist, we prepare students for a world that has already disappeared. We teach them to hide their most powerful tools rather than master them. We measure capabilities that matter less while ignoring those that matter most. We cling to industrial models of education while our students inhabit an intelligence revolution.
The practitioners who see this clearly are already moving beyond defensive positions; innovative educators are exploring what partnership looks like in practice. They're asking not how to preserve traditional education against AI encroachment but how education transforms when we accept AI as a collaborator rather than a competitor. This isn't capitulation; it's evolution.
The future of assessment isn't about accommodating AI; it's about embracing the new forms of human capability it enables. It's about recognising that the partnership between human and artificial intelligence represents not a threat to education but its next evolutionary stage. It's about preparing students not for a world where they compete with AI but for one where they conduct it, direct it and transcend it.
This isn't some distant future we're preparing for. Our students are already living in this partnered reality. They're not waiting for permission or policy; they're forging ahead, using AI to amplify their thinking and creating in ways that make our traditional assessments look quaint. Every moment spent defending human exceptionalism is a moment not spent helping students develop the partnership skills they desperately need.
The cognitive amplification I've experienced and studied isn't just a technological phenomenon; it's a human one. It reveals not what machines can do but what humans can become when they master new forms of intellectual partnership. That's what we should be assessing: not isolated human capability but amplified human potential. We can continue our futile attempts to assess students as if AI doesn't exist, watching relevance slip away with each passing term. Or we can embrace the extraordinary possibility before us: to reimagine assessment for an age where human intelligence isn't measured in isolation but in sophisticated partnership with artificial capabilities.
Our students have already made their choice. They're using AI not because they're lazy or dishonest but because they recognise its power to amplify their thinking and creating. The question isn't whether we'll accept this reality but whether we'll help them master it. The future of education depends on our answer.
The gauntlet has been thrown. The time for defensive postures and human exceptionalism has passed. Our students are already dancing with AI, creating extraordinary things we're too busy policing to notice. It's time we stopped trying to turn back the tide and started teaching them to surf these waves with skill, wisdom and purpose. Because in the end, that's what education has always been about: preparing students not for the world that was, but for the world that is and will be.
I have been shouting this to anyone who will listen for the last year! Starting with Barthes and the text as a “tissue of citations”, through launching Socratic dialogue in my school, through ongoing advocacy of multimodal, portfolio-style assessments which include AI process tracking, and now thinking about how pupils need to become ‘knowledge archaeologists’ in a sense too. Next step: developing our approach to ‘AI literacy across the curriculum’ for the next academic year.
Right there with you, Carlo. This is very much in line with my thinking that the "bar for a pass" has to rise, and potentially go sideways, not just because we are forced to for the sake of integrity but because it's a Good Idea. AI should not be about making us lazy, but about being able to go further, faster and in new directions. That requires a shift in the expected learning outcomes as well as the standards and methods used to measure them.