It's a peculiar form of cognitive dissonance - watching institutions grapple with AI as if it were still the start of 2023. While we debate the merits of prompt libraries, basic use cases, detection, scales of use, and chatbot support models, and craft policies around "AI-assisted work," a far more profound transformation is unfolding. The real crisis isn't in AI capabilities or even adoption rates - it's in our collective failure to recognise how fundamentally the landscape has already shifted, and how much it will continue to shift while we move at a tepid rate.
In his December 14, 2024 analysis "Are educators falling behind the AI curve?", Andrew Maynard presents a compelling framework of three interrelated curves - capability, utilisation, and perception - that helps explain our current predicament. The capability curve tracks the evolution of what AI can do, from basic pattern matching to sophisticated reasoning. The utilisation curve follows how these capabilities are actually being implemented and used. The perception curve - perhaps most crucially - reveals how we understand and respond to these changes.
These curves don't just exist in parallel; they interact in ways that create widening gaps between reality and institutional response. While AI capabilities may be showing signs of plateauing in some areas (though I'd argue this is more a branching than a slowing), the real story lies in the accelerating utilisation curve and the dangerously lagging perception curve. As I've watched these dynamics play out, I've come to realise they reveal something even more troubling than their surface implications suggest.
The Capability Curve: Beyond the Plateau
When Google's Sundar Pichai suggests the "low-hanging fruit is gone," it's tempting to see this as evidence of AI capabilities plateauing. However, I struggle to see a slowdown given what we have seen in the last month. To me, we are witnessing a dramatic branching of capabilities into new domains and dimensions:
Reasoning and Analysis
OpenAI's o3 demonstrates reasoning that approaches human-level analysis in specific domains - a marked step up from previous models.
Gemini's "thinking model" preview shows that OpenAI is not alone in building reasoning models.
Claude's latest Sonnet 3.5 update, whilst perhaps not a dramatic leap, still conveys a clear step up in nuanced, context-aware dialogue from earlier models.
Multimodal Integration
Text-to-video generation has moved from proof-of-concept to potential practical application
Voice interfaces achieve near-natural interaction, making AI assistance ambient
Image generation and manipulation are reaching professional quality
Cross-modal understanding enables seamless integration of different information types
This isn't a plateau - it's an explosion of capability across multiple dimensions. AI is not so quietly transforming into something far more profound than we've prepared for.
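To make "cross-modal understanding" concrete: a single API call can now mix text and images in one request. Below is a minimal sketch using the OpenAI Python SDK with the gpt-4o model as an illustrative example; the image URL and prompt are placeholders, and any current multimodal model would serve equally well.

```python
# Illustrative sketch: one request mixing text and an image.
# Assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the
# environment; the image URL and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What trend does this chart show, and what might explain it?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)

# The model reasons over the text and the image together in a single pass.
print(response.choices[0].message.content)
```

The point isn't the specific vendor: it's that "text AI" and "image AI" are no longer separate categories a policy can neatly distinguish.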
The Utilisation Gap: A Two-Way Evolution
The second curve reveals an even more complex story than initially suggested. We're seeing rapid evolution in both how providers are integrating AI and how users are discovering novel applications:
Provider Integration
Apple embedding AI throughout its ecosystem in increasingly invisible ways
Microsoft's integration of Copilot across its entire product suite
Google's ambient AI features becoming ubiquitous across services
New platforms emerging that blend multiple AI capabilities seamlessly
User Innovation
Developing sophisticated AI-enhanced workflows
Combining multiple AI tools for complex analysis
Pioneering new approaches to knowledge creation
Discovering novel ways to augment learning processes
Early adopters pushing boundaries of what's possible
Emerging Patterns
Seamless switching between modalities (voice, text, visual)
Sophisticated multimodal content creation
AI-augmented research and analysis workflows
Collaborative human-AI processes
Real-time assistance and feedback loops
This isn't just about using tools more frequently - it's about fundamentally new approaches to working and thinking with AI.
The Perception Prison
We're trapped in an outdated mental model of AI - one that sees it as a mere tool for text generation, plagued by hallucinations, struggling with basic reasoning, and ‘easily’ contained within institutional policies which just need ‘tweaking’. This perception persists even as AI has evolved into something far more sophisticated and pervasive:
Voice interfaces have achieved near-natural fluency, with features like OpenAI's Advanced Voice Mode making conversation feel eerily natural
Multimodal integration allows seamless switching between text, image, and code, creating fluid workflows that defy simple categorisation (Gemini 2.0 signposting this development).
Reasoning capabilities approach human-level analysis in specific domains, with models like o3 demonstrating sophisticated analytical abilities
AI is becoming ambient, woven into the fabric of digital life through tools like Apple's AI integration.
I understand the institutional imperative for frameworks and guidelines - education systems run on structure and clear boundaries. But our current approaches, with their neat categorisations and linear scales of AI use, are becoming dangerously obsolete. How do we meaningfully restrict AI use when models can watch our screens and guide us through complex tasks in real-time? What does "AI-assisted" even mean when the technology is ambient in our digital environments?
Consider Anthropic's recent demonstration of Claude controlling computer interfaces - not just responding to prompts, but actively navigating systems and executing complex tasks. Or look at Google's Gemini 2.0, whose real-time visual analysis capabilities allow it to watch and understand screen activity, providing immediate guidance and suggestions. These aren't just incremental advances - they fundamentally break our traditional frameworks around AI use and control. Yet our institutional responses remain locked in an era of simple prompt and response, failing to grasp that we're no longer dealing with just a tool, but with the emergence of a new cognitive environment.
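For readers who want to see what "controlling computer interfaces" looks like in practice, here is a minimal sketch using Anthropic's computer-use beta as it stood in late 2024. The task and display dimensions are illustrative placeholders; the essential point is that the model returns actions rather than just text.

```python
# Minimal sketch of Anthropic's computer-use beta (late 2024).
# Assumes the anthropic Python SDK and an ANTHROPIC_API_KEY in the
# environment; the task and display size are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    betas=["computer-use-2024-10-22"],
    tools=[{
        "type": "computer_20241022",
        "name": "computer",
        "display_width_px": 1280,
        "display_height_px": 800,
    }],
    messages=[{"role": "user",
               "content": "Open the timetable spreadsheet and find Friday's free rooms."}],
)

# Instead of plain text, the reply contains tool_use blocks describing
# actions (take a screenshot, click, type). A local agent loop would
# execute each action and send the outcome back as a tool_result.
for block in response.content:
    print(block.type)
```

Notice the shape of the interaction: the model proposes screenshots, clicks, and keystrokes, and local software carries them out in a loop. Once agents like this are embedded in everyday applications, "restricting AI use" becomes very hard to even define.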
The Hidden Integration
While institutions focus on visible AI tools like ChatGPT, the real transformation is happening in the shadows:
AI capabilities are being silently integrated into everyday applications
Students are developing sophisticated workflows combining multiple AI tools
Knowledge creation is becoming inherently collaborative between human and machine
Traditional boundaries between thinking, creating, and computing are dissolving
This isn't just about efficiency gains or automation - it's about the emergence of new forms of intelligence and creativity that our current educational models struggle to comprehend, let alone accommodate.
The Institutional Paralysis
The gap between perception and reality is creating a crisis of relevance. While we craft policies around AI use, many students are already silently:
Engaging in sophisticated multimodal learning experiences
Developing complex AI-augmented methodologies
Creating new forms of knowledge synthesis and expression
Building skills for an AI-augmented professional future
Our response? Too often it's to double down on outdated modes of assessment and control, trying to force new realities into old frameworks.
Beyond the Binary
The problem goes deeper than simple resistance to change. Our very conceptual frameworks for understanding AI in education no longer fit reality:
We think in terms of "use" versus "non-use" when AI is becoming ambient
We focus on text generation when AI is increasingly multimodal
We try to control rather than cultivate AI integration
We treat AI as a threat rather than a learning opportunity
The 2025 Imperative
Next year (aka tomorrow) represents a critical juncture. The gaps between capability, utilisation, and perception are widening at an accelerating rate. If we don't act decisively to bridge these gaps, we risk:
Losing institutional credibility
Creating policies that are obsolete before implementation
Missing opportunities for meaningful educational transformation
Preparing students for a world that no longer exists
Breaking Free from the Perception Prison
Moving forward requires more than just updated policies or new training programmes. We need:
Radical Honesty
Acknowledge our outdated perceptions
Recognise the extent of current AI integration
Accept that control is impossible but guidance is essential
Conceptual Evolution
Move beyond binary thinking about AI use
Embrace the reality of ambient AI
Understand AI as an environment, not just a tool
Focus on wisdom and judgement over mere capability
Institutional Transformation
Develop living policies that evolve with technology
Create spaces for experimentation and innovation
Build assessment approaches that reflect reality
Foster genuine AI literacy across all levels
The Path Forward
The solution isn't to catch up to current AI capabilities - that's a losing game. Instead, we need to:
Embrace Uncertainty
Accept that we can't predict or control AI evolution
Build frameworks that adapt to change
Focus on principles over prescriptions
Foster True Innovation
Create safe spaces for experimentation
Learn from student innovations
Build bridges between formal and informal learning
Transform Assessment
Move beyond the detection/control mindsets
Create authentic evaluation methods
Focus on process and understanding
Build assessment literacy for an AI age
The Real Challenge
The crisis we face isn't technological - it's perceptual. Our inability to see AI for what it has become is creating a chasm between educational institutions and the reality of learning in 2025 (and beyond). This isn't just about falling behind a curve; it's about rethinking our entire approach to education.
As we move into 2025, the question isn't whether to embrace AI - that ship has sailed. The real question is whether we can transform our perceptions and our institutions quickly enough to remain relevant in a world where AI isn't just a tool, but a fundamental part of the cognitive landscape.
The future belongs not to those who master current AI tools, but to those who can evolve alongside rapidly changing technology while maintaining focus on the uniquely human elements of learning and understanding. This requires more than adaptation - it demands transformation.
Are we ready to break free from our perception prison? The answer to that question may determine the future of education itself.
Fully agree. The latency is largely due to the way we have to assess & examine, however. While the grades that students attain are very high stakes and impact their life chances, educators do not have full freedom to be able to experiment. The exams are built to fit a 1940s paradigm, and it’s beyond frustrating that we have to teach students to jump through an extremely outdated hoop. Yes, AI can help in some ways, but it’s new to all of us. We are not engineers and the pressure from Silicon Valley to keep up is intense, as well as profit-driven. The students need their grades. Their parents’ taxes or wages are paying for that, and we do them a disservice if we jeopardise that. That said, many of us certainly are using AI to teach with & about and we are introducing some elements to students as age-appropriate.
Nevertheless, I 100% agree that the entire thing needs to change. It has needed to change for a very long time.
We desperately need the government to have the courage to move things on, right now!