As I reflect on where we are in May 2025, an unmistakable pattern emerges: the transition in how humans interact with artificial intelligence is already underway. The days of simple text-in, text-out AI are rapidly fading, replaced by multimodal, agentic systems that can perceive across multiple modalities, reason autonomously and execute complex tasks with minimal supervision.
The Current State of AI Literacy in 2025
The gap between AI capabilities and AI literacy has widened substantially. Next-generation models such as OpenAI's o3 and Google's Gemini 2.5 can understand and generate content across text, images and audio (with video generation available separately) while autonomously planning and executing complex tasks. Yet our collective understanding of these systems lags significantly behind.
OpenAI's o3 can autonomously search the web, run Python code, analyse visual inputs and generate images, chaining multiple tool calls to solve complex problems without continuous human guidance. Similarly, Google's Gemini 2.5 offers multimodal output with native image generation and audio capabilities, along with integrated tools.
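The "chaining multiple tool calls" idea can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of the pattern, not any vendor's API: the tool names, the plan format and the scratchpad are all invented for the example, and a real agent would let the model choose the next tool from intermediate results rather than follow a fixed plan.

```python
# Toy sketch of an agentic tool-call chain. The tool names, the plan
# format and the scratchpad are invented for illustration; they are
# not the actual o3 or Gemini APIs.
TOOLS = {
    "search_web": lambda query: f"results for '{query}'",
    "run_python": lambda code: str(eval(code)),  # toy sandbox only
}

def agent_loop(goal, plan):
    """Execute a sequence of tool calls, appending each result to a
    shared scratchpad that a model could reason over between steps."""
    scratchpad = [f"goal: {goal}"]
    for step in plan:
        tool = TOOLS[step["tool"]]
        result = tool(step["input"])
        scratchpad.append(f"{step['tool']} -> {result}")
    return scratchpad

trace = agent_loop(
    "summarise AI adoption figures",
    [
        {"tool": "search_web", "input": "agentic AI adoption 2025"},
        {"tool": "run_python", "input": "40 + 10"},
    ],
)
print(trace)
```

The point of the sketch is the loop itself: each tool result feeds back into a shared context, which is what allows a single request to trigger a chain of searches, computations and generations with no human input between steps.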
The foundation of current AI literacy, focused primarily on crafting effective prompts and evaluating text responses, is increasingly inadequate for these sophisticated systems. Many educational institutions "only recently began grappling with text-based generative AI" and now face a reality where AI can "write an essay and also fetch real data, generate graphs, or produce images to support it." The fundamental question has evolved from "Can students trust a chatbot's answer?" to "Did the AI retrieve that statistic from a credible source? Who drew that diagram it provided? Was the reasoning process sound?"
The Widening Chasm
What makes this evolution particularly challenging is the hidden complexity of these systems. Where previous AI tools were largely confined to their training data with clear knowledge cutoffs, today's agentic models can access external information or tools the user didn't explicitly provide, creating serious transparency challenges. Users need to develop a mental model of the AI's possible actions to understand what the system might be doing behind the scenes.
While public discussion and basic AI literacy initiatives often remain focused on simple generative AI tools with text input and output, the technological frontier has dramatically shifted. The most advanced AI models now exhibit sophisticated multimodal and agentic characteristics, enabling them to process information across diverse formats, plan complex tasks, interact with external tools and data sources, execute actions, and learn from feedback with minimal human supervision.
This divergence between reality and perception creates a dangerous vulnerability. Without proper understanding, users may place unwarranted trust in these systems or fail to harness their potential effectively. The sophisticated combination of capabilities in these agentic systems (advanced reasoning, autonomous planning and dynamic tool use) represents more than an incremental improvement. It marks a significant architectural shift towards more general problem-solving abilities.
Transforming Information Workflows
Perhaps the most immediate impact of these technologies is on information workflows. The advent of multimodal, agentic AI systems is fundamentally reshaping traditional workflows associated with information gathering, analysis and reporting. Previously reliant on manual keyword searches or simple queries directed at generative AI, information gathering can now leverage the autonomous capabilities of AI agents to undertake complex, multi-source exploration tasks with minimal human guidance.
This represents a shift "from AI as an assistant to AI as a delegate." Where previous AI tools primarily assisted humans with specific sub-tasks within a larger workflow, the new generation of agentic AI allows for the delegation of entire complex processes.
However, this increased capability comes with heightened risks. The very ability of agents to autonomously gather and synthesize information from vast, unvetted sources like the internet, and across multiple modalities, significantly amplifies the potential for generating and disseminating plausible but incorrect, biased or misleading information at scale.
Redefining AI Literacy
The emergence of sophisticated multimodal and agentic AI capabilities renders baseline AI literacy, focused on simple prompting and text evaluation, demonstrably insufficient. Interacting effectively and responsibly with systems that can autonomously plan, act, use tools and process information across diverse formats demands a more advanced and critical form of literacy.
This expanded AI literacy encompasses several key areas:
Understanding Autonomy and Agency: Users need a conceptual grasp of how AI agents function autonomously, including how they formulate plans, make independent decisions and execute sequences of actions without continuous human input.
Evaluating Agentic Processes (Not Just Outputs): Critical assessment must extend beyond the final product generated by an agent to scrutinise the agent's process: Was the plan logical? Were the chosen tools appropriate and reliable? Were the consulted sources credible? However, it is debatable how fully we can ever examine this process, given research from Anthropic suggesting that models' reported reasoning is not transparent.
Critical Evaluation of Multimodal Outputs: As agents integrate and generate information across text, images, audio and video, users must develop the capacity to critically analyse these complex, multimodal outputs, checking for consistency across modalities and identifying potential biases.
Assessing Tool Use Reliability and Safety: Recognising that agents interact with external tools, APIs and the web introduces new vectors for unreliability and security risks. Literacy must include the ability to assess these risks and evaluate the likely reliability of information derived from those tools.
Understanding Limitations and Failure Modes: Advanced AI literacy requires a realistic understanding of the inherent limitations of current agentic systems, including their susceptibility to generating confident-sounding falsehoods, potential biases, probabilistic nature, operational opacity and potential unreliability in novel or complex situations.
Ethical Awareness (Amplified): The autonomy of agents elevates the importance of ethical considerations. Literacy must encompass a deeper understanding of issues like algorithmic bias and fairness, transparency and explainability, accountability, data privacy, intellectual property and broader societal impacts.
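One concrete way to support evaluating processes rather than just outputs, as the points above demand, is to make an agent's tool use auditable by logging every invocation. A minimal sketch, assuming a simple callable tool interface; the names and log format here are invented for illustration:

```python
def traced(tool_name, fn, log):
    """Wrap a tool so every invocation is recorded: one simple way
    to make an agent's process, not just its output, inspectable."""
    def wrapper(arg):
        result = fn(arg)
        log.append({"tool": tool_name, "input": arg, "output": result})
        return result
    return wrapper

audit_log = []
# A stand-in tool; a real agent would wrap its search, code and
# image tools the same way.
search = traced("search_web", lambda q: f"3 results for '{q}'", audit_log)
search("Gemini 2.5 multimodal output")

# The log lets a reviewer ask the process questions listed above:
# which tools were chosen, with what inputs, and what came back.
for entry in audit_log:
    print(entry["tool"], "->", entry["output"])
```

Such a trace does not solve the transparency problem (the model's internal reasoning remains opaque), but it at least exposes the external actions an agent took, which is where evaluation of sources and tool choices has to start.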
The Education Imperative
The transition to agentic AI creates an urgent need for updated educational approaches. AI literacy must become a core competency, taught across subjects. Just as basic computer literacy and internet literacy became essential in past decades, the ability to understand and work alongside AI is now critical for students' futures. The goal is an empowered user base that can leverage AI's strengths while remaining vigilant about its weaknesses.
This requires a multifaceted approach:
Foundational Knowledge of AI: People should understand the basics of how these systems work and what their capabilities/limitations are, including knowing that models are trained on vast data with inherent biases, use probabilistic reasoning, and interpret multimodal inputs in specific ways with potential failure modes.
Skillful Use & Creation with AI: Teaching strategies for effective prompting and tool use, enabling students to pose questions effectively, refine results through feedback, and assemble AI-driven workflows.
Critical Evaluation & Verification: Teaching learners to never take AI-generated content at face value, regardless of how fluent or flashy it looks, and developing skills in cross-verification, recognizing signs of AI-generated content, and consulting external sources before trusting an AI answer.
Ethical and Responsible Use: Teaching the ethical use of AI through discussions of scenarios such as using AI for homework, potential benefits for accessibility, data privacy concerns, intellectual property issues, and societal impacts on jobs and equity.
Continuous Learning & Adaptability: Given how fast AI is evolving, emphasizing learning how to learn about AI, staying updated with AI news, understanding how new models improve, and being ready to adjust strategies as AI changes.
Amplified Risks
The shift to agentic AI also amplifies existing risks and introduces new ones. As IBM's AI ethics researchers succinctly put it: "Agentic AI amplifies all the risks that apply to traditional AI and generative AI because greater agency means more autonomy and therefore more that can go wrong." When an AI can not only generate content but also act on it, errors or misbehavior can have more direct consequences.
Key risk areas include:
Misinformation & Deepfakes: Multimodal generative AI can produce highly convincing fake content – from deepfake images and videos to autogenerated news articles with fake sources. In the hands of an agentic AI, such content could even be propagated or used to influence people autonomously.
Bias and Fairness Issues: If an AI is agentic and pulling info from the internet or using certain data sets, it might also pick up the biases present there, potentially amplifying stereotypes or skewed perspectives in its outputs and actions.
Lack of Accountability or Understanding: With multiple tools in play, when something goes wrong, it can be harder to tell where the failure occurred—whether in the AI's internal reasoning, data sourced from the web, or code execution.
Autonomous Misaction: The scenario of an AI agent inadvertently deleting files or doing something harmful with its tool access is not just fantasy. Researchers have already demonstrated instances where an AI given code execution ability wrote and ran destructive code due to flawed instructions or goals.
Ethical and Societal Implications: Powerful and autonomous AI raises ethical questions about attribution, mimicry of human voices or images, skill degradation through over-reliance, and potential impacts on critical thinking and creativity.
Future Trajectories
The near future promises continued rapid advancement in agentic AI capabilities. Forecasts for 2026-2027 suggest significant improvements, particularly in agents specialised for coding, scientific research and complex problem-solving. Some forecasts controversially suggest AI could reach superhuman levels in specific domains or even approach AGI within this timeframe, partly driven by AI agents automating aspects of AI research and development itself.
Analysts predict a significant increase in enterprise adoption. Gartner forecasts that 40% of generative AI solutions will be multimodal by 2027. Deloitte anticipates 25% of companies using GenAI will launch agentic AI pilots in 2025, growing to 50% in 2027. Salesforce's CEO predicted 1 billion AI agents in service by the end of FY 2026. Forrester identifies agentic AI as the "next competitive frontier," urging businesses to experiment strategically.
This rapid pace of change underscores the urgency of developing robust AI literacy frameworks and governance approaches. The capabilities of agentic AI are outpacing traditional regulatory and ethical frameworks. Relying on reactive measures after harms occur will likely prove insufficient given the speed and scale at which autonomous systems can operate.
The issue of accountability, particularly the challenge of the "moral crumple zone," demands urgent attention. As AI systems become more autonomous, weaving complex chains of reasoning, tool use and environmental interaction, the task of pinpointing the cause of a failure and assigning responsibility becomes exponentially more difficult.
The True Literacy Imperative
The evolution of AI from simple text generators to sophisticated autonomous agents represents more than just a technological shift; it's a fundamental reimagining of the human-AI relationship. The goal is an AI-literate population that can navigate a world saturated with AI agents – using them as helpful tools in work and life, rather than being manipulated or overwhelmed by them.
The rise of multimodal, agentic AI systems marks a transformative moment for AI literacy. No longer is "AI literacy" a niche skill for tech enthusiasts; it's fast becoming an essential component of general literacy in the 21st century – as fundamental as knowing how to find information online or how to read a map.
Literacy has always been shaped by the technology we use to communicate, and AI is perhaps the most powerful communication technology we've seen yet. Our concept of literacy is expanding accordingly.
The challenge ahead is clear: we must rapidly evolve our understanding of AI literacy to match the pace of technological advancement. This means moving beyond basic prompt engineering and output evaluation to develop a deeper, more critical form of literacy that encompasses understanding of agency, multimodal reasoning, ethical implications and the complex interplay between human intention and machine execution.
Just as previous generations had to adapt to new forms of literacy, from oral traditions to writing, from handwritten manuscripts to printed books, from analogue to digital media, we now face the task of adapting to a world where AI agents are not just tools but partners in our cognitive processes.
Recommendations
Cultivate Critical AI Literacy: As AI evolves from a content generator directed by prompts to an autonomous agent acting upon goals and synthesising information from diverse sources, critical evaluation becomes the paramount literacy skill. While evaluating the output of simple generative AI for accuracy and bias is necessary, agentic AI introduces far greater complexity. Develop and teach frameworks for evaluating not just the outputs of AI but the processes, sources and tools it employs.
Prioritise Ethical Understanding: Ethical literacy transitions from an important consideration to a non-negotiable requirement. While generative AI already posed ethical questions regarding bias, copyright and plagiarism, the ability of agentic AI to act autonomously in the world dramatically raises the stakes. Integrate ethical considerations into AI education from the earliest stages.
Adopt a Two-Lane Approach: The "two-lane" approach to assessment – secure, AI-free environments versus AI-integrated tasks outside that environment – will become increasingly standard practice in higher education and find its way into policy. Design educational experiences that both leverage AI as a tool and ensure independent skill development.
Develop Mental Models of AI Functioning: Users need "a mental model of the AI's possible actions" to understand what the AI might have done to find or generate information. AI interfaces are starting to assist with this by citing sources or showing tool usage, but developing an "AI intuition" is becoming an essential part of AI literacy. Help people build accurate mental models of how these systems work.
Focus on Human-AI Collaboration Skills: AI literacy involves learning how to interact effectively with agents, including skills in clearly defining goals and constraints, providing constructive feedback, knowing when and how to supervise autonomous processes, recognizing when human intervention is necessary and developing self-reflective mindsets. Teach strategies for effective AI collaboration and supervision.
Prepare for Continuous Adaptation: Given how fast AI is evolving, one of the meta-skills to teach is adaptability. The AI tools and features students learn today will likely be outdated in a few years. So it's important to emphasise learning how to learn about AI. Cultivate a mindset of continuous learning and adaptation to keep pace with technological change.
By developing a more sophisticated understanding of AI literacy, we can harness the power of these advanced systems while maintaining human agency and critical judgment. We now need to understand how to think with, about and alongside AI. And the change will not stop.
I fear that very few people truly grasp the meaning of "from AI as an assistant to AI as a delegate". In my mind, it's an issue of agency, and of humans handing theirs to a machine.
And while I support your call for AI literacy, the truth (as I see it) is that the technology is already evolving faster than 99% of humans on this planet can handle. From the dawn of humanity, we've called the shots with near total control over every technological invention. Sadly, that era is coming to an end. We need to catch up, yet catching up isn't possible.
Many thanks for bringing this critical issue to light!