There's a phrase floating around the AI community that captures something profound about where we are in 2025: "God is hungry for context."¹ It emerged from a late-night experiment in which two founders dumped their entire company's knowledge base into a cutting-edge AI model. What they got back wasn't just helpful; it was transformative. The AI produced a strategic plan so detailed and insightful that it fundamentally changed how they thought about their own business. The only trick? They fed the model everything it could possibly need to know.
This is the quiet revolution happening in AI right now. While everyone was busy perfecting their prompts, tweaking their word choices like amateur poets, the real breakthrough was hiding in plain sight. The magic isn't in how you ask; it's in what the AI knows when you ask.
Andrej Karpathy gave us the term: context engineering.² It's not new, but naming it matters. Names create movements. And this movement is about recognising that we've been thinking about AI interaction all wrong. We treated language models like oracles you approach with carefully worded questions. But they're more like brilliant consultants who've just walked into your office with amnesia. Every conversation starts from zero unless you deliberately construct their memory.
Think about the last time an AI disappointed you. Maybe it gave generic advice when you needed specificity. Maybe it missed obvious connections that any human colleague would catch. The model wasn't stupid. It was starving. You gave it a question when it needed a world.

The shift from prompt engineering to context engineering represents a fundamental change in how we conceptualise AI capability. Prompt engineering assumes the model contains what you need; you just have to ask correctly. Context engineering recognises that the model's intelligence is only as good as the information ecosystem you create for it. It's the difference between interrogating a witness and briefing a partner.
Consider what actually happens when you interact with a modern AI. You type a message. The model receives that message plus whatever context the system provides: previous conversation, system instructions, retrieved documents, user preferences. That entire package is what the model "sees." Your prompt is just the final brush stroke on a canvas you might not even realise you're painting.
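That "entire package" can be made concrete. Here is a minimal sketch of how an application might assemble what the model actually receives for one turn; every name in it (`build_context`, the argument names) is illustrative, not any particular framework's API:

```python
def build_context(user_message, history, system_prompt, documents, preferences):
    """Assemble the full input the model 'sees' for a single turn."""
    parts = [
        {"role": "system", "content": system_prompt},
        # Preferences and retrieved documents are injected silently:
        # invisible to the user, fully visible to the model.
        {"role": "system", "content": f"User preferences: {preferences}"},
    ]
    for doc in documents:
        parts.append({"role": "system", "content": f"Reference: {doc}"})
    parts.extend(history)  # prior turns in this conversation
    # The user's prompt is only the final entry in a much larger package.
    parts.append({"role": "user", "content": user_message})
    return parts
```

Seen this way, the prompt really is just the last item appended to a structure the application has been building all along.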
The best AI applications in 2025 are essentially context orchestration engines. They gather relevant information from multiple sources, compress it intelligently, format it appropriately, and present it to the model at exactly the right moment. They maintain memories across sessions, retrieve relevant documents on demand, and carefully prune information that's no longer relevant. They're not just passing your question along; they're constructing an entire informational environment.

This explains why the same model can seem brilliant in one application and pedestrian in another. The difference isn't the model. It's the context engineering. A well-designed system might pull in your calendar, your email history, your writing style, relevant documents, and a compressed summary of past interactions before the model even begins to formulate a response. A poorly designed system just forwards your query.
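The gather-then-curate loop can be sketched in a few lines. This is a toy version, assuming word counts as a stand-in for tokens and word overlap as a stand-in for real relevance scoring (a production system would use embeddings and a tokenizer):

```python
def orchestrate(query, sources, budget):
    """Gather snippets from each source, rank by relevance, keep what fits."""
    gathered = []
    for source in sources:
        gathered.extend(source(query))  # each source returns candidate snippets
    # Crude relevance proxy: words shared with the query.
    query_words = set(query.lower().split())
    gathered.sort(
        key=lambda s: len(query_words & set(s.lower().split())), reverse=True
    )
    context, used = [], 0
    for snippet in gathered:
        cost = len(snippet.split())  # word count as a stand-in for tokens
        if used + cost <= budget:    # curation: respect the budget
            context.append(snippet)
            used += cost
    return context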
The technical details matter less than the mindset shift. When you stop thinking about crafting the perfect prompt and start thinking about constructing the perfect context, everything changes. You become less of a wordsmith and more of an information architect. You're not trying to trick the model into understanding; you're giving it everything it needs to actually understand.
Context windows have exploded from a few thousand tokens to hundreds of thousands. Some systems push into the millions. But bigger isn't always better. There's a phenomenon researchers call "context degradation syndrome."³ Dump too much irrelevant information into a model's context and performance actually decreases. The art is in curation, not accumulation.

This is where context engineering becomes genuinely creative. It's about understanding not just what information might be relevant, but what information is actually useful. It's about compression without loss of meaning. It's about maintaining coherence across time while respecting computational limits. It's about knowing when to summarise and when to preserve detail.
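One common curation tactic is to keep recent turns verbatim and collapse the oldest into a summary when the budget is exceeded. A minimal sketch, assuming word counts approximate tokens and `summarise` is supplied by the caller (in practice it would be another model call):

```python
def prune_context(turns, budget, summarise):
    """Keep recent turns verbatim; collapse the oldest into one summary."""
    cost = lambda turn: len(turn.split())  # word count as a token proxy
    kept = list(turns)
    overflow = []
    total = sum(cost(t) for t in kept)
    while kept and total > budget:
        oldest = kept.pop(0)       # evict from the start (oldest first)
        overflow.append(oldest)
        total -= cost(oldest)
    if overflow:
        # Note: a real system would also budget for the summary's own tokens.
        kept.insert(0, summarise(overflow))
    return kept
```

The judgement call the paragraph above describes, when to summarise and when to preserve detail, lives in how aggressive this eviction policy is and how lossy `summarise` is allowed to be.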
The most sophisticated systems maintain multiple layers of context. There's the immediate context of the current conversation. There's the session context that persists across multiple exchanges. There's the user context that captures preferences and history. There's the domain context that provides relevant background knowledge. And there's the dynamic context retrieved based on the specific query. Orchestrating these layers is where the real work happens.
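Those five layers can be stacked explicitly. A sketch of the composition step, with all names hypothetical; the ordering (stable background first, volatile foreground last) is one reasonable design choice, not a standard:

```python
def compose_context(immediate, session, user, domain, retrieve, query):
    """Stack the five context layers, most stable to most volatile."""
    return {
        "domain": domain,            # background knowledge; changes rarely
        "user": user,                # preferences and long-term history
        "session": session,          # persists across exchanges
        "dynamic": retrieve(query),  # fetched fresh for this specific query
        "immediate": immediate,      # the current conversation
    }
```

Orchestration is then the work of deciding how much budget each layer gets and which layer wins when they conflict.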
Multi-agent systems add another dimension. Instead of one model handling everything, specialised agents each maintain their own context. It's like assembling a team where each member has their own expertise and information access. The challenge becomes coordination: ensuring the agents share enough context to work together while maintaining enough separation to work efficiently.
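That share-enough-but-not-everything trade-off is often implemented as private contexts plus a shared channel: each agent reasons over its own material and publishes only conclusions. A toy sketch (the `Agent` class and its behaviour are invented for illustration):

```python
class Agent:
    def __init__(self, name, private_context):
        self.name = name
        self.private = list(private_context)  # only this agent sees this

    def act(self, shared):
        # Reason over private context plus whatever peers have published.
        note = f"{self.name} saw {len(self.private) + len(shared)} items"
        shared.append(note)  # publish the conclusion, not the raw context
        return note

shared_notes = []  # the coordination channel between agents
researcher = Agent("researcher", ["paper A", "paper B"])
writer = Agent("writer", ["style guide"])
researcher.act(shared_notes)
writer.act(shared_notes)
```

The writer never sees the researcher's papers, only the researcher's published note: enough context to cooperate, enough separation to stay efficient.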
But perhaps the most intriguing development is how context engineering changes our relationship with AI. When you carefully construct context, you're not just using a tool; you're creating a collaborative environment. You're establishing shared understanding. You're building something that feels less like querying a database and more like working with a knowledgeable partner.
The implications extend beyond technical implementation. Context engineering suggests that the future of AI isn't about building smarter models, though that will continue. It's about building richer informational environments for those models to inhabit. It's about recognising that intelligence without context is just potential. Context transforms potential into capability.
This might be why that phrase resonates so deeply: "God is hungry for context." It captures both the power and the limitation. These models have godlike capabilities in their ability to process and synthesise information. But without context, they're gods in a void. They need us to construct their universe.
As we move forward, the winners in AI won't be those with the cleverest prompts or even the largest models. They'll be those who master the art of context. They'll be the ones who understand that feeding the model isn't just about providing information; it's about creating an environment where intelligence can flourish.
The prompt engineering era taught us that how we communicate with AI matters. The context engineering era teaches us that what AI knows when we communicate matters more. It's a profound shift, and we're just beginning to explore its implications. The models are ready. They're powerful. They're waiting. They're hungry.
Feed them well.
¹ Hylak, B. & Gauba, A. (2025). "God is hungry for Context: First thoughts on o3 pro" – Latent Space Blog.
² Referenced across multiple sources including LangChain Team (2025). "The rise of 'context engineering'" – LangChain Blog.
³ Martin, L. (2025). "Context Engineering for Agents" – rlancemartin.github.io.