Why You Need to Engineer Contexts—in AI and Beyond
Recently, Phil Schmid (a Senior AI Relations Engineer at Google DeepMind) published a widely read piece, Context Engineering.
He argued that effective agent use has moved from prompt engineering (“write a clever instruction”) to context engineering (“assemble everything the model needs to solve the task effectively”). Most agent failures, he claims, aren’t model problems but context problems: missing data, absent calendar access, and so on.
Look closer
There’s a lot to learn from this framing, and the insights stretch far beyond using AI agents well.
- Context has always been king. The leverage isn’t in one clever prompt you found on Reddit; it’s in systematically deciding what context to show your model and what to hide. Too much and you overload the model. Too little and you get vague boilerplate.
- Context debt is coming. Context debt will be the new technical debt. Just as cache invalidation has caused widespread headaches for decades, cached context brings privacy risk, stale assumptions, and decision noise.
- Context rot shows up faster in LLMs. Every industry knows the risk of insufficient context. With AI, the path from context to answer is so short that sloppy context is instantly visible. What does CI/CD look like for shared contexts?
- Inappropriate context creates new risk vectors. I don’t know if you’ve noticed, but there’s a lot of context around. And passing it around on each request creates thorny privacy and security issues. What happens when context meant for a C-suite meeting leaks to a group of engineering managers?
- Context is costly to debug. When you get a bad answer, how do you work out what was missing? The LLM can’t help you here. Tracing each input (or its absence) to a particular answer is time-consuming and erases much of the benefit of using an agent. (This is similar to agents belching out new code that creates a huge cognitive load to understand, something I wrote about last time.)
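The show-and-hide idea above can be made concrete. A minimal sketch (all names and the scoring heuristic are my own illustration, not anything from Schmid’s piece): rank candidate snippets by naive keyword overlap with the task, then keep only what fits a budget, so the model sees neither too much nor too little.

```python
def pick_context(query, snippets, budget_chars=500):
    """Rank snippets by naive keyword overlap with the query,
    then keep the best ones that fit within a character budget."""
    q_words = set(query.lower().split())

    def overlap(snippet):
        return len(q_words & set(snippet.lower().split()))

    chosen, used = [], 0
    for s in sorted(snippets, key=overlap, reverse=True):
        if overlap(s) == 0:
            break  # hide irrelevant context entirely
        if used + len(s) > budget_chars:
            continue  # too much context overloads the model
        chosen.append(s)
        used += len(s)
    return chosen

# Illustrative data: only the revenue snippets are relevant to the query.
snippets = [
    "Q3 revenue grew 12% in Europe",
    "The office plants need watering",
    "Q3 revenue fell in APAC",
]
picked = pick_context("How did Q3 revenue change?", snippets)
```

In practice you’d swap the keyword overlap for embeddings and the character budget for a token budget, but the shape of the decision (relevance filter plus budget) is the same.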
Your takeaways (even if you’re not using AI)
Start thinking like a context engineer today.
- Before you act, ask: “What’s the minimal set of facts, constraints, and tools I need to do this well?”
- Keep it relevant. How can you create the appropriate salience landscape instead of amassing a pile of information?
- Write output-first briefs: “Success looks like this (example, format, decision).” From there, you can reverse-engineer much of the required context.
- Add expiry dates to assumptions (“This plan assumes X remains true until 30 Sept”).
- Keep a living ‘context pack’ template for recurring tasks (sales call, strategy memo, sprint review).
- Summarise long threads into tight one-pagers before meetings. Context needs to be compressed into takeaways.
- Delete or archive what’s no longer relevant. Stale context is often worse than no context.
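The expiry-date and context-pack ideas above combine naturally. A minimal sketch (the field names and sample claims are hypothetical, purely for illustration): a reusable pack for a recurring task, with each assumption carrying an expiry date so stale context can be flagged rather than silently reused.

```python
from datetime import date

# A hypothetical 'context pack' for a recurring task; field names
# and contents are illustrative only.
context_pack = {
    "task": "sprint review",
    "success_looks_like": "one-page summary ending in a go/no-go decision",
    "assumptions": [
        {"claim": "Team capacity stays at 5 engineers",
         "expires": date(2025, 9, 30)},
        {"claim": "API vendor pricing is unchanged",
         "expires": date(2024, 1, 1)},
    ],
}

def stale_assumptions(pack, today):
    """Return claims whose expiry date has passed; these need re-checking
    before the pack is reused."""
    return [a["claim"] for a in pack["assumptions"] if a["expires"] < today]
```

Run `stale_assumptions(context_pack, date.today())` before each reuse of the pack; anything it returns goes back on the “verify or delete” list.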
And remember: this isn’t something new. Context engineering is a key aspect of working well in any environment. The more you can inhabit this mindset today, the better you’ll be able to work with humans and robots alike.