Posted by robotswantdata 6/30/2025

The new skill in AI is not prompting, it's context engineering(www.philschmid.de)
915 points | 518 comments
lifeisstillgood 7/1/2025|
Something that strikes me is that (and this is the whole point of this thread) if I want two LLMs to “have a conversation” or to work together as agents on similar problems, we need to give them the same or similar context.

And to drag this back to politics - that kind of suggests that when we have political polarisation, we just have contexts that are so different that the LLMs cannot arrive at similar conclusions

I guess it is obvious but it is also interesting

simonw 7/1/2025||
One of the most valuable techniques for building useful LLM systems right now is actually the opposite of that.

Context is limited in length and too much stuff in the context can lead to confusion and poor results - the solution to that is "sub-agents", where a coordinating LLM prepares a smaller context and task for another LLM and effectively treats it as a tool call.
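A minimal sketch of that sub-agent pattern, with a stand-in `llm` function in place of a real completion API (all names here are hypothetical):

```python
# Sketch of the sub-agent pattern: a coordinator prepares a small,
# focused context for a worker LLM and treats it like a tool call.
# `llm` is a placeholder for whatever completion API you actually use.
def llm(system: str, user: str) -> str:
    """Stand-in for a real API call; returns a canned reply."""
    return f"[{system[:20]}] {user[:60]}"

def sub_agent(task: str, snippet: str) -> str:
    # The worker sees only the snippet it needs, not the full history.
    return llm("focused assistant", f"Task: {task}\nContext: {snippet}")

def coordinator(question: str, documents: list[str]) -> str:
    # Fan out small, scoped tasks, then synthesize the short results.
    partials = [sub_agent(f"Extract facts about {question}", d)
                for d in documents]
    return llm("synthesizer", "\n---\n".join(partials))
```

The key property is that each worker call starts from a fresh, small context, so the coordinator never has to push its whole history into every sub-task.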

The best explanation of that pattern right now is this from Anthropic: https://www.anthropic.com/engineering/built-multi-agent-rese...

PaulRobinson 7/1/2025||
Shared context is critical to working towards a common goal. It's as true in society when deciding policy as it is in your vibe-coded match-3 game when figuring out what tests need to be written.
defyonce 7/1/2025||
At which point does this AI thing stop being Stone Soup?

https://en.wikipedia.org/wiki/Stone_Soup

You need an expert who knows what to do and how to do it to get good results. Looks like coding with extra steps to me

I DO use AI for some tasks - when I know exactly what I want done and how I want it done. The only issue is the busywork of typing, which AI solves.

walterfreedom 7/1/2025|
AI is already very impressive for natural language formatting and filtering; we use it for ratifying profiles and posts. It takes around an hour to implement this from scratch, and there are no alternatives that can do the same thing as comprehensively anyway.
mrcoders 7/2/2025||
Looking at this thread, excited to see the community articulating what many of us have been experiencing. The shift from "prompt engineering" to "context engineering" really captures what's happening. The technical stuff (RAG, vector databases) is getting commoditized. But there's this foundational knowledge organization layer that's becoming critical.

Most companies have context scattered across wikis, Slack, Google Docs. You can build sophisticated retrieval systems, but if you're feeding them fragmented information, you're missing huge optimization opportunities.

There's research backing this - Microsoft/Salesforce found a 39% accuracy drop in multi-turn conversations - so for agent interactions it's even more critical to give 'just enough' context (https://arxiv.org/pdf/2505.06120).

When (business) context is properly structured upfront, you minimize those patterns.

daxfohl 7/1/2025||
Seems like there'd be an opportunity for open source tooling here: context visualizers, summarizers, explorers, A/B testers, etc. Also LLM pre-caching of context summaries, since IIUC any context change requires a full N^2 recalculation of everything, which adds a ton of latency and cost. And some optimizers, since the effective N is actually (N-M), where M is the number of leading context tokens left unchanged by your update. Though generally you probably want to summarize more of the beginning of the context, so M is likely small in most cases.
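The unchanged-prefix point can be illustrated with a small sketch, where whitespace splitting stands in for a real tokenizer: only tokens after the first divergence need recomputation.

```python
# Illustration of the unchanged-prefix optimization: attention state
# for the first M matching tokens can be reused; only the tail after
# the first divergence needs recomputing.
def shared_prefix_len(old_tokens: list[str], new_tokens: list[str]) -> int:
    m = 0
    for a, b in zip(old_tokens, new_tokens):
        if a != b:
            break
        m += 1
    return m

old = "system: be terse . user: fix the bug".split()
new = "system: be terse . user: fix the tests".split()
m = shared_prefix_len(old, new)
reused, recomputed = m, len(new) - m  # reused == 7, recomputed == 1
```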

Anyway, it seems like most of these algorithms are fairly ad hoc things built into the various agents themselves these days, and not something that exists in its own right. Seems like an opportunity to make this its own ecosystem, where context tools can be swapped and used independently of the agents that use them, similar to the LLMs themselves.

thatthatis 7/1/2025||
Glad we have a name for this. I had been calling it “context shaping” in my head for a bit now.

I think good context engineering will be one of the most important pieces of the tooling that will turn “raw model power” into incredible outcomes.

Model power is one thing, model power plus the tools to use it will be quite another.

Davidzheng 7/1/2025||
Let's grant that context engineering is here to stay and that we can never have context lengths large enough to throw everything in indiscriminately. Why is this not a perfect place to train another AI whose job is to provide the context for the main AI?
emporas 6/30/2025||
Prompting takes a back seat, while context is the driving factor. 100% agree with this.

For programming I don't use any prompts. I give it a problem that's already solved, as context or an example, and ask it to implement something similar. One sentence or two, and that's it.

For other kinds of tasks, like writing, I use prompts, but even then context and examples are still the driving factor.

In my opinion, we are at an interesting point in history, in which individuals will now need their own personal database. Like companies over the last 50 years, which had their own database records of customers, products, prices and so on, an individual will now operate using personal contextual information, saved over a long period of time in wikis or SQLite rows.
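A minimal sketch of such a personal context database, using SQLite (the schema and sample rows are illustrative assumptions):

```python
# Sketch of a "personal context database": long-lived notes an
# individual could retrieve and feed into any model they talk to.
import sqlite3

con = sqlite3.connect(":memory:")  # use a file path for persistence
con.execute("""CREATE TABLE context (
    topic TEXT,
    note  TEXT,
    added TEXT DEFAULT CURRENT_TIMESTAMP)""")
con.execute("INSERT INTO context (topic, note) VALUES (?, ?)",
            ("preferences", "Prefers examples over prose"))
con.execute("INSERT INTO context (topic, note) VALUES (?, ?)",
            ("projects", "Maintains a match-3 side project"))

def context_for(topic: str) -> str:
    # Pull just the notes relevant to the current conversation.
    rows = con.execute("SELECT note FROM context WHERE topic = ?", (topic,))
    return "\n".join(note for (note,) in rows)
```

The point is less the storage engine than the habit: accumulating durable personal context that can be selectively injected into any model.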

d0gsg0w00f 6/30/2025|
Yes, the other day I was telling a colleague that we all need our own personal context to feed into every model we interact with. You could carry it around on a thumb drive or something.
linguistbreaker 7/1/2025||
Prompt engineering was just trying to fit all the context into one prompt - but in practice there would often be a series of prompts, both positive and negative, so...

I get coining a new term and that can be useful in itself but I don't see a big conceptual jump here.

simonw 7/1/2025|
It's not a big conceptual jump. It's trying to solve the problem where a lot of people think "prompt engineering" means "typing a prompt into a chatbot" - and the related problem that many people haven't yet realized you can (and should) dump documents, examples and other long-form content into an LLM to get good results.
geeewhy 6/30/2025||
I've been experimenting with this for a while (I'm sure, in a way, most of us have). It would be good to enumerate some examples. When it comes to coding, here are a few:

- compile scripts that can grep / compile list of your relevant files as files of interest

- make temp symlinks between relevant repos for documentation generation, passing the documentation collected from each repo to enable cross-repo ops to be performed atomically

- build scripts to copy schemas, db ddls, dtos, example records, api specs, contracts (still works better than MCP in most cases)

I found these steps not only help produce better output but also reduce cost greatly by avoiding some "reasoning" hops. I'm sure the practice extends beyond coding.
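The third step above, collecting schemas and specs into one context blob, could be sketched like this (the glob patterns and file layout are illustrative assumptions):

```python
# Sketch: gather schema/spec files from a repo into a single blob
# that can be pasted into an LLM session as upfront context.
from pathlib import Path

# Hypothetical patterns for schema-like artifacts; adjust per project.
PATTERNS = ["*.sql", "*.ddl", "*.openapi.yaml", "*Dto.java"]

def collect_context(repo: Path) -> str:
    parts = []
    for pattern in PATTERNS:
        for f in sorted(repo.rglob(pattern)):
            # Label each file so the model knows where content came from.
            parts.append(f"### {f.relative_to(repo)}\n{f.read_text()}")
    return "\n\n".join(parts)
```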

mpazik 7/8/2025|
Yeah, I had similar findings. Making a good system can be tedious as it grows, though, and the current tools didn't provide enough configuration.

I found opencode and decided to build a fork that lets you set up specialized agents with custom tools and programmatic context building to organize it better: https://github.com/mpazik/openagent

hintymad 6/30/2025|
> The New Skill in AI Is Not Prompting, It's Context Engineering

Sounds like good managers and leaders now have an edge. As Patty McCord of Netflix fame used to say: all that a manager does is set the context.
