Posted by robotswantdata 6/30/2025

The new skill in AI is not prompting, it's context engineering (www.philschmid.de)
915 points | 518 comments
tom_m 7/1/2025|
That is prompting. It's all a prompt going in. The parts you see or don't see as an end user are just UX. Of course, when you obscure things, it changes the UX for better or worse.
HardCodedBias 7/1/2025||
The central argument is that the importance of prompt "tricks" (essentially empirical workarounds for current LLM limitations) will decline as the technology matures.

The truly valuable and future-proof skill is "context engineering". This focuses on providing the LLM with the information required to reason through the task at hand. Although current LLMs present a trade-off between the size of the context and the quality of the output, this is a constraint that we can expect to lessen with future advancements.
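
As a toy illustration of what that means in practice (the corpus, the word-overlap scoring, and the character budget here are all invented for the sketch, not from the article):

    # A minimal Python sketch of "context engineering": gather the facts
    # the model needs, pack them into the prompt, and respect a budget.
    CORPUS = {
        "refunds": "Refunds are issued within 14 days of purchase.",
        "shipping": "Orders ship within 2 business days.",
        "warranty": "Hardware carries a 1-year limited warranty.",
    }

    def select_context(task: str, budget_chars: int = 500) -> str:
        """Naive relevance filter: keep snippets that share words with the task."""
        task_words = set(task.lower().split())
        picked = [text for topic, text in CORPUS.items()
                  if topic in task.lower() or task_words & set(text.lower().split())]
        return "\n".join(picked)[:budget_chars]  # enforce the context budget

    def build_messages(task: str) -> list[dict]:
        return [
            {"role": "system", "content": "Answer using only the background provided."},
            {"role": "user", "content": f"Background:\n{select_context(task)}\n\nTask: {task}"},
        ]

    print(build_messages("How fast do refunds arrive?"))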

sonicvrooom 7/1/2025||
Premises and conclusions.

Prompts and context.

Hopes and expectations.

Black holes and revelations.

We learned to write and then someone wrote novels.

Context, now, is for the AI, really, to overcome dogmas recursively and contiguously.

Wasn't that somebody's slogan someday in the past?

Context over Dogma

labrador 6/30/2025||
I’m curious how this applies to systems like ChatGPT, which now have two kinds of memory: user-configurable memory (a list of facts or preferences) and an opaque chat history memory. If context is the core unit of interaction, it seems important to give users more control or at least visibility into both.

I know context engineering is critical for agents, but I wonder if it's also useful for shaping personality and improving overall relatability? I'm curious if anyone else has thought about that.

simonw 6/30/2025|
I really dislike the new ChatGPT memory feature (the one that pulls details out of a summarized version of all of your previous chats, as opposed to the older memory feature that records short notes to itself) for exactly this reason: it makes it even harder for me to control the context when I'm using ChatGPT.

If I'm debugging something with ChatGPT and I hit an error loop, my fix is to start a new conversation.

Now I can't be sure ChatGPT won't include notes from that previous conversation's context that I was trying to get rid of!

Thankfully you can turn the new memory thing off, but it's on by default.

I wrote more about that here: https://simonwillison.net/2025/May/21/chatgpt-new-memory/

labrador 6/30/2025||
On the other hand, for my use case (I'm retired and enjoy chatting with it), having it remember items from past chats makes it feel much more personable. I actually prefer Claude, but it doesn't have memory, so I unsubscribed and subscribed to ChatGPT. That it remembers obscure but relevant details about our past chats feels almost magical.

It's good that you can turn it off. I can see how it might cause problems when trying to do technical work.

Edit: Note that the introduction of memory was a contributing factor to "the sycophant" that OpenAI had to roll back. A model that could praise you while seeming to know you encouraged addictive use.

Edit2: Here's the previous Hacker News discussion on Simon's "I really don’t like ChatGPT’s new memory dossier"

https://news.ycombinator.com/item?id=44052246

grafmax 6/30/2025||
There is no need to develop this ‘skill’. This can all be automated as a preprocessing step before the main request runs. Then you can have agents with infinite context, etc.
simonw 6/30/2025|
You need this skill if you're the engineer that's designing and implementing that preprocessing step.
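For instance (a deliberately crude sketch; the character budget and the bracketed summary placeholder are stand-ins for real token counting and summarization), the engineer still has to decide what survives into the window:

    # Crude sketch of a context-preprocessing step: decide, per request,
    # which history makes it into the window and what gets summarized away.
    def preprocess(history: list[str], question: str, budget: int = 2000) -> str:
        kept, used = [], 0
        for turn in reversed(history):  # walk newest-first
            if used + len(turn) > budget:
                break
            kept.append(turn)
            used += len(turn)
        dropped = history[: len(history) - len(kept)]
        summary = f"[{len(dropped)} earlier turns omitted]" if dropped else ""
        return "\n".join([summary, *reversed(kept), f"Question: {question}"]).strip()

    print(preprocess(["old debugging detour " * 40, "useful turn", "latest turn"],
                     "Why the error?", budget=100))
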
dolebirchwood 6/30/2025|||
The skill amounts to determining "what information is required for System A to achieve Outcome X." We already have a term for this: Critical thinking.
Zopieux 6/30/2025||
Why does it take hundreds of comments for obvious facts to be laid out on this website? Thanks for the reality check.
grafmax 6/30/2025||||
Over a short horizon I think you are right. But over a longer horizon, we should expect model providers to internalize these mechanisms, similar to how chain of thought has been effectively "internalized", which in turn has reduced the payoff that prompt engineering used to provide as models have gotten better.
yunwal 6/30/2025||||
Non-rhetorical question: is this different enough from data engineering that it needs its own name?
ofjcihen 6/30/2025|||
Not at all, just ask the LLM to design and implement it.

AI turtles all the way down.

taylorius 7/1/2025||
The model starts every conversation as a blank slate, so providing thorough context about the problem you want it to solve seems a fairly obvious preparatory step, tbh. How else is it supposed to know what to do? I agree that "prompt" is probably not quite the right word to describe what is necessary, though; it feels a bit minimal and brief. "Context engineering" seems a bit overblown, but this is tech, and we do love a grand title.
Snowfield9571 7/1/2025||
What’s it going to be next month?
rTX5CMRXIfFG 7/1/2025||
So then for code generation purposes, how is “context engineering” different now from writing technical specs? Providing the LLMs the “right amount of information” means writing specs that cover all states and edge cases. Providing the information “at the right time” means writing composable tech specs that can be interlinked with each other so that you can prompt the LLM with just the specs for the task at hand.
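Concretely, something like this (the @include syntax and the file names are my own invention, just to show the composition; this is not a real tool):

    # Toy resolver for composable, interlinked specs: each task prompt
    # pulls in only the spec files it actually references.
    SPECS = {
        "checkout.md": "Checkout flow.\n@include(payments.md)\n@include(errors.md)",
        "payments.md": "Payments: retry failed charges up to 3 times.",
        "errors.md": "Errors: every failure state must surface a user-facing message.",
    }

    def resolve(name: str, seen: set | None = None) -> str:
        seen = seen or set()
        if name in seen:  # guard against include cycles
            return f"[cycle: {name}]"
        seen.add(name)
        out = []
        for line in SPECS[name].splitlines():
            if line.startswith("@include(") and line.endswith(")"):
                out.append(resolve(line[len("@include("):-1], seen))
            else:
                out.append(line)
        return "\n".join(out)

    print(resolve("checkout.md"))  # the prompt carries only the specs this task needs
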
megalord 7/1/2025||
I agree with everything in the blog post. What I'm struggling with right now is how to execute things as safely as possible while still giving the LLM flexibility. Having it execute/choose a function from a list of available fns is okay for most use cases, but when something more complex comes up, we need a way to execute several things from the allowed list, do some computation in between calls, etc.
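To make it concrete, here's roughly the shape I have in mind (the tool names, the plan format, and the result-substitution rule are all made up for the example):

    # Sketch of an allowlisted tool loop: the model may only name tools
    # from ALLOWED; the host executes them and chains results between calls.
    import json

    ALLOWED = {
        "get_price": lambda sku: 9.99,  # stub lookup
        "apply_discount": lambda price, pct: round(price * (1 - pct / 100), 2),
    }

    def run_plan(plan_json: str) -> dict:
        results = {}
        for step in json.loads(plan_json):
            fn = ALLOWED.get(step["tool"])
            if fn is None:
                raise ValueError(f"tool not allowed: {step['tool']}")
            # replace references to earlier step ids with their results
            args = [results.get(a, a) for a in step["args"]]
            results[step["id"]] = fn(*args)
        return results

    plan = ('[{"id": "p", "tool": "get_price", "args": ["sku-1"]},'
            ' {"id": "d", "tool": "apply_discount", "args": ["p", 15]}]')
    print(run_plan(plan))  # {'p': 9.99, 'd': 8.49}
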
rvz 6/30/2025|
This is just another "rebranding" of the failed "prompt engineering" trend to promote another borderline pseudo-scientific trend to attract more VC money to fund a new pyramid scheme.

Assuming this will use the totally flawed MCP protocol, I can only see more cases of data exfiltration attacks on these AI systems, just like before [0] [1].

Prompt injection + Data exfiltration is the new social engineering in AI Agents.

[0] https://embracethered.com/blog/posts/2025/security-advisory-...

[1] https://www.bleepingcomputer.com/news/security/zero-click-ai...

Zopieux 6/30/2025|
Rediscovering basic security concepts and hygiene from 2005 is also a very hot AI thing right now, so that tracks.