Posted by robotswantdata 4 days ago

The new skill in AI is not prompting, it's context engineering (www.philschmid.de)
863 points | 492 comments | page 6
hintymad 4 days ago|
> The New Skill in AI Is Not Prompting, It's Context Engineering

Sounds like good managers and leaders now have an edge. As Patty McCord of Netflix fame used to say: all that a manager does is set the context.

rTX5CMRXIfFG 4 days ago||
So then for code generation purposes, how is “context engineering” different now from writing technical specs? Providing the LLMs the “right amount of information” means writing specs that cover all states and edge cases. Providing the information “at the right time” means writing composable tech specs that can be interlinked with each other so that you can prompt the LLM with just the specs for the task at hand.
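A minimal sketch of what that interlinking could look like in practice (the file layout and the "requires:" header convention are hypothetical): specs declare their dependencies, and a resolver pulls in only the transitive closure needed for the task at hand.

    # Sketch: composable tech specs as files that declare dependencies,
    # so a prompt includes only the specs relevant to the current task.
    # The "requires:" header convention and file layout are hypothetical.
    from pathlib import Path

    SPEC_DIR = Path("specs")

    def load_spec(name: str, seen: set[str]) -> list[str]:
        """Return a spec preceded by its transitive dependencies."""
        if name in seen:
            return []
        seen.add(name)
        text = (SPEC_DIR / f"{name}.md").read_text()
        chunks = []
        for line in text.splitlines():
            if line.startswith("requires:"):  # e.g. "requires: auth, billing"
                for dep in line.removeprefix("requires:").split(","):
                    chunks += load_spec(dep.strip(), seen)
        chunks.append(text)
        return chunks

    def build_context(task_spec: str) -> str:
        return "\n\n---\n\n".join(load_spec(task_spec, set()))

    # prompt = build_context("checkout-flow") + "\n\nImplement the task above."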
_Algernon_ 4 days ago||
The prompt alchemists found a new buzzword to try to hook into the legitimacy of actual engineering disciplines.
labrador 4 days ago||
I’m curious how this applies to systems like ChatGPT, which now have two kinds of memory: user-configurable memory (a list of facts or preferences) and an opaque chat history memory. If context is the core unit of interaction, it seems important to give users more control over, or at least visibility into, both.

I know context engineering is critical for agents, but I wonder if it's also useful for shaping personality and improving overall relatability? I'm curious if anyone else has thought about that.
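For illustration, one way to picture those two tiers (a rough sketch, not how ChatGPT actually implements it; summarize() is a placeholder for a real LLM call):

    # Sketch of a two-tier memory: user-editable facts plus an opaque
    # rolling summary of past chats. Not OpenAI's implementation;
    # summarize() is a placeholder for a real LLM summarization call.
    user_memory = [
        "Prefers Python examples",
        "Is retired; chats conversationally",
    ]

    def summarize(chats: list[str]) -> str:
        # Placeholder: a real system would call an LLM here.
        return " / ".join(chat[:60] for chat in chats[-5:])

    def build_system_prompt(past_chats: list[str]) -> str:
        parts = ["Facts the user chose to save:"]
        parts += [f"- {fact}" for fact in user_memory]
        parts.append("Summary of recent chats (not user-visible):")
        parts.append(summarize(past_chats))
        return "\n".join(parts)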

simonw 4 days ago|
I really dislike the new ChatGPT memory feature (the one that pulls details out of a summarized version of all of your previous chats, as opposed to the older memory feature that records short notes to itself) for exactly this reason: it makes it even harder for me to control the context when I'm using ChatGPT.

If I'm debugging something with ChatGPT and I hit an error loop, my fix is to start a new conversation.

Now I can't be sure ChatGPT won't include notes from that previous conversation's context that I was trying to get rid of!

Thankfully you can turn the new memory thing off, but it's on by default.

I wrote more about that here: https://simonwillison.net/2025/May/21/chatgpt-new-memory/

labrador 4 days ago||
On the other hand, for my use case (I'm retired and enjoy chatting with it), having it remember items from past chats makes it feel much more personable. I actually prefer Claude, but it doesn't have memory, so I unsubscribed and subscribed to ChatGPT. That it remembers obscure but relevant details about our past chats feels almost magical.

It's good that you can turn it off. I can see how it might cause problems when trying to do technical work.

Edit: Note, the introduction of memory was a contributing factor to "the sycophant" that OpenAI had to roll back. Praising you while seeming to know you encouraged addictive use.

Edit2: Here's the previous Hacker News discussion on Simon's "I really don’t like ChatGPT’s new memory dossier"

https://news.ycombinator.com/item?id=44052246

grafmax 4 days ago||
There is no need to develop this ‘skill’. This can all be automated as a preprocessing step before the main request runs. Then you can have agents with infinite context, etc.
simonw 4 days ago|
You need this skill if you're the engineer that's designing and implementing that preprocessing step.
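For example, a minimal sketch of such a preprocessing step (the word-overlap scoring and the character budget are arbitrary stand-ins; a real system might use embeddings and token counts):

    # Sketch of the preprocessing step being described: rank candidate
    # documents against the request and pack the best ones into a fixed
    # budget. Word-overlap scoring is a toy stand-in for embeddings,
    # and the character budget is a stand-in for a token count.
    def score(query: str, doc: str) -> float:
        q, d = set(query.lower().split()), set(doc.lower().split())
        return len(q & d) / (len(q) or 1)

    def assemble_context(query: str, docs: list[str], budget: int = 8000) -> str:
        picked, used = [], 0
        for doc in sorted(docs, key=lambda d: score(query, d), reverse=True):
            if used + len(doc) <= budget:
                picked.append(doc)
                used += len(doc)
        return "\n\n".join(picked)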
dolebirchwood 4 days ago|||
The skill amounts to determining "what information is required for System A to achieve Outcome X." We already have a term for this: Critical thinking.
Zopieux 4 days ago||
Why does it take hundreds of comments for obvious facts to be laid out on this website? Thanks for the reality check.
grafmax 4 days ago||||
In the short term I think you are right. But over a longer horizon, we should expect model providers to internalize these mechanisms, similar to how chain of thought has been effectively "internalized" - which in turn has reduced the effectiveness that prompt engineering used to provide as models have gotten better.
yunwal 4 days ago||||
Non-rhetorical question: is this different enough from data engineering that it needs its own name?
ofjcihen 4 days ago|||
Not at all, just ask the LLM to design and implement it.

AI turtles all the way down.

surrTurr 4 days ago||
Context engineering will be just another fad, like prompt engineering was. Once the context window problem is solved, nobody will be talking about it any more.

Also, for anyone working with LLMs right now, this is a pretty obvious concept, and I'm surprised it's at the top of HN.

HarHarVeryFunny 3 days ago||
I guess "context engineering" is a more encompassing term than "prompt engineering", but at the end of the day it's the same thing - choosing the best LLM input (whether you call it context or a prompt) to elicit the response you are hoping for.

The concept of prompting - asking an Oracle a question - was always a bit limited since it means you're really leaning on the LLM itself - the trained weights - to provide all the context you didn't explicitly mention in the prompt, and relying on the LLM to be able to generate coherently based on the sliced and blended mix of StackOverflow and Reddit/etc it was trained on. If you are using an LLM for code generation then obviously you can expect a better result if you feed it the API docs you want it to use, your code base, your project documents, etc, etc (i.e. "context engineering").
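A toy illustration of the difference (the file paths are hypothetical):

    # Toy illustration: a bare prompt vs. one packed with the specific
    # context the model would otherwise have to "remember" from its
    # training data. The file paths are hypothetical.
    bare_prompt = "Write a function that paginates results from our API."

    engineered_prompt = "\n\n".join([
        "## API documentation",
        open("docs/api/pagination.md").read(),
        "## Relevant existing code",
        open("src/client.py").read(),
        "## Task",
        "Write a function that paginates results from our API, "
        "following the conventions in the code above.",
    ])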

Another term that has recently been added to the LLM lexicon is "context rot", which is quite a useful concept. When you use the LLM to generate, its output is of course appended to the initial input, and over extended bouts of attempted reasoning, with backtracking etc, the clarity of the context is going to suffer ("rot") and eventually the LLM will start to fail in GIGO fashion (garbage in => garbage out). Your best recourse at this point is to clear the context and start over.
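In agent loops that advice often becomes an explicit reset policy, roughly like this sketch (the threshold is arbitrary; call_llm and summarize stand in for real model calls):

    # Sketch of handling "context rot" in an agent loop: once the
    # transcript grows past a threshold, compact it to a summary and
    # start fresh. The threshold is arbitrary; call_llm and summarize
    # stand in for real model calls.
    MAX_CONTEXT_CHARS = 50_000

    def run_agent(task: str, call_llm, summarize) -> str:
        context = task
        while True:
            reply = call_llm(context)
            if "DONE" in reply:
                return reply
            context += "\n" + reply
            if len(context) > MAX_CONTEXT_CHARS:
                # Clear the rotted context, keeping only a fresh summary.
                context = task + "\nProgress so far: " + summarize(context)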

mumbisChungo 3 days ago||
context engineering, tool development, and orchestration

i.e. the new skill in AI is complex software development

saejox 4 days ago||
Claude 3.5 was released a year ago, and current LLMs are not much better at coding than it was. Sure, they are shinier and more polished, but not much better at all. I think it is time to curb our enthusiasm.

I almost always rewrite AI-written functions in my code a few weeks later. It doesn't matter whether they have more context or better context; they still fail to write code that humans can easily understand.

simonw 4 days ago|
Claude 3.5 was remarkably good at writing code. If Claude 3.7 and Claude 4 are just incremental improvements on that then even better!

I actually think they're a lot more than incremental. 3.7 introduced "thinking" mode, and 4 doubled down on that; thinking/reasoning/whatever-you-want-to-call-it is particularly good at code challenges.

As always, if you're not getting great results out of coding LLMs it's likely you haven't spent several months iterating on your prompting techniques to figure out what works best for your style of development.

ModernMech 4 days ago|
"Wow, AI will replace programming languages by allowing us to code in natural language!"

"Actually, you need to engineer the prompt to be very precise about what you want to AI to do."

"Actually, you also need to add in a bunch of "context" so it can disambiguate your intent."

"Actually English isn't a good way to express intent and requirements, so we have introduced protocols to structure your prompt, and various keywords to bring attention to specific phrases."

"Actually, these meta languages could use some more features and syntax so that we can better express intent and requirements without ambiguity."

"Actually... wait we just reinvented the idea of a programming language."

throwawayoldie 4 days ago||
Only without all that pesky determinism and reproducibility.

(Whoever's about to say "well ackshually temperature of zero", don't.)

whatevertrevor 3 days ago||
You forgot about lower performance and efficiency. And longer build/run cycles. And more hardware/power usage.
throwawayoldie 3 days ago||
There's just so much to like* about this technology, I was bound to forget something.

(*) "like" in the sense of "not like"

nimish 4 days ago|||
A half-baked programming language that isn't deterministic or reproducible or guaranteed to do what you want. Worst of all worlds, unless your input and output domains are tolerant of that, which most aren't. But if they are, then it's great.
georgeburdell 4 days ago|||
We should have known everything up through step 4 for a while. See: the legal system.
mindok 4 days ago||
“Actually - curly braces help save space in the context while making meaning clearer”