Posted by robotswantdata 5 days ago

The new skill in AI is not prompting, it's context engineering (www.philschmid.de)
895 points | 517 comments
rTX5CMRXIfFG 4 days ago|
So then for code generation purposes, how is “context engineering” different now from writing technical specs? Providing the LLMs the “right amount of information” means writing specs that cover all states and edge cases. Providing the information “at the right time” means writing composable tech specs that can be interlinked with each other so that you can prompt the LLM with just the specs for the task at hand.
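
A minimal sketch of what I mean by composable, interlinked specs (hypothetical file names, a made-up [[link]] syntax, and a build_prompt helper I'm inventing for illustration, not any particular tool's format):

    # assemble_context.py -- hypothetical sketch, not an existing tool
    from pathlib import Path
    import re

    SPEC_DIR = Path("specs")
    LINK_RE = re.compile(r"\[\[(\S+?)\]\]")  # e.g. [[auth.md]] inside another spec

    def load_spec(name: str, seen: set) -> str:
        # Load one spec, then recursively pull in the specs it links to.
        if name in seen:
            return ""
        seen.add(name)
        text = (SPEC_DIR / name).read_text()
        linked = "\n\n".join(load_spec(m, seen) for m in LINK_RE.findall(text))
        return text + ("\n\n" + linked if linked else "")

    def build_prompt(task: str, entry_spec: str) -> str:
        # Only the specs reachable from the task's entry point end up in context.
        return load_spec(entry_spec, set()) + "\n\nTask: " + task

    print(build_prompt("Add rate limiting to the login endpoint", "auth.md"))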
insane_dreamer 4 days ago||
Semantics. The context is actually part of the "prompt". Sure we can call it "context engineering" instead of "prompt engineering", where now the "prompt" is part of the "context" (instead of the "context" being part of the "prompt") but it's essentially the same thing.
rvz 5 days ago||
This is just another "rebranding" of the failed "prompt engineering" trend, one more borderline pseudo-scientific fad to attract VC money and fund a new pyramid scheme.

Assuming that this will be using the totally flawed MCP protocol, I can only see more cases of data exfiltration attacks on these AI systems just like before [0] [1].

Prompt injection + Data exfiltration is the new social engineering in AI Agents.

[0] https://embracethered.com/blog/posts/2025/security-advisory-...

[1] https://www.bleepingcomputer.com/news/security/zero-click-ai...

Zopieux 5 days ago|
Rediscovering basic security concepts and hygiene from 2005 is also a very hot AI thing right now, so that tracks.
blensor 4 days ago||
Just yesterday I was wondering whether we need a code comment system that separates intentional comments from the AI's notes/thoughts when working in the same files.

I don't want to delete all of its thoughts right away, since they make it easier for the AI to continue, but I also don't want to weed through endless superfluous comments.
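
One low-tech version of that (a hypothetical convention I'm making up, not an existing tool): have the model tag its notes with a distinct marker, then strip them in one pass once you no longer need them, e.g.

    # strip_ai_notes.py -- hypothetical sketch around an assumed "# AI-NOTE:" convention
    import sys

    MARKER = "# AI-NOTE:"  # whatever prefix your team agrees the model should use

    def strip_ai_notes(source: str) -> str:
        # Drop whole-line AI notes; human-written comments are left untouched.
        kept = [line for line in source.splitlines()
                if not line.lstrip().startswith(MARKER)]
        return "\n".join(kept) + "\n"

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            with open(path) as f:
                cleaned = strip_ai_notes(f.read())
            with open(path, "w") as f:
                f.write(cleaned)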

saejox 5 days ago||
Claude 3.5 was released a year ago, and current LLMs are not much better at coding than it was. Sure, they are shinier and more polished, but not much better at all. I think it is time to curb our enthusiasm.

I almost always rewrite AI-written functions in my code a few weeks later. It doesn't matter whether the models have more context or better context; they still fail to write code that is easily understandable by humans.

simonw 5 days ago|
Claude 3.5 was remarkably good at writing code. If Claude 3.7 and Claude 4 are just incremental improvements on that then even better!

I actually think they're a lot more than incremental. 3.7 introduced "thinking" mode, 4 doubled down on that, and thinking/reasoning/whatever-you-want-to-call-it is particularly good at code challenges.

As always, if you're not getting great results out of coding LLMs it's likely you haven't spent several months iterating on your prompting techniques to figure out what works best for your style of development.

m3kw9 4 days ago||
Just like prompt engineering before it, context engineering will be phased out in around 6 months.
_Algernon_ 4 days ago||
The prompt alchemists found a new buzzword to try to hook into the legitimacy of actual engineering disciplines.
damnever 1 day ago||
It is still "prompting".
surrTurr 4 days ago||
Context engineering will be just another fad, like prompt engineering was. Once the context window problem is solved, nobody will be talking about it any more.

Also, for anyone working with LLMs right now, this is a pretty obvious concept and I'm surprised it's on top of HN.

m3kw9 4 days ago|
Context “engineering” should probably involve knowing how the LLM handles context size: say, needle-in-a-haystack performance, how context size affects hallucination rate, and when to summarize context instead of passing in the full thing.
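
Roughly the kind of rule I have in mind (a toy sketch; the 4-chars-per-token estimate and the summarize callable are placeholders, not a real API):

    # context_budget.py -- hypothetical summarize-when-too-big rule
    def estimate_tokens(text: str) -> int:
        # Crude heuristic: roughly 4 characters per token for English text.
        return len(text) // 4

    def fit_context(history: list, budget: int, summarize) -> str:
        # Pass the raw history if it fits; otherwise summarize the older turns
        # and keep the most recent ones verbatim.
        full = "\n".join(history)
        if estimate_tokens(full) <= budget:
            return full
        head, tail = history[:-5], history[-5:]
        return summarize("\n".join(head)) + "\n" + "\n".join(tail)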