
Posted by robotswantdata 6/30/2025

The new skill in AI is not prompting, it's context engineering(www.philschmid.de)
915 points | 518 comments | page 9
walterfreedom 6/30/2025|
I am mostly focusing on this issue during the development of my agent engine (mostly for game NPCs). It's really important to manage the context and not bloat the LLM with irrelevant stuff, for both quality and inference speed. I wrote about it here if anyone is interested: https://walterfreedom.com/post.html?id=ai-context-management
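The trimming the parent describes can be sketched minimally. Below is a hypothetical token-budget filter that keeps the system prompt and drops the oldest conversation turns until the history fits; the function names, the 4-chars-per-token heuristic, and the budget are all illustrative, not taken from the linked post:

```python
# Minimal sketch of context trimming for an NPC agent (illustrative only):
# keep the system prompt, then keep the newest turns that fit a token budget.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_context(system_prompt: str, history: list[str], budget: int) -> list[str]:
    """Return the system prompt plus the newest turns that fit in `budget` tokens."""
    kept: list[str] = []
    used = estimate_tokens(system_prompt)
    for turn in reversed(history):  # walk newest-first
        cost = estimate_tokens(turn)
        if used + cost > budget:
            break  # this turn (and everything older) gets dropped
        kept.append(turn)
        used += cost
    return [system_prompt] + list(reversed(kept))

history = [
    "old gossip " * 50,                      # stale, token-heavy turn
    "player: where is the blacksmith?",
    "npc: past the well.",
]
context = trim_context("You are a village NPC.", history, budget=40)
```

With a budget of 40 tokens, the stale gossip turn is dropped while the two recent turns survive; a real agent would likely use a proper tokenizer and relevance scoring rather than recency alone.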
bag_boy 6/30/2025||
Anecdotally, I’ve found that chatting with Claude about a subject for a bit — coming to an understanding together, then tasking it — produces much better results than starting with an immediate ask.

I’ll usually spend a few minutes going back and forth before making a request.

For some reason, it just feels like this doesn't work as well with ChatGPT or Gemini. It might be my overuse of o3? The latency can wreck the vibe of a conversation.

asciii 6/30/2025||
Here I was thinking that part of Prompt Engineering is understanding context and awareness for other yada yada.
stillpointlab 6/30/2025||
I've been using the term context engineering for a few months now, I am very happy to see this gain traction.

This new stillpointlab hacker news account is based on the company name I chose to pursue my Context as a Service idea. My belief is that context is going to be the key differentiator in the future. The shortest description I can give to explain Context as a Service (CaaS) is "ETL for AI".

bgwalter 6/30/2025||
These discussions increasingly remind me of gamers discussing various strategies in WoW or similar. Purportedly working strategies found by trial and error and discussed in a language that is only intelligible to the in-group (because no one else is interested).

We are entering a new era of gamification of programming, where the power users force their imaginary strategies on innocent people by selling them to the equally clueless and gaming-addicted management.

iammrpayments 7/1/2025||
This is basically how online advertising works. Nobody knows how Facebook ads work, so you still have gurus making money selling senseless advice on how to get a lower cost per impression.
dysoco 7/1/2025|||
> Purportedly working strategies found by trial and error and discussed in a language that is only intelligible to the in-group

This really does sound like Computer Science since its very beginnings.

The only difference is that now it's a much more popular field, and not restricted to a few nerds sharing tips over e-mail or bbs.

dawnofdusk 7/1/2025|||
> This really does sound like Computer Science since its very beginnings.

Except in actual computer science you can prove that your strategies, discovered by trial and error, are actually good. Even though Dijkstra invented his eponymous algorithm by writing on a napkin, it's phrased in the language of mathematics: one can quantitatively analyze its effectiveness and trade-offs, and one can prove whether it's optimal (as was done recently).

BoiledCabbage 7/1/2025|||
> > This really does sound like Computer Science since its very beginnings.

> Except in actual computer science you can prove that your strategies, discovered by trial and error, are actually good.

Maybe it's true for computer science - but most people on here aren't doing computer science. They're doing software engineering. And it sure as heck isn't true for software engineering. If it were, I wouldn't be hearing arguments about programming languages for years, or static vs dynamic typing, or functional vs OOP...

So what you're arguing about AI isn't exactly anything new to software development.

pbhjpbhj 7/1/2025|||
Surely claims about context engineering can also be tested using scientific methodology?
slightwinder 7/1/2025|||
Yes, in theory. But it's testing against highly complex, ever-changing systems, where small changes can have a big impact on the outcome. So it's more akin to a "weak" science like psychology. And weak here means that most findings have a weak standing, because each variable contributes little individually in the complex setup being researched, making it harder to reproduce results.

Even more problematic is that too many "researchers" are just laymen lacking a proper scientific background, often just playing around with third-party services while delivering too much noise to the community.

So in general, AI also has something like the replication crisis, in its own way. But on the other hand, the latest wave of AI is only a few years old (3 years now?), which is not much on real scientific timescales.

fleischhauf 7/1/2025||||
Except the area is so hugely popular with people who unfortunately lack the rigor or curiosity to ask for this, and who blindly believe claims. For example, this hugely popular repository: https://github.com/x1xhlol/system-prompts-and-models-of-ai-t...

The authors fail to explain how the prompts were obtained, or how they prove that the prompts are valid and not a hallucination.

parpfish 7/1/2025|||
yeah, but it's a different type of science.

the move from "software engineering" to "AI engineering" is basically a switch from a hard science to a soft science.

rather than being chemists and physicists making very precise theory-driven predictions that are verified by experiment, we're sociologists and psychologists randomly changing variables and then doing a t-test afterward and asking "did that change anything?"
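That loop the comment describes, change one variable, run trials, and t-test the difference, can be sketched with stdlib-only Python. The eval scores below are made up, and Welch's t statistic is computed by hand to avoid assuming any third-party library:

```python
# Sketch of the "soft science" loop: score two prompt variants on the same
# task set, then ask "did that change anything?" with Welch's t statistic.
import statistics

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t statistic for two independent samples (unequal variances)."""
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    se = (var_a / len(a) + var_b / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

# Hypothetical per-task eval scores for prompt variant A vs. variant B.
scores_a = [0.71, 0.64, 0.69, 0.73, 0.66]
scores_b = [0.62, 0.60, 0.65, 0.58, 0.63]

t = welch_t(scores_a, scores_b)
# Compare |t| against a critical value (or compute a p-value) to decide
# whether the prompt change plausibly did anything.
```

The point of the analogy holds either way: the statistic tells you *that* one variant scored higher on this sample, not *why*.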

bwfan123 7/1/2025||
The difference is between having a "model" and a "theory". A theory tries to explain the "why" based on some givens, and a model tells you the "how". For engineering, we want the why and not just the how: for bugs, we want to root-cause and fix, not fix by trial-and-error.

the hard sciences have theories. and soft sciences have models.

computer science is built on theory (turing machine/lambda calc/logic).

AI models are, well, "models" - we don't know why it works, but it seems to - that's how models are.

bigfishrunning 7/1/2025|||
> This really does sound like Computer Science since its very beginnings.

Dijkstra is rolling in his grave. Computer Science was a rigorous sub-field of mathematics before the tech bros showed up and started moving fast and breaking things. The endless supply of VC money has destroyed this field.

matkoniecz 7/1/2025|||
> only intelligible to the in-group (because no one else is interested)

that applies to basically any domain-specific terminology, from WoW raids through cancer research to computer science and say OpenStreetMap

nixpulvis 7/1/2025|||
Having been a computer scientist and avid WoW player, I dislike this take. The best strategies always have a justification.
Madmallard 7/1/2025|||
There's quite a lot of science that goes into WoW strategizing at this point.

People are using their thinking caps and modelling data.

coderatlarge 6/30/2025|||
I tend to share your view. But then, your comment describes a lot of previous cycles of enterprise software selling. It's just that this time it reaches a little uncomfortably into the builder's/developer's traditional areas of influence/control/workflow. How devs feel now is probably how others (e.g. CSR, QA, SRE) felt in the past when their managers pushed whatever tooling/practice was becoming popular or sine qua non in previous "waves".
sarchertech 7/1/2025||
This has been happening to developers for years.

25 years ago it was object oriented programming.

coderatlarge 7/1/2025|||
or agile and scrums.
LtWorf 7/1/2025||
Our new CTO decided to move to agile and scrum, in an effort to reduce efficiency and morale.

He doesn't even take responsibility for it, but claims the board told him to do that.

coderatlarge 7/1/2025||
Is it supposed to "improve velocity"?
LtWorf 7/1/2025||
Is it supposed to add 30% overhead?
coderatlarge 7/2/2025||
unless you’re exceptionally lucky!
coliveira 7/1/2025|||
The difference is that with OO there was at least hope that a well-trained programmer could make it work. Nowadays, anyone who understands how AI works knows that's near impossible.
mrits 7/1/2025|||
Tuning the JVM, compiler optimizations, design patterns, agile methodologies, SEO: just a few things that come to mind.
tootie 7/1/2025||
It's also the entire SEO industry
noobermin 7/1/2025||
Once again, all the hypesters need to explain to me how this is better than just programming it yourself. I don't need to (re-)craft my context; it's already in my head.

pg said a few months ago on Twitter that AI coding is just proof we need better abstract interfaces - perhaps - not necessarily that AI coding is the future. The "conversation is shifting from blah blah to bloo bloo" framing makes me suspicious that people are just trying to salvage things. The provided examples are neither convincing nor enlightening to me at all. If anything, they just provide more evidence that "just doing it yourself is easier."

aryehof 7/1/2025||
Yay, everyone that writes a line of text to an LLM can now claim to be an "engineer".
mumbisChungo 7/1/2025||
context engineering, tool development, and orchestration

i.e. the new skill in AI is complex software development

almosthere 7/1/2025|
Which is prompt engineering, since you just ask the LLM for a good context for the next prompt.
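Taken literally, that's a two-step pipeline: one call drafts the context, the next call uses it. A minimal sketch, where the `llm` function is a hypothetical stand-in for a real model client (no actual API is called):

```python
# Two-step "ask the LLM for its own context" loop, with a stub standing in
# for a real model call. The stub just echoes, purely for demonstration.

def llm(prompt: str) -> str:
    # Hypothetical model call; replace with a real client in practice.
    return f"[model output for: {prompt[:40]}]"

def answer_with_generated_context(question: str) -> str:
    # Step 1 (the "prompt engineering" call): ask the model what background
    # the next prompt will need.
    context = llm(f"List the background facts needed to answer: {question}")
    # Step 2: feed that generated context back in with the actual question.
    return llm(f"Context:\n{context}\n\nQuestion: {question}")

reply = answer_with_generated_context("Why did the build fail?")
```

Whether this counts as prompt engineering or context engineering is exactly the naming debate in the thread; structurally it's just one call feeding another.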