Posted by robotswantdata 4 days ago
Sounds like good managers and leaders now have an edge. As Patty McCord of Netflix fame used to say: all a manager really does is set the context.
I know context engineering is critical for agents, but I wonder if it's also useful for shaping personality and improving overall relatability? I'm curious if anyone else has thought about that.
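To make concrete what I mean, here's a minimal sketch using the OpenAI Python SDK (the persona text and model name are just placeholders I made up): the whole "personality" is nothing but context, a system message prepended to every request.

    from openai import OpenAI

    client = OpenAI()

    # The entire "personality" is context: a system message prepended to
    # every request. Persona text and model name are placeholders.
    persona = (
        "You are a dry, slightly sardonic senior engineer. Answer briefly, "
        "and admit uncertainty rather than guessing."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": "Why is my build flaky?"},
        ],
    )
    print(response.choices[0].message.content)

Swap the persona string and you get a different "relatability" profile with zero changes to the model itself, which is why I suspect the answer to my own question is yes.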
If I'm debugging something with ChatGPT and I hit an error loop, my fix is to start a new conversation.
Now I can't be sure ChatGPT won't include notes from that previous conversation's context that I was trying to get rid of!
Thankfully you can turn the new memory thing off, but it's on by default.
I wrote more about that here: https://simonwillison.net/2025/May/21/chatgpt-new-memory/
It's good that you can turn it off. I can see how it might cause problems when trying to do technical work.
Edit: Note that the introduction of memory was a contributing factor to "the sycophant" that OpenAI had to roll back. When it could praise you while seeming to know you, it encouraged addictive use.
Edit2: Here's the previous Hacker News discussion on Simon's "I really don’t like ChatGPT’s new memory dossier"
AI turtles all the way down.
Also, for anyone working with LLMs right now this is a pretty obvious concept, and I'm surprised it's at the top of HN.
The concept of prompting - asking an oracle a question - was always a bit limited, since it means you're leaning on the LLM itself - the trained weights - to provide all the context you didn't explicitly mention in the prompt, and relying on it to generate coherently from the sliced and blended mix of StackOverflow, Reddit, etc. it was trained on. If you're using an LLM for code generation, you can obviously expect a better result if you feed it the API docs you want it to use, your code base, your project documents, and so on (i.e. "context engineering").
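In practice that's often nothing fancier than pasting the relevant material into the prompt yourself. A rough sketch (the file names are hypothetical; the point is just that the prompt carries the docs and code the model would otherwise have to "remember" from training):

    from pathlib import Path

    # Hypothetical file names; the point is that the prompt carries the
    # docs and code the model would otherwise have to "remember" from
    # its training data.
    api_docs = Path("docs/payments_api.md").read_text()
    source = Path("src/billing.py").read_text()

    prompt = f"""You are modifying an existing code base.

    API documentation:
    {api_docs}

    Current implementation:
    {source}

    Task: add retry-with-backoff to the charge() call, using only the
    documented API above."""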
Another term that has recently been added to the LLM lexicon is "context rot", which is quite a useful concept. When you use the LLM to generate, its output is of course appended to the initial input, and over extended bouts of attempted reasoning, with backtracking etc., the clarity of the context suffers ("rots") and eventually the LLM starts to fail in GIGO fashion (garbage in => garbage out). Your best recourse at that point is to clear the context and start over.
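A crude version of that recourse, sketched with the OpenAI Python SDK (the turn threshold and model name are arbitrary assumptions): track the transcript yourself and reset it once it gets long.

    from openai import OpenAI

    client = OpenAI()
    history = [{"role": "system", "content": "You are a coding assistant."}]

    MAX_TURNS = 20  # arbitrary threshold; tune to your model's context window

    def ask(question: str) -> str:
        global history
        # Crude rot mitigation: once the transcript is long (backtracking,
        # failed attempts, etc.), throw it away and start from a clean
        # context instead of letting garbage accumulate in the prompt.
        if len(history) > MAX_TURNS:
            history = history[:1]  # keep only the system message
        history.append({"role": "user", "content": question})
        reply = client.chat.completions.create(model="gpt-4o", messages=history)
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer

A smarter version would summarize the old transcript instead of discarding it, but the blunt reset is often exactly what you want after an error loop.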
i.e. the new skill in AI is complex software development
I almost always rewrite AI-written functions in my code a few weeks later. It doesn't matter whether they have more context or better context; they still fail to write code that's easily understandable by humans.
I actually think they're a lot more than incremental. 3.7 introduced "thinking" mode, 4 doubled down on that, and thinking/reasoning/whatever-you-want-to-call-it is particularly good at coding challenges.
As always, if you're not getting great results out of coding LLMs, it's likely you haven't spent several months iterating on your prompting techniques to figure out what works best for your style of development.
"Actually, you need to engineer the prompt to be very precise about what you want to AI to do."
"Actually, you also need to add in a bunch of "context" so it can disambiguate your intent."
"Actually English isn't a good way to express intent and requirements, so we have introduced protocols to structure your prompt, and various keywords to bring attention to specific phrases."
"Actually, these meta languages could use some more features and syntax so that we can better express intent and requirements without ambiguity."
"Actually... wait we just reinvented the idea of a programming language."
(Whoever's about to say "well ackshually temperature of zero", don't.)
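For illustration, the endpoint of that progression tends to look something like the sketch below. Every keyword and section name here is invented, but you've probably seen prompts shaped like it:

    # Every keyword and section below is made up, but squint and it's
    # a (bad) programming language whose interpreter is the model.
    prompt = """
    ROLE: senior backend engineer
    CONTEXT: <paste API docs and relevant source here>
    TASK: implement pagination for /users
    CONSTRAINTS:
      - MUST use the existing cursor helper
      - MUST NOT change the public response schema
    OUTPUT: unified diff only
    """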
(*) "like" in the sense of "not like"