Posted by robotswantdata 6/30/2025
And to drag this back to politics - that kind of suggests that when we have political polarisation, we just have contexts that are so different the LLM cannot arrive at similar conclusions
I guess it's obvious, but it's also interesting
Context is limited in length and too much stuff in the context can lead to confusion and poor results - the solution to that is "sub-agents", where a coordinating LLM prepares a smaller context and task for another LLM and effectively treats it as a tool call.
The best explanation of that pattern right now is this from Anthropic: https://www.anthropic.com/engineering/built-multi-agent-rese...
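A minimal sketch of that sub-agent pattern: a coordinator prepares a smaller context containing only what the subtask needs, then invokes a second LLM as if it were a tool call. `call_llm` here is a placeholder, not a real client; the keys and task are made up for illustration.

```python
# Sub-agent pattern sketch: the coordinator trims the full context down
# to only the pieces a subtask needs before calling a second LLM.
# `call_llm` is a stand-in: swap in an actual API client in practice.

def call_llm(prompt: str) -> str:
    # Placeholder that just echoes the prompt so the flow is visible.
    return "ECHO:\n" + prompt

def run_subagent(task: str, full_context: dict, needed_keys: list[str]) -> str:
    # Prepare a smaller context: only the keys this subtask needs,
    # not the whole conversation so far.
    small_context = {k: full_context[k] for k in needed_keys if k in full_context}
    prompt = "\n".join(
        [f"{k}: {v}" for k, v in small_context.items()] + [f"Task: {task}"]
    )
    return call_llm(prompt)

full_context = {
    "repo_layout": "src/, tests/, docs/",
    "style_guide": "PEP 8, type hints everywhere",
    "meeting_notes": "(long, irrelevant to this task)",
}
answer = run_subagent(
    "Add a unit test for the parser",
    full_context,
    needed_keys=["repo_layout", "style_guide"],  # meeting notes stay out
)
```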
https://en.wikipedia.org/wiki/Stone_Soup
You need an expert who knows what to do and how to do it to get good results. Looks like coding with extra steps to me
I DO use AI for some tasks - when I know exactly what I want done and how I want it done. The only bottleneck is the busywork of typing, which AI solves.
Most companies have context scattered across wikis, Slack, Google Docs. You can build sophisticated retrieval systems, but if you're feeding them fragmented information, you're missing huge optimization opportunities.
There's research backing this - Microsoft/Salesforce found a 39% accuracy drop in multi-turn conversations - so for agent interactions it's even more critical to give 'just enough' context (https://arxiv.org/pdf/2505.06120).
When (business) context is properly structured upfront, you minimize those patterns.
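One way to keep each turn at 'just enough' context is to cap history against a budget, always retaining the system message plus the most recent turns that fit. A rough sketch (characters stand in for tokens; a real version would use a tokenizer):

```python
# Sketch: trim multi-turn history to a budget, keeping the system
# message plus the newest turns that fit. Budget is in characters
# here purely for simplicity; real budgets are counted in tokens.

def trim_history(messages: list[dict], budget_chars: int = 4000) -> list[dict]:
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    kept, used = [], sum(len(m["content"]) for m in system)
    for m in reversed(rest):  # walk newest-first
        if used + len(m["content"]) > budget_chars:
            break
        kept.append(m)
        used += len(m["content"])
    return system + list(reversed(kept))  # restore chronological order

history = (
    [{"role": "system", "content": "You are a coding assistant."}]
    + [{"role": "user", "content": f"turn {i}: " + "x" * 500} for i in range(20)]
)
trimmed = trim_history(history, budget_chars=2000)
```

Dropping older turns wholesale is the bluntest option; summarising them into a single message is a common refinement.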
Anyway, it seems like most of these algorithms are fairly ad hoc things built into the various agents themselves these days, not something that exists in its own right. Seems like an opportunity to make this its own ecosystem, where context tools can be swapped and used independently of the agents that use them, similar to the LLMs themselves.
I think good context engineering will be one of the most important pieces of the tooling that will turn “raw model power” into incredible outcomes.
Model power is one thing, model power plus the tools to use it will be quite another.
For programming I don't use any prompts. I give it an already-solved problem as context or an example and ask it to implement something similar. One or two sentences, and that's it.
For other kinds of tasks, like writing, I use prompts, but even then context and examples are still the driving factor.
In my opinion, we are at an interesting point in history: individuals will now need their own personal database. Like companies over the last 50 years, which kept database records of customers, products, prices and so on, an individual will operate using personal contextual information, saved over a long period of time in wikis or SQLite rows.
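One way that personal database could look, sketched with SQLite (the table name, schema, and tags are my invention, not a standard): notes accumulate over time and get pulled back out by tag to assemble context for a prompt.

```python
import sqlite3

# Sketch of a personal context store: notes saved over time, tagged,
# and recalled later to build prompt context. Schema is illustrative.

con = sqlite3.connect(":memory:")  # use a file path for real persistence
con.execute(
    "CREATE TABLE context ("
    "id INTEGER PRIMARY KEY, tag TEXT, note TEXT, "
    "added TEXT DEFAULT CURRENT_TIMESTAMP)"
)

def remember(tag: str, note: str) -> None:
    con.execute("INSERT INTO context (tag, note) VALUES (?, ?)", (tag, note))

def recall(tag: str) -> list[str]:
    rows = con.execute(
        "SELECT note FROM context WHERE tag = ? ORDER BY id", (tag,)
    )
    return [note for (note,) in rows]

remember("health", "Allergic to penicillin")
remember("work", "Prefers async standups")
remember("health", "Runs 5k on Tuesdays")

# Assemble a context block for, say, a health-related question.
context_block = "\n".join(recall("health"))
```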
I get coining a new term and that can be useful in itself but I don't see a big conceptual jump here.
- compile scripts that can grep / compile list of your relevant files as files of interest
- make temp symlinks in relevant repos to each other for documentation generation, then pass the documentation collected from the respective repos to enable cross-repo ops to be performed atomically
- build scripts to copy schemas, db ddls, dtos, example records, api specs, contracts (still works better than MCP in most cases)
I found these steps not only help produce better output but also greatly reduce cost by avoiding some "reasoning" hops. I'm sure the practice can extend beyond coding.
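The "copy schemas, DDLs, API specs into context" step above can be sketched as a small script that globs files of interest under a repo root and concatenates them into one bundle to hand to an agent. The glob patterns and header format are examples, not a convention:

```python
import pathlib
import tempfile

# Sketch: collect "files of interest" (schemas, DDLs, API specs) into
# a single context bundle. Patterns below are illustrative examples.

FILES_OF_INTEREST = ["**/*.sql", "**/schema*.json", "**/openapi*.yaml"]

def build_context(root: str) -> str:
    base = pathlib.Path(root)
    chunks = []
    for pattern in FILES_OF_INTEREST:
        for path in sorted(base.glob(pattern)):
            # Label each file so the model knows where content came from.
            chunks.append(f"--- {path.relative_to(base)} ---\n{path.read_text()}")
    return "\n\n".join(chunks)

# Demo against a throwaway directory with two fake artifacts.
with tempfile.TemporaryDirectory() as d:
    (pathlib.Path(d) / "ddl.sql").write_text("CREATE TABLE users (id INT);")
    (pathlib.Path(d) / "schema_users.json").write_text('{"id": "int"}')
    bundle = build_context(d)
```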
I found opencode and decided to build a fork that lets you set up specialized agents with custom tools and programmatic context building to organize it better: https://github.com/mpazik/openagent
Sounds like good managers and leaders now have an edge. As Patty McCord of Netflix fame used to say: all a manager does is set the context.