Posted by robotswantdata 1 day ago
I don't want to delete all thoughts right away, as it makes it easier for the AI to continue, but I also don't want to wade through endless superfluous comments.
Prompts and context.
Hopes and expectations.
Black holes and revelations.
We learned to write and then someone wrote novels.
Context, now, is for the AI, really, to overcome dogmas recursively and contiguously.
Wasn't that somebody's slogan at some point in the past?
Context over Dogma
I think good context engineering will be one of the most important pieces of the tooling that will turn “raw model power” into incredible outcomes.
Model power is one thing; model power plus the tools to use it will be quite another.
https://en.wikipedia.org/wiki/Stone_Soup
You need an expert who knows what to do and how to do it to get good results. Looks like coding with extra steps to me.
I DO use AI for some tasks, when I know exactly what I want done and how I want it done. The only issue then is the busywork of typing, which AI solves.
I'm trying to figure out how to build a "Context Management System" (as opposed to a Content Management System) for all of my prompts. I completely agree with the premise of this article: if you aren't managing your context, you are losing all of the context you create every time you start a new conversation. I want to collect all of the reusable blocks from every conversation I have, as well as from my research and reading around the internet. Something like a mashup of Obsidian with some custom Python scripts.
The ideal inner loop I'm envisioning is a "Project" document that uses Jinja templating to transclude a bunch of other context objects (code files, documentation, articles, and my own prompt fragments) and compose them into a master document that I can "compile" into a "superprompt" with the precise context I want for each prompt.
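Roughly what I have in mind is a small script like the sketch below. The directory layout, the include_fragment helper, and the template name are just assumptions for illustration; the only real piece is Jinja's Environment/FileSystemLoader API:

    # Hypothetical sketch: "compile" a superprompt from reusable context fragments
    # with Jinja templating. Paths, helper names, and template names are made up.
    from pathlib import Path
    from jinja2 import Environment, FileSystemLoader

    FRAGMENTS_DIR = Path("context")  # reusable blocks saved as plain-text/markdown files

    def load_fragment(name: str) -> str:
        """Read one saved context block (code file, doc excerpt, prompt fragment) as raw text."""
        return (FRAGMENTS_DIR / name).read_text()

    def compile_superprompt(project_template: str, **extra) -> str:
        """Render the 'Project' template, transcluding whatever fragments it references."""
        env = Environment(loader=FileSystemLoader(str(FRAGMENTS_DIR)))
        env.globals["include_fragment"] = load_fragment  # callable inside templates
        return env.get_template(project_template).render(**extra)

    if __name__ == "__main__":
        # project.md.j2 might contain lines like:
        #   {{ include_fragment("models.py") }}
        #   {{ include_fragment("feature_spec.md") }}
        #   Task: {{ task }}
        print(compile_superprompt("project.md.j2", task="Add pagination to the article list view"))

Reading fragments with a plain file read (instead of Jinja's own include tag) keeps code files from being parsed as templates themselves.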
Since chat interfaces just resend the entire previous conversation history on every message anyway, I don't even really want a chat-style interface so much as to "one-shot" the next step in development.
It's almost a turn-based game: I'll fiddle with the code and the prompts, and then run "end turn" and now it is the LLM's turn. On the LLM's turn, it compiles the prompt, runs inference, and outputs the changes. With Aider it can actually apply those changes itself. I'll then review the code using diffs, make changes, and that's a full turn of the game of AI-assisted coding.
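The "end turn" step itself can be tiny. Here's a rough sketch of what one turn could look like, building on the compile_superprompt sketch above; the model name is a placeholder and the Aider apply step is left out, so treat it as an illustration rather than my actual tooling:

    # One "turn": compile a fresh superprompt, send it as a single message, print the reply.
    # Uses the Anthropic Python SDK; compile_superprompt comes from the sketch above.
    import anthropic

    def end_turn(task: str) -> str:
        prompt = compile_superprompt("project.md.j2", task=task)  # fresh context, no chat history
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
        reply = client.messages.create(
            model="claude-3-5-sonnet-latest",  # placeholder; use whichever model you like
            max_tokens=4096,
            messages=[{"role": "user", "content": prompt}],  # one-shot: a single user message
        )
        return reply.content[0].text

    if __name__ == "__main__":
        print(end_turn("Add pagination to the article list view"))
        # Then review the proposed changes as diffs, apply what you want, and start the next turn.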
I love that I can just brain dump into speech-to-text, and LLMs don't really care that much about grammar and syntax. I can curate fragments of documentation and specifications for features, then just rant and rave about what I want for a while, and paste that into the chat. With my current LLM of choice being Claude, it seems to work really quite well.
My Django work feels like it's been supercharged with just this workflow, and my context management engine isn't even really that polished.
If you aren't getting high-quality output from LLMs, definitely consider how you are supplying context.