Posted by robotswantdata 1 day ago

The new skill in AI is not prompting, it's context engineering(www.philschmid.de)
838 points | 473 comments
blensor 16 hours ago|
Just yesterday I was wondering whether we need a code comment system that separates intentional comments from AI notes/thoughts when working in the same files.

I don't want to delete all the AI's thoughts right away, since they make it easier for the AI to continue, but I also don't want to weed through endless superfluous comments
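A lightweight version of that separation could be a tagged marker plus a filter pass; a minimal sketch, assuming a made-up `# ai-note:` convention (the marker name is invented for illustration):

```python
import re

# Hypothetical marker distinguishing AI scratch notes from intentional comments.
AI_NOTE = re.compile(r"^\s*# ai-note:")

def strip_ai_notes(source: str) -> str:
    """Remove lines tagged as AI notes, keeping intentional comments and code."""
    return "\n".join(
        line for line in source.splitlines() if not AI_NOTE.match(line)
    )

code = """\
# ai-note: consider refactoring this later
# Validate input before use
x = 1
"""
print(strip_ai_notes(code))
```

Keeping the notes in the file until a "cleanup" pass like this runs gives the AI its continuity without the comments surviving into review.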

megalord 18 hours ago||
I agree with everything in the blog post. What I'm struggling with right now is how to execute things in the safest way while still giving the LLM flexibility. Execute/choose a function from a list of available fns is fine for most use cases, but for anything more complex we need to execute several things from the allowed list, do some computation between calls, etc.
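One way to get that middle ground is a small plan interpreter over an allowlist: the model emits a sequence of named calls, and intermediate results are threaded between steps so computation can happen between calls without arbitrary code execution. A minimal sketch (the allowlist contents and the `$prev` sentinel are invented for illustration):

```python
from typing import Any, Callable

# Hypothetical allowlist: the model may only invoke these functions by name.
ALLOWED: dict[str, Callable[..., Any]] = {
    "add": lambda a, b: a + b,
    "scale": lambda x, factor: x * factor,
}

def execute_plan(plan: list[dict]) -> list[Any]:
    """Run a sequence of allowed calls, threading intermediate results.

    Each step may reference the previous step's result via the sentinel "$prev".
    """
    results: list[Any] = []
    prev: Any = None
    for step in plan:
        fn = ALLOWED.get(step["fn"])
        if fn is None:
            raise ValueError(f"function not in allowlist: {step['fn']}")
        args = [prev if a == "$prev" else a for a in step["args"]]
        prev = fn(*args)
        results.append(prev)
    return results

# e.g. (2 + 3), then scale the result by 10
print(execute_plan([
    {"fn": "add", "args": [2, 3]},
    {"fn": "scale", "args": ["$prev", 10]},
]))  # -> [5, 50]
```

The safety boundary stays at the allowlist, while the plan format gives the model multi-step flexibility.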
sonicvrooom 20 hours ago||
Premises and conclusions.

Prompts and context.

Hopes and expectations.

Black holes and revelations.

We learned to write and then someone wrote novels.

Context, now, is for the AI, really, to overcome dogmas recursively and contiguously.

Wasn't that somebody's slogan someday in the past?

Context over Dogma

thatthatis 17 hours ago||
Glad we have a name for this. I had been calling it “context shaping” in my head for a bit now.

I think good context engineering will be one of the most important pieces of the tooling that will turn “raw model power” into incredible outcomes.

Model power is one thing, model power plus the tools to use it will be quite another.

Davidzheng 16 hours ago||
Let's grant that context engineering is here to stay and that context lengths will never be large enough to throw everything in indiscriminately. Why isn't this the perfect place to train another AI whose job is to provide the context for the main AI?
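Even before training anything, the shape of that idea can be prototyped with a cheap heuristic selector in front of the main model; a toy sketch where keyword overlap stands in for a learned context-provider (the snippets and scoring are invented for illustration):

```python
def select_context(query: str, snippets: list[str], k: int = 2) -> list[str]:
    """Toy stand-in for a learned context selector: rank candidate
    snippets by word overlap with the query and keep the top k."""
    q = set(query.lower().split())
    return sorted(
        snippets,
        key=lambda s: len(q & set(s.lower().split())),
        reverse=True,
    )[:k]

snippets = [
    "Django models define database tables.",
    "The weather today is sunny.",
    "Django views handle HTTP requests.",
]
print(select_context("how do django views work", snippets, k=2))
```

A trained selector model would replace the overlap score, but the pipeline (selector feeds the main model's context window) stays the same.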
defyonce 20 hours ago||
At what point does the AI thing stop being a stone soup?

https://en.wikipedia.org/wiki/Stone_Soup

You need an expert who knows what to do and how to do it to get good results. It looks like coding with extra steps to me.

I DO use AI for some tasks, when I know exactly what I want done and how I want it done. The only issue is the busywork of typing, which AI solves.

walterfreedom 20 hours ago|
AI is already very impressive at natural-language formatting and filtering; we use it for ratifying profiles and posts. It takes around an hour to implement this from scratch, and there are no alternatives that can do the same thing as comprehensively anyway.
Snowfield9571 22 hours ago||
What’s it going to be next month?
taylorius 23 hours ago||
The model starts every conversation as a blank slate, so providing a thorough context regarding the problem you want it to solve seems a fairly obvious preparatory step tbh. How else is it supposed to know what to do? I agree that "prompt" is probably not quite the right word to describe what is necessary though - it feels a bit minimal and brief. "Context engineering" seems a bit overblown, but this is tech, and we do love a grand title.
_Algernon_ 20 hours ago||
The prompt alchemists found a new buzzword to try to hook into the legitimacy of actual engineering disciplines.
colgandev 1 day ago|
I've been finding a ton of success lately with speech to text as the user prompt, and then using https://continue.dev in VSCode, or Aider, to supply context from files from my projects and having those tools run the inference.

I'm trying to figure out how to build a "Context Management System" (as compared to a Content Management System) for all of my prompts. I completely agree with the premise of this article, if you aren't managing your context, you are losing all of the context you create every time you create a new conversation. I want to collect all of the reusable blocks from every conversation I have, as well as from my research and reading around the internet. Something like a mashup of Obsidian with some custom Python scripts.

The ideal inner loop I'm envisioning is to create a "Project" document that uses Jinja templating to allow transclusion of a bunch of other context objects like code files, documentation, articles, and then also my own other prompt fragments, and then to compose them in a master document that I can "compile" into a "superprompt" that has the precise context that I want for every prompt.
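Jinja would give real transclusion (`{% include %}`, loops, and so on); as a minimal stdlib stand-in for the "compile into a superprompt" step, with fragment names and contents invented for illustration:

```python
from string import Template

# Hypothetical context fragments; in practice these might be files on disk.
fragments = {
    "style_guide": "Prefer small functions. Type-annotate public APIs.",
    "models_py": "class Article(models.Model):\n    title = models.CharField(max_length=200)",
}

# A "Project" template that transcludes the fragments into one superprompt.
project = Template(
    "You are working on a Django app.\n\n"
    "## Style guide\n$style_guide\n\n"
    "## Current models\n$models_py\n\n"
    "## Task\n$task\n"
)

superprompt = project.substitute(
    **fragments,
    task="Add a published_at field and a migration.",
)
print(superprompt)
```

The same structure maps directly onto Jinja templates once the fragment library outgrows simple substitution.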

Since chat interfaces just resend the entire previous conversation history anyway, I don't really want a chat-style interface so much as to "one-shot" the next step in development.

It's almost a turn based game: I'll fiddle with the code and the prompts, and then run "end turn" and now it is the llm's turn. On the llm's turn, it compiles the prompt and runs inference and outputs the changes. With Aider it can actually apply those changes itself. I'll then review the code using diffs and make changes and then that's a full turn of the game of AI-assisted code.

I love that I can just brain dump into speech to text, and llms don't really care that much about grammar and syntax. I can curate fragments of documentation and specifications for features, and then just kind of rant and rave about what I want for a while, and then paste that into the chat and with my current LLM of choice being Claude, it seems to work really quite well.

My Django work feels like it's been supercharged with just this workflow, and my context management engine isn't even really that polished.

If you aren't getting high quality output from llms, definitely consider how you are supplying context.
