Posted by kierangill 15 hours ago
[Research] Ask the agent to explain the current functionality as a way to load the right files into context.
[Plan] Ask the agent to brainstorm the best way to implement a new feature or refactor. "Brainstorm" seems to be a keyword that triggers a better questioning loop for the agent. Ask it to write a detailed implementation plan to an md file.
[Clear] Completely clear the agent's context; this gives better results than just compacting the conversation.
[Execute plan] Ask the agent to review the specific plan again; sometimes it will ask additional questions, which repeats the planning phase. This loads only the plan into context, and then you have it implement the plan.
[Review & test] Clear the context again and ask it to review the plan to make sure everything was implemented. This is where I add any unit or integration tests if needed. Also run test suites, type checks, lint, etc.
With this loop I’ve often had it run for 20-30 minutes straight and end up with usable results. It’s become a game of context management and creating a solid testing feedback loop instead of trying to purely one-shot issues.
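For concreteness, a rough sketch of the per-phase prompts (the wording here is illustrative, not a magic formula; `/clear` is the built-in command for wiping context, and plan.md is whatever md file you use):

```
# Research
"Explain how <the area I'm touching> currently works. List the files
involved and why each one matters."

# Plan
"Brainstorm the best way to implement <new feature>. Ask me clarifying
questions first, then write a detailed implementation plan to plan.md."

/clear

# Execute plan
"Read plan.md. Ask any remaining questions, then implement the plan."

/clear

# Review & test
"Read plan.md and verify every item was implemented. Run the test suite,
type checks, and lint, and fix anything that fails."
```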
The biggest gotcha I've found is that these LLMs love to write code as if it were C or Python, just transliterated into your favorite language of choice. Instead of recognizing that something should be encapsulated in an object to maintain state, they will write five functions and pass the state as parameters between each of them. They will also consistently ignore most of the code around them, even when reading it would show exactly what could be reused. So you end up with copy-pasta code, and unstructured copy-pasta at best.
The other gotcha is that Claude usually ignores CLAUDE.md. So for me, I first prompt it to read that, and then I prompt it to explore. With those two steps done, it usually does a good job following my request to fix something, add a new feature, or whatever, all within a single context. These recent agents do a much better job of throwing away useless context.
I do think the older models and agents get better results when writing things to a plan document, but I've noticed recent Opus and Sonnet usually end up just writing the same code into the plan document anyway. That usually ends up confusing the model, because it can't connect the plan to the code around the changes as easily.
Sounds very functional, testable, and clean. Sign me up.
I have a user prompt saved called "clean code" that makes a pass through the changes to remove unused code, DRY things up, and refactor - literally the high points of Uncle Bob's Clean Code. It works shockingly well at taking AI code and making it somewhat maintainable.
We've taken those prompts, tweaked them to be more relevant to us and our stack, and have pulled them in as custom commands that can be executed in Claude Code, i.e. `/research_codebase`, `/create_plan`, and `/implement_plan`.
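For anyone who hasn't set these up: a custom command is just a Markdown file under `.claude/commands/` (e.g. `.claude/commands/create_plan.md`), and its contents become the prompt when you invoke the slash command, with `$ARGUMENTS` substituted for whatever you type after it. A stripped-down sketch of the kind of thing `/create_plan` might contain (not our exact version):

```
Brainstorm the best way to implement: $ARGUMENTS

- Ask clarifying questions before committing to an approach.
- Look for existing patterns in this repo and reuse them where possible.
- Write the final plan to a markdown file with explicit, checkable steps,
  including which tests will prove each step works.
```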
It's working exceptionally well for me; it helps that I'm very meticulous about reviewing the output and correcting it during the research and planning phases. Aside from a few use cases with mixed results, it hasn't really taken off across our team, unfortunately.
For planning large tasks like "set up Playwright tests in this project with some demo tests" I spend some time chatting with Gemini 3 or Opus 4.5 to figure out the most idiomatic easy wins and possible pitfalls. Like: a separate database for Playwright tests. Separate users in Playwright tests. Skipping the login flow for most tests. And so on.
I suspect that devs who use a formal-plan-first approach tend to tackle larger tasks and even vibe code large features at a time.
I've had models do the complete opposite of what I've put in the plan and guidelines. I've had them go re-read the exact sentences, and still see them come to the opposite conclusion, and my instructions are nothing complex at all.
I used to think one could build a workflow and process around LLMs that extract good value from them consistently, but I'm now not so sure.
I notice that sometimes the model will be in a good state, and do a long chain of edits of good quality. The problem is, it's still a crap-shoot how to get them into a good state.
Biggest step-change has been being able to one-shot file refactors (using the planning framework I mentioned above). 6 months ago refactoring was a very delicate dance and now it feels like it’s pretty much streamlined.
I was prepared to go back to my original message and spot an obvious-in-hindsight grey area/phrasing issue on my part as the root cause but there was nothing in the request itself that was unclear or problematic, nor was it buried deep within a laundry list of individual requests in a single message. Of course, the CLI agents did all sorts of scanning through the codebase/self debate/etc in between the request and the first code output. I'm used to how modern models/agents get tripped up by now so this was an unusually clear cut failure to encounter from the latest large commercial reasoning models.
In both instances, literally just restating the exact same request with "No, the request was: [original wording]" was all it took to steer them back, and it didn't become a concerning pattern. But with the unpredictability of how the CLI agents decide to traverse a repo and ingest large amounts of distracting code/docs, it seems much too overconfident to believe that random, bizarre LLM "reasoning" failures won't still occur from time to time in regular usage, even as models improve, given their inherent limitations.
(If I were bending over backwards to be charitable/anthropomorphize, it would be the human failure mode of "I understood exactly what I was asked for and what I needed to do, but then somehow did the exact opposite, haha oops brain fart!" but personally I'm not willing to extend that much forgiveness/tolerance to a failure from a commercial tool I pay for...)
LLMs become increasingly error-prone as their memory fills up. Just like humans.
In VSCode Copilot you can keep track of how many tokens the LLM is dealing with in real time with "Chat Debug".
When it reaches 90k tokens I expect degraded intelligence and brace for a possible forced summarization.
Sometimes I just stop LLMs and continue the work in a new session.
Codex and Claude give a nice report, and if I see they're not considering this or that I can tell 'em.
For really big features or plans I’ll ask the agent to create Linear issue tickets to track progress for each phase over multiple sessions. The only MCP I usually have loaded is Linear, but I'm looking for a good way to transition it to a skill.
It’ll report, “Numbers changed in step 6a therefore it worked” [forgetting the pivotal role of step 2 which failed and as a result the agent should have taken step 6b, not 6a].
Or “there is conclusive evidence that X is present and therefore we were successful” [X is discussed in the plan as the reason why action is NEEDED, not as success criteria].
I _think_ that what is going wrong is context overload, and my remedy is to have the agent update every step of the plan with results immediately after acting and before moving on to the next step.
When things seem off I can then clear context and have the agent review results step by step to debug its own work: “Review step 2 of the results. Are the stated results consistent with the final conclusions? Quote lines from the results verbatim as evidence.”
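For reference, the plan skeleton I have it maintain looks roughly like this per step (the format is just what's worked for me, nothing special about it):

```
## Step 2: <short description of the action>
Planned action: ...
Result (filled in immediately after acting, before starting the next step):
  - Commands run:
  - Output summary (quote key lines verbatim):
  - Did the outcome match the expectation above? yes/no, and why
```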
At a basic level, they work akin to git hooks, but they fire up a whole new context whenever certain events trigger (e.g. another agent finishes implementing changes), and that hook instance is independent of the implementation context (which is great, as for the review case it is a semi-independent reviewer).
I'm far from an LLM power user, but this is the single highest ROI practice I've been using.
You have to actually observe what the LLM is trying to do each time. Simply smashing enter over and over again or setting it to auto-accept everything will just burn tokens. Instead, see where it gets stuck and add a short note to CLAUDE.md or equivalent. Break it out into sub-files to open for different types of work if the context file gets large.
Letting the LLM churn and experiment for every single task will make your token quota evaporate before your eyes. Updating the context file constantly is some extra work for you, but it pays off.
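The notes themselves stay tiny. My CLAUDE.md is basically a short list of pointers; the commands and paths below are made-up examples, not a template:

```
# CLAUDE.md

- Run tests with `make test`; never invoke the test runner directly.
- Before touching the billing module, read docs/agents/billing.md.
- Before writing a database migration, read docs/agents/migrations.md.
- Never edit generated files under src/gen/.
```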
My primary use case for LLMs is exploring code bases and giving me summaries of which files to open, tracing execution paths through functions, and handing me the info I need. It also helps a lot to add some instructions for how to deliver useful results for specific types of questions.
Better than that, ask the LLM. Better than that, have the LLM ask itself. You do still have to make sure it doesn't go off the rails, but the LLM itself wrote this to help answer the question:
### Pattern 10: Student Pattern (Fresh Eyes)
*Concept:* Have a sub-agent read documentation/code/prompts "as a newcomer" to find gaps, contradictions, and confusion points that experts miss.
*Why it works:* Developers write with implicit knowledge they don't realize is missing. A "student" perspective catches assumptions, undefined terms, and inconsistencies.
*Example prompt:*
```
Task: "Student Pattern Review

Pretend you are a NEW AI agent who has never seen this codebase. Read these docs as if encountering them for the first time:
1. CLAUDE.md
2. SUB_AGENT_QUICK_START.md

Then answer from a fresh perspective:

## Confusion Points
- What was confusing or unclear on first read?
- What terms are used without explanation?

## Contradictions
- Where do docs disagree with each other?
- What's inconsistent?

## Missing Information
- What would a new agent need to know that isn't covered?

## Recommendations
- Concrete edits to improve clarity

Be honest and critical. Include file:line references."
```
*Use cases:* Before finalizing new documentation; evaluating prompts for future agents.
I feel like I spend quite a bit of time telling the thing to look at information it already knows. And I'm talking about when I HAVE actually created various documents to use and prompts.
As a specific example, it regularly just doesn't reference CLAUDE.md and it seems pretty random as to when it decides to drop that out of context. That's including right at session start when it should have it fresh.
I would agree with that!
I've been experimenting with having Claude re-write those documents itself. It can take simple directives and turn them into hierarchical Markdown lists that have multiple bullet points. It's annoying and overly verbose for humans to read, but the repetition and structure seems to help the LLM.
I also interrupt it and tell it to refer back to CLAUDE.md if it gets too off track.
Like I said, though, I'm not really an LLM power user. I'd be interested to hear tips from others with more time on these tools.
Overcoming this kind of nondeterministic behavior around creating/following/modifying instructions is the biggest thing I wish I could solve with my LLM workflows. It seems like you might be able to do this through a system of Claude Code hooks, but I've struggled with finding a good UX for maintaining a growing and ever-changing collection of hooks.
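The closest I've gotten is a hook that re-injects the rules on every prompt. A minimal sketch of a `.claude/settings.json`, assuming the hooks format hasn't changed (my understanding is that stdout from a UserPromptSubmit hook gets added to the context before the model sees your message; `.claude/rules/current-task.md` is just a made-up file where the dynamic rules would live):

```
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "cat CLAUDE.md .claude/rules/current-task.md 2>/dev/null"
          }
        ]
      }
    ]
  }
}
```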
Are there any tools or harnesses that attempt to address this and allow you to "force" inject dynamic rules as context?
I’ve found that when my agent flies off the rails, it’s due to an underlying weakness in the construction of my program. The organization of the codebase doesn’t implicitly encode the “map”. Writing a prompt library helps to overcome this weakness, but I’ve found that the most enduring guidance comes from updating the codebase itself to be more discoverable.
Which, I've had it delete the entire project, including .git, out of "shame", so my Claude doesn't get permission to run rm anymore.
Codex has fewer levers but it's deleted my entire project twice now.
(Play with fire, you're gonna get burnt.)
Also, I have extremely frequent commits and version control syncs to GitHub and so on as part of the process (including when it's working on documents or things that aren't code) as a way to counteract this.
Although I suppose a sufficiently devious AI can get around those, it seems to not have been a problem.
...and then ran a fatally flawed "git checkout" command that wiped out all unstaged changes, which it immediately realized, and after flailing around for five minutes trying to undo it, eventually came back saying "yeah uh so sorry, but... here's the thing..."
For Claude Chrome, I highly recommend using a separate profile. I also blocked my bank.com (not just via /etc/hosts but as this message is going to get harvested for training days, I unfortunately won't say what it is here. Email me if you really have to know - and promise you'll not just turn around and tell the whole Internet to AI) out of extra paranoia. Better paranoid and not got, than getting got, imo.
My rm interdiction script (which is far from 100%). https://gist.github.com/fragmede/96f35225c29cf8790f10b1668b8...
I've been having a lot of fun taking my larger projects and decomposing them into directed graphs where the nodes are nix flakes. If I launch claude code in a flake devshell it has access to only those tools, and it sees the flake.nix and assumes that the project is bounded by the CWD even though it's actually much larger, so its context is small and it doesn't get overwhelmed.
Inputs/outputs are a nice language agnostic mechanism for coordinating between flakes (just gotta remember to `nix flake update --update-input` when you want updated outputs from an adjacent flake). Then I can have them write feature requests for each other and help each other test fixtures and features. I also like watching them debate over a design, they get lazy and assume the other "team" will do the work, but eventually settle on something reasonable.
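To make that concrete, each node ends up being a small flake along these lines (a rough sketch; `service-b`, the package names, and the relative-path input are placeholders, and depending on your Nix version you may prefer git+file URLs for inputs):

```
{
  description = "service-a: one node in the project graph";

  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
    # an adjacent subflake in the same repo
    service-b.url = "path:../service-b";
  };

  outputs = { self, nixpkgs, service-b }:
    let
      system = "x86_64-linux";
      pkgs = nixpkgs.legacyPackages.${system};
    in {
      # claude code launched in this devshell only sees these tools,
      # and treats the CWD as the whole project
      devShells.${system}.default = pkgs.mkShell {
        packages = [ pkgs.python3 ];
      };

      # whatever this node produces for its neighbours (generated
      # contracts, test fixtures, a built package) is exposed as an
      # output so the adjacent flake can take it as an input
      packages.${system}.contracts = pkgs.writeText "contracts.json" ''
        { "placeholder": true }
      '';
    };
}
```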
I've been running with the idea for a few weeks, maybe it's dumb, but I'd be surprised if this kind of rethinking didn't eventually yield a radical shift in how we organize code, even if the details look nothing like what I've come up with. Somehow we gotta get good at partitioning context so we can avoid the worst parts of the exponential increase in token volume that comes from submitting the entire chat session history just to get the next response.
The focus mainly seems to be on enhancing existing workflows to produce code we currently expect - often you hear it's like a junior dev.
The type of rethinking you outlined could have code organised in such a way that a junior dev would never be able to extend it, but our 'junior dev' LLM can iterate through changes easily.
I care more about the properties of software, e.g. testable, extendable, secure, than how it is organised.
It gets me thinking about questions like:
- What is the correlation between how code is organised and its properties?
- What is the optimal organisation of code to help LLMs modify and extend software?
https://github.com/MatrixManAtYrService/poag
I'm especially pleased with how explicit it makes the inner dependency graph. Today I'm tinkering with pact (https://docs.pact.io/). I like that I'm forced to add the pact contracts generated during consumer testing as flake outputs (so they can then be inputs to whichever flake does provider testing). It's potentially a bit more work than it would be under other schemes, but it also makes the directionality of the dependency into a first class citizen and not an implementation detail. Otherwise it would be easy to forget which batch of tests depends on artifacts generated by the other.
I suppose there are things like Bazel for that sort of thing also, but I don't think you can drop an agent into a Bazel... thingy... and expect it to feel at home.
You can use it as an alternative to `git bisect` where you're only bisecting the history of a single subflake. I imagine writing a new test that indicates the presence of an old bug, and then going back in time to see when the bug was reintroduced. With git bisect, going back in time means your new test goes away too.
IMO, the best way to raise the floor of LLM performance in codebases is by building meaning into the codebase itself, a la DDD. If your codebase is hard for a human to understand and grok, it will be the same for an LLM. If your codebase is unstructured and has no definable patterns, it will be harder for an LLM to use.
You can try to overcome this with even more tooling and more workflows, but IMO that is throwing good money after bad. It is ironic and maybe unpopular, but it turns out LLMs prove that all the folks yapping about language and meaning (re: DDD) were right.
DDD & the Simplicity Gospel:
https://oluatte.com/posts/domain-driven-design-simplicity-go...
I have the complete opposite experience: once some patterns already exist two or three times in the codebase, the LLMs start accurately replicating them instead of trying to solve everything with one-off solutions.
> You can’t be inconsistent if there are no existing patterns.
"Consistency" shouldn't be equated to "good". If that's your only metric for quality and you don't apply any taste you'll quickly end of with a unmaintainable hodgepodge of second-grade libraries if you let an LLM do its thing in a greenfield project.
Of course, but the problem is the converse: there are too many situations where a peer engineer will know what to do but the agent won't. This means that it takes more work to make a codebase understandable to an agent than it does to make it understandable to a human.
> Moving more implementation feedback from human to computer helps us improve the chance of one-shotting... Think of these as bumper rails. You can increase the likelihood of an LLM reaching the bowling pins by making it impossible to land in the gutter.
Sort of, but this is also a little like claiming that P = NP. Having an efficient way to reliably check whether a solution is correct is not at all the same as having a reliable way to find a solution; the theory of computation tells us it probably isn't. The likelihood may well be higher yet still not high enough. Even though NP problems are believed to be strictly easier than EXPTIME ones, in practice, in many situations (though not all) they are equally intractable.
In fact, we can put the claim to the test: there are languages, like ATS and Idris, that make almost any property provable and checkable. These languages let the programmer (human or machine) position the "bumper rails" so precisely as to ensure we hit the target. We can ask the agent to write the code, write the proof of correctness, and check it. We'd still need to check that the correctness property is the right one, but if the claim is correct, coding agents should be best at writing code, accompanied by correctness proofs, in ATS or Idris. Are they?
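To give a flavour of what positioning the rails that precisely looks like, here is the textbook Idris example (nothing novel, just the standard length-indexed vector): the length property is baked into the type, so the checker rejects any implementation that gets it wrong, and the open question is whether agents can do this for properties that actually matter.

```
import Data.Vect

-- The type is the specification: appending an n-element vector to an
-- m-element vector must yield an (n + m)-element vector. Any
-- implementation that changes the length fails to typecheck.
append : Vect n a -> Vect m a -> Vect (n + m) a
append []        ys = ys
append (x :: xs) ys = x :: append xs ys
```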
Obviously, mileage may vary depending on the task and the domain, but if it's true that coding models will get significantly better, then the best course of action may well be, in many cases, to just wait until they do rather than spend a lot of effort working around their current limitations, effort that will be wasted if and when capabilities improve. And that's the big question: are we in for a long haul where agent capabilities remain roughly where they are today, or not?
Burn through your token limit in agent mode just to thrash around a few more times trying to identify where the agent "misunderstood" the prompt.
The only time LLMs work as coding agents for me is with tightly scoped prompts and a small, isolated context.
Just throwing an entire codebase into an LLM in an agentic loop seems like a fool's errand.
Good read, but I wouldn't fully extend the garbage-in, garbage-out principle to LLMs. These massive LLMs are trained on internet-scale data, which includes a significant amount of garbage, and still do pretty well. Hallucinations stem more from missing or misleading context than from the noise alone. Tech-debt-heavy codebases, though unstructured, still provide information-rich context.