Posted by vinhnx 10 hours ago

How I use Claude Code: Separation of planning and execution (boristane.com)
492 points | 303 comments | page 4
nerdright 4 hours ago|
Haha, surprisingly, this is exactly how I use Claude as well. Quite fascinating that we independently discovered the same workflow.

I maintain two directories: "docs/proposals" (for the research md files) and "docs/plans" (for the planning md files). For complex research files, I typically break them down into multiple planning md files so Claude can implement one at a time.

A small difference in my workflow is that I use subagents during implementation to keep the context from filling up quickly.

brendanmc6 4 hours ago|
Same, I formalized a similar workflow for my team (oriented around feature requirement docs). I'm thinking about fully productizing it and am looking for feedback - https://acai.sh

Even if the product doesn’t resonate I think I’ve stumbled on some ideas you might find useful^

I do think spec-driven development is where this all goes. Still making up my mind though.

puchatek 3 hours ago|||
Spec-driven development looks very much like what the author describes. He may have some tweaks of his own, but they could just as well be encoded in the artifacts that something like OpenSpec produces.
clouedoc 4 hours ago|||
This is basically long-lived specs that are used as tests to check that the product still adheres to the original idea that you wanted to implement, right?

This inspired me to finally write good old playwright tests for my website :).

chickensong 2 hours ago||
I agree with most of this, though I'm not sure it's radically different. I think most people who've been using CC in earnest for a while probably have a similar workflow? Prior to Claude 4 it was pretty much mandatory to define requirements and track implementation manually to manage context. It's still good, but since the 4.5 release it feels less important. CC basically works like this by default now, so unless you value the spec docs (still a good reference for Claude, but they need to be maintained), you don't have to think too hard about it anymore.

The important thing is to have a conversation with Claude during the planning phase and don't just say "add this feature" and take what you get. Have a back and forth, ask questions about common patterns, best practices, performance implications, security requirements, project alignment, etc. This is a learning opportunity for you and Claude. When you think you're done, request a final review to analyze for gaps or areas of improvement. Claude will always find something, but starts to get into the weeds after a couple passes.

If you're greenfield and you have preferences about structure and style, you need to be explicit about that. Once the scaffolding is there, modern Claude will typically follow whatever examples it finds in the existing code base.

I'm not sure I agree with the approach of implementing it all without stopping and letting auto-compact do its thing. I still see Claude get lazy when nearing compaction, though it has gotten drastically better over the last year. Even so, I still think it's better to work in a tight loop on each stage of the implementation, preemptively compacting or restarting, for the highest quality.

Not sure the language is that important anymore either. Claude will explore the existing codebase on its own at unknown resolution, but if you say "read the file" it works pretty well these days.

My suggestions to enhance this workflow:

- If you use a numbered phase/stage/task approach with checkboxes, it's easy to stop/resume as needed and to discuss particular sections. Each phase should be working, testable software. (A small helper sketch follows this list.)

- Define a clear numbered list workflow in CLAUDE.md that loops on each task (run checks, fix issues, provide summary, etc).

- Use hooks to ensure the loop is followed.

- Update spec docs at the end of the cycle if you're keeping them. It's not uncommon for there to be some divergence during implementation and testing.
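
To make the first suggestion concrete, here's a minimal Python sketch of a resume helper; the plan path, the "## Phase N" headings, and the "- [ ]" checkbox layout are assumptions for illustration, not details from the article or the parent comment:

    # next_task.py - minimal sketch: report the next unchecked item in a plan doc.
    # Assumes hypothetical "## Phase N: ..." headings and "- [ ]" / "- [x]" checkboxes.
    import re
    import sys

    def next_unchecked(path: str) -> str | None:
        phase = None
        with open(path, encoding="utf-8") as f:
            for line in f:
                if line.startswith("## "):            # e.g. "## Phase 2: API layer"
                    phase = line[3:].strip()
                m = re.match(r"- \[ \] (.+)", line.strip())
                if m:                                  # first unchecked task wins
                    return f"{phase or 'unscoped'}: {m.group(1)}"
        return None

    if __name__ == "__main__":
        plan = sys.argv[1] if len(sys.argv) > 1 else "docs/plans/feature.md"
        print(next_unchecked(plan) or "all tasks checked off")

Pasting its output into a fresh session ("continue from Phase 2: ...") is usually enough context for Claude to pick up where the last run stopped.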

paradite 5 hours ago||
Lol, I wrote about this and have been using a plan+execute workflow for 8 months.

Sadly my post didn't get much attention at the time.

https://thegroundtruth.media/p/my-claude-code-workflow-and-p...

mukundesh 6 hours ago||
https://github.blog/ai-and-ml/generative-ai/spec-driven-deve...
gregman1 2 hours ago||
It is really fun to watch how a baby takes its first steps, and also how experienced professionals rediscover what standards have been telling us for 80+ years.
cadamsdotcom 7 hours ago||
The author is quite far on their journey but would benefit from writing simple scripts to enforce invariants in their codebase. Invariant broken? Script exits with a non-zero exit code and some output that tells the agent how to address the problem. Scripts are deterministic, run in milliseconds, and use zero tokens. Put them in husky or pre-commit, install the git hooks, and your agent won’t be able to commit without all your scripts succeeding.
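
As one illustrative example (not the author's actual scripts): a check like the sketch below, assuming a hypothetical invariant that nothing under src/legacy/ imports a new "newapi" package, exits non-zero with a message the agent can act on, and can be registered with pre-commit or husky so the commit is blocked until it passes.

    # check_no_legacy_imports.py - illustrative invariant check; the rule, paths,
    # and package name are hypothetical. Exit code 0 means the invariant holds.
    import pathlib
    import re
    import sys

    BANNED = re.compile(r"^\s*(from|import)\s+newapi\b")

    violations = [
        f"{path}:{lineno}: src/legacy must not import newapi"
        for path in pathlib.Path("src/legacy").rglob("*.py")
        for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1)
        if BANNED.match(line)
    ]

    if violations:
        print("\n".join(violations))
        print("Fix: keep legacy modules on the old interface, or update this invariant.")
        sys.exit(1)  # non-zero exit -> the pre-commit/husky hook rejects the commit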

And “Don’t change this function signature” should be enforced not by anticipating that your coding agent “might change this function signature so we better warn it not to”, but rather via an end-to-end test that fails if the function signature is changed (because the other code that needs it not to change now has an error). That takes the author out of the loop: they no longer have to watch for the change in order to issue said correction, and can instead sip coffee while the agent observes that it caused a test failure and corrects it without intervention, probably by rolling back the function-signature change and changing something else.
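
A blunter alternative, short of the full end-to-end test described above, is to pin the signature directly; this sketch assumes a hypothetical myapp.api.fetch_user whose callers depend on its parameters staying put:

    # test_public_api.py - illustrative signature pin; myapp.api.fetch_user and its
    # parameters are hypothetical stand-ins for whatever your callers depend on.
    import inspect

    from myapp.api import fetch_user

    def test_fetch_user_signature_is_stable():
        # Fails loudly if the agent changes the signature, so it has to roll the
        # change back (or deliberately update every caller) before it can proceed.
        assert str(inspect.signature(fetch_user)) == "(user_id: int, *, include_deleted: bool = False)"

The end-to-end test is still better, since it exercises the callers rather than the declaration, but a pin like this is cheap to write in the meantime.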

lastdong 3 hours ago||
Google Antigravity has this process built in. This is essentially the cycle a developer would follow: plan/analyse - document/discuss - break down tasks/implement. We’ve been using requirements and design documents as best practice since leaving our teenage bedroom lab for the professional world. I suppose this could be seen as our coding agents coming of age.
armanj 7 hours ago||
> “remove this section entirely, we don’t need caching here” — rejecting a proposed approach

I wonder why you don't remove it yourself. Aren't you already editing the plan?

pgt 3 hours ago||
My process is similar, but I recently added a new "critique the plan" feedback loop that is yielding good results. Steps:

1. Spec

2. Plan

3. Read the plan & tell it to fix its bad ideas.

4. (NB) Critique the plan (loop) & write a detailed report

5. Update the plan

6. Review and check the plan

7. Implement plan

Detailed here:

https://x.com/PetrusTheron/status/2016887552163119225

brumar 3 hours ago|
Same. In my experience, the first plan always benefits from being challenged once or twice by Claude itself.
efnx 5 hours ago|
I’ve been using Claude through opencode, and I figured this was just how it does it. I figured everyone else did it this way as well. I guess not!