Posted by vinhnx 1 day ago

How I use Claude Code: Separation of planning and execution (boristane.com)
875 points | 551 comments | page 13
bandrami 1 day ago|
How much time are you actually saving at this point?
bodeadly 1 day ago||
Tip: LLMs are very good at following conventions (this is actually what is happening when they write code). Create a .md file with a list of entries of the following structure, where an <identifier> is a stable and concise sequence of tokens that identifies some "thing":

    # <identifier>
    <description block>
    <blank line>
    # <identifier>
    ...

Seed it with 5 entries describing abstract stuff, and the LLM will latch on and reference it. I call this a PCL (Project Concept List). I just tell it:

> consume tmp/pcl-init.md pcl.md

The pcl-init.md describes what a PCL is, and pcl.md is the actual list. I have a pcl.md file for each independent component in the code (logging, http, auth, etc.). This works very, very well. The LLM seems to "know" what you're talking about. You can ask questions and give instructions like "add a PCL entry about this", and it will ask if it should add a PCL entry about xyz. If the description blocks have a high information-to-token ratio, it will follow that convention (which is a very good convention, BTW).

However, there is a caveat. LLMs resist ambiguity about authority. So the "PCL" or whatever you want to call it, needs to be the ONE authoritative place for everything. If you have the same stuff in 3 different files, it won't work nearly as well.
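To make the convention concrete, here is a sketch of what a pcl.md for a logging component might look like, following the structure described above (the identifiers, settings, and helper names are all hypothetical, not from the original comment):

```markdown
# log-levels
Five levels: trace, debug, info, warn, error. Production default is info.
The level is set once at startup via LOG_LEVEL and never changed at runtime.

# log-structured-output
Every log line is single-line JSON with keys ts, level, msg, fields.
Human-readable formatting exists only in local dev (LOG_PRETTY=1).

# log-no-secrets
Request bodies and headers are never logged verbatim; anything
user-supplied is passed through the scrub() helper first.
```

Each entry keeps the identifier stable and the description block dense, so the model can reference entries by name ("per log-no-secrets, redact this field").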

Bonus Tip: I find long prompt input with example code fragments and thoughtful descriptions work best at getting an LLM to produce good output. But there will always be holes (resource leaks, vulnerabilities, concurrency flaws, etc). So then I update my original prompt input (keep it in a separate file PROMPT.txt as a scratch pad) to add context about those things maybe asking questions along the way to figure out how to fix the holes. Then I /rewind back to the prompt and re-enter the updated prompt. This feedback loop advances the conversation without expending tokens.

dabedee 11 hours ago||
I appreciate the author taking the time to share his workflow, even though I really dislike the way this article is written. My dislike stems from sentences like this one: "I’ve been using Claude Code as my primary development tool for approx 9 months, and the workflow I’ve settled into is radically different from what most people do with AI coding tools." There is nothing radically different in the way he's using it (quite the opposite), and there are so many people who have written about their workflows (which are almost exactly the same; here's just one example [1]). Apart from that, the obvious use of AI to write or edit the article makes it further indigestible: "That’s it. No magic prompts, no elaborate system instructions, no clever hacks. Just a disciplined pipeline that separates thinking from typing."

[1] https://github.com/snarktank/ai-dev-tasks

hibikir 10 hours ago||
There's no way I'd call what I do "radically different from what most people do" myself, under any circumstances. Yet in my last cross-team discussions at work, I realized that a whole lot of people were using AI in ways I'd consider either silly or mostly ineffective. We had a team boasting "we used Amazon Q to increase our projects' unit test coverage", and a principal engineer talking about how he uses Cursor as some form of advanced auto complete.

So when I point claude code at a ticket, hand it readOnly access to a qa environment so it can see what the database actually looks like, chat about implementation details, and then tell it to go implement the plan, running unit tests, functional tests, linters and all that, they look at me like I have three heads.

So if you ask me, explaining reasonably easy ways to get good outcomes out of Codex or Claude Code is still necessary evangelism, at least in companies that haven't spent on tools to do things like what Stripe does. There's still quite a few people out there copying and pasting from the chat window.

gwerbin 10 hours ago||
> We had a team boasting "we used Amazon Q to increase our projects' unit test coverage"

Well, are the tests good or not? Did it help the work get done faster or more thoroughly than without?

> how he uses Cursor as some form of advanced auto complete

Is there something wrong with that? That's literally what an LLM is, why not use it directly for that purpose instead of using the wacky indirect "run autocomplete on a conversation and accompanying script of actions" thing. Not everyone wants to be an agent jockey.

I don't see what's necessarily silly or ineffective about what you described. Personally I don't find it particularly efficient to chat about and plan out a whole bunch of work with a robot for every task; often it's faster to just sketch out a design on a notepad and then go write the code, maybe with advanced AI completion help to save keystrokes.

I agree that if you want the AI to do non-trivial amounts of work, you need to chat and plan out the work and establish a good context window. What I don't agree with is your implication that any other less-sophisticated use of AI is necessarily deficient.

monooso 9 hours ago|||
How is this evidence of AI use?

> That’s it. No magic prompts, no elaborate system instructions, no clever hacks. Just a disciplined pipeline that separates thinking from typing.

That is a perfectly normal sentence, indistinguishable from one I might write myself. I am not an AI.

alonsonic 9 hours ago|||
This is a big giveaway, because AI tends to overuse this same structure to "conclude".
boredtofears 9 hours ago|||
It’s not X it’s Y is one of the most obvious LLM writing patterns. Especially the heavily punctuated sentence structure.
signatoremo 8 hours ago||
> the obvious use of AI to write or edit the article makes it further indigestible: "That’s it. No magic prompts, no elaborate system instructions, no clever hacks. Just a disciplined pipeline that separates thinking from typing."

Any comment complaining about using AI deserves a downvote. First of all, it reads like a witch hunt: an accusation without evidence that's only based on some common perceptions. Secondly, whether it's written with AI's help or not, that particular sentence is clear, concise, and communicative. It's much better than a lot of the human-written mumbling prevalent here on HN.

Anyone want to guess whether I'm using AI to help with this comment of mine?

mcv 18 hours ago||
This is great. My workflow is also heading in that direction, so this is a great roadmap. I've already learned that just naively telling Claude what to do and letting it work is a recipe for disaster and wasted time.

I'm not this structured yet, but I often start with having it analyse and explain a piece of code, so I can correct it before we move on. I also often switch to an LLM that's separate from my IDE because it tends to get confused by sprawling context.

vazma 17 hours ago||
Sorry, but I don't get the hype around this post; isn't this what most people are doing? I want to see more posts on how to use Claude "smart" without feeding it the whole codebase and polluting the context window, and more best practices on cost-efficient ways to use it. This workflow is clearly burning millions of tokens per session; for me, it's a no.
pajamasam 16 hours ago||
I feel like if I have to do all this, I might as well write the code myself.
recroad 1 day ago||
Use OpenSpec and simplify everything.
yunusabd 17 hours ago||
That's exactly what Cursor's "plan" mode does? It even creates md files, which seems to be the main "thing" the author discovered. Along with some cargo cult science?

How is this noteworthy other than to spark a discussion on hn? I mean I get it, but a little more substance would be nice.

kissgyorgy 13 hours ago||
There is not a lot of explanation of WHY this is better than doing the opposite (start coding and see how it goes), or of how this would apply to Codex models.

I do exactly the same; I even developed my own workflows with Pi agent, which works really well. Here are the reasons:

- Claude needs a lot more steering than other models; it's too eager to do stuff, and it does stupid things and writes terrible code without feedback.

- Claude is very good at following the plan; you can even use a much cheaper model if you have a good plan. For example, I list every single file that needs edits, with a short explanation.

- At the end of the plan, I have a clear picture in my head of exactly what the feature will look like, and I can be pretty sure the end result will be good enough (given that the model is good at following the plan).

A lot of things don't need planning at all. Simple fixes, refactoring, simple scripts, packaging, etc. Just keep it simple.
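As a sketch of the "list every file that needs edits" approach described above, a plan file might look like this (the feature, file names, and settings are hypothetical, just to illustrate the shape):

```markdown
## Plan: add request rate limiting

Files to edit:

- src/http/middleware.py: add a RateLimiter middleware, wired in before auth
- src/http/config.py: new settings RATE_LIMIT_RPS and RATE_LIMIT_BURST
- src/http/errors.py: add a 429 response that sets the Retry-After header
- tests/test_rate_limit.py: new file; cover happy path, burst, and the header
```

With a per-file breakdown like this, even a cheaper model has little room to wander, since each edit site and its intent are spelled out up front.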

cawksuwcka 20 hours ago|
falling asleep here. when will the babysitting end