Posted by vinhnx 1 day ago

How I use Claude Code: Separation of planning and execution (boristane.com)
867 points | 548 comments
RVuRnvbM2e 20 hours ago|
This is just Waterfall for LLMs. What happens when you explore the problem space and need to change up the plan?
kaydub 9 hours ago|
Do you think this is a gotcha?

You just prompt the LLM to change the plan.

w4yai 21 hours ago||
You described how AntiGravity works natively.
growt 18 hours ago||
That is just spec driven development without a spec, starting with the plan step instead.
grabshot_dev 16 hours ago||
Why don't you make Claude give feedback and iterate by itself?
zhubert 22 hours ago||
AI only improves and changes. Embrace the scientific method and make sure your “here’s how to” are based in data.
MagicMoonlight 10 hours ago||
So we’re back to waterfall huh
dworks 22 hours ago||
my rlm-workflow skill has this encoded as a repeatable workflow.

give it a try: https://skills.sh/doubleuuser/rlm-workflow/rlm-workflow

beratbozkurt0 22 hours ago||
That's great, actually. Doesn't the same logic apply to other services as well?
baalimago 14 hours ago||
Another approach is to spec functionality using comments and interfaces, then tell the LLM to first implement tests and finally make the tests pass. This way you also get regression safety and can inspect that it works as it should via the tests.
dabedee 9 hours ago|
I appreciate the author taking the time to share his workflow, even though I really dislike the way this article is written. My dislike stems from sentences like this one: "I’ve been using Claude Code as my primary development tool for approx 9 months, and the workflow I’ve settled into is radically different from what most people do with AI coding tools." There is nothing radically different in the way he's using it (quite the opposite), and there are so many people who have written about their workflows, which are almost exactly the same; here's just one example [1]. Apart from that, the obvious use of AI to write or edit the article makes it further indigestible: "That’s it. No magic prompts, no elaborate system instructions, no clever hacks. Just a disciplined pipeline that separates thinking from typing."

[1] https://github.com/snarktank/ai-dev-tasks

hibikir 8 hours ago||
There's no way I'd call what I do "radically different from what most people do" myself, under any circumstances. Yet in my last cross-team discussions at work, I realized that a whole lot of people were using AI in ways I'd consider either silly or mostly ineffective. We had a team boasting "we used Amazon Q to increase our projects' unit test coverage", and a principal engineer talking about how he uses Cursor as some form of advanced auto complete.

So when I point claude code at a ticket, hand it readOnly access to a qa environment so it can see what the database actually looks like, chat about implementation details, and then tell it to go implement the plan, running unit tests, functional tests, linters and all that, they look at me like I have three heads.

So if you ask me, explaining reasonably easy ways to get good outcomes out of Codex or Claude Code is still necessary evangelism, at least in companies that haven't invested in tools to do things like what Stripe does. There are still quite a few people out there copying and pasting from the chat window.

gwerbin 8 hours ago||
> We had a team boasting "we used Amazon Q to increase our projects' unit test coverage"

Well are the tests good or no? Did it help the work get done faster or more thoroughly than without?

> how he uses Cursor as some form of advanced auto complete

Is there something wrong with that? That's literally what an LLM is, so why not use it directly for that purpose instead of the wacky, indirect "run autocomplete on a conversation and accompanying script of actions" thing? Not everyone wants to be an agent jockey.

I don't see what's necessarily silly or ineffective about what you described. Personally, I don't find it particularly efficient to chat about and plan out a whole bunch of work with a robot for every task; often it's faster to just sketch out a design on a notepad and then go write code, maybe with advanced AI completion help to save keystrokes.

I agree that if you want the AI to do non-trivial amounts of work, you need to chat and plan out the work and establish a good context window. What I don't agree with is your implication that any other less-sophisticated use of AI is necessarily deficient.

monooso 7 hours ago|||
How is this evidence of AI use?

> That’s it. No magic prompts, no elaborate system instructions, no clever hacks. Just a disciplined pipeline that separates thinking from typing.

That is a perfectly normal sentence, indistinguishable from one I might write myself. I am not an AI.

alonsonic 7 hours ago|||
This is a big giveaway, because AI tends to overuse this same structure to "conclude".
boredtofears 7 hours ago|||
It’s not X it’s Y is one of the most obvious LLM writing patterns. Especially the heavily punctuated sentence structure.
signatoremo 7 hours ago||
> the obvious use of AI to write or edit the article makes it further indigestible: "That’s it. No magic prompts, no elaborate system instructions, no clever hacks. Just a disciplined pipeline that separates thinking from typing."

Any comment complaining about the use of AI deserves a downvote. First of all, it reads like a witch hunt: an accusation without evidence, based only on common perceptions. Secondly, whether it’s written with AI’s help or not, that particular sentence is clear, concise, and communicative. It’s much better than a lot of the human-written mumbling prevalent here on HN.

Anyone wants to guess if I’m using AI to help with this comment of mine?
