
Posted by vinhnx 1 day ago

How I use Claude Code: Separation of planning and execution (boristane.com)
882 points | 551 comments | page 14
kissgyorgy 15 hours ago|
There is not a lot of explanation of WHY this is better than doing the opposite (start coding and see how it goes), or of how this would apply to Codex models.

I do exactly the same; I even developed my own workflows with Pi agent, which work really well. Here are the reasons:

- Claude needs a lot more steering than other models: it's too eager to do stuff, does stupid things, and writes terrible code without feedback.

- Claude is very good at following the plan; you can even use a much cheaper model if you have a good plan. For example, I list every single file that needs edits, with a short explanation.

- At the end of the planning, I have a clear picture in my head of exactly how the feature will look, and I can be pretty sure the end result will be good enough (given that the model is good at following the plan).

A lot of things don't need planning at all. Simple fixes, refactoring, simple scripts, packaging, etc. Just keep it simple.
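A file-by-file plan of the kind described above might look like this (project layout and file names are purely hypothetical):

```
## Plan: add request rate limiting

- src/middleware/rate_limit.py   new module, token-bucket limiter keyed by client IP
- src/app.py                     register the middleware after auth
- tests/test_rate_limit.py       new tests: under-limit passes, over-limit returns 429
- docs/api.md                    document the 429 response
```

With this level of detail, even a cheaper model mostly just has to execute, not decide.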

oulipo2 17 hours ago||
Has Claude Code become slow, laggy, imprecise, giving wrong answers for other people here?
drcongo 17 hours ago||
This is exactly how I use it.
submeta 17 hours ago||
What works extremely well for me is this: let Claude Code create the plan, then turn the plan over to Codex for review, and give the response back to Claude Code. Codex is exceptionally good at doing high-level reviews and keeping an eye on the details. It will find very subtle errors and omissions. And CC is very good at quickly converting the plan into code.

This back and forth between the two agents, with me steering the conversation, elevates Claude Code to the next level.

lxe 22 hours ago||
Honestly, I found that the best way to use these CLIs is exactly how the CLI creators have intended.
chaboud 22 hours ago||
The author seems to think they've hit upon something revolutionary...

They've actually hit upon something that several of us have evolved to naturally.

LLM's are like unreliable interns with boundless energy. They make silly mistakes, wander into annoying structural traps, and have to be unwound if left to their own devices. It's like the genie that almost pathologically misinterprets your wishes.

So, how do you solve that? Exactly how an experienced lead or software manager does: you have systems write it down before executing, explain things back to you, and ground all of their thinking in the code and documentation, avoiding making assumptions about code after superficial review.

When it was early ChatGPT, this meant function-level thinking and clearly described jobs. When it was Cline, it meant Cline rules files that forced writing architecture.md files and vibe-code.log histories, demanding grounding in research and code reading.

Maybe nine months ago, another engineer said two things to me, less than a day apart:

- "I don't understand why your clinerules file is so large. You have the LLM jumping through so many hoops and doing so much extra work. It's crazy."

- The next morning: "It's basically like a lottery. I can't get the LLM to generate what I want reliably. I just have to settle for whatever it comes up with and then try again."

These systems have to deal with minimal context, ambiguous guidance, and extreme isolation. Operate with a little empathy for the energetic interns, and they'll uncork levels of output worth fighting for. We're Software Managers now. For some of us, that's working out great.

vishnugupta 21 hours ago||
Revolutionary or not it was very nice of the author to make time and effort to share their workflow.

For those starting out using Claude Code it gives a structured way to get things done bypassing the time/energy needed to “hit upon something that several of us have evolved to naturally”.

chaboud 19 hours ago|||
It's this line that I'm bristling at: "...the workflow I’ve settled into is radically different from what most people do with AI coding tools..."

Anyone who spends some time with these tools (and doesn't black out from smashing their head against their desk) is going to find substantial benefit in planning with clarity.

It was #6 in Boris's run-down: https://news.ycombinator.com/item?id=46470017

So, yes, I'm glad that people write things out and share. But I'd prefer that they not lead with "hey folks, I have news: we should *slice* our bread!"

copirate 17 hours ago|||
But the author's workflow is actually very different from Boris'.

#6 is about using plan mode whereas the author says "The built-in plan mode sucks".

The author's post is much more than just "planning with clarity".

mnicky 13 hours ago|||
For some time now, Claude Code's plan mode has also written the plan to a file that you can edit. It's located in ~/.claude/plans/ for me. Actually, there's a whole history of plans there.

I sometimes reference some of them to build context, e.g. after a few unsuccessful tries to implement something, so that Claude doesn't try the same thing again.

amelius 13 hours ago||||
The author __is__ Boris ...
copirate 12 hours ago||
They are different Borises. I was using the names already used in this thread.
locknitpicker 13 hours ago|||
> The author's post is much more than just "planning with clarity".

Not much more, though.

It introduces "research", which has been a central topic of LLMs since they first arrived. I mean, LLMs coined the term "hallucination" and turned grounding into a key concept.

In the past, building up context was thought to be the right way to approach LLM-assisted coding, but that concept is dead and proven to be a mistake: like discussing the best way to force a round peg through a square hole, piling up expensive prompts to try to bridge the gap. Nowadays it's widely understood that it's far more effective and way cheaper to just refactor and rearchitect apps so that their structure is unsurprising and grounding issues are no longer a problem.

And planning mode: each and every LLM-assisted coding tool has built its support for planning as the central flow, one that explicitly features iterations and manual updates of the planning step. What's novel about the blog post?

copirate 12 hours ago||
A detailed workflow that's quite different from the other posts I've seen.
locknitpicker 10 hours ago||
> A detailed workflow that's quite different from the other posts I've seen.

Seriously? Provide context with a prompt file, prepare a plan in plan mode, and then execute the plan? You get more detailed descriptions of this if you read the introductory how-to guides of tools such as Copilot.

copirate 8 hours ago||
Making the model write a research file, then the plan, then iterating on it by editing the plan file, then adding the todo list, then doing the implementation, and doing all that in a single conversation (instead of clearing contexts).

There's nothing revolutionary, but yes, it's a workflow that's quite different from other posts I've seen, and especially from Boris' thread that was mentioned which is more like a collection of tips.

Forgeties79 15 hours ago||||
I would say he’s saying “hey folks, I have news. We should slice our bread with a knife rather than the spoon that came with the bread.”
locknitpicker 13 hours ago|||
> Anyone who spends some time with these tools (and doesn't black out from smashing their head against their desk) is going to find substantial benefit in planning with clarity.

That's obvious by now, and the reason why all mainstream code assistants now offer planning mode as a central feature of their products.

It was baffling to read the blogger making claims about what "most people" do when anyone using code assistants already does it. I mean, the so-called frontier models are very expensive and time-consuming to run. There's a very natural pressure to make each run count. Why on earth would anyone presume people don't put some thought into those runs?

fintechie 15 hours ago||||
These kinds of flows have been documented in the wild for some time now. They started to pop up in the Cursor forums 2+ years ago, e.g.: https://github.com/johnpeterman72/CursorRIPER

Personally I have been using a similar flow for almost 3 years now, tailored for my needs. Everybody who uses AI for coding eventually gravitates towards a similar pattern because it works quite well (for all IDEs, CLIs, TUIs)

ffsm8 20 hours ago||||
It's AI written though; the tells are in pretty much every paragraph.
ratsimihah 20 hours ago|||
I don’t think it’s that big a red flag anymore. Most people use ai to rewrite or clean up content, so I’d think we should actually evaluate content for what it is rather than stop at “nah it’s ai written.”
Thanemate 19 hours ago|||
>Most people use ai to rewrite or clean up content

I think your sentence should have been "people who use ai do so to mostly rewrite or clean up content", but even then I'd question the statistical truth behind that claim.

Personally, seeing something written by AI means that the person who wrote it did so just for looks and not for substance. Claiming to be a great author requires both penmanship and communication skills, and delegating one or either of them to a large language model inherently makes you less than that.

However, when the point is just the contents of the paragraph(s) and nothing more, then I don't care who or what wrote it. An example is the results of research, because I certainly won't care about the prose or the effort put into writing the thesis so much as the results (is this about curing cancer now and forever? If yes, no one cares whether it's written with AI).

With that being said, there's still the problem that I never get anywhere close to understanding the author behind the thoughts and opinions. I believe the way someone writes hints at the way they think and act. In that sense, using LLMs to rewrite something to make it sound more professional than how you would actually talk in appropriate contexts makes it hard for me to judge someone's character, professionalism, and mannerisms. It almost feels like they're trying to mask part of themselves. Perhaps they lack confidence in their ability to sound professional and convincing?

esafak 13 hours ago||
People like to hide behind AI so they can claim credit for its ideas. It's the same thing in job interviews.
pmg101 20 hours ago||||
I don't judge content for being AI written, I judge it for the content itself (just like with code).

However I do find the standard out-of-the-box style very grating. Call it faux-chummy linkedin corporate workslop style.

Why don't people give the llm a steer on style? Either based on your personal style or at least on a writer whose style you admire. That should be easier.

xoac 19 hours ago|||
Because they think this is good writing. You can’t correct what you don’t have taste for. Most software engineers think that reading books means reading NYT non-fiction bestsellers.
ben_w 16 hours ago||
While I agree with:

> Because they think this is good writing. You can’t correct what you don’t have taste for.

I have to disagree about:

> Most software engineers think that reading books means reading NYT non-fiction bestsellers.

There's a lot of scifi and fantasy in nerd circles, too. Douglas Adams, Terry Pratchett, Vernor Vinge, Charlie Stross, Iain M Banks, Arthur C Clarke, and so on.

But simply enjoying good writing is not enough to fully get what makes writing good. Even writing is not itself enough to get such a taste: thinking of Arthur C Clarke, I've just finished 3001, and at the end Clarke gives thanks to his editors, noting his own experience as an editor meant he held a higher regard for editors than many writers seemed to. Stross has, likewise, blogged about how writing a manuscript is only the first half of writing a book, because then you need to edit the thing.

derwiki 13 hours ago|||
My flow is to craft the content of the article in LLM speak, and then add to context a few of my human-written blog posts, and ask it to match my writing style. Made it to #1 on HN without a single callout for “LLM speak”!
ben_w 16 hours ago||||
> I don’t think it’s that big a red flag anymore. Most people use ai to rewrite or clean up content, so I’d think we should actually evaluate content for what it is rather than stop at “nah it’s ai written.”

Unfortunately, there's a lot of people trying to content-farm with LLMs; this means that whatever style they default to, is automatically suspect of being a slice of "dead internet" rather than some new human discovery.

I won't rule out the possibility that even LLMs, let alone other AI, can help with new discoveries, but they are definitely better at writing persuasively than they are at being inventive, which means I am forced to use "looks like LLM" as proxy for both "content farm" and "propaganda which may work on me", even though some percentage of this output won't even be LLM and some percentage of what is may even be both useful and novel.

stuaxo 17 hours ago||||
Even though I use LLMs for code, I just can't read LLM written text, I kind of hate the style, it reminds me too much of LinkedIn.
dawnerd 14 hours ago||||
Very high chance someone that’s using Claude to write code is also using Claude to write a post from some notes. That goes beyond rewriting and cleaning up.
chaboud 10 hours ago||
I use Claude Code quite a bit (one of my former interns noted that I crossed 1.8 Million lines of code submitted last year, which is... um... concerning), but I still steadfastly refuse to use AI to generate written content. There are multiple purposes for writing documents, but the most critical is the forming of coherent, comprehensible thinking. The act of putting it on paper is what crystallizes the thinking.

However, I use Claude for a few things:

1. Research buddy, having conversations about technical approaches, surveying the research landscape.

2. Document clarity and consistency evaluator. I don't take edits, but I do take notes.

3. Spelling/grammar checker. It's better at this than regular spellcheck, due to its handling of words introduced in a document (e.g., proper names) and its understanding of various writing styles (e.g., comma inside or outside of quotes, one space or two after a period?)

Every time I get into a one hour meeting to see a messy, unclear, almost certainly heavily AI generated document being presented to 12 people, I spend at least thirty seconds reminding the team that 2-3 hours saved using AI to write has cost 11+ person-hours of time having others read and discuss unclear thoughts.

I will note that some folks actually put in the time to guide AI sufficiently to write meaningfully instructive documents. The part that people miss is that the clarity of thinking, not the word count, is what is required.

shevy-java 20 hours ago||||
Well, real humans may read it though. Personally I much prefer real humans write real articles than all this AI generated spam-slop. On youtube this is especially annoying - they mix in real videos with fake ones. I see this when I watch animal videos - some animal behaviour is taken from older videos, then AI fake is added. My own policy is that I do not watch anything ever again from people who lie to the audience that way so I had to begin to censor away such lying channels. I'd apply the same rationale to blog authors (but I am not 100% certain it is actually AI generated; I just mention this as a safety guard).
theshrike79 15 hours ago||||
ai;dr

If your "content" smells like AI, I'm going to use _my_ AI to condense the content for me. I'm not wasting my time on overly verbose AI "cleaned" content.

Write like a human, have a blog with an RSS feed and I'll most likely subscribe to it.

ffsm8 20 hours ago||||
> I don’t think it’s that big a red flag anymore.

It is to me, because it indicates the author didn't care about the topic. The only thing they cared about is to write an "insightful" article about using llms. Hence this whole thing is basically linked-in resume improvement slop.

Not worth interacting with, imo

Also, it's not insightful whatsoever. It's basically a retelling of other articles around the time Claude code was released to the public (March-August 2025)

pi-rat 19 hours ago||||
The main issue with evaluating content for what it is is how extremely asymmetric that process has become.

Slop looks reasonable on the surface, and requires orders of magnitude more effort to evaluate than to produce. It’s produced once, but the process has to be repeated for every single reader.

Disregarding content that smells like AI becomes an extremely tempting early filtering mechanism to separate signal from noise - the reader’s time is valuable.

exe34 17 hours ago||||
If you want to write something with AI, send me your prompt. I'd rather read what you intend for it to produce rather than what it produces. If I start to believe you regularly send me AI written text, I will stop reading it. Even at work. You'll have to call me to explain what you intended to write.
DonHopkins 17 hours ago||
And if my prompt is a 10 page wall of text that I would otherwise take the time to have the AI organize, deduplicate, summarize, and sharpen with an index, executive summary, descriptive headers, and logical sections, are you going to actually read all of that, or just whine "TL;DR"?

It's much more efficient and intentional for the writer to put the time into doing the condensing and organizing once, and review and proofread it to make sure it's what they mean, than to just lazily spam every human they want to read it with the raw prompt, so every recipient has to pay for their own AI to perform that task like a slot machine, producing random results not reviewed and approved by the author as their intended message.

Is that really how you want Hacker News discussions and your work email to be, walls of unorganized unfiltered text prompts nobody including yourself wants to take the time to read? Then step aside, hold my beer!

Or do you prefer I should call you on the phone and ramble on for hours in an unedited meandering stream of thought about what I intended to write?

fasbiner 16 hours ago||
Yeah but it's not. This a complete contrivance and you're just making shit up. The prompt is much shorter than the output and you are concealing that fact. Why?

Github repo or it didn't happen. Let's go.

DonHopkins 16 hours ago||
[flagged]
layer8 15 hours ago|||
It’s certainly more interesting than whatever the AI would turn it into.
exe34 13 hours ago|||
tl;dr
elaus 20 hours ago|||
I think as humans it's very hard to abstract content from its form. So when the form is always the same boring, generic AI slop, it's really not helping the content.
rmnclmnt 20 hours ago||
And maybe writing an article or keynote slides is one of the few places where we can still exercise some human creativity, especially when the core skill (programming) is almost completely in the hands of LLMs already.
foldingmoney 18 hours ago||||
>the tells are in pretty much every paragraph.

It's not just misleading — it's lazy. And honestly? That doesn't vibe with me.

[/s obviously]

handfuloflight 19 hours ago||||
So is GP.

This is clearly a standard AI exposition:

LLM's are like unreliable interns with boundless energy. They make silly mistakes, wander into annoying structural traps, and have to be unwound if left to their own devices. It's like the genie that almost pathologically misinterprets your wishes.

DonHopkins 17 hours ago|||
Then ask your own AI to rewrite it so it doesn't trigger you into posting uninteresting, thought-stopping comments proclaiming why you didn't read the article, which don't contribute to the discussion.
petesergeant 19 hours ago|||
Here's mine! https://github.com/pjlsergeant/moarcode
bambax 19 hours ago|||
Agreed. The process described is much more elaborate than what I do, but quite similar. I start by discussing in great detail what I want to do, sometimes asking the same question to different LLMs. Then a todo list, then manual review of the code, esp. each function signature, checking whether the instructions have been followed and whether there are obvious refactoring opportunities (there almost always are).

The LLM does most of the coding, yet I wouldn't call it "vibe coding" at all.

"Tele coding" would be more appropriate.

mlaretallack 15 hours ago||
I use AWS Kiro, and its spec-driven development is exactly this. I find it really works well, as it makes me slow down and think about what I want it to do.

Requirements, design, task list, coding.

marc_g 21 hours ago|||
I’ve also found that a bigger focus on expanding my agents.md as the project rolls on has led to fewer headaches overall and more consistency (unsurprisingly). It’s the same as asking juniors to reflect on the work they’ve completed and to document important things that can help them in the future. Software Manager is a good way to put this.
zozbot234 20 hours ago||
AGENTS.md should mostly point to real documentation and design files that humans will also read and keep up to date. It's rare that something about a project is only of interest to AI agents.
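In that spirit, an AGENTS.md can be little more than pointers to the docs humans already maintain (all paths here are hypothetical):

```
# AGENTS.md

Start with the existing docs:
- docs/architecture.md   (module boundaries, data flow)
- docs/conventions.md    (naming, error handling, test layout)
- CONTRIBUTING.md        (how to run tests and linters)

Agent-specific notes (keep short):
- Run the test suite before proposing a diff.
```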
CodeBit26 21 hours ago|||
I really like your analogy of LLMs as 'unreliable interns'. The shift from being a 'coder' to a 'software manager' who enforces documentation and grounding is the only way to scale these tools. Without an architecture.md or similar grounding, the context drift eventually makes the AI-generated code a liability rather than an asset. It's about moving the complexity from the syntax to the specification.
bonoboTP 17 hours ago|||
It feels like retracing the history of software project management. The post is quite waterfall-like. Writing a lot of docs and specs upfront then implementing. Another approach is to just YOLO (on a new branch) make it write up the lessons afterwards, then start a new more informed try and throw away the first. Or any other combo.

For me what works well is to ask it to write some code upfront to verify its assumptions against actual reality, not just telling it to review the sources "in detail". It gains much more from real output from the code, and it clears up wrong assumptions. Do some smaller jobs, write up md files, then plan the big thing, then execute.
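An "assumption probe" in that spirit can be just a few lines the agent runs so the plan is grounded in real output rather than guesses. A generic sketch, not the commenter's exact workflow:

```python
# Probe an assumption before planning around it: does json.dumps sort
# keys by default, or only when asked? Run it and read the real output
# instead of letting the model guess.
import json

default_out = json.dumps({"b": 2, "a": 1})
sorted_out = json.dumps({"b": 2, "a": 1}, sort_keys=True)

print(default_out)  # → {"b": 2, "a": 1}  (insertion order kept)
print(sorted_out)   # → {"a": 1, "b": 2}
```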

jerryharri 16 hours ago||||
'The post is quite waterfall-like. Writing a lot of docs and specs upfront then implementing' - It's only waterfall if the specs cover the entire system or app. If it's broken up into sub-systems or vertical slices, then it's much more Agile or Lean.
0x696C6961 17 hours ago||||
This is exactly what I do. I assume most people avoid this approach due to cost.
le-mark 13 hours ago||
Please explain what do you mean by “cost”?
nurettin 17 hours ago|||
It makes an endless stream of assumptions. Some of them brilliant and even instructive to a degree, but most of them are unfounded and inappropriate in my experience.
xnx 9 hours ago|||
> LLM's are like unreliable interns with boundless energy.

This was a popular analogy years ago, but is out of date in 2026.

Specs and a plan are still a good basis; they are of equal or greater importance than the ephemeral code implementation.

jeffreygoesto 21 hours ago|||
Oh no, maybe the V-Model was right all along? And right-sizing increments, with control stops after them. No wonder these matrix multiplications start to behave like humans; that is what we wanted them to do.
baxtr 20 hours ago||
So basically you’re saying LLMs are helping us be better humans?
shevy-java 20 hours ago||
Better humans? How and where?
kaycey2022 15 hours ago|||
I've been doing the exact same thing for 2 months now. I wish I had gotten off my ass and written a blog post about it. I can't blame the author for gathering all the well-deserved clout they are getting for it now.
LeafItAlone 15 hours ago|||
Don’t worry. This advice has been going around for much more than 2 months, including links posted here as well as official advice from the major companies (OpenAI and Anthropic) themselves. The tools literally have had plan mode as a first class feature.

So you probably wouldn’t have any clout anyways, like all of the other blog posts.

noisy_boy 14 hours ago|||
I went through the blog. I started using Claude Code about 2 weeks ago and my approach is practically the same. It just felt logical. I think there are a bunch of us who have landed on this approach and most are just quietly seeing the benefits.
qudat 14 hours ago|||
> LLM's are like unreliable interns with boundless energy

This isn’t directed specifically at you but the general community of SWEs: we need to stop anthropomorphizing a tool. Code agents are not human capable and scaling pattern matching will never hit that goal. That’s all hype and this is coming from someone who runs the range of daily CC usage. I’m using CC to its fullest capability while also being a good shepherd for my prod codebases.

Pretending code agents are human capable is fueling this kool-aid-drinking hype craze.

MrDarcy 13 hours ago||
It’s pretty clear they effectively take on the roles of various software related personas. Designer, coder, architect, auditor, etc…

Pretending otherwise is counter-productive. This ship has already sailed, it is fairly clear the best way to make use of them is to pass input messages to them as if they are an agent of a person in the role.

locknitpicker 13 hours ago|||
> The author seems to think they've hit upon something revolutionary...

> They've actually hit upon something that several of us have evolved to naturally.

I agree, it looks like the author is talking about spec-driven development with extra time-consuming steps.

Copilot's plan mode also supports iterations out of the box, and drafts a plan that you execute only after manually reviewing and editing it. I don't know what the blogger was proposing that ventured outside of plan mode's happy path.

user3939382 16 hours ago|||
If you have a big rules file you’re headed in the right direction, but still not there. Just as with humans, the key is that your architecture should make it very difficult to break the rules by accident and still be able to compile/run with a correct exit status.

My architecture is so beautifully strong that even LLMs and human juniors can’t box their way out of it.
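One way to sketch that kind of guardrail: a gate script that fails loudly (non-zero exit) when a layering rule is broken, so neither an LLM nor a junior can merge a violation unnoticed. The layers and the rule here are invented purely for illustration:

```python
# Hypothetical rule: code under ui/ must never import the db layer
# directly. A gate like this, wired into CI or pre-commit, makes the
# rule hard to break by accident.
import re
import tempfile
from pathlib import Path

FORBIDDEN = re.compile(r"^\s*(from|import)\s+db\b", re.MULTILINE)

def violations(root: Path) -> list[Path]:
    """Return ui/ files that import the db layer directly."""
    return [p for p in (root / "ui").rglob("*.py")
            if FORBIDDEN.search(p.read_text())]

# Demo against a throwaway tree standing in for a real repo.
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "ui").mkdir()
    (root / "ui" / "ok.py").write_text("import json\n")
    (root / "ui" / "bad.py").write_text("from db import models\n")
    bad = violations(root)
    for p in bad:
        print(f"layering violation: {p.name} imports db directly")
    # In CI/pre-commit: sys.exit(1 if bad else 0)
```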

BoredPositron 20 hours ago|||
It's alchemy all over again.
shevy-java 20 hours ago||
Alchemy involved a lot of do-it-yourself though. With AI it is like someone else does all the work (well, almost all the work).
BoredPositron 20 hours ago||
It was mainly a jab at the protoscientific nature of it.
vntok 18 hours ago||
Reproducing experimental results across models and vendors is trivial and cheap nowadays.
BoredPositron 18 hours ago||
Not if Anthropic goes further in obfuscating the output of Claude Code.
vntok 17 hours ago||
Why would you test implementation details? Test what's delivered, not how it's delivered. The thinking portion, synthesized or not, is merely implementation.

The resulting artefact, that's what is worth testing.

hghbbjh 15 hours ago||
> Why would you test implementation details

Because this has never been sufficient: from various hard-to-test cases to things like readability and long-term maintenance. Reading and understanding the code is more efficient, and necessary for any code worth keeping around.

kobe_bryant 14 hours ago|||
if only there was another simpler way to use your knowledge to write code...
blackarrow36 21 hours ago|||
[flagged]
fy20 19 hours ago||
It's nice to have it written down in a concise form. I shared it with my team as some engineers have been struggling with AI, and I think this (just trying to one-shot without planning) could be why.
YetAnotherNick 21 hours ago||
I don't know. I tried various methods, and this one kind of doesn't work quite a lot of the time. The problem is that the plan naturally always skips some important details, or assumes some library function, but is then taken as instruction in the next section. And Claude can't handle ambiguity if the instruction is very detailed (e.g. if the plan asks to use a certain library, even if it is a bad fit, Claude won't know that decision is flexible). If the instruction is less detailed, I saw that Claude is willing to try multiple things and, if it keeps failing, doesn't fear reverting almost everything.

In my experience, the best scenario is that instruction and plan should be human written, and be detailed.

tayo42 22 hours ago||
We're just slowly reinventing agile for telling Ai agents what to do lol

Just skip to the Ai stand-ups

politician 1 day ago|
Wow, I never bother using phrases like “deeply study this codebase deeply.” I consistently get pretty fantastic results.