Posted by davidbarker 8 hours ago

Orchestrate teams of Claude Code sessions (code.claude.com)
308 points | 170 comments
bluerooibos 3 hours ago|
This is great and all, but who can actually afford to let these agents run on tasks all day long? Is anyone here actually using this, or are these rollouts aimed at large companies?

I'm burning through so many tokens on Cursor that I've had to upgrade to Ultra recently - and I'm convinced they're tweaking the burn rate behind the scenes; the usage allowance doesn't seem proportional.

Thank god the open source/local LLM world isn't far behind.

logicx24 3 hours ago||
I can't even get through my Claude Max quota, and that's only $200/mo. And I code every day and use it for various other pretty intensive tasks.
dangus 3 hours ago||
> only $200/mo

$200 a month is a used car payment.

I guarantee you that price will double by 2027. Then it’ll be a new car payment!

I’m really not saying this to be snarky, I’m saying this to point out that we’re really already in the enshittification phase before the rapid growth phase has even ended. You’re paying $200 and acting like that’s a cheap SaaS product for an individual.

I pay less for Autocad products!

This whole product release is about maximizing your bill, not maximizing your productivity.

I don’t need agents to talk to each other. I need one agent to do the job right.

__turbobrew__ 1 hour ago|||
$200/month is peanuts when you are a business paying your employees $200k/year. I think LLMs make me at least 10% more effective and therefore the cost to my employer is very worth it. Lots of trades have much more expensive tools (including cars).
dangus 44 minutes ago||
> I think LLMs make me at least 10% more effective

I know this was last year but...

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

yomismoaqui 2 hours ago||||
As a company owner, the math is simple:

If I pay $3k/month to a developer and a $200/month tool makes them 10% more productive, I will pay it without thinking. (10% of $3k is $300/month of extra output, more than the tool costs.)

kesslern 2 hours ago||||
Not saying $200/mo isn't a lot, but I think you're underestimating used car payments these days. The average US used car payment is above $500 now.
nlh 2 hours ago||||
I pay $200/month, don't come near the limits (yet), and if they raised the price to $1000/month for the exact same product I'd gladly pay it this afternoon. (Don't quote me on this, Anthropic!)

If you’re not able to get US$thousands out of these models right now either your expectations are too high or your usage is too low, but as a small business owner and part/most-time SWE, the pricing is a rounding error on value delivered.

rune-dev 2 hours ago|||
As a business expense to make profit, I can understand being ok with this price point.

But as an individual with no profit motive, no way.

I use these products at work, but not as much personally because of the bill. And even if I decided I wanted to pursue a for-profit side project, I'd have to validate its viability before even considering a $200 monthly subscription.

Wowfunhappy 35 minutes ago|||
I'm paying $100 per month even though I don't write code professionally. It is purely personal use. I've used the subscription to have Claude create a bunch of custom apps that I use in my daily life.

This did require some amount of effort on my part, to test and iterate and so on, but much less than if I needed to write all the code myself. And, because these programs are for personal use, I don't need to review all the code, I don't have security concerns and so on.

$100 every month for a service that writes me custom applications... I don't know, maybe I'm being stupid with my money, but at the moment it feels well worth the price.

yomismoaqui 2 hours ago||||
You can do it for $40/month. What I'm doing:

- $20 for Claude Pro (Claude Code)
- $20 for ChatGPT Plus (Codex)
- Amp Free Plan (with ads; you get about $10 of daily value)

So you get to use 3 of the top coding agents for $40/month.

__turbobrew__ 57 minutes ago|||
Some tools are not meant for individuals. That $100k software-defined radio isn't meant for you either.
geraneum 2 hours ago||||
We’re gonna see an economic boom any minute.
imiric 2 hours ago||||
I'm curious: what concrete value have you extracted using these tools that is worth US$thousands?
dangus 52 minutes ago|||
"Rounding error" lol, you can hire an actual full time human in India for $1000/month.
nmfisher 5 minutes ago|||
Will they be better than Opus though?
bdangubic 46 minutes ago|||
Wouldn’t hire one for $15/month…

With US salaries for SWEs, $1000/month is not a rounding error for everyone, but it definitely is for some. Say you make $100/hr and CC saves you 30 hrs/month: that's $3,000 of time saved for $1,000 - not a rounding error, but a no-brainer. If you make $200+/hr, it starts to become a rounding error. I have multiple Max accounts at my disposal, and at this point I would for sure pay $1000/month for the Max plan. It comes down to simple math.

Wowfunhappy 43 minutes ago||||
> I’m saying this to point out that we’re really already in the enshittification phase before the rapid growth phase has even ended. You’re paying $200 and acting like that’s a cheap SaaS product for an individual.

Traditional SaaS products don't write code for me. They also cost much less to run.

I'm having a lot of trouble seeing this as enshittification. I'm not saying it won't happen some day, but I don't think we're there. $200 per month is a lot, but it depends on what you're getting. In this case, I'm getting a service that writes code for me on demand.

meowface 2 hours ago||||
I could write an essay about how almost everything you wrote either is extremely incorrect or is extremely likely to be incorrect. I am too lazy to, though, so I will just have to wait for another commenter to do the equivalent.
dangus 46 minutes ago||
Why not make your AI tool do it for you?
bryanlarsen 1 hour ago||||
That's one of 3 possible futures.

1. 1-3 LLM vendors are substantially higher quality than the other vendors, and none of those are open source. This is an oligopoly, and the scenario you described will play out.

2. >3 LLM vendors are all high quality and suitable for the tasks. At least one of these is open source. This is the "commodity" scenario, and we'll end up paying roughly the cost of inference. This still might be hundreds per month, though.

3. Somewhere in between. We've got >3 vendors, but 1-3 of them are somewhat better than the others, so the leaders can charge more. But not as much more than they could in scenario #1.

buzzerbetrayed 2 hours ago|||
If you can’t get $200 of value out of Claude Code Max, then you need to really step up your game. That’s user error.
emp17344 3 hours ago|||
Especially for what’s basically an experiment. Gas Town didn’t really work, so there’s no guarantee this will even produce anything of value.
rahimnathwani 3 hours ago|||
Many many companies can afford to hire a junior engineer for $150k/year (plus employer payroll taxes, employee benefits etc.).

Are you spending more than $150k per year on AI?

(Also, you're talking about the cost of your Cursor subscription, when the article is about Claude Code. Maybe try Claude Max instead?)

freeone3000 3 hours ago||
If it could do anything that a junior dev could, that’d be a valid point of comparison. But it's consistently much slower and falls short every time I’ve tried.
rahimnathwani 3 hours ago|||

> But it's consistently much slower and falls short every time I've tried.

If it falls short every time you've tried, it's likely that one or more of these is true:

A. You're working on some really deep thing that only world-class experts can do, like optimizing graphics engines for AAA games.

B. You're using a language that isn't in the top ~10 most popular in AI models' training sets.

C. You have an opportunity to improve your ability to use the tools effectively.

How many hours have you spent using Claude Code?

astrange 44 minutes ago|||
> A. You're working on some really deep thing that only world-class experts can do, like optimizing graphics engines for AAA games.

This is a relatively common skill. One thing I always notice about the video game industry is it's much more globally distributed than the rest of the software industry.

Being bad at writing software is Japan's whole thing, but they still make optimized video games.

freeone3000 1 hour ago||||
It’s a simple compiler optimization over Bayesian statistics. It’s masters-level stuff at best, given that I’m on it instead of some expert. The codebase is mixed Python and Rust, neither of which is uncommon.

The issues I ran into are primarily “tail-chasing” ones - it gets into some attractor that doesn’t suit the test case and fails to find its way out. I re-benchmark every few months, but so far none of the frontier models have been able to make changes that have solved the issue without bloating the codebase and failing the perf tests.

It’s fine for some boilerplate dedup or spinning up some web api or whatever, but it’s still not suitable for serious work.

rahimnathwani 1 hour ago||
Would you expect a junior engineer to perform better than this?
bryanlarsen 1 hour ago||||
> like optimizing graphics engines for AAA games.

Claude would be worse than an expert at this, but it is a benchmarkable task. Claude can run experiments a lot quicker than a human can. The hard part would be ensuring the results aren't just gaming your benchmark.

imiric 2 hours ago|||
So the possibility that the performance of these tools still isn't at the level some people need isn't an option?

It's insulting that criticism is often met with superficial excuses and insinuation that the user lacks the required skills.

rahimnathwani 2 hours ago||
That possibility is covered by A and B.

GP said 'falls short every time I’ve tried'. Note the word 'every'.

andkenneth 2 hours ago||||
Companies are not comparing it straight to juniors. They're more making a comparison between a senior with the assistance of one or more juniors vs. a senior with the assistance of AI agents.

I feel like the comparison to just a junior developer is also becoming fairly outdated. Yes, it is worse in some ways, but also VASTLY superior in others.

taurath 2 hours ago||
It’s funny that so many companies are making people RTO and spending all this money on offices to get “hallway” moments of innovation, while emptying those offices of the people most likely to have a new perspective.
buzzerbetrayed 2 hours ago|||
I am way more productive with $200/month of AI than I would be with $5,000/month of junior developer. And it isn’t close.
reactordev 2 hours ago|||
You know those VC funded startups with just two founders… them.
jwpapi 3 hours ago||
I mean, what you get for Claude Code Max is insane: it's 30x on the token price. If you don’t spend it all, it’s your own fault. That must be below electricity cost.
mcintyre1994 5 hours ago||
I’ve been mostly holding off on learning any of the tools that do this because it seemed so obvious that it’ll be built natively. Will definitely give this a go at some point!
pronik 6 hours ago||
To the folks comparing this to Gas Town: keep in mind that Steve Yegge explicitly pitched agent orchestrators to, among others, Anthropic months ago:

> I went to senior folks at companies like Temporal and Anthropic, telling them they should build an agent orchestrator, that Claude Code is just a building block, and it’s going to be all about AI workflows and “Kubernetes for agents”. I went up onstage at multiple events and described my vision for the orchestrator. I went everywhere, to everyone. (from "Welcome to Gas Town" https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...)

That Anthropic releases Agent Teams now (as rumored a couple of weeks back), after they've already adopted a tiny bit of Beads in the form of Tasks, means that either they were already building them back when Steve pitched orchestrators, or they've decided that he was right and it's time to scale the agents. Or they arrived at the same conclusions independently -- it won't matter in the grand scheme of things. I think Steve greatly appreciates it existing; if anything, this is a validation of his vision. We'll probably all be officially herding polecats in a couple of months.

mohsen1 5 hours ago||
It's not like he was the only one who came up with this idea. I built something like that without knowing about Gas Town or Beads. It's just an obvious next step.

https://github.com/mohsen1/claude-code-orchestrator

gbnwl 4 hours ago|||
I also share your confusion about him somehow managing to dominate credit in this space, when it doesn't even seem like Gas Town ended up being very effective as a tool relative to its insane token usage. Everyone who's used an agentic tool for longer than a day will have had the natural desire for them to communicate and coordinate across context windows effectively. I'm guessing he just wrote the punchiest article about it and left an impression on people who had hitherto been ignoring the space entirely.
MattPalmer1086 1 hour ago||
It was a fun article!
behnamoh 4 hours ago|||
Exactly! I built something similar. These are such low-hanging-fruit ideas that no one company/person should be credited for coming up with them.
isoprophlex 6 hours ago|||
There seems to be a lot of convergent evolution happening in the space. Days before the Gas Town hype hit, I made a (less baroque, less manic) "agent team" setup: a shell script to kick off a ralph wiggum loop, and a CLAUDE-MESSAGE-BUS.md for inter-ralph communication (thread safety was hacked in with a .claude.lock file).

The main claude instance is instructed to launch as many ralph loops as it wants, in screen sessions, and to sleep periodically so it can keep track of their progress.

It worked reasonably well, but I don't prefer this way of working... yet. Right now I can't write spec (or meta-spec) files quickly enough to saturate the agent loops, and I can't QA their output well enough... mostly a me thing, I guess?
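
For the curious, the whole thing is tiny. Roughly this, sketched in Python instead of the actual shell script (spec file names and the `claude -p` invocation are illustrative):

```
import fcntl, subprocess, time

BUS, LOCK = "CLAUDE-MESSAGE-BUS.md", ".claude.lock"

def post(agent: str, text: str) -> None:
    # append to the shared message bus; the lock file is the hacked-in thread safety
    with open(LOCK, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)
        with open(BUS, "a") as bus:
            bus.write(f"\n## {agent}\n{text}\n")
        fcntl.flock(lock, fcntl.LOCK_UN)

def launch_ralph(name: str, spec: str) -> None:
    # one ralph wiggum loop: re-feed the same spec to a fresh claude run, forever,
    # detached in a screen session so the main instance can check in on it later
    loop = (f'while true; do claude -p "$(cat {spec}) '
            f'Read {BUS} first and append your progress to it."; done')
    subprocess.run(["screen", "-dmS", name, "bash", "-c", loop], check=True)

# main instance: spawn workers, then sleep and skim the bus periodically
launch_ralph("ralph-1", "spec-auth.md")
launch_ralph("ralph-2", "spec-api.md")
while True:
    time.sleep(300)
    print(open(BUS).read()[-2000:])  # skim recent progress
```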

CuriouslyC 5 hours ago|||
Not a you thing. Fancy orchestration is mostly a waste; validation is the bottleneck. You can do E2E tests and all sorts of analytic guardrails, but you need to make sure the functionality matches intent rather than just being "functional", which is still a slow analog process.
pronik 6 hours ago|||
> Right now I can't write spec (or meta-spec) files quickly enough to saturate the agent loops, and I can't QA their output well enough... mostly a me thing, I guess?

Same for me. However, the velocity of the whole field is astonishing, and things change as we get used to them. We aren't talking that much about hallucination anymore; just 4-5 months ago you couldn't trust coding agents to extract functionality to a separate file without typos, and now splitting Git commits works almost without a hitch. The more we get used to agents getting certain things right 100% of the time, the more we'll trust them. There are many, many things that I know I won't get right, but I'm absolutely sure my agent will. As soon as we start trusting e.g. a QA agent to do its job, our "project management" velocity will increase too.

Interestingly enough, the infamous "bowling score card" text on how XP works has demonstrated inherently agentic behaviour in more ways than one (they just didn't know what "extreme" was back then). You were supposed to implement a failing test and then implement just enough functionality for that test to stop failing, even if the intended functionality was broader -- which is exactly what agents reliably do in a loop. Also, you were supposed to pair-drive a single machine, which has been incomprehensible to me for almost two decades -- after all, every person has their own shortcuts, hardware, IDEs, window managers and what not. Turns out, all you need is a centralized server running a "team manager agent" and multiple developers talking to it to craft software fast (see the tmux requirement in Gas Town).

bonesss 6 hours ago|||
Compare both approaches to mature actor frameworks and they don’t seem to be breaking new ground. These kinds of supervisor trees and hierarchies aren’t new for actor-based systems, and they’re obvious applications of LLM agents working in concert.

The fact that Anthropic and OpenAI have gone this long without such orchestration, despite the unavoidable issues of context windows and unreliable self-validation, and without matching the basic system maturity you get from a default Akka installation, shows that these leading LLM providers (with more money, tokens, deals, access, and better employees than any of us) are learning in real time. Big chunks of the next-gen hype-machine wunder-agents are fully realizable with cron and basic actor-based scripting. Deterministic, write once, run forever, no subscription needed.

Kubernetes for agents is, speaking as a krappy kubernetes admin, not some leap; it’s how I’ve been wiring my local doom-coding agents together. I have a hypothesis that people at Google (who are pretty OK with Kubernetes and maybe some LLM stuff) have been there for a minute too.

Good to see them building this out, excited to see whether LLM cluster failures multiply (like repeating bad photocopies), or nullify (“sorry Dave, but we’re not going to help build another Facebook, we’re not supposed to harm humanity and also PHP, so… no.”).

ttoinou 6 hours ago|||
If it was so obvious and easy, why didn't we have this a year ago? Models were mature enough back then to make this work.
CuriouslyC 5 hours ago|||
Orchestration definitely wasn't possible a year ago. The only tool that even produced decent results that far back was Aider; it wasn't fully agentic, and it didn't really shine until Gemini 2.5 03-25.

The truth is that people are doing experiments on most of this stuff, and a lot of them are even writing about it, but most of the time you don't see that writing (or the projects that get made) unless someone with an audience already (like Steve Yegge) makes it.

ttoinou 5 hours ago||
Roo Code in VSCode was working fine a year ago, even back in November 2024 with Sonnet 3.5 or 3.7
bcrosby95 4 hours ago||||
The high-level idea is obvious, but doing it is not easy. "Maybe agents should work in teams like humans, with different roles and responsibilities, and be optimized for those" isn't exactly mind-bending. I experimented with it too when LLM coding became a thing.

As usual, the hard part is the actual doing and producing a usable product.

lossolo 5 hours ago||||
Because gathering training data and doing post-training takes time. I agree with OP that this is the obvious next step given context-length limitations. Humans work the same way in organizations: you have different people specializing in different things because everyone has a limited "context length".
troupo 3 hours ago|||
Because they are not good engineers [1]

Also, because they are stuck in a language and an ecosystem that cannot reliably build supervisors, hierarchies of processes etc. You need Erlang/Elixir for that. Or similar implementations like Akka that they mention.

[1] Yes, they claim their AI-written slop in Claude Code is "a tiny game engine" that takes 16ms to output a couple hundred characters on screen: https://x.com/trq212/status/2014051501786931427

ruined 6 hours ago|||
what mature actor frameworks do you recommend?
jghn 6 hours ago|||
They did mention Akka in their post, so I would assume that's one of them.
troupo 3 hours ago|||
Elixir/Erlang. It's table stakes for them.
tyre 2 hours ago|||
Sorry, are you saying that engineers at Anthropic who work on coding models every day hadn’t thought of multiple of them working together until someone else suggested it?

I remember having conversations about this when the first ChatGPT launched and I don’t work at an AI company.

astrange 41 minutes ago||
Claude Code has had subagent support for a while, mostly because you have to do very aggressive context-window management with Claude or it gets distracted.
segmondy 6 hours ago|||
This is nothing new; folks have been doing this since 2023. There are lots of papers on arXiv and lots of code on GitHub implementing multi-agent systems.

The "limit" was that agents were not as smart then, context windows were much smaller, and RLVR wasn't a thing, so agents were trained just for function calling, not agent calling/coordination.

We have been doing it since then; the difference is that the models have gotten smart and good enough to handle it.

aaaalone 6 hours ago|||
Honestly, this is one of many ideas I also had.

But it shows how much there still is to do in the AI space.

dingnuts 6 hours ago||
[dead]
GoatOfAplomb 6 hours ago||
I wonder if my $20/mo subscription will last 10 minutes.
mohsen1 5 hours ago||
At this point, if you're paying out of pocket, you should use Kimi or GLM for it to make sense.
andai 2 hours ago|||
GLM is OK (haven't used it heavily but seems alright so far), a bit slow with ZAI's coding plan, amazingly fast on Cerebras but their coding plan is sold out.

Haven't tried Kimi, hear good things.

bluerooibos 3 hours ago|||
These are super slow to run locally, though, unless you've got some great hardware - right?

At least, my M1 Pro seems to struggle and take forever using them via Ollama.

tclancy 5 hours ago|||
Ah ok, same. I keep wondering about how this would ever accomplish anything.
simlevesque 6 hours ago||
I've had good results with Haiku for certain tasks.
d4rkp4ttern 3 hours ago||
This sounds very promising. Using multiple CC instances (or a mix of CLI agents) across tmux panes has always been a workflow of mine, where agents can use the tmux-cli [1] skill/tool to delegate to and collaborate with others, or review/debug/validate each other's work.

This new orchestration feature makes it much more useful since they share a common task list and the main agent coordinates across them.
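
Under the hood, this kind of pane-to-pane delegation needs very little; a minimal sketch with plain tmux commands (pane targets are illustrative, and this is an approximation, not the tmux-cli tool itself):

```
import subprocess

def send_to_pane(pane: str, text: str) -> None:
    # type a prompt into another agent's pane and press Enter
    subprocess.run(["tmux", "send-keys", "-t", pane, text, "Enter"], check=True)

def read_pane(pane: str) -> str:
    # capture the pane's visible output so the delegating agent can review it
    out = subprocess.run(["tmux", "capture-pane", "-p", "-t", pane],
                         capture_output=True, text=True, check=True)
    return out.stdout

# e.g. ask the agent in pane 1 to validate what the agent in pane 0 built
send_to_pane("agents:0.1", "Review the latest diff in this repo and report bugs.")
print(read_pane("agents:0.1"))
```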

[1] https://github.com/pchalasani/claude-code-tools?tab=readme-o...

bhasi 7 hours ago||
Seems similar to Gas Town
rafram 7 hours ago||
I'm not anti-whimsy, but if your project goes too hard on the whimsy (and weird AI-generated animal art), it's kind of inevitable that someone else is going to create a whimsy-free clone, and their version will win because it's significantly less embarrassing to explain to normal people.
reissbaker 6 hours ago|||
Where are the polecats, though? What about the mayor's dog?
koakuma-chan 7 hours ago|||
I don't know what Gas Town is, but Claude Code Agent Teams is what I've been doing for a while now. You use your main conversation only to spawn sub agents to plan and execute, allowing you to work for a long time without losing context or compacting, because all the token-heavy work is done by sub agents in their own context. Claude Code Agent Teams just streamlines this workflow, as far as I can tell.
nprz 6 hours ago|||
Gas Town --> https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...
nickorlow 7 hours ago|||
Yeah, it seems like a much simpler design though (i.e. there only seems to be one 'special/leader' agent and the rest are all workers, vs. Gas Town having something like 8 different roles: mayor, polecats, witnesses, etc.).

Wonder how they compare?

greenfish6 7 hours ago||
I would have to imagine the Gas Town design isn't optimal though? Why 8, and why does there need to be multiple hops of agent communication before two arbitrary agents can talk to each other, as opposed to a single shared filespace?
Ethee 6 hours ago|||
I've been using Gas Town a decent bit since it was released. I'd agree with you that its design is sub-optimal, but I believe that's more due to the way the actual agents/harnesses have been designed, as opposed to optimal software design. The problem you often run into is that agents will sometimes hang, thinking they need human input for the problem they're on, or thinking they're at a natural stopping point. If you're trying to do fully orchestrated agentic coding where you don't look at the code at all (putting aside whether that's good or not for a second), then this is sub-optimal behavior, and so these extra roles have been designed to 'keep the machine going', as it were.

Oftentimes if I'm only working on a single project or focus, I'm not using most of those roles at all, and it's as you describe: one agent divvying out tasks to other agents and compiling reports about them. But because my velocity with this type of coding is now based on how fast I can tell that agent what I want, I'm often working on 3 or 4 projects simultaneously, and Gas Town provides the perfect orchestration framework for doing this.

cstejerean 3 hours ago||
The problem with Gas Town is that it tries to use agents for supervision, when it should be possible to use much simpler, deterministic approaches to supervision that are also a lot more token-efficient.
nickorlow 4 hours ago|||
Yegge's article does come off as complicated design for the sake of complication.
temuze 7 hours ago|||
Yeah but worse

No polecats smh

ramesh31 7 hours ago||
>"Seems similar to Gas Town"

I love that we are in this world where the crazy mad scientists are out there showing the way that the rest of us will end up at, but ahead of time and a bit rough around the edges, because all of this is so new and unprecedented. Watching these wholly new abstractions be discovered and converged upon in real time is the most exciting thing I've seen in my career.

bredren 7 hours ago||
The action is hot, no doubt. This reminds me of Spacewar! -> Galaxy Game / Computer Space.
ottah 7 hours ago||
I absolutely cannot trust Claude Code to independently work on large tasks. Maybe other people work on software that's not significantly complex, but for me to maintain code quality I need to guide more of the design process. Teams of agents just sound like adding a lot more review and refactoring that could be avoided by going slower and thinking carefully about the problem.
nickstinemates 5 hours ago||
You write a generic architecture document: how you want your code base to be organized, when to use pattern X vs pattern Y, examples of what that looks like in your code base. Then you encode this as a skill.

Then, in your prompt, you tell it the task you want, and then you say: supervise the implementation with a sub agent that follows the architecture skill, and evaluate any proposed changes.

There are people who maximize this, and this is how you get things like teams. You make agents for planning, design, QA, product, engineering, review, release management, etc., and you get them to operate and coordinate to produce an outcome.

That's what this is supposed to be, encoded as a feature instead of a best practice.
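
As a rough sketch, the skill itself is just a markdown file with a name/description header; something like this (contents are illustrative):

```
---
name: architecture
description: House rules for how this codebase is organized. Use when writing or reviewing code.
---

# Architecture rules

- Services live in src/services/, one class per file.
- Data access goes through the repository pattern; never query the DB from handlers.
- When to use pattern X vs pattern Y: prefer X for request-scoped state, Y for shared caches.
- Example of the expected shape: see src/services/billing.py.
```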

satellite2 5 hours ago|||
Aren't you just moving the problem a little bit further? If you can't trust it to implement carefully specified features, why would you believe it would properly review them?
frde_me 4 hours ago||
It's hard to explain, but I've found LLMs to be significantly better in the "review" stage than the implementation stage.

So the LLM will do something and not catch at all that it did it badly. But the same LLM, asked to review against the same starting requirement, will almost always catch the problem.

The missing thing in these tools is that automatic feedback loop between the two LLMs: one in review mode, one in implementation mode.
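
A sketch of what that loop could look like, assuming a generic call_llm(prompt) helper rather than any particular tool's API:

```
def call_llm(prompt: str) -> str:
    # placeholder: wire this up to whatever model or CLI you use
    raise NotImplementedError

def build_and_review(requirement: str, max_rounds: int = 3) -> str:
    # implementation mode: first attempt
    code = call_llm(f"Implement this requirement:\n{requirement}")
    for _ in range(max_rounds):
        # review mode: same starting requirement, fresh framing, no implementation history
        review = call_llm(
            f"Requirement:\n{requirement}\n\nImplementation:\n{code}\n\n"
            "List every way this fails the requirement, or reply exactly OK."
        )
        if review.strip() == "OK":
            break
        # feed the review back into implementation mode and try again
        code = call_llm(
            f"Requirement:\n{requirement}\n\nCurrent code:\n{code}\n\n"
            f"Review feedback:\n{review}\n\nFix the issues and return the full code."
        )
    return code
```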

resonious 4 hours ago||
I've noticed this too and am wondering why this hasn't been baked into the popular agents yet. Or maybe it has and it just hasn't panned out?
bashtoni 4 hours ago|||
Anecdotally, I think this is in Claude Code. It's pretty frequent to see it implement something, then declare it "forgot" a requirement and go back and alter or add to the implementation.
bethekidyouwant 1 hour ago|||
You have to dump the context window for the review to work well.
tclancy 5 hours ago|||
How does this not use up tokens incredibly fast though? I have a Pro subscription and bang up against the limits pretty regularly.
nickstinemates 12 minutes ago|||
I don't know. All I can say is that with API-based billing, doing multi-thousand-line refactors that would take days costs like $4. In terms of value for effort, it's incredible.
doctoboggan 5 hours ago||||
It _does_ use up tokens incredibly fast, which is probably why Anthropic is developing this feature. This is mostly for corporations using the API, not individuals on a plan.
digdugdirk 5 hours ago||
I'd love to see a breakdown of the token consumption of inaccurate/errored/unused task branches for Claude Code and Codex. It seems like a great revenue source for the model providers.
shafyy 5 hours ago||
Yeah, that's what I was thinking. They do have an incentive to not get everything right on the first try, as long as they don't overdo it... I also feel like they try to get more token usage by asking unnecessary follow-up questions that the user may say yes to, etc.
andyferris 5 hours ago|||
It does use tokens faster, yes.
aqme28 6 hours ago|||
I agree, but I've found that making an "adversarial" model within Claude helps with the quality a lot. One agent makes the change, the other picks holes in it, and they cycle. In the end, I'm left with less to review.

This sounds more like an automation of that idea than just N-times the work.

Keyframe 5 hours ago|||
Glad I'm not the only one. I do the same, but I tend to have Gemini be the one that critiques.
diego898 5 hours ago|||
Do you do this manually, or with some abstraction above that? Skills, some light orchestration, etc.?
aqme28 5 hours ago||
I just tell it to do so, but you could even add that as a requirement to CLAUDE.md
turtlebits 6 hours ago|||
Humans can't handle large tasks either, which is why you break them into manageable chunks.

Just ask Claude to write a plan and review/edit it yourself. Add success criteria/tests for better results.

stpedgwdgfhgdd 6 hours ago|||
Exactly. One out of three or four prompts requires tuning, nudging, or just stopping it. However, it takes seniority to see where it goes astray. I suspect lots of folks don't even notice that CC is off. It works, it passes the tests, so it is good.
nprz 7 hours ago|||
There is research[0] currently being done on how to divide tasks among LLMs and combine the answers. This approach lets LLMs reach outcomes (solving a problem that requires 1 million steps) that would be impossible otherwise.

[0]https://arxiv.org/abs/2511.09030

woah 5 hours ago|||
All they did was prompt an LLM over and over again to execute one iteration of a Tower of Hanoi algorithm. Literally just using it as a glorified scripting language:

```

Rules:

- Only one disk can be moved at a time.

- Only the top disk from any stack can be moved.

- A larger disk may not be placed on top of a smaller disk.

For all moves, follow the standard Tower of Hanoi procedure: If the previous move did not move disk 1, move disk 1 clockwise one peg (0 -> 1 -> 2 -> 0).

If the previous move did move disk 1, make the only legal move that does not involve moving disk1.

Use these clear steps to find the next move given the previous move and current state.

Previous move: {previous_move}

Current State: {current_state}

Based on the previous move and current state, find the single next move that follows the procedure and the resulting next state.

```

This is buried down in the appendix, while the main paper is full of agentic-swarms-this and millions-of-agents-that, plus plenty of fancy math symbols and graphs. Maybe there is more to it, but the fact that they decided to publish with such a trivial task, which could be much more easily accomplished by having an LLM write a simple Python script, is concerning.
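
For reference, the quoted procedure really is a few lines of plain Python. A sketch (pegs numbered 0-2; one caveat of my own: with this clockwise rule, the stack finishes on peg 1 for odd n and peg 2 for even n):

```
def hanoi_moves(n: int) -> list[tuple[int, int]]:
    # peg 0 starts with all disks, largest (n) at the bottom, disk 1 on top
    pegs = [list(range(n, 0, -1)), [], []]
    moves, moved_1_last = [], False
    while not any(len(p) == n for p in pegs[1:]):
        if not moved_1_last:
            # move disk 1 clockwise one peg (0 -> 1 -> 2 -> 0)
            src = next(i for i, p in enumerate(pegs) if p and p[-1] == 1)
            dst = (src + 1) % 3
        else:
            # the only legal move that does not involve disk 1
            src, dst = next((s, d) for s in range(3) for d in range(3)
                            if s != d and pegs[s] and pegs[s][-1] != 1
                            and (not pegs[d] or pegs[d][-1] > pegs[s][-1]))
        pegs[dst].append(pegs[src].pop())
        moved_1_last = pegs[dst][-1] == 1
        moves.append((src, dst))
    return moves

print(hanoi_moves(3))  # 7 moves, optimal for 3 disks
```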

Spoom 4 hours ago||
Good lord, I can only imagine the wasted electricity.
ottah 6 hours ago|||
No offense to the academic profession, but they're not a good source of advice for best practices in commercial software development. They don't have the experience or the knowledge sufficient to understand my workplace and tasks. Their skill set and job are orthogonal to the corporate world.
nprz 6 hours ago||
Yes, the problem solved in the paper (Tower of Hanoi) is far more easily specified than 99% of the actual problems you would find in commercial software development. Still, it's proof of "theoretically possible" and seems like an interesting area of research.
findjashua 5 hours ago|||
You need a reviewer agent for every step of the process: review the plan generated by the planner, the update made by the task-worker subagent, and a final review once all tasks are done.

This does eat up tokens _very_ quickly though :(

BonoboIO 7 hours ago||
You definitely have to create some sort of PLAN.md and PROGRESS.md via a command, plus an implement command that delegates work. That is the only way I can get bigger things done, no matter how "good" their task feature is.

You run out of context so quickly, and if you don't have some kind of persistent guidance, things go south.
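
In Claude Code those commands are just markdown files under .claude/commands/; a minimal sketch of the implement one (the exact wording is illustrative):

```
Read PLAN.md and PROGRESS.md before doing anything else.
Pick the next unchecked task from PLAN.md and implement only that task.
Run the tests. Then append what you did (and anything unexpected) to
PROGRESS.md and check the task off in PLAN.md.
```

Invoked as /implement, so every fresh context window starts by re-reading the persistent state instead of relying on whatever survived compaction.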

ottah 7 hours ago|||
It's not sufficient, especially if I am not learning about the problem by being part of the implementation process. The models are still very weak reasoners, writing code faster doesn't accelerate my understanding of the code the model wrote. Even with clear specs I am constantly fighting with it duplicating methods, writing ineffective tests, or implementing unnecessarily complex solutions. AI just isn't a better engineer than me, and that makes it a weak development partner.
vonneumannstan 5 hours ago||
>AI just isn't a better engineer than me, and that makes it a weak development partner.

This would also be true of Junior Engineers. Do you find them impossible to work with as well?

koakuma-chan 7 hours ago|||
I tried doing that and it didn't work. It still adds "fallbacks" that just hide errors, or hide the fact that there is no actual implementation ("In a real app, we would do X; just return null for now").
nkmnz 7 hours ago||
I’m looking for something like this, with Opus in the driver's seat, but the subagents should be using different LLMs, such as Gemini or Codex. Anyone know of such a tool? just-every/code almost does this, but the lead/orchestrator is always Codex, which feels too slow compared to Opus or Gemini.
eaf7e281 4 hours ago||
These two basically do what you want: let Claude be the manager and Codex/Gemini be the workers. Many say that Coder-Codex-Gemini is easier to understand than CCG-Workflow, which has too many commands to start with.

https://github.com/FredericMN/Coder-Codex-Gemini https://github.com/fengshao1227/ccg-workflow

This one also seems promising, but I haven't tried it yet.

https://github.com/bfly123/claude_code_bridge

All of them are made by Chinese devs. I know some people are hesitant when they see Chinese products, so I'll address that up front: I have tried all of them, and they have all been great.

nikcub 7 hours ago|||
I use Opus for coding and Codex for reviews. I trigger the reviews in each work task with a review skill that calls out to Codex[0].

I don't need anything more complicated than that, and it works fine. I also run Greptile[1] on PRs.

[0] https://github.com/nc9/skills/tree/main/review

[1] https://www.greptile.com/

khaliqgant 4 hours ago|||
You can accomplish this with https://github.com/AgentWorkforce/relay and make the lead/orchestrator any harness you want. At its core, agent-relay is agent-to-agent communication, but it unlocks quite a few multi-agent orchestration paradigms. I wrote about some learnings here as well: https://x.com/khaliqgant/status/2019124627860050109?s=46
fosterfriends 7 hours ago|||
I think this is where future Cursor features will be great - coordinating across many different model providers depending on the sub-jobs to be done.
nkmnz 7 hours ago||
What I want is something else: I want them to work in parallel on the same problem, and the orchestrator to then evaluate and consolidate their responses. I’m currently doing this manually, but it’s tedious.
sathish316 7 hours ago|||
You can run an ensemble of LLMs (Opus, Gemini, Codex) in Claude Code Router via OpenRouter, or in any agent CLI that supports subagents and isn't tied to a single LLM, like Opencode. I have an example of this in Pied-Piper, a subagent orchestrator that runs in Claude Code or Claude Code Router and uses a distinct model/role for each subagent:

1. GPT-5.2 Codex Max for planning

2. Opus 4.5 for implementation

3. Gemini for reviews

It’s easy to swap models or change responsibilities. Doc and steps here: https://github.com/sathish316/pied-piper/blob/main/docs/play...

knes 7 hours ago||
At Augment, we've been working on this: multi-agent orchestration, spec-driven, different models for different tasks, etc.

https://www.augmentcode.com/product/intent

You can use the code AUGGIE to skip the queue. Bring-your-own-agent (powered by Codex, CC, etc.) is coming next week.

Sol- 7 hours ago||
With stuff like this, it might be that all the infra build-out is insufficient. Inference demand will go up like crazy.
RGamma 7 hours ago||
Unlocking the next order of magnitude of software inefficiency!

Though I do hope the generated code will end up being better than what we have right now. It mustn't get much worse. Can't afford all that RAM.

Sol- 6 hours ago||
Dunno, it's probably less energy efficient than a human brain, but being able to turn electricity into intelligence is pretty amazing. RAM and power generation are engineering problems to be solved for civilization to benefit from this.
kylehotchkiss 7 hours ago|||
It'd be nice if CC could figure out all the required permissions upfront and then let you queue the job to run overnight
Der_Einzige 7 hours ago||
Anyone paying attention has known that demand for all types of compute that can run LLMs (i.e. GPUs, TPUs, hell, even CPUs) was about to blow up, and will remain extremely large for years to come.

It's just HN that's full of "I hate AI" or wrong contrarian types who refuse to acknowledge this. They will fail to reap what they didn't sow and will starve in this brave new world.

sciencejerk 5 hours ago|||
Agreed. Agent scaling and orchestration indicate that demand for compute is going to blow up, if it hasn't already. The rationale for building all those datacenters they can't build fast enough is finally making sense.
emp17344 7 hours ago||||
This reads like a weird cult-ish revenge fantasy.
RGamma 7 hours ago||
And what about you? Show your "I used AI today" badge, right now!
anthem2025 6 hours ago||||
[dead]
ffffuuuuuccck 6 hours ago||||
[flagged]
aaaalone 6 hours ago|||
If AI progresses slowly enough, we will end up in a society where high unemployment numbers are the norm and we are stuck in capitalism.

And when I think about one 'senior' on my team, I would already prefer an expensive AI subscription over that one person.

Der_Einzige 6 hours ago|||
[flagged]
sciencejerk 5 hours ago|||
Blue collar work won't be safe for long. Just longer.
emp17344 6 hours ago|||
What the fuck is wrong with you? This guy is either a troll or legitimately mentally ill.
mrkeen 7 hours ago|||
Oh yeah I mean if you're a webdev and you haven't built several data centres already you're basically asking to be homeless.
giancarlostoro 5 hours ago|
I was working on my own alternative to Beads... then I realized I could do exactly this with something similar to Beads. I'm planning on open-sourcing it soon because I like what I have so far. I also made it so I can sync my tasks directly to my GitHub Projects. I think it's more useful to have agent tasks eventually synced back up to real ticketing systems, for historical reasons. Besides, it's better to have alternatives that are agent-agnostic.