Posted by pavel_lishin 1/23/2026
> “It will be like kubernetes, but for agents,” I said.
> “It will have to have multiple levels of agents supervising other agents,” I said.
> “It will have a Merge Queue,” I said.
> “It will orchestrate workflows,” I said.
> “It will have plugins and quality gates,” I said.
More “agile for agents” than “Kubernetes for agents”.
As soon as the results actually matter, the maxim becomes "if it works, but it's stupid, it doesn't work".
So apparently the medical field is not above this logic.
He is just making up a fantasy world where his elves run in specific patterns to please him.
There are no metrics or statistics on code quality, bugs produced, feature requirements met... or anything.
Just a gigantic wank session really.
I do think it's overly complex, though it's a novel concept.
I think if you'd read the article through, you'd know they were serious, because Yegge all but admits this himself.
One comment claims it’s not necessary to read code when there is documentation (generated by an LLM).
Language varies with geography and with time. British, Americans, and Canadians speak “similar” English, but not identical.
And read a book from 70-80 years ago to see that many words appear to be used for their “secondary meaning.” Of course, what we consider their secondary meaning today was the primary meaning back then.
(Maybe you can argue that you could then do everything with an event-driven single agent, like async for LLMs, if you don't mind having a single very ADHD context.)
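To make that parenthetical a little more concrete, here's a minimal sketch of what an event-driven single agent could look like; everything here (the queue, the `call_llm` stub, the EVENT/AGENT tags) is hypothetical and not any real framework's API:

```python
import asyncio

# Hypothetical sketch: one agent, one ever-growing context, driven entirely
# by events. No sub-agents; every tool result, user message, or timer tick
# just lands in the same ("very ADHD") context.

async def call_llm(context: list[str]) -> str:
    """Stand-in for whatever completion call you'd actually use."""
    return f"(agent response given {len(context)} context entries)"

async def single_agent(events: asyncio.Queue) -> None:
    context: list[str] = []
    while True:
        event = await events.get()        # next event, whatever its source
        context.append(f"EVENT: {event}")
        reply = await call_llm(context)   # one shared context for everything
        context.append(f"AGENT: {reply}")
        events.task_done()
```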
I haven't seen anything to suggest that Yegge is proposing it as a serious tool for serious work, so why all the hate?
I've had very good success with a recursive sub-agent scheme where a separate prompt (agent) is used to gate the recursive call. It compares the caller's prompt with the proposed callee's prompt to determine whether we are making a reasonable effort to reduce the problem into workable base cases. If the two prompts are identical, we deny the request with an explanation. In practice, this works so well that I can allow unlimited depth and have zero fear of blowing the stack. Even if the verifier gets it wrong a few times, it only has to get it right once to reverse an infinite descent.
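For anyone curious what such a gate might look like, here's a minimal sketch; `complete()` stands in for whatever LLM call is actually used, and the SUBTASK convention is invented purely for illustration, not the commenter's actual setup:

```python
# Hypothetical sketch of a verifier-gated recursive sub-agent scheme.

def complete(prompt: str) -> str:
    """Placeholder for an LLM completion call."""
    raise NotImplementedError

def gate(caller_prompt: str, callee_prompt: str) -> tuple[bool, str]:
    """Separate verifier agent: allow the recursive call only if the callee's
    prompt genuinely reduces the caller's problem toward a base case."""
    if callee_prompt.strip() == caller_prompt.strip():
        # Identical prompts are denied outright, with an explanation.
        return False, "Denied: sub-task is identical to the parent task."
    verdict = complete(
        "Parent task:\n" + caller_prompt
        + "\n\nProposed sub-task:\n" + callee_prompt
        + "\n\nIs the sub-task a reasonable reduction of the parent task"
          " toward a workable base case? Answer ALLOW or DENY with a"
          " one-line reason."
    )
    return verdict.startswith("ALLOW"), verdict

def solve(prompt: str, parent_prompt: str | None = None) -> str:
    # No explicit depth limit: the gate is what prevents infinite descent,
    # and it only has to be right once to stop a runaway recursion.
    if parent_prompt is not None:
        allowed, reason = gate(parent_prompt, prompt)
        if not allowed:
            return reason  # bounce the refusal back up to the caller
    answer = complete(prompt)
    if answer.startswith("SUBTASK:"):                 # agent asks to recurse
        sub_prompt = answer[len("SUBTASK:"):].strip()
        sub_result = solve(sub_prompt, parent_prompt=prompt)
        answer = complete(prompt + "\n\nSub-agent result:\n" + sub_result)
    return answer
```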
DeepSeekMath-V2 seems to show this: increasing the number of prover/verifier iterations increases accuracy. And this is with a model that has already undergone RL under a prover/verifier selection process.
However, this type of subagent communication maintains full context, and is different from the "breaking into tasks" style of sharding among subagents. I'm less convinced of the latter, because oftentimes a problem is more complex than the sum of its parts, i.e. it's the interdependencies that make it complex, and you need to consider each part in relation to the other parts, not in isolation.
Parallelism and BFS-style approaches do not exhibit this property. Anything that happens within the context or token stream is a much weaker solution. Most agent frameworks are interested in the appearance of speed, so they miss out on the nuance of this execution model.
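A rough sketch of the sequential prover/verifier loop being contrasted with the parallel/BFS fan-out; `prove` and `verify` are placeholder LLM calls, not DeepSeek's actual pipeline:

```python
# Sequential prover/verifier refinement loop (placeholder names, not
# DeepSeek's actual API). The point: each attempt sees the critique of the
# previous one, so depth buys accuracy -- unlike a parallel/BFS fan-out,
# where candidates are generated independently of each other.

def prove(problem: str, feedback: str | None = None) -> str:
    """Prover model: produce a (possibly revised) solution attempt."""
    raise NotImplementedError

def verify(problem: str, attempt: str) -> tuple[bool, str]:
    """Verifier model: accept/reject the attempt and return a critique."""
    raise NotImplementedError

def solve_sequentially(problem: str, max_rounds: int = 8) -> str:
    attempt = prove(problem)
    for _ in range(max_rounds):
        ok, critique = verify(problem, attempt)
        if ok:
            return attempt
        attempt = prove(problem, critique)  # revision depends on the critique
    return attempt  # best effort after max_rounds
```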
There's this implied trust we all have in the AI companies that the models are either not sufficiently powerful to form a working takeover plan or that they're sufficiently aligned to not try. And maybe they genuinely try, but my experience is that in the real world, nothing is certain. If it's not impossible, it will happen given enough time.
If the safety margin for preventing takeover is "we're 99.99999999 percent sure per 1M tokens", how long before it happens? I made up these numbers, but does anyone have a guess at what they really are?
Because we're giving the models so much unsupervised compute...
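For what it's worth, here's the back-of-the-envelope math those made-up numbers imply; the daily token volume is equally invented, purely for illustration:

```python
import math

# Back-of-the-envelope math for the made-up numbers above.
p_safe = 0.9999999999          # "99.99999999 percent sure" per 1M-token block
p_fail = 1 - p_safe            # ~1e-10 chance of a bad outcome per block

# Geometric distribution: expected blocks until the first failure.
expected_blocks = 1 / p_fail   # ~1e10 blocks, i.e. ~1e16 tokens

# Suppose (pure guess) 1e12 unsupervised agent tokens run per day,
# i.e. 1e6 blocks/day. Days until the cumulative odds reach 50%:
blocks_per_day = 1e12 / 1e6
days_to_even_odds = math.log(0.5) / (blocks_per_day * math.log(1 - p_fail))

print(f"{expected_blocks:.1e} blocks to expected failure, "
      f"~{days_to_even_odds:,.0f} days (~{days_to_even_odds / 365:.0f} years) to even odds")
```

With those invented inputs it comes out to roughly two decades to even odds; the answer swings by orders of magnitude with digits nobody actually knows.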
I hope you might be somewhat relieved to consider that this is not so in an absolute sense. There are plenty of technological might-have-beens that didn't happen, still haven't, and probably never will, due to various economic and social dynamics.
The counterfactual, that everything possible happens, is almost tautological.
We should try to look at these mechanisms from an economic standpoint, and ask "do they really have the information-processing density to take significant long-term independent action?"
Of course, "significant" is my weasel word.
> we're giving the models so much unsupervised compute...
Didn't you read the article? It's wasted! It's kipple!
Anyway, we'll likely always settle on simpler/boring, but the game analogies are fun for the time being. There's a lot of opportunity to enhance UX around design, planning, and review.