
Posted by pavel_lishin 1/23/2026

Gas Town's agent patterns, design bottlenecks, and vibecoding at scale (maggieappleton.com)
403 points | 434 comments | page 3
mohsen1 1/23/2026|
Like many others here, I tried building something like this, but now I'm convinced agents should just use GitHub issues and pull requests. You get CI and code reviews (AI or human) for free, and the state of progress isn't kept in code.

Basically simulate a software engineering team using GitHub but everyone is an agent. From tech lead to coders to QA testers.

https://github.com/mohsen1/claude-code-orchestrator
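A minimal sketch of the idea (not the linked repo's actual code): each role in the simulated team hands work to the next through GitHub issues and PRs via the `gh` CLI, so the issue tracker itself holds the state. The role names and the `plan` decomposition are illustrative assumptions; commands are built here but not executed.

```python
# Hypothetical sketch of "GitHub as agent state": the tech-lead agent files
# labeled issues, the coder agent opens PRs that close them, QA reviews.
# Commands are constructed as argv lists; a real orchestrator would run them
# with subprocess and real `gh` authentication.

def gh_issue_create(title: str, body: str, label: str) -> list[str]:
    """Build a `gh issue create` command; the issue *is* the task state."""
    return ["gh", "issue", "create",
            "--title", title, "--body", body, "--label", label]

def gh_pr_create(branch: str, issue_no: int) -> list[str]:
    """Build a `gh pr create` command; `Closes #N` ties the PR to its task."""
    return ["gh", "pr", "create", "--head", branch, "--fill",
            "--body", f"Closes #{issue_no}"]

def plan(feature: str) -> list[list[str]]:
    """Tech-lead agent decomposes a feature into labeled agent tasks."""
    return [gh_issue_create(f"{feature}: {step}", f"Part of {feature}",
                            "agent-task")
            for step in ("design", "implement", "test")]

cmds = plan("rate limiter")
```

Because progress lives in issues and PR review threads, CI and human oversight come along for free, which is the commenter's main point.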

thorum 1/23/2026||
Am I wrong that this entire approach to agent design patterns is based on the assumption that agents are slow? Which yeah, is very true in January 2026, but we’ve seen that inference gets faster over time. When an agent can complete most tasks in 1 minute, or 1 second, parallel agents seem like the wrong direction. It’s not clear how this would be any better than a single Claude Code session (as “orchestrator”) running subagents (which already exist) one at a time.
Ethee 1/23/2026|
It's likely then that you are thinking too small. Sure, for one-off tasks and small implementations, a single prompt might save you 20-30 minutes. But when you're building an entire library/service/piece of software in 3 days that would normally have taken you 30 days by hand, the real limitation comes down to how fast you can get your design into a structured format. As this article describes.
thorum 1/23/2026||
Agree that planning time is the bottleneck, but

> 3 days

still seems slow! I’m saying what happens in 2028 when your entire project is 5-10 minutes of total agent runtime - time actually spent writing code and implementing your plan? Trying to parallelize 10m of work with a “town” of agents seems like unnecessary complexity.

Ethee 1/24/2026||
I think most of the anecdotal and research experience I've seen with AI agents so far tells us that you need at least a couple of passes to converge on a good solution. So even in your future vision, where models are 5x as good as now, I'll still need at least a few agents to make sure I arrive at a good solution. By that I specifically mean a working implementation of the design, not an incorrect assumption about the design that sends the AI off down the wrong path, which I feel is the main issue I keep hearing over and over. So coming back to your point: assuming we can have the 'perfect' design document that lays everything out, yeah, we'll probably only need about 5 agents total to actually build it in a few years.
SimianSci 1/23/2026||
I've been researching developer tooling usage at my own and other organizations for years now, and I'm genuinely trying to understand where agentic coding fits into the evolving landscape. One of the most solid things I'm beginning to understand is that many people don't understand how these tools influence technical debt.

Debt doesn't come due immediately. It accrues, and it may allow the purchase of things that were once too expensive, but eventually the bill comes due.

I've started referring to vibe-coding as "credit cards" for developers: it lets them accrue massive amounts of technical debt that was previously out of reach. This can give some competent developers incredible improvements to their work. But for the people who accrue more technical debt than they can pay off, it can sink their project and cost the organization a lot in lost investment of both time and money.

I see Gas Town and tools like it as debt schemes where someone applies for more credit cards to make the payments on prior cards they've maxed out, compounding the issue with the vague goal of "eventually it pays off." So color me skeptical.

I'm not sure the analogy holds up in all cases, but it's been helping my organization navigate the adoption of agents, since it lets us allocate spend according to the seniority of each developer. So I've been feeling like an underwriter, having to figure out whether a developer requesting more credits or budget for agentic coding can be trusted to pay off the debt they will accrue.

hahahahhaah 1/24/2026|
I've found AI particularly useful in ossified swamps at big companies, where paying down tech debt would be a major, many-team effort that can't be aligned with any OKR. An agent lets you use natural language to produce the needed boilerplate and get the cursed "do this now" task done.
perrygeo 1/24/2026||
I get that Gas Town is part tongue-in-cheek, a strawman to move the conversation on agentic AI forward. And for that I give it credit.

But I think there's a real missed opportunity here: it doesn't go far enough. Who wants some giant, complex system of agents conceived by a human? The agents, their roles, and their relationships could be dynamically configured according to the task.

What good is removing human judgment from the loop, only to constrain the problem by locking in the architecture a priori? It just doesn't make sense. Your entire project hinges on the waterfall-like nature of the agent design! That part feels far too important, yet Gas Town shows little curiosity about changing it. These Mayors, Polecats, Witnesses, and Deacons are but one of infinite ways to arrange things. Why should there be just one? Why should there be an up-front design at all? A dynamic, emergent network of agents feels like the real opportunity here.

pianopatrick 1/24/2026||
People, including the author of this article, say that design and architecture are the hard parts, but I think long term those are just as solvable as coding.

I think architecture will become like an installer. Some kind of agent orchestration system will ask you "do you want this or that" and guide you through various architecture choices when you set up a project, or when those choices arise.

And for design, now that code is fast and easy to generate, an agent system can just generate two, three or four versions of the UX for each feature and ask "do you like this one, this one or that one?".

So it's a switch from upfront design/architecture choices you have to put into prompts, to the agent orchestration system asking you to make a choice at the moment that choice becomes relevant.
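The "architecture as an installer" flow above can be sketched as a question-asking function where follow-up choices only surface when an earlier answer makes them relevant. The specific questions and options here are made-up examples, and `ask` is a stand-in for whatever UI the orchestrator would expose.

```python
# Sketch: deferred architecture choices, installer-style. Questions are
# asked lazily, so a follow-up (migrations tooling) only appears if the
# earlier answer (postgres) makes it relevant.

def choose_architecture(ask) -> dict:
    arch = {}
    arch["storage"] = ask("Storage?", ["postgres", "sqlite"])
    if arch["storage"] == "postgres":
        # This question never arises on the sqlite path.
        arch["migrations"] = ask("Migrations?", ["alembic", "raw-sql"])
    arch["api"] = ask("API style?", ["rest", "graphql"])
    return arch

def scripted(answers):
    """Test double for the UI: replays a fixed list of user choices."""
    it = iter(answers)
    def ask(question, options):
        choice = next(it)
        assert choice in options
        return choice
    return ask

plan = choose_architecture(scripted(["postgres", "alembic", "rest"]))
```

The same shape works for the UX idea in the comment: generate N variants, then `ask("Which one?", variants)`.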

falcor84 1/24/2026||
As Yegge himself would agree, there's likely nothing particularly good about this specific architecture, but I think there's something massive here as a proof of concept for something bigger, beyond the realm of software development.

Over the last few years, people have been playing around with trying to integrate LLMs into cognitive architectures like ACT-R or Soar, with not much to show for it. But I think that here we actually have an example of a working cognitive architecture that is capable of autonomous long-term action planning, with the ability to course-correct and stay on task.

I wouldn't be surprised if future science historians look back on this as an early precursor to what will eventually be adapted to give AIs full agentic executive functioning.

edg5000 1/24/2026||
First time I'm seeing this on HN. Maybe it was posted earlier.

I've been doing manual orchestration, where I write a big spec containing phases (each done by an agent) plus instructions for the top-level agent on how to interact with the sub-agents. It works well, but it's hard to utilize effectively. No doubt this is the future. The approach is bottlenecked by limitations of the CC client, mainly that I can't fully see inter-agent interactions, only the tool calls. Using a hacked client or a compatible reimplementation of CC may be the answer, unless the API were priced attractively or other models could do the work. Gemini 3 may handle it better than Opus 4.5, though the Gemini 3 pricing model is complex, to say the least.

chrisss395 1/24/2026||
Yes to Maggie & Steve's amazingly well-written articles... and:

I would love to see Steve consider different command-and-control structures, and reconsider how work gets done across the development lifecycle. Gas Town's command-and-control structure reads to me like "how a human would think about making software." Even the article admits you need to rethink how you interact in the Gas Town world. It may actually understate that point.

Where and how humans interact feels like something that will always be an important consideration, in both a human- and an AI-dominated software development world. At least from where I sit.
