Posted by straydusk 2 days ago
https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...
Is this a nano banana tendency, or was it intentional?
Here's the prompt I used, actually:
Create a vibrant, visually dynamic horizontal infographic showing the spectrum of AI developer tools, titled "The Shift Left"
Layout: 5 distinct zones flowing RIGHT TO LEFT as a journey/progression. Use creative visual metaphors — perhaps a road, river, pipeline, or abstract flowing shapes connecting the stages. Each zone should feel like its own world but connected to the others.
Zones (LEFT to RIGHT):
1. "Specs" (leftmost) - Kiro logo, VibeScaffold logo, GitHub Spec Kit logo
Label: "Requirements → Design → Tasks"
2. "Multi-Agent Orchestration" - Claude Code logo, Codex CLI logo, Codex App logo, Conductor logo Label: "Parallel agents, fire & forget"
3. "Agentic IDE" - Cursor logo, Windsurf logo Label: "Autonomous multi-file edits"
4. "Code + AI" - GitHub Copilot logo Label: "Inline suggestions"
5. "Code" (rightmost) - VS Code logo Label: "Read & write files"
Visual style: Fun, energetic, modern. Think illustrated tech landscape or isometric world. NOT a boring corporate chart. Use warm off-white background (#faf8f5) with amber/orange (#b45309) as the primary accent color throughout. Add visual flair — icons, small illustrations, depth, texture, but don't make it visually overloaded.
Aspect ratio: 16:9 landscape
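For reference, here's a minimal sketch of how I'd send a prompt like this through the google-genai Python SDK. The model id ("gemini-2.5-flash-image", i.e. nano banana) and the response handling are my assumptions from the docs, not exactly what I ran, so double-check them:

    from google import genai

    # Assumes GEMINI_API_KEY is set in the environment.
    client = genai.Client()

    response = client.models.generate_content(
        model="gemini-2.5-flash-image",   # assumed "nano banana" model id
        contents=PROMPT,                  # the full prompt text quoted above
    )

    # Image generations come back as inline binary parts.
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            with open("shift-left.png", "wb") as f:
                f.write(part.inline_data.data)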
We are very far from this being a settled or agreed-upon statement, and I really struggle to understand how one vendor making a tool is indicative of an industry practice.
You may read all the assembly that your compiler produces. (Which, awesome! Sounds like you have a fun job.) But I don't. I know how to read assembly and occasionally do it. But I do it rarely enough that when I am reading compiler output, I have to re-learn a bunch of stuff before I can solve the hairy bug or chase down the interesting system-level thing I'm after. And even when I have a bug at a level where reading assembly might help, I'm mostly using other tools at one or two removes to understand the code at that level.
I think it's pretty clear that "reading the code" is going to go the way of reading compiler output. And quite quickly. Even for critical production systems. LLMs are getting better at writing code very fast, and there's no obvious reason we'll hit a ceiling on that progress any time soon.
In a world where the LLMs are not just pretty good at writing some kinds of code, but very good at writing almost all kinds of code, it will be the same kind of waste of time to read source code as it is, today, to read assembly code.
Compilers predictably transform programming-language source into CPU (or VM) instructions. Transpilers predictably transform one programming language into another.
We introduced instruction set architectures, compiler flags, reproducible builds, and checksums precisely to make sure that whatever build artifact is produced is predictable and dependable.
That reproducibility is how we can trust our software, and it's why we don't need to care about assembly (or JVM, etc.) specifics 99% of the time. (Heck, I'm not familiar with most of it.)
Same goes for libraries and frameworks. We can trust their abstractions because someone put years or decades into developing, testing, and maintaining them, and the community has audited them if they are open source.
It takes a whole lot of hand-waving to get from this point to LLMs - which are stochastic by nature - transforming natural-language instructions (even if you call them "specs", they're fundamentally still text prompts!) into dependable code "that you don't need to read", i.e. a black box.
What's the equivalent for an LLM? The string of prompts that non-deterministically generates code?
Also, if LLM output is analogous to assembly, then why is that what we're checking into our source control?
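To make the contrast concrete, a minimal sketch; the gcc invocation stands in for any reproducible toolchain, and generate() is a hypothetical LLM call:

    import hashlib, subprocess

    def digest(data):
        return hashlib.sha256(data).hexdigest()

    # Deterministic pipeline: same source + same flags -> byte-identical artifact.
    def build():
        subprocess.run(["gcc", "-O2", "-o", "app", "main.c"], check=True)
        with open("app", "rb") as f:
            return f.read()

    assert digest(build()) == digest(build())   # holds on a reproducible toolchain

    # Stochastic pipeline: same prompt, sampled tokens. generate() is hypothetical;
    # there is no checksum you can pin a sampled completion to.
    # assert digest(generate(PROMPT)) == digest(generate(PROMPT))   # no such guarantee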
LLMs don't seem to solve any of the problems I had before LLMs existed. I never worried about being able to generate a bunch of code quickly. The problem that needs solving is how to write code that can be understood and easily modified, with a high degree of confidence that it's correct, performs well, and so on. Using LLMs for programming seems to do the opposite.
But with the AI tools we're not yet at the wave of "sometimes it's good to read the code" virtue-signaling blog posts that will make the front page a year or so from now; we're still at the "I'm the new hot shit because I don't read code" moment, which is all a bit hard to take.
What’s worth paying for is something that is trustworthy.
Claude Code is a perfect example: they blocked tools like opencode because they know quality is the only moat, and they don't currently have it.
Even in those environments, I'd argue that AI coding can offer a lot in terms of verification & automated testing. However, I'd probably agree, in high-stakes safety environments, it's more of a 'yes and' than an either/or.
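To make that concrete: a property-based test is one such verification layer; the agent drafts the code, the invariant checks it. A minimal sketch with hypothesis (dedupe is a toy stand-in for whatever the agent wrote):

    from hypothesis import given, strategies as st

    # The (possibly AI-written) code under test.
    def dedupe(items):
        return list(dict.fromkeys(items))

    # The invariants we actually care about, independent of the implementation.
    @given(st.lists(st.integers()))
    def test_dedupe(items):
        result = dedupe(items)
        assert set(result) == set(items)       # nothing lost, nothing invented
        assert len(result) == len(set(items))  # duplicates removed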
Good luck debugging any non-trivial problem in such a codebase.
When an update script jacks up the guaranteed-to-be-robust vibed data setup in this first-of-a-kind, one-of-a-kind, singular installation… what then?
The pros have separate dev, test, QA, and prod environments. Immutable servers, NixOS, containers, git, and rollback options in orchestration frameworks. Why? Because uh-oh, oh-shit, say-what, no-you're-kidding, oh-fuck, and oops are omnipresent.
MS Access was a great product with some scalability ceilings that took engineering to work past. MS Access solutions growing too big and then imploding was a real concern that bit many departments. But MS Access was not dumping 15,000 LoC onto the laps of these non-developers and telling them they are hybrid spirit code warriors with next-level hacking skills.
Ruby on Rails, WordPress, SharePoint… there are legitimately better options out there for tiny-assed self-serving CRUD apps and cheap developer ecosystems. They're not quite as fun, tho, and they don't gas people up as well.
the constant asking drives me crazy
9/10 my AI-generated code is bad before my verification layers; 9/10 it's good after.
Claude fights through your rules. And if you code in another language, you could use other agents to verify the code.
This is the challenge now: effectively verifying the code. Whenever I end up with a bad response, I ask myself what layers I could set up to stop the AI as early as possible.
Also things like naming, comments, tree traversal, context engineering, even data structures and multi-agent setups. I know it sounds like buzzwords, but these are the topics a software engineer really should think about. Everything else is frankly cope.
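For what it's worth, my layers boil down to a gate script that runs the cheapest checks first; the specific tools (ruff, mypy, pytest) are just my stack, substitute your own:

    import subprocess, sys

    # Cheapest check first, so bad agent output is rejected as early as possible.
    LAYERS = [
        ["ruff", "check", "."],   # lint: milliseconds
        ["mypy", "src"],          # types: seconds
        ["pytest", "-q"],         # behavior: slowest, runs last
    ]

    for cmd in LAYERS:
        if subprocess.run(cmd).returncode != 0:
            sys.exit(f"blocked by: {' '.join(cmd)}")
    print("all layers passed")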