Posted by dbalatero 9/3/2025
Or, alternatively, exposure to our robot overlords makes you less discerning, less concerned with, ah, whether the thing is correct or not.
(This _definitely_ seems to be a thing with LLM text generation, with many people seemingly not even reading the output before they post it, and I assume it's at least somewhat a thing for software as well.)
I didn't know Mike Judge was such a polymath!
For experienced engineers, I'm seeing (internally in our company at least) a huge amount of caution and hesitancy to go all-in with AI. No one wants to end up maintaining huge codebases of slop code. I think that will shift over time. There are use cases where having quick low-quality code is fine. We need a new intuition about when to insist on handcrafted code, and when to just vibecode.
Non-experienced engineers, on the other hand, currently hit a lot of complexity limits when trying to get a finished product to actually work, unless they're building something extremely simple. That will also shift: the range of what you can vibecode is increasing every year. Last year there was basically nothing you could vibecode successfully; this year you can vibecode TODO apps and the like. I definitely think the App Store will be flooded in the coming years. It's just early.
Personally, I have a side project where I'm using Claude & Codex, and I definitely feel a measurable difference: about a 3x to 5x productivity boost, IMO.
The summary: just because we don't see it yet doesn't mean it's not coming.
There are very simple apps I try to vibe code that AI cannot handle. It seems very good at certain domains, and complete shit at others.
For example, I hand-wrote a simulation in C in just 900 LOC. I wrote a spec for it and tried to vibe code it in other languages, because I wanted to compare different languages and concurrency strategies. Every LLM I've tried fails, and manages to write 2x+ more code even in comparatively succinct languages such as Clojure.
I can totally see why people writing small utilities or simple apps in certain domains think it's a miracle. But when it comes to things like games, it seems like a complete flop.
Where I do find AI helpful:
- breaking through analysis paralysis by creating the skeleton of a feature that I then rework (UI work is a good example)
- aggressive dev tooling for productivity on early stage projects, where the CI/CD pipeline is lacking and/or tools are clumsy. (Related XKCD: https://xkcd.com/1205/)
Otherwise, I find most of my time goes to understanding the client requirements and making sure they don't want conflicting features, both of which are difficult to speed up with AI. Coding is actually the easy part: even if it were sped up 100x, a consistent end-to-end improvement of 2x would be a big win (see Amdahl's law).
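The Amdahl's law point can be sketched numerically. The fractions below are illustrative assumptions, not measurements from the thread:

```python
# Amdahl's law: if only a fraction p of the total work is sped up by a
# factor s, the overall speedup is bounded by 1 / ((1 - p) + p / s).

def overall_speedup(p: float, s: float) -> float:
    """Overall speedup when a fraction p of the work is accelerated by s."""
    return 1.0 / ((1.0 - p) + p / s)

# Assume (hypothetically) coding is half the job and AI makes it 100x faster:
print(round(overall_speedup(0.5, 100), 2))  # 1.98 -- barely 2x end to end

# Even with infinitely fast coding, the ceiling is 1 / (1 - p) = 2x.
```

So as long as requirements gathering and coordination dominate, the coding speedup alone can never push the end-to-end improvement past the non-coding share of the work.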
How is this "deadly" serious? It's about software developers losing well-paid, comfortable jobs. It's even less serious if AI doesn't actually improve productivity, because they'll find new jobs in the future.
Pretty much the only future where AI will turn out "deadly serious" is if it shows human-level performance for most if not all desk jobs.