Posted by phire 7/1/2025
If I have to manually review the boilerplate after it generates then I may as well just write it myself. AI is not improving this unless you just blindly trust it without review, AND YOU SHOULDN'T
If there's a secret, silent majority of seasoned devs who are just quietly trying to weather this, I wish they would speak up
But I guess just getting those paycheques is too comfy
Sounds like we're (once again) rediscovering "No Silver Bullet".
“The actual bottlenecks were, and still are, code reviews, knowledge transfer through mentoring and pairing, testing, debugging, and the human overhead of coordination and communication”
AI can dramatically speed up testing and code reviews. Generating unit tests is a major application of AI.
Code reviews can be accelerated too: “please quickly explain what this code is doing” gives you a foothold; “please check this code for any obvious mistakes” lets you quickly bounce it back if another round of review will be needed. Better yet, the submitter can ask the AI to do that themselves, and also to suggest refactorings that will make the review easier and faster for someone else.
As for understanding code and communicating it, well, that is going to be less and less necessary as the abstraction level we work at is lifted.
This objection is just cope. It’s moving the goalposts because we’re scared AI is going to take our jobs.
In truth, it will simply accelerate our work until we hit AGI. And at that point (which I think is probably a way off) we’ll have much greater concerns than the job market.
Or I might build that myself.
An example of this is making changes to a self-hosting compiler. Due to something you don't understand, something is mistranslated. That mistranslation is silent though. It causes the compiler to mistranslate itself. That mistranslated compiler mistranslates something else in a different way, unrelated to the initial mistranslation. Not just any something else is mistranslated, but some rarely occurring something else. Your change is almost right: it does the right thing with numerous examples, some of them complicated. Making your change in the 100% correct way which doesn't cause this problem is like a puzzle to work out.
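The two-stage failure described above can be modelled with a toy. A minimal sketch, where a "compiler" is just a dict from source tokens to lists of target instructions, and the compiler's own source is written in the same token language — all the token names and the `build` step are invented for illustration, not taken from any real compiler:

```python
# Toy model of a self-hosting compiler bootstrap. A "compiler" maps
# source tokens to lists of target instructions; compiling a program
# concatenates the expansions.

def compile_with(compiler, program):
    out = []
    for tok in program:
        out.extend(compiler[tok])
    return out

def build(stage, source):
    # Bootstrap step: translate every rule body of the compiler's own
    # source using the current stage of the compiler.
    return {op: compile_with(stage, body) for op, body in source.items()}

# Stage 0: a known-good compiler binary.
stage0 = {
    "push0": ["PUSH 0"],
    "swap":  ["SWAP"],
    "add":   ["ADD"],
    "sub":   ["SUB"],
    "neg":   ["PUSH 0", "SWAP", "SUB"],
}

# The compiler's source, written in its own source language.
source = {
    "push0": ["push0"],
    "swap":  ["swap"],
    "add":   ["add"],
    "sub":   ["sub"],
    "neg":   ["push0", "swap", "sub"],  # neg is defined in terms of sub
}

# Fixed point: the correct compiler rebuilds itself unchanged.
assert build(stage0, source) == stage0

# Your "almost right" change: a typo makes the sub rule emit add.
buggy_source = dict(source, sub=["add"])

stage1 = build(stage0, buggy_source)  # sub is now silently wrong...
assert stage1["neg"] == ["PUSH 0", "SWAP", "SUB"]  # ...but neg still tests fine

stage2 = build(stage1, buggy_source)  # stage1 compiles itself
print(stage2["neg"])  # ['PUSH 0', 'SWAP', 'ADD'] -- neg is now corrupted
```

Note that `stage1` still translates `neg` correctly, so tests run against the first rebuild pass; the corruption only surfaces at `stage2`, in a rule the change never touched, because that rule's body happens to mention `sub`. That is the puzzle: the symptom is two bootstrap generations and one rule away from the edit that caused it.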
LLM AI is absolutely worthless in this type of situation because it's not something you can wing from the training data. It's not a verbal problem of token manipulation. Sure, if you already know how to code this correctly, then you can talk the LLM through it, but it could well be less effort just to do the typing.
However, writing everyday, straightforward code is in fact the bottleneck for every single one of the LLM cheerleaders you encounter on social networks.
Guess what: X + Y + Z + T ... in aggregate are the bottleneck, and LLMs pretty much speed up the whole operation :)
So pretty pointless and click-baity article & title, if you ask me.