
Posted by phire 7/1/2025

Writing Code Was Never the Bottleneck (ordep.dev)
776 points | 389 comments | page 7
dmezzetti 7/3/2025|
There have long been ways to reduce writing boilerplate code with IDEs. AI code generation is just another tool, and it will help enable competent people.
bluefirebrand 7/3/2025|
Not unless it is deterministic

If I have to manually review the boilerplate after it generates, then I may as well just write it myself. AI is not improving this unless you just blindly trust it without review, AND YOU SHOULDN'T.

dmezzetti 7/3/2025||
I'm not sure many seasoned developers are really using AI that much in their workflows.
bluefirebrand 7/3/2025||
If they aren't I wish they would speak up more and push back against management more

If there's a secret, silent majority of seasoned devs who are just quietly trying to weather this, I wish they would speak up

But I guess just getting those paycheques is too comfy

dmezzetti 7/3/2025||
Software managers have long pushed "productivity" tools on developers. Most don't stick and this is likely similar. It's best to hire smart people and let them use whatever stack works best for them.
skeeter2020 7/3/2025||
>> LLMs are powerful — but they don’t fix the fundamentals

Sounds like we're (once again) rediscovering "No Silver Bullet".

rhubarbtree 7/8/2025||
I’m seeing this everywhere and it’s great to have it spelt out.

“The actual bottlenecks were, and still are, code reviews, knowledge transfer through mentoring and pairing, testing, debugging, and the human overhead of coordination and communication”

AI can dramatically speed up testing and code reviews. Generating unit tests is a major application of AI.
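To make the unit-test claim concrete, here is the kind of table-driven test boilerplate an LLM will readily draft for a small function (a hypothetical sketch; `slugify` and all expectations are invented for illustration, and the reviewer still has to check each assertion against the intended spec):

```python
def slugify(title: str) -> str:
    """Convert a title to a URL slug: lowercase, hyphen-separated, alnum only."""
    cleaned = "".join(c if c.isalnum() or c == " " else "" for c in title.lower())
    return "-".join(cleaned.split())

# The sort of tests an LLM happily generates in bulk; cheap to produce,
# but each expected value still needs a human sanity check.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_punctuation():
    assert slugify("C++ isn't C!") == "c-isnt-c"

def test_slugify_extra_spaces():
    assert slugify("  a   b ") == "a-b"
```

The generation is fast; the review of the expectations is where the time still goes, which is the article's point.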

Code reviews can be accelerated too: “please quickly explain what this code is doing” to give you a foothold; “please check this code for any obvious mistakes” enables you to quickly bounce if a new review will be needed. And better yet - the submitter can ask the AI to do that, and also to suggest refactoring that will make code review easier and faster for someone else.

As for understanding code and communicating it, well, that is going to be less and less necessary as the abstraction level we work at is lifted.

This objection is just cope. It’s moving the goalposts because we’re scared AI is going to take our jobs.

In truth, it will simply accelerate our work until we hit AGI. And at that point (which I think is probably a way off) we’ll have much greater concerns than the job market.

nkotov 7/3/2025||
We're approaching a future where creativity will be the bottleneck; everything else is going to be abstracted away.
ivolimmen 7/3/2025||
Thank you; this is exactly what was bothering me. This is my opinion as well; you just found the words I could not find!
alex_hirner 7/3/2025||
True. Therefore I'm eagerly awaiting an artificially intelligent product manager.

Or I might build that myself.

kazinator 7/3/2025||
There are times when writing the code is a bottleneck. It's not everyday code. You don't quite know how to write the code. Whatever you try breaks somehow, and you don't readily understand how, even though it is deterministic and you have a 100% repro test case.

An example of this is making changes to a self-hosting compiler. Due to something you don't understand, something is mistranslated. That mistranslation is silent though. It causes the compiler to mistranslate itself. That mistranslated compiler mistranslates something else in a different way, unrelated to the initial mistranslation. Not just any something else is mistranslated, but some rarely occurring something else. Your change is almost right: it does the right thing with numerous examples, some of them complicated. Making your change in the 100% correct way which doesn't cause this problem is like a puzzle to work out.
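The "almost right" failure mode described above can be modeled in a few lines: a bug that touches only a rare construct passes every everyday test and only bites when the compiler's own source happens to use that construct (a toy sketch; the mini-language and token names are invented for illustration):

```python
# Toy model of a latent compiler bug. "Compiling" a program (a token
# list) just uppercases each token; the buggy variant silently drops
# the rare token "q".
def compile_ok(tokens):
    return [t.upper() for t in tokens]

def compile_buggy(tokens):
    return [t.upper() for t in tokens if t != "q"]  # subtle, silent bug

everyday = ["load", "add", "store"]
assert compile_ok(everyday) == compile_buggy(everyday)  # all everyday tests pass

compiler_source = ["load", "q", "emit"]  # the compiler's own source uses "q"
assert compile_ok(compiler_source) != compile_buggy(compiler_source)  # self-build diverges
```

Real self-hosting builds guard against exactly this divergence by building the compiler with itself twice and comparing the stage outputs byte for byte, which is how GCC's multi-stage bootstrap catches silent self-mistranslation.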

LLM AI is absolutely worthless in this type of situation because it's not something you can wing from the training data. It's not a verbal problem of token manipulation. Sure, if you already know how to code this correctly, then you can talk the LLM through it, but it could well be less effort just to do the typing.

However, writing everyday, straightforward code is in fact the bottleneck for every single one of the LLM cheerleaders you encounter on social networks.

am17an 7/3/2025||
One thing I despise about LLMs is transferring the cognitive load to a machine. It’s just another form of tech debt. And you have to repay it pretty fast as the project grows.
starchild3001 7/3/2025||
X isn't the bottleneck. Y isn't the bottleneck. Z isn't the bottleneck. T ...

Guess what: X + Y + Z + T ... in aggregate are the bottleneck, and LLMs pretty much speed up the whole operation :)

So pretty pointless and click-baity article & title, if you ask me.

al_borland 7/3/2025|
The most outspoken person against LLMs on my team would bring this up a lot. Though the biggest bottleneck he identified was the politics and actually coming to agreements on spec of what to write. Even with perfect AI software engineers, this is still the issue, as someone still needs to tell the AI what to do. If no one is willing to do that, what’s the point of any of this?