Posted by phire 6 days ago

Writing Code Was Never the Bottleneck (ordep.dev)
765 points | 385 comments | page 7
skeeter2020 4 days ago|
>> LLMs are powerful — but they don’t fix the fundamentals

Sounds like we're (once again) rediscovering "No Silver Bullet".

nkotov 4 days ago||
We're approaching a future where creativity will be the bottleneck; everything else is going to be abstracted away.
ivolimmen 4 days ago||
Thank you; this is exactly what was bothering me. This is my opinion as well; you just found the words I could not find!
alex_hirner 4 days ago||
True. Therefore I'm eagerly awaiting an artificially intelligent product manager.

Or I might build that myself.

kazinator 4 days ago||
There are times when writing the code is a bottleneck. It's not everyday code; you don't quite know how to write it. Whatever you try breaks somehow, and you don't readily understand how, even though the failure is deterministic and you have a 100% repro test case.

An example of this is making changes to a self-hosting compiler. Due to something you don't understand, something is mistranslated. That mistranslation is silent, though: it causes the compiler to mistranslate itself, and that mistranslated compiler then mistranslates something else in a different way, unrelated to the initial mistranslation. Not just any something else, either, but some rarely occurring something else. Your change is almost right: it does the right thing with numerous examples, some of them complicated. Working out how to make the change in the 100% correct way, the one that doesn't cause this problem, is a puzzle.
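
One standard way to surface this kind of silent self-miscompilation early is a three-stage bootstrap comparison: a known-good compiler builds the changed compiler (stage1), stage1 rebuilds the same sources (stage2), stage2 rebuilds them once more (stage3), and stage2 and stage3 should come out bit-identical. A minimal sketch in Python, assuming a hypothetical single-file compiler source and a cc-known-good binary (the names and flags are made up, not a real build system):

    import filecmp
    import subprocess

    SOURCES = ["compiler.c"]  # the compiler's own source (hypothetical name)

    def build(compiler, output):
        # Compile the compiler's own sources with the given compiler binary.
        subprocess.run([compiler, "-o", output, *SOURCES], check=True)

    build("./cc-known-good", "stage1")  # trusted compiler builds the changed one
    build("./stage1", "stage2")         # the changed compiler builds itself
    build("./stage2", "stage3")         # ...and that result builds itself again

    # A mistranslation that survives self-compilation shows up here even if
    # every ordinary test case still passes.
    if not filecmp.cmp("stage2", "stage3", shallow=False):
        raise SystemExit("bootstrap mismatch: stage2 and stage3 differ")
    print("bootstrap fixpoint: stage2 == stage3")

A clean fixpoint doesn't prove the change is correct, but a mismatch is a cheap, deterministic signal that the compiler has stopped translating itself faithfully.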

LLM AI is absolutely worthless in this type of situation because it's not something you can wing from the training data. It's not a verbal problem of token manipulation. Sure, if you already know how to code this correctly, then you can talk the LLM through it, but it could well be less effort just to do the typing.

However, writing everyday, straightforward code is in fact the bottleneck for every single one of the LLM cheerleaders you encounter on social networks.

am17an 4 days ago||
One thing I despise about LLMs is transferring the cognitive load to a machine. It's just another form of tech debt, and you have to repay it pretty fast as the project grows.
starchild3001 4 days ago||
X isn't the bottleneck. Y isn't the bottleneck. Z isn't the bottleneck. T ...

Guess what: X + Y + Z + T ... in aggregate are the bottleneck, and LLMs pretty much speed up the whole operation :)

So it's a pretty pointless and click-baity article & title, if you ask me.

al_borland 4 days ago||
The most outspoken critic of LLMs on my team would bring this up a lot, though the biggest bottleneck he identified was the politics of actually coming to agreement on a spec for what to write. Even with perfect AI software engineers, this is still the issue, as someone still needs to tell the AI what to do. If no one is willing to do that, what's the point of any of this?
Schnitz 4 days ago||
I think a lot of teams will wrestle for quite a while with the existing code review process being abused. A lot of people are lazy or get into tech because it's easy money. The combination of LLMs and a solid code review process means it's easier than ever to submit slop and not even be blamed for the results.
afro88 4 days ago|
This is a strawman, isn't it? I haven't read one post or comment saying that writing code is "the bottleneck".

It's something that takes time. That time is now greatly reduced. So you can try more ideas and explore problems by trying solutions quickly instead of just talking about them.

Let's also not ignore the other side of this. The need for shared understanding, knowledge transfer, etc. is close to zero if your team is agents and your code is the input context (where the actual code sits at the level machine code does today: very rarely, if ever, looked at). That's kinda where we're heading. Software is about to get much grander, and your team is individuals working on loosely connected parts of the product. Potentially hundreds of them.
