Posted by phire 6 days ago
Sounds like we're (once again) rediscovering "No Silver Bullet".
An example of this is making changes to a self-hosting compiler. Due to something you don't understand, your change silently mistranslates some construct, and the compiler mistranslates itself. That miscompiled compiler then mistranslates something else, in a different way, unrelated to the initial bug, and not just any something else, but some rarely occurring one. Your change is almost right: it does the right thing on numerous examples, some of them complicated. Working out the 100% correct version that doesn't cause this problem is a genuine puzzle.
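(For what it's worth, the classic defense against this class of bug is a three-stage bootstrap: a trusted compiler builds the patched sources, the result compiles itself twice, and the last two stages must be bit-identical. A minimal sketch in Python; the compiler names, paths, and flags below are hypothetical placeholders, not anything from the parent comment:)

```python
import filecmp
import subprocess

def build(compiler: str, source_dir: str, output: str) -> None:
    # Invoke the given compiler binary on the compiler's own sources.
    # The command-line shape here is a placeholder for a real build step.
    subprocess.run([compiler, source_dir, "-o", output], check=True)

# Stage 1: a trusted, known-good compiler builds the patched sources.
build("trusted-cc", "compiler-src/", "stage1")

# Stage 2: the patched compiler compiles itself.
build("./stage1", "compiler-src/", "stage2")

# Stage 3: the self-compiled compiler compiles itself again.
build("./stage2", "compiler-src/", "stage3")

# A correct change makes stages 2 and 3 bit-identical; the silent
# self-miscompilation described above shows up as a divergence here.
if not filecmp.cmp("stage2", "stage3", shallow=False):
    raise SystemExit("bootstrap divergence: the change miscompiles the compiler")
```

It won't tell you where the bug is, but it does catch the silent self-miscompilation before it propagates.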
An LLM is absolutely worthless in this type of situation because it's not something you can wing from the training data; it's not a verbal problem of token manipulation. Sure, if you already know how to code it correctly, you can talk the LLM through it, but it could well be less effort to just do the typing yourself.
However, writing everyday, straightforward code is in fact the bottleneck for every single one of the LLM cheerleaders you encounter on social networks.
Guess what: X + Y + Z + T ... in aggregate are the bottleneck, and LLMs pretty much speed up the whole operation :)
So pretty pointless and click-baity article & title, if you ask me.
Writing code is something that takes time, and that time is now greatly reduced. So you can try more ideas and explore problems by prototyping solutions quickly instead of just talking about them.
Let's also not ignore the other side of this. The need for shared understanding, knowledge transfer, etc. is close to zero if your team is agents and your code is the input context (the actual code sits where machine code does today: very rarely, if ever, looked at). That's kinda where we're heading. Software is about to get much grander, and your team becomes individuals working on loosely connected parts of the product. Potentially hundreds of them.