Posted by CoffeeOnWrite 6 days ago
Wondering what they would be producing with LLMs?
There are a lot of posts about how to do it well, and I like the idea of it, generally. I think GenAI has genuine applications in software development beyond being a Google/SO replacement.
But then there's real world code. I constantly see:
1. Over-engineering. People used to keep things simple because they were limited by how fast they could type. Well, those gloves sure did come off for a lot of developers.
2. Lack of understanding / memory. If I ask someone how their code works and they didn't write it (or at least carefully analyse it), it's rare for them to understand or even remember what it does. The common answer to "how does this work?" went from "I think it's like this, but let me double check" to "no idea". Some will proudly tell you they auto-generated documentation, too; if you have any questions about that, chances are you'll get another "no idea". Asking an LLM how it works is very hit and miss for non-trivial systems. I always tell my devs that I hire them to understand systems first and foremost; building systems comes second. I feel increasingly alone with that attitude.
3. Bugs. So many bugs. Devs who generate code need to do a lot more explicit testing than those who don't, and there's probably a missing feedback loop: when typing code by hand, you tend to test every little button action and so on at least once, because it's just part of the work. Chances are you haven't broken it since you last tested it, so manually written code generally has one-time, exhaustive manual testing built into the process naturally. If you generate a whole UI area, you need to test all kinds of conditions thoroughly. It seems people don't.
So while it could be great, from my perspective, it feels like more of a net negative in practice. It's all fun and games until there's a problem. And there always is.
Maybe I have a bad sample of the industry. We essentially specialise in taking over technically disastrous projects and other kinds of tricky situations. Few people hire us to work on a good system with a strong team behind it.
But still, comparing the questionable code bases I got into two years ago with those I get into now, there is a pretty clear change for the worse.
Maybe I'm pessimistic, but I'm starting to think we'll need another software crisis (and perhaps a wee AI winter) to get our act together with this new technology. I hope I'm wrong.
I found out very early on that under no circumstances may you have code you don't understand, anywhere. Well, you may, but not in public, and you should commit to understanding it before anyone else sees it. Particularly before the sales guys do.
However, AI can help you with learning too. You can run experiments, test hypotheses and burn your fingers so fast. I like it.
I have instructions for agents that differ in some details of convention, e.g. human contributors use the AAA declaration style, while agents are instructed to write the type first. As I review agent output, I convert code that "graduates" from agent product to review-ready, which keeps me honest: I don't submit code to other humans for review without having scrutinised it myself. Of course, they're able to prompt an LLM without my involvement, and I'm able to ship LLM slop without making a demand on their time, so it's an honor system, but a useful one if everyone acts in good faith.
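A minimal sketch of what that convention split can look like, assuming "AAA" means the C++ "almost always auto" declaration style; the `Widget` type and function names are made up for illustration, not taken from any real codebase.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical type, just to have something to declare.
struct Widget {
    std::string name;
};

// Human convention: AAA ("almost always auto") declarations.
auto make_widgets_by_hand() {
    auto count = std::size_t{3};
    auto widgets = std::vector<Widget>{};
    widgets.reserve(count);
    for (auto i = std::size_t{0}; i < count; ++i) {
        widgets.push_back(Widget{"hand-" + std::to_string(i)});
    }
    return widgets;
}

// Agent convention: type-first declarations, so unreviewed agent output
// stays visually distinct until it "graduates" through review.
std::vector<Widget> make_widgets_from_agent() {
    std::size_t count = 3;
    std::vector<Widget> widgets;
    widgets.reserve(count);
    for (std::size_t i = 0; i < count; ++i) {
        widgets.push_back(Widget{"agent-" + std::to_string(i)});
    }
    return widgets;
}
```

Rewriting the second style into the first during review is the "graduation" step: the declaration style acts as a visible marker of provenance until a human has actually looked at the code.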
I get use from the agents, but I almost always make changes and reconcile contradictions.