Posted by reasonableklout 14 hours ago
Sorry, I don't buy your argument
But equally, like, do people need Terraform if they can just tell codex “put it live”, and does that hurt to see?
It all just feels like horse drawn carriage operators trying to convince automobile drivers to stop driving.
That was one doctor raising that as an issue, which was dispelled very quickly. It was not a wide-spread belief at any one point. Let's not bullshit ourselves and insult our own intelligence - the chatbots != intelligence.
I'm not sure that's true. We've actually seen several open source projects that were vibe coded literally fold up and disappear because they ran into issues that the AI couldn't solve and no one understood them well enough to solve.
There's a reason openai/anthropic and friends are hiring shitloads of software engineers. You still need people that can understand and fix things when the AI goes off hte rails, which happens way more often than any of those companies would like to admit. Sure, "fixing things" often involves having the AI correct itself, but you still have to understand the system enough to know how/when to do that.
The direct analogy to automobiles would be for each automobile to be a oneoff design filled with bad and bizarre decisions, excessively redundant parts, insane routing of wires, lines, ducts, etc., generally poor serviceability, and so on. IMO the big question going forward is whether the consistent availability of LLMs can render these kinds of post-delivery issues moot (they will reliably [catch and] fix problems in the software they wrote before any real damage is caused), or whether human reliance on LLMs and abdication of understanding will just make software worse because LLMs' ability to fix their own mistakes, and the consequences thereof, generally breaks down in the same contexts/complexities where they made those mistakes in the first place.
My own observations are that moderately complex software written in the mode of "vibe coding" or "agentic engineering" tends to regress to barely-functional dogshit as features are piled on, and that once this state is reached, the teams behind it are unable to, or perhaps simply uninterested in, unfuck[ing] it. I have stopped using software that has gone down this path, not because I have some philosophical objection to it, but because it has become _literally unusable_. But you will certainly not catch me claiming to know what the future holds.
In any case, this is what blue-green deployments and gradual rollouts are for. With basic software engineering processes, you can make your end user experience pretty much bullet proof. Just pay EXTRA attention when touching DNS, network config (for core systems) and database migrations.
Distributed systems are a bit more tricky but k8s and the likes have pretty solid release mechanisms built-in. You are still doomed if your CDN provider goes down. You just have to draw a line somewhere and face the reality head on (for X cost per year this is the level of redundancy we get, but it won’t save us from Y).
The one thing I hadn’t mentioned - one I AM worried about - is security! I’ve been worried about it from before Mythos (basic prompt injection) and with more powerful models now team offence is stronger than ever.