
Posted by reasonableklout 14 hours ago

I believe there are entire companies right now under AI psychosis (twitter.com)
https://xcancel.com/mitchellh/status/2055380239711457578

https://hachyderm.io/@mitchellh/116580433508108130

1383 points | 678 comments
awesomeusername 11 hours ago|
If you know these things, you can take them into account while driving the AI.

Sorry, I don't buy your argument.

elevation 14 hours ago||
Mitchell aches because his career has been spent solving broadly scoped problems by building collections of thoughtful primitives for others to extend. LLMs seem to do the opposite, but at great speed, and it hurts to watch.
alexdrydew 1 hour ago||
Honestly, I don't get this argument. In my opinion, "a collection of thoughtful primitives for others to extend" is more valuable now, not less. From an LLM-assisted engineering standpoint, a nicely put-together reusable box with a thoughtful interface is an easy win, more so if it is also easily extensible.
peyton 13 hours ago||
Reading more, it seems part of his point is “if you’re making these primitives, it’s up to adopters to deploy, so mean-time-to-recovery isn’t that relevant.” Which is valid I guess.

But equally, like, do people need Terraform if they can just tell codex “put it live”, and does that hurt to see?

woeirua 14 hours ago||
This doesn’t constitute AI psychosis. His argument is that we need to retain understanding of the systems we use, but there’s no compelling argument as to why that is the case. (I get that people are going to be offended by that statement, but agents are already better than the average software engineer. I don’t see why we need to fight this, except for economic insecurity caused by mass layoffs.)

It all just feels like horse-drawn carriage operators trying to convince automobile drivers to stop driving.

9dev 14 hours ago||
If you want to draw that line of argument - it's more like horse riders being convinced to give up their horses in favour of trains: You're travelling faster, don't have to navigate yourself, or think about every boulder on the way; but there are destinations you can't go, overcrowded trains slowing down the journey, hefty ticket prices, and instead of enjoying the freedom, you're degraded to a passive passenger.
hansmayer 13 hours ago||
Very funny, this. Did we need forward-deployed engineers to convince people that they absolutely needed to use trains in order to "not be left behind"? Or otherwise hype? Or was it sort of obvious and did not need to be explained so much - unlike a bad joke called LLMs?
9dev 13 hours ago||
Actually- absolutely! Initially, people were really afraid of trains, fearing they wouldn’t be able to breathe at those speeds. It took a lot of convincing to establish trust in the technology.
hansmayer 1 hour ago|||
> Initially, people were really afraid of trains, fearing they wouldn’t be able to breathe at those speeds

That was one doctor raising it as an issue, and it was dispelled very quickly; it was never a widespread belief. Let's not bullshit ourselves and insult our own intelligence - chatbots != intelligence.

uuyy 13 hours ago|||
Ever heard of subsidising? :’)
lkjdsklf 11 hours ago|||
> there’s no compelling argument as to why that is the case.

I'm not sure that's true. We've actually seen several open source projects that were vibe coded literally fold up and disappear because they ran into issues that the AI couldn't solve and no one understood them well enough to solve.

There's a reason openai/anthropic and friends are hiring shitloads of software engineers. You still need people who can understand and fix things when the AI goes off the rails, which happens way more often than any of those companies would like to admit. Sure, "fixing things" often involves having the AI correct itself, but you still have to understand the system well enough to know how and when to do that.

caconym_ 14 hours ago|||
I am sure you will feel that this is missing the point of your analogy, but we would not have gotten very far with automobiles if we didn't know how they worked.
throw310822 13 hours ago||
You are breaking the analogy because automobiles are machines for transportation, and understanding them is important to make them move. LLMs are machines to understand, and well, if they do the understanding you don't need to.
caconym_ 12 hours ago||
The thing we're worried about not understanding here is the software the LLMs write, not the LLMs themselves.

The direct analogy to automobiles would be for each automobile to be a one-off design filled with bad and bizarre decisions, excessively redundant parts, insane routing of wires, lines, ducts, etc., generally poor serviceability, and so on. IMO the big question going forward is whether the consistent availability of LLMs can render these kinds of post-delivery issues moot (they will reliably [catch and] fix problems in the software they wrote before any real damage is caused), or whether human reliance on LLMs and abdication of understanding will just make software worse, because LLMs' ability to fix their own mistakes, and the consequences thereof, generally breaks down in the same contexts/complexities where they made those mistakes in the first place.

My own observations are that moderately complex software written in the mode of "vibe coding" or "agentic engineering" tends to regress to barely-functional dogshit as features are piled on, and that once this state is reached, the teams behind it are unable to, or perhaps simply uninterested in, unfuck[ing] it. I have stopped using software that has gone down this path, not because I have some philosophical objection to it, but because it has become _literally unusable_. But you will certainly not catch me claiming to know what the future holds.

jgbuddy 13 hours ago||
agreed completely
sheepscreek 10 hours ago|
I have respect for Mitchell and I’ve spent a good deal of time trying to think of ways to justify his message. I can’t. Either I am missing a big piece, or he is worrying about something that will come naturally (and sooner than expected) as more software gets developed.

In any case, this is what blue-green deployments and gradual rollouts are for. With basic software engineering processes, you can make your end-user experience pretty much bulletproof. Just pay EXTRA attention when touching DNS, network config (for core systems), and database migrations.
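For readers unfamiliar with the term: a blue-green deployment boils down to deploying the new version to an idle copy of the environment, verifying it, and then flipping a single traffic pointer. A toy Python sketch of that idea (all names here - `health_check`, `blue_green_deploy`, the `router`/`envs` dicts - are hypothetical stand-ins for a real load balancer and deploy pipeline):

```python
def health_check(env: dict) -> bool:
    # Stand-in for a real HTTP health probe against the idle environment.
    return env.get("healthy", False)

def blue_green_deploy(router: dict, envs: dict, new_version: str) -> str:
    """Deploy to the idle color, verify it, then flip traffic in one step."""
    active = router["active"]                       # e.g. "blue"
    idle = "green" if active == "blue" else "blue"

    envs[idle]["version"] = new_version             # deploy to the idle side only
    envs[idle]["healthy"] = True                    # assume probes pass in this toy

    if not health_check(envs[idle]):
        # Live traffic was never touched, so a failed deploy is a non-event.
        raise RuntimeError("new version failed health checks; traffic untouched")

    router["active"] = idle                         # the actual cutover: one pointer flip
    return idle

router = {"active": "blue"}
envs = {"blue": {"version": "1.0", "healthy": True},
        "green": {"version": None, "healthy": False}}
blue_green_deploy(router, envs, "1.1")
print(router["active"])  # → green
```

The point of the pattern is that the cutover (and the rollback, which is the same flip in reverse) is atomic, while the risky work happens off the live path.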

Distributed systems are a bit trickier, but k8s and the like have pretty solid release mechanisms built in. You are still doomed if your CDN provider goes down. You just have to draw a line somewhere and face the reality head-on (for X cost per year this is the level of redundancy we get, but it won’t save us from Y).
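The "gradual rollout" half doesn't even need k8s: the usual trick, which feature-flag systems and canary releases share, is to hash each user into a stable bucket from 0-99 and compare against the current rollout percentage. A minimal sketch (the `salt` and function name are illustrative, not any particular library's API):

```python
import hashlib

def in_rollout(user_id: str, percent: int, salt: str = "v2-rollout") -> bool:
    """Deterministically bucket a user into a gradual rollout.

    Hash (salt, user) to a stable bucket in 0..99; the user is in the
    rollout iff their bucket is below the current percentage. Because the
    bucket never changes, a user who is in at 10% stays in at 50%.
    """
    h = hashlib.sha256(f"{salt}:{user_id}".encode()).digest()
    bucket = int.from_bytes(h[:2], "big") % 100
    return bucket < percent

# Ramp 0% -> 100%: nobody is in at 0, everybody is in at 100.
users = ["alice", "bob", "carol"]
print(any(in_rollout(u, 0) for u in users))   # → False
print(all(in_rollout(u, 100) for u in users)) # → True
```

Ramping `percent` up in steps (1, 10, 50, 100) while watching error rates gives you the same blast-radius control as a k8s rolling update, applied at the feature level instead of the pod level.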

The one thing I hadn’t mentioned - the one I AM worried about - is security! I’ve been worried about it since before Mythos (basic prompt injection), and with more powerful models, team offence is stronger than ever.

jnwatson 10 hours ago|
Yeah. The same processes that allow corporations to outsource their software to barely qualified 3rd-world body shops are the processes that allow you to deploy AI-generated code of unknown quality.