
Posted by reasonableklout 6 hours ago

I believe there are entire companies right now under AI psychosis (twitter.com)
https://xcancel.com/mitchellh/status/2055380239711457578

https://hachyderm.io/@mitchellh/116580433508108130

845 points | 366 comments
tacostakohashi 6 hours ago|
"no no, it has full test coverage"

at least at my BigCo, AI is being used for everything - writing slop, writing tests, code reviews, etc.

it would make sense to use AI for writing code, but human code review. or, human code, but AI test cases... or whatever combination of cross-checking, trust-but-verify, human in the loop, etc. people prefer.

i think once it gets used for everything, people have lost the plot, it's the inmates running the asylum.

ares623 5 hours ago|
I was rewatching Rich Hickey's "Simple Made Easy" talk (as one does) and there was a great line about full test coverage.

"What's true about all bugs in production? (pause for dramatic effect) They all passed the tests!" (well, he said typechecker but I think the point stands)

apalmer 4 hours ago||
I don't think it's helpful to call this psychosis. Beyond that, I don't think it's even irrational.

It is definitely factual that there is a complete paradigm shift in the prioritization of quality in software. It's beyond just an AI side effect; it's now its own standalone thing.

There have always been many industries, companies, and products that sit low on the quality scale but are so cheap that it makes good business sense, both for the producer and the consumer.

Definitely many companies are explicitly choosing this business strategy. Definitely also many companies don't actually realize they are implicitly doing this.

Whether the market will accept the new software quality paradigm or not remains an open question.

bsenftner 5 hours ago||
This is a critical communications issue that is becoming, I believe, the defining characteristic of "This Age": nobody knows how to discuss disagreement. Because it cannot even be discussed, communication ends, followed by blind obedience, forced bullying, retreat, and abandonment. This is going to be a hell of a ride, because nobody can really discuss the situation in a rational tone.
robotswantdata 6 hours ago||
Most labs are shilling “AI worker” dreams to these very companies
mmaunder 3 hours ago||
Amazing how the dev community is suffering from a similar inability to approach the subject of real world AI efficiencies and business benefits. I don’t think it’s helpful to accuse the other side of psychosis. It disqualifies any data or experience they bring to the conversation.
whimsicalism 3 hours ago|
It is not the dev community writ large, it is a particular archetype among forum users, particularly common among forums with upvote mechanics
mattgreenrocks 5 hours ago||
The only way many people learn that the stove is hot is by burning their hands on it.

Let them.

dnnddidiej 3 hours ago|
More like how do you know when your charming partner is a catfish. Maybe two years in, when you're living in a friend's basement.
linkregister 5 hours ago||
I don't doubt there are companies totally misusing coding agents and LLMs in production. There are also real companies with real revenue and solid architecture using LLMs to deliver products. There are also companies with real revenue and rapidly accumulating tech debt.

Eventually the companies that can't cope with undisciplined engineering will succumb to unacceptable reliability and be outcompeted, just like in the "move fast and break things" era.

keepamovin 3 hours ago||
It seems the diagnosis of psychosis is too quick: it seeks to re-establish the expert frame for a developer identity that is being displaced.

“It feels like entire companies are deluded into thinking they don’t need me, but they still need me. Help!”

The broad sentiment across statements of this "AI psychosis" type is clear, but I think the baseline reality is simpler. How can you be so certain it's psychosis if you don't know what will unfold? Might reaching for the premature certainty of making others wrong, satisfying as it might be to the ego, simply be a way to compensate for the challenges of a changing work environment, and a substitute for actually considering the practical ways you could adapt to it? Might it not be more helpful and profitable to ask "how can I build windmills, ride this wave, and adapt to the changing market under this revolution?" than to soothe myself with the story that all these companies think they don't need me now, but they'll be sorry.

The developer role is changing, but it doesn't have to be an existential crisis. It may feel that way, and it will probably feel more that way the longer you remain stuck in old patterns. Over-certainty about how things are doesn't help, though it may feel good. This is the time to be observant and curious and get ready to update your perspective.

You may hide from this broad take (that "AI psychosis" statements are cope) by retreating into specific nuance: "I didn't mean it that way, you're wrong. This is still valid." But the vocabulary betrays motive. Resorting to clinical, derogatory language like "AI psychosis" immediately invokes a "superior expert judgment" frame, and in the current zeitgeist that is a big tell. It signifies a need to be right, and a deeply defensive pose rather than a clear assay of what's real in a rapidly changing world. The anxiety driving the language speaks far louder than any technical pedantry used to justify it, and is the most important and, IMO, most profitable thing to address.

slopinthebag 5 hours ago||
I have a ton of respect for Mitchell - I didn't really know who he was until Ghostty but his writings and viewpoints on AI seem really grounded and make the most sense to me. Including this one.

Many people on this forum are suffering under this same psychosis.

glitchcrab 5 hours ago|
I'm guessing you've never heard of Hashicorp (Terraform, Vault) then? Mitchell == Hashicorp.
ivanjermakov 5 hours ago|
Deprecating immature workflows (LLM agents in this case) is much simpler and faster than building them from scratch. Many companies get this risk assessment right. This is a case where being wrong is much more costly than being right.
kelnos 5 hours ago|
I'm not convinced. There's a ton of cost to adopting a radically different workflow.