
Posted by reasonableklout 11 hours ago

I believe there are entire companies right now under AI psychosis (twitter.com)
https://xcancel.com/mitchellh/status/2055380239711457578

https://hachyderm.io/@mitchellh/116580433508108130

1151 points | 532 comments
LAC-Tech 9 hours ago|
I am really looking for more reasoned approaches to AI.

I am very close to using it as a pair programmer, but with me actually coding. I am just so tired of fixing its mistakes.

nunez 9 hours ago||
Isn't going to happen without the regulation hammer being thrown down.

Probably from the EU because they seem to be the sane ones of this generation.

LAC-Tech 8 hours ago||
Talking about my own personal workflow. No company has dictated one to me yet lol.
daneel_w 8 hours ago||
I work for a small telecom services provider whose current VP immediately set an AI course when stepping on board 6 months ago. Involving AI in everything and every task is now our first priority - across all employee segments, not just us system developers - and leadership is embarking on a program to measure employees' AI usage levels as a means to gauge everyone's individual efficiency. It's like the era of the evangelic crypto bros all over again.
HNisCIS 3 hours ago||
I'm in a company going through this. Everyone outsources their thinking to LLMs and the results are painfully mediocre. The smart ones will use it to get their bearings on the topic then go to primary sources, the not so bright just ctrl-c ctrl-v.

Have you ever been in an HN thread where you're an SME on the thread topic and just been horrified by the confidently incorrect nonsense 90% of the thread is throwing around? Welcome to the training set motherfuckers.

LLMs do the same thing for what should be obvious reasons. If you search things that have some depth and you know the answer, you'll be floored by how often the models will just vomit confident half-truths and misrepresented facts. They're better than they used to be, not just lying whole cloth most of the time, but truth is an asymptotic thing, not an exponential one.

Apocryphon 8 hours ago||
Make the most of it. Their delusion is your opportunity.
gverrilla 8 hours ago||
'AI psychosis' is a slop concept.
topherPedersen 8 hours ago||
Hype & greed are a hell of a drug
gregjor 5 hours ago||
Psychosis means inability to distinguish the real from the not real -- delusion. I don't think the article describes that, at least not in a literal or clinical sense. The author lifted a term usually applied to people who fall in love with chatbots and applied it to the context of software developers not understanding AI coding tools, and the limitations of those tools.

AI coding swept over the software industry faster than most previous trends. OOP and its predecessor "structured programming" took a lot longer. Agile and XP got traction fairly quickly but still took longer than AI -- and met with much of the same kind of resistance and dire predictions of slop and incompetence.

AI tools have led to two parallel delusions: The one Mitchell Hashimoto describes, and the notion that we (programmers) knew how to produce solid, reliable, useful, maintainable code before AI slop came along. As always with tools that give newbs, juniors, managers some leverage (real or imagined) we -- programmers -- get upset and react to the threat with dire warnings. We talk about "technical debt" and "maintainability" and "scalability."

In fact the large majority of non-trivial software projects fail to even meet requirements, much less deliver maintainable code with no tech debt. Most programmers don't know how to write good code for any measure of "good." Our entire industry looks more like a decades-long study of the Dunning-Kruger effect than a rigorous engineering discipline. If we knew how to write reliable code with no tech debt we could teach that to LLMs, but instead we reliably get back the same kind of mediocre code the LLMs trained on (ours), only the LLMs piece it together faster than we can.

With 50 years in the business behind me, and several years of mocking and dismissing AI coding whenever someone brought it up, I got dragged into it by my employer. And then I saw that with guidance and a critical eye, reasonably good specs, guardrails, it performed just as well and sometimes more thoroughly than me and almost all of the people I have worked with during my career. It writes better code and notices mistakes, regressions, edge cases better than I can (at least in any reasonable amount of time).

AI coding tools only have to perform better -- for whatever that means to an organization -- than the median programmers. If we set the bar at "perfect" they of course fail, but so do we. We always have. Right now almost all of the buggy, insecure, ugly, confusing software I use came from teams of human programmers who didn't use AI. That will quickly change and I can blame the bugs and crashes and data losses and downtime on AI, we all can, but let's not pretend we're really losing ground with these tools or that we could all, as an industry, do better than the LLMs, because all experience shows that we can't.

andreasgl 10 hours ago||
https://xcancel.com/mitchellh/status/2055380239711457578
mhitza 10 hours ago||
https://hachyderm.io/@mitchellh/116580433508108130
teddyh 10 hours ago||
<https://twiiit.com/mitchellh/status/2055380239711457578> – will redirect to a currently-working Nitter instance.
autoexec 9 hours ago||
Seems broken. It just throws up an anime cat girl for me.
treyd 8 hours ago|||
Anubis is actually a jackal.
autoexec 7 hours ago||
I stand corrected!
slopinthebag 9 hours ago|||
> anime cat girl

seems like it's working ideally to me!

autoexec 8 hours ago||
Wait, are you calling me a bot, or are you just into anime cat girls?
slopinthebag 8 hours ago||
im not calling you a bot lol