
Posted by reasonableklout 7 hours ago

I believe there are entire companies right now under AI psychosis(twitter.com)
https://xcancel.com/mitchellh/status/2055380239711457578

https://hachyderm.io/@mitchellh/116580433508108130

898 points | 384 comments | page 4
insane_dreamer 4 hours ago||
Just talked to an exec yesterday about their multinational company, where the newly-installed CEO just came in with "everyone needs to be using AI" and "we should be doing everything with AI".

I cautioned them that this is a terrible idea -- you have business people who don't know what they're talking about, and all they know is "if we don't 'do AI' we'll be left behind because our competitors are 'doing AI'" (whatever tf "doing AI" means).

Yes, LLMs are a great tool. But they're not like some magic bullet you stick into everything. Use it where it makes sense, and treat it like you would other tools.

You make "doing AI" some kind of KPI in your org, and you're going to have people "doing AI" amazingly (LOC counts! tokens burned! tickets cleared!) while not actually being more productive, and potentially building something that will come crashing down later, leaving the next team to "clean up the AI mess".

Ifkaluva 5 hours ago||
The Twitter post doesn’t even document some of the most psychotic things that are happening.
leeoniya 6 hours ago||
> "no no, it has full test coverage"

i don't have enough fingers (and toes) to count how many times i've demonstrated that "100% coverage" is almost universally bullshit.

GrumpyYoungMan 18 minutes ago||
There's a very old paper by Cem Kaner about the meaninglessness of "100% coverage", with an appendix enumerating 101 different possible types of code coverage: https://www.researchgate.net/publication/243782285_Software_...
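(To make the point concrete, here's a minimal sketch -- function and test names are made up -- of how a test suite can hit 100% statement coverage while never exercising the inputs where the bug lives:)

```python
# Hypothetical example: 100% line coverage, yet a real bug goes undetected.

def discount(price: float, is_member: bool) -> float:
    """Apply a 10% member discount.

    Bug: negative prices are accepted silently, so invalid input
    gets 'discounted' instead of rejected.
    """
    if is_member:
        return price * 0.9
    return price

def test_discount():
    # These two assertions execute every line -> 100% statement coverage...
    assert discount(100.0, True) == 90.0
    assert discount(100.0, False) == 100.0
    # ...but no test ever passes a negative price, so the missing
    # validation is invisible to the coverage number.
```

Statement coverage only says each line ran once; it says nothing about which input partitions, branches, or error paths were tried -- which is roughly why Kaner could enumerate a hundred other coverage notions.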
kevinsync 6 hours ago|||
Codex is freakin hot-to-trot to churn out test coverage for every single thing it implements, and some of it is very esoteric and highly prescriptive (regexes for days) BUT .. after a while, it dawned on me that LLM-driven test coverage is less about proving “code correctness” (you’re better off writing those tests yourself alongside them), and more about just trying to ensure that whatever gets bolted on stays bolted on. For better or worse, obviously, since if you bolt on trash, trash you shall have.
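(The "stays bolted on" framing is essentially what characterization tests do: they pin whatever the code currently does, right or wrong. A toy sketch, with hypothetical names:)

```python
import re

# Hypothetical LLM-generated helper with an over-prescriptive regex:
# replace every run of non-[a-z0-9] characters with a single hyphen.
SLUG_RE = re.compile(r"[^a-z0-9]+")

def slugify(title: str) -> str:
    return SLUG_RE.sub("-", title.lower()).strip("-")

def test_slugify_pinned():
    # A characterization test pins today's behavior, quirks included.
    assert slugify("Hello, World!") == "hello-world"
    # Quirk pinned as "correct": the accented character is dropped,
    # so "Café #1" collapses to "caf-1". The test keeps the bolt
    # tight; it doesn't prove the bolt belongs there.
    assert slugify("Café #1") == "caf-1"
```

If the pinned behavior is wrong, the suite will faithfully defend the wrong behavior against every future fix attempt -- trash bolted on stays bolted on too.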
throw310822 6 hours ago||
Wholeheartedly agree, but in fairness, I trust the tests of the best AI models more than those of the average human developer. There's a lot of people around that combine high diligence with complete intellectual laziness, producing tons of useless tests.

Actually no, cancel that. I realise now that I trust AIs more than the average developer, period. At this point they do produce better code than most people I've dealt with.

spicyusername 6 hours ago||
We're definitely in the mess around phase of AI adoption.

I don't think it's super clear what we'll find out.

We've all built the moat of our careers out of our expertise.

It is also very possible that expertise will be rendered significantly less valuable as the models improve.

Nobody ever cared what the code looked like. They only ever cared if it solved their problem and it was bug free. Maybe everything falls apart, or maybe AI agents ship code that's good enough.

Given the state of the industry we're clearly going to find out one way or the other, hah!

HarHarVeryFunny 4 hours ago|
> I don't think it's super clear what we'll find out

I think some companies will find out that their senior engineers were providing more value and software stability than they gave them credit for!

Corporate feedback loops are very slow though, partly because management don't like to admit mistakes, and partly because of false success reporting up the chain. I'd not be surprised if it takes 5 years or more before there is any recognition of harm being done by AI, and quiet reversion to practices that worked better.

JeremyJaydan 4 hours ago||
If you don't use it you lose it, and a lot of people are losing it..
LunicLynx 6 hours ago||
Either this or we humans are out of the picture soon.
arm32 6 hours ago|
Occam's razor would suggest the former.
CodingJeebus 7 hours ago||
Anyone who's taken VC funding has no choice. More money has been spent on AI commercialization than the atomic bomb, the US interstate build-out, the ISS and the Apollo program combined. Failure is going to be catastrophic and therefore, one tied to this ship cannot accept a world in which it fails.
hungryhobbit 6 hours ago||
Or anyone who even wants VC funding. 90+% of investors only want to invest in AI companies.

If you're not doing AI there's an incredibly limited pool of people who will give you $$$ ... and you're competing with EVERY OTHER NON-AI COMPANY for their attention.

infamouscow 6 hours ago||
On the bright side, my guillotine & rope startup is going to make a killing (no pun intended).
crnkofe 6 hours ago||
Sounds pretty accurate. Bunch of comments on this thread sound like AI is some kind of a new doomsday cult. The most annoying thing I find personally is that all engineering principles are getting crushed by non-techies. Management counting token usage, forcing agent use, reducing headcount in the name of productivity gains. Devs building bridges, but nobody knows what the bridge is, what standards it was built to, how it works, or how to maintain it. VCs counting extra money, claiming chasing the holy profit is the future. The abundance of engineering apathy is disturbing.
hedgehog 6 hours ago|
[dead]
throwawaypath 6 hours ago|
Mitchellh is on to something. Some of the AI products I've seen seem like psychosis hallucinatory fever dreams, using terms and concepts that have no meaning. Funding? $50,000,000 pre-seed.