Posted by Brajeshwar 1 day ago
The invention of fast food does not change anyone's ability to exercise. When fast food was invented, people exercised far more than they do today.
Time constraints have driven both an increase in fast-food consumption and a reduction in exercise.
Both issues then tend to be addressed by coercing individuals to change their behaviour, when what is actually needed is a systemic change to the environment so that preferable options are available.
If your coworker keeps asking you to review merge requests filled with garbage code they copy/pasted from an LLM, sure, shaming them might be part of the solution. But if people are turning to AI because it's too difficult for them to get certain types of emotional validation in the physical world, making them feel bad about it probably isn't going to help.
I just saw an article about migrants destroying things in Britain. Not to excuse the behaviour, but I wondered where they came from. It turned out to be shit countries fostering that behaviour. Why are they shit? Have they always been like that? Well, no: the British Empire destroyed them. You could argue that was too long ago, but Britain also continues to enjoy the spoils. I offer no solutions. The point is that a sensationalist article wouldn't go there, because the reader doesn't want to know.
So these tools can be useful when you already know the subject matter. I've run queries and gotten objectively false answers; you really need to verify the information you get back. It's as if these LLMs have no concept of true or false: they just say something that statistically looks right after ingesting Reddit. We've already seen cases where ChatGPT-written legal briefs filed by actual lawyers cited precedents that were completely made up, e.g. [2].
There's a really interesting incentive in all this. People like to be told they're right and generally be gassed up, even when they're completely wrong. So if you just optimize for engagement and continued queries and subscriptions, you're just going to get a bunch of "yes men" AIs.
I still think this technology has a long way to go. I'm somewhat reminded of Uber, actually. Uber was burning VC cash at a horrific rate and was (initially) basically betting the company on self-driving. Full self-driving is still far away, even though there are useful things cars can automate, like lane-following on the highway and parking.
I simply can't see how the trillions spent on AI data centers can possibly be recouped.
[1]: https://www.tiktok.com/@huskistaken/video/762093124158341455...
[2]: https://www.theguardian.com/us-news/2025/may/31/utah-lawyer-...
Anyway, no real surprise: we have many examples of people ignoring facts and moving to media that support their views, even when those views are completely wrong. Why should AI be any different?
The problem is: flattery is often just like the cake, and the cake is a lie. Translation: people should improve their own intrinsic qualities and abilities. In theory AI can help here (I've seen it used by good programmers too), but in practice there always seems to be a trade-off. AI also influences how people think, and while some reason that it improves certain things (which may be true), I would argue that it over-emphasises the positives and tries to ignore or downplay its negative aspects.

Nonetheless, a focus on quality would give an objective basis for discussion, e.g. whether your code improved with the help of AI compared to when you did not use it. You'd still have to gather comparable data points, e.g. your results without AI, to compare being trained by AI against training yourself. It's like having a mentor: in one case the mentor is the AI; in the other, your own strategies for training and improving yourself. I would still reason that people may actually be better off without AI. But one has to improve either way; that's a basic requirement in both situations.
It's not news at all for anyone who actually engages with people.