Posted by cainxinth 9/3/2025
But didn’t pocket calculators present the same risk / panic?
At some point AI will probably be like calculators: once everyone is using them for everything, that will be a new and different normal, and the expectations and ways of judging quality will be different than they are today.
Once everyone is doing the same one weird trick as you, it's no longer useful. You can no longer pretend to be a developer or an artist etc.
There will still be a sea of bottom-feeders doing the same thing, but they will just be universally recognized as cheap junk. And that's actually fine, kinda. There is a place and a use for cheap junk that just barely does something, the same as a cheap junky screwdriver or whatever.
My vehicle has a number of self-driving capabilities. When I used them, my brain rapidly stopped attending to the functions I'd given over, to the extent that there was a "gap" before I noticed it was about to do the wrong thing. On resumption of performing that work myself, it was almost as if I had forgotten some elements of it for a moment while my brain sorted it out.
No real reason to think that outsourcing our thinking/writing/etc will cause our brains to respond any differently. Most of the "reasoned" arguments I see against that idea seem based on false equivalences.
Most importantly, I did not remember anything (which is a good thing because half of the output is wrong). I then switched to Stackoverflow etc. instead of the "AI". Suddenly my mental maps worked again, I recalled what I read, programming was fun again, the results were correct and the process much faster.
All the headings and bullets and phrases like "The findings are clear:" stick out like a sore thumb.
>Everyone Is Cheating Their Way Through College. ChatGPT has unraveled the entire academic project.
Sure you do, and maybe it's really an actual benefit for you. Not for most, though. For young folks still going through education, this is devastating. If I didn't have kids I wouldn't care (less quality competition at work), but I do (they're too young to be affected by it now, and by the time they're allowed to use these, frameworks for use and restrictions will already be in place).
But since maybe 30% of folks here are directly or indirectly dependent on LLMs being pushed down every possible throat, and then some, I expect much more denial and resistance to critique of their little pets or investments.
My optimistic take is that the rise of AI in education could cause more workplaces to move away from "must have xyz degree" and actually determine if the candidate has the skills needed.
For this reason, I don't feel as optimistic as you do. I worry instead that equality gaps will widen significantly: there will be the majority which abuses AI and graduates with empty brains, and there will be the minority who somehow manage to avoid doing that (e.g. lucky enough to have parents with sufficient foresight to take preventative measures with their children).
LLMs may end up being both educationally valuable in certain contexts for certain users, and totally unsuitable for developing brains. I would err towards caution for young minds especially.
https://nypost.com/2025/08/19/world-news/china-restricts-ai-...
"That’s because the Chinese Communist Party knows their youth learn less when they use artificial intelligence. Surely, President Xi Jinping is reveling in this leg up over American students, who are using AI as a crutch and missing out on valuable learning experiences as a result.
It’s just one of the ways China protects their youth, while we feed ours into the jaws of Big Tech in the name of progress."
https://www.scmp.com/tech/policy/article/3323959/chinas-soci...
Let's say I'm a writer of no skill who still wants attention. I could spend years learning to write better, but I still might not get any attention.
Or I could use AI to write something today. It won't be all that interesting, because AI still can't write all that well, but it may be better than I can do on my own, and I can get attention today.
If you care about your own growth (or even not dwindling) as a human, that's a trap. But not everyone cares about that...
"Won't touch it, I'd never infect my codebase with whatever garbage that thing could output" -> ChatGPT for a small function here or there -> Cursor/Copilot style autocomplete -> Claude Code fully automating 90% of my tasks.
It felt like magic at first upon reaching that last (current) point. In a lot of ways, for certain things, it still is. But it's becoming clearer and clearer that this will never be a silver bullet, and I'm ready to evolve further to "It's another tool in the toolbox, to be applied judiciously when and where it makes sense, which it usually does not." I've also come to greatly distrust anything an LLM says that isn't verified by a domain expert.
I've also felt a great amount of the joy in my work go away over this time. Much like the artisans of old who were forced to sit back and supervise as the automated machines taking over their craft churned out crappier versions of their work, faster. There's more to this than just being an old fart who doesn't want to change. We all got into this field for a reason, and a huge part of that reason is that it brings us joy. Without that joy we are going to burn out quickly, and quality is going to nosedive.