Posted by cainxinth 9/3/2025
The ability to easily edit in word processors surely atrophied people's ability to really reason out what they wanted to write before committing it to paper. Is it sad that these traits are less readily available in the human populace? Sure. Do we still use word processors anyway because of the tremendous benefits they have? Of course. The same could be said of spellcheckers, tractors, calculators, power tools, etc.
With LLMs, it's so much quicker to access a tremendous breadth of information, as well as drill down and get a pretty good depth on a lot of things too. We lose some things by doing it this way, and it can certainly be very misused (usually in a fairly embarrassing way). We need to keep it human, but AI is here to stay and I think the benefits far exceed the "cognitive decline" as mentioned in this journal.
Besides, academics are bitter since LLMs are better at teaching than they are!
I think a better interpretation would be to say that LLMs give people the ability to "filter out" certain tasks in our brains. A good parallel might be that some drivers can cover long distances on what is essentially "auto-pilot". When this happens they drive correctly but don't really register every single action they've taken along the way.
In this study you are asking for information that is irrelevant (to the participant). So, I think it is expected that people would filter it out if given the chance.
[edit] Forgot to link the related xkcd: https://xkcd.com/1414/
But it does highlight that this mind-slop decline is not new in any way, even if it may have accelerated with the erosion of standards.
Make of it what you will, but if the standards that produced a state everyone really enjoys and benefits from are done away with, that enjoyable state will inevitably start crumbling all around you.
AI is not really unusual in this regard, other than that it is squarely hitting groups like public health policy journalists and programmers who previously thought they were immune because they were engaged in writing. Yes, programmers are essentially just writers.
"Writing is nature’s way of letting you know how sloppy your thinking is." -- Guindon
I would argue that it helps kids learn how to organize and formulate coherent thoughts and communicate with others. I'm sure it helps them do homework, too.
And so it is with many things. I wrote cursive right through the end of my high school years, but while I can type well on a computer, I have trouble even writing block lettering without mistakes now, and cursive is a lost cause.
Ubiquitous electronic calculators have eroded the heroic mental calculation skills of old. And now artificial "thinking machines" to do the thinking for you cause your brain to atrophy. Colour me surprised. The Whispering Earring story was mentioned here just recently but is totally topical.
There will always be people who misuse something, but we should not punish those who do not. Same with drugs: there are functional junkies who know when to stop, take a tolerance break, dose just enough, and so forth, versus the irresponsible ones. The situation is quite similar, and I do not want AI to be "banned" (assuming it even could be) because of the people who misuse LLMs.
People, let us have nice things.
As for the article... did they not say the same thing about search engines and Wikipedia? Remember how making a cheat sheet actually helps you learn (because you have to write the material down yourself)? The problem is that people do not even bother reading the output of the LLM, and that is on them.
The Internet was supposed to be this wonderful free place with all information available and unbiased, not the cesspool of scams and tracking that makes 1984 look like a children's fairytale. Atomic energy was supposed to free mankind from the everlasting struggle for energy, end wars, and whatnot. LLMs were supposed to be X and not Y, and used as Z and not BBCCD.
Comparing what the population loses overall to what's gained (really, what? a mild efficiency increase sometimes experienced at the individual level, sometimes made up for PR), I consider LLMs a net loss for mankind as a whole.
The above should tell you something about human nature, and how naive some of the brightest of us are.
If it is a human nature issue (and I agree it is), then we are in deep shit, and this is why we cannot have nice things.
Educate, and if that fails, punish those who "misuse" it. I do not have a better idea. It works quite well for me for coding, and it will continue to work as long as it does not get nerfed.
Well, cheers to an even bigger gap between the elite, who can afford a good education and upbringing, and the cheap, crappy rest. A number of sci-fi novels come to mind where poor, semi-mindless masses are governed by 'educated' elites. I always wondered how badly such a society must have screwed up in the past to end up like that. Nope, the road to hell is indeed paved with good intentions and small steps that seem innocent or even beneficial on their own, in their time.
AI is no different. Most will use it and not learn the fundamentals. There’s still lots of work for those people. Then some of us are doing things like looking at the state machines that Rust async code generation produces, or inspecting what the Java JIT emits, and still others are hacking ARM assembly. I use AI to take care of the boring bits, just as writing a nice UI in C++ was tedious back in 1990, so we used VB for that.
I’ll do it once or twice, then tell the LLM to do it and reference the changes I made, and it’s usually passable. It’s not fit for anything more, imo.