Posted by cainxinth 9/3/2025
Is it safe to say that LLMs are, in essence, making us "dumber"? No! Please do not use words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "brain damage", "passivity", "trimming", "collapse" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it.
[1]: https://www.media.mit.edu/projects/your-brain-on-chatgpt/ove...
Using LLMs to replace the effort we would've otherwise expended to complete a task short-circuits that exercising function, and I would suggest it is potentially addictive because it's a near-instant reward for little work.
It would be interesting to see a longitudinal study on the effect of LLMs on collective attention spans and academic scores, where testing is conducted on pen and paper.
It's like a drug. You start using it, and think you have super powers, and then you've forgotten how to think, and you need AI just to maybe be as smart as you were before.
Every company will need enterprise AI solutions just to maybe get the same amount of productivity as they got before without it.
When writing was invented, societies started depending on long form memorization less, which is a cognitive "decline". When calculators were invented, societies started depending on mental math less, which is a cognitive "decline".
I'm sure LLMs are doing the same thing. People aren't getting dumber, they are just outsourcing tasks more, so that their brains spend more time on the tasks that can't be outsourced.
People who maintain a high level of curiosity or have a drive to create things will most assuredly benefit from using AI to outsource work that doesn't support those drives. It has the potential to free up more time for creative endeavors or those that require deeper thinking. Few would argue against the benefit there.
Unfortunately, anti-intellectualism is rampant, media literacy is in decline, and a lot of people are content to consume content and not think unless they absolutely have to. Dopamine is a helluva drug.
If LLMs reduce the cognitive effort at work, and the people go home to doom scroll on social media or veg out in front of their streaming media of choice, it seems that we're heading down the path of creating a society of mindless automatons. Idiocracy is cited so often today that I hate to do so myself, but it seems increasingly prescient.
Edit: I also don't think that AI will enable a greater work-life harmony. The pandemic showed that a large number of jobs could effectively be done remotely. However, after the pandemic, there was a significant "Return to Office" movement that almost seemed like retribution for believing we could achieve a better balance. Corporations won't pass on the time savings to their employees and enable things like 4-day work weeks. They'll simply expect more productivity from the employees they have.
Also, domesticated dogs show indications of lower intelligence and memory than wolves. They don't have to plan complex strategies to find and kill food, anymore.
But humans need jobs, and jobs need to capture value from society. So we do actually still have to stay sharp, whatever form "sharp" takes.
If you're an entrepreneur, your job is to please the customer and to squeeze your vendors and employees. You still take little to no part in directly taking care of yourself, except as a hobby. Unless you want to be congratulated for wiping your own ass or lifting a fork to your mouth.
Wouldn't that be the expected result here? Less knowledge, more questions?
When I use LLMs, it’s less about patching holes in my memory and more about taking an idea a few steps further than I otherwise might. For me it’s expanding the surface area of inquiry, not shrinking it. If the study’s thesis were true in my case, I’d expect to be less curious, not more.
Now, that said, I also have a healthy dose of skepticism about all output, but I find that in the general case I can at least explore my thoughts further than I might have in the past.
I don't have a dog in this fight, but "asking more questions" could be evidence of cognitive decline if you're having to ask more questions than ever!
It's easy to twist evidence to fit biases, which is why I'd hold judgment until better evidence comes through.
But if I'm teaching a class, and one student keeps asking questions that they feel the material raised, I don't tend to think "brain damage". I think "engaged and interested student".
Personally, I find myself often asking AI about things I wouldn't have been bothered to find out about before.
For example, I've always seen these funny little grates on the outside of houses near me and wondered what they are. Googling "little grates outside houses" doesn't help at all. Give AI a vaguish description and it instantly tells you they are old boot scrapers.
Maybe there is a movie in the back of my head or a song. Typical search engine queries would never find it. I can give super vague references to a LLM and with search enabled get an answer that’s correct often enough.
If I’m constantly asking “what does this mean again?” that would signal decline. But if I’m asking “what if I combine this with X?” or “what are the tradeoffs of Y?” that feels like the opposite: more engagement, not less.
That’s why I’m skeptical of blanket claims from one study; the lived experience doesn’t map so cleanly.
> 83.3% of LLM users were unable to quote even one sentence from the essay they had just written.
> In contrast, 88.9% of Search and Brain-only users could quote accurately.
> 0% of LLM users could produce a correct quote, while most Brain-only and Search users could.
Reminds me of my coworkers who have literally no idea what ChatGPT put into their PR from last week.
Could a person, armed with ChatGPT, come up with a better solution to a real-world problem than without ChatGPT? Maybe that's what actually matters.
But how can they discuss any content if even the "writer" does not remember what they wrote?
I think we'll see a return to the apprentice style of institution, where people try to create the best real-world solution possible with LLMs, 3D printers, etc. Then they'll use recorded college courses the way our grandparents used books.
Calculators reduced our capabilities in mental and pencil-paper arithmetic. Graphing calculators later reduced our capacity to sketch curves, and in turn, our intuition in working directly with equations themselves. Power tools and electric mixers reduced our grip strength. Cheap long distance plans and electronic messaging reduced our collective abilities in long-form letter writing. The written word decimated the population of bards who could recite Homer from memory.
It's not that there aren't pitfalls and failure modes to watch out for, but the framing as a "general decline" is tired, moralizing, motivated, clickbait.
And now people make bad decisions in their daily life about money etc. Most people can't do the math in their head but they also aren't using their calculator at the grocery store to avoid being taken advantage of. The math doesn't get done.
The lesson isn't that we survived calculators, it's that they did dull us, and our general thinking and creativity are about to get likewise dulled.
Their trial design and interpretation of results is not properly done (i.e. they are making unfair comparison of LLM users to non-LLM users), so they can't really make the kind of claims they are making.
This would not stand up to peer review in its current form.
I'm also saying this as someone who generally does believe these declines exist, but this is not the evidence it claims to be.
Do you have links or citations to people saying these claims?
Comes down to:
- Self-selection bias
- Trial design
- Dubious interpretations of neural connectivity
Just like a muscle will atrophy from disuse, skills and cognitive assets, once offloaded, will similarly atrophy. People don't memorize phone numbers, GPS gets you where you want to go, your IDE seamlessly helps you along so much you could never code in a text editor, your TI-89 will do most of your math homework, and as a manager you direct people to do work and no longer do the work yourself.
We of course never really lower our absolute cognitive load by much, we just shift it. Each of those points has its own knowledge base that is needed to use it, but sometimes we lose general skills in favor of esoteric skills.
While I may now possess esoteric skills in operating my GPS, setting waypoints, saving locations, and entering coordinates, if I use it a lot I find I need it to get back to the hotel from just a few miles away, even if I've driven the route multiple times. I'm offloading learning the route to the GPS. My father, on the other hand, struggles to use one, and when he's away he pays a lot of attention to where he's going and remembers routes better.
Am I dumber than him? With respect to operating the device, certainly not. But if we both drove separately to a new location and you took the GPS from me once I got there, I'd certainly look a lot dumber getting lost trying to get back without my mental crutch. I didn't have to remember the route, so I didn't. I offloaded that to the machine, and some people offload a LOT; pretty sure nobody ever drove into a lake because a paper map told them to.
Modern AI is only interesting insofar as it subsumes tasks that until now we would consider fundamental. Reading, writing, basic comprehension. If you let it, AI will take over these things and spoon feed you all you want. Your cognitive abilities in those areas will atrophy and you will be less cognizant of task elements where you've offloaded mental workload to the AI.
And we'll absolutely see more of this: people who are a whiz at using AI, know every app, and get things done by reflex, but never learned or completely forgot how to do basic stuff, like read a paper, order a salad off a menu in person, or book a flight. It'll be both funny and sad when it happens.
It wasn't immediately clear what they actually had the subjects do. It seems like they wrote an essay, which... duh? I would bet brain activity would be similar, if not identical, to an LLM user's if the subjects were instead asked to have someone else write their essay for them.