Posted by cainxinth 9/3/2025

MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline (publichealthpolicyjournal.com)
615 points | 566 comments
flanbiscuit 9/3/2025|
> and diminished sense of ownership over their own writing.

Anecdotally, this is how I felt when I tried out AI agents to help me write code (vibe coding). I always review the code and ask it to break things down into smaller steps, but because I didn't actually write and think through the code myself, I don't have it all in my brain. Sure, I can spend a lot of time really going through it and building my mental model, but it's not the same (for me).

But this is also how I felt when I managed a small team once. When you start to manage more and code less, you have to let go of having intimate knowledge of the codebase and place that trust in your team. But at least you have a team of humans.

Agentic AI coding is like shifting your job from developer to manager. As the article posted yesterday put it, it's like 'treating AI like a "junior developer who doesn't learn"' [1,2].

One good thing I like about AI is that it's forcing people to write more documentation. No more complaining about that.

1. https://www.sanity.io/blog/first-attempt-will-be-95-garbage

2. https://news.ycombinator.com/item?id=45107962

globular-toast 9/3/2025|
Yeah, same experience here too. I "vibe coded" a project, about 3k loc including tests. But whenever I need to look at it for bugs etc. it just feels like I'm looking at someone else's code. I don't have that intuition of where things are, which bits are fragile, which bits might be the likely cause of an issue, etc.

I mean, ultimately, I didn't write it myself. It's more of a "remix" of other people's code. Or like having this comment machine-translated into French: it wouldn't improve my French, so why would vibe coding be expected to improve one's programming ability?

SkyBelow 9/3/2025||
The main issue I see is that the methodology section of the paper limited the total time to 20 minutes. Is this a study of using LLMs to write an essay for you, or of using LLMs to help you write an essay? To be fair, LLMs can't be switched between the two modes, so the distinction is left up to the user in how they engage with the tool.

Thinking about it myself, and looking at the questions and time limits, I'm not sure how I would navigate that distinction given only 20 minutes. The way I would use an LLM to aid me in writing an essay on the topic wouldn't fit within the time limit, so even with an LLM, I would likely stick to brain only except in a few specific cases that might occur (forgetting how to spell a word or forgetting the name for a concept).

So this study is likely applicable to similar timed settings, like letting students use LLMs on a test, but that's a use I would have already seen as extremely problematic for learning to begin with (granted, it's still worthwhile to find evidence to back even the 'obvious' conclusions).

lo_zamoyski 9/3/2025||
Why is this surprising? "Use it or lose it" may be a cliché, but it's true: if you don't keep a faculty conditioned, it gets "rusty". That's the general principle, so it would be surprising if this were an exception.

The age of social media and constant distraction already atrophies the ability to maintain sustained focus. Who reads a book these days, never mind a thick book that requires struggle to master? That requires immersion, sustained engagement, persevering through discomfort, and denying yourself indulgence in all sorts of temptations and enticements that offer a cheap fix. It requires postponed gratification, or a gratification that is subtle, measured, and piecemeal rather than a sharp spike. And the more we engage in such behavior, the more habituated to it we become, in Pavlovian fashion.

The reliance on AI for writing is partly rooted in the failure to recognize that writing is a form of engagement with the material. Clear writing is a way of developing knowledge and understanding. It helps uncover what you understand and what you don't. If you can't explain something, you don't know it well enough to have clear ideas about it. What good does an AI do you - you as a knowing subject - if it does the "writing" for you? You, personally, don't become wiser or better. You don't become fit by watching others exercise.

This isn't to say AI has no purpose, but our attitude toward technology is often irresponsible. We think that if we have the power to do something, we are missing out by not using it. This is boneheaded. The ultimate measure is whether the technology is good for you in some particular use case. Sometimes, we make prudential allowances for practical reasons. There can be a place for AI to "write" for us, but there are plenty of cases where it is simply senseless to use. You need to be prudent, or you end up abusing the technology.

TYPE_FASTER 9/3/2025||
I used to know a bunch of phone numbers by heart. I haven't done that since I got a cellphone. Has that had an impact on my ability to memorize things? I have no idea.

I have recently been finding it noticeably more difficult to come up with the word I'm thinking of. Is this because I've been spending more time scrolling than reading? I have no idea.

isodev 9/3/2025|
An AI is telling me these could be symptoms of the onset of a degenerative neurological condition. Is it true? I have no idea.
mansilladev 9/3/2025||
“…our cognitive abilities and creative capacities appear poised to take a nosedive into oblivion.”

Don’t sugarcoat it. Tell us how you really feel.

jennyholzer 9/3/2025|
I think developers who use "AI" coding assistants are putting their careers at risk.
dguest 9/3/2025|||
And here I'm wondering if I'm putting my career at risk by not trying them out.

Probably both are true: you should try them out and then use them where they are useful, not for everything.

Taek 9/3/2025|||
HN is full of people who say LLMs aren't good at coding and don't "really" produce productivity gains.

None of my professional life reflects that whatsoever. When used well, LLMs are exceptional at putting out large amounts of code of sufficient quality. My peers have switched entire engineering departments to LLM-first development and are reporting that the whole org is moving 2x as fast, even after they fired the 50% of devs who couldn't make the switch and didn't hire replacements.

If you think LLM coding is a fad, your head is in the sand.

bgwalter 9/3/2025|||
The instigators say they were correct and fired their political opponents. Unheard of!

I have no doubt that volumes of code are being generated and LGTM'd.

mooxie 9/3/2025||||
Agreed. I work for a tiny startup where I wear multiple hats, and one of them is DevOps. I manage our cloud infra with Terraform, and anyone who has scaled cloud infrastructure from a <10 headcount company to a successful 500+ one knows how critical it is to get a handle on the infrastructure early. It's basically now or never.

It used to take me days or even multiple sprints to complete large-scale infrastructure projects, largely because of having to repeatedly reference Terraform cloud provider docs for every step along the way.

Now I use Claude Code daily. I write an .md file describing what I want in as much detail as possible, with whatever idiosyncrasies or caveats I know are important from a career of doing this stuff, and then I go make coffee and come back to 99% working code (sometimes there are syntax errors due to provider / API updates).
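
For the syntax-error case, a cheap pre-review gate helps. Here's a rough sketch (not my actual setup: it assumes the terraform CLI is on PATH, and the directory name is made up):

    import subprocess

    # Hypothetical pre-review gate for LLM-generated Terraform.
    # Assumes the terraform CLI is installed; the directory name
    # below is purely illustrative.
    def check_generated_tf(workdir: str) -> bool:
        """Run terraform's own static checks before any human review."""
        for cmd in (
            ["terraform", "init", "-backend=false"],  # fetch providers, skip backend
            ["terraform", "fmt", "-check"],           # flag formatting drift
            ["terraform", "validate"],                # catch syntax/schema errors
        ):
            result = subprocess.run(cmd, cwd=workdir, capture_output=True, text=True)
            if result.returncode != 0:
                print(result.stdout + result.stderr)
                return False
        return True

    if __name__ == "__main__":
        ok = check_generated_tf("./claude-generated-infra")
        print("ready for human review" if ok else "send it back to the model")

If the gate passes, the human review gets spent on architecture and security questions instead of typos.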

I love learning, and I love coding. But I am hired to get things done, and to succeed (both personally and in my role, which is directly tied to our organization's security, compliance, and scalability) I can't spend two weeks on my pet projects for self-edification. I also have to worry about the million things that Claude CAN'T do for me yet, so whatever it can take off of my plate is priceless.

I say the same things to my non-tech friends: don't worry about it 'coming for your job' yet - just consider that your output and perceived worth as an employee could benefit greatly from it. If it comes down to two awesome people but one can produce even 2x the amount of work using AI, the choice is obvious.

010101010101 9/3/2025||||
Yesterday I used Warp’s LLM integrations to write two shell scripts that would have taken me longer to author myself than to do the task manually. Of the three options, this was the fastest by a wide margin.

For this kind of low-stakes, easily verifiable task, it's hard for me to argue against using LLMs.

dguest 9/3/2025|||
Right now I'm mostly an "admin" coder: I look at merge requests and tell people how to fix stuff. I point them to LLMs a lot too. People I know who are actually writing a lot of code are usually saying LLMs are nice.
010101010101 9/3/2025||||
Developers who don't understand how the most basic aspects of the systems they work on function are a dime a dozen already; I'm not sure LLMs change the scale of that problem.
baq 9/3/2025||||
fighter jet pilots who use the ejection seat are putting their careers at risk, but so are the ones who don't use it when they should.
bookofjoe 9/3/2025||
>F-35 pilot held 50-minute airborne conference call with engineers before fighter jet crashed in Alaska

https://edition.cnn.com/2025/08/27/us/alaska-f-35-crash-acci...

flanked-evergl 9/3/2025||||
The future is increased productivity. If someone can outproduce you by using AI, they will take your job.
tmcb 9/3/2025|||
This is industrial-grade FOMO. They will take the jobs of the first handful of people. The moment it becomes obvious that LLMs are a productivity booster, people will learn how to use them, just as happened with every other technology before.
boesboes 9/3/2025||||
After working with claude code for a few months, I am not worried.
falcor84 9/3/2025||
What does that mean? If you're still paying for Claude Code, you are supposedly getting increased productivity, right? Or otherwise, why are you still using it?
lexandstuff 9/3/2025||
I find it useful. A nice little tool in the toolkit: it saves a bunch of typing, helps to overcome inertia, and helps me find things in unfamiliar parts of the codebase, amongst other things.

But for it to be useful, you have to already know what you're doing. You need to tell it where to look and review what it does carefully. Also, I sometimes find that particularly hairy bits of code need to be written completely by hand, so I can fully internalise the problem. Only once I've internalised the hard parts of the codebase can I effectively guide CC. Plus there are so many other things in my day-to-day where next-token predictors are just not useful.

In short, it's useful, but no one's losing a job because it exists. Also, the idea of having non-experts manage software systems at any moderate-or-above level of complexity is still laughable.

falcor84 9/3/2025||
I don't think the concern is that non-experts would manage large software systems, but that experts would use it to manage larger software systems on their own before needing to hire additional devs, and in that way reduce the number of available roles. I.e. it increases the "pain threshold" before I would say to myself "it's worth the hassle to hire and onboard another dev to help with this".
hackable_sand 9/3/2025|||
Blink twice if your employer is abusing you
falcor84 9/3/2025||||
I would say that the careers of everyone who views themselves as writing code for a living are already at great risk. So if you're in that situation, you have to see how to go up (or down) the ladder of abstraction, and getting comfortable with using GenAI is possibly a good way to do that.
unethical_ban 9/3/2025||||
Were accountants that adopted Excel foolish?

As with any new tool that automates a human process, humans must still learn the manual process to understand the skill.

Students should still learn to write all their code manually and build things from the ground up before learning to use AI as an assistant.

micromacrofoot 9/3/2025|||
everyone's also telling us that if we don't use AI we're putting our careers at risk, and that AI will eventually take our jobs

personally I think everyone should shut up

pfisherman 9/3/2025||
A big caveat here is how people are using the LLMs. Here they were using them for things like information recall and ideation: LLM as producer and human as editor / curator. They did not test another (my preferred) mode of LLM use - human as producer and LLM as editor / curator.

In this mode of use, you write out all your core ideas as stream of consciousness, bullet points, or whatever, without constraints of structure or style - more content than will make it into the essay. Then you have the LLM summarize and clean it up.
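
Concretely, something like this (a minimal sketch; it assumes the openai Python SDK with an API key in the environment, and the model name is just a placeholder):

    from openai import OpenAI

    # Sketch of the "human as producer, LLM as editor" mode: every idea
    # comes from the raw notes below; the model only tightens structure
    # and style. Assumes OPENAI_API_KEY is set in the environment; the
    # model name is a placeholder.
    client = OpenAI()

    raw_notes = """
    - essay topic: cognitive offloading
    - point 1: writing is thinking; skipping the writing skips the thinking
    - point 2: sense of ownership drops when the model produces the draft
    - counterpoint: editor mode keeps the ideas yours
    """

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you have access to
        messages=[
            {"role": "system",
             "content": "Edit these notes into clear prose. Do not add new "
                        "ideas; keep every point the author made."},
            {"role": "user", "content": raw_notes},
        ],
    )
    print(response.choices[0].message.content)

The key constraint is in the system message: the model is only allowed to restructure, not to generate content.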

I would be curious to see how that would play out in a study like this. I suspect the subjects would not be able to quote verbatim, but would be able to recall all the main ideas and feel a greater sense of ownership.

Insanity 9/3/2025||
So, logically, I know this is the case. I can feel it happening to me when I use an LLM to generate any kind of work. Although I rarely use it for coding, as my job is more at a higher level (designs, etc.), if I have the LLM write part of a trade-off analysis, I'll remember it less and be less engaged.

What's really bothering me though, is that I enjoy my job less when using an LLM. I feel less accomplished, I learn less, and I overall don't derive the same value out of my work.. But, on the flip side, by not adopting an LLM I'll be slower than my peers, which then also impacts my job negatively.

So it's like being stuck between a rock and a hard place - I don't enjoy the LLM usage but feel somewhat obligated to.

Eawrig05 9/4/2025||
This study is so limited in scope that the title is really misleading - "AI Use Reprograms the Brain" is not a fair assessment of the study. The study focuses on one question: what is the effect of relying on an LLM to write your essay? The answer: it makes you forget how to write a good essay. I mean, I think it's obvious that if you rely on an LLM to write for you, you effectively lose the skill of writing. But what if you use an LLM to teach you a concept? Would that also lead to cognitive decline? I don't know the answer, but I think that is a question that ought to be explored.
j45 9/3/2025||
The gap I see is that the definition of "AI use" doesn't clearly delineate between passive use (similar to consumption) and active use.

Passive AI use, where you let something else think for you, will obviously cause cognitive decline.

Active use of AI as a thought partner, learning as you go, seems to feel different.

The issue with studying 18-22 year olds is that the prefrontal cortex (a center of logic, willpower, focus, reasoning, and discipline) is not fully developed until around 26. But that probably doesn't matter if the study is trying to make a point about technology.

The art of telling fake information from real could also increase cognitive capacity.

teekert 9/3/2025|
Anybody who has tried to shortcut themselves into a report on something using an LLM, and was then asked to defend the plans contained within it, knows that writing is thinking. And if you outsource the writing, you do less thinking, and with less thinking there is less understanding. Your mental model is less complete, less comprehensive.

I wouldn't call it "cognitive decline", more "a less deep understanding of the subject".

Try solving bugs in your vibe-coded projects... It's painful: you haven't learned anything while building something, and as a result you don't fully grasp how your creation works.

LLMs are tools, but also shortcuts, and humans learn by doing ¯\_(ツ)_/¯

This has become pretty obvious to me after using LLMs for various tasks over the past few years.

jennyholzer 9/3/2025|
This dynamic is frustrating on the individual level, but it is poisonous on the organizational level.

I am offended by coworkers who submit incompletely considered, visibly LLM generated code.

These coworkers are dragging my team down.

gkilmain 9/3/2025|||
I find this acceptable if your coworkers are checked out and looking for that next big thing
warmedcookie 9/3/2025||||
On the bright side, if you are forced to write AI code, at least reviewing PRs of AI generated slop gives your brain an exercise, albeit a frustrating one.
teekert 9/3/2025|||
I'm sure they are, but maybe they just need some guidance. I was fortunate to learn this by myself, but when you just start out, it feels like magic. Only later do you realize you have also sacrificed something.