Posted by cainxinth 9/3/2025
Anecdotally, this is how I felt when I tried out AI agents to help me write code (vibe coding). I always review the code and ask it to break the work into smaller steps, but because I didn't actually write or think through the code myself, I don't have it all in my brain. Sure, I can spend a lot of time really going through it and building my mental model, but it's not the same (for me).
But this is also how I felt when I managed a small team once. When you start to manage more and code less, you have to let go of having the most intimate knowledge of the codebase and place that trust in your team. But at least you have a team of humans.
AI agentic coding is like shifting your job from developer to manager. As the article posted yesterday put it: 'treating AI like a "junior developer who doesn't learn"' [1].
One thing I like about AI is that it's forcing people to write more documentation. No more complaining about that.
1. https://www.sanity.io/blog/first-attempt-will-be-95-garbage
I mean, ultimately, I didn't write it myself. It's more of a "remix" of other people's code. It's like having this comment translated into French for me: it wouldn't improve my French, so why would vibe coding be expected to improve one's programming ability?
Thinking about it myself, and looking at the questions and time limits, I'm not sure how I would navigate that distinction given only 20 minutes. The way I would use an LLM to help write an essay on the topic wouldn't fit within the time limit, so even with an LLM available, I would likely stick to brain only except in a few specific cases (forgetting how to spell a word, or forgetting the name of a concept).
So this study is likely applicable to similarly timed situations, like letting students use LLMs on a test, but that's something I would already have considered extremely problematic for learning to begin with (granted, it's still worthwhile to find evidence backing even the 'obvious' conclusions).
The age of social media and constant distraction already atrophies the ability to maintain sustained focus. Who reads a book these days, never mind a thick book requiring struggle to master? That requires immersion, sustained engagement, persevering through discomfort, and denying yourself indulgence in all sorts of temptations and enticements to get a cheap fix. It requires postponed gratification, or a gratification that is more subtle and measured and piecemeal rather than some sharp spike. We become conditioned in Pavlovian fashion, more habituated to such behavior, the more we engage in such behavior.
The reliance on AI for writing is partly rooted in the failure to recognize that writing is a form of engagement with the material. Clear writing is a way of developing knowledge and understanding. It helps uncover what you understand and what you don't. If you can't explain something, you don't know it well enough to have clear ideas about it. What good does an AI do you - you as a knowing subject - if it does the "writing" for you? You, personally, don't become wiser or better. You don't become fit by watching others exercise.
This isn't to say AI has no purpose, but our attitude toward technology is often irresponsible. We think that if we have the power to do something, we are missing out by not using it. This is boneheaded. The ultimate measure is whether the technology is good for you in some particular use case. Sometimes, we make prudential allowances for practical reasons. There can be a place for AI to "write" for us, but there are plenty of cases where it is simply senseless to use. You need to be prudent, or you end up abusing the technology.
I have recently been finding it noticeably more difficult to come up with the word I'm thinking of. Is this because I've been spending more time scrolling than reading? I have no idea.
Don’t sugarcoat it. Tell us how you really feel.
Probably both are true: you should try them out and then use them where they are useful, not for everything.
None of my professional life reflects that whatsoever. When used well, LLMs are exceptional at putting out large amounts of code of sufficient quality. My peers have switched entire engineering departments to LLM-first development and are reporting that the whole org is moving 2x as fast, even after they fired the 50% of devs who couldn't make the switch and didn't hire replacements.
If you think LLM coding is a fad, your head is in the sand.
I have no doubt that volumes of code are being generated and LGTM'd.
It used to take me days or even multiple sprints to complete large-scale infrastructure projects, largely because of having to repeatedly reference Terraform cloud provider docs for every step along the way.
Now I use Claude Code daily. I use an .md file to describe what I want in as much detail as possible, with whatever idiosyncrasies or caveats I know are important from a career of doing this stuff, and then I go make coffee and come back to 99% working code (sometimes there are syntax errors due to provider / API updates).
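As a rough illustration of what goes into such a spec (the resource, region, and tag values below are made-up placeholders, not from any real project):

    # artifact-bucket.md (hypothetical request spec)
    Goal: a private, versioned S3 bucket for CI build artifacts, managed in Terraform.
    - Region: us-east-1
    - Block all public access; bucket-owner-enforced object ownership.
    - Lifecycle rule: expire noncurrent object versions after 30 days.
    - Tag with team=platform and env=prod using our existing tags local.
    - Reuse the current provider and backend config; do not redefine them.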
I love learning, and I love coding. But I am hired to get things done, and to succeed (both personally and in my role, which is directly tied to our organization's security, compliance, and scalability) I can't spend two weeks on my pet projects for self-edification. I also have to worry about the million things that Claude CAN'T do for me yet, so whatever it can take off of my plate is priceless.
I say the same things to my non-tech friends: don't worry about it 'coming for your job' yet - just consider that your output and perceived worth as an employee could benefit greatly from it. If it comes down to two awesome people but one can produce even 2x the amount of work using AI, the choice is obvious.
For this kind of low-stakes, easily verifiable task, it's hard for me to argue against using LLMs.
https://edition.cnn.com/2025/08/27/us/alaska-f-35-crash-acci...
But for it to be useful, you have to already know what you're doing. You need to tell it where to look and review what it does carefully. Also, sometimes I find particularly hairy bits of code need to be written completely by hand, so I can fully internalise the problem. Only once I've internalised the hard parts of the codebase can I effectively guide CC. Plus there are so many other things in my day-to-day where next-token predictors are just not useful.
In short, it's useful, but no one's losing a job because it exists. Also, the idea of having non-experts manage software systems at any moderate or higher level of complexity is still laughable.
As with any new tool that automates a human process, humans must still learn the manual process to understand the skill.
Students should still learn to write all their code manually and build things from the ground up before learning to use AI as an assistant.
personally I think everyone should shut up
In this mode of use, you write out all your core ideas as stream of consciousness, bullet points, or whatever, without constraints of structure or style, likely more content than will make it into the essay. Then you have the LLM summarize and clean it up.
Would be curious to see how that would play out in a study like this. I suspect the subjects would not be able to quote their essays verbatim, but would be able to recall all the main ideas and feel a greater sense of ownership.
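For what it's worth, the cleanup step in that mode is simple enough to script. Here's a minimal sketch assuming the OpenAI Python client; the model name and prompt wording are placeholders, not anything from the study:

    # Rough sketch of the "LLM as cleanup tool" mode: I do the thinking up front,
    # the model only reorganizes and polishes. Model name and prompt are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def clean_up(raw_notes: str) -> str:
        """Turn stream-of-consciousness notes into a structured draft without adding new ideas."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": ("Rewrite the user's notes as a clear, well-structured draft. "
                             "Keep every idea they wrote; do not introduce new ones.")},
                {"role": "user", "content": raw_notes},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(clean_up("- llm only cleans up\n- I keep ownership of the ideas\n- structure comes later"))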
What's really bothering me, though, is that I enjoy my job less when using an LLM. I feel less accomplished, I learn less, and I overall don't derive the same value from my work. But, on the flip side, by not adopting an LLM I'll be slower than my peers, which then also impacts my job negatively.
So it's like being stuck between a rock and a hard place - I don't enjoy the LLM usage but feel somewhat obligated to.
Passive AI use, where you let something else think for you, will obviously cause cognitive decline.
Active use of AI as a thought partner, where you keep learning as you go, feels different.
The issue with studying 18-22-year-olds is that their prefrontal cortex (a center of logic, willpower, focus, reasoning, and discipline) is not fully developed until 26. But that probably doesn't matter if the study is trying to make a point about technology.
The art of discerning fake information from real could also increase cognitive capacity.
I wouldn't call it "cognitive decline", more "a less deep understanding of the subject".
Try solving bugs in your vibe-coded projects... It's painful: you haven't learned anything while building the thing, and as a result you don't fully grasp how your creation works.
LLMs are tools, but also shortcuts, and humans learn by doing ¯\_(ツ)_/¯
This is pretty obvious to me after using LLMs for various tasks over the past few years.
I am offended by coworkers who submit incompletely considered, visibly LLM-generated code.
These coworkers are dragging my team down.