Posted by vismit2000 18 hours ago

How AI assistance impacts the formation of coding skills (www.anthropic.com)
388 points | 301 comments
epolanski 6 hours ago||
> Importantly, using AI assistance didn’t guarantee a lower score. How someone used AI influenced how much information they retained. The participants who showed stronger mastery used AI assistance not just to produce code but to build comprehension while doing so

This is my experience exactly. I have never learned as much as I do now with AI.

It's interesting that the numbers show most users do worse, but I dislike the blanket assumption that no one can use it properly to learn faster as well.

oxag3n 2 hours ago||
> For novice workers in software engineering or any other industry, our study can be viewed as a small piece of evidence toward the value of intentional skill development with AI tools.

TL;DR: it's not AI that makes you dumb, it's the wrong "Output style" - just choose the Learning style.

asyncadventure 12 hours ago||
What's fascinating is how AI shifts the learning focus from "how to implement X" to "when and why to use X". I've noticed junior developers can now build complex features quickly, but they still struggle with the architectural decisions that seniors make instinctively. AI accelerates implementation but doesn't replace the pattern recognition that comes from seeing hundreds of codebases succeed and fail.
yalogin 3 hours ago||
Is this the equivalent of cigarette companies putting “smoking kills” on their packaging?
jbellis 13 hours ago||
Good to see that Anthropic is honest and open enough to publish a result with a mostly negative headline.

> Importantly, using AI assistance didn’t guarantee a lower score. How someone used AI influenced how much information they retained. The participants who showed stronger mastery used AI assistance not just to produce code but to build comprehension while doing so—whether by asking follow-up questions, requesting explanations, or posing conceptual questions while coding independently.

This might be cynically taken as cope, but it matches my own experience. A poor analogy until I find a better one: I don't do arithmetic in my head anymore; it's enough for me to know that 12038 x 912 is in the neighborhood of 10M, and if the calculator gives me an answer much different from that, I know something went wrong. In the same way, I'm not writing many for loops by hand anymore, but I know how the code works at a high level and how I want to change it.
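
As a rough illustration of that sanity check (my numbers, not the commenter's workflow), the order-of-magnitude estimate already pins down the expected answer:

    # Hedged sketch: the rounded estimate is enough to catch a calculator
    # (or an LLM) that is off by an order of magnitude.
    exact = 12038 * 912            # 10,978,656
    estimate = 12_000 * 900        # 10,800,000 -- "in the neighborhood of 10M"
    assert abs(exact - estimate) / exact < 0.05   # within a few percent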

(We're building Brokk to nudge users in this direction rather than toward a magic "Claude take the wheel" button; link in bio.)

hollowturtle 13 hours ago||
> Unsurprisingly, participants in the No AI group encountered more errors. These included errors in syntax and in Trio concepts, the latter of which mapped directly to topics tested on the evaluation

I'm wondering if we could have the best of IDE/editor features like LSPs and LLMs working together. With an LSP, syntax errors are a solved problem, and if the language is statically typed I often find myself just checking the type signatures of library methods, which is simpler for me than asking an LLM. But I'd love to have LLMs fix syntax and, with or without types available, suggest how best to use libraries given the current context.
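
A minimal sketch of that combination, with hypothetical lsp_hover and llm_complete helpers (neither is a real API; a real integration would speak JSON-RPC to a language server, e.g. textDocument/hover):

    # Hedged sketch, not a real integration: feed an LSP-verified type
    # signature into the LLM prompt so suggestions are grounded in real types.
    # lsp_hover and llm_complete are hypothetical callables supplied by the
    # caller, not existing library functions.
    def suggest_usage(lsp_hover, llm_complete, path, line, col, snippet):
        signature = lsp_hover(path, line, col)   # e.g. "async def sleep(seconds: float) -> None"
        prompt = (
            "The language server reports this signature:\n"
            f"{signature}\n\n"
            "Fix any syntax errors and suggest idiomatic usage for:\n"
            f"{snippet}"
        )
        return llm_complete(prompt)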

Cursor tab does that to some extent, but it's not foolproof and it still feels too "statistical".

I'd love to have something deeply integrated with LSPs and IDE features. For example, VS Code alone can suggest imports; Cursor tries to complete them statistically but often suggests the wrong import path. I'd like to have the two working together.

Another example is renaming identifiers with F2: it is reliable and predictable, and I can't say the same about asking an agent to do it. On the other hand, when a predictable 1-to-1 rename isn't enough, e.g. a migration where the tool needs to find a pattern, LLMs are just great. So I'd love an F2 feature augmented with LLM capabilities, as sketched below.
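
One way that augmented F2 could look, as a sketch with hypothetical lsp_rename and llm_rewrite helpers (not real APIs): use the deterministic LSP rename whenever possible and fall back to the LLM only for pattern-based migrations.

    # Hedged sketch: deterministic rename first, LLM fallback for patterns.
    # lsp_rename and llm_rewrite are hypothetical callables, not real APIs.
    def smart_rename(lsp_rename, llm_rewrite, symbol, new_name, pattern=None):
        if pattern is None:
            # Plain 1-to-1 rename: the LSP's textDocument/rename is reliable.
            return lsp_rename(symbol, new_name)
        # Migration that needs pattern recognition: ask the LLM, but return
        # the proposed edits for review instead of applying them blindly.
        return llm_rewrite(
            f"Rename usages of {symbol} to {new_name}, "
            f"adapting each call site to this pattern: {pattern}"
        )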

gorbachev 12 hours ago|
I've found the AI assisted auto-completion to be very valuable. It's definitely sped up my coding and reduced the number of errors I make.

It reduces the context switching between coding and referencing docs quite a bit.

hollowturtle 8 hours ago||
Have you read my comment or are you a bot?
comrade1234 15 hours ago||
Often when I use it, I know that there is a way to do something and that I could figure it out by going through some API documentation and maybe finding some examples on the web... IOW I already have something in mind.

For example, I wanted to add a rate limiter to an API call with proper HTTP codes, etc. I asked the AI (in IntelliJ it used to be Claude by default, but they've since switched to Gemini) to generate one for me. The first version was not good, so I asked it to do it again with some changes.
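
Something in the spirit of what that might produce (my sketch, not the commenter's generated code, and in Python for consistency with the rest of the thread rather than the IntelliJ stack mentioned above): a minimal in-memory limiter that answers with HTTP 429 and a Retry-After hint when the budget is exceeded.

    # Hedged sketch of a sliding-window rate limiter; framework wiring omitted.
    import time

    class RateLimiter:
        def __init__(self, max_calls, per_seconds):
            self.max_calls = max_calls
            self.per_seconds = per_seconds
            self.calls = []                       # timestamps of recent calls

        def check(self):
            """Return (status_code, headers) for the next request."""
            now = time.monotonic()
            # Drop calls that have fallen outside the window.
            self.calls = [t for t in self.calls if now - t < self.per_seconds]
            if len(self.calls) >= self.max_calls:
                retry_after = self.per_seconds - (now - self.calls[0])
                return 429, {"Retry-After": str(max(1, round(retry_after)))}
            self.calls.append(now)
            return 200, {}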

What would take me a couple of hours or more took less than 10 minutes.

drooby 11 hours ago|
Exactly this.

I’m starting to believe that people who think AI-generated code is garbage actually don’t know how to code.

I hit about 10 years of coding experience right before AI hit the scene, which I guess makes me lucky. I know, with high confidence, what I want my code to look like, and I make the AI do it. And it does it damn well and damn fast.

I think I sit at a unique point for leveraging AI best. Too junior and you create “working monsters.” Meanwhile, Engineering Managers and Directors treat it like humans, but it’s not AGI yet.

keeda 15 hours ago||
Another study from 2024 with similar findings: https://www.mdpi.com/2076-3417/14/10/4115 -- a bit more preliminary, but conducted with undergrad students still learning to program, so I expect the effect would be even more pronounced.

This similarly indicates that reliance on LLMs correlates with degraded performance in critical problem-solving, coding, and debugging skills. On the bright side, using LLMs as a supplementary learning aid (e.g. clarifying doubts) showed no negative impact on critical skills.

This is why I'm skeptical of people excited about "AI native" junior employees coming in and revamping the workplace. I haven't yet seen any evidence that AI can be effectively harnessed without some domain expertise, and I'm seeing mounting evidence that relying too much on it hinders building that expertise.

I think those who wish to become experts in a domain would willingly eschew using AI in their chosen discipline until they've "built the muscles."

Bnjoroge 6 hours ago|
gotta say this is some impressive transparency for something that seems to somewhat intersect with their business objective.