Posted by vismit2000 1/30/2026

How AI assistance impacts the formation of coding skills (www.anthropic.com)
482 points | 347 comments
Kiboneu 1/30/2026|
When coding agents are unavailable I just continue to code myself or focus on architecture specification / feature descriptions. This really helps me retain my skills, though there is some "skew" (I'm not sure how to describe it, it's a feeling). To me, writing instructions for LLMs is pretty similar to doing the basic software architecture and specification work that a lot of people tend to skip (now there's no choice, and it's directly useful). When you skip specification for a sufficiently complex project, you likely introduce footguns along the way that slow down development significantly. So what would one expect when they run a bunch of agents based on a single-sentence prompt?!

Like the architecture work and writing good-quality specs, working on the code yourself has a guiding effect on the coding agents. So in a way, it also helps clarify items that may be more ambiguous in the spec. If I write some of the code myself, the agent makes fewer assumptions about my intent when it touches that code (especially for things I didn't specify in the architecture or that are difficult to articulate in natural language).

In small iterations, the agent checks back for each task. Because I spend a lot of time on architecture, I already have a model in my mind of how small code snippets and features will connect.

Maybe my comfort with reviewing AI code comes from spending a large chunk of my life reverse engineering human code, understanding it to the extent that complex bugs and vulnerabilities emerge. I've spent a lot of time with different styles of code, from awful to "this programmer must have a permanent line to god to do this so elegantly". The models are trained on that, so I have a little cluster of neurons in my head that's shaped closely enough to follow the model's shape.

devnonymous 1/30/2026||
From the "Discussion" section:

> This suggests that as companies transition to more AI code writing with human supervision, humans may not possess the necessary skills to validate and debug AI-written code if their skill formation was inhibited by using AI in the first place.

I'm reminded of "Kernighan's lever":

> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?

AI is writing code in the cleverest way possible, which then introduces cognitive load for anyone who hasn't encountered these patterns previously. Although one might say that AI would also assist in the debugging, you run the risk of adding further complexity in the process of 'fixing' the bugs and before you know it you have a big stinking ball of mud.
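
To make Kernighan's point concrete, here's a toy sketch (my own illustration, nothing from the article): the same logic written "cleverly" and written plainly. When the clever version misbehaves, there's nowhere obvious to put a breakpoint.

    # "Clever": dense one-liner, hard to inspect when it breaks.
    def dedupe_clever(items):
        return list(dict.fromkeys(x.strip().lower() for x in items if x and x.strip()))

    # Plain: each step is visible, so each step can be debugged.
    def dedupe_plain(items):
        seen, result = set(), []
        for item in items:
            if not item:
                continue
            cleaned = item.strip().lower()
            if not cleaned or cleaned in seen:
                continue
            seen.add(cleaned)
            result.append(cleaned)
        return result

    print(dedupe_plain(["  Foo", "foo", "", "Bar "]))  # ['foo', 'bar']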

Terretta 1/30/2026|
> AI is writing code in the cleverest way possible …

On the contrary, without mastery guiding, AI writes code in the most boilerplate way possible, even if that means compromising logic or functionality.

> … which then introduces cognitive load for anyone who hasn't encountered these patterns previously

And for those who have. This is the enterprise Java effect. The old trope is that Java was designed to make all devs median, all producing the same median code, so enterprises don't have to worry about individual devs: it's all the same bowl of unflavored oatmeal.

When you read code from a vibe-coding novice, it's difficult to grok the intended logic, because it's buried within chunks of enterprise-pattern boilerplate, as if the solution were regex'd at random from StackOverflow until some random combination happened to pass a similarly randomized bag of tests.

The cognitive load to reverse this mess into a clean, clear expression of logic is very high, whether a human or a machine "coded" this way.

In both cases, the antidote is caring for craft and mastery first, with an almost pseudocode clarity in expressing the desired outcome.

OK, but -- even this doesn't guarantee the result one wants.

Because even if the master writes the code themselves, they may find their intent was flawed. They expressed the intent clearly, but their intention wasn't helpful for the outcome needed.

This is where rapid iteration comes in.

A master of software engineering may be able to iterate on intent faster with the LLM typing the code for them than they can type and iterate on their own. With parallel work sessions, they may be able to explore intention space faster to reach the outcome.

Each seasonal improvement in LLM models' ability to avoid implementation errors while iterating this way makes the software developer who has mastery, but lacks perfect pre-visualization of intent, more productive. Less time cleaning up novice coding errors, more cycles per hour iterating the design in their head.

This type of productivity gain has been meaningful for this type of developer.

At the same time, the "chain of thought" or "reasoning" loops being built into the models are reaching into this intention space, covering more of the prompt engineering space for devs with less mastery, who are unable to express, much less iterate on, intent. This lets vibe "coders" imagine their productivity is improving as well.

If the output of the vibe coder (usually product managers, if you look closely) is considered to be something like a living mockup and not a product, then actual software engineers can take that and add the *-ilities (supportability, maintainability, etc.) that the vibe coder never specified, whether vibing or product managing.

Using a vibed prototype can accelerate the transfer of product conception from the PM to the dev team more effectively than the PM just yelling at a dev tech lead that the devs haven't understood what the PM says the product should be. Devs can actually help this process by ensuring the product "idea" person is armed with a claude.md to orient the pattern-medianizer machine toward the below-the-waterline stuff engineering teams know is 80% of the cost-through-time.
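
A rough sketch of what such a claude.md might contain (purely illustrative; the module names and rules here are hypothetical, not from any real project):

    # CLAUDE.md (illustrative sketch; names are made up)
    ## Below-the-waterline constraints for prototypes
    - Route all service calls through the existing api client wrapper (auth, retries, tracing); don't hand-roll HTTP calls.
    - No new runtime dependencies without calling them out explicitly.
    - Use the shared logger module; no ad-hoc print/console logging in committed code.
    - Any schema change needs a migration file and a rollback note.
    ## Supportability / maintainability
    - Every new endpoint gets a health check and a metrics counter.
    - Prefer boring, readable code over clever abstractions; the eng team will rework this.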

There's not a lot of discussion of prototype vibing being a new way for product owners and engineering teams to gain clarity above the waterline, or of whether it's productive. Here's a dirty secret: it's more productive in that it's more protective of the rarer skillset's time. The vibe time wasted is paid by the product owner (hallelujah), and the eng team can start with a prototype the product owner iterated on while getting their intent sorted out, so now engineering's iterations shift from intent (PM headspace) to implementation (eng headspace).

Both loops were tightened.

> you run the risk of adding further complexity in the process of 'fixing' the bugs and before you know it you have a big stinking ball of mud.

Iterating where the problem lies, uncoupling these separate intention and iteration loops, addresses this paradox.

vessenes 1/30/2026||
@dang the title here is bait. I’d suggest the paper title: “Anthropic: How AI Impacts Skill Formation”
fragmede 1/30/2026|
This isn't Twitter. Email hn@ycombinator.com
vessenes 4 days ago||
Have you heard of K I B O?
baalimago 1/30/2026||
I've noticed this as well. I delegate to agentic coders tasks that I need to have done efficiently, which I could do myself but lack the time to do, or tasks in areas I simply don't care much for, in languages I don't like very much, etc.
Wojtkie 7 days ago||
This is interesting. I started teaching myself Polars and used Claude to help me muscle through some documentation in order to meet deadlines on a project.

I found that Claude wasn't too great at it at first, and returned a lot of hallucinated methods, or methods that exist in Pandas but not Polars. I chalk this up to context blurring and to there probably being a lot less Polars code in the training corpus.

I found it most useful for quickly pointing me to the right documentation, where I'd learn the right implementation and then use it. It was terrible for the code, but helpful as a glorified doc search.
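
The failure mode was usually a Pandas idiom transplanted onto Polars. A small illustrative example (my own sketch, assuming a recent Polars version; the data is made up):

    import polars as pl

    df = pl.DataFrame({"city": ["NYC", "NYC", "LA"], "sales": [10, 20, 5]})

    # The kind of thing that got suggested (Pandas idiom; current Polars
    # DataFrames have no .groupby method, so this raises AttributeError):
    # df.groupby("city")["sales"].sum()

    # Polars equivalent: group_by plus an expression.
    out = df.group_by("city").agg(pl.col("sales").sum())
    print(out)

(Older Polars releases did spell it groupby, which probably doesn't help the model keep the two libraries apart.)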

i_love_retros 1/30/2026||
I don't understand how so many people can be OK with inflicting brain rot on themselves and basically engineering themselves out of a career.

I use a web UI to chat with AI and do research, and even then I sometimes have to give up and accept that it won't provide the best solution, the one I know exists and am just too lazy to flesh out on my own. And so to the official docs I go.

But the coding tools? I'm sorry, but they constantly disappoint me. Especially the agents. In fact the agents fucking scare me. Thank god Copilot prompts me before running a terminal command. The other day I asked it about a Cypress test function and the agent asked if it could run some completely unrelated gibberish Python code in my terminal. That's just one of many weird things it's done.

My colleagues vibe code things because they don't have experience in the tech we use on our project, and it gets passed to me to review with "I hope you understand this". Our manager doesn't care because he's all in on AI and just wants the project to meet deadlines because he's scared for his job, and it's the same at each level up the org chart from him. If this is what software development is now then I need to find another career, because it's pathetic, boring, and stressful for anyone with integrity.

shayonj 1/30/2026||
Being able to debug and diagnose difficult problems and distributed systems still remains a key skill, at least until Opus or some other model gets better at it.

I think being intentional about learning while using AI to be productive is where the stitch is, at least for folks earlier in their career. I touch on that in my post here as well: https://www.shayon.dev/post/2026/19/software-engineering-whe...

discreteevent 1/30/2026||
"The learning loop and LLMs" [1] is well worth reading, and the Anthropic blog post above concurs with it in a number of places. It's fine to use LLMs as an assistant to understanding, but your goal as an engineer should always be understanding, and the only real way to get that is to struggle to make things yourself.

[1] https://martinfowler.com/articles/llm-learning-loop.html

epolanski 1/30/2026||
> Importantly, using AI assistance didn’t guarantee a lower score. How someone used AI influenced how much information they retained. The participants who showed stronger mastery used AI assistance not just to produce code but to build comprehension while doing so

This is my experience exactly. I have never learned as much as I do with AI.

It's interesting that the numbers show most users degrade, but I hate the resulting blanket assumption that no one can use it properly to learn faster.

grahamlee 1/30/2026|
I’ve been making the case (e.g. https://youtu.be/uL8LiUu9M64?si=-XBHFMrz99VZsaAa [1]) that we have to be intentional about using AI to augment our skills, rather than outsourcing understanding: great to see Anthropic confirming that.

[1] Plug: this is a video about the Patreon community I founded to do exactly that. Just want to make sure you're aware that's the pitch before you go ahead and watch.
