Posted by nreece 10/27/2025
I would say that AI is better at coding than most developers. If I had the option to choose between a junior developer to assist me or Claude Code, I would choose Claude Code. That's a massive achievement that cannot be overstated.
It's a dream come true for someone with a focus on architecture like me. The coding aspect was dragging me down. LLMs work beautifully with vanilla JavaScript. The combined ability to generate code quickly and then test it quickly (no transpilation/bundling step) gives me fast iteration times. Combine that with my minimalist coding style and I get really good bang for my bucks/tokens.
The situation is unfortunate for junior developers. That said, I don't think it necessarily means that juniors should abandon the profession; they just need to refocus their attention on the things that AI cannot do well, like spotting contradictions and making decisions. Many developers are currently not great at this; maybe that's why LLMs (which are trained on average code) are not good at it either. Juniors have to think more critically than ever before; on the plus side, they are freed to think about things at a higher level of abstraction.
My observation is that LLMs are so far good news for neurodivergent developers. Bad news for developers who are overly mimetic in their thinking style and interests. You want to be different from the average developer whose code the LLM was trained on.
I've generally found the quality of its .NET output to be quite good. It trips up sometimes when linters ping it for rules that aren't normally enforced, but it does the job reasonably well.
The front-end JavaScript, though? It's an absolute genius and a complete menace at the same time. It'll write reams of code to get things just right, but with no regard for human maintainability.
I lost an entire session to the fact that it cheerfully did:
npm install fabric
npm install -D @types/fabric
Now that might look fine, but a human would have realised that the typings library describes a completely different, outdated API; the package was last updated six years ago. Claude, however, didn't realise this, and wrote a ton of code that would pass the unit tests but fail the type check. It'd run the type checker, rewrite everything to pass it, only for the code to fail the unit tests again.
Eventually it half gave up on typing and scattered (fabric as any) all over the place, so it just produced runtime exceptions instead.
I intervened when I realised what it was doing, and found the root cause of its problems.
It was a complete blind spot, because it simply trusted both the library and the type checker.
So yeah, if you want to snipe a vibe coder, suggest installing fabricjs with typings!
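The failure mode generalises: casting through any doesn't fix a typings mismatch, it just moves the error from compile time to runtime. A minimal TypeScript sketch of the mechanism, using hypothetical stand-in types rather than the real fabric or @types/fabric APIs:

```typescript
// Hypothetical stand-in for what an outdated typings package claims:
interface OldStyleCanvas {
  setWidth(width: number): void; // a method that no longer exists
}

// Hypothetical stand-in for what the installed library actually ships:
class ModernCanvas {
  width = 0; // now a plain property instead of a setter method
}

// Casting through `unknown`/`any` silences the type checker entirely...
const canvas = new ModernCanvas() as unknown as OldStyleCanvas;

try {
  canvas.setWidth(800); // ...so the mismatch only surfaces at runtime
} catch (err) {
  console.log("runtime failure:", (err as Error).name); // TypeError
}
```

The type checker is satisfied, the unit tests that exercise the real API are not, and vice versa, which is exactly the loop described above.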
Instead of just committing more often, make the agent write commits following the Conventional Commits spec (feat:, fix:, refactor:) and reference a specific item from your plan.md in the commit body. That way you'll get a self-documenting history, not just of the code but of the agent's thought process, which is priceless for debugging and refactoring later on.
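If you want to enforce that header format mechanically (in a commit hook, say), a tiny check is enough. A TypeScript sketch, where the list of allowed types and the regex are an illustrative subset of the Conventional Commits grammar, not the full spec:

```typescript
// Minimal check that a commit header matches `type(scope)?: subject`.
// The allowed types here are an illustrative subset.
const HEADER = /^(feat|fix|refactor|docs|chore|test)(\([\w-]+\))?: .+$/;

function isConventional(header: string): boolean {
  return HEADER.test(header);
}

console.log(isConventional("feat(auth): add token refresh")); // true
console.log(isConventional("fix: handle empty plan.md item")); // true
console.log(isConventional("updated some stuff"));             // false
```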
The human brain learns through mistakes, repetition, breaking down complex problems into simpler parts, and reimagining ideas. The hippocampus naturally discards memories that aren't strongly reinforced, so if you rely solely on AI, you're simply not going to remember much.
Vibe-coded apps eventually fall over as they are overwhelmed by 101 bad architectural decisions stacked on top of one another. You need someone technical to make those decisions to avoid this fate.
I had this experience with my co-founder where I was shipping features quickly and he got used to a certain pace of progress. Then we ended up with something like six different ways to perform a particular process, each slightly different. I had reused as much code as possible, all of it passing through the same function, but without tests it became challenging to avoid bugs and regressions. My co-founder could not understand why I was pushing back on implementing a particular feature that seemed very simple to him at a glance.
He could not believe why I was pushing back and thought I was just being stubborn. It took me about 30 minutes to explain, at a high level, all the technical considerations and trade-offs and how much complexity this new feature would introduce, and then he agreed with my point of view.
People who aren't used to building software cannot grasp the complexity. Beyond a certain point, it's like every time my co-founder asked me to do something related to a particular part of the code, I'd spend several minutes pointing out the logical contradictions in his own requirements. The non-technical person thinks about software development in a kind of magical way. They don't really understand what they're asking. This isn't even getting into technical constraints, which are another layer entirely.
I am “vibe” coding my way through but the real work is in my head, not in the Cursor IDE with Claude, unit tests, or live debugging. It was me who was learning, not the machine.
I'm sure it'll improve over time, but it won't be nearly as easy as making AI good at coding.
A while ago I discovered that Claude, left to its own devices, had been doing the LLM equivalent of Ctrl-C/Ctrl-V for almost every component it created in an ever-growing .NET/React/TypeScript side project, for months on end.
It was legitimately baffling seeing the degree to which it had avoided reusing literally any shared code in favor of updating the exact same thing in 19 places every time a color needed to be tweaked or something. The craziest example was a pretty central dashboard view with navigation tabs in a sidebar where it had been maintaining two almost identical implementations just to display a slightly different tab structure for logged in vs logged out users.
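For what it's worth, the pattern is easy to illustrate. A sketch, with hypothetical names rather than my actual codebase, of the duplicated tab lists versus a single parameterised source of truth:

```typescript
// Before: two near-identical structures maintained in parallel,
// so every tweak has to be applied in both (or in 19 places).
const loggedInTabs = ["Dashboard", "Reports", "Settings", "Logout"];
const loggedOutTabs = ["Dashboard", "Reports", "Login"];

// After: one definition, varied only by the thing that actually differs.
function sidebarTabs(loggedIn: boolean): string[] {
  const shared = ["Dashboard", "Reports"];
  return loggedIn ? [...shared, "Settings", "Logout"] : [...shared, "Login"];
}

console.log(sidebarTabs(true));  // logged-in tab structure
console.log(sidebarTabs(false)); // logged-out tab structure
```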
I've now been directing it to de-spaghetti things when I spot good opportunities and added more best practices to CLAUDE.md (with mixed results) so things are gradually getting more manageable, but it really shook my confidence in its ability to architect, well, anything on its own without micromanagement.
My experience is that the tools are like a smart intern. They are great at undergraduate level college skills but they don't really understand how things should work in the real world. Human oversight and guidance by a skilled and experienced person is required to avoid the kinds of problems that you experienced. But holy cow this intern can write code fast!
Having extensive planning and conversation sessions with the tool before letting it actually write or change any code is key to getting good results out of it. It also helps clarify my own understanding of things. Sometimes the result of the planning and conversing is that I manually make a small change and realize that the problem wasn't what I originally thought.
In some ways, this seems backwards. Once you have a demo that does the right thing, you have a spec, of sorts, for what's supposed to happen. Automated tooling that takes you from demo to production ready ought to be possible. That's a well-understood task. In restricted domains, such as CRUD apps, it might be automated without "AI".
It can recognize patterns in the codebase it is looking at and extrapolate from that.
Which is why generated code is filled with comments of the kind most often seen in tutorial-level code or in JavaScript (explaining the types of values).
Beyond that, performance drops rapidly and hallucinations rise.
Any company claiming they've replaced engineers with AI has done so in an attempt to cover up the real reasons they've gotten rid of a few engineers. "AI automating our work" sounds much better to investors than "We overhired and have to downsize".