So if my corporate overlords will have me talk to the soulless Claude robot all day long in a Severance-style setting, and fix its stupid bugs, but I get to keep my good salary, then I'll shed a small tear for my craft and get back to it. If not... well, then I'll be shedding a lot more tears... I guess.
I often have one programming project I do myself, on the side, and recently I've been using coding agents. Their average ability is no doubt impressive for what they are. But they also make mistakes that not even a recent CS graduate with no experience would ever make (e.g. I asked the agent for its guess as to why a test was failing; it suggested it might be due to a race condition with an operation that is started after the failing assertion). As a lead, if someone on the team is capable of making such a mistake even once, then that person can't really code, regardless of their average performance (just as someone who sometimes lands a plane at the wrong airport, or even crashes without there being a catastrophic condition outside their control, can't really fly, regardless of their average performance). "This is more complicated than we thought and would take longer than we expected" is something you hear a lot, but "sorry, I got confused" is something you never hear. A report by Anthropic last week said, "Claude will work autonomously to solve whatever problem I give it. So it's important that the task verifier is nearly perfect, otherwise Claude will solve the wrong problem." Yeah, that's not something a team lead faces. I wish the agent could work like a team of programmers, with me in my familiar role of project lead, but it doesn't.
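To show why that guess made no sense, here's a hypothetical reconstruction (the names are mine, not the actual code): the assertion fails before the background operation is ever started, so a race between the two is causally impossible.

```python
import threading

events = []

def background_sync():
    events.append("synced")

def test_processing_order():
    events.append("processed")
    assert events == ["processed", "validated"]  # the failing assertion

    # This thread starts only AFTER the assertion above has already failed,
    # so it cannot race with it -- which is why the agent's "race condition"
    # guess made no causal sense.
    t = threading.Thread(target=background_sync)
    t.start()
    t.join()
```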
The models do some things well. I believe that programming is an interesting mix of inductive and deductive thinking (https://pron.github.io/posts/people-dont-write-programs), and the models have the inductive part down. They can certainly understand what a codebase does faster than I can. But their deductive reasoning, especially when it comes to the details, is severely lacking (e.g. I asked the agent to document my code. It very quickly grasped the design and even inferred some important invariants, but when it saw an `assert` in one subroutine, it documented it as guarding a certain invariant. The invariant it named was real; it just wasn't the one the assertion was guarding). So I still (have to) work as a programmer when working with coding assistants, even if in a different way.
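The mix-up looked roughly like this (a made-up sketch; the names and domain are invented, not my codebase):

```python
class Account:
    def __init__(self, balance: int):
        self.balance = balance

    def withdraw(self, amount: int) -> None:
        # The agent documented this assert as guarding the (real, but
        # separately enforced) invariant "balance never goes negative".
        # What it actually guards is that the withdrawal amount is positive.
        assert amount > 0
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
```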
I've read about great successes at using coding agents in "serious" software, but what's common to those cases is that the people using the agents (Mitchell Hashimoto, antirez) are experts in the respective codebases. At the other end of the spectrum, people who aren't programmers can get some cool programs done, but I've yet to see anything produced this way (by a non-programmer) that I would call serious software.
I don't know what the future will bring, but at the moment, the craft isn't dead. When AI can really program, i.e. when the experience really is like that of a team lead, I don't think the death of programming will concern us, because once agents get to that point, they will likely also be able to replace the team lead. And middle management. And the CTO, the CFO, the CEO, and most of the users.
It gets hard to compare AI to humans. You can ask the AI to do things you would never ask a human to do, like retry 1,000 times until it works, assign 20 agents to the same problem with slightly different prompts, or redo the entire thing with different aesthetics.
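Something like this sketch, for instance (`run_agent` is a stand-in for whatever agent API you're calling, not a real library function):

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(prompt: str) -> str:
    # Stand-in for an actual agent invocation (an API call, a CLI run, etc.).
    ...

base = "Fix the failing test in test_parser.py."
variants = [f"{base} Attempt {i}: try a different approach." for i in range(20)]

# Fan out 20 attempts with slightly different prompts and collect them all --
# a reasonable ask for agents, an absurd one for a human colleague.
with ThreadPoolExecutor(max_workers=20) as pool:
    candidates = list(pool.map(run_agent, variants))
```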
No, they cannot. And an AI bro squeezing every talking point into a think piece while pretending to have empathy doesn't change that. You just want an exit, and you want it fast.
> and if you don’t believe me, wait six months
This reads as a joke nowadays.
Oh come on. 95% of the folks were gluing together shitty React components and slathering them with Tailwind classes.
If you really buy all that, you'd be part of the investor class that crashed various video game companies upon seeing Google put together a rather lame visual stunt and have their AI say, and I quote, because the above-the-fold AI response I never asked for has never been more appropriate to consult…
"The landscape of AI video game generation is experiencing a rapid evolution in 2025-2026, shifting from AI-assisted asset creation to the generation of entire interactive, playable 3D environments from text or image prompts. Leading initiatives like Google DeepMind's Project Genie and Microsoft's Muse are pioneering "world models" that can create, simulate physics, and render games in real-time."
And then you look at what it actually is.
Suuuure you will, unwanted AI Google search first response. Suuure you will.
guy who doesn't realize we still use hammers. This article was embarrassing to read.
* People still craft wood furniture from felled trees entirely with hand tools. Some even make money doing it by calling it 'artisanal'. Nothing is stopping anyone from coding in any historical mode they like. Toggle switches, punch cards, paper tape, burning EPROMs, VT100, whatever.
* OP seems to be lamenting he may not be paid as much to expend hours doing "sleepless wrangling of some odd bug that eventually relents to the debugger at 2 AM." I've been there. Sometimes I'd feel mild satisfaction on solving a rat-hole problem but more often, it was significant relief. I never much liked that part of coding and began to see it as a failure mode. I found I got bigger bucks - and had more fun - the better I got at avoiding rat-hole problems in the first place.
* My entire journey creating software from ~1983 to ~2020 was about making a thing that solved someone's problem better, cheaper or faster - and, on a good day, we managed all three at once. At various times I ended up doing just about every aspect of it from low-level coding to CEO and back again, sometimes in the same day. Every role in the journey had major challenges. Some were interesting, a few were enjoyable, but most were just "what had to get done" to drag the product I'd dreamt up kicking and screaming into existence.
* From my first teenage hobby project to my first cassette-tape in-a-baggie game to a $200M revenue SaaS for F100, every improvement in coding from getting a floppy disk drive to an assembler with macros to an 80 column display to version control, new languages, libraries, IDEs and LLMs just helped "making the thing exist" be easier, faster and less painful.
* Eventually, to create even harder, bigger and better things I had to add others coding alongside me. Stepping into the player-coach role amplified my ability to bring new things into existence. It wasn't much at first because I had no idea how to manage programmers or projects but I started figuring it out and slowly got better. On a good day, using an LLM to help me "make the thing exist" feels a lot like when I first started being a player-coach. The frustration when it's 'two steps forward, one back' feels like deja vu. Much like current LLMs, my first part-time coding helpers weren't as good as I was and I didn't yet know how to help them do their best work. But it was still a net gain because there were more of them than me.
* The benefits of having more coders helping me really started paying off once I started recruiting coders who were much better programmers than I ever was. Getting there took a little ego adjustment on my part, but what a difference! They had more experience, applied different patterns, knew to avoid problems I'd never seen and started coming up with some really good ideas. As LLMs get better and I get better at helping them help me, I hope that's where we're headed. It doesn't feel directionally different than the turbo-boost from my first floppy drive, macro-assembler, IDE or profiler, but the impact is already greater, with upside potential that's much higher still - and that's exciting.