Posted by ColinWright 11 hours ago

We mourn our craft (nolanlawson.com)
403 points | 530 comments | page 3
clarity_hacker 9 hours ago|
The blacksmith analogy is poetic but misleading. Blacksmithing was replaced by a process that needed no blacksmith at all. What's happening with code is closer to what synthesizers did to music — the instrument changed, the craft didn't die.

Musicians mourned synthesizers. Illustrators mourned Photoshop. Typesetters mourned desktop publishing. In every case the people who thrived weren't the ones who refused the new tool or the ones who blindly adopted it. They were the ones who understood that the tool absorbed the mechanical layer while the taste layer became more valuable, not less.

The real shift isn't from hand-coding to AI-coding. It's from "I express intent through syntax" to "I express intent through constraints and review." That's still judgment. Still craft. Just a different substrate.

What we're actually mourning is the loss of effort as a signal of quality. When anyone can generate working code, the differentiator moves upstream to architecture, to knowing what to build, to understanding why one approach fails at scale and another doesn't. Those are harder skills, not easier ones.

dazhbog 8 hours ago||
I don't get the hype. And I don't think we will reach peak AI coding performance any time soon.

Yes, watching an LLM spit out lots of code is for sure mesmerizing. Small tasks usually work OK, the code kinda compiles, so for some scenarios it can work out. BUT anyone serious about software development can see what a piece of CRAP the code is.

LLMs are great tools overall, great to bounce ideas off, great to get shit done. If you have a side project and no time, awesome. If your boss/company has a shitty culture and you just want to get the task done, great. Got a mundane coding task, hate coding, or your code won't run in a critical environment? Please, LLM that shit over 9000.

Remember though, an LLM is just a predictor, a noisy, glorified text predictor. Only when AI stops optimizing for short-term gains, has a built-in long-term memory architecture (similar to humans), AND can produce code at Linux-kernel quality and scale can we talk.

cmiles74 8 hours ago|
I have junior people on my team using Cursor and Claude, it’s not all great. Several times they’ve checked in code that also makes small yet breaking changes to queries. I have to watch out for random (unused) annotations in Java projects and then explain why the tools are wrong. The Copilot bot we use on GitHub slows down PR reviews by recommending changes that look reasonable yet either don’t really work or negatively impact performance.

Overall, I’d say AI tooling has maybe close to doubled the time I spend on PR reviews. More knowledgeable developers do better with these tools, but they also fall for the tooling’s false confidence from time to time.

I worry people are spending less time reading documentation or stepping through code to see how it works out of fear that “other people” are more productive.

libraryofbabel 9 hours ago||
From a blog post last month by the same author:

> Today, I would say that about 90% of my code is authored by Claude Code. The rest of the time, I’m mostly touching up its work or doing routine tasks that it’s slow at, like refactoring or renaming.

> I see a lot of my fellow developers burying their heads in the sand, refusing to acknowledge the truth in front of their eyes, and it breaks my heart because a lot of us are scared, confused, or uncertain, and not enough of us are talking honestly about it. Maybe it’s because the initial tribal battle lines have clouded everybody’s judgment, or maybe it’s because we inhabit different worlds where the technology is either better or worse (I still don’t think LLMs are great at UI for example), but there’s just a lot of patently unhelpful discourse out there, and I’m tired of it.

https://nolanlawson.com/2026/01/24/ai-tribalism/

If you're responding to this with angry anti-AI rants (or wild AI hype), you might want to go read that post.

cfloyd 10 hours ago||
I fall in the demographic discussed in the article, but I’ve approached this with as much pragmatism as I can muster. I view this as a tool to help improve me as a developer. Sure, there will be those of us who do not stay ahead of the curve (is that even possible?) and get swallowed up, but technology has had this effect on many careers in the past. They just change into something different and sometimes better. It’s about being willing to change with it.
andai 10 hours ago||
Some code is worth transcribing by hand — an ancient practice in writing, art and music.[0] Some isn't even worth looking at.

I find myself, ironically, spending more time typing out great code by hand now. Maybe some energy previously consumed by tedium has been freed up, or maybe the wacky machines brought a bit of the whimsy back into the process for me.

[0] And in programming, for the readers of Zed Shaw's books :)

reactordev 10 hours ago||
I’m in my 40s and it’s game over for my career. The grey in my hair makes it so that I never get past the first round. The history on my resume makes it so I’m lucky to get a round. The GPTs and Claude have fundamentally changed how I view work and frankly, I’m over it.

I’m in consulting now and it’s all the same crap. Enterprises want to “unleash AI” so they can fire people. Maximize profits. My nephews who are just starting their careers are blindly using these tools and accepting the PR if it builds. Not if it’s correct.

I’m in awe of what it can do, but I’m also not impressed with the quality of how it does it.

I’m fortunate to not have any debt so I can float until the world either wises up or the winds of change push me in a new direction.

I liked the satisfaction of building something “right” that was also “useful”. The current state of Opus and Codex can only pretend to do the latter.

6gvONxR4sf7o 10 hours ago||
I don't mourn coding for itself, since I've always kinda disliked that side of my work (numerical software, largely).

What I do mourn is the reliability. We're in this weird limbo where it's like rolling a die for every piece of work. If it comes up 1-5, I would have been better off implementing it myself. If it comes up 6, it'll get it done orders of magnitude faster than doing it by hand. Since the overall speedup is worthwhile, I have to try it every time, even if most of the time it fails. And of course it's a moving target, so I have to keep trying the things that failed yesterday because today's models are more capable.

zkmon 10 hours ago||
This makes me think about the craftsmen whose careers vanished or were transformed through the ages by industry, machines, etc. They did not have online voices to write thousands of blog posts every day. Nor did they have people who could read their woes online.
zeroonetwothree 9 hours ago|
Maybe not 1000s and not online, but we do have journals, articles, essays, and so on written by such people throughout history. And they had similar sentiments.
codeduck 10 hours ago||
I on the other hand await the coming of the Butlerian Jihad.
slibhb 10 hours ago|
I get where this is coming from. But at the same time, AI/LLMs are such an exciting development. As in "maybe I was wrong and the singularity wasn't bullshit". If nothing else, it's an interesting transition to live through.
zeroonetwothree 10 hours ago|
I agree, in the sense that any major change is "interesting". Doesn't mean they are all good though, many major changes have been bad historically. The overall net effect has been generally good, but you never know with any individual change.