Posted by bigwheels 1/26/2026

A few random notes from Claude coding quite a bit last few weeks (twitter.com)
https://xcancel.com/karpathy/status/2015883857489522876
911 points | 847 comments
daxfohl 1/27/2026|
I worry about the "brain atrophy" part, as I've felt this too. And not just atrophy, but even more so I think it's evolving into "complacency".

Like there have been multiple times now where I wanted the code to look a certain way, but it kept pulling back to the way it wanted to do things. Like if I had stated certain design goals recently it would adhere to them, but after a few iterations it would forget again and go back to its original approach, or mix the two, or whatever. Eventually it was easier just to quit fighting it and let it do things the way it wanted.

What I've seen is that after the initial dopamine rush of being able to do things that would have taken much longer manually, a few iterations of this kind of interaction have slowly led to a disillusionment with the whole project, as AI keeps pushing it in a direction I didn't want.

I think this is especially true if you're trying to experiment with new approaches to things. LLMs are, by definition, biased by what was in their training data. You can shock them out of it momentarily, which is awesome for a few rounds, but over time the gravitational pull of what's already in their latent space becomes inescapable. (I picture it as working like a giant Sierpinski triangle).

I want to say the end result is very akin to doom scrolling. Doom tabbing? It's like, yeah I could be more creative with just a tad more effort, but the AI is already running and the bar to seeing what the AI will do next is so low, so....

striking 1/27/2026||
It's not just brain atrophy, I think. I think part of it is that we're actively making a tradeoff to focus on learning how to use the model rather than learning how to use our own brains and work with each other.

This would be fine if not for one thing: the meta-skill of learning to use the LLM depreciates too. Today's LLM is gonna go away someday, the way you have to use it will change. You will be on a forever treadmill, always learning the vagaries of using the new shiny model (and paying for the privilege!)

I'm not going to make myself dependent, let myself atrophy, run on a treadmill forever, for something I happen to rent and can't keep. If I wanted a cheap high that I didn't mind being dependent on, there's more fun ones out there.

raducu 1/28/2026|||
> let myself atrophy, run on a treadmill forever, for something

You're lucky to afford the luxury not to atrophy.

It's been almost 4 years since my last software job interview and I know the drill when it comes to preparing for one.

Long before LLMs, my skills were already atrophying naturally in my day job.

I remember the good old days of J2ME, of writing everything from scratch. Or writing a graph editor for university, or a speculative Huffman coding algorithm.

That kept me sharp.

But today I feel like I'm living in that Netflix series about people being in Hell and the Devil tricking them into believing they're in Heaven and tormenting them: how on planet Earth do I keep sharp with java, streams, virtual threads, rxjava, tuning the jvm, react, kafka, kafka streams, aws, k8s, helm, jenkins pipelines, CI-CD, ECR, istio issues, in-house service discovery, hierarchical multi-regions, metrics and monitoring, autoscaling, spot instances and multi-arch images, multi-az, reliable and scalable yet as cheap as possible, yet as cloud native as possible, hazelcast and distributed systems, low level postgresql performance tuning, apache iceberg, trino, various in-house frameworks and idioms over all of this? Oh, and let's not forget the business domain, coding standards, code reviews, mentorships and organizing technical events. Also, it's 2026 so nobody hires QA or scrum masters anymore, so take on those hats as well.

So LLMs it is, the new reality.

aftergibson 1/28/2026|||
This is a very good point. Years ago working in a LAMP stack, the term LAMP could fully describe your software engineering, database setup and infrastructure. I shudder to think of the acronyms for today's tech stacks.
oldandboring 1/28/2026||
And yet many of the same people who lament the tooling bloat of today will, in a heartbeat, make lame jokes about PHP. Most of them aren't even old enough to have ever done anything serious with it, or seen it in action beyond Wordpress or some spaghetti-code one-pager they had to refactor at their first job. Then they show up on HN with a vibe-coded side project or blog post about how they achieved a 15x performance boost by inventing server-side rendering.
aftergibson 1/29/2026||
Highly relevant username!
oldandboring 4 days ago||
I try :)
carimura 1/28/2026||||
Ya I agree it's totally crazy.... but, do most app deployments need even half that stuff? I feel like most apps at most companies can just build an app and deploy it using some modern paas-like thing.
KronisLV 1/28/2026|||
> I feel like most apps at most companies can just build an app and deploy it using some modern paas-like thing.

Most companies (in the global, not SV sense) would be well served by an app that runs in a Docker container in a VPS somewhere and has PostgreSQL and maybe Garage, RabbitMQ and Redis if you wanna get fancy, behind Apache2/Nginx/Caddy.

But obviously that’s not Serious Business™ and won’t give you zero downtime and high availability.

Though tbh most mid-size companies would also be okay with Docker Swarm or Nomad and the same software clustered and running behind HAProxy.

But that wouldn’t pad your CV so yeah.

ryandrake 1/28/2026||
> Most companies (in the global, not SV sense) would be well served by an app that runs in a Docker container in a VPS somewhere and has PostgreSQL and maybe Garage, RabbitMQ and Redis if you wanna get fancy, behind Apache2/Nginx/Caddy.

That’s still too much complication. Most companies would be well served by a native .EXE file they could just run on their PC. How did we get to the point where applications by default came with all of this shit?

direwolf20 1/28/2026|||
When I was in primary school, the librarian used a computer this way, and it worked fine. However, she had to back it up daily or weekly onto a stack of floppy disks, and if she wanted to serve the students from the other computer on the other side of the room, she had to restore the backup on there, and remember which computer had the latest data, and only use that one. When doing a stock–take (scanning every book on the shelves to identify lost books), she had to bring that specific computer around the room in a cart. Such inconveniences are not insurmountable, but they're nice to get rid of. You don't need to back up a cloud service and it's available everywhere, even on smaller devices like your phone.

There's an intermediate level of convenience. The school did have an IT staff (of one person) and a server and a network. It would be possible to run the library database locally in the school but remotely from the library terminals. It would then require the knowledge of the IT person to administer, but for the librarian it would be just as convenient as a cloud solution.

badsectoracula 1/28/2026||
I think the 'more than one user' alternative to a 'single EXE on a single computer' isn't the multilayered pie of things that KronisLV mentioned, but a PHP script[0] on an apache server[0] you access via a web browser. You don't even need a dedicated DB server as SQLite will do perfectly fine.

[0] or similarly easy to get running equivalent
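
To illustrate the shape of it - one script, an embedded database, no separate DB server - here's a minimal sketch, with Python's standard library standing in purely for illustration (the file name and table are made up):

    import sqlite3
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # The whole "database server" is just a file sitting next to the script.
    db = sqlite3.connect("library.db")
    db.execute("CREATE TABLE IF NOT EXISTS books (id INTEGER PRIMARY KEY, title TEXT)")
    db.commit()

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            rows = db.execute("SELECT title FROM books ORDER BY title").fetchall()
            body = ("\n".join(title for (title,) in rows) or "no books yet").encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.end_headers()
            self.wfile.write(body)

    # Every browser in the building can reach this one machine on port 8080.
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()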

KronisLV 1/29/2026||
> but a PHP script[0] on an apache server[0] you access via a web browser

I've seen plenty of those as well - nobody knows exactly how things are set up, sometimes dependencies are quite outdated and people are afraid to touch the cPanel config (or however it's set up). Not that you can't do good engineering with enough discipline, it's just that Docker (or most methods of containerization) limits the blast radius when things inevitably go wrong and at least tries to give you some reproducibility.

At the same time, I think that PHP can be delightfully simple and I do use Apache2 myself (mod_php was actually okay, but PHP-FPM also isn't insanely hard to set up), it's just that most of my software lives in little Docker containers with a common base and a set of common tools, so they're decoupled from the updates and config of the underlying OS. I've moved the containers (well, data + images) across servers with no issues when needed and also reinstalled OSes and spun everything right back up.

Kubernetes is where dragons be, though.

danans 1/28/2026||||
> That’s still too much complication. Most companies would be well served by a native .EXE file they could just run on their PC

I doubt that.

As software has grown from solving simple personal computing problems (write a document, create a spreadsheet) to solving organizational problems (sharing and communication within and without the organization), it has necessarily spread beyond the .exe file and local storage.

That doesn't give a pass to overly complex applications doing a simple thing - that's a real issue - but to think most modern company problems could be solved with just a local executable program seems off.

yetihehe 7 days ago||
It can be like that, but then IT and users complain about having to update this .exe on each computer when you add new functionality or fix some errors. When you solve all major pain points with a simple app, "updating the app" becomes top pain point, almost by definition.
KronisLV 1/28/2026|||
> How did we get to the point where applications by default came with all of this shit?

Because when you give your clients instructions on how to set up the environment, they will ignore some of them and then they install OracleJDK while you have tested everything under OpenJDK and you have no idea why the application is performing so much worse in their environment: https://blog.kronis.dev/blog/oracle-jdk-and-openjdk-compatib...

It's not always trivial to package your entire runtime environment unless you wanna push VM images (which is in many ways worse than Docker), so Docker is like the sweet spot for the real world that we live in - a bit more foolproof, the configuration can be ONE docker-compose.yml file, it lets you manage resource limits without having to think about cgroups, as well as storage and exposed ports, custom hosts records and all the other stuff the human factor in the process inevitably fucks up.

And in my experience, shipping a self-contained image that someone can just run with docker compose up is infinitely easier than trying to get a bunch of Ansible playbooks in place.

If your app can be packaged as an AppImage or Flatpak, or even a fully self contained .deb then great... unless someone also wants to run it on Windows or vice versa or any other environment that you didn't anticipate, or it has more dependencies than would be "normal" to include in a single bundle, in which case Docker still works at least somewhat.

Software packaging and dependency management sucks, unless we all want to move over to statically compiled executables (which I'm all for). Desktop GUI software is another can of worms entirely, too.

oldandboring 1/28/2026|||
When I come into a new project and I find all this... "stuff" in use, often what I later find is actually happening with a lot of it is:

- nobody remembers why they're using it

- a lot of it is pinned to old versions or the original configuration because the overhead of maintaining so much tooling is too much for the team and not worth the risk of breaking something

- new team members have a hard time getting the "complete picture" of how the software is built and how it deploys and where to look if something goes wrong.

dullcrisp 1/28/2026|||
That was on NBC.
daxfohl 1/27/2026||||
Businesses too. For two years it's been "throw everything into AI." But now that shit is getting real, are they really feeling so coy about letting AI run ahead of their engineering team's ability to manage it? How long will it be until we start seeing outages that just don't get resolved because the engineers have lost the plot?
scorpioxy 1/28/2026|||
From what I am seeing, no one is feeling coy simply because of the cost savings that management is able to show the higher-ups and shareholders. At that level, there's very little understanding of anything technical, and outages or bugs will simply get a "we've asked our technical resources to work on it". But everyone understands that spending $50 when you were spending $100 is a great achievement. That is, if you don't stop to think about any downsides. Said management will then take the bonuses and disappear before the explosions start, with their resume glowing about all the cost savings and team leadership achievements. I've experienced this first hand very recently.
daxfohl 1/28/2026|||
Of all the looming tipping points whereby humans could destroy the fabric of their existence, this one has to be the stupidest. And therefore the most likely.
bgilroy26 1/28/2026|||
There really ought to be a class of professionals like forensic accountants who can show up in a corrupted organization and do a post mortem on their management of technical debt
throwup238 1/28/2026|||
How long until "the LLM did it" is just as effective as "AWS is down, not my fault"?
sarchertech 1/28/2026|||
Never because the only reason that works with Amazon is that everyone is down at the exact same time.
direwolf20 1/28/2026||
Everyone will suffer from slop code at the same time.
sarchertech 1/28/2026||
Yeah but that's very different from an AWS outage. Everyone's website being down for a day every year or 2 is something that it's very hard to take advantage of as a competitor. That's not true for software that is just terrible all the time.
draxil 1/28/2026||||
This to me is the point... LLMs can't be responsible for things. Responsibility sits with a human.
taylorius 1/28/2026|||
Why can LLMs not be responsible for things? (genuine question - I'm not certain myself).
pvab3 1/28/2026||
because it doesn't have any skin in the game and can't be punished, and can't be rewarded for succeeding. Its reputation, career, and dignity are nonexistent.
taylorius 1/29/2026|||
On the contrary - the LLM has had its own version of "skin in the game" through the whole of its training. Reinforcement learning is nothing but that. Why is that less real than putting a person in prison? Is it because of the LLM itself, or because you don't trust the people selling it to you?
oblio 7 days ago||
Are you claiming that LLMs are... sentient? Bold claim, Taylor.
direwolf20 1/28/2026|||
This doesn't seem to have stopped anyone before.
pvab3 1/28/2026||
Stopped anyone from doing what? Assigning responsibility to someone with nothing to lose, no dignity or pride, and immune from financial or social injury?
shaftoe 1/28/2026||||
If you’re just a gladhander for an algorithm, what are you really needed for?
locknitpicker 1/28/2026||||
> It's not just brain atrophy, I think. I think part of it is that we're actively making a tradeoff to focus on learning how to use the model rather than learning how to use our own brains and work with each other.

I agree with the sentiment but I would have framed it differently. The LLM is a tool, just like code completion or a code generator. Right now we focus mainly on how to use a tool, the coding agent, to achieve a goal. This takes place at a strategic level. Prior to the inception of LLMs, we focused mainly on how to write code to achieve a goal. This took place at a tactical level, and required making decisions and paying attention to a multitude of details. With LLMs our focus shifts to a higher-level abstraction. Also, operational concerns change. When writing and maintaining code yourself, you focus on architectures that help you simplify some classes of changes. When using LLMs, your focus shifts to building context and helping the model implement its changes effectively. The two goals seem related, but are radically different.

I think a fairer description is that with LLMs we stop exercising some skills that are only required or relevant if you are writing your code yourself. It's like driving with an automatic transmission vs manual transmission.

bandrami 1/28/2026|||
Previous tools have been deterministic and understandable. I write code with emacs and can at any point look at the source and tell you why it did what it did. But I could produce the same program with vi or vscode or whatever, at the cost of some frustration. But they all ultimately transform keystrokes to a text file in largely the same way, and the compiler I'm targeting changes that to asm and thence to binary in a predictable and visible way.

An LLM is always going to be a black box that is neither predictable nor visible (the unpredictability is necessary for how the tool functions; the invisibility is not but seems too late to fix now). So teams start cargo culting ways to deal with specific LLMs' idiosyncrasies and your domain knowledge becomes about a specific product that someone else has control over. It's like learning a specific office suite or whatever.

TeMPOraL 1/28/2026|||
> An LLM is always going to be a black box that is neither predictable nor visible (the unpredictability is necessary for how the tool functions; the invisibility is not but seems too late to fix now)

So basically, like a co-worker.

That's why I keep insisting that anthropomorphising LLMs is to be embraced, not avoided, because it gives much better high-level, first-order intuition as to where they belong in a larger computing system, and where they shouldn't be put.

bandrami 1/28/2026|||
> So basically, like a co-worker.

Arguably, though I don't particularly need another co-worker. Also co-workers are not tools (except sometimes in the derogatory sense).

draxil 1/28/2026|||
Sort of, except it seems the more the co-worker does the job, the more my ability to understand atrophies... So soon we'll all be that annoyingly ignorant manager saying, "I don't know, I want the button to be bigger". Yay?
ben_w 1/29/2026||
Only if we're lucky and the LLMs cease being replaced with improved models.

Claude has already shown us people who openly say "I don't code and yet I managed this"; right now the command line UI will scare off a lot of people, and people using the LLMs still benefit from technical knowledge and product design skills, if the tools don't improve we keep that advantage…

…but how long will it be before the annoyingly ignorant customer skips the expensive annoyingly ignorant manager along with all us expensive developers, and has one of the models write them a bespoke solution for less than the cost of off-the-shelf shrink-wrapped DVDs from a discount store?

Hopefully that extra stuff is further away than it seems, hopefully in a decade there will be an LLM version of this list: https://en.wikipedia.org/wiki/List_of_predictions_for_autono...

But I don't trust to hope. It has forsaken these lands.

TeMPOraL 1/29/2026||
> using the LLMs still benefit from technical knowledge and product design skills, if the tools don't improve we keep that advantage…

I don't think we will, because many of us are already asking LLMs for help/advice on these, so we're already close to the point where LLMs will be able to use these capabilities directly, instead of just for helping us drive the process.

ben_w 1/29/2026||
Indeed, but the output of LLMs today for these kinds of tasks is akin to a junior product designer, a junior project manager, a junior software architect etc.

For those of us who are merely amateur at any given task, LLMs raising us to "junior" is absolutely an improvement. But just as it's possible to be a better coder than an LLM, if you're a good PM or QA or UI/UX designer, you're not obsolete yet.

ryanjshaw 1/28/2026|||
> and can at any point look at the source and tell you why it did what it did

Even years later? Most people can't unless there are good comments and design. Which AI can replicate, so if we need to do that anyway, how is AI especially worse than a human looking back at code written poorly years ago?

bandrami 1/28/2026||
I mean, Emacs's oldest source files are like 40 years old at this point, and yes they are in fact legible? I'm not sure what you're asking -- you absolutely can (and if you use it long enough, will) read the source code of your text editor.
draxil 1/28/2026||
Well especially the lisp parts!
koiueo 1/28/2026|||
The little experience I have with LLMs confidently shows that LLMs are much better at navigating and modifying a well structured code base. And they struggle, sometimes to a point where they can't progress at all, if tasked to work on bad code. I mean, the kind of bad you always get after multiple rounds of unsupervised vibe coding.
pards 1/28/2026||||
> I happen to rent and can't keep

This is my fear - what happens if the AI companies can't find a path to profitability and shut down?

thevillagechief 1/28/2026|||
Don't threaten us with a good time.
dyauspitr 1/28/2026||
That’s not a good time, I love these things. I’ve been able to indulge myself so much. Possibly good for job security but would suck in every other way.
satvikpendem 1/28/2026||||
This is why local models are so important. Even if the non-local ones shut down, and even if you can't run local ones on your own hardware, there will still be inference providers willing to serve your requests.
MillionOClock 1/28/2026|||
Recently I was thinking about how some (expensive) consumer electronics like the Mac Studio can run pretty powerful open source models with pretty efficient power consumption, could pretty easily run on private renewable energy, and are on most (all?) fronts much more powerful than the original ChatGPT, especially if connected to a good knowledge base. Meaning that aside from very extreme scenarios, I think it is safe to say there will always be a way not to go back to how we used to code, as long as we can provide the right hardware and energy. Of course, personally I think we will never need to go to such extreme ends... despite knowing people who seem to seriously think developed countries will run out of electricity one day, which, while I reckon there might be tensions, seems like a laughable idea IMHO.
Aurornis 1/28/2026||||
> the meta-skill of learning to use the LLM depreciates too. Today's LLM is gonna go away someday, the way you have to use it will change. You will be on a forever treadmill, always learning the vagaries of using the new shiny model (and paying for the privilege!)

I haven’t found this to be true at all, at least so far.

As models improve I find that I can start dropping old tricks and techniques that were necessary to keep old models in line. Prompts get shorter with each new model improvement.

It’s not really a cycle where you’re re-learning all the time or the information becomes outdated. The same prompt structure techniques are usually portable across LLMs.

rubenflamshep 1/28/2026||
Interesting, I've experienced the opposite in certain contexts. CC is so hastily shipped that new versions often destabilize existing workflows. E.g. people were raving about the new user prompt tools that CC used to get more context, but they messed up my simple git slash commands.
infecto 1/28/2026||||
I think you have to be aware of how you use any tool but I don’t think this is a forever treadmill. It’s pretty clear to me since early on that the goal is for you the user to not have to craft the perfect prompt. At least for my workflow it’s pretty darn close to that for me.
Draiken 1/28/2026||
If it ever gets there, then anyone can use it and there's no "skill" to be learned at all.

Either it will continue to be this very flawed non-deterministic tool that requires a lot of effort to get useful code out of it, or it will be so good it'll just work.

That's why I'm not gonna heavily invest my time into it.

infecto 1/29/2026||
Good for you. Others like myself find the tools incredibly useful. I am able to knock out code at a higher cadence and it’s meeting a standard of quality our team finds acceptable.
Draiken 7 days ago||
Looking forward to those 10x improvements finally showing up somewhere. Any day now!

Jokes aside, I never said it's not useful, but most definitely it's not even close to all this hype.

infecto 7 days ago||
> very flawed non-deterministic tool that requires a lot of effort to get useful code out of it

We are all different but I think most of us with open minds are the flaw in your statement.

rurp 1/28/2026||||
I have deliberately moderated my use of AI in large part for this reason. For a solid two years now I've been constantly seeing claims of "this model/IDE/Agent/approach/etc is the future of writing code! It makes me 50x more productive, and will do the same for you!" And inevitably those have all fallen by the wayside and been replaced by some new shiny thing. As someone who doesn't get intrinsic joy out of chasing the latest tech fad, I usually move along and wait to see if whatever is being hyped really starts to take over the world.

This isn't to say LLMs won't change software development forever, I think they will. But I doubt anyone has any idea what kind of tools and approaches everyone will be using 5 or 10 years from now, except that I really doubt it will be whatever is being hyped up at this exact moment.

apercu 1/28/2026||
HN is where I keep hearing the “50× more productive” claims the most. I’ve been reading 2024 annual reports and 2025 quarterlies to see whether any of this shows up on the other side of the hype.

So far, the only company making loud, concrete claims backed by audited financials is Klarna and once you dig in, their improved profitability lines up far more cleanly with layoffs, hiring freezes, business simplification, and a cyclical rebound than with Gen-AI magically multiplying output. AI helped support a smaller org that eliminated more complicated financial products that have edge cases, but it didn’t create a step-change in productivity.

If Gen-AI were making tech workers even 10× more productive at scale, you’d expect to see it reflected in revenue per employee, margins, or operating leverage across the sector.

We’re just not seeing that yet.

laserlight 1/28/2026|||
I have friends who make such 50x productivity claims. They are correct if we define productivity as creating untested apps and games and their features that will never ship --- or be purchased, even if they were to ship. Thus, “productivity” has become just another point of contention.
apercu 1/28/2026||
100% agree. There are far more half-baked, incomplete "products" and projects out there now that it is easier to generate code. Generously, that doesn't necessarily equate to productivity.

I agree that the last 10% of a project is the hardest part, and that's the part that Gen-AI sucks at (hell, maybe the last 30%).

sarchertech 1/28/2026|||
> If Gen-AI were making tech workers even 10× more productive at scale, you’d expect to see it reflected in revenue per employee, margins, or operating leverage across the sector.

If we’re even just talking a 2x multiplier, it should show up in some externally verifiable numbers.

apercu 1/28/2026||
I agree, and we might be seeing this but there is so much noise, so many other factors, and we're in the midst of capital re-asserting control after a temporary loss of leverage which might also be part of a productivity boost (people are scared so they are working harder).

The issue is that I'm not a professional financial analyst and I can't spend all day on comps so I can't tell through the noise yet if we're seeing even 2x related to AI.

But, if we're seeing 10x, I'd be finding it in the financials. Hell, a blind squirrel would, and it's simply not there.

sarchertech 1/28/2026||
Yes, I think there are many issues in a big company that could hide a 2x productivity increase for a little while. But I'd expect it to be very visible in small companies and projects. Looking at things like the number of games released on Steam, new products launched on new product sites, or issues fixed on popular open source repos, you'd expect a 2x bump to be visible.
prettyblocks 1/28/2026||||
In my experience all technology has been like this though. We are on the treadmill of learning the new thing with or without LLMs. That's what makes tech work so fun and rewarding (for me anyway).
Kostic 1/28/2026|||
I assume you're living in a city. You're already renting a lot of things from others (security, electricity, water, food, shelter, transportation), so what is different with white collar work?
bondarchuk 1/28/2026|||
>the city gets destroyed

vs.

>a company goes bankrupt or pivots

I can see a few differences.

striking 1/28/2026|||
My apartment has been here for years and will be here for many more. I don't love paying rent on it but it certainly does get maintained without my having to do anything. And the rest of the infrastructure of my life is similarly banal. I ride Muni, eat food from Trader Joe's, and so on. These things are not going away and they don't require me to rewire my brain constantly in order to make use of them. The city infrastructure isn't stealing my ability to do my work, it just fills in some gaps that genuinely cannot be filled when working alone and I can trust it to keep doing that basically forever.
nemothekid 1/27/2026|||
I think I should write more about this, but I have been feeling very similar. I've recently been exploring using claude code/codex as the "default", so I've decided to implement a side project.

My gripe with AI tools in the past is that the kind of work I do is large and complex and with previous models it just wasn't efficient to either provide enough context or deal with context rot when working on a large application - especially when that application doesn't have a million examples online.

I've been trying to implement a multiplayer game with server authoritative networking in Rust with Bevy. I specifically chose Bevy as the latest version was after Claude's cut off, it had a number of breaking changes, and there aren't a lot of deep examples online.

Overall it's going well, but one downside is that I don't really understand the code "in my bones". If you told me tomorrow that I had to optimize latency or if there was a 1 in 100 edge case, not only would I not know where to look, I don't think I could tell you how the game engine works.

In the past, I could not have ever gotten this far without really understanding my tools. Today, I have a semi functional game and, truth be told, I don't even know what an ECS is and what advantages it provides. I really consider this a huge problem: if I had to maintain this in production, if there was a SEV0 bug, am I confident enough I could fix it? Or am I confident the model could figure it out? Or is the model good enough that it could scan the entire code base and intuit a solution? One of these three questions has to be answered or else brain atrophy is a real risk.

bedrio 1/28/2026|||
I'm worried about that too. If the error is reproducible, the model can eventually figure it out from experience. But a ghost bug that I can't pattern? The model ends up in a "you're absolutely right" loop as it incorrectly guesses different solutions.
mattmanser 1/28/2026||
Are ghost bugs even real?

My first job had the Devs working front-line support years ago. Due to that, I learnt an important lesson in bug fixing.

Always be able to re-create the bug first.

There is no such thing as ghost bugs, you just need to ask the reporter the right questions.

Unless your code is multi-threaded, to which I say, good luck!

chickensong 1/28/2026|||
They're real at scale. Plenty of bugs don't surface until you're running under heavy load on distributed infrastructure. Often the culprit is low in the stack. Asking the reporter the right questions may not help in this case. You have full traces, but can't reproduce in a test environment.

When the cause is difficult to source or fix, it's sometimes easier to address the effect by coding around the problem, which is why mature code tends to have some unintuitive warts to handle edge cases.

yencabulator 1/28/2026||||
> Unless your code is multi-threaded, to which I say, good luck!

What isn't multi-threaded these days? Kinda hard to serve HTTP without concurrency, and practically every new business needs to be on the web (or to serve multiple mobile clients; same deal).

All you need is a database and web form submission and now you have a full distributed system in your hands.

mattmanser 1/28/2026|||
Only superficially so, await/async isn't usually like the old spaghetti multi-threaded code people used to write.
yencabulator 1/28/2026||
You mean in a single-threaded context like Javascript? (Or with Python GIL giving the impression of the same.) That removes some memory corruption races, but leaves all the logical problems in place. The biggest change is that you only have fixed points where interleaving can happen, limiting the possibilities -- but in either scenario, the number of possible paths is so big it's typically not human-accessible.

Webdevs not aware of race conditions -> complex page fails to load. They're lucky in how the domain sandboxes their bugs into affecting just that one page.
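
To make that concrete, here's a minimal sketch of a purely logical race that needs no threads at all, with Python asyncio standing in for any single-threaded event-loop runtime (the bank-balance example is made up):

    import asyncio

    balance = 100  # shared mutable state, no threads involved

    async def withdraw(amount: int) -> bool:
        global balance
        if balance >= amount:       # check
            await asyncio.sleep(0)  # yield point: another task may interleave here
            balance -= amount       # act on a now-stale check
            return True
        return False

    async def main():
        # Both withdrawals pass the check before either one subtracts,
        # so both "succeed" and the balance goes negative.
        results = await asyncio.gather(withdraw(80), withdraw(80))
        print(results, balance)  # [True, True] -60

    asyncio.run(main())

The await points only pin down where the interleaving can happen; they don't make the check-then-act sequence atomic.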

direwolf20 1/28/2026|||
nginx is single-threaded, but you're absolutely right — any concurrency leads to the same ghost bugs.
yencabulator 1/28/2026||
nginx is also from the era when fast static file serving was still a huge challenge, and "enough to run a business" for many purposes -- most software written has more mutable state, and much more potential for edge cases.
SpicyLemonZest 1/28/2026|||
Historically I would have agreed with you. But since the rise of LLM-assisted coding, I've encountered an increasing number of things I'd call clear "ghost bugs" in single threaded code. I found a fun one today where invoking a process four times with a very specific access pattern would cause a key result of the second invocation to be overwritten. (It is not a coincidence, I don't think, that these are exactly the kind of bugs a genAI-as-a-service provider might never notice in production.)
mh2266 1/28/2026||||
> I've been trying to implement a multiplayer game with server authoritative networking in Rust with Bevy. I specifically chose Bevy as the latest version was after Claude's cut off, it had a number of breaking changes, and there aren't a lot of deep examples online.

I am interested in doing something similar (Bevy. not multiplayer).

I had the thought that you ought to be able to provide a cargo doc or rust-analyzer equivalent over MCP? This... must exist?

I'm also curious how you test if the game is, um... fun? Maybe it doesn't apply so much for a multiplayer game, I'm thinking of stuff like the enemy patterns and timings in a soulslike, Zelda, etc.

I did use ChatGPT to get some rendering code for a retro RCT/SimCity-style terrain mesh in Bevy and it basically worked, though several times I had to tell it "yeah uh nothing shows up", at which point it said "of course! the problem is..." and then I learned about mesh winding, fine, okay... felt like I was in over my head and decided to go to a 2D game instead so didn't pursue that further.

nemothekid 1/28/2026||
>I had the thought that you ought be able to provide a cargo doc or rust-analyzer equivalent over MCP? This... must exist?

I've found that there are two issues that arise that I'm not sure how to solve. You can give it docs and point to it and it can generally figure out syntax, but the next issue I see is that without examples, it kind of just brute forces problems like a 14 year old.

For example, the input system originally just let you move left and right, and it popped it into an observer function. As I added more and more controls, it got littered with more and more code, until it was a ~600 line function responsible for a large chunk of game logic.

While trying to parse it I then had it refactor the code - but I don't know if the current code is idiomatic. What would be the cargo doc or rust-analyzer equivalent for good architecture?

I'm running into this same problem when trying to use claude code for internal projects. Some parts of the codebase just have really intuitive internal frameworks and claude code can rip through them and provide great idiomatic code. Others are bogged down by years of tech debt and performance hacks and claude code can't be trusted with anything other than multi-paragraph prompts.

>I'm also curious how you test if the game is, um... fun?

Lucky enough for me this is a learning exercise, so I'm not optimizing for fun. I guess you could ask claude code to inject more fun.

azrazalea_debt 1/28/2026||
> What would be the cargo doc or rust-analyzer equivalent for good architecture?

Well, this is where you still need to know your tools. You should understand what ECS is and why it is used in games, so that you can push the LLM to use it in the right places. You should understand idiomatic patterns in the languages the LLM is using. Understand YAGNI, SOLID, DDD, etc etc.

Those are where the LLMs fall down, so that's where you come in. The individual lines of code after being told what architecture to use and what is idiomatic is where the LLM shines.
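
For anyone in the same boat, a toy sketch of the ECS idea (nothing Bevy-specific; all names here are made up for illustration): entities are just ids, components are plain data attached to them, and systems are functions that iterate over whichever components they care about.

    from dataclasses import dataclass

    @dataclass
    class Position:
        x: float
        y: float

    @dataclass
    class Velocity:
        dx: float
        dy: float

    class World:
        """Entities are just integer ids; components live in per-type tables."""
        def __init__(self) -> None:
            self.next_id = 0
            self.positions: dict[int, Position] = {}
            self.velocities: dict[int, Velocity] = {}

        def spawn(self, *components) -> int:
            eid, self.next_id = self.next_id, self.next_id + 1
            for c in components:
                if isinstance(c, Position):
                    self.positions[eid] = c
                elif isinstance(c, Velocity):
                    self.velocities[eid] = c
            return eid

    def movement_system(world: World, dt: float) -> None:
        # A system only queries the components it needs; entities without
        # a Velocity are simply never visited.
        for eid, vel in world.velocities.items():
            pos = world.positions.get(eid)
            if pos is not None:
                pos.x += vel.dx * dt
                pos.y += vel.dy * dt

    world = World()
    player = world.spawn(Position(0.0, 0.0), Velocity(1.0, 2.0))
    scenery = world.spawn(Position(5.0, 5.0))  # no Velocity, movement ignores it
    movement_system(world, dt=1.0)
    print(world.positions[player])  # Position(x=1.0, y=2.0)

The payoff in a real engine is composition (behaviour comes from which components an entity has rather than from an inheritance tree) and cache-friendly iteration, which is roughly why Bevy organizes everything this way.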

nemothekid 1/28/2026||
What you describe is how I use LLM tools today, but the reason I am approaching my project in this way is because I feel I need to brace myself for a future where developers are expected to "know your tools"

When I look around today - it's clear more and more people are diving head first into fully agentic workflows and I simply don't believe they can churn out 10k+ lines of code today and be intimately familiar with the code base. Therefore you are left with two futures:

* Agentic-heavy SWEs will eventually blow up under the weight of all their tech debt

* Coding models are going to continue to get better where tech debt wont matter.

If the answer is (1), then I do not need to change anything today. If the answer is (2), then you need to prepare for a world where almost all code is written by an agent, but almost all responsibility is shouldered by you.

In kind of an ignorant way, I'm actually avoiding trying to properly learn what an ECS is and how the engine is structured, as sort of a handicap. If in the future I'm managing a team of engineers (however that looks) who are building a metaphorical tower of Babel, I'd like to develop the heuristics for navigating that mountain.

storystarling 1/28/2026||||
I ran into similar issues with context rot on a larger backend project recently. I ended up writing a tool that parses the AST to strip out function bodies and only feeds the relevant signatures and type definitions into the prompt.

It cuts down the input tokens significantly which is nice for the monthly bill, but I found the main benefit is that it actually stops the model from getting distracted by existing implementation details. It feels a bit like overengineering but it makes reasoning about the system architecture much more reliable when you don't have to dump the whole codebase into the context window.
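
For reference, a minimal sketch of that idea with Python's stdlib ast module - this is just one possible shape of the approach, not the parent's actual tool, and the function name is made up:

    import ast

    def strip_function_bodies(source: str) -> str:
        """Collapse every function body to `...`, keeping signatures,
        decorators, annotations and docstrings."""
        tree = ast.parse(source)
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                new_body = []
                # Keep a leading docstring so the model still sees intent.
                if (node.body and isinstance(node.body[0], ast.Expr)
                        and isinstance(node.body[0].value, ast.Constant)
                        and isinstance(node.body[0].value.value, str)):
                    new_body.append(node.body[0])
                new_body.append(ast.Expr(value=ast.Constant(value=...)))
                node.body = new_body
        return ast.unparse(tree)  # Python 3.9+

    if __name__ == "__main__":
        import sys
        with open(sys.argv[1]) as f:
            print(strip_function_bodies(f.read()))

Other languages need a different parser (syn for Rust, the TypeScript compiler API, etc.), but the shape is the same: parse, blank the bodies, re-print.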

jv22222 1/28/2026|||
> I don't really understand the code "in my bones".

Man, I absolutely hate this feeling.

aswegs8 1/28/2026|||
"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem [275b] to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise." - Socrates on Writing and Reading, Phaedrus 370 BC
beepbooptheory 1/28/2026|||
If one reads the dialogue, Socrates is not the one "saying" this, but he is telling a story of what King Thamus said to the Egyptian god Theuth, who is the inventor of writing. He is asking the king to give out the writing, but the king is unsure about it.

It's what is known as one of the Socratic "myths," and really just contributes to a web of concepts that leads the dialogue to its ultimate terminus of aporia (being a relatively early Plato dialogue). Socrates, characteristically, doesn't really give his take on writing. In the text, he is just trying to help his friend write a horny love letter/speech!

I can't bring it up right now, but the end of the dialogue has a rather beautiful characterization of writing in the positive, saying that perhaps logos can grow out of writing, like a garden.

I think if pressed Socrates/Plato would say that LLMs are merely doxa machines, incapable of logos. But I am just spitballing.

dempedempe 1/28/2026||
https://standardebooks.org/ebooks/plato/dialogues/benjamin-j...
beepbooptheory 1/28/2026||
Phaedo != Phaedrus. One is the "writing" one, the other one is, well, about Socrates' execution (also extremely good dialogue!).

The one at issue:

https://standardebooks.org/ebooks/plato/dialogues/benjamin-j...

The public domain translations are pretty old either way. John Cooper's big book is probably still the best but im out of the game these days.

AI guys would probably love this if any of them still have the patience to read/comprehend something very challenging. Probably one of the more famous essays on the Phaedrus dialogue. Its the first long essay of this book:

https://xenopraxis.net/readings/derrida_dissemination.pdf

Roughly: Plato's subordination of writing in this text is symptomatic of a broader kind of `logocentrism` throughout all of western canonical philosophy. Derrida argues the idea of the "externality" of writing compared to speech/logos is not justified by anything, and in fact everything (language, thought) is more like a kind "writing."

mikemarsh 1/28/2026||||
Presenting this quote without additional commentary is an interesting Rorschach test.

Thankfully more and more people are seriously considering the effects of technology on true wisdom and getting off the "all technological progress clearly is great, look at all these silly unenlightened naysayers from the past" train.

runarberg 1/28/2026||
Socrates was right about the effects. Writing did indeed cause us to lose the talent of memorizing. Where he was wrong though (or rather where this quote without context is wrong) is that it turned out that memorizing was for the most part not the important skill to have.

When Socrates uses the same warnings about LLMs he may however be correct both on the effect and the importance of the skill being lost. If we lose the ability to think and solve various problems, we may indeed be losing a very important skill of our humanity.

eaglelamp 1/28/2026|||
You're misinterpreting the quote. Socrates is saying that being able to find a written quotation will replace fully understanding a concept. It's the difference between being able to quote the pythagorean theorem and understanding it well enough to prove it. That's why Socrates says that those who rely on reading will be "hard to get along with" - they will be pedantic without being able to discuss concepts freely.
throw10920 1/29/2026||
Huh, I think you're right. I think I failed the litmus test. Thanks for explaining!
AIorNot 1/28/2026|||
While there are dangers to LLMs - science fiction has been talking about this issue for decades (see below) - I think it's overblown, and the point of the Socrates quote is valid.

e.g. the Matrix Reloaded: https://youtu.be/cD4nhYR-VRA?si=bXGBI4ca-LaetLVl&t=69 (machines no one understands or can manage)

Isaac Asimov's classic - The Feeling of Power https://ia600806.us.archive.org/20/items/TheFeelingOfPower/T...

(future scientists discover how to add using paper and pencil instead of computer)

I mean Big Paradigm shifts are like death, we can't really predict how humanity will evolve if we really get AGI -but these LLMs as they work today are tools and humans are experts at finding out how to use tools efficiently to counter the trade offs.

Does it really matter today that most programmers don't know how to code in assembly for example?

runarberg 1/28/2026||
I’m not making a Malthusian doomsday prediction, and neither was Socrates for that matter. Jobs need to be done, and there will always be somebody willing and able to acquire the relevant skills, and do the job. And in the worst case scenario, society will change itself before it is allowed to fail.

Unlike Malthus, for whom it was easier to imagine the end of the world than the end of Mercantilism, I can easily imagine a world which simply replaces capitalism as its institutions start producing existential threats for humanity.

However, I don't think LLMs are even that; for me they are an annoyance which I personally want gone, but next to climate change and the stagnation of population growth, they won't make a dent in upending capitalism, despite how much they suck.

But just because they are not an existential threat, that doesn't make them harmless. Plenty of people will be harmed by this technology. Like Socrates predicted, people will lose skills, this includes skilled programmers, and where previously we were getting some quality software, instead we will get less of it, replaced with a bunch of AI slop. That is my prediction at least.

ericmcer 1/28/2026||||
That is interesting because your mental abilities seem to be correlated with orchestrating a bunch of abstractions you have previously mastered. Are these tools making us stupid because we no longer need to master any of these things? Or are they making us smarter because the abstraction is just trusting AI to handle it for us?
pinnochio 1/28/2026||
Does a student become smarter by hiring a smarter student to write his essays and take his tests for him?
Alacart 1/29/2026||
We can also invert that by asking: does a student become smarter by writing their essay on their own?

I would argue that the answer to both questions is no. It depends on how you define "smarter", though. You would likely gain knowledge writing the essay yourself, but is gaining knowledge equivalent to getting smarter?

If so, you could also just read the essay afterwards and gain the same knowledge. Is _that_ smarter? You've now gotten the same benefit for much less work.

I think fundamentally I at least partially agree with your stance. That we should think carefully before taking a seemingly easier path. Weighing what we gain and lose. Sometimes the juice is, in fact, the squeeze. But it’s far from cut and dry.

kelnos 1/28/2026||||
It's unclear if you've presented this quote in order to support or criticize the idea that new technologies make us dumber. (Perhaps that's intentional; if so, bravo).

To me, this feels like support. I was never an adult who could not read or write, so I can't check my experience against Socrates' specific concern. But speaking to the idea of memory, I now "outsource" a lot of my memory to my smartphone.

In the past, I would just remember my shopping list, and go to the grocery store and get what I needed. Sure, sometimes I'd forget a thing or two, but it was almost always something unimportant, and rarely was a problem. Now I have my list on my phone, but on many occasions where I don't make a shopping list on my phone, when I get to the grocery store I have a lot of trouble remembering what to get, and sometimes finish shopping, check out, and leave the store, only to suddenly remember something important, and have to go back in.

I don't remember phone numbers anymore. In college (~2000) I had the campus numbers (we didn't have cell phones yet) of at least two dozen friends memorized. Today I know my phone number, my wife's, and my sister's, and that's it. (But I still remember the phone number for the first house I lived in, and we moved out of that house when I was five years old. Interestingly, I don't remember the area code, but I suppose that makes sense, as area codes weren't required for local dialing in the US back in the 80s.)

Now, some of this I will probably ascribe to age: I expect our memory gets more fallible as we get older (I'm in my mid 40s). I used to have all my credit/debit card numbers, and their expiration dates and security codes, memorized (five or six of them), but nowadays I can only manage to remember two of them. (And I usually forget or mix up the expiration dates; fortunately many payment forms don't seem to check, or are lax about it.) But maybe that is due to new technology to some extent: most/all sites where I spend money frequently remember my card for me (and at most only require me to enter the security code). And many also take Paypal or Google Pay, which saves me from having to recall the numbers.

So I think new technology making us "dumber" is a very real thing. I'm not sure if it's a good thing or a bad thing. You could say that, in all of my examples, technology serving the place of memory has freed up mental cycles to remember more important things, so it's a net positive. But I'm not so sure.

runarberg 1/28/2026||
I don't think human memory works like that, at least not in theory. Storage is not the limiting factor of human memory, but rather retention. It takes time and effort to retain new information. In the past you spent some time and effort to memorize the shopping list and the phone number. Mulling it over in your mind (or out loud), repeated recalls, exposure, even mnemonic tricks like rhymes, alliterations, connecting with pictures, stories, etc. if what you had to remember was something more complicated/extensive/important. And retention is not forever: unless you repeat it, you will lose it. And you only have so much time for repetition and recall, so inevitably, there will be memories which won't be repeated, and can't be recalled.

So when you started using technology to offload your memory, what you gained was the time and effort you previously spent encoding these things into your memory.

I think there is a fundamental difference though between phone book apps and LLMs. Losing the ability to remember a phone number is not as severe as losing the ability to form a coherent argument, or to look through sources, or for a programmer to work through logic, to abstract complex logic into simpler chunks. If a scholar loses the skill to look through sources, and a programmer loses the ability to abstract complex logic, they are losing a fundamental part of what's needed to do their jobs. This is like a stage actor losing the ability to memorize the script, instead relying on a tape recorder when they are on stage.

Now if a stage actor loses the ability to memorize the script, they will soon be out of a job, but I fear in the software industry (and academia) we are not so lucky. I suspect we will see a lot of people actually taking that tape recorder on stage, and continuing to do their work as if nothing is more normal. And the drop in quality will predictably follow.

drdeca 1/28/2026||||
I like this commentary on that passage : https://detective-pony.tumblr.com/post/96180330151/pages-21-...
specialist 1/28/2026||||
Yup.

My personal counterpoint is Norman's thesis in Things That Make Us Smart.

I've long tried, and mostly failed, to consider the tradeoffs, to be ever mindful that technologies are never neutral (winners & losers), per Postman's Technopoly.

daxfohl 1/28/2026||||
And so we learn that over 2000 years before the microphone came to be, Socrates invented the mic drop.

In all seriousness though, it's just crazy that anybody is thinking these things at the dawn of civilization.

sifar 1/28/2026||||
Well, the wisdom part is true.
direwolf20 1/28/2026||||
He was right. It did.
throw10920 1/28/2026|||
Writing/reading and AI are so categorically different that the only way you could compare them is if you fundamentally misunderstand how both of them work.

And "other people in the past predicted doom about something like this and it didn't happen" is a fallacious non-argument even when the things are comparable.

ppseafield 1/28/2026|||
The argument Socrates is making is specifically that writing isn't a substitute for thinking, but it will be used as such. People will read things "without instruction" and claim to understand those things, even if they do not. This is a trade-off of writing. And the same thing is happening with LLMs in a widespread manner throughout society: people are having ChatGPT generate essays, exams, legal briefs and filings, analyses, etc., and submitting them as their own work. And many of these people don't understand what they have generated.

Writing's invention is presented as an "elixir of memory", but it doesn't transfer memory and understanding directly - the reader must still think to understand and internalize information. Socrates renames it an "elixir of reminding", that writing only tells readers what other people have thought or said. It can facilitate understanding, but it can also enable people to take shortcuts around thinking.

I feel that this is an apt comparison, for example, for someone who has only ever vibe-coded to an experienced software engineer. The skill of reading (in Socrates's argument) is not equivalent to the skill of understanding what is read. Which is why, I presume, the GP posted it in response to a comment regarding fear of skill atrophy - they are practicing code generation but are spending less time thinking about what all of the produced code is doing.

wjSgoWPm5bWAhXB 1/28/2026||||
yes, but people just really like to predict dooms and they also like to be convinced that they live in some special era in human history
throw10920 1/28/2026|||
It takes about 30 seconds of thinking and/or searching the Internet to realize that people also predict doom when it actually happens - e.g. with people correctly predicting that TikTok will shorten people's attention spans.

It's then quite obvious that the fact that someone, somewhere, predicts a bad thing happening has ~zero bearing on whether it actually happens, and so the claim that "someone predicted doom in the past and it didn't happen then so someone predicting doom now is also wrong" is absurd. Calling that idea "intellectually lazy" is an insult to smart-but-lazy people. This is more like intellectually incapable.

The fact that people will unironically say such a thing in the face of not only widespread personal anecdotes from well-respected figures, but scientific evidence, is depressing. Maybe people who say these things are heavy LLM users?

jrowen 1/28/2026||
There is always some set of people predicting all sorts of dooms though. The saying about the broken clock comes to mind.

With the right cherry picking, it can always be said that [some set of] the doomsayers were right, or that they were wrong.

As you say, someone predicting doom has no bearing on whether it happens, so why engage in it? It's just spreading FUD and dwelling on doom. There's no expected value to the individual or to others.

Personally, I don't think "TikTok will shorten people's attention spans" qualifies as doom in and of itself.

throw10920 1/29/2026||
Did you actually read what you're responding to?

> And "other people in the past predicted doom about something like this and it didn't happen" is a fallacious non-argument even when the things are comparable.

> the claim that "someone predicted doom in the past and it didn't happen then so someone predicting doom now is also wrong" is absurd

It's pretty clear that I'm not defending engaging in baseless negative speculation, but refuting the dismissal of negative speculation based purely on the trope that "people have always predicted it".

Someone who read what they were responding to would rather easily have seen that.

> As you say, someone predicting doom has no bearing on whether it happens

That is not what I said. I'm pretty sure now that you did not read my comment before responding. That's bad.

This is what I said:

> It's then quite obvious that the fact that someone, somewhere, predicts a bad thing happening has ~zero bearing on whether it actually happens, and so the claim that "someone predicted doom in the past and it didn't happen then so someone predicting doom now is also wrong" is absurd.

I'm very clearly pointing out (with "someone, somewhere") that a random person predicting a bad thing has almost no ("~zero") impact on the future. Obviously, if someone who has the ability to affect the future (e.g. a big company executive, or a state leader (past or present)) makes a prediction, they have much more power to actually affect the future.

> so why engage in it? It's just spreading FUD and dwelling on doom.

Because (rational) discussion now has the capacity to drive change.

> There's no expected value to the individual or to others.

Trivially false - else most social movements would be utterly irrelevant, because they work through the same mechanism - talking about things that should be changed as a way of driving that change.

It's also pretty obvious that there's a huge difference between "predicting doom with nothing behind it" and "describing actual bad things that are happening that have a lot of evidence behind them" - which is what is actually happening here, so all of your arguments about the former point would be irrelevant (if they were valid, which they aren't) because that's not even the topic of discussion.

I suggest reading what you're responding to before responding.

> Personally, I don't think "TikTok will shorten people's attention spans" qualifies as doom in and of itself.

You're bringing up "doom" as a way to pedantically quarrel about word definitions. It's trivial to see that that's completely irrelevant to my argument - and worth noting that you're then conceding the point about people correctly predicting that TikTok will shorten people's attention spans, hence validating the need to have discussions about it.

jatari 1/28/2026|||
We are very clearly living through a moment in history that will be studied intensely for thousands of years.
direwolf20 1/28/2026||
Because of the collapsing empire, mind you, not because of the LLMs.
jatari 1/28/2026||
Creation of the internet, social media, everyone on the planet getting a pocket sized supercomputer, beginning of the AI boom, Trump/beginning of the end of the US, are all reasons people will study this period of time.
jrowen 1/28/2026||
This is really interesting because I wholeheartedly believe the original sentiment that everyone thinks their generation is special, and that "now this time they've really screwed it all up" is quite myopic -- and that human nature and the human experience are relatively constant throughout history while the world changes around us.

But, it is really hard to escape the feeling that digital technology and AI are a huge inflection point. In some ways these couple of generations might be the singularity. Trump and contemporary geopolitics in general are a footnote, a silly blip that will pale in comparison over time.

grogenaut 1/28/2026||||
I know managers who can read code just fine; they're just not able/willing to write it. Though the AI helps with that too. I've had a few managers dabble back into coding, especially scripts and whatnot, where I want them pulling unique data and doing one-off investigations.
andy_ppp 1/28/2026||||
I read the grandparent comment as saying that people have been claiming the sky is falling forever… AI will be both good and bad for learning and development. It's always up to the individual whether it benefits them or atrophies their mind.
oblio 1/28/2026||
I'm not a big fan of LLMs, but while using them for day-to-day tasks, I get the same feeling I had when I first started using the internet (I was lucky to start with broadband).

That feeling was one of empowerment: I was able to satisfy my curiosity about a lot of topics.

LLMs can do the same thing and save me a lot of time. It's basically a super charged Google. For programming it's a super charged auto complete coupled with a junior researcher.

My main concern is independence. LLMs in the hands of a bunch of unchecked corporations are extremely dangerous. I kind of trusted Google, and even that trust is eroding, and LLMs can be extremely personal. The lack of trust ranges from the risk of data selling and general data leaks to intrusive and, worse, hidden ads, etc.

runarberg 1/28/2026||
When I first started using the internet, I was able to instant-message (over IRC) random strangers, using a fake name, and lie about my age. My teacher had us send an email to our ex-classmate who had moved to Australia, and she replied the next day. I was able to download the song I just heard on the radio and play it as many times as I wanted in Winamp.

These capabilities simply didn't exist before the Internet. Apart from the email to Australia (which was possible with a fax machine, but much more expensive), LLMs don't give you any new capabilities. They just provide a way for you to do what you already can (and should) do with your brain, without using your brain. It is more like replacing your social interaction with Facebook than it is like experiencing an instant-message group chat for the first time.

oblio 1/28/2026||
Before LLMs it was incredibly tedious or expensive or both to get legal guidance for stuff like taxes, where I live. Now I can orient myself much better before I ask an actual tax expert pointed questions, saving a lot of time and money.

The list of things they can provide is endless.

They're not a creator, they're an accelerator.

And time matters. My interests are myriad but my capacity to pass the entry bar manually is low because I can only invest so much time.

runarberg 1/28/2026||
If this resembles the feeling you had when you first used the internet, it is drastically different from when I used the internet.

When I first used the internet, it was not about doing things faster, it was about doing things which were previously simply unavailable to me. A 12 year old me was never gonna fax my previous classmate who moved to Australia, but I certainly emailed her.

We are not talking about a creator nor an accelerator, we are talking about an avenue (or a road if you will). When I use the internet, I am the creator, and the internet is the road that gets me there.

When I use an LLM it is doing something I can already do, but now I can do it without using my brain. So the feeling is much closer to doomscrolling on social media where previously I could just read a book or meet my pals at the pub. Doomscrolling Facebook is certainly faster than reading a book or socializing at the pub. But it is a poor replacement for either.

oblio 1/28/2026||
I didn't have friends in other countries.

I could however greatly enrich my general knowledge in ways I couldn't do with books I had access to.

runarberg 1/28/2026||
Prior to the internet I used my school library for that (or, when I was very young, books at my grandparents' house). So for me personally that wasn't a new capability. It wasn't until I started using Wikipedia around 2004 (when I was 17 years old) that the internet replaced (or rather complemented) libraries for that function.

But I can definitely see how, for many people with less access to libraries (or worse-quality libraries than what I had access to), the internet provided a new avenue for gaining knowledge which wasn't available before.

whistle650 1/28/2026|||
To understand the impact on computer programming per se, I find it useful to imagine that the first computer programs I had encountered were, somehow, expressed in a rudimentary natural language. That (somewhat) divorces the consideration of AI from its specific impact on programming. Surely it would have pulled me in certain directions. Surely I would have had less direct exposure to the mechanics of things. But, it seems to me that’s a distinction of degree, not of kind.
krupan 1/27/2026|||
I've been thinking along these lines. LLMs seem to have arrived right when we were all getting addicted to reels/TikToks/whatever. For some reason we love to swipe, swipe, swipe, until we get something funny/interesting/shocking that gives us a short-lasting dopamine hit (or whatever chemical it is) that feels good for about 1 second, and we want MORE, so we keep swiping.

Using an LLM is almost exactly the same. You get the occasional "wow! I've never seen it do that before!" moments (whether the thing it just did was even useful or not), get a short hit of feel-goods, and then keep using it trying to get another hit. It keeps providing them at just the right intervals to keep people going, just like TikTok does.

neves 1/28/2026||
It's exactly the argument here:

https://www.fast.ai/posts/2026-01-28-dark-flow/

CharlieDigital 1/28/2026|||
I ran into a new problem today: "reading atrophy".

As in if the LLM doesn't know about it, some devs are basically giving up and not even going to RTFM. I literally had to explain to someone today how something works by...reading through the docs and linking them the docs with screenshots and highlighted paragraphs of text.

Still got push back along the lines of "not sure if this will work". It's. Literally. In. The. Docs.

finaard 1/28/2026|||
That's not really a new thing; it just shows up differently now.

15 years ago I was working in an environment where they had lots of Indians as cheap labour - and the same thing will show up in any environment where you hire a mass of cheap people while looking more at the cost than at qualifications: you pretty much need to trick them into reading the stuff that's relevant.

I remember one case where someone had a problem they couldn't solve, and couldn't give me enough info to help remotely. In the end I was sitting next to them, making them read anything showing up on the screen out loud. It took a few tries where they were just closing dialog boxes without reading them, but eventually we had that under control enough that they were able to read the error messages to me, and then went "Oh, so _that's_ the problem?!"

Overall interacting with a LLM feels a lot like interacting with one of them back then, even down to the same excuses ("I didn't break anything in that commit, that test case was never passing") - and my expectation for what I can get out of it is pretty much the same as back then, and approach to interacting with it is pretty similar. It's pretty much an even cheaper unskilled developer, you just need to treat it as such. And you don't pair it up with other unskilled developers.

globular-toast 1/28/2026||||
The mere existence of the phrase "RTFM" shows that this phenomenon was already a thing. LLMs are the worst thing to happen to people who couldn't read before. When HR type people ask what my "superpower" is I'm so tempted to say "I can read", because I honestly feel like it's the only difference between me and people who suck at working independently.
acessoproibido 1/28/2026|||
As someone working in technical support for a long time, this has always been the case.

You can have as many extremely detailed, easy-to-parse guides, references, etc. as you like; there will always be a portion of customers who refuse to read them.

Never could figure out why because they aren't stupid or anything.

yencabulator 1/28/2026||
> Never could figure out why because they aren't stupid or anything.

They may be intelligent, but they don't sound wise.

overfeed 1/28/2026|||
> Eventually it was easier just to quit fighting it and let it do things the way it wanted.

I wouldn't have believed it a few tears ago if you told me the industry would one day, in lockstep, decide that shipping more tech-debt is awesome. If the unstated bet doesn't pay off (that is, that AI development will outpace the rate at which it generates cruft), then there will be hell to pay.

ithkuil 1/28/2026|||
Don't worry. This will create the demand for even more powerful models that are able to untangle the mess created by previous models.

Once we realize the kind of mess _those_ models created, well, we'll need even more capable models.

It's a variation on the theme of Kernighan's insight: the more "clever" you are while coding, the harder it will be to debug.

EDIT: Simplicity is a way out, but it's hard under normal circumstances; now, with this kind of pressure to ship fast because the colleague with the AI chimp can outperform you, aiming at simplicity will require some widespread understanding.

bandrami 1/28/2026||
"That's the brilliant part: when the winter comes the apes freeze to death!"
scorpioxy 1/28/2026||||
As someone who's been commissioned many times before to work on or salvage "rescue projects" with huge amounts of tech debt, I welcome that day. Still not there yet though I am starting to feel the vibes shifting.

This isn't anything new of course. Previously it was projects built by finding the cheapest bidder and letting them loose on an ill-defined problem. And you can just imagine what kind of code that produced. Except now the scale is much larger.

My favorite example of this was a project that simply stopped working due to the amount of bugs generated from layers upon layers of bad code that was never addressed. That took around 2 years of work to undo. Roughly 6 months to un-break all the functionality and 6 more months to clean up the core and then start building on top.

sally_glance 1/28/2026||
Are you not worried that the sibling comment is right and the solution to this will be "more AI" in the future? So instead of hiring a team of human experts to clean up, management might just dump more money into some specialized AI refactoring platform or hire a single AI coordinator... Or maybe they skip straight to rebuilding with AI, because AI is good at greenfield. Then they only need a specialized migration AI to automate the regular switchovers.

I used to be unconcerned, but I admit to being a little frightened of the future now.

scorpioxy 1/28/2026||
Well, in general worrying about the future is not useful. Regardless of what you think, it is always uncertain. I specifically stay away from taking part in such speculative threads here on HN.

What's interesting to me though is that very similar promises were being made about AI in the 80s. Then came the "AI Winter" after the hype cycle and promises got very far from reality. Generative AI is the current cycle and who knows, maybe it can fulfill all the promises and hype. Or maybe not.

There's a lot of irrationality currently and until that settles down, it is difficult to see what is real and useful and what is smoke and mirrors.

sally_glance 1/28/2026||
I'm aware of that particular chapter of history; my master's thesis was on conversational interfaces. I don't think the potential of the algorithms (and hardware) back then was in any way comparable to what's happening now. There is definitely a hype cycle going on right now, but I'm nearly convinced it will actually leave many things changed even after it plays out.

Funny thing is that meanwhile (today) I've actually been on an emergency consulting project where a PO/PM kind of guy vibecoded some app that made it into production. The thing works, but a cursory audit laid open the expected flaws (like logic duplication, dead code, missing branches). So that's another point for our profession still being required in the near future.

e12e 1/28/2026||||
> ... few tears ago

Brilliant. Even if it was a typo.

TeMPOraL 1/28/2026||||
The industry decided that decades ago. We may like to talk about quality and forethought, but when you actually go to work, you quickly discover it doesn't matter. Small companies tell you "we gotta go fast", large companies demand clear OKRs and focusing on actually delivering impact - either way, no one cares about tech debt, because they see it as unavoidable fact of life. Even more so now, as ZIRP went away and no one can afford to pay devs to polish the turd ad infinitum. The mantra is, ship it and do the next thing, clean up the old thing if it ever becomes a problem.

And guess what, I'm finally convinced they're right.

Consider: it's been that way for decades. We may tell ourselves good developers write quality code given the chance, but the truth is, the median programmer is a junior with <5 years of experience, and they cannot write quality code to save their life. That's purely the consequence of rapid growth of software industry itself. ~all production code in the past few decades was written by juniors, it continues to be so today; those who advance to senior level end up mostly tutoring new juniors instead of coding.

Or, all that put another way: tech debt is not wrong. It's a tool, a trade-off. It's perfectly fine to be loaded with it, if taking it lets you move forward and earn enough to afford paying installments when they're due. Like with housing: you're better off buying it with lump payment, or off savings in treasury bonds, but few have that money on hand and life is finite, so people just get a mortgage and move on.

--

Edited to add: There's a silver lining, though. LLMs make tech debt legible and quantifiable.

LLMs are affected by tech debt even more than human devs are, because (currently) they're dumber, they have less cognitive capability around abstractions and generalizations[0]. They make up for it by working much faster - which is a curse in terms of amplifying tech debt, but also a blessing, because you can literally see them slowing down.

Developer productivity is hard to measure in large part because the process is invisible (happens in people's heads and notes), and cause-and-effect chains play out over weeks or months. LLM agents compress that to hours to days, and the process itself is laid bare in the chat transcript, easy to inspect and analyze.

The way I see it, LLMs will finally allow us to turn software development at tactical level from art into an engineering process. Though it might be too late for it to be of any use to human devs.

--

[0] - At least the out-of-distribution ones - quirks unique to particular codebase and people behind it.

oblio 7 days ago||
> the median programmer is a junior with <5 years of experience, and they cannot write quality code to save their life

It's worse, look up perpetual intermediates. Most people in any given field aren't good at what they're doing. They're at best mediocre.

TeMPOraL 7 days ago||
Sure, but we have high growth on top of that - meaning all those "perpetual intermediates" are always the minority and gravitate upwards in the org chain, while ~all the coding work is done by people who just started working in the field and haven't yet learned enough to become mediocre.
naasking 1/28/2026||||
> I wouldn't have believed it a few tears ago if you told me the industry would one day, in lockstep, decide that shipping more tech-debt is awesome.

It's not debt if you never have to pay it back. If a model can regenerate a whole reliable codebase in minutes from a spec, then your assessment of "tech debt" in that output becomes meaningless.

daxfohl 1/28/2026|||
> unstated bet

(except where it's been stated, championed, enforced, and handed down as ultimatums in no uncertain terms by every executive in the tech industry)

overfeed 1/28/2026||
I've yet to encounter an AI bull who admits to the LLM tendency towards creating tech debt, outside of footnotes stating it can be fixed by better prompting (with no examples) or solved by whatever tool they are selling.
gritspants 1/27/2026|||
My disillusionment comes from the feeling I am just cosplaying my job. There is nothing to distinguish one cosplayer from another. I am just doordashing software, at this point, and I'm not in control.
solumunus 1/28/2026|||
I don’t get this at all. I’m using LLM’s all day and I’m constantly having to make smart architectural choices that other less experienced devs won’t be making. Are you just prompting and going with whatever the initial output is, letting the LLM make decisions? Every moderately sized task should start with a plan, I can spend hours planning, going off and thinking, coming back to the plan and adding/changing things, etc. Sometimes it will be days before I tell the LLM to “go”. I’m also constantly optimising the context available to the LLM, and making more specific skills to improve results. It’s very clear to me that knowledge and effort is still crucial to good long term output… Not everyone will get the same results, in fact everyone is NOT getting the same results, you can see this by reading the wildly different feedback on HN. To some LLM’s are a force multiplier while others claim they can’t get a single piece of decent output…

I think the way you’re using these tools that makes you feel this way is a choice. You’re choosing to not be in control and do as little as possible.

Otterly99 1/28/2026|||
Exactly.

Once you start using it intelligently, the results can be really satisfying and helpful. People complaining about 1000 lines of code being generated? Ask it to generate functions one at a time and make small implementations. People complaining about having to run a linter? Ask it to automatically run it after each code execution. People complaining about losing track? Have it log every modification in a file.

I think you get my point. You need to treat it as a super powerful tool that can do so many things that you have to guide it if you want to have a result that conforms to what you have in mind.

rustyhancock 1/28/2026|||
One challenge is, are those decisions making tangible differences?

We won't know until the code being produced, especially greenfield code, hits any kind of maturity - 5+ years at least?

mlrtime 1/28/2026|||
It's not that challenging; the answer is, it depends.

It's like a junior dev writing features for a product every day vs a principal engineer. The junior might add a feature with O(n^2) performance while the principal has seen this before and writes it in O(log n).

If the feature never reaches significance, the "better" solution doesn't matter, but it might!

The principal may write it once and it is solid and never touched, but the junior's version might be good enough to never need coming back to; the same goes for an LLM with the right operator.
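
To make that concrete with a toy sketch (the scenario and names are made up, not from any real codebase): repeatedly checking membership by scanning a list is the O(n^2)-style version, while a binary search over a pre-sorted list gives you the O(log n)-per-lookup version.

    import bisect

    # Toy illustration (hypothetical scenario): which incoming order IDs
    # already exist in a catalog of n known IDs?

    def known_naive(order_ids, catalog):
        # Linear scan per lookup: O(n) each, so roughly O(n^2) as both lists grow.
        return [o for o in order_ids if o in catalog]

    def known_sorted(order_ids, sorted_catalog):
        # Binary search per lookup on a pre-sorted catalog: O(log n) each.
        def present(o):
            i = bisect.bisect_left(sorted_catalog, o)
            return i < len(sorted_catalog) and sorted_catalog[i] == o
        return [o for o in order_ids if present(o)]

Whether the second version is worth the extra ceremony depends entirely on whether the feature ever sees enough data for the difference to matter.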

rustyhancock 1/28/2026||
There's that, but I actually think LLMs are becoming very good at not making the bad simple choice.

What they're worse at is the bits I can't easily see.

An example is that I recently was working on a project building a library with Claude. The code in pieces all looked excellent.

When I wrote some code making use of it, several functions that were conceptually similar had subtly mismatched signatures.

Different programmers might each have picked a different pattern, but they would probably have applied it consistently across the various projects they worked on.

To an LLM these are just happenstance; it feels no friction.

A real project with real humans writing the code would notice the mismatch, even if they weren't working on those parts at the same time, just from working on it across, say, a weekend.

But how many more decisions do we make convenient only for us meat bags that an LLM doesn't notice?

mlrtime 7 days ago||
Yes, but now you know about that class of problem. So you learned something! As an engineer you now have a choice about what to do with it.

Better yet, go up one level and think about how to avoid the other classes of problems you don't know about yet: how can the LLM catch these before it writes the code, etc.

solumunus 1/28/2026|||
What? Of course it makes a difference when I direct it away from a bad solution towards a good solution. I know as soon as I review the output whether it has done what I asked, or whether it hasn't and I need to make a correction. Why would I need to wait 5 years? That makes no sense; I can see the output.

If you're using LLMs and you don't know what good/bad output looks like then of course you're going to have problems, but such a person would have the same problems without the LLM...

rustyhancock 1/28/2026||
The problem is the LLMs are exceptionally good at producing output that appears good.

That's what it's ultimately been tuned to do.

The way I see this play out is output that satisfies me but that I would not have produced myself.

Over a large project that adds up and typically is glaringly obvious to everyone but the person who was using the LLM.

My only guess as to why is that we're not conscious of most of what we do and why we do it. The threshold at which we'd intervene is higher than the original effort it takes to do the right thing.

If these things don't apply to you, then I think you're coming up on a golden era.

FitchApps 1/28/2026||||
100% there... it's getting to the point where a project manager reports a bug AND pastes a response from Claude (he ran Claude against our codebase) on how to fix it. I'm just copying what Claude said and making sure the thing compiles (.NET). What makes me sleep at night, for now, is the fact that Claude isn't supporting 9pm deployments and AWS infra... it's already writing code, but not supporting it yet.
phito 1/28/2026|||
What kind of software are you writing? Are you just a "code monkey" implementing perfectly described Jira tickets (no offense meant)? I cannot imagine feeling this way with what I'm working on. Writing code is just a small part of it; most of the time is spent trying to figure out how to integrate the various (undocumented and actively evolving) external services involved in a coherent, maintainable and resilient way. LLMs absolutely cannot figure this out themselves; I have to figure it out myself and then write it all into their context, and even then they mostly come up with sub-par, unmaintainable solutions if I wasn't being precise enough.

They are amazing for side projects but not for serious code with real world impact where most of the context is in multiple people's head.

gritspants 1/28/2026||
No, I am not a code monkey. I have an odd role working directly for an exec in a highly regulated industry, managing their tech pursuits/projects. The work can range from exciting to boring depending on the business cycle. Currently it is quite boring, so I've leaned into using AI a bit more just to see how I like it. I don't think that I do.
InfinityByTen 1/28/2026|||
I find the atrophy and the zoning out or context switching problematic, because it takes a few seconds/minutes "thinking" and then BAM! I have 500 lines of all sorts of buggy and problematic code to review, and a sycophantic, not-mature-enough entity to get to correct it.

At some point, I find myself needing to disconnect out of overwhelm and frustration. Faster responses aren't necessarily better. I want more observability in the development process so that I can be a party to it. I have really felt that I need to orchestrate multiple agents working in tandem, playing a sort of bad cop, good cop, with maybe a third trying to moderate that discussion and a fourth to effectively incorporate a human in the mix. But that's too much to integrate into my day job.

amluto 1/28/2026|||
I’ve actually found the tool that inspires the most worry about brain atrophy to be Copilot. Vscode is full of flashing suggestions all over. A couple days ago, I wanted to write a very quick program, and it was basically impossible to write any of it without Copilot suggesting a whole series of ways to do what it thought I was doing. And it seems that MS wants this: the obvious control to turn it off is actually just “snooze.”

I found the setting and turned it off for real. Good riddance. I’ll use the hotkey on occasion.

mlrtime 1/28/2026||
Yes! I spent more time trying to figure out how to turn off those garbage Copilot suggestions than I did editing this 5-year-old Python program.

I use Claude daily, no problems with it. But VS Code + Copilot suggestions were garbage!

nonethewiser 1/28/2026|||
> Like there have been multiple times now where I wanted the code to look a certain way, but it kept pulling back to the way it wanted to do things. Like if I had stated certain design goals recently it would adhere to them, but after a few iterations it would forget again and go back to its original approach, or mix the two, or whatever. Eventually it was easier just to quit fighting it and let it do things the way it wanted.

Absolutely. At a certain level of usage, you just have to let it do its thing.

People are going to take issue with that. You absolutely don't have to let it do its thing. In that case you have to be way more in the loop. Which isn't necessarily a bad thing.

But assuming you want it to basically do everything while you direct it, it becomes pointless to manage certain details. One thing in my experience is that Claude always wants to use ReactRouter. My personal preference is TanStack Router, so I asked it to use that initially. That never really created any problems, but after like the 3rd time of realizing I forgot to specify it, I also realized that it's totally pointless. ReactRouter works fine and Claude uses it fine - it's pointless to specify otherwise.

freediver 1/27/2026|||
My experience is the opposite - I haven't used my brain this much in a while. Typing characters was never what developers were valued for anyway. The joy of building is back too.
swader999 1/27/2026|||
Same. I feel I need to be way more into the domain and what the user is trying to do than ever before.
mlrtime 1/28/2026|||
100% same. I had brain fog before the LLMs; I got tired of reading new docs over and over again for new languages. I became a manager and lost it all.

Now I'm back to being an IC with 25+ years of experience + LLM = god mode, and it's fun again.

Imustaskforhelp 1/27/2026|||
> I want to say it's very akin to doom scrolling. Doom tabbing? It's like, yeah I could be more creative with just a tad more effort, but the AI is already running and the bar to seeing what the AI will do next is so low, so....

Yeah exactly. Like we are just waiting for it to finish, and after it finishes, then what? We ask it to do new things again.

Just as, when we are doomscrolling, we watch something for a minute, then scroll down and watch something new again.

The whole notion of progress feels completely fake with this. Somehow I guess I was in a bubble of time where I had always ended up using AI in the web browser (just as when ChatGPT 3 came out) and my workflow didn't change because it was free, but I recently changed it when some new free services dropped.

"Doom-tabbing", or completely out-of-the-loop agentic AI programming, just feels really weird to me, sucking out the joy - and I wouldn't even consider myself a guy particularly interested in writing code, as I have been using AI to write code for a long time.

I think the problem for me was that I always considered myself a computer tinkerer before a coder. So when AI came for coding, my tinkering skills were given a boost (I could make projects of curiosity I couldn't earlier), but now, with AI agents working in this autonomous-esque way, it has come for my tinkering too, and I do feel replaced - or just feel like my tinkering ability, my interests, my knowledge and my experience aren't taken into account if an AI agent will write the whole codebase across multiple files, run commands and then deploy it straight to a website.

I mean, my point is that tinkering was an active hobby, and now it's becoming a passive one. Doom-tinkering? I feel like I caught on to the feeling a bit early, just going by the vibe from my heart, but is it just me who feels this?

What could be a name for what I feel?

SenHeng 1/28/2026|||
Another thing I’ve experienced is scope creep into the average. Both Claude and ChatGPT keep making recommendations and suggestions that turns the original request into something that resembles other typical features. Sometimes that’s a good thing, because it means I’ve missed something. A lot of times, especially when I’m just riffing on ideas, it turns into something mundane and ordinary and I’ll have lost my earlier train of thought.

A quick example is trying to build a simple expenses app with it. I just want to store a list of transactions with it. I’ve already written the types and data model and just need the AI to give me the plumping. And it will always end up inserting recommendations about double entry bookkeeping.

fragmede 1/28/2026||
yeah but that's like recommending a webserver for your Internet-facing website. If you want to give an example of scope creep, you need a better example than double-entry bookkeeping for an accounting app.
SenHeng 1/28/2026||
You’ve just illustrated exactly the problem. You assumed I was building an accounting app. I’ve experienced the same issue with building features for calculating the brightness of a room, or 3D visualisations of brightness patterns, managing inventory and cataloguing lighting fixtures and so on.

It’s great for churning out stuff that already exists, but that also means it’ll massage your idea into one of them.

zamalek 1/27/2026|||
> I worry about the "brain atrophy" part, as I've felt this too. And not just atrophy, but even moreso I think it's evolving into "complacency".

Not trusting the ML's output is step one here; that keeps you intellectually involved - but it's still a far cry from solving the majority of problems yourself (instead you only solve the problems the ML did a poor job at).

Step two: I delineate interesting and uninteresting work, and Claude becomes a pair programmer without keyboard access for the latter - I bounce ideas off of it etc. making it an intelligent rubber duck. [Edit to clarify, a caveat is that] I do not bore myself with trivialities such as retrieving a customer from the DB in a REST call (but again, I do verify the output).

bandrami 1/28/2026||
> I do not bore myself with trivialities such as retrieving a customer from the DB in a REST call

Genuine question, why isn't your ORM doing that? I see a lot of use cases for LLMs that seem to be more expensive ways to do snippets and frameworks...

zamalek 1/28/2026||
An ORM doesn't generate REST endpoints?
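For what it's worth, the glue in question looks something like this (a hypothetical FastAPI + SQLAlchemy sketch, names made up): the ORM does the lookup, but the route, the 404 handling and the response shape are still code somebody has to write unless a framework generates it.

    from fastapi import FastAPI, HTTPException
    from sqlalchemy import create_engine
    from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session

    class Base(DeclarativeBase):
        pass

    class Customer(Base):
        __tablename__ = "customers"
        id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str]

    engine = create_engine("sqlite:///app.db")  # hypothetical connection string
    app = FastAPI()

    @app.get("/customers/{customer_id}")
    def get_customer(customer_id: int):
        # The ORM handles the query; the endpoint itself is still boilerplate glue.
        with Session(engine) as session:
            customer = session.get(Customer, customer_id)
            if customer is None:
                raise HTTPException(status_code=404, detail="Customer not found")
            return {"id": customer.id, "name": customer.name}

Trivial, easy to verify, tedious to type - exactly the kind of thing I'm happy to delegate.
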
sosomoxie 1/28/2026|||
I've gone years without coding and when I come back to it, it's like riding a bike! In each iteration of my coding career, I have become a better developer, even after a large gap. Now I can "code" during my gap. Were I ever to hand-code again, I'm sure my skills would be there. They don't atrophy, like your ability to ride a bike doesn't atrophy. Yes you may need to warm back up, but all the connections in your brain are still there.
runarberg 1/28/2026|||
Have you ever learnt a foreign language (say Mongolian, or Danish) and then never spoken it, nor even read anything in it for over 10 years? It is not like riding a bike, it doesn't just come back like that. You have to actually relearn the language, practice it, and you will suck at it for months. Comprehension comes first (within weeks) but you will be speaking with grammatical errors, mispronunciations, etc. for much longer. You won't have to learn the language from scratch, second time around is much easier, but you will have to put in the effort. And if you use Google Translate instead of your brain, you won't relearn the language at all. You will simply forget it.
tayo42 1/28/2026|||
Anecdotally, I burned out pretty hard and basically didn't open a text editor for half a year (unemployed too). Eventually I got an itch to write code again and it didn't really feel like I was much worse. Maybe it wasn't long enough for atrophy, but code doesn't seem to quite work like language, in my experience.
Ronsenshi 1/28/2026||
Six months is definitely not long enough of a break for skills to degrade. But it's not just skills, as I wrote in another comment, the biggest thing is knowledge of new tools, new versions of language and its features.

I'd say there's at most around 2 years of knowledge runtime (maybe with all this AI stuff this is even shorter). After that period if you don't keep your knowledge up to date it fairly quickly becomes obsolete.

runarberg 1/28/2026||
I would imagine there is probably some reverse S-curve of skill loss going on. The first year you may retain like 90% (and the 10% lost is obscure words, rare grammar structures, expressions, etc.), then over the next 2 years you lose more and more every year, and by the 3rd year you've lost like 50% of the language, including some common words and useful grammar structures, but retain common greetings, basic structures, etc. Then after about year 5 the regression starts to slow down, and by year 10 you may still know 20%, but it is the most basic stuff, and you won't be able to use the language in any meaningful way.
snozolli 1/30/2026||||
I studied Spanish for years in school, then never really used it. Ten years later, I started studying Japanese. Whenever I got stuck, Spanish would come out. Spanish that I didn't even consciously remember. AFAIK, foreign languages are all stored in the same part of the brain, and once you warm up those neurons, they all get activated.

Not that it's in any way relevant to programming. I will say that after dropping programming for years, I can still explain a lot of specifics, and when I dive back in, it all floods right back. Personally, I'm convinced that any competent, experienced programmer could take a multi-year break, then come back and be right up to speed with the latest software stack in only slightly longer than the stack transition would have taken without a break.

sosomoxie 1/28/2026|||
I have not and I'm actually really bad at learning human languages, but know a dozen programming languages. You would think they would be similar, but for some reason it's really easy for me to program in any language and really hard for me to pick up a human language.
Miraste 1/28/2026||
Learning human languages is not a similar process to learning programming languages at all. I've never been sure why so many people think it is.
runarberg 1/28/2026||
I provided it as a counter example to the learning how to bike myth.

Learning how to bike requires only a handful of skills, most of them located in the motor control centers of your brain (mostly the cerebellum), which is known to retain skills much better than any other part of your brain. Your programming skills are made up of thousands of separate skills which are mostly located in your cortex (largely the frontal and temporal lobes), and learning a foreign language is basically that but more (like 10x more).

So while a foreign language is not the perfect analogy (nothing is), I think it is a reasonable analogy as a counter example to the bicycle myth.

sosomoxie 1/29/2026||
Maybe something that keeps programming skills fresh is that after you learn to think like a programmer, you do that with problems away from the keyboard. Decomposition, logic... in the years I wasn't programming, I was still solving problems like a programmer. Getting back behind the keyboard just engaged the thought processes I was already keeping warm with practice.
Ronsenshi 1/28/2026|||
You might still have the skillset to write code, but depending on the length of the break, your knowledge of tools, frameworks and patterns could be fairly outdated.

I used to know a person like that - high up in the company structure - who would claim he was a great engineer, but all the actual engineers would make jokes about him and his ancient skills in private conversations.

withinboredom 1/28/2026|||
I’d push back on this framing a bit. There's a subtle ageism baked into the assumption that someone who stepped away from day-to-day coding has "ancient skills" worth mocking.

Yes, specific frameworks and tooling knowledge atrophy without use, and that’s true for anyone at any career stage. A developer who spent three years exclusively in React would be rusty on backend patterns too. But you’re conflating current tool familiarity with engineering ability, and those are different things.

The fundamentals: system design, debugging methodology, reading and reasoning about unfamiliar code, understanding tradeoffs ... those transfer. Someone with deep experience often ramps up on new stacks faster than you’d expect, precisely because they’ve seen the same patterns repackaged multiple times.

If the person you’re describing was genuinely overconfident about skills they hadn’t maintained, that’s a fair critique. But "the actual engineers making jokes about his ancient skills" sounds less like a measured assessment and more like the kind of dismissiveness that writes off experienced people before seeing what they can actually do.

Worth asking: were people laughing because he was genuinely incompetent, or because he didn’t know the hot framework of the moment? Because those are very different things.

Ronsenshi 1/28/2026||
This has nothing to do with ageism. This applies to any person of any age who has an ego big enough to think that their knowledge of the industry is still relevant after a prolonged break, and who is socially inept enough to brag about how they are still "in".

I don't disagree with your point about fundamentals, but in an industry where there seems to be a new JS framework any time somebody sneezes, the latest tools are very much relevant too. And of course the big thing is language changes. The events I'm describing happened in the late 00s-early 10s, when language updates picked up steam: Python, JS, PHP, C++. Somebody who used C++98 can't claim to have up-to-date knowledge of C++ in 2015.

So to answer your question - people were laughing at his ego, not the fact that he didn't know some hot new framework.

withinboredom 1/28/2026||
I beg to differ. I started with C in the 90s, then C# in '05, then PHP in '12, then Go in '21. The things I learned in C still apply to Go, C#, and PHP. And I even started contributing to open source C projects in '24 ... all my skills and knowledge were still relevant. This sounds exactly like ageism to me, but I clearly have a different perspective than you.
Ronsenshi 1/28/2026||
Yes, we clearly have different perspectives. I observed an arrogant person who despite their multi-year break from engineering of any kind strongly believed that they still were as capable as engineers who remained in the field during that time.

Maybe you had to be there.

sosomoxie 1/28/2026|||
I code in Vim, use Linux... all of those tools are pretty constant. New frameworks are easy to pick up. I've been able to become productive with very little downtime after multi-year breaks several times.
polytely 1/27/2026|||
I feel like I'm still a couple of steps behind my lead in skill level and am trying to gain more experience, so I do wonder if I am shooting myself in the foot by relying too much on AI at this stage. The senior engineer I'm trying to learn from can use AI very effectively because he has very good judgement of code quality; I feel like if I use AI too much I might lose out on the chance to improve my own judgement. It's a hard dilemma.
seer 1/28/2026|||
Honestly, this seems very much like the jump from being an individual contributor to being an engineering manager.

The time it happened for me was rather abrupt, with no training in between, and the feeling was eerily similar.

You know _exactly_ what the best solution is, you talk to your reports, but they have minds of their own, as well as egos, and they do things … their own way.

At some point I stopped obsessing with details and was just giving guidance and direction only in the cases where it really mattered, or when asked, but let people make their own mistakes.

Now LLMs don’t really learn on their own or anything, but the feeling of “letting go of small trivial things” is sorta similar. You concentrate on the bigger picture, and if it chose to do an iterative for loop instead of using a functional approach the way you like it … well the tests still pass, don’t they.

Ronsenshi 1/28/2026||
The only issue is that as an engineering manager you reasonably expect the team to learn new things, improve their skills and in general grow as engineers. With AI and its context handling, you're working with a team where each member has severe brain damage that affects their ability to form long-term memories. You can rewire their brain to a degree, teaching them new "skills" or giving them new tools, but they still don't actually learn from their mistakes or their experiences.
mlrtime 1/28/2026|||
As a manager I would encourage them to use the LLM tools. I would also encourage unit tests, e2e testing, test coverage, CI pipelines automating the testing, automatic PR reviewing, etc...

It's also about peeking at the big/impactful changes and ignoring the small ones.

Your job isn't to make sure they don't have "brain damage"; it's to keep them productive and not shipping mistakes.

dysoco 1/28/2026|||
Being optimistic (or pessimistic heh), if things keep trending the way they are, the models will keep evolving and will probably be quite a bit better in one year than they are now.
dkubb 1/28/2026|||
You could probably combat this somewhat with a skill that references examples of the code you don't want and the code you do. Then, each time you tell it to correct the code, you ask it to add that example to the references.

You then tell your agent to always run that skill before moving on. If the examples are pattern-matchable you can even have the agent write custom lints (if your linter supports extensions), or write a poor man's linter using ast-grep.
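
If you go the poor man's route, it doesn't need to be fancy. Here's a rough sketch in plain Python (using the stdlib ast module rather than ast-grep; the specific rule is just an example) that flags a naming pattern you might want the agent to steer away from:

    import ast
    import sys

    # Poor man's linter: flag vague God-object class names like FooService.
    # The rule is illustrative only; swap in whatever patterns you keep rejecting.

    def check_file(path):
        with open(path) as f:
            tree = ast.parse(f.read(), filename=path)
        for node in ast.walk(tree):
            if isinstance(node, ast.ClassDef) and node.name.endswith("Service"):
                yield f"{path}:{node.lineno}: vague class name '{node.name}'"

    if __name__ == "__main__":
        findings = [msg for p in sys.argv[1:] for msg in check_file(p)]
        print("\n".join(findings))
        sys.exit(1 if findings else 0)

Have the agent run it as part of the skill and treat a non-zero exit as something to fix before moving on.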

I usually have a second session running that is mainly there to audit the code and help me add and adjust skills, while I keep the main session on the task of working on the feature. I find this makes it far easier to stay engaged than context switching between unrelated tasks.

epolanski 1/27/2026|||
> Like if I had stated certain design goals recently it would adhere to them, but after a few iterations it would forget again and go back to its original approach, or mix the two, or whatever.

Context management, proper prompting and clear instructions, proper documentation are still relevant.

ekropotin 1/28/2026|||
The solution for brain atrophy I personally arrived at is to use coding agents at work, where, let's be honest, velocity is a top priority and code purity doesn't matter that much. Since we use a stack I'm super familiar with, I can verify the produced code quite quickly and tweak it if needed.

However, for hobby projects, where I purposely use tech I'm not very familiar with, I force myself not to use LLMs at all - not even as a chat. Operating the old way - writing code manually, reading documentation, etc. - brings the joy of learning back and, hopefully, establishes new neural connections.

alansaber 1/28/2026|||
"I wanted the code to look a certain way, but it kept pulling back to the way it wanted to do things."

I would argue this is OK for front-end. For back-end? Very, very bad - if you can't get usable output, do it by hand.

phrotoma 1/28/2026||
"rip it out" is a phrase I've been saying more often to the robots.
kitd 1/28/2026|||
I think this is where tools like OpenSpec [1] may help. The deterioration in quality happens because the context is degrading, often due to incomplete or ambiguous requests from the coder. With a more disciplined way of creating and persisting the specs for the work locally, especially if the agent got involved in creating them too, you'll have a much better chance of keeping the agent focussed and aligned.

[1] - https://openspec.dev/

chickensong 1/28/2026|||
> AI keeps pushing it in a direction I didn't want

The AI definitely has preferences and attention issues, but there are ways to overcome them.

Defining code styles in a design doc, and setting up initial examples in key files goes a long way. Claude seems pretty happy to follow existing patterns under these conditions unless context is strained.

I have pretty good results using a structured workflow that runs a core loop of steps on each change, with a hook that injects instructions to keep attention focused.

abm53 1/28/2026|||
My advice: keep it on a tight leash.

In the happy case where I have a good idea of the changes necessary, I will ask it to do small things, step by step, and examine what it does and commit.

In the unhappy case where one is faced with a massive codebase and no idea where to start, I find asking it to just “do the thing” generates slop, but enough for me to use as inspiration for the above.

mupuff1234 1/28/2026|||
He didn't say "brain atrophy", he was talking about coding abilities.
nathias 1/28/2026|||
it's not about brain atrophy, it's skill atrophy
direwolf20 1/28/2026||
is that not the same thing?
SpaceL10n 1/28/2026|||
LLMs are yet another layer between us and the end result. I remain wary of this distance and am super grateful I learned coding the hard way.
keeganpoppen 1/28/2026|||
yeah, because the thing is, at the end of the day, laying things out the way LLMs can understand is becoming more important than doing them the "right" way - a more insidious form of the same complacency. And one in which I am absolutely complicit.
lighthouse1212 1/28/2026|||
[dead]
dirtytoken7 1/27/2026|||
[dead]
stuaxo 1/27/2026||
LLMs have some terrible patterns. Don't know what to do? Just chuck a class named Service in.

Have to really look out for the crap.

atonse 1/27/2026||
> LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

I’ve always said I’m a builder even though I’ve also enjoyed programming (but for an outcome, never for the sake of the code)

This perfectly sums up what I’ve been observing between people like me (builders) who are ecstatic about this new world and programmers who talk about the craft of programming, sometimes butting heads.

One viewpoint isn’t necessarily more valid, just a difference of wiring.

ryandrake 1/27/2026||
I noticed the same thing, but wasn't able to put it into words before reading that. Been experimenting with LLM-based coding just so I can understand it and talk intelligently about it (instead of just being that grouchy curmudgeon), and the thought in the back of my mind while using Claude Code is always:

"I got into programming because I like programming, not whatever this is..."

Yes, I'm building stupid things faster, but I didn't get into programming because I wanted to build tons of things. I got into it for the thrill of defining a problem in terms of data structures and instructions a computer could understand, entering those instructions into the computer, and then watching victoriously while those instructions were executed.

If I was intellectually excited about telling something to do this for me, I'd have gotten into management.

nunez 1/27/2026|||
Same same. Writing the actual code is always a huge motivator behind my side projects. Yes, producing the outcome is important, but the journey taken to get there is a lot of fun for me.

I used Claude Code to implement an OpenAI 4o-vision-powered receipt scanning feature in an expense tracking tool I wrote by hand four years ago. It did it in two or three shots while taking my codebase into account.

It was very neat, and it works great [^0], but I can't latch onto the idea of writing code this way. Powering through bugs while implementing a new library or learning how to optimize my test suite in a new language is thrilling.

Unfortunately (for me), it's not hard at all to see how the "builders" that see code as a means to an end would LOVE this, and businesses want builders, not crafters.

In effect, knowing the fundamentals is getting devalued at a rate I've never seen before.

[^0] Before I used Claude to implement this feature, my workflow for processing receipts looked like this: tap an iOS Shortcut, enter the amount, snap a pic of the receipt, type up the merchant, amount and description for the expense, then have the shortcut POST that to my expense-tracking toolkit, which then POSTs it into a Google Sheet. This feature removed the need for me to enter the merchant and amount. Unfortunately, it often took more time to confirm that the merchant, amount and date details OpenAI provided were correct (and correct them when they were wrong, which was most of the time) than it did to type out those details manually, so I just went back to my manual workflow. However, the temptation to just glance at the details and tap "This looks correct" was extremely high, even if the info it generated was completely wrong! It's the perfect analogue to what I've been witnessing throughout the rise of the LLMs.

viccis 1/27/2026||||
Same. This kind of coding feels like it got rid of the building aspect of programming that always felt nice, and it replaced it entirely with business logic concerns, product requirements, code reviews, etc. All the stuff I can generally take or leave. It's like I'm always in a meeting.

>If I was intellectually excited about telling something to do this for me, I'd have gotten into management.

Exactly this. This is the simplest and tersest way of explaining it yet.

taytus 1/28/2026|||
Because you are not coding, you are building. I've been coding since I was 7 years old, now I'm building.
mlrtime 1/28/2026||
I'd go one step higher, we're not builders, we're problem solvers.

Sometimes the problem needs building, sometimes not.

I'm an engineer: I see a problem and want to solve it. I don't care if I have to write code, have an LLM build something new, or maybe even destroy something. I want to solve the problem for the business and move to the next one; most of the time, though, that means having an LLM write code.

zigman1 1/28/2026|||
Maybe I don't entirely get it, but what is stopping you from just continuing to code?
nfgrep 1/28/2026|||
Speaking for myself, speed. I’d be noticeably slower than my peers if I was crafting code by hand all day.
viccis 1/28/2026|||
That's what I'm doing on my codebases, while I still can. I only use Claude if I need to work on a different team's code that uses it heavily. Nothing quite gets a groan from me like opening up a repo and seeing CLAUDE.md
polishdude20 1/27/2026||||
What I have enjoyed about programming is being able to get the computer to do exactly what I want. The possibilities are bounded by only what I can conceive in my mind. I feel like with AI that can happen faster.
testaccount28 1/27/2026|||
> get the computer to do exactly what I want.

> with AI that can happen faster.

well, not exactly that.

polishdude20 1/28/2026||
For simple things it can. But for more complex things, that's where I step in.
chrisjj 1/28/2026|||
Have you an example of getting a coding chatbot to do exactly what you want?
simonw 1/28/2026|||
https://gisthost.github.io/?a41ce6304367e2ced59cd237c576b817... - which built https://github.com/datasette/datasette-transactions exactly the way I wanted it to be built
thefaux 1/28/2026|||
The examples that you and others provide are always fundamentally uninteresting to me. Many, if not most, are some variant of a CRUD application. I have yet to see a single AI-generated thing that I personally wanted to use and/or spend time with. I also can't help but wonder what we might have accomplished if we devoted the same amount of resources to developing better tools, languages and frameworks for developers instead of automating the generation of boilerplate and selling developers' own skills back to them. Imagine if open source maintainers had instead been flooded with billions of dollars in capital. What might be possible?

And also, the capabilities of LLMs are almost beside the point. I don't use LLMs, but I have no doubt that for any arbitrary problem that can be expressed textually and is computable in finite time, in the limit as time goes to infinity, an LLM will be able to solve it. The more important and interesting questions are what _should_ we build with LLMs and what should we _not_ build with them. These arguments about capability distract from those more important questions.

simonw 1/28/2026||
Considering how much time developers spend building uninteresting CRUD applications I would argue that if all LLMs can do is speed that process up they're already worth their weight in bytes.

The impression I get from this comment is that no example would convince you that LLMs are worthwhile.

audience_mem 1/28/2026||||
The problem with replying to the proof-demanders is that they'll always pick it apart and find some reason it doesn't fit their definition. You must be familiar with that at this point.
chrisjj 1/28/2026||
Worse, they might even attempt to verify your claims e.g. "When AI 'builds a browser,' check the repo before believing the hype" https://www.theregister.com/2026/01/26/cursor_opinion/
chrisjj 1/28/2026|||
> exactly the way I wanted it to be built

You verified each line?

simonw 1/28/2026|||
I looked closely enough to confirm there were no architectural mistakes or nasty gotchas. It's code I would have been happy to write myself, only here I got it written on my phone while riding the BART.
mlrtime 1/28/2026|||
What? Why would you want to?

See this is a perfect example of OPs statement! I don't care about the lines, I care about the output! It was never about the lines of code.

Your comment makes it very clear there are different viewpoints here. We care about problem->solution. You care about the actual code more than the solution.

chrisjj 1/28/2026||
> I don't care about the lines, I care about the output! It was never about the lines of code.

> Your comment makes it very clear there are different viewpoints here.

Agreed.

I care that code output not include leaked secrets, malware installation, stealth cryptomining etc.

Some others don't.

mlrtime 7 days ago||
>not include leaked secrets, malware installation, stealth cryptomining etc.

Not sure what your point is exactly, but those things don't bother me because I have no control over what happens on others' computers. Maybe you're insinuating that LLMs will create this; if so, I think you misunderstand the tooling, or mistake the tooling for the operator.

audience_mem 1/28/2026|||
Is this a joke? Are you genuinely implying that no one has ever got an LLM to write code that does exactly what they want?
chrisjj 1/28/2026||
No. Mashing up other people's code scraped from the web is not what I'd call writing code.
audience_mem 1/28/2026||
Can you not see how you truly, deep down, are afraid you might be wrong?

It's clouding your vision.

smhinsey 1/28/2026||||
This gets at the heart of the quality-of-results issues a lot of people are talking about elsewhere here. Right now, if you treat them as a system where you can tell it what you want and it will do it for you, you're building a sandcastle. If instead you also describe the correct data structures and appropriate algorithms to use against them, as well as the particulars of how you want the problem solved, it's a different situation altogether. Like most systems, the quality of output is in some way determined by the quality of input.

There is a strange insistence on not helping the LLM arrive at the best outcome in the subtext to this question a lot of times. I feel like we are living through the John Henry legend in real time

thepasch 1/28/2026||||
> I got into it for the thrill of defining a problem in terms of data structures and instructions a computer could understand, entering those instructions into the computer, and then watching victoriously while those instructions were executed.

You can still do that with Claude Code. In fact, Claude Code works best the more granular your instructions get.

chrisjj 1/28/2026||
> Claude Code works best the more granular your instructions get.

So best feed it machine code?

atonse 1/27/2026|||
Funny you say that. Because I have never enjoyed management as much as being hands on and directly solving problems.

So maybe our common ground is that we are direct problem solvers. :-)

Ronsenshi 1/28/2026||
For some reason this makes me think of a jigsaw puzzle. People usually complete these puzzles because they enjoy the process, where at the end you get a picture that you can frame if you want to. Some people seem to just want the resulting picture. No interest in the process at all.

I guess that's the same people who went to all those coding camps during their heyday because they heard about software engineering salaries. They just want the money.

direwolf20 1/28/2026||
When I last bought a Lego Technic set because I wanted to play with making mechanisms with gears and stuff, I assembled it according to the instructions, which was fun, and then the final result was also cool and I couldn't bear to dismantle it.
addisonj 1/27/2026|||
IMO, this isn't entirely a "new world" either, it is just a new domain where the conversation amplifies the opinions even more (weird how that is happening in a lot of places)

What I mean by that: you had compiled vs interpreted languages, you had types vs untyped, testing strategies, all that, at least in some part, was a conversation about the tradeoffs between moving fast/shipping and maintainability.

But it isn't just tech, it is also in methodologies and the words we use, from "build fast and break things" and "yagni" to "design patterns" and "abstractions"

As you say, it is a different viewpoint... but my biggest concern with where we are as an industry is that these are not just "equally valid" viewpoints of how to build software... they are quite literally different stages of software that, AFAICT, pretty much all successful software has to go through.

Much of my career has been spent in teams at companies with products that are undergoing the transition from "hip app built by scrappy team" to "profitable, reliable software" and it is painful. Going from something where you have 5 people who know all the ins and outs and can fix serious bugs or ship features in a few days to something that has easy clean boundaries to scale to 100 engineers of a wide range of familiarities with the tech, the problem domain, skill levels, and opinions is just really hard. I am not convinced yet that AI will solve the problem, and I am also unsure it doesn't risk making it worse (at least in the short term)

dpflan 1/27/2026||
> Much of my career has been spent in teams at companies with products that are undergoing the transition from "hip app built by scrappy team" to "profitable, reliable software" and it is painful. Going from something where you have 5 people who know all the ins and outs and can fix serious bugs or ship features in a few days to something that has easy clean boundaries to scale to 100 engineers of a wide range of familiarities with the tech, the problem domain, skill levels, and opinions is just really hard. I am not convinced yet that AI will solve the problem, and I am also unsure it doesn't risk making it worse (at least in the short term)

This perspective is crucial. Scale is the great equalizer / demoralizer, scale of the org and scale of the systems. Systems become complex quickly, and verifiability of correctness and function becomes harder. For companies that built from day one with AI and have AI influencing them as they scale, where does complexity begin to run up against the limitations of AI and cause regression? Or, if all goes well, amplification?

coffeeaddict1 1/27/2026|||
But how can you be a responsible builder if you don't have trust in the LLMs doing the "right thing"? Suppose you're the head of a software team where you've picked the best candidates for a given project; in that scenario I can see how one is able to trust the team members to orchestrate the implementation of your ideas and intentions, with you not being intimately familiar with the details. Can we place the same trust in LLM agents? I'm not sure. Even if one could somehow prove that LLMs are very reliable, the fact that AI agents aren't accountable beings renders the whole situation vastly different from the human equivalent.
handoflixue 1/28/2026|||
Trust but verify:

I test all of the code I produce via LLMs, usually doing fairly tight cycles. I also review the unit test coverage manually, so that I have a decent sense that it really is testing things - the goal is less perfect unit tests and more just quickly catching regressions. If I have a lot of complex workflows that need testing, I'll have it write unit tests and spell out the specific edge cases I'm worried about, or set up cheat codes I can invoke to test those workflows out in the UI/CLI.
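
A minimal sketch of the kind of edge-case-focused test I mean (the function under test is made up purely for illustration):

    import pytest

    def parse_quantity(raw):
        # made-up function under test: parse a non-negative integer from text
        raw = raw.strip()
        if raw.lstrip("-").isdigit():
            n = int(raw)
            return n if n >= 0 else None
        return None

    @pytest.mark.parametrize("raw, expected", [
        ("0", 0),        # boundary value
        ("  7 ", 7),     # whitespace handling
        ("-1", None),    # negative input rejected
        ("abc", None),   # junk input rejected
    ])
    def test_parse_quantity_edge_cases(raw, expected):
        assert parse_quantity(raw) == expected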

Trust comes from using them often - you get a feeling for what a model is good and bad at, and what LLMs in general are good and bad at. Most of them are a bit of a mess when it comes to UI design, for instance, but they can throw together a perfectly serviceable "About This" HTML page. Any long-form text they write (such as that About page) is probably trash, but that's super-easy to edit manually. You can often just edit down what they write: they're actually decent writers, just very verbose and unfocused.

I find it similar to management: you have to learn how each employee works. Unless you're in the Top 1%, you can't rely on every employee giving 110% and always producing perfect PRs. Bugs happen, and even NASA-strictness doesn't bring that down to zero.

And just like management, some models are going to be the wrong employee for you because they think your style guide is stupid and keep writing code how they think it should be written.

inerte 1/27/2026|||
You don't simply put a body in a seat and get software. There are entire systems enabling this trust: college, resume, samples, referral, interviews, tests and CI, monitoring, mentoring, and performance feedback.

And accountability can still exist? Is the engineer that created or reviewed a Pull Request using Claude Code less accountable than one that used PICO?

coffeeaddict1 1/27/2026|||
> And accountability can still exist? Is the engineer that created or reviewed a Pull Request using Claude Code less accountable than one that used PICO?

The point is that in the human scenario, you can hold the human agents accountable. You cannot do that with AI. Of course, you as the orchestrator of agents will be accountable to someone, but you won't have the benefit of holding your "subordinates" accountable, which is what you do in a human team. IMO, this renders the whole situation vastly different (whether good or bad I'm not sure).

polishdude20 1/27/2026||
You can switch to another LLM provider or stop using them altogether. It's even easier than firing a developer.
ipaddr 1/27/2026||
It is as easy as getting rid of Microsoft Teams at your org.
chrisjj 1/28/2026|||
Of course he is - because he invested so much less.
mkozlows 1/27/2026|||
I think he's really getting at something there. I've been thinking about this a lot (in the context of trying to understand the persistent-on-HN skepticism about LLMs), and the framing I came up with[1] is top-down vs. bottom-up dev styles, aka architecting code and then filling in implementations, vs. writing code and having architecture evolve.

[1] https://www.klio.org/theory-of-llm-dev-skepticism/

jamauro 1/28/2026||
I like this framing. Nice typography btw, a pleasure to read.
concats 1/28/2026|||
I remember leaving university going into my first engineering job, thinking "Where is all the engineering? All the problem solving and building of complex systems? All the math and science? Have I been demoted to a lowly programmer?"

Took me a few years to realize that this wasn't a universal feeling, and that many others found the programming tasks more fulfilling than any challenging engineering. I suppose this is merely another manifestation of the same phenomenon.

senderista 1/27/2026|||
Maybe there's an intermediate category: people who like designing software? I personally find system design more engaging than coding (even though I enjoy coding as well). That's different from just producing an opaque artifact that seems to solve my problem.
monkaiju 1/28/2026|||
So far I haven't seen it actually be effective at "building" in a work context with any complexity, and this despite some on our team desperately trying to make that the case.
FeepingCreature 1/28/2026|||
I have! You have to be realistic about the projects. The more irreducible local context it needs, the less useful it will be. Great for greenfield code, oneshots, write once read once run for months.
barrell 1/28/2026|||
Agreed. I don’t care for engineering or coding, and would gladly give it up the moment I can. I’m also running a one man business where every hour counts (and where I’m responsible for maintaining every feature).

The fact of the matter is LLMs produce lower quality at higher volumes in more time than it would take to write it myself, and I’m a very mediocre engineer.

I find this separation of “coding” vs “building” so offensive. It’s basically just saying some people are only concerned with “inputs”, while others only with “outputs”. This kind of rhetoric is so toxic.

It’s like saying LLM art is separating people into people who like to scribble, and people who like to make art.

Applejinx 1/28/2026||
Would you accept 'people who like to make art, and people who like to commission somebody to make art and give them lots of notes in the process'?
barrell 1/29/2026||
I mean it’s closer, but I don’t think it’s right to equate commissioning an artist with paying a multi-billion dollar corporation to steal from artists.

These tools are just lazy shortcuts. And that’s fine, there’s no problem with taking the lazy way. I’m never going to put in the time to learn to draw, so it’s cool there’s an option for me.

I just take ire with pretending it’s something grand and refined, or spitting in the face of the ones who are willing to put in the work

verdverm 1/27/2026|||
I think the division is more likely tied to writing. You have to fundamentally change how you do your job, from one of writing a formal language for a compiler to one of writing natural language for a junior-goldfish-memory-allstar-developer, closer to management than to contributor.

This distinction to me separates the two primary camps

lelanthran 1/28/2026|||
> > LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

> I’ve always said I’m a builder even though I’ve also enjoyed programming (but for an outcome, never for the sake of the code)

> This perfectly sums up what I’ve been observing between people like me (builders) who are ecstatic about this new world and programmers who talk about the craft of programming, sometimes butting heads.

That's one take, sure, but it's a specially crafted one to make you feel good about your position in this argument.

The counter-argument is that LLM coding splits up engineers based on those who primarily like engineering and those who like managing.

You're obviously one of the latter. I, OTOH, prefer engineering.

theshrike79 1/28/2026||
I prefer engineering too, I tried management and I hated it.

It's just the level of engineering we're split on. I like the type of engineering where I figure out the flow of data, maybe the data structures and how they move through the system.

Writing the code to do that is the most boring part of my job. The LLM does it now. I _know_ how to do it, I just don't want to.

It all boils down to communication in a way. Can you communicate what you want in a way others (in this case a language model) understands? And the parts you can't communicate in a human language, can you use tools to define those (linters, formatters, editorconfig)?

I've done all that with actual humans for ... a decade? So applying the exact same thing to a machine is weirdly more efficient, it doesn't complain about the way I like to have my curly braces - it just copies the defined style. With humans I've found out that using impersonal tooling to inspect code style and flaws has a lot less friction than complaining about it in PR reviews. If the CI computer says no, people don't complain, they fix it.

dimas_codes 1/28/2026|||
I feel like this is the core issue that will actually stall LLM coding tools short of actually replacing coding work at large.

'Coders' make 'builders' keep the source code good enough so that 'builders' can continue building without breaking what they built.

If 'builders' become 10x productive and 'coders' become unable to keep up with the insurmountable pile of unmaintainable mess that 'builders' proudly churn out, 'builders' will start running into the impossibility of building further without starting over and over again, hoping that agents will be able to get it right this time.

theshrike79 1/28/2026||
"Coders" can code tools that programmatically define quality. We have like 80% of those already.

Then force the builders to use those tools to constrain their output.

chrisjj 1/28/2026|||
> > LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

This is much less significant than the fact that LLMs split engineers into those who primarily like quality v. those who primarily like speed.

chickensong 1/28/2026||
That split has always existed. LLMs can be used on either side of the divide.
chrisjj 1/28/2026||
We see a ton of "AI let me code a program X faster than ever before."

We see almost no "AI let me code a program X better than ever before."

Philpax 1/28/2026|||
See this episode of Oxide and Friends, where they discuss just that: https://oxide-and-friends.transistor.fm/episodes/engineering...
chickensong 1/28/2026|||
I can't argue that. The scale was already imbalanced as well, and vibe coding has lowered the bar even more, so the gap will continue to grow for now.

I'm just saying that LLMs aren't causing the divide. Accelerating yes, but I think simply equating AI usage to poor quality is wrong. Craftsmen now have a powerful tool as well, to analyze, nitpick, and refactor in ways that were previously difficult to justify.

It also seems premature for so many devs to jump to hardline "AI bad" stances. So far the tech is improving quite well. We may not be able to 1-shot much of quality yet, but it remains to be seen if that will hold.

Personally, I have hopes that AI will eventually push code quality much higher than it's ever been. I might be totally wrong of course, but to me it feels logical that computers would be very good at writing computer programs once the foundation is built.

giancarlostoro 1/28/2026|||
Yeah, I think this is a bit of insight I had not realized / been able to word correctly yet. There's developers who can let Claude go at it and be fearless about it, like me (though I mostly do it for side projects, but WOW), and then there's developers who will use it like a hammer or axe to help cut down or mold whatever is in their path.

I think both approaches are okay, the biggest thing for me is the former needs to test way more, and review the code more, as developers we don't read code enough, with the "prompt and forget" approach we have a lot of free time we could spend reading the code, asking the model to refactor and refine the code. I am shocked when I hear about hundreds of thousands of lines in some projects. I've rebuilt Beads from the ground up and I'm under 10 lines of code.

So we're going to have various level of AI Code Builders if you will: Junior, Mid, Senior, Architect. I don't know if models will ever pick up the slack for Juniors any time soon. We would need massive context windows for models, and who will pay for that? We need a major AI breakthrough to where the cost goes down drastically before that becomes profitable.

codyb 1/28/2026|||
I think there's a place for both.

We have services deployed globally serving millions of customers where rigor is really important.

And we have internal users who're building browser extensions with AI that provide valuable information about the interface they're looking at, including links to the internal record management and key metadata that's affecting content placement.

These tools could be handed out on Zip drives in the street and they would just show our users some of the metadata already being served up to them. But it's amazing to strip out 75% of the process for certain things and just have our user (in this case, though, it's one user who is driving all of this, so it does take some technical inclination) build out these tools that save our editors so much time, when doing this before would have been months and months and months of discovery and coordination and designs that probably wouldn't actually be as useful in the end, after the wants of the user are diluted through 18 layers of process.

globular-toast 1/28/2026|||
I like building, but I don't fool myself into thinking it can be done by taking shortcuts. You could build something that looks like a house for half the cost but it won't be structurally sound. That's why I care about the details. Someone has to.
jimbokun 1/27/2026|||
The new LLM centered workflow is really just a management job now.

Managers and project managers are valuable roles and have important skill sets. But there's really very little connection with the role of software development that used to exist.

It's a bit odd to me to include both of these roles under a single label of "builders", as they have so little in common.

EDIT: this goes into more detail about how coding (and soon other kinds of knowledge work) is just a management task now: https://www.oneusefulthing.org/p/management-as-ai-superpower...

simianwords 1/27/2026||
i don't disagree. at some point LLM's might become good enough that we wouldn't need exact technical expertise.
slaymaker1907 1/27/2026|||
I enjoy both and have ended up using AI a lot differently than vibe coders. I rarely use it for generating implementations, but I use it extensively for helping me understand docs/apis and more importantly, for debugging. AI saves me so much time trying to figure out why things aren’t working and in code review.

I deliberately avoid full vibe coding since I think doing so will rust my skills as a programmer. It also really doesn’t save much time in my experience. Once I have a design in mind, implementation is not the hard part.

bjackman 1/28/2026|||
There's more to it than just coding Vs building though.

For a long time in my career now I've been in a situation where I'd be able to build more if I was willing to abstract myself and become a slide-merchant/coalition-builder. I don't want to do this though.

Yet, I'm still quite an enthusiastic vibe-coder.

I think it's less about coding Vs building and more about tolerance for abstraction and politics. And I don't think there are that many people who are so intolerant of abstraction that they won't let agents write a bunch of code for them.

nfgrep 1/28/2026|||
I’ve heard something similar: “there are people who enjoy the process, and people who enjoy the outcome”. I think this saying comes more so from artistic circles.

I’ve always considered myself a “process” person, I would even get hung-up on certain projects because I enjoyed crafting them so much.

LLM’s have taken a bit of that “process” enjoyment from me, but I think have also forced some more “outcome” thinking into my head, which I’m taking as a positive.

asimovDev 1/28/2026|||
To me this is similar to car enthusiasm. Some people absolutely love to build their project car; it's a major part of the hobby for them. Others just love the experience of driving, so they buy ready-made cars or just pay someone to work on the car.
stevenhuang 1/28/2026||
Alternatively, others just want to get to their destination.
netcraft 1/28/2026|||
agree completely. I used to be (and still would love to be) a process person, enjoying hand-writing bulletproof artisanal code. Switching to startups many years ago gave me a whole new perspective, and it's been interesting, the struggle between writing code and shipping. Especially when you don't know how long the code you are writing will actually live. LLMs are fantastic in that space.
greenie_beans 1/28/2026|||
makes sense if you are a data scientist where people need to be boxed into tidy little categories. but some people probably fall into both categories.
Imustaskforhelp 1/27/2026||
> I enjoy both and have ended up using AI a lot differently than vibe coders. I rarely use it for generating implementations, but I use it extensively for helping me understand docs/apis and more importantly, for debugging. AI saves me so much time trying to figure out why things aren’t working and in code review.

I had felt like this and still do, but man, at some point the management churn feels real & I just feel like I'm suffering from a new problem.

Suppose I actually end up having services literally deployed from a single prompt and nothing else. Earlier I used to have AI write code but I was interested in the deployment and everything around it; now there are services which do that really neatly for you (I also really didn't give in to the agent hype and mostly used browser LLMs).

Like, on one hand you feel more free to build projects, but the whole joy of the project got completely reduced.

I mean, I guess I am one of the junior devs, so to me AI writing code on topics I didn't know/prototyping felt awesome.

I mean I was still involved in say copy pasting or looking at the code it generates. Seeing the errors and sometimes trying things out myself. If AI is doing all that too, idk

For some reason, recently I have been uninterested in AI. I have used it quite a lot for prototyping, but this completely out-of-the-loop programming with recent services just feels very off to me.

I also feel like there is this sense that if I pay for some AI thing, I have to maximally extract "value" out of it.

I guess the issue could be that I can give vague terms or a very small text file as input (like "just do X alternative in Y lang") and I am now unable to understand the architectural decisions, and feel overwhelmed by it.

It's probably gonna take either spec-driven development, where I clearly define the architecture, or something like what I saw primagen do recently, where the AI will only manipulate the code of one particular function (I am imagining it for a file as well). Somehow I feel like that's something I could enjoy more, because right now it feels like I don't know what I have built at times.

When I prototype single-file projects using, say, the browser for funsies/any idea, I get some idea of what the code uses (its dependencies and function names from start to end) even if I didn't look at the middle.

A bit of a ramble, I guess, but the thing which is kind of making me feel this is that I was talking to somebody and showcasing them some service where AI + server is there, and they asked for something in a prompt and I wrote it. Then I let it do its job, but I was also thinking how I would architect it (it was some "detect food and then find BMR" thing, and I was thinking first to use any API, but then I thought that meh, it might be hard, why not use AI vision models, okay what's the best, Gemini seems good/cheap)

And I went to the coding thing to see what it did, and it actually went even beyond by using the free tier of Gemini (which I guess didn't end up working, could be some rate limit on my own key, but honestly it would've been the thing I would've tried too).

So like, I used to pride myself on the architectural decisions I make even if AI could write code faster but now that is taken away as well.

I really don't want to read AI code, so honestly at this point I might as well write code myself and learn hands-on, but I have a problem with the build-fast-in-public-like attitude that I have & just not finding it fun.

I feel like I should do a more active job in my projects & I am really just figuring out what's the perfect way to use AI in such contexts & when to use how much.

Thoughts?

markb139 1/28/2026||
I retired from paid sw dev work in 2020 when COVID arrived. I’ve worked on my small projects since, with all development by hand. I’d followed the rise of AI, but not used it. Late last year I started a project that included reverse engineering some firmware that runs on an Intel 8096-based embedded processor. I’d never worked on that processor before. There are tools available, but they cost many $. So, I started to think about a simple disassembler. Two weeks ago we decided to try Claude to see what it could do. We now have a disassembler, assembler and a partially working emulator. No doubt there are bugs and missing features and the code is a bit messy, but boy has it sped up the work. One thing did occur to me. Vendors of small utilities could be in trouble. For example, I needed to cut out some pages from a PDF. I could have found a tool online (I’m sure there are several) or written one myself. However, Claude quickly performed the task.
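
For reference, the PDF part really is only a few lines of Python; a minimal sketch assuming the pypdf library (not necessarily what Claude actually produced):

    from pypdf import PdfReader, PdfWriter

    reader = PdfReader("input.pdf")
    writer = PdfWriter()
    for i in [0, 2, 3]:                  # zero-based indices of the pages to keep
        writer.add_page(reader.pages[i])
    with open("cut.pdf", "wb") as f:
        writer.write(f)
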
gyomu 1/28/2026||
> Vendors of small utilities could be in trouble

This is a mix of the “in the future, everyone will have a 3D printer at home and just 3D print random parts they need” and “anyone can trivially build Dropbox with rsync themselves” arguments.

Tech savvy users who know how to use LLMs aren’t how vendors of small utilities stay in business.

They stay in business because they sell things to users who are truly clueless with tech (99% of the population, which can’t even figure out the settings app on their phone), and solid distribution/marketing is how you reach those users and can’t really be trivially hacked because everyone is trying to hack it.

Or they stay in business because they offer some sort of guarantee (whether legal, technical, or other) that the users don’t want to burden themselves with because they have other, more important stuff to worry about.

CamperBob2 1/28/2026|||
I don't know. It's one thing to tell Joe or Jane User to "Get an FTP account, mount it locally with curlftpfs, and then use SVN or CVS on the mounted filesystem." But if Joe or Jane can just cut-and-paste that advice into a prompt and get their own personal Dropbox...
whiplash451 1/28/2026||
Except when that new Dropbox fails Joe or Jane on that Saturday evening, their only recourse is to ask the AI for help, and the AI starts spinning “oh yeah, mmm, I think I found where the problem is. Cut and paste these debugging lines in that function and let me know what the output is…”
CamperBob2 1/28/2026||
Meanwhile, this year, that happens less often than it did last year... and it actually isn't how AI-assisted development works at all. Agentic models do the cutting-and-pasting by themselves, evaluate the results by themselves, and almost always succeed at fixing the problem by themselves.
whiplash451 1/28/2026||
Fair
markb139 1/28/2026|||
I'm definitely going to build some small tools when I need them. One tool I use occasionally, but not so often that I want to subscribe, is Insomnia.
TeMPOraL 1/28/2026||
> Vendors of small utilities could be in trouble. For example I needed to cut out some pages from a pdf. I could have found a tool online(I’m sure there are several), write one myself. However, Claude quickly performed the task.

Definitely. Making small, single-purpose utilities with LLMs is almost as easy these days as googling for them on-line - much easier, in fact, if you account for time spent filtering out all the malware, adware, "to finish the process, register an account" and plain broken "tools" that dominate SERP.

Case in point, last time my wife needed to generate a few QR codes for some printouts for an NGO event, I just had LLM make one as a static, single-page client-side tool and hosted it myself -- because that was the fastest way to guarantee it's fast, reliable, free of surveillance economy bullshit, and doesn't employ URL shorteners (surprisingly common pattern that sometimes becomes a nasty problem down the line; see e.g. a high-profile case of some QR codes on food products leading to porn sites after shortlink got recycled).
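
(The actual tool was a static client-side HTML/JS page; as a rough illustration of how little core logic is involved, a Python equivalent assuming the `qrcode` package is about this much:)

    import qrcode

    img = qrcode.make("https://example.org/ngo-event")  # made-up URL for the printout
    img.save("event-qr.png")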

Antibabelic 1/28/2026||
Whatever happened to just typing "apt install qrencode"? It's definitely "fast, reliable, free of surveillance economy bullshit, and doesn't employ URL shorteners".
senko 1/28/2026|||
You need to know "qrencode" exists under that exact name. Claude already knows about it and how to use it.
Antibabelic 1/28/2026||
Sure, but that's entirely different from vibe-coding a tool, which sounds like a colossal waste of resources.
simonw 1/28/2026|||
Having an LLM spit out a few hundred lines of HTML and JavaScript is not a colossal waste of resources, it's equivalent to running a microwave for a couple of seconds.
TeMPOraL 1/29/2026||
Not to mention, my little tool is using much less electricity running than just about anything else I could easily find on-line, simply by the virtue of being minimal, and completely free of superfluous visual bullshit, upsells, tracking, telemetry, and other such secondary aspects of anything people publish and advertise for others to use.
agos 1/28/2026|||
as long as that waste and the associated cost are heavily subsidized as they are today, nobody will care
TeMPOraL 1/29/2026||
Don't let the anti-AI propaganda get to you too much. Inference is cheap on the margin.

Consider: there are models capable (if barely) of doing this job that you can run locally, on an upper-mid-range PC with a high-end consumer GPU. Take that as a baseline, assume it takes a day instead of an hour because of inference speed, tally up the total electricity cost. It's not much. Won't boil oceans any more than people playing AAA video games all day will.

Sure, the big LLMs from SOTA vendors use more GPUs/TPUs for inference, but this means they finish much faster. Plus, commercial vendors have lots of optimizations (batch processing, large caches, etc.), and data centers are much more power-efficient than your local machine, so "how much it'd cost me in power bill if I did it locally" is a good starting estimate.

direwolf20 1/28/2026||||
Users can't use command-line tools. They just can't. It has to be user-friendly or it doesn't exist.
TeMPOraL 1/29/2026||
It's not even "users", just the user. The nice thing about LLMs is that it's cheap to develop small tools tailor-made for an audience of few, or in this case, just one.
TeMPOraL 1/29/2026||||
1) This was for my wife. She is not proficient in Linux or CLI in general, and (like ~all white collar workers these days) works almost exclusively in browser tools (exception being pre-O365 versions of Word and Excel we keep running on her laptop because she prefers them).

2) I never heard of `qrencode` CLI tool until today. For some reason I didn't even consider it might exist (maybe because last time I checked, which was many years ago, there was none).

3) Notably, no one mentioned it the last time I shared this story on HN - https://news.ycombinator.com/item?id=44385049.

4) Even if I knew about it, I'd still have to build a web frontend for it, and I'd need a proper server for it, which I'd then have to maintain properly, and secure it against the `qrencode` call becoming an attack vector.

So frankly, for my specific problem, my solution is strictly better.

simonw 1/28/2026|||
A "static, single-page client-side tool" is so much better than "Step 1: install Linux..."
jedberg 1/28/2026||
> You realize that stamina is a core bottleneck to work

There has been a lot of research that shows that grit is far more correlated to success than intelligence. This is an interesting way to show something similar.

AIs have endless grit (or at least as endless as your budget). They may outperform us simply because they don't ever get tired and give up.

Full quote for context:

Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch it struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work and that with LLMs in hand it has been dramatically increased.

djeastm 1/28/2026||
>They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day.

"Listen, and understand! That Terminator is out there! It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop... ever, until you are dead!"

Loeffelmann 1/28/2026|||
If you ever work with LLMs you know that they quite frequently give up.

Sometimes it's a

    // TODO: implement logic
or a

"this feature would require extensive logic and changes to the existing codebase".

Sometimes they just declare their work done. Ignoring failing tests and builds.

You can nudge them to keep going but I often feel like, when they behave like this, they are at their limit of what they can achieve.

wongarsu 1/28/2026|||
If I tell it to implement something it will sometimes declare its work done before it's done. But if I give Claude Code a verifiable goal like making the unit tests pass it will work tirelessly until that goal is achieved. I don't always like the solution, but the tenacity everyone is talking about is there
koiueo 1/28/2026|||
> but the tenacity everyone is talking about is there

I always double-check that it hasn't simply excluded the failing test.

The last time I had this, I discovered it later in the process. When I pointed this out to the LLM, it responded that it had acknowledged the fact of ignoring the test in CLAUDE.md, and that this was justified because [...]. In other words, "known issue, fuck off"

theshrike79 1/28/2026||||
Tools in a loop people, tools in a loop.

If you don't give the agent the tools to deterministically test what it did, you're just vibe coding in its worst form.

jpnc 1/28/2026|||
tenacity == while loop
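
Only half a joke: stripped down, the harness really is roughly a loop that keeps calling the model and executing its tool calls until some verifiable check passes. A minimal sketch with made-up stand-ins (not any real harness's API):

    # made-up stand-ins for the model call and tool execution
    def call_model(history):
        return {"tool": "run_tests"}

    def run_tool(action):
        return {"tool": action["tool"], "ok": True}

    def run_agent(task, max_steps=50):
        history = [task]
        for _ in range(max_steps):
            action = call_model(history)
            result = run_tool(action)   # e.g. edit a file, run the tests
            history.append(result)
            if result["ok"]:            # verifiable goal, e.g. tests pass
                return "done"
        return "gave up"                # tenacity is bounded by the step budget

    print(run_agent("make the unit tests pass"))
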
jedberg 1/28/2026||||
> If you ever work with LLMs you know that they quite frequently give up.

If you try to single shot something perhaps. But with multiple shots, or an agent swarm where one agent tells another to try again, it'll keep going until it has a working solution.

alansaber 1/28/2026||
Yeah, exactly, this is a scope problem; actual input/output size is always limited. I am 100% sure CC etc. are using multiple LLM calls for each response, even though from the response streaming it looks like just one.
mlrtime 1/28/2026||||
Nope, not for me, unless I tell it to.

Context matters, for an LLM just like for a person. When I wrote code I'd add TODOs because we can't context-switch to every other problem we see along the way.

But you can keep the agent fixated on the task AND have it create these TODOs, but ultimately it is your responsibility to find them and fix them (with another agent).

energy123 1/28/2026|||
Using LLMs to clean those up is part of the workflow that you're responsible for (... for now). If you're hoping to get ideal results in a single inference, forget it.
ryanjshaw 1/28/2026|||
I realized a long time ago that I’m better at computer stuff not because I’m smarter but because I will sit there all day and night to figure something out while others will give up. I always thought that was my superpower in the job industry but now I’m not so sure if it will transfer to getting AI to do what I need done…
mlrtime 1/28/2026||
Same, I barely made it through Engineering school, but would stay up all night figuring out everything a computer could do (before the internet).

I did it because I enjoyed it, and still do. I just do it with LLMs now. There is more to figure out than ever before and things get created faster than I have time to understand them.

LLMs should be enabling this, not making it more depressing.

Schlagbohrer 1/28/2026||
Me three. I was not as smart as many of my peers in uni but I freakin LOVE the subject matter and I also love studying and feeling that progress of learning, which led me to put in the huge number of hours necessary to be successful and have a positive attitude the whole time.
mlrtime 7 days ago||
There are dozens of us!
michalsustr 1/28/2026|||
The tenacity aspect makes me worried about the paper clip AI misalignment scenario more than before.
AnimalMuppet 1/28/2026|||
But even tenacity is not enough. You also need an internal timer. "Wait a minute, this is taking too long, it shouldn't be this hard. Is my overall approach completely wrong?"

I'm not sure AIs have that. Humans do, or at least the good ones do. They don't quit on the problem, but they know when it's time to consider quitting on the approach.

dust42 1/28/2026|||
> AIs have endless grit (or at least as endless as your budget).

That is the only thing he doesn't address: the money it costs to run the AI. If you let the agents loose, they easily burn north of 100M tokens per hour. Now at $25/1M tokens that gets quickly expensive. At some point, when we are all drug^W AI dependent, the VCs will start to cash in on their investments.
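
Taking those figures at face value, 100M tokens per hour at $25/1M tokens works out to roughly $2,500 per hour.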

gregjor 1/28/2026|||
LLMs do not have grit or tenacity. Tenacity doesn't describe a machine that doesn't need sleep or experience tiredness, or stress. Grit doesn't describe a chatbot that will tirelessly spew out answers and code because it has no stake or interest in the result, never perceives that it doesn't know something, and never reflects on its shortcomings.
lighthouse1212 1/28/2026||
[dead]
0xbadcafebee 1/27/2026||
> What happens to the "10X engineer" - the ratio of productivity between the mean and the max engineer? It's quite possible that this grows a lot.

I was thinking about this the other day as relates to the DevOps movement.

The DevOps movement started as a way to accelerate and improve the results of dev<->ops team dynamics. By changing practices and methods, you get acceleration and improvement. That creates "high-performing teams", which is the team form of a 10x engineer. Whether or not you believe in '10x engineers', a high-performing team is real. You really can make your team deploy faster, with fewer bugs. You have to change how you all work to accomplish it, though.

To get good at using AI for coding, you have to do the same thing: continuous improvement, changing workflows, different designs, development of trust through automation and validation. Just like DevOps, this requires learning brand new concepts, and changing how a whole team works. This didn't get adopted widely with DevOps because nobody wanted to learn new things or change how they work. So it's possible people won't adapt to the "better" way of using AI for coding, even if it would produce a 10x result.

If we want this new way of working to stick, it's going to require education, and a change of engineering culture.

virgilp 1/28/2026|
This is an interesting thing that I'm contemplating. I also do believe that (perhaps with very few exceptions) there are no "10x engineers" by themselves, but engineers that thrive 10x more in one context or another (like, I'm sure Jeff Dean is an absolutely awesome engineer - but if you took him out of Google and plugged him into IBM - would he have had the same impact?)

With that in mind - I think one very unexplored area is "how to make the mixed AI-human teams successful". Like, I'm fairly convinced AI changes things, but to get to the industrialization of our craft (which is what management seems to want - and, TBH, something that makes sense from an economic pov), I feel that some big changes need to happen, and nobody is talking about that too much. What are the changes that need to happen? How do we change things, if we are to attempt such industrialization?

jimbokun 1/27/2026||
I'm pretty happy with Copilot in VS Code. I type whatever change I want Claude to make in the Copilot panel, and then use the VS Code in-context diffs to accept or reject the proposed changes, while being able to make other small changes on my own.

So I think this tracks with Karpathy's defense of IDEs still being necessary?

Has anyone found it practical to forgo IDEs almost entirely?

everfrustrated 1/28/2026||
I've found copilot chat is able to do everything I need. I tried the Claude plugin for vscode and it was a noticeably worse experience for me.

Mind you copilot has only supported agent mode relatively recently.

I really like the way copilot does changes in such a way that you can accept or reject them and even revert to a point in time in the chat history without using git. Something about this just fits right with how my brain works. Using the Claude plugin just felt like I had one hand tied behind my back.

thunfischtoast 1/28/2026||
I find Claude Code in VS Code is sometimes horribly inefficient. I tell it to replace some print-statements with proper logging in the one file I have open and it first starts burning tokens to understand the codebase for the 13th time today, despite not needing to and having it laid out in the CLAUDE.md already.
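
The change itself is trivial, roughly this kind of swap (sketched in Python with made-up names), which is what makes the repeated codebase exploration feel so wasteful:

    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)

    def process(records):
        # before: print(f"processed {len(records)} records")
        logger.info("processed %d records", len(records))

    process(["a", "b", "c"])
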
vmbm 1/27/2026|||
I have been assigning issues to copilot in Github. It will then create a pull request and work on and report back on the issue in the PR. I will pull the code and make small changes locally using VSCode when needed.

But what I like about this setup is that I have almost all the context I need to review the work in a single PR. And I can go back and revisit the PR if I ever run into issues down the line. Plus you can run sessions in parallel if needed, although I don't do that too much.

simonw 1/27/2026|||
Are you letting it run your tests and run little snippets of code to try them out (like "python -c 'import module; print(module.something())'") or are you just using it to propose diffs for you to accept or reject?

This stuff gets a whole lot more interesting when you let it start making changes and testing them by itself.

maxdo 1/27/2026||
Copilot is not on par with cc or cursor even
jimbokun 1/27/2026|||
I use it to access Claude. So what's the difference?
nsingh2 1/27/2026|||
This stuff is a little messy and opaque, but the performance of the same model in different harnesses depends a lot on how context is managed. The last time I tried Copilot, it performed markedly worse for similar tasks compared to Claude Code. I suspect that Copilot was being very aggressive in compressing context to save on token cost, but I'm not 100% certain about this.

Also note that with Claude models, Copilot might allocate a different number of thinking tokens compared to Claude Code.

Things may have changed now compared to when I tried it out, these tools are in constant flux. In general I've found that harnesses created by the model providers (OpenAI/Codex CLI, Anthropic/Claude Code, Google/Gemini CLI) tend to be better than generalist harnesses (cheaper too, since you're not paying a middleman).

walthamstow 1/27/2026|||
Different harnesses and agentic environments produce different results from the same model. Claude Code and Cursor are the best IME and Copilot is by far the worst.
WA 1/27/2026|||
Why not? You can select Opus 4.5, Gemini 3 Pro, and others.
spaceman_2020 1/27/2026|||
Claude Code is a CLI tool which means it can do complete projects in a single command. Also has fantastic tools for scaffolding and harnessing the code. You can define everything from your coding style to specific instructions for designing frontpages, integrating payments, etc.

It's not about the model. It's about the harness

binarycrusader 1/27/2026|||
> Claude Code is a CLI tool which means it can do complete projects in a single command

https://github.com/features/copilot/cli/

piker 1/27/2026||||
This would make some sense if VS Code didn't have a terminal built into it. The LLMs have the same bash capabilities in either form.
sandos 1/28/2026|||
Huh? There is nothing stopping copilot from doing an entire project in one go.

I've done it tens of times.

theshrike79 1/28/2026||
"Copilot has done 10 tool calls, do you want to continue" or whatever was the bane of my existence before our company approved Claude for use.

Like I asked you to do this task, then you spent time looking around and now want me to pat you on the back so you can continue?

theshrike79 1/28/2026||||
The model is the engine, the framework is the rest of the car.

With Copilot Microsoft has basically put the meanest leanest triple-turbo'd V8 engine in a rickety 80's soviet car.

You can kinda drive it fast in a straight line if you're careful, but you can also crash and burn really hard.

maxdo 1/27/2026|||
It's not a model limit anymore; it's tools, skills, background agents, etc. It's an entire agentic environment.
illnewsthat 1/27/2026||
Github copilot has support for this stuff as well. Agent skills, background/subagents, etc.
Miraste 1/28/2026||
Implementation differences do matter. I haven't found Copilot to have as many issues as people say it does, but they are there. Their Gemini implementation is unusable, for example, and it's not because of the underlying models. They work fine in other harnesses.
netcraft 1/28/2026||
> Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day.

This is true to an extent for sure and they will go much longer than most engineers without getting "tired", but I've def seen both sonnet and opus give up multiple times. They've updated code to skip tests they couldn't get to pass, given up on bugs they couldn't track down, etc. I literally had it ask "could we work on something else and come back to this"

lucianbr 1/28/2026||
The glorified autocomplete. Why would the LLM "work on something else then get back on this", is its subconscious going to solve the problem during that time?

But because people say it, it says it too. Making sense is optional.

havefunbesafe 1/28/2026|||
I've found that clearing the context and getting back to it later actually DOES work. When you restart, your personal context is cleared and you might be better at describing the problem you are solving in a more informationally dense way.
Davidzheng 1/28/2026|||
not impossible right? the new context can provide some needed hints, etc...
Schlagbohrer 1/28/2026|||
Reminiscent of a time just a year or two ago where the LLMs would get downright frustrated and sassy
manbash 1/28/2026||
Oh, definitely. Also, they end up getting stuck in a loop, adding and removing the same code endlessly.

And then someone comes and "improves" their agent with additional "do not repeat yourself" prompts scattered all over the place, to no avail.

"Asinine" describes my experience perfectly.

strogonoff 1/27/2026||
LLM coding splits up engineers based on those who primarily like building and those who primarily like code reviews and quality assessment. I definitely don’t love the latter (especially when reviewing decisions not made by a human with whom I can build long-term personal rapport).

After a certain experience threshold of making things from scratch, “coding” (never particularly liked that term) has always been 99% building, or architecture. I struggle to see how often a well-architected solution today, with modern high-level abstractions, requires so much code that you’d save significant time and effort by not having to just type, possibly with basic deterministic autocomplete, exactly what you mean (especially considering you would also have to spend time and effort reviewing whatever was typed for you if you used a non-deterministic autocomplete).

cmrdporcupine 1/28/2026||
"those who primarily like code reviews and quality assessment" -- I don't love those. In fact I find it tedious and love it when I can work on my own without them.

Except after 25 years of working I know how imperative they are, how easily a project can disintegrate into confused silos, and am frustrated as heck with these tools being pushed without attention to this problem.

OkayPhysicist 1/27/2026||
See, I don't take it to that extreme: LLMs make fantastic, never-before-seen-quality autocompletes. I hacked together a Neovim plugin that prompts an LLM to "finish this function" on command, and it's a big time saver for the menial plumbing-type operations. Think things like "this API I use expects JSON that encodes some subset of SQL, I want all the dogs with Ls in their name that were born on a Tuesday". Given an example of such an API (or if the documentation ended up in its training), LLMs will consistently one-shot stuff like that.
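
For illustration, the kind of JSON-encoded query meant here might look roughly like this (the field names are invented, not any real API):

    import json

    # invented filter format purely for illustration; real APIs will differ
    query = {
        "table": "dogs",
        "where": {
            "and": [
                {"like": ["name", "%L%"]},
                {"eq": ["day_of_week(born_on)", "Tuesday"]},
            ]
        },
    }
    print(json.dumps(query, indent=2))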

Asking it to do entire projects? Dumb. You end up with spaghetti, unless you hand-hold it to a point that you might as well be using my autocomplete method.

gverrilla 1/28/2026||
Depends on the scope of the project. If it's small, and you direct it correctly, it can one-shot yes. Or 2-3-shot.
forrestthewoods 1/27/2026||
HN should ban any discussion on “things I learned playing with AI” that don’t include direct artifacts of the thing built.

We’re about a year deep into “AI is changing everything” and I don’t see 10x software quality or output.

Now don’t get me wrong I’m a big fan of AI tooling and think it does meaningfully increase value. But I’m damn tired of all the talk with literally nothing to show for it or back it up.

lomase 1/27/2026|
[dead]
jwilliams 1/28/2026|
> It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch it struggle with something for a long time just to come out victorious 30 minutes later.

This is true... Equally I've seen it dive into a rabbit hole, make some changes that probably aren't the right direction... and then keep digging.

This is way more likely with Sonnet, Opus seems to be better at avoiding it. Sonnet would happily modify every file in the codebase trying to get a type error to go away. If I prompt "wait, are you off track?" it can usually course correct. Again, Opus seems way better at that part too.

Admittedly this has improved a lot lately overall.

gregjor 1/28/2026|
I don't understand why anyone finds it interesting that a machine, or chatbot, never tires or gets demoralized. You have to anthropomorphize the LLM before you can even think of those possibilities. A tractor never tires or gets demoralized either, because it can't. Chatbots don't "dive into a rabbit hole ... and then keep digging" because they have superhuman tenacity, they do it because that's what software does. If I ask my laptop to compute the millionth Fibonacci number it doesn't sigh and complain, and I don't think it shows any special qualities unless I compare it to a person given the same job.
akoboldfrying 1/28/2026||
You're a machine. You're literally a wet, analog device converting some forms of energy into other forms just like any other machine as you work, rest, type out HN comments, etc. There is nothing special about the carbon atoms in your body -- there's no metadata attached to them marking them out as belonging to a Living Person. Other living-person-machines treat "you" differently than other clusters of atoms only because evolution has taught us that doing so is a mutually beneficial social convention.

So, since you're just a machine, any text you generate should be uninteresting to me -- correct?

Alternatively, could it be that a sufficiently complex and intricate machine can be interesting to observe in its own right?

suddenlybananas 1/28/2026|||
If humans are machines, they are still a subset of machines, and they (among other animals) are the only ones who can be demotivated, so it is still a mistake to assume an entirely different kind of machine would have those properties.

>Other living-person-machines treat "you" differently than other clusters of atoms only because evolution has taught us that doing so is a mutually beneficial social convention

Evolution doesn't "teach" anything. It's just an emergent property of the fact that life reproduces (and sometimes doesn't). If you're going to have this radically reductionist view of humanity, you can't also treat evolution as having any kind of agency.

sponaugle 1/28/2026||
"If humans are machines, they are still a subset of machines and they (among other animals) are the only ones who can be demotivated and so it is still a mistake to assume an entirely different kind of machine would have those properties."

Yet.

suddenlybananas 1/28/2026||
Sure, but the entire context of the discussion is surprise that they don't.
sponaugle 1/28/2026||
Agreed - There is no guarantee of what will happen in the future. I'm not for or against the outcome, but certainly curious to see what it is.
spopejoy 1/28/2026||||
Humans and all other organisms are "literally" not machines or devices by the simple fact that those terms refer to works made for a purpose.

Even as an analogy "wet machine" fails again and again to adequately describe anything interesting or useful in life sciences.

gregjor 1/28/2026|||
Wrong level of abstraction. And not the definition of machine.

I might feel awe or amazement at what human-made machines can do -- the reason I got into programming. But I don't attribute human qualities to computers or software, a category error. No computer ever looked at me as interesting or tenacious.
