Posted by birdculture 12/17/2025
The only relevant point here is keeping a talent pipeline going, because well duh. That it even needs to be said like it's some sort of clever revelation is just another indication of the level of stupid our industry is grappling with.
The. Bubble. Cannot. Burst. Soon. Enough!!
The "over" deserves a lot of emphasis. To this day, I save my code at least once per line that I type because of the daily (sometimes hourly) full machine crashes I experienced in the 80s and 90s.
I don't remember which app made me think that. Maybe some old version of Matlab, which cleared unsaved files when it hung, even with autosave enabled.
I have always cared a lot about quality and craftsmanship. Now when I am working and notice something wrong, I just fix it. I can code it entirely with AI in the time it would've taken me to put it on an eternal backlog somewhere.
Is learning to code with AI coding agents going to make you a better programmer than one who learns to code without such tools?
You should replace devs vertically, not horizontally. Otherwise, who'll be your senior dev tomorrow?
Jokes aside, AI has the potential to reduce the workforce across the board, but companies should strive to keep all levels staffed with humans. Also, an LLM can't fully replace even a junior, not yet at least.
Pair them with a senior so they can learn engineering best practices:
And now you've also just given your senior engineers some extra experience/insights into how to more effectively leverage AI.
It accelerates the org to have juniors (really: a good mix of all experience levels)
Why? That seems unlikely to me. That's like saying juniors are likely the most comfortable with jj, zed, or vscode.
Now Claude had access to this[2] link and it got the data in the research prompt using web-searcher. But that's not the point. Any junior worth their salt (this is distributed systems 101) would have known _what_ was obvious; the failure was in paying attention to the _right_ thing. While there are ideas on prompt optimization out there [3][4], the issue is how many tokens it can burn thinking about these things, and coming up with an optimal prompt and corrections to it is a very hard problem to solve.
[1] https://github.com/humanlayer/humanlayer/blob/main/.claude/c...
[2] https://litestream.io/guides/vfs/#when-to-use-the-vfs
[3] https://docs.boundaryml.com/guide/baml-advanced/prompt-optim...
[4] https://github.com/gepa-ai/gepa
It's like expecting someone to know how to use source control (which at some point wasn't table stakes like it is today).
https://news.ycombinator.com/item?id=44972151
Does this story add anything new?
Folks in Hyderabad can run LLMs too and data centre and infrastructure costs are lower in India.