Posted by bigwheels 1/26/2026
There may come a point where having a "survivor machine" with auto-update turned off is a really good idea.
1. hand arithmetic -> using a calculator
2. assembly -> using a high level language
3. writing code -> making an LLM write code
Number 3 does not belong. It is a fundamentally different leap because it's not based on deterministic logic: you can't depend on an LLM the way you can depend on a calculator or a compiler.
It often doesn't work. That's the point. A calculator works 100% of the time. An LLM might work 95% of the time, or 80%, or 40%, or 99%, depending on what you're doing. That's the difference, and it's a key feature.
To me that isn’t a showstopper. Much of the real world works like that. We put very unreliable humans behind the wheel of two-ton cars. So in a way this is perhaps just programmers aligning with the messy real world?
Perhaps a bit like how architects can only model things so far: eventually you need to build the thing and deal with the surprises and imperfections of the dirt.
It doesn't matter how good you are at calculations; the answer to 2 + 2 is always 4. There is no method of solving 2 + 2 that could result in you accidentally giving everyone who reads the result write access to your entire DB. But there are different ways to code a system, even when the UI looks the same, and some of them may neglect to consider permissions at all.
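To make that concrete, here's a minimal sketch (TypeScript, with made-up names like updateNoteNaive; not from the article) of two handlers that behave identically in the UI on the happy path but differ on permissions:

  interface User { id: string; isAdmin: boolean; }
  interface Note { id: string; ownerId: string; body: string; }

  // Toy in-memory "database" for illustration only.
  const db = new Map<string, Note>([
    ["n1", { id: "n1", ownerId: "u1", body: "u1's note" }],
  ]);

  // Variant A: passes the demo, but any caller who reaches this path can
  // overwrite anyone's note, i.e. effectively write access to the whole table.
  function updateNoteNaive(_caller: User, noteId: string, body: string): void {
    const note = db.get(noteId);
    if (note) note.body = body;
  }

  // Variant B: identical from the owner's point of view in the UI, but refuses
  // writes the caller isn't entitled to make.
  function updateNoteChecked(caller: User, noteId: string, body: string): void {
    const note = db.get(noteId);
    if (!note) return;
    if (note.ownerId !== caller.id && !caller.isAdmin) throw new Error("forbidden");
    note.body = body;
  }

Both compile and both demo fine; only a reviewer, or a test that exercises a second user, catches the difference.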
I think a good parallel here would be to imagine that tomorrow we had access to humanoid robots that could do construction work. Would we want them to just go build skyscrapers and bridges, and view every construction business that didn't embrace the humanoid robots as akin to doing arithmetic by hand?
You could of course argue that there's no problem here so long as trained construction workers are supervising the robots to make sure they're getting tolerances right and doing good welds. But then what happens 10 years down the road, when humans haven't built a building in years? If people are not writing code anymore, how can they be expected to review AI-generated code?
I think the optimistic picture here is that humans just won't be needed in the future. In theory, when models are good enough, we should be able to trust the AI systems more than humans. But the less optimistic side of me questions a future in which humans no longer do, or even know how to do, such fundamental things.
You might think I'm kidding, but search for redox on GitHub; you will find that project and the anonymous contributions.
Decided to figure out what this "vibe coding" nonsense is, and now there's a certain level of joy to all of this again. Being able to clearly define everything using markdown contexts before any code is even written has been a great way to brain dump those 25 years of experience and actually watch something sane get produced.
Here are the stats Claude Code gave me:
Overview
┌───────────────┬────────────────────────────┐
│ Metric │ Value │
├───────────────┼────────────────────────────┤
│ Total Commits │ 365 │
├───────────────┼────────────────────────────┤
│ Project Age │ 7 days (Jan 20 - 27, 2026) │
├───────────────┼────────────────────────────┤
│ Open Issues │ 5 │
├───────────────┼────────────────────────────┤
│ Contributors │ 1 │
└───────────────┴────────────────────────────┘
Lines of Code by Language
┌───────────────────────────┬───────┬────────┬───────────┐
│ Language │ Files │ Lines │ % of Code │
├───────────────────────────┼───────┼────────┼───────────┤
│ Rust (Backend) │ 94 │ 31,317 │ 51.8% │
├───────────────────────────┼───────┼────────┼───────────┤
│ TypeScript/TSX (Frontend) │ 189 │ 29,167 │ 48.2% │
├───────────────────────────┼───────┼────────┼───────────┤
│ SQL (Migrations) │ 34 │ 1,334 │ — │
├───────────────────────────┼───────┼────────┼───────────┤
│ CSS │ — │ 1,868 │ — │
├───────────────────────────┼───────┼────────┼───────────┤
│ Markdown (Docs) │ 37 │ 9,485 │ — │
├───────────────────────────┼───────┼────────┼───────────┤
│ Total Source │ 317 │ 60,484 │ 100% │
└───────────────────────────┴───────┴────────┴───────────┘
I then realized I could feed it everything it ever needed to know. Just create a docs/* folder and tell it to read that every session.
Through discovery I learned about CLAUDE.md and adding skills.
Now I have an /analyst, /engineer, and /devops that I talk to all day, each with their own logic and limitations, as well as the more general project CLAUDE.md and dozens of docs/* files we collaborate on.
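Roughly, the layout looks something like this (file names are just illustrative, and the exact paths for skills depend on how you've set Claude Code up):

  CLAUDE.md            <- project-wide context: stack, conventions, and an
                          instruction to read docs/* at the start of every session
  docs/
    architecture.md    <- the brain dump: how the pieces fit together
    decisions.md       <- why things are the way they are, so it stops relitigating them
    api.md             <- contracts it must not break
  .claude/
    skills/            <- where the /analyst, /engineer, /devops personas live,
                          each with its own scope and limitations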
I'm at the point where I'm running happy.engineering on my phone and don't even need to sit in front of the computer anymore.
I wonder if this line
> It will configure an auth_backend.rs and wire up a basic user
over a big enough number of projects, will lead to at least 2-3 different usernames.
I actually disagree with Andrej here re: "Generation (writing code) and discrimination (reading code) are different capabilities in the brain." I would argue that the only reason he can read code fluently, find issues, etc. is that he has spent years writing code in a non-AI-assisted world. As time goes on, he will become substantially worse at it.
This also bodes incredibly poorly for the next generation, who will now mostly avoid writing code in their formative years and thus fail to develop even an idea of what good code is, how and why it works, why you make certain decisions and not others, etc. Ultimately you will see them become utterly dependent on AI, unable to make progress without it.
IMO outsourcing thinking is going to have incredibly negative consequences for the world at large.
I expect interviews will evolve into "build project X with an LLM while we watch" plus an audit of the agent specs.
Fun stat: the correlation is real; people who were good at vibe coding also had offer(s) with other companies that didn't run vibe-coding interviews.
It doesn’t work you can’t be productive without agent capable of doing queries to db etc
What? I can't parse this sentence. Maybe get an AI to rewrite it?
This is about where I'm at. I love pure Claude Code for code I don't care about, but for anything I'm working on with other people I need to audit the results, which I much prefer to do in an IDE.
Writing code, in many cases, is faster for me than writing English (that is how PLs are designed, btw!). LLM/agentic coding is very “neat” but still a toy to the professional, I would say. I doubt reports like this one. For those of us building real-world products with shelf-lives (is Andrej representative of this archetype?), I just don’t see the value-add touted out there. I’d love to be proven wrong. But writing code (in code, not English), to me and many others, is still faster than reading/proving it.
I think there’s a combination of fetishizing and Stockholm syndroming going on in these enthusiastic self-reports. PMW.
True, I feel as though I'd have to become Steinbeck to get it to do what I "really" wanted, with all the true nuance.
Interesting.