Posted by phire 7/1/2025
Authoring has never been the bottleneck, the same way my typing speed has never been the bottleneck.
The bottleneck has been, and continues to be, code review. It was in our pitch deck 4 years ago; it's still there.
For most companies, by default, it's a process that's synchronously blocked on another human. We need to either make it async (stacking) or automate it (better, more intelligent CI), or, ideally, both.
The tools we have are outdated, and if you're a team with more than 50 engineers you've already spun up a sub-team (devx, dev velocity, or dev productivity) whose job is to address this. Despite that, industry-wide, we've still done very little, because it's a philosophically poorly understood part of the process. (Why do we do code review? Seriously, in three bullet points, what's the purpose? Most developers realize they haven't thought that deeply here.)
- functionality: does it work, and does it meet requirements?
- bug prevention: reliability, not breaking things
- matching the system architecture and best practices of the codebase
Other ideas:
- style and readability
- learning, mostly for the junior and less so for the senior
- checking the “code review” box off your list
1. Collaborate asynchronously on the architectural approach (simplify, avoid reinventing the wheel)
2. Ask "why" questions; document the answers in commits and/or comments to increase understanding
3. Share knowledge
4. Bonus: find issues/errors
There are other benefits too, like building rapport and getting recognition for especially great code.
To me, code reviews are supposed to be a calm process that takes time, not a hurdle to quickly kick out of the way. Many disagree with me, but I'm not sure what the alternative is.
Edit: people tend to say reviews are for "bug finding" and "verifying requirements". I think that's at best a bonus side effect; it's too much to ask of a person who is merely reading the code. In my case, code reviews don't go beyond reading the code (albeit deeply and carefully). We do, however, have QA, which is better suited to verifying overall functionality.
This really gets at the benefits you mention and keeps people aligned with them, instead of making code review feel like something to rush through.
An AI maximalist might say that code review is no longer necessary: when there is an issue in a subsystem nobody is familiar with, you can simply ask the AI to read that source code and come back with a report of where the bug is and a proposal for how to fix it. And since code review is useless anyway, you might as well take the human out of the loop entirely: just have the AI immediately commit the change, push it to production, and iterate if or when another issue emerges.
This is the dream of autonomous, self-managing systems! Of course this dream is decades old at this point, and despite developing ever more complex systems it turns out that we were never quite able to do away with humans altogether. Thus, code review still appears to be useful. But it's only useful if everybody goes into it with the mindset that the goal is knowledge sharing. If the outcome of a review is not that everyone comes out of it with a good understanding of the purpose and function of the code being committed, then imo it was a waste of time.
Also hi Peter! Long time :)
Using Claude Code to first write specs, then break them down into cards, build glossaries, design blueprints, and finally write code is just a perfect fit for someone like me.
I know the fundamentals of programming, but since 1978 I've written in so many languages that the syntax now gets in the way. I just want to write code that does what I want, and LLMs are beyond amazing at that.
I'm building APIs and implementing things I'd never dreamed of spending time learning, and I can focus on what I really want: design, optimisation, simplification, and outcomes.
LLMs are amazing for me.
Often giving 90% of what you need.
But those junior devs…
Code is not a bottleneck. Specs are. How the software is supposed to work, down to minuscule detail. Not code, not unit tests, not integration tests. Just plain English, diagrams, user stories.
The bottleneck is designing those specs and then iterating on them with end users, listening to feedback, then going back and figuring out whether the spec could be improved (or even should be). Implementing the actual improvements isn't hard once you have specs.
If the specs are really good, then any sufficiently good LLM/agent should be able to one-shot the solution, all the unit tests, and all the integration tests. If it's too large to one-shot, remember that product specs should never be a single markdown file. Think of them more like a wiki, with links and references. Then all you have to do is implement it feature by feature.
So... coding. :P
A product spec is written in English, in such a way that everybody can understand it without technical knowledge, because it doesn't deal in C arrays or the nuances of how queues work, just common sense and real-life objects/entities! The actual code is then an abstraction that takes the spec and implements it for particular architecture/scalability requirements and so on.
For example, you can ask for a to-do list app. You could build it:
- As a CLI tool
- As a web UI
- As an AI agent that operates entirely through voice recognition and text-to-speech as its interface
A product spec can omit this “tiny” detail; code cannot (see the sketch below).
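To make that concrete, here is a minimal, hypothetical sketch of just the CLI variant in Python. Even this tiny implementation has to commit to an argument syntax, a storage format, and a file location, none of which the product spec mentions; all of those names and choices are assumptions, not anything from the spec.

```python
#!/usr/bin/env python3
# Hypothetical sketch only: the CLI flavour of the to-do app. Even this
# minimal version commits to details the product spec never mentions:
# the command syntax, the storage format, and the file path.
import argparse
import json
import pathlib

TODO_FILE = pathlib.Path.home() / ".todo.json"  # storage choice the spec doesn't care about

def load_items():
    return json.loads(TODO_FILE.read_text()) if TODO_FILE.exists() else []

def save_items(items):
    TODO_FILE.write_text(json.dumps(items, indent=2))

def main():
    parser = argparse.ArgumentParser(description="Minimal to-do CLI (sketch)")
    sub = parser.add_subparsers(dest="cmd", required=True)
    add = sub.add_parser("add", help="add a to-do item")
    add.add_argument("text")
    sub.add_parser("list", help="print all items")
    args = parser.parse_args()

    items = load_items()
    if args.cmd == "add":
        items.append(args.text)
        save_items(items)
    else:  # "list"
        for i, text in enumerate(items, 1):
            print(f"{i}. {text}")

if __name__ == "__main__":
    main()
```

Swap the interface for a web UI or a voice agent and every one of those choices changes, while the product spec stays identical.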
Even if you specify in the product spec that this is a web UI to-do app, there are still tons of things for the developer to choose:
- Programming language
- Cloud/Self-hosted
- Architecture (monolith/microservices/modular monolith/SOA/event-driven)
You wouldn't specify in a product spec that to-do items have to go to an SQS queue that a Lambda picks up and adds to a DB, would you? That has to be a separate technical spec document, which is simply called documentation (plus the actual code in the repositories).
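For contrast, here is roughly what that level of detail looks like once it becomes code: a hypothetical sketch of the SQS-triggered Lambda handler, where the table name and message fields are assumptions, not anything a product spec would ever state.

```python
# Hypothetical sketch only: the "SQS queue -> Lambda -> DB" detail from the
# technical spec, expressed as code. The queue wiring, table name, and
# message fields are all assumptions; a product spec would never state them.
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("todo-items")  # assumed table name

def handler(event, context):
    # For an SQS trigger, Lambda invokes this with a batch of messages.
    for record in event["Records"]:
        item = json.loads(record["body"])  # assumed message shape: {"id": ..., "text": ...}
        table.put_item(
            Item={
                "id": item["id"],
                "text": item["text"],
                "done": False,
            }
        )
    return {"processed": len(event["Records"])}
```

None of this belongs in the product spec; it belongs in the technical documentation and the repository itself.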
Otherwise I'm writing embedded systems. Fine, LLM, you hold the scope probe and figure out why that PWM is glitching.
The people who have to read your self-review will simply throw what you gave them into their own instance of the same corporate AI.
At which point, why not simply let the corporate AI tell you what to do as your complete job description; the AI will tell you to "please hold the scope probe as chatbotAI branding-opportunity fixes the glitches in the PWM".
I guess we pass the butter now...
This narrative is not new. Many times I've seen decisions made on the basis of "does it require writing any code or not". But I agree with the sentiment: the problem is not the code itself but the cost of ownership of that code: how it's tested, where it's deployed, how it's monitored, who maintains it, etc.
My backspace and delete keys loom larger than the rest of my keyboard combined. Plodding through with meager fingers, I could always find fault faster than I could produce functionality. It was a constant, ego-depleting struggle to set aside encountered misgivings for the sake of maintaining forward progress on some feature or ticket.
Now, given defined goals and architectural vision, which I've never been short of, the activation energy for producing large chunks of 'good enough' code to move projects forward is almost zero. Even my own oversized backspace is no match for the torrent.
Again, personal variation, but I expect that I am easily 10x in both ambition and execution compared to a year ago.
That's now melted away. For the first time my mind feels free to think. Everything is moving almost as fast as I am thinking; I'm far less bogged down in the slow parts, which the LLM can do. I spend so much more time thinking, designing, architecting, etc. Distracting thoughts now turn into completed quality-of-life features, done on the side. I guess it rewards ADD, the real kind, in a way that the regular world understandably punishes.
And it must free up mental space for me, because I find I can now review others' PRs more quickly as well. I don't use an LLM for this, and I don't have a great explanation for what is happening here.
Anyway, I'm not sure this is the same issue as yours, so it's interesting to think about what kinds of minds it's freeing, and which kinds it's of less use to.
Either the bottleneck is between product organizations and engineering, on getting decent requirements so teams know what to build, or it's engineering teams being unwilling to start until they have every i dotted and t crossed.
The back end of the problem is that most of the code we see written is already poorly documented, across the spectrum. How many commit messages have we seen that just say "wip", for instance? Or you go to a repository and the README is empty?
So the real danger is the Stack Overflow effect on steroids. It's not just a block of code that was pasted in without being understood; it's now entire projects, with little to no documentation to explain what was done or why decisions were made.
If the developer is not savvy about the business case, he cannot have that vision; all he can do is implement requirements as described by the business, which itself doesn't understand the technology well enough to choose the right path.
The tricky part is always the action plan: how do we achieve X in steps without blowing budget/time/people/other resources?
I'm going to skip the obvious answer about how LLMs can actually improve code quality and reviewability and focus on a different argument: why engineers even care about code quality.
Most code is not written as a work of art but as a functional piece meant to achieve a return on capital. Programmers get paid by companies to produce code, and that payment is ultimately driven by the expectation of a return on the investment in the code. Ultimately, business owners do not really care about code quality as long as it can deliver a return on capital; there are plenty of profitable businesses running on spaghetti code and old technology. However, engineers realized that bad code has costly downstream consequences, including ones that affect ROI. Tech debt had to be paid not just in developer hours but in dollars and cents. Hence the obsession with code quality and code reviews, and this current debate.
Many, including Andrew Ng at YC Startup School recently, are realizing that writing bad code is now a two-way door instead of the one-way door it used to be. With LLMs you can deploy some bad code, realize it's bad, and rewrite that entire codebase tomorrow at near-negligible cost. The fact that an LLM can write some very, very bad code matters less than the return on invested capital of that code, especially when you take into account the speed at which that code can be fixed or completely rewritten in the future, and that in that future LLMs will be even more capable than they are now.
Here's my advice: give in to the shitty code and merge it. Claude 6 will refactor all of it to your liking very soon.
Most of these only exist because one person cannot code fast enough to produce all the code. If one programmer were fast enough, you would not need a team, and then you wouldn't have the coordination and communication overhead, and so on.
If the amount of code grows without bound and becomes an incoherent mess, team sizes may not, in fact, get smaller.
One useful dimension along which to consider team organization is the "lone genius" to "infinite monkeys on typewriters" axis. Agile as usually practised, microservices, and other recent techniques seem to me to be addressing the "monkeys on typewriters" end of the spectrum. Smalltalk and Common Lisp were built around the idea of putting amazing tools in the hands of a single dev or a small group of devs. There are still things that address this group (e.g. it's part of the Rails philosophy), but it is less prominent.