Posted by phire 7/1/2025

Writing Code Was Never the Bottleneck (ordep.dev)
776 points | 389 comments | page 2
tomasreimers 7/3/2025|
I've felt like a broken record the past few weeks, but this.

Authoring has never been the bottleneck, the same way my typing speed has never been the bottleneck.

The bottleneck has been, and continues to be, code review. It was in our pitch deck 4 years ago; it's still there.

For most companies, by default, it's a process that's synchronously blocked on another human. We need to either make it async (stacking) or automate it (better, more intelligent CI), or, ideally, both.
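To make "automate it" concrete, here's a toy sketch (illustrative only, not a description of any particular product) of a mechanical pre-review gate that CI could run so humans only review what machines can't check; the tool names are just common examples:

    # Toy pre-review gate: run the mechanical checks before a human looks at the diff.
    # The specific tools (ruff, mypy, pytest) are illustrative assumptions, not a prescribed stack.
    import subprocess
    import sys

    CHECKS = [
        ["ruff", "check", "."],  # lint
        ["mypy", "."],           # type check
        ["pytest", "-q"],        # tests
    ]

    def main() -> int:
        failed = []
        for cmd in CHECKS:
            # A nonzero exit code marks the check as failed.
            if subprocess.run(cmd).returncode != 0:
                failed.append(" ".join(cmd))
        if failed:
            print("Not ready for human review, fix these first:", ", ".join(failed))
            return 1
        print("Mechanical checks passed; request human review.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

The point isn't the specific tools; it's that anything a machine can catch shouldn't be consuming a reviewer's attention.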

The tools we have are outdated, and if you're a team with more than 50 engineers you've already spun up a sub-team (devx, dev velocity, or dev productivity) whose job is to address this. Despite that, industry-wide, we've still done very little, because it's a philosophically poorly understood part of the process (why do we do code review? Seriously, in three bullet points, what's the purpose? Most developers realize they haven't thought that deeply about it).

https://graphite.dev

Dumblydorr 7/3/2025||
What is the purpose of code review, in three points? I'll give it a try; let me know your thoughts!

- Functionality: does it work, and does it meet the requirements?

- Bug prevention and reliability: not breaking things

- Matching the system architecture and best practices of the codebase

Other ideas:

- Style and readability

- Learning for the junior, and probably less so for the senior

- Checking the "code review" box off your list

hakunin 7/3/2025|||
I don't honestly know why most people do code reviews, because it's often presented as some kind of "quick sanity check" or "plz approve". Here's why we do code reviews where I get to lead the practice:

1. Collaborate asynchronously on the architectural approach (simplify, avoid reinventing the wheel)

2. Ask "why" questions, document answers in commits and/or comments to increase understanding

3. Share knowledge

4. Bonus: find issues/errors

There are other benefits too, like building rapport and getting some recognition for especially great code.

To me, code reviews are supposed to be a calm process that takes time, not a hurdle to quickly kick out of the way. Many disagree with me, but I'm not sure what the alternative is.

Edit: people tend to say reviews are for "bug finding" and "verifying requirements". I think that's at best a bonus side effect; it's too much to ask of a person who is merely reading the code. In my case, code reviews don't go beyond reading the code (albeit deeply and carefully). We do, however, have QA, which is better suited to verifying overall functionality.

kevmo314 7/3/2025|||
I've found great benefit in voluntary code reviews. Engineers are self-aware enough that if they're at all worried about a change working, they will opt for a voluntary code review. As a reviewer I also feel like my opinion is more welcome, because I know someone chose to ask for it instead of being forced to, so I pay more attention.

This really gets at the benefits you mention and keeps people aligned with them instead of feeling like code review should be rushed.

tomasreimers 7/3/2025|||
This.
alisonatwork 7/4/2025|||
To me the core purpose of code review is clear: knowledge sharing. If the only person who knows about a particular change is the person who wrote it, then you have a critical point of failure. If there is an issue in a subsystem whose owner is away or who has moved on, that issue is likely to take much longer to resolve if the person on the case is looking at the code for the very first time.

An AI maximalist might say that code review is no longer necessary because in the case that there is an issue in a subsystem nobody is familiar with, you can simply ask the AI to read that source code and come back with a report of where the bug is and a proposal of how to fix it. And, since code review is useless anyway, might as well take the human out of the loop entirely - just have AI immediately commit the change and push it to production and iterate if or when another issue emerges.

This is the dream of autonomous, self-managing systems! Of course this dream is decades old at this point, and despite developing ever more complex systems it turns out that we were never quite able to do away with humans altogether. Thus, code review still appears to be useful. But it's only useful if everybody goes into it with the mindset that the goal is knowledge sharing. If the outcome of a review is not that everyone comes out of it with a good understanding of the purpose and function of the code being committed, then imo it was a waste of time.

peterldowns 7/3/2025||
Hey Tomas, been a while! I like the approach that graphite is taking to AI code review — focus on automating the “lint” or “hey this is clearly wrong” or “you probably wanted to not introduce a security flaw here” type stuff, so that humans can focus on the more important details in a changeset. As your AI reviewers take on more tasks, have your answers to your question (“why do we do code review”) changed at all?
tomasreimers 7/3/2025||
Certainly! A lot less proofreading and pair programming, and a lot more architecture / "hey, should we be going in this direction?" / sharing of tribal knowledge.

Also hi Peter! Long time :)

ogou 7/3/2025||
My last job had a team that was about 50% temps and contractors. When the LLMs got popular, I could tell right away: when I reviewed their code, it was completely different from their actual style. The seniors pushed back because it was costing us more time to review and we knew it was generated. Also, they couldn't talk about what they did in meetings; they didn't know what it was really doing. Eventually the department manager got tired of our complaining and said "it's all inevitable." Then those mercenaries started to just rubber-stamp each other's PRs. That led to some colossal fuckups in production. Some of them were fired quietly, and the new people promptly started doing the same thing. Why should they care? It's just a short-term contract on the way to the big payday, right?
calrain 7/3/2025||
I've always enjoyed software design; for me, the coding was the bottleneck, and it was frustrating to have to roll through different approaches when I so clearly knew the outcome I wanted.

Using Claude Code to first write specs, then break them down into cards, build glossaries, design blueprints, and finally write code is just a perfect fit for someone like me.

I know the fundamentals of programming, but since 1978 I've written in so many languages that the syntax now gets in the way. I just want to write code that does what I want, and LLMs are beyond amazing at that.

I'm building APIs and implementing things I'd never have dreamed of spending time learning, and I can focus on what I really want: design, optimisation, simplification, and outcomes.

LLMs are amazing for me.

pragmatic 7/3/2025|
Agreed, LLMs are fantastic autocomplete for experts.

Often giving 90% of what you need.

But those junior devs…

konovalov-nk 7/3/2025||
Nobody mentioned Joel Spolsky's October 2nd, 2000 article, so I'll start: https://www.joelonsoftware.com/2000/10/02/painless-functiona...

Code is not a bottleneck. Specs are. How the software is supposed to work, down to minuscule detail. Not code, not unit tests, not integration tests. Just plain English, diagrams, user stories.

The bottleneck is designing those specs and then iterating on them with end users, listening to feedback, then going back and figuring out whether the spec could be improved (or even should be). Implementing the actual improvements isn't hard once you have the specs.

If the specs are really good, then any sufficiently good LLM/agent should be able to one-shot the solution, all the unit tests, and all the integration tests. If it's too large to one-shot, remember that product specs should never be a single markdown file. Think of them more like a wiki, with links and references. Then all you have to do is implement it feature by feature.

intelVISA 7/3/2025|
> How the software is supposed to work, down to minuscule detail.

So... coding. :P

konovalov-nk 7/4/2025||
Code is a technical specification, and it could be in any programming language, markup, Terraform files, configuration, whatever.

A product spec is written in English, in such a way that everybody can understand it without technical knowledge, because it doesn't deal in C arrays and the nuances of how queues work, just common sense and real-life objects/entities! The actual code is then an abstraction that takes the spec and implements it for particular architecture/scalability requirements and so on.

For example, you can ask for a to-do list app. You could build it:

- As a CLI tool

- As a UI web interface

- As an AI agent that operates entirely on voice recognition and text-to-speech as interface

A product spec can omit this "tiny" detail; code cannot.

Even if you specify in the product spec that this is a web UI to-do app, there are still tons of things for the developer to choose:

- Programming language

- Cloud/Self-hosted

- Architecture (monolith/microservices/modular monolith/SOA/event-driven)

You wouldn't specify in a product spec that to-do items have to go to an SQS queue that a Lambda picks up and adds to a DB, would you? That belongs in a separate technical spec document, which is simply called documentation (plus the actual code in the repositories).
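To make the contrast concrete, here's a rough sketch (the queue wiring, table name, and message shape are made-up assumptions, not a recommended design) of exactly the kind of detail that belongs in the technical spec and code rather than the product spec: a Python Lambda handler that drains to-do items from an SQS queue into DynamoDB.

    # Illustrative sketch only: table name and message shape are assumptions.
    import json
    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("todo-items")  # hypothetical table name

    def handler(event, context):
        # The SQS -> Lambda integration delivers a batch of messages in event["Records"];
        # each record body is assumed to be one to-do item serialized as JSON.
        for record in event["Records"]:
            item = json.loads(record["body"])  # e.g. {"id": "1", "title": "buy milk"}
            table.put_item(Item=item)

The product spec only ever needs to say "a user can add a to-do item and see it later"; every line above is an implementation choice the developer makes.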

kabdib 7/3/2025||
My LLM win this year was to give the corporate AI my last year's worth of notes, emails, and documents and ask it to write my self-review. It did a great job. I'm never writing another one of those stupid bits of psychological torture again.

Otherwise I'm writing embedded systems. Fine, LLM: you hold the scope probe and figure out why that PWM is glitching.

williamdclt 7/3/2025||
That's a really good idea, and it would have the double benefit of incentivising me to keep better track of information and communication, and to take more notes, all of which certainly has various other benefits.
ysofunny 7/3/2025||
But as soon as you do that,

the people who have to read your self-review will simply throw what you gave them into their own instance of the same corporate AI,

at which point, why not simply let the corporate AI tell you what to do as your complete job description? The AI will tell you to "please hold the scope probe while chatbotAI branding-opportunity fixes the glitches in the PWM".

I guess we pass the butter now...

neoden 7/3/2025||
> Now, with LLMs making it easy to generate working code faster than ever, a new narrative has emerged: that writing code was the bottleneck, and we’ve finally cracked it.

This narrative is not new. Many times I've seen decisions made on the basis of "does it require writing any code or not". But I agree with the sentiment: the problem is not the code itself but the cost of ownership of that code, i.e. how it is tested, where it is deployed, how it is monitored, who maintains it, etc.

NiloCK 7/3/2025||
Writing code was never the only bottleneck, and I'm sure there is personal variation, but for me personally it has always been the dominant bottleneck.

My backspace and delete keys loom larger than the rest of my keyboard combined. Plodding through with meager fingers, I could always find fault faster than I could produce functionality. It was a constant, ego-depleting struggle to set aside encountered misgivings for the sake of maintaining forward progress on some feature or ticket.

Now, given defined goals and architectural vision, which I've never been short of, the activation energy for producing large chunks of 'good enough' code to move projects forward is almost zero. Even my own oversized backspace is no match for the torrent.

Again, personal variation, but I expect that I am easily 10x in both ambition and execution compared to a year ago.

cloverich 7/3/2025|
It's interesting to think about, but LLMs are perhaps not impacting everyone, even at the same level, in the same ways. I'm similarly more productive, but for a different reason. I've always struggled with task persistence when the task is easy and monotonous... or something like that. Easy jobs, books, and code that still took a while to do always took the longest; focus was impossible. Hard tasks, books, classes, etc., I always did best at. I nearly failed school in the easiest course and got the highest marks in the hardest ones. I've never gotten over this.

That's now melted away. For the first time my mind feels free to think. Everything is moving almost as fast as I am thinking; I'm far less bogged down in the slow parts, which the LLM can do. I spend so much more time thinking, designing, architecting, etc. Distracting thoughts now turn into completed quality-of-life features, done on the side. I guess it rewards ADD, the real kind, in a way that the regular world understandably punishes.

And it must free up mental space for me, because I find I can now review others' PRs more quickly as well. I don't use an LLM for this, and I don't have a great explanation for what is happening here.

Anyway, I'm not sure this is the same issue as yours, so it's interesting to think about what kinds of minds it's freeing, and what kinds it's of less use to.

smoothdev-bp 7/3/2025||
I don't think the author's comments are without merit. My experience has shown me the issues are usually more upfront and after the fact.

On the front end, there's the bottleneck between product organizations and engineering in getting decent requirements to know what to build, with engineering teams unwilling to start until they have every i dotted and t crossed.

The back end of the problem is that most of the code we see written is already poorly documented, across the spectrum. How many commit messages have we seen that just say "wip", for instance? Or you go to a repository and the README is empty?

So the real danger is the Stack Overflow effect on steroids. It's not just a block of code that was pasted in without being understood; it's now entire projects, and there's little to no documentation to explain what was done or why decisions were made.

mgaunard 7/3/2025|
In my experience the difficulty in building good software is having a good vision of what the end result should look like and how to get there.

If the developer is not savvy about the business case, he cannot have that vision, and all he can do is implement requirements as described by the business, which itself doesn't sufficiently understand technology to build the right path.

nkjoep 7/3/2025||
I tend to agree. Ideas are cheap and can be easily steered around.

The tricky part is always the action plan: how do we achieve X in steps without blowing budget/time/people/other resources?

aiisahik 7/12/2025||
Lots of devs complaining about code quality and understandability here.

I'm going to skip the obvious answer about how LLMs can actually improve code quality and reviewability and focus on a different argument: why engineers even care about code quality.

Most code is not written as a work of art but as an important functional piece to achieve a return on capital. Programmers get paid by companies to produce code, and that payment is ultimately driven by the expectation of a return on investment in the code. Ultimately, business owners do not really care about code quality as long as it delivers a return on capital; there are plenty of profitable businesses running on spaghetti code and old technology. However, engineers realized that bad code resulted in costly downstream consequences, including ones that affected ROI. Tech debt had to be paid not just in developer hours but also in dollars and cents. Hence this obsession with code quality, code reviews, and this current debate.

Many, including Andrew Ng at YC Startup School recently, are realizing that writing bad code is now a two-way door instead of the one-way door it used to be. With LLMs you can deploy some bad code, realize it's bad, and rewrite the entire codebase tomorrow at near-negligible cost. The fact that an LLM can write some very, very bad code matters less than the return on invested capital of that code, especially when taking into account the speed at which that code can be fixed or completely rewritten in the future, and especially when, in that future, LLMs will be even more capable than they are now.

Here's my advice: give in to the shitty code and merge it. Claude 6 will refactor all of it to your liking very soon.

noelwelsh 7/3/2025|
> The actual bottlenecks were, and still are, code reviews, knowledge transfer through mentoring and pairing, testing, debugging, and the human overhead of coordination and communication. All of this wrapped inside the labyrinth of tickets, planning meetings, and agile rituals.

Most of these only exist because one person cannot code fast enough to produce all the code. If one programmer were fast enough, you would not need a team, and then you wouldn't have the coordination and communication overhead and so on.

brazzy 7/3/2025|
That hypothetical one person would not just need to produce the code, but also understand how it fulfills the requirements. Otherwise they are unable to fix problems or make changes.

If the amount of code grows without bounds and is an incoherent mess, team sizes may not, in fact, actually get smaller.

noelwelsh 7/3/2025||
Agreed. I don't think anyone can produce useful code without understanding what it should do.

One useful dimension along which to consider team organization is the "lone genius" to "infinite monkeys on typewriters" axis. Agile as usually practised, microservices, and other recent techniques seem to me to be addressing the "monkeys on typewriters" end of the spectrum. Smalltalk and Common Lisp were built around the idea of putting amazing tools in the hands of a single dev or a small group of devs. There are still things that address this group (e.g. it's part of the Rails philosophy), but it is less prominent.
