
Posted by phire 7/1/2025

Writing Code Was Never the Bottleneck (ordep.dev)
776 points | 389 comments
2d8a875f-39a2-4 7/3/2025|
The author puts the BLUF: "The actual bottlenecks were, and still are, code reviews, knowledge transfer through mentoring and pairing, testing, debugging, and the human overhead of coordination and communication."

They're not wrong, but they're missing the point. These bottlenecks can be reduced when there are fewer humans involved.

Somewhat cynically:

code reviews: now sometimes there's just one person involved (reviewing LLM code) instead of two (code author + reviewer)

knowledge transfer: fewer people involved means this is less of an overhead

debugging: no change, yet

coordination and communication: fewer people means less overhead

LLMs shift the workload — they don’t remove it: sure, but shifting workload onto automation reduces the people involved

Understanding code is still the hard part: not much change, yet

Teams still rely on trust and shared context: much easier when there are fewer people involved

... and so on.

"Fewer humans involved" remains a high priority goal for a lot of employers. You can never forget that.

mdavid626 7/3/2025||
Slowly it’s getting to be time to become a goose farmer. Enough of this AI shit.
gabrielso 7/3/2025||
The article misses the point that LLMs are not removing the bottleneck of writing code for people who know how to write code. It's removing this bottleneck for everyone else.
CerebralCerb 7/3/2025||
I have yet to see anyone who previously could not write code be able to do so, beyond simple scripts, with LLMs.
gabrielso 7/3/2025|||
In my experience, non-coders with LLMs can go beyond simple scripts and build non-trivial small applications nowadays, but the difference of outcomes between them and a competent coder with LLMs is still staggering.
bubblyworld 7/3/2025||||
I have - somebody in my mushroom foraging group wrote an app that predicts what kinds of mushrooms you are likely to find in different spots in our area, based on weather forecasts and data he's been collecting for years. It's a dead simple frontend/backend, but it works, he built and deployed it himself and he had zero coding experience before this. Pretty impressive, from my perspective.

As a programmer I can see all the rough edges but that doesn't seem to bother the other 99% of people on the group who use it.

oc1 7/3/2025|||
At least they will be more confident than ever that they can when all the LLM ever says is "You are absolutely right!" ;)
dankobgd 7/3/2025||
Then the human resources woman should be the only programmer
mhandley 7/3/2025||
I've spent the last few weeks writing a non-trivial distributed system using Codex (OpenAI's agentic coding system). I started by writing a design brief, and iterated with o3 to refine it so it was more complete and less ambiguous. Then I asked it to write a spec of all the messages - didn't like its first attempt, but iterated on it until I did like it. Then got it to write a project plan, and iterated on that. Only then did I start on the code. The purpose of all this is to provide it some context.

It generated around 13K lines of Go for me in just over two weeks. I didn't previously speak Go, but it's not hard to skimread to get the gist of its approach. I probably wrote about 100 lines, though I added and removed a lot of logging at various times to understand what was actually happening. I got it to write a lot of unit tests, so that coverage testing is very good. But I didn't actually pay a lot of attention to most of those tests at first, because it generally got all the fine detail stuff exactly right on the first pass. So why all the tests? First, if something seems off, I have a place to start a deep dive. Second, it pins down the architecture so that functionality can't creep without me noticing that it needs to change the unit tests.
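
(Editor's note: for illustration, a "pinning" test of the kind described might look like the following in Go. The `Heartbeat` message and the `encode`/`decode` helpers are hypothetical, not taken from the commenter's project; the point is that any drift in the wire format breaks the round-trip check visibly.)

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// Heartbeat is a hypothetical wire message. Any change to its fields
// breaks the round-trip check below, so the message spec can't drift silently.
type Heartbeat struct {
	NodeID string `json:"node_id"`
	Seq    uint64 `json:"seq"`
}

func encode(m Heartbeat) ([]byte, error) { return json.Marshal(m) }

func decode(b []byte) (Heartbeat, error) {
	var m Heartbeat
	err := json.Unmarshal(b, &m)
	return m, err
}

func main() {
	// Table-driven round-trip check, in the style of a Go unit test.
	cases := []Heartbeat{
		{NodeID: "a", Seq: 1},
		{NodeID: "b", Seq: 42},
	}
	for _, want := range cases {
		b, err := encode(want)
		if err != nil {
			panic(err)
		}
		got, err := decode(b)
		if err != nil || !reflect.DeepEqual(got, want) {
			panic(fmt.Sprintf("round-trip mismatch: got %+v want %+v", got, want))
		}
	}
	fmt.Println("ok")
}
```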

Some observations.

- Coding this way is very effective - the new models almost never make fine detail mistakes. But I want to step it through chunks of new functionality at a size that I can at least skim and understand. So that 13K LoC is about 300 PRs. Otherwise I lose track of the big picture, and in this world, the big picture is my task.

- Normally the big design decisions are separated by days of fine detail coding. Using codex means I get to make all those decisions nearly back-to-back. This is both good and bad. The experience is quite intense - mostly I found the fine-detail coding to be "therapeutic", but I don't get that anymore. But not needing to pay attention to the fine detail (at least most of the time), means I think I have a better picture in my head of the overall code structure. We only have so much attention at any time, and if I don't have to hold the details, I can pay attention to the more important things.

- It's very good at writing integration tests quickly, so I write a lot more of them. These I do pay a lot of attention to. It's these tests that tell me if I got the design right, and if not, these are the place I start digging to understand what I need to change.

- Because it takes 10-30m to come back with a response, I try to keep it working on around three tasks at a time. That takes some effort, as it does require some context switching, and effort to give it tasks that won't result in large merge conflicts. If it was faster, I would not bother to set multiple tasks in parallel.

- Codex allows you to ask for multiple solutions. For simpler stuff, I've found asking for one is fine. For slightly more open questions, it's good to ask for multiple solutions, review them and decide which you prefer.

- Just prompting it with "find a bug and suggest a fix" every now and then often shows up real bugs. Mostly they tend to be some form of internal inconsistency, where I'd changed my mind about part of the code, and something elsewhere needed to be changed to be consistent.

- I learned a lot about Go from it. If I'd been writing myself, my Go would have looked more like C++ which I'm very familiar with. But it wrote more idiomatic Go from the start, and I've learned along the way.

- Any stock algorithm stuff it will one-shot: "Load this set of network links, build a graph from them, run Dijkstra over the graph from this node, and tell me the histogram of how many equal-cost shortest paths there are to every other node."

- It's much better than me about reasoning about concurrency. Though of course this is also one of Go's strengths.
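
(Editor's note: the "stock algorithm" example in the list above — Dijkstra with equal-cost shortest-path counting — can be sketched in Go as follows. This is an illustration of the task as described, not the commenter's generated code; the graph in `main` is made up.)

```go
package main

import (
	"container/heap"
	"fmt"
)

type edge struct{ to, w int }

// Min-heap of (node, tentative distance) entries for Dijkstra.
type item struct{ node, dist int }
type pq []item

func (p pq) Len() int           { return len(p) }
func (p pq) Less(i, j int) bool { return p[i].dist < p[j].dist }
func (p pq) Swap(i, j int)      { p[i], p[j] = p[j], p[i] }
func (p *pq) Push(x any)        { *p = append(*p, x.(item)) }
func (p *pq) Pop() any {
	old := *p
	it := old[len(old)-1]
	*p = old[:len(old)-1]
	return it
}

// equalCostPaths runs Dijkstra from src over a graph with positive edge
// weights and returns, per node, the number of equal-cost shortest paths.
func equalCostPaths(adj map[int][]edge, n, src int) []int {
	const inf = int(^uint(0) >> 1)
	dist := make([]int, n)
	count := make([]int, n)
	for i := range dist {
		dist[i] = inf
	}
	dist[src], count[src] = 0, 1
	q := &pq{{src, 0}}
	for q.Len() > 0 {
		it := heap.Pop(q).(item)
		if it.dist > dist[it.node] {
			continue // stale heap entry
		}
		for _, e := range adj[it.node] {
			nd := it.dist + e.w
			switch {
			case nd < dist[e.to]: // strictly shorter path found
				dist[e.to], count[e.to] = nd, count[it.node]
				heap.Push(q, item{e.to, nd})
			case nd == dist[e.to]: // another equal-cost path
				count[e.to] += count[it.node]
			}
		}
	}
	return count
}

func main() {
	// Hypothetical 4-node diamond: two equal-cost paths from 0 to 3.
	adj := map[int][]edge{
		0: {{1, 1}, {2, 1}},
		1: {{3, 1}},
		2: {{3, 1}},
	}
	counts := equalCostPaths(adj, 4, 0)
	// Histogram: path count -> how many destination nodes have it.
	hist := map[int]int{}
	for node, c := range counts {
		if node != 0 {
			hist[c]++
		}
	}
	fmt.Println(counts, hist) // prints: [1 1 1 2] map[1:2 2:1]
}
```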

Now I don't have any experience of how good it would be for maintaining a much larger codebase, but for this sort of scale of utility, I'm very impressed with how effective it has been.

Disclaimer: I work at OpenAI, but on networks, not AI.

nektro 7/9/2025||
yep!
cies 7/3/2025||
Funny article, but it seems that the author did not get the "Definition of Done" memo.

While...

> Writing Code Was Never the Bottleneck

...it was also never the job that needed to get done. We wanted to put working functionality in the hands of users, in an extensible way (so we could add more features later without too much hassle).

If lines of code were the metric of success (like "deal value" is for sales) we would incentivize developers for lines of code written.

lmm 7/3/2025||
> We wanted to put working functionality in the hands of users, in an extensible way (so we could add more features later without too much hassle).

I think the author agrees, and is arguing that LLMs don't help with that.

ctenb 7/3/2025|||
This article nowhere suggests that lines of code is something to be maximized.
weego 7/3/2025|||
We used to. Then we went through a phase of 'rockstar' developers who would spend their time on the fledgling social media sites musing on how their real value was measured in lines of code removed.
teiferer 7/3/2025||
[dead]
strawberrisword 7/4/2025||
[dead]
khazhoux 7/3/2025|
Yet another article trying to take away from the impact of LLMs. This one is more subtle than most, but still the message is "this problem that was solved, was never actually the problem."

Except... writing code is often a bottleneck. Yeah, code reviews, understanding the domain, etc., are also bottlenecks. But Cursor lets me write apps and tools in 1/20th the time it would take me in an area where I am an expert. It very much has removed my biggest bottleneck.

Aeglaecia 7/3/2025|
I feel like the author gave a pretty balanced take by recognising multiple ends of the equation... do you yourself recognise that the speedup described in your perspective is contingent on the environment you have applied LLMs to? Regarding frontend/WYSIWYG work, this is an environment where edge cases are deprioritized, and LLMs thus excel. On the other hand, in an environment reliant on non-publicly-available technical documentation, LLMs are borderline useless. And in an environment where edge cases are paramount, LLMs actively cause harm, as described elsewhere in the thread. These three environments are concurrently true; they do not detract from each other.