Posted by phire 7/1/2025

Writing Code Was Never the Bottleneck (ordep.dev)
776 points | 389 comments | page 4
btbuildem 7/3/2025
> LLMs reduce the time it takes to produce code, but they haven’t changed the amount of effort required to reason about behavior, identify subtle bugs, or ensure long-term maintainability.

I'd argue that they're slowly changing that as well -- you can ask an LLM to "read" code, summarize / review / criticize it. At the least, it can help accelerate onboarding onto new / unfamiliar codebases.

orwin 7/3/2025
I used Sonnet4 to write my last frontend task, fully, with minimal input. It is so much better than ChatGPT that it's unbelievable, but while a 6-hour coding task was transformed into a 30-minute supervision task that generated code that was both good and correct, I was a bit afraid for new engineers coming into an old project.

How are you supposed to understand code if you don't at least read it and fail a bit?

I'll continue using Sonnet4 for frontend personally; it has always been a pain point in the team, and I ended up being the most knowledgeable on it. Unless it's a new code architecture, I will understand what was changed and why, so I'm confident I can handle rapid iteration of the code, but my coworkers who already struggled with our design will probably struggle even more.

Sadly, I think in the end our code will be worse, but we are a team of 5 doing the work of a team of 8, so any help is welcome. (We used to do the work of 15, but our 10x developer sadly (for us) got caught being his excellent self by the CTO and now handles a new project. Hopefully with executive-level pay.)

pragmatic 7/3/2025
LLMs are fantastic at summaries and finding where XYZ happens.

“Where is the customer entity saved to the database?”
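For concreteness, here's a hypothetical sketch of the kind of code such a question resolves to; Customer, CustomerRepository, and the schema are invented for illustration, not taken from any real project:

    # Hypothetical persistence layer an LLM might point you to when asked
    # "where is the customer entity saved to the database?"
    import sqlite3
    from dataclasses import dataclass

    @dataclass
    class Customer:
        id: int
        name: str

    class CustomerRepository:
        def __init__(self, conn: sqlite3.Connection):
            self.conn = conn
            self.conn.execute(
                "CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT)"
            )

        def save(self, customer: Customer) -> None:
            # The actual write: the line you want the LLM to locate for you
            self.conn.execute(
                "INSERT OR REPLACE INTO customers (id, name) VALUES (?, ?)",
                (customer.id, customer.name),
            )
            self.conn.commit()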

richx 7/3/2025
I work on business software.

I think one very important aspect is requirements collection and definition. This includes communicating with the business users and trying to understand the issues and needs that the software is supposed to address, and then validating whether the software actually solves them sufficiently.

All of this requires human domain knowledge, communication and coordination skills.

intended 7/3/2025
I predict that using LLMs is going to be a firing offense.

There will be a hundred justifications for and against it, but in the end you are going to need junior devs.

If said junior dev has not done the work, and an LLM has helped them, you are going to lose your hair walking through the code - every single time.

So you will choose between doing the work yourself, hiring new devs, or making the environment you do your work in become predictable.

We can argue that LLMs are massive piracy monstrosities, with huge amounts of public code in them, or that they are interfering with the ability of people to learn the culture of the company. The argument doesn't matter, because the reasoning being done here is motivated reasoning.

You will not care how LLMs are kept out of the playpen; you will just care that they are out.

So incentives will be structured to ensure that is the case.

What will really be game, set, and match will be when some massive disaster strikes because of bad code, and it can be linked either directly or tangentially to LLMs.

pclowes 7/3/2025
I have not found LLMs to be most beneficial at straight-up writing code, especially in complex, large-scale systems. However, I have found them extremely useful at the aspects the author claims they make more difficult: understanding, testing, and trusting that code.

An LLM is an extremely useful search engine. Given access to plenty of CLI tools, asking it questions about an unfamiliar code base is extremely helpful. It can read and summarize much faster than I can. I don't trust its exact understanding, but I do trust it to give me a high-level understanding of an architecture, call out dependencies, summarize APIs, and give me an idea of which parts of the code base are new/old, etc.

Additionally, having LLMs write tests after setting up the rough testing structure and giving it a few examples massively decreases the time it takes to prove my understanding of the code through the tests, thereby increasing my confidence/trust in it.
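For instance, the kind of skeleton-plus-seed-examples you might hand the model could look like this; slugify and the test cases are invented stand-ins for illustration, not anything from the parent comment:

    # test_slugify.py -- rough structure and a couple of seed cases written by hand,
    # before asking an LLM to extend the parametrize list in the same style.
    import re
    import pytest

    def slugify(text: str) -> str:
        # Stand-in for whatever function you actually want covered,
        # inlined here so the sketch runs on its own.
        text = re.sub(r"[^a-z0-9]+", "-", text.strip().lower())
        return text.strip("-")

    @pytest.mark.parametrize(
        ("raw", "expected"),
        [
            ("Hello World", "hello-world"),              # seed example 1
            ("  Already--slugged  ", "already-slugged"), # seed example 2
            # LLM: add edge cases here (empty string, punctuation-only, long input, ...)
        ],
    )
    def test_slugify(raw, expected):
        assert slugify(raw) == expected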

kulahan 7/3/2025
This shouldn't be surprising to anyone in software development. Regardless of how essential your software is, you can just shit out any stupid-ass thing that vaguely works and you've finished your ticket.

Who thought lazy devs were the bottleneck? The industry needs 8x as much regulation as it has now; they can do whatever they want at the moment lol.

actinium226 7/4/2025
It's always nice to look to history for parallels to modern conversations, and I think it's instructive to look back at the era when software jobs were getting outsourced overseas. Even though the cost of generating code was cheaper, and the people on the other end could participate in meetings and all that, software is still primarily done in person. Even today, several years after the pandemic and the height of remote work, many tech companies are mandating in-office policies even for software workers.

The point is that there's a human element to code that we can't even capture when working remotely with intelligent humans. LLMs will always be like a remote worker.

marginalia_nu 7/3/2025
I think most of the supposed bottlenecks are largely a consequence of attempting to increase development speed by throwing additional developers at the problem. They're problems that trivially don't exist for a solo dev, and there's a strong argument that a small team won't suffer much from them either.

If you can use tools to increase individual developer productivity (let's say, all else being equal, code comes out 2x as fast) in a way that lets you cut the team size in half, you'll likely see a significant productivity benefit, since your communication overhead has gone down in the process.

This is of course assuming a frictionless ideal gas at STP where the tool you're looking at is a straight force multiplier.
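As a back-of-the-envelope illustration of that communication-overhead point (pairwise channels only, a deliberately crude model; the team sizes 8 and 4 are illustrative numbers, not from the comment):

    # Crude model: n developers have n * (n - 1) / 2 pairwise communication channels.
    def channels(n: int) -> int:
        return n * (n - 1) // 2

    for team in (8, 4):
        print(f"{team} devs -> {channels(team)} channels")
    # 8 devs -> 28 channels
    # 4 devs -> 6 channels
    # Halving the team cuts coordination paths by roughly 4x, which is why a 2x
    # per-developer speedup can net out to more than 2x for the team as a whole.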

sublimefire 7/3/2025
A bit more interesting is the slight inverse of this: what will win in the next 10 years?

IMO, expectations from users are now so high that you need to create websites, apps, auth, payment integration, customer support forums, and chats. And this is just to break the ice and have a good footing for the business to move forward. You can see how this is a problem for a non-technical person. Nobody will hire someone to do all that, as it would be prohibitively expensive. AI is not for the engineers; it is “good enough” for folks who do not understand the code.

A lot depends on where the money will be invested, and on what consumers will like as well. I bet the current wave of AI coding will morph into other spheres to try to improve efficiency.

bGl2YW5j 7/3/2025
I can’t get over the idea that I won’t ever trust my data to a product made entirely by AI; one with no or limited human oversight.
z3t4 7/3/2025
When I learned coding, it took a lot of effort just to get something to work at all, and it took many years until I could take an idea and write code that works right away once the spelling errors have been fixed. Now I have colleagues who have no idea what they're doing, but AI gives them code that works... Meanwhile, the coding standards, languages, and frameworks change faster than I have time to keep up. I always liked code that was simple, easy to understand, and easy to change, remove, and rewrite. Writing and working with such code is very satisfying. But no one cares about code anyway. It's more of a brutalist abstract art form that very few people appreciate.