
Posted by Anon84 5 days ago

The bottleneck was never the code (www.thetypicalset.com)
581 points | 410 comments
booleandilemma 3 days ago||
I swear AI has made this entire industry crazy.

Here's Robert Martin saying non-determinism in AI is ok:

https://x.com/i/status/2044440457422549407

Who wants a non-deterministic banking app?

whatever1 3 days ago||
Yes it was. We were stuck on never-ending design and requirements discussions because writing the wrong code was too expensive.

Now if your design/requirements are wrong, who cares? Tomorrow you will have a brand new stack.

zabzonk 3 days ago||
> Real programmers don’t document their programs.

Probably true, but I, for one, have always liked documenting how the code I've written should be used, whether it's programmers calling APIs I've created or end users actually running the program. I find writing the docs just as interesting and creative as writing the code.

MyHonestOpinon 3 days ago|
> Real programmers don’t document their programs.

This is kind of a straw man. I suspect people say that tongue in cheek.

Good programmers try to make their code clear and easy to understand. They add comments to clarify, especially the whys.

The problem I have with documentation is that over time you end up with mountains of documents describing things that are no longer true, and which often contradict each other. The only solution I have seen is making sure that documents have owners who update them periodically.

spiderfarmer 3 days ago||
For me it was. Solo entrepreneurs are the ones who profit the most from AI-assisted development.
frollogaston 3 days ago|
Or startups, where coding was always a bottleneck because hiring SWEs was very expensive, unlike big corps, which would often throw SWEs at a problem.
charlieflowers 3 days ago||
I have been thinking about this a lot lately. How do you capture the key factors succinctly and, even harder, keep that capture succinct as it evolves?

The shrinking that property-based testing does when it finds an issue is kind of what we need for specs/context.
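
Concretely, here's a minimal sketch of the shrinking idea using Python's Hypothesis library (my choice of tooling for illustration; any property-based framework with shrinking works the same way). When a property fails, Hypothesis doesn't just report some random failing input; it reduces it to a minimal counterexample:

    # Deliberately buggy "absolute value" to give the property something to catch.
    from hypothesis import given, strategies as st

    def buggy_abs(x: int) -> int:
        return x  # bug: forgets to negate negative inputs

    @given(st.integers())
    def test_abs_is_nonnegative(x: int) -> None:
        assert buggy_abs(x) >= 0

    if __name__ == "__main__":
        # Hypothesis finds some failing input (maybe a large random
        # negative) and then shrinks it, reporting the minimal
        # counterexample, roughly:
        #   Falsifying example: test_abs_is_nonnegative(x=-1)
        test_abs_is_nonnegative()

The analogue for specs/context would be that same reduction step: when something goes wrong, automatically collapse the spec down to the smallest piece that still reproduces the disagreement.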

munk-a 3 days ago||
I can type faster than I can think of the correct things to type. My experience may be non-standard but I think for most serious software folks the code has never been the bottleneck.
randallsquared 3 days ago||
So managers are overwhelmed because the code is now happening a lot faster? It sounds like the immediate bottleneck really was the code, at least frequently. Now it seems the bottleneck is managerial.
skiing_crawling 3 days ago||
As software engineers, we should collectively realize that this is all cope. Every article or comment about how AI will never be smart enough, etc., will only be true until it's not. One of our main valuable skill sets is now partially automated. Some of us are completely obsolete already, and it's coming for the specialists and more experienced ones within a decade, tops. You're not going to convince anyone that "um actually we're better because we bikeshed more".

Stuff like this is ridiculous and comes off as frantically trying to save your ass. It's pretty obvious at this point that we will just throw more matmuls at it until it can do this or something equivalent.

> Agents cannot do osmosis. They do not get context by being in the room, by half-hearing the planning conversation, or by carrying the memory of the last incident.

Pannoniae 3 days ago|
Yeah, you're definitely right about the shifting goalposts ("it's a stochastic parrot" -> "it hallucinates all the time, it can't even get APIs right" -> "it can generate functions but can't reason about the codebase" -> "the bottleneck was never shipping code")

At the same time, humans can move up the abstraction ladder faster than the LLMs can. At least, some humans. Agents can produce lots of code, and they can also do the entirely wrong thing. The impact of wrong decisions has been massively write-amplified as LLMs get more capable. With earlier models, if one got a sentence or a function wrong, you reprompted; the cost of a mistake was ten seconds. Now you can burn hours or even days of work on the entirely wrong thing without a competent human operator stepping in and course-correcting.

The trajectory of agents has been toward bigger and bigger context windows and greater autonomy, but, at the same time, a bigger blast radius. In this context, I don't think the human experts will be out of their jobs any time soon.

skiing_crawling 3 days ago||
> At the same time, humans can move up the abstraction ladder faster than the LLMs can

This was kind of the point: it's only true for now. I agree with you that this kind of stuff will take longer. There's probably no good training data for it right now. Handling abstractions and course-correcting is probably the job now, and it also happens to be exactly the data we are typing into our prompts. They'll train on it, or on something like it.

Pannoniae 2 days ago||
Unless something radical changes (and that isn't unprecedented! I'm just writing this as of today), the trend is still "just" a bigger hammer. It's bigger, you can get way more done, but the blast radius is also larger.

Take the strawman: even if AI can one-shot basically any application below, let's say, 1 MLoC, if your prompt is four lines, it will still generate something. It can't read your mind. If you write proper specs, then you'll get what you want - but many people don't know what they want. And even if they do, they might have contradictions in their requirements, might be asking for something impossible, etc.

luodaint 3 days ago|
The paper hits the nail right on the head, but it misses the next constraint: how to decide what to build.

In the old days, when writing code took a lot of resources, the constraint was self-correcting: spending three months building the wrong feature made the mistake obvious enough to see. Today you can make five wrong efforts in the time it used to take to make one.
