Posted by Anon84 5 days ago
Here's Robert Martin saying non-determinism in AI is ok:
https://x.com/i/status/2044440457422549407
Who wants a non-deterministic banking app?
Now, if your design/requirements are wrong, who cares? Tomorrow you'll have a brand new stack.
Probably true, but I, for one, have always liked documenting how the code I've written should be used, whether for programmers calling APIs I've created or for end-users actually running the program. I find writing the docs just as interesting and creative as writing the code.
This is kind of a straw man. I suspect people say that tongue in cheek.
Good programmers try to make their code clear and easy to understand. They add comments to clarify, especially their whys.
The problem I have with documentation is that over time you end up with mountains of documents that are no longer true and often contradict each other. The only solution I have seen is making sure documents have owners who update them periodically.
The shrinking that property-based testing does when it finds an issue is kind of what we need for specs/context.
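For anyone who hasn't seen it: shrinking means the library reduces a failing random input to a minimal counterexample before reporting it. A minimal sketch using Python's `hypothesis` (the buggy `dedupe` function is made up for illustration):

```python
# Sketch of shrinking in property-based testing, using Python's
# hypothesis library. `dedupe` is a hypothetical, deliberately buggy function.
from hypothesis import given, strategies as st

def dedupe(items):
    # Bug: set() discards the original ordering of the elements.
    return list(set(items))

@given(st.lists(st.integers()))
def test_dedupe_preserves_order(items):
    # Expected behavior: keep first occurrences in their original order.
    expected = [x for i, x in enumerate(items) if x not in items[:i]]
    assert dedupe(items) == expected
```

When the property fails on some big random list, hypothesis doesn't report that list; it shrinks it to something tiny like `[1, 0]` that still fails. Doing the same for specs - boiling a huge pile of context down to the smallest slice that still reproduces the disagreement - is the appeal.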
Stuff like this is ridiculous and comes off as frantically trying to save your ass. It's pretty obvious at this point that we will just throw more matmuls at it until it can do this or something equivalent.
> Agents cannot do osmosis. They do not get context by being in the room, by half-hearing the planning conversation, or by carrying the memory of the last incident.
At the same time, humans can move up the abstraction ladder faster than LLMs can. At least, some humans. Agents can produce lots of code. They can also do the entirely wrong thing. The impact of wrong decisions has been massively write-amplified by more and more intelligent LLMs. With earlier ones, if it got a sentence or a function wrong, you reprompted; the cost of a mistake was 10 seconds. Now, you can burn hours or even days of work on the entirely wrong thing without a competent human operator stepping in and course-correcting.
The trajectory of agents has been toward bigger and bigger context windows and more autonomy, but at the same time, a bigger blast radius. In this context, I don't think human experts will be out of their jobs any time soon.
That was kind of the point: it's only true for now. I agree with you that this kind of stuff will take longer; I don't think there's good training data for it right now. Handling abstractions and course-correcting is probably the job now, and it also happens to be exactly the data we'll be typing into our prompts. They'll train on it, or something like it.
Take the strawman: even if AI can one-shot basically any application below, let's say, 1 MLoC, if your prompt is 4 lines it will generate something. It can't read your mind. If you write proper specs, then you'll get what you want - but many people don't know what they want. And even if they do, they might have contradictions in their requirements, might be asking for something impossible, etc.
In the old days, when writing code took a lot of resources, the constraint was self-correcting: three months of work on the wrong feature was visible and painful enough that the error got noticed. Today, you could spend five wrong efforts in the same amount of time it used to take to implement one wrong effort.