
Posted by nimbleplum40 4/3/2025

Dijkstra On the foolishness of "natural language programming" (www.cs.utexas.edu)
448 points | 275 comments
James_K 4/3/2025|
It's pretty obvious to me that this LLM business won't be economically feasible until it can actually produce better code than a team of humans could without it. The reason programmers are paid so highly is because their work is incredibly productive and valuable. One programmer can enable and improve the work of many hundreds of other people. Cost cutting on the programmer isn't worth it because it'll create greater losses in other places. Hence the high salaries. Every saving you make on the programmer is magnified a hundred times in losses elsewhere.
metalliqaz 4/3/2025|
Given that LLMs are going to produce code that is essentially an average of the code they have been trained on, which is all human code of varying quality, I don't see how the current methods are going to actually produce better code than humans do when working with their own domain-specific knowledge.
Rodmine 4/3/2025||
What needs to be done can and is almost always described in natural language.

Whether that is feasible is a different question (https://xkcd.com/1425/), but also can be described in natural language.

Here is something I tried with o3-mini:

> Create a program that takes an input image and can tell if there is a bird in it.

> ChatGPT said:

> Reasoned for 38 seconds

> Below is an example in Python that uses TensorFlow’s Keras API and a pre-trained ResNet50 model to classify an input image. The code loads an image, preprocesses it to the required size, obtains the top predictions, and then checks if any of the top labels appear to be a bird. You can expand the list of bird keywords as needed for your application.

> python code that works
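The commenter elides the actual output, but a script of the kind described (pre-trained ResNet50, check the top labels against bird keywords) plausibly looks something like this. This is a reconstruction, not ChatGPT's actual output; the keyword list is illustrative and deliberately incomplete:

```python
# Sketch of the described approach: classify an image with a pre-trained
# ResNet50, then check whether any top ImageNet label looks bird-like.
# The keyword list is an assumption; expand it for real use.
BIRD_KEYWORDS = {
    "bird", "cock", "hen", "ostrich", "finch", "jay", "magpie",
    "chickadee", "kite", "eagle", "vulture", "owl", "hummingbird",
    "toucan", "duck", "goose", "swan", "flamingo", "heron", "crane",
    "pelican", "albatross", "penguin", "parrot",
}

def looks_like_bird(labels):
    """True if any predicted label contains a bird-related keyword."""
    return any(kw in label.lower() for label in labels for kw in BIRD_KEYWORDS)

def classify(path, top=5):
    # Heavy imports kept local so the keyword check above is usable
    # (and testable) without TensorFlow installed.
    import numpy as np
    from tensorflow.keras.applications.resnet50 import (
        ResNet50, preprocess_input, decode_predictions)
    from tensorflow.keras.preprocessing import image

    model = ResNet50(weights="imagenet")
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    preds = decode_predictions(model.predict(x), top=top)[0]
    return [label for (_, label, _) in preds]

# Usage (requires tensorflow installed):
#   labels = classify("photo.jpg")
#   print("bird" if looks_like_bird(labels) else "no bird")
```

Note the fuzziness the commenter glosses over lives in `BIRD_KEYWORDS`: ImageNet has no single "bird" class, so "is there a bird" is itself decided by an ad hoc keyword match.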

If you take the critical view, you can always find an exception that will fail. But I can see many happy paths which will just work most of the time, even with the currently available technology. Most of the programming work done today is putting libraries and API services together.

voidhorse 4/4/2025||
Dijkstra is entirely correct in this, and it's something I've been urging people to recognize since the beginning of this LLM wave.

There is inherent value in using formal language to refine, analyze, and describe ideas. This is, after all, why mathematical symbolism has lasted in spite of the fact that all mathematicians are more than capable of talking about mathematical ideas in their natural tongues.

Code realizes a computable model of the world. A computable model is made up of a subset of the mathematical functions we can define. We benefit greatly from formalism in this context. It helps us be precise about the actual relationships and invariants that hold within a system. Stable relationships and invariants lead to predictable behaviors, and predictable systems are reliable systems on the plane of human interaction.
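The contrast between a fuzzy description and a formal invariant fits in a few lines. This is my illustration, not the commenter's; the `Account` class is hypothetical. The natural-language spec "the balance never goes negative" only constrains anything once it is stated formally and checked:

```python
# An invariant stated formally: it either holds or fails loudly,
# unlike a prose description, which can be quietly violated.
class Account:
    def __init__(self, balance=0):
        self.balance = balance
        self._check()

    def _check(self):
        # The invariant itself, as an executable formal statement.
        assert self.balance >= 0, "invariant violated: negative balance"

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        self._check()

acct = Account(100)
acct.withdraw(30)
print(acct.balance)  # 70
```

Every operation re-establishes the invariant before returning, which is exactly the kind of "stable relationship" that makes the system's behavior predictable.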

If you describe your system entirely in fuzzily conceived natural language, have you done the requisite analysis to establish the important relationships and invariants among components in your system, or are you just half-assing it?

Engineering is all about establishing relative degrees of certainty in the face of the wild uncertainty that is the default modality of existence. Moving toward a world in which we "engineer" systems increasingly through informal natural language is a step backwards on the continuum of reliability, comprehensibility, and rigor. The fact that anyone considers using these tools and still thinks of themselves as an "engineer" of some kind is an absolute joke.

Animats 4/3/2025||
(2010)

This refers to the era of COBOL, or maybe Hypertalk, not LLMs.

guy234 4/3/2025||
The original text was from 1978 according to other sources
bambax 4/3/2025|||
Ah, thanks. Yes it couldn't have been 2010 because he died in 2002. But the date this was written is important, otherwise his references to "the last decade" don't mean anything!
yaris 4/3/2025||
Many would say that "the last decade" (with the surrounding context) is timeless, or at least that it is still relevant today.
Animats 4/3/2025|||
Right, it seemed earlier.
imglorp 4/3/2025||
Apple didn't learn those lessons with AppleScript either: there is a rigid sequence of characters that represents a valid program, and if you're off by one character, tough. If you're lucky, the tool will guide you to what it wants. If not, you're stuck looking up the exact syntax, so off to the reference manual you go either way.

So there's minimal difference in looking up the syntax, whether it's based on some natural-language phrase or on a less wordy, less ambiguous artificial language.

"friendly" or "natural" really is not a thing.

0xbadcafebee 4/4/2025||
I read this when I was younger, but I only now get it, and realize how true it all is.

13) Humans writing code is an inherently flawed concept. Doesn't matter what form the code takes. Machine code, assembly language, C, Perl, or a ChatGPT prompt. It's all flawed in the same way. We have not yet invented a technology or mechanism which avoids it. And high level abstraction doesn't really help. It hides problems only to create new ones, and other problems simply never go away.

21) Loosely coupled interfaces made our lives easier because they forced us to compartmentalize our efforts into something manageable. But it's hard to prove that this is a better outcome overall, as it forces us to solve problems in ways that still lead to worse outcomes than if we had used a simpler [formal] logic.

34) We will probably end up pushing our technical abilities to the limit in order to design a superior system, only to find out in the end that simpler formal logic is what we needed all along.

55) We're becoming stupider and worse at using the tools we already have. We're already shit at using language just for communicating with each other. Assuming we could make better programs with it is nonsensical.

For a long time now I've been upset at computer science's lack of innovation in the methods we use to solve problems. Programming is stupidly flawed. I've never been good at math, so I never really thought about it before, but math is really the answer to what I wish programming was: a formal system for solving a problem, and a formal system for proving that the solution is correct. That's what we're missing from software. That's where we should be headed.
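A toy version of what the commenter is asking for (a formal system for solving a problem plus a formal system for checking the solution) can be sketched in a few lines. This is my sketch, not the commenter's, and exhaustive checking over a small finite domain is only an everyday approximation; a proof assistant like Coq or Lean would quantify over all inputs instead:

```python
# State the specification as predicates, then check them exhaustively
# over a small finite domain. Not a proof, but the same shape of idea:
# the spec is formal, so conformance is decided mechanically.
from itertools import product

def clamp(x, lo, hi):
    """Restrict x to the closed interval [lo, hi]."""
    return max(lo, min(x, hi))

# Spec: (1) the result lies in [lo, hi];
#       (2) inputs already in range are returned unchanged.
for x, lo, hi in product(range(-5, 6), repeat=3):
    if lo > hi:
        continue  # the interval must be well-formed
    r = clamp(x, lo, hi)
    assert lo <= r <= hi
    if lo <= x <= hi:
        assert r == x
```

The gap between this and real mathematics is exactly the gap the commenter laments: software mostly stops at sampled checks, where math demands the universally quantified proof.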

nizarmah 4/3/2025||
One of the most challenging aspects in my career has been: communication.

This is largely because natural language leaves room for misinterpretation and miscommunication. Programming languages eliminated misinterpretation and made miscommunication easier to notice through errors.

A programming language lets me micromanage proactively: I specify the instructions before they run. With LLMs I often find myself micromanaging retroactively, until I reach the path I am looking for.

HarHarVeryFunny 4/3/2025||
Seeing as much of the discussion here is about LLMs, not just the shortcomings of natural language as a programming language, another LLM-specific aspect is how the LLM is interpreting the natural language instructions it is being given...

One might naively think that the "AI" (LLM) is going to apply its intelligence to give you the "best" code in response to your request, and in a way it is, but this is "LLM best", not "human best": the LLM is trying to "guess what's expected" (i.e. minimize prediction error), not give you the best quality code/design per your request. This is similar to having an LLM play chess: it is not trying to play what it thinks is the strongest move, but rather trying to predict a continuation of the game, given the context, which will be a poor move if it thinks the context indicates a poor player.

With an RL-trained reasoning model, the LLM's behavior has a slightly longer horizon: not just minimizing next-token prediction errors, but also steering the output in a direction intended to match the type of reasoning seen during RL training. Again, this isn't the same as a human applying their experience to achieve (predict!) a goal; it's arguably more like cargo-cult reasoning: following observed patterns of reasoning from the training set, without the depth of understanding and intelligence to know whether they really apply in the current context, and without the ability to learn from its mistakes when they do not.

So, while natural language itself is of course too vague to program in, which is part of the reason we use programming languages instead, it's totally adequate as a way to communicate requirements to an expert human developer/analyst. But when communicating with an LLM instead of a person, one should expect the LLM to behave as an LLM, not as a human. It's a paperclip maximizer, not a human-level intelligence.

jedimastert 4/3/2025||
Over the last couple of weeks of finally starting to use AI pair-programming tools (for me, Cursor), I've been realizing that, much like when I play music, I don't really think about programming in natural-language terms in the first place. It's actually been kind of hard to integrate an AI coding agent into my workflow mentally.
Dansvidania 4/3/2025||
- some people found error messages they couldn't ignore more annoying than wrong results

I wonder if this is a static vs dynamic or compiled vs interpreted reference.

Anyway I love it. Made me giggle that we are still discussing this today, and just to be clear I love both sides, for different things.

arkh 4/3/2025|
50 years ago, in "The Mythical Man-Month", Fred Brooks was already discussing the cost of cloud-based solutions.

> Since size is such a large part of the user cost of a programming system product, the builder must set size targets, control size, and devise size-reduction techniques, just as the hardware builder sets component-count targets, controls component count, and devises count-reduction techniques. Like any cost, size itself is not bad, but unnecessary size is.

And the why of Agile, DDD and TDD:

> Plan the System for Change [...] Plan the Organization for Change

cafard 4/3/2025|
So many Dijkstra links amount to "Dijkstra on the [pejorative noun] of [whatever was bothering Dijkstra]."

I promise to upvote the next Dijkstra link that I see that does not present him as Ambrose Bierce with a computer.

sitkack 4/3/2025|
Dijkstra is actually a time traveling Brian Rantrill