
Posted by nimbleplum40 4/3/2025

Dijkstra On the foolishness of "natural language programming" (www.cs.utexas.edu)
448 points | 275 comments
ur-whale 4/3/2025|
> The foolishness of "natural language programming"

Wasn't that the actual motivation behind the development of SQL?

IIRC, SQL was something that "even" business people could code in because it was closer to "natural language".

When you see the monstrosity that motivation gave birth to, I think the "foolish" argument was well warranted at the time.
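
To make that concrete, here is a quick sketch (a toy table of my own invention, run through Python's built-in sqlite3): the query reads almost like English, but the formal semantics bite as soon as NULLs show up.

  import sqlite3

  # Toy schema, purely illustrative.
  db = sqlite3.connect(":memory:")
  db.execute("CREATE TABLE orders (id INTEGER, region TEXT, total REAL)")
  db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "EU", 40.0), (2, "EU", 60.0), (3, "US", None)])

  # Reads almost like English...
  for row in db.execute("SELECT region, SUM(total) FROM orders GROUP BY region"):
      print(row)  # e.g. ('EU', 100.0), ('US', None)

  # ...but the "natural" reading misleads: SUM silently skips NULLs, and a
  # comparison against NULL never matches any row.
  print(db.execute("SELECT COUNT(*) FROM orders WHERE total = NULL").fetchone())  # (0,)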

Of course, in these days of LLMs, Dijkstra's argument isn't as clear-cut (even if LLMs aren't there yet, they're getting much closer).

aubanel 4/3/2025||
Dijkstra's clarity of expression and thought is indeed impressive. One nuance: he seems to completely equate ease of language with the ease of making undetectable mistakes. I disagree: I know people whose language is extremely efficient at producing analogies that can shortcut, for the listener, many pages of painful mathematical proofs: for instance, conveying the emergence of complexity from many simple processes by a "swarming" analogy.

godelski 4/3/2025|

  > he seems to completely equate ease of language with the ease of making undetectable mistakes.
I do not believe this is his argument. He was making the point that there is a balance. You need to consider the context of the times, and remember that in that context a language like C was considered "high-level", not a language like Python. He later moves on to discuss formalism through mathematics (referencing Vieta, Descartes, Leibniz, and Boole): how this symbolism is difficult to wield and many are averse to it, but how, through its birth, we've been able to reap great rewards. He makes precisely the claim that had we not moved to formal methods, and had we instead kept to everyday language, we would still be stuck at the level of the Greeks.

Actually, in one season of An Opinionated History of Mathematics, the host (a mathematician) discusses this transition in Greek mathematics and highlights how many flaws there were in that system, and how the slow move toward mathematical formalism is what actually enabled correctness.

The point is that human language is much more vague. It has to be this way. But formal symbolism (i.e. math) would similarly make a terrible language for computing. The benefit of the symbolic approach is its extreme precision, but that also means the language is extremely verbose, whereas in human languages we trade precision for speed and flexibility. To communicate what I just said in a mathematical language would require several pages of text. As he says, by approaching human language we shift more responsibility onto the machine.
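
To make the verbosity trade-off concrete (a toy example of mine, not the article's): the everyday sentence "f(x) gets arbitrarily close to L as x approaches a" unpacks, in formal notation, into

  \forall \varepsilon > 0 \;\exists \delta > 0 \;\forall x \;
    ( 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon )

Every symbol there is load-bearing; swap the first two quantifiers and you get a different (and false in general) statement. That precision is exactly what the verbosity buys.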

karmasimida 4/3/2025||
Who is laughing now?

It is clear NLU can't be done within the realm of PLs themselves; there is never going to be a natural-language grammar as precise as a PL.

But LLMs are a different kind of beast entirely.

hinkley 4/3/2025||
About every six to ten years I look in on the state of the art in making artificial human languages in which one cannot be misunderstood.

If we ever invent a human language in which laws can be laid out so that their meaning is unambiguous, then we will have opened a door to programming languages that are correct. I don't know that a programmer will invent it first. We might, but it won't look that natural.

teleforce 4/3/2025||
> thanks to the carefully, or at least consciously designed formal symbolisms that we owe to people like Vieta, Descartes, Leibniz, and (later) Boole.

Please check out this seminal lecture by John Hooker [1] on the contributions of the people mentioned above to the complementary, deterministic form of AI (machine intelligence), namely logic, optimization, and constraint programming.

I have a feeling that if we combine the stochastic nature of LLM-based NLP with the deterministic nature of feature-structure-based NLP (e.g. CUE), guided by logic, optimization, and constraint programming, we could probably solve intuitive automation, or at least perform proper automation (or automatic computing, as Dijkstra put it).

Apparently Yann LeCun has also recently been proposing optimization-based AI, namely inference through optimization, or objective-driven AI, in addition to data-driven AI [2].

Fun fact: you can see Donald Knuth asking questions toward the end of Hooker's lecture.

[1] Logic, Optimization, and Constraint Programming: A Fruitful Collaboration - John Hooker - CMU (2023) [video]:

https://www.youtube.com/live/TknN8fCQvRk

[2] Mathematical Obstacles on the Way to Human-Level AI - Yann LeCun - Meta - AMS Josiah Willard Gibbs Lecture at the 2025 Joint Mathematics Meetings (2025) [video]:

https://youtu.be/ETZfkkv6V7Y
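
For anyone unfamiliar with the constraint-programming side of Hooker's lecture, here is a minimal, self-contained sketch (my own toy problem, not from the talk) of the declarative idea: you state what a solution must satisfy and let a solver search, instead of spelling out how to find it. Real CP solvers prune the search space with propagation; this sketch just enumerates.

  from itertools import product

  # Toy constraint problem (illustrative): find x, y, z in small domains
  # with x + y == z and x < y. The constraints are stated declaratively.
  domains = {"x": range(1, 6), "y": range(1, 6), "z": range(1, 10)}
  constraints = [
      lambda v: v["x"] + v["y"] == v["z"],
      lambda v: v["x"] < v["y"],
  ]

  def solutions():
      names = list(domains)
      for values in product(*(domains[n] for n in names)):
          v = dict(zip(names, values))
          if all(c(v) for c in constraints):
              yield v

  for s in solutions():
      print(s)  # {'x': 1, 'y': 2, 'z': 3}, {'x': 1, 'y': 3, 'z': 4}, ...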

yapyap 4/3/2025||
(2010) by the way, which makes this article all the more impressive.

I was under the assumption that this was a current body of work, seeing as Dijkstra spoke so well about the possibilities, but this just goes to show that some people were ahead of their time with their worries.

Also adding your home address to something you write / publish / host on the internet is pretty hardcore.

timvdalen 4/3/2025|
Dijkstra died in 2002, so I assume 2010 is just the date of transcription
esafak 4/3/2025||
1979. https://doi.org/10.1007/BFb0014656
wewewedxfgdf 4/3/2025||
We are AI 1.0

Just like Web 1.0 - when the best we could think to do was shovel existing printed brochures onto web pages.

In AI 1.0 we are simply shoveling existing programming languages into the LLM - in no way marrying programming and LLMs - they exist as entirely different worlds.

AI 2.0 will be programming languages - or language features - specifically designed for LLMs.

wpollock 4/3/2025||
I love reading literate-programming software! The problem is that very few programmers are as skilled at writing clearly as are Knuth and Dijkstra. I think I read somewhere that book publishers receive thousands of manuscripts for each one they publish. Likewise, few programmers can write prose worth reading.
yagyu 4/3/2025||
In the same vein, Asimov in 1956:

Baley shrugged. He would never teach himself to avoid asking useless questions. The robots knew. Period. It occurred to him that, to handle robots with true efficiency, one must needs be expert, a sort of roboticist. How well did the average Solarian do, he wondered?

odyssey7 4/3/2025|
This is also, inadvertently, an argument against managers.

Why talk to your team when you could just program it yourself?
