Posted by nimbleplum40 4/3/2025

Dijkstra: On the foolishness of "natural language programming" (www.cs.utexas.edu)
448 points | 275 comments
hamstergene 4/3/2025|
Reminds me of another recurring idea: replacing code with flowcharts. I first saw the idea from some obscure Soviet professor in the 80s, and then again and again from different people in different countries and contexts. Every time it is sold as a total breakthrough in simplicity, and every time it proves to be a bloat of complexity and a productivity killer instead.

Or weak typing. How many languages thought that collapsing strings, integers, and other types into a single "scalar" type, and making any operation between any operands meaningful, would simplify the language? Yet every single one ended up a total mess instead.
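
For instance, something like this (a hypothetical sketch; TypeScript syntax over plain JavaScript semantics):

    const x: any = "1";  // "any" opts into scalar-style weak typing
    console.log(x + 1);  // "11" -- + coerces the number to a string
    console.log(x - 1);  // 0    -- - coerces the string to a number
    console.log(x == 1); // true -- == converts operands before comparing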

Or constraint-based UI layout. It looks so simple, so intuitive in small examples, yet it totally fails to scale to even a dozen basic controls. Yet the idea keeps reappearing from time to time.

Or attempts at dependency management via some form of symlink to another repository, e.g. git submodules or CMake's FetchContent/ExternalProject. Yeah, good luck scaling that.

Maybe software engineering should have some sort of "Hall of Ideas That Definitely Don't Work", so that young people entering the field could save their time by not implementing one more incarnation of an already-known bad idea.

Folcon 4/3/2025||
> Maybe software engineering should have some sort of "Hall of Ideas That Definitely Don't Work", so that young people entering the field could save their time by not implementing one more incarnation of an already-known bad idea.

I'm deeply curious to know how you could easily and definitively work out which ideas "Definitely Don't Work".

Mathematics and Computer Science seem to be littered with unworkable ideas that have made a comeback when someone figured out how to make them work.

antonvs 4/3/2025||
Well, "Hall of Ideas That Are So Difficult To Make Work Well That They May Not In Fact Be Much Use" doesn't roll off the tongue as smoothly.

What this Hall could contain, for each idea, is a list of reasons why the idea has failed in the past. That would at least give future Quixotes something to measure their efforts by.

Folcon 4/3/2025||
OK, so better documentation of what was tried, why, and how it failed, so as to make it obvious whether it's viable to try again.

I can get behind that :)...

Animats 4/3/2025|||
Constraint-based layout works, but you need a serious constraint engine, such as the one in the sketch editors of Autodesk Inventor or Fusion 360, along with a GUI to talk to it. Those systems can solve hard geometry problems involving curves, because you need that when designing parts.

Flowchart-based programming scales badly. Blender's game engine (abandoned) and Unreal Engine's "blueprints" (used only for simple cases) are examples.

d1sxeyes 4/3/2025|||
Not sure if you’re talking about DRAKON here, but I love it for documentation of process flows.

It doesn’t really get complicated, but you can very quickly end up with drawings with very high square footage.

As a tool for planning, it’s not ideal, because “big-picture” is hard to see. As a user following a DRAKON chart though, it’s very, very simple and usable.

Link for the uninitiated: https://en.m.wikipedia.org/wiki/DRAKON

hmhhashem 4/3/2025|||
For young engineers, it is a good thing to spend time implementing what you call "bad ideas". In the worst case, they learn from their mistakes and gain valuable insight into the pitfalls of such ideas. In the best case, you get a technological breakthrough when someone finds a way to make such an idea work.

Of course, it's best that such learning happens before one has the mandate to derail a whole project.

oytis 4/3/2025|||
> Maybe software engineering should have some sort of "Hall of Ideas That Definitely Don't Work", so that young people entering the field could save their time by not implementing one more incarnation of an already-known bad idea.

FWIW, neural networks would have been in that pool until relatively recently.

antonvs 4/3/2025||
If we change "definitely don't work" to "have the following so-far-insurmountable challenges", it addresses cases like this. The hardware scaling limitations of neural networks were recognized long ago - Minsky and Papert touched on this in Perceptrons in 1969.

The Hall would then end up containing a spectrum ranging from useless ideas to hard problems. Distinguishing between the two based on documented challenges would likely be possible in many cases.

octacat 4/3/2025|||
Most popular dependency management systems literally link to a git commit sha (or tag); see the lock file that npm/rebar/other tools give you. Just in a recursive way.
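
For example, an npm-style lock entry (hypothetical names) pinning a dependency to an exact resolved ref might look like:

    "qux": {
      "version": "1.2.3",
      "resolved": "git+https://example.com/qux.git#9f2c3e1"
    }
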
hamstergene 4/3/2025||
They do way more than that. For example, they won't allow you to have Foo-1 depending on Qux-1 and Bar-1 depending on Qux-2 when Qux-1 and Qux-2 are incompatible and can't be mixed within the same static library or assembly. But they may allow it when a static, private Qux can be embedded inside each of the dynamic Foo and Bar, and the dependency manager is aware of that.

A native submodule approach would fail at link time or runtime from trying to mix incompatible files in the same build, or, in some build systems, simply from duplicate symbols.

That "just in a recursive way" addition hides a lot of important design decisions that separate having a dependency manager from not having one.

octacat 4/3/2025||
They do way less than that. They just form a final list of locks and download those at build time. Of course, you also have to "recursively" go through your whole dependency tree and add submodules for each of the subdependencies (recommended: add them in the main repo). Then you get to waste an infinite amount of time setting include dirs and the like. And if you have two libs that each require a different specific version of a shared lib, no dependency manager will help you either. Using submodules is questionable practice anyway; it's useful for simple stuff, like 10 deps in total in the final project.
cubefox 4/3/2025|||
> Or weak typing. How many languages thought that collapsing strings, integers, and other types into a single "scalar" type, and making any operation between any operands meaningful, would simplify the language? Yet every single one ended up a total mess instead.

Yet JavaScript and Python are the most widely used programming languages [1], which suggests your analysis is mistaken here.

[1] https://www.statista.com/statistics/793628/worldwide-develop...

throw1111221 4/3/2025|||
Python went through a massive effort to add support for type annotations due to user demand.

Similarly, there's great demand for a typed layer on top of JavaScript (see the sketch after this list):

- Macromedia: (2000) ActionScript

- Google: (2006) GWT [Compiling Java to JS], and (2011) Dart

- Microsoft: (2012) TypeScript
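
A minimal sketch of what such a typed layer buys (hypothetical example in TypeScript):

    // The kind of bug a typed layer catches before it ships.
    function total(prices: number[]): number {
      return prices.reduce((sum, p) => sum + p, 0);
    }
    total([19.99, 5.0]);       // OK
    // total([19.99, "5.00"]); // rejected at compile time by TypeScript;
    //                         // plain JavaScript would produce "19.995.00"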

teddyh 4/3/2025||
You’re talking about static typing, the opposite of which is dynamic typing. User hamstergene is talking about weak vs. strong typing, which is another thing entirely. Python has always been strongly typed, while JavaScript is weakly typed. Many early languages with dynamic types also experimented with weak typing, but this is now, as hamstergene points out, considered a bad idea, and virtually all modern languages, including Python, are strongly typed.
teddyh 4/3/2025|||
JavaScript is indeed weakly typed, and is widely lampooned and criticized for it¹². But Python has strong typing, and has always had it.

(Both JavaScript and Python have dynamic typing; Python’s type declarations are a form of optional static type checking.)

Do not confuse these concepts.

1. <https://www.destroyallsoftware.com/talks/wat>

2. <https://eqeq.js.org/>
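
A minimal sketch of the distinction (hypothetical example; TypeScript for the static check, JavaScript semantics at runtime):

    // Static vs. dynamic is *when* types are checked;
    // strong vs. weak is *how strictly* values keep their types.
    const n: number = 1;
    // const s: string = n; // static check: TypeScript rejects this line
    console.log("1" + n);   // "11" -- weak typing underneath (JavaScript)
    // Python (dynamic + strong) raises TypeError for "1" + 1 instead.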

cubefox 4/4/2025|||
Ah, weak typing, a.k.a. implicit type conversions.
piokoch 4/3/2025||
This is a recurring topic indeed. I remember it being a hot topic at least twice: when ALM tools were introduced (e.g. the Borland ALM suite - https://www.qast.com/eng/product/develop/borland/index.htm), and again when the BPML language became popular - processes were described by "marketing" and the software was, you know, generated automatically.

All this went out of fashion, leaving some good stuff that was built at the time (the remaining 95% was crap).

Today's "vibe coding" ends when ChatGPT and its kin try to call a method on some object that does not exist (but existed on thousands of other objects the LLM was trained on, so it should work here too). Again, we will be left with the good parts; the rest will be forgotten and we will move on to the next big thing.

weeeee2 4/3/2025||
Forth, PostScript and Assembly are the "natural" programming languages from the perspective of how what you express maps to the environment in which the code executes.

The question is "natural" to whom, the humans or the computers?

AI does not make human language natural to computers. Left to their own devices, AIs would invent languages that are natural with respect to their deep learning architectures, which is their environment.

There is always going to be an impedance mismatch across species (humans and AIs) and we can't hide it by forcing the AIs to default to human language.

truculent 4/3/2025||
Any sufficiently advanced method of programming will start to look less like natural language and more like a programming language.

If you still don’t want to do programming, then you need some way to instruct or direct the intelligence that _will_ do the programming.

And any sufficiently advanced method of instruction will look less like natural language, and more like an education.

wiz21c 4/3/2025||
> Remark. As a result of the educational trend away from intellectual discipline, the last decades have shown in the Western world a sharp decline of people's mastery of their own language: many people that by the standards of a previous generation should know better, are no longer able to use their native tongue effectively, even for purposes for which it is pretty adequate.

Compare that to:

https://news.ycombinator.com/item?id=43522966

0x1ceb00da 4/3/2025||
When was it written? The date says 2010 but Dijkstra died in 2002.
SirHumphrey 4/3/2025|
It’s just a date of transcription. The letter was written in 1978.
grahamlee 4/3/2025||
Dijkstra also advocated for proving the correctness of imperative code using the composition of a set of simple rules, and most programmers ignore that aspect of his work too.
seumars 4/3/2025|
Any specific paper or article of his you would recommend?
sitkack 4/3/2025|||
https://en.wikipedia.org/wiki/Predicate_transformer_semantic...

Found in about 9 seconds.

grahamlee 4/3/2025|||
_A Discipline of Programming_
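
For a taste, here are two of its rules and a tiny worked example (a paraphrase, not a quotation from the book):

    wp(x := E, R)   =  R with every free x replaced by E
    wp(S1 ; S2, R)  =  wp(S1, wp(S2, R))

    For the program  x := x + 1 ; y := 2 * x  and postcondition  y > 0:
      wp(y := 2 * x, y > 0)  =  2 * x > 0  =  x > 0
      wp(x := x + 1, x > 0)  =  x + 1 > 0

    So the composed program establishes y > 0 exactly when x + 1 > 0 holds initially.
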
centra_minded 4/4/2025||
Modern programming already is very, very far from strict obedience and formal symbolism. Most programmers these days (myself included!) are using libraries, frameworks, and other tools, which means that in practice they are wielding sky-high abstractions, gluing together things whose inner workings they do not (and cannot) fully understand.

If I create a website with Node.js, I’m not manually managing memory, parsing HTTP requests byte-by-byte, or even attempting to fully grasp the event loop’s nuances. I’m orchestrating layers of code written by others, trusting that these black boxes will behave as advertised according to my best, but deeply incomplete, understanding of them.

I'm not sure what this means for programming with LLMs, but I already feel far removed from the case Dijkstra lays out.

tired-turtle 4/4/2025|
> Modern programming already is very, very far from strict obedience and formal symbolism

It's difficult to square this with what follows.

Consider group theory. A group G is a set S with an operator * that is closed over S, is associative, and admits an identity and inverses. With that abstraction comes a hefty amount of power. In some sense, a group is akin to a trait on some type, much like how a class in Java can implement or extend Collection. (Consider how a ring ‘extends’ a group.)

I’d posit that frameworks and libraries are no different, in terms of formal symbolism, from the mathematical structure laid out above. Maybe the interfaces are fuzzy and the documentation is shoddy, but there’s still a contract we use to reason about the tool at hand.
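
The trait analogy can be made concrete (a hypothetical sketch in TypeScript; the laws live in comments because the type system can't enforce them):

    // A group as an interface: the contract we reason against.
    interface Group<T> {
      identity: T;
      combine(a: T, b: T): T; // closed over T and associative
      inverse(a: T): T;       // combine(a, inverse(a)) === identity
    }
    // The numbers under addition satisfy the contract:
    const additive: Group<number> = {
      identity: 0,
      combine: (a, b) => a + b,
      inverse: (a) => -a,
    };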

> I’m not manually managing memory, parsing HTTP requests byte-by-byte

If I don’t reprove Peano’s work, then I’m not really doing math?

still_grokking 4/3/2025||
By chance, I came across GitHub's SpecLang just today:

https://githubnext.com/projects/speclang/

Funny coincidence!

I leave it here for the nice contrast it creates in light of the submission we're discussing.

jruohonen 4/3/2025||
A great find!

The whole thing also seems a step (or several steps) backwards in terms of UX. I mean, surely there was a reason why ls was named ls, and so forth.

A bonus point is that Dijkstra also had something to say about the real or alleged degeneration of natural languages themselves.

chilldsgn 4/3/2025|
This is the most beautiful thing I've read in a long time.
gitanovic 4/3/2025|
Me too. I printed it and underlined parts of it, and I will try to memorize some of the concepts and the exposition, because it is a crystallization of what I vaguely feel about the abuse of LLMs I am currently witnessing.
chilldsgn 4/3/2025||
Absolutely with you on the abuse of LLMs. I'm deeply concerned about loss of competence and I am so burned out from having to deal with other people's messy and overly complex code.

I think people who, like us, think about this need to start building resilience for the very real possibility that in a couple of years we'll be the ones dealing with these awful LLM-generated code bases, fixing bad logic and bugs.
