
Posted by speckx 6 hours ago

Tony Hoare has died (blog.computationalcomplexity.org)
1001 points | 144 comments
groos 5 hours ago|
I've had the good fortune to attend two of his lectures in person. Each time, he effortlessly derived provably correct code from the conditions of the problem and made it seem all too easy. 10 minutes after leaving the lecture, my thought was "Wait, how did he do it again?".

RIP Sir Tony.

astahlx 3 hours ago||
Tony advised me to make money with the software model checker I have been writing, in contrast to the typical practice of making these tools open source and free to use. Would have loved to learn more from him. He was a great teacher but also a great and sharp listener. Still remember the detour we made on the way to a bar in London, talking long and deep about refinement relations. RiP.
arch_deluxe 6 hours ago||
One of the greats. Invented quicksort and concurrent sequential processes. I always looked up to him because he also seemed very humble.
adrian_b 5 hours ago||
He also invented many other things, like enumeration types, optional types, constructors. He popularized the "unions" introduced by McCarthy, which were later implemented in ALGOL 68, from where a crippled form of them was added to the C language.

Several keywords used in many programming languages come from Hoare, who either coined them himself or took them from another source; either way, all later programming language designers took them from Hoare. For example "case", though here only the keyword comes from Hoare, because a better form of the "case" statement had been proposed first by McCarthy many years earlier, under the name "select".

Another example is "class", which Simula 67, and then all object-oriented languages, took from Hoare. However, in this case the keyword was not first used by Hoare, because he took "class", together with "record", from COBOL.

Another keyword popularized by Hoare is "new" (which Hoare took from Wirth, but everybody else took from Hoare), later used by many languages, including C++. In Hoare's proposal, the counterpart of "new" was "destroy", hence the name "destructor", first used in C++.

The paper "Record Handling", published by C.A.R. Hoare in 1965-11, was a major influence on many programming languages. It prompted significant changes in the IBM PL/I programming language, including the introduction of pointers. It was also the source of many features of the SIMULA 67 and ALGOL 68 languages, from which they spread into many later programming languages.

The programming language "Occam" was designed mainly as an implementation of the ideas described by Hoare in the "Communicating Sequential Processes" paper published in 1978-08. OpenMP also inherits many of those concepts, and some of them are also in CUDA.
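As a rough illustration (not Occam, and note that Rust's mpsc channels are buffered rather than the synchronous rendezvous of CSP proper), the communicating-sequential-processes style looks something like this:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Two sequential processes that interact only by passing messages
    // over a channel, never through shared mutable state -- the core
    // discipline of Hoare's CSP model.
    let (tx, rx) = mpsc::channel();

    let producer = thread::spawn(move || {
        for i in 0..5 {
            tx.send(i * i).unwrap(); // send squares 0, 1, 4, 9, 16
        }
        // `tx` is dropped here, which closes the channel
    });

    // The consumer drains the channel until the producer hangs up.
    let sum: i32 = rx.iter().sum();
    producer.join().unwrap();
    assert_eq!(sum, 30); // 0 + 1 + 4 + 9 + 16
}
```

In CSP (and Occam) the send would block until the receiver is ready; Rust's channel relaxes that, but the "no shared memory, only communication" structure is the same idea.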

EdNutting 4 hours ago||
And, of course, the Go programming language.
linhns 4 hours ago||
I would not say he invented Go, although Go is probably the only relevant implementation of CSP nowadays.
EdNutting 3 hours ago||
I was adding Go to the list at the very end of the comment:

>OpenMP also inherits many of those concepts, and some of them are also in CUDA.

embit 5 hours ago|||
Talking about Quicksort, John Bentley’s deep dive into Quicksort is quite illuminating. https://m.youtube.com/watch?v=QvgYAQzg1z8
znpy 4 hours ago||
oh man, google tech talks. what a throwback.

there was a time, 10-15 years ago, when they were super cool. at some point they """diluted""" the technical content and the caliber of guests, and they vanished into irrelevance.

baruchel 5 hours ago|||
Yes, but don't forget his formal work also (Hoare logic).
rramadass 4 hours ago||
To me, this is his most important contribution; everybody else built on top of it.

Hoare Logic - https://en.wikipedia.org/wiki/Hoare_logic

lkuty 4 hours ago||
Rediscovering it through the Dafny programming language. Brings back memories of a 1994 University course.
rramadass 4 hours ago||
I think it is even more relevant today.

Hoare Logic + Dijkstra's weakest precondition + Meyer's Design-by-Contract is what should be used to get LLMs to generate proofs together with code, in a correctness-by-construction approach to implementation.

References:

Correctness-by-Construction (CbC) - https://www.tu-braunschweig.de/en/isf/research/cbc

A Course on Correctness by Construction - https://wp.software.imdea.org/cbc/
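To make the idea concrete, here is a minimal Rust sketch (isqrt is a made-up example of mine; runtime asserts are only a poor man's substitute for the static checking Dafny performs): the Hoare triple {n >= 0} r := isqrt(n) {r*r <= n < (r+1)*(r+1)} becomes an executable pre/postcondition contract.

```rust
/// Hoare triple: {n >= 0}  r = isqrt(n)  {r*r <= n < (r+1)*(r+1)}
fn isqrt(n: i64) -> i64 {
    // Design-by-Contract "require": the precondition of the triple.
    assert!(n >= 0, "precondition violated: n >= 0");
    let mut r: i64 = 0;
    // Loop invariant (what Hoare logic asks us to maintain): r*r <= n.
    while (r + 1) * (r + 1) <= n {
        r += 1;
    }
    // Design-by-Contract "ensure": the postcondition of the triple.
    // Dafny would discharge this statically from the invariant.
    assert!(r * r <= n && n < (r + 1) * (r + 1));
    r
}

fn main() {
    assert_eq!(isqrt(0), 0);
    assert_eq!(isqrt(15), 3);
    assert_eq!(isqrt(16), 4);
}
```

The point of correctness-by-construction is that the invariant and postcondition are written first and the loop body is derived from them, rather than the other way around.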

madsohm 5 hours ago|||
They were never concurrent, they were communicating. https://en.wikipedia.org/wiki/Communicating_sequential_proce...
adrian_b 5 hours ago||
That is indeed the correct title, but the processes were concurrent.

However, they were not just concurrent, but also communicating.

wood_spirit 5 hours ago||
And regretful inventor of the null reference!

His “billion dollar mistake”:

https://www.infoq.com/presentations/Null-References-The-Bill...

bazoom42 4 hours ago|||
The mistake was not null references per se. The mistake was having all references be implicitly nullable.

He states around minute 25 that the solution to the problem is to explicitly represent null in the type system, so nullable pointers are explicitly declared as such. But it can be complex to ensure that non-nullable references are always initialized to a non-null value, which is why he chose the easy solution of just letting every reference be nullable.

deathanatos 2 hours ago||
I think even having references that aren't necessarily null is only part of it. Imagine that your language supports two forms of references, one nullable, one not. Let's just borrow C++ here:

  &ref_that_cannot_be_null
  *ref_that_can_be_null
The latter is still a bad idea, even if it isn't the only reference form, and even if it isn't the default, if it lets you do this:

  ref_that_can_be_null->thing()
Where only things that are, e.g., type T have a `thing` attribute. Nulls are "obviously" not T, but a good number of languages' type systems that permit nullable references, or some form of them, allow treating what is in actuality T|null as if it were just T, usually leading to some form of runtime failure if null is actually used, ranging from UB (in C, C++) to panics/exceptions (Go, Java, C#, TS).

It's an error that can be caught by the type system (any number of other languages demonstrate that), and null pointer derefs are one of those bugs that just plague the languages that have it.
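Rust is one of those demonstrating languages; a tiny sketch (Widget and thing() are made-up names echoing the example above) of how the compiler forces narrowing before use:

```rust
struct Widget;

impl Widget {
    fn thing(&self) -> &'static str {
        "did the thing"
    }
}

fn main() {
    let maybe: Option<&Widget> = None;

    // maybe.thing(); // does not compile: Option<&Widget> has no `thing`

    // The type system forces you to narrow T|null down to T first:
    let msg = match maybe {
        Some(w) => w.thing(), // here `w` is a plain &Widget, provably non-null
        None => "no widget",
    };
    assert_eq!(msg, "no widget");
}
```

The deref-a-null failure mode simply has no spelling: the only way to reach `thing()` is through the `Some` arm, where null has already been ruled out.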

adrian_b 5 hours ago||||
The null reference was invented by Hoare as a means to implement optional types, in a way that works regardless of their binary representation.

Optional types were a very valuable invention and the fact that null values have been handled incorrectly in many programming languages or environments is not Hoare's fault.

tialaramex 3 hours ago||
Having "Optional types" only makes sense if your type system is powerful enough.

There are two ways this might happen; both will solve the Billion Dollar Problem, but I think one is the clear winner. The first way is explicit optionality, often retro-fitted to languages: for example, in C# the difference between the types Goose and Goose? is that (in a suitable C# project enabling this rule) the first one is always a Goose and the second might be null instead.

The second way is if you have Sum types you can just add "or it's null" to the type.

I think sum types are better because they pass my "three purposes" rule where I can think of not one (Option<T> replaces optionality) or two (Result<T,E> for error handling) but at least three (ControlFlow<B, C> reifies control flow) distinct problems I don't need separate solutions for any more if I have this feature.

If your type system is too weak you suffer the Billion Dollar problem with Hoare's idea and perhaps if this "feature" had never been invented we'd have all migrated to languages with a better type system decades ago.
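In Rust terms (using the actual std types named above; the three small functions are made-up illustrations), the same sum-type machinery covers all three purposes:

```rust
use std::ops::ControlFlow;

// Purpose 1: optionality. Option<T> replaces the nullable reference.
fn first_even(xs: &[i32]) -> Option<i32> {
    xs.iter().copied().find(|x| x % 2 == 0)
}

// Purpose 2: error handling. Result<T, E> replaces error codes/exceptions.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse::<u16>()
}

// Purpose 3: reified control flow. ControlFlow<B, C> replaces ad hoc
// "keep going / stop early" signalling.
fn step(x: i32) -> ControlFlow<i32, i32> {
    if x < 0 {
        ControlFlow::Break(x) // stop, carrying the offending value
    } else {
        ControlFlow::Continue(x * 2) // keep going with a new value
    }
}

fn main() {
    assert_eq!(first_even(&[1, 3, 4, 5]), Some(4));
    assert_eq!(first_even(&[1, 3, 5]), None);
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("notaport").is_err());
    assert_eq!(step(3), ControlFlow::Continue(6));
    assert_eq!(step(-1), ControlFlow::Break(-1));
}
```

All three are just two-variant enums; the language needs one feature, not three.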

adrian_b 3 hours ago||
I agree that for the correct use of both Optional types and Sum types (a.k.a. Union types) a type system that is powerful enough is essential. Moreover, a convenient syntax is also important.

In my opinion, besides being passed as arguments of functions whose parameters are declared as having the corresponding Optional or Sum type, there is only one other permissible use of values of such types.

Variables of an Optional type shall be allowed in the Boolean expression of an "if" or equivalent conditional statement/expression, while variables of a Sum type shall be allowed in an expression that tests which is the current type in a select/case/switch or whatever is the name used for a conditional statement or expression with multiple branches.

Then in the statements or expressions that are guarded by testing the Optional- or Sum-type variable, that variable shall be used as having the corresponding non-optional type or the type among those possible for a Sum type that has been determined by the test.

This syntax ensures that such variables will not be misused, while also avoiding the excessive and unhelpful verbosity that exists in some languages.

Milpotel 5 hours ago|||
I'm pretty sure that this is not true. I talked to Bud Lawson (the inventor of the pointer) and he claimed that they had implemented special behaviour for null pointers earlier. When I talked to Tony later about it, he said he had never heard of Bud Lawson. So probably both invented them independently, but Bud came first.
elch 3 hours ago|||
If we start playing the "who was first" game, then for the Soviet machine Kiev (Kyiv), an "address language" with a "prime operation" was created in 1957-59.

The prime operation and address mapping.

The prime operation defines a certain single‑argument function. Its symbol (a prime mark) is written above and to the left of the argument: 'a = b where a is the argument and b is the result of the operation. This is read as: "prime a equals b" (or "b is the contents of a"). The argument a is called an address, and the function value b is called the contents of the address. The prime function ' defines a mapping from the set of addresses A to the set of contents B, which we will call an address mapping.

Page 36, chapter III https://torba.infoua.net/files/kateryna-yushchenko/Vychislit...

adrian_b 2 hours ago|||
Pointers and indirect addressing were used in assembly languages and machine languages much earlier than that, perhaps even in some relay-based computers.

In any case, by 1954 most or all electronic computers already used this.

The only priority questions concern which were the first high-level programming languages to use pointers.

In my opinion the first language having pointers with implicit dereferencing was CPL, published in 1963-08, and the first language having pointers with explicit dereferencing was Euler, published completely in 1966-01, but this feature had already been published in 1965-11. The first mainstream programming language, with a large installed base, which had pointers, was the revised IBM PL/I, starting with its version from 1966-07.

Thanks for the link to the book describing the "Kiev" computer. It seems an interesting computer for the year 1957, but it does not have anything to do with the use of pointers in high-level programming languages.

At the page indicated by you there is a description of what appears to be a symbolic assembler. The use of a symbolic assembly language was a great progress at that early date, because many of the first computer programs had been written directly in machine language, or just with a minimal translation, e.g. by using mnemonics instead of numeric opcodes.

However, this does not have anything to do with HLL pointers, and means of indicating indirect addressing in an assembly language existed earlier, because they were strictly necessary for any computer that provided indirect addressing in hardware.

In the very first computers, the instructions were also used as pointers, so a program would modify the address field of an instruction, which was equivalent to assigning a new value to a pointer, before re-executing the instruction.

Later, to avoid the re-writing of instructions, both index registers and indirect addressing were introduced. Indirect addressing typically reserved one bit of an address to mark indirection. So when the CPU loaded a word from the memory, if the indirect addressing bit was set, it would interpret the remainder of the word as a new address, from which a new word would be loaded. This would be repeated if the new word also had the indirection bit set.

The assembly languages just had to use some symbol to indicate that the indirection bit must be set, which appears to have been "prime" for "Kiev".

Milpotel 48 minutes ago||
> the first language having pointers with explicit dereferencing was Euler, published completely in 1966-01

I could only find a manual for PDP-10 Euler with references. Do you have a source for an Euler with pointers?

Milpotel 2 hours ago|||
Nice, and that was implemented and qualifies as high-level language?
adrian_b 2 hours ago|||
You should provide a citation for where Bud Lawson has published his invention.

The use of pointers in assembly language does not count as an invention, as it was used since the earliest automatic computers. The use of implicit reference variables, which cannot be manipulated by the programmer, like in FORTRAN IV (1962) does not count as pointers.

The method for forcing another level of evaluation of a variable by using a "$" prefix, which was introduced in SNOBOL in January 1964, and which has been inherited by the UNIX shell and its derivatives does not count as a pointer.

The term "pointer" was introduced in a revision of the IBM PL/I language, which was published in July 1966. In all earlier publications that I have ever seen the term used was "reference", not "pointer".

There are two high-level programming languages that were the first to introduce explicit references (i.e. pointers). One language was Euler, published in January 1966 by Niklaus Wirth and Helmut Weber. However, Hoare knew about this language before the publication, so he mentioned it in his paper from November 1965, where he discussed the use of references (i.e. pointers).

The other language was the language CPL, which had references already in August 1963. The difference between how CPL used references and how Euler used references is that in Euler pointer dereferencing was explicit, like later in Pascal or in C. On the other hand, in CPL (the ancestor of BCPL), dereferencing a pointer was implicit, so you had to use a special kind of assignment to assign a new value to a pointer, instead of assigning to the variable pointed by the pointer.

Looking now at Wikipedia, I see a claim that Bud Lawson invented pointers in 1964, but there is no information about where he published this or about which high-level programming language used his pointers.

If the pointers of Bud Lawson were of the kind with explicit dereferencing, they would precede by a year the Euler language.

On the other hand, if his pointers were with implicit dereferencing, then they came a year after the British programming language CPL.

Therefore, in the best case for Bud Lawson, he could have invented an explicit dereferencing operator, like the "*" of C, though this would not have been a great invention, because dereferencing operators were already used in assembly languages, they were missing only in high-level languages.

However, the use of references a.k.a. pointers in a high-level programming language has already been published in August 1963, in the article "The main features of CPL", by Barron, Buxton, Hartley, Nixon and Strachey.

Until I see any evidence for this, I consider that any claim about Bud Lawson inventing pointers is wrong. He might have invented pointers in his head, but if he did not publish this and it was not used in a real high-level programming language, whatever he invented is irrelevant.

I see on the Internet a claim that he might have been connected with the pointers of IBM PL/I.

This claim appears to be contradicted by the evidence. If Bud Lawson had invented pointers in 1964, then the preliminary version of PL/I would have had them.

In reality, the December 1964 version of PL/I did not have pointers. Moreover, the first PL/I version used in production, from the middle of 1965 also did not have pointers.

The first PL/I version that has added pointers was introduced only in July 1966, long enough after the widely-known publications of Hoare and of Wirth about pointers. That PL/I version also added other features proposed by Hoare, so there is no doubt that the changes in the language were prompted by the prior publications.

So I think that the claim that Bud Lawson has invented pointers is certainly wrong. He might have invented something related to pointers, but not in 1964.

PL/I had one original element: pointer dereferencing was indicated by replacing "." with "->". This was later incorporated into the language C, to compensate for its mistake of making "*" a prefix operator.

The "->" operator is the only invention of PL/I related to pointers, so that is a thing that has been invented by an IBM employee, but I am not aware of any information about who that may be. In any case, this was not invented in 1964, but in 1966.

tombert 21 hours ago||
Damn.

Tony Hoare was on my bucket list of people I wanted to meet before I or they die. My grad school advisor always talked of him extremely highly, and while I cannot seem to confirm it, I believe Hoare might have been his PhD advisor.

It's hard to overstate how important Hoare was. CSP and Hoare Logic and UTP are all basically entire fields in their own right. It makes me sad he's gone.

jballanc 20 hours ago||
You can always check his entry on the Mathematics Genealogy Project: https://mathgenealogy.org/id.php?id=45760
tombert 19 hours ago||
I actually knew about that, but it says "advisor unknown".

Regardless, he certainly knew Tony Hoare, and spoke extremely highly of him.

gjm11 20 minutes ago||
You've probably tried this already, but just in case: if you can find a copy of his PhD thesis, it's likely (or at least would be likely, absent the information that you've had trouble tracking down his advisor) to have some mention of his advisor's name in it.
dboreham 20 hours ago||
When I met him unfortunately I didn't realize how important he was (1987). The place where I worked used formal methods to verify the design of an FPU, in collaboration with the PRG. iirc the project was a success. I never heard of formal methods being successfully used again until TLA+ a few years ago.
EdNutting 20 hours ago|||
Inmos’ Occam-based verification of their FPU in collaboration with researchers at Bristol and Oxford iirc? Citation: http://people.cs.bris.ac.uk/~dave/formalmethods.pdf

David May was my PhD supervisor and always spoke very highly of Sir Tony Hoare.

Edit: I’m also lucky enough to have worked with Geoff Barrett, the guy that completed that formal verification (and went on to do numerous other interesting things). Some people may be interested to learn that this work was the very first formal verification of an FPU - and the famous Intel FPU bug could have been avoided had Intel been using the verification methods that the Inmos and University teams pioneered.

tombert 5 hours ago||
I actually had two PhD advisors [1]; Jim Woodcock and Simon Foster.

Both of them are legitimately wonderful and intelligent humans that I can only use positive adjectives to describe, but the one I was referring to in this was Jim Woodcock [2]. He had many, many nice things to say about Tony Hoare.

[1] Just so I'm not misleading people, I didn't finish my PhD. No fault at all of the advisor or the school.

[2] https://en.wikipedia.org/wiki/Jim_Woodcock

fanf2 20 hours ago|||
Inmos? Transputers were inspired by Hoare’s CSP.
EdNutting 20 hours ago|||
“Inspired by” is an understatement of the century lol. David May and Sir Tony worked very closely together to enable the architecture to be as pure a runtime for CSP as you could get - at least in early versions of the architecture and accompanying Occam language. It expanded and deviated a bit later on iirc.

Source: David loved to tell some of these stories to us as students at Bristol.

EdNutting 20 hours ago|||
It’s also worth highlighting that the mathematical purity of the designs was also partly the problem with them. As a field, we’re still developing the maths of Effects and Effectful Algebras that are needed to make these systems both mathematically ‘pure’ (or at least sound to within some boundary) and ALSO capable of interfacing to the real world.

Transputer and Occam were, in this sense, too early. A rebuild now combining more recent developments from Effect Algebras would be very interesting technically. (Commercially there are all sorts of barriers).

EdNutting 20 hours ago|||
Further Reading for the curious:

On specifically the relationship between Occam and Transputer architecture: http://people.cs.bris.ac.uk/~dave/transputer1984.pdf

Wider reading: http://people.cs.bris.ac.uk/~dave

dboreham 1 hour ago|||
Yes.
pradn 4 hours ago||
He came to give a lecture at UT Austin, where I did my undergrad. I had a chance to ask him a question: "what's the story behind inventing QuickSort?". He said something simple, like "first I thought of MergeSort, and then I thought of QuickSort" - as if it were just natural thought. He came across as a kind and humble person. Glad to have met one of the greats of the field!
srean 4 hours ago||
Happy to meet you. I was there and I remember that question being asked. I think it was 2010.

If I remember correctly he had two immediate ideas, his first was bubble sort, the second turned out to be quicksort.

He was already very frail by then. Yet clarity of mind was undiminished. What came across in that talk, in addition to his technical material, was his humor and warmth.

gsanghani 4 hours ago|||
I remember this vividly! I believe he said that he thought of _Bubble Sort_ first, but that it was too slow, so he came up with QuickSort next
mceachen 4 hours ago||
He discusses this and his sixpence wager here: https://youtu.be/pJgKYn0lcno

(Source: TFA)

mynegation 21 hours ago||
Sir Tony Hoare visited the Institute for System Programming in Moscow and gave a lecture a quarter of a century ago. It was an unforgettable experience to see the living legend of your field. He was already a senior person then, and today I am going to celebrate his long and wonderful life.
csb6 21 hours ago||
Sad that his (and many others') dream of widespread formal verification of software never came true. He made really fundamental contributions to computer science but will probably be mostly known for quicksort and the quote about his "billion dollar mistake", not his decades-long program to make formal methods more tractable.

Makes me think of an anecdote where Dijkstra said that he feared he would only be remembered for his shortest path algorithm.

hinkley 21 hours ago||
Almost all of the earliest cited works on concurrency management in software were authored by C A R 'Tony' Hoare.

I genuinely forget he authored quicksort on the regular.

yodsanklai 20 hours ago||
Actually, thanks to AI, this may change soon! we may be in a place where widespread formal verification is finally possible.
ziyao_w 5 hours ago||
Random anecdote and Mr. Hoare (yep not a Dr.) has always been one of my computing heroes.

Mr. Hoare gave a talk back during my undergrad, and for some reason, despite being totally checked out of school, I attended; it is one of my formative experiences. AFAICR it was about proving program correctness.

After it finished during the Q&A segment, one student asked him about his opinions about the famous Brooks essay No Silver Bullet and Mr. Hoare's answer was... total confusion. Apparently he had not heard of the concept at all! It could be a lost in translation thing but I don't think so since I remember understanding the phrase "silver bullet" which did not make any sense to me. And now Mr. Hoare and Dr. Brooks are two of my all time computing heroes.

EdNutting 4 hours ago|
"Sir", not "Mr." if you're going to be pedantic about titles ;)

Edit: Oh and he has multiple honorary doctorates (at least 6!), so would be just as much "Dr." too!

tialaramex 3 hours ago|||
It is not usual to call people with an honorary doctorate "Doctor" except in the context of the awarding institution. Most likely the awarding institutions will have actually specified that the recipient should not give anybody the false impression and I can't imagine Tony is the type to do otherwise.
robotresearcher 3 hours ago||
His title at Oxford was 'Professor', and he was addressed as 'Tony'.

He made incoming DPhil (PhD) students a cup of tea individually in his office at the Computing Laboratory. It was a small group, but still I appreciated this personal touch.

ziyao_w 2 hours ago|||
Lol you are totally right! ;-)

I am normally a casual guy but for a giant being a bit more formal (pun intended) seems appropriate. Or maybe I am a nerd through and through :-)

susam 20 hours ago||
I first came across Tony Hoare about 24 years ago while learning C from The C Programming Language by Kernighan and Ritchie. I knew him only as C. A. R. Hoare for a long time. When I got on the Internet, it took me a while to realise that when people said Tony Hoare, it was the same person I knew as C. A. R. Hoare. Quoting the relevant text from the book:

> Another good example of recursion is quicksort, a sorting algorithm developed by C.A.R. Hoare in 1962. Given an array, one element is chosen and the others partitioned in two subsets - those less than the partition element and those greater than or equal to it. The same process is then applied recursively to the two subsets. When a subset has fewer than two elements, it doesn't need any sorting; this stops the recursion.

> Our version of quicksort is not the fastest possible, but it's one of the simplest. We use the middle element of each subarray for partitioning. [...]
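The partitioning scheme described in that passage can be sketched like this (a Rust translation of mine, not K&R's C, using the same middle-element pivot and a Lomuto-style partition):

```rust
// Quicksort as K&R describe it: pick the middle element as the pivot,
// partition the rest around it, then recurse on the two subarrays.
fn quicksort(v: &mut [i32]) {
    if v.len() < 2 {
        return; // fewer than two elements: nothing to sort, recursion stops
    }
    v.swap(0, v.len() / 2); // move the middle element to the front as pivot
    let mut last = 0; // index of the last element known to be < pivot
    for i in 1..v.len() {
        if v[i] < v[0] {
            last += 1;
            v.swap(last, i);
        }
    }
    v.swap(0, last); // put the pivot into its final sorted position
    let (lo, hi) = v.split_at_mut(last);
    quicksort(lo);          // elements strictly less than the pivot
    quicksort(&mut hi[1..]); // elements greater than or equal to it
}

fn main() {
    let mut a = [5, 3, 8, 1, 9, 2, 7];
    quicksort(&mut a);
    assert_eq!(a, [1, 2, 3, 5, 7, 8, 9]);
}
```

As the book says, this is far from the fastest variant, but it is about the shortest honest statement of Hoare's idea.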

It was one of the first few 'serious' algorithms I learnt to implement on my own. More generally, the book had a profound impact on my life. It made me fall in love with computer programming and ultimately choose it as my career. Thanks to K&R, Tony Hoare and the many other giants on whose shoulders I stand.

samiv 3 hours ago|
With respect, I say that one can only feel gobsmacked at how much complexity has grown.

In the 60s inventing one single algorithm with 10 lines of code was a thing.

If you did that today nobody would bat an eye.

Today people write game engines, compilers, languages, whole OS and nobody bats an eye cause there are thousands of those.

Quick sort isn't even a thing for leet code interviews anymore because it's not hard enough.
