Posted by baruchel 1 day ago

Why don't you use dependent types? (lawrencecpaulson.github.io)
267 points | 111 comments
stevan 1 day ago|
> But people have regularly asked why Isabelle dispenses with proof objects. The two questions are essentially the same, because proof objects are intrinsic to all the usual type theories. They are also completely unnecessary and a huge waste of space.

I believe proof by reflection relies on proof objects? Georges Gonthier's proof of the four-colour theorem crucially uses proof by reflection.

zozbot234 1 day ago|
Proof by reflection is accomplished by running some arbitrary program during proof checking that has been proven to only return a "true" result if the goal is true. You can do the exact same thing in an LCF system, and in fact that's arguably what a complex LCF "tactic" amounts to in the first place. If anything, the viability of proof by reflection simply shows that the divide with LCF-like checkers is not really that large.
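
To make that concrete, here is a minimal sketch of the LCF idea in OCaml (all names illustrative, not any real kernel's API). `thm` is abstract and the only ways to build one are the kernel's inference rules, so an arbitrarily complex tactic written in the metalanguage can still only ever produce sound theorems:

    (* A hypothetical LCF-style kernel signature.  Because `thm` is
       abstract, no code outside the kernel can forge a theorem; a
       "tactic" is just an ML function composing these rules. *)
    module type KERNEL = sig
      type term
      type thm                         (* abstract: cannot be forged *)
      val refl    : term -> thm        (* |- t = t                   *)
      val trans   : thm -> thm -> thm  (* |- a=b, |- b=c ==> |- a=c  *)
      val rewrite : thm -> thm -> thm  (* use |- l=r at a redex      *)
    end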
stevan 1 day ago||
> Proof by reflection is accomplished by running some arbitrary program during proof checking that has been proven to only return a "true" result if the goal is true. You can do the exact same thing in an LCF system, and in fact that's arguably what a complex LCF "tactic" amounts to in the first place.

I think the difference is that in a type theory you can prove the soundness of the decision procedure within the system itself?

From "Metatheory and Reflection in Theorem Proving: A Survey and Critique" by John Harrison, 1995:

> "No work on reflection has actually been done in HOL, but Slind (1992) has made some interesting proposals. His approach is distinguished from those considered previously in two important respects. First, he focuses on proving properties of programs written in Standard ML using the formal semantics to be found in Milner, Tofte, and Harper (1990). This contrasts with the other approaches we have examined, where the final jump from an abstract function inside the logic to a concrete implementation in a serious programming language which appears to correspond to it is a glaring leap of faith. [...]"

Proving that your LCF-like tactics are sound using the (informal) semantics of the tactic language (ML) seems cumbersome.

Furthermore, I believe proof by reflection crucially relies on computation happening at the logical level in order to minimise the proof-checking work. Harrison concludes:

> "Nevertheless it is not clear that reflection’s practical utility has yet been convincingly demonstrated."

This was from 1995, so fair enough, but Paulson should be aware of Gonthier's work, which makes me wonder whether anything has changed since then.

practal 1 day ago||
The basic idea is: you run a program F on some input x, obtain r, and then you have some sort of justification for why F x = r is a theorem. There are various levels at which you can do this. For example, "running the program" can consist of applying a curated set of (proven) rewriting rules that take you from the theorem F x = F x to the theorem F x = r by applying them only to the right-hand side. That is basically the same as "Proof by reflection" as used by Gonthier, where the Coq kernel acts as the (unverified) rewriting engine.
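
A self-contained toy of that, sketched in OCaml (everything illustrative): terms are unary-numeral additions, the kernel only hands out equations derived by its own two proven rules for `add`, and "running the program" is iterating those rewrites on the right-hand side until it is normal, yielding |- F x = r:

    (* Toy kernel: `thm` is abstract, so |- lhs = rhs can only come
       from refl plus single proven rewrite steps on the RHS. *)
    type term = Z | S of term | Add of term * term

    module Kernel : sig
      type thm
      val refl : term -> thm           (* |- t = t *)
      val rewrite_rhs : thm -> thm     (* one proven step on the RHS *)
      val dest : thm -> term * term    (* inspect, but not forge *)
    end = struct
      type thm = term * term
      let refl t = (t, t)
      (* the two defining equations of add, used left to right *)
      let rec step = function
        | Add (Z, n) -> Some n
        | Add (S m, n) -> Some (S (Add (m, n)))
        | S t -> Option.map (fun t' -> S t') (step t)
        | Add (a, b) ->
          (match step a with
           | Some a' -> Some (Add (a', b))
           | None -> Option.map (fun b' -> Add (a, b')) (step b))
        | Z -> None
      let rewrite_rhs (l, r) =
        match step r with Some r' -> (l, r') | None -> (l, r)
      let dest th = th
    end

    (* "Running" Add x y = iterating proven rewrites to normal form. *)
    let rec eval th =
      let th' = Kernel.rewrite_rhs th in
      if Kernel.dest th' = Kernel.dest th then th else eval th'

    let () =
      let fx = Add (S (S Z), S Z) in                  (* 2 + 1 *)
      let _, r = Kernel.dest (eval (Kernel.refl fx)) in
      assert (r = S (S (S Z)))                        (* |- 2 + 1 = 3 *)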

So this is not a matter of dependent typing or not; the idea is simple and the same everywhere (e.g., I used it for my PhD thesis in Isabelle, from 2008). It is just a matter of how practical it is to use in your theorem prover of choice.

stevan 23 hours ago||
> That is basically the same as "Proof by reflection" as used by Gonthier, where the Coq kernel acts as the (unverified) rewriting engine.

I don't think it's "basically the same", because this application of the rewrite rules in an LCF-like system is explicit (i.e. the proof-checking work grows with the size of the problem), while in proof by reflection in a type theory it happens implicitly, because the "rewriting" happens as part of reduction and makes use of the definitional equality of the system?
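
A minimal Lean 4 illustration of that contrast (illustrative only, not Gonthier's Coq development): the recorded proof object stays constant-size, while the rewriting work happens silently inside the kernel's definitional equality during type checking:

    -- the entire stored proof object is `rfl`; reducing 2 ^ 10 to
    -- 1024 happens via definitional equality and no step is recorded
    example : 2 ^ 10 = 1024 := rfl

    -- `decide` runs a verified decision procedure (a Decidable
    -- instance) at type-checking time; the stored term stays small
    -- however large the computation gets
    example : 100 < 2 ^ 10 := by decide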

For small and medium examples this probably doesn't matter, but I would think that for something like the four colour theorem it would.

practal 23 hours ago|||
As I said, it depends on how you practically implement it.

I've used it for proving linear inequalities as part of the Flyspeck project (the formal proof of the Kepler conjecture). There I implemented my own rewrite engine that takes a set of rewrite rules and does the computation outside of the LCF kernel, for example by compiling the rules to Standard ML. You can view that engine as an extension of the LCF kernel: just one more rule for how to get theorems. In that instance, it is exactly the same.

zozbot234 23 hours ago||
How does the proof of correctness work in this case? Does the rewriting ultimately resort to proof steps within the kernel like an LCF tactic would, or is there simply an informal argument that whatever you're doing in the rewriter is correct, on a par with the proof kernel itself?
practal 23 hours ago||
There is a proof as part of my thesis that the engine is correct, but it is not formal in the sense of machine-checked.

Note that the final result of the Flyspeck project does not depend on that proof, as the linear-inequalities part was later redone and extended in HOL-Light by Alexey Solovyev, using just the LCF kernel of HOL-Light. That shows a simple LCF kernel can definitely be fast enough for such computations, even at that scale!

anonzzzies 1 day ago||
You just use proofs / dependently typed languages / TLA+ where they make sense, not for everything. The latter might make sense if it's mostly automated, but it takes really painful, elaborate work to get full coverage, and most stuff really doesn't need that. I always think these formal methods plus unit/integration tests cover so much that you are already far more robust than most software on earth.
zozbot234 1 day ago||
The claim that dependently typed languages are inherently reliant on fully written-out proof objects looks quite wrong to me. You could easily imagine a proof term with opaque typed "holes" (written `_`), where the content of each hole is simply replaced by an LCF-like proof script that was somehow proven to generate some term of the correct type, starting from its environment. (The "somehow" carries real weight: such a proof would be about the peculiar programming language the LCF-like checker is implemented in, so the soundness boundary expands a lot and we give up on having an easily checkable kernel!) Since the content is opaque, no other part of the proof development can tell what exactly was in the hole, and we can dispense with writing that part of the proof term out.
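
Lean's `by` blocks already give a rough feel for the idea (only an analogy: Lean elaborates the script into a term that the kernel rechecks, whereas this proposal would leave the hole's content opaque):

    -- a proof term in which one position is an explicit term and the
    -- other is a hole discharged by a script
    example (a b : Nat) : a + b = b + a ∧ 0 + a = a :=
      ⟨Nat.add_comm a b,        -- explicit term for the first conjunct
       by exact Nat.zero_add a⟩ -- script-filled hole for the second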
nextaccountic 1 day ago||
That's how F* (FStar) works, right? You may write out proof objects manually, but most of the time they are inferred by SMT.
whatshisface 1 day ago||
That doesn't sound that easy.
zozbot234 1 day ago||
If you mean that implementing the LCF architecture OP advocates for or evaluating any one implementation of it for soundness isn't easy, I absolutely agree. But assuming that you've solved that part, making use of it within a system that otherwise uses dependent types is not that hard.
netdevphoenix 23 hours ago||
Sounds like for most implementations of DTs you have to go all in, which is likely overkill for many LoB apps doing CRUD with some custom logic on top. The ideal would be to delegate some modules to a separate system using DTs and write the rest in your good old OOP.
zozbot234 22 hours ago|
There are some analogies between your good old OOP and dependent types, in that a derived object in OOP has a "type" (which is reflected in its methods dictionary - it's not merely a runtime "tag") that varies at runtime based on program logic. You can implement some of this with typestate, but for full generality you need to allow for downcasts that might only become statically checkable when you do have full dependent types.
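
For illustration, a small typestate sketch in OCaml (all names hypothetical): the handle's state lives in a phantom type parameter, so reading a closed file is a compile-time error, while the runtime downcast case is exactly what this static encoding cannot express:

    (* Phantom-type typestate: the 'state parameter exists only at
       compile time and constrains which operations apply. *)
    module File : sig
      type opened
      type closed
      type 'state handle
      val open_file : string -> opened handle
      val read_line : opened handle -> string
      val close     : opened handle -> closed handle
    end = struct
      type opened = unit
      type closed = unit
      type 'state handle = in_channel
      let open_file = open_in
      let read_line = input_line
      let close ch = close_in ch; ch
    end

    (* File.read_line (File.close (File.open_file "log.txt"))
       is rejected: the argument has type closed handle. *)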
shiandow 21 hours ago||
Is it me or do dependent types look really similar to the axiom of choice?
fluffypony 1 day ago||
Agree with this. The punchline here is not "dependent types bad", it is "choose your battles". Isabelle/HOL pushed frighteningly far without proof objects or dependent types, from schemes to BSG, and never hit the mythical wall. What moved the needle was automation, libraries, and legible proofs, not a fancier core calculus. Lean is great, but if the toolchain bogs down and equality games leak into your day, your fancy types are like Tesla FSD: impressive demo energy, unpredictable commute (no offense to anyone who uses it regularly). Knowing when not to use them is the real superpower imho.

If you need finely indexed invariants, sure, reach for DT. For the other 95%, HOL plus type classes and locales, backed by a small kernel and big libraries, will get you to production faster and with fewer regrets. Milner's LCF insight still pays the bills. And yes, croissants are delicious, but optional axioms are a risky breakfast.

boulevard 1 day ago||
The real skill is knowing when not to make something dependent; otherwise you just slow yourself down.
heikkilevanto 1 day ago||
I hate titles like "Why don't you use blah-blah". Usually it's because blah-blah might be an acceptable (maybe good?) solution to a problem which I don't have. Let me ask in return: why should I even care about blah-blah? If the first (two?) paragraphs don't give a clear answer to that, never mind!
svat 1 day ago||
The title currently on HN has dropped the quotes that are in the article title: the article is not titled Why don't you use dependent types? (i.e. asking that question of the readers) but is titled "Why don't you use dependent types?" (i.e. the author quotes that question and answers it in the blog post).
leegao 1 day ago||
For what it's worth, the article is the author arguing why they don't personally use blah-blah (Dependent Types) despite being a leading academic in the field (PLT) where blah-blah is frequently touted as the holy grail, and justifying, from experience, their choice of blah-blah-2 (Higher-Order Logic), a tried and true "sophomoric" choice that seems dusty and crusty by comparison (literally: PLT undergrads frequently learn to formalize systems using a reduced blah-blah-2 in their sophomore year, as a way to learn SML). The rest of the article is really only interesting for the PLT / proof-automation community, since it is pretty niche. His conclusion is that you don't need the shiny new blah-blah, which often does things in more complicated ways, if the older blah-blah-2s can do them mostly just as well, with the benefit of simplicity and ease of automation.
adastra22 1 day ago||
In coding terms this is the equivalent of Donald Knuth writing a blog post titled “Why I don’t use version control.”
golemotron 1 day ago||
The juice isn't worth the squeeze.
Pxtl 1 day ago|
Same reason I don't use a variety of other good programming ideas: the tools I use don't support it usefully.