Posted by baruchel 1 day ago
I believe proof by reflection relies on proof objects? Georges Gonthier's proof of the four-colour theorem crucially uses proof by reflection.
I think the difference is that in a type theory you can prove, within the system itself, that the decision procedure is correct?
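Roughly, the pattern looks like this in Lean 4 (a minimal sketch; `evenb` and `evenb_sound` are illustrative names I made up, not a library API):

```lean
-- A toy decision procedure: a boolean checker for evenness.
def evenb (n : Nat) : Bool :=
  n % 2 == 0

-- Its soundness, proved once and for all *inside* the system.
theorem evenb_sound (n : Nat) (h : evenb n = true) : ∃ k, n = 2 * k := by
  have h2 : n % 2 = 0 := by simpa [evenb] using h
  exact ⟨n / 2, by omega⟩

-- Reflection: each concrete instance is discharged by computation.
-- `rfl` suffices because the kernel reduces `evenb 123456` to `true`;
-- no instance-specific derivation is built or replayed.
example : ∃ k, 123456 = 2 * k := evenb_sound 123456 rfl
```

The soundness theorem is checked once; after that, every instance costs the kernel only one evaluation.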
From "Metatheory and Reflection in Theorem Proving: A Survey and Critique" by John Harrison, 1995:
> "No work on reflection has actually been done in HOL, but Slind (1992) has made some interesting proposals. His approach is distinguished from those considered previously in two important respects. First, he focuses on proving properties of programs written in Standard ML using the formal semantics to be found in Milner, Tofte, and Harper (1990). This contrasts with the other approaches we have examined, where the final jump from an abstract function inside the logic to a concrete implementation in a serious programming language which appears to correspond to it is a glaring leap of faith. [...]"
Proving that your LCF-like tactics are sound using the (informal) semantics of the tactic language (ML) seems cumbersome.
Furthermore, I believe proof by reflection crucially relies on computation happening at the logical level in order to minimise proof checking (see the sketch below). Harrison concludes:
> "Nevertheless it is not clear that reflection’s practical utility has yet been convincingly demonstrated."
This was from 1995, so fair enough, but Paulson should be aware of Gonthier's work, which makes me wonder whether anything has changed since then?
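For concreteness, in Lean 4 the `decide` tactic packages exactly this architecture (a small illustration, nothing to do with Gonthier's proof specifically):

```lean
-- The goal's `Decidable` instance is a decision procedure that is
-- correct by construction; `decide` asks the kernel to *evaluate* it.
-- The resulting proof object is a single application of
-- `of_decide_eq_true` to `rfl`, so its size does not grow with the
-- 100 cases being checked.
example : ∀ n < 100, n * n % 4 = 0 ∨ n * n % 4 = 1 := by decide
```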
So this is not a matter of dependent or static typing or the lack of it; the idea is simple and the same everywhere (e.g., I used it in Isabelle for my PhD thesis back in 2008). It is just a matter of how practical it is to use in your theorem prover of choice.
I don't think it's "basically the same", because the application of the rewrite rules in an LCF-like system is explicit (i.e. the proof-checking work grows with the size of the problem), while in proof by reflection in a type theory it happens implicitly: the "rewriting" happens as part of reduction and makes use of the definitional equality of the system?
For small and medium examples this probably doesn't matter, but I would think that for something like the four colour theorem it would.
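To make the contrast concrete, here is the classic unary-numeral example in Lean 4 (`PNat` and `add` are throwaway definitions for illustration):

```lean
-- Unary naturals, so every rewrite step would be visible.
inductive PNat where
  | zero : PNat
  | succ : PNat → PNat

-- Addition by recursion on the second argument.
def add : PNat → PNat → PNat
  | m, .zero   => m
  | m, .succ n => .succ (add m n)

open PNat in
-- In an LCF-style prover this takes one explicit rewrite with the
-- recursion equations per `succ`, each recorded and replayed by the
-- checker. In a type theory the two sides are *definitionally* equal:
-- the kernel reduces the left-hand side during type checking, and the
-- whole proof object is the constant `rfl`.
example : add (succ zero) (succ (succ zero)) = succ (succ (succ zero)) := rfl
```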
I've used it for proving linear inequalities as part of the Flyspeck project (the formal proof of the Kepler conjecture); there I implemented my own rewrite engine that takes a set of rewrite rules and does the computation outside of the LCF kernel, for example by compiling the rules to Standard ML. You can view that engine as an extension of the LCF kernel, just one more rule for how to get theorems. In that instance, it is exactly the same.
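Incidentally, Lean 4 ships a built-in version of that trade: `native_decide` runs the compiled decision procedure instead of reducing it inside the kernel, trusting the compiler as, in effect, one more rule for how to get theorems (proofs built this way depend on the `Lean.ofReduceBool` axiom). A rough parallel to compiling the rules to Standard ML, not the same mechanism:

```lean
-- `decide` would check this by kernel reduction; `native_decide`
-- compiles the same decision procedure and runs it at native speed,
-- extending the trusted base by the compiler (the proof depends on
-- the `Lean.ofReduceBool` axiom).
example : (List.range 1000).foldl (· + ·) 0 = 499500 := by native_decide
```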
Note that the final result of the Flyspeck project does not depend on that proof: the linear-inequalities part was later redone and extended in HOL-Light by Alexey Solovyev, using just the LCF kernel of HOL-Light. That shows that a simple LCF kernel can definitely be fast enough for such computations, even at that scale!
If you need finely indexed invariants, sure, reach for dependent types. For the other 95%, HOL plus type classes and locales, backed by a small kernel and big libraries, will get you to production faster and with fewer regrets. Milner's LCF insight still pays the bills. And yes, croissants are delicious, but optional axioms are a risky breakfast.