
Posted by baruchel 11/2/2025

Why don't you use dependent types? (lawrencecpaulson.github.io)
269 points | 116 comments | page 2
a-saleh 11/3/2025|
I think I mostly like those type systems that lean towards dependent types but don't go all the way.

PureScript might be my favourite. Even by default you get more power than, say, vanilla Haskell, thanks to row types. But beyond that you can get type-level lists, type-level strings, even type-level regexes! And you use these through type classes in a kind of logic-programming way.

zozbot234 11/2/2025||
The claim that dependently typed languages are inherently reliant on fully written-out proof objects looks quite wrong to me. You could easily imagine a proof term with opaque typed "holes" (written `_`), where the content of each hole is replaced by an LCF-style proof script that has somehow been shown to generate a term of the correct type, starting from its environment. (The "somehow" is entirely unspecified and depends on the peculiar programming language the LCF-style checker is implemented in - so the soundness boundary has been expanded a lot, and we have given up on having an easily checkable kernel!) Since the content is opaque, no other part of the proof development can tell what exactly was in the hole, and we can dispense with writing that part of the proof term out.
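Lean 4 already allows something close to this in miniature: a `by` block embedded in a proof term acts as a typed hole filled by a tactic script, and the surrounding term depends only on the type the script produces, not on its contents. A minimal sketch:

```lean
-- A proof term with an embedded tactic-script "hole".
-- The elaborator only requires that the `by` block produce a term
-- of type `a = b`; how the script gets there is opaque to the
-- surrounding term, which just passes the result to `congrArg`.
example (a b : Nat) (h : a = b) : a + 1 = b + 1 :=
  congrArg (· + 1) (by rw [h])
```

In the proposal above, the `by` block would additionally be erased rather than elaborated into a stored proof term, trading the checkable kernel for a smaller proof object.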
nextaccountic 11/3/2025||
That's how F* (FStar) works, right? You may write out proof objects manually, but most of the time they are discharged by the SMT solver.
whatshisface 11/2/2025||
That doesn't sound that easy.
zozbot234 11/2/2025||
If you mean that implementing the LCF architecture OP advocates for, or evaluating any one implementation of it for soundness, isn't easy, I absolutely agree. But assuming you've solved that part, making use of it within a system that otherwise uses dependent types is not that hard.
fluffypony 11/2/2025||
Agree with this. The punchline here is not "dependent types bad", it is "choose your battles". Isabelle/HOL pushed frighteningly far without proof objects or dependent types, from schemes to BSG, and never hit the mythical wall. What moved the needle was automation, libraries, and legible proofs, not a fancier core calculus. Lean is great, but if the toolchain bogs down and equality games leak into your day, your fancy types are like Tesla FSD: impressive demo energy, unpredictable commute (no offense to anyone who uses it regularly). Knowing when not to use them is the real superpower imho.

If you need finely indexed invariants, sure, reach for DT. For the other 95%, HOL plus type classes and locales, backed by a small kernel and big libraries, will get you to production faster and with fewer regrets. Milner's LCF insight still pays the bills. And yes, croissants are delicious, but optional axioms are a risky breakfast.

throwthrow0987 11/3/2025||
So it seems the thesis of some pretty in-the-know mathematicians is that the secret of dependent types is knowing when not to use them. Is that necessarily an argument against Lean or Rocq? In the sense: could one simply not use the dependent types on offer in these languages in certain circumstances, and try to emulate an Isabelle proof using custom-built tactics?
netdevphoenix 11/3/2025||
Sounds like for most implementations of DTs you have to go all in, which is likely overkill for many line-of-business apps doing CRUD with some custom logic on top. The ideal would be to delegate some modules to a separate system using DTs and write the rest in your good old OOP.
zozbot234 11/3/2025|
There are some analogies between your good old OOP and dependent types, in that a derived object in OOP has a "type" (reflected in its method dictionary - it's not merely a runtime "tag") that varies at runtime based on program logic. You can implement some of this with typestate, but for full generality you need to allow for downcasts that might only become statically checkable when you have full dependent types.
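The typestate idea mentioned here can be sketched in Rust, where the object's current state lives in a type parameter, so invalid method calls are rejected at compile time (all names here are hypothetical, chosen for illustration):

```rust
// Typestate sketch: a connection whose state (Closed/Open) is
// tracked in the type. Calling `send` on a closed connection is
// a compile error rather than a runtime check.
use std::marker::PhantomData;

struct Closed;
struct Open;

struct Conn<State> {
    _state: PhantomData<State>,
}

impl Conn<Closed> {
    fn new() -> Self {
        Conn { _state: PhantomData }
    }
    // Consumes the closed connection, returns an open one:
    // the state transition is a change of type.
    fn open(self) -> Conn<Open> {
        Conn { _state: PhantomData }
    }
}

impl Conn<Open> {
    fn send(&self, msg: &str) -> usize {
        msg.len() // stand-in for real I/O
    }
    fn close(self) -> Conn<Closed> {
        Conn { _state: PhantomData }
    }
}

fn main() {
    let conn = Conn::<Closed>::new().open();
    let sent = conn.send("hello");
    println!("sent {sent} bytes");
    let _closed = conn.close();
    // _closed.send("x"); // would not compile: no `send` on Conn<Closed>
}
```

This captures state transitions that are fixed at compile time; the downcasts the comment mentions (state known only from runtime program logic) are exactly what this encoding cannot express without dependent types.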
Pxtl 11/2/2025||
Same reason I don't use a variety of other good programming ideas: the tools I use don't support it usefully.
golemotron 11/2/2025||
The juice isn't worth the squeeze.
boulevard 11/2/2025||
The real skill is knowing when not to make something dependent otherwise you just slow yourself down.
shiandow 11/3/2025||
Is it me or do dependent types look really similar to the axiom of choice?
heikkilevanto 11/2/2025|
I hate titles like "Why don't you use blah-blah". Usually because blah-blah might be an acceptable (maybe good?) solution to a problem which I don't have. Let me ask in return: Why should I even care about blah-blah. If the first (two?) paragraphs don't give a clear answer to that, never mind!
leegao 11/2/2025||
For what it's worth, the article is the author arguing why they don't personally use blah-blah (dependent types) despite being a leading academic in the field (PLT), where blah-blah is frequently touted as the holy grail, and justifying his preference for blah-blah-2 (higher-order logic), a tried and true "sophomoric" choice that seems dusty and crusty by comparison (literally: PLT undergrads frequently learn to formalize systems with a reduced blah-blah-2 in their sophomore year, as a way to learn SML). The rest of the article is really only interesting for the PLT/proof-automation community, since it is pretty niche. His conclusion is that you don't need the shiny new blah-blah, which often does things in more complicated ways, when the older blah-blah-2 can do them mostly just as well and has the benefit of simplicity and ease of automation.
adastra22 11/3/2025||
In coding terms this is the equivalent of Donald Knuth writing a blog post titled “Why I don’t use version control.”
svat 11/3/2025||
The title currently on HN has dropped the quotes that are in the article title: the article is not titled Why don't you use dependent types? (i.e. asking that question of the readers) but is titled "Why don't you use dependent types?" (i.e. the author quotes that question and answers it in the blog post).