Up until now it's always been a hard sell; people say "Well, I can prove it myself, and that's less work than getting the computer to prove it", and they weren't completely wrong.
However, now we have LLMs, which can do lots of interesting work but really can't be trusted with anything important (an LLVM optimisation pass, for example). If an LLM can convince a theorem prover that its LLVM optimisation pass is correct, then suddenly its output is much more useful.
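To make "convince a theorem prover" concrete, here is a toy sketch in Lean (purely illustrative, nothing to do with real LLVM tooling): the kind of equality obligation a checker would discharge before accepting a peephole rewrite, proved mechanically by the omega tactic.

    -- Toy sketch only (no relation to real LLVM internals): each theorem
    -- says "the rewritten expression equals the original for every input",
    -- and the `omega` decision procedure checks it mechanically.

    -- rewrite: x + x  ->  x * 2
    theorem rw_double (x : Int) : x + x = x * 2 := by omega

    -- rewrite: (x + 1) - 1  ->  x
    theorem rw_inc_dec (x : Int) : (x + 1) - 1 = x := by omega

Real tools like Alive2 generate obligations over actual LLVM IR semantics, but the trust argument has the same shape: the LLM writes the pass, the prover checks it.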
The TLS 1.3 "Selfie attack" is an example of a gap between what we did prove and what we intuitively understood.
The formal proof for TLS 1.3 says Alice talking to Bob gets all the defined benefits of the protocol, and one option is that they have a Pre-Shared Key (PSK) for that conversation. They both know the same key; in that sense it's symmetric.
But the human intuition is that this is an Alice+Bob key, whereas the proof says it is an Alice->Bob conversation key. If we re-use the same PSK for Bob->Alice conversations too, we get the Selfie attack. The formal proof never said we could expect that to work; it was just our intuition that confused us.
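A toy sketch in C of the underlying idea (this is not the actual TLS 1.3 mechanism, and the kdf below is a placeholder FNV-1a mix, not a real KDF like HKDF): bind both identities, in order, into the key derivation, so each direction of the conversation gets a distinct key and a reflected message no longer authenticates.

    /* Toy sketch, NOT real TLS: derive direction-bound traffic keys
     * from one shared PSK so Alice->Bob and Bob->Alice never use the
     * same key. kdf() is a placeholder (FNV-1a), not a real KDF. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* placeholder mixing function, not cryptographically secure */
    static uint64_t kdf(uint64_t acc, const void *data, size_t len) {
        const unsigned char *p = data;
        for (size_t i = 0; i < len; i++) {
            acc ^= p[i];
            acc *= 0x100000001b3ULL; /* FNV-1a prime */
        }
        return acc;
    }

    /* derive a traffic key bound to who speaks and who listens */
    static uint64_t traffic_key(uint64_t psk, const char *initiator,
                                const char *responder) {
        uint64_t k = kdf(0xcbf29ce484222325ULL, &psk, sizeof psk);
        k = kdf(k, initiator, strlen(initiator));
        k = kdf(k, responder, strlen(responder));
        return k;
    }

    int main(void) {
        uint64_t psk = 0x123456789abcdef0ULL; /* the one shared secret */
        uint64_t ab = traffic_key(psk, "Alice", "Bob");
        uint64_t ba = traffic_key(psk, "Bob", "Alice");
        /* the directions now differ, so reflection fails to authenticate */
        printf("Alice->Bob: %016llx\nBob->Alice: %016llx\n",
               (unsigned long long)ab, (unsigned long long)ba);
        return 0;
    }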
I also prefer to use DER, because it is better than BER and CER. DER is actually a subset of BER, but BER has some messiness that DER avoids; because there are not as many ways to encode the same data in DER, it is simpler to handle.
DER is also the format used by X.509 certificates, which is fortunate; however, I use DER for other stuff too (since I think it is generally better than XML, JSON, CSV, etc.).
I wrote a library in C to serialize and deserialize DER.
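For anyone curious why DER is simple to handle, here is a minimal sketch (written for this comment, not taken from that library): every DER value is a tag-length-value triple, and DER demands the one minimal encoding, so a small non-negative INTEGER takes only a few lines.

    /* Minimal DER sketch: encode a small non-negative INTEGER (tag 0x02)
     * using the definite short-form length. DER, unlike BER, allows only
     * the minimal encoding, so there is exactly one valid output. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stddef.h>

    /* Encode value as a DER INTEGER into out (>= 11 bytes);
     * returns the number of bytes written. */
    static size_t der_encode_uint(uint64_t value, unsigned char *out) {
        unsigned char tmp[8];
        size_t n = 0, i = 0;

        /* minimal magnitude bytes, least significant first */
        do {
            tmp[n++] = (unsigned char)(value & 0xff);
            value >>= 8;
        } while (value != 0);

        /* INTEGER is signed: if the top bit is set, prepend 0x00
         * so the value stays non-negative */
        int pad = (tmp[n - 1] & 0x80) != 0;

        out[i++] = 0x02;                     /* tag: INTEGER */
        out[i++] = (unsigned char)(n + pad); /* short-form length */
        if (pad) out[i++] = 0x00;
        while (n > 0) out[i++] = tmp[--n];   /* value, big-endian */
        return i;
    }

    int main(void) {
        unsigned char buf[11];
        size_t len = der_encode_uint(65537, buf); /* common RSA exponent */
        for (size_t j = 0; j < len; j++) printf("%02x ", buf[j]);
        printf("\n"); /* prints: 02 03 01 00 01 */
        return 0;
    }

That 02 03 01 00 01 is the only legal DER encoding of 65537; BER would also accept the same value with a long-form length octet, which is exactly the sort of ambiguity DER removes.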
Sometimes people have good experiences with tools and like to share them.
(As in, it isn't one standard but a group of standards; ASN.1 without any encoding rules is split into about four standards by itself. Though, to be fair, CER, BER, and DER are all in the same standard (X.690), while PER, XER, OER, JER, GSER, RXER, and others are each separate standards.)