Posted by thadt 7 hours ago
At the very least, you want to start using hybrid legacy / pqc algorithms so engineers at Cisco will know not to limit key sizes in PDUs to 128 bytes.
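For concreteness, here's the size arithmetic (a rough sketch; the 128-byte field is the hypothetical PDU limit from the comment above, and the constants come from RFC 7748 and FIPS 203):

```python
# Why a fixed 128-byte key field breaks with hybrid key exchange:
# a hybrid (X25519 + ML-KEM-768) client key share concatenates both
# public values, so it is far larger than a classical ECDH key.
X25519_PUBLIC_KEY = 32       # bytes (RFC 7748)
MLKEM768_ENCAPS_KEY = 1184   # bytes, ML-KEM-768 encapsulation key (FIPS 203)
LEGACY_FIELD_LIMIT = 128     # bytes, the hypothetical hard-coded PDU limit

hybrid_key_share = X25519_PUBLIC_KEY + MLKEM768_ENCAPS_KEY

print(hybrid_key_share)                      # 1216 bytes
print(hybrid_key_share > LEGACY_FIELD_LIMIT) # True: won't fit the field
```

Even the classical-only half fits comfortably; it's the ML-KEM component that blows past any small fixed-size buffer, which is why deploying hybrids early flushes out these assumptions.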
The incident you're describing doesn't sound familiar. None of the extensions in 1.1 were really that big, though of course certs can get that big if you work hard enough. Are you perhaps thinking instead of the 256-511 byte ClientHello issue addressed in [1]?
[0] https://blog.cloudflare.com/pq-2025/
[1] https://datatracker.ietf.org/doc/html/rfc7685
I feel like the story of the NSA pushing a (definitely misguided, and obviously later exploited by adversaries) NOBUS backdoor has percolated poorly into the collective consciousness, with the NOBUS part getting lost entirely.
See https://keymaterial.net/2025/11/27/ml-kem-mythbusting/ for whether the current standards can hide NOBUS backdoors. It talks about ML-KEM, but every recent standard I've read looks like this.
Also, that was the time of export ciphers and Suite A vs Suite B, which were very explicit about there being different algorithms for US NatSec vs. everything else. This time there's only CNSA 2.0, which is pure ML-KEM and ML-DSA.
So no, there is no history of the NSA pushing non-NOBUS backdoors into NatSec algorithms.
In fairness, that was from 1975. I don't particularly trust the NSA, but I don't think things they did half a century ago are a great way to extrapolate their current interests.
while i agree with filippo, the way you worded this makes me think you may not be aware that gutmann is also an expert in the field. so, if you're giving filippo weight because he's an expert, it's worth giving some amount to gutmann as well.
i don't really get your reply / insincere apology.
if you're going to bother mentioning filippo's expertise in the first place, it's just weird to frame it the way you did. that is how someone would typically dismiss some random blogger with an appeal to authority. but if both people are authorities, it doesn't make sense.
if you already knew, then my comment can be context for future readers who don't, and who might otherwise dismiss gutmann as a non-expert getting rebutted by an expert.
Also, I went over Filippo's post again and still can't see where it references the Gutmann / Neuhaus paper. Are we talking about the same post?
> This paper presents implementations that match and, where possible, exceed current quantum factorisation records using a VIC-20 8-bit home computer from 1981, an abacus, and a dog.
From the link:
> Sure, papers about an abacus and a dog are funny and can make you look smart and contrarian on forums. But that’s not the job, and those arguments betray a lack of expertise[1]. As Scott Aaronson said[2]:
> > Once you understand quantum fault-tolerance, asking “so when are you going to factor 35 with Shor’s algorithm?” becomes sort of like asking the Manhattan Project physicists in 1943, “so when are you going to produce at least a small nuclear explosion?”
[1]: https://bas.westerbaan.name/notes/2026/04/02/factoring.html