Posted by linolevan 1/19/2026
nitpicking at the RFCs when everyone knows DNS is a big old thing with lots going on
how do they not have basic integration tests to check how clients resolve?
it seems very unlike the Cloudflare of old, which was much more up front - there is no talk of any need to improve process, just blaming other people
I was also shocked that a Cisco switch went into a reboot loop over this DNS ordering issue.
Also, what's the right mental framework for deciding when to release a patch RFC vs. obsoleting the old standard with a comprehensive update?
Otherwise I might go consult my favorite RFC and not even know it's been superseded. And if it has been superseded by a brand-new doc, now I have to start from scratch instead of reading the diff or patch notes to figure out what needs updating.
And if we must supersede, I humbly request that a warning be put at the top, linking to the new standard.
https://datatracker.ietf.org/doc/html/rfc5245
I agree that it would be much more helpful if this were made obvious in the document itself.
It's not obvious that "updated by" notices are treated any more helpfully than "obsoletes" notices.
They write the reordering change, push it, the glibc test fires and fails, and you quickly discover: "Crap, tests are failing and the dependency (glibc) doesn't work the way I thought it would."
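As a rough sketch of the kind of test meant here: whatever order the server emits RRsets in, client-side parsing should yield the same addresses. `parse_answer` below is a hypothetical stand-in for the client resolver code under test, not glibc's actual code.

    # Order-independence check: every permutation of the answer section
    # must produce the same set of addresses.
    import itertools

    def parse_answer(records):
        # Hypothetical client logic: collect address records, ignore the rest.
        return sorted(v for (name, rtype, v) in records if rtype in ("A", "AAAA"))

    def test_order_independence():
        records = [
            ("example.com", "A", "192.0.2.1"),
            ("example.com", "AAAA", "2001:db8::1"),
            ("example.com", "TXT", "v=spf1 -all"),
        ]
        expected = parse_answer(records)
        for perm in itertools.permutations(records):
            assert parse_answer(list(perm)) == expected, perm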
Reminds me of https://news.ycombinator.com/item?id=37962674 or see https://tech.tiq.cc/2016/01/why-you-shouldnt-use-cloudflare/
Any change to a global service like that, even a rollback (or data deployment or config change), should be released to a subset of the fleet first, monitored, and then rolled out progressively.
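One common way to gate such a staged rollout is deterministic hash bucketing, so widening the stage from 1% to 5% to 25% keeps earlier hosts included. This is a made-up sketch (`in_rollout` is not any particular vendor's API), and a real system also needs monitoring hooks and automated rollback.

    # Deterministic percentage gate for a progressive rollout.
    import hashlib

    def in_rollout(hostname: str, percent: float) -> bool:
        """True if `hostname` falls in the first `percent` (0-100) of the fleet."""
        bucket = int(hashlib.sha256(hostname.encode()).hexdigest(), 16) % 10_000
        return bucket < percent * 100

    for host in ("edge-001", "edge-002", "edge-003"):
        print(host, in_rollout(host, 1.0))  # 1% canary stage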
A micro Prolog implementation could be rolled into glibc's resolver (or a DNS resolver in general) to solve the problem once and for all. Each resolved record would be asserted as a fact, and a tiny search implementation would run after all assertions have been made, resolving the IP address irrespective of the order in which the RRsets arrived.
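A toy version of that "assert facts, then search" idea, in Python rather than Prolog (all names hypothetical; a real resolver also needs TTLs, negative caching, and much more):

    # RRsets are stored as facts in whatever order they arrive; resolution
    # is a separate search over the fact base afterwards, so wire order
    # cannot matter.
    facts = []  # (name, rtype, value) triples

    def assert_rr(name, rtype, value):
        """Record a resource record as a fact; no resolution happens here."""
        facts.append((name.lower(), rtype, value))

    def resolve(name, depth=0, max_depth=8):
        """Follow CNAME facts and collect A/AAAA facts for `name`."""
        if depth > max_depth:  # guard against CNAME loops
            return []
        name = name.lower()
        addrs = [v for (n, t, v) in facts if n == name and t in ("A", "AAAA")]
        if addrs:
            return addrs
        return [a for (n, t, v) in facts if n == name and t == "CNAME"
                for a in resolve(v, depth + 1, max_depth)]

    # Same answer regardless of how the RRsets were ordered on the wire:
    assert_rr("edge.example.net", "AAAA", "2001:db8::10")
    assert_rr("www.example.com", "CNAME", "edge.example.net")
    assert_rr("edge.example.net", "A", "192.0.2.10")
    print(resolve("www.example.com"))  # ['2001:db8::10', '192.0.2.10']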