
Posted by todsacerdoti 7 hours ago

DNS-Persist-01: A New Model for DNS-Based Challenge Validation(letsencrypt.org)
155 points | 64 comments
newsoftheday 4 hours ago|
Today I do the following:

/usr/bin/letsencrypt renew -n --agree-tos --email me@example.com --keep-until-expiring

Will I need to change that? Will I need to manually add custom DNS entries to all my domains?

PS To add, compared to dealing with some paid certificate services, LetsEncrypt has been a dream.

dextercd 4 hours ago|
This adds a new validation method that people can use if they want. The existing validation methods (https://letsencrypt.org/docs/challenge-types/) aren't going away, so your current setup will keep working.
jsheard 4 hours ago|||
And to elaborate, the reasons you might want to use a DNS challenge are to acquire wildcard certificates, or to acquire regular certificates on a machine or domain which isn't directly internet-facing. If neither of those apply to you then the regular HTTP/TLS methods are fine.
newsoftheday 4 hours ago||
OK I was sort of thinking that might be the case but wanted to make sure in case I had to start prepping now, thanks. We use no wildcard domains today, maybe down the road.
bombcar 1 hour ago||
Wildcard domains are a great way to get certs for all your "internal systems" while only having to expose one (or a bit of one, on DNS) to the Internet at large.

This is going to greatly simplify some of my scripts.

newsoftheday 4 hours ago|||
This is good news, not sure I got that from reading the article but even if I had to do it, it wouldn't be the end of the world I guess.
basilikum 3 hours ago||
> The timestamp is expressed as UTC seconds since 1970-01-01

That should be TAI, right? Is that really correct, or do they actually mean unix timestamps (which shift with leap seconds, unlike TAI, which is actually just the number of seconds that have passed since 1970-01-01T00:00:00Z)?

wtallis 3 hours ago|
Do leap seconds even matter here? Doing anything involving DNS or certificates in a way that requires clock synchronization down to the second would seem to be asking for trouble.
tialaramex 50 minutes ago|||
Abolition of the Leap Second is basically a done deal. So, the differences caused by leap seconds will become frozen as arbitrary offsets, GPS time versus UTC for example.

Basically when it was invented leap seconds seemed like a good idea because we assumed the inconvenience versus value was a good trade, but in practice we've discovered the value is negligible and the inconvenience more than we expected, so, bye bye leap seconds.

The body responsible has formal treaty promises to make UTC track the Earth's spin, and replacing those treaties is a huge pain. So the proposed "hack" is to imagine into existence a leap minute or even a leap hour that could correct for the spin. In practice those will never be used either, because they're even less convenient than a leap second. But by the time anyone is asked to set a date for these hypothetical changes, the signatory countries likely won't exist and their successors can just sign a revised treaty; countries only tend to last a few hundred years. Look at the poor US, which is preparing 250th anniversary celebrations while also approaching civil war.

toast0 58 minutes ago|||
Probably yeah, seconds don't really matter here. You would have to work hard for the 27 second difference to be material. But precision is nice.

unixtime is almost certainly what is meant by the standard, but it is not the count of UTC seconds since 1970; unix time is the number of seconds since 1970 as if all days had 86400 seconds. UTC, TAI, and GPS seconds are all the same length, and the same number have happened since 1970, but TAI appears 37 seconds ahead of UTC because TAI has days with exactly 86400 seconds, while UTC has had some days with 86401 seconds, and TAI was already 10 seconds ahead of UTC in 1970. unixtime and UTC stay in sync because unixtime lets a day encompass 86401 UTC seconds while still only counting 86400 of them.
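The distinction toast0 draws can be checked directly: unix time is pure calendar arithmetic, so leap seconds never show up in it. A minimal Python sketch:

```python
from datetime import datetime, timezone

# Unix time treats every day as exactly 86400 seconds, so converting a
# calendar date is pure arithmetic: leap seconds are simply never counted.
def unix_time(dt: datetime) -> int:
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    return int((dt - epoch).total_seconds())

# 17167 calendar days passed between 1970-01-01 and 2017-01-01, and 27 leap
# seconds were inserted along the way, yet unix time is exactly days * 86400:
assert unix_time(datetime(2017, 1, 1, tzinfo=timezone.utc)) == 17167 * 86400
```

A clock counting real elapsed (TAI-style) seconds over the same span would read 27 seconds more; that difference is invisible to unix timestamps.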

csense 4 hours ago||
To get a Let's Encrypt wildcard cert, I ended up running my own DNS server with dnsmasq and delegating the _acme-challenge subdomain to it.

Pasting a challenge string once and letting its continued presence prove continued ownership of a domain is a great step forward. But I agree with others that there is absolutely no reason to expose account numbers; it should be a random ID associated with the account in Let's Encrypt's database.

As a workaround, you should probably make a new account for each domain.
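The delegation described above can be sketched as ordinary zone records; the names and address below are placeholders, and the dnsmasq instance on the delegated box would serve the TXT answers for the challenge label:

```
; In the parent zone: hand the challenge label to the box running dnsmasq
_acme-challenge.example.com.  IN  NS  acme-ns.example.com.
acme-ns.example.com.          IN  A   203.0.113.10
```

With that in place, the parent zone never needs to change again; only the delegated nameserver sees the churn of challenge tokens.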

bombcar 1 hour ago||
Your account ID is exposed in the certificate generated; what's the real difference?
Spivak 4 hours ago||
You bothered to manage your LE accounts? I only say because when using the other two challenge types with most deployment scenarios you were generating a new account per cert so your account ID was just a string of random numbers.
micw 5 hours ago||
I wonder why they switched from a super-secure-super-complex (in terms of operations) way of doing DNS auth to a super-simple-no-cryptography-involved method that just relies on the account id.

Why not use some public/private key auth where the DNS contains a public key and the requesting server uses the private key to sign the cert request? This would decouple the authorization from the actual account. It would not reveal the account's identity. It could be used with multiple accounts (useful for a wildcard on the DNS plus several independent systems requesting certs for subdomains).

tptacek 5 hours ago||
The most common vector for DNS-based attacks on issuance is compromised registrar accounts, and no matter how complicated you make the cryptography, if you're layering it onto the DNS, those attacks will preempt the cryptography.
Spivak 4 hours ago||
Because LE keeps a mapping of account ids to emails and public keys. You have to have the private key to the ACME account to issue a cert. The cryptography is still there but the dance is done by certbot behind the scenes.

Prior to this accounts were nearly pointless as proof of control was checked every time so people (rightfully) just threw away the account key LE generated for them. Now if you use PERSIST you have to keep it around and deploy it to servers you want to be able to issue certs.
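The account-key "dance" mentioned above rests on RFC 8555 key authorizations, which hash the account key via an RFC 7638 JWK thumbprint. A rough stdlib-only sketch of that thumbprint; the key values below are made up, not a real key:

```python
import base64, hashlib, json

# RFC 7638 JWK thumbprint: SHA-256 over the canonical JSON form of the key's
# required members (lexicographic order, no whitespace), base64url-encoded.
def jwk_thumbprint(jwk: dict) -> str:
    required = {"RSA": ("e", "kty", "n"), "EC": ("crv", "kty", "x", "y")}
    canon = json.dumps({k: jwk[k] for k in required[jwk["kty"]]},
                       separators=(",", ":"), sort_keys=True)
    digest = hashlib.sha256(canon.encode()).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# An ACME key authorization is then token + "." + thumbprint
# (illustrative values only):
jwk = {"kty": "EC", "crv": "P-256", "x": "MKBC...", "y": "4Etl..."}
key_authz = "some-token" + "." + jwk_thumbprint(jwk)
```

Clients like certbot compute this and sign every API request with the account key, which is why holding that key is what actually grants issuance.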

mmh0000 5 hours ago||
I really like and hate this at the same time.

Years ago, I had a really fubar shell script for generating the DNS-01 records on my own self-hosted (non-cloud) authoritative nameserver. It "worked," but its reliability was highly questionable.

I like that DNS-PERSIST fixes that.

But I don't understand why they chose to include the account as a plain-text string in the DNS record. Seems they could have just as easily used a randomly generated key that wouldn't mean anything to anyone outside Let's Encrypt, and without exposing my account to every privacy-invasive bot and hacker.

Ajedi32 3 hours ago||
> they could have just as easily used a randomly generated key

Isn't that pretty much what an accounturi is in the context of ACME? Who goes around manually creating Let's Encrypt accounts and re-using them on every server they manage?

ragall 5 hours ago|||
Those who choose to use DNS-PERSIST-01 should fully commit to automation and create one LetsEncrypt account per FQDN (or at least per loadbalancer), using a UUID as username.
mcpherrinm 5 hours ago||
There is no username in ACME besides the account URI, so the UUID you're suggesting isn't needed. The account URIs themselves just contain a number (a db primary key).

If you’re worried about correlating between domains, then yes just make multiple accounts.

There is an email field in ACME account registration but we don’t persist that since we dropped sending expiry emails.

9dev 2 hours ago|||
It’s still a valid point IMHO - why not just use the public key directly? It seems like the account URI just adds problems instead of resolving any.
mcpherrinm 1 hour ago||
It has these primary advantages:

1. It matches what the CAA accounturi field has

2. It's consistent across an account, making it easier to set up new domains without needing to make any API calls

3. It doesn't pin a user's key, so they can rotate it without needing to update DNS records - which this method assumes is nontrivial, otherwise you'd use the classic DNS validation method
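For reference, the existing CAA accounturi binding from RFC 8657 that point 1 refers to looks like this (the account number is a placeholder):

```
example.com.  IN  CAA  0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12345"
```

Reusing the same identifier format means one URI string can appear consistently in both the CAA record and the new persistent validation record.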

glzone1 2 hours ago|||
Interesting.

I didn't realize the email field wasn't persisted. I assumed it could be used in some type of account recovery scenario.

bflesch 53 minutes ago||
> But I don't understand why they chose to include the account as a plain-text string in the DNS record.

Simple: it's for tracking. Someone paid for that.

chaz6 4 hours ago||
Is it possible to create an ACME account without requesting a certificate? AFAICT it is not, so you cannot use this method unless you have first requested a certificate with some other method. I hope I am wrong!
dextercd 4 hours ago|
An account needs to be created before you can request a certificate. Some ACME clients might create the account for you implicitly when you request the first certificate, but in the background it still needs to start by registering an account.

`certbot register` followed by `certbot show_account` is how you'd do this with certbot.

chaz6 2 hours ago||
Great, thank you!
Havoc 4 hours ago||
Interesting. I think a lot of the security headaches went away for me when I discovered providers like CF can restrict the scope of tokens to a single domain and lock them to my IP.
amluto 4 hours ago|
Even CF cannot restrict the scope of a token to a single host.
cube00 3 hours ago||
Or a single DNS record.
infogulch 2 hours ago||
This is a nice increment in ACME usability.

Once again I would like to ask CA/B to permit name constrained, short lifespan, automatically issued intermediate CAs. Last year's request: https://news.ycombinator.com/item?id=43563676

ocdtrekkie 4 hours ago||
This might be the first time in ten years that a certificate proposal intends to make issuing certificates more reasonable and not less. More of this, less of 7-day-lifetime stupidity.
CqtGLRGcukpy 4 hours ago|
"Support for the draft specification is available now in Pebble, a miniature version of Boulder, our production CA software. Work is also in progress on a lego-cli client implementation to make it easier for subscribers to experiment with and adopt. Staging rollout is planned for late Q1 2026, with a production rollout targeted for some time in Q2 2026."