Posted by schmuckonwheels 14 hours ago
This isn't LE's decision: a 47 day max was voted on by the CA/Browser Forum.
https://www.digicert.com/blog/tls-certificate-lifetimes-will...
https://cabforum.org/2025/04/11/ballot-sc081v3-introduce-sch...
https://groups.google.com/a/groups.cabforum.org/g/servercert... - public votes of all members, which were unanimously Yes or Abstain.
IMO this is a policy change that can Break the Internet, as many archived/legacy sites on old-school certificates may not be able to afford the upfront tech or ongoing labor to transition from annual to effectively-monthly renewals, and will simply be shut down.
And, per other comments, this will make LE the only viable option to modernize, and thus much more of a central point of failure than before.
But Let's Encrypt is not responsible for this move, and did not vote on the ballot.
Ideally, this will take less ongoing labor than annual manual rotations, and I'd argue sites that can't handle this would have been likely to break at the next annual rotation anyways.
If they have certificates managed by hosters, the hosters will deal with it. If they don't, then someone was already paying for the renewal and handling the replacement on the server side, making it much more likely that it will be fixed.
Nobody's paying for EV certificates now that browsers don't display the EV details. The only reason left to pay for a certificate is if you're rotating certificates manually and the 90-day expiry of Let's Encrypt certificates is a hassle.
If the CA/Browser Forum is forcing everyone to run ACME clients (or outsource to a managed provider like AWS or Cloudflare), doesn't that eliminate the last substantial reason to give money to a CA?
Microsoft voted for it, and now they are basically the only game in town for cloud signing that is affordable for individuals. The Forum needs voting representatives for software developers and end users or else the members will just keep enriching themselves at our expense.
Seems to me CAs have intermediate certificates and can rotate those; there's not much upside to rotating the root certificates, and lots of downsides.
1. These might need to happen as emergencies if something bad happens
2. If roots rotate often then we build the muscle of making sure trust bundles can be updated
I think the weird cadence at which they're rotated today is the real root cause of broken devices, and we need to stop the bleeding at some point.
Five years is not enough incentive to push this change. A TV manufacturer can simply shrug and claim that the device is not under warranty anymore. We'll only end up with more bricked devices.
Isn't this the whole point of intermediate certificates, though?
You know, all the CA's online systems only having an intermediate certificate (and even then, keeping it in an HSM), and the CA's root only being used for 20 seconds or so every year to update the intermediate certificates? And the rest of the time being locked up safer than Fort Knox?
If the vendor is really unable to update, then it's at best negligence when designing the product, and at worst -- planned obsolescence.
2. Product is a smart fridge or whatever, reasonable users might keep it offline for 5+ years.
3. New homeowner connects it to the internet.
4. Security update fails because the security update server's SSL cert isn't signed by a trusted root.
Nothing stays the same forever; software is never done. It's absurd to pretend otherwise.
The CA folks and the Browser folks may have had differences of opinions.
I expect they will introduce new, "more secure", proprietary methods, and ride the vendor lock-in until the paid certificate industry's death.
Large companies will keep using paid providers, partly for business continuity in case the free provider fails. I also don't know what kind of SLA you get with Let's Encrypt.
It is more complicated than "oh, it is free, let's move on".
Let's Encrypt isn't the only free ACME provider, you can take your pick from them, ZeroSSL, SSL.com, Google and Actalis, or several of them for redundancy. If you use Caddy that's even the default behavior - it tries ZeroSSL first and automatically falls back to Let's Encrypt if that fails for whatever reason.
No, that's false. It's the other way around.
“If Caddy cannot get a certificate from Let's Encrypt, it will try with ZeroSSL”. Source: https://caddyserver.com/docs/automatic-https#issuer-fallback
Which makes sense, since the ACME access to ZeroSSL must go through an account created by a manual registration step. Unless the landscape changed very recently, LE is still the only free ACME that does not require registration. Source: https://poshac.me/docs/v4/Guides/ACME-CA-Comparison/#acme-ca...
https://caddy.community/t/using-zerossls-acme-endpoint/9406
Correction: the default behavior is to use Let's Encrypt alone, but if you provide an email then it's Let's Encrypt with fallback to ZeroSSL.
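For anyone who wants the fallback without Caddy, a rough sketch of doing it by hand with certbot's --server flag, using the CAs' documented ACME directory URLs; EAB_KID and EAB_HMAC are placeholders you'd get from a ZeroSSL account (the manual registration step mentioned above):
    # Try Let's Encrypt first; fall back to ZeroSSL if that fails.
    certbot certonly --webroot -w /var/www/html -d example.com \
        --server https://acme-v02.api.letsencrypt.org/directory \
      || certbot certonly --webroot -w /var/www/html -d example.com \
        --server https://acme.zerossl.com/v2/DV90 \
        --eab-kid "$EAB_KID" --eab-hmac-key "$EAB_HMAC"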
Oh, and on LE's side, the Rate Limit Adjustment Request form's contractual bits (if that's what they are?) do not load: https://isrg.formstack.com/forms/rate_limit_adjustment_reque...
ZeroSSL, SSL.com and Actalis offer paid services on top of their basic free certificates.
Google is Google.
So your "free" ssl certs are provided by surveillance capitalism, and paid for with your privacy (and probably your website user's privacy too)?
A more realistic concern with using Googles public CA is they may eventually get bored and shut it down, as they tend to do. It would be prudent to have a backup CA lined up.
I'm not sure that's technically true. As a CA they definitely have the power to facilitate a MitM attack. They can also issue fraudulent certificates.
> AIUI the ACME protocol never lets the CA see the private key, only the public key, which is public by definition anyway.
That has more to do with HTTPS end-to-end encryption than with the issuance protocol.
That's not really how SSL certs work - Google isn't getting any information by issuing the SSL cert that they wouldn't otherwise have had.
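For illustration, a minimal sketch (assuming certbot and openssl; the file names are placeholders) of how issuance keeps the private key local: the key pair is generated on the server, and only the CSR, which carries just the public key and domain, ever goes to the CA.
    # Generate the key pair locally; the private key never leaves the box.
    openssl req -new -newkey rsa:2048 -nodes \
        -keyout example.com.key -out example.com.csr \
        -subj "/CN=example.com"
    # Hand certbot the CSR; the issued certificate matches the locally held key.
    certbot certonly --webroot -w /var/www/html --csr example.com.csr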
The vote was more about whether the CAB would continue to be relevant. "Accept the reality, or browsers aren't even going to show up anymore".
I wrote a bunch about this recently: https://www.certkit.io/blog/47-day-certificate-ultimatum
- What is the problem with stale certificates if a domain changes hands? It seems to me that whether they renew the certificate or not, the security situation for the user is still the same, no?
- Is CertKit a similar solution to Anchor Relay? (https://anchor.dev/relay)
I do still feel that "that blog/publication that had immense cultural impact years ago, that was acquired/put on life support with annual certificate updates, will now be taken offline rather than migrated to a system that can support ACME automations, because the consultants charge more than the ad revenue" will be an unfortunate class of casualty. But that's progress, I suppose.
Today, people are complaining that automating certificate renewals is annoying (I'm sure it is). Before that, the complaint was that random US companies were simply buying and deploying their own root certificates, issuing certs for arbitrary strangers' domains, so their IT teams wouldn't have to update their desktop configurations.
Things are better now.
The difference being that there's at least a little bit of popular dissatisfaction with the status quo of browsers unilaterally dictating web standards, whereas no one came to the defense of CAs, since everybody hated them. A useful lesson that you need to do reputation management even if you're running a successful racket, since if people hate you enough they might not stick up for you even if someone comes for you "illegally".
The CA industry is the new taxi industry.
Which AI did you use for writing it? It's pretty good.
Whole IT teams are just going to wash their hands of this and punt to a provider or their cloud IaaS.
We're doing a beta of it for some other groups now. https://www.certkit.io/
All of these required, complex, constantly moving components mean we're beholden to larger tech companies and can't do things by ourselves anymore without increasing effort. It also makes it easier for central government to start revoking access once they put a thumb on cert issuance. Parties that the powers don't like can be pruned through cert revocation. In the future issuance may be limited to parties with state IDs.
And because mainstream tech is now incompatible with any other distribution method, you suddenly lose the ability to broadcast ideas if you fall out of compliance. The levers of 1984.
But that is what ISPs did! Injecting (more) ads. Replacing ads with their own. Injecting JavaScript for all sorts of things. Like loading a more compressed version of a JPEG, where you had to click an extra button to load the full thing. Removing the STARTTLS string from an SMTP connection. Early UMTS/3G Vodafone was especially horrendous.
I also remember "art" projects where you could change the DNS of a public/school PC and it would change news stories on spiegel.de and the like.
The problem is not "your part", it's the "between you and the client" part.
It becomes trivial to inject extra content, malicious JavaScript, adverts etc. into the flow. And this isn't "targeted" at your site; it's simply applied to all insecure sites.
TLS is not about restricting your ability to broadcast information. It's about preserving your ability to guarantee that your reader reads what you wrote.
TLS is free and easy to implement. The only reason not to do it is laziness. You may see TLS as a violation of your principles - but I see it as an attitude of "I don't care about my readers' safety - let someone inject malicious JavaScript (or worse) into my page, their security is not my problem".
(If the govt want to censor you they can do that via dns).
Of course I have modern laptops, but I still fire up my old Pismo PowerBook G3 occasionally to make sure my sites are still working on HTTP, accessible to old hardware, and rendering tolerably on old browsers.
PRISM works fine to recover HTTPS-protected communications. If anything NSA would be happier if every site they could use PRISM on used HTTPS, that's simply in keeping with NOBUS principles.
Source?
They collect it straight from the company after it's already been transmitted. It's not a wiretap, it's more akin to an automated subpoena enforcement.
> Next year, you’ll be able to opt-in to 45 day certificates for early adopters and testing via the tlsserver profile. In 2027, we’ll lower the default certificate lifetime to 64 days, and then to 45 in 2028.
If paid certs drop in max validity period, then yeah, zero reason to burn money for no reason.
Unfortunately, the people making these decisions simply do not care how they impact actual real world users. It happens over and over that browser makers dictate something that makes sense in a nice, pure theoretical context, but sucks ass for everyone stuck in the real world with all its complexities and shortcomings.
Google calls the shots and the others fall in line.
This is not centralizing everything to Let's Encrypt. It's forcing everyone to use ACME, and many CAs support ACME (and those that don't probably will soon, due to this change).
There are lots of organizations that support the ACME protocol. LE is the most well known, but there are others, and more on the way.
Existing CAs don't necessarily vanish with this change. They are free to implement ACME (or some proprietary protocol) and they are completely free to keep charging for certificates if they like.
The real result of this change is that processes will change (where they haven't already) improving both customer experience and security.
But to be clear there's no "one organization" in the loop here. You can rest easy on that front.
A bit off-topic, but I find this crazy. In basically every ecosystem now, you have to specifically go out of your way to turn on mandatory rotation.
It's been almost a decade since it was explicitly advised against in every cybersec standard. Almost two since the research showed how ill-advised mandatory rotations are.
In any case, all my homies hate PCI.
I tried at my workplace to get them to stop mandatory rotation when that research came out. My request was shot down without any attempt at justification. I don't know if it's fear of liability or if the cyber insurers are requiring it, but by gum we're going to rotate passwords until the sun burns out.
    0 2 1 * * /usr/local/bin/letsencrypt-renew
And the script:
    #!/bin/sh
    certbot renew
    service lighttpd restart
    service exim4 restart
    service dovecot restart
    ... and so on for all my services
That's it. It should be bulletproof, but every few renewals I find that one of my processes never picked up the new certificates and manually re-running the script fixes it. Shrug-emoji.
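One hedged tweak for that failure mode: let certbot trigger the reloads itself through a deploy hook, which only runs for certificates that were actually renewed, so the services are never restarted against stale files. A sketch, reusing the service names from the script above:
    # Deploy hooks fire once per successfully renewed cert, after the new files are in place.
    certbot renew --deploy-hook "service lighttpd restart; service exim4 restart; service dovecot restart"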
I haven't touched my OpenBSD (HTTP-01) acme-client in five years:
    acme-client -v website && rcctl reload httpd
My (DNS-01) LEGO client sometimes has DNS problems. But as I said, it will retry daily and work eventually.
It's the five lines below "the script:"
Not to mention that the WEBPKI has made it completely unviable to deliver any kind of consumer software as an offline personal web server, since people are not going to be buying their own DNS domains just to get their browser to stop complaining that accessing local software is insecure. So, you either teach your users to ignore insecure browser warnings, or you tie the server to some kind of online subscription that you manage and generate fake certificates for your customer's private IPs just to get the browsers to shut up.
OCSP (with stapling) was an attempt to get these benefits with less disruption, but it failed for the same reason this change is painful: server operators don’t want to have to configure anything for any reason ever.
Well, it's the people who want to MITM that started it, a lot of effort has been spent on a red queen's race ever since. If you humans would coordinate to stay in high-trust equilibria instead of slipping into lower ones you could avoid spending a lot on security.
Well, you could also give every random server you happen to configure an API key with the power to change any DNS record it wishes... what could go wrong?
#security
Holding public PKI advancements hostage so that businesses can be lazy about their intranet services is a bad tradeoff for the vast majority of people that rely on public TLS.
There are more things on the internet than web servers.
You might say "use DNS-01"; but that's reductive - I'm letting any node control my entire domain (and many of my registrars don't even allow API access to records, let alone an API key that's limited to a single record; even cloud providers don't have that).
I don't even think mail servers work well with the Let's Encrypt model unless it's a single server for everything without redundancies.
I guess nobody runs those anymore though, and, I can see why.
The solution, however, is pretty trivial. For our setup I just made a very small server with a couple of REST endpoints.
Each customer gets their own login to our REST server. All they do is ask "get a new cert".
The DNS-01 challenge is handled by the REST server, and the cert then supplied to the client install.
So the actual customer install never sees our DNS API keys.
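A sketch of what the client side of such a broker might look like; the endpoint, auth scheme, and host names here are invented for illustration. The node keeps its own key and only ships a CSR; the broker does the DNS-01 dance with the real DNS API credentials.
    # Hypothetical client-side call to an in-house cert broker like the one described above.
    openssl req -new -newkey rsa:2048 -nodes \
        -keyout node17.key -out node17.csr -subj "/CN=node17.customer.example"
    # The broker performs DNS-01 with its own credentials and returns the signed cert.
    curl -fsS -u "$CUSTOMER:$TOKEN" --data-binary @node17.csr \
        https://certs.vendor.example/v1/new-cert -o node17.crt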
> and my IRC servers that don’t have any HTTP daemon (and thus have the port blocked) while being balanced by anycast geo-fenced DNS?
The certificate you get for the domain can be used for whatever the client accepts it for - the HTTP part only matters for the ACME provider. So you could point port 80 to an ACME daemon and serve only the challenge from there. But this is not necessarily a great solution, depending on what your routing looks like, because you need to serve the same challenge response for any request to that port.
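As a concrete (if simplistic) sketch of "point port 80 to an ACME daemon": certbot's standalone mode binds port 80 only for the duration of the challenge, which can work for hosts that run no permanent HTTP service; the anycast caveat above still applies, since whichever node answers port 80 has to serve the challenge. The service name below is a placeholder.
    # No permanent web server needed: certbot binds port 80 just for the challenge, then exits.
    # The resulting cert can be handed to ircd/postfix/dovecot/etc.
    certbot certonly --standalone -d irc.example.net \
        --deploy-hook "systemctl reload ircd"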
> You might say “use DNS-01”; but thats reductive- I’m letting any node control my entire domain (and many of my registrars don’t even allow API access to records- let alone an API key thats limited to a single record; even cloud providers dont have that).
The server using the certificate doesn't have to be the one going through the ACME flow, and once you have multiple nodes it's often better that it isn't. It's very rare for even highly sophisticated users of ACME to actually provision one certificate per server.
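A hedged sketch of that pattern: one box runs the ACME/DNS-01 flow and then pushes the resulting files to the nodes that actually terminate TLS, so none of them ever sees the DNS API key. Host names, paths, and the nginx reload are placeholders.
    #!/bin/sh
    # Hypothetical deploy hook on the single renewal host: copy the renewed cert
    # (never the DNS API key) to the serving nodes, then reload them.
    for n in node1.example.com node2.example.com node3.example.com; do
        rsync -aL /etc/letsencrypt/live/example.com/ "$n":/etc/ssl/example.com/
        ssh "$n" systemctl reload nginx
    done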
Very common in mobile command centres for emergency management, inflight entertainment systems and other systems of that nature.
I personally have a media server on my home LAN that I let my relatives use when they’re staying at our place. It has a publicly trusted certificate I manually renew every year, because I am not going to make visitors to my home install my PKI root CA. That box has absolutely no reason to be reachable from the Internet, and even less reason to be allowed to modify my public DNS zones.
Or that TLS and HTTPS are unrelated, when HTTPS is just HTTP over TLS; and TLS secures far more, from APIs and email to VPNs, IoT, and non-browser endpoints? Both are bunk; take your pick.
Or opt for door three: Ignore how CA/B Forum’s relentless ratcheting burdens ops into forking browsers, hacking root stores, or splintering ecosystems with exploitable kludges (they won’t: they’ll go back to “this cert is invalid, proceed anyway?” for all internal users).
Nothing screams “sound security” like 45-day cert churn for systems outside the public browser fray.
And hey, remember back in the day when all the SMTP submission servers just blindly accepted any certificate they were handed because doing domain validation broke email… yeah
Inspired.
It does none of these. Putting more elbow grease into your ACME setup with existing, open source tools solves this for basically any use case where you control the server. If you're operating something from a vendor you may be screwed, but if I had a vote I'd vote that we shouldn't ossify public PKI forever to support the business models of vendors that don't like to update things (and refuse to provide an API to set the server certificate programmatically, which also solves this problem).
> Nothing screams “sound security” like 45-day cert churn for systems outside the public browser fray.
Yes, but unironically. If rotating certs is a once a year process and the guy who knew how to do it has since quit, how quickly is your org going to rotate those certs in the event of a compromise? Most likely some random service everyone forgot about will still be using the compromised certificate until it expires.
> And hey, remember back in the day when all the SMTP submission servers just blindly accepted any certificate they were handed because doing domain validation broke email… yeah
Everyone likes to meme on this, but TLS without verification is actually substantially stronger than nothing for server-to-server SMTP (though verification is even better). It's much easier to snoop on a TCP connection than it is to MITM it when you're communicating between two different datacenters (unlike a coffeeshop). And most mail is between major providers in practice, so they were able to negotiate how to establish trust amongst themselves and protect the vast majority of email from MITM too.
First, one of the purposes of shorter certificates is to make revocation easier in the case of misissuance. Just having certificates issued to you be shorter-lived doesn't address this, because the attacker can ask for a longer-lived certificate.
Second, creating a new browser wouldn't address the issue because sites need to have their certificates be acceptable to basically every browser, and so as long as a big fraction of the browser market (e.g., Chrome) insists on certificates being shorter-lived and will reject certificates with longer lifetimes, sites will need to get short-lived certificates, even if some other browser would accept longer lifetimes.
The equivalent for web certs could have been something like "domain.xyz can request non-wildcard certs with up to 10 days of validity". Where I think certs fell apart is that they put all the eggs in client-side revocation lists, and that failure then fell to admins to deal with collectively while the issuers sat back.
For the second note, I think that friction is part of their point. Technically you can, practically that doesn't really do much.
You could be proposing two things here:
(1) Something like CAA that told CAs how to behave. (2) Some set of constraints that would be enforced at the client.
CAA does help some, but if you're concerned about misissuance you need to be concerned about compromise of the CA (this is also an issue for certificates issued by the CA the site actually uses, btw). The problem with constraints at the browser is that they need to be delivered to the browser in some trustworthy fashion, but the root of trust in this case is the CA. The situation with RPKI is different because it's a more centralized trust infrastructure.
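For reference, CAA (RFC 8659) is just a DNS record that a well-behaved CA checks at issuance time, so it narrows who may issue but cannot protect against a fully compromised CA. An example of what a restrictive policy looks like (example.com is a placeholder):
    # Inspect a domain's CAA policy; the commented lines show a restrictive example
    # (issuance only via one CA, no wildcard issuance at all).
    dig +short CAA example.com
    #   0 issue "letsencrypt.org"
    #   0 issuewild ";"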
> For the second note, I think that friction is part of their point. Technically you can, practically that doesn't really do much.
I'm not following. Say you managed to start a new browser and had 30% market share (I agree, a huge lift). It still wouldn't matter because the standard is set by the strictest major browser.
> I'm not following. Say you managed to start a new browser and had 30% market share (I agree, a huge lift). It still wouldn't matter because the standard is set by the strictest major browser.
Same reasoning between us I think, just a difference in interpreting what it was saying. Kind of like sarcasm - a "yes, you can do it just as they say" which in reality highlights "no, you can't actually do _it_ though" type point. You read it as solely the former, I read it as highlighting the latter. Maybe GP meant something else entirely :).
That said, I'm not sure I 100% agree it's really about what the strictest major browser does alone. E.g. if Firefox set the limit to 7 days, I'd bet people would start using other browsers rather than all sites rotating certs every 7 days. If some browsers did and some didn't, it'd depend on who and how much share, etc. That's one of the (many) reasons the browser makers are all involved - to make sure they don't get stuck as the odd one out on a policy change.
Thanks for Let's Encrypt btw. Irks about the renewal squeeze aside, I still think it was a net positive move for the web.
And, well, the create-a-browser thing was a joke; it's what I've seen suggested for those who don't like the new rules.
Google https://pki.goog/
SSL.com https://www.ssl.com/blogs/sslcom-supports-acme-protocol-ssl-...
ZeroSSL https://zerossl.com/documentation/acme/
I don't actually think Cloudflare runs an ACME Certificate Authority. They just partner with LetsEncrypt? Edit: Looks like they don't run any CA, they just delegate out to a bunch of others https://developers.cloudflare.com/ssl/reference/certificate-...
https://hacks.mozilla.org/2025/08/crlite-fast-private-and-co...
That's been true for a while, regardless of cert length.
Everyone leans on them and unlike CF and other choke points of the internet...Let's Encrypt is a non-profit
yes
We're not quite there yet, but the logical progression of shorter and shorter certificate lifetimes to obviate the problems related to revocation lists would suggest that we eventually end up in a place where the major ACME CAs join the list of heavily-centralized companies which are dependencies of "the internet", alongside AWS, Cloudflare, and friends. With cert lifetimes measured in years or months, the CA can have a bad day and as long as you didn't wait until the last possible minute to renew, you're unimpacted. With cert lifetimes trending towards days or less, now your CA really does need institutionally important levels of high availability.
It's less that LE becomes more of a single point of failure than that ACME CAs in general join the list of critically available things required to keep a site online.
I think that particular ship sailed a decade ago!
> Its less that LE becomes more of a single point of failure than it is that the concept of ACME CAs in general join the list of critically available things required to keep a site online.
Okay, this is what I wanted clarified. I don't disagree that CAs are critical infrastructure, and that there's latent risk whenever infrastructure becomes critical. I just think that risk is justified, and that LE in particular is no more or less of a SPOF with these policy changes.
Hell, you can still set it to renew while the cert still has a month left.
I'm more worried that the clowns at the helm will push this into something stupid like a week or 3 days, "coz it improves security in some theoretical case".
Certificates have historically been a "fire and forget" but constant re-issuance will make LE as important as DNS and web hosting.
Back when certificates were valid for longer, we'd have breakage more often due to admins forgetting the renewal, or forgetting how to install the new certificates. It was a daily occurrence, often with hours or days of downtime.
Today, it's so rare I don't even remember when I last encountered an expired certificate. And I'm pretty sure it's not because of better observability...
It's okay for something to be a good thing and to celebrate it. We don't have to frown about everything.
Oh, and you would definitely know about this outage, because you would hear about it in your news, and from the monitoring you already have set up to yell at you when your cert is about to expire (you already have that, right? Right?). And you can STILL trivially switch to another CA that supports ACME.
There are other CAs with ACME support.
Including paid CAs, if you really want to pay: Sectigo.
You can renew your Sectigo certificates with ACME, so from a technical point of view, just trigger your cron more often.
The EU started building the Galileo GNSS ("GPS") in 2008 as a backup in case the US turned hostile. And now look where we are in 2025 with the US president openly talking about taking Greenland. Wise move. It seemed like a gigantic waste back then. It was really, really expensive.
Then lots of European countries ordered F35s from Lockheed Martin. What an own goal. This includes Denmark/Greenland.
But i digress...
While I love Let's Encrypt it feels so silly to use a third party to verify I can generate a Cloudflare API key (even .well-known is effectively "can you run a webserver on said dns entry").
Edit: TIL about https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...
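For the curious, a DANE TLSA record is essentially a hash of the server's key published in DNS (and only trustworthy if the zone is DNSSEC-signed). A sketch of producing the common "3 1 1" (DANE-EE, SPKI, SHA-256) payload from an existing certificate:
    # SHA-256 of the cert's SubjectPublicKeyInfo -> payload of a "3 1 1" TLSA record,
    # published at _443._tcp.example.com; clients need DNSSEC to trust it.
    openssl x509 -in cert.pem -noout -pubkey \
        | openssl pkey -pubin -outform DER \
        | openssl dgst -sha256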
Also, at some point in the lifetime graph, you start getting diminishing returns. There aren't many scenarios where you get your private keys stolen, but the bad guys couldn't maintain access for more than a couple of weeks.
In my humble opinion, if this is the direction the CA/B and other self-appointed leaders want to go, it is time to rethink the way PKI works. We should maybe stop thinking of Let's Encrypt as a CA; it (and similar services) can function more as real-time trust facilitators. If all they're checking for is server control, then maybe a near-real-time protocol to validate that, issue a cert, and have the webserver use it immediately is ideal? Lots of things need to change for this to work, of course, but it is practical.
Not so long ago, very short DNS TTLs were met with similar apprehension. Perhaps the cert expiry should be tied to the DNS TTL, with the server renewing much more frequently (e.g. if the TTL is 1 hour, the server renews every 15 minutes).
Point being, the current system of doing things might not be the best place to experiment with low expiry lifetimes, but new ways of doing things that can make this work could be engineered.
One of the setups that gives me issues is machines that are resumed from a historical snapshot and start doing things immediately: if the clock hasn't been corrected via NTP since the last snapshot, you start getting issues (despite snapshots being updated after every daily run). Most sites won't break (especially with a 24h window, although longer gaps always cause issues), but enough sites change their certs so frequently now that it's a constant issue.
Even with a 10-year cert, if you access it at just the wrong time you'll have issues; the difference now is that it isn't a once-in-10-years event, but once every few days sometimes.
Perhaps if TLS clients requesting a time update to the OS was a standardized thing and if NTP client daemons supported that method it would be a lot less painful?
The closest I've gotten to this would be something like a Raspberry Pi, but even then NTP is pretty snappy as soon as there's network access, and until there's network access I'm not hitting any TLS certs.
Honestly, I just wish that browsers used NTP directly and used that instead of the system time. If the CA/B wants to go this direction, maybe this will be a good enhancement to make it more tenable?
Let's Encrypt issues certificates with "notBefore" set an hour in the past to compensate for incorrect clocks. An hour is plenty of compensation.
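A quick way to see how much skew a given site tolerates is to read the validity window straight off the cert it is serving and compare it with the local clock; a hedged one-liner (example.com is a placeholder):
    # Print the validity window of the cert a host is currently presenting,
    # next to the local clock, to see how far a snapshot-restored VM can drift.
    echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
        | openssl x509 -noout -dates; date -u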
They aren't used much, but they are a neat solution. Google forcing this change just means there's even more overhead when updating certs in a larger project.
I might add I've changed my mind a bit on this.
But in this case, the upsides are definitely greater than in the usual case.
Why not 60 seconds? That's way more secure. Everything is automated after all, so why risk compromised certificate for such a long period, like 45 days?
Or how about we issue a new certificate per request? That should remove most of the risks.
One: most of the reasoning is available for reading. Lots of discussion was had. If you're actually curious, I would suggest starting with the CA/B mailing group. Some conversation is in the MDSP (Mozilla's dev-security-policy) archives as well.
Two: it's good to remember that almost every security-related conversation comes down to the balance between security and usability. Obviously, a 60-second lifetime is unusable. The goal is to make the overlap between secure and usable as big as possible.
The point of the CA/BF settling on 47-day certs is yes, to strongly push automation, but also to still allow time for manual intervention when automation fails.
It ranges from old systems like libpq, which (to my knowledge) just loads certs on connection creation, so it works, down to some JS or Java libraries that read certs into main memory on startup and never deal with them again. Or other software folding a feature request like "reload certs on SIGHUP" together with "oh, and transparently do listen-socket transfer between listener threads on SIGHUP", where the latter is hard and thus neither ever happens.
45 days is going to be a huge pain for legacy systems. Less than 2 weeks is a huge pain even with modern frameworks. Even Spring didn't do it right until a year or two ago, and we had to keep in-house hacks around.
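Given that failure mode, a cheap check (a sketch, assuming certbot's default paths) is to compare the serial of the certificate on disk with the one the live listener is actually presenting, and alert on a mismatch:
    # Alert if the running service is still presenting an older cert than the one on disk.
    disk=$(openssl x509 -in /etc/letsencrypt/live/example.com/fullchain.pem -noout -serial)
    live=$(echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
        | openssl x509 -noout -serial)
    [ "$disk" = "$live" ] || echo "WARNING: example.com listener has not reloaded its certificate"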
To be honest, the issue is not the time frame; you can literally have certs being made every day, and there are plenty of ways to automate this. The issue is that the CT log providers are going to go ape!
Right now, we are at 24B certificates going back to around 2017, when they really started to grow. There are about 4.5B unique domains; if we reduce this to the number of certs per domain, it's about 2.3B domain certs we currently need.
Now do 2.3B x 8 renewals... That's about 18.4B new certs in CT logs per year. Given how popular LE is, we can assume the actual growth is maybe 10B per year (those that still use 1-year or multi-year certs + the tons of LE-generated ones).
Remember, I said the total going back to 2017 is currently only 24B... Yeah, we are going to almost double the amount of certs in CT logs every two years.
And that assumes LE does not move to 17 days, because then I am sure we'd be doubling the current amount each year.
Good luck as a CT log provider... FYI, a typical certificate to store is about 4.5 kB, so we are talking 45 TB of space needed per year, and 100 TB+ if they really drop it down to 17 days. And we haven't even talked about databases, traffic to the CT logs, etc.
It's broken, Jim... Now imagine, for fun, a daily cert... 1700 TB per year in CT log storage?
A new system will come from Google etc. because it's going to become unaffordable, even for those companies.
1: https://www.youtube.com/watch?v=uSP9uT_wBDw A great explainer of how they work and why they're better.
2: https://davidben.github.io/merkle-tree-certs/draft-davidben-... The current working draft
We can solve the storage requirements, it’s fine.
Do not forget that we had insanely long certificates not that long ago.
The main issue is that currently you cannot easily revoke certs, so you're almost forced to keep a history of certs, and of when each one was revoked, in the CT logs.
In theory, if everybody is forced to change certs every 47 days, sure, you can invalidate them and permanently remove them. But it requires a ton of automation on the user side. There is still way too much software that relies on a single-year or multi-year certificate that is manually added to it. That's also why the phase-down to 47 days is spread over a 4-year period.
And it still does not change the massive increase in validation-check requests that hits CT log providers.
You can store that kind of information in a lot less space. It doesn't need to be duplicated with each renewal.
> The main issue is that currently you can not easily revoke certs, so your almost forced to keep a history of certs, and when one has been revoked in the CT logs.
This is based on the number of active certificates, which has almost no connection with how long they last.
> There is still way too much software that relies on a single year or multi year certificated that is manually added to it.
Hopefully less and less going forward.
> And it still does not change the massive increased in requests to check validation, that hits CT logs providers.
I'm not really sure how that works but yeah someone needs to pay for that.
It's not final yet, but interesting development.
Does that mean IP certificates will be generally available some time this week?
This vision still needs several more developments to land before it actually results in an increment in user privacy, but they are possible:
1. User agents can somehow know they can connect to a host with IP SNI and ECH (a DNS record?)
2. User agents are modified to actually do this
3. User agents use encrypted DNS to look up the domain
4. Server does not combine its IP cert with its other domain certs (SAN)
This only affects you if you have a server set up to verify mTLS clients against the Let's Encrypt root certificate(s), or maybe every trusted CA on the system. You might do that if you're using the host HTTPS certificates handed out by certbot or other CAs as mTLS client certificates.
You can still generate your own mTLS key pairs and use them to authenticate over a connection whose hostname is verified with Let's Encrypt, which is what most people will be doing.
Sure, but it seems like all the CAs are stopping issuing certificates with the client EKU. At least Let's Encrypt and DigiCert, since under the Google requirement they can't do that and normal certs together, and I guess there's not enough of a market to have a CA just for that.
> You might do that if you're using the host HTTPS certificates handed out by certbot or other CAs as mTLS client certificate
Sure, what's wrong with that?
> You can still generate your own mTLS key pairs and use them to authenticate over a connection whose hostname is verified with Let's Encrypt, which is what most people will be doing.
That lets the client verify the host, but the server doesn't know where the connection is coming from. Generating mTLS pairs means pinning and coordinated rotation and all that. Currently servers can simply keep an up to date CA store (which is common and easy), and check the subject name, freeing the client to easily rotate their cert.
But, if you really need to use certificates from the CAs anyways, you might ignore some of the fields of the certificate.
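For illustration, the manual equivalent of what such a server does when it trusts the public CA store and authorizes on the certificate's identity rather than its key (the file names and the Debian-style CA bundle path are assumptions):
    # Verify the client-presented cert chains to a root in the system store...
    openssl verify -CAfile /etc/ssl/certs/ca-certificates.crt -untrusted chain.pem client.pem
    # ...then authorize on the identity in the cert rather than pinning the key,
    # so the client is free to rotate its certificate at any time.
    openssl x509 -in client.pem -noout -subject -ext subjectAltName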