The self-service unpause is brilliant. The worst thing about hitting these sorts of limits is that time window when you think you've fixed the problem but you can't check because you're throttled - so there's nothing you can do but wait. Giving literally any affordance so that a human can make progress with a fix removes this huge source of frustration.
philjohn · 1d ago
That's what the staging CA is for - and why it has much higher rate limits.
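For example, with certbot you can point a debugging run at staging instead of production (the webroot path and domain here are placeholders):

    # Issue against the staging environment: untrusted certs, much higher limits.
    certbot certonly --staging --webroot -w /var/www/example -d example.com

(--test-cert is an alias for --staging; other clients can be pointed at https://acme-staging-v02.api.letsencrypt.org/directory directly.)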
efitz · 1d ago
I really appreciate the thoughtful and non-punitive approach, and intend to add your self-service-unpause approach to my own arsenal of tricks.
aorth · 1d ago
Happy to be running Caddy on a growing number of servers instead of renewing certs through certbot. Caddy has really good defaults and does the right thing with TLS certs without much hassle. Fewer moving parts, too.
NicolaiS · 1d ago
Agree
Caddy even supports 'ACME profiles' for people who want to follow the latest CA/B Forum recommendations / want short-lived certs
dieulot · 1d ago
Certbot does too as of 4.0.0 (2025-04-08).
meltyness · 1d ago
My server got renewal halted. I rolled my own wrapper for certbot. Idk, it's just a blog, I'm not that attached. It hit some rock a few months ago; I just retried and manually installed it, and it seems to have perked back up and continued receiving certs. Probably would have been more frustrating if it were a huge fleet, but it wasn't even worth my time to check logs and figure out what precisely happened (cert distributed with a modified time that didn't match the ASN.1 expiry? transient issuance failure? issued the same cert? ...who knows.)
globie · 1d ago
Were you running certbot multiple times per day?
Looking at the relevant limit, "Consecutive Authorization Failures per Hostname per Account"[0], it looks like there's no way to hit that specific limit if you only run once per day.
Ah, to think how many cronjobs are out there running certbot on * * * * *!
[0]: https://letsencrypt.org/docs/rate-limits/#consecutive-author...
From memory, a daily cron runs a script that checks my cert file's last-modified time. When it's been a certain number of days since the file was last modified (some flavored Bash statements), I run certbot and install whatever comes back (roughly the sketch below).
It's very under-engineered, maybe a trifold pamphlet on light A11 printed with a laser jet running out of ink.
I've probably spent more time talking about how much it sucks than I have bothered considering a proper solution, at this point.
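Something like this shape, if you squint; the paths and the 30-day threshold are stand-ins, not the real script:

    CERT=/etc/ssl/blog/fullchain.pem     # hypothetical cert location
    MAX_AGE_DAYS=30                      # renew once the file is this old

    # `find -mtime -N` prints the file only if it was modified within the last
    # N days; empty output means it's older than that (or missing), so renew.
    if [ -z "$(find "$CERT" -mtime -"$MAX_AGE_DAYS" 2>/dev/null)" ]; then
        certbot certonly --webroot -w /var/www/blog -d example.com \
            && cp /etc/letsencrypt/live/example.com/fullchain.pem "$CERT"
    fi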
stirfish · 11h ago
>I've probably spent more time talking about how much it sucks than I have bothered considering a proper solution, at this point.
I respect this. Reading someone else write this makes me feel more comfortable thinking about the things in my life I could be doing more to improve, which makes me respect this even more.
bbarnett · 1d ago
Isn't that where we are going eventually? Certs only lasting a day?
globie · 1d ago
That's a good point. I suspect as the renewal period is shortened, scripts will attempt renewal faster and faster.
I hope they don't go any shorter than a month. Let the user pick; any value up to a year should do.
conradludgate · 1d ago
Browsers are eventually going to reject any certificate valid for longer than 47 days, iirc
ferngodfather · 1d ago
They simultaneously want shorter certs but can't cope with the current load
CrossVR · 1d ago
Nowhere in the blog post does it say they can't cope with the load, which is why the rate limits are so high. This is only about reducing wasted resources by blocking requests which are never going to succeed.
Arnavion · 19h ago
They definitely can't cope with the load at midnight, or at least couldn't back in 2022, and the fact that they mention midnight specifically in this post makes me assume they still can't. I say this because I had cert issuance fail for multiple days because of DB timeouts on their end from that: https://community.letsencrypt.org/t/post-to-new-order-url-fa...
Incidentally, the fact that it took them 4 days to respond to that issue is why I'll be wary of getting 6-day certs from them. The only reason it wasn't a problem there was that it was a 30d cert with plenty of time remaining, so I was in no rush. (Also, ideally they'd have a better support channel than an open forum where an idiot "Community Leader" who doesn't know what he's talking about wastes your time, as happened in that thread.)
UltraSane · 1d ago
No, they will never get that short due to reliability issues. I could see getting down to maybe two weeks.
To make 24 hour valid certs practical you would need to generate them ahead of time and locally switch them out. This would be a lot more reliable if systems supported two certs with 50% overlapping validity periods at the same time.
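You can approximate that overlap today by renewing at the midpoint of the validity window rather than near expiry. A rough sketch of the idea (GNU date assumed, paths made up, certbot used as a stand-in for whatever client is in play):

    CERT=/etc/ssl/mysite/fullchain.pem   # hypothetical path

    # Read the validity window straight out of the certificate.
    not_before=$(date -d "$(openssl x509 -in "$CERT" -noout -startdate | cut -d= -f2)" +%s)
    not_after=$(date -d "$(openssl x509 -in "$CERT" -noout -enddate | cut -d= -f2)" +%s)
    now=$(date +%s)

    # Renew once past the midpoint, so the outgoing and incoming certs
    # overlap for roughly half of each validity period.
    if [ "$now" -ge $(( (not_before + not_after) / 2 )) ]; then
        certbot renew --quiet
    fi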
jaas · 1d ago
Let’s Encrypt has already started issuing a limited number of 6-day certs and they will be generally available later this year.
(90 days will remain the default though)
genewitch · 1d ago
Timezones are going to make that hilarious; we'll probably end up going back to much longer certs. I like free, so I put up with LE. The automated stuff only works on half my servers; on the other half I either run without https or install certs manually. Except now I wait until the service stops working, spend 15 minutes debugging why, go to the domain in a browser, see the warning, and then go fix it. Why? LE decided sending 4 emails a year is too many. And let's be real, sending automated emails is expensive. I think AWS charges like $0.50 per email when you use their hosted email sender.
SomeUserName432 · 1d ago
> I think AWS charges like $0.50 per email when you use their hosted email sender.
SES? Around $0.0001 per e-mail
genewitch · 15h ago
Yes, it was facetious; I am jabbing at Let's Encrypt for ceasing email operations.
greatgib · 1d ago
Since they have the account email, they could also notify people by email when renewals have been failing for too long.
Macha · 1d ago
Note that Let's Encrypt are winding down their email notifications as of today, actually:
https://letsencrypt.org/2025/01/22/ending-expiration-emails/
Which is the only email they send to individual account holders.
The only emails they're keeping are mailing lists, which you need to subscribe to separately and which are presumably run by an external provider.
xp84 · 1d ago
Sure, and they must have already emailed the person when they failed to get a new cert before their last one expired. But I suspect a lot of people don't use a real email address for LE, since there's no enforcement/verification. Or they might be using one that isn't their main one.
TonyTrapp · 1d ago
A Let's Encrypt account is not required to be associated with an email address.
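For example, certbot will happily create one without a contact address if you ask:

    # Registers an ACME account with no email attached at all.
    certbot register --register-unsafely-without-email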
undebuggable · 1d ago
I highly appreciate their saintlike patience with my buggy cronjobs and snippy requests.
smallnix · 22h ago
Thanks for all the work that goes into this crucial service!
3% and "3,200 people manually unpaused issuance" seem much higher than I'd expect, and no cause for celebration, especially at this scale.
Are there no better patterns to be exploited to identify 'zombies'? Running experiments with blocking and then unblocking to validate should work here.
I guess this falls into the bucket of: sure we can do that, given sufficient time and resources
tux1968 · 21h ago
Why do you think that this indicates a problem in identifying zombies? The pause may have simply been the reason that someone became aware there was even a problem. The zombie might have persisted, if it hadn't been paused.
smallnix · 20h ago
> Why do you think that this indicates a problem in identifying zombies?
I understood a zombie to represent a client that is dead and will never come back to life. Since these came back to life, they were not actually zombies. So manual action from actually-alive clients was required. That may be ok, since their behavior was not acceptable, but in the spirit of not penalizing, it would be better not to block those clients if they can be identified and sufficient resources are available to shoulder their misbehaviour.
> The pause may have simply been the reason that someone became aware there was even a problem.
I didn't take that into account and it would be neat. But why would they become aware after this change? Because the error message(/code?) is now different?
jadbox · 18h ago
Does the Unpause button have a CAPTCHA? It's only a matter of time before software tries to auto-unpause when there's a failure... and the cycle repeats. A CAPTCHA on the button should at least discourage software devs from automating the process of unpausing.
RadiozRadioz · 3h ago
No, I don't think that will happen at any scale, because there's no good reason for it.
If this is the error that you're getting, then hitting unpause won't make the certificate requests start working. You'll just go back to receiving the persistent error messages from before the pause.
What do you gain by automating it? This isn't an error that you'll experience in day-to-day successful operation. It's not an error that recurs after resolution, because one action lifts it for years. This lock only happens if a cert request has been consistently broken for a really long time.
Fixing the underlying cause of the cert issuance failures requires human intervention anyway, a human can easily click the button. They also provide first-class support for bulk enablement.
The motivation for automating the button is extremely small.
tough · 14h ago
Aren't CAPTCHAs a solved automation problem nowadays?
Palomides · 20h ago
I'm kinda surprised they bothered; it only caught 100,000 out of the 600,000,000 domains they handle?
wolfgang42 · 18h ago
A working domain needs one validation every ~60 days, but these zombie domains sound like they're making multiple requests per hour (per the article, twice daily would still take 10 years to hit the limit), which is a massively disproportionate use of resources.
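For a sense of scale, going by those numbers: two failed attempts a day for ten years works out to roughly 2 × 365 × 10 ≈ 7,300 consecutive failures, so anything that actually trips the pause is failing vastly more often than any sane renewal schedule ever would.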
saagarjha · 1d ago
I’m curious if they could send emails to accounts indicating that they plan to shut off their access?