SSL certificate requirements are becoming obnoxious

176 points by unl0ckd · 229 comments · 8/26/2025, 12:50:06 PM · chrislockard.net ↗

Comments (229)

Intermernet · 3h ago
Since the advent of LetsEncrypt, ACME, and Caddy I haven't thought about SSL/TLS for more than about an hour per year, and that's only because I forget the steps required to set up auto-renewal. I pay nothing, I spend a tiny amount of time dealing with it, and it works brilliantly.

I'm not sure why many people are still dealing with legacy manual certificate renewal. Maybe some regulatory requirements? I even have a wildcard cert that covers my entire local network which is generated and deployed automatically by a cron job I wrote about 5 years ago. It's working perfectly and it would probably take me longer to track down exactly what it's doing than to re-write it from scratch.

For 99.something% of use cases, this is a solved problem.

stego-tech · 3h ago
As someone on the other side of the fence who lives primarily in IT land, this is far from a solved problem. Not every device supports SSH for copying certs across the network, some devices have arbitrary requirements for the certs themselves (like timelines, lack of SANs, specific cryptography requirements, etc), and signing things internally (so that they’re only valid within the intranet, not on the internet) doesn’t work with LE at present.

So unless you’re part of the folks fine with heavily curating (or jailbreaking) devices to make the above possible, PKI is hardly a solved problem. If anything it remains a nightmare for orgs of all sizes. Even in BigCo at a major SV company, we had a dedicated team to manage PKI for internal certificates - complete with review board, justification documents, etc - and that still only bought us a manual process with a lead time of 72 hours for a cert.

That said, it is measurably improved and I do think ACME/certbot/LE is on the right track here. Instead of constant bureaucratic revisioning of rules and standards documents, I believe the solution here is a sort of modern Wireguard-esque implementation of PKI and an associated certification program for vendors and devices. “Here’s the cert standard you have to accept, here’s the tool to automatically request and pin a certificate, here’s how that tool is configured for internal vs external PKI, and here’s the internal tooling standards projects that want to sign internal certs have to follow.”

Basically an AD CA-alike for SMB and Enterprise both. Saves me time having to get into the nitty gritty of why some new printer/IoT/PLC doesn’t support a cert, and improves the posture of the wider industry.

Shank · 2h ago
I feel like a lot of these requirements need to be really solved from first principles. What do you need these certificates for -- specifically, TLS certificates?

If the biggest issue is "we want to encrypt traffic" then the answer really should be something more automated. To put it another way, TLS certificates used to convey a lot of things. We had basic certs that said "you are communicating with the rightful owner of domain example.com" and we had EV certs that said "you are communicating with the rightful legal entity Example Org, who happens to own example.com" and so-on and so-forth.

But the thing is, we've killed off a lot of these certificate types. We don't have EV certs anymore. Let's Encrypt effectively democratized it to the point where you don't need to do any manual work for a normal "we want to encrypt data" certificate. I just don't understand what your specific requirements are, if they aren't directly "encrypt this traffic" focused, where you actually need valid certificates that work across the internet.

Put differently, if you're running an internal CA, you can push out your own certificates via MDM tools and then not worry about this. If you aren't running your own CA but you're implementing all of this pomp and circumstance, what are you trying to solve? Do you really need all of this ceremony for all of your devices and applications?

TurboSkyline · 58m ago
> But the thing is, we've killed off a lot of these certificate types. We don't have EV certs anymore.

EV certificates are no longer in use? Do you know why?

plorkyeran · 36m ago
They turned out to be expensive theater. One major problem is that company names aren't actually unique. An EV cert for "Stripe, Inc" still doesn't tell you that you're talking to the correct "Stripe, Inc". The other big problem was that users had no clue they were a thing, and when browsers tried to emphasize them in the UI it just confused users and made phishing easier.

You can still buy EV certs if you want to donate money to a CA, but that's about all they accomplish.

nikanj · 1h ago
What do I need these certificates for? I need them because browsers have started equating a vanilla http server to a malware-infested North Korean honeypot
dspillett · 55m ago
> browsers have started equating a vanilla http server to a malware-infested North Korean honeypot

It isn't that they are equal, just that it is difficult to tell them apart. The change over time is that UAs have more and more erred on the side of not trusting when there is any question.

Of course HTTPS sites with valid certificates could also be malware infested hot zones, but it is less likely. Sites with invalid certs are more likely to be a problem than those with no cert (the situation might imply a DNS poisoning issue for instance), and sites with no cert are a higher risk than those with a valid one.

At least we seem to have dropped the EV cert theatre, the extra checks done before issuing one of those were so easy to fake or work around in many cases that they essentially meant nothing [source: in DayJob we once had an EV cert for a client instance, and I know how easy it was to get because I was the person at our end who applied for it and installed it once issued].

pelagicAustral · 3h ago
I work for government and I can tell you the guys working infrastructure are still paying for shitty SSL certificates every year, in most cases for infrastructure that doesn't even see the light of day (internal), and the reason for that is none other than not knowing any better, and being unable to get their head out of their asses for enough time to learn something novel and implement it in their workflow. So yeah, there are those types out there in the wild still.
stego-tech · 3h ago
In our defense, it’s because we’re expected to give everything a cert but often have no say on the security and cryptography capabilities of what’s brought onto the network in the first place, nevermind the manpower and time to build such an automated solution internally. Execs bringing in MFPs that don’t support TLS, PLCs that require SHA-1, routers with a packet buffer measured in single-digit integers but with a Java web GUI, all of these horrors and more are why it’s such a shitshow even today.

Just because someone’s homelab is fully cert’d through Caddy and LE that they slapped together over a weekend two years ago, doesn’t mean the process is trivial or easy for the masses. Believe me, I’ve been fighting this battle internally my entire career and I hate it. I hate the shitty state of PKI today, and how the sole focus seems to be public-facing web services instead of, y’know, the other 90% of a network’s devices and resources.

PKI isn’t a solved problem.

tetha · 1h ago
Aye. Between Lets Encrypt and compatible vendors, and e.g. Vault for internal CA-chains, /issuing/ certs in a secure, controlled and audited way (at least for the internal ones) is indeed a solved issue. We do have a lot of internal PKI chains running through vault and most of them are pushing 72 hour TTLs.

If you can do that, do that. It's great.

That being said, deploying and loading these certs is a fun adventure, even in somewhat modern software.

We're discovering a surprising amount of fairly modern software which handles certs in stupid ways. For example, if I recall right, NodeJS applications tend to load certificates to secure a connection to PostgreSQL into memory and never bother to reload. libpq on the other hand loads them on connection startup. Even Spring wasn't able to properly reload certificates 3-4 years ago and this was only added in recent versions with dynamic certificates - a colleague patched this in back in the day. Our loadbalancers folded the ticket of "Hey, reload these three certs" into the issue of reloading the entire configuration, including opening, closing ports, handing TCP connections over transparently, and a billion other topics.
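
As a rough illustration of what the fix looks like, a server can hand its certificate out through a callback and refresh it from disk in the background, so a renewed cert is picked up on new handshakes without a restart. A minimal Go sketch (file paths and the refresh interval are made up):

    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
        "sync"
        "time"
    )

    // certReloader re-reads the key pair from disk so renewed certificates
    // are served on new handshakes without restarting the process.
    type certReloader struct {
        mu                sync.RWMutex
        cert              *tls.Certificate
        certFile, keyFile string
    }

    func (r *certReloader) reload() {
        cert, err := tls.LoadX509KeyPair(r.certFile, r.keyFile)
        if err != nil {
            log.Printf("cert reload failed: %v", err)
            return
        }
        r.mu.Lock()
        r.cert = &cert
        r.mu.Unlock()
    }

    func (r *certReloader) getCertificate(*tls.ClientHelloInfo) (*tls.Certificate, error) {
        r.mu.RLock()
        defer r.mu.RUnlock()
        return r.cert, nil
    }

    func main() {
        r := &certReloader{certFile: "/etc/ssl/app/fullchain.pem", keyFile: "/etc/ssl/app/privkey.pem"}
        cert, err := tls.LoadX509KeyPair(r.certFile, r.keyFile)
        if err != nil {
            log.Fatal(err)
        }
        r.cert = &cert
        go func() {
            // Refresh hourly, e.g. after an ACME client has replaced the files on disk.
            for range time.Tick(time.Hour) {
                r.reload()
            }
        }()

        srv := &http.Server{
            Addr:      ":8443",
            TLSConfig: &tls.Config{GetCertificate: r.getCertificate},
        }
        // Empty file arguments: the certificate comes from GetCertificate above.
        log.Fatal(srv.ListenAndServeTLS("", ""))
    }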

Funny enough, if there is enough stuff running, there's not even an agreement on how to store stuff. Some stuff needs PEM files, some stuff needs PKCS8 stores, some stuff wants 1-2 PKCS12 keyrings with certs and keys in there in separate entries. I even had to patch a PKCS12 handling library because one Ruby library needed everything in one entry in a PKCS12 key store and there was no Go code under the sun to do this.

So there is a sunny place in which PKI is indeed solved. And besides that there is a deep dark hole that goes deeper than it should on earth.

stackskipton · 3h ago
It is solved, but the devices you are talking about refuse to get on board with the fix, so here we are.

Also, I used to do IT, I get it but what do you think the fix here is? You could also run your own CA that you push to all the devices and then you can cut certificates as long as you want.

Muromec · 2h ago
Where I am right now, if devices can't do what the regulator and internal security ask, we find a new vendor.
znpy · 3h ago
Don't take this as a snarky comment, but that sounds quite literally like a "skill issue". Not in you personally, but in the environment you work in.

> PKI isn’t a solved problem.

PKI is largely a solved issue nowadays. Software like Vault from HashiCorp (it's FIPS compliant, too: https://developer.hashicorp.com/vault/docs/enterprise/fips) lets you create a cryptographically-strong CA and build the automation you need.

It's been out for years now, integrating the root CA shouldn't be much of an issue via group policies (in windows, there are equivalents for mac os and gnu/linux i guess).

> Just because someone’s homelab is fully cert’d through Caddy and LE that they slapped together over a weekend two years ago, doesn’t mean the process is trivial or easy for the masses.

Quite the contrary: it means that the process is technically so trivial the masses can do it in an afternoon and live off it for years with little to no maintenance.

Hence, if a large organization is not able to implement that, the issue is in the organization, not in the technology.

ecb_penguin · 3h ago
> but that sounds quite literally as "skill issue". Not in you personally, but in the environment you work in.

You have no idea the environment they work in. The "skill issue" here is you thinking your basic knowledge of Vault matters.

> Software like Vault from hashicorp (it's FIPS compliant, too: https://developer.hashicorp.com/vault/docs/enterprise/fips) let you create a cryptographically-strong CA and build the automation you need.

They didn't tell you their needs, but you're convinced this vendor product solves it.

Are you a non-technical CTO by chance?

> there are equivalents for mac os and gnu/linux i guess

You guess? I'm sensing a skill issue. Why would you say it's solved for their environment, "I guess??"

> Quite the contrary: it means that the process is technically so trivial the masses can do it in an afternoon and live off it for years with little to no maintenance.

I'm sensing you work in a low skill environment if you think "home lab trivial" translates to enterprise and defense.

> Hence, if a large organization is not able to implement that, the issue is in the organization, not in the technology.

Absolutely meaningless statement.

stego-tech · 2h ago
Interjecting into this back-and-forth real quick.

* Yes, I have experience with Vault. I have deployed it internally, used it, loathed it, and shelved it. It’s entirely too cumbersome for basic PKI and secrets management in non-programmatic environments, which is the bulk of enterprise and business IT in my experience.

* You’re right, the organization is the problem. Let me just take that enlightened statement to my leadership and get my ass fired for insubordination, again, because I have literally tried this before with that outcome. Just because I know better doesn’t mean the org has to respect that knowledge or expertise. Meritocracies aren’t real.

* The reason I don’t solve my own PKI issues with Caddy in my homelab is because that’s an irrelevant skill to my actual day job, which - see the point above - doesn’t actually respect the skills and knowledge of the engineers doing the work, only the opinions of the C-suite and whatever Gartner report they’re foisting upon the board. Hence why we have outdated equipment on outdated technologies that don’t meet modern guidelines, which is most enterprises today. Outside of the tech world, you’re dealing with comparable dinosaurs (no relation) who see neither the value nor the need for such slick, simplified solutions, especially when they prevent politicians inside the org from pulling crap.

I’ve been in these trenches for fifteen years. I’ve worked in small businesses, MSPs, school campuses, non-profits, major enterprises, manufacturing concerns, and a household name on par with FAANG. Nobody had this solved, anywhere, except for the non-profit and a software company that both went all-in on AD CA early-on and threw anything that couldn’t use a cert from there off the network.

This is why I storm into the comments on blogs like these to champion their cause.

PKI sucks ass, and I’m tired of letting DevOps people claim otherwise because of Let’s Encrypt and ACME.

LeifCarrotson · 2h ago
> It's been out for years now, integrating the root CA shouldn't be much of an issue via group policies (in windows, there are equivalents for mac os and gnu/linux i guess).

How do you do this on a proprietary device from the late 90s that runs a WindRiver VXWorks RTOS with 1 MB of SRAM? The updated (full color!) Panelview HMI is running Windows CE 6.0, so it's perhaps more likely to be compatible, but I don't think the same group policies exist on that platform.

The masses can do it in an afternoon because they can choose to only install modern equipment that's compatible with the new requirements. Some of the "heavy iron" castings for the machines we have to work with were built more than a century ago, and only later automated with tubes and relays, and then again with the first PLCs that could do the job. But now "SSL everywhere" policies and certificate expiration timelines that don't make a distinction between firewalled OT networks and Internet-facing webservers don't allow anything to run for a decade without major, risky rewrites that cost tens of thousands of dollars for highly specialized engineering services and minimal downtime. Sure, adding a cert to the SCADA server is trivial, it runs Windows Server and has a NIC that can access the Internet, but on the other NIC...there's a menagerie of 30 years of industrial oddities.

If your homelab is still working after 2 years, that's great, but if it's not running after 100 years would you call that an organizational failure?

wrs · 1h ago
Put a modern reverse proxy in front of the legacy thing and turn off cert verification in the proxy? Or if the problem is the reverse, tell the legacy thing to use a forward proxy configured like it’s 2001 as its router.
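
As a rough sketch of the first option, a TLS-terminating reverse proxy can be a handful of lines; everything here (device address, cert file names) is hypothetical:

    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Legacy device with an ancient, unverifiable certificate.
        backend, err := url.Parse("https://10.0.0.42")
        if err != nil {
            log.Fatal(err)
        }

        proxy := httputil.NewSingleHostReverseProxy(backend)
        proxy.Transport = &http.Transport{
            // Verification is disabled only on the internal hop to the device.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }

        // Clients talk to the proxy, which serves a current, properly issued cert.
        log.Fatal(http.ListenAndServeTLS(":443", "proxy-fullchain.pem", "proxy-key.pem", proxy))
    }
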
Muromec · 2h ago
>How do you do this on a proprietary device from the late 90s that runs a WindRiver VXWorks RTOS with 1 MB of SRAM?

You don't, and you are not supposed to bend your security posture backwards to accommodate insecure devices that should be in the landfill by now.

luma · 3h ago
Alternatively, those people are dealing with legacy systems that are pathologically resistant to cert automation (looking SQUARELY AT YOU vmware) and elect for the longest lasting certs they can get their hands on to minimize the disruption.

It’s generally best to assume experts in other fields are doing things for good reasons, and if you don’t understand the reason it might be something other than them being dumb.

ruszki · 1h ago
Generally agree that we shouldn't jump to conclusions, but experts in my own field are doing bad things mainly because of a lack of knowledge, with lack of will as a very close contender for the top position. “The lack of possibility” is the exception.

When I argued here that Linux is still pain to maintain on my laptops, it wasn’t because it’s not possible. I just didn’t have enough willpower.

People who jumped on me because of this were still idiots, because they thought that their even more limited knowledge (what the error messages were, my exact setup, etc.) could be helpful to me; but it would still be wrong to state that I was unsuccessful because it's not possible, or that I had full knowledge back then.

Especially now, that I’m running Linux full time.

For example, I’m quite sure that you can solve this problem with a proxy, no matter what’s behind it. Maybe it’s infeasible, because you would need a custom proxy implementation, but it’s definitely possible.

pelagicAustral · 3h ago
Yeah OK, touché, I was over the top... I just need my cup of coffee...
o_m · 3h ago
The older I get, the more skeptical I get of free services that run on other people's servers. They have a bunch of expenses and you are getting it for free. You are not the customer. I'd rather pay for a service than gamble on some free service that might be shut down at any time, or that might have malicious intent.
PhilippGille · 3h ago
Let's Encrypt is run by a nonprofit organization [1], funded by corporate and individual sponsors (like Google and AWS, but also the EFF and Mozilla) [2].

That doesn't guarantee they don't have malicious intent, but it's different from a for-profit company that tries to make money off you.

[1] https://www.abetterinternet.org/about/

[2] https://www.abetterinternet.org/sponsors/

IcePic · 2h ago
I think for certs, you are not better off paying $5 for the cert than paying nothing to get an LE cert. It is already "subsidized" into cheapness, and the $5 company will bug you with ads for EV certs and whatnot in order to make a profit off you somehow, since you are now a customer.

What I think LE did was to gather the required bag of money that any cert issuer needs to pony up to get the infra up and validated, and then skip the $5 part and just run on donations. So while LE might stop tomorrow, you don't have any good guarantees that the $5 cert company will last longer if their side business goes under, and if you go to a $100 cert company, you are just getting scammed by some company who will soon realize that most certs are being given away and that they can't prove why their $100 certs are "better" in any meaningful way, so they will also be at risk of going under. In all these cases, you get to use your cert for whatever validity period you had, and then rush over to the next issuer, whoever is left when the pay-for-certs business tanks.

As opposed to cars or whatever, you can't really put more "quality math" into the certs so they last longer; the CAs have limits on how long they are allowed to last, so no more 10-year certs for public services anyhow. You might as well get the cheapest of the ones that are still valid and useful (i.e., exist in browser CA lists), and LE is one of those. There might be more (ZeroSSL?) but the same argument would hold for those. The CA list is curated by the browser teams far better than me or you shopping around websites that make weird claims on why their certs are worth paying $100 for.

jraph · 3h ago
With you in general, but in this specific case, the whole thing seems healthy:

- Many companies (including competitors) are sponsoring LE, so the funding should be quite robust

- These companies are probably winning from the web being more secure, so the incentives are aligned with yours (contrary to, say, a company that offers something free but wants to bury you in ads)

- the vendor lock-in is very weak. The day LE goes awry, you can move to another CA pretty painlessly

There are CAs supporting ACME that provide paid services as well.

aitchnyu · 3h ago
What's a LetsEncrypt competitor which has convenient automated renewal?
patrakov · 12m ago
ZeroSSL - they provide Sectigo certificates under the hood. Works well with Dehydrated.
trenchpilgrim · 2h ago
Any that support ACME. Most of the big SSL companies do nowadays.
trenchpilgrim · 3h ago
There are paid ACME services - basically LE with paid support.
znpy · 3h ago
Yeah, one of those is https://zerossl.com/
basscomm · 2h ago
> Since the advent of LetsEncrypt, ACME, and Caddy I haven't thought about SSL/TLS for more than about an hour per year

I run a couple of low-stakes websites just for fun and manually updating certificates takes me about 10 minutes a year, and most of that is remembering how to generate the csr. Setting up an automated process gains me nothing except additional points of failure. Ratcheting the expiration down to 47 days is an effort to force everyone to use automation, which makes no sense for small hobby sites.
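
For what it's worth, the "how did I generate that CSR again?" step can itself be written down as a tiny script so there's nothing to remember. A rough Go sketch, with a made-up hostname:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "os"
    )

    func main() {
        // Fresh key for this year's cert; hobby.example.org is a placeholder.
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }

        csrDER, err := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{
            Subject:  pkix.Name{CommonName: "hobby.example.org"},
            DNSNames: []string{"hobby.example.org", "www.hobby.example.org"},
        }, key)
        if err != nil {
            log.Fatal(err)
        }

        keyDER, err := x509.MarshalECPrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        // Print the key and the CSR as PEM; paste the CSR into the CA's form.
        pem.Encode(os.Stdout, &pem.Block{Type: "EC PRIVATE KEY", Bytes: keyDER})
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: csrDER})
    }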

> I'm not sure why many people are still dealing with legacy manual certificate renewal

Not everyone is a professional sysadmin. Adding automation is another layer of complexity on top of what it already takes to run a website. It's fine for large orgs and professionals who do this for a living at their day jobs, but for someone just getting their feet wet it's a ridiculous ask.

eythian · 1h ago
I run a few low-stakes hobby things, and LE cert automation took the "once a year or so figure out how to do this because I haven't done it in a year and I should write it down but when I'm done I just go to the pub instead" to "", which was a nice change. Now I only have to interact with it when I add a new vhost to the web server, and that's just run a command to do so.
roblabla · 2h ago
What's frankly ridiculous is that big server software like Nginx and Apache doesn't deal with this on its own. I've been letting Caddy (my http host of choice) deal with TLS for me for _ages_ now. I don't have to think about anything, I don't have to set up automation. I just... configure my caddy to host my website on https://my.domain.com and it just fetches the TLS cert for me, renews it when necessary, and uses it as necessary.

You don't need to be a professional sysadmin to deal with this - so long as the software you use isn't ass. Nginx will _finally_ get this ability in the next release (and it'll still be more configuration than Caddy, which just defaults to the sane thing)...
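
To illustrate how little the server itself has to do once ACME is built in, here's a rough Go sketch using the x/crypto autocert package (domain and cache directory are placeholders) - roughly the behaviour Caddy gives you out of the box:

    package main

    import (
        "log"
        "net/http"

        "golang.org/x/crypto/acme/autocert"
    )

    func main() {
        m := &autocert.Manager{
            Prompt:     autocert.AcceptTOS,
            HostPolicy: autocert.HostWhitelist("my.domain.com"), // placeholder domain
            Cache:      autocert.DirCache("/var/lib/autocert"),  // where certs and keys get stored
        }

        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello over TLS\n"))
        })

        srv := &http.Server{
            Addr:      ":443",
            Handler:   mux,
            TLSConfig: m.TLSConfig(), // obtains and renews certificates on demand
        }
        log.Fatal(srv.ListenAndServeTLS("", ""))
    }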

fanf2 · 1h ago
Apache has had mod_md since 2018 https://httpd.apache.org/docs/2.4/mod/mod_md.html
alanfranz · 1h ago
Nginx just added support for acme iirc.
jtbayly · 46m ago
One example is FreePBX, which supposedly supports automated SSL renewal via LetsEncrypt, but constantly overwrites the config, thus breaking the “auto.” So I have to manually renew the cert, or I have to upgrade (ie rebuild the whole system from scratch, including installing the OS) to the next major version of FreePBX and hope they fixed the issue.

So I’m really not excited about the time limit dropping from 90 days to 47 days.

TrueDuality · 1h ago
Speaking as someone who has worked in tightly regulated environments, certificates are kind of a nasty problem, and there are a couple of requirements that conflict with full automation of certificates.

- All certificates and authentication material must be rotated at regular intervals (no conflict here, this is the goal)

- All infrastructure changes need to have the commands executed and contents of files inspected and approved in writing by the change control board before being applied to the environment

That explicit approval of any changes being made within the environment goes against these being automated in any way, shape, or form. These boards usually meet monthly or ad hoc for time-sensitive security updates, and usually have very long lists of changes to review, causing the agenda to constantly overflow to the next meeting.

You could probably still make it work as a priority standing agenda item, but it's still going to involve a manual process and review every month. I wouldn't want to manually rotate and approve certificates every month, and many of these requirements have been signed into law (at least in the US).

Starting to see another round of modernization initiatives so maybe in the next few years something could be done...

ericpauley · 1h ago
It seems like the core problem here is that certificates are considered part of infrastructure, and further that they're part of infrastructure that requires approval!

Clearly not all automated infrastructure requires approval: autoscaling groups spin up and tear down compute instances all the time. Further, changes to data can't universally require approval, otherwise every CRUD operation would require a committee meeting.

Are certificates truly explicitly defined to be infrastructure that requires change approval? If not, perhaps more careful interpretation of the regulations could allow for improved automation and security outcomes.

syncsynchalt · 19m ago
> Clearly not all automated infrastructure requires approval: autoscaling groups spin up and tear down compute instances all the time.

In these sort of environments, they do not.

We're talking about environments where it is forbidden to make _any_ change of any kind without a CCB ticket. Short cert lifetimes are fundamentally at odds with this. Luckily these systems often aren't public and don't need public certs, but there's a slice of them that do.

mrgaro · 1h ago
What dictates that certificate updates need to have a manual change process? I'd bet that it's just the legal team saying "this is how it's always been" instead of adjusting their interpretation as the environment around them changes.
whynotmaybe · 35m ago
Yes, those rules always come from legal.

In some regulated places, someone is legally responsible for authorizing a change in production.

If it fails, that person's on the hook. So the usual way is to have a manual authorization for every change. Yes, it's a PITA.

One place I've worked changed their process to automatically allow changes in some specific parts for a specific period during the development of a new app.

And for some magical reason, the people usually associated with such legal responsibility are the ones that don't trust automated processes.

evilduck · 3h ago
For all but one of my personal use cases, Tailscale + Caddy have even automated away the setup steps and autorenewal of SSL with LetsEncrypt. Just toggle on the SSL features with `tailscale cert`, maybe point Caddy at the Tailscale socket file depending on your user setup, then point an upstream at the correct hostname and you're done.
CamouflagedKiwi · 3h ago
I've worked with a major financial institution (let's just say that you'd definitely recognise the name) in a past job, and while I couldn't really see exactly what was going on with the certs they issued, I'm sure it was a pretty manual process given our observations of things changing then reverting again later. I don't think regulation was really the issue though, just bad old processes.

I wonder what they will do with the shorter validity periods. They aren't required to comply in the same way; it's not a great look not to but I can't believe the processes will scale (for them or their customers) to renewing an order of magnitude more frequently.

gchamonlive · 1h ago
I think part of this can be explained by the fact that in many corporations, code and everything supporting it, like development time, developer quality, infrastructure maintenance and software architecture, aren't seen as investment but as a sunk cost that needs to be reduced ad infinitum to appease shareholders, so they throw human suffering and business risk at it to make it look good in the next quarterly report.
allan_s · 2h ago
For regulatory requirements: yes !

For eIDAS certificates, I can currently only choose a vouched certificate provider, and it's mostly ones that require me to show up in person with my ID card, with someone verifying that the guy who made the CSR is actually me.

The certificate is used for double SSL (mutual TLS) to authenticate the server making the request, i.e. that the server making an API call to the bank's server is one I own. (I find it a pretty neat solution, and much better than having to do a theater dance to get a token to renew every 3600 seconds.)

Muromec · 2h ago
Well, it's eIDAS, of course you need to show the ID
electroly · 2h ago
Lazy vendors. I know how to set up Let's Encrypt for web servers that I directly control, but some of the web servers are embedded in commercial vendor products. If this were just about people's directly-controlled nginx/caddy webservers this would be easy. We're not talking about homelabs here.
holoduke · 3h ago
Older Android 7 devices are not supported with letsencrypt. For us still 20% of our userbase. We went for a paid subscription with zerossl.
Symbiote · 2h ago
That's six years obsolete, without security patches. What sector has 20% of users with that hardware?
ronsor · 2h ago
Half of all Android manufacturers suck, and many users are too lazy to update, so if you target the platform, you always have to support a bunch of random old versions.
GuB-42 · 2h ago
And yet, I am a bit worried that now, most of the web depends on LetsEncrypt. That's a single point of failure. Sure, they are "good guys", really, but remember that Google used to be "good guys" too. And this is a US-based organization, dependent on US rules, which is not so bad, but alternatives would be nice.

And yes, there are alternatives, but everything is made so that LetsEncrypt is the only reasonable choice.

First, if you are not using https, you get shunned by every major web browser, you don't get the latest features, even those that have nothing to do with encryption (e.g. brotli compression), downloads get blocked, etc... So you need https; good thing LetsEncrypt makes it so easy, so you use LetsEncrypt.

Because of the way LetsEncrypt verification works, you get short-term certificates; ok, fine. Other CAs do things differently, making short-term certificates impractical, so your certificates last longer. But now browsers are changing their requirements to allow only short-term certificates; not a problem, just switch to LetsEncrypt, and it is free too.

Also, X.509 certificates, which are the basis of https (incl. TLS, HTTP/3, ...), only support a single signature, so I guess it is LetsEncrypt and nothing else.

antoinealb · 2h ago
There are more ACME-compatible CAs than just Let's Encrypt, should they ever become the bad guys, or if you don't want to trust them for any reason, see [0].

I understand that people get annoyed at shorter cert lifetime, for instance if you are managing appliances or use SSL certs for other reasons than the common use case. But if you just want to serve a website, there are not so many reasons not to use HTTPS today, either on Let's Encrypt or on something else.

[0] https://acmeclients.com/certificate-authorities/

ameliaquining · 2h ago
How would you propose things should work instead?
GuB-42 · 2h ago
The idea would be the ability for a certificate to accept multiple signatures, making it more of a "web-of-trust" system. So you still have your LetsEncrypt certificate, but maybe augmented by another signature from a similar authority located in another country, or some other reputable organization that has your best interests in mind.

Maybe there are problems with that, but I never really understood the limit of a single signature for certificates. Is it because of bandwidth and performance requirements? Is that really a problem nowadays, especially with ECDSA making public keys much smaller?

ameliaquining · 1h ago
Does this solve any problem that isn't solved equally well by just acquiring multiple separate certificates? I guess it would make your service highly available in case of revocation, but unexpected revocations are rare enough that almost everyone is willing to run the risk of a brief outage in case one occurs.
unethical_ban · 2h ago
Having worked in IT, I assure you this is not a 99% implemented solution.

Weird internal setups, dozens of proprietary or in-house sites, different verification needs for internal vs. external.

And I'm speaking of the easier scenario of an internal CA.

echelon · 3h ago
I really hate the HTTPS requirement that Google unilaterally mandated for everyone.

Just wait until SSL is used to prevent us from publishing anything.

Your ID will have to be on file and be compliant.

We've gone from really simple tools to tools that could easily be used to ensnare us and rid us of our rights.

Encryption doesn't necessarily mean privacy. It can also mean control.

kangs · 3h ago
It mainly means control these days. I've made SSL, and later TLS, requirements for web browsers, and we had fights over this sort of stuff.

Yeah, encryption is needed. But then you need authentication. And then, if authentication is controlled by corporations, you're f'd.

Instead you'd want identities to be distributed and owned by everyone. Many trust models have been developed, and other than raw UI problems (hi gpg/pgp) it's really not a terrible UX.

icedchai · 3h ago
Certs are free. All you need is a domain name and letsencrypt.
echelon · 2h ago
Slippery slope. Who controls the domain name system? Who controls how certs are handled by browsers and which ones are trusted?

All of these things we take for granted can change. You're watching it happen right now.

icedchai · 1h ago
The domain system needs at least some central control at the top, or it literally won't work. Do you remember the various "alternate root" projects in the 90's? I've stopped following them.

The cert situation has vastly improved over the past 30 years. I remember paying hundreds of dollars for certificates in the 90's, faxing in forms, etc.

ameliaquining · 2h ago
Isn't this problem already inherent to domain names, even without encryption? There's always a central authority that can take away your stuff, and always has been. (In theory you can solve this with a blockchain, but, well, gestures)
gdbsjjdn · 4h ago
I understand OP's frustration, but the alternate view is that mandating better practices is a forcing function for businesses that otherwise don't give a shit about users or their privacy or security.

For all the annoyance of SOC2 audits, it sure does make my manager actually spend time and money on following the rules. Without any kind of external pressure I (as a security-minded engineer) would struggle to convince senior leadership that anything matters beyond shipping features.

Jeslijar · 4h ago
Why is a month's expiration better than a year or two years?

Why wouldn't you go with a week or a day? isn't that better than a whole month?

Why isn't it instead just a minute? or a few seconds? Wouldn't that be better?

Why not have certificates dynamically generated constantly and have it so every single request is serviced by a new one and then destroyed after the session is over?

Maybe the problem isn't that certificates expire too soon, maybe the problem is that humans are lazy. Perhaps it's time to go with another method entirely.

allan_s · 3h ago
I think it's all about change management

A whole month puts you in the territory of "if you don't have the resources to automate it, it's still doable by a human; not enough to crush somebody, but still enough to make the option of fully automating it something to consider."

Hence why it's better than a week or a day (too much pressure for small companies) and better than hours/minutes/seconds (which would mean going from one year to "now it must be fully automated, right now!").

A year or two years was not a good idea, because you lose knowledge and it creates pressure (oh my... not the scary yearly certificate renewal; I remember last year we broke something, but I don't remember what...).

With a month, you either start to fully document it, or at least have it fresh in your mind. A month also gives you time to think each time, "OK, we have 30 certificates; can't we have a wildcard, or a certificate with several domains in it?"

> Perhaps it's time to go with another method entirely.

I think that's the way forward, it's just that it will not happen in one step, and going to one month is a first step.

Source: we have to manage a lot of certificates for a lot of different use cases (SSH, mutual SSL for authentication, classical HTTPS certificates, etc.) and we learned the hard way that no, 2 years is not better than 1, and I agree that one month would be better.

also https://www.digicert.com/blog/tls-certificate-lifetimes-will...

ameliaquining · 3h ago
I think the less conservative stakeholders here would honestly rather do the six-day thing. They don't view the "still doable by a human" thing as a feature; they'd rather everyone think of certificate management as something that has to be fully automated, much like how humans don't manually respond to HTTP requests. Of course, the idea is not to make every tiny organization come up with a bespoke automation solution; rather, it's to make everyone who writes web server software designed to be exposed to the public internet think of certificate management as included within the scope of problems that are their responsibility to solve, through ACME integration or similar. There isn't any reason in principle why this wouldn't work, and I don't think there'd have been a lot of objections if it had worked this way from the beginning; resistance is coming primarily from stakeholders who don't ever want to change anything as they view it as a pure cost.

(Why not less than six days? Because I think at that point you might start to face some availability tradeoffs even if everything is always fully automated.)

belval · 3h ago
> it creates pressure (oh my.... not the scary yearly certificate renewal, i remember last year we broke something, we i don't remember what...)

Ah yes, let's make a terrible workflow to externally force companies who can't be arsed to document their processes to do things properly, at the expense of everyone else.

hombre_fatal · 3h ago
But it's a decent trade-off and you're using sarcasm in place of fleshing out your claim.

Monthly expiration is a simple way to force you to automate something. Everyone benefits from automating it, too.

FuriouslyAdrift · 3h ago
I just recently had an executive-level manager ask if we could get a 100-year cert for our ERP, as the hassle of cert management and the massive cost of missing a renewal made it worth it.

He said six figures for the price would be fine. This is an instance where business needs and technology have gotten really out of alignment.

op00to · 2h ago
Start your own business - nginx proxy in front of ERP where you handle the SSL for them, put $$ in a trust to ensure there's enough money to pay for someone to update the cert.
9dev · 3h ago
How on earth would that make more sense than properly setting up ACME and forgetting about the problem for the next hundred years?? If your bespoke ERP system is really so hostile toward cert changes, put it behind a proper reverse proxy with modern TLS features and self-sign a certificate for a hundred years, and be done with it.

It'll take about fifteen minutes of time, and executive level won't ever have to concern themselves with something as mundane as TLS certificates again.

FuriouslyAdrift · 2h ago
Support contract states we cannot put it behind a proxy. We used to use HAProxy and multiple web server instances, but the support switched to India and they claimed they could no longer understand or support that configuration. Since it is a main system for the entire org and the support contract is part of our financial liability and data insurance, the load balancer had to go. This is corporate enterprise IT. Now you know why sysadmins are so grumpy.
9dev · 42m ago
My condolences:)
darkwater · 3h ago
> How on earth would that make more sense than properly setting up ACME and forgetting about the problem for the next hundred years?? If your bespoke ERP system is really so hostile toward cert changes, put it behind a proper reverse proxy with modern TLS features and self-sign a certificate for a hundred years, and be done with it.

I completely agree with you, but you would be astonished by how many companies, even small/medium companies that use recent technologies and are otherwise pretty lean, still think that restarting/redeploying/renewing as little as possible is the best way to go instead of fixing the root issue that makes restarting/redeploying/renewing a pain in the ass.

moduspol · 51m ago
I don't know about OP but I've also worked plenty of places where I seem to be the only person who understands TLS.

And not even at the "math" level. I mean, like, how to get them into a Java keystore. Or how to get Apache or nginx to use them. That you need to include the intermediate certificate. How to get multiple SANs instead of a wildcard certificate. How to use certbot (with HTTP requests or DNS verification). How to get your client to trust a custom CA. How to troubleshoot what's wrong from a client.

I think the most rational takeaway is just that it's too difficult for a typical IT guy to understand, and most SMBs that aren't in tech don't have anyone more knowledgeable on staff.

9dev · 43m ago
> I think the most rational takeaway is just that it's too difficult for a typical IT guy to understand, and most SMBs that aren't in tech don't have anyone more knowledgeable on staff.

Where would that kind of thinking lead us..? Most medical procedures are too complex for someone untrained to understand. Does that mean clinics should just not offer those procedures anymore, or should they rather make sure to train their physicians appropriately so they’re able to… do their job properly?

moduspol · 30m ago
Well I mean there's no inherent requirement that PKI work the way it does. We've mostly just accepted it because it's good enough.

Even if your server admins fully understand TLS, there are still issues like clock skew on clients breaking things, old cipher suites needing to be reviewed / sunset, users clicking past certificate warnings despite training, and the list of (sometimes questionable) globally trusted CAs that the security of the Internet depends upon.

Of course they should do their job properly, but I'm skeptical that we (as software developers) can't come up with something that can more reliably work well.

FuriouslyAdrift · 2h ago
I have to schedule at least 30 days out on any change or restart for main systems and I may be overruled by ANY manager.

I actually watched for crashes (thank you inventory control department shenanigans) so that I can sneak in changes during a reset.

9dev · 3h ago
> […] that restarting/redeploying/renewing as less as possible is the best way to go instead of fixing the root issue that makes restarting/redeploying/renewing a pain in the ass.

I mean… There's a tradeoff to be sure. I also have a list of things that could be solved properly, but can't justify the time expense to doing so compared to repeating the shortcut every so often.

It's like that expensive espresso machine I've been drooling over for years—I can go out and grab a lot of great coffee at a barista shop before the machine would have saved me money.

But in this particular instance, sure; once you factor the operational risk in, proper automation often is a no-brainer.

zoeysmithe · 3h ago
Yep this. This is just "we have so much technical debt, our square pegs should fit into all round holes!"

Business culture devaluing security is the root of this, and I hope people see the above example of everything that's wrong with how some technology companies operate. "Just throw money at the problem because security is an annoying cost center" is super bad leadership. I'm going to guess this guy also has an MFA exception on his account and a 7 character password because "it just works! It just makes sense, nerds!" I've worked with these kinds of execs all my career and they are absolutely the problem here.

FuriouslyAdrift · 2h ago
IT serves business needs... not the other way around. If anything, cloud services and mobile device access has made securing anything just about impossible.
Loudergood · 20m ago
Classic case of business not understanding that it doesn't just need access to the data, it needs secure access to the data.
btown · 3h ago
Sure, there is an argument about slippery slopes here. But the thing about the adage of "if you slowly boil a frog..." (https://en.wikipedia.org/wiki/Boiling_frog) is that not only is the biological metaphor completely false, it also ignores the fact that there can be real thresholds that can change behavior.

Imagine you run an old-school media company who's come into possession of a beloved website with decades of user-generated and reporter-generated content. Content that puts the "this is someone's legacy" in "legacy content." You get some incremental ad revenue, and you're like "if all I have to do is have my outsourced IT team do this renewal thing once a year, it's free money I guess."

But now, you have to pay that team to do a human-in-the-loop task monthly for every site you operate, which now makes the cost no longer de minimis? Or, fully modernize your systems? But since that legacy site uses a different stack, they're saying it's an entirely separate project, which they'll happily quote you with far more zeroes than your ads are generating?

All of a sudden, something that was infrequent maintenance becomes a measurable job. Even a fully rational executive sees their incentives switch - and that doesn't count the ones who were waiting for an excuse to kill their predecessors' projects. We start seeing more and more sites go offline.

We should endeavor not to break the internet. That's not "don't break the internet, conditional on fully rational actors who magically don't have legacy systems." It's "don't break the internet."

tyzoid · 3h ago
Pretty much any legacy system can have a modern reverse proxy in front of it. If the legacy application can't handle certs sanely, use the reverse proxy for terminating TLS.
btown · 2h ago
"Just use Nginx" was not a viable option here, without additional Certbot etc. orchestration, until 14 days ago! And this is still in preview! https://blog.nginx.org/blog/native-support-for-acme-protocol

And, if you haven't been using a reverse proxy before, or for business/risk reasons don't want to use your main site's infrastructure to proxy the inherited site, and had been handling certificates in your host's cPanel with something like https://www.wpzoom.com/blog/add-ssl-to-wordpress/ - it is indeed a dedicated project to install a reverse proxy!

johannes1234321 · 3h ago
The exact time probably has no "best", but from past experience: I have seen so many places where multi-year certificates were used and people forgot about them, till some service suddenly stopped working and people then had to figure out how to replace that cert.

A short cycle ensures either automation or keeping memory fresh.

Automation of course can also be forgotten and break, but it's at least written down somewhere in some form (code), rather than living in the personal memory of a long-gone employee who used to manually upload certs to some CA website for signing, etc.
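
Even with automation, the cheap insurance is a check that the deployed cert actually has runway left. A minimal Go sketch (hostname and the 14-day threshold are arbitrary):

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Connect to the site and inspect the leaf certificate it actually serves.
        conn, err := tls.Dial("tcp", "example.com:443", nil)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        leaf := conn.ConnectionState().PeerCertificates[0]
        left := time.Until(leaf.NotAfter)
        fmt.Printf("expires %s (%.0f days left)\n", leaf.NotAfter.Format(time.RFC3339), left.Hours()/24)

        if left < 14*24*time.Hour {
            log.Fatal("less than 14 days left - renewal is probably broken, alert someone")
        }
    }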

Thorrez · 3h ago
>Why isn't it instead just a minute? or a few seconds? Wouldn't that be better?

Then if your CA went down for an hour, you would go down too. With 47 days, there's plenty of time for the CA to fix the outage and issue you a new cert before your current one expires.

8organicbits · 3h ago
Lots of ACME software supports configuring CA fallbacks, so even if a CA is down hard for an extended period you can issue certificates with the others.

Using LetsEncrypt and ZeroSSL together is a popular approach. If you need a stronger guarantee of uptime, reach for the paid options.

https://github.com/acmesh-official/acme.sh?tab=readme-ov-fil...

Thorrez · 3h ago
If everyone uses that with 1 minute or 1 second expirations, I could certainly see a case where an outage in 1 CA causes traffic migration to another, causing performance issues on the fallback CA too.

>If you need a stronger guarantee of uptime, reach for the paid options.

We don't. If we had 1 minute or 1 second lifetimes, we would.

8organicbits · 2h ago
Oh, agreed. I was responding to the part about extended outages.
yladiz · 3h ago
I'm not sure if you're arguing in good faith, but assuming you are, it should be pretty self-evident why you wouldn't generate the certificate dynamically on each request: it would take too much time to do so, and so every request would be substantially slower, probably as slow as using Tor, since you would need to ask for the certificate from a central authority. In general it's all about balance: 1 month isn't necessarily better than 1 year, but the reduced timeframe means that there's less complexity in keeping some revocation list and passing it to clients, and it's not so short as to require more resources on both the issuer and the requester of the certificate.

> Perhaps it's time to go with another method entirely.

What method would you suggest here?

zimpenfish · 3h ago
> since you would need to ask for the certificate from a central authority

Could it work that your long-term certificate (90 days, whatever) gives you the ability to sign ephemeral certificates (much like, e.g. LetsEncrypt signs your 90 day certificate)? That saves calling out to a central authority for each request.
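
Mechanically it's easy to express; the catch is whether a verifier would accept the chain, which requires the parent to be allowed to act as a CA, and publicly issued leaf certificates aren't. A hedged Go sketch of just the signing mechanics, with all names and lifetimes made up (the parent here is self-signed purely for illustration):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "os"
        "time"
    )

    func main() {
        // Longer-lived "parent" certificate (90 days).
        parentKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        parentTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "parent.example.com"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(90 * 24 * time.Hour),
            IsCA:                  true, // required for the child to validate; real leaf certs lack this
            BasicConstraintsValid: true,
            KeyUsage:              x509.KeyUsageCertSign,
        }
        parentDER, err := x509.CreateCertificate(rand.Reader, parentTmpl, parentTmpl, &parentKey.PublicKey, parentKey)
        if err != nil {
            log.Fatal(err)
        }
        parent, _ := x509.ParseCertificate(parentDER)

        // Ephemeral child certificate, valid for one hour, signed by the parent's key.
        childKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        childTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "www.example.com"},
            DNSNames:     []string{"www.example.com"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(1 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        childDER, err := x509.CreateCertificate(rand.Reader, childTmpl, parent, &childKey.PublicKey, parentKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: childDER})
    }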

yladiz · 3h ago
Without knowing the technical details too much: Maybe, although I don’t think it would make much difference in my argument, since it would still add too much time to the request. Likely less, but still noticeable.
ozim · 3h ago
There was an attempt at doing this differently with CRLs, but it turns out certificate revocation is not feasible in practice at web scale.

Now they are doing the next plausible solution. It seems like 47 days is something they arrived at from Let's Encrypt's experience, estimating load from current renewals, but that last part I am just imagining.

fanf2 · 2h ago
CRL distribution at web scale is now possible thanks to work by John Schanck at Mozilla https://hacks.mozilla.org/2025/08/crlite-fast-private-and-co...

But CRL sizes are also partly controlled by expiry time, shorter lifetimes produce smaller CRLs.

yjftsjthsd-h · 3h ago
> Why wouldn't you go with a week or a day? isn't that better than a whole month?

There is in fact work on making this an option: https://letsencrypt.org/2025/02/20/first-short-lived-cert-is...

> Why isn't it instead just a minute? or a few seconds? Wouldn't that be better?

> Why not have certificates dynamically generated constantly and have it so every single request is serviced by a new one and then destroyed after the session is over?

Eventually the overhead actually does start to matter

> Maybe the problem isn't that certificates expire too soon, maybe the problem is that humans are lazy. Perhaps it's time to go with another method entirely.

Like what?

supertrope · 1h ago
As the limit approaches zero you re-invent Kerberos.
nisegami · 3h ago
Every year is too infrequent to force automation, leading to admins forgetting to renew their certs. Every minute/day may be too demanding on ACME providers and clutters transparency logs. Dynamic certs just move the problem around because whatever is signing those certs just becomes the SSL cert in practice unless it happens over acme in which case see the point above.
bananapub · 4h ago
dunno why you're being so obnoxious about it?

a month is better than a year because we never ever ever managed to make revocation work, and so the only thing we can do is reduce the length of certs so that stolen or fraudulently obtained certs can be used for less time.

naasking · 3h ago
On the vulnerability ladder since SSL was introduced, how common and how disastrous have stolen or fraudulent certs really been compared to other security problems, and by how much will these changes reduce such disasters?
FuriouslyAdrift · 3h ago
China currently has a large APT campaign using a compromised CA (Billbug).

https://www.darkreading.com/endpoint-security/china-based-bi...

naasking · 2h ago
I agree with the article, this is "potentially very dangerous". Potential is not actual though, and I'm asking about what damage has actually materialized. Is there a cost estimate over the past 20 years vs. say, memory safety vulnerabilities?
capitol_ · 3h ago
Is this some sort of troll comment?

I'm sure that you are perfectly able to do your own research, why are you trying to push that work onto some stranger on the internet?

naasking · 2h ago
Is this a troll article? The article asked basically the same question:

    I also wonder how many organizations have had certificates mis-issued due to BGP hijacking. Yes, this will improve the warm fuzzy security feeling we all want at night, but how much actual risk is this requirement mitigating?
Scope creep with diminishing returns happens everywhere.
stblack · 3h ago
Nobody has yet mentioned how certificates induce and support churn.

In 2025 it's not possible to create an app and release it into the world and have it work for years or decades, as was once the case.

If your "developer certificate" for app stores and ad-hoc distribution is valid for a year, then every year you must pay a "developer program fee" to remain a participant. You need to renew that cert, and you need to recompile a new version within a year. Which means you must maintain a development environment and tools on an ongoing basis for an app that may be feature- and operationally-complete.

All this is completely unnecessary except when it comes to reinforcing hegemony of app-store monopolists.

Loudergood · 17m ago
Yup, I bought and paid for a simple Android game that my kids could play together while we were waiting for things.

New phone, and suddenly I can't install it from Google Play anymore simply because the developer hasn't updated it in a while. Not that it needs to be updated. I've since repurchased it from itch.io and it runs fine, but that's not unusual for lots of good old software.

xg15 · 2h ago
Yeah, we went from "software is a good that can be duplicated with no cost" to "software can be a service" to "software must be a service".
dns_snek · 2h ago
But that has nothing to do with certificates as such and everything to do with app store policies. Certificates don't induce churn - app stores do.
supertrope · 1h ago
A $100 fee makes it costly to burn and churn new accounts. So it's a spam filter.

Forcing developers to stay engaged pushes out feature-complete software, but also pushes out unmaintained software.

An app store is an inherently higher cost distribution method. The operating systems are gratis so development is cross subsidized from app store royalties. They have an incentive to host more paid apps, especially micro-transaction apps that trick kids into spending thousands of dollars off mom's credit card. Of course they've banned or are going to ban alternative channels so you can't choose to self-distribute.

dale_glass · 4h ago
I believe the low maximum lifetimes are becoming a thing because revocation failed.

CRLs become gigantic and impractical at the sizes of the modern internet, and OCSP has privacy issues. And there's the issue of applications never checking for revocation at all.

So the obvious solution was just to make cert lifetimes really short. No gigantic CRLs, no reaching out to the CA for every connection. All the required data is right there in the cert.

And if you thought 47 days was unreasonable, Let's Encrypt is trying 6 days. Which IMO on the whole is a great idea. Yearly, or even monthly intervals are long enough that you know a bunch of people will do it by hand, or have their renewal process break and not be noticed for months. 6 days is short enough that automation is basically a must and has to work reliably.

Andoryuuta · 3h ago
Semi-related: Firefox 142 was released a few days ago and is now using CRLite[0], which apparently only needs ~300kB a day for the revocation lists in their new clubcard data structure[1].

[0]: https://hacks.mozilla.org/2025/08/crlite-fast-private-and-co...

[1]: https://github.com/mozilla/clubcard

FuriouslyAdrift · 3h ago
Because of all my internal systems that use certs to connect (switches, routers, iot, etc) that have manual only interfaces (most are tftp), I have had to go back to just running my own CA infrastructure and only using public CAs for non-corporate or mixed audience sites/services.

It's really annoying because I have to create carve-outs for browsers and other software that refuse to connect to things with unverifiable certs, and adding my CA to some software or devices is either a pain or impossible.

It's created a hodge podge of systems and policies and made our security posture full of holes. Back when we just did a fully delegated digicert wildcard (big expense) on a 3 or 5 year expiration, it was easy to manage. Now, I've got execs in other depts asking about crazy long expirations because of the hassle.

9dev · 3h ago
Why is fronting these systems with a central haproxy with TLS termination or similar not an option?
dvdkon · 3h ago
Because then you have plain HTTP running over your network. The issue here (I presume) is not how to secure access over the Internet, but within an internal network.

Plenty of people leave these devices without encrypted connections, because they are in a "secure network", but you should never rely on such a thing.

9dev · 2h ago
Nothing stops you from using a self-signed certificate with a ridiculous expiration period for HTTPS between the reverse proxy and the device in question.
FuriouslyAdrift · 2h ago
Except browsers and other software that are becoming hard-coded to block access to such devices.

We used to use Firefox solely for internal problem devices with IP and subnet exclusions but even that is becoming difficult.

fanf2 · 2h ago
Use the self-signed cert between the proxy and the problem device; everything else talks to the proxy.
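
Something like this would cover the internal hop (hostnames and file paths here are just placeholders), with the proxy trusting that one cert while everything user-facing talks to the proxy:

    # 10-year self-signed cert for the proxy-to-device leg only
    openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
      -subj "/CN=switch01.internal" \
      -keyout switch01.key -out switch01.crt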
whatevaa · 3h ago
Fronting a switch management interface with haproxy? Are you sure that is a good idea?
9dev · 3h ago
Yes. If we're talking about handling TLS termination and putting an IP behind a sensible hostname, I don't see what's wrong about using a reverse proxy. Note that this does not imply making it accessible on the internet.
FuriouslyAdrift · 2h ago
Yet more infra that must now be managed and a point of failure. No thank you.
9dev · 46m ago
Well. That, or maintaining bespoke PKI and internal CA, along with manually renewing certificates with ever-shortened expiration periods as demanded by browsers.

Pick your poison.

layer8 · 3h ago
CRLs don’t have to be large, since they only need to list revoked certificates that also haven’t expired yet. Using sub-CAs, you can limit the maximum size any single CRL could possibly have. I’m probably missing something, but for SSL certificates on the public internet I don’t really see the issue. Where is the list of such compromised non-expired certificates that is so gigantic?
compumike · 3h ago
Just thinking out loud here: an ACME DNS-01 challenge requires a specific DNS TXT record to be set on _acme-challenge.<YOUR_DOMAIN> as a way of verifying ownership. Currently this is a periodic check every 45 or 90 or 365 days or whatever, which is what everyone's talking about.

Why not encode that TXT record value into the CA-signed certificate metadata? And then at runtime, when a browser requests the page, the browser can verify the TXT record as well, and cache that result for an hour or whatever you like?

Or another set of TXT records for revocation, TXT _acme-challenge-revoked.<YOUR_DOMAIN> etc?

It's not perfect, DNS is not at all secure / relatively easy to spoof for a single client on your LAN, I know that. But realistically, if someone has control of your DNS, they can just issue themselves a legit certificate anyway.
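
For reference, DNS-01 already works roughly like this today (the domain and token value are placeholders); the proposal above would effectively ask browsers to repeat the lookup at page-load time:

    # during validation the CA looks for a TXT record like:
    #   _acme-challenge.example.com. 300 IN TXT "gfj9Xq...Rg85nM"
    # anyone can check what is currently published:
    dig +short TXT _acme-challenge.example.com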

ameliaquining · 2h ago
I think the problem with this idea is not security (as you point out, the status quo isn't really better), but availability. It's not all that uncommon for poorly designed middleboxes to block TXT records, since they're not needed for day-to-day web browsing and such.

Also, I don't see how that last paragraph follows; is your argument just that client-side DNS poisoning is an attack not worth defending against?

Also, there's maybe not much value in solving this for DNS-01 if you don't also solve it for the other, more commonly used challenge types.

ashleyn · 3h ago
Certbot has this down to a science. I haven't once had to touch it after setting it up. 6 days doesn't seem like an onerous requirement in light of that.
jraph · 4h ago
The decreasing validity time pushes for the process to be automated, and automation reduces the possible human errors.

Many things need to be run and automated when running stuff, I don't understand what makes SSL certificates special in this.

For a hobbyist, setting up certbot or acme.sh is pretty much fire and forget. For more complex settings well… you already have this complexity to manage and therefore the people managing this complexity.

You'll need to pick a client and approve it, sure, but that's once, and that's true for any tool you already use. (edit: and nginx is getting ACME support, so you might already be using this tool)

It's not the first time I encounter them, but I really don't get the complaints. Sure, the setup may take longer. But the day to day operations are then easier.
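
As a rough illustration of the "fire and forget" claim, an acme.sh webroot setup is about two commands (domain and webroot path are placeholders):

    # issue once, writing challenge files into the webroot the server already serves
    acme.sh --issue -d example.com -w /var/www/example.com
    # acme.sh installs a cron entry when you install it, so renewals just happen
    crontab -l | grep acme.sh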

throw0101a · 3h ago
> The decreasing validity time pushes for the process to be automated, and automation reduces the possible human errors.

There are environments and devices where automation is not possible: not everything that needs a cert is a Linux server, or a system where you can run your own code. (I initially got ACME/LE working on a previous job's F5s because it was RH underneath and so could get dehydrated working (only needs bash, cURL, OpenSSL); not all appliances even allow that).

I'm afraid that with the 47-day mandate we'll see the return of self-signed certs, and folks will be trained to "just accept it the first time".

jraph · 3h ago
In these setups, the issue already exists: an appliance would have to renew its SSL certificate when it expires. I believe ssl certificates should already not be used anywhere they can't be renewed.
birdman3131 · 3h ago
One of the arguments to be made is that while " automation reduces the possible human errors." it also reduces the amount of human oversight as well.
9dev · 3h ago
Oversight over… what exactly? TLS certificates don't need human oversight. If you want to see which certificates have been issued for your domains, set up certificate transparency monitoring. But thank goodness we're past paying people for comparing certificate checksums.
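
If you just want an informal look at what has been logged for a domain, crt.sh (one public CT search frontend) exposes a JSON endpoint; a sketch:

    # list certificates logged for a domain (use %.example.com to include subdomains)
    curl -s 'https://crt.sh/?q=example.com&output=json' | head -c 2000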
nikanj · 1h ago
Schrödinger's certificates are so mundane they don't need human oversight, but are so precious they need to be renewed every 47 days
9dev · 45m ago
Your point is..? That applies to a lot of automatically maintained infrastructure, and it works just fine.
auguzanellato · 3h ago
Do you really need more oversight on renewals than a simple success/failure notification?

For new certificate you can keep the existing amount of human oversight in place so nothing changes on that front.

everforward · 3h ago
Yes, because you want to know what certificates you're issuing. You could be automatically issuing and deploying certs on a system where the actual app was decommissioned. It's probably mostly a risk for legacy systems where the app gets killed, but the hardware stays live and potentially unpatched and is now vulnerable to a hacker taking it over.

With manual renewals, the cert either wouldn't get renewed and would become naturally invalid or the notification that the cert expired would prompt someone to finish the cleanup.

ameliaquining · 3h ago
This is what Certificate Transparency is for. If you want to know what publicly trusted certificates are being issued for whatever domains are of interest to you, that's how you find out. It has the important advantage of always working no matter how heterogeneous your stack is; the clients that request certificates do not need to be connected to any particular notification system.
cortesoft · 3h ago
Then you set up a process to monitor the certs that have been issued.
FuriouslyAdrift · 3h ago
No better way to create errors at scale than automation ;-)
bbarnett · 4h ago
I've spent 15+ minutes searching, and the digicert (linked to in the article), and other cert providers all reference a vote on "Multi-Perspective Issuance Corroboration (MPIC)".

Everywhere I've read, one "must validate domain control using multiple independent network perspectives". EG, multiple points on the internet, for DNS validation.

Yet there is not one place I can find a very specific "this is what this means". What is a "network perspective"? Searching shows it means "geographically independent regions". What's a region? How big? How far apart from your existing infra qualifies? How is it calculated?

Anyone know? Because apparently none of the bodies know, or wish to tell.

jaas · 3h ago
Section 3.2.2.9 of this document:

https://cabforum.org/working-groups/server/baseline-requirem...

You can also just search the document for the word "Perspective" to find most references to it.

ameliaquining · 3h ago
For convenience, here are the quotes that most directly answer the above question:

"Effective December 15, 2026, the CA MUST implement Multi-Perspective Issuance Corroboration using at least five (5) remote Network Perspectives. The CA MUST ensure that [...] the remote Network Perspectives that corroborate the Primary Network Perspective fall within the service regions of at least two (2) distinct Regional Internet Registries."

"Network Perspectives are considered distinct when the straight-line distance between them is at least 500 km."

elp · 3h ago
Unless I'm completely misunderstanding things Letsencrypt has been doing this since 2020 https://letsencrypt.org/2020/02/19/multi-perspective-validat...

I.e. they check from multiple network locations in case an attacker has messed with network routing in some way. This is reasonable and imposes no extra load on the domain needing the certificate; all the extra work falls on the CA. And if Let's Encrypt can get this right, there is no major reason why "Joe's garage certs" can't do the same thing.

This is outrage porn.

Avamander · 3h ago
Same here: the exact IP addresses or ASNs of the existing validation origins are not public, and neither will any future ones be. It makes it a bit harder to coordinate an attack against this infrastructure.
greyface- · 3h ago
It's trivial for an attacker to learn the validation origins by triggering validations of their own servers while watching the logs. Secrecy confers no advantage here.
nikanj · 3h ago
It means the barrier of entry to the SSL certificate market gets higher, favouring established players
wongarsu · 3h ago
Renting five servers, each at least 500 km apart and spread across at least two RIR regions, is hardly a difficult or costly requirement
Havoc · 3h ago
> I am responsible for approving SSL certificates for my company. I’ve developed a process

What does that even mean? Is he smelling them to check for freshness?

I get process around first time request perhaps to ensure it’s set up right, but renewals?

> My stakeholders understand their roles and responsibilities

Oh no. All that’s missing here is a committee and steering group and daily stand ups

sigseg1v · 2h ago
The first link in the article I clicked for context led to a cert provider whose business name I recognize. Found the problem.

I inherited a process using the same thing last year and it is the most insane nonsense I can think of. These types of companies have support that is totally useless, and their entire business model is to charge 1000x or more (e.g. compare the signature price to an HSM in GCP) what competitors charge while also providing less functionality, hoping that people will get sucked in and trapped in their ecosystem by purchasing an expensive cert such as an "EV" cert, which I'm still not totally clear on what it does, by the way, but I'm assured it's very important for security on Windows. Not security against bad guys, though... it appears to be security against no-name antivirus vendors deleting your files if they detect you didn't pay this "EV" cert ransom. They don't need to actually detect threats based on code or behavior; they just detect whether you have enough money.

mholt · 3h ago
First line:

> I am responsible for approving SSL certificates for my company.

And that is exactly what the requirements are intending to prevent. Automation is the way.

The system is working!

cr3ative · 3h ago
Right. This unfortunately reads like a human process has been set up where automation should have been set up, and now that hand is being forced.

The hand-waving away of certbot/ACME at the very end of the article only really goes to show that it hasn't been looked into properly, for whatever reason.

ComputerGuru · 3h ago
I actually don't have a problem with the SSL changes as they specifically pertain to HTTP servers – it's largely a solved problem, with automated solutions compatible with all the major players on most fronts.

But certs in every other context have become nigh impossible except in enterprise settings with your own CA and cert servers. From things like printers and network appliances to entirely non-HTTP applications like VPNs (StrongSwan and OpenVPN both have/support TLS with signed SSL certs, but place very different constraints on how those work in practice, what identities are supported, how or if wildcards work, etc.).

Very little attention has been paid to non-general purpose and non-http contexts as things currently stand.

DougN7 · 3h ago
The last time I looked, if you ran your HTTPS service on anything other than port 443 LetsEncrypt was not for you. Maybe that’s built into ACME?
mdaniel · 2h ago
I can't tell if it's a typo but HTTP-01 would contact your webserver on :80 in order to successfully retrieve a very, very, very specific ACME path and does not care at all what you do with your issued TLS afterward, including what port you run it upon

Also, I know firsthand that the DNS Validator also works perfectly fine, no http check required
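
Concretely, the "very specific path" is the well-known ACME challenge URL from RFC 8555; the token below is made up:

    # during HTTP-01 the CA fetches something like this over plain port 80
    curl http://example.com/.well-known/acme-challenge/evaGxfADs6pSRb2LAv9IZ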

OptionOfT · 2h ago
You can get LetsEncrypt certificates for endpoints that aren't publicly accessible through the DNS-01 challenge.
azeemba · 4h ago
I think a large enough org that needs many different certificates should have an internally-trusted CA. That would then allow the org to decide their own policy for all their internal facing certificates.

Then you only have to follow the stricter rules for only the public facing certs.

linsomniac · 3h ago
We make extensive use of self-signed certificates internally on our infrastructure, and we used to manually manage year-long certs. A few months ago I built "LessEncrypt", which is a dead simple ACME-inspired system for handing out certs without requiring hijacking the HTTP port or doing DNS updates. Been running it on ~200 hosts for a few months now and it's been fantastic to have the certs manage themselves.

https://github.com/linsomniac/lessencrypt

I've toyed with the idea of adding the ability for the server component to request certs from LetsEncrypt via DNS validation, acting as a clearing house so that individual internal hosts don't need a DNS secret to get certs. However, we also put IP addresses and localhost on our internal certs, so we'd have to stop doing that to be able to get them from LetsEncrypt.

jraph · 3h ago
Why or in which cases is opening a dedicated port better than publishing challenges under some /.well-known path using the standard HTTP port?

(You say hijacking the HTTP port, but I don't let the ACME client take over 80/443; I make my reverse proxy point the expected path to a folder the ACME client writes to. I'm not asking for a comparison with a setup where the ACME client takes over the reverse proxy and edits its configuration by itself, which I don't like.)
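
That style of setup is basically what certbot calls webroot mode; a minimal sketch, with the domain and directory as placeholders:

    # the ACME client only writes files; the reverse proxy keeps owning ports 80/443
    sudo certbot certonly --webroot -w /var/www/acme -d example.com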

linsomniac · 23m ago
The case for it is where it's not easy to plop a file in a .well-known path on port 80/443. If you have a reverse proxy that is easy to set up to publish that, that makes it easier. I guess I could have used different wording, I do consider making the .well-known available a subset of hijacking the port, but can see why it would be confusing. ACME setup can still be trickier to set up, but is definitely a good solution if it fits in your environment.
ocdtrekkie · 4h ago
It used to be only a large enough organization needed this, but smaller organizations could slap their PKI wildcard on everything. Between the 47 day lifetime and the removal of client authentication as a permitted key usage of PKI certs, everyone will need a private CA.

Active Directory Certificate Services is a fickle beast but it's about to get a lot more popular again.

m-p-3 · 11m ago
It's cumbersome for a reason, and I believe it will lead to better tooling and automation.
romaniv · 4h ago
The web today is a rotting carcass with various middlemen maggots crawling all over it and gorging themselves on the decay. The only real discussion to be had is what to replace it with and how to design the new protocols to avoid the same issues.
jacquesm · 3h ago
The reason the web is a rotting carcass is not because of the way the web is architected, it is because a lot of people's livelihoods depend on making it as rotten as possible without collapsing it entirely.

From advertising companies, search engines (ok, sometimes both), certificate peddlers and other 'service' (I use the term lightly here) providers there are just too many of these maggots that we don't actually need. We mostly need them to manage the maggots! If they would all fuck off the web would instantly be a better place.

ameliaquining · 2h ago
Who do you propose needs to fuck off in order for the web to not need certificate authorities?
pixl97 · 3h ago
Thats the neat thing, you cant really avoid the same issues. Security is not a destination, it's a process. Everything you find a way to make something more secure someone seems to find a new way to attack it, and so the ecosystem evolves.
bloomca · 3h ago
What do you think is better? The web is indeed questionable, but it is literally the best we have; it is still reasonably simple to deploy a web app.

Desktop app development gets increasingly hostile and OSes introduce more and more TCC modals; you pretty much need a certificate to codesign an app if you sideload (and app stores have a lot of hassle involved). Mobile clients have had it bad for a while (and it was just announced that Android will require a dev certificate for sideloading as well).

edit: also another comment is correct, the reason it is like that is because it has the most eyes on it. In the past it was on desktop apps, which made them worse

quesera · 2h ago
I don't know what a replacement for the web would look like.

But it seems apparent to me that it will have to work over HTTP/QUIC, and TCP port 443.

Which prompts the obvious question ...

mdaniel · 2h ago
As a friendly reminder, SRV records exist and are great at fixing that magic port syndrome (unless you were hinting at the infinite corporate firewall appliances, for which I have no magic fix)
quesera · 1h ago
Right. Egress on anything other than tcp/443 is probably a non-starter for any new protocol.

The question I was alluding to is: if it's HTTP-ish over tcp/443, wouldn't it still be the web anyway?

But thinking about it more, the server could easily select a protocol based on the first chunk of the client request. And the example of RTP suggests that maybe even TCP would be optional.

ExoticPearTree · 3h ago
I think that the author is a bit confused about email validation.

When I was doing this via email, if you wanted a certificate for sub.subdomain.example.com, the list of email addresses was, in order, something like hostmaster@sub.subdomain.example.com and hostmaster@example.com - you clicked the radio option that best suited you and you were good to go. You don't need email addresses for every subdomain.

jeroenhd · 1h ago
I think the reason they couldn't do it is that they want a certificate for *.sub.example.com. Wildcards tend to trip up certificate provisioning in annoying ways.
fidotron · 4h ago
Looking at the changes going on in computing (the need to constantly update certificates for a website, verified identity to develop mobile apps, etc.), it's clear there is a background push for control of everything, such that when things are considered problems they can promptly be cut off from everything all at once.
ameliaquining · 2h ago
Are 389-day certificates really that much less concerning from a censorship perspective than 47-day ones? Also, DNS is already much more censorable than the Web PKI, so I don't see how increasing reliance on the latter makes things worse.
commandlinefan · 3h ago
I would be ok with all of this if it meant anything. My computer has 151 trusted Certificate Authorities installed on it, including heavy hitters in the CA industry such as TUBITAK, Telia and Sectigo. As a user, I have no idea what sort of actual verification went into verifying the certificates that the site I'm visiting is presenting.
ameliaquining · 2h ago
The reason you can trust all those CAs is because Certificate Transparency makes it very likely that misissuances will be caught, and a CA that screws up and fails to credibly ensure that it won't happen again will be distrusted by browsers. The chance that the particular domain you're interested in will be the one that gets a misissued certificate before that happens is really quite low. It's not a perfect system but it works surprisingly well in practice.
ollybee · 3h ago
What is obnoxious is that certificate transparency logs mean that you now have to effectively centrally register any new domain you put online. That means you instantly see a whole load of traffic to your domain from bots, scrapers, beg bounty scanners etc. Any new site has to be designed to handle that baseline of traffic.

I understand the point of CT logs, and they're necessary given that every browser and device is configured to trust CAs that you wouldn't actually trust. But they've had awful side effects for people who want to host low-traffic sites, or fly under the radar for whatever reason.

watusername · 3h ago
To be frank, the whole post reads like "I hate change" with no convincing argument otherwise. The author even acknowledges the very lenient ramp-up from CAB _and_ the myriad of available tooling, yet still throws his hands up.

> I am responsible for approving SSL certificates for my company. [...] I review and approve each cert. What started out as a quarterly or semi-monthly task has become a monthly-to-weekly task depending on when our certs are expiring.

I don't get the security need for manually approving renewals, and the author makes no attempt to justify this either. It may make sense for some manual process to be in place for initial issuances, as certificates are permanently added to a publicly-available ledger. And to take a step back, do you need public certs to begin with? Can you not have an internal CA? Again, the author makes no attempt to justify this, or demonstrate understanding in the post.

> email-based validation may as well not exist when we need to update a certificate for test.lab.corp.example.com because there is no webmaster@test.lab.corp.example.com.

I know that this is an example, but as a developer it would be a pain to have to go through a manual, multi-day process for my `test.lab.corp.example.com` to work. And the rest of the post seems to imply that this is actually the case at OP's org.

> Which resource-starved team will manage the client and the infrastructure it needs? It will need time to undergo code review and/or supplier review if it’s sold by a company. There will be a requirement for secrets management. There will be a need for monitoring and alerting. It’s not as painless as the certificate approval workflow I have now.

There are additional costs and new processes to be made, yes, but even from a non-technical POV this appears to be a good time to lead and take ownership.

> Any platforms that offer or include certificate management bundled with the actual services we pay for will win our business by default. [...] What is obvious to me is that my stakeholders and I are hurrying to offload certificate management to our vendors and platforms and not to our CA.

That's okay. If you hate change and don't want to take ownership, pay someone else to take ownership.

creatonez · 2h ago
If short lived certs are a problem, your methodology is wrong and has been for a long time. Period.
_JamesA_ · 2h ago
Code signing certificates are even worse.
the_mitsuhiko · 3h ago
The one complaint that I think is valid is that automating wildcard certificates at the moment is really tricky. And that really is because most of the DNS providers do not have proper APIs for it.
mdaniel · 2h ago
Most? https://registry.terraform.io/search/providers?q=dns seems to be a pretty healthy list, and https://registry.terraform.io/providers/hashicorp/dns/latest will work for any one that honors https://datatracker.ietf.org/doc/html/rfc2136 (although "honoring standards" is probably the very problem you were citing)
Avamander · 4h ago
There's two sides to this, if it's not a public service, why should it have a certificate from a public CA? If your risk assessment says that you do not need MPIC, then just don't do that, yourself.

The second side is that if it's so tedious to approve and install, use solutions that require neither. Surely you don't need to have some artisanal certificate installation process that involves a human if you already admit that stricter issuance reduces no risk of yours. Thus, simplify your processes.

There are automated solutions to pretty much all platforms both free and paid. Nginx has it, I just checked and Apache has a module for this as well. Could the author write a blog post about what's stopping them from adopting these solutions?

In the end I can think of *extremely* few and niche cases where any changes to a computer system are actually (human) time-consuming due to regulatory reasons that at the same time require public trust.

ameliaquining · 3h ago
"If it's not a public service, why should it have a certificate from a public CA?"

Probably because making sure that clients trust the right set of non-public CAs is currently too much of a pain in the ass. Possibly an underrated investment in the security of the internet would be inventing better solutions to make this process easier, the way Certbot made certificate renewal easier (though it'd be a harder problem as the environment is more heterogeneous). This might reduce the extent of conservative stakeholders crankily demanding that the public CA infrastructure accommodate their non-public-facing embedded systems that can't keep up with the constantly evolving security requirements that are part and parcel of existing on the public internet.

Avamander · 2h ago
> Probably because making sure that clients trust the right set of non-public CAs is currently too much of a pain in the ass. Possibly an underrated investment in the security of the internet would be inventing better solutions to make this process easier.

I don't see a reason why that should be a problem to solve for public CAs and rest of the internet? Complaining about multi-perspective validation or lifetime is silly if the hindrance is someone's own business needs and requirements.

ameliaquining · 2h ago
Because right now, the CA/B Forum believes that they cannot just completely blow off the concerns of orgs that are having problems adapting to the new requirements because they have legacy tech investments that use the Web PKI for purposes it's not a good fit for. This causes them to move more slowly than the less conservative stakeholders would like. If those concerns were lessened, then the CA/B Forum would feel freer to move faster.
tzs · 1h ago
I've only put maybe 2 seconds of thought into this so it is probably stupid, but why don't we have an alternative that does not require third party certificate authorities?

For example why not allow an organization to have its own self-signed certificate authority, and allow it to publish its self-signed root certificate through DNS, and make browsers accept that root for use with that domain?

I see two objections offhand.

Objection #1. It doesn't provide any validation that the certificates were actually made by the legal entity that they claim to be for. It just shows that whoever made the CA had write access to the domain's DNS records. It can't replace EV certificates or OV certificates.

Retort #1. So? Those sites that need EV or OV certificates can keep using the current approach. But a very large number of sites don't need EV or OV certificates. This can be seen in the success of Let's Encrypt, which only issues DV certificates. Even some large sites use DV certificates, such as Amazon.

Objection #2. If someone gets write access to your DNS records they can replace your CA!

Retort #2. So? If someone gets write access to your DNS records they can make Let's Encrypt certificates for your domain.

What have I overlooked?

tptacek · 58m ago
Downgrade protection. It's very tricky to come up with an alternate root of trust for TLS connections that isn't strippable by middleboxes. Stripping isn't even always intentional: a big part of why DANE failed was that middleboxes reject DNSSEC responses, forcing browsers to fall back to X.509. If you have to have an X.509 WebPKI certificate no matter what, then the alternative root of trust just adds attack surface, and while a tiny subset of nerds with ideological objections to X.509 might be fine with that, it flunks the cost/benefit calculations for the browser developers themselves.

If you want to get more specific about using DNS as an alternate root of trust, there are bigger problems. The X.509 WebPKI has mandatory certificate transparency, so misissuance can be detected. Just as importantly, and relatedly, the browser developers can kill a CA that misissues. They've done so multiple times, and have killed one of the largest CAs over misissuance incidents.

Neither capability exists for a DNS-based PKI, which is deeply problematic given that the DNS PKI is --- de jure --- run by state actors.

alanfranz · 1h ago
This is called DANE. I don’t know if I missed an implied /s in your message.

You overlooked the need for DNSSEC, so another PKI with its own quirks, and somehow less reliable than CAs.
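
For context, a DANE deployment publishes the certificate (or a hash of it) in a DNSSEC-signed TLSA record under the service name; you can see what a DANE-aware client would look up with something like:

    # TLSA record a DANE-aware client would check for HTTPS on port 443
    dig +short TLSA _443._tcp.example.com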

jmwilson · 1h ago
Another obnoxious behavior is clients enforcing lifetime requirements for domains they have no business imposing their opinion about: .internal and .home.arpa. These are specifically carved out for private use. If I want to roll my own CA with a 2.5.29.30 name constraint extension for one of these domains and hand out a 10 year wildcard certificate, I should be able to without interference from my web browser.

Additionally, Google and the PSL have inadvertently broken .home.arpa on Chrome by misclassifying it as a public suffix, while leaving .internal alone. A wildcard cert for *.home.arpa will not work on Chrome, but *.internal will, despite these two domains being essentially equivalent in purpose.
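
As a sketch of the kind of private CA being described (the names are made up, and -addext needs OpenSSL 1.1.1 or later, so treat this as illustrative):

    # a private 10-year root CA constrained to names under .internal
    openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
      -subj "/CN=Home Lab Root CA" \
      -addext "basicConstraints=critical,CA:TRUE" \
      -addext "keyUsage=critical,keyCertSign,cRLSign" \
      -addext "nameConstraints=critical,permitted;DNS:.internal" \
      -keyout root.key -out root.crt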

jeroenhd · 1h ago
> I should be able to without interference from my web browser

You should be. From what I can remember, both Firefox and Chrome add exceptions to user installed certificates that disable requirements such as certificate transparency logs and even things like HPKP back when that was a thing.

It's easy to make a mistake and install certificates in the system chain instead (especially on Windows), but if you pick the right certificate store I don't think you should be having any trouble. That said, it's been a while since I last dealt with Chrome, maybe things have gotten worse.

jmwilson · 59m ago
Firefox does do the right thing and seems the most usable browser for private CAs. Chrome and derivatives mostly too, except the problem mentioned about the public suffix list. Mobile clients seem the most broken. I can't get iOS to work well with my private CA packaged into a .mobileconfig, but it could be my error as well.
ozim · 3h ago
It would not be a problem if we had a magic way to make CRLs work at scale without the other, smaller issues.
gwbas1c · 4h ago
With Azure-hosted sites, I find it's significantly easier to have Microsoft perform all certificate management for us. All we do is verify that we own the domain, and then they do all the certificate management for us.

When I saw the 47-day expiration period, it made me wonder if someone is trying to force everyone onto cloud solutions like what Azure provides.

The old geezer in me is disappointed that it's increasingly harder to host a site on a cable modem at home. (But I haven't done that in over two decades.)

yjftsjthsd-h · 3h ago
> When I saw the 47-day expiration period, it made me wonder if someone is trying to force everyone onto cloud solutions like what Azure provides.

> The old geezer in me is disappointed that it's increasingly harder to host a site on a cable modem at home. (But I haven't done that in over two decades.)

It might be harder to host at home, but only for network reasons. It is perfectly straightforward to use Let's Encrypt and your choice of ACME client to do certificates; I really don't think that's a meaningful point of friction, even with the shorter certificate lifetimes.

eichin · 3h ago
Yeah - the best time to do automated renewal was ~5 years ago, the second best time is now - I just get email once a week with the list of cert renewals (which is how I learned, to my surprise, that sometimes the letsencrypt renewals do fail! but I've never seen it happen twice in a row.)

And it's not like the automation is hard (when I first did letsencrypt certs I did a misguidedly-paranoid offline key thing - for my second attempt, the only reason I had to do any work at all, instead of letting the prepackaged automation work, was to support a messy podman setup, and even that ended up mostly being "systemd is more work than crontab")

rco8786 · 3h ago
They are obnoxious and they used to be 10x more obnoxious.
xnorswap · 4h ago
I think the author has missed the point of the 47 day expiry.

It is short enough to force teams to automate the process.

You're not supposed to be human-actioning something every month.

But yes, it'll be a huge headache for teams that stick their head in the sand and think, "We don't need to automate this, it's just 6 months".

As the window decreases to 3 months it'll be even more frustrating, and then will come a breaking point when it finally rests at 47 days.

But the schedule is well advertised. The time to get automation into your certificate renewal is now.

In the real world however, this will be a LOT of teams. I think the organisations defining this have missed just how many legacy and manual processes are out there, and the impact that this has on them.

I don't think this post makes that argument well enough, instead trying to argue the technical aspect of ACME not being good enough.

ACME is irrelevant in the face of organisations not even trying, and wondering why they have a pain every 6 weeks.

ameliaquining · 3h ago
Is there a different implementation timeline that you think would adequately address the legitimate concerns of orgs relying on legacy and manual processes? My model is that beyond a baseline of a couple years (which were already granted), adding more time doesn't help, because these orgs will always procrastinate until the last minute on anything that doesn't seem to management like an obvious immediate priority. I think the CA/B Forum does understand this and that they're significantly inconveniencing a lot of people, but it has to happen sometime, and part of the purpose of the push for automation is to ensure that the inevitable future tightenings of security requirements won't require most orgs to do anything.
xnorswap · 3h ago
Just extending the timeline won't help, as you suggest, if anything it'll make the problem even worse, by further bedding in the helpless.

What typically does work for this kind of thing, is finding a hook to artificially rather than technically necessitate it, while not breaking legacy.

For example, while I hate the monopoly that Google has on search, it was incredibly effective when they down-ranked HTTP sites in favour of HTTPS sites.

( In 2014: See https://developers.google.com/search/blog/2014/08/https-as-r... )

Almost overnight, organisations that never gave a shit suddenly found themselves rushing through whatever tech-debt work was required to get SSL certs and HTTPS in place.

It was only after that drove HTTPS up to a critical mass that Google had the confidence to further nudge through bigger warnings in Chrome (2018).

Perhaps ChatGPT has impacted Google's monopoly too much to try again, but they could easily rank results based on certificate validity length and try the same trick again.

rini17 · 3h ago
When it's automated we're back at square one: after a few years it breaks and nobody will have any idea where the ACME scripts are or how to debug them.
pferde · 3h ago
That could be an argument against automating anything, ever.

The solution is just like with any other automation - document it.

quesera · 2h ago
... which is also the solution for any other infrequent manual process. :)
sam_lowry_ · 3h ago
I already had this with the certbot on Debian that was running perfectly for 5-6 years by that time.
bell-cot · 2h ago
Not mentioned - especially for smaller or short-staffed org's, it may be a non-trivial effort to automate, then secure/document/maintain the automation.

Vs. shoving HTTPS proxy services in front of insecure backends, which is often easy.

OptionOfT · 2h ago
That is my standard approach when I deploy something. My application shouldn't deal with TLS unless it needs to.

Usually fronting a service with Traefik or NGINX fits all the business needs.

I do recall a setup in Kubernetes where nearly all traffic had to be encrypted, even within the cluster. The boundary was the pod. Within a pod you are guaranteed that all containers run on the same node. And since a node is a physical boundary (it's either a physical machine or a vm on a physical machine) you're guaranteed that that traffic never goes over a network cable.

The solution then is to deploy something like LinkerD, which ensures that traffic between pods is encrypted transparently. We could have relaxed the policy so that traffic between pods on the same node didn't need to be encrypted, but that would have introduced more variables into the process, and it wasn't worth it.

FpUser · 2h ago
I used to pay for certs for my company, and it had started to feel like an extortion business that was also stealing my time, so at some point I said fuck you, vultures, and switched to LetsEncrypt.
compumike · 3h ago
It's strange: SSL certificates (and maybe domain name registrations?) are one of the only "ticking time bomb" elements present in every modern web stack, whether a static site or not. By "ticking time bomb" I mean that there's a hard date N weeks/months from now where your site will definitely stop working, unless some external pile of dependencies work smoothly to extend that date.

Software didn't have that sort of "ticking time bomb" element before, I think?

I think I understand why it's necessary: we have a single, globally shared public namespace of domain names, which we accept will turn over their ownership over the long run, just like real estate changes hands. So we need expiration dates to invalidate "stale" records.

We've already switched over everything to Let's Encrypt. But I don't think anyone should be under the delusion that automation / ACME is failproof:

https://github.com/certbot/certbot/issues?q=is%3Aissue%20ren...

https://github.com/cert-manager/cert-manager/issues?q=is%3Ai...

https://github.com/caddyserver/caddy/issues?q=is%3Aissue%20A...

(These are generally not issues with the software per se, but misconfiguration, third-party DNS API weirdness, IPv6, rate limits, or other weird edge cases.)

Anyway, a gentle reminder that Let's Encrypt suggests monitoring your SSL certificates may be "helpful": https://letsencrypt.org/docs/monitoring-options/ (Full disclosure: I wrote the most recent addition to that list, with the "self-hosted scripts".)
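
Even a bare-bones self-hosted check catches the common failure mode of renewals silently breaking; the host below is a placeholder:

    # print the expiry date of the cert currently being served
    echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
      | openssl x509 -noout -enddate
    # or exit non-zero if it expires within 14 days (1209600 seconds)
    echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
      | openssl x509 -noout -checkend 1209600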

dark-star · 4h ago
> I am responsible for approving SSL certificates for my company

What does this even mean? Does he check the certificates for typos, or that they have the correct security algorithm or something?

I'm pretty sure such an "approval" could be replaced by an automatic security scanner or even a small shell script

tialaramex · 3h ago
This is what fascinated me too.

FWIW the idea of inspecting the certificate "for typos" or similar doesn't make sense. What you're getting from the CA wasn't really the certificate but the act of signing it, which they've already done. Except in some very niche situations, your certificate is already publicly available by the time you receive it; what you've got back is in some sense a courtesy copy. So it's too late to "approve" this document or not; the thing worth approving already happened.

Also the issuing CA was required by the rules to have done a whole bunch of automated checks far beyond what a human would reasonably do by hand. They're going to have checked your public keys don't have any of a set of undesirable mathematical properties (especially for RSA keys) for example and don't match various "known bad" keys. Can you do better? With good tooling yeah, by hand, not a chance.

But then beyond this, modern "SSL certificates" are just really boring. They're 10% boilerplate 90% random numbers. It's like tasking a child with keeping a tally of what colour cars they saw. "Another red one? Wow".

ameliaquining · 3h ago
It's possible that this was just a slight imprecision of language, and the thing being inspected is the CSR rather than the actual certificate. (But the point about individual certificates/CSRs being unworthy of human attention is totally right.)
tialaramex · 2h ago
That's true, although inspecting a CSR is also daft, because much of the CSR is actually ignored by the CA: you can "check" it, but if it was "wrong" that makes absolutely no difference to anything.

The CA is going to look at the requested names (to check they were authorized) and they'll also copy the requested public key; this combination is what's certified. But if your antiquated gear spits out a CSR which also gives a (possibly bogus) company name and a (maybe invalid) street address, "checking" those won't matter, because the CA will just throw them away: the certificate they issue you isn't allowed to contain information they didn't check, so that part of your CSR is tossed without being read.

Avamander · 3h ago
It means that they click on the link in the email they get and confirm issuance for their domain that way.
lovehashbrowns · 2h ago
Sounds so similar to something we had set up when I worked for a major retailer a few years ago. In order to get a cert you had to email the security team or some junk like that and THEY would go through the digicert UI. I stopped reading the absolutely giant and incredibly confusing certificate support document and swapped everything I was responsible for to ACM.

Side note, at some point I got an email telling me to stop issuing public certificates and only issue private certs. I had to get on a call with someone and explain PKI. To someone on the security team!

FergusArgyll · 4h ago
sudo certbot --nginx
riffic · 2h ago
47 day certs are going to be glorious. Get with the times, please.
ocdtrekkie · 4h ago
I do not think PKI will survive the 47 day change. I am not sure the CAB will survive that change. It seems extremely apparent the people who made the decision have neither any relevant experience in IT nor any practical understanding of security, and I think they've finally flown too close to the sun.

Automated renewal is... probably about a decade or two from being supported well enough to be an actual answer.

In our case, we'll be spending the next couple years reducing our use of PKI certificates to the bare functional minimum.

gruez · 4h ago
>Automated renewal is... probably about a decade or two from being supported well enough to be an actual answer.

???

All my servers use certbot and it works fine. There's also no shortage of SaaS/PaaS that offer free ssl with their service, and presumably they've got that automated as well.

ocdtrekkie · 4h ago
Out of about three dozen places I need a certificate, I believe one recently added support for ACME. Tell me you aren't in enterprise IT without telling me you aren't in enterprise IT. ;)

It may help you to understand that it is not an assumption any given product even supports HTTPS well in the first place, and a lot of vendors look at you weird when you express that you intend to enable it. One piece of software requires rerunning the installer to change the certificate.

Yeah, there are also some very expensive vendors out there to manage this for big companies with big dollars.

9dev · 3h ago
Your perspective may be just as narrow, albeit from the other end of the spectrum. Huge heaps of things do work just fine with ACME now, or support being fronted by a reverse proxy which does.

Plus, how would you ever get enterprise tool vendors to add support if not for customers pestering them with support requests because manual certificate renewal has gotten too painful?

> I do not think PKI will survive the 47 day change. […] In our case, we'll be spending the next couple years reducing our use of PKI certificates to the bare functional minimum.

Maybe PKI will die… or you will. Progress doesn't treat dinosaurs too well usually.

fanf2 · 1h ago
Put a reverse proxy in front of it. Write a script. Use RPA for the really obnoxious ones. There are lots of workarounds.
ameliaquining · 3h ago
What would it look like for the Web PKI to "not survive that change"? Is the idea that companies stop having websites and tell all their users to switch to Gopher or something, because the burden of certificate management is too much?

> In our case, we'll be spending the next couple years reducing our use of PKI certificates to the bare functional minimum.

Good. A certificate being publicly trusted is a liability, which is why there are all these stringent requirements around it. If your certificates do not in fact need to be trusted by random internet users, then the CA/B wants you to stop relying on the Web PKI, because that reduces the extent to which your maintenance costs have to be balanced against everybody else's security.

As I said in another comment, private CAs aren't that popular right now in the kinds of organizations that have a hard time keeping up with these changes, because configuring clients is too painful. But if you can do it, then by all means, do!

ocdtrekkie · 2h ago
> What would it look like for the CA/B to "not survive that change"?

I suspect when companies who are members actually realize what happened, CA/B members will be told to reverse the 47 day lifetime or be fired and replaced by people who will. This is a group of people incredibly detached from reality, but that reality is going to come crashing through to their employers as 2029 approaches.

> Good.

You may assume that most organizations will implement private CAs in these scenarios. I suspect the use of encryption internally will just fall. And it will be far easier for attackers to move around inside a network, and take over the handful of fancy auto-renewing public-facing servers with PKI anyways.

ameliaquining · 2h ago
Who exactly in the CA/B member companies is going to demand that the 47-day lifetime be reversed, and why are they going to do that?

If an org is tech-forward enough to have bothered setting up HTTPS for internal use cases on their own initiative, just because it was good for security, then they're not going to have major problems adapting to the 47-day lifetime. The orgs that will struggle to deal with this are the ones that did the bare minimum HTTPS setup because some external factor forced them to (with the most obvious candidate being browsers gradually restricting what can be done over unencrypted HTTP). Those external factors presumably haven't gone anywhere, so the orgs will have to set up private CAs even if they'd rather not bother.

ocdtrekkie · 2h ago
I think when Sundar and Satya start hearing about how their customers are losing billions of dollars because of some random people at their companies called "certificate trust program leads" or whatever, there are going to be a lot of questions about how those decisions got made and how to get them un-made.

Most of the other forum members either won't oppose longer lifetimes (every cert vendor would be happy) or will bow to the only two companies that matter.

ameliaquining · 2h ago
Nothing even remotely similar to that happened on previous tightenings. Going from not a peep to enough outrage to overturn a decision this thoroughly debated all at once seems really unlikely. Also, what are the aggrieved enterprises going to do, threaten to move from GCP to AWS if Chrome doesn't do what they want? That's an empty threat and everyone knows it.