I’m the technical lead for the Let’s Encrypt SRE/infra team. So I spend a lot of time thinking about this.
The salt here is deserved! JSON Web Signatures are a gnarly format, and the ACME API is pretty enthusiastic about being RESTful.
It’s not what I’d design. I think a lot of that came via the IETF wanting to use other IETF standards, and a dash of design-by-committee.
A few libraries (for JWS, JSON and HTTP) go a long way to making it more pleasant but those libraries themselves aren’t always that nice, especially in C.
I’m working on an interactive client and accompanying documentation to help here as well, because the RFC language is a bit dense and often refers out to other documents.
cryptonector · 1h ago
> JSON Web Signatures are a gnarly format
They are??
As someone who wallows in ASN.1, Kerberos, and PKI, I don't find JWS so "gnarly". Even if you're open-coding a JSON Web Signature, it will be easier than open-coding S/MIME, CMS, Kerberos, etc. Can you explain what is so gnarly about JWS?
Mind you, there are problems with JWT. Mainly that HTTP user-agents don't know how to fetch the darned things because there is no standard for how to find out how to fetch the darned things, when you should honor a request for them, etc.
dwedge · 1h ago
What is she talking about that you have to pay for certs if you want more than 3? Am I about to get a bill for the past 5 years or did she just misunderstand?
belorn · 1h ago
To quote the article (or rather, the 2023 article, which is the one mentioning the number 3):
"Somehow, a couple of weeks ago, I found this other site which claimed to be better than LE and which used relatively simple HTTP requests without a bunch of funny data types."
"This is when the fine print finally appeared. This service only lets you mint 90 day certificates on the free tier. Also, you can only do three of them. Then you're done. 270 days for one domain or 3 domains for 90 days, and then you're screwed. Isn't that great? "
She doesn't mention what this "other site" is.
jchw · 1h ago
FWIW, it is ZeroSSL. I want there to be more major ACME providers than just LE, but I'm not sure about ZeroSSL, personally. It seems to have the same parent company as IdenTrust (HID Global Corporation). Probably a step up from Honest Achmed but recently I recall people complaining that their EV code signing certificates were not actually trusted by Windows which is... Interesting.
eadmund · 10h ago
> So, yes, instead of saying that "e" equals "65537", you're saying that "e" equals "AQAB". Aren't you glad you did those extra steps?
Oh JSON.
For those unfamiliar with the reason here, it’s that JSON parsers cannot be relied upon to treat numbers properly. Is 4723476276172647362476274672164762476438 a valid JSON number? Yes, of course it is. What will a JSON parser do with it? Silently truncate it to a 64-bit or 63-bit integer or a float, probably, or, if you’re very lucky, emit an error (a good JSON decoder written in a sane language like Common Lisp would of course just return the number, but few of us are so lucky).
So the only way to reliably get large integers into and out of JSON is to encode them as something else. Base64-encoded big-endian bytes is not a terrible choice. Silently doing the wrong thing is the root of many security errors, so it is not wrong to treat every number in the protocol this way. Of course, then one loses the readability of JSON.
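For instance, the "AQAB" in the article is just 65537 as big-endian bytes run through URL-safe base64; a quick sketch in Python:

    import base64

    e = 65537                                           # the usual RSA public exponent, 0x010001
    raw = e.to_bytes((e.bit_length() + 7) // 8, "big")  # b'\x01\x00\x01'
    base64.urlsafe_b64encode(raw).rstrip(b"=")          # b'AQAB'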
JSON is better than XML, but it really isn’t great. Canonical S-expressions would have been far preferable, but for whatever reason the world didn’t go that way.
cortesoft · 2h ago
> Canonical S-expressions would have been far preferable, but for whatever reason the world didn’t go that way.
I feel like not understanding why JSON won out is being intentionally obtuse. JSON can easily be hand written, edited, and read for most data. Canonical S-expressions are not as easy to read and much harder to write by hand; having to prefix every atom with a length makes it very tedious to write by hand. If you have a JSON object you want to hand edit, you can just type... for a Canonical S-expression, you have to count how many characters you are typing/deleting, and then update the prefix.
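To illustrate, the canonical form writes every atom as length:bytes, so even a small fragment like (detail "Invalid underscore") comes out roughly as:

    (6:detail18:Invalid underscore)

and editing that string means recounting the 18.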
You might not think the ability to hand generate, read, and edit is important, but I am pretty sure that is a big reason JSON has won in the end.
Oh, and the Ruby JSON parser handles that large number just fine.
eadmund · 1h ago
> I feel like not understanding why JSON won out is being intentionally obtuse.
I didn’t feel like my comment was the right place to shill for an alternative, but rather to complain about JSON. But since you raise it.
> JSON can easily be hand written, edited, and read for most data.
So can canonical S-expressions!
> Canonical S-expressions are not as easy to read and much harder to write by hand; having to prefix every atom with a length makes it very tedious to write by hand.
Which is why the advanced representation exists. I contend that this:
(urn:ietf:params:acme:error:malformed
 (detail "Some of the identifiers requested were rejected")
 (subproblems ((urn:ietf:params:acme:error:malformed
                (detail "Invalid underscore in DNS name \"_example.org\"")
                (identifier (dns _example.org)))
               (urn:ietf:params:acme:error:rejectedIdentifier
                (detail "This CA will not issue for \"example.net\"")
                (identifier (dns example.net))))))
is far easier to read than this (the first JSON in RFC 8555):
{
  "type": "urn:ietf:params:acme:error:malformed",
  "detail": "Some of the identifiers requested were rejected",
  "subproblems": [
    {
      "type": "urn:ietf:params:acme:error:malformed",
      "detail": "Invalid underscore in DNS name \"_example.org\"",
      "identifier": {
        "type": "dns",
        "value": "_example.org"
      }
    },
    {
      "type": "urn:ietf:params:acme:error:rejectedIdentifier",
      "detail": "This CA will not issue for \"example.net\"",
      "identifier": {
        "type": "dns",
        "value": "example.net"
      }
    }
  ]
}
> for a Canonical S-expression, you have to count how many characters you are typing/deleting, and then update the prefix.
As you can see, no you do not.
eximius · 1h ago
For you, perhaps. For me, the former is denser, but crossing into a "too dense" region. The JSON has indentation which is easy on my poor brain. Also, it's nice to differentiate between lists and objects.
But, I mean, they're basically isomorphic, with like 2 things exchanged ({} and [] instead of (); implicit vs explicit keys/types).
marcosdumay · 4h ago
What I don't understand is why you (and a lot of other people) just expect S-expression parsers to not have the exact same problems.
eadmund · 1h ago
Because canonical S-expressions don’t have numbers, just atoms (i.e., byte sequences) and lists. It is up to the using code to interpret "34" as the string "34" or the number 34 or the number 13,108 or the number 13,363, which is part of the protocol being used. In most instances, the byte sequence is probably a decimal number.
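For instance (a quick Python sketch of those three readings of the same two-byte atom):

    atom = b"34"
    int(atom)                       # 34     -- the bytes read as decimal text
    int.from_bytes(atom, "big")     # 13108  -- 0x33 0x34 as a big-endian integer
    int.from_bytes(atom, "little")  # 13363  -- the same bytes read little-endian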
Now, S-expressions as used for programming languages such as Lisp do have numbers, but again Lisp has bignums. As for parsers of Lisp S-expressions written in other languages: if they want to comply with the standard, they need to support bignums.
its-summertime · 1h ago
"it can do one of 4 things" sounds very much like the pre-existing issue with JSON
01HNNWZ0MV43FF · 3h ago
I think they mean that Common Lisp has bigints by default
ryukafalz · 2h ago
As do Scheme and most other Lisps I'm familiar with, and integers/floats are typically specified to be distinct. I think we'd all be better off if that were true of JSON as well.
I'd be happy to use s-expressions instead :) Though to GP's point, I suppose we might then end up with JS s-expression parsers that still treat ints and floats interchangeably.
I'm half joking, but I'm not sure why S-expressions would be better here. There are LISPs that don't do arbitrary precision math.
mise_en_place · 1h ago
For actual SERDES, JSON becomes very brittle. It's better to use something like protobuf or cap'n'proto for such cases.
kangalioo · 4h ago
But what's wrong with sending the number as a string? `"65537"` instead of `"AQAB"`
comex · 3h ago
The question is how best to send the modulus, which is a much larger integer. For the reasons below, I'd argue that base64 is better. And if you're sending the modulus in base64, you may as well use the same approach for the exponent sent along with it.
For RSA-4096, the modulus is 4096 bits = 512 bytes in binary, which (for my test key) is 684 characters in base64 or 1233 characters in decimal. So the base64 version is much smaller.
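Back-of-the-envelope, those sizes check out (a rough sketch in Python; the exact base64 length depends on padding):

    import math

    bits = 4096
    raw = bits // 8                                   # 512 bytes of big-endian binary
    b64 = math.ceil(raw / 3) * 4                      # 684 base64 characters (683 without '=' padding)
    dec = math.floor((bits - 1) * math.log10(2)) + 1  # 1233 decimal digits for a typical modulus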
Base64 is also more efficient to deal with. An RSA implementation will typically work with the numbers in binary form, so for the base64 encoding you just need to convert the bytes, which is a simple O(n) transformation. Converting the number between binary and decimal, on the other hand, is O(n^2) if done naively, or O(some complicated expression bigger than n log n) if done optimally.
Besides computational complexity, there's also implementation complexity. Base conversion is an algorithm that you normally don't have to implement as part of an RSA implementation. You might argue that it's not hard to find some library to do base conversion for you. Some programming languages even have built-in bigint types. But you typically want to avoid using general-purpose bigint implementations for cryptography. You want to stick to cryptographic libraries, which typically aim to make all operations constant-time to avoid timing side channels. Indeed, the apparent ease-of-use of decimal would arguably be a bad thing since it would encourage implementors to just use a standard bigint type to carry the values around.
You could argue that the same concern applies to base64, but it should be relatively safe to use a naive implementation of base64, since it's going to be a straightforward linear scan over the bytes with less room for timing side channels (though not none).
foobiekr · 2h ago
Cost.
ayende · 3h ago
Too likely that this would not work because of silent conversion to a number somewhere along the way.
iforgotpassword · 3h ago
Then just prefixing it with an underscore or any random letter would've been fine, but of course base64-encoding the binary representation makes you look so much smarter.
JackSlateur · 2h ago
Is this ok ?
Python 3.13.3 (main, May 21 2025, 07:49:52) [GCC 14.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import json
>>> json.loads('47234762761726473624762746721647624764380000000000000000000000000000000000000000000')
47234762761726473624762746721647624764380000000000000000000000000000000000000000000
yes, python falls into the sane language category with arbitrary-precision arithmetic
tempodox · 4h ago
“Worse is better” is still having ravaging success.
TZubiri · 51m ago
It feels like malpractice to use json in encryption
drob518 · 10h ago
Seems like a large integer can always be communicated as a vector of byte values in some specific endian order, which is easier to deal with than Base64 since a JSON parser will at least convert the byte value from text to binary for you.
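For example, the public exponent 65537 (bytes 0x01 0x00 0x01) could just be carried as:

    "e": [1, 0, 1]

(the field name "e" here is just the usual JWK-style label, used for illustration).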
But yea, as a Clojure guy sexprs or EDN would be much better.
matja · 10h ago
Aren't JSON parsers technically not following the standard if they don't reliably store a number that is not representable by an IEEE 754 double-precision float?
It's a shame JSON parsers usually default to performance rather than correctness (i.e., using bignums for numbers).
kens · 3h ago
> Aren't JSON parsers technically not following the standard if they don't reliably store a number that is not representable by an IEEE 754 double-precision float?
That sentence has four negations and I honestly can't figure out what it means.
q3k · 9h ago
Have a read through RFC7159 or 8259 and despair.
> This specification allows implementations to set limits on the range and precision of numbers accepted
JSON is a terrible interoperability standard.
matja · 9h ago
So a JSON parser that cannot store a 2 is technically compliant? :(
reichstein · 2h ago
JSON is a text format. A parser must recognize the text `2` as a valid production of the JSON number grammar.
Converting that text to _any_ kind of numerical value is outside the scope of the specification.
(At least the JSON.org specification, the RFC tries to say more.)
As a textual format, when you use it for data interchange between different platforms, you should ensure that the endpoints agree on the _interpretation_, otherwise they won't see the same data.
Again outside of the scope of the JSON specification.
q3k · 8h ago
Yep. Or one that parses it into a 7 :)
chasd00 · 3h ago
> Or one that parses it into a 7 :)
if it's known and acceptable that LLMs can hallucinate arguments to an API, then I don't see how this isn't perfectly acceptable behavior either.
kevingadd · 3h ago
I once debugged a production issue that boiled down to "A PCI compliance .dll was messing with floating point flags, causing the number 4 to unserialize as 12"
If you want to actually implement an ACME client from first principles, reading the RFC (plus related RFCs for JOSE etc) is probably easier than you think. I did exactly that when I made a client for myself.
I also wrote up a digested description of the issuance flow here: https://www.arnavion.dev/blog/2019-06-01-how-does-acme-v2-wo... It's not a replacement for reading the RFCs, but it presents the information in the sequence that you would follow for issuance, so think of it like an index to the RFC sections.
Nice, thanks! I’ve been wanting to learn it, as dealing with cert expirations every year is a pain. My guess is that we will have 24 hour certs at some point.
jazzyjackson · 2h ago
Looks like a good class; is it only available to enrolled students? Videos seem to be behind a log-in wall.
anishathalye · 1h ago
Looks like the 2023 lectures weren't uploaded to YouTube, but the lectures from earlier iterations of the class, including 2022, are available publicly. For example, see the YouTube links on https://css.csail.mit.edu/6.858/2022/
(6.858 is the old name of the class, it was renamed to 6.5660 recently.)
distantsounds · 4h ago
why read the manual when you can rewrite the implementation in plain english with zero code and publish to hackernews? wayyyy more internet points!
1a527dd5 · 11h ago
I don't understand the tone of aggression against ACME and their plethora of clients.
I know it isn't a skill issue because of who the author is. So I can only imagine it is some sort of personal opinion that they dislike ACME as a concept or the tooling around ACME in general.
Edit: Re-read it. The tone isn't aimed at ACME or the clients. It's the spec itself. ACME idea good, ACME implementation bad.
lucideer · 10h ago
> I don't understand the tone of aggression against ACME and their plethora of clients.
> ACME idea good, ACME implementation bad.
Maybe I'm misreading but it sounds like you're on a similar page to the author.
As they said at the top of the article:
> Many of the existing clients are also scary code, and I was not about to run any of them on my machines. They haven't earned the right to run with privileges for my private keys and/or ability to frob the web server (as root!) with their careless ways.
This might seem harsh, but I think it's a pretty fair perspective to have when running security-sensitive processes.
dwedge · 1h ago
This is the same author that threw everyone into a panic about atop and turned out to not really have found anything.
giancarlostoro · 10h ago
I'm not a container guru by any means (at least not yet?) but would Docker not address these concerns?
fpoling · 10h ago
The issue is that the client needs to access the private key, tell web server where various temporary files are during the certificate generation (unless the client uses DNS mode) and tell the web server about a new certificate to reload.
To implement that, many clients run as root. Even if that root is in a docker container, this is needlessly elevated privileges, especially given the complexity (again, needless) of many clients.
The sad part is that it is trivial to run most of the clients with an account with no privileges that can access very few files and use a unix socket to tell the web server to reload the certificate. But this is not done.
And then ideally at this point the web servers should, if not implement, then at least facilitate ACME protocol implementations, like, for example, redirecting challenge requests from ACME servers to another port with a one-liner in config. But this is not the case.
It's cheap. If the client was done today, it would be based on AI.
TheNewsIsHere · 10h ago
My reading of the article suggested to me that the author took exception to the code that touched the keying material. Docker is immaterial to that problem. I won’t presume to speak for Rachel By The Bay (mother didn’t raise a fool, after all), but I expect Docker would be met with a similar regard.
Which I do understand. Although I use Docker, I mainly use it personally for things I don’t want to spend much time on. I don’t really like it over other alternatives, but it makes standing up a lab service stupidly easy.
rsync · 8h ago
Yes, it does.
I run acme in a non privileged jail whose file system I can access from outside the jail.
So acme sees and accesses nothing and I can pluck results out with Unix primitives from the outside.
Yes, I use dns mode. Yes, my dns server is also a (different) jail.
lucideer · 6h ago
I use docker for the same reasons as the author's reservations - I combine a docker exec with some of my own loose automation around moving & chmod-ing files & directories to obviate the need for the acme client to have unfettered root access to my system.
Whether it's a local binary or a dockerised one, that access still needs to be marshalled either way & it can get complex facilitating that with a docker container. I haven't found it too bad but I'd really rather not need docker for on-demand automations.
I give plenty* of services root access to my system, most of which I haven't written myself & I certainly haven't audited their code line-by-line, but I agree with the author that you do get a sense from experience of the overall hygiene of a project & an ACME client has yet to give me good vibes.
* within reason
dangus · 1h ago
I disagree, the author is overcomplicating and overthinking things.
She doesn't "trust" tooling that basically the entire Internet including major security-conscious organizations are using, essentially letting perfect get in the way of good.
I think if she were a less capable engineer she would just set that shit up using the easiest way possible and forget about it like everyone else, and nothing bad would happen. Download nginx proxy manager, click click click, boom, I have a wildcard cert, who cares?
I mean, this is her https site, which seems to just be a blog? What type of risk is she mitigating here?
Essentially the author is so skilled that she's letting perfect get in the way of good.
I haven't thought about certificates for years because it's not worth my time. I don't really care about the tooling, it's not my problem, and it's never caused a security issue. Put your shit behind a load balancer and you don't even need to run any ACME software on your own server.
diggan · 10h ago
> I don't understand the tone of aggression against ACME and their plethora of clients.
The older posts on the same website provided a bit more context for me to understand today's post better:
Some people don't want to be forced to run a bunch of stuff they don't understand on the server, and I agree with them.
Sadly, security is a cat and mouse game, which means it's always evolving and you're forced to keep up - and that's inherent to the nature of the field, so we can't really blame anyone (unlike, say, being forced to integrate with the latest Google services to be allowed on the Play Store). At least you get to write your own ACME client if you want to. You don't have to use certbot, and there's no TPM-like behaviour locking you out of your own stuff.
tptacek · 4h ago
Non-ACME certs are basically over. The writing has been on the wall for a long time. I understand people being squeamish about it; we fear change. But I think it's a hopeful thing: the Web PKI is evolving. This is what that looks like: you can't evolve and retain everyone's prior workflows, and that has been a pathology across basically all Internet security standards work for decades.
ipdashc · 3h ago
ACME is cool (compared to what came before it), but I'm kind of sad that EV certs never seemed to pan out at all. I feel like they're a neat concept, and had the potential to mitigate a lot of scams or phishing websites in an ideal world. (That said, discriminating between "big companies" and "everyone else who can't afford it" would definitely have some obvious downsides.) Does anyone know why they never took off?
johannes1234321 · 1h ago
> Does anyone know why they never took off?
Browser vendors at some point claimed it confused users and removed the highlight (I think the same browser vendors who try to remove the "confusing" URL bar ...)
Aside from that, EV certificates are slow to issue, and phishers got similar enough EV certs, making the whole thing moot.
spockz · 11h ago
Given that keys probably need to be shared between multiple gateways/ingresses, how common is it to just use some HSM or another mechanism of exchanging the keys with all the instances? The ACME client doesn’t have to run on the servers themselves.
tialaramex · 10h ago
> The ACME client doesn’t have to run on the servers themselves.
This is really important to understand if you care about either actually engineering security at some scale or knowing what's actually going on in order to model it properly in your head.
If you just want to make a web site so you can put up a blog about your new kitten, any of the tools is fine, you don't care, click click click, done.
For somebody like Rachel or many HN readers, knowing enough of the technology to understand that the ACME client needn't run on your web servers is crucial. It also means you know that when some particular client you're evaluating needs to run on the web server that it's a limitation of that client not of the protocol - birds can't all fly, but flying is totally one of the options for birds, we should try an eagle not an emu if we want flying.
throw0101b · 27m ago
> Some people don't want to be forced to run a bunch of stuff they don't understand on the server, and I agree with them.
There are a number of shell-based ACME clients whose only prerequisites are OpenSSL and cURL. You're probably relying on OpenSSL and cURL for a bunch of things already.
If you can read shell code you can step through the logic and understand what they're doing. Some of them (e.g., acme.sh) often run as a service user (e.g., default install from FreeBSD ports) so the code runs unprivileged: just add a sudo (or doas) config to allow it to restart Apache/nginx.
g-b-r · 10h ago
> Some people don't want to be forced to run a bunch of stuff they don't understand on the server
It's not just about not understanding, it's that more complex stuff is inherently more prone to security vulnerabilities, however well you think you reviewed its code.
Avamander · 10h ago
> It's that more complex stuff is inherently more prone to security vulnerabilities
That's overly simplifying it and ignores the part where the simple stuff is not secure to begin with.
In the current context you could take an HTTP client with a formally verified TLS stack; would you really say it's inherently more vulnerable than a barebones HTTP client talking to a server over an unencrypted connection? I'd say there's a lot more exposed in that barebones client.
g-b-r · 8h ago
The alternative of the article was ACME vs other ways of getting TLS certificates, not https vs http.
Of course plain http would be, generally, much more dangerous than a however complex encrypted connection
hannob · 11h ago
> Some people don't want to be forced to run a bunch of stuff they don't understand on the server, and I agree with them.
Honest question:
* Do you understand OS syscalls in detail?
* Do you understand how your BIOS initializes your hardware?
* Do you understand how modern filesystems work?
* Do you understand the finer details of HTTP or TCP?
Because... I don't. But I know enough about them that I'm quite convinced each of them is a lot more difficult to understand than ACME. And all of them and a lot more stuff are required if you want to run a web server.
sussmannbaka · 10h ago
This point is so tired. I don’t understand how a thought forms in my neurons, eventually matures into a decision and how the wires in my head translate this into electrical pulses to my finger muscles to type this post so I guess I can’t have opinions about complexity.
snowwrestler · 4h ago
I get where you’re going with this, but in this particular case it might not be relevant because there’s a decent chance that Rachel By The Bay does actually understand all those things.
frogsRnice · 10h ago
Sure - but people are still free to decide where they draw the line.
Each extra bit of software is an additional attack surface after all
fc417fc802 · 9h ago
An OS is (at least generally) a prerequisite. If minimalism is your goal then you'd want to eliminate tangentially related things that aren't part of the underlying requirements.
If you're a fan of left-pad I won't judge but don't expect me to partake without bitter complaints.
kjs3 · 10h ago
I hear some variation of this line of 'reasoning' about once a week, and it's always followed by some variation of "...and that's why we shouldn't have to do all this security stuff you want us to do".
donnachangstein · 4h ago
OpenBSD has a dead-simple lightweight ACME client (written in C) as part of the base OS. No need to roll your own. I understand it was created because existing alternatives ARE bloatware and against their Unixy philosophy.
Perhaps the author wasn't looking hard enough. It could probably be ported with little effort.
tialaramex · 2h ago
When I last checked, this client was a classic example of the OpenBSD philosophy not understanding why security is the way it is.
This client really wants the easy case where the client lives on the machine which owns the name and is running the web server, and then it uses OpenBSD-specific partitioning so that elements of the client can't easily taint one another if they're defective.
But, the ACME protocol would allow actual air gapping - the protocol doesn't care whether the machine which needs a certificate, the machine running an ACME client, and the machine controlling the name are three separate machines, that's fine, which means if we do not use this OpenBSD all-in-one client we can have a web server which literally doesn't do ACME at all, an ACME client machine which has no permission to serve web pages or anything like that, and name servers which also know nothing about ACME and yet the whole system works.
That's more effort than "I just install OpenBSD" but it's how this was designed to deliver security rather than putting all our trust in OpenBSD to be bug-free.
donnachangstein · 1h ago
I said it was dead-simple and you delivered a treatise describing the most complex use case possible. Then maybe it's not for you.
Most software in the OpenBSD base system lacks features on purpose. Their dev team frequently rejects patches and feature requests without compelling reasons to exist. Fewer features means fewer places for things to go wrong, which means less chance of security bugs.
It exists so their simple webserver (also in the base system) has ACME support working out of the box. No third party software to install, no bullshit to configure, everything just works as part of a super compact OS. Which to this day still fits on a single CD-ROM.
Most of all no stupid Rust compiler needed so it works on i386 (Rust cannot self-host on i386 because it's so bloated it runs out of memory, which is why Rust tools are not included in i386).
If your needs exceed this or you adore complexity then feel free to look elsewhere.
zh3 · 3h ago
Or uacme [0] - little bit of C that's been running perfectly since endless battery failures with the LE python client made us look for something that would last longer.
Yeah, was looking for someone to comment this. I use it. Works great.
liampulles · 10h ago
I appreciate the author calling this stuff out. The increasing complexity of the protocols that the web is built on is not a problem for developers who simply need to find a tool or client to use the protocol, but it is a kind of regulatory capture that ensures only established players will be the ones able to meet the spec required to run the internet.
I know ACME alone is not insurmountably complex, but it is another brick in the wall.
charcircuit · 3h ago
These protocols all have open source implementations. And as AI gets stronger this barrier will get smaller and smaller.
jeroenhd · 11h ago
There's something to be said for implementing stuff like this manually for the experience of having done it yourself, but the author's tone makes it sound like she hates the protocol and all the extra work she needs to do to make the Let's Encrypt setup work.
Kind of makes me wonder what kind of stack her website is running on that something like a lightweight ACME library (https://github.com/jmccl/acme-lw comes to mind, but there's a C++ library for ESP32s that should be even more lightweight) loading in the certificates isn't doing the job.
mschuster91 · 10h ago
> but the author's tone makes it sound like she hates the protocol and all the extra work she needs to do to make the Let's Encrypt setup work.
The problem is, SSL is a fucking hot, ossified mess. Many of the noted core issues, especially the weirdnesses around encoding and bitfields, are due to historical baggage of ASN.1/X.509. It's not fun to deal with it, at all... the math alone is bad enough, but the old abstractions to store all the various things for the math are simply constrained by the technological capabilities of the late '80s.
There would have been a chance to at least partially reduce the mess with the introduction of LetsEncrypt - basically, have the protocol transmit all of the required math values in a decent form and get an x.509 cert back - and HTTP/2, but that wasn't done because it would have required redeveloping a bunch of stuff from scratch whereas one can build an ACME CA with, essentially, a few lines of shell script, OpenSSL and six crates of high proof alcohol to drink away one's frustrations of dealing with OpenSSL, and integrate this with all software and libraries that exist there.
schoen · 27m ago
Yes, we actually considered the "have the protocol transmit all of the required math values in a decent form and get an x.509 cert back" version, but some people who were interested in using Let's Encrypt were apparently very keen on being able to use an existing external CSR. So that became mandatory in order not to have two totally separate code paths for X.509-based requests and non-X.509-based requests.
An argument for this is that it makes it theoretically possible for devices that have no knowledge of anything about PKI since the year 2000, and/or no additional programmability, to use Let's Encrypt certs (obtained on their behalf by an external client application). I have, in fact, subsequently gotten something like that to work as a consultant.
mschuster91 · 20m ago
Yikes. Guessed as much. Thanks for your explanation.
As for oooold devices - doesn't LetsEncrypt demand key lengths and hash algorithms nowadays that simply weren't implemented back then?
jeroenhd · 9h ago
There's no easy way to "just" transmit data in a foolproof manner. You practically need to support CSRs as a CA anyway, so you might as well use the existing ASN.1+X509 system to transmit data.
ASN.1 and X509 aren't all that bad. It's a comprehensively documented binary format that's efficient and used everywhere, even if it's hidden away in binary protocols you don't look at every day.
Unlike what most people seem to think, ACME isn't something invented just for Let's Encrypt. Let's Encrypt was certainly the first high-profile CA to implement the protocol, but various CAs (free and paid) have their own ACME servers and have had them for ages now. It's a generic protocol for certificate authorities to securely do domain validation and certificate provisioning that Let's Encrypt implemented first.
The unnecessarily complex parts of the protocol when writing a from-the-ground-up client are complex because ACME didn't reinvent the wheel, and reused existing standard protocols instead. Unfortunately, that means having to deal with JWS, but on the other hand, it means most people don't need to write their own ACME-JWS-replacement-protocol parsers. All the other parts are complex because the problem ACME is solving is actually quite complex.
The author wrote [another post](https://rachelbythebay.com/w/2023/01/03/ssl/) about the time they fell for the lies of a CA that promised an "easier" solution. That solution is pretty much ACME, but with more manual steps (like registering an account, entering domain names).
I personally think that for this (and for many other protocols, to be honest) XML would've been a better fit as its parsers are more resilient against weird data, but these days talking about XML will make people look at you like you're proposing COBOL. Hell, even exchanging raw, binary ASN.1 messages would probably have gone over pretty well, as you need ASN.1 to generate the CSR and request the certificate anyway. But, people chose "modern" JSON instead, so now we're base64 encoding values that JSON parsers will inevitably fuck up instead.
schoen · 25m ago
> Unlike what most people seem to think, ACME isn't something invented just for Let's Encrypt. Let's Encrypt was certainly the first high-profile CA to implement the protocol, but various CAs (free and paid) have their own ACME servers and have had them for ages now. It's a generic protocol for certificate authorities to securely do domain validation and certificate provisioning that Let's Encrypt implemented first.
This depends on whether you're speaking as a matter of history. ACME was originally invented and implemented by the Let's Encrypt team, but in the hope that it could become an open standard that would be used by other CAs. That hope was eventually borne out.
GoblinSlayer · 5h ago
The described protocol looks like a rewording of X.509 with JSON syntax, but you still have X.509, so as a result you have two X.509s. The replay nonce is used straightforwardly as a serial number, termsOfServiceAgreed can be an extension, and the CSR is automatically signed in the process of generation.
sam_lowry_ · 11h ago
I am running an HTTP-only blog and it's getting harder every year not to switch to HTTPS.
For instance, WhatsApp cannot open HTTP links anymore.
projektfu · 10h ago
You can proxy it, which for a small server might be the best way to avoid heavy traffic, through caching at the proxy.
g-b-r · 10h ago
For god's sake, however complex ACME might be it's better than not supporting TLS
bigstrat2003 · 4h ago
There's no good reason to serve a blog over TLS. You're not handling sensitive data, so unencrypted is just fine.
foobiekr · 2h ago
The reason is to prevent your site from becoming a watering hole where malicious actors use it to inject malware into the browsers of your users.
Further: tapping glass is a thing, and if the only traffic that is encrypted is the "important" or "sensitive" stuff, then it sticks out in the flow, and so attackers know to focus just on that. If all traffic is encrypted, then it's much harder for attackers to figure out what is important and what is not.
So by encrypting your "unimportant" data you add more noise that has to be sifted through.
sam_lowry_ · 10h ago
Why? The days of MITM boxes injecting content into HTTP traffic are basically over, and frankly they never were a thing in my part of the world.
I see no other reason to serve content over HTTPS.
JoshTriplett · 10h ago
> Why? The days of MITM boxes injecting content into HTTP traffic are basically over
The reason you don't see many MITM boxes injecting content into HTTP anymore is because of widespread HTTPS adoption and browsers taking steps to distrust HTTP, making MITM injection a near-useless tactic.
(This rhymes with the observation that some people now perceive Y2K as overhyped fear-mongering that amounted to nothing, without understanding that immense work happened behind the scenes to avert problems.)
They show any site served over HTTP as explicitly not secure in the address bar (making HTTPS the "default" and HTTP the visibly dangerous option), they limit many web APIs to sites served over HTTPS (https://developer.mozilla.org/en-US/docs/Web/Security/Secure..., https://developer.mozilla.org/en-US/docs/Web/Security/Secure...), they block or upgrade mixed-content by default (HTTPS sites cannot request HTTP-only resources anymore), they require HTTPS for HTTP/2 and HTTP/3, they increasingly attempt HTTPS to a site first even if linked/typed as http, they warn about downloads over http, and they're continuing to ratchet up such measures over time.
foobiekr · 2h ago
If browser vendors really cared, they would disable javascript on non-https sites.
fc417fc802 · 9h ago
> they increasingly attempt HTTPS to a site first even if linked/typed as http
And can generally be configured by the user not to downgrade to http without an explicit prompt.
Honestly I disagree with the refusal to support various APIs over http. Making the (configurable last I checked) prompt mandatory per browser session would have sufficed to push all mainstream sites to strictly https.
JoshTriplett · 8h ago
> And can generally be configured by the user not to downgrade to http without an explicit prompt.
Absolutely, and this works quite well on the current web.
> Honestly I disagree with the refusal to support various APIs over http.
There are multiple good reasons to do so. Part of it is pushing people to HTTPS; part of it is the observation that if you allow an API over HTTP, you're allowing that API to any attacker.
fc417fc802 · 3h ago
> if you allow an API over HTTP, you're allowing that API to any attacker.
In the scenario I described you're doing that only after the user has explicitly opted in on a case by case basis, and you're forcing a per-session nag on them in order to coerce mainstream website operators to adopt the secure default.
At that point it's functionally slightly more obtuse than adding an exception for a certificate (because those are persistent). Rejecting the latter on the basis of security is adopting a position that no amount of user discretion is acceptable. At least personally I'm comfortable disagreeing with that.
More generally, I support secure defaults but almost invariably disagree with disallowing users to shoot themselves in the foot. As an example, I expect a stern warning if I attempt to uninstall my kernel but I also expect the software on my device to do exactly what I tell it to 100% of the time regardless of what the developers might have thought was best for me.
JoshTriplett · 2h ago
> More generally, I support secure defaults but almost invariably disagree with disallowing users to shoot themselves in the foot.
I agree with this. But also, there is a strong degree to which users will go track down ways (or follow random instructions) to shoot themselves in the foot if some site they care about says "do this so we can function!". I do think, in cases where there's value in collectively pushing for better defaults, it's sometimes OK for the "I can always make my device do exactly what I tell it to do" escape hatch to be "download the source and change it yourself". Not every escape hatch gets a setting, because not every escape hatch is supported.
castillar76 · 10h ago
They’ve been making it harder and harder to serve things over HTTP-only for a while now. Steps like marking HTTP with big “NOT SECURE” labels and trying to auto-push to HTTPS have been pretty effective. (With the exception of certain contexts, I think this is a generally good trend, FWIW.)
g-b-r · 8h ago
I have to change two settings to be able to see plain http things, and luckily I only need to do that a handful of times a year.
If I'm really curious about your plain http site I'll check it out through archive.org, and I'm definitely not going to keep visiting it frequently.
It's been easy to live with forced https for at least five years (and for at least the last ten with https first, with confirmations for plain http).
0xCMP · 3h ago
If you try to open anything with just HTTP on an iOS device (e.g. "Hey, look at this thing I made and have served on the public internet! <link>") it just won't load.
This was my experience sending a link to someone who primarily uses an iPad and is non-technical. They were not going to find/open their Macbook to see the link.
g-b-r · 8h ago
You see no reason for privacy, ok
DonHopkins · 9h ago
Are you an Anti-VAXer too?
I'll give you my 8600 when you pry it from my cold, dead LAN.
Aachen · 3h ago
They thought it was too complex and therefore insecure so the natural solution was to roll your own implementation and now they feel comfortable running that version of it?!
Edit: to be clear, I'd not be too surprised if their homegrown client survives an audit unscathed, I'm sure they're a great coder, but the odds just don't seem better than the alternative of using an existing client that was already audited by professionals as well as other people
matja · 10h ago
Lucky that 415031 is prime :)
The steps described in the article sound similar to the process from the early 2000s, but I'm not sure why you'd want to make it hard for yourself now.
I use certbot with "--preferred-challenges dns-01" and "--manual-auth-hook" / "--manual-cleanup-hook" to dynamically create DNS records, rather than needing to modify the webserver config (and the security/access risks that comes with). It just needs putting the cert/key in the right place and reloading the webserver/loadbalancer.
orion138 · 10h ago
Not the main point of the article, but the author’s comments on Gandi made me wonder:
What registrar do people recommend in 2025?
teddyh · 1h ago
Like I frequently¹ advise²:
Don’t look to large, well-known registrars. I would suggest that you look for local registrars in your area. The TLD registry for your country/area usually has a list of the authorized registrars, so you can simply search that for entities with a local address.
Disclaimer: I work at such a small registrar, but you are probably not in our target market.
I have built a registrar in the past and have a lot of arcane knowledge about how they work. Just need to figure out a way to monetize!
upofadown · 1h ago
I recently had to bail on Gandi. I had a special requirement, being Canadian, in that I didn't want to use a registrar in the USA. I found a Canadian registrar that seemed to have the technical stuff reasonably worked out (many don't) and had easy to understand pricing:
I use Cloudflare for everything I can and then currently use Namecheap for anything it doesn't support. I haven't tried Porkbun mostly because I'm okay with what I have already.
samch · 10h ago
Since you asked, I use Cloudflare for my registrar. I can’t really say if it’s objectively better or worse than anybody else, but they seemed like a good choice when Google was in the process of shutting down their registrar service.
sloped · 10h ago
Porkbun is my favorite.
floren · 4h ago
I've been on Namecheap for years but I'm ready to move just because they refuse to support dynamic AAAA records. How's Porkbun on that front?
graemep · 10h ago
It seems to be what Rachel decided on.
There must be other good ones? Somewhat prefer something in the UK (but have been using Gandi so it's not essential).
jsheard · 10h ago
I don't know about the UK, but if you want to keep things in Europe then I can vouch for Netim in France.
INWX in Germany also seems well regarded but I haven't used them.
mattl · 10h ago
Gandi prices went way way up. I've been using Porkbun too.
KolmogorovComp · 10h ago
Any feedback on the CF one?
jsheard · 10h ago
CF sells domains at cost so you're not going to beat them on price, but the catch is that domains registered through them are locked to their infrastructure, you're not allowed to change the nameservers. They're fine if you don't need that flexibility and they support the TLDs you want.
anonymousiam · 3h ago
"Skip the first 00 for some inexplicable reason" is something that caught me a few months ago. I was comparing keys in a script and they did not match because of the leading 00.
Does anyone know why they're there?
aaronmdjones · 25m ago
If the leading bit is set, it could be interpreted as a signed negative number. Prepending 00 guarantees that this doesn't happen.
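A small Python illustration of that rule (DER INTEGERs are signed two's complement, so a leading 00 is added whenever the top bit of the first byte is set):

    raw = bytes([0xF0, 0x0D])                           # first byte has its top bit set
    int.from_bytes(raw, "big", signed=True)             # -4083: read as signed, it goes negative
    int.from_bytes(b"\x00" + raw, "big", signed=True)   # 61453: the leading 00 keeps it positive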
tux3 · 11h ago
JOSE/JWK is indeed some galactically overengineered piece of spec, but the rest seems.. fine?
There are private keys and hash functions involved. But base64url and json aren't the worst web crimes to have been inflicted upon us. It's not _that_ bad, is it?
unscaled · 10h ago
Yes, JOSE is certainly overengineered and JWK is arguably somewhat overengineered as well.
But "the rest" of ACME also include X.509 certificates and PKCS#10 Certificate Signing Requests, which are in turn based on ASN.1 (you're fortunate enough you only need DER encoding) and RSA parameters. ASN.1 and X.509 are devilishly complex if you don't let openssl do everything for you and even if you do. The first few paragraphs are all about making the correct CSR and dealing with RSA, and encoding bigints the right way (which is slightly different between DER and JWK to make things more fun).
Besides that I don't know much about the ACME spec, but the post mentions a couple of other things:
So far, we have (at least): RSA keys, SHA256 digests, RSA signing, base64 but not really base64, string concatenation, JSON inside JSON, Location headers used as identities instead of a target with a 301 response, HEAD requests to get a single value buried as a header, making one request (nonce) to make ANY OTHER request, and there's more to come.
This does sound quite complex. I'm just not sure how much simpler ACME could be. Overturning the clusterfuck that is ASN.1, X.509 and the various PKCS#* standards has been a lost cause for decades now. JOSE is something I would rather do without, but if you're writing an IETF RFC, your only other option is CMS[1], which is even worse. You can try to offer a new signature format, but that would be shut down for being "simpler and cleaner than JOSE, but JOSE just has some warts that need to be fixed or avoided"[2].
I think the things you're left with that could have been simplified and accepted as a standard are the APIs themselves, like getting a nonce with a HEAD request and storing identifiers in a Location header. Perhaps you could have removed signatures (and then JOSE) completely and rely on client IDs and secrets since we're already running over TLS, but I'm not familiar enough with the protocol to know what would be the impact. If you really didn't need any PKI for the protocol itself here, then this is a magnificent edifice of overengineering indeed.
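To make that quoted list concrete, here is a rough sketch (not a working client) of the shape of the signed body ACME expects per RFC 8555; the account URL, endpoint and nonce below are placeholders, and the actual RSA signing step is elided:

    import base64, json

    def b64u(data: bytes) -> str:
        # "base64 but not really base64": URL-safe alphabet, '=' padding stripped
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

    protected = b64u(json.dumps({
        "alg": "RS256",
        "kid": "https://acme.example/acct/123",   # placeholder: the Location header from account registration
        "nonce": "<Replay-Nonce header from a prior request>",
        "url": "https://acme.example/new-order",  # placeholder endpoint
    }).encode())
    payload = b64u(json.dumps({
        "identifiers": [{"type": "dns", "value": "example.org"}],
    }).encode())
    signing_input = (protected + "." + payload).encode()   # the string concatenation step
    signature = b64u(b"<RS256 signature of signing_input with the account key>")
    body = json.dumps({"protected": protected, "payload": payload, "signature": signature})  # JSON wrapping base64'd JSON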
Most of it is unused though, only CN, SANs and public key are used.
tialaramex · 1h ago
You really shouldn’t need CN (but it’s convenient for humans); however, there are a bunch of other interesting things in the X.509 certificate. Let’s look at the one for this site:
Issuer: We need to know who issued this cert, then we can check whether we trust them and whether the signature on the certificate is indeed from them, and potentially repeat this process - this cert was issued by Let's Encrypt's E5 intermediate
Validity: We need to know when this cert was or will be valid, a perfectly good certificate for 2019 ain't much good now, this one is valid from early May until early August
Now we get a public key, in this case a nice modern elliptic curve P-256 key
We need to know how the signature works, in this case it's ECDSA with SHA-384
And we need a serial number for the certificate, this unique number helps sidestep some nasty problems and also gives us an easy shorthand if reporting problems, 05:6B:9D:B0:A1:AE:BB:6D:CA:0B:1A:F0:61:FF:B5:68:4F:5A will never be any other cert only this one.
We get a mandatory notice that this particular certificate is NOT a CA certificate, it's just for a web server, and we get the "Extended key use" which says it's for either servers or for clients (Let's Encrypt intends to cease offering "for client" certificates in the next year or so, today they're the default)
Then we get a URL for the CRL where you can find out if this certificate (or others like it) were revoked since issuance, info with a URL for OCSP (also going away soon) and a URL where you can get your own copy of the issuer's certificate if you somehow do not have that.
We get a policy OID, this is effectively a complicated way to say "If you check Let's Encrypt's formal policy documents, this certificate was specifically issued under the policy identified with this OID", these do change but not often.
Finally we get two embedded SCTs, these are proof that two named Certificate Transparency Log services have seen this certificate, or rather, the raw data in the certificate, although they might also have the actual certificate.
So, quite a lot more than you listed.
[A correct decoder also needs to actually verify the signature, I did not list that part, obviously ignoring the signature would be a bad idea for a live system as then anybody can lie about anything]
oneplane · 11h ago
I personally don't see the overengineering in JOSE; as you mention, a JWK (and JWKs) is not much more than the RSA key data we already know and love but formatted for Web and HTTP. It doesn't get more reasonable than that. JWTs, same story, it's just JSON data with a standard signature.
The spec (well, the RFC anyway) is indeed classically RFC-ish, but the same applies to HTTP or TCP/IP, and I haven't seen the same sort of complaints about those. Maybe it's just resistance to change? Most of the specs (JOSE, ACME etc) aren't really complex for the sake of complexity, but solve problems that aren't simple to solve in a simple fashion. I don't think that's bad at all, it's mostly indicative of the complexity of the problem we're solving.
unscaled · 9h ago
I would argue that JOSE is complex for the sake of complexity. It's not nearly as bad as old cryptographic standards (X.509 and the PKCS family of standards) and definitely much better than XMLDSig, but it's still a lot more complex than it needs to be.
Some examples of gratuitous complexity:
1. Supporting too many goddamn algorithms. Keeping RSA and HMAC-SHA256 for legacy-compatible stuff, and Ed25519 and XChaChaPoly1305 for regular use, would have been better. Instead we support both RSA with PKCS#1 v1.5 signatures and RSA-PSS with MGF1, as well as ECDH with every possible curve in theory (in practice only 3 NIST prime curves).
2. Plethora of ways to combine JWE and JWS. You can encrypt-then-sign or sign-then-encrypt. You can even create multiple layers of nesting.
3. Different "typ"s in the header.
4. RSA JWKs can specify the d, p, q, dq, dp and qi values of the RSA private key, even though everything can be derived from "p" and "q" (and the public modulus and exponent "n" and "e").
5. JWE supports almost every combination of key encryption algorithm, content encryption algorithm and compression algorithm. To make things interesting, almost all of the options are insecure to a certain degree, but if you're not an expert you wouldn't know that.
6. Oh, and JWE supports password-based key derivation for encryption.
7. On the other hand, JWS is smarter. It doesn't need this fancy shmancy password-based key derivation thingamajig! Instead, you can just use HMAC-SHA256 with any key length you want. So if you fancy signing your tokens with a cool password like "secret007" and feel like you're a cool guy with sunglasses in a 1990s movie, just go ahead!
This is just some of the things of the top of my head. JOSE is bonkers. It's a monument to misguided overengineering. But the saddest thing about JOSE is that it's still much simpler than the standards which predated it: PKCS#7/CMS, S/MIME and the worst of all - XMLDSig.
oneplane · 8h ago
It's bonkers if you don't need it, just like JSONx (JSON-as-XML) is bonkers if you don't need it. But standards aren't for a single individual need, if they were they wouldn't be standards. And some people DO need these variations.
Take your argument about order of operations or algorithms. Just because you might not need to do it in an alternate order or use a legacy (and broken) algorithm doesn't mean nobody else does. Keep in mind that this standard isn't exactly new, and isn't only used in startups in San Francisco. There are tons of systems that use it that might only get updated a handful of times each year. Or long-lived JWTs that need to be supported for 5 years. Not going to replace hardware that is out on a pole somewhere just because someone thought the RFC was too complicated.
Out of your arguments, none of them require you to do it that way. Example: you don't have to supply d, dq, dp or qi if you don't want to. But if you communicate with some embedded device that will run out of solar power before it can derive them from the RSA primitives, you will definitely help it by just supplying it on the big beefy hardware that doesn't have that problem. It allows you to move energy and compute cost wherever it works best for the use case.
Even simpler: if you use a library where you can specify a RSA Key and a static ID, you don't have to think about any of this; it will do all of it for you and you wouldn't even know about the RFC anyway.
The only reason someone would need to know the details is if you don't use a library or if you are the one writing it.
folmar · 2h ago
`cfssl` shows how easy getting certificates signed could be for the typical use case.
Imagine coming from JWK and having to encode that public key into a CSR or something with that attitude.
oneplane · 8h ago
Imagine writing your own security software when there are proven systems that just take that problem out of your hands so you don't need to complain about it.
patrickmay · 1h ago
ACME aside, I love the description of how the OP iterated to a solution via a combination of implementing simple functions and cussing. That is a beautiful demonstration of what it means to be an old school hacker.
tialaramex · 10h ago
One of the things this gestures at might as well get a brief refresher here:
Subject Alternative Name (SAN) is not an alternative in the sense that it's an alias, SANs exist because the X.509 certificate standard is, as its name might suggest, intended for the X.500 directory system, a system from the 20th century which was never actually deployed. Mozilla (back then the Netscape Corporation) didn't like re-inventing wheels and this standard for certificates already existed so they used it in their new "Secure Sockets" technology but it has no Internet names so at first they just put names in plain text. However, X.500 was intended to be infinitely extensible, so we can just invent an alternative naming scheme, and that's what the SANs are, which is why they're mandatory for certificates in the Web PKI today - these are the Internet's names for things, so they're mandatory when talking about the Internet, they're described in detail in PKIX, the IETF document standardising the use of X.500 for the Internet.
There are several types of name we can express as SANs but in a certificate the two you'll commonly see are dnsName - the same ASCII names you'd see in URLs like "news.ycombinator.com" or "www.google.com" and ipAddress - a 32-bit integer typically spelled as four dotted decimals 10.20.30.40 [yes or an IPv6 128-bit integer will work here, don't worry]
Because the SANs aren't just free text a machine can reliably parse them which would doubtless meet Rachel's approval. The browser can mindlessly compare the bytes in the certificate "news.ycombinator.com" with the bytes in the actual DNS name it looked up "news.ycombinator.com" and those match so this cert is for this site.
With free text in a CN field like a 1990s SSL certificate (or, sadly, many certificates well into the 2010s because it was difficult to get issuers to comply properly with the rules and stop spewing nonsense into CN) it's entirely possible to see a certificate for " 10.200.300.400" which well, what's that for? Is that leading space significant? Is that an IP address? But those numbers don't even fit in one byte each I hope our parser copes!
fanf2 · 3h ago
Tedious and pedantic note:
You can’t mindlessly compare the bytes of the host name: you have to know that it’s the presentation format of the name, not the DNS wire format; you have to deal with ASCII case insensitivity; you have to guess what to do about trailing dots (because that isn’t specified); you have to deal with wildcards (being careful to note that PKIX wildcard matching is different from DNS wildcard matching).
It’s not as easy as it should be!
tialaramex · 2h ago
In practice it's much easier than you seem to have understood
The names PKIX writes into dnsName are exactly the same as the hostnames in DNS. They are defined to always be Fully Qualified, and yet not to have a trailing dot, you don't have to like that but it's specified and it's exactly how the web browsers worked already 25+ years ago.
You're correct that they're not the on-wire label-by-label DNS structure, but they are the canonical human-readable DNS name, specifically the Punycode encoded name, so [the website] https://xn--j1ay.xn--p1ai/ the Russian registry which most browsers will display with Cyrillic, has its names stored in certificates the same way as it is handled in DNS, as Punycode "xn--j1ay.xn--p1ai". In software I've seen the label-by-label encoding stuff tends to live deep inside DNS-specific code, but the DNS name needed for comparing with a certificate does not do this.
You don't need to "deal with" case except in the sense that you ignore it, DNS doesn't handle case, the dnsName in SANs explicitly doesn't carry this, so just ignore the case bits. Your DNS client will do the case bit wiggling entropy hack, but that's not in code the certificate checking will care about.
You do need to care about wildcards, but we eliminated the last very weird certificate wildcards because they were minted only by a single CA (which argued by their reading they were obeying PKIX) and that CA is no longer in business 'cos it turns out some of the stupid things they were doing even a creative lawyerly reading of specifications couldn't justify. So the only use actually enabled today is replacing one DNS label at the front of the name. Nothing else is used, no suffixes, no mid-label stuff, no multi-label wildcards, no labels other than the first.
Edited to better explain the IDN situation hopefully
cryptonector · 1h ago
> You don't need to "deal with" case except in the sense that you ignore it, DNS doesn't handle case
The DNS is case-insensitive, though only for ASCII. So you have to compare names case-insensitively (again, for ASCII). It _is_ possible to have DNS servers return non-lowercase names! E.g., way back when sun.com's DNS servers would return Sun.COM if I remember correctly. So you do have to be careful about this, though if you do a case-sensitive, memcmp()-like comparison, 999 times out of 1,000 everything will work, and you won't fail open when it doesn't.
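Putting the two points above together, the comparison being described amounts to roughly this (illustrative only, not a substitute for your TLS library's hostname verifier; the trailing-dot handling is one of the choices the specs leave open):

```python
# Illustrative PKIX-style matching: ASCII case-insensitive, label by label,
# wildcard only as the entire leftmost label.
def _lower_ascii(name: str) -> str:
    # DNS case-insensitivity applies to ASCII only; strip a trailing dot
    # (a policy choice, since the specs don't pin this down).
    return "".join(c.lower() if c.isascii() else c for c in name.rstrip("."))

def san_matches(san: str, hostname: str) -> bool:
    san_labels = _lower_ascii(san).split(".")
    host_labels = _lower_ascii(hostname).split(".")
    if len(san_labels) != len(host_labels):
        return False
    head_san, *rest_san = san_labels
    head_host, *rest_host = host_labels
    if rest_san != rest_host:
        return False
    # A wildcard may only replace the entire leftmost label.
    return head_san == "*" or head_san == head_host

assert san_matches("*.example.com", "WWW.Example.COM")
assert not san_matches("*.example.com", "a.b.example.com")   # no multi-label match
assert not san_matches("w*.example.com", "www.example.com")  # no partial-label wildcards here
```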
p_ing · 10h ago
Did browsers ever strictly require a SAN? They certainly didn't even as of ~10 years ago. Yes, it is "required", but CN only has worked for quite some time. I find this trips up some IT admins who are still used to only supplying a CN and don't know what a SAN is.
tialaramex · 9h ago
> Did browsers ever strictly require a SAN?
Yes, all the popular browsers require this.
> They certainly didn't even as of ~10 years ago.
That's true, ten years ago it was likely that if a browser required this they would see unacceptably high failure rates because CAs were non-compliant and enforcement wasn't good enough. Issuing certs which would fail PKIX was prohibited, but so is speeding and yet people do that every day. CT improved our ability to inspect what was being issued and monitor fixes.
> Yes, it is "required", but CN only has worked for quite some time.
No trusted CA will issue "CN only" for many years now, if you could obtain such a certificate you'd find it won't work in any popular browser either. You can read the Chromium or Mozilla source and there just isn't any code to look in CN, the browser just parses the SANs.
> I find this trips up some IT admins who are still used to only supplying a CN and don't know what a SAN is.
In most cases this is a sign you're using something crap like openssl's command line to make CSRs, and so you're probably expending a lot of effort filling out values which will be ignored by the CA and yet not offered parameters you did need.
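For what it's worth, a CSR that actually carries SANs is not much code outside the openssl CLI. A sketch with the pyca/cryptography library (the domain names are placeholders):

```python
# Sketch: a CSR whose names live in the SAN extension, which is where the CA
# will actually read them; the CN here is decoration at best.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.org")]))
    .add_extension(
        x509.SubjectAlternativeName([
            x509.DNSName("www.example.org"),
            x509.DNSName("example.org"),
        ]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```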
p_ing · 9h ago
You're forgetting that browsers deal with plenty of internal-only CAs. Just because a public CA won't issue a CN only cert doesn't mean an internal CA won't. That is why I'm curious to know if browsers /strictly/ require SANs, yet. Not something I've tested in a long time since I started supporting public-only websites/cloud infra.
As you noted about OpenSSL, Windows CertSvr will allow you to do CN only, too.
tialaramex · 8h ago
I mean, no, I'm not forgetting that, of course your private CA can issue whatever nonsense you like, to this day - and indeed several popular CAs are designed to do just that as you noted. Certificates which ignore this rule won't work in a browser though, or in some other modern software.
Chromium published an "intent to remove" and then actually removed the CN parsing in 2017, at that point EnableCommonNameFallbackForLocalAnchors was available for people who were still catching up to policy from ~15 years ago. The policy override flag was removed in 2018, after people had long enough to fix their shit.
Mozilla had already made an equivalent change before that, maybe it worked for a few more years in Safari? I don't have a Mac so no idea.
Oh parts of this remind me of having to write an HMAC signature for some API calls. I like to start in Postman, but the provider's supplied Postman collection was fundamentally broken. I tried and tried to write a pre-request script over a day or two, and ended up giving up. I want to get back to it, but it's frustrating because there's no feedback cycle. Every request fails with the same 401 Unauthorized error, so you are on your own for figuring out which piece of the script isn't doing quite the right thing.
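For reference, the general shape of this kind of scheme is small enough to reimplement outside Postman, which can help isolate which piece is off; the canonical string and header names below are invented, since every provider defines its own:

```python
# Generic HMAC request-signing sketch - the string-to-sign and headers are
# made up, not any particular provider's scheme.
import hashlib, hmac, time

def sign_request(method: str, path: str, body: bytes, secret: bytes) -> dict:
    timestamp = str(int(time.time()))
    string_to_sign = "\n".join([method, path, timestamp,
                                hashlib.sha256(body).hexdigest()])
    signature = hmac.new(secret, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return {"X-Timestamp": timestamp, "X-Signature": signature}
```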
amiga386 · 10h ago
Things change over time.
Part of not wanting to let go is the sunk cost fallacy. Part of it is being suspicious of being (more) dependent on someone else (than you are already dependent on a different someone else).
(As an aside, the n-gate guy who ranted against HTTPS in general and thought static content should just be HTTP also thought like that. Unfortunately, as I'm at a sketchy cafe using their wifi, his page currently says I should click here to enter my bank details, and I should download new cursors, and oddly doesn't include any of his own content at all. Bit weird, but of course I can trust he didn't modify his page, and it's just a silly unnecessary imposition on him that I would like him to use HTTPS)
Unfortunately for those rugged individuals, you're in a worldwide community of people who want themselves, and you, to be dependent on someone else. We're still going with "trust the CAs" as our security model. But with certificate transparency and mandatory stapling from multiple verifiers, we're going with "trust but verify the CAs".
Maximum acceptable durations for certificates are coming down, down, down. You have to get new ones sooner, sooner, sooner. This is to limit the harm a rogue CA or a naive mis-issuing CA can do, as CRLs just don't work.
The only way that can happen is with automation, and being required to prove you still own a domain and/or a web-server on that domain, to a CA, on a regular basis. No "deal with this once a year" anymore. That's gone and it's not coming back.
It's good to know the whole protocol, and yes certbot can be overbearing, but Debian's python3-certbot + python3-certbot-apache integrates perfectly with how Debian has set up apache2. It shouldn't be a hardship.
And if you don't like certbot, there are lots of other ACME clients.
And if you don't like Let's Encrypt, there are other entities offering certificates via the ACME protocol (YMMV, do you trust them enough to vouch for you?)
pixl97 · 10h ago
> thought static content should just be HTTP
Yep, I've seen that argument so many times and it should never make sense to anyone that understands MITM.
The only way it could possibly work is if the static content were signed somehow, but then you need another protocol in the browser, and you need a way to exchange keys securely, for example like signed RPMs. It would be less expensive since the signing happens once, but is it worth having yet another implementation?
drob518 · 9h ago
The argument doesn’t even make sense for static content ignoring mitm attacks.
pixl97 · 3h ago
There is no such thing as static content. There is only content. Bits are sent to your browser which it then applies the DOM to.
If you want to verify the bits that were sent from the server to your browser, they must be signed in some way.
drob518 · 2h ago
Right, that’s exactly my point.
bigstrat2003 · 4h ago
> Yep, I've seen that argument so many times and it should never make sense to anyone that understands MITM.
Rather, it's that most people simply don't need to care about MITM. It's not a relevant attack for most content that can be reasonably served over HTTP. The goal isn't to eliminate every security threat possible, it's to eliminate the ones that are actually a problem for your use case.
kbolino · 2h ago
MITM is a very real threat in any remotely public place. Coffee shop, airport, hotel, municipal WAN, library, etc. I honestly wouldn't put that much trust in a lot of residential/commercial broadband setups or hosting/colocation providers either. It does not matter what is intended to be served, because it can be replaced with anything else. Innocuous blog? Transparently replaced with a phishing site. Harmless image? Rewritten to appear the same but with a zero-day exploit injected.
There's no such thing as "not worth the effort to secure" because neither the site itself nor its content matters, only the network path from the site to the user, which is not under the full control of either party. These need not be, and usually aren't, targeted attacks; they'll hit anything that can be intercepted and modified, without a care for what it's meant to be, where it's coming from, or who it's going to.
Viewing it is an A-to-B interaction where A is a good-natured blogger and B is a tech-savvy reader, and that's all there is to it, is archaic and naive to the point of being dangerous. It is really an A-to-Z interaction where even if A is a good-natured blogger and Z is a tech-savvy user, parties B through Y all get to have a crack at changing the content. Plain HTTP is a protocol for a high-trust environment and the Internet has not been such a place for a very long time. It is unfortunate that party A (the site) must bear the brunt of the security burden, but that's the state of things today. There were other ways to solve this problem but they didn't get widespread adoption.
pixl97 · 3h ago
MITM is a risk to everyone. End of story.
The browser content model knows nothing if the data it's receiving is static or not.
ISPs had already shown time and again they'd inject content into http streams for their own profit. BGP attacks routed traffic off to random places. Simply put the modern web should be zero trust at all.
Pretty useless in this case if I control the stream going to you. The main page defining the integrity would have to be encrypted.
Maybe you could have a mixed use case page in the browser where you had your secure context, then a sub context of unencrypted protected objects, that could possibly increase caching. With that said, looks like another fun hole browser makers would be chasing every year or so.
XorNot · 10h ago
For caching purposes in content distribution, an unencrypted signed protocol would've helped a lot. Every Linux packaging format having to bake one in via GPG is a huge pain.
PhilipRoman · 2h ago
I think it would be enough to just have a widely supported hash://sha256... protocol with a hint of what host(s) is known to provide the object (falling back to something DHT based maybe). There is https://www.rfc-editor.org/rfc/rfc6920.html but I haven't seen any support for it.
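Roughly what such names look like under RFC 6920, as a sketch; the optional authority component is where the "hint of what host provides it" could go:

```python
# Sketch of an RFC 6920 "ni" name for a blob of content; the host hint goes
# in the (optional) authority component.
import base64, hashlib

def ni_uri(content: bytes, authority: str = "") -> str:
    digest = hashlib.sha256(content).digest()
    val = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return f"ni://{authority}/sha-256;{val}"

print(ni_uri(b"hello world", authority="example.com"))
```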
abujazar · 3h ago
Lost me at "make an RSA key". RSA is ancient.
dwedge · 1h ago
I really don't understand why this blog gets so much traction here. Ranting against rss scrapes, FUD about some atop vulnerability that turned out to be nothing, and thinking you have to pay for acme certs and caring about the way it's parsed?
renewiltord · 13m ago
Community favourite. Once you hit a critical mass with a community you will always be read by them.
AStonesThrow · 1h ago
Well, I will perhaps endure flak or downvotes for pointing a few things out, but Rachel:
- Is female [TIL the term "wogrammer"]
- Works for Facebook [formerly Rackspace and Google] so an undeniably Big MAMAA
- Has been blogging prolifically for at least 14 years [let's call it 40 years: she admin'd a BBS at age 12]
- Website is custom self-hosted; very old school and accessible; no ads or popup bullshit
- Probably has more CSE/SWE experience+talent in her little pinky finger than 80% of HN commenters
So I'd say that her position and experience command enough respect that we cannot judge her merely by peeking at a few trifling journal entries.
dwedge · 1h ago
And yet that's exactly what HN does
z3t4 · 11h ago
At some stage you need to update your TXT records, and if you register a wildcard domain you have to do it twice for the same request! And you have to propagate these TXT records twice to all your DNS servers, and wait for some third party like google dns to request the TXT record. And it all has to be done within a minute in order to not time out. DNS servers are not made to change records from one second to another and rely heavily on caching, so I'm lucky that I run my own DNS servers, but good luck doing this if you are using something like an anycast DNS service.
Arnavion · 3h ago
You can have multiple TXT records for the same domain identifier, and the ACME server will look through all of them to find the one that it expects. So for an order that requests SANs example.org and *.example.org, where the server asks for two authorizations to be completed for _acme-challenge.example.org, you can create both TXT records at the same time.
>2. Query for TXT *records* for the validation domain name
>3. Verify that the contents of *one of the TXT records* match the digest value
(Emphasis mine.)
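For reference, each authorization gets its own token, so the two TXT record values differ, but they can sit side by side at the same name. A sketch of how each value is derived (RFC 8555 §8.4 plus the RFC 7638 JWK thumbprint), assuming a Python dict holding the account's public JWK:

```python
# Sketch of deriving a DNS-01 TXT record value: both the apex and wildcard
# authorizations get their own token, so you publish two different TXT
# records at _acme-challenge.example.org at the same time.
import base64, hashlib, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def jwk_thumbprint(jwk: dict) -> str:
    # RFC 7638: required members only, lexicographic order, no whitespace.
    required = {k: jwk[k] for k in ("e", "kty", "n")}
    canonical = json.dumps(required, separators=(",", ":"), sort_keys=True)
    return b64url(hashlib.sha256(canonical.encode()).digest())

def dns01_txt_value(token: str, account_jwk: dict) -> str:
    key_authorization = f"{token}.{jwk_thumbprint(account_jwk)}"
    return b64url(hashlib.sha256(key_authorization.encode()).digest())

# Publish one TXT record per challenge token; the ACME server only needs to
# find a matching value among the records it gets back.
```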
castillar76 · 10h ago
Fortunately that’s only needed if you’re using the DNS validation method — necessary if you’re getting wildcards (but…eek, wildcards). For HTTP-01, no DNS changes are needed unless you want to add CAA records to block out other CAs.
elric · 4h ago
Wildcard Certificates are your friend if you don't want all of your hostnames becoming public knowledge.
12_throw_away · 4h ago
Having tried it myself, I can highly recommend a security posture that doesn't depend on the secrecy of any particular URL :)
XorNot · 10h ago
Or just use the HTTP protocol, which works fine.
fpoling · 9h ago
For wildcard certificates DNS is the only option.
jlundberg · 2h ago
acme_tiny.py is a good choice of client for anyone who doesn't want to write a client from scratch — but still wants to review the code.
wolf550e · 9h ago
Implementing an ACME client in python using pyca/cryptography (or in Go) would be fine, but why do it in C++ ?
mr_toad · 3h ago
Not everyone wants to deal with maintaining Python and untold dependencies on their web server. A C++ binary often has no additional dependencies, and even if it does they’ll be dealt with by the OS package manager.
wolf550e · 2h ago
I think uv[1] basically solved this problem for python scripts. Go creates statically linked executables that are easy to deploy.
>I contacted Rachel and she said - and this is my poor paraphrasing from memory - that the IP ban was something she intentionally implemented but I got caught as a false positive
Yeah, I can't reach their website from Denmark for some unknown reason. On top of that the most recent update of their RSS feed server fxxxd up my news reader, so I'm even less inclined to see whatever they do because it looks like they're not very competent technology-wise.
qwertox · 11h ago
This site can’t be reached
rachelbythebay.com took too long to respond.
luckman212 · 11h ago
Site is working fine for me, East coast US.
rcarmo · 11h ago
Same. Europe.
ndsipa_pomu · 11h ago
I was amazed by them having so much distrust of the various clients. Certbot is typically in the repositories for things like Debian/Ubuntu.
If you use a DNS service provider that supports it, you can use the DNS-01 challenge to get a certificate - that means that you can have the acme.sh running on a completely different server which should help if you're twitchy about running a complex script on it. It's also got the advantage of allowing you to get certificates for internal/non-routable addresses.
JoshTriplett · 10h ago
Certbot is definitely one of the strongest arguments against ACME and Let's Encrypt.
Personally, I find that tls-alpn-01 is even nicer than dns-01. You can run a web server (or reverse proxy) that listens to port 443, and nothing else, and have it automatically obtain and renew TLS certificates, with the challenges being sent via TLS ALPN over the same port you're already listening on. Several web servers and reverse proxies have support for it built in, so you just configure your domain name and the email address you want to use for your Let's Encrypt account, and you get working TLS.
Shadowmist · 9h ago
Does this only work if LE can reach port 443 on one of your servers/proxies?
JoshTriplett · 9h ago
Yes. If you want to create certificates for a private server you have to use a different mechanism, such as dns-01.
christina97 · 11h ago
I used to like them, then they somehow sold out to ZeroSSL and switched the default CA away from LE after an update.
Pinned to an old version and looking for a replacement right now.
Bender · 11h ago
That annoyed me as well given the wording on the ZeroSSL site suggested one has to create an account which is not true. I had hit an error using DNS-01 at the time. They have an entirely different page for ACME clients but it is not or was not linked from anywhere on the main page.
If anyone else ran into that it's just a matter of adding
--server letsencrypt
castillar76 · 10h ago
You can also permanently change your default to LE — acme.sh actually has instructions for doing so in their wiki.
I rather liked using ZeroSSL for a long time (perhaps just out of knee-jerk resistance to the “Just drink the Koolaid^W^W^Wuse Let’s Encrypt! C’mon man, everyone’s doing it!” nature of LE usage), but of late ZeroSSL has gotten so unreliable that I’ve rolled my eyes and started swapping things back to LE.
ndsipa_pomu · 9h ago
I only started using it after the default was ZeroSSL, but it's easy to specify LetsEncrypt instead
12_throw_away · 3h ago
Dunno about the protocol, but man, working with certbot and getting it do what I wanted was ... well, a lot more work than I would have guessed. The hooks system was so much trouble that I ended up writing my own.
But yeah, can definitely recommend DNS-01 over HTTP-01, since it doesn't involve implicitly messing with your server settings, and makes it much easier to have a single locked server with all the ACME secrets, and then distribute the certs to the open-to-the-internet web servers.
egorfine · 10h ago
certbot is complexity creep at its finest. I'd love to hear Rachel's take on it.
+1 for acme.sh, it's beautiful.
corford · 11h ago
Agree with the acme.sh recommendation. It's my favourite by far (especially, as you point out, when leveraging with DNS-01 challenges so you can sidestep most of the security risks the article author worries about)
skywhopper · 11h ago
Certbot goes out of its way to be inscrutable about what it’s doing. It munges your web server config (temporarily) to handle http challenges, and for true sysadmins who are used to having to know all the details of what’s going on, that sort of script is a nightmare waiting to happen.
I assume certbot is the client she’s alluding to that misinterprets one of the factors in the protocol as hex vs decimal and somehow things still work, which is incredibly worrisome.
castillar76 · 10h ago
Having my ACME client munge my webserver configs to obtain a cert was one of the supreme annoyances about using them — it felt severely constraining on how I structured my configs, and even though it’s a blip, I hated the double restart required to fetch a cert (restart with new config, restart with new cert).
Then I discovered the web-root approach people mention here and it made a huge difference. Now I have the HTTP snippet in my server set to serve up ACME challenges from a static directory and push everything else to HTTPS, and the ACME client just needs write permission to that directory. I can dynamically include that snippet in all of the sites my server handles and be done.
If I really felt like it, I could even write a wrapper function so the ACME client doesn’t even need restart permissions on the web-server (for me, probably too much to bother with, but for someone like Rachel perhaps worthwhile).
ndsipa_pomu · 8h ago
A wrapper function may be overkill when you can get away with something like a single restricted sudo rule that only lets the client reload the web server.
With the HTTP implementation that's true, but the DNS implementations of certbot's certificate request plugins don't touch your server config. As an added bonus, you can use that to also obtain wildcard certificates for your subdomains so different applications can share the same certificate (so you only need one single ACME client).
claudex · 11h ago
You can configure certbot to write in a directory directly and it won't touch your web server config.
ndsipa_pomu · 9h ago
> It munges your web server config (temporarily) to handle http challenges
I run it in "webroot" mode on NgINX servers so it's just a matter of including the relevant config file in your HTTP sections (likely before redirecting to HTTPS) so that "/.well-known/acme-challenge/" works correctly. Then when you do run certbot, it can put the challenge file into the webroot and NgINX will automatically serve it. This allows certbot to do its thing without needing to do anything with NgINX.
xorcist · 9h ago
acme.sh is 8000 lines, still a magnitude better than certbot for something security-critical, but not great.
tiny-acme.py is 200 lines, easy to audit and incorporate parts into your own infrastructure. It works well for the tiny work it does, but it doesn't support anything more modern.
skywhopper · 11h ago
I identify with this so much because of my own revulsion for the ACME protocol and the available tooling for using it—and SSL tooling in general for that matter—and because this is also representative of my process for figuring out this sort of low priority technical issue that I have to understand before I can implement, in a way that clearly most folks in the industry don’t care about understanding.
heraldgeezer · 4h ago
Devs need to be sysadmins also.
bananapub · 10h ago
tangentially, for anyone looking to make their lives easier, you can run `acme-dns` on a spare 53/udp somewhere, CNAME the _acme-challenge record from your real DNS hosting to that, then have `lego` or whatever do DNS challenges via acme-dns - no need to let inscrutable scripts touch your real DNS config, no need for anything to touch your HTTP config.
elric · 4h ago
I wish DNS providers offered more granular access control. Some offer an API key per zone, others have a single key which grants access to every single zone in your account. I haven't come across any that offer "acme-only" APIs.
It's on my long list of potential side projects, but I don't think I'll ever get around to it.
Arnavion · 2h ago
You can also use an NS record directly instead of CNAME'ing to a different domain.
ThePowerOfFuet · 11h ago
With the greatest respect to Rachel, ain't _nobody_ got time for that.
egorfine · 10h ago
> import JSON (something I use as little as possible)
This makes me wonder what world of development she is in. Does she prefer SOAP?
hansvm · 10h ago
JSON is slow, not particularly comfortable for humans to work with, uses dangerous casts by default, is especially dangerous when it crosses library or language boundaries, has the exponential escaping problem when people try to embed submessages, relies on each client to appropriately validate every field, doesn't have any good solution for binary data, is prone to stack overflow when handling nested structures, etc.
If the author says they dislike JSON, especially given the tone of this article with respect to nonsensical protocols, I highly doubt they approve of SOAP.
egorfine · 10h ago
> JSON is [...]
What would you suggest instead given all these cons?
Y_Y · 6h ago
Fixing all of those at once might be a bit too much to ask, but I have some quick suggestions. I'd say for a more robust JSON you could try Dhall. If you just want to exchange lumps of data between programs I'd use Protobuf. If you want simple and freeform I'd go with good old sexps.
Her webserver outputs logs in protobuf, so I think she likes binary serialization.
codeduck · 10h ago
Given her experience and work history, it's much more likely that she views any text-based protocol as an unnecessary abstraction over simply processing raw TCP.
horsawlarway · 10h ago
Is this a joke? I don't even know where to begin with this comment... It reads like a joke, but I suspect it's not?
TCP is just a bunch of bytes... You can't process a bunch of bytes without understanding what they are, and that requires signaling information at a different level (ex - in the bytes themselves as a defined protocol like SSH, SCP, HTTP, etc - or some other pre-shared information between server and client [the worst of protocols - custom bullshit]).
lesuorac · 4h ago
> or some other pre-shared information between server and client [the worst of protocols - custom bullshit])
Why is this worse than JSON?
"{'protected': {'protected': { 'protected': 'QABE' }}}" is just as custom as 66537 imo. It's easier to reverse engineer than 66537 but that's not less custom.
codeduck · 9h ago
parent mentioned SOAP as an alternative to JSON. I was being glib about the fact that the engineer who wrote this blog post is a highly-regarded sysadmin and SRE who tinkers on things ranging from writing her own build systems to playing with RF equipment.
horsawlarway · 7h ago
Sure. Between the two comments, I think the SOAP joke is a lot better.
I feel like not understanding why JSON won out is being intentionally obtuse. JSON can easily be hand written, edited, and read for most data. Canonical S-expressions are not as easy to read and are much harder to write by hand; having to prefix every atom with a length makes it very tedious to write by hand. If you have a JSON object you want to hand edit, you can just type... for a Canonical S-expression, you have to count how many characters you are typing/deleting, and then update the prefix.
You might not think the ability to hand generate, read, and edit is important, but I am pretty sure that is a big reason JSON has won in the end.
Oh, and the Ruby JSON parser handles that large number just fine.
I didn’t feel like my comment was the right place to shill for an alternative, but rather to complain about JSON. But since you raise it.
> JSON can easily be hand written, edited, and read for most data.
So can canonical S-expressions!
> Canonical S-expressions are not as easy to read and much harder to write by hand; having to prefix every atom with a length makes is very tedious to write by hand.
Which is why the advanced representation exists. I contend that the advanced form is far easier to read than the first JSON example in RFC 8555.
> for a Canonical S-expression, you have to count how many characters you are typing/deleting, and then update the prefix.
As you can see, no you do not.
But, I mean, they're basically isomorphic, with like 2 things exchanged ({} and [] instead of (); implicit vs explicit keys/types).
Now, S-expressions as used for programming languages such as Lisp do have numbers, but again Lisp has bignums. As for parsers of Lisp S-expressions written in other languages: if they want to comply with the standard, they need to support bignums.
I'd be happy to use s-expressions instead :) Though to GP's point, I suppose we might then end up with JS s-expression parsers that still treat ints and floats interchangeably.
json.Number is (almost) my “favorite” arbitrary decimal: https://github.com/ncruces/decimal?tab=readme-ov-file#decima...
I'm half joking, but I'm not sure why S-expressions would be better here. There are LISPs that don't do arbitrary precision math.
For RSA-4096, the modulus is 4096 bits = 512 bytes in binary, which (for my test key) is 684 characters in base64 or 1233 characters in decimal. So the base64 version is much smaller.
Base64 is also more efficient to deal with. An RSA implementation will typically work with the numbers in binary form, so for the base64 encoding you just need to convert the bytes, which is a simple O(n) transformation. Converting the number between binary and decimal, on the other hand, is O(n^2) if done naively, or O(some complicated expression bigger than n log n) if done optimally.
Besides computational complexity, there's also implementation complexity. Base conversion is an algorithm that you normally don't have to implement as part of an RSA implementation. You might argue that it's not hard to find some library to do base conversion for you. Some programming languages even have built-in bigint types. But you typically want to avoid using general-purpose bigint implementations for cryptography. You want to stick to cryptographic libraries, which typically aim to make all operations constant-time to avoid timing side channels. Indeed, the apparent ease-of-use of decimal would arguably be a bad thing since it would encourage implementors to just use a standard bigint type to carry the values around.
You could argue that the same concern applies to base64, but it should be relatively safe to use a naive implementation of base64, since it's going to be a straightforward linear scan over the bytes with less room for timing side channels (though not none).
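A quick way to see the size difference in practice (a sketch assuming pyca/cryptography; exact lengths vary slightly with the key):

```python
# Quick size comparison for an RSA-4096 modulus: raw bytes vs base64 vs decimal.
import base64
from cryptography.hazmat.primitives.asymmetric import rsa

n = rsa.generate_private_key(public_exponent=65537, key_size=4096) \
       .public_key().public_numbers().n
raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
print(len(raw))                             # 512 bytes of big-endian binary
print(len(base64.urlsafe_b64encode(raw)))   # ~684 chars of base64
print(len(str(n)))                          # ~1233 decimal digits
```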
But yea, as a Clojure guy sexprs or EDN would be much better.
It's a shame JSON parsers usually default to performance rather than correctness, by not using bignums for numbers.
That sentence has four negations and I honestly can't figure out what it means.
> This specification allows implementations to set limits on the range and precision of numbers accepted
JSON is a terrible interoperability standard.
Converting that text to _any_ kind of numerical value is outside the scope of the specification. (At least the JSON.org specification, the RFC tries to say more.)
As a textual format, when you use it for data interchange between different platforms, you should ensure that the endpoints agree on the _interpretation_, otherwise they won't see the same data.
Again outside of the scope of the JSON specification.
if it's known and acceptable that LLMs can hallucinate arguments to an API then i don't see how this isn't perfectly acceptable behavior either.
I also wrote up a digested description of the issuance flow here: https://www.arnavion.dev/blog/2019-06-01-how-does-acme-v2-wo... It's not a replacement for reading the RFCs, but it presents the information in the sequence that you would follow for issuance, so think of it like an index to the RFC sections.
(6.858 is the old name of the class, it was renamed to 6.5660 recently.)
I know it isn't a skill issue because of who the author is. So I can only imagine it is some sort of personal opinion that they dislike ACME as a concept or the tooling around ACME in general.
We've been using LE for a while (since 2019 I think) for handful of sites, and the best nonsense client _for us_ was https://github.com/do-know/Crypt-LE/releases.
Then this year we've done another piece of work this time against the Sectigo ACME server and le64 wasn't quite good enough.
So we ended up trying:-
- https://github.com/certbot/certbot on GitHub Actions, it was fine but didn't quite like the locked down environment
- https://github.com/go-acme/lego huge binary, cli was interestingly designed and the maintainer was quite rude when raising an issue
- https://github.com/rmbolger/Posh-ACME our favourite, but we ended up going with certbot on GHA once we fixed the weird issues around permissions
Edit* Re-read it. The tone isn't aimed at the ACME or the clients. It's the spec itself. ACME idea good, ACME implementation bad.
> ACME idea good, ACME implementation bad.
Maybe I'm misreading but it sounds like you're on a similar page to the author.
As they said at the top of the article:
> Many of the existing clients are also scary code, and I was not about to run any of them on my machines. They haven't earned the right to run with privileges for my private keys and/or ability to frob the web server (as root!) with their careless ways.
This might seem harsh but when I think it's a pretty fair perspective to have when running security-sensitive processes.
To implement that, many clients run as root. Even if that root is in a docker container, this is needlessly elevated privileges, especially given the (again, needless) complexity of many clients.
The sad part is that it is trivial to run most of the clients with an account with no privileges that can access very few files and use a unix socket to tell the web server to reload the certificate. But this is not done.
And then ideally at this point the web servers should if not implement then at least facilitate ACME protocol implementations, like, for example, redirect traffic requests from acme servers to another port with one-liner in config. But this is not the case.
Which I do understand. Although I use Docker, I mainly use it personally for things I don’t want to spend much time on. I don’t really like it over other alternatives, but it makes standing up a lab service stupidly easy.
I run acme in a non privileged jail whose file system I can access from outside the jail.
So acme sees and accesses nothing and I can pluck results out with Unix primitives from the outside.
Yes, I use dns mode. Yes, my dns server is also a (different) jail.
Whether it's a local binary or a dockerised one, that access still needs to be marshalled either way & it can get complex facilitating that with a docker container. I haven't found it too bad but I'd really rather not need docker for on-demand automations.
I give plenty* of services root access to my system, most of which I haven't written myself & I certainly haven't audited their code line-by-line, but I agree with the author that you do get a sense from experience of the overall hygiene of a project & an ACME client has yet to give me good vibes.
* within reason
She doesn't "trust" tooling that basically the entire Internet including major security-conscious organizations are using, essentially letting perfect get in the way of good.
I think if she were a less capable engineer she would just set that shit up using the easiest way possible and forget about it like everyone else, and nothing bad would happen. Download nginx proxy manager, click click click, boom I have a wilcard cert, who cares?
I mean, this is her https site, which seems to just be a blog? What type of risk is she mitigating here?
Essentially the author is so skilled that she's letting perfect get in the way of good.
I haven't thought about certificates for years because it's not worth my time. I don't really care about the tooling, it's not my problem, and it's never caused a security issue. Put your shit behind a load balancer and you don't even need to run any ACME software on your own server.
The older posts on the same website provided a bit more context for me to understand today's post better:
- "Why I still have an old-school cert on my https site" - January 3, 2023 - https://rachelbythebay.com/w/2023/01/03/ssl/
- "Another look at the steps for issuing a cert" - January 4, 2023 - https://rachelbythebay.com/w/2023/01/04/cert/
Sadly, security is a cat and mouse game, which means it's always evolving and you're forced to keep up - and it's inherent by the nature of the field, so we can't really blame anyone (unlike, say, being forced to integrate with the latest Google services to be allowed on the Play Store). At least you get to write your own ACME client if you want to. You don't have to use certbot, and there's no TPM-like behaviour locking you out of your own stuff.
Browser vendors at some point claimed it confused users and removed the highlight (I think the same browser vendors who try to remove the "confusing" URL bar ...)
Aside from that EV certificates are slow to issue and phishers got similar enough EV certs making the whole thing moot.
This is really important to understand if you care about either: Actually engineering security at some scale or knowing what's actually going on in order to model it properly in your head.
If you just want to make a web site so you can put up a blog about your new kitten, any of the tools is fine, you don't care, click click click, done.
For somebody like Rachel or many HN readers, knowing enough of the technology to understand that the ACME client needn't run on your web servers is crucial. It also means you know that when some particular client you're evaluating needs to run on the web server that it's a limitation of that client not of the protocol - birds can't all fly, but flying is totally one of the options for birds, we should try an eagle not an emu if we want flying.
There are a number of shell-based ACME clients whose prerequisites are: OpenSSL and cURL. You're probably already relying on OpenSSL and cURL for a bunch of things already.
If you can read shell code you can step through the logic and understand what they're doing. Some of them (e.g., acme.sh) often run as a service user (e.g., default install from FreeBSD ports) so the code runs unprivileged: just add a sudo (or doas) config to allow it to restart Apache/nginx.
It's not just about not understanding, it's that more complex stuff is inherently more prone to security vulnerabilities, however well you think you reviewed its code.
That's overly simplifying it and ignores the part where the simple stuff is not secure to begin with.
In the current context you could take a HTTP client with a formally verified TLS stack, would you really say it's inherently more vulnerable than a barebones HTTP client talking to a server over an unencrypted connection? I'd say there's a lot more exposed in that barebones client.
Of course plain http would be, generally, much more dangerous than a however complex encrypted connection
Honest question:
* Do you understand OS syscalls in detail?
* Do you understand how your BIOS initializes your hardware?
* Do you understand how modern filesystems work?
* Do you understand the finer details of HTTP or TCP?
Because... I don't. But I know enough about them that I'm quite convinced each of them is a lot more difficult to understand than ACME. And all of them and a lot more stuff are required if you want to run a web server.
Each extra bit of software is an additional attack surface after all
If you're a fan of left-pad I won't judge but don't expect me to partake without bitter complaints.
Perhaps the author wasn't looking hard enough. It could probably be ported with little effort.
This client really wants the easy case where the client lives on the machine which owns the name and is running the web server, and then it uses OpenBSD-specific partitioning so that elements of the client can't easily taint one another if they're defective
But, the ACME protocol would allow actual air gapping - the protocol doesn't care whether the machine which needs a certificate, the machine running an ACME client, and the machine controlling the name are three separate machines, that's fine, which means if we do not use this OpenBSD all-in-one client we can have a web server which literally doesn't do ACME at all, an ACME client machine which has no permission to serve web pages or anything like that, and name servers which also know nothing about ACME and yet the whole system works.
That's more effort than "I just install OpenBSD" but it's how this was designed to deliver security rather than putting all our trust in OpenBSD to be bug-free.
Most software in the OpenBSD base system lacks features on purpose. Their dev team frequently rejects patches and feature requests without compelling reasons to exist. Less features means less places for things to go wrong means less chance of security bugs.
It exists so their simple webserver (also in the base system) has ACME support working out of the box. No third party software to install, no bullshit to configure, everything just works as part of a super compact OS. Which to this day still fits on a single CD-ROM.
Most of all no stupid Rust compiler needed so it works on i386 (Rust cannot self-host on i386 because it's so bloated it runs out of memory, which is why Rust tools are not included in i386).
If your needs exceed this or you adore complexity then feel free to look elsewhere.
[0] https://github.com/ndilieto/uacme
I know ACME alone is not insurmountably complex, but it is another brick in the wall.
Kind of makes me wonder what kind of stack her website is running on that something like a lightweight ACME library (https://github.com/jmccl/acme-lw comes to mind, but there's a C++ library for ESP32s that should be even more lightweight) loading in the certificates isn't doing the job.
The problem is, SSL is a fucking hot, ossified mess. Many of the noted core issues, especially the weirdnesses around encoding and bitfields, are due to historical baggage of ASN.1/X.509. It's not fun to deal with it, at all... the math alone is bad enough, but the old abstractions to store all the various things for the math are simply constrained by the technological capabilities of the late '80s.
There would have been a chance to at least partially reduce the mess with the introduction of LetsEncrypt - basically, have the protocol transmit all of the required math values in a decent form and get an x.509 cert back - and HTTP/2, but that wasn't done because it would have required redeveloping a bunch of stuff from scratch whereas one can build an ACME CA with, essentially, a few lines of shell script, OpenSSL and six crates of high proof alcohol to drink away one's frustrations of dealing with OpenSSL, and integrate this with all software and libraries that exist there.
An argument for this is that it makes it theoretically possible for devices that have no knowledge of anything about PKI since the year 2000, and/or no additional programmability, to use Let's Encrypt certs (obtained on their behalf by an external client application). I have, in fact, subsequently gotten something like that to work as a consultant.
As for oooold devices - doesn't LetsEncrypt demand key lengths and hash algorithms nowadays that simply weren't implemented back then?
ASN.1 and X509 aren't all that bad. It's a comprehensively documented binary format that's efficient and used everywhere, even if it's hidden away in binary protocols you don't look at every day.
Unlike what most people seem to think, ACME isn't something invented just for Let's Encrypt. Let's Encrypt was certainly the first high-profile CA to implement the protocol, but various CAs (free and paid) have their own ACME servers and have had them for ages now. It's a generic protocol for certificate authorities to securely do domain validation and certificate provisioning that Let's Encrypt implemented first.
The unnecessarily complex parts of the protocol when writing a from-the-ground-up client are complex because ACME didn't reinvent the wheel, and reused existing standard protocols instead. Unfortunately, that means having to deal with JWS, but on the other hand, it means most people don't need to write their own ACME-JWS-replacement-protocol parsers. All the other parts are complex because the problem ACME is solving is actually quite complex.
The author wrote [another post](https://rachelbythebay.com/w/2023/01/03/ssl/) about the time they fell for the lies of a CA that promised an "easier" solution. That solution is pretty much ACME, but with more manual steps (like registering an account, entering domain names).
I personally think that for this (and for many other protocols, to be honest) XML would've been a better fit as its parsers are more resilient against weird data, but these days talking about XML will make people look at you like you're proposing COBOL. Hell, even exchanging raw, binary ASN.1 messages would probably have gone over pretty well, as you need ASN.1 to generate the CSR and request the certificate anyway. But, people chose "modern" JSON instead, so now we're base64 encoding values that JSON parsers will inevitably fuck up instead.
This depends on whether you're speaking as a matter of history. ACME was originally invented and implemented by the Let's Encrypt team, but in the hope that it could become an open standard that would be used by other CAs. That hope was eventually borne out.
For instance, Whatsapp can not open HTTP links anymore.
TLS isn't for you, it's for your readers.
Except when an adversary MITMs your site and injects an attack to one of your readers:
* https://www.infoworld.com/article/2188091/uk-spy-agency-uses...
Further: tapping glass is a thing, and if the only traffic that is encrypted is the "important" or "sensitive" stuff, then it sticks out in the flow, and so attackers know to focus just on that. If all traffic is encrypted, then it's much harder for attackers to figure out what is important and what is not.
So by encrypting your "unimportant" data you add more noise that has to be sifted through.
I see no other reason to serve content over HTTPS.
The reason you don't see many MITM boxes injecting content into HTTP anymore is because of widespread HTTPS adoption and browsers taking steps to distrust HTTP, making MITM injection a near-useless tactic.
(This rhymes with the observation that some people now perceive Y2K as overhyped fear-mongering that amounted to nothing, without understanding that immense work happened behind the scenes to avert problems.)
And can generally be configured by the user not to downgrade to http without an explicit prompt.
Honestly I disagree with the refusal to support various APIs over http. Making the (configurable last I checked) prompt mandatory per browser session would have sufficed to push all mainstream sites to strictly https.
Absolutely, and this works quite well on the current web.
> Honestly I disagree with the refusal to support various APIs over http.
There are multiple good reasons to do so. Part of it is pushing people to HTTPS; part of it is the observation that if you allow an API over HTTP, you're allowing that API to any attacker.
In the scenario I described you're doing that only after the user has explicitly opted in on a case by case basis, and you're forcing a per-session nag on them in order to coerce mainstream website operators to adopt the secure default.
At that point it's functionally slightly more obtuse than adding an exception for a certificate (because those are persistent). Rejecting the latter on the basis of security is adopting a position that no amount of user discretion is acceptable. At least personally I'm comfortable disagreeing with that.
More generally, I support secure defaults but almost invariably disagree with disallowing users to shoot themselves in the foot. As an example, I expect a stern warning if I attempt to uninstall my kernel but I also expect the software on my device to do exactly what I tell it to 100% of the time regardless of what the developers might have thought was best for me.
I agree with this. But also, there is a strong degree to which users will go track down ways (or follow random instructions) to shoot themselves in the foot if some site they care about says "do this so we can function!". I do think, in cases where there's value in collectively pushing for better defaults, it's sometimes OK for the "I can always make my device do exactly what I tell it to do" escape hatch to be "download the source and change it yourself". Not every escape hatch gets a setting, because not every escape hatch is supported.
If I'm really curious about your plain http site I'll check it out through archive.org, and I'm definitely not going to keep visiting it frequently.
It's been easy to live with forced https for at least five years (and for at least the last ten with https first, with confirmations for plain http).
This was my experience sending a link to someone who primarily uses an iPad and is non-technical. They were not going to find/open their Macbook to see the link.
I'll give you my 8600 when you pry it from my cold, dead LAN.
Edit: to be clear, I'd not be too surprised if their homegrown client survives an audit unscathed, I'm sure they're a great coder, but the odds just don't seem better than to the alternative of using an existing client that was already audited by professionals as well as other people
The steps described in the article sound familiar to the process done in the early 2000's, but I'm not sure why you'd want to make it hard for yourself now.
I use certbot with "--preferred-challenges dns-01" and "--manual-auth-hook" / "--manual-cleanup-hook" to dynamically create DNS records, rather than needing to modify the webserver config (and the security/access risks that comes with). It just needs putting the cert/key in the right place and reloading the webserver/loadbalancer.
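The hook itself can be tiny. A sketch (certbot exports CERTBOT_DOMAIN and CERTBOT_VALIDATION to the hook; add_txt_record below is a placeholder for whatever your DNS provider's API actually offers):

```python
#!/usr/bin/env python3
# Sketch of a certbot --manual-auth-hook script.
import os

def add_txt_record(name: str, value: str, ttl: int = 60) -> None:
    # Placeholder: call your DNS provider's API here, and wait/poll until the
    # record is visible, since the CA will query it almost immediately.
    raise NotImplementedError

domain = os.environ["CERTBOT_DOMAIN"]          # e.g. example.org
validation = os.environ["CERTBOT_VALIDATION"]  # the TXT value to publish

add_txt_record(f"_acme-challenge.{domain}", validation)
# The matching --manual-cleanup-hook deletes the record again.
```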
What registrar do people recommend in 2025?
Don’t look to large, well-known registrars. I would suggest that you look for local registrars in your area. The TLD registry for your country/area usually has a list of the authorized registrars, so you can simply search that for entities with a local address.
Disclaimer: I work at such a small registrar, but you are probably not in our target market.
1. <https://news.ycombinator.com/item?id=32095499>
2. <https://news.ycombinator.com/item?id=32507784>
I have built a registrar in the past and have a lot of arcane knowledge about how they work. Just need to figure out a way to monetize!
https://grape.ca/
Must be other good ones? Somewhat prefer something in the UK (but have been using Gandi so its not essential).
INWX in Germany also seems well regarded but I haven't used them.
Does anyone know why they're there?
There are private keys and hash functions involved. But base64url and json aren't the worst web crimes to have been inflicted upon us. It's not _that_ bad, is it?
But "the rest" of ACME also include X.509 certificates and PKCS#10 Certificate Signing Requests, which are in turn based on ASN.1 (you're fortunate enough you only need DER encoding) and RSA parameters. ASN.1 and X.509 are devilishly complex if you don't let openssl do everything for you and even if you do. The first few paragraphs are all about making the correct CSR and dealing with RSA, and encoding bigints the right way (which is slightly different between DER and JWK to make things more fun).
Besides that I don't know much about the ACME spec, but the post mentions a couple of other things :
So far, we have (at least): RSA keys, SHA256 digests, RSA signing, base64 but not really base64, string concatenation, JSON inside JSON, Location headers used as identities instead of a target with a 301 response, HEAD requests to get a single value buried as a header, making one request (nonce) to make ANY OTHER request, and there's more to come.
This does sound quite complex. I'm just not sure how much simpler ACME could be. Overturning the clusterfuck that is ASN.1, X.509 and the various PKCS#* standards has been a lost cause for decades now. JOSE is something I would rather do without, but if you're writing an IETF RFC, your only other option is CMS[1], which is even worse. You can try to offer a new signature format, but that would be shut down for being "simpler and cleaner than JOSE, but JOSE just has some warts that need to be fixed or avoided"[2].
I think the things you're left with that could have been simplified and accepted as a standard are the APIs themselves, like getting a nonce with a HEAD request and storing identifiers in a Location header. Perhaps you could have removed signatures (and then JOSE) completely and rely on client IDs and secrets since we're already running over TLS, but I'm not familiar enough with the protocol to know what would be the impact. If you really didn't need any PKI for the protocol itself here, then this is a magnificent edifice of overengineering indeed.
[1] https://datatracker.ietf.org/doc/html/rfc5652 [2] https://mailarchive.ietf.org/arch/msg/cfrg/4YQH6Yj3c92VUxqo-...
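To make the "fetch a nonce, then sign everything" dance concrete, here is a rough sketch of a single signed ACME request (newAccount with an RSA account key) per RFC 8555, assuming the requests and pyca/cryptography libraries and using Let's Encrypt's staging directory:

```python
# Rough sketch of one signed ACME request - illustrative, not production code.
import base64, json
import requests
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_uint(n: int) -> str:
    return b64url(n.to_bytes((n.bit_length() + 7) // 8, "big"))

directory = requests.get("https://acme-staging-v02.api.letsencrypt.org/directory").json()
# The HEAD request that exists only to hand back a Replay-Nonce header.
nonce = requests.head(directory["newNonce"]).headers["Replay-Nonce"]

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub = key.public_key().public_numbers()
jwk = {"kty": "RSA", "n": b64url_uint(pub.n), "e": b64url_uint(pub.e)}  # e=65537 -> "AQAB"

protected = b64url(json.dumps({
    "alg": "RS256",
    "jwk": jwk,                     # later requests use "kid" (the account URL) instead
    "nonce": nonce,
    "url": directory["newAccount"],
}).encode())
payload = b64url(json.dumps({"termsOfServiceAgreed": True}).encode())
signature = b64url(key.sign((protected + "." + payload).encode(),
                            padding.PKCS1v15(), hashes.SHA256()))

resp = requests.post(directory["newAccount"],
                     data=json.dumps({"protected": protected,
                                      "payload": payload,
                                      "signature": signature}),
                     headers={"Content-Type": "application/jose+json"})
account_url = resp.headers.get("Location")  # the Location header doubles as the account identity
```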
Most of it is unused though, only CN, SANs and public key are used.
Issuer: We need to know who issued this cert, then we can check whether we trust them and whether the signature on the certificate is indeed from them, and potentially repeat this process - this cert was issued by Let's Encrypt's E5 intermediate
Validity: We need to know when this cert was or will be valid, a perfectly good certificate for 2019 ain't much good now, this one is valid from early May until early August
Now we get a public key, in this case a nice modern elliptic curve P-256 key
We need to know how the signature works, in this case it's ECDSA with SHA-384
And we need a serial number for the certificate, this unique number helps sidestep some nasty problems and also gives us an easy shorthand if reporting problems, 05:6B:9D:B0:A1:AE:BB:6D:CA:0B:1A:F0:61:FF:B5:68:4F:5A will never be any other cert only this one.
We get a mandatory notice that this particular certificate is NOT a CA certificate, it's just for a web server, and we get the "Extended key use" which says it's for either servers or for clients (Let's Encrypt intends to cease offering "for client" certificates in the next year or so, today they're the default)
Then we get a URL for the CRL where you can find out if this certificate (or others like it) has been revoked since issuance, a URL for OCSP (also going away soon), and a URL where you can get your own copy of the issuer's certificate if you somehow do not have that.
We get a policy OID, this is effectively a complicated way to say "If you check Let's Encrypt's formal policy documents, this certificate was specifically issued under the policy identified with this OID", these do change but not often.
Finally we get two embedded SCTs, these are proof that two named Certificate Transparency Log services have seen this certificate, or rather, the raw data in the certificate, although they might also have the actual certificate.
So, quite a lot more than you listed.
[A correct decoder also needs to actually verify the signature, I did not list that part, obviously ignoring the signature would be a bad idea for a live system as then anybody can lie about anything]
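If you'd rather poke at those same fields programmatically than read openssl's text dump, something like this works with the Python "cryptography" package (a sketch; it assumes a reasonably recent release for the *_utc validity accessors, and "leaf.pem" is a placeholder path):

    from cryptography import x509

    # "leaf.pem" is a placeholder; any PEM-encoded certificate will do.
    cert = x509.load_pem_x509_certificate(open("leaf.pem", "rb").read())

    print("issuer:  ", cert.issuer.rfc4514_string())
    print("validity:", cert.not_valid_before_utc, "->", cert.not_valid_after_utc)
    print("pubkey:  ", type(cert.public_key()).__name__)
    print("sig hash:", cert.signature_hash_algorithm.name)
    print("serial:  ", format(cert.serial_number, "x"))

    ext = cert.extensions
    print("is a CA? ", ext.get_extension_for_class(x509.BasicConstraints).value.ca)
    print("EKU OIDs:", [oid.dotted_string
                        for oid in ext.get_extension_for_class(x509.ExtendedKeyUsage).value])
    print("SANs:    ", ext.get_extension_for_class(x509.SubjectAlternativeName)
                          .value.get_values_for_type(x509.DNSName))
    # The CRL distribution point, Authority Information Access (OCSP and issuer
    # cert URLs), certificate policies and embedded SCTs are further entries
    # in cert.extensions, retrievable the same way.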
The spec (well, the RFC anyway) is indeed classically RFC-ish, but the same applies to HTTP or TCP/IP, and I haven't seen the same sort of complaints about those. Maybe it's just resistance to change? Most of the specs (JOSE, ACME etc) aren't really complex for the sake of complexity; they solve problems that simply aren't simple to solve. I don't think that's bad at all, it's mostly indicative of the complexity of the problem we're solving.
Some examples of gratuitous complexity:
1. Supporting too many goddamn algorithms. Keeping RSA and HMAC-SHA256 for legacy-compatible stuff, and Ed25519 plus XChaCha20-Poly1305 for regular use, would have been better. Instead we support both RSA with PKCS#1 v1.5 signatures and RSA-PSS with MGF1, as well as ECDH with every possible curve in theory (in practice only 3 NIST prime curves).
2. Plethora of ways to combine JWE and JWS. You can encrypt-then-sign or sign-then-encrypt. You can even create multiple layers of nesting.
3. Different "typ"s in the header.
4. RSA JWKs can specify the d, p, q, dq, dp and qi values of the RSA private key, even though everything can be derived from "p" and "q" (and the public modulus and exponent "n" and "e") - a sketch of that derivation follows below.
5. JWE supports almost every combination of key encryption algorithm, content encryption algorithm and compression algorithm. To make things interesting, almost all of the options are insecure to a certain degree, but if you're not an expert you wouldn't know that.
6. Oh, and JWE supports password-based key derivation for encryption.
7. On the other hand, JWS is smarter. It doesn't need this fancy shmancy password-based key derivation thingamajig! Instead, you can just use HMAC-SHA256 with any key length you want. So if you fancy signing your tokens with a cool password like "secret007" and feel like you're a cool guy with sunglasses in a 1990s movie, just go ahead!
This is just some of the things off the top of my head. JOSE is bonkers. It's a monument to misguided overengineering. But the saddest thing about JOSE is that it's still much simpler than the standards which predated it: PKCS#7/CMS, S/MIME and the worst of all - XMLDSig.
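On point 4, the redundancy really is total: everything past p and q can be recomputed. A rough sketch of the derivation (toy 16-bit primes purely for illustration; pow(x, -1, m) needs Python 3.8+):

    from math import gcd

    def rsa_private_from_primes(p: int, q: int, e: int = 65537):
        # Everything a JWK's d/dp/dq/qi members carry, recomputed from p and q.
        lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # Carmichael's lambda(n)
        d = pow(e, -1, lam)      # private exponent
        dp = d % (p - 1)         # CRT exponent for p
        dq = d % (q - 1)         # CRT exponent for q
        qi = pow(q, -1, p)       # CRT coefficient, q^-1 mod p
        return {"n": p * q, "e": e, "d": d, "dp": dp, "dq": dq, "qi": qi}

    # Toy primes, far too small for real use; only here to show the arithmetic.
    k = rsa_private_from_primes(32749, 65521)
    m = 42
    assert pow(pow(m, k["e"], k["n"]), k["d"], k["n"]) == m  # round-trips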
Take your argument about order of operations or algorithms. Just because you might not need to do it in an alternate order or use a legacy (and broken) algorithm doesn't mean nobody else does. Keep in mind that this standard isn't exactly new, and isn't only used in startups in San Francisco. There are tons of systems that use it that might only get updated a handful of times each year. Or long-lived JWTs that need to be supported for 5 years. Not going to replace hardware that is out on a pole somewhere just because someone thought the RFC was too complicated.
Of your arguments, none of them require you to do it that way. Example: you don't have to supply d, dq, dp or qi if you don't want to. But if you communicate with some embedded device that would run out of solar power before it could derive them from the RSA primes, you will definitely help it by supplying them from the big beefy hardware that doesn't have that problem. It allows you to move energy and compute cost to wherever it works best for the use case.
Even simpler: if you use a library where you can specify a RSA Key and a static ID, you don't have to think about any of this; it will do all of it for you and you wouldn't even know about the RFC anyway.
The only reason someone would need to know the details is if you don't use a library or if you are the one writing it.
https://github.com/cloudflare/cfssl
Subject Alternative Name (SAN) is not "alternative" in the sense of being an alias. SANs exist because the X.509 certificate standard is, as its name might suggest, intended for the X.500 directory system, a system from the 20th century which was never actually deployed. Mozilla (back then the Netscape Corporation) didn't like re-inventing wheels, and this standard for certificates already existed, so they used it in their new "Secure Sockets" technology. But X.509 has no notion of Internet names, so at first they just put names in as plain text. X.500 was, however, intended to be infinitely extensible, so we can just invent an alternative naming scheme, and that's what the SANs are. That's why they're mandatory for certificates in the Web PKI today: these are the Internet's names for things, so they're mandatory when talking about the Internet. They're described in detail in PKIX, the IETF document standardising the use of X.509 for the Internet.
There are several types of name we can express as SANs, but in a certificate the two you'll commonly see are dnsName - the same ASCII names you'd see in URLs like "news.ycombinator.com" or "www.google.com" - and ipAddress - a 32-bit integer typically spelled as four dotted decimals, 10.20.30.40 [yes, an IPv6 128-bit address works here too, don't worry].
Because the SANs aren't just free text, a machine can reliably parse them, which would doubtless meet Rachel's approval. The browser can mindlessly compare the bytes in the certificate, "news.ycombinator.com", with the bytes of the actual DNS name it looked up, "news.ycombinator.com", and those match, so this cert is for this site.
With free text in a CN field like a 1990s SSL certificate (or, sadly, many certificates well into the 2010s, because it was difficult to get issuers to comply properly with the rules and stop spewing nonsense into CN) it's entirely possible to see a certificate for " 10.200.300.400" and, well, what's that for? Is that leading space significant? Is that an IP address? But those numbers don't even fit in one byte each, I hope our parser copes!
You can’t mindlessly compare the bytes of the host name: you have to know that it’s the presentation format of the name, not the DNS wire format; you have to deal with ASCII case insensitivity; you have to guess what to do about trailing dots (because that isn’t specified); you have to deal with wildcards (being careful to note that PKIX wildcard matching is different from DNS wildcard matching).
It’s not as easy as it should be!
The names PKIX writes into dnsName are exactly the same as the hostnames in DNS. They are defined to always be Fully Qualified, and yet not to have a trailing dot, you don't have to like that but it's specified and it's exactly how the web browsers worked already 25+ years ago.
You're correct that they're not the on-wire label-by-label DNS structure, but they are the canonical human-readable DNS name, specifically the Punycode-encoded name. So [the website] https://xn--j1ay.xn--p1ai/ (the Russian registry, which most browsers will display in Cyrillic) has its name stored in certificates the same way it is handled in DNS, as the Punycode "xn--j1ay.xn--p1ai". In software I've seen, the label-by-label encoding stuff tends to live deep inside DNS-specific code, but the DNS name needed for comparing with a certificate does not do this.
You don't need to "deal with" case except in the sense that you ignore it, DNS doesn't handle case, the dnsName in SANs explicitly doesn't carry this, so just ignore the case bits. Your DNS client will do the case bit wiggling entropy hack, but that's not in code the certificate checking will care about.
You do need to care about wildcards, but we eliminated the last of the very weird certificate wildcards because they were minted only by a single CA (which argued that, by its reading, it was obeying PKIX), and that CA is no longer in business 'cos it turns out some of the stupid things it was doing couldn't be justified even by a creative lawyerly reading of the specifications. So the only use actually enabled today is replacing one DNS label at the front of the name. Nothing else is used: no suffixes, no mid-label stuff, no multi-label wildcards, no labels other than the first.
Edited to better explain the IDN situation hopefully
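A minimal sketch of that comparison under the rules described above (ASCII-only case folding; a wildcard only as the entire leftmost label, replacing exactly one label). It assumes IDN hostnames are already in their Punycode/A-label form, as they are in the certificate, and it strips trailing dots, which is a judgment call since the spec is silent on them:

    def san_matches(san: str, hostname: str) -> bool:
        # Match one dnsName SAN against a hostname, Web-PKI style (a sketch).
        # ASCII case folding only; inputs are assumed to be A-labels already.
        san = san.lower().rstrip(".")
        hostname = hostname.lower().rstrip(".")

        if not san.startswith("*."):
            return san == hostname  # plain byte-for-byte comparison

        # The wildcard must be the whole leftmost label and stands in for
        # exactly one label: '*.example.com' does not match 'a.b.example.com'.
        labels = hostname.split(".")
        return len(labels) > 1 and ".".join(labels[1:]) == san[2:]

    assert san_matches("news.ycombinator.com", "NEWS.Ycombinator.COM")
    assert san_matches("*.example.com", "www.example.com")
    assert not san_matches("*.example.com", "a.b.example.com")
    assert not san_matches("*.example.com", "example.com")
    assert san_matches("xn--j1ay.xn--p1ai", "xn--j1ay.xn--p1ai")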
The DNS is case-insensitive, though only for ASCII. So you have to compare names case-insensitively (again, for ASCII). It _is_ possible to have DNS servers return non-lowercase names! E.g., way back when sun.com's DNS servers would return Sun.COM if I remember correctly. So you do have to be careful about this, though if you do a case-sensitive, memcmp()-like comparison, 999 times out of 1,000 everything will work, and you won't fail open when it doesn't.
Yes, all the popular browsers require this.
> they certainly didn't even as of ~10 years ago?
That's true, ten years ago it was likely that if a browser required this they would see unacceptably high failure rates because CAs were non-compliant and enforcement wasn't good enough. Issuing certs which would fail PKIX was prohibited, but so is speeding and yet people do that every day. CT improved our ability to inspect what was being issued and monitor fixes.
> Yes, it is "required", but CN only has worked for quite some time.
No trusted CA will issue "CN only" for many years now, if you could obtain such a certificate you'd find it won't work in any popular browser either. You can read the Chromium or Mozilla source and there just isn't any code to look in CN, the browser just parses the SANs.
> I find this tricks up some IT admins who are still used to only supplying a CN and don't know what a SAN is.
In most cases this is a sign you're using something crap like openssl's command line to make CSRs, and so you're probably expending a lot of effort filling out values which will be ignored by the CA while not being offered the parameters you did need.
As you noted about OpenSSL, Windows CertSvr will allow you to do CN only, too.
Chromium published an "intent to remove" and then actually removed the CN parsing in 2017, at that point EnableCommonNameFallbackForLocalAnchors was available for people who were still catching up to policy from ~15 years ago. The policy override flag was removed in 2018, after people had long enough to fix their shit.
Mozilla had already made an equivalent change before that, maybe it worked for a few more years in Safari? I don't have a Mac so no idea.
Why I still have an old-school cert on my HTTPS site - https://news.ycombinator.com/item?id=34242028 - Jan 2023 (63 comments)
Part of not wanting to let go is the sunk cost fallacy. Part of it is being suspicious of being (more) dependent on someone else (than you are already dependent on a different someone else).
(As an aside, the n-gate guy who ranted against HTTPS in general and thought static content should just be HTTP also thought like that. Unfortunately, as I'm at a sketchy cafe using their wifi, his page currently says I should click here to enter my bank details, and I should download new cursors, and oddly doesn't include any of his own content at all. Bit weird, but of course I can trust he didn't modify his page, and it's just a silly unnecessary imposition on him that I would like him to use HTTPS)
Unfortunately for those rugged individuals, you're in a worldwide community of people who want themselves, and you, to be dependent on someone else. We're still going with "trust the CAs" as our security model. But with certificate transparency and mandatory stapling from multiple verifiers, we're going with "trust but verify the CAs".
Maximum acceptable durations for certificates are coming down, down, down. You have to get new ones sooner, sooner, sooner. This is to limit the harm a rogue CA or a naive mis-issuing CA can do, as CRLs just don't work.
The only way that can happen is with automation, and being required to prove you still own a domain and/or a web-server on that domain, to a CA, on a regular basis. No "deal with this once a year" anymore. That's gone and it's not coming back.
It's good to know the whole protocol, and yes certbot can be overbearing, but Debian's python3-certbot + python3-certbot-apache integrates perfectly with how Debian has set up apache2. It shouldn't be a hardship.
And if you don't like certbot, there are lots of other ACME clients.
And if you don't like Let's Encrypt, there are other entities offering certificates via the ACME protocol (YMMV, do you trust them enough to vouch for you?)
Yep, I've seen that argument so many times and it should never make sense to anyone that understands MITM.
The only way it could possibly work is if the static content were signed somehow, but then you need another protocol in the browser and a way to exchange keys securely, for example like signed RPMs. It would be cheaper, since the signing happens once, but is it worth having yet another implementation?
If you want to ensure the bits that were sent from the server are the bits that reach your browser, they must be signed in some way.
Rather, it's that most people simply don't need to care about MITM. It's not a relevant attack for most content that can be reasonably served over HTTP. The goal isn't to eliminate every security threat possible, it's to eliminate the ones that are actually a problem for your use case.
There's no such thing as "not worth the effort to secure" because neither the site itself nor its content matters, only the network path from the site to the user, which is not under the full control of either party. These need not be, and usually aren't, targeted attacks; they'll hit anything that can be intercepted and modified, without a care for what it's meant to be, where it's coming from, or who it's going to.
Viewing it is an A-to-B interaction where A is a good-natured blogger and B is a tech-savvy reader, and that's all there is to it, is archaic and naive to the point of being dangerous. It is really an A-to-Z interaction where even if A is a good-natured blogger and Z is a tech-savvy user, parties B through Y all get to have a crack at changing the content. Plain HTTP is a protocol for a high-trust environment and the Internet has not been such a place for a very long time. It is unfortunate that party A (the site) must bear the brunt of the security burden, but that's the state of things today. There were other ways to solve this problem but they didn't get widespread adoption.
The browser content model has no idea whether the data it's receiving is static or not.
ISPs had already shown time and again they'd inject content into http streams for their own profit. BGP attacks routed traffic off to random places. Simply put the modern web should be zero trust at all.
Maybe you could have a mixed-use page in the browser where you had your secure context plus a sub-context of unencrypted but integrity-protected objects; that could possibly improve caching. With that said, it looks like another fun hole browser makers would be chasing every year or so.
- Is female [TIL the term "wogrammer"]
- Works for Facebook [formerly Rackspace and Google] so an undeniably Big MAMAA
- Has been blogging prolifically for at least 14 years [let's call it 40 years: she admin'd a BBS at age 12]
- Website is custom self-hosted; very old school and accessible; no ads or popup bullshit
- Probably has more CSE/SWE experience+talent in her little pinky finger than 80% of HN commenters
https://medium.com/wogrammer/rachel-kroll-7944eeb8c692
So I'd say that her position and experience command enough respect that we cannot judge her merely by peeking at a few trifling journal entries.
https://datatracker.ietf.org/doc/html/rfc8555#section-8.4
>2. Query for TXT *records* for the validation domain name
>3. Verify that the contents of *one of the TXT records* match the digest value
(Emphasis mine.)
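For reference, computing that digest value only takes a few lines. A sketch following RFC 8555 sections 8.1/8.4 and RFC 7638 (the token and account key below are made-up placeholders; a real client hashes the JWK of the account key it registered with):

    import base64, hashlib, json

    def b64url(data: bytes) -> str:
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

    # RFC 7638 JWK thumbprint: required members only, lexicographic key order,
    # no whitespace, then SHA-256. Placeholder RSA public key shown here.
    account_jwk = {"e": "AQAB", "kty": "RSA", "n": "not-a-real-modulus"}
    thumbprint = b64url(hashlib.sha256(
        json.dumps(account_jwk, separators=(",", ":"), sort_keys=True).encode()
    ).digest())

    token = "token-from-the-challenge-object"     # placeholder
    key_authorization = token + "." + thumbprint  # RFC 8555 section 8.1

    # The TXT record at _acme-challenge.<domain> carries the base64url-encoded
    # SHA-256 digest of the key authorization string.
    txt_value = b64url(hashlib.sha256(key_authorization.encode("ascii")).digest())
    print(txt_value)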
1 - https://docs.astral.sh/uv/guides/scripts/
>I contacted Rachel and she said - and this is my poor paraphrasing from memory - that the IP ban was something she intentionally implemented but I got caught as a false positive
[0] https://news.ycombinator.com/item?id=42599359
My favourite client is probably https://github.com/acmesh-official/acme.sh
If you use a DNS service provider that supports it, you can use the DNS-01 challenge to get a certificate - that means that you can have the acme.sh running on a completely different server which should help if you're twitchy about running a complex script on it. It's also got the advantage of allowing you to get certificates for internal/non-routable addresses.
Personally, I find that tls-alpn-01 is even nicer than dns-01. You can run a web server (or reverse proxy) that listens to port 443, and nothing else, and have it automatically obtain and renew TLS certificates, with the challenges being sent via TLS ALPN over the same port you're already listening on. Several web servers and reverse proxies have support for it built in, so you just configure your domain name and the email address you want to use for your Let's Encrypt account, and you get working TLS.
Pinned to an old version and looking for a replacement right now.
If anyone else ran into that it's just a matter of adding
I rather liked using ZeroSSL for a long time (perhaps just out of knee-jerk resistance to the “Just drink the Koolaid^W^W^Wuse Let’s Encrypt! C’mon man, everyone’s doing it!” nature of LE usage), but of late ZeroSSL has gotten so unreliable that I’ve rolled my eyes and started swapping things back to LE.
But yeah, can definitely recommend DNS-01 over HTTP-01, since it doesn't involve implicitly messing with your server settings, and makes it much easier to have a single locked server with all the ACME secrets, and then distribute the certs to the open-to-the-internet web servers.
+1 for acme.sh, it's beautiful.
I assume certbot is the client she’s alluding to that misinterprets one of the factors in the protocol as hex vs decimal and somehow things still work, which is incredibly worrisome.
Then I discovered the web-root approach people mention here and it made a huge difference. Now I have the HTTP snippet in my server set to serve up ACME challenges from a static directory and push everything else to HTTPS, and the ACME client just needs write permission to that directory. I can dynamically include that snippet in all of the sites my server handles and be done.
If I really felt like it, I could even write a wrapper function so the ACME client doesn’t even need restart permissions on the web-server (for me, probably too much to bother with, but for someone like Rachel perhaps worthwhile).
I run it in "webroot" mode on NgINX servers so it's just a matter of including the relevant config file in your HTTP sections (likely before redirecting to HTTPS) so that "/.well-known/acme-challenge/" works correctly. Then when you do run certbot, it can put the challenge file into the webroot and NgINX will automatically serve it. This allows certbot to do its thing without needing to do anything with NgINX.
tiny-acme.py is 200 lines, easy to audit and incorporate parts into your own infrastructure. It works well for the tiny job it does, but it doesn't support anything more modern.
It's on my long list of potential side projects, but I don't think I'll ever get around to it.
This makes me wonder what world of development she is in. Does she prefer SOAP?
If the author says they dislike JSON, especially given the tone of this article with respect to nonsensical protocols, I highly doubt they approve of SOAP.
What would you suggest instead given all these cons?
https://github.com/dhall-lang/dhall-lang
https://protobuf.dev/
https://en.wikipedia.org/wiki/S-expression
TCP is just a bunch of bytes... You can't process a bunch of bytes without understanding what they are, and that requires signaling information at a different level (e.g. in the bytes themselves as a defined protocol like SSH, SCP, HTTP, etc., or some other pre-shared information between server and client [the worst of protocols: custom bullshit]).
Why is this worse than JSON?
"{'protected': {'protected': { 'protected': 'QABE' }}}" is just as custom as 66537 imo. It's easier to reverse engineer than 66537 but that's not less custom.