Native ACME support comes to Nginx


Comments (72)

muppetman · 1h ago
This idea we seem to have moved towards where every application ALSO includes its own ACME support really annoys me, actually. I much prefer the idea that there are well-written clients whose job it is to do the ACME handling. Is my Postfix mailserver soon going to have an ACME client shoehorned in? I've already seen GitHub issues for AdGuardHome (a DNS server that supports blocklists) to have an ACME client built in, thankfully thus far ignored. Proxmox (a VM hypervisor!) has an ACME client built in.

I realise of course the inclusion of an ACME client in a product doesn't mean I need to use their implementation; I'm free to keep using my own independent client. But it seems to me adding ACME clients to everything is going to cause those projects more PRs, more baggage to drag forward, etc. And confusion for users, as now there are multiple places they could/should be generating certificates.

Anyway, grumpy old man rant over. It just seems Zawinski's Law "Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can." can be replaced these days with MuppetMan's law of "Every program attempts to expand until it can issue ACME certificates."

nottorp · 1h ago
That's okay, next step is to fold both nginx and the acme client into systemd.
arianvanp · 25m ago
Okay but hear me out

If we teach systemd socket activation to do TLS handshakes we can completely offload TLS encryption to the kernel (and network devices) and you get all of this for free.

It's actually not a crazy idea in the world of kTLS to centralize TLS handshaking into systemd.

johannes1234321 · 8m ago
Oh, I remember Solaris fanboys praising kernel-level TLS as it reduced context switching by a lot. I believe they even had a patched OpenSSL making this transparent to OpenSSL-based applications.

Linux seems to offer such facilities, too. I've never used it to my knowledge, though (it might be that some app used it in the background?) https://lwn.net/Articles/892216/

devttyeu · 1h ago
Careful posting systemd satire here, there is a high likelihood that your comment becomes the reason this feature gets built and PRed by someone bored enough to also read HN comment section.
devttyeu · 1h ago

  [Unit]
  Description=Whatever
  
  [Service]
  ExecStart=/usr/local/bin/cantDoHttpSvc -bind 0.0.0.0:1234
  
  [HTTP]
  Domain=https://whatever.net
  Endpoint=127.1:1234
Yeah this could happen one day
pta2002 · 28m ago
You can basically implement this right now already by using a systemd generator. It's not even a particularly bad idea; I kinda want to try hooking it up to nginx or something. It would make adding a reverse proxy route as simple as adding a unit file, and you could depend on it from other units.
9dev · 51m ago
You know, Tailscale serve basically does this right now, but if I could skip this step and let systemd expose a local socket via HTTPS, automatically attempting to request a certificate for the hostname, with optional configuration in the socket unit file… I would kinda like that actually
robertlagrant · 32m ago
Yeah that sounds quite good now you say it.
akagusu · 48m ago
I'm sure this will become a dependency of GNOME
throw_a_grenade · 1h ago
Unironically, I think having a systemd-something util that would provide TLS certs for .services upon encountering a specific config knob in the [Service] section would be much better than having a multitude of uncoordinated ACME clients that will quickly burn through allowed rate limits. Even just as a courtesy to LE/ISRG's computational resources.
jcgl · 53m ago
It wouldn't specifically have to be a systemd project or anything; you could make a systemd generator[0] so that you could list out certs as units in the Requires= of a unit. That'd be really neat, actually.

[0] https://www.freedesktop.org/software/systemd/man/latest/syst...
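Purely as a hypothetical sketch of what the consumer side could look like (none of these unit names exist today; they're invented for illustration):

  # myapp.service (hypothetical)
  [Unit]
  Requires=acme-cert@example.com.service
  After=acme-cert@example.com.service

  [Service]
  ExecStart=/usr/local/bin/myapp --cert /run/certs/example.com/fullchain.pem

A generator is just an executable in /usr/lib/systemd/system-generators/ that systemd runs early at boot to write unit files into a runtime directory, so it could synthesize those acme-cert@ instances from whatever declarative config it finds.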

Spivak · 56m ago
A systemd-certd would actually kinda slap. One cert store to rule them all for clients, a way to define certs and specify where they're supposed to be placed with automatic reload using the systemd dependency solver, a way to mount certs into services privately, a unified interface for interacting with the cert store.
0x69420 · 52m ago
multiple services depending on different outputs of a single acme client can be expressed, right now, in 2025, within systemd unit definitions, without deeply integrating a systemd-certd-or-whatever-as-such.

which is basically ideal, no? for all the buy-in that the systemd stapling-svchost.exe-onto-cgroups approach asks of us, at the very least we have a sufficiently expressive system to do that sort of thing. where something on the machine has a notion of what wants what from what, and you can issue a command to see whether that dependency is satisfied. like. we are there. good. nice. hopefully ops guys are content to let sleeping dogs lie, right?

...right?

mholt · 31m ago
Integrated ACME clients have proven to be more robust, more resilient, more automatic, and easier to use than exposing multiple moving parts: https://github.com/https-dev/docs/blob/master/acme-ops.md#ce...

To avoid a splintered/disjoint ecosystem, library code can be reused across many applications.

EvanAnderson · 1h ago
I'm with you on this. I run my ACME clients as least-privileged standalone applications.

On a machine where you're only running a webserver I suppose having Nginx do the ACME renewal makes sense.

On many of the machines I support I need certificates for other services, too. In many cases I also have to distribute the certificate to multiple machines.

I find it easy to manage and troubleshoot a single application handling the ACME process. I can't imagine having multiple logs to review and monitor would be easier.

oliwarner · 1h ago
The idea that the thing that needs the certificate, gets the certificate doesn't seem that perverse to me. The interface/port-bound httpd needs to know what domains it's serving and what certificates it's using.

Automating this is pure benefit to those that want it, and a non-issue to those who don't — just don't use it.

atomicnumber3 · 1h ago
I personally think nginx is the kind of project I'd allow to have its own acme client. It's extremely extremely widely used software and I would be surprised if less than 50% of the certs LE issues are not exclusively served via nginx.

Now if Jenkins adds acme support then yes I'll say maybe that one is too far.

muppetman · 1h ago
But it's a webserver. I'm sure it farms out sending emails from forms it serves, I doubt it has a PHP library built in, surely it farms that out to php-fpm? It doesn't have a REDIS library or NodeJS built in. Why's ACME different?
tuckerman · 1h ago
I get what you are saying, but surely obtaining a certificate is much closer to being a core, transport-related part of a web server than those other examples, especially in 2025 when browsers throw up "doesn't support a secure connection with HTTPS" messages left and right.

I think there is also clearly demand: Caddy is very well liked and often recommended for hobbyists, and I think a huge part of that is the built-in certificate management.
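For context on why that resonates: a TLS-terminated site in Caddy is just the site address, with certificates obtained and renewed behind the scenes. A minimal Caddyfile sketch (example.com and the backend port are placeholders):

  example.com {
      reverse_proxy 127.0.0.1:8080
  }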

andmarios · 1h ago
Nginx (and Apache, etc) is not just a web server; it is also a reverse proxy, a TLS termination proxy, a load balancer, etc.

The key service here is "TLS termination proxy", so being able to issue certificates automatically was pretty high on the wish list.

dividuum · 1h ago
Well, it already has, among a ton of other modules, a memcached and a JavaScript module (njs), so you’re actually not that far off. An optional ACME module sounds fitting.
firesteelrain · 1h ago
To your point, we use Venafi and it has clients that act as orchestrators to deploy the new cert and restart the web service. The web service itself doesn't need to be ACME-aware.

Venafi supports the ACME protocol, so it can be the ACME server, like Let's Encrypt.

I am speaking purely about an on-prem, non-internet-connected scenario.

chrisweekly · 54m ago
"surprised if less than 50% of the certs LE issues are not..."

triple-negative, too hard to parse

dizhn · 26m ago
I believe caddy was the first standalone software to include automated acme. It's a web server (and a proxy) so it's a very good fit. One software many domains. Proxmox likewise is a hypervisor hosting many VMs (hence domains). Another good fit. Though as far as I know they don't provide the service for the VMs "yet".
9dev · 40m ago
I’m of the opposite opinion, really: automatic TLS certificate requests are just an implementation detail of software that advertises itself as accepting encrypted connections. Similarly, many applications include an OAuth client that takes care of requesting and refreshing access tokens automatically, all using a discovery URI and client credentials.

Lots of apps should support this automatically, with no intervention necessary, and just communicate securely with each other. And ACME is the way to enable that.

Ajedi32 · 1h ago
It makes sense to me. If an application needs a signed certificate to function properly, why shouldn't it include code to obtain that certificate automatically when possible?

Maybe if there were OS level features for doing the same thing you could argue the applications should call out to those instead, but at least on Linux that's not really the case. Why should admins need to install and configure a separate application just to get basic functionality working?

renewiltord · 54m ago
You just don't load the module and use certbot, and that will work; that's what I'm doing. People get carried away with this stuff. The software is quite modular. It's fine for people to simplify it.

For a bunch of tech-aware people the inability for you all here to modify your software to meet your needs is insane. As a 14 year old I was using the ck patch series to have a better (for me) scheduler in the kernel. Every other teenager could do this shit.

In my 30s I have a low friction set up where each bit of software only does one thing and it's easy for me to replicate. Teenagers can do this too.

Somehow you guys can't do either of these things. I don't get it. Are you stupid? Just don't load the module. Use stunnel. Use certbot. None of these things are disappearing. I much prefer. I much prefer. I much prefer. Christ. Never seen a userbase that moans as much about software (I moan about moaning - different thing) while being unable to do anything about it as HN.
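For the record, the split being described here is roughly: certbot owns issuance and renewal, and nginx just points at the files certbot writes. A sketch (the live/ paths are certbot's defaults; example.com and the webroot are placeholders):

  # issued and renewed outside nginx, e.g. from cron or a systemd timer:
  #   certbot certonly --webroot -w /var/www/letsencrypt -d example.com

  server {
      listen 443 ssl;
      server_name example.com;

      ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
      ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
  }

The one moving part the integrated approach removes is the reload after renewal, typically handled with a certbot deploy hook that runs nginx -s reload.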

mikestorrent · 24m ago
The unix philosophy is still alive... and by that I mean complaining on newsgroups about things, not "do one thing well"
petcat · 1h ago
> the popular open source web server NGINX announced support for ACME with their official ngx_http_acme module (implemented with memory safe Rust code!).

Why even bother calling out that it's written in "memory safe Rust code" when the code itself is absolutely riddled with unsafe {} everywhere?

It seems to me that it's written in memory unsafe Rust code.

ninkendo · 1h ago
Looks like the only unsafe parts are the parts which interop with the rest of the nginx codebase (marshalling pointers, calling various functions in nginx_sys, etc.). Rust cannot guarantee this external C stuff adheres to the necessary invariants, hence it must be marked unsafe.

I don't see a way to integrate rust as a plugin into a C codebase without some level of unsafe usage like this.

benwilber1 · 1h ago
I think the nginx-sys Rust bindings are still pretty new and raw. I've experimented with them before and have given up because of the lack of a polished, safe, Rust API.

Right now you're pretty much stuck casting pointers to and from C land if you want to write a native nginx module in Rust. I'm sure it will get better in the future.
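As a generic illustration of what that pointer-casting boundary tends to look like (a made-up sketch, not the actual nginx-sys or ngx API): the usual pattern is to keep the raw-pointer handling in one small, documented unsafe spot and hand everything else a normal Rust type.

  use std::ffi::{CStr, CString};
  use std::os::raw::c_char;

  /// Turn a C string pointer from the host into an owned Rust String.
  ///
  /// Marked `unsafe` because the compiler can't check the C side's promises:
  /// the caller must guarantee `ptr` is either null or points to a valid,
  /// NUL-terminated string that stays alive for the duration of this call.
  unsafe fn c_str_to_string(ptr: *const c_char) -> Option<String> {
      if ptr.is_null() {
          return None;
      }
      // SAFETY: upheld by the caller per this function's contract.
      let s = unsafe { CStr::from_ptr(ptr) };
      Some(s.to_string_lossy().into_owned())
  }

  fn main() {
      // Stand-in for a pointer we'd normally receive from the C host;
      // fabricated here so the sketch runs on its own.
      let owned = CString::new("example.com").unwrap();
      println!("{:?}", unsafe { c_str_to_string(owned.as_ptr()) });
  }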

rererereferred · 1h ago
People like bragging/advertising about their language of choice. Maybe others who like the language will get interested in collaborating, or employers who need developers for that language might get in contact with them.

Also, unsafe rust is still safer than C.

pjmlp · 1h ago
Actually it isn't, because there are a few gotchas.

Unsafe Rust, like unsafe code blocks in any language that offers them, should be kept to the bare minimum, as building blocks.

johnisgood · 1h ago
> Also, unsafe rust is still safer than C.

I highly doubt that, and developers of Rust have confirmed here on HN that when it comes to unsafe code within a codebase, it is not just the unsafe blocks that are affected; the whole codebase is affected by that.

vsgherzi · 1h ago
Unsafe Rust still enforces many of Rust's rules. The only powers you get with unsafe Rust are dereferencing raw pointers, calling unsafe functions, implementing unsafe traits, and the ability to access or modify mutable statics. You can read more about this here: https://doc.rust-lang.org/nomicon/what-unsafe-does.html

Unsafe Rust is definitely safer than normal C. All the unsafe keyword really means is that the compiler cannot verify the behavior of the code; it's up to the programmer. This is for cases where (1) the programmer knows more than the compiler or (2) we're interacting with hardware or FFI.

When Rust developers say unsafe affects the whole codebase, what they mean is that UB in unsafe code could break guarantees about the whole program (even the safe parts). Just because something is unsafe doesn't inherently mean it's going to break everything; it just needs more care when writing and reviewing, just as C and C++ do.

jcranmer · 19m ago
Rust's core object semantics are very nearly that of C. Really, the only major difference between Rust and C is that you can't violate mutable aliasing rules in Rust, even in unsafe, and C has a strict aliasing mode that Rust can't opt into.

The main practical difference is that Rust pushes you away from UB whereas C tends to push you into it; signed integer overflow is default-UB in C, while Rust makes you go out of your way to get UB integer overflow. Furthermore, the general design philosophy of Rust is that you build "safe abstractions" which might require unsafe to implement, but the interface should be impossible to use in a way which causes UB. It's definitely questionable how many people actually adhere to those rules--some people are just going to slap the unsafe keyword on things to make the code compile--but it's still a pretty far distance from C, where the language tends to make building abstractions of any kind, let alone safe ones, difficult.
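To make the overflow point concrete (a small sketch, not from the thread):

  fn main() {
      let x: i32 = i32::MAX;

      // Overflow in Rust is defined behavior; you pick the semantics explicitly.
      println!("{}", x.wrapping_add(1));  // wraps around to i32::MIN
      println!("{:?}", x.checked_add(1)); // prints None: the overflow was caught

      // Plain `x + 1` is also defined: it panics in debug builds and wraps in
      // release builds. It is never undefined behavior.

      // The only way to get C-style UB on overflow is to opt in with unsafe:
      //     let y = unsafe { x.unchecked_add(1) }; // UB if it overflows
      // which is the "go out of your way" part.
  }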

otterley · 1h ago
We discussed this about a month ago: https://news.ycombinator.com/item?id=44889941
dizlexic · 1h ago
I remember
KyleBerezin · 2h ago
Hey, I just decided to run a DNS server and a couple of web services on my lan from a raspberry pi over the weekend. I used Nginx for the reverse proxy so all of the services could be addressable without port numbers. It was very easy to set up; it's funny how, when you learn something new, you start seeing it all over the place.
mikestorrent · 22m ago
That's a great exercise in self-hosting. Nginx is definitely everywhere - probably 95%+ of SaaS and websites you hit are running it somewhere.
btreecat · 2h ago
Congratulations to the folks involved. I'm sure this wasn't a trivial lift. And the improvement to free security posture is a net positive for our community.

I have moved most of my personal stuff to caddy, but I look forward to testing out the new release for a future project and learning about the differences in the offerings.

Thanks for this!

ilaksh · 49m ago
What's the easiest way to install the newest nginx in Ubuntu 24? PPA or something?
thresh · 40m ago
Just use the official packages from https://nginx.org/en/linux_packages.html#Ubuntu

nginx-module-acme is available there, too, so you don't need to compile it manually.
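For anyone wondering what the wiring looks like once the package is installed: roughly along these lines (a sketch based on the module's announcement; directive names and defaults may have shifted since, so check the current docs; example.com, the contact address, and paths are placeholders, and the usual events/log boilerplate is omitted):

  load_module modules/ngx_http_acme_module.so;

  http {
      resolver 127.0.0.1;  # the module needs a resolver to reach the CA

      acme_issuer letsencrypt {
          uri https://acme-v02.api.letsencrypt.org/directory;
          contact admin@example.com;
          state_path /var/lib/nginx/acme-letsencrypt;
          accept_terms_of_service;
      }

      acme_shared_zone zone=acme_shared:1M;

      server {
          listen 443 ssl;
          server_name example.com;

          acme_certificate letsencrypt;
          ssl_certificate       $acme_certificate;
          ssl_certificate_key   $acme_certificate_key;
          ssl_certificate_cache max=2;
      }
  }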

mikestorrent · 22m ago
Docker container?
mark_mart · 1h ago
Does this mean we don’t need to use certbot?
predmijat · 31m ago
Currently it supports only HTTP-01 challenges (no wildcards, must be reachable).
mikestorrent · 21m ago
What about with e.g. internal ACME endpoints like https://developer.hashicorp.com/vault/tutorials/pki/pki-acme...
jaas · 1h ago
If you are using Nginx, then likely yes.
esher · 1h ago
Will that make local development setup easier? Like creating some certs on the fly?
endorphine · 2h ago
What took them so long? Honest question.

I'd expect nginx to have this years ago. Is that so hard to implement for some reason?

aaronax · 2h ago
Slow adoption of ACME by corporate customers that pay the bills. Must be a bigger effort than one would initially think, too.
lysace · 1h ago
Nginx is now owned by F5. Big, expensive and amazingly slow in terms of development.

Related note: I really enjoy using haproxy for load balancing.

senko · 1h ago
Looks like haproxy also doesn't support it natively.

(unless I'm googlin' it wrong - all info points to using with acme.sh)

Graphon1 · 55m ago
No love for caddyserver?
mholt · 33m ago
There was quite a bit already, about a month ago: https://news.ycombinator.com/item?id=44889941
lysace · 48m ago
Correct.
jsheard · 1h ago
See also: nginx's HTTP/3 support still being experimental, when pretty much every other server besides Apache shipped it years ago.

preisschild · 2h ago
What does this offer to you vs using a tool such as certbot/cert-manager, and then just referencing the path in nginx?
petre · 13m ago
Not needing a Python interpreter and setting up cron jobs? That's the sole reason for using Caddy really, because it's just install and forget. I never had an expired certificate with it. I don't want to mess with an entirely different webserver config either after having fully configured my nginx instances. Too bad they wrote it in Rust instead of C; now I need another compiler to build it. Minor nuisance. Hopefully it will get packaged.
aargh_aargh · 2h ago
One less program to install, configure, upgrade, watch vulnerabilities in, monitor.
benwilber1 · 1h ago
All of those things also apply to this module since it's an extra module that you have to install separately. It's not included with the nginx base distribution. You have to configure it specifically, you have to monitor it, and you have to upgrade it and watch for vulnerabilities.
jedisct1 · 1h ago
Native, but requires Rust. No, thanks.
johnisgood · 1h ago
Agreed.

I have had my share of compiling Rust programs, pulling in thousands of dependencies. If people think it is good practice, then well, good for them, but they should not sell Rust as a safe language when it encourages such unsafe practices, especially when there are thousands of dependencies and probably all of them have their own unsafe blocks (even this ACME support does), which affect the whole codebase.

I am going to keep using certbot. No reason to switch.

vsgherzi · 37m ago
This is a problem I'm pretty invested in so let's take a look.

If we add the list of dependencies from the module, this is what we get:

  anyhow = "1.0.98"
  base64 = "0.22.1"
  bytes = "1.10.1"
  constcat = "0.6.1"
  futures-channel = "0.3.31"
  http = "1.3.1"
  http-body = "1.0.1"
  http-body-util = "0.1.3"
  http-serde = "2.1.1"
  hyper = { version = "1.6.0", features = ["client", "http1"] }
  libc = "0.2.174"
  nginx-sys = "0.5.0-beta"
  ngx = { version = "0.5.0-beta", features = ["async", "serde", "std"] }
  openssl = { version = "0.10.73", features = ["bindgen"] }
  openssl-foreign-types = { package = "foreign-types", version = "0.3" }
  openssl-sys = { version = "0.9.109", features = ["bindgen"] }
  scopeguard = "1"
  serde = { version = "1.0.219", features = ["derive"] }
  serde_json = "1.0.142"
  siphasher = { version = "1.0.1", default-features = false }
  thiserror = { version = "2.0.12", default-features = false }
  zeroize = "1.8.1"

Now, vendoring and counting the lines of those, we get 2,171,685 lines of Rust. This includes the vendored packages from cargo vendor, so what happens when we take just the dependencies for our OS? Vendoring for just x86 Linux chops our line count to 1,220,702; not bad for just removing packages that aren't needed, but still a lot. Let's actually see what's taking up all that space.

  996K ./regex
  1.0M ./libc/src/unix/bsd
  1.0M ./serde_json
  1.0M ./tokio/src/runtime
  1.1M ./bindgen-0.69.5
  1.1M ./tokio/tests
  1.2M ./bindgen
  1.2M ./openssl/src
  1.4M ./rustix/src/backend
  1.4M ./unicode-width/src
  1.4M ./unicode-width/src/tables.rs
  1.5M ./libc/src/unix/linux_like/linux
  1.5M ./openssl
  1.6M ./vcpkg/test-data/no-status
  1.6M ./vcpkg/test-data/no-status/installed
  1.6M ./vcpkg/test-data/no-status/installed/vcpkg
  1.7M ./regex-syntax
  1.7M ./regex-syntax/src
  1.7M ./syn/src
  1.9M ./libc/src/unix/linux_like
  1.9M ./vcpkg/test-data/normalized/installed/vcpkg/info
  2.0M ./vcpkg/test-data/normalized
  2.0M ./vcpkg/test-data/normalized/installed
  2.0M ./vcpkg/test-data/normalized/installed/vcpkg
  2.2M ./unicode-width
  2.4M ./syn
  2.6M ./regex-automata/src
  2.7M ./rustix/src
  2.8M ./rustix
  2.9M ./regex-automata
  3.6M ./vcpkg/test-data
  3.9M ./libc/src/unix
  3.9M ./tokio/src
  3.9M ./vcpkg
  4.5M ./libc/src
  4.6M ./libc
  5.3M ./tokio
  12M  ./linux-raw-sys
  12M  ./linux-raw-sys/src

Coming in at 12MB we have linux-raw-sys, which provides bindings to the Linux userspace, a pretty reasonable requirement. Then libc and Tokio. Since this is async, Tokio is a must-have and is pretty much bound to Rust at this point. It's extremely well vetted and is used in industry daily.

Removing those, we are left with 671,031 lines of Rust.

Serde is a well-known dependency that allows for marshalling of data types. Hyper is the curl of the Rust world, allowing interaction with the network.

I feel like this is an understandable amount of code given the complexity of what it's doing. Of course to some degree I agree with you and often worry about dependencies. I have a whole article on it here.

https://vincents.dev/blog/rust-dependencies-scare-me/?

I think I'd be more satisfied if things get "blessed" by the foundation like rustls is being. This way I know the project is not likely to die, and has the backing of the language as a whole. https://rustfoundation.org/media/rust-foundation-launches-ru...

I think we can stand to write more things on our own (sudo-rs did this) https://www.memorysafety.org/blog/reducing-dependencies-in-s...

But to completely ignore or not interact with the language seems like throwing the baby out with the bathwater to me

aakkaakk · 1h ago
Be aware, nginx is developed by a Russian.
themafia · 1h ago
The "cold war" was one of the dumbest features of our world over the past century.

It's amazing to me that people are still addicted to it.

petre · 3m ago
Yes, I'm aware. He also had his office raided by Putin's minions.

https://www.themoscowtimes.com/2019/12/13/russia-nginx-fsb-r...

petcat · 1h ago
nginx has been owned and developed by an American company for a long time...

rererereferred · 1h ago
I own my copy of the code.
TiredOfLife · 1h ago
The last Russian quite loudly stopped working on it and released a fork
dizlexic · 1h ago
NOT A RUSSIAN??!?!?!?