Native ACME support comes to Nginx

171 points by Velocifyer | 84 comments | 9/11/2025, 5:28:13 PM | letsencrypt.org

Comments (84)

otterley · 4h ago
We discussed this about a month ago: https://news.ycombinator.com/item?id=44889941
dang · 2h ago
Thanks! Macroexpanded:

Nginx introduces native support for ACME protocol - https://news.ycombinator.com/item?id=44889941 - Aug 2025 (298 comments)

dizlexic · 4h ago
I remember
KyleBerezin · 4h ago
Hey, I just decided to run a DNS server and a couple of web services on my lan from a raspberry pi over the weekend. I used Nginx for the reverse proxy so all of the services could be addressable without port numbers. It was very easy to set up, it's funny how when you learn something new, you start seeing it all over the place.
mikestorrent · 2h ago
That's a great exercise in self-hosting. Nginx is definitely everywhere - probably 95%+ of SaaS and websites you hit are running it somewhere.
muppetman · 4h ago
This idea we seem to have moved towards where every application ALSO includes its own ACME support really annoys me, actually. I much prefer the idea that there are well-written clients whose job it is to do the ACME handling. Is my Postfix mail server soon going to have an ACME client shoehorned in? I've already seen GitHub issues for AdGuardHome (a DNS server that supports blocklists) asking for a built-in ACME client, thankfully thus far ignored. Proxmox (a VM hypervisor!) has an ACME client built in.

I realise of course the inclusion of an ACME client in a product doesn't mean I need to use their implementation; I'm free to keep using my own independent client. But it seems to me adding ACME clients to everything is going to cause those projects more PRs, more baggage to drag forward, etc. And confusion for users, as now there are multiple places they could/should be generating certificates.

Anyway, grumpy old man rant over. It just seems Zawinski's Law "Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can." can be replaced these days with MuppetMan's law of "Every program attempts to expand until it can issue ACME certificates."

nottorp · 3h ago
That's okay, next step is to fold both nginx and the acme client into systemd.
devttyeu · 3h ago
Careful posting systemd satire here; there is a high likelihood that your comment becomes the reason this feature gets built and PRed by someone bored enough to also read the HN comment section.
devttyeu · 3h ago

  [Unit]
  Description=Whatever
  
  [Service]
  ExecStart=/usr/local/bin/cantDoHttpSvc -bind 0.0.0.0:1234
  
  [HTTP]
  Domain=https://whatever.net
  Endpoint=127.1:1234
Yeah this could happen one day
pta2002 · 2h ago
You can basically implement this right now by using a systemd generator. It's not even a particularly bad idea; I kinda want to try hooking it up to nginx or something. It would make adding a reverse proxy route as simple as adding a unit file, and you could depend on it from other units.
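A generator is just an executable dropped into /usr/lib/systemd/system-generators/; systemd runs it very early at boot (and on daemon-reload) and passes it output directories to write units into. A minimal sketch of the mechanism in Rust, with all unit and helper names hypothetical:

  // Minimal systemd generator sketch; unit/helper names are hypothetical.
  // Installed to /usr/lib/systemd/system-generators/, it receives three
  // output directories (normal, early, late) as arguments.
  use std::{env, fs};

  fn main() -> std::io::Result<()> {
      // Units written into the "normal" directory act like real unit files.
      let out = env::args().nth(1).expect("expected generator output dir");

      // A real version would parse an [HTTP] section out of existing units;
      // this just emits one fixed route unit to show the mechanism.
      let unit = "[Unit]\n\
                  Description=Generated reverse proxy route for whatever.net\n\
                  \n\
                  [Service]\n\
                  Type=oneshot\n\
                  ExecStart=/usr/local/bin/add-proxy-route whatever.net 127.0.0.1:1234\n";
      fs::write(format!("{out}/whatever-proxy-route.service"), unit)
  }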
9dev · 3h ago
You know, Tailscale serve basically does this right now, but if I could skip this step and let systemd expose a local socket via HTTPS, automatically attempting to request a certificate for the hostname, with optional configuration in the socket unit file… I would kinda like that actually
robertlagrant · 2h ago
Yeah that sounds quite good now you say it.
akagusu · 2h ago
I'm sure this will become a dependency of GNOME
arianvanp · 2h ago
Okay but hear me out

If we teach systemd socket activation to do TLS handshakes we can completely offload TLS encryption to the kernel (and network devices) and you get all of this for free.

It's actually not a crazy idea in the world of kTLS to centralize TLS handshaking into systemd.

johannes1234321 · 2h ago
Oh, I remember my Solaris fanboys praising kernel-level TLS, as it reduced context switching by a lot. I believe they even had a patched OpenSSL making this transparent to OpenSSL-based applications.

Linux seems to offer such facilities, too. I never used it, to my knowledge, though (might be that some app used it in the background?) https://lwn.net/Articles/892216/
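For the record, the Linux mechanism is the TLS "ULP" (upper layer protocol): userspace does the handshake, then attaches the kernel module to the socket and hands over the symmetric keys, after which plain read()/write() moves encrypted records. A sketch of the first step using the libc crate (the key handoff via SOL_TLS is omitted):

  // Sketch: attach the kernel TLS ULP to an established TCP socket on Linux.
  // Requires the `tls` kernel module; uses the libc crate for the raw call.
  use std::net::TcpStream;
  use std::os::fd::AsRawFd;

  fn enable_ktls(sock: &TcpStream) -> std::io::Result<()> {
      let ulp = b"tls\0";
      let rc = unsafe {
          libc::setsockopt(
              sock.as_raw_fd(),
              libc::SOL_TCP,
              libc::TCP_ULP,
              ulp.as_ptr().cast(),
              ulp.len() as libc::socklen_t,
          )
      };
      if rc != 0 {
          return Err(std::io::Error::last_os_error());
      }
      // Next step (not shown): setsockopt(SOL_TLS, TLS_TX, ...) with the
      // negotiated cipher state, after which write() emits TLS records.
      Ok(())
  }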

reactordev · 2h ago
Why stop there? Why not sign and verify off the mother of all root CA’s, your TPM 2.0 Module EEPROM?

(fun to walk down through the trees and the silicon desert of despair, to the land of the ROM, where things can never change)

throw_a_grenade · 3h ago
Unironically, I think having a systemd-something util that would provide TLS certs for .services upon encountering a specific config knob in the [Service] section would be much better than having a multitude of uncoordinated ACME clients that will quickly burn through allowed rate limits. Even just as a courtesy to LE/ISRG's computational resources.
jcgl · 3h ago
It wouldn't specifically have to be a systemd project or anything; you could make a systemd generator[0] so that you could list out certs as units in the Requires= of a unit. That'd be really neat, actually.

[0] https://www.freedesktop.org/software/systemd/man/latest/syst...

throw_a_grenade · 32m ago
I found this: https://github.com/woju/systemd-dehydrated/

It essentially creates per-domain units. However, those are timers, not services, because the underlying tool doesn't have a long-running daemon; it's designed to run off cron. So I can't depend on them directly, and I also need to add a multitude of drop-ins that will restart or reload the services that use the certificates (https://github.com/woju/systemd-dehydrated/blob/master/contr...). Couldn't figure out any way to automate this better.
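The closest I got to centralizing it is a path unit watching the certificate file plus a oneshot that reloads every consumer, sketched below with hypothetical names and paths:

  # cert-reload.path: fires whenever the renewed certificate lands on disk
  [Path]
  PathChanged=/etc/ssl/example.com/fullchain.pem

  [Install]
  WantedBy=multi-user.target

  # cert-reload.service: one place that knows every certificate consumer
  [Service]
  Type=oneshot
  ExecStart=/usr/bin/systemctl reload nginx.service postfix.service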

0x69420 · 3h ago
multiple services depending on different outputs of a single acme client can be expressed, right now, in 2025, within systemd unit definitions, without deeply integrating a systemd-certd-or-whatever-as-such.

which is basically ideal, no? for all the buy-in that the systemd stapling-svchost.exe-onto-cgroups approach asks of us, at the very least we have a sufficiently expressive system to do that sort of thing. where something on the machine has a notion of what wants what from what, and you can issue a command to see whether that dependency is satisfied. like. we are there. good. nice. hopefully ops guys are content to let sleeping dogs lie, right?

...right?

Spivak · 3h ago
A systemd-certd would actually kinda slap. One cert store to rule them all for clients, a way to define certs and specify where they're supposed to be placed with automatic reload using the systemd dependency solver, a way to mount certs into services privately, a unified interface for interacting with the cert store.
nottorp · 2h ago
So ... not only would your system take ages to boot without the internets(tm) because that's how systemd works, it will be extended in the same spirit to not boot at all if letsencrypt is down.

Sounds enterprise.

Also, you people forgot that my proposal is to also fold the http server in, and ideally all the scripting languages and all of npm just in case.

throw_a_grenade · 30m ago

  ExecStart=/usr/bin/python3 -m http.server
  WorkingDirectory=/srv/www

?
oliwarner · 3h ago
The idea that the thing that needs the certificate, gets the certificate doesn't seem that perverse to me. The interface/port-bound httpd needs to know what domains it's serving, what certificates it's using.

Automating this is pure benefit to those that want it, and a non-issue to those who don't — just don't use it.

EvanAnderson · 3h ago
I'm with you on this. I run my ACME clients as least-privileged standalone applications.

On a machine where you're only running a webserver, I suppose having Nginx do the ACME renewal makes sense.

On many of the machines I support I need certificates for other services, too. In many cases I also have to distribute the certificate to multiple machines.

I find it easy to manage and troubleshoot a single application handling the ACME process. I can't imagine having multiple logs to review and monitor would be easier.

atomicnumber3 · 3h ago
I personally think nginx is the kind of project I'd allow to have its own acme client. It's extremely extremely widely used software and I would be surprised if less than 50% of the certs LE issues are not exclusively served via nginx.

Now if Jenkins adds acme support then yes I'll say maybe that one is too far.

muppetman · 3h ago
But it's a webserver. I'm sure it farms out sending emails from forms it serves; I doubt it has a PHP library built in, surely it farms that out to php-fpm? It doesn't have a Redis library or NodeJS built in. Why's ACME different?
tuckerman · 3h ago
I get what you are saying but surely obtaining a certificate is much closer to being considered a core part of a web server related to transport, especially in 2025 when browsers throw up "doesn’t support a secure connection with HTTPS" messages left and right, than those other examples.

I think there is also clearly demand: caddy is very well liked and often recommended for hobbyists and I think a huge part of that is the built in certificate management.

andmarios · 3h ago
Nginx (and Apache, etc) is not just a web server; it is also a reverse proxy, a TLS termination proxy, a load balancer, etc.

The key service here is "TLS termination proxy", so being able to issue certificates automatically was pretty high on the wish list.

dividuum · 3h ago
Well, it already has, among a ton of other modules, a memcached and a JavaScript module (njs), so you’re actually not that far off. An optional ACME module sounds fitting.
firesteelrain · 3h ago
To your point, we use Venafi, and it has clients that act as orchestrators to deploy the new cert and restart the web service. The web service itself doesn't need to be ACME-aware.

Venafi supports the ACME protocol, so it can be the ACME server, like Let's Encrypt.

I am speaking purely about on-prem, non-internet-connected scenarios.

chrisweekly · 3h ago
"surprised if less than 50% of the certs LE issues are not..."

triple-negative, too hard to parse

mholt · 2h ago
Integrated ACME clients have proven to be more robust, more resilient, more automatic, and easier to use than exposing multiple moving parts: https://github.com/https-dev/docs/blob/master/acme-ops.md#ce...

To avoid a splintered/disjoint ecosystem, library code can be reused across many applications.

Ajedi32 · 3h ago
It makes sense to me. If an application needs a signed certificate to function properly, why shouldn't it include code to obtain that certificate automatically when possible?

Maybe if there were OS level features for doing the same thing you could argue the applications should call out to those instead, but at least on Linux that's not really the case. Why should admins need to install and configure a separate application just to get basic functionality working?

9dev · 2h ago
I’m of the opposite opinion, really: Automatic TLS certificate requests are just an implementation detail of software able to advertise as accepting encrypted connections. Similarly many applications include an OAuth client that automatically takes care of requesting access tokens and refreshing them automatically, all using a discovery URI and client credentials.

Lots of apps should support this automatically, with no intervention necessary, and just communicate securely with each other. And ACME is the way to enable that.

imiric · 2h ago
Why should every piece of software need to support encrypted connections? That is a rabbit hole of complexity which can easily be implemented incorrectly, and is a security risk of its own.

Instead, it would make more sense for TLS to be handled centrally by a known and trusted implementation, which proxies the communication with each backend. This is a common architecture we've used for decades. It's flexible, more secure, keeps complexity compartmentalized, and is much easier to manage.
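Concretely, that decades-old shape is just TLS termination on the proxy and plaintext on loopback; a minimal nginx sketch (hostnames and paths are placeholders):

  server {
      listen 443 ssl;
      server_name app.example.com;

      ssl_certificate     /etc/ssl/app/fullchain.pem;
      ssl_certificate_key /etc/ssl/app/privkey.pem;

      location / {
          # The backend speaks plain HTTP, and only on localhost;
          # all TLS complexity stays in this one well-audited place.
          proxy_pass http://127.0.0.1:8080;
      }
  }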

tuckerman · 8m ago
Isn't nginx one of the de facto choices (alongside HAProxy) for such a proxy and therefore it makes sense to include an ACME client? (This might be what you already had in mind but given the top level comment of the thread we are in I wasn't sure)
dizhn · 2h ago
I believe caddy was the first standalone software to include automated acme. It's a web server (and a proxy) so it's a very good fit. One software many domains. Proxmox likewise is a hypervisor hosting many VMs (hence domains). Another good fit. Though as far as I know they don't provide the service for the VMs "yet".
renewiltord · 3h ago
You just don't load the module and use certbot and that will work which is what I'm doing. People get carried away with this stuff. The software is quite modular. It's fine for people to simplify it.

For a bunch of tech-aware people the inability for you all here to modify your software to meet your needs is insane. As a 14 year old I was using the ck patch series to have a better (for me) scheduler in the kernel. Every other teenager could do this shit.

In my 30s I have a low friction set up where each bit of software only does one thing and it's easy for me to replicate. Teenagers can do this too.

Somehow you guys can't do either of these things. I don't get it. Are you stupid? Just don't load the module. Use stunnel. Use certbot. None of these things are disappearing. I much prefer. I much prefer. I much prefer. Christ. Never seen a userbase that moans as much about software (I moan about moaning - different thing) while being unable to do anything about it as HN.

mikestorrent · 2h ago
The unix philosophy is still alive... and by that I mean complaining on newsgroups about things, not "do one thing well"
btreecat · 4h ago
Congratulations to the folks involved. I'm sure this wasn't a trivial lift. And the improvement to free security posture is a net positive for our community.

I have moved most of my personal stuff to caddy, but I look forward to testing out the new release for a future project and learning about the differences in the offerings.

Thanks for this!


ku1ik · 1h ago
I wish they listed Caddy before Traefik there - Caddy pioneered this hands-free certificate automation in a webserver.
ilaksh · 2h ago
What's the easiest way to install the newest nginx in Ubuntu 24? PPA or something?
thresh · 2h ago
Just use the official packages from https://nginx.org/en/linux_packages.html#Ubuntu

nginx-module-acme is available there, too, so you don't need to compile it manually.
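For a taste of what that looks like, here is a minimal config along the lines of the module's documentation (directive names are from the nginx-acme README as I understand it; verify against the docs before copying):

  resolver 127.0.0.1:53;

  acme_issuer letsencrypt {
      uri https://acme-v02.api.letsencrypt.org/directory;
      contact admin@example.com;
      state_path /var/cache/nginx/acme-letsencrypt;
      accept_terms_of_service;
  }

  acme_shared_zone zone=acme_shared:1M;

  server {
      listen 443 ssl;
      server_name example.com;

      acme_certificate letsencrypt;
      ssl_certificate       $acme_certificate;
      ssl_certificate_key   $acme_certificate_key;
      ssl_certificate_cache max=2;
  }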

mikestorrent · 2h ago
Docker container?
mark_mart · 3h ago
Does this mean we don’t need to use certbot?
predmijat · 2h ago
Currently it supports only HTTP-01 challenges (no wildcards, must be reachable).
mikestorrent · 2h ago
What about with e.g. internal ACME endpoints like https://developer.hashicorp.com/vault/tutorials/pki/pki-acme...
jaas · 3h ago
If you are using Nginx, then likely yes.
esher · 3h ago
Will that make local development setup easier? Like creating some certs on the fly?
endorphine · 4h ago
What took them so long? Honest question.

I'd expect nginx to have this years ago. Is that so hard to implement for some reason?

aaronax · 4h ago
Slow adoption of ACME by corporate customers that pay the bills. Must be a bigger effort than one would initially think, too.
lysace · 4h ago
Nginx is now owned by F5. Big, expensive and amazingly slow in terms of development.

Related note: I really enjoy using haproxy for load balancing.

senko · 4h ago
Looks like haproxy also doesn't support it natively.

(unless I'm googlin' it wrong - all info points to using it with acme.sh)

Graphon1 · 3h ago
No love for caddyserver?
mholt · 2h ago
There was quite a bit already, about a month ago: https://news.ycombinator.com/item?id=44889941
jsheard · 3h ago
See also: nginx's HTTP/3 support still being experimental, when pretty much every other server besides Apache shipped it years ago.
everfrustrated · 1h ago
Yes, though the odds are anybody wanting HTTP/3 is likely also using a CDN. There aren't too many CDNs which support HTTP/3 back to origin. Heck, most of them don't even support IPv6-only origins.
preisschild · 4h ago
What does this offer to you vs using a tool such as certbot/cert-manager, and then just referencing the path in nginx?
aargh_aargh · 4h ago
One less program to install, configure, upgrade, watch vulnerabilities in, monitor.
benwilber1 · 3h ago
All of those things also apply to this module, since it's an extra module that you have to install separately. It's not included with the nginx base distribution. You have to configure it specifically, you have to monitor it, you have to upgrade it and watch for vulnerabilities.
petre · 2h ago
Not needing a Python interpreter and setting up cron jobs? That's the sole reason for using Caddy, really, because it's just install and forget. I never had an expired certificate with it. I don't want to mess with an entirely different webserver config either, after having fully configured my nginx instances. Too bad they wrote it in Rust instead of C; now I need another compiler to build it. Minor nuisance. Hopefully it will get packaged.
petcat · 4h ago
> the popular open source web server NGINX announced support for ACME with their official ngx_http_acme module (implemented with memory safe Rust code!).

Why even bother calling out that it's written in "memory safe Rust code" when the code itself is absolutely riddled with unsafe {} everywhere.

It seems to me that it's written in memory unsafe Rust code.

ninkendo · 3h ago
Looks like the only unsafe parts are the parts which interop with the rest of the nginx codebase (marshalling pointers, calling various functions in nginx_sys, etc.) Rust cannot guarantee this external C stuff adheres to the necessary invariants, hence it must be marked unsafe.

I don't see a way to integrate rust as a plugin into a C codebase without some level of unsafe usage like this.
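In miniature, every crossing of that boundary looks something like this (the C function here is made up for illustration, not the real nginx API):

  // Illustrative FFI boundary; ngx_do_something is a made-up C function.
  extern "C" {
      // Rust must take this signature on faith: the compiler cannot see,
      // let alone verify, the C implementation behind it.
      fn ngx_do_something(buf: *mut u8, len: usize) -> i32;
  }

  fn do_something(buf: &mut [u8]) -> i32 {
      // The unsafe block marks where we vouch for the C side's invariants
      // (valid pointer, correct length): exactly the interop case above.
      unsafe { ngx_do_something(buf.as_mut_ptr(), buf.len()) }
  }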

benwilber1 · 3h ago
I think the nginx-sys Rust bindings are still pretty new and raw. I've experimented with them before and have given up because of the lack of a polished, safe, Rust API.

Right now you're pretty much stuck casting pointers to and from C land if you want to write a native nginx module in Rust. I'm sure it will get better in the future.

rererereferred · 4h ago
People like bragging/advertising about their language of choice. Maybe others who like the language will get interested in collaborating, or employers who need developers for that language might get in contact with them.

Also, unsafe rust is still safer than C.

pjmlp · 3h ago
Actually it isn't, because there are a few gotchas.

Unsafe Rust, like unsafe code blocks in any language that offers them, should be kept to the bare minimum, as building blocks.

johnisgood · 3h ago
> Also, unsafe rust is still safer than C.

I highly doubt that, and developers of Rust have confirmed here on HN that when it comes to unsafe code within a codebase, it is not just the unsafe blocks that are affected; the whole codebase is affected.

vsgherzi · 3h ago
Unsafe Rust still enforces many of Rust's rules. The only extra powers you get with unsafe Rust are dereferencing raw pointers, calling unsafe traits/functions, and the ability to access or modify mutable statics. You can read more about this here: https://doc.rust-lang.org/nomicon/what-unsafe-does.html

Unsafe Rust is definitely safer than normal C. All the unsafe keyword really means is that the compiler cannot verify the behavior of the code; it's up to the programmer. This is for cases where 1. the programmer knows more than the compiler, or 2. we're interacting with hardware or FFI.

When Rust developers say unsafe affects the whole codebase, what they mean is that UB in unsafe code could break guarantees about the whole program (even the safe parts). Just because something is unsafe doesn't inherently mean it's going to break everything; it just needs more care when writing and reviewing, just as C and C++ do.

SAI_Peregrinus · 2h ago
And an unsafe block in Rust having UB is exactly as bad as having UB in C or C++: the whole program's behavior can be altered in unexpected ways. So at its worst it's equivalent to C, but if there's no UB encountered in the unsafe block(s) then the whole program is safe, where for C you can hit UB anywhere in the program not just in annotated sections.
vsgherzi · 20m ago
It's strange to me that others push the unsafe keyword as an "I told you so". Perhaps it's just the way Rust presents it. Most Rustaceans I follow agree that Rust's power is turning unsafe things into safe wrappers for the programmer to use. Much of the std library is implemented with unsafe to make things work at all, and this isn't really a bad thing; it is heavily vetted and tested.
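The pattern in miniature, using std's slice::get_unchecked as the unsafe primitive: the check sits in front of the unsafe block, so safe callers can never reach the UB:

  // A safe wrapper over an unsafe internal: the public API upholds the
  // invariant (index in bounds), so callers can't trigger UB.
  fn first_byte(bytes: &[u8]) -> Option<u8> {
      if bytes.is_empty() {
          None
      } else {
          // SAFETY: index 0 is in bounds; checked just above.
          Some(unsafe { *bytes.get_unchecked(0) })
      }
  }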
jcranmer · 2h ago
Rust's core object semantics are very nearly that of C. Really, the only major difference between Rust and C is that you can't violate mutable aliasing rules in Rust, even in unsafe, and C has a strict aliasing mode that Rust can't opt into.

The main practical difference is that Rust pushes you away from UB whereas C tends to push you into it; signed integer overflow is default-UB in C, while Rust makes you go out of your way to get UB integer overflow. Furthermore, the general design philosophy of Rust is that you build "safe abstractions" which might require unsafe to implement, but the interface should be impossible to use in a way that causes UB. It's definitely questionable how many people actually adhere to those rules--some people are just going to slap the unsafe keyword on things to make the code compile--but it's still a pretty far distance from C, where the language tends to make building abstractions of any kind, let alone safe ones, difficult.

jedisct1 · 3h ago
Native, but requires Rust. No, thanks.
johnisgood · 3h ago
Agreed.

I have had my share of compiling Rust programs, pulling in thousands of dependencies. If people think it is good practice, then well, good for them, but they should not sell Rust as a safe language when it encourages such unsafe practices, especially when there are thousands of dependencies and probably all of them have their own unsafe blocks (even this ACME support does), which affect the whole codebase.

I am going to keep using certbot. No reason to switch.

vsgherzi · 2h ago
This is a problem I'm pretty invested in so let's take a look.

If we add up the list of dependencies from the module, this is what we get:

  anyhow = "1.0.98"
  base64 = "0.22.1"
  bytes = "1.10.1"
  constcat = "0.6.1"
  futures-channel = "0.3.31"
  http = "1.3.1"
  http-body = "1.0.1"
  http-body-util = "0.1.3"
  http-serde = "2.1.1"
  hyper = { version = "1.6.0", features = ["client", "http1"] }
  libc = "0.2.174"
  nginx-sys = "0.5.0-beta"
  ngx = { version = "0.5.0-beta", features = ["async", "serde", "std"] }
  openssl = { version = "0.10.73", features = ["bindgen"] }
  openssl-foreign-types = { package = "foreign-types", version = "0.3" }
  openssl-sys = { version = "0.9.109", features = ["bindgen"] }
  scopeguard = "1"
  serde = { version = "1.0.219", features = ["derive"] }
  serde_json = "1.0.142"
  siphasher = { version = "1.0.1", default-features = false }
  thiserror = { version = "2.0.12", default-features = false }
  zeroize = "1.8.1"

Now, vendoring and counting the lines of those, we get 2,171,685 lines of Rust. This includes the vendored packages from cargo vendor, so what happens when we take just the dependencies for our OS? Vendoring for just x86 Linux chops the line count to 1,220,702; not bad for just removing packages that aren't needed, but still a lot. Let's see what's actually taking up all that space:

  996K ./regex
  1.0M ./libc/src/unix/bsd
  1.0M ./serde_json
  1.0M ./tokio/src/runtime
  1.1M ./bindgen-0.69.5
  1.1M ./tokio/tests
  1.2M ./bindgen
  1.2M ./openssl/src
  1.4M ./rustix/src/backend
  1.4M ./unicode-width/src
  1.4M ./unicode-width/src/tables.rs
  1.5M ./libc/src/unix/linux_like/linux
  1.5M ./openssl
  1.6M ./vcpkg/test-data/no-status
  1.6M ./vcpkg/test-data/no-status/installed
  1.6M ./vcpkg/test-data/no-status/installed/vcpkg
  1.7M ./regex-syntax
  1.7M ./regex-syntax/src
  1.7M ./syn/src
  1.9M ./libc/src/unix/linux_like
  1.9M ./vcpkg/test-data/normalized/installed/vcpkg/info
  2.0M ./vcpkg/test-data/normalized
  2.0M ./vcpkg/test-data/normalized/installed
  2.0M ./vcpkg/test-data/normalized/installed/vcpkg
  2.2M ./unicode-width
  2.4M ./syn
  2.6M ./regex-automata/src
  2.7M ./rustix/src
  2.8M ./rustix
  2.9M ./regex-automata
  3.6M ./vcpkg/test-data
  3.9M ./libc/src/unix
  3.9M ./tokio/src
  3.9M ./vcpkg
  4.5M ./libc/src
  4.6M ./libc
  5.3M ./tokio
  12M  ./linux-raw-sys
  12M  ./linux-raw-sys/src

Coming in at 12MB we have linux-raw-sys, which provides bindings to the Linux userspace API, a pretty reasonable requirement. Then libc and Tokio. Since this module is async, Tokio is a must-have, and it is pretty much bound to Rust at this point. That project is extremely well vetted and is used in industry daily.

Removing those, we are left with 671,031 lines of Rust.

Serde is a well-known dependency that allows for marshalling of data types. Hyper is the curl of the Rust world, allowing interaction with the network.

I feel like this is an understandable amount of code given the complexity of what it's doing. Of course to some degree I agree with you and often worry about dependencies. I have a whole article on it here.

https://vincents.dev/blog/rust-dependencies-scare-me/

I think I'd be more satisfied if things get "blessed" by the foundation like rustls is being. This way I know the project is not likely to die, and has the backing of the language as a whole. https://rustfoundation.org/media/rust-foundation-launches-ru...

I think we can stand to write more things on our own (sudo-rs did this) https://www.memorysafety.org/blog/reducing-dependencies-in-s...

But to completely ignore or not interact with the language seems like throwing the baby out with the bathwater to me

johnisgood · 1h ago
I do not think the language is to blame for it anyway. That said, I just compiled Zed in release mode and it pulled in about 2,000 dependencies; I do not think that this is "normal". Perhaps it is if one is coming from npm, but come on, we should know better.
vsgherzi · 25m ago
The problem is definitely real, I'd hope that as the ecosystem matures we come to better solutions. Microsoft and google are pretty heavily invested these days so I'd expect they'd be able to provide some clarity here.

I think we just need to push a culture of writing your own code for the small things you're pulling in. (Of course, that "just" is pulling a lot of weight :) )

I just get tired of everyone trying to burn down crates.io as an inherent evil.

aakkaakk · 4h ago
Be aware, nginx is developed by a Russian.
petcat · 4h ago
nginx has been owned and developed by an American company for a long time...


themafia · 3h ago
The "cold war" was one of the dumbest features of our world over the past century.

It's amazing to me that people are still addicted to it.

TiredOfLife · 3h ago
The last Russian quite loudly stopped working on it and released a fork
rererereferred · 3h ago
I own my copy of the code.
dizlexic · 3h ago
NOT A RUSSIAN??!?!?!?
petre · 2h ago
Yes, I'm aware. He also had his office raided by the Kremlin's minions.

https://www.themoscowtimes.com/2019/12/13/russia-nginx-fsb-r...