Nginx Introduces Native Support for ACME Protocol

210 points by phickey | 8/13/2025, 3:41:55 PM | blog.nginx.org

Comments (86)

Shank · 1h ago
> The current preview implementation supports HTTP-01 challenges to verify the client’s domain ownership.

DNS-01 is probably the most impactful for users of nginx that isn't public-facing (e.g., via Nginx Proxy Manager). I really want to see DNS-01 land! I've always felt that it's also one of the cleanest because it's just updating some records and doesn't need to be directly tethered to what you're hosting.
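
For anyone unfamiliar: DNS-01 boils down to publishing a TXT record that the CA looks up before issuing. A minimal sketch of what ends up in DNS (example.com is a placeholder):

    # the ACME client publishes a digest of the key authorization as TXT:
    #   _acme-challenge.example.com. 60 IN TXT "<base64url token digest>"
    # the CA then verifies it with an ordinary DNS query, e.g.:
    dig +short TXT _acme-challenge.example.com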

clvx · 1h ago
But you have to have your DNS API key loaded, and many DNS providers don't allow API keys per zone. I do like it, but a compromise could be awful.
grim_io · 1h ago
Sounds like a DNS provider problem. Why would Nginx feel the need to compromise because of some 3rd party implementation detail?
ddtaylor · 46m ago
It's a bit of a pain in the ass, but you can actually just publish the DNS records yourself. It's clear they're on the way out though, as I believe it's only a 30-day valid certificate or something.

I use this for my Jellyfin server at home so that anyone can just type in blah.foo regardless of whether their device supports anything like mDNS, as half the devices claim to support it but don't do it correctly.

hashworks · 1h ago
If you host a hidden primary yourself you get that easily.
Sesse__ · 1h ago
Many DNS providers also don't support having an external primary.
bananapub · 1h ago
No you don't, you can just run https://github.com/joohoi/acme-dns anywhere, and then CNAME _acme-challenge.realdomain.com to aklsfdsdl239072109387219038712.acme-dns.anywhere.com. Then your ACME client just talks to the acme-dns API, which lets it do nothing at all aside from deal with challenges for that one long random domain.
Arnavion · 11m ago
You can do it with an NS record, i.e., _acme-challenge.realdomain.com pointing to the DNS server that you can program to serve the challenge response. No need to make a CNAME and involve an additional domain in the middle.
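
In zone-file form, the two delegation styles look roughly like this (a sketch; all names are placeholders, and note the label is _acme-challenge with a hyphen):

    ; CNAME delegation (acme-dns style): challenges live on a separate domain
    _acme-challenge.realdomain.com.  IN  CNAME  randomid.acme-dns.example.org.
    ; NS delegation: a nameserver you control answers for just this label
    _acme-challenge.realdomain.com.  IN  NS     challenge-ns.example.org.
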
rglullis · 1h ago
I've been hoping to get ACME challenge delegation on traefik working for years already. The documentation says it supports it, but it simply fails every time.

If you have any idea how this tool would work on a docker swarm cluster, I'm all ears.

xiconfjs · 1h ago
If even PowerDNS doesn't support it :(
samgranieri · 1h ago
I use DNS-01 in my homelab with step-ca and caddy. It's a joy to use.
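
For reference, pointing Caddy at an internal CA like step-ca is a single global option; a sketch, assuming step-ca's default ACME provisioner path and a made-up hostname:

    {
        acme_ca https://ca.internal:9000/acme/acme/directory
    }
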
reactordev · 43m ago
+1 for caddy. nginx is so 2007.
supriyo-biswas · 39m ago
Only if they'd get the K8s ingress out of the WIP phase; I can't wait to possibly get rid of the cert-manager and ingress shenanigans you get with others.
reactordev · 32m ago
Yup. I can’t wait for the day I can kill my caddy8s service.

The best thing about caddy is the fact you can reload config, add sites, routes, without ever having to shut down. Writing a service to keep your orchestration platform and your ingress in sync is meh. K8s has the events, the DNS service has the src mesh records, you just need a way to tell caddy to send it to your backend.

The feature should be done soon but they need to ensure it works across K8s flavors.

01HNNWZ0MV43FF · 7m ago
I think you can do that with Nginx too, but the SWAG wrapper discourages it for some reason
chaz6 · 47m ago
One of Traefik's shortcomings with ACME is that you can only use one API key per DNS provider. This is problematic if you want to restrict API keys to a domain, or use domains belonging to two different accounts. I hope Nginx will not have the same constraint.
attentive · 29m ago
Yes, ACME-DNS please - https://github.com/joohoi/acme-dns

Lego supports it.

kijin · 33m ago
A practical problem with DNS-01 is that every DNS provider has a different API for creating the required TXT record. Certbot has more than a dozen plugins for different providers, and the list is growing. It shouldn't be nginx's job to keep track of all these third-party APIs.

It would also be unreasonable to tell everyone to move their domains to a handful of giants like AWS and Cloudflare who already control so much of the internet, just so they could get certificates with DNS-01. I like my DNS a bit more decentralized than that.
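
One provider-neutral escape hatch does exist: if your DNS server supports RFC 2136 dynamic updates, a single standard client covers the TXT dance. A sketch, with a placeholder key path, domain, and token:

    # push the challenge record with a TSIG-authenticated dynamic update
    nsupdate -k /etc/bind/tsig.key <<'EOF'
    update add _acme-challenge.example.com 60 TXT "challenge-token-digest"
    send
    EOF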

Spivak · 1h ago
I don't even know why anyone wouldn't use the DNS challenge unless they had no other option. I've found it to be annoying and brittle, maybe less so now with native web server support. And you can't get wildcards.
cortesoft · 1h ago
My work is mostly running internal services that aren’t reachable from the external internet. DNS is the only option.

You can get wildcards with DNS. If you want *.foo.com, you just need to be able to set _acme-challenge.foo.com and you can get the wildcard.
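
For example, with certbot's manual DNS mode (a sketch; in practice you'd automate the TXT step with a provider plugin or hook):

    certbot certonly --manual --preferred-challenges dns \
        -d '*.foo.com' -d foo.com
    # certbot prints the TXT value to publish at _acme-challenge.foo.com,
    # waits for confirmation, then completes issuance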

filleokus · 1h ago
Spivak is saying that the DNS method is superior (i.e you are agreeing - and I do too).

One reason I can think of for HTTP-01 / TLS-ALPN-01 is on-demand issuance, issuing the certificate when you get the request. Which might seem insane (and kinda is), but can be useful for e.g. crazy web-migration projects. If you have an enormous, deeply levelled domain sprawl that's almost never used but you need it up for some reason, it can be quite handy.

(Another reason, soon, is that HTTP-01 will be able to issue certs for IP addresses: https://letsencrypt.org/2025/07/01/issuing-our-first-ip-addr...)

cortesoft · 1h ago
Oh I totally misread the comment.

Nevermind, I agree!

bryanlarsen · 1h ago
> DNS is the only option

DNS and wildcards aren't the only options. I've done annoying hacks to give internal services an HTTPS cert without using either.

But they're the only sane options.

cyberax · 49m ago
One problem with wildcards is that any service with *.foo.com can pretend to be any other service. This is an issue if you're using mutual TLS authentication and want to trust the server's certificate.

It'd be nice if LE could issue intermediate certificates constrained to a specific domain ( https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1.... ).
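
That extension does exist in X.509 (nameConstraints); public CAs just don't issue constrained intermediates in practice. A sketch of how it would look in an openssl extensions config, with foo.com as the placeholder domain:

    [ v3_constrained_ca ]
    basicConstraints = critical, CA:TRUE, pathlen:0
    keyUsage         = critical, keyCertSign, cRLSign
    # this CA may only sign certs for names under foo.com
    nameConstraints  = critical, permitted;DNS:foo.com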

bityard · 1h ago
The advantage to HTTP validation is that it's simple. No messing with DNS or API keys. Just fire up your server software and tell it what your hostname is and everything else happens in the background automagically.
jeroenhd · 58m ago
If you buy your domain from a bottom-of-the-barrel domain reseller and then don't pay for decent DNS, you don't have the option.

Plus, it takes setting up an API key and most of the time you don't need a wildcard anyway.

creatonez · 1h ago
Why would nginx ever need support for the DNS-01 challenge type? It always has access to `.well-known` because nginx is running an HTTP server for the entire lifecycle of the process, so you'd never need to use a lower level way of doing DV. And that seems to violate the principle of least privilege, since you now need a sensitive API token on the server.
0x457 · 1h ago
Because while Nginx always has access to .well-known, the thing that validates on the issuer side might not. I use DNS challenge to issue certificates for domains that resolve to IPs in my overlay network.

The issue is that supporting dns-01 isn't just supporting dns-01; it's providing a common interface to interact with different providers that implement dns-01.

justusthane · 1h ago
You can’t use HTTP-01 if the server running nginx isn’t accessible from the internet. DNS-01 works for that.
chrismorgan · 1h ago
Wildcard certificates are probably the most important answer: they’re not available via HTTP challenge.
lukeschlather · 1h ago
Issuing a new certificate with the HTTP challenge pretty much requires you to allow for 15 minutes of downtime. It's really not suitable for any customer-facing endpoint with SLAs.
chrismorgan · 59m ago
Sounds like you’re doing it wrong. I don’t know about this native support, but I’d be very surprised if it was worse than the old way, which could just have Certbot put files in a path NGINX was already serving (webroot method), and then when new certificates are done send a signal for NGINX to reload its config. There should never be any downtime.
kijin · 51m ago
Certbot has a "standalone" mode that occupies port 80 and serves /.well-known/ by itself.

Whoever first recommended using that mode in anything other than some sort of emergency situation needs to be given a firm kick in the butt.

Certbot also has a mode that mangles your apache or nginx config files in an attempt to wire up certificates to your virtual hosts. Whoever wrote the nginx integration also needs a butt kick; it's terrible. I've helped a number of people fix their broken servers after certbot mangled their config files. Just because you're on a crusade to encrypt the web doesn't give you the right to mess with other programs' config files; that's not how Unix works!

Kwpolska · 58m ago
Where would this downtime come from? Your setup is really badly configured if you need downtime to serve a new static file.
kijin · 59m ago
Only if you let certbot take down your normal nginx and occupy port 80 in standalone mode. Which it doesn't need to, if normal nginx can do the job by itself.

When I need to use the HTTP challenge, I always configure the web server in advance to serve /.well-known/ from a certain directory and point certbot at it with `certbot certonly --webroot-path`. No need to take down the normal web server. Graceful reload. Zero downtime. Works with any web server.
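
Concretely, the pairing looks something like this (paths are illustrative):

    # nginx: serve challenge files from a fixed webroot
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    # certbot: write challenges into that same webroot, then reload gracefully
    certbot certonly --webroot --webroot-path /var/www/letsencrypt -d example.com
    nginx -s reload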

smarx007 · 7m ago
When will this land in mainline distros (no PPAs etc)? Given that a new stable version of Debian was released very recently, I would imagine August 2027 for Debian and maybe April 2026 for Ubuntu?

In this very thread some people complain that certbot uses snap for distribution. Imagine making a feature release and having to wait 1-2 years until your users get it on a broad scale.

giancarlostoro · 2m ago
Nginx maintains their own repository from which you can install nginx on your Ubuntu / Debian systems.

I looked at Arch and they're a version behind, which surprised me. It must not be a heavily maintained Arch package.

Saris · 4m ago
I assume they're complaining that it's a snap vs flatpak, not so much vs the distro package repos.
thaumaturgy · 1h ago
Good to see this. For those that weren't aware, there's been a low-effort solution with https://github.com/dehydrated-io/dehydrated, combined with a pretty simple couple of lines in your vhost config:

    location ^~ /.well-known/acme-challenge/ {
        alias <path-to-your-acme-challenge-directory>;
    }
Dehydrated has been around for a while and is a great low-overhead option for http-01 renewal automation.
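
The other half is just pointing dehydrated's WELLKNOWN at whatever directory you aliased above; a sketch with commonly used paths (adjust to taste):

    # /etc/dehydrated/config
    WELLKNOWN="/var/www/dehydrated"

    # /etc/dehydrated/domains.txt -- one line per certificate, extra names become SANs
    example.com www.example.com
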
andrewmcwatters · 1h ago
This is really cool, but I find projects that have thousands of people depending on them not cutting a stable release really distasteful.

Edit: Downvote me all you want, that's reality folks, if you don't release v1.0.0, the interface you consume can change without you realizing it.

Don't consume major version 0 software, it'll bite you one day. Convince your maintainers to release stable cuts if they've been sitting on major version 0 for years; it's just lazy and immature practice to abuse semantic versioning like that. Maintainers can learn and grow. It's normal.

Dehydrated has been major version 0 for 7 years, it's probably past due.

See also React, LÖVE, and others that made 0.n.x jumps to n.x.x. (https://0ver.org)

CalVer: "If both you and someone you don't know use your project seriously, then use a serious version."

SemVer: "If your software is being used in production, it should probably already be 1.0.0."

https://0ver.org/about.html

nothrabannosir · 1h ago
Distasteful by whom, the people depending on it? Surely not… the people providing free software at no charge, as is? Surely not…

Maybe not distasteful by any one in particular, but just distasteful by fate or as an indicator of misaligned incentives or something?

thaumaturgy · 40m ago
FWIW I have been using and relying on Dehydrated to handle LetsEncrypt automation for something like 10 years, at least. I think there was one production-breaking change in that time, and to the best of my recollection, it wasn't a Dehydrated-specific issue, it was a change to the ACME protocol. I remember the resolution for that being super easy, just a matter of updating the Dehydrated client and touching a config file.

It has been one of the most reliable parts of my infrastructure and I have to think about it so rarely that I had to go dig the link out of my automation repository.

ygjb · 1h ago
That's the great thing about open source. If you are not satisfied with the free labour's pace of implementing a feature you want, you can do it yourself!
andrewmcwatters · 1h ago
Yes, absolutely! You could just pick a version to fork, set it to v1.0.0 for your org's production path, and then you'd know the behavior would never change.

You could then merge updates back from upstream.

john01dav · 54m ago
It's generally easier to just deal with breaking changes, since writing code is faster than gaining understanding, and breaking changes in the external API are generally much better documented than internals.
dspillett · 1h ago
Feel free to provide and support a "stable" branch/fork that meets your standards.

Be the change you want to see!

Edit to comment on the edit:

> Edit: Downvote me all you want

I don't generally downvote, but if I were going to I would not need your permission :)

> that's reality folks, if you don't release v1.0.0, the interface you consume can change without you realizing it.

I assume you meant "present" there rather than "consume"?

Anyway, 1.0.0 is just a number. Without relevant promises and a track record and/or contract to back them up breaking changes are as likely there as with any other number. A "version 0.x.x" of a well used and scrutinized open source project is more reliable and trustworthy than something that has just had a 1.0.0 sticker slapped on it.

Edit after more parent edits: or go with one of the many other versioning schemes. Maybe ItIsFunToWindUpEntitledDicksVer, which says: "stick with 0.x for eternity, go on, you know you want to!"

dizhn · 2h ago
This is pretty big. Caddy has had this forever, but not everybody wants to use Caddy. It'll probably eat into the user share of software like Traefik.
elashri · 1h ago
What I really like about Caddy is its better syntax. I actually use nginx (via Nginx Proxy Manager) and Traefik, but recently I did one project with Caddy and found it very nice. I might get the time to change my selfhosted setup to use Caddy in the future, but probably will go with something like pangolin [1] because it provides an alternative to Cloudflare Tunnels too.

[1] https://github.com/fosrl/pangolin

Saris · 1m ago
Caddy does have some bizarre limitations I've run into, particularly around log file permissions: you'd want other processes like promtail to be able to read the logs, but Caddy always writes them with very restrictive permissions, and you cannot change that.

I find their docs also really hard to deal with; trying to figure out something that would be super simple on Nginx can be really difficult on Caddy, if it's outside the scope of 'normal stuff'

kstrauser · 1h ago
I agree. That, and the sane defaults are almost always nearly perfect for me. Here is the entire configuration for a TLS-enabled HTTP/{1.1,2,3} static server:

  something.example.com {
    root * /var/www/something.example.com
    file_server
  }
That's the whole thing. Here's the setup of a WordPress site with all the above, plus PHP, plus compression:

  php.example.com {
    root * /var/www/wordpress
    encode
    php_fastcgi unix//run/php/php-version-fpm.sock
    file_server
  }
You can tune and tweak all the million other options too, of course, but you don't have to for most common use cases. It Just Works more than any similarly complex server I've ever been responsible for.
dizhn · 1h ago
I checked out pangolin too recently but then I realized that I already have Authentik and using its embedded (go based) proxy I don't really need pangolin.
tgv · 1h ago
I switched over to caddy recently. Nginx's non-information about the HTTP/1 desync problem drove me over. I'm not going to wait for something stupid to happen, or for an auditor to ask me questions nginx doesn't answer.

Caddy is really easier than nginx. For starters, I now have templates that cover the main services and their test services, and the special service that runs for an education institution. Logging is better. Certificate handling is perfect (for my case, at least). And it has better metrics.

Now I have to figure out plugins though, because caddy doesn't have rate limiting, and some stupid bug in Power BI makes a single user hit certain images 300,000 times per day. That's a bit of a downside.

dekobon · 13m ago
I did a google search for the desync problem and found this page: https://my.f5.com/manage/s/article/K30341203

This type of thing is out of my realm of expertise. What information would you want to see about the problem? What would be helpful?

thrown-0825 · 1h ago
Definitely. I use traefik for some stuff at home and will likely swap it out now.
grim_io · 1h ago
I configure traefik by defining a few docker labels on the services themselves. No way I'm going back to using the horrible huge nginx config.
thway15269037 · 2m ago
Does nginx still lock prometheus metrics and active probing behind $$$$$ (literally hundreds of thousands)? I forgot the third most important thing; I think it was re-resolving upstreams.

Anyway, good luck staying competitive lol. Almost everyone I knew either jumped to something saner or is in the process of migrating away.

tialaramex · 30m ago
It's good to see this; it surprised me that this didn't happen to basically everything, basically immediately.

I figured either somehow Let's Encrypt doesn't work out, or everybody bakes in ACME within 2-3 years. The idea that you can buy software in 2025 which has TLS encryption but expects you to go sort out the certificate yourself is bizarre. It's like if cars had to be refuelled periodically by taking them to a weird dedicated building which is not useful for anything else, rather than just charging while you're asleep like a phone and... yeah, you know what, I get it now. You people are weird.

stego-tech · 1h ago
The IT Roller Coaster in two reactions:

> Nginx Introduces Native Support for ACME Protocol

IT: “It’s about fucking time!”

> The current preview implementation supports HTTP-01 challenges to verify the client’s domain ownership.

IT: “FUCK. Alright, domain registrar, mint me a new wildcard please; one of the leading web infrastructure providers still can’t do a basic LE DNS-01 pull in 2025.”

Seriously. PKI in IT is a PITA and I want someone to SOLVE IT without requiring AD CAs or Yet Another Hyperspecific Appliance (YAHA). If your load balancer, proxy server, web server, or router appliance can’t mint me a basic ACME certificate via DNS-01 challenges, then you officially suck and I will throw your product out for something like Caddy the first chance I get.

While we’re at it, can we also allow DNS-01 certs to be issued for intermediate authorities, allowing internally-signed certificates to be valid via said Intermediary? That’d solve like, 99% of my PKI needs in any org, ever, forever.

0xbadcafebee · 1h ago
> allowing internally-signed certificates to be valid via said Intermediary

By design, nothing is allowed to delegate signing authority, because it would become an immediate compromise of everything that got delegated when your delegated authority got compromised. Since only CAs can issue certs, and CAs have to pass at least some basic security scrutiny, clients have assurance that the thing giving it a cert got said cert from a trustworthy authority. If you want a non-trustworthy authority... go with a custom CA. It's intentionally difficult to do so.

> If your load balancer, proxy server, web server, or router appliance can’t mint me a basic Acme certificate via DNS-01 challenges, then you officially suck and I will throw your product out for something like Caddy the first chance I get.

I mean, that's a valid ask. It will become more commonplace once some popular corporate offering includes it, and then all the competitors will adopt it so they don't leave money on the table. To get the first one to adopt it, be a whale of a customer and yell loudly that you want it, then wait 18 months.

stego-tech · 10m ago
> If you want a non-trustworthy authority... go with a custom CA. It's intentionally difficult to do so.

This is where I get rankled.

In IT land, everything needs a valid certificate. The printer, the server, the hypervisor, the load balancer, the WAP’s UI, everything. That said, most things don’t require a publicly valid certificate.

Perhaps Intermediate CA is the wrong phrase for what I’m looking for. Ideally it would be a device that does a public DNS-01 validation for a non-wildcard certificate, thus granting it legitimacy. It would then crank out certificates for internal devices only, which would be trusted via the Root CA but without requiring those devices to talk to the internet or use a wildcard certificate. In other words, some sort of marker or fingerprint that says “This is valid because I trust the root and I can validate the internal intermediary. If I cannot see the intermediary, it is not valid.”

The thinking is that this would allow more certificates to be issued internally and easily, but without the extra layer of management involved with a fully bespoke internal CA. Would it be as secure as that? No, but it would be SMB-friendly and help improve general security hygiene, instead of letting everything use HTTPS with self-signed certificate warnings or letting every device communicate with the internet for an HTTP-01 challenge.

If I can get PKI to be as streamlined as the rest of my tech stack internally, and without forking over large sums for Microsoft Server licenses and CALs, I’d be a very happy dinosaur that’s a lot less worried about tracking the myriad of custom cert renewals and deployments.

josegonzalez · 1h ago
This is great. Dokku (of which I am the maintainer) has a hokey solution for this with our letsencrypt plugin, but that's caused a slew of random issues for users. Nginx sometimes gets "stuck" reloading and then can't find the endpoint for some reason. The fewer moving parts, the better.

That said, it's going to take quite some time for this to land in stable repositories for Ubuntu and Debian, and it doesn't (yet?) have DNS challenge support - meaning no wildcards - so I don't think it'll be useful for Dokku in the short term, at least.

ctxc · 51m ago
Hey! Great to see you here.

I tried dokku (and am still trying!), and it is so hard getting started.

For reference:

- I've used Coolify successfully, where it required me to create a GitHub app to deploy my apps on pushes to master
- I've written GH Actions to build and deploy containers to big cloud

This page is what I get if I want to achieve the same, and it's completely a reference book approach - I feel like I'm reading an encyclopedia. https://dokku.com/docs/deployment/methods/git/#initializing-...

Contrast it with this, which is INSTANTLY useful and helps me deploy apps hot off the page: https://coolify.io/docs/knowledge-base/git/github/integratio...

What I would love to see for Dokku is tutorials for popular OSS apps and set-objective/get-it-done style getting-started articles. I'd LOVE an article that takes me from bare metal to a reverse proxy + a few popular apps. Because the value isn't in using Dokku, it's in using Dokku to get to that state.

I'm trying to use dokku for my homeserver.

Ideally I want a painless, quick way to go from "hey here's a repo I like" to "deployed on my machine" with Dokku. And then once that works, peek under the hood.

miggy · 1h ago
It seems HAProxy also added ACME/DNS-01 challenge support in haproxy-3.3-dev6 very recently. https://www.mail-archive.com/haproxy@formilux.org/msg46035.h...
aorth · 1h ago
Oh this is exciting! Caddy's support is very convenient and it does a lot of other stuff right out of the box which is great.

One thing keeping me from switching to Caddy in my places is nginx's rate limiting and geo module.

zaik · 26m ago
Is there a way to notify other services if renewal has succeeded? My XMPP server also needs to use the certificate.
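
The post doesn't say whether nginx's module exposes renewal hooks; with certbot-style tooling this is what deploy hooks are for (prosody here is just an example XMPP server):

    # runs once after each successful renewal
    certbot renew --deploy-hook 'systemctl reload prosody'
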
RagnarD · 1h ago
After discovering Caddy, I don't use Nginx any longer. Just a much better development experience.
cobbzilla · 2h ago
There’s a section on renewals but no description of how it works. Is there a background thread/process? Or is it request-driven? If request-driven, what about some hostname that’s (somehow) not seen traffic in >90 days?
samgranieri · 1h ago
This is a good first step. One less moving part. They should match caddy for feature parity on this, and add DNS-01 challenges as well.

I'm not using nginx these days because of this.

ankit84 · 40m ago
We have been using Caddy for many years now. We picked it just because it has automatic cert provisioning. Caddy is really an easier alternative, secure out of the box.
adontz · 2h ago
certbot has a plugin for nginx, so I'm not sure why people think it was hard to use Let's Encrypt with nginx.
bityard · 1h ago
Maybe it's better these days, but even as an experienced systems administrator, I found certbot _incredibly_ annoying to use in practice. They tried to make it easy and general-purpose for beginners to web hosting, but they did it with a lot of magic that does Weird Stuff to your host and server configuration. It probably works great if you're in an environment where you just install things via tarball, edit your config files with Nano, and then rarely ever touch the whole setup again.

But if you're someone who needs tight control over the host configuration (managed via Ansible, etc) because you need to comply with security standards, or have the whole setup reproducible for disaster recovery, etc, then solutions like acme.sh or LEGO are far smaller, just as easy to configure, and in general will not surprise you.

creshal · 1h ago
Certbot is a giant swiss army chainsaw that can do everything middlingly well, if you don't mind vibecoding your encryption infrastructure. But a clean solution it usually isn't.

(That said, I'm not too thrilled by this implementation. How are renewals and revocations handled, and how can the processes be debugged? I hope the docs get updated soon.)

jeroenhd · 52m ago
Certbot always worked fine for me. It autodetects just about everything and takes care of just about everything, unless you manually instruct it what to do (e.g., re-use a specific CSR), and then it does what you tell it to do.

It's not exactly an Ansible/Kubernetes-ready solution, but if you use those tools you already know a tool that solves your problem anyway.

jddj · 1h ago
From the seeming consensus, I was dreading setting Let's Encrypt up on nginx, until I did it and it was, and has been... completely straightforward and painless.

Maybe if you step off the happy path it gets hairy, but I found the default certbot flow to be easy.

9dev · 1h ago
Certbot is a utility that can only be installed via snap. That crap won’t make it to our servers, and many other people view it the same way I do.

So this change is most welcome.

orblivion · 1h ago
From a quick look it seems like a command you use to reconfigure nginx? And that's separate from auto-renewing the cert, right?

Maybe not hard, but Caddy seems like even less to think about.

orblivion · 1h ago
I guess I should compare to this new Nginx feature rather than Caddy. It seems like the benefit of this feature is that you don't have a tool to run, you have a config to put into place. So it's easier to deploy again if you move servers, and you don't have to think about making sure certbot is doing renewals.
andrewstuart · 44m ago
It was this that sent me from nginx to caddy.

But I’m not going back. Nginx was a real pain to configure with so many puzzles and surprises and foot guns.

do_not_redeem · 2h ago
It looks like this isn't included by default with the base nginx, but requires you to install it as a separate module. Or am I wrong?

https://github.com/nginx/nginx-acme

bhaney · 1h ago
Nginx itself is mostly just a collection of modules, and it's up to the one building/packaging the nginx distribution to decide what goes in it. By default, nginx doesn't even build the ssl or gzip modules (though thankfully it does build the http module by default). Historically it only had static modules, which needed to be enabled or disabled at compile time, but now it has dynamic modules that can be compiled separately and loaded at runtime. Some older static modules now have the option of being built as dynamic modules, and new modules that can be written as dynamic modules generally are. A distro can choose to package a new dynamic module in their base nginx package, as a separate package, or not at all.

In a typical distro, you would normally expect one or more virtual packages representing a profile (minimal, standard, full, etc) that depends on a package providing an nginx binary with every reasonable static-only module enabled, plus a number of separately packaged dynamic modules.
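
For example, a dynamic module is wired in with a single directive at the top of nginx.conf; the filename here assumes the ACME module's build output:

    # main context, before the events/http blocks
    load_module modules/ngx_http_acme_module.so;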

timw4mail · 1h ago
Yes, that is correct.
andrewmcwatters · 1h ago
It seems like if you commit your NGINX config with these updates, you can have one less process in your deployment if you're doing something like:

    # https://certbot.eff.org/instructions?ws=other&os=ubuntufocal
    sudo apt-get -y install certbot
    # sudo certbot certonly --standalone
    
    ...
    
    # https://certbot.eff.org/docs/using.html#where-are-my-certificates
    # sudo chmod -R 0755 /etc/letsencrypt/{live,archive}

So, unfortunately, this support still seems more involved than using certbot, but at least one less manual step is required.

Example from https://github.com/andrewmcwattersandco/bootstrap-express
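
For comparison, the new module's wiring looks roughly like this, going by a skim of the nginx-acme README (directive names are from the preview and may change; treat this as a sketch):

    # http {} context: define an issuer; the module needs a resolver
    resolver 127.0.0.1:53;
    acme_issuer letsencrypt {
        uri https://acme-v02.api.letsencrypt.org/directory;
        contact admin@example.com;
        accept_terms_of_service;
    }

    server {
        listen 443 ssl;
        server_name example.com;
        acme_certificate letsencrypt;
        ssl_certificate $acme_certificate;
        ssl_certificate_key $acme_certificate_key;
    }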

johnisgood · 2h ago
For now I will stick to what works (nginx + certbot), but I will give this a try. Anyone tried it?

Caddy sounds interesting too, but I am afraid of switching because what I have works properly. :/

bityard · 58m ago
I grew up on Apache and eventually became a wizard with its configuration and myriad options and failure modes. Later on, I got semi-comfortable with nginx, which was a little simpler because it did less than Apache, but you could still get a fairly complex configuration going if you're running weird legacy PHP apps, for example.

When I tried using Caddy with something serious for the first time, I thought I was missing something. I thought, these docs must be incomplete, there has to be more to it, how does it know to do X based on Y, this is never going to work...

But it DID work. There IS almost nothing to it. You set literally the bare minimum of configuration you could possibly need, and Caddy figures out the rest and uses sane defaults. The docs are VERY good, there is a nice community around it.

If I had any complaint at all, it would be that the plugin system is slightly goofy.

orphea · 1h ago
Caddy has been great for me. I don't think you should switch if your current setup works but give it a try in a new project.
roywashere · 2h ago
I like it!!! I am using Apache mod_md on Debian for a personal project. That works fine, but when setting up a new site it somehow required two Apache restarts, which is not super smooth