When you are making an API request, you've validated the certificate of the system you're making the request to, and in the process you're doing so over a secure connection. You've usually authenticated yourself, also over a secure connection, and are including some sort of token validating that authentication, which provides your authorization as well.
When you are accepting a call in your web hook, you need to ensure that the call came from the authenticated source which the signature provides. The web hook caller connects using the same certificate validation and secure connection infrastructure. They won't connect if your certificate doesn't validate or they can't establish a secure connection. The signature is their mechanism of authenticating with your API except that they are the authority of their identity.
That last bit is where the contradiction falls away: the webhook implementer retains authentication authority and infrastructure (whether you call them or they call you) rather than asking the client to provide an authentication system for them to validate themselves with.
[edit: there's an additional factor. If you move authentication to the web hook implementer you lose control of what authentication mechanisms are in use. Having to implement everyone's authentication systems would be a nightmare full of contradictions. You also open yourself up to having to follow client processes, attend meetings, and otherwise not be able to automate the process of setting up a webhook.]
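To make the mechanism concrete, here is a minimal sketch of the kind of shared-secret verification being discussed, in Python; the header layout and secret are illustrative assumptions, not any particular vendor's scheme.

    import hashlib
    import hmac

    WEBHOOK_SECRET = b"whsec_example"  # shared out-of-band when the webhook is registered

    def verify_webhook(raw_body: bytes, signature_header: str) -> bool:
        # Recompute the HMAC over the raw body and compare it to the header value.
        expected = hmac.new(WEBHOOK_SECRET, raw_body, hashlib.sha256).hexdigest()
        # Constant-time comparison; see the timing-attack discussion further down.
        return hmac.compare_digest(expected, signature_header)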
eqvinox · 13h ago
> […] You've usually authenticated yourself, also over a secure connection, and are including some sort of token validating that authentication […]
nit: not sure if that's just me but I was confused by this wording; with "authenticated yourself" you're referring to an initial permanent-token/login ⇒ session-token step? I initially thought you were implying something on the same connection the API call is made on, which would have to be TLS client certificates (HTTP bearer auth is already the token itself.)
erikerikson · 12h ago
I'm sorry. This is almost surely my poor wording.
TL;DR: I was thinking of bearer token auth flows and not intentionally excluding other forms of authentication.
Part of the problem is reverse ordering. When calling an API, you generally authenticate yourself, often to obtain a temporary token, but it can be in the same call as you note via certificate. Only then do you make the API call that you actually wanted to make. I first wrote about making the API call and only then followed with discussing the authentication. In that, I was thinking of the permanent-token-to-session-token model, but you're absolutely right that mutual auth could bypass that stage. The certificate-based authentication would still precede the API call processing, but would obviate the use/sending of a token. However, I haven't seen that used in automated APIs because of the management overhead and increased barrier for the entry-level end of the customer base. I have absolutely seen it in use for internal service interfaces.
Sorry that my words were a tangle, thank you very much for helping me clarify (or at least hopefully do so).
[edit: side note that with mutual auth, I've seen that as a gate to even open a socket paired with further authentication using some sort of a permanent token to session token protocol so one doesn't have to preclude the other.]
losvedir · 16h ago
I think you're missing the point. You're talking about someone making an API call vs receiving a webhook. However, I believe the article is drawing an obvious, if implicit, parallel to expectations for handling a webhook as customer vs handling an API call in your web app.
That is, surely you've worked on a web app where you receive requests from users. Those requests are authenticated (and authorized) in various ways, from OAuth tokens to session cookies to API keys. When you're handling those requests, do you require that they're signed as well? I've rarely seen such a thing (the article points out that AWS does, for example), but most web apps I've worked on don't. We simply take the request for granted (assuming it's come over a TLS connection), and then check the credential.
The article is asking: if that's good enough for logic on a web app, why not in reverse? A server handling customer requests generally doesn't know their provenance either, and simply relies on the credential (unless you have IP allow-listing and other measures like that).
I actually work on webhooks as well, and we sign them (and offer mTLS and various other security measures) but I sort of took all those best practices for granted. Now I'm trying to think through what the actual threat model is here, and why it doesn't apply in reverse to the REST API endpoints that we also maintain. I can see the point of signing rather than an included credential if you allow webhooks to http endpoints, but is that it? Probably better to just not allow non-https delivery URLs anyway.
quectophoton · 15h ago
> Now I'm trying to think through what the actual threat model is here, and why it doesn't apply in reverse to the REST API endpoints that we also maintain. I can see the point of signing rather than an included credential if you allow webhooks to http endpoints, but is that it? Probably better to just not allow non-https delivery URLs anyway.
My best guess: Maybe signing the webhooks assumes the TLS-terminating middlewares might not be trusted? Or some other middleware between that and the final handler.
To the best of my understanding, the two options mentioned in the article require a shared secret: API keys include that secret verbatim in the request, while the signing uses the secret in an HMAC function.
If asymmetric cryptography were somehow involved, I would somewhat buy the arguments about validating the origin of the request, because only one party would be able to create a valid signature. But that's not the case here, because with HMAC both parties have access to the same secret used to create a "signature" (which is more like a salted hash, so creating and validating a signature are the same process).
So, if both parties can produce the hash for a valid signature, and the secret is known to both ends, and there's no advantage over API keys when using TLS (assuming TLS is not broken), then I can only think that the problem is what happens outside TLS.
That's why I think the threat model would be a compromised TLS-terminating proxy, or some compromised component in between TLS-terminating proxy and the final application handling the request.
Sounds like zero-trust shenanigans.
If I'm misunderstanding anything, I'm more than happy to be corrected.
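A tiny sketch of that symmetry point: with HMAC, "signing" and "verifying" are the same computation over the same shared secret, so either party could have produced a valid tag. Names below are illustrative only.

    import hashlib
    import hmac

    secret = b"shared-secret"
    payload = b'{"event": "invoice.paid"}'

    # Both sides run the exact same computation; verification is just recomputation.
    tag_from_sender = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    tag_from_receiver = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    assert tag_from_sender == tag_from_receiver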
erikerikson · 16h ago
The credential is proving provenance.
[edit: obviously once a credential is handed out it can be misused but any such attack would put signing materials in the hands of an attacker too.]
maxwellg · 16h ago
Ooh this is a favorite pet peeve of mine. HMAC is the better solution IMO but API Keys are so much easier for your customers to use:
- API Keys are much, _much_ easier to use from the command line. CURL with HMAC is finicky at best and turns a one-liner into a big script
- Maintaining N client libraries for your REST API is hard and means you'll likely deprioritize non-mainstream languages. If a customer needs to write their own library to interact with your service, needing to incorporate their own HMAC adds even more friction.
- Tools have gotten much better in recent years- it is much easier to configure a logger to ignore sensitive fields now compared to ~10 years ago
growse · 16h ago
API keys are just Basic Auth wearing a silly hat.
There's so many better options than just dumping the secret on the wire.
lo0dot0 · 15h ago
What's the advantage of HMAC over basic auth when TLS is used as a transport?
kevincox · 13h ago
In theory, nothing. If you have complete confidentiality, you only need enough entropy to ensure that the attacker cannot guess it.
But in practice things get logged, people mess up their DNS and send the request to a different party (potentially after their CDN decrypts it) or some other blunder. With HMAC as long as the recipient is validating properly (which is a whole different can of worms) the worst the attacker can do is replay requests that they have observed.
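One common way to narrow that replay window is to include a timestamp in the signed string and reject stale requests. A hedged sketch, where the header layout and the five-minute tolerance are assumptions:

    import hashlib
    import hmac
    import time

    TOLERANCE_SECONDS = 300  # assumed tolerance; pick to taste

    def verify_with_timestamp(secret: bytes, raw_body: bytes,
                              timestamp_header: str, signature_header: str) -> bool:
        # Reject anything too old (or too far in the future); a replayed capture
        # outside this window fails even though its signature is valid.
        if abs(time.time() - int(timestamp_header)) > TOLERANCE_SECONDS:
            return False
        signed_payload = timestamp_header.encode() + b"." + raw_body
        expected = hmac.new(secret, signed_payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature_header)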
arccy · 15h ago
if you copy the aws signing, curl has --aws-sigv4
btown · 11h ago
One of the patterns I often reach for when working with webhooks is "never trust them to do anything other than set a should-refresh flag on a related object, or upsert a stub identity for a new related object, for asynchronous reprocessing which will then call out to get the latest relevant state."
Assume that things will come out of order, may be repeated, may come in giant rushes if there's a misconfiguration or traffic spike, and may have payload details change at any time in hard-to-replicate ways (unless you're archiving every payload and associating it with errors). If you make the "signal" be nothing more than an idempotent flag-set, then many of these challenges go away. And even if someone tries to send unauthenticated requests, the worst they can do is change the order in which your objects are reprocessed. Signature verification is important, but it becomes less critical.
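A minimal sketch of that pattern, assuming a hypothetical db handle and table names; the handler only marks the related object dirty, and a background worker later fetches the authoritative state from the vendor API:

    def handle_invoice_webhook(db, payload: dict) -> None:
        # Trust nothing in the payload except the identifier of the related object.
        invoice_id = payload["invoice_id"]
        # Idempotent: replays, reordering, and bursts all collapse into "refresh this row".
        db.execute(
            "INSERT INTO invoices (external_id, needs_refresh) VALUES (?, 1) "
            "ON CONFLICT (external_id) DO UPDATE SET needs_refresh = 1",
            (invoice_id,),
        )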
paradox460 · 15h ago
I've always had a small bit of simmering resentment when something wants me to set up a webhook into my system, and provides no way of authorizing the hook.
Stripe and Twilio do it best, with signatures that verify they're the ones sending the hook, but I'd even settle for HTTP basic auth. So many of them seem to say "hey, here are the IP addresses we'll be sending raw posts to your provided URL from, btw these IPs can change at any time without warning."
eqvinox · 16h ago
This entire article seems poorly written, or rather not to have thought through the real security requirements. It seems to be intended as an ad piece, but honestly for me it does the exact opposite: it tells me I shouldn't be using this company for my APIs.
> However, webhooks aren’t so different from any other API request. They’re just an HTTP request from one server to another. So why not use an API key just like any other API request to check who sent the request?
Because it's still you requesting the event to happen, not the origin of the webhook. It makes no sense for the webhook to use normal API key mechanisms that are designed to control access to an API; the API is accessing you. (To be clear, of course it wouldn't use the same API key as inbound, that's a ridiculous suggestion. I'm saying the mechanics of API keys don't match this use.)
The real issue is that the webhook receiver should authenticate itself to the sender of the webhook, and the only widespread way that's currently happening is HTTPS certificate checks. As the article kinda points out for the other direction, that's kind of an auxiliary function and it's a bit questionable to rely on that. One way to do this properly would be to add another layer of encryption, which only the intended webhook receiver is given the keys for, e.g. the entire payload could be put into an encrypted PKCS#7 container. This would aid against attackers that get a hold of the webhook target in some external manner, e.g. hijacking DNS (which is enough to issue new valid certificates these days, with ACME).
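Not literally PKCS#7, but the same idea can be sketched as a hybrid scheme with the cryptography package: wrap a one-time data key to the receiver's RSA public key and encrypt the payload with AES-GCM, so only the intended receiver can read it even if the delivery endpoint is hijacked. All names here are assumptions.

    import os

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_for_receiver(receiver_public_key, payload: bytes) -> dict:
        data_key = AESGCM.generate_key(bit_length=256)   # one-time symmetric key
        nonce = os.urandom(12)
        ciphertext = AESGCM(data_key).encrypt(nonce, payload, None)
        wrapped_key = receiver_public_key.encrypt(       # only the receiver can unwrap this
            data_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None),
        )
        return {"wrapped_key": wrapped_key, "nonce": nonce, "ciphertext": ciphertext}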
> Signing requests does give extra security points, but why do we collectively place higher security requirements on webhook requests than API requests?
And now the article gets really confused, because it's misidentifying the problem. The point of signing a request that already makes use of an API key would be integrity protection, except that is indeed a function HTTPS can reasonably be relied on for in this scenario. Would a more "complex" key reduce the risk of leaking it in log files or somesuch? Sure, but that's an aspect of API keys frequently being "loggable" strings. X509 keys as multi-line PEM text might show up less frequently in leaks due to their formatting, but that's not a statement about where and how to use them cryptographically.
arkh · 17h ago
> return actual == expected
Usually you'd want to use a method which prevents timing attacks for this check. Even PHP provides hash_equals for this use case.
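In Python, the equivalent of PHP's hash_equals is hmac.compare_digest; for the check quoted above, the fix is roughly:

    import hmac

    def verify(actual: str, expected: str) -> bool:
        return hmac.compare_digest(actual, expected)  # constant-time, unlike ==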
skybrian · 16h ago
The difference is that vendors have a good reason to widely distribute a public key. They are sending messages via webhooks to many customers, all of whom need to authenticate the same vendor. Publishing a public key solves this. The vendor could even bake it into open source software that customers download.
A vendor’s customers aren’t distributing software. They’re only sending messages via API calls to the vendor. This is many-to-one instead of one-to-many. The key distribution problem is solved differently: each customer saves a different API key to a file. There’s no key distribution problem that would be made easier by publishing a public key.
(That is, on the sending side. The receiving side is handled via TLS.)
It’s a web request either way, but this isn’t peer-to-peer communication, so the symmetry is broken.
ezekg · 14h ago
TLS doesn't handle e.g. man-in-the-middle tampering or replay attacks. Response signatures solve that, which can be verified using a vendor public key just like webhooks.
kaoD · 13h ago
> TLS doesn't handle e.g. man-in-the-middle tampering or replay attacks.
In what way it doesn't?
ezekg · 12h ago
Take the biggest man-in-the-middle on the internet: Cloudflare. It terminates TLS and can modify requests or responses between client and server.
Signatures prevent proxies, good and bad, from doing that without consequence.
JambalayaJimbo · 6h ago
Correct me if I’m wrong here, but if you sign up for Cloudflare, you have to give it access to your private key in order for it to MITM you right?
acdha · 6h ago
The conspiratorial framing as scary-sounding “tampering” isn’t helping: it’s like describing a surgeon as someone who slices people. When you sign up for a CDN, you’re agreeing to have that company proxy your traffic so they can provide various features like caching, access control, edge-side includes or execution, security filtering, etc. which as a technical necessity require them to see your traffic to work.
This is not unique to Cloudflare; for a CDN to do anything which involves seeing the payload, you have to have a browser-trusted key available to their nodes. Traditionally, you did this by giving them a browser-trusted x509 certificate and private key (now it's common to authorize them to get one from a service like Let's Encrypt) so they could handle the TLS handshake on any node for maximum performance. Some CDNs like Cloudflare also allow you to use your own key server, so they don't have access to the private key but do see the session key, which gives you more control: https://blog.cloudflare.com/keyless-ssl-the-nitty-gritty-tec...
The other way this can work is by using the CDN at a lower level where it’s proxying TCP connections back to the origin servers. That loses a lot of performance and security features since they can’t see the traffic, which is why most people don’t use CDNs this way but it’s an option and it’s useful if you need to deal with custom protocols or things like accepting traffic on ports which Cloudflare doesn’t support for their normal proxy. If you really didn’t trust Cloudflare, you typically wouldn’t use them but if you had some kind of compliance requirement you could still use a CDN for things like better network performance and lower-level DDoS protection without giving the CDN operator visibility into your traffic payloads.
arccy · 15h ago
request signing is strictly better... except that developers complain when you try to implement anything harder than an api bearer token (see recent threads on google gemini / vertex apis).
TeMPOraL · 11h ago
Shouldn't be surprising. Anything crypto tends to drag along a truckload of extra complexity you don't want to (and shouldn't) care about, and some of that is non-obvious. TLS alone, for example, makes your app depend on a globally synchronized wall-time clock - and what you thought of as self-contained script now needs ongoing operations work to keep the crypto parts from breaking over time.
Retr0id · 17h ago
I get that it's just supposed to be illustrative, but the "verify_request()" function presented is cryptographically unsound because the == comparison is not constant-time.
bvrmn · 15h ago
It's "wisdom of the crowd" and a mantra to follow established crypto standards.
h1fra · 18h ago
You need to check origin and authenticity of a webhook, whereas your API key is there to verify that you are the right person. In an API call, origin is already checked by HTTPS.
kaoD · 13h ago
It makes me sad that TLS client certificates are not better known and are in turn mostly neglected.
eqvinox · 12h ago
+∞ … though the downside is that they're somewhat annoying to deal with in reverse proxy situations, e.g. large clouds where TLS termination is a separate service in front.
(You'd need to stick the DN in a trusted header, similar to the original IP address in X-Forwarded-For:)
michaelt · 17h ago
The APIs that require signed requests come with ready made libraries for every major programming language, hiding that stuff from the user.
And with signed webhook requests, the recipient can simply ignore the signature if they deem the additional security it grants unnecessary.
slt2021 · 7h ago
If you want to authenticate the caller, it's better to use mTLS (or TLS with a client cert). Client certificates and HTTPS are basically SSH, but for the web.
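For illustration, here is how a caller might present a client certificate using the requests library; the URL and file paths are placeholders:

    import requests

    resp = requests.post(
        "https://hooks.example.com/events",     # hypothetical receiver URL
        json={"event": "invoice.paid"},
        cert=("client.crt", "client.key"),      # client certificate and private key
        verify="receiver-ca.pem",               # CA bundle used to validate the server
    )
    resp.raise_for_status()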
olehif · 18h ago
API keys are symmetrical, so every client needs a unique one.
Singing allows the server to have only one certificate for all clients (webhook receivers). More convenient.
voganmother42 · 14h ago
if the server can carry a tune that is
immibis · 17h ago
But the server has no problem storing a unique webhook address for each client.
I suppose you can just add a bearer token into the address, if you need that. A different address per association, containing a bearer token, with HTTPS, provides the same security as if the bearer token was sent in a separate header.
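A sketch of that approach, with hypothetical names: issue each client a distinct delivery URL containing a random token, and compare the presented token against the stored one on every delivery.

    import hmac
    import secrets

    def issue_webhook_url(base: str = "https://hooks.example.com/vendor") -> tuple[str, str]:
        token = secrets.token_urlsafe(32)
        return f"{base}/{token}", token   # persist the token server-side

    def verify_path_token(presented: str, stored: str) -> bool:
        return hmac.compare_digest(presented, stored)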
felipemesquita · 18h ago
The post is about how webhook requests are usually signed and api responses are not.
For me it seems clear that the reason for this different approach is that api requests are already authenticated. Signing them would yield little additional security. Diminishing returns like the debate over long lived (manually refreshed) api keys versus short lived access tokens with long lived refresh tokens - or, annoyingly, single use refresh tokens that you have to keep track of along with the access token.
Webhooks are unauthenticated post requests that anyone could send if they know the receiving url, so they inherently need sender verification.
dcow · 18h ago
> Signing requests does give extra security points, but why do we collectively place higher security requirements on webhook requests than API requests?
TFA is exploring the juxtaposition of signed web-hook requests vs bearer token api requests, both of which provide authentication but one of which is arguably superior and in common enough use to question why it hasn't become common practice at large.
To flip the question: if there aren’t meaningful benefits to signing requests, why don’t web-hooks just use bearer token authentication?
klabb3 · 17h ago
Both are http requests from client to server. Servers are already authenticated through TLS. The difference is who takes the role of the client.
With API requests the customer takes the client role. The endpoint is the same, eg api.stripe.com. This means, an API key (shared secret) is the minimal config needed to avoid impersonation. You could sign with a private key too but it would also require configuration (uploading the public key to stripe) so there’s not much security gained.
With webhooks, the vendor is the client and needs to authenticate itself. But since it’s always the same vendor, no shared secret is needed. They can sign it with the same private key for all customers. You can bake the public key into client libraries and avoid the extra config. Thus, it’s reasonable to believe the use of public key cryptography is not because it’s more secure, but simply more convenient. Signing is kind of beautiful for these types of problems.
Signing alone creates a potential security issue (confused deputy? Not sure if it has a name): if Eve creates a stripe account and tells stripe that her webhook lives on alice.example.com, ie Alice’s server, stripe could send real verified webhook events to Alice, and if she doesn’t check which account it belongs to, she might provision resources (eg product purchases) if Eve is able to replicate the product ids etc that Alice uses.
Edit: now that I think of it, eve doesn’t even need to point stripe to Alice’s server. She can just store and replay the same signed messages from stripe and directly attack Alice’s server, since the HTTPS connections are not authenticated (only the contents are). To mitigate, the client library should contain some account id in the configuration, in order to correctly discard messages intended for someone else.
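A hedged sketch of those mitigations together: verify the vendor's signature over the raw body, then also check that the event is addressed to your account and is recent. The Ed25519 key type, field names, and five-minute window are assumptions, not any particular vendor's scheme.

    import json
    import time

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    MY_ACCOUNT_ID = "acct_123"   # configured next to the vendor's public key
    MAX_AGE_SECONDS = 300

    def accept_event(vendor_pubkey: Ed25519PublicKey, raw_body: bytes, signature: bytes) -> bool:
        try:
            vendor_pubkey.verify(signature, raw_body)   # raises if the signature is invalid
        except InvalidSignature:
            return False
        event = json.loads(raw_body)
        if event.get("account_id") != MY_ACCOUNT_ID:    # discard events meant for someone else
            return False
        if time.time() - event.get("timestamp", 0) > MAX_AGE_SECONDS:  # limit replay window
            return False
        return True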
veonik · 17h ago
It’s worth pointing out that Stripe, specifically, generates a per-endpoint secret for webhooks that is used for validating the signature.
klabb3 · 16h ago
I suspected as much. It would have been too obvious of an attack vector for something so sensitive. Then obviously my argument falls apart, since it no longer saves any config.
That said, you can still benefit from pub keys by having good infra and key rotations to prevent some attacks like message replay after months. Putting such a requirement on customers is pretty doomed because of the workload, processes and infra required.
erikerikson · 16h ago
Because then the client would need to host token vending infrastructure just to accept a webhook request.
As designed, the webhook receiver only has to implement the one endpoint.
[edit: in addition, bearer tokens are not the only authentication system. By moving authentication onto the webhook holder, the caller now has to satisfy any authentication system and have implementations for all of them. Some authentication systems are manual and thereby introduce friction. By providing the authentication materials themselves, they reduce friction and reduce their implementation to having only one mechanism.]
felipemesquita · 17h ago
Some do, but it either involves an additional secret specific for this purpose, or it burdens the client with controlling access and exposure of incoming request headers (in logs and middleware) since they would include the token that can actually make api calls to the vendor.
Nevertheless, your question would have yielded a better article.
> but why do we collectively place higher security requirements on webhook requests than API requests?
We really don’t, signing is just more convenient in the webhook scenario. And it’s also completely optional to check a signature, leading even to many implementations not doing so.
paulryanrogers · 18h ago
Well put. I assume this is because big players make things as easy as possible to consume. APIs are probably more widely used than web hooks, so they're made easier for consumers to work with.
alexbouchard · 18h ago
Some do; Gitlab, Okta, and Pipedrive come to mind. I think this is more about the expectations set over the last decade. If you do things differently, there's a need to justify it, and it's just perceived as less secure, regardless of whether it's true or not (the pros and cons are well articulated in the article).
woranl · 18h ago
OAuth 1.0a requires API requests to be signed.
tgv · 16h ago
I know that survey panels like HMAC. They pay their respondents (a bit of) money for each survey they complete. And of course there are bots for that, and the simplest strategy is to immediately call the survey panel's endpoint that registers a response as complete (triggering payment). That can be stopped by having the surveyor sign the link upon true completion. Just adding a key isn't going to cut it.
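A sketch of that signed-completion-link idea with made-up parameter names: the surveyor mints the redirect link only on genuine completion, and the panel recomputes the tag with their shared secret before paying out.

    import hashlib
    import hmac

    SHARED_SECRET = b"panel-and-surveyor-secret"

    def completion_link(respondent_id: str, survey_id: str) -> str:
        msg = f"{respondent_id}:{survey_id}".encode()
        tag = hmac.new(SHARED_SECRET, msg, hashlib.sha256).hexdigest()
        return f"https://panel.example.com/complete?r={respondent_id}&s={survey_id}&sig={tag}"

    def accept_completion(respondent_id: str, survey_id: str, sig: str) -> bool:
        expected = hmac.new(SHARED_SECRET, f"{respondent_id}:{survey_id}".encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, sig)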