Everything I know about good API design


jillesvangurp · 9m ago
API versioning mostly just means things perpetually stuck at v1. You might have the intention to change things up, but you never will.

Putting version numbers in a URL is a bit of a kludge. v1 is by far the most common version you will ever see in a URL. v2 is rare, and strangely v3 shows up a bit more often than v2. I don't think I've seen a v4 or v5 or higher in the wild very often. That's just not a thing.

My theory is that v1 is the quick and dirty version that developers would like to forget exists. v2 is the "now we know what we're doing!" version and that's usually quickly followed by v3 because if you can change your mind once you can do it twice. After which people just tell developers to quit messing with the API already and keep things stable. v4 and v5 never happen.

Another observation is that semantic versioning for API URLs seems rare. Reason: it's inconvenient for clients to have to update all their URLs every time some developer changes their mind. Most clients will hard-code the version, because it never changes. And because it is hard-coded, changing the version becomes inconvenient.

My attitude towards URL-based versioning is that you could do it, but it's not a tool that you get to use much. Therefore you can safely skip it and it won't be a problem. And in the worst case where you do need it, you can easily add a v2 URL space anyway. But you probably never will, as you are unlikely to deprecate the entirety of your API.

There are other ways to deal with deprecating APIs. You can just add new paths or path prefixes in your API as needed. You can use a different domain. Or you can just remove them after some grace period. It depends. Versioning is more aspirational than actually a thing with APIs.

We do version our API but via client headers. Our API client sends a version header. And we check it server side and reject older versions with a version conflict response (409). This enables us to force users of our app to update to something we still support. The version number of our client library increments regularly. Anything falling behind too far we reject. This doesn't work for all use cases. But for a web app this is completely fine.
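
Roughly, the server-side check looks something like this (the header name and the cutoff value are made up for illustration, not our actual ones):

```python
# Sketch of the header-based version check described above.
MIN_SUPPORTED_CLIENT_VERSION = 42  # illustrative cutoff

def check_client_version(headers: dict) -> tuple[int, str]:
    try:
        client_version = int(headers.get("X-Client-Version", "0"))
    except ValueError:
        client_version = 0
    if client_version < MIN_SUPPORTED_CLIENT_VERSION:
        # 409 Conflict: the client has to update before we'll serve it.
        return 409, "client version no longer supported, please update"
    return 200, "ok"
```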

dwattttt · 7h ago
The reminder to "never break userspace" is good, but people never bring up the other half of that statement: "we can and will break kernel APIs without warning".

It illustrates that the reminder isn't "never change an API in a way that breaks someone", it's the more nuanced "declare what's stable, and never break those".

delta_p_delta_x · 6h ago
Even if the kernel doesn't break userspace, GNU libc does, all the time, so the net effect is that Linux userspace is broken regardless of the kernel maintainers' efforts. Put simply, programs and libraries compiled on/for newer libc are ABI-incompatible or straight-up do not run on older libc, so everything needs to be upgraded in lockstep.

It is a bit ironic and a little funny that Windows solved this problem a couple decades ago with redistributables.

Retr0id · 5h ago
otoh statically-linked executables are incredibly stable - it's nice to have that option.
delta_p_delta_x · 5h ago
From what I understand, statically linking in GNU's libc.a without releasing source code is a violation of LGPL. Which would break maybe 95% of companies out there running proprietary software on Linux.

musl libc has a more permissive licence, but I hear it performs worse than GNU libc. One can hope for LLVM libc[1] so the entire toolchain would become Clang/LLVM, from the compiler driver to the C/C++ standard libraries. And then it'd be nice to whole-program-optimise from user code all the way to the libc implementation, rip through dead code, and collapse binary sizes.

[1]: https://libc.llvm.org/

teraflop · 5h ago
AFAIK, it's technically legal under the LGPL to statically link glibc as long as you also include a copy of the application's object code, along with instructions for how users can re-link against a different glibc if they wish. You don't need to include the source for those .o files.

But I don't think I've ever seen anybody actually do this.

dijit · 38m ago
GNU libc is notoriously difficult to statically link against anyway (getaddrinfo, for example, still pulls in dynamic loading for NSS modules).

Most people use musl, though some others use uclibc.

Musl is actually great, even if it comes with some performance drawbacks in a few cases.

rcxdude · 4h ago
Musl is probably the better choice for static linking anyway, GNU libc relies on dynamic linking for a few important features.
resonious · 4h ago
The Windows redistributables are so annoying as a user. I remember countless times applications used to ask me to visit the official Microsoft page for downloading them, and it was quite hard to find the right buttons to press to get the thing. Felt like offloading the burden to the users.
IcyWindows · 2h ago
Many installers do it right and don't require the user to do it themselves.
loeg · 5h ago
You can (equivalently) distribute some specific libc.so with your application. I don't think anyone other than GNU maximalists believes this infects your application with the (L)GPL.
rcxdude · 4h ago
GNU libc has pretty good backwards compatibility, though, so if you want to run on a broad range of versions, link against as old a version of libc as is practical (which does take some effort, annoyingly). It tends to be things like GUI libraries and such which are a bigger PITA, because they do break compatibility and the old versions stop being shipped in distros, and shipping them all with your app can still run into protocol compatibility issues.
o11c · 2h ago
You're describing 2 completely different things there.

If your program is built to require myfavoritelibrary version 1.9, and you try to run it against myfavoritelibrary 1.0, no shit it doesn't work. Glibc is no different than any other in this regard.

If your program is built to require myfavoritelibrary version 1.0, and you try to run it on myfavoritelibrary 1.9 ... glibc's binary compatibility story has been very good since the release of 2.2 or so, way back in 2000. (I know from documentation that there were a lot of 2.0 -> 2.1 breakages, some of which might've actually been fixed in 2.1.x point releases, so I'm saying 2.2 to be safe)

It's not quite as perfect as Linux's "we do not break userland" but it's pretty darn close; I would have to hunt down changelogs to find something that actually broke without explicitly relying on "do not rely on this" APIs. Source compatibility is a different story, since deprecated APIs can be removed from the public headers but still present in the binary.

... actually, even Linux has unapologetically broken its promise pretty badly in the past at various times. The 2.4 to 2.6 transition in particular was nasty. I'm also aware of at least one common syscall that broke in a very nasty way in some early versions; you can't just use ENOSYS to detect it but have to set up extra registers in a particular way to induce failure for incompatible versions (but only on some architectures; good luck with your testing!)

---

There's nothing stopping you from installing and using the latest glibc and libgcc at runtime, though you'll have to work around your distro's happy path. Just be careful if you're building against them since you probably don't want to add extra dependencies for everything you build.

By contrast, I have statically-linked binaries from ~2006 that simply do not work anymore, because something in the filesystem has changed and their version of libc can't be fixed the way the dynamically-linked version has.

chubot · 7h ago
Yeah, famously there is no stable public driver API for Linux, which I believe was the motivation for Google’s Fuchsia OS

So Linux is opinionated in both directions - towards user space and toward hardware - but in the opposite way

pixl97 · 8h ago
While the author doesn't seem to like version based APIs very much, I always recommend baking them in from the very start of your application.

You cannot predict the future and chances are there will be some breaking change forced upon you by someone or something out of your control.

paulhodge · 5h ago
I have to agree with the author about not adding "v1" since it's rarely useful.

What actually happens as the API grows:

First, the team extends the existing endpoints as much as possible, adding new fields/options without breaking compatibility.

Then, once they need to have backwards-incompatible operations, it's more likely that they will also want to revisit the endpoint naming too, so they'll just create new endpoints with new names. (instead of naming anything "v2").

Then, if the entire API needs to be reworked, it's more likely that the team will just decide to deprecate the entire service/API, and then launch a new and better service with a different name to replace it.

So in the end, it's really rare that any endpoints ever have "/v2" in the name. I've been in the industry 25 years and only once have I seen a service that had a "/v2" to go with its "/v1".

ks2048 · 5h ago
> So in the end, it's really rare that any endpoints ever have "/v2" in the name.

This is an interesting empirical question - take the 100 most used HTTP APIs and see what they do for backward-incompatible changes and see what versions are available. Maybe an LLM could figure this out.

I've been just using the Dropbox API and it is, sure enough, on "v2". (although they save you a character in the URL by prefixing "/2/").

Interesting to see some of the choices in v1->v2,

https://www.dropbox.com/developers/reference/migration-guide

They use a spec language they developed called stone (https://github.com/dropbox/stone).

grodriguez100 · 1h ago
The author does not say that you “should not add v1”. They say that versioning is how you change your API responsibly (so, endorsing versioning), but that you should only do it as a last resort.

So you would add “v1”, to be able to easily bump to v2 later if needed, and do your best to avoid bumping to v2 if at all possible.

JimDabell · 33m ago
> While the author doesn't seem to like version based APIs very much, I always recommend baking them in from the very start of your application.

You don’t really need to do that for REST APIs. If clients request application/vnd.foobar then you can always add application/vnd.foobar;version=2 later without planning this in advance.
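
On the server side that's not much code either; a minimal sketch (the media type name and parsing are illustrative):

```python
# Media-type versioning: the client asks for e.g.
# "application/vnd.foobar;version=2" in the Accept header,
# so the URL never has to change.
def requested_version(accept_header: str) -> int:
    for part in accept_header.split(";")[1:]:
        key, _, value = part.strip().partition("=")
        if key == "version" and value.strip().isdigit():
            return int(value)
    return 1  # no version parameter means the original version
```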

gitremote · 4h ago
I don't think the author meant they don't include /v1 in the endpoint in the beginning. The point is that you should do everything to avoid having a /v2, because you would have to maintain two versions for every bug fix: making the same code change in two places, or adding extra conditional logic multiplied against any existing or new conditional logic. Code bases that support multiple versions look like spaghetti code, and it usually means that /v1 was not designed with future compatibility in mind.
andix · 6h ago
I don't see any harm in adding versioning later. Let's say your api is /api/posts, then the next version is simply /api/v2/posts.
choult · 6h ago
It's a problem downstream. Integrators weren't forced to include a version number for v1, so the rework overhead to use v2 will be higher than if it was present in your scheme to begin with.
pixl97 · 5h ago
This. It's way easier to grep a codebase for /v1/ to list all the API endpoints and make sure you haven't missed something.
cmcconomy · 1h ago
grep for /* and omit /v2 ?
grodriguez100 · 1h ago
I would say the author recommends the same actually: they say that versioning is “how you change your API responsibly” (so, endorsing versioning), but that you should only switch to a new version as a last resort.
pbreit · 4h ago
Disagree. Baking versioning in from the start means it's much more likely to be used, which is a bad thing.
claw-el · 7h ago
If there is a breaking change forced upon in the future, can’t we use a different name for the function?
soulofmischief · 7h ago
A versioned API allows you to ensure a given version has one way to do things and not 5, 4 of which are no longer supported but can't be removed. You can drop old weight without messing up legacy systems.
Bjartr · 7h ago
See the many "Ex" variations of many functions in the Win32 API for examples of exactly that!
pixl97 · 5h ago
Discoverability.

/v1/downloadFile

/v2/downloadFile

Is much easier to check for a v3 than

/api/downloadFile

/api/downloadFileOver2gb

/api/downloadSignedFile

Etc. Etc.

claw-el · 4h ago
Isn’t having the name (e.g. Over2gb) easier to understand than just saying v2? This is in the situation where there are breaking changes forced upon v1/downloadFile.
echelon · 4h ago
I have only twice seen a service ever make a /v2.

It's typically to declare bankruptcy on the entirety of /v1 and force eventual migration of everyone onto /v2 (if that's even possible).

bigger_cheese · 3h ago
A lot of the Unix/Linux syscall API has a version 2+

For example dup(), dup2(), dup3() and pipe(), pipe2() etc

LWN has an article: https://lwn.net/Articles/585415/

It talks about avoiding this by designing future APIs with a flags bitmask, to allow the API to be extended in the future.

pixl97 · 4h ago
I work for a company that has an older API, so the version is defined in the header, but we're up to v6 at this point. Very useful for changes that have happened over the years.
ks2048 · 5h ago
If you only break one or two functions, it seems ok. But, some change in a core data type could break everything, so adding a prefix "/v2/" would probably be cleaner.
jahewson · 6h ago
/api/postsFinalFinalV2Copy1-2025(1)ExtraFixed
CharlesW · 7h ago
You could, but it just radically increases complexity in comparison to a "version" knob in a URI, media type, or header.
swagasaurus-rex · 5h ago
Cursor based pagination was mentioned. It has another useful feature: if items have been added between when a user loads the page and when they hit the next button, index-based pagination will give you some already-viewed items from the previous page.

Cursor based pagination (using the ID of the last object on the previous page) will give you a new list of items that haven't been viewed. This is helpful for infinite scrolling.

The downside to cursor based pagination is that it's hard to build a jump to page N button.
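
For anyone who hasn't used it, the query shape is roughly this (assuming an auto-incrementing id; table and column names are made up):

```python
# Keyset (cursor) pagination sketch: the cursor is just the id of the last
# item on the previous page, so newly inserted rows don't shift the results.
import sqlite3

def next_page(conn: sqlite3.Connection, after_id: int, page_size: int = 20):
    rows = conn.execute(
        "SELECT id, title FROM posts WHERE id > ? ORDER BY id LIMIT ?",
        (after_id, page_size),
    ).fetchall()
    next_cursor = rows[-1][0] if rows else None  # None means no more pages
    return rows, next_cursor
```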

echelon · 4h ago
You should make your cursors opaque so as to never reveal the size of your database.

You can do some other cool stuff if they're opaque - encode additional state within the cursor itself: search parameters, warm cache / routing topology, etc.
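
e.g. something along these lines (a sketch; in practice you'd also sign or encrypt the cursor so clients can't tamper with it):

```python
# Opaque cursor that carries extra state (last id, search params, ...)
# without exposing raw offsets or table sizes.
import base64
import json

def encode_cursor(state: dict) -> str:
    return base64.urlsafe_b64encode(json.dumps(state).encode()).decode()

def decode_cursor(cursor: str) -> dict:
    return json.loads(base64.urlsafe_b64decode(cursor.encode()))

cursor = encode_cursor({"last_id": 1234, "q": "api design"})
```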

rockwotj · 3h ago
Came here to say these same things exactly. Best write up I know on this subject: https://use-the-index-luke.com/sql/partial-results/fetch-nex...
achernik · 5h ago
> How should you store the key? I’ve seen people store it in some durable, resource-specific way (e.g. as a column on the comments table), but I don’t think that’s strictly necessary. The easiest way is to put them in Redis or some similar key/value store (with the idempotency key as the key).

I'm not sure how would storing a key in Redis achieve idempotency in all failure cases. What's the algorithm? Imagine a server handling the request is doing a conditional write (like SET key 1 NX), and sees that the key is already stored. What then, skip creating a comment? Can't assume that the comment had been created before, since the process could have been killed in-between storing the key in Redis and actually creating the comment in the database.

An attempt to store idempotency key needs to be atomically committed (and rolled back in case it's unsuccessful) together with the operation payload, i.e. it always has to be a resource-specific id. For all intents and purposes, the idempotency key is the ID of the operation (request) being executed, be it "comment creation" or "comment update".
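
In other words, something like this (schema is made up; the key column has a UNIQUE constraint):

```python
# The idempotency key commits (or rolls back) in the same transaction as the
# comment itself, so there's no window where one exists without the other.
import sqlite3

def create_comment(conn: sqlite3.Connection, idempotency_key: str, body: str):
    try:
        with conn:  # single transaction
            conn.execute(
                "INSERT INTO idempotency_keys (key) VALUES (?)",
                (idempotency_key,),
            )
            conn.execute("INSERT INTO comments (body) VALUES (?)", (body,))
    except sqlite3.IntegrityError:
        # UNIQUE violation: this key was already used, so the comment
        # already exists and we can safely do nothing.
        pass
```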

rockwotj · 3h ago
Yes please don’t add another component to introduce idempotency, it will likely have weird abstraction leaking behavior or just be plain broken if you don’t understand delivery guarantees. Much better to support some kind of label or metadata with writes so a user can track progress on their end and store it alongside their existing data.
0xbadcafebee · 5h ago
Most people who see "API" today only think "it's a web app I send a request to, and I pass some arguments and set some headers, then check some settings from the returned headers, then parse some returned data."

But "API" means "Application Programming Interface". It was originally for application programs, which were... programs with user interfaces! It comes from the 1940's originally, and wasn't referred to for much else until 1990. APIs have existed for over 80 years. Books and papers have been published on the subject that are older than many of the people reading this text right now.

What might've those older APIs been like? What were they working with? What was their purpose? How did those programmers solve their problems? How might that be relevant to you?

mrkeen · 40m ago
> That way you can send as many retries as you like, as long as they’ve all got the same idempotency key - the operation will only be performed once.

I worked in an org where idempotency meant: if it threw an exception this time, it needs to throw the same exception every time.

barapa · 3h ago
They suggest storing the idempotency key in redis. Seems like if possible, you should store them in whatever system you are writing to in a single transaction with the write mutations.
runroader · 7h ago
I think the only thing here that I don't agree with is that internal users are just users. Yes, they may be more technical - or likely other programmers, but they're busy too. Often they're building their own thing and don't have the time or ability to deal with your API churning.

If at all possible, take your time and dog-food your API before opening it up to others. Once it's opened, you're stuck and need to respect the "never break userspace" contract.

Supermancho · 4h ago
With internal users, you likely have instrumentation that allows you to contact those users and have them migrate. You can actually sunset API versions, which makes API versioning an attractive solution. I've both participated in API versioning and seen it employed, as a matter of utility, in organizations that don't use it by default.
devmor · 7h ago
I think versioning still helps solve this problem.

There’s a lot of things you can do with internal users to prevent causing a burden though - often the most helpful one is just collaborating on the spec and making the working copy available to stakeholders. Even if it’s a living document, letting them have a frame of reference can be very helpful (as long as your office politics prevent them from causing issues for you over parts in progress they do not like.)

JimDabell · 1h ago
This is great. One thing I would add:

The quality of the API is inversely correlated to how difficult it is to obtain API documentation. If you are only going to get the API documentation after signing a contract, just assume it’s dismally bad.

deterministic · 9m ago
It’s rare to read an article where I agree 100% with everything written.

Bravo!

frabonacci · 7h ago
The reminder to "never break userspace" is gold and often overlooked... ahem, Spotify, Reddit and Twitter come to mind.
claw-el · 8h ago
> However, a technically-poor product can make it nearly impossible to build an elegant API. That’s because API design usually tracks the “basic resources” of a product (for instance, Jira’s resources would be issues, projects, users and so on). When those resources are set up awkwardly, that makes the API awkward as well.

One issue I have with weird resources is when they feel like unnecessary abstraction. It makes them hard for a human to read and understand intuitively, especially someone new to that set of APIs. Also, it makes it so much harder to troubleshoot during an incident.

canpan · 4h ago
> many of your users will not be professional engineers. They may be salespeople, product managers, students, hobbyists, and so on.

This is not just true for authentication. If you work in a business setting, your APIs will be used by the most random set of users. They may be able to google how to call your API in Python, but not be able to do things like convert UTC to their local time zone.

tiffanyh · 2h ago
Here’s also some good recommendations: https://jcs.org/2023/07/12/api
JimDabell · 27m ago
That’s good too. Regarding:

> Be descriptive in your error responses

This is a useful standardised format:

RFC 7807: Problem Details for HTTP APIs

https://datatracker.ietf.org/doc/html/rfc7807
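
For reference, a problem-details body looks roughly like this (values taken loosely from the RFC's own example):

```python
# application/problem+json body, per RFC 7807 (values are illustrative).
problem = {
    "type": "https://example.com/probs/out-of-credit",
    "title": "You do not have enough credit.",
    "status": 403,
    "detail": "Your current balance is 30, but that costs 50.",
    "instance": "/account/12345/msgs/abc",
}
```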

wener · 6h ago
I still think /v1 → /v2 is a break, and I don't trust that you will keep v1 forever; otherwise you'd never have introduced this escape hatch in the first place.

I'd rather introduce more fields or flags as parameters to control the behavior, instead of asking users to change the whole base URL for a single new API.

calrain · 5h ago
I like this pattern.

When an API commits to /v1 it doesn't mean it will deprecate /v1 when /v2 or /v3 come out, it just means we're committing to supporting older URI strategies and responses.

/v2 and /v3 give you that flexibility to improve without affecting existing customers.

zahlman · 7h ago
Anyone else old enough to remember when "API" also meant something that had nothing to do with sending and receiving JSON over HTTP? In some cases, you could even make something that your users would install locally, and use without needing an Internet connection.
drdaeman · 7h ago
I believe it’s pretty common to e.g. call libraries’ and frameworks’ user- (developer-) facing interface an API, like in “Python’s logging library has a weird-looking API”, so I don’t think API has eroded to mean only networked ones.
mettamage · 6h ago
I never understood why libraries also had the word API. From my understanding a library is a set of functions specific to a certain domain, such as a statistics library, for example. Then why would you need the word API? You already know it’s a library.

For endpoints it’s a bit different. You don’t know whether they are user-facing or programmer-facing.

I wonder if someone has a good take on this. I’m curious to learn.

dfee · 5h ago
To use code, you need an interface. One for programming. Specifically to build an application.

Why does the type of I/O boundary matter?

snmx999 · 2h ago
The API of a library is what a recipe is to food.
shortrounddev2 · 6h ago
To me the API is the function prototypes. The DLL is the library
chubot · 7h ago
Well it stands for “application programming interface”, so I think it is valid to apply it to in-process interfaces as well as between-process interfaces

Some applications live in a single process, while others span processes and machines. There are clear differences, but also enough in common to speak of “APIs” for both

gct · 6h ago
Everyone's decided that writing regular software to run locally on a computer is the weird case and so it has to be called "local first".
rogerthis · 7h ago
Things would come in SDKs, and docs were in MS Help .chm files.
ivanjermakov · 5h ago
> sending and receiving JSON over HTTP

In my circles this is usually (perhaps incorrectly) called REST API.

j45 · 6h ago
APIs are for providing accessibility - to provide access to interactions and data inside an application from the outside.

The format and protocol of communication was never fixed.

In addition to the REST APIs of today, SOAP, WSDL, and WebSockets can all deliver some form of API.

bigiain · 4h ago
CORBA

Shudder...

mlhpdx · 6h ago
Having built a bunch of low level network APIs I think the author hits on some good, common themes.

Versioning, etc. matter (or don’t) for binary UDP APIs (aka protocols) just as much as for any web API.

xtacy · 8h ago
Are there good public examples of well designed APIs that have stood the test of time?
binaryturtle · 8h ago
I always thought the Amiga APIs with the tag lists were cool. You could easily extend the API/ABI w/o breaking anything at the binary level (assuming you made the calls accept tag lists as parameters to begin with, of course).
cyberax · 7h ago
I'm of a somewhat different opinion on API versioning, but I can see the argument. I definitely disagree about idempotency: it's NOT optional. You don't have to require idempotency tokens for each request, but there should be an option to specify them. Stripe API clients are a good example here; they automatically generate idempotency tokens for you.

Things that are missing from this list but were important for me at some point:

1. Deadlines. Your API should allow callers to specify a deadline after which the request no longer matters. The API implementation can use this deadline to cancel any pending operations (a rough sketch follows below this list).

2. Closely related: backpressure and dependent services. Your API should be designed to not overload its own dependent services with useless retries. Some retries might be useful, but in general the API should quickly propagate the error status back to the callers.

3. Static stability. The system behind the API should be designed to fail static, so that it retains some functionality even if the mutating operations fail.
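
A rough sketch of the deadline idea from point 1, assuming the deadline arrives as an epoch timestamp in a made-up header:

```python
import time

def remaining_budget(headers: dict) -> float:
    """Seconds left before the caller no longer cares about the response."""
    deadline = float(headers.get("X-Request-Deadline", "inf"))
    return deadline - time.time()

def handle(headers: dict):
    budget = remaining_budget(headers)
    if budget <= 0:
        return 504, "deadline already passed; not starting work"
    # pass `budget` (or a slice of it) as the timeout on downstream calls
    ...
```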

cyberax · 9h ago
> You should let people use your APIs with a long-lived API key.

Sigh... I wish this were not true. It's a shame that no alternatives have emerged so far.

TrueDuality · 8h ago
There are other options that allow long-lived access with naturally rotating keys without OAuth and only a tiny amount of complexity increase that can be managed by a bash script. The refresh token/bearer token combo is pretty powerful and has MUCH stronger security properties than a bare API key.
maxwellg · 7h ago
Refresh tokens are only really required if a client is accessing an API on behalf of a user. The refresh token tracks the specific user grant, and there needs to be one refresh token per user of the client.

If a client is accessing an API on behalf of itself (which is a more natural fit for an API Key replacement) then we can use client_credentials with either client secret authentication or JWT bearer authentication instead.
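
For example, the token request is just this (endpoint and credentials are placeholders):

```python
# OAuth2 client_credentials grant with client-secret (HTTP Basic) authentication.
import requests

resp = requests.post(
    "https://auth.example.com/oauth/token",
    data={"grant_type": "client_credentials"},
    auth=("my-client-id", "my-client-secret"),
    timeout=10,
)
access_token = resp.json()["access_token"]
```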

TrueDuality · 5h ago
That is a very specific form of refresh token, but not the only model. You can just as easily have your "API key" be that refresh token. You submit it to an authentication endpoint, get back a new refresh token and a bearer token, and invalidate the previous bearer token if it was still valid. The bearer token will naturally expire; if you're still using the API, use the refresh token immediately, and if it's days or weeks later you can use it then.

There doesn't need to be any OIDC or third party involved to get all the benefits of them. The keys can't be used by multiple simultaneous clients, they naturally expire and rotate over time, and you can easily audit their use (primarily due to the last two principles).

0x1ceb00da · 3h ago
> The refresh token/bearer token combo is pretty powerful and has MUCH stronger security properties than a bare API key

I never understood why.

TrueDuality · 3h ago
The quick rundown of refresh token I'm referring to is:

1. Generate your initial refresh token for the user just like you would a random API key. You really don't need to use a JWT, but you could.

2. The client sends the refresh token to an authentication endpoint. This endpoint validates the token, and expires the refresh token and any prior bearer tokens issued to it. The client gets back a new refresh token and a bearer token with an expiration window (let's call it five minutes).

3. The client uses the bearer token for all requests to your API until it expires

4. If the client wants to continue using the API, go back to step 2.
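
Roughly, step 2 on the server looks like this (the `store` object and its methods are stand-ins for whatever persistence you use):

```python
import secrets
import time

BEARER_TTL = 300  # five-minute bearer tokens

def refresh(store, presented_refresh_token: str):
    record = store.find_refresh_token(presented_refresh_token)  # hypothetical lookup
    if record is None or record.revoked:
        raise PermissionError("unknown or already-rotated refresh token")
    store.revoke(record)  # kills the old refresh token and any prior bearer token
    new_refresh = secrets.token_urlsafe(32)
    new_bearer = secrets.token_urlsafe(32)
    store.save(user=record.user, refresh_token=new_refresh,
               bearer_token=new_bearer, bearer_expires=time.time() + BEARER_TTL)
    return new_refresh, new_bearer
```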

The benefits of that minimal version:

Client restriction and user behavior steering. With the bearer tokens expiring quickly, and refresh tokens being one-time use it is infeasible to share a single credential between multiple clients. With easy provisioning, this will get users to generate one credential per client.

Breach containment and blast radius reduction. If your bearer tokens leak (logs being a surprisingly high source for these), they automatically expire when left in backups or deep in the objects of your git repo. If a bearer token is compromised, it's only valid for your expiration window. If a refresh token is compromised and used, the legitimate client will be knocked offline increasing the likelihood of detection. This property also allows you to know if a leaked refresh token was used at all before it was revoked.

Audit and monitoring opportunities. Every refresh creates a logging checkpoint where you can track usage patterns, detect anomalies, and enforce policy changes. This gives you natural rate limiting and abuse detection points.

Most security frameworks (SOC 2, ISO 27001, etc.) prefer time-limited credentials as a basic security control.

Add an expiration time to refresh tokens to naturally clean up access from broken or no longer used clients. Example: Daily backup script. Refresh token's expiration window is 90 days. The backups would have to not run for 90 days before the token was an issue. If it was still needed the effort is low, just provision a new API key. After 90 days of failure you either already needed to perform maintenance on your backup system or you moved to something else without revoking the access keys.

0x1ceb00da · 2h ago
So a refresh token on its own isn't more secure than a simple api key. You need a lot of plumbing and abuse detection analytics around it as well.
rahkiin · 7h ago
If API keys do not need to be stateless, every API key can become a refresh token with a full permission and validity lookup.
marcosdumay · 4h ago
This.

The separation of a refresh cycle is an optimization done for scale. You don't need to do it if you don't need the scale. (And you need a really huge scale to hit that need.)

pixelatedindex · 8h ago
To add on, are they talking about access tokens or refresh tokens? It can’t be just one token, because then when it expires you have to update it manually from a portal or go through the same auth process, neither of which is good.

And what time frame is “long-lived”? IME access tokens almost always have a lifetime of one week and refresh tokens anywhere from 6 months to a year.

smj-edison · 7h ago
> Every integration with your API begins life as a simple script, and using an API key is the easiest way to get a simple script working. You want to make it as easy as possible for engineers to get started.

> ...You’re building it for a very wide cross-section of people, many of whom are not comfortable writing or reading code. If your API requires users to do anything difficult - like performing an OAuth handshake - many of those users will struggle.

Sounds like they're talking about onboarding specifically. I actually really like this idea, because I've certainly had my fair share of difficulty just trying to get the dang thing to work.

Security wise perhaps not the best, but mitigations like staging only or rate limiting seem sufficient to me.

pixelatedindex · 6h ago
True, I have enjoyed using integrations where you can generate a token from the portal for your app to make the requests. One thing that’s difficult in this scenario is authorization - what resources the token has access to can be kind of murky.
rahkiin · 8h ago
I think they are talking about refresh tokens or API keys like PATs. Some value you pass in a header and it just works. No token flow. And the key is valid for months and can be revoked.
cyberax · 7h ago
If you're using APIs from third parties, the most typical authentication method is a static key that you stick in the "Authorization" HTTP header.

OAuth flows are not at all common for server-to-server communications.

In my perfect world, I would replace API keys with certificates and use mutual TLS for authentication.
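
On the client side it isn't much code either, e.g. with `requests` (file paths and URL are placeholders):

```python
import requests

resp = requests.get(
    "https://api.example.com/v1/things",
    cert=("client.crt", "client.key"),  # client certificate presented to the server
    verify="ca.crt",                    # CA bundle used to verify the server
    timeout=10,
)
```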

pixelatedindex · 6h ago
IME, OAuth flows are pretty common in S2S communication. Usually these tend to be client credential based flows where you request a token exactly like you said (static key in Authorization), rather than authorized grant flows which requires a login action.
cyberax · 4h ago
Yeah, but then there's not that much difference, is there? You can technically move the generation of the access tokens to a separate secure environment, but this drastically increases the complexity and introduces a lot of interesting failure scenarios.
pixelatedindex · 4h ago
I mean… is adding an OAuth layer in 2025 adding that much complexity? If you’re scripting then there’s usually some package native to the language, if you’re using postman you’ll need to generate your authn URL (or do username/passwords for client ID/secret).

If you have sensitive resources they’ll be blocked behind some authz anyway. An exception I’ve seen is access to a sandbox env, those are easily generated at the press of a button.

cyberax · 3h ago
No, I'm just saying that an OAuth layer isn't really adding much benefit when you either use an API key to obtain the refresh token or the refresh token itself becomes a long-term secret, not much better than an API key.

Some way to break out of the "shared secret" model is needed. Mutual TLS is one way that is at least getting some traction.

nostrebored · 7h ago
In your perfect world, are you primarily the producer or consumer of the API?

I hate mTLS APIs because they often mean I need to change how my services are bundled and deployed. But to your point, if everything were mTLS I wouldn’t care.

cyberax · 3h ago
> In your perfect world, are you primarily the producer or consumer of the API?

Both, really. mTLS deployment is the sticking point, but it's slowly getting better. AWS load balancers now support it, they terminate the TLS connection, validate the certificate, and stick it into an HTTP header. Google Cloud Platform and CloudFlare also support it.