Ten years of JSON Web Token and preparing for the future

235 mooreds 145 5/25/2025, 11:05:36 PM self-issued.info ↗

Comments (145)

martinald · 1d ago
The problem I've got with JWTs is that you can rarely (never, really, in my experience) assume anything in the JWT apart from the user id is valid for a long period of time.

For the simplest use case of client auth state: you want to be able to revoke auth straight away if an account is compromised. This means you have to check the auth database on every request anyway, and you could probably have fetched whatever else was in the claims there just as quickly.

Same with roles: if you downgrade an admin user to a lower 'class' of user, you don't want it to take minutes to take effect.

So then all you are left with is a unified client id format, which is somewhat useful, but not really the 'promise' of JWTs (I feel?).

the_duke · 1d ago
Active user logouts, deletions, and permission changes are rare, so the size of revocation lists is extremely small compared to the number of tokens in existence.

You can keep revocations in a very fast lookup system (e.g. broadcasts + an in-memory store), combined with reasonably short token renewals, like 5-60 minutes.

Massively cuts down the number of token validity checks, and makes the system tolerant to downtimes of the auth system. That's less relevant for basic apps where the auth data is in the same DB as all the other data, but that is rarely the case in larger systems.
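A minimal sketch of the "broadcasts + in-memory store" pattern described above (all names here are illustrative; in production the fan-out would be something like Redis pub/sub or Postgres NOTIFY rather than an in-process list):

```python
# Toy fan-out: one revocation event is pushed into every verifier's
# in-memory set, so per-request token checks stay local and O(1).
class RevocationBus:
    def __init__(self):
        self._subscribers = []

    def subscribe(self) -> set:
        """Register a verifier; returns its private in-memory revocation set."""
        revoked = set()
        self._subscribers.append(revoked)
        return revoked

    def broadcast_revocation(self, jti: str) -> None:
        """Push a revoked token id to every subscribed verifier."""
        for revoked in self._subscribers:
            revoked.add(jti)

bus = RevocationBus()
verifier_a = bus.subscribe()
verifier_b = bus.subscribe()
bus.broadcast_revocation("token-123")  # now rejected by both verifiers
```

Combined with 5-60 minute renewals, each entry only needs to live for one renewal window, which is what keeps the sets small.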

rafaelmn · 1d ago
And when building distributed systems a bunch of systems don't really care about those changes immediately anyway so the propagation delay is acceptable, or you can push the burden of refreshing to the client for privilege expansion cases, etc.
nextaccountic · 7h ago
> makes the system tolerant to downtimes of the auth system

You mean the system that handles revocations? If so, your own system will be vulnerable if you continue to do business as usual while it's down

CafeRacer · 1d ago
Is there a proposed rfc for revocation lists?
mooreds · 1d ago
https://datatracker.ietf.org/doc/html/rfc7009 handles generic revocation without specifying implementation details.

Some auth servers implement it. Keycloak does[0]. Auth0 doesn't as far as I can tell[1]. FusionAuth (my employer) has had it listed as a possible feature for years[2], but it has never had the community feedback to bubble it up to the top of our todo list.

0: https://www.keycloak.org/securing-apps/oidc-layers#_token_re...

1: https://auth0.com/docs/secure/tokens/revoke-tokens

2: https://github.com/fusionAuth/fusionauth-issues/issues/201

pwlb · 1d ago
You may have a look at this (still a Draft): https://datatracker.ietf.org/doc/draft-ietf-oauth-status-lis...
unscaled · 1d ago
I don't think status lists solve the requirement for near-realtime revocations. The statuslist itself has a TTL and does not get re-loaded until that TTL expires. This is practically similar to the common practice of having a stateful refresh token and a stateless access token. The statuslist "ttl" claim is equivalent to the "exp" claim of the access token in that regard, and it comes with the same tradeoffs. You can have a lower TTL for statuslist, but that comes at the cost of higher frequency of high-latency network calls due to cache misses.

The classic solution to avoid this (in the common case where you can fit the entire revocation list in memory) is to have a push-based or pub/sub-based mechanism for propagating revocations to token verifiers.

motorest · 22h ago
> The statuslist itself has a TTL (...)

If you read the draft, the TTL is clearly specified as optional.

> (...) and does not get re-loaded until that TTL expires.

That is false. The draft clearly states that the optional TTL is intended to "specify the maximum amount of time, in seconds, that the Status List Token can be cached by a consumer before a fresh copy SHOULD be retrieved."

> You can have a lower TTL for statuslist, but that comes at the cost of higher frequency of high-latency network calls due to cache misses.

The concept of a TTL specifies the staleness limit, and anyone can refresh the cache at a fraction of the TTL. In fact, some cache revalidation strategies trigger refreshes at random moments well within the TTL.
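A tiny illustration of that refresh strategy (the 10-50% window is an arbitrary choice for the sketch): each verifier schedules its next status-list fetch at a random point well inside the TTL, so caches never approach the staleness limit and refreshes from many verifiers don't hit the endpoint in lockstep.

```python
import random

def next_refresh_delay(ttl_seconds: float) -> float:
    # Refresh somewhere between 10% and 50% of the TTL: always well
    # before staleness, at a randomized moment to avoid stampedes.
    return random.uniform(0.1, 0.5) * ttl_seconds

delay = next_refresh_delay(600.0)  # for a 10-minute TTL: 60-300 seconds
```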

There is also a practical limit to how frequently you refresh a token revocation list. Some organizations have a 5-10 min tolerance period for basic, general-purpose access tokens, and fall back to shorter-lived and even one-time access tokens for privileged operations. So if you have privileged operations being allowed with long-lived tokens, your problem is not the revocation list.

unscaled · 8h ago
In that case, when and how would you reload the statuslist?

Again, it doesn't matter if TTL and caching is optional, what matters is that this specification has NOTHING to do with a pub/sub-based or push-based mechanism as described by GGGP. This draft specifies a list that can be cached and/or refreshed periodically or on demand. This means that there will always be some specified refresh frequency and you cannot have near-real-time refreshes.

> There is also a practical limit to how frequently you refresh a token revocation list. Some organizations have a 5-10 min tolerance period for basic, general-purpose access tokens, and fall back to shorter-lived and even one-time access tokens for privileged operations. So if you have privileged operations being allowed with long-lived tokens, your problem is not the revocation list.

That's totally cool. Some organizations are obviously happy with delayed revocations for non-sensitive operations, which they could easily achieve with stateful refresh tokens, without the added complexity of revocation lists. Stateful and revokable refresh tokens are already supported by many OAuth 2.0 implementations such as Keycloak and Auth0[1]. All you have to do is set the access token's TTL to 5-10 minutes and you'll get the same effect as you've described above. The performance characteristics may be worse, but many apps which are happy with delayed revocation are happy with this simple solution.

Unfortunately, there are many products where immediate revocation is required. For instance, administrative dashboards and consoles where most operations are sensitive. You can force token validity check through an API call for all operations, but that makes stateless access tokens useless.

What the original post above proposed is a common pattern[2] that lets you have the performance characteristics (zero extra latency) of stateless tokens together with the security characteristics of a stateful access token (revocation is registered in near-real-time, usually less than 10 seconds). This approach is supported by WSO2[3], for instance. The statuslist spec does nothing to standardize this approach.

[1] https://auth0.com/docs/secure/tokens/refresh-tokens/revoke-r...

[2] See "Decentralized approach" in https://dzone.com/articles/jwt-token-revocation

[3] https://mg.docs.wso2.com/en/latest/concepts/revoked-tokens/#...

motorest · 6h ago
> In that case, when and how would you reload the statuslist?

It only depends on your own requirements. You can easily implement pull-based or push-based approaches if they suit your needs. I know some companies enforce a 10min tolerance on revoked access tokens, and yet some resource servers poll them at a much higher frequency.

> Again, it doesn't matter if TTL and caching is optional (...)

I agree, it doesn't. TTL is not relevant at all. If you go for a pull-based approach, you pick the refresh strategy that suits your needs. TTL means nothing if it's longer than your refresh periods.

> This draft specifies a list that can be cached and/or refreshed periodically or on demand. This means that there will always be some specified refresh frequency and you cannot have near-real-time refreshes.

Yes. You know what makes sense for you. It's not for the standard to specify the max frequency. I mean, do you think the spec specifies max expiry periods for tokens?

Try to think about the problem. What would you do if the standard somehow specified a TTL and it was greater than your personal needs?

brok3nmachine · 1d ago
For this reason, I use a LaunchDarkly config flag for the revocation list. Updates to the config are pushed to all LD clients in near real time.
mattmanser · 1d ago
This makes it sound like you've only worked in an extremely narrow domain.

It's not rare; it happens constantly in enterprise software, project management software, anything where you have collaboration.

What is so frustrating about tech like JWTs is that it fits the fairly rare, high-profile websites like Reddit, Netflix, etc. but doesn't fit ANYTHING else.

Everyone else wants immediate revocation of rights, not waiting for a token to expire.

And yet we all have to suffer this subpar tech because someone wrote a blog post about it and a bunch of moronic software "architects" made it the only option. If you don't JWT somehow you're doing it wrong, even though it should in fact be an extremely niche way of doing Auth at scale.

Simple cookie based tokens were and still are a much better choice for many applications.

bandoti · 1d ago
It’s important to consider that JWT is a series of specs and folks can choose to use any of them to suit their needs.

In fact, it can be used to create simple tokens—even if you store them in a database in a traditional authentication sense.

But it is also helpful to be able to use OIDC, for example, with continuous delivery workflows to authenticate code for deployment. These use JWT and it works quite well I think.

Note: technically JWT is only one of the specs so it’s not exactly correct how I’m referring to it, but I think of them collectively as JWT. :)

konha · 1d ago
> It's not rare; it happens constantly in enterprise software, project management software, anything where you have collaboration

The number of revoked tokens compared to all active tokens should still be tiny in those systems, wouldn’t you agree?

> Everyone else wants immediate revocation of rights, not waiting for a token to expire.

With a revocation list you can still have that. Once you propagated your revocation to all relying parties the token effectively expires early.

motorest · 21h ago
> What is so frustrating about tech like JWTs is that it fits the fairly rare, high profile, websites like Reddit, netflix, etc. but doesn't fit ANYTHING else.

This is only conceivably true if your ability to design services only goes as far as reusing Reddit-like use cases for everything and anything.

But everyone else is not encumbered by that limitation.

> Everyone else wants immediate revocation of rights, not waiting for a token to expire.

Where exactly does a JWT prevent you from rejecting revoked tokens? I mean, JWTs support short-lived tokens, jti denylists, single-use tokens with nonces, etc. Why are you blaming JWTs for problems you're creating for yourself?

bilater · 1d ago
Can you tell me of any instance where someone's auth needed to be revoked within 5 minutes and a delay was not acceptable? I think it's more of an imaginary 'five nines' engineering thing than real life.
martinald · 18h ago
Firstly, I don't think most people who use JWTs use 5-minute refreshes. But even assuming that: any collaboration software. Imagine you invite a user to an internal wiki by mistake; you don't really want them looking at the content for 5 minutes. Much better to be able to revoke instantly.

Then you have anything that handles financial data. If you're a bank and you get a call that a fraudster is taking over an account, you want to be able to revoke that straight away. Waiting another 5 minutes could mean many thousands more in losses (simplified example, but you hopefully get my drift), for which the bank may arguably be held liable by the regulator.

Also many other "UX" problems: you don't want roles to be out of sync for 5 minutes either. Imagine you are collaborating on a web app and you need to give a colleague write access for an urgent deadline. She's sitting next to you and has to wait 5 minutes (or do a forced logout/login) before she gets access, even after refreshing the page.

Finally, it's really far from ideal to be using 5-minute refreshes. For idle users with a tab open, you will have people constantly pinging the backend to get a refresh. Imagine some sort of IoT use case where you have thousands of devices on very bandwidth-limited wide area networks.

Furthermore, it's a total mess on mobile apps. Imagine you have an app (say a food delivery app) that is powered by push notifications for delivery status. If you've got a 5-minute token and you push down an update via push notification telling the app to fetch new data from an HTTP endpoint to update a widget, your token will almost certainly be expired by the time the delivery is on the way. You then need to do a background token refresh, which may or may not be possible on the OS in question.

bilater · 16h ago
fair, these are good examples. still feel there is a better trade-off than constant pings.
dcrazy · 23h ago
Any time an employee is fired?
bilater · 22h ago
You don't tell them they are fired and then revoke access immediately. Either access is already revoked or they are given a reasonable time to close out ("you have until end of day before we revoke access", "we will revoke access after this meeting", etc.). Either way, a JWT expiring every second versus every 5 minutes does not change things.

I'm trying to be sensible here not dream up straw man scenarios of which there are many.

martinald · 18h ago
If you've got a rogue employee destroying stuff, you definitely do not want to wait 5 minutes.
5Qn8mNbc2FNCiVV · 19h ago
Just press the button and get a coffee and press it again (after 5 minutes), the second time it'll be instantly revoked.
mooreds · 1d ago
> you want to be able to revoke auth straight away if an account is compromised

It really depends on the system. In my experience, there are tons of apps that want to be able to revoke access but weigh that against transparent re-authentication. OIDC handles both nicely with:

* short access/id token lifetimes (seconds to minutes)

* regular transparent refreshes of those tokens (using a refresh token that is good for days to months)

This flexibility lets developers use the same technology for banks (with a shorter lifetime for both access/id tokens and refresh tokens) and consumer applications (with a short lifetime for access/id tokens and a longer lifetime for refresh tokens).

afiori · 23h ago
Imho the complexity and cost of having super short-lived access tokens is worse than eating one more per-request db lookup.
hn_throwaway_99 · 1d ago
> For the simplest use case of client auth state: you want to be able to revoke auth straight away if an account is compromised. This means you have to check the auth database on every request anyway, and you could probably have fetched whatever else was in the claims there just as quickly.

FWIW, I built a system previously that got around this "having to check the DB on every access to check for revocations" issue that worked quite well. Two important things to realize:

1. Revocations (or what is usually basically "explicit logout") are actually quite rare in a lot of user application patterns. E.g. for many web apps, users very rarely explicitly log out. It's even rarer for mobile apps.

2. You only need to keep around a list of revocations for as long as your token expiry is. For example, if your token expiration is 30 mins, and you expire a user's tokens at noon, by 12:30 PM you can drop that revocation statement, because any tokens affected by that revocation would have expired anyway.

Thus, if you have a relatively short token expiration (say, half an hour), the revocation list can almost always fit in memory. So what I built:

1. The interface to see if a token has expired is basically "getEarliestTokenIssuedAt(userId: string): Date" - essentially, what is the earliest possible issuance timestamp for a token for a particular user to be considered valid. So, revoking a user's previously issued tokens means just setting this date to Now(), then any token issued before that will be considered invalid.

2. I had a table in postgres that just stored the user ID and earliest valid token date. However, I used postgres' NOTIFY functionality to send a broadcast to all my servers whenever a row was added to this table.

3. My servers then just had what was a local copy of this table, but stored in memory. Again, remember that I could just drop entries that were older than the longest token expiration date, so this could fit in memory.

On the off-chance that somehow the current revocation list couldn't fit in memory, I built something into the system that allowed it to essentially say "memory is full", which would cause it to fall back to calling postgres. But again, that situation would naturally clear up after a few minutes once revocations went back down and the token expiration window passed.
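The scheme above can be condensed into a sketch (class and method names here are illustrative, not from the original system; the 30-minute lifetime matches the example):

```python
TOKEN_LIFETIME = 30 * 60  # token expiry in seconds, as in the example above

class TokenInvalidator:
    """Per-user 'earliest valid issued-at' watermark, kept in memory."""
    def __init__(self):
        self._earliest_valid_iat = {}  # user_id -> unix timestamp

    def revoke_user(self, user_id: str, now: float) -> None:
        # Revoking = every token issued before 'now' becomes invalid.
        self._earliest_valid_iat[user_id] = now

    def prune(self, now: float) -> None:
        # Entries older than the token lifetime are moot: every token
        # they covered has expired on its own by then.
        self._earliest_valid_iat = {
            u: t for u, t in self._earliest_valid_iat.items()
            if now - t < TOKEN_LIFETIME
        }

    def is_token_valid(self, user_id: str, token_iat: float) -> bool:
        return token_iat >= self._earliest_valid_iat.get(user_id, 0.0)
```

In the real system, the watermark table lived in postgres and NOTIFY pushed new rows to each server's in-memory copy.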

This sounds more complicated than it actually was. It has the benefits of:

1. Almost no statefulness, which was great for scalability.

2. Verifying a token could almost always be done entirely in memory. Over a couple of years of running the system, I never actually hit a state where the in-memory revocation list got too big.

martinald · 1d ago
Just seems like a lot of extra fiddly stuff to go wrong for monolithic apps. I get it if you have gone all in on microservices, as each "client" request can fan out to hundreds of requests, each requiring an auth check.

But still, I'm not sure that I've seen an auth/roles database that couldn't fit (at least) the important stuff in RAM, fwiw. Even 1 TB of RAM is relatively affordable (if you are not on the hyperscalers) and could fit billions of users, which at least in theory means you can just check everything and not have another store to worry about.

fmbb · 1d ago
What you describe sounds like it will turn any explicit log-out action a user does on one device into a "log me out from all devices" action, which was probably not at all the user's intent, unless that is the only explicit option you give them.
littlecranky67 · 1d ago
A "logout" action from the user should just delete the JWT from the device he is using. Assuming the token wasn't compromised, there is no backend work involved.

Is this as secure as keeping a blacklist of non-expired tokens? No, it isn't. It is a sane tradeoff between decent security and implementation complexity.

orphea · 1d ago

> A "logout" action from the user should just delete the JWT from the device he is using.

I wouldn't say "should". It may, if you're fine with the inability to terminate sessions on other devices.
littlecranky67 · 1d ago
Terminating sessions on other devices is not possible, but another tradeoff is a "logout from all devices" mechanism. In that case you just have a global "token not issued before" field, and when you log out from all devices, you set that timestamp to the current time (and all previously issued tokens will fail authentication). But again, tradeoff. Your individual requirements may vary.
vintermann · 1d ago
> You only need to keep around a list of revocations for as long as your token expiry is. For example, if your token expiration is 30 mins, and you expire a user's tokens at noon, by 12:30 PM you can drop that revocation statement, because any tokens affected by that revocation would have expired anyway.

And this sort of thing is basically what redis is for, right? Spin up a docker container, use it as a simple key-value store (really just a key store). When someone manually invalidates a token, push it in, with the expiry date it has anyway.

bandoti · 1d ago
Might not even need to store the token itself, just a piece of data that is contained in the claims to say the account is in a good state. Any number of tokens can then be issued, and the validation step would ensure the claims are correct.
rollcat · 1d ago
Overall this sounds good, but:

> 1. Almost no statefulness, which was great for scalability.

This is called "eventual consistency", it's probably fine in practice but you still do have a lot of state. Personally, if I have any in-application state at all, I would use a sticky cookie on the LB to send each client to the same instance.

akoboldfrying · 1d ago
This seems like about the best that can be done (well, you could go full Bloom filter to squeeze that revocation list size down even further), but it does seem vulnerable to DoS: Create 10000 accounts and log them all out at the same time to force the server into the slow PostgreSQL mode.
rollcat · 1d ago
Any system that allows you to create 10000 accounts is already vulnerable to DoS.

Also, as vintermann suggested, you can use a faster, domain-specific database if you're concerned about this becoming an issue. And sometimes edge cases like this aren't worth considering until you hit them.

paradox460 · 17h ago
You don't have to check the db on every request. Just store a list of revoked tokens in a fast cache, like redis, with a TTL longer than the longest token lifetime, and reject (forcing reauth) any token that matches.
fifticon · 1d ago
I have a random idea regarding compromised tokens, which may not hold water. What if you put things like the client's IP address in the token? Then the server can reject (and mark as compromised) as soon as it receives any request from a different IP address. I realise this will also invalidate people who roam between IP addresses, say DHCP/wireless in a larger building.
deeringc · 1d ago
Enterprise customers often have split-tunnel VPNs or proxies (with PAC configs) where part of the traffic may go through a VPN and another part goes directly. So for example a customer admin might configure an app that does email and WebRTC so that the real-time traffic (media and the associated signalling) goes directly and the email traffic goes via some TLS-intercepting proxy for some compliance reason or DLP. This can result in one application having multiple public IPs for different network requests, even while the user is on one internal network (not even jumping between networks like you say). That isn't something that the application author can control; it's the customer admin that decides to do that.
mooreds · 1d ago
There are two in-use RFCs to make compromised tokens much harder to use by attackers. Neither use IP addresses, but both bind the token to the client using some form of cryptography.

RFC 8705, section 3[0], binds tokens by adding a signature of the client certificate presented to the server doing authentication. Then any server receiving that token can check that the client certificate presented is the same (strictly speaking, hashes to the same value). This works great if you have client certs everywhere and can handle provisioning and revoking them.

RFC 9449[1] is a more recent one that uses cryptographic primitives in the client to create proof of private key possession. From the spec:

> The main data structure introduced by this specification is a DPoP proof JWT that is sent as a header in an HTTP request, as described in detail below. A client uses a DPoP proof JWT to prove the possession of a private key corresponding to a certain public key.

These standards are robust ways to ensure a client presenting a token is the client who obtained it.

Note that both depend on other secrets (client cert, private key) being kept secure.

0: https://datatracker.ietf.org/doc/html/rfc8705

1: https://datatracker.ietf.org/doc/html/rfc9449

hn_throwaway_99 · 1d ago
Client IP addresses change a lot more than you think, especially on mobile networks.
spiffyk · 1d ago
More importantly, this will invalidate everyone jumping between Wi-Fi and mobile data, so that might be unworkable for many.
paulddraper · 1d ago
I think you have a narrow view.

For example, a magic link sent via email can have a substantial validity duration.

ithkuil · 1d ago
Yes, but generally magic links are only used for authentication. So if you delete or downgrade the principal, whoever uses that magic link to authenticate can only perform the operations that are associated with the principal, and the check is performed after the magic link is verified, unless the magic link is also used to carry auth claims.
paulddraper · 1d ago
Yes, up-to-date permissions require centralized consistency.

My point is…JWT can be used in a number of contexts.

lo0dot0 · 1d ago
Clicking on links in emails is a security risk because they could be spam. I don't do that unless it's the only way to move forward, and then I double-check the URL. Basically I only use it to sign up, then never again if possible.
pbreit · 1d ago
If a token is not a JWT is it really a “Bearer” token?
unscaled · 1d ago
"Bearer" and JWT are orthogonal. Tokens in other formats, or stateful tokens, can be bearer tokens, while JWTs can use non-bearer authentication methods. For instance, RFC 9449 (DPoP) describes an authentication method where you have to provide a PoP (based on JWS) in addition to an access token (which may or may not be a JWT).
formerly_proven · 1d ago
Bearer token just means whoever has the token string has the associated capability - like bearer bonds.

Unlike e.g. challenge-response or signature authentication.

sbergot · 1d ago
OAuth defines bearer tokens without requiring them to be JWTs.
detaro · 1d ago
yes, 100%
motorest · 1d ago
> For the simplest use case of client auth state: you want to be able to revoke auth straight away if an account is compromised. This means you have to check the auth database on every request anyway, and you could probably have fetched whatever else was in the claims there just as quickly.

I fail to see the relevance of your scenarios regarding JWTs. I mean, I get your frustration. However, none of it is related to JWTs. Take a moment to read what you wrote: if your account is compromised, the attacker started abusing the credentials the moment he got them. The moment the attacker got hold of valid credentials is not the moment you discovered the attack, let alone the moment you forced the compromised account through a global sign-off. This means that your scenario does not prevent abuse. You are revoking a token that was already being abused.

Also, as someone who has implemented JWT-based access controls in resource servers: checking revocation lists is a basic scenario. It's very often implemented as a very basic and very fast endpoint that serves a list of revoked JWT IDs. The resource server polls this endpoint for changes and checks the list on every call as part of the JWT validation. The time window between revoking a token and rejecting it is dictated by how frequently you poll the endpoint. Do you think, say, 1 second is too long?

> Same with roles; if you downgrade an admin user to a lower 'class' of user then you don't want it to take minutes to take effect.

It's the exact same scenario: you force a client to refresh its access tokens, and you revoke the tokens that were already issued. Again, is 1 second too long?

Also, nothing forces you to include roles in a JWT. OAuth2 doesn't. Nothing prevents your resource server from just using the jti to fetch roles from another service. Nevertheless, are you sure that service would be updated as fast or faster than a token revocation?

> So then all you are left with is a unified client id format, which is somewhat useful, but not really the 'promise' of JWTs (I feel?).

OAuth2 is just that. What's wrong with OAuth?

Also, it seems you are completely missing the point of JWTs. Their whole shtick is that they allow resource servers to verify access tokens locally without being forced to consume external services. Token revocation and global sign-offs are often reported as gotchas, but given how infrequently these scenarios take place and how trivial they are to handle, periodically polling an endpoint hardly changes that.

edf13 · 1d ago
If it’s a major compromise you can simply roll out a new key… invalidating all current JWTs forcing a new login… you could also group signing keys by user type to further minimise the refreshes.
hdjrudni · 1d ago
Every time I want to use a JWT, it seems like it's the suboptimal choice, so I've never found a genuine use case for them.

Most recently, I wanted to implement 2FA w/ TOTP. I figured I'd use one cookie for the session and another cookie as a TOTP bypass. If the user doesn't have a 2FA bypass cookie, they have to complete the 2FA challenge. Great, so the user submits username & password like normal; if they pass but don't have the bypass cookie, the server sends back a JWT with a 10-minute expiry. They have to send back the JWT along with the OTP to complete the login.

I figure this is OK, but not optimal. Worst case, a hacker does not submit any username/password but attempts to forge the JWT along with the OTP. The user ID is in clear text in the JWT, but the key only exists on the server, so it's very difficult to crack. Nevertheless, clients have unlimited attempts because the JWT is stateless, and they can keep extending the expiry or set it to the far future as desired. Still, at 256 bits it's not likely they'll ever succeed, but I should probably be alerted to what's going on.

Alternative? Create a 2FA challenge key that's unique after every successful username/password combo. The user submits the challenge key along with the OTP. Same 256-bit security, but unique for each login attempt instead of using a global HMAC key. Also, now it's very easy to limit attempts to ~3, and I have a record of any such hacking attempt. Seems strictly better. Storage is not really a concern because worst case I can still prune all keys older than 10 minutes. The downside, I guess, is that I still have to hit my DB, but it's a very efficient query and I can always move to a key-value store if it becomes a bottleneck.

I don't know, what's the use case? Server-to-server interaction? Then they still need to share a key to validate the JWT. And probably all but the user-facing server don't need to be exposed to the public internet anyway, so why the hoopla adding JWT? I haven't looked into it much because I don't believe in this microservice architecture either, but if I were to go down that road I'd probably try gRPC/protobufs and still not bother with JWT.

gerdesj · 1d ago
A JWT can include claims - that's the difference: JWTs are a richer data structure out of the box. You can do authN and authZ in one go.

You can do it all via individual browser cookies, but it will be complicated. However, you can dump session cookies to a database and then do claims locally on the server, using that cookie to tie it all together.

So I think you can do it either way.

JWTs are mutually authenticated (shared secret) but cookies are not.

unscaled · 1d ago
Everything can include "claims". Claims are just fields in a JSON object. If you're using your own token format based on Libsodium's secret box, you can just do `secretbox_seal(secret_key, json_encode(claims))`. It's a no-brainer one-liner. You can even use MessagePack or Protocol Buffers instead of JSON and save a little on token size.

JWT might do other things for you, like standardizing how to deal with key rotation (using the "kid" header and JWKS discovery URLs), or tying a bearer token to a PoP structure (DPoP), but that's all about standardization. And as a standard, JWT is too flexible and ambiguous. There are better proposed standards out there, and for most of the things JWT is used for (non-interoperable access tokens) it's overkill.

harrall · 1d ago
Long before JWT existed, if you wanted to pass some trusted data through an untrusted channel, you would make a payload with an expiry, encrypt or sign it with your secret key, then send it. However, you would need to make up your own way to send this info. For example, if this were a website, you might dump the signed/encrypted payload into several form fields and upon receiving it back, you would verify that it was signed with your key.

Now that JWT exists, there is a standard way to do it so you don’t have to write the same boring code a bunch of times in different languages. You just have one string you pass in one field and if you tell someone else that it’s a JWT, they know how to parse it. You don’t have to document your own special way anymore.

At the end of the day, it’s just a standard for that specific problem that didn’t have a standard solution before. If passing data like that is not a problem for your use case, then you don’t need the tool.

To use your Protobuf example, there was a time before Protobuf or tools like it existed. I can tell you that writing the exact same protocol code by hand in Java, PHP, and Python is absolute tedious work. But if it never came up that you had to write your own protocol, you neither know the pain of doing it manually nor the pleasure of using Protobuf, and that’s fine.

afiori · 1d ago
The issue with JWTs is that it's a very often misused standard
jmhmd · 1d ago
I use JWTs to let me do auth on cached resources. I can verify permissions in an edge worker and deliver the cached resource without needing to roundtrip to the database. Not sure how to implement that without JWT (or rolling my own solution). Lots of people here saying some version of “I don’t see the use case, just use X”, but these kinds of standards nearly always arise as a result of a valid use case, even if they aren’t as common.
alisonatwork · 1d ago
In your scenario you could still apply additional protections to a user id after detecting X attempts of sending a forged JWT. At least you could alert on JWTs that arrive with invalid signatures. Or you could put a 2FA challenge key inside the JWT, just use the JWT as a container to hold the information you would have shared with the client anyway.

I agree that JWTs don't really do anything more than a cookie couldn't already do, but I think the use case is for apps, not web browsers. In particular apps that do raw HTTP API calls and do not implement a cookie jar. And then because most companies do "app first development", we end up having to support JWT in the web browser too, manually putting it into localstorage or the application state, instead of just leveraging the cookie jar that was already there.

We just recently had to implement an SSO solution using JWT because the platform only gave out JWTs, so we ended up putting the JWT inside an encrypted HttpOnly cookie. Seemed a bit like a hat-on-a-hat, but eh.

codethief · 1d ago
> we end up having to support JWT in the web browser too, manually putting it into localstorage or the application state, instead of just leveraging the cookie jar that was already there.

> We just recently had to implement an SSO solution using JWT because the platform only gave out JWTs, so we ended up putting the JWT inside an encrypted HttpOnly cookie. Seemed a bit like a hat-on-a-hat, but eh.

Why would you think that? Cookies are a perfectly normal place to store JWTs for web applications. If your frontend is server-side-generated, the browser needs to authenticate the very first request it sends to the server and can't rely on anything apart from cookies anyway.

formerly_proven · 1d ago
> I don't know, what's the use-case? Server-server interaction? Then they still need to share a key to validate the JWT. And probably all but the user-facing server doesn't need to be exposed to public internet anyway so why the hoopla adding JWT?

JWTs across servers are typically used with signatures, not in HMAC mode (so no globally shared HMAC keys). Then the issuer simply exposes a JWKS endpoint for downstream consumers (so no additional maintenance to distribute public keys).

stephenr · 1d ago
> what's the use-case?

The use-case I always remember people presenting for JWTs was mostly part of the "serverless" fad/hype.

The theory was presented like this: If you use a JWT your application logic can be stateless when it comes to authentication; So you don't need to load user info from a session database/kv store, it's right there in the request...

The only way that makes any sense to me, is if your application has zero storage of its own: it's all remote APIs/services, including your authentication source. I'm sure there are some applications like that, but I find it hard to believe that's how/why it's used most of the time.

Never underestimate this industry's ability to get obsessed with the new shiny.

I had an eye opening experience many years ago with a junior dev (I was significantly more experienced than he was then, but wouldn't have called myself "senior" at the time).

He had written an internal tool for the agency we both worked for/through. I don't recall the exact specifics, but I remember the accountant was involved somewhat, and it was a fairly basic CRUD-y PHP/MySQL app. Nothing to write home about, but it worked fine.

At some point he had an issue getting his php/mysql environment configured (I guess on a new laptop?) - this was before the time of Docker; Vagrant was likely already a thing but he wasn't using it.

From what he explained afterwards I believe it was just the extremely common issue that connecting to "localhost" causes the mysql client to attempt a socket connection, and the default socket location provided in php isn't always correct.

As I said, I heard about this after he'd decided that connection issue (with an app that had already been working well enough to show off to powers-that-be and get approval to spend more paid time on it) was enough to warrant re-writing the entire thing to use MongoDB.

As I said: never underestimate this industry's ability to get obsessed with the new shiny.

kro · 1d ago
There is a jti claim that can be used for storing a token ID, so you could enforce tracking all issued tokens server side.

Cracking 256bit by brute force is unrealistically unlikely as you said, and there are many systems that could be compromised by that compute, an isolated jwt sig seems like just a very specific example.

A nice benefit of JWT for me is that it can be asymm signed and verified (ID tokens)

mooreds · 1d ago
Here's an interesting video from an identity conference a few years ago. The speaker is Brian Campbell who was one of the JWT spec authors.

https://www.youtube.com/watch?v=IgKRGS6cQWw

Here's the video description:

> JWT is an IETF standard security token format that, due to perceived simplicity and widespread library availability, has been extremely popular in recent years. Despite that popularity (or maybe, in part, because of it), JWT has been heavily derided by reputable people in information security ("horrible standard", "RFC was made by monkeys", "Internet’s worst cryptography standard", "JWT is a disaster ... amazing how bad it is", "simplistic, complicated, and unsafe all at the same time", and "almost impossible to build a secure JWT library" ...give just a taste of the sentiment).

> The criticism has been substantiated and amplified by a steady stream of public vulnerabilities in libraries and deployments. Indeed there have been serious and legitimate security problems with JWT and many of them can be attributed directly to fundamental flaws in the specification itself that allowed, or even encouraged, such implementation mistakes. But is JWT irredeemably flawed? This session will endeavor to take a hard look at that very question (complete with the presenter's own sense of inadequacy and fear of culpability in JWT's flaws) with a review/overview of JWT fundamentals and a pragmatic look at each of the most common and/or biting criticisms and associated real-world vulnerabilities.

methou · 1d ago
JWTs are just too fat, and JS users often forget that encoding is not encryption.

I've seen some news site trackers send a JWT in the URL/header to a 3rd-party tracker. The content was no surprise: my full name and email address, which violates the site's own privacy policy.

Otherwise it's very open and handy, from inspecting a jwt token I can learn a lot about the architectural design of many sites.

unscaled · 1d ago
tptacek's survey was already mentioned here, but I think it should be more famous. https://fly.io/blog/api-tokens-a-tedious-survey

Unfortunately, it seems like 99% of the industry decides which token to use based on Medium articles, LLM responses or how many unmaintained packages that implement this thing they can find on NPM.

JWT is mostly used as an access token, but for the vast majority of use cases it's a bad fit. If you've got low traffic and no strict multi-region deployment requirements, random IDs are the best approach for you. They are extremely lean and easy to revoke. It's pretty secure: the only common vulnerabilities I can think of with this approach are session fixation[1] and timing attacks[2]. Both attacks are preventable if you take just a few simple precautions:

1. Always generate 32-byte session IDs using a cryptographically secure random number generator on authentication. (Never re-use existing session IDs for new logins)

2. Either use a cryptographic hash (e.g. SHA-256 or Blake2b) of the session ID as the database field used when querying sessions or make sure that the Session ID field is indexed with a hash-based index (B-trees are susceptible to timing attacks).

In cases where you really cannot use session IDs, your service is usually big enough and important enough to use custom Protobuf tokens or even a more special-purpose format like Macaroons. These formats can be far more compact and give you full control over designing for your needs. For instance, if you want flexible claims (with most of them standardized across your services), together with encryption, you can use a combination of Protobuf and a libsodium secret box envelope.

[1] https://owasp.org/www-community/attacks/Session_fixation

[2] e.g. https://github.com/advisories/GHSA-cvw2-xj8r-mjf7
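Both precautions above fit in a few lines of Python; this is a minimal sketch where the names are illustrative and the dict stands in for a real session table:

```python
import hashlib
import secrets

def new_session_id() -> str:
    # Precaution 1: 32 bytes from a CSPRNG, freshly generated per login
    # (never reuse an existing session ID for a new login).
    return secrets.token_urlsafe(32)

def lookup_key(session_id: str) -> str:
    # Precaution 2: store and query by a cryptographic hash of the ID,
    # so a timing leak in the database index can't reveal the real token.
    return hashlib.sha256(session_id.encode()).hexdigest()

sid = new_session_id()
sessions = {lookup_key(sid): {"user_id": 42}}      # stand-in for a DB table
assert sessions[lookup_key(sid)]["user_id"] == 42  # hashed value is the key
```

The raw session ID only ever lives in the client's cookie; the server side sees and stores the hash.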

nh2 · 17h ago
> or make sure that the Session ID field is indexed with a hash-based index

Using a hash index instead of a btree isn't a 100% guaranteed solution because there may be craftable collisions (because e.g. postgres's index hash is not cryptographic) which cause fallback to linear comparison across the values inside the hash bucket:

https://dba.stackexchange.com/questions/285739/prevent-timin...

So hashing the ID before the DB lookup is better.

lyu07282 · 1d ago
I use JWT and a half dozen other standards, not by choice though. I wish I could do what you suggest; it would simplify everything a ton. But I'm not going to roll my own multi-org/SSO/2FA auth platform. Needing those auth features is what made me use these standards, not my app being big. It's not; it's tiny.
slt2021 · 21h ago
Session IDs are vulnerable to cookie theft. In some games, if you lose your session cookie, you might as well lose your account and everything you have on it.

You can of course bind the session ID to the IP address, but that's extra effort you need to put in. In JWT land you can just put the IP address inside the payload, send requests with a non-matching IP back to re-auth, and regenerate the JWT for the new IP in case the customer is roaming networks.

jpc0 · 1d ago
> Content is no surprise, my full name, and email address

Not sure if I’m remembering correctly but isn’t it recommended to not store any identifiable information in a JWT precisely because of this?

littlecranky67 · 1d ago
Well JWTs are signed - signing is not encryption per se. But there are also JWE which are mentioned in the linked article. They are fully encrypted.
genghisjahn · 1d ago
I love JWTs between servers. Between servers and clients, you just end up remaking cookies/sessions. Strictly my experience/opinion. Glad to hear from others.
gerdesj · 1d ago
Cookies are only controlled by the server but obviously can be negotiated for with a secret. JWTs have a mutual secret component built in and far cooler sounding ... stuff. So both ends have to trust the other and prove it with JWT and when cookies are in play, you takes your chances - you can use mutual TLS to get the same trust that JWT gives.

I have a web app that I'm doing sysops for which ended up with both. The web devs insisted on JWT and cough "forgot" about the auth bearer bit in the header because their API didn't use it. I ended up amending and recompiling an Apache module for that but to be fair, they will support it in the next version so I can revert my changes. A few db lookups in the Apache proxy JWT module I'm using and you have your claims.

On the front of that lot you have Apache session cookies generated when auth succeeds against a PrivacyIDEA instance - ie MFA.

I suppose we have cookies for authN and JWT for authZ. Cookies could do both and apart from session I'm not too familiar but it looks like claims would require multiple cookies where JWT does it all in one.

genghisjahn · 1d ago
I use JWTs with RSA key pairs primarily. I tell the other service to make the pair and send me the public. I never see the private. Then I can verify all their tokens with the public key.

This way I don’t have to worry about sharing the secret. It never leaves the other service.

firesteelrain · 1d ago
I think you meant verify with public key.
genghisjahn · 1d ago
Yes. I meant verify with public key. (Facepalm and updated)
firesteelrain · 1d ago
It’s easy to get mixed up.
lyu07282 · 1d ago
Isn't that what OIDC/JWKS is? That also uses PKI to verify JWT with public keys.
some_furry · 1d ago
> Then I can verify all their tokens with the private key.

Mmmm. No. You're supposed to use a public key to verify the tokens, not a private key. What library are you using that tolerates this sort of misuse?

udev4096 · 1d ago
Who uses apache reverse proxy in 2025?
slt2021 · 1d ago
You can't generally reuse cookies across domains, because the browser controls which domain receives which cookie. Also cookies are not cryptographically signed and thus easily forgeable by the client/browser.

JWTs on the other hand can be used across domains, so a JWT issued by your IDP on one domain can be trusted on another domain. The crypto signature helps in verifying the integrity of the data.

Sessions are usually tied to a single backend/application server. It's hard to reuse session data across different apps.

JWTs on the other hand allow sharing session data across different app servers/microservices.

gerdesj · 1d ago
"Also cookies are not cryptographically signed and thus easily forgeable by the client/browser."

My Apache webby thingies quite happily dole out encrypted cookies:

https://httpd.apache.org/docs/2.4/mod/mod_session_crypto.htm...

Your notes on cross site issues are also described there.

JWTs are a mutually-shared-secret passable with knobs on - you can store loads of data in them. Cookies are more one-shot and one piece of data.

formerly_proven · 1d ago
Encryption generally doesn’t provide authentication, so I wouldn’t be surprised if that Apache module allows a user to flip is_admin=0 to 1 because the encryption is sufficiently malleable to do that. Especially because that page mentions 3DES.
kevlened · 1d ago
> Also cookies are not cryptographically signed and thus easily forgeable by the client/browser

While it's true that you could avoid signing cookies, this isn't the default for any server library I'm aware of. If your library doesn't require a secret to use for signing, you should report it.

I'm also unaware of JWT libraries that default to "none" for the algorithm (some go against the spec and avoid it entirely), though it's possible to use JWTs insecurely.

pjmlp · 1d ago
10 years of tooling pain, more likely.

I don't know what the better solution looks like, but dealing with OAuth and JWT setups is kind of horrible, regardless of the technology stack being used.

90s_dev · 1d ago
> It’s often said that one sign of a standard having succeeded is that it’s used for things that the inventors never imagined.

It's certainly a sign of something's utility and versatility, for sure. Congrats.

vrosas · 1d ago
If you go back and search hacker news for any article involving JWTs or OAuth you’ll find hundreds of comments of circular arguments over what a JWT is and is not. People never seem to be able to separate the two.
90s_dev · 1d ago
I still don't really understand them. The last time I used them was for a client probably in 2016 or 2018, and I forgot everything I learned about them. But they have an RFC so that's pretty cool.
marifjeren · 1d ago
I think an easy way to think about them is it's just a json object, with some cryptographic crud glued to it that proves who created it.
deeringc · 1d ago
JSON Web Tokens are part of the JSON Object Signing and Encryption (JOSE) family of standards which are really just containers for cryptographic primitives in a web-friendly representation. Most people are aware of JWS (signed payloads) but there are also JWE (encrypted payloads) and JWK (key payloads). If you're building any sort of cryptographic system that needs to represent encrypted/signed values or keys, you can use JOSE to represent these primitives without having to reinvent the wheel. By far the biggest use of JOSE is in authentication systems where JWS are used as signed bearer tokens but that's just one application and there are many others. They aren't perfect, but they filled an important gap when they were created and made it much easier to deal with crypto at an application layer compared with all of the binary formats that are used in things like TLS.
TheChaplain · 1d ago
I don't really get the point of JWT these days with SameSite and CSP-headers available.

And from the backend perspective, most frameworks have session tracking built-in with cookies so it's super easy to dismiss one or all clients.

With JWT however, that rarely exists and you need to re-implement the whole session shebang in order to keep track of the clients.

littlecranky67 · 1d ago
Samesite/CSP Headers and JWT are orthogonal to each other. I use a JWT system for authenticating my SPA against the REST backend, but store the JWT in a cookie (using SameSite=strict and HttpOnly).
marifjeren · 1d ago
Love JWTs but I wish there was a better standard for conveying detailed and compact authorization information, for systems requiring enforcement of complex authorization rules.

We experimented once with trying to put permissions on a JWT (more complex than your popular scopes) but that makes them grow quickly. And we experimented with putting role information on JWTs but that results in re-centralization of logic.

Maybe conveying complex authorization info via a single object that gets passed around repeatedly is fundamentally a flawed idea, but if I had an identity standards wishlist that would be near the top.

kgeist · 1d ago
>We experimented once with trying to put permissions on a JWT (more complex than your popular scopes) but that makes them grow quickly.

We solved it by simply using bitmasks.

Say, you want to encode an access rule "allows reading from Calendar objects". The typical CRUD actions can be encoded with 4 bits. For example, all bits are zero => no access. The first bit is 1 => can create. The second bit is 1 => can read. Etc.

Then, say, if your system has 32 different types of objects, you can say that, "position 13 encodes for calendars". So you get 32*4 = 128 bits, i.e. just 16 bytes to encode information about CRUD rules for 32 different types of objects.

Sure it sounds complicated but if you move it to a library, you stop thinking about it.
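The bitmask scheme above can be sketched in a few lines of Python (the position and names are made up for illustration):

```python
# 4 bits per object type; an object type's position selects its 4-bit group.
CREATE, READ, UPDATE, DELETE = 0b0001, 0b0010, 0b0100, 0b1000

CALENDAR = 13  # hypothetical: "position 13 encodes for calendars"

def grant(mask: int, obj_type: int, actions: int) -> int:
    # Set the given action bits inside the object type's 4-bit group.
    return mask | (actions << (obj_type * 4))

def allowed(mask: int, obj_type: int, action: int) -> bool:
    # Shift the group down and test the action bit.
    return bool((mask >> (obj_type * 4)) & action)

perms = grant(0, CALENDAR, READ | UPDATE)
assert allowed(perms, CALENDAR, READ)
assert not allowed(perms, CALENDAR, DELETE)

# 32 object types x 4 bits = 128 bits, i.e. 16 bytes in the token:
assert perms.bit_length() <= 128
compact_claim = perms.to_bytes(16, "big")
```

As the comment says, once this lives in a small library, callers only ever see `grant` and `allowed`.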

hirsin · 1d ago
That works as long as you have no distinctions between objects of a type. _Which_ calendar can they edit? All of them?

Putting all the repos you can access into a token is a request we get sometimes. It would be... Difficult.

kgeist · 23h ago
I agree it doesn't work for all cases. In our case, some services can have complex, service-specific access control logic that's hard to express declaratively in a token, so we also have to make some checks by consulting the DB. I don't think that's usually a problem (performance-wise) because we already need to contact the DB anyway - to retrieve the entity to work with (and that has, say, an OwnerID property). The access token helps reduce DB load by skipping general checks ("can the user access calendars in principle?"), and for users who can, we then consult the DB additionally, if the service requires it ("is the user actually the owner of this calendar?" or any other additional access logic). The general case "can the user access calendars in principle?" also allows to hide menu items / return 403 in the UI immediately with zero DB or cache cost.
JimDabell · 1d ago
You might want to check out Biscuit:

> Biscuit is an authorization token with decentralized verification, offline attenuation and strong security policy enforcement based on a logic language

https://www.biscuitsec.org

cyberax · 1d ago
Authorization is really not a generally solvable problem. Every large system ends up having its quirks.

Attempting to generalize it ends up in pain, suffering, and AWS IAM.

Bilal_io · 1d ago
Oh yeah the token grows in length very quickly, we also tried using it 5 years ago to pass in roles to the client and ended up with many issues
tzahifadida · 1d ago
One thing that I don't like about JWTs is that all REST calls must include that huge thing. A simple old-school session ID as a cookie is smaller and will pass the request faster. You can store the session in the database or Redis, or cache it in memory. Really, who actually has millions of users? Why go for the most complicated setup?
sroussey · 1d ago
The meat of the article is to point to this link:

https://www.ietf.org/archive/id/draft-sheffer-oauth-rfc8725b...

JSON Web Token Best Current Practices

francislavoie · 1d ago
Paseto is better https://paseto.io/, but unfortunately OAuth forces the usage of JWT.
JimDabell · 1d ago
I believe OAuth doesn’t require JWT (just an opaque token, which in practice is often JWT), but OpenID Connect – which is based on OAuth – does require JWT.
unscaled · 1d ago
Some optional OAuth extension RFCs do depend on JWT, e.g. the JWT Profile for OAuth 2.0 Access Tokens (RFC 9068), OAuth 2.0 Demonstrating Proof of Possession (DPoP) (RFC 9449), and JWT-Secured Authorization Request (JAR) (RFC 9101). Core OAuth 2.0 does not enforce supporting JWT anywhere, but due to the influence of OpenID Connect there are more and more OAuth use cases that require JWT if you want to follow standards beyond the core OAuth RFCs (6749 and 6750).

The closest OAuth gets to mandating JWT is with client authentication and proof-of-possession. The OAuth Best Current Practices RFC (9700) recommends using asymmetric JWT for client authentication in case you cannot use Mutual TLS (which is usually the case). This recommendation will probably be rolled into the new OAuth 2.1 standard (it is included in the draft). OAuth 2.1 also mentions the JWT-based DPoP as one of the two recommended methods for implementing sender-constrained access tokens (the other one is Mutual TLS again).

mooreds · 1d ago
> OAuth forces the usage of JWT.

OAuth doesn't, OIDC does for the ID token[0]. OAuth, at least the inital RFCs, were released 3 years before JWT was defined. But many extensions of OAuth do require or support JWTs.

Either way, I'm just not sure the demand is there.

My employer has had an open issue for Paseto[1] for years but it hasn't seen much community support. Some other interesting comments here[2]. Looks like most of the implementations[3] are libraries rather than standalone auth servers.

0: https://openid.net/specs/openid-connect-core-1_0.html#IDToke...

1: https://github.com/fusionAuth/fusionauth-issues/issues/773

2: https://www.reddit.com/r/KeyCloak/comments/1e2h5w7/is_paseto...

3: https://paseto.io/

francislavoie · 1d ago
As others have said, JWT, in practice, is effectively forced by the extensions/OIDC, I was just being short.

It shouldn't be about demand. It's about solving the danger of these poorly designed APIs to improve overall web security.

Zamicol · 1d ago
    {
        "pay": {
            "msg": "There are also other options.",
            "alg": "ES256",
            "iat": 1748248973,
            "tmb": "9PcBWntvjAktwfiPp8WxgOyQOwc1h6Lo1UnB_gkWXKk",
            "typ": "cyphr.me/msg/create"
        },
        "sig": "sHyMrykhsta5etjqH1e5oho0EpEs2FrblQ0DFHQo0aMgKd2V__SQ2Fl2EOSKt8wl65iLmKgIaMVEgCmhtvbUcg"
    }

Verify: https://cozejson.com

Spec: https://github.com/Cyphrme/Coze

francislavoie · 1d ago
I don't like it much, using JSON as the transport has some problems if encoded in a URL as required by many auth flows. Paseto encodes the whole version+payload+signature to make it easier to transport. Of course you could just base64 encode the whole Coze JSON, but that isn't part of the spec, which means the spec is weak.
hirsin · 1d ago
Hm, I wonder how the double sig problem that SAML would run into will work here. What happens if someone adds an extra sig object there?
cyberax · 1d ago
I fail to see how it's better, except for hand-waving about potential crypto attacks?

It seems to be a NIH-ed serialization format with hard-coded ciphersuites. It doesn't seem to support use cases like delegation and claims.

francislavoie · 1d ago
Watch the talk at the bottom of the page. JWT/JOSE are chock full of dangerous footguns that aren't just theoretical; they have repeatedly been shown to be poorly designed and too risky to implement correctly as written. Using a few known-secure cryptographic primitives as part of the spec ensures it's impossible to get the security wrong; it can't be misused.
cyberax · 23h ago
I am not going to watch the talk. The slides from it aren't available either.

And sorry, but the article https://paragonie.com/blog/2017/03/jwt-json-web-tokens-is-ba... is just weak. The only real vulnerability is the key type confusion (HS256 vs. RS256) enabled by libraries in weakly-typed languages, and easily fixed. Other:

> RSA with PKCS #1v1.5 padding is vulnerable to a type of chosen-ciphertext attack, called a padding oracle.

Not applicable.

> If you attempt to avoid invalid curve attacks by using one of the elliptic curves for security, you're no longer JWT standards-compliant.

This is just nonsense. JWT allows Ed25519: https://www.rfc-editor.org/rfc/rfc8037 Moreover, I'm not aware of real attacks against NIST curves.

francislavoie · 23h ago
The point is that the JWT spec leaves it open to the implementer which primitives to use, which invites bad implementations to be insecure. PASETO requires a small subset of known secure primitives, preventing that problem altogether. It's not "just nonsense".

Here's a list of "alg: none" JWT vulns. Every one of these would've been avoided had the standard been something like PASETO which didn't allow that. https://github.com/zofrex/howmanydayssinceajwtalgnonevuln/bl...

You say "I am not going to watch the talk" and then you continue to argue in bad faith. Please walk away if you're not going to engage honestly.

cyberax · 19h ago
The JWT spec can thus be fixed by changing the recommended set of primitives. No need to reinvent the wheel with custom serialization (that probably also has vulnerabilities when implemented by clueless people).

You parade the alg=none vulnerability that has been fixed long ago as the reason to reinvent the world. It's simply not.

francislavoie · 19h ago
The spec is fundamentally flawed. The fixed spec would be a different spec, i.e. something like PASETO.
cyberax · 13h ago
You keep repeating that. What is "fundamentally flawed"?

PASETO has exactly the same vulnerabilities. You can specify a different version, and a buggy implementation can misinterpret it. With PASETO, the algorithm selection is fully under the control of the attacker.

unscaled · 1d ago
These "potential crypto attacks" resulted in multiple CVEs and several real-life attacks. I think even Storm-0558[1] could be traced to how hard it is to verify a valid JWT, due to some of the over-engineering mistakes in the standard's design. I don't know if PASETO would have solved that particular attack, but the PASETO standard solves some of the most common CVEs we see with JWT libraries: alg=none, algorithm confusion attacks and invalid curves.

[1] https://www.microsoft.com/en-us/security/blog/2023/07/14/ana...

cyberax · 23h ago
It looks like in the case of MS they simply trusted an incorrect key in the validation path? I fail to see how PASETO would have solved that. There were no token format shenanigans.

`alg=none` and `hsa=rsa` were really the only ones that are JWT-specific. Invalid curves are algorithm-specific, and JWT allows the Ed25519 signatures.

francislavoie · 23h ago
Yes it allows Ed25519, but it doesn't disallow other curves. That's the whole point. If you allow primitives that have potential issues, it's risky to use.
cyberax · 23h ago
There is no standard saying that implementations MUST support P-256. So you're free to just turn on Ed25519.

And so far, I don't think NIST curves have been cracked? iOS secure enclave only supports them, for example.

tsarchitect · 1d ago
David Blevins has a great video (2018) that mentions JWTs https://www.youtube.com/watch?v=osQmFNm0YDU

He discusses the architectural advantages of JWT but also what JWTs lack:

"JWTs are a passport without a picture. A very dangerous thing".

His solution: OAuth2 + JWT + Signatures

santiagobasulto · 1d ago
I think the rule of thumb should be to ALWAYS start with Cookies for client/server authentication, with all the security features enabled by default. HttpOnly, Secure and SameSite.

Then, if that has some sort of limitation for your app's specific use case, you can see to migrate to JWT.

But the standards are standards for a reason.

grxar · 1d ago
I would like to hear your opinion about how to invalidate a token. The two options we have so far: (1) query the DB on every request, or (2) cache the DB data for a few minutes. It seems neither fits well in modern web development.
littlecranky67 · 1d ago
Tradeoff: You can't implement individual token revocation, but you can easily implement a "Logout on all devices" system. You have a "minimum issued at" timestamp attached to each identity, and if that identity (user) chooses to "logout from all devices", you set the timestamp to the current time. Upon validating a token you only make sure the issued-at (iat) is after your "minimum issued at".

I think it is a really good trade-off, as in case of a security breach you have an easy way to mitigate leaked tokens. The downside is that your users will have to re-login on all devices. If you do not want to burden your users with logging in on all devices, you should ask yourself how often you have security breaches and leaked tokens; it might be that you have other issues going on.
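The "minimum issued at" check described above is a few lines of code; here's a minimal Python sketch, with an in-memory dict standing in for the per-user column you'd keep server-side:

```python
import time

# Per-user "minimum issued at" (stored server-side, e.g. one column per user).
min_issued_at: dict[str, float] = {}

def logout_everywhere(user_id: str) -> None:
    # Any token issued before this moment becomes invalid on next validation.
    min_issued_at[user_id] = time.time()

def token_still_valid(user_id: str, token_iat: float) -> bool:
    # Compare the token's iat claim against the per-user floor.
    return token_iat >= min_issued_at.get(user_id, 0.0)

iat = time.time() - 1  # a token issued one second ago
assert token_still_valid("alice", iat)
logout_everywhere("alice")
assert not token_still_valid("alice", iat)  # pre-logout token rejected
```

One timestamp per user is the entire revocation state, which is why this scales so much better than tracking every issued token.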

pelagicAustral · 1d ago
JWTs are a vanity project of the JS community... I am still waiting for a use case that cannot be served by traditional key exchange...
firesteelrain · 1d ago
> JWTs are a vanity project of the JS community

JWTs are standardized (RFC 7519) and used outside the JS ecosystem. Not a vanity project

Though they're often overused and poorly understood, where simpler and more secure methods would suffice.

mdaniel · 1d ago
Did you mean cert exchange, because keys are just a very long password, but certs carry actual information about the holder (err, I guess pedantically of the holder with the key)
some_furry · 1d ago
> Did you mean cert exchange, because keys are just a very long password

My experience differs:

My private key is only 256 bits (32 bytes, which base64 encodes up to 44 characters, if you use padding). My typical passwords are 40-64 characters (unless stupid requirements force me to go shorter).

Uvix · 1d ago
What algorithm are you using? RSA keys are normally at least 2048 bits in length.
some_furry · 1d ago
Why would I use RSA when Ed25519 is right there?
rvz · 1d ago
One of the worst standards ever made.
sachahjkl · 1d ago
JSON Computer
nssnsjsjsjs · 1d ago
Is there an RSS feed for this blog?
JimDabell · 1d ago
Yes, it’s linked using the standard autodiscovery rel=alternate meta element. You should be able to just paste the site URL into whichever feed reader you use to subscribe.
pawanjswal · 1d ago
JWT has really shaped the modern web!