Comment by antihero 5 days ago

This is why you have refresh tokens - your actual token expires regularly, but the client has a token that allows you to get a new one. Revoking is a case of not allowing them to get a new one.
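The refresh/revoke split described above can be sketched minimally in Python. This is an illustration only: a plain in-memory dict stands in for the server-side refresh-token store, and all names (`issue_tokens`, `refresh_access`, `revoke`) are hypothetical.

```python
import secrets
import time

# Hypothetical server-side store of valid refresh tokens:
# refresh_token -> user_id. In practice this would live in a database.
refresh_store = {}

def issue_tokens(user_id):
    access = secrets.token_urlsafe(32)   # short-lived bearer token
    refresh = secrets.token_urlsafe(32)  # long-lived, server-tracked
    refresh_store[refresh] = user_id
    return {"access": access, "refresh": refresh,
            "expires_at": time.time() + 600}  # e.g. 10-minute access TTL

def refresh_access(refresh_token):
    # Revocation is just refusing renewal: once the entry is gone,
    # the client's access token dies at its natural expiry.
    user_id = refresh_store.get(refresh_token)
    if user_id is None:
        raise PermissionError("refresh token revoked or unknown")
    return issue_tokens(user_id)

def revoke(refresh_token):
    refresh_store.pop(refresh_token, None)
```

The key property is that only the refresh endpoint needs the store; ordinary requests never touch it.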

ars 5 days ago

You only have to do that if you must validate a token, without having access to session data.

I doubt most systems are like that. You can just use what you call "your actual token" and check whether the session is still valid. Adding a second token is rarely needed unless you have disconnected systems that can't see session data.

  • fastball 4 days ago

    Not having to start all my API handlers with a call to the DB to check token validity significantly improves speed for endpoints that don't need the SQL db for anything else, and reduces the load on my SQL db at the same time.

    • ars 4 days ago

      Does it actually improve speed though? The DB check is simply "does this key exist", it can be done in a memory database, it doesn't have to be the same DB as the rest of your data.

Validating a token requires running cryptographic algorithms to verify the signature, and those are not fast.

      • fastball 8 hours ago

        It definitely improves speed. Crypto algos are slow, but they are not slower than a TCP roundtrip. Even a memory database is not generally running on the same machine, so there is still a round-trip cost vs a JWT. Also, although it doesn't need to be the same DB, it adds more complexity to store such a key in a different DB than your actual user data (where the original auth logic is coming from).

    • RadiozRadioz 3 days ago

      Then don't hit the SQL DB directly, cache the tokens in memory. Be it Redis or just in your app. Invalidate the cache on token expiry (Redis has TTL built in).

      UserID -> token is a tiny amount of data.

      • fastball 7 hours ago

And now I need to invalidate the cache when the key is invalidated. Also, this cache cannot be updated/invalidated atomically, the way it can if I'm just storing a refresh key in the SQL db. Caching in Redis is more complex and more prone to error than access/refresh token systems.
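The TTL-cache idea from this subthread can be sketched as an in-process stand-in for what Redis gives you natively with SETEX/EXPIRE. This is a sketch, not a recommendation either way; the class and method names are illustrative.

```python
import time

class TokenCache:
    """In-process UserID -> token cache with per-entry TTL.
    A real deployment might use Redis SETEX instead; this mimics
    the behavior locally with lazy expiry on read."""

    def __init__(self):
        self._data = {}  # user_id -> (token, expires_at)

    def set(self, user_id, token, ttl_seconds):
        self._data[user_id] = (token, time.time() + ttl_seconds)

    def get(self, user_id):
        entry = self._data.get(user_id)
        if entry is None:
            return None
        token, expires_at = entry
        if time.time() >= expires_at:
            # Expired: drop it, forcing a fresh validity check upstream.
            del self._data[user_id]
            return None
        return token
```

Note that fastball's objection still applies: invalidating such a cache across processes is exactly the hard part that an access/refresh split avoids.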

d4mi3n 5 days ago

This is an implementation detail in my opinion. There are cases where having the capability to force a refresh is desired. There are also cases where you want to be able to lock out a specific session/user/device. YMMV, do what makes sense for your business/context/threat model.

  • SOLAR_FIELDS 5 days ago

It is, but it's an architectural decision that forces expiry by default, so you should probably have both. AWS runs 12-hour session tokens, but you can still revoke those to address a compromise within that 12-hour window. The nice thing forced expiry does is that you get token revocation For Free by virtue of not allowing renewal.

kevincox 4 days ago

This is really just an optimization. It means that you don't need to do an expiry check on the regular token, only on the refresh token. It doesn't change the fact that you should be able to revoke a session before it naturally expires.

  • catlifeonmars 4 days ago

    Having a short session expiry is a workaround for not being able to revoke a token in real time. This is really the fault of stateless auth protocols (like OAuth) which do offline authentication by design. This allows authentication to scale in federated identity contexts.

    • apitman 4 days ago

      OAuth2 is not inherently stateless.

      • catlifeonmars 4 days ago

        Good call. I said OAuth but what I meant was OIDC and specifically JWT. OAuth (not OIDC) implementations MAY use opaque access tokens that require server side state to validate.

  • antihero 4 days ago

Yeah, but depending on how you set it up, you could have a very short expiry. If your auth system is as simple as: Verify refresh token -> Check a fast datastore to see if revoked -> Generate new auth token, this is very easy to scale, and you could have millions of users refreshing with high regularity (<10s) at low cost, without impacting your actual services (which can verify the auth token offline).

Say you had 1,000,000 users and they checked every ten seconds: that's 100,000 requests per second. If you have 1,000,000 users and can't afford to run a redis/API that can handle an O(1) lookup plus decode/sign at that level of traffic, you have operational issues ;)

    It's all a tradeoff. Yes, it means some user may have a valid token for ten more seconds than they should, but this should be factored into how you manage risk and trust in your org.
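The flow in the comment above (verify refresh token, check a fast store for revocation, mint a short-lived token that services verify offline) could look roughly like this sketch. It uses a raw HMAC-signed payload rather than a real JWT library, and the signing key, token format, and revocation set are all assumptions for illustration.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"    # hypothetical shared signing key
revoked_refresh_tokens = set()  # the "fast datastore" of revocations

def _sign(payload: bytes) -> str:
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def mint_access_token(user_id: str, ttl: int = 10) -> str:
    # Short-lived, self-contained token: services verify it offline,
    # with no lookup against any shared store.
    body = base64.urlsafe_b64encode(
        json.dumps({"sub": user_id, "exp": time.time() + ttl}).encode()
    ).decode()
    return f"{body}.{_sign(body.encode())}"

def verify_access_token(token: str):
    body, sig = token.rsplit(".", 1)
    if not hmac.compare_digest(sig, _sign(body.encode())):
        return None  # tampered or forged
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims if claims["exp"] > time.time() else None

def refresh(refresh_token: str, user_id: str) -> str:
    # One O(1) membership check per refresh, not per API call.
    if refresh_token in revoked_refresh_tokens:
        raise PermissionError("session revoked")
    return mint_access_token(user_id)
```

Only the refresh endpoint pays the revocation-check cost; every other service does a local signature check.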

kevin_thibedeau 5 days ago

That's a great way to interfere with local work when the network goes down.

  • freedomben 5 days ago

If you've built a local app that has to authenticate you against a remote web service even when offline, while all the actual work is being done locally, you have much bigger design issues than authn, IMHO.

  • rafaelmn 5 days ago

Access tokens are used for network calls, so if the network is down, nothing works anyway?

    • dylan604 5 days ago

      You mean the power going out is related to why my computer will not respond and the screen went blank? That's strange

      • bravesoul2 4 days ago

        Sounds like you want local-first offline-first apps? Me too.