I'm using Vue with OvermindJS (instead of Vuex) for state management. Within Overmind, I use GraphQL following this tutorial from their official docs. It works like a charm, but I've run into a problem with authentication. I use a JWT approach with a refresh cookie, which means I need to refresh the token at certain intervals or when it has expired.
I think an interceptor that checks whether the token is expired every time I make a request to my GraphQL endpoint would be the best approach. Is this possible in Overmind? Alternatively, I could wrap every request in a Promise that checks it first, but I don't find that solution particularly appealing.
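To illustrate the alternative I'd rather avoid, here is a minimal sketch of wrapping each request, assuming hypothetical isExpired() and refreshToken() helpers and the effects.gql.queries setup from the tutorial:

```js
// Sketch only — isExpired() decodes the JWT's exp claim, refreshToken() is a
// hypothetical call to the refresh endpoint (the browser sends the refresh
// cookie along automatically).
const withFreshToken = async (state, run) => {
  if (isExpired(state.auth.token)) {
    state.auth.token = await refreshToken();
  }
  return run(); // the actual GraphQL query/mutation
};

// Usage inside an Overmind action:
export const loadItems = async ({ state, effects }) => {
  state.items = await withFreshToken(state, () => effects.gql.queries.items());
};
```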
Cheers
As I am working on implementing a proper auth flow into a React web app, I am presented with different patterns for how to use access and refresh tokens.
I am considering the following two patterns:
Creating some sort of middleware for the fetch API:
This middleware runs before every request to the backend and checks whether the access token is still valid or not.
If it is invalid, it first calls the auth server to fetch a new access (and refresh) token.
Creating an interval, independent of all other logic, to keep the access token alive.
Say the access token is valid for 5 minutes; the interval will then run every 5 minutes to fetch a new one.
I would also make sure the interval only refreshes the token if the user is still active, so that an application left open without any user interaction for a long time automatically logs out (see the sketch after this list).
Any API call simply uses the currently active access token and does not need to worry about checking the token first or anything
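For concreteness, here is a minimal sketch of that second pattern, assuming hypothetical refreshAccessToken() and logout() helpers:

```js
// Sketch of pattern 2. refreshAccessToken() and logout() are hypothetical
// helpers: the first calls the auth server (sending the refresh token), the
// second clears state and redirects to the login page.
const TOKEN_LIFETIME_MS = 5 * 60 * 1000; // access token valid for 5 minutes
let lastUserActivity = Date.now();

// Track activity so a tab left open without interaction eventually logs out.
['click', 'keydown', 'mousemove'].forEach((event) =>
  window.addEventListener(event, () => { lastUserActivity = Date.now(); })
);

setInterval(async () => {
  const userIsActive = Date.now() - lastUserActivity < TOKEN_LIFETIME_MS;
  if (!userIsActive) {
    logout();
    return;
  }
  // Store the fresh token wherever API calls read it from.
  await refreshAccessToken();
}, TOKEN_LIFETIME_MS - 30 * 1000); // refresh slightly before expiry
```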
The second approach seems much easier and cleaner to implement to me, since it does not add any complexity to fetching data and is completely independent/separate from the rest of the app.
I've had a hard time researching this question though, to be honest. I'm not sure if there is some security issue I'm missing with that approach.
So my questions are:
Is there any security issue with fetching a new access token in an interval from the clients side?
Is there a common practice for how SPAs (like the React app I mentioned) handle access tokens?
If yes, what is that common practice?
If there is no security issue, are there other cons of the second approach that I am missing out on?
Thank you for your answers in advance!
I think the answer depends. If you always do it every X minutes and you have many active clients, it might create more load on the backend compared to doing it on an as-needed basis. Perhaps not all clients are so active all the time?
One thing to look out for is making sure you don't trigger multiple simultaneous requests for new refresh tokens. If you hit a race condition here, you might be logged out (if you use one-time refresh tokens).
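A minimal sketch of such a guard, assuming a hypothetical refreshAccessToken() that calls the token endpoint: all concurrent callers share one in-flight promise, so only a single refresh request ever goes out.

```js
// refreshAccessToken() is a hypothetical call to the auth server.
let refreshPromise = null;

function getFreshToken() {
  if (!refreshPromise) {
    refreshPromise = refreshAccessToken().finally(() => {
      refreshPromise = null; // allow a new refresh once this one settles
    });
  }
  return refreshPromise; // concurrent callers all await the same request
}
```

With one-time refresh tokens this matters because a second, concurrent refresh would submit an already-consumed token and be rejected.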
It is also worth considering the BFF pattern; do watch this video:
Using the BFF pattern to secure SPA and Blazor Applications - Dominick Baier - NDC Oslo 2021
I have a firebase cloud function that uses express middleware to generate an authToken which is passed on up through the routes.
This token lasts 24 hours, so I don't want to spam the service with excessive requests. As such, I want to cache the result for a while before regenerating it.
I considered two approaches.
Work out how to cache the axios request that generates the token.
Use Firebase Secret Manager to store the value.
While researching the axios caching, I realized I already had Secret Manager in use, so I have implemented both in parallel.
Secret Manager
Currently I have implemented Secret Manager to store and update the token; when I need the token for the external API, I can just grab it from Secret Manager.
I have a scheduled job that then creates a new token, and subsequently disables all prior secret versions before adding in the new token.
Creating the token is achieved by using axios to call the auth endpoint of the external API.
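For reference, a minimal sketch of that scheduled job, assuming the official @google-cloud/secret-manager Node client, a hypothetical secret name, and a hypothetical fetchNewToken() wrapping the axios call to the auth endpoint:

```js
const functions = require('firebase-functions');
const { SecretManagerServiceClient } = require('@google-cloud/secret-manager');

const client = new SecretManagerServiceClient();
const SECRET = 'projects/my-project/secrets/external-api-token'; // hypothetical

exports.rotateToken = functions.pubsub.schedule('every 12 hours').onRun(async () => {
  const token = await fetchNewToken(); // hypothetical axios call to the auth endpoint

  // Add the new token as the latest secret version.
  const [newVersion] = await client.addSecretVersion({
    parent: SECRET,
    payload: { data: Buffer.from(token, 'utf8') },
  });

  // Disable every other still-enabled version so only the new one resolves.
  const [versions] = await client.listSecretVersions({ parent: SECRET });
  for (const version of versions) {
    if (version.name !== newVersion.name && version.state === 'ENABLED') {
      await client.disableSecretVersion({ name: version.name });
    }
  }
});
```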
Axios Caching
When it comes to finding out how to enable caching for Express running on Cloud Functions, I'm currently getting the impression that it involves adding another layer of complexity, such as either a CDN or a Redis solution.
I've also seen intermediate cache solutions such as https://github.com/AlbinoDrought/cachios, which looks like it leverages local memory of some sort. I assume that would mean cloud functions distributed across different regions/instances each have their own unique local cache. I don't know whether this is a problem or not, but I think it would be more lightweight in configuration and setup than, say, Redis.
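For what it's worth, that per-instance idea can also be hand-rolled without an extra library; a minimal sketch, assuming a hypothetical fetchNewToken() axios call:

```js
// Each warm function instance keeps its own copy in module scope; cold starts
// and other instances/regions fetch their own, which is the trade-off above.
let cache = { token: null, expiresAt: 0 };

async function getAuthToken() {
  if (cache.token && Date.now() < cache.expiresAt) {
    return cache.token; // served from this instance's memory
  }
  const token = await fetchNewToken(); // hypothetical axios call to the auth endpoint
  cache = { token, expiresAt: Date.now() + 23 * 60 * 60 * 1000 }; // renew an hour early
  return token;
}
```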
My problem is, and I hope this isn't too opinion-based for SO:
Is this use of Secret Manager a really bad idea, given that Secret Manager is, I think, generally geared toward long-lived values that don't change very often? Or is it perfectly fine that, with 2 requests a day, I could log 2×365 versions a year against Secret Manager, which would have to be tidied up every now and then? Are there implications I'm not aware of?
Or is enabling some sort of CDN or Express caching layer a much more standard approach? And if so, is there any guidance on a simple Express caching layer geared specifically for Google Firebase Cloud Functions?
2 requests per day is fine. See the documentation on rotating keys here: https://cloud.google.com/secret-manager/docs/rotation-recommendations
Note that it suggests, in approaches 2 and 3, that valid use cases include grabbing secrets on each launch and grabbing secrets continuously. 2x/day is less frequent than both of these.
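To illustrate the "grab secrets on each launch" case, a minimal sketch, assuming the @google-cloud/secret-manager client and a hypothetical secret name; the version is read once per cold start and reused across invocations:

```js
const { SecretManagerServiceClient } = require('@google-cloud/secret-manager');

const client = new SecretManagerServiceClient();

// Resolved once per instance launch; later invocations reuse the same promise.
const tokenPromise = client
  .accessSecretVersion({
    name: 'projects/my-project/secrets/external-api-token/versions/latest', // hypothetical
  })
  .then(([response]) => response.payload.data.toString('utf8'));

exports.handler = async (req, res) => {
  const token = await tokenPromise;
  // ...call the external API with the token...
  res.status(200).send('ok');
};
```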
I want to use artillery.io as a stress-testing tool and want to set up a basic stress test. But all my URLs are behind an auth wall (using Auth0), and I want to get a valid token for the testing session so that my backend does not throw me into a 401 spiral. The artillery docs explain that I can hook into some lifecycle events using a processor, but do not explain which lifecycle hooks are available. I've managed to figure out that there is a beforeRequest hook I can use, but this does not seem optimal, as it will probably run before every request. My tokens have at least an hour validity...
So the main question is: how do I construct a processor hook which taps into the Auth0 login flow and retrieves a token that can then be stored in an environment variable (or some other local mechanism) for use as an Authorization Bearer token in future requests made by artillery?
OR, if this is a bad pattern to follow, what is the best practice for URLs behind auth walls? I've already thought about logging in first and then copying the token to an environment variable and using that, but this makes the test a bit harder to use since it requires a manual step.
Any input is greatly appreciated.
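One way to avoid the manual step is to cache the token at module scope inside the processor, so that even though beforeRequest runs on every request, the Auth0 call only happens when the cached token is missing or about to expire. A minimal sketch, assuming an Auth0 client-credentials grant against the standard /oauth/token endpoint, with the Auth0 settings supplied via hypothetical environment variables:

```js
// auth-processor.js — wire up via `processor: "./auth-processor.js"` in config
// and `beforeRequest: "attachToken"` on each request in the scenario.
const https = require('https');

let cached = { token: null, expiresAt: 0 };

function fetchToken(callback) {
  // Standard Auth0 client-credentials grant; env var names are hypothetical.
  const body = JSON.stringify({
    grant_type: 'client_credentials',
    client_id: process.env.AUTH0_CLIENT_ID,
    client_secret: process.env.AUTH0_CLIENT_SECRET,
    audience: process.env.AUTH0_AUDIENCE,
  });
  const req = https.request(
    {
      hostname: process.env.AUTH0_DOMAIN,
      path: '/oauth/token',
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Content-Length': Buffer.byteLength(body),
      },
    },
    (res) => {
      let data = '';
      res.on('data', (chunk) => (data += chunk));
      res.on('end', () => {
        const parsed = JSON.parse(data);
        cached = {
          token: parsed.access_token,
          // renew a minute before the token actually expires
          expiresAt: Date.now() + (parsed.expires_in - 60) * 1000,
        };
        callback(null, cached.token);
      });
    }
  );
  req.on('error', callback);
  req.end(body);
}

function attachToken(requestParams, context, ee, next) {
  requestParams.headers = requestParams.headers || {};
  if (cached.token && Date.now() < cached.expiresAt) {
    requestParams.headers.Authorization = `Bearer ${cached.token}`;
    return next();
  }
  fetchToken((err, token) => {
    if (err) return next(err);
    requestParams.headers.Authorization = `Bearer ${token}`;
    next();
  });
}

module.exports = { attachToken };
```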
First of all, the project is amazing; I had GraphQL working with MongoDB very quickly, even GraphiQL with the ModHeader extension. However, I am trying to add policies to the GraphQL endpoints, and I am finding that ctx.session is always empty, even though I am making authorized requests (via the Bearer token).
How does session work in this example? Do I need to query for the user every single time I create a request?
The user info is available through the ctx.state.user object, not ctx.session. Also, feel free to take a look at the GraphQL example: https://github.com/strapi/strapi-examples/tree/master/react-apollo
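For example, a minimal sketch of a policy gating on that object, assuming a Strapi v3-style policy signature:

```js
// config/policies/is-authenticated.js — sketch only; path and name are hypothetical.
module.exports = async (ctx, next) => {
  // The Bearer token is resolved to a user on ctx.state.user, not ctx.session,
  // so there is no need to query for the user yourself on every request.
  if (ctx.state.user) {
    return await next();
  }
  return ctx.unauthorized('You must be logged in to access this endpoint.');
};
```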
I've got a Mule application that listens for requests from one application, then responds by calling a JSON API multiple times to authenticate and then retrieve several data, doing some transformation, and returning the results. The API requires HTTP basic authentication. When an account authenticates, the application that provides the API 1) returns a session/authentication cookie that can be used to identify the current user in subsequent calls, and 2) updates the database to record the last authentication timestamp for the current user. The API also has a call to check to see if the session/authentication cookie is still valid.
I currently have a flow that invokes the authentication method, then goes on to make a bunch of calls with the session/authentication cookie.
The issue is when the Mule application gets many requests at once, the application that provides the API deadlocks trying to update the authentication timestamp, since the flow will authenticate once for each request. Is there a way (possibly using the object store) to store the session/authentication cookie for use by subsequent requests to the Mule flow? Basically, I want the flow to suspend all other requests to the same flow, check to see if there are stored cookies, check to see if they are still valid, authenticate (again or for the first time) to get a new session/authentication cookie if needed, store the new cookie, then continue.
Is that a reasonable way of doing that, and is it even possible? If not, I think you can get the gist of what I'm trying to accomplish. What better way is there? Thanks!
edit: I've done a little experimentation, and I can definitely use the object store to hold on to the cookie. The part I'm stuck on now is how to get only the first request to re-authenticate when there is no valid cookie, while any near-simultaneous requests wait. I'm looking into VM queues and the Mule Requester, but I'm not sure that will work. I will post the code for a fully functional test when I'm done.