I am trying to implement OAuth2/OpenID Connect in a microservices architecture based on Java/Spring Cloud. My current problem is about token propagation between microservices, or through a message broker like RabbitMQ.
Very few resources cover this. I only found this Stack Overflow thread, but I don't like the proposed answers.
Here are the different cases:
My microservice A receives a request initiated by the end user, going through the API gateway and carrying a valid access token (a JWT with scopes/claims corresponding to the end user: username, id, email, permissions, etc.). There's no problem with this case: the microservice has all the information it needs to process the request.
1st problem: what happens if microservice A needs to call microservice B?
1st solution: microservice A forwards the user's access token to microservice B (see the sketch below).
==> What happens if the token expires before it reaches microservice B?
2nd solution: use the "client credentials" grant proposed by OAuth (aka a service account). Microservice A requests a new access token with its own credentials and uses it to call microservice B.
==> With this solution, all data related to the user (username, id, permissions, etc.) is lost.
For example, suppose the called method in microservice B needs the user id to work, and the user id is passed as a query string parameter.
If the method is called with the user's access token, microservice B can validate that the user id in the query string equals the user id in the JWT.
If the method is called with the service access token, microservice B cannot validate the query string value and has to trust microservice A.
For this case, I heard about the "token exchange" draft from OAuth 2. The idea is very interesting: it allows microservice A to exchange the user's access token for another access token with fewer permissions, issued for microservice A itself. Unfortunately, this mechanism is still a draft and not implemented in many products.
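As an illustration of the 1st solution (plain forwarding of the user token), here is a minimal, framework-agnostic sketch in Python. The downstream URL, route and error handling are assumptions made for the example, not something prescribed above:

```python
import requests
from flask import Flask, abort, request

app = Flask(__name__)

# Hypothetical address of microservice B (assumption for illustration only).
SERVICE_B_URL = "https://service-b.internal/api/orders"

@app.route("/api/profile")
def profile():
    # Take the incoming "Authorization: Bearer <jwt>" header as-is.
    auth_header = request.headers.get("Authorization")
    if not auth_header:
        abort(401)

    # Forward the same user token to microservice B.
    resp = requests.get(SERVICE_B_URL, headers={"Authorization": auth_header}, timeout=5)
    if resp.status_code == 401:
        # The token may have expired in transit; the caller must re-authenticate
        # (or A must rely on token exchange / a service account, as discussed above).
        abort(401)
    return resp.json()
```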
2nd problem: what happens if microservice A pushes a message to RabbitMQ and microservice B receives this message?
1st solution: authentication and authorization are managed by RabbitMQ (vhost, account, etc.)
==> Once again, all user-related data is lost. Moreover, we would have two repositories to manage authentication and authorization.
2nd solution: as in the first problem, use the "client credentials" grant (see the sketch below for a way to keep the user context in the message itself).
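One pattern sometimes used alongside either solution (an assumption on my part, not something from the question) is to copy selected user claims into the AMQP message headers, so the consumer keeps the user context without the broker persisting the raw access token. A rough sketch with pika; the broker host, queue name and header names are made up:

```python
import json
import pika

# Hypothetical broker host and queue name (illustration only).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq.internal"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)

def publish_order(order: dict, user_claims: dict) -> None:
    """Publish an order, carrying selected user claims as message headers
    instead of the raw access token."""
    properties = pika.BasicProperties(
        content_type="application/json",
        delivery_mode=2,  # persistent message
        headers={
            "x-user-id": user_claims.get("sub"),
            "x-username": user_claims.get("preferred_username"),
        },
    )
    channel.basic_publish(
        exchange="",
        routing_key="orders",
        body=json.dumps(order),
        properties=properties,
    )
```

The consumer (microservice B) still has to trust whoever wrote to the queue, exactly as in the service-token case above.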
What do you think about this? Is there a better solution?
Thanks in advance.
It's quite straightforward. There are always two use cases, which we will refer to as end-user and app-to-app (app2app). You always have to handle both.
Consider a simplified architecture:
            Internet
User ----------------> Service A +-------> Service B
     HTTP request                |
                                 +-------> Service C
Assume the user is authenticated by a token. JWT or anything else, doesn't matter.
Service A verifies the user token and performs the action. It has to call services B and C, so it makes the appropriate requests and includes the user token in them.
There may be a bit of transformation to do. Maybe A reads the user token from a cookie but B reads it from the Authorization: Bearer xxx header (a common way for an HTTP API to accept a JWT). Maybe C is not HTTP-based but gRPC (or whatever developers use nowadays?), so the token has to be forwarded over that protocol; I have no idea what the general practice is there for passing extra information.
It's fairly straightforward and it works really well for all services dealing with end users, as long as the protocol can multiplex messages/requests with their context. HTTP is an ideal example because each request is fully independent, and a web server can vary its processing per path, argument, cookie, and more.
It's also an extremely secure model because actions have to be initiated by the user. Want to delete a user account? The request can be fully restricted to that user; there can't just be an employee/intern/hacker calling https://internal.corp/users/delete/user123456 or https://internal.corp/transactions/views/user123456. Actually, customer support, for example, may need to access this information, so there has to be some limited access besides being the user.
Consider a real-world architecture:
            Internet
User ----------------> Service A +-------> Service B --------> SQL Database
     HTTP request                |
                                 +-------> Service C --------> Kafka Queue
                                                                    |
                                                                    |
                                           Service X <--------------+
                                           Service Y <--------------+
                                           Service Z <--------------+
Passing JWT user tokens doesn't work with middleware that does not operate on an end-user basis, notably databases and queues.
A database does not handle access on an end-user basis (same issue with the queue). It usually requires either a dedicated username/password or an SSL certificate for access, neither of which has any meaning outside of that database instance. Access is full read/write or read-only per table, with no possibility of fine-grained permissions (there is a concept of row-level permissions, but let's ignore it).
So service B and service C need functional accounts with write permission to the SQL database and the Kafka queue, respectively.
Services X, Y and Z need to read from the queue, so they each need dedicated read-only access. They don't strictly need a JWT user token per message: they can trust that whatever is in the queue is intended, because whatever wrote to the queue in the first place must have had explicit permission to do so.
It gets a bit tricky if service X needs to call yet another HTTP service that requires a user token. That's hard to do if the token wasn't forwarded. It would be possible to store the token in the queue along with the message, but that's really ill-advised: one does not want to save and persist tokens everywhere (the Kafka queue persists messages and would basically become a highly sensitive token database, erf). That's where forwarding user tokens shows its limits: at the boundaries of systems speaking different protocols with different access-control models. One has to think carefully about how to architect systems together to minimize this hassle. Use dedicated service accounts to interface with specific systems that don't understand end-user tokens or don't have them.
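As a sketch of the dedicated-service-account pattern described above: service C writes to the queue with its own functional credentials, and the message carries only the user fields the consumers need, not the user's token. This assumes a SASL/PLAIN-secured cluster and the kafka-python client; the broker address, account and topic names are made up:

```python
import json
from kafka import KafkaProducer

# Hypothetical functional account for service C's write access to the queue.
producer = KafkaProducer(
    bootstrap_servers=["kafka.internal:9093"],
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="service-c",
    sasl_plain_password="********",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# The payload carries the user id, not the user's JWT.
producer.send("user-events", {"user_id": "user123456", "action": "delete-account"})
producer.flush()
```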
For most client applications and UI approaches, JWTs seem to be a good way to carry tokens for an OAuth-based approach. This allows decoupling of a user/auth service from the services that are actually being accessed.
Given this architecture, my question is: in the case of public APIs (e.g. GitHub or Slack) where a bearer key is generated with specific roles, how is that key authorized in a microservice architecture? Do those types of keys just require that the auth service gets queried with every request?
Could an API gateway mitigate this? I would like to understand if a solution exists where there is minimal communication between services. Thank you!
Normally, this is solved using scopes. Scopes are permissions given to a user to perform certain operations; for example, there might be a scope for reading a repository, another for updating it, another for deleting it, etc.
These scopes are tied to the token and are normally requested by the user or added automatically depending on the user type. As with the authentication data, they can be included in the token itself, encoded as claims in a JWT, or they can be checked by calling an OAuth server whenever an operation is requested.
The advantage of including them in the JWT is that there is no need to call an external server every time an operation is requested, so latency is lower, less bandwidth is required, and you also remove a point of failure. Obviously, if this solution is used, the token must be properly signed, or even encrypted, to prevent manipulation.
However, it also has drawbacks. The most dangerous one is that the token cannot be revoked, because revocation information cannot be included in the token and the service that checks whether the token is valid can only access the data contained in the token itself. Because of this, such tokens are normally issued with a short expiry time, so if a token is stolen its validity is very limited.
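Here is a minimal sketch of the "check the scope locally" option using PyJWT, assuming RS256-signed tokens and a space-separated scope claim; the key file, audience and scope names are assumptions for the example:

```python
import jwt  # PyJWT

# Hypothetical signing key and audience; in practice these come from the
# authorization server's configuration / JWKS endpoint.
PUBLIC_KEY = open("idp_public_key.pem").read()
EXPECTED_AUDIENCE = "repo-api"

def check_scope(token: str, required_scope: str) -> dict:
    """Verify the JWT locally (signature, expiry, audience) and enforce a
    scope, without calling the authorization server on every request."""
    claims = jwt.decode(
        token,
        PUBLIC_KEY,
        algorithms=["RS256"],
        audience=EXPECTED_AUDIENCE,
    )
    # "scope" is commonly a space-separated string of granted scopes.
    scopes = claims.get("scope", "").split()
    if required_scope not in scopes:
        raise PermissionError(f"missing scope: {required_scope}")
    return claims

# Example: only allow the operation if the token grants "repo:read".
# claims = check_scope(bearer_token, "repo:read")
```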
I am trying to understand statelessness in RESTful APIs in the context of authentication. Here's the scenario:
The user logs in.
The server verifies the username and the password, and generates an opaque access token. It caches some information related to this token - for example, the expiration time, the userId, whether this token was explicitly invalidated before it expired, etc.
The token is sent to the client, and the client sends it with every future request.
Fielding's dissertation defines statelessness as:
"...such that each request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server. Session state is therefore kept entirely on the client."
In my example, the client is sending the token with every request, so the first condition is satisfied. However, my server has a context associated with this session that is stored in the sessions cache.
Does this make my application stateful?
If yes, can true statelessness be achieved only by using JWTs? I am pondering this because JWTs are quite new; how were architects building truly stateless services before they were invented?
That's right. If you are maintaining the session, you are keeping state on the server, which makes the application hard to scale. Truly stateless applications can be scaled out, and any server should be able to handle the request.
JWT is a popular way to avoid sessions: everything is encapsulated inside the token, so any server can authenticate/authorize the request, which helps us achieve a stateless application. JWTs come with their own challenges, however; OpenID Connect is the newer way to do authentication/authorization.
Before JWT, to make an application stateless we used to keep the session in a DB (or a shared cache), and any server that wanted to check the session had to contact the DB.
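For instance, a shared-cache session lookup along these lines (a sketch only; the Redis host, key naming and TTL are assumptions):

```python
import json
import redis

# Hypothetical shared cache reachable from every app server.
cache = redis.Redis(host="sessions.internal", port=6379)

def save_session(session_id: str, data: dict, ttl_seconds: int = 1800) -> None:
    cache.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

def load_session(session_id: str):
    """Any server can handle the request: it looks the session up in the
    shared store instead of holding it in local memory."""
    raw = cache.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```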
Hope that helps!
Briefly: no, such usage of a token does not make your application stateful.
In detail: when we talk about stateless vs. stateful, we usually consider only the data that affects business logic, and business logic does not usually depend on authentication data. For example, a user sends a request that contains all the data needed to place an order; normally, creating the order does not depend on when the user logged in, on their user ID, etc.
I have an IdentityServer 4.0 implementation at my workplace. On top of the Implicit and Auth Code flows, we are planning to use the Client Credentials flow for API-to-API call authentication.
There are a few APIs that need to keep a log of who called them (the name of the calling API). I have done a lot of digging but could not find a convincing (and secure) way of doing this.
In my understanding, in the Client Credentials flow the client goes to IDS with just a client secret. Obviously, this makes it practically impossible for IDS to know who is calling it. Am I right? Is there any way of knowing the client (so that some identity claims can be added to the token)?
Any suggestions welcome.
EDIT: to elaborate on my question and better explain my understanding of this particular OAuth flow:
Ok, so let me be clear. Let us say API X has to call API Y.
It follows the order below:
(1) X goes to IDS with the Client-Id and Client-Secret for Y.
(2) IDS validates the Client-Id and Secret and issues an access token to X.
(3) X calls Y with the given access token.
In step (2) above, as per OAuth 2.0's client credentials flow, there is nothing except the Client-ID and Client-Secret that X is required to supply. Now, if API Z wants to talk to Y, it is going to go to IDS with the same Client-ID and the same Secret.
If IDS has no way of identifying whether the authentication call is from X or Z, how can it add any additional claim to the issued token?
So the only other way for Y to know whether the call is from X or Z is for X or Z to declare themselves (in a header, URL or post data), which defeats the entire purpose of authorizing through the client credentials flow. Remember that my question is not about authentication.
There are two approaches: either use unique client credentials per instance (API X and API Z are separate clients), or use the same client id and/or leave it to the client to provide information.
With unique client ids you can add information about the client as a claim in the ClientClaims table, e.g. Claim('ApiName', 'apiname').
This claim is added to the access token and is available in the receiving API.
In this scenario client credentials are used as id/password, allowing the client to 'login'.
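A rough, framework-neutral sketch of that first approach, seen from the calling and receiving APIs (the token endpoint URL, key handling and the 'ApiName' claim name follow the example above; everything else is an assumption, and IdentityServer's own client configuration is not shown):

```python
import jwt  # PyJWT
import requests

# Hypothetical IdentityServer token endpoint.
TOKEN_ENDPOINT = "https://ids.example.com/connect/token"

def get_service_token(client_id: str, client_secret: str, scope: str) -> str:
    """API X obtains an access token via the client credentials grant,
    using its own unique client id/secret."""
    resp = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": scope,
        },
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def caller_name(access_token: str, public_key: str) -> str:
    """API Y reads the per-client claim (falling back to the client_id claim)
    from the validated token, so it can log who called it."""
    claims = jwt.decode(access_token, public_key, algorithms=["RS256"],
                        options={"verify_aud": False})
    return claims.get("ApiName", claims.get("client_id", "unknown"))
```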
The alternative is to use the same client id for all APIs. Now it's up to the client to provide information.
One scenario is to issue API keys that can be used to identify the client, e.g. a GUID sent along with each call.
In addition, there is another alternative: add a custom endpoint. Don't use client credentials, but implement your own endpoint.
With extension grants you can request the information you need and translate that to a valid access token.
Through the ExtensionGrantValidationContext object you have access to the incoming token request - both the well-known validated values, as well as any custom values (via the Raw collection)
Perhaps it's an idea to 'extend' the client credentials flow with an API key.
Folks:
This is a REST design question, not specific to any programming language. I am creating an application backend that is accessed via REST APIs. I would like to use the same APIs for both UI and API-based access. I am trying to figure out the best way to authenticate users so that I can reuse the same methods.
My current thinking on authentication is as follows:
API Users
These users get a user GUID and a pre-shared symmetric key. On each API request they include additional headers or request parameters that contain:
Their GUID
A security token that contains the user GUID, the current timestamp and another GUID (the token GUID) concatenated together and encrypted using the shared key
Upon receiving the request, the server looks at the claimed GUID, retrieves the shared key, attempts to decrypt and verifies the token.
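A minimal sketch of that verification step, assuming the "encryption with the shared key" is done with a symmetric scheme such as Fernet and the three values are joined with a separator; the key store, separator and freshness window are assumptions, not part of the scheme described above:

```python
import time
from cryptography.fernet import Fernet, InvalidToken

# Hypothetical pre-shared key store, looked up by the claimed user GUID.
SHARED_KEYS = {"user-guid-123": Fernet.generate_key()}
MAX_TOKEN_AGE = 300  # seconds

def verify_api_token(claimed_guid: str, token: bytes) -> bool:
    """Decrypt the token with the claimed user's shared key and check that it
    carries the same user GUID and a fresh timestamp."""
    key = SHARED_KEYS.get(claimed_guid)
    if key is None:
        return False
    try:
        plaintext = Fernet(key).decrypt(token)
    except InvalidToken:
        return False
    user_guid, timestamp, _token_guid = plaintext.decode().split("|")
    return user_guid == claimed_guid and time.time() - float(timestamp) < MAX_TOKEN_AGE
```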
UI Users
These users will make a login request, supplying human credentials (user id/password). Once authenticated, a session backed by cookies is established, and further REST calls are secured based on this session.
The Problem
What is the best way to write a single REST endpoint that cleanly handles both kinds of access (API and UI) without too much duplication? I am looking to do the equivalent of the following, but perhaps more cleanly:
from flask import Flask, session

app = Flask(__name__)

@app.route('/')
def hello():
    user = None
    if session:
        # UI user: a cookie-backed session already exists.
        user = get_authenticated_user()
    else:
        # API user: verify the GUID + encrypted token from the request headers.
        user = process_auth_headers()
    # Do something with user
    return f"Hello, {user}"
I am looking to code the server in Flask, but I am sure the solution will apply as easily to any REST-based server-side framework.
Looking forward to some insights from the community.
Thanks.
We use Node for our server, but I think the method we use is pretty common. There are session libraries that Express can use, and they can utilize pretty much any database to store session information. They use a cookie with a key that is looked up in the database when the client comes in. The session data is created when the client authenticates, and the cookie with the client key is added to the browser. The client's GUID is stored in the session; it never leaves the server. We use that info when they hit the server to check whether they are logged in, who they are, and what they can do. We have used both Facebook login (the client checks with FB, then sends the FB id and token down to the server, which rechecks it and sets up the session or rejects it) and the old classic, email and password. This works well when you have to scale across multiple app servers, as the session is independent of the server, and it works for both mobile clients and the web.
I'm building some RESTful APIs for my project based on Play Framework 2.x.
My focus is on the authentication mechanism that I implemented.
Currently, I make use of SecureSocial.
The workflow is:
An anonymous user calls a secured API
The server grabs any cookie id (a kind of authentication token) and checks for a match in the Play 2 cache (the cache contains an association between the cookie id, randomly generated, and the user id, accessible from the database).
If there is a match, the user is authorized and the expected service is processed.
If there is no match, the user is redirected to a login page; when it is filled in with valid credentials (email/password), the server stores the corresponding authentication data in the Play 2 cache and sends the newly created cookie, containing only a custom id (the authentication token), to the user, secured through SSL of course.
As long as the cookie/token hasn't expired, the user can call the secured APIs (if authorized, of course).
The whole works great.
However, after some searching, I came across this post, and... I wonder if I'm on the right track.
Indeed, dealing with cookies ("sessions" in Play's terms) would break the rule of RESTfulness.
An API truly considered stateless should be called with ALL the needed data at once (credentials/tokens, etc.). The solution I implemented needs two calls: one to authenticate, the other to call the secured API.
I want to do things properly, and I am wondering about a few things:
What about the use of API keys? Should I implement a solution using them instead of this SecureSocial workflow? API keys would be sent with EVERY API call, in order to preserve RESTfulness.
I'm thinking about this since I want my APIs to be reachable by web apps, mobile apps and other kinds of clients, none of which should be forced to manage cookies.
What about OAuth? Do I really need it? Would it totally replace the usage of simple API keys? Still with this objective of several kinds of clients, this well-known protocol would be a great way to manage authentication and authorization.
In a word, should I implement another mechanism in order to be REST compliant?
This is quite an old question, but still worth answering as it may interest others.
REST does mandate statelessness, but authorization is a common exception in practice.
The solution you described requires a single authorization process, followed by numerous service calls based on the authorized cookie. This is analogous to API keys or OAuth. There's nothing wrong with cookies as long as the service isn't of a high-security nature and you expire them after a reasonable time.
Integrating OAuth into your service sounds like overkill and is recommended only if you expose the API to third parties (outside your organization).