I have a SaaS application that has multiple organisations and each organisation has multiple users. Each customer is an organisation (not a user) and the customer wants access to their data in my SaaS application via API. Obviously I need to authenticate the customer so they only receive the data that belongs to their organisation.
The customer will be importing this data into their own application, so this is a server-to-server API call, for which I assume I need to use the client credentials flow.
The customer will be able to generate the credentials on a self-service basis.
Is using client credentials flow the correct flow?
If using the client credentials flow, does each customer have their own client_id and client_secret, or does my application have one client_id while each customer has their own client_secret?
The standard approach for this type of B2B API is to use the Client Credentials flow, as you suggest.
Each business partner uses their own client id for identification when calling your API.
Each partner is also given a string client secret, though, if your authorization server supports it, it may be possible to use secrets based on an X.509 credential.
Using different client ids enables you to turn off Partner A without impacting Partner B, and also to tell callers apart.
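Purely as an illustration, here is a minimal sketch of the token request each partner's server would make, assuming a standard OAuth2 token endpoint; the URL, client id, and secret are placeholders, and real code should URL-encode the form values:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ClientCredentialsExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical token endpoint; each partner uses its own client_id/client_secret.
            String body = "grant_type=client_credentials"
                    + "&client_id=partner-a-client-id"
                    + "&client_secret=partner-a-secret";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://auth.example.com/oauth/token"))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // The JSON response contains the access token the partner then sends to your API.
            System.out.println(response.body());
        }
    }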
Authentication completes once the client id and secret are verified, and you then move on to authorization. Typically your API will then need to do the following (a sketch follows the list):
Receive a technical client id such as 08134scdv79
Map it to an application-specific id, such as a primary key from your Partners database table
Authorize the request based on whether the caller is allowed to get the data requested
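Steps 2 and 3 might look like the following hedged sketch, where an in-memory map stands in for your Partners table and all names are hypothetical:

    import java.util.Map;

    public class PartnerAuthorizer {
        // Hypothetical in-memory stand-in for your Partners table:
        // technical client id -> internal partner/organisation id.
        private final Map<String, Long> partnerIdByClientId;

        public PartnerAuthorizer(Map<String, Long> partnerIdByClientId) {
            this.partnerIdByClientId = partnerIdByClientId;
        }

        // Map the technical client id to the partner row, then check the caller
        // is only accessing its own organisation's data.
        public long authorize(String technicalClientId, long requestedOrgId) {
            Long orgId = partnerIdByClientId.get(technicalClientId);
            if (orgId == null) {
                throw new SecurityException("Unknown client id: " + technicalClientId);
            }
            if (orgId != requestedOrgId) {
                throw new SecurityException("Caller may not access organisation " + requestedOrgId);
            }
            return orgId;
        }
    }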
The one interesting aspect of your question is the self-service part. I assume you mean that some kind of administrator from a business partner can set up credentials, but that this remains a restricted operation?
I'm designing two microservices, one for Teacher and the other for Student. My question is: what is the best approach for storing users and handling authentication/authorization?
A centralized auth server which stores user roles as well as all the user info.
A centralized auth server which stores only roles, while the user info is stored in the databases of the respective services (Student, Teacher).
No centralized auth server; instead, the gateway redirects the login request to either Student or Teacher according to the role in the request body.
I want to know the pros and cons of these approaches. If there is a better approach, please share it.
P.S.: Multiple roles can be assigned to a single user.
I would go for the first approach. Rather than a "centralized auth server", it would be more of an "auth microservice".
Now the important part is how to handle authentication itself. In general, you could use either sessions or JWTs.
For microservices, I think JWT is a perfect fit. If you use sessions, you essentially "centralize" your authentication and authorization: after a user is authenticated, every time the user makes a request, all the microservices handling that request must check with the centralized session store. This not only increases latency but simply doesn't fit a distributed system. The point of using microservices is to be able to make replicas of services and scale horizontally.
If you use JWT, the microservices only need the secret key to validate the token. There is basically no centralized store (session) for authentication info.
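To illustrate, here is a minimal validation sketch using the jjwt library (one option among several); note that for HMAC-SHA256 the shared secret must be at least 32 bytes:

    import io.jsonwebtoken.Claims;
    import io.jsonwebtoken.Jwts;
    import io.jsonwebtoken.security.Keys;
    import java.nio.charset.StandardCharsets;
    import javax.crypto.SecretKey;

    public class JwtValidator {
        // Each microservice holds only this shared secret; no central session store needed.
        private final SecretKey key;

        public JwtValidator(String sharedSecret) {
            this.key = Keys.hmacShaKeyFor(sharedSecret.getBytes(StandardCharsets.UTF_8));
        }

        // Throws a JwtException if the signature is invalid or the token is expired.
        public Claims validate(String token) {
            return Jwts.parserBuilder()
                    .setSigningKey(key)
                    .build()
                    .parseClaimsJws(token)
                    .getBody();
        }
    }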
About the "auth service", I would suggest you to store authentication and authorization related data only(including user info related to authentication. phone number, email, name etc. you probably would use this in case user needs to change password, forgot password etc.). Other specific data related to a specific role can be stored in the corresponding service.
I have built a Room Booking application in FileMaker that accesses Google Calendar via the Calendar API, authenticated with OAuth2.
Everything works well, except that I am unsure about the relationship between the OAuth2 client token flow and the individual FileMaker/GCal users who will use the system.
At the moment I am both the owner of the project in the Google Developer Console and the only beta tester, so naturally the system works with my calendar: I log in once, pass OAuth2 my Client ID and Secret, generate my Code, swap it for the Token and Refresh, and I'm off.
However, the whole system currently has only one Token and Refresh, held in a single-row FileMaker table; thus, when I create a second test user, everything still forwards to my calendar.
This is where I am unclear. It sounds obvious, but it's hard to find a clear answer on this.
Should I have it so each user uses the same Client ID and Secret (which I keep secret from them) to generate their own unique set of tokens?
Or is the single set enough, and I'm misunderstanding some other aspect of the system (and if so, what)?
In short: are the Tokens per Application or per User of the Application?
Answering my own question:
CLIENT'S (= Application) STUFF
Client ID: pertains to the Application, general to all users
Client Secret: pertains to the Application, general to all users
Redirect URI: pertains to the Application, general to all users
USER'S STUFF
Authorisation Code: specific to each user, requires the Client ID and Client Secret, and is retrieved as a GET variable from the Redirect URI following the user's authentication with the 3rd-party service (e.g. http://YourRedirectURI.com?code=abc123)
Refresh Token: specific to each user, requires the Client ID and Authorisation Code
Access Token: specific to each user, requires the Client ID and Refresh Token, and is time-limited (typically 1 hour), so a new one needs to be generated once it expires
NB: Users should not see the Client Secret (nor, ideally, the Client ID). These should be used in the Application's internal logic to generate calls for the users' Code/Token, but not seen by them.
OAUTH2 FLOW
So, essentially, the OAuth2 'flow' is as follows (sketched in code after the list):
1) Your Client ID + your Client Secret + their authentication login to the 3rd-party service = that specific user's Authorisation Code, returned as a GET var in the Redirect URI
2) Your Client ID + your Client Secret + their Authorisation Code = Refresh Token & Access Token
3) Your Client ID + your Client Secret + their Refresh Token = new Access Token
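Steps 2) and 3) can be made concrete as two calls to Google's token endpoint (step 1 happens in the user's browser). This is a rough sketch: error handling is omitted and values should be URL-encoded in real code.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class GoogleTokenCalls {
        static final String TOKEN_URL = "https://oauth2.googleapis.com/token";
        static final HttpClient HTTP = HttpClient.newHttpClient();

        // Step 2: exchange that user's Authorisation Code for their Refresh + Access Tokens.
        static String exchangeCode(String clientId, String clientSecret,
                                   String code, String redirectUri) throws Exception {
            return post("grant_type=authorization_code"
                    + "&client_id=" + clientId
                    + "&client_secret=" + clientSecret
                    + "&code=" + code
                    + "&redirect_uri=" + redirectUri);
        }

        // Step 3: use that user's Refresh Token to get a new Access Token once it expires.
        static String refresh(String clientId, String clientSecret,
                              String refreshToken) throws Exception {
            return post("grant_type=refresh_token"
                    + "&client_id=" + clientId
                    + "&client_secret=" + clientSecret
                    + "&refresh_token=" + refreshToken);
        }

        private static String post(String body) throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(TOKEN_URL))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            return HTTP.send(request, HttpResponse.BodyHandlers.ofString()).body();
        }
    }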
I'm trying to implement OAuth2/OpenID Connect in a microservices architecture based on Java/Spring Cloud. My current problem concerns token propagation between microservices, or through a message broker like RabbitMQ.
Very few resources talk about this. I only found this Stack Overflow thread, but I don't like the proposed answers.
Here are the different cases:
My microservice A receives a request, initiated by the end user, that goes through the API gateway and carries a valid access token (a JWT with scopes/claims corresponding to the final user: username, id, email, permissions, etc.). There's no problem with this case; the microservice has all the information it needs to process the request.
1st problem: What happens if microservice A needs to call microservice B?
1st solution: microservice A sends the access token to microservice B.
==> What happens if the token expires before it arrives at microservice B?
2nd solution: use the "client credentials grant" proposed by OAuth (aka a service account). It means microservice A requests a new access token with its own credentials and uses it to call microservice B.
==> With this solution, all data related to the user (username, id, permissions, etc.) is lost.
For example, the called method in microservice B needs the user id to work. The user id value can be passed as a query-string parameter.
If the method is called with the user's access token, microservice B can validate that the user id value in the query string equals the user id in the JWT.
If the method is called with the service access token, microservice B can't validate the query-string value and has to trust microservice A.
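A sketch of that first check in microservice B, assuming the JWT has already been verified (using jjwt's Claims type) and assuming the user id is carried in the subject claim:

    import io.jsonwebtoken.Claims;

    public class UserIdCheck {
        // Microservice B compares the userId query parameter against the subject
        // of the already-verified user JWT before trusting it.
        public static void check(String userIdFromQuery, Claims verifiedClaims) {
            if (!verifiedClaims.getSubject().equals(userIdFromQuery)) {
                throw new SecurityException("userId in query string does not match the access token");
            }
        }
    }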
For this case, I heard about the "token exchange" draft from OAuth 2. The idea is very interesting: it allows microservice A to convert the user's access token into another access token, with fewer permissions, forged for microservice A. Unfortunately, this mechanism is still a draft and not implemented in many products.
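For reference, a hedged sketch of what such a token-exchange request looks like per the draft; the endpoint URL and client credentials are made up:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class TokenExchangeSketch {
        // Sketch of an OAuth2 token-exchange request (a draft at the time of writing).
        public static String exchange(String userAccessToken) throws Exception {
            String body = "grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Atoken-exchange"
                    + "&subject_token=" + userAccessToken
                    + "&subject_token_type=urn%3Aietf%3Aparams%3Aoauth%3Atoken-type%3Aaccess_token"
                    + "&audience=microservice-b"
                    + "&client_id=microservice-a"
                    + "&client_secret=microservice-a-secret";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://auth.example.com/oauth/token"))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            // The response contains a new access token scoped down for microservice A
            // while keeping the user's identity claims.
            return HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString())
                    .body();
        }
    }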
2nd problem: What happens if microservice A pushes a message to RabbitMQ and microservice B receives this message?
1st solution: authentication and authorization are managed by RabbitMQ (vhost, account, etc.).
==> Once again, all user-related data is lost. Moreover, we then have two repositories for managing authentication and authorization.
2nd solution: as with the first problem, use the "client credentials grant".
What do you think? Is there a better solution?
Thanks in advance.
It's quite straightforward. There are always two use cases, which we will refer to as end-user and app2app. You always have to handle both.
Consider a simplified architecture:
Internet
User ----------------> Service A +-------> Service B
HTTP request |
+-------> Service C
Assume the user is authenticated by a token. JWT or anything else, doesn't matter.
Service A verifies the user token and performs the action. It has to call services B and C, so it makes the appropriate requests and includes the user token in them.
There can be a bit of transformation to do. Maybe A reads the user token from a cookie but B reads the token from the Authorization: Bearer xxx header (a common way for an HTTP API to accept a JWT token). Maybe C is not HTTP-based but gRPC (or whatever developers use nowadays?), so the token has to be forwarded over that protocol; I have no idea what the general practice is there for passing extra information.
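A minimal sketch of that forwarding step, with a hypothetical internal URL: A simply re-sends the user's token (however it received it) as a bearer header to B.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class TokenForwarding {
        // Service A forwards the end-user token to service B. Here A might have read
        // the token from a cookie, while B expects the standard Authorization header.
        public static String callServiceB(String userToken) throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://service-b.internal/orders"))
                    .header("Authorization", "Bearer " + userToken)
                    .GET()
                    .build();
            return HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString())
                    .body();
        }
    }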
It's fairly straightforward, and it works really well for all services dealing with end-users, as long as the protocol can multiplex messages/requests with their context. HTTP is an ideal example because each request is fully independent; a web server can process various things per path, argument, cookie, and more.
It's also an extremely secure model because actions have to be initiated by the user. Want to delete a user account? The request can be fully restricted to the user; there can't just be an employee/intern/hacker calling https://internal.corp/users/delete/user123456 or https://internal.corp/transactions/views/user123456. In practice, customer support, for example, may need access to this information, so there has to be some limited access path besides being the user.
Consider a real-world architecture:
Internet
User ----------------> Service A +-------> Service B --------> SQL Database
HTTP request |
+-------> Service C --------> Kafka Queue
|
|
Service X <--------------+
Service Y <--------------+
Service Z <--------------+
Passing JWT user tokens doesn't work with middleware that does not operate on an end-user basis, notably databases and queues.
A database does not handle access on an end-user basis (the same issue arises with the queue). It usually requires either a dedicated username/password or an SSL certificate for access, neither of which has any meaning outside of that database instance. The access is full read/write or read-only per table, with no possibility of fine-grained permissions (there is a concept of row-level permissions, but let's ignore it).
So services B and C need functional accounts with write permission to the SQL database and the Kafka queue, respectively.
Services X, Y, and Z need to read from the queue, and each needs a dedicated read-only access. They don't strictly need a JWT user token per message; they can trust that whatever is in the queue is intended, because whatever wrote to the queue in the first place must have had explicit permission to write to it.
It gets a bit tricky if service X needs to call yet another HTTP service that requires a user token. That's hard to do if the token wasn't forwarded. It would be possible to store the token in the queue alongside the message, but that's really ill-advised: one does not want to save and persist tokens everywhere (the Kafka queue persists messages and would basically become a highly sensitive token database). That's where the forwarding of user tokens shows its limits: at the boundaries of systems speaking different protocols with different access-control models. One has to think carefully about how to architect systems together to minimize this hassle. Use dedicated service accounts to interface with specific systems that don't understand end-user tokens or don't have them.
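As a hedged illustration of that boundary, a queue message could carry the verified user identity rather than the raw token; the event type and field names below are made up:

    // A queue message envelope that records *who* initiated the action without
    // persisting the raw user token in the queue.
    public record OrderEvent(
            String orderId,
            String initiatedByUserId,   // user identity, extracted from the verified token
            long issuedAtEpochSeconds   // when the producer verified the token and wrote the event
    ) {}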
So let's take the basic e-commerce microservices.
Identity and access. This microservice will take care of user accounts, roles, and authentication. The authentication method will be based on the usual token-based flow (the user enters username + password and the server returns a unique, random token via a cookie). This service can also be used to get the user profile.
Cart microservice. This microservice can be used to put products in a cart, check what products a cart has, etc.
Asuming that "Identity and access" microservice will be used for generating the random token as a result of a succesful authentication, and for linking this token to a user, how will this token be used to make the user's identity available to the cart microservice? For example, when a user will add a product to his cart, he will send along the authorization token and the cart microservice will have to identify the user based on that token.
Could a distributed database be an option? A database that stores these tokens and their links to users, and to which all microservices have access?
Or should all microservices get the user's identity from a special identity and access API which exposes users based on the access token?
A distributed database definitely conflicts with the following basic principle of microservices:
A microservice owns its data and exposes it via well-defined interfaces.
No other microservice may access data owned by another microservice directly.
So one solution in this case would be a token microservice, or the last solution you described.
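A sketch of that last approach, where the cart microservice resolves the user by calling a hypothetical endpoint on the identity and access service:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class IdentityClient {
        // The cart microservice resolves a user from the opaque token by calling
        // the identity and access service; the endpoint URL is made up.
        public static String resolveUser(String accessToken) throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://identity.internal/users/me"))
                    .header("Authorization", "Bearer " + accessToken)
                    .GET()
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != 200) {
                throw new SecurityException("Invalid or expired token");
            }
            return response.body(); // JSON describing the user
        }
    }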
Suppose we have a number of (stateless, HTTP-based) (micro)services and a bunch of "daemons", which do all kinds of background processing by actually using said services.
Now, I want to have a way for services and daemons to be able to mutually authenticate and authorize. For example, a daemon that performs full-text indexing of Orders needs:
Read-only access to the Orders, Customers (which itself needs read-only access to the Companies service), and Inventory services
Read and write access to the OrdersSearch service in order to be able to update the full-text index.
There are also applications which operate "on behalf" of the user. For example, the Inventory web app needs read and write access to the Inventory service, but the Inventory service itself needs to verify the permissions of the user operating the application.
All that said, how do I achieve what I just described? I'd prefer not to use gigantic enterprisey frameworks or standards. From what I've read, two-legged OAuth2 is what I need, but I'm not exactly sure.
I was thinking of establishing an Authorization service which would be used to answer questions like "Hey, I'm the Inventory service. What permissions does the Customer service that is calling me right now have for me?", but that approach has major weaknesses around distributing shared secrets.
Authentication:
I imagine an authentication scheme where a requesting API signs its request using an established protocol: e.g., concatenating parts of the request with an expirable nonce and an application ID, then hashing the result to create a signature. This signature is then encrypted with a private key. All requests must contain this encrypted signature and the nonce, as well as an application identifier. The receiving service then looks up the requesting application's public key. After verifying the nonce has not expired, the receiving service decrypts the digest using the public key and verifies the signature is valid (by repeating the signing process and arriving at the same signature). A service would be required for obtaining the public key; a service can cache the application ID to public key mapping.
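A rough sketch of that signing and verification scheme using the JDK's Signature class (which hashes and signs in one step); key distribution, the nonce expiry check, and public-key caching are omitted, and which request parts to sign is an assumption:

    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.PrivateKey;
    import java.security.PublicKey;
    import java.security.Signature;
    import java.util.Base64;

    public class RequestSigning {
        // The caller signs (request parts + nonce + app id) with its private key.
        public static String sign(PrivateKey privateKey, String method, String path,
                                  String nonce, String appId) throws Exception {
            String payload = method + "\n" + path + "\n" + nonce + "\n" + appId;
            Signature signer = Signature.getInstance("SHA256withRSA"); // hashes, then signs
            signer.initSign(privateKey);
            signer.update(payload.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(signer.sign());
        }

        // The receiver rebuilds the payload and verifies it against the caller's
        // public key (nonce expiry would be checked before this).
        public static boolean verify(PublicKey publicKey, String method, String path,
                                     String nonce, String appId, String signatureB64) throws Exception {
            String payload = method + "\n" + path + "\n" + nonce + "\n" + appId;
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(publicKey);
            verifier.update(payload.getBytes(StandardCharsets.UTF_8));
            return verifier.verify(Base64.getDecoder().decode(signatureB64));
        }

        public static void main(String[] args) throws Exception {
            KeyPair pair = KeyPairGenerator.getInstance("RSA").generateKeyPair();
            String sig = sign(pair.getPrivate(), "GET", "/orders/42", "nonce-123", "app-7");
            System.out.println(verify(pair.getPublic(), "GET", "/orders/42", "nonce-123", "app-7", sig));
        }
    }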
Authorization:
This can be done using some sort of role-based access control scheme. Another service can be used to look up whether the requesting service has access to the resources being requested.
I think both authentication and authorization can be done internally, depending on time, money, and the need for specialization. If you are using Java, take a look at Spring Security. If you decide to create custom code, please justify it to your managers and get buy-in. Do a thorough search online for any other solution, and include in your write-up why it would not fit and why a custom solution is still required.