I see that in most of the Keycloak tutorials it is suggested to create two clients in Keycloak, i.e. frontend and backend. But I don't understand the need for this, since I can validate the JWT token provided by the frontend using the public key even without creating a separate client.
So my question is: is the approach of not creating the backend client the wrong approach? Also, when and why should we create a backend client in Keycloak?
Ref - https://medium.com/devops-dudes/secure-front-end-react-js-and-back-end-node-js-express-rest-api-with-keycloak-daf159f0a94e
I see most of the Keycloak tutorials suggest to create two clients in Keycloak, i.e. frontend and backend. But I don't understand the need for this, as I can validate the JWT token provided by the frontend using the public key even without creating a separate client.
Typically, such tutorials are created to showcase the authentication and authorization capabilities of Keycloak.
The authentication part is showcased by the user authenticating via the browser (using the frontend client), whereas the authorization part is showcased by the application sending an access token to the Keycloak server, where the claims on the access token (e.g., roles) can then be used to infer whether the user has the permissions to perform the desired action (i.e., authorization).
So my question is: is the approach of not creating the backend client the wrong approach?
It depends on your specific use case. As an alternative to the approach I previously mentioned, one could have a single client (i.e., the frontend client), and after the user has successfully authenticated, the application would pass the access token to the backend. The backend could then perform the authorization by directly checking, for instance, the roles in the access token, instead of relying on the Keycloak server to do so. There are pros and cons to both approaches.
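To make the single-client alternative concrete, here is a minimal sketch of backend-side validation, assuming a Keycloak realm named "myrealm" on http://localhost:8080 and a required role of "admin" (all three are placeholders for your setup). It uses the `jose` library to fetch the realm's published keys and verify the token the frontend sent:

```typescript
// Verify a Keycloak-issued access token with the realm's public keys (JWKS),
// then authorize directly from the role claims - no second client needed.
import { createRemoteJWKSet, jwtVerify } from 'jose';

const JWKS = createRemoteJWKSet(
  new URL('http://localhost:8080/realms/myrealm/protocol/openid-connect/certs'),
);

export async function authorize(accessToken: string): Promise<boolean> {
  // Throws if the signature is invalid, the token is expired, or the issuer differs.
  const { payload } = await jwtVerify(accessToken, JWKS, {
    issuer: 'http://localhost:8080/realms/myrealm',
  });
  // Keycloak places realm roles under the "realm_access" claim.
  const roles: string[] = (payload.realm_access as any)?.roles ?? [];
  return roles.includes('admin'); // hypothetical required role
}
```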
Also, when and why should we create a backend client in Keycloak?
A typical example would be if the backend were a separate microservice that triggers some maintenance task, for example. Assuming that task is not related at all to the user authentication process, it would make more sense to have a separate client (in this case a confidential one) that relies on the client credentials flow, which is typically used for machine-to-machine use cases.
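As a sketch of what that machine-to-machine case looks like, here is a client credentials request against Keycloak's token endpoint; the client id "maintenance-service", the realm name, and the base URL are placeholders:

```typescript
// Client credentials flow: the confidential backend client obtains a token
// that represents the service itself - no user is involved at any point.
async function getServiceToken(): Promise<string> {
  const res = await fetch(
    'http://localhost:8080/realms/myrealm/protocol/openid-connect/token',
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
      body: new URLSearchParams({
        grant_type: 'client_credentials',
        client_id: 'maintenance-service', // hypothetical confidential client
        client_secret: process.env.CLIENT_SECRET!,
      }),
    },
  );
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  const { access_token } = await res.json();
  return access_token;
}
```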
Related
We're using a Keycloak server for authenticating against several IdPs (Google, Active Directory, etc.). We have a Spring gateway microservice which plays the role of a client and several other microservices which play the role of resource servers.
When a user authenticates via Keycloak, we want to associate the authenticated user with some custom fields (like context, roles, user details) from our custom database (NOT the Keycloak DB) and send those fields to the other microservices as well, so that we do not need to load the fields from the DB in every microservice.
How would you do that? Making a GlobalFilter in the gateway which would add those fields to request headers and setting those headers somehow on the principal object in the resource servers? Or using a cache (Redis) to store the fields on the gateway and load them in the resource servers? Or do you have some other solution, for example extending the access token, overriding UserDetailsService, etc.?
What's important to note is that we don't want to extend the Keycloak database, since we want to keep the whole role management in our custom database. The reason for that is that the Keycloak schema is not very flexible. We want to use Keycloak only as a dummy authentication server.
The preferred option for security-related values is for Keycloak to reach out to your APIs or custom data sources at the time of token issuance, then include your domain-specific claims in JWT access tokens. In Keycloak I believe this is done via a protocol mapper, as in this answer.
This design pattern is discussed in the Claims Best Practices article. It is recommended not to send secure values such as roles in custom headers etc., since they are potentially easier for a hostile party to change. Instead, each API should receive the JWT and validate it, in a zero-trust manner, then use the received claims for authorization.
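A minimal sketch of that zero-trust pattern in a resource server follows; the issuer/JWKS URLs and the custom "roles" claim name are assumptions standing in for whatever your protocol mapper actually issues:

```typescript
// Each API validates the incoming JWT itself and authorizes from the
// verified claims - never from plain, forgeable request headers.
import { createRemoteJWKSet, jwtVerify } from 'jose';

const jwks = createRemoteJWKSet(
  new URL('https://idp.example.com/realms/myrealm/protocol/openid-connect/certs'),
);

export async function requireRole(token: string, role: string): Promise<void> {
  const { payload } = await jwtVerify(token, jwks, {
    issuer: 'https://idp.example.com/realms/myrealm',
  });
  // Domain-specific claims added at token issuance (e.g. via a protocol mapper).
  const roles = (payload.roles as string[] | undefined) ?? [];
  if (!roles.includes(role)) throw new Error('Forbidden');
}
```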
For non-secure values, such as a session_id or correlation_id used for logging, simple HTTP headers work well.
I'm struggling to wrap my head around implementing OAuth or OpenID with multiple external token providers. Since, from my perspective, providers like Google mix both specs in a single API, I'm differentiating these two mechanisms by the way I protect my resources and deal with user data. OpenID is only used for authentication, and I produce my own access_tokens and persist all of the user data myself, while OAuth, on the other hand, provides external access_tokens and manages user data.
Entities involved:
External provider - my term for the OAuth/OpenID provider, like Google OAuth
Backend - server which serves primarily as a RESTful API for the clients
Clients - apps (e.g. SPA, iOS app, desktop clients) which need to access resources provided by the backend
I'm developing an Express backend (REST API) for several different sorts of clients, including web and iOS/Android. From my understanding, I have the following options (referring to most implementation docs of the providers I want to use, instead of the spec):
Option 1 - OpenID
The client acquires an auth_code by signing in with any external provider (Apple, Google, Facebook)
The client sends this auth_code to my backend, where the code is used to obtain an id_token from the provider
The backend uses this id_token to authenticate the user and stores any required information about the user
The backend generates an access/refresh token for authorization and sends these back to the client
Now I can use my access_token for authorization and the refresh token for auth-state management (sign-out, invalidating tokens); a sketch of this flow follows below
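Here is a hedged sketch of Option 1 with Google as the external provider: the backend exchanges the auth_code, verifies the resulting id_token against Google's published keys, and then issues its own token. The client id/secret, redirect URI, and signing secret are placeholders:

```typescript
import { createRemoteJWKSet, jwtVerify, SignJWT } from 'jose';

const googleJwks = createRemoteJWKSet(
  new URL('https://www.googleapis.com/oauth2/v3/certs'),
);
const ownSecret = new TextEncoder().encode(process.env.TOKEN_SECRET!);

export async function loginWithGoogle(authCode: string): Promise<string> {
  // 1. Exchange the auth_code at Google's token endpoint.
  const res = await fetch('https://oauth2.googleapis.com/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'authorization_code',
      code: authCode,
      client_id: process.env.GOOGLE_CLIENT_ID!,
      client_secret: process.env.GOOGLE_CLIENT_SECRET!,
      redirect_uri: 'https://app.example.com/callback', // hypothetical
    }),
  });
  const { id_token } = await res.json();

  // 2. Verify the id_token signature, issuer and audience.
  const { payload } = await jwtVerify(id_token, googleJwks, {
    issuer: 'https://accounts.google.com',
    audience: process.env.GOOGLE_CLIENT_ID!,
  });

  // 3. Issue our own short-lived access token, keyed to the stable external id.
  return new SignJWT({ sub: `google:${payload.sub}` })
    .setProtectedHeader({ alg: 'HS256' })
    .setExpirationTime('15m')
    .sign(ownSecret);
}
```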
Problem - Is there really a problem?
I'm not sure if I need to frequently check whether the user is still a valid identity. By valid identity I mean whether the external identity (e.g. the Google user) still exists, which basically relates to whether my persistence layer has to invalidate (delete) this user. In other words, do I have to sync my persistence layer with the external provider to avoid dead/unusable identities? This is especially a problem if information like the email address changes and my backend does not get notified about that. Or should I just live with the fact that the user in my backend is only related to the external user by the id, and the clients have to manage their data in my backend themselves (e.g. change their email at the client)? That would mean I would preferably ignore any changes to the external user data (at the provider).
Option 2 - OAuth
The client acquires an auth_code by signing in with any external provider (SiwA (iOS), Google, Facebook)
The client sends this auth_code to my backend, where the code is used to obtain an access_token/refresh_token from the provider
The backend sends back the access_token/refresh_token obtained from the external provider
Now every time the client makes a request, it has to contain the external access_token, which is then used at the backend to ask the external provider whether this token is valid and whether the client has access to the resource. In other words, I use the external access_token for authorization (see the sketch below)
Every time user data (e.g. email, address...) is required at the backend, it is necessary to ask the external provider for this data by providing the access_token, which was given by the client
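For illustration, a sketch of Option 2's per-request check against Google's tokeninfo endpoint follows (each provider has its own introspection mechanism, which is part of the per-provider overhead being discussed here):

```typescript
// Ask Google whether an externally-issued access token is still valid.
export async function checkExternalToken(accessToken: string) {
  const res = await fetch(
    `https://oauth2.googleapis.com/tokeninfo?access_token=${encodeURIComponent(accessToken)}`,
  );
  if (!res.ok) throw new Error('Token rejected by provider');
  const info = await res.json();
  // The response reports audience, scopes and expiry; the backend must still
  // check that the audience matches its own client id before trusting it.
  if (info.aud !== process.env.GOOGLE_CLIENT_ID) throw new Error('Wrong audience');
  return info;
}
```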
Problems/Questions:
I assume that the refresh process has to be performed on the client side, in case the backend reports unauthorized from the provider when the token expires. Is that correct?
How do I determine which provider the token is from? It seems weird to me to implement a trial-and-error process and just ask every provider whether this is a valid token. E.g., if the backend receives an access token in the header of the request, it doesn't know which provider to ask (or should I encode this information in the header, like Bearer Provider Token, in order to know where to check the access token?).
Using Option 2, any time the external provider experiences downtime, no user is able to use my backend, while with Option 1 only the sign-in (initial sign-in, or after explicitly signing out, which invalidates the refresh token) is unavailable for this specific provider.
Is there anything I'm missing? It seems to me that Option 2 introduces a lot of unnecessary communication with the auth provider, while Option 1 neglects communication which is potentially required (e.g. syncing the identity state)?
The main question for me is: considering Option 1, which seems more suitable for my scenario, do I necessarily have to react to any change of the user's state, like a change of email at the external provider, or are there any downsides to ignoring everything but the external user id for authentication?
I ended up implementing OpenID, realizing I only needed authentication and neither authorization nor rigorous coupling of user data. At the time of asking the question, I was aware of the difference; however, I hadn't dug deep enough into the requirements of my project. Thus I discarded the basic OAuth protocol, since I didn't need any authorization for external resources.
Regarding OpenID, the management of external identities from the OpenID provider is not in the scope of the protocol; it has to be done independently. There are other protocols and methods dealing with that, e.g. SCIM.
I ended up relying on the fact that the external id provided with the id_token is unique, and I initialize a one-time mapping the first time a user authenticates (basically a sign-up). My server manages the user data from this moment on. Subsequent authentication requests rely on the fact that this mapping never changes, and any user data may differ from the data kept at the provider, e.g. a different mail address at my server compared to the Google mail of the external identity. However, this doesn't violate my requirements.
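A minimal sketch of that one-time mapping, with a hypothetical `db` layer: the (issuer, sub) pair from the verified id_token is the only link to the external identity, and everything else is owned locally from the first login onwards:

```typescript
interface LocalUser { id: string; email: string }

interface UserStore {
  findByExternalId(key: string): Promise<LocalUser | null>;
  create(key: string, email: string): Promise<LocalUser>;
}

async function findOrCreateUser(
  db: UserStore,
  issuer: string,
  sub: string,
  email: string,
): Promise<LocalUser> {
  const key = `${issuer}:${sub}`;
  // Subsequent logins hit this path; provider-side profile changes are ignored.
  const existing = await db.findByExternalId(key);
  if (existing) return existing;
  // The first login acts as the sign-up: copy the email once, then own it locally.
  return db.create(key, email);
}
```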
Furthermore, I want to add that I ended up supporting both the implicit flow and the auth code flow, which means the client can send the id_token directly instead of sending the auth_code. I did not fully get the point of why this is less secure, since from my current perspective my server asks the provider to verify the id_token, which prevents any malicious intent of providing false id_tokens.
We have an application that has a frontend UI (which is a web application) which communicates with a resource server. Our frontend will be using some APIs from the resource server to get data.
I am planning to add the frontend to Okta and provide access to Okta-registered users.
In the resource server, we have some APIs that we want to expose to our customers to integrate into their systems (programmatically). To use our APIs, we have to provide client credentials (client ID/secret) to them. Using the client ID/secret, they will get an access_token and will use that in subsequent requests. We can display this client ID/secret via the frontend UI once the user logs in to it via Okta.
How should I authenticate requests to the resource server from the frontend? And how do I authenticate requests to the resource server from a customer using the client ID/secret? Should I use one or two different tokens for this purpose?
Does Okta provide a per-user client ID/secret that the user (customer) can use to get an access_token and send it to access the resource server, with the resource server validating the token against Okta?
I just did something very similar. You can read about my experience here: How to pass/verify Open ID token between .net core web app and web api?
I don't know what application framework you are using (.NET, Node, etc.), but if you're using .NET the steps are to (a) install the middleware in your web app, (b) install the middleware in your API app, and (c) make sure calls from your web app to the API app pass the id_token.
If you, in addition to that, need to secure it for external users - it should work the same way. The only difference is they will manually call the /authorize endpoint to get their token - but the middleware should take care of the token verification for you in both cases.
Note that I did experience one odd thing, which is that I needed to pass the id_token and not the access_token. It is also worth mentioning that the claims were interpreted differently in the app and the API (in that the names of the claims for, say, userid, were different between them; the data was still the same).
You will have to use 2 different access tokens. There are 2 different flows going on here:
Web UI to API
Business partner system to API
Technically this means:
Authorization Code Flow (PKCE)
Client Credentials Flow
And in terms of tokens it means:
In the first case there is an end user represented in access tokens (the 'sub' claim)
In the second case there is only a Client Id claim in access tokens (a sketch of telling the two apart follows below)
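As an illustration of that distinction, here is a sketch of how the two token types can be told apart after validation. Claim names vary by provider; Okta, for instance, exposes the calling application's id in a "cid" claim, and this example assumes that convention:

```typescript
import type { JWTPayload } from 'jose';

// User tokens carry a "sub" identifying the end user; client credentials
// tokens only identify the calling application.
function describeCaller(claims: JWTPayload): string {
  const cid = (claims as any).cid as string | undefined;
  if (claims.sub && claims.sub !== cid) {
    return `end user ${claims.sub}`; // Authorization Code Flow (PKCE)
  }
  return `partner app ${cid ?? claims.sub}`; // Client Credentials Flow
}
```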
I can advise on token validation techniques if needed - let me know.
To me though this feels like an architectural question - in particular around applying authorization after identifying the caller and versioning / upgrades.
Based on my experience, I tend to prefer the following architecture these days, based on 2 levels of APIs, e.g. with these ones exposed to the internet:
User Experience API serves the UI
Partner API deals with B2B
And both entry-point APIs call the same core services, which are internal. Might be worth discussing with your stakeholders...
I'm working in a microservices environment, where each service authenticates using OpenID Connect against an authentication service (a local IdP), based on users I keep locally in my database.
Now, I want these services to be able to authenticate using Azure, Google, etc.
Can (and should) I modify my authentication service to allow redirection to another IdP, and replace or chain the token to my proprietary token for my services?
Is there a simpler way?
How can I allow users to log in using either name/password OR an external IdP?
I'm doing some research on the topic myself as well, and from what I've found so far, it seems that there is a urn:ietf:params:oauth:grant-type:token-exchange grant type that should allow exchanging an external IdP token for an internal one, as described in the OAuth 2.0 Token Exchange spec (RFC 8693).
It should be supported as part of the OpenID Connect /token endpoint, so as long as the local IdP supports it, I guess this should be the best practice to achieve what you are looking for.
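For reference, a sketch of what that token exchange request looks like on the wire, using the parameter names from RFC 8693; the endpoint URL and client id are placeholders, and the local IdP must actually support this grant type:

```typescript
// Exchange an external IdP token for an internal one at the local IdP.
async function exchangeToken(externalToken: string): Promise<string> {
  const res = await fetch('https://idp.example.com/oauth2/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'urn:ietf:params:oauth:grant-type:token-exchange',
      subject_token: externalToken,
      subject_token_type: 'urn:ietf:params:oauth:token-type:access_token',
      client_id: 'my-internal-client', // hypothetical
    }),
  });
  if (!res.ok) throw new Error(`Exchange failed: ${res.status}`);
  const { access_token } = await res.json();
  return access_token; // internal token usable across the microservices
}
```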
I'm currently looking into the mitreid-connect IdP implementation as a local IdP, and one of my requirements is to also allow SSO with third parties while being able to issue a local token from the external user identity.
Will update as it goes...
If you manage all the SPs (your microservices), it's definitely easier to implement it on your common IdP.
But if the SPs are external ones (like existing services you just installed) and they already implement the public IdP you want to use, it will be a bit harder to pass through your current IdP without problems.
I'm guessing you are in the first case (you made all your SPs), so I will elaborate on it:
When your current IdP authenticates a user against another public IdP, it will get some information (email, name, etc.), and you can normalize this in your response to make sure your SPs are completely agnostic of which original IdP was used. That will make it easier for you to debug this setup in the future, and of course to add a new public IdP...
But if you need to make some specific calls to the original IdP (let's say the YouTube API, for example), you could have an agnostic API on your common IdP which forwards to the appropriate proprietary API of the original IdP, or denies the request if the IdP does not have a video system.
Or you could hand the original token to your SPs, in a custom field or scope of your OIDC token, so that, for example, an SP dedicated to video could directly call the YouTube API with the Google user token.
I recently did a similar setup for my company. I would like to share the overall structure to give an idea about our solution. Hope it helps:
Our authentication server is a Node.js Express server with the following properties:
Hosts static login screens to allow authentication against the local database via email + password, and provides links to authenticate with external OAuth2 providers.
Both local and external authentication requests are forwarded to Passport.js authentication strategies.
After successful login, both local and external Passport.js strategies respond to a callback. Upon this response, a session object is created via express-session and a cookie is sent.
At this point, cookies can be used to exchange JWTs, so that authentication against stateless APIs is possible with Bearer access tokens (a condensed sketch follows after this list).
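Here is a condensed sketch of that setup (Express + Passport.js + express-session); the Google strategy configuration is omitted, the route paths are illustrative, and the secrets are placeholders:

```typescript
import express from 'express';
import session from 'express-session';
import passport from 'passport';
import jwt from 'jsonwebtoken';

const app = express();
app.use(session({
  secret: process.env.SESSION_SECRET!,
  resave: false,
  saveUninitialized: false,
}));
app.use(passport.initialize());
app.use(passport.session());

// OAuth2 callback: Passport populates req.user and express-session sets the cookie.
// (Registering the 'google' strategy and serializers is omitted here.)
app.get('/auth/google/callback',
  passport.authenticate('google', { failureRedirect: '/login' }),
  (_req, res) => res.redirect('/'),
);

// Cookie-for-JWT exchange so stateless APIs can accept Bearer tokens.
app.get('/token', (req, res) => {
  if (!req.user) return res.sendStatus(401);
  const token = jwt.sign(
    { sub: (req.user as any).id },
    process.env.TOKEN_SECRET!,
    { expiresIn: '15m' },
  );
  res.json({ access_token: token });
});
```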
I'm having a hard time choosing a decent/secure authentication strategy for a microservice architecture. The only SO post I found on the topic is this one: Single Sign-On in Microservice Architecture
My idea here is to have in each service (e.g. authentication, messaging, notification, profile, etc.) a unique reference to each user (quite logically, their user_id) and the possibility to get the current user's id if they are logged in.
From my research, I see there are two possible strategies:
1. Shared architecture
In this strategy, the authentication app is one service among others. But each service must be able to make the conversion session_id => user_id, so it must be dead simple. That's why I thought of Redis, which would store the key:value pair session_id:user_id (see the sketch below).
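A minimal sketch of that shared lookup, assuming the Node `redis` (v4) client and a hypothetical internal hostname; the authentication service writes the key at login (typically with a TTL), and every other service resolves it with one call:

```typescript
import { createClient } from 'redis';

const redis = createClient({ url: 'redis://sessions.internal:6379' }); // hypothetical host
await redis.connect();

// Resolve session_id => user_id; returns null for unknown/expired sessions.
export async function currentUserId(sessionId: string): Promise<string | null> {
  return redis.get(`session:${sessionId}`);
}
```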
2. Firewall architecture
In this strategy, session storage doesn't really matter, as it is only handled by the authenticating app. Then the user_id can be forwarded to the other services. I thought of Rails + Devise (+ Redis or memcached, or cookie storage, etc.), but there are tons of possibilities. The only thing that matters is that Service X will never need to authenticate the user.
How do those two solutions compare in terms of:
security
robustness
scalability
ease of use
Or maybe you would suggest another solution I haven't mentioned in here?
I like solution #1 better, but I haven't found many reference implementations that would reassure me that I'm going in the right direction.
Based on what I understand, a good way to solve this is by using the OAuth 2 protocol (you can find a little more information about it at http://oauth.net/2/).
When your user logs into your application they will get a token, and with this token they will be able to identify themselves in requests to other services.
Example of Chained Microservice Design
Resources:
http://presos.dsyer.com/decks/microservice-security.html
https://github.com/intridea/oauth2
https://spring.io/guides/tutorials/spring-security-and-angular-js/
Short answer: use OAuth 2.0-style token-based authentication, which can be used in any type of application, like a web app or mobile app. The sequence of steps involved for a web application would then be to:
authenticate against the ID provider
keep the access token in a cookie
access the pages in the web app
call the services
The diagram below depicts the components which would be needed. Such an architecture, separating the web and data APIs, will give good scalability, resilience and stability.
You can avoid storing session info in the backend by using JWT tokens.
Here's how it could look using OAuth 2.0 & OpenID Connect. I'm also adding username & password login to the answer, as I assume most people add it as a login option too.
Here are the suggested components of the solution:
Account-service: a microservice responsible for user creation & authentication. It can have endpoints for Google, Facebook and/or regular username & password authentication endpoints: login, register.
On registration - meaning via the register endpoint or the first Google/FB login - we can store info about the user in the DB.
After the user successfully logs in using either of the options, on the server side we create a JWT token with the relevant user data, like the userID. To avoid tampering, we sign it using a token secret we define (that's a string).
This token should be returned as an httpOnly cookie alongside the login response. It is recommended that it is Secure (HTTPS-only) too. This token would be the ID token, with regard to the OpenID Connect specification.
Client-side web application: receives the signed JWT as an httpOnly cookie, which means this data is not accessible to JavaScript code, which is recommended from a security standpoint. When sending subsequent requests to the server or to other microservices, we attach the cookie to the request (in axios this means using withCredentials: true).
Microservices that need to authenticate the user by the token:
These services verify the signature of the JWT token and read it using the same secret used to sign the token. Then they can access the data stored in the token, like the userID, query the DB for additional info about the user, or run whatever other logic. Note: this is not intended for use as authorization, but for authentication; for that, we have the refresh token & access token, which are out of the scope of the question.
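For concreteness, a sketch of the login response and downstream verification described above, using the `jsonwebtoken` package; the cookie name, claim name, and TOKEN_SECRET environment variable are illustrative stand-ins for the shared signing secret mentioned in the answer:

```typescript
import jwt from 'jsonwebtoken';
import type { Request, Response } from 'express';

// Account-service: sign the JWT and return it as an httpOnly, Secure cookie.
export function issueLoginCookie(res: Response, userId: string): void {
  const token = jwt.sign({ userID: userId }, process.env.TOKEN_SECRET!, { expiresIn: '1h' });
  // httpOnly keeps the token away from JS; secure restricts it to HTTPS.
  res.cookie('id_token', token, { httpOnly: true, secure: true, sameSite: 'strict' });
}

// Any microservice that shares the secret can authenticate the user.
export function userIdFromRequest(req: Request): string {
  const token = req.cookies?.id_token; // requires the cookie-parser middleware
  const payload = jwt.verify(token, process.env.TOKEN_SECRET!) as { userID: string };
  return payload.userID;
}
```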
I have recently created a detailed guide specifically about this subject, in case it helps someone: https://www.aspecto.io/blog/microservices-authentication-strategies-theory-to-practice/
One more architectural perspective is to use a NuGet package (library) which actually does the authentication/token validation. The NuGet package would be consumed by each microservice.
One more benefit is that there is no code duplication.
You can use IdentityServer4 for authentication and authorization purposes.
You should use the firewall architecture, since it gives you more control over security, robustness, scalability and ease of use.