OAuth for Microservices with API Gateway

In a microservice architecture, an API gateway sits in front of the API. Its purpose is, for example, to modify request/response parameters, provide a single entry point, or check authentication. Now I would like to protect my API using OAuth2 flows to obtain an access token. The problem is deciding who the actual OAuth client is; I will demonstrate this with a SPA example:
a) Is it the SPA that initiated the authorize request (to the API gateway) using the implicit grant? The API gateway would then simply route the request through to the OAuth authorization server, acting as a single entry point, handling the /authorize part of the implicit flow.
b) Is it the API gateway itself, meaning the SPA sends the end user's username and password to the API gateway (of course, the SPA then needs to be trusted with the end user's credentials), which then acts as an OAuth client in its own right, using the resource owner password grant?
c) Or should I drop the API gateway entirely and place the OAuth authorization server "parallel" to it, meaning I would lose the single entry point, etc.?
The following picture shows a very abstract architecture, and the question is about number "[2]": is this request initiated by the SPA and passed through by the API gateway, or does the API gateway intercept the request and act on its own as an OAuth client?
OAuth with API Gateway
My guess is to always use the best-fitting grant type for a given client, regardless of whether an API gateway is in between or not. This would mean that, when it comes to OAuth, the API gateway simply passes the client's authorization request through, whatever grant type it uses. Therefore, [2] in the picture would come from the client, not from the API gateway acting as the OAuth client. Is this correct? As mentioned, this really gets tricky for first-party apps, as you could probably use the resource owner password credentials grant, which has huge drawbacks, e.g. no token refresh possible for SPAs.

Please bear in mind that this is a purely opinion-based answer, because your question is pretty vague.
I don't like the idea of using the API gateway as the point that authenticates requests. I think this violates the Single Responsibility Principle. The gateway's purpose is usually to expose your backend to external clients, perhaps change the contract for some specific clients, and so on. It shouldn't also authenticate calls. Besides, it would have to do so based on data passed to it, which you would have to gather somewhere else anyway.
Another thing I consider undesirable is that you're thinking about using the resource owner password grant for your SPA. This is not the correct use case for that grant. This article explains it much better than I could: https://www.scottbrady91.com/OAuth/Why-the-Resource-Owner-Password-Credentials-Grant-Type-is-not-Authentication-nor-Suitable-for-Modern-Applications
I would suggest you use the implicit grant type and use the API gateway only to route calls to the backend; don't authenticate the calls at that layer.
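To make that concrete, here is a minimal sketch (not from the original answer) of option (a) from the question: the SPA itself is the OAuth client and starts the implicit grant, and the gateway merely routes /oauth/authorize through to the authorization server. It is written in TypeScript for the browser, and the gateway URL, client ID, scope and redirect URI are all hypothetical placeholders.

// Option (a): the SPA is the OAuth client; the gateway only routes the
// /oauth/authorize request through to the authorization server.
// All URLs, the client ID and the scope below are hypothetical.
const GATEWAY_BASE = "https://gateway.example.com";      // single entry point
const CLIENT_ID = "spa-client";                          // client registered for the SPA
const REDIRECT_URI = "https://spa.example.com/callback";

function startImplicitLogin(): void {
  const state = crypto.getRandomValues(new Uint32Array(4)).join("-"); // CSRF protection
  sessionStorage.setItem("oauth_state", state);

  const authorizeUrl = new URL(`${GATEWAY_BASE}/oauth/authorize`);
  authorizeUrl.searchParams.set("response_type", "token"); // implicit grant
  authorizeUrl.searchParams.set("client_id", CLIENT_ID);
  authorizeUrl.searchParams.set("redirect_uri", REDIRECT_URI);
  authorizeUrl.searchParams.set("scope", "api.read");
  authorizeUrl.searchParams.set("state", state);

  // The request [2] comes from the client; the gateway just passes it through.
  window.location.assign(authorizeUrl.toString());
}

// After the redirect back, the access token arrives in the URL fragment.
function extractAccessToken(): string | null {
  const params = new URLSearchParams(window.location.hash.slice(1));
  if (params.get("state") !== sessionStorage.getItem("oauth_state")) return null;
  return params.get("access_token");
}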
If you're using a Spring Cloud API gateway (essentially a Zuul proxy), you will have to add the proper configuration so that it forwards all security headers and redirects. This example works for me:
server:
  use-forward-headers: true
zuul:
  addHostHeader: true
  routes:
    <your oauth server route>:
      url: <your oauth server url>
      path: <if youve got any prefix>
      sensitiveHeaders:
      stripPrefix: false

It doesn't matter what Krzysztof Chris Mejka likes or dislikes in the previous answer. Take a look at the BCP - https://datatracker.ietf.org/doc/html/draft-ietf-oauth-browser-based-apps#section-6
The latest recommendation from the OAuth working group at the IETF is to use a kind of API gateway/reverse proxy; in other words, you need to keep tokens out of JavaScript entirely.
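To illustrate what that recommendation amounts to in practice, here is a rough, hypothetical sketch (TypeScript with Express and express-session; all URLs, client credentials and the session handling are placeholders, not anything prescribed by the BCP itself) of a gateway/reverse proxy that performs the OAuth flow itself, keeps the tokens in a server-side session, and hands the browser only an HttpOnly cookie, so no token is ever exposed to JavaScript:

import express from "express";
import session from "express-session";

// Keep the access token in the server-side session only (never sent to the SPA).
declare module "express-session" {
  interface SessionData {
    accessToken?: string;
  }
}

const app = express();
app.use(
  session({
    secret: "replace-me",                                   // use a real secret store
    resave: false,
    saveUninitialized: false,
    cookie: { httpOnly: true, secure: true, sameSite: "lax" },
  })
);

// Authorization code flow callback handled by the gateway, not the SPA.
app.get("/auth/callback", async (req, res) => {
  const code = String(req.query.code ?? "");
  const tokenResponse = await fetch("https://auth.example.com/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code,
      redirect_uri: "https://gateway.example.com/auth/callback",
      client_id: "gateway-client",
      client_secret: "gateway-secret",
    }),
  });
  const { access_token } = (await tokenResponse.json()) as { access_token: string };
  req.session.accessToken = access_token;                   // stays on the server
  res.redirect("/");                                        // SPA continues with the cookie only
});

// API calls from the SPA are proxied; the gateway attaches the token itself.
app.get("/api/*", async (req, res) => {
  const upstream = await fetch(`https://backend.example.com${req.path}`, {
    headers: { Authorization: `Bearer ${req.session.accessToken ?? ""}` },
  });
  res.status(upstream.status).send(await upstream.text());
});

app.listen(3000);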


Switch authorization mechanism in Apigee

My architecture includes an API gateway (Apigee) that dispatches requests to a backend service.
All APIs (HTTP and REST) have an authorization mechanism, so the client needs to authenticate itself against an IAM before operating on my APIs.
To avoid bad security design, all services behind the API gateway accept only authenticated requests.
For (technical and business) requirement reasons, I need to use different authentication mechanisms internally and externally: when a client sends a request, it has to contain the authorization information provided by IAM1, whereas the backend services accept requests containing authorization information that comes from IAM2.
For the translation, I created a 'translation-token-service' that exposes an API implementing the token translation.
My goal is to "link" Apigee to the 'translation-token-service' so that clients can keep using the authorization from IAM1 and the backend services can keep using the authorization information from IAM2: when a request arrives, Apigee should invoke the 'translation-token-service' to obtain a token with IAM2 authorization information, inject it into the request, and dispatch it to the right backend service.
NB: the token conversion done by Apigee should somehow be cached (I don't care about the storage type), so if a client makes two calls with the same token, the first invocation asks to convert the token and the second one doesn't, for performance reasons.
NB: IAM1 is not OIDC compliant; IAM2 is.
How can I implement this flow?
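There is no Apigee-specific policy configuration recorded here, but the flow being described is straightforward to express in code. The following is a hypothetical sketch (TypeScript; all URLs, the cache TTL and the payload format of the 'translation-token-service' are assumptions, not Apigee APIs) of what the gateway-side logic amounts to: intercept the IAM1 token, translate it once via the 'translation-token-service', cache the result, and inject the IAM2 token before dispatching to the backend service.

// Hypothetical sketch of the described translation flow with caching.
const translationCache = new Map<string, { iam2Token: string; expiresAt: number }>();
const CACHE_TTL_MS = 5 * 60 * 1000; // assumption: cache conversions for 5 minutes

async function translateToken(iam1Token: string): Promise<string> {
  const cached = translationCache.get(iam1Token);
  if (cached && cached.expiresAt > Date.now()) {
    return cached.iam2Token;                   // second call with the same token: no conversion
  }
  // First call with this token: ask the translation-token-service to convert IAM1 -> IAM2.
  const response = await fetch("https://translation-token-service.internal/translate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ token: iam1Token }),
  });
  if (!response.ok) throw new Error("token translation failed");
  const { token: iam2Token } = (await response.json()) as { token: string };
  translationCache.set(iam1Token, { iam2Token, expiresAt: Date.now() + CACHE_TTL_MS });
  return iam2Token;
}

// At the gateway: swap the authorization information before dispatching to the backend.
async function forwardWithTranslatedToken(path: string, iam1Token: string): Promise<Response> {
  const iam2Token = await translateToken(iam1Token);
  return fetch(`https://backend-service.internal${path}`, {
    headers: { Authorization: `Bearer ${iam2Token}` },
  });
}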

How to send user details to backend API if client user was authenticated at the API Management service front end?

I'm interested in using Azure APIM to authenticate requests from a client app using Azure AD. However, I would like the backend API to know details about the user who was authenticated at the gateway, so that I can perform logic based on that user.
The backend API would be an ASP.NET Core 3.x web API.
What approaches are available to me?
One thing I can think of off the top of my head is to simply forward the Authorization header along to the backend API and do some type of lookup there. But I am wondering if there is any way to aggregate the user details at the front-end API gateway layer and pass that information along in a header.
How are backend applications obtaining details about the authenticated user when requests are authenticated at the front-end gateway layer?
I realize that if I were to add authentication at the backend API layer (which would duplicate the authentication/authorization), I might be able to technically accomplish it that way. However, this would violate the single responsibility principle and perhaps add some unnecessary overhead to the backend.
Azure AD authentication at APIM mostly means validating the JWT passed along with the request. You can call out to other endpoints with the send-request policy to obtain more information (for example, to do group expansion), but that has to be explicitly configured through policies.
After the token is validated, it is available to you in object-model form, so you can inspect its properties and use any part of the token in policy expressions, say to send parts of it as headers to your backend API.
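The asker's backend is ASP.NET Core, but to illustrate the receiving side of this pattern, here is a tiny sketch (TypeScript/Express, chosen only to keep the examples in this write-up in one language; the header names are hypothetical and would be whatever your APIM policy expressions set) of a backend that reads user details injected as headers by the gateway:

import express from "express";

const app = express();

app.get("/orders", (req, res) => {
  // Assumption: the gateway validated the JWT and forwarded selected claims
  // as headers. Only trust these if the backend is not reachable directly.
  const userId = req.header("X-User-Id");
  const userEmail = req.header("X-User-Email");

  if (!userId) {
    res.status(401).json({ error: "missing gateway-injected identity" });
    return;
  }
  // Perform user-specific logic with the forwarded identity.
  res.json({ user: userId, email: userEmail, orders: [] });
});

app.listen(8080);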

Custom Authentication Service in Kong API Gateway

We are currently evaluating API gateways for our microservices, and Kong is one of the possible candidates. We discovered that Kong supports several plugins for authentication, but they are all based on users stored in Kong's own database. We need to delegate this responsibility to our custom auth HTTP service and don't want to add these users to the API gateway's database.
It's possible to do this with some code around it, instead of using the OpenID Connect plugin; in effect, you need to implement an authorization server which talks to Kong via the Admin port (8001) and authorizes the use of an API with externally supplied user IDs.
In short, it goes as follows (here for the authorization code grant; a code sketch follows after these steps):
Instead of asking Kong directly for tokens, hit the authorization server with a request to get a token for a specific API (either hard-coded or parameterized, depending on what you need), and include the client ID of the application which needs access in the call (you implement the /authorize endpoint, in fact).
The authorization server now needs to authenticate against whatever IdP you need, so that you have the authenticated user inside your authorization server.
Now get the provision key for your API via the Kong Admin API, and hit the /oauth2/authorize endpoint of your Kong gateway (port 8443), including the provision key; note that you may also need to look up the client secret for the application's client ID via the Admin API to make this work.
Include the client ID, client secret, authenticated user ID (from your custom IdP) and optionally scope in the POST to /oauth2/authorize; these values will be attached to backend calls to your API made with the access token the application can later obtain using the authorization code.
Kong will give you back an authorization code, which you pass back to the application via a 302 redirect (you will need to read the OAuth2 spec for this).
The application uses its client ID and secret, together with the authorization code, to get the access token (and refresh token) from Kong's port 8443, URL /oauth2/token.
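The sketch below (TypeScript; host names are hypothetical, and the provision key and client secret would be looked up via the Admin API on port 8001) shows roughly what the custom authorization server does in the third and fourth steps, after it has authenticated the user against the external IdP; the parameter names follow Kong's OAuth2 plugin authorize endpoint:

interface KongAuthorizeResponse {
  redirect_uri: string; // contains the authorization code for the application
}

async function authorizeWithKong(
  authenticatedUserId: string, // user ID from your custom IdP
  clientId: string,
  scope: string
): Promise<string> {
  const response = await fetch("https://kong.example.com:8443/my-api/oauth2/authorize", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      response_type: "code",
      client_id: clientId,
      scope,
      provision_key: "<provision key from the Admin API>",
      authenticated_userid: authenticatedUserId,
    }),
  });
  if (!response.ok) throw new Error(`Kong /oauth2/authorize failed: ${response.status}`);

  const { redirect_uri } = (await response.json()) as KongAuthorizeResponse;
  // Pass this back to the application as a 302 redirect; the application then
  // exchanges the code at /oauth2/token using its client ID and secret.
  return redirect_uri;
}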
It sounds more involved than it actually is. I did this for wicked.haufe.io, which is based on Kong and node.js and adds an open source developer portal to Kong. There is a lot of code in the following projects which shows what can be done to integrate with any IdP:
https://github.com/apim-haufe-io/wicked.portal-kong-adapter
https://github.com/Haufe-Lexware/wicked.auth-passport
https://github.com/Haufe-Lexware/wicked.auth-saml
We're currently investigating whether we can also add a default authorization server to wicked, but right now you'd have to roll/fork your own.
Maybe this helps, Martin
Check out Kong's OpenID Connect plugin getkong.org/plugins/openid-connect-rp - it connects to external identity and auth systems.

Using OpenID Connect for authenticating a SPA and REST API

I have an API server (resource server) and multiple apps: a Web GUI (SPA), a desktop client, and maybe more coming.
I'd like to use OpenID Connect in addition to HTTP Basic authentication for my API server.
It should be configurable which OpenID provider to use: my own, Facebook, Google...
I only want to do authentication; I do not need their APIs. I only need some profile data like email or first name.
Let's say I have configured Google as my IdP and I'm currently using my Web GUI (SPA). I need to log in. No problem: according to https://developers.google.com/identity/protocols/OpenIDConnect I redirect the user to Google, get my authorization code, and the Web GUI (SPA) gets an id_token and access_token from Google.
No problem so far, but now the SPA has to work with my API server, and the API server needs to authenticate every request coming from the client (Web GUI SPA), since it is a stateless REST API, and needs to know which user actually made it.
A
So the access_token from Google is meant to be used to access Google APIs, right? But I could also just pass this access_token with every request to my API server, and the API server calls https://www.googleapis.com/oauth2/v3/tokeninfo?access_token=xxx to verify the access_token and get the account name (mail). But this doesn't sound right, does it?
B
I also have an id_token, which I can verify without calling Google's servers every time. So could I also just pass the id_token as a bearer token with every request to my API server, and have the API server verify the id_token? But according to the OpenID Connect spec, the access_token is actually the one that gets passed to the API server, and the id_token is supposed to stay on the client.
But then the id_token would be completely useless to me; the API server needs to know who the user is, and the client (Web GUI) doesn't really care.
C
Or, since it is my own API server, does my API server actually need to implement the whole OAuth2 machinery by itself, not just authentication but also issuing access tokens and more? I would then have a /api/tokensign endpoint to which I can pass the id_token from Google; the API verifies the id_token and creates an access_token for my Web GUI (SPA). This new access_token can then be passed as a bearer token with every API request. This actually sounds like the best solution according to the specs, but do I really need to implement OAuth2 myself in my API? It sounds like a heavy addition, since A and B could also be implemented.
My REST API needs authentication with every request, so is A, B, or C the right approach? Please don't tell me this is opinion-based; it is not.
What is the right way using oauth2/openid-connect for authentication?
You can use all three methods you have mentioned above, but each comes with some considerations. I will explain them with regard to the available specifications.
Scenario - Two systems S1, S2
S1 - Identity provider
S2 - API endpoint
What you need - Trust and use 'Tokens' issued by S1 to access S2
Explanations for the proposed solutions A, B and C
A - Verify tokens issued by S1 for each call
This can be done using the RFC 7662 OAuth 2.0 Token Introspection endpoint. This kind of validation is allowed by the specification, so yes, you can use the token verification endpoint.
The advantage of this method is that if a token is revoked, the effect is instantaneous: the very next API call will fail. But there is indeed a performance implication, since you need an extra verification service call.
Note that you do not need to get the account name from this verification response. It could be taken from the ID token and used as an extra check.
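As a sketch of what such a per-request check can look like (TypeScript; the introspection URL and the API server's client credentials are placeholders for whatever your identity provider exposes, Google's tokeninfo endpoint from the question being a provider-specific variant of the same idea):

interface IntrospectionResponse {
  active: boolean;   // RFC 7662: the only required member of the response
  sub?: string;
  scope?: string;
  exp?: number;
}

// Option A: the API server asks the issuer about every incoming access token.
async function introspect(accessToken: string): Promise<IntrospectionResponse> {
  const basic = Buffer.from("api-server-client:api-server-secret").toString("base64");
  const response = await fetch("https://idp.example.com/oauth2/introspect", {
    method: "POST",
    headers: {
      Authorization: `Basic ${basic}`, // the introspection call itself is authenticated
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body: new URLSearchParams({ token: accessToken }),
  });
  return (await response.json()) as IntrospectionResponse;
}

// Revocation takes effect on the very next call, at the cost of an extra round trip.
async function requireActiveToken(accessToken: string): Promise<string> {
  const info = await introspect(accessToken);
  if (!info.active) throw new Error("token is not active");
  return info.sub ?? "unknown";
}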
B - Trust tokens issued by S1 for each call
This approach is an extension of RFC 6750 - The OAuth 2.0 Authorization Framework: Bearer Token Usage. You can indeed use the ID token to authenticate and authorize an end user. This link contains a good explanation of using the ID token as a bearer token.
You can verify the validity of the token using its signature (MAC) and even use encryption. But be mindful to use short-lived tokens and to always use TLS, and be mindful about refreshing tokens, because according to the OpenID Connect specification, an ID token is not a mandatory part of the response to a refresh token request.
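A minimal sketch of option B with Google as the IdP (TypeScript using the jose library; the client ID is a placeholder), where the API server verifies the id_token locally on each request:

import { createRemoteJWKSet, jwtVerify } from "jose";

// Google's published signing keys; the library caches and refreshes them.
const GOOGLE_JWKS = createRemoteJWKSet(
  new URL("https://www.googleapis.com/oauth2/v3/certs")
);

export async function verifyGoogleIdToken(idToken: string) {
  const { payload } = await jwtVerify(idToken, GOOGLE_JWKS, {
    issuer: ["https://accounts.google.com", "accounts.google.com"],
    audience: "<your-google-client-id>.apps.googleusercontent.com",
  });
  // Signature, issuer, audience and expiry are checked without calling Google.
  return {
    subject: payload.sub,                        // stable user identifier
    email: payload.email as string | undefined,  // profile data, if present
  };
}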
C - A wrapper for federation
For this you can write your own solution or use an existing one (e.g. WSO2 Identity Server). The identity server is configured to choose the identity provider for your application (a client such as the desktop app or web app). It performs the necessary redirects and provides you with the required tokens. But you will still need to use the introspection endpoint to validate token validity.
If you go one step further with this solution, you can implement a token exchange mechanism: you exchange the token carried in from outside for tokens issued internally by one of your own systems (e.g. a Google access token for your internal access token). The advantage of this approach is that you have control over validation. Also, since subsequent token validations are done internally, there should be a performance improvement.
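A rough sketch of that exchange, matching option C from the question (a /api/tokensign endpoint that accepts the external id_token and mints a short-lived internal token): TypeScript with jose; the secret, issuer and lifetime are placeholders, and verifyGoogleIdToken is the helper sketched under option B above.

import { SignJWT, jwtVerify } from "jose";
import { verifyGoogleIdToken } from "./verify-google-id-token"; // helper from the option B sketch

const INTERNAL_SECRET = new TextEncoder().encode("replace-with-a-real-secret");
const INTERNAL_ISSUER = "https://api.example.com";

// POST /api/tokensign: validate the external token once, issue an internal one.
export async function tokenSign(externalIdToken: string): Promise<string> {
  const user = await verifyGoogleIdToken(externalIdToken);
  return new SignJWT({ email: user.email })
    .setProtectedHeader({ alg: "HS256" })
    .setSubject(String(user.subject))
    .setIssuer(INTERNAL_ISSUER)
    .setIssuedAt()
    .setExpirationTime("15m")          // keep internal tokens short-lived
    .sign(INTERNAL_SECRET);
}

// Subsequent API calls: validate the internal token locally, no external call needed.
export async function requireInternalToken(bearerToken: string) {
  const { payload } = await jwtVerify(bearerToken, INTERNAL_SECRET, {
    issuer: INTERNAL_ISSUER,
  });
  return payload.sub;
}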
Hope this explains some doubts you have.

REST API Token-based Authentication

I'm developing a REST API that requires authentication. Because the authentication itself occurs via an external webservice over HTTP, I reasoned that we would dispense tokens to avoid repeatedly calling the authentication service. Which brings me neatly to my first question:
Is this really any better than just requiring clients to use HTTP Basic Auth on each request and caching calls to the authentication service server-side?
The Basic Auth solution has the advantage of not requiring a full round-trip to the server before requests for content can begin. Tokens can potentially be more flexible in scope (i.e. only grant rights to particular resources or actions), but that seems more appropriate to the OAuth context than my simpler use case.
Currently tokens are acquired like this:
curl -X POST localhost/token --data "api_key=81169d80...
&verifier=2f5ae51a...
&timestamp=1234567
&user=foo
&pass=bar"
The api_key, timestamp and verifier are required by all requests. The "verifier" is returned by:
sha1(timestamp + api_key + shared_secret)
My intention is to only allow calls from known parties, and to prevent calls from being reused verbatim.
Is this good enough? Underkill? Overkill?
With a token in hand, clients can acquire resources:
curl localhost/posts?api_key=81169d80...
&verifier=81169d80...
&token=9fUyas64...
&timestamp=1234567
For the simplest call possible, this seems kind of horribly verbose. Considering the shared_secret will wind up being embedded in (at minimum) an iOS application, from which I would assume it can be extracted, is this even offering anything beyond a false sense of security?
Let me separate everything out and approach each problem in isolation:
Authentication
For authentication, Basic Auth has the advantage that it is a mature solution at the protocol level. This means a lot of "might crop up later" problems are already solved for you. For example, with Basic Auth, user agents know the password is a password, so they don't cache it.
Auth server load
If you dispense a token to the user instead of caching the authentication on your server, you are still doing the same thing: caching authentication information. The only difference is that you are shifting the responsibility for the caching to the user. This seems like unnecessary labor for the user with no gain, so I recommend handling this transparently on your server, as you suggested.
Transmission Security
If you can use an SSL connection, that's all there is to it; the connection is secure*. To prevent accidental multiple execution, you can filter out duplicate URLs or ask users to include a random component ("nonce") in the URL.
url = username:key@myhost.com/api/call/nonce
If that is not possible, and the transmitted information is not secret, I recommend securing the request with a hash, as you suggested in the token approach. Since the hash provides the security, you could instruct your users to provide the hash as the Basic Auth password. For improved robustness, I recommend using a random string instead of the timestamp as the "nonce" to prevent replay attacks (two legitimate requests could be made during the same second). Instead of providing separate "shared secret" and "api key" fields, you can simply use the API key as the shared secret, and then use a salt that doesn't change to prevent rainbow table attacks. The username field seems like a good place to put the nonce, too, since it is part of the auth. So now you have a clean call like this:
nonce = generate_secure_password(length: 16);
one_time_key = nonce + '-' + sha1(nonce+salt+shared_key);
url = username:one_time_key@myhost.com/api/call
It is true that this is a bit laborious. This is because you aren't using a protocol level solution (like SSL). So it might be a good idea to provide some kind of SDK to users so at least they don't have to go through it themselves. If you need to do it this way, I find the security level appropriate (just-right-kill).
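For what it's worth, here is the one-time-key construction above as a small runnable snippet (TypeScript on Node; the salt, key and host are placeholders), just to make the scheme concrete:

import { createHash, randomBytes } from "crypto";

// nonce + '-' + sha1(nonce + salt + shared_key), as sketched above.
function buildOneTimeKey(sharedKey: string, salt: string): string {
  const nonce = randomBytes(16).toString("hex"); // stands in for generate_secure_password(length: 16)
  const digest = createHash("sha1").update(nonce + salt + sharedKey).digest("hex");
  return `${nonce}-${digest}`;
}

// The client sends the result as the Basic Auth password part of the URL.
const oneTimeKey = buildOneTimeKey("my-api-key", "static-salt");
const url = `https://username:${oneTimeKey}@myhost.com/api/call`;
console.log(url);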
Secure secret storage
It depends on who you are trying to thwart. If you are preventing people with access to the user's phone from using your REST service in the user's name, then it would be a good idea to find some kind of keyring API on the target OS and have the SDK (or the implementor) store the key there. If that's not possible, you can at least make it a bit harder to get the secret by encrypting it and storing the encrypted data and the encryption key in separate places.
If you are trying to keep other software vendors from getting your API key to prevent the development of alternate clients, only the encrypt-and-store-separately approach comes close to working. This is whitebox crypto, and to date no one has come up with a truly secure solution to problems of this class. The least you can do is still issue an individual key for each user so you can ban abused keys.
(*) EDIT: SSL connections should no longer be considered secure without taking additional steps to verify them.
A purely RESTful API should use the underlying protocol's standard features:
For HTTP, the RESTful API should comply with the existing standard HTTP headers. Adding a new HTTP header violates the REST principles. Do not re-invent the wheel; use all the standard features of HTTP/1.1, including status response codes, headers, and so on. RESTful web services should leverage and rely upon the HTTP standards.
RESTful services MUST be STATELESS. Any trick, such as token-based authentication, that attempts to remember the state of previous REST requests on the server violates the REST principles. Again, this is a MUST; that is, if your web server saves any request/response context-related information on the server in an attempt to establish any sort of session, then your web service is NOT stateless. And if it is NOT stateless, it is NOT RESTful.
Bottom line: for authentication/authorization purposes you should use the standard HTTP Authorization header. That is, you should add the HTTP Authorization header to each request that needs to be authenticated. The REST API should follow the HTTP authentication scheme standards. The specifics of how this header should be formatted are defined in the RFC 2616 HTTP 1.1 standard (section 14.8, Authorization) and in RFC 2617, HTTP Authentication: Basic and Digest Access Authentication.
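For example, a client following this advice simply computes the standard Basic credentials and attaches the Authorization header to every single call (a trivial TypeScript sketch; host and credentials are placeholders):

// Every request carries the standard Authorization header (RFC 2617 Basic scheme);
// the server keeps no session state between calls.
const credentials = Buffer.from("alice:s3cret").toString("base64");

async function getPosts() {
  const response = await fetch("https://api.example.com/posts", {
    headers: { Authorization: `Basic ${credentials}` },
  });
  return response.json();
}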
I have developed a RESTful service for the Cisco Prime Performance Manager application. Search Google for the REST API document that I wrote for that application for more details about RESTful API compliance. In that implementation, I chose to use the HTTP "Basic" authorization scheme; check out version 1.5 or above of that REST API document and search for "authorization" in the document.
On the web, a stateful protocol is based on a temporary token that is exchanged between a browser and a server (via a cookie header or URI rewriting) on every request. That token is usually created on the server side; it is a piece of opaque data with a certain time-to-live, and its sole purpose is to identify a specific web user agent. That is, the token is temporary and becomes STATE that the web server has to maintain on behalf of a client user agent for the duration of the conversation. Therefore, communication using a token in this way is STATEFUL. And if the conversation between client and server is STATEFUL, it is not RESTful.
The username/password (sent in the Authorization header) is usually persisted in a database with the intent of identifying a user. Sometimes the user could be another application; however, the username/password is NEVER intended to identify a specific web client user agent. The conversation between a web agent and server based on using the username/password in the Authorization header (following HTTP Basic authorization) is STATELESS, because the web server front end is not creating or maintaining any STATE information whatsoever on behalf of a specific web client user agent. And based on my understanding of REST, the protocol states clearly that the conversation between clients and server should be STATELESS. Therefore, if we want to have a true RESTful service, we should use the username/password (refer to the RFCs mentioned above) in the Authorization header for every single call, NOT a session kind of token (e.g. session tokens created by web servers, OAuth tokens created by authorization servers, and so on).
I understand that several so-called REST providers are using tokens like OAuth1 or OAuth2 access tokens to be passed as "Authorization: Bearer <token>" in HTTP headers. However, it appears to me that using those tokens for RESTful services would violate the true STATELESS meaning that REST embraces, because those tokens are temporary pieces of data created/maintained on the server side to identify a specific web client user agent for the valid duration of that web client/server conversation. Therefore, any service that is using those OAuth1/2 tokens should not be called REST if we want to stick to the TRUE meaning of a STATELESS protocol.
Rubens