I have two different client apps written in JavaScript connecting to two different web APIs. I am trying to implement IdentityServer 3.
Is it possible to have IdentityServer behind my web API's OWIN
authentication endpoint? In other words, is it possible to
route the /token endpoint from OWIN in the web API to call the /authenticate
endpoint in IdentityServer?
Is it possible to audit-log to a database in IdentityServer, including
failed requests along with the user's IP and browser agent? Also, is it
possible to log the user's IP even when the call comes through my web API,
since the web API is being called by a user from a browser?
In my case, should I keep two different user bases for the two different
projects, or move all my users to IdentityServer? If I move all the
user info to IdentityServer, how am I going to handle all the joins
with other tables in the different applications, or should I keep a copy
of each user with minimal info such as id, email and name?
It makes little sense to first call a web api and deal with authentication during that call.
Your client apps should first redirect the browser to IdentityServer, where the user would log in and be redirected back to your client app along with either an access token (implicit flow) or an authorization code (authorization code flow), depending on whether the client app has a back end. Then your client app would make requests to the web API, passing the access token in the Authorization header.
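As a rough illustration of that flow, here is a minimal browser-side sketch using the oidc-client JavaScript library; the authority URL, client id, scopes and API URL below are placeholders that would have to match your own IdentityServer registration:

```typescript
import { UserManager } from "oidc-client";

const userManager = new UserManager({
  authority: "https://login.example.com",          // hypothetical IdentityServer URL
  client_id: "spa-client",                          // hypothetical client id registered in IdentityServer
  redirect_uri: "https://app.example.com/callback",
  response_type: "token id_token",                  // implicit flow, typical for IdentityServer 3 JS clients
  scope: "openid profile my-api",                   // 'my-api' is an assumed API scope
});

// Kick off the login: redirects the browser to IdentityServer.
function login() {
  return userManager.signinRedirect();
}

// On the redirect_uri page, complete the sign-in and cache the user/token.
async function completeLogin() {
  return userManager.signinRedirectCallback();
}

// Call the web API, passing the access token in the Authorization header.
async function callApi() {
  const user = await userManager.getUser();
  const response = await fetch("https://api.example.com/values", {
    headers: { Authorization: `Bearer ${user?.access_token}` },
  });
  return response.json();
}
```

The same pattern works for both of your client apps; each one just gets its own client registration in IdentityServer.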
As for the different user bases, one approach might be to implement a specific IUserService for each user base and either send a hint about which one to use in acr_values or tie it to the specific clients registered in IdentityServer. Again, it depends on the requirements.
Is it possible to have IdentityServer behind my web API's OWIN authentication endpoint? In other words, is it possible to route the /token endpoint from OWIN in the web API to call the /authenticate endpoint in IdentityServer?
Yes and no: you cannot reroute those requests, but you can host IdentityServer in the same application as a web API. In Startup.cs, map a path to IdentityServer.
It's not a good idea to do this, though. First of all, which of the two APIs will host IdentityServer? If that API goes down, it takes IdentityServer with it, and then the other API no longer works either.
Instead: host IdentityServer separately, register both APIs and both JavaScript apps in IdentityServer, log in to IdentityServer from the JavaScript apps (= SSO) and use bearer tokens for the APIs.
Is it possible to audit-log to a database in IdentityServer, including failed requests along with the user's IP and browser agent? Also, is it possible to log the user's IP even when the call comes through my web API, since the web API is being called by a user from a browser?
Yes, this should be possible; check the logging implementation for IdentityServer. At the very least you should be able to plug in a provider that writes to a database.
In my case, should I keep two different user bases for the two different projects, or move all my users to IdentityServer? If I move all the user info to IdentityServer, how am I going to handle all the joins with other tables in the different applications, or should I keep a copy of each user with minimal info such as id, email and name?
IdentityServer does not need to hold all the user info; an email address is enough. You can use it as the link to the user data in your API databases if you treat it as the unique identifier.
Related
I am building a system with an ASP.NET Core web app (incidentally, in Blazor), which I'll call "Site", and some domain web services (which might someday be used by other sites), one of which I'll call "CustomerService".
Following various guides and articles on how to set up authentication with OpenID Connect and Azure Active Directory for this system, I see the following possible approaches to authentication and authorization, especially with regard to AJAX requests:
Site-only auth, passthrough: Service trusts the site; site authenticates user.
Service-only auth, passthrough: Service authenticates user; site passes through all AJAX requests.
Service-only auth, CORS: Service provides the site client data via CORS, with authentication; site doesn't handle AJAX requests at all.
Service and site auth, passthrough: Service and site both authenticate user; site passes through some AJAX requests.
These all seem to have significant practical problems. Is there a fifth approach, or a variation I should be considering?
Here's my elaboration of these approaches.
(1) Service trusts the site; site authenticates user:
(1a) Set up Site.Server to use Open ID Connect for users to authenticate, implement all necessary authorization on Site.Server, pass through web API calls to CustomerService, and set up CustomerService to trust requests that come from Site.Server. This looks like a bad idea because then any user can spoof Site.Server and have full access to operations that should be secured on CustomerService. Also, CustomerService would not be able to enforce authorization; we'd be trusting Site.Server to get it right, which seems suboptimal.
(1b) Same as (1a), but Site.Server would know a secret API key that would be passed to CustomerService, either in headers or the API call's querystring or body. This doesn't seem so great because the API key would never change and then could be discovered and spoofed by any user. Still, this could work, as the API key could stay secret, and we could use our secret server for both sides to retrieve it. But still CustomerService would not be able to enforce authorization; we'd be trusting Site.Server to get it right, which seems suboptimal.
(1c) Same as (1b), but we come up with a mechanism for rotating the API key occasionally. This doesn't seem much better, because between rotations the API key could still be discovered and spoofed by any user. Still, this could work, as the API key could stay secret, and we could use our secret server for both sides to retrieve it. But still CustomerService would not be able to enforce authorization; we'd be trusting Site.Server to get it right, which seems suboptimal.
(2) Service authenticates user; site passes through all AJAX requests: Avoid any authentication on Site.Server and instead enforce authorization/authentication on CustomerService only through Open ID Connect+Azure AD. Site.Server would have to pass through requests including headers to CustomerService. This has the benefit of putting the security in the right place, but it seems unworkable, as the user has no way to authenticate on CustomerService since the user isn't using CustomerService directly; their AJAX requests still go to Site.Server.
(3) Service provides site client data via CORS, with authentication; site doesn't handle AJAX requests at all: Avoid any authentication on Site.Server and instead use CORS to allow the user's browser to connect directly to CustomerService, requiring authentication only through OpenID Connect+Azure AD. This has the benefit of putting the security in the right place, but how can a user authenticate on an AJAX request without having done so in a human-browsable way first? My AJAX request can't redirect to microsoftonline and prompt the user, can it? Plus CORS seems like a bad idea in general--we want to move away from cross-site anything; to the user, it should appear that Site.Server is serving up both AJAX calls and HTML page requests, right?
(4) Service and site both authenticate user; site passes through some AJAX requests. Put authentication on both Site.Server and CustomerService, with the same app ID, making them appear as one and the same site as far as Azure AD knows. Site.Server could do its own authentication and restrict certain service calls from getting to CustomerService, or it could pass through requests, including headers, to CustomerService, which could then deny or grant access as well. This is what we decided to do, but I question it now: if I add a second service, it too has to share the same app ID to keep this approach working.
None of these approaches seem to hit the mark. Am I missing another approach that I should be considering? Or is there a variation I am missing?
Here are my thoughts on what an option 5 is:
WEB UI
Code runs in the browser and interacts with the Authorization Server to authenticate the user. A library such as OIDC Client does the security work for you.
This provides the best usability and the simplest code. The UI uses access tokens to call cross-domain APIs. Renewing tokens is tricky though, and browser security requires some due diligence.
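For the token renewal part, a hedged sketch of the relevant oidc-client settings might look like this; the URLs, client id and scope are assumptions, and the exact renewal strategy depends on what your authorization server allows:

```typescript
import { UserManager, WebStorageStateStore } from "oidc-client";

// Hypothetical settings; authority, client_id and URIs must match your registration.
const userManager = new UserManager({
  authority: "https://login.example.com",
  client_id: "spa-client",
  redirect_uri: "https://app.example.com/callback",
  silent_redirect_uri: "https://app.example.com/silent-renew.html",
  response_type: "code",
  scope: "openid profile entry-point-api",
  automaticSilentRenew: true,                       // renew via a hidden iframe before expiry
  userStore: new WebStorageStateStore({ store: window.sessionStorage }),
});

// Surface renewal failures so the UI can fall back to an interactive login.
userManager.events.addSilentRenewError((err) => {
  console.warn("silent renew failed", err);
});
```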
WEB BACK END
The web back end is static content only, deployed around the world close to end users, perhaps via Azure CDN. It must execute zero code, which gives the best performance.
WEB UI SAME AS MOBILE UI
Your Web UI in effect operates in an identical manner to a mobile UI and is quite a bit simpler, with less need for cookies + double hops.
ENTRY POINT API
The browser UI interacts with an entry point API tailored to UI consumers. This API validates tokens by downloading Azure AD token signing keys. It also has first say in authorizing requests.
The entry point API orchestrates calls to core APIs and Azure APIs such as Graph. The web UI uses a single token scoped to the entry point API, and you can strictly control the UI's privileges. Meanwhile the API can use Azure AD's 'on behalf of' feature to get tokens for downstream APIs, so that the UI does not need to deal with this.
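As a sketch of the validation step, assuming a Node/Express entry point API and the jose library, the Azure AD signing keys can be downloaded from the tenant's JWKS endpoint and checked on every request; the tenant id and audience below are placeholders:

```typescript
import express from "express";
import { createRemoteJWKSet, jwtVerify } from "jose";

// Hypothetical tenant id; substitute your own Azure AD values.
const tenantId = "your-tenant-id";
const jwks = createRemoteJWKSet(
  new URL(`https://login.microsoftonline.com/${tenantId}/discovery/v2.0/keys`)
);

const app = express();

// Validate the incoming bearer token before any business logic runs.
app.use(async (req, res, next) => {
  try {
    const token = (req.headers.authorization ?? "").replace("Bearer ", "");
    const { payload } = await jwtVerify(token, jwks, {
      issuer: `https://login.microsoftonline.com/${tenantId}/v2.0`,
      audience: "api://entry-point-api",   // assumed app ID URI of the entry point API
    });
    (req as any).user = payload;           // claims available to later authorization checks
    next();
  } catch {
    res.status(401).send("invalid or missing token");
  }
});

app.get("/customers", (req, res) => res.json({ caller: (req as any).user?.sub }));
app.listen(3000);
```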
DOMAIN APIs
These typically run in a locked down private cloud and are not called directly by the outside world. This allows you closer control over which types of caller can invoke which high privilege operations.
BLOG POSTS OF MINE
My blog's index page has further info on these patterns and the goals behind them. Maybe have a browse of the SPA Goals and API Platform Architecture posts.
There are some working code samples on this page. In my case the hosting uses AWS instead of Azure, though concepts are the same.
We have an application with a frontend UI (which is a web application) that communicates with a resource server. Our frontend will be using some APIs from the resource server to get data.
I am planning to add the frontend to Okta and provide access to Okta-registered users.
In the resource server, we have some APIs that we want to expose to our customers to integrate into their own systems programmatically. To use our APIs, we have to provide client credentials (client ID/secret) to them. Using the client ID/secret, they will get an access_token and use that in subsequent requests. We can display this client ID/secret via the frontend UI once the user logs in to it via Okta.
How should I authenticate requests to the resource server from the frontend? And how do I authenticate requests to the resource server made by a customer using the client ID/secret? Should I use one token or two different tokens for this purpose?
Does Okta provide a per-user client ID/secret that a user (customer) can use to get an access_token and send to the resource server, with the resource server validating the token against Okta?
I just did something very similar. You can read about my experience here: How to pass/verify Open ID token between .net core web app and web api?
I don't know what application framework you are using (.net, node, etc), but if you're using .NET the steps are to (a) install the middleware in your web app, (b) install the middleware in your api app, and (c) make sure calls from your web app to the api app pass the id_token.
If, in addition to that, you need to secure it for external users, it should work the same way. The only difference is that they will manually call the /authorize endpoint to get their token, but the middleware should take care of the token verification for you in both cases.
Note that I did experience one odd thing: I needed to pass the id_token and not the access_token. It is also worth mentioning that the claims were interpreted differently in the app and the API (the names of the claims for, say, the user id differed between them, though the data was the same).
You will have to use 2 different access tokens. There are 2 different flows going on here:
Web UI to API
Business partner system to API
Technically this means:
Authorization Code Flow (PKCE)
Client Credentials Flow
And in terms of tokens it means:
In the first case there is an end user represented in access tokens (the 'sub' claim)
In the second case there is only a client id claim in the access token (a sketch of this request follows the list)
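For the second flow, a minimal sketch of the Client Credentials request might look like the following; the Okta domain, authorization server path and scope are assumptions and must match your own setup:

```typescript
// Hypothetical token endpoint of Okta's default custom authorization server.
const tokenEndpoint = "https://dev-123456.okta.com/oauth2/default/v1/token";

// Runs in the business partner's back end (Node 18+, which has a global fetch).
async function getPartnerAccessToken(clientId: string, clientSecret: string) {
  const response = await fetch(tokenEndpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
      Authorization:
        "Basic " + Buffer.from(`${clientId}:${clientSecret}`).toString("base64"),
    },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      scope: "resource.read",          // assumed custom scope configured on the server
    }),
  });

  if (!response.ok) throw new Error(`token request failed: ${response.status}`);
  const { access_token } = await response.json();
  return access_token;                 // no 'sub' claim here, only the client identity
}
```

The Web UI's token, by contrast, would come from the Authorization Code (PKCE) flow and carry the end user's subject claim, which is why the two callers end up with different tokens.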
I can advise on token validation techniques if needed - let me know.
To me though this feels like an architectural question - in particular around applying authorization after identifying the caller and versioning / upgrades.
Based on my experience, I tend to prefer the following architecture these days, based on 2 levels of APIs, e.g. with these ones exposed to the internet:
User Experience API serves the UI
Partner API deals with B2B
Both entry point APIs call the same core services, which are internal. It might be worth discussing with your stakeholders.
We are currently evaluating API gateways for our microservices, and Kong is one of the possible candidates. We discovered that Kong supports several plugins for authentication, but they are all based on users stored in Kong's own database. We need to delegate this responsibility to our custom auth HTTP service and don't want to add these users to the API gateway database.
It's possible to do this with some code of your own, instead of using the OpenID Connect plugin; in effect you need to implement an Authorization Server which talks to Kong via the Admin port (8001) and authorizes the use of an API with externally supplied user ids.
In short, it goes as follows (here for the Authorization Code grant); a rough sketch of the calls appears after the steps:
Instead of asking Kong directly for tokens, hit the Authorization Server with a request to get a token for a specific API (either hard coded or parameterized, depending on what you need), and include the client ID of the application which needs access in the call (you implement the /authorize end point in fact)
The Authorization Server now needs to authenticate with whatever IdP you need, so that you have the authenticated user inside your Authorization Server
Now get the provision key for your API via the Kong Admin API, and hit the /oauth2/authorize endpoint of your Kong Gateway (port 8443), including the provision key; note that you may need to look up the client secret for the application's client id, also via the Admin API, to make this work
Include the client id, client secret, authenticated user id (from your custom IdP) and optionally scope in the POST to /oauth2/authorize; these values will be added to backend calls to your API using the access token the application can now claim using the authorization code
Kong will give you an Authorization Code back, which you pass back to the application via a 302 redirect (you will need to read the OAuth2 spec for this)
The application uses its client and secret, with the authorization code, to get the access token (and refresh token) from Kong's port 8443, URL /oauth2/token.
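A rough sketch of those calls, assuming Kong's OAuth2 plugin is enabled on a service reachable at a hypothetical https://kong-proxy:8443/myapi, could look like this; the scope, provision key handling and URLs are placeholders, not a definitive implementation:

```typescript
// Runs inside your own Authorization Server, after it has authenticated the user
// against your custom IdP. provision_key comes from the plugin configuration;
// client_id/secret are the credentials registered for the consuming application.

async function authorize(authenticatedUserId: string, clientId: string, provisionKey: string) {
  // Ask Kong to issue an authorization code on behalf of the user we authenticated.
  const response = await fetch("https://kong-proxy:8443/myapi/oauth2/authorize", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      response_type: "code",
      client_id: clientId,
      scope: "read",                           // assumed scope configured on the plugin
      provision_key: provisionKey,
      authenticated_userid: authenticatedUserId,
    }),
  });
  const { redirect_uri } = await response.json();
  return redirect_uri;                         // send back to the application as a 302
}

// The application then swaps the code for tokens at Kong's token endpoint.
async function redeemCode(code: string, clientId: string, clientSecret: string) {
  const response = await fetch("https://kong-proxy:8443/myapi/oauth2/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code,
      client_id: clientId,
      client_secret: clientSecret,
    }),
  });
  return response.json();                      // { access_token, refresh_token, ... }
}
```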
It sounds more involved than it is in the end. I did this for wicked.haufe.io, which is based on Kong and node.js and adds an open source developer portal to Kong. There's a lot of code in the following projects which shows what can be done to integrate with any IdP:
https://github.com/apim-haufe-io/wicked.portal-kong-adapter
https://github.com/Haufe-Lexware/wicked.auth-passport
https://github.com/Haufe-Lexware/wicked.auth-saml
We're currently investigating to see whether we can also add a default authorization server to wicked, but right now you'd have to roll/fork your own.
Maybe this helps, Martin
Check out Kong's OpenID Connect plugin getkong.org/plugins/openid-connect-rp - it connects to external identity and auth systems.
I'm integrating several web sites/services into my application. I use iframes (or a webview for Vue Electron) for UI integration, and I also use APIs to implement cross-communication between those services.
At the moment I have to go through OAuth2 authentication twice for each service: once as part of the natural authentication in the iframe and again when I ask the user to give me access to that service (for API purposes).
Is there any way to streamline this process?
The state-of-the-art response would be to modify your application completely:
You should have one SPA application, not iframes
This application would authenticate to get an OAuth2 token
This application would then call the backend (accessing multiple backends directly, or an API management layer that calls the backends).
The thing is, with this you can have 2 strategies:
grant all permissions (scopes) at the first authentication
grant the smallest scope possible at the first authentication, then when needed "re-authenticate" (in fact, consent to the new scope) to get a new access token
When an API wants to call another API, you also have 3 strategies (a rough sketch of the third one follows the list):
you simply pass on the same access token your API received to the service your API calls (no human interaction needed)
your API generates a token from a service account (using the ROPC scheme) or via the client credentials scheme (the access token will be valid but usually not bound to a real user); no human interaction is needed, and your API becomes the client of the 2nd API
your identity provider has an endpoint to transform access tokens: your API can present the client's access token, and the authorization server will re-issue it with the client_id of your API. You send this token to the 2nd API (the token will show the subject of your UI application, but the client id will be the 1st API's client id); no human interaction is needed
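For the third strategy, if your identity provider supports RFC 8693 token exchange, the request could look roughly like the sketch below; the endpoint, client id/secret and audience are placeholders, and the exact parameters differ per vendor (Azure AD, for instance, offers its 'on behalf of' grant instead):

```typescript
// Illustrative token-exchange request made by the first API before calling the second.
async function exchangeToken(incomingAccessToken: string) {
  const response = await fetch("https://idp.example.com/oauth2/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "urn:ietf:params:oauth:grant-type:token-exchange",
      subject_token: incomingAccessToken,
      subject_token_type: "urn:ietf:params:oauth:token-type:access_token",
      client_id: "api-one",                              // hypothetical client id of the calling API
      client_secret: process.env.API_ONE_SECRET ?? "",
      audience: "api-two",                               // hypothetical identifier of the downstream API
    }),
  });
  const { access_token } = await response.json();
  return access_token;   // same subject as the incoming token, issued for the downstream API
}
```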
Now if you use iframes with multiple sub-applications on the same domain (the domain needs to be exactly the same!), it is possible to share the same access token, for instance via local storage (the security is not top notch).
You will probably need to authenticate with a bigger scope list at some point, but that is your only option. You will simulate a single-page application, but the issue is that you will potentially have a different client_id depending on which application you authenticate to first.
Edit: multiple authorization servers
From your comment, you have multiple authorization servers. One strategy could be to ask the user to authenticate; your application can then get an access_token and a refresh_token.
Depending on your authorization server, the refresh_token can be used many times / over a long period of time, so if you store it somewhere, the next time the user visits your application it can silently get an access_token from this refresh token. Your application then has access to the remote API without further interaction from your user.
Of course, this means you have to store this token as safely as you can.
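A minimal sketch of that silent renewal, assuming a standard OAuth2 token endpoint and a public client (the endpoint and client id are placeholders):

```typescript
// Exchange a stored refresh_token for a fresh access_token without user interaction.
async function refreshAccessToken(refreshToken: string) {
  const response = await fetch("https://idp.example.com/oauth2/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "refresh_token",
      refresh_token: refreshToken,
      client_id: "my-app",                 // hypothetical client id
    }),
  });
  if (!response.ok) throw new Error("refresh failed, fall back to interactive login");

  const { access_token, refresh_token } = await response.json();
  // Some servers rotate the refresh token on every use; persist the new one if present.
  return { accessToken: access_token, refreshToken: refresh_token ?? refreshToken };
}
```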
By using OpenID Connect you can combine authentication and authorization in one step and get both an id_token to log your user on to your app and an access_token to access APIs, in a single authentication response.
I have an API with the following method:
https://api.example.com/services/dosomething
I am providing this service to three different mobile apps, each one with hundreds of users. When a user logs in to the mobile app, a call to my API needs to be made.
I know that giving each of the three mobile apps a different API key and doing HTTP Basic Authentication with it is not secure, since the API key would be stored unsafely on the device and anyone could take it and misuse it.
The OAuth2 approach doesn't work, since I only have information about my three customers, not their hundreds of users.
What is the best approach to secure the calls to my API on mobile?
In your case, your approach with OAuth2 is good: mobile apps (clients) receive delegation from resource owners (your users) to call protected resources on a resource server (your API).
You only have information about your clients because OAuth2 is not about authentication of your users but about authorization for your clients.
The clients are identified with a client ID. In your case, if you want to know which client calls your resource server, each client should have a dedicated client ID. You may also identify a client using other information such as the IP address or a custom header in the requests it sends.
If you want to know who your users are, you should implement the OpenID Connect extension. This extension works on top of an authorization server based on OAuth2.
The authentication of the user is performed by the authorization server. An ID Token is issued with information about the user. The client (or mobile app) does not have to obtain or store the user's credentials.
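For illustration, the user information simply comes from the ID token's claims; here is a hedged sketch using the jose library (in a real client you must verify the token's signature, issuer, audience and nonce rather than just decoding it):

```typescript
import { decodeJwt } from "jose";

// Illustrative only: read the identity claims carried by an OpenID Connect id_token.
function readUserFromIdToken(idToken: string) {
  const claims = decodeJwt(idToken);
  return {
    subject: claims.sub,                          // stable user identifier
    name: claims.name as string | undefined,      // present if the 'profile' scope was granted
    email: claims.email as string | undefined,    // present if the 'email' scope was granted
  };
}
```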
There is an excellent video where both protocols are explained (especially from 4:44 to 11:00).