Keycloak introduced the concept of "Frontend URL" to enable different URLs for front-channel and back-channel requests towards Keycloak.
We have a use case where the same Keycloak server is exposed via two public URLs (over two separate VPNs that cannot reach each other) through separate Nginx proxies in a Kubernetes cluster:
domain1.company.com
domain2.company.com
and an internal URL:
internal.company.com
The problem is that we can set only one Frontend URL. For example, let's say we set it to domain1.company.com. Now when public clients access Keycloak via domain2.company.com using the OIDC Discovery Endpoint, they get the authorization_endpoint as https://domain1.company.com/auth/realms/{realm-name}/protocol/openid-connect/auth, which is not accessible due to the separate VPNs.
By allowing only one value of Frontend URL, Keycloak assumes that the server is accessible via only one public URL, which may not be the case as in our example.
Is there a solution available to this problem?
There is an enhancement proposed for your use case: https://issues.redhat.com/browse/KEYCLOAK-15553
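Until that enhancement lands, a workaround some deployments use is to leave the Frontend URL unset and let Keycloak build its public URLs from the forwarded headers each proxy sends, so every domain gets a discovery document pointing at itself. A minimal sketch of the Nginx side, assuming Keycloak is configured to honor proxy headers (e.g. PROXY_ADDRESS_FORWARDING=true on WildFly-based distributions, or proxy "edge" mode on Quarkus-based ones); hostnames and the upstream address are placeholders:

```nginx
# On the proxy serving domain1.company.com; repeat the same block
# on domain2's proxy so each VPN sees its own host in the metadata.
location /auth/ {
    proxy_pass          http://keycloak:8080;
    # Keycloak derives issuer/authorization_endpoint from these headers
    proxy_set_header    Host               $host;
    proxy_set_header    X-Forwarded-Host   $host;
    proxy_set_header    X-Forwarded-Proto  $scheme;
    proxy_set_header    X-Forwarded-For    $proxy_add_x_forwarded_for;
}
```

Note the caveat: the issuer then differs per domain, which matters for clients that pin or validate the issuer value.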
We would like to provide a multi-tenant application that identifies the tenant based on a subdomain. As the authentication server we use Keycloak, in which each tenant has its own realm.
Now we want to authenticate all requests to our application using an auth proxy. If the request is already authenticated (it has a cookie), it should be forwarded to the backends. If it is not yet authenticated (it does not have a cookie), it should be forwarded to Keycloak, to the correct realm based on the subdomain, and an OAuth flow should be initiated. After successful login, a cookie should be set so that all subsequent requests are authenticated. This is exactly the functionality offered by oauth2-proxy. However, we have the further requirement that different realms map the individual tenants, and this is not possible with oauth2-proxy at the moment.
Is there another solution besides oauth2-proxy that offers this functionality (possibly Nginx or a plugin for it)?
Thanks
OIDC PLUGIN
You could use lua-resty-openidc with any Lua-based Nginx system, e.g. Kong or OpenResty. This is an established plugin that does the same job as oauth2-proxy. You can have multiple instances of it configured for different paths, representing different tenants:
location /tenant1/ {
    rewrite_by_lua_block {
        local opts = ...  -- tenant 1's OIDC settings (discovery URL, client ID, secret)
        local res, err = require("resty.openidc").authenticate(opts)
    }
}
location /tenant2/ {
    rewrite_by_lua_block {
        local opts = ...  -- tenant 2's OIDC settings
        local res, err = require("resty.openidc").authenticate(opts)
    }
}
There are also various ways to inspect input criteria, such as an Origin header, and re-route accordingly, which can be useful sometimes, though there is a learning curve.
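For instance, routing on the Origin header can be done with a plain Nginx map, no Lua required (hostnames and upstream addresses below are illustrative):

```nginx
upstream tenant1_backend { server 10.0.0.1:8080; }
upstream tenant2_backend { server 10.0.0.2:8080; }

# Pick an upstream group based on the request's Origin header
map $http_origin $tenant_upstream {
    default                        tenant1_backend;
    "https://tenant2.example.com"  tenant2_backend;
}

server {
    location / {
        proxy_pass http://$tenant_upstream;
    }
}
```

Because the variable resolves to a named upstream group, Nginx does not need a runtime resolver here.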
DESIGN
I would question your design a little, though. Multiple realms effectively mean your apps need to deal with multiple authorization servers, which is a complex setup. E.g. APIs need to validate multiple types of access tokens.
If possible, prefer a solution where you use a single authorization server and simply add a tenant ID claim to access tokens, then ensure that APIs deny access to tenant 2 data for users from tenant 1.
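If you go that single-realm route, the tenant check can sit right next to the authentication call in the gateway. A sketch with lua-resty-openidc, where the `tenant_id` claim name is an assumption (it would be added via a token mapper in your authorization server):

```nginx
location /api/ {
    access_by_lua_block {
        local opts = ...  -- single-realm OIDC settings, as in the location blocks above
        local res, err = require("resty.openidc").authenticate(opts)
        if err then
            ngx.exit(ngx.HTTP_UNAUTHORIZED)
        end
        -- "tenant_id" is a hypothetical custom claim; reject
        -- cross-tenant access before the request reaches the API
        if res.id_token.tenant_id ~= "tenant1" then
            ngx.exit(ngx.HTTP_FORBIDDEN)
        end
    }
}
```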
This related answer on multiple realms for a single application also discusses some trade-offs around how data can be accessed.
I need to put a reverse proxy (web server) in front of another web application. That proxy should provide Basic LDAP authentication.
If LDAP authentication is successful:
The proxy should take the username;
Find its match in a dictionary (I would provide it separately in the required format);
Use the value from the dictionary for the Basic Authorization HTTP header;
Forward the request to the web application.
The problem I want to solve: the web application only supports RBAC with local users, so I want to map local users to AD users and achieve RBAC with AD authentication.
Is this achievable with Apache or Nginx, and how? Or should I look for another way?
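For reference, the flow described above can be sketched in Nginx using the third-party nginx-auth-ldap module plus a bit of Lua for the credential swap. The LDAP URL, user dictionary, and backend name are all illustrative; note the swap runs in the access phase, after the LDAP check has already consumed the original Authorization header:

```nginx
ldap_server corp {
    url "ldap://ad.example.com/DC=example,DC=com?sAMAccountName?sub?(objectClass=person)";
    require valid_user;
}

server {
    location / {
        auth_ldap "Restricted";
        auth_ldap_servers corp;
        access_by_lua_block {
            -- hypothetical AD-user -> local-user:password dictionary
            local map = { jdoe = "localuser1:secret1" }
            local creds = map[ngx.var.remote_user]
            if not creds then
                ngx.exit(ngx.HTTP_FORBIDDEN)
            end
            -- replace the LDAP credentials with the mapped local ones
            ngx.req.set_header("Authorization",
                "Basic " .. ngx.encode_base64(creds))
        }
        proxy_pass http://backend-app;
    }
}
```

Apache can do something similar with mod_authnz_ldap plus mod_headers, though the dictionary lookup is less direct there.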
We are currently analyzing API gateways for our microservices, and Kong is one of the possible candidates. We discovered that Kong supports several plugins for authentication, but they are all based on users stored in the Kong database itself. We need to delegate this responsibility to our custom auth HTTP service and don't want to add these users to the API gateway database.
It's possible to do this with some code of your own, instead of using the OpenID Connect plugin; in effect you need to implement an Authorization Server which talks to Kong via the Admin port (8001) and authorizes the use of an API with externally supplied user IDs.
In short, it goes as follows (here for the Authorization Code grant):
Instead of asking Kong directly for tokens, hit the Authorization Server with a request to get a token for a specific API (either hard-coded or parameterized, depending on what you need), and include the client ID of the application which needs access in the call (in effect, you implement the /authorize endpoint)
The Authorization Server now needs to authenticate with whatever IdP you need, so that you have the authenticated user inside your Authorization Server
Now get the provision key for your API via the Kong Admin API, and hit the /oauth2/authorize endpoint of your Kong Gateway (port 8443), including the provision key; note that you may also need to look up the client secret for the application's client ID via the Admin API to make this work
Include the client ID, client secret, authenticated user ID (from your custom IdP) and optionally scope in the POST to /oauth2/authorize; these values will be added to backend calls to your API using the access token the application can now claim with the authorization code
Kong will give you back an authorization code, which you pass on to the application via a 302 redirect (you will need to read the OAuth2 spec for this)
The application uses its client and secret, with the authorization code, to get the access token (and refresh token) from Kong's port 8443, URL /oauth2/token.
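The middle steps can be sketched from inside the custom Authorization Server, here as OpenResty Lua using lua-resty-http. All names, URLs, and credential values are placeholders; check the Kong OAuth2 plugin documentation for the exact field names in your version:

```lua
-- Sketch of the Authorization Server's /authorize handler, run after
-- the user has already been authenticated against the external IdP.
local http  = require("resty.http")
local cjson = require("cjson")

-- Placeholders: client_id/client_secret belong to the requesting
-- application, provision_key comes from the OAuth2 plugin config
-- (all readable via the Kong Admin API on port 8001).
local client_id     = "the-apps-client-id"
local provision_key = "the-plugins-provision-key"
local user_id       = "user-authenticated-by-your-idp"

local httpc = http.new()
local res, err = httpc:request_uri("https://kong:8443/myapi/oauth2/authorize", {
    method  = "POST",
    body    = ngx.encode_args({
        client_id            = client_id,
        response_type        = "code",
        scope                = "profile",
        provision_key        = provision_key,
        authenticated_userid = user_id,
    }),
    headers    = { ["Content-Type"] = "application/x-www-form-urlencoded" },
    ssl_verify = false,  -- sketch only; verify certificates in production
})

-- Kong responds with JSON containing a redirect_uri that carries the
-- authorization code; send the browser there (the 302 redirect step).
local body = cjson.decode(res.body)
return ngx.redirect(body.redirect_uri, ngx.HTTP_MOVED_TEMPORARILY)
```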
It sounds more involved than it is in the end. I did this for wicked.haufe.io, which is based on Kong and node.js, and adds an open source developer portal to Kong. There's a lot of code in the following projects which shows what can be done to integrate with any IdP:
https://github.com/apim-haufe-io/wicked.portal-kong-adapter
https://github.com/Haufe-Lexware/wicked.auth-passport
https://github.com/Haufe-Lexware/wicked.auth-saml
We're currently investigating to see whether we can also add a default authorization server to wicked, but right now you'd have to roll/fork your own.
Maybe this helps, Martin
Check out Kong's OpenID Connect plugin (getkong.org/plugins/openid-connect-rp); it connects to external identity and auth systems.
I have a setup using Apache and mod_auth_kerb to authenticate users and proxy them to the destination web server, passing the username in an HTTP header (X-Remote-User).
How do I set up a proper logout mechanism from the destination web server? A URL to call, or similar?
Our setup works like this:
We have one URL, which is protected by Kerberos:
/kerberos_login
Once a client accesses it, Kerberos authentication is performed. If successful, the client is redirected to /, which is not protected by Kerberos.
To log out, clients have to access the logout URL (that one is also not protected by Kerberos):
/logout
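In Apache terms, that setup might look roughly like this. The realm, keytab path, and redirect target are placeholders, and the expr= header syntax needs Apache 2.4.10+; treat it as a sketch rather than a drop-in config:

```apache
<Location /kerberos_login>
    AuthType            Kerberos
    AuthName            "Kerberos Login"
    KrbAuthRealms       EXAMPLE.COM
    Krb5KeyTab          /etc/apache2/http.keytab
    KrbMethodNegotiate  On
    KrbMethodK5Passwd   Off
    Require             valid-user
    # pass the authenticated principal to the backend
    RequestHeader set X-Remote-User "expr=%{REMOTE_USER}"
    # after successful auth, send the client to the unprotected root
    RewriteEngine On
    RewriteRule ^ / [R,L]
</Location>

<Location /logout>
    # deliberately NOT Kerberos-protected; the backend application
    # clears its own session when this URL is hit
    RequestHeader unset X-Remote-User
</Location>
```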
I have a server where my API is hosted -> http://000.000.0.000:8080/todos
I used Apigee for managing my API's security -> http://inscripts-test.apigee.net/v1/api/todos?apikey=myapikeyhere
Those are the two URLs: one from my server, and one that Apigee generated for me using an API key.
Ideally, all API requests made to the http://000.000.0.000:8080 address should be rejected and calls should be allowed only to the http://inscripts-test.apigee.net address.
I am new to the world of APIs; please help me understand how these security things should work.
First things first:
You probably don't want to expose your API key by posting it to Stack Overflow. If you can, you'll want to go into the Developer Apps page and regenerate your key.
Locking down your backend requires changes at the backend, and may also require changes at the Apigee layer. To truly lock it down, you'll want to allow access only via HTTPS; otherwise, your traffic and any security measures can be compromised between Apigee and your backend.
Given your change to use HTTPS, you have some options:
You can require authentication (username & password) and modify your backend to only allow authenticated users. Then assign the gateway a username and password and include them in communications.
Probably your most secure option, and possibly the easiest if you only allow HTTPS: you can use two-way SSL (mutual authentication) between Apigee and your backend. Your backend validates that only the Apigee certificate is allowed to connect. See this doc on setting up Apigee to target SSL.
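If your backend sits behind Nginx, the enforcement side of that second option could be sketched like this (certificate paths and addresses are placeholders):

```nginx
server {
    listen 8443 ssl;
    ssl_certificate         /etc/nginx/certs/backend.crt;
    ssl_certificate_key     /etc/nginx/certs/backend.key;

    # Only clients presenting a certificate signed by this CA
    # (i.e. Apigee's client certificate) may connect at all.
    ssl_client_certificate  /etc/nginx/certs/apigee-client-ca.crt;
    ssl_verify_client       on;

    location / {
        proxy_pass http://127.0.0.1:8080;  # the todos API from the question
    }
}
```

Combined with firewalling the plain port 8080 so it is not reachable from outside, direct calls bypassing Apigee are then rejected at the TLS handshake.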