I am running Keycloak on an OpenShift project, and I have 4 pods running:
keycloak (v8.0.1 configured to listen on 8443 with TLS),
keycloakdb (PostgreSQL DB),
proxy (Apache 2.4 reverse proxy), and
portal (our app that we developed to handle connecting to other applications).
The keycloak pod also contains two jar files that we “borrowed” that implement PKI authentication as part of the logon.
The routes configured in OpenShift are
apache: tcp/443 to tcp/8443 on the apache pod
keycloak: tcp/443 to tcp/8443 on the keycloak pod.
Current state:
A connection to https://proxy.domain.com is redirected to https://keycloak.domain.com for authentication.
https://keycloak.domain.com requests my certificate for two-way (mutual) TLS authentication,
then the browser is redirected to https://keycloak.domain.com/auth/auth?response_type=code&scope=openid&client_id=portal&state=&redirect_uri=https://proxy.domain.com/redirect_uri&nonce=
The browser displays a page that gives the details of my certificate and my user account name, with a button to continue.
Clicking the continue button POSTs to https://keycloak.domain.com.
The browser is then redirected to https://proxy.domain.com:8443
Since there is no route to https://proxy.domain.com:8443, the connection times out.
The question is: how do I get Keycloak to redirect the browser to https://proxy.domain.com on tcp/443?
To redirect to a particular URL after authentication, you can use the redirect URI setting in the client settings.
The problem is the redirect_uri in the authentication request. It points to proxy.domain.com instead of to the portal.
The redirect_uri is set by the OAuth 2.0 client code in the portal. Probably, the portal software thinks its own URL starts with proxy.domain.com.
So investigate and fix the OAuth 2.0 client code in the portal (probably just a configuration issue).
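For illustration, here is a minimal Java sketch (not the portal's actual code) of two ways an OAuth 2.0 client typically derives its redirect_uri; the realm name, the PORTAL_BASE_URL variable, and the class and method names are assumptions. Rebuilding the value from the incoming request can pick up the internal port (8443) behind the route, while taking it from an externally visible base URL keeps the browser on https://proxy.domain.com over tcp/443.

    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;
    import javax.servlet.http.HttpServletRequest;

    public class AuthorizationUrlBuilder {

        // Problematic: rebuilds the redirect_uri from the request as the pod sees it,
        // which can yield https://proxy.domain.com:8443/redirect_uri behind the route.
        static String redirectUriFromRequest(HttpServletRequest request) {
            return request.getScheme() + "://" + request.getServerName()
                    + ":" + request.getServerPort() + "/redirect_uri";
        }

        // Safer: take the externally visible base URL from configuration
        // (PORTAL_BASE_URL is a hypothetical variable name) so the browser
        // is sent back to https://proxy.domain.com on the default port 443.
        static String redirectUriFromConfig() {
            String baseUrl = System.getenv().getOrDefault(
                    "PORTAL_BASE_URL", "https://proxy.domain.com");
            return baseUrl + "/redirect_uri";
        }

        // Assembles the Keycloak authorization request; the realm name is assumed.
        static String authorizationUrl(String redirectUri) {
            return "https://keycloak.domain.com/auth/realms/myrealm/protocol/openid-connect/auth"
                    + "?response_type=code&scope=openid&client_id=portal"
                    + "&redirect_uri=" + URLEncoder.encode(redirectUri, StandardCharsets.UTF_8);
        }
    }

If the portal uses the first, request-derived approach, switching it to a configured external base URL (or making the proxy forward the original scheme, host, and port) should make Keycloak send the browser back on tcp/443.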
Related
I'm using oauth2-proxy with Keycloak to authenticate to applications.
oauth2-proxy sits in front; when a request comes in on port 4180 it redirects to Keycloak, and once you authenticate it redirects to the upstream address (where the application lives).
This works well as long as the application is on the same server as oauth2-proxy.
When the application is on a different server, the whole process completes with no errors (the configuration is identical except for the upstream, which now points to another server:port), but instead of redirecting to the upstream app on the other server, it redirects to the same server:4180 and shows me an Nginx welcome page.
Could this be a configuration issue, or is it mandatory that the application is on the same server as oauth2-proxy?
I am trying to secure an API using Kong as the API gateway, Keycloak as the IAM service, and NGINX as a reverse proxy, all running in containers. Kong and Keycloak are connected to each other via the OIDC plugin. The desired scenario is the following:
When a client makes a request, NGINX redirects it to Kong, and if the client is not logged in, it is redirected to Keycloak’s login page and back to Kong after a successful login. Most of the flow is working, except that when the client is redirected to Keycloak, Keycloak’s port 8080 is not hidden; the same goes for Kong’s port 8000 once the user logs in successfully and is redirected back to Kong. We tried a couple of solutions, but they did not work out. What is the correct way to set these up so their ports are hidden?
Thanks in advance.
We have a couple of back-end web applications to which we want to provide access via the public internet. To that end, we are setting up a reverse proxy (IIS 7.5) from our DMZ. At the same time, we want these web applications to be claims-enabled through ADFS 2.0.
WEB1.MYCORP.COM/WFE1 is one of the back-end web applications, on our internal network
WEB1.MYCORP.COM/WFE2 is the other back-end web application, on our internal network
ADFS.MYCORP.COM is the ADFS 2.0 server, on our internal network
FSPROXY.MYCORP.COM is the ADFS 2.0 proxy server, on our DMZ
RPROXY1.MYCORP.COM is the reverse proxy for WFE1, on our DMZ
RPROXY2.MYCORP.COM is the reverse proxy for WFE2, on our DMZ
In keeping with the proper configuration of ADFS, our internal DNS resolves ADFS.MYCORP.COM to the actual internal server, while external DNS points ADFS.MYCORP.COM to the ADFS proxy (FSPROXY).
So, here's the scenario:
End user browses to RPROXY1.MYCORP.COM
Reverse proxy forwards request to WEB1.MYCORP.COM/WFE1
WFE1 redirects browser to ADFS.MYCORP.COM (actually FSPROXY)
ADFS Proxy prompts for credentials and authenticates against ADFS server
Upon successful authentication, browser redirected back to web app
I have a couple of questions. Do I need to configure something in the reverse proxy or the application to allow this? Also, the ADFS endpoint is the reverse proxy URL; is that an issue?
Do I need to set up something for the reverse proxy as well? Should I (can I) set up a claims-enabled reverse proxy in IIS? How do I set up the reverse proxy rules to pass the ADFS request back unaltered? Currently, when I try to access the back-end application, it fails with a 401 authentication error. If I remove the proxy and hit the app server directly, it works fine.
Further,
This fails:
The path is client --> rp --> app --> adfs --> rp --> app --> rp --> client machine
This works:
The path is client --> rp --> app --> adfs --> app --> rp --> client machine
Any suggestions would be greatly appreciated!
I'm not familiar with how you enabled the reverse proxy in IIS (ARR?). Something like this: http://blogs.iis.net/carlosag/setting-up-a-reverse-proxy-using-iis-url-rewrite-and-arr
One choice for you is to use ADFS 2012 R2 (if possible), because its proxy, the Web Application Proxy, handles both ADFS authentication and app publishing for your claims-enabled application. There are two ways you can publish your app to the internet. One is pass-through, which is roughly what you are trying to do. But it also supports pre-authentication for a claims-aware app. This way, you can have a separate policy that decides whether traffic can get past your edge network before a packet reaches your internal application.
After doing lots of digging and Fiddler traces, I found the issue. In the test IdP setup, the token was different than in the staging environment. The Fiddler traces showed that the token was making it back to the app server, but the cookie also appeared to drop off for no reason. The root cause was that the old dev IdP value disagreed with the staging value... naturally. Once I cleared the old token from the database, everything worked.
I have set up IBM MobileFirst 7.0 with IBM HTTP Server. The HTTP server only listens for SSL traffic on 443 (no requests over HTTP on port 80 are processed). The plugin-cfg.xml directs the /appcenterconsole URLs to the WebSphere Liberty server running the MobileFirst app.
At first, the App Center console loaded and I could log in, but any calls to /appcenterconsole/services/* were getting a certificate chain error. I fixed this by adding the HTTP server's certificate to the Liberty keystore. After this change, the behavior changed: on login to App Center, the user immediately receives a 'Your session has expired' message and is sent back to the login page.
Why is my session getting lost? The HTTP server has a JSESSIONID for requests to /appcenterconsole/*.
Can the HTTP plugin send the traffic over plain HTTP to the Liberty server to avoid the SSL chain issue?
This looks like an SSO (Single Sign-On) problem. There are two web applications, AppCenterConsole and AppCenterServices, and both require authentication, hence they should be set up with SSO. It seems you reach AppCenterConsole but not AppCenterServices.
Liberty has SSO by default, but if you are using multiple servers, ensure that you have followed the instructions in "Configuring LTPA on the Liberty profile" in the WebSphere Liberty profile documentation.
Alternatively, you can set the JNDI property ibm.appcenter.ui.cors to false for AppCenterConsole. This avoids the redirection of requests from AppCenterConsole to AppCenterServices. If it doesn't fix the problem outright, it will at least produce a better error message with a stack trace that points to the real problem.
I have a web application with servlets and JSPs running on Tomcat. I have configured Tomcat to use HTTPS for all users/visitors. I want to know if there is a way I can disable HTTPS for users who are not logged in and are just browsing through the application.
Thank you
If you are looking for a Tomcat setting to do that, the answer is no. If you open an HTTPS/SSL port, you open it for everyone (the only exception is if you intend to use client authentication with SSL client certificates, which I guess is not the case here).
However, you can check whether the user is connecting over HTTPS (using HttpServletRequest.isSecure()) and, if they are not logged in, send them back to HTTP with a redirect, or change all page links to start with 'http'. That will make sure that any link the user clicks sends them back to HTTP.
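A minimal sketch of that check as a servlet filter, assuming a "user" session attribute marks a logged-in user (the attribute name and filter name are illustrative); it also presumes plain HTTP on port 80 stays open and no security constraint forces these URLs back to HTTPS:

    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.*;

    public class HttpsDowngradeFilter implements Filter {

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;

            HttpSession session = request.getSession(false);
            boolean loggedIn = session != null && session.getAttribute("user") != null;

            // Anonymous visitor arriving over HTTPS: redirect back to plain HTTP.
            if (request.isSecure() && !loggedIn) {
                StringBuilder url = new StringBuilder("http://")
                        .append(request.getServerName())
                        .append(request.getRequestURI());
                if (request.getQueryString() != null) {
                    url.append('?').append(request.getQueryString());
                }
                response.sendRedirect(url.toString());
                return;
            }
            chain.doFilter(req, res);
        }

        @Override
        public void init(FilterConfig config) { }

        @Override
        public void destroy() { }
    }

Map the filter (in web.xml or with @WebFilter) to the pages anonymous visitors browse; logged-in users fall through to chain.doFilter and stay on HTTPS.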