Setting an Authorization header after a ForwardAuth in Traefik - traefik

I'm moving from Nginx to Traefik as the reverse-proxy of a Docker Swarm.
Currently, each request coming with a Bearer Token is sent to an authentication service (a microservice running in the Swarm) which sends back a JWT when auth is correct. I then need to put this JWT in the Authorization header so the request can be sent to the service it targets.
The current setup with Nginx:
auth_request /auth;
auth_request_set $jwt $upstream_http_jwt;
proxy_set_header "Authorization" "jwt $jwt";
Can this approach be done with Traefik ForwardAuth directly, or do I have to add a middleware to create this header once the request has been authenticated?

This is possible if your authentication service can return the JWT in the Authorization header of its response. Set the authResponseHeaders option of the ForwardAuth middleware to Authorization.
The authResponseHeaders option is the list of headers to copy from the authentication server's response and set on the forwarded request, replacing any existing conflicting headers.
E.g.
http:
  middlewares:
    auth:
      forwardAuth:
        address: "http://your_auth_server/auth"
        authResponseHeaders:
          - "Authorization"

Related

How to set custom headers with Keycloak Gatekeeper?

I have Keycloak and Keycloak-Gatekeeper set up in OpenShift and it's acting as a proxy for an application that is running.
The application that Keycloak Gatekeeper is proxying requires a custom cookie to be set, so I figured I could use Gatekeeper's custom header configuration to set this; however, I'm running into issues.
Configuration looks like:
discovery-url: https://keycloak-url.com/auth/realms/MyRealm
client-id: MyClient
client-secret: MyClientSecret
cookie-access-name: my.token
encryption_key: MY_KEY
listen: :3000
redirection-url: https://gatekeeper-url.com
upstream-url: https://app-url.com
verbose: true
resources:
  - uri: /home/*
    roles:
      - MyClient:general-access
headers:
  Set-Cookie: isLoggedIn=true
After re-deploying and running through the auth flow, the upstream URL/application is not receiving the custom header. I tried with multiple headers (key/value) but can't seem to get it working or find where that header is being injected in the flow.
I've also checked logs and haven't been able to find anything super useful.
Sample Gatekeeper Config
Gatekeeper Custom Headers Docs
Any suggestions/ideas on how to get this working?
Remove Set-Cookie. Simply add:
headers:
  isLoggedIn: true
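Combined with the resources block from the question, the relevant part of the config would then look roughly like this (a sketch; per the Gatekeeper custom headers docs linked above, entries under headers are added as headers to the upstream request):
resources:
  - uri: /home/*
    roles:
      - MyClient:general-access
headers:
  isLoggedIn: true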

Zabbix HTTP authentication with Keycloak-proxy

I'm trying to integrate the Zabbix UI with Keycloak SSO, using keycloak-proxy.
My setup is the following:
Nginx is the entry point: it handles the "virtual host", forwarding the requests to keycloak-proxy.
Keycloak-proxy is configured with client_id, client_secret, etc. to authenticate users against Keycloak.
The Zabbix dashboard runs on Apache with the default setup: I enable HTTP authentication.
I've created a test user both in Keycloak and Zabbix.
The authentication flow is OK: I'm redirected to Keycloak and I authenticate, but I always get "Login name or password is incorrect." from the Zabbix UI.
What am I doing wrong?
Has anyone tried to use OIDC authentication with Zabbix?
I'm using Zabbix 4.0, KeyCloak 4.4, Keycloak-proxy 2.3.0.
keycloak-proxy configuration:
client-id: zabbix-client
client-secret: <secret>
discovery-url: http://keycloak.my.domain:8080/auth/realms/myrealm
enable-default-deny: true
enable-logout-redirect: true
enable-logging: true
encryption_key: <secret>
listen: 127.0.0.1:10080
redirection-url: http://testbed-zabbix.my.domain
upstream-url: http://a.b.c.d:80/zabbix
secure-cookie: false
enable-authorization-header: true
resources:
  - uri: /*
    roles:
      - zabbix
Zabbix expects a PHP_AUTH_USER (or REMOTE_USER or AUTH_USER) header containing the username, but keycloak-proxy doesn't provide it. Let's use the email as the username (in theory you can use any claim from the access token). Add email to the request headers in the keycloak-proxy config:
add-claims:
- email
And create the PHP_AUTH_USER variable from the email header in the Zabbix Apache config:
SetEnvIfNoCase X-Auth-Email "(.*)" PHP_AUTH_USER=$1
Note: the conf syntax may be incorrect because it is off the top of my head; it may need some tweaks.
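For orientation, a sketch of where such a directive could live in the Zabbix front-end Apache configuration (this assumes keycloak-proxy injects the claim as an X-Auth-Email request header, as the answer implies, and that the front end is served from /usr/share/zabbix, which is an assumption):
# Sketch: map the proxy-injected email header to the variable Zabbix checks
<Directory "/usr/share/zabbix">
    SetEnvIfNoCase X-Auth-Email "(.*)" PHP_AUTH_USER=$1
</Directory>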
BTW: there is a (hackish) user patch available - https://support.zabbix.com/browse/ZBXNEXT-4640, but keycloak-gatekeeper is a better solution
For the record: keycloak-proxy = keycloak-gatekeeper (the project was renamed and migrated to keycloak org recently)

JAX RS injected URIInfo returning localhost for REST requests in reverse proxy

I have a set of IBM WebSphere Liberty profile servers behind an HAProxy reverse proxy. Everything works OK, but HAProxy is doing something to the requests, so I can't get the correct URL using uriInfo.getBaseUri() or uriInfo.getRequestUriBuilder().build("whatever path")... they both return localhost:9080 as the host and port, so I can't build correct URLs pointing to the service. (The request is a standard http://api.MYHOST.com/v1/... REST request.)
Of course, I get a UriInfo object using @Context in the method so it gets the request information.
Front end configuration:
reqadd X-Forwarded-Proto:\ http
# Add CORS headers when Origin header is present
capture request header origin len 128
http-response add-header Access-Control-Allow-Origin %[capture.req.hdr(0)] if { capture.req.hdr(0) -m found }
rspadd Access-Control-Allow-Methods:\ GET,\ HEAD,\ OPTIONS,\ POST,\ PUT if { capture.req.hdr(0) -m found }
rspadd Access-Control-Allow-Credentials:\ true if { capture.req.hdr(0) -m found }
rspadd Access-Control-Allow-Headers:\ Origin,\ X-Requested-With,\ Content-Type,\ Accept if { capture.req.hdr(0) -m found }
And the back-end configuration is:
option forwardfor
http-request set-header Host api.MYHOST.com
http-request set-header X-Forwarded-Host %[dst]
http-request set-header X-Forwarded-Port %[dst_port]
Any ideas on how to get the real request?
The only way I managed to get the correct host used in the request was to inject the HttpServletRequest object in the method parameters.
I also inject the UriInfo, which has all the valid information except the host name:
@Context UriInfo uriInfo, @Context HttpServletRequest request
After that I use URIBuilder (not UriBuilder) from the Apache HttpClient utils to change the host to the correct one, as the JAX-RS UriBuilder is immutable:
new URIBuilder(uriInfo.getBaseUriBuilder().path("/MyPath").queryParam("MyParameter", myParameterValue).build()).setHost(request.getServerName()).toString()
I also had to include setPort() and setScheme() to make sure the correct port and scheme are used (the correct ones are in HttpServletRequest, not UriInfo).
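Putting those pieces together, a minimal sketch of the workaround (the class, path and parameter names are illustrative, not from the original post; it assumes Apache HttpClient's URIBuilder is on the classpath):
import java.net.URI;
import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.UriInfo;
import org.apache.http.client.utils.URIBuilder;

@Path("/v1/things")
public class ThingResource {

    @GET
    public String selfLink(@Context UriInfo uriInfo,
                           @Context HttpServletRequest request) {
        // UriInfo carries the right path and query information, but behind
        // HAProxy its host/port/scheme are the local ones (localhost:9080),
        // so take those from the servlet request instead.
        URI base = uriInfo.getBaseUriBuilder()
                .path("/MyPath")
                .queryParam("MyParameter", "someValue")
                .build();
        return new URIBuilder(base)
                .setScheme(request.getScheme())
                .setHost(request.getServerName())
                .setPort(request.getServerPort())
                .toString();
    }
}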
I just faced this very issue in my Jersey-based application. I used uriInfo.getBaseUriBuilder() to get a UriBuilder and figured out that it's possible to change the hostname from localhost by using the .host() method:
.host(InetAddress.getLocalHost().getHostName())
And you can remove the port part by setting it to -1
.port(-1)
So from a URL that looks like
https://127.0.0.1:8443/hello
I got
https://yourhostname/hello
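A compact sketch of that variant (assuming a @Context-injected UriInfo as in the question; the "hello" path just mirrors the answer's example):
import java.net.InetAddress;
import java.net.URI;
import java.net.UnknownHostException;
import javax.ws.rs.core.UriInfo;

public final class SelfLink {
    // Rebuild the base URI with the machine's hostname instead of localhost
    // and drop the explicit port by setting it to -1.
    public static URI of(UriInfo uriInfo) throws UnknownHostException {
        return uriInfo.getBaseUriBuilder()
                .host(InetAddress.getLocalHost().getHostName())
                .port(-1)
                .path("hello")
                .build();
    }
}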

Not Getting Custom Nameservers Using Godaddy Api

I used this API call to get the DNS records and nameservers using the domain name:
https://api.godaddy.com/v1/domains/testsd34.com/records/NS
GetRecords is the API call used here.
For the default GoDaddy nameservers it returns everything perfectly, but whenever I use custom nameservers for a domain, this API call doesn't return the nameservers in the response; it just returns an empty array.
Does anyone know how to get custom nameservers using this API call?
Finally, I found a way to get and edit nameservers for a domain.
(For custom nameservers, records are not set by GoDaddy, therefore you have to query the nameserver provider.)
Following is the API call for getting nameservers:
HTTP request:
GET https://api.godaddy.com/v1/domains/mydomain.com
HTTP headers:
Authorization -> sso-key my-key:my-secret
Content-Type -> application/json
The response will contain a JSON object with the key "nameServers" listing the nameservers that you have. Example:
"nameServers": [
"ns1.mynameservers.com",
"ns2.mynameservers.com"
]
To edit the nameservers, you can use the following API call:
HTTP request:
PATCH https://api.godaddy.com/v1/domains/mydomain.com
HTTP headers:
Authorization -> sso-key my-key:my-secret
Content-Type -> application/json
HTTP body:
{
  "nameServers": [
    "ns3.mynameservers.com",
    "ns4.mynameservers.com"
  ]
}
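For convenience, the same PATCH request expressed as a curl command (a sketch; substitute your own key, secret and domain):
curl -X PATCH "https://api.godaddy.com/v1/domains/mydomain.com" \
  -H "Authorization: sso-key my-key:my-secret" \
  -H "Content-Type: application/json" \
  -d '{"nameServers": ["ns3.mynameservers.com", "ns4.mynameservers.com"]}'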

NTLM authentication fails but Basic authentication works

Here's what happens on the local server when the application invokes an HTTP request on the local IIS.
request.Credentials = CredentialCache.DefaultNetworkCredentials;
request.PreAuthenticate = true;
request.KeepAlive = true;
When I execute the request, I can see the following series of HTTP calls in Fiddler:
Request without an Authorization header, results in 401 with WWW-Authenticate: NTLM+Negotiate
Request with Authorization: Negotiate (Base64 string 1), results in 401 with WWW-Authenticate: Negotiate (Base64 string 2)
Request with Authorization: Negotiate (Base64 string 3), results in 401 with WWW-Authenticate: Negotiate (Base64 string 4)
Request with Authorization: Negotiate (Base64 string 3), results in 401 with WWW-Authenticate: NTLM+Negotiate
Apparently the client and the server (both running on the same machine) are trying to handshake, but in the end authorization fails.
What is strange is that if I disable Windows authentication of the site and enable Basic authentication and send user/pwd explicitly, it all works. It also works if I use NTLM authentication and try to access the site from the browser specifying my credentials.
Well, after several hours of struggling I figured out what the problem was. In order to be able to inspect network traffic in Fiddler I had defined a Fiddler rule:
if (oSession.HostnameIs("MYAPP")) { oSession.host = "127.0.0.1"; }
Then I used "MYAPP" instead of "localhost" in the Web app reference, and Fiddler happily displayed all session information.
But server security was far less happy, so this alias basically broke challenge-response authentication on the local server. Once I replaced the alias with "localhost", it all worked.