Bearer token for upstream server with NGINX reverse proxy. Is the header being stripped? - ssl

I have a Tomcat server that is behind an NGINX reverse proxy applying SSL. There is a bearer token in place for API calls on the Tomcat server, but I am getting a 401 error when I send this token to an endpoint in Postman. The proxy otherwise works flawlessly.
I've spent way too long troubleshooting this, but I've only looked at my proxy settings. I discovered last night that the proxy should be forwarding the Authorization header to the upstream Tomcat server, so now I'm lost as to how to troubleshoot this. Has anyone encountered this before, or can anyone point me in the right direction? This is outside of my normal scope, so I'm a little out of my element.
EDIT - Even when I force the header by adding proxy_set_header Authorization "Bearer $ID_TOKEN"; to the proxy configuration, it still returns the 401 error. Is it maybe adding something it shouldn't, like a second Authorization header, or appending to the existing Authorization header?
EDIT2 - Tomcat error logs show:
[{"time":"2021-05-14 19:01:10.069","description":"Request header did not include a token."}]

If you are not using the auth_request module for NGINX, then it should be fairly easy to simply pass the Authorization header along as follows:
proxy_set_header Authorization $http_authorization;
proxy_pass_header Authorization;
If this doesn't work, I will really need to see more of your NGINX configuration, and I would strongly suggest using the NGINX auth_request module to handle all OAuth on the NGINX server itself.
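For context, a minimal location block along those lines might look like this (the /api/ path and the 127.0.0.1:8080 upstream are placeholders for your own setup):
location /api/ {
    # forward the client's Authorization header to Tomcat
    proxy_set_header Authorization $http_authorization;
    proxy_pass_header Authorization;
    # the usual reverse-proxy headers
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://127.0.0.1:8080;
}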

Related

Caddy as reverse proxy to rewrite an HTTP redirect URL from an upstream response

I have a backend that does not work properly when running behind a reverse proxy, since I cannot configure a custom base URL.
For the login process the backend makes heavy use of HTTP redirects, but because it is behind a reverse proxy it sends redirect URLs that are not reachable by the client.
So I was wondering if there is a way to rewrite the Location header of the upstream response.
If the backend responds with
HTTP/1.1 301
Location: http://backend-hostname/auth/login
Caddy should rewrite the Location header to
HTTP/1.1 301
Location: http://www.my-super-site.com/service/a/auth/login
Is something like this possible?
I've seen that we can remove headers by declaring
header / {
- Location
}
but is it possible to replace the header and rewrite the URL?
I was also looking for an answer to this question and unfortunately I found these responses:
https://caddy.community/t/v2-reverse-proxy-but-upstream-server-redirects-to-nonexistent-path/8566
https://caddy.community/t/proxy-url-not-loading-site/5393/7
TLDR:
You need to use sub-domains rather than sub-paths for services that are not designed to run behind a proxy (or at least don't let you configure a base URL). :(
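For illustration, the sub-domain approach from those threads might look roughly like this in a Caddy v2 Caddyfile (the hostnames here are made up):
service-a.my-super-site.com {
    # serving the backend from its own sub-domain means its redirects
    # to /auth/login resolve correctly for the client without rewriting
    reverse_proxy backend-hostname:80
}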

Do I need HTTPS in Kestrel behind my HTTPS Apache proxy server?

I am not quite clear on whether the Kestrel server, running on localhost, needs to be encrypted.
I use Apache with HTTPS as the proxy server for the Kestrel server. Do I need to run HTTPS in Kestrel as well? In theory, whatever passes through the Apache proxy server (HTTPS enabled) should be encrypted, right?
Please shed some light if you have any ideas.
No, you don't have to encrypt the traffic between Apache and Kestrel. Apache (or nginx or IIS) will be the SSL termination point.
However, what you need to make sure is:
- that Apache correctly sets the forwarded headers (the X-Forwarded-* headers), and
- that Kestrel is correctly configured to use these headers (UseIISIntegration already does that), or that you register the app.UseForwardedHeaders(); middleware, which also applies them.
Without either one, your requests will fail if the controllers/actions are marked with the [RequireHttps] attribute.
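As a rough sketch of the Kestrel side (ASP.NET Core), the forwarded-headers middleware could be registered like this; exact placement depends on your Startup/Program layout:
using Microsoft.AspNetCore.HttpOverrides;

// early in the request pipeline, before authentication/redirection middleware
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    // trust the X-Forwarded-For / X-Forwarded-Proto values set by the Apache proxy
    ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
});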

flask-jwt responds 401 UNAUTHORIZED even with access_token sent in request

I first tried to test flask-jwt on my local machine, using Flask's built-in server. I requested http://localhost/auth with my username and password as the payload; it worked fine and I got a token. Then I requested a protected API with this token, with "JWT access_token" as the Authorization header. This works pretty well on the local machine.
Then I deployed it on my server. I can still get a token by requesting server/auth, but after that, when I request a protected API, I always get a 401 UNAUTHORIZED, even though the token was just issued and I did it the same way as on my local machine.
Is it because of some cookie-related issue on my server side?
Are you on Apache? If so you need to have the following line in your Apache2 VirtualHost .conf file in the sites-available directory:
WSGIPassAuthorization On
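In context, a stripped-down VirtualHost might look like this (server name and paths are placeholders):
<VirtualHost *:80>
    ServerName example.com
    WSGIScriptAlias / /var/www/app/app.wsgi
    # mod_wsgi drops the Authorization header by default;
    # this passes it through to the Flask application
    WSGIPassAuthorization On
</VirtualHost>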

Force SSL certificate to be included in CORS pre-flight request

I am trying to make a (non-simple) CORS GET request (AJAX) from my client server (running on Apache on port 443) to a 3rd-party endpoint server (running on Tomcat on port 8443), and it fails to trigger when tried over HTTPS.
When SSL is not enabled it works just fine, indicating that CORS is set up properly.
The problem is that since the GET is (non-simple), it sends a pre-flight OPTIONS request.
According to this:
Pre-flight OPTIONS request failing over HTTPS
Pre-flighted requests don't include the client certificate. He states this is in the CORS spec; however, I was unable to find this specifically listed in the spec:
http://www.w3.org/TR/cors/
The third party cannot enable
SSLVerifyClient optional
as they require all communication be sent with SSL.
However they do have their CORS setup right and they have
access-control-allow-credentials: "true"
In our AJAX call we included in the xhrFields
withCredentials: true
So we are telling it to pass withCredentials (which includes cert / cookie / etc)
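A call of that shape, assuming jQuery's $.ajax, would look something like this (the endpoint URL is hypothetical):
$.ajax({
    url: "https://thirdparty.example.com:8443/api/resource",  // hypothetical endpoint
    method: "GET",
    xhrFields: {
        withCredentials: true  // ask the browser to include credentials (cookies, HTTP auth, TLS client certs)
    }
});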
And on our APACHE we have
SSLOptions +ExportCertData
Somehow, when we make the call, they are still seeing the error "key/cert was not included".
Am I missing something? Is there a way to force this in Apache?
At the moment I'm getting ready to create a man-in-the-middle script to attach the cert to the initial request, but it seems like there has to be a better way.
Any suggestions?

Removing duplicate headers from HTTP requests

I am using an Apache 2.4 server with mod_proxy as an HTTP reverse proxy for Tomcat server. The reverse proxy works on a Split-DNS configuration where "server.com" might point either to the actual HTTP server or to my reverse proxy depending on where the client is.
The problem I'm having is that our client application has a bug where it sometimes includes a header more than once. For example, an HTTP request could end up looking like this:
POST server.com HTTP/1.1
Some-Header: foo
Authorization: BASIC abc123
Authorization: BASIC abc123
Other-Headers: ...
This works fine if the client is talking directly to Tomcat but if it goes through the reverse proxy then the duplicated headers seem to get mangled and Tomcat ends up receiving a request that looks like this:
POST server.com HTTP/1.1
Some-Header: foo
Authorization: BASIC abc123, BASIC abc123
Other-Headers: ...
I used Wireshark to inspect the HTTP requests as they are sent/received in the Client->Proxy->Tomcat chain and Apache is definitely the component that is "collapsing" the two headers into one.
Is there a way to configure this behavior so that it either sends both headers or just one? What I don't want is this "collapsing" taking place...
You can use mod_headers to remove the duplicate header. See their official docs for information on how to enable it.
Then you can add a line like this to your configuration file so that the first part of the header disappears:
RequestHeader edit Authorization "^BASIC\ abc123\\,\ " ""
Let me know if that works for you.
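If the token isn't a fixed literal, an untested variation using a regex backreference might collapse any duplicated value, under the same assumption that mod_headers sees the already-merged header:
RequestHeader edit Authorization "^(.+), \1$" "$1"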