In the following scenario,
[client]---https--->[Nginx]---http--->[app server]
How (and what) would I pass down to the app server to uniquely identify the certificate? That is, Nginx validates the certificate, but the app server doesn't see it. I need to distinguish between users at the app server so they can't impersonate each other.
You could adapt the technique described in this question for Apache Httpd. You'd need the Nginx equivalent of something like:
RequestHeader set X-ClientCert ""
RequestHeader set X-ClientCert "%{SSL_CLIENT_CERT}s"
I haven't tried, but the documentation for the Nginx SSL module has a section about "Embedded Variables". More specifically:
$ssl_client_cert returns the client certificate in the PEM format for an established SSL connection, with each line except the first prepended with the tab character; this is intended for the use in the proxy_set_header directive;
This looks like what you need with a reverse-proxy setting, like the one you have.
Note that it's very important to clear this header on its way in, otherwise clients could just set the headers themselves and use any certificate they like.
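Untested, but putting it together, the proxying server block might look roughly like this (app_server is a placeholder upstream name, and ssl_verify_client has to be enabled for $ssl_client_cert to be populated):
server {
    listen 443 ssl;
    # ssl_certificate, ssl_certificate_key and ssl_client_certificate go here
    ssl_verify_client on;
    location / {
        # proxy_set_header replaces any X-ClientCert header the client may
        # have sent, so the backend only ever sees the value set here
        proxy_set_header X-ClientCert $ssl_client_cert;
        proxy_pass http://app_server;  # placeholder upstream
    }
}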
How you then want to check this in your application server depends on the platform you're using. In Java, for example, you could write a Filter (or a Tomcat Valve) that sets the parameter in the request from this custom HTTP header.
It sounds like you want to use Nginx for SSL termination, but you want the backend servers to be able to tell whether the original request was over HTTPS or HTTP.
I think something like this could work:
server {
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key go here
    location / {
        # sent to the backend along with the proxied request
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass ...
    }
}
# If you need insecure requests as well
server {
    listen 80;
    location / {
        proxy_set_header X-Forwarded-Proto http;
        proxy_pass ...
    }
}
Then your app server can check the value of the X-Forwarded-Proto header.
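Alternatively, if a single server block handles both ports, the built-in $scheme variable can be passed instead of hard-coding the value. A rough sketch (the backend address is a placeholder):
server {
    listen 80;
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key omitted
    location / {
        # $scheme is "http" or "https" depending on how the request arrived
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://127.0.0.1:8000;  # placeholder backend
    }
}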
This is the same design pattern that Amazon Web Services uses for terminating SSL at their Elastic Load Balancers. They also set the X-Forwarded-Proto header for backend servers to check.
I'm learning how to build and host my own website using Python and Flask, but I can't get it working: I get an infinite redirect loop whenever I try to access the site through my domain name.
I've made my website using Python, Flask, and Flask-Flatpages. I uploaded the code to GitHub and pulled it onto a Raspberry Pi 4 that I have at my house. I installed gunicorn on the RasPi to serve the website and set up two workers to listen for requests. I've also set up nginx to act as a reverse proxy and listen to requests from outside. Here is my nginx configuration:
server {
if ($host = <redacted>.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
# listen on port 80 (http)
listen 80;
server_name <redacted>.com www.<redacted>.com;
location ~ /.well-known {
root /home/pi/<redacted>.com/certs;
}
location / {
# redirect any requests to the same URL but on https
return 301 https://$host$request_uri;
}
}
server {
# listen on port 443 (https)
listen 443;
ssl on;
server_name <redacted>.com www.<redacted>.com;
# location of the SSL certificate
ssl_certificate /etc/letsencrypt/live/<redacted>.com/fullchain.pem; # m$
ssl_certificate_key /etc/letsencrypt/live/<redacted>.com/privkey.pem; #$
# write access and error logs to /var/log
access_log /var/log/blog_access.log;
error_log /var/log/blog_error.log;
location / {
# forward application requests to the gunicorn server
proxy_pass http://localhost:8000;
proxy_redirect off;
proxy_set_header X_Forwarded_Proto $scheme;
proxy_set_header Host $host;
location /static {
# handle static files directly, without forwarding to the application
alias /home/pi/<redacted>.com/blog/static;
expires 30d;
}
}
}
When I access the website by typing in the local IP of the RasPi (I've set up a static IP address in /etc/dhcpcd.conf), the website is served just fine, although my browser won't recognize the SSL certificate, even though Chrome says the certificate is valid when I click on Not Secure > Certificate next to the URL.
To make the website public, I've forwarded port 80 on my router to the RasPi and set up ufw to allow requests only on ports 80, 443, and 22. I purchased a domain name using GoDaddy, then added the domain to CloudFlare by changing the nameservers in GoDaddy (I'm planning to set up cloudflare-ddns later, which is why I added the domain to CloudFlare in the first place). As a temporary solution, I've added the current IP of my router to the A record in the CloudFlare DNS settings, which I'm hoping will stay the same for the next few days.
My problem arises when I try to access my website via my public domain name. When I do so, I get ERR_TOO_MANY_REDIRECTS, and I suspect this is due to some problem with my nginx configuration. I've already read this post and tried changing my CloudFlare SSL/TLS setting from Flexible to Full (strict). However, this leads to a different problem, where I get a CloudFlare error 522: connection timed out. None of the solutions in the CloudFlare help page seem to apply to my situation, as I've confirmed that:
I haven't blocked any CloudFlare IPs in ufw
The server isn't overloaded (I'm the only one accessing it right now)
Keepalive is enabled (I haven't changed anything from the default, although I'm unsure whether it is enabled by default)
The IP address in the A Record of the DNS Table matches the Public IP of my router (found through searching "What is my IP" on google)
Apologies if there is a lot in here for a single question, but any help would be appreciated!
I only see one obvious problem with your config, which is that this block that was automatically added by certbot should probably be removed:
if ($host = <redacted>.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
This is because that behavior is already specified in the location / {} block, and I think the Certbot rule may take effect before the location ~ /.well-known block and break that functionality. I'm not certain about that, and I don't think it would cause the redirects, but you can test the well-known functionality yourself by trying to access http://yourhost.com/.well-known and seeing whether it redirects to HTTPS or not.
On that note, the immediate answer to your question is, get more information about what's happening! My next step would be to see what the redirect loop is - your browser may show this in its network requests log, or you can use a command-line tool like curl or httpie or similar to try to access your site via the hostname and see what requests are being made. Is it simply trying to access the same URL over and over, or is it looping through multiple URLs? What are they? What does that point at?
And as a side note, it makes sense that Chrome wouldn't like your certificate when accessing it via IP - certificates are tied to one or more hostnames, so when you're accessing it over an IP address, the hostname doesn't match, so Chrome is probably (correctly) pointing that out and warning you that you're not at the hostname the certificate says you should be at.
My setup is as follows:
Load Balancer → nginx → Traefik
The load balancer in place does not support Proxy Protocol. Instead it adds the real IP of the client to the TCP options field (yikes, I know! Details). That's something Traefik does not support.
To get the real IP to Traefik, I added an nginx in between that does nothing more than accept connections on ports 80 and 443 and add Proxy Protocol when using SSL. Traefik is configured for Proxy Protocol. Things work as expected.
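For reference, that pass-through piece can be as small as a stream block along these lines (a sketch only; the Traefik address is a placeholder):
stream {
    server {
        listen 443;
        # prepend the PROXY protocol header so Traefik can recover
        # the original client address from it
        proxy_protocol on;
        proxy_pass traefik:443;  # placeholder address for the Traefik entrypoint
    }
}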
However I'd like to set the X-Real-IP header to the correct IP when Proxy Protocol is used. When I try setting the header manually through curl, that one is used, so clients can overwrite it.
How can I tell Traefik to always set X-Real-IP to the IP as advised by Proxy Protocol?
I solved my problem and can see clearer now.
It depends on which node in your configuration (Load Balancer → nginx → Traefik) terminates the client's request. In my setup (Load Balancer → Traefik) the Load Balancer uses NATing to send the request to Traefik. Traefik then takes the client's request and sends a new request to the corresponding backend.
So I had to configure Traefik to never trust the X-Real-Ip header but always set the request's source IP in the X-Real-Ip header.
Configuration is something like this:
[entryPoints.http.proxyProtocol]
insecure = true
trustedIPs = ["10.10.10.1", "10.10.10.2"]
[entryPoints.http.forwardedHeaders]
trustedIPs = ["10.10.10.1", "10.10.10.2"]
The more common configuration (I think) would be that the Load Balancer takes the client's request and then sends a new request to nginx (reverse proxy load balancer). In this case the Load Balancer must set the X-Real-Ip header, nginx must propagate the header to Traefik, and Traefik must be configured to trust nginx as the source for the X-Real-Ip header.
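On the nginx hop, that propagation could look roughly like this (a sketch; the upstream name is a placeholder):
location / {
    # keep the X-Real-Ip value the load balancer already set and
    # append nginx's own view of the client to X-Forwarded-For
    proxy_set_header X-Real-IP $http_x_real_ip;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://traefik;  # placeholder upstream
}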
I just looked into the source code because of a similar problem.
Traefik sets the header X-Real-Ip with the source IP address of the request being forwarded. If the header X-Real-Ip already exists, it will be passed through unchanged.
I hope that answers the question.
if req.Header.Get(XRealIp) == "" {
    req.Header.Set(XRealIp, clientIP)
}
I have done some research on this matter and there are some unanswered questions regarding my issue; however, I managed to solve half of what is needed (thanks to people on the site).
Scenario:
I have Nginx as a reverse proxy in an internal corporate network. I need to pass traffic to the Internet through a corporate proxy.
Half of the solution:
To achieve this, the following works fine:
server {
listen 80;
server_name myhost.com;
location / {
proxy_set_header Host google.com;
proxy_pass http://corporateproxy:9999/;
}
}
However, the above solution does not use SSL between the corporate proxy and google.com. Do you have any idea how to add SSL to this?
I have tried adding the protocol or port to the header, but it does not work that way.
I cannot modify anything on the corporate proxy. It should work like this: if the URL being accessed is https, it is redirected to https; http to http. Unfortunately a header that contains only a DNS name is treated as an HTTP request.
Unfortunately the simplest solution does not work, because nginx does not respect the http_proxy settings on a Red Hat machine:
server {
listen 80;
server_name myhost.com;
location / {
proxy_pass https://google.com/;
}
}
Any help will be highly appreciated.
I have endpoint termination set up on my Google Cloud Platform HTTP Load Balancer and HTTPS set as the protocol for communication with my backends.
This means that all requests, HTTP or HTTPS, are HTTPS to Apache. The problem with this is that the HTTPS environment variable is set to on even when X-Forwarded-Proto is set to http.
All of my research and testing only points to the inverse case (setting HTTPS to on when X-Forwarded-Proto is https via a SetEnvIf X-Forwarded-Proto https HTTPS=on rule).
But, I need something to unset HTTPS when X-Forwarded-Proto is http.
I've tried setting SSLOptions -StdEnvVars as well as many combinations of SetEnvIf, SetEnv, and UnsetEnv. Setting it via mod_rewrite is not an option for me (I don't know if it would work anyway). An interesting note about turning off StdEnvVars is that even when it is off, all the SSL related variables are gone except HTTPS and I can confirm nothing else is setting it in any of my config files.
Edit:
Some examples of directives I've tried in my server config, vhost, and htaccess:
SetEnvIf X-Forwarded-Proto http HTTPS=Off
SetEnvIf X-Forwarded-Proto http HTTPS=0
SetEnvIf X-Forwarded-Proto http !HTTPS
SetEnv HTTPS Off
SetEnv HTTPS 0
SetEnv HTTPS
UnsetEnv HTTPS
Using these directives with other variables, including tests like foo works just fine.
Just an idea first (gladly retracted if someone has a better idea)
https://cloud.google.com/compute/docs/load-balancing/http/ says:
Target proxies
Target proxies terminate HTTP(S) connections from clients, and
are referenced by one or more global forwarding rules and route the
incoming requests to a URL map.
The proxies set HTTP request/response headers as follows:
Via: 1.1 google (requests and responses)
X-Forwarded-Proto: [http | https] (requests only)
X-Forwarded-For: <client IP(s)>, <global forwarding rule external IP> (requests only)
Can be a comma-separated list of IP addresses depending on the X-Forwarded-For entries appended by the intermediaries the client is traveling through. The first element in the section shows the origin address.
The question is where this is set. If it is in the Apache config files, you could just alter the config. If it is set somewhere else, you need to find out where.
The TargetHttpsProxies resource did not list any ways to alter it either. So how about you post the config files that lead to the above behavior?
The organisation I'm working for is currently running an application on Glassfish 3.1.2.2 behind a hardware (same issue with software/cloud) load balancer that is also in charge of SSL termination. We are currently having issues with Glassfish not knowing that it is behind an SSL connection and therefore generating certain things incorrectly. Specifically, the following:
session cookies are not flagged as secure
redirects generated from Glassfish are done as http:// instead of https://
request.isSecure() is not returning the correct value
request.getScheme() is not returning the correct value
In theory we could rewrite all of these things in the load balancer, but on previous projects using Tomcat we have been able to solve all of them at the container level.
In Tomcat I can just set the secure flag and the scheme value on the HTTP connector definition and everything is good to go. But I can't seem to find equivalents on Glassfish.
Anyone have any ideas?
If your load balancer provides the X-Forwarded-Proto header, you can try to use the scheme-mapping attribute in the http definition of your domain.xml:
<http default-virtual-server="server"
max-connections="100"
scheme-mapping="X-Forwarded-Proto">...
For example nginx can be configured to provide this header very easily:
location / {
proxy_set_header X-Forwarded-Proto https;
proxy_pass http://glassfish;
}
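If the same nginx instance also accepts plain HTTP traffic, passing the built-in $scheme variable instead of a fixed value keeps the mapped scheme correct in both cases (a sketch, reusing the glassfish upstream name from the example above):
location / {
    # "http" or "https", depending on how the request reached nginx
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://glassfish;
}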
Looks like Glassfish has some known issues related to scheme-mapping support, though.