Nginx reverse proxy to https location causes ssl_error_rx_record_too_long - nginx-reverse-proxy

I am adding an nginx reverse proxy in front of my existing nextcloudpi server in order to be able to route traffic to different servers in my network depending on the domain that is used. Currently the nextcloudpi server is the only one running, so all traffic is routed directly to it.
The server is only accessible via https, and letsencrypt handles the certificates on the nextcloudpi server.
In order to route traffic from my reverse proxy to the nextcloudpi server via https, I have set up the default.conf file to look like this:
server {
    listen 443;
    listen [::]:443;
    server_name <my-public-url>;
    location / {
        proxy_pass https://<hostname-of-my-nextcloudpi-server>;
    }
}
Unfortunately that doesn't work: Firefox returns SSL_ERROR_RX_RECORD_TOO_LONG and Chrome returns ERR_SSL_PROTOCOL_ERROR.
I have also not found any examples where traffic is proxied to an https location. I am aware that within my internal network I can and should just route to the target location on port 80, but since the server is already set up to use https I want to keep it that way.
Thanks for your help
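A note on the error (an added aside, not from the original post): listen 443; without the ssl parameter makes nginx speak plain HTTP on port 443, which is the usual cause of SSL_ERROR_RX_RECORD_TOO_LONG. As a hedged sketch only, with placeholder certificate paths, a TLS-terminating proxy block for this setup might look like:
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name <my-public-url>;
    # the proxy terminates TLS itself, so it needs its own certificate
    ssl_certificate /etc/letsencrypt/live/<my-public-url>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<my-public-url>/privkey.pem;
    location / {
        # re-encrypt towards the backend; proxy_ssl_server_name sends SNI to it
        proxy_pass https://<hostname-of-my-nextcloudpi-server>;
        proxy_ssl_server_name on;
        # forwarding the original Host header is optional and depends on the backend
        proxy_set_header Host $host;
    }
}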

Related

How do I fix an infinite redirect loop on a self-hosted nginx server?

I'm learning how to build and host my own website using Python and Flask, but I'm unable to make my website work as I keep getting an infinite redirect loop when I try to access my website through my domain name.
I've made my website using Python, Flask, and Flask-Flatpages. I uploaded the code to GitHub and pulled it onto a Raspberry Pi 4 that I have at my house. I installed gunicorn on the RasPi to serve the website and set up two workers to listen for requests. I've also set up nginx to act as a reverse proxy and listen to requests from outside. Here is my nginx configuration:
server {
    if ($host = <redacted>.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    # listen on port 80 (http)
    listen 80;
    server_name <redacted>.com www.<redacted>.com;
    location ~ /.well-known {
        root /home/pi/<redacted>.com/certs;
    }
    location / {
        # redirect any requests to the same URL but on https
        return 301 https://$host$request_uri;
    }
}
server {
    # listen on port 443 (https)
    listen 443;
    ssl on;
    server_name <redacted>.com www.<redacted>.com;
    # location of the SSL certificate
    ssl_certificate /etc/letsencrypt/live/<redacted>.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/<redacted>.com/privkey.pem; # managed by Certbot
    # write access and error logs to /var/log
    access_log /var/log/blog_access.log;
    error_log /var/log/blog_error.log;
    location / {
        # forward application requests to the gunicorn server
        proxy_pass http://localhost:8000;
        proxy_redirect off;
        proxy_set_header X_Forwarded_Proto $scheme;
        proxy_set_header Host $host;
        location /static {
            # handle static files directly, without forwarding to the application
            alias /home/pi/<redacted>.com/blog/static;
            expires 30d;
        }
    }
}
When I access the website by typing in the local IP of the RasPi (I've set up a static IP address in /etc/dhcpcd.conf), the website is served just fine, although it seems like my browser won't recognize the SSL certificate, even though Chrome says the certificate is valid when I click on Not Secure > Certificate next to the URL.
To make the website public, I've forwarded port 80 on my router to the RasPi and set up ufw to allow requests only from ports 80, 443, and 22. I purchased a domain name using GoDaddy, then added the domain to CloudFlare by changing the nameservers in GoDaddy (I'm planning to set up cloudflare-ddns later, which is why I added the domain to CloudFlare in the first place). As a temporary solution, I've added the current IP of my router to the A Record in the CloudFlare DNS settings, which I'm hoping will be the same for the next few days.
My problem arises when I try to access my website via my public domain name. When I do so, I get ERR_TOO_MANY_REDIRECTS, and I suspect this is due to some problem with my nginx configuration. I've already read this post and tried changing my CloudFlare SSL/TLS setting from Flexible to Full (strict). However, this leads to a different problem, where I get a CloudFlare error 522: connection timed out. None of the solutions in the CloudFlare help page seem to apply to my situation, as I've confirmed that:
I haven't blocked any CloudFlare IPs in ufw
The server isn't overloaded (I'm the only one accessing it right now)
Keepalive is enabled (I haven't changed anything from the default, although I'm unsure whether it is enabled by default)
The IP address in the A Record of the DNS Table matches the Public IP of my router (found through searching "What is my IP" on google)
Apologies if there is a lot in here for a single question, but any help would be appreciated!
I only see one obvious problem with your config: this block, which was automatically added by Certbot, should probably be removed:
if ($host = <redacted>.com) {
    return 301 https://$host$request_uri;
} # managed by Certbot
Because that behavior is already specified in the location / {} block, and I think the Certbot rule may take effect before the location ~ /.well-known block and break that functionality. I'm not certain about that, and I don't think that would cause the redirects, but you can test the well-known functionality yourself by trying to access http://yourhost.com/.well-known and seeing if it redirects to HTTPS or not.
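For reference, a hedged sketch of the question's port-80 block with that rule removed (placeholders kept as in the question):
server {
    # listen on port 80 (http)
    listen 80;
    server_name <redacted>.com www.<redacted>.com;
    # keep the ACME challenge path reachable over plain http
    location ~ /.well-known {
        root /home/pi/<redacted>.com/certs;
    }
    location / {
        # redirect everything else to the same URL but on https
        return 301 https://$host$request_uri;
    }
}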
On that note, the immediate answer to your question is, get more information about what's happening! My next step would be to see what the redirect loop is - your browser may show this in its network requests log, or you can use a command-line tool like curl or httpie or similar to try to access your site via the hostname and see what requests are being made. Is it simply trying to access the same URL over and over, or is it looping through multiple URLs? What are they? What does that point at?
And as a side note, it makes sense that Chrome wouldn't like your certificate when accessing it via IP - certificates are tied to one or more hostnames, so when you're accessing it over an IP address, the hostname doesn't match, so Chrome is probably (correctly) pointing that out and warning you that you're not at the hostname the certificate says you should be at.

400 Bad Request load balancer for Apache servers with NGINX

I am using NGINX as a load balancer for Apache web servers (WordPress). All servers are AWS EC2 instances. My config for NGINX:
cat /etc/nginx/sites-available/default
upstream web_backend {
    server 35.157.101.5;
    server 35.156.213.23;
}
server {
    listen 80;
    location / {
        proxy_pass http://web_backend;
    }
}
But after restarting NGINX, when I access the load balancer via its public IP I get an error:
Bad Request
Your browser sent a request that this server could not understand.
Additionally, a 400 Bad Request error was encountered while trying to
use an ErrorDocument to handle the request.
Apache/2.4.29 (Ubuntu) Server at
ip-172-31-35-36.eu-central-1.compute.internal Port 80
If I refresh the page I get the same error but with another IP at the end (the second server's private IP), so I understand that NGINX is doing its work and it is an Apache problem.
I tried adding port 80 for my servers in the nginx config and replacing the IPs with DNS names and private IPs, but it didn't help. The access log on Apache doesn't show anything useful, just 400 errors.
What could be the problem?
Don’t use ‘_’ in the upstream name; it was the only reason for my problem.
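Why the underscore matters (an added note, not from the original answer): when proxy_pass points at a named upstream, nginx sends the upstream group's name as the Host header by default, and Apache's strict host-name checking rejects names containing '_' with a 400 Bad Request. A hedged sketch of either fix:
upstream webbackend {
    # renamed without '_' so the Host header nginx sends is a valid hostname
    server 35.157.101.5;
    server 35.156.213.23;
}
server {
    listen 80;
    location / {
        proxy_pass http://webbackend;
        # alternatively, keep any upstream name and forward the client's own Host header
        proxy_set_header Host $host;
    }
}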
Just check which ports the Apache web servers are running on. You have to add those to your upstream entries.
E.g.:
upstream web_backend {
    server 35.157.101.5:8080; # assuming your Apache web server is running on this port on this host
    server 35.156.213.23:3000; # and a different port on the other; you still need to add them here even if your ports are the same
}

Difference HTTP Redirect vs Reverse Proxy in NGINX

I'm having some difficulty in understanding the difference between reverse proxy (i.e. using proxy_pass directive with a given upstream server) and a 301 permanent redirect. How are they similar/different?
Reverse Proxy
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
}
server {
    location / {
        proxy_pass http://backend;
    }
}
HTTP Redirect
Apache Example: http://www.inmotionhosting.com/support/website/htaccess/redirect-without-changing-url
NGINX example:
server {
    listen 80;
    server_name domain1.com;
    return 301 $scheme://domain2.com$request_uri;
}
Hence, it seems that the two approaches make no difference from an end-user perspective. I want to ensure efficient bandwidth usage while utilizing SSL. Currently, the app server uses its own self-signed SSL certificate with Nginx. What is the recommended approach for redirecting users from a website hosted by a standard web hosting company (HostGator, GoDaddy, etc.) to a separate app server?
With a redirect the server tells the client to look elsewhere for the resource. The client will be aware of this new location. The new location must be reachable from the client.
A reverse proxy instead forwards the request of the client to some other location itself and sends the response from this location back to the client. This means that the client is not aware of the new location and that the new location does not need to be directly reachable by the client.

Nginx serves different website (on the same server) when using HTTPS

I have several websites hosted on the same server. To simplify, say I have just two (http-only.com and https.com), with nginx handling requests.
One has SSL enabled and the other doesn't. I noticed links like http-only.com/https_server_path in Google Search Console, and when I access the http-only.com server with the https protocol, I get requests served by the https.com server instead.
https.com:
server {
    listen 443 ssl;
    server_name https.com;
    ssl on;
}
only-http.com:
server {
    listen 80;
    server_name only-http.com;
}
I think I should define something like a default ssl server to handle ssl for the http-only site, but I don't know how to do it properly. I guess nginx should redirect an https request to the http URL if the corresponding server doesn't handle https. Or maybe there is a better solution?
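One common way to implement the default ssl server idea, as a hedged sketch (the certificate paths are placeholders, typically pointing at a self-signed fallback cert): without a default_server on port 443, nginx hands an https request for a name that has no ssl server block to the first ssl server block it finds, which is why https.com ends up answering for the http-only site. A catch-all block can close such connections instead:
server {
    # catch-all for https requests to names that have no ssl server block of their own
    listen 443 ssl default_server;
    server_name _;
    # an ssl listener still needs some certificate, e.g. a self-signed fallback
    ssl_certificate /etc/ssl/certs/fallback.crt;
    ssl_certificate_key /etc/ssl/private/fallback.key;
    # close the connection rather than serving another site's content
    return 444;
}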

SSL on ELB confusion when redirecting to HTTPS

I have installed an SSL certificate on an ELB. I have one EC2 instance behind the ELB and can access the website over SSL fine (IIS, Windows Server 2008).
The confusion is that when I am on the non-HTTPS part of the site and perform a redirect in my app to the HTTPS area, I get an error.
Doing some digging in the listeners area, I can see that port 443 on the ELB forwards to port 80 on the instances, which makes sense, but then how do I handle this scenario?
For now, I have 'hacked' it by adding a self-signed cert on my instance and forwarding 443 from the ELB to 443 on the instance, but this kind of defeats the point?!
Any advice on how this should be structured would be great!
You have both port 80 and 443 on your load balancer forwarding to port 80 on your instance, so you need to figure out how to tell them apart.
The ELB sets a header value so you can tell these two types of requests apart.
Take a look at http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/TerminologyandKeyConcepts.html#x-forwarded-headers but the value you want to check is X-Forwarded-Proto - this should have http or https, and obviously if it's http you would then redirect to https.
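The setup in the question is IIS, so the following is only a hedged illustration of the same pattern in nginx terms (a hypothetical block, not part of the original stack): behind a load balancer that terminates TLS and forwards everything to port 80, the redirect decision has to come from the X-Forwarded-Proto header rather than from the local scheme.
server {
    listen 80;
    # everything from the load balancer arrives as plain http on port 80,
    # so use the header it sets to recover the scheme the client used
    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }
    # ...the normal site or proxy configuration would continue here
}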