How to resolve HTTP load balancing Error 1003 on Cloudflare DNS? - load-balancing

The SSL/TLS encryption mode on the Cloudflare side is set to "Flexible" (HTTPS on).
The server-side setting on the third server is:
upstream backend {
    server domain1.com;
    server domain2.com;
}
server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
The error message is:
Error 1003
Direct IP access not allowed
Cloudflare explains:
Error 1003 Access Denied: Direct IP Access Not Allowed
Common cause
A client or browser directly accesses a Cloudflare IP address.
Resolution
Browse to the website domain name in your URL instead of the Cloudflare IP address.

I resolved this problem by setting the Host header on the proxied request:
proxy_set_header Host www.domain.io;
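For context, a minimal sketch of where that directive can sit in the load-balancer config (the hostname www.domain.io is taken from the line above; everything else mirrors the config at the top of this question, so treat it as an illustration rather than a verified fix):

upstream backend {
    server domain1.com;
    server domain2.com;
}
server {
    listen 80;
    location / {
        # Send a real hostname upstream instead of the upstream group name,
        # so Cloudflare can match the request to a zone (avoids Error 1003).
        proxy_set_header Host www.domain.io;
        proxy_pass http://backend;
    }
}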

Related

How do I fix an infinite redirect loop on a self-hosted nginx server?

I'm learning how to build and host my own website using Python and Flask, but I'm unable to make my website work as I keep getting an infinite redirect loop when I try to access my website through my domain name.
I've made my website using Python, Flask, and Flask-Flatpages. I uploaded the code to GitHub and pulled it onto a Raspberry Pi 4 that I have at my house. I installed gunicorn on the RasPi to serve the website and set up two workers to listen for requests. I've also set up nginx to act as a reverse proxy and listen to requests from outside. Here is my nginx configuration:
server {
    if ($host = <redacted>.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    # listen on port 80 (http)
    listen 80;
    server_name <redacted>.com www.<redacted>.com;
    location ~ /.well-known {
        root /home/pi/<redacted>.com/certs;
    }
    location / {
        # redirect any requests to the same URL but on https
        return 301 https://$host$request_uri;
    }
}
server {
    # listen on port 443 (https)
    listen 443;
    ssl on;
    server_name <redacted>.com www.<redacted>.com;
    # location of the SSL certificate
    ssl_certificate /etc/letsencrypt/live/<redacted>.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/<redacted>.com/privkey.pem; # managed by Certbot
    # write access and error logs to /var/log
    access_log /var/log/blog_access.log;
    error_log /var/log/blog_error.log;
    location / {
        # forward application requests to the gunicorn server
        proxy_pass http://localhost:8000;
        proxy_redirect off;
        proxy_set_header X_Forwarded_Proto $scheme;
        proxy_set_header Host $host;
        location /static {
            # handle static files directly, without forwarding to the application
            alias /home/pi/<redacted>.com/blog/static;
            expires 30d;
        }
    }
}
When I access the website by typing in the local IP of the RasPi (I've set up a static IP address in /etc/dhcpcd.conf), the website is served just fine, although it seems like my browser won't recognize the SSL certificate, even though Chrome says the certificate is valid when I click on Not Secure > Certificate next to the URL.
To make the website public, I've forwarded port 80 on my router to the RasPi and set up ufw to allow requests only from ports 80, 443, and 22. I purchased a domain name using GoDaddy, then added the domain to CloudFlare by changing the nameservers in GoDaddy (I'm planning to set up cloudflare-ddns later, which is why I added the domain to CloudFlare in the first place). As a temporary solution, I've added the current IP of my router to the A Record in the CloudFlare DNS settings, which I'm hoping will be the same for the next few days.
My problem arises when I try to access my website via my public domain name. When I do so, I get ERR_TOO_MANY_REDIRECTS, and I suspect this is due to some problem with my nginx configuration. I've already read this post and tried changing my CloudFlare SSL/TLS setting from Flexible to Full (strict). However, this leads to a different problem, where I get a CloudFlare error 522: connection timed out. None of the solutions in the CloudFlare help page seem to apply to my situation, as I've confirmed that:
I haven't blocked any CloudFlare IPs in ufw
The server isn't overloaded (I'm the only one accessing it right now)
Keepalive is enabled (I haven't changed anything from the default, although I'm unsure whether it is enabled by default)
The IP address in the A Record of the DNS Table matches the Public IP of my router (found through searching "What is my IP" on google)
Apologies if there is a lot in here for a single question, but any help would be appreciated!
I only see one obvious problem with your config, which is that this block that was automatically added by certbot should probably be removed:
if ($host = <redacted>.com) {
    return 301 https://$host$request_uri;
} # managed by Certbot
Because that behavior is already specified in the location / {} block, and I think the Certbot rule may take effect before the location ~ /.well-known block and break that functionality. I'm not certain about that, and I don't think that would cause the redirects, but you can test the well-known functionality yourself by trying to access http://yourhost.com/.well-known and seeing if it redirects to HTTPS or not.
On that note, the immediate answer to your question is, get more information about what's happening! My next step would be to see what the redirect loop is - your browser may show this in its network requests log, or you can use a command-line tool like curl or httpie or similar to try to access your site via the hostname and see what requests are being made. Is it simply trying to access the same URL over and over, or is it looping through multiple URLs? What are they? What does that point at?
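For example (a rough sketch; the domain here is a placeholder for your real hostname), curl can print the status line and Location header of each hop in the redirect chain:

# -s silent, -I HEAD requests only, -L follow redirects (up to curl's default limit)
curl -sIL http://www.example.com/ | grep -iE '^(HTTP|location)'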
And as a side note, it makes sense that Chrome wouldn't like your certificate when accessing it via IP - certificates are tied to one or more hostnames, so when you're accessing it over an IP address, the hostname doesn't match, so Chrome is probably (correctly) pointing that out and warning you that you're not at the hostname the certificate says you should be at.

Nginx proxy_pass backend server sends 302 to client and redirects client to a backend IP

I have a backend server with IP 192.168.1.10 on port 8090. Nginx reaches it via 10.0.0.10, which is then NATed to 192.168.1.10:
nginx ----> 10.0.0.10 ----> 192.168.1.10
This works well in general, but we seem to be facing a specific problem with redirections.
The client goes to https://nginx/gateway and the connection is established. The backend server app now requires authentication, so the backend server sends a 302 redirect back to the client. What I notice in the client web browser is that the address changes and the client tries to establish a connection to http://192.168.1.10:8080 (the auth service is located on a different port). The page fails to load, as the client has no route to this destination.
My expectation is that nginx takes care of this redirect internally and simply passes the content to the client using its own https://nginx address.
The config is simple:
location / {
    proxy_pass "http://10.0.0.10:8090";
    proxy_set_header Host $host:$server_port;
}
Any suggestions? Many thanks!
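One directive worth knowing about here is proxy_redirect, which rewrites the Location header of upstream redirects. A minimal sketch using the addresses from the question (whether http://192.168.1.10:8080/ is the exact redirect target the backend emits is an assumption):

location / {
    proxy_pass "http://10.0.0.10:8090";
    proxy_set_header Host $host:$server_port;
    # Rewrite redirects that point at the internal auth service so the
    # client is sent back through this proxy's own public address instead.
    proxy_redirect http://192.168.1.10:8080/ https://$host/;
}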

400 Bad Request load balancer for Apache servers with NGINX

I am using NGINX as a load balancer for Apache web servers (WordPress). All servers are AWS EC2 instances. My NGINX config:
cat /etc/nginx/sites-available/default
upstream web_backend {
    server 35.157.101.5;
    server 35.156.213.23;
}
server {
    listen 80;
    location / {
        proxy_pass http://web_backend;
    }
}
But after restarting NGINX, I access the load balancer via its public IP and get an error:
Bad Request
Your browser sent a request that this server could not understand.
Additionally, a 400 Bad Request error was encountered while trying to
use an ErrorDocument to handle the request.
Apache/2.4.29 (Ubuntu) Server at
ip-172-31-35-36.eu-central-1.compute.internal Port 80
If I refresh the page I get the same error, but with a different IP at the end (the second server's private IP), so I understand that NGINX does its job and it is an Apache problem.
I tried adding port 80 for my servers in the nginx config and replacing the IPs with DNS names and private IPs, but it didn't help. The access log on Apache doesn't show anything useful, just 400 errors.
What could be the problem?
Don't use '_' in the upstream name; it was the only cause of my problem.
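A plausible mechanism (based on nginx defaults, so take it as an assumption about this setup): proxy_pass sends the upstream group name as the Host header unless told otherwise, and Apache rejects Host values containing underscores with 400 Bad Request. A minimal sketch of the rename, with an explicit Host header shown as an alternative:

upstream webbackend {    # no underscore in the group name
    server 35.157.101.5;
    server 35.156.213.23;
}
server {
    listen 80;
    location / {
        proxy_pass http://webbackend;
        # Alternatively, keep the old name and send an explicit Host header,
        # e.g. the site's real domain (placeholder shown here):
        # proxy_set_header Host example.com;
    }
}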
Just check which ports the Apache web servers are running on. You have to add those to your upstreams.
E.g.:
upstream web_backend {
    server 35.157.101.5:8080;  # assuming that your apache webserver is running on this port on this host
    server 35.156.213.23:3000; # and a different port on the other; you still need to add them here even if your ports are the same
}

Nginx reverse proxy to https location causes ssl_error_rx_record_too_long

I am adding an nginx reverse proxy in front of my existing nextcloudpi server in order to be able to route traffic to different servers in my network depending on the domain that is used. Currently the nextcloudpi server is the only one running, so all traffic is routed directly to it.
The server is only accessible via https, and Let's Encrypt handles the certificates on the nextcloudpi server.
In order to route traffic from my reverse proxy to the nextcloudpi server via https, I have set up the default.conf file to look like this:
server {
    listen 443;
    listen [::]:443;
    server_name <my-public-url>;
    location / {
        proxy_pass https://<hostname-of-my-nextcloudpi-server>;
    }
}
Unfortunately that doesn't work. Firefox returns SSL_ERROR_RX_RECORD_TOO_LONG and Chrome returns ERR_SSL_PROTOCOL_ERROR.
I have also not found any examples where traffic is proxied to an https location. I am aware that on my internal network I can and should just route to the target location on port 80, but since the server is already set up to use https I want to keep it that way.
Thanks for your help
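For reference, SSL_ERROR_RX_RECORD_TOO_LONG usually means the browser spoke TLS but received plaintext HTTP: listen 443; without the ssl parameter makes nginx serve plain HTTP on that port. A minimal sketch of a TLS-terminating proxy, assuming the proxy host has (or can obtain) its own certificate for the public name; the paths below are placeholders:

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name <my-public-url>;
    # Placeholder paths; the proxy needs its own certificate for <my-public-url>
    ssl_certificate /etc/letsencrypt/live/<my-public-url>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<my-public-url>/privkey.pem;
    location / {
        proxy_pass https://<hostname-of-my-nextcloudpi-server>;
        # Send SNI upstream so the nextcloudpi server can select its certificate
        proxy_ssl_server_name on;
        proxy_set_header Host $host;
    }
}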

Difference HTTP Redirect vs Reverse Proxy in NGINX

I'm having some difficulty in understanding the difference between reverse proxy (i.e. using proxy_pass directive with a given upstream server) and a 301 permanent redirect. How are they similar/different?
Reverse Proxy
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
}
server {
    location / {
        proxy_pass http://backend;
    }
}
HTTP Redirect
Apache Example: http://www.inmotionhosting.com/support/website/htaccess/redirect-without-changing-url
NGINX example:
server {
    listen 80;
    server_name domain1.com;
    return 301 $scheme://domain2.com$request_uri;
}
Hence, it seems that the two approaches are no different from an end-user perspective. I want to ensure efficient bandwidth usage while utilizing SSL. Currently, the app server uses its own self-signed SSL certificate with Nginx. What is the recommended approach for redirecting users from a website hosted by a standard web hosting company (HostGator, GoDaddy, etc.) to a separate app server?
With a redirect the server tells the client to look elsewhere for the resource. The client will be aware of this new location. The new location must be reachable from the client.
A reverse proxy instead forwards the request of the client to some other location itself and sends the response from this location back to the client. This means that the client is not aware of the new location and that the new location does not need to be directly reachable by the client.