I am using nginx, and trying to load the site on port 80 (or the root URL) gives a 502 Bad Gateway error after trying to connect for a while.
When I run netstat -ltnp | grep :80 I get the results below.
And here is my nginx.conf:
#user nginx;
# The number of worker processes is changed automatically by CustomBuild, according to the number of CPU cores, if it's set to "1"
worker_processes 4;
pid /var/run/nginx.pid;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
events {
    include /etc/nginx/nginx-events.conf;
}
http {
    include /etc/nginx/mime.types;
    # For user configurations not maintained by DirectAdmin. Empty by default.
    include /etc/nginx/nginx-includes.conf;
    # Supplemental configuration
    #include /etc/nginx/nginx-modsecurity-enable.conf;
    include /etc/nginx/nginx-defaults.conf;
    include /etc/nginx/nginx-gzip.conf;
    include /etc/nginx/nginx-proxy.conf;
    include /etc/nginx/directadmin-ips.conf;
    include /etc/nginx/directadmin-settings.conf;
    include /etc/nginx/nginx-vhosts.conf;
    include /etc/nginx/directadmin-vhosts.conf;
    server {
        listen 80;
        root /var/www/html;
        index index.html index.htm index.php;
    }
}
Note: any port other than 80 works fine. Obviously 8080 and 8081 are taken, but apart from those, any other port (e.g. 8000) works fine.
Things I have tried so far:
this solution, and using a proxy.
What could possibly be causing this?
Alright, so I started commenting out all the includes, and eventually found that the line below was defining another port 80 server:
include /etc/nginx/directadmin-vhosts.conf;
So instead of defining my servers in nginx.conf, I edited that file (directadmin-vhosts.conf), and everything worked after that.
Note that, for some reason, this command wasn't showing all of the listeners on port 80:
netstat -ltnp | grep :80
So I used the command below instead, and that's when I realized nginx was listening on port 80 somewhere else as well.
netstat -ltnp | grep :*
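For anyone debugging the same thing, here is a small sketch of two commands that can make the conflict easier to spot (ss is the modern replacement for netstat, and nginx -T prints the fully expanded configuration, including every file pulled in by an include):
# list every listening TCP socket together with the owning process
sudo ss -ltnp
# dump the complete nginx configuration and show every listen directive,
# so duplicate port 80 server blocks stand out
sudo nginx -T | grep -n "listen"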
A noob mistake, I reckon, but I hope this answer helps somebody out there who is struggling with the same issue.
I'm learning how to build and host my own website using Python and Flask, but I can't get it to work: I keep getting an infinite redirect loop when I try to access the website through my domain name.
I've made my website using Python, Flask, and Flask-Flatpages. I uploaded the code to GitHub and pulled it onto a Raspberry Pi 4 that I have at my house. I installed gunicorn on the Pi to serve the website and set up two workers to listen for requests. I've also set up nginx to act as a reverse proxy and listen for requests from outside. Here is my nginx configuration:
server {
    if ($host = <redacted>.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    # listen on port 80 (http)
    listen 80;
    server_name <redacted>.com www.<redacted>.com;
    location ~ /.well-known {
        root /home/pi/<redacted>.com/certs;
    }
    location / {
        # redirect any requests to the same URL but on https
        return 301 https://$host$request_uri;
    }
}
server {
    # listen on port 443 (https)
    listen 443;
    ssl on;
    server_name <redacted>.com www.<redacted>.com;
    # location of the SSL certificate
    ssl_certificate /etc/letsencrypt/live/<redacted>.com/fullchain.pem; # m$
    ssl_certificate_key /etc/letsencrypt/live/<redacted>.com/privkey.pem; #$
    # write access and error logs to /var/log
    access_log /var/log/blog_access.log;
    error_log /var/log/blog_error.log;
    location / {
        # forward application requests to the gunicorn server
        proxy_pass http://localhost:8000;
        proxy_redirect off;
        proxy_set_header X_Forwarded_Proto $scheme;
        proxy_set_header Host $host;
        location /static {
            # handle static files directly, without forwarding to the application
            alias /home/pi/<redacted>.com/blog/static;
            expires 30d;
        }
    }
}
When I access the website by typing in the local IP of the Pi (I've set up a static IP address in /etc/dhcpcd.conf), the website is served just fine, although my browser won't recognize the SSL certificate, even though Chrome says the certificate is valid when I click on Not Secure > Certificate next to the URL.
To make the website public, I've forwarded port 80 on my router to the Pi and set up ufw to allow requests only on ports 80, 443, and 22. I purchased a domain name using GoDaddy, then added the domain to CloudFlare by changing the nameservers in GoDaddy (I'm planning to set up cloudflare-ddns later, which is why I added the domain to CloudFlare in the first place). As a temporary solution, I've added the current IP of my router to the A record in the CloudFlare DNS settings, which I'm hoping will stay the same for the next few days.
My problem arises when I try to access my website via my public domain name. When I do so, I get ERR_TOO_MANY_REDIRECTS, and I suspect this is due to some problem with my nginx configuration. I've already read this post and tried changing my CloudFlare SSL/TLS setting from Flexible to Full (strict). However, this leads to a different problem, where I get a CloudFlare error 522: connection timed out. None of the solutions in the CloudFlare help page seem to apply to my situation, as I've confirmed that:
I haven't blocked any CloudFlare IPs in ufw
The server isn't overloaded (I'm the only one accessing it right now)
Keepalive is enabled (I haven't changed anything from the default, although I'm unsure whether it is enabled by default)
The IP address in the A record of the DNS table matches the public IP of my router (found by searching "What is my IP" on Google)
Apologies if there is a lot in here for a single question, but any help would be appreciated!
I only see one obvious problem with your config, which is that this block that was automatically added by certbot should probably be removed:
if ($host = <redacted>.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
Because that behavior is already specified in the location / {} block, and I think the Certbot rule may take effect before the location ~ /.well-known block and break that functionality. I'm not certain about that, and I don't think that would cause the redirects, but you can test the well-known functionality yourself by trying to access http://yourhost.com/.well-known and seeing if it redirects to HTTPS or not.
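For reference, here is a minimal sketch of what the port 80 block could look like without the Certbot if, keeping the same paths as in your config (a shape to test, not a guaranteed drop-in):
server {
    # listen on port 80 (http)
    listen 80;
    server_name <redacted>.com www.<redacted>.com;
    # serve the ACME / certificate-validation files directly over plain http
    location ~ /.well-known {
        root /home/pi/<redacted>.com/certs;
    }
    # redirect everything else to https
    location / {
        return 301 https://$host$request_uri;
    }
}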
On that note, the immediate answer to your question is: get more information about what's happening! My next step would be to see what the redirect loop actually is - your browser may show this in its network requests log, or you can use a command-line tool like curl or httpie to access your site via the hostname and see what requests are being made. Is it simply trying to access the same URL over and over, or is it looping through multiple URLs? What are they? What does that point at?
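For example, something along these lines (a sketch; --max-redirs just stops curl from looping forever) prints the status line and Location header of every hop in the chain:
# follow redirects and show where each response points next
curl -sIL --max-redirs 10 http://<redacted>.com/ | grep -iE "^(HTTP|location)"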
And as a side note, it makes sense that Chrome wouldn't like your certificate when accessing it via IP - certificates are tied to one or more hostnames, so when you're accessing it over an IP address, the hostname doesn't match, so Chrome is probably (correctly) pointing that out and warning you that you're not at the hostname the certificate says you should be at.
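If you want to confirm which names the certificate actually covers, a sketch like this works; 192.168.1.10 here is just a placeholder for the Pi's local IP:
# connect by IP but send the real hostname via SNI, then print the certificate's names
openssl s_client -connect 192.168.1.10:443 -servername <redacted>.com </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"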
I have purchased an SSL cert and bundled it up correctly, insofar as when I verify the modulus (i.e. following https://kb.wisc.edu/middleware/4064) the hashes are the same.
I have moved the cert and key to /etc/ssl on my server and ensured that the folder permissions are 700 and each file is 600.
I then have the following nginx config:
server {
    listen 80;
    listen 443 ;
    server_name escapehatch.chrisjowen.uk;
    ssl on;
    ssl_certificate /etc/ssl/ssl-bundle.crt;
    ssl_certificate_key /etc/ssl/secret.txt;
    access_log /var/log/nginx/nginx.vhost.access.log;
    error_log /var/log/nginx/nginx.vhost.error.log;
    location / {
        proxy_pass http://localhost:8080;
    }
}
Finally, to test this, I have a Python SimpleHTTPServer running on port 8080. When I hit the URL over HTTPS, I receive an error:
This site can’t provide a secure connection
Looking at the logs from the Python server, I see:
218.186.183.142 - - [21/Aug/2019 04:45:53] code 400, message Bad HTTP/0.9 request type ('\x16\x03\x01\x02\x00\x01\x00\x01\xfc\x03\x03\x01a\x96\x061LE\x88I\xf1i\x7f\xc3\xdc%d\x18r\xbbzq9q<\xeb\x1dD\xa3\x8b\x01\x10\x7f')
218.186.183.142 - - [21/Aug/2019 04:45:53] "�a�1LE�I�i��%dr�zq9q<�D�� n��Z�����SN�F���j;X.Zw�s^�"**�+�/�,�0̨̩����/5" 400 -
218.186.183.142 - - [21/Aug/2019 04:45:53] code 400, message Bad request version ('\x0fb\x03g\x8d\x04\x8b\xbe!\xad\x98W\x9bV\xd2\x8e\x1e\xc6\xf3\xaa\xff\xce\x0f\x1b\xc9\x0f\xebY\xae\xc4\x00"\xfa\xfa\x13\x01\x13\x02\x13\x03\xc0+\xc0/\xc0,\xc00\xcc\xa9\xcc\xa8\xc0\x13\xc0\x14\x00\x9c\x00\x9d\x00/\x005\x00')
So it seems like nginx is not terminating the SSL connection and decrypting the request; instead it's passing it straight through to the upstream server, which I do not want.
Checking the nginx log /var/log/nginx/nginx.vhost.access.log shows nothing.
So now I am stumped about how to debug the issue. It appears that either the nginx config is wrong or there is something wrong with the cert, but as mentioned I checked the cert with the method at https://kb.wisc.edu/middleware/4064.
listen 80;
listen 443 ;
If you want it to listen for plain HTTP on port 80 and HTTPS on port 443, the second line should be listen 443 ssl;.
ssl on;
From the documentation:
This directive was made obsolete in version 1.15.0. The ssl parameter of the listen directive should be used instead.
Also you have the following in the logs of your Python server:
218.186.183.142 - - [21/Aug/2019 04:45:53] code 400, ....
This Python server is clearly being visited directly by an external IP address. If the request had been forwarded by the local nginx, the source IP would be 127.0.0.1 instead. This shows that you aren't hitting nginx at all, but are somehow making a direct request to the Python server.
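Putting both points together, a minimal sketch of the corrected server block (same paths as in the question; ssl on; is dropped in favour of the ssl parameter on listen):
server {
    listen 80;
    # terminate TLS here; nginx then forwards plain HTTP to the backend
    listen 443 ssl;
    server_name escapehatch.chrisjowen.uk;
    ssl_certificate /etc/ssl/ssl-bundle.crt;
    ssl_certificate_key /etc/ssl/secret.txt;
    access_log /var/log/nginx/nginx.vhost.access.log;
    error_log /var/log/nginx/nginx.vhost.error.log;
    location / {
        proxy_pass http://localhost:8080;
    }
}
If the Python log still shows external client IPs after that, it would suggest traffic is reaching port 8080 directly (for example through a port forward), bypassing nginx entirely.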
I have multiple small Flask apps. I want to run each of them on a different port on the same server (single domain).
For example, say I have 3 Flask apps:
tasks.py --> has API endpoints with /task only
users.py --> has API endpoints with /user only
analysis.py --> has API endpoints with /analysis only
domain name : api.test.com
I want to run tasks.py on port 8080, users.py on port 5000, and analysis.py on, say, port 4500.
I want to configure uWSGI and nginx so that when I hit api.test.com/task/xxxx the request is directed to port 8080 (where tasks.py is running);
similarly, api.test.com/user/xxxx should be directed to port 5000 and api.test.com/analysis/xxxx to port 4500.
It seems to me that you could handle this with one single uWSGI instance on one port, but if you prefer this way of doing things, then you can take the following approach.
Suppose you already have several uWSGI instances running on different ports: 8080, 5000 and 4500.
Then you need to create an Nginx config with approximately the following content (read the comments, please):
# webserver configuration
server {
    # port for the web server to listen on
    listen 80;
    # domain name with its aliases, or the IP
    server_name api.test.com;
    # SET UP THREE LOCATION SECTIONS, ONE FOR EACH ACTION
    location /task {
        # AND SPECIFY THE NEEDED ADDRESS:PORT HERE
        uwsgi_pass YOUR_SERVER_IP:8080;
        # ONE MAY ALSO USE A UNIX SOCKET (IF ON THE SAME SERVER) INSTEAD OF A PORT
        # uwsgi_pass unix:/path/to/the/socket.sock
        include /etc/nginx/uwsgi_params; # standard params with environment variables required by a uwsgi app
    }
    location /user {
        uwsgi_pass YOUR_SERVER_IP:5000;
        include /etc/nginx/uwsgi_params;
    }
    location /analysis {
        uwsgi_pass YOUR_SERVER_IP:4500;
        include /etc/nginx/uwsgi_params;
    }
}
I hope you know how to run three uWSGI instances, one for each of the ports.
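In case it helps, here is a rough sketch of what those three instances could look like, assuming each module exposes a Flask object named app (since nginx talks to them via uwsgi_pass, they must speak the uwsgi protocol, hence --socket rather than --http):
# one uWSGI instance per app, each bound to its own port
uwsgi --socket 0.0.0.0:8080 --wsgi-file tasks.py --callable app --master --processes 2
uwsgi --socket 0.0.0.0:5000 --wsgi-file users.py --callable app --master --processes 2
uwsgi --socket 0.0.0.0:4500 --wsgi-file analysis.py --callable app --master --processes 2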
In my opinion, this is extremely ugly, since in order to add any new action you will have to edit the Nginx config again. So I don't recommend this solution and only offer this answer as a demonstration of Nginx's capabilities in combination with other web servers.
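For comparison, the single-instance approach mentioned at the top could look roughly like this - a sketch of a uWSGI ini file, again assuming each module exposes a callable named app; nginx would then need only one location / { uwsgi_pass ...; } block:
[uwsgi]
socket = 127.0.0.1:3031
master = true
processes = 4
# mount each Flask app under its URL prefix
mount = /task=tasks:app
mount = /user=users:app
mount = /analysis=analysis:app
# rewrite SCRIPT_NAME / PATH_INFO so each app sees paths relative to its prefix
manage-script-name = true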
I would like to run two versions of my Rails site, one for production and one for development. The production one will listen on port 80 and the development one on port 9033. Here are my server blocks, which are located in the same config file:
server {
    listen 80 default_server;
    server_name mywebsite.com;
    passenger_enabled on;
    passenger_app_env production;
    root /path/to/public/dir;
}
server {
    listen 9033 default_server;
    server_name mywebsite.com;
    passenger_enabled on;
    passenger_app_env development;
    root path/to/public/dir;
    passenger_friendly_error_pages on;
}
The problem is that when I try to connect to the website through my browser, regardless of which port I use, I always get the version of the website corresponding to the environment specified in the first server block. So in the example above, it always serves the production version of my website.
Why does the first server block override the second, and how can I make it so that I can access either version of my website without manually changing the config files and reloading nginx?
UPDATE:
None of the suggestions worked, even after clearing the browser cache before sending each HTTP request. I changed my server blocks to the following in the hope of getting the server to return different versions of the website:
server {
    listen *:80;
    server_name mywebsite.com;
    passenger_enabled on;
    passenger_app_env production;
    root /home/alex/code/m2m/public/;
}
server {
    listen *:80;
    server_name dev.mywebsite.com;
    passenger_enabled on;
    passenger_app_env development;
    root /home/alex/code/m2m/public/;
    passenger_friendly_error_pages on;
}
and then added the following line to my /etc/hosts file:
my.ip.addr.ess dev.mywebsite.com
But requests to both domains return only the production version of my website. Note that I'm using the default nginx.conf file. Is there a way I can debug my browser (Chrome v40.0.2214.111 (64-bit)) to see if/where my requests are being altered? I'm thinking the problem lies client-side, since the advice the commenters have given me seems like it should work.
What if you try this:
listen *:80;
and
listen *:9033;
This was my recommendation on the original question, which was about the nginx config.
With those listen directives, according to the nginx documentation, nginx first matches server blocks on IP:port, and only then looks at the server_name directives of the blocks that matched that IP:port. So if a request to the right port ends up in the wrong environment, the cause lies in either the app or the Passenger directives.
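To address the debugging part of the update: one way to take the browser and DNS out of the picture entirely is to send the Host header yourself with curl, run against the server's IP (a sketch using the placeholder address from the question):
# ask for the production vhost
curl -s -H "Host: mywebsite.com" http://my.ip.addr.ess/ | head
# ask for the development vhost
curl -s -H "Host: dev.mywebsite.com" http://my.ip.addr.ess/ | head
If the second request already returns the development site, the server-side config is fine and the problem really is client-side; if it still returns production, the issue is in nginx or Passenger.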
I have nginx (with Passenger) installed under my user account (via Homebrew). For a few hours I thought the thing simply refused to work, as I couldn't get any Rails 3 application to respond on a simple nginx-declared location. After much deliberation (AKA trial and error) I came to the conclusion that it does work, but refuses to use port 80.
I know that a Homebrew installation is a per-user installation, so it should not be able to bind to root-only ports (i.e. ports below 1024), but Homebrew itself (and various sources on the net) suggest that simply running the server via sudo nginx should suffice to allow it to use port 80.
These are the important files of the configuration that does work:
/etc/hosts:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 doomhub.local localhost
255.255.255.255 broadcasthost
::1 localhost
fe80::1%lo0 localhost
/usr/local/etc/nginx/nginx.conf:
#user nobody;
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    passenger_root /Users/ellmo/.rvm/gems/ruby-1.9.3-p125/gems/passenger-3.0.18/;
    passenger_ruby /Users/ellmo/.rvm/rubies/ruby-1.9.3-p125/bin/ruby;
    include mime.types;
    default_type application/octet-stream;
    access_log logs/access.log; #main;
    sendfile on;
    #tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    #gzip on;
    server {
        server_name doomhub.local;
        listen 8080;
        root /Users/ellmo/rails/doomhub/public;
        passenger_enabled on;
        passenger_use_global_queue on;
        rails_env development;
    }
}
When I change the application's server port to 80, upon restart I naturally receive:
nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)
...but when I sudo it, the error doesn't show and the process starts with root as its owner. However, opening http://doomhub.local in the browser returns nothing (well, technically it's a "browser could not resolve the address" error, but I get no other errors and no nginx error). I get no logs, nothing.
When I change the listening port back to 8080 (or anything else), it works like a charm when I run it as a regular user... but I really want to use http://doomhub.local in my browser rather than http://doomhub.local:8080.
Is there anything that would block OS X apps from listening on port 80 that I'm missing? As you can see, I CAN bind to it, so there's no other process in the way. It just doesn't listen to anything.
Is there any way to treat 8080 in the browser as a "normal" HTTP port?
EDIT:
Specifying Passenger users as Jan Schejbal suggested didn't solve the issue for me; after a while I ended up creating RVM wrappers for Passenger (rather than letting it use the plain Ruby binaries) as described in this great post:
http://everydayrails.com/2010/09/13/rvm-project-gemsets.html
Thanks to creating a Passenger/Bundler-only RVM wrapper, I managed to get the application running when I start nginx as root. I can specify both root-only and user-allowed listening ports (I tested both 8080 and 81), and the application is served fine even with all the gems that are NOT related to the wrapper binary. Yet...
...still I get absolutely nothing on port 80.
proper diagnosis
Hah!
I completely forgot I had pow installed on my system. I know I had disabled it altogether before I started playing with nginx, but that was not enough. As you may know, pow is a zero-configuration server tool that automatically creates localhost domains - this also means that it... "appropriates" port 80, which you can see if you type:
sudo ipfw list
This should return something like:
00100 fwd 127.0.0.1,20559 tcp from any to me dst-port 80 in
65535 allow ip from any to any
...which clearly shows that any IP connection on port 80 is forwarded to port 20559 (pow's port).
solution
What we want to do now is delete this port-80 forwarding rule and use some other port in its place. That will allow us to easily host development servers for multiple Rails applications (each with its own gemset and configuration) and proxy them through Passenger's upstreams.
There's a great write-up on how to achieve the first part of this task. To me it seems like simply changing the ipfw entries manually would suffice, but I went with the blog entry. If you do this, make sure you use the proper pow install/uninstall scripts from pow's manual; for example, I had to fully uninstall pow before the installation script would successfully compile.
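For reference, removing the rule by hand is roughly this (a sketch; 00100 is the rule number shown by sudo ipfw list above, so use whatever number appears on your machine):
# show the current rules with their numbers
sudo ipfw list
# delete the port-80 forwarding rule that pow installed
sudo ipfw delete 00100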
I'd assume that running as root causes permission problems that keep the app from working. Have you tried the PassengerUserSwitching option together with PassengerUser and PassengerGroup?
Edit: I missed the "could not resolve address" error. Still, try the above (Passenger is weird sometimes), but also run netstat to see whether the port is correctly bound.
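For the netstat suggestion, on OS X something like this (a sketch) shows whether anything is actually bound to port 80 and which process owns it:
# list processes listening on TCP port 80
sudo lsof -nP -iTCP:80 -sTCP:LISTEN
# or, since macOS netstat has no -p/-t flags:
netstat -an | grep LISTEN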