deploy local nginx server to public ubuntu 16.04 - express

I am trying to expose my local nginx server to the public internet. nginx runs as a reverse proxy in front of my Node/Express app, which is also running locally on port 3000. I have created a symbolic link from /etc/nginx/sites-available/express to /etc/nginx/sites-enabled/express, so my configuration file is called express and looks like this:
/etc/nginx/sites-enabled/express
upstream express_servers {
    server 127.0.0.1:3000;
}

server {
    listen 80;

    location / {
        proxy_pass http://express_servers;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
I have removed the default file from the sites-enabled folder, and I have not changed my nginx.conf, which looks like this:
/etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

#mail {
#   # See sample authentication script at:
#   # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#   # auth_http localhost/auth.php;
#   # pop3_capabilities "TOP" "USER";
#   # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#   server {
#       listen   localhost:110;
#       protocol pop3;
#       proxy    on;
#   }
#
#   server {
#       listen   localhost:143;
#       protocol imap;
#       proxy    on;
#   }
#}
I also changed my firewall settings with ufw (Uncomplicated Firewall) to allow incoming HTTP traffic (specifically the Nginx HTTP profile). My ufw status looks like this:
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                        Action      From
--                        ------      ----
80/tcp (Nginx HTTP)       ALLOW IN    Anywhere
80                        ALLOW IN    Anywhere
80/tcp (Nginx HTTP (v6))  ALLOW IN    Anywhere (v6)
80 (v6)                   ALLOW IN    Anywhere (v6)
When I run load tests with wrk or loadtest (npm), everything seems to work fine. For example:

    wrk -t12 -c50 -d5s http://192.168.178.57/getCats/eng

So locally I can reach the nginx server, but when I try to access it from outside with my phone (3G/4G), I can't reach the server. What exactly did I miss?
EDIT: I'm trying to access the service via http://PUBLIC_IP_ADDR/getCats/eng, not the local address.

Your nginx config looks perfectly fine.
To be able to access your server from outside, you need a public (ideally static) IP from your ISP. The ISP must also not block incoming traffic to ports 80 and 443 (in case you decide to go with HTTPS later).
Then you probably have a LAN like this:

    ISP <---> Router <---> Server
                 ^
                 |
                 +---> your other devices
In this case the public IP is assigned to the router, and all other devices get local private IPs from the ranges 192.168.0.0/16, 10.0.0.0/8 or 172.16.0.0/12.
You need to configure port forwarding on the router to the server's private IP. Depending on the router vendor this feature may be called "virtual server" or similar, and it is usually found somewhere near the WAN configuration. Set it up to forward TCP port 80 to the server's local port 80, and do the same for 443.
Also, you may want to give the server a static local IP (or a DHCP reservation on the router) so that its local address does not change.
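As a quick sanity check (the URL below just reuses the placeholder from the question; adjust it for your setup), you can verify that nginx is listening on all interfaces and then test the public address from a network outside your LAN, for example a phone on mobile data:

    # on the server: nginx should be listening on 0.0.0.0:80 (all interfaces)
    sudo ss -tlnp | grep ':80'

    # from outside your LAN (e.g. phone hotspot), after port forwarding is set up:
    curl -I http://PUBLIC_IP_ADDR/getCats/eng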

I think you have to put

    listen *:80;

in your file /etc/nginx/sites-enabled/express (see the nginx listen documentation: http://nginx.org/en/docs/http/ngx_http_core_module.html#listen). I think it's not listening for requests arriving at your ISP's public IP as you have it now.
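For context, a minimal sketch of how that change would sit in the server block from the question (note that listen 80; and listen *:80; are behaviorally equivalent, so this is mostly about being explicit):

    server {
        listen *:80;        # explicitly listen on all IPv4 interfaces
        # listen [::]:80;   # uncomment to also listen on IPv6

        location / {
            proxy_pass http://express_servers;
        }
    }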

Related

React-App not showing only on port 80 on Server 2012 R2?

I have a full stack site designed to run on port 80 with the Node backend using port 5000. This site runs without fail on a Windows 10 machine.
When I copy it to a domain server running Server 2012 R2, I cannot get it to function on port 80, although on port 90 it shows with no problems.
IIS is turned off and netstat -aon shows that Node is the PID using port 80. I then tried building the page and serving it with NGINX and am getting the same results, except that NGINX is now the process using port 80.
Here is the code I believe to be relevant, but I am uncertain what to do with it.
My .env file for react-app is simple:
PORT=80
When switching to port 90 it functions successfully.
If I attempt to run through NGINX (with which I am unfamiliar) using the following configuration:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    # include           mime.types;
    # default_type      application/octet-stream;
    # sendfile          on;
    # keepalive_timeout 65;
    # gzip              on;

    server {
        listen      80;
        server_name localhost;

        location / {
            proxy_pass http://localhost:90;
            root  C:\intranet\New_Test\frontend\build;
            index $uri $uri/ /index.html;
        }

        location /api {
            proxy_pass http://localhost:5000;
        }
    }
}
I still get nothing.
I have also tried it without forwarding port 80 to port 90 with the same results.
Do I have an incorrect configuration somewhere? The netstat output also says that SYSTEM is using port 80 for some reason, but it is also using a number of other HTTP ports.
** Edit **
I have since updated my nginx.conf file to this:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    # include           mime.types;
    # default_type      application/octet-stream;
    # sendfile          on;
    # keepalive_timeout 65;
    # gzip              on;

    include mime.types;

    server {
        listen      90;
        server_name localhost;
        root        html;
        index       /index.html;

        location / {
            try_files $uri $uri/ /index.html;
        }

        location /api {
            proxy_pass http://localhost:5000;
        }
    }
}
This works fine to display the site on port 90, but for whatever reason port 80 is inaccessible to me on this machine.
I switched to a different model and am posting this answer to close the question. I went with NSSM (https://alex.domenici.net/archive/deploying-a-node-js-application-on-windows-iis-using-a-reverse-proxy - step 5), hosting the built React portion through IIS and using NSSM to run Node as a service. It works well on the local machine if I set REACT_APP_HOST to localhost. I'm now experimenting with pathing so that the server can be reached from any client, not just a page on the localhost server.
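For reference, a minimal sketch of how registering the Node backend as a Windows service with NSSM typically looks; the service name and backend path below are made up for illustration and need to be adapted to the actual project layout:

    :: run from an elevated command prompt
    nssm install MyNodeBackend "C:\Program Files\nodejs\node.exe" "C:\intranet\New_Test\backend\server.js"
    nssm set MyNodeBackend AppDirectory "C:\intranet\New_Test\backend"
    nssm start MyNodeBackend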

Why do I get connection timeout on ssl even though nginx is listening and firewall allows 443?

Most/many visitors to the site https://example.org get a connection timeout. Some visitors get through, possibly ones redirected from http://example.org or those who've previously visited the site.
I'm trying to determine if this is a firewall issue or an nginx configuration issue.
Firewall
I'm using UFW as a firewall, which has the following rules:
To                   Action      From
--                   ------      ----
SSH                  ALLOW       Anywhere
Nginx Full           ALLOW       Anywhere
80/tcp               ALLOW       Anywhere
443/tcp              ALLOW       Anywhere
SSH (v6)             ALLOW       Anywhere (v6)
Nginx Full (v6)      ALLOW       Anywhere (v6)
80/tcp (v6)          ALLOW       Anywhere (v6)
443/tcp (v6)         ALLOW       Anywhere (v6)
I could give some relevant rules from iptables if anyone needs that, but I'd need some direction on what to look for.
For sudo netstat -anop | grep LISTEN | grep ':443' I get
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 120907/nginx: worke off (0.00/0/0)
tcp6 0 0 :::443 :::* LISTEN 120907/nginx: worke off (0.00/0/0)
Not sure what "worke off" means.
nginx
It's a virtual host with the server name myservername.com which serves up two websites, example.org and example.com/directory. Example.org points to a docker container running eXist-db. Example.com/directory is serving up a directory on localhost:8080 proxied from another server where example.com lives. Example.com/directory is running smoothly on https when I access it in the browser -- I presume this is because it actually talks to the example.com host over http.
Example.org and myservername.com both have certs from let's encrypt generated by certbot.
When I try nmap from my local machine I get some results I can't explain. Notice the discrepancy between port 80 and port 443, and between IPv4 and IPv6:
$ nmap -A -T4 -p443 example.org
443/tcp filtered https
$ nmap -A -T4 -p443 my.server.ip.address
443/tcp filtered https
$ nmap -A -T4 -p443 -6 my:server:ip::v6:address
443/tcp open ssl/http nginx 1.10.3
$ nmap -A -T4 -p80 example.org
80/tcp open http nginx 1.10.3
$ nmap -A -T4 -p80 my.server.ip.address
80/tcp open http nginx 1.10.3
My nginx.conf is
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    client_max_body_size 50M;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
and my nginx server blocks:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _ myservername.com;
    return 301 https://myservername.com$request_uri;
}

server {
    # SSL configuration
    #
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    server_name _ myservername.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:8080;
    }

    ssl_certificate /etc/letsencrypt/live/myservername.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myservername.com/privkey.pem;
}

server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    gzip off;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:8080;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name example.org www.example.org;
    return 301 https://example.org$request_uri;
}

server {
    # SSL configuration
    #
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.org www.example.org;
    gzip off;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://docker.container.ip.address:port/exist/apps/example/;
    }

    location /workshop2020/ {
        return 302 http://example.org/forum2020/;
    }

    location /exist/apps/example/ {
        rewrite ^/exist/apps/example/(.*)$ /$1;
    }

    ssl_certificate /etc/letsencrypt/live/example.org/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem; # managed by Certbot
}
Very grateful for any help!!
It turns out it was the firewall, not nginx. Although I'm using ufw as my firewall, there was a preexisting INPUT DROP rule in iptables (but not in ip6tables) that was catching https requests.
Thanks to Francis Daly over in the nginx forums who explained how to identify whether the https request to port 443 was even getting to nginx.
I disabled IPv6 in my browser and then tried loading the site. By looking at tcpdump while trying to load the site, I was able to see what was happening with the requests -- $ sudo tcpdump -nnSX -v port 443 showed a bunch of packets with Flags [S]. Thus the request was getting to the machine but there was no handshake.
Comparing this to the nginx access log, I was able to see that the request didn't get to nginx at all.
So I examined iptables more carefully and found the offending rule.
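For anyone hitting the same thing, a sketch of how such a rule can be located and worked around with the stock iptables tooling; the rule number below is just an example:

    # list INPUT rules with their positions and packet counters
    sudo iptables -L INPUT -n -v --line-numbers

    # either delete the offending DROP rule by its number...
    sudo iptables -D INPUT 3

    # ...or insert an ACCEPT for port 443 above it
    sudo iptables -I INPUT 1 -p tcp --dport 443 -j ACCEPT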
Please also note that some hosters/cloud providers have an additional hardware-level/external firewall, often enabled by default (with SSH port 22 the only allowed port), that also needs to be configured (e.g. Hetzner; Ionos; OVH; ...)!

NGINX problems after uninstalling apache (duplicate server)

I tried to install NGINX on my Debian server. Before switching to NGINX I used Apache 2.4, which I uninstalled before installing NGINX.
My problem now is that I can't get it to work; the error is the following: "[emerg] a duplicate default server for 0.0.0.0:80 in /etc/nginx/sites-enabled/justarandomname.conf:4"
And yes, there are many posts about this problem, but none of them fixed it for me.
Additional information:
I shut Apache down and then uninstalled it properly (I think). dpkg does not detect any Apache leftovers, and I deleted the Apache folder.
In my sites-enabled there are only "justarandomname" and "justarandomname.conf"; I deleted "default" (and there are no other hidden files in there).
NGINX had some problem while installing, but after doing it manually it worked.
"justarandomname" looks like this:
server {
    server_name mydomain.abc www.mydomain.abc;

    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
}
and my "justarandomname.conf" looks like this:
server {
    server_name mydomain.abc www.mydomain.abc;

    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
}
My nginx.conf:
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

#mail {
#   # See sample authentication script at:
#   # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#   # auth_http localhost/auth.php;
#   # pop3_capabilities "TOP" "USER";
#   # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#   server {
#       listen   localhost:110;
#       protocol pop3;
#       proxy    on;
#   }
#
#   server {
#       listen   localhost:143;
#       protocol imap;
#       proxy    on;
#   }
#}
EDIT: Of course I restarted the server multiple times.
In your nginx.conf, you include all files under /etc/nginx/sites-enabled/ with

    include /etc/nginx/sites-enabled/*;

and your justarandomname and justarandomname.conf files both contain the same line

    listen 80 default_server;

That is what is causing your problem; see the nginx documentation for the listen directive:

    The default_server parameter, if present, will cause the server to become the default server for the specified address:port pair.

You can either delete justarandomname or change that line in nginx.conf to

    include /etc/nginx/sites-enabled/*.conf;
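A sketch of the first option, assuming the .conf copy is the one you want to keep and the server uses systemd:

    sudo rm /etc/nginx/sites-enabled/justarandomname   # keep only justarandomname.conf
    sudo nginx -t && sudo systemctl reload nginx       # verify the config, then reload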

Regular connection fine but SSL issue

I'm running my application on CentOS 6.4 with Nginx 1.0.15 and gunicorn 19.1.1. My application works fine if I am just using port 80 and not using SSL. However, when I attempt to add SSL to the site, Nginx redirects to https://, but all I get after the redirect is "web page not available", with no additional information.
upstream apollo2_app_server {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).
    server unix:/webapps/apollo2/run/gunicorn.sock fail_timeout=0;
}

#server {
#    listen 80;
#    server_name mysub.example.com;
#    rewrite ^ https://$server_name$request_uri? permanent;
#}

# This works fine like this, but when I uncomment the above
# and the below ssl information, I get "webpage not available."

server {
    listen 80;
    # listen 443;
    # ssl on;
    # ssl_certificate /etc/nginx/ssl/2b95ec8183e5d1asdfasdfsadf.crt;
    # ssl_certificate_key /etc/nginx/ssl/exmaple.com.key;
    # server_name mysub.example.com;

    client_max_body_size 4G;
    keepalive_timeout 70;

    access_log /webapps/apollo2/logs/nginx-access.log;
    error_log /webapps/apollo2/logs/nginx-error.log;

    location /static/ {
        alias /webapps/apollo2/static/;
    }

    location /media/ {
        alias /webapps/apollo2/media/;
    }

    location / {
        # an HTTP header important enough to have its own Wikipedia entry:
        # http://en.wikipedia.org/wiki/X-Forwarded-For
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # enable this if and only if you use HTTPS, this helps Rack
        # set the proper protocol for doing redirects:
        # proxy_set_header X-Forwarded-Proto https;

        # pass the Host: header from the client right along so redirects
        # can be set properly within the Rack application
        proxy_set_header Host $http_host;

        # we don't want nginx trying to do something clever with
        # redirects, we set the Host: header above already.
        proxy_redirect off;

        # set "proxy_buffering off" *only* for Rainbows! when doing
        # Comet/long-poll stuff. It's also safe to set if you're
        # only serving fast clients with Unicorn + nginx.
        # Otherwise you _want_ nginx to buffer responses to slow
        # clients, really.
        # proxy_buffering off;

        # Try to serve static files from nginx, no point in making an
        # *application* server like Unicorn/Rainbows! serve static files.
        if (!-f $request_filename) {
            proxy_pass http://apollo2_app_server;
            break;
        }
    }

    # Error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /webapps/apollo2/static/;
    }
}
I do not see anything in the error logs.
I have checked port 443 with http://www.yougetsignal.com/tools/open-ports/ and it is open.
This is a wildcard certificate that I am using successfully on another subdomain, on a different server running Debian 7 with Nginx, with what I think is the same setup.
What should I be looking at? What am I missing?
I should have also shown my iptables rules, as someone would certainly have figured it out then. I'm no expert in this area, but there was something wrong with my setup that caused the redirect to fail.
I ended up using the firewall example from Linode, and now it works:
https://www.linode.com/docs/security/securing-your-server#creating-a-firewall
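For reference, a minimal sketch of the kind of ruleset that guide walks through (not the exact rules from the guide, just the general shape: allow loopback and established traffic, SSH, HTTP and HTTPS, then drop everything else):

    # allow loopback and already-established connections
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

    # allow SSH, HTTP and HTTPS
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT
    iptables -A INPUT -p tcp --dport 443 -j ACCEPT

    # drop everything else
    iptables -A INPUT -j DROP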

NGINX SSL Timeout

I'm running NGINX with SSL and the browser continues to time out.
Here is my NGINX conf file:
worker_processes 4;

error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
}

http {
    proxy_next_upstream error;

    charset utf-8;

    include mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;

    keepalive_timeout 65;
    keepalive_requests 0;
    proxy_read_timeout 200;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    gzip on;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/plain text/css text/xml
               application/x-javascript application/xml
               application/atom+xml text/javascript;

    server {
        ### WEB Address ###
        server_name mydomain.com;

        ### SSL log files ###
        access_log /var/log/nginx/ssl-access.log;
        error_log /var/log/nginx/ssl-error.log;

        listen 443;

        ### SSL Certificates ###
        ssl on;
        ssl_certificate /etc/nginx/unified.crt;
        ssl_certificate_key /etc/nginx/ssl.key;

        keepalive_timeout 60;

        ### PROXY TO TORNADO ###
        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_pass http://127.0.0.1:8889;
        }
    }
}
The SSL access log and error log are blank.
I've tried restarting NGINX a couple of times. As a side note, commenting out the SSL directives and setting listen to 80 works for non-SSL connections.
Any help would be great.
Maybe port 443 is closed on your server? Check this with http://www.yougetsignal.com/tools/open-ports/
I agree with Klen's answer, and I would add more.
First, check that your port 443 is open with http://www.yougetsignal.com/tools/open-ports/
If it's closed and you are on AWS, go to your AWS console, select your instance and go to Description -> Security groups -> launch_wizard-1.
Then click Edit -> Add Rule.
Select HTTPS from the list of rule types.
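If you prefer the command line, roughly the same rule can be added with the AWS CLI; the security group ID below is a placeholder:

    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 443 \
        --cidr 0.0.0.0/0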
There are several things to check:
#1: Check that HTTPS is allowed through the firewall on your Ubuntu server:

    sudo ufw allow https && sudo ufw enable

#2: Check whether port 443 is open.
First I checked what is listening on port 443 with this command:

    lsof -iTCP -sTCP:LISTEN -P

I saw nginx, which was correct.
Then I checked whether 443 is open with the tool mentioned by Klen (http://www.yougetsignal.com/tools/open-ports/).
Port 443 was closed, so I had to run

    iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT

to open port 443.
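One caveat worth adding: a rule appended with iptables -A is not persistent across reboots by itself. A common way to keep it on a Debian/Ubuntu system (an assumption here, since the answer mentions Ubuntu) is:

    sudo apt-get install iptables-persistent   # prompts to save the current rules
    sudo netfilter-persistent save             # re-save after later changes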