nginx error: (99: Cannot assign requested address) - ssl

I am running Ubuntu Hardy 8.04 and nginx 0.7.65, and when I try starting my nginx server:
$ sudo /etc/init.d/nginx start
I get the following error:
Starting nginx: [emerg]: bind() to IP failed (99: Cannot assign requested address)
where "IP" is a placeholder for my IP address. Does anybody know why that error might be happening? This is running on EC2.
My nginx.conf file looks like this:
user www-data www-data;
worker_processes 4;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    access_log /usr/local/nginx/logs/access.log;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 3;
    gzip on;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_types text/plain text/css application/x-javascript text/xml application/xml
               application/xml+rss text/javascript;
    include /usr/local/nginx/sites-enabled/*;
}
and my /usr/local/nginx/sites-enabled/example.com looks like:
server {
    listen IP:80;
    server_name example.com;
    rewrite ^/(.*) https://example.com/$1 permanent;
}

server {
    listen IP:443 default ssl;
    ssl on;
    ssl_certificate /etc/ssl/certs/myssl.crt;
    ssl_certificate_key /etc/ssl/private/myssl.key;
    ssl_protocols SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP;
    server_name example.com;
    access_log /home/example/example.com/log/access.log;
    error_log /home/example/example.com/log/error.log;
}

With Amazon EC2 and elastic IPs, the server doesn't actually know its public IP the way most other servers do.
So you need to tell Linux to allow processes to bind to a non-local address. Just add the following line to the /etc/sysctl.conf file:
# allow processes to bind to the non-local address
# (necessary for apache/nginx in Amazon EC2)
net.ipv4.ip_nonlocal_bind = 1
and then reload your sysctl.conf with:
$ sysctl -p /etc/sysctl.conf
The setting persists across reboots since it lives in /etc/sysctl.conf.
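To verify that the setting is active, you can query the kernel parameter directly:
$ sysctl net.ipv4.ip_nonlocal_bind
net.ipv4.ip_nonlocal_bind = 1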

To avoid hard-coding the IP address in the config, do this:
listen *:80;
listen [::]:80;

As kirpit mentioned above, you'll want to allow Linux processes to bind to a non-local IP address:
nano /etc/sysctl.conf
# allow processes to bind to the non-local address
net.ipv4.ip_nonlocal_bind = 1
sysctl -p /etc/sysctl.conf
Then look up the private IP address that is associated with your elastic IP and use it in your site's config:
nano /etc/nginx/sites-available/example.com
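For example, the listen directive would then use the private address rather than the elastic IP (a sketch; 10.0.0.5 stands in for your instance's actual private IP, and the rest of the block is illustrative):
server {
    listen 10.0.0.5:80;
    server_name example.com;
}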
Reload nginx:
service nginx reload
All done!

There might be a leftover process or program still listening on port 80.
You can check that using netstat -lp.
Kill that process and start nginx.
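For example (a sketch; netstat needs root to show process names, and ss is the modern replacement):
$ sudo netstat -tlnp | grep ':80 '
$ sudo ss -tlnp 'sport = :80'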

With Amazon EC2 and elastic IPs, the server doesn't actually know its IP as with most any other server, so in the Apache virtual host files at least you put *:80 rather than your elastic IP:80, and then it works properly.
So theoretically, doing *:80 for nginx should work the same, but when you do, you get [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use). Haven't found a solution yet.

For people who might be dealing with this in the future, I just looked up my private IP in the AWS instance and bound to that. I verified that nginx was able to listen publicly and perform my rewrite after that. I could not do *:PORT as I had an internal server I was proxying to.

If you are using NetworkManager, you have to wait for the network interface to come up before starting the service:
systemctl enable NetworkManager-wait-online.service
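If nginx itself still races the network at boot, the usual systemd approach is a drop-in that orders it after network-online.target (a sketch; the drop-in path follows the standard convention):
# /etc/systemd/system/nginx.service.d/wait-online.conf
[Unit]
After=network-online.target
Wants=network-online.target
Then run systemctl daemon-reload and reboot to test.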

For Amazon EC2 and elastic IPs, the sysctl.conf change will not work if nginx is still not listening on the address of eth0.
So you need to listen on the wildcard address:
listen *:80;


How to listen on a different port with Nginx and proxy the request?

I am a newbie to Nginx configuration. I have an Express app running on port 3000 under pm2, I have allowed port 3000 through ufw as well, and I have made a server block on Nginx to proxy it:
server {
    # SSL configuration
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name .mysite.co;
    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/django/mysite;
    }
    proxy_cache mysite;
    location / {
        include proxy_params;
        proxy_pass http://unix:/home/django/mysite/mysite.sock;
    }
    gzip_comp_level 3;
    gzip_types text/plain text/css image/*;
    ssl_certificate /etc/letsencrypt/live/mysite.co/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mysite.co/privkey.pem; # managed by Certbot
}
server {
    if ($host = www.mysite.co) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = mysite.co) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name .mysite.co;
    return 404; # managed by Certbot
}
server {
    listen 3000;
    listen 443 ssl http2;
    server_name .mysite.co:3000;
    location / {
        proxy_pass https://localhost:3000;
    }
}
I ran netstat -napl | grep 3000 and could confirm that the process is running; pm2 status also says it's running, and there are no errors in the log either.
How could I make this work? Thanks for the help in advance.
You won't be able to use nginx to listen on port 3000 as well as your node process, as only one service can listen on a given port at once. So you'll need to ensure nginx is listening for connections on a different port. I imagine what you're trying to do is listen on ports 80/443 and then send the request on to your Express service, which is listening on port 3000?
In this case your bottom server block is nearly correct. To get this working without TLS/SSL (just on port 80) you'll want to use something like this:
server {
    listen 80;
    server_name node.mysite.co;
    location / {
        proxy_pass http://localhost:3000;
    }
}
The above is a very basic example, and you'll probably want to toggle some other settings. It will make "http://node.mysite.co" proxy through to whatever service (in this case an Express server) is listening on port 3000 locally.
You do not need to make a firewall (ufw) exception for port 3000 in this case, as it's a local proxy pass. You should close the port on the firewall so people can't access it directly; this way they must go through nginx.
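For example, if you had previously opened the port, the rule can be removed like this (assuming it was added with "ufw allow 3000"):
$ sudo ufw delete allow 3000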
If you want to get SSL/TLS working, you'll want another block that'll look something like the following. Again, this is very basic and doesn't have a lot of settings you probably want to research and set (such as cipher choices).
server {
    listen 443 ssl;
    server_name node.mysite.co;
    ssl_certificate certs/mysite/server.crt;
    ssl_certificate_key certs/mysite/server.key;
    location / {
        proxy_pass http://localhost:3000;
    }
}
You'll need to replace the cert and key paths to point to your SSL/TLS certificate and key respectively. This will let you access https://node.mysite.co, and it'll be proxied to the service on port 3000 as well.
Once you've done that you might then choose to go back and change the http (port 80) server to a redirect to https to force https only connections.
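That redirect would look something like this (a sketch, reusing the hypothetical node.mysite.co name):
server {
    listen 80;
    server_name node.mysite.co;
    return 301 https://$host$request_uri;
}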
Also note that I've made the server_name different from your existing django server_name by using a subdomain (node.mysite.co). You might wish to change this value, but you can't have two server blocks listening on the same port with the same server_name, otherwise nginx would have no idea what to do with the request. I'm sure you're doing this anyway, but I wanted to make sure it was explicit and would work with your existing setup.
If you wish the site to be served only at mysite.co:3000
If for some reason you want the user to go to port 3000 on the domain mysite.co, then you will need to set "listen" to 3000 and keep the server name as "mysite.co". This would let someone go to mysite.co:3000 in their browser and hit your node service (though since only one process can bind the port, your Express app would then have to move off 3000). I imagine this isn't really what you want for a public-facing website though, and it also won't line up very nicely with your port 443 version.
Note: I don't claim to be an nginx expert, but I've used it for all my node projects for the past few years and I find this setup to be pretty clear. There might be some nicer syntax you can use.

nginx: redirect all domains to another port but keep one domain for admin apps

I am trying to set up a web server with Docker. I will use the main domain of my server, "server.domain.com", for admin use (server.domain.com/phpmyadmin, etc.), and I want to redirect all the other domains to an Apache container that listens on port 81.
So I have this code in my default.conf:
server {
    listen 80;
    listen [::]:80 default_server;
    location / {
        proxy_pass http://web/;
    }
}
main.conf:
server {
    listen 80;
    listen [::]:80;
    server_name server.domain.com;
    location /phpmyadmin/ {
        proxy_pass http://phpmyadmin/;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
(Updated conf)
And my nginx.conf:
user nginx;
worker_processes 2;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 65;
    gzip on;
    gzip_comp_level 5;
    gzip_http_version 1.0;
    gzip_min_length 0;
    gzip_types *;
    gzip_vary on;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites/*;
}
and a part of docker-compose.yml:
nginx:
  build: ./server/proxy
  ports:
    - "80:80"
  #volumes:
  #  - nginx_conf:/etc/nginx/
  networks:
    - web_network
  depends_on:
    - web
    - phpmyadmin
    - panel
At the moment I use "depends_on" so that I can use the container names in my config, but you talk only about networks, so I think "depends_on" is not required?
But that gives me a "connection refused" error.
If I replace 127.0.0.1 with server.domain.com, the first vhost does not work and redirects to the nginx web root.
So I have no idea why...
Thank you!
As far as I understand, this nginx container is listening on port 80, and all connection requests going to your machine will be passed to it. So it's a proxy container only. I have a project with a similar implementation. Let's try to work it out.
I suggest that you have 2 conf files for clarity.
1) main.conf - will serve your "server.domain.com"
server {
    listen 80;
    listen [::]:80;
    server_name server.domain.com;
    location /phpmyadmin {
        proxy_pass http://server.domain.com:82;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
That's basically all the configuration you need here. Later, if you need them, you can pass headers along.
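For reference, the usual forwarding headers look like this (a sketch; add them inside the location block only if phpmyadmin needs the real client address or host):
location /phpmyadmin {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://server.domain.com:82;
}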
2) default.conf - will serve any other domain
server {
    listen 80;
    listen [::]:80 default_server;
    location / {
        proxy_pass http://server.domain.com:81;
    }
}
This configuration assumes that:
1) There is a container running Apache, and requests coming to your machine on port 81 will be passed to the apache2 container's port 80 (or whatever it's listening on)
2) There is a container running phpmyadmin, and requests coming to your machine on port 82 will be passed to the phpmyadmin container's port 80 (or whatever...)
SOME IMPROVEMENTS YOU SHOULD CONSIDER:
1) If you start all those containers with docker-compose, you'll be able to set up a virtual network for them. This would allow you to proxy pass requests straight to a container by name. In my project I do it like:
proxy_pass http://adminer;
where adminer is defined as:
adminer:
  image: phpmyadmin/phpmyadmin
  volumes:
    - ./db_interface/conf/config.inc.php/:/etc/phpmyadmin/config.inc.php
  networks:
    - demo_webnet
    - prod_webnet
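For container names to resolve, the nginx proxy service has to sit on the same networks; in docker-compose terms that is something like (a sketch, reusing the network names above):
nginx:
  image: nginx
  networks:
    - demo_webnet
    - prod_webnet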
If you have questions just ask, I'll explain.
2) You could place another nginx server together with your apache2 server in its container. They work nicely in tandem: nginx is better at serving static files, while apache2 better suits PHP in your case. I can show you how to do that as well.
IN CASE YOU NEED IT
It looks like you're trying to do something similar to what I did for our company's needs. If you're interested I can give you access to my project. I've built up a whole server infrastructure with Docker, and it is now deployed on our server. In short it works as follows:
nginx proxy container above all
a container with apache2-nginx-php5.6 for demo apps
a container with apache2-nginx-php7.0 for demo apps
a container with apache2-nginx-php5.6 for production apps
a container with apache2-nginx-php7.0 for production apps
a container with maria db for demo
a container with maria db for production
a container with phpmyadmin to access both db services
Any request comes to the nginx proxy.
It matches some virtual host and gets proxied to one of the four containers that have apache2 and nginx inside.
Also there is a lot of cool stuff, like cron configured to auto-reload apache2 and nginx when it detects changes to files, service supervision, https support, and so on.
I'm planning to further develop it as an open source project, whoever is interested should let me know.

deploy local nginx server to public ubuntu 16.04

I am trying to deploy my local nginx server to the public. The nginx server runs as a reverse proxy to my node express app which is also running locally on port 3000. Therefore I have created a symbolic link from /etc/nginx/sites-available/express TO /etc/nginx/sites-enabled/express, so my configuration file is called express and looks like this.
/etc/nginx/sites-enabled/express
upstream express_servers {
    server 127.0.0.1:3000;
}

server {
    listen 80;
    location / {
        proxy_pass http://express_servers;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
I have removed the default file from the sites-enabled folder, and I have not changed my nginx.conf file, which looks like this:
/etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;
    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

#mail {
#    # See sample authentication script at:
#    # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#    # auth_http localhost/auth.php;
#    # pop3_capabilities "TOP" "USER";
#    # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#    server {
#        listen localhost:110;
#        protocol pop3;
#        proxy on;
#    }
#
#    server {
#        listen localhost:143;
#        protocol imap;
#        proxy on;
#    }
#}
I also changed my firewall settings with ufw (Uncomplicated Firewall) to allow incoming HTTP access (specifically the Nginx profile). My ufw status looks like the following:
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
80/tcp (Nginx HTTP)        ALLOW IN    Anywhere
80                         ALLOW IN    Anywhere
80/tcp (Nginx HTTP (v6))   ALLOW IN    Anywhere (v6)
80 (v6)                    ALLOW IN    Anywhere (v6)
When I run load tests with wrk or loadtest (npm), everything seems to work fine. For example:
wrk -t12 -c50 -d5s http://192.168.178.57/getCats/eng
So locally I can access the nginx server, but when I try to access the server from outside with my phone (3G/4G), I can't reach it. What exactly did I miss?
EDIT: I'm trying to access the service by http://PUBLIC_IP_ADDR/getCats/eng, not the local addr.
Your nginx config looks perfectly fine.
To be able to access your server from outside you need a public static IP from your ISP. Also, your ISP should not block incoming traffic to ports 80 and 443 (in case you decide to go with https).
Then you probably have a LAN like this:
ISP <---> Router <---> Server
            ^
            |
            ----> your other devices
In this case the public IP will be assigned to the router; all other devices will have local private IPs from ranges like 192.168.0.0/16, 10.0.0.0/8, or 172.16.0.0/12.
You need to configure port forwarding on the router to the server's private IP. Depending on the router's vendor this feature may be called "virtual server" or similar, and it is usually found somewhere near the WAN configuration. Set it up to forward TCP port 80 to the server's local port 80, and the same for 443.
Also, you may want to configure the server with a static local IP so that its address will not change.
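On Ubuntu 16.04 a static address is typically set in /etc/network/interfaces (a sketch; the interface name and addresses are placeholders for your LAN):
auto eth0
iface eth0 inet static
    address 192.168.178.57
    netmask 255.255.255.0
    gateway 192.168.178.1
Alternatively, most routers let you reserve a fixed DHCP lease for the server's MAC address.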
I think you have to put
listen *:80;
in your file /etc/nginx/sites-enabled/express
(see the nginx listen doc)
I think it's not listening for requests from your ISP public IP as you have it now.

Nginx Basic Auth not Working

I am trying to password-protect the default server in my Nginx config. However, no username/password dialog is shown when I visit the site. Nginx returns the content as usual. Here is the complete configuration:
worker_processes 1;

events {
    multi_accept on;
}

http {
    include mime.types;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 30;
    tcp_nodelay on;
    gzip on;
    # Set path for Maxmind GeoLite database
    geoip_country /usr/share/GeoIP/GeoIP.dat;
    # Get the header set by the load balancer
    real_ip_header X-Forwarded-For;
    set_real_ip_from 0.0.0.0/0;
    real_ip_recursive on;

    server {
        listen 80;
        server_name sub.domain.com;
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/htpasswd/sub.domain.com.htpasswd;
        expires -1;
        access_log /var/log/nginx/sub.domain.com.access default;
        error_log /var/log/nginx/sub.domain.com.error debug;
        location / {
            return 200 '{hello}';
        }
    }
}
Interestingly, when I tried using an invalid file path as the value of auth_basic_user_file, the configtest still passed. This should not be the case.
Here's the Nginx and system info:
[root@ip nginx]# nginx -v
nginx version: nginx/1.8.0
[root@ip nginx]# uname -a
Linux 4.1.7-15.23.amzn1.x86_64 #1 SMP Mon Sep 14 23:20:33 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
We are using the Nginx RPM available through yum.
You need to add auth_basic and auth_basic_user_file inside of your location block instead of the server block.
location / {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/htpasswd/sub.domain.com.htpasswd;
    return 200 '{hello}';
}
Did you try to reload or stop-and-start your nginx after basic auth was added to the config? It is necessary to reload nginx with something like:
sudo -i service nginx reload
in order to make the new settings take effect.
Also, I would double-check the URLs that you are testing.
(Once I tried to test Nginx Basic Auth in an Nginx proxy configuration by accessing the URL of the resource that was behind the Nginx proxy, rather than the URL of Nginx itself.)
P.S.
As of 2018, using an invalid file path as the value of auth_basic_user_file still doesn't cause the configtest to fail.
Here's my version of Nginx:
nginx version: nginx/1.10.2
However, an invalid file path does cause the Basic Auth check itself to fail, resulting in a 403 Forbidden HTTP response after credentials are provided.
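Before reloading, it is therefore worth confirming the file actually exists at the configured path (a quick check using the path from the question):
$ sudo test -r /etc/nginx/htpasswd/sub.domain.com.htpasswd && echo readable || echo missing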
In my case, adding the directives to /etc/nginx/sites-available/default worked, whereas adding the directives to /etc/nginx/nginx.conf did not.
Of course this only happens if you have this in your nginx.conf file:
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
The config is simple (put it under location for a specific part of your website, or under server for your whole website):
server {
    location /foo/ {
        auth_basic "This part of website is restricted";
        auth_basic_user_file /etc/apache2/.htpasswd;
    }
}
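If you still need to create the password file, the htpasswd utility from apache2-utils is the usual tool (the username here is a placeholder):
$ sudo htpasswd -c /etc/apache2/.htpasswd someuser
Omit -c when adding further users, so the existing file is not overwritten.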

Error code: ssl_error_rx_record_too_long on nginx ubuntu server

I have a site which was running perfectly with Apache on an old Ubuntu server, and it also had https set up. But now for various reasons I need to move to a different server (a new Ubuntu server with a higher-spec configuration) and am trying to serve my site using Nginx, so I installed nginx (nginx/1.4.6 (Ubuntu)). Below are my nginx.conf settings:
server {
    listen 8005;
    location / {
        proxy_pass http://127.0.0.1:8001;
    }
    location /static/ {
        alias /root/apps/project/static/;
    }
    location /media/ {
        alias /root/apps/media/;
    }
}
# Https Server
server {
    listen 443;
    location / {
        # proxy_set_header Host $host;
        # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # proxy_set_header X-Forwarded-Protocol $scheme;
        # proxy_set_header X-Url-Scheme $scheme;
        # proxy_redirect off;
        proxy_pass http://127.0.0.1:8001;
    }
    server_tokens off;
    ssl on;
    ssl_certificate /etc/ssl/certificates/project.com.crt;
    ssl_certificate_key /etc/ssl/certificates/www.project.com.key;
    ssl_session_timeout 20m;
    ssl_session_cache shared:SSL:10m; # ~ 40,000 sessions
    ssl_protocols SSLv3 TLSv1; # SSLv2
    ssl_ciphers ALL:!aNull:!eNull:!SSLv2:!kEDH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+EXP:@STRENGTH;
    ssl_prefer_server_ciphers on;
}
Since I already had the https certificate (project.com.crt) and key (www.project.com.key) running on the other server, I just copied them to the new server (which has no domain pointing at it as of now, only an IP), placed them at /etc/ssl/certificates/, and tried to use them directly. I then restarted Nginx and tried to access my IP 23.xxx.xxx.xx:8005 as https://23.xxx.xxx.xx:8005, and I get the following error in Firefox:
Secure Connection Failed
An error occurred during a connection to 23.xxx.xxx.xx:8005. SSL received a record that exceeded the maximum permissible length. (Error code: ssl_error_rx_record_too_long)
The page you are trying to view cannot be shown because the authenticity of the received data could not be verified.
Please contact the website owners to inform them of this problem. Alternatively, use the command found in the help menu to report this broken site.
But when I access the IP without https, the site is served fine.
So what's wrong with the https settings in the above nginx conf file?
Can the certificate files not be used by simply copying them into some folder? Do we need to create an extra certificate for the new server?
Change
listen 443;
to
listen 443 ssl;
and get rid of this line
ssl on;
That should fix your SSL issue, but it looks like you have several issues in your configuration.
So what's wrong with the https settings in the above nginx conf file?
You don't have an SSL/TLS server listening on the port the client is trying to connect to. The ssl_error_rx_record_too_long error occurs because the client's SSL stack is trying to interpret an HTTP response as SSL/TLS data. A Wireshark trace should confirm the issue. Look at the raw bytes (follow the stream).
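Without Wireshark, a quick probe with openssl shows the same thing (the address is the question's placeholder; against a plain-HTTP listener the handshake fails immediately):
$ openssl s_client -connect 23.xxx.xxx.xx:8005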
I don't know why the configuration is not correct. Perhaps someone with Nginx config experience can help. Or, the folks on Server Fault or Webmaster Stack Exchange.
This problem happens when the client gets non-SSL content over an SSL connection: the server sends plain HTTP content, but the client expects HTTPS content. There are two main things to check, though it can be caused by other side effects too.
Make sure you put ssl on the listen directive:
listen [PORT_NUMBER] ssl;
Check the host IP address you are trying to connect to. DNS can be correct, but you may have an overriding entry in your hosts file or on your local DNS server.