NGINX problems after uninstalling Apache (duplicate server)

I tried to install NGINX on my Debian server. Before switching to NGINX I used Apache 2.4 and uninstalled it before installing NGINX.
My problem now is that I can't get it to work; the error is the following: "[emerg] a duplicate default server for 0.0.0.0:80 in /etc/nginx/sites-enabled/justarandomname.conf:4"
And yes, there are many posts about this problem, but none of them fixed it for me.
Additional information:
I uninstalled Apache properly (I think) and shut it down before uninstalling. dpkg is not detecting any Apache leftovers, and I deleted the Apache folder.
My sites-enabled directory contains only "justarandomname" and "justarandomname.conf"; I deleted "default" (no other hidden files in there).
NGINX had some problems during installation, but after finishing it manually it worked.
"justarandomname" looks like this:
server {
server_name mydomain.abc www.mydomain.abc;
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/html;
}
and my "justarandomname.conf" looks like this:
server {
server_name mydomain.abc www.mydomain.abc;
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/html;
}
My nginx.conf:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
#mail {
# # See sample authentication script at:
# # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
# # auth_http localhost/auth.php;
# # pop3_capabilities "TOP" "USER";
# # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
# server {
# listen localhost:110;
# protocol pop3;
# proxy on;
# }
#
# server {
# listen localhost:143;
# protocol imap;
# proxy on;
# }
#}
EDIT: Of course I restarted the server multiple times.

In your nginx.conf, you include all files under /etc/nginx/sites-enabled/ by
include /etc/nginx/sites-enabled/*
and your justarandomname and justarandomname.conf files both contain the same line
listen 80 default_server
That is what is causing your problem. See here:
The default_server parameter, if present, will cause the server to become the default server for the specified address:port pair.
You can either delete justarandomname or change that line in nginx.conf to
include /etc/nginx/sites-enabled/*.conf
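Either way, it helps to validate the configuration and reload before testing again. A minimal check, assuming a standard Debian setup where nginx runs under systemd:
sudo rm /etc/nginx/sites-enabled/justarandomname   # or keep it and switch to the *.conf include instead
sudo nginx -t                                      # should no longer report a duplicate default server
sudo systemctl reload nginx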

Related

I have followed all instructions but cannot get TLS 1.3 on NGINX to show

I am trying to enable TLS 1.3 on my server. I have followed an abundance of articles found on Google and have the same config settings in my own config, yet I cannot get it past TLS 1.2.
I am on Ubuntu 16.
I am using NGINX version 1.14 which is built with OpenSSL 1.1.1.
➜ nginx -V
nginx version: nginx/1.14.2
built with OpenSSL 1.1.1 11 Sep 2018 (running with OpenSSL 1.1.1a 20 Nov 2018)
TLS SNI support enabled
From everything I have seen, these are the software versions required to support TLS 1.3.
I'm using Chrome 72 and SSL Labs to test the certificate, but they always say it's on TLS 1.2.
Here is the part of my NGINX config file that's related to the SSL options
ssl_protocols TLSv1.3 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
ssl_ecdh_curve X25519:secp256k1:secp384r1:prime256v1;
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES25
ssl_session_timeout 10m;
ssl_session_cache shared:SSL:10m;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 216.146.35.35 216.146.36.36 valid=60s;
resolver_timeout 2s;
I got the Ciphers from https://cipherli.st.
With these configuration options, I cannot get past the TLS 1.2 protocol.
I believe this covers everything I can think of that might be causing the issue, but I can provide any further details you might need to help my case.
Thanks,
Chris
Enabling TLSv1.3 on Nginx might look pretty straightforward, but it is not documented as well as it should be.
Cutting to the chase: the trick is to include the SSL settings in every server block of your config. Not doing so results in TLSv1.3 being disabled, because the TLS protocol is not "upgraded" upon the first request that hits the server:
sudo vi ssl_config
add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload';
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header Referrer-Policy no-referrer;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers on;
ssl_session_tickets on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_ecdh_curve auto;
keepalive_timeout 70;
ssl_buffer_size 1400;
ssl_dhparam ssl/dhparam.pem;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=86400;
resolver_timeout 10;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_trusted_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
And:
server {
server_name xxx.xxx.xxx.xxx; #Your current server ip address. It will redirect to the domain name.
listen 80;
listen 443 ssl http2;
include ssl_config;
return 301 https://example.com$request_uri;
}
server {
server_name www.example.com;
listen 80;
listen 443 ssl http2;
listen [::]:80;
listen [::]:443 ssl http2;
include ssl_config;
# Non-www redirect
return 301 https://example.com$request_uri;
}
server {
server_name example.com;
listen 443 ssl http2;
listen [::]:443 ssl http2;
root /var/www/html;
charset UTF-8;
include ssl_config;
location ~* \.(jpg|jpe?g|gif|png|ico|cur|gz|svgz|mp4|ogg|ogv|webm|htc|css|js|otf|eot|svg|ttf|woff|woff2)(\?ver=[0-9.]+)?$ {
expires max;
add_header Access-Control-Allow-Origin '*';
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
access_log off;
}
#access_log logs/host.access.log main;
location ~ /.well-known/acme-challenge {
allow all;
root /var/www/html;
default_type "text/plain";
}
location / {
index index.php;
try_files $uri $uri/ /index.php?$args;
#limit_conn num_conn 15;
#limit_req zone=num_reqs;
}
error_page 404 /404.php;
#pass the PHP scripts to FastCGI server listening on php-fpm unix socket
location ~ \.php$ {
try_files $uri =404;
fastcgi_index index.php;
fastcgi_pass php:9000; #for docker.
#fastcgi_pass unix:/var/run/php7-fpm.sock; #for non-docker.
fastcgi_pass_request_headers on;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_intercept_errors on;
fastcgi_ignore_client_abort off;
fastcgi_connect_timeout 60;
fastcgi_send_timeout 180;
fastcgi_read_timeout 180;
fastcgi_request_buffering on;
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
include fastcgi_params;
}
location = /robots.txt {
access_log off;
log_not_found off;
}
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
}
Now it will work 100%, using the strongest ciphers available.
I made a blog post a while back about how to enable TLS 1.3 in Nginx.
As an added bonus, as of versions 1.18.0, 1.17.10 and above, I maintain fresh TLS 1.3-enabled Docker images.
Your ssl_protocols should be ordered as TLSv1.2 TLSv1.3.
Then, your ssl_ciphers should include the list of TLSv1.3 ciphers first (in this order):
TLS_AES_256_GCM_SHA384
TLS_CHACHA20_POLY1305_SHA256
TLS_AES_128_GCM_SHA256
TLS_AES_128_CCM_8_SHA256
TLS_AES_128_CCM_SHA256
followed by your TLSv1.2 ciphers. Here's what the tls13.iachieved.it nginx.conf looks like:
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:TLS_AES_128_CCM_8_SHA256:TLS_AES_128_CCM_SHA256:ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384;
ssl_prefer_server_ciphers on;
Connecting to it with Chrome 72, the response from the site is:
Your User Agent is: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36
Your client supports the following ciphers: 0x2a2a:TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA:AES256-SHA:0x000a
The negotiated cipher with this server is: TLS_AES_256_GCM_SHA384
Note that the "your client supports the following ciphers" list is what your web browser supports, not the server.
Did you also check /etc/nginx/sites-enabled/yoursite and if you are using Let's Encrypt, /etc/letsencrypt/options-ssl-nginx.conf? Only editing /etc/nginx/nginx.conf might not be enough.
I experienced the same problem today. For me, the reason was that I use Let's Encrypt's Certbot. It creates /etc/letsencrypt/options-ssl-nginx.conf, where the ssl_protocols are also defined. If you don't adjust them there, changing /etc/nginx/nginx.conf won't help.
Be careful when editing /etc/letsencrypt/options-ssl-nginx.conf as it is managed by Certbot. Check that everything is still working using sudo certbot renew --dry-run.
For further reading, I recommend https://libre-software.net/tls-nginx/.
Anyone seeing this should also be sure to check/adjust the protocol and cipher list in the default server block in nginx.conf.
Just try to start nginx with a simple SSL configuration:
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
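To confirm from the command line which protocol the server actually negotiates (independent of the browser), you can use openssl. A quick check, assuming OpenSSL 1.1.1 or newer on the client and substituting your own domain for example.com:
openssl s_client -connect example.com:443 -tls1_3 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'
If TLS 1.3 is enabled, the output shows Protocol: TLSv1.3 and one of the TLS_* ciphers; if the forced TLS 1.3 handshake fails, the server is still limited to TLS 1.2 or below.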

nginx wildcard ssl configuration

I have this nginx configuration for my site and am using a wildcard certificate for my domain:
server {
server_name *.domain;
root /var/www;
index index.php;
listen *:80;
listen *:443 ssl http2;
listen [::]:443 ssl http2;
# indicate locations of SSL key files.
ssl_certificate /etc/nginx/ssl/domain.chained.crt;
ssl_certificate_key /etc/nginx/ssl/domain.key;
ssl_trusted_certificate /etc/nginx/ssl/domain.crt;
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
ssl_stapling on;
# Enable HSTS. This forces SSL on clients that respect it, most modern browsers. The includeSubDomains flag is optional.
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
# Set caches, protocols, and accepted ciphers. This config will merit an A+ SSL Labs score as of Sept 2015.
ssl_session_cache shared:SSL:20m;
ssl_session_timeout 10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
# config to enable HSTS(HTTP Strict Transport Security) https://developer.mozilla.org/en-US/docs/Security/HTTP_Strict_Transport_Security
# to avoid ssl stripping https://en.wikipedia.org/wiki/SSL_stripping#SSL_stripping
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
# WordPress single site rules.
# Designed to be included in any server {} block.
# This order might seem weird - this is attempted to match last if rules below fail.
# http://wiki.nginx.org/HttpCoreModule
location / {
try_files $uri $uri/ /index.php?$args;
}
# Add trailing slash to */wp-admin requests.
rewrite /wp-admin$ $scheme://$host$uri/ permanent;
# Directives to send expires headers and turn off 404 error logging.
location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
access_log off; log_not_found off; expires max;
}
# Uncomment one of the lines below for the appropriate caching plugin (if used).
#include global/wordpress-wp-super-cache.conf;
#include global/wordpress-w3-total-cache.conf;
# Pass all .php files onto a php-fpm/php-fcgi server.
location ~ [^/]\.php(/|$) {
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
if (!-f $document_root$fastcgi_script_name) {
return 404;
}
# This is a robust solution for path info security issue and works with "cgi.fix_pathinfo = 1" in /etc/php.ini (default)
include fastcgi_params;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
# fastcgi_intercept_errors on;
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
fastcgi_buffer_size 16k;
fastcgi_buffers 4 16k;
}
error_log /var/log/nginx/error.log;
access_log /var/log/nginx/access.log;
}
But I'm getting the error
NET::ERR_CERT_COMMON_NAME_INVALID
with message
This server could not prove that it is staging.wp.domain; its security certificate is from *.domain. This may be caused by a misconfiguration or an attacker intercepting your connection.
What am I missing?
Thanks
This server could not prove that it is staging.wp.domain; its security certificate is from *.domain
Since you're using "example" names in your post, it's a bit difficult to say, but I suspect you are trying to cover multiple subdomain levels with a wildcard, which doesn't work.
Let's say you have a certificate that is valid for these names:
example.com
*.example.com
This is likely the kind of wild card certificate you have. You can tell by looking at the Subject Alternative Name in the certificate.
The "*" in a certificate does not mean "many levels deep", it means "one level deep".
These domains are valid for our certificate:
foo.example.com
bar.example.com
example.com
These are not valid for this certificate:
foo.bar.example.com
bar.foo.example.com
Your only option here is to get a certificate for *.wp.domain, or just staging.wp.domain if you don't need a wildcard. A CA won't issue a certificate that is valid for *.*.example.com, and even if one did, browsers would ignore that kind of wildcard rule.
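If you want to double-check which names the certificate actually covers, you can print its Subject Alternative Name entries with openssl. A quick sketch, assuming the certificate path from the config above:
openssl x509 -in /etc/nginx/ssl/domain.chained.crt -noout -text | grep -A1 'Subject Alternative Name'
If the output lists only domain and *.domain, then staging.wp.domain is not covered, which matches the browser error.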

deploy local nginx server to public ubuntu 16.04

I am trying to deploy my local nginx server to the public internet. The nginx server runs as a reverse proxy to my node express app, which is also running locally on port 3000. Therefore I have created a symbolic link from /etc/nginx/sites-available/express to /etc/nginx/sites-enabled/express, so my configuration file is called express and looks like this.
/etc/nginx/sites-enabled/express
upstream express_servers{
server 127.0.0.1:3000;
}
server {
listen 80;
location / {
proxy_pass http://express_servers;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
I have removed the default file from the sites-enabled folder and I have not changed my nginx.conf file which looks like this
/etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
#mail {
# # See sample authentication script at:
# # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
# # auth_http localhost/auth.php;
# # pop3_capabilities "TOP" "USER";
# # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
# server {
# listen localhost:110;
# protocol pop3;
# proxy on;
# }
#
# server {
# listen localhost:143;
# protocol imap;
# proxy on;
# }
#}
I also changed my firewall settings with ufw (Uncomplicated Firewall) to allow incoming HTTP access (specifically for nginx). My ufw status looks like the following:
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To                         Action      From
--                         ------      ----
80/tcp (Nginx HTTP)        ALLOW IN    Anywhere
80                         ALLOW IN    Anywhere
80/tcp (Nginx HTTP (v6))   ALLOW IN    Anywhere (v6)
80 (v6)                    ALLOW IN    Anywhere (v6)
When I am running load tests with wrk or loadtest (npm), everything seems to work fine. For example:
wrk -t12 -c50 -d5s http://192.168.178.57/getCats/eng
So locally I can access the nginx server, but when I try to access the server from the public internet with my phone (3G/4G), I can't reach it. What exactly did I miss?
EDIT: I'm trying to access the service via http://PUBLIC_IP_ADDR/getCats/eng, not the local address.
Your nginx config looks perfectly fine.
To be able to access your server from outside, you need a public static IP from your ISP. Also, the ISP should not block incoming traffic to ports 80 and 443 (in case you decide to go with HTTPS).
Then you probably have a LAN like this:
ISP <---> Router <---> Server
            ^
            |
            ----> your other devices
In this case the public IP will be assigned to the router, and all other devices will have local private IPs like 192.168.x.x, 10.x.x.x, or 172.16.x.x.
You need to configure port forwarding on the router to the server's private IP. Depending on the router's vendor, this feature may be called "virtual server" or similar and is usually found somewhere near the WAN configuration. Set it up to forward TCP port 80 to the server's local port 80, and do the same for 443.
Also, you may want to give the server a static local IP so that its address does not change.
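Once the router is forwarding the ports, you can verify each hop in turn. A rough checklist, assuming the server IP and URL from the question (PUBLIC_IP_ADDR stands for whatever public IP your router currently has):
# on the server: nginx should be listening on 0.0.0.0:80, not only 127.0.0.1
sudo ss -tlnp | grep ':80'
# still on the LAN: confirm the request works via the server's private IP
curl -i http://192.168.178.57/getCats/eng
# from a device outside the LAN (e.g. the phone on 3G/4G):
curl -i http://PUBLIC_IP_ADDR/getCats/eng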
I think you have to put
listen *:80
in your file /etc/nginx/sites-enabled/express
nginx listen doc
I think it's not listening for requests from your ISP's public IP as you have it now.

Docker Nginx complains: SSL: error:02001002

I'm using the following nginx.conf
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
server {
listen 80;
server_name mydomain.org;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
ssl_certificate /etc/letsencrypt/live/mydomain.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mydomain.org/privkey.pem;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/certs/dhparam.pem;
add_header Strict-Transport-Security max-age=15768000;
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/letsencrypt/live/mydomain.org/chain.pem;
resolver 8.8.8.8 8.8.4.4 valid=86400;
root /var/www/html;
index index.php;
location / {
try_files $uri $uri/ /index.php?$args;
}
rewrite /wp-admin$ $scheme://$host$uri/ permanent;
location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
access_log off; log_not_found off; expires max;
}
location ~ [^/]\.php(/|$) {
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
if (!-f $document_root$fastcgi_script_name) {
return 404;
}
root /var/www/html;
fastcgi_pass wp_db:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
include fastcgi_params;
}
}
}
But nginx container complains with:
nginx: [emerg] BIO_new_file("/etc/letsencrypt/live/mydomain.org/fullchain.pem") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/mydomain.org/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
I have all the Let's Encrypt certificates at that path.
I found this thread
https://serverfault.com/questions/537343/nginx-startup-fails-ssl-no-such-file-or-directory
And did
chown -R root:root /etc/letsencrypt/live/mydomain.org/fullchain.pem
chmod -R 600 /etc/letsencrypt/live/mydomain.org/fullchain.pem
The same error was thrown from the nginx container. I've also placed the certs in /docker-compose/etc/nginx/certs, given them the same permissions, and changed the paths in nginx.conf, but nothing changed.
What am I missing?
I was experiencing the same problem deploying harbor (a docker registry + access control UI) using volume mapping /etc/letsencrypt:/etc/letsencrypt
nginx reported "no such file" when loading the certificate file, even though I could enter that container (docker exec bash ..) and cat the files using the exact same path.
I suspected the problem was caused by letsencrypt's use of symlinks, so my solution was to copy the live certs into another folder using cp -rL (to dereference the symlinks):
root@registry:/etc/letsencrypt# mkdir copy
root@registry:/etc/letsencrypt# cp -rL live/* copy/
then I changed the nginx.conf to refer to 'copy' instead of 'live'
Now nginx correctly starts inside docker.
This is not a long-term solution because when the certs are renewed the copy won't get automatically updated. But since I'll be running letsencrypt renew from a cronjob, that task can run the copy process again.
Also I've read that nginx must be restarted if the certs change, so that's another issue I'll need to face. But at least nginx starts correctly now.
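For the renewal part, the copy can be chained onto the renew command itself so the copied certs never go stale. A minimal sketch of such a cron entry, assuming the copy/ folder from above and a container named nginx (both names are just this example's choices):
# /etc/cron.d/letsencrypt-renew (illustrative)
0 3 * * 1 root certbot renew --quiet && cp -rL /etc/letsencrypt/live/* /etc/letsencrypt/copy/ && docker restart nginx
Restarting (or reloading) the container after the copy also covers the second issue mentioned above, since nginx only reads the certificate files at startup or on reload.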
I got this error when I renamed apps in Dokku (0.5.4). What had happened was that the symlinks in the new app directory pointed to the old app name, e.g.
/home/dokku/[new app]/letsencrypt/certs/current -> /home/dokku/[old app]/letsencrypt/certs/f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1
So I manually recreated the links so they pointed to the right place.
Try prefixing the paths with /root:
ssl_certificate /root/etc/letsencrypt/live/mydomain.org/fullchain.pem;
ssl_certificate_key /root/etc/letsencrypt/live/mydomain.org/privkey.pem;
I solved the problem like this.
Joy
I wasted a day today and found the solution.
Run the nginx Docker container with
-v /etc/letsencrypt/archive/your_domain.com:/nginx/letsencrypt/your_domain.com
and in nginx.conf:
ssl_certificate /nginx/letsencrypt/your_domain.com/fullchain1.pem;
ssl_certificate_key /nginx/letsencrypt/your_domain.com/privkey1.pem;

Why is gunicorn behind nginx with ssl 88% slower than gunicorn alone?

So, I have a simple Flask API application running on gunicorn with tornado workers. The gunicorn command line is:
gunicorn -w 64 --backlog 2048 --keep-alive 5 -k tornado -b 0.0.0.0:5005 --pid /tmp/gunicorn_api.pid api:APP
When I run Apache Benchmark from another server directly against gunicorn, here are the relevant results:
ab -n 1000 -c 1000 'http://****:5005/v1/location/info?location=448&ticket=55384&details=true&format=json&key=****&use_cached=true'
Requests per second: 2823.71 [#/sec] (mean)
Time per request: 354.144 [ms] (mean)
Time per request: 0.354 [ms] (mean, across all concurrent requests)
Transfer rate: 2669.29 [Kbytes/sec] received
So we're getting close to 3k reqs/sec for performance.
Now, I need SSL, so I'm running nginx as a reverse proxy. Here is what the same benchmark looks like against nginx on the same server:
ab -n 1000 -c 1000 'https://****/v1/location/info?location=448&ticket=55384&details=true&format=json&key=****&use_cached=true'
Requests per second: 355.16 [#/sec] (mean)
Time per request: 2815.621 [ms] (mean)
Time per request: 2.816 [ms] (mean, across all concurrent requests)
Transfer rate: 352.73 [Kbytes/sec] received
That's a drop in performance of 87.4%. But for the life of me, I cannot figure out what is wrong with my nginx setup, which is this:
upstream sdn_api{
server 127.0.0.1:5005;
keepalive 100;
}
server {
listen [::]:443;
ssl on;
ssl_certificate /etc/ssl/certs/api.sdninja.com.crt;
ssl_certificate_key /etc/ssl/private/api.sdninja.com.key;
ssl_protocols SSLv3 TLSv1;
ssl_ciphers ALL:!kEDH:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM;
ssl_session_cache shared:SSL:10m;
server_name api.*****.com;
access_log /var/log/nginx/sdn_api.log;
location / {
proxy_pass http://sdn_api;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 100M;
client_body_buffer_size 1m;
proxy_intercept_errors on;
proxy_buffering on;
proxy_buffer_size 128k;
proxy_buffers 256 16k;
proxy_busy_buffers_size 256k;
proxy_temp_file_write_size 256k;
proxy_max_temp_file_size 0;
proxy_read_timeout 300;
}
}
And my nginx.conf:
user www-data;
worker_processes 8;
pid /var/run/nginx.pid;
events {
worker_connections 2048;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip off;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##
#include /etc/nginx/naxsi_core.rules;
##
# nginx-passenger config
##
# Uncomment it if you installed nginx-passenger
##
#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
So does anyone have any idea why it's running so slow with this config? Thanks!
A large part of HTTPS overhead is in the handshake. Pass -k to ab to enable persistent connections. You will see that the benchmark is now significantly faster.
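For comparison, the keep-alive variant of the earlier benchmark would simply be (same placeholders as in the question):
ab -k -n 1000 -c 1000 'https://****/v1/location/info?location=448&ticket=55384&details=true&format=json&key=****&use_cached=true'
With -k, ab reuses each connection for multiple requests, so the TLS handshake cost is paid once per connection instead of once per request.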