I just bought a RapidSSL certificate from Name.com and tried to install it following this guide:
https://www.digitalocean.com/community/tutorials/how-to-install-an-ssl-certificate-from-a-commercial-certificate-authority
So when I ran
sudo service nginx restart
I got this:
Job for nginx.service failed because the control process exited with error code. See "systemctl status nginx.service" and "journalctl -xe" for details.
So this is my /etc/nginx/sites-available/default
server {
    listen 80;
    server_name mydomain.co;
    rewrite ^/(.*) https://mydomain.co/$1 permanent;
}

server {
    listen 443 ssl;

    ssl_certificate ~/key/www.mydomain.co.chained.crt;
    ssl_certificate_key ~/key/www.mydomain.co.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

    server_name mydomain.co;
    root /www/mydomain/build;
    index index.html index.htm;

    rewrite ^/(.*)/$ $1 permanent;

    location ~ ^.+\..+$ {
        try_files $uri =404;
    }

    location / {
        try_files $uri $uri/ /index.html;
    }

    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
        return 404;
    }
}
But when I remove this line
ssl_certificate ~/key/www.mydomain.co.chained.crt;
I can restart nginx.
Anyone know how to fix this?
Thanks!
The ~ in your nginx config file is probably not working the way you intended. I assume you meant it to expand to /home/username/key/www.mydomain.co.chained.crt, but nginx does not expand ~; it treats it as an ordinary directory name and resolves the relative path against its configuration prefix (here /etc/nginx).
To confirm this, re-add the config line and run nginx -t. You will see nginx's config-check error:
nginx: [emerg] BIO_new_file("/etc/nginx/~/key/www.mydomain.co.chained.crt") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/~/key/www.mydomain.co.chained.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file)
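Given that, the fix is to use an absolute path. A minimal sketch, assuming your login is username and the files really live under /home/username/key (adjust to your actual paths):

ssl_certificate     /home/username/key/www.mydomain.co.chained.crt;
ssl_certificate_key /home/username/key/www.mydomain.co.key;

Also make sure the user nginx starts as can read both files; the private key in particular is usually kept at mode 600.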
I can't comment because of my new user reputation, but do you mind pasting the nginx error log? The reason for the failure should be there.
The two things I can think of off the top of my head are:
1. Wrong file permissions or a bad location.
2. Wrong .crt contents: make sure your certificate file contains the combined certificate plus the CA intermediate certificates in the right order (your certificate first, the CA's after), and that when you pasted them you did not add extra lines or drop any characters (see the sketch below).
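For reference, a hedged sketch of building the chained file, assuming your certificate and the intermediate arrived as separate files named www.mydomain.co.crt and intermediate.crt (those names are my assumption):

# your certificate must come first, the intermediate(s) after it
cat www.mydomain.co.crt intermediate.crt > www.mydomain.co.chained.crt

# sanity-check that the result still parses as a certificate
openssl x509 -in www.mydomain.co.chained.crt -noout -subject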
I am trying to use nginx as a reverse proxy with SSL to access my locally running web services, which are deployed as docker containers. When specifying locations in nginx, I do get the start page of the server, but I am not able to follow any links on that page. Besides that, images from my web service are not displayed.
I have already read the nginx documentation and tried out a lot of different things. For instance, when I just omit the location, the web service runs perfectly fine.
Working example of the nginx.conf:
location / {
    proxy_pass http://127.0.0.1:7081/;
    include /etc/nginx/proxy_params;
}
Not Working example of the nginx.conf:
location /wiki/ {
    rewrite ^/wiki(.*) /$1 break;
    proxy_pass http://127.0.0.1:7081/;
    include /etc/nginx/proxy_params;
}
I am obviously missing something in the latter example. Does anyone know what I am missing, so that I can simply proxy-pass requests directly to my dockerized web service?
EDIT:
Here is a more complete and hopefully reproducible example:
The docker container I launched was simply a base MediaWiki, published internally on localhost on port 7081:
docker run --name some-mediawiki -p 127.0.0.1:7081:80 -d mediawiki
The file in /etc/nginx/sites-available/default looks like this:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    server_name my.domain.de;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name my.domain.de*;

    # SSL certificate and key
    ssl_certificate /etc/ssl/certs/my_full_chain.pem;
    ssl_certificate_key /etc/ssl/private/my-key.pem;

    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 5m;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;

    location /wiki {
        rewrite ^/wiki(.*) /$1 break;
        proxy_pass http://127.0.0.1:7081/;
        include /etc/nginx/proxy_params;
    }
}
I originally used certbot for my first site (default)'s cert. All was well and it has worked wonderfully for the past six months.
Recently I tried to add another site to my server, and the problem occurs when I try to add a certificate to it. (It works fine running on HTTP, using port 80.)
I followed the exact same steps as before, using certbot to generate the SSL cert (albeit changing the names), and I had no issues.
However, now when I add that cert for site2, it redirects to default and shows as not secure in the URL bar.
If I go to default, it works fine and is still certified.
I'm certain it is an issue with the certificate for site2, but I'm not sure where the issue lies.
My original website "default" is a PHP site, whereas the second site "site2" is plain HTML.
Default's config:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name default.com www.default.com;
    return 301 https://www.default.com$request_uri;
}

server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;

    include /etc/nginx/snippets/ssl-default.com.conf;
    include /etc/nginx/snippets/ssl-params.conf;

    location ~ /.well-known {
        allow all;
    }

    root /var/www/default.com/site;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name _;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.1-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
}
site2's config:
server {
    listen 80;
    listen [::]:80;
    server_name site2.com www.site2.com;
    return 302 https://www.site2.com$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name _;

    include /etc/nginx/snippets/ssl-site2.com.conf;
    include /etc/nginx/snippets/ssl-params.conf;

    location ~ /.well-known {
        allow all;
    }

    root /var/www/site2.com/;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ /\.ht {
        deny all;
    }
}
sudo nginx -t output:
[warn] "ssl_stapling" ignored, issuer certificate not found
nginx: [warn] conflicting server name "_" on 0.0.0.0:443, ignored
nginx: [warn] conflicting server name "_" on [::]:443, ignored
nginx: [warn] conflicting server name "default.com" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "www.default.com" on 0.0.0.0:80,
ignored
nginx: [warn] conflicting server name "default.com" on [::]:80, ignored
nginx: [warn] conflicting server name "www.default.com" on [::]:80, ignored
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
ssl-params.conf contains:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_ecdh_curve secp384r1;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
# disable HSTS header for now
#add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
ssl-site2.com.conf has the locations of the privkey and fullchain (same format and location as default's, just with the names changed).
Checking the "not secure" warning in the URL bar shows that the cert served for site2 is issued to default.
server_name _ should not be used on SSL hosts unless there is only one virtual host; you should set an explicit name in each server section.
Most likely, you have two or more configuration files that set the same server_name value for the same port. As far as I can see, one of these files is default. Check /etc/nginx/sites-enabled for the existing sites.
Try removing the server_name _; line from both :443 configs and setting the real names instead, and it should work (see the sketch below).
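A minimal sketch of the change, keeping everything else in the two :443 blocks as it is:

# in default's config
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name default.com www.default.com;
    ...
}

# in site2's config
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name site2.com www.site2.com;
    ...
}

With distinct names, nginx can use SNI to pick the matching certificate for each host instead of always serving the default_server's cert, which is exactly the "issued to default" symptom you saw.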
I have an issue wherein I am building an nginx reverse proxy that directs to multiple microservices at different URL paths.
The system is entirely docker based, and as a result the same environment is used for development and production. This has caused an issue for me when installing SSL, because the SSL certs are only available in production: when I configure nginx with SSL, the development environment no longer works, as the certs are not present.
Here is the relevant part of my conf file:
server {
    listen 80;
    listen 443 default_server ssl;
    server_name atvcap.server.com;
    ssl_certificate /etc/nginx/certs/atvcap_cabundle.crt;
    ssl_certificate_key /etc/nginx/certs/atvcap.key;
    ...
}
But this throws the following when running my application in development mode:
nginx: [emerg] BIO_new_file("/etc/nginx/certs/atvcap_cabundle.crt") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/certs/atvcap_cabundle.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file)
Is it possible to only turn on SSL if /etc/nginx/certs/atvcap_cabundle.crt is available?
I had tried something like the following:
if (-f /etc/nginx/certs/atvcap_cabundle.crt) {
    ssl_certificate /etc/nginx/certs/atvcap_cabundle.crt;
    ssl_certificate_key /etc/nginx/certs/atvcap.key;
}
But that threw the following error:
nginx: [emerg] "ssl_certificate" directive is not allowed here in /etc/nginx/conf.d/default.conf:7
Anyone have any ideas on how to achieve something like this?
Thanks
You can create an additional file, ssl.conf, and put the SSL config there:
ssl_certificate /etc/nginx/certs/atvcap_cabundle.crt;
ssl_certificate_key /etc/nginx/certs/atvcap.key;
Then include it from the main config:
server_name atvcap.server.com;
include /somepath/ssl.conf*;
Make sure to include the * symbol: with it, nginx will not fail when the file does not exist in development mode.
@super_p's answer is correct, but to answer @AbdolHosein's comment I am adding my answer here, in case it's not clear.
You need to put your ssl_certificate directives in the included file.
# sample nginx config
http {
    server {
        listen 80 deferred;
        server_name _;

        include /ssl/ssl.conf*;

        client_body_timeout 5s;
        client_header_timeout 5s;
        root /code;
    }
}
Then in your /ssl/ssl.conf you can do whatever you want, such as enabling HTTPS:
# this is the /ssl/ssl.conf file
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_certificate /ssl/cert.cer;
ssl_certificate_key /ssl/key.key;
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m;
ssl_session_tickets off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
The trick is that we don't check whether the certificate exists; we check whether /ssl/ssl.conf exists. This works thanks to the * in the include /ssl/ssl.conf*; directive, as stated by @super_p.
I'm running an nginx server on my Raspberry Pi and it seems to be working just fine over HTTP.
Recently, I decided to add HTTPS support to my server and got a certificate from Let's Encrypt.
It still works like a charm if you send requests from the local network, but every external request via HTTPS ends with a 504 Gateway Timeout error.
Here is my config:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name domain.name;

    ssl_session_cache shared:SSL:20m;
    ssl_session_timeout 180m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DHE+AES128:!ADH:!AECDH:!MD5;
    ssl_certificate /etc/letsencrypt/live/domain.name/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.name/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/domain.name/chain.pem;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    location ~ /.well-known {
        allow all;
        root /usr/share/nginx/html;
    }
}
Found out that my ISP has a firewall service active by default, and it was blocking all connections to port 443. Disabling it resolved my issue.
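If you suspect the same thing, a quick way to confirm it (domain.name stands in for your real host) is to probe port 443 from outside your own network, e.g. from a phone hotspot or a remote VPS:

# does a TCP connection to 443 succeed at all?
nc -zv domain.name 443
# does a full TLS request get through?
curl -v --connect-timeout 10 https://domain.name/

If the TCP connection itself times out while the same request works from the LAN, the traffic is being dropped before it ever reaches nginx (router, ISP firewall, or a missing port forward).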
I'm using the following nginx.conf:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    server {
        listen 80;
        server_name mydomain.org;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl http2;

        ssl_certificate /etc/letsencrypt/live/mydomain.org/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/mydomain.org/privkey.pem;
        ssl_session_timeout 1d;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;
        ssl_protocols TLSv1.1 TLSv1.2;
        ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
        ssl_prefer_server_ciphers on;
        ssl_dhparam /etc/nginx/certs/dhparam.pem;

        add_header Strict-Transport-Security max-age=15768000;

        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_trusted_certificate /etc/letsencrypt/live/mydomain.org/chain.pem;
        resolver 8.8.8.8 8.8.4.4 valid=86400;

        root /var/www/html;
        index index.php;

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        rewrite /wp-admin$ $scheme://$host$uri/ permanent;

        location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
            access_log off;
            log_not_found off;
            expires max;
        }

        location ~ [^/]\.php(/|$) {
            fastcgi_split_path_info ^(.+?\.php)(/.*)$;
            if (!-f $document_root$fastcgi_script_name) {
                return 404;
            }
            root /var/www/html;
            fastcgi_pass wp_db:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
            include fastcgi_params;
        }
    }
}
But the nginx container complains with:
nginx: [emerg] BIO_new_file("/etc/letsencrypt/live/mydomain.org/fullchain.pem") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/mydomain.org/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
I do have all the Let's Encrypt certificates at that path.
I found this thread:
https://serverfault.com/questions/537343/nginx-startup-fails-ssl-no-such-file-or-directory
and did:
chown -R root:root /etc/letsencrypt/live/mydomain.org/fullchain.pem
chmod -R 600 /etc/letsencrypt/live/mydomain.org/fullchain.pem
The same error was thrown from the nginx container. I've also placed the certs under /docker-compose/etc/nginx/certs, giving them the same permissions and changing the paths in nginx.conf, but nothing changed.
What am I missing?
I was experiencing the same problem deploying Harbor (a docker registry plus access-control UI) using the volume mapping /etc/letsencrypt:/etc/letsencrypt.
nginx reported "no such file" when loading the certificate file, even though I could enter that container (docker exec bash ..) and cat the files using the exact same path.
I suspected the problem was caused by letsencrypt's use of symlinks (the files under live/ are symlinks into ../archive/, which may not resolve inside the container), so my solution was to copy the live certs into another folder using cp -rL (to dereference the symlinks):
root@registry:/etc/letsencrypt# mkdir copy
root@registry:/etc/letsencrypt# cp -rL live/* copy/
Then I changed nginx.conf to refer to 'copy' instead of 'live'.
Now nginx correctly starts inside docker.
This is not a long-term solution, because when the certs are renewed the copy won't be updated automatically. But since I'll be running letsencrypt renew from a cronjob, that task can run the copy step again (see the sketch below).
Also, I've read that nginx must be restarted if the certs change, so that's another issue I'll need to face. But at least nginx starts correctly inside docker now.
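A hedged sketch of such a cronjob, assuming the copy/ folder from above and a container named nginx (both names are my assumptions; adjust to your setup):

# /etc/cron.d/renew-certs: renew weekly, refresh the dereferenced copy, reload nginx
0 3 * * 1 root certbot renew --quiet && cp -rL /etc/letsencrypt/live/* /etc/letsencrypt/copy/ && docker exec nginx nginx -s reload

The final nginx -s reload makes the master process re-read the certificates without a full restart, which also takes care of the restart concern above.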
I got this error when I renamed apps in Dokku (0.5.4). What had happened is that the links in the new app directory pointed to the old app name, e.g.
/home/dokku/[new app]/letsencrypt/certs/current -> /home/dokku/[old app]/letsencrypt/certs/f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1
So I manually recreated the links so they pointed to the right place.
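For the record, a sketch of recreating such a link; the hash directory is the placeholder from above, so substitute your actual one:

# -sfn replaces the existing 'current' symlink in place
ln -sfn /home/dokku/[new app]/letsencrypt/certs/f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1 \
        /home/dokku/[new app]/letsencrypt/certs/current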
Try starting the path with /root:
ssl_certificate /root/etc/letsencrypt/live/mydomain.org/fullchain.pem;
ssl_certificate_key /root/etc/letsencrypt/live/mydomain.org/privkey.pem;
I solved the problem like this.
Joy
I wasted a day today and found the solution.
Run the nginx docker container with
-v /etc/letsencrypt/archive/your_domain.com:/nginx/letsencrypt/your_domain.com
and in nginx.conf point at the archive copies:
ssl_certificate /nginx/letsencrypt/your_domain.com/fullchain1.pem;
ssl_certificate_key /nginx/letsencrypt/your_domain.com/privkey1.pem;
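Putting it together, a hedged sketch of the full invocation (the image name, published ports, and config mount are my assumptions):

docker run -d --name nginx \
    -p 80:80 -p 443:443 \
    -v /etc/letsencrypt/archive/your_domain.com:/nginx/letsencrypt/your_domain.com:ro \
    -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro \
    nginx

Mounting archive/ instead of live/ sidesteps the symlink problem, because archive/ holds the real files (hence the 1 suffix in fullchain1.pem). The tradeoff is that renewals create new file names (fullchain2.pem, and so on), so the config has to be updated after each renewal.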