I am trying to install an SSL certificate that I obtained from GoDaddy onto my NGINX server. I am positive I have all of the paths correct, and from what I understand my server configuration is correct, but I still get the following error:
Feb 20 11:06:35 my.server.com nginx[6173]: nginx: [emerg] cannot load certificate "/etc/ssl/certs/certificate.crt": BIO_new_file() failed (SSL: error:0200100D:system library:fopen:Permission denied:fopen('/etc/ssl/certs/certificate.crt','r') error:2006D002:BIO routines:BIO_new_file:system lib)
Feb 20 11:00:01 my.server.com nginx[5969]: nginx: configuration file /etc/nginx/nginx.conf test failed
Below is my SSL configuration. I have placed this into a file at the path /etc/nginx/conf.d/ssl.conf.
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name my.server.com;
    root /usr/share/nginx/html;

    ssl_certificate /etc/ssl/certs/certificate.crt;
    ssl_certificate_key /etc/ssl/private/private.key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 10m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        proxy_pass http://[MY_IP_ADDRESS]:8443;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
This looks to be a permissions issue, but I have run chown to change ownership to the root user, and I have changed the file permissions to 600 via chmod. Is this not correct? Can someone please give me some guidance on how to resolve this issue?
** UPDATE **
I did check and found that the SSL certs were not owned by the root user. I've changed all SSL files to be owned by the root user and group, and set the file permissions to 600 (I've also tried 700). I get the output below when I run sudo ls -l:
-rwx------. 1 root root 7072 Feb 20 10:41 my.server.com.chained.crt
-rwx------. 1 root root 2277 Feb 20 10:36 my.server.com.crt
-rwx------. 1 root root 4795 Feb 20 10:39 intermediate.crt
I am still getting the same error though. I've also tried both the normal cert and the full chain cert. Does anyone have an idea what is going on?
I finally solved my issue. It turns out that when I moved the files with mv, they kept their original SELinux security context instead of picking up the context of the destination directory, which made them unreadable to nginx. I resolved the issue by running the following command on my root nginx folder:
restorecon -v -R /etc/nginx
I found this solution in this post.
Thanks for all the help!
The solution restorecon -v -R /etc/nginx works for me on RHEL 8:
Relabeled /etc/nginx/ssl/vhost/server.crt from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:httpd_config_t:s0
Relabeled /etc/nginx/ssl/vhost/archive.pem from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:httpd_config_t:s0
Relabeled /etc/nginx/ssl/vhost/intermediate.crt from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:httpd_config_t:s0
Then just restart nginx.
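To confirm the labels before and after relabeling, a quick sketch of the standard SELinux checks (paths are the ones from this thread; ausearch ships with the audit package):

# Show the SELinux context of the certificate files; files moved in
# from a home directory often carry user_home_t, which nginx cannot read.
ls -Z /etc/nginx/ssl/vhost/

# Dry run: report what restorecon would relabel without changing anything.
restorecon -Rvn /etc/nginx

# Look for recent SELinux denials involving nginx.
sudo ausearch -m avc -ts recent | grep nginx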
Related
I have a server hosting three websites.
To make things easier and less error prone, I'm generating both the nginx and the Apache configuration files with Ansible. As you will see below, I'm using the same port for all of them, so pretty much the only things that differ between the Apache and nginx configuration files are the server name, the root location, the website location, and the locations of the error and access logs.
The problem is that I can't view both websites at the same time: when I open the first website in my browser it loads fine, but when I then open the second website I get this error:
Your browser sent a request that this server could not understand.
When I see apache logs I see the following error:
[Fri Nov 09 16:17:51.247904 2018] [ssl:error] [pid 18614] AH02032: Hostname myweb.intweb.net provided via SNI and hostname mysecondweb.intweb.net provided via HTTP are different
where mysecondweb.intweb.net is the other website I'm trying to open.
This is my nginx configuration file for one of them, where you can see I'm handing the request off to Apache:
# Force HTTP requests to HTTPS
server {
    listen 80;
    server_name myweb.intweb.net;
    return 301 https://myweb.intweb.net$request_uri;
}

server {
    listen 443 ssl;
    root /var/opt/httpd/ifdocs;
    server_name myweb.intweb.net;

    # add Strict-Transport-Security to prevent man in the middle attacks
    add_header Strict-Transport-Security "max-age=31536000" always;

    ssl on;
    ssl_certificate /etc/pki/tls/certs/star_intweb_net.pem;
    ssl_certificate_key /etc/pki/tls/certs/star_intweb_net.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    access_log /var/log/nginx/iflogs/https/access.log;
    error_log /var/log/nginx/iflogs/https/error.log;

    ###include rewrites/default.conf;

    index index.php index.html index.htm;

    # Make nginx serve static files instead of Apache
    # NOTE this will cause issues with bandwidth accounting as files wont be logged
    location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css)$ {
        expires max;
    }

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_ssl_server_name on;
        proxy_ssl_name $host;
        proxy_pass https://127.0.0.1:4433;
    }

    # proxy the PHP scripts to Apache listening on <serverIP>:4433
    location ~ \.php$ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_ssl_server_name on;
        proxy_ssl_name $host;
        proxy_pass https://127.0.0.1:4433;
    }

    location ~ /\. {
        deny all;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
This is my Apache configuration for the sites:
<VirtualHost *:4433>
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/star_intweb_net.crt
    SSLCertificateKeyFile /etc/pki/tls/certs/star_intweb_net.key
    SSLCertificateChainFile /etc/pki/tls/certs/DigiCertCA.crt

    ServerAdmin webmaster@company.com
    DocumentRoot /var/opt/httpd/ifdocs

    <Directory "/var/opt/httpd/ifdocs">
        Options FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>

    ServerName myweb.intweb.net

    ErrorLog /var/log/httpd/iflogs/http/error.log
    CustomLog /var/log/httpd/iflogs/http/access.log combined

    # RewriteEngine on
    # Include rewrites/default.conf
</VirtualHost>
Note:
If I remove the lines:
proxy_ssl_server_name on;
proxy_ssl_name $host;
I don't have that problem anymore, and it seems to solve the issue I'm having. I was just curious whether this could cause issues in the future, and why removing those two lines from the configuration makes those Apache errors stop.
Thank you!
I was able to fix this problem by adding this line of code in my nginx.conf file:
proxy_ssl_session_reuse off;
Apparently there's a bug in OpenSSL. I tested it by running the following commands:
openssl genrsa -out fookey.pem 2048
openssl req -x509 -key fookey.pem -out foocert.pem -days 3650 -subj '/CN=testkey.invalid.example'
openssl s_server -accept localhost:30889 -cert foocert.pem -key fookey.pem -state -servername key1.example -cert2 foocert.pem -key2 fookey.pem
openssl s_client -connect localhost:30889 -sess_out /tmp/tempsslsess -servername key1.example
openssl s_client -connect localhost:30889 -sess_in /tmp/tempsslsess -servername key2.example
Observe key1.example in the SNI info reported by s_server for both requests ("Hostname in TLS extension: ..."). If s_server is restarted and the s_client connections/sessions are re-run using key2.example first and key1.example second, observe key2.example in the SNI info reported by s_server for both requests. Furthermore, I just tested on a different machine, and sometimes SNI appears to be absent in the second request.

Shouldn't s_client filter session reuse so that, if it knows the SNI info before selecting a cached session, it only selects one that matches the intended SNI name? And if it doesn't have an SNI name when searching for a session to reuse, shouldn't it still double-check when it is later given SNI info, before connecting, to make sure it's identical to the SNI in the saved session, and not use the saved session if they differ? It seems to me that if the SNI specified by the client app ever differs from the SNI seen by the server, that's not good.

This was discovered by someone reporting a problem using nginx to reverse proxy to Apache, with Apache warning that the SNI hostname didn't match the Host: header, despite the nginx config explicitly setting Host: and the Apache-side SNI to the same thing.
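Putting the fix together with the configuration above, a minimal sketch of the relevant proxy block (upstream address taken from the question):

location / {
    proxy_set_header Host $host;
    # Send SNI to the upstream and keep it in sync with the Host header.
    proxy_ssl_server_name on;
    proxy_ssl_name $host;
    # Work around the session-reuse behavior described above: without this,
    # a cached TLS session from one vhost can be resumed for another, so
    # Apache sees a stale SNI name that no longer matches the Host header.
    proxy_ssl_session_reuse off;
    proxy_pass https://127.0.0.1:4433;
}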
Multiple virtual hosts on my workstation just stopped working. After an update of nginx to v1.10.2 and a new Passenger locations.ini file pointer in nginx.conf, I'm getting 403 Forbidden permission errors on all of these vhosts. No clue what to look at. The Passenger directives in nginx.conf were:
passenger_root /usr/local/opt/passenger/libexec/src/ruby_supportlib/phusion_passenger/locations.ini;
passenger_ruby /usr/bin/ruby;
But which ruby returns:
/Users/rich/.rbenv/shims/ruby
So I changed that directive to the path above, restarted nginx, and still the same. The error reported:
2017/10/23 19:51:36 [error] 10863#0: *61 directory index of "/Library/WebServer/Documents/alpha/public/" is forbidden, client: 127.0.0.1, server: alpha.local, request: "GET / HTTP/1.1", host: "alpha.local"
Permissions haven't ever changed. Not to mention they are relaxed (this machine is only used by me):
drwxrwxrwx 20 rich admin 680B Jun 17 01:52 HQ
and inside HQ:
drwxr-xr-x 8 rich admin 272B Jul 12 17:32 public
nginx.conf:
user root admin;
worker_processes 8;
error_log /usr/local/var/log/error.log debug;
pid /usr/local/var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    # index index.html index.erb;
    access_log /usr/local/var/log/access.log;
    passenger_root /usr/local/Cellar/passenger/5.1.11/libexec/src/ruby_supportlib/phusion_passenger/locations.ini;
    passenger_ruby /Users/rich/.rbenv/shims/ruby;
    passenger_friendly_error_pages on;
    include /usr/local/etc/nginx/servers/*; # see below
}
server {
    listen 80;
    server_name alpha.local;
    include /usr/local/etc/nginx/mime.types;
    access_log /usr/local/var/log/access_alpha.log;
    error_log /usr/local/var/log/error_alpha.log debug;
    error_page 404 /404.html;
    root /Library/WebServer/Documents/alpha/public;
    passenger_enabled on;
    passenger_base_uri /;

    location / {
        autoindex off;
        # try_files $uri $uri/ /index.html?$query_string;
        # index /;
        # allow 192.168.1.0/24;
    }

    location = /img/favicon.ico { access_log off; }
}
nginx error log:
2017/10/24 15:35:39 [error] 10868#0: *86 directory index of "/Library/WebServer/Documents/alpha/public/" is forbidden, client: 127.0.0.1, server: alpha.local, request: "GET / HTTP/1.1", host: "alpha.local"
Odd stuff. Any ideas on how to get all this serving again properly are appreciated. It seems permissions were completely thrown off, and I'm not sure whether it was the nginx update or not. Cheers
==============
Update 2: (changed alpha/HQ). Also, replicated on a completely separate box. The Homebrew update trips over nginx's dependency on openssl, which wants to update to version 1.1. I've posted on GitHub there. While I have no proof, it's the only feedback I have that shows a non-upgrade (still serving 1.12.0 instead of 1.12.2), so I am thinking it is that.
https://github.com/Homebrew/homebrew-core/issues/19810
Fixed. It was a Homebrew issue: a conditional that chooses the openssl version (openssl vs. openssl@1.1) depending on whether Passenger is installed.
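If you hit something similar, a quick way to check what Homebrew actually installed and linked (generic commands, nothing specific to this report):

# Which nginx binary is on the PATH, and what version is it?
which nginx
nginx -v

# What Homebrew has installed, and which openssl formula nginx depends on:
brew list --versions nginx openssl openssl@1.1
brew info nginx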
SO...
I have a Node application running on a server on port 8080, and I am trying to enable it to work over SSL using NGINX and CloudFlare. Note the following...
My host is running Ubuntu 16.04 LTS
I am currently using CloudFlare's Universal SSL (free tier)
I have my test host DNS setup as test.company.com
I have copied the CloudFlare origin pull cert from this post to my test box's /etc/nginx/certs
...my previous NGINX configuration looked like...
server {
    listen 80;
    location / {
        proxy_pass http://localhost:8080;
    }
}
...it now looks like...
# HTTP
server {
    listen 80;
    listen [::]:80 default_server ipv6only=on;
    return 301 https://$host$request_uri;
}

# HTTPS
server {
    listen 443;
    server_name test.company.com;

    ssl on;
    ssl_client_certificate /etc/nginx/certs/cloudflare.crt;
    ssl_verify_client on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:8080/;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
    }
}
...I followed the example here and the link it provides here, and I'm skeptical that everything above is required (I'm a minimalist). Whenever I run sudo nginx -t I still get errors about ssl_certificate and ssl_certificate_key not being specified. I cannot figure out how to download the required files from CloudFlare, and from what I understand, I don't believe I should need to.
If I try to re-use the CloudFlare origin pull cert as both the ssl_certificate and ssl_certificate_key, I get the error nginx: [emerg] SSL_CTX_use_PrivateKey_file("/etc/nginx/certs/cloudflare.crt") failed (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line:Expecting: ANY PRIVATE KEY error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib)
I am confident that it is possible to create my own self-signed certificate, but I am planning on using this strategy eventually to spin up production machines. Any help on pointing me in the right direction is much appreciated.
It looks like you're using Cloudflare's Origin CA service, nice!
The issue looks like you've put the origin pull certificate in the ssl_client_certificate directive without putting your real SSL certificate in your configuration. Your Nginx SSL configuration should also contain the following lines:
ssl_certificate /path/to/your_certificate.pem;
ssl_certificate_key /path/to/your_key.key;
Make sure ssl_certificate points to the .pem file with the certificate contents, and ssl_certificate_key points to the .key file with the private key contents.
To generate a certificate with Origin CA, navigate to the Crypto section of the Cloudflare dashboard. From there, click the Create Certificate button in the Origin Certificates section. Once you complete the steps in the wizard, you will see a window which allows you to download both the certificate file and the key file. Make sure you put them in the correct files and install them on your web server.
Further reading:
How to install an Origin CA certificate in NGINX
Creating and managing certificates with Origin CA
Also, ssl on; is deprecated; use listen 443 ssl; instead.
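For completeness, a minimal sketch of a server block using the Origin CA certificate together with the origin pull cert from the question; the certificate and key paths here are hypothetical, so substitute wherever you saved the files from the wizard:

server {
    listen 443 ssl;
    server_name test.company.com;

    # Certificate and key downloaded from the Origin CA wizard
    # (hypothetical paths).
    ssl_certificate /etc/nginx/certs/origin.pem;
    ssl_certificate_key /etc/nginx/certs/origin.key;

    # Origin pull: only accept connections presenting Cloudflare's client cert.
    ssl_client_certificate /etc/nginx/certs/cloudflare.crt;
    ssl_verify_client on;

    location / {
        proxy_pass http://localhost:8080;
    }
}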
I just bought RapidSSL from Name.com and tried to install it following this link:
https://www.digitalocean.com/community/tutorials/how-to-install-an-ssl-certificate-from-a-commercial-certificate-authority
So when I ran
sudo service nginx restart
I got this.
Job for nginx.service failed because the control process exited with error code. See "systemctl status nginx.service" and "journalctl -xe" for details.
So this is my /etc/nginx/sites-available/default
server {
    listen 80;
    server_name mydomain.co;
    rewrite ^/(.*) https://mydomain.co/$1 permanent;
}

server {
    listen 443 ssl;
    ssl_certificate ~/key/www.mydomain.co.chained.crt;
    ssl_certificate_key ~/key/www.mydomain.co.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    server_name mydomain.co;
    root /www/mydomain/build;
    index index.html index.htm;

    rewrite ^/(.*)/$ $1 permanent;

    location ~ ^.+\..+$ {
        try_files $uri =404;
    }

    location / {
        try_files $uri $uri/ /index.html;
    }

    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
        return 404;
    }
}
But when I remove this line
ssl_certificate ~/key/www.mydomain.co.chained.crt;
I can restart nginx.
Does anyone know how to fix this?
Thanks!
The ~ in your nginx config file is probably not working in the way you intended. I assume you intended for it to become /home/username/key/www.mydomain.co.chained.crt, but nginx doesn't expand ~; it resolves the path relative to its configuration prefix.
To confirm this, re-add the config line and then run nginx -t. You will see nginx's config-check error:
nginx: [emerg] BIO_new_file("/etc/nginx/~/key/www.mydomain.co.chained.crt") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/~/key/www.mydomain.co.chained.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file)
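The fix is to use an absolute path instead. A sketch, assuming the key directory lives under a hypothetical /home/username (substitute your real home directory, or better, move the files to a location such as /etc/ssl that isn't inside a user's home):

# Absolute paths instead of ~ (hypothetical home directory shown).
ssl_certificate /home/username/key/www.mydomain.co.chained.crt;
ssl_certificate_key /home/username/key/www.mydomain.co.key;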
I can't comment because of my new-user reputation, but do you mind pasting the nginx error log? The reason for the failure should be there.
The two things I can think of off the top of my head are:
1. wrong file permissions or a bad location;
2. wrong .crt contents: make sure that your certificate file contains the combined certificate plus the CA intermediate certificates in the right order (your certificate first, the CA certificates after), and that when you pasted them you did not add extra lines or miss any characters.
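For point 2, a couple of standard openssl checks (file names taken from the question; run them wherever the files actually live):

# Does the chained certificate parse at all?
openssl x509 -in www.mydomain.co.chained.crt -noout -subject -issuer -dates

# Do the certificate and private key match? The two digests must be identical.
openssl x509 -in www.mydomain.co.chained.crt -noout -modulus | openssl md5
openssl rsa -in www.mydomain.co.key -noout -modulus | openssl md5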
I have a site which was running perfectly with Apache on an old Ubuntu server, including HTTPS. But now, for various reasons, I need to move to a different (new, higher-spec Ubuntu) server and am trying to serve my site using nginx, so I installed nginx (nginx/1.4.6 (Ubuntu)). Below are my nginx.conf settings:
server {
    listen 8005;

    location / {
        proxy_pass http://127.0.0.1:8001;
    }

    location /static/ {
        alias /root/apps/project/static/;
    }

    location /media/ {
        alias /root/apps/media/;
    }
}

# Https Server
server {
    listen 443;

    location / {
        # proxy_set_header Host $host;
        # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # proxy_set_header X-Forwarded-Protocol $scheme;
        # proxy_set_header X-Url-Scheme $scheme;
        # proxy_redirect off;
        proxy_pass http://127.0.0.1:8001;
    }

    server_tokens off;

    ssl on;
    ssl_certificate /etc/ssl/certificates/project.com.crt;
    ssl_certificate_key /etc/ssl/certificates/www.project.com.key;
    ssl_session_timeout 20m;
    ssl_session_cache shared:SSL:10m; # ~ 40,000 sessions
    ssl_protocols SSLv3 TLSv1; # SSLv2
    ssl_ciphers ALL:!aNull:!eNull:!SSLv2:!kEDH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+EXP:@STRENGTH;
    ssl_prefer_server_ciphers on;
}
Since I already had the HTTPS certificate (project.com.crt) and key (www.project.com.key) running on another server, I just copied them to the new server (which has no domain as of now, only an IP), placed them at /etc/ssl/certificates/, and tried to use them directly. I then restarted nginx and tried to access my IP 23.xxx.xxx.xx:8005 as https://23.xxx.xxx.xx:8005, and I am getting the error below in Firefox:
Secure Connection Failed
An error occurred during a connection to 23.xxx.xxx.xx:8005. SSL received a record that exceeded the maximum permissible length. (Error code: ssl_error_rx_record_too_long)
The page you are trying to view cannot be shown because the authenticity of the received data could not be verified.
Please contact the website owners to inform them of this problem. Alternatively, use the command found in the help menu to report this broken site.
But when I access the IP without HTTPS, the site is served fine.
So what's wrong with the HTTPS settings in the nginx conf file above?
And can't we serve the certificate files by simply copying them into a folder, or do we need to create an extra certificate for the new server?
Change
listen 443;
to
listen 443 ssl;
and get rid of this line
ssl on;
That should fix your SSL issue, but it looks like you have several issues in your configuration.
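Applied to the HTTPS server block from the question, the top of the block would look like this sketch (everything else can stay as it was):

server {
    # "ssl" on the listen line replaces the deprecated "ssl on;" directive.
    listen 443 ssl;

    ssl_certificate /etc/ssl/certificates/project.com.crt;
    ssl_certificate_key /etc/ssl/certificates/www.project.com.key;

    location / {
        proxy_pass http://127.0.0.1:8001;
    }
}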
So what's wrong with the HTTPS settings in the nginx conf file above?
You don't have an SSL/TLS server listening on the port the client is trying to connect to. The ssl_error_rx_record_too_long error occurs because the client's SSL stack is trying to interpret an HTTP response as SSL/TLS data. A Wireshark trace should confirm the issue; look at the raw bytes (follow the stream).
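Short of Wireshark, a quick command-line check (host and port are the ones from the question): a TLS handshake against a plain-HTTP listener fails immediately, while a plain-HTTP request succeeds.

# Attempt a TLS handshake; on a plain-HTTP port this fails with a
# handshake/wrong-version error instead of printing a certificate chain.
openssl s_client -connect 23.xxx.xxx.xx:8005

# Compare with a plain-HTTP request, which should return the site.
curl -v http://23.xxx.xxx.xx:8005/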
I don't know why the configuration is not correct. Perhaps someone with Nginx config experience can help. Or, the folks on Server Fault or Webmaster Stack Exchange.
This problem happens when the client receives non-SSL content over an SSL connection: the server sends plain HTTP content, but the client expects HTTPS. There are two main things to check, though it can be caused by other side effects too.
Make sure you put ssl on the listen directive:
listen [PORT_NUMBER] ssl;
Check the host IP address you are trying to connect to. DNS can be correct, but you may have an overriding entry in your hosts file or on your local DNS server.
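For that second point, a few generic commands to see what name resolution is actually doing (example.com is a placeholder for your hostname):

# Is the name overridden in the hosts file?
grep example.com /etc/hosts

# What does the system resolver return?
getent hosts example.com

# What does DNS itself say?
dig +short example.com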