So...
I have a Node application running on a server on port 8080 and I am trying to enable it to work over SSL using NGINX and CloudFlare. Note the following...
My host is running Ubuntu 16.04 LTS
I am currently using CloudFlare's Universal SSL (free tier)
I have my test host's DNS set up as test.company.com
I have copied the CloudFlare origin pull cert from this post to my test box's /etc/nginx/certs
...my previous NGINX configuration looked like...
server {
    listen 80;

    location / {
        proxy_pass http://localhost:8080;
    }
}
...it now looks like...
# HTTP
server {
    listen 80;
    listen [::]:80 default_server ipv6only=on;
    return 301 https://$host$request_uri;
}

# HTTPS
server {
    listen 443;
    server_name test.company.com;

    ssl on;
    ssl_client_certificate /etc/nginx/certs/cloudflare.crt;
    ssl_verify_client on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:8080/;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
    }
}
...I followed the example here and the link it provides here, and I'm skeptical that everything above is required (I'm a minimalist). Whenever I run sudo nginx -t, I still get errors about ssl_certificate and ssl_certificate_key not being specified. I cannot figure out how to download the required files from CloudFlare and, from what I understand, I don't believe I should need to.
If I try to re-use the CloudFlare origin pull cert as both the ssl_certificate and ssl_certificate_key, I get the error nginx: [emerg] SSL_CTX_use_PrivateKey_file("/etc/nginx/certs/cloudflare.crt") failed (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line:Expecting: ANY PRIVATE KEY error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib)
I am confident that it is possible to create my own self-signed certificate, but I am planning on using this strategy eventually to spin up production machines. Any help on pointing me in the right direction is much appreciated.
It looks like you're using Cloudflare's Origin CA service, nice!
The issue is that you've put your SSL private key in the ssl_client_certificate directive and haven't put your real SSL certificate in your configuration. Your Nginx SSL configuration should contain the following lines instead:
ssl_certificate /path/to/your_certificate.pem;
ssl_certificate_key /path/to/your_key.key;
Make sure ssl_certificate points at the .pem file containing the certificate (it begins with -----BEGIN CERTIFICATE-----) and ssl_certificate_key points at the .key file containing the private key (it begins with -----BEGIN PRIVATE KEY----- or -----BEGIN RSA PRIVATE KEY-----). The "no start line ... Expecting: ANY PRIVATE KEY" error from your question means nginx could not find a private-key block in the file it was handed.
To generate a certificate with Origin CA, navigate to the Crypto section of the Cloudflare dashboard. From there, click the Create Certificate button in the Origin Certificates section. Once you complete the steps in the wizard, you will see a window which allows you to download both the certificate file and the key file. Make sure you put them in the correct files and install them on your web server.
Further reading:
How to install an Origin CA certificate in NGINX
Creating and managing certificates with Origin CA
Also, ssl on; is deprecated; use listen 443 ssl; instead.
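Putting the pieces together, a minimal sketch of the HTTPS server block once the Origin CA certificate and key are downloaded (the two cert/key paths are placeholders for wherever you save the files):

server {
    listen 443 ssl;
    server_name test.company.com;

    # Origin CA certificate and key from the Cloudflare dashboard (placeholder paths)
    ssl_certificate /etc/nginx/certs/origin.pem;
    ssl_certificate_key /etc/nginx/certs/origin.key;

    # origin pull cert, so only Cloudflare can reach the origin
    ssl_client_certificate /etc/nginx/certs/cloudflare.crt;
    ssl_verify_client on;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:8080;
    }
}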
Related
I have a QNAP TS-253A with its admin interface exposed to the internet.
The QNAP has its own certificate, installed by a dedicated tool (i.e. I don't know exactly where the certificate is located).
https://mydomain.myqnapcloud.com points to my static IP, and my router has a firewall rule, which forwards port 443 to 192.168.200.6 which is the internal address of my QNAP.
That all works as it should.
Now I have spun up a Docker container on 192.168.200.18, which I would like to expose to https://identity.someotherdomain.com.
My idea was to spin up another container with an Nginx reverse proxy (192.168.200.8) and change the firewall rule to forward 443 (and 80) to the reverse proxy.
There are lots of guides on using nginx to sit in front of an http server and add an SSL certificate, thereby converting an existing http site to https. But my use case should be even simpler, as the server I forward to is already https.
I have tried this, which doesn't work:
upstream qnap {
    server 192.168.200.6:443;
}

server {
    listen 192.168.200.8:443;
    server_name mydomain.myqnapcloud.com;

    location / {
        proxy_pass https://qnap;
    }
}
How do I configure nginx to forward traffic intended for https://mydomain.myqnapcloud.com to https://192.168.200.6,
and traffic intended for https://identity.someotherdomain.com to https://192.168.200.18?
The way I got this working was to locate the certificate and key on the QNAP (in /etc/stunnel), copy them to a folder shared into the reverse-proxy Docker image, and include them in nginx.conf:
server {
    listen 443 ssl;
    server_name mydomain.myqnapcloud.com;

    ssl_certificate /etc/ssl/private/backup.cert;
    ssl_certificate_key /etc/ssl/private/backup.key;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/access.log;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass https://192.168.200.6;
        proxy_read_timeout 90;
        proxy_redirect https://192.168.200.6 https://mydomain.myqnapcloud.com;
    }
}
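For the second hostname, the same pattern extends to another server block; a sketch, assuming a certificate and key for identity.someotherdomain.com are available at these hypothetical paths:

server {
    listen 443 ssl;
    server_name identity.someotherdomain.com;

    # hypothetical certificate/key for this hostname
    ssl_certificate /etc/ssl/private/identity.cert;
    ssl_certificate_key /etc/ssl/private/identity.key;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass https://192.168.200.18;
    }
}

Since both server blocks listen on 443, nginx selects the right one by SNI, so a single forwarded port can serve both hostnames.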
I've been trying to set up SSL for my websites to no avail. I'm using NGINX on Ubuntu 18.04 as a reverse proxy for two NodeJS Express web servers. I used Certbot following these instructions. However, when trying to access my site via HTTPS, I get a "Site can't be reached"/"Took too long to respond" error.
Here's what my NGINX config in /etc/nginx/sites-available looks like:
server {
    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    server_name MYURL.com www.MYURL.com;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/MYURL.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/MYURL.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    access_log /var/log/nginx/MYURL.access.log;
    error_log /var/log/nginx/MYURL.error.log;

    client_max_body_size 50M;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass https://localhost:3001;
    }
}
When I replace the listen [::]:443 ssl and listen 443 ssl lines with listen 80; and try to access the site with HTTP, it works fine.
Any idea what the problem might be?
EDIT: Also, I feel I should mention that my UFW status has 22/tcp (LIMIT), OpenSSH (ALLOW), and Nginx Full (ALLOW), as well as their v6 counterparts
It turns out the DigitalOcean firewall was not allowing HTTPS connections. I allowed HTTPS and switched proxy_pass https://localhost:3001; to http://, and everything works now!
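For reference, a sketch of what the final location block looks like after that change (the Express app itself speaks plain HTTP on port 3001, so TLS terminates at nginx):

location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://localhost:3001;    # plain HTTP to the upstream
}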
I am configuring nginx on port 80 as a proxy server to an Apache server on port 8080, using CentOS 7.
I successfully configured both for HTTP, but after installing a Let's Encrypt certificate for Apache, I saw that Apache was receiving HTTPS traffic directly. I tried to make nginx receive all HTTP and HTTPS traffic, but ran into issues.
I made a lot of changes, such as disabling Apache from listening on port 443 so that it listens only on 8080.
I configured nginx to listen on both 80 and 443; additionally, I removed the certificate from Apache and added it to the nginx configuration files.
Currently the nginx configuration is as follows:
server {
    listen 80;
    listen [::]:80 default_server;
    #server_name _;
    server_name www.example.com;
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        proxy_pass http://my.server.ip.add:8080;
        root /usr/share/nginx/html;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
server {
    listen 443 default_server;
    server_name www.example.com;
    root /usr/share/nginx/html;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/www.example.com/cert.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;
    ssl_prefer_server_ciphers on;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
    #ssl_dhparam /etc/pki/nginx/dh2048.pem;

    # intermediate configuration. tweak to your needs.
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA--REMOVED-SOME-HERE-SHA';

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
Note: I am using PHP 7.0.
Currently the site is working on both https and http, with one known issue: user images are not loading. I am not sure whether they are served by Apache or nginx; in the response headers I can see "nginx/1.10.2".
What I was actually trying to implement: running both Node.js and Apache behind nginx. I have not started Node yet.
My questions:
Is it really beneficial to use nginx in front and Apache at the backend? (I read that it protects against DDoS attacks.)
Where should we put the certificate: at nginx or Apache?
How can I add Node.js to the nginx configuration? I have already installed Node.js.
What is the best configuration for using both nginx and Apache together?
Good evening,
First of all, the considerations you have made at the infrastructure level are all very good, and in my opinion the proxy configuration, despite its implementation difficulties, is currently the best choice.
I've been using it for some time now and the benefits are enormous. However, I would like to ask what type of cloud infrastructure you are using, because many things change depending on the technical infrastructure. For example, I use only Google Cloud Platform, which is completely different from CloudFlare or AWS.
Your configuration is too convoluted and its structure is unclear. You should try it this way: first, in the http context, declare an upstream with a name for the backend and the IP address of the Apache server inside it; then write the server and location contexts, including the parameters of the proxy_params file and the ssl snippet (see the sketch below).
If you help me understand the infrastructure you have adopted, we can work out the configuration together, because each infrastructure calls for a different configuration.
This also applies to PHP 7.0. For example, when configuring PrestaShop 1.7.1.1 with PHP 7.0, I had to make a lot of changes to the CMS's php.ini because I did not use CGI in FPM; but this, as I said, varies a lot.
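A sketch of that upstream + include structure, with hypothetical names and an assumed Node.js port (on Debian/Ubuntu the proxy_params file ships with the nginx package; on CentOS 7 you may need to create it or inline the proxy_set_header lines yourself):

upstream apache_backend {                    # hypothetical name
    server 127.0.0.1:8080;                   # Apache, from the question
}

server {
    listen 443 ssl;
    server_name www.example.com;

    # fullchain.pem (certificate plus intermediates) is what Certbot recommends for nginx
    ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;

    location / {
        include proxy_params;                # sets Host and X-Forwarded-* headers
        proxy_pass http://apache_backend;
    }

    # hypothetical mount point for the Node.js app, once it is running
    location /node/ {
        proxy_pass http://127.0.0.1:3000/;   # assumed Node.js port
    }
}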
See also https://www.webfoobar.com/node/35
I have a site which was running perfectly with Apache on an old Ubuntu server, including https. But now, for various reasons, I need to move to a different server (a new Ubuntu server with a higher-spec configuration) and am trying to serve my site using Nginx, so I installed nginx (nginx/1.4.6 (Ubuntu)). Below are my nginx.conf settings:
server {
    listen 8005;

    location / {
        proxy_pass http://127.0.0.1:8001;
    }

    location /static/ {
        alias /root/apps/project/static/;
    }

    location /media/ {
        alias /root/apps/media/;
    }
}
# Https Server
server {
    listen 443;

    location / {
        # proxy_set_header Host $host;
        # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # proxy_set_header X-Forwarded-Protocol $scheme;
        # proxy_set_header X-Url-Scheme $scheme;
        # proxy_redirect off;
        proxy_pass http://127.0.0.1:8001;
    }

    server_tokens off;

    ssl on;
    ssl_certificate /etc/ssl/certificates/project.com.crt;
    ssl_certificate_key /etc/ssl/certificates/www.project.com.key;
    ssl_session_timeout 20m;
    ssl_session_cache shared:SSL:10m; # ~ 40,000 sessions
    ssl_protocols SSLv3 TLSv1; # SSLv2
    ssl_ciphers ALL:!aNull:!eNull:!SSLv2:!kEDH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+EXP:@STRENGTH;
    ssl_prefer_server_ciphers on;
}
Since I already had the https certificate (project.com.crt) and key (www.project.com.key) running on another server, I simply copied them to the new server (which has no domain yet, only an IP), placed them at /etc/ssl/certificates/, and tried to use them directly. I then restarted Nginx and tried to access my IP 23.xxx.xxx.xx:8005 as https://23.xxx.xxx.xx:8005, and I get the following error in Firefox:
Secure Connection Failed
An error occurred during a connection to 23.xxx.xxx.xx:8005. SSL received a record that exceeded the maximum permissible length. (Error code: ssl_error_rx_record_too_long)
The page you are trying to view cannot be shown because the authenticity of the received data could not be verified.
Please contact the website owners to inform them of this problem. Alternatively, use the command found in the help menu to report this broken site.
But when I access the IP without https, the site is served fine.
So what's wrong with the HTTPS settings in the above nginx conf file?
Can't the certificate files be served by simply copying and pasting them into some folder? Do we need to create an extra certificate for my new server?
Change
listen 443;
to
listen 443 ssl;
and get rid of this line
ssl on;
That should fix your SSL issue, but it looks like you have several issues in your configuration.
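For example, a trimmed-down sketch of the HTTPS server block with that fix applied (paths kept from the question; SSLv3 removed, since it is insecure and modern clients reject it):

server {
    listen 443 ssl;                          # ssl belongs on the listen directive
    server_tokens off;

    ssl_certificate /etc/ssl/certificates/project.com.crt;
    ssl_certificate_key /etc/ssl/certificates/www.project.com.key;
    ssl_session_timeout 20m;
    ssl_session_cache shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;     # SSLv3 is broken (POODLE); drop it
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://127.0.0.1:8001;
    }
}

Note also that https://23.xxx.xxx.xx:8005 points at the plain-HTTP listener from the first server block; with this configuration, HTTPS is served on port 443, so test https://23.xxx.xxx.xx/ instead.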
So what's wrong with the HTTPS settings in the above nginx conf file?
You don't have an SSL/TLS server listening on the port the client is trying to connect to. The ssl_error_rx_record_too_long error occurs because the client's SSL stack is trying to interpret an HTTP response as SSL/TLS data. A Wireshark trace should confirm the issue; look at the raw bytes (follow the stream).
I don't know exactly why the configuration is incorrect. Perhaps someone with Nginx config experience can help, or the folks on Server Fault or Webmaster Stack Exchange.
This problem happens when the client receives non-SSL content over an SSL connection: the server sends HTTP content, but the client expects HTTPS content. There are two main things to check, though it can be caused by other side effects too.
Make sure you put ssl on the listen directive:
listen [PORT_NUMBER] ssl;
Check the host IP address you are trying to connect to. The DNS record can be correct, yet you may have a different entry in your hosts file or on your local DNS server.
I need to use Nginx as an SSL proxy, which forwards traffic to different back ends depending on the subdomain.
I have seen everywhere that I should define multiple "server {" sections, but that doesn't work correctly for SSL: doing that, SSL would always be processed in the first virtual host, as the server name is unknown until the https traffic is processed.
Scenario:
One IP address
One wildcard SSL certificate
Multiple backends which need to be accessed like the following:
https://one.mysite.com/ -> http://localhost:8080
https://two.mysite.com/ -> http://localhost:8090
Nginx says "if" is evil: http://wiki.nginx.org/IfIsEvil, but what else can I do?
I have tried this, but it doesn't work; I get a 500 error and nothing in the error logs.
server {
    listen 443;
    server_name *.mysite.com;

    ssl on;
    ssl_certificate ssl/mysite.com.crt;
    ssl_certificate_key ssl/mysite.com.key;

    location / {
        if ($server_name ~ "one.mysite.com") {
            proxy_pass http://localhost:8080;
        }
        if ($server_name ~ "two.mysite.com") {
            proxy_pass http://localhost:8090;
        }
    }
}
Has anyone managed to accomplish this with Nginx? Any help, alternatives, or links would be much appreciated.
I found the solution, which is basically to define the SSL options and the SSL certificate outside the "server" block:
ssl_certificate ssl/mysite.com.crt;
ssl_certificate_key ssl/mysite.com.key;
ssl_session_timeout 5m;
ssl_protocols SSLv3 TLSv1;
ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+EXP;
ssl_prefer_server_ciphers on;

server {
    listen 80;
    server_name *.mysite.com;
    rewrite ^ https://$host$request_uri? permanent;
}

server {
    listen 443 ssl;
    server_name one.mysite.com;
    ssl on;

    location / {
        proxy_pass http://localhost:8080;
    }
}

server {
    listen 443 ssl;
    server_name two.mysite.com;
    ssl on;

    location / {
        proxy_pass http://localhost:8090;
    }
}
Key things:
"ssl on;" is the only thing that needs to be within the "server" blocks that listen in https, you can put it outside too, but what will make the "server" blocks that listen in port 80 to use https protocol and not the expected http.
Because the "ssl_certificate", "ssl_ciphers: and other "ssl_*" are outside the "server" block, Nginx does the SSL offloading without a server_name. Which is what it should do, as the SSL decryption cannot happen based on any host name, as at this stage the URL is encrypted.
JAVA and curl don't fail to work now. There is no server_name - host miss match.
The short answer is to use Server Name Indication. This should work by default in common browsers and cURL.
According to http://www.informit.com/articles/article.aspx?p=1994795, you should indeed have two "server" sections, with two different server names.
In each one, you should include your ssl_* directives.
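A sketch of that per-server layout, reusing the wildcard certificate paths from the question (with listen 443 ssl in place of the deprecated ssl on;):

server {
    listen 443 ssl;
    server_name one.mysite.com;

    # the wildcard certificate covers both subdomains
    ssl_certificate ssl/mysite.com.crt;
    ssl_certificate_key ssl/mysite.com.key;

    location / {
        proxy_pass http://localhost:8080;
    }
}

server {
    listen 443 ssl;
    server_name two.mysite.com;

    ssl_certificate ssl/mysite.com.crt;
    ssl_certificate_key ssl/mysite.com.key;

    location / {
        proxy_pass http://localhost:8090;
    }
}

With both blocks listening on 443 with ssl, Nginx relies on Server Name Indication to pick the matching server_name, so no if directives are needed.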