I want to use nginx 1.15.12 as a proxy for TLS termination and client authentication. If a valid client certificate is presented, nginx forwards the request to the respective backend system (localhost:8080 in this case). The current configuration below does that for every request.
Unfortunately it is not possible to configure one client CA certificate per location{} block. Multiple server blocks could be created, each checking against a different certificate, but I also have the requirement that all requests must arrive on a single port.
nginx.conf: |
  events {
    worker_connections 1024;  ## Default: 1024
  }
  http {
    # password file to be moved to a separate folder?
    ssl_password_file /etc/nginx/certs/global.pass;

    server {
      listen 8443;
      ssl on;
      server_name *.blabla.domain.com;
      error_log stderr debug;

      # server certificate
      ssl_certificate /etc/nginx/certs/server.crt;
      ssl_certificate_key /etc/nginx/certs/server.key;

      # CA certificate for mutual TLS
      ssl_client_certificate /etc/nginx/certs/ca.crt;
      proxy_ssl_trusted_certificate /etc/nginx/certs/ca.crt;

      # client certificates must be validated (if this flag is set to
      # 'optional', client certificates are not required)
      ssl_verify_client on;

      location / {
        # remote ip and forwarding ip
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # certificate verification information: if the client certificate was
        # verified against the CA, the header VERIFIED has the value
        # 'SUCCESS', and 'NONE' otherwise
        proxy_set_header VERIFIED $ssl_client_verify;

        # client certificate information (DN)
        proxy_set_header DN $ssl_client_s_dn;

        proxy_pass http://localhost:8080/;
      }
    }
  }
Ideally I would like to achieve something like this: requests to any path matched by "/" except "/blabla" should be verified against the first CA certificate; if "/blabla" matches, a different CA certificate should be used to verify the client certificate.
nginx.conf: |
  events {
    worker_connections 1024;  ## Default: 1024
  }
  http {
    # password file to be moved to a separate folder?
    ssl_password_file /etc/nginx/certs/global.pass;

    server {
      listen 8443;
      ssl on;
      server_name *.blabla.domain.com;
      error_log stderr debug;

      # server certificate
      ssl_certificate /etc/nginx/certs/server.crt;
      ssl_certificate_key /etc/nginx/certs/server.key;

      # CA certificate for mutual TLS
      ssl_client_certificate /etc/nginx/certs/ca.crt;
      proxy_ssl_trusted_certificate /etc/nginx/certs/ca.crt;

      # client certificates must be validated (if this flag is set to
      # 'optional', client certificates are not required)
      ssl_verify_client on;

      location / {
        # remote ip and forwarding ip
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # certificate verification information: if the client certificate was
        # verified against the CA, the header VERIFIED has the value
        # 'SUCCESS', and 'NONE' otherwise
        proxy_set_header VERIFIED $ssl_client_verify;

        # client certificate information (DN)
        proxy_set_header DN $ssl_client_s_dn;

        proxy_pass http://localhost:8080/;
      }

      location /blabla {
        # Basically do the same as above, but use a different ca.crt for checking the client certificate.
      }
    }
  }
I'm on a Kubernetes cluster, but using ingress auth mechanisms is not an option here for reasons. The ideal result would be a way to configure different paths with different client CA certificates within the same server block in nginx.
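One workaround I have seen suggested (sketch only, untested here, and not what I ended up using) is to set ssl_verify_client to optional, concatenate both CA certificates into the file passed to ssl_client_certificate, and then reject requests per location based on the issuer DN of the presented client certificate. The bundle path and the CA common names ca-default and ca-blabla below are placeholders:

  # both CAs concatenated into one bundle (placeholder path)
  ssl_client_certificate /etc/nginx/certs/ca-bundle.crt;
  # 'optional' so that a certificate from either CA passes the TLS layer;
  # the per-location checks below enforce which CA is accepted where
  ssl_verify_client optional;

  location / {
    if ($ssl_client_verify != SUCCESS) { return 403; }
    # only accept certificates issued by the default CA (placeholder CN)
    if ($ssl_client_i_dn !~ "CN=ca-default") { return 403; }
    proxy_pass http://localhost:8080/;
  }

  location /blabla {
    if ($ssl_client_verify != SUCCESS) { return 403; }
    # only accept certificates issued by the second CA (placeholder CN)
    if ($ssl_client_i_dn !~ "CN=ca-blabla") { return 403; }
    proxy_pass http://localhost:8080/blabla;
  }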
Thank you!
Edit:
The following nginx.conf can be used to check different client certificates within nginx. For this, two independent server{} blocks are needed, each with a different server_name. The URI /blabla can now only be accessed via blabla-api.blabla.domain.com.
events {
  worker_connections 1024;  ## Default: 1024
}
http {
  server_names_hash_bucket_size 128;

  server {
    listen 8443;
    ssl on;
    server_name *.blabla.domain.com;
    error_log stderr debug;

    # password file (passphrase) for the secret keys
    ssl_password_file /etc/nginx/certs/global.pass;

    # server certificate
    ssl_certificate /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    # CA certificate for mutual TLS
    ssl_client_certificate /etc/nginx/certs/ca.crt;
    proxy_ssl_trusted_certificate /etc/nginx/certs/ca.crt;

    # client certificates must be validated (if this flag is set to
    # 'optional', client certificates are not required)
    ssl_verify_client on;

    location / {
      # remote ip and forwarding ip
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

      # certificate verification information: if the client certificate was
      # verified against the CA, the header VERIFIED has the value
      # 'SUCCESS', and 'NONE' otherwise
      proxy_set_header VERIFIED $ssl_client_verify;

      # client certificate information (DN)
      proxy_set_header DN $ssl_client_s_dn;

      proxy_pass http://localhost:8080/;
    }

    location /blabla {
      return 403 "authorized user is not allowed to access /blabla";
    }
  }

  server {
    listen 8443;
    ssl on;
    server_name blabla-api.blabla.domain.com;
    error_log stderr debug;

    # password file (passphrase) for the secret keys
    ssl_password_file /etc/nginx/certs/global-support.pass;

    # server certificate
    ssl_certificate /etc/nginx/certs/server-support.crt;
    ssl_certificate_key /etc/nginx/certs/server-support.key;

    # CA certificate for mutual TLS
    ssl_client_certificate /etc/nginx/certs/ca-support.crt;
    proxy_ssl_trusted_certificate /etc/nginx/certs/ca-support.crt;

    # client certificates must be validated (if this flag is set to
    # 'optional', client certificates are not required)
    ssl_verify_client on;

    location /blabla {
      # remote ip and forwarding ip
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

      # certificate verification information: if the client certificate was
      # verified against the CA, the header VERIFIED has the value
      # 'SUCCESS', and 'NONE' otherwise
      proxy_set_header VERIFIED $ssl_client_verify;

      # client certificate information (DN)
      proxy_set_header DN $ssl_client_s_dn;

      proxy_pass http://localhost:8080/blabla;
    }
  }
}
I guess SNI is the answer.
With SNI, a server with one IP and one port can present multiple certificates during the SSL handshake.
But in my understanding the server_name attribute has to be different for the two server blocks. I'm not sure whether the difference has to be in the (sub)domain, or whether it can be done simply with the path.
SNI extends the TLS handshake protocol: before the connection is established, the server already knows from the handshake which certificate to use.
Newer nginx versions should have SNI enabled by default; this can be checked with nginx -V.
Look at this for how to structure the nginx.conf.
Related
I have Nginx which proxies requests from a client to IBM DataPower with mutual TLS.
I get an error when a message is sent from Nginx to IBM DP:
sll server (SERVER) ssl peer did not send a certificate during the handshake datapower
A cut from my Nginx config:
location ~ path {
  proxy_pass https://HOST:PORT;  # DataPower
  proxy_ssl_trusted_certificate /opt/nginx/ssl/tr/ca-chain.cert.pem;
  proxy_ssl_certificate /opt/nginx/ssl/client/client-nginx_cert.pem;
  proxy_ssl_certificate_key /opt/nginx/ssl/client/client-nginx_key.pem;
  proxy_http_version 1.1;
  proxy_ssl_server_name on;
  proxy_ssl_name HOST;
  proxy_set_header Host HOST;
  proxy_ssl_verify off;
  proxy_ssl_verify_depth 2;
}
Messages sent from the client directly to IBM DP go through without errors.
You could try adding proxy_set_header X-SSL-CERT $ssl_client_cert;
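For context, a sketch of where that could go in the location block from the question; this assumes the front-end server block terminates the client's TLS with ssl_verify_client set (otherwise $ssl_client_cert is empty) and that DataPower is configured to read the header. On nginx 1.13.5+ the non-deprecated variant is $ssl_client_escaped_cert.

location ~ path {
  proxy_pass https://HOST:PORT;  # DataPower
  proxy_ssl_certificate /opt/nginx/ssl/client/client-nginx_cert.pem;
  proxy_ssl_certificate_key /opt/nginx/ssl/client/client-nginx_key.pem;
  # forward the certificate the original client presented to nginx
  proxy_set_header X-SSL-CERT $ssl_client_cert;
}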
I have two VMs: one hosts Nginx, and the other is a standalone server.
I will call the VMs as follows:
the standalone = CASH, serving HTTPS
the one hosting Nginx = LOCAL, serving HTTP
In order for LOCAL to communicate with CASH, we use an NGINX reverse proxy to forward HTTP traffic to HTTPS and handle the TLS handshakes. When CASH makes a call to LOCAL, NGINX again accepts this HTTPS traffic and forwards it to LOCAL's HTTP, as shown:
upstream api_http_within_this_vm {
  server 127.0.0.1:9001;  # LOCAL (host) VM application
}

# SENDING HTTP TRAFFIC TO OUR HTTPS ENDPOINT (CASH)
server {
  listen 80;
  listen [::]:80;
  server_name 10.0.0.13;

  location / {
    proxy_pass https://api_https_to_another_vm;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_ssl_certificate /etc/nginx/sites-available/signed_by_CASH.pem;
    proxy_ssl_certificate_key /etc/nginx/sites-available/local_key_used_to_generate_csr_for_CASH_to_sign.key;
    proxy_ssl_protocols TLSv1.2;
    proxy_ssl_ciphers HIGH:!aNULL:!MD5;
    proxy_ssl_trusted_certificate /etc/nginx/sites-available/CASH_CA.crt;
    proxy_ssl_verify on;
    proxy_ssl_verify_depth 2;
    proxy_ssl_session_reuse on;
  }
}

upstream api_https_to_another_vm {
  server 10.0.0.13:8080;  # CASH's VM IP and PORT
}

# RECEIVING HTTPS TRAFFIC FROM CASH TO OUR LOCAL HTTP ENDPOINT
server {
  listen 5555 ssl http2;
  listen [::]:5555 ssl http2;
  server_name 1270.0.0.1;

  location / {
    proxy_pass http://api_http_within_this_vm;
    proxy_set_header X_CUSTOM_HEADER $http_x_custom_header;
    proxy_buffering off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_pass_request_headers on;
  }

  ssl_certificate /etc/nginx/sites-available/signed_by_CASH.pem;
  ssl_certificate_key /etc/nginx/sites-available/local_key_used_to_generate_csr_for_CASH_to_sign.key;
  ssl_verify_client off;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers HIGH:!aNULL:!MD5;
}
MY SUCCESS
The traffic from CASH to LOCAL works well.
MY CHALLENGE
The traffic from LOCAL to CASH does NOT work. I get 502 Bad Gateway, yet when I use curl https://10.0.0.13:8080/ directly (LOCAL to CASH without the reverse proxy) I see some output, even if no handshake happens.
Where am I going wrong? Please advise.
Secondly, does Nginx only redirect traffic to IPs within the same VM, or also to other VMs?
This is the leg I mainly want to achieve, and it is the one that has failed on my side.
I have tested this configuration over time. I had to trace with tcpdump and also check my logs, because I suspected the problem was network related. It turned out that CASH was actually dropping the connection before the TLS handshake completed.
2019/03/02 06:54:58 [error] 27569#27569: *62 peer closed connection in SSL handshake (104: Connection reset by peer) while SSL handshaking to upstream, client: xx.xx.xx.xx, server: 1270.0.0.1, request: "GET / HTTP/1.1", upstream: "https://xx.xx.xx.xx:1000/", host: "xx.xx.xx.xx:80"
Thanks to all who viewed; the configuration itself is correct.
I want to receive traffic at https://example.com on server 1 and then proxy that traffic over HTTPS to server 2. Server 2 has Nginx set up with the exact same TLS certificate and key as server 1, so it should theoretically be able to serve the requests. However, when Nginx on server 1 proxies a request to server 2, it sends it to server2.example.com, which differs from the common name on the cert, which is just example.com.
Is there a way to configure nginx to expect the name on the TLS certificate offered (during the TLS handshake) by the host it proxies to, to be different from the address of that host?
Example config on server 1:
server {
  listen 443 ssl;
  server_name example.com;

  ssl_certificate /srv/tls/example.com.crt;
  ssl_certificate_key /srv/tls/example.com.key;

  location / {
    proxy_pass https://server2.example.com;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
  }
}
Example config on server 2:
server {
  listen 443 ssl;
  server_name example.com;

  ssl_certificate /srv/tls/example.com.crt;
  ssl_certificate_key /srv/tls/example.com.key;

  location / {
    proxy_pass http://localhost:12345;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
  }
}
Example curl from server 1:
$ curl https://server2.example.com/chat -H "Host: example.com"
curl: (51) Unable to communicate securely with peer: requested domain name does not match the server's certificate.
If need be, I could generate a new self-signed cert and use that on server 2. However, I assumed it would be faster to just change the Nginx configuration. If the config change is not possible, I'll create a new cert.
You can use the proxy_ssl_name directive to specify the server name of the proxied host's certificate.
For example:
location / {
  proxy_pass https://server2.example.com;
  proxy_set_header Host $host;
  proxy_ssl_name $host;
  ...
}
See this document for details.
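Note that proxy_ssl_name only controls which name is checked (and, with proxy_ssl_server_name on, sent via SNI); verification of the upstream certificate is off by default. If you also want nginx to verify server 2's certificate, a sketch like the following should work; the CA bundle path is an assumption:

location / {
  proxy_pass https://server2.example.com;
  proxy_set_header Host $host;
  # name used for certificate verification and (optionally) SNI
  proxy_ssl_server_name on;
  proxy_ssl_name $host;
  # verify the upstream certificate against a trusted CA bundle (assumed path)
  proxy_ssl_verify on;
  proxy_ssl_trusted_certificate /srv/tls/ca-bundle.crt;
}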
So...
I have a Node application running on a server on port 8080 and I am trying to enable it to work over SSL using NGINX and CloudFlare. Note the following...
My host is running Ubuntu 16.04 LTS
I am currently using CloudFlare's Universal SSL (free tier)
I have my test host's DNS set up as test.company.com
I have copied the CloudFlare origin pull cert from this post to my test box's /etc/nginx/certs
...my previous NGINX configuration looked like...
server {
  listen 80;
  location / {
    proxy_pass http://localhost:8080;
  }
}
...it now looks like...
# HTTP
server {
  listen 80;
  listen [::]:80 default_server ipv6only=on;
  return 301 https://$host$request_uri;
}

# HTTPS
server {
  listen 443;
  server_name test.company.com;

  ssl on;
  ssl_client_certificate /etc/nginx/certs/cloudflare.crt;
  ssl_verify_client on;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
  ssl_prefer_server_ciphers on;

  location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://localhost:8080/;
    proxy_ssl_session_reuse off;
    proxy_set_header Host $http_host;
    proxy_cache_bypass $http_upgrade;
    proxy_redirect off;
  }
}
...I followed the example here and the link it provides here, and I'm skeptical that everything above is required (I'm a minimalist). Whenever I run sudo nginx -t I still get errors about ssl_certificate and ssl_certificate_key not being specified. I cannot figure out how to download the required files from CloudFlare, and from what I understand, I don't believe I should need to.
If I try to re-use the CloudFlare origin pull cert as both the ssl_certificate and ssl_certificate_key, I get the error nginx: [emerg] SSL_CTX_use_PrivateKey_file("/etc/nginx/certs/cloudflare.crt") failed (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line:Expecting: ANY PRIVATE KEY error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib)
I am confident that it is possible to create my own self-signed certificate, but I am planning on using this strategy eventually to spin up production machines. Any help on pointing me in the right direction is much appreciated.
It looks like you're using Cloudflare's Origin CA service, nice!
The issue looks like you've put your SSL private key in the ssl_client_certificate attribute and not put your real SSL certificate in your configuration. Your Nginx SSL configuration should contain the following lines instead:
ssl_certificate /path/to/your_certificate.pem;
ssl_certificate_key /path/to/your_key.key;
Make sure ssl_certificate points at the .pem file with the correct contents, and ssl_certificate_key points at the .key file with the correct contents too.
To generate a certificate with Origin CA, navigate to the Crypto section of the Cloudflare dashboard. From there, click the Create Certificate button in the Origin Certificates section. Once you complete the steps in the wizard, you will see a window which allows you to download both the certificate file and the key file. Make sure you put them in the correct files and install them on your web server.
Further reading:
How to install an Origin CA certificate in NGINX
Creating and managing certificates with Origin CA
Also, ssl on is deprecated; use listen 443 ssl; instead.
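Putting it together, a minimal sketch of the HTTPS server block; the origin certificate and key paths below are assumptions (they are whatever you saved from the Origin CA wizard), while cloudflare.crt stays in ssl_client_certificate for the origin pull check:

server {
  listen 443 ssl;
  server_name test.company.com;

  # Origin CA certificate and private key downloaded from the Cloudflare dashboard (assumed paths)
  ssl_certificate /etc/nginx/certs/origin.pem;
  ssl_certificate_key /etc/nginx/certs/origin.key;

  # authenticated origin pulls: only accept connections presenting Cloudflare's client certificate
  ssl_client_certificate /etc/nginx/certs/cloudflare.crt;
  ssl_verify_client on;

  location / {
    proxy_pass http://localhost:8080/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}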
I have a Docker server where I have installed GitLab from sameersbn/docker-gitlab.
I have an nginx container that listens on 443:443 and 80:80; I will use it to load balance HTTP and HTTPS requests (with a signed cert).
nginx.conf
worker_processes auto;

events { worker_connections 1024; }

http {
  ##
  # Logging Settings
  ##
  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;

  upstream gitlab {
    server gitlab:10080;
  }

  server {
    listen 80;
    listen 443 ssl;
    server_name www.domain.tld;

    ssl on;
    ssl_certificate /usr/local/share/ca-certificates/domain.crt;
    ssl_certificate_key /usr/local/share/ca-certificates/domain.key;
    ssl_trusted_certificate /usr/local/share/ca-certificates/GandiStandardSSLCA2.pem;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";
    ssl_prefer_server_ciphers on;

    root /usr/share/nginx/html;

    location /git/ {
      proxy_pass http://gitlab;
      proxy_set_header X-Forwarded-Ssl on;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
    }
  }
}
Without SSL, the working URL to access GitLab is http://www.domain.tld:10080/git.
With SSL, I want the URL to be https://www.domain.tld/git, using this nginx load balancer configuration.
When I go to http://www.domain.tld/git:
400 Bad Request
The plain HTTP request was sent to HTTPS port
When I go to https://www.domain.tld/git:
ERR_CONNECTION_REFUSED
This is my first signed certificate; how is this supposed to work?
To solve the problem, two steps are required:
make Nginx redirect HTTP to HTTPS
make GitLab listen on port 80 via plain HTTP
Why make GitLab listen on port 80? This technique is called SSL offloading: it avoids redundant HTTPS encryption/decryption between the upstream and the web server. Encrypting that hop is rarely required and only makes sense when the hosts are different and have complex security requirements.
Nginx
server {
  listen 80;
  server_name www.domain.tld;
  return 301 https://$server_name$request_uri;
}

server {
  listen 443 ssl;
  server_name www.domain.tld;
  [....]
}
Gitlab
vi ./gitlab/config.yml
gitlab_url: "http://server1.example.com" # http rather than https