I have a Client Y running under Windows Server 2003 that only supports TLS 1.0 (.NET Framework 3.5), and a Service X running under Linux (Java 1.8) behind NGINX that accepts connections only over TLS 1.2.
Client Y (TLS 1.0) ---> calls ---> Service X (TLS 1.2 over Nginx).
This is my NGINX config:
# HTTPS server
server {
    listen 80;
    listen 443 ssl;

    server_name myapi.com.br;

    ssl_certificate     /etc/pki/tls/certs/myfile.cer;
    ssl_certificate_key /etc/pki/tls/private/myfile.key;
    ssl_session_timeout 5m;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:8080/;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
But it doesn't work: Client Y can't call Service X, while another Client Z that uses TLS 1.2 can call Service X without any problem.
This is the error in nginx:
2019/08/06 11:23:18 [crit] 25199#0: *253185 SSL_shutdown() failed (SSL: error:140E0197:SSL routines:SSL_shutdown:shutdown while in init) while SSL handshaking, client: 10.31.68.186, server: 0.0.0.0:443
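Since the client and the server share no common protocol version, the handshake is aborted before it completes, which is what the SSL_shutdown message reflects. If the intention is to let the legacy client connect, the server side would have to accept TLS 1.0 explicitly. A minimal sketch of that, assuming the OpenSSL build NGINX links against still has TLS 1.0 enabled (many current builds do not, and TLS 1.0 is deprecated, so upgrading Client Y remains the better fix):

server {
    listen 443 ssl;
    server_name myapi.com.br;

    ssl_certificate     /etc/pki/tls/certs/myfile.cer;
    ssl_certificate_key /etc/pki/tls/private/myfile.key;

    # Accept the legacy protocol alongside TLS 1.2 for the old .NET 3.5 client.
    ssl_protocols TLSv1 TLSv1.2;
    ssl_ciphers   HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://localhost:8080/;
    }
}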
Related
I have Nginx proxying requests from a client to IBM DataPower with mutual TLS.
I get an error when a message is sent from Nginx to IBM DataPower:
sll server (SERVER) ssl peer did not send a certificate during the handshake datapower
An excerpt from my Nginx config:
location ~ path {
    proxy_pass https://HOST:PORT; # DataPower
    proxy_ssl_trusted_certificate /opt/nginx/ssl/tr/ca-chain.cert.pem;
    proxy_ssl_certificate     /opt/nginx/ssl/client/client-nginx_cert.pem;
    proxy_ssl_certificate_key /opt/nginx/ssl/client/client-nginx_key.pem;
    proxy_http_version 1.1;
    proxy_ssl_server_name on;
    proxy_ssl_name HOST;
    proxy_set_header Host HOST;
    proxy_ssl_verify off;
    proxy_ssl_verify_depth 2;
}
Messages sent from the client directly to IBM DataPower go through without errors.
You could try adding proxy_set_header X-SSL-CERT $ssl_client_cert;
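That only helps if DataPower is configured to read the forwarded certificate out of a custom header rather than from the TLS layer itself; the header name X-SSL-CERT is a convention, not something DataPower looks for by default. A sketch of how it might fit into the config above, assuming the client authenticates to Nginx with its own certificate on the front leg (the server certificate paths are hypothetical):

server {
    listen 443 ssl;

    ssl_certificate        /opt/nginx/ssl/server/server_cert.pem;    # hypothetical path
    ssl_certificate_key    /opt/nginx/ssl/server/server_key.pem;     # hypothetical path
    ssl_client_certificate /opt/nginx/ssl/tr/ca-chain.cert.pem;
    ssl_verify_client      on;   # require the client's certificate on the front leg

    location ~ path {
        proxy_pass https://HOST:PORT; # DataPower
        proxy_ssl_certificate     /opt/nginx/ssl/client/client-nginx_cert.pem;
        proxy_ssl_certificate_key /opt/nginx/ssl/client/client-nginx_key.pem;

        # Forward the PEM certificate the client presented to Nginx,
        # so the backend can inspect it at the application layer.
        proxy_set_header X-SSL-CERT $ssl_client_cert;
    }
}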
I have two VMs: one hosts Nginx, the other is a standalone server.
I will refer to the VMs as follows:
the standalone one = CASH, serving HTTPS
the one hosting Nginx = LOCAL, serving HTTP
For LOCAL to communicate with CASH, we use an NGINX reverse proxy to redirect HTTP traffic to HTTPS and handle the TLS handshakes; and when CASH makes a call to LOCAL, NGINX again accepts this HTTPS traffic and redirects it to LOCAL's HTTP endpoint, as shown:
upstream api_http_within_this_vm {
    server 127.0.0.1:9001; # LOCAL VM, call it the HOST VM application
}

# SENDING HTTP TRAFFIC TO OUR HTTPS ENDPOINT (calls CASH)
server {
    listen 80;
    listen [::]:80;
    server_name 10.0.0.13;

    location / {
        proxy_pass https://api_https_to_another_vm;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_ssl_certificate     /etc/nginx/sites-available/signed_by_CASH.pem;
        proxy_ssl_certificate_key /etc/nginx/sites-available/local_key_used_to_generate_csr_for_CASH_to_sign.key;
        proxy_ssl_protocols TLSv1.2;
        proxy_ssl_ciphers HIGH:!aNULL:!MD5;
        proxy_ssl_trusted_certificate /etc/nginx/sites-available/CASH_CA.crt;
        proxy_ssl_verify on;
        proxy_ssl_verify_depth 2;
        proxy_ssl_session_reuse on;
    }
}

upstream api_https_to_another_vm {
    server 10.0.0.13:8080; # CASH's VM IP and PORT
}
# RECEIVING HTTPS TRAFFIC ENDPOINT from CASH TO OUR LOCAL HTTP ENDPOINT
server {
    listen 5555 ssl http2;
    listen [::]:5555 ssl http2;
    server_name 1270.0.0.1;

    location / {
        proxy_pass http://api_http_within_this_vm;
        proxy_set_header X_CUSTOM_HEADER $http_x_custom_header;
        proxy_buffering off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_pass_request_headers on;
    }

    ssl_certificate     /etc/nginx/sites-available/signed_by_CASH.pem;
    ssl_certificate_key /etc/nginx/sites-available/local_key_used_to_generate_csr_for_CASH_to_sign.key;
    ssl_verify_client off;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
}
MY SUCCESS
The traffic from CASH to LOCAL works well.
MY CHALLENGE
The traffic from LOCAL to CASH does NOT work. I get a 502 Bad Gateway, yet when I use curl https://10.0.0.13:8080/ directly, without the LOCAL-to-CASH reverse proxy, I see some output even if no handshake happens.
Where am I going wrong? Please advise.
Secondly, does Nginx only redirect traffic to IPs within the same VM, or also to other VMs?
This is the leg I mainly want to get working, and it is the one that has failed on my side.
I have tested this configuration over time. I had to trace with tcpdump and also check my logs, because I suspected the problem was network driven. I found out that the client CASH was actually dropping the connection before the TLS handshake completed.
2019/03/02 06:54:58 [error] 27569#27569: *62 peer closed connection in SSL handshake (104: Connection reset by peer) while SSL handshaking to upstream, client: xx.xx.xx.xx, server: 1270.0.0.1, request: "GET / HTTP/1.1", upstream: "https://xx.xx.xx.xx:1000/", host: "xx.xx.xx.xx:80"
Thanks to all who viewed; the config itself is correct.
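For anyone debugging a similar reset on the LOCAL-to-CASH leg, one thing worth ruling out before blaming the network is the TLS setup of the upstream connection: when proxy_pass points at an upstream defined only by IP, NGINX does not send SNI unless told to, and some HTTPS servers drop such handshakes. A sketch of the extra directives, assuming CASH expects a hostname (cash.example is a hypothetical name on CASH's certificate):

location / {
    proxy_pass https://api_https_to_another_vm;

    # Send SNI and verify the upstream certificate against the expected name.
    proxy_ssl_server_name on;
    proxy_ssl_name cash.example;   # hypothetical name on CASH's certificate
    proxy_ssl_trusted_certificate /etc/nginx/sites-available/CASH_CA.crt;
    proxy_ssl_verify on;

    # Client certificate presented to CASH, as in the original config.
    proxy_ssl_certificate     /etc/nginx/sites-available/signed_by_CASH.pem;
    proxy_ssl_certificate_key /etc/nginx/sites-available/local_key_used_to_generate_csr_for_CASH_to_sign.key;
    proxy_ssl_protocols TLSv1.2;
}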
I want to receive traffic at https://example.com on server 1, and then proxy that traffic over HTTPS to server 2. Server 2 has Nginx set up with the exact same TLS certificate and key as server 1, so it should theoretically be able to serve the requests. However, when Nginx on server 1 proxies a request, it sends it to server2.example.com, which differs from the common name on the cert, which is just example.com.
Is there a way to configure Nginx to expect the name on the TLS certificate offered by the upstream host (during the TLS handshake) to be different from the address of the host it is proxying to?
Example config on server 1:
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /srv/tls/example.com.crt;
    ssl_certificate_key /srv/tls/example.com.key;

    location / {
        proxy_pass https://server2.example.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
Example config on server 2:
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /srv/tls/example.com.crt;
    ssl_certificate_key /srv/tls/example.com.key;

    location / {
        proxy_pass http://localhost:12345;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
Example curl from server 1:
$ curl https://server2.example.com/chat -H "Host: example.com"
curl: (51) Unable to communicate securely with peer: requested domain name does not match the server's certificate.
If need be, I could generate a new self-signed cert and use that on server 2. However, I assumed it would be faster to just change the Nginx configuration. If the config change is not possible, I'll create a new cert.
You can use the proxy_ssl_name directive to override the server name used to verify the certificate of the proxied host (and sent via SNI).
For example:
location / {
    proxy_pass https://server2.example.com;
    proxy_set_header Host $host;
    proxy_ssl_name $host;
    ...
}
See the nginx documentation for proxy_ssl_name for details.
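Note that the certificate name is only actually checked when upstream verification is enabled; proxy_ssl_verify defaults to off, in which case proxy_ssl_name mainly affects the SNI value. A sketch of what full verification might look like, assuming the CA that issued example.com's certificate is available on server 1 (the ca.crt path is hypothetical):

location / {
    proxy_pass https://server2.example.com;
    proxy_set_header Host $host;

    # Verify server 2's certificate and check it against example.com
    # rather than against the upstream address server2.example.com.
    proxy_ssl_name example.com;
    proxy_ssl_server_name on;
    proxy_ssl_verify on;
    proxy_ssl_trusted_certificate /srv/tls/ca.crt;   # hypothetical CA bundle path
}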
I have a master nginx server that decides, based on the incoming server name, where to route requests. For two secondary servers this master nginx server also holds the SSL certificates and keys. The third server holds its own certificates and keys, because there is a frequent update process for those.
My question is now how I can configure the master nginx server to forward to server 3 all requests that come in for that server. I cannot copy the certificates and keys from server 3 to the master server, as they change too often.
Try proxying the TCP traffic instead of the HTTP traffic:
stream {
    server {
        listen SRC_IP:SRC_PORT;
        proxy_pass DST_IP:DST_PORT;
    }
}
For more details, refer to the nginx documentation:
https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/
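Since the master routes on the incoming server name, a plain stream proxy with a single fixed destination loses that ability. The ssl_preread module (nginx 1.11.5+) can read the SNI name from the ClientHello without terminating TLS and route on it, so server 3 keeps presenting its own, frequently rotated certificate. A sketch of that idea, with hypothetical names and addresses:

stream {
    # Pick the upstream from the SNI name without terminating TLS.
    map $ssl_preread_server_name $tls_upstream {
        server3.example.com 10.0.0.3:443;    # hypothetical address of server 3
        default             127.0.0.1:8443;  # local HTTPS server for the names the master terminates
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $tls_upstream;
    }
}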
Here's a configuration that might work. Proxy through the master and forward everything to Server3. Use the ssl port but turn ssl off.
server {
    listen 443;
    server_name myserver.mydomain.whatever;
    ssl off;

    access_log /var/log/nginx/myserver.access.log;
    error_log  /var/log/nginx/myserver.error.log;
    keepalive_timeout 60;

    location / {
        set $fixed_destination $http_destination;
        if ( $http_destination ~* ^https(.*)$ ) {
            set $fixed_destination http$1;
        }

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Destination $fixed_destination;

        # Fix the "It appears that your reverse proxy set up is broken" error.
        # Might need to explicitly set https://localip:port
        proxy_pass $fixed_destination;

        # Force a timeout if the backend died.
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_read_timeout 90;
        proxy_redirect http:// https://;
    }
}
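As the comment above notes, proxy_pass may need to be set explicitly to https://localip:port rather than derived from the Destination header. A minimal sketch of that variant, using a hypothetical address for server 3:

location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # Forward everything for this server name straight to server 3,
    # which terminates TLS with its own certificate and key.
    proxy_pass https://10.0.0.3:443;   # hypothetical IP and port of server 3
}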
I have the following network configuration:
F5 LB --> 2 NGINX nodes --> App server
For server-to-server calls we sign the request on the source server based on scheme, port and URI, and verify that signature on the destination by re-signing the request with the same parameters.
Server-to-server calls follow this path:
source server --> F5 LB --> NGINX --> destination server.
The original request from the source server is sent over https without a port, and is thus signed without a port (or with the default port, for that matter).
The LB adds a custom port to the request and passes it to NGINX. NGINX in turn is configured to pass the server scheme, host and port with the request to the app server:
proxy_set_header Host $host:$server_port;
proxy_set_header X-Scheme $scheme;
The destination server receives the port added by the LB instead of the one sent with the original request from the source server, so the signature check on the destination fails.
The same setup was tested with Apache, using AJP to the proxied servers, and the forwarded request holds the original port, not the one added by the LB.
After thorough reading, it comes down to a simple question:
How do you access the original request (and its port) in nginx?
Here's the rest of the relevant configuration:
proxy.conf:
proxy_redirect off;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;
proxy_buffer_size 8k;
proxy_http_version 1.0;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
nginx configuration:
log_format upstreamlog '[$time_local] $remote_addr $status "$request" $body_bytes_sent - $server_name to: $upstream_addr $upstream_response_time sec "$http_user_agent"';
server {
    listen 9080;
    listen 9443 ssl;
    server_name myserver.com;
    root html;

    error_log  /data/server_openresty/error.log info;
    access_log /ldata/server_openresty/logs/access.log upstreamlog;

    gzip on;
    gzip_types text/plain text/xml text/css text/javascript application/javascript application/xhtml+xml application/xml;

    ssl_certificate     /data/server_openresty/nginx/certs/dev_wildCard.crt;
    ssl_certificate_key /code/server_openresty/nginx/certs/dev_wildCard.key;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:MEDIUM:!aNULL:!MD5;

    ### headers passed to the proxies
    proxy_set_header Host $host:$server_port;
    proxy_set_header X-Scheme $scheme;

    location /api/serverA {
        proxy_pass http://serverA-cluster;
    }

    location /api/serverB {
        proxy_pass http://serverB-cluster;
    }
}
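One approach worth trying, assuming the F5 forwards the client's original Host header unchanged: pass $http_host (the Host header exactly as NGINX received it) instead of rebuilding it from $host:$server_port, which substitutes NGINX's own listening port, and expose the port NGINX saw separately in the conventional X-Forwarded-Port header so the app can choose which value to sign against:

### headers passed to the proxies
# $http_host preserves the Host header as it arrived at NGINX,
# including the original port (or its absence).
proxy_set_header Host $http_host;
proxy_set_header X-Scheme $scheme;

# The port of the listener that actually accepted the connection,
# in case the app still needs it for comparison.
proxy_set_header X-Forwarded-Port $server_port;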