nginx proxy pass to sslv3 upstream - ssl

I'm struggling to proxy with nginx to an SSL upstream. I realize that proxying to HTTPS is wasteful, but here's my setup: sometimes the API is accessed directly; other times I'm using nginx to serve a JS app which is also a client of the API. CORS and browser security mandate that the JS app communicate with the same domain the app is served from:
+--------------------+          +---------------------+
|                    |--------->|                     |
| Pure HTTP API Host |          | CLI Tool API Client |
|                    |<---------|                     |
+--------------------+          +---------------------+
    |   ^ (:3152)
    |   |
    |   |                       +---------------------+
    |   +-----------------------|                     |
    |                           |    Javascript App   |
    +-------------------------->|                     |
    |                           +---------------------+
    |
nginx proxy for CORS
With that out of the way, here's the stack. The API Host is written in Go, served using a signed certificate from StartSSL:
$ openssl s_client -ssl3 -connect local.api.theproject.io:3251
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : SSLv3
Cipher : AES256-SHA
Session-ID:
Session-ID-ctx:
Master-Key: E021B27717F5A4
Key-Arg : None
Start Time: 1377589306
Timeout : 7200 (sec)
Verify return code: 21 (unable to verify the first certificate)
I've truncated that output, but suffice it to say that Go's ListenAndServeTLS only appears to work with SSLv3, as the following fails:
$ openssl s_client -connect local.api.theproject.io:3251
CONNECTED(00000003)
35899:error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version:/SourceCache/OpenSSL098/OpenSSL098-47.1/src/ssl/s23_clnt.c:602:
Thus the problem coming out of nginx is clear:
2013/08/27 09:30:21 [error] 35674#0: *3 kevent() reported that connect() failed (61:
Connection refused) while connecting to upstream, client: 127.0.0.1, server:
local.www.theproject.io, request: "GET / HTTP/1.1", upstream: "https://[::1]:3251//",
host: "local.www.theproject.io:4443"
2013/08/27 09:30:21 [error] 35674#0: *3 SSL_do_handshake() failed (SSL: error:1407742
E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version) while SSL handshaking
to upstream, client: 127.0.0.1, server: local.www.theproject.io, request: "GET /
HTTP/1.1", upstream: "https://127.0.0.1:3251//", host: "local.www.theproject.io:4443"
(Note: the first error shows [::1], but that's not significant; it also fails, of course, on 127.0.0.1.)
Thus the question is, what is missing from:
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_pass https://local.api.theproject.io:3251/;
in order to get this to proxy correctly using SSLv3 internally?

Have you tried "ssl_ciphers ALL;"?
Although that's not recommended (because it allows weak ciphers), it should narrow down the scope of the problem. If that doesn't work, the most likely cause of your problem is that the openssl you use doesn't have suitable ciphers to complete the SSL handshake with your Go server.
Note that Go's tls package is only "partially" implemented and the set of supported ciphers is very limited.
There are two solutions:
Upgrade to an openssl version that supports what Go's tls package already implements, and then, of course, recompile your nginx.
Patch the tls package to support whatever your current openssl provides by adding the appropriate suite IDs to cipherSuites in tls/cipher_suites.go (I think)
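As a quick sanity check of that theory, Python's ssl module can list the cipher suites your local OpenSSL build offers, which you can compare against Go's short list (a sketch; this assumes your Python is linked against the same OpenSSL that nginx uses, which may not be true on your system):

```python
import ssl

# A permissive context, so OpenSSL reports its full negotiable set.
ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
names = sorted({c["name"] for c in ctx.get_ciphers()})
for name in names:
    print(name)
```

If none of the printed names overlap with what the Go server offers, the handshake can never succeed, no matter what nginx is configured to do.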

Related

FastAPI with uvicorn: TLS connection dropped after Client Hello

I run a simple FastAPI server listening on HTTPS:
uvicorn main:app --reload --ssl-keyfile=./certs/app-key.pem --ssl-certfile=./certs/full.chain.pem --port 9063 --host 192.168.2.201 --ssl-version=2
It works fine; I can use Postman to access the app via https://192.168.2.201:9063.
But my client is actually a network device (using exactly the same API as Postman) and it is failing to establish the TLS connection. The device is configured correctly to trust the CA certificate. When troubleshooting I found out what is happening:
network device establishing TCP connection to 192.168.2.201:9063
network device sends TLS Client Hello (TLS envelope is version 1.0, but inner Client Hello is version 1.2)
FastAPI app sends empty ACK and then FIN
So it looks like uvicorn does not like that TLS proposal: the version or the ciphers (my guess).
I have tried running uvicorn with --ssl-version=1 but got the error invalid or unsupported protocol version 1.
When testing with nmap I can see uvicorn supports only TLS 1.2 and 1.3:
michal@certs % nmap --script ssl-enum-ciphers -p 9063 192.168.2.201
Starting Nmap 7.93 ( https://nmap.org ) at 2022-12-27 12:27 CET
Nmap scan report for ise.example.com (192.168.2.201)
Host is up (0.00024s latency).
PORT STATE SERVICE
443/tcp open https
| ssl-enum-ciphers:
| TLSv1.2:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (ecdh_x25519) - A
| compressors:
| NULL
| cipher preference: server
| TLSv1.3:
| ciphers:
| TLS_AKE_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
| TLS_AKE_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A
| TLS_AKE_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
| cipher preference: server
|_ least strength: A
Is there any way to troubleshoot it? How can I confirm why uvicorn is terminating that TLS session?
Update: I was able to pass uvicorn the argument --ssl-version=3, which appears to be TLS 1.0, but the network device's proposal is still rejected by uvicorn.
Thanks,
Michal
OK, solved the issue: the problem was the incompatible ciphers used by default by uvicorn (no match between the very specific network device proposal and uvicorn).
Once I discovered all supported ciphers:
import ssl
ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
ciphers = ctx.get_ciphers()
v12ciphers = ":".join([cipher['name'] for cipher in ciphers if cipher['protocol'] =='TLSv1.2'])
print(v12ciphers)
and passed all of those when starting uvicorn, everything started to work fine (using TLS 1.2):
--ssl-version=5 --ssl-ciphers="ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256"
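For what it's worth, the same discovery works with the non-deprecated context constructor, and grouping by protocol also surfaces the TLS 1.3 suites (a sketch; requires Python 3.6+):

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
by_proto = {}
for cipher in ctx.get_ciphers():
    # Each entry reports which protocol it belongs to (e.g. 'TLSv1.2').
    by_proto.setdefault(cipher["protocol"], []).append(cipher["name"])

# Print one colon-joined cipher string per protocol, in the same
# format --ssl-ciphers expects.
for proto in sorted(by_proto):
    print(proto, ":".join(by_proto[proto]))
```

Note that TLS 1.3 suites cannot be restricted via --ssl-ciphers anyway; OpenSSL configures them separately from the TLS 1.2-and-below cipher list.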

Nginx - SSL handshake error when connecting to upstream with self signed certificate

I am trying to proxy an old server running with a self-signed certificate.
Simple nginx conf:
server {
    listen 8009;
    location / {
        proxy_ssl_verify off;
        proxy_ssl_session_reuse off;
        proxy_pass https://192.168.10.20:8009/;
    }
}
I get SSL Handshake error in nginx log.
2018/05/02 11:31:39 [crit] 3500#2284: *1 SSL_do_handshake() failed (SSL: error:14082174:SSL routines:ssl3_check_cert_and_algorithm:dh key too small) while SSL handshaking to upstream, client: 127.0.0.1, server: , request: "GET /ping HTTP/1.1", upstream: "https://192.168.10.20:8009/ping", host: "localhost:8009"
I was hoping that adding "proxy_ssl_verify off;" would ignore all the SSL errors, but it does not seem to.
ssl3_check_cert_and_algorithm:dh key too small
The problem is that the old server is providing a DH key which is considered insecure (logjam attack). This has nothing to do with certificate validation, so trying to disable certificate validation will not help - and is a bad idea anyway.
Instead, this problem needs to be fixed at the server side by providing stronger DH parameters. Alternatively, one might try to force nginx not to use DH ciphers in the first place via the proxy_ssl_ciphers parameter. Which ciphers can be chosen there depends on what the old server supports, but you might try something like HIGH:!DH as the argument, which allows nginx to offer all strong ciphers except the DH ciphers.
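Applied to the config above, that might look like the following (a sketch; proxy_ssl_ciphers requires nginx 1.5.6 or later, and the exact cipher string depends on what the old server accepts):

```nginx
server {
    listen 8009;
    location / {
        proxy_ssl_verify off;
        proxy_ssl_session_reuse off;
        # Offer everything strong except DH ciphers, so the upstream's
        # weak DH key is never involved in the handshake.
        proxy_ssl_ciphers HIGH:!DH;
        proxy_pass https://192.168.10.20:8009/;
    }
}
```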

SSL in Bluemix and nginx configuration

I have added a certificate in Bluemix, following this post : https://www.ibm.com/blogs/bluemix/2014/09/ssl-certificates-bluemix-custom-domains/
I can see the certificate in the domain tab, and it's the one I have uploaded.
Now I have a container running nginx, because we use it as a reverse proxy. Previously it was handling the SSL configuration, but now that it's done in Bluemix directly, we just want to accept https requests without configuring a certificate.
What we did was forward the http requests to https, as advised in the post (which explains how to do it for node.js, though). We get something like this:
server {
    listen 80;
    server_name *hostname.domain*;
    return 301 https://$http_host$request_uri;
}
And in the 443 part, we only listen, without the ssl part:
server {
    listen 443;
    server_name *host.domain*;
    *other stuff for reverse proxy*
}
However, when trying to access it, I get a generic error in chrome: ERR_SSL_PROTOCOL_ERROR
Firefox gives a bit more information:
An error occurred during a connection to *host.domain*. SSL received a record that exceeded the maximum permissible length. Error code: SSL_ERROR_RX_RECORD_TOO_LONG
And when I try to check the certificate from the command line, I don't get any.
openssl s_client -connect *host.domain*:443
CONNECTED(00000003)
140250419918480:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:s23_clnt.c:782:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 289 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : TLSv1.2
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1484673167
Timeout : 300 (sec)
Verify return code: 0 (ok)
---
There's no error in the nginx logs, and I can't tell if the issue is on the Bluemix side, or in the configuration of nginx, or whether nginx even allows this kind of configuration, where it has to handle https requests without any certificate configuration...
Does someone have any idea?
Many thanks.
Regards.
If you want NGINX to pass-thru SSL, you have to use the stream module.
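A pass-thru setup would look roughly like this (a sketch, assuming nginx 1.9.0+ built with the stream module; the stream {} block lives at the same level as http {}, and the backend address is a placeholder):

```nginx
stream {
    server {
        listen 443;
        # Raw TCP forwarding: nginx never terminates TLS here, so the
        # backend must present the certificate itself.
        proxy_pass 127.0.0.1:8443;
    }
}
```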
Thanks for your answers. I was not able to check your solutions, but in the meantime I talked with a technical expert from IBM, and here is what I learned.
About the SSL pass-thru: we would need to configure each component (behind the nginx) to handle SSL, so it seems harder to manage. I'm not an expert though, so I'm just reporting what I was told on that point.
First, what we want should be doable by removing the public IP address of our nginx container.
Then, by creating a route from the Bluemix load balancer to our nginx container, we should solve the issue. The route would be configured to forward port 443 to the nginx container on port 80 (since the container is not publicly available, there's no need to handle 80 AND 443).
However, Bluemix allows routes only for container groups (for now?). Unfortunately, we use docker-compose, which does not allow (for now?) creating container groups on Bluemix.
So the best solution was to put the SSL configuration back in nginx, with the certificate on both the Bluemix domain and the nginx container. It's working fine, so we'll just improve the procedure for updating the certificate, and wait until there's a need, or a new way to do it...
K.

malformed HTTP response with docker private registry (v2) behind an nginx proxy

I have setup a Docker private registry (v2) on a CentOS 7 box following their offical documentation: https://docs.docker.com/registry/deploying/
I am running docker 1.6.0 on a Fedora 21 box.
The registry is running on port 5000, and is using an SSL key signed by a trusted CA. I set a DNS record for 'docker-registry.example.com' to be the internal IP of the server. Running 'docker pull docker-registry.example.com:5000/tag/image', it works as expected.
I setup an nginx server, running nginx version: nginx/1.8.0, and setup a dns record for 'nginx-proxy.example.com' pointing to the nginx server, and setup a site. Here is the config:
server {
    listen 443 ssl;
    server_name nginx-proxy.example.com;
    add_header Docker-Distribution-Api-Version: registry/2.0 always;
    ssl on;
    ssl_certificate /etc/ssl/certs/cert.crt;
    ssl_certificate_key /etc/ssl/certs/key.key;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Original-URI $request_uri;
    proxy_set_header Docker-Distribution-Api-Version registry/2.0;
    location / {
        proxy_pass http://docker-registry.example.com:5000;
    }
}
When I try to run 'docker pull nginx-proxy.example.com/tag/image' I get the following error:
FATA[0001] Error response from daemon: v1 ping attempt failed with error: Get https://nginx-proxy.example.com/v1/_ping: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
My question is twofold.
Why is the docker client looking for /v1/_ping?
Why am I seeing the 'malformed HTTP response'?
If I run 'curl -v nginx-proxy.example.com/v2' I see:
[root@alex amerenda] $ curl -v https://nginx-proxy.example.com/v2/
* Hostname was NOT found in DNS cache
* Trying 10.1.43.165...
* Connected to nginx-proxy.example.com (10.1.43.165) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* SSL connection using TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: CN=*.example.com,O="example, Inc.",L=New York,ST=New York,C=US
* start date: Sep 15 00:00:00 2014 GMT
* expire date: Sep 15 23:59:59 2015 GMT
* common name: *.example.com
* issuer: CN=GeoTrust SSL CA - G2,O=GeoTrust Inc.,C=US
> GET /v2/ HTTP/1.1
> User-Agent: curl/7.37.0
> Host: nginx-proxy.example.com
> Accept: */*
> \x15\x03\x01\x00\x02\x02
If I do 'curl -v docker-registry.example.com' I get a 200 OK response. So nginx has to be responsible for this. Does anyone have an idea why this is happening? It is driving me insane!
proxy_pass http://docker-registry.example.com:5000;
you are passing the request with plain HTTP (i.e. no https)
\x15\x03\x01\x00\x02\x02
And you are getting an SSL response back. So it looks like you must use https:// and not http:// to access port 5000. And you even know that you are using SSL:
The registry is running on port 5000, and is using an SSL key signed by a trusted CA...
Apart from that: please use the names reserved for examples like example.com and don't use domain names in your example which don't belong to you.
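So the fix is most likely just the scheme in proxy_pass (a sketch based on the config above):

```nginx
location / {
    # The registry itself speaks TLS on port 5000, so the upstream
    # scheme must be https, not http.
    proxy_pass https://docker-registry.example.com:5000;
}
```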

Support for two-way TLS/HTTPS with ELB

One-way (or server-side) TLS/HTTPS with Amazon Elastic Load Balancing is well documented.
Support for two-way (or client-side) TLS/HTTPS is not as clear from the documentation.
Assuming ELB is terminating a TLS/HTTPS connection:
Does ELB support client authenticated HTTPS connections?
If so, does a server served by ELB receive an X-Forwarded-* header to identify the client authenticated by ELB?
ELB does support TCP forwarding so an EC2 hosted server can establish a two-way TLS/HTTPS connection but in this case I am interested in ELB terminating the TLS/HTTPS connection and identifying the client.
I don't see how it could, in double-ended HTTPS mode, because the ELB is establishing a second TCP connection to the back-end server, and internally it's decrypting/encrypting the payload to/from the client and server... so the server wouldn't see the client certificate directly, and there are no documented X-Forwarded-* headers other than -For, -Proto, and -Port.
With an ELB running in TCP mode, on the other hand, the SSL negotiation is done directly between the client and server with ELB blindly tying the streams together. If the server supports the PROXY protocol, you could enable that functionality in the ELB so that you could identify the client's originating IP and port at the server, as well as identifying the client certificate directly because the client would be negotiating directly with you... though this means you are no longer offloading SSL to the ELB, which may be part of the point of what you are trying to do.
Update:
It doesn't look like there's a way to do everything you want to do -- offload SSL and identify the client certificate -- with ELB alone. The information below is presented "for what it's worth."
Apparently HAProxy has support for client-side certificates in version 1.5, and passes the certificate information in X- headers. HAProxy also supports the PROXY protocol via configuration (something along the lines of tcp-request connection expect-proxy), so it seems conceivable that you could use HAProxy behind a TCP-mode ELB, with HAProxy terminating the SSL connection and forwarding both the client IP/port information from ELB (via the PROXY protocol) and the client cert information to the application server... thus allowing you to still maintain SSL offload.
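A rough sketch of such an HAProxy 1.5 frontend (untested, and the header names and file paths here are arbitrary; accept-proxy consumes the PROXY protocol header sent by a TCP-mode ELB):

```haproxy
frontend https_in
    # Terminate TLS, require a client certificate, and expect the
    # PROXY protocol header from the ELB in front of us.
    bind :443 ssl crt /etc/haproxy/server.pem ca-file /etc/haproxy/ca.pem verify required accept-proxy
    # Forward client-certificate details to the application.
    http-request set-header X-SSL-Client-DN %{+Q}[ssl_c_s_dn]
    http-request set-header X-SSL-Client-Verify %[ssl_c_verify]
    default_backend app

backend app
    server app1 10.0.0.10:8080
```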
I mention this because it seems to be a complementary solution, perhaps more feature-complete than either platform alone, and, at least in 1.4, the two products work flawlessly together. I am using HAProxy 1.4 behind ELB successfully for all requests in my largest web platform (in my case, ELB is offloading the SSL -- there aren't client certs), and it seems to be a solid combination in spite of the apparent redundancy of cascaded load balancers. I like having ELB be the only thing out there on the big bad Internet, though I have no reason to think that directly-exposed HAProxy would be problematic on its own. In my application, the ELBs are there to balance between the HAProxies in the availability zones (which I had originally intended to also auto-scale, but the CPU utilization stayed so low even during our busy season that I never had more than one per Availability Zone, and I've never lost one, yet...), which can then do some filtering, forwarding, and munging of headers before delivering the traffic to the actual platform, in addition to giving me some logging, rewriting, and traffic-splitting control that I don't have with ELB on its own.
In case your back end can support client-authenticated HTTPS connections itself, you may configure ELB to forward TCP on port 443 to TCP on whatever port your back end listens on. This makes ELB pass the still-encrypted stream straight through to your back end without decrypting it. This config also doesn't require installing an SSL certificate on the load balancer.
Update: with this solution the x-forwarded-* headers are not set.
You can switch to a single instance on Elastic Beanstalk, and use ebextensions to upload the certs and configure nginx for mutual TLS.
Example
.ebextensions/setup.config
files:
  "/etc/nginx/conf.d/00_elastic_beanstalk_ssl.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      server {
        listen 443;
        server_name example.com;
        ssl on;
        ssl_certificate /etc/nginx/conf.d/server.crt;
        ssl_certificate_key /etc/nginx/conf.d/server.key;
        ssl_client_certificate /etc/nginx/conf.d/ca.crt;
        ssl_verify_client on;
        gzip on;
        send_timeout 300s;
        client_body_timeout 300s;
        client_header_timeout 300s;
        keepalive_timeout 300s;
        location / {
          proxy_pass http://127.0.0.1:5000;
          proxy_http_version 1.1;
          proxy_set_header Connection "";
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-SSL-client-serial $ssl_client_serial;
          proxy_set_header X-SSL-client-s-dn $ssl_client_s_dn;
          proxy_set_header X-SSL-client-i-dn $ssl_client_i_dn;
          proxy_set_header X-SSL-client-session-id $ssl_session_id;
          proxy_set_header X-SSL-client-verify $ssl_client_verify;
          proxy_connect_timeout 300s;
          proxy_send_timeout 300s;
          proxy_read_timeout 300s;
        }
      }
  "/etc/nginx/conf.d/server.crt":
    mode: "000400"
    owner: root
    group: root
    content: |
      -----BEGIN CERTIFICATE-----
      MIJDkzCCAvygAwIBAgIJALrlDwddAmnYMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD
      ...
      LqGyLiCzbVtg97mcvqAmVcJ9TtUoabtzsRJt3fhbZ0KKIlzqkeZr+kmn8TqtMpGn
      r6oVDizulA==
      -----END CERTIFICATE-----
  "/etc/nginx/conf.d/server.key":
    mode: "000400"
    owner: root
    group: root
    content: |
      -----BEGIN RSA PRIVATE KEY-----
      MIJCXQIBAAKBgQCvnu08hroXwnbgsBOYOt+ipinBWNDZRtJHrH1Cbzu/j5KxyTWF
      ...
      f92RjCvuqdc17CYbjo9pmanaLGNSKf0rLx77WXu+BNCZ
      -----END RSA PRIVATE KEY-----
  "/etc/nginx/conf.d/ca.crt":
    mode: "000400"
    owner: root
    group: root
    content: |
      -----BEGIN CERTIFICATE-----
      MIJCizCCAfQCCQChmTtNzd2fhDANBgkqhkiG9w0BAQUFADCBiTELMAkGA1UEBhMC
      ...
      4nCavUiq9CxhCzLmT6o/74t4uCDHjB+2+sIxo2zbfQ==
      -----END CERTIFICATE-----