nginx - log SSL handshake failures

I'm running an nginx server with SSL enabled.
My protocol / cipher settings are fairly secure, and I've checked them at ssllabs.com, but --
-- since this is a web service which is called by http clients that I have no control over, I have concerns about compatibility.
To the point:
Is there a way to log SSL handshake failures as they happen (if they happen) in my nginx logs?
For example, I've got SSLv3 disabled, and if I try to "curl -3" (forcing SSLv3) to my server, then I get this:
NSS error -12286 (SSL_ERROR_NO_CYPHER_OVERLAP)
Cannot communicate securely with peer: no common encryption algorithm(s).
Closing connection 0 curl: (35) Cannot communicate securely with peer: no common encryption algorithm(s).
I would like to log this type of error in the server logs too; with the default nginx settings, there is nothing.
Enabling the "debug" log level for the error log does what I want and logs SSL handshake errors, but unfortunately it also logs so much other stuff that the log becomes bloated and other potentially useful information gets drowned out.

You can use the info log level.
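For example, a minimal sketch (the log path is illustrative), set either globally or per server block:
error_log /var/log/nginx/error.log info;
At this level, failed handshakes show up as messages along the lines of "SSL_do_handshake() failed ... while SSL handshaking", without the flood of output that debug produces.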

Related

HAProxy SSL handshake failure

When I use HAProxy as a load balancer in HTTP termination mode and tail its log
(tail -f /var/log/haproxy.log), there are 2 types of log entries appearing:
[time] frontend_name/1: SSL handshake failure
and
[time] frontend_name~ message
frontend_name is the name following the frontend keyword in /etc/haproxy/haproxy.cfg.
I don't know what the /1 and the ~ in these log messages mean, or why the SSL handshake failure entries in the log have them.
Can someone help me explain and fix this error?
Thanks!
The ~ after the frontend name means the connection was established using SSL/TLS.
You can find a reference to it in the %ft entry in the table at: https://cbonte.github.io/haproxy-dconv/2.4/configuration.html#8.2.4
About the /1 in frontend_name/1: SSL handshake failure:
I can't find it in the docs, but by experimenting I found it's the index of the port listed in the frontend to which the connection was attempted and the SSL handshake failed.
For config:
frontend frontend_name
bind *:443,*:444 ssl crt <path_to_cert>
bind *:445 ssl crt <path_to_cert> no-tlsv13
If I make a TLS 1.3 connection to port 445 (e.g. openssl s_client -connect 127.0.0.1:445 -tls1_3), I will get:
frontend_name/3: SSL handshake failure
because 445 is the 3rd port listed in this frontend.
[UPDATE]
I found a bit more. The error log format documentation explains that the /1 in frontend_name/1 is the bind_name, which can be declared:
bind *:443,*:444 ssl crt <path_to_cert> no-tlsv13 name bind_ssl_foo
will result in frontend_name/bind_ssl_foo: SSL handshake failure.
Unfortunately, we can't change the error log format.
To learn more we have to make that connection successful, and that most likely requires us to lower security (FOR DEBUGGING ONLY!). Normal clients will still negotiate the highest security they can, TLS 1.2 or 1.3.
bind *:443 ssl crt <path_to_cert> ssl-min-ver TLSv1.0
Since HAProxy 2.2 the default for ssl-min-ver is TLSv1.2.
The second step is to log the SSL version, the negotiated cipher, and maybe the whole cipher list sent by the client, by appending %sslv, %sslc, and maybe %[ssl_fc_cipherlist_str] to your log-format:
log-format "your_log_format_here %sslv %sslc %[ssl_fc_cipherlist_str]"
If you don't have your own log format you can extend HTTP format:
log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r %sslv %sslc %[ssl_fc_cipherlist_str]"
To use ssl_fc_cipherlist_str we need to set tune.ssl.capture-cipherlist-size 800 in the global section, because the default is 0.
sslv is the SSL/TLS version the client connected with.
sslc is the SSL/TLS cipher the client connected with.
ssl_fc_cipherlist_str is the cipher list the client offered when negotiating the SSL/TLS connection. It can be long. Use it if you are extra curious.
That will append info like this to your logs:
TLSv1 ECDHE-RSA-AES256-SHA ECDHE-ECDSA-AES256-SHA,ECDHE-RSA-AES256-SHA,DHE-RSA-AES256-SHA,ECDHE-ECDSA-AES128-SHA,ECDHE-RSA-AES128-SHA,DHE-RSA-AES128-SHA,AES256-SHA,AES128-SHA,TLS_EMPTY_RENEGOTIATION_INFO_SCSV
Match previous errors with current entries by IP and you will know which TLS version and ciphers those clients were using. Then decide whether to adjust your ciphers or force those clients to upgrade their SSL software.
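For example, a quick way to do that matching on the load balancer itself (the IP address and log path are placeholders): first list the failing lines to collect client IPs, then look up each IP's other entries, which now end with the negotiated version and cipher:
grep 'SSL handshake failure' /var/log/haproxy.log
grep '203.0.113.7' /var/log/haproxy.log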
So, all the required changes:
global
log /dev/log daemon
tune.ssl.capture-cipherlist-size 800
frontend frontend_name
bind *:443 ssl crt <path_to_cert> ssl-min-ver TLSv1.0
log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r %sslv %sslc %[ssl_fc_cipherlist_str]"
mode http
(...)
Again, lower security only for debugging if this connection error really is a problem for you.
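To verify the extended logging, you can make a test connection yourself and watch the log (127.0.0.1 and the log path are placeholders for your own setup):
printf 'GET / HTTP/1.0\r\n\r\n' | openssl s_client -connect 127.0.0.1:443 -tls1_2 -quiet
tail -f /var/log/haproxy.log
The request lines should now end with the protocol version, the negotiated cipher and, if enabled, the client's cipher list.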

TLS handshake of clients

I'm working on an academic project about TLS handshakes. I have captured some TLS traffic generated by multiple clients (Google Chrome, Firefox, ...) and I want to see whether, for a given browser, the Client Hello message will always be the same or not (I have removed the GREASE extensions because they are added to the Client Hello in a random way, and I omitted the SNI). I found that the same browser generates multiple different Client Hello messages.
Is it normal to see such behavior, or am I doing something wrong?
A TLS handshake is done for each TCP connection involved in HTTPS and it is common that the browser uses multiple TCP connections in parallel. This is probably what you see. Multiple TLS handshakes within the same TCP connection are uncommon but might happen if a server requires a client certificate only for a specific path and thus triggers a renegotiation.
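One way to check this in the capture is to print each Client Hello together with the TCP stream it belongs to; if the differing hellos sit in different streams, you are simply looking at one handshake per connection. A sketch, assuming the capture is saved as capture.pcap and a reasonably recent tshark (older releases use ssl.* instead of tls.* field names):
tshark -r capture.pcap -Y 'tls.handshake.type == 1' -T fields -e tcp.stream -e tls.handshake.ciphersuite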

Receiving SSL Handshake success/failure events on Tomcat 8 using APR/native connector

My Tomcat instance is using the APR/native connector, which uses OpenSSL under the hood to make SSL connections. I want to receive SSL handshake events, so that any handshake, be it success or failure, can be logged. I don't see any APR connector option for it.
I am aware of the -Djavax.net.debug=XXX option, but that won't work since the connection is going through OpenSSL stack.
Is there something like SSLHandshakeCompletion/SSLHandshakeFailure event that can be captured?
If not, is there a way to enable SSL level logging when using APR connector?
I believe it's a common problem, but I haven't been able to find a solution for it yet.

handshake failure(40) and TLS_EMPTY_RENEGOTIATION_INFO_SCSV

A client installed on JBoss is trying to access a secured website configured on a DataPower XI50 v6.0.0.2 appliance. The connection is failing at the SSL handshake.
I have taken a packet capture at DataPower and observed that the SSL handshake is failing with the description: Handshake failure (40).
However, at the Client Hello step, I have observed that only one cipher suite is specified, which is: TLS_EMPTY_RENEGOTIATION_INFO_SCSV.
The TLS protocol used (as per the packet capture) is TLS 1.1. Can this cipher suite be a problem?
In the DataPower system logs I can see below error:
Request processing failed: Connection terminated before request headers read because of the connection error occurs
Update:
The client application is running on JBoss 7. I have asked our JBoss administrator to check the configuration on the JBoss end. I somehow got access to the server where the JBoss instance is installed and checked domain.xml, where SSL is configured. Where exactly in domain.xml can the configuration related to cipher suites be found?
I have observed that only one cipher suite is specified, which is: TLS_EMPTY_RENEGOTIATION_INFO_SCSV
This is not a real cipher. If no other ciphers are specified, then the client does not offer any ciphers at all, which means that no shared ciphers can be found and thus the handshake will fail. It looks like the client is buggy. The reason might be a failed attempt to fight the POODLE attack by disabling all SSL 3.0 ciphers, which in effect also disables all ciphers for TLS 1.0 and TLS 1.1.
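To rule out the server side, you can check from another machine that the DataPower endpoint still completes a handshake when offered a normal cipher list over TLS 1.1 (the hostname is a placeholder):
openssl s_client -connect datapower.example.com:443 -tls1_1
If that succeeds, the appliance is fine and the fix has to happen in the client's (JBoss) cipher suite configuration.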

How to track down "Connection timeout during SSL handshake" and "Connection closed during SSL handshake" errors

I have recently switched over to HAProxy from AWS ELB. I am terminating SSL at the load balancer (HAProxy 1.5dev19).
Since switching, I keep getting some SSL connection errors in the HAProxy log (5-10% of the total number of requests). There are three types of errors repeating:
Connection closed during SSL handshake
Timeout during SSL handshake
SSL handshake failure (this one happens rarely)
I'm using a free StartSSL certificate, so my first thought was that some hosts are having trouble accepting this certificate, and I didn't see these errors in the past because ELB offers no logging. The only issue is that some hosts do have successful connections eventually.
I can connect to the servers without any errors, so I'm not sure how to replicate these errors on my end.
This sounds like clients who are going away mid-handshake (TCP RST or timeout). This would be normal at some rate, but 5-10% sounds too high. It's possible it's a certificate issue; I'm not certain exactly how that presents to HAProxy.
Things that occur to me:
If negotiation is very slow, you'll have more clients drop off.
You may have underlying TCP problems which you weren't aware of until your new SSL endpoint proxy started reporting them.
Do you see individual hosts that sometimes succeed and sometimes fail? If so, this is unlikely to be a certificate issue. I'm not sure how connections get torn down when a user rejects an untrusted certificate.
You can use Wireshark on the HAProxy machine to capture SSL handshakes and parse them (you won't need to decrypt the sessions for handshake analysis, although you could since you have the server private key).
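If running Wireshark directly on the load balancer is inconvenient, you can record the traffic with tcpdump and analyze the file elsewhere (the interface name and output path are placeholders):
tcpdump -i eth0 -s 0 -w /tmp/tls-handshakes.pcap 'tcp port 443'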
I had this happen as well. First, SSL handshake failure appeared; then, after switching off option dontlognull, we also got Timeout during SSL handshake in the HAProxy logs.
At first, I made sure all the default timeouts were correct.
timeout connect 30s
timeout client 30s
timeout server 60s
Unfortunately, the issue was in the frontend section.
There was a line with timeout client 60, which HAProxy interprets as 60 ms (milliseconds are the default unit when none is given) rather than 60 s.
It seems certain clients were slow to connect and were getting kicked out during the SSL handshake. Check your frontend for client timeouts.
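A minimal sketch of the corrected frontend (the name and certificate path are illustrative); the point is to state the unit explicitly so the timeout really is seconds:
frontend www_ssl
bind *:443 ssl crt /etc/haproxy/ssl/site.pem
timeout client 30s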
How is your HAProxy SSL frontend configured?
For example, I use the following to mitigate BEAST attacks:
bind X.X.X.X:443 ssl crt /etc/haproxy/ssl/XXXX.pem no-sslv3 ciphers RC4-SHA:AES128-SHA:AES256-SHA
But some clients seem to generate the same "SSL handshake failure" errors. I think it's because the configuration is too restrictive.