How to set up SSL protocol priority in OpenSSL

I am implementing an SSL client using OpenSSL which
(1) only "speaks" TLS 1.2, TLS 1.1 and TLS 1.0,
(2) uses exactly this priority: TLS 1.2; if communication is not possible, use TLS 1.1; if not, TLS 1.0; if not, refuse the connection.
I achieve (1) by using
SSL_CTX_set_options(m_ssl_ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);
But I don't know any way to achieve (2). Is there any "elegant" way to do this in OpenSSL or do I have to attempt several connections checking if communication was possible and, if not, attempt a lower protocol version?
Thanks.

There is no protocol priority setting. The client will announce the best version it can do to the server and the server will pick this or a lower version. If the version picked by the server is not supported by the client then the handshake will fail. This is not specific to OpenSSL but this is how SSL/TLS works.
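On OpenSSL 1.1.0 and newer the cleanest way to express "TLS 1.0 through 1.2 only" on the client is to bound the protocol range on the context; the server then picks the highest version both sides share, and anything outside the range simply fails the handshake (your "refuse connection" case). A minimal sketch, assuming OpenSSL 1.1.0+ (the helper name is just for illustration):
#include <openssl/ssl.h>

/* Sketch, assuming OpenSSL 1.1.0+: the client offers TLS 1.0-1.2 only.
 * The server picks the highest version it also supports; if there is no
 * overlap the handshake fails, i.e. the connection is refused. */
SSL_CTX *make_client_ctx(void)
{
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    if (ctx == NULL)
        return NULL;
    SSL_CTX_set_min_proto_version(ctx, TLS1_VERSION);
    SSL_CTX_set_max_proto_version(ctx, TLS1_2_VERSION);
    return ctx;
}
On older 1.0.x releases the equivalent is what you already do: SSL_CTX_set_options() with SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3.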
Don't confuse this handshake between client and server with the TLS downgrading mechanism most browsers use. In that case the browser retries the SSL handshake on a new TCP connection with a lower version if the handshake with the better version failed. This behavior exists to work around broken SSL/TLS implementations. Such downgrades are mostly restricted to browsers; simpler TLS stacks are less tolerant and fail permanently if the first handshake fails.

Related

Delayed certificate in TLS 1.3

In ASP.NET Core there are 4 available certificate modes:
// Summary:
// A client certificate is not required and will not be requested from clients.
NoCertificate,
// Summary:
// A client certificate will be requested; however, authentication will not fail
// if a certificate is not provided by the client.
AllowCertificate,
// Summary:
// A client certificate will be requested, and the client must provide a valid certificate
// for authentication to succeed.
RequireCertificate,
// Summary:
// A client certificate is not required and will not be requested from clients at
// the start of the connection. It may be requested by the application later.
DelayCertificate
I want to use the DelayCertificate mode because it shouldn't prompt the user for a certificate in e.g. a web browser, but I can still require a certificate on some endpoints. But I have one concern: it seems to me that I once read that delayed certificates aren't supported in TLS 1.3. I'm not sure about it, and I can't find the source where I read this. So what is the situation with this mode? Will it work with TLS 1.3?
... I once read that delayed certificates aren't supported in TLS 1.3
In TLS 1.2 and lower, renegotiation was used to ask for client certificates after the initial TLS handshake was already done. This mechanism no longer exists in TLS 1.3, which means it can also no longer be used for delayed client certificates.
But TLS 1.3 introduced a different mechanism for this purpose: Post-Handshake Client Authentication. Thus, delayed client certificates can still work with TLS 1.3, only differently. But it requires that the client supports it, and not all clients do.
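For illustration only, this is roughly what the TLS 1.3 mechanism looks like at the OpenSSL level (not ASP.NET Core); a sketch assuming OpenSSL 1.1.1+ and a TLS 1.3 connection, with hypothetical helper names:
#include <openssl/ssl.h>

/* Client side: opt in before the handshake, otherwise the server's
 * post-handshake CertificateRequest is rejected. */
void client_enable_post_handshake_auth(SSL *ssl)
{
    SSL_set_post_handshake_auth(ssl, 1);
}

/* Server side: don't ask for a certificate during the handshake ... */
void server_delay_certificate(SSL_CTX *ctx)
{
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER | SSL_VERIFY_POST_HANDSHAKE, NULL);
}

/* ... and request one later, e.g. when a protected resource is accessed.
 * Fails if the client did not opt in or the connection is not TLS 1.3. */
int server_request_certificate_now(SSL *ssl)
{
    return SSL_verify_client_post_handshake(ssl);
}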
Additionally, TLS 1.3 post-handshake client authentication is explicitly forbidden in HTTP/2 - see RFC 8740. Similarly, delaying the client certificate in response to access to a protected resource wasn't allowed with TLS 1.2 in HTTP/2 either: RFC 7540 explicitly forbids renegotiation after the actual HTTP/2 protocol (inside TLS) has been started.

NiFi ListenHTTP processor: Uses an unsupported protocol

I have configured a ListenHTTP 1.7.0 processor in NiFi 1.7.0-RC1. It is listening on a custom port behind a reverse proxy. I have configured a StandardRestrictedSSLContextService with a JKS keystore and have added the keystore password. We have not configured the truststore as we don't expect to need mutual TLS. The certificate is signed by an internal enterprise CA and is (or should be!) trusted by the client.
When I test this with Chrome I receive the following:
This site can’t provide a secure connection
my.server uses an unsupported protocol.
ERR_SSL_VERSION_OR_CIPHER_MISMATCH
Unsupported protocol
The client and server don't support a common SSL protocol version or cipher suite.
Troubleshooting:
We have tried both TLS and TLSv1.2 in the ListenHTTP processor.
We have tried using curl (Linux) and Invoke-WebRequest (Windows) but have received variations on the bad cipher/SSL version message above.
I don't see anything in the release notes suggesting that the ListenHTTP processor changed much since 1.7.0, so I'm assuming that I don't need to upgrade NiFi.
Can anyone suggest what to try next or explain why we see this error?
I have read the following:
https://www.simonellistonball.com/technology/nifi-ssl-listenhttp/
https://cwiki.apache.org/confluence/display/NIFI/Release+Notes
Nifi: how to make ListenHTTP work with SSL
What version of Java are you running on? Java 11 provides TLSv1.3, which is the default offering if you have generic TLS selected, but NiFi 1.7.0 doesn't support TLSv1.3 (and doesn't run on Java 11). So assuming you are running on Java 8, recent updates have introduced TLSv1.3 but should still provide TLSv1.2. This error can also indicate that the certificate you have provided is invalid or incompatible with the cipher suite list provided by the client. To diagnose the available cipher suites and protocol versions you can use:
$ openssl s_client -connect <host:port> -debug -state -CAfile <path_to_your_CA_cert.pem>
Adding -tls1_2 or -tls1_3, etc. will restrict the connection attempt to the specified protocol version as well.
You should definitely upgrade from NiFi 1.7.0 -- it was released over 2 years ago, has known issues, and there have been close to 2000 bug fixes and features added since, including numerous security issues. NiFi 1.12.1 is the latest released version.

OpenSSL connection: alert internal error

I have 100 HTTPS services running on a single server using SNI. (Actually, I don't have access to them; it's an assignment. All I know are their domain names N.xxx.yy where N ranges from 00 to 99.) The goal of the assignment is to evaluate the security of every single connection to each of these servers. So some of the servers present expired certificates, certificates with a wrong CN, etc.
My problem is that I cannot get past the handshake on some of the servers. I have written my own application in C++ using OpenSSL, but I've also tried it with openssl s_client. This is how I connect to the server:
openssl s_client -host N.xxx.yy -port 443 -verify 1 -servername N.xxx.yy -CAfile assignment-ca.pem
And this is what I get:
139625941858168:error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:s3_pkt.c:1493:SSL alert number 80
139625941858168:error:140790E5:SSL routines:ssl23_write:ssl handshake failure:s23_lib.c:177:
In Wireshark, I see that client sent ClientHello, server responded with ServerHello (choosing TLSv1.2 and ECDHE-RSA-AES256-GCM-SHA384) followed by Certificate and then it sent me Alert message containing Internal Error (80).
After trying different things I found out that if I run s_client with -tls1 or -tls1_1 I can successfully get past the handshake; -tls1_2 does not work. What is even stranger is that connecting through Chrome/Firefox/any other browser succeeds even if TLSv1.2 is negotiated. From what I can see, Chrome is sending a different cipher list than my app or s_client, but even after modifying the cipher list to match Chrome's (and making sure the server chooses ECDHE-RSA-AES128-GCM-SHA256) it does not work either. Chrome is also sending these TLS extensions, which I don't, though most of them seem empty:
Unknown 47802
renegotiation_info
Extended Master Secret
signed_certificate_timestamp
status_request
Application Layer Protocol Negotiation
channel_id
Unknown 6682
Can anybody explain to me what is happening here? Unfortunately, I have no way to debug it on the server side, so this is all I know.
UPDATE:
After playing around with forged ClientHello messages I managed to track it down to the signature_algorithms extension. My app and s_client offer SHA384 + {RSA,DSA,ECDSA}, but if I remove these and keep just SHA256 + {RSA,DSA,ECDSA}, as Chrome does, it works and I receive the Server Key Exchange message successfully. Could it be that the server somehow does not support it, but instead of providing a meaningful error message it just ends unexpectedly and gives me this internal error?
UPDATE 2:
I found the answer to why it works with TLS versions prior to 1.2 in RFC 5246. The question from the previous UPDATE still holds.
Note: this extension is not meaningful for TLS versions prior to 1.2.
Clients MUST NOT offer it if they are offering prior versions.
However, even if clients do offer it, the rules specified in [TLSEXT]
require servers to ignore extensions they do not understand.
Since you wrote that -tls1_2 does not work, I assume that you and/or the server use an older OpenSSL library. The current version at the time of writing is 1.1.0e.
There have been quite a few fixes since 0.9.8, which is still often found on older systems.
For Version 1.0.1 there was this fix, which sounds like your problem:
Some servers which support TLS 1.0 can choke if we initially indicate
support for TLS 1.2 and later renegotiate using TLS 1.0 in the RSA
encrypted premaster secret. As a workaround use the maximum permitted
client version in client hello, this should keep such servers happy
and still work with previous versions of OpenSSL.
Maybe also notable:
Don't allow TLS 1.2 SHA-256 ciphersuites in TLS 1.0, 1.1 connections.
So I would suggest updating your OpenSSL version, and since the servers are out of your control, I would stick to the settings you already found.
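If you want to apply those settings programmatically rather than by forging the ClientHello, the signature algorithms offered by the client can be restricted through the API. A sketch, assuming OpenSSL 1.0.2+ (the helper name is illustrative):
#include <openssl/ssl.h>

/* Sketch, assuming OpenSSL 1.0.2+: offer only SHA-256 based signature
 * algorithms in the ClientHello, mirroring what worked for Chrome. */
SSL_CTX *make_sha256_only_ctx(void)
{
    SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
    if (ctx == NULL)
        return NULL;
    SSL_CTX_set1_sigalgs_list(ctx, "RSA+SHA256:ECDSA+SHA256");
    return ctx;
}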

Globally disabling protocols in OpenSSL

Is it possible to globally disable TLS 1.1 for an application that is indirectly using OpenSSL?
I would like to disable TLS 1.1 for a C application that makes SOAP HTTPS calls using gSOAP.
Disabling TLS 1.1 fixes an intermittent SSL connection problem I have been experiencing for the last few days (SSL routines:SSL3_GET_RECORD:wrong version number).
Currently TLS 1.1 is disabled by using a custom build of gSOAP, but ideally I would like to disable the protocol using a config file or some code in my application.
Ubuntu 12.04.5 LTS
OpenSSL 1.0.1-4ubuntu5.20
gSOAP 2.8.4-2
Although there is a global OpenSSL config file, it cannot be used to restrict the default SSL version(s). And unfortunately there seems to be no API or configuration option in the gSOAP library to restrict the SSL version. So you will probably have to live with your custom-built version and hope that someday they provide an API to set the SSL version.
At a minimum you will need gSOAP 2.8.28. Use the SOAP_TLSv1_2 option with soap_ssl_client_context() and soap_ssl_server_context() to restrict the TLS protocol to TLSv1.2 only. TLS1.0/TLS1.1/SSLv3 are disabled. You can't combine the SSL/TLS protocol options, so only TLSv1.2 will be enabled with this option. This works with OpenSSL 1.0.1 or later and recent GNUTLS versions. Perhaps there will be new options in upcoming gSOAP releases to support subsets of protocols, which would be nice.
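As an illustration (assuming gSOAP 2.8.28+ built against OpenSSL; the function name and CA bundle path are placeholders), the client-side setup described above could look roughly like this:
#include "soapH.h"   /* generated by gSOAP's soapcpp2 for your service */

/* Sketch: restrict the gSOAP client context to TLS 1.2 only. */
int setup_tls12_only(struct soap *soap)
{
    return soap_ssl_client_context(soap,
        SOAP_TLSv1_2,    /* TLS 1.2 only; SSLv3/TLS1.0/TLS1.1 stay disabled */
        NULL,            /* no client keyfile (no mutual TLS) */
        NULL,            /* no keyfile password */
        "cacerts.pem",   /* CA certificates used to verify the server (placeholder) */
        NULL,            /* no directory of trusted certificates */
        NULL);           /* no RNG seed file */
}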

OpenSSL let the server and client negotiate the method

Following a really outdated tutorial I managed to create an HTTPS server using OpenSSL with TLS 1.2, and I'm very proud of it ;)
However TLS 1.2 is only supported in the latest browsers, and I would like to have some kind of negotiation of the protocol between the client and server, which I'm sure can be done, but I'm not able to find out how! So that if the client only supports TLS 1.0, we'll use that. And if it only supports SSLv3, use that. Not sure about SSLv2, maybe better to leave that out...
The code I use right now is:
SSL_library_init();
OpenSSL_add_all_algorithms();
SSL_load_error_strings();
ssl_method = TLSv1_2_server_method();
ssl_ctx = SSL_CTX_new(ssl_method);
Then the server certificates are loaded and the ssl_ctx is shared among all connections. When a client is accepted by the server socket it is encapsulated in an SSL object (whatever it represents):
ssl = SSL_new(ssl_ctx);
SSL_set_fd(ssl, client_socket);
SSL_accept(ssl);
So I guess that something has to be changed in the ssl_ctx creation to allow more methods... Any idea?
<rant> No decent, extensive documentation can be found for OpenSSL, the best available is a 10 years old tutorial! </rant>
Thanks in advance.
You do this by using SSLv23_method() (and friends) instead of a specific method (e.g. TLSv1_2_server_method() in your example). On the client side this sends an SSLv2-compatible ClientHello that also announces the highest protocol supported. The somewhat outdated man page says:
SSLv23_method(void), SSLv23_server_method(void), SSLv23_client_method(void)
A TLS/SSL connection established with these methods will understand
the SSLv2, SSLv3, and TLSv1 protocol. A client will send out SSLv2
client hello messages and will indicate that it also understands SSLv3
and TLSv1. A server will understand SSLv2, SSLv3, and TLSv1 client
hello messages. This is the best choice when compatibility is a
concern.
This online man page doesn't discuss the newer TLSv1_1 and TLSv1_2 protocols, but I verified in the 1.0.1g source of s23_clnt.c that SSLv23_method() includes them.
You then limit the protocols you actually accept with SSL_CTX_set_options():
The list of protocols available can later be limited using the
SSL_OP_NO_SSLv2, SSL_OP_NO_SSLv3, SSL_OP_NO_TLSv1 options of the
SSL_CTX_set_options() or SSL_set_options() functions. Using these
options it is possible to choose e.g. SSLv23_server_method() and be
able to negotiate with all possible clients, but to only allow newer
protocols like SSLv3 or TLSv1.
Note, however, that you can't enable arbitrary sets of protocols, only contiguous protocols in SSLv2, SSLv3, TLSv1, TLSv1_1, TLSv1_2. For example, you can't choose only SSLv3 and TLSv1_1, omitting TLSv1. This comment in the source explains why:
SSL_OP_NO_X disables all protocols above X if there are some protocols below X enabled. This is required in order to maintain "version capability" vector contiguous. So that if application wants to disable TLS1.0 in favour of TLS1>=1.1, it would be insufficient to pass SSL_NO_TLSv1, the answer is SSL_OP_NO_TLSv1|SSL_OP_NO_SSLv3|SSL_OP_NO_SSLv2.
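Putting the pieces together, a minimal sketch of the server context setup using the 1.0.x-era API discussed above (TLS 1.0-1.2 only, SSLv2 and SSLv3 refused; the helper name is illustrative):
#include <openssl/ssl.h>

/* Sketch: version-flexible server context with SSLv2/SSLv3 disabled, so
 * clients negotiate TLS 1.0, 1.1 or 1.2 depending on what they offer. */
SSL_CTX *make_server_ctx(void)
{
    SSL_CTX *ctx = SSL_CTX_new(SSLv23_server_method());
    if (ctx == NULL)
        return NULL;
    SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);
    return ctx;
}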