Is there any method to do so? I have access to the application's assembly in Ghidra as well as the app's TLS 1.2 traffic in Wireshark.
I used Ghidra and found some data about OpenSSL 1.0.2l, but breakpoints on the int SSL_connect(SSL *s) function don't get hit, so I guess it isn't being used anymore/yet.
For the purpose of debugging, I am looking for a way to print the few important contents of the SSL structure in a human-friendly way. Does a function already exist in the OpenSSL libraries to do that?
The SSL handshake is failing in my custom private key method provider in some cases, and I was looking for ways to print the SSL parameters exchanged on the connection so far: for example, the TLS version, the chosen cipher, and so on.
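One possible starting point, sketched under the assumption of a connected SSL * handle named ssl (the helper name is mine, not an existing OpenSSL function):

#include <stdio.h>
#include <openssl/ssl.h>
#include <openssl/bio.h>

/* Hypothetical helper: print a few negotiated parameters of a connected SSL handle. */
static void dump_ssl_params(SSL *ssl)
{
    BIO *err = BIO_new_fp(stderr, BIO_NOCLOSE);

    fprintf(stderr, "protocol: %s\n", SSL_get_version(ssl));
    fprintf(stderr, "cipher:   %s\n",
            SSL_CIPHER_get_name(SSL_get_current_cipher(ssl)));

    /* SSL_SESSION_print() dumps the session (protocol, cipher, master key, ...) as text. */
    SSL_SESSION_print(err, SSL_get_session(ssl));

    BIO_free(err);
}

SSL_CTX_set_msg_callback() is another hook worth a look if you need to see the raw handshake messages that were exchanged before the failure.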
I'm trying to bring an old TLS 1.0 implementation (that I did not write) up to date to speak TLS 1.2.
As a first step I integrated the TLS 1.1 change of putting the plaintext initialization vector in the record. That was no problem. It seemed to work well enough that I could read https://example.com in TLS 1.1, as well as SSL Labs viewMyClient.html.
Then I adapted to the TLS 1.2 change of the pseudorandom function to (for most practical purposes) P_SHA256 instead of the (more complex and bizarre) half and half MD5/SHA1 rigamarole. I did it wrong the first time and got an invalid MAC error, but it was more or less a typo on my part and I fixed it. Then the invalid MAC error went away.
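For reference, here's a minimal sketch of that P_SHA256 construction (RFC 5246 section 5) using OpenSSL's HMAC routines, assuming the 1.0.x stack-allocated HMAC_CTX API; the function name is mine and error handling is omitted. label_seed is the label concatenated with the seed:

#include <string.h>
#include <openssl/hmac.h>
#include <openssl/sha.h>

/* Sketch of the TLS 1.2 PRF with P_SHA256 (RFC 5246 section 5). */
static void tls12_prf_sha256(const unsigned char *secret, size_t secret_len,
                             const unsigned char *label_seed, size_t ls_len,
                             unsigned char *out, size_t out_len)
{
    unsigned char a[SHA256_DIGEST_LENGTH];      /* A(i), starting from A(1) */
    unsigned char block[SHA256_DIGEST_LENGTH];  /* HMAC(secret, A(i) + seed) */
    unsigned int len;
    size_t done = 0;

    /* A(1) = HMAC_SHA256(secret, A(0)), where A(0) = label + seed */
    HMAC(EVP_sha256(), secret, (int)secret_len, label_seed, ls_len, a, &len);

    while (done < out_len) {
        HMAC_CTX ctx;
        size_t n;

        HMAC_CTX_init(&ctx);
        HMAC_Init_ex(&ctx, secret, (int)secret_len, EVP_sha256(), NULL);
        HMAC_Update(&ctx, a, sizeof(a));          /* A(i) */
        HMAC_Update(&ctx, label_seed, ls_len);    /* + label + seed */
        HMAC_Final(&ctx, block, &len);
        HMAC_CTX_cleanup(&ctx);

        n = (out_len - done < sizeof(block)) ? out_len - done : sizeof(block);
        memcpy(out + done, block, n);
        done += n;

        /* A(i+1) = HMAC_SHA256(secret, A(i)) */
        HMAC(EVP_sha256(), secret, (int)secret_len, a, sizeof(a), a, &len);
    }
}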
But despite that, after sending the ClientKeyExchange->ChangeCipherSpec messages, I'm getting a "Decrypt Error" back from the server(s) (same Alert regardless, https://google.com or anything I try). I gather the ChangeCipherSpec message is encrypting just one byte, putting it into a message with padding and the MAC, etc.
If I tweak the MAC randomly by one byte, it goes back to complaining about invalid MAC. What confuses me is that the MAC itself is encrypted as part of GenericBlockCipher:
struct {
    opaque IV[SecurityParameters.record_iv_length];
    block-ciphered struct {
        opaque content[TLSCompressed.length];
        opaque MAC[SecurityParameters.mac_length]; // <-- server reads this fine!
        uint8 padding[GenericBlockCipher.padding_length];
        uint8 padding_length;
    };
} GenericBlockCipher;
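To make that layout concrete, here's a small sketch (my own helper, not from the code in question) of how the pre-encryption body of a CBC record is assembled; the IV is prepended separately before encrypting:

#include <string.h>

/* Hypothetical helper: build the to-be-encrypted body of a TLS 1.1/1.2 CBC record:
 * content || MAC || padding || padding_length. Every padding byte, including the
 * final length byte, holds the same value; the result is block-aligned. */
static size_t build_cbc_record_body(unsigned char *out,
                                    const unsigned char *content, size_t content_len,
                                    const unsigned char *mac, size_t mac_len,
                                    size_t block_size /* 16 for AES */)
{
    size_t len = content_len + mac_len;
    unsigned char pad = (unsigned char)(block_size - 1 - (len % block_size));

    memcpy(out, content, content_len);
    memcpy(out + content_len, mac, mac_len);
    memset(out + len, pad, (size_t)pad + 1);  /* pad bytes plus the padding_length byte */

    return len + pad + 1;
}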
UPDATE: FWIW, I've added a Wireshark log of the failing 1.2 read of https://example.com, as well as a log of a functioning 1.1 session running what is the same code, not counting the P_SHA256 MAC update:
http://hostilefork.com/media/shared/stackoverflow/example-com-tls-1.2.pcapng (fails)
http://hostilefork.com/media/shared/stackoverflow/example-com-tls-1.1.pcapng (succeeds)
So, what exactly is it having trouble decrypting? The padding seems correct; if I add or subtract 1 from that byte, I get an invalid MAC error. (The spec says "The receiver MUST check this padding and MUST use the bad_record_mac alert to indicate padding errors.", so that is to be expected.) If I corrupt the client IV in the transmitted message (i.e. send a byte different from the one I actually encrypted with), that also gives me Bad Record MAC. I'd expect that to wreck the decryption as well.
So I'm puzzled on what could be the problem:
The server demonstrates discernment of valid MAC vs. not, so it must have decrypted. How's it getting the right MAC -and- having a decrypt error?
Cipher suite is an old one (TLS_RSA_WITH_AES_256_CBC_SHA) but I'm just tackling one issue at a time...and if I'm not mistaken, that shouldn't matter.
Does anyone with relevant experience have a theory of how TLS 1.2 could throw a wrench into code that otherwise works in TLS 1.1? (Perhaps someone who's done a similar updating to a codebase, and had to change more than the two things I've changed to get it working?) Am I missing another crucial technical change? What recourse do I have to find out what is making the server unhappy?
There's actually nothing wrong with the ChangeCipherSpec message. It's the Finished message that has the problem: the server is complaining about the decrypted verify_data inside that message, which does not match the expected hash (despite the encryption/decryption itself being correct).
But what's confusing in the Wireshark log is that the Finished message shows up on the same log line, under the name "EncryptedHandshakeMessage". This makes it look like some kind of tag or label describing ChangeCipherSpec, but it's not; the ChangeCipherSpec message itself isn't encrypted at all.
TLS finished packet renamed encrypted handshake message
HTTPS over TLS - encrypted type
From the second link:
In practice, you will see unencrypted Client Hello, Server Hello, Certificate, Server Key Exchange, Certificate Request, Certificate Verify and Client Key Exchange messages. The Finished handshake message is encrypted since it occurs after the Change Cipher Spec message.
"Hoping someone has experience updating TLS 1.0 or 1.1 to 1.2, and might have seen a similar problem due to not changing more than the P_SHA256 MAC and bumping the version number"
They only mention two of the three places that you need to update the MD5/SHA1 combination in the "changes from TLS 1.1" section of RFC 5246:
The MD5/SHA-1 combination in the pseudorandom function (PRF) has been replaced with cipher-suite-specified PRFs. All cipher suites in this document use P_SHA256.
The MD5/SHA-1 combination in the digitally-signed element has been replaced with a single hash. Signed elements now include a field that explicitly specifies the hash algorithm used.
(Note: The second applies to certificates, and if you haven't gotten to certificate checking you wouldn't be at that point yet.)
What they don't mention in that section is the third place the MD5/SHA-1 combination changes, which is a hash used in the seed for the verify_data of the Finished message. However, this point is also a change from TLS 1.1, described much further down the document in section 7.4.9:
"Hash denotes a Hash of the handshake messages. For the PRF defined in Section 5, the Hash MUST be the Hash used as the basis for the PRF. Any cipher suite which defines a different PRF MUST also define the Hash to use in the Finished computation."
For a formal spec they're being a bit vague about "hash used as the basis for the PRF" (is it the HMAC or just the plain hash?). But it's the plain hash. So it's SHA256, unless the cipher suite's spec says otherwise.
(Note also the cipher suite can dictate the length of the verify_data as more than 12 bytes, though none mentioned in the spec do so.)
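Putting that together, here's a minimal sketch of the TLS 1.2 verify_data computation; it reuses the hypothetical tls12_prf_sha256() sketched earlier, and the function name is mine:

#include <string.h>
#include <openssl/sha.h>

/* verify_data = PRF(master_secret, finished_label, SHA256(handshake_messages))[0..11]
 * (RFC 5246 section 7.4.9). The label is "client finished" or "server finished". */
static void tls12_finished_verify_data(const unsigned char master_secret[48],
                                       const char *label,
                                       const unsigned char *handshake_msgs,
                                       size_t handshake_len,
                                       unsigned char verify_data[12])
{
    unsigned char seed[64 + SHA256_DIGEST_LENGTH];
    size_t label_len = strlen(label);

    /* seed = label || SHA256(all handshake messages so far) -- the plain hash, not an HMAC */
    memcpy(seed, label, label_len);
    SHA256(handshake_msgs, handshake_len, seed + label_len);

    tls12_prf_sha256(master_secret, 48, seed, label_len + SHA256_DIGEST_LENGTH,
                     verify_data, 12);
}

The hash covers every handshake message sent and received so far at the handshake-message layer (no record headers), excluding HelloRequest messages and the Finished message being built.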
"What recourse do I have to find out what is making the server unhappy?"
YMMV. But what I did was just build OpenSSL as a static debug library, and linked it to a simple server. Then I added breakpoints and instrumentation to see what it was upset about. (GDB wasn't letting me step into the shared library, for some reason.)
Circa 30-Sep-2018, on a plain Linux machine:
git://git.openssl.org/openssl.git
./config no-shared no-asm -g3 -O0 -fno-omit-frame-pointer -fno-inline-functions no-ssl2 no-ssl3
make
The simple server I used came from Simple TLS Server. Compile against the static library with:
gcc -g -O0 simple.c -o simple -lssl -lcrypto -ldl -lpthread
I followed the instructions for generating certificates here, but changed the AAs to localhost
openSSL sign https_client certificate with CA
Then I changed the cert.pem => rootCA.pem and key.pem => rootCA.key in the simple server code. I was able to do:
wget https://localhost:4433 --no-check-certificate
And successfully get back test as a response. So then it was just a matter of seeing where my client caused a failure.
I can think of 2 different situations that create this problem:
1. Sending an incorrect IV. The IV affects only the first block in CBC-mode decryption, so if your content is longer than 16 bytes (the AES block size), the MAC part of your data will still be decrypted correctly (see the sketch below).
2. If you are using an incorrect padding structure, you may get a decryption error (because padding verification fails), even though the content itself is decrypted correctly.
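To see the first point in isolation, here's a small self-contained sketch (arbitrary all-zero key and made-up plaintext, nothing TLS-specific) showing that a corrupted IV only garbles the first CBC block on decryption:

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

int main(void)
{
    unsigned char key[32] = {0};
    unsigned char iv[16] = {0};
    unsigned char bad_iv[16] = {0};
    unsigned char plain[48] = "0123456789abcdef0123456789abcdefMAC-LIVES-HERE!";
    unsigned char enc[64], dec[64];
    int len, enc_len = 0, dec_len = 0;

    bad_iv[0] = 0xff; /* one corrupted IV byte */

    /* Encrypt three 16-byte blocks with the correct IV (padding disabled for clarity). */
    EVP_CIPHER_CTX *ectx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ectx, EVP_aes_256_cbc(), NULL, key, iv);
    EVP_CIPHER_CTX_set_padding(ectx, 0);
    EVP_EncryptUpdate(ectx, enc, &len, plain, sizeof(plain)); enc_len = len;
    EVP_EncryptFinal_ex(ectx, enc + enc_len, &len); enc_len += len;
    EVP_CIPHER_CTX_free(ectx);

    /* Decrypt with the wrong IV. */
    EVP_CIPHER_CTX *dctx = EVP_CIPHER_CTX_new();
    EVP_DecryptInit_ex(dctx, EVP_aes_256_cbc(), NULL, key, bad_iv);
    EVP_CIPHER_CTX_set_padding(dctx, 0);
    EVP_DecryptUpdate(dctx, dec, &len, enc, enc_len); dec_len = len;
    EVP_DecryptFinal_ex(dctx, dec + dec_len, &len); dec_len += len;
    EVP_CIPHER_CTX_free(dctx);

    printf("block 0 intact:    %s\n", memcmp(dec, plain, 16) == 0 ? "yes" : "no (garbled)");
    printf("blocks 1-2 intact: %s\n", memcmp(dec + 16, plain + 16, 32) == 0 ? "yes" : "no");
    return 0;
}

Only the first 16 bytes come back wrong; everything after them, including where the record MAC bytes would sit, decrypts unchanged.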
I have an application that supposedly runs well under Mono, but is having some problems on my system. In the meantime, I tried running it through WINE after using winetricks to install the proper version of .NET (winetricks dotnet452).
This worked great! The application hits GitHub to check for updates and manages the SSL/TLS connection flawlessly. Elsewhere in the application, it attempts to access another website, https://themoose.co.uk, but fails with an SSL/TLS error. The only notable difference I could find between that site and GitHub is that it uses an ECC certificate, as opposed to GitHub's more traditional RSA certificate.
I also saw these lines in WINE's console output:
fixme:secur32:schan_get_cipher_algid Don't know CALG for encryption algorithm 2, returning 0
fixme:secur32:schan_imp_get_max_message_size Returning 1 << 14.
fixme:secur32:schan_get_cipher_algid Don't know CALG for encryption algorithm 2, returning 0
Googling these messages doesn't return anything useful.
The conclusion I am drawn to is that WINE doesn't support new ECC certificates, but I do not see that limitation documented anywhere! Am I going crazy, or is this an oversight in the documentation somewhere?
We have OpenSSL running on our embedded system, which runs the eCos OS. We are now upgrading our OpenSSL to version 1.0.2. We have successfully ported and compiled the OpenSSL library, but when we try to connect to our device using SSL (via https), the handshake always fails with a bad record mac alert. We have enabled the OpenSSL debug option, but we are unable to identify why it's failing.
Has anyone ported the latest OpenSSL code to eCos? Do we need to take care of any special compilation flags when building the latest OpenSSL for eCos?
For reference, here is the relevant part of ssl3_get_record:
mac = rr->data + rr->length;
i = s->method->ssl3_enc->mac(s, md, 0 /* not send */);
if (i < 0 || CRYPTO_memcmp(md, mac, (size_t)mac_size) != 0)
    {
    al = SSL_AD_BAD_RECORD_MAC;
    SSLerr(SSL_F_SSL3_GET_RECORD, SSL_R_DECRYPTION_FAILED_OR_BAD_RECORD_MAC);
    goto f_err;
    }
After debugging, we found that the random library (RAND) was failing on eCos. There are a lot of places in OpenSSL that check the return value of random_bytes. Due to this failure, pre-master secret decryption was failing and incoming packets were not being decrypted properly, hence the bad record mac error.
We also checked our old ported code (0.9.6); the RAND library was failing there as well, but there was no return check for random_bytes and pseudo_rand_bytes. As a fix we made RAND return success every time, and now we can see the SSL session being established fine with OpenSSL 1.0.2.
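For illustration only, here's a sketch of what "make RAND return success every time" could look like via OpenSSL's RAND_METHOD hook (the names are hypothetical). This is only defensible as a bring-up/debugging aid; wire in a real eCos entropy source before shipping anything:

#include <string.h>
#include <openssl/rand.h>

/* WARNING: not cryptographically secure. Stub generator that always reports success,
 * used only to confirm that the bad record mac failure was caused by RAND failing. */
static int stub_bytes(unsigned char *buf, int num)
{
    memset(buf, 0xA5, (size_t)num);  /* placeholder: replace with a real hardware RNG read */
    return 1;                        /* report success so pre-master decryption can proceed */
}

static int stub_status(void)
{
    return 1;                        /* claim the PRNG is seeded */
}

static const RAND_METHOD stub_rand = {
    NULL,         /* seed */
    stub_bytes,   /* bytes */
    NULL,         /* cleanup */
    NULL,         /* add */
    stub_bytes,   /* pseudorand */
    stub_status   /* status */
};

void install_stub_rand(void)
{
    RAND_set_rand_method(&stub_rand);
}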
I recently needed to configure CocoaHttpServer, which we're using in our application with success, to handle HTTPS connections coming from a client application (running on Android devices). This is fine - there is copious sample code which allows for this, and we were able to enable the secure server without issue.
In practice we were seeing incredibly long SSL negotiation phases while the client was doing its handshaking with our server - upwards of 70 seconds.
Through a long series of searches, I found that the delay was because of the calculation of Diffie-Hellman parameters used by default when SSL is enabled in CFSocket. This thread is where I first started to find the answer to my issue.
To match what our Windows server was doing (using a less-secure SSL cipher) I needed to set the cipher explicitly on the Mac, which isn't easy when using AsyncSocket as a wrapper for the socket communications.
Our Windows server was using:
TLS_RSA_WITH_RC4_128_MD5 (0x0004)
RC4 128-bit, MD5, RSA key exchange
Our Macintosh server was using:
TLS_DHE_RSA_WITH_AES_256_CBC_SHA (0x0039)
AES 256-bit CBC, SHA-1, ephemeral Diffie-Hellman key exchange using an RSA certificate
The difference in "security" is large, but likely not worth the effort/computation/delay that we were seeing. Security Theater?
Please note that there are different ciphers that can be chosen - I chose to use the same one as our Windows implementation for consistency.
With information from another question mentioned above, I figured out how to set the cipher for CFSocket to use the same one as Windows, and things now appear to be quite a bit better - it really works! CFSocket doesn't directly expose the SecureTransport support, which makes this kind of hard, but defining a particular key makes it work nicely.
For posterity, here's the code I've added to -onSocketWillConnect: in our HTTPConnection class:
// Declare this key; it isn't exposed by CFSocketStream.h
extern const CFStringRef kCFStreamPropertySocketSSLContext;

...

CFReadStreamRef stream = [sock getCFReadStream];
CFDataRef data = (CFDataRef)CFReadStreamCopyProperty(stream, kCFStreamPropertySocketSSLContext);

// Extract the SSLContextRef wrapped inside the CFData
SSLContextRef sslContext;
CFDataGetBytes(data, CFRangeMake(0, sizeof(SSLContextRef)), (UInt8 *)&sslContext);
CFRelease(data); // CFReadStreamCopyProperty follows the Copy rule, so release it

// Restrict the enabled ciphers to the basic RC4/MD5 suite - not Diffie-Hellman
SSLCipherSuite ciphers[1] = { SSL_RSA_WITH_RC4_128_MD5 };
SSLSetEnabledCiphers(sslContext, ciphers, 1);
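As a sanity check once a connection is actually established (not in -onSocketWillConnect: itself), Secure Transport can report what was negotiated; a small sketch assuming the same sslContext extracted above:

// After the handshake completes, confirm which cipher suite was actually negotiated.
SSLCipherSuite negotiated;
if (SSLGetNegotiatedCipher(sslContext, &negotiated) == noErr) {
    printf("negotiated cipher suite: 0x%04x\n", (unsigned)negotiated);
}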
I hope this helps anyone working through the same issue as I did - I'd be happy to share some more code and advice if needed.
For what it's worth, I contributed a patch to CocoaAsyncSocket about a week before you had this issue. Sorry that I didn't notice your question back then. :-)