What does OpenSSL's function SSL_read() return for a non-Application record?

The SSL/TLS protocol has four sub-protocols, each with its own record type:
Application
Handshake
Change cipher spec
Alerts
What does SSL_read() return (for a blocking socket) if the record received was NOT an Application message? And if it does return non-zero, how is the caller supposed to know what to do with it?
I don't see what the caller/client can do with the 3 non-Application messages, they seem more like internal state for SSL.
If it returns 0 bytes, this will be confusing for a blocking socket.
If it returns > 0 bytes, would the caller assume an Application message has been received? (There is no flag returned to the caller to indicate the record type.)
I am looking at the source code but it's not clear.

SSL_read will only return data retrieved from application records. Any other messages received will only change the internal state of the SSL session, like proceeding with the SSL handshake (if not previously finished), saving session tickets for later use or closing the connection (on shutdown alert).
If this internal change of the SSL session results in the session getting closed or invalid (like when getting an alert), then SSL_read will return with an error and the reason can be retrieved using SSL_get_error.

Reporting Status Results for Control Transfers

Consider section 8.5.3.1 of USB 2.0 specification:
Control write transfers return status information in the data phase of the Status stage transaction.
For control writes, the host sends an IN token to the control pipe to initiate the Status stage. The function
responds with either a handshake or a zero-length data packet to indicate its current status.
In IN transactions handshake is done by host, not device!
The question is: how can the device send a handshake for an IN transaction?
In IN transactions handshake is done by host, not device!
I believe there is some misunderstanding.
The device sends NAK/STALL during the handshake phase of an IN transaction (control write) if there is no data packet during the Status stage.
If there is a data packet from the function corresponding to the IN token, the function expects an ACK handshake from the host after sending the data packet.
Data packets during Status stages are zero-length packets.
(An illustration of the question's scenario was attached here; see also the link in the comments.)

How to handle EAGAIN case for TLS over SCTP streams using memory BIO interface

I'm using the memory BIO interface to implement TLS over SCTP.
At the client side, while sending application data:
1. SSL_write() encrypts the data and writes it to the associated write BIO.
2. The data is read from the BIO into an output buffer using BIO_read().
3. The buffer is sent out on the socket using sctp_sendmsg().
Similarly, at the server side, while reading data from the socket:
1. sctp_recvmsg() reads encrypted message chunks from the socket.
2. BIO_write() writes them to the read BIO buffer.
3. SSL_read() decrypts the data read from the BIO.
The case I'm interested in is where, at the client side, steps 1 and 2 are done, and while doing step 3 I get an EAGAIN from the socket. So I discard whatever data I've read from the BIO buffer and ask the application to resend the data after some time.
When I do this, and steps 1, 2 and 3 at the client side later go through fine, OpenSSL at the server side finds that the record it received has a bad_record_mac and closes the connection.
From googling I learned that one possibility is TLS records arriving out of sequence: the MAC computation depends on the previous record, so TLS needs the records delivered in order. So by discarding data on EAGAIN, am I dropping an SSL record and then sending the next one out of order? (I'm missing clarity here.)
Just to test this hypothesis, I changed the code so that whenever the socket returned EAGAIN it waited indefinitely until the socket was writable; then everything works and I see no bad_record_mac at the server side.
Can someone help me with this EAGAIN handling? I can't wait indefinitely to get around the issue; is there another way out?
... I get an EAGAIN from the socket. So whatever data I've read from the BIO buffer, I clean it up, and ask the application to resend the data again after some time.
If you get an EAGAIN on the socket you should try to send the same encrypted data later.
What you do instead is throw the encrypted data away and ask the application to send the same plain data again. This means the data gets encrypted again. But encrypting plain data in SSL also involves a sequence number for the SSL record, and that sequence number is not the same as the one in the record you threw away.
Thus, if you threw away the full SSL record, you are now sending a new record with the next sequence number, which does not match the expected sequence number. If you succeeded in sending part of the previous SSL record and threw away the rest, the new data you send will be considered part of the previous record, which means the HMAC of the record will not match.
So don't throw the encrypted data away; resend it instead of letting the upper layer resend the plain data.
1. Select for writability.
2. Repeat the send.
3. If the send was incomplete, remove the part of the buffer that was sent and go to (1).
So whatever data i've read from the BIO buffer, i clean it up
I don't know what this means. You're sending, not receiving.
Just to make sure of my hypothesis, whenever the socket returned EAGAIN, i made the code change to do an infinite wait till the socket was writeable and then everything goes fine and i dont see any bad_record_mac at server side.
That's exactly what you should do. I can't imagine what else you could possibly have been doing instead, and your description of it doesn't make any sense.

How to wait for entire buffer to arrive in an SSL connection

I am implementing a client-server program in which the client sends HTTP messages to the server. It can be either HTTP or HTTPS.
In the case of large messages, like a file transfer over HTTP, the client sends the whole message in one go, but it reaches the server in multiple fragments (the network fragments it). I wait for the entire message to arrive, merging the fragments so that I get the whole message. The content length is found from a parameter I send in the HTTP message.
But in the case of HTTPS there is no way to know whether the entire message has arrived.
If I decrypt a fragment, it returns junk. I think that is because the whole encrypted message must be joined before decrypting it.
How is it possible to identify whether the entire message has arrived over HTTPS?
I am using the SSL library and Windows sockets.
SSL encrypts plain data into records, and those records are transmitted individually to the other party. The receiver reads the raw socket data and pumps it into the SSL decryption engine as it arrives. When the engine has enough bytes for a given record, it decrypts that record and outputs the plain data for just that record.
So you simply keep reading socket data and pumping it into the decryption engine, buffering whatever plain data comes out, until you encounter a decrypted <CRLF><CRLF> sequence denoting the end of the HTTP message headers. Then you process those headers to determine whether an HTTP message body is present and how it is encoded. If a message body is present, keep reading socket data, pumping it into the decryption engine, and buffering the output plain data, until you reach the end of the message body. RFC 2616 Section 4.4 - "Message Length" describes how to determine the encoding of the HTTP message body (after decryption is applied) and what condition terminates it.
In other words, you are not supposed to look for the end of an encrypted socket message. You are supposed to decrypt everything you receive until you detect the end of the decrypted HTTP message.

OpenSSL SSL_ERROR_WANT_WRITE never recovers during SSL_write()

I have two applications talking to each other over SSL. The client runs on a Windows machine; the server is a Linux-based application. The client sends a large amount of data to the server on startup. The data is sent in ~4000-byte chunks, each containing 30 entries; I have to send about 50000 entries over.
During that transmission the server sends a message to the client; the message size is ~4000 bytes. After that happens, SSL_write() on the client side begins to return SSL_ERROR_WANT_WRITE. The client sleeps for 10 ms and retries the SSL_write() with the exact same parameters, but it keeps failing indefinitely and eventually aborts. If it then tries to send a new message, I get an error indicating I am not resending the same message that was aborted earlier:
error:1409F07F:SSL routines:SSL3_WRITE_PENDING:bad write retry
The server eventually kills the connection, since it has not heard from the client for 60 s, and re-establishes a new one. That is just an FYI; the real issue is how I can get SSL_write() to resume.
If the server does not send a request during the receive the problem goes away. If I shrink the size of the request from 16K to 100 bytes the problem does not happen.
The SSL CTX MODE is set to SSL_MODE_AUTO_RETRY and SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER.
Does anyone have an idea why simultaneous transmission of large messages from both sides can cause this failure? If this is a limitation, what can I do to prevent it other than capping the size of what the server sends to the client? My concern is that if the client is not sending anything, the throttling I applied to avoid this issue is a waste.
On the client side I tried to perform an SSL_read() to see whether I need to read during a write, even though I never receive SSL_ERROR_WANT_READ; but the buffer is not that big anyway, ~1000 bytes in size.
Any insight on this would be appreciated.
SSL_ERROR_WANT_WRITE - this error is returned by OpenSSL (I am assuming you are using OpenSSL) only when the socket send gives it an EWOULDBLOCK or EAGAIN error. The socket send will give EWOULDBLOCK when the send-side buffer is full, which in turn means that your server is not reading the messages sent from the client.
So, essentially, the problem lies with your Server which is not reading the messages sent to it. You need to check your server and fix it, which will automatically fix your client problem.
Also, why have you set the option "SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER"? SSL always expects that the record which it is trying to send should be sent completely before the next record can be sent.
As it turns out, in both the client- and server-side apps, the reads and writes are processed in one thread. In the perfect storm I described above, the client is busy writing (non-blocking). The server then decides to write a large set of messages of its own in between processing its rx buffers. The server's tx is a blocking call. The server gets stuck writing, starving its reads; the buffers fill up, and we have a deadlock scenario.
The default Windows buffer is 8 KB, so it doesn't take much to fill it up.
The architecture should have a separate thread for rx and tx processing on both sides. As a short-term fix, one can increase the rx buffers and rate-limit the tx side to prevent the deadlock.

How to combine multiple handshake messages into one record in Java?

I used Java code to request an HTTPS site, ran tcpdump, and found that "Client Key Exchange, Change Cipher Spec, Encrypted Handshake Message" is split into two records:
1. Client Key Exchange
2. Change Cipher Spec, Encrypted Handshake Message
How can these three handshake messages be combined into a single record in Java?
Why do you care how those are put on the wire? Are you trying to save just a few bytes or have a legitimate real reason for that?
I don't know the specifics of Java's implementation and whether you can influence it through config/params, but from the TLS protocol perspective, it doesn't make any difference how you send handshake messages on the wire. In the case of separate records, you just send some extra bytes, that's all.
Furthermore, these three in particular cannot be combined into a single record, and there is a reason for that. The ClientKeyExchange is a plaintext handshake message, so it goes into a handshake record. The ChangeCipherSpec is not a handshake message at all; it has its own record content type, so it cannot go into the same record as the CKE. And since CCS occupies its own record, the handshake message that follows it must be wrapped in yet another record, which is why you see three separate records. Also, the Finished message is encrypted and carries a MAC at the record layer, so it could not share a record with plaintext handshake messages anyway.
I hope this clears it up a bit.