How to wait for the entire buffer to arrive in an SSL connection

I am implementing a client-server program in which the client sends HTTP messages to the server. It can be either HTTP or HTTPS.
For large messages, like a file transfer over HTTP, the client sends the whole message in one go, but it reaches the server in multiple fragments (the network splits it up). I wait for the entire message to arrive, merging the fragments as they come in until I have the whole message. The content length is taken from a parameter I send in the HTTP message.
But in the case of HTTPS there is no way to know whether the entire message has arrived.
If I decrypt a fragment, it returns junk values. I think that is because the whole encrypted message must be joined before decrypting it.
How can I identify whether the entire message has arrived over HTTPS?
I am using the SSL library with Windows sockets.

SSL encrypts the plain data into records, and those records are transmitted individually to the other party. The receiver reads the raw socket data and pumps it into the SSL decryption engine as it arrives; when the engine has enough bytes for a given record, it decrypts that record and outputs the plain data for just that record.

So you simply keep reading socket data and pumping it into the decryption engine, buffering whatever plain data comes out, until you encounter a decrypted <CRLF><CRLF> sequence denoting the end of the HTTP message headers. Then you process those headers to determine whether an HTTP message body is present and how it is encoded. If a message body is present, keep reading socket data, pumping it into the decryption engine, and buffering the output plain data, until you encounter the end of the message body. RFC 2616 Section 4.4 - "Message Length" describes how to determine the encoding of the HTTP message body (after decryption is applied) and what condition terminates it.
In other words, you are not supposed to look for the end of an encrypted socket message. You are supposed to decrypt everything you receive until you detect the end of the decrypted HTTP message.
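As an illustration, here is a minimal sketch of that loop, assuming OpenSSL and an already-handshaken SSL object; read_http_message() is an invented helper name and error handling is trimmed:

    #include <openssl/ssl.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical helper: returns a malloc'd buffer holding one whole HTTP
     * message (headers + body) read from an already-handshaken SSL object. */
    char *read_http_message(SSL *ssl, size_t *out_len)
    {
        size_t cap = 8192, len = 0, header_end = 0, body_len = 0;
        char *buf = malloc(cap);
        if (!buf) return NULL;

        /* 1. Keep decrypting until the plain data contains the header end. */
        while (header_end == 0) {
            if (len + 1 >= cap) buf = realloc(buf, cap *= 2);
            int n = SSL_read(ssl, buf + len, (int)(cap - len - 1));
            if (n <= 0) { free(buf); return NULL; }  /* error or close_notify */
            len += (size_t)n;
            buf[len] = '\0';
            char *p = strstr(buf, "\r\n\r\n");
            if (p) header_end = (size_t)(p - buf) + 4;
        }

        /* 2. Work out the body length from the decrypted headers.  Only the
         *    Content-Length case is shown; chunked encoding and the other
         *    cases are covered by RFC 2616 section 4.4. */
        char *cl = strstr(buf, "Content-Length:");
        if (cl && cl < buf + header_end) body_len = strtoul(cl + 15, NULL, 10);

        /* 3. Keep decrypting until header + body bytes have arrived. */
        while (len < header_end + body_len) {
            if (len + 1 >= cap) buf = realloc(buf, cap *= 2);
            int n = SSL_read(ssl, buf + len, (int)(cap - len - 1));
            if (n <= 0) { free(buf); return NULL; }
            len += (size_t)n;
        }

        *out_len = len;
        return buf;
    }

The same framing logic works for plain HTTP if SSL_read() is swapped for recv(); at no point does the receiver inspect the encrypted bytes themselves.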

Related

How to handle EAGAIN case for TLS over SCTP streams using memory BIO interface

I'm using the BIO memory interface to implement TLS over SCTP.
At the client side, while sending out application data:
1. SSL_write() encrypts the data and writes it to the associated write BIO.
2. The data is read from the BIO into an output buffer using BIO_read().
3. The buffer is sent out on the socket using sctp_sendmsg().
Similarly at the server side, while reading data from the socket:
1. sctp_recvmsg() reads encrypted message chunks from the socket.
2. BIO_write() writes them to the read BIO buffer.
3. SSL_read() decrypts the data read from the BIO.
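In code terms, the client-side send path (steps 1-3) looks roughly like this; send_app_data() is just an illustrative name, the BIOs are assumed to be memory BIOs attached with SSL_set_bio(), and error handling is trimmed:

    #include <openssl/ssl.h>
    #include <openssl/bio.h>
    #include <netinet/sctp.h>

    int send_app_data(SSL *ssl, BIO *wbio, int sock,
                      const void *data, size_t len)
    {
        char out[16384];                       /* >= one TLS record */

        /* 1. Encrypt: plaintext goes in, ciphertext lands in the write BIO. */
        if (SSL_write(ssl, data, (int)len) <= 0)
            return -1;

        /* 2./3. Drain the write BIO and push the ciphertext onto the wire. */
        while (BIO_ctrl_pending(wbio) > 0) {
            int n = BIO_read(wbio, out, sizeof(out));
            if (n <= 0)
                break;
            if (sctp_sendmsg(sock, out, (size_t)n,
                             NULL, 0, 0, 0, 0, 0, 0) < 0)
                return -1;                     /* EAGAIN handling is the question below */
        }
        return 0;
    }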
The case I'm interested in is where, at the client side, steps 1 and 2 are done, and while doing step 3 I get an EAGAIN from the socket. So I clean up whatever data I've read from the BIO buffer and ask the application to resend the data again after some time.
Now when I do this, and later when steps 1, 2 and 3 at the client side go through fine, at the server side OpenSSL finds that the record it received has a bad_record_mac and closes the connection.
From googling I learned that one way this can happen is if TLS records arrive out of sequence, since the MAC computation depends on the previous record and TLS needs the records delivered in order. So when I clean up the data on EAGAIN, am I dropping an SSL record and then sending the next record out of order (I'm missing clarity here)?
Just to test my hypothesis, I changed the code so that whenever the socket returned EAGAIN it waited indefinitely until the socket was writable; then everything goes fine and I don't see any bad_record_mac at the server side.
Can someone help me with this EAGAIN handling? I can't do an infinite wait to get around the issue; is there any other way out?
... I get an EAGAIN from the socket. So I clean up whatever data I've read from the BIO buffer and ask the application to resend the data again after some time.
If you get an EAGAIN on the socket, you should try to send the same encrypted data later.
What you do instead is throw the encrypted data away and ask the application to send the same plain data again. This means the data gets encrypted again. But encrypting plain data in SSL also involves the sequence number of the SSL frame, and this sequence number is not the same as the one of the SSL frame you threw away.
Thus, if you have thrown away the full SSL frame, you are now sending a new SSL frame with the next sequence number, which does not match the expected sequence number. If you succeeded in sending part of the previous SSL frame and threw away the rest, the new data you send will be considered part of the previous frame, which means the HMAC of the frame will not match.
So don't throw away the encrypted data; try to resend it instead of letting the upper layer resend the plain data.
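A minimal sketch of that approach (hypothetical names, assuming a non-blocking SCTP socket): the ciphertext drained from the write BIO is kept in a buffer until the kernel has accepted all of it, and the plaintext is never handed to SSL_write() a second time.

    #include <errno.h>
    #include <string.h>
    #include <netinet/sctp.h>

    static char   pending[16384];   /* encrypted bytes read out of the write BIO */
    static size_t pending_len;      /* how many of them are still unsent         */

    /* Returns 1 when everything went out, 0 on "try again later", -1 on error. */
    int flush_ciphertext(int sock)
    {
        while (pending_len > 0) {
            int n = sctp_sendmsg(sock, pending, pending_len,
                                 NULL, 0, 0, 0, 0, 0, 0);
            if (n < 0) {
                if (errno == EAGAIN || errno == EWOULDBLOCK)
                    return 0;       /* keep 'pending' untouched, retry later */
                return -1;          /* real error */
            }
            /* Drop only the bytes the kernel accepted (a partial send is rare
             * with SCTP's message semantics, but harmless to handle). */
            memmove(pending, pending + n, pending_len - (size_t)n);
            pending_len -= (size_t)n;
        }
        return 1;
    }

The caller keeps watching the socket for writability and calls the helper again; only once it reports completion is the next chunk of plaintext passed to SSL_write().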
1. Select for writability.
2. Repeat the send.
3. If the send was incomplete, remove the part of the buffer that got sent and go to (1).
So I clean up whatever data I've read from the BIO buffer
I don't know what this means. You're sending, not receiving.
Just to test my hypothesis, I changed the code so that whenever the socket returned EAGAIN it waited indefinitely until the socket was writable; then everything goes fine and I don't see any bad_record_mac at the server side.
That's exactly what you should do. I can't imagine what else you could possibly have been doing instead, and your description of it doesn't make any sense.

Handling Telnet negotiation

I'm trying to implement a Telnet client using C++ with Qt as the GUI.
I have no idea how to handle the telnet negotiations.
Every telnet command is preceded by IAC, e.g.
IAC WILL SUPPRESS_GO_AHEAD
This is how I handle the negotiation:
1. Search for the IAC character in the received buffer.
2. Respond to the request according to the command and option.
My questions are as follows:
It seems that the telnet server won't wait for a client response after a negotiation command is sent,
e.g. (it sends two or more commands without waiting for a client response):
IAC WILL SUPPRESS_GO_AHEAD
IAC WILL ECHO
How should I handle such a situation? Handle both requests or just the last one?
What would the option values be if I don't respond to a request? Are they set to defaults?
Why isn't the IAC character (255) treated as data instead of a command?
Yes, it is allowed to send out several negotiations for different options without synchronously waiting for a response after each of them.
In fact it's important for each side to try to continue (possibly after some timeout, if you did decide to wait for a response) even if it didn't receive a reply. There are legitimate situations according to the RFC in which there shouldn't or mustn't be a reply, and the other side might also simply ignore the request for whatever reason, which you have no control over.
You need to consider both negotiation requests the server sent, as they are both valid requests (you may choose to deny one or both, of course).
I suggest you handle both of them (whatever "handling" means in your case) as soon as you notice them, so as not to risk getting the server stuck if it decides to wait for your replies.
One possible way to go about it is presented by Daniel J. Bernstein in RFC 1143. It uses a finite state machine (FSM) and is quite robust against negotiation loops.
A compliant server (the same goes for a compliant client) defaults all negotiable options to WON'T and DON'T (i.e. disabled) at the start of the connection and doesn't consider them enabled until a request for DO or WILL was acknowledged by a WILL or DO reply, respectively.
Not all servers (or clients for that matter) behave properly, of course, but you cannot anticipate all ways a peer might misbehave, so just assume that all options are disabled until enabling them was requested and the reply was positive.
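To make that concrete, here is a rough sketch of scanning a received buffer for negotiation commands and refusing every option we do not implement, in line with the "disabled by default" rule above. The constants, send_bytes() and handle_negotiation() are invented names, commands split across two reads and subnegotiations (IAC SB ... IAC SE) are not handled, and a production client should track per-option state as in RFC 1143 to avoid negotiation loops:

    #include <stddef.h>

    #define IAC  255
    #define DONT 254
    #define DO   253
    #define WONT 252
    #define WILL 251

    /* Assumed function that writes raw bytes to the server. */
    void send_bytes(const unsigned char *buf, size_t len);

    static void refuse(unsigned char verb, unsigned char option)
    {
        /* WILL x is answered with DONT x, DO x with WONT x. */
        unsigned char reply[3] = { IAC,
                                   (unsigned char)(verb == WILL ? DONT : WONT),
                                   option };
        send_bytes(reply, sizeof(reply));
    }

    void handle_negotiation(const unsigned char *buf, size_t len)
    {
        for (size_t i = 0; i + 2 < len; i++) {
            if (buf[i] != IAC)
                continue;
            unsigned char verb = buf[i + 1], option = buf[i + 2];
            if (verb == WILL || verb == DO)
                refuse(verb, option);   /* or enable it, if we support the option */
            /* DONT/WONT confirm the default "off" state: send nothing back. */
            i += 2;
        }
    }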
I'll assume here that what you're actually asking is how the server is going to send you a byte of 255 as data without you misinterpreting it as an IAC control sequence (and vice versa, how you should send a byte of 255 as data to the server without it misinterpreting it as a telnet command).
The answer is simply that instead of a single byte of 255, the server (and your client in the opposite direction) sends IAC followed by another byte of 255, so in effect doubling all values of 255 that are part of the data stream.
Upon receiving an IAC followed by 255 over the network, your client (and the server in the opposite direction) must replace that with a single data byte of 255 in the data stream it returns.
This is also covered in RFC 854.
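A small sketch of that doubling rule (escape_iac() and unescape_iac() are invented names; the output buffer must be able to hold up to twice the input when escaping):

    #include <stddef.h>

    #define IAC 255

    /* Sender side: data byte 255 goes out as IAC IAC. */
    size_t escape_iac(const unsigned char *in, size_t len, unsigned char *out)
    {
        size_t o = 0;
        for (size_t i = 0; i < len; i++) {
            out[o++] = in[i];
            if (in[i] == IAC)
                out[o++] = IAC;
        }
        return o;
    }

    /* Receiver side: IAC IAC collapses back to a single data byte 255. */
    size_t unescape_iac(const unsigned char *in, size_t len, unsigned char *out)
    {
        size_t o = 0;
        for (size_t i = 0; i < len; i++) {
            out[o++] = in[i];
            if (in[i] == IAC && i + 1 < len && in[i + 1] == IAC)
                i++;
        }
        return o;
    }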

Binding Request inside Data attribute of Send Indication

When two peers are using WebRTC transmission with TURN as a relay server, we've noticed that from time to time the data inside a Send Indication or ChannelData message is actually a valid STUN Binding Request (type 0x0001). The other peer responds in the same way with a valid Binding Success Response (type 0x0101). This happens repeatedly during the whole conversation. Both peers are forced to use the TURN server. What is the purpose of encapsulating a typical STUN message inside the data attribute of a TURN transmission frame? Is it described in any document?
Here is an example of Channel Data frame:
[0x40,0x00,0x00,0x70,0x00,0x01,0x00,0x5c,0x21,0x12,0xa4,0x42,0x71,0x75,0x6d,0x6a,0x6f,0x66,0x69,0x6f...]
0x40,0x00 - channel number
0x00,0x70 - length of data
0x00,0x01,0x00,0x5c,0x21,0x12... - data, that can be parsed to a Binding Request
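For reference, this is roughly how such a frame can be picked apart to see whether the payload is STUN (the function name is invented for this illustration; a STUN header carries the message type in its first two bytes and the magic cookie 0x2112A442 at offset 4):

    #include <stdint.h>
    #include <stddef.h>

    /* Returns the STUN message type (e.g. 0x0001 = Binding Request) or -1. */
    int stun_type_in_channeldata(const uint8_t *frame, size_t len)
    {
        if (len < 4)
            return -1;
        uint16_t channel = (uint16_t)((frame[0] << 8) | frame[1]);
        uint16_t dlen    = (uint16_t)((frame[2] << 8) | frame[3]);
        if (channel < 0x4000 || channel > 0x7FFF || (size_t)dlen + 4 > len)
            return -1;                      /* not a plausible ChannelData frame */

        const uint8_t *data = frame + 4;
        if (dlen < 20)
            return -1;                      /* too short for a STUN header */
        uint32_t cookie = ((uint32_t)data[4] << 24) | ((uint32_t)data[5] << 16) |
                          ((uint32_t)data[6] << 8)  |  (uint32_t)data[7];
        if (cookie != 0x2112A442u)
            return -1;                      /* payload is ordinary application data */

        return (data[0] << 8) | data[1];    /* 0x0001 in the example above */
    }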
These are the ICE connectivity checks (described in RFC 5245) running through the TURN relay, as well as the consent freshness checks described in RFC 7675.

Why does the TLS heartbeat extension allow user supplied data?

The heartbeat protocol requires the other end to reply with the same data that was sent to it, to know that the other end is alive. Wouldn't sending a certain fixed message be simpler? Is it to prevent some kind of attack?
At least the size of the packet seems to be relevant, because according to RFC 6520, 5.1, the heartbeat message will be used with DTLS (i.e. TLS over datagram transports such as UDP) for PMTU discovery - in which case it needs messages of different sizes. Apart from that, it might simply be modelled after ICMP ping, where you can also choose the payload content freely.
Just like with ICMP Ping, the idea is to ensure you can match up a "pong" heartbeat response you received with whichever "ping" heartbeat request you made. Some packets may get lost or arrive out of order and if you send the requests fast enough and all the response contents are the same, there's no way to tell which of your requests were answered.
One might think, "WHO CARES? I just got a response; therefore, the other side is alive and well, ready to do my bidding :D!" But what if the response was actually for a heartbeat request 10 minutes ago (an extreme case, maybe due to the server being overloaded)? If you just sent another heartbeat request a few seconds ago and the expected responses are the same for all (a "fixed message"), then you would have no way to tell the difference.
A timely response is important in determining the health of the connection. From RFC6520 page 3:
... after a number of retransmissions without
receiving a corresponding HeartbeatResponse message having the
expected payload, the DTLS connection SHOULD be terminated.
By allowing the requester to specify the return payload (and assuming the requester always generates a unique payload), the requester can match up a heartbeat response to a particular heartbeat request made, and therefore be able to calculate the round-trip time, expiring the connection if appropriate.
This of course mostly matters when you are using DTLS over an unreliable transport like UDP rather than TLS over TCP.
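As a plain illustration of that matching logic (the names and structures below are invented for the example and are not the OpenSSL heartbeat API): the requester embeds a unique sequence number in each payload, remembers when it was sent, and on receiving the echo can pair the response with its request and compute the round-trip time.

    #include <stdint.h>
    #include <string.h>
    #include <time.h>

    #define MAX_OUTSTANDING 16

    struct pending_hb {
        uint64_t        seq;        /* the unique payload we sent */
        struct timespec sent_at;    /* when it was sent           */
        int             in_use;
    };

    static struct pending_hb table[MAX_OUTSTANDING];
    static uint64_t next_seq = 1;

    /* Build the payload for a new request and remember it. */
    size_t make_heartbeat_payload(uint8_t out[8])
    {
        uint64_t seq = next_seq++;
        for (int i = 0; i < MAX_OUTSTANDING; i++) {
            if (!table[i].in_use) {
                table[i].seq = seq;
                table[i].in_use = 1;
                clock_gettime(CLOCK_MONOTONIC, &table[i].sent_at);
                break;
            }
        }
        memcpy(out, &seq, sizeof(seq));
        return sizeof(seq);
    }

    /* Called with the echoed payload; returns RTT in milliseconds, or -1. */
    long match_heartbeat_response(const uint8_t *payload, size_t len)
    {
        uint64_t seq;
        if (len < sizeof(seq)) return -1;
        memcpy(&seq, payload, sizeof(seq));
        for (int i = 0; i < MAX_OUTSTANDING; i++) {
            if (table[i].in_use && table[i].seq == seq) {
                struct timespec now;
                clock_gettime(CLOCK_MONOTONIC, &now);
                table[i].in_use = 0;
                return (now.tv_sec - table[i].sent_at.tv_sec) * 1000L +
                       (now.tv_nsec - table[i].sent_at.tv_nsec) / 1000000L;
            }
        }
        return -1;   /* stale or unknown payload: ignore it */
    }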
So why allow the requester to specify the length of the payload? Couldn't it be inferred?
See this excellent answer: https://security.stackexchange.com/a/55608/44094
... seems to be part of an attempt at genericity and coherence. In the SSL/TLS standard, all messages follow regular encoding rules, using a specific presentation language. No part of the protocol "infers" length from the record length.
One gain of not inferring length from the outer structure is that it makes it much easier to include optional extensions afterwards. This was done with ClientHello messages, for instance.
In short, YES, it could've been, but for consistency with existing format and for future proofing, the size is spec'd out so that other data can follow the same message.

Netty SSL mode strange behavior

I am trying to understand why Netty's SSL mode works in such a strange way.
The problem is the following: when any SSL client (an HTTPS browser, a Java client using SSL, or any other SSL client application) connects to the Netty server, at the beginning I get the full message and can correctly recognize the protocol used, but as long as the channel stays connected, any following messages have a strange structure, which does not happen in non-SSL mode.
For example, in the messageReceived method when an HTTPS browser connects to my server:
I have used PortUnificationServerHandler to switch protocols (without using Netty's HTTP handler; this is just an example, because I use SSL mode for my own protocol too).
the first message is OK, I get the full header beginning with GET or POST
then I send the response...
the second message is only one byte long and contains only "G" or "P".
the third message is the rest, beginning with either "ET" or "OST", followed by the rest of the HTTP header and body.
here again follows my response...
the fourth message is again only one byte long.
the fifth message is again the rest... and the game continues this way.
It does not matter here which sub-protocol is used, HTTP or anything else: after the first request I first get one byte and then, in the next message, the rest of the request.
I wanted to build a sort of proxy: take the SSL data and send it unencrypted to another listener. But when I do that directly, without waiting for the full request, the target listener (an HTTP server, for example) cannot handle such data; if the target first gets only one byte (even if the next message contains the rest), the channel gets closed immediately and the request is abandoned.
OK, my first thought was the following: cache the first byte temporarily, wait for the next message, join the two, and only then respond. That works fine, but sometimes it is not the correct approach, because sometimes that one byte really is the last byte of the message; if I cache it and wrongly wait for a next message, I can wait forever, because the HTTPS browser expects a response at this point and does not send any more data.
Now the question: is it possible to fix this problem with SSL? Maybe there are special settings that influence this behavior?
I want the fully joined message at once, as it is, and not first one byte and then the rest.
Can you please confirm that newer Netty versions show the same behavior when using PortUnificationServerHandler (but without the Netty HTTP handler; try a handler of your own)?
Is this behavior really OK? I do not believe it was designed to work this way.
What you're experiencing is likely due to the countermeasure against the BEAST attack: many TLS implementations split application data so that the first byte goes out in a TLS record of its own (the so-called 1/n-1 split), which is why you see a one-byte message followed by the rest.
This isn't a problem. What seems to be the problem is that you're assuming you're meant to read data in terms of messages/packets. This is not the case: TCP (and TLS/SSL on top of it) is meant to be used as a continuous stream of data. You should keep reading data while data is available; where to split the incoming data into meaningful units is determined by the application protocol. For HTTP, the indications are the blank line after the headers and the Content-Length header or chunked transfer encoding for the entity.
If you define your own protocol, you'll need a similar mechanism, whether you use plain HTTP or SSL/TLS. Assuming you don't need it only works by chance.
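For example, a sketch of such a mechanism for a custom protocol, independent of how many reads SSL/TLS delivers the bytes in: each application message carries a 4-byte big-endian length prefix, and the reader keeps accumulating stream bytes until a whole frame is present (the names are illustrative only and overflow handling is omitted):

    #include <stdint.h>
    #include <string.h>

    #define ACC_MAX 65536

    static uint8_t acc[ACC_MAX];
    static size_t  acc_len;

    /* Assumed callback, invoked once per complete application message. */
    void deliver_message(const uint8_t *msg, size_t len);

    /* Feed newly received (already decrypted) bytes, however many arrived. */
    void on_bytes_received(const uint8_t *data, size_t len)
    {
        if (acc_len + len > ACC_MAX)
            return;                          /* overflow handling omitted */
        memcpy(acc + acc_len, data, len);
        acc_len += len;

        for (;;) {
            if (acc_len < 4)
                return;                      /* length prefix not complete yet */
            uint32_t frame_len = ((uint32_t)acc[0] << 24) | ((uint32_t)acc[1] << 16) |
                                 ((uint32_t)acc[2] << 8)  |  (uint32_t)acc[3];
            if (acc_len < 4 + frame_len)
                return;                      /* body not complete yet */
            deliver_message(acc + 4, frame_len);
            memmove(acc, acc + 4 + frame_len, acc_len - 4 - frame_len);
            acc_len -= 4 + frame_len;
        }
    }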
I experienced this issue and found it was caused by using JDK 1.7; moving back to JDK 1.6 solved it. I have not had time to investigate further, but I assume for now that the SSLEngine implementation changed in the JDK. I will investigate further when time permits.