I have two applications talking to each other over SSL. The client runs on a Windows machine; the server is a Linux-based application. On startup the client sends a large amount of data to the server. The data is sent in ~4000-byte chunks, each containing 30 entries, and about 50,000 entries have to be sent in total.
During that transmission the server sends a message to the client; that message is also ~4000 bytes. After that happens, SSL_write() on the client side begins to return SSL_ERROR_WANT_WRITE. The client sleeps for 10 ms and retries SSL_write() with the exact same parameters, but it keeps failing indefinitely, and the client eventually aborts the message. If it then tries to send a new message, I get an error indicating I am not retrying the same message that was aborted earlier:
error:1409F07F:SSL routines:SSL3_WRITE_PENDING: bad write retry
The server eventually kills the connection, since it has not heard from the client for 60 s, and establishes a new one. That is just an FYI; the real issue is how I can get SSL_write() to resume.
If the server does not send a request during the receive, the problem goes away. If I shrink the server's request from 16 KB to 100 bytes, the problem does not happen.
The SSL_CTX mode is set to SSL_MODE_AUTO_RETRY and SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER.
Does anyone have an idea why simultaneous transmission of large amounts of data from both sides can cause this failure? If this is a limitation, what can I do to prevent it, other than capping the size of what the server sends to the client? My concern is that if the client is not sending anything, the throttling I applied to avoid this issue is wasted.
On the client side I tried to perform an SSL_read() to see if I need to read during a write, even though I never receive SSL_ERROR_WANT_READ, but the receive buffer is only ~1000 bytes in size anyway.
Any insight on this would be appreciated.
SSL_ERROR_WANT_WRITE - This error is returned by OpenSSL (I am assuming you are using OpenSSL) only when the socket send gives it an EWOULDBLOCK or EAGAIN error. The socket send will give an EWOULDBLOCK error when the send-side buffer is full, which in turn means that your server is not reading the messages sent from the client.
So, essentially, the problem lies with your Server which is not reading the messages sent to it. You need to check your server and fix it, which will automatically fix your client problem.
Also, why have you set the option "SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER"? SSL always expects that the record which it is trying to send should be sent completely before the next record can be sent.
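For reference, a minimal sketch of the usual retry handling for a non-blocking SSL_write(): on SSL_ERROR_WANT_WRITE, wait until the socket becomes writable and call SSL_write() again with the same buffer and length, which is exactly what the "bad write retry" check enforces unless SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER is set. POSIX headers are shown; Winsock's select() behaves the same way for this purpose, and the helper name is made up.

    /* Sketch: write a whole buffer over a non-blocking SSL connection,
     * retrying with the SAME buffer and length on SSL_ERROR_WANT_WRITE. */
    #include <openssl/ssl.h>
    #include <sys/select.h>

    static int ssl_write_all(SSL *ssl, int fd, const void *buf, int len)
    {
        int off = 0;
        while (off < len) {
            int n = SSL_write(ssl, (const char *)buf + off, len - off);
            if (n > 0) {
                off += n;
                continue;
            }
            switch (SSL_get_error(ssl, n)) {
            case SSL_ERROR_WANT_WRITE: {      /* kernel send buffer is full */
                fd_set wfds;
                FD_ZERO(&wfds);
                FD_SET(fd, &wfds);
                if (select(fd + 1, NULL, &wfds, NULL, NULL) < 0)
                    return -1;                /* then retry with same args */
                break;
            }
            case SSL_ERROR_WANT_READ: {       /* e.g. renegotiation in progress */
                fd_set rfds;
                FD_ZERO(&rfds);
                FD_SET(fd, &rfds);
                if (select(fd + 1, &rfds, NULL, NULL, NULL) < 0)
                    return -1;
                break;
            }
            default:
                return -1;                    /* fatal SSL or socket error */
            }
        }
        return off;
    }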
As it turns out, in both the client- and server-side apps the reads and writes are processed in one thread. In the perfect storm I described above, the client is busy writing (non-blocking). The server then decides to write a large set of messages of its own in between processing its RX buffers. The server TX is a blocking call. The server gets stuck writing, the read is starved, the buffers fill up, and we have a deadlock scenario.
The default Windows socket buffer is 8 KB, so it doesn't take much to fill it up.
The architecture should be such that there is a separate thread for RX and TX processing on both sides. As a short-term fix, one can increase the RX buffers and rate-limit the TX side to prevent the deadlock.
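A minimal sketch of the receive-buffer part of that short-term fix, assuming a connected socket `sock`; the 256 KB value is purely illustrative:

    /* Enlarge the socket receive buffer so the RX side can absorb more data
     * while the single thread is busy writing. */
    int rcvbuf = 256 * 1024;
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                   (const char *)&rcvbuf, sizeof(rcvbuf)) != 0) {
        /* log and handle the error */
    }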
Related
I'm using the BIO memory interface to have TLS implemented over SCTP.
So at the client side, while sending out application data:
1. SSL_write() encrypts the data and writes it to the associated write BIO.
2. The data is then read from the BIO into an output buffer using BIO_read().
3. The buffered data is sent out on the socket using sctp_sendmsg().
Similarly at the server side, while reading data from the socket:
1. sctp_recvmsg() reads encrypted message chunks from the socket.
2. BIO_write() writes them into the read BIO buffer.
3. SSL_read() decrypts the data read from the BIO.
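For reference, the client-side path described above might look roughly like the sketch below. The memory-BIO setup, the helper name send_app_data() and the buffer size are assumptions based on the description, and error handling is trimmed; how EAGAIN from sctp_sendmsg() should be handled is exactly the question, and is covered in the answer below.

    #include <openssl/ssl.h>
    #include <netinet/sctp.h>
    #include <stdint.h>

    /* Assumes `ssl` was set up with a memory-BIO pair, e.g.
     *   BIO *rbio = BIO_new(BIO_s_mem());
     *   BIO *wbio = BIO_new(BIO_s_mem());
     *   SSL_set_bio(ssl, rbio, wbio);
     */
    static int send_app_data(SSL *ssl, BIO *wbio, int sock,
                             const void *app_data, int app_len,
                             uint16_t stream_no)
    {
        char out_buf[16384];              /* large enough for one TLS record */
        int n;

        if (SSL_write(ssl, app_data, app_len) <= 0)   /* step 1: encrypt into wbio */
            return -1;

        while ((n = BIO_read(wbio, out_buf, sizeof(out_buf))) > 0) {  /* step 2 */
            /* step 3: hand the ciphertext to SCTP */
            if (sctp_sendmsg(sock, out_buf, (size_t)n, NULL, 0,
                             0, 0, stream_no, 0, 0) < 0)
                return -1;                /* EAGAIN handling is the open question */
        }
        return 0;
    }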
The case I'm interested in is where, at the client side, steps 1 and 2 are done, and while doing step 3 I get an EAGAIN from the socket. So whatever data I've read from the BIO buffer, I clean it up, and ask the application to resend the data again after some time.
Now when I do this, and later when steps 1, 2 and 3 at the client side go through fine, on the server side OpenSSL finds that the record it received has a bad_record_mac and closes the connection.
From googling I came to know that one possibility for this to happen is if TLS packets arrive out of sequence, since the MAC computation depends on the previous record and TLS needs the records to be delivered in order. So when I was cleaning up the data on EAGAIN, was I dropping an SSL record and then sending the next record out of order (missing clarity here)?
Just to make sure of my hypothesis, whenever the socket returned EAGAIN, I made a code change to do an infinite wait until the socket was writable, and then everything goes fine and I don't see any bad_record_mac on the server side.
Can someone help me with this EAGAIN handling? I can't do an infinite wait to get around the issue; is there any other way out?
... I get an EAGAIN from the socket. So whatever data I've read from the BIO buffer, I clean it up, and ask the application to resend the data again after some time.
If you get an EAGAIN on the socket you should try to send the same encrypted data later.
What you do instead is throw the encrypted data away and ask the application to send the same plain data again. This means that these data get encrypted again. But encrypting plain data in SSL also includes a sequence number for the SSL frame, and this sequence number is not the same as the one for the last SSL frame you threw away.
Thus, if you have thrown away the full SSL frame, you are now trying to send a new SSL frame with the next sequence number, which does not fit the expected sequence number. If you succeeded in sending part of the previous SSL frame and threw away the rest, then the new data you send will be considered part of the previous frame, which means that the HMAC of the frame will not match.
Thus, don't throw away the encrypted data; try to resend it instead of letting the upper layer resend the plain data.
1. Select for writability.
2. Repeat the send.
3. If the send was incomplete, remove the part of the buffer that got sent and go to (1).
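A minimal sketch of that loop, assuming a plain non-blocking socket; with SCTP the same idea applies to sctp_sendmsg(). The helper name and buffer handling are illustrative:

    #include <errno.h>
    #include <poll.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Keep the already-encrypted record and resend it after EAGAIN, instead of
     * re-encrypting the plaintext (which would bump the TLS sequence number and
     * cause bad_record_mac on the peer). */
    static int send_encrypted(int sock, const char *rec, size_t len)
    {
        size_t off = 0;
        while (off < len) {
            ssize_t n = send(sock, rec + off, len - off, 0);
            if (n >= 0) {
                off += (size_t)n;            /* 3. drop the part that was sent */
                continue;
            }
            if (errno == EAGAIN || errno == EWOULDBLOCK) {
                struct pollfd pfd = { .fd = sock, .events = POLLOUT };
                if (poll(&pfd, 1, -1) < 0)   /* 1. wait for writability */
                    return -1;
                continue;                    /* 2. repeat the send */
            }
            return -1;                       /* real error */
        }
        return 0;
    }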
So whatever data I've read from the BIO buffer, I clean it up
I don't know what this means. You're sending, not receiving.
Just to make sure of my hypothesis, whenever the socket returned EAGAIN, I made a code change to do an infinite wait until the socket was writable, and then everything goes fine and I don't see any bad_record_mac on the server side.
That's exactly what you should do. I can't imagine what else you could possibly have been doing instead, and your description of it doesn't make any sense.
Does OpenSSL and/or the SSL/TLS protocol provide some kind of built in protection against infinite renegotiation?
In particular, is it possible for SSL_read() to continue executing forever because the remote side (possibly maliciously) keeps requesting renegotiations without sending payload data?
I am worried about this because I want to service a number of SSL connections from a single thread using a polling mechanism and also ensure a form of fairness where the processing of I/O on one connection does not lead to starvation of I/O on the other connections.
When I call regular read() on a socket in nonblocking mode, I know it cannot keep executing forever, because the buffer will fill up eventually.
However, since SSL_read() can handle renegotiations transparently, it seems to me that if the remote side (possibly maliciously) keeps requesting renegotiations without sending payload data, and the underlying transport layer is fast enough to make the underlying reads and writes never fail with EWOULDBLOCK, then SSL_read() could end up executing forever, and thereby starving the other connections.
Therefore my question: Does OpenSSL or the protocols have mechanisms for avoiding that? The question applies equally to SSL_write() by the way.
EDIT: For example, can I be sure that SSL_read() will return with an SSL_ERROR_WANT_READ/SSL_ERROR_WANT_WRITE indication before engaging in multiple renegotiations, even if the underlying read/write operations never fail with EWOULDBLOCK?
EDIT: For the purpose of this question, assume that I am using a regular socket BIO (BIO_s_socket()) and that the underlying socket is in nonblocking mode.
There is no built-in protection in OpenSSL, but you can use SSL_CTX_set_info_callback or similar to set a function which gets called on each negotiation. This way you can cut the connection if too many renegotiations happen inside the same connection. See "Protect against client-initiated renegotiation DoS in OpenSSL/Python" for more information.
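A minimal sketch of that idea, assuming a per-connection state struct attached with SSL_set_app_data() and an illustrative limit of three handshakes:

    #include <openssl/ssl.h>

    /* Count handshakes per connection via the info callback and flag the
     * connection for closure if the peer renegotiates too often. The struct,
     * the limit of 3, and the `drop` flag are illustrative assumptions. */
    struct conn_state {
        int handshakes;
        int drop;          /* the application checks this and closes the socket */
    };

    static void info_cb(const SSL *ssl, int where, int ret)
    {
        (void)ret;
        if (where & SSL_CB_HANDSHAKE_START) {
            struct conn_state *st = SSL_get_app_data(ssl);
            if (st && ++st->handshakes > 3)   /* initial handshake + 2 renegs */
                st->drop = 1;
        }
    }

    /* During setup:
     *   SSL_CTX_set_info_callback(ctx, info_cb);
     *   SSL_set_app_data(ssl, st);
     */

Newer OpenSSL versions (1.1.1 and later) also provide the SSL_OP_NO_RENEGOTIATION option, which refuses peer-initiated renegotiation outright.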
I have a server to which my client sends an HTTP GET request with some values. The server on its end simply stores these values in a database.
Now, I am observing that sometimes I do not observe these values in the database. One of the following could have happened:
The client never sent it
The server never received it
The server failed in writing to the database
My strongest doubt is that the reason is 2, but I am unable to explain it completely. Since this is an HTTP request (which means there is TCP underneath), reliable delivery of the GET request should be guaranteed, right? Is it possible that, even though I send a GET request to the server, it was never received by the server? If yes, what is TCP doing there?
Or, can I confidently assert that if the server is up and running and everything sent to the server is written to the database, then the absence of the details of the GET request in the database means the client never sent it?
Not sure if the details will help, but I am running a Tomcat server and I am just sending a name-value pair through the GET request.
There are a few things you seem to be missing. First of all, yes, if TCP finishes successfully, you pretty much have a guarantee that your message (i.e. the TCP payload) has reached the other side: TCP ensures that it will take care of lost packets and the order in which packets arrive. However, this is not universally fail-proof, as there are still things beyond the powers of TCP (think of a physical disconnect caused by cutting through an Ethernet cable). There is also no assertion regarding the syntactical correctness of the protocol "above"; any checks beyond delivering a bit-perfect copy are simply not TCP's concern.
So there is a chance that the requests issued by your client are faulty, or that they are indeed correct but not parsed correctly by your server. The former strikes me as more likely than the latter, since Tomcat is a very mature piece of software. I think it would help tremendously if you recorded and analysed some of your generated traffic with e.g. Wireshark.
You do not really mention which database you are using, but some databases sacrifice ACID compliance in favour of increased write speed. The nature of these databases means that you can never be really sure whether something actually got written to disk or is still residing in some buffer in memory. Should you happen to use such a DB, this would be another line of investigation.
Programmatically, I advise you to take the following steps when dealing with HTTP traffic:
Did writing to the socket finish without error?
Could a response be read from the socket?
Does the response carry a code in the 2xx range (indicating a successful operation)?
If any of these fail, you should really log something.
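As a rough illustration of the last two checks in C (the helper name, buffer size and minimal parsing are assumptions; a real client would parse the whole status line):

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Read the beginning of the HTTP response and verify the status code is
     * in the 2xx range. Assumes the request was already written to `sock`
     * without error. */
    static int response_is_2xx(int sock)
    {
        char buf[512];
        ssize_t n = recv(sock, buf, sizeof(buf) - 1, 0);
        if (n <= 0) {
            fprintf(stderr, "no response could be read\n");
            return 0;
        }
        buf[n] = '\0';
        /* The status line looks like "HTTP/1.1 200 OK" */
        if (strncmp(buf, "HTTP/1.", 7) == 0 && n >= 10 && buf[9] == '2')
            return 1;
        fprintf(stderr, "non-2xx response: %.32s\n", buf);
        return 0;
    }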
On a related note, what you are doing there does not call for the GET method but for POST, as you are changing application state. Consider it a nice-to-have ;)
Without knowing the specifics, you can break it down into two parts: the HTTP request and the DB write. The client will receive a 200 OK response from the server when its GET request has been acknowledged. I've written code under Tomcat to connect to a MySQL DB using DAO; in the case of a failure an exception would be thrown and logged. Whichever method you're using, you'll want to figure out how failures are logged.
I have a server application that uses Microsoft's I/O Completion Port (IOCP) mechanism to manage asynchronous network socket communication. In general, this IOCP approach has performed very well in my environment. However, I have encountered an edge case scenario for which I am seeking guidance:
For the purposes of testing, my server application is streaming data (let's say ~400 KB/sec) over a gigabit LAN to a single client. All is well...until I disconnect the client's Ethernet cable from the LAN. Disconnecting the cable in this manner prevents the server from immediately detecting that the client has disappeared (i.e. the client's TCP network stack does not send notification of the connection's termination to the server).
Meanwhile, the server continues to make WSASend calls to the client...and being that these calls are asynchronous, they appear to "succeed" (i.e. the data is buffered by the OS in the outbound queue for the socket).
While this is all happening, I have 16 threads blocked on GetQueuedCompletionStatus, waiting to retrieve completion packets from the port as they become available. Prior to disconnecting the client's cable, there was a constant stream of completion packets. Now, everything (as expected) seems to have come to a halt...for about 32 seconds. After 32 seconds, IOCP springs back into action returning FALSE with a non-null lpOverlapped value. GetLastError returns 121 (The semaphore timeout period has expired.) I can only assume that error 121 is an artifact of WSASend finally timing out after the TCP stack determined the client was gone?
I'm fine with the network stack taking 32 seconds to figure out my client is gone. The problem is that while the system is making this determination, my IOCP is paralyzed. For example, WSAAccept events that post to the same IOCP are not handled by any of the 16 threads blocked on GetQueuedCompletionStatus until the failed completion packet (indicating error 121) is received.
My initial plan to work around this involved using WSAWaitForMultipleEvents immediately after calling WSASend. If the socket event wasn't signaled within some timeout (e.g. 3 seconds), then I terminate the socket connection and move on (in hopes of preventing the extensive blocking effect on my IOCP). Unfortunately, WSAWaitForMultipleEvents never seems to encounter a timeout (so maybe asynchronous sockets are signaled by virtue of being asynchronous? Or does copying data to the TCP queue qualify as a signal?).
I'm still trying to sort this all out, but was hoping someone had some insights as to how to prevent the IOCP hang.
Other details: My server application is running on Win7 with 8 cores; IOCP is configured to use at most 8 concurrent threads; my thread pool has 16 threads. Plenty of RAM, processor and bandwidth.
Thanks in advance for your suggestions and advice.
It's usual for the WSASend() completions to stall in this situation. You won't get them until the TCP stack times out its resend attempts and completes all of the outstanding sends in error. This doesn't block any other operations. I expect you are either testing incorrectly or have a bug in your code.
Note that your 'fix' is flawed. You could see this 'delayed send completion' situation at any point during a normal connection if the sender is sending faster than the consumer can consume. See this article on TCP flow control and async writes. A better plan is to use a counter for the number of outstanding writes (per connection) that you want to allow, stop sending once that counter is reached, and resume when it drops below a 'low water mark' threshold value.
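A minimal sketch of that counter scheme, assuming a per-connection structure and illustrative high/low water marks; the helper names are made up and the application-side queue is left out:

    #include <winsock2.h>
    #include <windows.h>

    /* Throttle per-connection sends: count outstanding WSASend() operations,
     * stop queueing new ones above a high-water mark, and resume only once
     * completions bring the count below a low-water mark. */
    #define SEND_HIGH_WATER 32
    #define SEND_LOW_WATER  8

    typedef struct CONNECTION {
        SOCKET        sock;
        volatile LONG pendingSends;
        BOOL          sendSuspended;
    } CONNECTION;

    /* Called instead of issuing WSASend() directly. */
    BOOL TrySend(CONNECTION *conn, WSABUF *buf, LPWSAOVERLAPPED ov)
    {
        if (conn->sendSuspended || conn->pendingSends >= SEND_HIGH_WATER) {
            conn->sendSuspended = TRUE;
            return FALSE;                    /* caller queues the data for later */
        }
        InterlockedIncrement(&conn->pendingSends);
        DWORD sent = 0;
        int rc = WSASend(conn->sock, buf, 1, &sent, 0, ov, NULL);
        if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING) {
            InterlockedDecrement(&conn->pendingSends);
            return FALSE;                    /* real failure; close the connection */
        }
        return TRUE;
    }

    /* Called from the worker thread when a send completion is dequeued. */
    void OnSendComplete(CONNECTION *conn)
    {
        if (InterlockedDecrement(&conn->pendingSends) < SEND_LOW_WATER &&
            conn->sendSuspended) {
            conn->sendSuspended = FALSE;     /* resume draining the app-side queue */
        }
    }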
Note that if you've pulled out the network cable to the machine, how do you expect any other operations to complete? Reads will just sit there and only fail once a write has failed, and AcceptEx will simply sit there and wait for the condition to rectify itself.
I have created a UDP server-client application. There is only a single thread on the server's side, which continuously executes recvfrom().
If I run 3 clients simultaneously from 3 different machines and send some data, the server is able to read the data from each of the clients.
But how can I test the reliability of this application?
How would I know the maximum number of clients this server can handle at a time?
Also, what is the maximum payload?
But how can I test the reliability of this application?
Run as many clients as you can. The more clients you can run and send data from, the better. Try to run clients on many different machines, run as many clients as you can on each machine, and keep sending data automatically.
Make the clients send data in a loop, without waiting for input, and put a delay between each call to send. A few seconds of delay is fine to start; you can lower the delay later and see how your server handles it.
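A minimal sketch of such a test client, with placeholder address, port, payload and delay:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Load-test client: send datagrams in a loop with a delay between sends. */
    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        struct sockaddr_in srv;
        memset(&srv, 0, sizeof(srv));
        srv.sin_family = AF_INET;
        srv.sin_port   = htons(5000);                      /* server port (placeholder) */
        inet_pton(AF_INET, "192.168.1.10", &srv.sin_addr); /* server IP (placeholder) */

        const char payload[] = "test message";
        for (;;) {
            if (sendto(sock, payload, sizeof(payload), 0,
                       (struct sockaddr *)&srv, sizeof(srv)) < 0)
                perror("sendto");
            sleep(2);            /* start with a few seconds, lower it later */
        }
    }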
How would I know the maximum number of clients this server can handle at a time?
You can't. You are using a UDP server, and UDP is connectionless. Clients do not need to connect to the server to send data, they just send it. Usually it is limited by available resources (memory, etc.) on your server.
Also, what is the maximum payload?
The maximum payload of what? A UDP message? For IPv4 the absolute maximum UDP payload is 65,507 bytes (a 65,535-byte IP datagram minus 20 bytes of IP header and 8 bytes of UDP header), though datagrams larger than the path MTU will be fragmented, so applications usually keep them much smaller. You can read more about the UDP packet structure.