What is the purpose of the MaxReceivedMessageSize on the client-side? - wcf

I ran a test against a WCF server where the response from the server exceeds the MaxReceivedMessageSize property defined in the client-side binding object, resulting in a CommunicationException. I examined the request and response using Fiddler. Despite exceeding MaxReceivedMessageSize, the entire response is sent to the client.
I believe I am missing the point of this behavior. As I see it, no bandwidth is saved, since the data has already been received. The client application could have processed the data, but the client binding has discarded it before it is handed to the application.
If saving bandwidth is not the purpose of the MaxReceivedMessageSize on the client-side, what is it for?

The answer is simple: security.
It would indeed be better for the bandwidth if your client could say to the server: "oh, by the way, don't bother sending me replies bigger than X bytes", but that is something they didn't implement :-)
And even if it were, what if the server has a bug, or is intentionally misbehaving...
What if the server returned a 2 TB string? Your client would then try to allocate a 2 TB buffer to receive the response and would probably get an OutOfMemoryException. That would bring your client down.
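As a rough illustration (BasicHttpBinding and the 1 MB figure are example choices, not taken from the question), the cap is simply a property on the client-side binding:

// Client-side cap: the channel throws a CommunicationException as soon as an
// incoming reply would exceed this size, instead of buffering an arbitrarily
// large response in memory.
var binding = new System.ServiceModel.BasicHttpBinding
{
    MaxReceivedMessageSize = 1 * 1024 * 1024 // 1 MB; pick the largest reply you genuinely expect
};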

Related

About losing HTTP Requests

I have a server to which my client sends an HTTP GET request with some values. The server on its end simply stores these values in a database.
Now, I am observing that sometimes these values do not show up in the database. One of the following could have happened:
The client never sent it
The server never received it
The server failed in writing to the database
My strongest doubt is that the reason is 2 - but I am unable to explain it completely. Since this is an HTTP request (which means there is TCP underneath) reliable delivery of the GET request should be guaranteed, right? Is it possible that even though I send a GET request to the server - it was never received by the server? If yes, what is TCP doing there?
Or, can I confidently assert that if the server is up and running and everything sent to the server is written to the database, then the absence of the details of the GET request in the database means the client never sent it?
Not sure if the details will help, but I am running a Tomcat server and I am just sending a name-value pair through the GET request.
There are a few things you seem to be missing. First of all, yes, if TCP finishes successfully, you pretty much have a guarantee that your message (i.e. the TCP payload) has reached the other side: TCP ensures that it will take care of lost packets and the order in which packets arrive. However, this is not universally failproof, as there are still things beyond the powers of TCP (think of a physical disconnect caused by cutting through an Ethernet cable). There is also no assertion regarding the syntactic correctness of the protocol "above" it. Any checks beyond delivering a bit-perfect copy are simply not TCP's concern.
So, there is a chance that the requests issued by your client are faulty, or that they are correct but not parsed correctly by your server. The former strikes me as more likely than the latter, as Tomcat is a very mature piece of software. I think it would help tremendously if you recorded and analysed some of your generated traffic with e.g. Wireshark.
You do not really mention which database you are using, but there are some that sacrifice ACID compliance in favour of increased write speeds. With such databases you can never be really sure whether something actually got written to disk or is still residing in some in-memory buffer. Should you happen to use such a database, that would be another line of investigation.
Programmatically, I advise you to check the following when dealing with HTTP traffic:
Has writing to the socket finished without error?
Could a response be read from the socket?
Does the response carry a code in the 2xx range (indicating a successful operation)?
If any of these fail, you should really log something.
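In code, those checks boil down to something like the following sketch (C# and HttpClient are used purely for illustration, and the endpoint and name-value pair are placeholders; the same checks apply in any language):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class SubmitValues
{
    static async Task Main()
    {
        using var http = new HttpClient();
        try
        {
            // Hypothetical endpoint and name-value pair.
            var response = await http.GetAsync("http://server.example.com/store?name=foo&value=bar");

            // Check 3: did the server report a successful operation (2xx)?
            if (!response.IsSuccessStatusCode)
                Console.Error.WriteLine($"server rejected request: {(int)response.StatusCode}");
        }
        catch (HttpRequestException ex)
        {
            // Checks 1 and 2: the request could not be written, or no response came back.
            Console.Error.WriteLine($"request failed: {ex.Message}");
        }
    }
}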
On a related note, what you are doing there calls not for the GET method but for POST, as you are changing application state. Consider it a nice-to-have ;)
Without knowing the specifics, you can break it down into two parts: the HTTP request and the DB write. The client will receive a 200 OK response from the server when its GET request has been acknowledged. I've written code under Tomcat to connect to a MySQL DB using DAO; in case of a failure an exception would be thrown and logged. Whichever method you're using, you'll want to figure out how failures are logged.

OpenSSL SSL_ERROR_WANT_WRITE never recovers during SSL_write()

I have two applications talking to each other over SSL. The client runs on a Windows machine, the server is a Linux-based application. The client sends a large amount of data to the server on startup. The data is sent to the server in ~4000-byte chunks, each containing 30 entries; I have to send about 50000 entries over.
During that transmission the server sends a message to the client; the message size is ~4000 bytes. After that happens, SSL_write() on the client side begins to return the error SSL_ERROR_WANT_WRITE. The client sleeps for 10 ms and retries the SSL_write with the exact same parameters, but the SSL_write keeps failing indefinitely and the client subsequently aborts. If it then tries to send a new message, I get an error indicating that I am not resending the same aborted message from earlier:
error:1409F07F:SSL routines:SSL3_WRITE_PENDING: bad write retry
The server eventually kills the connection since it has not heard from the client for 60s and re-establishes a new one. This is just an FYI, the real issue is how can I get SSL_write to resume.
If the server does not send a request during the receive the problem goes away. If I shrink the size of the request from 16K to 100 bytes the problem does not happen.
The SSL CTX MODE is set to SSL_MODE_AUTO_RETRY and SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER.
Does anyone have an idea why a simultaneous transmission of large data from both sides can cause this failure? If this is a limitation, what can I do to prevent it, other than capping the size of what the server sends to the client? My concern is that if the client is not sending anything, the throttling I applied to avoid this issue is a waste.
On the client side I tried to perform an SSL_read to see if I need to read during a write, even though I never receive an SSL_ERROR_WANT_READ, but the buffer is not that big anyway, ~1000 bytes in size.
Any insight on this would be appreciated.
SSL_ERROR_WANT_WRITE - this error is returned by OpenSSL (I am assuming you are using OpenSSL) only when the socket send gives it an EWOULDBLOCK or EAGAIN error. The socket send will give EWOULDBLOCK when the send-side buffer is full, which in turn means that your server is not reading the messages sent from the client.
So, essentially, the problem lies with your Server which is not reading the messages sent to it. You need to check your server and fix it, which will automatically fix your client problem.
Also, why have you set the option "SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER"? SSL always expects that the record which it is trying to send should be sent completely before the next record can be sent.
As it turns out, in both the client and server apps the reads and writes are processed in one thread. In the perfect storm I described above, the client is busy writing (non-blocking). The server then decides to write a large set of messages of its own in between processing its rx buffers. The server's tx is a blocking call. The server gets stuck writing, starving the read; the buffers fill up and we have a deadlock scenario.
The default Windows socket buffer is 8 KB, so it doesn't take much to fill it up.
The architecture should be such that there is a separate thread for rx and tx processing on both sides. As a short-term fix, one can increase the rx buffers and rate-limit the tx side to prevent the deadlock.
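To illustrate the architecture rather than the original OpenSSL/C code, here is a minimal .NET sketch with the receive loop and the send loop on separate tasks, so incoming data keeps being drained while a bulk upload is in flight (host name, port, chunk size, and message count are placeholders):

using System;
using System.Net.Security;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class DuplexSslClient
{
    static async Task Main()
    {
        using var tcp = new TcpClient("server.example.com", 4433); // hypothetical peer
        using var ssl = new SslStream(tcp.GetStream());
        await ssl.AuthenticateAsClientAsync("server.example.com");

        // Dedicated receive loop: keeps draining whatever the server pushes,
        // so the server's writes never back up while we are busy sending.
        var rxTask = Task.Run(async () =>
        {
            var buf = new byte[8192];
            int n;
            while ((n = await ssl.ReadAsync(buf, 0, buf.Length)) > 0)
                Console.WriteLine($"received {n} bytes");
        });

        // Dedicated send loop: pushes the bulk upload independently of receives.
        var txTask = Task.Run(async () =>
        {
            var chunk = Encoding.ASCII.GetBytes(new string('x', 4000)); // ~4000-byte chunk
            for (int i = 0; i < 1700; i++) // roughly 50000 entries / 30 per chunk
                await ssl.WriteAsync(chunk, 0, chunk.Length);
        });

        await Task.WhenAll(rxTask, txTask);
    }
}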

Sending large messages from server to client with pollingDuplexHttpBinding

First, I'd like to ask about the theory, because I didn't find any related documentation: we have a Silverlight client and a WCF service. The communication between them goes through a pollingDuplexHttpBinding.
Suppose the server wants to send the client a message whose size is larger than the configured MaxBufferSize and MaxReceivedMessageSize. What goes on behind the scenes in that case?
Now, here is my actual experience with this issue:
The binding configuration on the server-side:
<binding name="eventServiceBinding" sendTimeout="00:00:10" inactivityTimeout="24:00:00" receiveTimeout="24:00:00" serverPollTimeout = "00:01:00"/>
I send a large message (i.e. larger than the values set in the client binding's properties as described above) from the server to the client, then send a second (not large) message => I get a send timeout for the second message (I don't know if the client ever got the first message).
I've tried to search for some helpful logging in order to see what happens to the first message. I did this both on the server side (by activating the WCF logger) and on the client side (using Fiddler). I found nothing really interesting in the logs (but maybe I didn't look in the right places).
Moreover - when sendTimeout is set to a large value (say 10 minutes), it looks like all additional messages sent from the server to the client are "stuck" - never received by the client and no exceptions thrown until the send timeout is reached. In addition, I experience a strange phenomenon in which no communication between any client and any service exposed by the hosting IIS application works - until I reset IIS. I am not 100% sure, though, that this is related to the previous problem described here.
Setting the MaxBufferSize and MaxReceivedMessageSize properties on the client-side binding seems to resolve both issues.
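For reference, a sketch of what raising those properties on the client binding looks like, assuming the standard PollingDuplexHttpBinding properties (the contract, callback instance, address, and the 4 MB figures are placeholders):

// Silverlight-side binding: raise the caps above the largest message the
// server is expected to push. IEventService and callbackInstance are
// placeholders for the actual duplex contract and callback implementation.
var binding = new PollingDuplexHttpBinding
{
    MaxReceivedMessageSize = 4 * 1024 * 1024, // 4 MB
    MaxBufferSize = 4 * 1024 * 1024
};
var factory = new DuplexChannelFactory<IEventService>(
    new InstanceContext(callbackInstance),
    binding,
    new EndpointAddress("http://example.com/EventService.svc"));
var channel = factory.CreateChannel();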
Please let me know if you have any experience with this issue and whether you can tell what's actually going on behind the scenes in WCF here.

Looking for the optimal WCF quota settings

I know, my question is kinda wishy washy, but what would you say are "optimal" settings for WCF quotas, e.g. MaxReceivedMessageSize etc.?
My service mostly returns small values, but sometimes the return values exceed the default quotas. There are even larger return values, which I return as streams at a second endpoint.
Now, the default value of 65536 bytes for MaxReceivedMessageSize is quite low, I think (no question, the streamed endpoint uses higher values; my question concerns buffered communication). There are tons of "tutorials" that just set this value to Int32.MaxValue, which isn't a good idea at all ;)
Well, what do you think? Which values are viable but also safe enough not to make your service vulnerable to DoS and other attacks?
Regards
The viable value really depends on the size of data you are expecting. If you know that you can sometimes get up to 256 KB, then set the value to 256 KB. In the case of an internal service the limit can probably be set to Int32.MaxValue, but I think that is much more about laziness than about making an assumption about the transferred data. For a public web service you will hardly want to set the value to Int32.MaxValue, because anybody would be able to blow up your server.
Btw. if we are talking about data returned from the service, then this decision lies with the client: both the reader quotas and MaxReceivedMessageSize constrain the received message, not the sent message, so if your service returns data in response to a client's requests, the limit must be set on the client side. For example, in the case of a public web service you don't have all clients under your control, so you must also consider how much data you want to return.
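For example, in code (BasicHttpBinding and the 256 KB figure are illustrative; size the values to your own payloads):

// Buffered endpoint sized for the largest payload you actually expect,
// rather than Int32.MaxValue.
var binding = new System.ServiceModel.BasicHttpBinding
{
    MaxReceivedMessageSize = 256 * 1024,
    MaxBufferSize = 256 * 1024,
    ReaderQuotas = { MaxStringContentLength = 256 * 1024, MaxArrayLength = 256 * 1024 }
};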
A separate endpoint means separate configuration on both the client and server sides.

Timeout Question about Invoking a Remote WCF Service

When I invoke a remote WCF service I get the following timeout:
The request channel timed out while waiting for a reply after 00:00:59.2810338. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout.
Please note that I am sending a single object which is LOADED with a LOT of data.
Any ideas how to fix this issue, and is this a problem on the client (ME) or the server?
Given the size, have you tried increasing your maxBufferSize/maxReceivedMessageSize in your binding?
Chunk your data into smaller pieces if possible and try again. This is a server setting that you will either need to work around or ask the service provider to increase.
Without a stack trace I can't be 100% sure, but I'm relatively certain this is a client-side exception. If you know it's going to take more than a minute to send the data, all you need to do is change the sendTimeout on your binding to whatever amount of time you need.
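For example, a sketch of that client-side change (the binding type and the five-minute value are illustrative):

// Client-side binding: give the call as long as it genuinely needs to push
// the large object across, instead of the roughly one-minute default.
var binding = new System.ServiceModel.BasicHttpBinding
{
    SendTimeout = TimeSpan.FromMinutes(5)
};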