WCF readerQuotas settings - drawbacks?

If a WCF service returns a byte array in its response message, there's a chance the data will exceed the default length of 16384 bytes. When this happens, the exception will be something like:
The maximum array length quota (16384) has been exceeded while reading XML data. This quota may be increased by changing the MaxArrayLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader.
All the advice I've seen on the web is just to increase the settings in the <readerQuotas> element to their maximum, so something like
<readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647"
maxArrayLength="2147483647" maxBytesPerRead="2147483647"
maxNameTableCharCount="2147483647" />
on the server, and similar on the client.
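(For what it's worth, the programmatic equivalent on the client - assuming a BasicHttpBinding built in code rather than config - would be something like this:)
// Programmatic equivalent of the <readerQuotas> element above (illustrative only).
var binding = new System.ServiceModel.BasicHttpBinding();
binding.ReaderQuotas.MaxDepth = int.MaxValue;
binding.ReaderQuotas.MaxStringContentLength = int.MaxValue;
binding.ReaderQuotas.MaxArrayLength = int.MaxValue;
binding.ReaderQuotas.MaxBytesPerRead = int.MaxValue;
binding.ReaderQuotas.MaxNameTableCharCount = int.MaxValue;
// The reader quotas only come into play if the message gets past this limit first:
binding.MaxReceivedMessageSize = int.MaxValue;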
I would like to know of any drawbacks with this approach, particularly if the size of the byte array may only occasionally get very large. Do the settings above just make WCF declare a huge array for each request? Do you have to limit the maximum size of the data returned, or can you just specify a reasonably-sized buffer and get WCF to keep going until all the data is read?
Thanks!

The main drawback is a potential vulnerability to attacks - e.g. a malicious source can now flood your web server with messages up to 2 GB in size and potentially bring it down.
Of course, allowing 2 GB messages also puts some strain on your server in terms of memory consumption, since those messages need to be assembled in memory, in full (unless you use streaming protocols in WCF). If you have 10 clients sending you 2 GB messages, you'll need plenty of RAM on your server! :-)
Other than that, I don't see any real issues.
Marc

There is an article on MSDN which explains the various security considerations you need to think about when setting these values. Some denial-of-service attacks eat up your memory, and some (such as when MaxDepth is not set properly) can cause fatal StackOverflowExceptions which could bring down your server with a single request.
http://msdn.microsoft.com/en-us/library/ms733135.aspx

Related

Stream 100 gig file over WCF?

Can I stream a 100 gig file over WCF to an IIS WebService? I am running into:
The maximum message size quota for incoming messages (2147483647) has
been exceeded. To increase the quota, use the MaxReceivedMessageSize
property.
The only problem with that advice is I think I have it set to the max value.
The key is to be sure you are using transferMode="Streamed" (Buffered, which is the default, will give you an OutOfMemoryException) and to set MaxReceivedMessageSize to a very large value - the property is a long, so it can go well beyond 2 GB. The real gotcha is that when you do a "Configure Service Reference", it will sometimes remove transferMode="Streamed" and revert you back to the default Buffered.
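Roughly, in code (the values here are illustrative; for something this size the operation contract also has to return a Stream rather than a byte array, and when hosted in IIS you typically also have to raise the httpRuntime/request size limits):
// Streaming-friendly binding; values are examples, not recommendations.
var binding = new System.ServiceModel.BasicHttpBinding();
binding.TransferMode = System.ServiceModel.TransferMode.Streamed;
binding.MaxReceivedMessageSize = long.MaxValue;       // the property is an Int64
binding.MaxBufferSize = 65536;                        // with streaming, only headers are buffered
binding.SendTimeout = System.TimeSpan.FromHours(2);   // huge transfers need generous timeouts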

Is there any concern in setting "maxBufferSize" to 1GB on a WCF NetTcpBinding?

I have a duplex communication which sends small as well as large messages.
Only in very rare cases do I have that much data to send, which is why I am reluctant to change the TransferMode to Streamed.
So instead I set the buffer size to 1GB (or some similarly large size):
maxBufferSize="1073741824"
Will this affect my small-message communication?
As the setting says, this is the maximum buffer size, not the buffer size used for all requests. Also note that when the message is streamed, maxBufferSize only defines the buffer size used for the headers.
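To make the distinction concrete, here is a sketch of the equivalent settings on a NetTcpBinding built in code (the 1 GB figures are just the value from the question, not a recommendation):
var tcp = new System.ServiceModel.NetTcpBinding();
// Ceilings, not per-request allocations - small messages still use small buffers.
tcp.MaxReceivedMessageSize = 1073741824;
tcp.MaxBufferSize = 1073741824;   // in Buffered mode this must equal MaxReceivedMessageSize
// If you ever switch to streaming, MaxBufferSize then only bounds the buffered headers:
// tcp.TransferMode = System.ServiceModel.TransferMode.Streamed;
// tcp.MaxBufferSize = 65536;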

How should I decide the quotas for the Tridion core service binding?

I am connecting to the Tridion core service using a System.ServiceModel.WsHttpBinding. The service will only be used by authenticated users, and probably only by code which I control. I have to choose values for the following
MaxBufferPoolSize (default 524,288 bytes)
MaxReceivedMessageSize (default 65,536 bytes)
ReaderQuotas.MaxArrayLength (default 16,384 bytes)
ReaderQuotas.MaxBytesPerRead (default 4,096 bytes)
ReaderQuotas.MaxNameTableCharCount (default 16,384 characters)
ReaderQuotas.MaxStringContentLength (default 8,192 characters)
The code examples I have seen for using the core service invariably set at least some of these to values larger than the defaults, for example 4Mb. Is this because of known problems when other values, such as the defaults, are used?
MaxBufferPoolSize is there to allow you to prevent excessive garbage collections. Is this simply a matter of monitoring GCs and tuning based on that?
MaxReceivedMessageSize, MaxArrayLength and MaxBytesPerRead are there to defend against DoS attacks, so in my scenario, perhaps I can improve throughput by increasing these. Would a really large number help?
MaxNameTableCharCount seems to be there to prevent uncontrolled growth of something you might not want to grow uncontrolledly, so perhaps leaving the default would be a good thing.
The documentation on MaxStringContentLength doesn't specify what happens if you exceed the quota. Presumably ReadContentAsString will fail in some way, so perhaps this value should be large.
So - should I leave these values at their defaults? Will that cause me problems? Should I increase them to large values? Will that help with throughput etc., or is it more likely to cause other problems?
The general rule is to keep these values as small as possible - just enough for your code to work. If you take a look at the default config that ships with CoreService.dll, you will see that some of the values are already increased.
For example, if you expect to get large XML lists (or search results) - you should increase MaxReceivedMessageSize. Keep in mind that you have control over the size of the list you will get using BaseColumns property of a filter.
If you prefer to use GetList, GetSystemWideList and GetSearchResults methods over their XML counterparts, you will probably have to increase ReaderQuotas.MaxArrayLength together with MaxReceivedMessageSize. But please note, that large arrays will be stored in memory.
I'm not sure you want to increase any of these values until you actually hit the limit. WCF is quite good at pointing you to the parameter you have to adjust.
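As an illustration of raising only what you need, something like the sketch below - the 4 MB figures, the endpoint address and the assumption that the generated client exposes the usual (Binding, EndpointAddress) constructor are all placeholders, not Tridion guidance:
// Hypothetical values - raise only the quotas you actually hit.
var binding = new System.ServiceModel.WSHttpBinding();
binding.MaxReceivedMessageSize = 4 * 1024 * 1024;               // e.g. large XML lists / search results
binding.ReaderQuotas.MaxStringContentLength = 4 * 1024 * 1024;  // large item content
binding.ReaderQuotas.MaxArrayLength = 4 * 1024 * 1024;          // GetList-style array results

var endpoint = new System.ServiceModel.EndpointAddress(
    "http://cms.example.com/webservices/CoreService2010.svc/wsHttp"); // hypothetical address
var client = new CoreService.CoreService2010Client(binding, endpoint);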
I'm afraid this is not really an answer to your questions... But, from my experience, I increased the values to more than the defaults. I used 4MB, as you already suggested. This was mainly because I was experiencing errors while communicating with the Core Service; they were related to the request/response sizes exceeding the allotted quotas.
Moreover, in the case of Core Service transactionality, I saw more of these exceptions. It seems that the sizes of request/responses increase quite a bit when using transactions. In my case, I was creating a batch of Components in one big transaction. If one Component failed to create, I would roll-back the whole transaction.
Hope this helps.
I've been experimenting with the Core Service recently and have seen an XmlReader exception occur when trying to open a large TBB (C# fragment) using the following code:
using (var client = new CoreService.CoreService2010Client())
{
    // Reading a large TBB (C# fragment) trips the default MaxStringContentLength quota
    var item = client.Read(tcmId, new ReadOptions());
    // More code
}
System.Xml.XmlException: The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader. Line 1, position 9201.
As it says in the message, I had to up the ReaderQuotas.MaxStringContentLength to fix this. So if you're working with any Building Blocks that have content bigger than 8KB expect this error.

Maximum binary contents length over WCF/Http

We have a WCF service that has a threshold of 30MB to send files over an http message, anything above that value gets transferred by file copy and the path sent back to the caller. Now we were requested to eliminate that file copy because customers complained it was too slow. So the decision was to remove any size limitation when sending binary content over WCF/HTTP.
My question is - how reliable is that? What type of issues will we encounter by pushing, say, a 2GB file over the wire in a single WCF message, if that is even possible?
Thanks!
If you set the MaxReceivedMessageSize in WCF to a high enough value on your WCF service, you can push a fairly large file through that service. The maximum is Int64.MaxValue = 9,223,372,036,854,775,807, so you should be able to set a value that covers a 2GB message.
You might want to control the MaxBufferSize to ensure you're not trying to store too much into memory, and maybe consider switching to the more binary-efficient MTOM message encoding if you can. Note that the MaxReceivedMessageSize governs the size of the message after the binary file has been encoded, which means the original binary file size which can be sent over the service will be smaller than 2GB.
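For example, something along these lines (the binding choice and values are illustrative, not a tested 2 GB recipe):
var binding = new System.ServiceModel.BasicHttpBinding();
binding.MessageEncoding = System.ServiceModel.WSMessageEncoding.Mtom; // keeps the binary out of base64
binding.MaxReceivedMessageSize = int.MaxValue;      // ceiling on the encoded message
binding.MaxBufferSize = int.MaxValue;               // in Buffered mode this must match MaxReceivedMessageSize
binding.ReaderQuotas.MaxArrayLength = int.MaxValue; // the byte[] itself is also quota-checked
// Because MaxBufferSize is an int, fully buffered transfers top out around 2 GB;
// beyond that (or to avoid holding it all in RAM) you are back to TransferMode.Streamed.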
MSDN has a very nice article covering sending large amounts of data over WCF and what to look out for: Large Data and Streaming.
Edit: Turns out the max value allowed is actually Int64.MaxValue.

Will the maximum limit of the configuration property MaxReceivedMessageSize in WCF affect service performance?

I'm getting the following communication exception from my WCF service when it makes a call to another WCF service:
"The maximum message size quota for incoming messages (65536) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element."
I resolved this by increasing the size as below:
maxReceivedMessageSize="50000000"
But I want to know whether there are any side effects of increasing the message size to such a big level.
Yes - it might. The reason WCF keeps this limit low (64K) by default is this: imagine your server is busy responding to requests, say dozens or hundreds, and they all require the maximum message size.
Potentially, your server could have to allocate dozens or hundreds of message buffers at the same time - if you have 100 users and each requests 64K, that's 6.4 MByte - but if you have 200 users and each requests 5 MB - that's a gigabyte of RAM in the server - just for the message buffers, for one service.
So yes - putting a limit on the max message size does make sense and it helps manage your server's memory consumption (and thus performance). If you open it up too wide, an attacker might just do such an attack - flooding your server with bogus requests, each allocating as much memory as they can get, ultimately bringing your server down (Denial of Service attacks like that are quite common).
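A quick back-of-the-envelope check before picking a value (using the 50 MB from your config and an assumed peak of 200 concurrent callers):
// Worst-case buffered memory = concurrent requests x maximum message size.
long maxMessageBytes = 50000000;   // the maxReceivedMessageSize from the config above
int concurrentCallers = 200;       // assumption about peak load
double worstCaseGb = (double)maxMessageBytes * concurrentCallers / (1024 * 1024 * 1024);
System.Console.WriteLine("Worst case: {0:F1} GB of message buffers", worstCaseGb);
// ~9.3 GB for these numbers - decide whether the server can actually absorb that.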
You need to increase the quota to match what the web service actually returns. One side effect I can think of, if you use very large values, is that memory usage will increase, but you can increase it to a suitable value without any adverse effects.