Maximum binary contents length over WCF/HTTP

We have a WCF service that has a threshold of 30MB for sending files over an HTTP message; anything above that value gets transferred by file copy, with the path sent back to the caller. Now we have been asked to eliminate that file copy because customers complained it was too slow. So the decision was to remove any size limitation when sending binary content over WCF/HTTP.
My question is - how reliable is that? What type of issues will we encounter by pushing, say, a 2GB file over the wire in a single WCF message, if that is even possible?
Thanks!

If you set the MaxReceivedMessageSize in WCF to a high enough value on your WCF service, you can push a fairly large file through that service. The maximum is Int64.MaxValue = 9,223,372,036,854,775,807, so you should be able to set a value that covers a 2GB message.
You might want to control the MaxBufferSize to ensure you're not trying to store too much in memory, and maybe consider switching to the more binary-efficient MTOM message encoding if you can. Note that MaxReceivedMessageSize governs the size of the message after the binary file has been encoded; since text (base64) encoding inflates binary content by roughly a third, the largest binary file you can send over the service will be somewhat smaller than 2GB.
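For illustration, here is a minimal sketch (the values and the binding choice are my own, not from the original question) of a basicHttpBinding configured in code for large transfers; streamed mode keeps the service from holding the whole file in memory:

using System;
using System.ServiceModel;

var binding = new BasicHttpBinding
{
    MaxReceivedMessageSize = 2L * 1024 * 1024 * 1024, // the quota is an Int64, so 2GB fits
    TransferMode = TransferMode.Streamed,             // don't buffer the whole file in memory
    MessageEncoding = WSMessageEncoding.Mtom,         // more efficient for binary payloads
    SendTimeout = TimeSpan.FromMinutes(30)            // large uploads need a generous timeout
};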
MSDN has a very nice article covering sending large amounts of data over WCF and what to look out for: Large Data and Streaming.

Related

WCF nettcp traffic optimization

How can I optimize traffic on the net.tcp binding?
One data object takes 300-1000 bytes in memory, and I need to transfer nearly 1,000,000 objects, so I can generate more than 1 GB of traffic. Can the length of a field name influence the serialized object size (i.e., does the XML serializer use field names as XML element names)?
Also, am I right that a binary serializer is used by default?
Would enabling gzip compression be effective at around 1 GB, counting the total time to compress + transfer + decompress?
Or, in this case, would it be more effective to create a custom serializer?
By default the net.tcp binding uses Microsoft's binary XML encoding. Its output does depend on the length of the XML tags you use, but each tag is listed only once, so if you pass all 1,000,000 objects in one WCF message, every tag will appear just once.
More importantly, WCF uses buffered mode by default, which means you will have all your objects in memory (1 GB), and then WCF will serialize them into something else - let's assume that is another 1 GB. If you use reliable sessions, one more copy of the message will reside in memory until a confirmation from the receiving side arrives.
So traffic is not the only concern; the local memory footprint will also be significant.
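To keep that footprint down, here is a minimal sketch (values are illustrative) that switches net.tcp to streamed transfer, so objects are written to the wire as they are serialized rather than held as one 1 GB buffer:

using System.ServiceModel;

var binding = new NetTcpBinding
{
    TransferMode = TransferMode.Streamed,             // write to the transport while serializing
    MaxReceivedMessageSize = 2L * 1024 * 1024 * 1024  // leave room for the ~1 GB payload
};
binding.ReliableSession.Enabled = false;              // reliable sessions require buffered mode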

Which is the best transferMode for soap response?

I have added a service reference to my web.config file, but I am not sure about the transferMode property inside the binding tag.
In the basicHttpBinding, which is the best transferMode for soap/xml response?
Basically there are four transfer modes. If you narrow those down to two, buffered and streamed, here are the criteria:
If you are transferring large files, mostly binary files, try using streamed. This mode streams the data to the client instead of sending one big chunk, which makes your application more efficient in terms of memory consumption. Note that some of the more advanced WCF features are not available with this transfer mode.
Buffered is the default. It is suitable for normal messages of relatively small or medium size: the whole request or response is buffered in memory and then flushed to the client or server.
There is also another approach, which needs a custom channel that sends messages in multiple chunks:
Chunk Channel
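For the common case of small requests but large responses, here is a minimal sketch (values are illustrative) using the asymmetric StreamedResponse mode, which buffers requests but streams responses:

using System.ServiceModel;

var binding = new BasicHttpBinding
{
    TransferMode = TransferMode.StreamedResponse,  // buffer requests, stream responses
    MaxReceivedMessageSize = 64L * 1024 * 1024     // cap on what the receiver will accept
};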

How should I decide the quotas for the Tridion core service binding?

I am connecting to the Tridion core service using a System.ServiceModel.WsHttpBinding. The service will only be used by authenticated users, and probably only by code which I control. I have to choose values for the following:
MaxBufferPoolSize (default 524,288 bytes)
MaxReceivedMessageSize (default 65,536 bytes)
ReaderQuotas.MaxArrayLength (default 16,384)
ReaderQuotas.MaxBytesPerRead (default 4,096 bytes)
ReaderQuotas.MaxNameTableCharCount (default 16,384 characters)
ReaderQuotas.MaxStringContentLength (default 8,192 characters)
The code examples I have seen for using the core service invariably set at least some of these to values larger than the defaults, for example 4MB. Is this because of known problems when other values, such as the defaults, are used?
MaxBufferPoolSize is there to allow you to prevent excessive garbage collections. Is this simply a matter of monitoring GCs and tuning based on that?
MaxReceivedMessageSize, MaxArrayLength and MaxBytesPerRead are there to defend against DoS attacks, so in my scenario perhaps I can improve throughput by increasing these. Would a really large number help?
MaxNameTableCharCount seems to be there to prevent uncontrolled growth of the reader's name table, so perhaps leaving the default would be a good thing.
The documentation on MaxStringContentLength doesn't specify what happens if you exceed the quota. Presumably ReadContentAsString will fail in some way, so perhaps this value should be large.
So - should I leave these values at their defaults? Will that cause me problems? Should I increase them to large values? Will that help with throughput etc., or is it more likely to cause other problems?
The general rule is to keep these values as small as possible - just large enough for your code to work. If you take a look at the default config that ships with CoreService.dll, you'll see that some of the values have been increased.
For example, if you expect to get large XML lists (or search results), you should increase MaxReceivedMessageSize. Keep in mind that you have control over the size of the lists you get back via the BaseColumns property of a filter.
If you prefer the GetList, GetSystemWideList and GetSearchResults methods over their XML counterparts, you will probably have to increase ReaderQuotas.MaxArrayLength together with MaxReceivedMessageSize. But please note that large arrays will be stored in memory.
I'm not sure you want to increase any of these values before you actually hit a limit; WCF is quite good at pointing you to the parameter you have to adjust.
I'm afraid this is not really an answer to your questions... but from my experience, I increased the values beyond the defaults, to the 4MB you already mentioned. This was because I was experiencing errors while communicating with the Core Service, related to request/response sizes exceeding the allotted limits.
Moreover, with Core Service transactionality I saw more of these exceptions; the sizes of requests/responses seem to increase quite a bit when using transactions. In my case, I was creating a batch of Components in one big transaction, and if one Component failed to create, I would roll back the whole transaction.
Hope this helps.
I've been experimenting with the Core Service recently and have seen an XmlReader exception occur when trying to open a large TBB (C# fragment) using the following code:
using (var client = new CoreService.CoreService2010Client())
{
    var item = client.Read(tcmId, new ReadOptions());
    // More code
}
System.Xml.XmlException: The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader. Line 1, position 9201.
As it says in the message, I had to increase ReaderQuotas.MaxStringContentLength to fix this. So if you're working with any Building Blocks whose content is bigger than 8KB, expect this error.
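A minimal sketch of the corresponding fix when building the binding in code (the endpoint address is a placeholder, and 4MB is just the example figure mentioned above):

using System.ServiceModel;

var binding = new WSHttpBinding();
binding.MaxReceivedMessageSize = 4 * 1024 * 1024;              // overall message cap
binding.ReaderQuotas.MaxStringContentLength = 4 * 1024 * 1024; // large TBB content
binding.ReaderQuotas.MaxArrayLength = 4 * 1024 * 1024;         // large list/array results

var client = new CoreService.CoreService2010Client(
    binding,
    new EndpointAddress("http://localhost/webservices/CoreService2010.svc/wsHttp")); // placeholder address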

Send binary data via WCF: binary vs MTOM encoding

I have limited knowledge of WCF and of sending binary data via WCF, so this question may be somewhat rudimentary.
I would like to know the difference between sending data with BinaryMessageEncodingBindingElement and MtomMessageEncodingBindingElement. It is still not clear to me when to use which approach after reading this page from MSDN on Large Data and Streaming.
Also, a small question: are a message with attachments and an MTOM message the same thing?
MTOM is a standard that uses multi-part mime-encoded messages to send portions of the message that are large and would be too expensive to base64 encode as pure binary. The SOAP message itself is sent as the initial part of the message and contains references to the binary parts which a web service software stack like WCF can then pull back together to create a single representation of the message.
Binary encoding is entirely proprietary to WCF and really doesn't just have to do with large messages. It presents a binary representation of the XML Infoset which is far more compact across the wire and faster to parse than text based formats. If you happen to be sending large binary chunks of data then it just fits right in with the other bytes that are being sent.
Streaming can be used with any message encoding. It's more about when the data is written across the network vs. being buffered entirely in memory before being handed to the network transport. Smaller messages make more sense to buffer up before sending, while larger messages, especially those containing large binary chunks or streams, need to be streamed or they will exhaust memory resources.
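To make the distinction concrete, here is a minimal sketch (the binding choices are mine, not from the MSDN page) showing the two encodings over HTTP: MTOM via the standard basicHttpBinding, and WCF's proprietary binary encoding via a custom binding:

using System.ServiceModel;
using System.ServiceModel.Channels;

// MTOM: interoperable multipart MIME; large binary parts travel outside the base64 text.
var mtomBinding = new BasicHttpBinding
{
    MessageEncoding = WSMessageEncoding.Mtom
};

// Binary encoding: WCF-to-WCF only; a compact binary form of the entire XML Infoset.
var binaryBinding = new CustomBinding(
    new BinaryMessageEncodingBindingElement(),
    new HttpTransportBindingElement());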

Looking for the optimal WCF quota settings

I know, my question is kinda wishy washy, but what would you say are "optimal" settings for WCF quotas, e.g. MaxReceivedMessageSize etc.?
My service mostly returns small values, but sometimes the return values exceed the default quotas. There are even larger return values, which I return as streams at a second endpoint.
Now the default value for MaxReceivedMessageSize (no question, the streamed endpoint uses higher values; my question concerns buffered communication) of 65536 bytes is quite low, I think. There are tons of "tutorials" which just set this value to Int32.MaxValue, which isn't a good idea at all ;)
Well what do you think? Which values are viable but are also safe enough not to make your service vulnerable for DoS and other stuff?
Regards
The viable value really depends on the size of the data you are expecting. If you know that you can sometimes get up to 256KB, then set the value to 256KB. For an internal service the limit can probably be set to Int32.MaxValue, but I think that is mostly laziness about making assumptions regarding the transferred data. For a public web service you should hardly ever set the value to Int32.MaxValue, because anybody would be able to blow up your server.
Btw. if we are talking about data returned from the service, then this decision lies with the client - both the reader quotas and MaxReceivedMessageSize constrain the received message, not the sent message, so if your service returns data in response to a client's request, the limit is enforced on the client side. For a public web service, for example, you don't have all clients under your control, so you must also consider how much data you want to return.
A separate endpoint is separate configuration on both client and server sides.
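As a concrete illustration (the 256KB figure is the example value from above, and the contract and address are placeholders), here is a client-side binding sized to the expected payload instead of Int32.MaxValue:

using System.ServiceModel;

var binding = new BasicHttpBinding
{
    MaxReceivedMessageSize = 256 * 1024   // the largest response we actually expect
};
binding.ReaderQuotas.MaxStringContentLength = 256 * 1024;

// The quota protects the receiver, so on the client it bounds what this client accepts.
var factory = new ChannelFactory<IMyService>(
    binding,
    new EndpointAddress("http://example.com/MyService.svc")); // placeholder address

[ServiceContract]
public interface IMyService   // placeholder contract for illustration
{
    [OperationContract]
    string GetData();
}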