Are there any effective ways to prevent exceeding the message size quota for a NetTcp service that returns a collection of undetermined size? For example, I have the following service:
public List<string> GetDirInfo(string path)
{
    var result = new List<string>(Directory.EnumerateDirectories(path));
    result.AddRange(Directory.EnumerateFiles(path));
    return result;
}
which basically just returns a list of the directories and files in the given path. At times I will hit the message size quota, e.g. when returning the contents of C:\Windows\System32.
Without changing message size quota, what is the best way to handle or prevent this sort of service call? A few ways that I can think of:
Convert to Stream service call
Implement Pagination (see the sketch below)
As a prevention, always check the size of result prior to returning.
Implement Callback mechanism where service will return a series of fixed sized result back to the client.
Is there any WCF configuration where the returned message is split into multiple packets and reassembled by the CLR, with each packet staying within the message size quota?
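To make the pagination option concrete, here is a minimal sketch, assuming the contract can be changed to take paging parameters; the parameter names and the use of Directory.EnumerateFileSystemEntries are my own illustration, not part of the original service:

// A hedged sketch of the pagination option (item 2 above). Parameter names
// are illustrative; requires System.IO and System.Linq.
public List<string> GetDirInfo(string path, int pageIndex, int pageSize)
{
    // EnumerateFileSystemEntries yields directories and files lazily,
    // so only the requested page is materialized and serialized.
    return Directory.EnumerateFileSystemEntries(path)
                    .Skip(pageIndex * pageSize)
                    .Take(pageSize)
                    .ToList();
}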
Edit:
I saw several postings on similar subject matter, but they all involved changing the message size quota. I do not want to raise the default message size, in order to minimize exposure to DoS attacks.
I want to get some advice on best practices: how are you handling or implementing a case where the returned result might exceed the message size quota?
Related
I am connecting to the Tridion core service using a System.ServiceModel.WsHttpBinding. The service will only be used by authenticated users, and probably only by code which I control. I have to choose values for the following:
MaxBufferPoolSize (default 524,288 bytes)
MaxReceivedMessageSize (default 65,536 bytes)
ReaderQuotas.MaxArrayLength (default 16,384)
ReaderQuotas.MaxBytesPerRead (default 4,096 bytes)
ReaderQuotas.MaxNameTableCharCount (default 16,384 characters)
ReaderQuotas.MaxStringContentLength (default 8,192 characters)
The code examples I have seen for using the core service invariably set at least some of these to values larger than the defaults, for example 4Mb. Is this because of known problems when other values, such as the defaults, are used?
MaxBufferPoolSize is there to allow you to prevent excessive garbage collections. Is this simply a matter of monitoring GCs and tuning based on that?
MaxReceivedMessageSize, MaxArrayLength and MaxBytesPerRead are there to defend against DoS attacks, so in my scenario perhaps I can improve throughput by increasing them. Would a really large number help?
MaxNameTableCharCount seems to be there to prevent uncontrolled growth of something you might not want to grow uncontrollably, so perhaps leaving the default would be a good thing.
The documentation on MaxStringContentLength doesn't specify what happens if you exceed the quota. Presumably ReadContentAsString will fail in some way, so perhaps this value should be large.
So - should I leave these values at their defaults? Will that cause me problems? Should I increase them to large values? Will that help with throughput etc., or is it more likely to cause other problems?
The general rule is to keep these values as small as possible, just large enough for your code to work. If you take a look at the default config that ships with CoreService.dll, it has some of the values increased.
For example, if you expect to get large XML lists (or search results), you should increase MaxReceivedMessageSize. Keep in mind that you have control over the size of the list you will get by using the BaseColumns property of a filter.
If you prefer the GetList, GetSystemWideList and GetSearchResults methods over their XML counterparts, you will probably have to increase ReaderQuotas.MaxArrayLength together with MaxReceivedMessageSize. But please note that large arrays will be buffered in memory.
I'm not sure you want to increase any of these values until you actually hit a limit. WCF is quite good at pointing you to the parameter you have to adjust.
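To make that concrete, here is a minimal sketch (my illustration, not the config shipped with CoreService.dll) of raising those quotas on a WSHttpBinding in code; the 4MB figure echoes the examples mentioned in the question and is not a recommendation:

// Hedged sketch: enlarging the relevant quotas on a WSHttpBinding in code.
// Tune the values to the largest list you actually expect.
var binding = new System.ServiceModel.WSHttpBinding();
binding.MaxReceivedMessageSize = 4 * 1024 * 1024;            // 4 MB
binding.ReaderQuotas.MaxArrayLength = 4 * 1024 * 1024;
binding.ReaderQuotas.MaxStringContentLength = 4 * 1024 * 1024;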
I'm afraid this is not really an answer to your questions... But, from my experience, I increased the values beyond the suggested defaults. I used 4MB, as you already suggested. This was because I was experiencing errors while communicating with the Core Service; they were related to the request/response sizes exceeding the allotted quotas.
Moreover, in the case of Core Service transactionality, I saw more of these exceptions. It seems that the sizes of requests/responses increase quite a bit when using transactions. In my case, I was creating a batch of Components in one big transaction; if one Component failed to create, I would roll back the whole transaction.
Hope this helps.
I've been experimenting with the Core Service recently and have seen an XmlReader exception occur when trying to open a large TBB (C# fragment) using the following code:
using (var client = new CoreService.CoreService2010Client())
{
    var item = client.Read(tcmId, new ReadOptions());
    // More code
}
System.Xml.XmlException: The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader. Line 1, position 9201.
As the message says, I had to raise ReaderQuotas.MaxStringContentLength to fix this. So if you're working with any Building Blocks whose content is bigger than 8KB, expect this error.
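For reference, a minimal sketch of that fix applied on the client side, assuming the client is constructed in code; the endpoint URL and the 1MB figure are placeholders, not from my actual setup:

// Hedged sketch: raising MaxStringContentLength on the client binding
// before creating the client. URL and quota value are placeholders.
var binding = new System.ServiceModel.WSHttpBinding();
binding.ReaderQuotas.MaxStringContentLength = 1024 * 1024; // 1 MB
var endpoint = new System.ServiceModel.EndpointAddress(
    "http://localhost/webservices/CoreService2010.svc/wsHttp");
using (var client = new CoreService.CoreService2010Client(binding, endpoint))
{
    var item = client.Read(tcmId, new ReadOptions());
}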
We have a WCF service that has a threshold of 30MB for sending files over an HTTP message; anything above that value gets transferred by file copy and the path sent back to the caller. Now we have been asked to eliminate that file copy because customers complained it was too slow, so the decision was to remove any size limitation when sending binary content over WCF/HTTP.
My question is - how reliable is that? What type of issues will we encounter by pushing, say, a 2GB file over the wire in a single WCF message, if that is even possible?
Thanks!
If you set MaxReceivedMessageSize on your WCF service to a high enough value, you can push a fairly large file through it. The maximum is Int64.MaxValue = 9,223,372,036,854,775,807, so you should be able to set a value that covers a 2GB message.
You might want to control MaxBufferSize to ensure you're not trying to hold too much in memory, and maybe consider switching to the more binary-efficient MTOM message encoding if you can. Note that MaxReceivedMessageSize governs the size of the message after the binary file has been encoded, which means the largest binary file that can be sent over the service will be somewhat smaller than 2GB.
MSDN has a very nice article covering sending large amounts of data over WCF and what to look out for: Large Data and Streaming.
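A hedged sketch of a binding set up along those lines; the streamed transfer mode and MTOM encoding follow that article's guidance, and the values are illustrative only:

// Hedged sketch: a binding configured for streaming large binary content,
// per the "Large Data and Streaming" guidance. Values are illustrative.
var binding = new System.ServiceModel.BasicHttpBinding
{
    TransferMode = System.ServiceModel.TransferMode.Streamed,
    MessageEncoding = System.ServiceModel.WSMessageEncoding.Mtom, // binary-efficient
    MaxReceivedMessageSize = 2L * 1024 * 1024 * 1024              // room for ~2 GB
};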
Edit: it turns out the maximum value allowed is actually Int64.MaxValue.
I know, my question is kinda wishy washy, but what would you say are "optimal" settings for WCF quotas, e.g. MaxReceivedMessageSize etc.?
My service mostly returns small values, but sometimes the return values exceed the default quotas. There are even larger return values, which I return as streams at a second endpoint.
Now the default value for MaxReceivedMessageSize of 65,536 bytes is quite low, I think (no question, the streamed endpoint uses higher values; my question concerns buffered communication). There are tons of "tutorials" which just set this value to Int32.MaxValue, which isn't a good idea at all ;)
Well what do you think? Which values are viable but are also safe enough not to make your service vulnerable for DoS and other stuff?
Regards
A viable value really depends on the size of the data you are expecting. If you know that you can sometimes get up to 256KB, then set the value to 256KB. In the case of an internal service the limit can probably be set to Int32.MaxValue, but I think that is mostly laziness about making any assumption about the transferred data. For a public web service you would hardly set the value to Int32.MaxValue, because then anybody would be able to blow up your server.
Btw. if we are talking about data returned from the service, then this decision is on the client: both reader quotas and MaxReceivedMessageSize constrain the received message, not the sent message, so if your service returns data in response to a client's request, the limit is set on the client side. For example, in the case of a public web service you don't have all clients under your control, so you must also consider how much data you want to return.
A separate endpoint is separate configuration on both client and server sides.
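To illustrate the point about the receiving side, here is a minimal sketch (numbers assumed, not a recommendation) of a client raising its own binding limits for large responses:

// Hedged sketch: quotas apply to the *receiving* side, so a client expecting
// responses up to ~256 KB raises its own binding limits.
var clientBinding = new System.ServiceModel.NetTcpBinding
{
    MaxReceivedMessageSize = 256 * 1024
};
clientBinding.ReaderQuotas.MaxStringContentLength = 256 * 1024;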
I have a WCF service which returns a list of many objects, e.g. 100,000.
I get an error when calling this function because the maximum size I am allowed to pass back from WCF has been exceeded.
Is there a built-in way I could return this in smaller chunks, e.g. 20,000 at a time?
I can increase the size allowed back from WCF, but was wondering what the alternatives were.
Thanks
Without knowing your requirements, I'd take a look at two other possible options:
Paging: If your 100,000 objects are coming from a database, then use paging to reduce the amount of data and invoke the service in batches with a page number. If the objects are not coming from a database, then you'd need to look at how that data will be stored server-side between invocations.
Streaming: Return the data to the caller as a stream instead.
With the streaming option, you'd have to do some more work in terms of managing the serialization of the objects, but it would allow the client to "pull" the objects from the service at its own pace. Streaming is supported in most, if not all, of the standard bindings (including HTTP).
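A minimal sketch of the streaming option, assuming the contract can be changed to return a Stream; IBulkData, MyItem and LoadItems are illustrative names, not from the question:

// Hedged sketch of the streaming option. The binding would use
// TransferMode.Streamed so the client reads the stream at its own pace.
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IBulkData
{
    [OperationContract]
    Stream GetAllItems();
}

public class MyItem { public string Name { get; set; } }

public class BulkDataService : IBulkData
{
    public Stream GetAllItems()
    {
        // Caveat: MemoryStream still buffers everything server-side; a custom
        // forward-only stream would avoid that, but this shows the shape.
        var stream = new MemoryStream();
        new DataContractSerializer(typeof(List<MyItem>))
            .WriteObject(stream, LoadItems());
        stream.Position = 0;
        return stream;
    }

    private static List<MyItem> LoadItems()
    {
        // Placeholder for fetching the 100,000 objects.
        return new List<MyItem>();
    }
}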
I don't expect anyone to be able to help out with this but let's give it a go.
I have a WinForms app that uses a WCF service to pull down a rather large JSON document serialised into a string. I have changed the client's reader quota on strings to 8192000 (arbitrary, but suitable for most cases) and put the service onto a custom binding with an explicit reader quota of 8192000.
Checking the service reference in Notepad by eye, the quota in the .svcinfo files is set to 8192, although this could be a red herring.
I'm at the end of my tether, I've followed every piece of advice I can find on Google:
http://www.haveyougotwoods.com/archive/2008/03/29/wcf-and-large-messages.aspx
http://msdn.microsoft.com/en-us/magazine/cc163394.aspx
to name but two, and all the suggested answers I could find on here, i.e.
WCF service The maximum array length quota (16384) has been exceeded
WCF Maximum Message Size Quota
Maximum array length quota
and I looked at this:
http://wildermuth.com/2009/09/10/Using_Large_Message_Requests_in_Silverlight_with_WCF
which was a response to one of the above, or to one of the many other things I have looked at that I have not retrieved from my "Recently Closed Tabs" list.
Basically I can't think of anything else to do to increase this limit, and yet upon first encountering a string longer than 64k in length it still insists that the limits have not been altered at all.
So could anyone just give me a really basic step-by-step guide to altering this one setting for a WinForms app serialising and then deserialising JSON data as a string on either end of the transaction? A lot of the other advice has been about Silverlight or some other scenario, and for whatever reason it just fails to affect this case.
I tried the solution shown in the last article I linked to again, just to go over my previous work. This time, instead of preventing the WCF services from working at all (which is what had happened previously), it started to work and upped the limits.
I don't know what I was doing wrong the first time or what I did right this time... one of those things, I guess.
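For anyone hitting the same wall, here is a hedged sketch of the kind of code-based fix the linked articles describe; the 8192000 figure comes from the question, everything else is illustrative. The key point is that an equivalent binding must be applied on both the client and the service, otherwise the 8192 default still wins on one side:

// Hedged sketch: building a custom binding in code and raising the string
// quota and message size on it. Apply the same settings on both ends.
var binding = new System.ServiceModel.Channels.CustomBinding(
    new System.ServiceModel.BasicHttpBinding());
var encoding = binding.Elements
    .Find<System.ServiceModel.Channels.TextMessageEncodingBindingElement>();
encoding.ReaderQuotas.MaxStringContentLength = 8192000;
var transport = binding.Elements
    .Find<System.ServiceModel.Channels.HttpTransportBindingElement>();
transport.MaxReceivedMessageSize = 8192000;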