How do I increase the reader quota on a WCF service?

I don't expect anyone to be able to help out with this but let's give it a go.
I have a WinForms app that uses a WCF service to pull down a rather large JSON document serialised into a string. I have changed the client's reader quota on strings to 8192000 (arbitrary, but suitable for most cases) and put the service onto a custom binding with an explicit reader quota of 8192000.
Checking the service reference in Notepad by eye, the quota in the .svcinfo files is set to 8192, although this could be a red herring.
I'm at the end of my tether; I've followed every piece of advice I can find on Google:
http://www.haveyougotwoods.com/archive/2008/03/29/wcf-and-large-messages.aspx
http://msdn.microsoft.com/en-us/magazine/cc163394.aspx
to name but two, and all the suggested answers I could find on here, i.e.:
WCF service The maximum array length quota (16384) has been exceeded
WCF Maximum Message Size Quota
Maximum array length quota
and I looked at this:
http://wildermuth.com/2009/09/10/Using_Large_Message_Requests_in_Silverlight_with_WCF
which was a response to one of the above, or to one of the many other things I have looked at that I have not retrieved from my "Recently Closed Tabs" list.
Basically I can't think of anything else to do to increase this limit, and yet, upon first encountering a string longer than 64k, it still insists that the limits have not been altered at all.
So could anyone just give me a really basic step-by-step to altering this one setting for a WinForms app serialising and then deserialising JSON data as a string on either end of the transaction? A lot of the other advice has been about Silverlight or some other scenario, and for whatever reason it just fails to affect this case.

I tried the solution shown in the last article I linked to again, just to go over my previous work. This time, instead of preventing the WCF services from working at all (which is what had happened previously), it started to work and upped the limits.
I don't know what I was doing wrong the first time or what I did right this time... one of those things, I guess.
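For anyone hitting the same wall, here is a minimal programmatic sketch of the settings that have to agree on both ends (the contract name IMyJsonService and the address are made up for illustration). The key point is that reader quotas guard the receiving side of each message, so the client binding needs them just as much as the service binding:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IMyJsonService // hypothetical contract for illustration
{
    [OperationContract]
    string GetLargeJson();
}

public static class QuotaDemo
{
    public static void Main()
    {
        var binding = new BasicHttpBinding
        {
            MaxReceivedMessageSize = 8192000, // cap on the whole incoming message
            MaxBufferSize = 8192000           // must match for buffered transfers
        };
        // The 64k default that produces the error lives here:
        binding.ReaderQuotas.MaxStringContentLength = 8192000;
        binding.ReaderQuotas.MaxArrayLength = 8192000;

        var factory = new ChannelFactory<IMyJsonService>(
            binding, new EndpointAddress("http://localhost/MyJsonService.svc"));
        IMyJsonService client = factory.CreateChannel();
        string json = client.GetLargeJson();
        ((IClientChannel)client).Close();
    }
}

The same values need to appear in the service's binding configuration; if either side keeps the defaults, the 64k error comes back.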

Related

Recommended way of sending a big chunk of data from VB.NET server to HTML5 client?

Good morning!
I have this old Flash app (no worries, the question is not about that!) which receives data from a .NET server.
The data is a table of about two thousand rows, and basically what I do is: query the DB with VB.NET, create a long query string with the data and send it back to Flash with a Response.Write via GET. Then the Flash client parses it accordingly. The app is a parking lot GPS map that our employees use to locate vehicles. Believe it or not, it has worked fine for the last 12 years and counting!
Long story short, my boss has now asked me to start over and remake the app entirely in HTML5. One major change is that, for the sake of "standardization", the data will be converted from regular table columns to XML format, so the chunk of data will grow in size.
Also, I confess that I never felt completely happy moving data back and forth via GET. I can't remember exactly WHY I did it this dirty way. Probably because at the time we were in a rush to get the app running, so it just worked, and among a lot of other things to do it was put on the back burner, and the rest is history.
Anyway, since we are restarting fresh, I'd like to do it the right way this time. So the questions are:
What would you recommend for sending data from a .NET server to an AJAX client? Is POST the obvious alternative, or is there a newer and better way of doing it?
Should I send the whole XML as one big chunk of data and parse it entirely on the client, or would it be better to send it in array format (each item node as an array entry) and parse the array entries? My question here is what would be less CPU-intensive for the client, considering that the machines are tablets and not PCs.
Would streaming the data be an option, or is that a silly idea?
I appreciate suggestions and examples!
Thanks!
First off, I would suggest using JSON over XML. There are two common libraries you can use to serialize/deserialize JSON data in .NET: Newtonsoft.Json and System.Text.Json.
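For instance, a minimal System.Text.Json round trip (the Vehicle shape is made up to stand in for your rows):

using System.Text.Json;

public record Vehicle(int Id, double Lat, double Lng); // hypothetical row shape

public static class JsonDemo
{
    public static void Main()
    {
        var rows = new[] { new Vehicle(1, -23.55, -46.63), new Vehicle(2, -23.56, -46.64) };

        // Serialize on the server...
        string json = JsonSerializer.Serialize(rows);

        // ...and on an HTML5 client the browser's JSON.parse handles the other half;
        // a .NET client would deserialize like this:
        Vehicle[] back = JsonSerializer.Deserialize<Vehicle[]>(json);
    }
}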
What would you recommend for sending data from a .NET server to an AJAX client? Is POST the obvious alternative, or is there a newer and better way of doing it?
You should definitely be doing this via a POST request.
Should I send the whole XML as one big chunk of data and parse it entirely on the client, or would it be better to send it in array format (each item node as an array entry) and parse the array entries? My question here is what would be less CPU-intensive for the client, considering that the machines are tablets and not PCs.
This really depends. If I were writing this, I would add support for server-side pagination, so that you know how many total records there are but only return however many records are currently visible. This would dramatically improve speed; see the sketch below.
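A sketch of what I mean by server-side pagination, using LINQ's Skip/Take (PagedResult is a made-up wrapper, not a framework type):

using System.Collections.Generic;
using System.Linq;

public class PagedResult<T> // hypothetical wrapper type
{
    public int TotalCount { get; set; } // so the client knows how many rows exist in total
    public List<T> Items { get; set; }  // just the currently visible slice
}

public static class Paging
{
    public static PagedResult<T> GetPage<T>(IQueryable<T> source, int page, int pageSize)
    {
        return new PagedResult<T>
        {
            TotalCount = source.Count(),
            Items = source.Skip(page * pageSize) // skip earlier pages...
                          .Take(pageSize)        // ...return only one page
                          .ToList()
        };
    }
}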
Would streaming the data be an option, or is that a silly idea?
Just return a JSON response.
What would you recommend for sending data from a .NET server to an AJAX client? Is POST the obvious alternative, or is there a newer and better way of doing it?
There is no real difference between a GET and a POST, certainly not one that matters to your context anyway; GET would be fine.
A GET might look like this:
GET /api/parkinglot/1234 HTTP/1.1
Host: somehost.com
A POST might look like this:
POST /api/parkinglot HTTP/1.1
Host: somehost.com
Content-Type: application/json

{ "id": 1234 }
It's a text file, in essence, sent to the server, and the server responds. POST is not something that is "the way we do things now" or "more modern", and it doesn't "perform better". It uses trivially more bytes and is interpreted slightly differently by the server; that's about it. For what you're describing, GET would be every bit as valid.
Should I send the whole XML as one big chunk of data and parse it entirely on the client, or would it be better to send it in array format (each item node as an array entry) and parse the array entries? My question here is what would be less CPU-intensive for the client, considering that the machines are tablets and not PCs.
It doesn't really matter. An array isn't necessarily magically more or less of anything than XML; it's all just text, interpreted by the client. You could write a really wasteful array-based solution or a lean XML one. What you ought to be throwing away is the idea of sending massive blocks of data to the client. Clients are limited in resources; don't send 2000 of anything. What possible use could the user of the device have for 2000 items of data? You can't show it all on screen and meaningfully interpret it; if it's a tabular block of data they'll end up panning around it, scrolling, zooming, searching. Think about redesigning the app so that it sends the data they need when they need it. You might consider that sending 2000 points of data to be rendered as 1000 pins on a map, lat and long, is a great idea, and the client might have a really good rendering engine that can cope with it and make it quick and a pleasure to use... but really? It sounds like the server needs to do a lot more of the work here.
Would streaming the data be an option, or is that a silly idea?
This is all streaming. Every download or upload is a stream of data; data gets from A to B in a serial flow so that it pops out the other end in the same order it went in. You need to move away from the mental model of streaming vs downloading vs sending vs whatever else you think of in terms of getting data around the place; these are not distinct things. Start focusing on being really efficient with the data you request, the time it takes to process and emit it from the server, and the processing that happens on the client. Decide where it's best to do various calculations: there's no point in the client searching for all users called Smith, the server sending a million people to the client, and the client parsing and searching the data; the server should do most of that. Conversely, if you want to draw a triangle on screen, you can send 3 points and have the client render it, instead of having the server render a 2-million-pixel image, send it, and have the client draw the image. In one of these examples the server does a lot, in the other the client does a lot, and in both the problem is an excess of data flowing. Focus on the strengths of each resource.
I appreciate suggestions and examples!
It isn't really what Stack Overflow is for; we don't design your programs for you or write them. You have to do that, and we tell you how to fix issues you hit along the way. Questions that ask "what is the best..." are typically off topic because they attract opinionated answers.
In writing this answer I haven't really answered any of the questions you've asked in the way you want, because that simply isn't permitted. Instead I've tried to stick to factual observations and points you should consider when forming your own solution. When you hit problems with that solution we can help, but "design and implement my solution for me" is not a problem we can help with.

WCF streaming issue when setting position to 0

On a WCF REST service I am dealing with streams. In a service method I am uploading a stream in a data contract, which works fine. On the service side I process the stream, and its position is then at EOF. After doing that I need to set its position back to 0 so I can save it there. But it throws the exception:
Specified method is not supported.
Does this mean I can't process a stream more than once? If so, I will need a workaround :/ and the only solution that pops into my mind is sending the stream twice so I can process it separately, but that is not good since I would have to upload it twice.
Any help would be appreciated.
Funny that I found my own solution :) First I saved the stream, then read it from that path for further processing. It's interesting that finding the solution didn't require more detailed technical information, but rather a change of logical approach.
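The same idea works without touching disk, assuming the payload fits in memory: the incoming WCF stream is forward-only (which is why Position = 0 throws), so copy it into a MemoryStream, which is seekable, and rewind that instead. A minimal sketch:

using System.IO;

public static class StreamHelper
{
    public static MemoryStream MakeSeekable(Stream input)
    {
        var buffer = new MemoryStream();
        input.CopyTo(buffer);  // drain the one-shot, forward-only stream
        buffer.Position = 0;   // rewinding the in-memory copy is legal
        return buffer;
    }
}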

How should I decide the quotas for the Tridion core service binding?

I am connecting to the Tridion core service using a System.ServiceModel.WsHttpBinding. The service will only be used by authenticated users, and probably only by code which I control. I have to choose values for the following:
MaxBufferPoolSize (default 524,288 bytes)
MaxReceivedMessageSize (default 65,536 bytes)
ReaderQuotas.MaxArrayLength (default 16,384)
ReaderQuotas.MaxBytesPerRead (default 4,096 bytes)
ReaderQuotas.MaxNameTableCharCount (default 16,384 characters)
ReaderQuotas.MaxStringContentLength (default 8,192 characters)
The code examples I have seen for using the core service invariably set at least some of these to values larger than the defaults, for example 4 MB. Is this because of known problems when other values, such as the defaults, are used?
MaxBufferPoolSize is there to allow you to prevent excessive garbage collections. Is this simply a matter of monitoring GCs and tuning based on that?
MaxReceivedMessageSize, MaxArrayLength and MaxBytesPerRead are there to defend against DoS attacks, so in my scenario perhaps I can improve throughput by increasing these. Would a really large number help?
MaxNameTableCharCount seems to be there to prevent uncontrolled growth of something you might not want to grow uncontrolledly, so perhaps leaving the default would be a good thing.
The documentation on MaxStringContentLength doesn't specify what happens if you exceed the quota. Presumably ReadContentAsString will fail in some way, so perhaps this value should be large.
So - should I leave these values at their defaults? Will that cause me problems? Should I increase them to large values? Will that help with throughput etc., or is it more likely to cause other problems?
The general rule is to keep these values as small as possible, just enough for your code to work. If you take a look at the default config that ships with CoreService.dll, you'll see it has some of the values increased.
For example, if you expect to get large XML lists (or search results), you should increase MaxReceivedMessageSize. Keep in mind that you have control over the size of the list you will get by using the BaseColumns property of a filter.
If you prefer the GetList, GetSystemWideList and GetSearchResults methods over their XML counterparts, you will probably have to increase ReaderQuotas.MaxArrayLength together with MaxReceivedMessageSize. But please note that large arrays will be stored in memory.
I'm not sure you want to increase any of these values before you actually hit a limit; WCF is quite good at pointing you to the parameter you have to adjust.
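As a sketch of that advice, raising just the two values mentioned above on a WSHttpBinding (4 MB is the example figure from the question, not a Tridion-documented requirement):

using System.ServiceModel;

public static class CoreServiceBindingDemo
{
    public static void Main()
    {
        var binding = new WSHttpBinding
        {
            MaxReceivedMessageSize = 4 * 1024 * 1024 // room for large XML lists / search results
        };
        binding.ReaderQuotas.MaxStringContentLength = 4 * 1024 * 1024;
        binding.ReaderQuotas.MaxArrayLength = 4 * 1024 * 1024; // GetList & co. return arrays held in memory
    }
}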
I'm afraid this is not really an answer to your questions... but from my experience, I increased the values to more than the suggested defaults. I used 4 MB, as you already suggested. This was mainly because I was experiencing errors while communicating with the Core Service; they were related to the request/response sizes exceeding the allotted limits.
Moreover, in the case of Core Service transactionality, I saw more of these exceptions. It seems that the sizes of requests/responses increase quite a bit when using transactions. In my case I was creating a batch of Components in one big transaction; if one Component failed to create, I would roll back the whole transaction.
Hope this helps.
I've been experimenting with the Core Service recently and have seen an XmlReader exception occur when trying to open a large TBB (C# fragment) using the following code:
using (var client = new CoreService.CoreService2010Client())
{
    var item = client.Read(tcmId, new ReadOptions());
    // More code
}
System.Xml.XmlException: The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader. Line 1, position 9201.
As it says in the message, I had to raise ReaderQuotas.MaxStringContentLength to fix this. So if you're working with any Building Blocks whose content is bigger than 8 KB, expect this error.
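For reference, a sketch of bumping that quota programmatically on the generated client from the snippet above, assuming it is done before the channel is first used (configuring it in app.config is the more usual route; the tcmId value is made up):

using System.ServiceModel;

public static class CoreServiceReadDemo
{
    public static void Main()
    {
        string tcmId = "tcm:1-234"; // hypothetical item URI
        using (var client = new CoreService.CoreService2010Client())
        {
            // Raise the quota before the first call opens the channel.
            if (client.Endpoint.Binding is WSHttpBinding binding)
            {
                binding.ReaderQuotas.MaxStringContentLength = 1024 * 1024; // 1 MB, arbitrary
            }
            var item = client.Read(tcmId, new ReadOptions());
        }
    }
}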

Looking for the optimal WCF quota settings

I know, my question is kinda wishy washy, but what would you say are "optimal" settings for WCF quotas, e.g. MaxReceivedMessageSize etc.?
My service mostly returns small values, but sometimes the return values exceed the default quotas. There are even larger return values, which I return as streams from a second endpoint.
Now, the default value for MaxReceivedMessageSize of 65536 bytes is quite low, I think (no question, the streamed endpoint uses higher values; my question concerns buffered communication). There are tons of "tutorials" which just set this value to Int32.MaxValue, which isn't a good idea at all ;)
Well what do you think? Which values are viable but are also safe enough not to make your service vulnerable for DoS and other stuff?
Regards
The viable value really depends on the size of the data you are expecting. If you know that sometimes you can get up to 256 KB, then set the value to 256 KB. In the case of an internal service the limit can probably be set to Int32.MaxValue, but I think that is much more about laziness than about making an assumption about the transferred data. For a public web service you will hardly set the value to Int32.MaxValue, because anybody would be able to blow up your server.
Btw. if we are talking about data returned from the service, then this decision is made on the client side; both the reader quotas and MaxReceivedMessageSize apply to the received message, not the sent message, so if your service returns data in response to a client's request, the limit is set on the client side. For example, in the case of a public web service you don't have all the clients under your control, so you must also consider how much data you want to return.
A separate endpoint means separate configuration on both the client and server sides.
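As a concrete client-side sketch of that advice, sizing the binding to the largest response you actually expect (256 KB here, per the example above) rather than Int32.MaxValue:

using System.ServiceModel;

public static class ClientQuotaDemo
{
    public static void Main()
    {
        var binding = new BasicHttpBinding
        {
            MaxReceivedMessageSize = 256 * 1024, // matches the largest expected response
            MaxBufferSize = 256 * 1024           // must match for buffered transfers
        };
        binding.ReaderQuotas.MaxStringContentLength = 256 * 1024;
    }
}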

Is there a built-in way to determine the size of a WCF response?

Before a client gets the full payload of the web request, we'd like to first send it a measurement of the size of the response it will get. If the response will be too large, the client will present a message to the user giving them the option to abort the operation.
We can write some custom code to preload the response on the server, determine the size, and then pass it on to the client, but we'd rather not if there's another way to do it.
Does anyone know if WCF has any tricky way to do this? Or are there any free third party tools out there that will accomplish this?
Thanks.
I don't think there's anything "tricky" in WCF or the .NET framework to do this, really. What are you passing back to the client? An instance of a class?
What you could do is run the query (or however you fetch the response), then serialize the result into a memory stream and see how big it gets. This won't be a totally accurate size - the SOAP message has some overhead, like the SOAP envelope and headers - but it can give you a ballpark figure of whether you're about to return a few hundred bytes or a couple of megabytes.
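A minimal sketch of that measuring idea, assuming the payload is a data-contract type (swap in whatever serializer your binding actually uses):

using System.IO;
using System.Runtime.Serialization;

public static class SizeProbe
{
    // Serializes the payload to a throwaway stream and reports the byte
    // count: a ballpark for the body, excluding SOAP envelope overhead.
    public static long MeasureBody<T>(T payload)
    {
        var serializer = new DataContractSerializer(typeof(T));
        using (var ms = new MemoryStream())
        {
            serializer.WriteObject(ms, payload);
            return ms.Length;
        }
    }
}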
Trouble is: this might take a while on the server just to assemble/query, and then to actually "measure", too. Plus you'd almost have to have two calls - one "MeasureResult" call which returns an Int or Long or something, and then a second "GetResult" call to actually get the results. So you'd incur the effort of assembling the message twice...
I don't really have a good answer for you, but maybe you just need to figure out some other way to allow the client to abort a call if it takes too long. Or find a way to more quickly figure out an indicator as to how large the response will be (without getting all the details of the response itself).