I have run into the exception below a few times in the past and each time I just change the configuration to allow a bigger object graph.
"Maximum number of items that can be serialized or deserialized in an object graph is '65536'. Change the object graph or increase the MaxItemsInObjectGraph quota."
However, I was speaking to a colleague who said that WCF should not be used to send large amounts of data; instead, the data should be bite-sized.
So what is the general consensus about large amounts of data being returned?
In my experience, using synchronous web service operations to transmit large data sets or files leads to many different problems.
Firstly, you have performance-related issues - serialization time at the service boundary. Then you have availability issues. Incoming requests can time out waiting for a response, or may be rejected because there is no dispatcher thread to service the request.
It is much better to delegate large data transfer and processing to some offline asynchronous process.
For example, in your situation, you send a request and the service returns a URI to the eventual resource you want. You may have to wait for the resource to become available, but you can code your consumer appropriately.
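If it helps, here is a rough sketch of what that shape of contract might look like (the names and types are illustrative, not from your system): the first call only accepts the work and returns a URI for the eventual resource, and the consumer polls until the resource is available.

using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class ReportCriteria
{
    [DataMember] public DateTime From { get; set; }
    [DataMember] public DateTime To { get; set; }
}

[DataContract]
public class ReportResult
{
    [DataMember] public byte[] Content { get; set; }
}

// Hypothetical contract: SubmitReportRequest kicks off the long-running work
// and returns immediately; GetReport is polled until the resource exists.
[ServiceContract]
public interface IReportService
{
    [OperationContract]
    Uri SubmitReportRequest(ReportCriteria criteria);

    [OperationContract]
    ReportResult GetReport(Uri reportUri);   // returns null until the report is ready
}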
I haven't got any concrete examples but this article seems to point to WCF being used for large data sets, and I am aware of people using it for images.
Personally, I have always had to increase this property for any real world data.
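For what it's worth, here is one way to raise the quota in code rather than in config (IMyService and the endpoint name are placeholders; the same quota may also need to be raised on the other side of the wire, since either the service or the client can throw this exception while serializing):

using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    int[] GetLargeGraph();
}

public static class QuotaHelper
{
    // Raises MaxItemsInObjectGraph for every operation on a ChannelFactory-based
    // client. "myEndpoint" refers to a client endpoint defined in app.config.
    public static IMyService CreateClient()
    {
        var factory = new ChannelFactory<IMyService>("myEndpoint");
        foreach (OperationDescription op in factory.Endpoint.Contract.Operations)
        {
            var behavior = op.Behaviors.Find<DataContractSerializerOperationBehavior>();
            if (behavior == null)
            {
                behavior = new DataContractSerializerOperationBehavior(op);
                op.Behaviors.Add(behavior);
            }
            behavior.MaxItemsInObjectGraph = int.MaxValue;
        }
        return factory.CreateChannel();
    }
}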
I use a queue per message type. I have tended to create a Windows service per queue to process those messages. Is this the best use of resources? I suspect not. How do you decide how many processes should service a queue (or queues)?
One thing to consider here is service levels. Does all of the data represented by the message types require identical processing service levels? Are some messages more important than others? Do some messages have latency requirements for delivery? Are some messages critical to the business while others are not? Are the expected volumes of all message types different?
Currently, the way you have things set up means that you can manage each of your message-type channels as a separate concern, which gives you maximum flexibility to support all possible service-level scenarios. However, this comes at the cost of higher resource usage and more moving parts.
I would say that unless resource usage is a concern, your setup is the best possible, as you decouple your data processing channels from one another very effectively this way.
What is the best practice for designing WCF services with respect to having more or fewer operations in a single service?
Taking into consideration that a service must be generic and business-oriented, I have encountered some SOAP services at work that have too many XML elements per operation in their contracts and too many operations in a single service.
From my point of view, without testing, I think the number of operations within a service will not have any impact on performance in the middleware, since a response is built specifically for each operation and contains only the XML elements concerning that operation.
Or are there any issues with having too many operations within a SOAP service?
There is an issue, and that is when trying to do a metadata exchange or a proxy creation against a service with many methods (probably in the thousands). Since it will try to do the entire thing at once, it could time out, or even hit an OutOfMemoryException.
I don't think it will impact performance much, but the important thing is that methods must be logically grouped into different services. A service with a large number of methods usually means it is not logically factored.
I have a WCF service which returns a list of many objects, e.g. 100,000.
I get an error when calling this function because the maximum size I am allowed to pass back from WCF has been exceeded.
Is there a built-in way I could return this in smaller chunks, e.g. 20,000 at a time?
I can increase the size allowed back from WCF, but was wondering what the alternatives were.
Thanks
Without knowing your requirements, I'd take a look at two other possible options:
Paging: If your 100,000 objects are coming from a database, then use paging to reduce the amount of data and invoke the service in batches with a page number. If the objects are not coming from a database, then you'd need to look at how that data will be stored server-side during invocations.
Streaming: Return the data to the caller as a stream instead.
With the streaming option, you'd have to do some more work in terms of managing the serialization of the objects, but it would allow the client to 'pull' the objects from the service at its own pace. Streaming is supported in most, if not all, of the standard bindings (including HTTP).
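A rough sketch of what the two shapes of contract might look like (the contract and type names are just illustrative):

using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class Order
{
    [DataMember] public int Id { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    // Paging: the client asks for one page at a time instead of the whole set.
    [OperationContract]
    List<Order> GetOrders(int pageNumber, int pageSize);

    // Streaming: the service writes the serialized objects into a stream and
    // the client pulls them at its own pace. Requires TransferMode.Streamed
    // (or StreamedResponse) on the binding.
    [OperationContract]
    Stream GetAllOrdersAsStream();
}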
I would like to be able to prioritize the outgoing data/messages from a WCF service.
Here's the basic scenario:
Client requests from server a data stream. The data stream is live, large, and potentially unending (equipment monitoring data). We'll call this HighPriorityDataStream.
Client requests additional data. We'll call this LowPriorityData.
The bandwidth is limited (think dial-up modem or satellite). It is very important that the current HighPriorityDataStream not be interrupted or delayed when a request for LowPriorityData is made.
I have a sockets-based legacy system already where this is accomplished by manually controlling the order that data is placed into the socket buffer. High-priority data is placed in the buffer, and if there's room left over, lower priority data is added to fill the rest of the buffer.
I'm trying to reengineer this process with WCF... I don't know of any out-of-the-box solutions and am thinking I may need to write a custom channel behavior, but I'd like to pick the brains of the community before I go that route :)
I think there is no general out-of-the-box solution. The solution depends on your other requirements. Do you want to control bandwidth per client or for the whole server (all clients)? Do you want to call the low-priority operation from the same proxy, or do you start a new proxy for the new operation? Do you want to run several high-priority operations at the same time? Do you want to prioritize incoming requests?
The easiest solution assumes that you control bandwidth per client, that you reuse the same proxy for all calls, that only one high-priority operation runs at a time, and that requests are processed in FIFO order. Then you just mark your service implementation with [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession, ConcurrencyMode = ConcurrencyMode.Single)] (this should be the default setting for services exposed over net.tcp). This setting will reuse the same service instance for all calls from the same client proxy, but only one call will be processed at a time (the others will wait in a queue until they are processed or time out).
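In code, that would look something like this on your service class (the contract and operation names are placeholders):

using System.ServiceModel;

[ServiceContract(SessionMode = SessionMode.Required)]
public interface IMonitoringService
{
    [OperationContract] byte[] GetHighPriorityData();
    [OperationContract] byte[] GetLowPriorityData();
}

// One service instance per client session, and only one call processed at a
// time; further calls from the same proxy queue up behind the current one.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class MonitoringService : IMonitoringService
{
    public byte[] GetHighPriorityData() { return new byte[0]; }
    public byte[] GetLowPriorityData() { return new byte[0]; }
}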
Best regards,
Ladislav
After a lot of poking around (thanks, Ladislav, for your thoughtful ideas), I've come to the conclusion that I'm asking the communication layer to solve a business-layer problem. To better state the problem: there are multiple connections and one data source. The data source must prioritize which data it gathers from its own sources (live data streams and also persisted databases) and send the data back to the various clients based on their priority. To be clear, the clients have a relative priority based on their role-based identity, the data sources have a priority (prefer live data over persisted data), and individual fields within a data source have a priority order (all else being equal, field X must always be sent before field Y).
This is all firmly business logic, and the solution we adopted was a set of priority queues that automatically sorted the input data items based on these priority requirements and then served each request in that order.
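For anyone curious, here is a minimal sketch of the idea, assuming a composite priority built from client role, data source, and field order (all names are hypothetical, not from our actual system):

using System.Collections.Generic;

// Hypothetical outbound item: a lower value in each field means "send first".
public class OutboundItem
{
    public int ClientPriority { get; set; }   // from the client's role-based identity
    public int SourcePriority { get; set; }   // live data before persisted data
    public int FieldPriority { get; set; }    // field X before field Y
    public object Payload { get; set; }
}

// Orders items by client, then data source, then field. A real implementation
// would also need thread safety and a FIFO tie-breaker for equal priorities.
public class OutboundItemComparer : IComparer<OutboundItem>
{
    public int Compare(OutboundItem x, OutboundItem y)
    {
        int result = x.ClientPriority.CompareTo(y.ClientPriority);
        if (result != 0) return result;
        result = x.SourcePriority.CompareTo(y.SourcePriority);
        if (result != 0) return result;
        return x.FieldPriority.CompareTo(y.FieldPriority);
    }
}

// Usage sketch: collect the pending items, sort, then serve them in that order.
// var pending = new List<OutboundItem>();
// pending.Sort(new OutboundItemComparer());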
I have a web tier that forwards calls onto an application tier. The web tier uses a shared, cached channel to do so. The application tier services in question are stateless and have concurrency enabled.
But they are not being called concurrently.
If I alter the web tier to create a new channel on every call, then I do get concurrent calls onto the application tier. But I want to avoid that cost since it is functionally unnecessary for my scenario. I have no session state, nor do I need to re-authenticate the caller each time. I understand that the creation of the channel factory is far more expensive than the creation of the channels, but it is still a cost I'd like to avoid if possible.
I found this article on MSDN that states:
"While channels and clients created by the channels are thread-safe, they might not support writing more than one message to the wire concurrently. If you are sending large messages, particularly if streaming, the send operation might block waiting for another send to complete."
Firstly, I'm not sending large messages (just a lot of small ones since I'm doing load testing) but am still seeing the blocking behavior. Secondly, this is rather open-ended and unhelpful documentation. It says they "might not" support writing more than one message but doesn't explain the scenarios under which they would support concurrent messages.
Can anyone shed some light on this?
Addendum: I am also considering creating a pool of channels that the web server uses to fulfill requests. But again, I see no reason why my existing approach should block and I'd rather avoid the complexity if possible.
After much ado, this all came down to the fact that I wasn't calling Open explicitly on the channel before using it. Apparently an implicit Open can preclude concurrency in some scenarios.
You can cache the WCF proxy, but still create a channel for each service call - this will ensure concurrency, is not very expensive in comparison to creating the proxy from scratch, and re-authentication for each call will not be necessary. This is explained on Wenlong Dong's blog - "Performance Improvement for WCF Client Proxy Creation in .NET 3.5 and Best Practices" (a much better source of WCF information and guidance than MSDN).
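Putting the two suggestions above together, here is a sketch of the pattern: cache the expensive ChannelFactory once, then create, explicitly open, and close a cheap channel per call (the contract and endpoint names are placeholders):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IAppService
{
    [OperationContract]
    int Ping();
}

public static class AppTierClient
{
    // The expensive ChannelFactory is created once and cached; a cheap channel
    // is created, explicitly opened, and closed for each call.
    private static readonly ChannelFactory<IAppService> Factory =
        new ChannelFactory<IAppService>("appTierEndpoint");

    public static TResult Call<TResult>(Func<IAppService, TResult> operation)
    {
        IAppService channel = Factory.CreateChannel();
        var clientChannel = (IClientChannel)channel;
        try
        {
            clientChannel.Open();   // explicit Open avoids the implicit-open serialization issue
            TResult result = operation(channel);
            clientChannel.Close();
            return result;
        }
        catch
        {
            clientChannel.Abort();  // never Close a faulted channel
            throw;
        }
    }
}

// Usage: int reply = AppTierClient.Call(svc => svc.Ping());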
Just for completeness: Here is a blog entry explaining the observed behavior of request serialization when not opening the channel explicitly:
http://blogs.msdn.com/b/wenlong/archive/2007/10/26/best-practice-always-open-wcf-client-proxy-explicitly-when-it-is-shared.aspx