WCF RIA Getting Large Data Faster - silverlight-4.0

I have a Silverlight (4.0) client calling a WCF RIA Services method which returns a large set of data. The method returns a List&lt;CustomObject&gt; where CustomObject has around 20 fields.
What I noticed is that it's extremely slow when the number of items in that list is 20,000.
If I put a breakpoint on the return statement on the server and another breakpoint on the client side, it takes at least 40 seconds to hit the client-side breakpoint once the server returns the list. I am wondering why it takes so long to bring the data from the server to the client.
Is this normal with WCF RIA Services? Is there any way to improve the efficiency?
Thanks !

Well, 20,000 records are... 20,000 records. A lengthy download is not uncommon in a scenario like this. You might do two things:
Page the records.
Ask WCF to compress the data. I'm not sure whether this is possible, as Silverlight does not use the full WCF functionality.
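The paging suggestion can be sketched as a RIA Services query method. Returning IQueryable&lt;T&gt; lets the framework compose the client's Skip/Take into the query so only the requested page crosses the wire; the entity and context names below are hypothetical.

```csharp
// Sketch of server-side paging with RIA Services; names are illustrative.
public class CustomObjectService : LinqToEntitiesDomainService<MyEntities>
{
    public IQueryable<CustomObject> GetCustomObjects()
    {
        // Ordered, composable query; no ToList() here, or paging would
        // only happen after materializing all 20,000 rows in memory.
        return this.ObjectContext.CustomObjects.OrderBy(c => c.Id);
    }
}
```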

Related

OutOfMemory exception when large number of records is sent from WCF to Silverlight

I have a Silverlight application in which I call my WCF service to get data from the database. If there is a small number of records it works fine, but if there are many records it throws a System.OutOfMemoryException.
I have traced it in a WCF error log file. Are there any ways to compress the data coming from WCF to the Silverlight application?
You can use IIS dynamic compression for WCF messages.
See these threads/articles:
Enabling dynamic compression
GZip compression with WCF hosted on IIS7
In your service web.config, add this element to both the service behavior and the endpoint behavior. It can then transfer data up to 2 GB.
<dataContractSerializer maxItemsInObjectGraph="2147483647"/>
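A sketch of where that element usually sits in web.config (behavior names and structure are illustrative; the same quota typically has to be raised in the client's configuration as well):

```xml
<!-- Sketch only: raising the object-graph quota in web.config. -->
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <dataContractSerializer maxItemsInObjectGraph="2147483647" />
      </behavior>
    </serviceBehaviors>
    <endpointBehaviors>
      <behavior>
        <dataContractSerializer maxItemsInObjectGraph="2147483647" />
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>
```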
Transferring 500,000 (half a million) records in one go is too large for your system to handle. I'd also say that it was too many for your users to handle.
You should break this down into pages of data and only return a couple of pages at a time. The Silverlight/WCF (RIA Services) DomainDataSource can handle all this for you:
<riaControls:DomainDataSource QueryName="GetResults"
                              LoadSize="200"
                              PageSize="100"
                              AutoLoad="True" />
You add a pager control to your page to move through the pages of data under user control.
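The pager control mentioned above might look like the following sketch, assuming the DomainDataSource is given an `x:Name` of `resultsSource` and the SDK namespace is mapped:

```xml
<!-- Sketch: a DataPager bound to the DomainDataSource by element name. -->
<riaControls:DomainDataSource x:Name="resultsSource"
                              QueryName="GetResults"
                              LoadSize="200"
                              PageSize="100"
                              AutoLoad="True" />
<sdk:DataPager Source="{Binding Data, ElementName=resultsSource}"
               PageSize="100" />
```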
This makes your application more responsive as you are only returning a small amount of data each time. Returning 500,000 records in one go will also more than likely cause timeouts for people with slow connections.
I'd also suggest you look into filtering your data so that you only return the data the user is interested in.

ASP.NET MVC site, shared WCF client object, causing a single-threaded bottleneck?

I'm trying to nail down a performance issue under load in an application which I didn't build, but have become very familiar with the workings of.
The architecture is: mobile apps call an ASP.NET MVC 3 website to get data to display. The ASP.NET site calls a third-party SOAP API using WCF clients (basicHttpBinding), caching results as much as it can to minimize load on that third party.
The load from the mobile apps is in the order of 200+ requests per second at peak times, which translates to something in the order of 20 SOAP requests per second to the third-party, after caching.
Normally it runs fine but we get periods of cascading slowness where every request to the API starts taking 5 seconds.. then 10.. 15.. 20.. 25.. 30.. at which point they time out (we set the WCF client timeout to 30 seconds). Clearly there is a bottleneck somewhere which is causing an increasingly long queue until requests can't be serviced inside 30 seconds.
Now, the third-party API is out of my control but they swear that it should not be having any issues whatsoever with 20 requests per second. So I've been looking into the possibility of a bottleneck at my end.
I've read questions on StackOverflow about ServicePointManager.DefaultConnectionLimit and connectionManagement, but digging through the source, I think the problem is somewhat more fundamental. It seems that our WCF client object (which is a standard System.ServiceModel.ClientBase<T> auto-generated by "Add Service Reference") is being stored in the cache, and thus when multiple requests come in to the ASP.NET site simultaneously, they will share a single Client object.
From a quick experiment with a couple of console apps and spawning multiple threads to call a deliberately slow WCF service with a shared Client object, it seems to me that only one call will occur at a time when multiple threads use a single ClientBase. This would explain a bottleneck when e.g. 20 calls need to be made per second and each one takes more than 50ms to complete.
Can anyone confirm that this is indeed the case?
And if so, if I switched to every request creating its own WCF client object, would I just need to alter ServicePointManager.DefaultConnectionLimit to something greater than the default (which I believe is 2?) before creating the client objects, in order to increase my maximum number of simultaneous connections?
(sorry for the verbose question, I figured too much information was better than too little)
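The per-request approach described in the question can be sketched as follows. `ThirdPartyApiClient`, `GetData`, and `request` stand in for the generated proxy and its operation; this is an assumption-laden sketch, not a confirmed fix:

```csharp
// Raise the outbound connection limit once at startup
// (e.g. in Application_Start), before any clients are created.
ServicePointManager.DefaultConnectionLimit = 100;

// Then, per incoming request, create a short-lived client instead of
// sharing one cached ClientBase<T>:
var client = new ThirdPartyApiClient();
try
{
    var result = client.GetData(request);
    client.Close();
}
catch (CommunicationException)
{
    client.Abort();   // a faulted channel must be Abort()ed, not Close()d
    throw;
}
```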

Polling Pattern for Silverlight 4 WCF Ria Services

I am creating an application in Silverlight using RIA Services in which a service call can take quite a bit of time once initiated. I've looked for ways to increase the WCF service timeout, but the more I think it through, this is not the right approach.
What I would rather do is call the DomainContext and return right away, then have the client poll the server to find out when the long running query is complete.
I'm looking for a pattern or example of a good way to implement something like this. One potential issue that keeps coming to mind is that web services should not keep state between service calls, but this is exactly what I would be doing.
Any thoughts?
Thanks,
-Scott
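A minimal sketch of the poll-for-completion shape described above: start the long query, get a job id back immediately, then poll until it finishes. `JobStore` and every name here are hypothetical; persisting job state server-side (e.g. in a database table) is what keeps the service itself stateless despite the multi-call workflow:

```csharp
// Hypothetical job-id polling pattern for a RIA DomainService.
[EnableClientAccess]
public class ReportService : DomainService
{
    [Invoke]
    public Guid StartLongQuery(string criteria)
    {
        return JobStore.Enqueue(criteria);   // returns right away with a job id
    }

    [Invoke]
    public string GetJobStatus(Guid jobId)
    {
        return JobStore.GetStatus(jobId);    // e.g. "Pending" / "Running" / "Complete"
    }

    public IQueryable<ResultRow> GetJobResults(Guid jobId)
    {
        return JobStore.GetResults(jobId);   // fetched once status is "Complete"
    }
}
```

On the client, a DispatcherTimer could call GetJobStatus every few seconds and load the results once the status comes back complete.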
Take a look at the WCF Duplex Service. It should solve your problem.
Can you make the service call take less time? If not, why not?
Typically when I've seen queries take this long, it means either the SQL running on the back end isn't efficient enough, the SQL Server has poor indexes, or the client is requesting far more data than it will actually be able to use in a short period of time.
For example, instead of requesting 500 entities right away and showing a large list/DataGrid/whatever, why not request 10-50 at a time and have a paging UI that only requests the next batch when the user needs it?
Take a look at SignalR; it can run side by side with RIA and lets you push messages back to the client from the server.

Is this possible in WCF?

I have a WCF service which returns a list of many objects, e.g. 100,000.
I get an error when calling this function because the maximum size I am allowed to pass back from WCF has been exceeded.
Is there a built-in way I could return this in smaller chunks, e.g. 20,000 at a time?
I can increase the size allowed back from WCF, but was wondering what the alternatives were.
Thanks
Without knowing your requirements, I'd take a look at two other possible options:
Paging: If your 100,000 objects are coming from a database, then use paging to reduce the amount of data and invoke the service in batches with a page number. If the objects are not coming from a database, then you'd need to look at how that data will be stored server-side during invocations.
Streaming: Return the data to the caller as a stream instead.
With the streaming option, you'd have to do some more work in terms of managing the serialization of the objects, but it would allow the client to 'pull' the objects from the service at its own pace. Streaming is supported in most, if not all, the standard bindings (including HTTP).
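The paging option can be sketched as an explicit chunked contract, as an alternative to raising the quotas. All names below are illustrative:

```csharp
// Sketch of a paged operation contract.
[ServiceContract]
public interface IBigListService
{
    [OperationContract]
    PagedResult GetObjects(int pageNumber, int pageSize);
}

[DataContract]
public class PagedResult
{
    [DataMember] public List<CustomObject> Items { get; set; }
    [DataMember] public int TotalCount { get; set; }  // tells the caller when to stop
}
```

The client then loops, requesting page 0, 1, 2, ... of 20,000 items each, and each response stays comfortably under the message-size limit.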

WCF Thread was being aborted error

I have a WCF service that works fine in IIS 7; however, once deployed to Windows Server 2003 / IIS 6, I'm now getting a "The thread was being aborted" error message. This happens after a few minutes of the service running.
I've tried manually changing some timeout values and turned off IIS keep alives.
Any ideas on how to fix this problem would be welcomed.
Thanks
If you're having this problem - please read! Hopefully you'll save yourself A LOT of trouble knowing this. Get coffee first!
You might come from a traditional programming background, in fields unrelated to SOA, and now you're writing SOA services with the mindset of a "traditional programmer". Here are the 4 most important lessons I've learnt since building SOA services.
Rule number 1
Try your very best not to write services that take an extended amount of time to complete. I know this can be VERY tricky to accomplish, but it is much more reliable to have smaller operations called many times than one long service call performing all the work and then returning a response.
For example, I recently wrote a service which processed ALL tasks. Each task was stored as an XML file in the IIS site, and each task would export data to a system, for example SharePoint. At any given time during high volumes there could be up to 30,000 tasks waiting to be processed. Over the past two months I have yet to get it 100% reliable, and this is after diving deep into timeout settings in IIS, app pools and WCF bindings. Every now and again I would get "The thread was being aborted" with no reason or explanation as to why it was happening. I exhausted all the online knowledge bases; no one seemed the wiser. Eventually, after not being able to fix the issues or even reproduce them in a reliable way, I opted for a complete rewrite. I changed my code to process just one task at a time instead of ALL tasks.
This essentially meant calling one web service 30,000 times rather than calling it once, but performance-wise it is around the same. Each call returns a response quickly and does a lot less work. This has another benefit: I can provide instant feedback on each operation to the client. With the long call, you get a response back right at the end, ALL at once.
You can also much more easily catch and retry a service call if it does fail, because you don't have to redo the whole batch again; you simply retry the operation that failed.
It's easier to test too, not only because of the live feedback, but also because you can test one inner operation without the overhead of the loop if you want to.
Lastly, it scales better if you plan on extending your application later, because you've broken things down into more manageable units of work. For example: before, you had one service which processed ALL tasks; now you have a web service that can process ONE task, so you can more easily extend the functionality if you need to process 10 tasks, or tasks by selection.
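The "one task per call" shape described above might be driven from the client like this. The proxy, `ProcessTask`, and `ReportProgress` are hypothetical names, and the retry count is arbitrary:

```csharp
// Sketch: a loop of small calls with a per-task retry, instead of one
// long-running ProcessAllTasks() operation.
foreach (var taskId in taskIds)
{
    for (int attempt = 1; attempt <= 3; attempt++)
    {
        try
        {
            var result = client.ProcessTask(taskId);  // short, fast call
            ReportProgress(taskId, result);           // instant per-task feedback
            break;
        }
        catch (TimeoutException)
        {
            // Only this one task is retried, not the whole batch.
            if (attempt == 3) throw;
        }
    }
}
```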
Rule Number 2
Don't upgrade your existing ASMX web services to WCF 3 just because you think it's a better technology. WCF 3 is over-architected and not a real pleasure to work with or deploy. If you need to go WCF, try your best to hold out for the version that ships with .NET 4; it seems to have been revamped. Another thing you will miss is that WCF has no test forms, so you can't just fire up a web browser to quickly test your services. If you're like me ("keep it simple, stupid"), then WCF 3.5 will frustrate you.
Rule Number 3
IIS 6 can be dodgy. If at all possible, avoid hosting your services in IIS 6 if you're after reliable services. I am not saying it's impossible to achieve reliability in IIS 6, but it requires a LOT of work and a great deal of testing. If your services are critical, try to avoid using a product developed in 2001.
Rule Number 4
Don't underestimate the development and testing required to create reliable SOA services. To be honest, all I can say is that it is a massive undertaking.
I thought I'd mention that this error is also thrown by SharePoint when calling some functions from a user account. Those functions need to be run with SPSecurity.RunWithElevatedPrivileges.
This answer shows up when searching for "wcf sharepoint Thread was being aborted", so hopefully this is useful to someone, since "thread being aborted" isn't a very helpful error for SharePoint to throw when it's a permissions issue.