Polling Pattern for Silverlight 4 WCF RIA Services

I am creating a Silverlight application using RIA Services in which a service call, once initiated, can take quite a bit of time. I've looked for ways to increase the WCF service timeout, but the more I think it through, the less that seems like the right approach.
What I would rather do is call the DomainContext and return right away, then have the client poll the server to find out when the long running query is complete.
I'm looking for a pattern or example of a good way to implement something like this. One potential issue that keeps coming to mind is that web services should not keep state between service calls, but that is exactly what I would be doing.
Any thoughts?
Thanks,
-Scott

Take a look at the WCF Duplex Service. It should solve your problem.
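For what it's worth, a duplex contract pair generally looks something like the sketch below; every name here (IReportService, IReportCallback, BeginLongRunningQuery) is a placeholder rather than anything from the question, and in Silverlight you would typically pair a contract like this with the polling duplex binding from the Silverlight SDK.

    using System.ServiceModel;

    // The server-side contract the client calls to kick off the work.
    [ServiceContract(CallbackContract = typeof(IReportCallback))]
    public interface IReportService
    {
        // One-way, so the call returns immediately; the real work runs on the server.
        [OperationContract(IsOneWay = true)]
        void BeginLongRunningQuery(string queryId);
    }

    // The callback contract the server uses to notify the client when it's done.
    public interface IReportCallback
    {
        [OperationContract(IsOneWay = true)]
        void QueryCompleted(string queryId, string resultSummary);
    }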

Can you make the service call take less time? If not, why not?
Typically when I've seen queries take this long, it means either that the SQL being run on the back end isn't efficient, that the SQL Server has poor indexes, or that the client is requesting far more data than it will actually be able to use in a short period of time.
For example, instead of requesting 500 entities right away and showing a large list/DataGrid/whatever, why not request 10-50 at a time and have a paging UI that only requests the next batch when the user needs it?
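A rough client-side sketch of that paging idea, assuming a RIA Services domain service whose query method returns IQueryable so that Skip/Take compose on the server; the context and query names below are made up.

    // Load one page at a time instead of everything up front.
    var context = new CustomerDomainContext();   // placeholder DomainContext
    const int pageSize = 50;
    int pageIndex = 0;                           // bump this when the user pages forward

    var query = context.GetCustomersQuery()      // placeholder query method
                       .OrderBy(c => c.Name)     // paging needs a stable order
                       .Skip(pageIndex * pageSize)
                       .Take(pageSize);

    context.Load(query, loadOperation =>
    {
        // Bind loadOperation.Entities (just this page) to the list/DataGrid.
    }, null);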

Take a look at SignalR; it can run side by side with RIA Services and lets you push messages from the server back to the client.
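Very roughly, the server side of that push could look like the sketch below. The hub and method names are invented, and this assumes the Microsoft.AspNet.SignalR package plus a SignalR client that works with your Silverlight version.

    using Microsoft.AspNet.SignalR;

    // An empty hub is enough if clients only connect to receive pushed messages.
    public class ProgressHub : Hub { }

    public static class JobNotifier
    {
        // Call this from the long-running server-side job when it finishes.
        public static void NotifyDone(string jobId)
        {
            var hub = GlobalHost.ConnectionManager.GetHubContext<ProgressHub>();
            hub.Clients.All.jobCompleted(jobId);   // resolved dynamically on each client
        }
    }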

Related

How to keep WCF Service Alive?

I have a situation where I have two programs (one exe, and one dll loaded into the process space of another, third-party exe) communicating requests with each other over a local-machine WCF service (using the net named pipe binding). A third host exe starts hosting the service. It all works great (so far anyway... I'm still learning), but I got to thinking about what would happen if the channel faults or the service times out. What would be the best practice for checking and handling faults, as well as keeping the channel alive?
In my case it will be up to the user to keep the applications open or close them and we do have those users who tend to keep them open overnight, over the weekend, etc... It seems to me this could open the possibility of a fault or loss of service and I don't have a clue how to recover. Any help would be greatly appreciated.
Firstly, why would you keep the channel alive indefinitely?
Imagine you are connecting to a database from which you want to read over the course of one day. Would you create the database connection in the morning and then close it in the evening?
It is relatively cheap to construct a channel in WCF for each call, unless you know you are going to be making multiple calls within a few seconds of each other, in which case you should reuse the channel.
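As a sketch of what "cheap channel per call" can look like (the contract and endpoint configuration name are placeholders): keep the ChannelFactory around, since building it is the expensive part, and create and close a channel per operation.

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IMyService
    {
        [OperationContract]
        void DoWork(string request);
    }

    public static class MyServiceCaller
    {
        // Reuse the factory; constructing it repeatedly is what actually costs you.
        private static readonly ChannelFactory<IMyService> Factory =
            new ChannelFactory<IMyService>("MyNetNamedPipeEndpoint");

        public static void Call(Action<IMyService> action)
        {
            IMyService channel = Factory.CreateChannel();
            try
            {
                action(channel);
                ((IClientChannel)channel).Close();
            }
            catch
            {
                // A faulted channel can't be closed cleanly; abort it and let the caller retry.
                ((IClientChannel)channel).Abort();
                throw;
            }
        }
    }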
EDIT
This post explains how to do it. It's pretty complicated and it may be easier to just set a huge timeout value for the binding in code (as suggested at the end of the post):
Do WCF Callbacks TimeOut
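Setting a huge timeout in code, as the post suggests, might look roughly like this; the values here are arbitrary examples, not recommendations.

    using System;
    using System.ServiceModel;

    public static class BindingSetup
    {
        public static NetNamedPipeBinding CreateLongTimeoutBinding()
        {
            return new NetNamedPipeBinding
            {
                ReceiveTimeout = TimeSpan.FromDays(7),    // idle time before the channel is dropped
                SendTimeout    = TimeSpan.FromMinutes(5),
                OpenTimeout    = TimeSpan.FromMinutes(1),
                CloseTimeout   = TimeSpan.FromMinutes(1)
            };
        }
    }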
EDIT
There's tons of stuff on Google about this: http://bit.ly/10ZPWE2

WCF RIA Getting Large Data Faster

I have a Silverlight (4.0) client calling a WCF RIA Services method which returns a large set of data. The method returns a List<CustomObject>, where CustomObject has around 20 fields.
What I noticed is that it's extremely slow when the number of items in that list is 20,000.
If I put a break point on the return statement on the server and another break point on the client side, I can see that it takes at least 40 seconds from the server returning the list to hitting the break point on the client. I am wondering why it takes so much time to bring the data from the server to the client.
Is this normal with WCF RIA Services? Is there any way to improve the efficiency?
Thanks !
Well, 20,000 records are... 20,000 records. A lengthy download is not an uncommon issue in a scenario like this. You might do two things:
Page the records (see the server-side sketch below).
Ask WCF to compress the data. I'm not really sure this is possible, as Silverlight does not use the full WCF functionality.
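On the "page the records" point, a server-side sketch might look like the following. The entity and service names are invented; the idea is that returning IQueryable (with a stable ordering) lets the client append Skip/Take so only one page of the 20,000 rows is materialized and sent over the wire.

    using System.ComponentModel.DataAnnotations;
    using System.Linq;
    using System.ServiceModel.DomainServices.Hosting;
    using System.ServiceModel.DomainServices.Server;

    // Stand-in for the roughly 20-field entity from the question.
    public class CustomObject
    {
        [Key]
        public int Id { get; set; }
        public string Name { get; set; }
    }

    [EnableClientAccess]
    public class CustomObjectDomainService : DomainService
    {
        // Returning IQueryable means the client's Skip/Take compose on the server.
        public IQueryable<CustomObject> GetCustomObjects()
        {
            return LoadCustomObjects().OrderBy(c => c.Id);
        }

        private IQueryable<CustomObject> LoadCustomObjects()
        {
            // ...replace with your real data access (Entity Framework, LINQ to SQL, etc.)...
            return Enumerable.Empty<CustomObject>().AsQueryable();
        }
    }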

Two WCF servers vs. a WCF server with callbacks

I have got two applications that need to communicate via WCF:
Call them A and B.
A is supposed to push values to B for storage/update.
B is supposed to push the list of values stored in it back to A.
The senior programmer on my team wants to open a WCF server in A and another WCF server in B.
I claim that one should be the server and the other should be the client, using server callbacks, in order to avoid splitting the interface in two, circular dependency, and duplicated code and settings. He doesn't understand it. Can anyone help me explain to him why his solution is bad design?
It depends on your criteria. Let's assume a client/server model where A is the client and B is the server. You state that B should "push" data to A.
If you truly need push, then you should make B into a duplex server. This does put some strain on your bandwidth, so if you have a bandwidth restriction, this might not be the right choice.
If you can tolerate some delay at A, then you might want to opt for a polling mechanism of your own, maybe based on timing or some other logic (sketched after this answer).
If neither is an option, you can try to swap the roles, making B the client and A the server. It's less intuitive, but it might fit your scenario. If you can tolerate a delay in storing data, make B poll A for changes in the data and save at an interval.
If no delay is acceptable on either side and bandwidth is limited, you do end up with two WCF services. Although that may look silly at first glance, keep in mind they are services, not servers. It does make things a bit more complex, so I would keep it as a last resort.
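If you go with the hand-rolled, timing-based polling option mentioned above, a minimal sketch could look like this; the contract name, method, and interval handling are all made up.

    using System;
    using System.Collections.Generic;
    using System.ServiceModel;
    using System.Threading;

    [ServiceContract]
    public interface IStorageService
    {
        [OperationContract]
        IList<string> GetValuesChangedSince(DateTime lastCheckUtc);
    }

    public sealed class ChangePoller : IDisposable
    {
        private readonly IStorageService _service;
        private readonly Timer _timer;
        private DateTime _lastCheckUtc = DateTime.UtcNow;

        public ChangePoller(IStorageService service, TimeSpan interval)
        {
            _service = service;
            // Ask the other side for changes on a fixed interval instead of having it push.
            _timer = new Timer(_ => Poll(), null, interval, interval);
        }

        private void Poll()
        {
            DateTime since = _lastCheckUtc;
            _lastCheckUtc = DateTime.UtcNow;
            foreach (string value in _service.GetValuesChangedSince(since))
            {
                // ...handle the updated value...
            }
        }

        public void Dispose()
        {
            _timer.Dispose();
        }
    }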
A service should encapsulate a set of functionality that other applications can consume. All it does is wait for and respond to requests from other components; it doesn't initiate actions by itself.
If Application B is storing data, then it can of course be provided to Application A as a service. It provides the "service" of storing data without application A having to worry about how or where, and returns successfully stored data. This is exactly the kind of thing that WCF Services are meant to handle.
I am assuming that application A is the one initiating the requests (unless you have an unmentioned 3rd application, one of them must be the initiator). If Application A is initiating actions (for example, it has a UI, or is triggered to do some batch processing etc.) then it should not be modeled as a "service".
I hope that helps :)

WCF "Thread was being aborted" error

I have a WCF service that works fine in IIS 7, but once deployed to Windows Server 2003 and IIS 6, I now get a "The thread was being aborted" error message. This happens after the service has been running for a few minutes.
I've tried manually changing some timeout values and turned off IIS keep alives.
Any ideas on how to fix this problem would be welcomed.
Thanks
If you're having this problem - please read! Hopefully you'll save yourself A LOT of trouble knowing this. Get coffee first!
You might come from a traditional programming background, in fields not related to SOA, and now you're writing SOA services with the mindset of a "traditional programmer". Here are 4 of the most important lessons I've learnt building SOA services.
Rule Number 1
Try your very best not to write services that take an extended amount of time to complete. I know this can be VERY tricky to accomplish, but it is much more reliable to have smaller operations called many times than one long service call performing all the work and then returning a response. For example, I recently wrote a service which processed ALL tasks. Each task was stored as an XML file in the IIS site, and each task would export data to a system, for example SharePoint. At any given time during high volumes there could be up to 30,000 tasks waiting to be processed. Over the past 2 months I have yet to get it 100% reliable, and this is after diving deep into timeout settings in IIS, app pools and WCF bindings. Every now and again I would get "The thread was being aborted" with no reason or explanation as to why it was happening. I exhausted all the online knowledge bases; no one seemed the wiser. Eventually, after not being able to fix the issues or even reproduce them reliably, I opted for a complete rewrite. I changed my code to process just 1 task at a time instead of ALL tasks.
This essentially meant calling 1 web service 30,000 times rather than calling it once, but performance-wise it is around the same. Each call returns a response quickly and does a lot less work. This has another benefit: I can provide instant feedback on each operation to the client. With the long call, you get a response back right at the end, and ALL at once.
You can also much more easily catch and retry a service call if it does fail, because you don't have to redo the whole call for every operation again, but simply the operation that failed.
It's easier to test too, not only because of the live feedback, but also because you can test one inner operation without the overhead of the loop if you want to.
Lastly, it scales better if you plan on extending your application later, because you've broken things down into more manageable units of work. So for example: before, you had 1 service which processed ALL tasks; now you have a web service that can process 1 TASK. Because of this you can more easily extend the functionality if you need to process 10 tasks, or tasks by selection.
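To make that shape concrete, here is a rough sketch of the "one small call per task" approach with a simple retry around each call; the contract, method, and retry count are placeholders rather than the author's actual code.

    using System;
    using System.Collections.Generic;
    using System.ServiceModel;

    [ServiceContract]
    public interface ITaskService
    {
        // One small, fast operation per task instead of one huge call for everything.
        [OperationContract]
        bool ProcessTask(string taskId);
    }

    public static class TaskRunner
    {
        public static void ProcessAll(ITaskService service, IEnumerable<string> taskIds)
        {
            foreach (string taskId in taskIds)
            {
                for (int attempt = 1; attempt <= 3; attempt++)
                {
                    try
                    {
                        bool ok = service.ProcessTask(taskId);
                        Console.WriteLine("Task {0}: {1}", taskId, ok ? "done" : "skipped");
                        break;   // instant feedback per task
                    }
                    catch (TimeoutException)
                    {
                        if (attempt == 3) throw;
                        // Otherwise retry just this task; the rest of the batch is unaffected.
                    }
                }
            }
        }
    }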
Rule Number 2
Don't upgrade your existing ASMX web services to WCF 3 just because you think it's a better technology. WCF 3 is over-architected and not a real pleasure to work with or deploy. If you need to go WCF, try your best to hold out for the version that ships with .NET 4 of the framework; it seems to have been revamped. Another thing you will miss is that WCF has no test forms, so you can't just fire up a web browser quickly to test your services. If you're like me ("keep it simple, stupid"), then WCF 3.5 will frustrate you.
Rule Number 3
IIS 6 can be dodgy. If at all possible, avoid having to host your services in IIS 6 if you're after reliable services. I am not saying it's impossible to achieve reliability in IIS 6, but it requires a LOT of work and a great deal of testing. If you're dealing with services that are critical, try to avoid using a product developed in 2001.
Rule Number 4
Don't underestimate the development and testing required to create reliable SOA services. To be honest all I can say is it is a massive undertaking.
I thought I'd mention that this error is also thrown by SharePoint when certain functions are called from a regular user account. Those functions need to be run with SPSecurity.RunWithElevatedPrivileges.
This answer shows up when searching for "wcf sharepoint Thread was being aborted", so hopefully this can be useful to someone, since "thread was being aborted" isn't a very helpful error for SharePoint to throw when it's really a permissions issue.
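For anyone landing here from that search, the SharePoint pattern referred to above looks roughly like this; the site URL is a placeholder.

    using Microsoft.SharePoint;

    public static class ElevatedExample
    {
        public static void DoPrivilegedWork()
        {
            SPSecurity.RunWithElevatedPrivileges(delegate()
            {
                // SPSite/SPWeb objects must be created inside the elevated block;
                // objects created outside keep the original user's security context.
                using (SPSite site = new SPSite("http://sharepoint-site-url"))
                using (SPWeb web = site.OpenWeb())
                {
                    // ...do the work that was failing under the user account...
                }
            });
        }
    }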

Concurrent access to WCF client proxy

I'm currently playing around a little with WCF, and while doing so I've run into a question where I'm not sure if I'm on the right track.
Let's assume a simple setup that looks like this: client -> service1 -> service2.
The communication is TCP-based.
Where I'm not sure is whether it makes sense for service1 to cache the client proxy for service2. That way I might get multi-threaded access to that proxy, and I would have to deal with it.
I'd like to take advantage of the TCP session to get better performance, but I'm not sure whether this "architecture" is supported by WCF/the network/whatever at all. The problem I see is that all the communication goes over the same channel if I'm not using locks or some other synchronization.
I guess the better idea is to cache the proxy in a thread-static variable.
But before I do that, I wanted to confirm that it's really not a good idea to have only one proxy instance.
tia
Martin
If you don't know that you have a performance problem, then why worry about caching? You're opening yourself to the risk of improperly implementing multithreading code, and without any clear, measurable benefit.
Have you measured performance yet, or profiled the application to see where it's spending its time? If not, then when you do, you may well find that the overhead of multiple TCP sessions is not where your performance problems lie. You may wish you had the time to optimize some other part of your application, but you will have spent that time optimizing something that didn't need to be optimized.
I am already using such a structure. I have one service that collaborates with some other services to realise the implementation. Of course, in my case the client calls a one-way method on the first service. I am getting very good benefit from it. I have also configured it to limit the number of concurrent calls in some cases.
Yes, that architecture is supported by WCF. I deal with applications every day that use similar structures, using NetTCPBinding.
The biggest thing to worry about is the ConcurrencyMode of the various services involved, and making sure that they do not block unnecessarily. It is very easy to get into a scenario where you are guaranteed timeouts, or at least poor performance, due to multiple synchronous calls across service boundaries. Even OneWay calls are not guaranteed to return immediately.
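As an illustration of the ConcurrencyMode point (the service and operation names are invented, and the settings shown are one option, not a recommendation):

    using System.ServiceModel;

    [ServiceContract]
    public interface IService1
    {
        [OperationContract]
        string GetData(int value);
    }

    // With a single instance and ConcurrencyMode.Multiple, WCF dispatches calls on
    // many threads at once, so a slow outbound call to service2 doesn't queue every
    // other caller behind it, but everything in here must then be thread-safe.
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                     ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class Service1 : IService1
    {
        public string GetData(int value)
        {
            // ...the (possibly slow) call out to service2 would go here...
            return value.ToString();
        }
    }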
Be careful with [ThreadStatic]: .NET can switch threads on you, so the variable can come back null.
For sessions... perhaps you could use session-enabled calls:
http://msdn.microsoft.com/en-us/library/ms733040.aspx
But I would not recommend this if you do not have a performance issue. I would use the normal approach, or, if service1 is just forwarding, you could get that functionality easily with 4.0:
http://www.sdn.nl/SDN/Artikelen/tabid/58/view/View/ArticleID/2979/Whats-New-in-WCF-40.aspx
Regards
Firstly, make sure you know about the behaviour of ThreadStatic in ASP.NET applications:
http://piers7.blogspot.com/2005/11/threadstatic-callcontext-and_02.html
The same thread that starts your request may not be the same thread that finishes it. Basically, the only safe place for per-request (thread-local-style) storage in ASP.NET applications is HttpContext. The next obvious approach would be to create a wrapper client that manages your WCF client proxy and makes each I/O request thread-safe using locks.
My personal preference, though, would be to use a pool of proxy clients: whenever you need one, pop it off the pool, and when you're finished with it, put it back.
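A bare-bones version of that pool idea could look like the sketch below; the pooling policy is deliberately naive, and the names and error handling are placeholders.

    using System;
    using System.Collections.Concurrent;
    using System.ServiceModel;

    public class ProxyPool<TChannel> where TChannel : class
    {
        private readonly ChannelFactory<TChannel> _factory;
        private readonly ConcurrentBag<TChannel> _pool = new ConcurrentBag<TChannel>();

        public ProxyPool(ChannelFactory<TChannel> factory)
        {
            _factory = factory;
        }

        public void Use(Action<TChannel> action)
        {
            TChannel proxy;
            if (!_pool.TryTake(out proxy))
            {
                proxy = _factory.CreateChannel();   // pool empty: create a fresh proxy
            }

            try
            {
                action(proxy);
                _pool.Add(proxy);                   // healthy proxy goes back into the pool
            }
            catch
            {
                ((IClientChannel)proxy).Abort();    // never return a faulted proxy to the pool
                throw;
            }
        }
    }

The nice property of this shape is that each proxy is only ever used by one thread while it is checked out, which sidesteps the locking question entirely.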