Strategy for busy WCF service

I've got a really busy self-hosted WCF server that requires 2000+ clients to update their status on a frequent basis. What I'm finding is that the CPU utilization of the server is sitting at around 70% constantly, and the clients have a 50% chance of actually getting a connection to the server. They will time out after 60 seconds. This is problematic because if the server doesn't hear back from a client, it'll assume the client is offline.
I've implemented throttling so I can adjust concurrent connections/sessions/etc., but if I'm not mistaken, increasing this will only lead to higher CPU utilization and worse connectivity problems. Right?
Will increasing the timeout to something more than 60 seconds help? I'm not exactly sure how it works, but will a client sit in a type of queue until the server can field the request? Or is it best to set the timeout to something smaller and make the client check in more often if it can't get connected (this seems like it could only make the problem worse in a sense)?
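For reference, the throttling mentioned above maps to ServiceThrottlingBehavior when it is configured in code rather than in config; a minimal sketch, with a placeholder service type and purely illustrative numbers:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface IStatusService
{
    [OperationContract(IsOneWay = true)]
    void ReportStatus(string clientId);
}

public class StatusService : IStatusService
{
    public void ReportStatus(string clientId) { /* record last-seen time for the client */ }
}

class Host
{
    static void Main()
    {
        var host = new ServiceHost(typeof(StatusService),
            new Uri("net.tcp://localhost:9000/status")); // address is illustrative

        // These are the "concurrent connections/sessions/etc." knobs the question mentions.
        host.Description.Behaviors.Add(new ServiceThrottlingBehavior
        {
            MaxConcurrentCalls = 200,
            MaxConcurrentSessions = 2000,
            MaxConcurrentInstances = 2200
        });

        host.AddServiceEndpoint(typeof(IStatusService), new NetTcpBinding(SecurityMode.None), "");
        host.Open();
        Console.WriteLine("Listening; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```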

If it's really important for the server to know if the client is still connected, I don't think relying solely on WCF is your best bet for that.
Maybe your server should have some sort of ping mechanism that either allows it to ping client machines based on some sort of timer or vice versa.
If you're super concerned about the messages always getting through, no matter what, then I suggest exploring reliable sessions. Check out the enableReliableSession behavior attribute. I suggest reading through at least the first chapter of Juval Lowy's Programming WCF Services, which is available for free as the Kindle sample of the book.
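For illustration, a reliable session can also be turned on in code on the binding; a minimal sketch (the values are illustrative, not recommendations):

```csharp
using System;
using System.ServiceModel;

static class Bindings
{
    // Sketch: enable a WCF reliable session (message-level acknowledgements and retries)
    // on a NetTcpBinding. The same settings can be made in config instead.
    public static NetTcpBinding CreateReliableTcpBinding()
    {
        var binding = new NetTcpBinding(SecurityMode.None);
        binding.ReliableSession.Enabled = true;
        binding.ReliableSession.Ordered = true;
        binding.ReliableSession.InactivityTimeout = TimeSpan.FromMinutes(10);
        return binding;
    }
}
```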

Increasing the timeout may help, but probably not much, and the Amazing Ever-Increasing Timeout is kind of a motif on http://www.thedailywtf.com . Making the client hammer the server if it can't get through the first time is guaranteed to cause pain.
If all that you care about is knowing whether the client is there, might it be practical to go down a layer or two, and have the client send you an HTTP POST once in a while? WCF requires some active back-and-forth, but a POST can just lay there until your server has time to deal with it, and the client can just send it and forget about it.
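As a sketch of that idea (the URL is made up, and HttpClient stands in for whatever HTTP API the client already has available), a fire-and-forget heartbeat might look like this:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

static class Heartbeat
{
    static readonly HttpClient Http = new HttpClient { Timeout = TimeSpan.FromSeconds(10) };

    // Called from a timer on the client; the client never needs the response body.
    public static async Task SendAsync(string clientId)
    {
        var body = new StringContent(clientId, Encoding.UTF8, "text/plain");
        try
        {
            await Http.PostAsync("http://server:8080/heartbeat", body);
        }
        catch (HttpRequestException)
        {
            // Transient failure: ignore it and let the next heartbeat try again.
        }
    }
}
```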

Related

Async WCF and Protocol Behaviors

FYI: This will be my first real foray into Async/Await; for too long I've been settling for the familiar territory of BackgroundWorker. It's time to move on.
I wish to build a WCF service, self-hosted in a Windows service running on a remote machine in the same LAN, that does this:
Accepts a request for a single .ZIP archive
Creates the archive and packages several files
Returns the archive as its response to the request
I have to support archives as large as 10GB. Needless to say, this scenario isn't covered by basic WCF designs; we must take additional steps to meet the requirement. We must eliminate timeouts while the archive is building and memory errors while it's being sent. Both of these occur under basic WCF designs, depending on the size of the file returned.
My plan is to proceed using task-based asynchronous WCF calls and streaming mode.
I have two concerns:
Is this the proper approach to the problem?
Microsoft has done a nice job of abstracting all of this, but what of the underlying protocols? What goes on under the hood? Does the server keep the connection alive while the archive is building (which could be several minutes), or does it instead close the connection and initiate a new one once the operation is complete, thereby requiring me to properly route the request through the client machine's firewall?
For #2, clearly I'm hoping for the former (keep-alive). But after some searching I'm not easily finding an answer. Perhaps you know.
You need streaming for big payloads. That is the right approach. This has nothing at all to do with asynchronous IO. The two are independent. The client cannot even tell that the server is async internally.
I'll add my standard answers for whether to use async IO or not:
https://stackoverflow.com/a/25087273/122718 Why does the EF 6 tutorial use asynchronous calls?
https://stackoverflow.com/a/12796711/122718 Should we switch to use async I/O by default?
Each request runs over a single connection that is kept alive. This goes both for streaming big amounts of data and for long initial delays. I'm not sure why you are concerned about routing; does your router kill such connections? If so, that's a problem.
Regarding keep-alive: there is nothing going over the wire to do that. TCP sessions can stay open indefinitely without any kind of wire traffic.
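For reference, a minimal sketch of the streamed, task-based shape the question describes (the contract, ArchiveBuilder, and all binding values are placeholders, and this assumes .NET 4.5's task-based service operations):

```csharp
using System;
using System.IO;
using System.ServiceModel;
using System.Threading.Tasks;

[ServiceContract]
public interface IArchiveService
{
    // A streamed Stream return value avoids buffering the whole archive in memory;
    // the Task-based signature frees the server thread while the archive is built.
    [OperationContract]
    Task<Stream> GetArchiveAsync(string archiveName);
}

public class ArchiveService : IArchiveService
{
    public async Task<Stream> GetArchiveAsync(string archiveName)
    {
        string path = await ArchiveBuilder.BuildAsync(archiveName); // may take minutes
        return File.OpenRead(path); // WCF streams this back over the same connection
    }
}

// Placeholder for whatever actually packages the files into a .zip.
static class ArchiveBuilder
{
    public static Task<string> BuildAsync(string name) =>
        Task.FromResult(Path.Combine(Path.GetTempPath(), name + ".zip"));
}

static class StreamedBinding
{
    public static NetTcpBinding Create() => new NetTcpBinding(SecurityMode.None)
    {
        TransferMode = TransferMode.Streamed,
        MaxReceivedMessageSize = long.MaxValue, // matters on the side receiving the stream
        SendTimeout = TimeSpan.FromHours(1),    // generous, since building the archive is slow
        ReceiveTimeout = TimeSpan.FromHours(1)
    };
}
```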

How to limit a request execution time of WCF service?

Is there something in WCF configuration that defines a timeout for executing a request on the service side, i.e. so that the WCF service will stop executing the request after some time period? I have a service which does some work depending on client input. In some cases such a call may take too much time. I want to limit the execution time of such requests on the service side, not on the client side using SendTimeout. I know about the OperationTimeout property, but it doesn't abort the service request; it just tells the client that the request has timed out.
In general terms, there's nothing that will totally enforce this. Unfortunately, it's one of those things that the runtime can't really enforce nicely without possibly leaving state messed up (pretty much the only alternative for it would be to abort the running thread, and that has a bunch of undesirable consequences).
So, basically, if this is something you want to actively enforce, it's a lot better to design your service to deal with this so that your operation execution has safe interruption points where the operation can be terminated if the maximum execution time has been exceeded.
Though it's a lot more work, you'll likely be more satisfied with it in the long run.
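As a sketch of that kind of design (the contract, the step list, and the 30-second limit are all placeholders): the operation checks a deadline at points where stopping is safe and faults the call itself.

```csharp
using System;
using System.Collections.Generic;
using System.ServiceModel;
using System.Threading;

[ServiceContract]
public interface IWorkService
{
    [OperationContract]
    string DoWork(List<string> steps);
}

public class WorkService : IWorkService
{
    public string DoWork(List<string> steps)
    {
        // Cooperative enforcement: check the deadline at safe interruption points
        // instead of relying on the runtime to abort the thread.
        using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30)))
        {
            foreach (var step in steps)
            {
                if (cts.IsCancellationRequested)
                    throw new FaultException("Maximum execution time exceeded.");
                ProcessStep(step); // placeholder for one self-contained unit of work
            }
        }
        return "done";
    }

    private void ProcessStep(string step) { /* ... */ }
}
```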

Concurrent access to WCF client proxy

I'm currently playing around a little with WCF, and in doing so I've run into a question where I'm not sure if I'm on the right track.
Let's assume a simple setup that looks like this: client -> service1 -> service2.
The communication is TCP-based.
Where I'm not sure is whether it makes sense for service1 to cache the client proxy for service2. That way I might get multi-threaded access to that proxy, and I'd have to deal with it.
I'd like to take advantage of the TCP session to get better performance, but I'm not sure if this "architecture" is supported by WCF/the network/whatever at all. The problem I see is that all the communication goes over the same channel if I'm not using locks or some other synchronization.
I guess the better idea is to cache the proxy in a thread-static variable.
But before I do that, I wanted to confirm that it's really not a good idea to have only one proxy instance.
tia
Martin
If you don't know that you have a performance problem, then why worry about caching? You're opening yourself to the risk of improperly implementing multithreading code, and without any clear, measurable benefit.
Have you measured performance yet, or profiled the application to see where it's spending its time? If not, then when you do, you may well find that the overhead of multiple TCP sessions is not where your performance problems lie. You may wish you had the time to optimize some other part of your application, but you will have spent that time optimizing something that didn't need to be optimized.
I am already using such a structure. I have one service that collaborates with some other services to realise the implementation. In my case the client calls a one-way method of the first service. I am getting very good benefit from it. I have also configured it to limit the number of concurrent calls in some of the cases.
Yes, that architecture is supported by WCF. I deal with applications every day that use similar structures, using NetTCPBinding.
The biggest thing to worry about is the ConcurrencyMode of the various services involved, and making sure that they do not block unnecessarily. It is very easy to get into a scenario where you will be guaranteed timeouts, or at the least have poor performance due to multiple, synchronous calls across service boundaries. Even OneWay calls are not guaranteed to immediately return.
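For illustration, this is the sort of setting being referred to (the service and contract names are placeholders):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IRelayService
{
    [OperationContract(IsOneWay = true)]
    void Forward(string message);
}

// With InstanceContextMode.Single, the default ConcurrencyMode.Single would serialize
// every incoming call behind one lock; Multiple lets calls run in parallel, but the
// implementation must then be thread-safe.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class RelayService : IRelayService
{
    public void Forward(string message)
    {
        // Outbound calls to the next service go here; blocking calls made while
        // holding a shared lock are what typically leads to the timeouts mentioned above.
    }
}
```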
Be careful with ThreadStatic: .NET can switch the thread, so the variable can come back null.
For sessions, perhaps you could use session-enabled calls:
http://msdn.microsoft.com/en-us/library/ms733040.aspx
But I would not recommend using it if you do not have any performance issue. I would use the normal way, or if service1 is just for forwarding, you could get that functionality easily with 4.0:
http://www.sdn.nl/SDN/Artikelen/tabid/58/view/View/ArticleID/2979/Whats-New-in-WCF-40.aspx
Regards
Firstly, make sure you know about the behaviour of ThreadStatic in ASP.NET applications:
http://piers7.blogspot.com/2005/11/threadstatic-callcontext-and_02.html
The same thread that started your request may not be the same thread that finishes it. Basically, the only safe place to keep thread-local-style state in ASP.NET applications is inside HttpContext. The next obvious approach would be to create a wrapper client that manages your WCF client proxy and ensures each I/O request is thread safe using locks.
My personal preference, though, would be to use a pool of proxy clients: whenever you need one, pop it off the pool queue, and when you're finished with it, put it back.
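A minimal sketch of such a pool, assuming a hypothetical IService2 contract and an endpoint name taken from config:

```csharp
using System;
using System.Collections.Concurrent;
using System.ServiceModel;

[ServiceContract]
public interface IService2
{
    [OperationContract]
    void Process(string data);
}

public class Service2ProxyPool : IDisposable
{
    private readonly ConcurrentBag<IService2> _pool = new ConcurrentBag<IService2>();
    private readonly ChannelFactory<IService2> _factory =
        new ChannelFactory<IService2>("service2Endpoint"); // endpoint name from config

    public IService2 Take()
    {
        // Reuse an idle proxy (and its open TCP session) when one is available;
        // otherwise create a fresh channel.
        return _pool.TryTake(out var proxy) ? proxy : _factory.CreateChannel();
    }

    public void Return(IService2 proxy)
    {
        var channel = (IClientChannel)proxy;
        if (channel.State == CommunicationState.Opened)
            _pool.Add(proxy);     // still healthy: put it back for the next caller
        else
            channel.Abort();      // faulted or closed: throw it away
    }

    public void Dispose() => _factory.Close();
}
```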

MSMQ, WCF, and Flaky Servers

I have two applications, let us call them A and B. Currently A uses WCF to send messages to B. A doesn't need a response and B never sends messages back to A.
Unfortunately, there is a flaky network connection between the servers A and B are running on. This results in A getting timeout errors from time to time.
I would like to use WCF+MSMQ as a buffer between the two applications. That way if B goes down temporarily, or is otherwise inaccessible, the messages are not lost.
From an architectural standpoint, how should I configure this?
I think you might have inflated your question a bit with the inclusion of the word "architectural".
If you truly need an architectural overview of this issue from that high a level, including SLA concerns: your service level will only be as good as your MSMQ deployment, so if you are concerned about service levels, just look at the documentation available about MSMQ and SLAs.
If you are looking more for the actual implementation from a code standpoint, this article is excellent:
http://code.msdn.microsoft.com/msmqpluswcf
It goes over a lot of the things you'll need to know, including how to set up MSMQ and how to implement chunking to get around MSMQ's 4 MB message limit (if that's even necessary... I hope it's not).
Here's a good article about creating a durable and transactional queue that will cross machines using an MSMQ cluster: http://www.devx.com/enterprise/Article/39015/1954
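For illustration, the queued setup these answers point at boils down to a one-way contract exposed over NetMsmqBinding; the queue name and settings below are placeholders:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IMessageSink
{
    // Queued operations must be one-way: A never waits for a reply from B.
    [OperationContract(IsOneWay = true)]
    void Submit(string payload);
}

public class MessageSink : IMessageSink
{
    public void Submit(string payload) { /* B processes the message here */ }
}

class HostB
{
    static void Main()
    {
        var binding = new NetMsmqBinding(NetMsmqSecurityMode.None)
        {
            Durable = true,     // messages survive a restart of the B host
            ExactlyOnce = true  // requires the queue to be transactional
        };

        // If B is down or the network is flaky, messages simply wait in the queue.
        var host = new ServiceHost(typeof(MessageSink));
        host.AddServiceEndpoint(typeof(IMessageSink), binding,
            "net.msmq://localhost/private/appB"); // queue address is illustrative
        host.Open();
        Console.ReadLine();
        host.Close();
    }
}
```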

To poll or not to poll (in a web services context)

We can use polling to find out about updates from some source, for example clients connected to a web server. WCF provides a nifty feature in the form of duplex contracts, with which I can maintain a connection to a client and make invocations on that connection at will.
Some peeps in the office were discussing the merits of both solutions, and I wanted to get feedback on when each strategy is best used.
I would use an event-based mechanism instead of polling. In WCF, you can do this easily by following the Publish-Subscribe framework that Juval Lowy provides at his website, IDesign.net.
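For illustration, the duplex/callback shape that such a publish-subscribe setup builds on looks roughly like this (the contract names are placeholders, a duplex-capable binding such as netTcpBinding is assumed, and a real implementation would also check for faulted callback channels and support unsubscribing):

```csharp
using System.Collections.Concurrent;
using System.ServiceModel;

// Contract the service uses to call back into connected clients.
public interface IUpdateCallback
{
    [OperationContract(IsOneWay = true)]
    void OnUpdate(string update);
}

[ServiceContract(CallbackContract = typeof(IUpdateCallback))]
public interface IUpdatePublisher
{
    [OperationContract]
    void Subscribe();
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class UpdatePublisher : IUpdatePublisher
{
    private readonly ConcurrentBag<IUpdateCallback> _subscribers =
        new ConcurrentBag<IUpdateCallback>();

    public void Subscribe()
    {
        // Capture the caller's callback channel so updates can be pushed later.
        _subscribers.Add(OperationContext.Current.GetCallbackChannel<IUpdateCallback>());
    }

    // Called by whatever produces updates on the server side.
    public void Publish(string update)
    {
        foreach (var subscriber in _subscribers)
            subscriber.OnUpdate(update);
    }
}
```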
Depends partly on how many users you have.
Say you have 1,000,000 users: you will have problems maintaining that many sessions.
But if your system can respond to 1,000 poll requests a second, then each client can poll every 1,000 seconds (1,000,000 clients divided by 1,000 requests per second).
I think Shiraz nailed this one, but I wanted to say two more things.
1. I've had trouble with duplex contracts. You have to have all of your ducks in a row with regard to the callback channel... you have to check it to make sure it's open, etc. The IDesign.net stuff would be the minimum amount of plumbing code you'll have to include.
2. If it makes sense for your solution (this is only appropriate in certain situations), the MSMQ binding allows a client to send data to a service in an async manner (like duplex), but the service isn't "polling" for messages... it gets notified when one enters the queue through some under-the-covers plumbing. This sort of forces you to turn the communication around (client becomes server, server becomes client), but if the majority of the communication is one-way, this would provide a lot of benefits. The other advantage here is obviously the queued communication - the server can be down and not miss any messages... it'll pick 'em up when it comes back online.
Something to think about.