WCF client proxy keep alive?

Is there any disadvantage to creating a WCF client in code every time a call is needed? Currently I have a static class that creates a client and reuses it for a period of time (a couple of minutes, before the WCF service times out).
I'm having problems with it getting into a faulted state while I'm in development, because I keep recompiling the WCF code. It's an annoyance now, but I think it'll be fine in production.
But... is creating a client proxy with user credentials every time a call is made bad practice? Are there performance issues?

As far as I know there is no performance penalty, and this is a good way of doing it, i.e. create a client proxy each time you need it.
And each time you're done with it, it is a recommended best practice to always close the proxy. Closing the proxy releases the connection held toward the service, which is particularly important to do in the presence of a transport session. It also helps ensure the threshold for the maximum number of connections on the client’s machine is not reached. Closing the proxy terminates the session with the service instance.
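A minimal sketch of that per-call pattern, assuming a generated proxy called MyServiceClient with a GetData operation (both placeholder names):

    using System;
    using System.ServiceModel;

    public static class ServiceCaller
    {
        // Create the proxy, make the call, close it, and abort if closing fails.
        public static string GetData(int value)
        {
            var client = new MyServiceClient();      // pass user credentials here if required
            try
            {
                var result = client.GetData(value);  // the actual service operation
                client.Close();                      // releases the connection / ends the transport session
                return result;
            }
            catch (CommunicationException)
            {
                client.Abort();                      // channel is unusable; tear it down immediately
                throw;
            }
            catch (TimeoutException)
            {
                client.Abort();
                throw;
            }
        }
    }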

I think the best answer is a little of both.
There is definitely a performance hit from creating a proxy client on every call. If you can create a proxy client, use it for all the calls you're going to make immediately, and then dispose of it, that is much faster.

Related

Lifetime of WCF proxies

During migration of a set of old applications from Remoting to WCF I'm struggling with a good way of lifetime handling for the WCF proxies.
Initially I kept the same pattern I had with Remoting: Create a proxy server object during application startup and use it as long as the application is running.
This pattern had 2 problems, though:
When any call during the runtime of the application failed and the server threw an exception, the client proxy would move to Faulted state and was no longer usable. I fixed this by adding an ErrorHandler on the server side and only throwing FaultExceptions.
With infrequent server calls, the proxy channel would timeout after some time and you only notice when the next call fails. Increasing the Send/ReceiveTimeout to very long timespans is a no-go, from what I read.
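The ErrorHandler fix for the first problem amounts to something like the following sketch (the handler name is a placeholder, and registering it via an IServiceBehavior or attribute is omitted):

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Channels;
    using System.ServiceModel.Dispatcher;

    // Turns unhandled server-side exceptions into FaultExceptions so the channel
    // (and therefore the client proxy) does not end up in the Faulted state.
    public class FaultConvertingErrorHandler : IErrorHandler
    {
        public bool HandleError(Exception error)
        {
            // Log the original exception here if needed.
            return true;
        }

        public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
        {
            if (error is FaultException)
                return;                              // already a fault, leave it alone

            var faultException = new FaultException("An internal error occurred.");
            var messageFault = faultException.CreateMessageFault();
            fault = Message.CreateMessage(version, messageFault, faultException.Action);
        }
    }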
This article suggested creating a new proxy for every call and caching the ChannelFactory.
While this solved both problems, it also killed performance.
Caching the ChannelFactory was a good idea, but in contrast to what the article above said, creating the proxy is far from light-weight.
Well, creating the proxy itself is fast, but opening the channel (which has to be done when calling the server) is incredibly slow.
I've been using a plain vanilla net.tcp channel and each server call took about 2 seconds (in contrast to a few ms if re-using the proxy).
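For reference, the cached ChannelFactory / proxy-per-call pattern looks roughly like this (a sketch; IMyService and the endpoint configuration name are placeholders):

    using System;
    using System.ServiceModel;

    public static class ServiceProxy
    {
        // The ChannelFactory is the expensive part, so it is created once and cached.
        private static readonly ChannelFactory<IMyService> Factory =
            new ChannelFactory<IMyService>("netTcpEndpointName");   // endpoint name from config

        public static TResult Call<TResult>(Func<IMyService, TResult> operation)
        {
            var channel = Factory.CreateChannel();   // cheap to create
            var comm = (ICommunicationObject)channel;
            try
            {
                var result = operation(channel);     // the channel is opened implicitly on the first call
                comm.Close();
                return result;
            }
            catch
            {
                comm.Abort();
                throw;
            }
        }
    }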
Because it's a large code base I don't want to go through each and every server call and check the lifetime requirements for each block of calls.
Now I'm unsure which way to go. Any advice?
Thanks in advance, mav

How best to manage Redis connections using ServiceStack?

I work on a few .NET web apps that use Redis heavily for caching along with ServiceStack's Redis client. In all cases I've got Redis running on the same machine. I've used both BasicRedisClientManager and PooledRedisClientManager (always implemented as singletons) and have had some issues with both approaches.
With BasicRedisClientManager, things would work fine for a while, but eventually Redis would start refusing connections. Using netstat we discovered that thousands of TCP connections to the default Redis port were hanging around in TIME_WAIT status.
We then switched to PooledRedisClientManager, which seemed to fix the problem immediately. However, not long after, we started noticing occasional CPU spikes that we narrowed down to thread waiting (System.Threading.Monitor.Wait calls) caused by PooledRedisClientManager.GetClient.
In code, we use a get-in-get-out approach (using ServiceStack's handy ExecAs shortcuts) so in general connections are acquired very frequently but held as briefly as possible.
We get a modest amount of traffic but we're no StackExchange, and I can't help but think the ServiceStack client is up to the job and we're just doing something wrong. Is PooledRedisClientManager the correct approach here? Would it be advisable to simply increase the pool size? Or is that likely just masking a problem with our code?
Just looking for general guidance here, I don't have specific code I need help with at this point. Thanks in advance.
Are you absolutely sure all Redis connections are being disposed?
With ServiceStack, the Redis property on Service and ViewPageBase (if you're using SS Razor) does get disposed for you, but any time you request a connection from the pool yourself you must dispose of it yourself.
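For any client you resolve from the pool yourself, the usual pattern is a using block so it always gets returned (a sketch; redisManager stands for whatever IRedisClientsManager you registered):

    using ServiceStack.Redis;

    public class CacheExample
    {
        private readonly IRedisClientsManager redisManager;   // e.g. a singleton PooledRedisClientManager

        public CacheExample(IRedisClientsManager redisManager)
        {
            this.redisManager = redisManager;
        }

        public string RoundTrip()
        {
            // Any client you resolve yourself must be disposed so it goes back to the pool.
            using (var redis = redisManager.GetClient())
            {
                redis.Set("some:key", "some value");
                return redis.Get<string>("some:key");
            }
        }
    }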
However, despite this, we recently had issues with our pool being exhausted of all connections, too. One of my colleagues discovered that there wasn't proper clean-up for Razor pages and made a pull request here. This means that connections on Razor pages have only been disposed correctly since ServiceStack v4.0.21. I have not checked whether that fix has been back-ported to the v3 branch.
My colleague also added a TrackingRedisClientsManager that may help you track down the improper disposal. See here.
You can also check the stats of a PooledRedisClientManager by using this helper method. We threw it on a little Razor page so we could check the stats when we felt it was appropriate, but you could write better code around this to monitor the pool health of specific nodes, too.

closing WCF proxy

I have always followed the guidance of try/Close/catch/Abort when it comes to a WCF proxy. I am facing a code base now that creates proxies in MVC controllers and just lets them go out of scope. I'm arguing the case that we need to edit the code base to use try/Close/catch/Abort but there is resistance.
Does anyone know a metric (e.g. perfmon) I can capture to illustrate the problem/benefit. Or a definitive reference that spells out the problem/benefit no one can dispute?
You can create a sample application to mimic the problem. Though I haven't tried it myself, you can try this:
1. Create a simple service and limit maxConcurrentCalls and maxConcurrentSessions to 5.
2. Create a client application and, in it, call the service method without closing the connection.
3. Fire up 6 or more clients.
4. See what happens when you open a new connection from a client. The client will probably wait for a certain time and then you will get an exception.
If clients don't close their connections properly, the connections remain open on the service. So what happens if thousands of clients connect at the same time and leave their connections open? The service can only serve 'n' connections at a time, so it can't handle any new requests, and that's why closing connections is so important.
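A minimal self-hosting sketch with the maxConcurrentCalls / maxConcurrentSessions limits from the first step (MyService and IMyService are placeholder types):

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    class Program
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(MyService),
                new Uri("net.tcp://localhost:9000/MyService"));

            // Cap concurrent calls/sessions at 5 so the limit is easy to hit.
            var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
            if (throttle == null)
            {
                throttle = new ServiceThrottlingBehavior();
                host.Description.Behaviors.Add(throttle);
            }
            throttle.MaxConcurrentCalls = 5;
            throttle.MaxConcurrentSessions = 5;

            host.AddServiceEndpoint(typeof(IMyService), new NetTcpBinding(), "");
            host.Open();

            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }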
I think you are aware of the using problem with WCF clients. In my applications I close WCF connections using an extension method, as described in this thread.
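The extension method referred to there is usually something along these lines (a sketch; the name CloseSafely is just illustrative):

    using System;
    using System.ServiceModel;

    public static class CommunicationObjectExtensions
    {
        // Close the channel if it is healthy, otherwise abort it, so cleanup
        // never throws from an already-faulted proxy.
        public static void CloseSafely(this ICommunicationObject proxy)
        {
            try
            {
                if (proxy.State == CommunicationState.Faulted)
                    proxy.Abort();
                else
                    proxy.Close();
            }
            catch (CommunicationException)
            {
                proxy.Abort();
            }
            catch (TimeoutException)
            {
                proxy.Abort();
            }
            catch
            {
                proxy.Abort();
                throw;
            }
        }
    }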
Have you tried a simple 'netstat -n' from the command prompt on both server and client? You are likely to see a lot of waiting/pending connections, which might exhaust your server resources for no reason.

End a WCF Session from the Server?

This may be a shot in the dark (I don't know much about the internals of WCF), but here goes...
I'm currently working with a legacy application at a client site and we're experiencing a persistent issue with a WCF service. The application is using the Microsoft Sync Framework 2.0 and syncing through the aforementioned service. The server-side implementation of the service has a lot of custom code in various states of "a mess."
Anyway, we're seeing an error on the client application most of the time and the pattern we're narrowing down centers around different users using the application on the same machine hitting the same service. It seems that the service and the client are getting out of sync in some way on an authentication level.
The error is discussed in an article here, and we're currently investigating the approach of switching from message layer security to transport layer security, which will hopefully solve the problem. However, we may be able to solve it in a less invasive manner if this question makes sense.
In the linked article, one of the suggestions was to forcibly terminate the connection if the specific exception is caught and try again; if it fails again, then this particular theory wasn't the cause. Sounds good, and easy to implement. However, I find myself unable to say with confidence whether the connection is being properly terminated.
The service operates through a custom interface, which is implemented on the server-side. The only thing that interface can do to end the connection is call EndSession() on the proxy itself, which calls EndSession() on the server which is a custom method.
So...
From a WCF service method, is there a way to properly and gracefully terminate the connection with the client in a way the client will like?
That is, in this custom EndSession() is there some last step I can take to cause the server to completely forget that this connection was open? Because it seems like when another user on the same machine tries to hit the service within the application, that's when it fails with the error in the linked article.
The idea is that, at the client side of things, code which calls EndSession() is followed by nulling out the proxy object, then a factory method is called to supply another one the next time it's needed. So I wonder if something additional needs to happen on the server side (and does by default in WCF were it not for all this custom implementation code) to terminate the connection on that end?
Like I said, a shot in the dark. But maybe in answers/discussions here I can at least further diagnose the problem, so any help is much appreciated. Thanks.
Unfortunately there are only really three ways in which a session can be terminated:
1. The client closes the proxy.
2. The service's receiveTimeout is exceeded before the client sends another request.
3. The service throws a non-fault exception, which will fault the channel and so terminate the session.
If you don't want the client involved then you only have 2 and 3, neither of which ends well for the client: they will get an exception in both situations on the next attempt to talk to the service.
You could use duplex messaging and have the service notify the client that it requires session termination; the client then gets an opportunity to close down the proxy gracefully, but this is a cooperative strategy.
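A duplex version of that cooperative shutdown could look roughly like this (a sketch; contract and method names are placeholders):

    using System.ServiceModel;

    // Callback contract implemented on the client.
    public interface ISessionCallback
    {
        [OperationContract(IsOneWay = true)]
        void SessionEnding();                    // client reacts by closing its proxy
    }

    [ServiceContract(SessionMode = SessionMode.Required,
                     CallbackContract = typeof(ISessionCallback))]
    public interface ISessionService
    {
        [OperationContract]
        void DoWork();
    }

    public class SessionService : ISessionService
    {
        public void DoWork()
        {
            // ... when the service decides the session should end,
            // ask the client to shut down its proxy:
            var callback = OperationContext.Current.GetCallbackChannel<ISessionCallback>();
            callback.SessionEnding();
        }
    }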

WCF net.tcp server disconnects - how to handle properly on client side?

I'm stuck with a bit of an annoying problem right now.
I've got a Silverlight 4 application (which runs OOB by default). It uses WCF with net.tcp as a means of communicating with the server.
The client uses a central instance of the WCF client proxy. As long as everything keeps running on the server side, everything's fine.
If I kill the server in the middle of everything, I drown in an avalanche of exceptions on the client side (connection lost, channel faulted, etc.).
Now I'm looking for a way to handle this in a clean and centralized way (if centralized is possible).
The SL app has one central client object sitting in App.cs (public static MyClient Client { get;set;}), which gets initialized on application start.
Any idea how to properly handle any connectivity problems on the client object?
You mention that you're using a central instance of the WCF client proxy.
If this is the case, then when a server error occurs, the proxy will go into the Faulted state. To keep things centralized, you could cast the client proxy to an ICommunicationObject and attach an event handler to the Faulted event which replaces the faulted proxy with a new one when the event fires.
The usual warnings about thread-safety for centralized access to resources apply!
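Something along these lines, for example (a sketch; MyClient stands for the generated proxy type, and locking is omitted):

    using System.ServiceModel;

    public static class ClientFaultRecovery
    {
        // Recreate the central proxy whenever the current one faults.
        public static void AttachFaultHandler()
        {
            var comm = (ICommunicationObject)App.Client;
            comm.Faulted += (sender, args) =>
            {
                ((ICommunicationObject)sender).Abort();   // clean up the dead channel
                App.Client = new MyClient();              // swap in a fresh proxy
                AttachFaultHandler();                     // re-subscribe on the new instance
            };
        }
    }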
I think I found a workable, though not centralized, solution. Instead of cluttering the code with try/catch blocks, all it seems to need is a null check on the event.Error property. If something happened to the connection, this property is non-null. The exceptions only get raised if you try to access event.Result.
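Concretely, the check amounts to this in each *Completed handler (a sketch; GetDataCompleted stands for whatever operation you call):

    // Check Error before touching Result; reading Result on a failed call rethrows the exception.
    client.GetDataCompleted += (sender, e) =>
    {
        if (e.Error != null)
        {
            // connection lost, channel faulted, etc. - handle it here without throwing
            return;
        }

        var data = e.Result;   // safe to read now
        // ... use data ...
    };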
It may not be the most beautiful solution, but it appears to work so far.
Perhaps there is a more elegant way though...