WCF Client Connection Caching/Pooling

Suppose you expose a WCF Service from one project and consume it in another project using 'Add Service Reference' (in this case a Framework 3.5 WPF Application).
Will ClientBase perform any kind of connection pooling of the underlying channel when you re-instantiate a ClientBase-derived proxy, or will you incur the full overhead of establishing the connection with the service every time? I am especially concerned about this since we are using security mode="Message" with wsHttpBinding.

Please take a look at this article, which describes best practices for caching your client proxies. If you're creating your proxy directly (MyProxy p = new MyProxy(...)), then it seems that you really can't cache the underlying ChannelFactory, which is the expensive part. But if you use ChannelFactory to create your proxy, the ChannelFactory is cached by the proxy at the AppDomain level, keyed on the parameters you pass to it (kind of like connection pooling, which is keyed on the connection string).
The article goes through a number of details on what's going on under the covers, but the main point is that you get a performance bump if you use ChannelFactory to create your proxy instead of instantiating it directly.
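As a rough illustration of that pattern, here is a minimal sketch that caches a single ChannelFactory and creates a cheap channel per call (the IMyService contract and the endpoint name are made up for the example):

```csharp
using System;
using System.ServiceModel;

// Hypothetical contract, for illustration only.
[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string Echo(string text);
}

public static class MyServiceCaller
{
    // Building the ChannelFactory is the expensive part, so do it once per
    // AppDomain and reuse it. "WSHttpBinding_IMyService" is assumed to be an
    // endpoint name in the client's app.config.
    private static readonly ChannelFactory<IMyService> Factory =
        new ChannelFactory<IMyService>("WSHttpBinding_IMyService");

    public static string Echo(string text)
    {
        // Channels created from a cached factory are comparatively cheap.
        IMyService channel = Factory.CreateChannel();
        try
        {
            string result = channel.Echo(text);
            ((IClientChannel)channel).Close();
            return result;
        }
        catch
        {
            ((IClientChannel)channel).Abort();
            throw;
        }
    }
}
```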
Hope this helps!!

This article explains that yes, there is TCP connection pooling for WCF. What it doesn't explain, though, is in which cases it takes effect. As far as I can tell, as long as you construct your proxy object by giving it a named endpoint (i.e., not using a custom Binding object), connection pooling will work. I tested this by throwing load at my web app and checking open TCP connections with netstat.
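To make that concrete, here are the two construction styles side by side (a hedged sketch: MyServiceClient is assumed to be a proxy generated by Add Service Reference, and the endpoint/address names are made up):

```csharp
using System.ServiceModel;

static class ProxyConstructionExamples
{
    static void Demo()
    {
        // Constructed from a named endpoint in the client's config - the case
        // where connection reuse was observed in the netstat test above.
        var fromConfig = new MyServiceClient("WSHttpBinding_IMyService");

        // Constructed with a Binding object built in code - per the answer
        // above, this is the case where pooling did not appear to kick in.
        var fromCodeBinding = new MyServiceClient(
            new WSHttpBinding(SecurityMode.Message),
            new EndpointAddress("http://server/MyService.svc"));

        fromConfig.Close();
        fromCodeBinding.Close();
    }
}
```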
But the bottom line is you do not need to cache your proxy objects in order to re-use the TCP connections.

Related

Lifetime of WCF proxies

During the migration of a set of old applications from Remoting to WCF, I'm struggling to find a good way of handling the lifetime of the WCF proxies.
Initially I kept the same pattern I had with Remoting: create a proxy object during application startup and use it for as long as the application is running.
This pattern had 2 problems, though:
When any call during the runtime of the application failed and the server threw an exception, the client proxy would move to the Faulted state and was no longer usable. I fixed this by adding an ErrorHandler on the server side and only throwing FaultExceptions (a sketch of such a handler follows below).
With infrequent server calls, the proxy channel would time out after a while, and you only notice when the next call fails. Increasing the Send/ReceiveTimeout to very long timespans is a no-go, from what I read.
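For reference, a simplified sketch of the kind of server-side error handler described above (the class name is illustrative, not the original code, and registering it via a service behavior is omitted):

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// Converts unexpected server-side exceptions into FaultExceptions so the
// client channel does not end up in the Faulted state.
public class FaultConvertingErrorHandler : IErrorHandler
{
    public bool HandleError(Exception error)
    {
        // Returning true tells WCF not to abort the session, so the client
        // proxy stays usable after the error.
        return true;
    }

    public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
    {
        if (error is FaultException)
            return; // already a fault; let WCF send it unchanged

        var faultException = new FaultException(
            "The service failed to process the request.");
        MessageFault messageFault = faultException.CreateMessageFault();
        fault = Message.CreateMessage(version, messageFault, faultException.Action);
    }
}
```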
This article suggested creating a new proxy for every call and caching the ChannelFactory.
While this solved both problems, it also killed performance.
Caching the ChannelFactory was a good idea, but in contrast to what the article above says, creating the proxy is far from lightweight.
Well, creating the proxy object itself is fast, but opening the channel (which has to be done when calling the server) is incredibly slow.
I've been using a plain vanilla net.tcp channel and each server call took about 2 seconds (in contrast to a few ms if re-using the proxy).
Because it's a large code base I don't want to go through each and every server call and check the lifetime requirements for each block of calls.
Now I'm unsure which way to go. Any advice?
Thanks in advance, mav

Should I Open and Close my WCF client after each service call?

I have developed a WCF service for consumption within the organization's Ethernet.
The service is currently hosted in a Windows service and is using the net.tcp binding.
There are 2 operation contracts defined in the service.
The client connecting to this service is a long-running Windows desktop application.
Employees (>30,000) usually have this client running throughout the week, from Monday morning to Friday evening straight.
During this lifetime there might be a number of calls to the WCF service in question, depending on certain user actions on the main desktop client.
Let us just say 1 in every 3 actions on the main desktop application would trigger a call to our service.
Now we are planning to deploy this Windows service on each employee's desktop.
I am also using `autofac` as the dependency resolver container.
My WCF service instance context is `PerSession`, but practically speaking we have both the client and the service running on the same desktop (for now), so I am planning to inject the same service instance for each new session using the `autofac` container.
Now I am not changing the `InstanceContext` attribute on the service implementation, because in the future I might deploy the same service in a different hosting environment where I would like to have a new service object instance for each session.
As mentioned earlier, the client is a long-running desktop application, and I have read that it is good practice to `Open` and `Close` the proxy for each call. But if I leave the service as PerSession, it will create a new service instance for each call, which might not be required given that the service and client have a 1-1 mapping. Another argument is that I am planning to inject the same instance for each session in this environment, so Open & Close for each service call shouldn't matter?
So which approach should I take: make the service `Singleton` and Open/Close the proxy for each call, or open the client-side proxy when the desktop application loads (or on the first service call) and then close it only when the desktop application is closed?
My WCF service instance context is PerSession, but practically speaking we have both the client and the service running on the same desktop (for now), so I am planning to inject the same service instance for each new session using the autofac container
Generally you want to avoid sharing a WCF client proxy, because if it faults it becomes difficult to push (or in your case reinject) a new WCF proxy to those parts of the code sharing the old one. It is better to create a proxy per actor.
Now I am not changing the InstanceContext attribute on the service implementation because in the future I might deploy the same service in a different hosting environment where I would like to have a new service object instance for each session
I think there may be some confusion here. InstanceContextMode.PerSession means that a service instance is created per WCF client proxy. That means one service instance each time you `new MyClientProxy()`, even if you share that proxy with 10 other objects that are injected with it as a singleton. This is irrespective of how you host it.
As mentioned earlier, the client is a long-running desktop application, and I have read that it is good practice to Open and Close the proxy for each call
Incorrect. For a PerSession service that is very expensive. There is a measurable cost in establishing the link to the service, not to mention the overhead of creating the factories. PerSession services are per-session for a reason: it implies that the service maintains state between calls. For example, in my PerSession services I like to establish an expensive DB connection in the constructor that can then be utilised very quickly in later service calls. Opening/closing in this example essentially means that a new service instance is created, together with a new DB connection, for every call. Slow!
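As a sketch of what I mean (the contract, connection string and names are placeholders, not a prescription for your service):

```csharp
using System;
using System.Data.SqlClient;
using System.ServiceModel;

[ServiceContract(SessionMode = SessionMode.Required)]
public interface IOrderService
{
    [OperationContract]
    int CountOrders(string customerId);
}

// One instance is created per client proxy/session, so the expensive
// connection below is opened once and then reused by later calls made
// through the same proxy.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class OrderService : IOrderService, IDisposable
{
    private readonly SqlConnection _connection;

    public OrderService()
    {
        // Expensive set-up paid once per session, not once per call.
        _connection = new SqlConnection("placeholder connection string");
        _connection.Open();
    }

    public int CountOrders(string customerId)
    {
        using (var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM Orders WHERE CustomerId = @id", _connection))
        {
            cmd.Parameters.AddWithValue("@id", customerId);
            return (int)cmd.ExecuteScalar();
        }
    }

    public void Dispose()
    {
        _connection.Dispose();
    }
}
```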
Plus, sharing a client proxy that is injected elsewhere sort of defeats the purpose of an injected proxy anyway. Not to mention that closing it in one thread could cause a fault in another thread. Again, note that I dislike the idea of shared proxies.
Another argument is that I am planning to inject the same instance for each session in this environment, so Open & Close for each service call shouldn't matter?
Yes, like I said, if you are going to inject a proxy then you should not call Open/Close. Then again, you should not share one in a multi-threaded environment.
So which approach should I take
Follow these guidelines:
Singleton? PerCall? PerSession? That entirely depends on the nature of your service. Does it share state between method calls? Make it PerSession; otherwise you could use PerCall. Don't want to create more than one service instance, and want to optionally share globals/singletons between method calls? Make it a Singleton.
Rather than injecting a shared concrete instance of the WCF client proxy, inject a mechanism (a factory) that, when called, allows each recipient to create its own WCF client proxy when required (a sketch follows this list of guidelines).
Do not call Open/Close after each call; that will hurt performance regardless of the service instance mode. Even if your service is essentially compute-only, repeated Open/Close for each method call on a Singleton service is still slow due to the start-up costs of the client proxy.
Dispose the client proxy ASAP when no longer required. PerSession service instances remain on the server eating up valuable resources throughout the lifetime of the client proxy or until timeout (whichever occurs sooner).
If your service is on the local machine, then consider the NetNamedPipeBinding: it runs in kernel mode, does not use the network redirector, and is faster than TCP. Later, when you deploy a remote service, add the TCP binding.
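Here is the factory-style registration from the second guideline as a rough autofac sketch (the contract and endpoint names are placeholders):

```csharp
using System;
using System.ServiceModel;
using Autofac;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    int CountOrders(string customerId);
}

static class ProxyRegistration
{
    public static IContainer Build()
    {
        var builder = new ContainerBuilder();

        // The expensive ChannelFactory is a singleton for the AppDomain.
        builder.Register(c => new ChannelFactory<IOrderService>("NetTcpBinding_IOrderService"))
               .SingleInstance();

        // Each invocation of the injected Func<IOrderService> creates a fresh
        // channel; we close/abort it ourselves, hence ExternallyOwned.
        builder.Register(c => c.Resolve<ChannelFactory<IOrderService>>().CreateChannel())
               .ExternallyOwned();

        builder.RegisterType<OrderViewModel>();
        return builder.Build();
    }
}

// A consumer receives the factory delegate, creates its own proxy and
// disposes it when it no longer needs the service.
public class OrderViewModel : IDisposable
{
    private readonly IOrderService _proxy;

    public OrderViewModel(Func<IOrderService> proxyFactory)
    {
        _proxy = proxyFactory(); // this consumer's own channel/session
    }

    public int LoadOrderCount(string customerId)
    {
        return _proxy.CountOrders(customerId);
    }

    public void Dispose()
    {
        var channel = (IClientChannel)_proxy;
        try { channel.Close(); }
        catch { channel.Abort(); }
    }
}
```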
I recommend this awesome WCF tome

closing WCF proxy

I have always followed the guidance of try/Close/catch/Abort when it comes to a WCF proxy. I am facing a code base now that creates proxies in MVC controllers and just lets them go out of scope. I'm arguing the case that we need to edit the code base to use try/Close/catch/Abort but there is resistance.
Does anyone know a metric (e.g. perfmon) I can capture to illustrate the problem/benefit? Or a definitive reference that spells out the problem/benefit that no one can dispute?
You can create a sample application to mimic the problem. Though I haven't tried it myself, you can try this:
Create a simple service and limit the maxConcurrentCalls and maxConcurrentSessions to 5.
Create a client application and, in it, call the service method without closing the connection.
Fire up 6 or more clients.
See what happens when you open a new connection from a client. Most likely the client will wait for a certain time and then you will get an exception.
If the clients don't close their connections properly, the connections will remain open on the service side. So what happens if thousands of clients connect to the service at a time and leave their connections open? The service has a limit of 'n' connections it can serve at a time, and because of that it can't handle any new requests from clients. That's why closing connections is very important.
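For step 1, a minimal self-hosted service with deliberately low limits could look like this sketch (the contract and addresses are made up):

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface IPingService
{
    [OperationContract]
    string Ping();
}

public class PingService : IPingService
{
    public string Ping() { return "pong"; }
}

class ThrottledHostDemo
{
    static void Main()
    {
        var host = new ServiceHost(typeof(PingService),
            new Uri("net.tcp://localhost:9000/PingService"));

        // Low limits so a handful of clients that never close their proxies
        // is enough to block any further callers.
        host.Description.Behaviors.Add(new ServiceThrottlingBehavior
        {
            MaxConcurrentCalls = 5,
            MaxConcurrentSessions = 5,
            MaxConcurrentInstances = 5
        });

        host.AddServiceEndpoint(typeof(IPingService), new NetTcpBinding(), "");
        host.Open();

        Console.WriteLine("Host open. Start 6+ clients that never close their proxies.");
        Console.ReadLine();
        host.Close();
    }
}
```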
I think you are aware of the problem with wrapping WCF clients in a using block. In my applications I close WCF connections using an extension method, as described in this thread.
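The extension method boils down to the usual close-or-abort pattern; roughly (the method name is mine):

```csharp
using System;
using System.ServiceModel;

public static class ClientChannelExtensions
{
    // Close the channel if it is healthy, abort it otherwise, so that a
    // faulted proxy never throws out of a finally/Dispose path.
    public static void CloseOrAbort(this ICommunicationObject channel)
    {
        if (channel == null) return;

        try
        {
            if (channel.State == CommunicationState.Faulted)
                channel.Abort();
            else
                channel.Close();
        }
        catch (CommunicationException)
        {
            channel.Abort();
        }
        catch (TimeoutException)
        {
            channel.Abort();
        }
        catch
        {
            channel.Abort();
            throw;
        }
    }
}

// Usage in a controller action (client is a generated proxy or channel):
//   try { return View(client.GetData()); }
//   finally { ((ICommunicationObject)client).CloseOrAbort(); }
```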
Have you tried a simple 'netstat -n' from the command prompt on both the server and the client? You are likely to see a lot of waiting/pending connections which might exhaust your server resources for no reason.

WCF Proxy Pooling - Is it worth it?

Is it really worth pooling WCF proxy clients, or is it better to instantiate a new proxy on every call to a given method?
By the way, does anyone have a pooling pattern for this kind of proxy that he/she is willing to share?
It is worth caching the ChannelFactory because its construction is costly. Proxies generated by Add Service Reference (or svcutil.exe directly) do this in some scenarios (generally, you must not build the binding in code if you want to get this caching). If you build the ChannelFactory manually (you don't use generated proxies), it is up to you to store it somewhere instead of initializing it every time you need it.
Pooling proxies probably doesn't make much sense. For stateless services the proxy creation should be fast (if you have a cached factory). For stateful services you don't want to share a proxy among multiple "clients". There is also pooling at the connection level itself. HTTP bindings use persistent connections by default; these connections can be reused by multiple proxies. Net.tcp and net.pipe bindings use connection pooling internally. It means that the lifetime of the proxy doesn't have to be the same as the lifetime of the connection.
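For reference, the client-side connection pool used by net.tcp can be tuned on the transport binding element via a custom binding; a sketch with purely illustrative numbers:

```csharp
using System;
using System.ServiceModel.Channels;

static class NetTcpPoolingDemo
{
    // A custom binding roughly equivalent to net.tcp, shown only to point at
    // where the connection pool settings live.
    static Binding CreateBinding()
    {
        var transport = new TcpTransportBindingElement();
        transport.ConnectionPoolSettings.MaxOutboundConnectionsPerEndpoint = 20;
        transport.ConnectionPoolSettings.IdleTimeout = TimeSpan.FromMinutes(2);
        transport.ConnectionPoolSettings.LeaseTimeout = TimeSpan.FromMinutes(5);

        return new CustomBinding(
            new BinaryMessageEncodingBindingElement(),
            transport);
    }
}
```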

Is WCF Duplex a good choice?

After developing a mini project with WCF duplex (a chat service and an SMS service), I have come to a conclusion that may not be correct!
I believe the duplex idea is good and useful, but there are a lot of problems with using WCF duplex (like reliable sessions, time-out exceptions, client address management on the server side, and proxy management on the client side).
Am I thinking about this wrong? Am I missing something?
For more information: I used wsDualHttpBinding, not tcpBinding.
If you need bidirectional communication and you want to use WCF, duplex channels are the way to go. You just need to design your application correctly and correctly handle all the problems you have described. If you feel that these problems are overhead and make things even worse, you can always use network programming directly (sockets) or handle bidirectional communication yourself by exposing a separate service on the server and another on the client (where the first call from the client informs the server of the client's address) - this scenario will suffer from the same communication problems as WsDualHttpBinding.
WsDualHttpBinding itself is a special kind of duplex communication. I personally don't like it because people very often misuse it. The problem is that this binding uses two separate connections - one from client to server and a second from server to client. That is a big difference from net.tcp, where only the connection initiated from the client to the server is used. Obviously, using WsDualHttpBinding over the internet (= you don't have control over client machines) becomes much more complicated, because each client must configure its firewall (on the computer, on the home internet gateway, etc.) to allow connections on some port. Also, if you want to run more than one instance of the application on the same client machine, each instance must use its own port.
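For illustration, a bare-bones duplex setup looks roughly like this (the contract names and endpoint configuration name are made up; with wsDualHttpBinding the clientBaseAddress, i.e. the port the client must accept connections on, would be configured on that endpoint):

```csharp
using System;
using System.ServiceModel;

// The server calls back into the client over the callback channel that is
// associated with the session.
[ServiceContract(CallbackContract = typeof(IChatCallback))]
public interface IChatService
{
    [OperationContract(IsOneWay = true)]
    void Send(string message);
}

public interface IChatCallback
{
    [OperationContract(IsOneWay = true)]
    void Receive(string message);
}

public class ChatCallback : IChatCallback
{
    public void Receive(string message)
    {
        Console.WriteLine(message);
    }
}

static class ChatClientDemo
{
    static void Main()
    {
        var context = new InstanceContext(new ChatCallback());

        var factory = new DuplexChannelFactory<IChatService>(
            context, "WSDualHttpBinding_IChatService");

        IChatService proxy = factory.CreateChannel();
        proxy.Send("hello");

        Console.ReadLine();
        ((IClientChannel)proxy).Close();
        factory.Close();
    }
}
```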