We have an MVC application that calls downstream web services via WCF.
We followed the commonly proposed approach of registering a singleton ChannelFactory, which will then create a channel InstancePerDependency (if I am not mistaken).
We observe the following issues in production:
Profiling shows that an inordinate amount of time is spent in System.ServiceModel.Channels.ServiceChannelFactory.ChannelCreated (a few hundred milliseconds, occasionally multiple seconds). The only thing in that method that can take any significant time is acquiring a lock.
Performance counters show a linear increase in .NET CLR LocksAndThreads > Contention Rate / sec over time.
I suspect that channels are somehow not being disposed properly. The ChannelFactory keeps a list of all its channels: OnCreated adds the new channel to the list after acquiring a lock, and when a channel is closed or aborted it is removed from the list, again after acquiring the lock. If the list grows huge, removal takes a long time and OnCreated has to wait for the lock.
We have the Autofac-resolved IService injected into controller methods, and in some instances we also use DependencyResolver.Current.GetService. My understanding was that the Autofac WCF integration would take care of the disposal. Is that not so? What is the proper way to ensure channels are disposed?
The suspicion voiced in the question turned out to be true: channels were leaking.
There were two issues with the application at hand:
the IService dependency resolution was registered with the default perDependency instance scope, resulting in a ton of channels being created. In a web application you probably want to use the perHttpRequest scope for WCF client channel resolution, as in the registration sketch after this list. (This alone should not have resulted in a leak, though.)
In Global.Application_Start a global filter resolved via Autofac was registered. The constructor of that filter class took a Func<IDependency> as an argument, and some sub-dependency of it had a dependency on IService. The Func was only evaluated from within a web request, but it appears the lifetime scope of all dependencies resolved during Func evaluation was the scope the Func itself was resolved in, i.e. the application scope. (Not 100% certain on this. But if the perHttpRequest instance scope is requested for the IService resolution, one instance is created for the application lifetime via the resolution in the global filter and never disposed, while another is created for each HTTP request and properly disposed at the end of the request.)
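For reference, a minimal sketch of a registration along these lines, assuming the Autofac MVC and WCF integration packages; IService and the "MyServiceEndpoint" endpoint configuration name are placeholders:

var builder = new ContainerBuilder();

// The ChannelFactory is expensive to create and thread safe: register it once.
builder.Register(c => new ChannelFactory<IService>("MyServiceEndpoint"))
       .SingleInstance();

// Channels are cheap but must be disposed: scope them to the HTTP request so
// Autofac closes (or aborts) them when the request's lifetime scope ends.
builder.Register(c => c.Resolve<ChannelFactory<IService>>().CreateChannel())
       .As<IService>()
       .UseWcfSafeRelease()       // from Autofac.Integration.Wcf
       .InstancePerHttpRequest();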
Related
I am trying to use the Apache Commons Pool library to implement object pooling for objects that are expensive to create in my application. For resource pooling I have used the GenericObjectPool class, i.e. the default pooling implementation provided by the library. To ensure that we do not end up having several idle objects in memory, I set the minEvictableIdleTimeMillis and timeBetweenEvictionRunsMillis properties to 30 minutes.
As I understand from other questions, blog posts and the API documentation, these properties trigger a separate thread that evicts idle objects from the pool.
Could someone tell me whether this has any adverse impact on application performance, and whether there is any way to verify that the eviction thread actually runs?
The library comes with a performance disclaimer for when the evictor is enabled:
Eviction runs contend with client threads for access to objects in the pool, so if they run too frequently performance issues may result.
Reference: https://commons.apache.org/proper/commons-pool/api-1.6/org/apache/commons/pool/impl/GenericObjectPool.html
However, we have a high-TPS system running eviction every 1 second and we don't see much of a performance bottleneck.
As far as verifying the eviction thread runs is concerned, you can override the evict() method in your subclass of GenericObjectPool and add a log line:
@Override
public void evict() throws Exception {
    // add a log statement here to confirm the evictor thread actually fires
    super.evict();
}
How can I make the WCF server instance (the instance of the class in the .svc.cs / .svc.vb file) stay alive between requests?
It's a stateless, read-only type of service: I'm fine with different clients reusing the same instance. However, it's not thread-safe: I don't want two threads to execute a method on this instance concurrently.
Ideally, what I'm looking for is for WCF to manage a "worker pool" of these instances. Say, 10. A new request comes in: fetch an instance, handle the request. Request over, return the instance to the pool. Already 10 concurrent requests running? Make the 11th wait until a worker is free.
What I /don't/ want is per-client sessions. Startup for these instances is expensive, I don't want to do that every time a new client connects.
Another thing I don't want: dealing with this client-side. This is not the responsibility of the client, which should know nothing about the implementation of the server. And I can't always control that.
I'm getting a bit lost in unfamiliar terminology from the MSDN docs. I have a lot working, but this pool system I just can't seem to get right.
Do I have to create a static pool and manage it myself?
Thanks
PS: A source of confusion for me is that almost everything on this topic points toward the configuration of the bindings, like basicHttp or wsHttp. But that doesn't sound right: this should be at a higher level, unrelated to the binding, because it is about how the workers are managed. Or not?
In the event that you have a WCF service that centralizes business logic, provides/controls access to another “single” backend resource (e.g. data file, network socket) or otherwise contains some type of shared resource, then you most likely need to implement a singleton.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
In general, use a singleton object if it maps well to a natural singleton in the application domain. A singleton implies the singleton has some valuable state that you want to share across multiple clients. The problem is that when multiple clients connect to the singleton, they may all do so concurrently on multiple worker threads. The singleton must synchronize access to its state to avoid state corruption. This in turn means that only one client at a time can access the singleton. This may degrade responsiveness and availability to the point that the singleton is unusable as the system grows.
The singleton service is the ultimate shareable service, which has both pros (as indicated above) and cons (as implied in your question, you have to manage thread safety). When a service is configured as a singleton, all clients are connected to the same single well-known instance independently of each other, regardless of which endpoint of the service they connect to. The singleton service lives forever and is only disposed of once the host shuts down. The singleton is created exactly once, when the host is created.
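To make the trade-off concrete, here is a minimal sketch (IReadOnlyService and ExpensiveResource are hypothetical placeholders): with ConcurrencyMode.Single, WCF itself serializes all calls onto the one instance, so no manual locking is needed, but only one request runs at a time; switching to ConcurrencyMode.Multiple lifts that restriction but makes thread safety your responsibility.

// Sketch: one instance for the whole host, calls serialized by WCF.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class ReadOnlyService : IReadOnlyService
{
    // Built once, when the host starts, and reused for every request.
    private readonly ExpensiveResource _resource = new ExpensiveResource();

    public string Lookup(string key)
    {
        // Never runs concurrently under ConcurrencyMode.Single.
        return _resource.Find(key);
    }
}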
https://msdn.microsoft.com/en-us/magazine/cc163590.aspx
I am trying to understand how instancing works in WCF. I have a WCF service with InstanceContextMode set to PerCall (so a new instance is created for each call from every client) and ConcurrencyMode set to Single (so the service instance executes exactly one or no operation call at a time).
So with this I understand that when a client connects, a new instance is created. But what happens when the client leaves the service? Does the instance die? The reason I ask is that I need to implement a ConcurrentQueue in the service. A client will connect to the service, put loads of data in to be processed, and then leave. Workers will work off the queue. After the work is finished I need the instance to be destroyed.
Basically, learning from the "WCF Master Class" taught by Juval Lowy, per-call activation is the preferred choice for services that need to scale, i.e. that need to handle lots of concurrent requests.
Why?
With per-call activation, each incoming request (up to a configurable limit) gets its own fresh, isolated instance of the service class to handle the request. Instantiating a service class (a plain old .NET class) is not a big overhead, and the WCF runtime can easily manage 10, 20, 50 concurrently running service instances (if your server hardware can handle it). Since each request gets its own service instance, that instance only ever handles one thread at a time - which makes it very easy to program and maintain: no fussy locks and synchronization needed to make it thread-safe.
Using a singleton service (InstanceContextMode=Single) is either a terrible bottleneck (with ConcurrencyMode=Single, every request is serialized and handled one after another), or, if you want decent performance, you need ConcurrencyMode=Multiple - but then one instance of your service class handles multiple concurrent threads, and you, as the programmer of that service class, must make 100% sure that all your code and all your access to variables is completely thread-safe - and that's quite a task indeed! Only very few programmers really master this black art.
In my opinion, the overhead of creating service class instances in the per-call scenario is nothing compared to the requirements of creating a fully thread-safe implementation for a multi-threaded singleton WCF service class.
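The "configurable limit" mentioned above is the WCF service throttle. A rough sketch of setting it in code for a self-hosted service (MyPerCallService is a placeholder; the same settings are available as <serviceThrottling> in config):

// Sketch: cap a per-call service at 10 concurrent calls/instances.
// ServiceThrottlingBehavior lives in System.ServiceModel.Description.
var host = new ServiceHost(typeof(MyPerCallService));
host.Description.Behaviors.Add(new ServiceThrottlingBehavior
{
    MaxConcurrentCalls = 10,
    MaxConcurrentInstances = 10,
    MaxConcurrentSessions = 10
});
host.Open();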
So in your concrete example with a central queue, I would:
create a simple WCF per-call service that gets called by your clients and that only puts the message into the queue (in an appropriate fashion, e.g. possibly transforming the incoming data). This is a quick task, no heavy processing of any kind, so your service class stays very simple and straightforward, and there is no big overhead in creating those class instances
create a worker service (a Windows NT service or something) that then reads the queue and does the processing - this is basically totally independent of any WCF code - this is just doing dequeuing and processing
So what I'm saying is: try to separate the service call (which delivers the data) from building up a queue and doing large, processing-intensive computation - split up the responsibilities: the WCF service should only receive the data, put it into a queue or database, and be done with it; a second, separate process should do the processing/heavy lifting. That keeps your WCF service lean'n'mean.
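A rough sketch of the receive-and-enqueue half, assuming a hypothetical IQueueService contract and a plain string payload. Here the queue is an in-process ConcurrentQueue drained by a background worker inside the same host; for a truly separate worker process you would write to a durable queue (e.g. MSMQ) or a database instead:

using System.Collections.Concurrent;
using System.ServiceModel;

[ServiceContract]
public interface IQueueService
{
    [OperationContract]
    void Submit(string payload);
}

// Sketch: a thin per-call service that only enqueues; heavy processing happens elsewhere.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class QueueService : IQueueService
{
    // static: shared across all per-call instances; ConcurrentQueue<T> is thread safe.
    private static readonly ConcurrentQueue<string> Queue = new ConcurrentQueue<string>();

    public void Submit(string payload)
    {
        Queue.Enqueue(payload); // cheap and non-blocking; the call returns immediately
    }
}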
Yes, per call means you will have a new instance of the service for each call. Once you set the instance context mode to PerCall and ConcurrencyMode to Single, it will be single-threaded per call. When the client is done with the job and leaves, your instance will be disposed. In this case, you want to be careful not to create your ConcurrentQueue multiple times - as far as I can imagine, you will need a single ConcurrentQueue? Is that correct?
I would recommend you use InstanceContextMode=Single and ConcurrencyMode=Multiple; this scales better. If you use this scheme, you will have a single ConcurrentQueue and you can store all your items within that queue.
One small note: ConcurrentQueue has a known bug you should be aware of; check the bug database.
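A minimal sketch of that variant, reusing the hypothetical IQueueService contract from the earlier sketch: the single instance lives for the lifetime of the host, so one ConcurrentQueue field is enough, and ConcurrentQueue itself handles the concurrent Enqueue calls.

// Sketch: one service instance for the whole host, calls dispatched on multiple threads.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class SingletonQueueService : IQueueService
{
    // One queue for the lifetime of the host; safe for concurrent Enqueue/TryDequeue.
    private readonly ConcurrentQueue<string> _queue = new ConcurrentQueue<string>();

    public void Submit(string payload)
    {
        _queue.Enqueue(payload);
    }
}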
I've been reading a lot of WCF articles online, and it seems like most people cache the ChannelFactory objects but not the channels themselves. It appears that most people are afraid to cache channels because they don't want to handle the network faults that could render a cached channel unusable. But that could easily be handled by catching the CommunicationException on the method call, recreating the channel, and replaying the method using reflection.
Then there are people who think channel caching is bad because all communication will go through a single channel. See the following thread:
http://social.msdn.microsoft.com/Forums/is/wcf/thread/9cbdf92a-a749-40ce-9ebe-3f2622cd78ee
Is this necessarily a bad thing? Can you not share channels across threads? Will performance suffer because multiple method calls made to this single channel will get processed serially?
I haven't found evidence that sharing channels will degrade performance. What I did find is that using a cached channel is about 5 times faster than using a non-cached channel, even if it means having to use reflection to make the method calls on the cached channels.
The other advantage is not having to surround all your WCF calls with try/catch/finally just to call Close(), Abort(), or Dispose() on the channel when you are done with it. To me it seems like WCF took a step in the wrong direction by forcing developers to manage WCF channel resources. In .NET Remoting, you created the proxy using the Activator class and didn't have to do anything to clean it up; the .NET Framework handled all of that for you.
Two main reasons:
A ChannelFactory is expensive to create and it is thread safe => a perfect candidate for caching.
A channel generated by a channel factory is not expensive to create, but it is not thread safe (well, in reality it is thread safe, but concurrent calls will be blocked and executed sequentially) => don't cache it in a multithreaded environment.
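A minimal sketch of that split, assuming IService is the service contract and "MyServiceEndpoint" is the endpoint configuration name: cache the factory once, create a channel per call, and close it on success or abort it on failure.

using System;
using System.ServiceModel;

public static class ServiceClient
{
    // Expensive and thread safe: create once and reuse.
    private static readonly ChannelFactory<IService> Factory =
        new ChannelFactory<IService>("MyServiceEndpoint");

    public static TResult Call<TResult>(Func<IService, TResult> operation)
    {
        var channel = Factory.CreateChannel();      // cheap, one per call
        try
        {
            var result = operation(channel);
            ((IClientChannel)channel).Close();      // graceful close on success
            return result;
        }
        catch
        {
            ((IClientChannel)channel).Abort();      // never Close a faulted channel
            throw;
        }
    }
}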
Here's a nice article which goes into further details.
I have a service that uses an object that is fairly expensive to create. I would like to improve the performance from call to call.
When I remove the object and run a load test (measuring how many invocations I can do per second), I see a massive performance difference between the two situations.
Situation 1. I remove the expensive object: Invocations per sec ~= 130.
Situation 2. I use it as normal, with the object: rate is ~= 2 per sec.
I have a .NET WCF service hosted in IIS on Windows Server 2008.
I was wondering if there was a way I could create an object cache/pool and hand those objects to each invocation of the service.
Any thoughts/comments that may help with this situation?
You could run the WCF service in per-session mode and create the object using the singleton pattern; that way you only create the object once per session, as opposed to once per call.
You may also be able to cache the objects using Enterprise Library caching.
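If caching the object yourself is acceptable, a minimal sketch of the "create once, reuse across calls" idea (ExpensiveObject is a placeholder for whatever is costly to build, and the sketch assumes it is safe to share, or that access to it is synchronized elsewhere):

using System;

public class ExpensiveObject { /* stand-in for the costly-to-create type */ }

public static class ExpensiveObjectHolder
{
    // Built once per host process, on first access, and shared by all service instances.
    private static readonly Lazy<ExpensiveObject> Instance =
        new Lazy<ExpensiveObject>(() => new ExpensiveObject(), isThreadSafe: true);

    public static ExpensiveObject Current
    {
        get { return Instance.Value; }
    }
}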
If the expensive part is building up the state of the object, and you only want to limit the number of times you create that object, I suggest using a Durable Service.
A durable WCF component persists its state between calls and between clients. Each time you call a method, it writes its state to a persistence store (the default is a SQL Server database). The catch is that you have to pass a context token around between whoever is going to call your durable component. This token could be persisted in a file, a database, or wherever.
This would allow you to make a call against the component, have it create its state one time, and then keep calling it from other clients as long as they have access to its context token.
Nothing hangs around in memory since the object goes away each time your client closes, but the state persists.
Here's a tutorial.
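A very rough sketch of the shape of such a service, from memory of the .NET 3.5 durable services feature; the IExpensiveStateService contract, the ExpensiveState type, the context-aware binding (e.g. wsHttpContextBinding) and the persistence provider configuration in web.config are all hypothetical or omitted here, so treat it only as an outline:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[Serializable]
[DurableService]
public class ExpensiveStateService : IExpensiveStateService
{
    private ExpensiveState _state;   // serialized to the persistence store between calls

    [DurableOperation(CanCreateInstance = true)]
    public void Start()
    {
        _state = new ExpensiveState(); // the expensive build happens once
    }

    [DurableOperation]
    public int Compute(int input)
    {
        return _state.Compute(input); // state is rehydrated from the store for this call
    }

    [DurableOperation(CompletesInstance = true)]
    public void Finish()
    {
        // the persisted instance is removed once this operation completes
    }
}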