Monitor ThreadPool use - WCF

I wish to do some benchmarking on a busy WCF service (IIS-hosted, PerSession). Since WCF takes a ThreadPool thread for each service call, I'd like to know whether the ThreadPool's max thread count is ever reached, and whether I should increase it (SetMaxThreads). The only way I can think of to get hard facts is to instrument the code with ThreadPool.GetAvailableThreads. Is there any way for me to monitor whether the ThreadPool has reached max threads and is waiting for threads to be released? Thanks.
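A minimal sketch of that kind of instrumentation (the ThreadPoolMonitor name and the logging are placeholders, not part of WCF): sample GetMaxThreads and GetAvailableThreads on a timer and log when the number of threads in use approaches the maximum.

using System;
using System.Threading;

// Rough monitoring sketch: on a timer, compare the pool's maximum thread
// counts with the currently available counts. If the "in use" number gets
// close to the maximum, calls are probably queuing for threads.
static class ThreadPoolMonitor
{
    private static Timer _timer;

    public static void Start(TimeSpan interval)
    {
        _timer = new Timer(state =>
        {
            int maxWorkers, maxIo, freeWorkers, freeIo;
            ThreadPool.GetMaxThreads(out maxWorkers, out maxIo);
            ThreadPool.GetAvailableThreads(out freeWorkers, out freeIo);

            int busyWorkers = maxWorkers - freeWorkers;
            int busyIo = maxIo - freeIo;

            // Replace with your own logging or a performance counter.
            Console.WriteLine("Worker threads in use: {0}/{1}, IOCP: {2}/{3}",
                busyWorkers, maxWorkers, busyIo, maxIo);
        }, null, TimeSpan.Zero, interval);
    }
}

Calling something like ThreadPoolMonitor.Start(TimeSpan.FromSeconds(5)) from the host's startup code gives a rough picture of whether the pool ever saturates, without touching the service operations themselves.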

Related

Spring AMQP RabbitMQ does not consume all messages, workers finish prematurely

I am struggling to find the proper setting to delay the timeout for workers in RabbitMQ.
Since version 2.0, prefetchCount defaults to 250, and exactly that number of messages is received and processed.
I would like to keep the workers busy until they clear the entire queue (let's say 10k messages).
I can change this number manually, for example by raising the default limit or assigning more threads (which multiplies the default).
The result is always the same: once that number is reached, the workers stop their job and the application finishes its execution with
o.s.a.r.l.SimpleMessageListenerContainer : Successfully waited for workers to finish.
I would like them to finish when the queue is empty. Any ideas?
The logger.info("Successfully waited for workers to finish."); happens only in one place - doShutdown(). And this one is called from the shutdown(), which is called from the destroy() or stop().
I somehow think that you exit from your application by some reason. You just don't block the main() to work permanently.
Please, share a simple project we can play with.

Need to hold request for a thread until previous request is finished

I am looking for a technique to hold off on starting a new thread (BackgroundWorker, Task, etc.) while a previous thread is still processing. The thread uses an object writer, and if the writer is busy I cannot use it in the next thread until the write finishes.
Note that the processing that occurs before each thread request is long enough that there should not normally be an issue; this is just a precaution.
I am guessing that how I request the thread here is critical to getting some sort of signal back that lets the next thread start. I could use some help on how to set this up. If anyone has a specific scenario with a similar design, I would be happy to research the recommended technique. I am somewhat new to this sort of thread processing.
vb.net
I'm not sure how you plan on implementing this, but you should try to use the TPL rather than using threads directly. With Tasks, you can wait on them to complete; a small sketch follows the links below.
See the following example https://msdn.microsoft.com/en-us/library/dd537610(v=vs.100).aspx
And read the following on Threads vs. Tasks if you need more information on the differences.
http://blog.slaks.net/2013-10-11/threads-vs-tasks/
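For instance, a minimal sketch (C# here; the same Task types are available from VB.NET) that chains each write onto the previous one so the shared writer is never used by two tasks at once. WriteQueue and writeAction are illustrative names, not an established API:

using System;
using System.Threading.Tasks;

class WriteQueue
{
    private readonly object _gate = new object();
    // An already-completed task to chain the first write onto.
    private Task _previous = Task.FromResult(0);

    // Chain the new write after the previous one; the returned Task
    // completes when this particular write has finished.
    public Task Enqueue(Action writeAction)
    {
        lock (_gate)
        {
            _previous = _previous.ContinueWith(_ => writeAction(),
                TaskScheduler.Default);
            return _previous;
        }
    }
}

Each piece of work then calls something like queue.Enqueue(() => writer.Write(data)), and the next write only starts once the previous Task has completed.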
Typically mutexes are used for synchronization.
https://msdn.microsoft.com/en-us/library/windows/desktop/ms684266(v=vs.85).aspx
Note that you'll also need to handle WAIT_ABANDONED, which is the status returned when a thread that held the mutex dies instead of finishing.
Examples and more info for .Net here: https://msdn.microsoft.com/en-us/library/system.threading.mutex(v=vs.110).aspx
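As a sketch of the mutex approach in .NET (a SemaphoreSlim or lock would also do if everything runs in one process; Mutex matters mainly when the writer is shared across processes; WriterGuard and WriteSafely are illustrative names):

using System;
using System.Threading;

static class WriterGuard
{
    // One mutex guards the shared writer, so writes never overlap.
    private static readonly Mutex WriterMutex = new Mutex();

    public static void WriteSafely(Action writeAction)
    {
        // WaitOne throws AbandonedMutexException if a previous owner died
        // while holding the mutex - the managed counterpart of WAIT_ABANDONED.
        WriterMutex.WaitOne();
        try
        {
            writeAction();
        }
        finally
        {
            WriterMutex.ReleaseMutex();
        }
    }
}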

How to tell for a particular request that all available worker threads are BUSY

I have a high-rate UDP server using Netty (3.6.6-Final), but I notice that the back-end servers can take 1 to 10 seconds to respond - I have no control over those, so I cannot improve latency there.
What happens is that all handler worker threads are busy waiting for responses, so any new request must wait to be processed, and over time its response comes very late. Is it possible to detect, for a given request, that the thread pool is exhausted, so that I can intercept the request early and issue a "server busy" response?
I would use an ExecutionHandler configured with an appropriate ThreadPoolExecutor, with a maximum thread count and a bounded task queue. By choosing different RejectedExecutionHandler policies, you can either catch the RejectedExecutionException to answer with a "server busy" response, or use a caller-runs policy, in which case the IO worker thread will execute the task and create back pressure (but that is what you wanted to avoid).
Either way, an execution handler with a limited capacity is the way forward.

WCF polling, background processing, and resource starvation

I have a web service, implemented with WCF and hosted in IIS7, with a submit-poll communication pattern. An initial request is made, which returns quickly and kicks off a background process. The client polls for the status of the background process. This interface is set and can't be changed (it's a simulation of an external service we depend on).
I implemented the background processing by adding another service contract to the existing service with a one-way message contract that starts the long-running process. The "background" service keeps a database updated with the status in order to communicate with the main service. This avoids creating any new web services or items to deploy.
The problem is that the background process is very CPU intensive, and it seems to be starving out the other service calls. It takes up an entire processor, and while a single instance of the background process is running, status-polling calls to the main service can take over a minute. I don't care how long the background process takes.
Is there any way to throttle the resource usage of the background method? Or is there an obvious way to do long-running async processing in WCF without changing my submit/poll service contract? Would separating them into different web services help if the two services were still running on the same server?
The first thing I would try would be to lower the priority.
If you're actually spinning off a separate process for the background work, then you can do it like this:
Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.BelowNormal;
If it's really just a background thread, use this instead (from within the thread):
Thread.CurrentThread.Priority = ThreadPriority.BelowNormal;
(Actually, it's better to start the thread suspended and change the priority at the caller before running it, but it's generally OK to lower your own priority.)
At the very least it should help determine whether or not it's really a CPU issue. If you still have problems after lowering the priority then it might be something else that's getting starved, like file or network I/O.
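For example, a small sketch of setting the priority from the caller before the thread runs (RunLongSimulation stands in for your CPU-intensive background method):

using System.Threading;

// Create the worker with lowered priority before it starts, so it never
// competes with the polling calls running at normal priority.
var worker = new Thread(RunLongSimulation)
{
    IsBackground = true,
    Priority = ThreadPriority.BelowNormal
};
worker.Start();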

Performance of WCF with net.tcp

I have a WCF net.tcp service hosted with the built-in ServiceHost, and when doing some stress tests I see a strange behavior. The first time I send a bunch of requests, 5 to 10 of them are answered quickly, and the rest return at roughly 2-second intervals. The second time I send the requests, 10-20 are returned quickly, and the rest again at 2-second intervals.
The above repeats until I can get over 100 requests returned quickly, but if I wait a minute or so, the memory usage of the service goes down and the requests go back to only 5-10 returning quickly.
The service I am testing has a small delay so that I can get many open connections at the same time; if this delay is removed, the requests return so quickly that I have perhaps 2-5 connections open at once. The delay is there to simulate DB connections and other outgoing calls.
From the behavior it looks like the ServiceHost is allocating something - threads, class instances - but I cannot figure out what it is.
I could have a timer in the client that calls the service to keep it working, but that seems like a bad solution.
If I have a high sustained load to the service it will crunch all requests quickly, but if I have a period of low activity and then a surge of connections comes in the service will be slow.
I guess my question is: WHAT gets allocated during high load of the WCF service, and HOW can I configure the service to preallocate more of whatever that is?
EDIT:
I did some more testing, and looking at Task Manager for the process I can see that when the ServiceHost is 'resting' there are 10 threads open, but when I start sending requests the thread count goes up. As long as the thread count is high, the ServiceHost processes incoming requests quickly, but if I pause sending requests, the open thread count decreases and subsequent requests start taking longer to process.
Now, how can I tell the ServiceHost to keep a bunch of threads open - or at least more than the 10-12 that it keeps by default?
Well, after lots of googling, it seems that the problem is the thread pool. The CLR thread pool allocates a few threads and, once they are in use, throttles the creation of new ones; after a while it also releases unused threads.
There is some confusion about a bug that meant that the ThreadPool did not honor the SetMinThreads call.
http://www.michaelckennedy.net/blog/PermaLink,guid,708ee9c0-a1fd-46e5-8fa0-b1894ad6ce0f.aspx
I am not sure whether this bug has been fixed, because when I modify the ThreadPool settings the problem persists.
The thing that determines how many requests are handled simultaneously is the ServiceThrottlingBehavior. There are a number of different thresholds that limit the number of requests being processed. This also depends on the binding you are using: for example, wsHttpBinding defaults to sessions on, while basicHttpBinding uses no sessions, so the default session limit of 10 is not a problem there.
See http://msdn.microsoft.com/en-us/library/ms735114.aspx for more details.
The bug you referenced is fixed in .NET 3.5 SP1. That may have had something to do with the problem, but I think it's much more likely that throttling is your problem rather than threading, as Maurice keyed into.
<system.serviceModel>
  <service name="???" >
    <endpoint ... />
  </service>
</system.serviceModel>
What's the throttle limit for this "empty" config? 10 sessions, 16 concurrent calls! Beware.
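Since the service is self-hosted, here is a sketch of raising those limits programmatically before opening the ServiceHost (the values are illustrative and should be tuned to your load; the same can be done in config with a serviceThrottling behavior element):

using System.ServiceModel;
using System.ServiceModel.Description;

var host = new ServiceHost(typeof(YourService));

// Raise the throttle above the old defaults (10 sessions, 16 concurrent calls)
// before Open() is called; only add the behavior if one isn't already present.
if (host.Description.Behaviors.Find<ServiceThrottlingBehavior>() == null)
{
    host.Description.Behaviors.Add(new ServiceThrottlingBehavior
    {
        MaxConcurrentCalls = 64,
        MaxConcurrentSessions = 64,
        MaxConcurrentInstances = 64
    });
}

host.Open();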
Here's more on the threading:
http://www.michaelckennedy.net/blog/2008/08/20/ThreadPoolBugInNET20SP1IsFixed.aspx
This feels like a hack, but it seems to solve your issue. The problem is that the thread pool takes time to spin up a new thread, so what you really need is threads waiting on standby. Add a constructor to your service and set the minimum number of threads you would like.
public YourService()
{
    int workerThreads;
    int portThreads;

    // Read the current minimums, then raise the worker-thread minimum so the
    // pool keeps threads on standby instead of spinning them up on demand.
    ThreadPool.GetMinThreads(out workerThreads, out portThreads);
    ThreadPool.SetMinThreads(200, portThreads);
}