WCF web service performance counters for queuing

I am trying to performance test a WCF web service that should get a lot of traffic. Which performance counters are sensible to use, and for what purpose? Naturally I am looking at CPU and RAM, but I would also like to know when IIS is queuing and when it's having trouble...
Any advice on sensible performance counters gratefully received...
Cheers alex

MSDN has an entire section on WCF administration and diagnostics, and specifically on performance counters in WCF.
There are also specific sections on performance counters for hosted service calls, as well as for endpoints and operations.
I would suggest looking through those first, as there is a good amount of valuable information there.

Analysing performance counters is complicated and takes a lot of practice, which is my way of saying that I am not experienced enough to give a complete list.
You are going to look for some specific things to start with.
First off is, of course, how long the web service calls take to return. This tells you whether you even have a performance issue at that load.
Next, everyone looks at CPU. On its own, this really does not tell you a lot.
RAM is good to watch, but you also want to know how often your app is paging to disk, so check Page Faults/sec.
Check your logical and physical disks for Current Disk Queue Length. If your physical disk is queuing at all, you are reading/writing too much to the disk.
Beyond that you would normally be trying to find a specific and likely obscure problem.
I usually take performance testing in stages. Do a first test with the basics, and if a particular page is having a problem, look at the load it is causing.
If the whole production server is not performing adequately, it is easier to add more hardware, but I prefer looking at the code that is running and making that better.
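If you want to watch those basics from code rather than in perfmon, here is a minimal C# sketch using System.Diagnostics.PerformanceCounter; the "_Total" instance names are an assumption and the physical disk instance will vary per machine.

using System;
using System.Diagnostics;
using System.Threading;

class BasicCounterSampler
{
    static void Main()
    {
        // Counters mentioned above; adjust instance names for your server.
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total", readOnly: true);
        var pageFaults = new PerformanceCounter("Memory", "Page Faults/sec", readOnly: true);
        var diskQueue = new PerformanceCounter("PhysicalDisk", "Current Disk Queue Length", "_Total", readOnly: true);

        // The first read of a rate counter always returns 0, so prime them once.
        cpu.NextValue();
        pageFaults.NextValue();

        for (int i = 0; i < 10; i++)
        {
            Thread.Sleep(1000);
            Console.WriteLine(
                $"CPU: {cpu.NextValue():F1}%  Page Faults/sec: {pageFaults.NextValue():F0}  Disk queue: {diskQueue.NextValue():F0}");
        }
    }
}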

Before you run your performance monitors, you want to add a registry key:
Under HKLM\SYSTEM\CurrentControlSet\Services, add a key named ServiceModelService 4.0.0.0, and under that add a key named Performance containing a DWORD value named FileMappingSize.
The value should be roughly: number of services exposed * 33 * 350.
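If you prefer to script that registry change, here is a rough C# sketch using Microsoft.Win32.Registry; the service count is a made-up example, and the sizing is just the rule quoted above.

using Microsoft.Win32;

class PerfCounterRegistrySetup
{
    static void Main()
    {
        int servicesExposed = 10;                         // hypothetical count, for illustration only
        int fileMappingSize = servicesExposed * 33 * 350; // sizing rule quoted above

        // Creates the key path if it does not already exist; requires admin rights.
        Registry.SetValue(
            @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ServiceModelService 4.0.0.0\Performance",
            "FileMappingSize",
            fileMappingSize,
            RegistryValueKind.DWord);
    }
}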
In your config you would then add:
<system.serviceModel>
  <diagnostics performanceCounters="ServiceOnly" />
</system.serviceModel>
You can watch the following counters:
CPU, RAM (for memory leaks), and, for each service, Calls, Calls Per Second, Call Duration, and Calls Outstanding.
CPU will show you how heavily you are saturating your server.
RAM will show whether you have memory leaks: if it keeps growing and growing, something is leaking.
Calls shows the cumulative number of calls you have received.
Calls Per Second will give you the volume you're handling.
Calls Outstanding shows clients that are waiting because your services could not keep up with the volume.
If you find some questionable numbers in these groupings, then start looking at other elements like Calls Faulted or Calls Failed. (I'm not sure of the difference between a failure and a fault.)
It is rare that you would need to dig further into the issues than what the service-only numbers provide. When you get into the other two sets of counters, your shared memory utilization gets really high.
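As a rough illustration of watching those service-only counters from code, here is a hedged C# sketch; it assumes performanceCounters="ServiceOnly" is enabled and the service has already received traffic, and it discovers the instance names at runtime because WCF encodes the service name and base address into them.

using System;
using System.Diagnostics;
using System.Threading;

class WcfServiceCounterWatcher
{
    static void Main()
    {
        // Category that WCF publishes when ServiceOnly counters are enabled.
        var category = new PerformanceCounterCategory("ServiceModelService 4.0.0.0");

        foreach (string instance in category.GetInstanceNames())
        {
            var calls = new PerformanceCounter("ServiceModelService 4.0.0.0", "Calls", instance, readOnly: true);
            var callsPerSecond = new PerformanceCounter("ServiceModelService 4.0.0.0", "Calls Per Second", instance, readOnly: true);
            var callsOutstanding = new PerformanceCounter("ServiceModelService 4.0.0.0", "Calls Outstanding", instance, readOnly: true);

            // Rate counters need two samples before they report a meaningful value.
            callsPerSecond.NextValue();
            Thread.Sleep(1000);

            Console.WriteLine(
                $"{instance}: Calls={calls.NextValue():F0} Calls/sec={callsPerSecond.NextValue():F1} Outstanding={callsOutstanding.NextValue():F0}");
        }
    }
}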

Related

Understanding BLOCKED_TIME in PerfView

We suspect that we're experiencing thread pool starvation on a server that is running a couple of ASP.NET Core APIs and a couple of .NET Core consoles.
I ran PerfView on one of our servers where we suspect problems with thread pool starvation. However, I'm having a bit of trouble analyzing the results.
I ran PerfView /threadTime collect for about 60 seconds. This is the result I got (I chose to look at one of our ASP.NET Core APIs):
Looking at "By Name" we can see that a lot of time is spent in BLOCKED_TIME. If I double-click, I'm taken to the following view, where I can expand one of the nodes to get the view below (the redacted part is the name of our API process):
What does that tell me? Shouldn't I be able to see what exactly is blocking? And does it look like the problem is that a lot of threads are blocking, each for a small amount of time?
Are there any other conclusions we can draw from this?
BLOCKED_TIME generally means a period when the thread wasn't doing anything at all. This could be periods of I/O, where network or other types of latency are involved or time spent waiting on locks such as in situations with semaphores. In short, this doesn't necessarily tell you anything, as there's perfectly standard and reasonable reasons for the thread to be idled. However, a goodish amount of time spent blocked can be an indication of an underlying problem. Perhaps you have too much network latency. Perhaps you're trying to do too much file system work on a slow drive. In short, it may or may not indicate a problem, and even if it does indicate a problem, it doesn't really tell you what the problem is.
In general, if you're experiencing thread starvation, the first thing you should look at is thread pool utilization. Are you using async everywhere you can? Are you doing things that are big no-nos in web apps such as using Task.Run, Task.StartNew or worse, Thread.Start? All those created threads are coming out of the same thread pool, and thus proportionally reducing your server throughput.
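To make the async point concrete, here is a minimal, hypothetical ASP.NET Core controller contrasting the two styles; the route names and the injected HttpClient (assumed to be registered via AddHttpClient) are illustrative assumptions, not anything from the original question.

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/reports")]
public class ReportsController : ControllerBase   // hypothetical controller
{
    private readonly HttpClient _client;

    public ReportsController(HttpClient client) => _client = client;

    // Anti-pattern: blocking on async work holds a thread-pool thread
    // for the whole duration of the outbound call.
    [HttpGet("blocking")]
    public string GetBlocking() =>
        _client.GetStringAsync("https://example.com/data").Result;

    // Preferred: the thread goes back to the pool while the I/O is in flight.
    [HttpGet("async")]
    public async Task<string> GetNonBlocking() =>
        await _client.GetStringAsync("https://example.com/data");
}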
There's an all too common pattern of attempting to schedule long-running jobs by shuffling them to new threads. That's death to a web application. All threads in the pool are there to service requests, not long-running jobs, and as such, requests should be handled quickly and efficiently so that the thread can be returned to the pool in short order to field other requests. If you need to background work, you need to truly background it, by offloading to another process or even a different machine entirely.
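Short of moving the work to another process or machine, one common in-process middle ground is a dedicated queue drained by a hosted service rather than Task.Run per request. A minimal sketch, assuming System.Threading.Channels and Microsoft.Extensions.Hosting, with a made-up WorkItem type:

using System.Collections.Generic;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class WorkItem                       // hypothetical work item
{
    public string Payload { get; set; }
}

// Requests enqueue work here instead of spinning up work with Task.Run.
public class WorkQueue
{
    private readonly Channel<WorkItem> _channel = Channel.CreateUnbounded<WorkItem>();

    public ValueTask EnqueueAsync(WorkItem item) => _channel.Writer.WriteAsync(item);

    public IAsyncEnumerable<WorkItem> DequeueAllAsync(CancellationToken ct) =>
        _channel.Reader.ReadAllAsync(ct);
}

// A single hosted consumer drains the queue outside the request path.
public class WorkProcessor : BackgroundService
{
    private readonly WorkQueue _queue;

    public WorkProcessor(WorkQueue queue) => _queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        await foreach (var item in _queue.DequeueAllAsync(stoppingToken))
        {
            // The long-running work happens here, away from request threads.
            await Task.Delay(1000, stoppingToken);   // placeholder for real work
        }
    }
}

// Registration, e.g. in Startup.ConfigureServices:
//   services.AddSingleton<WorkQueue>();
//   services.AddHostedService<WorkProcessor>();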
Short of all that, maybe you're just getting more load than the server can handle in general. That's always a possibility. Perhaps you need to vertically scale your system resources (and the thread pool with it). Perhaps you need to horizontally scale by replicating this server with a load balancer in front. Given that you're running multiple different things on the same server, an easy way to horizontally scale is to simply divvy out these things to their own machines. That alone would probably help tremendously. However, scaling, either vertically or horizontally, should be your last resort. Make sure you're using resources efficiently first, before throwing more resources at your inefficient things.

High CPU with ImageResizer DiskCache plugin

We are noticing occasional periods of high CPU on a web server that happens to use ImageResizer. Here are the surprising results of a trace performed with NewRelic's thread profiler during such a spike:
It would appear that the cleanup routine associated with ImageResizer's DiskCache plugin is responsible for a significant percentage of the high CPU consumption associated with this application. We have autoClean on, but otherwise we're configured to use the defaults, which I understand are optimal for most typical situations:
<diskCache autoClean="true" />
Armed with this information, is there anything I can do to relieve the CPU spikes? I'm open to disabling autoClean and setting up a simple nightly cleanup routine, but my understanding is that this plugin is built to be smart about how it uses resources. Has anyone experienced this and had any luck simply changing the default configuration?
This is an ASP.NET MVC application running on Windows Server 2008 R2 with ImageResizer.Plugins.DiskCache 3.4.3.
Sampling, or why the profiling is unhelpful
New Relic's thread profiler uses a technique called sampling - it does not instrument the calls - and therefore cannot know if CPU usage is actually occurring.
Looking at the provided screenshot, we can see that the backtrace of the cleanup thread (there is only ever one) is frequently found at the WaitHandle.WaitAny and WaitHandle.WaitOne calls. These methods are low-level synchronization constructs that do not spin or consume CPU resources, but rather efficiently return CPU time back to other threads, and resume on a signal.
Correct profilers should be able to detect idle or waiting threads and eliminate them from their statistical analysis. Because New Relic's profiler failed to do that, there is no useful way to interpret the data it's giving you.
If you have more than 7,000 files in /imagecache, here is one way to improve performance
By default, in V3, DiskCache uses 32 subfolders with 400 items per folder (1000 hard limit). Due to imperfect hash distribution, this means that you may start seeing cleanup occur at as few as 7,000 images, and you will start thrashing the disk at ~12,000 active cache files.
This is explained in the DiskCache documentation - see subfolders section.
I would suggest setting subfolders="8192" if you have a larger volume of images. A higher subfolder count increases overhead slightly, but also increases scalability.

mongodb high cpu usage

I have installed MongoDB 2.4.4 on Amazon EC2 with a 64-bit Ubuntu OS and 1.6 GB RAM.
On this server only MongoDB is running, nothing else.
But sometimes CPU usage reaches 99% and the load average hits 500.01, 400.73, 620.77.
I have also installed MMS on the server to monitor what's going on.
Here are the MMS details:
As per the MMS details, indexing is working perfectly for each query.
Suspect details are as below:
1) HIGH non-mapped virtual memory
2) HIGH page faults
Can anyone help me to understand what exactly causing high CPU usage ?
EDIT:
After Dylan Tong's comments, I have reduced active connections, but there is still high non-mapped virtual memory.
Here's a summary of a few things to look into:
1. Observed a large number of connections and cursors (13k):
- fix: make sure your connection pool is appropriately sized. For reporting and your current request rate, you only need a few connections at most. Also, I'm guessing you have an m1.small instance, which means you only have 1 core.
2. Review queries and indexes:
- run your queries with explain() to observe how the queries are executed. The right model normally results in queries pulling only very few documents and making use of an index.
3. Memory (compact and readahead setting):
- make the best use of memory. 1.6 GB is low. Check how much free memory you have, and compare it to what is reported as resident. A couple of common causes of low resident memory come down to fragmentation. If there are a lot of documents moving, changing size and such, you should run the compact command to defragment your data files. A bad readahead setting can also lead to poor use of memory (http://manpages.ubuntu.com/manpages/lucid/man2/readahead.2.html). Try a few values, starting with low ones (http://docs.mongodb.org/manual/administration/production-notes/). The production notes recommend 32 (for standard 512-byte blocks). Sometimes higher values are optimal if your documents are larger. The hope is that resident memory ends up close to your available memory and your page faults start to drop.
If you're using resources to the fullest after this, and you're still capped out on CPU then it means you need to up your resources.

What is the performance hit for using WCF Performance Counters (performanceCounters = "ALL")?

Does anyone have experience with using the WCF Performance Counters in a production system and running into any performance issues? I suspect if you are monitoring all Service, Endpoints, and Operations and log all counters to a file, sampling every second, then this is the worst case scenario. From what I gather, the hit comes when you actually sample, not when the counters are turned on. Any real-life experience out there using them in production?
I can't answer for the WCF ones in detail, but Performance Counters in general work by writing values to some shared memory all the time. So WCF always writes values to a memory mapped file or a shared section in a dll.
When the perfmon application wants to display them, it loads the shared memory and reads from it. That's not a performance hit particularly.
The problem comes when you want to do something with that counter data, like write it to a file or update a graph. That's when the performance starts to be noticeable. This goes double if the reader is running across the network.
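A rough way to picture that reader-side cost: a sampler like the sketch below touches the shared memory once per second and then does file I/O for every sample, and it is that read-and-log loop, not the publishing of the counter itself, that shows up as overhead. The counter chosen here is arbitrary.

using System;
using System.Diagnostics;
using System.IO;
using System.Threading;

class CounterLogger
{
    static void Main()
    {
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total", readOnly: true);
        cpu.NextValue();   // the first read of a rate counter is always 0

        using (var log = new StreamWriter("counters.csv", append: true))
        {
            for (int i = 0; i < 60; i++)   // one sample per second for a minute
            {
                Thread.Sleep(1000);
                // The expensive part: reading the counter and writing it out.
                log.WriteLine($"{DateTime.UtcNow:O},{cpu.NextValue():F1}");
                log.Flush();
            }
        }
    }
}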

Is there an ideal number of network operations for iPhone OS?

I'm using NSOperation and NSOperationQueue to handle all of my networking threads so my interface can remain responsive while handling data transfer over the internet. Currently, I've got my operation queue set to a maximum concurrent operation count of 5, and it seems to work well.
I'm wondering, though, if there is a more ideal number of concurrent network operations that would best maximize the available resources without choking the hardware. Are there any recommendations, or steps I might take to measure and find out for myself?
Given the iPhone (currently) runs a single core, I would guess 5 is around the right number.
But the only way to be sure would be to instrument it and find out what the usage looks like (CPU, memory, and network). Network usage you could get based on the data transferred, but it's hard to know what a reasonable usage would be. I'm not sure if it is possible to get CPU/memory statistics from the iPhone.
If you are doing large transfers, then more connections probably won't help much. If you are doing lots of small transfers, then more connections will help work around the back-and-forth of setting up and tearing down connections.