Can the performance (response time) of a query executed in a DBMS like SQL Server be influenced by whatever is happening on the machine the server runs on? To be more specific, is the response time expected to increase when running a couple of Windows processes that continuously check and clean the machine and process data received from the network?
Thanks.
The four key resources for any program are memory, processing power, disk space, and I/O bandwidth.
Let's investigate each of these in turn. Available memory is well-managed in SQL Server (see the SQL Server documentation on memory management). The default behavior is to start with a chunk of memory and then grow it as necessary. If your query load is not changing, SQL Server should hit a maximum amount of memory and stop growing. It sounds like your query load is consistent over time, so memory should not be a big issue. Also, many configurations of SQL Server fix the memory size to avoid interference with other processes.
Processing power. This can be a big one. SQL Server requires processing power, and the processors may also be used by other Windows processes. This would slow down queries, particularly those that are CPU-bound (as opposed to I/O-bound). However, this can be mitigated on a multi-processor machine: a given SQL Server instance can be assigned a certain number of processors, leaving the rest for Windows.
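One common way to do this on SQL Server 2005-era instances is the affinity mask option; here is a minimal sketch, assuming a box where CPUs 0-3 should be reserved for SQL Server (the mask value is purely illustrative):

    -- Pin this instance to CPUs 0-3 (bitmask 15 = binary 1111),
    -- leaving the remaining cores for Windows and other processes.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'affinity mask', 15;
    RECONFIGURE;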
Disk space. This has little impact. In general, either the disk space that is needed is available or it is not (and the query needing it fails). One exception is temporary disk space, whose availability can influence query execution plans. Often, temporary space is put on its own drive to avoid needless conflict with other processes.
I/O bandwidth. SQL Server needs to communicate with the disk through the file system. This can be a real performance drag, and it can occur at several levels. The operating system itself could be saturated with I/O calls, slowing down the database. The path between the CPUs and the disks could be saturated, slowing down read and write speeds. The disk system itself can be slow because of multiple concurrent actions, even from different servers. And all of this gets more complicated in a virtual environment.
The answer is "yes". Windows processes can affect the performance of SQL queries. My best guess is that the effect would be either in terms of eating up processors or using up disk bandwidth.
Yes, the database server is sharing resources with anything else that runs on the machine, so any resource intensive process could affect the database performance noticeably.
One important resource is memory. By default, SQL Server will use up all free memory if it has any use for it. If you are running any other processes on the server, you should limit SQL Server's memory so that it allocates a bit less, leaving room for the other processes and reducing memory swapping.
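A minimal sketch of such a cap, assuming a machine where leaving everything above 4 GB to other processes is reasonable (the figure is illustrative; size it to your box):

    -- Cap SQL Server's memory so other processes keep some headroom.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 4096;
    RECONFIGURE;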
I am using Ignite 2.6 with data streamer nodes consuming data from Kafka and putting it into an Ignite cache. A high server load average is being observed, along with reduced throughput.
I have tried setting an index inline size for the indexes defined in the cache, which gives good performance but also increases server memory utilization and the load average. Please advise how increasing the data streamer thread pool size would affect this case.
When optimizing data streamer performance, you need to understand where the bottleneck is. It may be either on the data nodes or on the streaming nodes: the servers may be inserting data slowly, or the streamers may simply be generating too little load.
CPU, memory, network, and disk resources can be drained on either side. Therefore, before trying to tune configuration parameters, try to find the reason for the slowness. Using JFR, JProfiler, VisualVM, or something similar may help you with this.
If the bottleneck is on the streaming side, then in most cases calling IgniteDataStreamer#addData on a single data streamer from multiple threads increases the servers' utilization.
If you see the issue on the server side, then you may need to increase the number of servers or try to optimize the insertion time.
I realize this number will change based on many factors, but in general, when I write data to a hard-drive (e.g. copy a file), how long does it take for that data to actually be written to the platter after Windows says the copy is done?
Could anyone point me in the right direction to discover more on this topic?
If you are looking for a hard number, that is pretty much unknowable. Generally it is on the order of tens to a few hundred milliseconds for the data to start reaching the disk platters, but it can be as high as several seconds in a large server disk array with RAID and de-duplication.
The flow of events goes something like this.
1. The application calls a function like fwrite().
2. This call is handled by the filesystem layer in your Operating System, which has to figure out which specific disk sectors are to be manipulated.
3. The SATA/IDE driver in your OS talks to the hard drive controller hardware. On a modern PC, it typically uses DMA to feed the data to the disk.
4. The data sits in a write cache inside the hard disk (RAM).
5. Once the physical platters and heads have moved into position, the drive begins to transfer the contents of the cache onto the platters.
6. Filesystem metadata (e.g. free space counters) usually has to be updated as well, which triggers more writes to the disk.
Steps 3-6 may repeat several times depending on how much data is to be written and where on the disk it is to be written.
The time it takes to get through steps 1-3 can be unpredictable in a general-purpose OS like Windows due to task scheduling and background threads, and your disk write is probably queued up alongside requests from a few dozen other processes. I'd say it is usually on the order of 10-100 ms on a typical PC. If you go to the Windows Resource Monitor and click the Disk tab, you can get an idea of the average disk queue length. You can use the Performance Monitor to produce more finely-controlled graphs.
Steps 3-4 are largely controlled by the disk controller and disk interface (SATA, SAS, etc). In the server world, you can be talking about a SAN with FC or iSCSI network switches, which impose their own latencies.
Step 5 will be controlled by the physical performance of the disk. Many consumer-grade HDD manufacturers do not publish average seek times anymore, but 10-20 ms is common.
Interesting detail about Step 5: Some HDDs lie about flushing their write cache to get better benchmark scores.
Step 6 will depend on your filesystem and how much data you are writing.
You are right that there can be a delay between Windows indicating that the write is finished and the last of the data actually being written. Things to consider are:
Device Manager, Disk Drive, Properties, Policies - Options for disabling Write Caching.
You might be better off using Direct I/O so that Windows does not buffer the data temporarily in the file cache.
If your program writes the data, you can log what has been copied.
If you are sending the data over a network, you are likely to have no control of when the remote system has finished.
To see what is happening, you can set up Perfmon logging. One of my examples of monitoring:
http://www.roylongbottom.org.uk/monitor1.htm#anchor2
I have installed MongoDB 2.4.4 on Amazon EC2 with a 64-bit Ubuntu OS and 1.6 GB RAM.
Only MongoDB is running on this server, nothing else.
But sometimes CPU usage reaches 99%, with a load average of 500.01, 400.73, 620.77.
I have also installed MMS on the server to monitor what's going on.
Here are the MMS details:
As per the MMS details, indexing is working properly for all queries.
The suspect details are as follows:
1) HIGH non-mapped virtual memory
2) HIGH page faults
Can anyone help me understand what exactly is causing the high CPU usage?
EDIT:
After the comments from @Dylan Tong, I have reduced the active connections, but there is still high non-mapped virtual memory.
Here's a summary of a few things to look into:
1. Observed a large number of connections and cursors (13k):
- fix: make sure your connection pool size is appropriate. For reporting, at your current request rate, you only need a few connections at most. Also, I'm guessing you have an m1.small instance, which means you only have 1 core.
2. Review queries and indexes:
- run your queries with explain() to observe how they are executed. The right model normally results in queries pulling only a few documents and making use of an index.
3. Memory (compact and readahead setting):
- make the best use of memory. 1.6 GB is low. Check how much free memory you have and compare it to what is reported as resident. A common cause of low resident memory is fragmentation: if a lot of documents are moving or changing size, you should run the compact command to defragment your data files. A bad readahead setting can also lead to poor use of memory. Check your readahead setting (http://manpages.ubuntu.com/manpages/lucid/man2/readahead.2.html) and try a few values, starting low (http://docs.mongodb.org/manual/administration/production-notes/). The production notes recommend 32 (for standard 512-byte blocks); sometimes higher values are optimal if your documents are larger. The hope is that resident memory ends up close to your available memory and your page faults start to drop.
If you're using your resources to the fullest after this and you're still capped out on CPU, then it means you need to add resources.
I work with SQL Server 2005 and wonder: if not CPU, disk, or network, what are users waiting for when SQL Server is working? The strange thing is that the system monitor shows the 4 processors at an average of 5% and the disk (rated at 50 MB/s write) working at about 5-8 MB/s, yet the executions (inserts and selects) take up to 10 minutes. I'd be happy to install additional hardware, but I don't see which device is the bottleneck, nor how to measure its capacity and current workload.
Any advice would be appreciated.
Thanks
Additional info: RAM is constantly at about 70% capacity, and I am running Windows XP.
Check your disk read and write 'wait' times. A heavily loaded database may make a lot of read and write requests for very small pieces of data, which saturates the I/O.
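On SQL Server 2005 and later, one way to see the accumulated read/write stall times per database file is sys.dm_io_virtual_file_stats; a sketch:

    -- io_stall_read_ms / io_stall_write_ms accumulate the time spent
    -- waiting on I/O since the instance started; large values relative
    -- to num_of_reads/num_of_writes point at a saturated disk path.
    SELECT DB_NAME(vfs.database_id) AS database_name,
           vfs.file_id,
           vfs.num_of_reads,  vfs.io_stall_read_ms,
           vfs.num_of_writes, vfs.io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;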
As others mention, disks are rarely the bottleneck in terms of bandwidth, but rather in the number of IO operations they can perform per second - commonly called IOPS.
The IOPS capabilities of your disks will vary according to disk type, cache and the RAID setup you have.
Another thing you may run into is locking. If you have a lot of concurrent access to the same data, especially inside large transactions, you may see other transactions being blocked, causing no network, CPU, or disk usage while they are blocked, just wasted time.
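A quick way to check for this on SQL Server 2005 and later is to look for requests that report a non-zero blocking_session_id; for example:

    -- Sessions currently blocked by another session.
    SELECT session_id, blocking_session_id, wait_type, wait_time, command
    FROM sys.dm_exec_requests
    WHERE blocking_session_id <> 0;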
Probably the disks. If you are seeking all over the place, the throughput (MB/s) will be low even though the disks are running as fast as they can.
Generic advice: try increasing SQL Server's cache, tune your queries, and check that the appropriate indexes exist and are used (and that you don't have too many).
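To check which indexes are actually being used, one option is sys.dm_db_index_usage_stats (note that its counters reset when the instance restarts); a sketch for the current database:

    -- Indexes with many user_updates but few seeks/scans/lookups are
    -- candidates for removal; hot tables with no seeks at all hint at
    -- missing indexes.
    SELECT OBJECT_NAME(s.object_id) AS table_name,
           i.name AS index_name,
           s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
    FROM sys.dm_db_index_usage_stats AS s
    JOIN sys.indexes AS i
      ON i.object_id = s.object_id AND i.index_id = s.index_id
    WHERE s.database_id = DB_ID();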
We have a 16 processor SQL Server 2005 cluster. When looking at CPU usage data we see that most of the time only 4 of the 16 processors are ever utilized. However, in periods of high load, occasionally a 5th and 6th processor will be used, although never anywhere near the utilization of the other 4. I'm concerned that in periods of tremendously high load that not all of the other processors will be utilized and we will have performance degradation.
Is what we're seeing standard SQL Server 2005 cluster behavior? I assumed that all 16 processors would be utilized at all times, though this does not appear to be the case. Is this something we can tune? Or is this expected behavior? Will SQL server be able to utilize all 16 processors if it comes to that?
I'll assume you did your due diligence and validated that the CPU consumption belongs to the sqlservr.exe process, so we're not chasing a red herring here. If not, please make sure the CPU is consumed by sqlservr.exe by checking the Process\% Processor Time performance counter.
You need to understand the SQL Server CPU scheduling model, as described in Thread and Task Architecture. SQL Server spreads requests (sys.dm_exec_requests) across schedulers (sys.dm_os_schedulers) by assigning each request to a task (sys.dm_os_tasks) that is run by a worker (sys.dm_os_workers). A worker is backed by an OS thread or fiber (sys.dm_os_threads). Most requests (a batch sent to SQL Server) spawn only one task; some requests, though, may spawn multiple tasks (parallel queries being the most notorious).
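To see how tasks are actually spread across the schedulers at a given moment, you can sample sys.dm_os_schedulers; for example:

    -- One row per scheduler (one scheduler per CPU core).
    -- Uneven runnable_tasks_count across rows means uneven CPU load.
    SELECT scheduler_id, cpu_id,
           current_tasks_count, runnable_tasks_count
    FROM sys.dm_os_schedulers
    WHERE scheduler_id < 255;  -- exclude hidden and DAC schedulers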
The normal behavior of SQL Server 2005 scheduling is to distribute the tasks evenly across all schedulers. Each scheduler corresponds to one CPU core, so the result should be an even load on all CPU cores. But I've seen the problem you describe a few times in the lab, where the physical workload would distribute unevenly across only a few CPUs. You have to understand that SQL Server does not control the thread affinity of its workers, but instead relies on the OS affinity algorithm for thread locality. What that means is that even if SQL Server spreads the requests across the 16 schedulers, the OS might decide to run the threads on only 4 cores. In correlation with this issue, there are two problems that may cause or aggravate this behavior:
Hyperthreading. If you enabled hyperthreading, turn it off. SQL Server and hyperthreading should never mix.
Bad drivers. Make sure you have the proper system device drivers installed (for things like main board and such).
Also make sure your SQL Server 2005 is at least at the SP2 level, preferably at the latest SP with all CUs applied. The same goes for Windows (do you run Windows 2003 or Windows 2008?).
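You can confirm the build and service pack level with SERVERPROPERTY:

    SELECT SERVERPROPERTY('ProductVersion') AS product_version,
           SERVERPROPERTY('ProductLevel')   AS service_pack,
           SERVERPROPERTY('Edition')        AS edition;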
In theory the behavior could also be explained by a very peculiar workload, i.e. SQL Server sees only a few very long, CPU-demanding requests that have no parallel option. But that would be an extremely skewed load, and I have never seen anything like it in real life.
Even accounting for an IO bottleneck, what I would check is whether you have processor affinities set up, what your maxdop setting is, and whether the machine is SMP or NUMA, which should also affect what maxdop you may wish to set.
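A quick way to inspect both settings is sp_configure, which prints the current value when given just the option name:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'affinity mask';              -- 0 = let Windows schedule freely
    EXEC sp_configure 'max degree of parallelism';  -- 0 = use all available CPUs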
When you say you have a 16-processor cluster, do you mean 2 SQL Servers in a cluster with 16 processors each, or 2 x 8-way SQL Servers?
Are you sure that you're not bottlenecking elsewhere? On IO perhaps?
Hard to be sure without hard data, but I suspect the problem is that you're more IO-bound or memory-bound than CPU-bound right now, and 4 processors is enough to keep up with your real bottleneck.
My reasoning is that if there were some configuration problem that was keeping you limited to 4 cpus, you wouldn't see it spill over to the 5th and 6th processors at all.