SQL Server completely slows down after occupying full memory

On our server with 32 GB RAM we have an instance of SQL Server running, capped with max server memory at 80%.
Everything works fine while memory utilization is low.
But as time passes, over 3-4 days, SQL Server grows to use the full amount (80% of total RAM).
During these 3-4 days we make no changes to the server, yet day by day it keeps consuming more RAM.
When it reaches the max limit, performance goes for a toss and we face query timeouts on our website. Queries that used to execute within milliseconds now take many seconds.
At this point we have no choice but to restart the entire server, and things return to normal. (Restarting only the SQL Server service does not help.)
This works for a week or so, after which we have to restart it again.
I have read online that SQL Server does not release memory, but also that this is simply how SQL Server works and should not affect performance.
In my case it does, and performance suffers.
Is there a memory leak? Or a stored proc consuming lots of memory and never releasing it? If so, how do I debug it?

By default SQL Server consumes as much memory as it can: the whole box if uncapped, or up to the cap if one is set. This is normal. You should also ensure SQL Server is the only application on the box (which is recommended) and cap max server memory as per best practices.
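For reference, a minimal sketch of setting the cap with sp_configure; the 26000 MB figure is only an illustration for a 32 GB box, so size it to leave enough memory for the OS and anything else running:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 26000; -- 26000 MB is only an illustrative value
RECONFIGURE;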
I would start troubleshooting with the approach below.
1. Start by finding the top memory-consuming queries and see whether their memory usage can be reduced. They can be found with the query below:
SELECT TOP 10
    SUBSTRING(qt.TEXT, (qs.statement_start_offset/2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(qt.TEXT)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset)/2) + 1),
    qs.execution_count,
    qs.total_logical_reads, qs.last_logical_reads,
    qs.total_logical_writes, qs.last_logical_writes,
    qs.total_worker_time,
    qs.last_worker_time,
    qs.total_elapsed_time/1000000 AS total_elapsed_time_in_s,
    qs.last_elapsed_time/1000000 AS last_elapsed_time_in_s,
    qs.last_execution_time,
    qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.total_logical_reads DESC -- logical reads
-- ORDER BY qs.total_logical_writes DESC -- logical writes
-- ORDER BY qs.total_worker_time DESC -- CPU time
Now that you have found the queries causing high memory usage, you need to tune them and see whether you can reduce it.
For example, a query may be doing many reads due to an unsuitable index, or your IO device may have an issue that causes the buffer pool to be flushed repeatedly.
2. You can also find the top components that use memory, which gives you an understanding of how RAM is being spent:
SELECT TOP(20) [type], [name], SUM(single_pages_kb) AS [SPA Mem, Kb]
FROM sys.dm_os_memory_clerks
GROUP BY [type], [name]
ORDER BY SUM(single_pages_kb) DESC;
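Note that on SQL Server 2012 and later the single_pages_kb column was consolidated into pages_kb, so the equivalent query there would be roughly:
SELECT TOP(20) [type], [name], SUM(pages_kb) AS [Mem, Kb]
FROM sys.dm_os_memory_clerks
GROUP BY [type], [name]
ORDER BY SUM(pages_kb) DESC;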
If the buffer pool consistently uses the most memory, I would not worry about that; but if it is CACHESTORE_SQLCP (the ad-hoc SQL plan cache), you might have many ad-hoc queries filling up the plan cache, which is bad.
One part of the investigation leads to another, so you will have to troubleshoot based on the leads you find; there is no one-click solution.
Side note (not recommended): one of our dev instances used to face the same issue, so instead of doing all the tuning work we used to run the command below, which releases memory immediately. This is not at all recommended for a production instance, as it flushes the plans stored in the cache and you may see some CPU pressure afterwards.
DBCC FREEPROCCACHE WITH NO_INFOMSGS;
References:
sql-server-memory-related-queries
sql-server-find-most-expensive-queries-using-dmv

You need to find the most expensive queries and review their code; often the query code is not effectively optimized. You will need to do some performance tuning.

Related

Select statement low performance on simple table

Using Management Studio, I have a table with the following six columns on my SQL Server:
FileID - int
File_GUID - nvarchar(258)
File_Parent_GUID - nvarchar(258)
File Extension - nvarchar(50)
File Name - nvarchar(100)
File Path - nvarchar(400)
It has a primary key on FileID.
This table has around 200M rows.
If I try to process the full data set, I receive a memory error.
So I have decided to load it in partitions, using a SELECT statement for every 20M rows, split on the FileID number.
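For example, something like the following, where @start and the 20M chunk size are just illustrative:
DECLARE @start INT = 1; -- illustrative starting point for the chunk
SELECT *
FROM Dim_TFS_File
WHERE FileID >= @start
  AND FileID < @start + 20000000; -- 20M-row chunk, split on FileID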
These SELECTs take forever; the retrieval of rows is extremely slow and I have no idea why. There are no calculations whatsoever, just a pull of data using a SELECT.
When I run the query analyzer I see:
Select cost = 0%
Clustered Index Cost = 100%
Do you guys have any idea why this could be happening, or maybe some tips that I can apply?
My query:
SELECT * FROM Dim_TFS_File
Thank you!!
Monitor the query while it's running to see if it's blocked or waiting on resources. If you can't easily see where the bottleneck is during monitoring the database and client machines, I suggest you run a couple of simple tests to help identify where you should focus your efforts. Ideally, run the tests with no other significant activity and a cold cache.
First, run the query on the database server and discard the results. This can be done from SSMS with the discard results option (Query-->Query Options-->Results-->Grid-->Discard Results after execution). Alternatively, use a PowerShell script like the one below:
$connectionString = "Data Source=YourServer;Initial Catalog=YourDatabase;Integrated Security=SSPI;Application Name=Performance Test";
$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString);
$command = New-Object System.Data.SqlClient.SqlCommand("SELECT * FROM Dim_TFS_File;", $connection);
$command.CommandTimeout = 0;
$sw = [System.Diagnostics.Stopwatch]::StartNew();
$connection.Open();
[void]$command.ExecuteNonQuery(); #this will discard returned results
$connection.Close();
$sw.Stop();
Write-Host "Done. Elapsed time is $($sw.Elapsed.ToString())";
Repeat the above test on the client machine. The elapsed time difference reflects data transfer network overhead. If the client machine test is significantly faster than the application, focus your efforts on the app code. Otherwise, take a closer look at the database and network. Below are some random notes that might help remediate performance issues.
This trivial query will likely perform a full clustered index scan. The limiting performance factors on the database server will be:
CPU: Throughput of this single-threaded query will be limited by the speed of a single CPU core.
Storage: The SQL Server storage engine will use read-ahead reads during large scans to fetch data asynchronously so that data will already be in memory by the time it is needed by the query. Sequential read performance is important to keep up with the query.
Fragmentation: Fragmentation will result in more disk head movement against spinning media, adding several milliseconds per physical disk IO. This is typically a consideration only for large sequential scans on single-spindle or low-end local storage arrays, not SSDs or enterprise-class SANs. Fragmentation can be eliminated by reorganizing or rebuilding the clustered index. Be sure to specify MAXDOP 1 for rebuilds for maximum benefit.
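For example, a MAXDOP 1 rebuild could look like the sketch below; the schema and index names are assumptions, so substitute the actual clustered index name:
-- PK_Dim_TFS_File and dbo are assumed names; use the real ones
ALTER INDEX PK_Dim_TFS_File ON dbo.Dim_TFS_File
REBUILD WITH (MAXDOP = 1);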
SQL Server streams results as fast as they can be consumed by the client app, but the client may be constrained by network bandwidth and latency. It seems you are returning many GB of data, which will take quite some time. You can reduce bandwidth needs considerably with different data types. For example, assuming the GUID-named columns actually contain GUIDs, using uniqueidentifier instead of nvarchar will save about 80 bytes per row over the network and on disk. Similarly, use varchar instead of nvarchar if you don't actually need Unicode characters, to cut data size by half.
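As a rough sketch of such a data type change, assuming the values really are GUIDs, that no Unicode data is stored, and that you test on a copy first, it could look like this:
-- Assumes every existing value parses as a GUID / needs no Unicode;
-- ALTER COLUMN rewrites data, so test on a copy of this 200M-row table first.
ALTER TABLE dbo.Dim_TFS_File ALTER COLUMN File_GUID uniqueidentifier NULL;
ALTER TABLE dbo.Dim_TFS_File ALTER COLUMN File_Parent_GUID uniqueidentifier NULL;
ALTER TABLE dbo.Dim_TFS_File ALTER COLUMN [File Path] varchar(400) NULL; -- column name spelling assumed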
Client processing time: The time to process 20M rows by the app code will be limited by CPU and code efficiency (especially memory management). Since you ran out of memory, it seems you are either loading all rows into memory or have a leak. Even without an outright out of memory error, high memory usage can result in paging and greatly slow throughput. Importantly, the database and network performance is moot if the app code can't process rows as fast as data are returned.

High CPU Usage By Postgres Process

I have an application running on a Postgres database. Sometimes, when about 8-10 people are working in the application, CPU usage soars to between 99-100%. The application was built on the CodeIgniter framework, which I believe takes care of closing database connections whenever they are not needed. What could be the solution to this problem? I would appreciate any suggestions. Thank you.
Basically, what the users do in the application is run INSERT queries at a very fast rate; a person could run between 70 and 90 INSERT queries in a minute.
I came across a similar kind of issue. The reason was that some transactions were getting stuck and running for a long time, so CPU utilization climbed to 100%. The following command helped find the connections that had been running the longest:
SELECT max(now() - xact_start) FROM pg_stat_activity
WHERE state IN ('idle in transaction', 'active');
This command shows the amount of time a connection has been running; this time should not be greater than an hour. So killing the connections that had been running for a very long time, or were stuck, worked for me. I followed this post for monitoring and solving my issue; the post includes lots of useful commands for monitoring this situation.
You need to find out what PostgreSQL is doing. Relevant resources:
Monitoring in general
Monitoring queries
Finding slow queries
Once you find out what the slow or most common queries are, use EXPLAIN to make sure they are being executed efficiently.
Here are some cases we have run into that cause high CPU usage in Postgres.
Incorrect indexes are used in the query
Check the query plan - through EXPLAIN we can check the query plan; if an index is used by the query, an Index Scan node will appear in the plan output.
Solution: add a corresponding index for the query SQL to reduce CPU usage.
Query with sort operation
Check EXPLAIN (analyze, buffers) - if memory is insufficient for the sort operation, temporary files are used for sorting, and high CPU usage follows.
Note: do NOT run EXPLAIN (analyze) on a busy production system, as it actually executes the query behind the scenes to provide more accurate planner information, and its impact is significant.
Solution: Tune up the work_mem and sorting operations
Sample: Tune sorting operations in PostgreSQL with work_mem
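A minimal sketch of what that tuning step can look like for a single session; the 64MB value is only an illustration:
-- Check the current setting, then raise it for this session only and
-- re-test the sort (ideally on a non-production copy, per the note above).
SHOW work_mem;
SET work_mem = '64MB'; -- illustrative value
EXPLAIN (ANALYZE, BUFFERS)
SELECT 1; -- replace with the real sorting query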
Long-running transactions
Find long-running transactions through
SELECT pid,
       now() - pg_stat_activity.query_start AS duration,
       query,
       state
FROM pg_stat_activity
WHERE (now() - pg_stat_activity.query_start) > interval '2 minutes';
Solution:
Kill the long-running transaction with SELECT pg_terminate_backend(pid).
Optimize the transaction or query SQL with appropriate indexes.
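For example, a hedged sketch that combines the two steps, terminating sessions whose transaction has been open for more than two minutes (tighten the filters, e.g. by usename or datname, before using anything like this in earnest):
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state IN ('idle in transaction', 'active')
  AND (now() - xact_start) > interval '2 minutes'
  AND pid <> pg_backend_pid(); -- never terminate your own session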

max memory per query

How can I configure the maximum memory that a query (a SELECT query) can use in SQL Server 2008?
I know there is a way to set the minimum value but how about the max value? I would like to use this because I have many processes in parallel. I know about the MAXDOP option but this is for processors.
Update:
What I am actually trying to do is run a continuous data load. This load is in ETL form (extract, transform, and load). While the data is being loaded I want to run some SELECT queries. All of them are expensive queries (containing GROUP BY). The most important process for me is the data load. I get an average speed of 10000 rows/sec, and when I run the queries in parallel it drops to 4000 rows/sec and even lower. I know a few more details should be provided, but this is a complex product I work on and I cannot detail it further. One thing I can guarantee is that my load speed does not drop due to locking problems, because I have monitored for those and removed them.
There isn't any way of setting a maximum memory at a per query level that I can think of.
If you are on Enterprise Edition you can use resource governor to set a maximum amount of memory that a particular workload group can consume which might help.
In SQL 2008 you can use Resource Governor to achieve this. There you can set request_max_memory_grant_percent to cap the memory grant (this is a percent relative to the pool size specified by the pool's max_memory_percent value). This setting is not query specific; it applies to each request in the workload group.
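A rough sketch of what that could look like; the pool, group, function, and login names here are made up for illustration, and the classifier routes by login name, which is only one possible way to identify the sessions you want to constrain:
-- Run in master; all names below are illustrative assumptions.
CREATE RESOURCE POOL ReportPool WITH (MAX_MEMORY_PERCENT = 30);
CREATE WORKLOAD GROUP ReportGroup
    WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 25)
    USING ReportPool;
GO
CREATE FUNCTION dbo.rg_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    -- Route the assumed reporting login into the constrained group;
    -- everything else falls through to the default group.
    IF SUSER_SNAME() = N'report_user'
        RETURN N'ReportGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;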
In addition to Martin's answer
If your queries are all the same or similar, working on the same data, then they will be sharing memory anyway.
Example:
A busy web site with 100 concurrent connections running 6 different parametrised queries between them on broadly the same range of data.
6 execution plans
100 user contexts
one buffer pool with assorted flags and counters to show usage of each data page
If you have 100 different queries or they are not parametrised then fix the code.
Memory per query is something I've never thought or cared about since last millennium.

SQL Server single query memory usage

I would like to find out or at least estimate how much memory does a single query (a specific query) eats up while executing. There is no point in posting the query here as I would like to do this on multiple queries and see if there is a change over different databases. Is there any way to get this info?
Using SQL Server 2008 R2
thanks
Gilad.
You might want to take a look at the DMVs (Dynamic Management Views), specifically sys.dm_exec_query_memory_grants. See for example this query (taken from here):
DECLARE @mgcounter INT
SET @mgcounter = 1

WHILE @mgcounter <= 5 -- return data from the DMV 5 times when there is data
BEGIN
    IF (SELECT COUNT(*)
        FROM sys.dm_exec_query_memory_grants) > 0
    BEGIN
        SELECT *
        FROM sys.dm_exec_query_memory_grants mg
        CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) -- shows query text
        -- WAITFOR DELAY '00:00:01' -- add a delay if you see the exact same query in results
        SET @mgcounter = @mgcounter + 1
    END
END
While the above batch is running it will wait until some query is executing and will collect the memory grant data. So to use it, just run the above batch and then, in another session, run the query that you want to monitor.
What do you mean by "how much memory a query eats up?", and why exactly do you want to know?
I don't think memory in SQL Server works the way you might imagine - memory management in SQL Server is an incredibly complex topic - you could easily write entire books about SQL Server's memory management. I can't claim to know that much about it, but I do know that there is pretty much no useful information you can extrapolate from knowing how much memory a single query uses.
That said, if you do want to have a go at understanding what's going on with memory when you execute a query, then I would start by looking at the buffer pool. Nearly all memory in SQL Server is organised into 8KB chunks (the same size as a page) that can be used to store anything from a data page or index page to a cached query plan. The buffer pool is the main memory component in SQL Server - all 8KB chunks of memory not in use elsewhere remain in the buffer pool to be used as a cache for data pages.
Note that in order for a data page or index page to be used it must exist in memory - this means that if it doesn't already exist in memory, a free buffer must be made available to read the page in to. The buffer pool serves both as a pool of "expendable" free buffers and as a cache of pages already present in memory.
You can examine what's in the buffer pool using DMVs; there is a suitable query listed on this page:
What's swimming in your bufferpool?
If you clean out your buffer pool using the command DBCC DROPCLEANBUFFERS (DON'T DO THIS ON A PRODUCTION SQL SERVER!!!) and then execute your query, in theory the new pages that appear in the buffer pool should be the pages that were used by that query.
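As a rough sketch, a query along the lines below (run in the database of interest) shows how many buffer pool pages each object currently occupies; it keeps the joins simple, so treat the numbers as approximate:
SELECT o.name AS object_name,
       COUNT(*) AS buffer_pages,
       COUNT(*) * 8 / 1024 AS buffer_mb -- 8 KB pages
FROM sys.dm_os_buffer_descriptors bd
JOIN sys.allocation_units au ON au.allocation_unit_id = bd.allocation_unit_id
JOIN sys.partitions p ON p.hobt_id = au.container_id
JOIN sys.objects o ON o.object_id = p.object_id
WHERE bd.database_id = DB_ID()
GROUP BY o.name
ORDER BY buffer_pages DESC;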
This can give you a rough idea of the data and index pages used by a query, but it doesn't cover other areas of SQL Server where memory is used, such as the query plan cache, SQL Server workers, etc.
Like I said, SQL Server memory management is complex - If you really want to know more I recommend that you buy a book on SQL Server internals.
Update: You can also use the query statistics to view aggregate performance statistics for a query, including "physical reads" (pages read from disk) and "logical reads" (pages read from the buffer pool). See this page for a suitable query.
This might also give you some more hints on how much memory a query is using; however, beware - playing around I found queries that performed many more logical reads than physical reads, which as far as I can work out means they read the same pages over and over again, i.e. 100 logical reads != 100 pages used in the buffer pool.
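A hedged sketch of that kind of query-statistics lookup, ordered by physical reads, could look like this:
SELECT TOP (10)
       qs.execution_count,
       qs.total_physical_reads,
       qs.total_logical_reads,
       qt.text AS query_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
ORDER BY qs.total_physical_reads DESC;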

SQL server virtual memory usage and performance

I have a very large DB used mostly for analytics. The performance overall is very sluggish. I just noticed that when running the query below, the amount of virtual memory reported greatly exceeds the amount of physical memory available. Currently, physical memory is 10 GB (10238 MB), whereas the virtual memory reported is significantly more: 8388607 MB. That seems really wrong, but I'm at a bit of a loss on how to proceed.
USE [master];
GO

SELECT cpu_count,
       hyperthread_ratio,
       physical_memory_in_bytes / 1048576 AS mem_MB,
       virtual_memory_in_bytes / 1048576 AS virtual_mem_MB,
       max_workers_count,
       os_error_mode,
       os_priority_class
FROM sys.dm_os_sys_info;
Are you having general problems on this box or problems with a specific query? In most cases, query optimization is best asked here, but everything else about general SQL Server performance profiles belongs on serverfault.com, especially OS/Server/hardware configuration.
I have a very large DB used mostly for analytics.
OLAP: Memory and CPU intensive.
Currently, physical memory is 10GB
Small server: Does not compute.
That seems really wrong, but I'm at a bit of a loss on how to proceed
Upgrade the server. A very large database is 100+ GB - likely these days 1000+ GB. 10 GB of RAM for that under analytical workloads is a joke, not a server.