We are running SQL Server 2005 on 64-bit. The PF Usage reaches close to 25 GB every week. Queries that normally take less than a second to run become very slow during this time. What could be causing this?
After running PerfMon, the Total Server Memory and Target Server Memory counters show 20 GB and 29 GB respectively. Processor Queue Length and Disk Queue Length are both zero.
Sounds like you don't have enough memory. How much is on the server? More than your page file?
It could also mean SQL Server has been "paged out", meaning Windows decided to swap the data it was holding in memory out to disk.
Open PerfMon (go to a command prompt and type perfmon) and add these counters:
SQLServer:Buffer Manager - Buffer cache hit ratio
SQLServer:Buffer Manager - Page life expectancy
SQLServer:Memory Manager - Memory Grants Pending
If Buffer cache hit ratio is < 95%, SQL Server is hitting the disk instead of memory a lot; you need more memory.
If your Page life expectancy is < 500, SQL Server is not keeping data pages cached in memory for long; you need more memory.
If you have a lot of Memory Grants Pending, you need more memory.
There are also two counters that tell you how much memory SQL Server wants and how much it is actually using: Target Server Memory and Total Server Memory (the two you already captured). If Target > Total, guess what, you need more memory.
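If you'd rather pull these from a query window than from PerfMon, the same counters are exposed through sys.dm_os_performance_counters. A rough sketch (counter names can vary slightly between versions, and Buffer cache hit ratio has to be divided by its "base" counter to get a percentage):

    -- Memory-related counters straight from the DMV instead of PerfMon.
    SELECT [object_name], counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN (
        'Buffer cache hit ratio',        -- divide by 'Buffer cache hit ratio base' for the %
        'Buffer cache hit ratio base',
        'Page life expectancy',          -- seconds a data page stays in the buffer pool
        'Memory Grants Pending',         -- queries waiting for a workspace memory grant
        'Total Server Memory (KB)',      -- what SQL Server is actually using
        'Target Server Memory (KB)'      -- what SQL Server wants
    );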
There are also some DMVs you can use to determine whether your queries are being held up while waiting for memory to free up. The one you want is sys.dm_os_wait_stats.
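A rough sketch of that check; RESOURCE_SEMAPHORE is the wait type that piles up when queries are queued waiting for memory grants:

    -- High wait times on RESOURCE_SEMAPHORE suggest queries are waiting on memory.
    SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type LIKE 'RESOURCE_SEMAPHORE%'
    ORDER BY wait_time_ms DESC;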
I community wikied this so a real dba can come in here and clean this up. Don't know the stats off the top of my head.
This is a great read on how to use DMVs to track down memory consumption in SQL Server: http://technet.microsoft.com/en-us/library/cc966540.aspx - look for the 'memory bottlenecks' section.
One big thing is to determine if the memory pressure is internal to SQL (usually the case) or external (rare, but possible). For example, perhaps nothing is wrong with your server, but a driver installed by your antivirus program is leaking memory.
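One way to get a hint about internal vs. external pressure is the resource monitor ring buffer (available on SQL Server 2005 SP2 and later, if memory serves; treat this as a sketch and verify on your build). Low-physical-memory notifications in the system indicators point at external, OS-level pressure:

    -- Memory pressure notifications recorded by the resource monitor.
    SELECT [timestamp], record
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = 'RING_BUFFER_RESOURCE_MONITOR';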
I am a beginner in SQL Server. I have read about the buffer cache in SQL Server. I think it is the location in system RAM where data is stored once a query is executed. If that is correct, I have a few questions about query execution.
1) If my RAM size is 2 GB, and I have 10 GB of data in my SQL Server database, and I execute a SQL statement that retrieves all the data (10 GB) from the database, what will happen (work/not work)?
2) In the same case, if multiple users execute queries retrieving 5 GB each (total 10 GB), what will happen?
When you issue a SELECT * FROM [MyTable] and your table has 10 GB on a system that has only 2 GB of RAM, the database does not have to read the entire 10 GB into memory at once. The select starts a scan of the data, beginning with the first data page. For this it only needs that page (which is 8 KB) in memory, so it reads the page and consumes 8 KB of RAM. As it scans this page, it produces output that you see as the result set. As soon as it is done with this page, it needs the next page, so it reads it into memory, scans the records in it, and produces output for your result. Then the next page, then the next page. The key point is that once it is done with a page, that page is no longer needed. As it keeps adding these 8 KB pages to RAM, they eventually add up and consume all the free RAM. At that moment SQL frees the old, unused pages in RAM and thus makes room for new ones. It keeps doing so until the entire 10 GB of your table has been read.
If there are two users reading a table of 5GB each, things work exactly the same. Each user's query is scanning only one page at a time, and as they make progress and keep reading pages, they will fill up the RAM. When all the available RAM is used, SQL will start discarding old pages from RAM to make room for new ones.
In the real world things are fancier because of considerations like read-ahead.
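If you want to watch this happen, here's a rough sketch that counts how many 8 KB pages each database currently has in the buffer pool; run it while the big scan executes and you can see the numbers churn:

    -- 8 KB data pages per database currently sitting in the buffer pool.
    SELECT DB_NAME(database_id) AS database_name,
           COUNT(*)             AS cached_pages,
           COUNT(*) * 8 / 1024  AS cached_mb
    FROM sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY cached_pages DESC;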
And as a side note, you should never scan 10 GB of data. Your application should always request only the data it needs, and that data should be retrievable quickly using an index, exactly to avoid a large scan that has to examine the entire table.
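Purely as an illustration (the table, column, and index names here are made up), a filtered query backed by an index touches a handful of pages instead of every page in the table:

    -- Hypothetical example: an index lets the query seek to a few rows
    -- rather than scanning all 10 GB.
    CREATE INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId);

    SELECT OrderId, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerId = 42;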
Thankfully you don't have to worry about this. Sure, it is important to tune your queries, minimize resultsets for network transfer, etc. etc. but SQL Server has been around a long time and it's very good at its own memory management. Unless you encounter a specific query that misbehaves, I'd say don't worry about it.
As you noted, data retrieved goes into the buffer cache. Some of that is real memory, some is spooled off to disk.
You can observe whether or not you're reusing memory by watching the Page Life Expectancy performance counter (there are other indicators too, but this one is a quick shorthand). If you run a query that returns huge amounts of data and pushes other data out of the cache, you can watch the page life expectancy drop. As you run lots & lots of small queries, you can see the page life expectancy grow, sometimes to a length of days & days.
It's not a great measure for memory pressure on the system, but it can give you an indication of how well you're seeing the data in cache get reused.
The full result set of a query is not kept in RAM by SQL Server for reuse. What can be kept is the execution plan used by the query.
SQL Server does bring data into memory to manipulate it: on an UPDATE statement, for example, it reads the data from the DB, holds it in RAM, edits it, and then another process writes it back to the DB. But as n8wrl said, you don't need to worry about it.
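If you're curious what actually is being kept around for reuse, here's a quick sketch against the plan cache (SQL Server 2005 and later):

    -- Cached execution plans and how often each has been reused.
    SELECT TOP (20) cp.usecounts, cp.objtype, cp.size_in_bytes, st.[text]
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    ORDER BY cp.usecounts DESC;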
The reason I ask is that we have a dedicated RAID10 array with ~150GB for tempdb (the "t" drive). It is only used for storing tempdb; the t drive isn't used by SQL Server or any other process for anything else.
Our DBA has tempdb set up with a 15GB initial size and 20% autogrow increments. Every time the server starts it is resized to 15GB and then over the course of the day grows to ~80GB (on average). Now IT is looking into making the initial size larger, say 30 or 40GB, but given that the drive is ONLY used for tempdb, my thinking is why not "max it" right away.
Is there any negative effect to simply creating 4 data files in the primary filegroup for tempdb, giving them each an initial size of 30GB (120GB total), turning autogrow off, and being done with it?
Are there any limits on SQL Server's ability to span multiple tempdb data files in one query? i.e. will it cause problems if tempdb has, say, 70GB total free but the file used by one process is full (30 of 30GB used)?
I would size them to about 100GB and leave autogrow on; this way you don't have to wait for tempdb to grow every time. I would also add multiple files.
Is there any negative effect to simply create 4 data files in the primary group for tempdb, give them each an initial size of 30GB, turn autogrow off and be done with it?
Sounds like a good plan to me; however, I would leave autogrow on just in case someone decides to do a sort operation on a big table which doesn't have an index on that column.
See also here: http://technet.microsoft.com/en-us/library/cc966534.aspx
It is recommended to have .25 to 1 data files (per filegroup) for each CPU on the host server. This is especially true for TEMPDB, where the recommendation is 1 data file per CPU. Dual core counts as 2 CPUs; logical procs (hyperthreading) do not.
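If you do go the pre-sized route, the setup is just a handful of ALTER DATABASE statements. A sketch, using the sizes and the t: drive from the question (the logical names tempdev2-4 and the file paths are made up, and whether to leave FILEGROWTH on is your call per the answers above):

    -- Pre-size the existing primary tempdb data file, then add three more equally sized files.
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 30GB);
    ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 't:\tempdb2.ndf', SIZE = 30GB);
    ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 't:\tempdb3.ndf', SIZE = 30GB);
    ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 't:\tempdb4.ndf', SIZE = 30GB);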
We have found it very useful to create large TempDB data and log files. Any actions that limit server OS activities such as resizing TempDB increase server efficiencies. We have a 16 processor machine with 113 GB dedicated to TempDB data space. This machine is dedicated to large SSIS ETL processes, thus resulting in mass data operations.
The bulk of our ETL operations spawn up to 4 SQL threads. After initially configuring a TempDB file for each processor (16), we quickly realized via performance monitoring that our configuration was forcing SQL\windows to unnecessarily span the multiple TempDB files. We settled on 5 larger TempDB data files and realized performance improvements. We have since moved on to a 24 processor box and are using 8 TempDB files.
Please note that this is a large data migration server; I'm sure transaction-oriented systems would still benefit from the recommended 1-to-1 processor to TempDB file configuration. It should also be noted that a large growth increment (%) on a TempDB file may force a critical transaction to wait on the Windows file-growth operation, and thus may not be appropriate for your specific application.
We're running our application's database on dedicated box running only SQL Server 2005.
This DB server has 32 GB of RAM... and the database file itself is only 6 GB.
I'd like to force several of the heavily read/queried tables into the SQL Memory buffer to increase speed.
I understand that SQL server is really good about keeping necessary data cached in memory once it's read from disk... But our clients would probably prefer their query running quickly the FIRST time.
"Fastest Performance the Second Time" isn't exactly a product highlight.
Short of the old "Pin Table" DBCC command.. any thoughts?
I've written a "CacheTableToSQLMemory" Proc which Loops through all of a table's Indexes (Clustered & Non) , performing a "Select *" into a Temp table. I've scheduled SQL Agent to run a "cache lots of tables" Proc every 15 minutes in an attempt to keep pages in Memory.
It works to a large extent.. but even after I cache all of a query's relevant tables, running a query still increased the Count of Cached pages for that table. then it's faster the 2nd time.
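For reference, the core of a warm-up proc like that is basically this pattern (a sketch only; the table and index names are made up, not the poster's actual code):

    -- Hypothetical warm-up step: force a full read of one index so its pages
    -- land in the buffer pool, then throw the rows away.
    SELECT * INTO #warm
    FROM dbo.HeavilyReadTable WITH (INDEX(IX_HeavilyReadTable_SomeColumn));
    DROP TABLE #warm;

A SELECT COUNT_BIG(*) with the same index hint touches the pages without writing anything to tempdb, which may be gentler on the server.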
thoughts?
We're running PAE & AWE. SQL is set to use between 8 & 20 GB of RAM.
The x86 bottleneck is your real issue. AWE can serve only data pages, as they can be mapped in and out of the AWE areas, but every other memory allocation has to cram into the 2GB of the process virtual address space. That includes every thread stack, all the code, all the data currently mapped 'in use' from AWE and, most importantly, every single cached plan, execution plan, cached security token, cached metadata and so on and so forth. And I'm not even counting CLR; I hope you don't use it.
Given that the system has 32GB of RAM, you can't even try /3GB and see if that helps, because /3GB reduces the total PAE-addressable memory to 16GB, which would make half your RAM invisible...
You really, really have to move to x64. AWE can only help so much. You could collect performance counters from the Buffer Manager and Memory Manager objects and monitor sys.dm_os_memory_clerks to get a better picture of how the instance's memory is behaving (where the memory in use goes, who is consuming it, etc.). I don't expect that will help you solve the issue, really, but I do expect it will give you enough information to make a case for the upgrade to x64.
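A rough sketch of that memory clerk query (the single/multi-page column names below are the SQL Server 2005/2008 ones; later versions collapse them into a single pages_kb column):

    -- Which clerks are consuming memory, and whether it is buffer-pool (single-page)
    -- or outside-the-buffer-pool (multi-page) memory.
    SELECT [type],
           SUM(single_pages_kb)  AS single_pages_kb,
           SUM(multi_pages_kb)   AS multi_pages_kb,
           SUM(awe_allocated_kb) AS awe_allocated_kb
    FROM sys.dm_os_memory_clerks
    GROUP BY [type]
    ORDER BY SUM(single_pages_kb) + SUM(multi_pages_kb) DESC;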
There is no way to pin tables in memory in SQL Server 2005. If SQL Server is dropping the tables from memory, it's because there is memory pressure from other parts of the system. Since your database is only 6GB, the database should stay in memory... provided that there are no other databases on the server.
There are a few things you can do to try to keep data in memory, though. Depending on the patch level and edition of your SQL Server installation, you might be able to make use of the lock pages in memory functionality to ensure that SQL Server's memory never gets paged out.
You can also change the memory allocation on the server to be a fixed size. Unless there's something else on your database server, you can set SQL Server's min and max memory to the same value. This won't necessarily prevent this from happening in the future (it's a function of how SQL Server is supposed to work) but it certainly won't hurt to set your SQL Server to use a fixed amount of memory (if you have no other memory concerns).
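If you do decide to fix the allocation, it's a couple of sp_configure calls. The 28 GB figure below is just an assumption for a 32 GB box that runs nothing but SQL Server; pick whatever fits your environment:

    -- Pin SQL Server's memory to one fixed size (example value only).
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'min server memory (MB)', 28672;
    EXEC sp_configure 'max server memory (MB)', 28672;
    RECONFIGURE;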
So I have a database that is acting weird. I am watching all activity on the server, and tempdb is constantly growing. It has grown by 30GB in about 45 minutes. I keep checking the allocated space in tempdb, and it is always about 8MB. I know that it does not need all the space it is allocating; I have watched a single transaction happen with tempdb essentially empty, and it was still growing.
It appears to me that the engine instead of using previously allocated space is instead choosing to use more of the hard drive space.
I noticed our tempdb was extremely large earlier today and restarted SQL, which brought tempdb back down to a good size, but it has been growing again ever since, and constantly restarting SQL is not an option as this is a production environment. I have limited hard drive space on this server, so I need to keep tempdb at a reasonable size.
Have you done an analysis on the scripts that are running? Have you used the profiler to determine SQL activity?
My first thoughts are scripts using temp tables (#table) and a possible Cartesian Product join?
As a note, tempdb is recreated on startup of SQL Server, so that's why it shrinks back down when you restart the service.
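Alongside Profiler, the tempdb space DMVs can tell you whether the space is going to user objects (#temp tables), internal objects (sorts, hashes, spools), or the version store, and which sessions are doing the allocating. A rough sketch (counts are 8 KB pages):

    -- Where tempdb space is going, by category.
    SELECT SUM(user_object_reserved_page_count)     AS user_object_pages,
           SUM(internal_object_reserved_page_count) AS internal_object_pages,
           SUM(version_store_reserved_page_count)   AS version_store_pages,
           SUM(unallocated_extent_page_count)       AS free_pages
    FROM sys.dm_db_file_space_usage;

    -- Which sessions are allocating tempdb pages right now.
    SELECT session_id,
           SUM(user_objects_alloc_page_count)     AS user_object_pages,
           SUM(internal_objects_alloc_page_count) AS internal_object_pages
    FROM sys.dm_db_task_space_usage
    GROUP BY session_id
    ORDER BY SUM(internal_objects_alloc_page_count) DESC;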
I just rolled a small web application into production on a remote host. After it ran a while, I noticed that MS SQL Server started to consume most of the RAM on the server, starving out IIS and my application.
I have tried changing the "max server memory" setting, but eventually the memory usage begins to creep above this setting. Using the Activity Monitor I have determined that I am not leaving open connections or anything obvious, so I am guessing it's in cache, materialized views and the like.
I am beginning to believe that this setting doesn't mean what I think it means. I do note that if I simply restart the server, the process of memory consumption starts over without any adverse impact on the application - pages load without incident, all retrievals work.
There has got to be a better way to control SQL Server or force it to release some memory back to the system?
From the MS knowledge base:
Note that the max server memory option only limits the size of the SQL Server buffer pool. The max server memory option does not limit a remaining unreserved memory area that SQL Server leaves for allocations of other components such as extended stored procedures, COM objects, non-shared DLLs, EXEs, and MAPI components. Because of the preceding allocations, it is normal for the SQL Server private bytes to exceed the max server memory configuration.
Are you using any extended stored procedures, .NET code, etc that could be leaking memory outside of SQL Server's control?
There is some more discussion on memory/address space usage here
Typically I recommend setting the max server memory to a couple of Gigs below the physical memory installed.
If you also need to run IIS and an application server on the same box, then you'll want to give SQL Server even less memory.