So I have a database that is acting weird. I am watching all activity on the server, and tempdb is constantly growing; it has grown by 30 GB in about 45 minutes. I keep checking the allocated space in tempdb, and it is always about 8 MB. I know it does not need all the space it is allocating: I have watched a single transaction run with tempdb essentially empty and the file still growing.
It appears to me that the engine, instead of reusing previously allocated space, is choosing to take more space from the hard drive.
I noticed our tempdb was extremely large earlier today and restarted SQL Server, which brought tempdb back down to a reasonable size, but it has been growing again ever since, and constantly restarting SQL Server is not an option as this is a production environment. I have limited disk space on this server, so I need to keep tempdb at a reasonable size.
Have you done an analysis on the scripts that are running? Have you used the profiler to determine SQL activity?
My first thoughts are scripts using temp tables (#table) and a possible Cartesian Product join?
As a note, tempdb is recreated on startup of SQL Server, so that's why it is truncated when you restart the service.
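If you're on SQL Server 2005 or later, one way to see which sessions are allocating tempdb space is the sketch below (these DMVs don't exist on SQL Server 2000, so adjust for your version):

```sql
-- Per-session tempdb allocation, largest consumers first (SQL Server 2005+).
SELECT su.session_id,
       (su.user_objects_alloc_page_count
      + su.internal_objects_alloc_page_count) * 8 / 1024 AS allocated_mb,
       es.login_name,
       es.program_name
FROM sys.dm_db_session_space_usage AS su
JOIN sys.dm_exec_sessions AS es
  ON es.session_id = su.session_id
ORDER BY allocated_mb DESC;
```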
The reason I ask is that we have a dedicated RAID10 array with ~150GB for tempdb (the "t" drive). It is only used for storing tempdb; the t drive isn't used by SQL Server or any other process for anything else.
Our DBA has tempdb set up with a 15GB initial size and autogrow in 20% increments. Every time the server starts it is resized to 15GB and then over the course of the day grows to ~80GB (on average). Now IT is looking into making the initial size larger, say 30 or 40GB, but given the drive is ONLY used for tempdb, my thinking is: why not "max it" right away?
Is there any negative effect to simply creating 4 data files in the primary filegroup for tempdb, giving them each an initial size of 30GB (120GB total), turning autogrow off, and being done with it?
Are there any limits on SQL Server's ability to span multiple tempdb data files in one query? I.e., will it cause problems if tempdb has, say, 70GB free in total but the file used by one process is full (30 of 30GB used)?
I would size them to about 100GB and leave autogrow on; this way you don't have to wait for the files to grow every time. I would also add multiple files.
Is there any negative effect to simply creating 4 data files in the primary filegroup for tempdb, giving them each an initial size of 30GB, turning autogrow off, and being done with it?
Sounds like a good plan to me; however, I would leave autogrow on just in case someone decides to do a sort operation on a big table that doesn't have an index on the sort column.
See also here: http://technet.microsoft.com/en-us/library/cc966534.aspx
It is recommended to have 0.25 to 1 data files (per filegroup) for each CPU on the host server. This is especially true for tempdb, where the recommendation is 1 data file per CPU. Dual core counts as 2 CPUs; logical procs (hyperthreading) do not.
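If you do go the pre-sized multi-file route, the statements look roughly like this (syntax for SQL Server 2005 and later; file names, paths, and sizes here are illustrative assumptions, not your actual layout):

```sql
-- Sketch: pre-size the existing tempdb data file and add three more of the same size.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 30GB, FILEGROWTH = 1GB);

ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = N'T:\tempdb2.ndf', SIZE = 30GB, FILEGROWTH = 1GB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev3, FILENAME = N'T:\tempdb3.ndf', SIZE = 30GB, FILEGROWTH = 1GB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev4, FILENAME = N'T:\tempdb4.ndf', SIZE = 30GB, FILEGROWTH = 1GB);
```

Pre-sizing this way also means tempdb comes back at the full size after every restart instead of crawling up from 15GB via autogrow.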
We have found it very useful to create large TempDB data and log files. Anything that cuts down on OS-level file operations, such as TempDB resizing, improves server efficiency. We have a 16-processor machine with 113 GB dedicated to TempDB data space. This machine is dedicated to large SSIS ETL processes, resulting in mass data operations.
The bulk of our ETL operations spawn up to 4 SQL threads. After initially configuring a TempDB file for each processor (16), we quickly realized via performance monitoring that our configuration was forcing SQL/Windows to unnecessarily span the multiple TempDB files. We settled on 5 larger TempDB data files and realized performance improvements. We have since moved on to a 24-processor box and are using 8 TempDB files.
Please note that this is a large data migration server; I'm sure transaction-oriented systems would still benefit from the recommended 1-to-1 processor to TempDB file configuration. It should also be noted that having a large growth percentage on a TempDB file may force a critical transaction to take the Windows file-growth hit, and thus may not be appropriate for your specific application.
We're running our application's database on dedicated box running only SQL Server 2005.
This DB server has 32 GB of RAM... and the database file itself is only 6 GB.
I'd like to force several of the heavily read/queried tables into the SQL Memory buffer to increase speed.
I understand that SQL server is really good about keeping necessary data cached in memory once it's read from disk... But our clients would probably prefer their query running quickly the FIRST time.
"Fastest Performance the Second Time" isn't exactly a product highlight.
Short of the old "Pin Table" DBCC command.. any thoughts?
I've written a "CacheTableToSQLMemory" proc which loops through all of a table's indexes (clustered and non-clustered), performing a "SELECT *" into a temp table. I've scheduled SQL Agent to run a "cache lots of tables" proc every 15 minutes in an attempt to keep pages in memory.
It works to a large extent, but even after I cache all of a query's relevant tables, running the query still increases the count of cached pages for those tables; it's only faster the second time.
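For reference, here is roughly what such a warm-up proc might look like. The name matches the one mentioned above, but the body is my own sketch (it trusts the table name passed in, so it is not injection-safe):

```sql
-- Hypothetical sketch of a warm-up proc: scan every index of a table so its pages
-- get pulled into the buffer pool; the rows themselves are thrown away.
CREATE PROCEDURE dbo.CacheTableToSQLMemory
    @TableName sysname
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @IndexId int, @Sql nvarchar(max);

    DECLARE idx CURSOR LOCAL FAST_FORWARD FOR
        SELECT index_id
        FROM sys.indexes
        WHERE object_id = OBJECT_ID(@TableName)
          AND index_id > 0          -- skip heaps
          AND type IN (1, 2);       -- clustered and nonclustered only

    OPEN idx;
    FETCH NEXT FROM idx INTO @IndexId;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Force a scan of this particular index via an index hint.
        SET @Sql = N'SELECT * INTO #warm FROM ' + @TableName
                 + N' WITH (INDEX(' + CAST(@IndexId AS nvarchar(10)) + N')); DROP TABLE #warm;';
        EXEC sp_executesql @Sql;

        FETCH NEXT FROM idx INTO @IndexId;
    END

    CLOSE idx;
    DEALLOCATE idx;
END
```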
thoughts?
We're running PAE & AWE. SQL is set to use between 8 & 20 GB of RAM.
The x86 bottleneck is your real issue. AWE can serve only data pages, as they can be mapped in and out of the AWE areas, but every other memory allocation has to cram into the 2GB of the process virtual address space. That includes every thread stack, all the code, all the data currently mapped 'in use' from AWE and, most importantly, every single cached plan, execution plan, cached security token, cached metadata and so on and so forth. And I'm not even counting the CLR; I hope you don't use it.
Given that the system has 32GB of RAM, you can't even try /3GB to see if that helps, because PAE is then limited to 16GB total, which would make half your RAM invisible...
You really, really have to move to x64. AWE can only help so much. You could collect performance counters from the Buffer Manager and Memory Manager objects and monitor sys.dm_os_memory_clerks to get a better picture of how the instance's memory is behaving (where the memory in use goes, who is consuming it, etc.). I don't expect that will solve the issue, but I do expect it will give you enough information to make a case for the upgrade to x64.
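For the memory-clerk part, a starting-point query might look like this (column names as in SQL Server 2005; they were renamed in later versions):

```sql
-- Top memory clerks by total allocation (single-page, multi-page, and AWE).
SELECT TOP (10)
       type,
       SUM(single_pages_kb + multi_pages_kb + awe_allocated_kb) AS total_kb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY total_kb DESC;
```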
There is no way to pin tables in memory in SQL Server 2005. If SQL Server is dropping the tables from memory, it's because there is memory pressure from other parts of the system. Since your database is only 6GB, the database should stay in memory... provided that there are no other databases on the server.
There are a few things you can do to try to keep data in memory, though. Depending on the patch level and edition of your SQL Server installation, you might be able to make use of the lock pages in memory functionality to ensure that SQL Server's memory never gets paged out.
You can also change the memory allocation on the server to be a fixed size. Unless there's something else on your database server, you can set SQL Server's min and max memory to the same value. This won't necessarily prevent this from happening in the future (it's a function of how SQL Server is supposed to work) but it certainly won't hurt to set your SQL Server to use a fixed amount of memory (if you have no other memory concerns).
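A sketch of that fixed-memory setup (the 20 GB figure is just an example taken from the range mentioned in the question):

```sql
-- Give SQL Server a fixed memory window by setting min and max to the same value (in MB).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 20480;
EXEC sp_configure 'max server memory (MB)', 20480;
RECONFIGURE;
```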
Hi,
I have made a maintenance package that uses the Shrink Database task for a specific database. It ran successfully, but I found a slight increase over the previous database size: the initial size was 129 GB, and after running the package it was 130 GB.
I was expecting it to shrink after the shrink task, so what might have happened? I am sure the package was scheduled to run; I checked the history and it ran successfully.
Any help? Please advise if any special care is required. Thanks in advance.
Do not shrink the database during maintenance. There is probably no more damaging action you can take. Read more at Auto-shrink – turn it OFF. If a database has grown to a certain size, it will likely grow back if you shrink it. Shrinking the database is tremendously damaging to index fragmentation and will slow down your reporting and analytic workloads. Once shrunk, when the database grows back during normal operations, the auto-growth events will interrupt and freeze the database while each growth happens.
It is one thing to shrink a database that got out of control due to some rogue action that bloated it. But having the shrink in a maintenance task means you will do it constantly, on a scheduled interval, and that is very bad.
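If a one-off shrink really was needed, it is worth checking what it did to your indexes afterwards. Something like this (SQL Server 2005+, run inside the affected database) shows the fragmentation left behind:

```sql
-- Fragmentation after a shrink, worst first (run in the database in question).
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;
```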
There are a couple of things you can do to check on this. In SQL Server Management Studio (SSMS), Object Explorer, right-click on the database name and select Properties. On the General tab you'll find the Space Available value. Is there any available space?
Note that the space available includes space in the transaction log file. You need that space, so you don't want to shrink the database too much.
Also, keep in mind that your database is probably in full recovery mode. What this means is that, as data is inserted, updated, and deleted in the database, SQL Server records it in the transaction log. This log can become quite large on a busy database. You can reduce the size of the log by performing regular log backups. Remember, the point of the log is so that you can do log backups and point-in-time restores. If you're not doing this, or don't need to do this, you might consider switching the database to simple recovery mode.
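A rough sketch of those checks ('MyDb' is a placeholder database name; the catalog view needs SQL Server 2005+):

```sql
USE MyDb;
EXEC sp_spaceused;        -- database size and unallocated space in the data files

-- Check the current recovery model:
SELECT name, recovery_model_desc FROM sys.databases WHERE name = N'MyDb';

-- Only if you don't need log backups / point-in-time restores:
-- ALTER DATABASE MyDb SET RECOVERY SIMPLE;
```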
We have a SQL Server 2000 production environment where suddenly (i.e. over the last 3 days) something has caused the tempdb data file to grow very large (45 GB for a database which is only 10 GB).
Yesterday, after it happened again we shrank the database and ran the major batch processes individually without any problems. However, this morning the database was back up to 45 gigs.
Is there a simple way to find out what is causing this database to grow so large? Ideally something that could be looked at today, but if that is not available, something that can be set up to capture that information tomorrow.
BTW: Shrinking the database gets back the space within a few seconds.
Agreed with Jimmy: you need to use SQL Profiler to find which temporary objects are being created so intensively. They may be temporary tables used by some reports or something similar.
I wanted to thank everyone for their answers as they definitely led to the cause of the problem.
We turned on SQL Profiler and sure enough a large bulk load showed up. As we are working on a project to move the "offending" job over to MySQL as well, we will probably just watch things for now.
Do you have a job running that rebuilds indexes? It is possible that it uses SORT_IN_TEMPDB.
Any other large queries that do sorting might expand tempdb as well.
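For what it's worth, this is the sort of statement that does it; the table and index names here are made up, and the old-style WITH clause is used since the question is about SQL Server 2000:

```sql
-- Rebuilding an index with the sort done in tempdb pushes the sort work (and its space) into tempdb.
CREATE INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    WITH DROP_EXISTING, SORT_IN_TEMPDB;
```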
This may have something to do with the recovery model that the TempDB is set to. It may be set to FULL instead of BULK-LOGGED. FULL recovery increases the transaction log size until a backup is performed.
Look at the data file size vs. the transaction log size.
I'm not a DBA, but some thoughts:
1. Is it possible that there are temp tables being created but not dropped? ##tempTable?
2. Is it possible that there is a large temp table being created (and dropped) but the space isn't reclaimed?
3. Are you doing any sort of bulk loading where the system might use the temp table?
4. (I'm not sure if you can) but can you turn on Auto-Shrink for the tempdb?
I have had a few problems with log files growing too big on my SQL Servers (2000). Microsoft doesn't recommend using auto shrink for log files, but since it is a feature it must be useful in some scenarios. Does anyone know when is proper to use the auto shrink property?
Your problem is not that you need to autoshrink periodically but that you need to back up the log files periodically. (We back ours up every 15 minutes.) Backing up the database itself is not sufficient; you must back up the log as well. If you do not back up the transaction log, it will grow until it takes up all the space on the drive. If you back it up, the space is freed to be reused (you will still probably need to shrink after the first backup to get the log down to a more reasonable size). If you don't need to be able to recover transactions to a point in time (which you should need, unless your entire database consists of tables that are loaded from another source and can easily be reloaded), then set your database to simple recovery mode.
One reason why autoshrinking isn't such a good idea is that you will be growing the transaction log frequently, which slows down performance. If you back up the log, once you get to a relatively stable size (the amount of space normally used by the transaction log in the time period between backups), the log will only need to grow occasionally when there is an unusually heavy amount of transactions.
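A sketch of that routine ('MyDb', its log's logical name, and the backup path are placeholders):

```sql
-- Scheduled regularly (e.g. every 15 minutes): back up the transaction log so its space can be reused.
BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log.trn';

-- One-off only, after the first backup, if the log has already ballooned:
USE MyDb;
DBCC SHRINKFILE (MyDb_log, 1024);   -- target size in MB
```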
My take on this is that auto-shrink is useful when you have many fairly small databases that frequently get larger due to added data, and then have a lot of empty space afterwards. You also need to not mind that the files will be fragmented on the disk when they frequently grow and shrink. I'd never use auto-shrink on a critical database or one larger than 2 GB, as you never know when the shrink operation will kick in, and access to the database will be blocked until the shrink has completed.
You should never have autoshrink turned on. It causes performance degradation in several ways. The file-system and indexes become fragmented and it is very resource intensive. It is also not necessary if you manage your backups correctly.
Read this answer from Paul Randal on Server Fault and Just Say No To Auto-Shrink!!
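And if you're not sure whether it's on, checking and turning it off is a one-liner each ('MyDb' is a placeholder):

```sql
-- Is auto-shrink enabled?
SELECT DATABASEPROPERTYEX(N'MyDb', 'IsAutoShrink');   -- 1 = on, 0 = off

-- Turn it off:
ALTER DATABASE MyDb SET AUTO_SHRINK OFF;
```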
I used to use it when we had a demo version of a huge database that took up a lot of space on the laptop, so we used it to keep the size down.
The key is to use it only when the data is basically throw away.
You should truncate the logs periodically as a part of your backup strategy.