We have a database with two tables: one with hundreds of millions of rows (row size < 1 KB), the other with 14 million rows. Compression is enabled on both.
The database size was ~66 GB, and everything worked fine.
The indexes were 75% fragmented, and a coworker started a REBUILD on both tables. It has been running for 4.5 hours now. The MDF is almost 150 GB, the LDF is about 13 GB, and both keep growing. We're about to run out of space.
What should we do? Wait for it to finish? Cancel the query? Restart SQL Server? Reboot the server?
Update: the process completed after 7 hours, having consumed about 170 GB for the MDF file.
So the answer is:
Have plenty of disk space available, close to what the uncompressed data would occupy, or at least about 3x the compressed size;
Be prepared to increase disk space as needed, and have IT around for it (either on a VM host or on a physical box with hot-swappable storage);
Always do one table at a time;
Be ready to wait a long time.
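For reference, a minimal T-SQL sketch of the "one table at a time" approach (the table and index names here are placeholders, not from our setup): check fragmentation first, then rebuild a single index, optionally pushing the sort work into tempdb to limit MDF growth.

    -- Check fragmentation for one table before deciding what to rebuild
    SELECT i.name, s.avg_fragmentation_in_percent, s.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.BigTable'),
                                        NULL, NULL, 'LIMITED') AS s
    JOIN sys.indexes AS i
      ON i.object_id = s.object_id AND i.index_id = s.index_id;

    -- Rebuild a single index; SORT_IN_TEMPDB moves the sort out of the user
    -- database (at the cost of tempdb space); ONLINE requires Enterprise Edition
    ALTER INDEX IX_BigTable_Key ON dbo.BigTable
    REBUILD WITH (SORT_IN_TEMPDB = ON, ONLINE = ON);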
Hope this helps someone.
I have a Synology with 2 TB of disk space, and it is backed up every day by Hyper Backup (with Smart Recycle).
But there is a file, #img_bkp_cache, that keeps growing and takes up almost a fifth of the total disk capacity:
368G /volume2/#img_bkp_cache
1.3T /volume2/Samba
Is it safe to remove that cache file? If so, how? Otherwise, what can I do to shrink it?
Thank you for your help.
Here is Synology support's answer (translated):
The cache image contains the index of your remote backups. This index is compared to the remote index to figure out which elements have changed. If you have several remote backups, then #img_bkp_cache will get bigger and bigger.
The index takes roughly 5% of the total size of a backup.
It is not really safe to remove #img_bkp_cache. If you do so, the remote backup will not be affected, but it will become impossible to manage incremental backups.
In a nutshell, this file is important and cannot be deleted without consequences.
Note: In the end, I switched from RAID 1 to RAID 5 and doubled my storage capacity (I had a fifth disk that was unused), which "solved" the problem.
When I take a backup of an SQL Server database (around 1000 GB), the time taken for the process varies each time.
Sometimes the backup process completes within 70 minutes, and sometimes it takes around 3 hours.
What could be the reason for the variation in backup times?
Can someone tell me the factors that influence backup time in SQL Server and the steps that could be taken to reduce it?
Here are the details of the server box from which the backup is initiated:
MS Windows Server 2012 R2 Standard
Processor with a 2.60 GHz clock frequency
RAM: more than 300 GB
Additional info: all drives are SSDs, and the backup is taken with compression.
The backup target is a remote network location, reached over a 1 Gbps network.
So 1000 GB is the total size of the DB data files; what about the transaction log files?
Are the transaction log(s) included in your backup?
A database can use the simple recovery model, where the log is truncated automatically at each checkpoint, or the full recovery model, where log records are retained.
Under the full recovery model, the transaction log generally stays there (and grows) until a log backup truncates it.
Depending on the workload, the size of the transaction log files can vary a lot from one backup to another even though the database size changed only slightly, so the total amount of data to be backed up can vary a lot too, and with it, obviously, the time to complete the full backup process.
In my experience, backups taken when there has been almost no user activity, e.g. on a Sunday evening, are a lot faster than those taken after a large number of transactions, due to the increased size of the log files.
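If you want to check whether the slow runs line up with more data actually being written, msdb keeps a backup history you can query; a quick sketch (the database name is a placeholder):

    -- Compare recent backup sizes and durations from the msdb history tables
    SELECT TOP (20)
        database_name,
        type,                                    -- D = full, L = log
        backup_size / 1024 / 1024 / 1024 AS backup_size_gb,
        DATEDIFF(MINUTE, backup_start_date, backup_finish_date) AS duration_min
    FROM msdb.dbo.backupset
    WHERE database_name = 'YourDatabase'         -- placeholder name
    ORDER BY backup_start_date DESC;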
I have a 400 GB MDF file, and it is growing by 5 GB daily. At the moment, autogrow is set low, at 10 MB.
If I grow the file by 10 GB, I am guessing this will stall SQL Server and cause lots of timeouts.
What's the recommended approach here?
Autogrow, or take the hit and grow the file nightly?
The recommended approach is to NEVER grow it outside maintenance periods. Since any sensibly large and busy SQL database lives on dedicated storage anyway, you can pre-allocate all the available space during a maintenance window.
MDF file growth is instantaneous with instant file initialization. SQL Server will not stall if you grow the file by 10 TB, let alone 10 GB. I would recommend you grow the database, in a single operation, to the size you expect it to reach in the next 12 months. There's no reason to wait for it to expand the file repeatedly throughout the day, or even each night.
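A sketch of what that looks like in practice (the database and logical file names are assumptions); note the first query only works on SQL Server 2016 SP1 and later, where the DMV exposes the instant file initialization flag:

    -- Confirm instant file initialization is enabled for the engine service
    SELECT servicename, instant_file_initialization_enabled
    FROM sys.dm_server_services;

    -- Grow the data file once, to the size expected over the next 12 months
    ALTER DATABASE YourDatabase
    MODIFY FILE (NAME = YourDataFile, SIZE = 600GB);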
The reason I ask is that we have a dedicated RAID 10 array with ~150 GB for tempdb (the "T" drive). It is only used for storing tempdb; the T drive isn't used by SQL Server or any other process for anything else.
Our DBA has tempdb set up with a 15 GB initial size and 20% autogrow increments. Every time the server starts it is resized to 15 GB, and then over the course of the day it grows to ~80 GB (on average). Now IT is looking into making the initial size larger, say 30 or 40 GB, but given that the drive is ONLY used for tempdb, my thinking is: why not "max it out" right away?
Is there any negative effect to simply creating 4 data files in the primary filegroup for tempdb, giving them each an initial size of 30 GB (120 GB total), turning autogrow off, and being done with it?
Are there any limits on SQL Server's ability to span multiple tempdb data files in one query? I.e., will it cause problems if tempdb has, say, 70 GB total free but the file used by one process is full (30 of 30 GB used)?
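For concreteness, here is what that plan would look like in T-SQL (the logical names and T:\ paths are assumptions); sizes set this way become the baseline tempdb starts with on every instance restart:

    -- Resize the existing tempdb file; FILEGROWTH = 0 disables autogrow
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 30GB);
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILEGROWTH = 0);

    -- Add three more 30 GB data files on the dedicated drive
    ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 30GB, FILEGROWTH = 0);
    ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 30GB, FILEGROWTH = 0);
    ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 30GB, FILEGROWTH = 0);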
I would size them to about 100 GB total and leave autogrow on; this way you don't have to wait for it to grow every time. I would also add multiple files.
Is there any negative effect to simply creating 4 data files in the primary group for tempdb, giving them each an initial size of 30 GB, turning autogrow off, and being done with it?
Sounds like a good plan to me; however, I would leave autogrow on, just in case someone decides to do a sort operation on a big table that doesn't have an index on that column.
See also here: http://technet.microsoft.com/en-us/library/cc966534.aspx
It is recommended to have .25 to 1 data files (per filegroup) for each CPU on the host server. This is especially true for TEMPDB, where the recommendation is 1 data file per CPU. Dual core counts as 2 CPUs; logical procs (hyperthreading) do not.
We have found it very useful to create large TempDB data and log files up front. Anything that avoids server OS activity, such as resizing TempDB on the fly, improves server efficiency. We have a 16-processor machine with 113 GB dedicated to TempDB data space. This machine is dedicated to large SSIS ETL processes, which means mass data operations.
The bulk of our ETL operations spawn up to 4 SQL threads. After initially configuring a TempDB file for each processor (16), we quickly realized via performance monitoring that our configuration was forcing SQL/Windows to unnecessarily span the multiple TempDB files. We settled on 5 larger TempDB data files and saw performance improvements. We have since moved on to a 24-processor box and are using 8 TempDB files.
Please note that this is a large data-migration server; I'm sure transaction-oriented systems would still benefit from the recommended 1:1 processor-to-TempDB-file configuration. It should also be noted that a large growth percentage on a TempDB file may force a critical transaction to take the hit of the Windows file-growth operation, and so may not be appropriate for your specific application.
So I have a database that is acting weird. I am watching all activity on the server, and tempdb is constantly growing: it has grown by 30 GB in about 45 minutes. I keep checking the allocated space in tempdb, and it is always about 8 MB. I know it does not need all the space it is allocating; I have watched a single transaction run with tempdb essentially empty while the files were still growing.
It appears to me that, instead of reusing previously allocated space, the engine is choosing to take more space from the hard drive.
I noticed our tempdb was extremely large earlier today and restarted SQL Server, which brought tempdb back down to a good size, but it has been growing again ever since, and constantly restarting SQL Server is not an option, as this is a production environment. I have limited hard-drive space on this server, so I need to keep tempdb at a reasonable size.
Have you done any analysis on the scripts that are running? Have you used Profiler to determine the SQL activity?
My first thoughts are scripts using temp tables (#table) and a possible Cartesian product join.
As a note, tempdb is recreated at its configured initial size on startup of SQL Server, so that's why it shrinks back when you restart the service.
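If Profiler is too heavy for a production box, a lighter option is the tempdb space-usage DMVs; a rough sketch to spot which sessions are allocating the space:

    -- Sessions ranked by tempdb pages they have allocated (pages are 8 KB)
    SELECT
        session_id,
        user_objects_alloc_page_count * 8 / 1024     AS user_objects_mb,
        internal_objects_alloc_page_count * 8 / 1024 AS internal_objects_mb
    FROM sys.dm_db_session_space_usage
    ORDER BY internal_objects_alloc_page_count DESC;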