I have a database that takes up 20GB of space. I have taken a backup and want to restore it to 3 other databases, but due to the large size of the database, I get a low-space message on restore. I don't want all the data. Is there any way to reduce the size of the database, or what are the possible ways to manage all this work?
Previously, I took a backup and shrank it. The size was reduced to 1.11GB, but on restoring it under a new database name, it takes 20GB of disk space again.
Use Generate Scripts, set the script type to schema only, then create a new database and run this script...
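A minimal sketch of that approach, assuming the script was exported from SSMS (Tasks > Generate Scripts, with "Schema only" selected); the database name and script path here are placeholders:
-- Create an empty target database, then run the schema-only script against it.
CREATE DATABASE MyDb_Copy1;
GO
-- In SQLCMD mode, execute the generated script (path is hypothetical):
-- :r C:\Scripts\MyDb_SchemaOnly.sql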
I have a Synology with 2 TB of disk space, and it is backed up every day by Hyper Backup (with Smart Recycle).
But there is a file, #img_bkp_cache, that keeps growing and takes almost 1/5th of the total disk capacity:
368G /volume2/#img_bkp_cache
1.3T /volume2/Samba
Is it safe to remove that cache file? If so, how? Otherwise, what can I do to shrink it?
Thank you for your help.
Here is Synology support answer (translated):
The cache image contains your remote backups index. This index is
compared to the remote index to figure out which elements have
changed. If you have several remote backups, then #img_bkp_cache will
get bigger and bigger.
The index takes roughly 5% of the total size of a backup.
It is not really safe to remove #img_bkp_cache. If you do so, the
remote backup will not be affected, but it will be impossible to manage
incremental backups.
In a nutshell, this file is important and cannot be deleted without consequences.
Note: Finally, I switched from RAID 1 to RAID 5 and doubled my storage capacity (I had a fifth volume that was unused), which "solved" the problem.
I have a 400GB MDF file and it's growing by 5GB daily. At the moment, autogrow is set low, at 10MB.
If I grow the file by 10GB, I am guessing this will stall SQL Server and cause lots of timeouts.
What's the recommended approach here?
Autogrow or take the hit and grow the file nightly?
The recommended approach is to NEVER grow it outside maintenance periods. Since any sensibly large and busy SQL database lives on dedicated storage anyway, you can pre-allocate all the available space during a maintenance window.
MDF file growth is instantaneous with instant file initialization. SQL Server will not stall if you grow the file by 10 TB, let alone 10 GB. I would recommend you grow the database to the size you expect it to be in the next 12 months in a single operation. There's no reason to wait for it to expand your file size repeatedly throughout the day or even each night.
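As a hedged sketch of that single pre-grow operation (the database name, logical file name, and target size are placeholders; instant file initialization must be enabled, i.e. the service account holds the "Perform volume maintenance tasks" privilege, for data file growth to be near-instant):
-- Pre-grow the data file once, during a maintenance window,
-- to the size you expect the database to reach in the next 12 months.
ALTER DATABASE MyBigDb
MODIFY FILE (NAME = MyBigDb_Data, SIZE = 600GB);
GO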
The reason I ask is that we have a dedicated RAID10 array with ~150GB for tempdb (the "t" drive). It is used only for storing tempdb; neither SQL Server nor any other process uses the t drive for anything else.
Our DBA has tempdb set up with a 15GB initial size and 20% autogrow increments. Every time the server starts it is resized to 15GB, and over the course of the day it grows to ~80GB on average. Now IT is looking into making the initial size larger, say 30 or 40GB, but given that the drive is ONLY used for tempdb, my thinking is: why not "max it" right away?
Is there any negative effect to simply creating 4 data files in the primary filegroup for tempdb, giving them each an initial size of 30GB (120GB total), turning autogrow off, and being done with it?
Are there any limits on SQL Server's ability to span multiple tempdb data files in one query? I.e., will it cause problems if tempdb has, say, 70GB total free but the file used by one process is full (30 of 30GB used)?
I would size them to about 100GB and leave autogrow on; this way you don't have to wait for the files to grow every time. I would also add multiple files.
Is there any negative effect to simply create 4 data files in the primary group for tempdb, give them each an initial size of 30GB, turn autogrow off and be done with it?
Sounds like a good plan to me; however, I would leave autogrow on just in case someone decides to do a sort operation on a big table which doesn't have an index on that column.
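A minimal sketch of that configuration, assuming the default logical file name tempdev and a hypothetical T: drive layout; adjust names and paths to your instance:
-- Resize the existing tempdb data file and add three more, each 30GB,
-- keeping a modest fixed-size autogrow as a safety net.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 30GB, FILEGROWTH = 1GB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 30GB, FILEGROWTH = 1GB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 30GB, FILEGROWTH = 1GB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 30GB, FILEGROWTH = 1GB);
GO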
See also here: http://technet.microsoft.com/en-us/library/cc966534.aspx
It is recommended to have 0.25 to 1 data files (per filegroup) for each CPU on the host server. This is especially true for TEMPDB, where the recommendation is 1 data file per CPU. Dual core counts as 2 CPUs; logical procs (hyperthreading) do not.
We have found it very useful to create large TempDB data and log files. Avoiding actions that trigger server OS activity, such as resizing TempDB, improves server efficiency. We have a 16-processor machine with 113GB dedicated to TempDB data space. This machine is dedicated to large SSIS ETL processes, resulting in mass data operations.
The bulk of our ETL operations spawn up to 4 SQL threads. After initially configuring a TempDB file for each processor (16), we quickly realized via performance monitoring that our configuration was forcing SQL Server/Windows to unnecessarily span the multiple TempDB files. We settled on 5 larger TempDB data files and realized performance improvements. We have since moved to a 24-processor box and are using 8 TempDB files.
Please note that this is a large data-migration server; I'm sure transaction-oriented systems would still benefit from the recommended 1-to-1 processor-to-TempDB-file configuration. It should also be noted that a large growth percentage on a TempDB file may force a critical transaction to take the Windows file-growth hit, and thus may not be appropriate for your specific application.
Hi,
I have made a maintenance package that uses a Shrink Database task for a specific database. It ran successfully, but I found a slight increase over the previous database size: initial size 129GB, and after running the package, 130GB.
I was expecting it to shrink after the shrink task ran. What might have happened? I'm sure the package was scheduled to run, and checking the history it shows it ran successfully.
Any help or advice on any special care required? Thanks in advance.
Do not shrink the database during maintenance. There is probably no more damaging action you can take. Read more at Auto-shrink – turn it OFF. If a database has grown to a certain size, then it will likely grow back after you shrink it. Shrinking the database is tremendously damaging to index fragmentation and will slow down your reporting and analytic workloads. And once shrunk, when the database grows back during normal operations, the auto-growth events will interrupt and freeze the database during the growth.
It is one thing to shrink a database that has grown out of control due to some rogue action that increased it. But having the shrink in a maintenance task means you will do it constantly, on a scheduled interval, and that is very bad.
There are a couple of things you can do to check on this. In SQL Server Management Studio (SSMS), in Object Explorer, right-click on the database name and select Properties. On the General tab you'll find the Space Available value. Is there any available space?
Note that the space available includes space in the transaction log file. You need that space, so you don't want to shrink the database too much.
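If you prefer T-SQL over the SSMS dialog, here's a quick equivalent check (the database name is a placeholder):
USE mydb;
GO
-- Reports the database size and the unallocated (available) space.
EXEC sp_spaceused;
GO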
Also, keep in mind that your database is probably in full recovery mode. What this means is that, as data is inserted, updated, and deleted, SQL Server records it in the transaction log. This log can become quite large on a busy database. You can keep the log in check by performing regular transaction log backups (a full backup alone will not truncate the log). Remember, the point of the log is so that you can take log backups and do point-in-time restores. If you're not doing this, or don't need to, you might consider switching the database to simple recovery mode.
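A hedged sketch of both options (database name and backup path are placeholders):
-- Option 1: keep full recovery and back up the log regularly,
-- which truncates inactive log records and keeps the file from growing.
BACKUP LOG mydb TO DISK = 'D:\Backups\mydb_log.trn';
GO
-- Option 2: if point-in-time restores are not needed,
-- switch to simple recovery so the log truncates on checkpoint.
ALTER DATABASE mydb SET RECOVERY SIMPLE;
GO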
I went in and checked my Transaction log the other day and it was something crazy like 15GB. I ran the following code:
USE mydb
GO
-- WITH TRUNCATE_ONLY discards log records without backing them up;
-- it breaks the log backup chain and was removed in SQL Server 2008.
BACKUP LOG mydb WITH TRUNCATE_ONLY
GO
-- Shrink the physical log file down to 8 MB.
DBCC SHRINKFILE(mydb_log, 8)
GO
Which worked fine and shrank it down to 8MB... but the DB in question is a Log Shipping Publisher, and the log is already back up to some 500MB and growing quickly.
Is there any way to automate this log shrinking, outside of creating a custom "Execute T-SQL Statement Task" Maintenance Plan Task, and hooking it on to my log backup task? If that's the best way then fine...but I was just thinking that SQL Server would have a better way of dealing with this. I thought it was supposed to shrink automatically whenever you took a log backup, but that's not happening (perhaps because of my log shipping, I don't know).
Here's my current backup plan:
Full backups every night
Transaction log backups once a day, late morning (maybe hook the log shrinking onto this... it doesn't need to be shrunk every day, though)
Or maybe I just run it once a week, after I run a full backup task? What do you all think?
If your file grows by 500MB every night, there is only one correct action: pre-grow the file to 500MB and leave it there (see the sketch after this list). Shrinking the log file is damaging, and having the log file auto-grow is also damaging:
you hit the zero-fill initialization cost of file growth during normal operations (log files cannot use instant file initialization), reducing performance
your log grows in small increments, creating many virtual log files (VLFs), resulting in poorer operational performance
your log gets fragmented during shrinkage. While not as bad as data file fragmentation, log file fragmentation still impacts performance
one day the daily growth of 500MB will run out of disk space, and you'll wish the file had been pre-grown
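A minimal sketch of that pre-grow, with placeholder names and a size picked to cover the observed daily churn:
-- Grow the log once to a size that covers normal activity,
-- with a sane fixed-size autogrow left only as an emergency fallback.
ALTER DATABASE mydb
MODIFY FILE (NAME = mydb_log, SIZE = 500MB, FILEGROWTH = 256MB);
GO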
You don't have to take my word for it, you can read on some of the MVP blogs what they have to say about the practice of log and file shrinkage on a regular basis:
Auto-shrink – turn it OFF!
Oh, the horror! Please stop telling people they should shrink their log files!
Why you want to be restrictive with shrink of database files
Don't Touch that Shrink Button!
Do not truncate your ldf files!
There are more, I just got tired of linking them.
Every time you shrink a log file, a fairy loses her wings.
I'd think about more frequent transaction log backups.
I think what you suggest in your question is the right approach. That is, "hook the log shrinking onto" your nightly backup/maintenance task process. The main thing is that you are regularly taking transaction log backups, which is what allows the log to be shrunk when you run the shrink task. The key thing to keep in mind is that this is a two-step process: 1) back up your transaction log, which automatically "truncates" the log's internal records; 2) run a shrink against the log file. "Truncate" doesn't necessarily (or ever?) mean that the file itself will shrink; shrinking it is a separate step you must do.
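As a sketch of that two-step sequence (the names, path, and 512MB target are placeholders):
-- Step 1: back up the log, which marks inactive log records as reusable.
BACKUP LOG mydb TO DISK = 'D:\Backups\mydb_log.trn';
GO
-- Step 2: shrink the physical log file to the target size (in MB).
DBCC SHRINKFILE(mydb_log, 512);
GO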
For SQL Server 2005:
DBCC SHRINKFILE ( Database_log_file_name , NOTRUNCATE)
This statement doesn't break log shipping, but you may need to run it more than once. Between runs, let the log shipping backup, copy, and restore jobs execute, then run the statement again.
Shrink and truncate are different.
My experiences:
AA db, 6.8GB transaction log:
first run: 6.8GB
second run (after log shipping backup, copy, restore): 1.9GB
third run (after log shipping backup, copy, restore): 1.7GB
fourth run (after log shipping backup, copy, restore): 1GB
BB db, 50GB transaction log:
first run: 39GB
second run (after log shipping backup, copy, restore): 1GB
Creating a transaction log backup doesn't mean that the online transaction log file size will be reduced; the file size remains the same. When a transaction is backed up, it is marked for overwriting in the online transaction log. It's not automatically removed and no space is freed, so the file size stays the same.
Once you set the LDF file size, maintain its size by setting the right transaction log backup frequency.
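To pick a sensible size and backup frequency, you can watch how full the log actually gets; one quick built-in check:
-- Shows each database's log size and the percentage currently in use.
DBCC SQLPERF (LOGSPACE);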
Paul Randal provides details here:
Understanding Logging and Recovery in SQL Server
Understanding SQL Server Backups
Based on Microsoft's recommendations, before you shrink the log file you should first try the following alternatives:
Freeing disk space so that the log can automatically grow.
Moving the log file to a disk drive with sufficient space.
Increasing the size of a log file.
Adding a log file on a different disk.
Turn on auto growth by using the ALTER DATABASE statement to set a non-zero growth increment for the FILEGROWTH option.
ALTER DATABASE SharePoint_Config MODIFY FILE ( NAME = SharePoint_Config_log, SIZE = 2MB, MAXSIZE = 200MB, FILEGROWTH = 10MB );
Also, you should be aware that a shrink operation via a maintenance plan will affect both the *.mdf and *.ldf files. So if you want to shrink only the *.ldf file, create a maintenance plan with a SQL job task and use the following command to shrink the *.ldf file to your target size.
USE SharePoint_Config
GO
-- Switch to simple recovery so the inactive log can be cleared.
ALTER DATABASE SharePoint_Config SET RECOVERY SIMPLE
GO
-- Shrink the log file to 100 MB.
DBCC SHRINKFILE('SharePoint_Config_log', 100)
GO
-- Switch back to full recovery; take a full backup afterwards
-- to restart the log backup chain.
ALTER DATABASE SharePoint_Config SET RECOVERY FULL
GO
Note: 100 is the target_size for the file, in megabytes, expressed as an integer. If not specified, DBCC SHRINKFILE reduces the file to its default size, which is the size specified when the file was created.
In my humble opinion, it's not recommended to perform the shrink operation periodically! Do it only in circumstances where you genuinely need to reduce the physical size.
You can also check this useful guide to Shrink a transaction log file Maintenance Plan in SQL Server