Reduce database restore time - SQL

I have a database backup file of 4.78 GB. When I restore it on any machine, it takes a long time (15-20 minutes) to complete. The MDF file of the database is 5.64 GB and the LDF is about 100 MB. There are lots of tables and a lot of data in the database. Is there any way I can reduce the time taken to restore the backup file?

Restore speed depends on various factors, such as:
Processor speed
Hard disk read and write speed
Upgrading these components should reduce the database restore time.
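Beyond hardware, enabling instant file initialization for the SQL Server service account can also help, because the data file then does not have to be zero-filled before the restore writes into it (the log file is always zeroed). As a minimal sketch, with hypothetical database, file, and path names, STATS makes the restore report its progress:
-- Hypothetical names and paths; STATS = 10 prints progress every 10 percent.
RESTORE DATABASE MyDb
FROM DISK = N'D:\Backups\MyDb.bak'
WITH MOVE N'MyDb_Data' TO N'E:\Data\MyDb.mdf',
     MOVE N'MyDb_Log' TO N'F:\Logs\MyDb.ldf',
     STATS = 10;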

Related

Can I remove #img_bkp_cache generated by Hyper Backup in Synology?

I have a Synology NAS with 2 TB of disk space, and it is backed up every day by Hyper Backup (with Smart Recycle).
But there is a file, #img_bkp_cache, that keeps growing and takes up almost a fifth of the total disk capacity:
368G /volume2/#img_bkp_cache
1.3T /volume2/Samba
Is it safe to remove that cache file? If so, how? Otherwise, what can I do to shrink it?
Thank you for your help.
Here is the Synology support answer (translated):
The cache image contains the index of your remote backups. This index is compared to the remote index to figure out which elements have changed. If you have several remote backups, #img_bkp_cache will keep getting bigger.
The index takes roughly 5% of the total size of a backup.
It is not really safe to remove #img_bkp_cache. If you do, the remote backups themselves will not be affected, but it will become impossible to manage incremental backups.
In a nutshell, this file is important and cannot be deleted without consequences.
Note: Finally, I switched from RAID 1 to RAID 5 and doubled my storage capacity (I had a fifth volume that was unused), which "solved" the problem.

SQL Server database backup operation takes a long time

When I take a backup of a SQL Server database (around 1,000 GB), the time taken for the process varies each time.
Sometimes the backup completes within 70 minutes, and sometimes it takes around 3 hours.
What could be the reason for the variation in backup times?
Can someone tell me the factors that influence backup time in SQL Server, and the steps that could be taken to reduce it?
Here are the details of the server box from which the backup is initiated:
MS Windows Server 2012 R2 Standard
Processor: 2.60 GHz
RAM: more than 300 GB
Additional info: all drives are SSDs, and the backup is taken with compression.
The backup target is a remote network location, reached over a 1 Gbps network link.
So 1,000 GB is the total size of the DB data files; what about the transaction log files?
Are the transaction log(s) included in your backup?
A database in the SIMPLE recovery model truncates its log automatically at checkpoints, so the log stays small. In the FULL recovery model, the log keeps a record of every change and continues to grow until a transaction log backup truncates it.
Depending on the workload, the size of the transaction log files can therefore vary a lot from one backup to the next, even when the database size has changed only slightly, so the total amount of data to back up, and the time to complete the full backup, can vary a lot too.
In my experience, backups taken when there has been almost no user activity (e.g. on a Sunday evening) are a lot faster than those taken after a heavy transaction load, because of the increased log size.
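As a quick check, a query like the following shows each database's recovery model and how much log space is actually in use, which helps explain why one backup run is so much larger than another:
-- Recovery model per database (SIMPLE vs. FULL affects log growth)
SELECT name, recovery_model_desc FROM sys.databases;
-- Current log file size and percentage used, per database
DBCC SQLPERF(LOGSPACE);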

Reduce size of SQL Server database

I have a database that occupies 20 GB of space. I have taken a backup and want to restore it to 3 other databases, but due to the large size of the database, a low-space message appears on restore. I don't want all the data. Is there any way to reduce the size of the database, or what are the possible ways to manage all this?
Previously, I shrank the database and took a backup; its size was reduced to 1.11 GB, but on restoring it under a new database name, it takes 20 GB of disk space again.
Generate a script for the database (in SSMS: right-click the database > Tasks > Generate Scripts), set 'Types of data to script' to 'Schema only', then create a new database and run that script...
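Note that a restore always recreates the data files at the size recorded in the backup, which is why the restored copy takes 20 GB again even though the backup file itself is small. If the schema-only script is not an option, the unused space can be released after the restore; a minimal sketch, assuming the restored copy is named MyDb_Copy (a hypothetical name):
-- Release the unused space left over from the original file sizes.
DBCC SHRINKDATABASE (MyDb_Copy);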

Increasing size of SQL Server MDF file

I have a 400 GB MDF file, and it is growing by 5 GB daily. At the moment, autogrowth is set low, at 10 MB.
If I grow the file by 10 GB, I am guessing this will stall SQL Server and cause lots of timeouts.
What's the recommended approach here?
Autogrow, or take the hit and grow the file nightly?
The recommended approach is to NEVER grow it outside maintenance periods. Since any sensibly large and busy SQL database lives on dedicated storage anyway, you can pre-allocate all the available space during a maintenance window.
MDF file growth is instantaneous with instant file initialization. SQL Server will not stall if you grow the file by 10 TB, let alone 10 GB. I would recommend you grow the database to the size you expect it to be in the next 12 months in a single operation. There's no reason to wait for it to expand your file size repeatedly throughout the day or even each night.
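For instance, a sketch with hypothetical database and file names (keep in mind that instant file initialization applies to data files only, and requires the SQL Server service account to hold the 'Perform volume maintenance tasks' privilege):
-- Pre-grow the data file in one operation instead of relying on
-- many small autogrow events.
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Data, SIZE = 500GB);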

SQL - defragmenting an index on a compressed table

We have a database with 2 tables: one with hundreds of millions of rows (row size < 1 KB), the other with 14 million rows. Compression is enabled on both.
The database size was ~66 GB. Everything worked fine.
The indexes were 75% fragmented. A coworker started a REBUILD on both tables. It has been running for 4.5 hours now. The MDF file is almost 150 GB, the LDF is about 13 GB, and it keeps growing. We're about to run out of space.
What should we do? Wait for it to finish? Cancel the query? Restart SQL Server? Reboot the server?
The process completed 7 hours in, after consuming about 170 GB for the MDF file.
So the answer is:
Have plenty of disk space, close to what the uncompressed data would take, or at least about 3x the compressed size;
Be prepared to increase disk space as needed, and have IT around for it (either on a VM server or with a hot-swappable physical box);
Always do one table at a time (see the sketch below);
Be ready to wait a long time.
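For example, a sketch along these lines (table name and compression setting are hypothetical) rebuilds one table at a time and moves the sort work out of the database files:
-- SORT_IN_TEMPDB shifts the sort into tempdb, limiting MDF growth
-- at the cost of tempdb space; DATA_COMPRESSION = PAGE assumes page
-- compression was in use (use ROW if that is what the table had).
ALTER INDEX ALL ON dbo.BigTable
REBUILD WITH (SORT_IN_TEMPDB = ON, DATA_COMPRESSION = PAGE);
-- REORGANIZE is a space-friendly alternative: it runs in small
-- transactions and needs far less free space, though it is slower
-- on heavily fragmented indexes and does not update statistics.
ALTER INDEX ALL ON dbo.BigTable REORGANIZE;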
Hope this helps someone.