Performance impact of emptying a file (DBCC SHRINKFILE with EMPTYFILE) by migrating the data to other files in the same filegroup - SQL Server

We have a database, currently sitting on 15,000 RPM drives, that is simply a logging database, and we want to move it to 10,000 RPM drives. While we could easily detach the database, move the files, and reattach, that would cause a minor outage that we're trying to avoid.
So we're considering using DBCC SHRINKFILE with EMPTYFILE. We'll create a data file and a transaction log file on the 10,000 RPM drive, slightly larger than the existing files on the 15,000 RPM drive, and then execute DBCC SHRINKFILE with EMPTYFILE to migrate the data.
What kind of impact will that have?
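For reference, a rough sketch of what we have in mind (the database name, file names, paths, and sizes below are placeholders, not our actual ones):
-- Add a new data file on the 10K RPM volume, slightly larger than the existing one
ALTER DATABASE LoggingDB
ADD FILE ( NAME = LoggingDB_data2, FILENAME = 'E:\Data10k\LoggingDB_data2.ndf', SIZE = 60GB )
TO FILEGROUP [PRIMARY]
GO
-- Move all data out of the old file into the remaining files in the same filegroup
DBCC SHRINKFILE (LoggingDB_data, EMPTYFILE)
GO
-- Once emptied, the old file can be dropped
ALTER DATABASE LoggingDB REMOVE FILE LoggingDB_data
GO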

I've tried this and had mixed luck. I've had instances where the file couldn't be emptied because it was the primary file in the primary filegroup, but I've also had instances where it's worked completely fine.
It does hold huge locks in the database while it's working, though. If you're trying to do it on a live production system that's got end user queries running, forget it. They're going to have problems because it'll take a while.

Why not use log shipping? Create a new database on the 10,000 RPM disks, set up log shipping from the database on the 15K RPM disks to the database on the 10K RPM disks, and when both databases are in sync, stop log shipping and switch over to the database on the 10K RPM disks.
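A rough sketch of the final cutover, assuming the copy on the 10K RPM disks has been kept restoring WITH NORECOVERY (database names and the backup path are placeholders):
-- On the current (15K RPM) database: take the tail-log backup and leave the old database in a restoring state
BACKUP LOG LoggingDB TO DISK = 'E:\Backups\LoggingDB_tail.trn' WITH NORECOVERY
GO
-- On the new (10K RPM) copy: apply the tail-log backup, then bring the database online
RESTORE LOG LoggingDB_new FROM DISK = 'E:\Backups\LoggingDB_tail.trn' WITH NORECOVERY
RESTORE DATABASE LoggingDB_new WITH RECOVERY
GO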

Is this a system connected to a SAN, or is it direct-attached storage? If it's a SAN, do a SAN-side migration to the new RAID group and the server won't ever know there was a change.


Backup Hyper-V VMs

I'm looking for an alternative to Hyperoo, one of the best backup solutions for VM backups.
I've tried many programs, like Veeam, Iperius, Altaro, Acronis, etc., but they all use Microsoft checkpoints and create AVHDX files. Sometimes the backup runs into a problem and the AVHDX remains open, and I find myself forced to merge that checkpoint and hope everything goes well.
All these programs effectively make a fake incremental backup.
With even a small modification the VHDX changes a little, so the backup program sees that the virtual machine has changed and makes a full backup.
Hyperoo creates one full VHDX file and then many rollback files, one file per day.
I understand that you want safety and performance for your Hyper-V VM backups; backup and restore is a stressful experience. As you mentioned, these solutions all use the Hyper-V checkpoint technology, and I don't know of anything else.
We tested a lot of backup tools and ended up with Veeam. Usually the backup and restore work fine. Unfortunately it puts a lot of load on the infrastructure during backups (storage is slow), and sometimes backups fail because of this. To avoid that, we set fixed backup windows outside working hours. Keep in mind that we use the backup only for server VMs (not VDI).
I would recommend Veeam as a backup solution, but maybe you can also take a look at Commvault (https://documentation.commvault.com/commvault/v11/article?p=31493.htm).
Regards.

What is the difference between server snapshot and backup? (OVH)

I have a VPS with OVH. There are two options there, Automated Backup and Snapshot. What is the difference between the two, and which one should I enable so I don't lose the data and the configuration on the server? It took me quite some time to optimize my server, so I don't want to go through that pain again. Plus, there's about 30 GB of data uploaded; I don't want to risk that either.
This explains it: https://www.ovh.com/world/vps/backup-vps.xml
So basically the automated backup is done automatically every day and replicated across 3 different sites to ensure nothing is lost.
With snapshots, it seems you have a maximum of two snapshots and you have to take them yourself (like a VM snapshot).

Optimize tempdb log files

Have a dedicated DB server with 16 cores and 200+ GB of memory.
Use tempdb a lot but it typically stays under 4 GB
Recently added a dedicated SSD stripe for tempdb
Based on this page I should create multiple files:
Optimizing tempdb Performance
I understand the multiple row (data) files part.
Here is my question:
Should I also create multiple tempdb log files?
It does say "create one data file for each CPU".
So my thought is that data means row (not log) files.
No. As with all databases, SQL Server can only use one log file at a time, so there is no benefit at all in having multiple log files.
The best thing you can do with log files is keep them on separate drives from the data files, since they have different IO requirements; pre-size them so they don't have to auto-grow; and if they do have to auto-grow, make sure they grow in sensible increments so you keep the number of virtual log files created inside them under control.
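For example, a minimal sketch of pre-sizing the tempdb log and setting a fixed growth increment (templog is the default logical name; the sizes here are illustrative, not a recommendation for your workload):
-- Pre-size the tempdb log and use a fixed, sensible growth increment
-- to keep the number of virtual log files under control
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 4GB, FILEGROWTH = 512MB);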

Is having multiple data/log files a good thing even on the same LUN?

I have read that it is a good idea to have one file per CPU/CPU core so that SQL Server can stream data to and from the disks more efficiently. OK, I can see the benefit if they are on different spindles, but what if I only have one spindle set (4 drives in RAID 10) for my data files (.mdf and .ndf)? Will I still benefit from splitting the data files (from just the .mdf file to an .mdf and several .ndf files)? The same goes for the log file, although I see no benefit there, as the data has to be written serially and you're limited by the spindles' sequential write speed...
FYI, this is in regards to SQL Server 2005/2008...
Thanks.
The recommendation for multiple tempdb data files is definitely not about IOPS. It is about contention on the allocation pages (GAM, SGAM, PFS) in tempdb. SQL 2005+ doesn't put as heavy a load on these pages, but contention still occurs. Not all systems require a 1 file to 1 core mapping. Most systems will perform well with 1 file per 2 or 4 cores. Having too many files adds overhead for managing the files. A good recommendation is to start with 1:4 or 1:2 and increase if contention continues. Don't go above 1:1.
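As a rough sketch (file names, paths, and sizes are placeholders), starting a 16-core server at a 1:4 ratio means four equally sized tempdb data files in total:
-- tempdb already has one data file (tempdev by default), so add three more of the same size
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 1GB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 1GB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 1GB, FILEGROWTH = 256MB);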
For other databases, this is not recommended.
And yes, only 1 log file ... always.
8 Steps to better Transaction Log throughput:
Create only ONE transaction log file. Even though you can create multiple transaction log files, you only need one... SQL Server DOES NOT "stripe" across multiple transaction log files. Instead, SQL Server uses the transaction log files sequentially.
Misconceptions around TF 1118:
Why is the trace flag not required so much in 2005 and 2008? In SQL Server 2005, my team changed the allocation system for tempdb to reduce the possibility of contention. There is now a cache of temp tables. When a new temp table is created on a cold system (just after startup) it uses the same mechanism as for SQL 2000. When it is dropped though, instead of all the pages being deallocated completely, one IAM page and one data page are left allocated, and the temp table is put into a special cache. Subsequent temp table creations will look in the cache to see if they can just grab a pre-created temp table 'off the shelf'. If so, this avoids accessing the allocation bitmaps completely. The temp table cache isn't huge (I think it's 32 tables), but this can still lead to a big drop in latch contention in tempdb.
So the answer is NO to both questions. Log striping was never an issue, and one-NDF-per-CPU is largely a myth, one that will take a very long time to die out. Multiple files IMHO make sense only if you can stripe IO (separate LUNs). Multiple filegroups though make sense, but not for IO reasons, for administrative purposes: piecemeal restores and archive read-only filegroups.
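For the administrative case, a minimal sketch (database, filegroup, and path names are placeholders): create an archive filegroup, add a file to it, and mark it read-only once the data in it stops changing.
-- Create an archive filegroup, add a file to it, and mark the filegroup read-only
ALTER DATABASE mydb ADD FILEGROUP Archive;
ALTER DATABASE mydb ADD FILE (NAME = mydb_archive, FILENAME = 'F:\Data\mydb_archive.ndf', SIZE = 10GB) TO FILEGROUP Archive;
ALTER DATABASE mydb MODIFY FILEGROUP Archive READ_ONLY;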
Still good. This is not about IOPS - it is about SQL Server blocking a file for certain operations, mostly when extents are allocated to a table / index. If you do a lot of inserts / updates, multiple files basically mean another thread can allocate from another file instead of waiting on the first one.
So, this is not really about IOPS load, it is about a blocking behavior.
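If you want to check whether allocation contention is actually happening, a minimal diagnostic sketch (a standard DMV query, nothing environment-specific) is to look for PAGELATCH waits on allocation pages, the tempdb case (database_id 2) being the classic one:
-- PFS, GAM, and SGAM pages of tempdb's first file are pages 1, 2, and 3 of database_id 2
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';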

SQL Server 2008 - Shrinking the Transaction Log - Any way to automate?

I went in and checked my Transaction log the other day and it was something crazy like 15GB. I ran the following code:
USE mydb
GO
BACKUP LOG mydb WITH TRUNCATE_ONLY
GO
DBCC SHRINKFILE(mydb_log,8)
GO
Which worked fine and shrank it down to 8 MB... but the DB in question is a log shipping publisher, and the log is already back up to some 500 MB and growing quickly.
Is there any way to automate this log shrinking, outside of creating a custom "Execute T-SQL Statement Task" Maintenance Plan Task, and hooking it on to my log backup task? If that's the best way then fine...but I was just thinking that SQL Server would have a better way of dealing with this. I thought it was supposed to shrink automatically whenever you took a log backup, but that's not happening (perhaps because of my log shipping, I don't know).
Here's my current backup plan:
Full backups every night
Transaction log backups once a day, late morning (maybe hook the log shrinking onto this... it doesn't need to be shrunk every day though)
Or maybe I just run it once a week, after I run a full backup task? What do you all think?
If your file grows by 500 MB every night, there is only one correct action: pre-grow the file to 500 MB and leave it there (a sketch follows the list below). Shrinking the log file is damaging. Having the log file auto-grow is also damaging:
you hit zero-fill initialization of the grown portion during normal operations, reducing performance
your log grows in small increments, creating many virtual log files and resulting in poorer operational performance
your log gets fragmented during shrinking. While not as bad as data file fragmentation, log file fragmentation still impacts performance
one day the daily growth of 500 MB will run out of disk space and you'll wish the file had been pre-grown
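A minimal sketch of that pre-grow (the logical log file name and the sizes are placeholders; use your actual steady-state size):
-- Grow the log once to its working size and set a fixed growth increment as a safety net
ALTER DATABASE mydb
MODIFY FILE (NAME = mydb_log, SIZE = 512MB, FILEGROWTH = 128MB);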
You don't have to take my word for it; you can read what some of the MVP blogs have to say about the practice of regularly shrinking log and data files:
Auto-shrink – turn it OFF!
Oh, the horror! Please stop telling people they should shrink their log files!
Why you want to be restrictive with shrink of database files
Don't Touch that Shrink Button!
Do not truncate your ldf files!
There are more, I just got tired of linking them.
Every time you shrink a log file, a fairy loses her wings.
I'd think more frequent transaction log backups.
I think what you suggest in your question is the right approach: that is, "hook the log shrinking onto" your nightly backup/maintenance task process. The main thing is that you are regularly taking transaction log backups, which is what allows the log file to be shrunk when you run the shrink task. The key thing to keep in mind is that this is a two-step process: 1) back up your transaction log, which automatically "truncates" the log; 2) run a shrink against the log file. "Truncate" doesn't necessarily (or ever?) mean that the file will shrink... shrinking it is a separate step you must do.
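A minimal sketch of the two steps together (database name, backup path, and the target size in MB are illustrative):
-- Step 1: back up the transaction log, which marks the inactive log space as reusable
BACKUP LOG mydb TO DISK = 'E:\Backups\mydb_log.trn'
GO
-- Step 2: shrink the physical log file to a target size in MB
DBCC SHRINKFILE (mydb_log, 512)
GO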
For SQL Server 2005:
DBCC SHRINKFILE ( Database_log_file_name , NOTRUNCATE)
This statement doesn't break log shipping, but you may need to run it more than once. Between runs, let the log shipping backup, copy, and restore jobs complete, then run the statement again.
Shrink and truncate are different.
My experiences:
AA db, 6.8GB transaction log
first run: 6.8 GB
log shipping backup, copy, restore after second run: 1.9 GB
log shipping backup, copy, restore after third run: 1.7 GB
log shipping backup, copy, restore after fourth run: 1 GB
BB db, 50GB transaction log
first run: 39 GB
log shipping backup, copy, restore after second run: 1 GB
Creating a transaction log backup doesn't mean that the online transaction log file size will be reduced; the file size remains the same. When a transaction is backed up, it is marked for overwriting in the online transaction log. It's not automatically removed and no space is freed, therefore the size remains the same.
Once you have set the LDF file size, maintain it by setting the right transaction log backup frequency.
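You can see this with a quick check (a standard command, nothing environment-specific): after a log backup the percentage of log space used drops, but the file size stays the same.
-- Shows each database's log file size and how much of it is actually in use
DBCC SQLPERF (LOGSPACE);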
Paul Randal provides details here:
Understanding Logging and Recovery in SQL Server
Understanding SQL Server Backups
Based on Microsoft's recommendations, before you shrink the log file you should first try the following alternatives:
Freeing disk space so that the log can automatically grow.
Moving the log file to a disk drive with sufficient space.
Increasing the size of a log file.
Adding a log file on a different disk.
Turning on auto-growth by using the ALTER DATABASE statement to set a non-zero growth increment for the FILEGROWTH option, for example:
ALTER DATABASE SharePoint_Config MODIFY FILE ( NAME = SharePoint_Config_log, SIZE = 2MB, MAXSIZE = 200MB, FILEGROWTH = 10MB );
Also, you should be aware that a shrink operation via a maintenance plan affects both the *.mdf and *.ldf files, so if you only want to shrink the *.ldf file to your target size, create a maintenance plan with a SQL job task that runs the following command:
USE SharePoint_Config
GO
-- Switch to SIMPLE recovery so the log can be shrunk (note: this breaks the log backup chain)
ALTER DATABASE SharePoint_Config SET RECOVERY SIMPLE
GO
-- Shrink only the log file down to the target size (in MB)
DBCC SHRINKFILE ('SharePoint_Config_log', 100)
GO
-- Switch back to FULL recovery; take a full or differential backup afterwards so log backups can resume
ALTER DATABASE SharePoint_Config SET RECOVERY FULL
GO
Note: 100 is the target_size for the file, in megabytes, expressed as an integer. If not specified, DBCC SHRINKFILE reduces the file to its default size, which is the size specified when the file was created.
In my humble opinion, it's not recommended to perform the shrink operation periodically; do it only in the specific circumstances where you really need to reduce the physical size.
You can also check this useful guide: Shrink a transaction log file Maintenance Plan in SQL Server.