I am implementing a backup of Hyper-V VMs using diskshadow, based on Windows VSS (Volume Shadow Copy Service).
The implementation is pretty much as described in "DiskShadow / Xcopy BACKUP of Hyper-V", where the diskshadow script looks like the following:
set context persistent
set metadata C:\backup.cab
set verbose on
begin backup
add volume C: alias ConfigVolume
#The GUID of the Hyper-V Writer
writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
create
EXPOSE %ConfigVolume% Y:
EXEC HyperVBackup.cmd
UNEXPOSE Y:
end backup
In HyperVBackup.cmd the actual copying of the shadow copies to a backup drive is done using xcopy. This is obviously the most time-consuming part of the backup process.
The begin backup and end backup commands send events to VSS writers to allow them to prepare for shadow copy creation and to react to the end of the backup.
Is it a good idea to call end backup AFTER EXEC HyperVBackup.cmd? Wouldn't this force VSS writers to stay in an intermediate state for as long as the lengthy xcopy step takes?
Wouldn't it be appropriate to call end backup BEFORE the line EXEC HyperVBackup.cmd?
Actually, I do not know what VSS writers typically do when they receive the event sent by end backup.
Thanks,
nang.
As an alternative to diskshadow you might also want to check out the following open source Hyper-V backup solution, which supports CSV and includes a command line tool:
http://hypervbackup.codeplex.com/
end backup basically signals all the VSS writers that a successful backup has occurred. You probably don't want to do that until after all data has been successfully moved to a safe location. In your case, that means not signaling a finished backup until the HyperVBackup.cmd script, and the xcopy inside it, have finished without errors.
The reason for this is that some writers, such as Exchange or SQL Server, will flush their transaction logs when they are signaled by end backup. You don't want the transaction logs flushed until after they have been successfully backed up and are in a safe location.
The begin backup command shouldn't be holding anything in an intermediate state. It just tells the VSS writers, "hey, if there is any maintenance that needs to be done close to a backup window, do it now". I don't know the specifics of every VSS writer, but I could also see begin backup being used to set a marker, so that when end backup is signaled, the writer can say "data up to this point is good and you can now run amok with it". For example, you don't want to flush logs up to the time of the end backup command; rather, the end backup command will flush logs up to the time of the begin backup command.
The only "intermediate state" that happens is during the file system freeze. The freeze happens during the create command and is automatically thawed at the completion of the create command.
I have been asked to analyze an issue regarding one of the BizTalk servers. I was asked to free up space on a particular drive, where I found that a single file, BiztalkMsgBoxDB_log.bak, is taking up close to 90% of the drive.
Running the following query I later found out that the log space used is only 1.25%.
EXEC ('DBCC sqlperf(LOGSPACE) WITH NO_INFOMSGS')
Database Name     Log Size (MB)   Log Space Used (%)   Status
BizTalkMsgBoxDb   24930.49        1.257622             0
Currently the recovery model is FULL, and a transaction log backup was taken an hour ago.
I have no clue as to why the log file was created so large.
How can I free up space on this drive?
Thanks in advance,
GHR
You have to shrink your database.
Right-click on your database => Tasks => Shrink. That's it.
Make sure your "Backup BizTalk Server Job" is properly configured and is not failing (check the SQL Server Agent node on the BizTalk database server).
For reference on how to configure this job (and more details of what it does), check MSDN.
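If it turns out the space is really being used by the transaction log file (.ldf) rather than an old .bak, the T-SQL equivalent of the GUI shrink is roughly the sketch below. The logical file name BizTalkMsgBoxDb_log is only an assumption (it is the usual default), so check sys.database_files first, and only shrink once the "Backup BizTalk Server" job has been taking its log backups.
USE BizTalkMsgBoxDb;
-- Check which file is actually large; size is reported in 8 KB pages.
SELECT name, type_desc, size * 8 / 1024 AS size_mb FROM sys.database_files;
-- Shrink the log file to a target size in MB (1024 here is just an example).
DBCC SHRINKFILE (N'BizTalkMsgBoxDb_log', 1024);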
I have backed up a database I had created on another machine running SQL Server 2012 Express Edition, and I wanted to restore it on my machine, which is running the same. I ticked the checkbox to overwrite the existing one, and got this error:
Backup mediaset is not complete. Files: D:\question.bak. Family count:2. Missing family sequence number:1
This happens if, when you made the backup, you had multiple files listed in the backup destination textbox. Go back to your source server and create the backup again; this time, make sure there's only one destination file listed.
If you had more than one file listed as the backup destination, the backup is striped across them; you'll need all the files to perform the restore.
You can verify this by performing a RESTORE LABELONLY against the single file you copied to your destination server.
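For example, using the file name from the error message above, something like this will show the family count and which family sequence number the file you have belongs to:
-- Lists the media family information for the single file you copied over.
RESTORE LABELONLY FROM DISK = N'D:\question.bak';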
Sandra Walter's answer provides an accurate description of what has happened, but I found it a bit lacking.
To make a backup which isn't striped (striping is what has occurred in this situation), go back to the window where you set up the backup of your database. At the bottom is a list of paths where the different stripes will go.
Go to each of the listed paths and delete the stripe of the backup.
Then remove all but one of the paths from the list in the window. And click the "OK" button. Your unstriped backup will be created at that one path.
Hope that helps.
My backup was scheduled to go to two different locations. Once I selected both files during the restore, it worked for me.
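If you can get hold of both stripes, you can also restore directly from the two files together; a rough sketch (the second file name and the database name here are hypothetical, yours will differ):
-- Restore a backup striped across two files; both stripes must be supplied.
RESTORE DATABASE question
FROM DISK = N'D:\question.bak', DISK = N'D:\question_2.bak'
WITH REPLACE;  -- corresponds to the "overwrite existing database" checkbox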
I like rsync. I can see which files will be deleted first. But what happens if, during the backup, a sector of the source disk fails? Files could then be deleted from the destination that should not be. However, if I first log which files would be deleted, and then feed that log to rsync as instructions, a source disk failure during the backup should result in a lower probability of data loss.
I've read the man page and have to conclude that rsync cannot do this directly. If not rsync, then what?
You can mitigate source disk failure risk using
--delete-after receiver deletes after transfer, not during
That will not delete files if an I/O error occurs during the copy.
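For example (the source and destination paths are just placeholders):
# Deletions on the receiving side happen only after the transfer has completed.
rsync -av --delete-after /data/ /mnt/backup/data/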
But for ensuring integrity of your backup, I think the right way is using:
--only-write-batch=FILE like --write-batch but w/o updating destination
That will write the diffs into a file. Once the batch is created, you move it to the destination machine and apply the diffs with:
--read-batch=FILE read a batched update from FILE
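Putting it together, the batch workflow could look something like this (host and path names are placeholders, and the destination path in step 1 is assumed to be a local copy of what the destination machine currently holds):
# 1. Compute the changes into a batch file without touching the destination copy.
rsync -av --only-write-batch=/tmp/backup.batch /data/ /mnt/backup-mirror/data/
# 2. Move the batch file to the destination machine.
scp /tmp/backup.batch backuphost:/tmp/backup.batch
# 3. On the destination machine, apply the batched changes.
rsync -av --read-batch=/tmp/backup.batch /data/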
I have been working on an algorithm in Python, and I was using Vim to edit the file. I opened it up, did a save, and it came up with an error, as it occasionally does:
"WARNING: YOUR FILE CANNOT BE SAVED! ALL CHANGES WILL BE LOST! CANNOT WRITE THE FILE!"
As this happens occasionally, I did what I normally do, and I hit :q! to quit without writing any changes. No harm, no foul. When I looked at my file, everything had been erased! Everything!
I talked around the office, and it seems that the nfs mount was full, and so that was why I couldn't save anything. There was a huge script generating a ton of data, which caused the mount to be full temporarily. I believe the NFS mount is from NetApp. I found 2 files in my current directory.
One was last saved two days ago, and one was today. They are in the format of:
.nfs.xxxxxxxxxxx
When I attempt to open this file, I see some of my code here and there, scattered among unknown characters. Apparently, this must be a binary representation of the state of the file.
Is there any way to recover this file from this NFS mount? If there is a shortcut to recover this file in Emacs, I will switch to Emacs from vim!
So, I did find a way to recover the file. I found two ways, in fact. Since it was on a NetApp NFS mount, I was able to use the snapshots feature. When you are in the directory, just run:
ls .snapshot
And this will pull up any snapshots that your system administrators have set. For us, we have hourly.0, hourly.1, nightly.0, and nightly.1 backups. So we can go back two days, and within the same day we can go back one hour (the current hour and the previous one).
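Copying a file back out of a snapshot is then just an ordinary copy, for example (the file name here is hypothetical, and the snapshot names depend on your administrator's schedule):
# See what the most recent hourly snapshot contains, then copy the file back.
ls .snapshot/hourly.0/
cp .snapshot/hourly.0/my_algorithm.py ./my_algorithm.py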
The other way was to rename the file to a Vim swap file, like this:
mv .nfs.xxx my_vim_file.cpp.swp
vim my_vim_file.cpp.swp
When Vim opens it, it should ask whether you want to recover the swap file; say yes, and your file should be back!
Apparently your NetApp uses NFS to mount its volumes (as opposed to iSCSI, for example). Generally, each VM is stored on a unique volume (aka datastore) on the NetApp filer. To find out the volumes and snapshots, and then restore a snapshot, here are the commands to execute at the command line:
# list all volumes, snapshots are taken of volumes
vol status
# list the snapshots available for a particular volume
snap list <vol_name>
# restore a snapshot, nightly.1 for example
snap restore <vol_name> nightly.1
That's it. All that's left is to turn the VM back on and see if you've restored far back enough. If not, then do another "snap restore" but with an older snapshot.
Note that this procedure assumes your administrator didn't disable snapshots (NetApp has a snapshot schedule by default) and that the NetApp is licensed for snaprestore (use the "license" command to verify). This procedure can be further simplified if you have the NetApp OnCommand System Manager, which is a GUI for managing the NetApp. Reverting a snapshot in the GUI is simple:
Go to Storage > Volumes > click on a volume > click on Snapshot Copies (at the bottom)
Choose a snapshot and restore
I've got a maintenance plan that executes weekly in the off hours. It's always reporting success, but the old backups don't get deleted. I don't want the drive filling up.
DB Server info: SQL Server Standard Edition 9.00.3042.00
There is a "Maintenance Cleanup Task" set to
"Search folder and delete files based on an extension"
and "Delete files based on the age of the file at task run time" is checked and set to 4 weeks.
The only thing I can see is that my backups are each given their own subfolder and that this is not recursive. Am I missing something?
Also: I have seen the issues reported pre-SP2, but I am running Service Pack 2.
If you make your backups in subfolders, you have to specify the exact subfolder for deleting.
For example:
You make the backup by choosing the option that says something like "Make one backup file for each database" and check the box that says "Create subfolder for each database".
(I work with a German version of SQL Server, so I translate everything into English myself now)
The specified folder is H:\Backup, so the backups will actually be created in the folder H:\Backup\DatabaseName.
And if you want the Maintenance Cleanup Task to delete the backups via "Delete files based on the age of the file at task run time", you have to specify the folder H:\Backup\DatabaseName, not H:\Backup !!!
This is the mistake that I made when I started using SQL Server 2005 - I put the same folder in both fields, Backup and Cleanup.
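One way to double-check what the task will actually do is to look at the T-SQL it generates (the task dialog has a "View T-SQL" button). It boils down to a call to the undocumented xp_delete_file procedure, roughly like the following; treat this purely as an illustration, since the folder, extension, and cutoff date here are only examples:
-- Roughly what the Maintenance Cleanup Task emits (illustration only):
-- 0 = backup files, folder, extension without the dot, cutoff date,
-- 1 = include first-level subfolders.
EXECUTE master.dbo.xp_delete_file 0, N'H:\Backup\DatabaseName', N'bak', N'2009-01-01T00:00:00', 1;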
My understanding is that you can only include the first level of subfolders. I am assuming that you have that check-box checked already.
Are your backups deeper than the just one level?
Another thought: do you have one single maintenance plan that you run to delete the backups of multiple databases? The reason I ask is that the only way I can see to do that would be to point the task at a folder one level higher, which would mean the "include first-level subfolders" option doesn't reach deep enough.
The way I have mine set up is that the Maintenance Cleanup Task is part of my backup process. So once the backup completes for a specific database, the Maintenance Cleanup Task runs on that same database's backup files. This allows me to be more specific about the directory, so I don't run into the directory structure being too deep. Since I have the criteria set the way I want, items don't get deleted until I am ready for them to be deleted either way.
Tim
Make sure your maintenance plan does not have any errors associated with it. You can check the error log under the SQL Server Agent node in SQL Server Management Studio. If there are errors during your maintenance plan runs, it is probably quitting before it gets to deleting the outdated backups.
Another issue could be the "workflow" of the maintenance plan.
If your plan consists of more than one task, you have to connect the tasks with arrows to define the order in which they will run.
Possible issue #1:
You forgot to connect them with arrows. I just tested that - the job runs without any error or warning, but it executes only the first task.
Possible issue #2:
You defined the workflow in a way that the cleanup task will never run. If you connect two tasks with an arrow, you can right-click on the arrow and specify whether the second task runs always, or only when the first one does or does not complete successfully (this changes the color of the arrow; possible colors are red/green/blue). Maybe the backup works, and then the cleanup never runs because it is set to run only when the backup fails?