Backup tool for Microsoft Virtual Server 2005 R2?

I am seeking a backup tool to back up virtual OS instances run through Microsoft Virtual Server 2005 R2. According to the MS docs, it should be possible to do this live through the Volume Shadow Copy Service, but I am having trouble finding any tool that actually does it.
What is the best solution for backing up MS Virtual Server instances?

I'm personally fond of using ImageX to capture the VHD to a WIM file. (This is called file-based imaging, as opposed to sector-based imaging.) WIM is sort of an NTFS-specific compression format, and it uses a single-instance store, which means that files that appear multiple times are only stored once. The compression is superb and the filesystem is restored with ACLs and reparse points perfectly intact.
You can store multiple VHDs, and multiple versions of those VHDs, in a single WIM. That means you can back up incremental versions of your VHD and each one just adds a small delta to the end of the WIM.
As for live images, you can script vshadow.exe to make a copy of your virtual machine before backing it up.
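A minimal sketch of that vshadow.exe approach (assuming the vshadow.exe sample from the VSS SDK is on the PATH and the VHDs live on D:; the drive letters and paths here are only examples):

    rem Create a persistent shadow copy of the volume holding the VHDs.
    rem vshadow writes the shadow IDs into setvars.cmd as environment variables.
    vshadow.exe -p -script=setvars.cmd D:
    call setvars.cmd

    rem Expose the shadow copy as a drive letter so normal tools can read it.
    vshadow.exe -el=%SHADOW_ID_1%,X:

    rem Back up the frozen copy of the VHD (plain copy here; or point ImageX
    rem or VHDMount at X: as described below).
    copy X:\VirtualMachines\WebVM.vhd E:\Backups\WebVM.vhd

    rem Clean up the shadow copy when done.
    vshadow.exe -ds=%SHADOW_ID_1%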
You can capture the image to WIM format in one of two ways:
1. Boot the virtual machine you want to capture into Windows PE under Virtual Server, then run ImageX with the /CAPTURE flag (as sketched below) and save the WIM to a network drive.
2. Use a tool like VHDMount to mount the virtual machine's VHD as a local drive and then capture with ImageX. (In my experience VHDMount is flaky and I would recommend SmartVDK for this task; VHDMount is better for formatting and partitioning disks.)
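Either way, the ImageX commands themselves look roughly like this (a sketch; the X: source and E:\Backups paths are only examples):

    rem First capture: create the WIM from the mounted VHD contents (or from the
    rem VM's system volume when it is booted into Windows PE).
    imagex /capture X: E:\Backups\webvm.wim "WebVM - full" /compress maximum

    rem Later backups: append a new image to the same WIM; single-instance
    rem storage means only changed files add to its size.
    imagex /append X: E:\Backups\webvm.wim "WebVM - weekly"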
This only skims the surface of this approach. I've been meaning to write up a more detailed tutorial covering the nuances of all of this.

There appear to be a number of ways you can do this:
http://technet.microsoft.com/en-us/library/cc720377.aspx
http://support.microsoft.com/kb/867590

I'm using BackupChain for Virtual Server 2005 as well as VMware. It creates incremental delta files which contain only the changed data, and it takes snapshots while the VMs are running. This saves us a lot of storage space and bandwidth, since it sends the backups via FTP to another server.

Related

Directory/Prefix with 40,000 objects locks windows explorer and other applications

Here at work we use s3fs because our product uses Oracle, and the legacy code loads files uploaded through the web directly from Oracle PL/SQL, so we need the Oracle database to see the same file system the web server sees. We cannot access the OS where Oracle is installed to mount Windows shares through Samba.
The problem is, when a directory (a prefix in S3) reaches roughly 40,000 files, access to that directory through Samba becomes extremely slow, causing timeouts and even completely stalling the application pool that tries to access it.
Our web servers are EC2 instances running Windows Server 2019.
I wonder if someone knows a solution to this.
The S3 LIST API is slow when a prefix contains that many objects. It would be better to reorganize the files into multiple directory levels (sub-prefixes) so that no single prefix holds tens of thousands of keys.
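As a rough sketch of what that reorganization could look like with the AWS CLI (the bucket and prefix names are made up, it assumes key names without spaces, and you should test it on a copy first):

    # Move each object under a sub-prefix derived from the first two characters
    # of its file name, so no single prefix ends up with tens of thousands of keys.
    aws s3 ls s3://my-bucket/uploads/ | awk '{print $4}' | while read -r key; do
        [ -n "$key" ] || continue                # skip "PRE" (sub-prefix) lines
        sub=$(printf '%s' "$key" | cut -c1-2)    # e.g. "invoice123.pdf" -> "in"
        aws s3 mv "s3://my-bucket/uploads/$key" "s3://my-bucket/uploads/$sub/$key"
    done

The application (and the paths used through the s3fs mount) would then need to build the same two-character sub-directory into the paths it reads and writes.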

Temporary local execution of VM image

I work with a number of different specialized and configured OS environments but I generally only use one at a time. I have a processor-beefy laptop but storage is always an issue. It would also be good to have a running backup of each environment so I can work from other hardware.
What would be ideal is if I could run some kind of VM library server that maintained canonical copies of each environment, from which I could download local execution copies to my machine to work with, and then stream changes back to the server image as I did my work.
In my research it seems like a number of the virtual machine providers used to have services like this (Citrix Player, VMWare Mirage), but they have all been EOL'd.
Is there a way to set something like this up today? I'd love a FOSS solution based on KVM, but I'd be willing to take a free proprietary solution.

Migrate 100+ virtual machines from on-prem to azure

Apologies if this is the wrong platform for this question.
If I want to migrate 100 VMs onto Azure VMs, what do I need to consider and how can I migrate them?
This is not a comprehensive answer but some things to consider are:
- Start with a thorough inventory of the VMs to migrate. Issues to watch out for include:
  - Any unsupported OS versions, including 32-bit.
  - Large numbers of attached drives.
  - Disk drives larger than 1 TB.
  - Gen 2 VHDs.
  - Application and network interdependencies which need to be maintained.
  - Specific performance requirements (e.g. any VMs that would need Azure Premium Storage, SSD-backed disks, etc.).
In developing a migration strategy some important considerations are:
- How much downtime can you tolerate? To minimize downtime look at solutions like Azure Site Recovery which supports rapid switchover. If downtime is more flexible there are more offline migration tools and scripts available.
- Understand whether to move to the new Azure Resource Manager or the Service Management deployment model. See https://azure.microsoft.com/en-us/documentation/articles/resource-group-overview/.
- Which machines to move first (pick the simplest, with the fewest dependencies).
- Consider cases where it may be easier to migrate the data or application to a new VM rather than migrate the VM itself.
A good forum to ask specific migration questions is: Microsoft Azure Site Recovery
Appending to sendmarsh's reply
The things you will have to consider are:
The virtualization platform and version, i.e. VMware or Hyper-V.
OS version, RAM size, OS disk size and count, number of data disks and the capacity of each, hard disk format, number of processor cores, number of NICs, processor architecture, network configuration such as IP addresses, and the generation type if the environment is Hyper-V.
I could have missed a few more things, like checking whether the VMware Tools are installed. Some configurations are not supported; an iSCSI disk, for example, will not be supported. Microsoft does not support every machine naming convention either, so be careful when setting the name, as it might affect things later.
The full list of prerequisites is at:
https://azure.microsoft.com/en-us/documentation/articles/site-recovery-best-practices/#azure-virtual-machine-requirements
Update: Using PowerShell to automate the migration would make your life easier.
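For example, if the source environment is Hyper-V, a rough inventory sketch using the Hyper-V PowerShell module might look like the following (run per host; it just flags VHDs over 1 TB and reports generation, cores and disk format, and the thresholds are only examples):

    # Inventory every VM on this Hyper-V host and flag disks that exceed
    # the 1 TB limit mentioned above.
    Get-VM | ForEach-Object {
        $vm = $_
        Get-VMHardDiskDrive -VMName $vm.Name | ForEach-Object {
            $vhd = Get-VHD -Path $_.Path
            [pscustomobject]@{
                VM         = $vm.Name
                Generation = $vm.Generation
                Cores      = $vm.ProcessorCount
                Disk       = $_.Path
                SizeGB     = [math]::Round($vhd.Size / 1GB)
                Format     = $vhd.VhdFormat
                Over1TB    = $vhd.Size -gt 1TB
            }
        }
    } | Format-Table -AutoSize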

Virtual PC 2007 as programming environment

I'd like to create a VM in Virtual PC 2007 for use as a development environment/sandbox for an existing ASP.NET application in Visual Studio 2005/SQL Server 2005 (and VSS for source control).
I'm thinking that I need to create a 'base' copy of the environment (with the os, Visual Studio, and Sql Server), and then copy that to a 'work' version that I do actual development in. I would be sharing this VM with one or two other developers who would be working on different parts of the app.
Is this a good idea? What is the best way to get my app/databases in and out of the VM and the changes I make into VSS? Is it just a copy from the host location to the VM share and back again? How do I keep everything synchronized?
Thanks!
I would seriously suggest the following:
Use a "server" solution, rather than a desktop solution. That's far more reasonable if you want to share the VM environment with other developers.
Use VMware's products rather than Microsoft's.
From these two points it follows that you should use VMware ESX Server and related products. If you don't want to or can't invest money in it, there's a free version of the product: http://www.vmware.com/go/getesxi/, but I've never used it.
Whether you choose the enterprise version of ESX Server or the free version, I suggest you get your organization's IT department involved.
It's not a bad idea, if you think there's a need for it.
I do something similar when I need to develop a Windows App because it's just nice to have a clean environment. That way I don't accidentally add a reference to something that's not necessarily included in the .NET Framework. It forces me to install any 3rd party components as I'm developing and documenting. This way, I can anticipate prerequisites, and ensure that I have them documented before I load software to a user's PC and wonder why it doesn't work.
Just make sure the PC it's hosted on can handle the additional load. My main Dev PC is a dual core processor with 4GB RAM. I devote 2GB to any virtual PC I plan on using as a development environment so that I don't hit too much of a performance snag.
As for keeping everything synchronized, you will want to use some sort of source control (as you should even in a normal environment). (I like SVN with TortoiseSVN as my client of choice, but there are plenty of alternatives.) Just treat the virtual PCs as if they were normal PCs. Make sure they can access the network, so you can all reach your source code repository.
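Concretely, working from inside the VM is just the usual checkout/commit cycle (the repository URL and paths below are placeholders for whatever your team actually uses):

    rem Pull the code into the VM, work on it there, then push changes back out.
    svn checkout http://buildserver/svn/myapp/trunk C:\dev\myapp
    rem ... edit, build and test inside the VM ...
    svn commit C:\dev\myapp -m "Describe the change"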
You can use the snapshot feature (or whatever it is called): changes to the "system" are saved to a delta file so that you can easily revert to an earlier state of the virtual PC. It has some performance penalty, but this way you don't have to keep base and work copies.
I use Virtual PCs for all of my Windows development. The company I work for has legacy products in FoxPro and current products in .NET so I have 2 environments set up:
1 - Windows XP with Foxpro and VSS - I can access VSS directly from this image and the code never enters other machines in my network (I work remotely)
2 - Windows 7 with VS2008 and all the associated bits and pieces needed to develop our .NET software (including TFS). This is the machine I use every day - I have a meaty desktop PC, so I am able to give the VPC 4GB RAM and it runs as fast as a 'normal' PC.
I have my VPCs running in VirtualBox and it is just as good as the other offerings. A previous answer mentioned VMware ESX, which is an excellent product for large-scale deployment, but if you want a server solution then VMware Server is free and is a nice virtualisation platform.
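If you go the VirtualBox route, sizing a dev VM and keeping a clean base state is scriptable too; a quick sketch (the VM name "DevVM" and the numbers are just examples):

    rem Give the dev VM enough memory and cores to be comfortable.
    VBoxManage modifyvm "DevVM" --memory 4096 --cpus 2

    rem Snapshot the clean base install, then revert to it whenever a
    rem work copy gets messy (instead of keeping separate base/work VHDs).
    VBoxManage snapshot "DevVM" take "clean-base"
    VBoxManage snapshot "DevVM" restore "clean-base"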
If you are looking at ways to experiment with changes and still want to use VPC then undo disks are excellent - you fire up the machine, hack away to your heart's content, and when you shut down you can choose to save or discard the entire session.
For me Virtual PCs are an excellent way to quickly set-up / tear down development environments and I would struggle to return to using a single machine for all my work.

Transfer of directory structure over network

I am designing a remote CD/DVD burner to address hardware constraints on my machine.
My design works like this: (analogous to a network printer)
Unix-based machine (acts as server) hosts a burner.
Windows-based machine acts as client.
Client prepares data to burn and transfers it to the server.
Server burns the data on CD/DVD.
My question is: what is the best protocol to transfer data over the network (keeping the same directory hierarchy) between different operating systems?
I would think some kind of archive format would be best. The *nix .tar archive format works well for most things. However, since you are burning CD/DVD discs, the disc's native .iso format may be a good choice.
You'll likely need to transfer the entire archive prior to burning to prevent buffer under-run issues.
Edit:
You can use mkisofs to create the .iso file from a folder, or your CD burning software may be able to output an .iso file.
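A rough sketch of that pipeline from the client side (the directory name, the "burnhost" host name and the burner device path are all placeholders, and DVD media may need growisofs instead of cdrecord):

    # Build a single .iso from the prepared directory tree, preserving long
    # file names for both Windows (Joliet) and Unix (Rock Ridge).
    mkisofs -J -R -V BACKUP -o job.iso ./to_burn

    # Ship the finished image to the burner host in one piece, then burn it.
    scp job.iso burnhost:/tmp/job.iso
    ssh burnhost cdrecord -v dev=/dev/cdrw /tmp/job.iso

Because the whole image lands on the server before burning starts, the buffer under-run concern above is avoided, and the directory hierarchy travels intact inside the single .iso file.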