Multiple ESXi VMs with same base - virtual-machine

I'm trying to create multiple VMs in ESXi with the same base, to conserve space.
For example, I'd like to create a base VM, with Windows 2008 and SQL 2008.
Then I'll want VM 1 to install software-1 and VM 2 to install software-2.
Assuming the base VM is 10GB, and I have 2 VMs based off that, my savings are 20GB. If I have 4 VMs based off the base VM, the savings would be 40GB.
It'll be a bonus if I can also update the base VM, and have the changes propagate to the other 2 (or 4) VMs.
I've done some googling on base VM, snapshot, etc, but I'm unable to find how to do this, probably due to the wrong terminology being used.
Does anyone know how I can achieve the above, either via GUI or by command line?

In my experience, this scenario can't be done with ESXi alone. We use snapshots for backups (which do save space), but not as a shared seed image.
The scenario is sometimes called a sandbox. A well-known open-source alternative is Linux Containers (LXC):
LXC is a userspace interface for the Linux kernel containment features.
Through a powerful API and simple tools, it lets Linux users easily create
and manage system or application containers.
If you want to do this on ESXi, the only way I can think of is to clone the machine (which saves neither space nor time). And every time the seed machine is updated, you need to delete the clones and clone them again:
SEED ---> clone machine A ---> delete
     ---> clone machine B ---> delete
  |
  +--- update ---> NEW SEED ---> (clone again) ---> ...
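For the full-clone approach, a rough command-line sketch on an ESXi host might look like this (datastore paths and VM names are hypothetical; this is not a space-saving linked clone, just a scripted copy):

    # clone the seed VM's disk into a new directory, thin-provisioned
    mkdir /vmfs/volumes/datastore1/vm-a
    vmkfstools -i /vmfs/volumes/datastore1/seed/seed.vmdk \
        /vmfs/volumes/datastore1/vm-a/vm-a.vmdk -d thin
    # copy the seed's .vmx, point it at the new disk and rename the VM, then register it
    cp /vmfs/volumes/datastore1/seed/seed.vmx /vmfs/volumes/datastore1/vm-a/vm-a.vmx
    vi /vmfs/volumes/datastore1/vm-a/vm-a.vmx   # edit displayName and the .vmdk path
    vim-cmd solo/registervm /vmfs/volumes/datastore1/vm-a/vm-a.vmx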

Related

How does an Autosys virtual machine work

I recently came across the notion of Autosys virtual machines, which seem capable of offloading a heavily loaded Autosys job.
From some JIL file examples I was able to make some observations:
In the JIL file for a job, if there is a type attribute, does "type: v" mean virtual machine? I also noticed that in some other VM JIL examples there is no "type" attribute, and the machine name looks like an alias with a "_V" suffix.
Do we need to specify two physical servers in the JIL, with one of them serving as the primary and the other as the backup (virtual) one?
What do the factor and max_load attributes mean, and how are they properly set up?
How can we verify that both servers are hit if the JIL file is configured that way? I suppose the evidence would be in the log files.
You don't need to specify 2 machines for a virtual machine; you can define just one and use the virtual machine name as a new, more recognizable alias. If you specify 2 or more real machines behind your virtual machine definition, a job will only run on one of them, not all. Which machine the scheduler picks depends on how you define the virtual machine. With the new AE SP7 there are new possibilities such as round robin and other options. I would suggest reading up on those and deciding what best fits your need.
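For illustration only, a rough JIL sketch of real machines grouped behind a virtual machine alias (names and values are hypothetical; check the JIL reference for your AE version, since the exact attributes differ between releases):

    /* real machines, each with a relative capacity factor and a load ceiling */
    insert_machine: realhost01
    type: a
    factor: 1.0
    max_load: 100

    insert_machine: realhost02
    type: a
    factor: 0.5
    max_load: 50

    /* virtual machine alias grouping the two real machines */
    insert_machine: app_batch_V
    type: v
    machine: realhost01
    machine: realhost02

    /* jobs target the alias rather than a physical host */
    insert_job: sample_job
    machine: app_batch_V
    command: /opt/app/run.sh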

Using Aerospike CE to replicate a DB on 3 VPS in the same data center, without XDR

For a startup-company project, we are renting three linux servers from the same datacenter in France (from OVH).
We are using three VPS at the moment and will switch to dedicated servers later if the project is a commercial success.
We want to install a replicated distributed database on these 3 VPS, using a replication factor of 2 to allow a minimum of fault-tolerance.
If possible, we'd like to use Aerospike, as we prefer it over MongoDB and CouchDB.
So my question is: is it possible to use Aerospike Community Edition to replicate the database records across these 3 VPS without XDR? And how can we achieve that?
Sure, XDR is only needed for replication across datacenters. To replicate within a cluster in a datacenter, configure your namespace's replication-factor to the desired value.
If you want your data replicated on two separate but identically configured Aerospike clusters (cluster A with 3 VPS, cluster B with 3 VPS), which may be what you are asking for, on CE without using XDR, you can instantiate two client objects in your application: use client object A to write to cluster A, and client object B to repeat the operation on cluster B. You will take a performance hit, but it may work for you.
If you just have one cluster of 3 VPS, setting a replication factor of two in your namespace configuration automatically keeps one master record and one replica within the cluster, with record-level data evenly distributed across the nodes and the master and replica of any record always on different nodes.
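As a minimal sketch, assuming a namespace called test (memory size and storage paths are placeholders, and exact options vary by server version), the relevant part of aerospike.conf on each of the three nodes might look like:

    namespace test {
        replication-factor 2     # one master + one replica per record
        memory-size 2G
        default-ttl 0            # never expire
        storage-engine device {
            file /opt/aerospike/data/test.dat
            filesize 4G
        }
    }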

Redis active-active replication

I am using Redis version 2.8.3. I want to build a Redis cluster, but in this cluster there should be multiple masters. This means I need multiple nodes that have write access and can propagate their writes to all other nodes.
I could build a cluster with one master and multiple slaves. I just configured the slaves' redis.conf files and added this:
slaveof myMasterIp myMasterPort
That's all. Then I tried to write something into the DB via the master. It was replicated to all slaves and I really like it.
But when I tried to write via a slave, it told me that slaves have no right to write. After that I set the slave's read-only setting in redis.conf to false, so I could write something into the DB.
But I realized that it is not replicated back to my master, so it is not replicated to the other slaves either.
This means I could not build an active-active cluster.
I tried to find out whether Redis has active-active cluster capability, but I could not find an exact answer.
Is it possible to build an active-active cluster with Redis?
If it is, how can I do it?
Thank you!
Redis v2.8.3 does not support multi-master setups. The real question, however, is why do you want to set one up? Put differently, what challenge/problem are you trying to solve?
It looks like the challenge you're trying to solve is how to reduce the network load (more on that below) by eliminating over-the-net reads. Since Redis isn't multi-master (yet), the only way to do it is by setting up each app server with a master and a slave (to the other master) - i.e. grand total of 4 Redis instances (and twice the RAM).
The simple scenario is when each app updates only a mutually-exclusive subset of the database's keys. In that scenario this kind of setup may actually be beneficial (at least in the short term). If, however, both apps can touch all keys or if even just one key is "shared" for writes between the apps, then you'll need to bake locking/conflict resolution/etc... logic into your apps to consolidate local master and slave differences (and that may be a bit of an overkill). In either case, however, you'll end up with too many (i.e. more than 1) Redises, which means more admin effort at the very least.
Also note that by colocating app and database on the same server you're setting yourself up for near-certain scalability failure. What will happen when you need more compute resources for your apps or Redis? How will you add yet another app server to the mix?
Which brings me back to the actual problem you are trying to solve - network load. Why exactly is that an issue? Are your apps so throughput-heavy or is the network so thin that you are willing to go to such lengths? Or maybe latency is the issue that you want to resolve? Be that as it may, I recommend that you consider a time-proven design instead, namely separating Redis from the apps and putting it on its own resources. True, the network will hit you in the face and you'll have to work around/with it (which is what everybody else does). On the other hand, you'll have more flexibility and control over your much simpler setup, and that, in my book, is a huge gain.
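To make that 4-instance layout concrete, here is a minimal sketch assuming two app servers at 10.0.0.1 and 10.0.0.2 (the addresses and the extra port 6380 are hypothetical):

    # app server 1 (10.0.0.1): a local master on 6379 plus a replica of app server 2
    #   redis-replica.conf on app server 1:
    port 6380
    slaveof 10.0.0.2 6379

    # app server 2 (10.0.0.2): the mirror image
    #   redis-replica.conf on app server 2:
    port 6380
    slaveof 10.0.0.1 6379

    # each app writes only to its local master (127.0.0.1:6379) and can read the
    # other app's data from its local replica (127.0.0.1:6380)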
Redis Enterprise has had this feature for quite a while, but if you are looking for an open source solution KeyDB is a fork with Active Active support (called Active Replica).
Setting it up is just a little more work than standard replication:
Both servers must have "active-replica yes" in their respective configuration files
On server B execute the command "replicaof [A address] [A port]"
Server B will drop its database and load server A's dataset
On server A execute the command "replicaof [B address] [B port]"
Server A will drop its database and load server B's dataset (including the data it just transferred in the prior step)
Both servers will now propagate writes to each other. You can test this by writing to a key on Server A and ensuring it is visible on B and vice versa.
https://github.com/JohnSully/KeyDB/wiki/KeyDB-(Redis-Fork):-Active-Replica-Support
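A minimal sketch of those steps, assuming server A is 10.0.0.1 and server B is 10.0.0.2, both on the default port 6379 (addresses are placeholders):

    # keydb.conf on BOTH servers
    active-replica yes

    # on server B (it will drop its data and load A's dataset)
    keydb-cli -h 10.0.0.2 replicaof 10.0.0.1 6379

    # on server A (it will drop its data and load B's dataset back)
    keydb-cli -h 10.0.0.1 replicaof 10.0.0.2 6379

    # quick check: a write on A should be visible on B, and vice versa
    keydb-cli -h 10.0.0.1 set foo bar
    keydb-cli -h 10.0.0.2 get foo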

SQL Server 2008 differencing databases

So here is the deal:
Background
A Hyper-V VM can handle a differencing disk mode, where one can set the original VHD file in a read-only state and create a new vhd which keeps track of, and persists, the changes. The advantage here is you can easily create new VMs without having to reinstall Windows, etc.
Problem
What I am looking for is something similar, but for SQL Server databases. We do all of our development locally and then we have a box that has X instances run on it (1 for each developer). We then have a process which copies the production backups that are made and restores them to these instances. After this is complete, it checks-out a branch that a developer chooses (of SQL scripts) and runs the scripts on the instance. This way they can test their code on production data prior to it actually hitting production. However, it is a real pain to have a copy of all our production dbs for each instance-- it would be nice to have 1 set of them and have a differential option which just persists the changes made. Is this possible or am I dreaming?
Possible solution
One solution I thought of is just to use an actual differencing-disk VHD. I would create a base VHD that holds our production backup databases, which would be modified/created nightly from the production databases. I would then have the process create differencing disks and apply the scripts to each differencing disk. This way we have one copy of the DBs, and each developer's changes are recorded to a separate differencing disk. However, I was hoping to accomplish this in SQL Server.
Basically the conclusion I have come to is to try and automate the process of differencing disks as below:
Create a new VHD on a network share- we'll call this NAS1.
Mount the VHD from NAS1 on a machine that acts as a SQL processor (we'll call this SQLPROCESS1).
SQLPROCESS1 performs the following actions.
Copy BAK SQL files from production to SQLPROCESS1 (this might take a while, but this entire step 3 could be put into a threaded application, so it could copy multiple backups and restore them at the same time).
Restore files on SQLPROCESS1 and point data files (mdf, ldf) to reside on the new VHD.
Optional: Change the SQL databases to the SIMPLE recovery model and use DBCC SHRINKFILE, since we'll be using them solely for development (and don't need backups). This can save us a lot of space.
Detach all dbs.
Detach the VHD.
Create differencing disk from parent on NAS1.
Copy differencing disk X number of times (as needed per instance or developer).
Optional: We use a central server called TEST1 for testing and this is where we are going to mount each differencing disk-- 1 per instance or developer.
We'll first need to detach all dbs from each instance.
Then we'll need to unmount/detach the existing differencing VHDs if there are any.
Attach differencing disk(s).
Reattach all dbs in SQL Server.
Optional: run SQL scripts from a code repository branch as specified per developer.
References:
http://obligatorymoniker.wordpress.com/2010/08/21/how-to-create-a-differencing-vhd-that-refers-to-a-parent-vhd-that-is-on-a-network-share-from-windows-7/
To automate I'd use a simple set of batch files, VBS, or PowerShell.
Edit: Just tried this and it works great! Developers now have their own instance and it only records their changes.
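For reference, a rough sketch of the batch/diskpart automation mentioned above (share names, drive letters, instance names, and database names are all hypothetical):

    diff_dev1.txt (diskpart script):
        create vdisk file="\\NAS1\vhds\dev1.vhd" parent="\\NAS1\vhds\base.vhd"
        select vdisk file="\\NAS1\vhds\dev1.vhd"
        attach vdisk
        select partition 1
        assign letter=V

    make_diff.cmd (run once per developer/instance):
        rem create and mount the differencing disk, then attach the databases
        diskpart /s diff_dev1.txt
        sqlcmd -S TEST1\DEV1 -Q "CREATE DATABASE ProdDb ON (FILENAME='V:\ProdDb.mdf'), (FILENAME='V:\ProdDb_log.ldf') FOR ATTACH"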

How do you back up your development machine? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
How do you back up your development machine so that in the event of a catastrophic hardware malfunction, you are up and running in the least amount of time possible?
There's an important distinction between backing up your development machine and backing up your work.
For a development machine your best bet is an imaging solution that offers as near a "one-click-restore" process as possible. TimeMachine (Mac) and Windows Home Server (Windows) are both excellent for this purpose. Not only can you have your entire machine restored in 1-2 hours (depending on HDD size), but both run automatically and store deltas so you can have months of backups in relatively little space. There are also numerous "ghosting" packages, though they usually do not offer incremental/delta backups so take more time/space to backup your machine.
Less good are products such as Carbonite/Mozy/JungleDisk/RSync. These products WILL allow you to retrieve your data, but you will still have to reinstall the OS and programs. Some have limited/no histories either.
In terms of backing up your code and data then I would recommend a sourcecode control product like SVN. While a general backup solution will protect your data, it does not offer the labeling/branching/history functionality that SCC packages do. These functions are invaluable for any type of project with a shelf-life.
You can easily run a SVN server on your local machine. If your machine is backed up then your SVN database will be also. This IMO is the best solution for a home developer and is how I keep things.
All important files are in version control (Subversion)
My subversion layout generally matches the file layout on my web server so I can just do a checkout and all of my library files and things are in the correct places.
Twice-daily backups to an external hard drive
Nightly rsync backups to a remote server.
This means I send stuff from my home server over to my web host, and all files and databases from my web host back home, so I'm not screwed if I lose either my house or my web host.
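A minimal sketch of the nightly rsync job, assuming SSH access to a remote host named backup.example.com (host and paths are placeholders):

    # crontab entry: nightly at 02:30, mirror the work directory to the remote server
    30 2 * * * rsync -az --delete -e ssh /home/me/work/ backup.example.com:/backups/work/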
I use Mozy, and rarely think about it. That's one weight off my shoulders that I won't ever miss.
Virtual machines and CVS.
Desktops are rolled out with ghost and are completely vanilla.
Except they have VirtualBox.
Then developers pull the configured baseline development environment
down from CVS.
They log into the development VM image as themselves, refresh the source and libraries from CVS, and they're up and working again.
This also makes doing development and maintenance at the same time a lot easier.
(I know some people won't like CVS or VirtualBox, so feel free to substitute your tools of choice.)
Oh, and you check your work into a private branch off trunk daily.
There you go.
Total time to recover: 1 hour (tops)
Time to "adopt" a shiny new laptop for a customer visit: 1 hour (tops)
And a step towards CMMI Configuration Management.
BTW your development machine should not contain anything of value. All your work (and your company's work) should be in central repositories (SVN).
I use TimeMachine.
For my home and development machines I use Acronis True Image.
In my opinion, with hard drives as cheap as they are, nothing replaces a full incremental daily HD backup.
A little preparation helps:
All my code is kept organized in one single directory (with categorized sub-directories).
All email is kept in various PSTs.
All code is also checked into source control at the end of every day.
All documents are kept in one place as well.
Backup:
Backup your code, email, documents as often as it suits you (daily).
Keep an image of your development environment always ready.
Failure and Recovery
If everything fails, format and install the image.
Copy back everything from backup and you are up and running.
Of course there are tweaks here and there (incremental backup, archiving, etc.) which you have to do to make this process real.
If you are talking about the absolute least amount of restore time... I've often set up machines to do Ghost (Symantec or similar) backups on a nightly basis, either to an image or just a direct copy to another drive. That way all you have to do is re-image the machine from the image, or just swap the drives. You can be back up in under 10 minutes. The setup I did before was in a situation where we had some production servers that were redundant, and it was acceptable for them to be offline long enough to clone the drive, but only at night. During the day they had to be up 100%. It saved my butt a couple of times when a main drive failed: I just opened the case, swapped the cables so the backup drive was the new master, and was back online in 5 minutes.
I've finally gotten my "fully automated data back-up strategy" down to a fine art. I never have to manually intervene, and I'll never lose another harddrive worth of data. If my computer dies, I'll always have a full bootable back-up that is no more than 24 hours old, and incremental back-ups no more than an hour old. Here are the details of how I do it.
My only computer is a 160 gig MacBook running OSX Leopard.
On my desk at work I have 2 external 500 gig harddrives.
One of them is a single 500 gig partition called "External".
The other has a 160 gig partition called "Clone" and a 340 gig partition called TimeMachine.
TimeMachine runs whenever I'm at work, constantly backing up my "in progress" files (which are also committed to Version Control throughout the day).
Every weekday at 12:05, SuperDuper! automatically copies my entire laptop harddrive to the "Clone" drive. If my laptop's harddrive dies, I can actually boot directly from the Clone drive and pick up work without missing a beat -- giving me some time to replace the drive (This HAS happened to me TWICE since setting this up!). (Technical Note: It actually only copies over whatever has changed since the previous weekday at 12:05... not the entire drive every time. Works like a charm.)
At home I have a D-Link DNS-323, which is a 1TB (2x500 gig) Network Attached Storage device running a Mirrored RAID, so that everything on the first 500 gig drive is automatically copied to the second 500 gig drive. This way, you always have a backup, and it's fully automated. This little puppy has a built-in Dynamic DNS client, and FTP server.
So, on my WRT54G router, I forward the FTP port (21) to my DNS-323, and leave its FTP server up.
After the SuperDuper clone has been made, rsync runs and synchronizes my "External" drive with the DNS-323 at home, via FTP.
That's it.
Using 4 drives (2 external, 2 in the NAS) I have:
1) An always-bootable complete backup less than 24 hours old, Monday-Friday
2) A working-backup of all my in-progress files, which is never more than 30 minutes old, Monday-Friday (when I'm at work and connected to the external drives)
3) Access to all my MP3s (170GB) and documents at work on the "External" drive and at home on the NAS
4) Two complete backups of all my MP3s and documents on the NAS (External is original copy, both drives on NAS are mirrors via ChronoSync)
Why do I do all of this?
Because:
1) In 2000, I dropped a 40 gig harddrive 1 inch, and it cost me $2500 to get that data back.
2) In the past year, I've had to take my MacBook in for repair 4 times. One dead harddrive, two dead motherboards, and a dead webcam. On the 4th time, they replaced my MacBook with a newer better one at no charge, and I haven't had a problem since.
Thanks to my daily backups, I didn't lose any work, or productivity. If I hadn't had them, though, all my work would have been gone, along with my MP3s, and my writing, and all the photos of my trips to Peru, Croatia, England, France, Greece, Netherlands, Italy, and all my family photos. Can you imagine? I'm sure you can, because I bet you have a pile of digital photos sitting on your computer right now... not backed-up in any way.
A combination of RAID1, Acronis, xcopy, DVDs and ftp. See:
http://successfulsoftware.net/2008/02/04/your-harddrive-will-fail-its-just-a-question-of-when/
Maybe just a simple hardware hard disk raid would be a good start. This way if one drive fails, you still have the other drive in the raid. If something other than the drives fail you can pop these drives into another system and get your files quickly.
I'm just sorting this out at work for the team. An image with all common tools is on the network (we actually have a hot-swap machine ready). All work in progress is on the network too.
So a developer's machine goes boom: use the hot-swap machine and continue. Downtime ~15 mins + a coffee break.
We have a corporate solution pushed down on us called Altiris, which works when it wants to. It depends on whether or not it's raining outside. I think Altiris might be a rain-god, and just doesn't know it. I am actually delighted when it's not working, because it means that I can have my 99% of CPU usage back, thank you very much.
Other than that, we don't have any rights to install other software solutions for backing things up or places we are permitted to do so. We are not permitted to move data off of our machines.
So, I end up just crossing my fingers while laughing at the madness.
I don't.
We do continuous integration, submit code often to the central source control system (which is backed up like crazy!).
If my machine dies at most I've lost a couple of days work.
And all I need to do is get a clean disk and set up the dev environment from a ghost image, or by spending a day sticking CDs in, rebooting after Windows updates, etc. Not a pleasant day, but I do get a nice clean machine.
At work NetBackup or PureDisk depending on the box, at home rsync.
Like a few others, I have a clean copy of my virtual PC that I can grab and start fresh at any time, and all code is stored in Subversion.
I use SuperDuper! and back up my virtual machine to another external drive (I have two).
All the code is on an SVN server.
I have a clean VM in case mine fails. But in either case it takes me a couple of hours to install WinXP + Visual Studio; I don't use anything else on that box.
I use xcopy to copy all my personal files to an external hard drive on startup.
Here's my startup.bat:
xcopy d:\files f:\backup\files /D /E /Y /EXCLUDE:BackupExclude.txt
This recurses directories, only copies files that have been modified and suppresses the message to replace an existing file, the list of files/folders in BackupExclude.txt will not be copied.
Windows Home Server. My dev box has two drives with about 750GB of data between them (C: is a 300GB SAS 15K RPM drive with apps and system on it, D: is a mirrored 1TB set with all my enlistments). I use Windows Home Server to back this machine up and have successfully restored it several times after horking it.
My development machine is backed up using Retrospect and Acronis. These are nightly backups that run when I'm asleep - one to an external drive and one to a network drive.
All my source code is in SVN repositories. I keep all my repositories under a single directory, so I have a scheduled task running a script that spiders a path for all SVN repositories and performs a number of hotcopies (using the hotcopy.py script) as well as an svndump of each repository.
My work machine gets backed up however they handle it, however I also have the same script running to do hotcopies and svndumps onto a couple of locations that get backed up.
I make sure that of the work backups, one location is NOT on the SAN, yes it gets backed up and managed, but when it is down, it is down.
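A minimal sketch of such a script, assuming all repositories live under D:\repos and backups go to E:\svn-backups (paths are placeholders); it uses the standard svnadmin hotcopy and svnadmin dump commands rather than hotcopy.py:

    @echo off
    rem back up every SVN repository found under D:\repos
    for /D %%R in (D:\repos\*) do (
        if exist "E:\svn-backups\%%~nxR.hotcopy" rmdir /S /Q "E:\svn-backups\%%~nxR.hotcopy"
        svnadmin hotcopy "%%R" "E:\svn-backups\%%~nxR.hotcopy"
        svnadmin dump -q "%%R" > "E:\svn-backups\%%~nxR.dump"
    )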
I would like a recommendation for an external RAID container, or perhaps just an external drive container, preferably interfacing using FireWire 800.
I also would like a recommendation for a manufacturer for the backup drives to go into the container. I read so many reviews of drives saying that they failed I'm not sure what to think.
I don't like backup services like Mozy because I don't want to trust them to not look at my data.
SuperDuper complete bootable backups every few weeks
Time Machine backups for my most important directories daily
Code is stored in network subversion/git servers
MySQL backups with cron on the web servers; ssh/rsync pulls them down onto our local servers, also via cron, nightly.
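A minimal sketch of the MySQL piece, assuming a database named appdb on a web server reachable as web1.example.com (names, credentials, and paths are placeholders):

    # on the web server: nightly dump at 01:00
    0 1 * * * mysqldump --single-transaction -u backup -pSECRET appdb | gzip > /var/backups/appdb-$(date +\%F).sql.gz

    # on the local server: pull the dumps down at 03:00 over ssh
    0 3 * * * rsync -az -e ssh web1.example.com:/var/backups/ /srv/backups/web1/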
If you use a Mac, it's a no brainer - just plug in an external hard drive and the built in Time Machine software will back up your whole system, then maintain an incremental backup on the schedule you define. This has got me out of a hole many a time when I've messed up my environment; it also made it super easy to restore my system after installing a bigger hard drive.
For offsite backups, I like JungleDisk - it works on Mac, Windows and Linux and backs up to Amazon S3 (or, added very recently, the Rackspace cloud service). This is a nice solution if you have multiple machines (or even VMs) and want to keep certain directories backed up without having to think about it.
Home Server Warning!
I installed Home Server on my development Server for two reasons: Cheap version of Windows Server 2003 and for backup reasons.
The backup software side of things is seriously hit or miss. If you 'Add' a machine to the list of computers to be backed up right at the start of installing Home Server, generally everything is great.
BUT it seems it becomes a WHOLE lot harder to add any other machines after a certain amount of time has passed.
(Case in point: I did a complete rebuild on my laptop, tried to Add it - NOPE!)
So I'm seriously doubting the reliability of this platform for backup purposes. It seems to defeat the purpose if you can't trust it 100%.
I have the following backup scenarios and use rsync as a primary backup tool.
(weekly) Windows backup for "bare metal" recovery
Content of System drive C:\ using Windows Backup for quick recovery after physical disk failure, as I don't want to reinstall Windows and applications from scratch. This is configured to run automatically using Windows Backup schedule.
(daily and conditional) Active content backup using rsync
Rsync takes care of all changed files from the laptop, phone, and other devices. I back up the laptop every night and after significant changes in content, like importing recent photo RAWs from an SD card to the laptop.
I've created a bash script that I run from Cygwin on Windows to start rsync: https://github.com/paravz/windows-rsync-backup
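For illustration, a minimal rsync invocation of the kind such a script might wrap (the destination host and exclude patterns are hypothetical; the linked repository contains the author's actual script):

    # mirror the user profile to a backup host, skipping caches and temp files
    rsync -avz --delete \
          --exclude 'AppData/Local/Temp/' --exclude '*.tmp' \
          /cygdrive/c/Users/me/ backuphost:/backups/laptop/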