Setting up a test machine - testing

I'm looking to set up an improved test environment. I use VMware to easily set up a clean environment (fresh installation). This works OK for WinXP and simple tests, but our software is very memory- and CPU-intensive, and VMware is just too slow with Win7 and a full functionality test.
So I'd like to set up a system with a big hard disk that I can store disk images on. I've tried Clonezilla, but it's quite cumbersome to use. I'm looking for a solution where I can boot from a USB drive which will then show me a list of the following:
make new image
restore image
and 'restore image' should give me a list of the available images (Win XP, Win Vista, Win 7, Win 7 with Office installed, etc.). So basically just like VMware, but non-virtualized.
I would think this would be something obvious, but for the life of me I can't find any software that advertises it can do this. 10 years ago, Norton Ghost could do something like this but I don't remember how easy it was to use, and anyway it seems that current versions focus more on being an end-user backup tool.
So, any advice on what is the best way to set up such a test environment? Any specific disk imaging software that works well, or other tips on how to set this up? Thanks.
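For what it's worth, the menu described above can be scripted in a few lines around partclone (the imaging engine behind Clonezilla) on a bootable Linux USB stick. A minimal sketch, where the device, image location, and file names are assumptions to adapt:
#!/bin/bash
# Hypothetical menu for a bootable USB stick, wrapping partclone.
# Device and image locations are assumptions; adjust for your hardware.
IMAGES=/mnt/usb/images   # where saved images live on the big disk
DISK=/dev/sda1           # the test machine's Windows partition
echo "1) make new image"
echo "2) restore image"
read -r choice
case "$choice" in
  1) read -r -p "New image name: " name
     partclone.ntfs -c -s "$DISK" -o "$IMAGES/$name.pcl" ;;
  2) ls "$IMAGES"        # list the available images (win xp, win 7, ...)
     read -r -p "Image to restore: " name
     partclone.ntfs -r -s "$IMAGES/$name.pcl" -o "$DISK" ;;
esac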

IsoBuster might help you create the mirror images needed to set up your required environment.
You might want to take a look at it:
http://get2pc.com/isobuster%202.8o36c-1-w-0-0
http://www.isobuster.com/help.php?help=285
http://download.cnet.com/IsoBuster/3000-2248_4-10208087.html
Good Luck !


Multiseat setup for fun and profit: hypervisors and other choices

I am a grad student, and I am considering setting up my dream home workstation/art tool/entertainment device/all-purpose everything. I'm wondering if what I want to do is possible (and practical), and if so, I'd like some suggestions and warnings from people who know more about virtualization and hypervisors than I do:
Aim: Set up a 2-4 headed computing station that is optimized for using different OSes for the different tasks I do. I want to keep my work/play streams separated and have control over the resources each one is allowed. For example, one head would be Windows 10 for audiovisual work, media playing, and maybe some gaming. Another head would use Linux and be used mainly for data science (mostly R and Python), and some hosting for purely local use (such as running an instance of the Galaxy bioinformatics server, which I only plan to access locally). Finally, I want a VM that is purely devoted to web browsing, probably some lightweight Linux distro.
I want each OS to have its own keyboard and monitor(s), but ideally I want to copy-paste between OSes. The idea is to just swivel my chair to move between operating systems, or even to have one person using each.
What I think I need:
A hypervisor with PCI, USB, and network controller pass-through.
Two video cards, one each for my Windows and Linux workstations (with the web-browsing VM using the on-chip CPU graphics). Obviously, a mobo and CPU that support full virtualization.
A USB card with multiple separate controllers, so that I can use a different controller for each OS. Something similar for network interface cards.
Separate SSDs for each OS and its apps.
Some sort of storage pool (probably ZFS-based) to hold the bulk of my files, shared so I can access them from either guest. Ideally, I'd like it to be in a separate enclosure, but I don't trust eSATA cables (they seem to fail frequently) and care about speed of database access, so I'll probably put the drives inside the main case, even though that will make future migration more annoying.
Something like SPICE for KVM, so that I can copy and paste freely between OSes.
Is there anything I am overlooking?
What hypervisor or similar solution is best for what I want to do? I am leaning towards KVM, but am far from committed. I will consider paid solutions if there is a compelling reason to use them.
What are some pitfalls I should be wary of?
KVM will work ideally here; there are a lot of tutorials, and a lot of Intel-based configurations work like a charm.
ZFS can't share your data by itself; you need an NFS or Samba share on the host machine, as sketched below.
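For illustration, here is a minimal sketch of that host-side share, assuming a ZFS-on-Linux host with the default libvirt network (the pool/dataset name and subnet are assumptions):
# On the host: export a ZFS dataset over NFS to the guest subnet.
zfs set sharenfs="rw=@192.168.122.0/24" tank/shared
# Or share it over Samba instead:
zfs set sharesmb=on tank/shared
# In a Linux guest: mount the export (192.168.122.1 is the default libvirt host address).
mount -t nfs 192.168.122.1:/tank/shared /mnt/shared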
Synergy is the software for you; it shares one keyboard and mouse (and the clipboard) across machines.

Reducing the impact on disk space when loading new software on a dev machine

TL;DR
Noob wants to set up a dev machine/workspace on old hardware running Windows 10 and load up 5+ programs with file size and disk impact similar to Visual Studio. Wants to reduce the impact these programs have on his already resource-scarce laptop. Buying new hardware is the last resort; what is a viable workaround?
I have a laptop that I use for school, and I am looking into using it as a development workspace (Visual Studio, SSMS, .NET, JetBrains, GitHub Desktop, Infragistics Studio and the works). However, I also don't want these programs to slow down my regular student workflow (Word, Excel, browser) and take up resources. Additionally, some of the development programs I intend to only test-drive during their trial period, so I don't want them to stick around in my file system. A lot of what these programs do overlaps, so eventually I will be removing some of the programs that are not a good fit for what I am doing (training for web development).
My area of concern is that memory usage per Task Manager floats around 50% and disk usage hits 99% on a regular basis. My goal is to reduce the impact of loading even more software onto my computer. It currently has the basic office programs for school, but I think the cause of it being bloated is that it is a 4-year-old computer (Lenovo IdeaPad Z370): Intel Core i5-2410M dual-core / 4GB DDR3-1333 RAM / 500GB 5400RPM, which may not be the most optimal hardware to run Windows 10 on.
To address this problem, could I just load my development programs onto an external hard drive and then connect it to the laptop only when I am in "developer workflow" mode?
I've done some initial research, and this solution is said to be non-viable because programs vary in portability. If this is the case, could you propose alternatives, such as loading the programs onto a VM and connecting to it when I need the programs? What are other possible solutions to my resource problem?
I have a Dropbox account, a OneDrive account, and a $25 Azure credit provided by the school at my disposal. The solution should be cost-effective. The goal is to squeeze the last ounce of value out of the current hardware before upgrading.
Thanks in Advance! #noob
Hello all, I found what I was looking for!
Azure has a "Developer Ready" image. The VM comes with Visual Studio and other helpful tools preloaded. However, you need an MSDN subscription and a Windows 10 Professional product key. I had neither, so I went with another option: a VM with SQL Server preloaded. From there I was able to load up all the demoware and tools as well as SSMS. I can now access my tools through RDP from work, home, school, or any other Windows machine. Best of all, I don't need to buy new hardware, and the pay-per-minute use keeps the price within my allotted Azure credits.
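For reference, standing up a VM like that can be scripted with the Azure CLI; a minimal sketch, where the resource group, VM name, size, and marketplace image URN are all assumptions to adapt:
# Hypothetical example: create a SQL Server Developer edition VM.
# List current image URNs first, since they change over time:
#   az vm image list --publisher MicrosoftSQLServer --all --output table
az group create --name dev-rg --location eastus
az vm create \
  --resource-group dev-rg \
  --name dev-vm \
  --image MicrosoftSQLServer:sql2019-ws2019:sqldev:latest \
  --size Standard_B2ms \
  --admin-username devadmin \
  --admin-password 'ChangeMe-Str0ng!'
# Then connect to the VM's public IP over RDP, and deallocate it when idle
# so the pay-per-minute billing stays within the credits.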
TL;DR
Free VM to tap into my dev space and develop from anywhere

Basic virtualization questions

Excuse me for my lack of knowledge but I am really new to the Virtual world and have a few questions.
I work for a small charity who specialise in providing basic IT training. We have recently acquired a few Dell PowerEdge 2650 servers and Dell desktops, and we wish to offer XP, Windows 7, Mac, and Ubuntu training. I am looking at setting up a virtual environment so that we can have a standard image for each OS (I currently use image files, but it takes approximately 25 minutes to build each machine, and multi-boot is not an option as the new machines have 20 GB disks).
The servers are all dual-processor, and we can purchase more memory (I need to justify the cost).
What are the memory requirements for the host?
How many VMs can I run per server?
Can I run multiple instances of the same VM?
Thanks in advance for your knowledge.
Darryn
You might be able to get away with a multi-boot option with those 20 gig disks; each OS will probably take no more than ten gigs for a minimal install, so two OSes per machine isn't terrible. (Incidentally, look around for a group like FreeGeek in your area -- larger hard drives ought to be cheap for small sizes like 120-500 gigs.)
That said, virtualization might be just what you need, if you have a handful of pretty powerful machines.
I think between one and two gigabytes of host memory for every guest VM that you want to run would be very useful. At least in my experience, an Ubuntu image I gave 1024 megabytes to ran very quickly, but I didn't press it very far. Running Firefox or OpenOffice inside the VM would probably dictate more memory very quickly. Chrome seemed snappy.
So, if you've got 12 gigabytes of RAM, you might be able to get between four and twenty virtual machines hosted on the machine simultaneously, depending upon what your guests are doing.
As for disk space, if you use QEMU's -snapshot option, you ought to be able to save disk space. Each user could boot the same underlying disk image, but their own modifications would go into the 'snapshot' file. (I have no experience trying to do long-term system maintenance with this option, so it could be that all twenty of your users need to store service pack 2 contents when they upgrade in the future; I'd be scared of trying to modify the shared disk image once you've got snapshots of it running. Perhaps having everyone store 'personal documents' and the like in CIFS shares would make a ton of sense.)
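To make the shared-base idea concrete, here is a minimal QEMU sketch (file names, sizes, and memory values are examples):
# One shared base image, installed once and then treated as read-only.
qemu-img create -f qcow2 ubuntu-base.qcow2 10G
# ... install the OS into ubuntu-base.qcow2 ...
# Per-user overlay: writes land in the overlay, reads fall through to the base.
qemu-img create -f qcow2 -b ubuntu-base.qcow2 -F qcow2 student1.qcow2
qemu-system-x86_64 -m 1024 -hda student1.qcow2
# Or use -snapshot to throw away all writes when the VM exits:
qemu-system-x86_64 -m 1024 -snapshot -hda ubuntu-base.qcow2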
The biggest hurdle will probably be Mac; because the Apple terms of service forbid running OS X on non-Apple hardware, you'll have to have some Apple machines around to run VirtualBox.

Carrying and Working on an Entire Development Box from a USB Stick. Feasible?

Lately I have been thinking about investing in a worthy USB pen drive (something along the lines of this), installing operating systems on virtual machines, and starting to develop on them.
What I have in mind is to carry my development boxes around: a Windows installation for .NET development and a Linux distribution for stuff like RoR, Perl and whatnot, so that I can take them wherever needed, be it work, school, different computers at home, etc.
I am also thinking of doing this for backup purposes, i.e. backing up my almost-single VM file to an external HD instead of doing routine backups of my normal Windows box. I am also thinking about maybe even committing the VM boxes to source control (is that even feasible?).
So, am I on the right track with this? Would you suggest I try to implement it?
How feasible is it to have your development box on a virtual machine that runs from a USB pen drive?
I absolutely agree with where you are heading. I wish to do this myself.
But in case you don't already know: it's not just about drive size. Believe it or not, USB flash drives can be much slower than your spinning disk drives!
This can be a big problem if you plan to actually run the VMs directly from the USB drive!
I've tried running a 4GB Windows XP VM on a 32GB Corsair Survivor, and the VM was virtually unusable! Copying my 4GB VM off and back onto the drive was also quite slow - about 10 minutes to copy it onto the drive.
If you have an eSATA port, I'd highly recommend looking at high-speed eSATA options like this Kanguru 32GB eSATA/USB flash drive OR this 32GB one by OCZ.
The read and write speeds of these drives are much higher over eSATA than over USB. And you can still use them as USB drives if you don't have an eSATA port. Though if you don't have an eSATA port, you can buy PCI-to-eSATA cards online, and even eSATA ExpressCards for your laptop.
EDIT: A side note: you'll find that USB flash drives use FAT instead of NTFS. You don't want to use NTFS because it performs a lot more reads & writes on the disk, and your drive only has a limited number of writes before it dies. But by using FAT you'll be limited to a maximum file size of 2GB (FAT16) or 4GB (FAT32), which might be a problem with your VM. If this is the case, you can split your VM disks into 2GB chunks, as sketched below. Also make sure you back up your VM daily in case your drive does reach its maximum number of writes. :)
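For the splitting step, VMware ships a vmware-vdiskmanager tool; a quick sketch, where the file names are placeholders and the type codes may vary by version:
# Convert a monolithic growable disk into 2GB-split growable files (type 1),
# so each piece fits under the FAT file-size limit.
vmware-vdiskmanager -r original.vmdk -t 1 split.vmdk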
This article on USB thumbdrives states: "Never run disk-intensive applications directly against files stored on the thumb drive."
USB thumbdrives use flash memory, which has a maximum number of writes before it goes bad and corruption occurs. The author of the previously linked article found it to be in the range of 10,000-100,000 writes, but if you are using a disk-intensive application this could be an issue.
So if you do this, have an aggressive backup policy for your work. Similarly, when you run your development suite, it would be ideal if it could write to the local hard drive as a temporary workspace.
Hopefully you are talking about interpreted-language projects. I couldn't imagine compiling a C/C++ project of any size on a VM, let alone a VM running off of a USB drive.
I do it quite frequently with Xen, but also include a bare metal bootable kernel on the drive. This is particularly useful when working on something from which a live CD will be based.
The bad side is the bloat on the VM image needed to keep it bootable across many machines: where you would normally build only a very lean and mean paravirtualized kernel, you also have to include one that has everything including the kitchen sink (up to what you want, i.e. do you need audio, or Token Ring, etc.?).
I usually carry two sticks, one has Xen + a patched Linux 2.6.26, the other has my various guest images which are ready to boot either way. A debootstrapped copy of Debian or Ubuntu makes a great starting point to create the former.
If nothing else, it's fun to tinker with. Sorry to be a bit GNU/Linux-centric, but that's what I use exclusively :) I started messing around with this when I had to find an odd path to upgrading my distro, which was two years behind the current one. So, I strapped a guest, installed what I wanted, and pointed GRUB at the new LV for my root file system. Inside, I just mounted my old /home LV and away I went.
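For anyone curious, building a debootstrapped guest image takes only a few commands; a minimal sketch on a current Debian, where the image size, mount point, and mirror are assumptions:
# Build a minimal Debian guest image with debootstrap.
dd if=/dev/zero of=guest.img bs=1M count=4096   # 4 GB image file
mkfs.ext4 -F guest.img
mkdir -p /mnt/guest
mount -o loop guest.img /mnt/guest
debootstrap --arch amd64 stable /mnt/guest http://deb.debian.org/debian
umount /mnt/guest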
Check out MojoPac:
http://www.mojopac.com/
Hard-core gamers use it to take World of Warcraft with them on the go -- it should work fine for your development needs, at least on Windows. Use Cygwin with it for your Unix dev needs.
I used to do this, and found that compiling was so deathly slow, it wasn't worth it.
Keep in mind that USB flash drives are extremely slow (maybe 10 to 100 times slower) compared to hard drives at random write performance (writing lots of small files to a partition which already has lots of files).
A typical build process using GNU tools will create lots of small files - a simple configure script creates thousands of small files and deletes them again just to test the environment before you even start compiling. You could be waiting a long time.

What's a good way to backup (and maybe synchronize) your development machine? [closed]

I make extensive use of source control for anything that relates to a project I'm working on (source, docs etc.) and I've never lost anything that way.
However, I have had two or three crashes (spread over the last 4 years) on my development machine that forced me to reinstall my system and reconfigure my apps (Eclipse, Vim, Firefox, etc.). For weeks after reinstalling, I was missing one little app or another, some PHP or Python module wasn't there, stuff like that.
While this is not fatal, it's very annoying and sucks up time. Because it seemed so rare, I didn't bother about an actual solution, but meanwhile I've developed a mindset where I just don't want stuff like that happening anymore.
So, what are good backup solutions for a development machine? I've read this very similar question, but that guy really wants something different from what I want.
What I want is to have spare hard drives on the shelf and reduce my recovery time after a crash to something like an hour or less.
Thinking about this, I figured there might also be a way to use the backup mechanism for keeping two or more dev workstations in sync, so I can continue work at a different PC anytime.
EDIT: I should've mentioned that
I'm running Linux
I want incremental backup, so that it's cheap to do it frequently (once or twice a day)
RAID is good, but I'm on a laptop most of the time, no second hd in there, no E-SATA and I'm not sure about RAIDing to a USB drive: would that actually work?
I've seen sysadmins use rsync, has anybody had any experiences with that?
I would set up the machine how you like it and then image it. Then, you can set up rsync (or even SVN) to back up your home dir nightly.
Then when your computer dies, you can reimage, and then redeploy your home dir.
The only problem would be upgraded/new software, but the only way to deal completely with that would be to do complete nightly backups of your drive(s).
Thanks, this sounds like a good suggestion. I think it should also be possible to update the image regularly (to pick up software updates/installs), but maybe not that often. E.g., I could boot the image in a VM and perform a global package update or something.
Hanno
You could create an image of your workstation after you've installed & configured everything. Then when your computer crashes, you can just restore the image.
A (big) downside to this, is that you won't have any updates or changes you've made since you created the image.
Cobian Backup is a reliable backup system for Windows that will perform scheduled backups to an external drive.
You could create a hard drive image. Restoring from a backup image restores everything to the exact state that it was at the time you took the image.
Or you could create an installer that installs just about everything needed.
Since you expressed interest in rsync, here's an article that covers how to make a bootable backup image via rsync for Debian Linux:
http://www.debian-administration.org/articles/575
Rsync is fast and easy for local and network syncing and is by nature incremental.
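Since the question asks about rsync and incremental backups specifically: a common pattern is hard-linked snapshots via --link-dest, where unchanged files cost no extra space. A minimal sketch with example paths:
# Each run creates a dated snapshot; unchanged files are hard links
# into the previous one, so only changes consume disk space.
TODAY=$(date +%F)
rsync -a --delete --link-dest=/backup/latest /home/me/ /backup/$TODAY/
ln -sfn /backup/$TODAY /backup/latest   # point 'latest' at the new snapshot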
You can use RAID-1 for that, though it's the synchronization type of solution, not the backup type.
I use RAID mirroring in conjunction with an external hard drive using Vista's system backup utility to backup the entire machine. That way I can easily fix a hard drive failure, but in the event my system becomes corrupted, I can restore from the E-SATA drive (which I only connect for backup).
Full disclosure: I've never had to restore the backup, so it's kind of like the airbag in your car; hopefully it works when you need it, but there's no way to be sure. Also, the backup process is manual (it can be automated) so I'm only as safe as the last backup.
You can use the linux "dd" command line utility to clone a hard drive.
Just boot from a linux cd, clone or restore your drive and reboot.
It works great for Windows/Mac drives too.
This will clone partition 1 of the first hard drive (/dev/sda) to partition 1 of the second drive (/dev/sdb)
dd if=/dev/sda1 of=/dev/sdb1
This will clone partition 1 of the first hard drive to a FILE on the second drive.
dd if=/dev/sda1 of=/media/drive2/backup/2009-02-25.iso
Simply swap the values for if= and of= to restore the drive.
If you boot from the Ubuntu live CD it will automount your USB drives making it easy to perform the backup/restore with external drive(s).
CAUTION: Verify the identity of your drives BEFORE running the above commands. It's easy to overwrite the wrong drive if you're not careful.
Guess this is not exactly what you are looking for, but I just document everything I install and configure on a machine. Google Docs lets me do this from anywhere and keeps the document intact when the machine crashes.
A good step-by-step document usually reduces the recovery time to one day or so.
If you use a Mac, just plug in an external hard drive and Time Machine will do the rest, creating a complete image of your machine on the schedule you set. I restored from a Time Machine image when I swapped out the hard drive in my MacBook Pro and it worked like a charm.
One other option that a couple of guys use at my company is to have their development environment on a large Linux server. They just use their local machines to run an NX client to access the remote desktop (NX is much faster than VNC) - this has the advantages of fast performance, automatic backup of their files on the server, and the fact that they're developing on the same hardware that our customers use.
No matter what solution you use, it is always a good idea to have a secondary backup, too. Secondary backup should be off-site and include your essential work (source code, important docs). In case something happens to your main site (fire at the office, somebody breaks in and steals all your hardware, etc.), you would still be able to recover, eventually.
There are many online backup solutions. You could just get remote storage from a reliable provider (e.g. Amazon S3) and sync your work on a daily basis. The solution depends on the type of access you can get, but rsync is probably the tool you would use for that.
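As a concrete sketch of that daily sync (the paths, remote host, and bucket name are examples):
# Option 1: rsync the work tree to a remote server over SSH (incremental by nature).
rsync -az --delete ~/projects/ backup@remote.example.com:backups/projects/
# Option 2: sync it to an Amazon S3 bucket with the AWS CLI.
aws s3 sync ~/projects s3://my-offsite-backup/projects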