Is it feasible to have a virtual machine run an OS installed on a local disk (not the disk holding the host system), and also to boot the real machine from that same disk?
I'd like to do this with both Linux and Windows. Is it possible?
If you use separate partitions on the disk for the two OSes, then it works with no problem. But I think you mean that you want both instances of the OS to use the same partition, so the answer is no. There are many files in both Linux and Windows that are modified by the system while it is running. If two different instances of the OS are running and trying to update these files at the same time, it will result in chaos.
It would be possible to share the read-only parts of the system, and have separate copies of the writable parts, but that would be fairly tricky to set up. And it would result in two separate OSes on the disk, albeit with some shared files, so I don't think it really meets the premise of your question.
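For the VM side of such a setup, copy-on-write overlay images are the usual way to share a read-only base while keeping writes separate. A minimal sketch using QEMU's tools, with hypothetical image names: the base image is never written to, and each overlay accumulates its own changes.

    qemu-img create -f qcow2 -b base-os.qcow2 -F qcow2 my-overlay.qcow2
    qemu-system-x86_64 -m 2048 -drive file=my-overlay.qcow2,format=qcow2

Note this only solves the problem between virtual machines; it does not let a natively booted instance share the same base.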
I work with a number of different specialized and configured OS environments, but I generally only use one at a time. I have a processor-beefy laptop, but storage is always an issue. It would also be good to have a current backup of each environment so I can work from other hardware.
What would be ideal is some kind of VM library server that maintained canonical copies of each environment, from which I could download local execution copies to my machine to work with, and then stream changes back to the server image as I did my work.
In my research it seems like a number of the virtual machine vendors used to have services like this (Citrix Player, VMware Mirage), but they have all been EOL'd.
Is there a way to set something like this up today? I'd love a FOSS solution based on KVM, but I'd be willing to take a free proprietary solution.
I am using VMware Workstation 14, and when I install an operating system (any of them), some programs and apps are able to identify that I am using a virtual machine.
I have seen that the VM uses virtualized devices whose names openly mark them as virtual, for example the VMware network card. Is there any way to install realistic-looking hardware drivers in these virtual machines? Could that simple change make an app see this VM as a real machine?
How to make this virtual machine appear as a real machine to applications?
Is there really any way?
This was asked as a yes-or-no question so my answer is:
Yes... probably. But it's a lot of work.
There's a 2006 presentation by Tom Liston and Ed Skoudis that talks about this: https://handlers.sans.org/tliston/ThwartingVMDetection_Liston_Skoudis.pdf
It focuses on VMware, but some of it would also apply to other types of Virtual Machine Environments (VMEs).
In summary, they identify as many things as they can find that would allow VM detection, each of which would have to be addressed, and they also mention some VMware-specific mitigations for them.
VME artifacts in processes, file system, and/or Windows registry. These include the VMware Tools service and "over 50 different references in the file system to 'VMware' and vmx" and "over 300 references in the Registry to 'VMware'", all of which would have to be deleted or changed (a toy sketch of this kind of artifact scan follows this list).
VME artifacts in memory. Specific regions of memory tend to be different in guests (VMs) than hosts, namely the Interrupt Descriptor Table (IDT), Global Descriptor Table (GDT), and Local Descriptor Table (LDT). The method by which the VM is built may allow these to appear the same in guests as they do in hosts.
VME-specific virtual hardware. This includes the drivers you mention, like the VMware network card. The drivers would have to be removed or replaced with drivers that do not match the names or code signatures of any virtual drivers. This is probably easiest on an open-source system, simply by modifying the driver source code and rebuilding.
VME-specific processor instructions and capabilities. Some VMEs add non-standard machine language instructions, or modify the behaviour of existing ones. These can be changed or removed by editing the VME source code, at the cost of convenient host-guest interaction.
VME differences in behaviour. A VM might respond differently on the network, or fail at time synchronization. This could be mitigated with additional source code changes (on both host and guest) to make the network traffic look closer to normal, and providing sufficient CPU cores to the VM would help make sure it does not run more slowly than wall-clock time.
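To see how crude much of this detection typically is, here is a minimal sketch (my own illustration, not from the paper) of the kind of registry and file-system sweep an application might run on a Windows guest; the registry value and paths are common examples, not an exhaustive list:

    # Naive VM detection by scanning for VMware artifacts (Windows guest).
    # The registry value and file paths below are illustrative examples only.
    import os
    import winreg

    def registry_mentions_vmware():
        # Disk device enumeration strings usually embed the virtual vendor name.
        key_path = r"SYSTEM\CurrentControlSet\Services\Disk\Enum"
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
                value, _ = winreg.QueryValueEx(key, "0")
                return "vmware" in str(value).lower()
        except OSError:
            return False

    def filesystem_mentions_vmware():
        # VMware Tools and its drivers install to well-known default locations.
        candidates = [
            r"C:\Program Files\VMware\VMware Tools",
            r"C:\Windows\System32\drivers\vmhgfs.sys",
        ]
        return any(os.path.exists(p) for p in candidates)

    if registry_mentions_vmware() or filesystem_mentions_vmware():
        print("Looks like a VMware guest")
    else:
        print("No obvious VMware artifacts found")

Every one of these strings is exactly the kind of artifact the first item says you would have to hunt down and change.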
Again this is from 2006, so if anyone has a more up-to-date reference, I'd love to see their answer.
Here's the problem. I use around three different machines for development. My partner is using two. We have to go through the same freaking setup procedure on all five machines to get to work.
We're working with a PHP project here, so:
Install and configure PDT, a PHP debugger, and some version of XAMPP.
Then possibly install an SVN client, and any other tools.
Again, on each of the five machines.
What if, instead, we did all of this once, in a virtual machine that is set up with the same stack and same versions as the production server? Then each of us could grab a copy of the VM image, run that image on each of the five machines, and do all of our development in that VM. Put Eclipse, Apache, MySQL, the works, all in that VM.
The only negative of this approach, and please correct me on the "only" part, is performance. Is it really that big of an issue, though? The slowest machine of the five is a Samsung NC10 powered by a 1.6 GHz Intel Atom processor.
Do you think this is possible and practically usable? Or am I crazy?
I use a VM for development (running on my laptop) and have never had performance problems. Another approach that you could take would be to image the drive in the state that you want. Use Acronis or Ghost to re-image each machine when you need to. Only takes about 5-10 minutes to restore an image on any modern PC.
I use a VM for all my "work" as it keeps it away from my "play". This setup allows me to use the office VPN without exposing my whole machine to the office environment (which I trust about as much as the internets ;-)). Also, I don't have to worry about messing up my development environment by trying games or other software. My work VM is currently running inside VirtualBox, but I have used VMware in the past. I have only noticed performance issues when using graphics-intensive programs like WebEx or the Terminal Server Client.
It can certainly be done. What turns me off is the size of the VM image, which would normally be several GB. Having it on a network share means it can take longer to transfer than your current setup process takes. I guess an external hard drive would be the easiest way to move it around.
Performance wouldn't be an issue with any web development.
I have to ask why your current machines need to be "re-imaged" each time you sit down for work?
If you're using Windows, you'll probably want to run SYSPREP on the master image so that the 'mini-setup' runs when you boot the virtual machines for the first time.
Otherwise, from Windows' point of view, the machines have exactly the same SID, hostname, and other identifiers; running multiple machines with the same SID on the same network can cause tons of headaches, even more if you want them to communicate with each other.
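For reference, the generalize pass is typically a single command run inside the master guest before you shut it down and start copying (verify the options against the documentation for your Windows version):

    C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown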
I've run WebSphere for zSeries on a VMware virtual machine with no problem, and WebSphere is more resource-intensive than any PHP stack. I find that having a multi-core machine, or at least hyper-threading, makes it run a lot faster.
With VMware, disk operations are slower. For PHP development I doubt it would be a problem, but you'd definitely notice it if you were compiling a large C++ project. There is also Sun's VirtualBox, which is free, and the latest version is rather nice (but I haven't looked at how slow its disk operations are yet).
I am using that idea in practice. Virtual machines are generally great for development:
You can run multiple operating systems and multiple separate development environments.
You can preserve older development environments for later support.
They can be easily backed up; when a hard drive crashes, there's no need to start from the beginning.
They can be copied from one developer to another, so not everyone has to do the tedious installations and configurations.
Downsides are:
Virtual machines are slower, so you need more powerful computers than you would otherwise. I would recommend at least 4 GB of RAM, but preferably more like 16, fast multi-core processors, and fast hard drives.
When copying Windows OS virtual machines, each copy in use should have its own product key. When you make a copy, it needs to be activated with a new product key.
Have you thought about a software configuration manager like Ansible, Chef, or Puppet? With such software, automating these tasks is very easy! It can even create a fresh VM and then configure it.
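As a taste of what that looks like, here is a minimal Ansible playbook sketch for the stack described in the question; the package names assume a Debian/Ubuntu guest and the inventory hostname is a placeholder:

    # dev-stack.yml - provisioning sketch; run with:
    #   ansible-playbook -i "devvm," dev-stack.yml
    - hosts: all
      become: true
      tasks:
        - name: Install the PHP development stack
          apt:
            name:
              - apache2
              - mysql-server
              - php
              - subversion
            state: present
            update_cache: true

Once something like this exists, "setting up a sixth machine" stops being a procedure and becomes a command.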
I'm looking into using virtual machines to host multiple OSes, and I'm looking at the free solutions, of which there are a lot. I'm confused about what a hypervisor is and why hypervisors are different from, or better than, a "standard" virtual machine. By "standard" I mean something like my benchmark, VMware Server 2.0.
For a dual-core system with 4 GB of RAM, which would be capable of running a maximum of 3 VMs: which is the best choice, hypervisor or non-hypervisor, and why? I've already read the Wikipedia article, but the technical details are over my head. I need a basic answer on what these different VM flavors can do for me.
My main question relates to how I would do testing on multiple environments. I am concerned about the isolation of OSes, so I can test applications on multiple OSes at the same time. Also, which flavor gives an experience closer to how a real machine operates?
I'm considering the following:
(hypervisor)
Xen
Hyper-V
(non-hypervisor)
VirtualBox
VMWare Server 2.0
Virtual PC 2007
*The classifications of the VMs I've listed may be incorrect.
The main difference is that Hyper-V doesn't run on top of the OS; instead, the OS runs alongside the guests on top of a thin layer called the hypervisor. A hypervisor is hardware virtualization software that allows multiple operating systems to run on a host computer concurrently.
Many other virtualization solutions use other techniques, like emulation. For more details, see Wikipedia.
Disclaimer: everything below is (broadly) my opinion.
It's helpful to consider a virtual machine monitor (a hypervisor) as a very small microkernel. It has very few jobs beyond accessing the underlying hardware, such as monitoring event channels and granting guest domains access to specific resources, while enforcing some kind of scheduler.
All guest machines are completely oblivious of the others; the isolation is real. Guests do not share memory with the privileged guest (or each other). So, in this instance, you could (roughly) think of each guest (even the privileged one) as a process, as far as the VMM is concerned. Typically, the first guest gets extra privileges so that it can manage the rest. This is the ideal technology to use when virtual machines are put into production and exposed to the world.
Additionally, some guests can be patched to become aware of the hypervisor, significantly increasing their performance.
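As a concrete illustration of that awareness, a paravirtualized Linux guest under Xen exposes an interface the guest itself can read; a quick check, assuming a reasonably recent kernel (the path exists on Xen guests):

    # Print the hypervisor type (e.g. "xen") if the guest exposes one.
    from pathlib import Path

    hv = Path("/sys/hypervisor/type")
    print(hv.read_text().strip() if hv.exists() else "no hypervisor interface exposed")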
On the other hand we have things like VMware and QEMU, which rely on the host kernel to give them access to bare metal and enough memory to exist. They assume that all guests need to be presented with a complete machine, and the limits put on the process presenting these (more or less) become the limits of the virtual machine. I say more or less because device-mapper QoS is not commonly implemented. This is the ideal solution for trying code on some other OS or some other architecture. A lot of people will call QEMU, Simics, or even sometimes VMware (depending on the product) a 'simulator'.
For production rollouts I use Xen; for testing something I just cross-compiled, I use QEMU, Simics, or VirtualBox.
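For the cross-compiled case, QEMU's user-mode emulation is often all that's needed; something like the following runs an ARM binary directly on an x86 host (the library prefix depends on your cross toolchain, and the binary name is a placeholder):

    qemu-arm -L /usr/arm-linux-gnueabihf ./hello-arm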
If you are just testing / rolling new code on various operating systems and architectures, I highly recommend the second kind (hosted solutions like QEMU). If your need is introspection (i.e. watching guest memory change as bad programs run in a guest) ... I'd need more explanation before answering.
Benefits of a hypervisor:
A hypervisor separates virtual machines logically, assigning each its own slice of the underlying computing power, memory, and storage, thus preventing the virtual machines from interfering with each other.
I bought a new Vista PC recently but was having lots of problems getting everything to work on it, so I continued doing most of my work (development and other) on a slow XP machine that I've had for years.
Until now, that is. I used VMware Converter to take an image of my old XP machine, and now I'm running it on my Vista machine and doing pretty much all my work within that XP virtual machine. I'm using VMware Workstation.
So each morning I boot up my Vista machine, and then I boot up my XP virtual machine and spend the whole day working in the XP virtual machine.
Yes, you can probably guess: I'm the complete opposite of a VMware power user... I've not figured out snapshots, linked clones, or anything more than the absolute basics of running a VM. But I set this system up OK, and it's working well. Everything's running a lot faster than it was on my old machine anyway.
However, I'm concerned about the VM getting corrupted or something and causing me to lose everything. Of course I can back the whole VM up, and I can back up files from the VM, and I will, but I'm wondering if it might be easier and safer to use a mapped drive or public folder or something for all my work, so that if the XP VM goes kaput, my files will all be available from the Vista machine.
This would also be good because I could share files easily between the Vista and the XP machine (I do use Vista for the odd thing). But I'm wondering if it'll make it much slower to read and write files from my XP machine (e.g. if I'm compiling a big Java project, which involves lots of I/O at once)?
The information on how to set these things up is readily available, but I haven't found it so easy to figure out the best approach for what I'm doing. Most people are using VMs for much more advanced purposes than mine.
Also I'm wondering if there are any other tips or important considerations for this doing-all-your-work-in-one-VM type of setup? e.g. what's likely to go wrong, and how can I avoid it? Anything else?
I have an Ubuntu Linux box at home which has three VMs, all totally self-contained.
The first is for my wife's business, she needs access to all the MS Office stuff and MYOB.
The second is for work, they're too tight to buy me a laptop and I'm not going to let them install their hideous security and auto-update products on my real box.
The third is my Visual Studio development VM.
It runs like a dream (although I've only ever tested one VM at a time). And I just back up all the VM files from Ubuntu (along with my Linux work as well), which basically gives me images of the VM hard drives.
Surely if you are doing all your work in a VM, it's time to think about changing your host machine to one that's usable, no?
As others have pointed out, it is time to think about changing your host OS to one you are comfortable with and can get your work done on. Depending on what you do on your machine day to day, I can bet Vista is going to be nothing but a big hurdle. Why tax your work and yourself by running VMware on top of a beast like Vista, only to do all your work inside the VM?
Having said that, I do suggest that you look into VMware snapshots and cloning. Those two are powerful features, not least the former in your case, which can be used to avert, in addition to solve, a lot of common problems you can run into while running any OS inside a VM.
I perform a crude backup once in a while where I compress the VMware image on disk with tools like 7-Zip and store it on backup media. However, for backups or restore points within the system, VMware's linked cloning is definitely a handy feature. Since Windows is susceptible to getting corrupted or infected often, with linked cloning you can be pretty sure that you can easily revert to the last state before the corruption took place and continue your work unimpeded from there.
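The crude backup mentioned above is a one-liner once the VM is powered off; something along these lines, with placeholder paths:

    7z a -mx=5 workvm-backup.7z "C:\Virtual Machines\WorkVM"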
I have been using VMWare at work for a couple of years now. I use it for development and testing. As long as your base PC is good enough it is a really good way to separate your "PC Life".
I would certainly store your data files on a server somewhere. This can be a mapped drive, source control, or whatever. When you start using snapshots, it is really easy to wipe a session, so treating your base PC as a kind of NAS avoids this problem.
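With VMware Tools installed in a Windows guest, the host's shared folders appear under a well-known UNC path, so mapping a drive inside the guest is a single command (the share name here is an example):

    net use W: "\\vmware-host\Shared Folders\work"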
I have now decided to start using VMWare at home. I have a VM for business apps (Office, QuickBooks etc), one for Visual Studio development and several others for web servers, sql servers etc. My base PC has 8GB RAM & a 2.8GHz quad core processor, so running four or more VMs is no problem.
I'm wondering if it might be easier and safer to use a mapped drive or public folder or something for all my work
Please, please, please use a version control system (one that is also backed up) if you're working mainly with text files. A mapped drive or public folder is accessible, but it's not the best way.
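Since the original setup already mentions an SVN client, the bare-bones version of that workflow looks like this (the repository URL is a placeholder):

    svn checkout http://svn.example.com/repos/project/trunk project
    cd project
    # ... edit files ...
    svn add newfile.php
    svn commit -m "Describe the change"

Every commit then lives on the server, safely outside the VM.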