I'm looking into using virtual machines to host multiple OSes, and I'm looking at the free solutions, of which there are a lot. I'm confused about what a hypervisor is and why hypervisors are different from, or better than, a "standard" virtual machine. By "standard" I mean something like VMware Server 2.0, which I'll use as the benchmark.
Assume a dual-core system with 4 GB of RAM, capable of running a maximum of 3 VMs. Which is the best choice, hypervisor or non-hypervisor, and why? I've already read the Wikipedia article, but the technical details are over my head. I need a basic answer of what these different VM flavors can do for me.
My main question relates to how I would do testing on multiple environments. I am concerned about the isolation of OSes, so I can test applications on multiple OSes at the same time. Also, which flavor gives an experience closer to how a real machine operates?
I'm considering the following:
(hypervisor)
Xen
Hyper-V
(non-hypervisor)
VirtualBox
VMWare Server 2.0
Virtual PC 2007
*The classifications of the VMs I've listed may be incorrect.
The main difference is that Hyper-V doesn't run on top of the OS; instead, it runs alongside the host system on top of a thin layer called the hypervisor. A hypervisor is hardware platform virtualization software that allows multiple operating systems to run on a host computer concurrently.
Many other virtualization solutions use other techniques, such as emulation. For more details, see Wikipedia.
Disclaimer: everything below is (broadly) my opinion.
It's helpful to consider a virtual machine monitor (a hypervisor) as a very small microkernel. It has very few jobs beyond accessing the underlying hardware, such as monitoring event channels and granting guest domains access to specific resources, while enforcing some kind of scheduler.
All guest machines are completely oblivious to the others; the isolation is true. Guests do not share memory with the privileged guest (or each other). So, in this instance, you could (roughly) think of each guest (even the privileged one) as a process, as far as the VMM is concerned. Typically, the first guest gets extra privileges so that it can manage the rest. This is the ideal technology to use when virtual machines are put into production and exposed to the world.
Additionally, some guests can be patched to become aware of the hypervisor (paravirtualization), significantly increasing their performance.
On the other hand, we have things like VMware and QEMU, which rely on the host kernel to give them access to bare metal and enough memory to exist. They assume that all guests need to be presented with a complete machine, so the limits put on the process presenting these (more or less) become the limits of the virtual machine. I say more or less because device-mapper QoS is not commonly implemented. This is the ideal solution for trying code in some other OS, or on some other architecture. A lot of people will call QEMU, Simics, or even sometimes VMware (depending on the product) a 'simulator'.
For production rollouts I use Xen; for testing something I just cross-compiled I use QEMU, Simics, or VirtualBox.
If you are just testing / rolling new code on various operating systems and architectures, I highly recommend the second kind. If your need is introspection (i.e., watching guest memory change as bad programs run in a guest), I'd need more explanation before answering.
Benefits of a hypervisor:
A hypervisor separates virtual machines logically, assigning each its own slice of the underlying computing power, memory, and storage, thus preventing the virtual machines from interfering with each other.
I am using VMware Workstation 14, and when I install an operating system (any of them), some programs and apps are able to identify that I am using a virtual machine.
I have seen that the VM uses virtualized devices whose names give them away as virtual, for example the VMware Network Card. Is there any way to install fake, real-looking hardware drivers on these virtual machines? Can this simple change make an app see this VM as a real machine?
How to make this virtual machine appear as a real machine to applications?
Is there really any way?
This was asked as a yes-or-no question so my answer is:
Yes... probably. But it's a lot of work.
There's a 2006 presentation by Tom Liston and Ed Skoudis that talks about this: https://handlers.sans.org/tliston/ThwartingVMDetection_Liston_Skoudis.pdf
It focuses on VMware, but some of it would also apply to other types of Virtual Machine Environments (VMEs).
In summary, they identify as many things as they can find that would allow VM detection, which would each have to be addressed, and they also mention some VMware-specific mitigations for them.
VME artifacts in processes, the file system, and/or the Windows registry. These would include the VMtools service and "over 50 different references in the file system to 'VMware' and vmx" and "over 300 references in the Registry to 'VMware'", all of which would have to be deleted or changed (the sketch after this list shows what probing for such artifacts looks like).
VME artifacts in memory. Specific regions of memory tend to be different in guests (VMs) than hosts, namely the Interrupt Descriptor Table (IDT), Global Descriptor Table (GDT), and Local Descriptor Table (LDT). The method by which the VM is built may allow these to appear the same in guests as they do in hosts.
VME-specific virtual hardware. This would include the drivers you mention, like the VMware Network Card. The drivers would have to be removed or replaced with drivers that do not match the names or code signatures of any virtual drivers. This is probably easiest to do on an open-source system, simply by modifying the driver source code and rebuilding.
VME-specific processor instructions and capabilities. Some VMEs add non-standard machine language instructions, or modify the behaviour of existing instructions. These can be changed or removed by editing the VME source code, at the cost of convenient host-guest interaction.
VME differences in behaviour. A VM might respond differently on the network, or fail at time synchronization. This could be mitigated with additional source code changes (on both host and guest) to make the network traffic look closer to normal, and providing sufficient CPU cores to the VM would help make sure it does not run more slowly than wall-clock time.
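Several of those artifacts are easy to probe for yourself. Below is a minimal sketch (my own illustration, not taken from the paper) of three such checks in Python: scanning a couple of well-known registry keys for "VMware" strings on a Windows guest, reading the CPUID "hypervisor" flag that Linux exposes in /proc/cpuinfo, and matching the NIC's MAC address against VMware's well-known OUI prefixes. The registry paths and the OUI list are representative, not exhaustive.

```python
# Minimal sketch (not from the paper) of three VM-detection probes.
import uuid

def registry_mentions_vmware():
    """Scan a couple of well-known registry keys for 'VMware' (Windows only)."""
    import winreg  # standard library, available on Windows
    keys_to_check = [
        (winreg.HKEY_LOCAL_MACHINE, r"SYSTEM\CurrentControlSet\Services\Disk\Enum"),
        (winreg.HKEY_LOCAL_MACHINE, r"HARDWARE\DESCRIPTION\System\BIOS"),
    ]
    for root, path in keys_to_check:
        try:
            with winreg.OpenKey(root, path) as key:
                i = 0
                while True:
                    try:
                        _, value, _ = winreg.EnumValue(key, i)
                    except OSError:
                        break  # no more values under this key
                    if "vmware" in str(value).lower():
                        return True
                    i += 1
        except OSError:
            continue  # key absent on this system
    return False

def cpu_reports_hypervisor():
    """On a Linux guest, the CPUID 'hypervisor' flag appears in /proc/cpuinfo."""
    try:
        with open("/proc/cpuinfo") as f:
            return "hypervisor" in f.read()
    except OSError:
        return False

def mac_is_vmware_oui():
    """VMware assigns guest NICs OUIs such as 00:05:69, 00:0C:29, 00:50:56."""
    mac = "%012x" % uuid.getnode()  # may be a random value if no NIC is found
    return mac[:6] in {"000569", "000c29", "001c14", "005056"}
```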
Again this is from 2006, so if anyone has a more up-to-date reference, I'd love to see their answer.
Docker is an abstraction of the OS (kernel) and below; a VM is an abstraction of the hardware. What is the point of running Docker on a VM (like Azure), apart from app portability? Shouldn't they host Docker directly on the hardware?
Docker doesn't provide effective isolation for kernel-level security exploits (there's only one ring 0, and it's shared across all containers). Thus, one could reasonably wish to have the additional isolation provided by a virtualization mechanism.
Keep in mind that much of Docker's value is not about security, but about containerization -- building and distributing portable applications in such a way as to ensure that coupling between layers occurs only where and how intended.
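One quick way to see the "only one ring 0" point for yourself: the same snippet run on the host and inside any container on that host reports the same kernel release, because a container is just a group of processes under the host's kernel (a minimal illustration; a VM would instead report the release of its own guest kernel).

```python
# Prints the kernel release. Run inside a Docker container, this matches
# the host exactly, because containers share the host's kernel; a VM,
# by contrast, boots and reports its own guest kernel.
import platform

print(platform.release())
```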
The advantage of a cloud system like Azure is that you can go online with your credit card and get a machine up and running in a few minutes. This is enabled by that machine being virtual. Also VMs let you share hardware across multiple users with hardware-level isolation.
If everything else was equal, i.e. you didn't need any of the features of a VM, then you would be correct that a physical machine should be used, as it will run more efficiently.
What's the difference between a process virtual machine and a system virtual machine?
My guess is that a process VM does not provide a whole operating system for an application; rather, it provides an environment for a specific application.
And a system VM provides an environment for an entire OS to be installed, just like VirtualBox does.
Am I getting this right?
Another question is the difference between the two different implementations of a system VM: hosted vs. stand-alone.
I'm a beginner studying OS, so easy and understandable answer would be greatly appreciated :)
A Process virtual machine, sometimes called an application virtual machine, runs as a normal application inside a host OS and supports a single process. It is created when that process is started and destroyed when it exits. Its purpose is to provide a platform-independent programming environment that abstracts away details of the underlying hardware or operating system, and allows a program to execute in the same way on any platform.
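CPython itself is an everyday example of a process VM: your source is compiled to bytecode, and the interpreter executes that bytecode for the lifetime of the process. The standard dis module lets you see the instructions the VM actually runs:

```python
# The CPython process VM executes bytecode, not native machine code.
# dis disassembles a function into the instructions the VM runs.
import dis

def add(a, b):
    return a + b

dis.dis(add)  # prints LOAD_FAST / BINARY_ADD-style bytecode instructions
```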
A System virtual machine provides a complete system platform which supports the execution of a complete operating system (OS). Just like you said, VirtualBox is one example.
A Host virtual machine is the server component of a virtual machine, which provides computing resources in the underlying hardware to support a guest virtual machine (guest VM).
The following is from http://airccse.org/journal/jcsit/5113ijcsit11.pdf :
System Virtual Machines
A System Virtual Machine gives a complete virtual hardware platform with support for execution of a complete operating system (OS).
The advantages of using a System VM are:
Multiple operating system environments can run in parallel on the same piece of hardware, in strong isolation from each other.
The VM can provide an instruction set architecture (ISA) that is slightly different from that of the real machine.
The main drawbacks are:
Since the VM indirectly accesses the same hardware, efficiency is compromised.
Multiple VMs running in parallel on the same physical machine may result in varied performance depending on the workload imposed on the system. Implementing proper isolation techniques may address this drawback.
I use Linux as my primary OS. I need some suggestions on how I should set up my desktop and development environment. I work mostly on .NET and Drupal, but sometimes on other LAMP products and on C/C++ and Qt. I'm also interested in mobile (Android, ...) and embedded development.
Currently I install everything on my main OS, even things I use only a little. I use VMs a little (for a LAMP server).
Should I use a separate VM for each kind of development (like one for .NET/Mono, another for C++, one for mobile, one for the database only, one for other things, etc.)?
Or keep the primary development environment on the main OS and move the others into VMs?
The main OS should not get messed up.
Things should be easy to organize (a must).
Performance should be optimal (optimal settings for the best performance of components).
I'm interested to know how others are doing this.
There are both pros and cons with VMs.
Pros:
portability: you can move an image to a different server
easy backup (but lengthy)
replication (new member joins team)
Cons
performance
hardware requirements
size of backups (20-40 GB per VM ...)
management of backed-up images (what the differences are is not obvious)
keeping all images up to date (patching / Windows updates)
For your scenario, I would create a base VM with the core OS and shared components (web server, database), replicate it, and install the specific tools into separate VMs (a sketch of scripting this follows below). If you combine tools within one VM, you may end up with the same mess as when using the base OS; the advantage is that it is much easier to get rid of it ;-)
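A minimal sketch of scripting that workflow, assuming VirtualBox is the VM software (all VM names are placeholders):

```python
# Hypothetical sketch: clone a prepared base VM once per toolchain.
# (Assumes VirtualBox's VBoxManage is on PATH; all VM names are placeholders.)
import subprocess

BASE = "dev-base"  # base VM with the core OS, web server, and database

for name in ["dev-dotnet", "dev-cpp", "dev-mobile"]:
    subprocess.run(
        ["VBoxManage", "clonevm", BASE, "--name", name, "--register"],
        check=True,
    )
```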
Optimal performance != using VMs
If you need to use VMs anyway, then yes: it could be better to use a separate VM for each thing that needs one, unless you need more than one at once.
Now that OCI containers are stable and well supported, using those through Docker, Podman, or another similar tool is an increasingly popular option.
They are isolated, but under the same kernel, so:
they are almost as portable as virtual machines,
like virtual machines they can have their own virtual IP addresses, so they can run services not visible from the outside and without occupying ports on the host, but
they don't reserve any extra space on disk or in memory like virtual machines and
they are not slowed by any virtualization layers and
mounting directories from the host is easy and does not require any special support.
The usual approach is to have the checkout in the developer's normal home directory and mount it into containers for building, testing and running.
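As a concrete sketch of that approach (the image, path, and test command are placeholders; it assumes the Docker CLI is installed):

```python
# Hypothetical sketch: run the test suite inside a throwaway container,
# bind-mounting the checkout from the developer's home directory.
import pathlib
import subprocess

checkout = pathlib.Path.home() / "src" / "myproject"  # placeholder path

subprocess.run(
    [
        "docker", "run", "--rm",
        "-v", f"{checkout}:/work",  # mount the host checkout into the container
        "-w", "/work",              # work from the mounted directory
        "python:3.12",              # any toolchain image would do
        "python", "-m", "pytest",
    ],
    check=True,
)
```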
Building in containers is also supported by the Remote Development extension for Visual Studio Code.
When testing our software on several different systems (98, XP, Vista, Seven, Linux, etc.), I think that the best choice is to use virtualized systems.
What's your choice: VMware, VirtualBox, or MS Virtual PC/Server? And why?
We use VMware here at work. Really, any VM software that supports snapshots (or some way of saving the state of the machine) will work well. Snapshots make it easier to test installs and roll back. They can also help if your program goes and modifies files, by returning you to a known-good state.
VirtualBox is the way to go. It has snapshots and is platform independent (good for Mac users who want to test on other OSes). And it is free.
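The snapshot-based test cycle described above is easy to automate. A minimal sketch using VirtualBox's VBoxManage (the VM and snapshot names are placeholders):

```python
# Hypothetical sketch: revert a test VM to a clean snapshot before each run.
# (Assumes VBoxManage is on PATH; the VM and snapshot names are placeholders.)
import subprocess

VM = "winxp-test"
SNAPSHOT = "clean-install"

# Restore the snapshot while the VM is powered off, then boot it headless.
subprocess.run(["VBoxManage", "snapshot", VM, "restore", SNAPSHOT], check=True)
subprocess.run(["VBoxManage", "startvm", VM, "--type", "headless"], check=True)
# ... run the installer / tests against the VM here ...
subprocess.run(["VBoxManage", "controlvm", VM, "poweroff"], check=True)
```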
If it's available, Hyper-V on Windows Server 2008 is a powerful and full-featured entry including snapshot trees and all the niceties you'd expect with a quality UI.
If you're planning on using the VM on your local dev machine so you can (e.g.) bring it home on your laptop to work from there, then the more client-oriented virtualization software is probably the way to go.
If you're planning on using the virtualization in a primarily professional environment, a number of Hyper-V machines in a computer lab that you can remote into is a powerful paradigm that we've been using at my office for a few months now.
My own preference is to use a local VM (Virtual PC is the easiest one for me) as my development environment because I can bring my work laptop home and use the VM there also (I don't VPN into the office). I then use the lab's Hyper-V machines for tests, deployments, etc because they have a better story for taking and restoring snapshots.
Go VMware. My reason is simple: before VMware released VMware Player and VMware Server (the virtualisation platform formerly known as VMware GSX), the market for VM hosts was limited and expensive.
When VMware released these for free, all the other manufacturers (yes, I'm looking at Microsoft here) had to follow suit, so if it wasn't for the beneficence of VMware, we'd still be looking at having to buy our VM host software.
So, support VMware for being the good guys.
Oh, and their enterprise products are the business: they work well with Linux, have some excellent memory-saving tricks (here's the tech details), support multiple snapshots and snapshots off a base image, and have features such as VMotion (load spreading) that other products don't support nearly as well (if at all).
Microsoft's Virtual PC. It's free and simple.
One nice bit of functionality is the differencing VHD, which makes it easy (and cheap space-wise) to keep backing up / reverting the image.
VMware, that's what we use here. We have both the full-blown ESX for virtual servers and VMware Workstation for development / testing. ESX resource management is very good and easy to configure.
I've used VMware (when the company would pay for it), VMware Server (when the company would not), VirtualBox (because it's free, decent, and supports snapshots), Parallels on the Mac (which I bought), and Xen.
All work fine.
My current workhorse is VirtualBox, largely because it's free, supports snapshots, and runs on the various host platforms I have to use.
VMware works pretty well, but for high-CPU server apps we have found that Microsoft's Hyper-V works better because it has better CPU reservation abilities.
The key is that the system has snapshots, so you can easily roll back to several states (most do), and we have found that both VMware and Hyper-V have excellent APIs allowing us to kick off our automated tests when a new build completes (a sketch of that follows below).
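On the VMware side, the vmrun command-line tool is one way to drive that from a build script; a rough sketch (the host type, paths, credentials, and snapshot name are all placeholders):

```python
# Hypothetical sketch: after a build completes, revert a VMware VM to a
# known snapshot, boot it, and launch the test suite inside the guest.
# (Assumes VMware's vmrun tool; paths, names, and credentials are placeholders.)
import subprocess

VMX = r"C:\VMs\test\test.vmx"

def vmrun(*args):
    # vmrun takes authentication flags before the command name.
    subprocess.run(["vmrun", "-T", "ws", *args], check=True)

vmrun("revertToSnapshot", VMX, "clean")
vmrun("start", VMX, "nogui")
vmrun("-gu", "tester", "-gp", "secret",
      "runProgramInGuest", VMX, r"C:\tests\run_all.exe")
```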
Microsoft Virtual PC for Microsoft OSes, VirtualBox for *nix.
Virtual PC seems to be slightly faster and more stable, but it does not support Linux.
We might have used VMware if it were free, but our company would not spend the money.
VirtualBox is great. It does have some stability issues if you run it inside Mac OS X. If you need a single solution to run multiple OSes, this would be the one.
Linux/OpenSolaris on top of VirtualBox on top of Linux.