Difference between virtual machine process and host OS process?

Suppose my PC runs Ubuntu as the host OS. I install a hypervisor, say VirtualBox, and then deploy CentOS and Red Hat inside it as guest OSes.
Suppose CentOS and Red Hat each have 2 processes running and Ubuntu is running 3 processes. My questions are:
How many processes is Ubuntu running in total?
Is there any difference between guest OS processes and host OS processes?
If each guest OS runs as a single process, will the guests get less CPU time than the other processes running on the host OS?
Please clear my doubts here.
Thank you.

Well let me clear your doubts,
First of all, an OS does not have a fixed number of processes; what you configure for a virtual machine are cores or threads. Technically, you can define how many cores or threads your virtual machine may use, and that depends on the configuration of the system you run it on.
Secondly, the guest OS is what you have created inside the virtual machine, and the host OS is what your laptop or PC actually runs. The host OS works with the real hardware, whereas the guest OS works with virtual hardware: the number of cores and the type and size of hard drive are defined by the user when adding the virtual machine.
Third, as mentioned above, the guest and host OS work with the configuration you choose: if you assign a higher number of cores/threads when setting up your virtual machine, the guest OS will run faster.
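To see this from inside the guest, here is a minimal C++ sketch (standard library only, nothing VirtualBox-specific) that reports how many CPUs the running OS can see. Inside the guest it should reflect the vCPU count you configured; on the host it reflects the physical machine.

    // Report how many hardware threads the OS we are running on can see.
    // Inside a guest this reflects the vCPUs assigned to the VM, not the host.
    #include <iostream>
    #include <thread>

    int main() {
        unsigned n = std::thread::hardware_concurrency(); // may return 0 if unknown
        std::cout << "CPUs visible to this OS: " << n << '\n';
        return 0;
    }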
Ideally, virtual machines are used to test and build operating-system functionality without affecting the host OS. You can think of it as your parents' house: you can live and grow there, but in the end you cannot escape the fact that their contribution is the larger one, and you cannot go beyond what they provide without leaving and building a home of your own.

Linux is a multi-threaded operating system, and the host OS sees VirtualBox as an ordinary process with its own threads. Using VirtualBox you can define the number of cores and the virtual hard disk size for the guest OS.
Since VirtualBox runs in its own threads, separate from the threads doing the host OS's other work, the effect on processing speed is small. But I've observed big variances in processing speed on systems with low memory: each thread needs its own allocation of memory to run smoothly, so systems with more than 2 GB of RAM manage VirtualBox very well.
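To make the "VirtualBox is just a process with threads" point concrete, here is a hedged C++17 sketch (Linux-only, assumes procfs is mounted) that counts the threads of any process by listing /proc/<pid>/task. Pointed at the PID of a running VM process, it shows that the host kernel sees one ordinary multi-threaded process.

    // Count the threads of a process: each subdirectory of /proc/<pid>/task
    // corresponds to one thread, as seen by the host's scheduler.
    #include <filesystem>
    #include <iostream>
    #include <iterator>

    namespace fs = std::filesystem;

    int main(int argc, char* argv[]) {
        if (argc != 2) {
            std::cerr << "usage: " << argv[0] << " <pid>\n";
            return 1;
        }
        fs::path task_dir = fs::path("/proc") / argv[1] / "task";
        std::error_code ec;
        fs::directory_iterator it(task_dir, ec);
        if (ec) {
            std::cerr << "cannot read " << task_dir << ": " << ec.message() << '\n';
            return 1;
        }
        std::cout << "PID " << argv[1] << " has "
                  << std::distance(fs::begin(it), fs::end(it)) << " threads\n";
        return 0;
    }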

Related

I want to learn about virtualization

As a complete beginner, I only know how to create VMs and install OSes on them using Oracle VirtualBox. All the VMs created this way depend on the hardware resources (CPU, RAM, etc.) of a single machine; if that machine goes down, the VMs go down with it. I need to know how VMs can be created using resources taken from different physical machines (manually or dynamically), so that the failure of any one machine does not take a VM down.
For example: there are 4 physical machines with 8 cores and 16 GB RAM each. Now I want to create three VMs, each with 8 cores and 16 GB RAM, drawing on different physical machines. If one physical machine goes down, no VM should go down.
You can look up clustering solutions (e.g. VMware clusters, or Hyper-V failover clusters). In this model, if a physical host goes down, then the virtualization platform will power up the VMs on other hosts.
If you're looking for zero downtime, then VMware has something called Fault Tolerance, in which a shadow copy of a VM runs on a different host and is continuously synchronized with the primary copy. If the primary host goes down, the shadow copy can take over with zero downtime (you don't have to boot the shadow copy because it's already running). This feature, while cool, has a lot of real-world limitations in how it interoperates with other VMware features. For example, as of vSphere 6.0, you cannot do various kinds of migrations for such VMs. I believe it also requires a more expensive license.
These solutions generally require some shared resources between the physical hosts (most notably storage). Otherwise they will not work (or at the very least, performance will greatly suffer).

OpenStack compute node performance

I am starting to learn OpenStack. My understanding (after reading the docs) is that the compute nodes run a host OS (Ubuntu or another Linux), on top of that you have your hypervisor (like KVM), and the VMs then run on top of it, i.e. HW -> OS -> Hypervisor -> VMs. This is similar to a VM running on VirtualBox, which runs on a host operating system, i.e. HW -> Host OS -> VBox -> VMs. Please correct me if my understanding is incorrect.
Assuming my understanding is correct, how will the performance of VMs on this architecture compare to running the VMs directly on the hypervisor, i.e. HW -> Hypervisor (KVM) -> VMs?
Compare this with the VMware OpenStack architecture, where Nova speaks to VMware vCenter and vCenter then manages the ESXi nodes (vCenter and ESXi are on different nodes). That way my VMs run directly on top of a hypervisor that sits on the hardware (HW -> ESXi -> VMs), and all the overlay networking is handled by NSX. This looks much more performant than the other architecture. Am I missing something here?
Thanks in advance.
~exp8
Since KVM is part of the Linux kernel and runs guest instructions directly on the CPU, the Linux host itself is the hypervisor (HW -> Hypervisor -> VM). On the VMware side, a tiny proprietary, specially tuned kernel serves as the hypervisor.
To find out which one is more efficient, you should benchmark. But if VMware's hypervisor consumes fewer resources (fewer processes, less memory and CPU), it may be the better choice.
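A minimal sketch of the kind of benchmark meant here: time a fixed amount of CPU-bound work inside a guest on each architecture (KVM-on-Linux vs. ESXi) and compare the wall-clock results. The loop body and iteration count are arbitrary choices; run it many times and look at the variance as well as the mean.

    // Time a fixed chunk of CPU-bound work; compare the result across setups.
    #include <chrono>
    #include <cstdint>
    #include <iostream>

    int main() {
        const std::uint64_t iterations = 500'000'000;
        volatile std::uint64_t sink = 0;  // volatile keeps the loop from being optimized away

        auto start = std::chrono::steady_clock::now();
        for (std::uint64_t i = 0; i < iterations; ++i)
            sink = sink + i;
        auto end = std::chrono::steady_clock::now();

        std::chrono::duration<double> elapsed = end - start;
        std::cout << "CPU loop took " << elapsed.count() << " s\n";
        return 0;
    }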

SystemC running inside a virtual machine, timing issues or corrupt results?

Is it OK to run a SystemC-based simulation on a guest OS inside a virtual machine? Can simulation time be affected by this? I know that SystemC time is simulated and not actually tied to hardware timers.
And will running dozens of instances of SystemC simulations in a virtual machine configured with 4 cores (the physical machine has 8) affect the results?
There are no problems running SystemC on a virtual machine. I do it regularly with VirtualBox. I run SystemC on Linux and Windows virtual machines, both 32-bit and 64-bit.
Unless there is a bug in the virtual machine software, a software application should behave identically when run on a physical or virtual machine.
Running multiple SystemC simulations concurrently on a virtual machine is also fine. The limit on how many simulations you can run concurrently will be set by how much RAM you have available.
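To illustrate why the results are safe, here is a small self-contained SystemC sketch: simulated time is advanced by the SystemC kernel's event queue, not by host timers, so sc_time_stamp() prints exactly the same values on bare metal and inside a VM, no matter how loaded the host is.

    // Simulated time in SystemC is deterministic: this prints ticks at
    // 10 ns, 20 ns and 30 ns of *simulated* time on any machine, virtual or not.
    #include <systemc.h>

    SC_MODULE(Ticker) {
        SC_CTOR(Ticker) {
            SC_THREAD(run);
        }
        void run() {
            for (int i = 0; i < 3; ++i) {
                wait(10, SC_NS);  // advance simulated, not wall-clock, time
                std::cout << "tick at " << sc_time_stamp() << std::endl;
            }
        }
    };

    int sc_main(int argc, char* argv[]) {
        Ticker ticker("ticker");
        sc_start();  // run until no events remain
        return 0;
    }

Only the wall-clock duration of the run varies with the host; anything measured in sc_time is unaffected.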

Difference between "process virtual machine" with "system virtual machine"

What's the difference between a process virtual machine and a system virtual machine?
My guess is that a process VM does not provide an entire operating system for all of the applications on a machine; rather, it provides an environment for one specific application.
And a system VM provides an environment for a whole OS to be installed in, just like VirtualBox.
Am I getting it right?
Another question is the difference between the two implementations of a system VM: hosted vs. stand-alone.
I'm a beginner studying OS, so easy and understandable answer would be greatly appreciated :)
A Process virtual machine, sometimes called an application virtual machine, runs as a normal application inside a host OS and supports a single process. It is created when that process is started and destroyed when it exits. Its purpose is to provide a platform-independent programming environment that abstracts away details of the underlying hardware or operating system, and allows a program to execute in the same way on any platform.
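To make the idea concrete, here is a toy C++ sketch of a process VM: a program is expressed as bytecode for an abstract stack machine, and the interpreter (the "VM") executes it identically on any host it is compiled for. Real process VMs such as the JVM or CLR are vastly more sophisticated, but the shape is the same.

    // A toy stack-machine "process VM": the bytecode is platform-independent;
    // only this interpreter needs to be ported to each host.
    #include <cstdint>
    #include <iostream>
    #include <vector>

    enum Op : std::uint8_t { PUSH, ADD, MUL, PRINT, HALT };

    void run(const std::vector<std::uint8_t>& code) {
        std::vector<std::int64_t> stack;
        for (std::size_t pc = 0; pc < code.size(); ++pc) {
            switch (code[pc]) {
                case PUSH:  stack.push_back(code[++pc]); break;  // next byte is the operand
                case ADD:   { auto b = stack.back(); stack.pop_back(); stack.back() += b; break; }
                case MUL:   { auto b = stack.back(); stack.pop_back(); stack.back() *= b; break; }
                case PRINT: std::cout << stack.back() << '\n'; break;
                case HALT:  return;
            }
        }
    }

    int main() {
        run({PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT});  // (2 + 3) * 4 = 20
        return 0;
    }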
A system virtual machine provides a complete system platform which supports the execution of a complete operating system (OS). Just as you said, VirtualBox is one example.
A host virtual machine is the server component of a virtual machine, which provides computing resources from the underlying hardware to support a guest virtual machine (guest VM).
The following is from http://airccse.org/journal/jcsit/5113ijcsit11.pdf :
System Virtual Machines
A system virtual machine gives a complete virtual hardware platform with support for the execution of a complete operating system (OS).
The advantages of using a system VM are:
- Multiple operating system environments can run in parallel on the same piece of hardware, in strong isolation from each other.
- The VM can provide an instruction set architecture (ISA) that is slightly different from that of the real machine.
The main drawbacks are:
- Since the VM accesses the hardware indirectly, efficiency is compromised.
- Multiple VMs running in parallel on the same physical machine may show varied performance depending on the workload imposed on the system. Implementing proper isolation techniques may address this drawback.

What are the benefits of a Hypervisor VM?

I'm looking into using virtual machines to host multiple OSes, and there are a lot of free solutions to choose from. I'm confused about what a hypervisor is and why hypervisors are different from, or better than, a "standard" virtual machine. By standard I mean something like VMware Server 2.0, which I'll use as the benchmark.
For a dual-core system with 4 GB of RAM, capable of running at most 3 VMs: which is the best choice, hypervisor or non-hypervisor, and why? I've already read the Wikipedia article, but the technical details are over my head. I need a basic answer about what these different VM flavors can do for me.
My main question relates to how I would do testing on multiple environments. I care about the isolation of the OSes so that I can test applications on multiple OSes at the same time. Also, which flavor gives a closer experience of how a real machine operates?
I'm considering the following:
(hypervisor)
Xen
Hyper-V
(non-hypervisor)
VirtualBox
VMWare Server 2.0
Virtual PC 2007
*The classifications of the VMs I've listed may be incorrect.
The main difference is that Hyper-V doesn't run on top of the OS; instead, both Hyper-V and the host system run on top of a thin layer called the hypervisor. A hypervisor is hardware-platform virtualization software that allows multiple operating systems to run concurrently on a single host computer.
Many other virtualization solutions use other techniques, such as emulation. For more details, see Wikipedia.
Disclaimer: everything below is (broadly) my opinion.
It's helpful to think of a virtual machine monitor (a hypervisor) as a very small microkernel. It has very few jobs beyond accessing the underlying hardware: monitoring event channels, granting guest domains access to specific resources, and enforcing some kind of scheduler.
All guest machines are completely oblivious to the others; the isolation is genuine. Guests do not share memory with the privileged guest (or with each other). So, in this model, you could (roughly) think of each guest (even the privileged one) as a process, as far as the VMM is concerned. Typically, the first guest gets extra privileges so that it can manage the rest. This is the ideal technology to use when virtual machines go into production and are exposed to the world.
Additionally, some guests can be patched to become aware of the hypervisor (paravirtualization), significantly increasing their performance.
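One concrete example of that awareness: on x86, a guest can ask the CPU whether it is virtualized. Here is a hedged sketch (x86/x86-64 with GCC or Clang's <cpuid.h>; nothing here is specific to any one hypervisor). CPUID leaf 1 sets ECX bit 31 under a hypervisor, and leaf 0x40000000 returns a vendor signature such as "KVMKVMKVM" or "VMwareVMware". Paravirtualized guests use discoveries like this to switch to faster, hypervisor-aware code paths.

    // Detect a hypervisor via CPUID (x86 only, GCC/Clang).
    #include <cpuid.h>
    #include <cstring>
    #include <iostream>

    int main() {
        unsigned eax, ebx, ecx, edx;
        __cpuid(1, eax, ebx, ecx, edx);
        bool virtualized = ecx & (1u << 31);  // the "hypervisor present" bit
        std::cout << "hypervisor present: " << (virtualized ? "yes" : "no") << '\n';

        if (virtualized) {
            char vendor[13] = {};
            __cpuid(0x40000000, eax, ebx, ecx, edx);  // hypervisor vendor leaf
            std::memcpy(vendor + 0, &ebx, 4);         // signature lives in EBX:ECX:EDX
            std::memcpy(vendor + 4, &ecx, 4);
            std::memcpy(vendor + 8, &edx, 4);
            std::cout << "hypervisor vendor: " << vendor << '\n';
        }
        return 0;
    }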
On the other hand, we have things like VMware and QEMU, which rely on the host kernel to give them access to the bare metal and enough memory to exist. They assume that every guest needs to be presented with a complete machine, and the limits put on the process presenting that machine (more or less) become the limits of the virtual machine. I say more or less because device-mapper QoS is not commonly implemented. This is the ideal solution for trying code on some other OS or some other architecture. A lot of people will call QEMU, Simics, or even sometimes VMware (depending on the product) a 'simulator'.
For production rollouts I use Xen; for testing something I've just cross-compiled, I use QEMU, Simics, or VirtualBox.
If you are just testing or rolling new code on various operating systems and architectures, I highly recommend the second option (a hosted solution like QEMU or VirtualBox). If your need is introspection (i.e. watching guest memory change as bad programs run in a guest) ... I'd need more explanation before answering.
Benefits of a hypervisor:
A hypervisor separates virtual machines logically, assigning each its own slice of the underlying computing power, memory, and storage, preventing the virtual machines from interfering with each other.