OpenStack compute node performance

I am starting to learn OpenStack. My understanding (after reading the docs) is that the compute nodes run a host OS (Ubuntu or another Linux), the hypervisor (like KVM) runs on top of that, and the VMs run on top of the hypervisor, i.e. HW -> OS -> Hypervisor -> VMs. This is similar to having a VM running on VirtualBox, which runs on a host operating system, i.e. HW -> Host OS -> VBox -> VMs. Please correct me if my understanding is incorrect.
Assuming my understanding is correct, how will the performance of VMs on this architecture compare to running the VMs directly on the hypervisor, i.e. HW -> Hypervisor (KVM) -> VMs?
Compare this with the VMware OpenStack architecture, where Nova speaks to VMware vCenter and vCenter manages the ESXi nodes (vCenter and ESXi are on different nodes). This way my VMs run directly on top of a hypervisor connected to the hardware (HW -> ESXi -> VMs), and all the overlay networking is handled by NSX. This looks much more performant than the other architecture. Am I missing something here?
Thanks in advance.

Since KVM is a module in the Linux kernel and guest instructions run directly on the CPU, the Linux kernel itself is the hypervisor (HW -> Hypervisor -> VMs); there is no extra layer between the hardware and the guests. On the VMware side, ESXi likewise runs its own small, proprietary, tuned kernel directly on the hardware as the hypervisor.
To find out which one is more efficient, you should benchmark your workload. But if VMware's minimal kernel consumes fewer resources (fewer processes, less memory and CPU), it may come out ahead.
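You can see the KVM side of this from the host itself: each guest shows up as an ordinary QEMU process that the Linux scheduler handles like any other. A minimal sketch, assuming the psutil library is installed and the guests were started by QEMU/KVM:

```python
# Sketch: each KVM guest on a compute node is just a QEMU process that the
# host's Linux kernel schedules like any other process -- there is no extra
# OS layer between the hypervisor and the hardware.
import psutil

for proc in psutil.process_iter(["name", "num_threads"]):
    name = proc.info["name"] or ""
    if "qemu" in name:  # e.g. qemu-system-x86_64 or qemu-kvm
        print(f"pid={proc.pid} name={name} threads={proc.info['num_threads']}")
```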

Related

How to get CPU usage for each VM on an ESXi host

I want to get the (cumulative) CPU usage for each VM hosted on a VMware ESXi host.
I tried the PowerCLI cmdlet 'Get-VMHost', but it only gives the overall CPU usage of the ESXi host.
For CPU usage, esxtop is a very powerful ESX command, and you have to run it at the CLI. I haven't used PowerCLI, so I'm unsure whether it's available there, but it is definitely available at the CLI, which VMware tries to discourage you from using (see https://kb.vmware.com/s/article/2004746). Documentation for esxtop in the latest release of vSphere is at https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.monitoring.doc/GUID-D89E8267-C74A-496F-B58E-19672CAB5A53.html.
That document is a bit terse; for getting CPU usage per VM, this old esxtop documentation may guide you better: https://www.vmware.com/pdf/esx2_using_esxtop.pdf. In particular, note the different nomenclature of ESXi (and ESX): the primary unit of address space and execution is the "world" rather than the "process". Thus you want the CPU usage of all "worlds" associated with each VM. Some VMs may have only a single "world", some may have several, and it is configurable. esxtop has been around forever, and most likely it still provides the same functionality today that it did over a decade ago with ESX 2.
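If you prefer a scriptable route rather than the interactive esxtop display (and as a different route than PowerCLI), the vSphere API also exposes per-VM CPU statistics. A minimal sketch using the pyVmomi Python SDK; the host name and credentials are placeholders, not real values:

```python
# Sketch: per-VM CPU usage via the vSphere API (pyVmomi).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # skip cert checks for a lab host only
si = SmartConnect(host="esxi.example.com", user="root", pwd="secret",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        # overallCpuUsage is the VM's current CPU consumption in MHz
        print(f"{vm.name}: {vm.summary.quickStats.overallCpuUsage} MHz")
    view.Destroy()
finally:
    Disconnect(si)
```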

How to set up an HPC cluster on one server

I'm working on an application that needs to be tested on an HPC cluster.
I'm thinking about using xCAT as a resource manager.
I don't have many hardware resources; I have one HP desktop and a MacBook laptop.
The question: is it possible to set up a virtual cluster (using VirtualBox or KVM) on one physical machine?
Thanks,
The short answer here is yes, depending on how much memory and disk you have available on your one machine. I've done this numerous times on a MacBook Pro with 8 GB of RAM.
The long answer is that there is absolutely nothing magical about an HPC cluster. All you need to test basic parallel applications in a simulated cluster environment are two or more VMs which meet these criteria:
1. Same OS, as identical as possible.
2. Passwordless authentication (SSH key-based auth).
3. Same software stack in the same location on all nodes (see #4, or use rsync).
4. At least one shared filesystem, e.g. an NFS-mounted $HOME.
5. A shared network with name resolution configured (correct /etc/hosts on all nodes).
None of this requires job schedulers, provisioning tools, or any complex networking. You can find many NFS setup howtos to help get one node sharing $HOME to the others; this might be the most complicated part. VirtualBox does a good job of setting up local networking.
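Once the VMs are up, a quick way to confirm the checklist is satisfied is to probe each node over SSH. A minimal Python sketch; the node names and probe file are hypothetical, so adjust them to match your /etc/hosts:

```python
# Sketch: verify passwordless SSH (#2) and a shared $HOME (#4) across nodes.
# "node1"/"node2" are hypothetical hostnames for illustration.
import subprocess

NODES = ["node1", "node2"]

def ssh(node, cmd):
    # BatchMode=yes makes ssh fail instead of prompting for a password,
    # so a non-zero return code means key-based auth is not set up.
    return subprocess.run(["ssh", "-o", "BatchMode=yes", node, cmd],
                          capture_output=True, text=True)

# Passwordless auth: each node should answer without a prompt.
for node in NODES:
    result = ssh(node, "hostname")
    print(f"{node}: {'ssh ok' if result.returncode == 0 else 'ssh key auth FAILED'}")

# Shared filesystem: a file written on one node should appear on the others.
ssh(NODES[0], "touch ~/.shared_fs_probe")
for node in NODES[1:]:
    result = ssh(node, "test -f ~/.shared_fs_probe && echo shared")
    print(f"{node}: {'shared $HOME ok' if 'shared' in result.stdout else 'NOT shared'}")
```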
On top of this you can layer a job scheduler like SLURM (highly recommended), provisioning tools like Warewulf or xCAT, parallel filesystems across the VMs (BeeGFS is easy to set up and a great introduction), etc. I have had a full-featured stateless cluster simulated on my MacBook Pro a number of times using tools from this list and VirtualBox VMs. It's a great way to learn about setting up an HPC cluster.
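A classic smoke test for the finished virtual cluster is an MPI hello-world across the VMs. A sketch using mpi4py, assuming an MPI implementation such as Open MPI is installed on all nodes and a hostfile lists them:

```python
# hello_mpi.py -- run with: mpirun -np 4 --hostfile hosts python3 hello_mpi.py
# Each rank reports which node it landed on, confirming the network and
# name resolution work across the virtual cluster.
from mpi4py import MPI
import socket

comm = MPI.COMM_WORLD
print(f"rank {comm.Get_rank()} of {comm.Get_size()} on {socket.gethostname()}")
```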

Difference between a virtual machine process and a host OS process?

Suppose my PC runs Ubuntu as the host OS. I install VirtualBox (a hypervisor) and deploy CentOS and Red Hat as guest OSes inside it.
Suppose CentOS and Red Hat are each running 2 processes and Ubuntu is running 3. My questions:
How many processes does Ubuntu see in total?
Is there any difference between guest OS and host OS processes?
If each guest OS runs as a single process, won't it get less CPU time compared to the other processes running on the host OS?
Please clear my doubts here.
Thank you.
Well, let me clear your doubts.
First of all, the host does not see the guest's individual processes. What you assign to a virtual machine are cores or threads: you define how many the VM may use, and that depends on the configuration of your system.
Second, the guest OS is what you created inside the virtual machine, and the host OS is what your laptop or PC actually runs. The host OS uses the real hardware, whereas the guest OS uses virtual hardware: the number of cores and the type and size of the hard drive are defined by the user when adding the virtual machine.
Third, as mentioned above, guest and host performance depend on the configuration you chose: if you assign more cores/threads when setting up your virtual machine, the guest OS will run faster.
Ideally, virtual machines are used to test and build OS functionality without affecting the host OS. You can think of it as your parents' house: you can live and grow there, but in the end you cannot get away from the fact that their contribution is larger, and you cannot go beyond what they provide without leaving and making your own home.
Linux is a multi-threaded operating system, and the host OS treats a running VirtualBox VM as an ordinary process with several threads (roughly one per vCPU). You define the number of cores and the virtual hard disk size for the guest OS in VirtualBox.
Since the VM runs in its own threads and other host OS operations run in theirs, the effect on processing speed is small. But I've observed big variances in processing speed on systems with little memory: every thread needs its own allocation of memory to run smoothly, so systems with more than 2 GB of RAM manage VirtualBox very well.
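You can see this from the host's side by listing the VirtualBox processes and their thread counts. A small sketch using the psutil library; the process names vary by VirtualBox version and platform (e.g. VBoxHeadless, VirtualBoxVM, VBoxSVC):

```python
# Sketch: how the host sees a running VirtualBox guest -- as a single
# process with one thread per vCPU plus helper threads. The guest's
# *internal* processes are invisible to the host.
import psutil

for proc in psutil.process_iter(["name", "num_threads"]):
    name = proc.info["name"] or ""
    if "VBox" in name or "VirtualBox" in name:
        print(f"{name}: pid={proc.pid}, threads={proc.info['num_threads']}")
```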

How do Hypervisor, libvirt and host come together to virtualize the system?

I am not able to understand this hierarchy. The host OS runs on bare metal. Where does libvirt reside? Where does the hypervisor reside? Is libvirt a necessary library for creating a VM, or is it just a better abstraction over the core kernel APIs that provide VM creation? What is the libvirt equivalent on Windows that VirtualBox uses? Does the hypervisor play any role in scheduling host OS applications, or is it just another process handled by the host OS? These are a few questions that confuse me. Can anyone explain this plainly? Thanks!
Host OS runs on bare metal - Yes.
Where does the hypervisor reside? - It is a software layer on the host OS.
Where does libvirt reside? - It is another API layer on the host that interfaces indirectly with the hypervisor.
Is libvirt a necessary library for creating a VM, or just a better abstraction over the core kernel APIs that provide VM creation? - It is a standard abstraction; VMs can be created without it, but it gives you one uniform API across hypervisors.
What is the libvirt equivalent on Windows used by VirtualBox? - libvirt itself can be used to interact with VirtualBox. libvirt is known to work as a client (not a server) on Windows XP (32-bit) and Windows 7 (64-bit).
Does the hypervisor play any role in scheduling host OS applications, or is it just another process handled by the host OS? - Just another process.
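To make the layering concrete, here is a minimal sketch using the libvirt Python bindings; it assumes libvirt-python is installed and a local QEMU/KVM daemon is running. The point is that the same client code works against different hypervisors just by changing the connection URI:

```python
# Sketch: libvirt as the abstraction layer over the hypervisor.
# "qemu:///system" targets local KVM/QEMU; e.g. "vbox:///session"
# would target VirtualBox with the same client code.
import libvirt

conn = libvirt.open("qemu:///system")
print("Hypervisor:", conn.getType())    # e.g. "QEMU"
print("Host CPUs:", conn.getInfo()[2])  # getInfo() -> [model, MiB, cpus, ...]
for dom in conn.listAllDomains():
    print("Domain:", dom.name(), "active:", bool(dom.isActive()))
conn.close()
```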

Is virtualization still relevant with Docker?

I've read this article:
How is Docker different from a normal virtual machine?
I fully intend to convert all my virtual machine images into Docker instances.
I can't see an angle where VMs still make sense...
So what's the point of VMs now? OK... maybe desktop virtualization, to get PulseAudio working?
Once Docker solves this, what else?
UPDATE
Okay... so I can't run Docker on non-Linux hosts...
For one, you can't run an operating system within your container that is different from the OS on the host.
On Windows and Mac OS X, boot2docker is used to run Docker: it is VirtualBox running a reduced Linux OS, which in turn runs Docker.
The benefits of containers are clear and well known, but the disadvantages have been glossed over somewhat.
Specifically, you don't just need the same OS type (i.e. Linux); you get the same version of the kernel (including any mods you want) in every container. Since containers are an OS construct, there are resource islands per OS kernel version (and separate implementations for Windows, BSD, or any other non-Linux systems, where they exist).
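You can demonstrate the shared kernel directly: a container reports the host's kernel version, not its own. A minimal sketch with the Docker SDK for Python, assuming a Linux host with a local Docker daemon and the alpine image available:

```python
# Sketch: a container shares the host kernel. "uname -r" inside any image
# prints the *host's* kernel release, unlike a VM, which boots its own kernel.
import platform
import docker  # the Docker SDK for Python (pip install docker)

client = docker.from_env()
in_container = client.containers.run("alpine", "uname -r", remove=True)
print("host kernel:     ", platform.release())
print("container kernel:", in_container.decode().strip())
```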
VMs are secured with CPU-level isolation; containers are secured with OS-level isolation (with arguably a bigger attack surface).
There are many claims out there that containers become as slow and as big as VMs once you load them up with everything you need for production and add lots of overlays, but these are all anecdotal; no large-scale survey or trustworthy data is available yet.