How can an application detect that the hypervisor has resumed its guest OS? - virtual-machine

How can an application detect when the hypervisor has resumed its guest OS after suspending it due to, for example, migration? Is there an API for it, depending on the specific hypervisor, or is there something in the environment (Linux or Windows) that unambiguously indicates resumption?

There's not really anything that I can think of off the top of my head. The guest OS, for the most part, doesn't know it's virtualized, so it has no concept of which host it's on or whether it's being migrated and/or suspended at the hypervisor level.
Is there a particular use case you can share where this would be helpful?
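One environment-level heuristic worth mentioning (my own suggestion, not a hypervisor API): poll the wall clock and flag gaps much larger than the polling interval. After a suspend/resume or migration, the guest clock typically jumps forward once the hypervisor or NTP re-syncs it. A minimal sketch, assuming the gap threshold below is tuned to your workload:

```python
# Heuristic sketch: poll the wall clock and flag gaps that greatly exceed
# the polling interval. A guest that was suspended and later resumed will
# usually show such a jump. Caveat: heavy host load or a host suspend can
# produce the same signal, so this is NOT an unambiguous indicator.
import time

POLL_INTERVAL = 1.0      # seconds between checks
GAP_THRESHOLD = 5.0      # treat larger unexplained gaps as a resume event

def watch_for_resume(callback, clock=time.time, sleep=time.sleep, iterations=None):
    """Invoke callback(gap_seconds) whenever the clock jumps unexpectedly."""
    last = clock()
    n = 0
    while iterations is None or n < iterations:
        sleep(POLL_INTERVAL)
        now = clock()
        gap = now - last - POLL_INTERVAL   # time unaccounted for by the sleep
        if gap > GAP_THRESHOLD:
            callback(gap)                  # possible suspend/resume detected
        last = now
        n += 1
```

The `clock` and `sleep` parameters are injectable only to make the sketch testable; in a real application you would just use the defaults and run the loop in a background thread.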

Related

Why does Hyper-V require hardware virtualization?

I know Hyper-V is a type 1 (native) hypervisor, meaning it sits directly on top of the hardware and doesn't require an operating system (i.e. it talks to the hardware through the ISA interface).
But I don't understand why it requires hardware-assisted virtualization. Does that mean Hyper-V is not a fully native hypervisor, because it requires another component (implemented in hardware)? Does every native hypervisor require hardware virtualization?
Because without hardware virtualization it would have to run an emulation, which comes with a BRUTAL performance penalty. There is no way to do proper virtualization without either interpreting a significant amount of the machine code or having hardware support for it. EVERY native hypervisor requires hardware virtualization, which is, by the way, nothing new: it appeared in processors back in the 1960s, IIRC. Yes, it is that old. VM, the mainframe operating system, is actually short for "Virtual Machine". The processors back then already had hardware virtualization.
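To see why pure interpretation is so costly, here is a toy sketch (entirely my own illustration, not any real hypervisor's code): every guest "instruction" in an invented two-register ISA goes through a host-level fetch/decode/dispatch loop, costing many host instructions per guest instruction, whereas hardware virtualization lets unprivileged guest code run natively on the CPU.

```python
# Toy interpreter loop illustrating the per-instruction overhead of pure
# emulation: each guest "instruction" pays for a fetch, a decode, and a
# dispatch on the host. With hardware virtualization, unprivileged guest
# code instead runs directly on the CPU at native speed.
def interpret(program):
    """program: list of (opcode, operand) tuples for a made-up 2-register ISA."""
    regs = {"a": 0, "b": 0}
    pc = 0
    while pc < len(program):
        op, arg = program[pc]          # fetch
        if op == "load_a":             # decode + dispatch, one branch per opcode
            regs["a"] = arg
        elif op == "load_b":
            regs["b"] = arg
        elif op == "add":              # a += b
            regs["a"] += regs["b"]
        elif op == "halt":
            break
        else:
            raise ValueError(f"unknown opcode {op!r}")
        pc += 1
    return regs
```

Even this trivial loop executes dozens of host operations per guest instruction; real emulators mitigate this with dynamic binary translation, but hardware support (VT-x/AMD-V) removes the overhead almost entirely for the common case.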

How does a Hypervisor distinguish multiple VMs running on it and isolate them from the underlying h/w?

How does a Hypervisor distinguish multiple VMs running on it and isolate them from the underlying h/w?
e.g. if there is a system call from within a guest OS, how does the HV know it belongs to a specific guest OS?
There aren't many details available about the low-level operation of HVs.
A normal system call in a guest is processed by the guest OS without intervention of the hypervisor.
However, when the guest does cause a trap to the hypervisor (not a system call, but some other operation that requires hypervisor service), the hypervisor knows which guest it is because it knows which guest it scheduled on that CPU.
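A minimal model of the bookkeeping described above (my own sketch, not any real hypervisor's data structures): the hypervisor records which guest's vCPU it last scheduled onto each physical CPU, so when a trap arrives on CPU *n* it attributes it by a simple lookup.

```python
# Sketch of per-CPU scheduling state: the hypervisor knows which guest a
# trap belongs to because it knows which guest it scheduled on that CPU.
# (Real hypervisors keep this in per-CPU structures, e.g. alongside the
# VMCS on Intel VT-x; this is only an illustrative model.)
class Hypervisor:
    def __init__(self, num_cpus):
        # physical CPU index -> (guest_id, vcpu_id), or None if idle
        self.current = [None] * num_cpus

    def schedule(self, cpu, guest_id, vcpu_id):
        """Context-switch a guest vCPU onto a physical CPU."""
        self.current[cpu] = (guest_id, vcpu_id)

    def handle_trap(self, cpu, reason):
        """A VM exit arrived on `cpu`; attribute it to the scheduled guest."""
        occupant = self.current[cpu]
        if occupant is None:
            raise RuntimeError(f"trap on idle CPU {cpu}")
        guest_id, vcpu_id = occupant
        return f"guest {guest_id} (vcpu {vcpu_id}) trapped on cpu {cpu}: {reason}"
```

The point is that no tagging of the trap itself is needed: the scheduling state alone identifies the guest.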

Is there such thing as "main" OS in case of type 1 hypervisor?

When we work with type 2 hypervisors it is very easy to say which OS is the main one. For example, if you install some type 2 hypervisor on Win 7 and launch Win 95 inside this hypervisor, the main OS will be Win 7. The concept is obvious.
However, it's not so obvious with type 1 hypervisors. I've never worked with them before.
You have a few operating systems on top of the hypervisor. So... which one of these OSes is the main one? How is this question resolved? And probably (just a guess) there is no such thing as a "main OS" in this case?
I don't think that "main" operating system is a defined term.
A type 2 hypervisor is an extension to an operating system, which is known as the host operating system when guest operating systems are running on top of it. A host operating system runs directly on the hardware and needs to have specific code to interact with the hardware (e.g. the NIC, the disk, etc.) and provide abstractions to user-level programs. The hypervisor simply extends the functionality of the host operating system to allow guest operating systems to run on top (e.g. when the guest operating system wants to write to the hard drive, the hypervisor translates this request to a form that the host OS can understand so that the host OS can make the disk access).
A type 1 hypervisor runs directly on the hardware without an operating system. A type 1 hypervisor is basically just a stripped down operating system with the functionality necessary to allow guest operating systems to run on top. When the guest needs to write to disk or do some other privileged operation, the type 1 hypervisor receives the request and acts on it. Perhaps the type 1 hypervisor is what you would consider the "main" OS? Regardless, I would avoid using that term.
I would argue that the "main" OS would be the hypervisor software itself, as it runs directly on the hardware, supports the virtual operating systems, and is what boots at system startup.

How to push/show notifications from the guest OS to the host OS in VMware player

I am wondering if there is any way to get VMware Player to blink or show a message in the window title or perform some similar notifying action whenever there is some activity inside the guest operating system.
I run a Windows VM on a Linux box. If I am working on the host OS and an email or IM or any notification appears on a window inside the Windows VM, there is no way for me to be notified of that in the host OS. I am wondering if there is any practical solution to this or if this is an intrinsic limitation of virtualization. Any thoughts? Thanks.
This is an intrinsic limitation of type 2 virtualization; if you're able to get out of the virtual machine, it means something has gone wrong in terms of security.
BUT, you can still solve your problem. Both the host and the VM are connected to the Internet (and are sometimes on the same internal network), so you have a common resource through which they can communicate. The easiest solution in your case is to use an external notification service like pushbullet (if you don't mind it being hosted elsewhere) or pushjet (if you want to self-host).
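If you'd rather stay on your own network instead of using an external service, here is a minimal sketch of the same idea: a tiny listener runs on the host, and a script in the guest connects to the host's IP and sends one line per notification. The addresses and port below are placeholders; adjust them for your VMware network setup.

```python
# Host side: accept one line per TCP connection and hand it to a callback
# (e.g. subprocess.run(["notify-send", msg]) on a Linux host).
# Guest side: open a connection to the host's IP and write the message.
import socket

def serve_notifications(callback, host="0.0.0.0", port=9090, max_messages=None):
    """Run on the host. Blocks, invoking callback(msg) per received message."""
    with socket.create_server((host, port)) as srv:
        served = 0
        while max_messages is None or served < max_messages:
            conn, _addr = srv.accept()
            with conn:
                msg = conn.makefile("r", encoding="utf-8").readline().strip()
                if msg:
                    callback(msg)
            served += 1

def send_notification(msg, host="192.168.0.10", port=9090):
    """Run in the guest. `host` is a placeholder for your host's IP."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall((msg + "\n").encode("utf-8"))
```

This has no authentication or encryption, so keep it on a host-only or NAT network rather than exposing the port more widely.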

What are the benefits of a Hypervisor VM?

I'm looking into using virtual machines to host multiple OSes, and I'm looking at the free solutions, of which there are a lot. I'm confused about what a hypervisor is and why hypervisors are different from, or better than, a "standard" virtual machine. By "standard" I mean the benchmark virtual machine, VMware Server 2.0.
For a dual-core system with 4 GB of RAM, that would be capable of running a maximum of 3 VMs. Which is the best choice, hypervisor or non-hypervisor, and why? I've already read the Wikipedia article, but the technical details are over my head. I need a basic answer on what these different VM flavors can do for me.
My main question relates to how I would do testing on multiple environments. I am concerned about the isolation of the OSes, so that I can test applications on multiple OSes at the same time. Also, which flavor gives a closer approximation of how a real machine operates?
I'm considering the following:
(hypervisor)
Xen
Hyper-V
(non-hypervisor)
VirtualBox
VMWare Server 2.0
Virtual PC 2007
*The classifications of the VMs I've listed may be incorrect.
The main difference is that Hyper-V doesn't run on top of an OS; instead it runs on top of a thin layer called the hypervisor. A hypervisor is hardware-platform virtualization software that allows multiple operating systems to run concurrently on a host computer.
Many other virtualization solutions use other techniques, such as emulation. For more details, see Wikipedia.
Disclaimer, everything below is (broadly) my opinion.
It's helpful to consider a virtual machine monitor (a hypervisor) as a very small microkernel. It has very few jobs beyond accessing the underlying hardware, such as monitoring event channels and granting guest domains access to specific resources, all while enforcing some kind of scheduler.
All guest machines are completely oblivious to the others; the isolation is real. Guests do not share memory with the privileged guest (or with each other). So, in this instance, you could (roughly) think of each guest (even the privileged one) as a process, as far as the VMM is concerned. Typically, the first guest gets extra privileges so that it can manage the rest. This is the ideal technology to use when virtual machines are put into production and exposed to the world.
Additionally, some guests can be patched to become aware of the hypervisor (paravirtualization), significantly increasing their performance.
On the other hand, we have things like VMware and QEMU, which rely on the host kernel to give them access to the bare metal and enough memory to exist. They assume that every guest needs to be presented with a complete machine; the limits put on the process presenting that machine (more or less) become the limits of the virtual machine. I say "more or less" because device-mapper QoS is not commonly implemented. This is the ideal solution for trying out code on some other OS or some other architecture. A lot of people will call QEMU, Simics, or even sometimes VMware (depending on the product) a "simulator".
For production roll-outs I use Xen; for testing something I just cross-compiled, I use QEMU, Simics, or VirtualBox.
If you are just testing or rolling new code on various operating systems and architectures, I highly recommend the second kind (hosted solutions like QEMU or VirtualBox). If your need is introspection (i.e. watching guest memory change as bad programs run in a guest), I'd need more explanation before answering.
Benefits of a hypervisor:
A hypervisor separates virtual machines logically, assigning each its own slice of the underlying computing power, memory, and storage, thus preventing the virtual machines from interfering with one another.