How does KVM handle physical interrupts?

I am working on KVM optimization for VMs' I/O. I have read the KVM code; as I understand it, every physical interrupt causes a VM exit and enters KVM, and the host's IDT then handles the corresponding physical interrupt. My question is: how does KVM decide whether to inject a virtual interrupt into the guest, and under what circumstances does it inject one?
Thanks

The KVM documentation describes when a virtual interrupt can be injected. Here's the link: http://os1a.cs.columbia.edu/lxr/source/Documentation/kvm/api.txt
Look at line number 905.
The struct kvm_run structure, I think, gives the application control over how it makes the VM behave. Use cscope and search for the string request_interrupt_window in the source code; you will see how KVM decides when to enter the guest to inject an interrupt. Also go through the api.txt file; it is very helpful.
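For a rough feel of how a userspace VMM uses those fields, here is a minimal sketch against the /dev/kvm API. It assumes a vcpu_fd and an mmapped kvm_run obtained in the usual way, and it applies to the configuration without an in-kernel irqchip, where userspace injects via KVM_INTERRUPT; it is an illustration, not code from QEMU or the kernel:

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Sketch: ask KVM to exit as soon as the guest can accept an interrupt,
 * then inject one with KVM_INTERRUPT. vcpu_fd and run come from
 * KVM_CREATE_VCPU and mmap of the vcpu fd. */
static void inject_when_possible(int vcpu_fd, struct kvm_run *run, unsigned irq)
{
    run->request_interrupt_window = 1;        /* exit once injection is possible */
    ioctl(vcpu_fd, KVM_RUN, 0);

    if (run->ready_for_interrupt_injection && run->if_flag) {
        struct kvm_interrupt intr = { .irq = irq };
        ioctl(vcpu_fd, KVM_INTERRUPT, &intr); /* queue vector for next VM entry */
        run->request_interrupt_window = 0;
    }
}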
Cheers
EDITED
Here's one example of the host injecting interrupts into the guest.
Assume there was a page fault in the guest VM.
This causes a VM exit.
The hypervisor/KVM handles the VM exit.
It sees the reason for the VM exit through the VMCS control structure and finds that there was a page fault.
The host/KVM is responsible for memory virtualization, so it checks whether the page fault occurred because the page was not yet allocated to the guest, in which case it calls alloc_page in the host kernel and does a VM entry to resume guest execution.
Alternatively, the mapping was removed by the guest OS itself; in this case KVM uses the VMCS control structure as a communication medium to inject exception vector 14 (#PF), which causes the guest kernel to handle the page fault.
This is one example of the host inserting a virtual interrupt. Of course there are plenty of other ways/reasons to do so.
You can in fact configure the VMCS to make the guest do a VM exit after executing every instruction, using the Monitor Trap Flag.
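To make the "VMCS as a communication medium" part concrete: on Intel VT-x the hypervisor injects an event by writing the VM-entry interruption-information field before resuming the guest. Here is a sketch of the encoding for vector 14, following the field layout in the Intel SDM; the accessor name in the closing comment is borrowed from Linux's VMX code purely for illustration:

#include <stdint.h>

/* VM-entry interruption-information field layout (Intel SDM, Vol. 3):
 *   bits 7:0   vector
 *   bits 10:8  type (3 = hardware exception)
 *   bit  11    deliver error code
 *   bit  31    valid
 */
#define INTR_TYPE_HW_EXCEPTION  (3u << 8)
#define INTR_INFO_DELIVER_CODE  (1u << 11)
#define INTR_INFO_VALID         (1u << 31)

static uint32_t pf_injection_info(void)
{
    return 14u                        /* #PF vector                */
         | INTR_TYPE_HW_EXCEPTION
         | INTR_INFO_DELIVER_CODE     /* #PF pushes an error code  */
         | INTR_INFO_VALID;
}

/* The hypervisor writes this value (plus the error code) into the VMCS,
 * e.g. vmcs_write32(VM_ENTRY_INTR_INFO_FIELD, pf_injection_info()),
 * then resumes the guest; the CPU delivers the fault on VM entry. */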

I guess you refer to assigned-device interrupts (and not emulated interrupts or virtio interrupts, which are not forwarded directly from the physical device to the guest).
For each IRQ of the assigned device, request_threaded_irq is called and registers kvm_assigned_dev_thread to be called upon every interrupt. As you can see, kvm_set_irq is then called, and as described, the only coalescing that takes place is when the interrupt is masked. On x86, an interrupt can be masked by RFLAGS.IF, by a MOV-SS interrupt shadow, by a TPR value that does not allow the interrupt to be delivered, or by a higher-priority interrupt already in service. KVM is bound to follow the architecture definition in order not to surprise the guest.
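A simplified sketch of that host-side plumbing; the struct and registration details are illustrative stand-ins for the device-assignment code, and kvm_set_irq's exact signature has varied across kernel versions:

#include <linux/interrupt.h>
#include <linux/kvm_host.h>

struct my_assigned_dev {               /* hypothetical per-device state */
    struct kvm *kvm;
    int         irq_source_id;
    u32         guest_irq;
};

/* Threaded handler: runs for each physical interrupt of the assigned
 * device and forwards it to the guest as a virtual IRQ. */
static irqreturn_t assigned_dev_thread(int irq, void *dev_id)
{
    struct my_assigned_dev *dev = dev_id;

    /* Raise the guest IRQ line. KVM delivers it only when the guest's
     * architectural state (IF, MOV-SS shadow, TPR, ISR) permits;
     * de-assertion is coordinated with the guest's EOI/ACK. */
    kvm_set_irq(dev->kvm, dev->irq_source_id, dev->guest_irq, 1, false);
    return IRQ_HANDLED;
}

static int attach_assigned_irq(struct my_assigned_dev *dev, int host_irq)
{
    /* No hard handler, only the threaded one, hence IRQF_ONESHOT. */
    return request_threaded_irq(host_irq, NULL, assigned_dev_thread,
                                IRQF_ONESHOT, "kvm-assigned-dev", dev);
}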

Related

How to memory map address space on host from KVM/QEMU guest

I have an embedded application running on a Xilinx ZynqMp SoC. The application running on the PS (processor) memory maps the PL (FPGA) of the SoC over an AXI bus via /dev/mem at some base physical address.
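For context, the host-side access looks roughly like this (a minimal sketch; PL_BASE and PL_SIZE are hypothetical placeholders for the actual AXI window):

#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define PL_BASE 0xA0000000UL   /* hypothetical AXI base of the PL window */
#define PL_SIZE 0x00010000UL

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0)
        return 1;

    volatile uint32_t *pl = mmap(NULL, PL_SIZE, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, PL_BASE);
    if (pl == MAP_FAILED)
        return 1;

    pl[0] = 0xdeadbeef;        /* raw register access over the AXI bus */
    munmap((void *)pl, PL_SIZE);
    close(fd);
    return 0;
}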
I would like to run this application in a KVM/QEMU VM running on the PS. This means I will need to somehow expose that memory window available via /dev/mem on the host to the guest VM.
Through some research I thought that virtio-mmio would be the method to do this. I made some attempts using virtio-mmio but hit a wall, so I asked a question: Memory map address space on host from KVM/QEMU guest using virtio-mmio
The response seems to indicate that virtio-mmio is not the method I should be using for this.
If that is the case, what is the method used for exposing a memory space available on the host to a guest VM? I do not need any sort of device driver/layer on top of this. I just need raw memory access.

USB hub stalls when asked for descriptors

I have started with an Atmel START project.
My goal is to have a USB hub connected to this demo board:
SAM V71 Xplained Ultra Evaluation Kit
The problem is Atmel doesn't supply a hub driver, and they haven't responded to our questions about this. So I have been attempting to write one based on the MSC and other drivers they do provide.
Currently I'm having an issue when I connect the USB hub: it returns a STALL when I send a GET_DESCRIPTOR request with the type DEVICE. This seems odd to me because other USB devices, such as a flash drive or a USB-to-serial converter, do not reply STALL to the same request. In fact, the flash drive goes through the entire enumeration process and MSC installation, so that I can successfully read from and write to the drive.
I am detecting the STALL via a single breakpoint set in the STALL-handling section of the pipe handler.
I have been reading the Universal Serial Bus Specification Rev 2.0, but I can't find any differences between the way descriptors are read from hubs versus other devices. And I don't understand why a STALL would ever be sent in reply to a GET_DESCRIPTOR request.
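For reference, the failing request is completely standard. Per Chapter 9 of the USB 2.0 specification, the eight-byte setup packet for GET_DESCRIPTOR of the device descriptor looks like this (a sketch; the struct name is mine). Every device, hubs included, must answer it, so a STALL here usually points at a lower-level protocol or host-side problem rather than at the request itself:

#include <stdint.h>

/* USB 2.0, section 9.3: standard device request (setup packet). */
struct usb_setup_packet {
    uint8_t  bmRequestType;   /* 0x80: device-to-host, standard, device */
    uint8_t  bRequest;        /* 0x06: GET_DESCRIPTOR                   */
    uint16_t wValue;          /* descriptor type (high byte) | index    */
    uint16_t wIndex;          /* 0 for a device descriptor              */
    uint16_t wLength;         /* bytes requested                        */
};

static const struct usb_setup_packet get_device_descriptor = {
    .bmRequestType = 0x80,
    .bRequest      = 0x06,
    .wValue        = 0x0100,  /* DEVICE (1) << 8 | index 0 */
    .wIndex        = 0x0000,
    .wLength       = 18,      /* full device descriptor length */
};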
Thanks
Just in case this is useful for anyone else: the issue I was having was apparently caused by the compiler optimization settings. Specifically, I had changed this setting to "None (-O0)"; after changing it back to the default I have had no problems enumerating USB devices.
My colleague discovered this because of a seemingly unrelated problem which was causing hard faults and bus faults on the chip; these were also fixed by switching back to -O1. It seems -O0 needs to be taken with a grain of salt, or not used at all, on this chip.

ColdFire microprocessor MCF5272 USB module stops firing interrupts

This is a problem that I have been trying to solve for years, periodically spending 1-2 months on it.
I am using the Metrowerks IDE and the ColdFire C compiler MCFCCompiler ver 4.0 to build the embedded code that uses the USB module for communication with the host. The product with this hardware has been out for eight years and is pretty successful. However, over the years we have been getting complaints from the field that occasionally the communication with the host hangs and the operation is unrecoverable.
I tracked the bug down using a USB sniffer and the ColdFire debug hardware, and this is the condition and scenario I find the code in.
The communication break is on the firmware side, not in the driver on the host.
The hang-up happens only when sending USB firmware commands from the host (Windows 7) in rapid fire from multiple threads. Every firmware command replies back to the host, so there is maximum traffic through the USB port.
I am using the implementation provided by Motorola that is well documented in USB-STAND-ALONE-DRIVER_V03.pdf (Google will find it for you). Two functions are in my focus, and they should play nicely together: usb_in_service (called by the interrupt handler) and usb_tx_data (which initiates the transfer that will at some point generate an interrupt).
The usb_tx_data function is implemented such that it bails out if the USB FIFO still has data to send to the host. But waiting for the FIFO to clear takes the code into an infinite loop.
No more interrupts occur after this, although the USB module's register contents tell me the interrupts are enabled.
I checked that the USB module did not get a reset event and is not suspended either.
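One classic failure mode that matches these symptoms is a lost-interrupt race between the two functions. A hypothetical simplification of how it can happen, NOT the actual Motorola driver code:

/* Hypothetical simplification of the hang. fifo_busy is set when a
 * transfer starts and cleared by the interrupt path (usb_in_service)
 * when the FIFO drains. */
volatile int fifo_busy;

void usb_tx_data(const void *buf, int len)
{
    /* RACE: if the 'FIFO empty' status is cleared or acknowledged
     * without the handler running (e.g. an ack racing with a new
     * completion), fifo_busy never drops and this spins forever,
     * which matches the observed infinite loop. */
    while (fifo_busy)
        ;

    fifo_busy = 1;
    /* ... load the FIFO, arm the endpoint, completion arrives via IRQ ... */
}

A common mitigation is to poll the module's raw status registers inside the wait loop, or to time out and re-arm the endpoint, rather than relying solely on a flag cleared by the ISR.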
The main question is whether the error is in the USB module hardware or in the code. I don't find any errata pointing to this problem. If it's the code, where is the hole in its logic, the case it is not accounting for?
The hot pursuit is on because we are making a new line of products based on this same firmware, and I cannot release it until this is solved.

How to observe interrupts in Windows or Linux (Ubuntu 14.04)

I want to observe interrupt handling in my system. I'm currently using Windows 8.1, and I can also use Ubuntu 14.04.1 in a VMware virtual machine.
Any information about interrupt handling, counting interrupts, and watching their processing is useful. Is there an application that does this monitoring?
Please help; I'm stuck.
Thank you
I'd recommend trying to search for an answer before asking a question. This is shamelessly copy/pasted from http://www.linuxjournal.com/content/watch-live-interrupts.
To see the interrupts occurring on your system, run the command:
watch -n1 "cat /proc/interrupts"
The watch command executes another command periodically, in this case "cat /proc/interrupts". The -n1 option tells watch to execute the command every second.
Try adding -d for fancy output with highlighted changes: watch -n1 -d "cat /proc/interrupts"
Man page link for the watch command: http://linux.die.net/man/1/watch
Introduction to Linux Interrupts (describes what /proc/interrupts is all about): http://www.thegeekstuff.com/2014/01/linux-interrupts/
The first column is the IRQ number.
The second column says how many times the CPU core has been interrupted.
For an interrupt like rtc (real-time clock), the CPU has not been interrupted. RTCs are present in electronic devices to keep track of time.
NMI and LOC are used internally by the system and are not accessible or configurable by the user.
The IRQ number determines the priority of the interrupt to be handled by the CPU: a smaller IRQ number means a higher priority.
For example, if the CPU receives interrupts from the keyboard and the system clock simultaneously, it will serve the system clock first, since it has IRQ number 0.
IRQ 0 — system timer (cannot be changed);
IRQ 1 — keyboard controller (cannot be changed);
IRQ 3 — serial port controller for serial port 2 (shared with serial port 4, if present);
IRQ 4 — serial port controller for serial port 1 (shared with serial port 3, if present);
IRQ 5 — parallel ports 2 and 3, or sound card;
IRQ 6 — floppy disk controller;
IRQ 7 — parallel port 1. It is used for printers, or for any parallel port if a printer is not present.
For Windows
See the question: How can I find out what is causing interrupts on Windows?
There are a couple of answers there you may benefit from, such as Windows Process Explorer (which shows how much processor time is spent servicing interrupts), Windows Performance Analyzer (WPA), the xperf command, and the DPC/ISR action.

System Virtualization : Understanding IO virtualization and role of hypervisor [closed]

I would like to obtain a correct understanding of I/O virtualization. The context is pure/full virtualization and not para-virtualization.
My understanding is that a hypervisor virtualizes the hardware and offers virtual resources to each sandboxed application. Each sandbox thinks it is accessing the underlying hardware, but in reality it is not; instead, the hypervisor performs all the accesses. It is this aspect I need to understand better.
Let's assume a chip has a hardware timer meant to be used by the OS kernel as a tick timer, and that there are two virtual machines (e.g., Windows and Linux) running atop the hypervisor.
Neither virtual machine has had its source code modified, so they continue to issue instructions that directly program the timer resource.
What is the role of the hypervisor here, really? How are the two OSes actually prevented from accessing the real hardware?
After a bit of reading, I have reached a certain level of understanding described at:
https://stackoverflow.com/a/13045437/1163200
I reproduce it wholly here:
This is an attempt to answer my own question.
System Virtualization : Understanding IO virtualization and role of hypervisor
Virtualization
Virtualization as a concept enables multiple, diverse applications to co-exist on the same underlying hardware without being aware of each other.
As an example, full-blown operating systems such as Windows, Linux, Symbian, etc., along with their applications, can coexist on the same platform. All computing resources are virtualized.
What this means is that none of the aforesaid machines has access to physical resources. The only entity having access to physical resources is a program known as the Virtual Machine Monitor (aka hypervisor).
Now this is important. Please read and re-read carefully.
The hypervisor provides a virtualized environment to each of the machines above. Since these machines access NOT the physical hardware BUT virtualized hardware, they are known as Virtual Machines.
As an example, the Windows kernel may want to start a physical timer (a system resource). Assume the timer is memory-mapped I/O. The Windows kernel issues a series of load/store instructions on the timer addresses. In a non-virtualized environment, these loads/stores would have programmed the timer hardware.
In a virtualized environment, however, these load/store accesses to physical resources result in a trap/fault. The trap is handled by the hypervisor. The hypervisor knows that Windows tried to program the timer. The hypervisor maintains timer data structures for each of the virtual machines; in this case, it updates the timer data structure it has created for Windows. It then programs the real timer. Any interrupt generated by the timer is handled by the hypervisor first; the data structures of the affected virtual machine are updated, and that machine's interrupt service routine is called.
To cut a long story short, Windows did everything it would have done in a non-virtualized environment. Its actions resulted in updates not to the real system resource but to virtual resources (the data structures above).
Thus all virtual machines think they are accessing the underlying hardware; in reality, unknown to them, every access to physical hardware is mediated by the hypervisor.
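A sketch of that trap-and-emulate path in hypervisor-style C (all names and the MMIO address are hypothetical; real hypervisors dispatch on the faulting address inside their VM-exit handler):

#include <stdbool.h>
#include <stdint.h>

#define TIMER_LOAD_REG 0xF0001000u   /* hypothetical MMIO address of the timer */

struct vtimer {                      /* per-VM virtual timer state */
    uint32_t load_value;
    bool     running;
};

/* Called from the VM-exit handler when a guest store faulted on an
 * address the guest believes is the physical timer. */
static void handle_mmio_write(struct vtimer *vt, uint64_t gpa, uint32_t val)
{
    if (gpa == TIMER_LOAD_REG) {
        vt->load_value = val;        /* update the VIRTUAL resource only */
        vt->running    = true;
        /* ...then (re)program the one real timer for the earliest deadline
         * across all VMs, and later reflect its interrupt into this VM's
         * virtual interrupt controller. */
    }
    /* Finally, advance the guest's instruction pointer past the faulting
     * store and resume the guest. */
}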
Everything described above is full/classic virtualization. But many CPU architectures (classic x86 in particular, without hardware virtualization extensions) are unfit for classic virtualization: the trap/fault behavior does not apply to all sensitive instructions, so the hypervisor can be bypassed.
This is where para-virtualization comes in. The sensitive instructions in the source code of the virtual machines are replaced by calls to the hypervisor. The load/store snippet above might be replaced by a call such as
Hypervisor_Service(Timer_Start, Windows, 10ms);
EMULATION
Emulation is a topic related to virtualization. Imagine a scenario where a program originally compiled for ARM is made to run on an ATMEL CPU. The ATMEL CPU runs an emulator program which interprets each ARM instruction and emulates the necessary actions on the ATMEL platform. Thus the emulator provides a virtualized environment.
In this case, virtualization of system resources is NOT performed via the trap-and-emulate model.
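The core of such an emulator is a fetch-decode-execute loop. A toy sketch for a made-up two-instruction ISA (everything here is hypothetical, just to show the shape):

#include <stdint.h>
#include <stdio.h>

/* Toy guest ISA: 0x01 rr ii = "add immediate ii to register rr",
 *                0x00       = halt. Purely illustrative. */
static void emulate(const uint8_t *code)
{
    uint32_t regs[4] = {0};
    uint32_t pc = 0;

    for (;;) {
        uint8_t op = code[pc];                    /* fetch   */
        if (op == 0x00)                           /* decode  */
            break;
        if (op == 0x01) {                         /* execute */
            regs[code[pc + 1] & 3] += code[pc + 2];
            pc += 3;
        }
    }
    printf("r0 = %u\n", regs[0]);
}

int main(void)
{
    const uint8_t program[] = { 0x01, 0x00, 0x05,   /* r0 += 5 */
                                0x01, 0x00, 0x02,   /* r0 += 2 */
                                0x00 };             /* halt    */
    emulate(program);                               /* prints r0 = 7 */
    return 0;
}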