How to memory map address space on host from KVM/QEMU guest - virtual-machine

I have an embedded application running on a Xilinx ZynqMp SoC. The application running on the PS (processor) memory maps the PL (FPGA) of the SoC over an AXI bus via /dev/mem at some base physical address.
I would like to run this application in a KVM/QEMU VM running on the PS. This means I will need to somehow expose that memory window available via /dev/mem on the host to the guest VM.
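For reference, here is a minimal sketch of the kind of mapping the application already performs on the host; the base address and window size below are placeholders, not the real values:

    // Sketch: map the PL's AXI window through /dev/mem on the host.
    // PL_BASE and PL_SIZE are placeholder values, not the real addresses.
    #include <cstdint>
    #include <cstdio>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main()
    {
        const off_t  PL_BASE = 0xA0000000;   // assumed base of the PL window
        const size_t PL_SIZE = 0x10000;      // assumed size of the window

        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        void *pl = mmap(nullptr, PL_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, PL_BASE);
        if (pl == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        // PL registers are now accessible as ordinary memory.
        volatile uint32_t *regs = static_cast<volatile uint32_t *>(pl);
        printf("first PL register: 0x%08x\n", regs[0]);

        munmap(pl, PL_SIZE);
        close(fd);
        return 0;
    }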
Through some research I thought that virtio-mmio would be the method to do this. I made some attempts using virtio-mmio but hit a wall, so I asked a question: Memory map address space on host from KVM/QEMU guest using virtio-mmio
The response seems to indicate that virtio-mmio is not the method I should be using for this.
If that is the case, what is the method used for exposing a memory space available on the host to a guest VM? I do not need any sort of device driver/layer on top of this. I just need raw memory access.
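For what it's worth, at the plain-KVM level "raw memory access" for a guest is established by registering a host-mapped region as a guest memory slot. The sketch below only illustrates that mechanism; the slot number, guest address and window size are made up, and whether a /dev/mem MMIO mapping is accepted this way depends on the kernel, which is essentially what the question above is asking:

    // Sketch only: hand an mmap'd /dev/mem window to an existing KVM VM as an
    // extra memory slot. vm_fd is assumed to come from KVM_CREATE_VM; the slot
    // number, addresses and size are placeholders.
    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>

    int expose_pl_to_guest(int vm_fd)
    {
        const __u64 PL_BASE   = 0xA0000000;   // assumed host physical base of the PL
        const __u64 PL_SIZE   = 0x10000;      // assumed window size
        const __u64 GUEST_GPA = 0x40000000;   // guest physical address to map it at

        int memfd = open("/dev/mem", O_RDWR | O_SYNC);
        if (memfd < 0)
            return -1;

        void *host_va = mmap(nullptr, PL_SIZE, PROT_READ | PROT_WRITE,
                             MAP_SHARED, memfd, PL_BASE);
        if (host_va == MAP_FAILED)
            return -1;

        struct kvm_userspace_memory_region region = {};
        region.slot            = 1;                    // an otherwise unused slot
        region.guest_phys_addr = GUEST_GPA;
        region.memory_size     = PL_SIZE;
        region.userspace_addr  = (__u64)host_va;

        return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
    }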

Related

Is it possible to have different dev VM environments and access graphics card?

What I want to do on my laptop:
1. Develop and run on Windows with Visual Studio (CUDA, TensorRT, ...)
2. Develop and run on Linux (CUDA, TensorRT, ...)
3. An environment to edit videos, Photoshop, ...
4. Play games
5. An environment for general use (web browser, Outlook, Word, ...)
6. An environment to test applications
Possibly connecting some external GPU to offload the work (CUDA, ...) from my laptop's graphics card. Since I'm new to this, I haven't researched enough to understand how it can be done, but this is in my plans.
What I did and researched:
As a start, I created VM environments in my host Windows OS using VirtualBox for #1 and #2, but I cannot run them inside a VM, since it doesn't provide access to the graphics card. Even if it did, I would still need some way to switch to a different environment when I want to play games, for example.
I probably need a type 1 hypervisor if I want an environment to play games in? But in that case I'll need a second laptop to access it, right?
Is this even possible to do on one laptop? (I have a strong laptop with plenty of RAM and SSD space.)
Graphics cards (GPU) are PCI devices, so they can be passed to VMs with PCI Passthrough. A device is not accessible to the host during passthrough. Hot plug can be used to reattach a graphics card to a different VM or the host without rebooting.
I don't know if a Windows host supports GPU passthrough (maybe you need Windows Server), but a Linux host with a Windows guest seems to work.
Setting this up is easier if you have a second GPU that remains attached to the host or another computer to control the host during GPU passthrough, for example via SSH.

Is a google compute virtual machine highly available?

So I have a cloud virtual machine on Google Compute Engine; does this mean by nature that it is highly available? If the VM is running on a single piece of hardware on GCE and that piece of hardware breaks, the VM could go down. Is the VM running on some kind of RAID, but for servers, so that if one of the machines goes down another machine will pick up and continue running the VM? Thanks.
The machine itself is not highly available. However, Google takes several steps to increase reliability:
Storage is replicated and independent of the physical machine the VM is running on (obviously not for local SSD). This means that even if the physical machine catches on fire, only the "runtime" state is lost but the attached disks are fine.
VMs can live-migrate. This is a setting you can control. If enabled, the VM will be migrated to a different physical machine on maintenance events. Live-migration can lead to brief performance degradation while memory etc. is synced to the other host but the machine is not shut down / restarted.
Even when the physical host suddenly dies, you can set your instance to restart automatically on a new machine. If you plan to use this mode, make sure your instance is able to cleanly boot to serving state without manual intervention.
If you need high availability, the best approach is to spread your instances among zones of the same region and use a network or HTTP(S) load balancer. These will automatically stop sending traffic to a machine in case it becomes unhealthy. Also see this short YouTube video on Google's network architecture for more info.
For high availability of your application data, there are highly available options like Datastore for database-like usage and Cloud Storage for file-oriented data. Keep in mind that Cloud SQL also runs on a single instance/physical machine, which means that you have to set up slaves/replicas to get high availability. However, you can also do that with your favorite DB system on plain Compute Engine instances if you are willing to maintain them yourself.

Can NS3 EMU be applied on different machines in WAN?

We are currently considering whether ns-3 satisfies our requirements; we're looking for a convenient tool to run on distributed devices in the real network (over every kind of possible connection) and capture network performance data (like a sniffer). I realize that the primary purpose of ns-3 is to simulate a network topology on a single machine, but its emu module sounds promising and the flow monitor could save us effort on data capture.
In the following link
http://www.nsnam.org/wiki/HOWTO_make_ns-3_interact_with_the_real_world
it is stated that ns-3 emu can be used to inject simulated nodes that interact with a real live network, and three kinds of testbed are given. However, the first solution, the VMware virtual machine testbed, still works within a LAN: in promiscuous mode the virtual machines' network cards listen to all LAN broadcasts so that the emu-udp-echo server and client can find each other.
My question is: is it possible for the emu-udp-echo server/client to run on different physical systems at different locations in a wide-area network?
For example, in different cities or on different network providers, given the IP address of the hardware where the other ns-3 node is running? If it is possible, how can I specify the "real" IP address and port for the node, instead of assigning a virtual IPv4 address?
Thanks a lot.
Yes, while the documentation describes how to perform this using virtual machines, this can be done in general on real hardware. Since that HOWTO was written, there has been additional work on providing helpers for running this type of experiment, including running on PlanetLab testbed machines. This documentation describes the generalized file descriptor NetDevice, added to the ns-3.17 release: http://www.nsnam.org/docs/release/3.19/models/html/fd-net-device.html. A similar example to the one described in that HOWTO is found in the file fd-emu-udp-echo.cc.
When using emulation mode on real networks, configuration of the MAC addresses and IP addresses must be done carefully. First, the device must be able to be put into promiscuous mode. Second, the MAC address needs to be different from the hardware address of the NIC. If you intend to ride on top of an active NIC with an existing IP address (in use for other Internet traffic), you'll need another IP address for ns-3 that is within the right link subnet. If instead you want to dedicate the NIC to ns-3 use, then do not assign the IP address to the host NIC and just assign it in the ns-3 configuration.
The PlanetLab example also shows another configuration that uses Tap devices to send data to/from PlanetLab testbed nodes. Some of this configuration is specific to how PlanetLab works, but the use of Tap device bridged to an ns-3 device may also facilitate emulation.
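To make the device-side configuration concrete, here is a rough ns-3 sketch of the approach described above, using the EmuFdNetDeviceHelper from the FdNetDevice module (ns-3.17+). The interface name and the 192.0.2.10/24 address are placeholders and must be adapted to the real link's subnet:

    // Rough sketch: attach one ns-3 node to a real NIC in emulation mode.
    // "eth0" and the 192.0.2.10/24 address are placeholders; the address must
    // lie in the real link's subnet and must not be used by the host itself.
    #include "ns3/core-module.h"
    #include "ns3/network-module.h"
    #include "ns3/internet-module.h"
    #include "ns3/fd-net-device-module.h"

    using namespace ns3;

    int main (int argc, char *argv[])
    {
      // Emulation needs a real-time scheduler and real checksums.
      GlobalValue::Bind ("SimulatorImplementationType",
                         StringValue ("ns3::RealtimeSimulatorImpl"));
      GlobalValue::Bind ("ChecksumEnabled", BooleanValue (true));

      Ptr<Node> node = CreateObject<Node> ();

      // Bind the emulated device to the physical NIC (put into promiscuous mode).
      EmuFdNetDeviceHelper emu;
      emu.SetDeviceName ("eth0");
      NetDeviceContainer devices = emu.Install (node);
      Ptr<NetDevice> device = devices.Get (0);
      device->SetAttribute ("Address", Mac48AddressValue (Mac48Address::Allocate ()));

      // Give the ns-3 interface its own address on the real subnet.
      InternetStackHelper internet;
      internet.Install (node);
      Ptr<Ipv4> ipv4 = node->GetObject<Ipv4> ();
      uint32_t ifIndex = ipv4->AddInterface (device);
      ipv4->AddAddress (ifIndex, Ipv4InterfaceAddress (Ipv4Address ("192.0.2.10"),
                                                       Ipv4Mask ("255.255.255.0")));
      ipv4->SetUp (ifIndex);

      Simulator::Stop (Seconds (10.0));
      Simulator::Run ();
      Simulator::Destroy ();
      return 0;
    }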

Raspberry Pi - how to load RAM programmatically through SD interface?

I would like to have some kind of mechanism to load the RAM on the Raspberry Pi programmatically from a controller computer (I assume through the SD interface) and then let the Raspberry Pi's CPU execute it. Is there some kind of device that does this? And what is it programmed in?
It would also be great if there's a way to interrupt the whole thing from the controlling computer if needed.
SD is a fairly poor choice for an interface to try to push data into from an external source; generally the computer hosting the SD device wants to be the master of operations.
But the Raspberry Pi has both UART serial ports and (on the model B) an Ethernet interface. Downloading code through either is quite normal.
You haven't mentioned whether you want to run an application atop a typical Linux installation, or whether you want to do bare-metal programming. In the first case you would typically transfer the program to the file system (either a ramdisk or the SD card) and then execute it.
In the second case, you would need a stub of code already on the device (which is to say, on the boot partition of an SD card) which knows how to configure peripherals sufficiently to enable reception of code via serial or Ethernet (the latter complicated by needing a USB host stack), and then jump into it; a rough sketch of such a stub follows.
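To give a flavour of the bare-metal case, here is a rough sketch of such a stub for the original BCM2835-based Pi. The register addresses, load address and the length-then-image protocol are all assumptions for illustration, and the UART is assumed to have been initialised already by earlier boot code:

    #include <stdint.h>

    // Assumed PL011 UART register addresses on the BCM2835 (original Pi);
    // later models and the mini UART use different addresses.
    #define UART0_DR (*(volatile uint32_t *)0x20201000)  // data register
    #define UART0_FR (*(volatile uint32_t *)0x20201018)  // flag register

    static uint8_t uart_getc(void)
    {
        while (UART0_FR & (1u << 4)) { }        // RXFE: wait until RX FIFO has data
        return (uint8_t)(UART0_DR & 0xffu);
    }

    // Assumed entry point, reached from the existing boot code on the SD card.
    extern "C" void stub_main(void)
    {
        uint8_t *load = (uint8_t *)0x00008000;  // assumed load address, clear of the stub

        // Assumed protocol: 4-byte little-endian length, then the raw image.
        uint32_t len = 0;
        for (int i = 0; i < 4; ++i)
            len |= (uint32_t)uart_getc() << (8 * i);
        for (uint32_t i = 0; i < len; ++i)
            load[i] = uart_getc();

        ((void (*)(void))load)();               // jump into the freshly loaded image
    }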

How KVM handle physical interrupt?

I am working on KVM optimization for VMs' I/O. I have read the KVM code; usually every physical interrupt causes a VM exit and enters KVM, and the host's IDT then handles the corresponding physical interrupt. My question is: how does KVM decide whether to inject a virtual interrupt into the guest, and under what circumstances does it inject one?
Thanks
The KVM documentation describes when a virtual interrupt can be injected. Here's the link: http://os1a.cs.columbia.edu/lxr/source/Documentation/kvm/api.txt
Look at line number 905.
The struct kvm_run structure, I think, gives the application control over how it makes the VM behave. Use cscope and search for the string request_interrupt_window in the source code; you will see how KVM decides when to enter the guest to inject an interrupt. Also go through the api.txt file, it is very helpful.
Cheers
EDITED
Here's one example of the host injecting interrupts into the guest.
Assume that there was a page fault in the guest VM:
this causes a VM exit.
The hypervisor/KVM handles the VM exit.
It sees the reason for the VM exit through the VMCS control structure and finds that there was a page fault.
The host/KVM is responsible for memory virtualization, so it checks whether the page fault occurred
because the page was not allocated to the guest, in which case it calls alloc_page in the host kernel and does a VM entry to resume guest execution.
Or the mapping was removed by the guest OS, in which case KVM uses the VMCS control structure as a communication medium to inject virtual interrupt number 14 (page fault), which causes the guest kernel to handle the page fault.
This is one example of the host inserting a virtual interrupt. Of course there are plenty of other ways/reasons to do so.
You can in fact configure the VMCS to make the guest do a VM exit after executing every single instruction; this can be done using the monitor trap flag.
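To connect this to the request_interrupt_window mechanism mentioned above, here is a small user-space sketch of interrupt injection through the KVM API. It assumes a userspace irqchip (KVM_INTERRUPT is not available once KVM_CREATE_IRQCHIP is used), that vcpu_fd and run come from the usual KVM_CREATE_VCPU and vCPU mmap setup, and the function name and the vector are just placeholders echoing the page-fault example:

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    // vcpu_fd: from KVM_CREATE_VCPU; run: the vCPU's mmap'd kvm_run area.
    void inject_when_ready(int vcpu_fd, struct kvm_run *run, unsigned int vector)
    {
        // Ask KVM to exit to user space as soon as the guest can take an interrupt.
        run->request_interrupt_window = 1;

        // After a VM exit: inject only if the guest can actually accept it
        // (interrupts enabled, no injection already pending); otherwise keep waiting.
        if (run->ready_for_interrupt_injection && run->if_flag) {
            struct kvm_interrupt irq = {};
            irq.irq = vector;
            ioctl(vcpu_fd, KVM_INTERRUPT, &irq);
            run->request_interrupt_window = 0;
        }
    }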
I guess you refer to assigned device interrupts (and not emulated interrupts or virtio interrupts, which are not directly forwarded from the physical device to the guest).
For each IRQ of the assigned device, request_threaded_irq is called and registers kvm_assigned_dev_thread to be called upon every interrupt. As you can see, kvm_set_irq is then called, and as described, the only coalescing that takes place is if the interrupt is masked. On x86, interrupts can be masked by rflags.if, a mov-SS shadow, a TPR value that does not allow the interrupt to be delivered, or an interrupt in service with higher priority. KVM is bound to follow the architecture definition in order not to surprise the guest.