I am using the libvirt/QEMU/KVM stack to run some VMs on an Ubuntu 20.04 host, with the virsh CLI tool for VM management. I'd like to allow multiple VMs to access the same device (an FPGA) over PCIe. It seems that libvirt doesn't allow this: when I attach the PCIe device to multiple VMs and try to power more than one on, I get the following error.
error: Failed to start domain ubuntu-guest-2
error: Requested operation is not valid: PCI device 0000:05:00.0 is in use by driver QEMU, domain ubuntu-guest-1
This kinda makes sense to me, as there shouldn't be conflicting data sent over the PCIe bus. But nonetheless, does anyone know a workaround to make this happen?
There are a number of techniques to share a device across VMs. All of them require either device-specific software support in the VMM, hardware in the device to support sharing (SR-IOV), or both (Scalable IOV).
For a custom FPGA design, you would need to provide this.
SR-IOV is part of the PCIe specification, so there may be libraries available that you could incorporate into your FPGA design.
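If the FPGA design does expose SR-IOV virtual functions, the host-side steps with libvirt look roughly like this (a sketch only; the address 0000:05:00.0 is taken from your error message, while the VF count and the VF's function number are assumptions that depend on the design):

    # create two virtual functions on the physical function (requires SR-IOV support in the device)
    echo 2 | sudo tee /sys/bus/pci/devices/0000:05:00.0/sriov_numvfs
    ls -l /sys/bus/pci/devices/0000:05:00.0/ | grep virtfn   # shows the new VF addresses

    # describe one VF in a hostdev fragment and attach it to a guest
    cat > vf0.xml <<'EOF'
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
      </source>
    </hostdev>
    EOF
    virsh attach-device ubuntu-guest-1 vf0.xml --config

Each guest then gets its own VF while the physical function stays on the host; without SR-IOV (or vendor-specific mediated-device support) libvirt will keep refusing to hand the same physical function to two running domains.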
What I want to do on my laptop:
1. Develop and run on Windows with Visual Studio (CUDA, TensorRT, ...)
2. Develop and run on Linux (CUDA, TensorRT, ...)
3. Environment to edit videos, Photoshop, ...
4. Play games
5. Environment for general use (web browser, Outlook, Word, ...)
6. Environment to test applications
Possibly connecting some external GPU to offload the work (CUDA, ...) from my laptop's graphics card. Since I'm new to this, I haven't researched enough to understand how it can be done, but it is in my plans.
What I did and researched:
As a start, I created VM environments in my host Windows OS using VirtualBox for #1 and #2, but I cannot run those workloads inside a VM, since VirtualBox doesn't provide access to the graphics card. Even if it did, I would still need some way to switch to a different environment when I want to play games, for example.
I probably need a type 1 hypervisor if I want an environment to play games in? But in that case I'll need a second laptop to access it, right?
Is this even possible to do on one laptop? (I have a strong laptop with enough RAM and SSD.)
Graphics cards (GPU) are PCI devices, so they can be passed to VMs with PCI Passthrough. A device is not accessible to the host during passthrough. Hot plug can be used to reattach a graphics card to a different VM or the host without rebooting.
I don't know if a Windows host supports GPU passthrough (maybe you need Windows Server), but a Linux host with a Windows guest seems to work.
Setting this up is easier if you have a second GPU that remains attached to the host or another computer to control the host during GPU passthrough, for example via SSH.
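On a Linux host using libvirt/KVM, for example, the hot-plug part can be scripted with virsh (a sketch; the PCI address and guest names are placeholders, and gpu.xml is a small <hostdev> XML fragment naming the card's PCI address):

    virsh nodedev-detach pci_0000_01_00_0         # unbind the card from the host driver
    virsh attach-device guest-a gpu.xml --live    # give the card to one guest
    virsh detach-device guest-a gpu.xml --live    # take it back...
    virsh attach-device guest-b gpu.xml --live    # ...and hand it to another
    virsh nodedev-reattach pci_0000_01_00_0       # return it to the host

Whether this works without rebooting depends heavily on the GPU and its drivers; some cards need a reset quirk or a host reboot between reassignments.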
I am using VMware Workstation 14, and when I install an operating system (any of them) some programs and apps are able to identify that I am using a virtual machine.
I have seen that the VM uses virtualized devices that are literally named as virtual, for example the VMware Network Card and so on. Is there any way to install fake, real-looking hardware drivers on these virtual machines? Can this simple change make an app see this VM as a real machine?
How to make this virtual machine appear as a real machine to applications?
Is there really any way?
This was asked as a yes-or-no question so my answer is:
Yes... probably. But it's a lot of work.
There's a 2006 presentation by Tom Liston and Ed Skoudis that talks about this: https://handlers.sans.org/tliston/ThwartingVMDetection_Liston_Skoudis.pdf
It focuses on VMware, but some of it would also apply to other types of Virtual Machine Environments (VMEs).
In summary, they identify as many things as they can find that would allow VM detection, which would each have to be addressed, and they also mention some VMware-specific mitigations for them.
VME artifacts in processes, file system, and/or Windows registry. These would include the VMtools service and "over 50 different references in the file system to 'VMware' and vmx" and "over 300 references in the Registry to 'VMware'", all of which would have to be deleted or changed.
VME artifacts in memory. Specific regions of memory tend to be different in guests (VMs) than hosts, namely the Interrupt Descriptor Table (IDT), Global Descriptor Table (GDT), and Local Descriptor Table (LDT). The method by which the VM is built may allow these to appear the same in guests as they do in hosts.
VME-specific virtual hardware. This would include the drivers you mention like VmWare Network Card. The drivers would have to be removed or replaced with drivers that do not match the names or code signatures of any virtual drivers. Probably easiest to do on an open-source system, simply by modifying the driver source code and build.
VME-specific processor instructions and capabilities. Some VMEs add non-standard machine language instructions, or modify the behaviour of existing instructions. These can be changed or removed by editing the VME source code, at the cost of convenient host-guest interaction. (A few of the guest-side checks that expose this kind of artifact are sketched below.)
VME differences in behaviour. A VM might respond differently on the network, or fail at time synchronization. This could be mitigated with additional source code changes (on both host and guest) to make the network traffic look closer to normal, and providing sufficient CPU cores to the VM would help make sure it does not run more slowly than wall clock time.
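To give a feel for the processor and virtual-hardware categories, here are a few of the guest-side checks such detection relies on, as seen from a Linux guest (a sketch; the VMware-flavoured output is an example of exactly the kind of artifact that would have to be hidden):

    systemd-detect-virt                      # aggregates CPUID/DMI checks; prints e.g. "vmware" or "none"
    grep -c hypervisor /proc/cpuinfo         # the CPUID "hypervisor present" flag
    sudo dmidecode -s system-product-name    # SMBIOS string, e.g. "VMware Virtual Platform"
    lspci | grep -i vmware                   # virtual SVGA/NIC/SCSI devices with VMware PCI IDs

Each of these would have to be spoofed or filtered; some hypervisors let you override SMBIOS strings and device IDs in the VM configuration, which is part of the "lot of work".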
Again this is from 2006, so if anyone has a more up-to-date reference, I'd love to see their answer.
How is it possible to determine the commands to operate a USB device, if that device is normally driven from another operating system and traffic monitoring software cannot be installed on that OS? The only method I can think of is sending random commands to the device until it responds, but this seems implausible for more complex commands, and potentially dangerous. For example, consider the DualShock 4 controller. Sony has not made an official driver for this device, so what method can I use to create a Linux driver for it?
Get a hardware protocol analyzer. Then you won't need to install any software on the host or device under test. Here is one that I have used:
http://www.totalphase.com/products/beagle-usb12/
Is there any way to expose a local partition or disk image through your computer's USB port to another computer, so that it appears like an external drive on a Mac/Linux/BSD system?
I'm trying to play with something like kernel development, and I need one system for compiling and another for restarting/testing.
With USB: Not a chance. USB has a strict host/device split, and your development system's ports can only act as the host, so it has no way of emulating a mass storage device, or any other kind of USB device.
With FireWire: Theoretically. (This is what Apple's Target Disk Mode uses.) However, I can't find a readily available solution for that.
I'd advise you to try either virtualization or network boot. VirtualBox is free and open software, and has a variety of command line options, which means it can be scripted. Network boot takes a little effort to set up, but can work really well.
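For example, VBoxManage (VirtualBox's command-line front end) can create, boot, and tear down a disposable test VM from your build script; a sketch, with names and paths as placeholders:

    VBoxManage createvm --name kernel-test --register
    VBoxManage storagectl kernel-test --name sata --add sata
    VBoxManage storageattach kernel-test --storagectl sata \
        --port 0 --device 0 --type hdd --medium ./test-disk.vdi
    VBoxManage modifyvm kernel-test --memory 1024 --boot1 disk
    VBoxManage startvm kernel-test --type headless

    # after the test run
    VBoxManage controlvm kernel-test poweroff
    VBoxManage unregistervm kernel-test --delete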
Yet another option is to use a minimal Linux distribution as a bootstrap which sets up the environment you want, and then uses kexec to launch your kernel, possibly with GRUB as an intermediary step.
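The kexec step itself is only a couple of commands; roughly (a sketch, with placeholder paths and kernel command line):

    # stage the freshly built kernel and jump straight into it, skipping the firmware
    sudo kexec -l ./arch/x86/boot/bzImage \
        --initrd=./initramfs.img \
        --append="root=/dev/sda1 console=ttyS0"
    sudo kexec -e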
What kind of kernel are you fiddling with? If it's your own code, will the kernel operate in real or protected mode? Do you strictly need disk access, or do you just want to boot the actual kernel?
I am trying to find out which hypervisor will allow me to grant access to specialized PCI cards (such as a telephony card) to a virtual machine. So far I have tried out VMware ESXi and it doesn't seem to allow me to do this. I have heard that Microsoft Virtual Server does allow this, but I haven't been able to find any supporting documentation.
I'd look into Xen; it appears that you can load a backend Xen driver on the host OS, which will then allow you to communicate directly with the hardware from the guest.
See this link for more information. I'm not a Xen user, but from my virtualization experience I would guess that the paravirtualization aspects of a Xen host/guest setup are going to be your best bet for raw device access.
Yes, Xen can do this successfully. It is called PCI Passthrough: http://wiki.xen.org/wiki/Xen_PCI_Passthrough
I've done this successfully for both Windows and Linux guests with Xen 4.x, using my system's IOMMU. There are some restrictions on which devices can be assigned to which guests based on the PCI hierarchy in your particular system. You can view yours in Linux using 'lspci -t' (for "tree").
The IOMMU is located fairly high up in the tree, so on laptop-like systems, there may not be much partitioning available. Add-in PCIe cards can almost always be assigned, though.
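With the xl toolstack in Xen 4.x, the assignment itself is only a couple of commands (a sketch; the PCI address and domain name are placeholders you would take from 'lspci' and 'xl list' on your system):

    xl pci-assignable-add 0000:03:00.0      # bind the card to pciback so it can be assigned
    xl pci-attach my-guest 0000:03:00.0     # hot-plug it into a running guest

    # or assign it permanently in the guest's config file:
    #   pci = [ '0000:03:00.0' ]

HVM guests generally need the IOMMU (VT-d/AMD-Vi) mentioned above; PV guests can sometimes get by without it, at the cost of weaker isolation.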