How to pass through a USB hub to a VM (based on KVM)?

Linux kernel 4.9.7
libvirt 2.4.0
QEMU 2.7.0
Host:
i7-6700
ASUS B250M-PLUS
2x GTX 1060 3GB
500 W power supply
1x PCIe-to-USB 3.0 card (Renesas chip)
I have two VMs, each using a different GPU, and I'm currently using USB controller passthrough: VM1 has the motherboard controller and VM2 has the PCIe card.
But it is unstable! The mouse and keyboard sometimes stop working.
So I am thinking about passing through a USB hub instead, in the hope of more stability.
Is there any way to do that?

This question is interesting, but you should provide more information about the problem: how you start the machine, what precisely works and what doesn't.
By the way, please take some more time to format your question...
Maybe the background to your question is that it is difficult, and maybe impossible, to pass a hub through to a VM using the normal QEMU VID/PID combination. You can probably show the hub to the VM, but all devices connected to it are still owned by the host.
The only way I see is to pass your nice PCIe USB 3.0 controller through to the desired VM. You need to enable the IOMMU feature in GRUB for KVM to do so, but then it will work.
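A rough sketch of that controller passthrough on the host (the vendor:device ID 1912:0014 below is a typical Renesas uPD720201 ID and "VM2" is just an example name; check your own lspci output and domain name):

    # 1. Enable the IOMMU for KVM: add intel_iommu=on (amd_iommu=on for AMD) to
    #    GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
    sudo update-grub && sudo reboot

    # 2. Find the PCIe USB 3.0 controller's address and vendor:device ID
    lspci -nn | grep -i usb

    # 3. Bind it to vfio-pci at boot, e.g. in /etc/modprobe.d/vfio.conf:
    #      options vfio-pci ids=1912:0014
    #    then rebuild the initramfs and reboot
    sudo update-initramfs -u

    # 4. Add the controller to the guest as a PCI host device (virt-manager
    #    "Add Hardware -> PCI Host Device", or a <hostdev> entry in the domain XML)

With the whole controller assigned, everything plugged into it (including a hub) follows it into that guest, which is what gives you the stability.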

Related

ESXi 7.0.3: USB device does not show up after setting configuration

In the ESXi console I can see my device in the lsusb list.
In the VM settings I add a new USB device, select my USB device from the dropdown list, and save the settings.
But after that, I can see my device neither in Windows Server nor in the VM configuration!
And when I try to add a new device, it doesn't show up in the dropdown list anymore.
But I can still see my device in the lsusb list.
This is because the device in question is a USB storage device, specifically a flash drive. A quick Google has this hit: "S102 Pro Advanced USB 3.2 Flash Drive | Buy Now". AFAIK there is no workaround. Gory details follow in the next paragraph. Peruse at your own risk.
In vSphere ESXi 6.5 and earlier releases, the USB driver stack was a "vmklinux" stack with individual drivers for each device type, including USB host controllers. So if your installation was on a SCSI disk, or PXE-booted over the network, you didn't need the USB storage driver at all; you could unload it and then pass the device through to a VM. Unloading the driver wasn't officially supported, but it could be done at the esxcli command line, or, if you wanted to be really daring, you could delete it from your bootbank image. Note that "deleting" was only theoretical, as you would really be replacing the driver module with a file of size 0 in the last loaded tar image (the only one that is customer-configurable) and it could thus be undone, but I digress.
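For illustration only, on 6.5 the unsupported unload mentioned above looked roughly like this at the esxcli prompt (the module name usb-storage is an assumption; check the module list on your own host first):

    # list loaded modules and look for the legacy USB storage driver
    esxcli system module list | grep -i usb

    # stop the (assumed) usb-storage module from loading at boot -- unsupported
    esxcli system module set --module=usb-storage --enabled=false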
The new native USB driver stack that debuted in vSphere ESXi 6.7 is a monolithic driver, which means it is not possible to unload only the storage driver. You could unload or clobber the entire USB stack, but then you'd lose the keyboard and perhaps other things, plus the ability to pass any USB device through to a VM. The disadvantages of monolithic USB drivers are well known and are the reason Linus himself got involved in the redesign of the long-gone monolithic Linux USB driver over a decade ago, but I again digress. As to why VMware "improved" things by replacing a USB stack made up of half a dozen or more drivers with a monolithic stack and all its attendant disadvantages, you'd have to ask them. Feel free to request that they break the driver up into its constituent modules, as they know how to do this.

How to emulate a USB device as "alive" in a VM?

Step by step:
My PC has a printer connected via USB (I know its VID:XXXX and PID:YYYY).
I took an image of my PC and put it inside a virtual machine (VMware).
Of course, the image in my VM doesn't have the USB printer connected (because the real printer is connected to a real USB port on my real PC).
A program is running that checks the accessibility of the printer by checking the connection to it via USB (I don't know exactly how; maybe via WMI, maybe some other way).
Results:
a) on my real PC this program works
b) on the image it doesn't work
QUESTION: is it possible to emulate on the VM side that the USB port (VID:XXXX and PID:YYYY) is alive?
Thanks.
P.S. I don't want to install a USB-redirect-over-TCP or similar approach.
You should switch to the QEMU emulator and to Linux to do that. VMware probably doesn't support this kind of thing, especially in a Windows environment.
If you are already on Linux, QEMU has hardware emulation of xHCI, and you can assign host USB devices to the KVM guest (read here: https://www.linux-kvm.org/page/USB_Host_Device_Assigned_to_Guest).
On Windows, I don't think this will be possible.
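A minimal sketch of that host-device assignment on a Linux host (windows.img and the 0x1234:0x5678 VID/PID are placeholders for your guest image and your printer):

    # emulated xHCI controller plus the physical printer passed through by VID/PID;
    # QEMU needs access to the device node under /dev/bus/usb (root or a udev rule)
    qemu-system-x86_64 \
      -enable-kvm -m 4096 \
      -drive file=windows.img,format=qcow2 \
      -device qemu-xhci,id=xhci \
      -device usb-host,bus=xhci.0,vendorid=0x1234,productid=0x5678

The guest then sees the real printer with its real VID/PID, so the accessibility check should behave as it does on physical hardware.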

Is it possible to have different dev VM environments and access the graphics card?

What I want to do on my laptop:
Develop and run on Windows with Visual Studio (CUDA, TensorRT, ...)
Develop and run on Linux (CUDA, TensorRT, ...)
An environment to edit videos, use Photoshop, ...
Play games
An environment for general use (web browser, Outlook, Word, ...)
An environment to test applications
Possibly connecting an external GPU to offload work (CUDA, ...) from my laptop's graphics card. Since I'm new to this, I haven't researched enough to understand how it can be done, but it is in my plans.
What I did and researched:
As a start, I created VM environments in my Windows host OS using VirtualBox for #1 and #2, but I cannot run them inside a VM, since VirtualBox doesn't provide access to the graphics card. Even if it did, I would still need some way to switch to a different environment when I want to play games, for example.
I probably need a type 1 hypervisor if I want an environment to play games? But in that case I'll need a second laptop to access it, right?
Is this even possible on one laptop? (I have a strong laptop with enough RAM and SSD.)
Graphics cards (GPUs) are PCI devices, so they can be passed to VMs with PCI passthrough. A device is not accessible to the host while it is passed through. Hot-plug can be used to reattach a graphics card to a different VM, or to the host, without rebooting.
I don't know whether a Windows host supports GPU passthrough (maybe you need Windows Server), but a Linux host with a Windows guest seems to work.
Setting this up is easier if you have a second GPU that remains attached to the host, or another computer to control the host during GPU passthrough, for example via SSH.
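A sketch of that passthrough/hot-plug flow with libvirt on a Linux host (the PCI address 01:00.0 and the domain name win10 are placeholders):

    # find the GPU's PCI address
    lspci -nn | grep -i vga

    # detach it from the host and hot-plug it into a running guest
    virsh nodedev-detach pci_0000_01_00_0
    cat > gpu.xml <<'EOF'
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
    EOF
    virsh attach-device win10 gpu.xml --live

    # later, give it back to the host (or attach it to another VM)
    virsh detach-device win10 gpu.xml --live
    virsh nodedev-reattach pci_0000_01_00_0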

No Host Display After GPU Passthrough

I am attempting to set up GPU passthrough for use in a VM on my system. I am using Ubuntu 17.04 and have followed the instructions in the link below successfully, after manually blacklisting the nouveau drivers.
https://medium.com/#calerogers/gpu-virtualization-with-kvm-qemu-63ca98a6a172
When I turn the host system on, I see the Ubuntu boot splash, but then the screen goes black and nothing is displayed. I can SSH into the system and see that the Nvidia GPU is correctly assigned to vfio-pci (as expected). I have not tested that the passthrough works in a VM yet, since I would like to get the host graphics working on the integrated Intel graphics first.
I have tried xrandr, but it says it can't open the display. I expect there is something specific I have to do to enable the integrated graphics for the host. Any suggestions?
System:
ASRock Z77 Extreme4
Intel i7-4790K
Nvidia GTX 650 Ti
Thank you for your help!
I have figured out the problem and decided not to delete the post, for anyone who might find this helpful.
By changing the primary graphics setting in my BIOS from Auto (which automatically chooses the Nvidia card) to Integrated, everything now works exactly as expected.
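If you want to confirm from an SSH session which card the firmware actually picked as primary, the boot_vga flag in sysfs shows it (a generic check, not specific to this board):

    # "1" marks the GPU the firmware initialised as the primary display
    for d in /sys/bus/pci/devices/*/boot_vga; do
      echo "$d: $(cat "$d")"
    done

    # and check which kernel driver each VGA device is bound to
    lspci -nnk | grep -iA3 vga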

U-boot: load kernel via USB

I'm writing a small OS for an ARM board, and I'm a bit tired of the usual "remove SD card, copy kernel, insert SD card, switch on" routine, so I started looking at U-Boot. I am now able to load the kernel over the serial port using U-Boot and Kermit, so I don't have to remove/insert the SD card anymore.
However, this is painfully slow (~5 min for 2.5 MB), and I wonder if I could do the same using the board's USB port (I know U-Boot supports TFTP booting, but I haven't managed to set up the network correctly so far).
Best,
V.
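For reference, booting a kernel from a USB stick at the U-Boot prompt usually looks roughly like this (load addresses, device/partition numbers and file names are board-specific placeholders, and the board's USB controller must be supported by your U-Boot build):

    # enumerate USB, load kernel and device tree from the first FAT partition, boot
    usb start
    fatload usb 0:1 0x82000000 zImage
    fatload usb 0:1 0x88000000 board.dtb
    bootz 0x82000000 - 0x88000000

    # or, once the network is set up, TFTP is usually the fastest option:
    setenv serverip 192.168.0.10
    setenv ipaddr 192.168.0.20
    tftpboot 0x82000000 zImage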