How to access virtual machines started with QEMU? - virtual-machine

I would like to connect to virtual machines started with QEMU and then extract information about the registers (EIP, EBP, etc.).
In Xen there is the get_vcpu_context function for this, as well as other functions that let you connect to each virtual machine separately.
Is there any such support in QEMU, and where can I find documentation on QEMU hypercalls?

You have a couple of options:
You can connect to the QEMU gdbstub and use the GDB remote protocol to query CPU state. This is probably the most reliable method.
You can connect to the QEMU monitor and use the "info registers" command. Note that the output format might change between QEMU releases (it is intended for human consumption, not for other programs), so it will be less reliable long-term than the GDB remote protocol.
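For example, here is a minimal Python sketch of the second option, assuming the guest was started with a TCP monitor such as -monitor tcp:127.0.0.1:4444,server,nowait (the address and port are arbitrary choices, not anything QEMU mandates). For the gdbstub route you would instead start QEMU with -s (shorthand for -gdb tcp::1234) and speak the GDB remote protocol, for instance by attaching gdb with "target remote :1234".

# Minimal sketch: query CPU registers through the QEMU human monitor.
# Assumes QEMU was started with: -monitor tcp:127.0.0.1:4444,server,nowait
import socket

def _drain(sock):
    """Read until the monitor goes quiet (good enough for a sketch)."""
    data = b""
    try:
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            data += chunk
    except socket.timeout:
        pass
    return data

def read_registers(host="127.0.0.1", port=4444):
    with socket.create_connection((host, port)) as sock:
        sock.settimeout(2.0)
        _drain(sock)                          # discard the banner and "(qemu)" prompt
        sock.sendall(b"info registers\n")
        return _drain(sock).decode(errors="replace")

if __name__ == "__main__":
    print(read_registers())                   # contains EIP/EBP (or RIP/RBP on x86_64)

Bear in mind the caveat above: this output is meant for humans, so any parsing built on top of it may break across QEMU versions.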

Related

Multiple VMs Accessing a single device over PCIE

I am using the libvirt/QEMU/KVM stack to run some VMs on an Ubuntu 20.04 host. I am using the virsh CLI tool for VM management. I'd like to allow multiple VMs to access the same device (FPGA) over PCIe. It seems that libvirt doesn't allow this, and when I attach the PCIe device to multiple VMs and try to power more than one on, I get the following error.
error: Failed to start domain ubuntu-guest-2
error: Requested operation is not valid: PCI device 0000:05:00.0 is in use by driver QEMU, domain ubuntu-guest-1
This kinda makes sense to me, as there shouldn't be conflicting data sent over the PCIe bus. Nonetheless, does anyone know a workaround to make this happen?
There are a number of techniques to share a device across VMs. All of them require device-specific software support in the VMM, hardware support for sharing in the device (SR-IOV), or both (Scalable IOV).
For a custom FPGA design, you would need to provide this support yourself.
SR-IOV is part of the PCIe specification, so there may be libraries available that you could incorporate into your FPGA design.
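If the design does end up exposing SR-IOV virtual functions, the usual pattern is to give each guest its own VF rather than sharing the single physical function. A hedged sketch using the libvirt Python bindings is below; the VF addresses (functions 0x1 and 0x2 on 0000:05:00) and the reuse of your domain names are assumptions for illustration, and the VFs only exist if the FPGA actually implements SR-IOV.

# Sketch: attach a distinct SR-IOV virtual function to each guest instead of
# sharing the physical function 0000:05:00.0. The VF addresses are hypothetical.
import libvirt

HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='{function}'/>
  </source>
</hostdev>
"""

def attach_vf(domain_name, function):
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName(domain_name)
        dom.attachDeviceFlags(HOSTDEV_XML.format(function=function),
                              libvirt.VIR_DOMAIN_AFFECT_LIVE |
                              libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    finally:
        conn.close()

attach_vf("ubuntu-guest-1", "0x1")   # VF 1 to the first guest
attach_vf("ubuntu-guest-2", "0x2")   # VF 2 to the second guest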

Using saltstack ssh

Is there a difference between using salt-proxy over SSH and using salt-ssh directly? I'm interested because, according to the documentation, both are aimed at running remote commands without installing an agent on the end machine.
You can't simply run salt-ssh against a proxy minion: the proxied device may not support salt-ssh at all, so you would have to write your own custom SSH interface to the remote system (a skeleton of what such a module looks like is sketched at the end of this answer).
How to choose between salt-ssh and salt-proxy depends entirely on the type of minion system.
As stated in the SaltStack documentation (https://docs.saltstack.com/en/latest/topics/ssh/index.html and https://docs.saltstack.com/en/latest/topics/proxyminion/index.html), one of the criteria for salt-ssh is that the remote system must have Python installed - for example, controlling an Ubuntu machine from a CentOS master.
As stated in the salt-proxy doc,
Proxy minions are a developing Salt feature that enables controlling devices that, for whatever reason, cannot run a standard salt-minion. Examples include network gear that has an API but runs a proprietary OS, devices with limited CPU or memory, or devices that could run a minion, but for security reasons, will not.
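To give a concrete picture of "write your own custom SSH interface": a proxy module is just a Python file with a handful of well-known entry points. Below is a hedged skeleton; the myssh name, the _proxy/myssh.py location in your file roots, and the pillar keys are assumptions, while the init/ping/shutdown entry points follow the proxy minion documentation. A real module would open an actual SSH session (e.g. with paramiko) inside init().

# _proxy/myssh.py - minimal skeleton of a custom Salt proxy module.
import logging

log = logging.getLogger(__name__)

__proxyenabled__ = ["myssh"]          # proxy type this module implements
__virtualname__ = "myssh"

CONN = {"initialized": False}         # module-level connection state

def __virtual__():
    return __virtualname__

def init(opts):
    """Open the channel to the managed device (details come from pillar)."""
    CONN["host"] = opts["proxy"].get("host")
    CONN["initialized"] = True
    log.debug("myssh proxy initialized for %s", CONN["host"])

def initialized():
    return CONN["initialized"]

def ping():
    """Return True if the remote device is reachable."""
    return CONN["initialized"]

def shutdown(opts):
    """Tear down the connection when the proxy minion stops."""
    CONN["initialized"] = False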

How to reverse-engineer a USB device without monitoring traffic?

How is it possible to determine the commands to operate a USB device, if that device comes from another operating system and traffic-monitoring software cannot be installed on that OS? The only method I can think of is sending random commands to the device until it responds, but this seems implausible for more complex commands, and potentially dangerous. For example, consider the DualShock 4 controller. Sony has not made an official driver for this device, so what method can I use to create a Linux driver for it?
Get a hardware protocol analyzer. Then you won't need to install any software on the host or device under test. Here is one that I have used:
http://www.totalphase.com/products/beagle-usb12/
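As a software-only complement (not a replacement for the analyzer), you can at least dump what the device advertises about itself before touching hardware. A hedged pyusb sketch is below; the vendor/product IDs are the commonly reported ones for a first-generation DualShock 4, so verify them with lsusb first.

# Dump configuration/interface/endpoint layout with pyusb.
# VID/PID below are assumptions - confirm with lsusb.
import usb.core

dev = usb.core.find(idVendor=0x054C, idProduct=0x05C4)
if dev is None:
    raise SystemExit("device not found (check VID/PID and permissions)")

print(f"device {hex(dev.idVendor)}:{hex(dev.idProduct)}, class {dev.bDeviceClass}")
for cfg in dev:
    for intf in cfg:
        # Interface class 3 is HID: the report descriptor then documents most
        # of the input/output format, which is a big head start.
        print(f"  interface {intf.bInterfaceNumber}: class {intf.bInterfaceClass}, "
              f"{intf.bNumEndpoints} endpoints")
        for ep in intf:
            print(f"    endpoint {hex(ep.bEndpointAddress)}, max packet {ep.wMaxPacketSize}")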

How to monitor syscalls of a VM

Is there a simple way to monitor the syscalls of processes running in a VM from the outside, i.e. from the hypervisor (dom0), in a Xen setup?
In general, is that an easy task, or are modifications to the hypervisor code necessary for such VM syscall monitoring?
Is it also possible with an HVM VM, or only with a PV VM?
Not sure if you have already looked at these links:
http://hal.archives-ouvertes.fr/docs/00/43/10/31/PDF/Technical_Report_Syscall_Interception.pdf
http://research.microsoft.com/pubs/153179/sim-ccs09.pdf
http://pages.cs.wisc.edu/~remzi/OSTEP/vmm-intro.pdf
With very limited knowledge of the subject, I'll make an attempt: one can emulate the instructions used to make a syscall, e.g. sysenter/sysexit. Any attempt by the guest to execute these instructions will then trap so the hypervisor can intervene. Once the hypervisor is in the picture, you can copy the syscall number and its arguments.
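To make that last step concrete, here is a conceptual Python sketch of what the handler would do once such a trap has fired. The read_guest_register callback is a hypothetical stand-in for whatever introspection interface you have (e.g. the Xen vcpu context); only the register layout, the 64-bit Linux syscall ABI, is a hard fact here.

SYSCALL_ARG_REGS = ["rdi", "rsi", "rdx", "r10", "r8", "r9"]   # 64-bit Linux syscall ABI

def on_syscall_trap(read_guest_register):
    """Called (hypothetically) when the guest's SYSCALL/SYSENTER traps."""
    number = read_guest_register("rax")                 # syscall number lives in RAX
    args = [read_guest_register(r) for r in SYSCALL_ARG_REGS]
    print(f"syscall {number} args={[hex(a) for a in args]}")

# Demonstrate the flow with a fake register file:
fake_vcpu = {"rax": 1, "rdi": 1, "rsi": 0x7ffdeadb000, "rdx": 12,
             "r10": 0, "r8": 0, "r9": 0}
on_syscall_trap(fake_vcpu.__getitem__)                  # syscall 1 == write on x86-64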

USB target disk mode equivalent on running system

Is there any way to expose a local partition or disk image through your computer's USB port to another computer, so that it appears as an external drive on a Mac/Linux/BSD system?
I'm trying to play with kernel development, and I need one system for compiling and another for restarting/testing.
With USB: Not a chance. USB has a strict host/device role split, and a typical PC's ports are host-only, so your development system has no way of emulating a mass storage device, or any other kind of USB device.
With FireWire: Theoretically. (This is what Apple's Target Disk Mode uses.) However, I can't find a readily available solution for that.
I'd advise you to try either virtualization or network boot. VirtualBox is free and open-source software, and has a variety of command-line options, which means it can be scripted (see the sketch below). Network boot takes a little effort to set up, but can work really well.
Yet another option is to use a minimal Linux distribution as a bootstrap which sets up the environment you want and then uses kexec to launch your kernel, possibly with GRUB as an intermediary step.
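Since VirtualBox can be scripted, the compile-then-test loop can be as simple as the Python sketch below: rebuild, power-cycle a test VM, repeat. The VM name and the assumption that the VM already boots whatever your build produces (for instance because its disk is an image your Makefile writes into) are illustrative assumptions, not part of the original answer.

# Rebuild the kernel, then restart a VirtualBox test VM.
import subprocess

VM = "kernel-test"   # hypothetical VM name

def run(*cmd, check=True):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=check)

def build_and_boot():
    run("make")                                                   # build step
    run("VBoxManage", "controlvm", VM, "poweroff", check=False)   # ok if not running
    run("VBoxManage", "startvm", VM, "--type", "headless")

if __name__ == "__main__":
    build_and_boot()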
What kind of kernel are you fiddling with? If it's your own code, will the kernel operate in real or protected mode? Do you strictly need disk access, or do you just want to boot the actual kernel?