Can we single step QEMU using libvirt - virtual-machine

I am developing peripheral hardware and want to use QEMU to test it.
The plan is to run the device driver in QEMU and use libvirt (or something else?) to interface the VM with a Python-based simulation model of the peripheral.
I am aware that QEMU can be single-stepped via GDB, but I am looking for a Python approach to do the following:
Wait for a write to a specific memory location.
Suspend QEMU
Run some background task in the host.
Run QEMU for N Cycles.
Write to a memory location
Continue
Is this possible with libvirt or any other toolkit?

I needed to do something similar, and came across two approaches:
Run Python inside GDB, using a Python script for the commands
Use a Python API to GDB like pygdbmi
The latter ended up being more flexible, so I'll explain those steps here.
Configure qemu with debugging information:
./configure --enable-debug
Build qemu and invoke it halted, with debug hooks:
make
sudo make install
qemu-system-x86_64 -S -s
Now, use a Python script to attach to and interact with QEMU via pygdbmi (see its documentation for installation instructions):
from pygdbmi.gdbcontroller import GdbController
from pprint import pprint
# Start gdb process
gdbmi = GdbController()
print(gdbmi.get_subprocess_cmd()) # print actual command run as subprocess
response = gdbmi.write('target remote localhost:1234') # attach to QEMU GDB socket
pprint(response)
response = gdbmi.write('-break-insert main') # machine interface (MI) commands start with a '-'
response = gdbmi.write('break main') # normal gdb commands work too, but the return value is slightly different
response = gdbmi.write('-exec-run')
response = gdbmi.write('run')
response = gdbmi.write('-exec-next', timeout_sec=0.1) # the wait time can be modified from the default of 1 second
response = gdbmi.write('next')
response = gdbmi.write('next', raise_error_on_timeout=False)
response = gdbmi.write('next', raise_error_on_timeout=True, timeout_sec=0.01)
response = gdbmi.write('-exec-continue')
response = gdbmi.send_signal_to_gdb('SIGKILL') # name of signal is okay
response = gdbmi.send_signal_to_gdb(2) # value of signal is okay too
response = gdbmi.interrupt_gdb() # sends SIGINT to gdb
response = gdbmi.write('si 20') # step 20 instructions
response = gdbmi.write('continue')
response = gdbmi.exit()
If you have trouble with kernel symbols, you might also need to issue a command 'file myKernel' to load the symbol table from that file, assuming it was compiled with debugging information.
For reference, the '-s' option makes QEMU listen for a GDB connection on localhost:1234. So the first command you issue must direct gdb to look there:
gdbmi.write('target remote localhost:1234')
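To get closer to the workflow in the original question (wait for a write to a specific memory location, suspend QEMU, run a host-side task, run N steps, write to memory, continue), the same pygdbmi session can be driven with a watchpoint. This is only a sketch under assumptions: the two addresses are hypothetical placeholders, 'myKernel' is whatever image you built with debug info, and GDB steps instructions, not clock cycles:
from pygdbmi.gdbcontroller import GdbController
WATCH_ADDR = 0x40001000 # hypothetical guest address the driver writes to
POKE_ADDR = 0x40002000 # hypothetical guest address to write back into
gdbmi = GdbController()
gdbmi.write('target remote localhost:1234') # attach to QEMU's -s socket
gdbmi.write('file myKernel') # load symbols if needed
gdbmi.write('watch *(unsigned int *) 0x%x' % WATCH_ADDR) # break when the guest writes here
gdbmi.write('continue', timeout_sec=60) # run until the watchpoint fires
# QEMU is now suspended: run the host-side Python model here
gdbmi.write('stepi 100') # run N instructions (GDB cannot count cycles)
gdbmi.write('set *(unsigned int *) 0x%x = 0x1234' % POKE_ADDR) # write into guest memory
gdbmi.write('continue')
gdbmi.exit()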

Related

Python on M1 MBP trying to connect to USB devices - NoBackendError: No backend available

I am trying to connect to my USB devices from Python.
The final result should be a connection to my blood pressure monitor, but I am already failing to connect to ANY device.
My simple code - which I found here - is below. I got the product and vendor IDs from Apple Menu > About this Mac > System Information.
import usb.core
import usb.util
# find our device
dev = usb.core.find(idVendor=0x0781, idProduct=0x55a4)
# was it found?
if dev is None:
    raise ValueError('Device not found')
# set the active configuration. With no arguments, the first
# configuration will be the active one
dev.set_configuration()
# get an endpoint instance
cfg = dev.get_active_configuration()
intf = cfg[(0, 0)]
ep = usb.util.find_descriptor(
    intf,
    # match the first OUT endpoint
    custom_match = \
    lambda e: \
        usb.util.endpoint_direction(e.bEndpointAddress) == \
        usb.util.ENDPOINT_OUT)
assert ep is not None
# write the data
ep.write('test')
But I always get NoBackendError: No backend available from dev = usb.core.find(idVendor=0x0781, idProduct=0x55a4).
For the connection I installed pyusb in my Python env and libusb with Homebrew on my Mac.
I have no clue how to get a connection, or even how to iterate over all connected devices and list their product and vendor IDs.
This error is to be expected if pyusb cannot find the dynamic libraries of libusb.
Installing libusb with Homebrew is not sufficient. Homebrew puts the relevant files in /opt/homebrew/Cellar/libusb/1.0.24/lib and creates symbolic links in /opt/homebrew/lib. But pyusb is not aware of these paths.
You have two main options:
Add /opt/homebrew/lib to the environment variable DYLD_LIBRARY_PATH. For a permanent setup, add it to ~/.zshenv:
export DYLD_LIBRARY_PATH="/opt/homebrew/lib:$DYLD_LIBRARY_PATH"
Create a symbolic link in your home directory. This takes advantage of the fact that ~/lib is a default fallback path for libraries:
ln -s /opt/homebrew/lib ~/lib
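Alternatively, you can hand pyusb the Homebrew libusb explicitly by constructing the backend yourself. A minimal sketch, assuming the default Homebrew library path on Apple Silicon (adjust the dylib path to your installation):
import usb.core
import usb.backend.libusb1
# point pyusb at the Homebrew libusb explicitly (path is an assumption)
backend = usb.backend.libusb1.get_backend(
    find_library=lambda name: "/opt/homebrew/lib/libusb-1.0.dylib")
# list all connected devices with their vendor and product IDs
for dev in usb.core.find(find_all=True, backend=backend):
    print(hex(dev.idVendor), hex(dev.idProduct))
Iterating with find_all=True also gives you the simple device listing asked about in the question.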

SSH connection command to embedded OS QNX Neutrino via paramiko [duplicate]

I am trying to run the sesu command on a Unix server from Python with the help of Paramiko's exec_command. However, when I run exec_command('sesu test'), I get
sh: sesu: not found
When I run a simple ls command it gives me the desired output. Only the sesu command is not working.
This is what my code looks like:
import paramiko
host = host
username = username
password = password
port = port
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, port, username, password)
stdin, stdout, stderr = ssh.exec_command('sesu test')
stdin.write('Password')
stdin.flush()
outlines=stdout.readlines()
resp=''.join(outlines)
print(resp)
The SSHClient.exec_command by default does not run the shell in "login" mode and does not allocate a pseudo terminal for the session. As a consequence, a different set of startup scripts is (or might be) sourced than in your regular interactive SSH session (in particular, .bash_profile is not sourced for non-interactive sessions). And/or different branches in the scripts are taken, based on the absence or presence of the TERM environment variable.
Possible solutions (in preference order):
Fix the command not to rely on a specific environment. Use a full path to sesu in the command. E.g.:
/bin/sesu test
If you do not know the full path, on common *nix systems you can use the which sesu command in your interactive SSH session.
Fix your startup scripts to set the PATH the same for both interactive and non-interactive sessions.
Try running the script explicitly via login shell (use --login switch with common *nix shells):
bash --login -c "sesu test"
If the command itself relies on a specific environment setup and you cannot fix the startup scripts, you can change the environment in the command itself. Syntax for that depends on the remote system and/or the shell. In common *nix systems, this works:
PATH="$PATH:/path/to/sesu" && sesu test
Another (not recommended) approach is to force the pseudo terminal allocation for the "exec" channel using the get_pty parameter:
stdin,stdout,stderr = ssh.exec_command('sesu test', get_pty=True)
Using the pseudo terminal to automate a command execution can bring you nasty side effects. See for example Is there a simple way to get rid of junk values that come when you SSH using Python's Paramiko library and fetch output from CLI of a remote machine?
You may have a similar problem with LD_LIBRARY_PATH and locating shared objects.
See also:
Environment variable differences when using Paramiko
Certain Unix commands fail with "... not found", when executed through Java using JSch
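Putting the first and third suggestions together, here is a minimal Paramiko sketch. Note the assumptions: /bin/sesu stands in for whatever path which sesu reports, bash may not exist on a QNX image (use sh or ksh instead), and the connection details are placeholders:
import paramiko
host, port = 'qnx-target.example', 22        # placeholders
username, password = 'user', 'password'      # placeholders
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, port=port, username=username, password=password)
# full path to the command, run through a login shell so PATH is set up
stdin, stdout, stderr = ssh.exec_command('bash --login -c "/bin/sesu test"')
stdin.write('Password\n')   # sesu reads the password from its stdin
stdin.flush()
print(''.join(stdout.readlines()))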

Access pagemap in gem5 FS mode

I am trying to run an application which uses pagemap in gem5 FS mode.
But I am not able to use pagemap in gem5. It throws the error below:
"assert(pagemap>=0) failed"
The line of code is:
int pagemap = open("/proc/self/pagemap", O_RDONLY);
assert(pagemap >= 0);
Also, if I try to run my application in the gem5 terminal with sudo, it throws the error:
sudo command not found
How can I use sudo in gem5?
These problems are not gem5 specific, but rather image/Linux specific, and would likely happen on any simulator or on real hardware. So I recommend that you take gem5 out of the equation completely and ask a Linux- or image-specific question next time, saying exactly what image you are using and which kernel configs, and providing a minimal C example that reproduces the problem: this will greatly improve the probability that you will get help.
I have just done open("/proc/self/pagemap", O_RDONLY) successfully with a small test program on an fs.py setup on aarch64.
If /proc/<pid>/pagemap is not present for any process, do the following:
ensure that procfs is mounted on /proc. This is normally done with an fstab entry of type:
proc /proc proc defaults 0 0
but your init script needs to use fstab as well.
Alternatively, you can mount proc manually with:
mount -t proc proc proc/
you will likely want to ensure that /sys and /dev are mounted as well.
grep the kernel to see if there is some config controlling the file creation.
These kinds of things are often easy to find without knowing anything about the kernel.
If I do:
git grep '"pagemap'
to find the pagemap string, which is likely the creation point, on v4.18 this leads me to fs/proc/base.c, which contains:
#ifdef CONFIG_PROC_PAGE_MONITOR
REG("pagemap", S_IRUSR, proc_pagemap_operations),
#endif
so make sure CONFIG_PROC_PAGE_MONITOR is set.
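As a quick sanity check that the file is actually there and readable inside the image, the small Python sketch below (an assumption of mine, standing in for the minimal C example mentioned above) reads the pagemap entry for one of its own pages; run it as root inside the guest, since unprivileged reads return a zero PFN on recent kernels:
import os
import struct
PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")
def pagemap_entry(vaddr, pid="self"):
    # each pagemap entry is 8 bytes, indexed by virtual page number
    with open("/proc/%s/pagemap" % pid, "rb") as f:
        f.seek((vaddr // PAGE_SIZE) * 8)
        entry, = struct.unpack("<Q", f.read(8))
    present = bool(entry & (1 << 63)) # bit 63: page present
    pfn = entry & ((1 << 55) - 1)     # bits 0-54: page frame number
    return present, pfn
buf = bytearray(PAGE_SIZE) # allocate a page...
buf[0] = 1                 # ...and touch it so it is mapped
print(pagemap_entry(id(buf))) # id() is only a rough mapped address in CPython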
sudo: most embedded / simulator images don't have it; you just log in as root directly and can do anything by default without it. This can be seen from the conventional # in the prompt instead of $.

How do I run a script on VxWorks Tornado Shell?

I am trying to run a script on the VxWorks shell, which will load a module.
I use a Perl script to telnet into the system, log in and get access to the shell.
I am able to run the basic commands like 'i', 'time', 'ls', 'pwd', 'h' and so on.
But I would like to run a script, say 'test.o'.
If I do <C:\Path\subfolder\test.o, the script file WILL run from the Tornado shell.
But I have connected using Telnet from Perl.
So I connect this way:
use Net::Telnet;
my $username = "username";
my $password = "password";
my $t = new Net::Telnet(Timeout=>10, Errmode=>'die');
$t->open('10.42.177.123');
$t->login($username,$password); # Logins as expected.
my @lines = $t->cmd('i'); # To test
print @lines; # This works
@lines = $t->cmd('<C:\\Path\\Subfolder\\test.o'); # This is not working for me. HELP!
print @lines; # Prints the error below
I get an error saying:
Unknown directory: /C:\Path\Subfolder
can't open input 'C:\Path\Subfolder\test.o
errno = 0x1f5
How do I run my script file if it is residing at a particular folder of the host PC?
I am able to run the script manually from the Tornado shell window, where the prompt looks like ->, so it is a working script. And as I have said, I am able to run and print the output of the basic VxWorks shell commands ('built-in functions').
Any help? [ My OS is Win7 ]
Thanks!
This issue is now resolved. There were two problems. One was that Tornado, another VxWorks client, was logged into the system at the same time that my Perl script was sending commands over Telnet; even though the VxWorks OS on the embedded system runs a telnet daemon, it did not like having two clients (Tornado and my script's Telnet session) connected at once.
As for the error above, the reason it didn't work was a syntax error. I should have used
$t->cmd('<\\Path\\subfolder\\test.o');
There is no need to give the C: drive letter.

How are the vxWorks "kernel shell" and "host shell" different?

In the VxWorks RTOS, there is a shell that allows you to issue commands to your embedded system.
The documentation refers to a kernel shell, a host shell and a target shell. What is the difference between the three?
The target shell and kernel shell are the same. They refer to a shell that runs on the target. You can connect to the shell using either a serial port, or a telnet session.
A task runs on the target and parses all the commands received and acts on them, outputting data back to the port.
The host shell is a process that runs on the development station. It communicates with the debug agent on the target. All the commands are actually parsed on the host and only simplified requests are sent to the target agent:
Read/Write Memory
Set/Remove Breakpoints
Create/Delete/Suspend/Resume Tasks
Invoke a function
This results in less real-time impact to the target.
Both shells allow the user to perform low-level debugging (disassembly, breakpoints, etc.) and invoke functions on the target.
There are some differences between the host shell and the target shell; you can use the h command to list the actual commands each shell supports.
The host shell supports more command-line editing features, such as auto-completion and symbol lookup.