Adding new ARM machine to Qemu - embedded

I've used Buildroot to compile firmware targeting the LPC EA3250 board, and I'm trying to get it running under QEMU so that I can test firmware changes on my machine. I've tried commands such as:
qemu-system-arm -M virt -kernel uImage -hda rootfs.ext2 -boot c -m 128M -append "root=/dev/sda rw console=ttyS0,38400n8"
But I keep getting similar errors no matter which -M option I try. It seems I need a new machine option to pass to QEMU that corresponds to my board. I've found this config file, which seems to be the configuration needed for the board I'm looking at.
What I would like to know is how to add this config to QEMU. Do I have to place it somewhere and then recompile everything? If so, where does it need to go?

On further investigation, it seems the config file I found is for something else entirely. The LPC EA3250 is not supported by QEMU, and adding support for additional machines is an extensive task.
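For reference, QEMU can list the ARM machine models it was built with; if a board (or its SoC family) is not in that list, that build cannot emulate it. A quick check from a shell:
# List every ARM machine model this QEMU build knows about
qemu-system-arm -M help
# Filter for NXP LPC32xx-era parts; no output means no support for this family
qemu-system-arm -M help | grep -i lpc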

What is the entry point for WSL 1?

Imagine we have a statically linked Linux executable.
What should I name it in the imported tar.gz so that WSL 1 will run it by default when the distro is created and started like this:
# import an archive as a WSL distro
wsl --import static tmp-root-dir static.tar.gz
# boot distro to a default app??
wsl -d static
P.S. WSL uses its own proprietary boot process and doesn't seem to use the traditional Unix /sbin/init.
Short answer:
The smallest bootable (without errors or warnings) WSL rootfs will consist of three files:
/main: Your statically-linked application. It can be named whatever you want, as long as the name matches what is in passwd.
/etc/passwd: Defines the application (i.e. shell) to load for the default user.
/etc/wsl.conf: To suppress normal WSL functionality and (optionally) define the non-root user.
More detail:
This probably isn't exactly what you are wanting, but it will hopefully meet your needs.
To start with, the entry point for WSL (the first time a Linux ELF binary is started inside the instance) seems to be its /init binary, which, in addition to some "normal" Linux init process tasks, sets up some of the Windows-interop functionality. To my knowledge, it cannot currently be changed. As far as I can tell, for WSL1, it is injected into the instance by the LXSS manager when starting a WSL instance.
Note: WSL2 might be slightly different in this regard, as it does seem to use a kernel-processed initrd to load /init. It is possible to override the kernel command-line, but that would impact all WSL2 instances, so it's probably not a practical solution.
It's not quite clear from your question whether you want the "default application" to:
Run as the default application/shell every time wsl -d static is run, even if it was already running.
Or just run once when starting the WSL1 instance for the first time.
I believe you are looking for the first option.
Run as the default application
In the first case, the standard WSL1 /init process might get you to where you need to be. As part of the startup, as you would expect, it reads /etc/passwd to determine the user shell to start. It also reads /etc/wsl.conf to determine the default user ID (but falls back to the registry if there is no default user set in wsl.conf).
So, to start a different application (let's call it main), you can:
Place the binary in the root directory of your image.
Set the application as the "shell" of the default user in a single-line /etc/passwd:
user:x:1000:1000:user:/:/main
Side note: this also sets the home directory to / so we don't have to create another directory.
Define an /etc/wsl.conf with the following contents:
[user]
default=user
[automount]
enabled=false
mountFsTab=false
[interop]
appendWindowsPath=false
This will prevent WSL from performing the following startup tasks, which would produce an error without additional image support:
Mounting Windows drives into the instance
Attempting to process /etc/fstab (since we have no mount command in the image).
Appending Windows paths (since our instance won't have access to the Windows drives)
It also sets the default user to the UID 1000 user we created in /etc/passwd. This isn't strictly necessary; there's likely no harm in running as root in this single-use instance, but I've included a non-root user as a best practice.
That should be it. The smallest bootable WSL rootfs will consist of just those three files:
/etc/wsl.conf
/etc/passwd
/main
This will work on WSL1 as well as WSL2, although for WSL2, you should invoke with wsl ~ -d static to make sure that it doesn't try to start on a Windows drive that it can't access. Otherwise, you'll receive an init error, but your application will still be invoked.
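For illustration, here is one way those three files might be staged, packed, and imported, loosely following the wsl --import flow from the question. This is only a sketch: rootfs is a placeholder staging directory, main is assumed to be your statically linked binary, the staging/tar steps assume a Linux (or existing WSL) shell, and the wsl commands are then run from Windows as in the question.
# Stage the three files; the passwd line and wsl.conf contents are the ones shown above
mkdir -p rootfs/etc
cp main rootfs/main
chmod a+rx rootfs/main
printf 'user:x:1000:1000:user:/:/main\n' > rootfs/etc/passwd
cp wsl.conf rootfs/etc/wsl.conf   # the [user]/[automount]/[interop] file from above
# Pack the staging directory and import it as a WSL1 distro
tar -C rootfs -czf static.tar.gz .
wsl --import static tmp-root-dir static.tar.gz --version 1
wsl -d static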
Run once
If you are looking for something that will, for instance, start up a daemon when the instance is started for the first time, then there are a few alternatives that I document in this answer. If you are on Windows 11, then there's a built-in mechanism via /etc/wsl.conf. Otherwise, on Windows 10, you'll probably need to include some binary that can handle conditional logic. Something like execline would probably be perfect for this, but I've had issues with it under WSL2, at least, and I'm not sure that it would run under WSL1 (but it might).
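For reference, the Windows 11 mechanism mentioned above is the [boot] section of /etc/wsl.conf; to my understanding it looks like the snippet below, where the command itself is just a placeholder for whatever you want started.
[boot]
# placeholder command; runs once as root when the instance starts (Windows 11 only)
command = /main --daemon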
Side-note for WSL1/musl
musl is a commonly used alternative libc implementation. For instance, Rust (AFAICT) can only generate truly statically linked executables using musl. Note, however, that WSL1 cannot run musl-based statically linked binaries.
WSL2 can handle them just fine.
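If you are unsure what a binary was linked against, a quick check from any Linux shell (main is just the example name from above):
# "statically linked" in the output means no dynamic loader is needed; for dynamic
# binaries the interpreter path (ld-musl-* vs ld-linux-*) reveals musl vs glibc
file ./main
# ldd prints "not a dynamic executable" for fully static binaries
ldd ./main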
I managed to get it working. Initially I had missed the executable bit on the app when creating the TAR archive.
Take a standard 64-bit assembly program:
.data
msg:
    .ascii "Hello, world!\n"
    .set len, . - msg

.text
.globl _start
_start:
    # write
    mov $1, %rax
    mov $1, %rdi
    mov $msg, %rsi
    mov $len, %rdx
    syscall

    # exit
    mov $60, %rax
    xor %rdi, %rdi
    syscall
and create a minimal WSL system:
wsl as --64 -o minimal.o minimal.s
wsl ld -melf_x86_64 -o minimal minimal.o
tar czf minimal.tar.gz \
--mode=a=rx \
--xform='s#^minimal#/\0#' minimal
wsl --import minimal rootfs-minimal minimal.tar.gz --version 1
wsl --list
wsl -d minimal -e /minimal
To make the executable the default (shortening wsl -d minimal -e /minimal to just wsl -d minimal), we need an extra file, /etc/passwd:
root:x:0:0:root:/root:/minimal
The first line of this file determines the default user, and therefore the path to the executable (the entry point), unless you override the user with /etc/wsl.conf:
[user]
default=user
Basically, WSL 1 treats only two files as magical (in addition to ignoring /sbin/init):
/etc/wsl.conf
/etc/passwd

How to enable X11 forwarding in PyCharm SSH session?

The Question
I'm trying to enable X11 forwarding through the PyCharm SSH terminal, which can be opened via
"Tools -> Start SSH session..."
Unfortunately, it seems there is no way to specify flags the way I would in my shell to enable X11 forwarding:
ssh -X user@remotehost
Do you know some clever way of achieving this?
Current dirty solution
The only dirty hack I found is to open an external ssh connection with X11 forwarding and then manually update the DISPLAY environment variable.
For example, in my external ssh session I can run:
vincenzo#remotehost:$ echo $DISPLAY
localhost:10.0
And then set in my PyCharm terminal:
export DISPLAY=localhost:10.0
or update the DISPLAY variable in the Run/Debug Configuration, if I want to run the program from the GUI.
However, I really don't like this solution of using an external ssh terminal and manually updating the DISPLAY variable, and I'm sure there's a better way of achieving this!
Any help would be much appreciated.
P.s. Making an alias like:
alias ssh='ssh -X'
in my .bashrc doesn't force PyCharm to enable X11 forwarding.
So I was able to patch up jsch and test this out and it worked great.
Using X11 forwarding
You will need to do the following to use X11 forwarding in PyCharm:
- Install an X Server if you don't already have one. On Windows this might be the VcXsrv project, on Mac OS X the XQuartz project.
- Download or compile the jsch package. See instructions for compilation below.
- Back up jsch-0.1.54.jar in your PyCharm lib folder and replace it with the patched version. Start PyCharm with a remote environment and make sure to remove any instances of the DISPLAY environment variable you might have set in the run/debug configuration.
Compilation
Here is what you need to do on a Mac OS or Linux system with Maven installed.
wget http://sourceforge.net/projects/jsch/files/jsch/0.1.54/jsch-0.1.54.zip/download
unzip download
cd jsch-0.1.54
sed -e 's|x11_forwarding=false|x11_forwarding=true|g' -e 's|xforwading=false|xforwading=true|g' -i src/main/java/com/jcraft/jsch/*.java
sed -e 's|<version>0.1.53</version>|<version>0.1.54</version>|g' -i pom.xml
mvn clean package
This will create jsch-0.1.54.jar in the target folder.
Update 2020:
I found a very easy solution. It may be due to the updated PyCharm version (2020.1).
Ensure that X11Forwarding is enabled on the server: in /etc/ssh/sshd_config set
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no
On the client (macOS for me): in ~/.ssh/config set
ForwardX11 yes
In PyCharm, deselect Include system environment variables. This resolves the issue of the DISPLAY variable being set from the system environment.
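Before pointing PyCharm at the server, it can be worth confirming that forwarding works from a plain terminal; a quick check, assuming user and remotehost from the question and some X client (here xeyes) installed on the server:
# Connect with X11 forwarding and confirm the server-side DISPLAY is set
ssh -X user@remotehost 'echo $DISPLAY'
# Optionally run a trivial X client; a window should appear on the local display
ssh -X user@remotehost xeyes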
EDIT: I confirmed that it works; for example, I used the PyTorch implementation of DeepLab and visualized sample images from PASCAL VOC.
X11 forwarding was implemented in 2021.1 for all IntelliJ-based IDEs. If it still doesn't work, please consider creating a new issue at youtrack.jetbrains.com.
By the way, the piece of advice about patching jsch won't work for any IDE newer than 2019.1.
In parallel, open MobaXTerm and connect with the X11 forwarding checkbox enabled. PyCharm will then forward the display through the MobaXTerm X11 server.
This is a workaround until PyCharm adds this 'simple' feature.
Also, set the DISPLAY environment variable in the PyCharm run configuration like this:
DISPLAY=localhost:10.0
(the right-hand side should be obtained with the command echo $DISPLAY on the server side)
Update 2022, for PyCharm newer than 2022.1: plotting in SciView works with only ForwardX11 yes set in .ssh/config (my laptop runs Ubuntu 22.04). I did not set any other parameters on either the server or the local side.

How do I emulate the vmx feature with QEMU?

I read here that VMX capability support in QEMU must be explicitly enabled by passing the +vmx option on the command line, but it does not seem to work. On my system, the VMX feature is still undetected.
Command:
qemu-system-x86_64 -no-kvm -cpu qemu64,+vmx,-svm ...
In my guest OS, when I execute CPUID leaf 1, I get ECX = 0x80802001; bit 5 is 0, meaning my virtual CPU does not have VMX.
Is this a bug?
Or is there another way to enable the vmx feature in QEMU?
No, the vmx flag is not supported in QEMU's processor-emulation mode. To use vmx in QEMU, you must use KVM with QEMU (replace -no-kvm with -enable-kvm), and your host processor must support vmx.
This document describes nested VMX support in Linux KVM, which means the feature must be used with -enable-kvm.
In my test, the options -enable-kvm -cpu kvm64,+vmx work: the vmx feature is detected in the guest OS.
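A rough way to verify the whole chain from a shell, assuming an Intel host (the equivalent module parameter lives under kvm_amd on AMD) and a guest disk image at disk.img, which is just a placeholder path:
# On the host: check that nested virtualization is enabled in the KVM module
cat /sys/module/kvm_intel/parameters/nested
# Launch the guest with KVM acceleration and VMX exposed to the guest CPU
qemu-system-x86_64 -enable-kvm -cpu kvm64,+vmx -m 1G -hda disk.img
# Inside the guest: vmx should now appear among the CPU flags
grep -c vmx /proc/cpuinfo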
The following command works for me:
qemu-system-x86_64 -cpu host -kernel kernel/kernel -serial stdio -enable-kvm
-cpu host makes QEMU report host CPU features inside the VM (so your CPU must support vmx)
-enable-kvm is required by -cpu host
Even though, according to this, -cpu qemu64,+vmx should work, it doesn't work for me either.

Xcode distributed build failure

I am trying to do distributed builds with Xcode, but I see this error while building from my build server (the build server is the host, my dev machine is the client).
When I try it the other way around (my dev machine as the host and the build server as the client), I am able to distribute builds.
Any thoughts?
[14:44:47]: Step 2/3 (6m:10s)
[14:44:57]: [Step 2/3] distcc[95606] (dcc_parse_multiplier) ERROR: bad multiplier "/0,lzo,cpp" in host specification
[14:44:57]: [Step 2/3] distcc[95606] (dcc_show_hosts) CRITICAL! Failed to get host list
[14:44:57]: [Step 2/3] /usr/bin/pump: error: pump mode requested, but distcc hosts list does not contain any hosts with ',cpp' option
Your mileage may vary with this solution, but we've had to hack the distcc that comes with Xcode to force pump mode off to fix this problem.
Remove pump from /Developer/usr/bin and /usr/bin, and just write out an empty file named pump in its place.
Don't forget to chmod a+x your pump and distcc (in the next step)
In /Developer/usr/bin, rename distcc to distcc.bin and write out this distcc wrapper:
#!/bin/bash
# Strip the ",cpp" (pump mode) option from every host entry, then call the real distcc
hosts=$DISTCC_HOSTS
hosts=${hosts//,cpp/}
export DISTCC_HOSTS=$hosts
echo Modified DISTCC_HOSTS=\"$DISTCC_HOSTS\"
/Developer/usr/bin/distcc.bin "$@"
Apologies, this is a quick and dirty solution. There is probably a cleaner way to do this.
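A sketch of the file shuffling described above, assuming the Xcode 3-era /Developer layout and that the wrapper script has been saved as distcc.sh in the current directory (the .orig names are just one way to keep the originals around):
# Neutralize pump mode: replace both pump binaries with empty executable stubs
sudo mv /Developer/usr/bin/pump /Developer/usr/bin/pump.orig
sudo mv /usr/bin/pump /usr/bin/pump.orig
sudo touch /Developer/usr/bin/pump /usr/bin/pump
sudo chmod a+x /Developer/usr/bin/pump /usr/bin/pump
# Move the real distcc aside and install the wrapper from the previous step
sudo mv /Developer/usr/bin/distcc /Developer/usr/bin/distcc.bin
sudo cp distcc.sh /Developer/usr/bin/distcc
sudo chmod a+x /Developer/usr/bin/distcc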
Please restart the build server and your own computer. That usually does the trick for me. Also, update to the latest Xcode 4.

Is there a way I can have a VM gain access to my computer?

I would like to have a VM to look at how applications appear and to develop OS-specific applications. However, I want to keep all my code on my Windows machine so that if I decide to nuke a VM, or anything like that, it's all still there.
If it matters, I'm using VirtualBox.
This is usually handled with network shares. Share your code folder from your host machine and access it from the VMs.
Aside from network shares, another tool to use for this is a version-control system.
You should always be able to make a normal network connection between the VM and the hosting OS, as though it were another computer on the same network. Which, in some sense, it is.
I do this all the time.
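Since the question mentions VirtualBox, its built-in shared folders are another option; a minimal sketch, assuming a VM named DevVM, a host folder C:\code, and a Linux guest with the Guest Additions installed (all of these names are placeholders):
# On the Windows host: attach the folder as a shared folder named "code"
VBoxManage sharedfolder add "DevVM" --name code --hostpath "C:\code"
# Inside the Linux guest: mount it with your own uid/gid so it is writable
mkdir -p ~/code
sudo mount -t vboxsf -o uid=$(id -u),gid=$(id -g) code ~/code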
I have a directory on a Windows drive that I mount in my host Ubuntu 12.04.
I run Ubuntu 13.04 as a VirtualBox guest.
I want the guest to mount the Windows directory with full non-root permissions.
I do almost all my work from a bash shell, so this method is natural for me.
When searching for ways to automatically mount VirtualBox shared folders,
reliable and correct methods are hard to distinguish from those that fail.
Failures include problems getting and setting permissions, among other issues.
Methods that fail include:
modifying /etc/fstab
modifying /etc/rc.local
I am fairly certain that rc.local can be used,
but no method I have tried has worked.
I welcome improvements on these guidelines.
On VirtualBox 4.2.14, running nautilus (bash terminal) on an Ubuntu 13.04 guest,
below is a working method to mount Common (the share name)
on /home/$USER/Desktop/Common (the mount point) with full permissions.
(Note the '\' command-continuation character in the find command.)
First time only: create your mount point, add the find command below to your .bashrc, and run the commands.
Respond with your password when requested.
These are the four command lines needed:
mkdir $HOME/Desktop/Common
echo "$USER ALL=(ALL) NOPASSWD:ALL" | sudo tee -a /etc/sudoers
find $HOME/Desktop/Common -maxdepth 0 -type d -empty -exec sudo \
    mount -t vboxsf -o \
    uid=`id -u $USER`,gid=`id -g $USER` Common $HOME/Desktop/Common \;
source ~/.bashrc  # Needed if you want to mount Common in this bash.
All other times: simply launch a bash shell.
The find command mounts the shared directory if the mountpoint directory is empty.
If the mountpoint directory is not empty, it does not run the mount command.
I hope this is error-free and sufficiently general.
Please let me know of corrections and improvements.