I'm trying to emulate an STM32F429I Discovery board using QEMU and the Eclipse IDE. I got the blinky example running, with the LED turning on and off on the graphical board screen, but an example that uses the on-board LCD doesn't seem to run. Is it supported? Also, many drivers fail when simulated with QEMU (SDRAM, RCC, ...). How can I find out exactly which peripherals are fully supported?
Here's the part of the documentation about the board. What is meant by "FP not emulated", anyway?
Background
There is a lot of documentation about using QEMU to simulate a system of a particular architecture (a "platform"), for example an x86, ARM, or RISC-V system.
The first step is to configure QEMU target-list, for example ./configure --target-list=riscv32-softmmu.
It's also possible to provide multiple targets in the target list, but apparently that builds an independent emulator for each specified platform.
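For example, a build configured with both targets
./configure --target-list=x86_64-softmmu,riscv32-softmmu
make
produces two separate emulator binaries, qemu-system-x86_64 and qemu-system-riscv32, rather than a single binary that can mix the two architectures.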
My goal, however, is to simulate a system with mixed targets: an x86 machine which also hosts a RISC-V embedded processor over PCI.
Obviously I need to implement a QEMU PCI device which would host the RISCV device on the x86 platform, and
I have a good idea how to implement a generic PCI device.
However, I'm not sure about the best approach to simulating both x86 and RISC-V together in the same QEMU simulation.
One approach is to run two instances of QEMU (as two separate processes) and use some sort of IPC to communicate between the x86 and the RISC-V simulation.
Another possible (?) approach could be to build RISC-V QEMU as a loadable library and load it from x86 QEMU.
Perhaps it's even possible to have a single QEMU application that simulates both x86 and RISC-V?
Yet another approach is not to use QEMU for simulating the RISC-V device at all: I could implement a QEMU PCI device that completely encapsulates a RISC-V simulator such as TinyEMU, but I would rather use QEMU for both x86 and RISC-V.
My questions are:
Are there some guidelines or examples for a mixed-target QEMU project?
I've searched for examples but only found references to using QEMU as a single platform simulation, where first you choose which platform you would like to run.
What would be the best approach for simulating a mixed platform in QEMU? Separate QEMU processes with IPC? Or is there a way to configure QEMU so that it can simulate a mixed platform?
Related: https://lists.gnu.org/archive/html/qemu-devel/2021-12/msg01969.html
QEMU does not support running multiple target architectures in the same QEMU process. (This is something we would in theory like to be able to do, but it would require a lot of reworking of core parts of QEMU which assume that the target architecture is known at compile time. So far nobody has felt it important enough to put in the significant development effort needed.)
So if you want to do this you'll need to somehow stitch together a QEMU process for the primary architecture with some other process to do the secondary architecture (QEMU or otherwise). This has been done (for instance Xilinx have an out-of-tree QEMU-based system that does this kind of thing with multiple QEMU processes) but I'm not aware of any easy off-the-shelf frameworks or setups to do it. I suspect that figuring out how time/clocks interact between the two simulations is one of the tricky aspects.
There is another option: you can start two QEMU processes and connect them through a socket. Then you can create a run script that starts both of them in the right order. It's less "clock"-accurate, but good enough for virtualizing your hardware.
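As a rough sketch of that setup (the machine types, image names, and port below are placeholders, not taken from the answer), the two instances can be linked through a TCP character device attached to a serial port on each side:
# x86 instance: expose a serial port as a TCP server socket
qemu-system-x86_64 -M pc -kernel x86-image.elf \
    -chardev socket,id=link0,host=127.0.0.1,port=4444,server=on,wait=off \
    -serial chardev:link0
# RISC-V instance: connect its serial port to the same socket as a client
qemu-system-riscv32 -M virt -kernel riscv-image.elf \
    -chardev socket,id=link0,host=127.0.0.1,port=4444 \
    -serial chardev:link0
The run script then only has to start the server side first, then the client side.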
The other option is https://wiki.qemu.org/Features/MultiProcessQEMU, but you will need to do some hacking on this experimental code.
Use Renode. It not only provides easy multi-CPU simulation, but also HDL co-simulation and multi-machine simulation, synchronized in a single process.
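For instance, a minimal Renode monitor script along these lines (the platform descriptions and ELF names are placeholders) runs two machines in a single process:
mach create "machineA"
machine LoadPlatformDescription @platforms/boards/miv-board.repl
sysbus LoadELF @firmware-a.elf
mach create "machineB"
machine LoadPlatformDescription @platforms/cpus/stm32f4.repl
sysbus LoadELF @firmware-b.elf
start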
Note: Why this question is not off-topic
Some people seem to think this question is off-topic, and it would be better suited for Super User. Please give me a chance to explain why that's not the case.
The question is not about "general computing hardware" but about "embedded software". In the topic of "embedded software", there are tons of questions on StackOverflow related to OpenOCD, a popular open-source tool to connect your computer to embedded software development boards. All these questions are considered totally okay for StackOverflow. My question on this page is about PyOCD - an emerging OpenOCD alternative. So if you vote to close this question, then please also vote to close the 565 (!) other questions about OpenOCD too ;-)
I've got a NuMaker-M032SE V1.3 board from Nuvoton that I'm trying to flash/debug with PyOCD. It's the first time I'm experimenting with PyOCD and with Nuvoton chips. Unfortunately, PyOCD cannot find the device. I'll go step-by-step through the whole procedure. Please tell me what I did wrong.
1. My system
I'm running 64-bit Windows 10 on my desktop computer. I've got Python 3.8 and recently installed the latest PyOCD development version from a cloned GitHub repository (see https://github.com/mbedmicro/pyOCD).
2. Install microcontroller board
Note: this paragraph simply shows the background situation before I move on to explain the actual problem I have with PyOCD
I've got a Nuvoton NuMaker-M032SE V1.3 microcontroller board:
This board has a Nu-Link2-Me V1.0 probe on the right side. The first time I connected the board to my computer, nothing really happened. So I figured out I had to install the Nuvoton ICP tool, which comes with the Nu-Link USB Driver 1.6:
You can download the Nuvoton ICP tool here: https://www.nuvoton.com/hq/support/tool-and-software/development-tool-hardware/programmer/
When I first start the software, I see this:
And I get a request to update the firmware on the Nu-Link2-Me V1.0 probe:
I click OK and wait for the firmware update to complete. I unplug the board and plug it back in. Windows clearly notices the device:
I can also see the device in my Control Panel > Device Manager. It's listed under Universal Serial Bus Controllers as Nuvoton Nu-Link2 USB:
3. Prepare PyOCD
PyOCD has a few built-in targets, but not the Nuvoton chip I've got. So I consulted the documentation at https://github.com/mbedmicro/pyOCD/blob/master/docs/target_support.md and learned that I need to download a pack from http://www.keil.com/dd2/pack/ . That's where I downloaded the Nuvoton ARM Cortex-M NuMicro Family Device Support pack:
Because I downloaded the pack manually, I know that I'll have to add the parameter --pack="C:/path/to/pack/Nuvoton.NuMicro_DFP.1.3.5.pack" to every PyOCD command, to ensure that PyOCD can access this pack whenever it needs to.
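As a sanity check, pyocd can also list the target types a pack provides before you try to connect (the pack path is a placeholder, as above):
$ pyocd list --targets --pack="C:/path/to/pack/Nuvoton.NuMicro_DFP.1.3.5.pack"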
4. Connect PyOCD with Nuvoton board
I believe my microcontroller board is properly installed to go on to the final step: connect PyOCD to the Nuvoton microcontroller board.
First I want PyOCD to find the board. I issue the following command in a Windows console:
$ pyocd list --pack="C:/path/to/pack/Nuvoton.NuMicro_DFP.1.3.5.pack"
Unfortunately, I get the response:
No available debug probes are connected
I tried a few times, both with and without the --pack parameter. I always get the same error message.
Note:
I had expected to see something like:
# Probe Unique ID
---------------------------------------------------------------------------
0 ARM CMSIS-DAP v1 000000800a0c882800000000000000000000000097969902
That's the output I get when I issue the $ pyocd list command with my SWDAP probe connected to my computer. The SWDAP is the official probe from ARM (see https://os.mbed.com/components/SWDAP-LPC11U35/) that runs the DAPLink firmware (see https://github.com/ARMmbed/DAPLink).
I got a reply from Nuvoton. Apparently the NuMaker-M032SE V1.3 board is not yet supported in PyOCD at the time of writing (2 Dec 2019). At the moment, only the NuMaker M252/M263 boards are supported.
Nuvoton will make an effort to support these boards in PyOCD too in the future.
I recently got a new Mbed board, an MTS Dragonfly. I can't get the flash disk to show up correctly, and I am wondering whether I have a DOA module or I am doing something wrong. Does this happen with other Mbed boards?
I have installed the drivers from the manufacturer's website and do have a working serial connection, which defaults to the cellular module. However, the flash disk does not show up correctly: unlike with other Mbed boards, I am greeted with a "please insert disk" message and I see no file system.
The interesting part is that the mbed microcontroller - the one doing the programming - is on a separate development/breakout board. The target is a separate module that is meant to be used in production.
If I do not insert the target into the development board and just connect the development board to the PC, I get the same error. I have looked at diskpart, and when no target is present, the device shows up as a 16 KB disk with no partitions or volumes. When the module is inserted, diskpart reports ~512 KB of space, also with no partitions. Thus I guess that I am plugging in the module correctly.
I have seen user discussions for a 'bricked' mbed board (damaged file-system), and this situation looks similar to me.
I tried using diskpart to create a partition or to clean the disk, and it throws an I/O error.
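For reference, this is roughly the diskpart session I tried (disk 2 is simply where the Mbed disk happened to show up on my machine); both clean and create partition primary throw the I/O error:
diskpart
DISKPART> list disk
DISKPART> select disk 2
DISKPART> clean
DISKPART> create partition primary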
This question on mbed site
I just tested all of my Mbed boards and discovered that this is a regression in the Windows 10 Anniversary Update.
The MTS Dragonfly and another board, the Delta DFCM-NNN40, do not show up with a valid partition on any of my Windows 10 machines. I have a couple of FRDM boards, and those work fine.
I tested Ubuntu, and it has no issue displaying the disk drives or programming the boards. I have not tested other versions of Windows. A workable solution is to use Ubuntu in VirtualBox and pass it control of the USB device.
I wonder whether there is any way to install VxWorks on a Vortex86DX (VDX-6354) board? I searched a lot on the net and did not find a "no" to this question, but I could not find any manual or help either. Has anybody done this before who knows how to do it?
VxWorks certainly runs on PC-architecture x86 targets; there is probably already a suitable 80486 BSP that will suit this board. You can search for a suitable BSP here. There is only one BSP explicitly listed for the 486, targeted at VxWorks 5.4/Tornado 2.0 - so it is as antique as the 486 architecture itself. VxWorks 6.9, however, has a single unified BSP for x86 which will no doubt work with your board.
VxWorks is not "installed" as such, in the way a GPOS such as Linux or Windows is; rather, you link your application with the VxWorks libraries to create an application image that runs directly on boot. How the bootstrap process works varies between architectures and hardware implementations, but as a generic PC-architecture board, booting a VxWorks application on your board will be the same as on any other PC target. As such, what you need to look for are directions on booting VxWorks on PC architecture rather than anything specific to your actual board.
On PC architecture you can boot from mass storage or from a network server. Booting via a network connection is the normal method during debug/development. A great deal of the information available is for older versions of VxWorks. However, it seems that it is possible to boot VxWorks via a VxWorks-specific bootstrap, or from a generic PC bootloader such as U-Boot.
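As an illustration of the VxWorks-specific route (the boot device, addresses, and file path below are invented for the example), a network boot line for a PC target typically looks like this, where h= is the host IP, e= the target IP, and u=/pw= the FTP credentials:
fei(0,0)host:/tftpboot/vxWorks h=192.168.0.1 e=192.168.0.2 u=target pw=secret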
Ultimately Wind River Support is probably a good starting point.
Could anyone get the camera data from the Kinect using a Raspberry Pi?
We would like to make a wireless Kinect, connecting it over Ethernet or WiFi. Otherwise, let me know if you have a working alternative.
To answer your question: yes, it is possible to get image and depth data on the Raspberry Pi!
Here is how.
If you want to use just video (color, not depth), there is already a driver in the kernel! You can load it like this:
modprobe videodev
modprobe gspca_main
modprobe gspca_kinect
You get a new /dev/videoX and can use it like any other webcam!
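For a quick test (assuming the v4l-utils package is installed and the device came up as /dev/video0), you can grab a single frame from the command line:
v4l2-ctl --device=/dev/video0 --stream-mmap --stream-to=frame.raw --stream-count=1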
If you need depth (which is why you want a Kinect) but have a kernel older than 3.17, you need another driver, which can be found here: https://github.com/xxorde/librekinect. If you have 3.17 or newer, then the librekinect functionality is enabled by toggling the gspca_kinect module's depth_mode parameter:
modprobe gspca_kinect depth_mode=1
Both work well on the current Raspbian.
If you can manage to plug your Kinect camera into the Raspberry Pi, install guvcview first to see if it works.
sudo apt-get install guvcview
Then type guvcview in the terminal and it should open an options panel and the camera control view. If all of that works and you want to get the raw data to do some image processing, you will need to compile OpenCV (it takes about 4 hours of compiling), and after that you just need to program whatever you want. To compile it, just search on Google; there are lots of tutorials.
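A minimal sketch of that OpenCV build (no version pinned, default cmake options; adjust as needed):
git clone https://github.com/opencv/opencv.git
cd opencv && mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=Release ..
make -j4
sudo make install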
Well, as far as I know there are no success stories about getting images from the Kinect on the Raspberry Pi.
On GitHub there is an issue in the libfreenect repository about this problem. In this comment, user zarvox says that the RPi doesn't have enough power to handle the data from the Kinect.
Personally, I tried to connect the Kinect to the RPi using OpenNI2 and Sensor, but had no success. And that was not a clever decision, because it's impossible to work with the Microsoft Kinect on Linux using OpenNI2 due to licensing restrictions (well, actually it is not quite impossible: you can use OpenNI2-FreenectDriver + OpenNI2 on Linux to hook up the Kinect. But this workaround is not suitable for the Raspberry Pi, because OpenNI2-FreenectDriver uses libfreenect).
But anyway, there are some good tutorials about how to connect the ASUS Xtion Live Pro to the Raspberry Pi: one, two. And how to connect the Kinect to the more powerful ARM-based CubieBoard2: three.
If you intend to do robotics, the simplest thing is to use the Kinect library in ROS. Here
Otherwise you can try OpenKinect. They provide the libfreenect library that lets you access the accelerometers, the image & much more.
OpenKinect on GitHub here
OpenKinect Wiki here
Here is a good example with code & all the details you need to connect to the Kinect & operate the motors using libfreenect.
You will need a powered USB hub to power the Kinect, and you will need to install libusb.
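A minimal sketch of building libfreenect from source and trying its bundled viewer (package names assume a Debian-like Raspbian; the examples may need extra GL/GLUT packages):
sudo apt-get install git cmake build-essential libusb-1.0-0-dev freeglut3-dev
git clone https://github.com/OpenKinect/libfreenect
cd libfreenect && mkdir build && cd build
cmake .. && make && sudo make install
freenect-glview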
A second possibility is to use the OpenNI library, which provides an SDK to develop middleware libraries to interface with your application; there is even an OpenNI lib for Processing here.
Yes, you can use the Kinect with a Raspberry Pi in a small robotics project.
I have done this with the OpenKinect library.
My experience is that you should check your Raspberry Pi and monitor the Pi's voltage, so it doesn't go down due to low voltage.
You should also tune your code to use less processing power and run faster,
because if your code has a problem, your image processing will respond more slowly to the objects.
https://github.com/OpenKinect/libfreenect
https://github.com/OpenKinect/libfreenect/blob/master/wrappers/python/demo_cv2_threshold.py