I wish to get the Qualcomm SNPE (Snapdragon Neural Processing Engine) working on my Linux (not Android) board (Flight Pro with a Qualcomm 820). It works fine on the CPU.
I've successfully followed the provided examples to load AlexNet onto my 820 board and run SNPE (snpe-net-run) in CPU mode. It does not run in GPU mode.
Searching the web and forums (e.g., https://developer.qualcomm.com/forum/qdn-forums/software/qualcomm-neural-processing-sdk/59207), it seems that all (?) Linux boards may be missing the OpenCL driver that would be required to make this work.
Following the example...
> snpe-net-run --container bvlc_alexnet.dlc --input_list target_raw_list.txt --use_gpu
The selected runtime is not available on this platform. Continue anyway to observe the failure at network creation time.
Aborted
I expected the GPU to work (and, fingers crossed, to be substantially faster than the CPU!)
You need to consult your board vendor/manufacturer and your Linux BSP provider.
From the SNPE product page, the 820 is listed as supported, but it is also mentioned that libOpenCL.so must be present on the device (see the quote below).
The Qualcomm Neural Processing SDK supports Qualcomm® Snapdragon™ 855,
845, 820, 835, 712, 675, 660, 653, 652, 650, 636, 632, 630, 626, 625,
450, 439, and 429 as well as Qualcomm® QCS605 and QCS403, Qualcomm®
SM6125, the Qualcomm® Snapdragon™ 820Am automotive platform and
Qualcomm Flight. For Qualcomm® Adreno™ GPU support, libOpenCL.so must
be present on device.
For our case, we were using a board with the 626 and an Adreno™ 506 GPU. The board vendor also provided the Linux BSP. When we built the Linux image, it already included a libOpenCL.so under /usr/lib (32-bit) and /usr/lib64 (64-bit).
We were also using another development board from another vendor, and the SNPE SDK was included with the development kit along with instructions on how to set it up onboard.
Basically, it depends on the board and the accompanying BSP. Otherwise, you'll probably need to customize your Linux image to add support for it.
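As a quick sanity check before going back to the vendor, a small sketch like this can tell you whether the OpenCL loader is even present on the target. The search paths are assumptions based on common Linux BSP layouts; your image may install the library elsewhere.

```shell
# Sketch: probe for the OpenCL library that SNPE's GPU runtime loads.
# The directories below are assumptions; adjust for your BSP.
check_opencl() {
    for dir in "$@"; do
        if [ -e "$dir/libOpenCL.so" ]; then
            echo "found: $dir/libOpenCL.so"
            return 0
        fi
    done
    echo "libOpenCL.so not found; snpe-net-run --use_gpu will fail"
    return 1
}

check_opencl /usr/lib /usr/lib64 || true
```

If this prints "not found", no amount of SNPE configuration will help until the BSP provides the driver.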
Related
I'm trying to deploy TF Lite on a microcontroller that is not in the examples provided by the TF repository, and I'm starting with an STM32L0.
My question is:
1) How can I modify the mbed project for an STM32F4 to fit another STM32 family?
I noticed I need to change the TARGET (which I could find in the mbed-os repository), but it returns a few errors saying the AUDIO_DISCO and BSP modules are missing.
2) Where do I find these libraries for my board?
Specs:
Linux Ubuntu 18.04
mbed cli 1.10.2
mbed os >= 5 (contains mbed-os.lib file)
tensorflow v2.10.1
Discovery Kit for STM32L07CZY6TR (B-L072-LRWAN1)
For part #1, you can remove the AUDIO_DISCO and BSP .lib files that are in the generated projects for Mbed.
This should get you something that builds examples that don't need to access the microphone or accelerometers, but if you want to use sensor data, you'll have to figure out what the equivalents are for your board since Mbed OS doesn't offer abstractions for these kinds of devices.
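Concretely, removing those references can be sketched as below. The AUDIO_DISCO_*/BSP_* filenames are assumptions (in the generated project I saw they were F746NG-specific); run `ls *.lib` in your generated Mbed project to confirm what is actually there.

```shell
# Sketch: drop the board-specific helper libs from a generated Mbed project.
# The AUDIO_DISCO_*/BSP_* names are assumptions; check with `ls *.lib`.
remove_disco_libs() {
    proj="$1"
    rm -f "$proj"/AUDIO_DISCO_*.lib "$proj"/BSP_*.lib
}

# e.g.: remove_disco_libs path/to/gen/mbed_cortex-m4/prj/hello_world/mbed
```

Note that mbed-os.lib must stay, since that is the Mbed OS checkout itself.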
I managed to build for other targets by doing the following:
Find the target name for your board in mbed-os/targets/
In my case it was DISCO_L072CZ_LRWAN1
Clone v2.1.0 of the tensorflow repository (the latest version on master didn't work for me)
Replace <lowercase_target> with your target name (lowercase) in the following command:
make -f tensorflow/lite/experimental/micro/tools/make/Makefile TARGET=mbed TAGS="CMSIS <lowercase_target>" generate_hello_world_mbed_project
Follow the remaining steps described in the tutorial and run the following command with your target name in uppercase:
mbed compile -m <TARGET_UPPERCASE> -t GCC_ARM
Done! If you need to use the libraries, they will be located in
tensorflow/lite/experimental/micro/tools/make/gen/mbed_cortex-m4/prj/hello_world/mbed/mbed-os/features/
Hope it helps! =)
I have an Ettus Research N210 software defined radio (SDR) connected to my laptop. The device is recognized under macOS and also under Ubuntu running on top of VirtualBox. These commands:
uhd_usrp_probe --args=addr=192.168.10.2
and
uhd_find_devices --args=addr=192.168.10.2
and even
rx_ascii_art_dft --args=addr=192.168.10.2 --freq 92000000 --gain 30 --rate 8000000 --frame-rate 15 --ref-lvl -50 --dyn-rng 70
work perfectly and deliver results. But whenever I start gnuradio-companion with a simple flow graph, I get the following error (both directly under macOS and on top of VirtualBox Ubuntu):
[ERROR] [UHD] Device discovery error: unknown key format 192.168.10.2
RuntimeError: LookupError: KeyError: No devices found for ----->
Device Address: 192.168.10.2
In the flow graph, I put the device address in the properties window of "USRP Source--> General --> Device Address".
Any ideas what I am doing wrong?
I finally found the solution in one of the replies in ETTUS forum. So I put it here in the hope it can be useful for others facing the same problem. The device address field of the USRP source in gnuradio-companion should not be filled with just "192.168.10.2" but with "addr=192.168.10.2". This solved the problem for me.
With QEMU, I can either use -initrd '${images_dir}/rootfs.cpio' for the initrd, or pass the initramfs image directly to -kernel Image.
But if I try the initramfs image with gem5 fs.py --kernel Image it fails with:
fatal: Could not load kernel file
with the exact same initramfs kernel image that QEMU was able to consume.
And I don't see an analogue to -initrd.
The only method that I got to work was to pass an ext2 disk image to --disk-image with the raw vmlinux.
https://www.mail-archive.com/gem5-users#gem5.org/msg15198.html
initrd appears to be unimplemented on arm and x86 at least, since gem5 must know how to load it and inform the kernel about its location. Grepping for initrd only shows some ARM hits under:
src/arch/arm/linux/atag.hh
but they are commented out.
Communicating the initrd location to the kernel now appears to be doable simply via the DTB chosen node's linux,initrd-start and linux,initrd-end properties, so it might be very easy to implement: https://www.kernel.org/doc/Documentation/devicetree/bindings/chosen.txt (together with gem5's existing DTB auto-generation), plus reusing the infrastructure for loading arbitrary bytes to a memory location: How to preload memory with given raw bytes in gem5 from the command line in addition to the main ELF executable?
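For reference, the chosen-node properties described in that binding would look something like this in the device tree source (the addresses are made-up placeholders for wherever the simulator would load the initrd blob):

```dts
/ {
    chosen {
        /* hypothetical physical load range for the initrd blob */
        linux,initrd-start = <0x82000000>;
        linux,initrd-end = <0x82800000>;
    };
};
```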
Initramfs doesn't work because gem5 can only boot from vmlinux, which is the raw ELF file, and the initramfs image only gets attached by the kernel build to a more final image type like Image or bzImage, which QEMU can boot; see also: https://unix.stackexchange.com/questions/5518/what-is-the-difference-between-the-following-kernel-makefile-terms-vmlinux-vml/482978#482978
Edit: the following is not needed anymore after the patch mentioned at: How to attach multiple disk images in a simulation with gem5 fs.py? To do this test, I also had to pass a dummy disk image as of gem5 7fa4c946386e7207ad5859e8ade0bbfc14000d91, since the scripts don't handle a missing --disk-image well. You can just dump 512 zero bytes and use them:
dd if=/dev/zero of=dummy.iso bs=512 count=1
Since it took me quite some time to figure out how to get the Xtion (PrimeSense) to work on VMware, I thought I'd share the solution here. (With the Kinect, I have a problem getting ROS to see the device, even though VMware connects it successfully.)
roslaunch openni2_launch openni2.launch
Running the above command gave me the error:
Warning: USB events thread - failed to set priority. This might cause loss of data...
I either got a single frame or no frame when running "rviz" and Add --> Image --> Image topic --> /camera/rgb/image_raw
So how do I get video frames in Ubuntu from a Primesense device while using a Virtual Machine (VMware)?
My specs
Windows 7 running VMware 10.0.4 build-2249910
Ubuntu 12.04.5 Precise in VMware
ROS Hydro
The following question pointed me in the right direction: http://answers.ros.org/question/77651/asus-xtion-on-usb-30-ros-hydro-ubuntu-1210/?answer=143206#post-id-143206
In blizzardroi's answer (not the selected answer), they mention that UsbInterface should be 0. I reasoned that since my host machine is Windows, I should set UsbInterface to 1 instead, which indeed solved it.
Solution
Go to /etc/openni2/ (from system folder, not Home) and open PS1080.ini with administrator rights (e.g. sudo gedit PS1080.ini). Search for UsbInterface, remove the ; and change the value to 1. It should look like below:
; USB interface to be used. 0 - FW Default, 1 - ISO endpoints (default on Windows), 2 - BULK endpoints (default on Linux/Mac/Android machines)
UsbInterface=1
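If you prefer making that edit from the shell, here is a hedged sketch. The exact commented-out form of the line varies between installs (e.g. ";UsbInterface=2"), so the pattern matches any leading semicolon/whitespace; the /etc/openni2/PS1080.ini path is the one from my install.

```shell
# Sketch: uncomment and set UsbInterface=1 in an OpenNI2 PS1080.ini.
# The ini path varies per install; run with sudo for the system file.
set_usb_interface() {
    ini="$1"
    sed -i 's/^[;[:space:]]*UsbInterface=.*/UsbInterface=1/' "$ini"
}

# e.g.: set_usb_interface /etc/openni2/PS1080.ini   (as root)
```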
Additional
From previous experience, it may also be that your Windows system needs the Kinect drivers as well. If the above does not work, try installing the following:
(Kinect SDK) https://www.microsoft.com/en-us/download/details.aspx?id=34808
(OpenNI2 Windows) http://structure.io/openni
P.S. Don't forget the drivers for Ubuntu (replace hydro with your ROS version):
sudo apt-get install ros-hydro-openni*
Important
It doesn't solve the error below, but rviz returns video, which means that we can read the data the Primesense device publishes!
Warning: USB events thread - failed to set priority. This might cause loss of data...
Got the same warning from OpenNI (issued at start by a binary located at Tools/PSLinkConsole) with another sensor.
Solved by starting the process as sudo. My guess: to set the priority of USB event threads you need root access. :)
I run poclbm on my system, but for some reason neither deepbit nor slush "sees" the work being performed. My system reports about 200 megahashes per second being done. I tried mining with my CPU using the same settings, and then both deepbit and slush recognized that work was being performed.
These are the errors I am getting out of the respective mining hardware (every minute or so):
poclbm error: pit.deepbit.net:8332 22/02/2013 21:50:59, Verification failed, check hardware! (0:0:Cypress, d47b7ba0)
cgminer error: [2013-02-22 22:18:51] GPU0: invalid nonce - HW error
I am using Ubuntu 12.10 (Quantal Quetzal) with the 12.10 version of poclbm and an ATI 5800 series video card. The video drivers are installed and work as far as I can tell. When I run "aticonfig --odgc --adapter=all", the GPU does seem to be utilized by poclbm (around 70% utilization or so).
I found the solution through an IRC channel (Freenode, channel #cgminer). Basically, at least on the version of Ubuntu that I have (12.10), the 2.8 version of the SDK does NOT work properly with cgminer or poclbm. I was instructed to download the 2.4 version of the SDK instead:
http://developer.amd.com/Downloads/AMD-APP-SDK-v2.4-lnx32.tgz
http://developer.amd.com/Downloads/AMD-APP-SDK-v2.4-lnx64.tgz
Some distributions require the "2.7" version so I'll put the links here:
http://developer.amd.com/Downloads/AMD-APP-SDK-v2.7-lnx32.tgz
http://developer.amd.com/Downloads/AMD-APP-SDK-v2.7-lnx64.tgz
I compiled it. There is no "make install" for this Makefile, apparently, so you have to manually copy the files to your lib directory:
for 32 bit: $ cp -pv lib/x86/* /usr/lib/
for 64 bit: $ cp -pv lib/x86_64/* /usr/lib/
Also copy the include files: $ rsync -avl include/CL/ /usr/include/CL/
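Put together, the manual copy steps above can be sketched as one function. The destination is parameterized so you can stage into a scratch prefix first; /usr is the assumed final prefix from my install, and the x86/x86_64 layout is the one the SDK tarball shipped with.

```shell
# Sketch: manual "install" for the AMD APP SDK tarball, which ships no
# `make install` target. src = unpacked SDK dir; dest defaults to /usr.
install_amd_app_sdk() {
    src="$1"; dest="${2:-/usr}"
    mkdir -p "$dest/lib" "$dest/include/CL"
    if [ -d "$src/lib/x86_64" ]; then   # 64-bit SDK
        cp -pv "$src"/lib/x86_64/* "$dest/lib/"
    else                                # 32-bit SDK
        cp -pv "$src"/lib/x86/* "$dest/lib/"
    fi
    cp -pv "$src"/include/CL/* "$dest/include/CL/"
}

# e.g.: install_amd_app_sdk ./AMD-APP-SDK-v2.4-lnx64   (as root)
```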
With the libraries installed in the appropriate directories, I recompiled cgminer and then it worked. I also tried it with poclbm and it worked with that too.
Hm, I experienced the same error with poclbm and cgminer. Then I found https://bitcointalk.org/index.php?topic=139406.msg1502120#msg1502120 .. I tried phoenix and all is OK now. Hope it helps. Sorry for my bad English.