OpenThread examples for EFR32MG12 won't work as-is from their GitHub repository - openthread

I am trying out OpenThread using a combination of their GitHub repository (https://github.com/openthread/openthread) and the Codelabs (https://openthread.io/guides).
There are examples built for the EFR32MG12 platform - https://github.com/openthread/openthread/tree/master/examples/platforms/efr32mg12.
I am using the WSTK PCB4001 Rev A03 with the Mighty Gecko 2.4 GHz 19 dBm BRD4161A Rev A03. I can build the code without errors, and I can even flash it onto the chip successfully using JLinkExe. However, I see no output from the board at all: no CLI output or response, and no LEDs blinking.
I suspect there may be an error in the HAL or BSP implementations. Any help would be appreciated.
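In case it helps anyone reproduce this: the first thing I would check is whether the CLI UART is alive at all. Below is a minimal sketch using pyserial; the port name, the 115200 baud rate, and the state CLI command are assumptions based on the stock CLI example:
import serial

# Hypothetical sanity check: poke the OpenThread CLI over the WSTK's
# virtual COM port (port name and 115200 baud are assumptions).
port = serial.Serial("/dev/ttyACM0", 115200, timeout=2)
port.write(b"state\r\n")                         # a standard OpenThread CLI command
print(port.read(256).decode(errors="replace"))   # any echo/response means the CLI is up
port.close()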

Related

Can I use NAOqi 2.5 (C++/Python SDK) features on a NAOqi 2.9 (QiSDK) robot (Pepper)?

I have the Pepper robot running NAOqi 2.9, which is meant to use the QiSDK for its Android tablet. Things have been going well, but the photo capture rate is surprisingly slow (at most 2 fps), so I've got to use the C++ (or Python) SDKs available for NAOqi 2.5 for this particular task.
I've been trying to get it to work for a few days with no success. I have both the C++ and Python SDKs set up and running, but the problem I'm facing is connecting to the robot.
I've run the following simple code (using the robot's IP), found on the official website here:
from naoqi import ALProxy
tts = ALProxy("ALTextToSpeech", "<IP of your robot>", 9559)
tts.say("Hello, world!")
and I'm getting the following output stream after the second line:
The connection problem occurs whether I run C++ on Ubuntu or Python on Windows.
I can connect to the robot via SSH, FTP, and QiSDK in Android Studio, but not in any way through the NAOqi 2.5 SDKs for C++ or Python. Since QiSDK was most probably built on top of the C++ SDK, there surely has to be a way to make this work.
Any information will help immeasurably.
As far as I know, in NAOqi 2.5 the tablet (JavaScript) and the "brain" of the robot (Choregraphe, i.e. Python / C++) were two independent devices that had to communicate and cooperate with each other. In NAOqi 2.9, the "brain" was moved to the tablet, and the only way to program Pepper is by using Android Studio.
On the download page for Pepper NAOqi 2.9 (https://www.softbankrobotics.com/emea/en/support/pepper-naoqi-2-9/downloads-softwares), there is a comment regarding the Python SDK:
This is for old NAOqi 2.5.10 and NAOqi 2.5.5.
And the following is stated for NAOqi 2.9 / Pepper SDK Plugin [for Android Studio]:
This is all you need for Pepper NAOqi 2.9.
Therefore, according to Softbank Robotics' documentation, using Python / C++ to program a NAOqi 2.9 Pepper is not possible.
I hope this information answers your question.
Edit
There's another way: you can use the qi Python library inside Pepper's head to use services such as ALTextToSpeech or ALMotion, with a simple example here. One could also use SSH to start a Python server, which would give access to these functionalities through endpoints.
import qi

# Connect to the local NAOqi session (run this inside Pepper's head)
app = qi.Application()
app.start()
session = app.session
tts = session.service("ALTextToSpeech")
tts.say("Hello, world!")
If you run the above snippet inside Pepper's head, it produces the expected output (saying "Hello, world!"). Almost all of the services documented here are available. You can also list them all by calling .services() on the session object, as in the sketch below.
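For example, a minimal sketch of that listing (the exact shape of each entry is whatever the qi binding returns, so I just print it raw):
# List every service registered on the session; each entry
# describes one service (ALTextToSpeech, ALMotion, ...).
for info in session.services():
    print(info)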
End of Edit
I finally found a way to hack around it. If you connect to the robot via SSH, you can use the qicli binary. Its documentation is here.
- qicli info lists all available services, for example ALVideoDevice or ALMotion
- qicli info ALMotion displays the available methods of that service
- qicli info ALMotion.setAngles displays info about that method's parameters
- qicli call ALMotion.setAngles HeadYaw 0.7 0.3 calls the method with the given parameters
So one could write a wrapper around this binary and call it programmatically via SSH. It seems like a lot of work for this kind of task, but I haven't found anything else.
I've got Python's Paramiko library to work:
import paramiko

# Open an SSH connection to the robot (default user is "nao")
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # accept unknown host keys
client.connect(hostname='ip-of-robot', username='nao', password='your-pass')

# Sanity check: run a trivial command and print its output
stdin, stdout, stderr = client.exec_command('pwd')
print(stdout.read())

# Call a NAOqi method through qicli
client.exec_command('qicli call ALMotion.setAngles HeadYaw -0.7 0.2')
client.close()
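Building on that, a small wrapper makes the pattern reusable; qicli_call below is my own hypothetical helper, not part of any SDK:
def qicli_call(client, service, method, *args):
    # Hypothetical helper: run "qicli call Service.method args..." over
    # an already-connected paramiko SSHClient and return its output.
    cmd = 'qicli call {}.{} {}'.format(service, method, ' '.join(str(a) for a in args))
    stdin, stdout, stderr = client.exec_command(cmd)
    return stdout.read().decode()

print(qicli_call(client, 'ALMotion', 'setAngles', 'HeadYaw', -0.7, 0.2))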
I've also tried .NET's SSH.NET library, with little to no success.

Is STM32f429 discovery board fully supported on qemu?

I'm trying to emulate the STM32F429I discovery board using QEMU and the Eclipse IDE. I got the blinky example running, with the LED turning on and off on the graphical board view, but an example that drives the on-board screen doesn't seem to run. Is it supported? Also, many drivers fail when simulated with QEMU (SDRAM, RCC, ...). How can I know exactly which peripherals are fully supported?
Here's the part of the documentation about the board. What is meant by "FP not emulated", anyway?

PyOCD doesn't find Nu-Link2-Me probe on my NuMaker board

Note: Why this question is not off-topic
Some people seem to think this question is off-topic, and it would be better suited for Super User. Please give me a chance to explain why that's not the case.
The question is not about "general computing hardware" but about "embedded software". In the topic of "embedded software", there are tons of questions on StackOverflow related to OpenOCD, a popular open-source tool to connect your computer to embedded software development boards. All these questions are considered totally okay for StackOverflow. My question on this page is about PyOCD - an emerging OpenOCD alternative. So if you vote to close this question, then please also vote to close the 565 (!) other questions about OpenOCD too ;-)
I've got a NuMaker-M032SE V1.3 board from Nuvoton that I'm trying to flash/debug with PyOCD. It's my first time I'm experimenting with PyOCD and with Nuvoton chips. Unfortunately, PyOCD cannot find the device. I'll go step-by-step through the whole procedure. Please tell me what I did wrong.
1. My system
I'm running 64-bit Windows 10 on my desktop computer. I've got Python 3.8 and recently installed the latest PyOCD development version from a cloned GitHub repository (see https://github.com/mbedmicro/pyOCD).
2. Install microcontroller board
Note: this paragraph simply shows the background situation, before I move on to explain the actual problem I have with PyOCD
I've got a Nuvoton NuMaker-M032SE V1.3 microcontroller board:
This board has a Nu-Link2-Me V1.0 probe on the right side. The first time I connected the board to my computer, nothing really happened. So I figured out I had to install the Nuvoton ICP tool, which comes with the Nu-Link USB Driver 1.6:
You can download the Nuvoton ICP tool here: https://www.nuvoton.com/hq/support/tool-and-software/development-tool-hardware/programmer/
When I first start the software, I see this:
And I get a request to update the firmware on the Nu-Link2-Me V1.0 probe:
I click OK and wait for the firmware update to complete. I unplug the board and plug it back in. Windows clearly notices the device:
I can also see the device in my Control Panel > Device Manager. It's listed under Universal Serial Bus Controllers as Nuvoton Nu-Link2 USB:
3. Prepare PyOCD
PyOCD has a few built-in targets, but not the Nuvoton chip I've got. So I consulted the documentation at https://github.com/mbedmicro/pyOCD/blob/master/docs/target_support.md and learned that I need to download a pack from http://www.keil.com/dd2/pack/ . That's where I downloaded the Nuvoton ARM Cortex-M NuMicro Family Device Support pack:
Because I downloaded the pack manually, I know that I'll have to add the parameter --pack="C:/path/to/pack/Nuvoton.NuMicro_DFP.1.3.5.pack" to every PyOCD command, to ensure that PyOCD can access this pack whenever it needs to.
4. Connect PyOCD with Nuvoton board
I believe my microcontroller board is properly installed to go on to the final step: connect PyOCD to the Nuvoton microcontroller board.
First I want PyOCD to find the board. I issue the following command in a Windows console:
$ pyocd list --pack="C:/path/to/pack/Nuvoton.NuMicro_DFP.1.3.5.pack"
Unfortunately, I get the response:
No available debug probes are connected
I tried a few times, both with and without the --pack parameter. I always get the same error message.
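For reference, this is the Python-API equivalent I would expect to work once a probe is detected; a minimal sketch built on pyOCD's ConnectHelper entry point, where passing the pack path as a session option is my assumption from the docs:
from pyocd.core.helpers import ConnectHelper

# Open a session with the attached probe, pointing pyOCD at the
# manually downloaded CMSIS pack (same path as above; the "pack"
# session option is an assumption based on the pyOCD docs).
with ConnectHelper.session_with_chosen_probe(
        pack="C:/path/to/pack/Nuvoton.NuMicro_DFP.1.3.5.pack") as session:
    print(session.target.part_number)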
Note:
I had expected to see something like:
# Probe Unique ID
---------------------------------------------------------------------------
0 ARM CMSIS-DAP v1 000000800a0c882800000000000000000000000097969902
That's the output I get when I issue the $ pyocd list command and I have my SWDAP probe connected to my computer. The SWDAP is the official probe from ARM (see https://os.mbed.com/components/SWDAP-LPC11U35/) that runs the DAPLink firmware (see https://github.com/ARMmbed/DAPLink).
I got a reply from Nuvoton. Apparently the NuMaker-M032SE V1.3 board is not yet supported in PyOCD at the time of writing (02 Dec 2019). At the moment, only NuMaker M252/M263 boards are supported.
Nuvoton will make efforts to support these boards in PyOCD too, in the future.

Gumstix Overo SSD1306 OLED

Hello everybody,
I have been working for some time on setting up the tools to develop a Qt5 application on a Gumstix Overo platform with a Yocto Rocko image.
After some effort I managed to set up the development tools:
- Create a bootable SD card with Yocto Rocko and Qt5.
- Get the cross-compilation tools to build Qt5 Linux code for the Gumstix Overo (ARM).
- Configure QtCreator to develop code and compile it for the Gumstix Overo.
All my research, with step-by-step explanations, is available at this link.
In order to use the I2C interface of the Gumstix Overo, I would like to drive the small SSD1306 OLED display.
I found a project that did this for a BeagleBone here.
And a library that is practically ready to use here.
After adapting the project for the Gumstix Overo and compiling the code, I can start the application.
The problem is that, after running for a short time, the program stops and shows me these two errors:
- ioctl error: Remote I/O error.
- Chunk writtent to RAM -Failed.
The first error message comes from an additional part that I added to the code at line 202, there.
The second error message comes from the library, at line 777, there.
My unsuccessful searches on the internet lead me to ask you for help.
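For what it's worth, a quick way to check whether the display ACKs on the bus at all is a minimal probe from Python; this sketch assumes the smbus2 package, I2C bus 1, and the SSD1306's usual 0x3C address, and a NACK surfaces as the same Remote I/O error:
from smbus2 import SMBus

# Probe the SSD1306 at its usual address; a NACK raises OSError
# ("Remote I/O error"), the same failure the ioctl call reports.
with SMBus(1) as bus:            # bus number is platform-specific
    bus.write_byte(0x3C, 0x00)   # 0x00 = command control byte
print("display ACKed")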
One small clarification: given the I2C communication voltage levels between the Gumstix Overo and the SSD1306 OLED display, an electrical level adaptation had to be made.
(Images: the electrical assembly, the I2C waveform, and the voltage level adaptation schematic.)
Thank you all.
OK, I found the solution.
The problem was the logic level converter, which does not work at 1.8 VDC.
I chose the PCA9306 breakout from SparkFun, there, and it's working well.
I hope this will help someone.

Raspberry Pi with Kinect

Could anyone get the camera data from the Kinect using a Raspberry Pi?
We would like to make a wireless Kinect, connecting to it using Ethernet or WiFi. Otherwise, let me know if you have a working alternative.
To answer your question: yes, it is possible to get image and depth data on the Raspberry Pi!
Here is how to.
If you want to use just video (color, not depth) there is already a driver in the kernel! You can load it like this:
modprobe videodev
modprobe gspca_main
modprobe gspca_kinect
You get a new /dev/videoX and can use it like any other webcam!
If you need depth (which is why you want a Kinect) but have a kernel older than 3.17, you need another driver, which can be found here: https://github.com/xxorde/librekinect. If you have 3.17 or newer, the librekinect functionality is enabled by setting the gspca_kinect module's depth_mode parameter:
modprobe gspca_kinect depth_mode=1
Both work well on the current Raspbian.
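To illustrate the "like any other webcam" part, here is a minimal sketch grabbing one color frame with OpenCV, assuming OpenCV is installed and the Kinect came up as /dev/video0:
import cv2

# The kernel driver exposes the Kinect as a V4L2 device, so OpenCV
# can open it like a webcam (index 0 = /dev/video0).
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    cv2.imwrite("kinect_frame.png", frame)   # save one frame to disk
cap.release()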
If you manage to connect your Kinect camera to the Raspberry Pi, install guvcview first to see if it works.
sudo apt-get install guvcview
Then type guvcview in the terminal and it should open an options panel and the camera control view. If all of that works and you want to get the raw data to do some image processing, you will need to compile OpenCV (it takes about 4 hours of compiling), and after that you just need to program whatever you want. To compile it, just search on Google; there are lots of tutorials.
Well, as far as I know there are no success stories about getting images from the Kinect on the Raspberry Pi.
On GitHub there is an issue in the libfreenect repository about this problem. In this comment, user zarvox says that the RPi doesn't have enough power to handle the data from the Kinect.
Personally, I tried to connect the Kinect to the RPi using OpenNI2 and Sensor, but had no success. That was not a clever approach anyway, because it's impossible to work with the Microsoft Kinect on Linux using OpenNI2 due to licensing restrictions. (Well, actually it is not so impossible: you can use OpenNI2-FreenectDriver + OpenNI2 on Linux to hook up the Kinect. But this workaround is not suitable for the Raspberry Pi, because OpenNI2-FreenectDriver uses libfreenect.)
But anyway, there are some good tutorials on how to connect the ASUS Xtion Live Pro to the Raspberry Pi: one, two. And one on how to connect the Kinect to the more powerful ARM-based CubieBoard2: three.
If you intend to do robotics, the simplest thing is to use the Kinect library in ROS, here.
Otherwise you can try OpenKinect; they provide the libfreenect library, which gives you access to the accelerometers, the image, and much more.
OpenKinect on GitHub here
OpenKinect wiki here
Here is a good example, with code and all the details you need, of connecting to the Kinect and operating the motors using libfreenect.
You will need a powered USB hub to power the Kinect, and you will need to install libusb.
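As a taste of the libfreenect API, here is a minimal sketch using its Python wrapper's synchronous interface (assuming the wrapper is installed; the sync calls return numpy arrays):
import freenect

# Grab one depth frame and one color frame via the sync API
depth, _ = freenect.sync_get_depth()
rgb, _ = freenect.sync_get_video()
print(depth.shape, rgb.shape)   # e.g. (480, 640) and (480, 640, 3)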
A second possibility is to use the OpenNI library, which provides an SDK to develop middleware libraries that interface with your application; there is even an OpenNI lib for Processing here.
Yes, you can use the Kinect with a Raspberry Pi in a small robotics project.
I have done this with the OpenKinect library.
In my experience, you should keep an eye on your Raspberry Pi and monitor its voltage, since low voltage causes trouble.
You should also tune your code to use less processing power and run faster, because if your code has a problem, your image processing will respond to objects more slowly.
https://github.com/OpenKinect/libfreenect
https://github.com/OpenKinect/libfreenect/blob/master/wrappers/python/demo_cv2_threshold.py