Deploying TFLite on microcontrollers - tensorflow

I'm trying to deploy TF Lite on a microcontroller that is not among the examples provided in the TF repository, and I'm starting with an STM32L0.
My question is:
1) How can I modify the mbed project for an STM32F4 to fit another STM32 family?
I noticed I need to change the TARGET (which I could find in the mbed-os repository), but the build then fails with a few errors saying the AUDIO_DISCO and BSP modules are missing.
2) Where do I find these libraries for my board?
Specs:
Linux Ubuntu 18.04
mbed cli 1.10.2
mbed os >= 5 (contains mbed-os.lib file)
tensorflow v2.1.0
Discovery Kit for STM32L072CZY6TR (B-L072Z-LRWAN1)

For part #1, you can remove the AUDIO_DISCO and BSP .lib files that are in the generated projects for Mbed.
This should get you something that builds examples that don't need to access the microphone or accelerometers. If you want to use sensor data, you'll have to figure out the equivalents for your board, since Mbed OS doesn't offer abstractions for these kinds of devices.
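For example, in a generated project directory the removal looks like this; the BSP_DISCO_F746NG.lib and AUDIO_DISCO_F746NG.lib names below are an assumption taken from the F746NG-targeted micro_speech example, so check which .lib files your own generated project actually contains:
cd tensorflow/lite/experimental/micro/tools/make/gen/mbed_cortex-m4/prj/micro_speech/mbed
rm BSP_DISCO_F746NG.lib AUDIO_DISCO_F746NG.lib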

I managed to build for other targets by doing the following:
Find the target name for your board in mbed-os/targets/
In my case it was DISCO_L072CZ_LRWAN1
Clone v2.1.0 of the tensorflow repository (the latest version on master didn't work for me)
Substitute your target name, in lowercase, into the following command:
make -f tensorflow/lite/experimental/micro/tools/make/Makefile TARGET=mbed TAGS="CMSIS <lowercase_target>" generate_hello_world_mbed_project
Follow the next steps described in the tutorial, and run the following command with your target name in uppercase:
mbed compile -m <TARGET_UPPERCASE> -t GCC_ARM
Done! If you need to use the libraries, they will be located in
tensorflow/lite/experimental/micro/tools/make/gen/mbed_cortex-m4/prj/hello_world/mbed/mbed-os/features/
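Putting the whole sequence together for the DISCO_L072CZ_LRWAN1 target found above (mbed config root and mbed deploy are the intermediate tutorial steps referred to earlier):
make -f tensorflow/lite/experimental/micro/tools/make/Makefile TARGET=mbed TAGS="CMSIS disco_l072cz_lrwan1" generate_hello_world_mbed_project
cd tensorflow/lite/experimental/micro/tools/make/gen/mbed_cortex-m4/prj/hello_world/mbed
mbed config root .
mbed deploy
mbed compile -m DISCO_L072CZ_LRWAN1 -t GCC_ARM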
Hope it helps! =)

Related

How can I run Mozilla TTS/Coqui TTS training with CUDA on a Windows system?

I have a machine with a Quadro P5000 graphics card, running Windows 10. I'd like to train a TTS voice on this system. What do I need to install to make this work?
Here's what to install/do:
Download and install Python 3.8 (not 3.9+) for Windows. During the installation, ensure that you:
Opt to install it for all users.
Opt to add Python to the PATH.
Download and install CUDA Toolkit 10.1 (not 11.0+).
Download "cuDNN v7.6.5 (November 5th, 2019), for CUDA 10.1" (not cuDNN v8+), extract it, and then copy what's inside the cuda folder into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1.
Download the latest 64-bit version of eSpeak NG (no version constraints :-) ).
Download the latest 64-bit version of Git for Windows (no version constraints :-) ).
Open a PowerShell prompt to a folder where you'd like to install Coqui TTS.
Run git clone https://github.com/coqui-ai/TTS.git.
Run cd TTS.
Run python -m venv . (note the trailing dot: this creates the virtual environment in the current folder).
Run .\Scripts\pip install -e . (again with a trailing dot, to install TTS from the current folder).
Run the following command (this differs from the command you get from the PyTorch website because of a known issue):
.\Scripts\pip install torch==1.8.0+cu101 torchvision==0.9.0+cu101 torchaudio===0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
Put the following into a script called "test_cuda.py" in the TTS folder:
import torch
x = torch.rand(5, 3)
print(x)
print(torch.cuda.is_available())
Run the script via .\Scripts\python ./test_cuda.py and confirm the output looks like this (the first part should have just random numbers, but the last line must read True; if it does not, CUDA is not installed properly):
tensor([[0.2141, 0.7808, 0.9298],
[0.3107, 0.8569, 0.9562],
[0.2878, 0.7515, 0.5547],
[0.5007, 0.6904, 0.4136],
[0.2443, 0.4158, 0.4245]])
True
Put the following into a script called "train.bat" in the TTS folder, and then customize it for your configuration file:
set PYTHONIOENCODING=UTF-8
set PYTHONLEGACYWINDOWSSTDIO=UTF-8
set PHONEMIZER_ESPEAK_PATH=C:/Program Files/eSpeak NG/espeak-ng.exe
.\Scripts\python.exe ./TTS/bin/train_tacotron.py --config_path "C:/path/to/your/config.json"
Run the script via .\train.bat.
If you are using a different model than Tacotron or need to pass other parameters into the training script, feel free to further customize train.bat.
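For example, here is a sketch of a train.bat variant that resumes training from a checkpoint; the --restore_path flag is an assumption based on the TTS training scripts, so verify it against the output of .\Scripts\python.exe ./TTS/bin/train_tacotron.py --help for your version:
set PYTHONIOENCODING=UTF-8
set PYTHONLEGACYWINDOWSSTDIO=UTF-8
set PHONEMIZER_ESPEAK_PATH=C:/Program Files/eSpeak NG/espeak-ng.exe
rem --restore_path is an assumption; check the script's --help output
.\Scripts\python.exe ./TTS/bin/train_tacotron.py --config_path "C:/path/to/your/config.json" --restore_path "C:/path/to/checkpoint.pth.tar"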
If you are just getting started with TTS training in general, take a peek at How do I get started training a custom voice model with Mozilla TTS on Ubuntu 20.04?.

How do I build and run the code from GitHub for ns-3 in the link provided

How do I build and run the code from GitHub for ns-3 at the link below?
https://github.com/mkheirkhah/mptcp
The repository already includes the ns-3 installation steps for MPTCP.
The installation steps are described in the repository itself; follow them and you will get it working:
We have tested this code on Mac (with llvm-gcc42 and Python 2.7.3-11) and several Linux distributions (e.g. Red Hat with gcc 4.4.7, or Ubuntu 16.04 with gcc 5.4.0).
Clone the MPTCP's repository
git clone https://github.com/mkheirkhah/mptcp.git
Configure and build
CXXFLAGS="-Wall" ./waf configure build
Run a simulation
./waf --run "mptcp"
https://github.com/Kashif-Nadeem/ns-3-dev-git is the more recent fork of https://github.com/teto/ns-3-dev-git/wiki, which itself started as mkheirkhah's fork.
It should work with the latest ns-3. Compared to mkheirkhah's approach (I haven't checked whether that one is still valid), it tries to reuse the existing TCP socket code so that ordinary TCP socket applications can be used. You can read more details at https://www.researchgate.net/publication/313623789_An_Implementation_of_Multipath_TCP_in_ns3
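If you want to see what the simulation is doing, ns-3's standard NS_LOG mechanism works here as well. A minimal sketch; the MpTcpSocketBase component name is an assumption based on the fork's source file names, so substitute whichever log component you need:
NS_LOG="MpTcpSocketBase=level_all|prefix_time" ./waf --run "mptcp"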

nv-nsight-cu-cli caused Tensorflow to fail

I've downloaded the newest Nsight Compute profiling tool and I want to use it to benchmark TensorFlow applications. The code I'm using is here. It runs perfectly fine when I execute it directly, and benchmarking it with nvprof ./mnist.py causes no problems at all. However, when I try to run it with the command sudo ./nv-nsight-cu-cli [path to the file], I get the following error:
ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
I suspect that nv-nsight-cu-cli somehow doesn't pick up the environment variables at all. Is there a fix for this?
You need to search for differences in both environments:
env variables
LD_LIBRARY_PATH
/etc/ld.so.conf
/etc/ld.so.conf.d/*
cuBLAS
Is installation complete/not broken?
Is it installed at the same location on both machines?
Versions
...
You can start with locate libcublas.so on both machines to see if there's a difference. Alternatively, you can run the program under strace -f -e open to check where it tries to load libcublas.so from.
Your error has (for now) nothing to do with GPUs: libcublas.so.9.0 simply cannot be found. Find it, find out why TensorFlow cannot find it, and your problem will be solved.
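Since the failing run is under sudo, one very likely difference is that sudo resets the environment (including LD_LIBRARY_PATH). A quick way to check, plus a workaround that preserves the variable for a single run (standard Linux commands; the file names come from your question):
echo $LD_LIBRARY_PATH                # your user environment
sudo sh -c 'echo $LD_LIBRARY_PATH'   # usually empty, because of sudo's env_reset
ldconfig -p | grep libcublas         # what the dynamic loader can find system-wide
sudo env LD_LIBRARY_PATH="$LD_LIBRARY_PATH" ./nv-nsight-cu-cli ./mnist.py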
It appears that GP100 is not supported by the tool at this moment.
The answer is found here:
Nsight Compute only supports Pascal (other than GP100) and later GPUs.

How do you compile a Pharo VM without an image?

I have already cloned the VM and installed all dependencies for my platform. Now I am a bit confused, because a couple of guides suggest that a Pharo image should be started to generate the C sources translated from Slang.
"Unix"
PharoVMBuilder buildUnix32.
"OSX"
PharoVMBuilder buildMacOSX32.
"Windows"
PharoVMBuilder buildWin32.
But how do you generate a VM when you cannot start a VM on your platform? This sounds like a chicken-and-egg problem.
Does this mean it is not possible to build a VM if you cannot start an image on that platform?
If you download pre-generated sources from the CI server as suggested by Esteban, you don't need the pharo-vm sources cloned from any repository. Just uncompress the archive into a new folder and build from there.
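For example, with the sources.tar.gz from one of the CI links in the answer below, and a bash shell (as used later for the build script); adjust the target folder if the archive unpacks into its own subdirectory:
mkdir -p /c/phs
tar -xzf sources.tar.gz -C /c/phs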
Assuming you have your new sources in c:\phs, open directories.cmake and change the hardcoded paths as follows:
set(topDir "c:/phs/")
set(buildDir "c:/phs/build")
set(thirdpartyDir "${buildDir}/thirdparty")
set(platformsDir "c:/phs/platforms")
set(srcDir "c:/phs/src")
set(srcPluginsDir "${srcDir}/plugins")
set(srcVMDir "${srcDir}/vm")
set(platformName "win32")
set(targetPlatform ${platformsDir}/${platformName})
set(crossDir "${platformsDir}/Cross")
set(platformVMDir "${targetPlatform}/vm")
set(outputDir "c:/phs/results")
Since you could not start a VM, I suppose you will at least need to change the compilation flags that were used to generate the sources on the CI server. They are in c:\phs\build\CMakeLists.txt, especially the following flags (see the illustration after this list):
-march=... (set this to your processor architecture; search for "safe CFLAGS" for your CPU)
Remove -g0, which suppresses debug information
Remove -O2 (optimizations)
Remove -DNDEBUG
Change -DDEBUGVM=0 to -DDEBUGVM=1
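As a purely hypothetical illustration of the change (the exact flags line in the generated CMakeLists.txt may differ):
# before, as generated on the CI server:
#   -march=pentium4 -g0 -O2 -DNDEBUG -DDEBUGVM=0
# after, debug-friendly:
#   -march=pentium4 -g -O0 -DDEBUGVM=1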
and finally start the build script
cd /c/phs/build
bash build.sh
You need to pre-generate the sources on another platform, or take pre-generated sources from somewhere else.
Let's assume you want to compile for some kind of Unix. You can download pre-generated sources from here:
https://ci.inria.fr/pharo/view/3.0-VM/job/PharoSVM/Architecture=32,Slave=vm-builder-linux/lastSuccessfulBuild/artifact/sources.tar.gz (for a stack vm)
https://ci.inria.fr/pharo/view/3.0-VM/job/PharoVM/Architecture=32,Slave=vm-builder-linux/lastSuccessfulBuild/artifact/sources.tar.gz (for a cog vm)

iOS Symbolication Server side

How do I symbolicate an iOS crash report after it has been uploaded to a server in a Linux environment, where iOS development tools and scripts are not available? I know Apple uses atos and some other tools to map the hex addresses to symbols with the help of the .dSYM file.
I can upload the .dSYM file along with the crash report to the server. I looked at QuincyKit, but they do the symbolication locally. Others like HockeyApp and Crittercism do it remotely.
Please recommend possible ways to do this on the server.
It is possible. You can take a look at https://github.com/facebook/atosl
I got it working under Linux (Ubuntu Server). However, it takes some time to get it up and running.
Installing atosl
First, you need to install libdwarf-dev, dwarfdump, binutils-dev and libiberty-dev.
E.g. on Ubuntu:
$ sudo apt-get install libdwarf-dev dwarfdump binutils-dev libiberty-dev
Download or clone the atosl repo from GitHub:
$ git clone https://github.com/facebook/atosl.git
cd into the atosl dir
$ cd atosl
Create a local config file config.mk.local containing a flag with the location of your binutils apps (on Ubuntu, that's /usr/bin by default). If you're not sure, you can find out by executing cat /var/lib/dpkg/info/binutils.list | less and copying the path of the objdump file. E.g. if the entry is /usr/bin/objdump, your path is /usr/bin.
So in the end, your config.mk.local should look like this:
LDFLAGS += -L/usr/bin
Compile it:
$ make
Now you can start using it:
$ ./atosl --help
Symbolicating example
To show how atosl is used, I'll provide a simple example.
Now let's take a look at a line from the crash log:
13 ErrorApp 0x000ea294 0xe3000 + 29332
To symbolicate this, we will need the load address, and the runtime address.
In this example the runtime address is 0x000ea294, and the load address is 0xe3000.
Now we have everything we need:
$ ./atosl -o [YOUR_dSYM_FILE] -l [LOAD_ADDRESS] [RUNTIME_ADDRESS]
In this example:
$ ./atosl -o ErrorApp.app.dSYM/Contents/Resources/DWARF/ErrorApp -l 0xe3000 0x000ea294
Which returns the symbolicated line:
main (in ErrorApp) (main.m:16)
FYI
You can find your vmaddr, which is usually 0x00001000, by looking at the segname __TEXT Mach-O load command of your binary. In my example it happens to be different, namely 0x00004000.
To find the address, we need to do some math.
The address is found by the following formula:
address = vmaddr + ( runtime_address - load_address )
In this example our address is:
0x00004000 + ( 0x000ea294 - 0xe3000 ) = 0xB294
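If you want the shell to do this math for you (plain bash arithmetic):
$ printf '0x%X\n' $(( 0x00004000 + (0x000ea294 - 0xe3000) ))
0xB294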
I haven't played around with this that much yet, but for now it seems to give me the results I needed. Maybe it will work for you too.
You would need to implement your own Linux-compatible versions of atos, otool and dwarfdump (at least the functionality needed for symbolication). The Apple tools are not open source and only run on Mac OS X.
None of the services provides a solution that can be used by third parties on non-OS X systems. So your only chance, besides implementing the required functionality on your Linux system, is to do it on a Mac as QuincyKit does (see https://github.com/TheRealKerni/QuincyKit/wiki/Remote-symbolication) or to use a third-party service.
Note: I am the creator of QuincyKit and Co-Founder of HockeyApp.