Add the TensorFlow Lite static library to the Buildroot cross compiler

I work on a Buildroot embedded Linux system, and I have to write machine-learning inference code using the TensorFlow Lite C++ static library. I have already built it following the TensorFlow tutorial, and my libtensorflow-lite.a file is ready to go.
But now I don't really know how to make this static library available to the Buildroot cross compiler. The Buildroot user manual doesn't seem to cover this.
I don't know whether I have to create a ".mk" file and a "Config.in" file as a package, or not.
Can someone help me?
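
For reference, the usual route is to wrap the library in a small Buildroot package so that it gets installed into the staging directory, which is the cross compiler's sysroot. Below is a minimal sketch assuming a package named tensorflow-lite that simply installs a prebuilt libtensorflow-lite.a plus headers; the paths and version are illustrative, and rebuilding the library with Buildroot's own toolchain would be the more robust variant.

package/tensorflow-lite/Config.in:

    config BR2_PACKAGE_TENSORFLOW_LITE
        bool "tensorflow-lite"
        help
          TensorFlow Lite C++ static library for on-device inference.

package/tensorflow-lite/tensorflow-lite.mk:

    TENSORFLOW_LITE_VERSION = 1.0
    # "local" site method: copy an already-built tree from the host (path is illustrative)
    TENSORFLOW_LITE_SITE = /home/user/tflite-build
    TENSORFLOW_LITE_SITE_METHOD = local
    TENSORFLOW_LITE_INSTALL_STAGING = YES
    # a static library is only needed at link time, not on the target
    TENSORFLOW_LITE_INSTALL_TARGET = NO

    define TENSORFLOW_LITE_INSTALL_STAGING_CMDS
        $(INSTALL) -D -m 0644 $(@D)/libtensorflow-lite.a \
            $(STAGING_DIR)/usr/lib/libtensorflow-lite.a
        mkdir -p $(STAGING_DIR)/usr/include/tensorflow
        cp -r $(@D)/tensorflow/lite $(STAGING_DIR)/usr/include/tensorflow/
    endef

    $(eval $(generic-package))

After adding source "package/tensorflow-lite/Config.in" to package/Config.in and enabling BR2_PACKAGE_TENSORFLOW_LITE in menuconfig, your application packages can link with -ltensorflow-lite, since staging is what the cross compiler sees as its sysroot.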

Related

Building a kernel module using CMake

I want to build a Linux kernel driver that uses another library which can be compiled for use with kernel code.
I tried using a Makefile and adding all the sources/headers, with no luck.
The recommended way is supposedly to use CMake, but I didn't find any good tutorial on how to use CMake with a Linux kernel module.
Are there some basic rules? A hello-world example?
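
For what it's worth, a kernel module can only be built by the kernel's own Kbuild system; the usual trick is to let CMake act as a thin wrapper that shells out to it. A minimal hello-world sketch of that wrapper pattern follows; the file names and the KDIR default are illustrative.

hello.c:

    #include <linux/init.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");

    static int __init hello_init(void)
    {
        pr_info("hello: loaded\n");
        return 0;
    }

    static void __exit hello_exit(void)
    {
        pr_info("hello: unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

Kbuild:

    obj-m := hello.o

CMakeLists.txt:

    cmake_minimum_required(VERSION 3.10)
    project(hello_module C)

    # Kernel build tree; override KDIR on the command line for cross builds
    execute_process(COMMAND uname -r
                    OUTPUT_VARIABLE KERNEL_RELEASE
                    OUTPUT_STRIP_TRAILING_WHITESPACE)
    set(KDIR "/lib/modules/${KERNEL_RELEASE}/build" CACHE PATH "Kernel build dir")

    # Delegate the real build to Kbuild; CMake only tracks the output
    add_custom_command(
        OUTPUT ${CMAKE_CURRENT_SOURCE_DIR}/hello.ko
        COMMAND make -C ${KDIR} M=${CMAKE_CURRENT_SOURCE_DIR} modules
        DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/hello.c ${CMAKE_CURRENT_SOURCE_DIR}/Kbuild
        COMMENT "Building hello.ko via Kbuild")

    add_custom_target(module ALL DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/hello.ko)

Linking another library into the module works the same way, but only if that library is available as kernel-compatible objects listed in the Kbuild file (e.g. hello-objs := main.o lib/foo.o); ordinary userspace .a/.so files cannot be linked into a module.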

Compile TensorFlow programs with a custom compiler

I'm trying to compile a very simple TensorFlow program (which only prints the TensorFlow version) with my company's C compiler, but the libtensorflow.so I downloaded from TensorFlow's official website is incompatible with our C compiler.
My company's C compiler is pretty much just a standard GCC, yet GCC can compile the program and our custom compiler cannot.
My colleague told me I have two options: (1) replace Bazel's compiler with our compiler and use Bazel to compile the program, or (2) compile the program with Bazel first, then compile the program using our compiler, including the pb.h files generated by Bazel (because those files can only be generated by Bazel).
I'm not sure how to do (1), but I tried (2). The problem with (2) is that I got errors saying the protoc was generated by an older version, and I'm not sure how to change to the right version.
Some additional information: (1) the OS is Linux, (2) I do not have the privilege to use sudo commands, and (3) I cannot access system directories (e.g. /usr/local).
Is there any hope I can make this work? You may ask why I don't just build the program with Bazel: it's because our company's program needs to be run by our company's simulator, and the simulator only accepts programs generated by our company's compiler.
Your only option is to build TensorFlow with Bazel and tell Bazel to use your C/C++ compiler. The easiest way is to set the CC and CXX environment variables to point to your compiler's executables. If it is really a drop-in replacement for GCC, this should work, and after building you should get a TensorFlow binary compiled with your custom compiler.
If special flags are needed, you should define a custom toolchain in Bazel to tell it how to use your compiler; this is a bit more complex, but not much. Instructions for that are at https://github.com/bazelbuild/bazel/wiki/Building-with-a-custom-toolchain
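For the simple drop-in case, the sequence would look roughly like this (the compiler paths are made up for illustration):

    # point Bazel's autoconfigured toolchain at the custom compiler
    export CC=/opt/company-cc/bin/cc
    export CXX=/opt/company-cc/bin/c++
    ./configure     # re-run TensorFlow's configure so the new compiler is picked up
    bazel build //tensorflow:libtensorflow.so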

Building TensorFlow with a different wheel name

Is it possible to build the TensorFlow Python wheel with a different name than tensorflow?
I would like to build TensorFlow with SIMD instructions like SSE, AVX, and FMA, and distribute it internally in our repository. I've managed to build it, but the package name is tensorflow. To keep the package separate from the official one, I would like to call it tensorflow-optimized or something similar.
Is this possible with the Bazel build system?
Or is there a way I could edit the wheel?
This is not part of the Bazel build system; it is part of the TensorFlow project's packaging script. I think the relevant line is https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/pip_package/setup.py#L44, so you should be able to pass --project_name to override it.
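Concretely, the name is overridden when the wheel is assembled, not during bazel build itself; assuming your checkout's build_pip_package script accepts --project_name (check its usage text, as this has varied across versions), the flow would be:

    bazel build //tensorflow/tools/pip_package:build_pip_package
    ./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg \
        --project_name tensorflow-optimized
    # should produce /tmp/tensorflow_pkg/tensorflow_optimized-<version>-*.whl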

Distributing a separate dependency on TensorFlow

I have some custom TensorFlow code in the contrib/ subdirectory of the project (all other parts of the code are standard TensorFlow from the official distribution).
I would like to be able to distribute this code as an external dependency on TensorFlow, such that I can distribute the library via pip and depend on the binary packages available for TensorFlow in pip as well.
My main goal is that users of my code should not have to compile the full TensorFlow tree (with my custom code only in contrib/) just to get my custom code/module.
Is this possible to do, and if so, how?
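
If the custom code is pure Python against TensorFlow's public API, one common pattern is to pull it out of contrib/ and ship it as its own pip project that merely declares TensorFlow as a dependency, so users install the official binary wheel instead of rebuilding the tree. A minimal sketch, with placeholder names:

    # setup.py for a hypothetical standalone package wrapping the custom code
    from setuptools import setup, find_packages

    setup(
        name='my-tf-extension',            # placeholder project name
        version='0.1.0',
        packages=find_packages(),
        install_requires=['tensorflow'],   # resolved from the official pip wheels
    )

Note that this only works cleanly for code built on public Python APIs; custom C++ ops would additionally have to be compiled against the installed TensorFlow's headers.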

TensorFlow core debug; missing debug symbols

I'm trying to learn TensorFlow's internals by stepping from its CIFAR-10 model training Python code into its core C++ code.
Using Eclipse+PyDev for step-by-step debugging of the Python code works great, but I can't figure out how to step into the C++ code of the TensorFlow core.
I tried using Eclipse CDT to build the C++ code in a separate project and attaching the debugger to the python process running cifar10_train.py, as described here, but the symbols are never loaded and (obviously) deferred breakpoints are never hit.
Background and setup:
I'm running on Ubuntu 14.04 LTS, installed the TensorFlow code from sources as described here, and my CDT project uses a Makefile containing
bazel build -c dbg //tensorflow/cc:tutorials_example_trainer.
TensorFlow loads a library called _pywrap_tensorflow.so that includes its C API (as defined in tensorflow/tensorflow/core/client/tensor_c_api.cc).
In my case, the library loaded at runtime was located at
~/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so
but the library built from the local source code was located at ~/.cache/bazel/_bazel_<username>/dbb3c677efbf9967e464a5c6a1e69337/tensorflow/bazel-out/local_linux-dbg/bin/tensorflow/python/_pywrap_tensorflow.so.
Copying the locally built library over the loaded one and then attaching to the Python process as described in the question solved the problem.
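In concrete terms, the fix was a copy along these lines (paths as in the setup above; the Bazel cache hash is machine-specific):

    # overwrite the library loaded at runtime with the locally built debug version
    cp ~/.cache/bazel/_bazel_<username>/dbb3c677efbf9967e464a5c6a1e69337/tensorflow/bazel-out/local_linux-dbg/bin/tensorflow/python/_pywrap_tensorflow.so \
       ~/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so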