Bazel + numpy + zip cross-compile for ARM

I am using Bazel to make a Python zip (--build_python_zip) from a py_binary rule. It works great on the same architecture, but when I try to run the x86-built app on the ARM machine it crashes with:
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'
I think this is because numpy includes some C libraries that are built for x86. From looking around, it seems I need to define a toolchain in Bazel and build with that. Does this work with the rules_python pip_install rule? How do I define/use the toolchain?
I have a minimal example at: https://github.com/CruxML/MinimalCrossCompile. Run make_zip.sh to build and run. I verified that it exhibits the issue described.
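To confirm the mismatch, you can unpack the zip and inspect the compiled extension module (a sketch; the zip name and the exact path inside it depend on your py_binary layout):

unzip -o app.zip -d /tmp/app_unpacked
find /tmp/app_unpacked -name '_multiarray_umath*' -exec file {} \;
# expect output mentioning "x86-64" even though the deployment target is ARM;
# the file name suffix (e.g. cpython-38-x86_64-linux-gnu.so) also encodes the build platform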

This appears to have been solved in rules_python 0.12 and above in PR #773.
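For anyone updating, pinning a newer release in the WORKSPACE looks roughly like this (the URL pattern and placeholder sha256 are illustrative; take the real values from the rules_python release page):

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "rules_python",
    sha256 = "<sha256 from the release page>",  # placeholder
    url = "https://github.com/bazelbuild/rules_python/releases/download/0.12.0/rules_python-0.12.0.tar.gz",
)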

Related

Has anyone experience in building a static library for the Tensorflow C++ API?

I need to build TensorFlow as a static library to include in a product. As of now, it seems there is only support for building a shared/dynamic library with Bazel. My current objective is to build a library for macOS (darwin-arm64), but I'm also going to build one for x86.
Has anyone solved this before?
I've gotten some things to work thanks to this thread:
https://github.com/tensorflow/rust/pull/351
What I've done is compile and keep all of the cached .a and .lo files:
bazel build --config=monolithic --macos_minimum_os=11.0 --cpu=darwin_arm64 -j 1 //tensorflow:libtensorflow_cc.so
I then tried to link them together using libtool, using the params file generated by Bazel to collect the needed files, sorting out unwanted lines and filtering duplicates:
libtool -static -o libtensorflow_arm64_libtool_SO3.a $(cat bazel-bin/tensorflow/libtensorflow_cc.so.*.params | sed -e 's!-Wl,-force_load,!!' | grep -e '\.a$' -e '\.lo$' | sort -t: -u -k1,1)
Some simple things work with this approach, but I can, for instance, run into the following error while running my code that interfaces with the C API:
F tensorflow/core/framework/allocator_registry.cc:85] No registered CPU AllocatorFactory
Indeed, there is currently no support whatsoever for building the TensorFlow C API as a static library. This is because Bazel is the build tool, and at the time of writing Bazel does not support producing static libraries:
https://github.com/bazelbuild/bazel/issues/1920
This has been an issue for quite some time, and it is also the reason the entire C API can't be built as a static library at the moment.
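As an aside on the specific error above: TensorFlow registers its CPU allocator (and many kernels) through static initializers, and when linking against a static archive the linker drops archive members whose symbols are never referenced directly; note that the libtool invocation in the question strips the -Wl,-force_load prefixes that normally prevent this. A hedged sketch of forcing every member into the final link (file and output names are illustrative):

# macOS ld
clang++ main.cc -o app -Wl,-force_load,libtensorflow_arm64_libtool_SO3.a
# GNU ld equivalent
g++ main.cc -o app -Wl,--whole-archive libtensorflow.a -Wl,--no-whole-archive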
But there is a way around this. You can build TensorFlow Lite as a static library with CMake, as described here in the TensorFlow git repository:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite
I also found this thread very helpful:
TensorFlow static C API library - how to link with 10 sub-dependencies?
After building this, you will also need to include the Google FlatBuffers library in your project (which you can, of course, include in your static library as well):
https://github.com/google/flatbuffers
TFLite can run most models and works even for the most complex models I've built, so it is the best way to get TensorFlow working as a static library at the moment. For more information on TFLite, see:
https://www.tensorflow.org/lite
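For reference, the CMake flow is roughly this (a sketch; see the build_cmake guide in the TFLite docs for the authoritative steps):

git clone https://github.com/tensorflow/tensorflow.git
mkdir tflite_build && cd tflite_build
cmake ../tensorflow/tensorflow/lite   # configure the TFLite CMake project
cmake --build . -j                    # produces a static libtensorflow-lite.a among its outputs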

Libtorch only has a CMake config file but needs to be built with Meson

I'm trying to make a C++ project that uses libtorch (the C++ distribution of PyTorch) using the Meson build system.
It has one simple .cpp file of about 50 lines that runs deep learning on images.
First, I confirmed that my project runs well in two environments.
Since Meson uses pkg-config files, I made a simple pkg-config file for libtorch, and it works for the project running on the CPU.
libtorch provides a TorchConfig.cmake file, so I used CMake to confirm that my project runs with the GPU version.
However, I don't know how to build the GPU version of the project using meson.
The TorchConfig.cmake file is more complicated than I thought, so it was very difficult to translate into a pkg-config file manually.
(The TorchConfig.cmake file references many other CMake files in the libtorch directories.)
I also tried to use libtorch_dep = dependency('Torch', method : 'cmake'), but it only found libtorch.so and missed the many libtorch libraries that are needed for the GPU APIs.
So, how can I build a project with Meson when libtorch only ships a CMake config file like this?
Or is there a way to use the CMake config file to write a pkg-config file?
Any comments or suggestions would be appreciated.
operating system: Ubuntu 18.04
meson version: 0.54.0
cmake version: 3.22.1
libtorch version: 1.8.0
I tried to leverage "method: cmake" in Meson, but it failed.
So I wrote a pkg-config file for libtorch manually, and it works now.
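For what it's worth, a trimmed sketch of such a hand-written libtorch.pc; the install prefix, the exact library list, and the --no-as-needed workaround (often needed so the CUDA backend's static registration isn't dropped at link time) are assumptions to adapt to your install:

prefix=/opt/libtorch
libdir=${prefix}/lib
includedir=${prefix}/include

Name: Torch
Description: libtorch, the C++ distribution of PyTorch
Version: 1.8.0
Cflags: -I${includedir} -I${includedir}/torch/csrc/api/include
Libs: -L${libdir} -Wl,--no-as-needed -ltorch -ltorch_cuda -ltorch_cpu -lc10_cuda -lc10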

Disable MKL inside Eigen

System information
OS Platform and Distribution : Debian GNU/Linux 10
TensorFlow installed from (source or binary): Source
TensorFlow version: latest
Python version: 3.7.3
Installed using virtualenv? pip? conda?: virtualenv
Bazel version (if compiling from source): 3.3.0
GCC/Compiler version (if compiling from source): gcc 8.3
I did NOT use the --config=mkl flag while building
The image is a call graph of a 1024x1024 matmul done 100 times; if you look carefully, mkldnn_sgemm is called internally by Eigen, and this is what I want to disable.
After some reading, I found out that MKL can be called internally by Eigen. After reading the Bazel documentation and seeing how TensorFlow is structured, I want to disable mkl/mkl_dnn completely whenever Eigen is used.
The Eigen build file loads the if_mkl symbol from the mkl directory.
My next step was to look at the if_mkl function inside the build_defs.bzl
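For context, if_mkl is essentially a select() wrapper of this shape (a paraphrased sketch, not the exact TensorFlow source; the config_setting label is an assumption):

def if_mkl(if_true, if_false = []):
    # Returns if_true when building with MKL support, if_false otherwise.
    return select({
        "//third_party/mkl:build_with_mkl": if_true,
        "//conditions:default": if_false,
    })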
I changed the includes attribute in the cc_library rule in the BUILD file of Eigen to ["//conditions:default"] and tried building again,
which gave me an error saying ModuleNotFoundError: No module named 'portpicker'.
So I installed portpicker: pip install portpicker
The build completed successfully after I did this, but the profile shows literally no difference: mkldnn_sgemm is still being called the same number of times by Eigen internally.
NOTE: The ultimate aim is to disable mkldnn, which is being called from inside Eigen.

Building a cross-platform application (using Rust)

I started to learn Rust programming language and I use Linux. I'd like to build a cross-platform application using this language.
The question might not be related to the Rust language in particular, but nonetheless, how do I do that? I'm interested in building a "Hello World" cross-platform application as well as more complicated ones. I just need to get the idea.
So what do I do?
UPDATE:
What I want is the ability to run a program on three different platforms without changing the sources. Do I have to build a new binary file for each platform from the sources, just as I would in C?
To run on multiple platforms you need to build an executable for each, as @huon-dbauapp commented.
This is fairly straightforward with Rust. You use "--target=" with rustc to tell it what you want to build. The same flag works with Cargo.
For example, this builds for an ARM target:
cargo build --target=arm-unknown-linux-gnueabihf
See the Rust Flexible Target Specification for more about targets.
However, Rust doesn't ship with the std Crate compiled for ARM (as of June 2015). If this is the case for your target, you'll first need to compile the std Crates for the target yourself, which involves compiling the Rust compiler from source, and specifying the target for that build!
For information, most of this is copied from: https://github.com/japaric/ruststrap/blob/master/1-how-to-cross-compile.md
The following instructions are for gcc, so if you don't have this you'll need to install it. You'll also need the corresponding cross compiler tools, so for gcc:
sudo apt-get install gcc-arm-linux-gnueabihf
Compile Rust std Crate For ARM
The following example assumes you've already installed the current Rust Nightly, so we'll just get the sources and compile for ARM. If you are using a different version of the compiler, you'll need to get that to ensure your ARM libraries match the version of the compiler you're using to build your projects.
mkdir ~/toolchains
cd ~/toolchains
git clone https://github.com/rust-lang/rust.git
cd rust
git pull
Build rustc for ARM
cd ~/toolchains/rust
./configure --target=arm-unknown-linux-gnueabihf,x86_64-unknown-linux-gnu
make -j4
sudo make install
Note "-j4" needs at least 8GB RAM, so if you hit a problem above try "make" instead.
Install ARM rustc libraries In native rustc build
sudo ln -s $HOME/toolchains/rust/arm-unknown-linux-gnueabihf /usr/lib/rustlib/arm-unknown-linux-gnueabihf
Create hello.rs containing:
pub fn main() {
    println!("Hello, world!");
}
Compile hello.rs, and tell rustc the name of the cross-compiler (which must be in your PATH):
rustc -C linker=arm-linux-gnueabihf-gcc-4.9 --target=arm-unknown-linux-gnueabihf hello.rs
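Equivalently, with Cargo you can record the linker in .cargo/config so it doesn't have to be passed on every invocation (the gcc binary name here is an assumption; match whichever cross-gcc you installed):

[target.arm-unknown-linux-gnueabihf]
linker = "arm-linux-gnueabihf-gcc"

After that, cargo build --target=arm-unknown-linux-gnueabihf picks the linker up automatically.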
Check that the produced binary is really an ARM binary:
$ file hello
hello: ELF 32-bit LSB shared object, ARM, EABI5 version 1 (SYSV), (..)
SUCCESS!!!:
Check: the binary should work on an ARM device
$ scp hello me@arm:~
$ ssh me@arm ./hello
Hello, world!
I've used this to build and link a Rust project with a separate C library as well. Instructions similar to the above on how to do this, dynamically or statically, are in a separate post, but I've used up my link quota already!
The best way to figure this out is to download the source code for Servo and explore it on your own. Servo is absolutely a cross-platform codebase, so it will have to address all of these questions, whether they be answered in build/configuration files, or the Rust source itself.
It looks like the Rust compiler might not be ready to build standalone binaries for Windows yet (see the Windows section here), so this probably can't be done yet.
For POSIX systems it should mostly Just Work, unless you're trying to do GUI stuff.
Yes, you won't need to change the source, unless you are using specific libraries that are not cross-platform.
But as @dbaupp said, native executables are different on each platform: *nix uses ELF, Windows uses PE, and OS X uses Mach-O. So you will need to compile it for each platform.
I don't know the state of cross-compiling in Rust, but if they have already implemented it, then you should be able to build all the binaries on the same platform; if not, you will have to build each binary on its own platform.

error related to static linking of glibcxx and glibc

I am trying to cross-compile an x86 program for Alpha using g++. For that, I tried both the "-static-libgcc" and "--static" options when linking the object file with libraries to generate the binaries. The cross-compilation was successful; however, I got the following errors when I ran the binaries on the Alpha machine:
./word_count: /lib/libc.so.6.1: version `GLIBC_2.4' not found (required by ./word_count)
./word_count: /usr/lib/libstdc++.so.6: version `GLIBCXX_3.4.10' not found (required by ./word_count)
These errors shouldn't happen, since I am using static linking! So, I cannot figure out why I am getting these errors! Any help is appreciated.
You need to link against both the standard C and C++ libraries. (source)
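In practice that means adding both static-linking flags at the final link, e.g. (the cross-compiler name is illustrative):

alpha-linux-gnu-g++ -o word_count word_count.o -static-libgcc -static-libstdc++

-static-libstdc++ addresses the GLIBCXX error and -static-libgcc the libgcc dependency; a fully static binary, which also avoids the GLIBC version error, needs -static instead.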