I need to run tests on some XLA passes and used bazel test --config=opt --config=cuda //tensorflow/compiler/xla/service to do so (from here). The build failed with the following message, hinting at a missing googletest dependency.
/usr/lib/x86_64-linux-gnu/Scrt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
The dependencies libgtest.a and libgtest_main.a were built from the googletest source and passed to the linker using --linkopt=/path/to/file. The link then failed with:
googletest/googletest/libgtest_main.a(gtest_main.cc.o): relocation R_X86_64_32 against `.rodata' can not be used when making a shared object; recompile with -fPIC
Adding -DCMAKE_CXX_FLAGS=-fPIC didn't help. How can I change the CMake config to build with -fPIC?
tensorflow (v1.8) is configured to be built with a locally built gcc (5.4), since the system's version (5.5) fails to build tensorflow. Would that be the cause of the problem?
Linking to the shared libraries instead of the static archives solved this problem, i.e.,
bazel test --linkopt="$GTEST_DIR/libgtest.so" --linkopt="$GTEST_DIR/libgtest_main.so"
instead of
bazel test --linkopt="$GTEST_DIR/libgtest.a" --linkopt="$GTEST_DIR/libgtest_main.a"
This still doesn't help run the tensorflow unit tests, though. There are build errors in dependencies of the unit tests, e.g. the compilation of //tensorflow/...../absl/base/internal fails.
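For reference, the shared googletest libraries used above can be produced with googletest's standard CMake build, roughly like this (a sketch; -DBUILD_SHARED_LIBS=ON produces the .so files, and -DCMAKE_POSITION_INDEPENDENT_CODE=ON is the generic CMake way to request -fPIC if you prefer to keep the static archives):
cd googletest
mkdir build && cd build
cmake -DBUILD_SHARED_LIBS=ON ..
make
The resulting libgtest.so and libgtest_main.so can then be passed to --linkopt as shown above.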
newbie here, first message :)
I need tensorflow with CUDA, AVX and SSE on a Windows machine. As far as I understood, I need to build it myself. I first tried with Anaconda, but it was a mess, so I uninstalled everything related to Python and started following the official guide step by step.
I used several commands to build, for instance:
bazel build -c opt --copt=-march=native --copt=-mfpmath=both --config=cuda -k //tensorflow/tools/pip_package:build_pip_package
bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 --config=cuda -k //tensorflow/tools/pip_package:build_pip_package
bazel build --config=opt --config=cuda --define=no_tensorflow_py_deps=true //tensorflow/tools/pip_package:build_pip_package
The first two commands came from here and might be old.
The building always fails with this message:
ERROR: missing input file 'external/llvm-project/mlir/include/mlir/Interfaces/SideEffectInterfaces.h', owner: '@llvm-project//mlir:include/mlir/Interfaces/SideEffectInterfaces.h'
Does anybody understand what is going on here?
Also, what is the best command to build among the ones I used?
Is there any way to install it in Anaconda on Windows (with CUDA, AVX and SSE capabilities)?
Building tensorflow on Windows can be tough; there are many ways for it to fail. The procedure will vary with the version you are compiling. For the most part, the instructions on the tensorflow site are correct if followed verbatim. I think the trickiest part is matching the versions of the tools used to compile with the version of tensorflow you are working with.
My suggestion would be to lock into a particular version of tensorflow and stick with that until you succeed. If you git clone the source from GitHub, I would suggest git checkout r2.2. This will put you on the most recent release branch.
I would avoid Anaconda, as it presents complications with the Python version you are working with. I have had good results with Python 3.6.8, but it may be possible to use 3.7.
You will need a specific version of Bazel as well; 2.0.0 is what works with tensorflow r2.2. Be mindful that you need to configure the BAZEL_VC environment variable before you start compiling. It should look something like C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC. You can do a bazel clean --expunge after the variable is set to avoid some confusion.
The r2.2 tensorflow also requires MSVC 2019; it will not compile with other versions. You will need the build tools for this version as well.
The last bazel build command you showed is the correct one. Don't forget to run python ./configure.py before starting a fresh compile.
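Putting those steps together, a typical session might look like this (a sketch; the Visual Studio path is illustrative and should match your own install):
set BAZEL_VC=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC
bazel clean --expunge
python ./configure.py
bazel build --config=opt --config=cuda --define=no_tensorflow_py_deps=true //tensorflow/tools/pip_package:build_pip_package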
If my guess is correct, the error message you are getting is from using an older version of MSVC on the later tensorflow source code, but that's just a guess.
I've been through the steps to build tensorflow and it's working in python. Now how do I BUILD the C tensorflow library I want to use?
$ gcc -I../tensorflow -ltensorflow g.c
/usr/bin/ld: cannot find -ltensorflow
collect2: error: ld returned 1 exit status
To build the C library from source, follow most of the instructions for building TensorFlow from source, except that instead of building the pip package, build the tarball that packages the shared libraries and C API header file:
bazel build -c opt //tensorflow/tools/lib_package:libtensorflow
This will produce a tarball in:
bazel-bin/tensorflow/tools/lib_package/libtensorflow.tar.gz
More details in https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/lib_package/README.md
The release binaries are built using the process described above.
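To use the result, unpack the tarball and point the compiler at it, for example (a sketch; the extraction directory $HOME/libtensorflow and the source file g.c are illustrative):
mkdir -p $HOME/libtensorflow
tar -C $HOME/libtensorflow -xzf bazel-bin/tensorflow/tools/lib_package/libtensorflow.tar.gz
gcc -I$HOME/libtensorflow/include -L$HOME/libtensorflow/lib g.c -ltensorflow -o g
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/libtensorflow/lib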
Hope that helps.
There are 2 libraries you have to build:
libtensorflow_framework.so
libtensorflow.so
To build them, use bazel:
bazel build //tensorflow:libtensorflow_framework.so
bazel build //tensorflow:libtensorflow.so
Once the build process of both libraries ends, you have to make the linker aware of where these libraries are, hence you have to update LIBRARY_PATH and LD_LIBRARY_PATH accordingly.
TENSORFLOW_LIB="/path/of/tensorflow/bazel-bin/tensorflow/"
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:${TENSORFLOW_LIB}"
export LIBRARY_PATH=${LIBRARY_PATH}:${TENSORFLOW_LIB}
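With those variables exported, a compile-and-link command like the one in the question should then work, e.g. (a sketch; the include path and source file name are illustrative):
gcc -I/path/of/tensorflow g.c -ltensorflow -ltensorflow_framework -o g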
I'm trying to build tensorflow on an aarch64 Linux board computer.
But tensorflow's bazel settings use the armeabi-v7a config to build libjpeg-turbo.
This causes the build error gcc: error: unrecognized command line option '-mfloat-abi=softfp' (aarch64 gcc has no softfp option).
I think bazel recognizes my build environment as Android armeabi-v7a (the correct value is Linux aarch64), but I don't know how to check that or how to control it.
Bazel does not have functions/values like CMake's message/CMAKE_SYSTEM_PROCESSOR, which is very irritating.
I am getting a build error when building TF.
I have an include file issue. I have installed the latest zlib1g-dev, but no luck.
Bazel binaries were built from source (v0.3.2).
TF command:
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
ERROR:
ERROR: tensorflow/core/BUILD:853:1: undeclared inclusion(s) in rule '//tensorflow/core:lib_internal':
this rule is missing dependency declarations for the following files included by 'tensorflow/core/lib/png/png_io.cc':
'~/.cache/bazel/_bazel_madhu/a9aabe45cf3d94341ef4fb777deb58c5/external/zlib_archive/zlib.h'
'~/.cache/bazel/_bazel_madhu/a9aabe45cf3d94341ef4fb777deb58c5/external/zlib_archive/zconf.h'
Clean your $HOME/.cache/bazel directory.
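For example (this removes all of Bazel's cached output, so the next build starts from scratch):
rm -rf $HOME/.cache/bazel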
I am trying to compile my C++ project (which works with g++ and boost) with clang. I have successfully compiled the boost libraries (1.53) with the clang tool-chain. I am using CMake to build my project, and compilation is failing with the following error.
In file included from /home/dilawar/Works/hpc21/bliff/BlifParserAndPartitioner/src/expression_graph.h:21:
/usr/local/include/boost/graph/graph_traits.hpp:14:10: fatal error: 'iterator' file not found
#include <iterator>
^
1 error generated.
I am passing -stdlib=libc++ to the compiler. I am not sure which package I should install (Ubuntu) to get libc++. I have clang and llvm installed on my machine.
Do I have to download and compile libc++ myself, or is it installed automatically when one installs clang?
When you pass -stdlib=libc++, the clang driver looks for header files in a different directory than when you don't pass the flag. You have to install libc++ separately. The libc++ webpage (http://libcxx.llvm.org/) has some details on how to install libc++ using CMake.
This webpage might also be useful:
http://marshall.calepin.co/llvmclang-and-standard-libraries-on-mac-os-x.html
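On Ubuntu, the prebuilt packages are usually enough (a sketch; exact package names can vary between releases, and test.cpp is just an illustrative source file):
sudo apt-get install libc++-dev libc++abi-dev
clang++ -stdlib=libc++ test.cpp -lc++abi -o test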