Tensorflow won't build with CUDA support - tensorflow

I've tried building TensorFlow from source as described in the installation guide. I've had success building it with CPU-only support and with the SIMD instruction sets, but I've run into trouble trying to build with CUDA support.
System information:
Mint 18 Sarah
4.4.0-21-generic
gcc 5.4.0
clang 3.8.0
Python 3.6.1
Nvidia GeForce GTX 1060 6GB (compute capability 6.1)
CUDA 8.0.61
cuDNN 6.0
Here's my attempt at building with CUDA, gcc, and SIMD:
kevin@yeti-mint ~/src/tensorflow $ bazel clean
INFO: Starting clean (this may take a while). Consider using --async if the clean takes more than several minutes.
kevin@yeti-mint ~/src/tensorflow $ ./configure
You have bazel 0.5.2 installed.
Please specify the location of python. [Default is /home/kevin/.pyenv/shims/python]:
Found possible Python library paths:
/home/kevin/.pyenv/versions/tensorflow/lib/python3.6/site-packages
Please input the desired Python library path to use. Default is [/home/kevin/.pyenv/versions/tensorflow/lib/python3.6/site-packages]
/home/kevin/.pyenv/versions/3.6.1/lib/python3.6
Do you wish to build TensorFlow with MKL support? [y/N]
No MKL support will be enabled for TensorFlow
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Do you wish to use jemalloc as the malloc implementation? [Y/n]
jemalloc enabled
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N]
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N]
No Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N]
No XLA support will be enabled for TensorFlow
Do you wish to build TensorFlow with VERBS support? [y/N]
No VERBS support will be enabled for TensorFlow
Do you wish to build TensorFlow with OpenCL support? [y/N]
No OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] y
CUDA support will be enabled for TensorFlow
Do you want to use clang as CUDA compiler? [y/N]
nvcc will be used as CUDA compiler
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 8.0]:
Please specify the location where CUDA toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 6.0]:
Please specify the location where cuDNN library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
[Default is: "6.1"]:
Do you wish to build TensorFlow with MPI support? [y/N]
MPI support will not be enabled for TensorFlow
Configuration finished
kevin@yeti-mint ~/src/tensorflow $ bazel build --config=opt --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 --verbose_failures //tensorflow/tools/pip_package:build_pip_package
WARNING: /home/kevin/src/tensorflow/tensorflow/contrib/learn/BUILD:15:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:exporter': Use SavedModel Builder instead.
WARNING: /home/kevin/src/tensorflow/tensorflow/contrib/learn/BUILD:15:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:gc': Use SavedModel instead.
INFO: Found 1 target...
ERROR: /home/kevin/.cache/bazel/_bazel_kevin/b937ae7b9a1087aeb7862ab37155238c/external/protobuf/BUILD:244:1: C++ compilation of rule '@protobuf//:js_embed' failed: crosstool_wrapper_driver_is_not_gcc failed: error executing command
(cd /home/kevin/.cache/bazel/_bazel_kevin/b937ae7b9a1087aeb7862ab37155238c/execroot/org_tensorflow && \
exec env - \
PATH=/home/kevin/.pyenv/shims:/home/kevin/.pyenv/shims:/home/kevin/.pyenv/bin:/home/kevin/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/kevin/.local/bin \
PWD=/proc/self/cwd \
external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -fPIE -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 -DNDEBUG -ffunction-sections -fdata-sections -g0 '-std=c++11' -g0 -MD -MF bazel-out/host/bin/external/protobuf/_objs/js_embed/external/protobuf/src/google/protobuf/compiler/js/embed.d '-frandom-seed=bazel-out/host/bin/external/protobuf/_objs/js_embed/external/protobuf/src/google/protobuf/compiler/js/embed.o' -iquote external/protobuf -iquote bazel-out/host/genfiles/external/protobuf -iquote external/bazel_tools -iquote bazel-out/host/genfiles/external/bazel_tools -isystem external/bazel_tools/tools/cpp/gcc3 -no-canonical-prefixes -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -fno-canonical-system-headers -c external/protobuf/src/google/protobuf/compiler/js/embed.cc -o bazel-out/host/bin/external/protobuf/_objs/js_embed/external/protobuf/src/google/protobuf/compiler/js/embed.o): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 2.
python: can't open file 'external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc': [Errno 2] No such file or directory
Target //tensorflow/tools/pip_package:build_pip_package failed to build
INFO: Elapsed time: 5.578s, Critical Path: 0.06s
Turning off all extra flags:
kevin@yeti-mint ~/src/tensorflow $ bazel build --config=opt --verbose_failures //tensorflow/tools/pip_package:build_pip_package
WARNING: /home/kevin/src/tensorflow/tensorflow/contrib/learn/BUILD:15:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:exporter': Use SavedModel Builder instead.
WARNING: /home/kevin/src/tensorflow/tensorflow/contrib/learn/BUILD:15:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:gc': Use SavedModel instead.
INFO: Found 1 target...
ERROR: /home/kevin/.cache/bazel/_bazel_kevin/b937ae7b9a1087aeb7862ab37155238c/external/fft2d/BUILD.bazel:21:1: C++ compilation of rule '@fft2d//:fft2d' failed: crosstool_wrapper_driver_is_not_gcc failed: error executing command
(cd /home/kevin/.cache/bazel/_bazel_kevin/b937ae7b9a1087aeb7862ab37155238c/execroot/org_tensorflow && \
exec env - \
PATH=/home/kevin/.pyenv/shims:/home/kevin/.pyenv/shims:/home/kevin/.pyenv/bin:/home/kevin/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/kevin/.local/bin \
PWD=/proc/self/cwd \
external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -fPIE -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 -DNDEBUG -ffunction-sections -fdata-sections -g0 -MD -MF bazel-out/host/bin/external/fft2d/_objs/fft2d/external/fft2d/fft/fftsg.d -iquote external/fft2d -iquote bazel-out/host/genfiles/external/fft2d -iquote external/bazel_tools -iquote bazel-out/host/genfiles/external/bazel_tools -isystem external/bazel_tools/tools/cpp/gcc3 -no-canonical-prefixes -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -fno-canonical-system-headers -c external/fft2d/fft/fftsg.c -o bazel-out/host/bin/external/fft2d/_objs/fft2d/external/fft2d/fft/fftsg.o): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 2.
python: can't open file 'external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc': [Errno 2] No such file or directory
Target //tensorflow/tools/pip_package:build_pip_package failed to build
INFO: Elapsed time: 3.522s, Critical Path: 2.42s
Trying with clang instead:
kevin@yeti-mint ~/src/tensorflow $ ./configure
You have bazel 0.5.2 installed.
Please specify the location of python. [Default is /home/kevin/.pyenv/shims/python]:
Found possible Python library paths:
/home/kevin/.pyenv/versions/tensorflow/lib/python3.6/site-packages
Please input the desired Python library path to use. Default is [/home/kevin/.pyenv/versions/tensorflow/lib/python3.6/site-packages]
/home/kevin/.pyenv/versions/3.6.1/lib/python3.6
Do you wish to build TensorFlow with MKL support? [y/N]
No MKL support will be enabled for TensorFlow
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Do you wish to use jemalloc as the malloc implementation? [Y/n]
jemalloc enabled
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N]
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N]
No Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N]
No XLA support will be enabled for TensorFlow
Do you wish to build TensorFlow with VERBS support? [y/N]
No VERBS support will be enabled for TensorFlow
Do you wish to build TensorFlow with OpenCL support? [y/N]
No OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] y
CUDA support will be enabled for TensorFlow
Do you want to use clang as CUDA compiler? [y/N] y
Clang will be used as CUDA compiler
Please specify which clang should be used as device and host compiler. [Default is /usr/bin/clang]:
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 8.0]:
Please specify the location where CUDA toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 6.0]:
Please specify the location where cuDNN library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
[Default is: "6.1"]:
Do you wish to build TensorFlow with MPI support? [y/N]
MPI support will not be enabled for TensorFlow
Configuration finished
kevin@yeti-mint ~/src/tensorflow $ bazel build --config=opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-msse4.2 --verbose_failures //tensorflow/tools/pip_package:build_pip_package
WARNING: /home/kevin/src/tensorflow/tensorflow/contrib/learn/BUILD:15:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:exporter': Use SavedModel Builder instead.
WARNING: /home/kevin/src/tensorflow/tensorflow/contrib/learn/BUILD:15:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:gc': Use SavedModel instead.
INFO: Found 1 target...
~1300 lines of build warnings and info...
ERROR: /home/kevin/.cache/bazel/_bazel_kevin/b937ae7b9a1087aeb7862ab37155238c/external/nccl_archive/BUILD:33:1: C++ compilation of rule '@nccl_archive//:nccl' failed: clang failed: error executing command
(cd /home/kevin/.cache/bazel/_bazel_kevin/b937ae7b9a1087aeb7862ab37155238c/execroot/org_tensorflow && \
exec env - \
CLANG_CUDA_COMPILER_PATH=/usr/bin/clang \
CUDA_TOOLKIT_PATH=/usr/local/cuda \
CUDNN_INSTALL_PATH=/usr/local/cuda-8.0 \
PWD=/proc/self/cwd \
PYTHON_BIN_PATH=/home/kevin/.pyenv/shims/python \
PYTHON_LIB_PATH=/home/kevin/.pyenv/versions/3.6.1/lib/python3.6 \
TF_CUDA_CLANG=1 \
TF_CUDA_COMPUTE_CAPABILITIES=6.1 \
TF_CUDA_VERSION=8.0 \
TF_CUDNN_VERSION=6 \
TF_NEED_CUDA=1 \
TF_NEED_OPENCL=0 \
/usr/bin/clang '-march=native' -mavx -mavx2 -mfma -msse4.2 '-march=native' -MD -MF bazel-out/local_linux-py3-opt/bin/external/nccl_archive/_objs/nccl/external/nccl_archive/src/reduce.cu.pic.d '-frandom-seed=bazel-out/local_linux-py3-opt/bin/external/nccl_archive/_objs/nccl/external/nccl_archive/src/reduce.cu.pic.o' -iquote external/nccl_archive -iquote bazel-out/local_linux-py3-opt/genfiles/external/nccl_archive -iquote external/local_config_cuda -iquote bazel-out/local_linux-py3-opt/genfiles/external/local_config_cuda -iquote external/bazel_tools -iquote bazel-out/local_linux-py3-opt/genfiles/external/bazel_tools -isystem external/local_config_cuda/cuda -isystem bazel-out/local_linux-py3-opt/genfiles/external/local_config_cuda/cuda -isystem external/local_config_cuda/cuda/include -isystem bazel-out/local_linux-py3-opt/genfiles/external/local_config_cuda/cuda/include -isystem external/bazel_tools/tools/cpp/gcc3 '-std=c++11' -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -fPIC -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -Wall -Wno-invalid-partial-specialization -fno-omit-frame-pointer -no-canonical-prefixes -DNDEBUG -g0 -O2 -ffunction-sections -fdata-sections '-DCUDA_MAJOR=0' '-DCUDA_MINOR=0' '-DNCCL_MAJOR=0' '-DNCCL_MINOR=0' '-DNCCL_PATCH=0' -Iexternal/nccl_archive/src -O3 -x cuda '-DGOOGLE_CUDA=1' '--cuda-gpu-arch=sm_61' -c bazel-out/local_linux-py3-opt/genfiles/external/nccl_archive/src/reduce.cu.cc -o bazel-out/local_linux-py3-opt/bin/external/nccl_archive/_objs/nccl/external/nccl_archive/src/reduce.cu.pic.o): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1.
clang: error: Unsupported CUDA gpu architecture: sm_61
Target //tensorflow/tools/pip_package:build_pip_package failed to build
INFO: Elapsed time: 25.030s, Critical Path: 12.66s
This is consistent behavior on the current master branch (31aa360), r1.2 (5d8c0a6), and r1.1 (8ddd727). I've seen many GitHub issues (8790, 9651, 10367) and a Stack Overflow post or two (here, I tried using gcc/g++ 4.8), but they all seem to be either already solved or only tangentially related to my problem.
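A quick way to sanity-check the "python: can't open file" message (only a diagnostic sketch, reusing the execroot path from the logs above) is to look for the generated wrapper from the directory the failing command cd's into:
cd /home/kevin/.cache/bazel/_bazel_kevin/b937ae7b9a1087aeb7862ab37155238c/execroot/org_tensorflow
ls -l external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc
head -n 1 external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc   # check the python shebang line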

Related

Compile errors running the ot-br-posix ./script/setup on RPi4

I'm trying to run ./script/setup, but I get compile errors:
Please note that the build below shows a total of 65 steps because I've restarted the setup script; the initial run had closer to 465 steps.
[1/65] Building CXX object src/common/CMakeFiles/otbr-common.dir/mainloop.cpp.o
FAILED: src/common/CMakeFiles/otbr-common.dir/mainloop.cpp.o
/usr/bin/c++ -DHAVE_LIBSYSTEMD=1 -DOTBR_ENABLE_BACKBONE_ROUTER=1 -DOTBR_ENABLE_BORDER_AGENT=1 -DOTBR_ENABLE_BORDER_ROUTING=1 -DOTBR_ENABLE_BORDER_ROUTING_COUNTERS=1 -DOTBR_ENABLE_DBUS_SERVER=1 -DOTBR_ENABLE_DNSSD_DISCOVERY_PROXY=1 -DOTBR_ENABLE_NAT64=1 -DOTBR_ENABLE_NOTIFY_UPSTART=1 -DOTBR_ENABLE_REST_SERVER=1 -DOTBR_ENABLE_SRP_ADVERTISING_PROXY=1 -DOTBR_ENABLE_SRP_SERVER_AUTO_ENABLE_MODE=1 -DOTBR_ENABLE_VENDOR_INFRA_LINK_SELECT=0 -DOTBR_MESHCOP_SERVICE_INSTANCE_NAME="\"OpenThread BorderRouter\"" -DOTBR_PACKAGE_NAME=\"OpenThread_BorderRouter\" -DOTBR_PACKAGE_VERSION=\"0.3.0-0cdef3c\" -DOTBR_PRODUCT_NAME=\"BorderRouter\" -DOTBR_SYSLOG_FACILITY_ID=LOG_USER -DOTBR_VENDOR_NAME=\"OpenThread\" -I../../include -I../../src -Ithird_party/openthread/repo/etc/cmake -I../../third_party/openthread/repo/etc/cmake -I../../third_party/openthread/repo/include -I../../third_party/openthread/repo/src/posix/platform/include -I../../third_party/openthread/repo/src -Wall -Wextra -Werror -Wfatal-errors -Wuninitialized -Wno-missing-braces -std=c++11 -MD -MT src/common/CMakeFiles/otbr-common.dir/mainloop.cpp.o -MF src/common/CMakeFiles/otbr-common.dir/mainloop.cpp.o.d -o src/common/CMakeFiles/otbr-common.dir/mainloop.cpp.o -c ../../src/common/mainloop.cpp
In file included from /usr/include/c++/8/list:63,
from ../../src/common/mainloop_manager.hpp:41,
from ../../src/common/mainloop.cpp:30:
/usr/include/c++/8/bits/stl_list.h:811:19: error: expected ‘)’ before ‘&’ token
list(_InputIterat&... __args)
compilation terminated due to -Wfatal-errors.
I receive a lot more errors, but they follow the same pattern as above.
I have followed the guide from openthread.io to set up an OpenThread Border Router.
The bootstrap script ran smoothly.
Additional information:
Git local repository path: ~/src/openthread/ot-br-posix
Command for executing the setup script:
pi@raspberrypi:~/src/openthread/ot-br-posix$> INFRA_IF_NAME=eth0 ./script/setup
RPi OS: the image recommended by the guide (Raspberry Pi OS Lite)
Libgcc versions:
libgcc-8-dev/oldstable,now 8.3.0-6+rpi1 armhf [installed,automatic]
libgcc1/oldstable,now 1:8.3.0-6+rpi1 armhf [installed]
Cmake versions:
cmake-data/oldstable,now 3.16.3-3~bpo10+1 all [installed,automatic]
cmake/oldstable,now 3.16.3-3~bpo10+1 armhf [installed]

How to build a TensorFlow op with Bazel with additional include directories

I have the TensorFlow binaries (already compiled).
I have added the following to the TensorFlow source:
tensorflow\core\user_ops\icp_op_kernel.cc - contains:
https://github.com/tensorflow/models/blob/master/research/vid2depth/ops/icp_op_kernel.cc
tensorflow\core\user_ops\BUILD - contains:
load("//tensorflow:tensorflow.bzl", "tf_custom_op_library")
tf_custom_op_library(
    name = "icp_op_kernel.so",
    srcs = ["icp_op_kernel.cc"],
)
I am trying to build with:
bazel build --config opt //tensorflow/core/user_ops:icp_op_kernel.so
And I get:
tensorflow/core/user_ops/icp_op_kernel.cc(16): fatal error C1083: Cannot open include file: 'pcl/point_types.h': No such file or directory
This is because Bazel doesn't know where the PCL include files are.
I have installed PCL, and the include directory is at:
C:\Program Files\PCL 1.6.0\include\pcl-1.6
How do I tell Bazel to also include this directory?
I will probably also need to add C:\Program Files\PCL 1.6.0\lib to the link step. How do I do that?
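A rough sketch of one possible approach, assuming Bazel's --copt and --linkopt pass-through flags together with MSVC's /I and /LIBPATH options (the quoting of paths containing spaces may need adjusting):
bazel build --config opt --copt="/IC:/Program Files/PCL 1.6.0/include/pcl-1.6" --linkopt="/LIBPATH:C:/Program Files/PCL 1.6.0/lib" //tensorflow/core/user_ops:icp_op_kernel.so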
You don't need Bazel to build ops if the Bazel build fails.
I have implemented custom ops on both CPU and GPU, basically following the two TensorFlow tutorials.
For CPU ops, follow the TensorFlow tutorial on building the op library:
TF_CFLAGS=( $(python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_compile_flags()))') )
TF_LFLAGS=( $(python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_link_flags()))') )
g++ -std=c++11 -shared zero_out.cc -o zero_out.so -fPIC ${TF_CFLAGS[@]} ${TF_LFLAGS[@]} -O2
Note on gcc version >= 5: gcc uses the new C++ ABI since version 5. The binary pip packages available on the TensorFlow website are built with gcc 4, which uses the older ABI. If you compile your op library with gcc >= 5, add -D_GLIBCXX_USE_CXX11_ABI=0 to the command line to make the library compatible with the older ABI.
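For example, under gcc 5 or newer the CPU build command above becomes (the same command, only with the ABI define appended):
g++ -std=c++11 -shared zero_out.cc -o zero_out.so -fPIC ${TF_CFLAGS[@]} ${TF_LFLAGS[@]} -O2 -D_GLIBCXX_USE_CXX11_ABI=0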
For GPU ops, check the current official build instructions in the TensorFlow guide on adding GPU op support:
nvcc -std=c++11 -c -o cuda_op_kernel.cu.o cuda_op_kernel.cu.cc \
${TF_CFLAGS[@]} -D GOOGLE_CUDA=1 -x cu -Xcompiler -fPIC
g++ -std=c++11 -shared -o cuda_op_kernel.so cuda_op_kernel.cc \
cuda_op_kernel.cu.o ${TF_CFLAGS[@]} -fPIC -lcudart ${TF_LFLAGS[@]}
As it says, note that if your CUDA libraries are not installed in /usr/local/lib64, you'll need to specify the path explicitly in the second (g++) command above. For example, add -L /usr/local/cuda-8.0/lib64/ if your CUDA is installed in /usr/local/cuda-8.0.
Also note that in some Linux setups, additional options are needed in the nvcc compile step. Add -D_MWAITXINTRIN_H_INCLUDED to the nvcc command line to avoid errors coming from mwaitxintrin.h.
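Putting those two notes together, with CUDA installed under /usr/local/cuda-8.0 the two commands above would look roughly like this (the same commands, only with the extra define and library path added; adjust the paths to your setup):
nvcc -std=c++11 -c -o cuda_op_kernel.cu.o cuda_op_kernel.cu.cc ${TF_CFLAGS[@]} -D GOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -D_MWAITXINTRIN_H_INCLUDED
g++ -std=c++11 -shared -o cuda_op_kernel.so cuda_op_kernel.cc cuda_op_kernel.cu.o ${TF_CFLAGS[@]} -fPIC -lcudart ${TF_LFLAGS[@]} -L /usr/local/cuda-8.0/lib64/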

How to edit the linker flags bazel uses to build syntaxnet/tensorflow

I can't get TensorFlow with SyntaxNet to build with CUDA on Ubuntu 16.04.
I have built it successfully without CUDA on this system.
Most likely the error is rooted in the configuration. The Bazel build of TensorFlow with CUDA generates linker commands for shared libraries that include the linker option -pie, which is intended for building position-independent executables. This causes the error "undefined reference to `main'".
/home/patrick/.cache/bazel/_bazel_patrick/5b9c9cf56f3e0138be05b0752b134bcb/external/com_google_absl/absl/base/BUILD.bazel:28:1: Linking of rule '@com_google_absl//absl/base:spinlock_wait' failed (Exit 1):
crosstool_wrapper_driver_is_not_gcc failed: error executing command
(cd /home/patrick/.cache/bazel/_bazel_patrick/5b9c9cf56f3e0138be05b0752b134bcb/execroot/__main__ && \
exec env - \
CUDA_TOOLKIT_PATH=/usr/local/cuda \
CUDNN_INSTALL_PATH=/usr/local/cuda \
GCC_HOST_COMPILER_PATH=/usr/bin/gcc \
LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64:/usr/local/cuda-9.0/extras/CUPTI/lib64:/usr/local/cuda-9.0/nvvm/lib64 \
NCCL_INSTALL_PATH=/usr \
PATH=/home/patrick/bin:/home/patrick/.local/bin:/usr/local/cuda/bin:/usr/bin:/bin \
PWD=/proc/self/cwd \
PYTHON_BIN_PATH=/usr/bin/python \
PYTHON_LIB_PATH=/usr/local/lib/python2.7/dist-packages \
TF_CUDA_CLANG=0 \
TF_CUDA_COMPUTE_CAPABILITIES=6.1 \
TF_CUDA_VERSION=9.0 \
TF_CUDNN_VERSION=7 \
TF_NCCL_VERSION=2 \
TF_NEED_CUDA=1 \
TF_NEED_OPENCL_SYCL=0 \
external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc -shared -o bazel-out/k8-opt/bin/external/com_google_absl/absl/base/libspinlock_wait.so -Wl,-no-as-needed -B/usr/bin/ -pie -Wl,-z,relro,-z,now -no-canonical-prefixes -pass-exit-codes '-Wl,--build-id=md5' '-Wl,--hash-style=gnu' -Wl,--gc-sections -Wl,@bazel-out/k8-opt/bin/external/com_google_absl/absl/base/libspinlock_wait.so-2.params)
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/Scrt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
collect2: error: ld returned 1 exit status
This linking command succeeds when removing the option -pie.
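"Removing the option -pie" here means re-running the link from the log by hand without that flag (a manual verification sketch only; the command and paths are copied from the log above):
cd /home/patrick/.cache/bazel/_bazel_patrick/5b9c9cf56f3e0138be05b0752b134bcb/execroot/__main__
external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc -shared -o bazel-out/k8-opt/bin/external/com_google_absl/absl/base/libspinlock_wait.so -Wl,-no-as-needed -B/usr/bin/ -Wl,-z,relro,-z,now -no-canonical-prefixes -pass-exit-codes '-Wl,--build-id=md5' '-Wl,--hash-style=gnu' -Wl,--gc-sections -Wl,@bazel-out/k8-opt/bin/external/com_google_absl/absl/base/libspinlock_wait.so-2.params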
Help would be appreciated, either to find a way to edit the linker flags Bazel uses or to get a hint about the configuration mistake I made from users who have encountered a similar problem. I don't think posting my configuration steps will lead to suggestions beyond the ones I have already read in other posts. The build process looks too shaky to me.
I already had a look at the definitions in the CROSSTOOL and BUILD files. I did not edit them, and they look OK (-pie is only enabled for linking executables).
I work with
Bazel 0.15.2
Tensorflow 1.8.0
Ubuntu 16.04
gcc 5.4
CUDA 9.0
CUDNN 7.1
NCCL 2.1

bazel compile error: linux/magic.h: No such file or directory

ERROR: /ban/yohchang/practice/tensorflow/bazel-0.5.1-dist/src/main/cpp/BUILD:7:1: C++ compilation of rule '//src/main/cpp:blaze_util' failed: gcc failed: error executing command
(cd /tmp/bazel_tC149834/out/execroot/bazel-0.5.1-dist &&
exec env -
LD_LIBRARY_PATH=:/volp1/quota_ctrl/yohchang/practice/tensorflow/local_install/gcc-4.8.1/lib:/volp1/quota_ctrl/yohchang/practice/tensorflow/local_install/gcc-4.8.1/lib64/:/volp1/quota_ctrl/yohchang/practice/tensorflow/local_install/mpc-0.8.1/lib:/volp1/quota_ctrl/yohchang/practice/tensorflow/local_install/gmp-4.3.2/lib:/volp1/quota_ctrl/yohchang/practice/tensorflow/local_install/mpfr-2.4.2/lib:/volp1/quota_ctrl/yohchang/practice/tensorflow/local_install/isl-0.11/lib:/volp1/quota_ctrl/yohchang/practice/tensorflow/local_install/cloog-0.18.0/lib
PATH=/volp1/quota_ctrl/yohchang/practice/tensorflow/local_install/binutils-2.28:/sc10/ap/sivl/2005.09-SP1-1/bin:/vol0/sys/myPrint/print_execd-6.2u4/bin/lx24-amd64:/sc10/ap/linux/bin:/ban/yohchang/:.:/bin:/usr/ucb:/usr/ccs/bin:/usr/dt/bin:/usr/openwin/bin:/usr/local/bin:/usr/ucb/bin:/usr/bin:/usr/sbin:/bin/X11:/usr/X11R6/bin:/sc10/ap/xv/sun:/ban/wchuang/tool:/sc10/ap/tool:/vol0/sys/tool:/usr/bin:/volp1/quota_ctrl/yohchang/practice/tensorflow/local_install/gcc-4.8.1/bin
PWD=/proc/self/cwd
/volp1/quota_ctrl/yohchang/practice/tensorflow/local_install/gcc-4.8.1/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -B/volp1/quota_ctrl/yohchang/practice/tensorflow/local_install/gcc-4.8.1/bin -B/usr/bin -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections -fdata-sections '-std=c++0x' -MD -MF bazel-out/local-opt/bin/src/main/cpp/_objs/blaze_util/src/main/cpp/blaze_util_linux.d '-frandom-seed=bazel-out/local-opt/bin/src/main/cpp/_objs/blaze_util/src/main/cpp/blaze_util_linux.o' -DBLAZE_OPENSOURCE -iquote . -iquote bazel-out/local-opt/genfiles -iquote external/bazel_tools -iquote bazel-out/local-opt/genfiles/external/bazel_tools -isystem external/bazel_tools/tools/cpp/gcc3 -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c src/main/cpp/blaze_util_linux.cc -o bazel-out/local-opt/bin/src/main/cpp/_objs/blaze_util/src/main/cpp/blaze_util_linux.o): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1.
src/main/cpp/blaze_util_linux.cc:18:25: fatal error: linux/magic.h: No such file or directory
#include "linux/magic.h"
^
compilation terminated.
I don't really know how to solve this...
I tried using Google to find some information. It suggests that this problem may be caused by my old kernel, but I don't really know what to do next.
Environment info
Operating System:
Red Hat Enterprise Linux Server release 5.7 (Tikanga)
ldd (GNU libc) 2.5
gcc-4.8.1 (I installed this compiler separately.)
other information:
I can't use yum or any other online update mechanism to install packages...
So I download source code and compile it on my Red Hat machine.
Bazel version (output of bazel info release):
0.5.1-dist
If you need any other information, please let me know!
Thanks for any help!
linux/magic.h is not part of Bazel, it's part of the environment. I, for example, have it in /usr/include/linux/magic.h.
Can you download the equivalent of kernel-headers (the ones you would install by yum install kernel-headers) and put them somewhere gcc can see them? That would be into one of the directories returned by gcc -E -xc++ - -v.
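For example, something along these lines (a hedged sketch; the kernel-headers RPM name is hypothetical, and CPLUS_INCLUDE_PATH is just one way to make the headers visible to g++ without touching system directories):
echo | gcc -E -xc++ - -v 2>&1 | sed -n '/search starts here/,/End of search list/p'   # list the include directories gcc searches
rpm2cpio kernel-headers-2.6.18.rpm | cpio -idmv                                       # unpack the headers into the current directory
export CPLUS_INCLUDE_PATH=$PWD/usr/include:$CPLUS_INCLUDE_PATH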

Compile openjdk 7 on arm ubuntu

I am trying to compile OpenJDK 7 on my ARM Ubuntu machine:
make all ALLOW_DOWNLOADS=true DISABLE_HOTSPOT_OS_VERSION_CHECK=ok
Then I received this error:
g++ -DLINUX -D_GNU_SOURCE -DIA32 -I/home/darklord/Develop/jdk7/hotspot/src/share/vm/prims -I/home/darklord/Develop/jdk7/hotspot/src/share/vm -I/home/darklord/Develop/jdk7/hotspot/src/cpu/x86/vm -I/home/darklord/Develop/jdk7/hotspot/src/os_cpu/linux_x86/vm -I/home/darklord/Develop/jdk7/hotspot/src/os/linux/vm -I/home/darklord/Develop/jdk7/hotspot/src/os/posix/vm -I/home/darklord/Develop/jdk7/hotspot/src/share/vm/adlc -I../generated -DASSERT -DTARGET_OS_FAMILY_linux -DTARGET_ARCH_x86 -DTARGET_ARCH_MODEL_x86_32 -DTARGET_OS_ARCH_linux_x86 -DTARGET_OS_ARCH_MODEL_linux_x86_32 -DTARGET_COMPILER_gcc -DCOMPILER2 -DCOMPILER1 -fno-rtti -fno-exceptions -D_REENTRANT -fcheck-new -fvisibility=hidden -m32 -march=i586 -pipe -Werror -g -c -o ../generated/adfiles/adlparse.o /home/darklord/Develop/jdk7/hotspot/src/share/vm/adlc/adlparse.cpp
g++: error: unrecognized argument in option '-march=i586'
It seems it is trying to compile using the x86 configuration. How can I make the build pass on an ARM machine?
You have to specify the proper architecture option for g++. Reference here.
-march=name
This specifies the name of the target ARM architecture. GCC uses this name to determine what kind of instructions it can emit when generating assembly code. This option can be used in conjunction with or instead of the -mcpu= option. Permissible names are: armv2, armv2a, armv3, armv3m, armv4, armv4t, armv5, armv5t, armv5te, armv6, armv6j, iwmmxt, ep9312.
Please make sure you refer to the docs for the proper version of GCC.
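For example, on an ARM-targeting g++ any of the permissible names above is accepted where -march=i586 is not (a minimal sketch; armv6 is only an example, pick the name matching your board):
echo 'int main() { return 0; }' > archtest.cpp
g++ -march=armv6 -c archtest.cpp -o archtest.o   # accepted by an ARM g++
g++ -march=i586 -c archtest.cpp -o archtest.o    # fails: unrecognized argument in option '-march=i586'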