I'm currently experiencing problems compiling TensorFlow. It looks like there are problems downloading certain packages, as mentioned here; however, the last mention of the bug was in September. What is wrong? I simply cloned the latest r1.5 branch, ran ./configure, and tried to compile. I'm using CUDA 9.1 with cuDNN 7.
bazel build --config=opt --config=cuda --config=mkl //tensorflow/tools/pip_package:build_pip_package
........
ERROR: /home/mv310/projects/tensorflow/tensorflow/tools/pip_package/BUILD:28:1: no such package 'third_party/eigen3': error globbing [**/*]: /home/mv310/projects/tensorflow/third_party/eigen3/mkl_include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include (Too many levels of symbolic links) and referenced by '//tensorflow/tools/pip_package:included_headers_gather'
ERROR: Analysis of target '//tensorflow/tools/pip_package:build_pip_package' failed; build aborted: no such package 'third_party/eigen3': error globbing [**/*]: /home/mv310/projects/tensorflow/third_party/eigen3/mkl_include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include/include (Too many levels of symbolic links)
INFO: Elapsed time: 10.798s
FAILED: Build did NOT complete successfully (37 packages loaded)
currently loading: tensorflow/core ... (11 packages)
Could you try removing the mkl_include directory and building again?
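For example, from the root of the TensorFlow checkout (the path below is taken from the error output above; adjust it if your checkout lives elsewhere):
rm -rf third_party/eigen3/mkl_include
then re-run the bazel build command.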
I was working in the models project. My issue was a leftover directory from a previous install. Deleting the directory and doing a fresh clone helped.
rm -r -f models/research/syntaxnet/tensorflow
git pull --recurse-submodules
cd tensorflow
./configure
In the tensorflow directory, I think you can also stash your changes and do a hard reset:
git stash
git reset --hard HEAD
Finally, if all else fails, clean your Bazel cache:
rm -r -f ~/.cache/bazel/*
I am trying to install Inkscape 1.2 beta on Ubuntu 20.04. The website currently only offers an AppImage and a source tarball. Since I would like to access the newest features of Inkscape via the command line, I need to build and install from the source tarball.
INSTALL.md states that I need all submodules and dependencies before installing.
How do I find these dependencies to successfully build and install Inkscape?
This list should satisfy all required dependencies on Ubuntu (a single combined command is shown after the list):
apt install
cmake
imagemagick
libdouble-conversion-dev
libgdl-3-dev
libagg-dev
libpotrace-dev
libboost-all-dev
libsoup2.4-dev
libgc-dev
libwpg-dev
poppler-utils
libpoppler-dev
libpoppler-glib-dev
libpoppler-private-dev
libvisio-dev libvisio-tools
libcdr-dev
libgtkmm-3.0-dev
libgspell-1-dev
libxslt-dev libxslt1-dev
libreadline6-dev
lib2geom-dev
lib2geom-dev is needed to resolve the "<ieeefp.h> not found" error.
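If you prefer a single command, the same packages can be installed in one go (run as root or with sudo):
apt install cmake imagemagick libdouble-conversion-dev libgdl-3-dev libagg-dev libpotrace-dev libboost-all-dev libsoup2.4-dev libgc-dev libwpg-dev poppler-utils libpoppler-dev libpoppler-glib-dev libpoppler-private-dev libvisio-dev libvisio-tools libcdr-dev libgtkmm-3.0-dev libgspell-1-dev libxslt-dev libxslt1-dev libreadline6-dev lib2geom-dev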
For building Inkscape:
Download the source tarball for Inkscape v1.2 from inkscape.org and extract it
cd <extracted inkscape directory>
mkdir build && cd build
cmake ..
make
make install
If you still get an error during cmake .., please comment below with the names of the missing modules in the error message.
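Once make install has finished (by default it installs into /usr/local, which may require sudo), you can check that the newly built binary is the one on your PATH:
which inkscape
inkscape --version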
The details on how to build Inkscape (and its dependencies) can be found in the repository itself or on the Inkscape website. For completeness, the steps are copied from the website here:
To obtain the latest source code, use the following command (downloads into a subdirectory of your current working directory called "inkscape" by default):
git clone --recurse-submodules https://gitlab.com/inkscape/inkscape.git
To update this code later, change into the download folder and use:
git pull --recurse-submodules && git submodule update
By default, git will download every branch and every commit. If you are on a slow machine or have limited disk space or internet bandwidth, you can use the shallow clone and single-branch options to limit the amount of data it downloads:
git clone --depth=1 --single-branch --recurse-submodules --shallow-submodules https://gitlab.com/inkscape/inkscape.git
Building Inkscape on Linux
Open a terminal at the root of the folder into which you downloaded the source code in the previous step.
Install build dependencies
Download and run the script that installs everything required for compiling Inkscape (check the script to see if your distribution is supported):
wget -v https://gitlab.com/inkscape/inkscape-ci-docker/-/raw/master/install_dependencies.sh -O install_dependencies.sh
bash install_dependencies.sh --recommended
Compile
To compile with CMake, do the following:
mkdir build
cd build
cmake .. -DCMAKE_INSTALL_PREFIX=${PWD}/install_dir -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
make -j8
make install
Notes:
Using ccache is optional but speeds up compilation.
The optional -j8 argument to make tells it to run 8 jobs in parallel. Feel free to adjust this to the number of hardware threads (logical cores) available on your computer (see the example after these notes).
The recommended -DCMAKE_INSTALL_PREFIX argument lets you specify a custom, isolated installation location (in the example above, install_dir/ inside the build folder). It avoids installing into system locations (where it could conflict with other versions of Inkscape) and allows running multiple versions of Inkscape in parallel. It will still use all the files (including preferences.xml) that reside in the ~/.config/inkscape directory.
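If you don't want to hard-code the job count, nproc (part of GNU coreutils) reports the number of available hardware threads, so you can let make pick it up from the machine:
make -j$(nproc)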
Run
Run it from the build directory:
install_dir/bin/inkscape
I am following this guide to build PyTorch from scratch on a Raspberry Pi 3B. For some reason, there is an error:
Building wheel torch-1.2.0a0+f13fadd
-- Building version 1.2.0a0+f13fadd
cmake --build . --target install --config Release -- -j 4
make: *** No rule to make target 'install'. Stop.
when I call python3 setup.py build. I am running Python 3.5 and I am unsure why this is failing.
I recently encountered this error, and after some research I found this answer:
https://stackoverflow.com/a/46987554/12164529
where someone mentioned the cache.
I guessed the problem was down to CMake's cache behaviour, so I ran this command:
sudo USE_ROCM=1 USE_LMDB=1 USE_OPENCV=1 MAX_JOBS=15 python setup.py clean
And the error went away.
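The USE_ROCM/USE_LMDB/USE_OPENCV/MAX_JOBS variables above come from my own setup and most likely aren't needed on a Raspberry Pi; the part that matters is the clean step, so something like this should be enough there:
python3 setup.py clean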
P.S. This is my first answer on Stack Overflow, and I'm not sure whether it's a good one, but I hope it helps anyone who finds their way here.
I solved this problem with reference to this link.
This error happened on my second installation attempt. During the first, I forgot to install a C++ compiler and got CMAKE_CXX_COMPILER not found. With a proper compiler installed, the second attempt gave me the "No rule to make target 'install'" error mentioned in the question.
The problem was solved by removing the build/ directory and re-running python setup.py install.
So it seems this is caused by the cached build information.
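Putting both fixes together, a minimal recovery sequence from the PyTorch source root would be (this uses python3 to match the question; swap install for build if that was the step you were originally running):
rm -rf build
python3 setup.py clean
python3 setup.py install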
So, I've installed Bazel via Chocolatey, Python 3.5 and 2.7, CUDA v8, cuDNN v6, and JDK 8.0. I'm now trying to custom-build TensorFlow on my Windows 10 device with AVX, AVX2, and CUDA. TensorFlow-GPU, the pre-built version, does work; I've already tested and run it successfully.
I've followed the instructions from other articles, both on TensorFlow's own site (trying to adapt some sections from the Linux/Mac installs) and on here. The furthest I've made it is cloning the GitHub repository via MSYS2, running configure.py, and then attempting to build via bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package. I receive an error, the header of which is:
Error reading java.io.IOException: CreateProcess(): The system cannot find the file specified.
: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v8.0/include/cudnn.h
I've double checked, that file does exist, so I'm not sure why I'm getting this error.
EDIT: I also attempted to run it via PowerShell and reached the same point.
Any help would be much appreciated.
I had the exact same error while trying to build TensorFlow on Windows (using cuDNN 5.1). I fixed it by launching Bazel from the MSYS2 terminal (instead of from the Windows command prompt) and manually setting the BAZEL_SH environment variable before attempting to build.
export BAZEL_SH=c:/tools/msys64/usr/bin/bash.exe
bazel build -c opt --config=win-cuda tensorflow/cc:cc_ops
The following steps helped me compile TensorFlow on Windows 10.
pacman -Syuu patch
ln -s "c:\python27\python.exe" /usr/bin/python
export BAZEL_SH=c:/tools/msys64/usr/bin/bash.exe
"C:\Documents and Settings\All Users\chocolatey\bin\bazel.exe" build --config=opt --config=win-cuda //tensorflow/tools/pip_package:build_pip_package
But after 1 hour of compilation I got another error:
C:\tools\msys64\tmp_bazel_dmitry\x1e5egqw\execroot\org_tensorflow\external\protobuf_archive\python\google\protobuf\internal\api_implementation.cc : fatal error C1083: Cannot open compiler generated file: '': Invalid argument
Target //tensorflow/tools/pip_package:build_pip_package failed to build
I am following this tutorial to install GPU-enabled TensorFlow that is compatible with CUDA Compute Capability 3.0.
I installed Java JDK 8, Bazel 0.1.0, and TensorFlow 0.6.0, and changed the configuration to run on CUDA Compute Capability 3.0. Everything is good so far.
But when I enter this command:
$HOME/bin/bazel build -c opt --config=cuda //tensorflow/cc:tutorials_example_trainer
I see this output:
Extracting Bazel installation...
.....
ERROR: /home/me/tensorflow/tensorflow/core/BUILD:1: Extension file not found: 'google/protobuf/protobuf.bzl'.
ERROR: /home/me/tensorflow/tensorflow/cc/BUILD:65:1: error loading package 'tensorflow/core': Extension file not found: 'google/protobuf/protobuf.bzl' and referenced by '//tensorflow/cc:tutorials_example_trainer'.
ERROR: Loading failed; build aborted.
INFO: Elapsed time: 1.006s
Any advice?
The problem was fixed by running this command:
$ git clone -b 0.6.0 --recurse-submodules https://github.com/tensorflow/tensorflow.git
The error message I received is documented here. Pulling all submodules fixed the problem.
I've had issues with the command above; -recurse-submodules does not exist.
Try this:
$ git clone --recursive git@github.com:tensorflow/tensorflow.git
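Either way, if you already have a checkout and only the submodules are missing, you can also fetch them in place instead of re-cloning:
git submodule update --init --recursive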
I'm new to Bazel. I'm not sure how this thing works. On the TF website, there's this section on "Create the pip package and install".
$ bazel build -c opt //tensorflow/tools/pip_package:build_pip_package
# To build with GPU support:
$ bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
# The name of the .whl file will depend on your platform.
$ pip install /tmp/tensorflow_pkg/tensorflow-0.5.0-cp27-none-linux_x86_64.whl
Here's the situation:
There's a new commit on the master branch of TensorFlow and I merge it into my fork.
I need to rebuild the wheel and do a pip install of the new wheel (correct me if I am wrong).
I run ./configure first, then bazel build, then the bazel-bin build_pip_package script, then pip install.
Is this the correct way to properly update changes from master? The bazel build step takes a really long time.
Bazel is a build tool, just like cmake and make. The steps you listed are the correct way to get updates from master. The build can take a long time the first time you build TensorFlow. Later builds, after pulling updates from master, should be faster, because Bazel, like any other build tool, does not rebuild targets whose dependencies have not changed.
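Putting the cycle together, an update after merging from master might look like the following (the --config=cuda flag only applies if you configured GPU support, and the wildcard assumes the wheel in /tmp/tensorflow_pkg is the one you just built):
./configure
bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install --upgrade /tmp/tensorflow_pkg/tensorflow-*.whl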