CalledProcessError while installing Tensorflow using Bazel - tensorflow

I am trying to install TensorFlow from source using Bazel on a Raspberry Pi, following the official documentation as given here. When I run ./configure in the TensorFlow directory after completing all the steps for Bazel, I get the following error:
/home/cvit/bin/bazel: line 88: /home/cvit/.bazel/bin/bazel-real: cannot execute binary file: Exec format error
/home/cvit/bin/bazel: line 88: /home/cvit/.bazel/bin/bazel-real: Success
Traceback (most recent call last):
File "./configure.py", line 1552, in <module>
main()
File "./configure.py", line 1432, in main
check_bazel_version('0.15.0')
File "./configure.py", line 450, in check_bazel_version
curr_version = run_shell(['bazel', '--batch', '--bazelrc=/dev/null', 'version'])
File "./configure.py", line 141, in run_shell
output = subprocess.check_output(cmd)
File "/usr/lib/python2.7/subprocess.py", line 223, in check_output
raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command '['bazel', '--batch', '--bazelrc=/dev/null', 'version']' returned non-zero exit status 1
I didn't pass the user flag during the Bazel installation, so I think this might be a bazelrc error. I tried setting $PATH=$BAZEL/bin, but nothing changed.
Any suggestions would be appreciated!

The problem is probably that an inappropriate version of Bazel is installed.
Run bazel version in the tensorflow directory and see whether it reports an error.
If there is a problem with the Bazel version, check the .bazelversion file; if it contains a version that cannot be installed with apt, download the installer from https://github.com/bazelbuild/bazel/releases and install it, otherwise install it with apt.
After that everything should work fine.
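For reference, the version check that fails in the traceback above can be reproduced outside of ./configure with a small Python snippet. This is only a sketch of what configure.py's run_shell does, using the same command shown in the error; if the bazel binary cannot run on the Pi at all (the "Exec format error" above), it fails in the same way.

# Sketch: reproduce the check that configure.py runs (same command as in the
# traceback). A broken or wrong-architecture bazel binary raises
# CalledProcessError here just as it does during ./configure.
import subprocess

cmd = ['bazel', '--batch', '--bazelrc=/dev/null', 'version']
try:
    print(subprocess.check_output(cmd).decode('utf-8', errors='replace'))
except subprocess.CalledProcessError as err:
    print('bazel exited with status', err.returncode)
    print(err.output.decode('utf-8', errors='replace'))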

Related

Problem with building Chromium for Windows on Mac host

I am trying to cross-build Chromium on a macOS host. I have followed the instructions from https://chromium.googlesource.com/chromium/src.git/+/HEAD/docs/win_cross.md and everything seems fine. But when I run autoninja -C out/win chrome I run into the following error. Any help would be appreciated.
ninja: Entering directory `out/win'
[0/1] Regenerating ninja files
ERROR at //build/config/win/visual_studio_version.gni:27:7: Script returned non-zero exit code.
exec_script("../../vs_toolchain.py", [ "get_toolchain_dir" ], "scope")
^----------
Current dir: /chromium/src/out/win/
Command: python3 /chromium/src/build/vs_toolchain.py get_toolchain_dir
Returned 1.
stderr:
Traceback (most recent call last):
File "/chromium/src/build/vs_toolchain.py", line 587, in <module>
sys.exit(main())
File "/chromium/src/build/vs_toolchain.py", line 583, in main
return commands[sys.argv[1]](*sys.argv[2:])
File "/chromium/src/build/vs_toolchain.py", line 561, in GetToolchainDir
win_sdk_dir = SetEnvironmentAndGetSDKDir()
File "/chromium/src/build/vs_toolchain.py", line 554, in SetEnvironmentAndGetSDKDir
return NormalizePath(os.environ['WINDOWSSDKDIR'])
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/os.py", line 679, in __getitem__
raise KeyError(key) from None
KeyError: 'WINDOWSSDKDIR'
See //third_party/angle/gni/angle.gni:35:5: whence it was imported.
import("//build/config/win/visual_studio_version.gni")
^----------------------------------------------------
See //chrome/elevation_service/BUILD.gn:7:1: whence it was imported.
import("//build/toolchain/win/midl.gni")
^--------------------------------------
See //BUILD.gn:660:7: which caused the file to be included.
"//chrome/elevation_service:elevation_service_unittests",
^-------------------------------------------------------
FAILED: build.ninja
../../buildtools/mac/gn --root=../.. -q --ide=vs --regeneration gen .
ninja: error: rebuilding 'build.ninja': subcommand failed

Cannot run dask-mpi with Python 3.7 -- timeout when connecting client to dask-mpi scheduler

I'm attempting to run the Dask-MPI "Getting Started" (http://mpi.dask.org/en/latest/) example in a fresh Anaconda environment.
I set up an environment using
conda create -n dask-mpi -c conda-forge python=3.7 dask-mpi
conda activate dask-mpi
Inside the environment, I run
mpirun -np 4 dask-mpi --scheduler-file ./scheduler.json
Then, from a python interpreter on the same machine (and in the same folder), I run
from dask.distributed import Client
client = Client(scheduler_file='/path/to/scheduler.json')
This results in the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/distributed/client.py", line 712, in __init__
self.start(timeout=timeout)
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/distributed/client.py", line 858, in start
sync(self.loop, self._start, **kwargs)
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/distributed/utils.py", line 331, in sync
six.reraise(*error[0])
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/six.py", line 693, in reraise
raise value
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/distributed/utils.py", line 316, in f
result[0] = yield future
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/tornado/gen.py", line 729, in run
value = future.result()
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/tornado/gen.py", line 736, in run
yielded = self.gen.throw(*exc_info) # type: ignore
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/distributed/client.py", line 954, in _start
yield self._ensure_connected(timeout=timeout)
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/tornado/gen.py", line 729, in run
value = future.result()
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/tornado/gen.py", line 736, in run
yielded = self.gen.throw(*exc_info) # type: ignore
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/distributed/client.py", line 1015, in _ensure_connected
timedelta(seconds=timeout), self._update_scheduler_info()
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/tornado/gen.py", line 729, in run
value = future.result()
tornado.util.TimeoutError: Timeout
The terminal I ran dask-mpi from does not show any output indicating that something is trying to connect. I have verified that the port in question, 8786, is open. I've also verified via a debugger that the client is getting the correct address from the scheduler file.
I've tried this in quite a few different environments and on a few different machines, including a fresh Ubuntu 18.04 docker container. I'm completely at a loss for what steps I might be missing.
It turns out this was due to a bug in newer versions of dask.distributed (1.25.3), which broke the behavior of dask-mpi. This appears to be fixed as of dask-mpi 1.0.3 (https://github.com/dask/dask-mpi/releases/tag/1.0.3).
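Before reconnecting the client, it can help to confirm which versions actually ended up in the environment. A minimal sketch, assuming dask_mpi exposes a __version__ attribute the way distributed does:

# Sketch: print the installed versions, since the fix here was moving past
# the distributed 1.25.3 / dask-mpi < 1.0.3 combination.
import distributed
import dask_mpi  # import name of the dask-mpi package

print('distributed:', distributed.__version__)
print('dask-mpi:', getattr(dask_mpi, '__version__', 'unknown'))  # __version__ assumed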

How to compile the tutorial program in TensorFlow

After configuring TensorFlow, I tried to run the command
bazel build -c opt --config=cuda //tensorflow/cc:tutorials_example_trainer
But an error occurred that I have tried everything possible to solve, without success.
ERROR: Skipping '//tensorflow/cc:tutorials_example_trainer': error loading package 'tensorflow/cc': Encountered error while reading extension file 'cuda/build_defs.bzl': no such package '@local_config_cuda//cuda': Traceback (most recent call last):
File "/home/manyz/tensorflow/third_party/gpus/cuda_configure.bzl", line 1042
_create_local_cuda_repository(repository_ctx)
File "/home/manyz/tensorflow/third_party/gpus/cuda_configure.bzl", line 975, in _create_local_cuda_repository
_host_compiler_includes(repository_ctx, cc)
File "/home/manyz/tensorflow/third_party/gpus/cuda_configure.bzl", line 145, in _host_compiler_includes
get_cxx_inc_directories(repository_ctx, cc)
File "/home/manyz/tensorflow/third_party/gpus/cuda_configure.bzl", line 120, in get_cxx_inc_directories
set(includes_cpp)
The set constructor for depsets is deprecated and will be removed. Please use the depset constructor instead. You can temporarily enable the deprecated set constructor by passing the flag --incompatible_disallow_set_constructor=false
WARNING: Target pattern parsing failed.
ERROR: error loading package 'tensorflow/cc': Encountered error while reading extension file 'cuda/build_defs.bzl': no such package '@local_config_cuda//cuda': Traceback (most recent call last):
File "/home/manyz/tensorflow/third_party/gpus/cuda_configure.bzl", line 1042
_create_local_cuda_repository(repository_ctx)
File "/home/manyz/tensorflow/third_party/gpus/cuda_configure.bzl", line 975, in _create_local_cuda_repository
_host_compiler_includes(repository_ctx, cc)
File "/home/manyz/tensorflow/third_party/gpus/cuda_configure.bzl", line 145, in _host_compiler_includes
get_cxx_inc_directories(repository_ctx, cc)
File "/home/manyz/tensorflow/third_party/gpus/cuda_configure.bzl", line 120, in get_cxx_inc_directories
set(includes_cpp)
The set constructor for depsets is deprecated and will be removed. Please use the depset constructor instead. You can temporarily enable the deprecated set constructor by passing the flag --incompatible_disallow_set_constructor=false
INFO: Elapsed time: 2.293s
FAILED: Build did NOT complete successfully (0 packages loaded)
currently loading: tensorflow/cc
Note that I've installed CUDA 8.0, cuDNN 5.0 and Bazel 0.6.0. My system is Ubuntu 16.04.
There is already an issue open for this problem: https://github.com/tensorflow/tensorflow/issues/11859. The last comment says the issue can be fixed by editing line 120 of tensorflow/third_party/gpus/cuda_configure.bzl. If that doesn't help, I'd subscribe to the issue and wait for a fix.
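For context, the deprecation message in the build output points straight at the set() call on line 120 of cuda_configure.bzl, so the suggested edit presumably amounts to switching that call to the depset constructor. A sketch of the change (not necessarily the exact upstream patch; the variable name is illustrative):

# third_party/gpus/cuda_configure.bzl, around line 120
# before: deprecated set() constructor, rejected by newer Bazel releases
includes_cpp_set = set(includes_cpp)
# after: the depset() constructor the deprecation message asks for
includes_cpp_set = depset(includes_cpp)

Alternatively, the message itself mentions a temporary escape hatch: passing --incompatible_disallow_set_constructor=false to the build.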

Buildroot - installing the "pytest-runner" package fails

Buildroot version: 2017-02
I need to integrate the Python package 'chardet' into my build. chardet requires the 'pytest-runner' package.
Case 1
'chardet' is marked to be integrated into the build. 'pytest-runner' is not pre-fetched, and the 'chardet' package was updated to the latest version (3.0.3) before the build with the scanpypi script. When make is run, the following error message is shown, indicating problems with 'pytest-runner':
>>> python-chardet 3.0.3 Building
(cd /home/nnnn/bldr_lab/buildroot/output/build/python-chardet-3.0.3//;
PATH="/home/nnnn/bldr_lab/buildroot/output/host/bin:/home/nnnn/bldr_lab/buildroot/output/host/sbin:/home/nnnn/bldr_lab/buildroot/output/host/usr/bin:/home/nnnn/bldr_lab/buildroot/output/host/usr/sbin:/home/nnnn/x-tools/arm-cortex_a8-linux-gnueabihf/bin:/home/nnnn/bin:/home/nnnn/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
PYTHONPATH="/home/nnnn/bldr_lab/buildroot/output/target/usr/lib/python2.7/sysconfigdata/:/home/nnnn/bldr_lab/buildroot/output/target/usr/lib/python2.7/site-packages/"
_python_sysroot=/home/nnnn/bldr_lab/buildroot/output/host/usr/arm-buildroot-linux-gnueabihf/sysroot _python_prefix=/usr _python_exec_prefix=/usr
/home/nnnn/bldr_lab/buildroot/output/host/usr/bin/python setup.py build )
Download error on https://pypi.python.org/simple/pytest-runner/: unknown url type: https -- Some packages may not be found!
Couldn't find index page for 'pytest-runner' (maybe misspelled?)
Download error on https://pypi.python.org/simple/: unknown url type: https -- Some packages may not be found!
No local packages or download links found for pytest-runner
Traceback (most recent call last):
File "setup.py", line 52, in <module>
['chardetect = chardet.cli.chardetect:main']})
File "/home/nnnn/bldr_lab/buildroot/output/host/usr/lib/python2.7
/distutils/core.py", line 111, in setup
_setup_distribution = dist = klass(attrs)
File "build/bdist.linux-x86_64/egg/setuptools/dist.py", line 268, in
__init__
File "build/bdist.linux-x86_64/egg/setuptools/dist.py", line 313, in
fetch_build_eggs
File "build/bdist.linux-x86_64/egg/pkg_resources/__init__.py", line 846, in resolve
File "build/bdist.linux-x86_64/egg/pkg_resources/__init__.py", line 1091, in best_match
File "build/bdist.linux-x86_64/egg/pkg_resources/__init__.py", line 1103, in obtain
File "build/bdist.linux-x86_64/egg/setuptools/dist.py", line 380, in fetch_build_egg
File "build/bdist.linux-x86_64/egg/setuptools/command/easy_install.py", line 633, in easy_install
distutils.errors.DistutilsError: Could not find suitable distribution for Requirement.parse('pytest-runner')
package/pkg-generic.mk:216: recipe for target '/home/nnnn/bldr_lab/buildroot/output/build/python-chardet-3.0.3/.stamp_built' failed
make[1]: *** [/home/nnnn/bldr_lab/buildroot/output/build/python-chardet-3.0.3/.stamp_built] Error 1
Makefile:79: recipe for target '_all' failed
make: *** [_all] Error 2
Case 2
When I try to create the 'pytest-runner' package with scanpypi, the following error message is shown:
nnnn#xxxx:~/bldr2/buildroot$ ./support/scripts/scanpypi pytest-runner -o package
buildroot package name for pytest-runner: python-pytest-runner
Package: python-pytest-runner
Fetching package pytest-runner
Downloading package pytest-runner from https://pypi.python.org/packages/9e/4d/08889e5e27a9f5d6096b9ad257f4dea1faabb03c5ded8f665ead448f5d8a/pytest-runner-2.11.1.tar.gz...
Error: Could not install package pytest-runner
When I download 'pytest-runner-2.11.1.tar.gz' from PyPI manually, it looks like a normal PyPI tar file. Any idea what the root cause is and how to solve the problem?
-timo-

Error loading library gpuarray with Theano

I am trying to run this script to test Theano's use of my GPU and get the following error:
ERROR (theano.gpuarray): Could not initialize pygpu, support disabled
Traceback (most recent call last):
File "/home/me/anaconda3/envs/py35/lib/python3.5/site-
packages/theano/gpuarray/__init__.py", line 164, in <module>
use(config.device)
File "/home/me/anaconda3/envs/py35/lib/python3.5/site-
packages/theano/gpuarray/__init__.py", line 151, in use
init_dev(device)
File "/home/me/anaconda3/envs/py35/lib/python3.5/site-
packages/theano/gpuarray/__init__.py", line 60, in init_dev
sched=config.gpuarray.sched)
File "pygpu/gpuarray.pyx", line 614, in pygpu.gpuarray.init
(pygpu/gpuarray.c:9419)
File "pygpu/gpuarray.pyx", line 566, in pygpu.gpuarray.pygpu_init
(pygpu/gpuarray.c:9110)
File "pygpu/gpuarray.pyx", line 1021, in
pygpu.gpuarray.GpuContext.__cinit__ (pygpu/gpuarray.c:13472)
pygpu.gpuarray.GpuArrayException: Error loading library: -1
I need to use the nvidia-381 driver since my GPU is a 1080 Ti and is not compatible with nvidia-375. I'm not sure if that matters, but installing nvcc overwrites 381 and causes errors if I reinstall 381 after setting up nvcc, so I can't use nvcc.
I can import pygpu without errors, but if I run pygpu.test() I get the following error, and I don't know how to specify the DEVICE variable without nvcc.
======================================================================
ERROR: Failure: RuntimeError (No test device specified. Specify one using the DEVICE or GPUARRAY_TEST_DEVICE environment variables.)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/me/anaconda3/envs/py35/lib/python3.5/site-packages/nose/failure.py", line 39, in runTest
raise self.exc_val.with_traceback(self.tb)
File "/home/me/anaconda3/envs/py35/lib/python3.5/site-packages/nose/loader.py", line 418, in loadTestsFromName
addr.filename, addr.module)
File "/home/me/anaconda3/envs/py35/lib/python3.5/site-packages/nose/importer.py", line 47, in importFromPath
return self.importFromDir(dir_path, fqname)
File "/home/me/anaconda3/envs/py35/lib/python3.5/site-packages/nose/importer.py", line 94, in importFromDir
mod = load_module(part_fqname, fh, filename, desc)
File "/home/me/anaconda3/envs/py35/lib/python3.5/imp.py", line 234, in load_module
return load_source(name, filename, file)
File "/home/me/anaconda3/envs/py35/lib/python3.5/imp.py", line 172, in load_source
module = _load(spec)
File "<frozen importlib._bootstrap>", line 693, in _load
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 665, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "/home/me/.local/lib/python3.5/site-packages/pygpu-0.6.2-py3.5-linux-x86_64.egg/pygpu/tests/test_tools.py", line 5, in <module>
from .support import (guard_devsup, rand, check_flags, check_meta, check_all,
File "/home/me/.local/lib/python3.5/site-packages/pygpu-0.6.2-py3.5-linux-x86_64.egg/pygpu/tests/support.py", line 32, in <module>
context = gpuarray.init(get_env_dev())
File "/home/me/.local/lib/python3.5/site-packages/pygpu-0.6.2-py3.5-linux-x86_64.egg/pygpu/tests/support.py", line 29, in get_env_dev
raise RuntimeError("No test device specified. Specify one using the DEVICE or GPUARRAY_TEST_DEVICE environment variables.")
RuntimeError: No test device specified. Specify one using the DEVICE or GPUARRAY_TEST_DEVICE environment variables.
----------------------------------------------------------------------
Ran 7 tests in 0.003s
FAILED (errors=7)
<nose.result.TextTestResult run=7 errors=7 failures=0>
Warning: it's entirely possible that this is all wrong and the actual reason for your problem is in fact, as you suspect, your GPU driver.
I had the same issue with gpuarray on Windows 10.
In the end I solved it by:
completely uninstalling Python
installing CUDA 8.0 (with cuDNN 5.1)
installing Anaconda
installing Theano through Anaconda:
conda install theano pygpu
As you are using Linux: this error message basically means "it didn't work, don't ask me why", and is mostly shown if something in your setup is wrong (e.g. different compilers used for compiling Python and Theano, or an incompatible CUDA version).
I would recommend updating to CUDA 8.0 and reinstalling your Python environment via Anaconda (just in case).
On a side note: I tested your example script from the docs and at least that is working.
Note for Windows users: never install Anaconda in a location where you have spaces in the path... everything looks fine... until Theano starts having trouble finding and compiling things.
Note regarding pygpu.test():
Normally you just set the environment variable:
Windows: set DEVICE=cuda
Linux: export DEVICE=cuda
BUT the test has the habit of saying you didn't specify a device if the library couldn't be loaded...
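For completeness, the same thing can be done from inside Python before invoking the tests, which avoids wondering whether the shell variable was actually exported. A minimal sketch, assuming the variable only needs to be set before pygpu.test() loads its test support module (which matches the get_env_dev() check in the traceback above):

# Sketch: set the test device, then run pygpu's test suite.
import os
os.environ['DEVICE'] = 'cuda'  # or a specific device such as 'cuda0'

import pygpu
pygpu.test()  # as noted above, this can still claim "no test device" if libgpuarray itself fails to load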