Unable to compile TensorFlow from source on an Intel i7 930 CPU; GTS-250 GPU - tensorflow

I'm new to TF and want to compile from source because my desktop's CPU does not support AVX instructions. My system has an Intel i7 930 processor (Bloomfield, from the Nehalem family) and an Nvidia GTS 250 GPU. Yeah, I know, both are getting long in the tooth.
The following is the full output up to the point where the process fails. I am following the instructions from this page:
https://www.tensorflow.org/install/source_windows
I pushed forward with the compile and ended up seeing the following errors...
H:\Python\TensorFlowCompile\tensorflow>python ./configure.py
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.29.1 installed.
Please specify the location of python. [Default is C:\Users\Zeek\AppData\Local\Programs\Python\Python37\python.exe]:
Found possible Python library paths:
C:\Users\Zeek\AppData\Local\Programs\Python\Python37\lib\site-packages
Please input the desired Python library path to use. Default is [C:\Users\Zeek\AppData\Local\Programs\Python\Python37\lib\site-packages]
Do you wish to build TensorFlow with XLA JIT support? [y/N]: N
No XLA JIT support will be enabled for TensorFlow.
Do you wish to build TensorFlow with ROCm support? [y/N]: N
No ROCm support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: N
No CUDA support will be enabled for TensorFlow.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is /arch:AVX]: --config=v2 -march=nehalem
Would you like to override eigen strong inline for some C++ compilation to reduce the compilation time? [Y/n]: Y
Eigen strong inline overridden.
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
--config=mkl # Build with MKL support.
--config=monolithic # Config for mostly static monolithic build.
--config=ngraph # Build with Intel nGraph support.
--config=numa # Build with NUMA support.
--config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
--config=v2 # Build TensorFlow 2.x instead of 1.x.
Preconfigured Bazel build configs to DISABLE default on features:
--config=noaws # Disable AWS S3 filesystem support.
--config=nogcp # Disable GCP support.
--config=nohdfs # Disable HDFS support.
--config=nonccl # Disable NVIDIA NCCL support.
H:\Python\TensorFlowCompile\tensorflow>bazel build //tensorflow/tools/pip_package:build_pip_package
Starting local Bazel server and connecting to it...
INFO: Options provided by the client:
Inherited 'common' options: --isatty=0 --terminal_columns=269
INFO: Options provided by the client:
'build' options: --python_path=C:/Users/Zeek/AppData/Local/Programs/Python/Python37/python.exe
INFO: Reading rc options for 'build' from h:\python\tensorflowcompile\tensorflow\.bazelrc:
'build' options: --apple_platform_type=macos --define framework_shared_object=true --define open_source_build=true --java_toolchain=//third_party/toolchains/java:tf_java_toolchain --host_java_toolchain=//third_party/toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone --strategy=Genrule=standalone -c opt --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --announce_rc --define=grpc_no_ares=true --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --config=v2
INFO: Reading rc options for 'build' from h:\python\tensorflowcompile\tensorflow\.tf_configure.bazelrc:
'build' options: --action_env PYTHON_BIN_PATH=C:/Users/Zeek/AppData/Local/Programs/Python/Python37/python.exe --action_env PYTHON_LIB_PATH=C:/Users/Zeek/AppData/Local/Programs/Python/Python37/lib/site-packages --python_path=C:/Users/Zeek/AppData/Local/Programs/Python/Python37/python.exe --config monolithic --copt=-w --host_copt=-w --copt=-DWIN32_LEAN_AND_MEAN --host_copt=-DWIN32_LEAN_AND_MEAN --copt=-DNOGDI --host_copt=-DNOGDI --verbose_failures --distinct_host_configuration=false --define=override_eigen_strong_inline=true --action_env TF_CONFIGURE_IOS=0
INFO: Found applicable config definition build:v2 in file h:\python\tensorflowcompile\tensorflow\.bazelrc: --define=tf_api_version=2
INFO: Found applicable config definition build:monolithic in file h:\python\tensorflowcompile\tensorflow\.bazelrc: --define framework_shared_object=false
INFO: Found applicable config definition build:monolithic in file h:\python\tensorflowcompile\tensorflow\.bazelrc: --define framework_shared_object=false
Loading:
Loading: 0 packages loaded
Loading: 0 packages loaded
Analyzing: target //tensorflow/tools/pip_package:build_pip_package (1 packages loaded, 0 targets configured)
Analyzing: target //tensorflow/tools/pip_package:build_pip_package (6 packages loaded, 18 targets configured)
Analyzing: target //tensorflow/tools/pip_package:build_pip_package (6 packages loaded, 18 targets configured)
Analyzing: target //tensorflow/tools/pip_package:build_pip_package (6 packages loaded, 18 targets configured)
INFO: Call stack for the definition of repository 'com_google_protobuf' which is a tf_http_archive (rule definition at H:/python/tensorflowcompile/tensorflow/third_party/repo.bzl:121:19):
- H:/python/tensorflowcompile/tensorflow/tensorflow/workspace.bzl:434:5
- H:/python/tensorflowcompile/tensorflow/WORKSPACE:19:1
Analyzing: target //tensorflow/tools/pip_package:build_pip_package (6 packages loaded, 18 targets configured)
INFO: Repository 'com_google_protobuf' used the following cache hits instead of downloading the corresponding file.
* Hash 'b9e92f9af8819bbbc514e2902aec860415b70209f31dfc8c4fa72515a5df9d59' for https://storage.googleapis.com/mirror.tensorflow.org/github.com/protocolbuffers/protobuf/archive/310ba5ee72661c081129eb878c1bbcec936b20f0.tar.gz
If the definition of 'com_google_protobuf' was updated, verify that the hashes were also updated.
ERROR: An error occurred during the fetch of repository 'com_google_protobuf':
Traceback (most recent call last):
File "H:/python/tensorflowcompile/tensorflow/third_party/repo.bzl", line 101
_apply_patch(ctx, ctx.attr.patch_file)
File "H:/python/tensorflowcompile/tensorflow/third_party/repo.bzl", line 68, in _apply_patch
_execute_and_check_ret_code(ctx, cmd)
File "H:/python/tensorflowcompile/tensorflow/third_party/repo.bzl", line 52, in _execute_and_check_ret_code
fail("Non-zero return code({1}) when ...))
Non-zero return code(127) when executing 'C:\msys64\usr\bin\bash.exe -l -c "patch" "-p1" "-d" "C:/users/Zeek/_bazel_Zeek/hfhzrtpt/external/com_google_protobuf" "-i" "H:/python/tensorflowcompile/tensorflow/third_party/protobuf/protobuf.patch"':
Stdout:
Stderr: /usr/bin/bash: patch: command not found
ERROR: Analysis of target '//tensorflow/tools/pip_package:build_pip_package' failed; build aborted: no such package '@com_google_protobuf//': Traceback (most recent call last):
File "H:/python/tensorflowcompile/tensorflow/third_party/repo.bzl", line 101
_apply_patch(ctx, ctx.attr.patch_file)
File "H:/python/tensorflowcompile/tensorflow/third_party/repo.bzl", line 68, in _apply_patch
_execute_and_check_ret_code(ctx, cmd)
File "H:/python/tensorflowcompile/tensorflow/third_party/repo.bzl", line 52, in _execute_and_check_ret_code
fail("Non-zero return code({1}) when ...))
Non-zero return code(127) when executing 'C:\msys64\usr\bin\bash.exe -l -c "patch" "-p1" "-d" "C:/users/Zeek/_bazel_Zeek/hfhzrtpt/external/com_google_protobuf" "-i" "H:/python/tensorflowcompile/tensorflow/third_party/protobuf/protobuf.patch"':
Stdout:
Stderr: /usr/bin/bash: patch: command not found
INFO: Elapsed time: 19.298s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (6 packages loaded, 18 targets configured)
FAILED: Build did NOT complete successfully (6 packages loaded, 18 targets configured)
I ran a newer version of Bazel prior to this compile and was presented with the following error, which is why I reverted back to 0.29.1:
H:\Python\TensorFlowCompile\tensorflow>python ./configure.py
Extracting Bazel installation...
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 1.0.1 installed.
Please downgrade your bazel installation to version 0.29.1 or lower to build TensorFlow! To downgrade: download the installer for the old version (from https://github.com/bazelbuild/bazel/releases) then run the installer.
Any thoughts on where I am going wrong? I am still pretty new at this, so it's probably something pretty glaring/easy...
[Edit] This seemed to be resolved by installing patch, as it wasn't installed:
pacman -Syuu patch
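To double-check that Bazel's bash can actually see patch before retrying (assuming MSYS2 is installed at C:\msys64, as in the error above), you can run:
C:\msys64\usr\bin\bash.exe -l -c "which patch && patch --version"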
I pushed a bit further and now I'm failing at the zip stage, as follows:
INFO: Analyzed target //tensorflow/tools/pip_package:build_pip_package (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
[0 / 23] [Prepa] PythonZipper tensorflow/python/keras/api/create_tensorflow.python_api_1_keras_python_api_gen_compat_v1.zip ... (3 actions, 0 running)
ERROR: H:/python/tensorflowcompile/tensorflow/tensorflow/lite/python/BUILD:46:1: PythonZipper tensorflow/lite/python/tflite_convert.zip failed (Exit 255)
FATAL: MappedOutputFile(bazel-out/x64_windows-opt/bin/tensorflow/lite/python/tflite_convert.zip): CreateFileMapping failed
Target //tensorflow/tools/pip_package:build_pip_package failed to build
INFO: Elapsed time: 38.195s, Critical Path: 4.35s
INFO: 0 processes.
FAILED: Build did NOT complete successfully
FAILED: Build did NOT complete successfully
I then went back to the Python configure stage to make sure that I had picked v2 and the proper CPU family, but when I redo the compile I see ANOTHER error, and it is failing much sooner:
java.lang.RuntimeException: Unrecoverable error while evaluating node 'REPOSITORY_DIRECTORY:@local_config_cc' (requested by nodes 'REPOSITORY:@local_config_cc')
at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:528)
at com.google.devtools.build.lib.concurrent.AbstractQueueVisitor$WrappedRunnable.run(AbstractQueueVisitor.java:399)
at java.base/java.util.concurrent.ForkJoinTask$AdaptedRunnableAction.exec(Unknown Source)
at java.base/java.util.concurrent.ForkJoinTask.doExec(Unknown Source)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(Unknown Source)
at java.base/java.util.concurrent.ForkJoinPool.scan(Unknown Source)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(Unknown Source)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(Unknown Source)
Caused by: java.nio.file.InvalidPathException: Illegal char <*> at index 60: C:/users/bill/_bazel_bill/hfhzrtpt/external/local_config_cc/*********************************************************************
** Visual Studio 2017 Developer Command Prompt v15.0
** Copyright (c) 2017 Microsoft Corporation
C:/Program Files (x86)/Microsoft Visual Studio/2017/BuildTools/VC/Auxiliary/Build/VCVARSALL.BAT
at java.base/sun.nio.fs.WindowsPathParser.normalize(Unknown Source)
at java.base/sun.nio.fs.WindowsPathParser.parse(Unknown Source)
at java.base/sun.nio.fs.WindowsPathParser.parse(Unknown Source)
at java.base/sun.nio.fs.WindowsPath.parse(Unknown Source)
at java.base/sun.nio.fs.WindowsFileSystem.getPath(Unknown Source)
at java.base/java.nio.file.Path.of(Unknown Source)
at java.base/java.nio.file.Paths.get(Unknown Source)
at com.google.devtools.build.lib.vfs.JavaIoFileSystem.getNioPath(JavaIoFileSystem.java:84)
at com.google.devtools.build.lib.vfs.JavaIoFileSystem.exists(JavaIoFileSystem.java:119)
at com.google.devtools.build.lib.vfs.Path.exists(Path.java:356)
at com.google.devtools.build.lib.bazel.repository.skylark.SkylarkPath.exists(SkylarkPath.java:79)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.base/java.lang.reflect.Method.invoke(Unknown Source)
at com.google.devtools.build.lib.syntax.MethodDescriptor.call(MethodDescriptor.java:135)
at com.google.devtools.build.lib.syntax.DotExpression.eval(DotExpression.java:126)
at com.google.devtools.build.lib.syntax.DotExpression.doEval(DotExpression.java:51)
at com.google.devtools.build.lib.syntax.Expression.eval(Expression.java:75)
at com.google.devtools.build.lib.syntax.UnaryOperatorExpression.doEval(UnaryOperatorExpression.java:98)
at com.google.devtools.build.lib.syntax.Expression.eval(Expression.java:75)
at com.google.devtools.build.lib.syntax.Eval.execIf(Eval.java:139)
at com.google.devtools.build.lib.syntax.Eval.execDispatch(Eval.java:214)
at com.google.devtools.build.lib.syntax.Eval.exec(Eval.java:183)
at com.google.devtools.build.lib.syntax.UserDefinedFunction.call(UserDefinedFunction.java:91)
at com.google.devtools.build.lib.syntax.BaseFunction.callWithArgArray(BaseFunction.java:474)
at com.google.devtools.build.lib.syntax.BaseFunction.call(BaseFunction.java:436)
at com.google.devtools.build.lib.syntax.FuncallExpression.callFunction(FuncallExpression.java:992)
at com.google.devtools.build.lib.syntax.FuncallExpression.doEval(FuncallExpression.java:904)
at com.google.devtools.build.lib.syntax.Expression.eval(Expression.java:75)
at com.google.devtools.build.lib.syntax.UnaryOperatorExpression.doEval(UnaryOperatorExpression.java:98)
at com.google.devtools.build.lib.syntax.Expression.eval(Expression.java:75)
at com.google.devtools.build.lib.syntax.Eval.execIf(Eval.java:139)
at com.google.devtools.build.lib.syntax.Eval.execDispatch(Eval.java:214)
at com.google.devtools.build.lib.syntax.Eval.exec(Eval.java:183)
at com.google.devtools.build.lib.syntax.UserDefinedFunction.call(UserDefinedFunction.java:91)
at com.google.devtools.build.lib.syntax.BaseFunction.callWithArgArray(BaseFunction.java:474)
at com.google.devtools.build.lib.syntax.BaseFunction.call(BaseFunction.java:436)
at com.google.devtools.build.lib.syntax.FuncallExpression.callFunction(FuncallExpression.java:992)
at com.google.devtools.build.lib.syntax.FuncallExpression.doEval(FuncallExpression.java:904)
at com.google.devtools.build.lib.syntax.Expression.eval(Expression.java:75)
at com.google.devtools.build.lib.syntax.Eval.execAssignment(Eval.java:72)
at com.google.devtools.build.lib.syntax.Eval.execDispatch(Eval.java:192)
at com.google.devtools.build.lib.syntax.Eval.exec(Eval.java:183)
at com.google.devtools.build.lib.syntax.Eval.execStatements(Eval.java:231)
at com.google.devtools.build.lib.syntax.Eval.execIf(Eval.java:144)
at com.google.devtools.build.lib.syntax.Eval.execDispatch(Eval.java:214)
at com.google.devtools.build.lib.syntax.Eval.exec(Eval.java:183)
at com.google.devtools.build.lib.syntax.UserDefinedFunction.call(UserDefinedFunction.java:91)
at com.google.devtools.build.lib.syntax.BaseFunction.callWithArgArray(BaseFunction.java:474)
at com.google.devtools.build.lib.syntax.BaseFunction.call(BaseFunction.java:436)
at com.google.devtools.build.lib.syntax.FuncallExpression.callFunction(FuncallExpression.java:992)
at com.google.devtools.build.lib.syntax.FuncallExpression.doEval(FuncallExpression.java:904)
at com.google.devtools.build.lib.syntax.Expression.eval(Expression.java:75)
at com.google.devtools.build.lib.syntax.Eval.execAssignment(Eval.java:72)
at com.google.devtools.build.lib.syntax.Eval.execDispatch(Eval.java:192)
at com.google.devtools.build.lib.syntax.Eval.exec(Eval.java:183)
at com.google.devtools.build.lib.syntax.UserDefinedFunction.call(UserDefinedFunction.java:91)
at com.google.devtools.build.lib.syntax.BaseFunction.callWithArgArray(BaseFunction.java:474)
at com.google.devtools.build.lib.syntax.BaseFunction.call(BaseFunction.java:436)
at com.google.devtools.build.lib.syntax.FuncallExpression.callFunction(FuncallExpression.java:992)
at com.google.devtools.build.lib.syntax.FuncallExpression.doEval(FuncallExpression.java:904)
at com.google.devtools.build.lib.syntax.Expression.eval(Expression.java:75)
at com.google.devtools.build.lib.syntax.Eval.execDispatch(Eval.java:201)
at com.google.devtools.build.lib.syntax.Eval.exec(Eval.java:183)
at com.google.devtools.build.lib.syntax.Eval.execStatements(Eval.java:231)
at com.google.devtools.build.lib.syntax.Eval.execIfBranch(Eval.java:83)
at com.google.devtools.build.lib.syntax.Eval.execDispatch(Eval.java:198)
at com.google.devtools.build.lib.syntax.Eval.exec(Eval.java:183)
at com.google.devtools.build.lib.syntax.Eval.execIf(Eval.java:140)
at com.google.devtools.build.lib.syntax.Eval.execDispatch(Eval.java:214)
at com.google.devtools.build.lib.syntax.Eval.exec(Eval.java:183)
at com.google.devtools.build.lib.syntax.UserDefinedFunction.call(UserDefinedFunction.java:91)
at com.google.devtools.build.lib.syntax.BaseFunction.callWithArgArray(BaseFunction.java:474)
at com.google.devtools.build.lib.syntax.BaseFunction.call(BaseFunction.java:436)
at com.google.devtools.build.lib.bazel.repository.skylark.SkylarkRepositoryFunction.fetch(SkylarkRepositoryFunction.java:173)
at com.google.devtools.build.lib.rules.repository.RepositoryDelegatorFunction.fetchRepository(RepositoryDelegatorFunction.java:298)
at com.google.devtools.build.lib.rules.repository.RepositoryDelegatorFunction.compute(RepositoryDelegatorFunction.java:225)
at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:451)
... 7 more
FAILED: Build did NOT complete successfully (231 packages loaded, 3779 targets configured)
WARNING: Waiting for server process to terminate (waited 5 seconds, waiting at most 60)
H:\Python\TensorFlowCompile\tensorflow>
Sorry folks, I'm not sure where or how I should approach this one at this time... Is there a way to start the compile fresh? Was there something that I missed that caused the PythonZipper stage to fail? Was there a problem in my Bazel config to start with?

TensorFlow doesn't yet support Bazel 1.x (it's coming, but not yet done), so yes you'll need 0.29.1 until then.
Set the environment variable to point to your VC directory (see the sketch after these steps); see "Installing and Using Bazel" > "Installing on Windows" > "Troubleshooting" > "Problem: Bazel does not find Visual Studio or Visual C++".
Start a normal cmd.exe shell, no need to run the Visual Studio command line.
Install Python modules required by TensorFlow: python -m pip install numpy keras_preprocessing
Configure TensorFlow: python configure.py
Build the pip module: bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
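Putting those steps together, a minimal cmd.exe session could look like this. This is only a sketch: the BAZEL_VC value is taken from the VS 2017 BuildTools path shown in your error log, so adjust it to your install, and the bazel clean --expunge step is only needed if you want to start the build completely fresh.
set BAZEL_VC=C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC
bazel clean --expunge
python -m pip install numpy keras_preprocessing
python configure.py
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
If that completes, the Windows install page you linked describes running bazel-bin\tensorflow\tools\pip_package\build_pip_package with an output directory to produce the .whl.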

Many thanks for the responses, all. I was able to get TF working as listed above. I've since upgraded the CPU and GPUs too!

Related

Why do I get error 'unrecognized command line option '-fuse-ld=--enable-gold=default' when building Tensorflow?

I am trying to build Tensorflow from source as described at: https://www.tensorflow.org/install/source
I have Bazel 0.29.1 and Python available. I do:
module load bazeltest/0.29.1
module load pythontest/gcc6.3.0/3.7.5tensorflow
./configure
I choose all default options in configure, then:
bazel build //tensorflow/tools/pip_package:build_pip_package
The build proceeds for a while, but fails at:
INFO: Analyzed target //tensorflow/tools/pip_package:build_pip_package (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
ERROR: /gpfs01/home/cczcb/.cache/bazel/_bazel_cczcb/f0a4604cf88277481621943e2a61f102/external/swig/BUILD.bazel:5:1: Linking of rule '@swig//:swig' failed (Exit 1)
gcc: error: unrecognized command line option '-fuse-ld=--enable-gold=default'
Target //tensorflow/tools/pip_package:build_pip_package failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 18.642s, Critical Path: 10.57s
INFO: 125 processes: 125 local.
FAILED: Build did NOT complete successfully
I have also tried:
bazel build --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" //tensorflow/tools/pip_package:build_pip_package
as I am using GCC 6.3.0 here. The result is the same, as it is when I use GCC 4.9.3.
My OS is CentOS 7.4.
Can anyone advise what might be amiss?
thanks,
Colin

How to build tensorflow benchmark tool

Following https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/benchmark
bazel build -c opt --crosstool_top=//external:android/crosstool --cpu=armeabi-v7a --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --config monolithic tensorflow/tools/benchmark:benchmark_model
I get
WARNING: The following rc files are no longer being read, please transfer their contents or import their path into one of the standard rc files:
/Users/user/external_projects/tensorflow/tools/bazel.rc
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=176
ERROR: Config value monolithic is not defined in any .rc file
How to fix it?
bazel version
WARNING: The following rc files are no longer being read, please transfer their contents or import their path into one of the standard rc files:
/Users/user/external_projects/tensorflow/tools/bazel.rc
Build label: 0.23.1
Build target: bazel-out/darwin-opt/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Mon Mar 4 10:40:32 2019 (1551696032)
Build timestamp: 1551696032
Build timestamp as int: 1551696032
Update:
For fresh tensorflow master I get:
INFO: Analysed target //tensorflow/tools/benchmark:benchmark_model (71 packages loaded, 4664 targets configured).
INFO: Found 1 target...
ERROR: /private/var/tmp/_bazel_user/144c1461f36cde95de1693452c235294/external/com_google_absl/absl/types/BUILD.bazel:178:1: C++ compilation of rule '@com_google_absl//absl/types:bad_optional_access' failed (Exit 1)
clang: error: unknown argument: '-m<platform_for_version_min>-version-min=10.14'
Target //tensorflow/tools/benchmark:benchmark_model failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 98.302s, Critical Path: 18.67s
INFO: 399 processes: 399 local.
FAILED: Build did NOT complete successfully
Update 2:
On Ubuntu 16 and fresh tensorflow master:
bazel version
INFO: Invocation ID: 34e40dab-96b2-45ef-b549-dab45a2738bc
Build label: 0.22.0
Build target: bazel-out/k8-opt/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Mon Jan 28 12:58:08 2019 (1548680288)
Build timestamp: 1548680288
Build timestamp as int: 1548680288
Output:
WARNING: /data/user_data/external_projects/tensorflow/tensorflow/core/BUILD:1794:12: in srcs attribute of cc_library rule //tensorflow/core:android_tensorflow_lib_lite: please do not import '//tensorflow/core/distributed_runtime:server_lib.h' directly. You should either move the file to this package or depend on an appropriate rule there
INFO: Analysed target //tensorflow/tools/benchmark:benchmark_model (72 packages loaded, 4809 targets configured).
INFO: Found 1 target...
INFO: From Compiling external/snappy/snappy-sinksource.cc [for host]:
cc1plus: warning: command line option '-Wno-implicit-function-declaration' is valid for C/ObjC but not for C++
INFO: From Compiling external/snappy/snappy-stubs-internal.cc [for host]:
cc1plus: warning: command line option '-Wno-implicit-function-declaration' is valid for C/ObjC but not for C++
INFO: From Compiling external/snappy/snappy.cc [for host]:
cc1plus: warning: command line option '-Wno-implicit-function-declaration' is valid for C/ObjC but not for C++
ERROR: /home/user/.cache/bazel/_bazel_user/b4774fbdb8542988b4e302c9e073f145/external/com_google_absl/absl/container/BUILD.bazel:529:1: C++ compilation of rule '@com_google_absl//absl/container:raw_hash_set' failed (Exit 1)
Target //tensorflow/tools/benchmark:benchmark_model failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 59.522s, Critical Path: 11.24s
INFO: 456 processes: 456 local.
FAILED: Build did NOT complete successfully
Which version of the NDK are you using? I used the following command line and it worked:
$ bazel build -c opt \
--crosstool_top=//external:android/crosstool \
--cpu=armeabi-v7a \
--host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
--config monolithic \
--cxxopt=-std=c++11 \
tensorflow/tools/benchmark:benchmark_model
This is with Bazel 0.23.2 and NDK r17c on https://github.com/tensorflow/tensorflow/commit/7bd86377dedaf459d22b68363a0a2d4580180379.
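Once benchmark_model builds, the README linked above describes pushing it to a device over adb and pointing it at a frozen graph; roughly (the model path here is just a placeholder):
$ adb push bazel-bin/tensorflow/tools/benchmark/benchmark_model /data/local/tmp
$ adb shell /data/local/tmp/benchmark_model --graph=/data/local/tmp/your_model.pb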

Inspecting Tensorflow graph

I was testing the command to inspect a custom-built tensorflow graph.
The command I used is the one found here:
bazel build tensorflow/tools/graph_transforms:summarize_graph bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=/home/WarMachineRox/test_frozen_graph.pb
But it returns an error saying:
ERROR: Unrecognized option: --in_graph=/home/WarMachineRox/test_frozen_graph.pb
If I use the in_graph option without the '--', it returns:
ERROR: no such package 'tensorflow/tools/graph_transforms/tensorflow/tools/graph_transforms': BUILD file not found on package path
Is there any way to inspect a tensorflow graph's input nodes without using this?
Thanks,
bazel build tensorflow/tools/graph_transforms:summarize_graph bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=/home/WarMachineRox/test_frozen_graph.pb
should be two separate commands:
$ bazel build tensorflow/tools/graph_transforms:summarize_graph
INFO: Analysed target //tensorflow/tools/graph_transforms:summarize_graph (0 packages loaded).
INFO: Found 1 target...
Target //tensorflow/tools/graph_transforms:summarize_graph up-to-date:
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph
INFO: Elapsed time: 0.372s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
$ bazel-bin/tensorflow/tools/graph_transforms/summarize_graph
2018-05-29 12:37:51.343760: E tensorflow/tools/graph_transforms/summarize_graph_main.cc:313] in_graph graph can't be empty.
usage: bazel-bin/tensorflow/tools/graph_transforms/summarize_graph
Flags:
--in_graph="" string input graph file name
--print_structure=false bool whether to print the network connections of the graph
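Once it is built, pass your graph to the second command via --in_graph, e.g.:
$ bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=/home/WarMachineRox/test_frozen_graph.pb
That should print the graph's possible inputs (names, types, shapes), outputs, and op counts, which answers the input-node question without any extra tooling.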

error bazel build in tensorflow

At first, I would like to use bazel to help me build tensorflow with SSE and AVX support, so I tried this within the workspace:
bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 --config=cuda -k //tensorflow/tools/pip_package:build_pip_package
but it gives me the following new error. I wonder what is wrong and what I should do. Thanks for the help.
WARNING: Config values are not defined in any .rc file: cuda
ERROR: Skipping '//tensorflow/tools/pip_package:build_pip_package': no such package 'tensorflow/tools/pip_package': BUILD file not found on package path
WARNING: Target pattern parsing failed.
INFO: Analysed 0 targets (2 packages loaded).
INFO: Found 0 targets...
ERROR: command succeeded, but there were errors parsing the target pattern
INFO: Elapsed time: 2.727s, Critical Path: 0.02s
FAILED: Build did NOT complete successfully
You probably have an outdated bazel. I am not sure, but you can try using --config=opt instead of -c opt on older versions.
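For example, check your version with bazel version, and then the same command written with --config=opt would be:
bazel build --config=opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 --config=cuda -k //tensorflow/tools/pip_package:build_pip_package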
You have to run ./configure. That will create a .bazelrc and a .tf_configure.bazelrc file in your Tensorflow workspace.
The --config=cuda Bazel flag refers to entries in those two files (they are both text files). The entries typically look like this: build:cuda --some_bazel_flag.
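For reference, after a successful ./configure with CUDA enabled you should find CUDA-related entries roughly like the following in those generated files (illustrative only; the exact flags, and which file they land in, vary by TensorFlow version):
build:cuda --define=using_cuda=true --define=using_cuda_nvcc=true
build --action_env TF_NEED_CUDA="1"
build --action_env CUDA_TOOLKIT_PATH="/usr/local/cuda"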
It was answered here

cannot build tensorflow in Ubuntu 1604

My Env:
Ubuntu 1604
Python 3.5.2
bazel 0.9.0
JDK 1.8.0_152
My problem:
I ran the command ./configure before building the tensorflow source code, but I still get the following error:
no such target '@local_config_git//:gen/spec.json': target 'gen/spec.json' not declared in package ......
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Add "--config=mkl" to your bazel command to build with MKL support.
Please note that MKL on MacOS or windows is still not supported.
If you would like to use a local MKL instead of downloading, please set the environment variable "TF_MKL_ROOT" every time before build.
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
Not configuring the WORKSPACE for Android builds.
Configuration finished
root@ubuntu:/tensorflow# bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
......................................
ERROR: /tensorflow/tensorflow/core/BUILD:1653:1: no such target '@local_config_git//:gen/spec.json': target 'gen/spec.json' not declared in package '' defined by /root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/local_config_git/BUILD and referenced by '//tensorflow/core:version_info_gen'
ERROR: /tensorflow/tensorflow/core/BUILD:1653:1: no such target '@local_config_git//:gen/head': target 'gen/head' not declared in package '' defined by /root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/local_config_git/BUILD and referenced by '//tensorflow/core:version_info_gen'
ERROR: /tensorflow/tensorflow/core/BUILD:1653:1: no such target '@local_config_git//:gen/branch_ref': target 'gen/branch_ref' not declared in package '' defined by /root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/local_config_git/BUILD and referenced by '//tensorflow/core:version_info_gen'
ERROR: Analysis of target '//tensorflow/tools/pip_package:build_pip_package' failed; build aborted: Loading failed
INFO: Elapsed time: 29.514s
FAILED: Build did NOT complete successfully (113 packages loaded)
currently loading: tensorflow/core/kernels
root@ubuntu:/tensorflow#
ERROR information