The objective of my experiment is to build TensorFlow on the Jetson TK1, an ARM-based embedded board. Since the official releases do not provide pre-built TensorFlow binaries for the ARM architecture, I have no option but to build it from source.
To build TensorFlow we need Bazel, which also has to be built from source. This is where I am stuck: I am not able to build Bazel at all.
I have referred to various blogs and GitHub projects and tried to follow the instructions that reportedly worked for others:
1) Tensorflow on Raspberry-pi
2) Jetson Hacks building Tensorflow from source
3) Official Documentation
Steps Followed:
$ sudo apt-get install build-essential openjdk-8-jdk python zip
$ wget https://github.com/bazelbuild/bazel/releases/download/0.4.5/bazel-0.4.5-dist.zip
$ unzip -d bazel bazel-0.4.5-dist.zip
$ cd bazel
$ sudo ./compile.sh
Error Log:
ERROR: /build/bazel/src/main/protobuf/BUILD:25:2: Java compilation in rule '//src/main/protobuf:extra_actions_base_java_proto' failed: Worker process sent response with exit code: 1.
java.lang.InternalError: Cannot find requested resource bundle for locale en_US
at com.sun.tools.javac.util.JavacMessages.getBundles(JavacMessages.java:128)
at com.sun.tools.javac.util.JavacMessages.getLocalizedString(JavacMessages.java:147)
at com.sun.tools.javac.util.JavacMessages.getLocalizedString(JavacMessages.java:140)
at com.sun.tools.javac.util.Log.localize(Log.java:673)
at com.sun.tools.javac.util.Log.printLines(Log.java:485)
at com.sun.tools.javac.api.JavacTaskImpl.handleExceptions(JavacTaskImpl.java:156)
at com.sun.tools.javac.api.JavacTaskImpl.doCall(JavacTaskImpl.java:93)
at com.sun.tools.javac.api.JavacTaskImpl.call(JavacTaskImpl.java:87)
at com.google.devtools.build.buildjar.javac.BlazeJavacMain.compile(BlazeJavacMain.java:104)
at com.google.devtools.build.buildjar.SimpleJavaLibraryBuilder$1.invokeJavac(SimpleJavaLibraryBuilder.java:163)
at com.google.devtools.build.buildjar.ReducedClasspathJavaLibraryBuilder.compileSources(ReducedClasspathJavaLibraryBuilder.java:52)
at com.google.devtools.build.buildjar.SimpleJavaLibraryBuilder.compileJavaLibrary(SimpleJavaLibraryBuilder.java:166)
at com.google.devtools.build.buildjar.SimpleJavaLibraryBuilder.run(SimpleJavaLibraryBuilder.java:178)
at com.google.devtools.build.buildjar.BazelJavaBuilder.processRequest(BazelJavaBuilder.java:90)
at com.google.devtools.build.buildjar.BazelJavaBuilder.runPersistentWorker(BazelJavaBuilder.java:67)
at com.google.devtools.build.buildjar.BazelJavaBuilder.main(BazelJavaBuilder.java:44)
Caused by: java.util.MissingResourceException: Can't find bundle for base name com.google.errorprone.errors, locale en_US
at java.util.ResourceBundle.throwMissingResourceException(ResourceBundle.java:1573)
at java.util.ResourceBundle.getBundleImpl(ResourceBundle.java:1396)
at java.util.ResourceBundle.getBundle(ResourceBundle.java:854)
at com.sun.tools.javac.util.JavacMessages.lambda$add$0(JavacMessages.java:106)
at com.sun.tools.javac.util.JavacMessages.getBundles(JavacMessages.java:125)
... 15 more
Target //src:bazel failed to build
INFO: Elapsed time: 291.995s, Critical Path: 258.92s
ERROR: Could not build Bazel
To make sure the error is independent of the architecture, I also tried to build Bazel on an x86_64 PC and got the same error there. I have seen people open similar issues in the Bazel GitHub repository, but none of them were resolved.
Version 0.4.5 is very old. We just released 0.12.0, could you try that one?
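A minimal sketch of retrying with the newer dist archive, assuming the 0.12.0 release follows the same bazel-<version>-dist.zip naming scheme as 0.4.5 (check the releases page if the URL differs):
$ wget https://github.com/bazelbuild/bazel/releases/download/0.12.0/bazel-0.12.0-dist.zip
$ unzip -d bazel-0.12.0 bazel-0.12.0-dist.zip
$ cd bazel-0.12.0
$ ./compile.sh
# on success, compile.sh reports the location of the built binary under output/bazel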
Related
I am learning TensorFlow from this blog:
http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/
The code I am running is:
https://github.com/dennybritz/cnn-text-classification-tf/blob/master/train.py
I have installed TensorFlow from source in a virtualenv, in a CPU-only environment, using the following Bazel build command: bazel build --config=mkl ...
Here is the exact error:
2018-01-16 03:15:27.783040: F tensorflow/core/kernels/mkl_maxpooling_op.cc:157] Check failed: dnnPoolingCreateForward_F32( &prim_pooling_fwd, primAttr, algorithm, lt_user_input, params.kernel_size, params.kernel_stride, params.in_offset, dnnBorderZerosAsymm) == E_SUCCESS (-127 vs. 0)
Aborted
I have traced the error to the line where sess.run is called. I believe it has something to do with mkl_maxpooling, since I installed TensorFlow with the MKL optimization for Intel CPUs.
Given below are the steps that I followed:
Built TensorFlow 1.4 from source with MKL, as mentioned in the question
Cloned the git repo "https://github.com/dennybritz/cnn-text-classification-tf.git"
Ran "python train.py" from the "cnn-text-classification-tf" directory (created by the git clone)
The code ran without any errors, so it seems your TensorFlow was not built properly from source. Please confirm that there were no errors while building TensorFlow from source.
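For reference, a minimal sketch of a clean MKL build followed by installing the resulting wheel into the virtualenv (the wheel filename is an assumption and will vary with your Python and TensorFlow versions):
$ ./configure
$ bazel build --config=mkl -c opt //tensorflow/tools/pip_package:build_pip_package
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
$ pip install /tmp/tensorflow_pkg/tensorflow-*.whl
If any of these steps prints errors, the installed package will not behave correctly at runtime.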
I'm trying to convert my *.pb TensorFlow model to Core ML, and I'm getting stuck on identifying the output node of my model.
To find the output node, I've attempted to build and run summarize_graph on my *.pb file, but I'm running into issues. How do I build and run summarize_graph after downloading the source?
I've run the following commands:
bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=tensorflow_inception_graph.pb
and I get the following error:
INFO: Analysed 0 targets (0 packages loaded).
INFO: Found 0 targets...
INFO: Elapsed time: 0.389s, Critical Path: 0.01s
INFO: Build completed successfully, 1 total action
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph: No such file or directory
After issuing the bazel command, an empty bazel-bin directory appears in the location where I executed the command.
Note: summarize_graph didn't exist in my TensorFlow installation, so I downloaded the source of tensorflow/tools/graph_transforms from GitHub and copied it into my tensorflow/tools/graph_transforms directory.
The directory contains the following:
BUILD README.md
__init__.py
__init__.pyc add_default_attributes.cc add_default_attributes_test.cc backports.cc backports_test.cc compare_graphs.cc
fake_quantize_training.cc fake_quantize_training_test.cc file_utils.cc
file_utils.h file_utils_test.cc flatten_atrous.cc
flatten_atrous_test.cc fold_batch_norms.cc fold_batch_norms_test.cc
fold_constants_lib.cc fold_constants_lib.h fold_constants_test.cc
fold_old_batch_norms.cc fold_old_batch_norms_test.cc
freeze_requantization_ranges.cc freeze_requantization_ranges_test.cc
fuse_convolutions.cc fuse_convolutions_test.cc insert_logging.cc
insert_logging_test.cc obfuscate_names.cc obfuscate_names_test.cc out
python quantize_nodes.cc quantize_nodes_test.cc quantize_weights.cc
quantize_weights_test.cc remove_attribute.cc remove_attribute_test.cc
remove_device.cc remove_device_test.cc remove_ema.cc
remove_ema_test.cc remove_nodes.cc remove_nodes_test.cc
rename_attribute.cc rename_attribute_test.cc rename_op.cc
rename_op_test.cc round_weights.cc round_weights_test.cc set_device.cc
set_device_test.cc sort_by_execution_order.cc
sort_by_execution_order_test.cc sparsify_gather.cc
sparsify_gather_test.cc strip_unused_nodes.cc
strip_unused_nodes_test.cc summarize_graph_main.cc transform_graph.cc
transform_graph.h transform_graph_main.cc transform_graph_test.cc
transform_utils.cc transform_utils.h transform_utils_test.cc
I'm on a MacBook Pro.
Thanks!
In case anyone runs into a similar problem, I solved it.
Navigate to the root of the tensorflow source directory
cmd> ./configure
cmd> bazel build tensorflow/tools/graph_transforms:summarize_graph
(you may get an error about xcode; if so, run the following)
cmd> xcode-select -s /Applications/Xcode.app/Contents/Developer
cmd> bazel clean --expunge
cmd> bazel build tensorflow/tools/graph_transforms:summarize_graph
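Once the build succeeds, the tool can be run against your frozen graph; the .pb path below is a placeholder for your own model file:
cmd> bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=/path/to/your_model.pb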
CentOS 7 walkthrough:
yum install epel-release
yum update
yum install patch
curl https://copr.fedorainfracloud.org/coprs/vbatts/bazel/repo/epel-7/vbatts-bazel-epel-7.repo -o /etc/yum.repos.d/vbatts-bazel-epel-7.repo
yum install bazel
curl -L -O https://github.com/tensorflow/tensorflow/archive/v1.8.0.tar.gz
tar xzf v1.8.0.tar.gz
cd tensorflow-1.8.0
./configure # interactive!
bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph
I am working with TensorFlow 1.1.0, gcc 5.2.0, and Bazel 0.4.5.
When I do:
./configure
bazel build --verbose_failures --config=opt //tensorflow/tools/pip_package:build_pip_package
I got the following error messages:
ERROR: /remote/us03home4/rogerlo/.cache/bazel/_bazel_rogerlo/c6e718933b1d81ab029d890c5eecbc01/external/protobuf/BUILD:679:1: null failed: protoc failed: error executing command
(cd /remote/us03home4/rogerlo/.cache/bazel/_bazel_rogerlo/c6e718933b1d81ab029d890c5eecbc01/execroot/tensorflow && \
exec env - \
bazel-out/host/bin/external/protobuf/protoc '--python_out=bazel-out/local-opt/genfiles/external/protobuf/python' -Iexternal/protobuf/python -Ibazel-out/local-opt/genfiles/external/protobuf/python bazel-out/local-opt/genfiles/external/protobuf/python/google/protobuf/any.proto bazel-out/local-opt/genfiles/external/protobuf/python/google/protobuf/api.proto bazel-out/local-opt/genfiles/external/protobuf/python/google/protobuf/compiler/plugin.proto bazel-out/local-opt/genfiles/external/protobuf/python/google/protobuf/descriptor.proto bazel-out/local-opt/genfiles/external/protobuf/python/google/protobuf/duration.proto bazel-out/local-opt/genfiles/external/protobuf/python/google/protobuf/empty.proto bazel-out/local-opt/genfiles/external/protobuf/python/google/protobuf/field_mask.proto bazel-out/local-opt/genfiles/external/protobuf/python/google/protobuf/source_context.proto bazel-out/local-opt/genfiles/external/protobuf/python/google/protobuf/struct.proto bazel-out/local-opt/genfiles/external/protobuf/python/google/protobuf/timestamp.proto bazel-out/local-opt/genfiles/external/protobuf/python/google/protobuf/type.proto bazel-out/local-opt/genfiles/external/protobuf/python/google/protobuf/wrappers.proto): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1.
bazel-out/host/bin/external/protobuf/protoc: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by bazel-out/host/bin/external/protobuf/protoc)
bazel-out/host/bin/external/protobuf/protoc: /usr/lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by bazel-out/host/bin/external/protobuf/protoc)
bazel-out/host/bin/external/protobuf/protoc: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.18' not found (required by bazel-out/host/bin/external/protobuf/protoc)
bazel-out/host/bin/external/protobuf/protoc: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by bazel-out/host/bin/external/protobuf/protoc)
____Building complete.
Target //tensorflow/tools/pip_package:build_pip_package failed to build
____Elapsed time: 101.992s, Critical Path: 54.24s
make: *** [tensorflow] Error 1
But if I add this line:
linker_flag: "-Wl,-rpath,/depot/gcc-5.2.0/lib64"
to the file
$TENSORFLOW_ROOT/bazel-tensorflow/external/local_config_cc/CROSSTOOL
then the build passes.
I wonder whether I can set that linker_flag from the configure script or somewhere else.
I did pass it as a build option, but it doesn't work:
bazel build --verbose_failures --config=opt --linkopt="-Wl,-rpath,/depot/gcc-5.2.0/lib64" //tensorflow/tools/pip_package:build_pip_package
EDIT: added bazel version
SOLUTION:
Add the linker option to Bazel's configuration.
Recompile Bazel.
Compiling TensorFlow with the recompiled Bazel will then pass.
Investigation
The target is built by the external crosstool, so --linkopt won't work. According to the official Bazel blog, the external (C++) crosstool configuration is auto-detected; the blog points to the C++ configuration file.
The linker_flag rpath entries are computed from $LD_LIBRARY_PATH. That is, if you have library paths defined in $LD_LIBRARY_PATH, Bazel will generate corresponding rpath entries in linker_flag.
But that dependency was removed because of [issue#2099](github.com/bazelbuild/bazel/issues/2099)
So setting $LD_LIBRARY_PATH doesn't work in [v0.4.5](github.com/bazelbuild/bazel/blob/0.4.5/tools/cpp/cc_configure.bzl#L250)
However, I haven't figured out how to do it correctly (setting env_action or something), so the quick solution is to hardcode it in the configuration file.
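For illustration, the hardcoded fragment sits inside the toolchain block of the generated CROSSTOOL (the rpath below is the one from the question; substitute your own gcc location), roughly:
toolchain {
  ...
  linker_flag: "-Wl,-rpath,/depot/gcc-5.2.0/lib64"
  ...
}
Keep in mind that the generated file under bazel-tensorflow/external/local_config_cc/ is rewritten when the C++ toolchain is re-detected (for example after bazel clean --expunge), which is why baking the flag into Bazel itself and recompiling is the more durable route.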
(Forgive me about the ugly hyperlinks above. My reputation is not enough to have more than 2 links in a post.)
I am trying to build an MKL-accelerated version of TensorFlow using Bazel 0.5.1, gcc 6.2, binutils 2.28, and Anaconda2 Python on Scientific Linux 7.2.
Apparently the system /lib64/libstdc++.so.6 is too old, so I am trying to use a gcc installed in another directory. PATH and LD_LIBRARY_PATH are modified to prepend the corresponding paths (using modules). However, while Bazel has no trouble picking up the correct executables for gcc, ld, and python, it still tries to load the old system /lib64/libstdc++.so.6. How do I force it to use the one from gcc 6.2? Why doesn't it pick it up from LD_LIBRARY_PATH?
According to Google, many people are having trouble with this, but I could not find a solution that works for me. I had no trouble building TensorFlow under Ubuntu 16.04, which has a sufficiently new gcc in the standard location.
I do:
1) ./configure
The only non-default options I choose are to use MKL and to download MKL.
2) bazel build --config=mkl --copt="-DEIGEN_USE_VML" -s -c opt //tensorflow/tools/pip_package:build_pip_package
.....
example/example_parser_configuration.proto tensorflow/core/protobuf/control_flow.proto tensorflow/core/protobuf/meta_graph.proto tensorflow/core/protobuf/named_tensor.proto tensorflow/core/protobuf/saved_model.proto tensorflow/core/protobuf/tensorflow_server.proto tensorflow/core/util/event.proto tensorflow/core/util/test_log.proto)
ERROR: /scratch/midway2/ivy2/TF_intel/tensorflow/tensorflow/tools/tfprof/BUILD:42:1: null failed: protoc failed: error executing command bazel-out/host/bin/external/protobuf/protoc '--python_out=bazel-out/local-opt/genfiles/' -I. -I. -Iexternal/protobuf/python -Ibazel-out/local-opt/genfiles/external/protobuf/python ... (remaining 5 argument(s) skipped): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1.
bazel-out/host/bin/external/protobuf/protoc: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by bazel-out/host/bin/external/protobuf/protoc)
bazel-out/host/bin/external/protobuf/protoc: /lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by bazel-out/host/bin/external/protobuf/protoc)
bazel-out/host/bin/external/protobuf/protoc: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by bazel-out/host/bin/external/protobuf/protoc)
.....
Thank you,
Igor
Sorry for the slow reply. Bazel by design ignores LD_LIBRARY_PATH when running actions. It doesn't have to ignore it during C++ toolchain detection, but at the moment it does :/ To move you forward, I would try adding --sysroot= as a linkopt or using Bazel's grte_top flag. Depending on where your libstdc++.so lives, you might need to disable the sandbox. The principled solution would be to write a custom CROSSTOOL that specifies builtin_sysroot or grte_top, but that is not an easy task.
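For concreteness, the first suggestion might look like the command below; the /opt/gcc-6.2 prefix is only a placeholder for wherever your module puts gcc 6.2, and --spawn_strategy=standalone disables the sandbox as mentioned above. Treat it as a starting point rather than a confirmed fix:
bazel build --config=mkl --copt="-DEIGEN_USE_VML" -c opt \
  --linkopt=--sysroot=/opt/gcc-6.2 \
  --spawn_strategy=standalone \
  //tensorflow/tools/pip_package:build_pip_package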
Let me know if I lost you somewhere in that paragraph :)
I have installed TensorFlow Serving as outlined on the install page at https://tensorflow.github.io/serving/setup. However, when I follow the build instructions on that page I get the following error:
$ bazel build tensorflow_serving/...
ERROR: /home/**PATH**/external/org_tensorflow/third_party/py/python_configure.bzl:183:20: unexpected keyword 'environ' in call to repository_rule(implementation: function, *, attrs: dict or NoneType = None, local: bool = False).
ERROR: com.google.devtools.build.lib.packages.BuildFileContainsErrorsException: error loading package '': Extension file 'third_party/py/python_configure.bzl' has errors.
INFO: Elapsed time: 0.623s
I am running on Ubuntu with a TensorFlow 1.0.1 build. I am using Python 2.7 and have set up a virtualenv.
I can successfully build the bazel hello example and also am able to complete the gRPC quick start found at http://www.grpc.io/docs/quickstart/python.html.
Any suggestions?
-Dave
The trouble was an old copy of Bazel. To determine your version:
$ bazel version
Build label: 0.4.5
Build target: bazel-out/local-fastbuild/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Thu Mar 16 12:19:38 2017 (1489666778)
Build timestamp: 1489666778
Build timestamp as int: 1489666778
In my case this required manually removing the old version:
rm -fr ~/.bazel ~/.bazelrc
Next, I chose to install using the installer for Ubuntu:
$ ./bazel-0.4.5-installer-linux-x86_64.sh
Bazel installer
---------------
Bazel is bundled with software licensed under the GPLv2 with Classpath exception.
You can find the sources next to the installer on our release page:
https://github.com/bazelbuild/bazel/releases
# Release 0.4.5 (2017-03-16)
There was still another trick to getting it to work.
$ cd ..
$ bazel test tensorflow_serving/...
Python Configuration Error: 'PYTHON_BIN_PATH' environment variable is not set
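As an aside, the error message itself hints at a possible workaround of exporting the variable before the build, for example
$ export PYTHON_BIN_PATH=$(which python)
but I did not go that route; the versioning fix below is what resolved it for me.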
This error is also related to versioning, but in this case it was an issue with Serving. The solution was to revert to an earlier version and update the submodules from git (I had previously cloned the repository). From the serving directory:
$ git checkout 0.5.1
M tensorflow
M tf_models
Note: checking out '0.5.1'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:
git checkout -b <new-branch-name>
HEAD is now at 51bb356... Merge pull request #325 from kirilg/0.5.1
(tensorflow) $ git submodule update
Submodule path 'tensorflow': checked out '07bb8ea2379bd459832b23951fb20ec47f3fdbd4'
Submodule path 'tf_models': checked out '2fd3dcf3f31707820126a4d9ce595e6a1547385d'
(tensorflow) $ bazel test tensorflow_serving/...
Serving now reports success:
INFO: Found 199 targets and 57 test targets...
[1,299 / 4,037] Still waiting for 200 jobs to complete:
Running (standalone):