How can I specify the compiler for Bazel to use? I see the --compiler option here, but no explanation of its use.
I have read about making new toolchains, but it appears to be per-project or something. For TensorFlow in particular, I want to use an icecc install I have on my machines so I can distribute the build.
For a wrapper around gcc, doing export CC=/path/to/icecc should just work and start using icecc with Bazel 0.4.5. If icecc requires special environment variables, you might have to add --action_env flags.
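For example (a sketch; the icecc path and build target are placeholders, and ICECC_VERSION is just one variable icecc commonly uses):

export CC=/path/to/icecc
bazel build //your/package:target
# Forward any environment variables icecc needs into the build actions:
bazel build --action_env=ICECC_VERSION //your/package:target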
Note that Bazel was created to run with the Google compilation cluster and as a consequence isolates each compilation action, which might interact badly with icecc's assumptions.
I have tried to compile TensorFlow 2.0 to get the benefits of extra CPU instructions like AVX, but to no avail. I have read through How to compile Tensorflow with SSE4.2 and AVX instructions? but I am still confused, as unless you are building for another PC, surely -march=native should just work. I have tried building twice with different commands and am still getting the warning message.
I think I used the below, and I think I have the logs still saved if someone wants to help.
"bazel build //tensorflow/tools/pip_package:build_pip_package
d_pip_package --config=mkl"
"bazel build -c opt --copt=-march=native --config=mkl //tensorflow/tools/pip_package:build_pip_package
This is only for the satisfaction of understanding what is going on. I currently don't need the benefit the optimisation will bring, but I do not understand why the method I used isn't working as I followed it exactly.
As noted by my edit in the top answer on the question you linked, it seems bazel and/or TensorFlow's build scripts are buggy. They mishandle -march=native and fail to pass it on to the compiler. I'm guessing it does something wrong with args that have an = in their name, because args like -mfma work.
You are correct, if they were correctly passing -march=native to the compiler there would be no problem, and no need for any of this complication.
I don't know why nobody has fixed this huge inconvenience yet, instead leaving lots of users who aren't experts on x86 CPU features to stumble around trying to figure out which features their CPU has and how to enable them for gcc/clang. This is exactly what -march=native is for, along with the other important feature of setting tuning options appropriately for the machine you're compiling on.
I had a look once, but I don't actually use TensorFlow and don't know Bazel, so I got bogged down in the maze of build machinery between that command line and the actual invocation of g++ ... foo.cpp
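In the meantime, a workaround known from that question is to pass the individual feature flags explicitly, since those are forwarded correctly (a sketch; which flags apply depends on your CPU, so treat this set as an assumption):

bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-msse4.2 --config=mkl //tensorflow/tools/pip_package:build_pip_package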
How do I build a debian package from source using bazel?
I am trying to build a Debian package for TensorFlow. I need it to be included in our PPA server. Thanks!
Building Debian packages consists of compiling the software (mostly; there are also packages that don't need compilation, e.g. for scripting languages) and then packaging the artifacts.
Therefore the packaging process has a separate "build" step, which is used to trigger your software's build process.
This step doesn't care whether you use make, CMake, SCons, bazel or whatever, as long as you tell it what it should do.
A simplistic debian/rules file for your needs could look like:
#!/usr/bin/make -f
%:
	dh $@

override_dh_auto_build:
	bazel build //main:hello-world
But of course there is quite a lot to Debian packaging in general, so make sure you read (and understand) the Debian Packaging Documentation first...
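With a complete debian/ directory in place (control, changelog, etc. are assumed here, not shown), the package is then built the usual way:

dpkg-buildpackage -us -uc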
I've written a module on top of a private fork off of TensorFlow that uses nanomsg.
For my local development server, I used cmake install to install nanomsg (to /usr/local) and accessed the header files from their installed location. The project runs fine locally.
However, I now need to package nanomsg within my TensorFlow workspace. I've tried the following two approaches, and find neither satisfactory:
Similar to this answer for OpenCV, I precompiled nanomsg into a private repository, loaded it within my workspace (within tensorflow/workspace.bzl) using an http_archive directive, then included the headers and libraries in the relevant build script. This runs fine, but is not a portable solution.
As a more portable solution, I created a genrule to run a specific sequence of cmake commands that can be used to build nanomsg. This approach is neater, but the genrule cannot be reused to build other cmake-based projects. (I referred to this discussion.)
Clearly cmake is not supported as a first-class citizen in Bazel builds. Has anyone who has faced this problem in their own projects created a generic, portable way to include libraries within Bazel projects that are built using cmake? If so, how did you approach it?
As Ulf wrote, I think your suggested option 2 should work fine.
Regarding "can I identify if the cmake fails", yes: cmake should return with an error exit code (!= 0) when it fails. This in turn will cause Bazel to automatically recognize the genrule action as failed and thus fail the build. Because Bazel sets "set -e -o pipefail" before running your command (cf. https://docs.bazel.build/versions/master/be/general.html#genrule-environment), it should also work if you chain multiple cmake commands in your genrule "cmd".
If you call out to a shell script in your "cmd" attribute that then actually runs the cmake commands, make sure to put "set -e -o pipefail" in the first line of your script yourself. Otherwise the script will not fail when cmake fails.
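To illustrate, such a wrapper script might look like this (a minimal sketch; the cmake invocation and install path are assumptions about your setup):

#!/bin/bash
# Fail fast: any failing command (including each cmake step) aborts the script.
set -e -o pipefail
# Hypothetical cmake build of nanomsg; the source path is a placeholder.
cmake /path/to/nanomsg-src
make
make install DESTDIR="$PWD/nanomsg-install"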
If I misunderstood your question "Can I identify if the cmake fails", please let me know. :)
This new project: https://github.com/bazelbuild/rules_foreign_cc seems like a solution (it provides build rules for using cmake to build your project inside Bazel).
Context: I have several loops in an Objective-C library I am writing which deal with processing large text arrays. I can see that right now it is running in a single threaded manner.
I understand that LLVM is now capable of auto-vectorising loops, as described at Apple's session at WWDC. It is however very cautious in the way it does it, one reason being the possibility of variables being modified due to CPU pipelining.
My question: how can I see where LLVM has vectorised my code, and, more usefully, how can I receive debug messages that explain why it can't vectorise my code? I'm sure if it can see why it can't auto-vectorise it, it could point that out to me and I could make the necessary manual adjustments to make it vectorisable.
I would be remiss if I didn't point out that this question has been more or less asked already, but quite obtusely, here.
Identifies loops that were successfully vectorized:
clang -Rpass=loop-vectorize
Identifies loops that failed vectorization and indicates if vectorization was specified:
clang -Rpass-missed=loop-vectorize
Identifies the statements that caused vectorization to fail:
clang -Rpass-analysis=loop-vectorize
Source: http://llvm.org/docs/Vectorizers.html#diagnostics
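For example, to request all three kinds of remarks for a single file (a sketch; foo.c is a placeholder, and an optimization level is needed so the vectorizer actually runs):

clang -O3 -Rpass=loop-vectorize -Rpass-missed=loop-vectorize -Rpass-analysis=loop-vectorize -c foo.c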
The standard llvm toolchain provided by Xcode doesn't seem to support getting debug info from the optimizer. However, if you roll your own llvm and use that, you should be able to pass flags as mishr suggested above. Here's the workflow I used:
1. Using homebrew, install llvm
brew tap homebrew/versions
brew install llvm33 --with-clang --with-asan
This should install the full and relatively current llvm toolchain. It's linked into /usr/local/bin/*-3.3 (e.g. clang++-3.3). The actual on-disk location is available via brew info llvm33 - probably /usr/local/Cellar/llvm33/3.3/bin.
2. Build the single file you're optimizing, with homebrew llvm and flags
If you've built in Xcode, you can easily copy-paste the build parameters, and use your clang++-3.3 instead of Xcode’s own clang.
Appending -mllvm -debug-only=loop-vectorize will get you the auto-vectorization report. Note: this will likely NOT work with any remotely complex build, e.g. if you've got PCHs, but it is a simple way to tweak a single cpp file to make sure it's vectorizing correctly.
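For instance (a sketch; the file name and extra flags are placeholders for whatever your Xcode build log shows):

clang++-3.3 -O3 -mllvm -debug-only=loop-vectorize -c MyKernel.cpp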
3. Create a compiler plugin from the new llvm
I was able to build my entire project with homebrew llvm by:
Grabbing this Xcode compiler plugin: http://trac.seqan.de/browser/trunk/util/xcode/Clang%20LLVM%20MacPorts.xcplugin.zip?order=name
Modifying the clang-related paths to point to my homebrew llvm and clang bin names (by appending '-3.3')
Placing it in /Library/Application Support/Developer/5.0/Xcode/Plug-ins/
Relaunching Xcode should show this plugin in the list of available compilers. At this point, the -mllvm -debug-only=loop-vectorize flag will show the auto-vectorization report.
I have no idea why this isn't exposed in the Apple builds.
UPDATE: This is exposed in current (8.x) versions of Xcode. The only thing required is to enable one or more of the loop-vectorize flags.
Assuming you are using opt and you have a debug build of llvm, you can do it as follows:
opt -O1 -loop-vectorize -debug-only=loop-vectorize code.ll
where code.ll is the IR you want to vectorize.
If you are using clang, you will need to pass the -debug-only=loop-vectorize flag using the -mllvm option.
I'm thinking of writing a simple configure script (similar to an autoconf one) which execs cmake. But before doing that I want to check whether anyone knows of such an effort already. I wasn't able to find anything on Google.
It should be able to support the basic autoconf configure flags (mostly prefix, exec-prefix, and bindir).
The reason to do it is, of course, that there's a certain user expectation of being able to do ./configure && make
Also not really an answer but too long for a comment:
After reading up on cmake / cpack, I can at least tell you this: cmake expects to be already present on the platform, so CPack cannot generate the same kind of ./configure scripts as the autotools. The autotools expect some shell to be present, which is essentially the same requirement as expecting cmake to be present. However, since cmake also targets the Windows environment, it cannot rely on a shell. That being said, CPack can provide source packages, which need to be installed with cmake in the usual manner.
Though this does not solve your problem either, I do not recommend writing such a tool for cmake. Cmake is able to use all the types of prefixes you are interested in. If the user wants to compile your program from scratch, they have to know at least the basics (e.g. setting variables) of your build system; this is also true for the autotools. If you want to spare them the pain, you can provide binary .sh, .deb or .rpm packages, which can easily be built with cmake / cpack.
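For completeness, if you still want such a script, a minimal sketch (this flag mapping is an assumption, not an existing tool) could translate --prefix into the corresponding cmake cache variable:

#!/bin/sh
# Minimal configure-style wrapper around cmake; only --prefix is handled here.
PREFIX=/usr/local
for arg in "$@"; do
  case "$arg" in
    --prefix=*) PREFIX="${arg#--prefix=}" ;;
  esac
done
exec cmake -DCMAKE_INSTALL_PREFIX="$PREFIX" .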