Is it possible to build the Tensorflow Python wheel with a different name than tensorflow?
I would like to build Tensorflow with SIMD instructions such as SSE, AVX, and FMA enabled, and distribute that build internally through our repository. I've managed to build it, but the resulting package is named tensorflow. To keep it separate from the official package, I would like to call it tensorflow-optimized or something similar.
Is this possible with the bazel build system?
Or is there a way I could edit the wheel?
This is not handled by the bazel build system itself; it is handled by the tensorflow project's packaging script. I think the relevant line is https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/pip_package/setup.py#L44. So you should be able to pass --project_name to override it.
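For reference, the mechanism looks roughly like the following minimal sketch (simplified and hypothetical, not the actual TensorFlow setup.py): the packaging script pops a --project_name argument off sys.argv before calling setuptools, so the same build can produce a wheel under a different distribution name.

```python
# Minimal sketch of a setup.py that accepts --project_name (simplified and
# hypothetical; see the real tensorflow/tools/pip_package/setup.py linked above).
import sys
from setuptools import setup

project_name = 'tensorflow'
if '--project_name' in sys.argv:
    idx = sys.argv.index('--project_name')
    project_name = sys.argv[idx + 1]
    # Strip the flag so setuptools doesn't see an unknown argument.
    del sys.argv[idx:idx + 2]

setup(
    name=project_name,  # e.g. 'tensorflow-optimized'
    version='0.0.0',
    packages=[],
)
```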
I work on a Buildroot embedded Linux system and need to write machine learning inference code using the Tensorflow Lite C++ static library. I have already built it following the Tensorflow tutorial, and my libtensorflow-lite.a file is ready to go.
But now I don't really know how to make this static library available to the Buildroot cross-compiler. The Buildroot user manual doesn't seem to cover it.
I don't know whether or not I have to create a ".mk" file or a "Config.in" file to package it.
Can someone help me?
I was wondering whether it is possible to deploy Tensorflow custom ops or a custom reader written in C++ on Cloud ML.
It looks like Cloud ML does not accept running native code in its standard mode (I'm not really interested in using a virtualized environment); at least for Python packages, it only accepts pure Python with no C dependencies.
Likely the easiest way to do this is to build a custom Tensorflow wheel that includes the op and supply it as an extra package. For specifying extra packages, see: https://cloud.google.com/ml-engine/docs/how-tos/packaging-trainer#to_include_custom_dependencies_with_your_package
For building a TF wheel from source, see: https://www.tensorflow.org/install/install_sources#build_the_pip_package
You could also try to download/install just the .so file for the new op, but that would require downloading it either inside the setup.py of your training package or inside the training Python code itself.
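For the .so route, a minimal sketch looks roughly like this (the bucket path and op name are hypothetical placeholders), assuming the training job copies the prebuilt library in at startup and registers it with tf.load_op_library:

```python
# Minimal sketch: load a prebuilt custom-op .so at training time
# (bucket path and op name are hypothetical placeholders).
import tensorflow as tf
from tensorflow.python.lib.io import file_io  # understands gs:// paths

# Copy the shared object from GCS into the local job container.
file_io.copy('gs://my-bucket/ops/my_custom_op.so', '/tmp/my_custom_op.so',
             overwrite=True)

# Register the op with the running TensorFlow and use it.
custom_module = tf.load_op_library('/tmp/my_custom_op.so')
result = custom_module.my_custom_op(tf.constant([1.0, 2.0]))  # signature depends on your REGISTER_OP
```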
Note that you can currently only upload custom packages during Training, and not during Batch or Online Prediction, so a model trained using a custom TF version may not work with the prediction service.
Tensorflow 1.0 introduced XLA support, which includes both JIT compilation and AOT compilation. For JIT compilation I found a Python test script with which it can be unit-tested. However, I haven't found any Python test for AOT compilation. There are bazel tests, though, which can be run on the source tree.
Tensorflow's page https://www.tensorflow.org/performance/xla/tfcompile provides information on how to test it, but tfcompile does not make it into Tensorflow's distribution content. I may be wrong here, but I could not see tfcompile anywhere in the directory where the TF distribution is installed.
Could anyone please help me understand how to test AOT compilation with the existing distribution content, or whether I need to tweak something in the code to get the AOT tooling into the distribution?
Thanks in advance.
I know you're asking specifically about AOT, but I recommend you first read this page: https://www.tensorflow.org/performance/xla/
And then read this one: https://www.tensorflow.org/performance/xla/jit
In particular, note that XLA is not included in our binary distributions; you must build from source at the moment, and you must pick "enable XLA" when you run ./configure for XLA support to be enabled.
Once you've done that, Yaroslav Bulatov's advice is correct; you can build the binaries yourself, or run the tests via bazel.
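For completeness, once you have a source build with XLA enabled, the JIT path can be exercised from Python roughly like this (a minimal sketch using the TF 1.x session-config API; AOT/tfcompile itself is driven through bazel, not from this snippet):

```python
# Minimal sketch: enable global XLA JIT via the session config (TF 1.x API).
# Requires a TensorFlow built from source with XLA enabled.
import numpy as np
import tensorflow as tf

config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = (
    tf.OptimizerOptions.ON_1)

x = tf.placeholder(tf.float32, shape=[None, 10])
y = tf.nn.relu(tf.matmul(x, tf.random_normal([10, 10])))

with tf.Session(config=config) as sess:
    out = sess.run(y, feed_dict={x: np.ones((2, 10), dtype=np.float32)})
    print(out.shape)
```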
I have some custom Tensorflow code in the contrib/ subdirectory of the project (all other parts of the code are standard Tensorflow from the official distribution).
I would like to distribute this code as an external package that depends on Tensorflow, so that I can publish the library via pip and rely on the binary Tensorflow packages already available through pip.
My main goal is that users of my code should not have to compile the full Tensorflow tree (with my custom code only in contrib/) just to get my custom code/module.
Is this possible to do, and if so, how?
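A minimal sketch of the kind of packaging I have in mind (the package name and version are placeholders): a standalone pip package that declares Tensorflow as a runtime dependency instead of living inside the Tensorflow tree.

```python
# setup.py - minimal sketch of a standalone package that depends on the
# binary TensorFlow wheel rather than shipping inside the TensorFlow tree
# (package name and version are placeholders).
from setuptools import setup, find_packages

setup(
    name='my-tf-contrib-ops',
    version='0.1.0',
    packages=find_packages(),
    install_requires=['tensorflow>=1.0'],  # pulls the prebuilt wheel from pip
)
```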
I have built syntaxnet and tensorflow-serving using bazel. Both embed their own (partial?) copy of tensorflow itself. I already have the problem that I'd like to "import" some parts of tensorflow-serving in a script that "lives" in the syntaxnet tree, which I can't figure out how to do (without doing some VERY ugly things).
Now I'd like "tensorboard", but that apparently doesn't get built as part of the embedded tensorflow inside of syntaxnet or tensorflow-serving.
So now I'm sure "I'm doing it wrong". How am I supposed to be combining the artifacts built by various separate bazel workspaces?
In particular, how can I build tensorflow (with tensorboard) AND syntaxnet AND tensorflow-serving and have them "installed" for use so I can start writing my own scripts in a completely separate directory/repository?
Is "./bazel-bin/blah" really the end-game with bazel? There is no "make install" equivalent?
You're right: currently Tensorboard targets are only exposed in the Tensorflow repo, and not in the other two that use it. That means that to actually bring up Tensorboard, you'll need to check out Tensorflow on its own and compile/run Tensorboard there (pointing it at the generated logdir).
Actually generating the training summary data in a log directory is done during training, in your case in the tensorflow/models repo. It looks like SummaryWriter is used in inception_train.py, so perhaps you can add something similar to syntaxnet. If that doesn't work and you're not able to link Tensorboard, I'd recommend filing an issue in tensorflow/models to add support for Tensorboard there. You shouldn't need Tensorboard in Tensorflow Serving.
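To give an idea of what that looks like, here is a minimal sketch of writing summary data that Tensorboard can read (TF 1.x API; the log directory and the scalar being logged are placeholders, not syntaxnet's actual training code):

```python
# Minimal sketch: write scalar summaries to a logdir that Tensorboard can read
# (TF 1.x API; the value being logged here is just a placeholder).
import tensorflow as tf

loss_value = tf.placeholder(tf.float32, name='loss_value')
tf.summary.scalar('loss', loss_value)
merged = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter('/tmp/train_logs', sess.graph)
    for step in range(100):
        summary = sess.run(merged, feed_dict={loss_value: 1.0 / (step + 1)})
        writer.add_summary(summary, step)
    writer.close()
# Then point Tensorboard at the same directory: --logdir=/tmp/train_logs
```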
Importing parts of Tensorflow Serving in syntaxnet would require you to add this new dependency as a submodule (as is done with tensorflow) or possibly as a git_repository in the WORKSPACE file, if that works. We've never tried this, so it's possible that something is broken for this untested use case. Please file issues if you encounter a problem with this.
As for just installing and running, Tensorflow Serving doesn't support that right now. It's a set of libraries that you link directly into your server binary and compile (the repo offers some example servers and clients), but right now there is no simple "installed server". Tensorflow, along with Tensorboard, can however be installed and linked from anywhere.