Statically link custom op from .a file in tensorflow serving

I have a custom op implemented in CUDA and built with a Makefile, like hdrnet. I can build a .so and import it in TensorFlow. For tf-serving, statically linking a .a file is required, but all tutorials reference the Bazel build process for custom ops instead of directly linking an already-compiled op from a .a file.
Do I have to write a Bazel build process as shown in the examples, or can I build tf-serving with the .so/.a files directly?

I ended up compiling tensorflow-serving from source with the op linked in. The TensorFlow tutorials are not complete for this, and an additional dependency was missing, which I resolved in this issue.
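For anyone attempting the same route, the Bazel wiring looks roughly like the sketch below (target names and paths are illustrative, not my exact files, and the contents of the op list vary by serving version): wrap the prebuilt archive in a cc_import inside the serving source tree, then add it to the model server's op list in tensorflow_serving/model_servers/BUILD.

# Hypothetical target wrapping the prebuilt archive; alwayslink keeps the
# static op-registration initializers from being dropped by the linker.
cc_import(
    name = "hdrnet_op_prebuilt",
    static_library = "libhdrnet_op.a",
    alwayslink = 1,
)

# In tensorflow_serving/model_servers/BUILD, extend the op list so the
# model server statically links the op (exact list varies by version):
SUPPORTED_TENSORFLOW_OPS = [
    "@org_tensorflow//tensorflow/contrib:contrib_kernels",
    "@org_tensorflow//tensorflow/contrib:contrib_ops_op_lib",
    "//custom_ops:hdrnet_op_prebuilt",  # hypothetical package path
]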

Related

What are the steps to build Tensorflow with a custom oneDNN library implementation?

I am using a custom oneDNN library implementation which I need Tensorflow (v2.4.0) to build against.
However, I noticed that there are no build options for using a system-provided oneDNN library when building TensorFlow.
I would like to know the steps to produce a TensorFlow build that uses a oneDNN library provided by the system.
Some specifics...
The oneDNN library version is 1.6.4 and is already installed in the system (Linux).
This version corresponds to the one TensorFlow uses when compiling with the "--config=mkl_opensource_only" Bazel flag.
I have access to the library source code, but it would be best to use the compiled library.
The target architecture is RISC-V and the OS is Linux.
There is no easy way of telling Bazel to link against a custom library, but if you have the modified source directory of oneDNN, you can edit the tensorflow/workspace.bzl file and replace the mkl_dnn_v1 repository definition with a new_local_repository rule that points to your modified source directory. That is, replace this block:
tf_http_archive(
    name = "mkl_dnn_v1",
    build_file = clean_dep("//third_party/mkl_dnn:mkldnn_v1.BUILD"),
    sha256 = "5369f7b2f0b52b40890da50c0632c3a5d1082d98325d0f2bff125d19d0dcaa1d",
    strip_prefix = "oneDNN-1.6.4",
    urls = [
        "https://storage.googleapis.com/mirror.tensorflow.org/github.com/oneapi-src/oneDNN/archive/v1.6.4.tar.gz",
        "https://github.com/oneapi-src/oneDNN/archive/v1.6.4.tar.gz",
    ],
)
With something like this:
native.new_local_repository(
    name = "mkl_dnn_v1",
    build_file = clean_dep("//third_party/mkl_dnn:mkldnn_v1.BUILD"),
    path = "/path/to/your/modified/oneDNN/sources",
)
You might also want to modify the third_party/mkl_dnn/mkldnn_v1.BUILD file if you have added any new source files.
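The kind of edit meant here is extending the glob (or srcs) of the library target in that BUILD file so your new files get compiled. A hedged sketch, not the real mkldnn_v1.BUILD (which is more involved); src/my_new_kernels is an invented path:

cc_library(
    name = "mkl_dnn",
    srcs = glob([
        "src/common/*.cpp",
        "src/cpu/**/*.cpp",
        "src/my_new_kernels/*.cpp",  # hypothetical: picks up your added sources
    ]),
    hdrs = glob(["include/*.h"]),
    includes = ["include", "src", "src/common", "src/cpu"],
    visibility = ["//visibility:public"],
)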
Addendum:
--config=mkl_opensource_only seems to be broken now; you might have better luck using just --config=mkl instead.
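For example, building the pip package with MKL enabled (the target below is the standard pip-package entry point):

bazel build --config=mkl //tensorflow/tools/pip_package:build_pip_package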

check tensorflow version in CMake

I'm trying to check the TensorFlow (built from source) version in CMake.
If TensorFlow is built from source, there is an include folder (eager, c_api.h, c_api_experimental.h, LICENSE) and a lib folder (libtensorflow.so, libtensorflow_framework.so).
I tried find_package because of the PACKAGE_FIND_VERSION variable. Although the TensorFlow_FOUND variable was set, the version variable was not set. Maybe something like a .version file is needed.
The reason I'm trying to do this is a version check. My program needs TensorFlow 1.10. If there is a pre-built TensorFlow already on the user's system (/usr/include, /usr/lib), it should check whether the version is 1.10.
Is there any good method for doing this?
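One approach that can work, sketched under assumptions (TF_INCLUDE_DIR and TF_LIB_DIR are placeholder variables for your install layout): the C API declares a runtime TF_Version() function in tensorflow/c/c_api.h, so CMake can compile and run a tiny probe program and compare its output against the required version.

# Write a tiny probe that prints the runtime TensorFlow version.
file(WRITE ${CMAKE_BINARY_DIR}/tf_version_check.c [[
#include <stdio.h>
#include <tensorflow/c/c_api.h>
int main(void) { printf("%s", TF_Version()); return 0; }
]])

# Compile and run it against the discovered headers and library.
try_run(TF_RUN_RESULT TF_COMPILE_RESULT
    ${CMAKE_BINARY_DIR} ${CMAKE_BINARY_DIR}/tf_version_check.c
    CMAKE_FLAGS "-DINCLUDE_DIRECTORIES=${TF_INCLUDE_DIR}"
    LINK_LIBRARIES ${TF_LIB_DIR}/libtensorflow.so
    RUN_OUTPUT_VARIABLE TF_VERSION)

if(NOT TF_COMPILE_RESULT OR NOT TF_VERSION MATCHES "^1\\.10")
  message(FATAL_ERROR "TensorFlow 1.10 required, found: '${TF_VERSION}'")
endif()

This assumes the C library is present to link against; if you only have headers from a full source tree, an alternative is parsing TF_MAJOR_VERSION and friends out of tensorflow/core/public/version.h.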

How to force tensorflow to use a custom version of Eigen?

I am compiling Tensorflow 1.5, and I want to force bazel to include a custom version of the eigen header files, which are at:
/usr/local/lib/python2.7/dist-packages/...
However, whenever I try to compile (even after a bazel clean --expunge), TensorFlow uses different files, which are copied during the build procedure to:
/root/.cache/bazel/_bazel_root/
Is there any way to force tensorflow to use different files?
You can change the tf_http_archive rule for eigen_archive (you must not change the name) in tensorflow/workspace.bzl to new_local_repository and use Tensorflow's eigen BUILD file (//third_party:eigen.BUILD).
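A hedged sketch of what that replacement can look like in tensorflow/workspace.bzl, mirroring the oneDNN example above (the path is illustrative):

native.new_local_repository(
    name = "eigen_archive",  # keep this exact name
    build_file = clean_dep("//third_party:eigen.BUILD"),
    path = "/path/to/your/custom/eigen",
)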

Integrating tensorflow in a larger C++ project -- Library conflicts

Objective: Integrate tensorflow into a larger project.
Solution: 1) Integrate TensorFlow into CMake by passing appropriate arguments to Bazel and get a working build.
2) Unzip the *.whl file to get the library and headers.
Problem: TensorFlow builds, but it has its own header files for protobuf and Eigen. My project also depends on these two libraries, and the versions might not match. However, I could use the libraries that TensorFlow fetches and replace the ones we currently use. We currently build protobuf in our system.
Question: I can find the protobuf and Eigen header files used by TensorFlow inside the built .whl file, but I cannot find the .so files.
My understanding of Bazel is limited, but it may be that it removes the .so files from the sandbox it uses; I am not sure.
What can I do to always obtain the lib and include folders for the dependencies that TensorFlow downloads, namely protobuf? (Eigen is header-only.)
Already tried: searching the ~/.cache/bazel/ directory.

List of headers to use Tensorflow C++ API using libtensorflow_cc.so

I want to know which header files are required in order to use TensorFlow's C++ API. In the case of the C API there is just a single header, c_api.h, with all the functions declared; is there any such single header for the C++ API? I tried searching for this but was unable to work out what is required and what is not.
There is a huge list of headers in tensorflow/cc, tensorflow/core and tensorflow/c which are used to build libtensorflow_cc.so, and we also ship most of these in TensorFlow's distribution (by TF's distribution I mean TF built using bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package). Is that list of headers sufficient to use the C++ API, or do we need to build any additional target in tensorflow/BUILD?
I've also gone through https://www.tensorflow.org/api_docs/cc/ but can't really make out the exact list of required headers.
In one of the related posts, I found that tensorflow/bazel-genfiles contains the required headers. Please confirm this.
For those who are building TensorFlow v2 C++ for Windows using bazel, kindly use
bazel build --config=opt tensorflow:install_headers
This will generate an include folder with the cc header files in bazel-bin/tensorflow/.
As far as I know, there is no official distributable C++ API package. There is, however, the tensorflow_cc project, which builds and installs the TF C++ API for you, along with convenient CMake targets you can link against.
Although it probably installs slightly more files than necessary, you can find the list of installed headers in CMakeLists.txt:130:
# install *all* files with .h extension
/tensorflow/**/*.h
# install all dependencies downloaded by contrib/makefile
/tensorflow/tensorflow/contrib/makefile/downloads/
# install all files from third_party folder (e.g., Eigen/Tensor)
/tensorflow/third_party/
And you can find the list of directories which should be included by your compiler in CMakeLists.txt:58:
/tensorflow
/tensorflow/bazel-genfiles
/tensorflow/tensorflow/contrib/makefile/downloads
/tensorflow/tensorflow/contrib/makefile/downloads/eigen
/tensorflow/tensorflow/contrib/makefile/downloads/gemmlowp
/tensorflow/tensorflow/contrib/makefile/gen/protobuf-host/include
Note that the C++ API also requires Eigen and protobuf headers and libraries, which, in the case of tensorflow_cc, are built using contrib/makefile from the TF repo.
You may prefer to use tensorflow_cc directly so that you don't have to bother with all this manually.
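For instance, once tensorflow_cc is installed, linking from your own CMake project reduces to something like the following (based on the project's README at the time of writing; check it for the current target names):

# CMakeLists.txt of your own project
find_package(TensorflowCC REQUIRED)
add_executable(example example.cpp)
target_link_libraries(example TensorflowCC::TensorflowCC)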
An alternative way to build libtensorflow_cc.so is to use the tensorflow/tensorflow:devel-gpu Docker image, then build it with the command:
bazel build --config=opt //tensorflow:libtensorflow_cc.so
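To make the header question concrete, here is a minimal sketch of a program against the C++ API (TF 1.x-era, matching this thread); it compiles with the header set and include directories listed above and links against libtensorflow_cc.so:

// Minimal TF C++ API example: builds a tiny graph computing 2 + 3 and
// runs it through a ClientSession.
#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  using namespace tensorflow;
  using namespace tensorflow::ops;

  Scope root = Scope::NewRootScope();
  auto a = Const(root, 2.0f);
  auto b = Const(root, 3.0f);
  auto sum = Add(root, a, b);

  ClientSession session(root);
  std::vector<Tensor> outputs;
  TF_CHECK_OK(session.Run({sum}, &outputs));
  LOG(INFO) << outputs[0].scalar<float>();  // prints 5
  return 0;
}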