Tensorflow serving custom gpu op cannot find dependency when compiling - tensorflow

I followed the guides on making a custom GPU op for TensorFlow and could build the shared lib. For tensorflow-serving I adapted the required paths, but I get an error when building:
ERROR: /home/g360/Documents/eduardss/serving/tensorflow_serving/custom_ops/CUSTOM_OP/BUILD:32:1: undeclared inclusion(s) in rule '//tensorflow_serving/custom_ops/CUSTOM_OP:CUSTOM_OP_ops_gpu':
this rule is missing dependency declarations for the following files included by 'tensorflow_serving/custom_ops/CUSTOM_OP/cc/magic_op.cu.cc':
'external/org_tensorflow/tensorflow/core/platform/stream_executor.h'
'external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_platform_id.h'
'external/org_tensorflow/tensorflow/stream_executor/platform.h'
'external/org_tensorflow/tensorflow/stream_executor/device_description.h'
'external/org_tensorflow/tensorflow/stream_executor/launch_dim.h'
'external/org_tensorflow/tensorflow/stream_executor/platform/port.h'
'external/org_tensorflow/tensorflow/stream_executor/device_options.h'
'external/org_tensorflow/tensorflow/stream_executor/platform/logging.h'
'external/org_tensorflow/tensorflow/stream_executor/lib/status.h'
'external/org_tensorflow/tensorflow/stream_executor/lib/error.h'
'external/org_tensorflow/tensorflow/stream_executor/lib/status_macros.h'
'external/org_tensorflow/tensorflow/stream_executor/lib/statusor.h'
'external/org_tensorflow/tensorflow/stream_executor/lib/statusor_internals.h'
'external/org_tensorflow/tensorflow/stream_executor/plugin.h'
'external/org_tensorflow/tensorflow/stream_executor/trace_listener.h'
'external/org_tensorflow/tensorflow/stream_executor/device_memory.h'
'external/org_tensorflow/tensorflow/stream_executor/kernel.h'
'external/org_tensorflow/tensorflow/stream_executor/kernel_cache_config.h'
'external/org_tensorflow/tensorflow/stream_executor/lib/array_slice.h'
'external/org_tensorflow/tensorflow/stream_executor/dnn.h'
'external/org_tensorflow/tensorflow/stream_executor/event.h'
'external/org_tensorflow/tensorflow/stream_executor/host/host_platform_id.h'
'external/org_tensorflow/tensorflow/stream_executor/multi_platform_manager.h'
'external/org_tensorflow/tensorflow/stream_executor/lib/initialize.h'
'external/org_tensorflow/tensorflow/stream_executor/platform/initialize.h'
'external/org_tensorflow/tensorflow/stream_executor/platform/platform.h'
'external/org_tensorflow/tensorflow/stream_executor/platform/default/initialize.h'
'external/org_tensorflow/tensorflow/stream_executor/platform/dso_loader.h'
'external/org_tensorflow/tensorflow/stream_executor/platform/default/dso_loader.h'
'external/org_tensorflow/tensorflow/stream_executor/rocm/rocm_platform_id.h'
'external/org_tensorflow/tensorflow/stream_executor/scratch_allocator.h'
'external/org_tensorflow/tensorflow/stream_executor/temporary_device_memory.h'
'external/org_tensorflow/tensorflow/stream_executor/stream.h'
'external/org_tensorflow/tensorflow/stream_executor/blas.h'
'external/org_tensorflow/tensorflow/stream_executor/host_or_device_scalar.h'
'external/org_tensorflow/tensorflow/stream_executor/fft.h'
'external/org_tensorflow/tensorflow/stream_executor/platform/thread_annotations.h'
'external/org_tensorflow/tensorflow/stream_executor/temporary_memory_manager.h'
'external/org_tensorflow/tensorflow/stream_executor/stream_executor.h'
'external/org_tensorflow/tensorflow/stream_executor/kernel_spec.h'
'external/org_tensorflow/tensorflow/stream_executor/stream_executor_pimpl.h'
'external/org_tensorflow/tensorflow/stream_executor/device_memory_allocator.h'
'external/org_tensorflow/tensorflow/stream_executor/lib/threadpool.h'
'external/org_tensorflow/tensorflow/stream_executor/lib/env.h'
'external/org_tensorflow/tensorflow/stream_executor/lib/thread_options.h'
'external/org_tensorflow/tensorflow/stream_executor/rng.h'
'external/org_tensorflow/tensorflow/stream_executor/shared_memory_config.h'
'external/org_tensorflow/tensorflow/stream_executor/stream_executor_internal.h'
'external/org_tensorflow/tensorflow/stream_executor/allocator_stats.h'
'external/org_tensorflow/tensorflow/stream_executor/module_spec.h'
'external/org_tensorflow/tensorflow/stream_executor/plugin_registry.h'
'external/org_tensorflow/tensorflow/stream_executor/timer.h'
The kernel source in question includes:
#if GOOGLE_CUDA
#define EIGEN_USE_GPU
#include "third_party/eigen3/unsupported/Eigen/CXX11/Tensor"
#include "tensorflow/core/util/gpu_kernel_helper.h"
#include "math.h"
#include <iostream>
Of these dependencies I think gpu_kernel_helper.h is the one causing the error, as my BUILD file dependencies are:
deps = [
    "@org_tensorflow//tensorflow/core:framework",
    "@org_tensorflow//tensorflow/core:lib",
    "@org_tensorflow//third_party/eigen3",
] + if_cuda_is_configured([":cuda", "@local_config_cuda//cuda:cuda_headers"]),
If I try to add them directly, Bazel complains that there is no BUILD file on that path. I'm not really familiar with the Bazel build process, so I don't understand exactly how it links dependencies.
EDIT 1
I used tensorflow-serving 2.1.0 and the tensorflow/serving:2.1.0-devel-gpu docker image.
Looking in @org_tensorflow/tensorflow/core/BUILD there is actually some reference to 'gpu_kernel_helper.h':
tf_cuda_library(
    name = "framework",
    hdrs = [
        ...
        "util/gpu_kernel_helper.h",
    ]
But apparently some further downstream links are missing?

Solution: The missing dependency can be linked in with:
"@org_tensorflow//tensorflow/core:stream_executor_headers_lib"
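Putting it together, the deps block might look like this sketch (adapted from the question's BUILD file; the target names are the question's own and have not been verified against a current TensorFlow tree):

```
deps = [
    "@org_tensorflow//tensorflow/core:framework",
    "@org_tensorflow//tensorflow/core:lib",
    # The stream_executor headers pulled in by gpu_kernel_helper.h:
    "@org_tensorflow//tensorflow/core:stream_executor_headers_lib",
    "@org_tensorflow//third_party/eigen3",
] + if_cuda_is_configured([":cuda", "@local_config_cuda//cuda:cuda_headers"]),
```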

Related

Cmake 3.1 + "Protobuf_IMPORT_DIRS" importing another .proto error

I'm still fairly new to CMake, but I'm using find_package(Protobuf REQUIRED) to compile my .proto files as part of the build, and I'm having trouble getting imports to work; I'm well and truly stumped.
I have two .proto files in the same directory, "protobuf", named "A.proto" and "B.proto".
Without an import, they compile fine.
If I change A.proto to have an import to B:
syntax = "proto3";
import "B.proto";
message MyMessage
{}
With a CMakeLists.txt file that sets the Protobuf_IMPORT_DIRS variable correctly (I think):
find_package(Protobuf REQUIRED)
set(Protobuf_IMPORT_DIRS ${Protobuf_IMPORT_DIRS} ${CMAKE_SOURCE_DIR}/protobuf)
...
protobuf_generate(TARGET ${MY_PROJECT_NAME})
I get this on build:
Running cpp protocol buffer compiler on protobuf/A.proto
B.proto: File not found.
protobuf/A.proto:3:1: Import "B.proto" was not found or had errors.
Any help would be much appreciated, as I feel like I'm taking crazy pills! :)
So I found the answer, although it took some hacking. Basically I just read all the CMake files associated with Protobuf until I figured it out. There's probably better documentation out there, but I couldn't find it.
Short Story:
If you call PROTOBUF_GENERATE_CPP, it respects the Protobuf_IMPORT_DIRS variable.
However, if you call protobuf_generate directly for a target, as I do, the variable is ignored.
The answer is to call protobuf_generate with an argument of IMPORT_DIRS, such as:
protobuf_generate(TARGET ${MY_PROJECT_NAME} IMPORT_DIRS protobuf)
IMPORT_DIRS is a multi-value argument; you can supply several.
Here's the relevant code:
function(protobuf_generate)
  include(CMakeParseArguments)
  set(_options APPEND_PATH)
  set(_singleargs LANGUAGE OUT_VAR EXPORT_MACRO PROTOC_OUT_DIR PLUGIN)
  if(COMMAND target_sources)
    list(APPEND _singleargs TARGET)
  endif()
  set(_multiargs PROTOS IMPORT_DIRS GENERATE_EXTENSIONS)
  cmake_parse_arguments(protobuf_generate "${_options}" "${_singleargs}" "${_multiargs}" "${ARGN}")
I hope this will save someone a headache in the future!
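For reference, a minimal CMakeLists.txt sketch of the working setup (the target name my_project and the source file main.cpp are illustrative; the protobuf directory is the one from the question):

```
cmake_minimum_required(VERSION 3.13)
project(MyProject CXX)

find_package(Protobuf REQUIRED)

# List the .proto files as target sources so the TARGET form of
# protobuf_generate can find them.
add_executable(my_project main.cpp protobuf/A.proto protobuf/B.proto)
target_link_libraries(my_project protobuf::libprotobuf)

# Pass the import directory explicitly; Protobuf_IMPORT_DIRS is ignored
# by the TARGET form of protobuf_generate.
protobuf_generate(TARGET my_project IMPORT_DIRS protobuf)
```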

Successful build of Kicad 4.0.6 in Linux Mageia 5 via fixing a wx-3.0 symbol

I have managed to build the Kicad 4.0.6 in Linux Mageia 5.1 with gcc version 4.9.2. I first manually fixed two wxWidgets 3.0.2 header files in the /usr/include/wx-3.0/wx/ directory: regex.h and features.h. Kicad then compiled successfully. With the native wx-3.0 headers, the compiler generated the error in pcbnew/netlist_reader.cpp due to the undefined variable wxRE_ADVANCED.
The features.h header checks whether the macro WX_NO_REGEX_ADVANCED is defined: if it is, features.h undefines the wxHAS_REGEX_ADVANCED macro, and defines it otherwise. The macro wxHAS_REGEX_ADVANCED, in turn, is used in regex.h to determine whether the enum constant wxRE_ADVANCED = 1 is present. In the standard prebuilt Mageia 5 packages wxgtku3.0_0 and lib64wxgtku3.0-devel, which I installed from the Mageia repository with the urpmi software manager, WX_NO_REGEX_ADVANCED is defined, therefore wxHAS_REGEX_ADVANCED is undefined and, hence, wxRE_ADVANCED is undefined as well. The Kicad 4.0.6 source package assumes wxRE_ADVANCED = 1, so the build process stops with the error.
Then I reverted /usr/include/wx-3.0/wx/regex.h and features.h to their original state and learned how to add the definition of wxRE_ADVANCED to CMakeLists.txt. However, I still have a question.
The recommended format of adding the definition to CMakeLists.txt I found at CMake command line for C++ #define is this:
if (NOT DEFINED wxRE_ADVANCED)
set(wxRE_ADVANCED 1)
endif()
add_definitions(-DwxRE_ADVANCED=$(wxRE_ADVANCED))
However, it did not work! The macro expansion of wxRE_ADVANCED in pcbnew/netlist_reader.cpp was empty. I printed it at compile time by inserting the following lines into netlist_reader.cpp (this was hard to find; most of the recommended formats did not work. The correct one is in C preprocessor: expand macro in a #warning):
#define __STRINGIFY(TEXT) #TEXT
#define __WARNING(TEXT) __STRINGIFY(GCC warning TEXT)
#define WARNING(VALUE) __WARNING(__STRINGIFY(wxRE_ADVANCED = VALUE))
_Pragma (WARNING(wxRE_ADVANCED))
Finally, I simplified the CMakeLists.txt definition down to this, and it was a success:
if (NOT DEFINED wxRE_ADVANCED)
set(wxRE_ADVANCED 1)
endif()
add_definitions(-DwxRE_ADVANCED=1)
My question: what is the meaning of "-DwxRE_ADVANCED=$(wxRE_ADVANCED)" if it does not work? Is it possible to skip set(wxRE_ADVANCED 1) and simply write add_definitions(-DwxRE_ADVANCED=1)? Thank you.
P.S. Yes, the Kicad 4.0.6 build process successfully finished with only one line added to the top level CMakeLists.txt file:
add_definitions(-DwxRE_ADVANCED=1)
A CMake variable is dereferenced via ${variable}. Note the curly brackets, not parentheses: CMake does not expand $(wxRE_ADVANCED) at all, so the literal text is passed through to the compiler, and the macro ends up empty.
Also, it is recommended to use:
target_compile_definitions(mytarget PUBLIC wxRE_ADVANCED=1)
on a target directly, rather than the general add_definitions() command.
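A minimal sketch of that recommendation (the target name kicad_pcbnew and the source file are placeholders; substitute your actual target):

```
# Prefer per-target definitions over directory-wide add_definitions():
# the definition then travels with the target and, with PUBLIC, with
# anything that links against it.
add_executable(kicad_pcbnew pcbnew/netlist_reader.cpp)
target_compile_definitions(kicad_pcbnew PUBLIC wxRE_ADVANCED=1)
```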

How do I register "custom" Op (actually, from syntaxnet) with tensorflow serving?

I'm trying to serve a model exported from syntaxnet but the parser_ops are not available. The library file with the ops is found (out-of-tree) at:
../models/syntaxnet/bazel-out/local-opt/bin/syntaxnet/parser_ops.so
I'm currently hacking the mnist_inference example (because I don't know how to build anything out-of-tree with Bazel), and the command I'm running is:
./bazel-out/local-opt/bin/tensorflow_serving/example/mnist_inference --port=9000 /tmp/model/00000001
And the error I'm getting is:
F tensorflow_serving/example/mnist_inference.cc:208] Check failed: ::tensorflow::Status::OK() == (bundle_factory->CreateSessionBundle(bundle_path, &bundle)) (OK vs. Not found: Op type not registered 'FeatureSize')
And FeatureSize is definitely defined in the parser_ops.so, I just don't know how to load it.
I'm not too familiar with TF (I work on Bazel) but it looks like you need to add parser_ops as a dependency of mnist_inference.
There is a right way to do this and a wrong (easier) way.
The Right Way
Basically you add syntaxnet as a dependency of the example you're building. Unfortunately, the syntaxnet project and the tensorflow serving project import tensorflow itself under different names, so you have to do some mangling of the serving WORKSPACE file to get this working.
Add the following to the tensorflow_serving WORKSPACE file:
local_repository(
    name = "syntaxnet",
    path = "/path/to/your/checkout/of/models/syntaxnet",
)
This allows you to refer to the targets in syntaxnet from the tensorflow serving project (by prefixing them with "@syntaxnet"). Unfortunately, as mentioned above, you also have to get all of syntaxnet's external dependencies into the WORKSPACE file, which is annoying. You can test whether it's working with bazel build @syntaxnet//syntaxnet:parser_ops_cc.
Once you've done that, add the cc_library @syntaxnet//syntaxnet:parser_ops_cc (parser_ops.so is a cc_binary, which can't be used as a dependency) to mnist_inference's deps:
deps = [
    "@syntaxnet//syntaxnet:parser_ops_cc",
    "@grpc//:grpc++",
    ...
Note that this still won't quite work: parser_ops_cc is a private target in syntaxnet (so it can't be depended on from outside its package), but you could add an attribute to it like visibility = ["//visibility:public"] if you're just trying things out:
cc_library(
    name = "parser_ops_cc",
    srcs = ["ops/parser_ops.cc"],
    visibility = ["//visibility:public"],
    ...
The Wrong Way
You have a .so, which you can add as a src file for your binary. Add the directory it's in as a new_local_repository() and add the file to srcs in the BUILD file.
WORKSPACE file:
new_local_repository(
    name = "hacky_syntaxnet",
    path = "/path/to/syntaxnet/bazel-out/local-opt/bin/syntaxnet",
    build_file_content = """
exports_files(glob(["*"]))  # Make all of the files available.
""",
)
BUILD file:
srcs = [
    "mnist_inference.cc",
    "@hacky_syntaxnet//:parser_ops.so",
],

how to get visual c++ default include path using cmake?

I am trying to use CMake to determine whether the inttypes.h header exists, for a generated Visual C++ 11 project.
Initially, I wrote the following in CMakeLists.txt:
FIND_FILE(HAVE_INTTYPES_H "inttypes.h" DOC "Does the inttypes.h exist?")
Unfortunately, the HAVE_INTTYPES_H variable is HAVE_INTTYPES_H-NOTFOUND.
Afterwards, I looked up the CMake documentation on find_file, which mentions the need for search paths. But I cannot find where CMake exposes the C standard header locations.
Thanks.
Your find_file call is correct. The problem is that there's no inttypes.h in Visual Studio. So keep your test the same, but when the header is not found, include an alternative, for instance the one provided by: http://code.google.com/p/msinttypes/
Something like:
FIND_FILE(HAVE_INTTYPES_H "inttypes.h" DOC "Does the inttypes.h exist?")
if (HAVE_INTTYPES_H)
  add_definitions(-DHAVE_INTTYPES_H=1)
endif()
and in your code:
#ifdef HAVE_INTTYPES_H
#include <inttypes.h>
#else
#include "path/to/inttypes.h"
#endif
Now, to detect headers, you may also want to try the CheckIncludeFile standard CMake module; it tries to detect include files using your target compiler instead of searching the file system:
include(CheckIncludeFile)
check_include_file("stdint.h" STDINT_H_FOUND)
if (STDINT_H_FOUND)
  add_definitions(-DHAVE_STDINT_H=1)
endif()

Avoid multiple include_directories directives using Cmake

I've been assigned to completely build a project using CMake.
Basically, the project has over 20 modules, and for each module I created a CMake file such as:
# Module: CFS
file(
GLOB_RECURSE
files
*.c
*.cpp
)
include_directories("${PROJECT_SOURCE_DIR}/include/PEM/cfs")
include_directories("${PROJECT_SOURCE_DIR}/include/PEM/kernel2")
SET(LIBRARY_OUTPUT_PATH ${PROJECT_BINARY_DIR}/lib)
add_library(cfs ${files})
kernel2 is another module and has its own CMakeFile.
Now the problem is that a third module, m3, requires headers from cfs (which themselves require headers from kernel2).
So I basically go with:
# Module: m3
file( ... )
include_directories("${PROJECT_SOURCE_DIR}/include/PEM/cfs")
add_library(m3 ${files})
target_link_libraries(m3 cfs)
Unfortunately this is not enough; kernel2's include files won't be found when I compile unless I add:
include_directories("${PROJECT_SOURCE_DIR}/include/PEM/kernel2")
Am I doing it wrong? Perhaps I should declare the include directories as part of the library target itself?
If you have #include directives in cfs's headers, then you should use
include_directories("${PROJECT_SOURCE_DIR}/include/PEM/kernel2")
It's not a problem with CMake, but with how the C/C++ compiler works.
For example, say you have the following header in cfs:
#include "kernel2/someclass.h"
class SomeCfsClass
{
private:
    SomeKernelClass kernelObject;
};
Now if you wish to instantiate SomeCfsClass in your m3 module, the compiler needs to know its size. But knowing its size is not possible without knowing the SomeKernelClass definition from kernel2/someclass.h.
This situation can be resolved by storing not the object itself, but a pointer to it, inside SomeCfsClass:
class SomeKernelClass; // forward-declare SomeKernelClass
class SomeCfsClass
{
private:
    SomeKernelClass * kernelObject;
};
But of course, there are cases when it's simply impossible to avoid the include.
As an alternative, I can suggest using relative paths in #include directives, but this solution is somewhat hackish.
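As a side note, this kind of transitive propagation is exactly what target-based CMake commands were designed for. A sketch under the assumption that the headers live where the question says (paths copied from the question; this needs CMake 2.8.11 or later):

```
# In cfs's CMakeLists.txt: declare the include dirs as PUBLIC so that
# anything linking against cfs inherits them automatically.
add_library(cfs ${files})
target_include_directories(cfs PUBLIC
  "${PROJECT_SOURCE_DIR}/include/PEM/cfs"
  "${PROJECT_SOURCE_DIR}/include/PEM/kernel2")

# In m3's CMakeLists.txt: no include_directories() needed at all.
add_library(m3 ${files})
target_link_libraries(m3 cfs)
```

With PUBLIC usage requirements, target_link_libraries carries the include paths along, so m3 picks up both the cfs and kernel2 headers without repeating them.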