Can custom TensorFlow user_ops be served through TensorFlow Serving?

I have created custom TensorFlow operators in C++, similar to the examples in tensorflow/user/ops/, and they work fine when used in TensorFlow sessions.
When saving a SavedModel using the operator, the resulting saved_model does contain the operators (at least a cursory inspection of a text protocol buffer of such a model shows that). Trying to serve this with a tensorflow_model_server of course fails at first, since the operator is unknown.
So I proceeded to extend the tensorflow_model_server with an option to specify the user_ops libraries to be loaded beforehand. The relevant code snippet inserted into "main.cc" of the tensorflow_model_server is:
if (librarypath.size() > 0) {
  // Load the user_ops library before any models are loaded.
  TF_Status* status = TF_NewStatus();
  TF_LoadLibrary(librarypath.c_str(), status);
  if (TF_GetCode(status) != TF_OK) {
    std::cout << "Problem loading user_op library " << librarypath << ": "
              << TF_Message(status) << std::endl;
    TF_DeleteStatus(status);
    return -1;
  }
  TF_DeleteStatus(status);
}
Unfortunately, this does not quite work as expected, I get
Problem loading user_op library /usr/lib64/multipolygon_op.so: /usr/lib64/multipolygon_op.so: undefined symbol: _ZTIN10tensorflow8OpKernelE
This somehow refers to _pywrap_tensorflow_internal.so symbols. Do I need to build the user op library differently, or am I just out of luck?
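As an aside, the undefined symbol can be demangled to see exactly what it refers to (assuming binutils' c++filt is installed):

```shell
# Demangle the C++ symbol from the loader error.
c++filt _ZTIN10tensorflow8OpKernelE
# → typeinfo for tensorflow::OpKernel
```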

OK, after trying out a number of different avenues, the answer turns out to be relatively simple:
The tensorflow_model_server is linked in such a way that it does not provide its own symbols to newly loaded shared libraries. Adding "-rdynamic" to the linker options changes that and makes everything fall into place:
bazel build --linkopt=-rdynamic //tensorflow_serving/model_servers:tensorflow_model_server

Related

How to enable/disable a particular bbappend for a specific MACHINE in Yocto

I'm trying to understand the mechanism Yocto provides to enable/disable a particular bbappend for a specific MACHINE. I read this link (Modifying Variables to Support a Different Machine):
https://www.yoctoproject.org/docs/1.5/dev-manual/dev-manual.html#best-practices-to-follow-when-creating-layers
And also found some information related here on stack overflow:
Machine specific layers in yocto
I have tried putting all this information into practice without any success. This is my particular problem:
A BSP layer for an "x" platform provides a qtbase_%.bbappend that modifies the qtbase recipe from meta-qt5. I need this qtbase_%.bbappend to apply only when building for MACHINE="x", and not for other machines.
This is the content of the original qtbase_%.bbappend defined on the x-bsp-layer:
PACKAGECONFIG_GL = "gles2"
PACKAGECONFIG_FONTS = "fontconfig"
PACKAGECONFIG_APPEND = " \
${@bb.utils.contains("DISTRO_FEATURES", "wayland", "xkbcommon-evdev", \
bb.utils.contains("DISTRO_FEATURES", "x11", " ", "libinput eglfs gbm", d), d)} \
"
PACKAGECONFIG_append = " ${PACKAGECONFIG_APPEND} kms accessibility sm"
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
PACKAGECONFIG_remove = "evdev"
Whenever I try to build an image for a MACHINE different from "x", the build breaks:
| ERROR: Feature 'opengles2' was enabled, but the pre-condition 'config.win32 || (!config.watchos && !features.opengl-desktop && libs.opengl_es2)' failed.
| ERROR: Feature 'eglfs' was enabled, but the pre-condition '!config.android && !config.darwin && !config.win32 && features.egl' failed.
| ERROR: Feature 'gbm' was enabled, but the pre-condition 'libs.gbm' failed.
Removing the x-BSP-layer from bblayers.conf solves the problem, but that's not the kind of solution I am looking for.
I tried fixing this using the information provided in the previous links. I modified the qtbase_%.bbappend recipe in this way:
PACKAGECONFIG_GL_x = "gles2"
PACKAGECONFIG_FONTS_x = "fontconfig"
PACKAGECONFIG_APPEND_x = " \
${@bb.utils.contains("DISTRO_FEATURES", "wayland", "xkbcommon-evdev", \
bb.utils.contains("DISTRO_FEATURES", "x11", " ", "libinput eglfs gbm", d), d)} \
"
PACKAGECONFIG_append_x = " ${PACKAGECONFIG_APPEND} kms accessibility sm"
FILESEXTRAPATHS_prepend_x := "${THISDIR}/${PN}:"
PACKAGECONFIG_remove_x = "evdev"
As you can see, I appended the "_x" suffix to all the recipe variables. As I understand it, those "_x" overrides should make each variable take effect only when building for MACHINE="x". Right? But it doesn't work as expected; it produces the same problem. So, in practice, this means I don't understand even the basics of this mechanism.
Can some of you provide a good explanation for this? I think it should be helpful for others with the same issue out there. Thanks a lot for your time! :-)
Just add COMPATIBLE_MACHINE = "x" in the .bbappend file.

"As you can see, I appended the "_x" suffix to all recipe variables"

Remove all the "_x" suffixes in the .bbappend file.
Note that adding COMPATIBLE_MACHINE as suggested would change the signatures of the original recipe, which is bad practice, and would result in your layer failing the compatibility test carried out by the yocto-check-layer script. Consult this for details.
The correct way of making a .bbappend file machine-specific is through overrides, as you're already doing in your proposal. Why it still fails is a different question. I suggest inspecting the recipe's variables through bitbake, and switching machines to verify that they change accordingly.
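For illustration, a minimal machine-specific qtbase_%.bbappend might look like this (a sketch using the same pre-Honister override syntax as the question; the machine name "x" and the PACKAGECONFIG values are taken from the question):

```
# These lines only take effect when MACHINE = "x",
# because the machine name is part of OVERRIDES.
FILESEXTRAPATHS_prepend_x := "${THISDIR}/${PN}:"
PACKAGECONFIG_append_x = " kms accessibility sm"
PACKAGECONFIG_remove_x = "evdev"
```

To see what a recipe actually ends up with for the currently selected MACHINE, `bitbake -e qtbase | grep '^PACKAGECONFIG='` prints the final expanded value.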

Azure IoT hub C sdk blob upload example possible without low level API?

I'm trying the iothub_client/samples/iothub_client_sample_upload_to_blob from the Azure IoT hub C sdk. It compiles and works fine if I use the low-level API.
But as soon as I switch to the convenience layer (as the documentation in the app's file suggests), I get an error:
/home/user/workspaceMisc/azure-iot-sdk-c/iothub_client/samples/iothub_client_sample_upload_to_blob/iothub_client_sample_upload_to_blob.c: In function ‘iothub_client_sample_upload_to_blob_run’:
/home/user/workspaceMisc/azure-iot-sdk-c/iothub_client/samples/iothub_client_sample_upload_to_blob/iothub_client_sample_upload_to_blob.c:77:25: error: implicit declaration of function ‘IoTHubClient_UploadToBlob’ [-Werror=implicit-function-declaration]
if (IoTHubClient_UploadToBlob(iotHubClientHandle, "subdir/hello_world.txt", (const unsigned char*)HELLO_WORLD, sizeof(HELLO_WORLD) - 1) != IOTHUB_CLIENT_OK)
^
cc1: all warnings being treated as errors
iothub_client/samples/iothub_client_sample_upload_to_blob/CMakeFiles/iothub_client_sample_upload_to_blob.dir/build.make:62: recipe for target 'iothub_client/samples/iothub_client_sample_upload_to_blob/CMakeFiles/iothub_client_sample_upload_to_blob.dir/iothub_client_sample_upload_to_blob.c.o' failed
How can I upload a file with the convenience layer instead of the low level layer? Is it possible at all?
I'm using Ubuntu 16.04, gcc 5.4.0 and the latest clone of the SDK.
Actually, the function name is IoTHubClient_UploadToBlobAsync; you need to add the "Async" suffix. It also takes two additional parameters: iotHubClientFileUploadCallback and context. The documentation is somewhat misleading here.
So you can call this function like this:
IoTHubClient_UploadToBlobAsync(iotHubClientHandle, "subdir/hello_world.txt", (const unsigned char*)HELLO_WORLD, sizeof(HELLO_WORLD) - 1, NULL, NULL);
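If you want to know when the upload finishes, you can pass a real callback instead of the two NULLs. A sketch based on the convenience layer's IOTHUB_CLIENT_FILE_UPLOAD_CALLBACK type; the callback name is a placeholder and error handling is omitted:

```c
#include <stdio.h>
#include "iothub_client.h"

/* Called by the SDK once the blob upload completes. */
static void upload_done(IOTHUB_CLIENT_FILE_UPLOAD_RESULT result, void* context)
{
    (void)context;
    printf("Upload %s\n", result == FILE_UPLOAD_OK ? "succeeded" : "failed");
}

/* ... inside iothub_client_sample_upload_to_blob_run(): */
IoTHubClient_UploadToBlobAsync(iotHubClientHandle, "subdir/hello_world.txt",
                               (const unsigned char*)HELLO_WORLD,
                               sizeof(HELLO_WORLD) - 1,
                               upload_done, NULL);
```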

How to create an op like conv_ops in tensorflow?

What I'm trying to do
I'm new to C++ and Bazel, and I want to make some changes to the convolution operation in TensorFlow, so I decided that my first step would be to create an op just like it.
What I have done
I copied conv_ops.cc from //tensorflow/core/kernels and changed the name of the op registered in my new_conv_ops.cc. I also changed some of the function names in the file to avoid duplication. And here is my BUILD file.
As you can see, I copied the deps attribute of conv_ops from //tensorflow/core/kernels/BUILD. Then I used "bazel build -c opt //tensorflow/core/user_ops:new_conv_ops.so" to build the new op.
What my problem is
Then I got this error.
I tried deleting bounds_check and got the same error for the next dep. Then I realized that there is some problem with including header files from //tensorflow/core/kernels in //tensorflow/core/user_ops. So how can I properly create a new op exactly like conv_ops?
Adding a custom operation to TensorFlow is covered in the tutorial here. You can also look at actual code examples.
To address your specific problem, note that the tf_custom_op_library macro adds most of the necessary dependencies to your target. You can simply write the following:
tf_custom_op_library(
    name = "new_conv_ops.so",
    srcs = ["new_conv_ops.cc"],
)
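Once the library builds, it can be loaded from Python with tf.load_op_library; a minimal sketch (the .so path is the Bazel output for the build command above, and the op names depend on your REGISTER_OP calls):

```python
import tensorflow as tf

# Load the compiled custom-op library (path produced by
# "bazel build //tensorflow/core/user_ops:new_conv_ops.so").
new_conv_module = tf.load_op_library(
    "bazel-bin/tensorflow/core/user_ops/new_conv_ops.so")

# Each op registered in the library appears as a snake_cased
# attribute of the returned module.
print(dir(new_conv_module))
```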

How to get author of a pdf document with mupdf

How can I get the metadata of a PDF document (e.g. title, author, creation date) using the MuPDF library? There is not enough documentation to figure this out, and the comments in the code are not sufficient either. Most probably there is functionality for this purpose, but it is hard to find under these circumstances. The following code is what I have so far.
char info[64];
globals *glo = get_globals(env, thiz);
fz_meta(glo->doc, FZ_META_INFO, info, sizeof(info));
I have used the FZ_META_INFO tag, but it doesn't work: I don't get any info back, just an empty buffer. I have checked that the document does have metadata. Any help is appreciated.
EDIT:
Target Android sdk:20
Min Android sdk:15
Mupdf version: 1.6
ndk: r10c
Development OS: Ubuntu 12.04
In what sense 'doesn't work'? Throws an error? Crashes? Are you certain the PDF file you are using has any 'Info' metadata?
What version of MuPDF is this? What platform are you using?
You need to set the relevant key in the buffer you pass to fz_meta before you call fz_meta; I notice you aren't doing that.
See win_main.c at around line 487; after you get past the macro, this resolves to:
char info[256];
sprintf(info, "Title");
fz_meta(doc, FZ_META_INFO, info, 256);
On return 'info' will contain the metadata associated with the Title key in the dictionary.
When in doubt, build the sample app and follow it in a debugger......
While the proper casting allows you to send the key, that casting is NOT correct for receiving back a char*.
Example:
Casting used to send the request:
char buff[2048];
strcpy(buff, "CreationDate");
if (fz_meta(ctx, doc, FZ_META_INFO, &buff, 2048)) {
    buff[0] = 0;
}
This will find the key and convert the UTF-8, but then crash when copying back the result.
Casting used to receive the result:
char buff[2048];
strcpy(buff, "CreationDate");
if (fz_meta(ctx, doc, FZ_META_INFO, buff, 2048)) {
    buff[0] = 0;
}
This will crash during dictionary scanning.
It really looks like a bug!
I can confirm that modifying the original source to
info = pdf_dict_gets(ctx, info, (char *)ptr);
is the way to go (even if it is strange that nobody else has found it while writing code, because the meta features are useful and frequently used).
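For what it's worth, MuPDF 1.7 and later replaced fz_meta with fz_lookup_metadata, which separates the key from the output buffer and avoids this in/out confusion entirely; a sketch, assuming ctx and doc are an already-open fz_context and fz_document:

```c
/* Look up PDF Info-dictionary entries by "info:<Key>" (MuPDF >= 1.7). */
char author[256];
if (fz_lookup_metadata(ctx, doc, "info:Author", author, sizeof(author)) > 0)
    printf("Author: %s\n", author);

char created[256];
if (fz_lookup_metadata(ctx, doc, "info:CreationDate", created, sizeof(created)) > 0)
    printf("CreationDate: %s\n", created);
```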

NiTE2 fails to initialize with .oni recording

I've recorded some files with OpenNI2 Tools\NiViewer and I can load and read them with OpenNI2 without problems.
However, when initializing the NiTE2 nite::UserTracker, it breaks inside create() before it returns.
nite::Status et = nite::NiTE::initialize();
nite::UserTracker* m_pUserTracker = new nite::UserTracker();
if (m_pUserTracker->create(&device) != nite::STATUS_OK) {
    return openni::STATUS_ERROR;
}
The device was successfully created by giving it the path to the .oni file, and this exact code works if the stream is read directly from a Kinect.
Is it necessary to give NiTE more information about the .oni file before initialization, or does NiTE2 simply not support .oni files?
Thanks!