How to build TensorFlow from source with a model that uses custom ops which are renamed versions of existing ops?

I have a .tflite model that uses custom operations called MaxPoolingWithArgmax2D, MaxUnpooling2D, and Convolution2DTransposeBias.
These ops are actually not custom ops, since equivalents are already present in TensorFlow (MaxPoolWithArgmax, MaxUnpooling2D, conv2d_transpose).
After consulting this guide, I see that I'd have to write a kernel and an interface for these ops.
Is there a way to build TensorFlow from source without writing custom implementations for these ops, since they're already present in the library? The only problem is that the model I'm using has renamed them, so they're being recognized as custom ops. My goal is to perform inference with this model.
Edit: These ops are not select ops. They are built-in ops present inside the base library. However, the person who wrote this model renamed them, which makes them custom ops.
Edit 2: [photo for reference; image not included in this copy]

You can enable the existing TF ops through the Select TF ops option in TFLite.
For example, during the conversion stage, you can enable them:
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS,    # enable TensorFlow ops.
]
For the inference stage, please make sure that the Select TF dependency is linked. When using the TF Python API, it is enabled automatically.
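For example, a minimal end-to-end sketch (the SavedModel path, input shape, and float32 dtype are placeholders, not from the question):

import numpy as np
import tensorflow as tf

# Conversion: allow builtin TFLite ops plus select TF ops.
converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")  # placeholder path
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()

# Inference: the TF Python API links the select TF (Flex) delegate automatically.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.float32))
interpreter.invoke()
output = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])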
Please refer to this link.
Separately, implementations for some of these custom ops (such as MaxPoolingWithArgmax2D and MaxUnpooling2D) are distributed as the perception operator package:
from tensorflow.lite.kernels.perception import pywrap_perception_ops as perception_ops_registerer
from tensorflow.lite.python import interpreter as interpreter_wrapper

interpreter = interpreter_wrapper.InterpreterWithCustomOps(
    model_content=model,
    custom_op_registerers=[
        perception_ops_registerer.PerceptionOpsRegisterer
    ])
Please take a look at this link.
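From there, inference works as with a plain interpreter; a minimal sketch (the zero-filled input is just a placeholder):

import numpy as np

interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
output = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])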

Related

Inspecting functional keras model structure

I would like to inspect the layers and connections in a model, after having created a model using the Functional API in Keras. Essentially to start at the output and recursively enumerate the inputs of each layer instance. Is there a way to do this in the Keras or TensorFlow API?
The purpose is to create a more detailed visualisation than the ones provided by Keras (tf.keras.utils.plot_model). The model is generated procedurally based on a parameter file.
I have successfully used attributes of the KerasTensor objects to do this inspection:
output = Dense(1)(...)
print(output)
print(output.node)
print(output.node.keras_inputs)
print(output.node.keras_inputs[0].node)
This wasn't available in TF 2.6, only 2.7, and I realise it's not documented anywhere.
Is there a proper way to do this?
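One documented route (a sketch of an alternative, not from the question): a Functional model serialises its whole topology, so the layer graph can be enumerated from get_config() without touching private attributes:

import tensorflow as tf

# Small stand-in model; the real one is generated procedurally.
inputs = tf.keras.Input(shape=(4,))
x = tf.keras.layers.Dense(8, name="hidden")(inputs)
outputs = tf.keras.layers.Dense(1, name="out")(x)
model = tf.keras.Model(inputs, outputs)

# Each entry names its inbound layers, which is enough to rebuild the
# connection graph for a custom visualisation.
for layer_conf in model.get_config()["layers"]:
    print(layer_conf["name"], "<-", layer_conf["inbound_nodes"])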

How to convert a tensorflow hub pretrained model to be consumable by tensorflow serving

I am trying to use this for my object detection task. The problems I am facing are:
On running the saved_model_cli command, I get the following output: there is no signature defined with tag-set "serve", and the method name is empty.
The variables folder in the model directory only contains a few bytes of data, which means the weights are not actually written to disk.
The model format seems to be HubModule V1, which seems to be the issue; any tips on making the above model servable are highly appreciated.
TF2 SavedModels should not have this problem, only Hub.Modules from TF1, since Hub.Modules use the signatures for other purposes. You can take a hub.Module and build a servable SavedModel, but it's quite complex and involves building the signatures yourself.
Instead, I recommend checking out the list of TF2 object detection models on TFHub.dev for a model you can use instead of the model you are using: https://tfhub.dev/s?module-type=image-object-detection&tf-version=tf2
These models should be servable with TF Serving.
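For example, a minimal re-export sketch (the TFHub handle and export path are placeholders; TF Serving expects a numeric version subdirectory):

import tensorflow as tf
import tensorflow_hub as hub

# TF2 SavedModels on TFHub ship with serving signatures, so they can be
# re-saved into TF Serving's versioned directory layout.
model = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")  # placeholder handle
tf.saved_model.save(model, "export/detector/1", signatures=model.signatures)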

Dumping Weights in TensorflowLite

I'm a new TensorFlow 2.0 user. My project requires me to investigate the weights of the (super simple) neural network I created in TensorFlow. I think I know how to do it in the regular TensorFlow case, namely with model.save_weights(filename). I would like to do the same for a .tflite model, but I am having trouble. Instead of generating my own TensorFlow Lite model, I am using one of the many models provided online (https://www.tensorflow.org/lite/guide/hosted_model) to avoid having to troubleshoot my use of the TensorFlow Lite converter. Any thoughts?
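One way to get at the weights (a sketch; the file name is a placeholder for whichever hosted model you downloaded): the Python tf.lite.Interpreter exposes every tensor in the flatbuffer, including the constant weight tensors:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="mobilenet_v1_1.0_224.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_tensor_details():
    try:
        data = interpreter.get_tensor(detail["index"])
    except ValueError:
        continue  # intermediate tensors hold no stored data
    print(detail["index"], detail["name"], detail["shape"])
    np.save("tensor_%d.npy" % detail["index"], data)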

How to use smart reply custom ops in python or tfjs?

I'm trying to use the smart reply tflite model in Python or tfjs, but it uses custom ops. Please refer to https://github.com/tensorflow/examples/tree/master/lite/examples/smart_reply/android/app/libs/cc.
So how can I build those custom ops separately and use them in Python or tfjs?
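On the Python side, one possible route mirrors the perception-ops pattern above: compile the smart reply kernels into a Python extension that exposes a registration function, then pass it to InterpreterWithCustomOps. A sketch, where pywrap_smart_reply_ops and SmartReplyOpsRegisterer are hypothetical names for the module you would build from the linked cc sources:

from tensorflow.lite.python import interpreter as interpreter_wrapper
from smart_reply_ops import pywrap_smart_reply_ops  # hypothetical module

with open("smart_reply.tflite", "rb") as f:
    model = f.read()

interpreter = interpreter_wrapper.InterpreterWithCustomOps(
    model_content=model,
    custom_op_registerers=[pywrap_smart_reply_ops.SmartReplyOpsRegisterer])
interpreter.allocate_tensors()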

How to use Tensorflow model comparison with tflite_diff_example_test

I have trained a model for detection, which does great when embedded in the TensorFlow sample app.
After freezing with export_tflite_ssd_graph and converting to tflite using toco, the results perform rather badly and vary widely.
Reading this answer on a similar problem with loss of accuracy I wanted to try tflite_diff_example_test on a tensorflow docker machine.
As the documentation is not that developed right now, I built the tool referencing this SO post, using:
bazel build tensorflow/contrib/lite/testing/tflite_diff_example_test.cc
which ran smoothly. After figuring out all the input parameters I needed, I tried the test script with the following commands:
~/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/bazel_tools/tools/test/test-setup.sh tensorflow/contrib/lite/testing/tflite_diff_example_test '--tensorflow_model=/tensorflow/shared/exported/tflite_graph.pb' '--tflite_model=/tensorflow/shared/exported/detect.tflite' '--input_layer=a,b,c,d' '--input_layer_type=float,float,float,float' '--input_layer_shape=1,3,4,3:1,3,4,3:1,3,4,3:1,3,4,3' '--output_layer=x,y'
and
bazel-bin/tensorflow/contrib/lite/testing/tflite_diff_example_test --tensorflow_model="/tensorflow/shared/exported/tflite_graph.pb" --tflite_model="/tensorflow/shared/exported/detect.tflite" --input_layer=a,b,c,d --input_layer_type=float,float,float,float --input_layer_shape=1,3,4,3:1,3,4,3:1,3,4,3:1,3,4,3 --output_layer=x,y
Both ways are failing. Errors:
First way:
tflite_diff_example_test.cc:line 1: /bazel: Is a directory
tflite_diff_example_test.cc: line 3: syntax error near unexpected token '('
tflite_diff_example_test.cc: line 3: 'Licensed under the Apache License, Version 2.0 (the "License");'
/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/bazel_tools/tools/test/test-setup.sh: line 184: /tensorflow/: Is a directory
/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/bazel_tools/tools/test/test-setup.sh: line 276: /tensorflow/: Is a directory
Second way:
2018-09-10 09:34:27.650473: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
Failed to create session. Op type not registered 'TFLite_Detection_PostProcess' in binary running on d36de5b65187. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
I would really appreciate any help that enables me to compare the output of two graphs using TensorFlow's given tests.
The second way you mentioned is the correct way to use tflite_diff. However, the object detection model containing the TFLite_Detection_PostProcess op cannot be run via tflite_diff.
tflite_diff runs the provided TensorFlow (.pb) model in the TensorFlow runtime and runs the provided TensorFlow Lite (.tflite) model in the TensorFlow Lite runtime. In order to run the .pb model in the TensorFlow runtime, all of the operations must be implemented in TensorFlow.
However, in the model you provided, the TFLite_Detection_PostProcess op is not implemented in the TensorFlow runtime; it is only available in the TensorFlow Lite runtime, so TensorFlow cannot resolve the op. Unfortunately, that means you cannot use the tflite_diff tool with this model.
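If you still want a numeric comparison for a model whose ops do exist in both runtimes, a manual diff sketch along the same lines (the tensor names a:0/x:0 echo the placeholder layer names from the question, and the shapes and file paths are likewise placeholders):

import numpy as np
import tensorflow as tf

inp = np.random.rand(1, 3, 4, 3).astype(np.float32)

# TensorFlow Lite side.
interpreter = tf.lite.Interpreter(model_path="detect.tflite")
interpreter.allocate_tensors()
interpreter.set_tensor(interpreter.get_input_details()[0]["index"], inp)
interpreter.invoke()
tflite_out = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])

# TensorFlow side (frozen graph).
graph_def = tf.compat.v1.GraphDef()
with open("tflite_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
graph = tf.Graph()
with graph.as_default():
    tf.compat.v1.import_graph_def(graph_def, name="")
with tf.compat.v1.Session(graph=graph) as sess:
    tf_out = sess.run("x:0", feed_dict={"a:0": inp})

print("max abs diff:", np.max(np.abs(tf_out - tflite_out)))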