TF-Lite Non-Max-Suppression - tensorflow

I am attempting to convert a graph with tf.image.non_max_suppression or tf.image.combined_non_max_suppression but both API calls yield an error like "tf.CombinedNonMaxSuppression op is neither a custom op nor a flex op." My setup is TF2.3.1, python 3.7, Windows 10.
I understand that some tf functions are not supported for conversion to TF-Lite, but the link below shows a tfl op for non-max-suppression.
https://tensorflow.google.cn/mlir/tfl_ops#tflnon_max_suppression_v4_tflnonmaxsuppressionv4op
What do I need to do to be able to run the converter on my function in order to use the tfl.non_max_suppression_vx function?

The non-max-suppression ops are not supported by the TF Lite builtins. If you want to use them, you have to fall back to the TensorFlow (flex) kernels by adding these lines when converting:
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
The other option is to rewrite the NMS op in terms of ops that are supported by the TF Lite builtins, or something along those lines. If you manage to rewrite it successfully, please let me know. Thanks.
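For reference, a minimal end-to-end sketch of the conversion with those settings (the SavedModel path and output filename are placeholders):
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('path/to/saved_model')
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # regular TF Lite builtin ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to selected TensorFlow (flex) ops
]
tflite_model = converter.convert()
with open('model_with_flex_ops.tflite', 'wb') as f:
    f.write(tflite_model)
Note that a model converted this way needs the Flex delegate (the select-TF-ops runtime) linked into the app that runs it, which increases the binary size.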

Related

Missing some boosted trees operations in Tensorflow Lite

I have a TensorFlow model based on BoostedTreesClassifier and I want to deploy it on mobile with the help of TensorFlow Lite.
However, when I try to convert my model to the Tensorflow Lite model I get an error saying that there are unsupported operations (as of Tensorflow v2.3.1):
tf.BoostedTreesBucketize
tf.BoostedTreesEnsembleResourceHandleOp
tf.BoostedTreesPredict
tf.BoostedTreesQuantileStreamResourceGetBucketBoundaries
tf.BoostedTreesQuantileStreamResourceHandleOp
Adding tf.lite.OpsSet.SELECT_TF_OPS option helps a bit, but still some operations need a custom implementation:
tf.BoostedTreesEnsembleResourceHandleOp
tf.BoostedTreesPredict
tf.BoostedTreesQuantileStreamResourceGetBucketBoundaries
tf.BoostedTreesQuantileStreamResourceHandleOp
I've also tried Tensorflow v2.4.0-rc3, which reduces the set to the following one:
tf.BoostedTreesEnsembleResourceHandleOp
tf.BoostedTreesPredict
Conversion code is like the following:
converter = tf.lite.TFLiteConverter.from_saved_model(model_path, signature_keys=['serving_default'])
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS
]
tflite_model = converter.convert()
signature_keys is specified explicitly, because the model exported with BoostedTreesClassifier#export_saved_model has multiple signatures.
Is there a way to deploy this model on mobile other than writing custom implementation for non-supported ops?

Converting To Tflite model

While converting the model to tflite I get this error:
"""
Some of the operators in the model are not supported by the standard TensorFlow Lite runtime and are not recognized by TensorFlow. If you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ABS, ADD, CONV_2D, MAX_POOL_2D, MUL, RELU, SOFTMAX, SQUEEZE, SUB. Here is a list of operators for which you will need custom implementations: AdjustContrastv2, AdjustHue, AdjustSaturation, RandomUniform.
"""
How to resolve this?
tensorflow version: 1.13.1
You can use the TF ops directly by enabling Select TF ops.
I've confirmed that AdjustContrastv2, AdjustHue, AdjustSaturation are available via FlexDelegate.
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/delegates/flex/allowlisted_flex_ops.cc#L35
To use this feature, you need TF 2.4 or higher. Since TF 2.4 is not available yet, you need to use the tf-nightly release.
FYI, regarding migrating from TF1 to TF2, please check https://www.tensorflow.org/guide/migrate
You may try adding the following lines to specify that your model can use ops from both the TF Lite builtins and TensorFlow:
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
Or, better, rewrite the ops that are not supported by the TF Lite builtins in terms of ops that are.
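As an illustration of that rewriting approach (my own sketch, not part of the original answer): tf.image.adjust_contrast computes (x - mean) * contrast_factor + mean per channel, so it can be expressed with ops that already have TF Lite builtins (MEAN, SUB, MUL, ADD), and the AdjustContrastv2 flex op is no longer needed:
import tensorflow as tf

def adjust_contrast_builtin(image, contrast_factor):
    # Per-channel mean over height and width, kept with size-1 dims so it broadcasts.
    mean = tf.reduce_mean(image, axis=[-3, -2], keepdims=True)
    return (image - mean) * contrast_factor + mean
AdjustHue and AdjustSaturation are harder to rewrite because they work in HSV space, so for those the Select TF ops fallback is usually the simpler option.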

"Unkown (custom) loss function" when using tflite_convert on a {TF 2.0.0-beta1 ; Keras} model

Summary
My question is composed of:
A context in which I present my project, my working environment and my workflow
The detailed problem
The parts of my code concerned by this problem
The solutions I tried in order to solve my problem
A reminder of the question
Context
I've written a Python Keras implementation of a downgraded version of the original Super-Resolution GAN. Now I want to test it using Google Firebase Machine Learning Kit, by hosting it on Google's servers. That's why I have to convert my Keras program to a TensorFlow Lite one.
Environment and workflow (with the problem)
I'm training my program in the Google Colab working environment: there, I've installed TF 2.0.0-beta1 (this choice was motivated by this incorrect answer: https://datascience.stackexchange.com/a/57408/78409).
Workflow (and problem):
I write my Python Keras program locally, keeping in mind that it will run on TF 2. So I use TF 2 imports, for example from tensorflow.keras.optimizers import Adam and from tensorflow.keras.layers import Conv2D, BatchNormalization
I send my code to my Drive
I run my Google Colab Notebook without any problem: TF 2 is used.
I get the output model in my Drive, and I download it.
I try to convert this model to the TFLite format by executing the following CLI: tflite_convert --output_file=srgan.tflite --keras_model_file=srgan.h5: here the problem appears.
The problem
Instead of outputting the TF Lite converted model from the TF (Keras) model, the previous CLI outputs this error:
ValueError: Unknown loss function:build_vgg19_loss_network
The function build_vgg19_loss_network is a custom loss function that I've implemented and that must be used by the GAN.
Parts of code that raise this problem
Presenting the custom loss function
The custom loss function is implemented like this:
# mean and square are assumed to come from the Keras backend; Vgg19Loss and
# high_resolution_shape are defined elsewhere in my project.
from tensorflow.keras.backend import mean, square

def build_vgg19_loss_network(ground_truth_image, predicted_image):
    loss_model = Vgg19Loss.define_loss_model(high_resolution_shape)
    return mean(square(loss_model(ground_truth_image) - loss_model(predicted_image)))
Compiling the generator network with my custom loss function
generator_model.compile(optimizer=the_optimizer, loss=build_vgg19_loss_network)
What I've tried to do in order to solve the problem
As I read on StackOverflow (link at the beginning of this question), TF 2 was supposed to be sufficient to output a Keras model that would be correctly processed by my tflite_convert CLI. But obviously it isn't.
As I read on GitHub, I tried to manually register my custom loss function among Keras' loss functions, by adding these lines: import tensorflow.keras.losses
tensorflow.keras.losses.build_vgg19_loss_network = build_vgg19_loss_network. It didn't work.
I read on GitHub that I could use custom objects with Keras' load_model function: but I only want to use Keras' compile function, not load_model.
My final question
I only want to make minor changes to my code, since it works fine. So I don't want, for example, to replace compile with load_model. With this constraint, could you please help me make my tflite_convert CLI work with my custom loss function?
Since you are claiming that TFLite conversion is failing due to a custom loss function, you can save the model file without keeping the optimizer details. To do that, set the include_optimizer parameter to False as shown below:
model.save('model.h5', include_optimizer=False)
Now, if all the layers inside your model are convertible, they should get converted into a TFLite file.
Edit:
You can then convert the h5 file like this:
import tensorflow as tf
model = tf.keras.models.load_model('model.h5') # srgan.h5 for you
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
Usual practice to overcome the unsupported operators in TFLite conversion is documented here.
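As a related option (my own sketch, not part of the steps above): if the model was already saved with its training configuration, you can also load it with compile=False so Keras never needs to resolve the custom loss, and then convert:
import tensorflow as tf

# compile=False skips restoring the optimizer/loss, so the custom loss name
# build_vgg19_loss_network does not need to be registered at load time.
model = tf.keras.models.load_model('srgan.h5', compile=False)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open('srgan.tflite', 'wb') as f:
    f.write(tflite_model)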
I had the same error. I recommend changing the loss to "mse" since you already have a well-trained model and you don't need to train with the .tflite file.
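In the question's code that would just mean swapping the loss string in the compile call before saving; a minimal sketch using the variable names from the question:
# Compile with a builtin loss before saving, since the converted .tflite file
# is only used for inference anyway.
generator_model.compile(optimizer=the_optimizer, loss='mse')
generator_model.save('srgan.h5')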

Export MXNet model to ONNX with _contrib_MultiBoxPrior Error

I created an object detection model in AWS SageMaker, based on SSD/ResNet50 and in MXNet.
Now I would like to optimize it in TensorRT, for which I need to export to ONNX as a first step.
Searching for recommendations on converting _contrib_MultiBoxPrior to a supported symbol didn't yield any results for me.
Basic code
# Assuming the standard MXNet ONNX export module; sym_file, params_file and
# onnx_file are paths defined elsewhere.
import numpy as np
from mxnet.contrib import onnx as onnx_mxnet

input_shape = (1, 3, 512, 512)
converted_model_path = onnx_mxnet.export_model(sym_file, params_file, [input_shape], np.float32, onnx_file)
The exact error message is
"AttributeError: No conversion function registered for op type _contrib_MultiBoxPrior yet."
What is the recommended way to solve this error?
The implementation of the MultiBoxPrior operator is dependent on ONNX supporting it. You can track the issue here: https://github.com/apache/incubator-mxnet/issues/15181
Alternatively, you can try using mxnet-tensorrt. It uses the subgraph API, which means that the parts of the symbol that can be executed in TensorRT are executed in the TensorRT runtime, and the ones that cannot are executed in the MXNet runtime.
https://mxnet.incubator.apache.org/versions/master/tutorials/tensorrt/inference_with_trt.html
Note that the current version of this tutorial is for MXNet 1.3.0, I believe. An update is coming in the next release with a simpler API and better performance.

Exporting a frozen graph .pb file in Tensorflow 2

I've been trying out the TensorFlow 2 alpha and I'm trying to freeze and export a model to a .pb GraphDef file.
In Tensorflow 1 I could do something like this:
# Freeze the graph.
frozen_graph_def = tf.graph_util.convert_variables_to_constants(
    sess,
    sess.graph_def,
    output_node_names)
# Save the frozen graph to .pb file.
with open('model.pb', 'wb') as f:
    f.write(frozen_graph_def.SerializeToString())
However, this doesn't seem possible anymore, as convert_variables_to_constants has been removed and the use of sessions is discouraged.
I looked and found there is the freeze graph util
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py that works with SavedModel exports.
Is there still some way to do it within Python, or am I meant to switch to this tool now?
I have also faced this same problem while migrating from TensorFlow 1.x to the TensorFlow 2.0 beta.
This problem can be solved in two ways:
The first is to go through the TensorFlow 2.0 docs, search for the methods you have used and change the syntax line by line, &
The second is to use Google's tf_upgrade_v2 script
tf_upgrade_v2 --infile your_tf1_script_file --outfile converted_tf2_file
Try the above command to change your TensorFlow 1.x script to TensorFlow 2.0; it will solve all your problems.
Also, you can rename the method (a manual step, by referring to the documentation):
Rename 'tf.graph_util.convert_variables_to_constants' to 'tf.compat.v1.graph_util.convert_variables_to_constants'
The main problem is that in TensorFlow 2.0 much of the syntax and many functions have changed; try referring to the TensorFlow 2.0 docs or use Google's tf_upgrade_v2 script.
Not sure if you've seen this Tensorflow 2.0 issue, but this response seems to be a work-around:
https://github.com/tensorflow/tensorflow/issues/29253#issuecomment-530782763
Note: this hasn't worked for my NLP model, but maybe it will work for you. The suggested work-around is to use model.save_weights('weights.h5') while in the TF 2.0 environment. Then create a new environment with TF 1.14 and do all the following steps in the TF 1.14 env. Build your model with model = create_model() and use model.load_weights('weights.h5') to load the weights back into your model. Then save the entire model with model.save('final_model.h5'). If you have success with the above steps, follow the rest of the steps in the link to use freeze_graph.
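Separately from the answers above, here is a minimal sketch of staying entirely in TF 2.x by freezing through convert_variables_to_constants_v2 (my own sketch; model is assumed to be a tf.keras model):
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Wrap the Keras model in a concrete function so it has a single, traceable graph.
full_model = tf.function(lambda x: model(x))
concrete_func = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

# Fold the captured variables into constants ("freezing").
frozen_func = convert_variables_to_constants_v2(concrete_func)

# Save the frozen GraphDef to a .pb file.
tf.io.write_graph(frozen_func.graph.as_graph_def(), '.', 'model.pb', as_text=False)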