Can't convert ONNX model to TFLite using TF 2.4.1 - tensorflow

I have an ONNX model that I can successfully convert to TF with TF 2.4.1. But when it comes to converting that saved model to TFLite, an error occurs.
The code:
import onnx
import tensorflow as tf
from onnx_tf.backend import prepare
print(tf.__version__)
# Convert model.onnx to Tensorflow
onnx_model = onnx.load('model.onnx')
onnx.checker.check_model(onnx_model)
tf_rep = prepare(onnx_model)
tf_rep.export_graph('model')
# Convert saved model to tflite
converter = tf.lite.TFLiteConverter.from_saved_model('model')
tf_lite_model = converter.convert()
open('model.tflite', 'wb').write(tf_lite_model)
Everything goes OK until the conversion step, which ends like so:
>>> tf_lite_model = converter.convert()
2021-04-22 18:18:14.715046: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:316] Ignored output_format.
2021-04-22 18:18:14.715072: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:319] Ignored drop_control_dependency.
2021-04-22 18:18:14.715078: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:325] Ignored change_concat_input_ranges.
2021-04-22 18:18:14.716044: I tensorflow/cc/saved_model/reader.cc:32] Reading SavedModel from: model
2021-04-22 18:18:14.778050: I tensorflow/cc/saved_model/reader.cc:55] Reading meta graph with tags { serve }
2021-04-22 18:18:14.778083: I tensorflow/cc/saved_model/reader.cc:93] Reading SavedModel debug info (if present) from: model
2021-04-22 18:18:14.998062: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
2021-04-22 18:18:15.043862: I tensorflow/cc/saved_model/loader.cc:206] Restoring SavedModel bundle.
2021-04-22 18:18:15.438804: I tensorflow/cc/saved_model/loader.cc:190] Running initialization op on SavedModel bundle at path: model
2021-04-22 18:18:15.809851: I tensorflow/cc/saved_model/loader.cc:277] SavedModel load for tags { serve }; Status: success: OK. Took 1093808 microseconds.
2021-04-22 18:18:18.757257: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:194] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
loc(callsite(callsite("Pad_16#__inference___call___16503" at "PartitionedCall#__inference_signature_wrapper_16752") at "PartitionedCall")): error: operand #0 does not dominate this use
Traceback (most recent call last):
File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 210, in toco_convert_protos
model_str = wrap_toco.wrapped_toco_convert(model_flags_str,
File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/wrap_toco.py", line 32, in wrapped_toco_convert
return _pywrap_toco_api.TocoConvert(
Exception: <unknown>:0: error: loc(callsite(callsite("Pad_16#__inference___call___16503" at "PartitionedCall#__inference_signature_wrapper_16752") at "PartitionedCall")): operand #0 does not dominate this use
<unknown>:0: note: loc("PartitionedCall"): called from
<unknown>:0: note: loc(callsite(callsite("Pad_16#__inference___call___16503" at "PartitionedCall#__inference_signature_wrapper_16752") at "PartitionedCall")): operand defined here
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 739, in convert
result = _convert_saved_model(**converter_kwargs)
File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 632, in convert_saved_model
data = toco_convert_protos(
File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 216, in toco_convert_protos
raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc(callsite(callsite("Pad_16#__inference___call___16503" at "PartitionedCall#__inference_signature_wrapper_16752") at "PartitionedCall")): operand #0 does not dominate this use
<unknown>:0: note: loc("PartitionedCall"): called from
<unknown>:0: note: loc(callsite(callsite("Pad_16#__inference___call___16503" at "PartitionedCall#__inference_signature_wrapper_16752") at "PartitionedCall")): operand defined here
I have no idea what this message means, but if I switch to TF 2.2 the conversion passes without errors. The bad thing is that, due to another problem, the initial ONNX to TF conversion now fails.
Does anybody have an idea what this message means and what could be done about it?
TIA

Is it possible to share your saved model directory with me? I can help with debugging.
The general advice is that there are two possibilities:
(1) The TF Lite converter may not handle the saved model correctly.
(2) The ONNX conversion tool may not create a valid TF saved model.
Using a recent TF version (2.5 or tf-nightly) might help resolve this problem in case (1), but it's not guaranteed.
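To rule out case (2), one option is to load the exported SavedModel back with the plain TF 2 API and inspect its serving signature before involving the TFLite converter; a minimal sketch, using the 'model' directory from the question:
import tensorflow as tf

# Load the SavedModel produced by onnx-tf and check that it exposes a usable signature.
loaded = tf.saved_model.load('model')
print(list(loaded.signatures.keys()))      # expect something like ['serving_default']
infer = loaded.signatures['serving_default']
print(infer.structured_input_signature)    # input names, shapes and dtypes
print(infer.structured_outputs)            # output names, shapes and dtypes
If this already fails or looks wrong, the problem is in the ONNX-to-TF export rather than in the TFLite converter.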
I confirmed that the tf-nightly version could convert the attached saved model without any issue:
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/onnx_model")
tflite_model = converter.convert()
with open("/tmp/onnx.tflite", "wb") as f:
    f.write(tflite_model)
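If the conversion succeeds, a quick sanity check is to load the resulting file with the TFLite interpreter; a minimal sketch, assuming the paths from the snippet above:
import tensorflow as tf

# Instantiate the interpreter on the freshly written .tflite file and
# confirm that tensors can be allocated and the I/O details look reasonable.
interpreter = tf.lite.Interpreter(model_path="/tmp/onnx.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details())
print(interpreter.get_output_details())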

Related

I have built a classification model using a TensorFlow Estimator. After saving the model, converting it to TensorFlow Lite shows an error.

import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model("/content/drive/MyDrive/tensorflowtest/1618754788")  # path to the SavedModel directory
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS     # enable TensorFlow ops.
]
The model saves without any error.
tflite_model = converter.convert()
When I execute this line I get this exception:
ConverterError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
295 return model_str
296 except Exception as e:
--> 297 raise ConverterError(str(e))
298
299 if distutils.spawn.find_executable(_toco_from_proto_bin) is None:
ConverterError: <unknown>:0: error: loc("head/predictions/str_classes"): 'tf.AsString' op is neither a custom op nor a flex op
<unknown>:0: error: failed while converting: 'main':
Some ops in the model are custom ops, See instructions to implement custom ops: https://www.tensorflow.org/lite/guide/ops_custom
Custom ops: AsString
Details:
tf.AsString(tensor<?x1xi64>) -> (tensor<?x1x!tf.string>) : {device = "", fill = "", precision = -1 : i64, scientific = false, shortest = false, width = -1 : i64}
I tried using TensorFlow nightly but the error still remains.
I am trying to build a classification model using TensorFlow and then convert it to TensorFlow Lite for an Android app.
If you have any other approach that does not involve converting to TensorFlow Lite, that would be acceptable too.
The TF Select option in the TFLite product does not allow the tf.AsString op yet. For such cases, you can report a feature request here.
The above op isn't included in TF Select's allowed list, which can be fixed by adding the relevant code, as in this commit. It would be great if you could create such a PR.
The fix has been submitted, and the AsString op will be available through the TF Select option starting with tomorrow's TensorFlow nightly version.
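Once a nightly build that contains the fix is installed, the conversion from the question should go through unchanged; a minimal sketch, reusing the SavedModel path from the question:
import tensorflow as tf  # a tf-nightly build that includes the AsString fix

converter = tf.lite.TFLiteConverter.from_saved_model("/content/drive/MyDrive/tensorflowtest/1618754788")
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # regular TFLite builtin ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # routes AsString through the TF Select (flex) path
]
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:  # output filename chosen here for illustration
    f.write(tflite_model)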

Exporting fine-tuned saved model to TensorFlow Lite error

I fine-tuned an SSD model to recognize a custom object.
I followed the tutorials, ran the training process and exported the model; I tested it for inference and everything works great.
So, now I have a structure like:
exported models/
|
---- SSD_custom_model/
|
--------checkpoint/
--------saved_model/
--------pipeline.config
which I assume is what is referred to as "Saved model" in the TensorFlow documentation.
So, I wanted to convert this model to TensorFlow Lite to test it on an Android device. I checked the tutorials and I'm trying:
import tensorflow as tf

saved_model_dir = 'exported-models/SSD_custom_model/'

# Convert the model
## I tried either just
# converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
## or, with more options
converter = tf.lite.TFLiteConverter.from_saved_model(
    saved_model_dir, signature_keys=['serving_default'])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

# Save the model.
with open('tflite/custom_model.tflite', 'wb') as f:
    f.write(tflite_model)
And I'm getting the error:
File "/home/lews/anaconda3/envs/tf/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 216, in toco_convert_protos
raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc(callsite(callsite("map/TensorArrayV2_1#__inference_call_func_11694" at "StatefulPartitionedCall#__inference_signature_wrapper_14068") at "StatefulPartitionedCall")): requires element_shape to be 1D tensor during TF Lite transformation pass
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("map/TensorArrayV2_1#__inference_call_func_11694" at "StatefulPartitionedCall#__inference_signature_wrapper_14068") at "StatefulPartitionedCall")): failed to legalize operation 'tf.TensorListReserve' that was explicitly marked illegal
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
It seems to be complaining about the input shape ('requires element_shape to be 1D tensor during TF Lite transformation pass'). Maybe I should've modified something about the model before the fine-tuning process? Or after that?
Hi, I'm doing the same work and encountered the same error, but I solved it.
The model I converted is SSD-Mobile-v2, and I'm using TensorFlow 2.4, so I believe this will work for you.
All you need to do is create a new conda environment (Python 3.8 is OK) and then install tf-nightly:
pip install tf-nightly
It's important to note that the version of tf-nightly must be >= 2.5.
At first I used tf-nightly 2.3 and encountered another error. Then I upgraded to 2.5, and the converter finally worked.
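Before re-running the conversion, it can be worth confirming that the new environment actually picked up a recent enough nightly; a trivial check:
import tensorflow as tf

# The answer above says the converter needs tf-nightly >= 2.5,
# so verify the version before retrying the conversion.
print(tf.__version__)  # a nightly build reports something like '2.5.0-dev20210xxx'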

How to save a TensorFlow Hub model in SavedModels format?

I'd like to load a model from TensorFlow Hub and save it to disk. I tried:
import tensorflow as tf
import tensorflow_hub as hub

def save_module(url, save_path):
    with tf.Graph().as_default():
        module = hub.load(url)
        tf.saved_model.save(module, save_path)

save_module("https://tfhub.dev/google/universal-sentence-encoder/4", "./saved-module")
But this fails with:
Traceback (most recent call last):
File "C:\project\python-env\lib\site-packages\tensorflow\python\client\session.py", line 1365, in _do_call
return fn(*args)
File "C:\project\python-env\lib\site-packages\tensorflow\python\client\session.py", line 1349, in _run_fn
return self._call_tf_sessionrun(options, feed_dict, fetch_list,
File "C:\project\python-env\lib\site-packages\tensorflow\python\client\session.py", line 1441, in _call_tf_sessionrun
return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
tensorflow.python.framework.errors_impl.FailedPreconditionError: 2 root error(s) found.
(0) Failed precondition: Error while reading resource variable EncoderDNN/DNN/ResidualHidden_2/dense/kernel/part_27 from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/EncoderDNN/DNN/ResidualHidden_2/dense/kernel/part_27)
[[{{node EncoderDNN/DNN/ResidualHidden_2/dense/kernel/part_27/Read/ReadVariableOp}}]]
[[EncoderDNN/DNN/ResidualHidden_3/dense/kernel/part_22/Read/ReadVariableOp/_287]]
(1) Failed precondition: Error while reading resource variable EncoderDNN/DNN/ResidualHidden_2/dense/kernel/part_27 from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/EncoderDNN/DNN/ResidualHidden_2/dense/kernel/part_27)
[[{{node EncoderDNN/DNN/ResidualHidden_2/dense/kernel/part_27/Read/ReadVariableOp}}]]
0 successful operations.
0 derived errors ignored.
The answer must use the TensorFlow 2 API. Ideally, I want to accomplish this without Keras but I'll also accept answers that use it. Any ideas?
I couldn't get this working without Keras, but in any case this works:
import tensorflow as tf
import tensorflow_hub as hub

def save_module(url, save_path):
    module = hub.KerasLayer(url)
    model = tf.keras.Sequential(module)
    tf.saved_model.save(model, save_path)

save_module("https://tfhub.dev/google/universal-sentence-encoder/4", "./saved-module")

OpenCV DNN, Import .pb file from tensorflow Assertion Failed error: scaleMat.type() == CV_32FC1 in function 'populateNet'

I was trying to import a frozen .pb file, "optimized.pb" (optimized using tensorflow.python.tools.optimize_for_inference), using
cv2.dnn.readNetFromTensorflow("optimized.pb")
This resulted in the following error:
Traceback (most recent call last):
File "opencv.py", line 4, in <module>
net = cv2.dnn.readNetFromTensorflow("optimized.pb")
cv2.error: OpenCV(3.4.3) /io/opencv/modules/dnn/src/tensorflow/tf_importer.cpp:1380: error: (-215:Assertion failed) scaleMat.type() == CV_32FC1 in function 'populateNet'
The model included transpose convolution layers. However, the error disappears when I do not include any deconvolution layers.
Can anyone help me understand and correct this error?
I've solved this error in my network by replacing
up8 = UpSampling2D(size=(2, 2), interpolation='bilinear')(conv7)
with
up8 = UpSampling2D(size=(2, 2))(conv7)
It looks like my OpenCV (version 3.4.6) does not support bilinear interpolation in the UpSampling2D layer.
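For context, UpSampling2D defaults to nearest-neighbour interpolation, so the replacement effectively swaps a ResizeBilinear op for a ResizeNearestNeighbor op in the exported graph; a minimal sketch with hypothetical layer names and sizes:
from tensorflow.keras import layers

inputs = layers.Input(shape=(64, 64, 3))             # hypothetical input size
conv7 = layers.Conv2D(16, 3, padding='same')(inputs)
# up8 = layers.UpSampling2D(size=(2, 2), interpolation='bilinear')(conv7)  # reportedly rejected by the OpenCV 3.4.x importer
up8 = layers.UpSampling2D(size=(2, 2))(conv7)         # default 'nearest' interpolation imports cleanly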

Error converting delf to tensorflow js web

I'm following this [1] and trying to convert this [2] to tensorflow js with [0]. I run into [3]. Any chance anyone knows what's going on?
[0]
tensorflowjs_converter \
    --input_format=tf_hub \
    'https://tfhub.dev/google/delf/1' \
    delf
[1] https://github.com/tensorflow/tfjs-converter#step-1-converting-a-savedmodel-keras-h5-session-bundle-frozen-model-or-tensorflow-hub-module-to-a-web-friendly-format
[2] https://www.tensorflow.org/hub/modules/google/delf/1
[3]
Using TensorFlow backend.
2018-08-21 17:49:34.351121: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Creating a model with inputs [u'score_threshold', u'image', u'image_scales', u'max_feature_num'] and outputs [u'module_apply_default/NonMaxSuppression/Gather/GatherV2_1', u'module_apply_default/NonMaxSuppression/Gather/GatherV2_3', u'module_apply_default/postprocess_1/pca_l2_normalization', u'module_apply_default/Reshape_4', u'module_apply_default/truediv_2', u'module_apply_default/NonMaxSuppression/Gather/GatherV2', u'module_apply_default/ExpandDims'].
Traceback (most recent call last):
File "/usr/local/bin/tensorflowjs_converter", line 11, in
sys.exit(main())
File "/Users/goto/Library/Python/2.7/lib/python/site-packages/tensorflowjs/converters/converter.py", line 286, in main
strip_debug_ops=FLAGS.strip_debug_ops)
File "/Users/goto/Library/Python/2.7/lib/python/site-packages/tensorflowjs/converters/tf_saved_model_conversion.py", line 420, in convert_tf_hub_module
graph = load_graph(frozen_file, ','.join(output_node_names))
File "/Users/goto/Library/Python/2.7/lib/python/site-packages/tensorflowjs/converters/tf_saved_model_conversion.py", line 63, in load_graph
tf.import_graph_def(graph_def, name='')
File "/Users/goto/Library/Python/2.7/lib/python/site-packages/tensorflow/python/util/deprecation.py", line 432, in new_func
return func(*args, **kwargs)
File "/Users/goto/Library/Python/2.7/lib/python/site-packages/tensorflow/python/framework/importer.py", line 422, in import_graph_def
raise ValueError(str(e))
ValueError: Input 0 of node module_apply_default/while/resnet_v1_50/conv1/Conv2D/ReadVariableOp/Enter was passed float from module/resnet_v1_50/conv1/weights:0 incompatible with expected resource.
What version of the tensorflowjs_converter are you using? My guess is that the DELF model uses some Ops which are unsupported by TFJS. The latest version of the TFJS converter should give clearer error messages about unsupported ops if that is in fact the issue.
Not all TensorFlow Hub modules are TFJS compatible. In particular, there are some Ops which are not implemented in TFJS and so the modules cannot be converted. You can find a list of supported TFJS Ops here
You can try updating to the latest version of the TFJS converter to get a better error message, and update TFJS to see if more of the ops are supported in a more recent version. Otherwise, you can search for open feature requests or file a new one here to request that the op be supported.