I have a TFLite model converted from JAX:
converter = tf.lite.TFLiteConverter.experimental_from_jax(
    [serving_func],
    [[("encoder_inputs", encoder_inputs), ("decoder_inputs", decoder_inputs), ("primings", primings)]])
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS
]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
When I try to load it in Python, I get
ValueError: Didn't find op for builtin opcode 'SELECT' version '3'. An older version of this builtin might be supported. Are you using an old TFLite binary with a newer model?
Registration failed.
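For reference, the Python loading step is just the standard interpreter setup (a minimal sketch; the model path is illustrative):
import tensorflow as tf

# Load the converted flatbuffer with the TFLite interpreter bundled in the tensorflow package;
# the version-mismatch error above is raised while the interpreter is being created.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()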
When I try to load the model on device in Kotlin, it ends with
java.lang.IllegalArgumentException: Internal error: Cannot create interpreter: Didn't find op for builtin opcode 'SELECT' version '3'. An older version of this builtin might be supported. Are you using an old TFLite binary with a newer model?
What should I try doing differently? If I try to remove anything from target_spec.supported_ops, the converter will not convert.
I am new to TensorFlow and I need to convert a TFRS model to a TFLite model. Does anyone have any idea or experience related to this topic?
I simply ran the retrieval code in Colab and added the recommended method to convert the final model to TFLite. That was:
converter = tf.lite.TFLiteConverter.from_saved_model(path) # path to the SavedModel directory
tflite_model = converter.convert()
The conversion fails with an error (shown in a screenshot that is not reproduced here).
Try installing the latest tf-nightly pip package and then converting this model with the new MLIR converter. You can enable the new converter by setting:
converter.experimental_new_converter = True
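A minimal end-to-end sketch, assuming saved_model_dir points to the exported retrieval model and the output file name is illustrative:
import tensorflow as tf  # tf-nightly build

saved_model_dir = "retrieval_model"  # hypothetical path to the SavedModel directory
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.experimental_new_converter = True  # opt in to the new MLIR-based converter
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:  # illustrative output file name
    f.write(tflite_model)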
I fine-tuned an SSD model to recognize a custom object.
I followed the tutorials, ran the training process, and exported the model; I tested it for inference and everything works great.
So, now I have a structure like:
exported-models/
|
---- SSD_custom_model/
|
--------checkpoint/
--------saved_model/
--------pipeline.config
which I assume is what is referred to as "Saved model" in the TensorFlow documentation.
So, I wanted to convert this model to TensorFlow Lite to test it on an Android device. I checked the tutorials and I'm trying:
import tensorflow as tf
saved_model_dir = 'exported-models/SSD_custom_model/'
# Convert the model
## I tried either just
# converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
## or, with more options
converter = tf.lite.TFLiteConverter.from_saved_model(
    saved_model_dir, signature_keys=['serving_default'])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

# Save the model.
with open('tflite/custom_model.tflite', 'wb') as f:
    f.write(tflite_model)
And I'm getting the error
File "/home/lews/anaconda3/envs/tf/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 216, in toco_convert_protos
raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc(callsite(callsite("map/TensorArrayV2_1#__inference_call_func_11694" at "StatefulPartitionedCall#__inference_signature_wrapper_14068") at "StatefulPartitionedCall")): requires element_shape to be 1D tensor during TF Lite transformation pass
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("map/TensorArrayV2_1#__inference_call_func_11694" at "StatefulPartitionedCall#__inference_signature_wrapper_14068") at "StatefulPartitionedCall")): failed to legalize operation 'tf.TensorListReserve' that was explicitly marked illegal
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
It seems to be complaining about the input shape ('requires element_shape to be 1D tensor during TF Lite transformation pass'). Maybe I should've modified something about the model before the fine-tuning process? Or after that?
Hi, I'm doing the same work and encountered the same error, but I solved it.
The model I converted is SSD-MobileNet-v2, and I'm using TensorFlow 2.4, so I believe this will work for you.
All you need to do is create a new conda environment (Python 3.8 is OK) and then install tf-nightly:
pip install tf-nightly
It's important to note that the version of tf-nightly must be >= 2.5.
At first I used tf-nightly 2.3 and encountered another error. After upgrading to 2.5, the converter finally worked.
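For reference, you can confirm that the new environment actually picked up the nightly build before re-running the conversion from the question (a small sketch; the dev version string shown is illustrative):
import tensorflow as tf

# tf-nightly reports a development version; it needs to be at least 2.5 for this conversion.
print(tf.__version__)  # e.g. '2.5.0-dev20210101'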
I have a TensorFlow model based on BoostedTreesClassifier and I want to deploy it on a mobile device with the help of TensorFlow Lite.
However, when I try to convert my model to the TensorFlow Lite format I get an error saying that there are unsupported operations (as of TensorFlow v2.3.1):
tf.BoostedTreesBucketize
tf.BoostedTreesEnsembleResourceHandleOp
tf.BoostedTreesPredict
tf.BoostedTreesQuantileStreamResourceGetBucketBoundaries
tf.BoostedTreesQuantileStreamResourceHandleOp
Adding the tf.lite.OpsSet.SELECT_TF_OPS option helps a bit, but some operations still need a custom implementation:
tf.BoostedTreesEnsembleResourceHandleOp
tf.BoostedTreesPredict
tf.BoostedTreesQuantileStreamResourceGetBucketBoundaries
tf.BoostedTreesQuantileStreamResourceHandleOp
I've also tried TensorFlow v2.4.0-rc3, which reduces the set to the following:
tf.BoostedTreesEnsembleResourceHandleOp
tf.BoostedTreesPredict
Conversion code is like the following:
converter = tf.lite.TFLiteConverter.from_saved_model(model_path, signature_keys=['serving_default'])
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS
]
tflite_model = converter.convert()
signature_keys is specified explicitly, because the model exported with BoostedTreesClassifier#export_saved_model has multiple signatures.
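For completeness, the available signatures can be listed before picking one (a minimal sketch; model_path is the same SavedModel directory used in the conversion code above):
import tensorflow as tf

# Inspect which signatures the exported BoostedTreesClassifier SavedModel exposes.
loaded = tf.saved_model.load(model_path)
print(list(loaded.signatures.keys()))  # 'serving_default' is the one passed to the converter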
Is there a way to deploy this model on mobile other than writing custom implementation for non-supported ops?
I've managed to create a TensorFlow model with a custom operation, saved in the SavedModel (.pb) format.
My problem is that I cannot convert it to the Lite version using either the command-line utilities or the Python API.
My Python code is:
import tensorflow as tf
import os
import custom_op
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
converter = tf.lite.TFLiteConverter.from_saved_model("./SavedModel")
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                        tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
But conversion failed with error:
ValueError: Provide an input shape for input array 'X'.
I assume it's because my placeholders don't have a defined shape. I don't understand why the normal TensorFlow model works without it.
Any help?
As described in the TensorFlow Lite documentation, you can pass different parameters to tf.lite.TFLiteConverter.from_saved_model.
For more complex SavedModels, the optional parameters that can be passed into TFLiteConverter.from_saved_model() are input_arrays, input_shapes, output_arrays, tag_set and signature_key. Details of each parameter are available by running help(tf.lite.TFLiteConverter).
You can pass this information as described here. You need to provide the input shape for your input array 'X', like:
tf.lite.TFLiteConverter.from_saved_model("./SavedModel", input_shapes={"X": [1, H, W, C]})
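Plugging that into the original script would look roughly like this (a sketch; the H, W, C values below are illustrative and must be replaced with the real dimensions of 'X', and target_ops is carried over from the question's code):
import tensorflow as tf
import custom_op  # the module registering the custom operation, as in the question

H, W, C = 224, 224, 3  # illustrative; use the actual height, width and channels of 'X'

converter = tf.lite.TFLiteConverter.from_saved_model(
    "./SavedModel", input_shapes={"X": [1, H, W, C]})  # concrete shape for the placeholder
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                        tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)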
I was trying to convert a model in a Keras file (.h5) to a TensorFlow Lite file (.tflite) using the following code:
# Save model as .h5 Keras file
keras_file = "eSleep.h5"
model_save = tf.keras.models.save_model(model, keras_file, overwrite=True, include_optimizer=True)

# Export Keras file to TensorFlow Lite model
converter = tf.lite.TFLiteConverter.from_keras_model_file(keras_file)
tflite_model = converter.convert()
open("eSleep.tflite", "wb").write(tflite_model)
However, the following line:
tflite_model = converter.convert()
returned errors:
I tensorflow/core/grappler/devices.cc:53] Number of eligible GPUs (core count >= 8): 0 (Note: TensorFlow was not compiled with CUDA support)
I tensorflow/core/grappler/clusters/single_machine.cc:359] Starting new session
E tensorflow/core/grappler/grappler_item_builder.cc:636] Init node dense/kernel/Assign doesn't exist in graph
Can anybody help me understand what "Init node dense/kernel/Assign doesn't exist in graph" means and how to fix the error?
In my experience the converted model should work fine, even though this error is shown. You can ignore the error.
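If you want to verify that the converted file is indeed usable, you can load it with the TFLite interpreter and push a dummy input through it (a sketch; the file name matches the code in the question):
import numpy as np
import tensorflow as tf

# Sanity check: load the converted model and run one zero-filled input through it.
interpreter = tf.lite.Interpreter(model_path="eSleep.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))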
I solved the problem by using TensorFlow 1.12.