AttributeError: module 'tensorflow.contrib.lite.python.convert_saved_model' has no attribute 'convert' - tensorflow

I am trying to convert my premade DNN model to a tflite file, using:
from tensorflow.contrib.lite.python import convert_saved_model
convert_saved_model.convert(saved_model_dir=saved_model, output_tflite="/TF_Lite_Model")
I have the latest version of TensorFlow installed, 1.10, and I am using Ubuntu 16.04.
The error is the following:
AttributeError: module 'tensorflow.contrib.lite.python.convert_saved_model' has no attribute 'convert'

The API for converting SavedModels to TensorFlow Lite FlatBuffers is TocoConverter.from_saved_model, as documented here. The documentation has been copied below.
To give a general explanation: from_saved_model is a classmethod that returns a TocoConverter object, and TocoConverter has a convert method. convert_saved_model, by contrast, is a plain function and therefore has no convert attribute of its own.
Copied from documentation:
The following example shows how to convert a SavedModel into a TensorFlow Lite FlatBuffer.
import tensorflow as tf
converter = tf.contrib.lite.TocoConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
For more complex SavedModels, the optional parameters that can be passed into TocoConverter.from_saved_model() are input_arrays, input_shapes, output_arrays, tag_set and signature_key. Details of each parameter are available by running help(tf.contrib.lite.TocoConverter).
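As a rough sketch of that fuller call (the array names, shape, and tags below are hypothetical placeholders; substitute your model's actual values):
import tensorflow as tf

# Hypothetical names and shape; adjust to your SavedModel's signature.
converter = tf.contrib.lite.TocoConverter.from_saved_model(
    saved_model_dir,
    input_arrays=["input_tensor"],
    input_shapes={"input_tensor": [1, 224, 224, 3]},
    output_arrays=["output_tensor"],
    tag_set=set(["serve"]),
    signature_key="serving_default")
tflite_model = converter.convert()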

I had to compile the tflite contrib module, as it was missing from my repo.

Related

Exporting fine-tuned saved model to TensorFlow Lite error

I fine-tuned an SSD model to recognize a custom object.
I followed the tutorials, ran the training process and exported the model; I tested it for inference and everything works great.
So, now I have a structure like:
exported-models/
└── SSD_custom_model/
    ├── checkpoint/
    ├── saved_model/
    └── pipeline.config
which I assume is what is referred to as "Saved model" in the TensorFlow documentation.
So, I wanted to convert this model to TensorFlow Lite to test it on an Android device. I checked the tutorials and I'm trying:
import tensorflow as tf

saved_model_dir = 'exported-models/SSD_custom_model/'

# Convert the model.
# I tried either just:
#   converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
# or, with more options:
converter = tf.lite.TFLiteConverter.from_saved_model(
    saved_model_dir, signature_keys=['serving_default'])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

# Save the model.
with open('tflite/custom_model.tflite', 'wb') as f:
    f.write(tflite_model)
And I'm getting the error
File "/home/lews/anaconda3/envs/tf/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 216, in toco_convert_protos
raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc(callsite(callsite("map/TensorArrayV2_1#__inference_call_func_11694" at "StatefulPartitionedCall#__inference_signature_wrapper_14068") at "StatefulPartitionedCall")): requires element_shape to be 1D tensor during TF Lite transformation pass
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("map/TensorArrayV2_1#__inference_call_func_11694" at "StatefulPartitionedCall#__inference_signature_wrapper_14068") at "StatefulPartitionedCall")): failed to legalize operation 'tf.TensorListReserve' that was explicitly marked illegal
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
It seems to be complaining about the input shape ('requires element_shape to be 1D tensor during TF Lite transformation pass'). Maybe I should've modified something about the model before the fine-tuning process? Or after that?
Hi, I'm doing the same work and encountered the same error, but I solved it.
The model I converted is SSD-MobileNet-v2, and I'm using TensorFlow 2.4, so I believe this will work for you.
All you need to do is create a new conda environment (Python 3.8 is fine) and install tf-nightly:
pip install tf-nightly
It's important to note that the version of tf-nightly must be >= 2.5.
At first I used a 2.3 tf-nightly build and encountered another error; after upgrading to 2.5, the converter finally worked.
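After installing, a quick sanity check before retrying the conversion (a trivial sketch; tf-nightly version strings look roughly like 2.5.0-dev20210101):
import tensorflow as tf

# The fix relies on the converter shipped in a tf-nightly build of at least 2.5.
print(tf.__version__)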

Converting to TFLite model

While converting the model to tflite, I get this error:
"""
Some of the operators in the model are not supported by the standard TensorFlow Lite runtime and are not recognized by TensorFlow. If you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ABS, ADD, CONV_2D, MAX_POOL_2D, MUL, RELU, SOFTMAX, SQUEEZE, SUB. Here is a list of operators for which you will need custom implementations: AdjustContrastv2, AdjustHue, AdjustSaturation, RandomUniform.
"""
How to resolve this?
tensorflow version: 1.13.1
You can use TF ops directly by selecting TF ops.
I've confirmed that AdjustContrastv2, AdjustHue, AdjustSaturation are available via FlexDelegate.
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/delegates/flex/allowlisted_flex_ops.cc#L35
To use this feature, you need TF 2.4 or higher. Since TF 2.4 has not been released yet, you need to use a tf-nightly build.
FYI, regarding migrating from TF1 to TF2, please check https://www.tensorflow.org/guide/migrate
You may try adding the following lines to specify that your model can use ops from both the TF Lite builtins and TF:
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
Or, better, rewrite the ops that are not supported by the TF Lite builtins in terms of ops that are. A full conversion sketch follows.
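Putting both pieces together (assuming a SavedModel directory and a TF 2.x / tf-nightly converter, per the answer above; the path is hypothetical):
import tensorflow as tf

saved_model_dir = "./saved_model"  # hypothetical path; point at your SavedModel

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # ops with native TF Lite kernels
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF ops via the Flex delegate
]
tflite_model = converter.convert()

with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)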

TensorFlow 2.3.0 -> 2.2.0 compatibility: ValueError: Unknown layer: Functional

I'm having a problem similar to the one described here:
ValueError: Unknown layer: Functional
import tensorflow as tf
model = tf.keras.models.load_model("model.h5")
which throws: ValueError: Unknown layer: Functional.
I'm pretty sure this is because the h5 file was saved in TF 2.3.0 and I'm trying to load it in 2.2.0. I'd rather not convert using tf 2.3.0 directly, and I'm hoping to find a way of manually fixing the h5 file itself, or passing the right custom object to the model loader. It seems like it's just an extra key wherever the config is stored, e.g. https://github.com/tensorflow/tensorflow/issues/41929
The problem is, I'm not sure how to manually get rid of the Functional layer in the h5 file. Specifically, I've tried:
import h5py
f = h5py.File("model.h5",'r')
print(f['model_weights'].keys())
which gives:
<KeysViewHDF5 ['concatenate_1', 'conv1d_3', 'conv1d_4', 'conv1d_5', 'dense_1', 'dropout_4', 'dropout_5', 'dropout_6', 'dropout_7', 'embedding_1', 'global_average_pooling1d_1', 'global_max_pooling1d_1', 'input_2']>
and I don't see the Functional layer anywhere. Where exactly is the config for the model stored in this file? E.g. I'm looking for something like {"class_name": "Functional", "config": {"name": "model", "layers":...}}
Question: is there a way I can manually edit the h5 file using h5py to get rid of the Functional layer?
Alternatively, can I pass a specific custom_objects={'Functional': ???} to the load_model function?
I've tried {'Functional': tf.keras.models.Model}, but that returns ('Keyword argument not understood:', 'groups'), I think because it's trying to load a model into weights?
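For reference, the Keras HDF5 format stores the architecture as a JSON string in the file's root attributes rather than under model_weights, which is why the Functional entry doesn't show up above; a quick way to inspect it:
import json
import h5py

with h5py.File("model.h5", "r") as f:
    # The architecture JSON lives in a root attribute, not in model_weights.
    config = json.loads(f.attrs["model_config"])
    print(config["class_name"])  # "Functional" when saved under TF 2.3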
I had a similar problem. The only way I could solve it without changing the TensorFlow version and retraining the model was to build the model structure again using the Keras API in TensorFlow 2.2.0 and then call:
model.load_weights(<h5 file>)
where the original h5 file was created using TensorFlow 2.3.0. If you already have the code that builds the model structure, this method should be relatively easy: all you have to do is replace load_model(<h5 file>) with the line above (see the sketch below).
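A minimal sketch of that approach (the layers below are hypothetical stand-ins; build_model must recreate exactly the architecture that produced your h5 file):
import tensorflow as tf  # running under TF 2.2.0

def build_model():
    # Hypothetical architecture; replace with the code that built the original model.
    inputs = tf.keras.Input(shape=(128,))
    x = tf.keras.layers.Dense(64, activation="relu")(inputs)
    outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_model()
model.load_weights("model.h5")  # weights from the h5 file saved under TF 2.3.0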
Just change
from keras.models import load_model
to
from tensorflow.keras.models import load_model
then call
load_model('model.h5', compile=False)

Problem converting tensorflow model to lite version

I've managed to create a TensorFlow model with a custom operation, saved in the SavedModel .pb format.
My problem is that I cannot convert it to the lite version using either the command-line utilities or the Python API.
My Python code is:
import tensorflow as tf
import os
import custom_op
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
converter = tf.lite.TFLiteConverter.from_saved_model("./SavedModel")
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                        tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
But the conversion failed with this error:
ValueError: Provide an input shape for input array 'X'.
I assume this is because my placeholders don't have a defined shape. I don't understand why the normal TensorFlow model works without it.
Any help?
As described in the TensorFlow Lite documentation, you can pass different parameters to tf.lite.TFLiteConverter.from_saved_model.
For more complex SavedModels, the optional parameters that can be passed into TFLiteConverter.from_saved_model() are input_arrays, input_shapes, output_arrays, tag_set and signature_key. Details of each parameter are available by running help(tf.lite.TFLiteConverter).
You can pass this information as described here. You need to provide the input shape for your input array 'X', like so:
tf.lite.TFLiteConverter.from_saved_model("./SavedModel", input_shapes={"X": [1, H, W, C]})

TensorFlow Lite: Init node doesn't exist

I was trying to convert a model in a Keras file (.h5) to a TensorFlow Lite file (.tflite) using the following code:
# Save model as .h5 keras file
keras_file = "eSleep.h5"
model_save = tf.keras.models.save_model(model, keras_file, overwrite=True, include_optimizer=True)

# Export keras file to TensorFlow Lite model
converter = tf.lite.TFLiteConverter.from_keras_model_file(keras_file)
tflite_model = converter.convert()
open("eSleep.tflite", "wb").write(tflite_model)
However, the following line:
tflite_model = converter.convert()
returned errors:
I tensorflow/core/grappler/devices.cc:53] Number of eligible GPUs (core count >= 8): 0 (Note: TensorFlow was not compiled with CUDA support)
I tensorflow/core/grappler/clusters/single_machine.cc:359] Starting new session
E tensorflow/core/grappler/grappler_item_builder.cc:636] Init node dense/kernel/Assign doesn't exist in graph
Can anybody help me understand what "Init node dense/kernel/Assign doesn't exist in graph" means and how to fix the error?
In my experience the converted model should work fine even though this error is shown; you can ignore it. A quick way to verify is to load the converted file and run one inference, as sketched below.
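A minimal sanity check (assuming the eSleep.tflite file from the question and a TF version where tf.lite.Interpreter is available; the all-zeros input is just a placeholder):
import numpy as np
import tensorflow as tf

# Load the converted model and run one dummy inference to confirm it works.
interpreter = tf.lite.Interpreter(model_path="eSleep.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))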
I solved the problem by using TensorFlow 1.12.