What is the right way to convert a saved TensorFlow model to TensorFlow Lite? - tensorflow

I have a saved TensorFlow model, the same as the models in the model zoo.
I want to convert it to TensorFlow Lite. I found the following approach on the TensorFlow GitHub (my TensorFlow version is 2):
!wget http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.tar.gz
# extract the downloaded file
!tar -xzvf ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.tar.gz
!pip install tf-nightly
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model('ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open("m.tflite", "wb").write(tflite_model)
But the input and output shapes of the converted model don't match the original model; check the following:
[Image: original model input & output shapes]
[Image: converted model input & output shapes]
So there is a problem here: the input/output shapes should match the original model!
Any idea?

From the TensorFlow GitHub issues, I used their answer to solve my problem.
Link
Their approach:
!pip install tf-nightly
import tensorflow as tf
## TFLite Conversion
model = tf.saved_model.load("saved_model")
concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
concrete_func.inputs[0].set_shape([1, 300, 300, 3])
tf.saved_model.save(model, "saved_model_updated", signatures={"serving_default":concrete_func})
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir='saved_model_updated', signature_keys=['serving_default'])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
## TFLite Interpreter to check input shape
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test the model on random input data.
input_shape = input_details[0]['shape']
print(input_shape)
[ 1 300 300 3]
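(The output shapes can be verified the same way; a small sketch using the output_details fetched above.)
# Sketch: print each output tensor's shape as well
for d in output_details:
    print(d['shape'])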
Thank you MeghnaNatraj

The input and output shapes of both models should be the same, as shown below.
If the model is already in saved_model format, use the code below:
# if you are using same model
export_dir = 'ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model'
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
If your model is in Keras format, use the code below:
# if it's a keras model
model = tf.keras.applications.MobileNetV2(weights="imagenet", input_shape= (224, 224, 3))
converter = tf.lite.TFLiteConverter.from_keras_model(model)
In both cases, the goal is to get the converter.
I don't have the saved_model, so I will use a Keras model, convert it to the saved_model format, and use that as the example:
import pathlib #to use path
model = tf.keras.applications.MobileNetV2(weights="imagenet", input_shape= (224, 224, 3))
export_dir = 'imagenet/saved_model'
tf.saved_model.save(model, export_dir) #convert keras to saved model
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT] #you can also optimize for size or latency OPTIMIZE_FOR_SIZE, OPTIMIZE_FOR_LATENCY
tflite_model = converter.convert()
#save the model
tflite_model_file = pathlib.Path('m.tflite')
tflite_model_file.write_bytes(tflite_model)
tflite_interpreter = tf.lite.Interpreter(model_path= 'm.tflite') #you can load the content with model_content=tflite_model
# get shape of tflite input and output
input_details = tflite_interpreter.get_input_details()
output_details = tflite_interpreter.get_output_details()
print("Input: {}".format( input_details[0]['shape']))
print("Output:{}".format(output_details[0]['shape']))
# get shape of the origin model
print("Input: {}".format( model.input.shape))
print("Output: {}".format(model.output.shape))
For the TFLite model, I have this: [Image: TFLite model input/output shapes]
For the original model, I have this: [Image: original Keras model input/output shapes]
You will see that the shapes of the TFLite and Keras models are the same.

Just reshape your input tensor.
You can use the resize_tensor_input function, like this:
interpreter.resize_tensor_input(input_index=0, tensor_size=[1, 640, 640, 3])
Now your input shape will be [1, 640, 640, 3].
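(A minimal sketch of the full flow; note that allocate_tensors() must be called again after resizing, and the model path here is a placeholder.)
import tensorflow as tf
interpreter = tf.lite.Interpreter(model_path="m.tflite")  # placeholder path
interpreter.resize_tensor_input(input_index=0, tensor_size=[1, 640, 640, 3])
interpreter.allocate_tensors()  # required after resizing
print(interpreter.get_input_details()[0]['shape'])  # expected: [  1 640 640   3]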

Related

TFGPT2LMHeadModel to TFLite changes the input and output shape

Converting TFGPT2LMHeadModel to TFLite produces unexpected input and output shapes compared to the pre-trained model gpt2-64.tflite. How can we fix this?
!wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-64.tflite
import numpy as np
import tensorflow as tf
tflite_model_path = 'gpt2-64.tflite'
# Load the TFLite model and allocate tensors
interpreter = tf.lite.Interpreter(model_path=tflite_model_path)
interpreter.allocate_tensors()
# Get input and output tensors
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']
#print the output
input_data = np.array(np.random.random_sample((input_shape)), dtype=np.int32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data.shape)
print(input_shape)
Gives output as
>(1, 64, 50257)
> [ 1 64]
which is as expected.
But when we convert TFGPT2LMHeadModel to TFLite ourselves, we get a different output, as shown below:
import tensorflow as tf
from transformers import TFGPT2LMHeadModel
import numpy as np
model = TFGPT2LMHeadModel.from_pretrained('gpt2') # or 'distilgpt2'
input_spec = tf.TensorSpec([1, 64], tf.int32)
model._set_inputs(input_spec, training=False)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# For FP16 quantization:
# converter.optimizations = [tf.lite.Optimize.DEFAULT]
# converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()
open("gpt2-64-2.tflite", "wb").write(tflite_model)
tflite_model_path = 'gpt2-64-2.tflite'
# Load the TFLite model and allocate tensors
interpreter = tf.lite.Interpreter(model_path=tflite_model_path)
interpreter.allocate_tensors()
# Get input and output tensors
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']
#print the output
input_data = np.array(np.random.random_sample((input_shape)), dtype=np.int32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data.shape)
print(input_shape)
Output:
>(2, 1, 12, 1, 64)
>[1 1]
How can we fix this?
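(No accepted fix appears in this thread; the following is only a hedged sketch of one possible direction: export an explicit serving function that fixes the input shape and returns only the logits, so the converter does not pick up the past_key_values outputs. The signature name and the .logits attribute are assumptions about the transformers version in use.)
import tensorflow as tf
from transformers import TFGPT2LMHeadModel

model = TFGPT2LMHeadModel.from_pretrained('gpt2')

# Assumption: model(...) returns an object exposing a .logits attribute
@tf.function(input_signature=[tf.TensorSpec([1, 64], tf.int32, name="input_ids")])
def serving(input_ids):
    outputs = model(input_ids)
    return {"logits": outputs.logits}  # expected shape (1, 64, 50257)

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [serving.get_concrete_function()])
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open("gpt2-64-logits.tflite", "wb").write(tflite_model)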

YOLOv5 - Convert to tflite but make scores type float32 instead of int32

I am trying to use a custom object detection model trained with YOLOv5 converted to tflite for an Android app (using this exact TensorFlow example).
The model has been converted to tflite by using the YOLOv5 converter like this:
python export.py --weights newmodel.pt --include tflite --int8 --agnostic-nms
This is the export.py function that exports model as tflite:
def export_tflite(keras_model, im, file, int8, data, nms, agnostic_nms, prefix=colorstr('TensorFlow Lite:')):
    # YOLOv5 TensorFlow Lite export
    import tensorflow as tf

    LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
    batch_size, ch, *imgsz = list(im.shape)  # BCHW
    f = str(file).replace('.pt', '-fp16.tflite')

    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
    converter.target_spec.supported_types = [tf.float16]
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    if int8:
        from models.tf import representative_dataset_gen
        dataset = LoadImages(check_dataset(check_yaml(data))['train'], img_size=imgsz, auto=False)
        converter.representative_dataset = lambda: representative_dataset_gen(dataset, ncalib=100)
        converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
        converter.target_spec.supported_types = []
        converter.inference_input_type = tf.uint8  # or tf.int8
        converter.inference_output_type = tf.uint8  # or tf.int8
        converter.experimental_new_quantizer = True
        f = str(file).replace('.pt', '-int8.tflite')
    if nms or agnostic_nms:
        converter.target_spec.supported_ops.append(tf.lite.OpsSet.SELECT_TF_OPS)

    tflite_model = converter.convert()
    open(f, "wb").write(tflite_model)
    return f, None
The working example uses these tensors: [Image: working example model's tensors]
My tensors look like this: [Image: my custom model's tensors]
The problem is that I don't know how to convert my output tensor's SCORE type from int32 to float32. Therefore, the app does not work with my custom model (I think this is the only problem that is stopping my custom model from working).
The YOLOv5 model is returning data in INT32 format, but TensorBuffer does not support the INT32 data type.
To use on-device ML in Android with that example, use SSD models, because only SSD models are currently supported by the tflite library.
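(A hedged side note, not from the original answer: the dtypes of the exported model's output tensors can be inspected with the interpreter, and, judging from the export_tflite code above, exporting without --int8 leaves inference_input_type/inference_output_type unset, so the fp16 export keeps float32 inputs and outputs.)
import tensorflow as tf

# Sketch: list the dtype and shape of every output tensor of the exported model
interpreter = tf.lite.Interpreter(model_path="newmodel-int8.tflite")  # placeholder path
interpreter.allocate_tensors()
for d in interpreter.get_output_details():
    print(d['name'], d['dtype'], d['shape'])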

TFLite model.process() sometimes needs a TensorImage and sometimes a TensorBuffer to process an image? Are there different image input data types?

For some TFLite models, model.process() seems to need a TensorBuffer, while others need a TensorImage. I don't know why.
First, I took a regular TensorFlow / Keras model that was saved using:
model.save(keras_model_path,
include_optimizer=True,
save_format='tf')
Then I compressed and quantized this Keras model (300 MB) to the TFLite format using:
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = tf.keras.utils.image_dataset_from_directory(dir_val,
batch_size=batch_size,
image_size=(150,150))
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
with open(tflite_model_path, 'wb') as file:
file.write(tflite_model)
I got a much smaller TFLite model (40 MB) which needs a TensorBuffer <input_data> when calling model.process(<input_data>).
Second, I trained and saved a TFLite model using TensorFlow Lite Model Maker, and now I have a TFLite model that needs a TensorImage <input_data> when calling model.process(<input_data>).
Are there two different kinds of TFLite models depending on how you build and train them?
Maybe it's related to the fact that the Keras model was based on Inception and the TensorFlow Lite Model Maker uses EfficientNet. How do I convert from one TFLite model to the other? How can someone change the image input so that the same data can be processed, for example a TensorImage or bitmap input?
With the very valuable help of @Farmaker, I've solved my problem. I simply wanted to convert a Keras model into a more compact TFLite model to install it in a mobile application. I realized that the generated TFLite model was not compatible, and @Farmaker pointed out, quite correctly, that the metadata was missing.
1. Use TensorFlow 2.6.0 or lower, because of an incompatibility with Flatbuffers.
pip3 uninstall tensorflow
pip3 install tensorflow==2.6.0
pip3 install keras==2.6.0
2. Convert the Keras model to TFLite
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = tf.keras.utils.image_dataset_from_directory(dir_val,
batch_size=batch_size,
image_size=(150,150))
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
with open(tflite_model_path, 'wb') as file:
file.write(tflite_model)
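(An optional check, not part of the original answer: since inference_input_type and inference_output_type are set to tf.uint8 above, the converted model's I/O dtypes can be verified before adding metadata.)
# Sketch: confirm the quantized model expects and returns uint8 tensors
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]['dtype'])   # expected: <class 'numpy.uint8'>
print(interpreter.get_output_details()[0]['dtype'])  # expected: <class 'numpy.uint8'>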
3. Add metadata, as shown in the « TensorFlow Lite Metadata Writer API » tutorial.
3.1 Provide a labels.txt file (a file with all the target class labels, one label per line).
For instance, to create such a file:
your_labels_list = ['class1', 'class2', ...]
with open('labels.txt', 'w') as labels_file:
    for label in your_labels_list:
        labels_file.write(label + "\n")
3.2 Install the extra library needed for TFLite metadata generation
pip3 install tflite-support-nightly
3.3 Generate the metadata
from tflite_support.metadata_writers import image_classifier
from tflite_support.metadata_writers import writer_utils

ImageClassifierWriter = image_classifier.MetadataWriter
# Normalization parameters are required when processing the image
# https://www.tensorflow.org/lite/convert/metadata#normalization_and_quantization_parameters
_INPUT_NORM_MEAN = 127.5
_INPUT_NORM_STD = 127.5
_TFLITE_MODEL_PATH = "<your_path_to_model.tflite>"
_LABELS_FILE = "<your_path_to_labels.txt>"
_TFLITE_METADATA_MODEL_PATHS = "<your_path_to_model_with_metadata.tflite>"

# Create the metadata writer
metadata_generator = ImageClassifierWriter.create_for_inference(
    writer_utils.load_file(_TFLITE_MODEL_PATH),
    [_INPUT_NORM_MEAN], [_INPUT_NORM_STD],
    [_LABELS_FILE])

# Verify the metadata generated
print(metadata_generator.get_metadata_json())

# Integrate the metadata into the TFLite model
writer_utils.save_file(metadata_generator.populate(), _TFLITE_METADATA_MODEL_PATHS)
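(A hedged follow-up sketch: the embedded metadata and associated files can be read back with tflite_support's MetadataDisplayer to confirm they were populated.)
from tflite_support import metadata

displayer = metadata.MetadataDisplayer.with_model_file(_TFLITE_METADATA_MODEL_PATHS)
print(displayer.get_metadata_json())                 # the metadata that was embedded
print(displayer.get_packed_associated_file_list())   # should include labels.txt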
That's all folks!
You can use tfds, datasets, dataset images, tf.constant, and other data formats.
You can also use tf.constant where you input the required parameters, or you can input weights directly (a convolution layer is also capable of this).
I determine the input and target response categories.
[ Sequence-to-sequence mapping ]:
group_1_ShoryuKen_Left = tf.constant([ 0,0,0,0,0,1,0,0,0,0,0,0, 0,0,0,0,0,1,0,1,0,0,0,0, 0,0,0,0,0,0,0,1,0,0,0,0, 0,0,0,0,0,0,0,0,0,1,0,0 ], shape=(1, 1, 48), dtype=tf.float32)
# get weights
layer1_lstm = model.get_layer(name="layer1_bidirection-lstm")
lstm_weight_1 = layer1_lstm.get_weights()[0]
lstm_filter_1 = layer1_lstm.get_weights()[1]
# set weights
layer1_lstm = model.get_layer(name="layer1_bidirection-lstm")
layer1_lstm.set_weights([lstm_weight_1, lstm_filter_1])
[ TFDS ]:
builder = tfds.builder('cats_vs_dogs', data_dir='file:\\\\F:\\datasets\\downloads\\PetImages\\')
ds = tfds.load('cats_vs_dogs', split='train', shuffle_files=True)
assert isinstance(ds, tf.data.Dataset)
data = DataLoader.from_folder('F:\\datasets\\downloads\\flower_photos\\')
train_data, test_data = data.split(0.9)
for example in ds.take(1):
    image, label = example["image"], example["label"]
model = image_classifier.create(train_data)
...

TfLite for Microcontrollers giving hybrid error

I converted my Keras .h5 file to a quantized TFLite model in order to run it on the new OpenMV Cam H7 Plus, but when I run it I get an error saying "Hybrid models are not supported on TFLite Micro."
I'm not sure why my model is appearing as hybrid; the code I used to convert is below:
model = load_model('inceptionV3.h5')
# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()
# Save the TF Lite model.
with tf.io.gfile.GFile('inceptionV3_openmv2.tflite', 'wb') as f:
f.write(tflite_model)
I'd appreciate if someone could guide me if I'm doing something wrong or if there is a better way to convert it.
Try this code
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_keras_model_file("inceptionV3.h5")
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tf_model = converter.convert()
open ("inceptionV3_openmv2.tflite" , "wb") .write(tf_model)

Tensorflow Model to TFLITE

I have this code for building a semantic search engine using the pre-trained Universal Sentence Encoder from TensorFlow Hub. I am not able to convert it to TFLite. I have saved the model to my directory.
Importing the model:
module_path ="/content/drive/My Drive/4"
%time model = hub.load(module_path)
#print ("module %s loaded" % module_url)
#Create function for using the model
def embed(input):
    return model(input)
Training the model on data:
## training the model
Model_USE= embed(data)
Saving the model:
exported = tf.train.Checkpoint(v=tf.Variable(Model_USE))
exported.f = tf.function(
    lambda x: exported.v * x,
    input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])
export_dir = "/content/drive/My Drive/"
tf.saved_model.save(exported,export_dir)
Saving works fine but when I convert to tflite it gives error.
Conversion code:
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
Error:
as_list() is not defined on an unknown TensorShape.
First, you need to add a data generator to provide representative inputs for the converter, like this:
def representative_data_gen():
    for input_value in dataset.take(100):
        yield [input_value]
The input value must be of shape (1, your_input_shape), as if it had a batch size of 1, and it must be yielded as a list; this is mandatory.
You should also declare which type of optimization you want, for example:
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
Nevertheless, I have also encountered problems with the different converter options depending on the network structure, which in this case I don't know. So, to make a clean run of the converter, I would just do:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.experimental_new_converter = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tfmodel = converter.convert()
The converter.experimental_new_converter = True line is for problems when converting RNNs, as in https://github.com/tensorflow/tensorflow/issues/34813.
EDIT:
As explained in ValueError: None is only supported in the 1st dimension. Tensor 'flatbuffer_data' has invalid shape '[None, None, 1, 512]', TFLite only allows the first dimension of your data (the batch) to be None; all other dimensions must be fixed. Try padding them with, for example, tf.keras.preprocessing.sequence.pad_sequences.
Then mask your sequences in the network as described in tensorflow.org/guide/keras/masking_and_padding, with Embedding or Masking layers.
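(A minimal, hedged sketch of the padding + masking idea; sequences, maxlen and the layer sizes are placeholder values.)
import tensorflow as tf

# sequences is a placeholder list of variable-length integer sequences
padded = tf.keras.preprocessing.sequence.pad_sequences(
    sequences, maxlen=64, padding='post', value=0)

# Embedding with mask_zero=True lets downstream layers ignore the padded positions
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=128, mask_zero=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])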