TFLite for Microcontrollers giving hybrid error

I converted my keras .h5 file to a quantized tflite in order to run on the new OpenMV Cam H7 plus but when I run it I get an error saying "Hybrid Models are not supported on TFLite Micro."
I'm not sure why my model is appearing as hybrid; the code I used to convert is below:
model = load_model('inceptionV3.h5')
# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()
# Save the TF Lite model.
with tf.io.gfile.GFile('inceptionV3_openmv2.tflite', 'wb') as f:
    f.write(tflite_model)
I'd appreciate if someone could guide me if I'm doing something wrong or if there is a better way to convert it.

Try this code
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_keras_model_file("inceptionV3.h5")
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tf_model = converter.convert()
open("inceptionV3_openmv2.tflite", "wb").write(tf_model)
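The "Hybrid models are not supported" error usually means the converter produced a dynamic-range quantized model (int8 weights, float activations), which is what you get when Optimize.DEFAULT is set without a representative dataset. Below is a minimal sketch of full-integer quantization in TF 2.x, assuming a 299x299x3 input and random placeholder calibration data (replace it with real, preprocessed samples):
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import load_model

model = load_model('inceptionV3.h5')

# Placeholder calibration data: swap in ~100 real, preprocessed samples
# shaped like the model's input (assumed 1x299x299x3 here).
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 299, 299, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict the model to integer-only ops so nothing stays hybrid/float.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open('inceptionV3_openmv2.tflite', 'wb') as f:
    f.write(tflite_model)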

Related

YOLOv5 - Convert to tflite but make scores type float32 instead of int32

I am trying to use a custom object detection model trained with YOLOv5 converted to tflite for an Android app (using this exact TensorFlow example).
The model has been converted to tflite by using the YOLOv5 converter like this:
python export.py --weights newmodel.pt --include tflite --int8 --agnostic-nms
This is the export.py function that exports model as tflite:
def export_tflite(keras_model, im, file, int8, data, nms, agnostic_nms, prefix=colorstr('TensorFlow Lite:')):
    # YOLOv5 TensorFlow Lite export
    import tensorflow as tf

    LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
    batch_size, ch, *imgsz = list(im.shape)  # BCHW
    f = str(file).replace('.pt', '-fp16.tflite')

    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
    converter.target_spec.supported_types = [tf.float16]
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    if int8:
        from models.tf import representative_dataset_gen
        dataset = LoadImages(check_dataset(check_yaml(data))['train'], img_size=imgsz, auto=False)
        converter.representative_dataset = lambda: representative_dataset_gen(dataset, ncalib=100)
        converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
        converter.target_spec.supported_types = []
        converter.inference_input_type = tf.uint8  # or tf.int8
        converter.inference_output_type = tf.uint8  # or tf.int8
        converter.experimental_new_quantizer = True
        f = str(file).replace('.pt', '-int8.tflite')
    if nms or agnostic_nms:
        converter.target_spec.supported_ops.append(tf.lite.OpsSet.SELECT_TF_OPS)

    tflite_model = converter.convert()
    open(f, "wb").write(tflite_model)
    return f, None
The working example uses these tensors: Working example model's tensors
My tensors look like this: My custom model's tensors
The problem is that I don't know how to convert my output tensor's SCORE type from int32 to float32. Therefore, the app does not work with my custom model (I think this is the only problem that is stopping my custom model from working).
The YOLOv5 model returns data in INT32 format, but TensorBuffer does not support the INT32 data type.
To use on-device ML in Android, use SSD models, because only SSD models are currently supported by the TFLite library.
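If the goal is simply to get float32 scores back, one option (a sketch against the standard TFLiteConverter API, not a verified change to the YOLOv5 exporter) is to quantize only the internals and leave inference_input_type / inference_output_type at their float32 defaults, reusing keras_model, dataset, and representative_dataset_gen from the export function above:
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = lambda: representative_dataset_gen(dataset, ncalib=100)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# inference_input_type / inference_output_type are left unset, so the converted
# model keeps float32 inputs and outputs while the internal tensors are int8.
tflite_model = converter.convert()
open('newmodel-int8-float-io.tflite', 'wb').write(tflite_model)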

TFLite model.process() sometimes needs TensorImage and sometimes TensorBuffer as input data to process an image? Are there different image input data types?

Some TFLite models' model.process() seems to need a TensorBuffer, while others need a TensorImage. I don't know why.
First, I took a regular TensorFlow / Keras model that was saved using:
model.save(keras_model_path,
           include_optimizer=True,
           save_format='tf')
Then I compress and quantize this Keras model (300 MB) to a TFlite format using:
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = tf.keras.utils.image_dataset_from_directory(
    dir_val, batch_size=batch_size, image_size=(150, 150))
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
with open(tflite_model_path, 'wb') as file:
    file.write(tflite_model)
I've got a much smaller TFLite model (40 MB) which needs a TensorBuffer <input_data> when calling model.process(<input_data>).
Second, I've trained and saved a TFLite model using TensorFlow Lite Model Maker, and now I've got a TFLite model that needs a TensorImage <input_data> when calling model.process(<input_data>).
Are there two different kinds of TFLite models depending on how you build and train them?
Maybe it's related to the fact that the Keras model was based on Inception while TensorFlow Lite Model Maker uses EfficientNet. How do you convert from one TFLite model to the other? How can someone change the image input so that both can process the same input, for example a TensorImage or bitmap data?
With the very valuable help of @Farmaker, I've solved my problem. I simply wanted to convert a Keras model into a more compact TFLite model to install it in a mobile application. I realized that the generated TFLite model was not compatible, and @Farmaker very correctly pointed out to me that the metadata was missing.
1. Use TensorFlow 2.6.0 or lower, because of an incompatibility with Flatbuffers.
pip3 uninstall tensorflow
pip3 install tensorflow==2.6.0
pip3 install keras==2.6.0
2. Convert the Keras model to TFLite:
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = tf.keras.utils.image_dataset_from_directory(
    dir_val, batch_size=batch_size, image_size=(150, 150))
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
with open(tflite_model_path, 'wb') as file:
    file.write(tflite_model)
3. Add metadata as shown here in the "TensorFlow Lite Metadata Writer API" tutorial.
3.1 Provide a labels.txt file (a file of all the target class labels, one label per line).
For instance, to create such a file:
your_labels_list = ['class1', 'class2', ...]
with open('labels.txt', 'w') as labels_file:
    for label in your_labels_list:
        labels_file.write(label + "\n")
3.2 Install an extra library to support TFLite metadata generation:
pip3 install tflite-support-nightly
3.3 Generate the metadata:
from tflite_support.metadata_writers import image_classifier
from tflite_support.metadata_writers import writer_utils

ImageClassifierWriter = image_classifier.MetadataWriter
# Normalization parameters are required when processing the image
# (see https://www.tensorflow.org/lite/convert/metadata#normalization_and_quantization_parameters)
_INPUT_NORM_MEAN = 127.5
_INPUT_NORM_STD = 127.5
_TFLITE_MODEL_PATH = "<your_path_to_model.tflite>"
_LABELS_FILE = "<your_path_to_labels.txt>"
_TFLITE_METADATA_MODEL_PATHS = "<your_path_to_model_with_metadata.tflite>"

# Create the metadata writer
metadata_generator = ImageClassifierWriter.create_for_inference(
    writer_utils.load_file(_TFLITE_MODEL_PATH),
    [_INPUT_NORM_MEAN], [_INPUT_NORM_STD],
    [_LABELS_FILE])

# Verify the generated metadata
print(metadata_generator.get_metadata_json())

# Integrate the metadata into the TFLite model
writer_utils.save_file(metadata_generator.populate(), _TFLITE_METADATA_MODEL_PATHS)
That's all folks!
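To double-check that the metadata and the label file really ended up inside the exported model, the same tflite-support package can read them back. A small verification sketch, reusing the path placeholder above:
from tflite_support import metadata

# Inspect the model that was written by writer_utils.save_file above.
displayer = metadata.MetadataDisplayer.with_model_file(
    "<your_path_to_model_with_metadata.tflite>")
print(displayer.get_metadata_json())                # embedded metadata as JSON
print(displayer.get_packed_associated_file_list())  # should include labels.txt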
You can use tfds, datasets, image datasets, tf.constant, and other data formats.
You can also use tf.constant to feed required parameters, or set layer weights directly (a convolution layer works as well).
I determine the input and target response categories.
[ Sequence to Sequence mapping ]:
group_1_ShoryuKen_Left = tf.constant([ 0,0,0,0,0,1,0,0,0,0,0,0, 0,0,0,0,0,1,0,1,0,0,0,0, 0,0,0,0,0,0,0,1,0,0,0,0, 0,0,0,0,0,0,0,0,0,1,0,0 ], shape=(1, 1, 48), dtype=tf.float32)
# get weights
layer1_lstm = model.get_layer(name="layer1_bidirection-lstm")
lstm_weight_1 = layer1_lstm.get_weights()[0]
lstm_filter_1 = layer1_lstm.get_weights()[1]

# set weights
layer1_lstm = model.get_layer(name="layer1_bidirection-lstm")
layer1_lstm.set_weights([lstm_weight_1, lstm_filter_1])
[ TFDS ]:
builder = tfds.builder('cats_vs_dogs', data_dir='file:\\\\F:\\datasets\\downloads\\PetImages\\')
ds = tfds.load('cats_vs_dogs', split='train', shuffle_files=True)
assert isinstance(ds, tf.data.Dataset)

data = DataLoader.from_folder('F:\\datasets\\downloads\\flower_photos\\')
train_data, test_data = data.split(0.9)

for example in ds.take(1):
    image, label = example["image"], example["label"]

model = image_classifier.create(train_data)
...

Converted TF Lite pre-trained model not working correctly

I recently used this tensorflow model. I converted the compressed model to a tflite file with the code below:
converter = tf.lite.TFLiteConverter.from_saved_model("movenet_multipose_lightning")
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS     # enable TensorFlow ops.
]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
tflite_model_files = pathlib.Path("movenet_multipose.tflite")
tflite_model_files.write_bytes(tflite_quant_model)
converter.target_spec.supported_types = [tf.float16]
tflite_fp16_model = converter.convert()
tflite_model_fp16_file = pathlib.Path("movenet_multipose_f16.tflite")
tflite_model_fp16_file.write_bytes(tflite_fp16_model)
Everything seems good and the tflite file is exported correctly, but when I use this converted model in an Android app it throws an error like this:
Cannot copy to a TensorFlowLite tensor (serving_default_input:0) with 589824 bytes from a Java Buffer with 147456 bytes.
So what's the problem? How can I correctly convert models like this to use them in Android apps?
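The byte counts give a hint: 589824 is exactly 4 × 147456, which often points to a dtype mismatch (the tensor expects float32 while the app supplies a byte/uint8 buffer of the same element count) or to a different input resolution than the app assumes. A small diagnostic sketch (not a confirmed fix) to check what the converted model actually expects:
import tensorflow as tf

# Check the converted model's expected input/output shapes and dtypes
# before wiring it into the Android app.
interpreter = tf.lite.Interpreter(model_path="movenet_multipose.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_input_details():
    print("input:", detail['name'], detail['shape'], detail['dtype'])
for detail in interpreter.get_output_details():
    print("output:", detail['name'], detail['shape'], detail['dtype'])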

Cannot convert from a fine-tuned GPT-2 model to a Tensorflow Lite model

I've fine-tuned a distilgpt2 model on my own text using run_language_modeling.py, and it works fine after training; the run_generation.py script produces the expected results.
Now I want to convert this to a TensorFlow Lite model, and I did so using the following:
from transformers import *
CHECKPOINT_PATH = '/content/drive/My Drive/gpt2_finetuned_models/checkpoint-2500'
model = GPT2LMHeadModel.from_pretrained("distilgpt2")
model.save_pretrained(CHECKPOINT_PATH)
model = TFGPT2LMHeadModel.from_pretrained(CHECKPOINT_PATH, from_pt=True)
But I don't think I'm doing this right, because after conversion, when I write
print(model.inputs)
print(model.outputs)
I get
None
None
But I still went ahead with the TFLite conversion using :
import tensorflow as tf
input_spec = tf.TensorSpec([1, 64], tf.int32)
model._set_inputs(input_spec, training=False)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# FP16 quantization:
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()
open("/content/gpt2-fp16.tflite", "wb").write(tflite_model)
But it does not work, and when using the generated tflite model I get the error:
tensorflow/lite/kernels/kernel_util.cc:249 d1 == d2 || d1 == 1 || d2 == 1 was not true.
I'm sure this has something to do with my model not converting properly and getting None for input/output.
Does anyone have any idea how to fix this?
Thanks
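A technique commonly used when model.inputs / model.outputs come back as None (a sketch, not a verified fix for this particular checkpoint) is to wrap the model in a tf.function with a fixed input signature and convert the resulting concrete function instead of the Keras model:
import tensorflow as tf
from transformers import TFGPT2LMHeadModel

CHECKPOINT_PATH = '/content/drive/My Drive/gpt2_finetuned_models/checkpoint-2500'
model = TFGPT2LMHeadModel.from_pretrained(CHECKPOINT_PATH, from_pt=True)

# Fix the input shape/dtype so the converter sees well-defined inputs and outputs.
@tf.function(input_signature=[tf.TensorSpec([1, 64], tf.int32, name="input_ids")])
def serving(input_ids):
    outputs = model(input_ids)
    # Recent transformers versions return an object with .logits;
    # adjust this line if your version returns a tuple instead.
    return outputs.logits

concrete_func = serving.get_concrete_function()

converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()
open("/content/gpt2-fp16.tflite", "wb").write(tflite_model)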

What is the right way to convert saved tensorflow model to tensorflow Lite

I have a saved TensorFlow model, like the models in the model zoo.
I want to convert it to TensorFlow Lite. I found the following approach on the TensorFlow GitHub (my TensorFlow version is 2):
!wget http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.tar.gz
# extract the downloaded file
!tar -xzvf ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.tar.gz
!pip install tf-nightly
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model('ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open("m.tflite", "wb").write(tflite_model)
But the output and input shape of the converted model don't match the original model, check the following:
Original Model Input & Output shape
Converted Model Input & Output shape
So there is a problem here! The input/output shapes should match the original model!
Any idea?
From the TensorFlow GitHub issues, I used their answer to solve my problem.
Link
Their approach:
!pip install tf-nightly
import tensorflow as tf
## TFLite Conversion
model = tf.saved_model.load("saved_model")
concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
concrete_func.inputs[0].set_shape([1, 300, 300, 3])
tf.saved_model.save(model, "saved_model_updated", signatures={"serving_default":concrete_func})
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir='saved_model_updated', signature_keys=['serving_default'])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
## TFLite Interpreter to check input shape
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test the model on random input data.
input_shape = input_details[0]['shape']
print(input_shape)  # [ 1 300 300 3]
Thank you MeghnaNatraj
The shapes of both models' inputs and outputs should be the same, as shown below.
If the model is already in saved_model format, use the code below:
# if you are using same model
export_dir = 'ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model'
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
If your model is in Keras format, use the code below:
# if it's a keras model
model = tf.keras.applications.MobileNetV2(weights="imagenet", input_shape= (224, 224, 3))
converter = tf.lite.TFLiteConverter.from_keras_model(model)
In both cases, the intention is to get the converter.
I don't have the saved_model, so I will use a Keras model and convert it to saved_model format, just to use the Keras model format as an example:
import pathlib #to use path
model = tf.keras.applications.MobileNetV2(weights="imagenet", input_shape= (224, 224, 3))
export_dir = 'imagenet/saved_model'
tf.saved_model.save(model, export_dir) #convert keras to saved model
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT] #you can also optimize for size or latency OPTIMIZE_FOR_SIZE, OPTIMIZE_FOR_LATENCY
tflite_model = converter.convert()
#save the model
tflite_model_file = pathlib.Path('m.tflite')
tflite_model_file.write_bytes(tflite_model)
tflite_interpreter = tf.lite.Interpreter(model_path= 'm.tflite') #you can load the content with model_content=tflite_model
# get shape of tflite input and output
input_details = tflite_interpreter.get_input_details()
output_details = tflite_interpreter.get_output_details()
print("Input: {}".format( input_details[0]['shape']))
print("Output:{}".format(output_details[0]['shape']))
# get shape of the origin model
print("Input: {}".format( model.input.shape))
print("Output: {}".format(model.output.shape))
For the TFLite model I have this, and for the original model I have this.
You will see that the shapes of both the TFLite and Keras models are the same.
Just reshape your input tensor.
You can use the resize_tensor_input function, like this:
interpreter.resize_tensor_input(input_index=0, tensor_size=[1, 640, 640, 3])
Now your input shape will be [1, 640, 640, 3].
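Note that after resizing, the tensors usually have to be re-allocated before the interpreter can be invoked. A minimal sketch, assuming the model file is m.tflite:
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='m.tflite')
interpreter.resize_tensor_input(input_index=0, tensor_size=[1, 640, 640, 3])
interpreter.allocate_tensors()  # must be called again after resizing

input_details = interpreter.get_input_details()
print(input_details[0]['shape'])  # now [1, 640, 640, 3]

# Run once with dummy data of the right shape and dtype.
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()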