How to convert TensorFlow .pb models to TensorFlow Lite - tensorflow

I need to convert a TensorFlow .pb model into TensorFlow Lite using Google Colab.
The conversion procedure is as follows:
1) Upload the model:
from google.colab import files
pbfile = files.upload()
2) Convert it:
import tensorflow as tf
pb_file = 'data_513.pb'
tflite_file = 'data_513.tflite'
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    pb_file, ['ImageTensor'], ['SemanticPredictions'],
    input_shapes={"ImageTensor": [1, 513, 513, 3]})
tflite_model = converter.convert()
open(tflite_file,'wb').write(tflite_model)
The conversion fails with the following error:
Check failed: array.data_type == array.final_data_type Array "ImageTensor" has mis-matching actual and final data types (data_type=uint8, final_data_type=float).
I think I may need to specify some extra commands to overcome this error, but I can't find any information about it.

Finally found the solution. Here is the snippet for others to use:
import tensorflow as tf
pb_file = 'model.pb'
tflite_file = 'model.tflite'
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    pb_file, ['ImageTensor'], ['SemanticPredictions'],
    input_shapes={"ImageTensor": [1, 513, 513, 3]})
converter.inference_input_type = tf.uint8
converter.quantized_input_stats = {'ImageTensor': (128, 127)} # (mean, stddev)
tflite_model = converter.convert()
open(tflite_file,'wb').write(tflite_model)
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
files.download(tflite_file)
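As a quick sanity check, the interpreter created above can be run directly. This is only a minimal sketch; the zero-filled array is a placeholder for a real 513x513 RGB image.
import numpy as np

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Placeholder input: replace with a real image resized to 513x513 (uint8).
image = np.zeros((1, 513, 513, 3), dtype=np.uint8)

interpreter.set_tensor(input_details[0]['index'], image)
interpreter.invoke()
segmentation_map = interpreter.get_tensor(output_details[0]['index'])
print(segmentation_map.shape)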

Related

TFLite model.process() sometimes needs a TensorImage and sometimes a TensorBuffer to process an image? Are there different image input data types?

Some TFLite models' model.process() seems to need a TensorBuffer, while others need a TensorImage instead. I don't know why.
First, I took a regular TensorFlow / Keras model that was saved using:
model.save(keras_model_path,
           include_optimizer=True,
           save_format='tf')
Then I compress and quantize this Keras model (300 MB) to a TFlite format using:
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = tf.keras.utils.image_dataset_from_directory(
    dir_val, batch_size=batch_size, image_size=(150, 150))
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
with open(tflite_model_path, 'wb') as file:
    file.write(tflite_model)
I got a much smaller TFLite model (40 MB) which needs a TensorBuffer <input_data> when calling model.process(<input_data>).
Second, I trained and saved a TFLite model using TensorFlow Lite Model Maker, and now I have a TFLite model that needs a TensorImage <input_data> when calling model.process(<input_data>).
Are there two different kinds of TFLite models depending on how you build and train them?
Maybe it's related to the fact that the Keras model was based on Inception while TensorFlow Lite Model Maker uses EfficientNet. How do you convert from one kind of TFLite model to the other? How can someone change the image input so the same data, for example a TensorImage or bitmap, can be processed by both?
With the very valuable help of @Farmaker, I've solved my problem. I simply wanted to convert a Keras model into a more compact TFLite model to install in a mobile application. I realized that the generated TFLite model was not compatible, and @Farmaker correctly pointed out that the metadata was missing.
1. Use TensorFlow 2.6.0 or earlier, because of an incompatibility with FlatBuffers.
pip3 uninstall tensorflow
pip3 install tensorflow==2.6.0
pip3 install keras==2.6.0
2. Convert the Keras model to TFLite:
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = tf.keras.utils.image_dataset_from_directory(
    dir_val, batch_size=batch_size, image_size=(150, 150))
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
with open(tflite_model_path, 'wb') as file:
    file.write(tflite_model)
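A note on converter.representative_dataset: the converter normally expects a callable (generator) that yields lists of input arrays, rather than a tf.data.Dataset object passed in directly. A minimal sketch under that assumption, reusing the same directory and image size as above (adapt the preprocessing to whatever the model saw during training):
def representative_dataset():
    # Yield a small number of calibration batches for full-integer quantization.
    dataset = tf.keras.utils.image_dataset_from_directory(
        dir_val, batch_size=1, image_size=(150, 150))
    for images, _ in dataset.take(100):
        yield [tf.cast(images, tf.float32)]

converter.representative_dataset = representative_dataset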
3. Add metadata as shown in the « TensorFlow Lite Metadata Writer API » tutorial.
3.1 Provide a labels.txt file (a file of all the target class labels, one label per line).
For instance, to create such a file:
your_labels_list = ['class1', 'class2', ...]
with open('labels.txt', 'w') as labels_file:
    for label in your_labels_list:
        labels_file.write(label + "\n")
3.2 Install the extra library that supports TFLite metadata generation:
pip3 install tflite-support-nightly
3.3 Generate the metadata
from tflite_support.metadata_writers import image_classifier
from tflite_support.metadata_writers import writer_utils

ImageClassifierWriter = image_classifier.MetadataWriter

# Normalization parameters are required when processing the image
# (see https://www.tensorflow.org/lite/convert/metadata#normalization_and_quantization_parameters)
_INPUT_NORM_MEAN = 127.5
_INPUT_NORM_STD = 127.5
_TFLITE_MODEL_PATH = "<your_path_to_model.tflite>"
_LABELS_FILE = "<your_path_to_labels.txt>"
_TFLITE_METADATA_MODEL_PATHS = "<your_path_to_model_with_metadata.tflite>"

# Create the metadata writer
metadata_generator = ImageClassifierWriter.create_for_inference(
    writer_utils.load_file(_TFLITE_MODEL_PATH),
    [_INPUT_NORM_MEAN], [_INPUT_NORM_STD],
    [_LABELS_FILE])

# Verify the generated metadata
print(metadata_generator.get_metadata_json())

# Integrate the metadata into the TFLite model
writer_utils.save_file(metadata_generator.populate(), _TFLITE_METADATA_MODEL_PATHS)
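Optionally, to double-check that the metadata and the label file really ended up inside the model (a small sketch assuming the same paths as above), the tflite_support metadata module can read them back:
from tflite_support import metadata

displayer = metadata.MetadataDisplayer.with_model_file(_TFLITE_METADATA_MODEL_PATHS)
print(displayer.get_metadata_json())                 # the embedded metadata
print(displayer.get_packed_associated_file_list())   # should include labels.txt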
That's all folks!
You can use tfds, datasets, dataset_image, tf.constant, and other data formats.
You can also use tf.constant where you input the required parameters, or you can input weight algorithms (a convolution layer is also capable of this).
I determine the input and the target response categories.
[ Sequence to Sequence mapping ]:
group_1_ShoryuKen_Left = tf.constant([
    0,0,0,0,0,1,0,0,0,0,0,0,
    0,0,0,0,0,1,0,1,0,0,0,0,
    0,0,0,0,0,0,0,1,0,0,0,0,
    0,0,0,0,0,0,0,0,0,1,0,0
], shape=(1, 1, 48), dtype=tf.float32)

# get weights
layer1_lstm = model.get_layer(name="layer1_bidirection-lstm")
lstm_weight_1 = layer1_lstm.get_weights()[0]
lstm_filter_1 = layer1_lstm.get_weights()[1]

# set weights
layer1_lstm = model.get_layer(name="layer1_bidirection-lstm")
layer1_lstm.set_weights([lstm_weight_1, lstm_filter_1])

[ TFDS ]:
builder = tfds.builder('cats_vs_dogs', data_dir='file:\\\\F:\\datasets\\downloads\\PetImages\\')
ds = tfds.load('cats_vs_dogs', split='train', shuffle_files=True)
assert isinstance(ds, tf.data.Dataset)

data = DataLoader.from_folder('F:\\datasets\\downloads\\flower_photos\\')
train_data, test_data = data.split(0.9)

for example in ds.take(1):
    image, label = example["image"], example["label"]

model = image_classifier.create(train_data)
...

Converted TF Lite pre-trained model not working correctly

I recently used this TensorFlow model. I converted that compressed model to a .tflite file with the code below:
converter = tf.lite.TFLiteConverter.from_saved_model("movenet_multipose_lightning")
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS     # enable TensorFlow ops.
]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
tflite_model_files = pathlib.Path("movenet_multipose.tflite")
tflite_model_files.write_bytes(tflite_quant_model)

converter.target_spec.supported_types = [tf.float16]
tflite_fp16_model = converter.convert()
tflite_model_fp16_file = pathlib.Path("movenet_multipose_f16.tflite")
tflite_model_fp16_file.write_bytes(tflite_fp16_model)
Everything seems good and the .tflite file exported correctly, but when I use this converted model in an Android app it throws an error like this:
Cannot copy to a TensorFlowLite tensor (serving_default_input:0) with 589824 bytes from a Java Buffer with 147456 bytes.
So what's the problem? How can I convert models like that correctly to use them in Android apps?
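One way to narrow this down (a minimal sketch, assuming the file name used above) is to print the converted model's input details in Python; the shape and dtype reported here are what the Java-side buffer has to match byte for byte:
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="movenet_multipose.tflite")
interpreter.allocate_tensors()
for detail in interpreter.get_input_details():
    # name, shape and dtype the Android input buffer must match
    print(detail['name'], detail['shape'], detail['dtype'])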

Cannot convert TensorFlow .pb frozen graph to TensorFlow Lite due to strange 'utf-8' codec error on Colab

I have an ONNX model that I converted to TensorFlow; that conversion went ahead without any problems, but now I want to convert this .pb file to TF Lite using the following code:
import tensorflow as tf

TF_PATH = "/content/tf_model/saved_model.pb"  # where the frozen graph is stored
TFLITE_PATH = "./model.tflite"

# make a converter object from the saved tensorflow file
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    TF_PATH,                     # TensorFlow frozen graph .pb model file
    input_arrays=['input_ids'],  # name of input arrays as defined in the torch.onnx.export function before.
    output_arrays=['logits'],    # name of output arrays as defined in the torch.onnx.export function before.
)
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.compat.v1.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.compat.v1.lite.OpsSet.SELECT_TF_OPS]
tf_lite_model = converter.convert()

# Save the model.
with open(TFLITE_PATH, 'wb') as f:
    f.write(tf_lite_model)
But when I run this cell on Colab I get the error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa3 in position
3: invalid start byte
It points to the line converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph().
I can't seem to figure out what is causing this.
It looks like your frozen_graph is not a frozen graph but a SavedModel. If I guess right, all you need is to change the conversion method: you are looking for converting from a SavedModel.
Assuming you are using TF2, it will be:
import tensorflow as tf

TF_PATH = "/content/tf_model/"  # where the saved_model is stored - the folder name
TFLITE_PATH = "./model.tflite"

# make a converter object from the SavedModel directory
converter = tf.lite.TFLiteConverter.from_saved_model(TF_PATH)
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
tf_lite_model = converter.convert()

# Save the model.
open(TFLITE_PATH, "wb").write(tf_lite_model)
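If you want to confirm first that the directory really is a SavedModel (a minimal sketch, assuming the same TF_PATH as above), you can load it and list its signatures:
loaded = tf.saved_model.load(TF_PATH)
print(list(loaded.signatures.keys()))  # typically ['serving_default']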

Cannot convert from a fine-tuned GPT-2 model to a TensorFlow Lite model

I've fine-tuned a distilgpt2 model on my own text using run_language_modeling.py, and it's working fine after training; the run_generation.py script produces the expected results.
Now I want to convert this to a TensorFlow Lite model, and did so by using the following:
from transformers import *
CHECKPOINT_PATH = '/content/drive/My Drive/gpt2_finetuned_models/checkpoint-2500'
model = GPT2LMHeadModel.from_pretrained("distilgpt2")
model.save_pretrained(CHECKPOINT_PATH)
model = TFGPT2LMHeadModel.from_pretrained(CHECKPOINT_PATH, from_pt=True)
But I don't think I'm doing this right, because after the conversion, when I write
print(model.inputs)
print(model.outputs)
I get
None
None
But I still went ahead with the TFLite conversion using:
import tensorflow as tf
input_spec = tf.TensorSpec([1, 64], tf.int32)
model._set_inputs(input_spec, training=False)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# FP16 quantization:
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()
open("/content/gpt2-fp16.tflite", "wb").write(tflite_model)
But this does not work, and when using the generated tflite model I get the error:
tensorflow/lite/kernels/kernel_util.cc:249 d1 == d2 || d1 == 1 || d2 == 1 was not true.
I'm sure this has something to do with my model not converting properly, since I'm getting None for its inputs/outputs.
Does anyone have any idea how to fix this?
Thanks
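Not a verified fix for this particular checkpoint, but one approach sometimes used when model.inputs is None is to trace a concrete function with a fixed input signature and convert from that. A sketch, reusing the [1, 64] shape from the spec above:
import tensorflow as tf

# Trace the Keras model with a fixed input signature so the converter
# sees concrete input shapes.
run_model = tf.function(lambda input_ids: model(input_ids))
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec([1, 64], tf.int32))

converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()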

TFLite for Microcontrollers giving hybrid error

I converted my Keras .h5 file to a quantized .tflite in order to run on the new OpenMV Cam H7 Plus, but when I run it I get an error saying "Hybrid Models are not supported on TFLite Micro."
I'm not sure why my model is appearing as hybrid; the code I used to convert is below:
model = load_model('inceptionV3.h5')

# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()

# Save the TF Lite model.
with tf.io.gfile.GFile('inceptionV3_openmv2.tflite', 'wb') as f:
    f.write(tflite_model)
I'd appreciate it if someone could tell me whether I'm doing something wrong or whether there is a better way to convert it.
Try this code:
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model_file("inceptionV3.h5")
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tf_model = converter.convert()
open("inceptionV3_openmv2.tflite", "wb").write(tf_model)