TFLite model.process() sometimes needs a TensorImage and sometimes a TensorBuffer to process an image? Are there different image input types? - tensorflow

For some TFLite models, model.process() seems to need a TensorBuffer, while for others it needs a TensorImage. I don't know why.
First, I took a regular TensorFlow / Keras model that was saved using:
model.save(keras_model_path,
           include_optimizer=True,
           save_format='tf')
Then I compressed and quantized this Keras model (300 MB) to the TFLite format using:
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = tf.keras.utils.image_dataset_from_directory(dir_val,
                                                                               batch_size=batch_size,
                                                                               image_size=(150, 150))
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
with open(tflite_model_path, 'wb') as file:
    file.write(tflite_model)
I got a much smaller TFLite model (40 MB) which needs a TensorBuffer <input_data> when calling model.process(<input_data>).
Second, I trained and saved a TFLite model using TensorFlow Lite Model Maker, and now I have a TFLite model that needs a TensorImage <input_data> when calling model.process(<input_data>).
Are there two different kinds of TFLite models depending on how you build and train them?
Maybe it's related to the fact that the Keras model was based on Inception while TensorFlow Lite Model Maker uses EfficientNet. How do I convert from one kind of TFLite model to the other? How can I change the image input so both models can process the same thing, for example a TensorImage or bitmap data input?
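For reference, one way to see what a given TFLite model actually expects is to inspect its input tensor with the Python tf.lite.Interpreter; a minimal sketch (the model path is a placeholder):
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder path
interpreter.allocate_tensors()

for detail in interpreter.get_input_details():
    # dtype shows whether the model wants uint8 (quantized) or float32 input;
    # shape shows the expected image size; quantization gives (scale, zero_point).
    print(detail["name"], detail["shape"], detail["dtype"], detail["quantization"])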

With the very valuable help of @Farmaker, I've solved my problem. I simply wanted to convert a Keras model into a more compact TFLite model to install in a mobile application. I realized that the generated TFLite model was not compatible, and @Farmaker correctly pointed out that the metadata was missing.
1. Use TensorFlow 2.6.0 or earlier, because of an incompatibility with Flatbuffers:
pip3 uninstall tensorflow
pip3 install tensorflow==2.6.0
pip3 install keras==2.6.0
2. Convert the Keras model to TFLite:
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = tf.keras.utils.image_dataset_from_directory(dir_val,
                                                                               batch_size=batch_size,
                                                                               image_size=(150, 150))
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
with open(tflite_model_path, 'wb') as file:
    file.write(tflite_model)
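Note that, depending on the TensorFlow version, converter.representative_dataset may need to be a generator function rather than a tf.data.Dataset; a minimal sketch of such a wrapper, assuming the same image_dataset_from_directory pipeline as above:
def representative_dataset():
    dataset = tf.keras.utils.image_dataset_from_directory(dir_val,
                                                          batch_size=1,
                                                          image_size=(150, 150))
    for images, _ in dataset.take(100):
        # Yield one float32 image batch at a time, wrapped in a list
        yield [tf.cast(images, tf.float32)]

converter.representative_dataset = representative_dataset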
3. Add metadata as shown in the "TensorFlow Lite Metadata Writer API" tutorial.
3.1 Provide a labels.txt file (a file listing all the target class labels, one label per line).
For instance, to create such a file:
your_labels_list = ['class1', 'class2', ...]
with open('labels.txt', 'w') as labels_file:
    for label in your_labels_list:
        labels_file.write(label + "\n")
3.2 Install the extra library needed to support TFLite metadata generation:
pip3 install tflite-support-nightly
3.3 Generate the metadata
from tflite_support.metadata_writers import image_classifier
from tflite_support.metadata_writers import writer_utils

ImageClassifierWriter = image_classifier.MetadataWriter
# Normalization parameters are required when processing the image
# (see https://www.tensorflow.org/lite/convert/metadata#normalization_and_quantization_parameters)
_INPUT_NORM_MEAN = 127.5
_INPUT_NORM_STD = 127.5
_TFLITE_MODEL_PATH = "<your_path_to_model.tflite>"
_LABELS_FILE = "<your_path_to_labels.txt>"
_TFLITE_METADATA_MODEL_PATH = "<your_path_to_model_with_metadata.tflite>"

# Create the metadata writer
metadata_generator = ImageClassifierWriter.create_for_inference(
    writer_utils.load_file(_TFLITE_MODEL_PATH),
    [_INPUT_NORM_MEAN], [_INPUT_NORM_STD],
    [_LABELS_FILE])

# Verify the generated metadata
print(metadata_generator.get_metadata_json())

# Integrate the metadata into the TFLite model
writer_utils.save_file(metadata_generator.populate(), _TFLITE_METADATA_MODEL_PATH)
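Optionally, you can read the metadata back from the populated model to double-check it; a minimal sketch using the same tflite-support package (the path is a placeholder):
from tflite_support import metadata

displayer = metadata.MetadataDisplayer.with_model_file("<your_path_to_model_with_metadata.tflite>")
# Print the embedded metadata and the associated files (labels.txt should appear here)
print(displayer.get_metadata_json())
print(displayer.get_packed_associated_file_list())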
That's all folks!

You can use tfds datasets, tf.data.Dataset objects, image datasets, tf.constant, and other data formats.
You can also use tf.constant to feed the required input parameters, or set layer weights directly (this works for convolution layers as well).
Here I define the input and the target response categories.
[ Sequence to Sequence mapping ]:
group_1_ShoryuKen_Left = tf.constant([ 0,0,0,0,0,1,0,0,0,0,0,0, 0,0,0,0,0,1,0,1,0,0,0,0, 0,0,0,0,0,0,0,1,0,0,0,0, 0,0,0,0,0,0,0,0,0,1,0,0 ], shape=(1, 1, 48), dtype=tf.float32)
# get weights
layer1_lstm = model.get_layer(name="layer1_bidirection-lstm")
lstm_weight_1 = layer1_lstm.get_weights()[0]
lstm_filter_1 = layer1_lstm.get_weights()[1]
# set weights
layer1_lstm = model.get_layer(name="layer1_bidirection-lstm")
layer1_lstm.set_weights([lstm_weight_1, lstm_filter_1])
[ TFDS ]:
builder = tfds.builder('cats_vs_dogs', data_dir='file:\\\\F:\\datasets\\downloads\\PetImages\\')
ds = tfds.load('cats_vs_dogs', split='train', shuffle_files=True)
assert isinstance(ds, tf.data.Dataset)

data = DataLoader.from_folder('F:\\datasets\\downloads\\flower_photos\\')
train_data, test_data = data.split(0.9)

for example in ds.take(1):
    image, label = example["image"], example["label"]

model = image_classifier.create(train_data)
...

Related

Post-training int8 quantization and pruning of my model after training it with ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8

I'm trying to run inference with my model on an Arduino Nano 33 BLE. To do so, I trained my model using ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8 and got a 6.5 MB model with 88% mAP@0.5IOU, which is nice. So I tried to quantize the model to int8, but the model size increased to 11.5 MB and the accuracy was terrible. I don't know what happened; if someone can help me, that would be great.
My code to quantize the model:
import tensorflow as tf
import io
import glob
import PIL
from PIL import Image
import numpy as np
import tensorflow_datasets as tfds

def representative_dataset_gen():
    folder = "/content/dataset/train/images"
    image_size = 320
    raw_test_data = []
    files = glob.glob(folder + '/*.jpeg')
    for file in files:
        image = Image.open(file)
        image = image.convert("RGB")
        image = image.resize((image_size, image_size))
        # Normalizing the image to the range [-1, 1]
        image = (2.0 / 255.0) * np.uint8(image) - 1.0
        # image = np.asarray(image).astype(np.float32)
        image = image[np.newaxis, :, :, :]
        raw_test_data.append(image)
    for data in raw_test_data:
        yield [data]

converter = tf.lite.TFLiteConverter.from_saved_model('/content/gdrive/MyDrive/customTF2/data/tflite/saved_model')
converter.representative_dataset = representative_dataset_gen
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8  # or tf.uint8
converter.inference_output_type = tf.int8  # or tf.uint8
tflite_model = converter.convert()
with open('/mydrive/customTF2/data/tflite/saved_model/detect8.tflite', 'wb') as f:
    f.write(tflite_model)
Also, if there is a way to prune the model, that would help reduce the size to less than 1 MB as well.
I also tried YOLOv5 and pruned and quantized that model down to 1.9 MB, but couldn't go further. I then tried to convert the TFLite model to a .h model to run inference on an ESP32 instead (since my TFLite model is larger than 1 MB), but the model size also increased, to 11 MB.
I tried post-training quantization for my model, but the model size increased instead of decreasing; not only that, the model's performance dropped drastically. As for the pruning part, I couldn't do it with MobileNetV2, and I hope someone can help.
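For reference, a minimal sketch of the kind of magnitude pruning the TensorFlow Model Optimization Toolkit offers for plain Keras models (the model, data, and step counts below are placeholders, not the detection model from this question; applying this to a TF2 Object Detection API SSD model is not straightforward):
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Placeholder Keras classifier; pruning wraps Keras layers, so it needs a Keras model.
base_model = tf.keras.applications.MobileNetV2(input_shape=(320, 320, 3), weights=None, classes=2)

pruning_params = {
    "pruning_schedule": tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.0, final_sparsity=0.5, begin_step=0, end_step=1000)
}
pruned_model = tfmot.sparsity.keras.prune_low_magnitude(base_model, **pruning_params)
pruned_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Fine-tune with the pruning callback, then strip the wrappers before converting to TFLite.
# pruned_model.fit(train_images, train_labels, epochs=2,
#                  callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])
final_model = tfmot.sparsity.keras.strip_pruning(pruned_model)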

YOLOv5 - Convert to tflite but make scores type float32 instead of int32

I am trying to use a custom object detection model trained with YOLOv5 converted to tflite for an Android app (using this exact TensorFlow example).
The model has been converted to tflite by using the YOLOv5 converter like this:
python export.py --weights newmodel.pt --include tflite --int8 --agnostic-nms
This is the export.py function that exports the model as tflite:
def export_tflite(keras_model, im, file, int8, data, nms, agnostic_nms, prefix=colorstr('TensorFlow Lite:')):
    # YOLOv5 TensorFlow Lite export
    import tensorflow as tf

    LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
    batch_size, ch, *imgsz = list(im.shape)  # BCHW
    f = str(file).replace('.pt', '-fp16.tflite')

    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
    converter.target_spec.supported_types = [tf.float16]
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    if int8:
        from models.tf import representative_dataset_gen
        dataset = LoadImages(check_dataset(check_yaml(data))['train'], img_size=imgsz, auto=False)
        converter.representative_dataset = lambda: representative_dataset_gen(dataset, ncalib=100)
        converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
        converter.target_spec.supported_types = []
        converter.inference_input_type = tf.uint8  # or tf.int8
        converter.inference_output_type = tf.uint8  # or tf.int8
        converter.experimental_new_quantizer = True
        f = str(file).replace('.pt', '-int8.tflite')
    if nms or agnostic_nms:
        converter.target_spec.supported_ops.append(tf.lite.OpsSet.SELECT_TF_OPS)

    tflite_model = converter.convert()
    open(f, "wb").write(tflite_model)
    return f, None
The working example uses these tensors: (screenshot of the working example model's tensors)
My tensors look like this: (screenshot of my custom model's tensors)
The problem is that I don't know how to convert my output tensor's SCORE type from int32 to float32. Because of that, the app does not work with my custom model (I think this is the only problem stopping my custom model from working).
The YOLOv5 model returns data in INT32 format, but TensorBuffer does not support the INT32 data type.
To use on-device ML on Android, use SSD models, because only SSD models are currently supported by the tflite library.

Converted TF Lite pre-trained model not working correctly

I recently used this TensorFlow model. I converted that compressed model to a TFLite file with the code below:
converter = tf.lite.TFLiteConverter.from_saved_model("movenet_multipose_lightning")
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
tflite_model_files = pathlib.Path("movenet_multipose.tflite")
tflite_model_files.write_bytes(tflite_quant_model)
converter.target_spec.supported_types = [tf.float16]
tflite_fp16_model = converter.convert()
tflite_model_fp16_file = pathlib.Path("movenet_multipose_f16.tflite")
tflite_model_fp16_file.write_bytes(tflite_fp16_model)
Everything seems fine and the TFLite file is exported correctly, but when I use this converted model in an Android app it throws an error like this:
Cannot copy to a TensorFlowLite tensor (serving_default_input:0) with 589824 bytes from a Java Buffer with 147456 bytes.
So what's the problem? How can I correctly convert models like this to use them in Android apps?
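For what it's worth, the byte counts in errors like this usually follow from the input tensor's shape and dtype; a minimal sketch to check what the converted model actually expects (the file name is a placeholder):
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="movenet_multipose.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_input_details():
    # Dynamic dimensions show up as -1 and need interpreter.resize_tensor_input() first.
    expected_bytes = int(np.prod(detail["shape"])) * np.dtype(detail["dtype"]).itemsize
    # Compare this with the size of the buffer fed from the Android side.
    print(detail["name"], detail["shape"], detail["dtype"], expected_bytes, "bytes")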

TfLite for Microcontrollers giving hybrid error

I converted my Keras .h5 file to a quantized TFLite model in order to run it on the new OpenMV Cam H7 Plus, but when I run it I get an error saying "Hybrid models are not supported on TFLite Micro."
I'm not sure why my model comes out as hybrid; the code I used to convert it is below:
model = load_model('inceptionV3.h5')

# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()

# Save the TF Lite model.
with tf.io.gfile.GFile('inceptionV3_openmv2.tflite', 'wb') as f:
    f.write(tflite_model)
I'd appreciate if someone could guide me if I'm doing something wrong or if there is a better way to convert it.
Try this code:
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_keras_model_file("inceptionV3.h5")
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tf_model = converter.convert()
open ("inceptionV3_openmv2.tflite" , "wb") .write(tf_model)

Tensorflow Model to TFLITE

I have this code for building a semantic search engine using a pre-trained Universal Sentence Encoder from TensorFlow Hub. I am not able to convert it to tflite. I have saved the model to my directory.
Importing the model:
module_path ="/content/drive/My Drive/4"
%time model = hub.load(module_path)
#print ("module %s loaded" % module_url)
#Create function for using modeltraining
def embed(input):
return model(input)
Training the model on data:
## training the model
Model_USE = embed(data)
Saving the model:
exported = tf.train.Checkpoint(v=tf.Variable(Model_USE))
exported.f = tf.function(
    lambda x: exported.v * x,
    input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])
export_dir = "/content/drive/My Drive/"
tf.saved_model.save(exported, export_dir)
Saving works fine, but when I convert to tflite it gives an error.
Conversion code:
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
Error:
as_list() is not defined on an unknown TensorShape.
First, you need to add a representative data generator to provide example inputs for the converter, like this:
def representative_data_gen():
    for input_value in dataset.take(100):
        yield [input_value]
The input value must be of shape (1, your_input_shape), as if it had a batch size of 1, and it must be yielded as a list; this is mandatory.
You should also declare which type of optimization you want, for example:
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
Nevertheless, I have also encountered problems with the different converter options depending on the network structure, which in this case I don't know. So, for a clean run of the converter, I would just do:
converter = lite.TFLiteConverter.from_keras_model(model)
converter.experimental_new_converter = True
converter.optimizations = [lite.Optimize.DEFAULT]
tfmodel = converter.convert()
The converter.experimental_new_converter = True line is for problems when converting RNNs, as in https://github.com/tensorflow/tensorflow/issues/34813.
EDIT:
As explained in "ValueError: None is only supported in the 1st dimension. Tensor 'flatbuffer_data' has invalid shape '[None, None, 1, 512]'", TFLite only allows the first dimension of your data to be None, that is, the batch dimension. All other dimensions must be fixed. Try padding them with, for example, tf.keras.preprocessing.sequence.pad_sequences.
Then mask your sequences in the network with Embedding or Masking layers, as described in tensorflow.org/guide/keras/masking_and_padding.
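A minimal sketch of that padding step, assuming variable-length integer token sequences (all values below are placeholders):
import tensorflow as tf

# Placeholder token id sequences of varying length
sequences = [[83, 91, 1], [73, 8, 3062, 15], [711]]
padded = tf.keras.preprocessing.sequence.pad_sequences(sequences,
                                                       maxlen=8,
                                                       padding='post',
                                                       value=0)
print(padded.shape)  # (3, 8): every dimension except the batch is now fixed

# In the model, a Masking layer (or an Embedding layer with mask_zero=True)
# tells downstream layers to ignore the padded positions.
masking_layer = tf.keras.layers.Masking(mask_value=0.0)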