I have a Keras model with a single LSTM layer wrapped in a Bidirectional wrapper, which I want to convert to TensorFlow Lite.
I'm using the ModelCheckpoint callback while training to save the model and the best weights.
Then I reload the best trained model from the checkpoint with this code:
import os
from keras.models import load_model  # or tensorflow.keras.models, depending on your setup

path_Load = os.path.join(os.getcwd(), 'LSTMB_CheckPoints.hdf5')
predictor = load_model(path_Load)
predictor.load_weights(path_Load)
Upon checking with validation data, the model loads successfully and works as intended. Now I want to convert it to TensorFlow Lite with a bit of code I found on Stack Overflow -
keras_file = path_Load
converter = tf.lite.TFLiteConverter.from_keras_model_file(keras_file)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
I thought maybe the checkpoint file was causing problems, so I re-saved the model and called the converter again using -
keras_file = "keras_model.h5"
tf.keras.models.save_model(predictor, keras_file)
converter = tf.lite.TFLiteConverter.from_keras_model_file(keras_file)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
upon which I am getting this error -
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value bidirectional_1/backward_lstm_1/kernel
[[{{node _retval_bidirectional_1/backward_lstm_1/kernel_0_0}}]]
I tried to run the global variable initializer function of tensorflow
path_Load = os.path.join(os.getcwd(), 'LSTMB_CheckPoints.hdf5')
predictor = load_model(path_Load)
predictor.load_weights(path_Load)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

keras_file = "keras_model.h5"
tf.keras.models.save_model(predictor, keras_file)
converter = tf.lite.TFLiteConverter.from_keras_model_file(keras_file)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
before re-saving the model, to rule out uninitialized model variables, but the error remains.
Has anyone faced a similar issue and found a solution? Is there a way to convert the sequential model to tflite directly, without saving and reloading the file?
You should save your model in the SavedModel (.pb) format.
First, load your model if you saved it earlier, and then run
YOUR_MODEL.save('NAME.pb').
Now you have a folder that contains saved_model.pb and the other necessary files and folders.
Create the converter instance:
converter = tf.lite.TFLiteConverter.from_saved_model('NAME.pb').
Finally, convert your model and save it:
tfmodel = converter.convert()
open("model.tflite", "wb").write(tfmodel)
You first need to save your model in .h5 format by running
model.save("model.h5")
Afterwards, follow these steps:
new_model = tf.keras.models.load_model(filepath="model.h5")
tflite_converter = tf.lite.TFLiteConverter.from_keras_model(new_model)
tflite_model = tflite_converter.convert()
open("tf_lite_model.tflite", "wb").write(tflite_model)
Related
I've been trying to follow this process to run an object detector (SSD MobileNet) on the Google Coral Edge TPU:
Edge TPU model workflow
I've successfully trained and evaluated my model with the Object Detection API. I have the model both in checkpoint format as well as tf SavedModel format. As per the documentation, the next step is to convert to .tflite format using post-training quantization.
I am attempting to follow this example. The export_tflite_graph_tf2.py script and the conversion code that comes after it run without errors, but I see some weird behavior when I try to actually use the model to run inference.
I am unable to use the saved_model generated by export_tflite_graph_tf2.py. When running the following code, I get an error:
print('loading model...')
model = tf.saved_model.load(tflite_base)
print('model loaded!')
results = model(image_np)
TypeError: '_UserObject' object is not callable --> results = model(image_np)
As a result, I have no way to tell if the script broke my model or not before I even convert it to tflite. Why would model not be callable in this way? I have even verified that the type returned by tf.saved_model.load() is the same when I pass in a saved_model before it went through the export_tflite_graph_tf2.py script and after. The only possible explanation I can think of is that the script alters the object in some way that causes it to break.
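(As an aside, a SavedModel loaded this way is sometimes only invocable through its serving signature rather than by calling it directly. A rough sketch of that path, where the signature name and the expected input dtype/shape are assumptions that would need to be verified against the exported model:)

detect_fn = model.signatures['serving_default']
print(detect_fn.structured_input_signature)  # shows the expected input name, shape and dtype
# image_np may need a leading batch dimension and a specific dtype
results = detect_fn(tf.convert_to_tensor(image_np, dtype=tf.float32))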
I convert to tflite with post-training quantization with the following code:
def representative_data_gen():
    dataset_list = tf.data.Dataset.list_files(images_dir + '/*')
    for i in range(100):
        image = next(iter(dataset_list))
        image = tf.io.read_file(image)
        # supports PNG as well
        image = tf.io.decode_image(image, channels=3)
        image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE])
        image = tf.cast(image / 255., tf.float32)
        image = tf.expand_dims(image, 0)
        if i == 0:
            print(image.dtype)
        yield [image]

converter = tf.lite.TFLiteConverter.from_saved_model(base_saved_model)
# converter = tf.lite.TFLiteConverter.from_keras(model)
# This enables quantization
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # issue here?
# This sets the representative dataset for quantization
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [
    # tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    # tf.lite.OpsSet.SELECT_TF_OPS,  # enable TensorFlow ops.
    tf.lite.OpsSet.TFLITE_BUILTINS_INT8  # This ensures that if any ops can't be quantized, the converter throws an error
]
# For full integer quantization, though supported types defaults to int8 only, we explicitly declare it for clarity.
converter.target_spec.supported_types = [tf.int8]
converter.target_spec.supported_ops += [tf.lite.OpsSet.TFLITE_BUILTINS]
# These set the input and output tensors to uint8 (added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model_quantized = converter.convert()
Everything runs with no errors, but when I try to actually run an image through the model, it returns garbage. I tried removing the quantization to see if that was the issue, but even without quantization it returns seemingly random bounding boxes that are completely off from the model's performance prior to conversion. The shapes of the output tensors look fine; it's just that the content is all wrong.
What's the right way to get this model converted to a quantized tflite form? I should note that I can't use the tflite_convert utility because I need to quantize the model, and according to the source code the quantize_weights flag appears to be deprecated. There are a bunch of conflicting resources from TF1 and TF2 about this conversion process, so I'm pretty confused.
Note: I'm using a retrained SSD MobileNet from the model zoo. I have not made any changes to the architecture in my training workflow. I've confirmed that the errors persist even on the base model pulled directly from the object detection model zoo.
I'm having a very similar problem with post-training quantization and asked about it on GitHub.
I managed to get results from the TFLite model, but they were not good enough. Here is the notebook showing how I did it. Maybe it helps you get a step further.
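For what it's worth, here is a minimal sketch of running the quantized model through the Python TFLite interpreter (the model path is a placeholder; with inference_input_type = tf.uint8 the input must be uint8, and the input scale/zero point are available in the input details):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model_quantized.tflite")  # placeholder path
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Prepare a uint8 image of the expected shape (replace the zeros with real pixels).
input_shape = input_details[0]['shape']               # e.g. [1, IMAGE_SIZE, IMAGE_SIZE, 3]
scale, zero_point = input_details[0]['quantization']  # how uint8 maps back to the calibrated float range
image = np.zeros(input_shape, dtype=np.uint8)

interpreter.set_tensor(input_details[0]['index'], image)
interpreter.invoke()
outputs = [interpreter.get_tensor(d['index']) for d in output_details]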
I am trying to download the VGG19 model via TensorFlow
base_model = VGG19(input_shape=[256, 256, 3],
                   include_top=False,
                   weights='imagenet')
However, the download always gets stuck before it finishes. I've tried different models too, like InceptionV3, and the same happens there.
Fortunately, the console output shows the link from which the weights can be downloaded manually:
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg19/vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5
19546112/80134624 [======>.......................] - ETA: 11s
After downloading the model from the given link I try to import the model using
base_model = load_model('vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5')
but I get this error
ValueError: No model found in config file.
How do I load in the downloaded .h5 model manually?
You're calling load_model on a weights file, not on a saved model. You need to define the model first, then load the weights into it.
weights = "path/to/weights"
model = VGG19(input_shape=[256, 256, 3], include_top=False, weights=None)  # the defined model
model.load_weights(weights)  # the weights
I got the same problem when working through the TensorFlow tutorial, too.
Transfer learning and fine-tuning: Create the base model from the pre-trained convnets
# Create the base model from the pre-trained model MobileNet V2
IMG_SIZE = (160, 160)
IMG_SHAPE = IMG_SIZE + (3,)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE, include_top=False, weights=None)
# load model weights manually
weights = 'mobilenet_v2_weights_tf_dim_ordering_tf_kernels_1.0_160_no_top.h5'
base_model.load_weights(weights)
I downloaded the .h5 weights file manually and loaded it this way. It works.
I have a MobileNetV2-based model that uses the TimeDistributed layer. I want to convert that model to a TensorFlow Lite model in order to run it on a smartphone, but there is an unsupported operation.
Here is the code:
import tensorflow as tf
IMAGE_SHAPE = (224, 224, 3)
mobilenet_model = tf.keras.applications.MobileNetV2(input_shape=IMAGE_SHAPE,
                                                    include_top=False,
                                                    pooling='avg',
                                                    weights='imagenet')
inputs = tf.keras.Input(shape=(5,) + IMAGE_SHAPE)
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
outputs = tf.keras.layers.TimeDistributed(mobilenet_model)(x)
model = tf.keras.Model(inputs, outputs)
model.compile()
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tfmodel = converter.convert() # fails
Here is the error message:
error: failed while converting: 'main':
Some ops are not supported by the native TFLite runtime, you can enable TF kernels fallback using TF Select. See instructions: https://www.tensorflow.org/lite/guide/ops_select
TF Select ops: Mul
Details:
tf.Mul(tensor<?x5x224x224x3xf32>, tensor<f32>) -> (tensor<?x5x224x224x3xf32>)
The error is caused by the interaction between the input preprocessing and the TimeDistributed layer. If I disable the input preprocessing, the conversion succeeds, but obviously the network will not work properly without retraining. Models that have the preprocessing but no TimeDistributed layer can also be converted. Is it possible to move the preprocessing to a different place to avoid this error?
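For example, since mobilenet_v2.preprocess_input just scales pixels to [-1, 1] (x / 127.5 - 1), one option I'm wondering about is expressing it with a built-in Rescaling layer instead, which should not require a TF Select op (untested sketch; in older TF versions the layer lives under tf.keras.layers.experimental.preprocessing):

# Intended as an equivalent of mobilenet_v2.preprocess_input: scale pixels to [-1, 1]
x = tf.keras.layers.Rescaling(scale=1.0 / 127.5, offset=-1.0)(inputs)
outputs = tf.keras.layers.TimeDistributed(mobilenet_model)(x)
model = tf.keras.Model(inputs, outputs)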
Also, adding the select ops helps to convert it successfully, but I'm not sure how to enable them at runtime. I'm using the Mediapipe framework to build an Android app, and I don't think Mediapipe supports linking to extra operations.
Try this for the conversion of the model:
converter = tf.lite.TFLiteConverter.from_saved_model(model_dir)
converter.experimental_new_converter = False
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
converter.allow_custom_ops = True
tflite_model = converter.convert()
Check the documentation on how to convert it and use it inside Android. Maybe you will have success with that. If not, come back and ping me again.
If your TensorFlow version is 2.6.0 or 2.6.1, you can give it a try without:
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS  # enable TensorFlow ops.
]
Check the list of allowlisted operators here.
Best
I lost my dataset through a careless mistake, and all I have left is my .tflite file. Is there any way to get the .h5 file back from it? I have done a decent amount of research on this but found no solution.
The conversion from a TensorFlow SavedModel or tf.keras H5 model to .tflite is an irreversible process. Specifically, the original model topology is optimized during the compilation by the TFLite converter, which leads to some loss of information. Also, the original tf.keras model's loss and optimizer configurations are discarded, because those aren't required for inference.
However, the .tflite file still contains some information that can help you restore the original trained model. Most importantly, the weight values are available, although they might be quantized, which could lead to some loss in precision.
The code example below shows you how to read weight values from a .tflite file after it's created from a simple trained tf.keras.Model.
import numpy as np
import tensorflow as tf
# First, create and train a dummy model for demonstration purposes.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, input_shape=[5], activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(loss="binary_crossentropy", optimizer="sgd")
xs = np.ones([8, 5])
ys = np.zeros([8, 1])
model.fit(xs, ys, epochs=1)
# Convert it to a TFLite model file.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("converted.tflite", "wb").write(tflite_model)
# Use `tf.lite.Interpreter` to load the written .tflite back from the file system.
interpreter = tf.lite.Interpreter(model_path="converted.tflite")
all_tensor_details = interpreter.get_tensor_details()
interpreter.allocate_tensors()
for tensor_item in all_tensor_details:
    print("Weight %s:" % tensor_item["name"])
    print(interpreter.tensor(tensor_item["index"])())
These weight values loaded back from the .tflite file can be used with tf.keras.Model.set_weights() method, which will allow you to re-inject the weight values into a new instance of trainable Model that you have in Python. Obviously, this requires you to still have access to the code that defines the model's architecture.
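For example, continuing from the interpreter above, a rough sketch of pushing the recovered values back into a rebuilt model (the tensor names are placeholders, so print them first to see what your .tflite file actually contains; note also that TFLite typically stores fully-connected kernels as [units, input_dim], i.e. transposed relative to Keras):

# Rebuild the exact same architecture as the original model.
restored = tf.keras.Sequential([
    tf.keras.layers.Dense(10, input_shape=[5], activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid")])

# Collect the weight tensors from the interpreter by name and inspect them.
weights_by_name = {t["name"]: interpreter.get_tensor(t["index"]) for t in all_tensor_details}
print(list(weights_by_name.keys()))

# Example for the first Dense layer; 'KERNEL_NAME' and 'BIAS_NAME' are placeholders
# for whatever names show up in the printout above.
kernel = weights_by_name['KERNEL_NAME'].T  # transpose back to Keras' [input_dim, units] layout
bias = weights_by_name['BIAS_NAME']
restored.layers[0].set_weights([kernel, bias])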
When inspecting the Keras model yolov3-tiny.h5 using netron
I see that the input node is called input_1 and has type float32[?,?,?,3]. I would expect float32[?,416,416,3]
How can I force it to be float32[?,416,416,3]?
This is needed for downstream processing. The Keras model has to be converted to a frozen_model.pb in TensorFlow and then be further processed for deployment.
The deployment tools cannot handle an input with unknown width and height.
Here is how I generated the Keras model. I downloaded yolov3-tiny.cfg (https://github.com/pjreddie/darknet/blob/master/cfg/yolov3-tiny.cfg) and yolov3-tiny.weights (https://pjreddie.com/media/files/yolov3-tiny.weights)
and then converted the model to a Keras model using the following command:
python convert.py -p yolov3-tiny.cfg yolov3-tiny.weights model_data/yolov3-tiny.h5
(this code is obtained by cloning https://github.com/qqwweee/keras-yolo3)
Making a prediction using the saved Keras model works fine :
python yolo_video.py --image --model model_data/yolov3-tiny.h5
However, as described above, inspecting the Keras model yolov3-tiny.h5 with netron shows the input node input_1 with type float32[?,?,?,3] instead of the expected float32[?,416,416,3]. How can I force it to be float32[?,416,416,3]?
You could do that while converting the keras model file to tflite by passing input_shapes as a dictionary with input node name as the key and shape as the value.
Check out the TFLiteConverter class here
import argparse
import tensorflow as tf

def _main_(args):
    kmodel = args.kmodel
    lmodel = args.lmodel
    converter = tf.lite.TFLiteConverter.from_keras_model_file(kmodel, input_shapes={"input_1": [1, 416, 416, 3]})
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()
    open(lmodel, "wb").write(tflite_model)

if __name__ == '__main__':
    argparser = argparse.ArgumentParser(description='Quantize a trained yolo model')
    argparser.add_argument('-k', '--kmodel', help='path to model weights file input')
    argparser.add_argument('-l', '--lmodel', help='path to tensorflow lite output')
    args = argparser.parse_args()
    _main_(args)
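Invoked, for example, like this (the script name is just a placeholder):
python quantize_yolo.py --kmodel model_data/yolov3-tiny.h5 --lmodel yolov3-tiny.tflite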