How can I use tf.lite.TFLiteConverter to get a .tflite file from a tensor_forest model so that it can be used on an Android smartphone?
I found tensor_forest in the contrib directory (link), and there is an example that uses this model here (link).
I added the following code to save the model:
saver = tf.train.Saver(save_relative_paths=True, max_to_keep=10)
checkpoint_prefix = 'checkpoints/model'
for i in range(1, num_steps + 1):
    (skip)
    if i % 50 == 0 or i == 1:
        saver.save(sess, checkpoint_prefix, global_step=i)
    (skip)
This generates 'checkpoint', 'model-2000.data-00000-of-00001', 'model-2000.index', and 'model-2000.meta' in the checkpoints directory. But I cannot convert these files with tf.lite.TFLiteConverter, because it seems to need a .pb file, or I am missing some parameters.
How can I generate the .tflite file in the correct way?
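For reference, one way to get from these checkpoint files to a .tflite file in TF 1.x is to restore the checkpoint into a session and use tf.lite.TFLiteConverter.from_session. This is only a sketch: the tensor names below are placeholders that you would have to look up in the actual forest graph, and tensor_forest relies on contrib ops that the converter may not support, so the conversion can still fail on unsupported ops.

import tensorflow as tf  # TF 1.x

# Restore the graph structure and the trained weights from the checkpoint.
saver = tf.train.import_meta_graph('checkpoints/model-2000.meta')
with tf.Session() as sess:
    saver.restore(sess, 'checkpoints/model-2000')
    graph = tf.get_default_graph()

    # Placeholder tensor names -- inspect the graph to find the real ones.
    input_tensor = graph.get_tensor_by_name('input:0')
    output_tensor = graph.get_tensor_by_name('probabilities:0')

    converter = tf.lite.TFLiteConverter.from_session(
        sess, [input_tensor], [output_tensor])
    tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)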
Related
I've been trying to follow this process to run an object detector (SSD MobileNet) on the Google Coral Edge TPU:
Edge TPU model workflow
I've successfully trained and evaluated my model with the Object Detection API. I have the model both in checkpoint format and in TF SavedModel format. As per the documentation, the next step is to convert it to .tflite format using post-training quantization.
I am attempting to follow this example. The export_tflite_graph_tf2.py script and the conversion code that comes after it run without errors, but I see some weird behavior when I try to actually use the model to run inference.
I am unable to use the saved_model generated by export_tflite_graph_tf2.py. When running the following code, I get an error:
print('loading model...')
model = tf.saved_model.load(tflite_base)
print('model loaded!')
results = model(image_np)
TypeError: '_UserObject' object is not callable --> results = model(image_np)
As a result, I have no way to tell whether the script broke my model before I even convert it to tflite. Why would the model not be callable in this way? I have even verified that the type returned by tf.saved_model.load() is the same for a saved_model before it goes through the export_tflite_graph_tf2.py script and after. The only explanation I can think of is that the script alters the object in some way that causes it to break.
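As a sanity check (a sketch, assuming the exported SavedModel carries a 'serving_default' signature), the model can be exercised through its signatures instead of calling the loaded object directly:

import tensorflow as tf

loaded = tf.saved_model.load(tflite_base)
print(list(loaded.signatures.keys()))    # e.g. ['serving_default']
infer = loaded.signatures['serving_default']
print(infer.structured_input_signature)  # shows the expected name/dtype/shape

# Assuming a single image input; the dtype and shape must match what
# structured_input_signature reports.
results = infer(tf.convert_to_tensor(image_np, dtype=tf.float32))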
I convert to tflite with post-training quantization using the following code:
def representative_data_gen():
    dataset_list = tf.data.Dataset.list_files(images_dir + '/*')
    for i in range(100):
        image = next(iter(dataset_list))
        image = tf.io.read_file(image)
        # supports PNG as well
        image = tf.io.decode_image(image, channels=3)
        image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE])
        image = tf.cast(image / 255., tf.float32)
        image = tf.expand_dims(image, 0)
        if i == 0:
            print(image.dtype)
        yield [image]
converter = tf.lite.TFLiteConverter.from_saved_model(base_saved_model)
# converter = tf.lite.TFLiteConverter.from_keras(model)

# This enables quantization
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # issue here?
# This sets the representative dataset for quantization
converter.representative_dataset = representative_data_gen

converter.target_spec.supported_ops = [
    # tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    # tf.lite.OpsSet.SELECT_TF_OPS,    # enable TensorFlow ops.
    # This ensures that if any ops can't be quantized, the converter throws an error
    tf.lite.OpsSet.TFLITE_BUILTINS_INT8,
]
# For full integer quantization, though supported types defaults to int8 only,
# we explicitly declare it for clarity.
converter.target_spec.supported_types = [tf.int8]
converter.target_spec.supported_ops += [tf.lite.OpsSet.TFLITE_BUILTINS]

# These set the input and output tensors to uint8 (added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model_quantized = converter.convert()
Everything runs with no errors, but when I try to actually run an image through the model, it returns garbage. I tried removing the quantization to see if that was the issue, but even without quantization it returns seemingly random bounding boxes that are completely off from the model's performance prior to conversion. The shapes of the output tensors look fine; it's just that the contents are all wrong.
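For reference, a minimal sketch of running the quantized flatbuffer through the TFLite interpreter looks like this (assuming 'image' is a float array of shape [1, IMAGE_SIZE, IMAGE_SIZE, 3] scaled to [0, 1], matching the representative dataset):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_content=tflite_model_quantized)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()

# inference_input_type is uint8, so the float image has to be quantized
# with the input tensor's scale and zero point.
scale, zero_point = input_details['quantization']
quantized_input = np.uint8(image / scale + zero_point)

interpreter.set_tensor(input_details['index'], quantized_input)
interpreter.invoke()

# SSD detection models typically output boxes, classes, scores and a count;
# check output_details for the exact ordering.
outputs = [interpreter.get_tensor(d['index']) for d in output_details]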
What's the right way to convert this model to a quantized tflite form? I should note that I can't use the tflite_convert utility, because I need to quantize the model and, according to the source code, the quantize_weights flag appears to be deprecated. There are a bunch of conflicting resources from TF1 and TF2 about this conversion process, so I'm pretty confused.
Note: I'm using a retrained SSD MobileNet from the model zoo. I have not made any changes to the architecture in my training workflow. I've confirmed that the errors persist even on the base model pulled directly from the object detection model zoo.
I have a very similar problem with post-training quantization and asked about it on GitHub.
I managed to get results from the TFLite model, but they were not good enough. Here is the notebook showing how I did it. Maybe it helps you get a step further.
I fine-tuned a BERT model from TensorFlow Hub to build a simple sentiment analyzer. The model trains and runs fine. On export, I simply used:
tf.saved_model.save(model, export_dir='models')
And this works just fine, until I reboot.
After a reboot, the model no longer loads. I've tried using a Keras loader as well as TensorFlow Serving, and I get the same error.
I get the following error message:
Not found: /tmp/tfhub_modules/09bd4e665682e6f03bc72fbcff7a68bf879910e/assets/vocab.txt; No such file or directory
The model is trying to load assets from the tfhub modules cache, which is wiped by reboot. I know I could persist the cache, but I don't want to do that because I want to be able to generate models and then copy them over to a separate application without worrying about the cache.
The crux of it is that I don't think it's necessary to look in the cache for the assets at all. The model was saved with an assets folder wherein vocab.txt was generated, so in order to find the assets it just needs to look in its own assets folder (I think). However, it doesn't seem to be doing that.
Is there any way to change this behaviour?
Here is the code for building and exporting the model (it's not a clever model, just a prototype of my workflow):
import tensorflow as tf
import tensorflow_hub as hub

# map_model_to_preprocess, map_name_to_handle and load_sentiment140
# are defined elsewhere in my code.

bert_model_name = "bert_en_uncased_L-12_H-768_A-12"
BATCH_SIZE = 64
EPOCHS = 1  # Initial

def build_bert_model(bert_model_name):
    input_layer = tf.keras.layers.Input(shape=(), dtype=tf.string, name="inputs")
    preprocessing_layer = hub.KerasLayer(
        map_model_to_preprocess[bert_model_name], name="preprocessing"
    )
    encoder_inputs = preprocessing_layer(input_layer)
    bert_model = hub.KerasLayer(
        map_name_to_handle[bert_model_name], name="BERT_encoder"
    )
    outputs = bert_model(encoder_inputs)
    net = outputs["pooled_output"]
    net = tf.keras.layers.Dropout(0.1)(net)
    net = tf.keras.layers.Dense(1, activation=None, name="classifier")(net)
    return tf.keras.Model(input_layer, net)

def main():
    train_ds, val_ds = load_sentiment140(batch_size=BATCH_SIZE, epochs=EPOCHS)
    steps_per_epoch = tf.data.experimental.cardinality(train_ds).numpy()

    init_lr = 3e-5
    optimizer = tf.keras.optimizers.Adam(learning_rate=init_lr)

    model = build_bert_model(bert_model_name)
    model.compile(optimizer=optimizer, loss='mse', metrics='mse')
    model.fit(train_ds, validation_data=val_ds, steps_per_epoch=steps_per_epoch)

    tf.saved_model.save(model, export_dir='models')
This problem comes from a TensorFlow bug triggered by versions /1 and /2 of https://tfhub.dev/tensorflow/bert_en_uncased_preprocess. The updated models tensorflow/bert_*_preprocess/3 (released last Friday) avoid this bug. Please update to the newest version.
The Classify Text with BERT tutorial has been updated accordingly.
Thanks for bringing this up!
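For example (assuming the preprocessing handle is the one resolved through map_model_to_preprocess in the question), pointing the preprocessing layer at the /3 release would look something like this:

import tensorflow_hub as hub

# Use the /3 release of the preprocessing model named above, which avoids
# the assets-in-cache bug.
preprocessing_layer = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3",
    name="preprocessing",
)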
I have a TensorFlow file, AlexNet.pb. I am trying to load it and then classify an image, but I can't find a way to do this.
No one seems to have a simple example of loading and running a .pb file.
It depends on how the protobuf file has been created.
If the .pb file is the result of:
# Create a builder to export the model
builder = tf.saved_model.builder.SavedModelBuilder("export")
# Tag the model in order to be capable of restoring it specifying the tag set
builder.add_meta_graph_and_variables(sess, ["tag"])
builder.save()
You have to know how that model has been tagged and use the tf.saved_model.loader.load method to load the saved graph into the current (empty) graph.
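A minimal sketch, assuming the model was exported to the "export" directory with the tag set ["tag"] as in the builder snippet above:

import tensorflow as tf  # TF 1.x

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ["tag"], "export")
    # The restored graph is now available as sess.graph; see below for how
    # to look up the input and output tensors by name and run them.
    graph = sess.graph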
If the model instead has been frozen, you have to load the binary file into memory manually:
with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.get_default_graph()
tf.import_graph_def(graph_def, name="prefix")
In both cases, you have to know the name of the input tensor and the name of the node you want to execute:
If, for example, your input tensor is a placeholder named batch and the node you want to execute is named dense/BiasAdd:0, you have to do something like:
batch = graph.get_tensor_by_name('batch:0')
prediction = graph.get_tensor_by_name('dense/BiasAdd:0')
values = sess.run(prediction, feed_dict={
    batch: your_input_batch,
})
(If the frozen graph was imported with name="prefix" as above, the tensor names become 'prefix/batch:0' and 'prefix/dense/BiasAdd:0'.)
You can use OpenCV to load .pb models, e.g.:
net = cv2.dnn.readNet("model.pb")
Make sure you are using a suitable version of OpenCV: OpenCV 3.4.2 or OpenCV 4.
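A rough sketch of classifying an image this way (the input size, scale factor, mean and channel order below are placeholders; they depend on how AlexNet.pb was trained):

import cv2
import numpy as np

net = cv2.dnn.readNet("model.pb")

image = cv2.imread("image.jpg")
# 227x227 is the classic AlexNet input size; adjust the preprocessing to
# match how the network was actually trained.
blob = cv2.dnn.blobFromImage(image, scalefactor=1.0, size=(227, 227),
                             mean=(0, 0, 0), swapRB=True, crop=False)
net.setInput(blob)
output = net.forward()
print("predicted class:", int(np.argmax(output)))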
We're trying to translate old training code into more tf.estimator.Estimator-compliant code.
In the initial code, we fine-tune an original model on a target dataset. Only some layers are loaded from the checkpoint before training takes place, using a combination of variables_to_restore and init_fn with MonitoredTrainingSession.
How can one achieve this kind of weight loading with the tf.estimator.Estimator approach?
You have two options; the first one is simpler:
1- Use tf.train.init_from_checkpoint in your model_fn (see the sketch below).
2- model_fn returns an EstimatorSpec; you can set a scaffold via EstimatorSpec.
import tensorflow as tf

def model_fn(features, labels, mode):
    # your model definition here
    ...

# specify your saved checkpoint path
checkpoint_path = "model.ckpt"
ws = tf.estimator.WarmStartSettings(ckpt_to_initialize_from=checkpoint_path)
est = tf.estimator.Estimator(model_fn=model_fn, warm_start_from=ws)
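A rough sketch of option 1 with tf.train.init_from_checkpoint (my_network and the "resnet/" scope are placeholders, not part of the original answer; the assignment_map is what restricts loading to only some layers):

import tensorflow as tf

def model_fn(features, labels, mode):
    logits = my_network(features)  # hypothetical model definition

    # Load only the variables under the given scope from the old checkpoint;
    # everything else keeps its fresh initializer.
    tf.train.init_from_checkpoint(
        "model.ckpt", assignment_map={"resnet/": "resnet/"})

    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    train_op = tf.train.AdamOptimizer().minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)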
I am using TensorFlow checkpointing after every 10 epochs using the following code:
checkpoint_dir = os.path.abspath(os.path.join(out_dir, "checkpoints"))
checkpoint_prefix = os.path.join(checkpoint_dir, "model")
...
if current_step % checkpoint_every == 0:
    path = saver.save(sess, checkpoint_prefix, global_step=current_step)
    print("Saved model checkpoint to {}\n".format(path))
The problem is that, as new files are generated, the older checkpoint files are automatically deleted, so only the 5 most recent remain.
This is the expected behavior: the docs for tf.train.Saver say that by default only the 5 most recent checkpoint files are kept. To adjust that, set max_to_keep to the desired value.
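For instance, to keep the last 10 checkpoints instead:

import tensorflow as tf  # TF 1.x

# Keep the 10 most recent checkpoints instead of the default 5.
saver = tf.train.Saver(max_to_keep=10)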