I need to apply quantization to my model (tflite).
I want to convert float32 to float16 through dynamic range quantization.
This is my code:
import tensorflow as tf
import json
import sys
import pprint
from tensorflow import keras
import numpy as np
converter = tf.lite.TFLiteConverter.from_saved_model('models')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_quant_model = converter.convert()
open("quant.tflite", "wb").write(tflite_quant_model)
On my MacBook there is a folder called 'models', which contains two tflite files.
When I execute the code, the following error occurs:
converter = tf.lite.TFLiteConverter.from_saved_model('models')
OSError: SavedModel file does not exist at: models/{saved_model.pbtxt|saved_model.pb}
I checked most of the related posts on Stack Overflow, but I couldn't find a solution.
Please review my code and give me some advice.
I uploaded my tflite file in case it is needed to check whether there is a problem with the model itself.
This is my model (download link):
https://drive.google.com/file/d/13gft7bREsv2vZYFvfoCiP5ndxHkfGKIM/view?usp=sharing
Thank you so much.
The tf.lite.TFLiteConverter.from_saved_model function takes a TensorFlow SavedModel (a directory containing a .pb file) as its parameter. You are giving it a TensorFlow Lite (.tflite) model instead, which necessarily leads to an error. If you want to convert your model to float16, the only way I know of is to take the original model in ".pb" (SavedModel) format and convert that, for example as sketched below.
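For reference, a minimal sketch of the float16 conversion, assuming the original model was exported as a SavedModel into a placeholder directory 'exported_model' that contains saved_model.pb and a variables/ folder:
import tensorflow as tf

# Point the converter at a SavedModel directory, not at a folder of .tflite files.
converter = tf.lite.TFLiteConverter.from_saved_model('exported_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]  # store weights as float16
tflite_fp16_model = converter.convert()
with open("quant_fp16.tflite", "wb") as f:
    f.write(tflite_fp16_model)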
I'm trying to convert a PyTorch model (a .pth file containing weights) to an ONNX file and then to a TensorFlow model, since I work in TensorFlow, in order to fine-tune it.
This is my attempt so far, but I keep getting errors.
I think the problem is that the weights are for a vision transformer, but I haven't figured out what type of model to use for the conversion. I'm assuming a CRNN, but if there is an easier way I would love to know.
PS: I did upload the pth file to my Drive; the path is correct.
from torch.autograd import Variable
import torch.onnx
import torchvision
import torch
import onnx
import torch.nn as nn
dummy_input = torch.randn(1, 3, 224, 224)
file_path='/content/drive/MyDrive/VitSTR/vitstr_base_patch16_224_aug.pth'
model = torchvision.models.vgg16()
model.load_state_dict(torch.load(file_path))
model.eval()
torch.onnx.export(model, dummy_input, "vitstr.onnx")
Thank you all.
I used the same architecture as the one the weights were trained with, and it worked.
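For illustration, a rough sketch of that fix, assuming a model builder for ViTSTR is importable (the create_vitstr import and its argument are hypothetical placeholders; use whatever constructor the ViTSTR repository actually provides for vitstr_base_patch16_224):
import torch
from vitstr import create_vitstr  # hypothetical import; use the repo's real model definition

# Instantiate the same architecture the checkpoint was trained with, not vgg16.
model = create_vitstr('vitstr_base_patch16_224')  # placeholder constructor
state_dict = torch.load('/content/drive/MyDrive/VitSTR/vitstr_base_patch16_224_aug.pth',
                        map_location='cpu')
model.load_state_dict(state_dict)
model.eval()

# Adjust the dummy input to whatever input shape the model actually expects.
dummy_input = torch.randn(1, 1, 224, 224)
torch.onnx.export(model, dummy_input, "vitstr.onnx", opset_version=12)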
I'm trying to learn how to use some ML tools for Android. I got the Text Classification demo working and it seems to run fine, so I then tried creating my own model.
The code I used to create my own model was this:
import numpy as np
import os
from tflite_model_maker import model_spec
from tflite_model_maker import text_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.text_classifier import AverageWordVecSpec
from tflite_model_maker.text_classifier import DataLoader
import tensorflow as tf
assert tf.__version__.startswith('2')
tf.get_logger().setLevel('ERROR')
spec = model_spec.get('mobilebert_classifier')
train_data = DataLoader.from_csv(
    filename='/path to file/train.csv',
    text_column='sentence',
    label_column='label',
    model_spec=spec,
    is_training=True)
model = text_classifier.create(train_data, model_spec=spec, epochs=10)
model.export(export_dir='average_word_vec')
The code appeared to run fine and it created a model.tflite file for me. I then replaced the demo tflite file with mine. But when I run the demo I get the following error:
java.lang.AssertionError: Error occurred when initializing NLClassifier: Type mismatch for input tensor serving_default_input_type_ids:0. Requested STRING, got INT32.
at org.tensorflow.lite.task.text.nlclassifier.NLClassifier.initJniWithByteBuffer(Native Method)
at org.tensorflow.lite.task.text.nlclassifier.NLClassifier.access$100(NLClassifier.java:67)
at org.tensorflow.lite.task.text.nlclassifier.NLClassifier$2.createHandle(NLClassifier.java:223)
at org.tensorflow.lite.task.core.TaskJniUtils.createHandleFromLibrary(TaskJniUtils.java:91)
at org.tensorflow.lite.task.text.nlclassifier.NLClassifier.createFromBufferAndOptions(NLClassifier.java:219)
at org.tensorflow.lite.task.text.nlclassifier.NLClassifier.createFromFileAndOptions(NLClassifier.java:175)
at org.tensorflow.lite.task.text.nlclassifier.NLClassifier.createFromFile(NLClassifier.java:150)
at org.tensorflow.lite.examples.textclassification.client.TextClassificationClient.load(TextClassificationClient.java:44)
at org.tensorflow.lite.examples.textclassification.MainActivity.lambda$onStart$1$MainActivity(MainActivity.java:67)
at org.tensorflow.lite.examples.textclassification.-$$Lambda$MainActivity$eJaQnJq74KcmPEczFE5swJIGydg.run(Unknown Source:2)
What am I missing?
In your code you trained a MobileBERT model, but saved it to the path average_word_vec?
spec = model_spec.get('mobilebert_classifier')
model.export(export_dir='average_word_vec')
One possibility is that you are using the average_word_vec model but adding MobileBERT metadata, so the preprocessing doesn't match.
Could you follow the Model Maker tutorial and try again?
https://colab.sandbox.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb
Make sure to change the export path.
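If the goal is a model that drops into the demo's NLClassifier (which expects a STRING input tensor), one option, sketched here under that assumption and with placeholder paths, is to train with the average_word_vec spec instead of MobileBERT:
spec = model_spec.get('average_word_vec')
train_data = DataLoader.from_csv(
    filename='/path to file/train.csv',
    text_column='sentence',
    label_column='label',
    model_spec=spec,
    is_training=True)
model = text_classifier.create(train_data, model_spec=spec, epochs=10)
model.export(export_dir='average_word_vec')
A MobileBERT model can still be used on Android, but it is loaded through the Task Library's BertNLClassifier rather than NLClassifier.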
I need to run my model on an NVIDIA Jetson TX2, so I converted my working YOLOv3 model into TensorRT (.trt format), following this link: https://towardsdatascience.com/have-you-optimized-your-deep-learning-model-before-deployment-cdc3aa7f413d. But after converting the model to .trt, I need to test whether it works well (i.e., whether the detection is good enough). I couldn't find any sample code for loading and testing a .trt model. If anybody can help, please post sample code in the answer section or a link for reference.
You can load and perform inference with your TRT model using this snippet of code.
This was executed with TensorFlow 2.1.0 in a Google Colab environment.
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt
from tensorflow.python.saved_model import tag_constants

saved_model_loaded = tf.saved_model.load(output_saved_model_dir, tags=[tag_constants.SERVING])
signature_keys = list(saved_model_loaded.signatures.keys())
print(signature_keys)  # Outputs: ['serving_default']
graph_func = saved_model_loaded.signatures[signature_keys[0]]
graph_func(x_test)  # Use this to perform inference
output_saved_model_dir is the location of your TensorRT-optimized model in SavedModel format.
From here, you can add your testing methods to determine the performance of your pre- and post-processed model.
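As a hedged example of the inference call above, assuming a YOLOv3-style model with a 416x416 RGB input (adjust the shape and dtype to whatever your exported signature expects):
import numpy as np
import tensorflow as tf

x_test = tf.constant(np.random.rand(1, 416, 416, 3).astype(np.float32))
preds = graph_func(x_test)  # returns a dict of output tensors keyed by output name
print({name: tensor.shape for name, tensor in preds.items()})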
EDIT:
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt
import numpy as np
conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS
conversion_params = conversion_params._replace(max_workspace_size_bytes=(1<<32))
conversion_params = conversion_params._replace(precision_mode="FP16")
conversion_params = conversion_params._replace(maximum_cached_engines=100)
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=input_saved_model_dir,
    conversion_params=conversion_params)
converter.convert()
converter.save(output_saved_model_dir)
Here is the code used for converting and saving the TensorFlow-TensorRT optimized model.
I load a saved h5 model and want to save the model as pb.
The model is saved during training with the tf.keras.callbacks.ModelCheckpoint callback function.
TF version: 2.0.0a
edit: same issue also with 2.0.0-beta1
My steps to save a pb:
I first set K.set_learning_phase(0)
then I load the model with tf.keras.models.load_model
Then, I define the freeze_session() function.
(Optionally, I compile the model.)
Then I use the freeze_session() function with tf.keras.backend.get_session.
The error I get, with and without compiling:
AttributeError: module 'tensorflow.python.keras.api._v2.keras.backend'
has no attribute 'get_session'
My Question:
Does TF2 not have the get_session anymore?
(I know that tf.contrib.saved_model.save_keras_model does not exist anymore, and I also tried tf.saved_model.save, which didn't really work.)
Or does get_session only work when I actually train the model, and just loading the h5 does not work?
Edit: Also with a freshly trained session, no get_session is available.
If so, how would I go about converting the h5 to pb without training? Is there a good tutorial?
Thank you for your help
update:
Since the official release of TF 2.x, the graph/session concept has changed, and the SavedModel API should be used.
You can use tf.compat.v1.disable_eager_execution() with TF 2.x and it will result in a pb file. However, I am not sure what kind of pb file it is, as the SavedModel composition changed from TF1 to TF2. I will keep digging.
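As a hedged sketch of that TF2-native route, the h5 model can simply be re-exported with the SavedModel API (paths are placeholders); this yields a saved_model.pb plus a variables/ directory rather than a single frozen-graph .pb:
import tensorflow as tf

# Load the Keras h5 model and export it in SavedModel format.
model = tf.keras.models.load_model('/path/to/model.h5')
tf.saved_model.save(model, '/path/to/saved_model_dir')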
Here is how I save the model to pb from the h5 model:
import logging
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow.python.keras import backend as K
from tensorflow import keras
# necessary !!!
tf.compat.v1.disable_eager_execution()
h5_path = '/path/to/model.h5'
model = keras.models.load_model(h5_path)
model.summary()
# save pb
with K.get_session() as sess:
    output_names = [out.op.name for out in model.outputs]
    input_graph_def = sess.graph.as_graph_def()
    for node in input_graph_def.node:
        node.device = ""
    graph = graph_util.remove_training_nodes(input_graph_def)
    graph_frozen = graph_util.convert_variables_to_constants(sess, graph, output_names)
    tf.io.write_graph(graph_frozen, '/path/to/pb', 'model.pb', as_text=False)
logging.info("save pb successfully!")
I use TF2 to convert the model like this:
pass keras.callbacks.ModelCheckpoint(save_weights_only=True) to model.fit and save a checkpoint while training;
after training, load the checkpoint with self.model.load_weights(self.checkpoint_path);
save it as h5 with self.model.save(h5_path, overwrite=True, include_optimizer=False);
convert the h5 to pb just as above.
I'm wondering the same thing, as I'm trying to use get_session() and set_session() to free up GPU memory. These functions seem to be missing and aren't in the TF2.0 Keras documentation. I imagine it has something to do with Tensorflow's switch to eager execution, as direct session access is no longer required.
Use
from tensorflow.compat.v1.keras.backend import get_session
in Keras 2 & TensorFlow 2.2, then call:
import logging
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow.python.keras import backend as K
from tensorflow import keras
from tensorflow.compat.v1.keras.backend import get_session
# necessary !!!
tf.compat.v1.disable_eager_execution()
h5_path = '/path/to/model.h5'
model = keras.models.load_model(h5_path)
model.summary()
# save pb
with get_session() as sess:
    output_names = [out.op.name for out in model.outputs]
    input_graph_def = sess.graph.as_graph_def()
    for node in input_graph_def.node:
        node.device = ""
    graph = graph_util.remove_training_nodes(input_graph_def)
    graph_frozen = graph_util.convert_variables_to_constants(sess, graph, output_names)
    tf.io.write_graph(graph_frozen, '/path/to/pb', 'model.pb', as_text=False)
logging.info("save pb successfully!")
I've managed to create a TensorFlow model with a custom operation, saved in SavedModel (.pb) format.
My problem is that I cannot convert it to the Lite version using either the command-line utilities or the Python API.
My Python code is:
import tensorflow as tf
import os
import custom_op
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
converter = tf.lite.TFLiteConverter.from_saved_model("./SavedModel")
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                        tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
But conversion failed with error:
ValueError: Provide an input shape for input array 'X'.
I assume it's because my placeholders don't have a defined shape. I don't understand why the normal TensorFlow model works without it.
Any help?
As described in the TensorFlow Lite documentation, you can pass different parameters to tf.lite.TFLiteConverter.from_saved_model.
For more complex SavedModels, the optional parameters that can be passed into TFLiteConverter.from_saved_model() are input_arrays, input_shapes, output_arrays, tag_set and signature_key. Details of each parameter are available by running help(tf.lite.TFLiteConverter).
You can pass this information as described there. You need to provide an input shape for your input array 'X', like this:
tf.lite.TFLiteConverter.from_saved_model("./Saved_model", input_shapes={("X" : [1,H,W,C])})