I'm using TensorFlow 2.1 in order to train models with quantization-aware training.
The code to do that is:
import tensorflow_model_optimization as tfmot
model = tfmot.quantization.keras.quantize_annotate_model(model)
This will add fake-quantize nodes to the graph. These nodes should adjust the model's weights so they are easier to quantize to int8 and to work with int8 data.
When the training ends, I convert and quantize the model to TF-Lite like so:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = [give data provider]
quantized_tflite_model = converter.convert()
At this point, I wouldn't expect to see the fake-quantize layers in the TF-Lite graph. But surprisingly, I do see them.
Moreover, when I run this quantized model in the TF-Lite C++ sample app, I see that it's also running the fake-quantize nodes during inference. In addition, it also dequantizes and quantizes the activations between each layer.
Here is a sample of the output from the C++ code:
Node 0 Operator Builtin Code 80 FAKE_QUANT
Inputs: 1
Outputs: 237
Node 1 Operator Builtin Code 114 QUANTIZE
Inputs: 237
Outputs: 238
Node 2 Operator Builtin Code 3 CONV_2D
Inputs: 238 59 58
Outputs: 167
Temporaries: 378
Node 3 Operator Builtin Code 6 DEQUANTIZE
Inputs: 167
Outputs: 239
Node 4 Operator Builtin Code 80 FAKE_QUANT
Inputs: 239
Outputs: 166
Node 5 Operator Builtin Code 114 QUANTIZE
Inputs: 166
Outputs: 240
Node 6 Operator Builtin Code 3 CONV_2D
Inputs: 240 61 60
Outputs: 169
I find all this very weird, especially considering that this model should run entirely in int8, yet the fake-quantize nodes are getting float32 inputs.
Any help here would be appreciated.
representative_dataset is mostly used with post-training quantization.
Comparing your commands with the QAT example, you probably want to remove that line:
https://www.tensorflow.org/model_optimization/guide/quantization/training_example
import os
import tempfile
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_tflite_model = converter.convert()

# Create float TFLite model.
float_converter = tf.lite.TFLiteConverter.from_keras_model(model)
float_tflite_model = float_converter.convert()

# Measure sizes of models.
_, float_file = tempfile.mkstemp('.tflite')
_, quant_file = tempfile.mkstemp('.tflite')

with open(quant_file, 'wb') as f:
    f.write(quantized_tflite_model)

with open(float_file, 'wb') as f:
    f.write(float_tflite_model)

print("Float model in Mb:", os.path.getsize(float_file) / float(2**20))
print("Quantized model in Mb:", os.path.getsize(quant_file) / float(2**20))
You can force TF Lite to use only INT8 operations:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
If an error occurs, then some layers of your network do not have an INT8 implementation yet.
Furthermore, you could also try to inspect your network using Netron.
Nonetheless, if you also want INT8 inputs and outputs, you need to adjust those as well:
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
However, there is currently an open issue regarding the input and output types; see Issue #38285.
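Putting those pieces together for a QAT model, the converter setup would look roughly like this (a sketch combining the settings above; no representative dataset is needed since the ranges come from training):

converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Restrict the converter to int8 kernels only.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Make the model's own input and output tensors int8 as well.
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
quantized_tflite_model = converter.convert()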
I have encountered the same issue. In my case, the quantized tflite model's size increases by ~3x with fake quantization. Does this happen for you as well? Inspecting the tflite graph in Netron shows quantization layers inserted between every op.
My workaround so far is to instantiate a new copy of the model without fake quantization and then load the weights layer by layer from the quantization-aware-trained model. I can't directly set the weights on the whole model because the fake-quantization layers have parameters, too.
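Roughly, the layer-by-layer copy looks like this (a sketch; build_model() and q_aware_model are placeholders for your own architecture and QAT model, and the ordering of the wrapper's weight list should be double-checked against your tfmot version):

clean_model = build_model()  # placeholder: rebuilds the original, non-QAT architecture
for layer in clean_model.layers:
    target = layer.get_weights()
    if not target:
        continue  # nothing to copy for weight-less layers
    # tfmot typically names wrapped layers "quant_<original_name>"; adjust the
    # lookup if your version names them differently.
    qat_layer = q_aware_model.get_layer('quant_' + layer.name)
    # The wrapper's weight list usually starts with the wrapped layer's own
    # weights, followed by the quantizer min/max variables, so take only the
    # first len(target) tensors and check the shapes before assigning.
    source = qat_layer.get_weights()[:len(target)]
    assert all(s.shape == t.shape for s, t in zip(source, target))
    layer.set_weights(source)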
Related
I've been trying to follow this process to run an object detector (SSD MobileNet) on the Google Coral Edge TPU:
Edge TPU model workflow
I've successfully trained and evaluated my model with the Object Detection API. I have the model both in checkpoint format as well as tf SavedModel format. As per the documentation, the next step is to convert to .tflite format using post-training quantization.
I am attempting to follow this example. The export_tflite_graph_tf2.py script and the conversion code that comes after run without errors, but I see some weird behavior when I try to actually use the model to run inference.
I am unable to use the saved_model generated by export_tflite_graph_tf2.py. When running the following code, I get an error:
print('loading model...')
model = tf.saved_model.load(tflite_base)
print('model loaded!')
results = model(image_np)
TypeError: '_UserObject' object is not callable --> results = model(image_np)
As a result, I have no way to tell whether the script broke my model before I even convert it to tflite. Why would the model not be callable in this way? I have even verified that the type returned by tf.saved_model.load() is the same for a saved_model before it went through the export_tflite_graph_tf2.py script and after. The only possible explanation I can think of is that the script alters the object in some way that causes it to break.
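(As an aside, a SavedModel loaded with tf.saved_model.load is normally invoked through one of its signatures rather than called directly; a minimal sketch, where the 'serving_default' key and the float32 input dtype are assumptions:)

import tensorflow as tf

model = tf.saved_model.load(tflite_base)
print(list(model.signatures.keys()))  # inspect which signatures are exported

# 'serving_default' is an assumption; use whichever key the print shows.
detect_fn = model.signatures['serving_default']
results = detect_fn(tf.convert_to_tensor(image_np, dtype=tf.float32))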
I convert to tflite with post-training quantization with the following code:
def representative_data_gen():
    dataset_list = tf.data.Dataset.list_files(images_dir + '/*')
    for i in range(100):
        image = next(iter(dataset_list))
        image = tf.io.read_file(image)
        # supports PNG as well
        image = tf.io.decode_image(image, channels=3)
        image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE])
        image = tf.cast(image / 255., tf.float32)
        image = tf.expand_dims(image, 0)
        if i == 0:
            print(image.dtype)
        yield [image]

converter = tf.lite.TFLiteConverter.from_saved_model(base_saved_model)
# converter = tf.lite.TFLiteConverter.from_keras(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # This enables quantization; issue here?
converter.representative_dataset = representative_data_gen  # This sets the representative dataset for quantization
converter.target_spec.supported_ops = [
    # tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    # tf.lite.OpsSet.SELECT_TF_OPS,  # enable TensorFlow ops.
    tf.lite.OpsSet.TFLITE_BUILTINS_INT8  # This ensures that if any ops can't be quantized, the converter throws an error
]
# For full integer quantization, though supported types defaults to int8 only, we explicitly declare it for clarity.
converter.target_spec.supported_types = [tf.int8]
converter.target_spec.supported_ops += [tf.lite.OpsSet.TFLITE_BUILTINS]
# These set the input and output tensors to uint8 (added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model_quantized = converter.convert()
Everything runs with no errors, but when I try to actually run an image through the model, it returns garbage. I tried removing the quantization to see if that was the issue, but even without quantization it returns seemingly random bounding boxes that are completely off from the model's performance prior to conversion. The shapes of the output tensors look fine; it's just that the content is all wrong.
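(For reference, feeding an image through the uint8 model with the TFLite interpreter looks roughly like the sketch below; the input scaling and the output tensor layout are assumptions.)

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_content=tflite_model_quantized)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]

# Map the float image (0..1) into the uint8 range using the model's
# quantization parameters.
scale, zero_point = input_details['quantization']
image = np.random.rand(1, IMAGE_SIZE, IMAGE_SIZE, 3).astype(np.float32)  # placeholder image
quantized = np.clip(image / scale + zero_point, 0, 255).astype(np.uint8)

interpreter.set_tensor(input_details['index'], quantized)
interpreter.invoke()

# SSD detection models typically output boxes, classes, scores and a count;
# the exact tensor order varies, so print the names to check.
for detail in interpreter.get_output_details():
    print(detail['name'], interpreter.get_tensor(detail['index']).shape)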
What's the right way to get this model converted to a quantized tflite form? I should note that I can't use the tflite_convert utility because I need to quantize the model, and it appears, according to the source code, that the quantize_weights flag is deprecated. There are a bunch of conflicting resources from TF1 and TF2 about this conversion process, so I'm pretty confused.
Note: I'm using a retrained SSD MobileNet from the model zoo. I have not made any changes to the architecture in my training workflow. I've confirmed that the errors persist even on the base model pulled directly from the object detection model zoo.
I have a very similar problem with post-training quantization and asked about it on GitHub.
I managed to get results from the TFLite model, but they were not good enough. Here is the notebook showing how I did it. Maybe it helps you take a step forward.
TLDR:
Short term: Trying to quantize a specific portion of a TF model (recreated from a TFLite model). Skip to pictures below.
Long term: Transfer Learn on Yamnet and compile for Edge TPU.
Source code to follow along is here
I've been trying to transfer learn on Yamnet and compile for a Coral Edge TPU for a few weeks now.
Started here, but quickly realized that the model wouldn't quantize and compile for the Edge TPU because of its dynamic input, and out-of-the-box TFLite quantization doesn't work well with the audio preprocessing that runs before Yamnet's MobileNet.
After tinkering and learning for a few weeks, I found a Yamnet model compiled for the Edge TPU (sadly without source code) and figured my best shot would be to try to recreate it in TF, then quantize, then convert to TFLite, then compile for the Edge TPU. I'll also have to figure out how to set the weights - not sure if I have to/can do that pre or post quantization. Anyway, I've effectively recreated the model, but am having a hard time quantizing without a bunch of wacky behavior.
The model currently looks like this:
I want it to look like this:
For quantizing, I tried:
TFLite Model Optimization which puts tfl.quantize ops all over the place and fails to compile for the Edge TPU.
Quantization Aware Training which throws some annoying errors that I've been trying to work through.
If you know a better way to achieve the long-term goal than what I proposed, please (please please please) share! Otherwise, help on specific quant ops would be great! Also, feel free to reach out if anything needs clarification.
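For reference, the tfmot pattern for quantizing only some layers looks roughly like the toy sketch below (the model and the choice of annotated layer are placeholders, not the real Yamnet graph):

import tensorflow as tf
import tensorflow_model_optimization as tfmot

quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer

# Toy model: annotate only the layers you actually want quantized and leave
# the rest (e.g. an audio preprocessing front-end) in float.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(96, 64, 1)),
    tf.keras.layers.Conv2D(8, 3, activation='relu'),          # stays float
    tf.keras.layers.Flatten(),
    quantize_annotate_layer(tf.keras.layers.Dense(10)),       # gets quantized
])

# quantize_apply inserts fake-quant only around the annotated layers.
quant_model = tfmot.quantization.keras.quantize_apply(model)
quant_model.summary()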
I've run into the same issues trying to convert the TensorFlow Yamnet model to full integer in order to compile it for the Coral Edge TPU, and I think I've found a workaround.
I've been trying to stick to the tutorials posted in the tflite-model-maker section and to find a solution within this API because, from experience, I found it to be a very powerful tool.
If your goal is to build a model which is fully compiled for the Edge TPU (meaning all layers, including the input and output ones, converted to int8) I'm afraid this solution won't work for you. But since you posted that you're trying to obtain a custom model with the same structure as:
Yamnet model compiled for the Edge TPU
then I think this workaround would help you.
When you train your custom model following the basic tutorial, it is possible to export it both in .tflite format
model.export(models_path, tflite_filename='my_birds_model.tflite')
and as a full TensorFlow model:
model.export(models_path, export_format=[mm.ExportFormat.SAVED_MODEL, mm.ExportFormat.LABEL])
Then it is possible to convert the full tensorflow saved model to tflite format by using the following script:
import tensorflow as tf
import numpy as np
import glob
from scipy.io import wavfile
dataset_path = '/path/to/DATASET/testing/*/*.wav'
representative_data = []
saved_model_path = './saved_model'
samples = glob.glob(dataset_path)
input_size = 15600 #Yamnet model's input size
def representative_data_gen():
    for input_value in samples:
        sample_rate, audio_data = wavfile.read(input_value, 'rb')
        audio_data = np.array(audio_data)
        splitted_audio_data = tf.signal.frame(audio_data, input_size, input_size, pad_end=True, pad_value=0) / tf.int16.max  # normalization to the [-1,+1] range
        yield [np.float32(splitted_audio_data[0])]
tf.compat.v1.enable_eager_execution()
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
converter.experimental_new_converter = True #if you're using tensorflow<=2.2
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# converter.inference_input_type = tf.int8  # or tf.uint8
# converter.inference_output_type = tf.int8  # or tf.uint8
converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()
open(saved_model_path + "converted_model.tflite", "wb").write(tflite_model)
As you can see, the lines which tell the converter to change the input/output type are commented out. This is because the Yamnet model expects normalized audio samples in the range [-1,+1] as its input, and the numerical representation must be float32. In fact, the compiled Yamnet model you posted uses the same dtype (float32) for its input and output layers.
That being said, you will end up with a tflite model converted from the full TensorFlow model produced by tflite-model-maker. The script will end with the following line:
fully_quantize: 0, inference_type: 6, input_inference_type: 0, output_inference_type: 0
and inference_type: 6 tells you the inference operations are suitable for compilation to the Coral Edge TPU.
The last step is to compile the model. If you compile it with the standard edgetpu_compiler command line:
edgetpu_compiler -s converted_model.tflite
the final model would have only 4 operations which run on the EdgeTPU:
Number of operations that will run on Edge TPU: 4
Number of operations that will run on CPU: 53
You have to add the optional flag -a, which enables multiple subgraphs (it is in an experimental stage, though):
edgetpu_compiler -sa converted_model.tflite
After this you will have:
Number of operations that will run on Edge TPU: 44
Number of operations that will run on CPU: 13
And most of the model operations will be mapped to the Edge TPU, namely:
Operator Count Status
MUL 1 Mapped to Edge TPU
DEQUANTIZE 4 Operation is working on an unsupported data type
SOFTMAX 1 Mapped to Edge TPU
GATHER 2 Operation not supported
COMPLEX_ABS 1 Operation is working on an unsupported data type
FULLY_CONNECTED 3 Mapped to Edge TPU
LOG 1 Operation is working on an unsupported data type
CONV_2D 14 Mapped to Edge TPU
RFFT2D 1 Operation is working on an unsupported data type
LOGISTIC 1 Mapped to Edge TPU
QUANTIZE 3 Operation is otherwise supported, but not mapped due to some unspecified limitation
DEPTHWISE_CONV_2D 13 Mapped to Edge TPU
MEAN 1 Mapped to Edge TPU
STRIDED_SLICE 2 Mapped to Edge TPU
PAD 2 Mapped to Edge TPU
RESHAPE 1 Operation is working on an unsupported data type
RESHAPE 6 Mapped to Edge TPU
I'm trying to convert a tf.keras model based on MobileNetV2 with a transpose convolution using the latest tf-nightly. Here is the conversion code:
#saved_model_dir='/content/ksaved' # tried from saved model also
#converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter = tf.lite.TFLiteConverter.from_keras_model(reshape_model)
converter.experimental_new_converter = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.representative_dataset = representative_dataset_gen
tflite_quant_model = converter.convert()
open("/content/model_quant.tflite", "wb").write(tflite_quant_model)
The conversion was successful (in Google Colab), but it has quantize and dequantize operators at the ends (as seen using Netron). All operators seem to be supported. The representative dataset images are float32 in the generator, and the model has a 4-channel float32 input by default. It looks like we need a UINT8 input and output inside the model for the Coral TPU. How can we properly carry out this conversion?
Refs:
Full integer quantization of weights and activations
How to quantize inputs and outputs of optimized tflite model
Coral Edge TPU Compiler cannot convert tflite model: Model not quantized
I tried 'tf.compat.v1.lite.TFLiteConverter.from_keras_model_file' instead of the v2 version. I got the error "Quantization not yet supported for op: TRANSPOSE_CONV" while trying to quantize the model in the latest tf 1.15 (using a representative dataset), and "Internal compiler error. Aborting!" from the Coral TPU compiler when using the tf2.0-quantized tflite.
Tflite model # https://github.com/tensorflow/tensorflow/issues/31368
It seems to work until the last convolutional block (1x7x7x160).
The compiler error (Aborting) does not give any information regarding the potential cause, and all types of convolutional layers seem to be supported as per the Coral docs.
Coral doc: https://coral.ai/docs/edgetpu/models-intro/#quantization
Here is a dummy-model example of quantizing a keras model. Notice I'm using strictly tf1.15 for the example, because tf2.0 deprecated:
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
with the from_keras_model api. I think the most confusing thing about this is that you can still set it, but nothing happens, meaning the model will still take float inputs. I notice that you are using tf2.0, since from_keras_model is a tf2.0 api. Coral still suggests using tf1.15 for converting models for now. I suggest downgrading tensorflow, or maybe even just using this (while keeping tf2.0; it may or may not work):
tf.compat.v1.lite.TFLiteConverter.from_keras_model_file
More on it here.
I always make sure not to use the experimental converter:
converter.experimental_new_converter = False
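The dummy-model example boils down to roughly this (a sketch; the model path and the calibration data are placeholders):

import numpy as np
import tensorflow as tf  # tf 1.15

def representative_dataset_gen():
    for _ in range(100):
        # Placeholder calibration data; replace with real preprocessed images.
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model_file('model.h5')
converter.experimental_new_converter = False
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# With the tf1 converter these actually take effect.
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open('model_quant.tflite', 'wb') as f:
    f.write(converter.convert())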
When inspecting the Keras model yolov3-tiny.h5 using Netron, I see that the input node is called input_1 and has type float32[?,?,?,3]. I would expect float32[?,416,416,3].
How can I force it to be float32[?,416,416,3]?
This is needed for downstream processing: the Keras model has to be converted to a frozen_model.pb in TensorFlow and then further processed for deployment.
The deployment tools cannot handle an input with unknown width and height.
Here is how I generated the Keras model. I downloaded yolov3-tiny.cfg (https://github.com/pjreddie/darknet/blob/master/cfg/yolov3-tiny.cfg) and yolov3-tiny.weights (https://pjreddie.com/media/files/yolov3-tiny.weights)
and then converted the model to a Keras model using the following command:
python convert.py -p yolov3-tiny.cfg yolov3-tiny.weights model_data/yolov3-tiny.h5
(this code is obtained by cloning https://github.com/qqwweee/keras-yolo3)
Making a prediction using the saved Keras model works fine:
python yolo_video.py --image --model model_data/yolov3-tiny.h5
However, when inspecting the Keras model yolov3-tiny.h5 using Netron, I see that the input node is called input_1 and has type float32[?,?,?,3], whereas I would expect float32[?,416,416,3]. How can I force it to be float32[?,416,416,3]?
You could do that while converting the Keras model file to tflite, by passing input_shapes as a dictionary with the input node name as the key and the shape as the value.
Check out the TFLiteConverter class here
import argparse
import tensorflow as tf

def _main_(args):
    kmodel = args.kmodel
    lmodel = args.lmodel
    converter = tf.lite.TFLiteConverter.from_keras_model_file(kmodel, input_shapes={"input_1": [1, 416, 416, 3]})
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()
    open(lmodel, "wb").write(tflite_model)

if __name__ == '__main__':
    argparser = argparse.ArgumentParser(description='Quantize a trained yolo model')
    argparser.add_argument('-k', '--kmodel', help='path to model weights file input')
    argparser.add_argument('-l', '--lmodel', help='path to tensor flow lite output')
    args = argparser.parse_args()
    _main_(args)
I have successfully gone through the official tutorial, which explains how to retrain the inception-v3 model, and later successfully retrained the same model for my specific purposes.
The model, however, is complex and slow compared to other, simpler models such as inception-v1, whose accuracy is good enough for some tasks. Specifically, I would like to retrain the model to use it on Android, and ideally its speed should be comparable to the original TensorFlow Android demo. Anyway, I tried to retrain the inception-v1 model from this link with the following modifications in retrain.py:
BOTTLENECK_TENSOR_NAME = 'avgpool0/reshape:0'
BOTTLENECK_TENSOR_SIZE = 2048
MODEL_INPUT_WIDTH = 224
MODEL_INPUT_HEIGHT = 224
MODEL_INPUT_DEPTH = 3
JPEG_DATA_TENSOR_NAME = 'input'
RESIZED_INPUT_TENSOR_NAME = 'input'
As opposed to inception v3, inception v1 does not have any decodeJpeg or resize nodes:
inception v3 nodes:
DecodeJpeg/contents
DecodeJpeg
Cast
ExpandDims/dim
ExpandDims
ResizeBilinear/size
ResizeBilinear
...
pool_3
pool_3/_reshape/shape
pool_3/_reshape
softmax/weights
softmax/biases
softmax/logits/MatMul
softmax/logits
softmax
inception v1 nodes:
input
conv2d0_w
conv2d0_b
conv2d1_w
conv2d1_b
conv2d2_w
conv2d2_b
...
softmax1_pre_activation
softmax1
avgpool0/reshape/shape
avgpool0/reshape
softmax2_pre_activation/matmul
softmax2_pre_activation
softmax2
output
output1
output2
so I guess the images have to be reshaped before being fed into the graph.
Right now the error occurs when hitting the following function:
def run_bottleneck_on_image(sess, image_data, image_data_tensor,
                            bottleneck_tensor):
    """Runs inference on an image to extract the 'bottleneck' summary layer.

    Args:
      sess: Current active TensorFlow Session.
      image_data: Numpy array of image data.
      image_data_tensor: Input data layer in the graph.
      bottleneck_tensor: Layer before the final softmax.

    Returns:
      Numpy array of bottleneck values.
    """
    bottleneck_values = sess.run(
        bottleneck_tensor,
        {image_data_tensor: image_data})
    bottleneck_values = np.squeeze(bottleneck_values)
    return bottleneck_values
Error:
TypeError: Cannot interpret feed_dict key as Tensor: Can not convert a Operation into a Tensor.
I guess the data fed into the input node of the inception v1 graph has to be reshaped to match the data produced after passing through the following nodes in inception v3:
DecodeJpeg/contents
DecodeJpeg
Cast
ExpandDims/dim
ExpandDims
ResizeBilinear/size
ResizeBilinear
If anyone has already managed to retrain the inception v1 model, or has an idea how to reshape the data in the inception v1 case to match inception v3, I would be very thankful for any tips or suggestions.
Not sure if you have solved this or not, but I am working on a similar problem.
I am trying to use a different model (not Inception-v1 or Inception-v3) with the Inception-v3 transfer learning tutorial. This post seems to be on the right track of remapping the input of the new model (in your case inception-v1) to play nice with the jpeg encoding used in the rest of the tutorial:
feeding image data in tensorflow for transfer learning
The only problem I am having is an error on my input saying "Cannot convert a tensor of type uint8 to an input type of float32", but this may at least put you on the right track.
Good Luck!
(For the ones who are still interested)
The bottleneck tensor size should be 1024 for inception-v1. For me, the following setup works with the mentioned inception-v1 model and this retrain script. There is no need for a separate JPEG data tensor or anything else.
bottleneck_tensor_name = 'avgpool0/reshape:0'
bottleneck_tensor_size = 1024
input_width = 224
input_height = 224
input_depth = 3
resized_input_tensor_name = 'input:0'
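To make the data flow concrete, feeding an image into this graph looks roughly like the sketch below (written against the TF1-compat API; the frozen-graph path and the PIL-based preprocessing are assumptions):

import numpy as np
import tensorflow as tf
from PIL import Image

GRAPH_PB = 'tensorflow_inception_graph.pb'  # hypothetical path to the inception-v1 graph
IMAGE_PATH = 'example.jpg'                  # hypothetical test image

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(GRAPH_PB, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.compat.v1.import_graph_def(graph_def, name='')

with tf.compat.v1.Session(graph=graph) as sess:
    # Inception v1 has no DecodeJpeg/ResizeBilinear nodes, so decode and
    # resize outside the graph and feed a [1, 224, 224, 3] float array.
    img = Image.open(IMAGE_PATH).resize((224, 224))
    image_data = np.expand_dims(np.asarray(img, dtype=np.float32), 0)
    bottleneck = sess.run('avgpool0/reshape:0', {'input:0': image_data})
    print(np.squeeze(bottleneck).shape)  # expect 1024 values, matching the size above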