As far as I know, matrix inversion is a common operator.
However, tf.raw_ops.MatrixInverse is not supported in TFLite, and BatchMatrixInverse is not available in GraphDef version 1205.
How can I calculate the inverse of a matrix in TFLite?
Best wishes
Before you convert your model to a TFLite model, you have to enable Select TF ops by using the code below:
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS     # enable TensorFlow ops.
]
For more details, please refer to this document. Thank you.
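For illustration, here is a minimal end-to-end sketch, assuming a toy tf.Module that wraps tf.linalg.inv; the class name and output file name are made up for the example:
import tensorflow as tf

# A toy model whose only job is to invert its input matrix.
class InverseModel(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, 3, 3], tf.float32)])
    def invert(self, x):
        return tf.linalg.inv(x)

model = InverseModel()
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [model.invert.get_concrete_function()])
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS,    # enable TensorFlow ops (Flex delegate).
]
tflite_model = converter.convert()
open("inverse_model.tflite", "wb").write(tflite_model)
Note that at inference time the Flex delegate (for example the tensorflow-lite-select-tf-ops dependency on Android) must be linked so the MatrixInverse op can run.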
I used Keract to visualize the feature maps of a TensorFlow/Keras model.
I have applied quantization with TensorFlow Lite. I would like to visualize the feature maps generated by the TensorFlow Lite model during an inference. Do you know a way to do this?
The reason is that I don't fully understand the interaction between weights, activations and scale/zero-point coefficients. So I would like to do the inference process step by step for a quantized network.
Thank you for your help
There are several ways to extract information about weights, scales and zero-point values.
Way one:
You can find additional information about the code below on the TensorFlow website.
import tensorflow as tf
import numpy as np

# Load your TFLite model.
TF_LITE_MODEL_FILE_NAME = "Your_TFLite_file.tflite"
interpreter = tf.lite.Interpreter(model_path=TF_LITE_MODEL_FILE_NAME)

# Gives you the input and output details.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Gives you all tensor details, index-wise. You will find all the
# quantization parameters here.
interpreter.get_tensor_details()

# Get an individual tensor value with interpreter.get_tensor(index).
# You will find the index for each tensor in get_tensor_details().
interpreter.allocate_tensors()
r = interpreter.get_tensor(12).astype(np.float32)
print('Tensors', r)
Way two (the easy way):
Upload your TFLite file to the Netron website. There you can get a lot of information about your TFLite file. You can also install Netron on your PC; here is the Netron GitHub link with installation instructions.
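To see how the stored integer values interact with the scale and zero-point, the following sketch dequantizes one tensor by hand using the parameters returned by get_tensor_details(); the tensor index 12 is a placeholder, and the relation used is real_value = scale * (int_value - zero_point):
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="Your_TFLite_file.tflite")
interpreter.allocate_tensors()

# Pick any tensor index reported by get_tensor_details(); 12 is a placeholder.
detail = interpreter.get_tensor_details()[12]
scale, zero_point = detail['quantization']   # per-tensor (scale, zero_point)

q = interpreter.get_tensor(detail['index'])  # raw int8 / uint8 values (weights)
real = scale * (q.astype(np.float32) - zero_point)
print(detail['name'], real)
Weights can be read this way directly; if you also want intermediate activations after invoke(), recent TensorFlow versions accept experimental_preserve_all_tensors=True when constructing the Interpreter.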
I'm using TensorFlow Lite for Android. The official Python code uses the following snippet before creating tf.lite.Interpreter, but this code uses the TensorFlow module, which is not available in Android Java. How can I implement the bicubic resize method with TensorFlow Lite?
img_resized = tf.image.resize(img, [width, height], method='bicubic', preserve_aspect_ratio=False)
img_input = img_resized.numpy()
reshape_img = img_input.reshape(1, width, height, 3)
tensor = tf.convert_to_tensor(reshape_img, dtype=tf.float32)
# load model
print("Load model...")
interpreter = tf.lite.Interpreter(model_path=model_name)
The official Python code uses model.tflite. I only found these two resize methods in Android TensorFlow Lite, and neither is bicubic:
ResizeOp.ResizeMethod.BILINEAR
ResizeOp.ResizeMethod.NEAREST_NEIGHBOR
TFLite does not have bicubic resize. Three options here:
If the resize is part of preprocessing, do it on the Java side.
Try to use Select TF ops (see the sketch below): add converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS] to your conversion code and add implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:0.0.0-nightly' to your build.gradle.
If options 1 and 2 do not work, the friendly community will suggest that you implement it yourself.
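As a sketch of option 2, the bicubic resize can be kept inside the TensorFlow graph and converted with Select TF ops enabled, so the Java side only feeds the raw image; the model path, target size, and wrapper function here are assumptions for illustration:
import tensorflow as tf

WIDTH, HEIGHT = 256, 256  # placeholder target size

# Wrap the existing model so the bicubic resize happens inside the graph.
base = tf.keras.models.load_model("your_model.h5")  # placeholder model path

@tf.function(input_signature=[tf.TensorSpec([1, None, None, 3], tf.float32)])
def serve(img):
    resized = tf.image.resize(img, [HEIGHT, WIDTH], method='bicubic')
    return base(resized)

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [serve.get_concrete_function()])
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # TF Lite builtin ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF ops for the bicubic resize
]
open("model_with_resize.tflite", "wb").write(converter.convert())
On the Android side the tensorflow-lite-select-tf-ops dependency from option 2 is still required at runtime.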
I have a TensorFlow model based on BoostedTreesClassifier and I want to deploy it on mobile with the help of TensorFlow Lite.
However, when I try to convert my model to a TensorFlow Lite model, I get an error saying that there are unsupported operations (as of TensorFlow v2.3.1):
tf.BoostedTreesBucketize
tf.BoostedTreesEnsembleResourceHandleOp
tf.BoostedTreesPredict
tf.BoostedTreesQuantileStreamResourceGetBucketBoundaries
tf.BoostedTreesQuantileStreamResourceHandleOp
Adding the tf.lite.OpsSet.SELECT_TF_OPS option helps a bit, but some operations still need a custom implementation:
tf.BoostedTreesEnsembleResourceHandleOp
tf.BoostedTreesPredict
tf.BoostedTreesQuantileStreamResourceGetBucketBoundaries
tf.BoostedTreesQuantileStreamResourceHandleOp
I've also tried TensorFlow v2.4.0-rc3, which reduces the set to the following:
tf.BoostedTreesEnsembleResourceHandleOp
tf.BoostedTreesPredict
The conversion code is as follows:
converter = tf.lite.TFLiteConverter.from_saved_model(model_path, signature_keys=['serving_default'])
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS
]
tflite_model = converter.convert()
signature_keys is specified explicitly, because the model exported with BoostedTreesClassifier#export_saved_model has multiple signatures.
Is there a way to deploy this model on mobile other than writing custom implementations for the unsupported ops?
While converting a model to TFLite, I get this error:
"""
Some of the operators in the model are not supported by the standard TensorFlow Lite runtime and are not recognized by TensorFlow. If you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ABS, ADD, CONV_2D, MAX_POOL_2D, MUL, RELU, SOFTMAX, SQUEEZE, SUB. Here is a list of operators for which you will need custom implementations: AdjustContrastv2, AdjustHue, AdjustSaturation, RandomUniform.
"""
How to resolve this?
tensorflow version: 1.13.1
You can use TF ops directly by enabling Select TF ops.
I've confirmed that AdjustContrastv2, AdjustHue, AdjustSaturation are available via FlexDelegate.
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/delegates/flex/allowlisted_flex_ops.cc#L35
To use this feature, you need TF 2.4 or higher. Since TF 2.4 has not been released yet, you need to use the tf-nightly release.
FYI, regarding migrating from TF1 to TF2, please check https://www.tensorflow.org/guide/migrate
You may try adding the following lines to specify that your model can use ops from both the TF Lite builtins and TF.
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
Or, better, rewrite the ops that are not supported by the TF Lite builtins in terms of ops that are supported.
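As a sketch of that last suggestion, contrast adjustment can be expressed with builtin-friendly ops (reduce_mean, sub, mul, add) instead of tf.image.adjust_contrast, which lowers to the unsupported AdjustContrastv2; the function below is an illustrative stand-in, not the library implementation:
import tensorflow as tf

def adjust_contrast_builtin(images, contrast_factor):
    """Contrast adjustment using only ops that map to TF Lite builtins.

    Scales each pixel's distance from the per-channel spatial mean
    by contrast_factor, which mirrors the usual definition.
    """
    mean = tf.reduce_mean(images, axis=[-3, -2], keepdims=True)  # MEAN
    return (images - mean) * contrast_factor + mean              # SUB, MUL, ADD
A rewrite like this keeps the graph within the builtin operator set, so neither --allow_custom_ops nor the Flex delegate is needed for that op.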
I'm trying to convert a tf.keras model based on MobileNetV2 with transpose convolution using the latest tf-nightly. Here is the conversion code:
# saved_model_dir = '/content/ksaved'  # tried from saved model also
# converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter = tf.lite.TFLiteConverter.from_keras_model(reshape_model)
converter.experimental_new_converter = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.representative_dataset = representative_dataset_gen
tflite_quant_model = converter.convert()
open("/content/model_quant.tflite", "wb").write(tflite_quant_model)
The conversion was successful (in Google Colab), but the converted model has quantize and dequantize operators at the ends (as seen with Netron). All operators seem to be supported. The representative dataset images are float32 in the generator, and the model has a 4-channel float32 input by default. It looks like we need uint8 input and output inside the model for the Coral TPU. How can we properly carry out this conversion?
Refs:
Full integer quantization of weights and activations
How to quantize inputs and outputs of optimized tflite model
Coral Edge TPU Compiler cannot convert tflite model: Model not quantized
I tried tf.compat.v1.lite.TFLiteConverter.from_keras_model_file instead of the v2 version. I got the error "Quantization not yet supported for op: TRANSPOSE_CONV" while trying to quantize the model in the latest TF 1.15 (using a representative dataset), and "Internal compiler error. Aborting!" from the Coral TPU compiler when using the TF 2.0 quantized tflite.
TFLite model: https://github.com/tensorflow/tensorflow/issues/31368
It seems to work until the last convolutional block (1x7x7x160).
The compiler error (Aborting) does not give any information regarding the potential cause, and all types of convolutional layers seem to be supported as per the Coral docs.
Coral doc: https://coral.ai/docs/edgetpu/models-intro/#quantization
Here is a dummy example of quantizing a Keras model (a sketch along these lines appears at the end of this answer). Notice I'm using strictly TF 1.15 for the example, because TF 2.0 deprecated:
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
with the from_keras_model API. I think the most confusing thing about this is that you can still set them but nothing happens, which means the model will still take float inputs. I notice that you are using TF 2.0, because from_keras_model is a TF 2.0 API. Coral still suggests using TF 1.15 for converting models for now. I suggest downgrading TensorFlow, or maybe even just using this (while keeping TF 2.0; it may or may not work):
tf.compat.v1.lite.TFLiteConverter.from_keras_model_file
More on it here.
I always make sure not to use the experimental converter:
converter.experimental_new_converter = False
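Since the dummy example itself is not reproduced above, here is a minimal sketch of what the TF 1.15 flow could look like; the Keras file name, input shape, and calibration data are placeholders, and representative_dataset_gen is assumed to yield float32 batches matching the model input:
import numpy as np
import tensorflow as tf  # assumes TF 1.15

def representative_dataset_gen():
    for _ in range(100):
        # Placeholder calibration data shaped like the model input.
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file("model.h5")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# In TF 1.15 these settings actually take effect, giving uint8 input/output tensors.
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
open("model_quant.tflite", "wb").write(converter.convert())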