Missing some boosted trees operations in TensorFlow Lite

I have a TensorFlow model based on BoostedTreesClassifier and I want to deploy it on a mobile device with the help of TensorFlow Lite.
However, when I try to convert my model to the TensorFlow Lite format, I get an error saying that there are unsupported operations (as of TensorFlow v2.3.1):
tf.BoostedTreesBucketize
tf.BoostedTreesEnsembleResourceHandleOp
tf.BoostedTreesPredict
tf.BoostedTreesQuantileStreamResourceGetBucketBoundaries
tf.BoostedTreesQuantileStreamResourceHandleOp
Adding the tf.lite.OpsSet.SELECT_TF_OPS option helps a bit, but some operations still need a custom implementation:
tf.BoostedTreesEnsembleResourceHandleOp
tf.BoostedTreesPredict
tf.BoostedTreesQuantileStreamResourceGetBucketBoundaries
tf.BoostedTreesQuantileStreamResourceHandleOp
I've also tried TensorFlow v2.4.0-rc3, which reduces the set to the following:
tf.BoostedTreesEnsembleResourceHandleOp
tf.BoostedTreesPredict
The conversion code looks like the following:
converter = tf.lite.TFLiteConverter.from_saved_model(model_path, signature_keys=['serving_default'])
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS
]
tflite_model = converter.convert()
signature_keys is specified explicitly, because the model exported with BoostedTreesClassifier#export_saved_model has multiple signatures.
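For reference, the available signatures can be listed like this (a minimal sketch; model_path is the SavedModel directory produced by export_saved_model):

import tensorflow as tf

# Print the signature keys of the exported SavedModel to confirm
# which one to pass to the converter.
loaded = tf.saved_model.load(model_path)
print(list(loaded.signatures.keys()))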
Is there a way to deploy this model on mobile other than writing custom implementation for non-supported ops?

Related

TF Yamnet Transfer Learning and Quantization

TLDR:
Short term: trying to quantize a specific portion of a TF model (recreated from a TFLite model). Skip to the pictures below.
Long term: transfer learn on Yamnet and compile for the Edge TPU.
Source code to follow along is here
I've been trying to transfer learn on Yamnet and compile for a Coral Edge TPU for a few weeks now.
I started here, but quickly realized that the model wouldn't quantize and compile for the Edge TPU because of its dynamic input, and that out-of-the-box TFLite quantization doesn't work well with the audio preprocessing that runs before Yamnet's MobileNet.
After tinkering and learning for a few weeks, I found a Yamnet model compiled for the Edge TPU (sadly without source code) and figured my best shot would be to recreate it in TF, quantize it, convert it to TFLite, and then compile it for the Edge TPU. I'll also have to figure out how to set the weights; I'm not sure whether I have to (or can) do that before or after quantization. Anyway, I've effectively recreated the model, but I'm having a hard time quantizing it without a bunch of wacky behavior.
The model currently looks like this:
I want it to look like this:
For quantizing, I tried:
TFLite Model Optimization, which puts tfl.quantize ops all over the place and fails to compile for the Edge TPU.
Quantization Aware Training, which throws some annoying errors that I've been trying to work through (rough sketch of that attempt below).
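The QAT attempt is roughly along these lines (a sketch only; it assumes the recreated Keras model is called recreated_model and that tensorflow-model-optimization is installed):

import tensorflow_model_optimization as tfmot

# Wrap the recreated model with quantization-aware layers, then fine-tune
# it before converting to TFLite.
qat_model = tfmot.quantization.keras.quantize_model(recreated_model)
qat_model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
# qat_model.fit(train_ds, epochs=1)  # then convert with TFLiteConverter as usual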
If you know a better way to achieve the long term goal than what I proposed, please (please please please) share! Otherwise, help on the specific quant ops would be great! Also, reach out if anything needs clarification.
I've run into the same issues trying to convert the TensorFlow Yamnet model to full integer in order to compile it for the Coral Edge TPU, and I think I've found a workaround.
I've been trying to stick to the tutorials posted in the tflite-model-maker section and to find a solution within this API because, from experience, I've found it to be a very powerful tool.
If your goal is to build a model which is fully compiled for the Edge TPU (meaning all layers, including the input and output ones, converted to the int8 type), I'm afraid this solution won't work for you. But since you posted that you're trying to obtain a custom model with the same structure as:
Yamnet model compiled for the Edge TPU
then I think this workaround would help you.
When you train your custom model following the basic tutorial, it is possible to export it both in .tflite format:
model.export(models_path, tflite_filename='my_birds_model.tflite')
and as a full TensorFlow SavedModel:
model.export(models_path, export_format=[mm.ExportFormat.SAVED_MODEL, mm.ExportFormat.LABEL])
Then it is possible to convert the full TensorFlow SavedModel to the TFLite format using the following script:
import tensorflow as tf
import numpy as np
import glob
from scipy.io import wavfile

dataset_path = '/path/to/DATASET/testing/*/*.wav'
saved_model_path = './saved_model'
samples = glob.glob(dataset_path)
input_size = 15600  # Yamnet model's input size

def representative_data_gen():
    for input_value in samples:
        sample_rate, audio_data = wavfile.read(input_value)
        audio_data = np.array(audio_data)
        # Normalization into the [-1, +1] range
        splitted_audio_data = tf.signal.frame(audio_data, input_size, input_size, pad_end=True, pad_value=0) / tf.int16.max
        yield [np.float32(splitted_audio_data[0])]

tf.compat.v1.enable_eager_execution()
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
converter.experimental_new_converter = True  # needed if you're using tensorflow<=2.2
converter.optimizations = [tf.lite.Optimize.DEFAULT]
#converter.inference_input_type = tf.int8  # or tf.uint8
#converter.inference_output_type = tf.int8  # or tf.uint8
converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()
open(saved_model_path + "/converted_model.tflite", "wb").write(tflite_model)
As you can see, the lines which tell the converter to change the input/output type are commented out. This is because the Yamnet model expects as input normalized audio sample values in the range [-1, +1], and the numerical representation must be float32. In fact, the compiled Yamnet model you posted uses the same dtype (float32) for its input and output layers.
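You can verify this on the converted model (a quick sketch using the Python TFLite interpreter; the file name matches the script above):

import tensorflow as tf

# Confirm that the converted model kept float32 input and output tensors.
interpreter = tf.lite.Interpreter(model_path='./saved_model/converted_model.tflite')
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]['dtype'])   # expected: <class 'numpy.float32'>
print(interpreter.get_output_details()[0]['dtype'])  # expected: <class 'numpy.float32'>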
That said, you will end up with a tflite model converted from the full TensorFlow model produced by tflite-model-maker. The conversion script will end with the following line:
fully_quantize: 0, inference_type: 6, input_inference_type: 0, output_inference_type: 0
and inference_type: 6 tells you that the inference operations are suitable for being compiled for the Coral Edge TPU.
The last step is to compile the model. If you compile the model with the standard edgetpu_compiler command line:
edgetpu_compiler -s converted_model.tflite
the final model would have only 4 operations which run on the EdgeTPU:
Number of operations that will run on Edge TPU: 4
Number of operations that will run on CPU: 53
You have to add the optional flag -a, which enables multiple subgraphs (it is still experimental, though):
edgetpu_compiler -sa converted_model.tflite
After this you will have:
Number of operations that will run on Edge TPU: 44
Number of operations that will run on CPU: 13
And most of the model operations will be mapped to the Edge TPU, namely:
Operator           Count  Status
MUL                1      Mapped to Edge TPU
DEQUANTIZE         4      Operation is working on an unsupported data type
SOFTMAX            1      Mapped to Edge TPU
GATHER             2      Operation not supported
COMPLEX_ABS        1      Operation is working on an unsupported data type
FULLY_CONNECTED    3      Mapped to Edge TPU
LOG                1      Operation is working on an unsupported data type
CONV_2D            14     Mapped to Edge TPU
RFFT2D             1      Operation is working on an unsupported data type
LOGISTIC           1      Mapped to Edge TPU
QUANTIZE           3      Operation is otherwise supported, but not mapped due to some unspecified limitation
DEPTHWISE_CONV_2D  13     Mapped to Edge TPU
MEAN               1      Mapped to Edge TPU
STRIDED_SLICE      2      Mapped to Edge TPU
PAD                2      Mapped to Edge TPU
RESHAPE            1      Operation is working on an unsupported data type
RESHAPE            6      Mapped to Edge TPU

Converting tf.keras model to TFLite: Model is slow and doesn't work with XNN Pack

Until recently I had been training a model using TF 1.15 based on MobileNetV2.
After training I had always been able to run these commands to generate a TFLite version:
tf.keras.backend.set_learning_phase(0)
converter = tf.lite.TFLiteConverter.from_keras_model_file(tf_keras_path)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.lite.constants.FLOAT16]
tflite_model = converter.convert()
The resulting model was fast enough for our needs and when our Android developer used XNN Pack, we got an extra 30% reduction in inference time.
More recently I've developed a replacement model using TF2.4.1, based on the built-in keras implementation of efficientnet-b2.
This new model has a larger input image size ((260, 260) vs. (224, 224)) and its Keras inference time is about 1.5x that of the older model.
However, when I convert to TFLite using these commands:
converter = tf.lite.TFLiteConverter.from_keras_model(newest_v3)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()
there are a number of problems:
inference time is 5x slower than the old model
on Android our developer sees this error: "Failed to apply XNNPACK delegate: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors."
When I run the conversion I can see a message that says: "function_optimizer: function_optimizer did nothing. time = 9.854ms."
I have also tried saving as SavedModel and converting the saved model.
Another attempt I made was to use the command line tool with these arguments (and as far as I recall, pretty much every permutation of arguments possible):
tflite_convert --saved_model_dir newest_v3/ \
--enable_v1_converter \
--experimental_new_converter True \
--input-shape=1,260,260,3 \
--input-array=input_1:0 \
--post_training_quantize \
--quantize_to_float16 \
--output_file newest_v3d.tflite \
--allow-custom-ops
If anyone can shed some light onto what's going on here I'd be very grateful.
TensorFlow Lite does currently support tensors with dynamic shapes (enabled by default, and explicitly by the "experimental_new_converter True" option in your conversion), but the issue below points out that XNNPack does not:
https://github.com/tensorflow/tensorflow/issues/42491
Since XNNPack is not able to optimize the graph of the EfficientNet model, you are not getting the performance boost, which makes inference 5 times slower than before instead of just around 1.5 times.
Personally, I would recommend moving to EfficientNet-lite, as it's the mobile/TPU counterpart of EfficientNet and was designed with the restricted set of operations available in TensorFlow Lite in mind:
https://blog.tensorflow.org/2020/03/higher-accuracy-on-vision-models-with-efficientnet-lite.html
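If you do want to keep EfficientNet, one thing worth trying (a rough sketch, not something I have verified on this model) is converting from a concrete function with a fixed input shape, so the resulting graph contains only static-sized tensors, which is what XNNPack requires; newest_v3 and the (1, 260, 260, 3) shape are taken from the question:

import tensorflow as tf

# Pin the input shape before conversion so the TFLite graph has no
# dynamic-sized tensors.
concrete_fn = tf.function(lambda x: newest_v3(x)).get_concrete_function(
    tf.TensorSpec([1, 260, 260, 3], tf.float32))
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_fn])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()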

Custom object detection model to TensorFlow Lite, shape of model input

I need to export a custom object detection model, fine-tuned on a custom dataset, to TensorFlow Lite, so that it can run on Android devices.
I'm using TensorFlow 2.4.1 on Ubuntu 18.04, and so far this is what I did:
1. I fine-tuned an 'ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8' model, using a dataset of new images. I used the 'model_main_tf2.py' script from the repository;
2. I exported the model using 'exporter_main_v2.py':
python exporter_main_v2.py --input_type image_tensor --pipeline_config_path .\models\custom_model\pipeline.config --trained_checkpoint_dir .\models\custom_model\ --output_directory .\exported-models\custom_model
which produced a Saved Model (.pb file);
3. I tested the exported model for inference, and everything works fine. In the detection routine, I used:
def get_model_detection_function(model):
    """Get a tf.function for detection."""

    @tf.function
    def detect_fn(image):
        """Detect objects in image."""
        image, shapes = model.preprocess(image)
        prediction_dict = model.predict(image, shapes)
        detections = model.postprocess(prediction_dict, shapes)
        return detections, prediction_dict, tf.reshape(shapes, [-1])

    return detect_fn
and the shape of the produced image object is 640x640, as expected.
Then, I tried to convert this .pb model to tflite.
After updating to the nightly version of tensorflow (with the normal version, I got an error), I was actually able to produce a .tflite file by using this code:
import tensorflow as tf
from tflite_support import metadata as _metadata

saved_model_dir = 'exported-models/custom_model/'

## Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

# Save the model.
with open('tflite/custom_model.tflite', 'wb') as f:
    f.write(tflite_model)
I tried to use this model in AndroidStudio, following the instructions given here.
However, I'm getting a couple of errors:
something regarding 'Not a valid TensorFlow Lite model' (I have to check this more carefully);
the error:
java.lang.IllegalArgumentException: Cannot copy to a TensorFlowLite tensor (serving_default_input_tensor:0) with 3 bytes from a Java Buffer with 270000 bytes.
The second error seems to indicate there's something weird with the input expected from the tflite model.
I examined the file with Netron, and this is what I got:
the input is expected to have...1x1x1x3 shape, or am I misinterpreting the graph?
Should I somehow set the tensor input size when using the tflite exporter?
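For reference, here is a minimal sketch of how the input can be inspected (and tentatively resized) with the Python TFLite interpreter; I'm not sure the resize is the proper fix, it's just a way to check whether the 1x1x1x3 shape is a dynamic placeholder:

import tensorflow as tf

# Load the converted model and print the reported input shape.
interpreter = tf.lite.Interpreter(model_path='tflite/custom_model.tflite')
print(interpreter.get_input_details()[0]['shape'])  # shows the 1x1x1x3 shape

# Tentatively resize the input to a 640x640 RGB image before allocating.
interpreter.resize_tensor_input(
    interpreter.get_input_details()[0]['index'], [1, 640, 640, 3])
interpreter.allocate_tensors()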
Anyway, what is the right way to export my custom model so that it can run on Android?
TF ops are supported via the Flex delegate. I bet that is the problem. If you want to check whether that is the case, you can do the following:
1. Download the benchmark app with Flex delegate support for TF ops. You can find it in the Native benchmark binary section here: https://www.tensorflow.org/lite/performance/measurement. For example, for Android it is https://storage.googleapis.com/tensorflow-nightly-public/prod/tensorflow/release/lite/tools/nightly/latest/android_aarch64_benchmark_model_plus_flex
2. Connect your phone to your computer and, from the folder where you downloaded the apk, run adb push <apk_name> /data/local/tmp
3. Push your model: adb push <tflite_model> /data/local/tmp
4. Open a shell with adb shell and go to the folder with cd /data/local/tmp. Then run the app with ./<apk_name> --graph=<tflite_model>
Info from:
https://www.tensorflow.org/lite/guide/ops_select
https://www.tensorflow.org/lite/performance/measurement

Unable to properly convert tf.keras model to quantized format for coral TPU

I'm trying to convert a tf.keras model based on MobileNetV2 with transpose convolution using the latest tf-nightly. Here is the conversion code:
#saved_model_dir = '/content/ksaved'  # tried from saved model also
#converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter = tf.lite.TFLiteConverter.from_keras_model(reshape_model)
converter.experimental_new_converter = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.representative_dataset = representative_dataset_gen
tflite_quant_model = converter.convert()
open("/content/model_quant.tflite", "wb").write(tflite_quant_model)
The conversion was successful (in Google Colab), but it has quantize and dequantize operators at the ends (as seen using Netron). All operators seem to be supported. The representative dataset images are float32 in the generator, and the model has a 4-channel float32 input by default. It looks like we need a uint8 input and output inside the model for the Coral TPU. How can we properly carry out this conversion?
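For context, the representative dataset generator referenced above is along these lines (a hypothetical sketch: the shape and the random data are placeholders, the real generator should yield preprocessed float32 samples matching the model input):

import numpy as np

def representative_dataset_gen():
    # Yield ~100 calibration samples shaped like the model input
    # (a 4-channel float32 input is assumed here, as described above).
    for _ in range(100):
        sample = np.random.rand(1, 224, 224, 4).astype(np.float32)
        yield [sample]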
Refs:
Full integer quantization of weights and activations
How to quantize inputs and outputs of optimized tflite model
Coral Edge TPU Compiler cannot convert tflite model: Model not quantized
I tried 'tf.compat.v1.lite.TFLiteConverter.from_keras_model_file' instead of the v2 version. I got the error "Quantization not yet supported for op: TRANSPOSE_CONV" while trying to quantize the model in the latest TF 1.15 (using a representative dataset), and "Internal compiler error. Aborting!" from the Coral TPU compiler when using the TF 2.0 quantized tflite.
Tflite model: https://github.com/tensorflow/tensorflow/issues/31368
It seems to work until the last convolutional block (1x7x7x160).
The compiler error (Aborting) does not give any information regarding the potential cause, and all types of convolutional layers seem to be supported as per the Coral docs.
Coral doc: https://coral.ai/docs/edgetpu/models-intro/#quantization
Here is a dummy example of quantizing a Keras model. Notice I'm using strictly TF 1.15 for the example, because TF 2.0 deprecated:
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
with the from_keras_model API. I think the most confusing thing about this is that you can still set them, but nothing happens, which means the model will still take float inputs. I notice that you are using TF 2.0, because from_keras_model is a TF 2.0 API. Coral still suggests using TF 1.15 for converting models for now. I suggest downgrading TensorFlow, or maybe even just using this (while keeping TF 2.0; it may or may not work):
tf.compat.v1.lite.TFLiteConverter.from_keras_model_file
More on it here.
I always make sure not to use the experimental converter:
converter.experimental_new_converter = False
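Putting the pieces together, the TF 1.15 path I'm suggesting looks roughly like this (a sketch only, reusing the representative_dataset_gen and uint8 settings from the question; I haven't run it on this exact model):

import tensorflow as tf  # tensorflow==1.15

converter = tf.lite.TFLiteConverter.from_keras_model_file('model.h5')
converter.experimental_new_converter = False
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.representative_dataset = representative_dataset_gen
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_quant_model = converter.convert()
open('model_quant_uint8.tflite', 'wb').write(tflite_quant_model)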

"Unkown (custom) loss function" when using tflite_convert on a {TF 2.0.0-beta1 ; Keras} model

Summary
My question is composed of:
A context in which I present my project, my working environment and my workflow
The detailed problem
The concerned parts of my code
The solutions I tried to solve my problem
A reminder of the question
Context
I've written a Python Keras implementation of a downgraded version of the original Super-Resolution GAN. Now I want to test it using Google Firebase Machine Learning Kit, by hosting it on Google's servers. That's why I have to convert my Keras program to a TensorFlow Lite one.
Environment and workflow (with the problem)
I'm training my program in the Google Colab working environment: there, I've installed TF 2.0.0-beta1 (this choice is motivated by this incorrect answer: https://datascience.stackexchange.com/a/57408/78409).
Workflow (and problem):
I write my Python Keras program locally, keeping in mind that it will run on TF 2. So I use TF 2 imports, for example: from tensorflow.keras.optimizers import Adam and also from tensorflow.keras.layers import Conv2D, BatchNormalization
I send my code to my Drive
I run my Google Colab notebook without any problem: TF 2 is used.
I get the output model in my Drive, and I download it.
I try to convert this model to the TFLite format by executing the following CLI command: tflite_convert --output_file=srgan.tflite --keras_model_file=srgan.h5. Here the problem appears.
The problem
Instead of outputting the converted TF Lite model from the TF (Keras) model, the previous CLI command outputs this error:
ValueError: Unknown loss function:build_vgg19_loss_network
The function build_vgg19_loss_network is a custom loss function that I've implemented and that must be used by the GAN.
Parts of code that raise this problem
Presenting the custom loss function
The custom loss function is implemented like that:
def build_vgg19_loss_network(ground_truth_image, predicted_image):
    loss_model = Vgg19Loss.define_loss_model(high_resolution_shape)
    return mean(square(loss_model(ground_truth_image) - loss_model(predicted_image)))
Compiling the generator network with my custom loss function
generator_model.compile(optimizer=the_optimizer, loss=build_vgg19_loss_network)
What I've tried to do in order to solve the problem
As I read on StackOverflow (link at the beginning of this question), TF 2 was thought to be sufficient to output a Keras model that would be correctly processed by my tflite_convert CLI. But it obviously isn't.
As I read on GitHub, I tried to manually register my custom loss function among Keras' loss functions by adding these lines: import tensorflow.keras.losses and
tensorflow.keras.losses.build_vgg19_loss_network = build_vgg19_loss_network. It didn't work.
I read on GitHub that I could use custom objects with the load_model Keras function, but I only want to use the compile Keras function, not load_model.
My final question
I want to make only minor changes to my code, since it works fine. So I don't want, for example, to replace compile with load_model. With this constraint, could you please help me make the tflite_convert CLI work with my custom loss function?
Since the TFLite conversion is failing due to a custom loss function, you can save the model file without keeping the optimizer details. To do that, set the include_optimizer parameter to False as shown below:
model.save('model.h5', include_optimizer=False)
Now, if all the layers inside your model are convertible, they should get converted into a TFLite file.
Edit:
You can then convert the h5 file like this:
import tensorflow as tf
model = tf.keras.models.load_model('model.h5') # srgan.h5 for you
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
Usual practice to overcome the unsupported operators in TFLite conversion is documented here.
I had the same error. I recommend changing the loss to "mse" since you already have a well-trained model and you don't need to train with the .tflite file.
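A minimal sketch of that suggestion, combined with the include_optimizer tip above (generator_model and the_optimizer are the names from the question):

# Swap the custom VGG19 loss for a built-in one before saving, so the
# converter no longer needs to resolve build_vgg19_loss_network.
generator_model.compile(optimizer=the_optimizer, loss='mse')
generator_model.save('srgan.h5', include_optimizer=False)
# Then convert as before:
# tflite_convert --output_file=srgan.tflite --keras_model_file=srgan.h5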