Fetching TFLite Version information from TFLite model - tensorflow

I have a TFLite Model.
How do I fetch the version of TFLite used to create the model?
During automation I fetch TFLite models and run inference over them. I am currently using the TFLite 2.4.1 library. Models created with a newer version than this, which may contain unsupported operations, need to error out.
What is the best way of handling this?
How can I get the TFLite version from the model?

The "min_runtime_version" model metadata in the TFLite mode file contains the information that describes the minimal runtime version that is capable of running the given model.
The above value in the TFLite flatbuffer schema can be read by the existing C++ and Python schema libraries. For example,
from tensorflow.lite.python import schema_py_generated as schema_fb

# Read the model file into a buffer.
with open("model.tflite", "rb") as f:
    model_buf = f.read()

tflite_model = schema_fb.Model.GetRootAsModel(model_buf, 0)

# Gets metadata from the model file.
for i in range(tflite_model.MetadataLength()):
    meta = tflite_model.Metadata(i)
    if meta.Name().decode("utf-8") == "min_runtime_version":
        buffer_index = meta.Buffer()
        metadata = tflite_model.Buffers(buffer_index)
        min_runtime_version_bytes = metadata.DataAsNumpy().tobytes()
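The bytes hold a null-padded version string such as "1.14.0". As a minimal sketch of how automation could reject models that need a newer runtime than the installed one (assuming the loop above found the metadata, and assuming plain "major.minor.patch" version strings):

import tensorflow as tf

min_runtime_version = min_runtime_version_bytes.decode("utf-8").rstrip("\x00")
print("Model requires TFLite runtime >=", min_runtime_version)

# Naive numeric comparison of "major.minor.patch" strings (assumption: no rc/dev suffixes).
def version_tuple(v):
    return tuple(int(x) for x in v.split("."))

if version_tuple(min_runtime_version) > version_tuple(tf.__version__):
    raise RuntimeError(
        "Model needs TFLite runtime %s but only %s is installed"
        % (min_runtime_version, tf.__version__))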
References:
Model metadata table in TFLite flatbuffer schema

Related

Missing some boosted trees operations in Tensorflow Lite

I have a TensorFlow model based on BoostedTreesClassifier and I want to deploy it on a mobile device with the help of TensorFlow Lite.
However, when I try to convert my model to the TensorFlow Lite model I get an error saying that there are unsupported operations (as of TensorFlow v2.3.1):
tf.BoostedTreesBucketize
tf.BoostedTreesEnsembleResourceHandleOp
tf.BoostedTreesPredict
tf.BoostedTreesQuantileStreamResourceGetBucketBoundaries
tf.BoostedTreesQuantileStreamResourceHandleOp
Adding the tf.lite.OpsSet.SELECT_TF_OPS option helps a bit, but some operations still need a custom implementation:
tf.BoostedTreesEnsembleResourceHandleOp
tf.BoostedTreesPredict
tf.BoostedTreesQuantileStreamResourceGetBucketBoundaries
tf.BoostedTreesQuantileStreamResourceHandleOp
I've also tried TensorFlow v2.4.0-rc3, which reduces the set to the following:
tf.BoostedTreesEnsembleResourceHandleOp
tf.BoostedTreesPredict
The conversion code is as follows:
converter = tf.lite.TFLiteConverter.from_saved_model(model_path, signature_keys=['serving_default'])
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS
]
tflite_model = converter.convert()
signature_keys is specified explicitly, because the model exported with BoostedTreesClassifier#export_saved_model has multiple signatures.
Is there a way to deploy this model on mobile other than writing custom implementation for non-supported ops?

How to Create model.graphdef and config.pbtxt of a trained model in tensorflow?

In order to serve a model in triton Inference server, the model repository should look like below
<model-repository-path>/
  <model-name>/
    config.pbtxt
    1/
      model.graphdef
But I am stuck at creating the above repository.
After you have trained a model in TensorFlow:
1) How do you create model.graphdef?
2) How do you create config.pbtxt?
'model.graphdef' is your trained TensorFlow model; it has the extension .pb. When you train a model in TensorFlow, the graph and weights are saved as a .pb file. You can add that file to the version folder above as model.graphdef.
In this case you do not need to create a config.pbtxt file, because Triton Inference Server can automatically generate the configuration from TensorFlow, TensorRT and ONNX models. You can simply start the server with the --strict-model-config=false option and it will generate the config.pbtxt file for you.
If you do wish to create your own config.pbtxt, you can do so as well. The details are available in the official Triton documentation.
The auto generated config file can be viewed using
curl localhost:8000/v2/models/<model_name>/config | jq
Use the information from this output to create your own config file.
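For reference, a hand-written config.pbtxt for a TensorFlow GraphDef model follows the sketch below; the model name, tensor names, data types and dims are placeholders and must match your own model:

name: "my_model"
platform: "tensorflow_graphdef"
max_batch_size: 8
input [
  {
    name: "input_tensor"
    data_type: TYPE_FP32
    dims: [ 224, 224, 3 ]
  }
]
output [
  {
    name: "output_tensor"
    data_type: TYPE_FP32
    dims: [ 1001 ]
  }
]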

How to generate .tf/.tflite files from python

I am trying to generate a custom TensorFlow model (.tf/.tflite file) which I want to use in my mobile application.
I have gone through a few machine learning and TensorFlow blogs, and from there I started to generate a simple ML model.
https://www.datacamp.com/community/tutorials/tensorflow-tutorial
https://www.edureka.co/blog/tensorflow-object-detection-tutorial/
https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc
https://www.youtube.com/watch?v=ICY4Lvhyobk
All of these are really nice, and they guided me through the steps below:
i) Install all necessary tools (TensorFlow, Python, Jupyter, etc.).
ii) Load the training and testing data.
iii) Run the TensorFlow session to train and evaluate the results.
iv) Take steps to increase the accuracy.
But I am not able to generate the .tf/.tflite files.
I tried the following code, but that generates an empty file.
converter = tf.contrib.lite.TFLiteConverter.from_session(sess,[],[])
model = converter.convert()
file = open( 'model.tflite' , 'wb' )
file.write( model )
I have checked a few answers on Stack Overflow, and according to my understanding, in order to generate the .tf/.tflite files we need to create the .pb files, freeze the .pb file and then generate the .tf/.tflite files.
But how can we achieve this?
TensorFlow provides the TFLite converter to convert a saved model to a TFLite model. For more details, see here.
tf.lite.TFLiteConverter.from_saved_model() (recommended): Converts a SavedModel.
tf.lite.TFLiteConverter.from_keras_model(): Converts a Keras model.
tf.lite.TFLiteConverter.from_concrete_functions(): Converts concrete functions.
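As a minimal sketch of the recommended path (the saved_model_dir path is a placeholder), converting a SavedModel and writing the .tflite file looks like this:

import tensorflow as tf

# Directory that contains the exported SavedModel (placeholder path).
saved_model_dir = "path/to/saved_model"

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()

# Write the serialized flatbuffer to disk.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)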

I don't understand how to switch from Tensorflow to Tensorflow Lite on a project taken from GitHub

I'm trying to create a .tflite model from a CycleGAN taken from GitHub (https://github.com/vanhuyz/CycleGAN-TensorFlow).
I am very new to this field and I do not understand how to convert the .pb model (which I have already created from the checkpoints) into a .tflite model.
I tried tflite_convert but got no result, partly because I do not know which parameters to pass as --input_arrays and --output_arrays.
Any ideas?
I would recommend using the TFLiteConverter Python API here: https://www.tensorflow.org/lite/convert/python_api and using SavedModel as your model input format. Otherwise, you can provide the input and output tensor names of your .pb model as input_arrays and output_arrays.
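If only the frozen .pb file is available, a sketch using the TF 1.x-style converter could look like the following; the file path and the tensor names are placeholders that must match the actual graph (they can be looked up with a viewer such as Netron):

import tensorflow as tf

# Placeholder path and tensor names; replace with the real ones from the graph.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="cyclegan_frozen.pb",
    input_arrays=["input_image"],
    output_arrays=["output_image"])
tflite_model = converter.convert()

with open("cyclegan.tflite", "wb") as f:
    f.write(tflite_model)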

How can I view weights in a .tflite file?

I have the pre-trained .pb file of MobileNet and found that it is not quantized, while the fully quantized model should be converted into the .tflite format. Since I am not familiar with tools for mobile app development, how can I get the fully quantized weights of MobileNet from a .tflite file? More precisely, how can I extract the quantized parameters and view their numerical values?
The Netron model viewer offers nice viewing and export of data, as well as a nice network diagram view.
https://github.com/lutzroeder/netron
I'm also in the process of studying how TFLite works. What I found may not be the best approach, and I would appreciate any expert opinions. Here's what I found so far using the FlatBuffers Python API.
First you'll need to compile the schema with the FlatBuffers compiler. The output will be a folder called tflite.
flatc --python tensorflow/contrib/lite/schema/schema.fbs
Then you can load the model and get the tensor you want. Tensor has a method called Buffer() which is, according to the schema,
An index that refers to the buffers table at the root of the model.
So it points you to the location of the data.
from tflite import Model
buf = open('/path/to/model.tflite', 'rb').read()
model = Model.Model.GetRootAsModel(buf, 0)
subgraph = model.Subgraphs(0)
# Check tensor.Name() to find the tensor_idx you want
tensor = subgraph.Tensors(tensor_idx)
buffer_idx = tensor.Buffer()
buffer = model.Buffers(buffer_idx)
After that you'll be able to read the data by calling buffer.Data()
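As a follow-up sketch (reusing the variables above and assuming the tensor stores float32 data), the raw bytes can be reinterpreted with NumPy and reshaped to the tensor's shape:

import numpy as np

# Raw buffer contents as a flat uint8 array.
raw = buffer.DataAsNumpy()

# Assumption: float32 weights; pick the dtype that matches tensor.Type().
weights = raw.view(np.float32).reshape(tensor.ShapeAsNumpy())
print(weights)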
Reference:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/schema/schema.fbs
https://github.com/google/flatbuffers/tree/master/samples
Using TensorFlow 2.0, you can extract the weights and some information regarding the tensors (shape, dtype, name, quantization) with the following script, inspired by the TensorFlow documentation:
import tensorflow as tf
import h5py
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="v3-large_224_1.0_uint8.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# get details for each layer
all_layers_details = interpreter.get_tensor_details()
f = h5py.File("mobilenet_v3_weights_infos.hdf5", "w")
for layer in all_layers_details:
    # to create a group in an hdf5 file
    grp = f.create_group(str(layer['index']))
    # to store layer's metadata in group's metadata
    grp.attrs["name"] = layer['name']
    grp.attrs["shape"] = layer['shape']
    # grp.attrs["dtype"] = all_layers_details[i]['dtype']
    grp.attrs["quantization"] = layer['quantization']
    # to store the weights in a dataset
    grp.create_dataset("weights", data=interpreter.get_tensor(layer['index']))
f.close()
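To quickly check the dump afterwards, a small sketch reusing the file name above:

import h5py

with h5py.File("mobilenet_v3_weights_infos.hdf5", "r") as f:
    for idx in f:
        grp = f[idx]
        print(idx, grp.attrs["name"], grp["weights"].shape)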
You can also view it using the Netron app:
macOS: Download the .dmg file or run brew install netron
Linux: Download the .AppImage file or run snap install netron
Windows: Download the .exe installer or run winget install netron
Browser: Start the browser version.
Python Server: Run pip install netron and netron [FILE] or netron.start('[FILE]').