TensorFlow frozen_graph to tflite for Android app

I am new to the TensorFlow Object Detection API. I have a specific dataset that I produced myself and labeled with thousands of JPG images, and I ran the training to detect objects in these images. At the end of the process I obtained a frozen graph, and from it I exported the model.ckpt files to the inference graph folder. Everything went fine, and I tested the model in the object_detection.ipynb notebook, where it works correctly. Up to this step there is no problem.
However, I cannot figure out how to convert that model.ckpt file into a model.tflite file that I can use in an Android Studio app.
I have seen many examples along the lines of
input_tensors = [...]
output_tensors = [...]
but I have no idea what the input and output tensors are actually supposed to be.
Could you show me how to convert it?

Use TensorBoard to find out your input and output layer names. For reference, follow these links:
https://heartbeat.fritz.ai/intro-to-machine-learning-on-android-how-to-convert-a-custom-model-to-tensorflow-lite-e07d2d9d50e3
Tensorflow Convert pb file to TFLITE using python

If you don't know your inputs and outputs, use summarize_graph tool and feed it your frozen model.
See command here
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms#inspecting-graphs
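If building the summarize_graph tool is inconvenient, a rough alternative sketch is to load the frozen graph in Python and print its node names (the file name below is a placeholder, not from the original answer):

import tensorflow as tf

# Placeholder file name: point this at your frozen graph.
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# The first and last nodes are usually good candidates for the
# input_arrays / output_arrays expected by the TFLite converter.
for node in graph_def.node:
    print(node.name, node.op)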

If you have trained your model from scratch, you should have the .meta file. You also need to specify the output node names, which are required to create a .pb file. Please refer to the link below for the steps to create this file:
Tensorflow: How to convert .meta, .data and .index model files into one graph.pb file
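For reference, freezing a checkpoint into a .pb typically looks like the sketch below (TF 1.x API; the checkpoint prefix and output node name are placeholders you must replace with your own):

import tensorflow as tf

# Placeholders: your checkpoint prefix and the output node name(s) of your graph.
checkpoint_prefix = "model.ckpt"
output_node_names = ["model_outputs"]

with tf.Session() as sess:
    # Rebuild the graph from the .meta file and restore the weights.
    saver = tf.train.import_meta_graph(checkpoint_prefix + ".meta")
    saver.restore(sess, checkpoint_prefix)
    # Bake the variables into constants so the graph is self-contained.
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names)

with tf.gfile.GFile("model.pb", "wb") as f:
    f.write(frozen_graph_def.SerializeToString())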
Once this is created, you can further convert your .pb to .tflite as below:
import tensorflow as tf

graph_def_file = "model.pb"
input_arrays = ["model_inputs"]
output_arrays = ["model_outputs"]

# In TensorFlow 2.x use tf.compat.v1.lite.TFLiteConverter.from_frozen_graph instead.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

Related

Converting TFRS to the tflite model

I am new to TensorFlow and I need to convert a TFRS (TensorFlow Recommenders) model to a TFLite model. Does anyone have any idea or experience related to this topic?
I simply ran the retrieval tutorial code in Colab,
and added the recommended method to convert the final model to TFLite, which was:
converter = tf.lite.TFLiteConverter.from_saved_model(path) # path to the SavedModel directory
tflite_model = converter.convert()
The conversion then fails with an error (the error screenshot from the original post is not reproduced here).
Try installing the latest tf-nightly pip package and then converting this model using the new MLIR converter. You can enable the new converter by setting:
converter.experimental_new_converter = True
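Putting it together with the snippet above, the conversion would look roughly like this (a sketch; the SavedModel path is a placeholder):

import tensorflow as tf

# Placeholder: directory where the retrieval model was saved with tf.saved_model.save().
saved_model_path = "retrieval_model/"

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
converter.experimental_new_converter = True   # use the new MLIR-based converter
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)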

Custom object detection model to TensorFlow Lite, shape of model input

I need to export a custom object detection model, fine-tuned on a custom dataset, to TensorFlow Lite, so that it can run on Android devices.
I'm using TensorFlow 2.4.1 on Ubuntu 18.04, and so far this is what I did:
1. I fine-tuned an 'ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8' model using a dataset of new images, with the 'model_main_tf2.py' script from the repository;
2. I exported the model using 'exporter_main_v2.py':
python exporter_main_v2.py --input_type image_tensor --pipeline_config_path .\models\custom_model\pipeline.config --trained_checkpoint_dir .\models\custom_model\ --output_directory .\exported-models\custom_model
which produced a SavedModel (.pb file);
3. I tested the exported model for inference, and everything works fine. In the detection routine, I used:
def get_model_detection_function(model):
    """Get a tf.function for detection."""

    @tf.function
    def detect_fn(image):
        """Detect objects in image."""
        image, shapes = model.preprocess(image)
        prediction_dict = model.predict(image, shapes)
        detections = model.postprocess(prediction_dict, shapes)
        return detections, prediction_dict, tf.reshape(shapes, [-1])

    return detect_fn
and the shape of the produced image object is 640x640, as expected.
Then, I tried to convert this .pb model to tflite.
After updating to the nightly version of tensorflow (with the normal version, I got an error), I was actually able to produce a .tflite file by using this code:
import tensorflow as tf
from tflite_support import metadata as _metadata

saved_model_dir = 'exported-models/custom_model/'

# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

# Save the model.
with open('tflite/custom_model.tflite', 'wb') as f:
    f.write(tflite_model)
I tried to use this model in Android Studio, following the instructions given here.
However, I'm getting a couple of errors:
- something regarding 'Not a valid TensorFlow Lite model' (I still have to look into this);
- the error:
java.lang.IllegalArgumentException: Cannot copy to a TensorFlowLite tensor (serving_default_input_tensor:0) with 3 bytes from a Java Buffer with 270000 bytes.
The second error seems to indicate there's something wrong with the input expected by the tflite model.
I examined the file with Netron, and from the graph it looks like the input is expected to have a 1x1x1x3 shape. Or am I misinterpreting the graph?
Should I somehow set the tensor input size when using the tflite exporter?
Anyway, what is the right way to export my custom model so that it can run on Android?
TF ops are supported via the Flex delegate. I bet that is the problem. If you want to check whether that is the case, you can do the following:
Download the benchmark app with Flex delegate support for TF ops. You can find it in the 'Native benchmark binary' section here: https://www.tensorflow.org/lite/performance/measurement. For example, for Android it is https://storage.googleapis.com/tensorflow-nightly-public/prod/tensorflow/release/lite/tools/nightly/latest/android_aarch64_benchmark_model_plus_flex
Connect your phone to your computer and, from the directory where you downloaded the apk, run adb push <apk_name> /data/local/tmp
Push your model: adb push <tflite_model> /data/local/tmp
Open a shell with adb shell and go to the folder with cd /data/local/tmp. Then run the app with ./<apk_name> --graph=<tflite_model>
Info from:
https://www.tensorflow.org/lite/guide/ops_select
https://www.tensorflow.org/lite/performance/measurement
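As a complementary check (a rough sketch, not part of the original answer): the full TensorFlow pip package bundles the Flex delegate for select TF ops, so you can also load the converted file with the Python interpreter and inspect the input it expects:

import tensorflow as tf

# Load the converted model (path taken from the conversion snippet above).
interpreter = tf.lite.Interpreter(model_path="tflite/custom_model.tflite")
interpreter.allocate_tensors()

# Print the input tensors the model expects (name, shape, dtype).
for detail in interpreter.get_input_details():
    print(detail["name"], detail["shape"], detail["dtype"])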

How to convert from Tensorflow.js (.json) model into Tensorflow (SavedModel) or Tensorflow Lite (.tflite) model?

I have downloaded a pre-trained PoseNet model for TensorFlow.js (tfjs) from Google, so it's a JSON file.
However, I want to use it on Android, so I need the .tflite model. Although someone has 'ported' a similar model from tfjs to tflite here, I have no idea what model (there are many variants of PoseNet) they converted. I want to do the steps myself. Also, I don't want to run some arbitrary code someone uploaded into a file on Stack Overflow:
Caution: Be careful with untrusted code—TensorFlow models are code. See Using TensorFlow Securely for details. Tensorflow docs
Does anyone know any convenient ways to do this?
You can find out which tfjs format you have by looking in the json file. It often says "graph-model". The difference between the formats is described here.
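For example, a quick way to check (a minimal sketch; it assumes the downloaded model.json has the usual top-level "format" field written by the tfjs converter):

import json

# Prints "graph-model" or "layers-model".
with open("model.json") as f:
    print(json.load(f).get("format"))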
From tfjs graph model to SavedModel (more common)
Use tfjs-to-tf by Patrick Levin.
import tensorflow as tf
import tfjs_graph_converter.api as tfjs

tfjs.graph_model_to_saved_model(
    "savedmodel/posenet/mobilenet/float/050/model-stride16.json",
    "realsavedmodel"
)

# Code below taken from https://www.tensorflow.org/lite/convert/python_api
converter = tf.lite.TFLiteConverter.from_saved_model("realsavedmodel")
tflite_model = converter.convert()

# Save the TF Lite model.
with tf.io.gfile.GFile('model.tflite', 'wb') as f:
    f.write(tflite_model)
From tfjs layers model to SavedModel
Note: This will only work for layers model format, not graph model format as in the question. I've written the difference between them here.
Install tensorflowjs and use tensorflowjs_converter to convert the .json file into a Keras HDF5 file (from another SO thread).
On Mac, you'll face issues running pyenv (fix), and on Z-shell, pyenv won't load correctly (fix). Also, once pyenv is running, use python -m pip install tensorflowjs instead of pip install tensorflowjs, because pyenv did not change the python used by pip for me.
Once you've followed the tensorflowjs_converter guide, run tensorflowjs_converter to verify it works without errors; it should just warn you about a missing input_path argument. Then:
tensorflowjs_converter --input_format=tfjs_layers_model --output_format=keras tfjs_model.json hdf5_keras_model.hdf5
Convert the Keras HDF5 file into a SavedModel (standard TensorFlow model format) or directly into a .tflite file using the TFLiteConverter. The following runs in a Python file:
import tensorflow as tf

# Convert the model.
model = tf.keras.models.load_model('hdf5_keras_model.hdf5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the TF Lite model.
with tf.io.gfile.GFile('model.tflite', 'wb') as f:
    f.write(tflite_model)
or to save to a SavedModel:
# Convert the model.
model = tf.keras.models.load_model('hdf5_keras_model.hdf5')
# 'filepath' is the directory where the SavedModel will be written.
tf.keras.models.save_model(
    model, filepath, overwrite=True, include_optimizer=True, save_format=None,
    signatures=None, options=None
)

How can I view weights in a .tflite file?

I got the pre-trained .pb file of MobileNet and found it is not quantized, while the fully quantized model should be converted into .tflite format. Since I'm not familiar with tools for mobile app development, how can I get the fully quantized weights of MobileNet from a .tflite file? More precisely, how can I extract the quantized parameters and view their numerical values?
The Netron model viewer has a nice view and export of data, as well as a nice network diagram view.
https://github.com/lutzroeder/netron
I'm also in the process of studying how TFLite works. What I found may not be the best approach and I would appreciate any expert opinions. Here's what I found so far using the flatbuffers Python API.
First you'll need to compile the schema with flatc (the FlatBuffers compiler). The output will be a folder called tflite.
flatc --python tensorflow/contrib/lite/schema/schema.fbs
Then you can load the model and get the tensor you want. Tensor has a method called Buffer() which is, according to the schema,
An index that refers to the buffers table at the root of the model.
So it points you to the location of the data.
from tflite import Model
buf = open('/path/to/model.tflite', 'rb').read()
model = Model.Model.GetRootAsModel(buf, 0)
subgraph = model.Subgraphs(0)
# Check tensor.Name() to find the tensor_idx you want
tensor = subgraph.Tensors(tensor_idx)
buffer_idx = tensor.Buffer()
buffer = model.Buffers(buffer_idx)
After that you'll be able to read the data by calling buffer.Data()
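To turn those bytes into actual numbers, a rough follow-up sketch (it assumes the generated bindings expose the usual DataAsNumpy()/ShapeAsNumpy() helpers and that this tensor stores float32 data; check tensor.Type() against the schema's TensorType enum):

import numpy as np

# Reinterpret the raw buffer bytes as float32 and restore the tensor's shape.
# float32 is an assumption here; other tensors may be uint8/int8 etc.
raw_bytes = buffer.DataAsNumpy().tobytes()
weights = np.frombuffer(raw_bytes, dtype=np.float32).reshape(tensor.ShapeAsNumpy())
print(weights)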
Reference:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/schema/schema.fbs
https://github.com/google/flatbuffers/tree/master/samples
Using TensorFlow 2.0, you can extract the weights and some information regarding the tensor (shape, dtype, name, quantization) with the following script, inspired by the TensorFlow documentation:
import tensorflow as tf
import h5py

# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="v3-large_224_1.0_uint8.tflite")
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Get details for each layer.
all_layers_details = interpreter.get_tensor_details()

f = h5py.File("mobilenet_v3_weights_infos.hdf5", "w")

for layer in all_layers_details:
    # create a group in the hdf5 file
    grp = f.create_group(str(layer['index']))

    # store the layer's metadata in the group's metadata
    grp.attrs["name"] = layer['name']
    grp.attrs["shape"] = layer['shape']
    # grp.attrs["dtype"] = all_layers_details[i]['dtype']
    grp.attrs["quantization"] = layer['quantization']

    # store the weights in a dataset
    grp.create_dataset("weights", data=interpreter.get_tensor(layer['index']))

f.close()
You can view it using the Netron app:
macOS: Download the .dmg file or run brew install netron
Linux: Download the .AppImage file or run snap install netron
Windows: Download the .exe installer or run winget install netron
Browser: Start the browser version.
Python Server: Run pip install netron and netron [FILE] or netron.start('[FILE]').

How can I use the Tensorflow .pb file?

I have a TensorFlow file AlexNet.pb. I am trying to load it and then classify an image that I have, but I can't find a way to do so.
No one seems to have a simple example of loading and running a .pb file.
It depends on how the protobuf file has been created.
If the .pb file is the result of:
# Create a builder to export the model
builder = tf.saved_model.builder.SavedModelBuilder("export")
# Tag the model in order to be capable of restoring it specifying the tag set
builder.add_meta_graph_and_variables(sess, ["tag"])
builder.save()
You have to know how that model has been tagged and use the tf.saved_model.loader.load method to load the saved graph into the current (empty) graph.
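For example (a minimal sketch using the "tag" from the snippet above, TF 1.x API):

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # Load the graph and variables saved under the given tag set
    # from the "export" directory used by the SavedModelBuilder above.
    tf.saved_model.loader.load(sess, ["tag"], "export")
    # The restored graph is now available as sess.graph.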
If the model instead has been frozen you have to load the binary file in memory manually:
with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.get_default_graph()
tf.import_graph_def(graph_def, name="prefix")
In both cases, you have to know the name of the input tensor and the name of the node you want to execute:
If, for example, your input tensor is a placeholder named batch and the tensor you want to evaluate is dense/BiasAdd:0, you have to:
batch = graph.get_tensor_by_name('batch:0')
prediction = graph.get_tensor_by_name('dense/BiasAdd:0')

# Run the output tensor in a session, feeding your input batch.
with tf.Session(graph=graph) as sess:
    values = sess.run(prediction, feed_dict={
        batch: your_input_batch,
    })
You can use OpenCV to load .pb models, e.g.:
net = cv2.dnn.readNet("model.pb")
Make sure you are using a suitable version of OpenCV: OpenCV 3.4.2 or OpenCV 4.
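A minimal usage sketch (the input size, scale and class handling are assumptions that depend on how AlexNet.pb was exported; cv2.dnn.readNetFromTensorflow is the TensorFlow-specific loader):

import cv2

# Load the frozen TensorFlow graph.
net = cv2.dnn.readNetFromTensorflow("AlexNet.pb")

# Preprocess the image; 227x227 is the classic AlexNet input size (assumption).
img = cv2.imread("image.jpg")
blob = cv2.dnn.blobFromImage(img, scalefactor=1.0, size=(227, 227), swapRB=True)

net.setInput(blob)
scores = net.forward()       # raw class scores
print(scores.argmax())       # index of the highest-scoring class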