tflite_convert command not found ubuntu 20.04 - tensorflow

I have trained my TensorFlow 1.15 ssd_mobilenet_v2_quantized model and want to export it in .tflite format. I have already run export_tflite_ssd_graph.py and generated the saved_model.pb file. The next step would be to run a tflite_convert command like this:
tflite_convert \
--input_file=$OUTPUT_DIR/tflite_graph.pb \
--output_file=$OUTPUT_DIR/detect.tflite \
--input_shapes=1,300,300,3 \
--input_arrays=normalized_input_image_tensor \
--output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
--inference_type=QUANTIZED_UINT8 \
--mean_values=128 \
--std_values=128 \
--change_concat_input_ranges=false \
--allow_custom_ops
This is from this tutorial: https://gilberttanner.com/blog/convert-your-tensorflow-object-detection-model-to-tensorflow-lite/. But I get a "tflite_convert: command not found" error. How do I get the tflite_convert tool?

There are many ways to achieve your goal. One way is to use model.save and then convert the resulting SavedModel, so you can keep using your running code and model. I think the tflite_convert command is just a script somebody built; you can build your own conversion from templates like these.
Sample 1: Converting a SavedModel to a TensorFlow Lite model.
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Model Save
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
import tensorflow as tf

# model: a trained tf.keras model; saved_model_dir: where to export it
model.save(
    saved_model_dir,
    overwrite=True,
    include_optimizer=True,
    save_format=None,
    signatures=None,
    options=None,
    save_traces=True
)
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
with open(saved_model_dir + '\\model.tflite', 'wb') as f:
    f.write(tflite_model)
Sample 2: Converting a Keras model object directly to a TensorFlow Lite model.
tf_lite_model_converter = tf.lite.TFLiteConverter.from_keras_model(
    model
)
tflite_model = tf_lite_model_converter.convert()
with open(checkpoint_dir + '\\model.tflite', 'wb') as f:
    f.write(tflite_model)
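If you are starting from the frozen tflite_graph.pb in the question rather than a SavedModel, the same conversion can also be done from Python without the tflite_convert command-line tool. Here is only a sketch that mirrors the flags in the question (TF 1.15 API; the file paths are placeholders):
import tensorflow as tf  # TensorFlow 1.15, as in the question

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    'tflite_graph.pb',  # placeholder path to the exported graph
    input_arrays=['normalized_input_image_tensor'],
    output_arrays=['TFLite_Detection_PostProcess',
                   'TFLite_Detection_PostProcess:1',
                   'TFLite_Detection_PostProcess:2',
                   'TFLite_Detection_PostProcess:3'],
    input_shapes={'normalized_input_image_tensor': [1, 300, 300, 3]})
converter.inference_type = tf.lite.constants.QUANTIZED_UINT8
converter.quantized_input_stats = {'normalized_input_image_tensor': (128, 128)}  # (mean, std_dev)
converter.change_concat_input_ranges = False
converter.allow_custom_ops = True  # the detection postprocess op is a custom op
tflite_model = converter.convert()
with open('detect.tflite', 'wb') as f:
    f.write(tflite_model)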

Related

How can I convert a saved .pb model to TFLITE?

I have a model trained with TF 1.4 and exported to a frozen inference graph with the file "models/research/object_detection/export_tflite_ssd_graph.py".
How can I convert it to tflite? I'm having a lot of issues.
You can use the command line tool or the Python API.
Python API example:
import tensorflow as tf

# graph_def_file: path to the frozen .pb; input_arrays/output_arrays: lists of tensor names
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
CLI example:
tflite_convert \
--output_file=/tmp/foo.tflite \
--graph_def_file=/tmp/mobilenet_v1_0.50_128/frozen_graph.pb \
--input_arrays=input \
--output_arrays=MobilenetV1/Predictions/Reshape_1
Tensorflow officially recommends using the Python API.

Reshaping tensorflow output tensors

I am training an object detection model with Azure customvision.ai. The model can be exported in TensorFlow formats: saved model .pb, .tf, or .tflite.
The model output type is designated as float32[1,13,13,50]
I then push the .tflite onto a Google Coral Edge device and attempt to run it (previous .tflite models trained with Google Cloud worked, but I'm now bound to corporate Azure and need to use customvision.ai). These are the commands:
$ mdt shell
$ export DEMO_FILES="/usr/lib/python3/dist*/edgetpu/demo"
$ export DISPLAY=:0 && edgetpu_detect \
    --source /dev/video1:YUY2:1280x720:20/1 \
    --model ${DEMO_FILES}/model.tflite
Finally, the model attempts to run, but results in a ValueError
'This model has a {}.'.format(output_tensors_sizes.size)))
ValueError: Detection model should have 4 output tensors! This model has 1.
What is happening here? How do I reshape my tensorflow model to match the device requirements of 4 output tensors?
The model that works
The model that does not work
Edit: this outputs a tflite model, but it still has only one output.
python tflite_convert.py \
--output_file=model.tflite \
--graph_def_file=saved_model.pb \
--saved_model_dir="C:\Users\b0588718\AppData\Roaming\Python\Python37\site-packages\tensorflow\lite\python" \
--inference_type=FLOAT \
--input_shapes=1,416,416,3 \
--input_arrays=Placeholder \
--output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
--mean_values=128 \
--std_dev_values=128 \
--allow_custom_ops \
--change_concat_input_ranges=false \
--allow_nudging_weights_to_use_fast_gemm_kernel=true
You are running an object detection demo where the engine expects 4 outputs from the model, but your model only has one output. Maybe the tflite conversion was incorrect? For instance, if you grabbed the Face SSD model from our zoo, the conversion should look like this:
$ tflite_convert \
--output_file=face_ssd.tflite \
--graph_def_file=tflite_graph.pb \
--inference_type=QUANTIZED_UINT8 \
--input_shapes=1,320,320,3 \
--input_arrays normalized_input_image_tensor \
--output_arrays "TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3" \
--mean_values 128 \
--std_dev_values 128 \
--allow_custom_ops \
--change_concat_input_ranges=false \
--allow_nudging_weights_to_use_fast_gemm_kernel=true
Take a look at a similar query for more details:
https://github.com/google-coral/edgetpu/issues/135#issuecomment-640677917
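If you want to see what the demo actually sees, one option (a short sketch, assuming the .tflite file you push to the board) is to count the output tensors with the TFLite Interpreter; a correctly converted SSD detection model should report 4:
import tensorflow as tf

# 'model.tflite' is a placeholder; point it at the file you push to the Coral board.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
output_details = interpreter.get_output_details()
print(len(output_details))  # the edgetpu_detect demo expects 4 output tensors
for detail in output_details:
    print(detail['name'], detail['shape'])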

Converting Mobilenet segmentation model to tflite

I am a beginner in TensorFlow, so kindly forgive me for this simple question, but I am unable to find the answer anywhere. I have been working on converting the MobileNet segmentation model (http://download.tensorflow.org/models/deeplabv3_mnv2_pascal_trainval_2018_01_29.tar.gz), trained on the Pascal dataset, to TensorFlow Lite for mobile inference for more than a week, but with no success. I am unable to define the input and output format for the converter properly.
import tensorflow as tf
import numpy as np
img = tf.placeholder(name="Image", dtype=tf.float32, shape=(512,512, 3))
out = tf.placeholder(name="Output", dtype=tf.float32, shape=(512,512, 1))
localpb = 'frozen_inference_graph.pb'
tflite_file = 'retrained_graph_eyes1za.lite'
print("{} -> {}".format(localpb, tflite_file))
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    localpb, img, out
)
tflite_model = converter.convert()
open(tflite_file,'wb').write(tflite_model)
But it is throwing lots of errors, such as eager execution errors. Kindly tell me how I should write the code to convert the above MobileNet model to tflite.
Try this in a command prompt or bash shell; you can use either of the following two ways.
For TensorFlow installed from the package manager:
python -m tensorflow.python.tools.optimize_for_inference \
--input=/path/to/frozen_inference_graph.pb \
--output=/path/to/frozen_inference_graph_stripped.pb \
--frozen_graph=True \
--input_names="sub_7" \
--output_names="ResizeBilinear_3"
If TensorFlow was built from source:
bazel build tensorflow/python/tools:optimize_for_inference
bazel-bin/tensorflow/python/tools/optimize_for_inference \
--input=/path/to/frozen_inference_graph.pb \
--output=/path/to/frozen_inference_graph_stripped.pb \
--frozen_graph=True \
--input_names="sub_7" \
--output_names="ResizeBilinear_3"
Hope it works!!
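Note that optimize_for_inference only strips the graph; the stripped .pb still has to go through the TFLite converter. A minimal sketch of that step, assuming the stripped graph path above and the 512x512 input size from the question (the output file name is just an example):
import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    '/path/to/frozen_inference_graph_stripped.pb',
    input_arrays=['sub_7'],
    output_arrays=['ResizeBilinear_3'],
    input_shapes={'sub_7': [1, 512, 512, 3]})  # adjust to your export size
tflite_model = converter.convert()
with open('deeplabv3_mnv2.tflite', 'wb') as f:  # example output name
    f.write(tflite_model)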

How to convert .pb to TFLite format?

I downloaded a retrained_graph.pb and retrained_labels.txt file of a model I trained in Azure cognitive service. Now I want to make an Android app using that model and to do so I have to convert it to TFLite format. I used toco and I am getting the following error:
ValueError: Invalid tensors 'input' were found.
I am basically following this tutorial, ran into the problem on step 4, and directly copy-pasted the terminal code:
https://heartbeat.fritz.ai/neural-networks-on-mobile-devices-with-tensorflow-lite-a-tutorial-85b41f53230c
I am making a wild guess here: maybe you entered input_arrays=input, which may not be true. Use this script to find the names of the input and output arrays of the frozen inference graph:
import tensorflow as tf

gf = tf.GraphDef()
m_file = open('frozen_inference_graph.pb', 'rb')
gf.ParseFromString(m_file.read())

with open('somefile.txt', 'a') as the_file:
    for n in gf.node:
        the_file.write(n.name + '\n')

file = open('somefile.txt', 'r')
data = file.readlines()
print("output name = ")
print(data[len(data) - 1])
print("Input name = ")
file.seek(0)
print(file.readline())
In my case they are:
output name: SemanticPredictions
input name: ImageTensor
You can use the tflite_convert utility, which is part of the TensorFlow 1.10 (or higher) package.
The simple use for float inference is something like:
tflite_convert \
--output_file=/tmp/retrained_graph.tflite \
--graph_def_file=/tmp/retrained_graph.pb \
--input_arrays=input \
--output_arrays=output
where input and output are the input and output tensors of your TensorFlow graph.
import tensorflow as tf

gf = tf.GraphDef()
m_file = open('frozen_inference_graph.pb', 'rb')
gf.ParseFromString(m_file.read())
for n in gf.node:
    print(n.name)
The first name is input_arrays;
the last names are output_arrays (there could be more than one, depending on the number of outputs of the model).
my output
image_tensor <--- input_array
Cast
Preprocessor/map/Shape
Preprocessor/map/strided_slice/stack
Preprocessor/map/strided_slice/stack_1
.
.
.
Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_5/TensorArrayGatherV3
Postprocessor/Cast_3
Postprocessor/Squeeze
add/y
add
detection_boxes <---output_array
detection_scores <---output_array
detection_multiclass_scores
detection_classes <---output_array
num_detections <---output_array
raw_detection_boxes
raw_detection_scores
Most of the answers here prove to be broken due to version issues. This worked for me:
Note: First find the name of the input and output layers using Netron, as I mentioned here. In my case they are input and output.
!pip install tensorflow-gpu==1.15.0
# Convert
!toco --graph_def_file /content/yolo-v2-tiny-coco.pb \
--output_file yolo-v2-tiny-coco.tflite \
--output_format TFLITE \
--inference_type FLOAT \
--inference_input_type FLOAT \
--input_arrays input \
--output_arrays output
Also, as per zldrobit's amazing work, you can fetch a better quantized version of this TFLite model as:
# Now let's quantize it
!toco --graph_def_file /content/yolo-v2-tiny-coco.pb \
--output_file quantized-yolo-v2-tiny-coco.tflite \
--output_format TFLITE \
--inference_type FLOAT \
--inference_input_type FLOAT \
--input_arrays input \
--output_arrays output \
--post_training_quantize
If you're using TF2, the following will work for you to post-training quantize the .pb file.
import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file = 'path/to/frozen_inference__graph.pb',
    input_arrays = ['Input_Tensor_Name'],
    output_arrays = ['Output_Tensor_Name']
)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with tf.io.gfile.GFile('model.tflite', 'wb') as f:
    f.write(tflite_model)
In case you want full int8 quantization, then:
import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file = 'path/to/frozen_inference__graph.pb',
    input_arrays = ['Input_Tensor_Name'],
    output_arrays = ['Output_Tensor_Name']
)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
image_shape = (input_width, input_height, no_of_channels)  # change it according to your need

def representative_dataset_gen():
    for i in range(10):
        # creating fake images
        image = tf.random.normal([1] + list(image_shape))
        yield [image]

converter.representative_dataset = tf.lite.RepresentativeDataset(representative_dataset_gen)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]  # For EdgeTPU, no float ops allowed
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
with tf.io.gfile.GFile('model.tflite', 'wb') as f:
    f.write(tflite_model)
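As an optional sanity check (a sketch, reusing the model.tflite written above), you can confirm that the fully quantized model really has uint8 inputs and outputs:
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]['dtype'])   # expect <class 'numpy.uint8'>
print(interpreter.get_output_details()[0]['dtype'])  # expect <class 'numpy.uint8'>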
The error hints that you have not entered the correct --input_arrays.
From the TF Lite Developer Guide, I quote:
"Setting the input_array and output_array arguments is not straightforward. The easiest way to find these values is to explore the graph using TensorBoard."
Using TensorBoard isn't hard either; simply run this command:
tensorboard --logdir=path/to/log-directory
View the TensorBoard at
localhost:6006
To run the tflite converter on your local machine, you will need bazel and toco.
And if you read some issues on GitHub, in some versions of TensorFlow tflite causes a lot of trouble. To overcome this trouble, some recommend using tf-nightly!
To avoid all this, simply use Google Colab to convert your .pb into .lite or .tflite.
Since Colab started having the "upload" option for uploading your files into the current kernel, this I think is the most simple way without having to worry about other packages and their dependencies.
Here is the code for the same:
from google.colab import drive
drive.mount('/content/drive')
!cd drive/My\ Drive
from google.colab import files
pbfile = files.upload()
import tensorflow as tf
localpb = 'frozen_inference_graph_frcnn.pb'
tflite_file = 'frcnn_od.lite'
print("{} -> {}".format(localpb, tflite_file))
converter = tf.lite.TFLiteConverter.from_frozen_graph(
localpb,
["image_tensor"],
['detection_boxes']
)
tflite_model = converter.convert()
open(tflite_file,'wb').write(tflite_model)
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
"""**download optimized .lite file to local machine**"""
files.download(tflite_file)
There are two ways in which you can upload your .pb file to the current session:
i) (The easy way) After running the first cell in the above notebook, the drive will be mounted. So on your left part of the screen go to the files column and right click on the folder you want to upload your .pb file and choose upload.
Then use "ls" and "cd" commands to work your way into the folder and run the tflite converter cell.
ii) Run the cell with files.upload() command and click on browse and choose the .pb file from your local machine.
Once the file is uploaded, give its path to the variable "localpb" and also give a name for the .lite model. Then simply run the cell containing the "TFLiteConverter" command.
And voila. You should have a tflite model appear in your drive. Simply right-click on it and download to your local machine to run inferences.
Without bazel, you can try the following:
pip uninstall tensorflow
pip install tf-nightly
pip show protobuf
If protobuf is version 3.6.1, then proceed to installing the pre-release version of 3.7.0.
pip uninstall protobuf
pip install protobuf==3.7.0rc2
I still couldn't get the command line version to work. It kept returning the error: "tflite_convert: error: --input_arrays and --output_arrays are required with --graph_def_file", although both parameters were supplied. It worked in Python, however:
import tensorflow as tf
graph_def_file = "model.pb"
input_arrays = ["model_inputs"]
output_arrays = ["model_outputs"]
converter = tf.lite.TFLiteConverter.from_frozen_graph(
graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
Substituting Mul for input fixed it for me.
IMAGE_SIZE=299
tflite_convert \
--graph_def_file=tf_files/retrained_graph.pb \
--output_file=tf_files/optimized_graph.lite \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--input_shape=1,${IMAGE_SIZE},${IMAGE_SIZE},3 \
--input_array=Mul \
--output_array=final_result \
--inference_type=FLOAT \
--input_data_type=FLOAT
Following up on my previous answer, you can use the following script to convert your model trained on SSD MobileNet to tflite:
python object_detection/export_tflite_ssd_graph.py \
--pipeline_config_path ssd_0.75_export/pipeline.config \
--trained_checkpoint_prefix ssd_0.75_export/model.ckpt \
--output_directory ssd_to_tflite_output
To do this, you will first need to be in the research folder of the TensorFlow Object Detection API, and change the file paths/names to your own names.
If this doesn't work, try running this script from the research folder and rerun:
protoc object_detection/protos/*.proto --python_out=.
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
Most likely it's because during the retraining process the input and output tensors were renamed. If this is a retrained inceptionv3 graph, try using Mul as the input tensor name and final_result as the output tensor name.
bazel run --config=opt //tensorflow/contrib/lite/toco:toco -- \
... other options ...
--input_shape=1,299,299,3 \
--input_array=Mul \
--output_array=final_result
Make a similar adjustment if you use tflite_convert, as Aleksandr suggests.
import tensorflow as tf
!tflite_convert \
--output_file "random.tflite" \
--graph_def_file "pb file path" \
--input_arrays "input tensor name" \
--output_arrays "output tensor name"

How to convert a retrained model to tflite format?

I have retrained an image classifier model on MobileNet, I have these files.
Further, I used toco to compress the retrained model and convert it to .lite format, but I need it in .tflite format. Is there any way I can get to tflite format from the existing files?
Here is a simple Python script which you can use to convert a .pb graph into tflite:
import tensorflow as tf
graph_def_file = "output_graph.pb" ##Your frozen graph
input_arrays = ["input"] ##Input Node
output_arrays = ["final_result"] ##Output Node
converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite","wb").write(tflite_model)
You can rename the .lite model to .tflite and it should work just fine.
Alternatively, with toco, you can rename the output as it is created :
toco \
--input_file=tf_files/retrained_graph.pb \
--output_file=tf_files/optimized_graph.tflite \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--input_shape=1,224,224,3 \
--input_array=input \
--output_array=final_result \
--inference_type=FLOAT \
--input_data_type=FLOAT
In order to convert TensorFlow checkpoints and GraphDef to a TensorFlow Lite FlatBuffer:
Freeze the checkpoints and graph using freeze_graph.py.
Convert the frozen graph to a TensorFlow Lite FlatBuffer using TOCO.
Your freeze_graph.py command will look similar to the following:
freeze_graph \
--input_graph=output_graph.pb \
--input_binary=true \
--input_checkpoint=checkpoint \
--output_graph=frozen_graph.pb \
--output_node_names=MobilenetV1/Predictions/Softmax
You can use either TocoConverter (Python API) or tflite_convert (command line tool) with your model. TocoConverter accepts a tf.Session, frozen graph def, SavedModel directory, or a Keras model file. tflite_convert accepts the latter three formats.
When using TOCO, specify the output_file parameter with a .tflite extension.
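As a sketch of step 2 with the Python API, assuming the frozen_graph.pb produced by the freeze_graph command above ('input' is an assumed input tensor name; substitute the real one from your graph, found e.g. with Netron or the node-listing script earlier in this thread):
import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    'frozen_graph.pb',
    input_arrays=['input'],  # assumed name; check your graph
    output_arrays=['MobilenetV1/Predictions/Softmax'])
tflite_model = converter.convert()
with open('converted_model.tflite', 'wb') as f:
    f.write(tflite_model)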