I was trying to convert a model in a Keras file (.h5) to a TensorFlow Lite file (.tflite) using the following code:
# Save model as .h5 keras file
keras_file = "eSleep.h5"
model_save = tf.keras.models.save_model(model,keras_file,overwrite=True,include_optimizer=True)
# Export keras file to TensorFlow Lite model
converter = tf.lite.TFLiteConverter.from_keras_model_file(keras_file)
tflite_model = converter.convert()
open("eSleep.tflite", "wb").write(tflite_model)
However, the following line:
tflite_model = converter.convert()
returned errors:
I tensorflow/core/grappler/devices.cc:53] Number of eligible GPUs (core count >= 8): 0 (Note: TensorFlow was not compiled with CUDA support)
I tensorflow/core/grappler/clusters/single_machine.cc:359] Starting new session
E tensorflow/core/grappler/grappler_item_builder.cc:636] Init node dense/kernel/Assign doesn't exist in graph
Can anybody help me understand what "Init node dense/kernel/Assign doesn't exist in graph" means and how to fix the error?
In my experience the converted model should work fine, even though this error is shown. You can ignore the error.
I solved the problem by using TensorFlow 1.12.
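For reference, from_keras_model_file is a TensorFlow 1.x API; on TensorFlow 2.x the converter is built directly from the in-memory Keras model instead. A minimal sketch, assuming the trained model object from the script above is still in memory:
import tensorflow as tf

# TF 2.x replacement for from_keras_model_file: convert the in-memory Keras model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("eSleep.tflite", "wb") as f:
    f.write(tflite_model)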
Related
I am new to TensorFlow and I need to convert a TFRS (TensorFlow Recommenders) model to a TFLite model. Does anyone have any idea or experience related to this topic?
I simply ran the retrieval example code in Colab.
And added the recommended method to convert the final model to tflite. That was:
converter = tf.lite.TFLiteConverter.from_saved_model(path) # path to the SavedModel directory
tflite_model = converter.convert()
You can see the error in the following image.
Try downloading the latest tf-nightly pip package and then convert this model using the new MLIR converter. You can enable the new converter by setting:
converter.experimental_new_converter = True
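A minimal end-to-end sketch of that suggestion (the SavedModel path is a placeholder, not from the original post):
import tensorflow as tf

saved_model_dir = 'path/to/saved_model'  # placeholder for your export directory

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.experimental_new_converter = True  # opt in to the new MLIR-based converter
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)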
I need to export a custom object detection model, fine-tuned on a custom dataset, to TensorFlow Lite, so that it can run on Android devices.
I'm using TensorFlow 2.4.1 on Ubuntu 18.04, and so far this is what I did:
1. I fine-tuned an 'ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8' model, using a dataset of new images. I used the 'model_main_tf2.py' script from the repository;
2. I exported the model using 'exporter_main_v2.py':
python exporter_main_v2.py --input_type image_tensor --pipeline_config_path .\models\custom_model\pipeline.config --trained_checkpoint_dir .\models\custom_model\ --output_directory .\exported-models\custom_model
which produced a Saved Model (.pb file);
3. I tested the exported model for inference, and everything works fine. In the detection routine, I used:
def get_model_detection_function(model):
    """Get a tf.function for detection."""

    @tf.function
    def detect_fn(image):
        """Detect objects in image."""
        image, shapes = model.preprocess(image)
        prediction_dict = model.predict(image, shapes)
        detections = model.postprocess(prediction_dict, shapes)
        return detections, prediction_dict, tf.reshape(shapes, [-1])

    return detect_fn
and the shape of the produced image object is 640x640, as expected.
Then, I tried to convert this .pb model to tflite.
After updating to the nightly version of TensorFlow (with the stable version, I got an error), I was actually able to produce a .tflite file using this code:
import tensorflow as tf
from tflite_support import metadata as _metadata

saved_model_dir = 'exported-models/custom_model/'

# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

# Save the model.
with open('tflite/custom_model.tflite', 'wb') as f:
    f.write(tflite_model)
I tried to use this model in AndroidStudio, following the instructions given here.
However, I'm getting a couple of errors:
something regarding 'Not a valid Tensorflow lite model' (I have to look into this further);
the error:
java.lang.IllegalArgumentException: Cannot copy to a TensorFlowLite tensor (serving_default_input_tensor:0) with 3 bytes from a Java Buffer with 270000 bytes.
The second error seems to indicate there's something weird with the input expected from the tflite model.
I examined the file with Netron, and this is what I got:
the input is expected to have a 1x1x1x3 shape, or am I misinterpreting the graph?
Should I somehow set the tensor input size when using the tflite exporter?
Anyway, what is the right way to export my custom model so that it can run on Android?
TF Ops are supported via the Flex delegate. I bet that is the problem. If you want to check if it is that, you can do:
Download the benchmark app with Flex delegate support for TF ops. You can find it in the section Native benchmark binary here: https://www.tensorflow.org/lite/performance/measurement. For example, for Android it is https://storage.googleapis.com/tensorflow-nightly-public/prod/tensorflow/release/lite/tools/nightly/latest/android_aarch64_benchmark_model_plus_flex
Connect your phone to your computer and, from the folder where you downloaded the binary, do adb push <apk_name> /data/local/tmp
Push your model: adb push <tflite_model> /data/local/tmp
Open a shell with adb shell and go to the folder with cd /data/local/tmp. Then run the app with ./<apk_name> --graph=<tflite_model>
Info from:
https://www.tensorflow.org/lite/guide/ops_select
https://www.tensorflow.org/lite/performance/measurement
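Separately, for the buffer-size mismatch it can help to inspect the converted model's input signature on the desktop before deploying. A minimal sketch, assuming the .tflite path from the conversion step above:
import tensorflow as tf

# Load the converted model and print what input it actually expects.
interpreter = tf.lite.Interpreter(model_path='tflite/custom_model.tflite')
for detail in interpreter.get_input_details():
    # A [1, 1, 1, 3] shape here would explain the "3 bytes" error on Android.
    print(detail['name'], detail['shape'], detail['dtype'])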
I fine-tuned an SSD model to recognize a custom object.
I followed the tutorials, ran the training process and exported the model. I tested it for inference and everything works great.
So, now I have a structure like:
exported-models/
|
---- SSD_custom_model/
     |
     -------- checkpoint/
     -------- saved_model/
     -------- pipeline.config
which I assume is what is referred to as "Saved model" in the TensorFlow documentation.
So, I wanted to convert this model to TensorFlow Lite to test it on an Android device, I checked the tutorials and I'm trying:
import tensorflow as tf

saved_model_dir = 'exported-models/SSD_custom_model/'

# Convert the model
## I tried either just
# converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
## or, with more options
converter = tf.lite.TFLiteConverter.from_saved_model(
    saved_model_dir, signature_keys=['serving_default'])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

# Save the model.
with open('tflite/custom_model.tflite', 'wb') as f:
    f.write(tflite_model)
And I'm getting the error
File "/home/lews/anaconda3/envs/tf/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 216, in toco_convert_protos
raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc(callsite(callsite("map/TensorArrayV2_1#__inference_call_func_11694" at "StatefulPartitionedCall#__inference_signature_wrapper_14068") at "StatefulPartitionedCall")): requires element_shape to be 1D tensor during TF Lite transformation pass
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("map/TensorArrayV2_1#__inference_call_func_11694" at "StatefulPartitionedCall#__inference_signature_wrapper_14068") at "StatefulPartitionedCall")): failed to legalize operation 'tf.TensorListReserve' that was explicitly marked illegal
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
It seems to be complaining about the input shape ('requires element_shape to be 1D tensor during TF Lite transformation pass'). Maybe I should've modified something about the model before the fine-tuning process? Or after that?
Hi, I'm doing the same work and encountered the same error, but I solved it.
The model I converted is SSD-MobileNet-v2, and I'm using TensorFlow 2.4, so I believe this will work for you.
All you need to do is to create a new conda environment (python 3.8 is ok), and then install tf-nightly:
pip install tf-nightly
It's important to note that the version of tf-nightly must be >= 2.5.
At first I used tf-nightly 2.3 and encountered another error. After I upgraded it to 2.5, the converter finally worked.
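A quick way to confirm which build the new environment actually picked up before re-running the conversion (a minimal check, nothing more):
import tensorflow as tf

# tf-nightly reports versions such as 2.5.0-dev20210101; the conversion above needs a 2.5+ build.
print(tf.__version__)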
I would like to make sure whether the following steps I executed to get the tflite of the yolov2-tiny model are correct or not.
Step 1: Saving the graph and weights to a protobuf file
flow --model cfg/yolov2-tiny.cfg --load bin/yolov2-tiny.weights --savepb
This command created the build_graph folder with yolov2-tiny.pb and yolov2-tiny.meta.
Step 2: Converting the .pb to .tflite
I executed the below piece of code to get the yolov2-tiny.tflite
import tensorflow as tf

localpb = 'yolov2-tiny.pb'
tflite_file = 'yolov2-tiny.tflite'
print("{} -> {}".format(localpb, tflite_file))

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    localpb,
    input_arrays=['input'],
    output_arrays=['output']
)
tflite_model = converter.convert()
open(tflite_file, 'wb').write(tflite_model)

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
If the above steps I followed to get this tflite are correct, then please suggest the command to run this tflite file on the Coral Edge TPU USB Accelerator.
Thank you so much :)
Unfortunately, yolo models are not supported by the edgetpu compiler as of now. I recommend using mobile_ssd models.
For future reference, your pipeline should be:
1) Train the model
2) Convert to tflite
3) Compile for EdgeTPU (the step that actually delegates the work onto the TPU); see the conversion sketch below.
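To make step 3 possible, the .tflite from step 2 generally has to be fully integer-quantized. A minimal post-training quantization sketch, assuming a SavedModel export and a hypothetical representative-data generator (the path and input shape are placeholders, not from the original post):
import numpy as np
import tensorflow as tf

saved_model_dir = 'exported-models/my_model/saved_model'  # placeholder path

def representative_dataset():
    # Yield a handful of preprocessed sample inputs matching the model's
    # input shape; random data here is only a stand-in for real images.
    for _ in range(100):
        yield [np.random.rand(1, 300, 300, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# The Edge TPU compiler expects full-integer quantization.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open('model_quant.tflite', 'wb') as f:
    f.write(tflite_model)
The resulting file can then be passed to the edgetpu_compiler command-line tool for step 3.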
I've managed to create a TensorFlow model, saved in the SavedModel .pb format, with a custom operation.
My problem is that I cannot convert it to the Lite version using either the command-line utilities or the Python API.
My Python code is:
import tensorflow as tf
import os
import custom_op

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

converter = tf.lite.TFLiteConverter.from_saved_model("./SavedModel")
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                        tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
But conversion failed with error:
ValueError: Provide an input shape for input array 'X'.
I assume it's because my placeholders don't have a defined shape. I don't understand why the normal TensorFlow model works without it.
Any help?
As described in the TensorFlow Lite documentation, you can pass different parameters to tf.lite.TFLiteConverter.from_saved_model.
For more complex SavedModels, the optional parameters that can be passed into TFLiteConverter.from_saved_model() are input_arrays, input_shapes, output_arrays, tag_set and signature_key. Details of each parameter are available by running help(tf.lite.TFLiteConverter).
You can pass this information as described here. You need to provide an input shape for your input array 'X'. For example:
tf.lite.TFLiteConverter.from_saved_model("./SavedModel", input_shapes={"X": [1, H, W, C]})
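A fuller sketch of the same idea, with hypothetical concrete values for the height, width and channel placeholders (adjust them to whatever 'X' actually expects):
import tensorflow as tf

# Hypothetical input resolution; replace with the shape your placeholder 'X' expects.
H, W, C = 224, 224, 3

converter = tf.lite.TFLiteConverter.from_saved_model(
    "./SavedModel", input_shapes={"X": [1, H, W, C]})
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                        tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)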