I made a tflite model with eager_few_shot_od_training_tflite.ipynb but can't use it in my Flutter project - tensorflow2.0

I made a tflite model with
https://github.com/tensorflow/models/blob/master/research/object_detection/colab_tutorials/eager_few_shot_od_training_tflite.ipynb
The problem is that when I take the tflite file I successfully exported from the notebook above and load it in my Flutter object detection app,
https://github.com/hiennguyen92/flutter_realtime_object_detection
running the app with my tflite model causes this error:
Caused by: java.lang.IllegalArgumentException: Cannot copy from a TensorFlowLite tensor (StatefulPartitionedCall:1) with shape [1, 10] to a Java object with shape [1, 10, 4].
I think it's because the outputs of my tflite model look like the ones below...?
How can I resolve this error?
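For reference, this is roughly how I inspect the model's output tensors from Python (a sketch; 'model.tflite' is just a placeholder for my exported file), to compare them with the [1, 10, 4] boxes / [1, 10] classes / [1, 10] scores / [1] count layout that the Flutter plugin seems to assume:

import tensorflow as tf

# Load the exported model and list its outputs (the path is a placeholder).
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()

for detail in interpreter.get_output_details():
    # Which StatefulPartitionedCall:N output holds boxes, classes, scores or
    # count depends on how the model was exported, so print all of them.
    print(detail['name'], detail['shape'], detail['dtype'])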

Related

Inference of Audio data on HuggingFace Wav2Vec TFlite Model

I've been trying to run inference on an audio sample with a tflite version of the Wav2Vec2.0 model that I got from HuggingFace. When I try to run inference on the model, this is the error I get:
RuntimeError: Index out of range using input dim 1; input has only 1 dims
(while executing 'StridedSlice' via Eager) Node number 1491 (TfLiteFlexDelegate) failed to invoke.
A Colab notebook with the TFLite conversion code and the inference code used can be found here.
Thank you!
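For context, the inference follows this general pattern (a sketch, not the exact notebook code; the model path and the zero-filled audio array are placeholders):

import numpy as np
import tensorflow as tf

# Load the converted model (the path is a placeholder for the notebook's output).
interpreter = tf.lite.Interpreter(model_path='wav2vec2.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
# The error says "input has only 1 dims", so the first thing to compare is the
# rank the model expects against the rank of the array that gets fed in.
print('model expects shape:', input_details[0]['shape'])

audio = np.zeros(16000, dtype=np.float32)  # placeholder for the real sample
print('feeding shape:', audio[np.newaxis, :].shape)  # [1, 16000] with a batch dim added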

Custom object detection model to TensorFlow Lite, shape of model input

I need to export a custom object detection model, fine-tuned on a custom dataset, to TensorFlow Lite, so that it can run on Android devices.
I'm using TensorFlow 2.4.1 on Ubuntu 18.04, and so far this is what I did:
1. I fine-tuned an 'ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8' model, using a dataset of new images. I used the 'model_main_tf2.py' script from the repository.
2. I exported the model using 'exporter_main_v2.py':
python exporter_main_v2.py --input_type image_tensor --pipeline_config_path .\models\custom_model\pipeline.config --trained_checkpoint_dir .\models\custom_model\ --output_directory .\exported-models\custom_model
which produced a SavedModel (.pb file).
3. I tested the exported model for inference, and everything works fine. In the detection routine, I used:
def get_model_detection_function(model):
    """Get a tf.function for detection."""

    @tf.function
    def detect_fn(image):
        """Detect objects in image."""
        image, shapes = model.preprocess(image)
        prediction_dict = model.predict(image, shapes)
        detections = model.postprocess(prediction_dict, shapes)
        return detections, prediction_dict, tf.reshape(shapes, [-1])

    return detect_fn
and the shape of the produced image object is 640x640, as expected.
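For completeness, the returned function is then used along these lines (a sketch; detection_model and image_np come from the rest of the script and are only assumed here):

# Hypothetical usage of the tf.function returned above.
detect_fn = get_model_detection_function(detection_model)
input_tensor = tf.convert_to_tensor(image_np[None, ...], dtype=tf.float32)
detections, prediction_dict, shapes = detect_fn(input_tensor)
print(detections['detection_boxes'].shape)  # e.g. [1, num_detections, 4]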
Then, I tried to convert this .pb model to tflite.
After updating to the nightly version of TensorFlow (with the stable release, I got an error), I was actually able to produce a .tflite file by using this code:
import tensorflow as tf
from tflite_support import metadata as _metadata

saved_model_dir = 'exported-models/custom_model/'

# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

# Save the model.
with open('tflite/custom_model.tflite', 'wb') as f:
    f.write(tflite_model)
I tried to use this model in Android Studio, following the instructions given here.
However, I'm getting a couple of errors:
something regarding 'Not a valid TensorFlow Lite model' (I have to look into this more);
the error:
java.lang.IllegalArgumentException: Cannot copy to a TensorFlowLite tensor (serving_default_input_tensor:0) with 3 bytes from a Java Buffer with 270000 bytes.
The second error seems to indicate there's something weird with the input expected from the tflite model.
I examined the file with Netron, and this is what I got:
the input is expected to have...1x1x1x3 shape, or am I misinterpreting the graph?
Should I somehow set the tensor input size when using the tflite exporter?
Anyway, what is the right way to export my custom model so that it can run on Android?
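For what it's worth, this is how I poke at the converted file from Python (a sketch; the paths are placeholders, and the explicit resize is only a guess that the 1x1x1x3 shape is a dynamic-shape placeholder rather than the real input size):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='tflite/custom_model.tflite')
input_details = interpreter.get_input_details()
print(input_details[0]['shape'])  # Netron shows 1x1x1x3 here

# Guess: the dimensions are dynamic, so resize to the size the model was trained on.
interpreter.resize_tensor_input(input_details[0]['index'], [1, 640, 640, 3])
interpreter.allocate_tensors()

dummy = np.zeros((1, 640, 640, 3), dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
for detail in interpreter.get_output_details():
    print(detail['name'], detail['shape'])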
TF ops are supported via the Flex delegate, and I bet that is the problem. If you want to check whether that is the case, you can do the following:
1. Download the benchmark binary with Flex delegate support for TF ops. You can find it in the section Native benchmark binary here: https://www.tensorflow.org/lite/performance/measurement. For example, for Android (aarch64) it is https://storage.googleapis.com/tensorflow-nightly-public/prod/tensorflow/release/lite/tools/nightly/latest/android_aarch64_benchmark_model_plus_flex
2. Connect your phone to your computer and, from the folder where you downloaded the binary, do adb push <binary_name> /data/local/tmp
3. Push your model: adb push <tflite_model> /data/local/tmp
4. Open a shell with adb shell and go to the folder with cd /data/local/tmp. Then run the benchmark with ./<binary_name> --graph=<tflite_model>
Info from:
https://www.tensorflow.org/lite/guide/ops_select
https://www.tensorflow.org/lite/performance/measurement

How to use customvision.ai to create object detection model for TensorFlow Lite?

I have an object detection model that I've created in https://customvision.ai. If I export it as a TensorFlow Lite model, I get a model that expects FLOAT32 [1, 416, 416, 3] as input and returns FLOAT32 [1, 13, 13, 35] as output (as per TensorFlow Lite's visualize.py).
I would like to use that model in an Android app. I've tried to load the .tflite model file into the TensorFlow Lite object detection sample app, however it expects a different format. I get the following exception when running the app:
java.lang.IllegalArgumentException: Cannot copy between a TensorFlowLite tensor with shape [1, 13, 13, 35] and a Java object with shape [1, 10, 4].
Is it feasible to adapt the sample app to use the model from customvision.ai?
How should I interpret the shape [1, 13, 13, 35]?
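My current guess at the layout, which may well be wrong: [1, 13, 13, 35] looks like a YOLO-style 13x13 grid output, where 35 would decompose as num_anchors x (4 box coordinates + 1 objectness score + num_classes), e.g. 5 anchors x (5 + 2 classes). A sketch of how it could be sliced under that assumption (the anchor and class counts are assumptions, and the zero array is a placeholder for the real output):

import numpy as np

# Assumed YOLO-style head: 35 = 5 anchors * (4 + 1 + 2 classes).
num_anchors, num_classes = 5, 2
output = np.zeros((1, 13, 13, 35), dtype=np.float32)  # placeholder for the real output

grid = output.reshape(1, 13, 13, num_anchors, 4 + 1 + num_classes)
boxes = grid[..., :4]         # per-anchor box offsets (x, y, w, h)
objectness = grid[..., 4]     # per-anchor confidence
class_scores = grid[..., 5:]  # per-anchor class scores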
Thanks in advance!

Failed Tensorflow Lite conversion: Unsupported data type in placeholder op

System information:
OS: Mac OS Mojave
TensorFlow installed from (source or binary): binary (pip install tensorflow)
TensorFlow version (or github SHA if from source): 1.12
I am trying to convert a simple convolutional tensorflow model to tensorflow lite. I already have it in SavedModel format. But when I try to run the convert util on the saved model, I get:
RuntimeError: TOCO failed see console for info.
b"2018-12-30 15:40:54.449737: I tensorflow/contrib/lite/toco/import_tensorflow.cc:189] Unsupported data type in placeholder op: 2\n2018-12-30 15:40:54.450020: F tensorflow/contrib/lite/toco/import_tensorflow.cc:2137] Check failed: status.ok() Unexpected value forattribute 'T'. Expected 'DT_FLOAT'\n"
To save the model, I have:
# model is an Estimator instance
def export(model):
    model.export_saved_model("tmp/export", serving_input_receiver_fn)
and:
def serving_input_receiver_fn():
    features = {'x': tf.placeholder(shape=[1, 100, 100, 1], dtype=tf.as_dtype(np.int32))}
    return tf.estimator.export.ServingInputReceiver(features, features)
Input dtype is np.int32, so I attempt to cast that to a tf type here.
I can attach the full model def on request.
Thanks.
The solution to this was not in the placeholder op itself, but in the model declaration. I was using a float64 input type. Switching to float32, and setting dtype=float32 in the placeholder, solved my issue.
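For completeness, the serving input receiver I ended up with looks roughly like this (a sketch of the fix described above, keeping the same shape):

def serving_input_receiver_fn():
    # Declare the serving input as float32 so it matches the model's input dtype.
    features = {'x': tf.placeholder(shape=[1, 100, 100, 1], dtype=tf.float32)}
    return tf.estimator.export.ServingInputReceiver(features, features)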

Toolkit Error: Stage Details Not Supported: VarHandleOp

I have trained a model using Keras and saved it to a json file and an .h5 weights file. Then I need to run it on an Intel Neural Compute Stick, so I converted these two model and weight files to a TensorFlow .meta file using my repo: https://github.com/anuragcp/to_ncs_graph.git
Prediction works without the NCS. When I create a graph from this meta file using the mvNCCompile command, it throws an error:
[Error 5] Toolkit Error: Stage Details Not Supported: VarHandleOp
I have checked on both NCSDK v1 & v2 - same result.
Any idea how to solve this?