Load a TFLite model and quantize it - TensorFlow 2.0

I would like to load a TFLite model and quantize it, but it seems the loaded object does not have the expected attributes.
Here is my code:
interpreter = tf.lite.TFLiteConverter.from_saved_model('efficientdet_d0_coco17_tpu-32/saved_model')
config = QuantizationConfig.for_float16()
interpreter.export(export_dir='.', tflite_filename='x_model_fp16.tflite', quantization_config=config)
Error:
AttributeError Traceback (most recent call last)
<ipython-input-14-19596da94f3d> in <module>
----> 1 interpreter.export(export_dir='.', tflite_filename='/content/drive/Othercomputers/intel16/botri/efficient_net/x_model_fp16.tflite', quantization_config=config)
AttributeError: 'TFLiteSavedModelConverterV2' object has no attribute 'export'
I have also tried loading the .tflite file directly, but the error stays the same.
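Note that QuantizationConfig and .export() come from the tflite_model_maker API; a plain tf.lite.TFLiteConverter exposes a different interface, which would explain the AttributeError. A minimal sketch of float16 post-training quantization with plain tf.lite, assuming the same SavedModel path:
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(
    'efficientdet_d0_coco17_tpu-32/saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # enable post-training quantization
converter.target_spec.supported_types = [tf.float16]   # quantize weights to float16
tflite_model = converter.convert()

with open('x_model_fp16.tflite', 'wb') as f:
    f.write(tflite_model)
Also note that the converter cannot start from an existing .tflite file; it expects a SavedModel, a Keras model, or concrete functions, which is why loading the TFLite file fails the same way.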

Related

Tensorflow: How to load an object detection model and view the model's architecture?

I am trying to load an object detection model and view its architecture, because I need to know the input and output layers in order to convert the model to a different format.
Right now, I am trying to do:
model = tf.saved_model.load('/content/drive/MyDrive/my_ssd_mobnet_640x640_tuned/tfliteexport/saved_model')
model.summary()
And I am getting this error:
AttributeError Traceback (most recent call last)
<ipython-input-18-5f15418b3570> in <module>()
----> 1 model.summary()
AttributeError: '_UserObject' object has no attribute 'summary'
My model is in .pb format, but I also have a .tflite version.
According to the documentation, you should load the model as a Keras model, like this:
from tensorflow import keras
model = keras.models.load_model("my/path/saved_model")
The .pb version should work.
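If the SavedModel was not exported through Keras (TF object detection exports usually aren't), keras.models.load_model may not give you a Keras model either. In that case the serving signatures still expose the input and output tensors; a sketch, assuming the default signature name:
import tensorflow as tf

loaded = tf.saved_model.load(
    '/content/drive/MyDrive/my_ssd_mobnet_640x640_tuned/tfliteexport/saved_model')
infer = loaded.signatures['serving_default']  # assumed signature key
print(infer.structured_input_signature)   # input tensor specs (names, shapes, dtypes)
print(infer.structured_outputs)           # output tensor specs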

Can't convert onnx model to tflite using TF 2.4.1

I have an ONNX model, which I can successfully convert to TF with TF 2.4.1. But when it comes to converting that saved model to TFLite, an error occurs.
The code:
import onnx
import tensorflow as tf
from onnx_tf.backend import prepare
print(tf.__version__)
# Convert model.onnx to Tensorflow
onnx_model = onnx.load('model.onnx')
onnx.checker.check_model(onnx_model)
tf_rep = prepare(onnx_model)
tf_rep.export_graph('model')
# Convert saved model to tflite
converter = tf.lite.TFLiteConverter.from_saved_model('model')
tf_lite_model = converter.convert()
open('model.tflite', 'wb').write(tf_lite_model)
Everything goes OK until the conversion step, which ends like so:
>>> tf_lite_model = converter.convert()
2021-04-22 18:18:14.715046: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:316] Ignored output_format.
2021-04-22 18:18:14.715072: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:319] Ignored drop_control_dependency.
2021-04-22 18:18:14.715078: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:325] Ignored change_concat_input_ranges.
2021-04-22 18:18:14.716044: I tensorflow/cc/saved_model/reader.cc:32] Reading SavedModel from: model
2021-04-22 18:18:14.778050: I tensorflow/cc/saved_model/reader.cc:55] Reading meta graph with tags { serve }
2021-04-22 18:18:14.778083: I tensorflow/cc/saved_model/reader.cc:93] Reading SavedModel debug info (if present) from: model
2021-04-22 18:18:14.998062: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
2021-04-22 18:18:15.043862: I tensorflow/cc/saved_model/loader.cc:206] Restoring SavedModel bundle.
2021-04-22 18:18:15.438804: I tensorflow/cc/saved_model/loader.cc:190] Running initialization op on SavedModel bundle at path: model
2021-04-22 18:18:15.809851: I tensorflow/cc/saved_model/loader.cc:277] SavedModel load for tags { serve }; Status: success: OK. Took 1093808 microseconds.
2021-04-22 18:18:18.757257: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:194] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
loc(callsite(callsite("Pad_16#__inference___call___16503" at "PartitionedCall#__inference_signature_wrapper_16752") at "PartitionedCall")): error: operand #0 does not dominate this use
Traceback (most recent call last):
File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 210, in toco_convert_protos
model_str = wrap_toco.wrapped_toco_convert(model_flags_str,
File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/wrap_toco.py", line 32, in wrapped_toco_convert
return _pywrap_toco_api.TocoConvert(
Exception: <unknown>:0: error: loc(callsite(callsite("Pad_16#__inference___call___16503" at "PartitionedCall#__inference_signature_wrapper_16752") at "PartitionedCall")): operand #0 does not dominate this use
<unknown>:0: note: loc("PartitionedCall"): called from
<unknown>:0: note: loc(callsite(callsite("Pad_16#__inference___call___16503" at "PartitionedCall#__inference_signature_wrapper_16752") at "PartitionedCall")): operand defined here
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 739, in convert
result = _convert_saved_model(**converter_kwargs)
File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 632, in convert_saved_model
data = toco_convert_protos(
File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 216, in toco_convert_protos
raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc(callsite(callsite("Pad_16#__inference___call___16503" at "PartitionedCall#__inference_signature_wrapper_16752") at "PartitionedCall")): operand #0 does not dominate this use
<unknown>:0: note: loc("PartitionedCall"): called from
<unknown>:0: note: loc(callsite(callsite("Pad_16#__inference___call___16503" at "PartitionedCall#__inference_signature_wrapper_16752") at "PartitionedCall")): operand defined here
I have no idea what this message means, but if I switch to TF 2.2 the conversion passes without errors. The bad thing is that, due to another problem, the initial ONNX-to-TF conversion now fails under TF 2.2.
Does anybody have an idea what this message means and what could be done about it?
TIA
Is it possible to share your saved model directory with me? I can help with debugging.
The general advice is that there are two possibilities:
(1) The TF Lite converter may not handle the saved model correctly.
(2) The onnx conversion tool may not create a valid TF saved model.
Using a recent TF version (2.5 or tf-nightly) might resolve the problem in case (1), but it's not guaranteed.
I confirmed that the tf-nightly version could convert the attached saved model without any issue:
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/onnx_model")
tflite_model = converter.convert()
with open("/tmp/onnx.tflite", "wb") as f:
    f.write(tflite_model)
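Once converted, a quick sanity check is loading the result with the TFLite interpreter and inspecting its I/O details; a sketch, assuming the output path from the snippet above:
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="/tmp/onnx.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details())   # input tensor shapes and dtypes
print(interpreter.get_output_details())  # output tensor shapes and dtypes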

AttributeError: 'TensorSliceDataset' object has no attribute 'element_spec'

I am very new to tensorflow. I ran the following code:
import tensorflow as tf
d = tf.data.Dataset.from_tensor_slices(tf.random.uniform([3,6]))
d.element_spec
And got the following error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-4-a00819363729> in <module>()
1 import tensorflow as tf
2 d = tf.data.Dataset.from_tensor_slices(tf.random.uniform([3,6]))
----> 3 d.element_spec
AttributeError: 'TensorSliceDataset' object has no attribute 'element_spec'
I cannot find a solution for this error. I just want to print the type of each element component in the variable d. What is going wrong here?
I am using tensorflow 2.0.
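For what it's worth, element_spec was only added in a later TensorFlow release (2.1, to the best of my knowledge). On 2.0 the same information is available through the v1 compat helpers; a sketch:
import tensorflow as tf

d = tf.data.Dataset.from_tensor_slices(tf.random.uniform([3, 6]))
# On TF 2.0, the compat helpers report each element component's dtype and shape
print(tf.compat.v1.data.get_output_types(d))   # expected: tf.float32
print(tf.compat.v1.data.get_output_shapes(d))  # expected: (6,)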

OpenCV DNN, import .pb file from TensorFlow: Assertion failed error: scaleMat.type() == CV_32FC1 in function 'populateNet'

I was trying to import a frozen .pb file, "optimized.pb" (optimized with tensorflow.python.tools.optimize_for_inference), using
cv2.dnn.readNetFromTensorflow("optimized.pb")
This resulted in the following error:
Traceback (most recent call last):
File "opencv.py", line 4, in <module>
net = cv2.dnn.readNetFromTensorflow("optimized.pb")
cv2.error: OpenCV(3.4.3) /io/opencv/modules/dnn/src/tensorflow/tf_importer.cpp:1380: error: (-215:Assertion failed) scaleMat.type() == CV_32FC1 in function 'populateNet'
The model included transpose convolution layers. However, the error disappears when I do not include any deconvolution layers.
Can anyone help me understand and correct this error?
I've solved this error in my network by replacing
up8 = UpSampling2D(size=(2, 2), interpolation='bilinear')(conv7)
with
up8 = UpSampling2D(size=(2, 2))(conv7)
It looks like my OpenCV (version 3.4.6) does not support bilinear interpolation in the UpSampling2D layer.
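A quick way to verify that a given OpenCV build accepts the re-exported graph is a minimal import-and-inspect script; a sketch, with 'optimized.pb' standing in for your re-frozen model:
import cv2

# If populateNet accepts every layer, this returns a usable net object
net = cv2.dnn.readNetFromTensorflow('optimized.pb')
print(net.getLayerNames()[:5])  # a few layer names as a smoke test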

TF object detection API - computing evaluation measures fails

I successfully trained a model on my own dataset, exported the inference graph, and ran inference on my test dataset.
I now have:
- the detections as a tfrecord file, specified in the input config
- an eval_config file with the desired metrics set
When I try to compute the measures as described in the new object detector inference and evaluation measure computation tutorial with
python object_detection/metrics/offline_eval_map_corloc.py --eval_dir=/media/sf_shared --eval_config_path=/media/sf_shared/eval_config.pbtxt --input_config_path=/media/sf_shared/input_config.pbtxt
It returns this AttributeError:
INFO:tensorflow:Processing file: /media/sf_shared/detections.record
INFO:tensorflow:Processed 0 images...
Traceback (most recent call last):
File "object_detection/metrics/offline_eval_map_corloc.py", line 173, in <module>
tf.app.run(main)
File "/home/chrza/anaconda2/envs/tf27/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "object_detection/metrics/offline_eval_map_corloc.py", line 166, in main
metrics = read_data_and_evaluate(input_config, eval_config)
File "object_detection/metrics/offline_eval_map_corloc.py", line 124, in read_data_and_evaluate
decoded_dict)
File "/home/chrza/anaconda2/envs/tf27/lib/python2.7/site-packages/tensorflow/models/research/object_detection/utils/object_detection_evaluation.py", line 174, in add_single_ground_truth_image_info
(groundtruth_dict[standard_fields.InputDataFields.groundtruth_difficult]
AttributeError: 'NoneType' object has no attribute 'size'
Any hints?
I fixed it (temporarily) as follows:
if (standard_fields.InputDataFields.groundtruth_difficult in groundtruth_dict.keys()
        and groundtruth_dict[standard_fields.InputDataFields.groundtruth_difficult] is not None):
    if (groundtruth_dict[standard_fields.InputDataFields.groundtruth_difficult].size
            or not groundtruth_classes.size):
        groundtruth_difficult = groundtruth_dict[
            standard_fields.InputDataFields.groundtruth_difficult]
In place of the existing lines (195-198) in object_detection/utils/object_detection_evaluation.py.
The error is caused by the fact that the size of the difficulty field is checked even when no difficulty flag was passed. This fails if you skipped that parameter in your tfrecords. Perhaps this was the developers' intent, but the clarity of the documentation certainly leaves a lot to be desired.
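An alternative to patching the evaluator is to write an explicit (all-zero) difficult flag when building the tfrecords, so the field is never missing. A sketch, assuming the standard 'image/object/difficult' key used by the TF OD API dataset tools and a hypothetical num_boxes count:
import tensorflow as tf

num_boxes = 3  # hypothetical: number of ground-truth boxes in this image
feature = {
    # mark every box as "not difficult" so the evaluator never sees None
    'image/object/difficult': tf.train.Feature(
        int64_list=tf.train.Int64List(value=[0] * num_boxes)),
    # ... plus the usual box, class, and image features
}
example = tf.train.Example(features=tf.train.Features(feature=feature))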