Background:
I am trying to convert the TF2 model for SSD MobileNet V2 FPNLite 320x320 (for example) from the official TF model zoo. The model should eventually run on a Raspberry Pi, so I would like it to run on the TFLite interpreter (without full TF). The docs imply that SSD model conversion is supported.
What's happening:
The process is detailed in this Colab notebook. It is failing with the error:
ConverterError: <unknown>:0: error: loc(callsite(callsite("Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField_1/Size#__inference___call___23519" at "StatefulPartitionedCall#__inference_signature_wrapper_25508") at "StatefulPartitionedCall")): 'tf.Size' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.Size {device = ""}
If I add the flag tf.lite.OpsSet.SELECT_TF_OPS, the conversion works, but the model won't run on the Raspberry Pi, as its TFLite interpreter does not have those ops.
Can this be done? Has anyone succeeded?
Since tf.Size is not natively supported by TFLite, you can use TF Select mode, which falls back to TensorFlow for the missing op; during conversion this is enabled with the SELECT_TF_OPS flag you already tried.
When you run inference, you will need to use an Interpreter that has the Select TF ops linked in.
See the guide on running inference.
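For example, a minimal sketch of the inference side (the .tflite file name and dummy input below are placeholders): the interpreter that runs a model containing Select TF ops must have the Flex delegate linked in. The full tensorflow pip package bundles it, whereas the plain tflite_runtime wheel does not and would need a custom build.

import numpy as np
import tensorflow as tf  # full TF package, not tflite_runtime

# Load a model converted with SELECT_TF_OPS; tf.lite.Interpreter from the
# full package can resolve the embedded TF ops via the Flex delegate.
interpreter = tf.lite.Interpreter(model_path='model_with_flex.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input with the model's expected shape and dtype, just to smoke-test.
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
detections = interpreter.get_tensor(output_details[0]['index'])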
Related
I fine-tuned an SSD model to recognize a custom object.
I followed the tutorials, ran the training process, and exported the model. I tested it for inference and everything works great.
So, now I have a structure like:
exported-models/
|
---- SSD_custom_model/
     |
     -------- checkpoint/
     -------- saved_model/
     -------- pipeline.config
which I assume is what is referred to as "Saved model" in the TensorFlow documentation.
So, I wanted to convert this model to TensorFlow Lite to test it on an Android device. I checked the tutorials and I'm trying:
import tensorflow as tf

saved_model_dir = 'exported-models/SSD_custom_model/'

# Convert the model. I tried either just
# converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
# or, with more options:
converter = tf.lite.TFLiteConverter.from_saved_model(
    saved_model_dir, signature_keys=['serving_default'])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

# Save the model.
with open('tflite/custom_model.tflite', 'wb') as f:
    f.write(tflite_model)
And I'm getting the error
File "/home/lews/anaconda3/envs/tf/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 216, in toco_convert_protos
raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc(callsite(callsite("map/TensorArrayV2_1#__inference_call_func_11694" at "StatefulPartitionedCall#__inference_signature_wrapper_14068") at "StatefulPartitionedCall")): requires element_shape to be 1D tensor during TF Lite transformation pass
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("map/TensorArrayV2_1#__inference_call_func_11694" at "StatefulPartitionedCall#__inference_signature_wrapper_14068") at "StatefulPartitionedCall")): failed to legalize operation 'tf.TensorListReserve' that was explicitly marked illegal
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
It seems to be complaining about the input shape ('requires element_shape to be 1D tensor during TF Lite transformation pass'). Maybe I should've modified something about the model before the fine-tuning process? Or after that?
Hi, I'm doing the same work and encountered the same error, but I solved it.
The model I converted is SSD MobileNet V2, and I'm using TensorFlow 2.4, so I believe this will work for you.
All you need to do is create a new conda environment (Python 3.8 is fine) and then install tf-nightly:
pip install tf-nightly
It's important to note that the version of tf-nightly must be >= 2.5.
At first I used tf-nightly 2.3 and encountered another error. After I upgraded to 2.5, the converter finally worked.
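Once the fresh environment is set up, a quick sanity check (just a sketch) before retrying the conversion:

import tensorflow as tf
# The fix above needs TF >= 2.5; a nightly build reports something like
# '2.5.0-dev20210xxx'.
print(tf.__version__)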
I converted a pretrained Keras model to use it with TensorFlow.js, following the steps in this guide.
Now, when I try to import it in JavaScript using
const model = tf.loadModel("{% static "keras/model.json" %}");
The following error shows up:
Uncaught (in promise) Error: Unknown layer: GaussianNoise. This may be due to one of the following reasons:
1. The layer is defined in Python, in which case it needs to be ported to TensorFlow.js or your JavaScript code.
2. The custom layer is defined in JavaScript, but is not registered properly with
tf.serialization.registerClass().
at new t (errors.ts:48)
at deserializeKerasObject (generic_utils.ts:239)
at deserialize (serialization.ts:31)
at t.fromConfig (models.ts:940)
at deserializeKerasObject (generic_utils.ts:274)
at deserialize (serialization.ts:31)
at models.ts:302
at common.ts:14
at Object.next (common.ts:14)
at i (common.ts:14)
I'm using version 0.15.3 of TensorFlow.js, imported this way:
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.15.3/dist/tf.min.js"></script>
I trained my neural network with TensorFlow 1.12.0 and Keras 2.2.4.
You are using the layer GaussianNoise, which is not yet supported by tfjs.
Consider replacing this layer with one that is supported.
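For instance, a hedged sketch assuming a plain Sequential Keras model named model: GaussianNoise is only active during training (it is an identity at inference time and has no weights), so one option is simply to drop it before converting for TensorFlow.js.

from keras.models import Sequential
from keras.layers import GaussianNoise

# Rebuild the model without the noise layers; the remaining layers keep
# their trained weights, and inference-time predictions are unchanged.
clean_model = Sequential(
    [layer for layer in model.layers if not isinstance(layer, GaussianNoise)]
)
clean_model.save('model_without_noise.h5')
# Then convert as before, e.g.:
#   tensorflowjs_converter --input_format keras model_without_noise.h5 web_model/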
I'm trying to export a SavedModel of a classifier of type TPUEstimator. Since I'm trying to export the model to run predictions on a GPU/CPU, the use_tpu parameter of TPUEstimator was set to False.
When I try to save the model, the following error is thrown:
NotImplementedError: Operation of type AssignVariableOp
(AssignVariableOp) is not supported on the TPU for inference. Execution
will fail if this op is used in the graph. Make sure your variables are
using variable_scope.
Since I plan to serve the model through a GPU/CPU, the Op shouldn't be a problem. How can I export this estimator as SavedModel?
This might help: just before calling export_savedmodel(...), set
estimator._export_to_tpu = False
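In context, a minimal sketch (assuming estimator is your TPUEstimator built with use_tpu=False and serving_input_receiver_fn is the serving input function you already use):

estimator._export_to_tpu = False  # skip building the TPU inference graph
estimator.export_savedmodel(
    export_dir_base='exported_savedmodel',
    serving_input_receiver_fn=serving_input_receiver_fn)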
If you actually don't need TPU inference support you can create a tf.estimator.Estimator instead of a tf.contrib.tpu.TPUEstimator one, using the same model_fn and trained model. Then, you should be able to export the model.
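A hedged sketch of that route (TF 1.x), reusing model_fn, model_dir, and serving_input_receiver_fn from your existing training code; note that if your model_fn returns a tf.contrib.tpu.TPUEstimatorSpec, the CPU/GPU path should return a tf.estimator.EstimatorSpec instead (or convert it):

import tensorflow as tf

# Plain Estimator pointed at the checkpoints the TPUEstimator produced.
estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir=model_dir)
estimator.export_savedmodel(
    export_dir_base='exported_savedmodel',
    serving_input_receiver_fn=serving_input_receiver_fn)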
TensorFlow Object Detection API
I'm using the TensorFlow Object Detection API to retrain MobileNet on my own dataset. The issue occurs when I try to run my inference graph after it has been both frozen and quantized.
System:
Ubuntu 16.04,
TensorFlow 1.2 (from source, CPU only),
Bazel 0.4.5
Issue:
1. Use the provided frozen_graph.pb from the model zoo.
2. Quantize to 8-bit using bazel-bin/tensorflow/tools/graph_transforms/transform_graph.
3. Run inference.
This works. However:
1. Re-train and produce my own frozen_graph.pb using object_detection/export_inference_graph.py.
2. Quantize to 8-bit using bazel-bin/tensorflow/tools/graph_transforms/transform_graph.
3. Run inference. <-- Produces error
Does NOT work, and the error I'm getting during the attempt to run the graph is:
File "/home/unibap/TensorFlow/tensorflow-python2-sse4.2/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1298, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: The node 'Preprocessor/map/while/ResizeImage/ResizeBilinear/eightbit' has inputs from different frames. The input 'Preprocessor/map/while/ResizeImage/size' is in frame 'Preprocessor/map/while/Preprocessor/map/while/'. The input 'Preprocessor/map/while/ResizeImage/ResizeBilinear_eightbit/Preprocessor/map/while/ResizeImage/ExpandDims/quantize' is in frame ''.
Since I can quantize and run the provided frozen_graph.pb, the issue has to be with the export tool, right? Which export tool was used to create the frozen_graph.pb files that are in the model zoo? Or how was the export tool called?
PS:
Quote from the comments in export_inference_graph.py, assuring me that it should produce a frozen graph if a checkpoint is provided:
"Optionally, one can freeze the graph by converting the weights in the provided
checkpoint as graph constants thereby eliminating the need to use a checkpoint
file during inference."
Best
I have been experimenting with the new 8-bit quantization feature available in TensorFlow. I could run the example given in the blog post (quantization of GoogLeNet) without any issue and it works fine for me.
Now, I would like to apply the same to a simpler network. So I used a pre-trained network for CIFAR-10 (which was trained in Caffe), extracted its parameters, created the corresponding graph in TensorFlow, initialized the weights with these pre-trained weights, and finally saved it as a GraphDef object. See this IPython Notebook for the full procedure.
Now I applied the 8-bit quantization with the TensorFlow script as mentioned in Pete Warden's blog:
bazel-bin/tensorflow/contrib/quantization/tools/quantize_graph --input=cifar.pb --output=qcifar.pb --mode=eightbit --bitdepth=8 --output_node_names="ArgMax"
Now I wanted to run classification on this quantized network. So I loaded the new qcifar.pb into a TensorFlow session and passed the image (the same way I passed it to the original version). The full code can be found in this IPython Notebook.
But as you can see at the end, I am getting following error:
NotFoundError: Op type not registered 'QuantizeV2'
Can anybody suggest what I am missing here?
Because the quantized ops and kernels are in contrib, you'll need to explicitly load them in your Python script. There's an example of that in the quantize_graph.py script itself:
from tensorflow.contrib.quantization import load_quantized_ops_so
from tensorflow.contrib.quantization.kernels import load_quantized_kernels_so
This is something that we should update the documentation to mention!
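For instance, a hedged sketch of running the quantized graph with those contrib modules loaded first (file and node names taken from the question; see quantize_graph.py for exactly how it invokes the loaders):

import tensorflow as tf
# These contrib imports bring in the quantized ops/kernels (e.g. QuantizeV2);
# mirror how quantize_graph.py loads them in your own script.
from tensorflow.contrib.quantization import load_quantized_ops_so
from tensorflow.contrib.quantization.kernels import load_quantized_kernels_so

with open('qcifar.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Session() as sess:
    tf.import_graph_def(graph_def, name='')
    output = sess.graph.get_tensor_by_name('ArgMax:0')
    # then: sess.run(output, feed_dict={input_tensor_name: image}), using the
    # same input tensor name and preprocessing as in the original notebook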