I have a TensorFlow Keras model which is stored in .pb format, and I am converting it from .pb to .onnx format using the tf2onnx converter:
!python -m tf2onnx.convert --saved-model model.pb --output model.onnx
After converting, I see that my input layer is in NHWC format, and I need to convert it to NCHW. To achieve that, I am using:
!python -m tf2onnx.convert --saved-model model.pb --output model_3.onnx --inputs-as-nchw input0:0
which still gives me the same result, with the input in NHWC.
I have to consume the above model in NVIDIA Deepstream which only accepts NCHW format.
I found this link, which talks about transposing the input layer, but unfortunately that is not working either.
Convert between NHWC and NCHW in TensorFlow
import tensorflow as tf

# NHWC input batch: [batch, height, width, channels]
images_nhwc = tf.compat.v1.placeholder(tf.float32, [1, 200, 300, 3])
# transpose to NCHW: [batch, channels, height, width]
out = tf.transpose(images_nhwc, [0, 3, 1, 2])
print(out.get_shape())  # (1, 3, 200, 300)
model.build(out.get_shape())
It would be really helpful if some experts could share their thoughts on how to convert NHWC to NCHW.
I found the solution.
I had to use the latest code of tf2onnx.convert.from_keras, so I took the main branch of tf2onnx:
!pip install --force-reinstall git+https://github.com/onnx/tensorflow-onnx.git#main
!pip show tf2onnx
!pip freeze | grep tf2onnx
Once that was done, I was able to use the latest functionality and updated code at
https://github.com/onnx/tensorflow-onnx/tree/e896723e410a59a600d1a73657f9965a3cbf2c3b
Below is the code I used to convert my model from .pb to .onnx, along with the NHWC to NCHW conversion.
import tf2onnx

# give the list of *inputs* which should be converted and returned *as nchw*
_INPUT = model.input.name
model_proto, external_tensor_storage = tf2onnx.convert.from_keras(model, inputs_as_nchw=[_INPUT])
The biggest catch about the above code was [_INPUT], which was supposed to be a list; I was able to find this information in the test cases.
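For completeness, here is a minimal sketch of how the returned ModelProto can be written to disk (the output file name is just an example, not from the original code):

# serialize the converted ONNX model to a file (file name is illustrative)
with open("model_nchw.onnx", "wb") as f:
    f.write(model_proto.SerializeToString())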
I have retrained a TensorFlow 2.0 model that works as a one-class object detector, prepared with the Object Detection API v2 (https://tensorflow-object-detection-api-tutorial.readthedocs.io/).
After that I converted it to ONNX (tf2onnx.convert) and tested it - I got the same inference results.
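For reference, the conversion command was roughly along these lines (the paths and opset here are placeholders, not the exact values from my setup):

python -m tf2onnx.convert --saved-model exported-model/saved_model --output model.onnx --opset 13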
I have tested all of the following pretrained models (downloaded from the TF2 model zoo: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md):
ssd_mobilenet_v2_320x320_coco17_tpu-8
ssd_mobilenet_v1_fpn_640x640_coco17_tpu-8
ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8
ssd_resnet50_v1_fpn_640x640_coco17_tpu-8
I retrained it using a small batch of data.
The problem is with using it with GStreamer/DeepStream. As far as I have seen, GStreamer consumes either the ONNX model or the model after conversion to TensorRT. (If I provide ONNX, the model is of course also converted to TensorRT, but that is done by GStreamer right before running.)
I also tried the same pipeline with train -> convert to ONNX -> convert to TRT (or just providing the ONNX model to GStreamer). Same issue.
Error:
ERROR: [TRT]: [graph.cpp::computeInputExecutionUses::519] Error Code 9: Internal Error ((Unnamed Layer* 747) [Recurrence]: IRecurrenceLayer cannot be used to compute a shape tensor)
TensorRT Version: 8.2.1.8
tf2onnx Version: 1.9.3
Is there any chance to get some help?
Or maybe I should skip the ONNX model and just convert it from TensorFlow to a TensorRT engine? Is that possible?
Of course I can upload the model if it would help.
BR!
I am training the Mask R-CNN Inception V2 model in TensorFlow for further work with OpenVINO. After training the model, I freeze it using a script in the object_detection API directory:
python exporter_main_v2.py \
    --trained_checkpoint_dir training \
    --output_directory inference_graph \
    --pipeline_config_path training/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8.config
After this script, I get the saved model and pipeline files, which should later be used in OpenVINO.
The following error occurs when loading the resulting files into the Model Optimizer:
Model Optimizer version:
2020-08-20 11:37:05.425293: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
[ FRAMEWORK ERROR ] Cannot load input model: TensorFlow cannot read the model file: "C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model\saved_model.pb" is incorrect TensorFlow model file.
The file should contain one of the following TensorFlow graphs:
frozen graph in text or binary format
inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
meta graph
Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #43.
I trained the model following the example from the linked article, using my own dataset: https://gilberttanner.com/blog/train-a-mask-r-cnn-model-with-the-tensorflow-object-detection-api
On GPU, the model starts and works, but I need the converted model for OpenVINO.
Run the mo_tf.py script with a path to the SavedModel directory:
python3 mo_tf.py --saved_model_dir <SAVED_MODEL_DIRECTORY>
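In this case, based on the path that appears in the error message, the call would look something like this (adjust the path to your own directory layout):

python3 mo_tf.py --saved_model_dir "C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model"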
I have a problem converting Keras models into Layers API format models to use with TensorFlow.js.
I use the command:
$ tensorflowjs_converter --input_format keras kerasModels/vgg16_weights_tf_dim_ordering_tf_kernels.h5 convertedModels/
I get the error "KeyError: Can't open attribute (can't locate attribute 'keras_version')".
I assume you are trying to convert the model downloaded from here, which is possibly outdated now.
You can download a fresh VGG16 model from keras-applications using the following Python script:
from keras.applications.vgg16 import VGG16
model = VGG16(include_top=True, weights='imagenet')
model.save("VGG16.h5")
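The converter command from the question should then work on the freshly saved file, for example:

tensorflowjs_converter --input_format keras VGG16.h5 convertedModels/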
OS Platform and Distribution: Linux Ubuntu 14.04
TensorFlow version: 1.4.0, installed from binary
CUDA/cuDNN version: cuda 8.0
I have trained a customized model with TensorFlow and I am trying to turn it into a TensorFlow Lite model for mobile apps. My model is defined like this:
def P_Net(inputs, label=None, bbox_target=None, landmark_target=None, training=True):
    # define common params
    with slim.arg_scope([slim.conv2d],
                        activation_fn=prelu,
                        weights_initializer=slim.xavier_initializer(),
                        biases_initializer=tf.zeros_initializer(),
                        weights_regularizer=slim.l2_regularizer(0.0005),
                        padding='valid'):
        print inputs.get_shape()
        net = slim.conv2d(inputs, 28, 3, stride=1, scope='conv1')
        ......
        conv4_1 = slim.conv2d(net, num_outputs=2, kernel_size=[1, 1], stride=1, scope='conv4_1', activation_fn=tf.nn.softmax)
        #conv4_1 = slim.conv2d(net, num_outputs=1, kernel_size=[1, 1], stride=1, scope='conv4_1', activation_fn=tf.nn.sigmoid)
        print conv4_1.get_shape()
        # batch*H*W*4
        bbox_pred = slim.conv2d(net, num_outputs=4, kernel_size=[1, 1], stride=1, scope='conv4_2', activation_fn=None)
        print bbox_pred.get_shape()
where conv4_1 and conv4_2 are the output layers.
I freeze the model with:
freeze_graph.freeze_graph('out_put_model/model.pb', '', False, model_path, 'Squeeze,Squeeze_1', '', '', 'out_put_model/frozen_model.pb', '', '')
After that, I can use TensorBoard to view the graph. When I read it back to double-check, the information is identical to the checkpoint model.
Then I try to convert frozen_model.pb to a TensorFlow Lite model. Since TensorFlow 1.4.0 doesn't have the TensorFlow Lite module, I checked out TensorFlow from GitHub and ran toco with Bazel like this:
bazel run --config=opt //tensorflow/contrib/lite/toco:toco -- --input_file='/home/sens/mtcnn_cat/MTCNN-Tensorflow/test/out_put_model/frozen_model.pb' --output_file='/home/sens/mtcnn_cat/MTCNN-Tensorflow/test/out_put_model/pnet.tflite' --inference_type=FLOAT --input_shape=1,128,128,3 --input_array=image_height,image_width,input_image --output_array=Squeeze,Squeeze_1 --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE --dump_graphviz=/tmp
However, I get an error about the output array not being found:
INFO: Running command line: bazel-bin/tensorflow/contrib/lite/toco/toco '--input_file=/home/sens/mtcnn_cat/MTCNN-Tensorflow/test/out_put_model/frozen_model.pb' '--output_file=/home/sens/mtcnn_cat/MTCNN-Tensorflow/test/out_put_model/pnet.tflite' '--inference_type=FLOAT' '--input_shape=1,128,128,3' '--input_array=image_height,image_width,input_image' '--output_array=Squeeze,Squeeze_1' '--input_format=TENSORFLOW_GRAPHDEF' '--output_format=TFLITE' '--dump_graphviz=/tmp'
2018-04-03 11:17:37.412589: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Abs
2018-04-03 11:17:37.412660: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Abs
2018-04-03 11:17:37.412699: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Abs
2018-04-03 11:17:37.412880: F tensorflow/contrib/lite/toco/tooling_util.cc:686] Check failed: model.HasArray(output_array) Output array not found: Squeeze,Squeeze_1
Question:
How do I set the --output_array=Squeeze,Squeeze_1 parameter? I think it's the same as the output nodes passed to freeze_graph(); in TensorBoard I do find the "Squeeze" and "Squeeze_1" nodes.
How do I set the --input_shape=1,128,128,3 --input_array=image_height,image_width,input_image parameters? I checked and found that mobile models do have a fixed-size image input, but my model has no fixed input image size; it uses a fully convolutional input like:
self.image_op = tf.placeholder(tf.float32, name='input_image')
self.width_op = tf.placeholder(tf.int32, name='image_width')
self.height_op = tf.placeholder(tf.int32, name='image_height')
image_reshape = tf.reshape(self.image_op, [1, self.height_op, self.width_op, 3])
and a reshape to [1, height, width, 3].
So, how should I write this as the input shape?
For the input array:
[node.op.name for node in model.inputs]
For the output array:
[node.op.name for node in model.outputs]
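A minimal sketch of how these snippets can be used, assuming the model is available as a Keras model under TF 1.x graph mode (the file name is a placeholder):

import tensorflow as tf

# load the Keras model (file name is illustrative)
model = tf.keras.models.load_model("model.h5")

# in TF 1.x graph mode, model.inputs/model.outputs are graph tensors,
# so node.op.name gives the node names that toco expects
input_arrays = [node.op.name for node in model.inputs]
output_arrays = [node.op.name for node in model.outputs]
print(input_arrays)
print(output_arrays)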
Converting a frozen model to TFLite has never been an easy job, thanks to TensorFlow. I hope this command can help you summarize the graph and find the output and input arrays:
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph={PATH_TO_FROZEN_GRAPH}/optimized_best.pb
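If the summarize_graph binary is not built yet, it can typically be built first from the TensorFlow source tree (a hedged sketch; this is the Bazel target used in the TF 1.x sources):

bazel build tensorflow/tools/graph_transforms:summarize_graph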
I came across this issue when trying to retrain and then convert to tflite.
This is the solution that worked for me:
With 1.9 and above (and possibly 1.8 too, I haven't tested), you need to drop the --input_format field and change the --input_file param to --graph_def_file.
So you end up with a command that looks a bit like:
toco \
--graph_def_file=tf_files/retrained_graph.pb \
--output_file=tf_files/optimized_graph.lite \
--output_format=TFLITE \
--input_shape=1,${IMAGE_SIZE},${IMAGE_SIZE},3 \
--input_array=input \
--output_array=final_result \
--inference_type=FLOAT \
--inference_input_type=FLOAT
I was then able to complete the poets example and get my tflite file to work on Android.
Source:
https://github.com/googlecodelabs/tensorflow-for-poets-2/issues/68
You can use the tool below to determine the input and output arrays and the model size, along with other parameters for TFLite conversion. It also creates a nice visualization of the TensorFlow frozen graph.
Github link for the tool