Using model optimizer for tensorflow slim models - tensorflow

I am aiming to run inference on a TensorFlow Slim model with the Intel OpenVINO model optimizer, using the OpenVINO docs and slides for inference and the TF Slim docs for training the model.
It's a multi-class classification problem. I have trained a TF Slim mobilenet_v2 model from scratch (using the script train_image_classifier.py). Evaluation of the trained model on the test set gives relatively good results to begin with (using the script eval_image_classifier.py):
eval/Accuracy[0.8017]
eval/Recall_5[0.9993]
However, a single .ckpt file is not saved (even though at the end of the train_image_classifier.py run there is a message like "model.ckpt is saved to checkpoint_dir"); instead there are three files (.ckpt-180000.data-00000-of-00001, .ckpt-180000.index, .ckpt-180000.meta).
The OpenVINO model optimizer requires a single checkpoint file.
According to the docs, I call mo_tf.py with the following parameters:
python mo_tf.py --input_model D:/model/mobilenet_v2_224.pb --input_checkpoint D:/model/model.ckpt-180000 -b 1
It gives this error (the same happens if I pass --input_checkpoint D:/model/model.ckpt):
[ ERROR ] The value for command line parameter "input_checkpoint" must be existing file/directory, but "D:/model/model.ckpt-180000" does not exist.
The error message is clear: no such file exists on disk. But as far as I know, most TF utilities resolve .ckpt-????.meta to .ckpt under the hood.
Trying to call:
python mo_tf.py --input_model D:/model/mobilenet_v2_224.pb --input_meta_graph D:/model/model.ckpt-180000.meta -b 1
Causes:
[ ERROR ] Unknown configuration of input model parameters
It doesn't matter to me how the graph gets converted to the OpenVINO intermediate representation; I just need to reach that result.
Thanks a lot.
EDIT
I managed to run the OpenVINO model optimizer on a frozen graph of the TF Slim model. However, I still have no idea why my previous attempts (based on the docs) failed.

You can try converting the model to the frozen format (.pb) and then converting it with OpenVINO.
.ckpt-meta contains the metagraph: the computation graph structure without variable values (the one you can observe in TensorBoard).
.ckpt-data contains the variable values, without the graph skeleton or structure. To restore a model we need both the meta and data files.
A .pb file saves the whole graph (meta + data).
As per the documentation of OpenVINO:
When a network is defined in Python* code, you have to create an inference graph file. Usually, graphs are built in a form that allows model training. That means that all trainable parameters are represented as variables in the graph. To use the graph with the Model Optimizer, it should be frozen.
https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow
OpenVINO optimizes the model by converting the weighted graph that is passed to it in frozen form.
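For reference, this is roughly how the freezing step can look for a TF Slim checkpoint. It is only a sketch, assuming TensorFlow 1.x, that mobilenet_v2_224.pb is the exported inference GraphDef, and that the output node is named MobilenetV2/Predictions/Reshape_1 (the actual name may differ in your graph; check it with summarize_graph or TensorBoard):
python -m tensorflow.python.tools.freeze_graph \
--input_graph D:/model/mobilenet_v2_224.pb \
--input_binary true \
--input_checkpoint D:/model/model.ckpt-180000 \
--output_node_names MobilenetV2/Predictions/Reshape_1 \
--output_graph D:/model/frozen_mobilenet_v2_224.pb
The resulting frozen_mobilenet_v2_224.pb can then be passed to the Model Optimizer on its own:
python mo_tf.py --input_model D:/model/frozen_mobilenet_v2_224.pb -b 1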

Related

Incorrect freezing of weights maskrcnn Tensorflow 2 in object_detection_API

I am training the Mask R-CNN Inception V2 model in TensorFlow 2 for further work with OpenVINO. After training the model, I freeze it using a script from the object_detection_API directory:
python exporter_main_v2.py \
--trained_checkpoint_dir training \
--output_directory inference_graph \
--pipeline_config_path training/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8.config
After running this script, I get the saved model and pipeline files, which should later be used with OpenVINO.
The following error occurs when loading the generated files into the Model Optimizer:
Model Optimizer version:
2020-08-20 11:37:05.425293: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
[ FRAMEWORK ERROR ] Cannot load input model: TensorFlow cannot read the model file: "C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model\saved_model.pb" is incorrect TensorFlow model file.
The file should contain one of the following TensorFlow graphs:
frozen graph in text or binary format
inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
meta graph
Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #43.
I trained the model by following the example from the linked article, using my own dataset: https://gilberttanner.com/blog/train-a-mask-r-cnn-model-with-the-tensorflow-object-detection-api
On the GPU, the model starts and works, but I need to get a converted model for OpenVINO.
Run the mo_tf.py script with a path to the SavedModel directory:
python3 mo_tf.py --saved_model_dir <SAVED_MODEL_DIRECTORY>
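If the plain SavedModel conversion still fails, Object Detection API models usually also need the pipeline config and an operations-replacement config. A hedged example, assuming an OpenVINO release that ships mask_rcnn_support_api_v2.0.json (the exact JSON file name and install path depend on your OpenVINO and Object Detection API versions):
python3 mo_tf.py \
--saved_model_dir inference_graph/saved_model \
--tensorflow_object_detection_api_pipeline_config training/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8.config \
--transformations_config <OPENVINO_INSTALL_DIR>/deployment_tools/model_optimizer/extensions/front/tf/mask_rcnn_support_api_v2.0.json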

Can openVINO model optimiser be used to convert tensorflow ann models?

I trained an ANN model and saved it as a .h5 file. Then I converted it into a TensorFlow model and got a 'saved_model.pb' file and a 'variables' folder.
Then I used the OpenVINO Model Optimizer to generate the IR files:
python3 mo_tf.py --input_model saved_model.pb
But I get the following error:
[ FRAMEWORK ERROR ] Error parsing message
TensorFlow cannot read the model file: "/home/user/Downloads/OpenVino/dldt-2019/model-optimizer/saved_model.pb" is incorrect TensorFlow model file
Can OpenVINO be used to convert ANN models in the first place?
Unfortunately, OpenVINO does not support the Keras model format.
I guess you got an error from the Model Optimizer because your model is not frozen.
There are a lot of scripts that can be used to convert a Keras model to a frozen TensorFlow graph. I can recommend this one for example.
Hope it will help.
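As a rough sketch of such a conversion, assuming TensorFlow 2.x and that the Keras model is stored as model.h5 (the file names and the output path below are placeholders):
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Load the Keras model saved as .h5
model = tf.keras.models.load_model("model.h5")

# Wrap the model in a concrete function and fold its variables into constants
full_model = tf.function(lambda x: model(x))
concrete_func = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))
frozen_func = convert_variables_to_constants_v2(concrete_func)

# Write the frozen GraphDef; this .pb is what mo_tf.py expects with --input_model
tf.io.write_graph(frozen_func.graph.as_graph_def(),
                  logdir=".", name="frozen_model.pb", as_text=False)
The frozen_model.pb produced this way can then be passed to mo_tf.py via --input_model.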

Determine tag-sets in Tensorflow Hub saved model

I'm trying to determine if this Tensorflow Hub model can be converted to TFLITE format (and eventually compiled for the TPU/Coral Board), by doing something like this.
converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model("./")
tflite_model = converter.convert()
However, I need to specify the model's tag-sets, and this command gives no results (in both TF 1.13.1 and 2.0):
% saved_model_cli show --dir .
The given SavedModel contains the following tag-sets:
The saved_model.pb file is in this directory, and Netron is too unwieldy (given the size of the model it barely opens), so it's difficult to inspect. The model can be opened for inference with detector = hub.load(module_url).signatures['default'], so perhaps I can get a model summary from the detector object?
Any ideas how I can determine the model structure?
Any insight into the practicality of converting this model to TFLITE and then compiling for the TPU would be appreciated.
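One workaround is to parse saved_model.pb directly and read the tags out of each MetaGraphDef; a small sketch, assuming the file sits in the current directory:
from tensorflow.core.protobuf import saved_model_pb2

# Parse the SavedModel protobuf and print the tag-set of every MetaGraphDef
saved_model = saved_model_pb2.SavedModel()
with open("saved_model.pb", "rb") as f:
    saved_model.ParseFromString(f.read())
for meta_graph in saved_model.meta_graphs:
    print(list(meta_graph.meta_info_def.tags))
The printed tag-sets are what tf.compat.v1.lite.TFLiteConverter.from_saved_model expects in its tag_set argument.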

How to convert a TensorFlow SavedModel graph to a Caffe model?

I want to use MMdnn to convert a tensorflow ResNet model to other frameworks. It seems that I can only use mmconvert to read from a .pb frozen graph file.
However, when using tf.estimator.Estimator, the .pb file that it creates is a SavedModelDef. I understand this to be a wrapper around the tf GraphDef. Thus the GraphDef .pb file can be extracted from the SavedModel using freeze_graph.py.
From there, I will need the name of the input node in the tf GraphDef. But I'm unsure how to identify that name by looking at the .pbtxt. The tf.Estimator takes its input from a tf.Dataset object, according to the framework.
I'm guessing there should be a tf.Placeholder somewhere that accepts the input. But I'm not sure how to find what the input node actually is.
Answering my own question here. The freeze_graph utility that comes with tensorflow is useful for extracting the graphdef from the tf SavedModel format.
To find the name of the input node, make sure to save the tf SavedModel in pbtxt format. Open it up and look for the first node of your compute graph, e.g. if using tf resnet, the first nodes will be named resnet_model/*. Find the node that feeds this node, and you will have the name of the input node to specify to the MMdnn tools. I expected this to be a tf.Placeholder that the Estimator adds for inputs. This node was just named Placeholder, so that's what I specified as the input node.
First extract the compute graph.
freeze_graph --input_saved_model_dir <path/to/saved_model_dir> --output_node_names softmax --output_graph ./graph_def.pb
Then use MMdnn to convert it to caffe.
mmconvert -sf tensorflow -iw ./graph_def.pb --inNodeName Placeholder --inputShape 224,224,3 --dstNodeName softmax -df caffe -om tf_resnet
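If the node names are not obvious, a short sketch like this (assuming TensorFlow 1.x / the tf.compat.v1 API) lists the Placeholder ops in the extracted GraphDef, which are the candidates for --inNodeName:
import tensorflow as tf

# Load the frozen GraphDef produced by freeze_graph
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("./graph_def.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Placeholder ops are the graph inputs; their names are what MMdnn needs
for node in graph_def.node:
    if node.op == "Placeholder":
        print(node.name)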

Error with 8-bit Quantization in Tensorflow

I have been experimenting with the new 8-bit quantization feature available in TensorFlow. I could run the example given in the blog post (quantization of GoogLeNet) without any issue and it works fine for me!
Now I would like to apply the same to a simpler network. So I used a pre-trained network for CIFAR-10 (which was trained in Caffe), extracted its parameters, created the corresponding graph in TensorFlow, initialized the weights with the pre-trained values, and finally saved it as a GraphDef object. See this IPython Notebook for the full procedure.
Now I applied the 8-bit quantization with the TensorFlow script as mentioned in Pete Warden's blog:
bazel-bin/tensorflow/contrib/quantization/tools/quantize_graph --input=cifar.pb --output=qcifar.pb --mode=eightbit --bitdepth=8 --output_node_names="ArgMax"
Now I wanted to run classification on this quantized network. So I loaded the new qcifar.pb into a TensorFlow session and passed the image (the same way I passed it to the original version). The full code can be found in this IPython Notebook.
But as you can see at the end, I am getting the following error:
NotFoundError: Op type not registered 'QuantizeV2'
Can anybody suggest what I am missing here?
Because the quantized ops and kernels are in contrib, you'll need to explicitly load them in your python script. There's an example of that in the quantize_graph.py script itself:
from tensorflow.contrib.quantization import load_quantized_ops_so
from tensorflow.contrib.quantization.kernels import load_quantized_kernels_so
This is something that we should update the documentation to mention!
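Putting it together, a sketch of loading the quantized graph with those ops registered (mirroring the imports used in quantize_graph.py; this assumes the same old TensorFlow release where quantization still lived under contrib):
import tensorflow as tf
from tensorflow.contrib.quantization import load_quantized_ops_so
from tensorflow.contrib.quantization.kernels import load_quantized_kernels_so

# The imports above make the quantized ops and kernels available to the runtime
graph_def = tf.GraphDef()
with tf.gfile.GFile("qcifar.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Session() as sess:
    tf.import_graph_def(graph_def, name="")
    # ... feed the image and fetch the ArgMax output as with the original graph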