What are the input and output node names in Inception V3 with the slim library? - tensorflow

I retrained the InceptionV3 model on my own data using TensorFlow Slim. The following files are generated after training:
graph.pbtxt, model.ckpt, model.meta, model.index, checkpoint,
events.out.tfevents
I want to freeze the graph files and create a .pb file. I don't know what the input node and output node are in Inception V3, and using TensorBoard is complex for me.
What are the input/output nodes in InceptionV3 (in slim/nets), or how can I find the input/output nodes?
OS: Windows 7

(A). If you make it to the bottom of this link, you will find this (specific to InceptionV3):
input_layer=input
output_layer=InceptionV3/Predictions/Reshape_1
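With those names, freezing the retrained checkpoint could look like the sketch below, assuming TF 1.x's freeze_graph tool and the file names generated above:
python -m tensorflow.python.tools.freeze_graph \
    --input_graph graph.pbtxt \
    --input_checkpoint model.ckpt \
    --output_node_names InceptionV3/Predictions/Reshape_1 \
    --output_graph frozen_inception_v3.pb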
(B). Another way is to print all tensors of the model and pick out the input/output tensors:
from tensorflow.python.tools.inspect_checkpoint import print_tensors_in_checkpoint_file

# Print the name (and value) of every tensor stored in the checkpoint.
ckpt_path = "model.ckpt"
print_tensors_in_checkpoint_file(file_name=ckpt_path, tensor_name='',
                                 all_tensors=True, all_tensor_names=True)
(C). If you need to print the tensor names of a .pb file, you can use simple code like the sketch below.
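A minimal sketch, assuming a frozen graph named frozen_inception_v3.pb and the TF 1.x API:
import tensorflow as tf

# Load the frozen GraphDef and print every node name; the first and last
# entries are usually the input and output nodes.
graph_def = tf.GraphDef()
with open('frozen_inception_v3.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    print(node.name)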
Check what would work for you.

Related

What are these two files in the CenterNet MobileNetV2 model from the TensorFlow OD model zoo? Do we need them?

Do we need these files? The TensorFlow docs don't say anything about them.
The model.tflite file is the pretrained model in .tflite format. So if you want to use the model out of the box, you can use this file.
The label_map.txt is used to map the output of your network to actual comprehensible results. That is, both files are needed if you want to use the model out of the box; neither is needed for re-training.
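For illustration, running the model out of the box might look like this sketch, using the TFLite interpreter; the dummy input and the assumption that label_map.txt holds one label per line are mine:
import numpy as np
import tensorflow as tf

# Load the pretrained model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run inference on a dummy input of the expected shape and dtype.
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()

# Inspect the outputs; for a detection model these are typically
# boxes, classes, and scores.
for detail in output_details:
    print(detail['name'], interpreter.get_tensor(detail['index']).shape)

# label_map.txt maps class indices to names (one label per line assumed).
with open('label_map.txt') as f:
    labels = [line.strip() for line in f]
print(len(labels), 'labels, e.g.', labels[0])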

How to convert a TensorFlow SavedModel graph to a Caffe model?

I want to use MMdnn to convert a tensorflow ResNet model to other frameworks. It seems that I can only use mmconvert to read from a .pb frozen graph file.
However, when using tf.estimator.Estimator, the .pb file that it creates is a SavedModelDef. I understand this to be a wrapper around the tf GraphDef. Thus the GraphDef .pb file can be extracted from the SavedModel using freeze_graph.py.
From there, I will need the name of the input node in the tf GraphDef. But I'm unsure how to identify the name by looking at the .pbtxt. The tf.Estimator takes its input from a tf.Dataset object, according to the framework.
I'm guessing there should be a tf.Placeholder somewhere that accepts the input. But I'm not sure how to find what the input node actually is.
Answering my own question here. The freeze_graph utility that comes with tensorflow is useful for extracting the graphdef from the tf SavedModel format.
To find the name of the input node, make sure to save the tf SavedModel in pbtxt format. Open it up and look for the first node of your compute graph, e.g. if using tf resnet, the first nodes will be named resnet_model/*. Find the node that feeds this node, and you will have the name of the input node to specify to the MMdnn tools. I expected this to be a tf.Placeholder that the Estimator adds for inputs. This node was just named Placeholder, so that's what I specified as the input node.
First extract the compute graph.
freeze_graph --input_saved_model_dir <path/to/saved_model_dir> --output_node_names softmax --output_graph ./graph_def.pb
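You can sanity-check the placeholder name in the extracted graph before converting; a sketch assuming the TF 1.x API and the graph_def.pb produced above:
import tensorflow as tf

# Load the frozen GraphDef and list all placeholder ops; the Estimator's
# input placeholder should show up here.
graph_def = tf.GraphDef()
with open('./graph_def.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    if node.op == 'Placeholder':
        print(node.name)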
Then use MMdnn to convert it to caffe.
mmconvert -sf tensorflow -iw ./graph_def.pb --inNodeName Placeholder --inputShape 224,224,3 --dstNodeName softmax -df caffe -om tf_resnet

Using model optimizer for tensorflow slim models

I am aiming to run inference on a TensorFlow Slim model with the Intel OpenVINO optimizer, using the OpenVINO docs and slides for inference and the TF Slim docs for training the model.
It's a multi-class classification problem. I have trained a TF Slim mobilenet_v2 model from scratch (using the script train_image_classifier.py). Evaluation of the trained model on the test set gives relatively good results to begin with (using the script eval_image_classifier.py):
eval/Accuracy [0.8017], eval/Recall_5 [0.9993]
However, a single .ckpt file is not saved (even though at the end of the train_image_classifier.py run there is a message like "model.ckpt is saved to checkpoint_dir"); there are 3 files (.ckpt-180000.data-00000-of-00001, .ckpt-180000.index, .ckpt-180000.meta) instead.
OpenVINO model optimizer requires a single checkpoint file.
According to the docs, I call mo_tf.py with the following params:
python mo_tf.py --input_model D:/model/mobilenet_v2_224.pb --input_checkpoint D:/model/model.ckpt-180000 -b 1
It gives the following error (the same if I pass --input_checkpoint D:/model/model.ckpt):
[ ERROR ] The value for command line parameter "input_checkpoint" must be existing file/directory, but "D:/model/model.ckpt-180000" does not exist.
The error message is clear: there are no such files on disk. But as far as I know, most TF utilities convert .ckpt-????.meta to .ckpt under the hood.
Trying to call:
python mo_tf.py --input_model D:/model/mobilenet_v2_224.pb --input_meta_graph D:/model/model.ckpt-180000.meta -b 1
Causes:
[ ERROR ] Unknown configuration of input model parameters
It doesn't matter to me which way I transfer the graph to the OpenVINO intermediate representation; I just need to reach that result.
Thanks a lot.
EDIT
I managed to run the OpenVINO model optimizer on a frozen graph of the TF Slim model. However, I still have no idea why my previous attempts (based on the docs) failed.
You can try converting the model to frozen format (.pb) and then converting it using OpenVINO.
.ckpt-meta contains the metagraph, i.e. the computation graph structure without variable values (the one you can observe in TensorBoard).
.ckpt-data contains the variable values, without the graph structure. To restore a model we need both the meta and data files.
A .pb file saves the whole graph (meta + data).
As per the documentation of OpenVINO:
When a network is defined in Python* code, you have to create an inference graph file. Usually, graphs are built in a form that allows model training. That means that all trainable parameters are represented as variables in the graph. To use the graph with the Model Optimizer, it should be frozen.
https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow
OpenVINO optimizes the model by converting the weighted graph passed to it in frozen form.
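For reference, a freezing sketch for the checkpoint in the question; the output node name MobilenetV2/Predictions/Reshape_1 is the usual one for slim's mobilenet_v2, but worth verifying against your own graph:
python -m tensorflow.python.tools.freeze_graph \
    --input_meta_graph D:/model/model.ckpt-180000.meta \
    --input_checkpoint D:/model/model.ckpt-180000 \
    --input_binary=true \
    --output_node_names MobilenetV2/Predictions/Reshape_1 \
    --output_graph D:/model/frozen_mobilenet_v2.pb
The frozen graph can then be passed to the model optimizer directly:
python mo_tf.py --input_model D:/model/frozen_mobilenet_v2.pb -b 1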

Converting a model trained and saved with tf.estimator to .pb

I have a model trained with tf.estimator, and it was exported after training as below:
serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
    feature_placeholders)
classifier.export_savedmodel(
    r'./path/to/model/trainedModel', serving_input_fn)
This gives me a saved_model.pb and a folder which contains weights as a .data file. I can reload the saved model using
predictor = tf.contrib.predictor.from_saved_model(r'./path/to/model/trainedModel')
I'd like to run this model on Android, and that requires the model to be in .pb format. How can I freeze this predictor for use on the Android platform?
I don't deploy to Android, so you might need to customize the steps a bit, but this is how I do this:
Run <tensorflow_root_installation>/python/tools/freeze_graph.py with arguments --input_saved_model_dir=<path_to_the_savedmodel_directory>, --output_node_names=<full_name_of_the_output_node> (you can get the name of the output node from graph.pbtxt, although that's not the most comfortable of ways), --output_graph=frozen_model.pb
(optionally) Run <tensorflow_root_installation>/python/tools/optimize_for_inference.py with adequate arguments. Alternatively you can look up the Graph Transform Tool and selectively apply optimizations.
At the end of step 1 you'll already have a frozen model with no variables left, that you can then deploy to Android.
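If digging through graph.pbtxt is too unwieldy, the output node name can also be read from the SavedModel's serving signature; a sketch for the TF 1.x API, where the timestamped subdirectory name is a placeholder:
import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # Load the SavedModel produced by export_savedmodel; it lives in a
    # timestamped subdirectory of the export path.
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING],
        r'./path/to/model/trainedModel/<timestamp>')
    sig = meta_graph.signature_def['serving_default']
    for key, tensor_info in sig.outputs.items():
        # tensor_info.name looks like 'scores:0'; drop the ':0' suffix
        # when passing it to freeze_graph's --output_node_names.
        print(key, '->', tensor_info.name)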

[Tensorflow] Different graph architecture in COCO pre-trained model and re-trained model

I have followed the doc:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_locally.md
to retrain the model from ssd_mobilenet_v1_coco_2017_11_17.tar.gz.
I only modified the pipeline config (see the sketch below) to:
- set fine-tuning to true
- set the model path
- set the training data path to the actual path of the tfrecord
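For context, a hypothetical excerpt of the modified pipeline.config; the field names follow the TF Object Detection API schema and the paths are placeholders:
train_config {
  fine_tune_checkpoint: "ssd_mobilenet_v1_coco_2017_11_17/model.ckpt"
  from_detection_checkpoint: true
}
train_input_reader {
  tf_record_input_reader {
    input_path: "data/train.record"
  }
  label_map_path: "data/label_map.pbtxt"
}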
Then I used the export script from:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/exporting_models.md
Finally, I got the .pb for my own data.
But I found that the retrained model does not have the same architecture as the one in ssd_mobilenet_v1_coco_2017_11_17.tar.gz.
Here are the pictures I captured from TensorBoard.
The retrained one, with my own data and pipeline config:
https://i.stack.imgur.com/kegcT.jpg
The original one, from the .pb file within ssd_mobilenet_v1_coco_2017_11_17.tar.gz:
https://i.stack.imgur.com/IAGi3.jpg
According to the pictures from TensorBoard, there are two input tensors in BoxPredictor in the retrained model, but three in the original.
Could anyone help me solve this problem?
Thanks...
PS: I am using TensorFlow version 1.4.0-GPU.