How to use freeze_graph.py tool in TensorFlow v1 - tensorflow

Is it possible to use the freeze_graph.py tool with models saved via saver.save in TensorFlow v1? If so, how?
I have code that looks roughly like this:
import tensorflow as tf  # TensorFlow 1.x

supervisor = tf.train.Supervisor(logdir=output_directory_path)
with supervisor.managed_session() as session:
    # train the model here
    supervisor.saver.save(session, output_directory_path)
This produces a directory containing:
checkpoint
output
output-16640.data-00000-of-00001
output-16640.index
output-16640.meta
Where output is a directory containing the files for intermediate training steps. The rest are files.
My understanding is that this is a meta graph (the .meta file) and its variables (the .data* file) in Saver V2 format. These files contain the data the freeze_graph.py tool needs, but it is unclear how to tell the tool to load the data from them.
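A quick way to confirm what a V2 checkpoint prefix actually refers to is to open it with tf.train.NewCheckpointReader. A minimal sketch, assuming TF 1.x and the output-16640 prefix from the listing above:

import tensorflow as tf  # TensorFlow 1.x

# The prefix is the common stem of the .index/.data* file names,
# not the name of any single file on disk.
reader = tf.train.NewCheckpointReader("./output-16640")

# Print each variable stored in the checkpoint along with its shape.
for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)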
All of these attempts produce the error message Input checkpoint '...' doesn't exist!
python freeze_graph.py --input_checkpoint checkpoint --output_graph /tmp/out
python freeze_graph.py --input_checkpoint . --output_graph /tmp/out
python freeze_graph.py --input_checkpoint output-16640 --output_graph /tmp/out
The freeze_graph.py code includes the comment "'input_checkpoint' may be a prefix if we're using Saver V2 format" next to where the --input_checkpoint argument is used, so I had thought the third of the above attempts would work, but alas, no.

As @mrry pointed out in a comment, the answer to this particular question is to prefix the checkpoint prefix with ./. Once this was done I discovered it is also necessary to provide values for the --input_graph and --output_node_names arguments.
The command now looks like
python freeze_graph.py \
--input_graph output/graph.pbtxt \
--input_checkpoint ./output-16640 \
--output_graph /tmp/out \
--output_node_names <name>
Unfortunately my graph contains variables for pre-loaded data which causes freeze_graph.py to fail with a message like Attempting to use uninitialized value ...; solving this subsequent problem is beyond the scope of this question.
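For completeness, the same freeze can be driven from Python rather than the command line, since the script exposes a freeze_graph function. A minimal sketch, assuming TF 1.x and the same paths as the command above (with <name> still standing in for the real output node):

from tensorflow.python.tools import freeze_graph

# Keyword arguments mirror the CLI flags; the restore op and filename
# tensor names below are the TF 1.x Saver defaults.
freeze_graph.freeze_graph(
    input_graph="output/graph.pbtxt",
    input_saver="",
    input_binary=False,
    input_checkpoint="./output-16640",
    output_node_names="<name>",  # replace with your real output node
    restore_op_name="save/restore_all",
    filename_tensor_name="save/Const:0",
    output_graph="/tmp/out",
    clear_devices=True,
    initializer_nodes="")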

Related

Which protobuf format to convert to VINO?

How do I convert a net to VINO when both the .pb and .pbtxt formats are used to read the net, and which of the two serves best?
frozen_graph = str("detection/240x180_depth0.75_ssd_mobilenetv1/frozen_inference_graph.pb")
text_graph = str("detection/240x180_depth0.75_ssd_mobilenetv1/graph.pbtxt")
cvNet = cv2.dnn.readNetFromTensorflow(frozen_graph, text_graph)
Which of the .pb and pbtxt do I use above?
i.e. How does one support the other?
The link https://medium.com/@prasadpal107/saving-freezing-optimizing-for-inference-restoring-of-tensorflow-models-b4146deb21b5 will give you an understanding of the different files associated with a model. In short, .pbtxt files are human readable and hold only the structure of the graph. They help you check whether some nodes are missing, for debugging purposes.
.pb files hold much more detail and, in most cases, the weights and biases of the different layers. Hence you need to use the .pb file. The link http://answers.opencv.org/question/187904/readnetfromtensorflow-when-loading-customized-model/ will give you some additional details.
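If you want to verify that a given .pb really is a frozen graph with the weights baked in (rather than just a structure dump like graph.pbtxt), a small TF 1.x check along these lines works. A sketch, using the path from the question:

import tensorflow as tf  # TensorFlow 1.x

graph_def = tf.GraphDef()
pb_path = "detection/240x180_depth0.75_ssd_mobilenetv1/frozen_inference_graph.pb"
with tf.gfile.GFile(pb_path, "rb") as f:
    graph_def.ParseFromString(f.read())

# A frozen graph stores its weights as Const nodes; a bare .pbtxt
# structure file has placeholders/variables instead.
const_nodes = [n for n in graph_def.node if n.op == "Const"]
print("total nodes:", len(graph_def.node), "Const nodes:", len(const_nodes))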
Only frozen_inference_graph.pb is needed in your case to convert the topology to a VINO model. You will also need pipeline.json for the model.
Go to the Model Optimizer folder and run:
python mo_tf.py \
--input_model <PATH_TO_MODEL>/frozen_inference_graph.pb \
--tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json \
--tensorflow_object_detection_api_pipeline_config <PATH_TO_MODEL>/pipeline.json \
--input_shape [1,180,240,3]
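Once the Model Optimizer has produced the IR pair (an .xml topology file plus a .bin weights file, by default named after the input model), it can be loaded back through OpenCV's DNN module, assuming your OpenCV build has the Inference Engine backend. A sketch with assumed output file names:

import cv2

# Model Optimizer emits an .xml/.bin pair; cv2.dnn.readNet picks the
# right loader based on the file extensions.
net = cv2.dnn.readNet("frozen_inference_graph.xml",
                      "frozen_inference_graph.bin")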

Understanding export_tflite_ssd_graph.py

Here is a tutorial about converting MobileNet+SSD to tflite. At some point they use export_tflite_ssd_graph.py; as I understand it, this custom script is used to support the tf.image.non_max_suppression operation.
export CONFIG_FILE=gs://${YOUR_GCS_BUCKET}/data/pipeline.config
export CHECKPOINT_PATH=gs://${YOUR_GCS_BUCKET}/train/model.ckpt-2000
export OUTPUT_DIR=/tmp/tflite
python object_detection/export_tflite_ssd_graph.py \
--pipeline_config_path=$CONFIG_FILE \
--trained_checkpoint_prefix=$CHECKPOINT_PATH \
--output_directory=$OUTPUT_DIR \
--add_postprocessing_op=true
But I wonder what pipeline.config is, and how to create it if I use a custom model (for example FaceBoxes) that uses the tf.image.non_max_suppression operation?
The main objective of export_tflite_ssd_graph.py is to export the training checkpoint files into a frozen graph that you can later use for transfer learning or for straight inference (because it contains the model structure as well as the trained weights). In fact, all the models listed in the model zoo are frozen graphs generated this way.
As for tf.image.non_max_suppression, export_tflite_ssd_graph.py is not used to 'support' it, but if --add_postprocessing_op is set to true, another custom op node is added to the frozen graph; this custom node has functionality similar to the op tf.image.non_max_suppression. See the reference here.
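That custom node is also why the subsequent TFLite conversion must allow custom ops. A minimal sketch with the TF 1.x converter API, assuming the tensor names the SSD tutorial uses (treat them as assumptions and adjust for a custom model):

import tensorflow as tf  # TensorFlow 1.x (tf.lite.TFLiteConverter)

# export_tflite_ssd_graph.py writes tflite_graph.pb into --output_directory.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "/tmp/tflite/tflite_graph.pb",
    input_arrays=["normalized_input_image_tensor"],
    output_arrays=["TFLite_Detection_PostProcess",
                   "TFLite_Detection_PostProcess:1",
                   "TFLite_Detection_PostProcess:2",
                   "TFLite_Detection_PostProcess:3"],
    input_shapes={"normalized_input_image_tensor": [1, 300, 300, 3]})
converter.allow_custom_ops = True  # the NMS postprocessing node is a custom op
open("/tmp/tflite/detect.tflite", "wb").write(converter.convert())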
Finally, the pipeline.config file directly corresponds to the config file you use for training (--pipeline_config_path); it is a copy of it, but often with a modified score threshold (see the description here about pipeline.config), so you will have to create it before training if you use a custom model. To create a custom config file, here is the official tutorial.
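If you need to inspect or tweak that copy programmatically (for example, the score threshold mentioned above), the Object Detection API ships the proto definitions. A minimal sketch, assuming an SSD pipeline:

from google.protobuf import text_format
from object_detection.protos import pipeline_pb2

# Parse the human-readable pipeline.config into its proto message.
pipeline = pipeline_pb2.TrainEvalPipelineConfig()
with open("pipeline.config") as f:
    text_format.Merge(f.read(), pipeline)

# The exported copy often lowers this threshold, per the description above.
print(pipeline.model.ssd.post_processing.batch_non_max_suppression.score_threshold)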

How to get rid of additional ops added in the graph while fine-tuning Tensorflow Inception_V3 model?

I am trying to convert a fine-tuned TensorFlow inception_v3 model to uff format, which can be run on NVIDIA's Jetson TX2. For conversion to uff, certain ops are supported and some are not. I am able to successfully freeze the inception_v3 model with the ImageNet checkpoint provided by TensorFlow and convert it to uff. However, if I fine-tune the model, additional ops like Floor, RandomUniform, etc. are added to the new graph which are not yet supported. These layers remain even after freezing the model. This happens in the fine-tuning-for-flowers sample provided on the TensorFlow site as well.
I want to understand why additional ops are added to the graph when fine-tuning is just supposed to modify the final layer to match the number of outputs required.
If they are added during training, how can I get rid of them? What post-processing steps did the TensorFlow team follow before releasing the inception_v3 model for ImageNet?
I can share the pbtxt files if needed. For now, the model layer details are uploaded at https://github.com/shrutim90/TF_to_UFF_Issue. I am using TensorFlow 1.6 with GPU.
I am following the steps to freeze or fine-tune the model from: https://github.com/tensorflow/models/tree/master/research/slim#Pretrained. As described in the above link, to reproduce the issue, install the TF-Slim image models library and follow these steps:
1. python export_inference_graph.py \
--alsologtostderr \
--model_name=inception_v3 \
--output_file=/tmp/inception_v3_inf_graph.pb
2. python freeze_graph.py \
--input_graph=/tmp/inception_v3_inf_graph.pb \
--input_checkpoint=/tmp/checkpoints/inception_v3.ckpt \
--input_binary=true --output_graph=/tmp/frozen_inception_v3.pb \
--output_node_names=InceptionV3/Predictions/Reshape_1
3. DATASET_DIR=/tmp/flowers
TRAIN_DIR=/tmp/flowers-models/inception_v3
CHECKPOINT_PATH=/tmp/my_checkpoints/inception_v3.ckpt
python train_image_classifier.py \
--train_dir=$TRAIN_DIR \
--dataset_dir=$DATASET_DIR \
--dataset_name=flowers \
--dataset_split_name=train \
--model_name=inception_v3 \
--checkpoint_path=${CHECKPOINT_PATH} \
--checkpoint_exclude_scopes=InceptionV3/Logits,InceptionV3/AuxLogits \
--trainable_scopes=InceptionV3/Logits,InceptionV3/AuxLogits
4. python freeze_graph.py \
--input_graph=/tmp/graph.pbtxt \
--input_checkpoint=/tmp/checkpoints/model.ckpt-2539 \
--input_binary=false --output_graph=/tmp/frozen_inception_v3_flowers.pb \
--output_node_names=InceptionV3/Predictions/Reshape_1
To check the layers, you can look at the .pbtxt file or use NVIDIA's convert-to-uff utility.
Run the training script -> export_inference_graph -> freeze_graph. This gets rid of all the extra nodes and the model can easily be converted to uff. (The Floor and RandomUniform ops come from dropout in the training graph; export_inference_graph rebuilds the network with is_training=False, so they never make it into the exported graph.)
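To confirm the re-export worked, you can histogram the op types in the resulting frozen graph; Floor and RandomUniform should no longer appear. A minimal TF 1.x sketch against the file name used in step 4:

import collections

import tensorflow as tf  # TensorFlow 1.x

graph_def = tf.GraphDef()
with tf.gfile.GFile("/tmp/frozen_inception_v3_flowers.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Count how many nodes of each op type the frozen graph contains.
print(collections.Counter(node.op for node in graph_def.node))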

*TensorFlow Object Detection API* mAP calculations for YOLO darkflow?

So I would like to use the TF Object Detection API to calculate mAP scores for YOLO.
Currently I'm training my YOLO model and it is producing ckpt files/frozen graph files.
I would like to take my YOLO model and evaluate it using the TF Obj Det API. I know eval.py says:
2) Three configuration files may be provided: a model_pb2.DetectionModel
configuration file to define what type of DetectionModel is being evaluated, an
input_reader_pb2.InputReader file to specify what data the model is evaluating
and an eval_pb2.EvalConfig file to configure evaluation parameters.
Example usage:
./eval \
--logtostderr \
--checkpoint_dir=path/to/checkpoint_dir \
--eval_dir=path/to/eval_dir \
--eval_config_path=eval_config.pbtxt \
--model_config_path=model_config.pbtxt \
--input_config_path=eval_input_config.pbtxt
However, with darkflow there is no model_pb2.DetectionModel configuration file. Is this possible?

Using inception-v3 checkpoint file in tensorflow

In one of my projects, I used the public pre-trained inception-v3 model available here: http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz.
I only want to use the last feature vector (the output of pool_3/_reshape:0). By looking at the example script classify_image.py, I can successfully pass an image through the deep DNN, extract the bottleneck tensor (bottleneck_tensor = sess.graph.get_tensor_by_name('pool_3/_reshape:0')) and use it for further purposes.
I recently saw that there is a more recently trained inception model. A checkpoint of the training is available here: http://download.tensorflow.org/models/image/imagenet/inception-v3-2016-03-01.tar.gz.
I would like to use this new pretrained model instead of the old one. However, the file format is different. The "old model" uses a graph def in ProtocolBuffer form (classify_image_graph_def.pb) that is easily reusable. The "new one" only provides a checkpoint format, and I'm struggling to insert it into my code.
Is there an easy way to convert a checkpoint file to a ProtocolBuffer file that could be then used to create a graph?
It seems you have to use freeze_graph.py:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py
The script converts checkpoint variables into Const ops in a standalone GraphDef file.
This script is designed to take a GraphDef proto, a SaverDef proto, and a set of variable values stored in a checkpoint file, and output a GraphDef with all of the variable ops converted into const ops containing the values of the variables.
It's useful to do this when we need to load a single file in C++, especially in environments like mobile or embedded where we may not have access to the RestoreTensor ops and file loading calls that they rely on.
An example of command-line usage is:
bazel build tensorflow/python/tools:freeze_graph && \
bazel-bin/tensorflow/python/tools/freeze_graph \
--input_graph=some_graph_def.pb \
--input_checkpoint=model.ckpt-8361242 \
--output_graph=/tmp/frozen_graph.pb --output_node_names=softmax
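After freezing, the GraphDef can be imported and the bottleneck tensor looked up just as with the old classify_image_graph_def.pb, though the node names in the newer export will differ from pool_3/_reshape:0. A sketch, assuming TF 1.x and the output path from the command above:

import tensorflow as tf  # TensorFlow 1.x

graph_def = tf.GraphDef()
with tf.gfile.GFile("/tmp/frozen_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    # List the operation names first to find the equivalent of
    # pool_3/_reshape:0 in this graph, then fetch it with
    # graph.get_tensor_by_name(...).
    for op in graph.get_operations():
        print(op.name)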