How do I plot the validation loss in TensorBoard for object detection?

I am training an object detection model using TensorFlow's Object Detection API, specifically the model_main_tf2.py script. For some reason only the training loss is plotted, but not the validation loss. Can anyone help me in this regard? I would really appreciate it.
Here is the full command I'm using to start the training:
python3 model_main_tf2.py --model_dir /trained_model/ \
--sample_1_of_n_eval_examples 10 \
--pipeline_config_path pipeline.config \
--alsologtostderr
P.S. There seem to be some answers on Stack Overflow for model_main.py, but not for the TF2 version.
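One approach that is commonly suggested for model_main_tf2.py (a sketch, assuming the script's standard flags and the /trained_model/ path above): launch a second, evaluation-only process alongside training by passing --checkpoint_dir. In eval mode the script waits for new checkpoints and writes validation metrics as TensorBoard event files alongside the training ones (typically in a separate eval subdirectory of model_dir), so both curves appear in TensorBoard:
# terminal 1: training, as above
python3 model_main_tf2.py --model_dir /trained_model/ \
--pipeline_config_path pipeline.config \
--alsologtostderr
# terminal 2: continuous evaluation; --checkpoint_dir switches the script to eval mode
python3 model_main_tf2.py --model_dir /trained_model/ \
--pipeline_config_path pipeline.config \
--checkpoint_dir /trained_model/ \
--alsologtostderr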

Related

MobileNet: High Accuracy On Validation and Poor Prediction Results

I am training MobileNet_v1_1.0_224 using TensorFlow, with the Python scripts from the TensorFlow-Slim image classification model library. My dataset has 4 classes, distributed as follows:
normal_faces: 42070
oncall_faces: 13563 (faces of people on a call, with a mobile phone visible in the image)
smoking_faces: 5949
yawning_faces: 1630
All images in the dataset are square and larger than 224x224.
I am using train_image_classifier.py to train the model with the following arguments:
python train_image_classifier.py \
--train_dir=${TRAIN_DIR} \
--dataset_name=custom \
--dataset_split_name=train \
--dataset_dir=${DATASET_DIR} \
--model_name=mobilenet_v1 \
--batch_size=32 \
--max_number_of_steps=25000
After training the model, eval_image_classifier.py shows an accuracy greater than 95% on the validation set, but when I exported the frozen graph and used it for predictions, it performed very poorly.
I have also tried this notebook, but it produced similar results.
Log: Training Log
Plots: Loss and Accuracy
What is the reason for this? How do I fix this issue?
I have seen similar issues on SO but nothing related to MobileNets specifically.
Did you use a validation set? If so, what was the validation accuracy?
If you used a validation set, a good way to check whether you are doing predictions properly is to run model.predict on the validation set.
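For the frozen-graph route, a minimal sanity check along the same lines, as a sketch: it assumes TF 1.x, Slim's default input placeholder name (input) and MobileNet output node (MobilenetV1/Predictions/Reshape_1), and hypothetical file paths. It runs the frozen graph on one validation image with the same inception-style preprocessing that eval_image_classifier.py applies (resize to 224x224, scale pixels to [-1, 1]); a preprocessing mismatch at inference time is a common cause of this kind of eval/predict gap.
import numpy as np
import tensorflow as tf  # TF 1.x; under TF 2.x use tf.compat.v1
from PIL import Image

# Load the frozen graph (hypothetical filename).
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_mobilenet_v1.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

# Same preprocessing as eval: resize to 224x224, scale to [-1, 1].
img = Image.open("validation/normal_faces/example.jpg").resize((224, 224))  # hypothetical path
batch = np.expand_dims(np.asarray(img, dtype=np.float32) / 127.5 - 1.0, 0)

with tf.Session(graph=graph) as sess:
    # Tensor names assume Slim's export_inference_graph defaults.
    probs = sess.run("MobilenetV1/Predictions/Reshape_1:0",
                     feed_dict={"input:0": batch})
print("predicted class:", np.argmax(probs, axis=1))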

How to get the training input data when training with the TensorFlow Object Detection API?

When a faster_rcnn_resnet101 model trains, the loss is shown on the terminal at each step.
I want to know which data is fed in at each step; when the loss increases, I don't know why it increases.
Does anyone know how to see the input data at each step?
You can't check the result of each step, but a trained checkpoint prefix is created in your object_detection/training directory and is updated after a specific number of steps.
You can then check object detection using the currently trained model, e.g.:
python3 export_inference_graph.py \
--input_type image_tensor \
--pipeline_config_path training/ssd_mobilenet_v1_pets.config \
--trained_checkpoint_prefix training/model.ckpt-25000 \
--output_directory latest_dataset
Here model.ckpt-25000 means the model has been trained for 25000 steps so far.
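Once exported, a rough sketch of running the current model on a test image, assuming TF 1.x and the standard tensor names in TF1 Object Detection API exports (image_tensor, detection_boxes, detection_scores, detection_classes); the dummy image is a stand-in for a real one:
import numpy as np
import tensorflow as tf  # TF 1.x; under TF 2.x use tf.compat.v1

# Load the frozen graph produced by export_inference_graph.py.
graph_def = tf.GraphDef()
with tf.gfile.GFile("latest_dataset/frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

# Stand-in for a real test image: uint8, batched as [1, H, W, 3].
image = np.random.randint(0, 255, size=(1, 300, 300, 3), dtype=np.uint8)

with tf.Session(graph=graph) as sess:
    # Standard output tensors of TF1 OD API exports.
    boxes, scores, classes = sess.run(
        ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
        feed_dict={"image_tensor:0": image},
    )
print("top-5 scores:", scores[0][:5])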

Is the vgg_19 slim model a frozen .pb graph?

I downloaded vgg_19_2016_08_28.tar.gz and extracted a vgg-19.pb graph, which I am using with tf2onnx. However, the graph seems to have some dynamic parameters, and hence tf2onnx is failing. I want to check whether vgg-19.pb is a frozen graph; if not, how can I get a frozen vgg_19.pb graph?
Same question for tensorflow_inception_graph - inception_v3_2016_08_28.tar.gz
Same question for resnet - resnet_v1_50_2016_08_28.tar.gz
All downloaded from here - https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models
To convert TF models to ONNX, you need to freeze the graph first. TensorFlow's tool for freezing a graph is https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py
For example:
python -m tensorflow.python.tools.freeze_graph \
--input_graph=my_checkpoint_dir/graphdef.pb \
--input_binary=true \
--output_node_names=output \
--input_checkpoint=my_checkpoint_dir \
--output_graph=tests/models/fc-layers/frozen.pb
To find the inputs and outputs of the TensorFlow graph: the model developer will know them, or you can use TensorFlow's summarize_graph tool (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms), for example:
summarize_graph --in_graph=tests/models/fc-layers/frozen.pb
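If you just want to check whether a given .pb (e.g. vgg-19.pb) is already frozen, one informal test is to scan the GraphDef for variable ops, since freezing folds all variables into Const nodes. A minimal sketch, assuming TF 1.x:
import tensorflow as tf  # TF 1.x; under TF 2.x use tf.compat.v1

graph_def = tf.GraphDef()
with tf.gfile.GFile("vgg-19.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# A frozen graph should contain no variable ops, only constants.
var_ops = [n.name for n in graph_def.node
           if n.op in ("Variable", "VariableV2", "VarHandleOp")]
print("frozen" if not var_ops else "not frozen, variables remain: %s" % var_ops)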

Good training total loss but inference gives bad performance

I exported a graph from TensorFlow using this command:
export_inference_graph.py --input_type image_tensor \
--pipeline_config_path=/mnt/data/pipeline.config \
--trained_checkpoint_prefix=/mnt/data/checkpoints/model.ckpt-1670059 \
--output_directory=/mnt/data/output/2018-04-23-1
My network converges (loss plot omitted).
However, when using the network to do inference, it is really fast but detects the wrong things or nothing at all.
I am using the Inception model; any help would be amazing.
Thanks

How to get rid of additional ops added in the graph while fine-tuning Tensorflow Inception_V3 model?

I am trying to convert a fine-tuned TensorFlow inception_v3 model to UFF format, which can be run on NVIDIA's Jetson TX2. For conversion to UFF, certain ops are supported and some are not. I am able to successfully freeze and convert to UFF the inception_v3 model with the ImageNet checkpoint provided by TensorFlow. However, if I fine-tune the model, additional ops like Floor and RandomUniform are added to the new graph, and these are not yet supported. The layers remain even after freezing the model. This happens with the fine-tuning-on-flowers sample provided on the TensorFlow site as well.
I want to understand why additional ops are added to the graph, when fine-tuning is only supposed to modify the final layer to match the required number of outputs.
If they are added during training, how can I get rid of them? What post-processing steps did the TensorFlow team follow before releasing the inception_v3 model for ImageNet?
I can share the pbtxt files if needed. For now, the model layer details are uploaded at https://github.com/shrutim90/TF_to_UFF_Issue. I am using TensorFlow 1.6 with GPU.
I am following the steps to freeze or fine-tune the model from https://github.com/tensorflow/models/tree/master/research/slim#Pretrained. As described in that link, to reproduce the issue, install the TF-Slim image models library and follow these steps:
1. python export_inference_graph.py \
--alsologtostderr \
--model_name=inception_v3 \
--output_file=/tmp/inception_v3_inf_graph.pb
2. python freeze_graph.py \
--input_graph=/tmp/inception_v3_inf_graph.pb \
--input_checkpoint=/tmp/checkpoints/inception_v3.ckpt \
--input_binary=true --output_graph=/tmp/frozen_inception_v3.pb \
--output_node_names=InceptionV3/Predictions/Reshape_1
3. DATASET_DIR=/tmp/flowers
TRAIN_DIR=/tmp/flowers-models/inception_v3
CHECKPOINT_PATH=/tmp/my_checkpoints/inception_v3.ckpt
python train_image_classifier.py \
--train_dir=$TRAIN_DIR \
--dataset_dir=$DATASET_DIR \
--dataset_name=flowers \
--dataset_split_name=train \
--model_name=inception_v3 \
--checkpoint_path=${CHECKPOINT_PATH} \
--checkpoint_exclude_scopes=InceptionV3/Logits,InceptionV3/AuxLogits \
--trainable_scopes=InceptionV3/Logits,InceptionV3/AuxLogits
4. python freeze_graph.py \
--input_graph=/tmp/graph.pbtxt \
--input_checkpoint=/tmp/checkpoints/model.ckpt-2539 \
--input_binary=false --output_graph=/tmp/frozen_inception_v3_flowers.pb \
--output_node_names=InceptionV3/Predictions/Reshape_1
To check the layers, you can inspect the .pbtxt file or use NVIDIA's convert-to-uff utility.
Run the training script -> export_inference_graph -> freeze_graph. The extra ops come from training-only behavior such as dropout (RandomUniform and Floor implement dropout's random mask), and export_inference_graph rebuilds the network with is_training=False, so this sequence gets rid of all the extra nodes and the model can easily be converted to UFF.
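To verify the extra ops are really gone, a quick sketch (assuming TF 1.x and the frozen-graph path from step 4) that counts the offending op types in the re-frozen graph; after exporting with is_training=False both should be absent:
import collections
import tensorflow as tf  # TF 1.x; under TF 2.x use tf.compat.v1

graph_def = tf.GraphDef()
with tf.gfile.GFile("/tmp/frozen_inception_v3_flowers.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Dropout shows up in the training graph as RandomUniform + Floor ops.
op_counts = collections.Counter(node.op for node in graph_def.node)
for op in ("Floor", "RandomUniform"):
    print(op, "count:", op_counts.get(op, 0))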