Evaluation of SSD_MobileNetV2 320x320 FPNLite - tensorflow

I am trying to evaluate a trained SSD_MobileNetV2 320x320 FPNLite model on TensorFlow. I ran training and evaluation in parallel in two different Colab accounts, but I always get metric values of -1 (evaluation result) after each checkpoint. The loss is also increasing after each checkpoint is evaluated.
Below is the command used to run the evaluation:
!python /model_main_tf2.py \
--pipeline_config_path /pipeline.config \
--model_dir /model_ \
--checkpoint_dir /model_
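For context, COCO metrics reported as -1 typically indicate that the evaluator matched no ground truth at all, which often traces back to the evaluation input configuration. A minimal sketch of the eval_input_reader block that pipeline.config is assumed to contain (all paths are placeholders):
eval_input_reader: {
  label_map_path: "annotations/label_map.pbtxt"
  shuffle: false
  num_epochs: 1
  tf_record_input_reader {
    input_path: "annotations/eval.record"
  }
}
If the label map or the eval TFRecord path is wrong or the record is empty, evaluation still runs but has nothing to score.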

Related

MobileNet: High Accuracy On Validation and Poor Prediction Results

I am training MobileNet_v1_1.0_224 using TensorFlow. I am using the python scripts present in the TensorFlow-Slim image classification model library for training. My dataset distribution with 4 classes is as follows:
normal_faces: 42070
oncall_faces: 13563 (faces of people holding a mobile phone while on a call)
smoking_faces: 5949
yawning_faces: 1630
All images in the dataset are square images and larger than 224x224
I am using train_image_classifier.py to train the model with following arguments,
python train_image_classifier.py \
--train_dir=${TRAIN_DIR} \
--dataset_name=custom \
--dataset_split_name=train \
--dataset_dir=${DATASET_DIR} \
--model_name=mobilenet_v1 \
--batch_size=32 \
--max_number_of_steps=25000
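For the evaluation mentioned below, a matching TF-Slim invocation would look something like this (a sketch; it assumes the same custom dataset registration and mirrors the training flags):
python eval_image_classifier.py \
--checkpoint_path=${TRAIN_DIR} \
--eval_dir=${EVAL_DIR} \
--dataset_name=custom \
--dataset_split_name=validation \
--dataset_dir=${DATASET_DIR} \
--model_name=mobilenet_v1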
After training the model, eval_image_classifier.py shows an accuracy greater than 95% on the validation set, but when I exported the frozen graph and used it for predictions, it performed very poorly.
I have also tried this notebook, but it produced similar results.
Log: Training Log
Plots: Loss and Accuracy
What is the reason for this? How do I fix this issue?
I have seen similar issues on SO but nothing related to MobileNets specifically.
Did you use a validation set? If so, what was the validation accuracy?
If you used a validation set, a good way to check whether you are making predictions properly is to run model.predict on the validation set.
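As an illustration, here is a minimal sketch of that check against the exported frozen graph; the graph path, tensor names, and the [-1, 1] MobileNet input scaling are assumptions to adapt to your actual export and training preprocessing:
import numpy as np
import tensorflow as tf
from PIL import Image

# Load the exported frozen classification graph (path is a placeholder).
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("frozen_mobilenet_v1.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

# Preprocess exactly as in training: MobileNet uses [-1, 1] input scaling.
img = Image.open("validation_sample.jpg").resize((224, 224))
x = (np.asarray(img, dtype=np.float32) / 127.5 - 1.0)[np.newaxis, ...]

with tf.compat.v1.Session(graph=graph) as sess:
    probs = sess.run("MobilenetV1/Predictions/Reshape_1:0",
                     feed_dict={"input:0": x})
print("predicted class:", probs.argmax(axis=-1))
If validation accuracy is high but this disagrees on validation images, the inference-time preprocessing most likely differs from the one used during training.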

Deeplab xception for mobile (tensorflow lite)

I am checking the option to run image segmentation using the pre-trained deeplab xception65_coco_voc_trainval model.
The frozen model size is ~161MB; after I convert it to tflite the size is ~160MB, and running this model on my PC CPU takes ~25 seconds.
Is that "expected", or is there something I can do better?
The conversion to tflite is as follows:
tflite_convert \
--graph_def_file="deeplabv3_pascal_trainval/frozen_inference_graph.pb" \
--output_file="deeplab_xception_pascal.tflite" \
--output_format=TFLITE \
--input_shape=1,513,513,3 \
--input_arrays="sub_7" \
--output_arrays="ArgMax" \
--inference_type=FLOAT \
--allow_custom_ops
Thanks!
According to https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md, xception65_coco_voc_trainval with 3 eval scales takes about 223 seconds. The frozen graph has a single eval scale, so ~25 seconds sounds about right to me.
To speed up TFLite inference I would suggest using the GPU delegate, but as you are running on a PC, you will need to find a smaller model instead. Maybe try one of the MobileNet-based models? The EdgeTPU models will run in TFLite without an EdgeTPU and should be quite fast, although these are trained on Cityscapes.
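For a quick sanity check of the latency on your PC, a minimal sketch using the Python TFLite interpreter (the model path is a placeholder; a random input is enough for timing):
import time
import numpy as np
import tensorflow as tf

# Load the converted model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="deeplab_xception_pascal.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a random 1x513x513x3 float input, matching --input_shape above.
interpreter.set_tensor(inp["index"],
                       np.random.rand(*inp["shape"]).astype(np.float32))

start = time.time()
interpreter.invoke()
print("inference took %.2f s" % (time.time() - start))

seg_map = interpreter.get_tensor(out["index"])  # ArgMax output: one class id per pixel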

How to get train input data on training Tensorflow Object Detection API?

When a faster_rcnn_resnet101 model trains, the loss is shown in the terminal at each step.
I want to know which data is fed in at each step; when the loss increases, I don't know why it increases.
Does anyone know how to see the input data at each step?
You can't check the result of each step, but in your object_detection/training directory a trained checkpoint prefix is created and updated after a specific number of steps.
You can check object detection using the currently trained model.
Eg:
python3 export_inference_graph.py \
--input_type image_tensor \
--pipeline_config_path training/ssd_mobilenet_v1_pets.config \
--trained_checkpoint_prefix training/model.ckpt-25000 \
--output_directory latest_dataset
Here model.ckpt-25000 indicates the number of steps (25000) trained so far.
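To then run a quick detection with the exported graph, a minimal sketch (the image path is a placeholder; the tensor names are the standard ones the Object Detection API exports):
import numpy as np
import tensorflow as tf
from PIL import Image

# Load the frozen graph written by export_inference_graph.py.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("latest_dataset/frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

image = np.asarray(Image.open("test.jpg"))[np.newaxis, ...]  # uint8, 1xHxWx3

with tf.compat.v1.Session(graph=graph) as sess:
    boxes, scores, classes = sess.run(
        ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
        feed_dict={"image_tensor:0": image})
print(classes[0][:5], scores[0][:5])  # top detections for a first look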

Tensorboard eval.py IOU for object detection

I used ssd_mobilenet_v1_coco from the detection model zoo in TensorFlow Object Detection. I am currently training the model by running
python legacy/train.py --logtostderr --train_dir=trainingmobile/ --pipeline_config_path=trainingmobile/pipeline.config
I want to run an evaluation job by running eval.py to get other metrics like IoU and a PR curve, but I don't know how to do that. I am able to run the command
python legacy/eval.py \
--logtostderr \
--checkpoint_dir=path/to/checkpoint \
--eval_dir=path/to/eval \
--pipeline_config_path=path/to/config
then I ran the command
tensorboard --logdir=path/to/eval
TensorBoard shows only the test image output. How can I get other metrics like IoU and a PR curve?
First of all, I'd highly recommend using the newer model_main.py script for training and evaluation combined. You can use it as shown below:
python object_detection/model_main.py \
--pipeline_config_path=path/to/config \
--model_dir=path/to/train_dir \
--num_train_steps=NUM_TRAIN_STEPS \
--num_eval_steps=NUM_EVAL_STEPS \
--alsologtostderr
It combines training and evaluation, and you can launch TensorBoard with
tensorboard --logdir=path/to/train_dir
TensorBoard will not only display the training process, it will also show your progress on the validation set. The COCO metrics are used as the default!
To your original problem: Maybe you should change the eval settings in your config file to larger numbers:
eval_config: {
  num_examples: 8000
  # Note: The below line limits the evaluation process to 10 evaluations.
  # Remove the below line to evaluate indefinitely.
  max_evals: 10
}
If you use the model_main.py script, the number of evaluations is set by the flags.
Good to know: the INFO output of TensorFlow is disabled in the newer model_main.py script. You can enable it by adding
tf.logging.set_verbosity(tf.logging.INFO)
after the import section.
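For clarity, a minimal sketch of where that line goes at the top of model_main.py (the surrounding imports are abbreviated):
# model_main.py
from absl import flags
import tensorflow as tf
from object_detection import model_lib

tf.logging.set_verbosity(tf.logging.INFO)  # re-enable per-step INFO output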

Good training total loss but inference gives bad performance

I exported a graph from tensorflow using this command:
export_inference_graph.py --input_type image_tensor \
--pipeline_config_path=/mnt/data/pipeline.config \
--trained_checkpoint_prefix=/mnt/data/checkpoints/model.ckpt-1670059 \
--output_directory=/mnt/data/output/2018-04-23-1
My network converges.
However, when using the network for inference, it is really fast but detects the wrong things or nothing at all.
I am using the Inception model; any help would be amazing.
Thanks