Images get rotated during training - tensorflow

I am trying to train an ssd_mobilenet_v2_keras for object detection on a dataset of roughly 6,000 images. The problem is that images are randomly rotated during training (or at least, that is what it looks like in TensorBoard). This is the configuration I am using in the pipeline.config file:
train_config {
  batch_size: 32
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    random_rgb_to_gray {
      probability: 0.25
    }
  }
  data_augmentation_options {
    random_jpeg_quality {
      random_coef: 0.8
      min_jpeg_quality: 50
      max_jpeg_quality: 100
    }
  }
  sync_replicas: true
  optimizer {
    adam_optimizer: {
      epsilon: 1e-7
      learning_rate: {
        cosine_decay_learning_rate {
          learning_rate_base: 1e-3
          total_steps: 50000
          warmup_learning_rate: 2.5e-4
          warmup_steps: 5000
        }
      }
    }
    use_moving_average: false
  }
  fine_tune_checkpoint: "pre-trained-models/ssd_mobilenet_v2_320x320_coco17_tpu-8/checkpoint/ckpt-0"
  num_steps: 50000
  startup_delay_steps: 0.0
  replicas_to_aggregate: 8
  max_number_of_boxes: 100
  unpad_groundtruth_tensors: false
  fine_tune_checkpoint_type: "detection"
  fine_tune_checkpoint_version: V2
}
I have also tried removing the random horizontal flip (I knew that would probably not solve anything, I just gave it a try...), but nothing changed: I still see some training images rotated in TensorBoard, and sometimes the images are rotated during evaluation as well. Of course, the XML with the bounding-box coordinates is not "rotated", so the ground-truth images in TensorBoard look completely wrong: the object is in one position and the ground-truth box is in a completely different position (the right position if the image weren't rotated...).
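One thing worth ruling out before blaming the augmentation pipeline is EXIF orientation metadata: TensorFlow's JPEG decoder ignores the EXIF orientation tag, while labelImg and most image viewers apply it, which produces exactly this "rotated pixels, non-rotated boxes" symptom. Below is a minimal sketch that scans a folder for JPEGs carrying a non-default orientation tag (the images/train path is a placeholder):

import os
from PIL import Image

ORIENTATION_TAG = 274  # standard EXIF tag id for orientation

# Any orientation value other than 1 means viewers rotate the pixels
# while TensorFlow (and the XML boxes) do not.
for name in os.listdir("images/train"):
    if not name.lower().endswith((".jpg", ".jpeg")):
        continue
    with Image.open(os.path.join("images/train", name)) as img:
        orientation = img.getexif().get(ORIENTATION_TAG, 1)
        if orientation != 1:
            print(f"{name}: EXIF orientation = {orientation}")

Re-saving any offending files with the rotation baked into the pixels (and re-checking their labels) removes the mismatch.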

Related

How to improve the accuracy of ssd mobilenet v2 coco using Tensorflow Object detection API

I'm using the TensorFlow Object Detection API to create a custom object detector, with the COCO-trained models for transfer learning.
I trained it using Faster R-CNN ResNet and got very accurate results, but the inference speed of this model is very slow. I then tried training it with SSD MobileNet V2, which is very fast, but I'm getting very low accuracy with that model. Is there anything I can change in the config file to increase the accuracy of the model? Or will the SSD model never give very accurate results, since it's a lightweight model?
Here's the config file I'm using right now. (I trained it using ~150 images for 10,000 steps.)
model {
  ssd {
    num_classes: 1
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
      }
    }
    similarity_calculator {
      iou_similarity {
      }
    }
    anchor_generator {
      ssd_anchor_generator {
        num_layers: 6
        min_scale: 0.2
        max_scale: 0.95
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 3.0
        aspect_ratios: 0.3333
        reduce_boxes_in_lowest_layer: true
      }
    }
    image_resizer {
      fixed_shape_resizer {
        height: 900
        width: 400
      }
    }
    box_predictor {
      convolutional_box_predictor {
        min_depth: 0
        max_depth: 0
        num_layers_before_predictor: 0
        use_dropout: false
        dropout_keep_probability: 0.8
        kernel_size: 3
        box_code_size: 4
        apply_sigmoid_to_scores: false
        conv_hyperparams {
          activation: RELU_6,
          regularizer {
            l2_regularizer {
              weight: 0.00004
            }
          }
          initializer {
            truncated_normal_initializer {
              stddev: 0.03
              mean: 0.0
            }
          }
        }
      }
    }
    feature_extractor {
      type: 'ssd_inception_v2'
      min_depth: 16
      depth_multiplier: 1.0
      conv_hyperparams {
        activation: RELU_6,
        regularizer {
          l2_regularizer {
            weight: 0.00004
          }
        }
        initializer {
          truncated_normal_initializer {
            stddev: 0.03
            mean: 0.0
          }
        }
        batch_norm {
          train: true,
          scale: true,
          center: true,
          decay: 0.9997,
          epsilon: 0.001,
        }
      }
      override_base_feature_extractor_hyperparams: true
    }
    loss {
      classification_loss {
        weighted_sigmoid {
        }
      }
      localization_loss {
        weighted_smooth_l1 {
        }
      }
      hard_example_miner {
        num_hard_examples: 3000
        iou_threshold: 0.99
        loss_type: CLASSIFICATION
        max_negatives_per_positive: 3
        min_negatives_per_image: 0
      }
      classification_weight: 1.0
      localization_weight: 1.0
    }
    normalize_loss_by_num_matches: true
    post_processing {
      batch_non_max_suppression {
        score_threshold: 1e-8
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 100
      }
      score_converter: SIGMOID
    }
  }
}
train_config: {
  batch_size: 12
  optimizer {
    rms_prop_optimizer: {
      learning_rate: {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.004
          decay_steps: 800720
          decay_factor: 0.95
        }
      }
      momentum_optimizer_value: 0.9
      decay: 0.9
      epsilon: 1.0
    }
  }
  fine_tune_checkpoint: "/content/models/research/pretrained_model/model.ckpt"
  from_detection_checkpoint: true
  num_steps: 10000
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
}
It is very difficult to get high accuracy from a model that was designed to run on mobile phones.
My suggestion is to use the high-accuracy model and improve its inference time instead.
Convert the model to TensorRT:
https://github.com/tensorflow/tensorrt/tree/master/tftrt/examples/object_detection
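As a rough illustration, the TF2 flow looks like the sketch below (directory names are placeholders; it requires a TensorRT-enabled TensorFlow build, and the TF1 flow in the linked repository differs):

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Convert a detector exported as a SavedModel; supported subgraphs are
# replaced with TensorRT ops, the rest stays as regular TensorFlow.
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="exported_model/saved_model")
converter.convert()
converter.save("exported_model/trt_saved_model")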
You can increase the number of steps:
num_steps: 2000000
If the loss settles at around 1 or 2 and the predictions are still not satisfying, there is not much more to be done; try another model. You could also look at the COCO-trained models and choose one with a higher COCO mAP[^1] and a lower speed (ms).
You can try different models and see what works best for your application.
If the problem still persists, you can try increasing the number of training images.
There are many places where you can improve.
Typically you want a small input size for SSD, e.g. 320x320, which should be at least 3x faster than what you have now; your current input size of 900x400 looks strange.
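For example, mirroring the image_resizer block in the config above (illustrative only):

image_resizer {
  fixed_shape_resizer {
    height: 320
    width: 320
  }
}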
In addition, you only have 1 foreground class. You typically want to double-check the required anchors and the min_scale/max_scale settings, all of which relate to the prior boxes used in SSD. I am pretty sure that the default config, which is tuned for MS-COCO, does not fit many other tasks well. For example, if it is a car-plate detection task, the plate width is much greater than the height, so you can safely drop the aspect_ratios <= 1.
The min_scale and max_scale settings are also important. With the default settings you will get anchor boxes even bigger than your input image size; is this something you expect? If not, you want to adjust those settings too.
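As an illustration only (the values below are not tuned for any particular dataset), a detector for wide, plate-like objects might use something along these lines instead of the COCO defaults:

anchor_generator {
  ssd_anchor_generator {
    num_layers: 6
    min_scale: 0.05  # smaller than the 0.2 default, for small objects
    max_scale: 0.6   # avoid anchors larger than the objects you expect
    aspect_ratios: 2.0
    aspect_ratios: 3.0
    aspect_ratios: 5.0  # keep only wide boxes when width >> height
  }
}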
Furthermore, you want to dig into which data augmentation fits your problem best. Recently, auto augmentation was also added.
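If your copy of the API includes it, auto augmentation is exposed as a preprocessing option (the policy name below is the published default; others exist):

data_augmentation_options {
  autoaugment_image {
    policy_name: "v0"
  }
}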
Finally, you can often boost your performance by using newer losses, e.g. focal loss for classification.
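In config terms that means swapping the classification loss, e.g. (the gamma and alpha below are the common defaults from the focal loss paper, not values tuned for this task):

classification_loss {
  weighted_sigmoid_focal {
    gamma: 2.0   # focusing parameter
    alpha: 0.75  # class-balancing factor
  }
}

If you do this, the hard_example_miner block is usually dropped, since focal loss already down-weights the easy negatives the miner is meant to handle.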

Training loss value is increasing after some training time, but the model detects objects pretty well

I've encountered a strange problem while training a CNN to detect objects on my own dataset. I am using transfer learning, and at the beginning of training the loss value is decreasing (as expected). But after some time it gets higher and higher, and I have no idea why.
At the same time, when I look at the Images tab in TensorBoard to check how well the CNN predicts objects, I can see that it does so very well; it doesn't look like it is getting worse over time. The Precision and Recall charts also look good; only the Loss charts (especially classification_loss) show an increasing trend over time.
Here are some specific details:
I have 10 different classes of logos (such as DHL, BMW, FedEx, etc.)
Around 600 images per class
I use tensorflow-gpu on Ubuntu 18.04
I tried multiple pre-trained models, the latest being faster_rcnn_resnet101_coco with this pipeline config:
model {
  faster_rcnn {
    num_classes: 10
    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 600
        max_dimension: 1024
      }
    }
    feature_extractor {
      type: 'faster_rcnn_resnet101'
      first_stage_features_stride: 16
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        scales: [0.25, 0.5, 1.0, 2.0]
        aspect_ratios: [0.5, 1.0, 2.0]
        height_stride: 16
        width_stride: 16
      }
    }
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.0
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.01
        }
      }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.7
    first_stage_max_proposals: 300
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    initial_crop_size: 14
    maxpool_kernel_size: 2
    maxpool_stride: 2
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        use_dropout: false
        dropout_keep_probability: 1.0
        fc_hyperparams {
          op: FC
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: 0.0
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 300
      }
      score_converter: SOFTMAX
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
  }
}
train_config: {
  batch_size: 1
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        manual_step_learning_rate {
          initial_learning_rate: 0.0003
          schedule {
            step: 900000
            learning_rate: .00003
          }
          schedule {
            step: 1200000
            learning_rate: .000003
          }
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
  fine_tune_checkpoint: "/home/franciszek/Pobrane/models-master/research/object_detection/logo_detection/models2/faster_rcnn_resnet101_coco/model.ckpt"
  from_detection_checkpoint: true
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
}
train_input_reader: {
  tf_record_input_reader {
    input_path: "/home/franciszek/Pobrane/models-master/research/object_detection/logo_detection/data2/train.record"
  }
  label_map_path: "/home/franciszek/Pobrane/models-master/research/object_detection/logo_detection/data2/label_map.pbtxt"
}
eval_config: {
  num_examples: 8000
  # Note: The below line limits the evaluation process to 10 evaluations.
  # Remove the below line to evaluate indefinitely.
  max_evals: 10
}
eval_input_reader: {
  tf_record_input_reader {
    input_path: "/home/franciszek/Pobrane/models-master/research/object_detection/logo_detection/data2/test.record"
  }
  label_map_path: "/home/franciszek/Pobrane/models-master/research/object_detection/logo_detection/data2/label_map.pbtxt"
  shuffle: false
  num_readers: 1
}
Here are the results I got after training for nearly 23 hours and reaching over 120k steps:
Loss and Total Loss
Precision
So, my question is, why is the loss value increasing over time? It should be getting smaller or stay more or less constant, but you can clearly see the increasing trend in the above charts.
I think everything is properly configured and my dataset is pretty decent (the .tfrecord files were also built correctly).
To check whether it was my fault, I tried using somebody else's dataset and configuration files. I used the raccoon dataset author's files (he provides all the necessary files in his repo). I just downloaded them and started training with no modifications, to check whether I would get results similar to his.
Surprisingly, after 82k steps I got entirely different charts from the ones shown in the linked article (which were captured after 22k steps). Here you can see a comparison of our results:
My losses vs his TotalLoss
My precision vs his mAP
Clearly, something worked differently on my PC. I suspect it may be the same reason I get an increasing loss on my own dataset, which is why I mentioned it.
The totalLoss is the weighted sum of four other losses (the RPN classification and regression losses, plus the box-classifier classification and regression losses), and they are all evaluation losses. In TensorBoard you can check or uncheck runs to see the results for training only or for evaluation only (for example, the referenced picture contains both a train summary and an evaluation summary).
If the evaluation loss is increasing, this might suggest an overfitting model; note also that the precision metrics dropped a little.
For a better fine-tuning result, you may try adjusting the weights of the four losses; for example, you may increase the weight of BoxClassifierLoss/classification_loss to make the model focus more on that metric. In your config file, the loss weights second_stage_classification_loss_weight and first_stage_objectness_loss_weight are both 1 while the other two are both 2, so the model currently focuses a little more on the other two.
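Concretely, that would mean rebalancing the four weight fields in the config above, for example (illustrative values, not tuned):

first_stage_localization_loss_weight: 2.0
first_stage_objectness_loss_weight: 1.0
second_stage_localization_loss_weight: 2.0
second_stage_classification_loss_weight: 2.0  # raised from 1.0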
There was an extra question about why loss_1 and loss_2 are the same. This can be explained by looking at the TensorFlow graph.
Here loss_2 is the summary for total_loss (note that this total_loss is not the same as totalLoss), and the red-circled node is a tf.identity node. That node outputs the same tensor it receives as input, so loss_1 is the same as loss_2.
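A tiny sketch of what tf.identity does (written in TF2 eager style for brevity; in the TF1 graph above the principle is the same):

import tensorflow as tf

total_loss = tf.constant(1.234)
# tf.identity adds a pass-through node: same value, different name in the
# graph. Summaries attached to each of the two nodes therefore plot
# identical curves, which is why loss_1 and loss_2 overlap.
total_loss_copy = tf.identity(total_loss)
print(float(total_loss), float(total_loss_copy))  # 1.234 1.234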

faster-rcnn config file in tensorflow

I am using Google's TensorFlow Object Detection API to train and run inference on a custom dataset.
I would like to adjust the parameters of the config file to better suit my samples (e.g. the number of region proposals, the size of the ROI bbox, etc.).
To do so, I need to know what each parameter does.
Unfortunately, the config files (found here) have no comments or explanations.
Some, such as num_classes, are self-explanatory, but others are tricky.
I found this file with more comments, but wasn't able to 'translate' it to my format.
I would like to know one of the following:
1. an explanation of each parameter in Google's API config file, or
2. a 'translation' from the official Faster R-CNN parameters to Google's API config, or at least
3. a thorough review of Faster R-CNN with technical details of the parameters (the official article doesn't provide all the details).
Thank you for your kind help!
Example of a config file:
# Faster R-CNN with Resnet-101 (v1) configuration for MSCOCO Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.
model {
  faster_rcnn {
    num_classes: 90
    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 600
        max_dimension: 1024
      }
    }
    feature_extractor {
      type: 'faster_rcnn_resnet101'
      first_stage_features_stride: 16
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        scales: [0.25, 0.5, 1.0, 2.0]
        aspect_ratios: [0.5, 1.0, 2.0]
        height_stride: 16
        width_stride: 16
      }
    }
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.0
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.01
        }
      }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.7
    first_stage_max_proposals: 300
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    initial_crop_size: 14
    maxpool_kernel_size: 2
    maxpool_stride: 2
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        use_dropout: false
        dropout_keep_probability: 1.0
        fc_hyperparams {
          op: FC
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: 0.0
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 300
      }
      score_converter: SOFTMAX
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
  }
}
train_config: {
  batch_size: 1
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        manual_step_learning_rate {
          initial_learning_rate: 0.0003
          schedule {
            step: 0
            learning_rate: .0003
          }
          schedule {
            step: 900000
            learning_rate: .00003
          }
          schedule {
            step: 1200000
            learning_rate: .000003
          }
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
  fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
  from_detection_checkpoint: true
  # Note: The below line limits the training process to 200K steps, which we
  # empirically found to be sufficient enough to train the pets dataset. This
  # effectively bypasses the learning rate schedule (the learning rate will
  # never decay). Remove the below line to train indefinitely.
  num_steps: 200000
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
}
train_input_reader: {
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED/mscoco_train.record"
  }
  label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
}
eval_config: {
  num_examples: 8000
  # Note: The below line limits the evaluation process to 10 evaluations.
  # Remove the below line to evaluate indefinitely.
  max_evals: 10
}
eval_input_reader: {
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED/mscoco_val.record"
  }
  label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
  shuffle: false
  num_readers: 1
  num_epochs: 1
}
I found two sources that shed some light on the config file:
1. The protos folder inside the tensorflow/models GitHub repository covers all configuration options, with some comments on each option. You should check out faster_rcnn.proto, eval.proto and train.proto for the most common ones.
2. This blog post by Algorithmia thoroughly covers all the steps to download, prepare and train Faster R-CNN on Google's Open Images dataset. Two-thirds of the way through, there is some discussion of the configuration options.

Object detection boxes are lost at second evaluation step

I'm a beginner with Tensorflow 1.4.0 and I'm trying to run my first training + evaluation process on an object detection model. What I'm seeing is something weird in the output of the evaluation steps.
Here are the steps I took. First, it's worth saying that my goal is to detect two different kinds of shapes in very particular scientific images. They are under a kind of "copyright", so I can only show a simplified version of them (made by hand). Just keep in mind that the original ones are way more detailed.
A raw example of an input image: see it as a repeated pattern (there is always a grid in the background) with some particular shapes in random positions.
As you can see, I want to train the model to detect 2 classes: "round" shapes (class A) and "irregular" shapes (class B).
I used labelImg to generate labels for both classes in XML format. In total, I've labeled 168 images (960x720 RGB, PNG), ending up with 800 boxes (a single image may contain multiple A/B shapes).
I've also prepared a smaller dataset for evaluation, composed of 10 new images and 150 labels. This time the images are bigger than those in the training dataset (they are not "resized"; the viewport is simply larger, so there can be more events in each input). We are talking about 1920x1440 RGB, PNG images.
Then I converted the XMLs for both datasets into two .tfrecord files (there are some scripts around GitHub for this).
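For reference, the records those scripts produce follow the feature layout the Object Detection API expects; here is a minimal sketch for a single image with one box (illustrative values, coordinates normalized to [0, 1], written against the modern tf.io API rather than the 1.4-era tf.python_io one):

import tensorflow as tf

def bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def float_list(values):
    return tf.train.Feature(float_list=tf.train.FloatList(value=values))

def int64_list(values):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=values))

with open("images/sample.png", "rb") as f:  # placeholder path
    encoded = f.read()

example = tf.train.Example(features=tf.train.Features(feature={
    "image/height": int64_list([720]),
    "image/width": int64_list([960]),
    "image/filename": bytes_feature(b"sample.png"),
    "image/source_id": bytes_feature(b"sample.png"),
    "image/encoded": bytes_feature(encoded),
    "image/format": bytes_feature(b"png"),
    "image/object/bbox/xmin": float_list([0.10]),
    "image/object/bbox/xmax": float_list([0.25]),
    "image/object/bbox/ymin": float_list([0.30]),
    "image/object/bbox/ymax": float_list([0.55]),
    "image/object/class/text": bytes_feature(b"shape_a"),
    "image/object/class/label": int64_list([1]),
}))

with tf.io.TFRecordWriter("train.tfrecord") as writer:
    writer.write(example.SerializeToString())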
Then I prepared all the other input files for Tensorflow:
Label map file:
item {
  id: 1
  name: 'shape_a'
  display_name: 'Shape A'
}
item {
  id: 2
  name: 'shape_b'
  display_name: 'Shape B'
}
Config file (adapted from https://github.com/tensorflow/models/tree/master/research/object_detection/samples/configs). As you can see, I've chosen faster_rcnn_inception_v2 and tried to train it from scratch (because of the nature of these images, which are very different from the ones used in the pretrained models). Most of the parameters are kept as they are in the repository.
model {
  faster_rcnn {
    num_classes: 2
    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 720
        max_dimension: 960
      }
    }
    feature_extractor {
      type: 'faster_rcnn_inception_v2'
      first_stage_features_stride: 16
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        scales: [0.25, 0.5, 1.0, 2.0]
        aspect_ratios: [0.5, 1.0, 2.0]
        height_stride: 16
        width_stride: 16
      }
    }
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.0
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.01
        }
      }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.5
    first_stage_max_proposals: 300
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    initial_crop_size: 14
    maxpool_kernel_size: 2
    maxpool_stride: 2
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        use_dropout: false
        dropout_keep_probability: 1.0
        fc_hyperparams {
          op: FC
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: 0.0
        iou_threshold: 0.5
        max_detections_per_class: 100
        max_total_detections: 300
      }
      score_converter: SOFTMAX
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
  }
}
train_config: {
  batch_size: 1
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        manual_step_learning_rate {
          initial_learning_rate: 0.0002
          schedule {
            step: 0
            learning_rate: .0002
          }
          schedule {
            step: 900000
            learning_rate: .00002
          }
          schedule {
            step: 1200000
            learning_rate: .000002
          }
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
  from_detection_checkpoint: false
  # fine_tune_checkpoint: "./run/train/modelXXXXXX.ckpt"
  num_steps: 200000
  data_augmentation_options {
    random_horizontal_flip {}
  }
  data_augmentation_options {
    random_vertical_flip {}
  }
  data_augmentation_options {
    random_adjust_brightness { max_delta: 0.15 }
  }
}
train_input_reader: {
  tf_record_input_reader {
    input_path: "./train.tfrecord"
  }
  label_map_path: "./label_map.pbtxt"
}
eval_config: {
  num_examples: 10
  # Note: The below line limits the evaluation process to 10 evaluations.
  # Remove the below line to evaluate indefinitely.
  max_evals: 10
  eval_interval_secs: 300
}
eval_input_reader: {
  tf_record_input_reader {
    input_path: "./eval.tfrecord"
  }
  label_map_path: "./label_map.pbtxt"
  shuffle: false
  num_readers: 1
}
Finally, I run TensorFlow by calling the https://github.com/tensorflow/models/blob/master/research/object_detection/train.py script. Running on a notebook Nvidia Quadro GPU, performance is around 0.600 sec/step. There are no errors in the console, but the first thing I notice is that the loss seems to converge to 0.4 in relatively few (?) steps and stay there:
At around 500 steps I also started the evaluation script (https://github.com/tensorflow/models/blob/master/research/object_detection/eval.py) on the CPU. It runs every 5 minutes (eval_interval_secs: 300) and I can see the output in TensorBoard.
Here is the problem. The first evaluation corresponds to the checkpoint at step #0, so the output images are a bunch of randomly placed boxes, which should be normal. One odd fact is that only boxes for the first class (A) are present.
Then, from the second evaluation (around step #1000) onwards, the output images show no detections at all! No class A/B boxes are drawn, and nothing shows up until I decide to stop everything (step #10000).
I was expecting to keep seeing detections, even if with errors.
I have many questions, and I've probably made obvious mistakes in my workflow (my knowledge is still very limited):
Is the behavior I'm seeing in the loss and evaluation outputs really strange?
What techniques can I use to check whether I made conceptual mistakes in data preparation?
Can I debug what's happening under the hood during training?
What about the TensorFlow config file? Is there something wrong there?
A note: I've also tried the same thing with other models like ssd_*, but the behavior is the same.

Tensorflow object detection API evaluation stuck

I'm using the TensorFlow Object Detection API on my own data with the faster_rcnn_resnet101 model. I'm training from scratch. The training part goes well, but the evaluation part gets stuck from the start and never shows a result. It looks like this:
I tried using an older version of the API that I downloaded a few months ago, on the same dataset, and everything worked. Is there something wrong with the current version of the API, especially in the evaluation part? Thank you for your attention.
My configuration file looks like this:
model {
  faster_rcnn {
    num_classes: 10
    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 600
        max_dimension: 1024
      }
    }
    feature_extractor {
      type: 'faster_rcnn_resnet101'
      first_stage_features_stride: 16
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        scales: [0.25, 0.5, 1.0, 2.0]
        aspect_ratios: [0.5, 1.0, 2.0]
        height_stride: 16
        width_stride: 16
      }
    }
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.0
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.01
        }
      }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.7
    first_stage_max_proposals: 300
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    initial_crop_size: 14
    maxpool_kernel_size: 2
    maxpool_stride: 2
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        use_dropout: false
        dropout_keep_probability: 1.0
        fc_hyperparams {
          op: FC
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: 0.0
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 300
      }
      score_converter: SOFTMAX
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
  }
}
train_config: {
  batch_size: 1
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        manual_step_learning_rate {
          initial_learning_rate: 0.0003
          schedule {
            step: 0
            learning_rate: .0003
          }
          schedule {
            step: 900000
            learning_rate: .00003
          }
          schedule {
            step: 1200000
            learning_rate: .000003
          }
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
  #fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
  #from_detection_checkpoint: true
  # Note: The below line limits the training process to 200K steps, which we
  # empirically found to be sufficient enough to train the pets dataset. This
  # effectively bypasses the learning rate schedule (the learning rate will
  # never decay). Remove the below line to train indefinitely.
  #num_steps: 200000
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
}
train_input_reader: {
  tf_record_input_reader {
    input_path: "/PATH/TO/train.record"
  }
  label_map_path: "/PATH/TO/my_label_map.pbtxt"
}
eval_config: {
  num_examples: 2000
  # Note: The below line limits the evaluation process to 10 evaluations.
  # Remove the below line to evaluate indefinitely.
  #max_evals: 10
}
eval_input_reader: {
  tf_record_input_reader {
    input_path: "/PATH/TO/test.record"
  }
  label_map_path: "/PATH/TO/my_label_map.pbtxt"
  shuffle: false
  num_readers: 1
  num_epochs: 1
}
A Faster R-CNN object detector takes a little longer to evaluate (in comparison with YOLO or SSD) due to its higher accuracy vs. speed tradeoff. I recommend reducing the number of images to 5-10 to see whether the evaluation script produces an output. As an additional check, you can visualize the detected objects in TensorBoard by adding the num_visualizations key to the eval config:
eval_config: {
  num_examples: 10
  num_visualizations: 10
  min_score_threshold: 0.15
  # Note: The below line limits the evaluation process to 10 evaluations.
  # Remove the below line to evaluate indefinitely.
  max_evals: 1
}
With the above config you should be able to see an Images tab in TensorBoard showing the object detections. Notice that I also set min_score_threshold to 0.15 to allow visualization of less confident boxes.