I'm trying to fine-tune the SSD MobileNet v2 (from the model zoo) with my own data. In pipeline.config I see that use_dropout is set to false. Why is that? I thought dropout should be used to prevent overfitting.
box_predictor {
  convolutional_box_predictor {
    conv_hyperparams {
      regularizer {
        l2_regularizer {
          weight: 3.99999989895e-05
        }
      }
      initializer {
        truncated_normal_initializer {
          mean: 0.0
          stddev: 0.0299999993294
        }
      }
      activation: RELU_6
      batch_norm {
        decay: 0.999700009823
        center: true
        scale: true
        epsilon: 0.0010000000475
        train: true
      }
    }
    min_depth: 0
    max_depth: 0
    num_layers_before_predictor: 0
    use_dropout: false
    dropout_keep_probability: 0.800000011921
    kernel_size: 3
    box_code_size: 4
    apply_sigmoid_to_scores: false
  }
}
Is it because of batch normalization? This paper says:
"3.4 Batch Normalization regularizes the model
When training with Batch Normalization, a training example is seen in conjunction with other examples in the mini-batch, and the training network no longer producing deterministic values for a given training example. In our experiments, we found this effect to be advantageous to the generalization of the network. Whereas Dropout (Srivastava et al., 2014) is typically used to reduce overfitting, in a batch-normalized network we found that it can be either removed or reduced in strength."
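The regularizing effect that passage describes is easy to see in a toy sketch (my own NumPy illustration, not code from the paper or the API): the same training example is normalized with whatever batch statistics its mini-batch happens to have, so its activation is noisy during training, much like a mild dropout.

```python
import numpy as np

def batch_norm_train(batch, eps=1e-3):
    # Normalize with the *batch's own* statistics, as done at training time.
    mu = batch.mean(axis=0)
    var = batch.var(axis=0)
    return (batch - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = np.array([1.0, 2.0])                       # one fixed training example
batch_a = np.vstack([x, rng.normal(size=(3, 2))])  # two different mini-batches
batch_b = np.vstack([x, rng.normal(size=(3, 2))])  # that both contain x

out_a = batch_norm_train(batch_a)[0]  # x normalized inside batch A
out_b = batch_norm_train(batch_b)[0]  # x normalized inside batch B

# The same example gets different activations depending on its mini-batch:
print(np.allclose(out_a, out_b))  # False -> batch norm injects per-batch noise
```

This is consistent with the reference configs leaving use_dropout: false: batch norm already supplies some regularization, though on a very small fine-tuning dataset you may still want to turn dropout back on.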
Related
I'm working on training a model using transfer learning. The pre-trained model I use is "SSD MobileNet V2 FPNLite 320x320" from the TF2 Model Zoo, and I'm confused about an item in train_config in the pipeline.config file.
If I change num_steps, the model trains for num_steps. But when I change total_steps, the model still trains for num_steps. Even if I set num_steps > total_steps, there is no error. Checking all the SSD models in the TF2 Model Zoo, I always see total_steps set to the same value as num_steps.
Question: Do I need to set total_steps to the same value as num_steps? What is the relationship between them?
train_config {
  batch_size: 128
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    random_crop_image {
      min_object_covered: 0.0
      min_aspect_ratio: 0.75
      max_aspect_ratio: 3.0
      min_area: 0.75
      max_area: 1.0
      overlap_thresh: 0.0
    }
  }
  sync_replicas: true
  optimizer {
    momentum_optimizer {
      learning_rate {
        cosine_decay_learning_rate {
          learning_rate_base: 0.07999999821186066
          total_steps: 50000
          warmup_learning_rate: 0.026666000485420227
          warmup_steps: 1000
        }
      }
      momentum_optimizer_value: 0.8999999761581421
    }
    use_moving_average: false
  }
  fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED"
  num_steps: 50000
  startup_delay_steps: 0.0
  replicas_to_aggregate: 8
  max_number_of_boxes: 100
  unpad_groundtruth_tensors: false
  fine_tune_checkpoint_type: "classification"
  fine_tune_checkpoint_version: V2
}
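On the question above: num_steps is how long the training loop runs, while total_steps only parameterizes the cosine-decay learning-rate curve. A rough re-implementation of the schedule in this config (my own approximation of what cosine_decay_learning_rate computes, clamped at zero past total_steps) shows why num_steps > total_steps raises no error:

```python
import math

def cosine_decay_with_warmup(step, base_lr=0.08, total_steps=50000,
                             warmup_lr=0.026666, warmup_steps=1000):
    """Approximate re-implementation of the config's LR schedule."""
    if step < warmup_steps:
        # Linear warmup from warmup_lr up to base_lr.
        return warmup_lr + (base_lr - warmup_lr) * step / warmup_steps
    # Cosine decay from base_lr down to ~0 at total_steps, then held there.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * base_lr * (1 + math.cos(math.pi * min(progress, 1.0)))

for s in (0, 1000, 25000, 50000, 60000):
    print(s, round(cosine_decay_with_warmup(s), 5))
# Past total_steps the LR just sits at its floor, so training beyond it
# "works" but learns at (approximately) zero rate.
```

Keeping the two equal simply means the learning rate reaches its minimum exactly when training stops, which is presumably why every zoo config matches them.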
I am training a TensorFlow Object Detection API model on a custom dataset (license plates). My goal is to deploy the model to an edge device with TensorFlow Lite, so I can't use any R-CNN family model: the Object Detection API cannot convert R-CNN family detection models to TensorFlow Lite. I am therefore using the ssd_mobilenet_v2_coco model to train on the custom dataset. Here is a snippet of my config file:
model {
ssd {
num_classes: 1
box_coder {
faster_rcnn_box_coder {
y_scale: 10.0
x_scale: 10.0
height_scale: 5.0
width_scale: 5.0
}
}
matcher {
argmax_matcher {
matched_threshold: 0.5
unmatched_threshold: 0.5
ignore_thresholds: false
negatives_lower_than_unmatched: true
force_match_for_each_row: true
}
}
similarity_calculator {
iou_similarity {
}
}
anchor_generator {
ssd_anchor_generator {
num_layers: 6
min_scale: 0.2
max_scale: 0.95
aspect_ratios: 1.0
aspect_ratios: 2.0
aspect_ratios: 0.5
aspect_ratios: 3.0
aspect_ratios: 0.3333
}
}
image_resizer {
fixed_shape_resizer {
height: 300
width: 300
}
}
box_predictor {
convolutional_box_predictor {
min_depth: 0
max_depth: 0
num_layers_before_predictor: 0
use_dropout: false
dropout_keep_probability: 0.8
kernel_size: 1
box_code_size: 4
apply_sigmoid_to_scores: false
conv_hyperparams {
activation: RELU_6,
regularizer {
l2_regularizer {
weight: 0.00004
}
}
initializer {
truncated_normal_initializer {
stddev: 0.03
mean: 0.0
}
}
batch_norm {
train: true,
scale: true,
center: true,
decay: 0.9997,
epsilon: 0.001,
}
}
}
}
feature_extractor {
type: 'ssd_mobilenet_v2'
min_depth: 16
depth_multiplier: 1.0
conv_hyperparams {
activation: RELU_6,
regularizer {
l2_regularizer {
weight: 0.00004
}
}
initializer {
truncated_normal_initializer {
stddev: 0.03
mean: 0.0
}
}
batch_norm {
train: true,
scale: true,
center: true,
decay: 0.9997,
epsilon: 0.001,
}
}
}
loss {
classification_loss {
weighted_sigmoid {
}
}
localization_loss {
weighted_smooth_l1 {
}
}
hard_example_miner {
num_hard_examples: 3000
iou_threshold: 0.99
loss_type: CLASSIFICATION
max_negatives_per_positive: 3
min_negatives_per_image: 3
}
classification_weight: 1.0
localization_weight: 1.0
}
normalize_loss_by_num_matches: true
post_processing {
batch_non_max_suppression {
score_threshold: 1e-8
iou_threshold: 0.6
max_detections_per_class: 100
max_total_detections: 100
}
score_converter: SIGMOID
}
}
}
train_config: {
batch_size: 24
optimizer {
rms_prop_optimizer: {
learning_rate: {
exponential_decay_learning_rate {
initial_learning_rate: 0.004
decay_steps: 800720
decay_factor: 0.95
}
}
momentum_optimizer_value: 0.9
decay: 0.9
epsilon: 1.0
}
}
fine_tune_checkpoint: "/home/sach/DL/Pycharm_Workspace/TF1.14/License_Plate_F-RCNN/dataset/experiments/training_SSD/ssd_mobilenet_v2_coco_2018_03_29/model.ckpt"
fine_tune_checkpoint_type: "detection"
num_steps: 150000
data_augmentation_options {
random_horizontal_flip {
}
}
data_augmentation_options {
ssd_random_crop {
}
}
}
train_input_reader: {
tf_record_input_reader {
input_path: "/home/sach/DL/Pycharm_Workspace/TF1.14/License_Plate_F-RCNN/dataset/records/training.record"
}
label_map_path: "/home/sach/DL/Pycharm_Workspace/TF1.14/License_Plate_F-RCNN/dataset/records/classes.pbtxt"
}
eval_config: {
num_examples: 488
num_visualizations : 488
}
eval_input_reader: {
tf_record_input_reader {
input_path: "/home/sach/DL/Pycharm_Workspace/TF1.14/License_Plate_F-RCNN/dataset/records/testing.record"
}
label_map_path: "/home/sach/DL/Pycharm_Workspace/TF1.14/License_Plate_F-RCNN/dataset/records/classes.pbtxt"
shuffle: false
num_readers: 1
}
I have 1932 images in total (train images: 1444, val images: 488). I have trained the model for 150000 steps. Here is the output from TensorBoard:
DetectionBoxes Precision mAP@0.5 IOU: after 150K steps, the detection accuracy (mAP@0.5 IOU) is ~0.97, i.e. 97%, which seems fine for the moment.
Training loss: after 150K steps, the training loss is ~1.3, which seems okay.
Evaluation/validation loss: after 150K steps, the evaluation/validation loss is ~3.90, which is pretty high, and there is a huge difference between training and evaluation loss. Is there overfitting? How can I overcome this problem? In my view, training and evaluation loss should be close to each other.
How can I reduce the validation/evaluation loss?
I am using the default config file, so by default use_dropout: false. Should I change it to use_dropout: true if overfitting exists?
What is an acceptable range of training and validation loss for an object detection model?
Please share your views. Thank you!
There are several possible causes of overfitting in neural networks. Looking at your config file, I would suggest a few things to try:
Set use_dropout: true, which makes the neurons less sensitive to small changes in the weights.
Try increasing iou_threshold in batch_non_max_suppression.
Use an l1 regularizer, or a combination of l1 and l2 regularizers.
Change the optimizer to Nadam or Adam.
Include more augmentation techniques.
You can also use early stopping to track your accuracy.
Alternatively, watch the TensorBoard curves and take the weights from the step just before the validation loss starts increasing.
I hope trying these steps resolves the overfitting issue of your model.
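On the first suggestion: dropout_keep_probability: 0.8 corresponds to standard inverted dropout, which a few lines of NumPy can illustrate (my own sketch of the technique, not the Object Detection API's implementation):

```python
import numpy as np

def inverted_dropout(activations, keep_prob=0.8, seed=0):
    # Train-time dropout: zero out ~(1 - keep_prob) of the units and rescale
    # the survivors by 1/keep_prob, so the expected activation is unchanged
    # and inference needs no rescaling.
    rng = np.random.default_rng(seed)
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

acts = np.ones((4, 10))
dropped = inverted_dropout(acts)
print((dropped == 0).mean())  # roughly 0.2 of the units are dropped
```

The random masking forces the predictor head not to rely on any single activation, which is why it helps when the gap between training and validation loss is large.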
I'm trying to train the model using pretrained faster_rcnn_inception_v2_coco. I'm using the following config file:
model {
faster_rcnn {
num_classes: 37
image_resizer {
keep_aspect_ratio_resizer {
min_dimension: 1080
max_dimension: 1365
}
}
feature_extractor {
type: "faster_rcnn_inception_v2"
first_stage_features_stride: 8
}
first_stage_anchor_generator {
grid_anchor_generator {
height_stride: 16
width_stride: 16
scales: 0.25
scales: 0.5
scales: 1.0
scales: 2.0
aspect_ratios: 0.5
aspect_ratios: 1.0
aspect_ratios: 2.0
}
}
first_stage_box_predictor_conv_hyperparams {
op: CONV
regularizer {
l2_regularizer {
weight: 0.0001
}
}
initializer {
truncated_normal_initializer {
stddev: 0.00999999977648
}
}
}
first_stage_nms_score_threshold: 0.0
first_stage_nms_iou_threshold: 0.699999988079
first_stage_max_proposals: 31
second_stage_batch_size: 30
first_stage_localization_loss_weight: 2.0
first_stage_objectness_loss_weight: 1.0
initial_crop_size: 14
maxpool_kernel_size: 2
maxpool_stride: 2
second_stage_box_predictor {
mask_rcnn_box_predictor {
fc_hyperparams {
op: FC
regularizer {
l2_regularizer {
weight: 0.0001
}
}
initializer {
variance_scaling_initializer {
factor: 1.0
uniform: true
mode: FAN_AVG
}
}
}
use_dropout: true
dropout_keep_probability: 0.20
}
}
second_stage_post_processing {
batch_non_max_suppression {
score_threshold: 0.0
iou_threshold: 0.600000023842
max_detections_per_class: 100
max_total_detections: 100
}
score_converter: SOFTMAX
}
second_stage_classification_loss{
weighted_sigmoid_focal{
gamma:2
alpha:0.5
}
}
second_stage_localization_loss_weight: 2.0
second_stage_classification_loss_weight: 1.0
}
}
train_config {
batch_size: 1
data_augmentation_options {
random_jitter_boxes {
}
}
optimizer {
adam_optimizer {
learning_rate {
manual_step_learning_rate {
initial_learning_rate: 4.99999987369e-05
schedule {
step: 160000
learning_rate: 1e-05
}
schedule {
step: 175000
learning_rate: 1e-06
}
}
}
}
use_moving_average: true
}
gradient_clipping_by_norm: 10.0
fine_tune_checkpoint: "/home/deploy/tensorflow/models/research/object_detection/ved/model.ckpt"
from_detection_checkpoint: true
num_steps: 400000
}
train_input_reader {
label_map_path: "/home/deploy/tensorflow/models/research/object_detection/ved/tomato.pbtxt"
tf_record_input_reader {
input_path: "/home/deploy/tensorflow/models/research/object_detection/ved/train.record"
}
}
eval_config {
num_visualizations: 4
max_evals: 5
num_examples: 4
max_num_boxes_to_visualize : 100
metrics_set: "coco_detection_metrics"
eval_interval_secs: 600
}
eval_input_reader {
label_map_path: "/home/deploy/tensorflow/models/research/object_detection/ved/tomato.pbtxt"
shuffle: true
num_epochs: 1
num_readers: 1
tf_record_input_reader {
input_path: "/home/deploy/tensorflow/models/research/object_detection/ved/val.record"
}
sample_1_of_n_examples: 2
}
But I'm getting following error:
InvalidArgumentError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
Assign requires shapes of both tensors to match. lhs shape= [148] rhs shape= [40]
[[node save/Assign_728 (defined at /home/deploy/tensorflow/models/research/object_detection/model_lib.py:490) = Assign[T=DT_FLOAT, _class=["loc:@SecondStageBoxPredictor/BoxEncodingPredictor/biases"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](SecondStageBoxPredictor/BoxEncodingPredictor/biases, save/RestoreV2/_1457)]]
[[{{node save/RestoreV2/_1768}} = _Send[T=DT_FLOAT, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_1773_save/RestoreV2", _device="/job:localhost/replica:0/task:0/device:CPU:0"](save/RestoreV2:884)]]
I don't know why it's happening. I've changed num_classes, first_stage_max_proposals and second_stage_batch_size.
Try correcting the checkpoint file path. The checkpoint must come from the same model that is used for training; usually it ships with the pre-trained model package downloaded from the TensorFlow Model Zoo.
Try fixing this line:
fine_tune_checkpoint: "/home/deploy/tensorflow/models/research/object_detection/ved/model.ckpt"
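The shapes in the error support this diagnosis: the second-stage box-encoding bias holds one 4-value box code per class, so [148] and [40] correspond to graphs built for different numbers of classes. A back-of-the-envelope check (assuming box_code_size: 4 and one box per class, as in this config):

```python
BOX_CODE_SIZE = 4  # y, x, h, w offsets per box

def box_bias_length(num_classes, box_code_size=BOX_CODE_SIZE):
    # The second-stage box-encoding predictor emits one box per class.
    return num_classes * box_code_size

print(box_bias_length(37))  # 148 -> the graph built from this config (num_classes: 37)
print(box_bias_length(10))  # 40  -> the checkpoint appears to come from a 10-class model
```

148 matches num_classes: 37 in this config, while 40 suggests the checkpoint at ved/model.ckpt was saved from a differently configured model rather than the original zoo package.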
Hoping this helps others trying to do transfer learning with the TensorFlow Object Detection API:
I found this "Transfer learning with TensorFlow Hub" tutorial. It is about classification, but adapting the code for object detection should be a nice learning exercise for whoever tries it.
It basically has three steps. The tutorial demonstrates how to:
Use models from TensorFlow Hub with tf.keras.
Use an image classification model from TensorFlow Hub.
Do simple transfer learning to fine-tune a model for your own image classes.
You can have a look at the "Downloading the model" section, which has a classifier as an example.
You can also check out the pre-trained object detection models that TensorFlow Hub currently supports. Here are a few good ones:
Model name                                   Speed (ms)   COCO mAP
CenterNet HourGlass104 512x512               70           41.9
EfficientDet D2 768x768                      67           41.8
EfficientDet D3 896x896                      95           45.4
SSD ResNet101 V1 FPN 640x640 (RetinaNet101)  57           35.6
CenterNet Resnet101 V1 FPN 512x512           34           34.2
CenterNet Resnet50 V1 FPN 512x512            27           31.2
Steps 1 and 2 can be completed if you follow the previous section properly. For the last step you can use the simple_transfer_learning section. You will have to go through the entire "Transfer learning with TensorFlow Hub" tutorial to understand more.
I'm using the TensorFlow Object Detection API. The problem with this API is that it exports a frozen graph for inference, which I can't use for serving. So, as a workaround, I followed the tutorial here. But when I try to export the graph I get the following error:
InvalidArgumentError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
Assign requires shapes of both tensors to match. lhs shape= [1024,4] rhs shape= [1024,8]
[[node save/Assign_258 (defined at /home/deploy/models/research/object_detection/exporter.py:67) = Assign[T=DT_FLOAT, _class=["loc:@SecondStageBoxPredictor/BoxEncodingPredictor/weights"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](SecondStageBoxPredictor/BoxEncodingPredictor/weights, save/RestoreV2/_517)]]
[[{{node save/RestoreV2/_522}} = _Send[T=DT_FLOAT, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_527_save/RestoreV2", _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]
The error says there is a mismatch in the graph. A possible cause is that the pretrained graph I'm using for training was built for a different number of classes than my model, hence the shape mismatch ([1024,4] vs [1024,8]). There is a similar issue for the DeepLab model, and the solution for that
specific model was to start training with the --initialize_last_layer=False and --last_layers_contain_logits_only=False flags. But the TensorFlow Object Detection API doesn't have those flags. So how should I proceed? Also, is there any other way of serving a TensorFlow Object Detection API model?
My config file looks like this:
model {
faster_rcnn {
num_classes: 1
image_resizer {
fixed_shape_resizer {
height: 1000
width: 1000
resize_method: AREA
}
}
feature_extractor {
type: "faster_rcnn_inception_v2"
first_stage_features_stride: 16
}
first_stage_anchor_generator {
grid_anchor_generator {
height_stride: 16
width_stride: 16
scales: 0.25
scales: 0.5
scales: 1.0
scales: 2.0
aspect_ratios: 0.5
aspect_ratios: 1.0
aspect_ratios: 2.0
}
}
first_stage_box_predictor_conv_hyperparams {
op: CONV
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
truncated_normal_initializer {
stddev: 0.00999999977648
}
}
}
first_stage_nms_score_threshold: 0.0
first_stage_nms_iou_threshold: 0.699999988079
first_stage_max_proposals: 300
first_stage_localization_loss_weight: 2.0
first_stage_objectness_loss_weight: 1.0
initial_crop_size: 14
maxpool_kernel_size: 2
maxpool_stride: 2
second_stage_box_predictor {
mask_rcnn_box_predictor {
fc_hyperparams {
op: FC
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
variance_scaling_initializer {
factor: 1.0
uniform: true
mode: FAN_AVG
}
}
}
use_dropout: false
dropout_keep_probability: 1.0
}
}
second_stage_post_processing {
batch_non_max_suppression {
score_threshold: 0.0
iou_threshold: 0.600000023842
max_detections_per_class: 100
max_total_detections: 300
}
score_converter: SOFTMAX
}
second_stage_localization_loss_weight: 2.0
second_stage_classification_loss_weight: 1.0
}
}
train_config {
batch_size: 8
data_augmentation_options {
random_horizontal_flip {
}
}
optimizer {
adam_optimizer {
learning_rate {
manual_step_learning_rate {
initial_learning_rate: 0.00010000000475
schedule {
step: 40000
learning_rate: 3.00000010611e-05
}
}
}
}
use_moving_average: true
}
gradient_clipping_by_norm: 10.0
fine_tune_checkpoint: "/home/deploy/models/research/object_detection/faster_rcnn_inception_v2_coco_2018_01_28/model.ckpt"
from_detection_checkpoint: true
num_steps: 60000
max_number_of_boxes: 100
}
train_input_reader {
label_map_path: "/home/deploy/models/research/object_detection/Training_carrot_060219/carrot_identify.pbtxt"
tf_record_input_reader {
input_path: "/home/deploy/models/research/object_detection/Training_carrot_060219/train.record"
}
}
eval_config {
num_visualizations: 100
num_examples: 135
eval_interval_secs: 60
use_moving_averages: false
}
eval_input_reader {
label_map_path: "/home/deploy/models/research/object_detection/Training_carrot_060219/carrot_identify.pbtxt"
shuffle: true
num_epochs: 1
num_readers: 1
tf_record_input_reader {
input_path: "/home/deploy/models/research/object_detection/Training_carrot_060219/test.record"
}
sample_1_of_n_examples: 1
}
When exporting models for TF Serving, the config file and the checkpoint files must correspond to each other.
The problem is that when exporting the custom-trained model, you were using the old config file with the new checkpoint files.
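The shapes in the export error fit that explanation: the fully connected box-encoding weights have shape [1024, num_classes x 4], so the graph built from this config (num_classes: 1) and the checkpoint being restored disagree. A minimal sketch of the arithmetic (the 1024 feature width is read off the error message; the 4-per-class factor assumes the usual 4-value box code):

```python
FEATURE_WIDTH = 1024   # second-stage feature width, from the error message
BOX_CODE_SIZE = 4      # y, x, h, w offsets per box

def box_weight_shape(num_classes):
    # FC box-encoding layer: one box code per class.
    return (FEATURE_WIDTH, num_classes * BOX_CODE_SIZE)

print(box_weight_shape(1))  # (1024, 4) -> graph built from this config (num_classes: 1)
print(box_weight_shape(2))  # (1024, 8) -> the checkpoint was saved from a 2-class model
```

So the fix is to export with the config that actually produced the checkpoint, not an older copy.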
I am working on training the object detector on a custom dataset designed to detect the head of a plant. I am using the "Faster R-CNN with Resnet-101 (v1)" configuration that was originally designed for the pet dataset.
I modified the config file to match my dataset (1875 training / 375 eval images, each 275x550 in size), converted all the record files, and the pipeline file is shown below.
I trained on a GPU overnight for 100k steps, and the actual detection results look really good: it detects all the plant heads and the output is really useful.
The issue is the metrics themselves. Checking the TensorBoard logs for the eval, all the metrics improve until ~30k steps and then degrade again, making a hump in the middle. This goes for the loss, mAP, and precision curves.
Why is this happening? I assumed that if you keep training, the metrics should flatten out into a line rather than turn downwards again.
mAP Evaluation: https://imgur.com/a/hjobr6c
Loss Evaluation: https://imgur.com/a/EY8Afqc
# Faster R-CNN with Resnet-101 (v1) originally for Oxford-IIIT Pets Dataset. Modified for wheat head detection
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "" to find the fields that
# should be configured.
model {
faster_rcnn {
num_classes: 1
image_resizer {
keep_aspect_ratio_resizer {
min_dimension: 275
max_dimension: 550
}
}
feature_extractor {
type: 'faster_rcnn_resnet101'
first_stage_features_stride: 16
}
first_stage_anchor_generator {
grid_anchor_generator {
scales: [0.25, 0.5, 1.0, 2.0]
aspect_ratios: [0.5, 1.0, 2.0]
height_stride: 16
width_stride: 16
}
}
first_stage_box_predictor_conv_hyperparams {
op: CONV
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
truncated_normal_initializer {
stddev: 0.01
}
}
}
first_stage_nms_score_threshold: 0.0
first_stage_nms_iou_threshold: 0.7
first_stage_max_proposals: 300
first_stage_localization_loss_weight: 2.0
first_stage_objectness_loss_weight: 1.0
initial_crop_size: 14
maxpool_kernel_size: 2
maxpool_stride: 2
second_stage_box_predictor {
mask_rcnn_box_predictor {
use_dropout: false
dropout_keep_probability: 1.0
fc_hyperparams {
op: FC
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
variance_scaling_initializer {
factor: 1.0
uniform: true
mode: FAN_AVG
}
}
}
}
}
second_stage_post_processing {
batch_non_max_suppression {
score_threshold: 0.0
iou_threshold: 0.6
max_detections_per_class: 100
max_total_detections: 300
}
score_converter: SOFTMAX
}
second_stage_localization_loss_weight: 2.0
second_stage_classification_loss_weight: 1.0
}
}
train_config: {
batch_size: 1
optimizer {
momentum_optimizer: {
learning_rate: {
manual_step_learning_rate {
initial_learning_rate: 0.0003
schedule {
step: 900000
learning_rate: .00003
}
schedule {
step: 1200000
learning_rate: .000003
}
}
}
momentum_optimizer_value: 0.9
}
use_moving_average: false
}
gradient_clipping_by_norm: 10.0
fine_tune_checkpoint: "object_detection/faster_rcnn_resnet101_coco_11_06_2017/model.ckpt"
from_detection_checkpoint: true
load_all_detection_checkpoint_vars: true
# Note: The below line limits the training process to 200K steps, which we
# empirically found to be sufficient enough to train the pets dataset. This
# effectively bypasses the learning rate schedule (the learning rate will
# never decay). Remove the below line to train indefinitely.
num_steps: 200000
data_augmentation_options {
random_horizontal_flip {
}
}
}
train_input_reader: {
tf_record_input_reader {
input_path: "object_detection/data_wheat/train.record-?????-of-00010"
}
label_map_path: "object_detection/data_wheat/wheat_label_map.pbtxt"
}
eval_config: {
metrics_set: "coco_detection_metrics"
num_examples: 375
}
eval_input_reader: {
tf_record_input_reader {
input_path: "object_detection/data_wheat/val.record-?????-of-00010"
}
label_map_path: "object_detection/data_wheat/wheat_label_map.pbtxt"
shuffle: false
num_readers: 1
}
This is a standard case of overfitting: your model is memorizing the training data and has lost its ability to generalize to unseen data.
For cases like this one you have two options:
Early stopping: monitor the validation metrics and, as soon as they become constant and/or start degrading, stop the training.
Add regularization to the model (and still do early stopping anyway).
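The early-stopping option can be sketched in a few lines of plain Python (a generic illustration, not part of the Object Detection API; with the TF1 API you would approximate it by keeping the checkpoint saved just before the validation curves turn):

```python
def early_stop_index(val_losses, patience=3):
    """Return the evaluation index whose checkpoint to keep: the last
    improvement before `patience` evaluations without progress."""
    best_i, best = 0, float("inf")
    for i, loss in enumerate(val_losses):
        if loss < best:
            best_i, best = i, loss
        elif i - best_i >= patience:
            break  # no improvement for `patience` evals -> stop training here
    return best_i

# A hump-shaped curve like the one described: improves until ~30k steps, then worsens.
losses = [4.0, 3.2, 2.8, 2.5, 2.6, 2.9, 3.4, 3.9]
print(early_stop_index(losses))  # 3 -> keep the checkpoint from the 4th evaluation
```

In practice this means saving checkpoints regularly and restoring the one from the bottom of the validation-loss curve instead of the final one.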