Imbalanced dataset Object Detection - tensorflow

Is there a good way to fine-tune a model for object detection (in particular, I am trying to use the TensorFlow Object Detection API) on a dataset with highly skewed data? I am trying to take some categories of COCO and combine them with my own custom data, but there are only about 50 images of my data.
I have tried simply combining the COCO data with my own, but the model just predicts the COCO categories every time.

You could try using Focal Loss.
See: https://arxiv.org/pdf/1708.02002.pdf
In a TensorFlow Object Detection API pipeline config this would appear as follows:
loss {
  localization_loss {
    weighted_smooth_l1 {
    }
  }
  classification_loss {
    weighted_sigmoid_focal {
      gamma: 2.0
      alpha: 0.25
    }
  }
  classification_weight: 1.0
  localization_weight: 1.0
}
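For intuition, here is a minimal sketch of what a sigmoid focal loss computes, mirroring the formula from the paper rather than the API's internal implementation:

import tensorflow as tf

def sigmoid_focal_loss(labels, logits, gamma=2.0, alpha=0.25):
    # Plain sigmoid cross-entropy per anchor/class
    ce = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
    p = tf.sigmoid(logits)
    # p_t: predicted probability of the true class
    p_t = labels * p + (1.0 - labels) * (1.0 - p)
    alpha_t = labels * alpha + (1.0 - labels) * (1.0 - alpha)
    # (1 - p_t)^gamma down-weights easy, well-classified examples
    return alpha_t * tf.pow(1.0 - p_t, gamma) * ce

With gamma: 2.0 the contribution of well-classified examples (p_t close to 1) is strongly suppressed, so the abundant COCO classes stop dominating the gradient, and alpha: 0.25 re-balances positive against negative anchors. That is why focal loss is a common first thing to try on a skewed dataset like yours.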

Related

Which loss scalars should I watch in TensorBoard?

I'm training a custom object detection model that detects only persons.
I followed this GitHub document.
Here is my environment.
Tensorflow version : 1.13.1 GPU
Tensorboard version : 1.13.1
Pre-trained model : SSD mobilenet v2 quantized 300x300 coco
According to the above document, my model's loss should drop below 2.
So I opened TensorBoard and found the classification loss, total loss, and clone loss scalars.
Here is my tensorboard snapshot.
Which loss scalars should I look at?
I also have a problem with convergence.
My model's loss converges at 4, not 2.
How can I fix it?
I reduced the learning rate from 0.004 (the provided value) to 0.0000095, but it still converges at 4.
Here is my train_config.
train_config: {
  batch_size: 6
  optimizer {
    rms_prop_optimizer: {
      learning_rate: {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.0000095
          decay_steps: 500
          decay_factor: 0.95
        }
      }
      momentum_optimizer_value: 0.9
      decay: 0.9
      epsilon: 1.0
    }
  }
}
Is there anything to change?
Thanks for all help.
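One thing worth checking here: with the schedule above, the learning rate does not stay at 0.0000095 but keeps shrinking. A quick back-of-the-envelope sketch in plain Python (ignoring the staircase option) of where it ends up:

# lr(step) = initial_learning_rate * decay_factor ** (step / decay_steps)
lr0, decay_steps, decay_factor = 9.5e-6, 500, 0.95
for step in (0, 10_000, 50_000):
    lr = lr0 * decay_factor ** (step / decay_steps)
    print(f"step {step:>6}: lr = {lr:.2e}")
# step      0: lr = 9.50e-06
# step  10000: lr = 3.40e-06
# step  50000: lr = 5.63e-08

At around 5e-8 the weights are effectively frozen, so a plateau at 4 may mean the model has stopped learning rather than converged; lowering the initial rate by three orders of magnitude is usually too drastic a change. As for the scalars, the total loss (clone loss in multi-GPU setups) is typically the figure such guides quote, though that is an assumption about the particular document being followed.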

My custom model's loss converged to a high value (Tensorflow)

I'm training a custom model that detects only persons.
I used the TensorFlow Object Detection API and followed this GitHub document.
I took the images from the COCO dataset and prepared 400 test images and 1,600 training images.
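For reference, a person-only subset like that can be pulled from the COCO annotation files with pycocotools; a minimal sketch, assuming the standard instances_train2017.json layout:

from pycocotools.coco import COCO

coco = COCO("annotations/instances_train2017.json")  # assumed path
person_cat = coco.getCatIds(catNms=["person"])[0]
img_ids = coco.getImgIds(catIds=[person_cat])
print(f"{len(img_ids)} images contain at least one person")
# e.g. pick file names to convert into the train/test tf.records
file_names = [coco.loadImgs(i)[0]["file_name"] for i in img_ids[:1600]]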
Here is my train config.
train_config: {
  batch_size: 6
  optimizer {
    rms_prop_optimizer: {
      learning_rate: {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.0001
          decay_steps: 250
          decay_factor: 0.9
        }
      }
      momentum_optimizer_value: 0.9
      decay: 0.9
      epsilon: 1.0
    }
  }
}
And my environment.
Tensorflow : 1.13.1 gpu
GPU : GTX 1070
CUDA : 10.0
cuDNN : 7.4.2
According to the above GitHub document, the loss should drop below 2, but my model's loss always converges to 4.
Is there a problem somewhere?
I can't figure out what is wrong...
Thanks for all help.

Tensorflow OD API - Use of dropout when fine-tuning a model

I'm trying to fine-tune the SSD MobileNet v2 (from the model zoo) with my own data. In pipeline.config I see that use_dropout is set to false. Why is that? I thought dropout should be used to prevent overfitting.
box_predictor {
  convolutional_box_predictor {
    conv_hyperparams {
      regularizer {
        l2_regularizer {
          weight: 3.99999989895e-05
        }
      }
      initializer {
        truncated_normal_initializer {
          mean: 0.0
          stddev: 0.0299999993294
        }
      }
      activation: RELU_6
      batch_norm {
        decay: 0.999700009823
        center: true
        scale: true
        epsilon: 0.0010000000475
        train: true
      }
    }
    min_depth: 0
    max_depth: 0
    num_layers_before_predictor: 0
    use_dropout: false
    dropout_keep_probability: 0.800000011921
    kernel_size: 3
    box_code_size: 4
    apply_sigmoid_to_scores: false
  }
}
Is it because of batch normalization? In this paper, it says:
3.4 Batch Normalization regularizes the model
When training with Batch Normalization, a training example is seen in conjunction with other examples in the mini-batch, and the training network no longer produces deterministic values for a given training example. In our experiments, we found this effect to be advantageous to the generalization of the network. Whereas Dropout (Srivastava et al., 2014) is typically used to reduce overfitting, in a batch-normalized network we found that it can be either removed or reduced in strength.
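The quoted effect is easy to see directly: in training mode a batch-norm layer normalizes each example with the statistics of its mini-batch, so the output for one example depends on which other examples it is batched with. A small sketch (TF 2.x eager mode, not the exact SSD code):

import tensorflow as tf

bn = tf.keras.layers.BatchNormalization()
x = tf.random.normal([4, 8])
# Same first example, two different mini-batches
batch_a = x
batch_b = tf.concat([x[:1], tf.random.normal([3, 8])], axis=0)
y_a = bn(batch_a, training=True)[0]
y_b = bn(batch_b, training=True)[0]
# In general y_a != y_b: the per-batch statistics inject per-example noise,
# which is the regularizing effect the paper describes
print(tf.reduce_max(tf.abs(y_a - y_b)).numpy())

So yes: because the predictor convolutions are trained with batch_norm, the config's authors could afford use_dropout: false; flipping it to true is still a legitimate thing to try if you observe overfitting on a small dataset.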

Transfer learning using tensorflow object detection api

I'm trying to train a model using the pretrained faster_rcnn_inception_v2_coco. I'm using the following config file:
model {
  faster_rcnn {
    num_classes: 37
    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 1080
        max_dimension: 1365
      }
    }
    feature_extractor {
      type: "faster_rcnn_inception_v2"
      first_stage_features_stride: 8
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        height_stride: 16
        width_stride: 16
        scales: 0.25
        scales: 0.5
        scales: 1.0
        scales: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 1.0
        aspect_ratios: 2.0
      }
    }
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.0001
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.00999999977648
        }
      }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.699999988079
    first_stage_max_proposals: 31
    second_stage_batch_size: 30
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    initial_crop_size: 14
    maxpool_kernel_size: 2
    maxpool_stride: 2
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        fc_hyperparams {
          op: FC
          regularizer {
            l2_regularizer {
              weight: 0.0001
            }
          }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
        use_dropout: true
        dropout_keep_probability: 0.20
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: 0.0
        iou_threshold: 0.600000023842
        max_detections_per_class: 100
        max_total_detections: 100
      }
      score_converter: SOFTMAX
    }
    second_stage_classification_loss {
      weighted_sigmoid_focal {
        gamma: 2
        alpha: 0.5
      }
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
  }
}
train_config {
  batch_size: 1
  data_augmentation_options {
    random_jitter_boxes {
    }
  }
  optimizer {
    adam_optimizer {
      learning_rate {
        manual_step_learning_rate {
          initial_learning_rate: 4.99999987369e-05
          schedule {
            step: 160000
            learning_rate: 1e-05
          }
          schedule {
            step: 175000
            learning_rate: 1e-06
          }
        }
      }
    }
    use_moving_average: true
  }
  gradient_clipping_by_norm: 10.0
  fine_tune_checkpoint: "/home/deploy/tensorflow/models/research/object_detection/ved/model.ckpt"
  from_detection_checkpoint: true
  num_steps: 400000
}
train_input_reader {
  label_map_path: "/home/deploy/tensorflow/models/research/object_detection/ved/tomato.pbtxt"
  tf_record_input_reader {
    input_path: "/home/deploy/tensorflow/models/research/object_detection/ved/train.record"
  }
}
eval_config {
  num_visualizations: 4
  max_evals: 5
  num_examples: 4
  max_num_boxes_to_visualize: 100
  metrics_set: "coco_detection_metrics"
  eval_interval_secs: 600
}
eval_input_reader {
  label_map_path: "/home/deploy/tensorflow/models/research/object_detection/ved/tomato.pbtxt"
  shuffle: true
  num_epochs: 1
  num_readers: 1
  tf_record_input_reader {
    input_path: "/home/deploy/tensorflow/models/research/object_detection/ved/val.record"
  }
  sample_1_of_n_examples: 2
}
But I'm getting the following error:
InvalidArgumentError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
Assign requires shapes of both tensors to match. lhs shape= [148] rhs shape= [40]
[[node save/Assign_728 (defined at /home/deploy/tensorflow/models/research/object_detection/model_lib.py:490) = Assign[T=DT_FLOAT, _class=["loc:@SecondStageBoxPredictor/BoxEncodingPredictor/biases"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](SecondStageBoxPredictor/BoxEncodingPredictor/biases, save/RestoreV2/_1457)]]
[[{{node save/RestoreV2/_1768}} = _Send[T=DT_FLOAT, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_1773_save/RestoreV2", _device="/job:localhost/replica:0/task:0/device:CPU:0"](save/RestoreV2:884)]]
I don't know why it's happening. I've changed num_classes, first_stage_max_proposals and second_stage_batch_size.
Try correcting the checkpoint file path. The checkpoint should come from the same model that is used for training; usually it ships inside the pre-trained model package downloaded from the TensorFlow Model Zoo. The shapes in the error are a hint: 148 = 37 × 4 (your num_classes × box_code_size), while the checkpoint being restored holds a [40] tensor, so it was most likely produced by a model configured for a different number of classes.
Try fixing this line:
fine_tune_checkpoint: "/home/deploy/tensorflow/models/research/object_detection/ved/model.ckpt"
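To confirm whether a checkpoint actually matches your config, you can list the shapes of the variables stored in it; a small diagnostic sketch (point it at your fine_tune_checkpoint path):

import tensorflow as tf

ckpt = "/home/deploy/tensorflow/models/research/object_detection/ved/model.ckpt"
for name, shape in tf.train.list_variables(ckpt):
    if "BoxEncodingPredictor" in name:
        # with num_classes: 37 and box_code_size: 4 you'd expect 37 * 4 = 148 here
        print(name, shape)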
Hope this helps others trying to do transfer learning using the TensorFlow Object Detection API.
I found this Transfer learning with TensorFlow Hub tutorial. The link is about classification; adapting the code for object detection should be a nice learning curve for whoever tries it out.
It basically has 3 steps. The tutorial demonstrates how to:
Use models from TensorFlow Hub with tf.keras.
Use an image classification model from TensorFlow Hub.
Do simple transfer learning to fine-tune a model for your own image classes.
You can have a look at the Downloading the model section, which has a classifier as an example.
You can also check out the pre-trained object detection models that TensorFlow Hub currently supports. Here are a few good ones:
Model name                                     Speed (ms)   COCO mAP
CenterNet HourGlass104 512x512                 70           41.9
EfficientDet D2 768x768                        67           41.8
EfficientDet D3 896x896                        95           45.4
SSD ResNet101 V1 FPN 640x640 (RetinaNet101)    57           35.6
CenterNet Resnet101 V1 FPN 512x512             34           34.2
CenterNet Resnet50 V1 FPN 512x512              27           31.2
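Loading one of these detectors for inference takes only a few lines; a sketch (the hub handle below is the published EfficientDet D2 mirror as an example, check tfhub.dev for the exact model you want):

import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load("https://tfhub.dev/tensorflow/efficientdet/d2/1")
image = tf.zeros([1, 768, 768, 3], dtype=tf.uint8)  # replace with a real image batch
outputs = detector(image)
# standard detection outputs: boxes, classes, scores, num_detections
print(outputs["detection_boxes"].shape, outputs["detection_scores"].shape)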
Step 1 and Step 2 can be completed if you follow the previous section properly.
For the fine-tuning step you can use the simple_transfer_learning section; a sketch of the idea follows below.
You have to go through the entire Transfer learning with TensorFlow Hub tutorial to understand more.
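The simple_transfer_learning step boils down to freezing a Hub feature extractor and training a new head on your own classes. A minimal classification sketch of the idea (the model handle and num_classes are placeholders, not from the question):

import tensorflow as tf
import tensorflow_hub as hub

num_classes = 5  # your own label count

model = tf.keras.Sequential([
    hub.KerasLayer(
        "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4",
        input_shape=(224, 224, 3),
        trainable=False),               # freeze the pretrained backbone
    tf.keras.layers.Dense(num_classes)  # new head, trained from scratch
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])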

Training loss fluctuates a lot

I am fine-tuning SSD MobileNet (COCO) on the Pascal VOC dataset. I have around 17K images in the training set and num_steps is 100000. Details of the config are:
train_config: {
  batch_size: 1
  optimizer {
    rms_prop_optimizer: {
      learning_rate: {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.0001
          decay_steps: 800720
          decay_factor: 0.95
        }
      }
      momentum_optimizer_value: 0.9
      decay: 0.9
      epsilon: 1.0
    }
  }
}
However, the training loss fluctuates a lot, as shown in this training-loss plot. How can I avoid this?
Thanks
It seems likely that your learning rate, though you're decaying it, is still too large in later steps. Note also that with decay_steps: 800720 and num_steps: 100000, the decay barely kicks in during training at all, and with batch_size: 1 some step-to-step noise is expected.
Things that I would recommend at this point:
Increase your decay (lower decay_steps so the schedule actually takes effect; see the sketch below)
Try another optimizer (e.g. Adam, which worked well for me in such cases)
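To see why increasing the decay matters here, compare what the configured schedule does over the 100000 training steps against a more aggressive decay_steps (plain Python, ignoring the staircase option):

for decay_steps in (800_720, 10_000):
    lr = 1e-4 * 0.95 ** (100_000 / decay_steps)
    print(f"decay_steps={decay_steps}: lr at step 100000 = {lr:.2e}")
# decay_steps=800720: lr = 9.94e-05  (barely decayed from 1e-4)
# decay_steps=10000:  lr = 5.99e-05

With decay_steps: 800720 the rate is still essentially at its initial value when training ends, which fits the "too large in later steps" diagnosis above.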