I googled around a bit, but I only found questions about enabling data augmentation.
I followed this tutorial but with my own dataset (only one class). I had already performed data augmentation on my dataset offline, so I deleted the corresponding data_augmentation_options lines from pipeline.config.
Now my pipeline looks like this:
model {
ssd {
num_classes: 1
image_resizer {
fixed_shape_resizer {
height: 640
width: 640
}
}
feature_extractor {
type: "ssd_resnet50_v1_fpn_keras"
depth_multiplier: 1.0
min_depth: 16
conv_hyperparams {
regularizer {
l2_regularizer {
weight: 0.00039999998989515007
}
}
initializer {
truncated_normal_initializer {
mean: 0.0
stddev: 0.029999999329447746
}
}
activation: RELU_6
batch_norm {
decay: 0.996999979019165
scale: true
epsilon: 0.0010000000474974513
}
}
override_base_feature_extractor_hyperparams: true
fpn {
min_level: 3
max_level: 7
}
}
box_coder {
faster_rcnn_box_coder {
y_scale: 10.0
x_scale: 10.0
height_scale: 5.0
width_scale: 5.0
}
}
matcher {
argmax_matcher {
matched_threshold: 0.5
unmatched_threshold: 0.5
ignore_thresholds: false
negatives_lower_than_unmatched: true
force_match_for_each_row: true
use_matmul_gather: true
}
}
similarity_calculator {
iou_similarity {
}
}
box_predictor {
weight_shared_convolutional_box_predictor {
conv_hyperparams {
regularizer {
l2_regularizer {
weight: 0.00039999998989515007
}
}
initializer {
random_normal_initializer {
mean: 0.0
stddev: 0.009999999776482582
}
}
activation: RELU_6
batch_norm {
decay: 0.996999979019165
scale: true
epsilon: 0.0010000000474974513
}
}
depth: 256
num_layers_before_predictor: 4
kernel_size: 3
class_prediction_bias_init: -4.599999904632568
}
}
anchor_generator {
multiscale_anchor_generator {
min_level: 3
max_level: 7
anchor_scale: 4.0
aspect_ratios: 1.0
aspect_ratios: 2.0
aspect_ratios: 0.5
scales_per_octave: 2
}
}
post_processing {
batch_non_max_suppression {
score_threshold: 9.99999993922529e-09
iou_threshold: 0.6000000238418579
max_detections_per_class: 100
max_total_detections: 100
use_static_shapes: false
}
score_converter: SIGMOID
}
normalize_loss_by_num_matches: true
loss {
localization_loss {
weighted_smooth_l1 {
}
}
classification_loss {
weighted_sigmoid_focal {
gamma: 2.0
alpha: 0.25
}
}
classification_weight: 1.0
localization_weight: 1.0
}
encode_background_as_zeros: true
normalize_loc_loss_by_codesize: true
inplace_batchnorm_update: true
freeze_batchnorm: false
}
}
train_config {
batch_size: 1
sync_replicas: true
optimizer {
momentum_optimizer {
learning_rate {
cosine_decay_learning_rate {
learning_rate_base: 0.03999999910593033
total_steps: 25000
warmup_learning_rate: 0.013333000242710114
warmup_steps: 2000
}
}
momentum_optimizer_value: 0.8999999761581421
}
use_moving_average: false
}
fine_tune_checkpoint: "/home/sally/work/training/TensorFlow/workspace/pre-trained-models/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/checkpoint/ckpt-0"
num_steps: 25000
startup_delay_steps: 0.0
replicas_to_aggregate: 8
max_number_of_boxes: 100
unpad_groundtruth_tensors: false
fine_tune_checkpoint_type: "detection"
use_bfloat16: false
fine_tune_checkpoint_version: V2
}
train_input_reader {
label_map_path: "/home/sally/work/training/TensorFlow/workspace/annotations/label_map.pbtxt"
tf_record_input_reader {
input_path: "/home/sally/work/training/TensorFlow/workspace/annotations/train.record"
}
}
eval_config {
metrics_set: "coco_detection_metrics"
use_moving_averages: false
}
eval_input_reader {
label_map_path: "/home/sally/work/training/TensorFlow/workspace/annotations/label_map.pbtxt"
shuffle: false
num_epochs: 1
tf_record_input_reader {
input_path: "/home/sally/work/training/TensorFlow/workspace/annotations/test.record"
}
}
I started the training, but in TensorBoard I can see that the training images are severely distorted.
For reference, normal images look like this:
As you can see, I am trying to detect Kellogg's boxes. The dataset is generated using Blender (the soda can and fence are there as decoy objects and to partially occlude the boxes).
Now my question: how do I disable any sort of data augmentation in the Object Detection API?
The mAP is very low because of these distorted images used during training.
This is an issue with the normalization of the images; it does not affect your training.
However, if you want the images to be displayed correctly in TensorBoard, normalize them to the range (0, 1). Check this link for some possible changes.
Note: normalizing to (-1, 1) has been reported to produce the same issue.
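For illustration, a minimal rescaling sketch (rescale_for_tensorboard is a hypothetical helper, not part of the API; a min-max rescale is just one way to get the preprocessed tensor back into a displayable range):

import tensorflow as tf

def rescale_for_tensorboard(images):
    # The images are assumed to be float tensors after the model's own
    # normalization (e.g. mean subtraction), which is why TensorBoard
    # renders them distorted. Min-max rescale to [0, 1] for display only.
    lo = tf.reduce_min(images)
    hi = tf.reduce_max(images)
    return (images - lo) / (hi - lo + 1e-8)

# Hypothetical usage with a summary writer:
# tf.summary.image("train_input", rescale_for_tensorboard(images), step=step)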
Related
Hi, I work with the faster_rcnn_resnet101_v1_1024x1024_coco17_tpu-8 pretrained model. I have problems when evaluating the model; the training went without any problems. I start the evaluation of the model with the command:
python model_main_tf2.py --pipeline_config_path=./training_outlook_action_ctx/training_1/pipeline.config --model_dir=./training_outlook_action_ctx/training_1 --checkpoint_dir=./training_outlook_action_ctx/training_1
After the first "Loaded cuDNN version 8400" message, it starts throwing the following warning, which repeats until the process is interrupted:
WARNING:tensorflow:Ignoring ground truth with image id 1016176252 since it was previously added
W0810 10:17:12.131517 140545620840832 coco_evaluation.py:113] Ignoring ground truth with image id 1016176252 since it was previously added
WARNING:tensorflow:Ignoring detection with image id 1016176252 since it was previously added
W0810 10:17:12.131881 140545620840832 coco_evaluation.py:196] Ignoring detection with image id 1016176252 since it was previously added
WARNING:tensorflow:Ignoring ground truth with image id 1016176252 since it was previously added
W0810 10:17:12.652873 140545620840832 coco_evaluation.py:113] Ignoring ground truth with image id 1016176252 since it was previously added
WARNING:tensorflow:Ignoring detection with image id 1016176252 since it was previously added
W0810 10:17:12.653055 140545620840832 coco_evaluation.py:196] Ignoring detection with image id 1016176252 since it was previously added
WARNING:tensorflow:Ignoring ground truth with image id 1016176252 since it was previously added
Here is my pipeline.config file:
# Faster R-CNN with Resnet-101 (v1)
# Trained on COCO, initialized from Imagenet classification checkpoint
# This config is TPU compatible.
model {
faster_rcnn {
num_classes: 7
image_resizer {
fixed_shape_resizer {
width: 1024
height: 1024
}
}
feature_extractor {
type: 'faster_rcnn_resnet101_keras'
batch_norm_trainable: true
}
first_stage_anchor_generator {
grid_anchor_generator {
scales: [0.25, 0.5, 1.0, 2.0]
aspect_ratios: [0.5, 1.0, 2.0]
height_stride: 16
width_stride: 16
}
}
first_stage_box_predictor_conv_hyperparams {
op: CONV
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
truncated_normal_initializer {
stddev: 0.01
}
}
}
first_stage_nms_score_threshold: 0.0
first_stage_nms_iou_threshold: 0.7
first_stage_max_proposals: 300
first_stage_localization_loss_weight: 2.0
first_stage_objectness_loss_weight: 1.0
initial_crop_size: 14
maxpool_kernel_size: 2
maxpool_stride: 2
second_stage_box_predictor {
mask_rcnn_box_predictor {
use_dropout: false
dropout_keep_probability: 1.0
fc_hyperparams {
op: FC
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
variance_scaling_initializer {
factor: 1.0
uniform: true
mode: FAN_AVG
}
}
}
share_box_across_classes: true
}
}
second_stage_post_processing {
batch_non_max_suppression {
score_threshold: 0.0
iou_threshold: 0.6
max_detections_per_class: 100
max_total_detections: 300
}
score_converter: SOFTMAX
}
second_stage_localization_loss_weight: 2.0
second_stage_classification_loss_weight: 1.0
use_static_shapes: true
use_matmul_crop_and_resize: true
clip_anchors_to_image: true
use_static_balanced_label_sampler: true
use_matmul_gather_in_matcher: true
}
}
train_config: {
batch_size: 2
sync_replicas: true
startup_delay_steps: 0
replicas_to_aggregate: 8
num_steps: 200000
optimizer {
momentum_optimizer: {
learning_rate: {
cosine_decay_learning_rate {
learning_rate_base: .04
total_steps: 100000
warmup_learning_rate: .013333
warmup_steps: 2000
}
}
momentum_optimizer_value: 0.9
}
use_moving_average: false
}
fine_tune_checkpoint_version: V2
fine_tune_checkpoint: "/pretrained_models/faster_rcnn_resnet101_v1_1024x1024_coco17_tpu-8/checkpoint/ckpt-0"
fine_tune_checkpoint_type: "detection"
data_augmentation_options {
random_horizontal_flip {
}
}
data_augmentation_options {
random_adjust_hue {
}
}
data_augmentation_options {
random_adjust_contrast {
}
}
data_augmentation_options {
random_adjust_saturation {
}
}
data_augmentation_options {
random_square_crop_by_scale {
scale_min: 0.6
scale_max: 1.3
}
}
max_number_of_boxes: 100
unpad_groundtruth_tensors: false
use_bfloat16: true # works only on TPUs
}
train_input_reader: {
label_map_path: "./training_outlook_action_ctx/data/label_map.pbtxt"
tf_record_input_reader {
input_path: "./training_outlook_action_ctx/data/train.records"
}
}
eval_config: {
metrics_set: "coco_detection_metrics"
use_moving_averages: false
batch_size: 2
}
eval_input_reader: {
label_map_path: "./training_outlook_action_ctx/data/label_map.pbtxt"
shuffle: false
tf_record_input_reader {
input_path: "./training_outlook_action_ctx/data/train.records"
}
}
OS: Debian GNU/Linux 11 (bullseye)
Python: 3.9.12
Tensorflow: 2.9.1
I tried adding num_examples and max_evals, but no matter how I adjust them, it still throws the same error.
I must mention that the evaluation on the second dataset worked normally for me.
Thanks in advance
Edi
Guys, I found a solution. The problem was in the script I used to create the images and annotations: it crops my first-level annotations and creates new XML files for the cropped images, and the filename and path in those XML files were wrong (a bug in my script).
After the fix, the evaluation error disappeared.
I'm happy to answer any questions.
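If you want to confirm the same root cause, here is a quick sketch that counts duplicate ids in a record (it assumes your TFRecords use the standard image/source_id feature; the path is the one from the config above):

import collections
import tensorflow as tf

# Count how often each image/source_id occurs; duplicates are exactly
# what triggers the "previously added" warnings in coco_evaluation.
counts = collections.Counter()
for raw in tf.data.TFRecordDataset("./training_outlook_action_ctx/data/train.records"):
    example = tf.train.Example.FromString(raw.numpy())
    counts[example.features.feature["image/source_id"].bytes_list.value[0]] += 1

duplicates = {k: v for k, v in counts.items() if v > 1}
print(len(duplicates), "duplicated ids:", duplicates)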
I'm training detection models and am unclear on the data augmentation steps.
My model's config file currently contains:
train_config {
batch_size: 4
data_augmentation_options {
random_horizontal_flip {
}
}
data_augmentation_options {
random_vertical_flip{
}
}
data_augmentation_options {
random_adjust_brightness {
}
}
data_augmentation_options {
random_black_patches {
}
}
data_augmentation_options {
random_crop_image {
min_object_covered: 0.0
min_aspect_ratio: 0.75
max_aspect_ratio: 3.0
min_area: 0.75
max_area: 1.0
overlap_thresh: 0.0
}
}
}
Is this the correct way to apply data augmentation? Or am I supposed to put all augmentation settings in the same block, e.g. having only one:
train_config {
batch_size: 4
data_augmentation_options {
random_crop_image {
min_object_covered: 0.0
min_aspect_ratio: 0.75
max_aspect_ratio: 3.0
min_area: 0.75
max_area: 1.0
overlap_thresh: 0.0
}
random_horizontal_flip {
}
random_vertical_flip{
}
random_adjust_brightness {
}
random_black_patches {
}
}
}
Or does it even matter which method I choose?
Also, is there a way to preview one of my images under these augmentation settings, e.g. by using matplotlib to display an augmented image?
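As a rough preview, you can approximate the pixel-level options with plain tf.image ops and matplotlib. This is only a sketch with standard TensorFlow ops, not the API's own preprocessor (that lives in object_detection.core.preprocessor), and example.jpg is a placeholder path:

import matplotlib.pyplot as plt
import tensorflow as tf

# Load one training image and approximate the flip/brightness options.
image = tf.image.convert_image_dtype(
    tf.io.decode_jpeg(tf.io.read_file("example.jpg")), tf.float32)
augmented = tf.image.random_flip_left_right(image)
augmented = tf.image.random_flip_up_down(augmented)
augmented = tf.image.random_brightness(augmented, max_delta=0.2)

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(image); ax1.set_title("original")
ax2.imshow(tf.clip_by_value(augmented, 0.0, 1.0)); ax2.set_title("augmented")
plt.show()

Note that geometric options such as random_crop_image also transform the bounding boxes, so a faithful preview of those would have to go through the API's preprocessor rather than plain tf.image ops.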
I've downloaded the EfficientDet D0 512x512 model from the Object Detection API model zoo, downloaded the PASCAL VOC dataset, and preprocessed it with the create_pascal_tf_record.py file. Next I took one of the config files and adjusted it to fit the architecture and the VOC dataset. When evaluating the resulting network with pascal_voc_detection_metrics, it gives me a near-zero mAP for the first class (airplane); the other classes perform fine. I'm assuming one of the settings in my config file (pasted below) is wrong. Why does this happen and how do I fix it?
model {
ssd {
inplace_batchnorm_update: true
freeze_batchnorm: false
num_classes: 20
add_background_class: false
box_coder {
faster_rcnn_box_coder {
y_scale: 10.0
x_scale: 10.0
height_scale: 5.0
width_scale: 5.0
}
}
matcher {
argmax_matcher {
matched_threshold: 0.5
unmatched_threshold: 0.5
ignore_thresholds: false
negatives_lower_than_unmatched: true
force_match_for_each_row: true
use_matmul_gather: true
}
}
similarity_calculator {
iou_similarity {
}
}
encode_background_as_zeros: true
anchor_generator {
multiscale_anchor_generator {
min_level: 3
max_level: 7
anchor_scale: 4.0
aspect_ratios: [1.0, 2.0, 0.5]
scales_per_octave: 3
}
}
image_resizer {
keep_aspect_ratio_resizer {
min_dimension: 512
max_dimension: 512
pad_to_max_dimension: true
}
}
box_predictor {
weight_shared_convolutional_box_predictor {
depth: 64
class_prediction_bias_init: -4.6
conv_hyperparams {
force_use_bias: true
activation: SWISH
regularizer {
l2_regularizer {
weight: 0.00004
}
}
initializer {
random_normal_initializer {
stddev: 0.01
mean: 0.0
}
}
batch_norm {
scale: true
decay: 0.99
epsilon: 0.001
}
}
num_layers_before_predictor: 3
kernel_size: 3
use_depthwise: true
}
}
feature_extractor {
type: 'ssd_efficientnet-b0_bifpn_keras'
bifpn {
min_level: 3
max_level: 7
num_iterations: 3
num_filters: 64
}
conv_hyperparams {
force_use_bias: true
activation: SWISH
regularizer {
l2_regularizer {
weight: 0.00004
}
}
initializer {
truncated_normal_initializer {
stddev: 0.03
mean: 0.0
}
}
batch_norm {
scale: true,
decay: 0.99,
epsilon: 0.001,
}
}
}
loss {
classification_loss {
weighted_sigmoid_focal {
alpha: 0.25
gamma: 1.5
}
}
localization_loss {
weighted_smooth_l1 {
}
}
classification_weight: 1.0
localization_weight: 1.0
}
normalize_loss_by_num_matches: true
normalize_loc_loss_by_codesize: true
post_processing {
batch_non_max_suppression {
score_threshold: 1e-8
iou_threshold: 0.5
max_detections_per_class: 100
max_total_detections: 100
}
score_converter: SIGMOID
}
}
}
train_config: {
fine_tune_checkpoint: "oracle/efficientdet_d0/checkpoint/ckpt-0"
fine_tune_checkpoint_version: V2
fine_tune_checkpoint_type: "detection"
batch_size: 3
startup_delay_steps: 0
use_bfloat16: false
num_steps: 30000
data_augmentation_options {
random_horizontal_flip {
}
}
data_augmentation_options {
random_scale_crop_and_pad_to_square {
output_size: 512
scale_min: 0.1
scale_max: 2.0
}
}
optimizer {
momentum_optimizer: {
learning_rate: {
cosine_decay_learning_rate {
learning_rate_base: 8e-2
total_steps: 30000
warmup_learning_rate: .001
warmup_steps: 2500
}
}
momentum_optimizer_value: 0.9
}
use_moving_average: false
}
max_number_of_boxes: 100
unpad_groundtruth_tensors: false
update_trainable_variables: ["WeightSharedConvolutionalBoxPredictor"]
}
train_input_reader: {
label_map_path: "pascalVOC/pascal_label_map.pbtxt"
tf_record_input_reader {
input_path: "pascalVOC/pascal_train.record"
}
}
eval_config: {
metrics_set: "pascal_voc_detection_metrics"
use_moving_averages: false
batch_size: 1
}
eval_input_reader: {
label_map_path: "pascalVOC/pascal_label_map.pbtxt"
shuffle: false
num_epochs: 1
tf_record_input_reader {
input_path: "pascalVOC/pascal_val.record"
}
}
There is a bug in the way pascal_voc_detection_metrics calculates the metric; a fix can be found here.
I am pretty new to all this, so bear with me.
I've made a program to recognise tools. The issue is that, while running, it will see an object but label it N/A. Note that this doesn't happen for every class (it doesn't recognise screwdrivers well yet, but when it thinks it sees one, it does label it Screwdriver instead of N/A).
I've checked countless forums from people with this issue and I cannot find why it is happening.
I have 16 classes for the 16 objects; the labelmap is in order and exactly as shown on multiple other sites.
All out of ideas here.
Pipeline:
model {
ssd {
num_classes: 16
image_resizer {
keep_aspect_ratio_resizer {
min_dimension: 512
max_dimension: 512
pad_to_max_dimension: false
}
}
feature_extractor {
type: "ssd_efficientnet-b0_bifpn_keras"
conv_hyperparams {
regularizer {
l2_regularizer {
weight: 4e-05
}
}
initializer {
truncated_normal_initializer {
mean: 0.0
stddev: 0.03
}
}
activation: SWISH
batch_norm {
decay: 0.99
scale: true
epsilon: 0.001
}
force_use_bias: true
}
bifpn {
min_level: 3
max_level: 7
num_iterations: 3
num_filters: 64
}
}
box_coder {
faster_rcnn_box_coder {
y_scale: 10.0
x_scale: 10.0
height_scale: 5.0
width_scale: 5.0
}
}
matcher {
argmax_matcher {
matched_threshold: 0.5
unmatched_threshold: 0.5
ignore_thresholds: false
negatives_lower_than_unmatched: true
force_match_for_each_row: true
use_matmul_gather: true
}
}
similarity_calculator {
iou_similarity {
}
}
box_predictor {
weight_shared_convolutional_box_predictor {
conv_hyperparams {
regularizer {
l2_regularizer {
weight: 4e-05
}
}
initializer {
random_normal_initializer {
mean: 0.0
stddev: 0.01
}
}
activation: SWISH
batch_norm {
decay: 0.99
scale: true
epsilon: 0.001
}
force_use_bias: true
}
depth: 64
num_layers_before_predictor: 3
kernel_size: 3
class_prediction_bias_init: -4.6
use_depthwise: true
}
}
anchor_generator {
multiscale_anchor_generator {
min_level: 3
max_level: 7
anchor_scale: 4.0
aspect_ratios: 1.0
aspect_ratios: 2.0
aspect_ratios: 0.5
scales_per_octave: 3
}
}
post_processing {
batch_non_max_suppression {
score_threshold: 1e-08
iou_threshold: 0.5
max_detections_per_class: 100
max_total_detections: 100
}
score_converter: SIGMOID
}
normalize_loss_by_num_matches: true
loss {
localization_loss {
weighted_smooth_l1 {
}
}
classification_loss {
weighted_sigmoid_focal {
gamma: 1.5
alpha: 0.25
}
}
classification_weight: 1.0
localization_weight: 1.0
}
encode_background_as_zeros: true
normalize_loc_loss_by_codesize: true
inplace_batchnorm_update: true
freeze_batchnorm: false
add_background_class: false
}
}
train_config {
batch_size: 1
data_augmentation_options {
random_horizontal_flip {
}
}
data_augmentation_options {
random_scale_crop_and_pad_to_square {
output_size: 512
scale_min: 0.1
scale_max: 2.0
}
}
sync_replicas: true
optimizer {
momentum_optimizer {
learning_rate {
cosine_decay_learning_rate {
learning_rate_base: 0.08
total_steps: 300000
warmup_learning_rate: 0.001
warmup_steps: 2500
}
}
momentum_optimizer_value: 0.9
}
use_moving_average: false
}
fine_tune_checkpoint: "C:/Users/djust/Desktop/Object_detection/models/research/object_detection/efficientdet_d0_coco17_tpu-32/checkpoint/ckpt-0"
num_steps: 300000
startup_delay_steps: 0.0
replicas_to_aggregate: 8
max_number_of_boxes: 100
unpad_groundtruth_tensors: false
fine_tune_checkpoint_type: "detection"
use_bfloat16: false
fine_tune_checkpoint_version: V2
}
train_input_reader {
label_map_path: "C:/Users/djust/Desktop/Object_detection/models/research/object_detection/training/labelmap.pbtxt"
tf_record_input_reader {
input_path: "C:/Users/djust/Desktop/Object_detection/models/research/object_detection/train.record"
}
}
eval_config {
metrics_set: "coco_detection_metrics"
use_moving_averages: false
batch_size: 1
}
eval_input_reader {
label_map_path: "C:/Users/djust/Desktop/Object_detection/models/research/object_detection/training/labelmap.pbtxt"
shuffle: false
num_epochs: 1
tf_record_input_reader {
input_path: "C:/Users/djust/Desktop/Object_detection/models/research/object_detection/test.record"
}
}
Labelmap:
item {
display_name: 'person'
name: 'person'
id: 1
}
item {
display_name: 'crimping_tool'
name: 'crimping_tool'
id: 2
}
item {
display_name: 'drill_set'
name: 'drill_set'
id: 3
}
item {
display_name: 'utility_knife'
name: 'utility_knife'
id: 4
}
item {
display_name: 'screwdriver'
name: 'screwdriver'
id: 5
}
item {
display_name: 'stripping_pliers'
name: 'stripping_pliers'
id: 6
}
item {
display_name: 'cutting_pliers'
name: 'cutting_pliers'
id: 7
}
item {
display_name: 'stripping_tool'
name: 'stripping_tool'
id: 8
}
item {
display_name: 'pliers'
name: 'pliers'
id: 9
}
item {
display_name: 'pipewrench'
name: 'pipewrench'
id: 10
}
item {
display_name: 'measuring_tool'
name: 'measuring_tool'
id: 11
}
item {
display_name: 'cable_cutter_angled'
name: 'cable_cutter_angled'
id: 12
}
item {
display_name: 'stripping_tool_2'
name: 'stripping_tool_2'
id: 13
}
item {
display_name: 'wrench'
name: 'wrench'
id: 14
}
item {
display_name: 'hexkey_set'
name: 'hexkey_set'
id: 15
}
item {
display_name: 'drill_set_2'
name: 'drill_set_2'
id: 16
}
A possible cause could be that the label IDs in the TFRecords you use are not correct. Can you validate that, when converting your images and annotations to TFRecords, 'image/object/class/label' is set correctly?
'image/object/class/label':
dataset_util.int64_list_feature(category_ids)
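For context, here is a hedged sketch of how those ids might be assembled when writing the record (the names are taken from your labelmap; the surrounding tf.train.Example construction is omitted):

from object_detection.utils import dataset_util

# Each id must match the 'id' field of the corresponding item in
# labelmap.pbtxt. Hypothetical example for two annotated boxes:
class_names = [b'person', b'crimping_tool']
name_to_id = {b'person': 1, b'crimping_tool': 2}  # parsed from the labelmap
category_ids = [name_to_id[n] for n in class_names]

feature = {
    'image/object/class/text': dataset_util.bytes_list_feature(class_names),
    'image/object/class/label': dataset_util.int64_list_feature(category_ids),
}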
I also noticed there is a display_name in your labelmap file. I've never used display_name, and I'm not sure whether it could also be a cause of your N/A labels.
If the labels are correctly set in the tfrecord, then I would advise to try a labelmap file with the following structure:
item {
id: 1
name: 'person'
}
item {
id: 2
name: 'crimping_tool'
}
item {
id: 3
name: 'drill_set'
}
...
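To check what actually landed in the record, a sketch like this can help (it assumes the standard feature keys used by the Object Detection API):

import tensorflow as tf

# Print the stored class labels/texts for the first few examples so they
# can be compared against the ids and names in labelmap.pbtxt.
path = "C:/Users/djust/Desktop/Object_detection/models/research/object_detection/train.record"
for raw in tf.data.TFRecordDataset(path).take(5):
    example = tf.train.Example.FromString(raw.numpy())
    labels = example.features.feature["image/object/class/label"].int64_list.value
    texts = example.features.feature["image/object/class/text"].bytes_list.value
    print(list(labels), [t.decode() for t in texts])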
Need to know the proper configuration settings for the Tensorflow Object Detection API to add a class and do transfer learning
After reading https://github.com/tensorflow/models/issues/6479 and "Retrain Tensorflow Object detection API", it is still unclear how to do transfer learning with the API.
I'm looking for the proper way to add a class to a trained model, for example the SSD with MobileNet v1.
The methods I've seen using the object detection API involve making the following changes:
In the pipeline config file:
Change num_classes: 90 to num_classes: 1
Change fine_tune_checkpoint: to "../yourlocalpath/model.ckpt"
Keep from_detection_checkpoint: true
Change train_input_reader/ input_path: to "../yourtrainimagepath/train.record"
Change train_input_reader/ label_map_path to "../yourlocalpath/classes.pbtxt"
Change eval_input_reader / input_path to "../yourtestimagepath/test.record"
Change eval_input_reader / label_map_path to "../yourlocalpath/classes.pbtxt"
Also,
Change the file: "../yourlocalpath/classes.pbtxt" to only contain:
item {
id: 1
name: 'some_new_class'
}
I trained on 600 images for 200,000 steps (18 hours), down to a loss of 1.5.
I achieved over 90% accuracy on the training data but less than 10% on the evaluation set. This was clearly overfitting: my first take was that the model is too complex for a single item and simply memorized the training data. I also noticed that the other 90 original classes were no longer detected.
I then changed num_classes to 91 and simply added
item {
id: 91
name: 'some_new_class'
}
to the original classes.pbtxt file.
My results did not improve much (20%). (This time I stopped training around 100,000 steps, but the learning curve had pretty much flattened by that point.)
For both cases, I chose not to change the from_detection_checkpoint: true setting, because "starting from a detection checkpoint will usually result in a faster training job than a classification checkpoint." Reference: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/configuring_jobs.md#model-parameter-initialization
What is the proper way to train an object detector to detect all objects (old and new)?
I expect that when I run a prediction on an image containing already-trained objects in addition to my new object, all of them are found.
Here are the config files used.
The first config, with num_classes: 1:
# SSD with Mobilenet v1, configured for Oxford-IIIT Pets Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.
model {
ssd {
num_classes: 1
box_coder {
faster_rcnn_box_coder {
y_scale: 10.0
x_scale: 10.0
height_scale: 5.0
width_scale: 5.0
}
}
matcher {
argmax_matcher {
matched_threshold: 0.5
unmatched_threshold: 0.5
ignore_thresholds: false
negatives_lower_than_unmatched: true
force_match_for_each_row: true
}
}
similarity_calculator {
iou_similarity {
}
}
anchor_generator {
ssd_anchor_generator {
num_layers: 6
min_scale: 0.2
max_scale: 0.95
aspect_ratios: 1.0
aspect_ratios: 2.0
aspect_ratios: 0.5
aspect_ratios: 3.0
aspect_ratios: 0.3333
}
}
image_resizer {
fixed_shape_resizer {
height: 300
width: 300
}
}
box_predictor {
convolutional_box_predictor {
min_depth: 0
max_depth: 0
num_layers_before_predictor: 0
use_dropout: false
dropout_keep_probability: 0.8
kernel_size: 1
box_code_size: 4
apply_sigmoid_to_scores: false
conv_hyperparams {
activation: RELU_6,
regularizer {
l2_regularizer {
weight: 0.00004
}
}
initializer {
truncated_normal_initializer {
stddev: 0.03
mean: 0.0
}
}
batch_norm {
train: true,
scale: true,
center: true,
decay: 0.9997,
epsilon: 0.001,
}
}
}
}
feature_extractor {
type: 'ssd_mobilenet_v1'
min_depth: 16
depth_multiplier: 1.0
conv_hyperparams {
activation: RELU_6,
regularizer {
l2_regularizer {
weight: 0.00004
}
}
initializer {
truncated_normal_initializer {
stddev: 0.03
mean: 0.0
}
}
batch_norm {
train: true,
scale: true,
center: true,
decay: 0.9997,
epsilon: 0.001,
}
}
}
loss {
classification_loss {
weighted_sigmoid {
}
}
localization_loss {
weighted_smooth_l1 {
}
}
hard_example_miner {
num_hard_examples: 3000
iou_threshold: 0.99
loss_type: CLASSIFICATION
max_negatives_per_positive: 3
min_negatives_per_image: 0
}
classification_weight: 1.0
localization_weight: 1.0
}
normalize_loss_by_num_matches: true
post_processing {
batch_non_max_suppression {
score_threshold: 1e-8
iou_threshold: 0.6
max_detections_per_class: 100
max_total_detections: 100
}
score_converter: SIGMOID
}
}
}
train_config: {
batch_size: 10
optimizer {
rms_prop_optimizer: {
learning_rate: {
exponential_decay_learning_rate {
initial_learning_rate: 0.004
decay_steps: 800720
decay_factor: 0.95
}
}
momentum_optimizer_value: 0.9
decay: 0.9
epsilon: 1.0
}
}
fine_tune_checkpoint: "/home/adriansr/HoodML/Datasets/ssd_mobilenet_v1_coco_2018_01_28/model.ckpt"
from_detection_checkpoint: true
load_all_detection_checkpoint_vars: true
# Note: The below line limits the training process to 200K steps, which we
# empirically found to be sufficient enough to train the pets dataset. This
# effectively bypasses the learning rate schedule (the learning rate will
# never decay). Remove the below line to train indefinitely.
num_steps: 200000
data_augmentation_options {
random_horizontal_flip {
}
}
data_augmentation_options {
ssd_random_crop {
}
}
}
train_input_reader: {
tf_record_input_reader {
input_path: "/home/adriansr/HoodML/Datasets/2016_USATF_Sprint_TrainingDataset/Analyze/train.record"
}
label_map_path: "/home/adriansr/HoodML/hoodbibod/training/classes.pbtxt"
}
eval_config: {
metrics_set: "coco_detection_metrics"
num_examples: 1100
}
eval_input_reader: {
tf_record_input_reader {
input_path: "/home/adriansr/HoodML/Datasets/2016_USATF_Sprint_TrainingDataset/Analyze/test.record"
}
label_map_path: "/home/adriansr/HoodML/hoodbibod/training/classes.pbtxt"
shuffle: false
num_readers: 1
}
The second config, with num_classes: 91:
# SSD with Mobilenet v1, configured for Oxford-IIIT Pets Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.
model {
ssd {
num_classes: 91
box_coder {
faster_rcnn_box_coder {
y_scale: 10.0
x_scale: 10.0
height_scale: 5.0
width_scale: 5.0
}
}
matcher {
argmax_matcher {
matched_threshold: 0.5
unmatched_threshold: 0.5
ignore_thresholds: false
negatives_lower_than_unmatched: true
force_match_for_each_row: true
}
}
similarity_calculator {
iou_similarity {
}
}
anchor_generator {
ssd_anchor_generator {
num_layers: 6
min_scale: 0.2
max_scale: 0.95
aspect_ratios: 1.0
aspect_ratios: 2.0
aspect_ratios: 0.5
aspect_ratios: 3.0
aspect_ratios: 0.3333
}
}
image_resizer {
fixed_shape_resizer {
height: 300
width: 300
}
}
box_predictor {
convolutional_box_predictor {
min_depth: 0
max_depth: 0
num_layers_before_predictor: 0
use_dropout: false
dropout_keep_probability: 0.8
kernel_size: 1
box_code_size: 4
apply_sigmoid_to_scores: false
conv_hyperparams {
activation: RELU_6,
regularizer {
l2_regularizer {
weight: 0.00004
}
}
initializer {
truncated_normal_initializer {
stddev: 0.03
mean: 0.0
}
}
batch_norm {
train: true,
scale: true,
center: true,
decay: 0.9997,
epsilon: 0.001,
}
}
}
}
feature_extractor {
type: 'ssd_mobilenet_v1'
min_depth: 16
depth_multiplier: 1.0
conv_hyperparams {
activation: RELU_6,
regularizer {
l2_regularizer {
weight: 0.00004
}
}
initializer {
truncated_normal_initializer {
stddev: 0.03
mean: 0.0
}
}
batch_norm {
train: true,
scale: true,
center: true,
decay: 0.9997,
epsilon: 0.001,
}
}
}
loss {
classification_loss {
weighted_sigmoid {
}
}
localization_loss {
weighted_smooth_l1 {
}
}
hard_example_miner {
num_hard_examples: 3000
iou_threshold: 0.99
loss_type: CLASSIFICATION
max_negatives_per_positive: 3
min_negatives_per_image: 0
}
classification_weight: 1.0
localization_weight: 1.0
}
normalize_loss_by_num_matches: true
post_processing {
batch_non_max_suppression {
score_threshold: 1e-8
iou_threshold: 0.6
max_detections_per_class: 100
max_total_detections: 100
}
score_converter: SIGMOID
}
}
}
train_config: {
batch_size: 10
optimizer {
rms_prop_optimizer: {
learning_rate: {
exponential_decay_learning_rate {
initial_learning_rate: 0.004
decay_steps: 800720
decay_factor: 0.95
}
}
momentum_optimizer_value: 0.9
decay: 0.9
epsilon: 1.0
}
}
fine_tune_checkpoint: "/home/adriansr/HoodML/Datasets/ssd_mobilenet_v1_coco_2018_01_28/model.ckpt"
from_detection_checkpoint: true
load_all_detection_checkpoint_vars: true
# Note: The below line limits the training process to 200K steps, which we
# empirically found to be sufficient enough to train the pets dataset. This
# effectively bypasses the learning rate schedule (the learning rate will
# never decay). Remove the below line to train indefinitely.
num_steps: 200000
data_augmentation_options {
random_horizontal_flip {
}
}
data_augmentation_options {
ssd_random_crop {
}
}
}
train_input_reader: {
tf_record_input_reader {
input_path: "/home/adriansr/HoodML/Datasets/2016_USATF_Sprint_TrainingDataset/Analyze/train.record"
}
label_map_path: "/home/adriansr/HoodML/hoodbibod/training/mscoco_complete_label_map_with_bib.pbtxt"
}
eval_config: {
metrics_set: "coco_detection_metrics"
num_examples: 1100
}
eval_input_reader: {
tf_record_input_reader {
input_path: "/home/adriansr/HoodML/Datasets/2016_USATF_Sprint_TrainingDataset/Analyze/test.record"
}
label_map_path: "/home/adriansr/HoodML/hoodbibod/training/mscoco_complete_label_map_with_bib.pbtxt"
shuffle: false
num_readers: 1
}
classes.pbtxt
item {
id: 1
name: 'Bib'
}
mscoco_complete_label_map_with_bib.pbtxt
item {
name: "background"
id: 0
display_name: "background"
}
item {
name: "/m/01g317"
id: 1
display_name: "person"
}
item {
name: "/m/0199g"
id: 2
display_name: "bicycle"
}
item {
name: "/m/0k4j"
id: 3
display_name: "car"
}
item {
name: "/m/04_sv"
id: 4
display_name: "motorcycle"
}
item {
name: "/m/05czz6l"
id: 5
display_name: "airplane"
}
item {
name: "/m/01bjv"
id: 6
display_name: "bus"
}
item {
name: "/m/07jdr"
id: 7
display_name: "train"
}
item {
name: "/m/07r04"
id: 8
display_name: "truck"
}
item {
name: "/m/019jd"
id: 9
display_name: "boat"
}
item {
name: "/m/015qff"
id: 10
display_name: "traffic light"
}
item {
name: "/m/01pns0"
id: 11
display_name: "fire hydrant"
}
item {
name: "12"
id: 12
display_name: "12"
}
item {
name: "/m/02pv19"
id: 13
display_name: "stop sign"
}
item {
name: "/m/015qbp"
id: 14
display_name: "parking meter"
}
item {
name: "/m/0cvnqh"
id: 15
display_name: "bench"
}
item {
name: "/m/015p6"
id: 16
display_name: "bird"
}
item {
name: "/m/01yrx"
id: 17
display_name: "cat"
}
item {
name: "/m/0bt9lr"
id: 18
display_name: "dog"
}
item {
name: "/m/03k3r"
id: 19
display_name: "horse"
}
item {
name: "/m/07bgp"
id: 20
display_name: "sheep"
}
item {
name: "/m/01xq0k1"
id: 21
display_name: "cow"
}
item {
name: "/m/0bwd_0j"
id: 22
display_name: "elephant"
}
item {
name: "/m/01dws"
id: 23
display_name: "bear"
}
item {
name: "/m/0898b"
id: 24
display_name: "zebra"
}
item {
name: "/m/03bk1"
id: 25
display_name: "giraffe"
}
item {
name: "26"
id: 26
display_name: "26"
}
item {
name: "/m/01940j"
id: 27
display_name: "backpack"
}
item {
name: "/m/0hnnb"
id: 28
display_name: "umbrella"
}
item {
name: "29"
id: 29
display_name: "29"
}
item {
name: "30"
id: 30
display_name: "30"
}
item {
name: "/m/080hkjn"
id: 31
display_name: "handbag"
}
item {
name: "/m/01rkbr"
id: 32
display_name: "tie"
}
item {
name: "/m/01s55n"
id: 33
display_name: "suitcase"
}
item {
name: "/m/02wmf"
id: 34
display_name: "frisbee"
}
item {
name: "/m/071p9"
id: 35
display_name: "skis"
}
item {
name: "/m/06__v"
id: 36
display_name: "snowboard"
}
item {
name: "/m/018xm"
id: 37
display_name: "sports ball"
}
item {
name: "/m/02zt3"
id: 38
display_name: "kite"
}
item {
name: "/m/03g8mr"
id: 39
display_name: "baseball bat"
}
item {
name: "/m/03grzl"
id: 40
display_name: "baseball glove"
}
item {
name: "/m/06_fw"
id: 41
display_name: "skateboard"
}
item {
name: "/m/019w40"
id: 42
display_name: "surfboard"
}
item {
name: "/m/0dv9c"
id: 43
display_name: "tennis racket"
}
item {
name: "/m/04dr76w"
id: 44
display_name: "bottle"
}
item {
name: "45"
id: 45
display_name: "45"
}
item {
name: "/m/09tvcd"
id: 46
display_name: "wine glass"
}
item {
name: "/m/08gqpm"
id: 47
display_name: "cup"
}
item {
name: "/m/0dt3t"
id: 48
display_name: "fork"
}
item {
name: "/m/04ctx"
id: 49
display_name: "knife"
}
item {
name: "/m/0cmx8"
id: 50
display_name: "spoon"
}
item {
name: "/m/04kkgm"
id: 51
display_name: "bowl"
}
item {
name: "/m/09qck"
id: 52
display_name: "banana"
}
item {
name: "/m/014j1m"
id: 53
display_name: "apple"
}
item {
name: "/m/0l515"
id: 54
display_name: "sandwich"
}
item {
name: "/m/0cyhj_"
id: 55
display_name: "orange"
}
item {
name: "/m/0hkxq"
id: 56
display_name: "broccoli"
}
item {
name: "/m/0fj52s"
id: 57
display_name: "carrot"
}
item {
name: "/m/01b9xk"
id: 58
display_name: "hot dog"
}
item {
name: "/m/0663v"
id: 59
display_name: "pizza"
}
item {
name: "/m/0jy4k"
id: 60
display_name: "donut"
}
item {
name: "/m/0fszt"
id: 61
display_name: "cake"
}
item {
name: "/m/01mzpv"
id: 62
display_name: "chair"
}
item {
name: "/m/02crq1"
id: 63
display_name: "couch"
}
item {
name: "/m/03fp41"
id: 64
display_name: "potted plant"
}
item {
name: "/m/03ssj5"
id: 65
display_name: "bed"
}
item {
name: "66"
id: 66
display_name: "66"
}
item {
name: "/m/04bcr3"
id: 67
display_name: "dining table"
}
item {
name: "68"
id: 68
display_name: "68"
}
item {
name: "69"
id: 69
display_name: "69"
}
item {
name: "/m/09g1w"
id: 70
display_name: "toilet"
}
item {
name: "71"
id: 71
display_name: "71"
}
item {
name: "/m/07c52"
id: 72
display_name: "tv"
}
item {
name: "/m/01c648"
id: 73
display_name: "laptop"
}
item {
name: "/m/020lf"
id: 74
display_name: "mouse"
}
item {
name: "/m/0qjjc"
id: 75
display_name: "remote"
}
item {
name: "/m/01m2v"
id: 76
display_name: "keyboard"
}
item {
name: "/m/050k8"
id: 77
display_name: "cell phone"
}
item {
name: "/m/0fx9l"
id: 78
display_name: "microwave"
}
item {
name: "/m/029bxz"
id: 79
display_name: "oven"
}
item {
name: "/m/01k6s3"
id: 80
display_name: "toaster"
}
item {
name: "/m/0130jx"
id: 81
display_name: "sink"
}
item {
name: "/m/040b_t"
id: 82
display_name: "refrigerator"
}
item {
name: "83"
id: 83
display_name: "83"
}
item {
name: "/m/0bt_c3"
id: 84
display_name: "book"
}
item {
name: "/m/01x3z"
id: 85
display_name: "clock"
}
item {
name: "/m/02s195"
id: 86
display_name: "vase"
}
item {
name: "/m/01lsmm"
id: 87
display_name: "scissors"
}
item {
name: "/m/0kmg4"
id: 88
display_name: "teddy bear"
}
item {
name: "/m/03wvsk"
id: 89
display_name: "hair drier"
}
item {
name: "/m/012xff"
id: 90
display_name: "toothbrush"
}
item {
name: "/m/bib"
id: 91
display_name: "bib"
}
Two years late, but... Fundamentally, you aren't able to train your network on a new class without affecting the identification accuracy of the previously trained classes. By training on a dataset of new objects, with a label map containing only that new object, the model will optimize to detect only the new object, because you are changing the weights that enabled the detection of the old objects. You could try merging your dataset with the one the model was originally trained on, and then training on the merged set. Even this will be inadequate unless you somehow make sure the new object is featured in images where the old objects are labelled as well (some sort of synthetic data generation procedure may be useful).
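If you do try the merged-dataset route, the mechanical part could start from a sketch like this, which simply concatenates two existing TFRecord files (the paths are placeholders, and this does nothing about the co-occurrence caveat above):

import tensorflow as tf

# Concatenate the original record with the new-class record so both old
# and new objects appear during training; shuffling is left to the input reader.
merged = tf.data.TFRecordDataset(["original.record", "bib.record"])
with tf.io.TFRecordWriter("merged.record") as writer:
    for raw in merged:
        writer.write(raw.numpy())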