Tensorflow Object Detection API - "fine_tune" vs "detection" vs "classification"

I am following this tutorial: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html
In it, the file pipeline.config contains the following snippet:
fine_tune_checkpoint_type: "detection" # Set this to "detection" since we want to be training the full detection model
Further investigation leads to the following discoveries:
There are at least three options for the field fine_tune_checkpoint_type: fine_tune, detection, and classification.
Not all models from the model zoo allow all options.
My questions are:
What does each of fine_tune, detection, and classification mean in this context, and, more importantly, when is it appropriate to use each one?
How do I tell which options are compatible with models in the model zoo?
Ultimately I wish to do transfer learning - e.g. take an existing trained model and train it to draw boxes for one or more novel classes.

Those options indicate how checkpoints are restored and come from here.
I'll copy the relevant part here:
This option controls how variables are restored from the (pre-trained) fine_tune_checkpoint. For TF2 models, 3 different types are supported:
"classification": Restores only the
classification backbone part of the feature extractor. This option is typically used when you want to train a detection model starting from a pre-trained image classification model, e.g. a ResNet model pre-trained on ImageNet.
"detection": Restores the entire feature extractor. The only parts of the full detection model that are not restored are the box and class prediction heads. This option is typically used when you want to use a pre-trained detection model and train on a new dataset or task which requires different box and class prediction heads.
"full":Restores the entire detection model, including the feature extractor, its classification backbone, and the prediction heads. This option should only be used when the pre-training and fine-tuning tasks are the same. Otherwise, the model's parameters may have incompatible shapes, which will cause errors when attempting to restore the checkpoint. For more details about this parameter, see the restore_map (TF1) or restore_from_object (TF2) function documentation in the /meta_architectures/*meta_arch.py files.
I guess fine_tune has since been replaced by "full". Based on your needs, the right choice appears to be "detection". To know which models support which options, as indicated above, you have to look at the restore_from_object function definition in the relevant /meta_architectures/*meta_arch.py files.
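As an illustration, a minimal train_config excerpt for that case might look like this (the checkpoint path is a placeholder for wherever the model zoo archive was extracted):

train_config {
  fine_tune_checkpoint: "pre-trained-model/checkpoint/ckpt-0"
  fine_tune_checkpoint_type: "detection"
}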

Related

Which layers are frozen using Tensorflow 2 Object detection API?

How can I understand which layers are frozen when fine-tuning a detection model from Tensorflow Model Zoo 2?
I have already successfully set the path for fine_tune_checkpoint and fine_tune_checkpoint_type: detection, and in the proto file I have read that "detection" means
// 2. "detection": Restores the entire feature extractor.
The only parts of the full detection model that are not restored are the box and class prediction heads.
This option is typically used when you want to use a pre-trained detection model
and train on a new dataset or task which requires different box and class prediction heads.
I didn't really understand what that means. Does "restored" mean "frozen" in this context?
As I understand it, the Tensorflow 2 Object Detection API currently does not freeze any layers when training from a fine-tune checkpoint. There is an issue reported here to support specifying which layers to freeze in the pipeline config. If you look at the training step function, you can see that all trainable variables are used when applying gradients during training.
Restored here means that the model weights are copied from the checkpoint to be used as a starting point for training. Frozen would mean that the weights are not changed (i.e. no gradient is applied) during training.
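To make the distinction concrete, here is a minimal Keras sketch (not specific to the Object Detection API; the model and layer names are made up for illustration):

import tensorflow as tf

# Toy model standing in for a detector; layer names are illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", name="backbone"),
    tf.keras.layers.Dense(10, name="head"),
])
model.build(input_shape=(None, 32))

# "Restoring" copies weights in as a starting point; they still get updated:
# model.load_weights("path/to/checkpoint")  # hypothetical path

# "Freezing" excludes weights from gradient updates entirely:
model.get_layer("backbone").trainable = False

# After freezing, only the head's variables remain trainable.
print([v.name for v in model.trainable_variables])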

Tensorflow object detection: Continue training

Let's say I train a pretrained network like ResNet and set the fine_tune_checkpoint_type attribute to detection in the pipeline.config file. As far as I understand, this means that we take the pretrained weights of the model, except for the classification and box prediction heads. Furthermore, this means that we can create our own set of labels, which will then become the classification and box prediction heads of the model we want to create/train.
Now, let's say I train this network for 25000 steps and want to continue training later on without the model forgetting anything. Should I change fine_tune_checkpoint_type in the pipeline.config to full in order to continue training (and of course load the correct checkpoint file), or should I leave it set to detection?
Edit:
This is based on the information found here https://github.com/tensorflow/models/blob/master/research/object_detection/protos/train.proto:
// 1. "classification": Restores only the classification backbone part of
// the feature extractor. This option is typically used when you want
// to train a detection model starting from a pre-trained image
// classification model, e.g. a ResNet model pre-trained on ImageNet.
// 2. "detection": Restores the entire feature extractor. The only parts
// of the full detection model that are not restored are the box and
// class prediction heads. This option is typically used when you want
// to use a pre-trained detection model and train on a new dataset or
// task which requires different box and class prediction heads.
// 3. "full": Restores the entire detection model, including the
// feature extractor, its classification backbone, and the prediction
// heads. This option should only be used when the pre-training and
// fine-tuning tasks are the same. Otherwise, the model's parameters
// may have incompatible shapes, which will cause errors when
// attempting to restore the checkpoint.
So, classification only restores the classification backbone part of the feature extractor. This means that the model will start from scratch on many parts of the network.
detection restores the whole feature extractor, but the "end result" (the prediction heads) is forgotten, which means we can add our own classes and learn these classifications from scratch.
full restores everything, even the classes and box prediction weights. However, this is fine as long as we do not add or remove any classes/labels.
Is this correct?
Yes, you have understood that correctly.
Set fine_tune_checkpoint_type: full in pipeline.config to retain everything the model has learnt up to the last checkpoint.
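For example, the relevant lines might look like this (the checkpoint path is a placeholder for your own latest checkpoint):

train_config {
  fine_tune_checkpoint: "my-model/checkpoint/ckpt-25"
  fine_tune_checkpoint_type: "full"
}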
Yes, you can configure which variables are restored by setting fine_tune_checkpoint_type; the options are detection and classification. By setting it to detection you can essentially restore almost all variables from the checkpoint, and by setting it to classification only variables from the feature_extractor scope are restored (i.e. all the layers of the backbone network, such as VGG, ResNet, or MobileNet, which is called the feature extractor).
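If you want to check which variables actually live under that scope, you can list the checkpoint contents directly; a small sketch (the checkpoint path is a placeholder):

import tensorflow as tf

# Print (name, shape) for every variable stored in the checkpoint, which
# shows, for example, which names fall under the feature extractor scope.
for name, shape in tf.train.list_variables("pre-trained-model/checkpoint/ckpt-0"):
    print(name, shape)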

How to extract weights of DQN agent in TF-Agents framework?

I am using TF-Agents for a custom reinforcement learning problem, where I train a DQN (constructed using DqnAgent from the TF-Agents framework) on some features from my custom environment, and separately use a Keras convolutional model to extract these features from images. Now I want to combine these two models into a single model and use transfer learning, where I want to initialize the weights of the first part of the network (images-to-features) as well as the second part, which would have been the DQN layers in the previous case.
I am trying to build this combined model using keras.layers and compiling it with the TF-Agents tf.networks.sequential class to bring it to the form required when passing it to the DqnAgent() class. (Let's call this statement (a).)
I am able to initialize the image feature extractor network's layers with the weights, since I saved it as a .h5 file and am able to obtain NumPy arrays of them. So I am able to do the transfer learning for this part.
The problem is with the DQN layers: I saved the policy from the previous example using the prescribed TensorFlow SavedModel format (pb), which gives me a folder containing model attributes. However, I am unable to view/extract the weights of my DQN this way, and the recommended tf.saved_model.load('policy_directory') is not really transparent with respect to what data I can see regarding the policy. If I want to follow the transfer learning approach of statement (a), I need to extract the weights of my DQN and assign them to the new network. The documentation seems to be quite sparse for this case where transfer learning needs to be applied.
Can anyone help with this by explaining how I can extract the weights from the SavedModel (the pb file)? Or is there a better way to go about this problem?
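For what it's worth, a SavedModel stores its weights as an ordinary checkpoint under its variables/ subfolder, so one sketch of a workaround (policy_dir is a placeholder for the folder produced by the policy saver) might be:

import os
import tensorflow as tf

policy_dir = "policy_directory"  # placeholder: folder containing saved_model.pb

# A SavedModel keeps its weights in a checkpoint at <dir>/variables/variables.
reader = tf.train.load_checkpoint(os.path.join(policy_dir, "variables", "variables"))

# Inspect what is stored, then pull individual tensors out as NumPy arrays.
for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)

# weights = reader.get_tensor(name)  # NumPy array you can assign to a new network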

Why does the loss explode during training from scratch? - Tensorflow Object Detection Models

First of all, I want to state that I am familiar with the benefits of transfer learning. Moreover, I am able to train a pretrained model from the model zoo on my dataset. But for research purposes I want to train my model from scratch, without transfer learning.
I want to adapt the Faster-RCNN ResNet 101 implementation from TensorFlow's Object Detection API to my dataset. If I use one of the pretrained models, the training goes as expected and the loss is always in 'normal' ranges (never above about 6). But if I do not use transfer learning, the loss very frequently jumps to extremely high values (about 80,000,000), although between those spikes the loss is in normal ranges. In addition, I do not see any predictions of the network on images in TensorBoard; it seems like the network does not make any predictions at all. The only thing I change is to comment out these two lines in the model.config file:
# fine_tune_checkpoint: 'path'
# from_detection_checkpoint: true
I tried a lot of things to find the reason: changed the optimizer, changed the learning rate, used gradient clipping, changed the initializer, and used different machines to train on, but nothing helps. Moreover, I inspected my label_map as well as my record file. To ensure that those files are correct, I redid the steps mentioned above using the Pascal VOC dataset, the script to create records, and the label map from the API, but even with this code from the Object Detection API, without any code changes, the loss explodes (Tensorflow Object Detection API own inputs).
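For reference, gradient clipping in these pipelines is configured via the gradient_clipping_by_norm field of train_config (the value here is just an example):

train_config {
  gradient_clipping_by_norm: 10.0
}

As said above, none of these changes stopped the loss spikes in my case.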

Where can I find the pretrained models of fasterRCNN / R-FCN with Mobilenet feature extractor trained on the COCO dataset?

I want to train a custom dataset on Faster RCNN with MobileNetV1 or V2, using the pre-trained models in the tensorflow zoo. But I can't find a Faster RCNN model with MobileNet as the base feature extractor. Where can I get it?
I have already looked through the tensorflow zoo on GitHub. I have previously used an SSD+MobileNet config for the same dataset. Now I want to compare the results with Faster RCNN and R-FCN with MobileNet.
The official repo has not released Faster RCNN with MobileNet models yet. But if you want, you can still use some other MobileNet models trained on COCO; the process is a bit complicated.
There are two important steps to proceed.
The first is to have the corresponding feature extractor class. For Faster RCNN, the models directory already contains a faster_rcnn_mobilenet feature extractor implementation, so this step is OK. But for R-FCN, you will have to implement the feature extractor class yourself.
The second is to change the tensor names expected from the checkpoint. For example, if you use ssd_mobilenet_v1_xxx as the checkpoint, all tensors within the mobilenet scope are named FeatureExtractor/MobilenetV1/XXX, while in the faster_rcnn_mobilenet_v1 model the tensor names within the mobilenet scope are FirstStageFeatureExtractor/MobilenetV1/XXX (and SecondStageFeatureExtractor/MobilenetV1/XXX). So essentially you need to remove FirstStage (as well as SecondStage) from the names of all feature extractor tensors; these tensors will then have exactly the same names as in the checkpoint and will be correctly restored. If you do this, the function you need to modify is
def restore_map(self,
                fine_tune_checkpoint_type='detection',
                load_all_detection_checkpoint_vars=False):

in the file faster_rcnn_meta_arch.py.
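As a rough sketch of the idea (TF1-style; restore_map in the API returns a dict mapping checkpoint variable names to model variables, and the names here are abbreviated):

import tensorflow.compat.v1 as tf

def build_rename_map():
    # Map checkpoint names (FeatureExtractor/...) to the detection model's
    # variables (FirstStageFeatureExtractor/..., SecondStageFeatureExtractor/...).
    variables_to_restore = {}
    for variable in tf.global_variables():
        name = variable.op.name
        for prefix in ("FirstStageFeatureExtractor", "SecondStageFeatureExtractor"):
            if name.startswith(prefix + "/"):
                ckpt_name = name.replace(prefix, "FeatureExtractor", 1)
                variables_to_restore[ckpt_name] = variable
    return variables_to_restore

# saver = tf.train.Saver(build_rename_map())
# saver.restore(sess, "ssd_mobilenet_v1_xxx/model.ckpt")  # placeholder path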