How to fine-tune a trained model using fast.ai while freezing feature layers? - tensorflow

I am working on classification and detection models that I trained on another dataset, and I am now retraining both on new image data. The model is composed of two parts, an FPN plus a CNN. I want to freeze the feature layers and train only the last layers on the new dataset.
How can I fine-tune this model using fast.ai? Suggestions, tutorials, or some guiding code would be appreciated.
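With fastai you would normally call `learn.freeze()` (or `learn.fine_tune(...)`, which freezes the body for the first epochs automatically). What follows is a minimal sketch of the underlying mechanic in plain PyTorch, using a toy two-part model standing in for a pretrained body plus head; the layer sizes are made up for illustration:

```python
import torch.nn as nn

# Toy stand-in for a pretrained backbone + head. With fastai you would
# instead build learn = vision_learner(dls, resnet34) and call
# learn.freeze() or learn.fine_tune(epochs).
model = nn.Sequential(
    nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten()),  # "body" (features)
    nn.Linear(8, 5),                                       # "head" for 5 new classes
)
body, head = model[0], model[1]

# Freeze the pretrained feature layers -- this is what fastai's
# learn.freeze() does under the hood
for p in body.parameters():
    p.requires_grad = False

# Only the head's weight and bias are left for the optimizer to update
trainable = [p for p in model.parameters() if p.requires_grad]
print(len(trainable))  # -> 2
```

After training the head for a few epochs, `learn.unfreeze()` (equivalently, setting `requires_grad = True` again) lets you fine-tune the whole model at a lower learning rate.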

Related

Re-training keras model

I have a trained Keras model and I am using it to generate data. I want to use that data to re-train the model. After re-training, the model seems to predict the new data well, but it has somehow lost its knowledge of the previous data. I do not compile the model again before training. Are there any special steps needed to re-train a model in Keras?
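What is described here is catastrophic forgetting: continuing `fit()` on only the new data overwrites what the model learned before. A common mitigation is to re-train on a mix of old and new samples (a simple "rehearsal" strategy). A minimal sketch with a toy model and random stand-in data:

```python
import numpy as np
import tensorflow as tf

# Tiny model standing in for the already-trained Keras model
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

rng = np.random.default_rng(0)
x_old, y_old = rng.normal(size=(64, 4)), rng.integers(0, 2, 64)  # original data
x_new, y_new = rng.normal(size=(64, 4)), rng.integers(0, 2, 64)  # generated data

# Re-train on old + new together so the model keeps seeing the
# original distribution instead of drifting entirely to the new one
x_mix = np.concatenate([x_old, x_new])
y_mix = np.concatenate([y_old, y_new])
model.fit(x_mix, y_mix, epochs=1, verbose=0)
```

Not re-compiling is fine (the optimizer state is kept); the forgetting comes from the data, not the compile step.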

Is that a good idea to use transfer learning in real world projects?

SCENARIO
What if my intention is to train on a dataset of medical images and I have chosen a COCO pre-trained model?
My Doubts
1. Since I have chosen medical images, there is no point in training on the COCO dataset, right? If so, what is a possible alternative?
2. Will adding more layers to a pre-trained model ruin the entire model, with around 10+ classes and tens of thousands of training images?
3. Without training from scratch, what are the possible solutions, such as fine-tuning the model?
PS: let's assume this scenario is based on deploying the model for business purposes.
Thanks.
Yes, it is a good idea to reuse pre-trained models (transfer learning) in real-world projects, as it saves computation time and the architectures are proven.
If your use case is to classify medical images, that is, image classification, then:
Since I have chosen medical images there is no point of train it on
COCO dataset, right? if so what is a possible solution to do the same?
Yes, the COCO dataset is not a good choice for image classification, as it is designed for object detection. You can reuse VGGNet, ResNet, Inception, or EfficientNet instead. For more information, refer to the TF Hub modules.
Adding more layers to a pre-trained model will screw the entire model?
with classes of around 10 plus and 10000's of training datasets?
No. You can remove the top layer of the pre-trained model and add your own custom layers without harming the performance of the pre-trained features.
Without train from scratch what are the possible solutions , like
fine-tuning the model?
In addition to using the pre-trained models, you can tune the hyper-parameters of the model (the custom layers added by you) using TensorBoard's HParams dashboard.
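The "remove the top layer, add custom layers" recipe from the answer above can be sketched in Keras as follows. Note `weights=None` is used here only to keep the sketch download-free; in practice you would pass `weights="imagenet"` to actually reuse the pretrained features, and the input size and class count are placeholder choices:

```python
import tensorflow as tf

# Pre-trained backbone with the top (classification) layer removed.
# weights=None avoids a download in this sketch; use weights="imagenet"
# in a real run to get the pretrained features.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights=None)
base.trainable = False  # freeze the pretrained feature extractor

# Custom head for ~10 classes -- the only part that will be trained
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy")
```

Because the base is frozen, training only updates the new head, so the pre-trained features are left intact.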

Has ssd_mobilenet_v1_coco_2017_11_17 been trained on a labelled dataset?

Was this pre-trained model trained on the labelled COCO dataset?
I am trying to figure out whether the object detection this pre-trained model performs is the output of training on unlabelled data, or whether there was labelling that made the model more effective.
Thanks in advance.

How to use a trained alexnet model on my own data?

https://github.com/guerzh/tf_weights
I have a reference model (a TensorFlow implementation of AlexNet with pretrained weights) that I want to test on my own personal dataset of images. What would be the next steps to do this?
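The next step is usually to load your own images and preprocess them into exactly the format the pretrained network expects before calling it. A hedged sketch of that loading step: the 227x227 BGR mean-subtracted format below is the usual convention for Caffe-converted AlexNet weights, but check the linked repo's own preprocessing code to confirm, and `my_image_paths` / `model` are placeholders for your files and the restored network:

```python
import numpy as np
import tensorflow as tf

# Usual ImageNet channel means for Caffe-style models (BGR order).
# Verify against the repo's preprocessing before relying on these.
IMAGENET_MEAN_BGR = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def load_image(path):
    """Read one of your own images into the shape the network expects."""
    raw = tf.io.read_file(path)
    img = tf.io.decode_image(raw, channels=3, expand_animations=False)
    img = tf.image.resize(img, (227, 227))   # AlexNet's expected input size
    img = img[..., ::-1]                     # RGB -> BGR
    return img - IMAGENET_MEAN_BGR           # zero-center with ImageNet means

# batch = tf.stack([load_image(p) for p in my_image_paths])
# probs = model(batch)   # model restored from the repo's pretrained weights
```

After that, `argmax` over the output probabilities gives the predicted ImageNet class per image; to predict your own classes instead, you would replace the final layer and fine-tune as in the other questions here.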

How to fine-tune a pretrained network in TensorFlow?

Can anyone give an example of how to fine-tune a pretrained ImageNet network with new data and different classes, similar to this:
Fine-tuning a Pretrained Network for Style Recognition
This TensorFlow tutorial describes how to retrain an image classifier for new data and new classes.
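A common two-stage recipe in modern TensorFlow (tf.keras): first train a new head on a frozen base, then unfreeze only the top of the base and continue with a much lower learning rate. The sketch below shows stage 2; `weights=None`, the 96x96 input, 5 classes, and the "last ~30 layers" cutoff are all illustrative choices, not fixed values:

```python
import tensorflow as tf

# Stage 2 of a typical fine-tune. weights=None keeps this sketch
# download-free; in practice use weights="imagenet".
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights=None)
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 new classes
])

# Unfreeze only the top of the base; deeper layers keep their
# generic features frozen
base.trainable = True
for layer in base.layers[:-30]:
    layer.trainable = False

# Re-compile with a much lower learning rate so fine-tuning does not
# destroy the pretrained weights
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy")
```

With the head already trained in stage 1, a few more epochs of `model.fit(...)` at this low learning rate adapts the top features to the new classes.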