How to use a trained AlexNet model on my own data? - tensorflow

https://github.com/guerzh/tf_weights
I have a reference model (a TensorFlow implementation of AlexNet with pretrained weights) that I want to test on my own personal data set of images. What would be the next steps to doing this?
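A rough sketch of the inference side, in case it helps: preprocess your own images to AlexNet's expected 227x227 input, subtract the ImageNet mean, and run them through the loaded graph. The tensor names x and prob below are placeholders for illustration, not taken from the linked repo; substitute whatever the repo's script actually builds.

import numpy as np
import tensorflow.compat.v1 as tf  # the linked repo is TF1-style code
from PIL import Image

tf.disable_eager_execution()

def load_image(path, size=227):
    # Resize to AlexNet's input size and subtract the per-channel ImageNet mean.
    img = Image.open(path).convert("RGB").resize((size, size))
    arr = np.asarray(img, dtype=np.float32)
    return arr - np.array([123.68, 116.78, 103.94], dtype=np.float32)

# Build the AlexNet graph and load the pretrained weights exactly as the repo's
# script does, so that `x` (input placeholder) and `prob` (softmax output) exist.
# images = np.stack([load_image(p) for p in ["my_img1.jpg", "my_img2.jpg"]])
# with tf.Session() as sess:
#     sess.run(tf.global_variables_initializer())
#     scores = sess.run(prob, feed_dict={x: images})
#     print(scores.argmax(axis=1))  # predicted ImageNet class indices

To classify your own categories rather than the 1000 ImageNet classes, you would swap out the final fully connected layer and retrain it on your images, which is what the fine-tuning questions below are about.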

Related

Why does training a pretrained model take longer time?

From my limited experience training and testing object detection models like Faster R-CNN, I've noticed that whenever I set the variable pretrained to True, training takes noticeably longer than when pretrained is set to False. The model I've particularly seen this effect on is Faster R-CNN with a ResNet50 FPN backbone whose pretrained weights come from the ImageNet dataset.
I've googled "Why does training a pretrained model take longer?" and all it shows is examples of "How to use a pretrained model..." and not "Why...". 😐
So I'm curious whether anyone here can explain or give a hint.
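For what it's worth, one way to check the observation is to time a single training step with and without pretrained weights. A rough torchvision-based sketch follows; the image sizes, boxes, and optimizer settings are illustrative assumptions, not taken from the original code.

import time
import torch
import torchvision

def make_model(pretrained):
    # torchvision >= 0.13 API; older versions used fasterrcnn_resnet50_fpn(pretrained=True)
    if pretrained:
        return torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    return torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None)

def time_one_step(pretrained):
    model = make_model(pretrained)
    model.train()
    images = [torch.rand(3, 512, 512)]
    targets = [{"boxes": torch.tensor([[10.0, 10.0, 100.0, 100.0]]),
                "labels": torch.tensor([1])}]
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005)
    start = time.time()
    losses = model(images, targets)          # returns a dict of losses in train mode
    sum(losses.values()).backward()
    optimizer.step()
    return time.time() - start

print("pretrained=True :", time_one_step(True))
print("pretrained=False:", time_one_step(False))

Per-step compute should not depend on how the weights were initialized, so a measurement like this helps isolate whether the extra time comes from one-off costs such as downloading and loading the checkpoint.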

How to fine-tune a trained model using fast.ai while freezing the feature layers?

I am working on classification and detection models that I trained on another dataset, and now I am training them again on new image data. The model is actually a combination of two parts, such as FPN + CNN. I want to freeze the feature layers and train the rest on the new dataset.
How can I fine-tune this model using fast.ai? I need suggestions, tutorials, etc. (some code for guidance would help).
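For the classification half at least, here is a hedged fastai v2 sketch; the dataset path, architecture, and epoch counts are placeholders, and detection heads like FPN are not covered by the high-level fastai API, so this only shows the freeze-then-fine-tune pattern.

from fastai.vision.all import *

# Assumed folder layout: path/train/<class>/... and path/valid/<class>/...
dls = ImageDataLoaders.from_folder("path/to/new_dataset", valid="valid",
                                   item_tfms=Resize(224))

learn = vision_learner(dls, resnet50, metrics=accuracy)  # cnn_learner in older fastai

learn.freeze()            # body (feature layers) frozen, only the new head trains
learn.fit_one_cycle(3)

learn.unfreeze()          # optional: fine-tune the whole network at a lower LR
learn.fit_one_cycle(2, lr_max=slice(1e-6, 1e-4))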

Is it a good idea to use transfer learning in real-world projects?

SCENARIO
What if my intention is to train on a dataset of medical images and I have chosen a COCO pre-trained model?
My Doubts
1. Since I'm working with medical images, there is no point in a model trained on the COCO dataset, right? If so, what is a possible alternative?
2. Will adding more layers to a pre-trained model ruin the entire model, with around 10+ classes and tens of thousands of training images?
3. Without training from scratch, what are the possible solutions, e.g. fine-tuning the model?
PS - let's assume this scenario is based on deploying the model for business purposes.
Thanks-
Yes, it is a good idea to reuse pre-trained models (transfer learning) in real-world projects, as it saves computation time and the architectures are proven.
If your use case is to classify the medical images, i.e. image classification, then:
Since I'm working with medical images, there is no point in a model trained on the COCO dataset, right? If so, what is a possible alternative?
Correct: a COCO pre-trained model is not a good fit for image classification, since COCO is an object-detection dataset. You can reuse VGGNet, ResNet, Inception Net, or EfficientNet instead. For more information, refer to the TF Hub modules.
Will adding more layers to a pre-trained model ruin the entire model, with around 10+ classes and tens of thousands of training images?
No. We can remove the top layer of the pre-trained model and add our own custom layers without affecting the pre-trained features (see the sketch after this answer).
Without training from scratch, what are the possible solutions, e.g. fine-tuning the model?
In addition to reusing the pre-trained model, you can tune the hyper-parameters of the custom layers you added using the HParams dashboard of TensorBoard.
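As a minimal Keras sketch of point 2 (the backbone, input size, and layer choices are illustrative): take a pretrained backbone without its top layer, freeze it, and add a small custom head for your ~10 classes.

import tensorflow as tf

NUM_CLASSES = 10

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3))
base.trainable = False  # keep the pretrained features intact

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # custom top layer
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # your medical-image datasets

The hyper-parameters of this custom head (dropout rate, learning rate, head size) are then what you would sweep with the HParams dashboard mentioned above.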

How to load a pretrained VGG model in a distributed TensorFlow training scenario like Faster R-CNN?

I want to implement a Faster R-CNN model using distributed TensorFlow, but I'm having difficulty loading a pretrained VGG model. How can I do it? Thanks.
The TensorFlow tutorial on retraining Inception is a good place to start reading. Then try to reproduce what it does, starting from an already trained VGG model.
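As a hedged sketch using today's tf.distribute API rather than the original distributed-TF1 setup the question asks about: build the pretrained VGG backbone inside the strategy scope so its ImageNet weights are replicated to every GPU/worker. The head below is a plain classifier stand-in, not a real Faster R-CNN head.

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # multi-GPU on one machine

with strategy.scope():
    backbone = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                           input_shape=(224, 224, 3))
    backbone.trainable = False  # typical for a detection backbone at the start

    model = tf.keras.Sequential([
        backbone,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation="softmax"),  # illustrative stand-in head
    ])
    model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")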

How to Fine-tune a Pretrained Network in TensorFlow?

Can anyone give an example of how to fine-tune a pretrained ImageNet network with new data and different classes, similar to this:
Fine-tuning a Pretrained Network for Style Recognition
This TensorFlow tutorial describes how to retrain an image classifier for new data and new classes.
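A hedged Keras sketch of that recipe (the backbone choice, class count, and epoch counts are illustrative): train a new classifier head on the frozen pretrained network first, then unfreeze the top of the backbone and continue at a much lower learning rate.

import tensorflow as tf

base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                         input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 new classes, for example
])

# Phase 1: only the new classifier head learns.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)

# Phase 2: unfreeze the top of the backbone and fine-tune with a much lower LR.
base.trainable = True
for layer in base.layers[:-30]:      # keep the early layers frozen
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)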