Is it possible to use my own CNN model in Faster R-CNN?

Suppose I have a CNN model that is already trained. How can I use it as the classifier for Faster R-CNN?

You can use your trained CNN model as the backbone network that extracts features; Faster R-CNN then builds its region proposal network and detection head on top of those feature maps.
I hope this answers your question.
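For concreteness, here is a minimal sketch using torchvision's Faster R-CNN API, assuming a PyTorch setting (the question doesn't name a framework); `mobilenet_v2` stands in for your own trained CNN, whose classification head you would strip off:

```python
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# Stand-in for your own trained CNN: keep only the convolutional
# feature extractor and tell FasterRCNN its output channel count.
backbone = torchvision.models.mobilenet_v2(pretrained=True).features
backbone.out_channels = 1280

# Anchors and RoI pooling for a single-feature-map backbone.
anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))
roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=["0"],
                                                output_size=7,
                                                sampling_ratio=2)

model = FasterRCNN(backbone,
                   num_classes=2,  # your classes + background
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)
```

The key requirement is that the backbone exposes an `out_channels` attribute so the RPN and box head know the depth of the feature maps it produces.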

Related

Why does training a pretrained model take longer?

From my limited experience training and testing object detection models like Faster R-CNN, I've noticed that whenever I set the variable pretrained to True, training takes considerably longer than when pretrained is set to False. The model I've particularly seen this effect on is Faster R-CNN with a ResNet50-FPN backbone pretrained on the ImageNet dataset.
I've googled "Why does training a pretrained model take longer?" and all it shows are examples of "How to use a pretrained model..." and not "Why..." 😐
So I'm curious whether anyone here can explain or offer a hint.
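For reference, this seems to describe torchvision's detection models, where two separate flags control pretraining; a quick illustration of the setting being compared (the flag names below are torchvision's, not something stated in the question):

```python
import torchvision

# pretrained=True loads full COCO-trained detection weights;
# pretrained_backbone=True (the default) loads only ImageNet weights
# for the ResNet50-FPN backbone.
model_pretrained = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    pretrained=True)
model_scratch = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    pretrained=False, pretrained_backbone=False)
```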

Can I change the weights in a CNN in TensorFlow?

Can I change the weights in a CNN in a framework like TensorFlow? Or are the weights fixed?
In TensorFlow it is possible, depending on the version and on the layer you use: most Keras layers expose their weights through get_weights and set_weights. It is even possible to pass class weights to the fitting method. It's best to read the documentation for the version you're using. :-)
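A small sketch of both mechanisms in Keras (the layer shapes here are arbitrary examples):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Read the current kernel and bias of the conv layer, modify, write back.
kernel, bias = model.layers[0].get_weights()
model.layers[0].set_weights([kernel * 0.5, np.zeros_like(bias)])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Class weights are not stored in the layers; they are passed at fit time:
# model.fit(x_train, y_train, class_weight={0: 1.0, 1: 2.0})
```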

Can I add TensorFlow fake quantization to a Keras Sequential model?

I have searched for this for a while, but it seems Keras only offers quantization after the model is trained. I wish to add TensorFlow fake quantization to my Keras Sequential model. According to TensorFlow's docs, I need these two functions to do fake quantization: tf.contrib.quantize.create_training_graph() and tf.contrib.quantize.create_eval_graph().
My question is: has anyone managed to add these two functions to a Keras model? If yes, where should they be added? For example, before model.compile, after model.fit, or somewhere else? Thanks in advance.
I worked around this with post-training quantization. Since my final goal is to train a model for mobile devices, instead of doing fake quantization during training I exported the Keras .h5 file and converted it directly to a TensorFlow Lite .tflite file (with the post_training_quantize flag set to true); a conversion sketch follows the links below. I tested this on a simple CIFAR-10 model. The original Keras model and the quantized TFLite model have very close accuracy (the quantized one is a bit lower).
Post-training quantization: https://www.tensorflow.org/performance/post_training_quantization
Converting a Keras model to TensorFlow Lite: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/toco/g3doc/python_api.md
I used the tf-nightly TensorFlow build: https://pypi.org/project/tf-nightly/
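A minimal sketch of the conversion described above, assuming the TF 1.x-era converter API that matches these links ("model.h5" is a placeholder path; newer TF versions use converter.optimizations = [tf.lite.Optimize.DEFAULT] instead of the post_training_quantize flag):

```python
import tensorflow as tf

# Convert an exported Keras .h5 file straight to a quantized .tflite file.
converter = tf.lite.TFLiteConverter.from_keras_model_file("model.h5")
converter.post_training_quantize = True  # quantizes the stored weights
tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```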
If you still want to do fake quantization (because, according to Google, post-training quantization may give poor accuracy for some models): the original webpage went down last week, but you can find it on GitHub: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/quantize
Update: it turns out post-training quantization does not really quantize the model; during inference it still uses float32 kernels for the calculations. Thus, I've switched to quantization-aware training. The accuracy is pretty good for my CIFAR-10 model.
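To tie this back to the original placement question: one commonly reported recipe for the TF 1.x contrib API is to rewrite the graph after the Keras model is built but before model.compile, so the training ops are created on top of the fake-quant nodes. A sketch under that assumption (the layer stack is an arbitrary example):

```python
import tensorflow as tf  # TF 1.x, where tf.contrib is available

# Build the Keras model first so its graph exists.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Insert fake-quant nodes BEFORE compiling.
tf.contrib.quantize.create_training_graph(
    input_graph=tf.keras.backend.get_session().graph, quant_delay=0)

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(x_train, y_train, epochs=5)

# For export, rebuild the model in a fresh graph and call
# tf.contrib.quantize.create_eval_graph() before freezing it.
```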

How to load a pretrained VGG model when training Faster R-CNN with distributed TensorFlow?

I want to implement a Faster R-CNN model using distributed TensorFlow, but I am having difficulty loading a pretrained VGG model. How can I do it? Thanks.
The TensorFlow tutorial on retraining Inception is a good place to start reading. Then try to reproduce what it does, starting from an already trained VGG model.
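As a starting point, here is a sketch of restoring pretrained VGG weights with TF-slim (TF 1.x), assuming the vgg_16 checkpoint from the TF models repo; in a distributed between-graph setup, only the chief worker should run the restore (for example through a tf.train.Scaffold or a MonitoredTrainingSession hook):

```python
import tensorflow as tf
from tensorflow.contrib.slim.nets import vgg  # TF 1.x
slim = tf.contrib.slim

images = tf.placeholder(tf.float32, [None, 224, 224, 3])
logits, _ = vgg.vgg_16(images, num_classes=21)  # example class count

# Restore every pretrained variable except the final logits layer,
# whose shape differs for the new task.
variables_to_restore = slim.get_variables_to_restore(exclude=["vgg_16/fc8"])
init_fn = slim.assign_from_checkpoint_fn("vgg_16.ckpt", variables_to_restore)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    init_fn(sess)  # overwrite the random init with pretrained weights
```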

How to fine-tune a pretrained network in TensorFlow?

Can anyone give an example of how to fine-tune a pretrained ImageNet network with new data and different classes, similar to this:
Fine-tuning a Pretrained Network for Style Recognition
This TensorFlow tutorial describes how to retrain an image classifier for new data and new classes.
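In addition to the tutorial, a minimal Keras fine-tuning sketch (VGG16 here is just an example of a pretrained ImageNet network; num_classes is a placeholder for your new label count):

```python
import tensorflow as tf

num_classes = 5  # example: the number of classes in your new dataset

# Load the ImageNet-pretrained convolutional base without its classifier.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # first stage: train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, epochs=5)

# Second stage: unfreeze some of the top conv blocks, recompile with a
# lower learning rate, and continue training.
```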