I have found TPU-compatible pre-trained checkpoints for object detection, but not for image classification. I'm specifically interested in MobileNet_v2, though I would switch to another network if TPU-compatible checkpoints existed for it but not for MobileNet_v2.
Yes, there is a TPU-compatible ResNet checkpoint in these GCS files:
gs://cloud-tpu-artifacts/resnet/resnet-nhwc-2018-02-07/checkpoint
gs://cloud-tpu-artifacts/resnet/resnet-nhwc-2018-02-07/model.ckpt-112603.data-00000-of-00001
gs://cloud-tpu-artifacts/resnet/resnet-nhwc-2018-02-07/model.ckpt-112603.index
gs://cloud-tpu-artifacts/resnet/resnet-nhwc-2018-02-07/model.ckpt-112603.meta
You can use this checkpoint if it meets your requirements. If not, you can follow this tutorial to train it yourself.
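If you want to verify what the checkpoint contains before wiring it into your model, you can list its variables directly from GCS. A minimal TF 1.x-style sketch (assuming your environment has GCS access; the warm-start call is optional and only works if your variable names match the checkpoint):

import tensorflow as tf

# Path to the TPU-compatible ResNet checkpoint listed above.
CKPT = 'gs://cloud-tpu-artifacts/resnet/resnet-nhwc-2018-02-07/model.ckpt-112603'

# List the variable names and shapes stored in the checkpoint so you can
# check that they match your model before restoring.
for name, shape in tf.train.list_variables(CKPT):
    print(name, shape)

# To warm-start a model whose variable names match the checkpoint 1:1,
# call this inside your model_fn after the variables have been created:
# tf.train.init_from_checkpoint(CKPT, {'/': '/'})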
Related
How can I understand which layers are frozen when fine-tuning a detection model from the TensorFlow 2 Model Zoo?
I have already successfully set the path for fine_tune_checkpoint and fine_tune_checkpoint_type: detection, and in the proto file I have already read that "detection" means:
// 2. "detection": Restores the entire feature extractor.
The only parts of the full detection model that are not restored are the box and class prediction heads.
This option is typically used when you want to use a pre-trained detection model
and train on a new dataset or task which requires different box and class prediction heads.
I don't really understand what that means. Does "restored" mean "frozen" in this context?
As I understand it, the TensorFlow 2 Object Detection API currently does not freeze any layers when training from a fine-tune checkpoint. There is an issue reported here to support specifying which layers to freeze in the pipeline config. If you look at the training step function, you can see that all trainable variables are used when applying gradients during training.
Restored here means that the model weights are copied from the checkpoint to be used as a starting point for training. Frozen would mean that the weights are not changed (i.e. no gradient is applied) during training.
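To make the distinction concrete, here is a hedged sketch of a custom training step; freeze_prefix is a hypothetical name filter, not an OD API option. Restoring happens once before training, while freezing is a matter of which variables receive gradients:

import tensorflow as tf

def train_step(model, optimizer, loss_fn, images, labels,
               freeze_prefix='backbone'):
    # "Frozen" would mean excluding variables from the update; the OD API
    # currently does NOT do this filtering, it trains all trainable variables.
    train_vars = [v for v in model.trainable_variables
                  if not v.name.startswith(freeze_prefix)]
    with tf.GradientTape() as tape:
        loss = loss_fn(labels, model(images, training=True))
    grads = tape.gradient(loss, train_vars)
    optimizer.apply_gradients(zip(grads, train_vars))
    return loss

# "Restored" just means the weights are copied in once as a starting point:
# ckpt = tf.train.Checkpoint(model=model)
# ckpt.restore('path/to/fine_tune_checkpoint').expect_partial()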
How do I find more info on how the ssd_mobilenet_v1 tflite model on TFHub was trained?
Was it trained in such a way that made it easy to convert to tflite by avoiding certain ops not supported by tflite? Or was it trained normally and then converted using the tflite converter with TF Select and the tips in this GitHub issue?
Also, does anyone know if there's an equivalent mobilenet tflite model trained on OpenImagesV6? If not, what's the best starting point for training one?
I am not sure about the exact origin of the model, but it looks like it does have TFLite-compatible ops. In my experience, the best place to start for TFLite-compatible SSD models is the TF2 Detection Zoo. You can convert any of the SSD models using these instructions.
To train your own model, you can follow these instructions that leverage Google Cloud.
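As a concrete illustration of the conversion step, here's a minimal sketch, assuming you have already run the OD API's export_tflite_graph_tf2.py to produce a TFLite-friendly SavedModel (paths are placeholders):

import tensorflow as tf

# Convert the exported SavedModel to a .tflite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model('exported_model/saved_model')
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)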
I am following a codelab tutorial by Google for image recognition:
https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/#3
However, in this case the tutorial is using MobileNet v1 for object detection. In fact, these environment variables are set:
IMAGE_SIZE=224
ARCHITECTURE="mobilenet_0.50_${IMAGE_SIZE}"
But what if I would like to use MobileNet with SSD, or SqueezeNet, for object detection? I guess the ARCHITECTURE variable must change to something like
ARCHITECTURE="ssd_mobilenet_0.50_${IMAGE_SIZE}"
I can't find any helpful resource.
The tutorial you are following uses this retrain script, which is an older version of the official TensorFlow retrain script.
While the codelab script only lets you use MobileNet or InceptionV3, you can follow the official documentation on image retraining to retrain using any model available on TensorFlow Hub.
UPDATE:
MobileNet and SqueezeNet are suitable only for image classification, not for object detection, so SSD MobileNet is the way to go.
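For reference, retraining with TensorFlow Hub boils down to stacking a new classification head on a pre-trained feature vector. A minimal Keras sketch, assuming the MobileNetV2 feature vector module and a hypothetical five-class dataset:

import tensorflow as tf
import tensorflow_hub as hub

NUM_CLASSES = 5  # hypothetical number of classes in your dataset

model = tf.keras.Sequential([
    # Frozen pre-trained feature extractor from TF Hub.
    hub.KerasLayer(
        'https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4',
        trainable=False),
    # New classification head trained on your data.
    tf.keras.layers.Dense(NUM_CLASSES, activation='softmax'),
])
model.build([None, 224, 224, 3])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])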
I am trying to use the Inception SSD model in the TensorFlow Object Detection API. To initialize the weights, I want to use Inception V2 pretrained on ImageNet as the feature extractor. I see the model config file lets you use a model pretrained on COCO, but if I want to use an ImageNet model, how should I go about it?
To fine-tune from an ImageNet classification checkpoint, do the following:
1) Download a pre-trained model from the "Pre-trained models" section on the Slim page
2) Point the fine_tune_checkpoint at that directory
3) Set from_detection_checkpoint to false (as you will now be fine-tuning from a classification checkpoint)
Note that training from an ImageNet classification checkpoint will require significantly more time than fine-tuning from a detection checkpoint.
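Put together, the relevant part of the pipeline config would look roughly like this (the checkpoint path is a placeholder; from_detection_checkpoint is the TF1 OD API field the steps above refer to):

train_config {
  fine_tune_checkpoint: "/path/to/inception_v2.ckpt"  # downloaded Slim checkpoint
  from_detection_checkpoint: false  # fine-tuning from a classification checkpoint
}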
I want to implement a Faster R-CNN model using distributed TensorFlow, but I am having difficulty loading a pretrained VGG model. How do I do that? Thanks.
The TensorFlow tutorial on retraining Inception is a good place to start. Then try to reproduce what it does, starting from an already-trained VGG model.
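As a starting point, here is a hedged TF 1.x sketch of loading pretrained VGG weights with tf.contrib.slim; 'vgg_16.ckpt' stands for the checkpoint you download from the Slim model page:

import tensorflow as tf
from tensorflow.contrib.slim.nets import vgg

slim = tf.contrib.slim

# Build the VGG-16 graph; variables are created under the 'vgg_16' scope.
images = tf.placeholder(tf.float32, [None, 224, 224, 3])
with slim.arg_scope(vgg.vgg_arg_scope()):
    logits, _ = vgg.vgg_16(images, num_classes=1000, is_training=False)

# Restore only the VGG variables from the downloaded checkpoint.
vgg_vars = slim.get_variables_to_restore(include=['vgg_16'])
init_fn = slim.assign_from_checkpoint_fn('vgg_16.ckpt', vgg_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    init_fn(sess)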