How to add a feature extractor network (for example MobileNetV2) to TensorFlow's Object Detection API - tensorflow

This tutorial discusses how to use the object detection API in TensorFlow.
I am looking for a tutorial explaining how to add a feature extractor such as MobileNetV2 to TensorFlow's object detection framework.

Have you checked out the TensorFlow-provided Model Zoo? :)
It includes various object detection models with various feature extractors such as MobileNet, Inception, ResNet, etc.
Below is a link to the TensorFlow Detection Model Zoo, where you can choose detection model architectures, Region-Based (R-CNN) or Single Shot Detector (SSD) models, and feature extractors.
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
You can download frozen graphs of the pre-trained models trained on COCO, KITTI, Open Images, etc.
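In practice, "adding" a feature extractor to the Object Detection API usually means selecting it in the model's `pipeline.config`. Here is a hedged sketch of the relevant fragment; the field names follow the SSD sample configs shipped with the API, but the exact values (number of classes, multipliers) are placeholders for your own setup:

```
model {
  ssd {
    num_classes: 90  # placeholder; set to your dataset's class count
    feature_extractor {
      type: "ssd_mobilenet_v2"   # selects the MobileNetV2 backbone
      depth_multiplier: 1.0
      min_depth: 16
    }
    # ... anchor generator, box predictor, loss config, etc.
  }
}
```

Swapping the backbone is typically just a matter of changing the `type` string to another extractor registered in the API (e.g. a ResNet or Inception variant from the sample configs).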

Related

What is the difference between TFHub and Model Garden?

TensorFlow Hub is a repository for pre-trained models. Model Garden (Model Zoo) also hosts state-of-the-art models and, like TF Hub, provides facilities for downloading and using them; both are maintained by the TensorFlow team.
Why did TensorFlow create two model repositories?
When should we use TF Hub to retrieve a well-known model, and when should we use Model Garden to download a model? What is the difference between them?
TF Hub provides trained models in SavedModel, TFLite, or TF.js format. These artifacts can be used for inference and some can be used in code for fine-tuning. TF Hub does not provide modeling library code to train your own models from scratch.
Model Garden is a modeling library for training BERT, image classification models, and more. Model Garden provides code for training your own models from scratch as well as some checkpoints to start from.

How to perform keypoint regression on a custom dataset in Tensorflow?

Is it possible to use CenterNet with MobilenetV2 backbone (TF Lite compatible) to perform keypoints detection on a custom dataset? Is there a tutorial somewhere?
I have something for training a keypoint detection model on a custom dataset with a CenterNet model and an hourglass backbone.
This GitHub repo, Custom Keypoint Detection, walks through dataset preparation, model training, and inference for the CenterNet-HourGlass104 keypoint detection model based on the TensorFlow Object Detection API, with examples.
It could help you train your keypoint detection model on a custom dataset.
Any issues related to the project can be raised on GitHub itself, and doubts can be cleared here.
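For reference, the Object Detection API configures keypoint training through the `center_net` section of `pipeline.config`. A hedged sketch is below; the field names follow the CenterNet keypoint sample configs bundled with the API, but treat every value (task name, class name, paths) as a placeholder for your own dataset:

```
model {
  center_net {
    num_classes: 1  # placeholder; e.g. just "person" for pose estimation
    feature_extractor {
      type: "hourglass_104"
    }
    keypoint_label_map_path: "PATH_TO_BE_CONFIGURED/label_map.pbtxt"
    keypoint_estimation_task {
      task_name: "human_pose"          # placeholder task name
      task_loss_weight: 1.0
      keypoint_class_name: "person"    # class whose keypoints are predicted
      # ... per-keypoint loss and standard-deviation settings
    }
  }
}
```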

What exactly is SSD_ResNet50_v1_FPN?

In the TensorFlow Model Zoo, the object detection collection has a few popular single-shot object detection models named "retinanet/resnet50_v1_fpn_..." or "RetinaNet (SSD with ResNet-50 v1)". The paper usually linked to these works is here, but that paper presents a different model, Detectron. So I understand "SSD_ResNet50_FPN" uses the FPN feature-extraction concept from that paper, but is there any more detailed documentation available to understand how exactly SSD integrates with FPN? And what is the exact model architecture of this TF Zoo model?

Is there a tutorial for using tensorflow object detection API for training object and key point detection?

This link provides a Google Colab notebook for inference of CenterNet HourGlass104 Keypoints 512x512 for object detection and pose key point detection. Is there a similar notebook or tutorial to train object detection and key point detection on custom datasets?
I have created a detailed GitHub repo, Custom Keypoint Detection, covering dataset preparation, model training, and inference for the CenterNet-HourGlass104 keypoint detection model based on the TensorFlow Object Detection API, with examples.
It could help you train your keypoint detection model on a custom dataset.
Any issues related to the project can be raised on GitHub itself, and doubts can be cleared here.

Understanding exactly what the pretrained model does on the Tensorflow object detection API

I am trying to understand what I need from any pre-trained model used with the API, independent of the rest of the TensorFlow Object Detection API code.
For example, ssd_mobilenet_v1_coco_2017_11_17: as I understand it, this is a model already trained to detect objects (classification to determine the category of each object, plus regression to bound the objects with rectangles, where those rectangles are the x, y, w, h coordinates of the object).
How can we use the regression output of that model (the x, y, w, h coordinates) in another model?
Suppose we want to print just the x, y, w, h coordinates of a detected object in an image, without needing the TensorFlow Object Detection API code; how can we do that?
You can certainly use the pretrained models provided in the TensorFlow object detection model zoo without installing the Object Detection API. An alternative is to use OpenCV.
OpenCV provides both C++ and Python APIs (its dnn module) to run .pb models generated by TensorFlow. Here is a nice tutorial.
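Whichever runtime you use to execute the graph, these detection models emit boxes as normalized `[ymin, xmin, ymax, xmax]` values, along with per-box scores and class ids. Converting them to pixel x, y, w, h is plain arithmetic, so printing the coordinates needs none of the API's code. A minimal sketch (the sample boxes, scores, and image size are made up for illustration):

```python
def to_xywh(box, img_w, img_h):
    """Convert a normalized [ymin, xmin, ymax, xmax] box to pixel (x, y, w, h)."""
    ymin, xmin, ymax, xmax = box
    # round() rather than int() to avoid losing a pixel to float truncation
    x = round(xmin * img_w)
    y = round(ymin * img_h)
    w = round((xmax - xmin) * img_w)
    h = round((ymax - ymin) * img_h)
    return x, y, w, h

# Illustrative values: two detections on a 640x480 image, score threshold 0.5
boxes = [[0.1, 0.2, 0.5, 0.6], [0.0, 0.0, 0.1, 0.1]]
scores = [0.9, 0.3]
for box, score in zip(boxes, scores):
    if score >= 0.5:
        print(to_xywh(box, 640, 480))  # -> (128, 48, 256, 192)
```

The same conversion applies whether the raw outputs come from OpenCV's dnn module or from running the frozen graph directly in TensorFlow.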