TensorFlow Object Detection API - tensorflow

How can we draw the ground truth bounding box together with the predicted bounding box at inference time using the TensorFlow Object Detection API?
How to calculate precision, recall & mAP for object detection using an SSD model with a KITTI-like dataset?

I suggest you look at the following websites:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md
https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html
If you use a pre-trained model (there is a large list of them in the TensorFlow Model Garden detection zoo, linked above), you can customize the model as well. Another advantage of using TensorFlow models is that their precision has already been measured, so you don't need to measure it again; the COCO mAP numbers are listed in the model zoo table at the first link.
As for the second part of your question: you can download the LabelImg GUI, use it to create bounding boxes on your training images, and feed them to the TensorFlow Object Detection API.

How can we draw a ground truth bounding box with a predicted bounding box at the time of inference by making use of the TensorFlow Object Detection API?
Ans: You can draw the ground truth bounding box from the XML annotation file associated with the image. Parse the file with xml.etree.ElementTree to extract the xmin, xmax, ymin, and ymax coordinates, then use cv2.rectangle to draw the ground truth box onto the predicted image.
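Here is a minimal sketch of that approach, assuming PASCAL VOC-style XML annotations; the file paths are placeholders:

```python
import xml.etree.ElementTree as ET
import cv2

# Parse the PASCAL VOC-style annotation file (placeholder path).
tree = ET.parse("annotations/image_0001.xml")
root = tree.getroot()

# Load the image that already has the predicted boxes drawn on it (placeholder path).
image = cv2.imread("images/image_0001_predicted.jpg")

# Draw every ground truth box in the annotation file in green.
for obj in root.findall("object"):
    bndbox = obj.find("bndbox")
    xmin = int(float(bndbox.find("xmin").text))
    ymin = int(float(bndbox.find("ymin").text))
    xmax = int(float(bndbox.find("xmax").text))
    ymax = int(float(bndbox.find("ymax").text))
    cv2.rectangle(image, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)

cv2.imwrite("images/image_0001_with_gt.jpg", image)
```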
How to calculate precision, recall & mAP for object detection using an SSD model with a KITTI-like dataset?
Ans: In TF2 you can run !python model_main_tf2.py --alsologtostderr --model_dir='___' --pipeline_config_path='____' --checkpoint_dir='____' (supplying --checkpoint_dir switches the script into evaluation mode and reports the COCO metrics, including mAP), or you can validate by yourself, for example by creating a function to calculate IoU.
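A minimal sketch of the do-it-yourself route, assuming boxes in [xmin, ymin, xmax, ymax] format; the greedy matching below is a simplification, not the full COCO/PASCAL evaluation protocol:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two [xmin, ymin, xmax, ymax] boxes."""
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_recall(pred_boxes, gt_boxes, iou_threshold=0.5):
    """Greedy matching: each ground truth box can be matched at most once."""
    matched = set()
    true_positives = 0
    for pred in pred_boxes:
        best_iou, best_idx = 0.0, None
        for i, gt in enumerate(gt_boxes):
            if i in matched:
                continue
            overlap = iou(pred, gt)
            if overlap > best_iou:
                best_iou, best_idx = overlap, i
        if best_idx is not None and best_iou >= iou_threshold:
            matched.add(best_idx)
            true_positives += 1
    precision = true_positives / len(pred_boxes) if pred_boxes else 0.0
    recall = true_positives / len(gt_boxes) if gt_boxes else 0.0
    return precision, recall
```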

Related

Multiple labels on a single bounding box for TensorFlow SSD MobileNet

I have configured SSD MobileNet v1 and have trained the model previously as well. However, in my dataset each bounding box has multiple class labels. My dataset is of faces; each face has 2 labels: age and gender. Both labels have the same bounding box coordinates.
After training on this dataset, the problem I encounter is that the model only labels the gender of the face and not the age. In YOLO, however, both gender and age can be shown.
Is it possible to achieve multiple labels on a single bounding box using SSD MobileNet?
It depends on the implementation, but SSD uses a softmax layer to predict a single class per bounding box, whereas YOLO predicts individual sigmoid confidence scores for each class. So in SSD a single class gets picked with argmax, but in YOLO you can accept multiple classes above a threshold.
However, you are really doing a multi-task learning problem with two types of outputs, so you should extend these models to predict both types of classes jointly.
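A toy sketch of that difference, using made-up logits for one predicted box over the hypothetical classes [male, female, young, old]:

```python
import numpy as np

# Hypothetical raw class logits for a single predicted box.
logits = np.array([2.0, -0.5, 1.8, -1.0])
class_names = ["male", "female", "young", "old"]

# SSD-style head: softmax picks exactly one winning class per box.
softmax = np.exp(logits) / np.sum(np.exp(logits))
print("softmax pick:", class_names[int(np.argmax(softmax))])  # -> male

# YOLO-style head: independent sigmoid scores; accept every class above a threshold.
sigmoid = 1.0 / (1.0 + np.exp(-logits))
accepted = [n for n, s in zip(class_names, sigmoid) if s > 0.5]
print("sigmoid picks:", accepted)  # -> ['male', 'young'] (multiple labels per box)
```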

Tensorflow object detection: why is the location in the image affecting detection accuracy when using SSD MobileNet v1?

I'm training a model to detect meteors in pictures of the night sky, and I have a fairly small dataset of about 85 images, each annotated with a bounding box. I'm using transfer learning, starting from the ssd_mobilenet_v1_coco_11_06_2017 checkpoint with Tensorflow 1.4, and resizing images to 600x600 pixels during training. I'm using data augmentation in the pipeline configuration to randomly flip the images horizontally and vertically and rotate them 90 degrees. After 5000 steps, the model converges to a loss of about 0.3 and will detect meteors, but it seems to matter where in the image the meteor is located. Do I have to train the model by giving examples of every possible location? I've attached a sample of a detection run where I tiled a meteor over the entire image and received various levels of detection (filtered to 50%). How can I improve this?
[attached image: detected meteors in image example]
It could very well be your data and I think you are making a prudent move by improving the heterogeneity of your dataset, BUT it could also be your choice of model.
It is worth noting that ssd_mobilenet_v1_coco has the lowest COCO mAP relative to the other models in the TensorFlow Object Detection API model zoo. You aren't trying to detect a COCO object, but the mAP numbers are a reasonable approximation of generic model accuracy.
At the highest level, the choice of model is largely a tradeoff between speed and accuracy. The model you chose, ssd_mobilenet_v1_coco, favors speed over accuracy. Consequently, I would recommend you try one of the Faster RCNN models (e.g., faster_rcnn_inception_v2_coco) before you spend a significant amount of time preprocessing images.

Ignore some classes in training

I'm using the TensorFlow models object detection framework for my use case, and I have some boxes/classes that I would like to ignore in the training process because their quality is not the best.
I don't want to cover the box areas with black rectangles, because that would change the image,
and I don't want them to become negative examples in the training process.
Is there an easy way to do that?
I'm using the TensorFlow models object detection Faster R-CNN implementation with the PASCAL VOC data representation.

TensorFlow object detection API, using a polygon-labeled dataset

I am a beginner in machine learning, and I am trying to do my own object detection using my own dataset. However, it would be more practical if the objects were labeled with polygon-shaped bounds, yet the TensorFlow object detection API only accepts bounding boxes.
So is it possible to modify the API so that it can accept a polygon-labeled dataset?
Yes, it is possible. You have to give the directory of the training set. But bounding boxes are recommended, because at inference time you will get a bounding box around the detected object; you can see an example of this on tensorflow.org.
For labeling you can use LabelImg, which is very simple and easy to use and helps you produce accurate annotations, which in turn improves detection accuracy.
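One common workaround, sketched minimally here and assuming the polygon labels are stored as lists of (x, y) vertices, is to collapse each polygon to its enclosing axis-aligned bounding box before generating the TFRecords:

```python
def polygon_to_bbox(points):
    """Collapse a polygon, given as a list of (x, y) vertices,
    into its enclosing axis-aligned box (xmin, ymin, xmax, ymax)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

# Example with a hypothetical triangular label around an object.
polygon = [(120, 80), (200, 95), (150, 180)]
print(polygon_to_bbox(polygon))  # (120, 80, 200, 180)
```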

How do I get detected boxes on the evaluation set for TensorFlow's object detection API?

I was working with the recently released TensorFlow API for object detection, with Faster RCNN on ResNet 101, on my own dataset. It seems to train and evaluate on validation data, but I was hoping there was a way to get/store the bounding boxes for all images in the eval set in a file, or to find the location in the source code where I can get the predicted bounding boxes together with the image names.
If you just want to obtain the detected bounding boxes given a set of images, the Jupyter notebook contains a good example of how to do this.
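In case it helps, here is a condensed sketch of the pattern that notebook uses (TF1-style frozen graph; the tensor names follow the exporter's conventions, and the model and image paths are placeholders):

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Load the exported frozen inference graph (placeholder path).
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("exported_model/frozen_inference_graph.pb", "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with detection_graph.as_default(), tf.Session() as sess:
    image_tensor = detection_graph.get_tensor_by_name("image_tensor:0")
    boxes = detection_graph.get_tensor_by_name("detection_boxes:0")
    scores = detection_graph.get_tensor_by_name("detection_scores:0")
    classes = detection_graph.get_tensor_by_name("detection_classes:0")

    image = np.array(Image.open("eval_images/img_0001.jpg"))  # placeholder path
    out_boxes, out_scores, out_classes = sess.run(
        [boxes, scores, classes],
        feed_dict={image_tensor: np.expand_dims(image, axis=0)})

    # Boxes come back as [ymin, xmin, ymax, xmax], normalized to [0, 1];
    # keep only detections above a confidence threshold and save/print them.
    keep = out_scores[0] > 0.5
    print(out_boxes[0][keep], out_classes[0][keep])
```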