I use the Tensorflow Object Detection API with MobileNetV2 as the backbone network and SSD as the meta-architecture to do object detection.
In SSD, for each anchor point we generate several candidate bounding boxes with different aspect ratios. For each candidate box, if its intersection over union (IoU) with a ground-truth box is greater than a threshold, that box is labeled positive; otherwise it is negative. These positive and negative boxes are then used for training. (So it is important to note that it is NOT the entire image that is used for training, but only one or several crops of it.)
To debug, I'd like to save these positive and negative crops to disk so I can see which samples the algorithm really trains on.
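Conceptually, the dump I have in mind would look something like this standalone sketch (my own code, not the API's; anchors and gt_boxes are plain pixel-coordinate box lists that I would still have to pull out of the API's target assigner, and the 0.5 threshold is just a placeholder):

    from PIL import Image

    def iou(box_a, box_b):
        # Boxes as (xmin, ymin, xmax, ymax) in pixel coordinates.
        ix0 = max(box_a[0], box_b[0]); iy0 = max(box_a[1], box_b[1])
        ix1 = min(box_a[2], box_b[2]); iy1 = min(box_a[3], box_b[3])
        inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / float(area_a + area_b - inter)

    def dump_anchor_crops(image_path, anchors, gt_boxes, out_dir, pos_threshold=0.5):
        # For every anchor, compute its best IoU with the ground truth, label it
        # positive or negative, and save the corresponding image crop to disk.
        img = Image.open(image_path)
        for i, anchor in enumerate(anchors):
            best_iou = max(iou(anchor, gt) for gt in gt_boxes)
            label = "pos" if best_iou >= pos_threshold else "neg"
            crop = img.crop(tuple(int(v) for v in anchor))  # (left, upper, right, lower)
            crop.save("%s/%s_%06d_iou%.2f.png" % (out_dir, label, i, best_iou))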
I read the Python code of the Tensorflow Object Detection API, but I'm lost :(
If you have any hints, please share them!
Thanks!
I am trying to create a Tensorflow object detector with the Single Shot Multibox Detector (SSD) and MobileNet. My dataset consists of images larger than 300x300 pixels (e.g. 1280x1080). I know that the Tensorflow Object Detection API resizes the images to 300x300 during training; what I am interested in is the following:
Does it have a positive or negative influence on the later object detection if I resize the pictures to 300x300 pixels with padding before training? Without padding I don't think it has any negative effects, but with padding I'm not sure whether there are effects I'm overlooking.
Thanks a lot in advance!
I don't know SSD specifically, but CNNs generally use convolutional layers as feature extractors, stacked upon one another with different kernel sizes representing different feature sizes, i.e. using spatial correlation to their advantage. If you use padding, the padding will thus be incorporated into the extracted features, possibly corrupting your results.
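For concreteness, the "resize with padding" the question refers to is a letterbox resize along these lines (a rough PIL sketch; the helper name and fill colour are arbitrary choices of mine):

    from PIL import Image

    def letterbox_resize(image_path, out_path, target=300, fill=(0, 0, 0)):
        # Scale the image so its longer side equals `target`, then pad the
        # shorter side with a constant colour to get a target x target image.
        img = Image.open(image_path).convert("RGB")
        scale = float(target) / max(img.size)
        new_w, new_h = int(round(img.width * scale)), int(round(img.height * scale))
        resized = img.resize((new_w, new_h), Image.BILINEAR)
        canvas = Image.new("RGB", (target, target), fill)
        canvas.paste(resized, ((target - new_w) // 2, (target - new_h) // 2))
        canvas.save(out_path)

Note that if you pad offline like this, the bounding-box annotations have to be scaled and offset by the same transform.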
I am using Tensorflow Object Detection API for fine-tuning, using my own data. The goal is to detect 2 classes of objects. I am using the pre-trained faster_rcnn_resnet101_coco model.
The various detection box precision and recall measures are generally increasing (see screenshots below) and are fairly high:
The box classifier losses are decreasing. HOWEVER, the RPN losses are increasing (see screenshots below) -- it looks like the model is having a hard time distinguishing foreground from background (hence the increasing RPN losses), but once the model is able to identify and locate the right foreground, it classifies it well (hence the decreasing box classifier losses). I think this can be observed in the model's performance on test images: the false positive rate (on images that do not contain either of the two classes of target objects) is rather high. On the other hand, on images that do contain those target objects, the model does a fantastic job of accurately identifying and locating them.
So my question is essentially: what are some of the things I could try to help make sure the RPN losses are also decreasing?
I'm training a model to detect meteors within pictures of the night sky, and I have a fairly small dataset of about 85 images, each annotated with a bounding box. I'm using transfer learning, starting from the ssd_mobilenet_v1_coco_11_06_2017 checkpoint with Tensorflow 1.4. I'm resizing images to 600x600 pixels during training. I'm using data augmentation in the pipeline configuration to randomly flip the images horizontally and vertically and rotate them 90 degrees. After 5000 steps, the model converges to a loss of about 0.3 and will detect meteors, but it seems to matter where in the image the meteor is located. Do I have to train the model by giving examples of every possible location? I've attached a sample of a detection run where I tiled a meteor over the entire image and received various levels of detection (filtered to 50%). How can I improve this? [Image: detected meteors in image example]
It could very well be your data, and I think you are making a prudent move by improving the heterogeneity of your dataset, BUT it could also be your choice of model.
It is worth noting that ssd_mobilenet_v1_coco has the lowest COCO mAP relative to the other models in the TensorFlow Object Detection API model zoo. You aren't trying to detect a COCO object, but the mAP numbers are a reasonable approximation of generic model accuracy.
At the highest level, the choice of model is largely a tradeoff between speed and accuracy. The model you chose, ssd_mobilenet_v1_coco, favors speed over accuracy. Consequently, I would recommend you try one of the Faster R-CNN models (e.g., faster_rcnn_inception_v2_coco) before you spend a significant amount of time preprocessing images.
I am a beginner in machine learning and I am trying to do my own object detection using my own dataset. However, it would be more practical if the objects were labeled with polygon-shaped bounds, yet the Tensorflow Object Detection API can only accept bounding boxes.
So is it possible to modify the API such that it can accept a polygon-labeled dataset?
Yes, it is possible. You have to give the directory of the training set. But bounding boxes are recommended because at inference time you will get a bounding box around each detected object. You can see an example on tensorflow.org.
For labeling you can use LabelImg, which is very simple and easy to use, and careful labeling will also help the detection accuracy.
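If your annotations are already polygons, a common workaround is to convert each polygon to its enclosing bounding box before writing the TFRecords; a minimal sketch in plain Python (the helper name is mine, and `polygon` is assumed to be a list of (x, y) vertices):

    def polygon_to_bbox(polygon):
        # polygon: list of (x, y) vertices in pixel coordinates.
        xs = [x for x, _ in polygon]
        ys = [y for _, y in polygon]
        # Axis-aligned enclosing box: xmin, ymin, xmax, ymax.
        return min(xs), min(ys), max(xs), max(ys)

    # A triangular annotation collapses to the box (10, 5, 40, 30).
    print(polygon_to_bbox([(10, 5), (40, 20), (25, 30)]))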
I'm attempting to train a Faster R-CNN model for small digit detection. I'm using the newly released Tensorflow Object Detection API and so far have been fine-tuning a pre-trained faster_rcnn_resnet101_coco from the model zoo. All my training attempts have resulted in models with high precision but low recall. Out of the ~120 objects (digits) in each image, only ~20 are ever detected, but when detected the classification is accurate. (Also, I am able to train a simple convnet from scratch on my cropped images with high accuracy, so the problem is in the detection aspect of the model.) Each digit is on average 60x30 pixels in the original images (and probably about half that size after the image is resized before being fed into the model). Here is an example image with detected boxes of what I'm seeing:
What is odd to me is how it is able to correctly detect neighboring digits but completely miss the rest that are very similar in terms of pixel dimensions.
I have tried adjusting the hyperparameters around anchor box generation and first_stage_max_proposals, but nothing has improved the results so far. Here is an example config file I have used. What other hyperparameters should I try adjusting? Any other suggestions on how to diagnose the problem? Should I be looking into other architectures, or does my task look doable with Faster R-CNN and/or SSD?
In the end the immediate problem was that I was not using the visualizer correctly. By updating the parameters for visualize_boxes_and_labels_on_image_array as described by Johnathan in the comments, I was able to see that I am at least detecting more boxes than I had thought.
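For anyone who hits the same thing, the change amounts to passing more generous values to the visualizer. This is a sketch of the call, assuming the usual variables from the object_detection inference tutorial (image_np, boxes, classes, scores, category_index); the parameter names are as they appear in the copy of object_detection/utils/visualization_utils.py I looked at, so check your version:

    import numpy as np
    from object_detection.utils import visualization_utils as vis_util

    # image_np, boxes, classes, scores and category_index come from the usual
    # inference code in the object_detection tutorial notebook.
    vis_util.visualize_boxes_and_labels_on_image_array(
        image_np,
        np.squeeze(boxes),
        np.squeeze(classes).astype(np.int32),
        np.squeeze(scores),
        category_index,
        use_normalized_coordinates=True,
        max_boxes_to_draw=200,   # the default (20) hides most of the ~120 digits
        min_score_thresh=0.5,
        line_thickness=2)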
I checked your config file: you are decreasing the resolution of your images to 1024. The region containing a digit will then not contain many pixels, and you are losing information. What I suggest is to train the model on another dataset with smaller images. You can, for example, crop each image into four areas.
If you have a good GPU, you could instead increase the max dimension in the image_resizer, but I guess you will run out of memory.
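A rough sketch of that cropping idea (my own helper, not part of the API; boxes are (xmin, ymin, xmax, ymax) in pixels, and boxes that cross a cut line are simply clipped here):

    from PIL import Image

    def tile_into_quadrants(image_path, boxes):
        # Split the image into four equal tiles and remap each box into the
        # coordinates of the tile(s) it overlaps. Boxes crossing a cut line
        # are clipped; tiny remnants may be worth dropping afterwards.
        img = Image.open(image_path)
        w, h = img.size
        tiles = []
        for top in (0, h // 2):
            for left in (0, w // 2):
                right, bottom = left + w // 2, top + h // 2
                tile = img.crop((left, top, right, bottom))
                tile_boxes = []
                for xmin, ymin, xmax, ymax in boxes:
                    cx0, cy0 = max(xmin, left), max(ymin, top)
                    cx1, cy1 = min(xmax, right), min(ymax, bottom)
                    if cx1 > cx0 and cy1 > cy0:
                        tile_boxes.append((cx0 - left, cy0 - top, cx1 - left, cy1 - top))
                tiles.append((tile, tile_boxes))
        return tiles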