How to remove incorrect boxes in YOLO?

I use YOLOv2 on Darknet on Windows. I trained a model on my images, and I thought YOLOv2 used the NMS algorithm.
But my result looked like this.
Where should I add code to remove the incorrect boxes?
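
For reference, non-maximum suppression keeps the highest-scoring detection and discards overlapping boxes whose IoU with it exceeds a threshold; Darknet already applies it after the forward pass, so it is often enough to raise the detection threshold (the -thresh option) rather than add code. Below is a minimal sketch of the NMS idea in plain Python, with illustrative names rather than Darknet's actual API:

def iou(a, b):
    # Boxes are (x1, y1, x2, y2); returns intersection over union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter + 1e-9)

def nms(detections, iou_thresh=0.45, score_thresh=0.25):
    # detections: list of (box, score). Drop low scores, then keep each
    # remaining box only if it does not overlap an already-kept box.
    dets = sorted((d for d in detections if d[1] >= score_thresh),
                  key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in dets:
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, score))
    return kept

Raising score_thresh (or Darknet's -thresh) is usually the quickest way to drop spurious low-confidence boxes.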

Related

What format do YOLOv2 txt annotation files use?

My YOLOv2Tiny model draws bounding boxes in the correct place, but they are consistently smaller than the object. This got me wondering whether I annotated the images correctly.
Does YOLOv2 use the Pascal VOC .xml format or the same YOLO .txt format that YOLOv4 uses?
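As far as I know, Darknet-based YOLOv2 and YOLOv4 read the same YOLO .txt annotation format, not Pascal VOC .xml: one line per object, with the class id followed by the box center and size, all normalized by the image dimensions. A sketch of the conversion in Python (names are illustrative):

def voc_box_to_yolo_line(class_id, xmin, ymin, xmax, ymax, img_w, img_h):
    # YOLO txt format: "<class> <x_center> <y_center> <width> <height>",
    # with all four coordinates normalized to [0, 1] by the image size.
    x_c = (xmin + xmax) / 2.0 / img_w
    y_c = (ymin + ymax) / 2.0 / img_h
    w = (xmax - xmin) / float(img_w)
    h = (ymax - ymin) / float(img_h)
    return "%d %.6f %.6f %.6f %.6f" % (class_id, x_c, y_c, w, h)

# Example: a 100x200 box at (50, 80) in a 640x480 image, class 0.
print(voc_box_to_yolo_line(0, 50, 80, 150, 280, 640, 480))

If boxes come out consistently smaller than the objects, it is worth checking that the width and height were normalized by the correct image dimension.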

How to train TensorFlow Object Detection API instance segmentation (mask_rcnn_inception_resnet_v2_atrous_coco) on my own dataset

Please help me train the mask_rcnn_inception_resnet_v2_atrous_coco model on my own dataset.
https://github.com/tensorflow/models/tree/master/research/object_detection
Model: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
I have referred to https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/instance_segmentation.md, but I can't clearly understand the steps.
Do we have to give the bounding box coordinates of the object along with the mask.png file?
How do I convert the mask data to TFRecord files (for instance segmentation)?
Can anyone suggest a labelling tool that produces both the bounding boxes and the mask.png files?
Tools like LabelBox, labelme, and labelImg give either bounding box coordinates, a mask.png file, or the polygon coordinates for the object.
The best approach is to give a PNG mask and XML labels; it should then work with create_pet_tf_record.py, with faces_only=False set in that file. You can see in the code what input it expects.
Change the paths in the pipeline configuration to point to your directories.
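On the TFRecord question: for instance segmentation, each example carries the encoded image, the boxes, and one PNG-encoded mask per object. A rough sketch of the record that create_pet_tf_record.py builds, assuming the Object Detection API's usual feature keys (e.g. 'image/object/mask'); treat the exact keys as assumptions and check the script itself:

import tensorflow as tf

def make_example(jpeg_bytes, height, width, boxes, classes, png_masks):
    # boxes: list of (xmin, ymin, xmax, ymax) normalized to [0, 1].
    # png_masks: one PNG-encoded binary mask (bytes) per object.
    feature = {
        'image/encoded': tf.train.Feature(bytes_list=tf.train.BytesList(value=[jpeg_bytes])),
        'image/format': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'jpeg'])),
        'image/height': tf.train.Feature(int64_list=tf.train.Int64List(value=[height])),
        'image/width': tf.train.Feature(int64_list=tf.train.Int64List(value=[width])),
        'image/object/bbox/xmin': tf.train.Feature(float_list=tf.train.FloatList(value=[b[0] for b in boxes])),
        'image/object/bbox/ymin': tf.train.Feature(float_list=tf.train.FloatList(value=[b[1] for b in boxes])),
        'image/object/bbox/xmax': tf.train.Feature(float_list=tf.train.FloatList(value=[b[2] for b in boxes])),
        'image/object/bbox/ymax': tf.train.Feature(float_list=tf.train.FloatList(value=[b[3] for b in boxes])),
        'image/object/class/label': tf.train.Feature(int64_list=tf.train.Int64List(value=classes)),
        'image/object/mask': tf.train.Feature(bytes_list=tf.train.BytesList(value=png_masks)),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))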
Do we have to give the Bounding box coordinates of the object along with the mask.png file?
Answer: Yes, you need the original images, bounding box files, and mask images.
Use LabelImg to annotate each object in your original images.
Once you're done with this, you need to annotate each pixel inside each bounding box. There are several tools you can use; for example, the VGG Image Annotator.
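Note that if you already have per-object masks, the bounding boxes can be computed from them instead of being annotated twice; a minimal sketch with NumPy, assuming one binary mask per object:

import numpy as np

def bbox_from_mask(mask):
    # mask: 2-D boolean/0-1 array for a single object.
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # empty mask, no box
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())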

Should I include negative examples for Tensorflow object detection API?

I am building an R-CNN detection network using TensorFlow's Object Detection API.
My goal is to detect bounding boxes for animals in outdoor videos. Most frames do not contain animals and are just dynamic backgrounds.
Most tutorials focus on training custom labels, but make no mention of negative training samples. How does this class of detectors deal with images that do not contain objects of interest? Does it just output low probabilities, or will it be forced to draw a bounding box somewhere in the image?
My current plan is to use traditional background subtraction in OpenCV to generate candidate frames and pass them to the trained network. Should I also include a class of "background" bounding boxes as "negative data"?
The final option would be to use OpenCV for background subtraction, the R-CNN to generate bounding boxes, and then a classification model on the crops to distinguish animals from background.
In general it's not necessary to explicitly include "negative images". What happens in these detection models is that they use the parts of the image that don't belong to the annotated objects as negatives.
If you expect your model to differentiate between "found a figure" and "no figure", then you will almost certainly need to train it on negative examples. Label these as "no image". In the "no image" case, yes, use the entire image as the bounding box; don't ask the model to recognize anything smaller.
In "no image" cases you may still get a smaller bounding box, but that doesn't matter: at inference time you simply ignore whatever box is returned for the "no image" class.
Of course, the critical step here is to try it out and see how well it works for you.
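As an illustration of the "use the entire image as the bounding box" advice, here is a hypothetical Python helper that emits a Pascal VOC-style annotation for a negative image; the class name "no image" follows the answer above, and the XML layout is only a sketch:

def negative_annotation(filename, width, height):
    # One full-image box labelled with the negative class.
    return (
        "<annotation>"
        "<filename>%s</filename>"
        "<size><width>%d</width><height>%d</height></size>"
        "<object><name>no image</name>"
        "<bndbox><xmin>1</xmin><ymin>1</ymin>"
        "<xmax>%d</xmax><ymax>%d</ymax></bndbox></object>"
        "</annotation>" % (filename, width, height, width, height)
    )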
I have found success by scanning my ground truth, copying the box areas plus a margin, then pasting tilings of those box areas onto new background images (guaranteed to have no objects), and creating corresponding XML files with the box category assertions.
I collect non-objects as "uncategorised" boxes, usually from glitches in the output of my latest model. These are tiled just like the "is-objects", but are not recorded in the XML files.
I produce tilings at various scales to build each new training set.
A further explanation and sample Python code are here:
https://github.com/brentcroft/ground-truth-productions
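A rough sketch of the tiling idea described above, using Pillow; the names, grid layout, and margin value are illustrative, so see the linked repository for the author's actual code:

from PIL import Image

def tile_crops(background_path, crops, margin=8, out_path="tiled.jpg"):
    # crops: list of PIL.Image patches cut from ground-truth boxes (plus margin).
    # Paste them in a simple grid over a background known to contain no objects.
    bg = Image.open(background_path).convert("RGB")
    x = y = margin
    row_h = 0
    boxes = []  # pasted (xmin, ymin, xmax, ymax), for writing XML annotations
    for crop in crops:
        if x + crop.width + margin > bg.width:    # wrap to the next row
            x, y = margin, y + row_h + margin
            row_h = 0
        if y + crop.height + margin > bg.height:  # background is full
            break
        bg.paste(crop, (x, y))
        boxes.append((x, y, x + crop.width, y + crop.height))
        x += crop.width + margin
        row_h = max(row_h, crop.height)
    bg.save(out_path)
    return boxes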

Exporting Highlighted Selections in Tensorboard Projector

I am using Google's TensorBoard Projector to visualize a corpus of documents. I am trying to curate the corpus, and would like to study the documents in the clusters that appear during t-SNE visualization. Is there a way to export the metadata of a group of documents that you select?
I would like to study the metadata of everything that's highlighted in that image. Is this something that can be done with the Projector, or with TensorBoard code?
Thank you
If you go to "bookmarks" in the lower right corner, then press the "+" button, it will save the current state. You can then click the download icon to download the state file. From that, you should be able to parse out the information you want.
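The downloaded state file is JSON; here is a small sketch for pulling the selected point indices out and matching them against the metadata TSV. The selectedPoints field name is an assumption based on files I have seen, so inspect your own file first:

import json

# Load the bookmark state downloaded from the projector.
with open("state.txt") as f:
    state = json.load(f)

# Each saved bookmark is one entry; selected point indices refer to
# rows of the metadata TSV that was loaded into the projector.
selected = state[0].get("selectedPoints", [])

with open("metadata.tsv") as f:
    rows = f.read().splitlines()

for i in selected:
    # If your TSV has a header row, offset the index by one.
    print(rows[i])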

How to change colors of function plots in Tensorboard?

I'm trying to compare different learning-rate decay schedules using TensorFlow, so I visualize the cost functions in TensorBoard (the 'EVENTS' tab). My problem is that the plots of the different functions use very similar colors, which makes them hard to compare. Is there any way to change those colors?
Just create different summary writers with different log directories for each learning rate. Then launch TensorBoard using:
tensorboard --logdir=tag1:/path/to/summary/one,tag2:/path/to/summary/two
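For example, with the TF1-style summary API each run gets its own writer and log directory, and TensorBoard then assigns each directory its own color and legend entry (paths and values here are illustrative):

import tensorflow as tf  # TF1-style API

# Hypothetical cost curves for two learning rates.
costs_lr1 = [0.9, 0.7, 0.55, 0.45]
costs_lr2 = [0.9, 0.6, 0.40, 0.30]

# One writer per experiment.
writer_one = tf.summary.FileWriter("/path/to/summary/one")
writer_two = tf.summary.FileWriter("/path/to/summary/two")

for step, (c1, c2) in enumerate(zip(costs_lr1, costs_lr2)):
    writer_one.add_summary(tf.Summary(value=[tf.Summary.Value(tag="cost", simple_value=c1)]), step)
    writer_two.add_summary(tf.Summary(value=[tf.Summary.Value(tag="cost", simple_value=c2)]), step)

writer_one.close()
writer_two.close()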
There's currently no way to change those colors, but a recent release has made the colors more differentiated. Try updating and seeing if that helps.
This is very far from an actual solution, but in case someone only wants to change the colors for a screenshot in a paper or presentation, it's a quick workaround:
Open your browser's dev tools (F12).
Search for the color code you want to change (the default orange is #ff7043) and replace it with the color you want.
As suggested here, creating a symlink to a run's logdir (and pointing TensorBoard at it) is one workaround to change the color used to plot that run.
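A minimal sketch of that symlink trick (path names are illustrative):

import os

# TensorBoard assigns colors by run name, so exposing the same logdir
# under a new name makes it cycle to a different color.
os.symlink("/logs/run1", "/logs/run1-recolored")
# Then: tensorboard --logdir=/logs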