How to generate the labels of custom data for YOLO - object-detection

The label format for YOLO is [class, x, y, width, height]. Since the dataset is very large, is there any shortcut to generate the labels for YOLO, or do we have to produce them by hand through measurement?
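For context, each image gets a matching <image_name>.txt with one such line per object, and all box values are normalized to [0, 1] by the image width/height. A tiny sketch of turning one hand-measured pixel box into a label line (all numbers are made up for illustration):

img_w, img_h = 1920, 1080                        # image size
x_min, y_min, box_w, box_h = 400, 300, 200, 150  # measured pixel box
class_id = 0

# YOLO wants the box centre and size, normalized by the image dimensions.
x_center = (x_min + box_w / 2) / img_w
y_center = (y_min + box_h / 2) / img_h
width = box_w / img_w
height = box_h / img_h

print(f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}")
# -> 0 0.260417 0.347222 0.104167 0.138889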

Method 1: Using Pre-trained YOLOv4 models.
YOLOv4 models were pre-trained on the COCO dataset, so if your object(s) appear in that class list, you can use the pre-trained weights to pseudo-label your object(s).
To process a list of images data/new_train.txt and save the detection results in YOLO training format as a <image_name>.txt label file for each image, use: darknet.exe detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -thresh 0.25 -dont_show -save_labels < data/new_train.txt
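The data/new_train.txt file is just one image path per line; a small helper to build it (the images folder is an assumption, adjust to your layout):

import glob

# Write one image path per line, which is what the < data/new_train.txt redirect expects.
image_paths = sorted(glob.glob("data/images/*.jpg"))
with open("data/new_train.txt", "w") as f:
    f.write("\n".join(image_paths) + "\n")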
Method 2: Using Other Pre-trained Models. It's the same concept: use other pre-trained models to detect your object (as long as they were trained on your object), then export/convert the labels to YOLO format.
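As a concrete illustration, if the other detector reports boxes as normalized [ymin, xmin, ymax, xmax] corners (the TensorFlow Object Detection API convention), the conversion to a YOLO label line is only a coordinate change (the function name is hypothetical):

def to_yolo_line(class_id, ymin, xmin, ymax, xmax):
    # Normalized corner coordinates -> normalized centre/width/height.
    x_center = (xmin + xmax) / 2.0
    y_center = (ymin + ymax) / 2.0
    width = xmax - xmin
    height = ymax - ymin
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"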
Method 3: Use hand-crafted feature descriptors. Examples are shape detection, color-based detection, etc.
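As a rough sketch of color-based detection with OpenCV (the HSV range is a placeholder you would tune for your object, and a single class 0 is assumed):

import cv2
import numpy as np

img = cv2.imread("sample.jpg")                     # placeholder image path
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Placeholder HSV range; pick one that isolates your object's colour.
mask = cv2.inRange(hsv, np.array([100, 80, 80]), np.array([130, 255, 255]))

# OpenCV 4 return signature: (contours, hierarchy).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
img_h, img_w = img.shape[:2]
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    # One YOLO-format line per detected blob.
    print(f"0 {(x + w / 2) / img_w:.6f} {(y + h / 2) / img_h:.6f} {w / img_w:.6f} {h / img_h:.6f}")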
Method 4: Manual labelling. If everything else fails, do the labelling yourself or hire a data-labelling service. Here's a list of tools you can use if you want to label the images yourself.

Related

How to make TensorFlow object detection work on data having the same shape but different color?

I have a dataset of 3 types of cars: CarA, CarB, CarC.
All cars have the same shape but a different color and/or logo.
I have trained an object detection model using the TensorFlow Object Detection API with SSD ResNet50 V1 FPN 640x640 (RetinaNet50) as the base model. Training was performed for 10,000 steps and stopped when the loss reached 0.15.
When I test the model, it classifies any car with that shape as all three of CarA, CarB, and CarC. Is the model unable to distinguish based on color/logo and only working from shape? Can this color aspect be handled better with any specific base model?

Different results in TfLite model vs model before quantization

I have taken an object detection model from the TF2 model zoo;
I took MobileNet and trained it on my own TFRecords.
I am using MobileNet because it often appears in examples of converting to TFLite, which is what I need because I run it on an RPi3.
I am following ideas from the official example in the SageMaker docs
and the GitHub repo you can find here.
What is interesting is that the accuracy after step 2) training and 3) deploying is pretty good: my trucks are detected nicely by the custom-trained model.
However, when the model is converted to TFLite, the accuracy drops, no matter whether I use the tflite_convert tool or the Python tf.lite.TFLiteConverter.
What is more, all detections land on the borders of the images, usually in the bottom-right corner. Maybe I am not preparing the images correctly, or am misreading the results?
You can check the images I uploaded:
https://ibb.co/fSzfZvz
https://ibb.co/0GF101s
What could possibly go wrong?
I was lacking proper preprocessing of the image.
Once I used the pipeline config to build the detection model, which has a preprocess function, I used that function to build the tensor before feeding it into the Interpreter.
# Imports from the TF Object Detection API (not shown in the original snippet).
from object_detection.utils import config_util
from object_detection.builders import model_builder

num_classes = 2
# pipeline_config is the path to the model's pipeline.config file.
configs = config_util.get_configs_from_pipeline_file(pipeline_config)
model_config = configs['model']
model_config.ssd.num_classes = num_classes
model_config.ssd.freeze_batchnorm = True
# The built model exposes the preprocess() function used below.
detection_model = model_builder.build(
    model_config=model_config, is_training=True)
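The remaining step described above, sketched roughly (the image and model paths are placeholders, and a float, non-quantized input is assumed), is to run the built model's preprocess() on the input before handing it to the TFLite Interpreter:

import tensorflow as tf

# Load and batch the test image (placeholder path).
image = tf.io.decode_jpeg(tf.io.read_file("test_truck.jpg"), channels=3)
image = tf.cast(image, tf.float32)[tf.newaxis, ...]

# preprocess() applies the same resizing/normalization the training pipeline used.
preprocessed, _ = detection_model.preprocess(image)

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder path
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
interpreter.set_tensor(input_details[0]["index"], preprocessed.numpy())
interpreter.invoke()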

Load data for Mask RCNN

I want to train Mask-RCNN on my own dataset. I already have the segmented images (ground truths) of leaves which look something like the image below:
How can I load the dataset for training Mask RCNN?
Since Mask RCNN is pre-trained on the COCO dataset, you need to fine-tune it with these images, and for that you have to label them. Since a mask is involved, use a tool such as the VGG annotator to do the necessary annotation and labelling; it will generate a JSON file for your classes. Later, based on your requirements, you run the .py files for your classes, train, and then generate the model for testing.
You will have to convert this to TFRecords for the Mask RCNN model to be able to read the images and their annotations. Please refer to this Medium article: https://medium.com/#vijendra1125/custom-mask-rcnn-using-tensorflow-object-detection-api-101149ce0765
You can use COCO annotations (here is an example) to annotate your dataset and then just run it the way you would run the COCO dataset.
You can also check this code: https://github.com/matterport/Mask_RCNN/blob/master/samples/shapes/train_shapes.ipynb
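For the matterport repo in particular, the linked notebook subclasses mrcnn.utils.Dataset; a minimal sketch for leaves, assuming a parsed VGG annotator JSON export with polygon regions (the dict structure and the "leaf" class name are assumptions):

import os
import numpy as np
import skimage.io
from skimage.draw import polygon
from mrcnn import utils

class LeafDataset(utils.Dataset):
    def load_leaves(self, image_dir, annotations):
        # annotations: dict parsed from the VGG annotator JSON export.
        self.add_class("leaves", 1, "leaf")
        for image_id, info in annotations.items():
            self.add_image("leaves", image_id=image_id,
                           path=os.path.join(image_dir, info["filename"]),
                           polygons=[r["shape_attributes"] for r in info["regions"]])

    def load_mask(self, image_id):
        info = self.image_info[image_id]
        height, width = skimage.io.imread(info["path"]).shape[:2]
        masks = np.zeros((height, width, len(info["polygons"])), dtype=bool)
        for i, poly in enumerate(info["polygons"]):
            rr, cc = polygon(poly["all_points_y"], poly["all_points_x"])
            masks[rr, cc, i] = True
        # Single "leaf" class, so every instance gets class id 1.
        return masks, np.ones(len(info["polygons"]), dtype=np.int32)

After load_leaves() you call prepare() on the dataset and pass it to model.train(), as the linked notebook does.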

Changing Inception-v4 architecture to do Multi-label classification in Tensorflow

I am working on an image tagging and annotation problem: an image may simply contain multiple objects. I want to train Inception-v4 for multi-label classification. My training data will be an image and a vector whose length equals the number of classes, with a 1 at each index whose object appears in the image. For example, if I have four classes (person, car, tree, buildings) and an image contains a person and a car, then my vector will be (1, 1, 0, 0).
What changes do I need to make to train inception-v4 for the tagging and annotation problem?
Do I only need to change the input format and change the loss function from softmax to sigmoid_cross_entropy_with_logits in the inception-v4 architecture?
https://github.com/tensorflow/models/blob/master/slim/nets/inception_v4.py
Thank you in advance.
If you'd like to retrain a model to output different labels, check out the image_retraining example: https://github.com/tensorflow/tensorflow/blob/r1.1/tensorflow/examples/image_retraining/retrain.py
In that example, we retrain the standard inception v3 model to recognize flowers instead of the standard ImageNet categories.
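On the loss part of the question: for multi-label tagging, the final softmax cross-entropy is indeed typically replaced by an element-wise sigmoid cross-entropy over the multi-hot target vector; a minimal sketch with placeholder values:

import tensorflow as tf

# Multi-hot labels, e.g. (person, car, tree, buildings) for an image with a person and a car.
labels = tf.constant([[1., 1., 0., 0.]])
logits = tf.constant([[2.3, 0.8, -1.2, -3.0]])   # placeholder network outputs

# One independent binary decision per class instead of a single softmax.
per_class = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
loss = tf.reduce_mean(per_class)

At inference time you would then threshold sigmoid(logits) per class (e.g. > 0.5) rather than take an argmax.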

How to connect the pretrained model's input to the output of tf.train.shuffle_batch?

In classify_image.py, the input image is fed in as loaded JPEG data via
predictions = sess.run(softmax_tensor,{'DecodeJpeg/contents:0': image_data})
What if I want to add new layers to the Inception model and train the whole model again? Are the variables loaded from classify_image_graph_def.pb trainable? I saw that freeze_graph.py used convert_variables_to_constants to produce the frozen graph. So can those loaded weights be trained again, or are they constants? And how can I connect the Inception model's input to the output of tf.train.shuffle_batch ('shuffle_batch:0')?
The model used in classify_image.py has its variables frozen into constants, and doesn't have any gradient ops, so it's not easy to turn it back into something trainable. You can see how we remove one layer and replace it with something trainable here:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/image_retraining/retrain.py
It's hard to generalize though. You'd be better off looking at some examples of fine-tuning here:
https://github.com/tensorflow/models/tree/master/inception#how-to-fine-tune-a-pre-trained-model-on-a-new-task
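If you only want to wire a new input (such as a shuffle_batch output) into the frozen graph without making it trainable, tf.import_graph_def accepts an input_map; a rough TF1-style sketch, where the remapped tensor name is an assumption you would verify by listing the graph's operations:

import tensorflow as tf

# Load the frozen graph; its weights were converted to constants and cannot be trained.
with tf.gfile.GFile("classify_image_graph_def.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Stand-in for the tf.train.shuffle_batch output: a decoded, resized image batch.
image_batch = tf.placeholder(tf.float32, [None, 299, 299, 3], name="shuffle_batch")

# Remap an internal image tensor to the batch; check the graph for the right node name.
tf.import_graph_def(graph_def,
                    input_map={"ResizeBilinear:0": image_batch},
                    name="inception")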