Training YOLO on a dataset containing "hidden" tableware objects

I need to train YOLO on a dataset containing partly visible tableware that is overlapped by other objects.
Would it be better to exclude such hard-to-detect examples from the training set?

Related

How to detect images that are not in a trained-for category when using ResNet50

I have trained ResNet50 on four categories of images. It works fantastically when I feed it an image in any one of the four categories -- I have essentially 100% accuracy on images in these categories.
However, when I feed my trained ResNet50 model an image of a similar object that is not in one of the original four categories, the prediction comes back as one of the four existing classes. By this I mean that in the returned array of per-category likelihoods, the likelihood of one of the categories is in many cases basically 1. For example, when I query the model about an image that is not in one of the four categories, the prediction array will look like
[1.3492944e-07 9.9999988e-01 8.3132584e-14 1.4716975e-24]
Here is the prediction array for an image that the model was trained on:
[1.8217645e-27 1.0000000e+00 3.6731971e-32 0.0000000e+00]
These scores are different, but not by much. Many of the images that are not in one of the trained-for categories get a 1.00000000 for one of the labels.
I had been planning to deal with the oddball images by checking whether the maximum of the prediction array was below some threshold. But most of my maxima are above .99999, so I can't differentiate between images in the training categories and images outside them.
I plan to train my model for N buckets. When I am running the system I will occasionally have images that are not in one of the N buckets and I need to know that. I don't care what they are, I just want to know when an image is not in one of the N buckets.
ResNet50 does a great job of forcing everything into one of the categories, even when it shouldn't.
My images are super well defined! I wonder if I am somehow overtraining or overlooking some other obvious error.
Here is an example of an image that was correctly categorized:
[image: in training set, correctly categorized]
Here is an image that is not part of the training set that was then categorized into one of the categories:
[image: not in training set, incorrectly categorized]
In summary: I am trying to sort images, and I need to know when an image is not part of the training categories so that I can reject it. Restated, I want to sort images into buckets: the known, trained-for buckets plus one unknown bucket.
Is there any way to do this?
Should I use a different classifier than ResNet50?
My images are grayscale, bicubic interpolated during resize (large to smaller), 150x150. I have about 1,600 training images and 200 validation images per category. My accuracy and val_accuracy are .9997 after 3 epochs.
[plots: training and validation accuracy; training and validation loss]
Your model only knows about 4 classes. It, or any other model (say, MobileNet), will always look at an image and assign probabilities to each of the 4 classes. You could put in a picture of a water buffalo and it will still try to classify it. Usually, but not always, if the out-of-class image you put in is very different from your training images, the class with the highest probability will have a probability value well below 1.0. However, in your case the out-of-class image is NOT all that different from the images in your dataset, hence a fairly high false probability prediction.
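For reference, a minimal sketch of the max-probability rejection rule the question describes (the threshold value is an arbitrary assumption); as the question shows, it breaks down when the model is this overconfident:

```python
import numpy as np

# Reject a prediction when the highest softmax probability is below a
# threshold; -1 marks "not one of the N buckets".
def predict_with_reject(model, batch, threshold=0.95):
    probs = model.predict(batch)      # shape (n_images, n_classes)
    labels = probs.argmax(axis=1)
    max_p = probs.max(axis=1)
    return np.where(max_p >= threshold, labels, -1)
```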
All I can think of is this: if your out-of-class images will be generically similar to each other, you could create a 5th class, gather some "typical" out-of-class images, and train your model on these 5 classes using the data you have plus the gathered images. I made a model that classified 50 different dog breeds. It was extremely accurate. I put in a picture of Donald Trump and he was predicted to be a chihuahua!
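A rough sketch of that 5th-class setup, assuming a Keras ResNet50 backbone and the 150x150 input size from the question (grayscale images would need to be stacked to 3 channels for ImageNet weights):

```python
import tensorflow as tf

NUM_KNOWN = 4  # the four trained-for buckets

# ResNet50 backbone with a (NUM_KNOWN + 1)-way head; class index 4 is the
# "unknown" bucket, trained on the gathered out-of-class images.
base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(150, 150, 3))
head = tf.keras.layers.Dense(NUM_KNOWN + 1, activation="softmax")(base.output)
model = tf.keras.Model(base.input, head)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```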

How to generate labels for custom data for YOLO

The labels for YOLO look like [class, x, y, width, height]. Since the dataset is very large, is there any shortcut to generate the labels for YOLO, or do we have to hand-craft them through measurement?
Method 1: Using pre-trained YOLOv4 models.
YOLOv4 models were pre-trained on the COCO dataset. So, if your object(s) can be found in this list, you can use the pre-trained weights to pseudo-label your object(s).
To process a list of images from data/new_train.txt and save the detection results in YOLO training format as a label file <image_name>.txt for each image, use:
darknet.exe detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -thresh 0.25 -dont_show -save_labels < data/new_train.txt
Method 2: Using other pre-trained models. It's the same concept: use other pre-trained models to detect your object (as long as they were trained on your object), then export/convert the labels to YOLO format (see the conversion sketch after Method 4).
Method 3: Use hand-crafted feature descriptors. Examples are shape detection, color-based detection, etc.
Method 4: Manual labelling. If everything else fails, do the labelling yourself or hire a data-labelling service. Here's a list of tools that you can use if you want to label the images yourself.
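For Method 2, a hedged sketch of the conversion step, assuming the other detector emits absolute pixel boxes as (x_min, y_min, x_max, y_max); YOLO expects a class index plus center coordinates and box size, all normalized by the image dimensions:

```python
# Convert one absolute pixel box into a YOLO-format label line.
def to_yolo_line(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    x_center = (x_min + x_max) / 2.0 / img_w
    y_center = (y_min + y_max) / 2.0 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# Example: a 100x40 box at (50, 80) in a 640x480 image, class 0
print(to_yolo_line(0, 50, 80, 150, 120, 640, 480))
# -> "0 0.156250 0.208333 0.156250 0.083333"
```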

Selectively applying example weights

I have a weighted dataset, and a model composed of two parts.
How can I train the model in such a way that the dataset weights apply only to its first part (while the second part is trained as if every example had the same weight)?
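For concreteness, a minimal sketch of one way to read this setup, with a hypothetical two-head Keras model and a custom training step; every name and shape here is an assumption:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(16,))
shared = tf.keras.layers.Dense(32, activation="relu")(inputs)
out_a = tf.keras.layers.Dense(1, name="part_a")(shared)  # weighted part
out_b = tf.keras.layers.Dense(1, name="part_b")(shared)  # unweighted part
model = tf.keras.Model(inputs, [out_a, out_b])

loss_fn = tf.keras.losses.MeanSquaredError(reduction="none")  # per-example losses
optimizer = tf.keras.optimizers.Adam()

@tf.function
def train_step(x, y_a, y_b, w):
    with tf.GradientTape() as tape:
        pred_a, pred_b = model(x, training=True)
        loss_a = tf.reduce_mean(w * loss_fn(y_a, pred_a))  # weights apply here only
        loss_b = tf.reduce_mean(loss_fn(y_b, pred_b))      # unweighted
        loss = loss_a + loss_b
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```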

How does Keras predict work with datasets

I am new to using tf.data datasets with Keras. Since you just hand over a single object, I don't understand what actually happens. If I hand a dataset to model.predict, how does it know which elements of this object to use, and how? A dataset can be a complex structure with many kinds of nesting and levels, so what happens if I pass a dataset that has more "columns" than the dataset the model was trained on? Are the structure, names or levels of the dataset somehow saved during training and remembered when making predictions?
If tf.keras.Model.fit() receives a tf.data.Dataset as input, it assumes that the dataset returns a tuple of either (inputs, targets) or (inputs, targets, sample_weights). Now, the inputs part itself may be a complex structure of sub-inputs (like a tuple of image and label for a conditional VAE, for instance).
If the dataset does not fit your model's inputs, fit() will just fail.
See the comment on the fit() function in the TF source code.
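A minimal sketch of that contract with toy shapes (all names and sizes here are assumptions): fit() consumes (inputs, targets) tuples, while predict() only needs the inputs:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")

# fit() expects (inputs, targets) tuples whose structure matches the model.
train_ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(8)
model.fit(train_ds, epochs=1)

# predict() only needs the inputs; extra "columns" the model has no input
# for would fail to map onto it.
pred_ds = tf.data.Dataset.from_tensor_slices(x).batch(8)
preds = model.predict(pred_ds)
```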

Can I pick some test and validation data from the training data after doing data augmentation?

I am training a UNET for semantic segmentation, but I only have 200 labeled images. Given the small size of the dataset, it definitely needs some data augmentation techniques.
I have a question about the test and validation sets.
I have a custom data generator which keeps feeding data from a folder while training the model.
So what I plan to do is:
do data augmentation for the training set and keep all of it in the same folder
"randomly" pick some of the training data into the test and validation sets (of course, before training).
I am not sure if this is fine, since we only do some simple processing (flipping, transposing, adjusting brightness).
Would it be better to separate the data first and do the augmentation only on the data remaining in the training folder?
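A minimal sketch of the split-first workflow the last question describes (the ratios and the augment_image helper are hypothetical): hold out test and validation images before augmenting, so no augmented copy of a held-out image can leak into training:

```python
import random

def augment_image(path):
    # Hypothetical placeholder: in practice, load the image at `path`,
    # apply a flip / transpose / brightness change, save the result,
    # and return the new file path.
    return path + ".aug.png"

def split_then_augment(image_paths, val_frac=0.1, test_frac=0.1, seed=42):
    rng = random.Random(seed)
    paths = list(image_paths)
    rng.shuffle(paths)
    n_val = int(len(paths) * val_frac)
    n_test = int(len(paths) * test_frac)
    val, test = paths[:n_val], paths[n_val:n_val + n_test]
    train = paths[n_val + n_test:]
    # Augment only the training split; val/test stay untouched.
    augmented = [augment_image(p) for p in train]
    return train + augmented, val, test
```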