Can YOLO train on immutable images? - object-detection

I have a lot of PNG images with alpha channels, with only one image in each class, such as:
Class 1
Class 2
Class 3
Class 4
I want to detect these individual images inside another image, like this one:
The question is whether YOLO can do this task after training on a custom dataset. If it's feasible, is it correct for me to composite them onto thousands of different background pictures and label those composites as the training set? If not, what can I do? Thanks.
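For reference, compositing the transparent PNGs onto many different backgrounds is a common way to synthesize such a training set. A minimal sketch with OpenCV and NumPy; the paths and class id are placeholders, and it assumes each cutout is smaller than the background:

```python
import random

import cv2
import numpy as np

def paste_object(background, cutout_bgra):
    """Alpha-composite a BGRA cutout onto a BGR background at a random
    position; return the composite and the normalized YOLO box."""
    bh, bw = background.shape[:2]
    oh, ow = cutout_bgra.shape[:2]
    x = random.randint(0, bw - ow)
    y = random.randint(0, bh - oh)
    alpha = cutout_bgra[:, :, 3:4].astype(np.float32) / 255.0
    roi = background[y:y + oh, x:x + ow].astype(np.float32)
    blended = alpha * cutout_bgra[:, :, :3] + (1.0 - alpha) * roi
    background[y:y + oh, x:x + ow] = blended.astype(np.uint8)
    # YOLO label format: class cx cy w h, all normalized to image size
    return background, ((x + ow / 2) / bw, (y + oh / 2) / bh, ow / bw, oh / bh)

background = cv2.imread("backgrounds/0001.jpg")                   # placeholder path
cutout = cv2.imread("classes/class1.png", cv2.IMREAD_UNCHANGED)   # keeps alpha
composite, (cx, cy, w, h) = paste_object(background, cutout)
cv2.imwrite("train/images/0001.jpg", composite)
with open("train/labels/0001.txt", "w") as f:
    f.write(f"0 {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}\n")             # class id 0
```

In practice you would also randomly scale, rotate, and color-jitter each cutout before pasting, so the detector doesn't memorize a single appearance.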

Related

TensorFlow find and mark multiple image boundaries

My example is that I have an image with 5 other images on it. What's the best way to have TensorFlow find/calculate the bounding boxes for each of those? It also needs to take into account that another input image might contain only 3 separate sub-images.
I've found that running cv2.Laplacian on the source image nicely outlines the 5 individual images, but I'm not sure how best to use TensorFlow to detect each of those bounding boxes.
UPDATE: My one issue is how to use TensorFlow to find each image's boundaries. Obviously I can find the 4 corners of the whole image, but that doesn't help me; it first needs to know how many images there are and then find each of their boundaries.
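Since cv2.Laplacian already outlines the sub-images, a classical OpenCV pipeline can count and box them before any TensorFlow is involved. A rough sketch; the filename, threshold, kernel size, and area cutoff are guesses to tune:

```python
import cv2
import numpy as np

img = cv2.imread("collage.jpg")                    # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.convertScaleAbs(cv2.Laplacian(gray, cv2.CV_16S, ksize=3))
_, mask = cv2.threshold(edges, 30, 255, cv2.THRESH_BINARY)
# Close small gaps so each outlined sub-image becomes one connected blob
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((15, 15), np.uint8))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Keep only contours large enough to be sub-images, not noise
boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 1000]
print(len(boxes), "sub-images found:", boxes)      # each box is (x, y, w, h)
```

The count of sub-images falls out of the contour count, so this handles 3 sub-images as naturally as 5.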

How to solve an object detection problem containing some static objects with no variation?

I am working on an object detection problem, using a region-based convolutional neural network to detect and segment objects in an image.
I have 8 objects to be segmented (Android app icons); 6 of them have variations in background, and the remaining 2 always appear on a white background (static).
I have already taken 200 variations of each object and trained Mask R-CNN. My model picks up the patterns very well on the 6 objects with variation, but on the remaining 2 objects it struggles even on the training set, even though it is an exact match.
Q. If I have n objects with variations and m objects with no variation (static), do I need to oversample them? Should I use any other technique in this case?
In the image, the icons in black bounding boxes are prone to change (based on the background and their position w.r.t. the background), but the icon in the green bounding box will not have any variations (always on a white background).
I have tried adding more images containing the static objects, but with no luck. Can anyone suggest how to approach such a problem? I don't want to use a sliding-window (static image processing) approach here.
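One way to test the oversampling idea from the question is to duplicate the training entries for the static icons until every class is seen equally often. A sketch over a hypothetical (image_path, class_id) list:

```python
import random
from collections import Counter

# Hypothetical training list; classes 6 and 7 are the static icons
samples = [("var.jpg", 0)] * 200 + [("s1.jpg", 6)] * 40 + [("s2.jpg", 7)] * 40

counts = Counter(cls for _, cls in samples)
target = max(counts.values())                      # match the largest class
balanced = list(samples)
for cls, n in counts.items():
    if n < target:
        minority = [s for s in samples if s[1] == cls]
        balanced += random.choices(minority, k=target - n)  # duplicate with replacement
random.shuffle(balanced)
print(Counter(cls for _, cls in balanced))         # every class now equally frequent
```

Since the static icons are pixel-identical, pairing the oversampling with light augmentation (small scale jitter, JPEG noise) may help more than raw duplication.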

Dimensions of images as input to LeNet-5

I am still a beginner in deep learning. I am wondering: is it necessary for the input images to have a size equal to 32*32 (or X*X)? The dimensions of my images are 457*143.
Thank you.
If you want to implement a LeNet and train it from scratch, you don't have to resize your images. However, if you want to do transfer learning, you should resize your images to match the image size of the dataset on which your neural net was already trained.
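To make this concrete: a LeNet-style network trained from scratch accepts the 457*143 images directly, because the flatten/dense layers simply adapt to whatever spatial size the conv stack produces. A minimal Keras sketch, assuming grayscale input with height 143 and width 457, and a placeholder count of 10 classes:

```python
import tensorflow as tf

# LeNet-5-style architecture; input shape is (height, width, channels)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(143, 457, 1)),        # your images, unresized
    tf.keras.layers.Conv2D(6, 5, activation="tanh"),
    tf.keras.layers.AveragePooling2D(2),
    tf.keras.layers.Conv2D(16, 5, activation="tanh"),
    tf.keras.layers.AveragePooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(120, activation="tanh"),
    tf.keras.layers.Dense(84, activation="tanh"),
    tf.keras.layers.Dense(10, activation="softmax"),   # 10 classes: placeholder
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```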

TensorFlow object detection API: how to add background class samples?

I am using the TensorFlow object detection API. I have two classes of interest. In the first trial I got reasonable results, but I found it was easy to get false positives of both classes on pure background images. These background images (i.e., images without any class bbox) had not been included in the training set.
How can I add them to the training set? Simply adding samples without a bbox does not seem to work.
Your goal is to add negative images to your training dataset to strengthen the background class (id 0 in the detection API). You can achieve this with the Pascal VOC XML annotation format: the XML file for a negative image contains only the image's height and width, with no object entries. (Normally the XML file also lists each object's name and bounding-box coordinates.) If you use labelImg, you can generate an XML file for your negative image with the Verify button. Roboflow can also generate XML files with and without objects.
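For illustration, a short Python sketch that writes such a negative Pascal VOC annotation; the file records only the image size, with no object entries (the paths are placeholders):

```python
import xml.etree.ElementTree as ET

import cv2

def write_negative_voc_xml(image_path, xml_path):
    """Write a Pascal VOC annotation with only the image size and
    no <object> entries, marking the image as pure background."""
    h, w, depth = cv2.imread(image_path).shape
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = image_path
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(w)
    ET.SubElement(size, "height").text = str(h)
    ET.SubElement(size, "depth").text = str(depth)
    ET.ElementTree(root).write(xml_path)

write_negative_voc_xml("backgrounds/0001.jpg", "backgrounds/0001.xml")
```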

TensorFlow: Collecting my own training data set & using that training dataset to find the location of an object

I'm trying to collect my own training data set for image detection (recognition, for now). Right now I have 4 classes and 750 images for each. Each image is just a regular picture of its class; however, some of the images are blurry or contain outside elements such as a different background or other factors (but nothing that obscures the object). Using that training data set, image recognition is really bad.
My question is,
1. Does the training image set need to contain the object in various backgrounds/settings/environments (I believe not...)?
2. Let's say training worked fairly accurately and I want to know the location of the object in the image. I figure there is no way to find the location using image recognition alone, so if I use bounding boxes, how/where in the code can I see the location of the bounding box?
Thank you in advance!
It is difficult to know in advance what features your program will learn for each class. Then again, if your unseen images will have the same background, the background will play no role. I would suggest data augmentation in training: random color distortion, random flipping, and random cropping.
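A sketch of those augmentations with tf.image, applied per image in a tf.data pipeline; the distortion strengths and the 224*224 crop size are illustrative, and RGB input is assumed:

```python
import tensorflow as tf

def augment(image, label):
    """Random color distortion, flip, and crop for one training image."""
    image = tf.image.convert_image_dtype(image, tf.float32)  # scale to [0, 1]
    image = tf.image.random_brightness(image, max_delta=0.2)
    image = tf.image.random_saturation(image, lower=0.8, upper=1.2)
    image = tf.image.random_flip_left_right(image)
    image = tf.image.resize(image, (250, 250))               # slight upsize...
    image = tf.image.random_crop(image, (224, 224, 3))       # ...then random crop
    return image, label

# dataset: a hypothetical tf.data.Dataset of (image, label) pairs
# dataset = dataset.map(augment).batch(32)
```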
You can't just read the bounding box location out of the code. You have to label/annotate the boxes yourself first in your collected data, using a tool such as LabelMe. Then comes training the object detector.