How to label satellite images for image segmentation? - tensorflow

I want to detect land mines in satellite images. Initially I built a model where each image had multiple labels and trained it to classify the images.
However, I now want to use an image segmentation technique as described here: https://towardsdatascience.com/dstl-satellite-imagery-contest-on-kaggle-2f3ef7b8ac40
I downloaded the required images from an AWS S3 bucket, and I want to label each pixel of the multispectral image I have generated from the band files.
However, I am having difficulty with how to do the labelling.
Are there any open-source (or other) tools for this?
EDIT: The images are 12-band multispectral satellite images.

You can use AWS SageMaker Ground Truth to create a labelling job for the images you require. AWS has also released an auto-segment feature for semantic segmentation labelling, which might help: https://aws.amazon.com/about-aws/whats-new/2019/12/amazon-sagemaker-ground-truth-adds-auto-segment-feature-for-semantic-segmentation-labeling/
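If the labelling job (or any other annotation tool) exports polygon annotations rather than a raster mask, you still need to turn them into a per-pixel mask aligned with your 12-band stack before training a segmentation model. Below is a minimal sketch using rasterio and geopandas; the file names (bands_stack.tif, mines.geojson) and the single "mine" class are assumptions for illustration, not part of any particular export format.

    # Minimal sketch: rasterize polygon annotations (GeoJSON) into a per-pixel
    # mask aligned with a multispectral GeoTIFF. File names and the single
    # "mine" class are placeholders.
    import geopandas as gpd
    import rasterio
    from rasterio.features import rasterize

    with rasterio.open("bands_stack.tif") as src:        # assumed 12-band stack
        out_shape = (src.height, src.width)
        transform = src.transform
        crs = src.crs
        profile = src.profile

    polygons = gpd.read_file("mines.geojson").to_crs(crs)  # assumed annotation file

    # Burn value 1 where a polygon covers a pixel, 0 elsewhere (background).
    mask = rasterize(
        [(geom, 1) for geom in polygons.geometry],
        out_shape=out_shape,
        transform=transform,
        fill=0,
        dtype="uint8",
    )

    # Save the mask as a single-band GeoTIFF next to the image stack.
    profile.update(count=1, dtype="uint8")
    with rasterio.open("mines_mask.tif", "w", **profile) as dst:
        dst.write(mask, 1)

The resulting mask has the same height, width, and geotransform as the image stack, so each mask pixel lines up with the corresponding multispectral pixel.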

Related

TensorFlow find and mark multiple image boundaries

My example is that I have an image with 5 other images on it. What's the best way to have TensorFlow find/calculate the bounding boxes for each of those? I need to take into account that other images might contain only 3 sub-images.
I've found that if I run cv2.Laplacian on the source image it nicely outlines the 5 individual images, but I'm not sure how best to use TensorFlow to detect each of those bounding boxes.
UPDATE: My ONE issue is how do I use TensorFlow to find each image's boundaries? Obviously I can find the 4 corners of the whole image, but that doesn't help me - I need it to first know how many images there are and then find each of their boundaries.
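For this part you may not need TensorFlow at all: once the Laplacian (or any edge/threshold step) gives clean outlines, classical OpenCV contour detection can count the sub-images and return a box for each. A rough sketch; the file name and the area threshold are placeholders you would tune for your images:

    # Rough sketch: find bounding boxes of sub-images with OpenCV contours
    # instead of TensorFlow. File name and area threshold are placeholders.
    import cv2

    img = cv2.imread("collage.png")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Outline image; cv2.Laplacian as mentioned in the question also works.
    edges = cv2.Canny(gray, 50, 150)
    edges = cv2.dilate(edges, None, iterations=2)   # close small gaps in outlines

    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 1000:        # skip tiny noise blobs; tune for your images
            boxes.append((x, y, w, h))

    print(f"found {len(boxes)} sub-images: {boxes}")

A trained detector (e.g. the TensorFlow Object Detection API) is only worth the labelling effort if the sub-images cannot be separated by simple edges and contours like this.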

Train model with the same image in different orientations

Is it a good idea to train the model with the same images, but in different orientations? I have a small set of images for training, which is why I'm trying to cover all the mobile camera/gallery user scenarios.
For example, the image example.png with 3 copies - example90.png, example180.png and example270.png - each with a different rotation, and also with different background colours, shadows, etc.
By the way, my task is to identify the type of animal.
Is that a good idea?
If you use Core ML with the Vision framework (and you probably should), Vision will automatically rotate the image so that "up" is really up. In that case it doesn't matter how the user held their camera when they took the picture (assuming the picture still has the EXIF data that describes its orientation).
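If you do want to cover rotations yourself (for example when training a TensorFlow model rather than relying on Vision's EXIF handling), this is normally done as on-the-fly data augmentation instead of saving rotated copies to disk. A hedged sketch using Keras preprocessing layers; the input size and the 4 animal classes are assumptions for illustration:

    # Hedged sketch: random rotations/flips applied on the fly with Keras
    # preprocessing layers, instead of storing rotated copies on disk.
    import tensorflow as tf

    augment = tf.keras.Sequential([
        tf.keras.layers.RandomFlip("horizontal"),
        tf.keras.layers.RandomRotation(0.25),   # up to +/- 90 degrees (0.25 of a full turn)
        tf.keras.layers.RandomContrast(0.2),
    ])

    model = tf.keras.Sequential([
        augment,                                 # active only during training
        tf.keras.layers.Rescaling(1.0 / 255),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(4, activation="softmax"),  # e.g. 4 animal classes
    ])
    model.build(input_shape=(None, 224, 224, 3))         # assumed input size
    model.summary()

Because the augmentation layers draw a new random rotation each epoch, the effective variety is larger than the three fixed rotated copies you would get by duplicating files.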

TensorFlow Object Detection API: how to add background class samples?

I am using the TensorFlow Object Detection API. I have two classes of interest. In the first trial I got reasonable results, but I found it was easy to get false positives for both classes on pure background images. These background images (i.e., images without any class bounding boxes) were not included in the training set.
How can I add them to the training set? It does not seem to work if I simply add samples without bounding boxes.
Your goal is to add negative images to your training dataset to strengthen the background class (id 0 in the Detection API). You can achieve this with the Pascal VOC XML annotation format. For a negative image, the XML file contains only the image's height and width and no objects; normally it would also contain the name and box coordinates of each labelled object. If you use labelImg, you can generate an XML file for a negative image with the verify button. Roboflow can also generate XML files with and without objects.
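For reference, a negative annotation is simply a VOC XML file with the image size but no object elements. A minimal sketch that writes one with the Python standard library; the file name and dimensions are placeholders:

    # Minimal sketch: write a Pascal VOC XML annotation for a negative
    # (background-only) image, i.e. size information but no <object> elements.
    # File name and dimensions are placeholders.
    import xml.etree.ElementTree as ET

    def write_negative_annotation(image_name, width, height, out_path, depth=3):
        root = ET.Element("annotation")
        ET.SubElement(root, "folder").text = "images"
        ET.SubElement(root, "filename").text = image_name
        size = ET.SubElement(root, "size")
        ET.SubElement(size, "width").text = str(width)
        ET.SubElement(size, "height").text = str(height)
        ET.SubElement(size, "depth").text = str(depth)
        ET.SubElement(root, "segmented").text = "0"
        # Deliberately no <object> elements: this marks a pure background image.
        ET.ElementTree(root).write(out_path)

    write_negative_annotation("background_001.jpg", 640, 480, "background_001.xml")

When these XML files are converted to TFRecords along with the rest of the dataset, the background images contribute examples with zero ground-truth boxes, which is exactly what discourages false positives on empty scenes.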

TensorFlow: Collecting my own training dataset & using that dataset to find the location of an object

I'm trying to collect my own training dataset for image detection (recognition, for now). Right now I have 4 classes and 750 images for each. The images are just regular images of each class; however, some of them are blurry or contain extraneous objects, such as different backgrounds or other factors (but nothing distinguishable). Using that training dataset, image recognition is really bad.
My question is,
1. Does the training image set need to contain the object in various backgrounds/settings/environments (I believe not...)?
2. Let's say training worked fairly accurately and I want to know the location of the object in the image. I figure there is no way to find the location using image recognition alone, so if I use bounding boxes, how/where in the code can I see the location of a bounding box?
Thank you in advance!
It is difficult to know in advance which features your program will learn for each class. Then again, if your unseen images will have the same background, the background will play no role. I would suggest data augmentation during training: random colour distortion, random flipping, random cropping (a sketch follows below).
You can't see in the code where the bounding box is. You first have to label/annotate the boxes yourself in your collected data, using a tool such as LabelMe, for example. Then comes training an object detector.
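A hedged sketch of the suggested augmentations with tf.image inside a tf.data pipeline; the image size, crop size, and the dummy dataset are placeholders for illustration:

    # Hedged sketch: random colour distortion, flipping and cropping with
    # tf.image inside a tf.data pipeline. Shapes and the dataset are placeholders.
    import tensorflow as tf

    def augment(image, label):
        image = tf.image.random_flip_left_right(image)
        image = tf.image.random_brightness(image, max_delta=0.2)
        image = tf.image.random_saturation(image, lower=0.8, upper=1.2)
        image = tf.image.random_crop(image, size=[200, 200, 3])  # assumes inputs >= 200x200
        return image, label

    # Example usage with a dummy dataset of 224x224 RGB images and 4 classes.
    images = tf.random.uniform([8, 224, 224, 3])
    labels = tf.random.uniform([8], maxval=4, dtype=tf.int32)
    ds = (tf.data.Dataset.from_tensor_slices((images, labels))
            .map(augment, num_parallel_calls=tf.data.AUTOTUNE)
            .batch(4))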

If I resize images using the TensorFlow Object Detection API, are the bboxes automatically resized too?

TensorFlow's Object Detection API has an option in the .config file to add a keep_aspect_ratio_resizer. If I resize my training data using this, will the corresponding bounding boxes be resized as well? If they don't match up, then the network is seeing incorrect examples.
Yes, the boxes will be resized to stay consistent with the images. The API stores ground-truth boxes in normalized coordinates (fractions of the image width and height), and its preprocessing keeps them aligned with whatever resizing the config specifies.
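Only if you ever resize images yourself, outside the API, while keeping boxes in pixel coordinates do you need to rescale them explicitly. A small sketch with placeholder values:

    # Sketch: rescaling a pixel-coordinate box when resizing an image manually
    # (outside the Object Detection API). Values are placeholders.
    def rescale_box(box, old_size, new_size):
        """box = (xmin, ymin, xmax, ymax) in pixels; sizes are (width, height)."""
        sx = new_size[0] / old_size[0]
        sy = new_size[1] / old_size[1]
        xmin, ymin, xmax, ymax = box
        return (xmin * sx, ymin * sy, xmax * sx, ymax * sy)

    print(rescale_box((100, 50, 300, 200), old_size=(1024, 768), new_size=(512, 384)))
    # -> (50.0, 25.0, 150.0, 100.0)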