Unable to get readable class labels for Inception-ResNet-v2 model - tensorflow

I am using the pretrained Inception-ResNet-v2 model to classify images. I need human-readable class labels for this. I found a list at the following site: https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a.
However, when I try to validate these labels against the images, I find they don't map to the correct classes. For instance, when I classify a "Panda" image, the top label is "barracouta, snoek" with score 0.927924, while "giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca" only gets score 0.001053.
Please point me to a source with the correct mapping of class labels to human-readable text for this model.

The human-readable labels at https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a do work, but you must initialize the class label list with an "unused background" entry before loading them, because the Inception-ResNet-v2 model is trained on 1001 classes while that list contains only 1000.
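A minimal sketch of that offset in practice (assuming the gist has been saved locally as a file with one label per line; the file name and loading code are just for illustration):

# Sketch: prepend a background entry so label indices line up with the
# 1001-class Inception-ResNet-v2 output (index 0 = background).
labels = ['unused background']
with open('imagenet_labels.txt') as f:   # hypothetical local copy of the gist
    for line in f:
        labels.append(line.strip())

# Now an output index i in 0..1000 maps directly to labels[i], e.g.:
# top_class = int(np.argmax(predictions))
# print(labels[top_class])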

Related

DeepLabV3, segmentation and classification/detection on coral

I am trying to use DeepLabV3 for image segmentation and object detection/classification on Coral.
I was able to successfully run the semantic_segmentation.py example using DeepLabV3 on the Coral, but that only shows an image with an object segmented.
I see that it assigns labels to colors - how do I associate the labels.txt file that I made, based on the label info of the model, with these colors? (How do I know which color corresponds to which label?)
When I try to run the
engine = DetectionEngine(args.model)
using the deeplab model, I get the error
ValueError: Dectection model should have 4 output tensors! This model has 1.
I guess this way is the wrong approach?
Thanks!
I believe you have reached out to us regarding the same query. I just wanted to paste the answer here for others to reference:
"The detection model usually have 4 output tensors to specifies the locations, classes, scores, and number and detections. You can read more about it here. In contrary, the segmentation model only have a single output tensor, so if you treat it the same way, you'll most likely segfault trying to access the wrong memory region. If you want to do all three tasks on the same image, my suggestion is to create 3 different engines and feed the image into each. The only problem with this is that each time you switch the model, there will likely be data transfer bottleneck for the model to get loaded onto the TPU. We have here an example on how you can run 2 models on a single TPU, you should be able to modify it to take 3 models."
On the last note, I just saw that you added:
how do i associate the labels.txt file that I made based off of the label info of the model to these colors
I just don't think this is something you can do for a segmentation model, but maybe I'm just confused about your query?
Take an object detection model, for example: there are 4 output tensors, and the second tensor gives you an array of ids, each associated with a certain class, that you can map to a label file. Segmentation models only give you the pixels surrounding an object.
[EDIT]
Apologies, looks like I'm the one confused about segmentation models.
Quote from my colleague :)
"You are interested to know the name of the label, you can find the corresponding integer to that label from result array in Semantic_segmentation.py. Where result is classification data of each pixel.
For example;
if you print result array in the with bird.jpg as input you would find few pixel's value as 3 which is corresponding 4th label in pascal_voc_segmentation_labels.txt (as indexing starts at 0 )."
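To make that mapping concrete, here is a minimal sketch (assuming result is a 2D array of per-pixel class ids as produced by the example, and that pascal_voc_segmentation_labels.txt contains one label per line; the variable and file names follow the example above):

import numpy as np

# Load the label names, one per line; the index in this list == class id in `result`.
with open('pascal_voc_segmentation_labels.txt') as f:
    voc_labels = [line.strip() for line in f]

# `result` is the per-pixel class-id array from semantic_segmentation.py.
# Find which classes appear in the segmented image and print their names.
for class_id in np.unique(result):
    print(class_id, voc_labels[class_id])   # e.g. 3 -> "bird"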

How to make up an FSNS dataset with my own images for the attention OCR tensorflow model

I want to apply attention-ocr to detect all digits on the number plates of cars.
I've read the README.md of attention_ocr on GitHub (https://github.com/tensorflow/models/tree/master/research/attention_ocr), and also the StackOverflow answer describing how to use my own image data to train the model (https://stackoverflow.com/a/44461910/743658).
However, I couldn't find any information on how to store the annotation or label of a picture, or on the format required.
For an object detection model, I was able to make my dataset with LabelImg, convert it into a csv file, and finally make a .tfrecord file.
I want to make a .tfrecord file in the FSNS dataset format.
Can you give me advice on how to proceed with these training steps?
Please reread the mentioned answer; it has a section explaining how to store the annotation. It is stored in the three features image/text, image/class and image/unpadded_class. The image/text field is used for visualization; some models support unpadded sequences and use image/unpadded_class, while the default version relies on the text padded with null characters to have the same length, stored in the feature image/class. Here is the excerpt that stores the text annotation:
char_ids_padded, char_ids_unpadded = encode_utf8_string(
    text, charset, length, null_char_id)
example = tf.train.Example(features=tf.train.Features(
    feature={
        'image/class': _int64_feature(char_ids_padded),
        'image/unpadded_class': _int64_feature(char_ids_unpadded),
        'image/text': _bytes_feature(text),
        ...
    }
))
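For completeness, the helper functions used above are the usual thin wrappers around tf.train.Feature; a sketch of how they are commonly defined (the names match the excerpt, but treat this as an assumption rather than the exact code from the repository):

import tensorflow as tf

def _int64_feature(values):
    # Wrap a list of ints (e.g. character ids) as an Int64List feature.
    return tf.train.Feature(int64_list=tf.train.Int64List(value=values))

def _bytes_feature(value):
    # Wrap a single bytes value (e.g. the UTF-8 encoded text) as a BytesList feature.
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))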
If you have worked with tensorflow object detection, then the approach should be much easier for you.
You can create the annotation file (in .csv format) using labelImg or any other annotation tool.
However, before converting it into tensorflow format (.tfrecord), you should keep the annotation format in mind (the FSNS format in this case).
The format is: files text xmin ymin xmax ymax
So while annotating, don't bother much about the class (as you would have done in object detection!). Some random name should suffice.
Convert it into .tfrecords.
And finally, the labelMap is the list of characters which you have annotated.
Hope it helps!

Label file in tensorflow object detection training

I want to create my own .tfrecord files using the tensorflow object detection API and use them for training. The record will be a subset of the original dataset, so the model will detect only specific categories.
The thing I don't understand, and can't find any information about, is how ids are assigned to labels in label_map.pbtxt during training.
What I do...
Step 1:
assign label_id during creation of the tfrecord file, where I put my own ids:
'image/object/class/label': dataset_util.int64_list_feature(category_ids)
'image/object/class/text': dataset_util.bytes_list_feature(category_names)
Step 2:
create labels file with e.g. two categories:
item { name: "apple" id: 53 display_name: "apple" }
item { name: "broccoli" id: 56 display_name: "broccoli" }
Step 3:
Train the model
After training, some objects are detected, but with an N/A label. When I set the ids starting from 1, the correct labels are shown.
My questions are:
Why did it not map correctly to the label with a custom id?
Can the second id have a value other than 2? I'm sure I saw skipped ids in the labels file for the COCO dataset.
How can I set the id to a custom value, if possible?
Thanks
I had the same problem with my label map. After Googling a bit, I found your question here and also this excerpt from the TensorFlow Object Detection repository:
Each dataset is required to have a label map associated with it. This label map defines a mapping from string class names to integer class Ids. The label map should be a StringIntLabelMap text protobuf. Sample label maps can be found in object_detection/data. Label maps should always start from id 1.
I also checked the source code for label_map_util.py and found this comment:
We only allow class into the list if its id-label_id_offset is between 0 (inclusive) and max_num_classes (exclusive). If there are several items mapping to the same id in the label map, we will only keep the first one in the categories list.
So in your example, which only has two classes, valid ID's are 1 and 2. Any higher value will be ignored.
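Under that constraint, a working version of the label map from the question simply renumbers the ids starting at 1; a minimal sketch (the category_ids written into the tfrecord then have to use these same values):

item {
  name: "apple"
  id: 1
  display_name: "apple"
}
item {
  name: "broccoli"
  id: 2
  display_name: "broccoli"
}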

Background images in one class object detection

When training a single-class object detector in Tensorflow, I am trying to pass instances of images where no signal object exists, so that the model doesn't learn that every image contains at least one instance of that class. E.g. if my signal were cats, I'd want to pass pictures of other animals/landscapes as background - this could also reduce false positives.
I can see that a class id is reserved in the object detection API (0) for background, but I am unsure how to code this into the TFRecords for my background images - the class could be 0, but what would the bounding box coords be? Or do I need a simpler classifier on top of this model to detect whether there is a signal in the image or not, prior to detecting position?
The latter approach of a simple classifier makes sense. I don't think there is a way to do the first part. Apart from checking whether the object is present, you can also put a check on the confidence score.
It is good practice to include images with no objects of interest in the dataset. For this you use the same tools (like labelImg) that you used for adding the boxes; an image with no bounding box will have an xml file with no BB details, only details of the image itself. The create-tf-record script will then create the tf record from the xml files. Look at the links below for more information:
Create tf record example -
https://github.com/tensorflow/models/blob/master/research/object_detection/dataset_tools/create_pet_tf_record.py
Using your own dataset-
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/using_your_own_dataset.md
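If you build the tf.train.Example yourself rather than going through the xml route, a background image is simply an example whose box and class lists are empty; a hedged sketch (the field names follow the object detection API's standard tfrecord layout, the image-loading details are just for illustration):

import tensorflow as tf
from object_detection.utils import dataset_util

def background_example(encoded_jpg, filename, height, width):
    # A "background" example: same feature keys as a normal example,
    # but all box/class lists are left empty.
    return tf.train.Example(features=tf.train.Features(feature={
        'image/height': dataset_util.int64_feature(height),
        'image/width': dataset_util.int64_feature(width),
        'image/filename': dataset_util.bytes_feature(filename.encode('utf8')),
        'image/source_id': dataset_util.bytes_feature(filename.encode('utf8')),
        'image/encoded': dataset_util.bytes_feature(encoded_jpg),
        'image/format': dataset_util.bytes_feature(b'jpeg'),
        'image/object/bbox/xmin': dataset_util.float_list_feature([]),
        'image/object/bbox/xmax': dataset_util.float_list_feature([]),
        'image/object/bbox/ymin': dataset_util.float_list_feature([]),
        'image/object/bbox/ymax': dataset_util.float_list_feature([]),
        'image/object/class/text': dataset_util.bytes_list_feature([]),
        'image/object/class/label': dataset_util.int64_list_feature([]),
    }))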

YOLO object detection model?

Currently, I am reading the YOLO9000 paper (https://arxiv.org/pdf/1612.08242.pdf) and I am very confused about how the model can predict the bounding boxes for object detection. I did many examples with Tensorflow, and in most of them we give the model "images and labels of images".
My questions are:
1- How can we pass the bounding boxes instead of labels to the model?
2- How can the model learn that many boxes belong to one image?
In YOLO, we divide the image into a 7x7 grid. For each grid location, the network predicts three things -
Probability of an object being present in that grid cell
If an object lies in this grid cell, what are the coordinates of the bounding box?
If an object lies in this grid cell, which class does it belong to?
If we apply regression for all the above variables for all 49 grid locations, we will be able to tell which grid locations have objects (using the first parameter). For the grid locations that have objects, we can tell the bounding box coordinates and the correct class using the second and third parameters.
Once we have designed a network that can output all the information we need, prepare the training data in this format, i.e. find these parameters for every 7x7 grid location in every image in your dataset. Then you simply train the deep neural network to regress for these parameters.
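A minimal sketch of what such a per-image training target might look like (an illustrative encoding, not the exact layout of any particular YOLO implementation; it assumes one box per cell, C classes, and box coordinates already normalized to the image size):

import numpy as np

def make_target(boxes, grid=7, num_classes=20):
    # boxes: list of (class_id, x_center, y_center, w, h), all in [0, 1].
    # Target layout per cell: [objectness, x, y, w, h, one-hot class scores].
    target = np.zeros((grid, grid, 5 + num_classes), dtype=np.float32)
    for class_id, x, y, w, h in boxes:
        col = min(int(x * grid), grid - 1)    # which cell the box center falls into
        row = min(int(y * grid), grid - 1)
        target[row, col, 0] = 1.0             # an object is present in this cell
        target[row, col, 1:5] = [x, y, w, h]  # box parameters to regress
        target[row, col, 5 + class_id] = 1.0  # class label
    return target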
To pass bounding boxes for an image, we need to create them first. You can create bounding boxes for any image using specific tools. Here, you have to draw a boundary that encloses an object and then label that bounding box/rectangle. You have to do this for every object in the image that you want your model to train on/recognize.
There is one very useful project in this link; you should check it out if you want to understand bounding boxes.
I have just started learning object detection with tensorflow, so as and when I get proper info on providing bounding boxes to the object detection model, I'll also update it here. Also, if you have solved this problem by now, you could provide the details to help out others facing the same kind of problem.
1- How can we pass the bounding boxes instead of labels to the model?
If we want to train a model that performs object detection (not object classification), we have to pass the ground-truth labels as .xml files, for example. An xml file contains information about the objects that exist in an image. The information for each object is composed of 5 values:
class name of this object, such as car or human...
xmin: x coordinate of the box's top left point
ymin: y coordinate of the box's top left point
xmax: x coordinate of the box's bottom right point
ymax: y coordinate of the box's bottom right point
One bounding box within an image is specified as a set of 5 values like the above. If there are 3 objects in an image, the xml file will contain 3 such sets of values.
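For illustration, a single-object annotation in the common Pascal VOC xml layout might look roughly like this (a sketch of the format, not the output of a specific tool; the file and class names are made up):

<annotation>
  <filename>car_0001.jpg</filename>
  <size><width>640</width><height>480</height><depth>3</depth></size>
  <object>
    <name>car</name>                <!-- class name -->
    <bndbox>
      <xmin>48</xmin>               <!-- top-left x -->
      <ymin>120</ymin>              <!-- top-left y -->
      <xmax>310</xmax>              <!-- bottom-right x -->
      <ymax>285</ymax>              <!-- bottom-right y -->
    </bndbox>
  </object>
</annotation>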
2- How can the model learn that many boxes belong to one image?
As you know, the output of YOLOv2 or YOLO9000 has shape (13, 13, D), where D depends on how many classes of objects you're going to detect. You can see that there are 13x13 = 169 cells (grid cells) and each cell has D values (depth).
Among the 169 grid cells, some are responsible for predicting bounding boxes. If the center of a true bounding box falls in a grid cell, that grid cell is responsible for predicting that bounding box when it is given the same image.
I think there must be a function that reads the xml annotation files and determines which grid cells are responsible for detecting which bounding boxes.
To make the model learn the box positions and shapes, not only the classes, we have to build an appropriate loss function. The loss function used in YOLOv2 also puts a cost on the box shapes and positions, so the loss is calculated as the weighted sum of the following individual loss values (a rough sketch follows the list):
Loss on the class name
Loss on the box position (x-y coordinates)
Loss on the box shape (box width and height)
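A rough sketch of such a weighted sum over the target layout described earlier (illustrative only; real YOLOv2 losses also handle anchor boxes, square-root width/height scaling, and no-object weighting differently):

import numpy as np

def yolo_like_loss(pred, target, w_coord=5.0, w_class=1.0, w_obj=1.0):
    # pred/target shape: (grid, grid, 5 + num_classes), per-cell layout
    # [objectness, x, y, w, h, class scores].
    obj_mask = target[..., 0]   # 1 where a box center falls in the cell, else 0
    loss_obj = w_obj * np.sum((pred[..., 0] - target[..., 0]) ** 2)
    loss_pos = w_coord * np.sum(obj_mask[..., None] * (pred[..., 1:3] - target[..., 1:3]) ** 2)
    loss_shape = w_coord * np.sum(obj_mask[..., None] * (pred[..., 3:5] - target[..., 3:5]) ** 2)
    loss_cls = w_class * np.sum(obj_mask[..., None] * (pred[..., 5:] - target[..., 5:]) ** 2)
    return loss_obj + loss_pos + loss_shape + loss_cls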
SIDE NOTE:
Actually, one grid cell can detect up to B boxes, where B depends on the implementation of YOLOv2. I used darkflow to train YOLOv2 on my custom training data, in which B was 5. So the model can detect 169*B boxes in total, and the loss is the sum of 169*B small losses.
D = B*(5+C), where C is the number of classes you want to detect.
Before being passed to the model, the box shapes and positions are converted into values relative to the image size.
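As a quick worked example of that formula (assuming the darkflow setting mentioned above with B = 5 and the 20 Pascal VOC classes, C = 20):

B, C = 5, 20        # boxes per cell, number of classes
D = B * (5 + C)     # 5 = (x, y, w, h, objectness) per box
print(D)            # 125 -> output tensor shape (13, 13, 125)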