Why does COCO evaluate AP and AR by size? What do AR max=1, 10, 100 mean? - object-detection

I'm reading about the COCO metrics right now, and I have two questions about them.
These are the COCO metrics.
I'm wondering why COCO evaluates AP and AR by size. What effect does object size have?
They also measure AR at max = 1, 10, and 100, and they say AR max=1 means "AR given 1 detection per image". If the model detects multiple objects per image, how is AR calculated then? I can't understand the meaning of 'max'.

Small objects are hard for deep-learning detectors to find, so a detector with a higher AP for small objects (AP-small) is stronger at detecting small objects.
AR@1 is calculated on a per-class basis; see these parts of the implementation for reference:
https://github.com/rafaelpadilla/review_object_detection_metrics/blob/e4acc7ecab9a1e34110bad59e0c1e3d84ac7a127/src/evaluators/coco_evaluator.py#L306
https://github.com/rafaelpadilla/review_object_detection_metrics/blob/e4acc7ecab9a1e34110bad59e0c1e3d84ac7a127/src/evaluators/coco_evaluator.py#L85
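To make the meaning of 'max' concrete: AR@1, AR@10 and AR@100 are the recall obtained when only the top 1, 10 or 100 highest-scoring detections per image are kept (COCO's maxDets parameter), so if the model outputs many detections, only the most confident ones count. Below is a rough sketch of that idea, not the actual pycocotools or review_object_detection_metrics code; the function and variable names are illustrative, and the real COCO AR additionally averages over the IoU thresholds 0.5:0.95 and over classes.

import numpy as np

def iou(a, b):
    # boxes given as [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def recall_at_k(dets_per_image, gts_per_image, k, iou_thr=0.5):
    # Recall when at most the k highest-scoring detections per image are considered.
    total_gt, total_tp = 0, 0
    for dets, gts in zip(dets_per_image, gts_per_image):
        # dets: list of (score, box); gts: list of boxes
        kept = sorted(dets, key=lambda d: d[0], reverse=True)[:k]   # the "max" cut-off
        matched = [False] * len(gts)
        for _, box in kept:
            ious = [iou(box, g) for g in gts]
            if ious:
                best = int(np.argmax(ious))
                if ious[best] >= iou_thr and not matched[best]:
                    matched[best] = True
        total_gt += len(gts)
        total_tp += sum(matched)
    return total_tp / max(total_gt, 1)

# AR@1, AR@10 and AR@100 differ only in the value of k passed to recall_at_k.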

Related

How to interpret a confusion matrix in YOLOv5 for a single class?

I used the YOLOv5 network for object detection on my dataset, which has only one class, but I do not understand the confusion matrix. Why is FP one and TN zero?
You can take a look at this issue. In short, a confusion matrix isn't the best metric for object detection because everything depends on the confidence threshold. Even for single-class detection, try to use mean average precision as the main metric.
We usually use a confusion matrix for classification tasks. FP (false positive) means the model predicted YES while the actual answer was NO, FN (false negative) means the model predicted NO while the actual answer was YES, and TN (true negative) means the model predicted NO and the actual answer was NO. In object detection, mAP (mean average precision) is normally used instead. Also, you should upload a picture of the confusion matrix; only then will the community be able to guide you on why your FP is one and your TN is zero.
The matrix indicates that 100% of the background FPs are caused by your single class, i.e. detections that do not match any ground-truth label; that is why it shows 1. Background FN is for the ground-truth objects that the model could not detect, which is why that cell shows empty or null.
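For intuition, here is a rough sketch (not YOLOv5's actual code) of how such counts arise for a single class: detections that match no ground truth at the chosen IoU threshold become background FPs, and ground-truth boxes matched by no detection become background FNs; true negatives (background correctly left undetected) are not counted in detection, which is why that cell stays empty. All names and thresholds below are illustrative.

def iou(a, b):
    # boxes given as [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def single_class_counts(detections, ground_truths, iou_thr=0.45, conf_thr=0.25):
    # Return (TP, background FP, background FN) for one image and one class.
    # detections: list of {"score": float, "box": [x1, y1, x2, y2]}
    dets = [d for d in detections if d["score"] >= conf_thr]
    matched_gt = set()
    tp = fp = 0
    for d in sorted(dets, key=lambda x: x["score"], reverse=True):
        best_iou, best_idx = 0.0, None
        for i, g in enumerate(ground_truths):
            if i in matched_gt:
                continue
            overlap = iou(d["box"], g)
            if overlap > best_iou:
                best_iou, best_idx = overlap, i
        if best_idx is not None and best_iou >= iou_thr:
            tp += 1
            matched_gt.add(best_idx)
        else:
            fp += 1                                  # no matching ground truth -> background FP
    fn = len(ground_truths) - len(matched_gt)        # missed ground truth -> background FN
    return tp, fp, fn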

Why is the explained variance ratio so low in a binary classification problem?

My input data $X$ has 100 samples and 1724 features. I want to classify my data into two classes, but the explained variance ratio values are very low, around 0.05 or 0.04, no matter how many principal components I keep when performing PCA. I have seen in the literature that the explained variance ratio is usually close to 1, and I'm unable to understand my mistake. I tried performing PCA with fewer features and more samples, but it did not have any significant effect on my results.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

pca = PCA(n_components=10).fit(X)                   # fit PCA with 10 components
Xtrain = pca.transform(X)                           # project the data
explained_ratio = pca.explained_variance_ratio_     # variance fraction per component
EX = explained_ratio
fig, ax1 = plt.subplots(ncols=1, nrows=1)
ax1.plot(np.arange(len(EX)), EX, marker='o', color='blue')
ax1.set_ylabel(r"$\lambda$")
ax1.set_xlabel("l")
plt.show()
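As a quick sanity check (a sketch only, using the same X as above): with 100 samples and 1724 features, each individual component can legitimately explain only a small share of the variance, so it is more informative to look at the cumulative explained variance, and to standardise the features first if they are on very different scales.

from sklearn.preprocessing import StandardScaler

X_scaled = StandardScaler().fit_transform(X)             # put all 1724 features on the same scale
pca_scaled = PCA(n_components=10).fit(X_scaled)
cumulative = np.cumsum(pca_scaled.explained_variance_ratio_)
print(cumulative)                                         # variance captured by the first 1..10 components together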

Implement CVAE for a single image

I have a multi-dimensional, hyperspectral image (channels, width, height = 15, 2500, 2500). I want to compress its 15 channels into 5 channels, so the output would be (channels, width, height = 5, 2500, 2500). One simple way to do this is to apply PCA, but its performance is not good enough, so I want to use a Variational Autoencoder (VAE).
The available examples in the TensorFlow and Keras libraries show how to apply a Convolutional Variational Autoencoder (CVAE) to a whole dataset of images:
https://www.tensorflow.org/tutorials/generative/cvae
https://keras.io/examples/generative/vae/
However, I have a single image. What is the best practice for implementing a CVAE in this case? Is it to generate sample patches using a moving-window approach?
One way of doing it would be to have a CVAE that takes as input (and output) the values of all spectral bands at each spatial coordinate (the stacks circled in red in the picture). In the case of your image, you would then have 2500*2500 = 6,250,000 input samples, each a vector of length 15, and the middle layer would be a vector of length 5. Instead of the 2D convolutions normally used over the spatial domain of images, it would make sense here to use 1D convolutions over the spectral domain (since the values at neighbouring wavelengths are also correlated), although using only fully connected layers would also be reasonable.
As a disclaimer, I haven't seen CVAEs used in this way before, but this way you would also get many data samples, which is needed for the learning to generalise well.
Another option would indeed be what you suggested -- to generate the samples (patches) using a moving window (perhaps with a stride of half the patch size). Even though you wouldn't necessarily get enough samples for the CVAE to generalise really well to all HSI images, I guess that doesn't matter (if it overfits), since you want to use it on that same image.
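To make the per-pixel idea above concrete, here is a minimal sketch assuming tf.keras and an array named cube of shape (15, 2500, 2500) (the name is an assumption). For brevity it uses a plain autoencoder rather than a full CVAE; the keras.io VAE example linked in the question shows how to add the sampling layer and the KL loss on top of the same encoder/decoder structure.

from tensorflow import keras
from tensorflow.keras import layers

# Treat every pixel's 15-band spectrum as one training sample -> (6_250_000, 15).
# spectra = cube.reshape(15, -1).T.astype("float32")

n_bands, latent_dim = 15, 5

inp = keras.Input(shape=(n_bands,))
h = layers.Dense(64, activation="relu")(inp)
code = layers.Dense(latent_dim)(h)            # the 5-channel representation per pixel
h = layers.Dense(64, activation="relu")(code)
out = layers.Dense(n_bands)(h)                # reconstructed 15-band spectrum

autoencoder = keras.Model(inp, out)
encoder = keras.Model(inp, code)
autoencoder.compile(optimizer="adam", loss="mse")

# autoencoder.fit(spectra, spectra, epochs=5, batch_size=4096)
# The compressed 5-channel image is the encoder output reshaped back:
# compressed = encoder.predict(spectra).T.reshape(latent_dim, 2500, 2500)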

Can you forecast with multiple trajectories?

I am new to time-series machine learning and have a, perhaps, trivial question.
I would like to forecast the temperature for a particular region. I could train a model using the hourly data points from the first 6 days of the week and then evaluate its performance on the final day, so the training set would have 144 data points (6*24) and the test set 24 data points (1*24). Likewise, I can train a new model for each of regions B-Z and evaluate their individual performances. My question is: can you train a SINGLE model for the predictions across multiple different regions? The region label should then be an input, of course, since it will affect the temperature evolution.
Can you train a single model that forecasts multiple trajectories rather than just one? Also, what might be a good metric for evaluating its performance? I was going to use mean absolute error, but maybe a correlation measure is better?
Yes, you can train with multiple series of data from different regions. What you are asking for is an ultimate goal of deep learning: one model that does everything and predicts every region correctly. However, if you want to generalize your model that much, you normally need a really huge model, on the order of 100M+ parameters, and to train it you also need tons of data, maybe a couple of TB or PB, plus a very powerful computer, something like a Google data center. As for your second question, the metric: a simple RMS error or mean absolute error will work fine.
Here is what you need to focus on: the training data. There is no super model that takes garbage and turns it into gold; the same applies here, garbage in, garbage out. You need a dataset that can represent the whole environment of the problem you are trying to solve. For example, say you want a model to predict whether a glass will break when you hammer it, and you have maybe 10 samples for each type of glass, all of which break when hammered. You train the model and it simply predicts "break" every single time; then you try it on bulletproof glass, which does not break, so your model is wrong. You would need data covering all the different types of glass before your model could predict correctly. Compare this with your 144 data points; I'm pretty sure that won't be enough for your case.
Therefore, I would say yes, you can build that one-model-fits-all, but there is a huge price to pay.
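As a small, hedged sketch of the single-model idea (the column names, the lag features, and the choice of a random-forest regressor are all assumptions, not a specific recommendation): feed lagged temperatures plus the region label as features, train one model on all regions together, and score it with MAE.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

def make_features(df, n_lags=24):
    # df: long format, one row per region per hour, columns ["region", "timestamp", "temp"]
    df = df.sort_values(["region", "timestamp"]).copy()
    for lag in range(1, n_lags + 1):
        df[f"lag_{lag}"] = df.groupby("region")["temp"].shift(lag)   # past temperatures per region
    df["hour"] = pd.to_datetime(df["timestamp"]).dt.hour
    df = pd.get_dummies(df, columns=["region"])                      # region label as an input
    return df.dropna()

# df = pd.read_csv("temperatures.csv")                  # hypothetical file with all regions
# feats = make_features(df)
# train = feats[feats["timestamp"] < "2023-01-07"]      # first 6 days, all regions
# test = feats[feats["timestamp"] >= "2023-01-07"]      # final day, all regions
# X_cols = [c for c in feats.columns if c not in ("timestamp", "temp")]
# model = RandomForestRegressor().fit(train[X_cols], train["temp"])
# print(mean_absolute_error(test["temp"], model.predict(test[X_cols])))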

TensorFlow Object Detection API: evaluation mAP behaves weirdly?

I am training an object detector for my own data using the TensorFlow Object Detection API, following the (great) tutorial by Dat Tran: https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9. I am using the provided ssd_mobilenet_v1_coco pre-trained model checkpoint as the starting point for training, and I have only one object class.
I exported the trained model, ran it on the evaluation data and looked at the resulting bounding boxes. The trained model worked nicely; I would say that if there were 20 objects, typically 13 had spot-on predicted bounding boxes ("true positives"), 7 objects were not detected ("false negatives"), and in 2 cases problems occurred when two or more objects were close to each other and the bounding boxes were drawn between them ("false positives" <- of course, calling these "false positives" etc. is inaccurate, but this is just for me to understand the concept of precision here). There are almost no other "false positives". This seems a much better result than I was hoping for, and while this kind of visual inspection does not give the actual mAP (which is calculated based on the overlap of the predicted and tagged bounding boxes?), I would roughly estimate the mAP as something like 13/(13+2) > 80%.
However, when I run the evaluation (eval.py) (on two different evaluation sets), I get the following mAP graph (0.7 smoothed):
mAP during training
This would indicate a huge variation in mAP and a level of about 0.3 at the end of training, which is far worse than I would expect given how well the bounding boxes are drawn when I use the exported output_inference_graph.pb on the evaluation set.
Here is the total loss graph for the training:
total loss during training
My training data consists of 200 images with about 20 labeled objects each (I labeled them using the labelImg app); the images are extracted from a video and the objects are small and somewhat blurry. The original image size is 1200x900, so I reduced it to 600x450 for the training data. The evaluation data (which I used both as the evaluation set for eval.py and to visually check what the predictions look like) is similar and consists of 50 images with 20 objects each, but is still in the original size (the training data is extracted from the first 30 min of the video and the evaluation data from the last 30 min).
Question 1: Why is the mAP so low in evaluation when the model appears to work so well? Is it normal for the mAP graph to fluctuate so much? I did not touch the default value for how many images TensorBoard uses to draw the graph (I read this question: Tensorflow object detection api validation data size, and have some vague idea that there is a default value that can be changed?)
Question 2: Can this be related to the different sizes of the training and evaluation data (1200x900 vs 600x450)? If so, should I resize the evaluation data too? (I did not want to do this, as my application uses the original image size and I want to evaluate how well the model does on that data.)
Question 3: Is it a problem to form the training and evaluation data from images with multiple tagged objects per image? (I.e., surely the evaluation routine compares all the predicted bounding boxes in one image to all the tagged bounding boxes in that image, and not all the predicted boxes in one image to a single tagged box, which would produce many false "false positives"?)
(Question 4: it seems to me the training could have been stopped after around 10000 steps, where the mAP sort of leveled out; is it now overtrained? It's hard to tell when it fluctuates so much.)
I am a newbie with object detection so I very much appreciate any insight anyone can offer! :)
Question 1: This is the tough one... First, I think you don't correctly understand what mAP is, since your rough calculation is wrong. Here is, briefly, how it is computed:
For each class of object, using the overlap between the real objects and the detected ones, the detections are tagged as "true positive" or "false positive"; all the real objects with no "true positive" associated with them are labelled "false negative".
Then iterate through all your detections (on all images of the dataset) in decreasing order of confidence. Compute the precision (TP/(TP+FP)) and recall (TP/(TP+FN)), counting as TP and FP only the detections you have already seen (those with confidence greater than or equal to the current one). This gives you a point (precision, recall) that you can put on a precision-recall graph.
Once you've added all possible points to your graph, compute the area under the curve: this is the average precision (AP) for that category.
If you have multiple categories, the mAP is simply the mean of all the APs.
Applying that to your case: in the best case your true positives are the detections with the highest confidence. In that case your precision/recall curve looks like a rectangle: you would have 100% precision up to 13/20 recall, and then points with 13/20 recall and less than 100% precision; this gives you mAP = AP(category 1) = 13/20 = 0.65. And this is the best case; you can expect less in practice because of false positives with higher confidence.
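To make the 13/20 best case concrete, here is a small numeric sketch (the confidence ordering is assumed, and simple trapezoidal integration is used rather than the exact interpolation of COCO or PASCAL VOC):

import numpy as np

# Best case: 13 true positives and 2 false positives among the detections,
# 20 ground-truth objects, and the 13 TPs all have higher confidence than the 2 FPs.
labels = np.array([1] * 13 + [0] * 2)        # detections sorted by decreasing confidence
n_gt = 20

tps = np.cumsum(labels)
fps = np.cumsum(1 - labels)
precision = np.concatenate([[1.0], tps / (tps + fps)])   # prepend the recall = 0 start point
recall = np.concatenate([[0.0], tps / n_gt])

ap = np.trapz(precision, recall)             # area under the precision-recall curve
print(round(ap, 2))                          # 0.65, i.e. the 13/20 best-case AP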
Other reasons why yours could be lower:
maybe among the bounding boxes that appear to be good, some are still rejected in the calculation because the overlap between the detection and the real object is not quite large enough. The criterion is that the Intersection over Union (IoU) of the two bounding boxes (real one and detection) should be over 0.5. While it seems like a lenient threshold, it really isn't; you should probably write a script that displays the detected bounding boxes in a different color depending on whether they are accepted or not (if not, you get both an FP and an FN). A sketch of such a script is given after these two points.
maybe you're only visualizing the first 10 images of the evaluation set. If so, change that, for two reasons: 1. maybe you're just very lucky with these images and they are not representative of what follows; 2. actually, more than luck: if these images are the first of the evaluation set, they come right after the end of the training set in your video, so they are probably quite similar to some images in the training set and therefore easier to predict, so they are not representative of your evaluation set.
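Here is a rough sketch of the kind of visualisation script suggested above (the [x1, y1, x2, y2] box format and all names are assumptions; the boxes would first need to be extracted from the detector's output format):

import matplotlib.pyplot as plt
import matplotlib.patches as patches

def iou(a, b):
    # boxes given as [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def show_detections(image, det_boxes, gt_boxes, iou_thr=0.5):
    # Ground truth in blue; detections in green if accepted (IoU >= threshold), red otherwise.
    fig, ax = plt.subplots()
    ax.imshow(image)
    for g in gt_boxes:
        ax.add_patch(patches.Rectangle((g[0], g[1]), g[2] - g[0], g[3] - g[1],
                                       fill=False, edgecolor="blue"))
    for d in det_boxes:
        accepted = any(iou(d, g) >= iou_thr for g in gt_boxes)
        ax.add_patch(patches.Rectangle((d[0], d[1]), d[2] - d[0], d[3] - d[1],
                                       fill=False, edgecolor="green" if accepted else "red"))
    plt.show()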
Question 2: If you have not changed that part of the ssd_mobilenet_v1_coco config file, all your images (both for training and testing) are rescaled to 300x300 pixels at the input of the network, so your preprocessing doesn't matter much.
Question 3: No, it's not a problem at all; all these algorithms were designed to detect multiple objects in images.
Question 4: Given the fluctuations, I would actually keep training until you see either improvement or clear overtraining. 10k steps is actually quite small; maybe it's enough because your task is relatively easy, or maybe it's not and you need to wait ten times longer to see significant improvement...