TF Object Detection API: Inference in very high resolution images

I have trained a model (Faster R-CNN based) to identify 80x80 objects in 1000x600 images.
Inference works well when presented with a 1000x600 test image.
However, my final goal is to be able to detect such objects (80x80) in very high-resolution photographs (5000x4000 or higher, sometimes 10x that).
What options do I have?
One way I am thinking is to split the large image into smaller images of 1000x600 and do inference on them. But there are challenges in that approach.
Has anyone tried this use case and found a workable solution?
--

What I would do is:
Reduce the size of the image: 5000x4000 -> 1000x600.
Predict the objects; you will get xmins, xmaxs, ymins, ymaxs -> normalize them by the width and height so the values lie between 0 and 1.
Take the original image and rescale the normalized boxes by the original width and height.
Your suggested approach of splitting the image should work as well, but it will be computationally more expensive.
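For instance, a minimal sketch of mapping boxes predicted on the downscaled copy back onto the original image (assuming the detector already returns boxes normalized to [0, 1] in (ymin, xmin, ymax, xmax) order, as the TF Object Detection API does):

    import numpy as np

    def boxes_to_original(norm_boxes, orig_width, orig_height):
        """Map boxes normalized to [0, 1] (ymin, xmin, ymax, xmax) back to
        pixel coordinates in the full-resolution image."""
        boxes = np.asarray(norm_boxes, dtype=np.float32)
        scale = np.array([orig_height, orig_width, orig_height, orig_width],
                         dtype=np.float32)
        return boxes * scale

    # One box predicted on the 1000x600 copy, mapped onto the 5000x4000 original.
    print(boxes_to_original([[0.10, 0.20, 0.18, 0.28]],
                            orig_width=5000, orig_height=4000))
    # -> [[ 400. 1000.  720. 1400.]]

One caveat worth checking: after the 5x downscale the 80x80 objects are only roughly 16x12 px, so verify that the detector still finds them reliably on the downscaled copy.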

You can either:
do patch-by-patch inference and use non-maximum suppression (NMS) to handle the border cases, or
make your training images the same size as your testing images by padding.
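A rough sketch of the first option, assuming a TF2 SavedModel exported from the Object Detection API (the model path, tile size, overlap, and thresholds below are illustrative):

    import numpy as np
    import tensorflow as tf

    detect_fn = tf.saved_model.load("exported_model/saved_model")  # hypothetical path

    def tile_starts(full, tile, overlap):
        """Tile start offsets; the last tile is flush with the border so the
        whole image is covered."""
        if full <= tile:
            return [0]
        starts = list(range(0, full - tile, tile - overlap))
        starts.append(full - tile)
        return starts

    def tiled_detect(image, tile_w=1000, tile_h=600, overlap=100,
                     score_thresh=0.5, iou_thresh=0.5):
        """Run the detector on overlapping tiles and merge results with NMS."""
        img_h, img_w = image.shape[:2]
        all_boxes, all_scores, all_classes = [], [], []
        for y0 in tile_starts(img_h, tile_h, overlap):
            for x0 in tile_starts(img_w, tile_w, overlap):
                tile = image[y0:y0 + tile_h, x0:x0 + tile_w]
                out = detect_fn(tf.convert_to_tensor(tile[np.newaxis, ...]))
                keep = out["detection_scores"][0].numpy() >= score_thresh
                boxes = out["detection_boxes"][0].numpy()[keep]  # normalized to the tile
                th, tw = tile.shape[:2]
                # Tile-normalized (ymin, xmin, ymax, xmax) -> absolute pixels
                # in the full image.
                boxes = boxes * [th, tw, th, tw] + [y0, x0, y0, x0]
                all_boxes.append(boxes)
                all_scores.append(out["detection_scores"][0].numpy()[keep])
                all_classes.append(out["detection_classes"][0].numpy()[keep])
        boxes = np.concatenate(all_boxes).astype(np.float32)
        scores = np.concatenate(all_scores).astype(np.float32)
        classes = np.concatenate(all_classes)
        # Class-agnostic NMS to drop duplicate detections from the overlapping borders.
        keep = tf.image.non_max_suppression(boxes, scores, max_output_size=500,
                                            iou_threshold=iou_thresh).numpy()
        return boxes[keep], scores[keep], classes[keep]

For a single-class detector the class-agnostic NMS above is enough; with several classes you would run NMS per class instead.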
Let us know what you ended up doing!

Related

Tensorflow small objects far from camera detection

I am using the TensorFlow Object Detection API for an object detection task. However, my objects are captured from a high angle (camera at 10 m) and appear very small; the images are 1920x1080.
Questions:
1) What is the best way to detect small objects under these conditions?
2) What are the characteristics of a suitable dataset? Images from the same viewpoints, maybe?
I appreciate all of your answers, Thanks :)
You have to consider the object detector's input size, even if you use a high-resolution image such as 1920x1080.
Object detectors resize the input image to their architecture's input size (e.g., YOLO commonly uses a 416x416 input).
In other words, even if you feed the 1920x1080 image as it is, the API will resize it down to a small resolution like 416x416.
This means your small objects can effectively disappear as the image passes through the convolution filters.
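To make the shrinkage concrete (the 50x50 px object size below is just an assumed example):

    # What happens to a small object when a 1920x1080 frame is squeezed
    # into a 416x416 detector input?
    src_w, src_h = 1920, 1080
    dst_w, dst_h = 416, 416
    obj_w = obj_h = 50            # assumed object size in the original frame, in pixels

    print(obj_w * dst_w / src_w)  # ~10.8 px wide after resizing
    print(obj_h * dst_h / src_h)  # ~19.3 px tall after resizing (and distorted)

A roughly 50x50 px object ends up around 11x19 px, which is why cropping or tiling the frame preserves much more detail for the detector.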
In my opinion:
1) If you know where the small objects are located in the whole image, crop out those regions and use the crops as the input images. Even if you do not know where the small objects are, you can generate a list of candidate crops by splitting the image in some way.
2) I don't understand what you want to know here; please be more specific.
I think you should try the "faster_rcnn_resnet101" model with the KITTI config, which allows a max image dimension of 1987, but this model is very slow compared to the SSD models. The configuration link is below -
https://github.com/tensorflow/models/blob/001a2a61285e378fef5f45386f638cb5e9f153c7/research/object_detection/samples/configs/faster_rcnn_resnet101_kitti.config
Also, the Faster R-CNN models do a better job than YOLO at small-object detection; I am not sure about the performance with SSD models.

How to handle different input data sizes for a CNN with TensorFlow?

I am trying to build a CNN model with TensorFlow on my own data set, but I have run into a problem: my pictures come in many different sizes. There is one kind of object in the pictures. If I make all the pictures the same size, the objects in the pictures are not the same size. How do I deal with this so I can train a CNN model with TensorFlow? I have heard from others that, regardless of the input sizes, using tf.reduce_max / tf.reduce_mean is the best solution. If that really is the best solution to my problem, how do I use it in my CNN model?
If I make all the pictures the same size, the objects in the pictures are not the same size.
If you already know how to make your input images the same size, you are ready to train your CNN model. Unless you have a strict need for the objects in the pictures to be the same size, it does not matter to the network.
The usual approach is to resize the images to a fixed size that the network accepts as input. This means distorting the aspect ratio of the objects.
If that bothers you, you could try padding the images to a square (assuming the network input is square) and then resizing. This keeps the aspect ratio but adds some extra information (the padding).
Another option is to crop the image to a square, if you are confident you are not losing important information and your task allows it.
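A minimal sketch of the pad-then-resize option with TensorFlow (the 224x224 target is just an example input size):

    import tensorflow as tf

    def load_for_network(path, target=224):
        """Decode an image, letterbox-pad it to a square, and resize it to the
        network input size while preserving the original aspect ratio."""
        img = tf.io.decode_image(tf.io.read_file(path), channels=3,
                                 expand_animations=False)
        # resize_with_pad keeps the aspect ratio and zero-pads the shorter side.
        return tf.image.resize_with_pad(img, target, target)

Since every output now has the same shape, the images can be batched directly with tf.stack.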

Training Image Size Faster-RCNN

I will train my dataset with Faster R-CNN for one class. All my images are 1920x1080. Should I resize or crop the images, or can I train at this size?
Also my objects are really small (around 60x60).
In the config file the dimensions are given as min_dimension: 600 and max_dimension: 1024, which is why I am confused about training the model with 1920x1080 images.
If your objects are small, resizing the images to a smaller size is not a good idea. You can change max_dimension to 1920 or 2000, which may slow training and inference down a bit. For cropping, first consider how the objects are placed in the images: if cropping would cut through a lot of objects, you will end up with many truncated instances, which might hurt the model's performance.
If you insist on using Faster R-CNN for this task, personally I recommend:
Changing the input height and width (the minimum and maximum dimensions in the config file), which should let training run on your dataset without issues.
Changing the region proposal parameters (also in the config file) to an aspect ratio and scale that match your objects, e.g. 1:1 and around 60 px; see the sketch below.
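As a concrete illustration of those two changes (and of raising max_dimension, as the previous answer also suggests), here is a sketch using the TF Object Detection API's config_util helpers; it assumes the standard pipeline layout with a keep_aspect_ratio_resizer and a grid_anchor_generator, and the paths and exact values are placeholders:

    from object_detection.utils import config_util

    configs = config_util.get_configs_from_pipeline_file("pipeline.config")
    faster_rcnn = configs["model"].faster_rcnn

    # Keep the full 1920x1080 frames instead of shrinking them to 600-1024.
    resizer = faster_rcnn.image_resizer.keep_aspect_ratio_resizer
    resizer.min_dimension = 1080
    resizer.max_dimension = 1920

    # Bias the region proposals toward small, roughly square objects.
    anchors = faster_rcnn.first_stage_anchor_generator.grid_anchor_generator
    del anchors.scales[:]
    anchors.scales.extend([0.125, 0.25, 0.5])  # with the default 256 px base anchor: 32/64/128 px
    del anchors.aspect_ratios[:]
    anchors.aspect_ratios.extend([1.0])        # 1:1 boxes for roughly square 60x60 objects

    pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
    config_util.save_pipeline_config(pipeline_proto, "output_dir")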
But if I were you, I would also try:
Adding some shortcut connections in the backbone, since small-object detection needs high-resolution features.
Cutting off the Fast R-CNN head to boost performance: with only one class to separate from the background, the RPN-stage output should already encode enough information.

MobileNet-SSD input resolution

I have a working object detection model (a fine-tuned MobileNet SSD) that detects my custom small robot. I'll feed it webcam footage (the camera will be mounted on a drone) and use the real-time bounding box information.
So, I am about to purchase the camera.
My questions: since SSD resizes the input images to 300x300, is the camera resolution very important? Does higher resolution mean better accuracy (even though it gets resized to 300x300 anyway)? Should I crop the camera footage to a 1:1 aspect ratio at every frame before running my object detection model on it? Should I divide the image into MxN cropped segments and run inference on them one by one?
My robot is very small and the drone will be at a 4-meter altitude, so I'll effectively be trying to detect a very tiny spot in the input image.
Any sort of wisdom is greatly appreciated, thank you.
These are quite a few questions; I'll try to answer all of them.
The detection model resizes the input images before feeding them to the network, using some resizing method such as bilinear interpolation. It is of course better if the input image is equal to or larger than the network's input size rather than smaller. A rule of thumb is that higher resolution does mean better accuracy, but it depends heavily on the setup and the task.
If you're trying to detect a small object and, say, the original resolution is 1920x1080, then after resizing the image the small object becomes even smaller (pixel-wise) and may be too small to detect. So it is indeed better to either split the image into smaller images (possibly with some overlap, to avoid missed detections from objects being split across tiles) and run detection on each, or to use a model with a higher input resolution. Be aware that while the first is possible with your current model, for the latter you'll need to train a new model, possibly with some architectural changes (e.g. adding SSD layers and modifying anchors, depending on the scales you want to detect).
Regarding the aspect ratio, you mostly need to stay consistent. It doesn't matter if you don't keep the original aspect ratio, but if you don't, do the same thing in both training and evaluation/test/deployment.
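If you do crop to 1:1, the key point from the last paragraph is to apply exactly the same preprocessing at training time and at deployment. A minimal sketch of a consistent center-crop step (300x300 matches the SSD input mentioned in the question):

    import tensorflow as tf

    def preprocess_frame(frame, target=300):
        """Center-crop a webcam frame to a square, then resize to the SSD input
        size, so the aspect ratio is handled identically at train and test time."""
        h, w = tf.shape(frame)[0], tf.shape(frame)[1]
        side = tf.minimum(h, w)
        y0, x0 = (h - side) // 2, (w - side) // 2
        crop = frame[y0:y0 + side, x0:x0 + side, :]
        return tf.image.resize(crop, [target, target])

Note that the crop discards the left and right edges of a 16:9 frame, so this only makes sense if the robot is expected to stay near the center of the drone's view.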

Tensorflow for Poets Inception v3 image size

I am training my own image set using Tensorflow for Poets as an example,
https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/
What size do the images need to be? I have read that the script automatically resizes the images for you, but what size does it resize them to? Can you pre-resize your images to that size to save disk space (10,000 images of 1 MB each)?
How does it crop the images: does it chop off part of your image, add white/black bars, or change the aspect ratio?
Also, I think Inception v3 uses 299x299 images; what if your image recognition requires more detailed accuracy, is it possible to increase the network's image size, say to 598x598?
I don't know which resizing option this implementation uses; if you haven't found it in the documentation, then I expect we'd need to read the code.
The images can be of any size. Yes, you can shrink your images to save disk space. However, note that you lose image detail; there won't be a way to recover the lost information.
The good news is that you shouldn't need it; CNN models are built for an image size that contains enough detail to handle the problem at hand. Greater image detail generally does not translate to greater accuracy in classification. Doubling the image resolution is usually a waste of storage.
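If you do decide to pre-shrink the images to save disk space, a small sketch with Pillow (the folder names and the 600 px cap are arbitrary; the cap just needs to stay comfortably above the 299x299 network input):

    from pathlib import Path
    from PIL import Image

    src = Path("photos")           # hypothetical input folder
    dst = Path("photos_small")     # resized copies are written here
    dst.mkdir(exist_ok=True)

    # Shrink each JPEG so its longer side is at most 600 px, keeping the
    # aspect ratio; this cuts file sizes while retaining more detail than
    # the 299x299 input will ever use.
    for path in src.glob("*.jpg"):
        with Image.open(path) as img:
            img.thumbnail((600, 600))          # only ever shrinks, never enlarges
            img.save(dst / path.name, quality=90)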
To increase the network's input image size, you'd have to edit the code to accept the larger "native" image size. Then you'd have to alter the model topology to account for the greater input size: either a larger step-down factor somewhere (which could defeat the purpose of the higher resolution), or another layer in the model to handle the larger input.
To get a more accurate model, you generally need a stronger network topology. 2x resolution does not give us much more information to differentiate a horse from a school bus.