Does TensorFlow resize bounding boxes when training an object detection model?

I'm wondering about image resizing and the corresponding bounding-box resizing that should follow it.
For instance, if I use a 640x640 image in my dataset and the model has a fixed_shape_resizer of 320x320, will the original bounding box be scaled down to match the smaller 320x320 size?

Yes, TensorFlow will automatically resize the bounding boxes to match the smaller input size.
Here is a link to the code that rescales the bounding boxes.
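To make that concrete, here is a minimal sketch of what the rescaling amounts to, assuming boxes in the API's normalized [ymin, xmin, ymax, xmax] format; the helper name to_absolute is mine, not part of the API:

```python
import tensorflow as tf

# The TF Object Detection API stores ground-truth boxes as normalized
# [ymin, xmin, ymax, xmax] coordinates, so a pure resize such as
# fixed_shape_resizer (640x640 -> 320x320) leaves the normalized values
# unchanged; only the absolute pixel coordinates shrink with the image.

def to_absolute(boxes, height, width):
    """Convert normalized boxes to pixel coordinates at a given image size."""
    scale = tf.constant([height, width, height, width], dtype=tf.float32)
    return boxes * scale

boxes = tf.constant([[0.25, 0.25, 0.75, 0.75]])   # one normalized box
print(to_absolute(boxes, 640.0, 640.0).numpy())   # [[160. 160. 480. 480.]]
print(to_absolute(boxes, 320.0, 320.0).numpy())   # [[ 80.  80. 240. 240.]]
```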

Related

Weakly supervised object detection R-CNN of screen images

I have a set of icons and a screen recording; the icons are not annotated and have no bounding boxes, they are just PNG icons with image-level labels, e.g. "instagram", "facebook", "chrome".
The task is to find the icons within the screen recording and draw a bounding box around them, given the above prerequisites.
My approach so far is:
1. Use selective search to find ROIs
2. Use a CNN to classify the regions
3. Filter out non-icon regions
4. Draw bounding boxes around positively labelled ROIs
5. Use the resulting screen images with bounding boxes to train a Fast R-CNN
But I am stuck at step 2: I have no idea how to train the CNN with the image-level labelled icons.
If I make a dataset of all the possible icon images, with no background or context information, is it possible to train the CNN to answer the question "Does the ROI include a possible icon?"
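One possible reading of step 2 is a plain binary classifier ("icon" vs. "background"). A minimal sketch, assuming you assemble a two-class crop dataset yourself from the icon PNGs (positives) and random crops of the screen recording (negatives); the crops/ directory layout below is hypothetical:

```python
import tensorflow as tf

IMG_SIZE = (64, 64)

# Hypothetical layout: crops/icon/*.png and crops/background/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "crops/",
    image_size=IMG_SIZE,
    batch_size=32,
    label_mode="binary",
)

# Small binary CNN: answers "does this ROI look like an icon?"
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```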

Normal or not that no bounding boxes are shown on the TensorBoard Images tab when training an object detection model?

I'm training an object detection model with TensorFlow (fine-tuning from ssd_mobilenet_v2_320x320_coco17_tpu-8) and monitoring the training with TensorBoard. I was expecting the images displayed in the Images tab of TensorBoard to show bounding boxes. What I see, though, are only images with an orange line drawn above the picture (the same orange that I would expect for the bounding box). Am I missing something? Am I right that a bounding box should appear? A picture of what I see is attached. Any help greatly appreciated.
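For comparison, a hedged sketch of how one can log an image with boxes drawn manually and view it in the same Images tab, to see what a correct overlay should look like; the log directory and box values below are made up:

```python
import tensorflow as tf

writer = tf.summary.create_file_writer("logs/box_check")  # hypothetical log dir

images = tf.random.uniform([1, 320, 320, 3])          # dummy image batch
boxes = tf.constant([[[0.2, 0.2, 0.8, 0.8]]])         # [batch, n, 4], normalized
colors = tf.constant([[1.0, 0.5, 0.0, 1.0]])          # orange, RGBA

# Draw the boxes onto the images and write them as an image summary.
with_boxes = tf.image.draw_bounding_boxes(images, boxes, colors)
with writer.as_default():
    tf.summary.image("ground_truth_boxes", with_boxes, step=0)
```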

Object Detection: Aspect Ratio and Scale of Anchor Boxes

I am working on an object detection problem on my own dataset. I want to figure out the scale and aspect ratio that I should specify in the config file of the Faster R-CNN provided by the TensorFlow Object Detection API. The first step is the image resizer. I am using the fixed_shape_resizer, as it allows a batch size of more than 1; I read that it uses bilinear interpolation for downsampling and upsampling. How do I calculate the new ground-truth box coordinates after this resizing? And once we have the new ground-truth box coordinates, how do we calculate the scale and aspect ratio of the anchor boxes to specify in the config file to improve the localisation loss?
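A hedged sketch of both calculations, assuming ground-truth boxes in absolute [ymin, xmin, ymax, xmax] pixel coordinates. Bilinear interpolation only affects pixel values; the box coordinates just scale linearly with the image. The per-box scale below is expressed relative to a base anchor size of 256 (my reading of the grid_anchor_generator default), so treat it as an assumption:

```python
import numpy as np

def resize_boxes(boxes, orig_hw, new_hw):
    """Scale absolute [ymin, xmin, ymax, xmax] boxes to the resized image."""
    sy = new_hw[0] / orig_hw[0]
    sx = new_hw[1] / orig_hw[1]
    return boxes * np.array([sy, sx, sy, sx])

def anchor_stats(boxes, base_anchor=256.0):
    """Per-box scale and aspect ratio, in the sense used by grid anchors."""
    h = boxes[:, 2] - boxes[:, 0]
    w = boxes[:, 3] - boxes[:, 1]
    aspect_ratios = w / h                  # width / height
    scales = np.sqrt(h * w) / base_anchor  # box size relative to base anchor
    return scales, aspect_ratios

boxes = np.array([[100.0, 150.0, 400.0, 450.0]])     # hypothetical 800x800 image
resized = resize_boxes(boxes, (800, 800), (640, 640))
print(resized)                                        # [[ 80. 120. 320. 360.]]
print(anchor_stats(resized))                          # (array([0.9375]), array([1.]))
```

Clustering the resulting per-box scales and aspect ratios (e.g. with k-means) over the whole dataset is a common way to pick the handful of values to put in the config.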

TensorFlow grouping detection boxes for stacked/nearby images

I am doing object detection on sample toy sets using TensorFlow's R-CNN on my own dataset.
Sometimes, when two toys are close to each other, TensorFlow bounds both objects together: instead of returning two bounding boxes, it returns one bigger bounding box (as in the attached image, where both the red and blue cars have been bounded together).
Does anyone know what the issue is here and how to get individual detections?
(Detected output image attached.)

If I resize images using the TensorFlow Object Detection API, are the bboxes automatically resized too?

TensorFlow's Object Detection API has an option in the .config file to add a keep_aspect_ratio_resizer. If I resize my training data using this, will the corresponding bounding boxes be resized as well? If they didn't match up, the network would be seeing incorrect examples.
Yes, the boxes will be resized to be compatible with the images as well!
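A minimal sketch of why the boxes stay compatible, as my own illustration rather than the API's actual code path: an aspect-preserving resize alone changes nothing, because the boxes are normalized to the image, while padding (e.g. pad_to_max_dimension on the bottom/right) would shrink the normalized coordinates by the resized/padded ratio. The sizes below are hypothetical:

```python
import tensorflow as tf

def adjust_boxes_for_padding(boxes, resized_hw, padded_hw):
    """Rescale normalized [ymin, xmin, ymax, xmax] boxes after
    bottom/right padding."""
    ry = resized_hw[0] / padded_hw[0]
    rx = resized_hw[1] / padded_hw[1]
    return boxes * tf.constant([ry, rx, ry, rx], dtype=tf.float32)

boxes = tf.constant([[0.1, 0.2, 0.5, 0.6]])
# Aspect-preserving resize without padding: boxes are unchanged.
print(boxes.numpy())
# e.g. a 768x1024 resized image padded to 1024x1024:
print(adjust_boxes_for_padding(boxes, (768, 1024), (1024, 1024)).numpy())
# -> [[0.075 0.2   0.375 0.6  ]]
```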