The model mistakes everything it knew from its pre-trained state for my custom object - tensorflow

I've followed an object detection tutorial from pythonprogramming.net to recognize a small robot (my custom object) based on the ssd_mobilenet_v1_coco model.
I have about 450 labelled images of my robot.
I used the official sample config for ssd_mobilenet_v1_coco and only made the necessary changes (num_classes: 1, batch size reduced to 7), then trained until the loss was consistently between 1 and 2 (about 10,000 steps).
The problem is that the model detects everything it used to know from its pre-trained state as my small robot, so it identifies all sorts of objects as the robot even though they aren't.

I faced this issue before and fixed it by adding images containing the pre-trained objects as negative examples, i.e. images with no boxes for my class. Another way to fix it is to train longer; if you do both, I think that will fix the problem. Also try increasing your dataset size (I was training with about 6,000 images).
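For reference, here is a minimal sketch of what such a negative example can look like when you build the TFRecords for the TF Object Detection API: an ordinary image entry whose box/class lists are simply left empty. The create_negative_example helper and the plain file reading are my own illustration (not the tutorial's generate_tfrecord.py script), but the feature keys are the standard ones the API expects.

import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def _float_list(values):
    return tf.train.Feature(float_list=tf.train.FloatList(value=values))

def _bytes_list(values):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=values))

def _int64_list(values):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=values))

def create_negative_example(image_path, height, width):
    # A background image: same feature keys as a normal example,
    # but all box/class lists are empty, so every region is treated as "not my object".
    with open(image_path, 'rb') as f:
        encoded_jpg = f.read()
    return tf.train.Example(features=tf.train.Features(feature={
        'image/height': _int64_feature(height),
        'image/width': _int64_feature(width),
        'image/filename': _bytes_feature(image_path.encode('utf8')),
        'image/source_id': _bytes_feature(image_path.encode('utf8')),
        'image/encoded': _bytes_feature(encoded_jpg),
        'image/format': _bytes_feature(b'jpeg'),
        'image/object/bbox/xmin': _float_list([]),
        'image/object/bbox/xmax': _float_list([]),
        'image/object/bbox/ymin': _float_list([]),
        'image/object/bbox/ymax': _float_list([]),
        'image/object/class/text': _bytes_list([]),
        'image/object/class/label': _int64_list([]),
    }))

Mixing a few hundred of these background examples (e.g. frames that contain the COCO objects but not your robot) into the training record is usually enough for the loss to start penalizing those false positives.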

Related

When should I stop training the object detection model while the mAP is not stable?

I am re-training the SSD MobileNet with 900 images from the Berkeley Deep Drive dataset and evaluating on 100 images from that dataset.
The problem is that after about 24 hours of training, the TotalLoss seems unable to go below 2.0:
And the corresponding mAP score is quite unstable:
In fact, I have actually tried to train for about 48 hours, and the TotalLoss just cannot go below 2.0, hovering around 2.5~3.0. And during that time, the mAP is even lower.
So here is my question: given my situation (I really don't need a "high-precision" model; as you can see, I picked 900 images for training and would simply like to do a PoC model training/prediction and that's it), when should I stop the training and obtain a reasonably performing model?
Indeed, for detection you need to fine-tune the network. Since you are using SSD, there are already some sources out there:
https://gluon-cv.mxnet.io/build/examples_detection/finetune_detection.html (this one is specifically for an SSD model; it uses mxnet, but you can do the same with TF)
You can watch a very nice fine-tuning intro here
This repo has a nice fine-tuning option enabled as long as you write your dataloader; check it out here
In general your error can be attributed to many factors: the learning rate you are using, the characteristics of the images themselves (are they normalized?), and so on. If the SSD network you are using was trained with normalized data and you don't normalize before retraining, you'll get stuck while learning. Also, what learning rate are you using?
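As a rough illustration of the normalization point: many SSD/MobileNet checkpoints expect inputs scaled to roughly [-1, 1] by their preprocessing step, so feeding raw 0-255 tensors into a fine-tuning pipeline that skips that step will not match the pretrained weights. A minimal sketch (verify the exact convention against your checkpoint's preprocess function before relying on it):

import numpy as np

def to_minus_one_one(image_uint8):
    # Scale uint8 pixels [0, 255] to floats in [-1, 1], a common SSD/MobileNet convention.
    return image_uint8.astype(np.float32) * (2.0 / 255.0) - 1.0

def to_zero_one(image_uint8):
    # Alternative convention used by some feature extractors: scale to [0, 1].
    return image_uint8.astype(np.float32) / 255.0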
From the model zoo I can see that for SSD there are models trained on COCO as well as models trained on Open Images.
If, for example, you are using ssd_inception_v2_coco, there is a truncated_normal_initializer in the input layers, so take that into consideration; also make sure the input sizes are the same as the ones you provide to the model.
You can get very good detections even with little data if you also include many augmentations and take into account the rest of the things I mentioned; more details on your code would help to see where the problem lies.

Object detection project (root architecture) using Tensorflow + Keras. Image sample size for accurate training of model?

I'm currently working on a project at university, where we are using Python + TensorFlow and Keras to train an image object detector to detect different parts of the root system of Arabidopsis.
Our current results are pretty bad, as we only have about 100 images to train the model with at the moment, but we are currently working on cultivating more plants in order to get more images (more data) to train the TensorFlow model.
We have implemented the following Mask_RCNN model: Github - Mask_RCNN tensorflow
We are looking to detect three object classes: stem, main root and secondary root.
But the model detects main roots incorrectly where the secondary roots are located.
It should be able to detect something like this: Root detection example
Training root data set that we are using right now: training images
What is the usual sample size used to train a neural network for accurate results?
First off: I think there is no simple rule to estimate the sample size, but it depends at least on:
1. Quality of your images
I downloaded the images and I think you need to preprocess them before you can use them, to reduce the "problem complexity". In some projects in which I worked with biological data, a background removal (image - low-pass filter) was the key to getting better results. You should definitely remove/crop the area outside your region of interest (like the tape and the ruler). I would try to get as clean a dataset as possible (including manual adjustments with cv2/GIMP/etc.) to focus the network on solving "the right problem". After that you could apply some random distortion to make it also work on fuzzy/bad/realistic images.
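A rough sketch of that "image minus low-pass filter" background removal plus ROI cropping with OpenCV; the file name, kernel size, and crop coordinates are placeholders you would tune for your own scans:

import cv2

def remove_background(image_gray, blur_ksize=51):
    # Subtract a heavily blurred (low-pass) version of the image to suppress
    # slowly varying background and keep the fine root structures.
    low_pass = cv2.GaussianBlur(image_gray, (blur_ksize, blur_ksize), 0)
    high_pass = cv2.subtract(image_gray, low_pass)
    # Stretch the result back to the full 0-255 range.
    return cv2.normalize(high_pass, None, 0, 255, cv2.NORM_MINMAX)

img = cv2.imread('root_scan.png', cv2.IMREAD_GRAYSCALE)   # hypothetical file name
roi = img[100:1900, 200:1400]                              # crop away ruler/tape (placeholder coords)
clean = remove_background(roi)
cv2.imwrite('root_scan_clean.png', clean)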
2. The way you work with your data
There are a few tricks that enable you to "expand" your dataset.
Sometimes it's very helpful to let a generator method crop random small patches from your input data. This allows you to work with more batches (on small GPUs) and gives your network more "variety". Just think about the conv2d task: if you don't use random cropping, your filters will slide over the same areas over and over again (on the same image). For the same reason: apply random distortion, flip and rotate your images.
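A minimal sketch of such a random-patch generator with numpy; the patch size, flip rule, and the assumption that images and masks come as equally sized arrays are all placeholders to adapt:

import numpy as np

def random_patch_generator(images, masks, patch_size=128, rng=None):
    # Yield randomly cropped, randomly flipped (image, mask) patches forever.
    # `images` and `masks` are parallel lists of numpy arrays of matching size.
    rng = rng or np.random.default_rng()
    while True:
        i = rng.integers(len(images))
        img, msk = images[i], masks[i]
        y = rng.integers(0, img.shape[0] - patch_size + 1)
        x = rng.integers(0, img.shape[1] - patch_size + 1)
        patch_img = img[y:y + patch_size, x:x + patch_size]
        patch_msk = msk[y:y + patch_size, x:x + patch_size]
        if rng.random() < 0.5:                 # random horizontal flip
            patch_img = patch_img[:, ::-1]
            patch_msk = patch_msk[:, ::-1]
        yield patch_img, patch_msk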
3. Network architecture
In your case I would prefer a U-Net architecture with a last conv2d output of 3 feature maps (your classes), a final softmax activation and a categorical_crossentropy loss. This enables you to play with the depth, because sometimes you need sophisticated architectures to solve a problem (close to 100%), but in your case you just want to see a first working result, so fewer layers and a simple architecture could also help you get things working. Maybe there are some trained network weights for a U-Net which meet your requirements (search on Kaggle, for example), because it is also helpful (to reduce the data you need) to use "transfer learning" -> use the first layers (weights) of a network which is already trained. For semantic segmentation, the first filters will become something like an edge detector for most given problems/images.
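A minimal Keras sketch of such a shallow U-Net with a 3-class softmax head and categorical_crossentropy; the input shape, filter counts, and depth are arbitrary starting values, not a tuned recipe:

from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    return layers.Conv2D(filters, 3, padding='same', activation='relu')(x)

def small_unet(input_shape=(256, 256, 1), n_classes=3):
    inputs = layers.Input(input_shape)
    c1 = conv_block(inputs, 16); p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 32);     p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, 64)                                   # bottleneck
    u2 = layers.Conv2DTranspose(32, 2, strides=2, padding='same')(c3)
    c4 = conv_block(layers.concatenate([u2, c2]), 32)
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding='same')(c4)
    c5 = conv_block(layers.concatenate([u1, c1]), 16)
    outputs = layers.Conv2D(n_classes, 1, activation='softmax')(c5)  # per-pixel class scores
    model = Model(inputs, outputs)
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model

model = small_unet()
model.summary()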
4. Your mental model of "accurate results"
This is the hardest part, because it evolves during your project. E.g., the moment your network starts to perform well on preprocessed input images, you will start to think about architecture/data changes to make it work on fuzzy images as well. This is why you should start with a feasible problem but always improve your dataset (including rare kinds of roots) and tune your network architecture step by step.

Tensorflow faster rcnn giving good detection but still detecting false positives with coco objects

I have used the tensorflow API to detect the Guinness harp using the process described here - https://pythonprogramming.net/introduction-use-tensorflow-object-detection-api-tutorial/.
I have mostly good results; whenever the logo is clear in the image it finds it nicely -
However, after retraining from a COCO checkpoint, it still detects what I think are COCO objects (i.e. people, magazines) with a very high confidence rating. I cannot work out why this is.
(see below)
I am using the faster_rcnn_inception_v2_coco.config found here - https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/faster_rcnn_inception_v2_coco.config
Training for more steps does not seem to help, as the total loss averages out. The above screenshots were from 10,000 training steps. I am training on a CPU.
I am augmenting my training images using imgaug, and an example training image can be seen below (I have included the debug bounding box around the target) -
However, if the training images were the problem, wouldn't the model have trouble detecting the target altogether?
I had a similar issue recently; it somewhat looks like a case of underfitting. I tried multiple things to improve the results.
The thing that actually worked for me was augmenting the data using the library imgaug. You can augment the images as well as the bounding boxes using a simple script; try to increase the dataset by, say, 10-12 fold.
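A minimal sketch of that kind of script with imgaug (assuming a reasonably recent imgaug version that supports the one-call augmentation interface; the file name, box coordinates, and augmenter choices are placeholders):

import imgaug.augmenters as iaa
from imgaug.augmentables.bbs import BoundingBox, BoundingBoxesOnImage
import imageio

image = imageio.imread('logo_example.jpg')                        # hypothetical training image
bbs = BoundingBoxesOnImage(
    [BoundingBox(x1=120, y1=80, x2=340, y2=260, label='harp')],   # placeholder coordinates
    shape=image.shape)

seq = iaa.Sequential([
    iaa.Fliplr(0.5),                                  # horizontal flip 50% of the time
    iaa.Affine(rotate=(-15, 15), scale=(0.8, 1.2)),   # small rotations and scaling
    iaa.AdditiveGaussianNoise(scale=(0, 10)),
    iaa.Multiply((0.8, 1.2)),                         # brightness jitter
])

for i in range(10):                                   # 10x expansion of this one image
    image_aug, bbs_aug = seq(image=image, bounding_boxes=bbs)
    bbs_aug = bbs_aug.remove_out_of_image().clip_out_of_image()
    imageio.imwrite(f'aug_{i}.jpg', image_aug)
    # write bbs_aug back into your XML/CSV annotations here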
I would also suggest adding some background images, i.e. images with no object; it was recommended by a few people in the TensorFlow discussion in the issues.
Try training on the dataset again and monitor it using TensorBoard. I think you will be able to reduce the number of false positives significantly.

Issue with custom object detection using tensorflow when training on a single type of object

I am training a pre-built TensorFlow-based model for custom object detection.
I want to detect only one type of object. I have taken a lot of images from different angles and in different light conditions. I am training on a K80 Nvidia GPU. Everything is working, and when I train I can see the loss function falling to 0.3, but the loss value drops very quickly to under 1 when I start training. I am using SSD MobileNet as the base configuration for the model. When I try to test the model, it just draws a big square on the input image rather than detecting the desired object in the image. Basically, it fails to detect the object.
I tried to train the model with a different set of images of mac n cheese which had a lot of variation. Then the model worked fine and detected mac n cheese in the input image. But when I have pictures of a single object, the model fails to detect it. Please help me understand what I am doing wrong here.
The issue was with my training dataset. I was not properly cropping the object from the original image. Also, I needed around 300 images to properly train the model. SSD worked well after I provided well-cropped images.

Tensorflow high false-positive rate and non-max-suppression issue

I am training TensorFlow Object Detection on Windows 10 using faster_rcnn_inception_v2_coco as the pretrained model, with tensorflow-gpu 1.6 on an NVIDIA GeForce GTX 1080, CUDA 9.0 and cuDNN 7.0.
My dataset contains only one object, "Pistol", and 3000 images (2700 in the train set, 300 in the test set). The image sizes range from ~100x200 to ~800x600.
I trained this model for 55k iterations, where the mAP was ~0.8 and the TotalLoss seemed to converge to 0.001. However, looking at the evaluation, there are a lot of multiple bounding boxes on the same detected object (e.g. this and this) and a lot of false positives (a house detected as a pistol). For example, in this photo taken by me (a blur filter was applied later), the model detects a person and a car as pistols, in addition to the correct detection.
The dataset is uploaded here, together with the tfrecords and the label map.
I used this config file, where the only things that I changed are: num_classes to 1, the fine_tune_checkpoint, input_path and label_map_path for train and eval, and num_examples.
Since I thought that the multiple boxes were a non-max-suppression problem, I changed the score_threshold (line 73) from 0 to 0.01 and the iou_threshold (line 74) from 1 to 0.6. With the standard values the outcome was much worse than this.
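For intuition about what those two values control, here is a standalone sketch using tf.image.non_max_suppression from recent TF versions; it is not the API's actual post-processing code path, and the boxes/scores are made-up numbers:

import tensorflow as tf

# Dummy detections in [ymin, xmin, ymax, xmax] normalized coordinates.
boxes = tf.constant([[0.10, 0.10, 0.50, 0.50],
                     [0.12, 0.11, 0.52, 0.49],   # near-duplicate of the first box
                     [0.60, 0.60, 0.90, 0.90]])  # far-away, low-confidence box
scores = tf.constant([0.90, 0.85, 0.02])

keep = tf.image.non_max_suppression(
    boxes, scores,
    max_output_size=10,
    iou_threshold=0.6,       # kept boxes suppress others that overlap them more than this
    score_threshold=0.01)    # boxes below this confidence are discarded up front

print(keep.numpy())          # TF 2.x eager: [0 2] - the duplicate is suppressed, the 0.02 box survives

With score_threshold as low as 0.01, weak false positives like the third box still come through; raising it (or filtering the final detections by confidence at inference time) hides them, while iou_threshold only controls how aggressively overlapping boxes are merged.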
What can I do to have a good detection? What should I change? Maybe I miss something about parameters tuning...
Thanks
I think that before diving into parameter tuning (i.e. the mentioned score_threshold) you will have to review your dataset.
I didn't check the entire dataset you shared, but from a high-level view the main problem I found is that most of the images are really small and have a highly variable aspect ratio.
In my opinion this enters in conflict with this part of your configuration file:
image_resizer {
  keep_aspect_ratio_resizer {
    min_dimension: 600
    max_dimension: 1024
  }
}
If you take one of the images from your dataset and manually apply that transformation, you will see that the result is very noisy for small images and very deformed for the many images that have a different aspect ratio.
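One quick way to see this is to reproduce the resizer's scaling rule in a few lines. This is my approximation of keep_aspect_ratio_resizer (scale the short side up to min_dimension unless the long side would exceed max_dimension), and the file name and size are hypothetical:

from PIL import Image

def keep_aspect_ratio_resize(img, min_dimension=600, max_dimension=1024):
    # Approximate the TF OD API keep_aspect_ratio_resizer: scale so the short side
    # reaches min_dimension, unless that would push the long side past max_dimension,
    # in which case the long side is capped instead.
    w, h = img.size
    scale = min_dimension / min(w, h)
    if max(w, h) * scale > max_dimension:
        scale = max_dimension / max(w, h)
    return img.resize((int(round(w * scale)), int(round(h * scale))), Image.BILINEAR)

img = Image.open('pistol_0001.jpg')       # hypothetical ~120x240 training image
print(img.size, '->', keep_aspect_ratio_resize(img).size)
# A ~120x240 crop gets upsampled roughly 4x, which is where the blur/noise comes from.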
I would highly recommend that you re-build your dataset with higher-definition images and maybe try to preprocess the images with unusual aspect ratios with padding, cropping or other strategies.
If you want to stick with the small images, you'd have to at least change the min and max dimensions of the image_resizer; but, from my experience, the biggest problem here is the dataset and I would invest the time in trying to fix that.
P.S.
I don't see the house false positive as a big problem if we consider that it's from a totally different domain than your dataset.
You could probably adjust the minimum confidence required to consider a detection a true positive and remove it.
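A minimal sketch of that kind of confidence filter on the standard detection output dictionary (the 0.6 cut-off is an arbitrary example value to tune on a validation set):

import numpy as np

def filter_detections(output_dict, min_score=0.6):
    # Keep only detections whose score clears min_score.
    scores = np.asarray(output_dict['detection_scores'])
    keep = scores >= min_score
    return {
        'detection_boxes': np.asarray(output_dict['detection_boxes'])[keep],
        'detection_classes': np.asarray(output_dict['detection_classes'])[keep],
        'detection_scores': scores[keep],
    }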
If you take the current winner of COCO and feed it with strange images like from a cartoon you will see that it generates a lot of false positives.
So it's more of a problem with the current object detection approaches, which are not robust to domain changes.
A lot of people I see online have been running into the same issue using the Tensorflow API. I think there are some inherent problems with the idea/process of using the pretrained models with custom classifier(s) at home. For example, people want to use SSD Mobile or Faster RCNN Inception to detect objects like "Person w/ helmet," "pistol," or "tool box," etc. The general process is to feed in images of that object, but most of the time, no matter how many images (200 to 2000), you still end up with false positives when you actually run it at your desk.
The object classifier works great when you show it the object in its own context, but you end up getting 99% matches on everyday items like your bedroom window, your desk, your computer monitor, keyboard, etc. People have mentioned the strategy of introducing negative images or soft images. I think the problem has to do with the limited context in the images that most people use. The pretrained models were trained with many classifiers in a wide variety of environments. In one example there could be a car on the street: the CNN sees the car, and then everything in that image that is not a car is a negative example, which includes the street, buildings, sky, etc. In another image, it can see a bottle and everything else in that image, which includes desks, tables, windows, etc.
I think the problem with training custom classifiers is that it is a negative-image problem. Even if you have enough images of the object itself, there isn't enough data of that same object in different contexts and backgrounds. So in a sense, there are not enough negative images, even if conceptually you shouldn't need negative images. When you run the algorithm at home you get false positives all over the place, identifying objects around your own room. I think the idea of transfer learning in this way is flawed. We just end up seeing a lot of great tutorials online of people identifying playing cards, Millennium Falcons, etc., but none of those models are deployable in the real world, as they would all generate a bunch of false positives when they see anything outside of their image pool.
The best strategy would be to retrain the CNN from scratch with multiple classifiers and add the desired ones in there as well. I suggest re-introducing a previous dataset from ImageNet or Pascal with 10-20 pre-existing classifiers, adding your own, and retraining it.