I am making a real-time object detector as my project. I have the following doubts:
1) How many images of each item should I take to train accurately?
2) Will a model that was earlier trained on different objects still detect those objects if I use it to train on other objects?
3) Which object detector model should I use?
1) With TensorFlow you can start with 150-200 images of each class to begin testing with some decent initial results. You may have to increase the number of images based on the results.
2) Yes
3) You could start with any of the models, e.g. ssd_mobilenet_v1_coco.
Here are all of the models available that are trained on the COCO dataset:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
Each of the pre-trained models differs from the others in terms of detection speed, accuracy, etc.; pick one based on your needs.
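For illustration, a minimal sketch of downloading and unpacking one of the zoo checkpoints (the archive name follows the zoo's dated release pattern and may have changed since):

    import tarfile
    import urllib.request

    # Fetch ssd_mobilenet_v1_coco from the model zoo; the exact archive
    # name is a dated release and may need updating.
    MODEL = "ssd_mobilenet_v1_coco_2018_01_28"
    URL = "http://download.tensorflow.org/models/object_detection/%s.tar.gz" % MODEL

    urllib.request.urlretrieve(URL, MODEL + ".tar.gz")
    with tarfile.open(MODEL + ".tar.gz") as tar:
        tar.extractall()  # contains the frozen graph and checkpoint files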
Additionally, it seems you are new to object detection; refer to the following articles if you need a starting point:
https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/
https://towardsdatascience.com/building-a-toy-detector-with-tensorflow-object-detection-api-63c0fdf2ac95
https://medium.com/@dana.yu/training-a-custom-object-detection-model-41093ddc5797
I am re-training the SSD MobileNet with 900 images from the Berkeley Deep Drive dataset, and evaluating against 100 images from that dataset.
The problem is that after about 24 hours of training, the TotalLoss seems unable to go below 2.0, and the corresponding mAP score is quite unstable.
In fact, I have actually trained for about 48 hours, and the TotalLoss just cannot go below 2.0; it hovers around 2.5~3.0. And during that time, the mAP is even lower.
So here is my question: given my situation (I really don't need a "high-precision" model; as you can see, I picked 900 images for training and would simply like to do a PoC model training/prediction, and that's it), when should I stop the training and obtain a reasonably performing model?
Indeed, for detection you need to fine-tune the network. Since you are using SSD, there are already some sources out there:
https://gluon-cv.mxnet.io/build/examples_detection/finetune_detection.html (this one is specifically for an SSD model; it uses MXNet, but you can apply the same approach in TF)
You can watch a very nice fine-tuning intro here
This repo has a nice fine-tuning option enabled as long as you write your own dataloader; check it out here
In general, your error can be attributed to many factors: the learning rate you are using, or the characteristics of the images themselves (are they normalized?). If the SSD network you are using was trained with normalized data and you don't normalize when retraining, you'll get stuck while learning. Also, what learning rate are you using?
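As a rough sketch, assuming the checkpoint used Inception-style preprocessing (pixels scaled to [-1, 1]; verify against your model's feature extractor):

    import tensorflow as tf

    def preprocess(image):
        # Match the model's fixed input size (300x300 for most SSD configs);
        # tf.image.resize returns float32.
        image = tf.image.resize(image, (300, 300))
        # Assumption: Inception-style scaling from [0, 255] to [-1, 1].
        return image / 127.5 - 1.0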
From the model zoo you can see that there are SSD models trained on COCO, as well as models trained on Open Images.
If, for example, you are using ssd_inception_v2_coco, there is a truncated_normal_initializer in the input layers, so take that into consideration; also make sure the input sizes match what you provide to the model.
You can get very good detections even with little data if you also include many augmentations and take into account the rest of the points mentioned above; more details on your code would help pinpoint where the problem lies.
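One detail that is easy to get wrong: detection augmentations must transform the boxes together with the image. A minimal horizontal-flip sketch, using the normalized [ymin, xmin, ymax, xmax] box convention of the TF Object Detection API:

    import tensorflow as tf

    def flip_left_right(image, boxes):
        # boxes: [N, 4] tensor of normalized [ymin, xmin, ymax, xmax].
        image = tf.image.flip_left_right(image)
        ymin, xmin, ymax, xmax = tf.unstack(boxes, axis=1)
        # Mirror the x coordinates; y coordinates are unchanged.
        boxes = tf.stack([ymin, 1.0 - xmax, ymax, 1.0 - xmin], axis=1)
        return image, boxes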
OpenCV provides a simple API to detect and extract faces from given images. (I do not think it works perfectly, though, because in my experience it sometimes crops regions from the input pictures that have nothing to do with faces.)
I wonder if a TensorFlow API can be used for face detection. I failed to find relevant information, but I am hoping that someone experienced in the field can guide me on this subject. Can TensorFlow's Object Detection API be used for face detection in the same way OpenCV does? (I mean, you just call the API function and it gives you the face image from the given input image.)
You can, but some work is needed.
First, take a look at the object detection README. There are some useful articles you should follow. Specifically: (1) Configuring an object detection pipeline, (2) Preparing inputs, and (3) Running locally. You should start with an existing architecture and a pre-trained model. Pre-trained models can be found in the Model Zoo, and their corresponding configuration files can be found here.
The most common pre-trained models in the Model Zoo are trained on the COCO dataset. Unfortunately, this dataset doesn't contain face as a class (but it does contain person).
Instead, you can start with a pre-trained model on Open Images, such as faster_rcnn_inception_resnet_v2_atrous_oid, which does contain face as a class.
Note that this model is larger and slower than common architectures used on the COCO dataset, such as SSDLite over MobileNetV1/V2. This is because Open Images has many more classes than COCO, and a well-working model therefore needs to be much more expressive in order to distinguish between the large number of classes and localize them correctly.
Since you only want face detection, you can try the following two options:
If you're okay with a slower model that will probably give better accuracy, start with faster_rcnn_inception_resnet_v2_atrous_oid; you then only need to slightly fine-tune the model on the single face class.
If you want a faster model, you should probably start with something like SSDLite-MobileNetV2 pre-trained on COCO, and then fine-tune it on the face class from a different dataset, such as your own or the face subset of Open Images.
Note that the fact that a pre-trained model wasn't trained on faces doesn't mean you can't fine-tune it to detect them; it just might take more fine-tuning than a model that was also pre-trained on faces.
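Once fine-tuned and exported, filtering detections down to faces might look like this sketch (the paths and FACE_CLASS_ID are placeholders, and a TF2-style SavedModel export is assumed):

    import tensorflow as tf

    # Hypothetical path to an exported detection model.
    detect_fn = tf.saved_model.load("exported_model/saved_model")

    image = tf.io.decode_jpeg(tf.io.read_file("test.jpg"), channels=3)
    detections = detect_fn(tf.expand_dims(image, 0))

    # Standard output keys of the Object Detection API.
    boxes = detections["detection_boxes"][0]
    scores = detections["detection_scores"][0]
    classes = tf.cast(detections["detection_classes"][0], tf.int32)

    FACE_CLASS_ID = 1  # placeholder: depends on your label map
    keep = (scores > 0.5) & (classes == FACE_CLASS_ID)
    face_boxes = tf.boolean_mask(boxes, keep)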
Just increase the input shape; I tried it and it works much better.
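With the Object Detection API, the input size lives in the image_resizer block of pipeline.config; a sketch of bumping it programmatically with the API's config_util (paths are placeholders):

    from object_detection.utils import config_util

    configs = config_util.get_configs_from_pipeline_file("pipeline.config")
    resizer = configs["model"].ssd.image_resizer.fixed_shape_resizer
    resizer.height = 640  # up from the usual 300 for SSD MobileNet configs
    resizer.width = 640

    pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
    config_util.save_pipeline_config(pipeline_proto, "output_dir")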
I am looking for a pre-trained deep learning model that can recognise an object in an image. Usually the images are of the type used on shopping websites for products, and I want to recognise which product is in the image. I have come across pre-trained models like VGG and Inception, but they seem to be trained on a fairly small set of general objects, around 1000 classes. I am looking for something trained on more like 10000 classes or more.
I think the best way to do this is to build your own training set with the labels you need to predict, then take an existing pre-trained model like VGG, remove the last fully connected layers, and train the model on your data; this process is called transfer learning. Some more info here.
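A minimal Keras sketch of that idea, where NUM_CLASSES and the training data pipeline are placeholders you would supply:

    import tensorflow as tf

    NUM_CLASSES = 10000  # placeholder: number of product labels

    # Frozen VGG16 backbone with the original classifier head removed.
    base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                       input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, ...) once you have built your labelled dataset.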
I am new to object detection and trying to retrain the object-detection API in TensorFlow to detect a specific car model in photos. When preparing my own training data to retrain the model, besides things like drawing bounding boxes, my question is: should I also prepare negative examples (cars that are not the model I am interested in) in the training data to reach good performance?
I have read through some tutorials, and they usually give an example of detecting one type of object, with training data labelled only for that type. I was thinking: since the model first proposes regions of interest and then tries to classify those regions, should I also prepare negative examples if I want to detect very specific things in photos?
I am retraining a faster_rcnn-based model. Thanks for the help.
Yes, you will also need negative examples for better performance. It seems you are thinking about using transfer learning to train a pre-trained faster_rcnn model to add a new class for your custom car. You should start with an equal number of positive and negative examples (images with labelled bounding boxes). You will need examples of several negative classes (e.g. negative car type 1, negative car type 2, negative car type 3) in addition to your target car type.
You can look at example training data with one positive class and several negative classes for transfer learning in the data folder of my GitHub repo: PSV Detector Github
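For illustration, a label map with the target class plus explicit negative car classes (class names are placeholders) can be written in the pbtxt format the Object Detection API expects:

    ITEMS = ["target_car", "negative_car_1", "negative_car_2", "negative_car_3"]

    with open("label_map.pbtxt", "w") as f:
        for idx, name in enumerate(ITEMS, start=1):
            f.write("item {\n  id: %d\n  name: '%s'\n}\n" % (idx, name))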
I've trained a model on a custom dataset (Garfield images) with the TensorFlow Object Detection API (ssd_mobilenet_v1 model) and am using it in the Android sample application available in the TensorFlow repository. The application can only detect the images at distances of approximately 20 cm or less.
Do you have any clue about how I can improve the model to perform recognition at longer distances (about 30 cm or more)?
I don't know whether this limitation is related to the input size I'm using (tested with 300x300 and 68x68 images) or whether custom data augmentation is needed to improve it.
SSD models are known to have worse performance on small objects. Have you tried using one of our FasterRCNN models to see if the result is acceptable?