Given that I have Harris corner detection and Canny edge detection working, what is the simplest way to determine whether two images have corresponding edges or shapes? The intended use case is classifying really simple shapes and objects, like a banana or a bottle.
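One simple classical option: fill the shape outlined by the Canny/contour step into a binary mask, then compare moment invariants that don't change under translation or scaling. OpenCV's `cv2.matchShapes` does exactly this with Hu moments; the NumPy sketch below shows the idea using just the first two invariants (a minimal illustration, not a full implementation):

```python
import numpy as np

def hu_signature(mask):
    """First two Hu moment invariants of a binary shape mask
    (invariant to translation and uniform scaling)."""
    ys, xs = np.nonzero(mask)
    m00 = float(len(xs))                      # zeroth moment = pixel count
    dx, dy = xs - xs.mean(), ys - ys.mean()   # centered coordinates
    eta = lambda p, q: (dx**p * dy**q).sum() / m00 ** (1 + (p + q) / 2)
    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    return np.array([e20 + e02, (e20 - e02) ** 2 + 4 * e11**2])

def shape_distance(mask_a, mask_b):
    """Small distance -> similar shapes, regardless of position or size."""
    return np.abs(hu_signature(mask_a) - hu_signature(mask_b)).sum()
```

A small and a large square give a near-zero distance, while a square versus an elongated rectangle gives a clearly larger one; for real classes like banana vs. bottle you would tune a threshold on example pairs.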
I was wondering which would be better for the following:
I want to create a model to help distinguish between car models; take the Mercedes C250 (2014) and Mercedes C63 (2014) as an example.
I understand that object detection helps to identify multiple, well... objects in a given image. However, looking at a tutorial online and seeing how IBM Cloud lets you annotate such specifics, say the badge on the car, certain detailing, etc., would object detection work better for me than just an image classifier?
I understand that the more data is fed in, the better the results, but in a general sense, what should the approach be? An image classifier, object detection, or maybe something else? I've used and trained multiple image classifiers, but I am not at all happy with the results.
Any help or suggestions would be much appreciated.
Object detection is the better choice, because a simple image classifier breaks down if there is more than one car in a photo.
At the moment I am studying neural networks. I tried different models to recognize people and came across a very interesting question. I used YOLO v3 and Mask R-CNN, but in photos taken from an indirect angle, all of them missed people. Which of the existing models is the most accurate and effective?
This is the main problem with deep learning models. For every instance of an object you want to detect, there should be at least one similar object (in terms of angle, size, color, shape, etc.) in the training set. The more similar objects there are in the training data, the higher the probability of the object being detected.
In terms of speed and accuracy, YOLO v3 is currently one of the best. Mask R-CNN is also one of the best models if you want the exact boundaries of the object (segmentation). If there is no need for exact object boundaries, I would recommend YOLO for its efficiency. You can work on your training data and try to add multiple instances of people with different sizes, angles, and shapes, and also include cases of truncation and occlusion (when only part of a person is visible) to make the model generalize better.
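To make that concrete, here is a minimal sketch (function names are my own, not from any particular library) of the kind of cheap augmentations that add size and truncation variety to a training set: a random horizontal flip plus a random crop that keeps only part of the person:

```python
import numpy as np

def augment(img, rng):
    """Return a randomly flipped and randomly cropped copy of an HxWxC image.
    The crop keeps 60-100% of each side, simulating partial visibility
    (truncation) of the person in the frame."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                     # horizontal flip
    h, w = img.shape[:2]
    ch = rng.integers(int(0.6 * h), h + 1)     # crop height
    cw = rng.integers(int(0.6 * w), w + 1)     # crop width
    y0 = rng.integers(0, h - ch + 1)
    x0 = rng.integers(0, w - cw + 1)
    return img[y0:y0 + ch, x0:x0 + cw]
```

Applying this on the fly during training multiplies the effective variety of the dataset without collecting new photos; rotation and scaling would address the viewing-angle issue more directly but need interpolation, so they are left out of this sketch.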
I am new to the machine learning field, and based on what I have seen on YouTube and read on the internet, I conjectured that it might be possible to count pedestrians in a video using TensorFlow's object detection API.
Consequently, I did some research on TensorFlow, read the documentation on how to install it, and finally downloaded and installed it. Using the sample files provided on GitHub, I adapted the code from the object_detection notebook provided here: https://github.com/tensorflow/models/tree/master/research/object_detection.
I executed the adapted code on the videos I collected, modifying the visualization_utils.py script to report the number of objects that cross a defined region of interest on the screen. That is, I collected the bounding box dimensions (left, right, top, bottom) of the person class and counted all detections that crossed the defined region of interest (imagine a pair of virtual vertical lines on the video frame with fixed left and right pixel values, and compare each detected bounding box's left and right values against those predefined values).
However, with this procedure I am missing a lot of pedestrians even though they are detected by the program. That is, the program correctly classifies them as persons, but sometimes they don't meet the counting criteria I defined, so they are not counted. I want to know whether there is a better way to count unique pedestrians than the simplistic method I am developing. Is my approach the right one? Are there better approaches? I would appreciate any kind of help.
Please go easy on me as I am not a machine learning expert and just a novice.
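For reference, the virtual-line counting rule described above can be sketched as follows (`roi_left`/`roi_right` are hypothetical pixel bounds of the band, not values from the original code). Note that testing whether a box *overlaps* the band, rather than requiring both edges to fall inside it, is one easy way to stop wide boxes from being skipped:

```python
def overlaps_band(box, roi_left, roi_right):
    """box = (left, right, top, bottom) in pixels, as collected from the detector.
    True if the box's horizontal extent overlaps the band [roi_left, roi_right]."""
    left, right, _, _ = box
    return right >= roi_left and left <= roi_right

def count_in_band(boxes, roi_left, roi_right):
    """Count the detections in one frame that touch the virtual band."""
    return sum(overlaps_band(b, roi_left, roi_right) for b in boxes)
```

This still counts per frame; counting *unique* pedestrians across a whole video additionally requires associating boxes between consecutive frames (i.e. tracking), which is the real gap in any single-frame rule.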
You are using a pretrained model which is trained to identify people in general. I think you're saying that some people are pedestrians whereas others are not: for example, someone standing waiting at the light is a pedestrian, but someone standing in their garden behind the street is not.
If I'm right, then you've reached the limitations of what you'll get with this model and you will probably have to train a model yourself to do what you want.
Since you're new to ML, building your own dataset and training your own model probably sounds like a tall order; there's a learning curve, to be sure. So I'll suggest the easiest way forward: use the object detection model to identify people, then train a new binary classification model (about the easiest model to train) to decide whether a particular person is a pedestrian or not (you will create a dataset of images and 1/0 labels marking each as pedestrian or not). I suggest this because a binary classification model is about as easy a model as you can get, and there are dozens of tutorials you can follow. Here's a good one:
https://github.com/aymericdamien/TensorFlow-Examples/blob/master/notebooks/3_NeuralNetworks/neural_network.ipynb
A few things to note when doing this:
When you build your dataset you will want a set of images, at least a few thousand, along with a 1/0 label for each (pedestrian or not pedestrian).
You will get much better results if you start with a model pretrained on ImageNet than if you train it from scratch (though this might be a reasonable step 2, as it's an extra task), especially if you only have a few thousand images to train it on.
Since your images will have multiple people in them, you have the problem of indicating which person you want the model to classify as a pedestrian or not. There's no one right way to do this necessarily. If you draw a yellow box around the target person, the network may successfully learn this notation. Another valid approach might be to remove the other detected people from the image by blacking out those areas. Centering the crop on the target person may also be a reasonable approach.
My last bullet point illustrates a problem with the idea as I've proposed it. The best solution would be to alter the object detection network to output both a bounding box per person and a pedestrian/non-pedestrian classification alongside it, or to train the model to identify pedestrians specifically in the first place. I mention this as the more optimal route, but I consider it a more advanced task than my first suggestion, with a more complex dataset to manage. It's probably not the first thing you want to tackle as you learn your way around ML.
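As an illustration of how little machinery a binary classifier needs, here is plain logistic regression in NumPy over feature vectors (e.g. flattened person crops or embeddings). This is a didactic sketch only; in practice you would follow the linked tutorial and fine-tune a pretrained network:

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=500):
    """Fit weights w and bias b by gradient descent on the logistic loss.
    X: (n, d) float features; y: (n,) labels in {0, 1} (pedestrian or not)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(pedestrian)
        g = p - y                                 # gradient of loss w.r.t. logit
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def predict(w, b, X):
    """1 where P(pedestrian) > 0.5, else 0."""
    return (X @ w + b > 0).astype(int)
```

Training on a few thousand labeled crops with this kind of model is a weekend project, which is exactly why it makes a good first step before tackling a custom detection network.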
I am able to use TensorFlow to train a model on my own dataset. For example, I have trained a model to detect only safety helmets, and the results are good.
My plan for the next step is to classify the detected safety helmets by color, but I am still searching for a method.
I am wondering whether I should retrain the model with a different label map, like [item1 red_helmet] [item2 blue_helmet], and relabel my training dataset accordingly. Or is there another, simpler way to achieve the same outcome?
You already have the region of interest in the picture.
All you need to do is extract the helmets from the picture and pass the cropped images to an OpenCV routine that can detect colors.
That's it, you are done :)
We have been using TensorFlow for image classification, and we have all seen the results for the Admiral Grace Hopper image; we get:
military uniform (866): 0.647296
suit (794): 0.0477196
academic gown (896): 0.0232411
bow tie (817): 0.0157356
bolo tie (940): 0.0145024
I was wondering if there is any way to get the coordinates for each category within the image.
TensorFlow doesn't have sample code yet for image detection and localization, but it's an open research problem with different approaches using deep nets; for example, you can look up the papers on the algorithms OverFeat and YOLO (You Only Look Once).
Also, there is usually some preprocessing of the object coordinate labels, or postprocessing to suppress duplicate detections (non-maximum suppression). Usually a second, different network is used to classify the object once it's detected.
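The duplicate-suppression step is typically non-maximum suppression: keep the highest-scoring box, drop every remaining box that overlaps it too much, and repeat. A minimal sketch (the 0.5 IoU threshold is a common choice, not a fixed rule):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    """Return indices of boxes kept after greedy non-maximum suppression."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)                # highest remaining score wins
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep
```

Detectors such as YOLO apply exactly this kind of greedy pass over their raw box proposals before reporting final detections.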