Can Detectron2 identify human faces or not? - object-detection

I want to know whether Detectron2 can identify human faces or not.
I know Detectron2 is an object detection framework, so is this possible or not?

Yes, it is also possible to detect faces with Detectron2. You can either train your own model or use a pre-trained model, as shown for example here.
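As a rough illustration, here is a minimal inference sketch using a pre-trained model from the Detectron2 model zoo. Note that the COCO-trained models have no "face" class, only "person", so for real face detection you would fine-tune on a face-labelled dataset (e.g. WIDER FACE); the input path is a placeholder:

    import cv2
    from detectron2 import model_zoo
    from detectron2.config import get_cfg
    from detectron2.engine import DefaultPredictor

    # Build a config for a COCO-pretrained Faster R-CNN and download its weights.
    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
    cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
    cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # confidence threshold

    predictor = DefaultPredictor(cfg)
    image = cv2.imread("input.jpg")  # placeholder path
    outputs = predictor(image)

    # COCO class 0 is "person"; a COCO model cannot do better than this for
    # faces without fine-tuning on a face-labelled dataset.
    instances = outputs["instances"]
    persons = instances[instances.pred_classes == 0]
    print(persons.pred_boxes)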

Related

CoreML - Image Classifier vs Object Detection

I was wondering which would be better for the following:
I want to create a model to help distinguish car models; take the Mercedes C250 (2014) and Mercedes C63 (2014) as an example.
I understand object detection helps to identify multiple, well... objects in a given image. However, looking at a tutorial online and seeing how IBM Cloud lets you annotate specifics, say the badge on the car, certain detailing, etc., would object detection work better for me than just an image classifier?
I understand that the more data that is fed in, the better the results, but in a general sense, what should be the approach? Image classifier or object detection? Or maybe something else? I've used and trained multiple image classifiers but I am not happy at all with the results.
Any help or suggestions would be much appreciated.
Object detection is the better choice, because a simple image classifier breaks down if there is more than one car in a photo.
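To make that concrete, a common pattern is a two-stage pipeline: detect and crop each car first, then run a fine-grained classifier on each crop. A hypothetical sketch in Python (detector and classifier are stand-ins for whatever trained models you use):

    # Hypothetical two-stage pipeline: detect every car first, then classify each crop.
    # `detector` and `classifier` are stand-ins for trained models.
    def classify_cars(image, detector, classifier):
        results = []
        for (x1, y1, x2, y2), score in detector(image):  # one box per detected car
            crop = image[y1:y2, x1:x2]                   # isolate a single car
            label = classifier(crop)                     # e.g. "C250" vs. "C63"
            results.append(((x1, y1, x2, y2), label))
        return results

This way the fine-grained classifier only ever sees one car at a time, which is exactly what a whole-image classifier cannot guarantee.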

How does custom object detection actually work?

I am currently testing custom object detection using the TensorFlow API, but I don't quite understand the theory behind it.
If I, for example, download a version of MobileNet and use it to train on, let's say, red and green apples, does it forget all the things that it has already been trained on? And if so, why is it then beneficial to use MobileNet over building a CNN from scratch?
Thanks for any answers!
Does it forget all the things that it has already been trained on?
Yes, if you re-train a CNN previously trained on a large database with a new database containing fewer classes, it will "forget" the old classes. However, the old pre-training can help with learning the new classes; this training strategy is called "transfer learning" or "fine-tuning", depending on the exact approach.
As a rule of thumb it is generally not a good idea to create a new network architecture from scratch as better networks probably already exist. You may want to implement your custom architecture if:
You are learning CNNs and deep learning
You have a specific need and you proved that other architectures won't fit or will perform poorly
Usually, one takes an existing pre-trained network and specializes it for their specific task using transfer learning; a small sketch of this recipe follows.
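As a minimal sketch of that transfer-learning recipe with Keras, assuming the red/green apple example from the question (the train_ds/val_ds dataset names are placeholders for data you would load yourself):

    import tensorflow as tf

    # Load MobileNetV2 pre-trained on ImageNet, without its 1000-class head.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False  # freeze the pre-trained feature extractor

    # Attach a fresh 2-class head: red apple vs. green apple.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=5)  # placeholder datasets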
A lot of scientific literature is available for free online if you want to learn more. You can start with the YOLO series, and with R-CNN, Fast R-CNN and Faster R-CNN for detection networks.
The main concept behind object detection is that it divides the input image into a grid of N patches, and then, for each patch, it generates a set of sub-patches with different aspect ratios; let's say it generates M rectangular sub-patches. In total you need to classify M×N images.
The idea is then to analyze each sub-patch within each patch. You pass the sub-patch to the classifier in your model and, depending on the model's training, it will classify it as containing a green apple / red apple / nothing. If it is classified as a red apple, then this sub-patch is the bounding box of the object detected.
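A toy illustration of that grid-and-aspect-ratio idea in Python (a simplified stand-in for real anchor generation; the grid size, scales and ratios are made-up defaults):

    def generate_anchors(img_w, img_h, grid=8, scales=(0.1, 0.2), ratios=(0.5, 1.0, 2.0)):
        """Toy anchor generator: one set of (x1, y1, x2, y2) boxes per grid cell."""
        anchors = []
        for gy in range(grid):
            for gx in range(grid):
                # centre of this grid cell
                cx = (gx + 0.5) * img_w / grid
                cy = (gy + 0.5) * img_h / grid
                for s in scales:
                    for r in ratios:
                        # r is roughly the width/height aspect ratio of the sub-patch
                        w = img_w * s * (r ** 0.5)
                        h = img_h * s / (r ** 0.5)
                        anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
        return anchors

    # 8x8 grid (N = 64 patches) with 6 boxes each (M = 6) -> 384 candidate regions
    boxes = generate_anchors(640, 480)
    print(len(boxes))  # 384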
So actually, there are two parts you are interested in:
Generating as many sub-patches as possible, to cover as many portions of the image as possible (of course, the more sub-patches, the slower your model will be), and
The classifier. The classifier is normally an already existing network (MobileNet, VGG, ResNet...). This part is commonly called the "backbone", and it extracts the features of the input image. With the classifier you can either choose to train it from scratch, so that the weights are adjusted to your specific problem, OR you can load the weights from another known problem and use them in yours, so you won't need to spend time training them. In that case, the classifier will also detect the objects it was originally trained for.
Take a look at the Mask R-CNN implementation; I find it very interesting how they explain the process. In this architecture, you will not only generate a bounding box but also segment the object of interest.

Is this the correct way of using YOLO for image classification in a custom project?

I'm a beginner in computer vision. Could anyone tell me whether what I'm considering doing is correct or not? I want to detect a certain cyst in teeth, so my dataset consists of parts of dental x-rays that contain that cyst. I train my model with these pictures. The one with the colored area contains a cyst (infected teeth), and the one below it is the uninfected teeth.
Image with cyst
Uninfected teeth
After training my model, I want to use it on a full dental x-ray, and determine if this picture has the cyst or not. A full dental x-ray is shown below.
Full dental X-Ray
Does this work? Or am I completely wrong?
Instead of treating this as an object detection problem, you would get far better results if you were to treat this as a classification problem.
There are already various architectures for such classification tasks, and TensorFlow provides several of them to get you started.
Take a look at this. If you have enough data, you can train them from scratch instead of using pre-trained weights.
Note - The architectures provided in TensorFlow will almost always give you better results than architectures you create yourself.
Object detection is suitable for cases where you have well-defined objects. If you look at recently published research papers, you can see that these types of problems are treated as classification problems rather than object detection problems.
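If you do go the classification route but still want a verdict for a full x-ray, one simple option (a hypothetical sketch, not part of the original answer's tooling) is to slide the trained patch classifier over the large image:

    import numpy as np

    def scan_xray(model, image, patch=224, stride=112, threshold=0.5):
        """Slide a window over a full X-ray and collect patches flagged as 'cyst'.

        model  - a Keras-style binary classifier over (patch, patch, 3) crops (hypothetical)
        image  - the full X-ray as an HxWx3 uint8 numpy array
        """
        hits = []
        h, w = image.shape[:2]
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                crop = image[y:y + patch, x:x + patch].astype("float32") / 255.0
                prob = float(model(crop[np.newaxis])[0, 0])  # cyst probability
                if prob >= threshold:
                    hits.append((x, y, prob))
        return hits  # an empty list means "no cyst found"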

Retrain model when I am only interested in a subcategory of the existing classes

I want to use a trained model from the TensorFlow Object Detection API; specifically, I want to use faster_rcnn_inception_resnet_v2_atrous_oid_v4 trained on Google Open Images. I am not interested in detecting all 601 classes, but rather would like to detect 10 subclasses. Will I gain an improvement in accuracy if I retrain the last layer, or is it better to filter out the classes I am not interested in after the model is done with prediction? If I go with retraining, is it OK to retrain the model with images from Google Open Images again, or is it better to use different data?
This official example should help you.
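For the no-retraining option mentioned in the question, filtering after prediction can be as simple as a class whitelist. A hypothetical sketch (the class names and the detection tuple format are made up for illustration):

    # Keep only detections whose class is in a whitelist of subclasses.
    CLASSES_OF_INTEREST = {"Car", "Person", "Dog"}  # put your 10 subclasses here

    def filter_detections(detections, id_to_name):
        """detections: iterable of (class_id, score, box) from the 601-class model."""
        return [(cid, score, box)
                for cid, score, box in detections
                if id_to_name[cid] in CLASSES_OF_INTEREST]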

How to use the TensorFlow Object Detection API for face detection

OpenCV provides a simple API to detect and extract faces from given images. (I do not think it works perfectly, though, because in my experience it crops regions from the input pictures that have nothing to do with faces.)
I wonder if the TensorFlow API can be used for face detection. I failed to find relevant information, but I am hoping that maybe an experienced person in the field can guide me on this subject. Can TensorFlow's Object Detection API be used for face detection in the same way as OpenCV does? (I mean, you just call the API function and it gives you the face image from the given input image.)
You can, but some work is needed.
First, take a look at the object detection README. There are some useful articles you should follow. Specifically: (1) Configuring an object detection pipeline, (2) Preparing inputs and (3) Running locally. You should start with an existing architecture and a pre-trained model. Pre-trained models can be found in the Model Zoo, and their corresponding configuration files can be found here.
The most common pre-trained models in Model Zoo are on COCO dataset. Unfortunately this dataset doesn't contain face as a class (but does contain person).
Instead, you can start with a pre-trained model on Open Images, such as faster_rcnn_inception_resnet_v2_atrous_oid, which does contain face as a class.
Note that this model is larger and slower than common architectures used on the COCO dataset, such as SSDLite over MobileNetV1/V2. This is because Open Images has a lot more classes than COCO, and therefore a well-working model needs to be much more expressive in order to distinguish between the large number of classes and localize them correctly.
Since you only want face detection, you can try the following two options:
If you're okay with a slower model which will probably result in better performance, start with faster_rcnn_inception_resnet_v2_atrous_oid; you then only need to slightly fine-tune the model on the single class of face.
If you want a faster model, you should probably start with something like SSDLite-MobileNetV2 pre-trained on COCO, but then fine-tune it on the class of face from a different dataset, such as your own or the face subset of Open Images.
Note that the fact that a pre-trained model isn't trained on faces doesn't mean you can't fine-tune it to detect them, but rather that it might take more fine-tuning than a model which was pre-trained on faces as well.
Just increase the shape of the input; I tried it and it works much better.