One class classification of images using Neural Network - tensorflow

The goal is to classify dog vs. non-dog images. The training dataset contains only dog images. The neural network will be trained on this dataset alone and then tested on a test dataset that contains both dog and non-dog images.
I followed the Datacamp autoencoder tutorial, but in my case the autoencoder classified all test images as dogs, which is wrong.
Building a CNN for one-class classification doesn't seem possible either. Any idea how to do this?

The correct term for what you're asking about is binary classification.
And you can safely use a CNN for binary classification. The question is just how you model the output layer. You can either use a softmax layer with two output units (argmax(output) gives the class: 1 = dog, 2 = not dog) or use the more traditional approach of a single sigmoid output unit (output > 0.5: dog; output < 0.5: not dog). Since softmax normalization is the multiclass extension of the sigmoid activation, a sigmoid output makes more sense on theoretical grounds, but if you have trouble implementing it, a softmax should work just as well.
Of course, you'll have to adapt your data labels to fit either approach.
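For illustration, here is a minimal Keras sketch of both output-layer options; the tiny convolutional stem and the 128x128 input shape are placeholders, not part of the original question:

import tensorflow as tf

def stem():
    # Placeholder feature extractor; use your own CNN here.
    return [
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
    ]

# Option 1: single sigmoid unit, labels are 0 (not dog) / 1 (dog)
sigmoid_model = tf.keras.Sequential(stem() + [tf.keras.layers.Dense(1, activation="sigmoid")])
sigmoid_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Option 2: two-unit softmax, labels are one-hot: [1, 0] (dog) / [0, 1] (not dog)
softmax_model = tf.keras.Sequential(stem() + [tf.keras.layers.Dense(2, activation="softmax")])
softmax_model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])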
Edit: You need both images of dogs and images that contain no dogs; the network needs to see the distribution of both. Just think about your use case and what kind of non-dog images you expect to encounter. Then collect images that are not dogs and label them accordingly.

Related

Variational autoencoder with limited data

I'm working on a binary classification project, and I'm using a VAE (variational autoencoder) to handle the imbalance between the two classes by generating new samples for the minority class.
The first class (majority) contains 20,000 samples, and the second (minority) contains 500 samples.
After training the VAE on the minority class, I generated new samples for that class and added them to the training set. I then trained two classification models: one on the imbalanced data (the original training set only) and one on the training set plus the VAE-generated data. The problem is that the first model gives better results than the second (F1-score, ROC AUC, ...), and I suspect the cause may be the limited amount of data the VAE was trained on.
Any help, please.
Though 500 training images are not really enough for a VAE to generate diversified images, you can still try producing some. It's better to take the mean of the latents of 10 different images (or even more) and pass it through the decoder (if you're already doing this, ignore this suggestion; if you're using some other method, try this).
If that still doesn't work, I suggest you build a conditional VAE on your entire dataset. In a conditional VAE, you train the VAE with the labels so that your model learns not only reconstruction but also which class of image it is reconstructing. This lets you generate an image of any particular class.
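A rough sketch of the latent-averaging idea, assuming a standard Keras VAE whose encoder outputs (z_mean, z_log_var, z); the function and variable names here are illustrative, not from the question:

import numpy as np

def generate_from_latent_mean(encoder, decoder, minority_images, n_mix=10, n_samples=100):
    # Average the latent means of n_mix random minority-class images
    # and decode the result to produce one synthetic sample.
    samples = []
    for _ in range(n_samples):
        idx = np.random.choice(len(minority_images), size=n_mix, replace=False)
        z_mean, _, _ = encoder.predict(minority_images[idx], verbose=0)
        z = z_mean.mean(axis=0, keepdims=True)  # one averaged latent vector
        samples.append(decoder.predict(z, verbose=0)[0])
    return np.stack(samples)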

Can SigmoidFocalCrossEntropy in TensorFlow (tf-addons) be used for multiclass classification? (What is the right way?)

The focal loss provided in TensorFlow Addons is used for class imbalance. For binary classification there is plenty of example code available, but for multiclass classification there is very little. I ran the code with one-hot-encoded target variables for 250 classes and it gave me results without any error.
y = pd.get_dummies(df['target'])  # One-hot encoded target classes
model.compile(
    optimizer="adam",
    loss=tfa.losses.SigmoidFocalCrossEntropy(),
    metrics=metric,
)
I just want to know, from whoever wrote this code or someone who knows it well: can it be used for multiclass classification? If not, how come it gave me no errors, and in fact better results than cross-entropy? Also, in other implementations like this one, a value of alpha has to be given for every class, but TensorFlow's implementation takes just one value.
What is the correct way to use this?
Some basics first.
Categorical cross-entropy is designed to incentivize a model to predict 100% for the correct label. It was designed for single-label multi-class classification - like CIFAR-10 or ImageNet. These models usually finish in a Dense layer with more than one output.
Binary cross-entropy is designed to incentivize a model to predict 100% if the label is one, or 0% if the label is zero. These models usually finish in a Dense layer with exactly one output.
When you apply binary cross-entropy to a single-label multi-class classification problem, you are doing something that is mathematically valid but that defines a slightly different task: you are incentivizing a single-label classification model not only to get the true label correct but also to minimize the false labels.
For example, if your target is dog and your model predicts 60% dog, CCE doesn't care whether the model predicts 20% cat and 20% French horn, or 40% cat and 0% French horn. So CCE is aligned with a top-1 accuracy notion.
But if you take that same model and apply BCE, and your model predicts 60% dog, BCE DOES care whether the model predicts 20%/20% cat/French horn vs. 40%/0% cat/French horn. In precise terminology, the former model is more "calibrated", which is an additional measure of goodness, but one with little correlation to top-1 accuracy.
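A quick numeric check of this claim (not from the original answer): BCE distinguishes the 60/20/20 prediction from the 60/40/0 one, while CCE, which looks only at the true class, scores them identically.

import numpy as np

def bce(y_true, y_pred, eps=1e-7):
    # Mean binary cross-entropy over all classes.
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1.0, 0.0, 0.0])               # dog, cat, French horn
print(bce(y_true, np.array([0.6, 0.2, 0.2])))    # ~0.319
print(bce(y_true, np.array([0.6, 0.4, 0.0])))    # ~0.341
# CCE is -log(0.6) ~ 0.511 for both predictions.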
When you use BCE, you are presumably spending some of the model's capacity on calibration at the expense of top-1 accuracy. But as you might have seen, it doesn't always work out that way; sometimes BCE gives you superior results. I don't know of a clear explanation for this, but I'd assume the additional signals (in the case of ImageNet, you literally get 1000 times more of them) somehow create a smoother loss that perhaps helps smooth the gradients you receive.
Focal loss additionally lessens the penalty when your model predicts something close to the right answer - like predicting 90% cat when the ground truth is cat - and keeps the focus on very wrong predictions (strictly, the gamma term does this focusing; alpha is a class-balance weight). This is a shift from the original definition of CCE, which is based on the theory of maximum likelihood estimation and thus focuses on calibration, vs. the metric most ML practitioners care about: top-1 accuracy.
Focal loss was originally designed for binary classification, so the original formulation has a single alpha value. The repo you pointed to extends the concept of focal loss to single-label multi-class classification, and therefore there are multiple alpha values: one per class. However, by my reading, it loses the possible additional smoothing effect of BCE.
Net net, for the best results you'll want to benchmark CCE, BCE, binary focal loss (from TFA, per the original paper), and the single-label multi-class focal loss you found in that repo. In general, the discovery of those alpha values is done via guess-and-check or grid search.
There's a lot of manual guessing and checking in ML, unfortunately.
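A minimal sketch of that benchmark loop, assuming tensorflow_addons is installed; the stand-in architecture and the random placeholder data are illustrative only and should be replaced with your own model and dataset:

import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa

num_classes = 250

def build_model(output_activation):
    # Trivial stand-in architecture; swap in your real model.
    return tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(64, 64, 3)),
        tf.keras.layers.Dense(num_classes, activation=output_activation),
    ])

# Random placeholder data standing in for your one-hot-labelled dataset.
x_train = np.random.rand(128, 64, 64, 3).astype("float32")
y_train = tf.one_hot(np.random.randint(num_classes, size=128), num_classes)
x_val, y_val = x_train[:32], y_train[:32]

# CCE pairs with a softmax head; BCE and sigmoid focal loss pair with
# independent sigmoid outputs.
losses = {
    "cce": ("softmax", tf.keras.losses.CategoricalCrossentropy()),
    "bce": ("sigmoid", tf.keras.losses.BinaryCrossentropy()),
    "focal": ("sigmoid", tfa.losses.SigmoidFocalCrossEntropy(alpha=0.25, gamma=2.0)),
}

results = {}
for name, (activation, loss) in losses.items():
    model = build_model(activation)
    model.compile(optimizer="adam", loss=loss, metrics=["categorical_accuracy"])
    history = model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=2, verbose=0)
    results[name] = max(history.history["val_categorical_accuracy"])
print(results)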

Limiting probability percentage of irrelevant image in CNN

I am training a CNN model with five classes using the Keras library. Using the model.predict function, I get prediction percentages for the classes. My problem is that for an image which doesn't belong to any of these classes and is completely irrelevant, predict still distributes percentages across the five classes.
How do I prevent this? How do I identify the image as irrelevant?
I assume you are using a softmax activation on your last layer to generate the probabilities for each class. By definition, the outputs of the softmax activation must sum to 1. Therefore, it is impossible for the neural net to say that the image does not belong to any of your classes with your current setup.
There are two potential ways you could address this:
1. Add another class that represents "other" or "unknown" objects (so you have 6 classes).
2. Add another output to your neural net (or train a completely independent neural net) that does binary classification on whether or not the image belongs to one of the 5 classes; see the sketch below. That way, if this secondary output says the image is not in the 5 classes, you can ignore the softmax output.
In both cases, you will need to augment your dataset with images that do not fall in your 5 classes.
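A minimal sketch of the second option, assuming a Keras functional model; the backbone, layer sizes, input shape, and output names are illustrative only:

import tensorflow as tf

inputs = tf.keras.Input(shape=(128, 128, 3))
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(inputs)  # placeholder backbone
x = tf.keras.layers.GlobalAveragePooling2D()(x)
class_probs = tf.keras.layers.Dense(5, activation="softmax", name="classes")(x)
is_relevant = tf.keras.layers.Dense(1, activation="sigmoid", name="relevance")(x)

model = tf.keras.Model(inputs, [class_probs, is_relevant])
model.compile(
    optimizer="adam",
    loss={"classes": "categorical_crossentropy", "relevance": "binary_crossentropy"},
)
# At inference time, ignore class_probs whenever is_relevant < 0.5.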

Changing Inception-v4 architecture to do Multi-label classification in Tensorflow

I am working on an image tagging and annotation problem: an image may contain multiple objects. I want to train Inception-v4 for multi-label classification. My training data will be an image plus a vector whose length equals the number of classes, with a 1 at each index whose object exists in the image. For example, with four classes (person, car, tree, building), an image containing a person and a car gets the vector (1, 1, 0, 0).
What changes do I need to make to train Inception-v4 for this tagging and annotation problem?
Do I only need to change the input format and switch the loss function from softmax cross-entropy to sigmoid_cross_entropy_with_logits in the Inception-v4 architecture?
https://github.com/tensorflow/models/blob/master/slim/nets/inception_v4.py
Thank you in advance.
If you'd like to retrain a model to output different labels, check out the image_retraining example: https://github.com/tensorflow/tensorflow/blob/r1.1/tensorflow/examples/image_retraining/retrain.py
In that example, we retrain the standard Inception v3 model to recognize flowers instead of the original ImageNet categories.
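On the question's second point: the usual change for multi-label training is indeed to replace the softmax cross-entropy with an independent sigmoid cross-entropy per class. A minimal TF2-style sketch; the labels and logits below are random placeholders standing in for the data pipeline and the Inception-v4 head:

import tensorflow as tf

num_classes = 4
labels = tf.constant([[1., 1., 0., 0.]])       # multi-hot target: person + car
logits = tf.random.normal([1, num_classes])    # raw, un-activated network output

# One independent sigmoid cross-entropy per class, averaged:
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits))

# At inference, threshold each class independently instead of taking argmax:
predictions = tf.cast(tf.sigmoid(logits) > 0.5, tf.int32)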

GAN with a non-random input

I'm very interested in GANs these days.
I coded one for MNIST with the following structure:
Generator model
Discriminator model
Gen + Dis model
The generator model generates batches of images from a random distribution.
The discriminator is trained on these and on real images.
Then the discriminator is frozen inside the Gen+Dis model and the generator is trained (with the frozen discriminator judging whether the generator's output is good or not).
Now, imagine I don't want to feed my generator a random distribution but images instead (for upscaling, for example, or to generate a real image from a drawing).
Do I need to change anything in this setup?
(Apart from the conv model, which will be more complex.)
Should I keep using binary_crossentropy as the loss function?
Thank you very much!
You can indeed put a variational autoencoder (VAE) in front in order to generate the initial distribution z (see the paper).
If you are interested in the topic, I can recommend this course at Kadenze.
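A rough sketch of wiring this together in Keras; the three tiny stand-in models below are placeholders for the question's own generator and discriminator plus a trained VAE encoder:

import tensorflow as tf

latent_dim = 32

vae_encoder = tf.keras.Sequential([           # stand-in: image -> latent code z
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(latent_dim),
])
generator = tf.keras.Sequential([             # stand-in: z -> image, as before
    tf.keras.layers.Dense(28 * 28, activation="sigmoid", input_shape=(latent_dim,)),
    tf.keras.layers.Reshape((28, 28, 1)),
])
discriminator = tf.keras.Sequential([         # stand-in: image -> real/fake score
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Same freeze-then-train pattern as in the question, but the combined
# model now takes an image instead of random noise; the adversarial
# loss can stay binary_crossentropy.
discriminator.trainable = False
img = tf.keras.Input(shape=(28, 28, 1))
validity = discriminator(generator(vae_encoder(img)))
gan = tf.keras.Model(img, validity)
gan.compile(optimizer="adam", loss="binary_crossentropy")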