Which loss function and metrics are more suitable for multi-label classification? Binary or categorical cross-entropy, and why? - tensorflow

According to my knowledge (please correct me if I'm wrong):
Multi-label classification (mutually inclusive), i.e., samples might have more than one correct label (for example movie genre, disease detection, etc.).
Multi-class classification (mutually exclusive), i.e., samples always have exactly one correct label (for example cat vs. dog, object detection, etc.); this includes binary classification.
Assume the output is one-hot encoded.
Which loss function and metrics should one use for these two types?
                 loss func.              metrics
1. multi-label:  (binary, categorical)   (binary_accuracy, TopKCategoricalAccuracy, categorical_accuracy, AUC)
2. multi-class:  (binary)                (binary_accuracy, f1, recall, precision)
Please tell me, from the above table, which of them is/are suitable, which of them is/are wrong, and why.
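To make the two setups concrete, this is the kind of configuration I have in mind (a sketch only; layer sizes, input shape, and metric choices are illustrative):

import tensorflow as tf
from tensorflow.keras import layers, models

num_classes = 5  # illustrative

# Multi-label (mutually inclusive): one independent sigmoid per label + binary cross-entropy
multi_label = models.Sequential([
    layers.Dense(64, activation="relu", input_shape=(100,)),
    layers.Dense(num_classes, activation="sigmoid"),
])
multi_label.compile(optimizer="adam",
                    loss="binary_crossentropy",
                    metrics=["binary_accuracy", tf.keras.metrics.AUC(multi_label=True)])

# Multi-class (mutually exclusive): softmax over classes + categorical cross-entropy
multi_class = models.Sequential([
    layers.Dense(64, activation="relu", input_shape=(100,)),
    layers.Dense(num_classes, activation="softmax"),
])
multi_class.compile(optimizer="adam",
                    loss="categorical_crossentropy",  # labels one-hot encoded
                    metrics=["categorical_accuracy"])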

If you are doing multi-class classification and the labels (y) are one-hot encoded, use categorical crossentropy as the loss function and the Adam optimizer (it is suitable for most cases). Also, for multi-class classification the number of output nodes should be the same as the number of classes (labels). Say your model is going to classify the input into 4 classes; you can configure the output layer as follows:
model.add(Dense(4, activation="softmax"))  # Dense comes from tensorflow.keras.layers
Also, I forgot to mention that the softmax activation should be used in the output layer for multi-class classification problems.
In case your y is not one-hot encoded, I would advise choosing sparse categorical crossentropy as the loss function. No other changes will be necessary.
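For example, the only thing that changes is the label format and the loss name (a small sketch; the arrays are illustrative):

import numpy as np

# One-hot encoded labels -> categorical_crossentropy
y_onehot = np.array([[1, 0, 0, 0],
                     [0, 0, 1, 0]])
# model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Integer labels -> sparse_categorical_crossentropy (model architecture stays the same)
y_int = np.array([0, 2])
# model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])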
Also, I usually split the data into train data and test data and feed them to the model like this to get the accuracy in each epoch:
history = model.fit(train_data, validation_data=test_data, epochs=10)
Hope it solved your problem.

Related

Output Format using Sparse Categorical Cross Entropy in Keras for Multi-Class Classification

I've built a u-net architecture using the Keras Functional API but I'm having trouble using the sparse categorical cross-entropy loss function. My learning task is multi-class, pixel-wise classification for many 256x256 images. The intended output is a 256x256 mask image with integer values from 0-31 (not every mask will contain each class). I have 32 classes, so one-hot encoding gives me an OOM error, which is why I don't use categorical cross-entropy. The majority of the mask pixels are 0s (which may be part of the problem).
I keep getting loss = NaN. I've normalized my input data to have mean = 0, std = 1. If I leave the masks as they are, I get an accuracy around 0.97 and the output masks are all 1s (which is obviously incorrect). If I add 1 to all my masks before training, the accuracy is 0. I'm using relu activations with a softmax in the last convolutional layer.
It seems the problem likely has to do with the format of my output data, so my main question is, what format should it be in for sparse categorical cross entropy? Should I normalize the mask values to be 0-1? Alternatively, are there any other loss functions or accuracy metrics I can use for training? As far as multi-class classification goes the only function I know of is categorical cross entropy. I can provide additional information about my data, network, etc. if needed.
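To make the shapes concrete, here is a minimal sketch of the shape relationships I'm working with (the 1x1 convolution is only a stand-in for my real u-net head; the dummy tensors mirror the shapes described above):

import tensorflow as tf

num_classes = 32

# Dummy tensors with the shapes described in the question
images = tf.zeros((4, 256, 256, 3))                 # input images
masks  = tf.zeros((4, 256, 256), dtype=tf.int32)    # integer class ids 0-31, not one-hot

# Final layer: one score per class per pixel, softmax over the class axis
outputs = tf.keras.layers.Conv2D(num_classes, 1, activation="softmax")(images)
# outputs shape: (4, 256, 256, 32)

# Sparse categorical cross-entropy is called with the integer masks and the per-pixel probabilities
loss = tf.keras.losses.SparseCategoricalCrossentropy()(masks, outputs)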

Why not use the max value of the output tensor instead of the softmax function?

I built a CNN model for image classification, where each image belongs to a single class.
The output tensor is a list of 65 elements. I feed this tensor into the softmax function and get the classification result.
I think the index of the max value in this output tensor already gives the classification result, so why not do the classification task this way? Is it just that the softmax function can be differentiated easily?
Softmax is used for multi-class classification. In multi-class classification the model is expected to assign the input to a single class with high probability. Predicting one class with high probability forces the probabilities of the other classes to be low.
As you stated, one of the reasons one uses softmax over the max function is that the softmax function is differentiable over the real numbers, while the max function is not.
Edit:
There are some other properties of the softmax function that make it more suitable than max for neural networks. Firstly, it is a soft version of the max function. Let's say the logits of the neural network have 4 outputs: [0.5, 0.5, 0.69, 0.7]. Hard max returns 1 for the maximum index (in this case the 4th index) and 0 for the other indexes. This results in information loss.
The second important property of softmax is that its outputs are in the interval [0,1] and sum to 1. For this reason the output of the softmax function can be interpreted as a probability distribution, i.e., as the model's confidence in assigning the input to each of the output classes.
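A quick numerical illustration of both points, using the logits from above:

import numpy as np

logits = np.array([0.5, 0.5, 0.69, 0.7])

# Hard max: 1 at the argmax, 0 elsewhere -- information about the other scores is lost
hard = np.zeros_like(logits)
hard[np.argmax(logits)] = 1.0
# hard -> [0., 0., 0., 1.]

# Softmax: differentiable "soft" version, values in [0, 1] that sum to 1
soft = np.exp(logits) / np.exp(logits).sum()
# soft -> approximately [0.23, 0.23, 0.27, 0.28]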

What are the differences between all these cross-entropy losses in Keras and TensorFlow?

What are the differences between all these cross-entropy losses?
Keras is talking about
Binary cross-entropy
Categorical cross-entropy
Sparse categorical cross-entropy
While TensorFlow has
Softmax cross-entropy with logits
Sparse softmax cross-entropy with logits
Sigmoid cross-entropy with logits
What are the differences and relationships between them? What are the typical applications for them? What's the mathematical background? Are there other cross-entropy types that one should know? Are there any cross-entropy types without logits?
There is just one cross (Shannon) entropy defined as:
H(P||Q) = - SUM_i P(X=i) log Q(X=i)
In machine learning usage, P is the actual (ground truth) distribution, and Q is the predicted distribution. All the functions you listed are just helper functions which accept different ways of representing P and Q.
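As a small numpy sketch of what all these helpers ultimately compute (the distributions are illustrative):

import numpy as np

def cross_entropy(p, q, eps=1e-12):
    # H(P||Q) = - SUM_i P(X=i) log Q(X=i)
    q = np.clip(q, eps, 1.0)
    return -np.sum(p * np.log(q))

p = np.array([0.0, 1.0, 0.0])   # ground truth (hard target: class 1)
q = np.array([0.1, 0.7, 0.2])   # predicted distribution
print(cross_entropy(p, q))      # = -log(0.7), roughly 0.357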
There are basically 3 main things to consider:
there are either 2 possible outcomes (binary classification) or more. If there are just two outcomes, then Q(X=1) = 1 - Q(X=0), so a single float in (0,1) identifies the whole distribution; this is why a neural network for binary classification has a single output (and so does logistic regression). If there are K>2 possible outcomes, one has to define K outputs (one per Q(X=...)).
one either produces proper probabilities (meaning that Q(X=i)>=0 and SUM_i Q(X=i) = 1), or one just produces a "score" and has some fixed method of transforming that score into a probability. For example, a single real number can be "transformed into a probability" by taking the sigmoid, and a set of real numbers can be transformed by taking their softmax, and so on.
either there is a j such that P(X=j)=1 (there is one "true class", targets are "hard", like "this image represents a cat"), or there are "soft targets" (like "we are 60% sure this is a cat, but 40% that it is actually a dog").
Depending on these three aspects, a different helper function should be used:
                                outcomes   what is in Q   targets in P
-------------------------------------------------------------------------------
binary CE                       2          probability    any
categorical CE                  >2         probability    soft
sparse categorical CE           >2         probability    hard
sigmoid CE with logits          2          score          any
softmax CE with logits          >2         score          soft
sparse softmax CE with logits   >2         score          hard
In the end, one could just use "categorical cross entropy", as this is how it is mathematically defined; however, since things like hard targets and binary classification are very popular, modern ML libraries provide these additional helper functions to make things simpler. In particular, "stacking" sigmoid and cross-entropy might be numerically unstable, but if one knows these two operations are applied together, there is a numerically stable combined version (which is implemented in TF).
It is important to notice that if you apply the wrong helper function, the code will usually still execute, but the results will be wrong. For example, if you apply a softmax_* helper for binary classification with one output, your network will be considered to always produce "True" at the output.
As a final note, this answer considers (single-label) classification; it is slightly different in the multi-label case (when a single point can have multiple labels), as then the Ps do not sum to 1, and one should use sigmoid_cross_entropy_with_logits despite having multiple output units.
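For the multi-label case specifically, a minimal TensorFlow sketch (the tensors are illustrative): each output unit is treated as an independent binary problem, so the rows of the targets need not sum to 1.

import tensorflow as tf

y_true = tf.constant([[1., 0., 1.],
                      [0., 1., 1.]])     # several labels can be "on" at once
logits = tf.constant([[ 2.0, -1.0,  0.5],
                      [-0.5,  1.5,  2.0]])

# Element-wise sigmoid cross-entropy, one value per output unit
per_unit = tf.nn.sigmoid_cross_entropy_with_logits(labels=y_true, logits=logits)
loss = tf.reduce_mean(per_unit)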
Logits
For this purpose, "logits" can be seen as the non-activated outputs of the model.
Keras losses, by default, take an "activated" output (you must apply "sigmoid" or "softmax" before the loss),
while the TensorFlow "with logits" losses take the "non-activated" output (you should not apply "sigmoid" or "softmax" before the loss).
Losses "with logits" will apply the activation internally.
Some functions let you choose from_logits=True or from_logits=False, which tells the function whether to "apply" or "not apply" the activation itself.
Sparse
Sparse functions use the target data (ground truth) as "integer labels": 0, 1, 2, 3, 4.....
Non-sparse functions use the target data as "one-hot labels": [1,0,0], [0,1,0], [0,0,1]
Binary crossentropy = Sigmoid crossentropy
Problem type:
single class (false/true); or
non-exclusive multiclass (many classes may be correct)
Model output shape: (batch, ..., >=1)
Activation: "sigmoid"
Categorical crossentropy = Softmax crossentropy
Problem type: exclusive classes (only one class may be correct)
Model output shape: (batch, ..., >=2)
Activation: "softmax"
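Putting the "sparse" and "logits" variants side by side in Keras (a sketch; the logits are illustrative), the four calls below all compute the same cross-entropy:

import tensorflow as tf

logits    = tf.constant([[0.2, 2.0, -1.0]])
probs     = tf.nn.softmax(logits)              # "activated" output
y_one_hot = tf.constant([[0., 1., 0.]])        # non-sparse target
y_integer = tf.constant([1])                   # sparse target

# On activated outputs (probabilities)
cce  = tf.keras.losses.categorical_crossentropy(y_one_hot, probs)
scce = tf.keras.losses.sparse_categorical_crossentropy(y_integer, probs)

# On raw logits (the activation is applied internally, numerically safer)
cce_logits  = tf.keras.losses.categorical_crossentropy(y_one_hot, logits, from_logits=True)
scce_logits = tf.keras.losses.sparse_categorical_crossentropy(y_integer, logits, from_logits=True)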

Output of tf.softmax_cross_entropy_with_logits unnormalized?

I implemented a simple CNN for image classification (binary classification). I am using TensorFlow in Python.
I am using tf.softmax_cross_entropy_with_logits as the cost function. I feed the cost function with the unnormalized logits from the output layer of my model. Shouldn't the function output normalized probabilities, or am I wrong?
During training of my model I print the cost of every single example. If the model correctly predicts the output, the cost equals 0.0; otherwise the cost is a very big, unnormalized value. Since the function applies softmax to its input before calculating the cross-entropy, why is the output unnormalized?
You are confusing cross-entropy (your loss function) with softmax (the "virtual" output of your net; see below). Softmax is normalized, but cross-entropy is not: it can take arbitrarily high values to penalize bad predictions.
When you use a non-normalized net output in combination with tf.softmax_cross_entropy_with_logits, you never actually observe the softmax output: it is computed inside the cost function and remains virtual. To peek at the softmax, you can compute it explicitly by applying tf.nn.softmax to the non-normalized output of your net.
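A short sketch of the distinction (uses the tf.nn version of the op; the values are illustrative):

import tensorflow as tf

logits = tf.constant([[2.0, 0.5, -1.0]])    # raw, non-normalized network output
labels = tf.constant([[1.0, 0.0, 0.0]])     # one-hot ground truth

# The fused op applies softmax internally and returns the cross-entropy,
# which is not bounded to [0, 1] -- it grows with how wrong the prediction is
loss = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)

# To actually see the normalized probabilities, apply softmax explicitly
probs = tf.nn.softmax(logits)               # rows lie in [0, 1] and sum to 1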

Which is the loss function for multi-class classification in XGBoost?

I'm trying to find out which loss function XGBoost uses for multi-class classification. I found in this question the loss function for logistic classification in the binary case.
I had thought that for the multi-class case it might be the same as in GBM (for K classes), which can be seen here, where y_k = 1 if x's label is k and 0 otherwise, and p_k(x) is the softmax function. However, I have derived the first- and second-order gradients of this loss function, and the Hessian doesn't match the one defined in the code here (in the function GetGradient in SoftmaxMultiClassObj) by a constant factor of 2.
Could you please tell me which is the loss function used?
Thank you in advance.
The loss function used for multiclass is, as you suspected, the softmax objective function. As of now, the only options for multiclass are the ones shown in the quote below, with multi:softprob returning the probabilities for all classes instead of just the predicted class.
“multi:softmax” –set XGBoost to do multiclass classification using the softmax objective, you also need to set num_class(number of classes)
“multi:softprob” –same as softmax, but output a vector of ndata * nclass, which can be further reshaped to ndata, nclass matrix. The result contains predicted probability of each data point belonging to each class.
See https://xgboost.readthedocs.io/en/latest//parameter.html#learning-task-parameters.
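A minimal usage sketch with the native API (dataset sizes and parameters are illustrative):

import numpy as np
import xgboost as xgb

# Toy data: 100 samples, 5 features, 3 classes
X = np.random.rand(100, 5)
y = np.random.randint(0, 3, size=100)

dtrain = xgb.DMatrix(X, label=y)
params = {
    "objective": "multi:softprob",   # softmax objective, returning per-class probabilities
    "num_class": 3,                  # required for multiclass
}
booster = xgb.train(params, dtrain, num_boost_round=10)
preds = booster.predict(dtrain)      # probabilities for each class (ndata x nclass)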