How to decide on activation function? - tensorflow

Currently there are a lot of activation functions, like sigmoid, tanh and ReLU (ReLU being the usual default), but I have a question about which considerations go into choosing a particular activation function.
For example: when upsampling in GANs, LeakyReLU is often preferred.
I am a newbie in this subject and have not found a concrete answer as to which activation function to use in different situations.
My knowledge so far:
Sigmoid : When you have a binary class to identify
Tanh : ?
ReLU : ?
LeakyReLU : When you want to upsample
Any help or article will be appreciated.

This is an open research question. The choice of activation function is also deeply intertwined with the model architecture and the computation/resources available, so it isn't something that can be answered in isolation. The paper Efficient BackProp by Yann LeCun et al. has a lot of good insights into what makes a good activation function.
That being said, here are some toy examples that may help get intuition for activation functions. Consider a simple MLP with one hidden layer and a simple classification task:
In the last layer we can use sigmoid in combination with the binary_crossentropy loss in order to borrow intuition from logistic regression: we're just doing simple logistic regression on the learned features that the hidden layer passes to the last layer.
What types of features are learned depends on the activation function used in that hidden layer and the number of neurons in that hidden layer.
Here is what ReLU learns when using two hidden neurons:
https://miro.medium.com/max/2000/1*5nK725uTBUeoIA0XjEyA_A.gif
(on the left is what the decision boundary looks like in the feature space)
As you add more neurons you get more pieces with which to approximate the decision boundary. Here it is with 3 hidden neurons:
And 10 hidden neurons:
Sigmoid and tanh produce similar decision boundaries (this is tanh: https://miro.medium.com/max/2000/1*jynT0RkGsZFqt3WSFcez4w.gif - sigmoid is similar), which are more continuous and sinusoidal.
The main difference is that sigmoid is not zero-centered, which makes it a poor choice for a hidden layer, especially in deep networks.
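To make the toy setup above concrete, here is a minimal Keras sketch of such a one-hidden-layer MLP (the layer sizes, input dimension and training call are illustrative assumptions, not code from the original answer):
import tensorflow as tf

# Toy binary classifier: the hidden layer's activation shapes the learned
# features, and the sigmoid output is trained with binary cross-entropy.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation="relu", input_shape=(2,)),  # try "tanh" or "sigmoid" here to compare
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=100)  # x_train: shape (N, 2), y_train: 0/1 labels
Swapping the hidden activation (relu / tanh / sigmoid) is what changes the shape of the learned decision boundary shown in the animations above.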

Related

Multiple Activation Functions for multiple Layers (Neural Networks)

I have a binary classification problem for my neural network.
I already got good results using the ReLU activation function in my hidden layer and the sigmoid function in the output layer.
Now I'm trying to get even better results.
I added a second hidden layer with the ReLU activation function, and the results got even better.
I tried to use the leaky ReLU function for the second hidden layer instead of the ReLU function and got even better results, but I'm not sure if this is even allowed.
So I have something like that:
Hidden layer 1: ReLU activation function
Hidden layer 2: leaky ReLU activation function
Output layer: sigmoid activation function
I can't find many resources on it, and those I found always use the same activation function on all hidden layers.
If you mean the leaky ReLU, I can say that the Parametric ReLU (PReLU) is in fact the activation function that generalizes both the traditional rectified unit and the leaky ReLU. And yes, PReLU improves model fitting with no significant extra computational cost and little overfitting risk.
For more details, you can check out the paper Delving Deep into Rectifiers.
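There is nothing special required to mix activations per layer in Keras. A hedged sketch of the layout described in the question (layer widths and input size are arbitrary assumptions; PReLU is shown as the drop-in alternative mentioned above):
import tensorflow as tf
from tensorflow.keras import layers

# Mixing activations across hidden layers is perfectly legal.
model = tf.keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(20,)),  # hidden layer 1: ReLU
    layers.Dense(64),                                        # hidden layer 2, no built-in activation...
    layers.LeakyReLU(),                                      # ...followed by LeakyReLU as its own layer
    # layers.PReLU(),                                        # or the learnable PReLU variant instead
    layers.Dense(1, activation="sigmoid"),                   # output layer for binary classification
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])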

Why is the code for a neural network with a sigmoid so different than the code with softmax_cross_entropy_with_logits?

When using neural networks for classification, it is said that:
You generally want to use softmax cross-entropy output, as this gives you the probability of each of the possible options.
In the common case where there are only two options, you want to use sigmoid, which is the same thing except it avoids redundantly outputting p and 1-p.
The way to calculate softmax cross entropy in TensorFlow seems to be along the lines of:
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction,labels=y))
So the output can be connected directly to the minimization code, which is good.
The code I have for sigmoid output, likewise based on various tutorials and examples, is along the lines of:
p = tf.sigmoid(tf.squeeze(...))
cost = tf.reduce_mean((p - y)**2)
I would have thought the two should be similar in form since they are doing the same job in almost the same way, but the above code fragments look almost completely different. Furthermore, the sigmoid version is explicitly squaring the error, whereas the softmax version isn't. (Is the squaring happening somewhere in the implementation of softmax, or is something else going on?)
Is one of the above simply incorrect, or is there a reason why they need to be completely different?
The softmax cross-entropy cost and the squared-loss cost on a sigmoid output are completely different cost functions. Though they may seem closely related, they are not the same thing.
It is true that both functions are "doing the same job" if the job is defined as "be the output layer of a classification network". Similarly, you could say that "softmax regression and neural networks are doing the same job": both techniques are trying to classify things, but in different ways.
The softmax layer with cross-entropy cost is usually preferred over sigmoids with an L2 loss. Softmax with cross-entropy has its own advantages, such as a stronger gradient at the output layer and normalization to a probability vector, whereas the derivatives of sigmoids with an L2 loss are weaker. You can find plenty of explanations in this beautiful book.
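For completeness, the like-for-like binary counterpart of the softmax snippet in the question is sigmoid plus cross-entropy, not sigmoid plus squared error. A hedged sketch in the same TF1-style API the question uses, assuming prediction holds the raw, pre-sigmoid outputs:
# Binary counterpart of the softmax snippet above: feed raw logits and let
# TensorFlow apply the sigmoid together with the cross-entropy (this is also
# more numerically stable than computing tf.sigmoid and a hand-written loss).
cost = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=prediction, labels=y))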

Keras binary_crossentropy vs categorical_crossentropy for multi class single label classification

I've been using binary cross-entropy but recently found out I may be better off using categorical cross-entropy.
For the problem I'm solving the following is true:
There are 10 possible classes.
A given input only maps to 1 label.
I'm getting much higher accuracies with binary cross-entropy. Should I switch to categorical cross-entropy?
At the moment I'm using standard accuracy (metrics=['accuracy']) and a sigmoid activation layer for the last layer. Can I keep these the same?
If I understand correctly, you have a multiclass problem and your classes are mutually exclusive. You should use categorical_crossentropy and change your output activation function to softmax.
binary_crossentropy, as the name suggests, must be used as loss function only for 2-class problems.
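A hedged Keras sketch of that change (the hidden layer and input size here are placeholders; if your labels are integer class indices rather than one-hot vectors, sparse_categorical_crossentropy is the matching loss):
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(100,)),  # placeholder hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),                   # 10 mutually exclusive classes
])
# One-hot labels -> categorical_crossentropy;
# integer class labels -> sparse_categorical_crossentropy.
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])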

What is the meaning of the word logits in TensorFlow? [duplicate]

(This question was marked as a duplicate of "What are logits? What is the difference between softmax and softmax_cross_entropy_with_logits?")
In the following TensorFlow function, we must feed in the activations of the artificial neurons in the final layer. That I understand. But I don't understand why it is called logits. Isn't that a mathematical function?
loss_function = tf.nn.softmax_cross_entropy_with_logits(
    logits=last_layer,
    labels=target_output
)
Logits is an overloaded term which can mean many different things:
In math, the logit is a function that maps probabilities ([0, 1]) to R ((-inf, inf)).
A probability of 0.5 corresponds to a logit of 0. Negative logits correspond to probabilities less than 0.5, positive logits to probabilities greater than 0.5.
In ML, it can be: the vector of raw (non-normalized) predictions that a classification model generates, which is ordinarily then passed to a normalization function. If the model is solving a multi-class classification problem, logits typically become an input to the softmax function. The softmax function then generates a vector of (normalized) probabilities with one value for each possible class.
Logits also sometimes refer to the element-wise inverse of the sigmoid function.
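A small NumPy sketch of the mathematical relationship described above, just to illustrate that logit and sigmoid are inverses of each other:
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def logit(p):
    return np.log(p / (1.0 - p))  # log-odds

print(logit(0.5))            # 0.0: probability 0.5 corresponds to a logit of 0
print(logit(0.25))           # negative, because p < 0.5
print(sigmoid(logit(0.73)))  # ~0.73: sigmoid undoes logit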
Just adding this clarification so that anyone who scrolls down this far can at least get it right, since there are so many wrong answers upvoted.
Diansheng's answer and JakeJ's answer get it right.
A new answer posted by Shital Shah is an even better and more complete answer.
Yes, logit is a mathematical function in statistics, but the logit used in the context of neural networks is different. The statistical logit doesn't even make sense here.
I couldn't find a formal definition anywhere, but logit basically means:
The raw predictions which come out of the last layer of the neural network.
1. This is the very tensor on which you apply the argmax function to get the predicted class.
2. This is the very tensor which you feed into the softmax function to get the probabilities for the predicted classes.
Also, from a tutorial on the official TensorFlow website:
Logits Layer
The final layer in our neural network is the logits layer, which will return the raw values for our predictions. We create a dense layer with 10 neurons (one for each target class 0–9), with linear activation (the default):
logits = tf.layers.dense(inputs=dropout, units=10)
If you are still confused, the situation is like this:
raw_predictions = neural_net(input_layer)
predicted_class_index_by_raw = argmax(raw_predictions)
probabilities = softmax(raw_predictions)
predicted_class_index_by_prob = argmax(probabilities)
where, predicted_class_index_by_raw and predicted_class_index_by_prob will be equal.
Another name for raw_predictions in the above code is logit.
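A runnable version of that pseudo-code (neural_net and input_layer are placeholders carried over from it), using tf.nn.softmax and tf.argmax, just to show why the two argmaxes agree:
import tensorflow as tf

raw_predictions = neural_net(input_layer)       # logits, shape (batch, num_classes)
probabilities = tf.nn.softmax(raw_predictions)  # normalized so each row sums to 1

predicted_class_index_by_raw = tf.argmax(raw_predictions, axis=-1)
predicted_class_index_by_prob = tf.argmax(probabilities, axis=-1)
# The two index tensors are identical: softmax is monotonic, so it never changes the argmax.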
As for why the name logit... I have no idea. Sorry.
[Edit: See this answer for the historical motivations behind the term.]
Trivia
Although, if you want to, you can apply statistical logit to probabilities that come out of the softmax function.
If the probability of a certain class is p,
Then the log-odds of that class is L = logit(p).
Also, the probability of that class can be recovered as p = sigmoid(L), using the sigmoid function.
Not very useful to calculate log-odds though.
Summary
In the context of deep learning, the logits layer means the layer that feeds into the softmax (or another such normalization). The output of the softmax is the set of probabilities for the classification task, and its input is the logits layer. The logits layer typically produces values from -infinity to +infinity, and the softmax layer transforms them to values from 0 to 1.
Historical Context
Where does this term come from? In the 1930s and 40s, several people were trying to adapt linear regression to the problem of predicting probabilities. However, linear regression produces output from -infinity to +infinity, while for probabilities the desired output is 0 to 1. One way to do this is to somehow map the probabilities in [0, 1] to (-infinity, +infinity) and then use linear regression as usual. One such mapping is the cumulative normal distribution, which was used by Chester Ittner Bliss in 1934; he called this the "probit" model, short for "probability unit". However, this function is computationally expensive and lacks some of the desirable properties for multi-class classification. In 1944 Joseph Berkson used the function log(p/(1-p)) to do this mapping and called it the logit, short for "logistic unit". The term logistic regression derives from this as well.
The Confusion
Unfortunately, the term logits is abused in deep learning. From a purely mathematical perspective, logit is a function that performs the above mapping. In deep learning, people started calling the layer that feeds into the softmax the "logits layer". Then people started calling the output values of this layer "logits", creating confusion with the logit function.
TensorFlow Code
Unfortunately, TensorFlow code further adds to the confusion with names like tf.nn.softmax_cross_entropy_with_logits. What does logits mean here? It just means that the input to the function is supposed to be the output of the last neuron layer as described above. The _with_logits suffix is redundant, confusing and pointless. Functions should be named without regard to such very specific contexts, because they are simply mathematical operations that can be performed on values derived from many other domains. In fact, TensorFlow has another similar function, sparse_softmax_cross_entropy, where they fortunately forgot to add the _with_logits suffix, creating inconsistency and adding to the confusion. PyTorch, on the other hand, simply names its function without these kinds of suffixes.
Reference
The Logit/Probit lecture slides are one of the best resources for understanding the logit. I have also updated the Wikipedia article with some of the above information.
The logit is a function that maps probabilities [0, 1] to [-inf, +inf].
Softmax is a function that maps [-inf, +inf] to [0, 1], similar to the sigmoid. But softmax also normalizes the sum of the values (the output vector) to be 1.
TensorFlow "with logits": it means that you are applying a softmax function to the logit numbers to normalize them. The input_vector/logit is not normalized and can range over [-inf, +inf].
This normalization is used for multi-class classification problems. For multi-label classification problems, sigmoid normalization is used instead, i.e. tf.nn.sigmoid_cross_entropy_with_logits.
My personal understanding: in the TensorFlow domain, logits are the values to be used as input to softmax. I came to this understanding based on this TensorFlow tutorial.
https://www.tensorflow.org/tutorials/layers
Although it is true that logit is a function in maths (especially in statistics), I don't think that's the same "logit" you are looking at. In the book Deep Learning, Ian Goodfellow mentions:
The function σ^(-1)(x) is called the logit in statistics, but this term is more rarely used in machine learning. σ^(-1)(x) stands for the inverse function of the logistic sigmoid function.
In TensorFlow, it is frequently seen as the name of the last layer. In Chapter 10 of the book Hands-On Machine Learning with Scikit-Learn and TensorFlow by Aurélien Géron, I came across this paragraph, which states the logits layer clearly:
note that logits is the output of the neural network before going through the softmax activation function: for optimization reasons, we will handle the softmax computation later.
That is to say, although we use softmax as the activation function of the last layer in our design, for ease of computation we take the logits out separately. This is because it is more efficient to compute the softmax and the cross-entropy loss together. Remember that cross-entropy is a cost function and is not used in forward propagation.
If you check the mathematical logit function, it converts real values from the [0, 1] interval to [-inf, +inf].
Sigmoid and softmax do exactly the opposite: they convert the [-inf, +inf] real space to [0, 1].
This is why, in machine learning, we may use "logit" before the sigmoid and softmax functions (since they match up).
And this is why "we may call" anything in machine learning that goes in front of a sigmoid or softmax function a logit.
Here is a G. Hinton video using this term.
Here is a concise answer for future readers. TensorFlow's logit is defined as the output of a neuron without applying an activation function:
logit = w*x + b,
x: input, w: weight, b: bias. That's it.
The following is not strictly relevant to the question.
For the historical background, read the other answers. Hats off to TensorFlow's "creatively" confusing naming convention. In PyTorch, there is only one CrossEntropyLoss, and it accepts un-activated outputs. Convolutions, matrix multiplications and activations are same-level operations. The design is much more modular and less confusing. This is one of the reasons why I switched from TensorFlow to PyTorch.
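A hedged PyTorch sketch of that point: the "logit" really is just w*x + b from a linear layer with no activation, and CrossEntropyLoss consumes it directly (the shapes and sizes here are arbitrary assumptions):
import torch
import torch.nn as nn

linear = nn.Linear(20, 10)                     # produces w*x + b, no activation
x = torch.randn(32, 20)                        # a batch of 32 made-up inputs
logits = linear(x)                             # raw, un-activated outputs: the "logits"

targets = torch.randint(0, 10, (32,))          # integer class labels
loss = nn.CrossEntropyLoss()(logits, targets)  # applies log-softmax + cross-entropy internally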
logits
The vector of raw (non-normalized) predictions that a classification model generates, which is ordinarily then passed to a normalization function. If the model is solving a multi-class classification problem, logits typically become an input to the softmax function. The softmax function then generates a vector of (normalized) probabilities with one value for each possible class.
In addition, logits sometimes refer to the element-wise inverse of the sigmoid function. For more information, see tf.nn.sigmoid_cross_entropy_with_logits.
official tensorflow documentation
They are basically the fullest learned model you can get from the network, before it's been squashed down to apply to only the number of classes we are interested in. Check out how some researchers use them to train a shallow neural net based on what a deep network has learned: https://arxiv.org/pdf/1312.6184.pdf
It's kind of like how when learning a subject in detail, you will learn a great many minor points, but then when teaching a student, you will try to compress it to the simplest case. If the student now tried to teach, it'd be quite difficult, but would be able to describe it just well enough to use the language.
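A rough sketch of that idea, in the spirit of the linked paper rather than its exact recipe, and assuming a teacher_model whose last layer outputs raw logits: the student is trained to regress the teacher's logits with a squared-error loss instead of matching hard labels.
import tensorflow as tf

def logit_matching_loss(teacher_logits, student_logits):
    # Train the student to reproduce the teacher's raw logits (squared error),
    # rather than training it on hard class labels.
    return tf.reduce_mean(tf.square(teacher_logits - student_logits))

# student.compile(optimizer="adam", loss=logit_matching_loss)
# student.fit(x_train, teacher_model.predict(x_train))  # targets are the teacher's logits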
The logit (/ˈloʊdʒɪt/ LOH-jit) function is the inverse of the sigmoidal "logistic" function or logistic transform used in mathematics, especially in statistics. When the function's variable represents a probability p, the logit function gives the log-odds, or the logarithm of the odds p/(1 − p).
See here: https://en.wikipedia.org/wiki/Logit

softmax and sigmoid function for the output layer

In deep learning implementations related to object detection and semantic segmentation, I have seen the output layers use either sigmoid or softmax. I am not very clear on when to use which. It seems to me both of them can support these tasks. Are there any guidelines for this choice?
softmax() helps when you want a probability distribution that sums to 1. sigmoid is used when you want each output to range from 0 to 1, without needing to sum to 1.
In your case, you wish to classify and choose between two alternatives. I would recommend using softmax(), as you will get a probability distribution to which you can apply the cross-entropy loss function.
The sigmoid and the softmax function have different purposes. For a detailed explanation of when to use sigmoid vs. softmax in neural network design, you can look at this article: "Classification: Sigmoid vs. Softmax."
Short summary:
If you have a multi-label classification problem where there is more than one "right answer" (the outputs are NOT mutually exclusive) then you can use a sigmoid function on each raw output independently. The sigmoid will allow you to have high probability for all of your classes, some of them, or none of them.
If you instead have a multi-class classification problem where there is only one "right answer" (the outputs are mutually exclusive), then use a softmax function. The softmax will enforce that the sum of the probabilities of your output classes is equal to one, so in order to increase the probability of a particular class, your model must correspondingly decrease the probability of at least one of the other classes.
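A hedged Keras sketch of those two output-layer choices (the class count is a placeholder, and the hidden layers feeding these heads are omitted):
from tensorflow.keras import layers

num_classes = 5  # placeholder

# Multi-label (outputs NOT mutually exclusive): independent sigmoids,
# trained with binary_crossentropy on multi-hot target vectors.
multi_label_head = layers.Dense(num_classes, activation="sigmoid")

# Multi-class (exactly one right answer): softmax over the classes,
# trained with categorical_crossentropy (or the sparse variant for integer labels).
multi_class_head = layers.Dense(num_classes, activation="softmax")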
Object detection is object classification applied to a sliding window over the image. In classification it is important to find the correct output in some class space. E.g. you detect 10 different objects and you want to know which object is the most likely one in there. Then softmax is good because of its property that the outputs of the whole layer sum up to 1.
Semantic segmentation, on the other hand, segments the image in some way. I have done semantic medical segmentation, and there the output is a binary image. This means you can have a sigmoid output to predict whether each pixel belongs to the specific class, because sigmoid values are between 0 and 1 for each output class.
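As a hedged illustration of that difference for segmentation heads (the kernel sizes and class count are assumptions, and the backbone producing the feature map is omitted): a binary mask gets a 1-channel sigmoid head, a multi-class mask gets a C-channel softmax head applied per pixel.
from tensorflow.keras import layers

# Binary segmentation: one sigmoid channel per pixel (foreground probability).
binary_mask_head = layers.Conv2D(1, kernel_size=1, activation="sigmoid")

# Multi-class segmentation: one channel per class, with softmax taken across the
# channel axis so each pixel gets a probability distribution over classes.
num_classes = 4  # placeholder
multi_class_mask_head = layers.Conv2D(num_classes, kernel_size=1, activation="softmax")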
In general, softmax is used (softmax classifier) when there are n classes. Either sigmoid or softmax can be used for binary (n = 2) classification.
Sigmoid:
S(x) = 1 / (1 + e^(-x))
Softmax:
σ(z)_j = e^(z_j) / Σ_{k=1}^{K} e^(z_k), for j = 1, ..., K
Softmax is a kind of multi-class sigmoid, but if you look at the softmax function, the sum of all softmax units is constrained to be 1. With sigmoid that isn't necessary.
Digging deeper, you can also use sigmoid for multi-class classification. When you use a softmax, you basically get a probability for each class (a joint distribution and a multinomial likelihood) whose sum is bound to be one. If you use sigmoid for multi-class classification, it is like a marginal distribution and a Bernoulli likelihood: p(y0|x), p(y1|x), etc.
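A small NumPy sketch of that difference (the scores are made-up numbers): the softmax outputs sum to 1, while the element-wise sigmoids generally do not.
import numpy as np

z = np.array([2.0, 1.0, 0.1])             # raw scores (logits) for 3 classes

softmax = np.exp(z) / np.sum(np.exp(z))   # joint, normalized distribution
sigmoid = 1.0 / (1.0 + np.exp(-z))        # independent marginal probabilities

print(softmax, softmax.sum())  # sums to exactly 1.0
print(sigmoid, sigmoid.sum())  # each value in (0, 1), but the sum is unconstrained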