Using binary_crossentropy loss in Keras (Tensorflow backend)

In the training example in the Keras documentation,
https://keras.io/getting-started/sequential-model-guide/#training
binary_crossentropy is used and a sigmoid activation is added in the network's last layer, but is it necessary to add the sigmoid in the last layer? Here is what I found in the source code:
def binary_crossentropy(output, target, from_logits=False):
  """Binary crossentropy between an output tensor and a target tensor.

  Arguments:
      output: A tensor.
      target: A tensor with the same shape as `output`.
      from_logits: Whether `output` is expected to be a logits tensor.
          By default, we consider that `output`
          encodes a probability distribution.

  Returns:
      A tensor.
  """
  # Note: nn.softmax_cross_entropy_with_logits
  # expects logits, Keras expects probabilities.
  if not from_logits:
    # transform back to logits
    epsilon = _to_tensor(_EPSILON, output.dtype.base_dtype)
    output = clip_ops.clip_by_value(output, epsilon, 1 - epsilon)
    output = math_ops.log(output / (1 - output))
  return nn.sigmoid_cross_entropy_with_logits(labels=target, logits=output)
Keras invokes sigmoid_cross_entropy_with_logits from Tensorflow, but the sigmoid_cross_entropy_with_logits function computes sigmoid(logits) again itself:
https://www.tensorflow.org/versions/master/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits
So I don't think it makes sense to add a sigmoid at the end, yet seemingly all the binary/multi-label classification examples and tutorials in Keras that I found online add a sigmoid as the last activation. Besides, I don't understand the meaning of
# Note: nn.softmax_cross_entropy_with_logits
# expects logits, Keras expects probabilities.
Why does Keras expect probabilities? Doesn't it use the nn.softmax_cross_entropy_with_logits function? Does that make sense?
Thanks.

You're right, that's exactly what's happening. I believe this is due to historical reasons.
Keras was created before Tensorflow, as a wrapper around Theano. In Theano, one has to compute the sigmoid/softmax manually and then apply the cross-entropy loss function. Tensorflow does everything in one fused op, but the API with a separate sigmoid/softmax layer had already been adopted by the community.
If you want to avoid unnecessary logit <-> probability conversions, call the binary_crossentropy loss with from_logits=True and don't add the sigmoid layer.
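A minimal sketch of that setup, assuming the TensorFlow backend and Keras 2 (the layer sizes and the bce_from_logits helper name are my own):

import keras.backend as K
from keras.layers import Dense
from keras.models import Sequential

def bce_from_logits(y_true, y_pred):
    # y_pred here are raw scores (logits); the loss applies the sigmoid internally
    return K.mean(K.binary_crossentropy(y_true, y_pred, from_logits=True), axis=-1)

model = Sequential([
    Dense(64, activation='relu', input_shape=(20,)),
    Dense(1)  # no sigmoid on the last layer
])
model.compile(optimizer='rmsprop', loss=bce_from_logits)

# note: model.predict now returns logits, so apply a sigmoid yourself
# if you need probabilities at prediction time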

In categorical cross entropy:
if the input is a prediction (probabilities), it will compute the cross entropy directly;
if it is logits, it will apply softmax_cross_entropy_with_logits.
In binary cross entropy:
if the input is a prediction, it will convert it back to logits and then apply sigmoid_cross_entropy_with_logits;
if it is logits, it will apply sigmoid_cross_entropy_with_logits directly (see the sketch below).
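A quick NumPy sanity check of the binary case (a sketch; logit, bce_on_probs and sigmoid_ce_on_logits are made-up helper names, with the last one following the stable formula used by tf.nn.sigmoid_cross_entropy_with_logits):

import numpy as np

def logit(p):
    # inverse sigmoid: the "transform back to logits" step
    return np.log(p / (1 - p))

def bce_on_probs(p, y):
    # binary cross entropy computed directly on probabilities
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def sigmoid_ce_on_logits(z, y):
    # stable form: max(z, 0) - z*y + log(1 + exp(-|z|))
    return np.maximum(z, 0) - z * y + np.log1p(np.exp(-np.abs(z)))

p = np.array([0.1, 0.7, 0.95])
y = np.array([0.0, 1.0, 1.0])
print(bce_on_probs(p, y))                 # [0.10536052 0.35667494 0.05129329]
print(sigmoid_ce_on_logits(logit(p), y))  # same values up to floating point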

In Keras, by default we use a sigmoid activation on the output layer and then use the Keras binary_crossentropy loss function, independent of the backend implementation (Theano, Tensorflow or CNTK).
If you look more in depth at the pure Tensorflow case, you find that the tensorflow backend binary_crossentropy function (which you pasted in your question) uses tf.nn.sigmoid_cross_entropy_with_logits. The latter function also applies the sigmoid activation. To avoid a double sigmoid, the tensorflow backend binary_crossentropy will by default (with from_logits=False) calculate the inverse sigmoid (logit(x) = log(x/(1-x))) to get the output back into the raw, pre-activation state of the network.
The extra sigmoid activation and the inverse sigmoid calculation can be avoided by using no sigmoid activation function in your last layer and then calling the tensorflow backend binary_crossentropy with the parameter from_logits=True (or by directly using tf.nn.sigmoid_cross_entropy_with_logits).
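For the second option, a small wrapper along these lines should work (a sketch; the sigmoid_ce_loss name is my own):

import tensorflow as tf

def sigmoid_ce_loss(y_true, y_pred):
    # y_pred are raw logits because the last layer has no activation
    return tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=y_true, logits=y_pred),
        axis=-1)

# model.compile(optimizer='adam', loss=sigmoid_ce_loss)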

Related

Is an output layer with 2 units and softmax ideal for binary classification using LSTM?

I am using an LSTM for binary classification and initially tried a model with 1 unit in the output (Dense) layer with sigmoid as the activation function.
However, it didn't perform well, and I saw a few notebooks where they used 2 units in the output layer (the layer immediately after the LSTM) with softmax as the activation function. Is there any advantage to using 2 output units with softmax instead of a single unit with sigmoid (for the purpose of binary classification)? I am using binary_crossentropy as the loss function.
Softmax may work better than sigmoid here because the derivative of the sigmoid saturates towards zero at its extremes (the vanishing gradient problem), which makes training harder. That might be the reason for softmax performing better than sigmoid.
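For reference, the two setups being compared look roughly like this (a sketch; the layer sizes and input shape are placeholders):

from keras.layers import LSTM, Dense
from keras.models import Sequential

# Option A: one sigmoid unit, labels are 0/1, loss is binary_crossentropy
model_a = Sequential([
    LSTM(32, input_shape=(None, 8)),
    Dense(1, activation='sigmoid')
])
model_a.compile(optimizer='adam', loss='binary_crossentropy')

# Option B: two softmax units, labels are one-hot pairs,
# with categorical_crossentropy as the matching loss
model_b = Sequential([
    LSTM(32, input_shape=(None, 8)),
    Dense(2, activation='softmax')
])
model_b.compile(optimizer='adam', loss='categorical_crossentropy')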

Output layer doesn't have activation function in custom estimator

In the custom estimator, the output layer doesn't have an activation:
logits = tf.layers.dense(net, params['n_classes'], activation=None)
and then sparse_softmax_cross_entropy is used to calculate the loss:
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
Questions:
In general, should the output layer also have an activation function?
Does sparse_softmax_cross_entropy mean that softmax is used as the activation function of the output layer when calculating the loss?
Computing the softmax and the cross entropy based on it "naively" can be numerically unstable. This is why it is recommended not to have an activation in your output layer (usually it would be tf.nn.softmax for classification). Instead, Tensorflow supplies loss functions such as sparse_softmax_cross_entropy which apply the softmax internally (in a numerically stable fashion) and then compute the cross entropy based on that. That is, you are supposed to supply model outputs without your own softmax (commonly called logits).
E.g. in the API docs for these cross-entropy ops you can usually find passages such as:
WARNING: This op expects unscaled logits, since it performs a softmax on logits internally for efficiency. Do not call this op with the output of softmax, as it will produce incorrect results.
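A sketch of that pattern in a TF 1.x model function (net, params and labels are the names from the question; the prediction part is illustrative):

import tensorflow as tf

# keep the output layer linear: its outputs are the logits
logits = tf.layers.dense(net, params['n_classes'], activation=None)

# softmax is applied internally, in a numerically stable way
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

# apply softmax yourself only where you need probabilities, e.g. for predictions
probabilities = tf.nn.softmax(logits)
predicted_class = tf.argmax(logits, axis=1)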

When using Keras categorical_crossentropy loss, should you use softmax on the last layer?

Most examples I've seen implement softmax on the last layer. But I read that Keras categorical_crossentropy automatically applies softmax after the last layer, so that doing it yourself is redundant and leads to reduced performance. Who is right?
By default, Keras categorical_crossentropy does not apply softmax to the output (see the categorical_crossentropy implementation and the Tensorflow backend call), so the softmax in the last layer is needed. However, if you use the backend function directly, there is the option of setting from_logits=True.
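In code, the two consistent combinations would look roughly like this (a sketch; layer sizes are arbitrary and cce_from_logits is a made-up helper around the backend function):

import keras.backend as K
from keras.layers import Dense
from keras.models import Sequential

# usual combination: softmax on the last layer + built-in categorical_crossentropy
model = Sequential([
    Dense(64, activation='relu', input_shape=(100,)),
    Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

# alternative: linear last layer + backend loss with from_logits=True
def cce_from_logits(y_true, y_pred):
    return K.categorical_crossentropy(y_true, y_pred, from_logits=True)

model_logits = Sequential([
    Dense(64, activation='relu', input_shape=(100,)),
    Dense(10)  # no softmax here
])
model_logits.compile(optimizer='rmsprop', loss=cce_from_logits)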

equivalence of categorical_crossentropy function of theano in tensorflow

What might be the equivalent function of the following theano function in tensorflow?
theano.tensor.nnet.categorical_crossentropy(o, y)
I think you would want to use the softmax cross-entropy loss from Tensorflow. Remember that the input to this loss is unscaled logits, i.e. you cannot feed it the softmax output, as that will give wrong results.
Another important reason to use this loss instead of a combination of softmax + categorical cross-entropy is that the fused softmax loss is more numerically stable. See this loss in Caffe. Also, for some discussion about stability, see this.
For 2D tensors with probability distributions in the 2nd dimension:
def crossentropy(p_approx, p_true):
    return -tf.reduce_sum(tf.multiply(p_true, tf.log(p_approx)), 1)
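And if the network outputs unscaled logits rather than probabilities, the fused TF op mentioned above can be used directly (a small self-contained TF 1.x example):

import tensorflow as tf

# unscaled network outputs (logits) and one-hot targets
o = tf.constant([[2.0, 1.0, 0.1]])
y = tf.constant([[1.0, 0.0, 0.0]])

# softmax is applied internally; do NOT feed softmax-ed values here
loss = tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=o)

with tf.Session() as sess:
    print(sess.run(loss))  # per-example cross-entropy values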

Keras CTC Loss input

I'm trying to use CTC for speech recognition using Keras and have tried the CTC example here. In that example, the input to the CTC Lambda layer is the output of the softmax layer (y_pred). The Lambda layer calls ctc_batch_cost, which internally calls Tensorflow's ctc_loss, but the Tensorflow ctc_loss documentation says that the ctc_loss function performs the softmax internally, so you don't need to softmax your input first. I think the correct usage is to pass inner to the Lambda layer, so that the softmax is applied only once, inside the ctc_loss function. I have tried the example and it works. Should I follow the example or the Tensorflow documentation?
The loss used in the code you posted is different from the one you linked. The loss used in the code is found here.
The Keras code performs some pre-processing before calling ctc_loss that makes it suitable for the required format. On top of requiring the input not to be softmax-ed, Tensorflow's ctc_loss also expects the dims to be NUM_TIME, BATCHSIZE, FEATURES. Keras's ctc_batch_cost does both of these things in this line.
It takes the log(), which undoes the softmax scaling, and it also shuffles the dims so that they are in the right shape. When I say it undoes the softmax scaling, it obviously does not restore the original tensor, but rather softmax(log(softmax(x))) = softmax(x). See below:
import numpy as np

def softmax(x):
    """Compute softmax values for each set of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

x = [1, 2, 3]
y = softmax(x)
z = np.log(y)    # z != x (obviously), BUT
yp = softmax(z)  # yp == y