I have a question regarding Tensorflow:
Which loss function is used in tf.nn.sampled_softmax_loss?
I believe it's cross-entropy, but it is not written on the official website. Can anyone confirm my guess?
Based on this other question, it looks like it is cross entropy.
Beyond that, the main difference between sampled_softmax_loss and softmax_cross_entropy_with_logits (the standard cross-entropy loss in TF) is that the first only takes a subset V of your vocabulary into account when calculating the loss, while the second takes your entire vocabulary into account.
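As a rough sketch of the difference (the sizes and tensors here are made up; only the two loss functions are the real API), the sampled loss is typically used during training and the full softmax cross-entropy at evaluation time:

import tensorflow as tf

# Hypothetical output layer for a 10,000-word vocabulary.
num_classes, dim, num_sampled, batch_size = 10000, 128, 64, 32
weights = tf.Variable(tf.random.truncated_normal([num_classes, dim]))
biases = tf.Variable(tf.zeros([num_classes]))
inputs = tf.random.normal([batch_size, dim])   # stand-in hidden activations
labels = tf.random.uniform([batch_size, 1], maxval=num_classes, dtype=tf.int64)

# Training: cross-entropy estimated over num_sampled classes plus the true one.
train_loss = tf.nn.sampled_softmax_loss(
    weights=weights, biases=biases, labels=labels, inputs=inputs,
    num_sampled=num_sampled, num_classes=num_classes)

# Evaluation: exact cross-entropy over the entire vocabulary.
logits = tf.matmul(inputs, weights, transpose_b=True) + biases
eval_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=tf.squeeze(labels, axis=1), logits=logits)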
Related
I'm new to Deep Learning and I saw this for the first time: MAE as the loss function and MSE as the metric. What is the purpose of this, and what is gained?
model.compile(loss=tf.losses.MeanAbsoluteError(), metrics=[tf.metrics.MeanSquaredError()])
In some cases it is useful to have a loss function different from the metric you are going to evaluate.
Consider the case in which you want to denoise an image, that is, you design a network that takes as input a noisy image and outputs its clean version. Here, your metric might be the Peak Signal-to-Noise Ratio (PSNR) or some sort of structural similarity (SSIM) between your output and the ground-truth clean image. However, during training you might consider a different loss function, such as L1 (MAE), L2 (MSE), or even a perceptual loss such as the VGG loss, because these have been shown to lead to better results than directly optimizing for PSNR or SSIM.
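For example, here is a minimal sketch (the two-layer model is just a placeholder; tf.image.psnr is a real TensorFlow function): train on L1, but report PSNR as the metric.

import tensorflow as tf

# Placeholder denoising model; any image-to-image network would do here.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding='same', activation='relu',
                           input_shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(1, 3, padding='same'),
])

# Evaluation metric: PSNR between prediction and ground truth.
def psnr_metric(y_true, y_pred):
    return tf.image.psnr(y_true, y_pred, max_val=1.0)

# Training objective: L1 (MAE); the metric is only monitored, never optimized.
model.compile(optimizer='adam',
              loss=tf.keras.losses.MeanAbsoluteError(),
              metrics=[psnr_metric])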
Why does the Total Variation Loss in TensorFlow suggest using reduce_sum instead of reduce_mean as a loss function?
This can be used as a loss-function during optimization so as to suppress noise in images. If you have a batch of images, then you should calculate the scalar loss-value as the sum:
loss = tf.reduce_sum(tf.image.total_variation(images))
I contacted the author and it seems there wasn't any important reason behind it at all. He mentioned that maybe reduce_sum worked better for his test case than reduce_mean but encouraged me to test both cases and choose the one which gives me the best results.
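In practice the two differ only by a constant factor of the batch size, which effectively rescales the weight of the TV term (or the learning rate); a quick sketch, assuming a 4-D image batch:

import tensorflow as tf

# `images` stands in for any 4-D float batch: [batch, height, width, channels].
images = tf.random.uniform([8, 64, 64, 3])

tv = tf.image.total_variation(images)  # per-image TV, shape [batch]
loss_sum = tf.reduce_sum(tv)           # what the docstring suggests
loss_mean = tf.reduce_mean(tv)         # identical up to a 1/batch_size factor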
The output of the following TensorFlow function should give the average log perplexity. I went through the source code, but I don't understand how they calculate that loss.
tf.contrib.legacy_seq2seq.sequence_loss(logits, targets, weights, average_across_timesteps=True, average_across_batch=True, softmax_loss_function=None, name=None)
I went through the TensorFlow implementation. Though perplexity has some broader meaning in general, here in this function perplexity means
two to the power of your total cross entropy loss.
Please refer to the first answer to this question.
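Concretely, a minimal sketch (shapes made up; note that TensorFlow's cross-entropy is computed with natural logarithms, so the conventional conversion to perplexity exponentiates the loss):

import tensorflow as tf

batch_size, seq_len, vocab = 4, 5, 100
# legacy_seq2seq expects Python lists of per-timestep tensors.
logits = [tf.random.normal([batch_size, vocab]) for _ in range(seq_len)]
targets = [tf.random.uniform([batch_size], maxval=vocab, dtype=tf.int32)
           for _ in range(seq_len)]
weights = [tf.ones([batch_size]) for _ in range(seq_len)]

# Average per-target cross-entropy, in nats.
loss = tf.contrib.legacy_seq2seq.sequence_loss(logits, targets, weights)

# Perplexity is the exponential of the cross-entropy: e**loss for nats
# (it equals 2**loss only when the loss is measured in bits).
perplexity = tf.exp(loss)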
I notice there are two functions for negative sampling in TensorFlow that compute the loss (sampled_softmax_loss and nce_loss). The parameters of these two functions are similar, but I really want to know: what is the difference between the two?
Sampled softmax is all about selecting a sample of a given size and computing the softmax loss over it. The main objective is to make the result of the sampled softmax approximate the true softmax, so the algorithm concentrates largely on how those samples are selected from the given distribution.
NCE loss, on the other hand, is about selecting noise samples and trying to mimic the true softmax: it takes only one true class and K noise classes.
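The two are called with the same arguments; a minimal sketch (all sizes and tensors here are hypothetical stand-ins):

import tensorflow as tf

num_classes, dim, num_sampled, batch_size = 10000, 128, 64, 32
weights = tf.Variable(tf.random.truncated_normal([num_classes, dim]))
biases = tf.Variable(tf.zeros([num_classes]))
inputs = tf.random.normal([batch_size, dim])   # hidden-layer activations
labels = tf.random.uniform([batch_size, 1], maxval=num_classes, dtype=tf.int64)

# Sampled softmax: a multi-class softmax estimated over the sampled classes.
sampled = tf.nn.sampled_softmax_loss(
    weights=weights, biases=biases, labels=labels, inputs=inputs,
    num_sampled=num_sampled, num_classes=num_classes)

# NCE: a binary "true class vs. noise samples" logistic objective.
nce = tf.nn.nce_loss(
    weights=weights, biases=biases, labels=labels, inputs=inputs,
    num_sampled=num_sampled, num_classes=num_classes)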
Sampled softmax tries to normalise over all samples in your output. Having a non-normal distribution (log-uniform over your labels), this is not an optimal loss function. Note that although they have the same parameters, the way you use each function is different. Take a look at the documentation here: https://github.com/calebchoo/Tensorflow/blob/master/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.nce_loss.md and read this line:
By default this uses a log-uniform (Zipfian) distribution for sampling, so your labels must be sorted in order of decreasing frequency to achieve good results. For more details, see log_uniform_candidate_sampler.
Take a look at this paper where they explain why they use it for word embeddings: http://papers.nips.cc/paper/5165-learning-word-embeddings-efficiently-with-noise-contrastive-estimation.pdf
Hope this helps!
Check out this documentation from TensorFlow https://www.tensorflow.org/extras/candidate_sampling.pdf
They seem pretty similar, but sampled softmax is only applicable for a single label while NCE extends to the case where your labels are a multiset. NCE can then model the expected counts rather than presence/absence of a label. I'm not clear on an exact example of when to use the sampled_softmax.
Given a simple mini-batch gradient descent problem on MNIST in TensorFlow (like in this tutorial), how can I retrieve the gradients for each example in the batch individually?
tf.gradients() seems to return gradients averaged over all examples in the batch. Is there a way to retrieve gradients before aggregation?
Edit: A first step towards this answer is figuring out at which point tensorflow averages the gradients over the examples in the batch. I thought this happened in _AggregatedGrads, but that doesn't appear to be the case. Any ideas?
tf.gradients returns the gradient with respect to the loss. This means that if your loss is a sum of per-example losses, then the gradient is also the sum of per-example loss gradients.
The summing up is implicit. For instance, if you want to minimize the sum of squared norms of Wx-y errors, the gradient with respect to W is 2(WX-Y)X', where X is the batch of observations and Y is the batch of labels. You never explicitly form "per-example" gradients that you later sum up, so it's not a simple matter of removing some stage in the gradient pipeline.
A simple way to get k per-example loss gradients is to use batches of size 1 and do k passes. Ian Goodfellow wrote up how to get all k gradients in a single pass; for that you would need to specify the gradients explicitly rather than relying on the tf.gradients method.
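A minimal sketch of the batch-of-size-1 approach in graph mode (the model is a stand-in, and batch_x / batch_y are assumed to be a NumPy mini-batch):

import tensorflow as tf

# A stand-in softmax-regression model on MNIST-sized inputs.
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    labels=y, logits=tf.matmul(x, W) + b))
grads = tf.gradients(loss, [W, b])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # k passes, one example each: per_example_grads[i] is example i's gradient.
    per_example_grads = [
        sess.run(grads, feed_dict={x: batch_x[i:i + 1], y: batch_y[i:i + 1]})
        for i in range(len(batch_x))]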
To partly answer my own question after tinkering with this for a while: it appears to be possible to manipulate gradients per example while still working in batches, by doing the following:
Create a copy of tf.gradients() that accepts an extra tensor/placeholder with example-specific factors
Create a copy of _AggregatedGrads() and add a custom aggregation method that uses the example-specific factors
Call your custom tf.gradients function and give your loss as a list of slices:
custagg_gradients(
    ys=[cross_entropy[i] for i in xrange(batch_size)],
    xs=variables.trainable_variables(),
    aggregation_method=CUSTOM,
    gradient_factors=gradient_factors
)
But this will probably have the same complexity as doing individual passes per example, and I need to check if the gradients are correct :-).
One way of retrieving gradients before aggregation is to use the grad_ys parameter. A good discussion is found here:
Use of grads_ys parameter in tf.gradients - TensorFlow
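A minimal sketch of the idea (per_example_loss is assumed to be a [batch_size] tensor of unreduced, per-example losses): passing a one-hot grad_ys weights a single example's loss by 1 and all others by 0, so each call extracts that example's gradient.

import tensorflow as tf

batch_size = 32
# per_example_loss: [batch_size] tensor of unreduced losses (assumed defined).
params = tf.trainable_variables()

per_example_grads = []
for i in range(batch_size):
    # Select example i: weight its loss by 1, every other loss by 0.
    selector = tf.one_hot(i, batch_size)
    per_example_grads.append(
        tf.gradients(per_example_loss, params, grad_ys=selector))

Note that this still builds one gradient op per example, so it mainly trades session-run overhead for graph size.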
EDIT:
I haven't been working with Tensorflow a lot lately, but here is an open issue tracking the best way to compute unaggregated gradients:
https://github.com/tensorflow/tensorflow/issues/675
There are a number of sample-code solutions provided by users (including myself) that you can try based on your needs.