My understanding of noise contrastive estimation is that we sample some vectors from our word embeddings (the negative samples), and then calculate the log-likelihood of each. Then we want to maximize the difference between the probability of the target word and the log-likelihood of each of the negative-sample words (so, if I'm correct about this, we want to optimize the loss function so that it gets as close to 1 as possible).
My question is this:
What is the purpose of the num_classes parameter to the nce_loss function? My best guess is that the number of classes is passed in so that TensorFlow knows the size of the distribution from which the negative samples are drawn, but this might not make sense, since we could just infer the size of the distribution from the variable itself. Otherwise, I can't think of a reason why we would need to know the total possible number of classes, especially if the language model only outputs k + 1 predictions (the negative sample size + 1 for the target word).
Your guess is correct. The num_classes argument is used to sample negative labels from the log-uniform (Zipfian) distribution.
Here's the relevant snippet from the source code:
# Sample the negative labels.
#   sampled shape: [num_sampled] tensor
#   true_expected_count shape = [batch_size, 1] tensor
#   sampled_expected_count shape = [num_sampled] tensor
if sampled_values is None:
    sampled_values = candidate_sampling_ops.log_uniform_candidate_sampler(
        true_classes=labels,
        num_true=num_true,
        num_sampled=num_sampled,
        unique=True,
        range_max=num_classes)
The range_max=num_classes argument defines both the shape of this distribution and the range of the sampled values, [0, range_max). Note that this range can't be reliably inferred from the labels, because a particular mini-batch might contain only small word IDs, which would skew the distribution significantly.
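To see what range_max does in isolation, here is a small sketch (hypothetical vocabulary size and labels) that calls the sampler directly via tf.random.log_uniform_candidate_sampler (the TF 2.x name; in TF 1.x it also lives under tf.nn):
import tensorflow as tf

# Hypothetical setup: a vocabulary of 10,000 words, one true label per example,
# and 5 negative samples per batch.
vocab_size = 10000                                   # this is what nce_loss receives as num_classes
labels = tf.constant([[42], [7]], dtype=tf.int64)    # shape [batch_size, num_true]

sampled, true_expected, sampled_expected = tf.random.log_uniform_candidate_sampler(
    true_classes=labels,
    num_true=1,
    num_sampled=5,
    unique=True,
    range_max=vocab_size)    # sampled ids are drawn from [0, vocab_size)

# `sampled` holds 5 negative word ids, biased toward small (i.e. frequent) ids
# under the log-uniform (Zipfian) distribution.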
I'm training a network with temporal data and trying to determine which of ~60 outputs are "active" at any given timestep (classified as 1 or 0 in the label data), so I have an output of 60x1 floats that should represent a probability.
My input data is shaped as (X, 1, frames, dataPoints) - where X is the number of recorded sequences I have (I'm new to ML, I think this is 'batches'), frames is how long the longest sequence is (the rest are -1 padded and masked), and dataPoints is the actual input data for any given frame.
This is mostly an LSTM layer with return_sequences, but my input data is unbalanced.
For any given timestep, odds are ~85% that AN output is activated - but for any given output it's likely active at most 5% of the time.
When I attempted to apply a class weight of {0: 0.01, 1: 0.99} (pending tuning), I got an error stating "class_weight not supported for 3+ dimensional targets". I've done some googling, and people suggest compiling with sample_weight_mode="temporal" and modifying the sample weights, but (A) that doesn't seem right for my data (no individual sample is more important than another, but each 1 classification within all the samples is important), and (B) I don't understand the dimensionality of what that's doing.
How can I apply the class weighting to help balance each 1 classification with this data structure?
Side note: I'm rescaling the output of the LSTM to 0->1 since it uses tanh activation (and must use tanh activation for CUDA acceleration), and from_logits=False in my binary cross entropy loss.
Extra points if I can just use built-in tf/keras stuff and not have to write a custom loss function.
EDIT to include some code:
I have a data generator that outputs x and y in the shape of:
x.shape == (1, frameCount, inputFeatureLength) where frameCount is the number of frames in the temporal sequence, and inputFeatureLength is the size of the input data (around 100).
y.shape == (1, frameCount, outputSize) where outputSize is about 60 features.
I can successfully compile the model, but when I try to model.fit with class_weight={0: 0.01, 1: 0.99} as an argument, I get the error ValueError: class_weight not supported for 3+ dimensional targets.
I've looked into sample weights, but as far as I can tell even using sample_weight_mode="temporal" on model.fit it'll let me give sample weights per frame of output, but not per each of the ~60 outputs per frame.
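For reference, a minimal runnable sketch (hypothetical layer sizes, and a plain sigmoid output instead of the rescaled tanh) that reproduces the setup and the error described above:
import numpy as np
import tensorflow as tf

frame_count, input_features, output_size = 100, 100, 60

# Stand-in for one sequence from the data generator described above.
x = np.random.rand(1, frame_count, input_features).astype("float32")
y = (np.random.rand(1, frame_count, output_size) < 0.05).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(frame_count, input_features)),
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.Dense(output_size, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Raises: ValueError: class_weight not supported for 3+ dimensional targets
model.fit(x, y, class_weight={0: 0.01, 1: 0.99})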
Recently I have been working on a project, "predicting future trajectories of objects from their past trajectories by using LSTMs in TensorFlow."
(Here, a trajectory means a sequence of 2D positions.)
Input to the LSTM is, of course, 'past trajectories' and output is 'future trajectories'.
The mini-batch size is fixed during training. However, the number of past trajectories in a mini-batch can vary. For example, let the mini-batch size be 10. If I have only 4 past trajectories for the current training iteration, 6 of the 10 slots in the mini-batch are padded with zeros.
When calculating the loss for back-propagation, I set the loss from those 6 to zero so that only the 4 real trajectories contribute to back-propagation.
The problem I'm concerned about is that TensorFlow still seems to calculate gradients for the 6 padded entries even though their loss is zero. As a result, training becomes slower as I increase the mini-batch size, even though I use the same training data.
I also used the tf.where function when calculating the loss. However, the training time did not decrease.
How can I reduce the training time?
Here is my pseudo code for training:
# For each frame in a sequence
for f in range(pred_length):
    # For each element in a batch
    for b in range(batch_size):
        with tf.variable_scope("rnnlm") as scope:
            if (f > 0 or b > 0):
                scope.reuse_variables()
            # For each pedestrian in an element
            for p in range(MNP):
                # Ground-truth position
                cur_gt_pose_dec = ...
                # Loss mask ('1' or '0')
                loss_mask_ped = ...
                # Go through the RNN decoder
                output_states_dec_list[b][p], zero_states_dec_list[b][p] = cell_dec(
                    cur_embed_frm_dec, zero_states_dec_list[b][p])
                # Fully connected layer for the output
                cur_pred_pose_dec = tf.nn.xw_plus_b(
                    output_states_dec_list[b][p], output_wd, output_bd)
                # Go through the embedding function for the next input
                prev_embed_frms_dec_list[b][p] = tf.reshape(
                    tf.nn.relu(tf.nn.xw_plus_b(cur_pred_pose_dec, embedding_wd, embedding_bd)),
                    shape=(1, rnn_size))
                # Calculate the MSE loss
                mse_loss = tf.reduce_sum(tf.pow(tf.subtract(cur_pred_pose_dec, cur_gt_pose_dec), 2.0))
                # Only a valid pedestrian's trajectory contributes to the loss
                self.loss += tf.multiply(mse_loss, loss_mask_ped)
I think you're looking for the function tf.stop_gradient. Using this, you could do something like tf.where(loss_mask, tensor, tf.stop_gradient(tensor)) to achieve the desired result, assuming that the dimensions are correct.
However, it looks like this is probably not your issue. It seems that for each item in your dataset you are defining new graph nodes. This is not how TensorFlow is supposed to be used: you should have only one graph, built beforehand, that performs some fixed computation regardless of the batch size. You should definitely not be defining new nodes for every element in the batch, since that cannot take advantage of parallelism efficiently.
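As an illustration of that second point, here is a minimal sketch of a loss built once over whole batched tensors (the tensor names and shapes are assumptions, not the asker's actual graph):
import tensorflow as tf

def masked_mse(pred_pose, gt_pose, loss_mask):
    # pred_pose, gt_pose: [batch, pred_length, 2]; loss_mask: [batch, pred_length] of 0/1.
    # Per-step squared error, summed over the (x, y) coordinates.
    sq_err = tf.reduce_sum(tf.square(pred_pose - gt_pose), axis=2)
    # Zero out padded trajectories and average over the valid steps only.
    masked = sq_err * loss_mask
    return tf.reduce_sum(masked) / tf.maximum(tf.reduce_sum(loss_mask), 1.0)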
I have a model that outputs a Softmax, and I would like to develop a custom loss function. The desired behaviour would be:
1) Softmax to one-hot (normally I do numpy.argmax(softmax_vector) and set that index to 1 in a null vector, but this is not allowed in a loss function).
2) Multiply the resulting one-hot vector by my embedding matrix to get an embedding vector (in my context: the word-vector that is associated to a given word, where words have been tokenized and assigned to indices, or classes for the Softmax output).
3) Compare this vector with the target (this could be a normal Keras loss function).
I know how to write a custom loss function in general, but not how to do this. I found this closely related question (unanswered), but my case is a bit different, since I would like to preserve my softmax output.
It is possible to mix TensorFlow and Keras in your custom loss function. Once you have access to all of the TensorFlow functions, things become very easy. Here is an example of how this function could be implemented.
import tensorflow as tf

def custom_loss(target, softmax):
    max_indices = tf.argmax(softmax, -1)
    # Look up the embedding vectors. In TensorFlow this can be done
    # directly with tf.nn.embedding_lookup
    embedding_vectors = tf.nn.embedding_lookup(your_embedding_matrix, max_indices)
    # Do anything you want with a normal Keras loss function
    loss = some_keras_loss_function(target, embedding_vectors)
    loss = tf.reduce_mean(loss)
    return loss
Fan Luo's answer points in the right direction, but ultimately will not work, because it involves non-differentiable operations. Note that such operations are acceptable for the ground-truth value (a loss function takes a ground-truth value and a predicted value; non-differentiable operations are only fine for the ground-truth value).
To be fair, that was what I was asking in the first place. It is not possible to do exactly what I wanted, but we can get a similar, differentiable behaviour:
1) Element-wise power of the softmax values. This makes smaller values much smaller. For example, with a power of 4, [0.5, 0.2, 0.7] becomes [0.0625, 0.0016, 0.2401]. Note that 0.2 is comparable to 0.7, but 0.0016 is negligible with respect to 0.24. The higher my_power is, the closer to a one-hot vector the final result will be.
soft_extreme = Lambda(lambda x: x ** my_power)(softmax)
2) Importantly, both softmax and one-hot vectors are normalized, but not our "soft_extreme". First, find the sum of the array:
norm = tf.reduce_sum(soft_extreme, 1, keepdims=True)  # keep the dimension so the division in step 3 broadcasts row-wise
3) Normalize soft_extreme:
almost_one_hot = Lambda(lambda x: x / norm)(soft_extreme)
Note: Setting my_power too high in 1) will result in NaNs. If you need a better softmax to one-hot conversion, then you may do steps 1 to 3 two or more times in a row.
4) Finally, we want the vector from the dictionary. A lookup is forbidden, but we can take a weighted-average vector using matrix multiplication. Because our almost_one_hot is close to a one-hot encoding, this average will be close to the vector associated with the highest argument (the originally intended behaviour). The higher my_power is in (1), the truer this will be:
target_vectors = tf.tensordot(almost_one_hot, embedding_matrix, axes=[[1], [0]])
Note: This will not work directly with batches! In my case, I reshaped my "one hot" from [batch, dictionary_length] to [batch, 1, dictionary_length] using tf.reshape, then tiled my embedding_matrix batch times, and finally used:
predicted_vectors = tf.matmul(reshaped_one_hot, tiled_embedding)
There may be more elegant solutions (or less memory-hungry, if tiling the embedding matrix is not an option), so feel free to explore more.
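Putting the pieces together, here is a minimal sketch of steps 1 to 4 for a 2-D [batch, dictionary_length] softmax, using a plain tf.matmul rather than the reshape/tile route above; my_power, embedding_matrix and the final mean-squared comparison are assumptions, not the original code:
import tensorflow as tf

my_power = 4.0

def soft_one_hot_loss(target_vectors, softmax, embedding_matrix):
    # 1) Sharpen the softmax so it approaches a one-hot vector.
    soft_extreme = softmax ** my_power
    # 2) + 3) Renormalize so each row sums to 1 again.
    norm = tf.reduce_sum(soft_extreme, axis=1, keepdims=True)
    almost_one_hot = soft_extreme / norm
    # 4) "Soft lookup": a weighted average of the embedding rows,
    #    dominated by the row of the highest-scoring class.
    predicted_vectors = tf.matmul(almost_one_hot, embedding_matrix)  # [batch, embedding_dim]
    # Compare predicted and target embeddings with an ordinary loss.
    return tf.reduce_mean(tf.square(predicted_vectors - target_vectors))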
I'm reading through the MNIST TensorFlow guide, and trying to get a good understanding of what's going on.
The first set of steps, with added comments, looks like this:
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
# Download the data set.
# Comprises thousands of images, each with a label.
# Our images are 28x28, so we have 784 pixels in total.
# one_hot means our labels are treated as a vector with a
# length of 10. e.g. for the number 4, it'd be
# [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# x isn't a specific value. It's a placeholder, a value that
# we'll input when we ask TensorFlow to run a computation.
# We want to input any number of MNIST images, each flattened
# into a 784-dimensional vector (e.g. an array made up of a
# double for each pixel, representing pixel brightness).
# Takes the form of [Image, Pixel].
x = tf.placeholder(tf.float32, [None, 784])
# Variables are modifiable tensors, which live in TensorFlow's
# graph of interacting operations. It can be used and modified
# by the computation. Model parameters are usually set as Variables.
# Weights
# Takes the form of [Pixel, Digit]
W = tf.Variable(tf.zeros([784, 10]))
# Biases
# Takes the form of [Digit]
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
So now I'm trying to break down this last line to figure out what's going on.
They provide this diagram:
Ignoring the softmax step, and ignoring the adding of biases, so just looking at that top line:
(W1,1 * x1) + (W1,2 * x2) + (W1,3 * x3).
Since x is now 1-dimensional, I'll assume it's specific to a particular image, and so the x value is each pixel within that image. We thus have:
(Weight of 1st pixel for 1st digit * value of 1st pixel) + (Weight of 1st pixel for 2nd digit * value of 2nd pixel) + (Weight of 1st pixel for 3rd digit * value of 3rd pixel)
This doesn't seem right. The weight tensor's first dimension represents pixels, while the x tensor's second dimension represents pixels, which would mean we're multiplying the weight of one pixel by the value of a different pixel... this doesn't make any sense to me.
Am I misunderstanding something here?
This model is very simple and probably isn't worth an in-depth discussion, but your conclusion isn't correct. Pixel values are never multiplied together. This is a linear model:
tf.matmul(x, W) + b
... which naively assumes an image is a bunch of independent pixels. Each pixel gets multiplied by different weights corresponding to 10 classes. In other words, this linear layer assigns a weight to each (pixel, class) pair. This directly corresponds to its shape: [784, 10] (I'm ignoring the bias term for simplicity).
As a result of this multiplication, the final 10-length vector contains a score for each class. Each score takes every pixel into account; more precisely, it's a weighted sum of all the pixel values of that image. The score then goes to the loss function, which compares the output with the ground truth so that in the next iteration we can tweak those weights in the right direction.
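As a quick shape check (a small sketch, not from the guide), the score of an image for a class only ever combines that image's own pixels:
import numpy as np

batch, pixels, classes = 2, 784, 10
x = np.random.rand(batch, pixels)    # pixel values, one row per image
W = np.random.rand(pixels, classes)  # one weight per (pixel, class) pair

scores = x @ W                       # shape (batch, classes)
# Score of image 0 for class 3 is a weighted sum over image 0's own pixels:
assert np.allclose(scores[0, 3], np.sum(x[0] * W[:, 3]))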
Though it's very simple, it is still a reasonable approach.
The sequence_loss module's source code lists three required parameters: outputs, targets, and weights.
Outputs and targets are self-explanatory, but what I'm looking to understand better is the weights parameter.
The other thing I find confusing is that it states the targets should be the same length as the outputs. What exactly do they mean by the length of a tensor, especially if it's a 3-dimensional tensor?
Think of the weights as a mask applied to the input tensor. In some NLP applications, each sentence has a different length. In order to parallelize/batch multiple sentences into a mini-batch to feed into a neural net, people use a mask matrix to denote which elements in the input tensor are actually valid input. For instance, the weights can be np.ones([batch, max_length]), which means that all of the input elements are legitimate.
We can also use a matrix of the same shape as the labels, such as np.asarray([[1,1,1,0],[1,1,0,0],[1,1,1,1]]) (assuming the labels' shape is 3x4); then the cross-entropy of the first row, last column will be masked out as 0.
You can also use the weights to calculate a weighted accumulation of cross-entropy.
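For example, a mask like the one above can be built directly from the sentence lengths (a small sketch with hypothetical lengths):
import numpy as np

# Three sentences of lengths 3, 2 and 4, padded to max_length = 4.
lengths = [3, 2, 4]
batch, max_length = len(lengths), 4

weights = np.zeros([batch, max_length], dtype=np.float32)
for i, n in enumerate(lengths):
    weights[i, :n] = 1.0
# weights == [[1, 1, 1, 0],
#             [1, 1, 0, 0],
#             [1, 1, 1, 1]]

# Given per-position cross-entropy values of the same shape, the padded
# positions then contribute nothing to the total:
#   total_loss = np.sum(per_position_crossent * weights) / np.sum(weights)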
We used this in a class and our professor said we could just pass it ones of the right shape (the comment says "list of 1D batch-sized float-Tensors of the same length as logits"). That doesn't help with what they mean, but maybe it will help you get your code to run. Worked for me.
This code should do the trick: [tf.ones(batch_size, tf.float32) for _ in logits].
Edit: from TF code:
for logit, target, weight in zip(logits, targets, weights):
    if softmax_loss_function is None:
        # TODO(irving,ebrevdo): This reshape is needed because
        # sequence_loss_by_example is called with scalars sometimes, which
        # violates our general scalar strictness policy.
        target = array_ops.reshape(target, [-1])
        crossent = nn_ops.sparse_softmax_cross_entropy_with_logits(
            logit, target)
    else:
        crossent = softmax_loss_function(logit, target)
    log_perp_list.append(crossent * weight)
The weights that are passed are multiplied by the loss for that particular logit. So I guess if you want to take a particular prediction extra-seriously you can increase the weight above 1.
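For instance, a small sketch (hypothetical sizes) of the all-ones weights from above, with the final timestep up-weighted:
import tensorflow as tf

batch_size, num_steps = 32, 10

# Uniform weights in the "list of 1D batch-sized float-Tensors" format...
weights = [tf.ones(batch_size, tf.float32) for _ in range(num_steps)]
# ...except the final timestep, which now counts twice as much in the loss.
weights[-1] = 2.0 * tf.ones(batch_size, tf.float32)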