Custom TensorFlow loss function with batch size > 1? - tensorflow

I have a neural network with the following code snippets; note that batch_size == 1 and input_dim == output_dim:
net_in = tf.Variable(tf.zeros(shape = [batch_size, input_dim]), dtype=tf.float32)
input_placeholder = tf.compat.v1.placeholder(shape = [batch_size, input_dim], dtype=tf.float32)
assign_input = net_in.assign(input_placeholder)
# Some matmuls, activations, dropouts, normalizations...
net_out = tf.tanh(output_before_activation)
def loss_fn(output, input):
    # input.shape = output.shape = (batch_size, input_dim)
    output = tf.reshape(output, [input_dim,])  # shape them into 1d vectors
    input = tf.reshape(input, [input_dim,])
    return my_fn_that_only_takes_in_vectors(output, input)
# Create session, preprocess data ...
for epoch in epoch_num:
    for batch in range(total_example_num // batch_size):
        sess.run(assign_input, feed_dict={input_placeholder: some_appropriate_numpy_array})
        sess.run(optimizer.minimize(loss_fn(net_out, net_in)))
Currently the neural network above works fine, but it is very slow because the gradients are updated for every single sample (batch size = 1). I would like to set batch size > 1, but my_fn_that_only_takes_in_vectors cannot accommodate matrices whose first dimension is not 1. Due to the nature of my custom loss, flattening the batch input into a vector of length (batch_size * input_dim) does not seem to work.
How would I write my new custom loss_fn now that the input and output are N x input_dim where N > 1? In Keras this would not have been an issue because Keras somehow takes the average of the gradients of each example in the batch. For my TensorFlow function, should I take each row as a vector individually, pass them to my_fn_that_only_takes_in_vectors, and then take the average of the results?

You can use a function that computes the loss on the whole batch and works independently of the batch size. Basically, the operations are applied along the whole first dimension of the input (the first dimension indexes the elements of the batch). Here is an example; I hope it helps to see how the operations are carried out:
def my_loss(y_true, y_pred):
    dx2 = tf.math.squared_difference(y_true[:, 0], y_true[:, 2])  # shape: (BatchSize,)
    dy2 = tf.math.squared_difference(y_true[:, 1], y_true[:, 3])  # shape: (BatchSize,)
    denominator = dx2 + dy2  # shape: (BatchSize,)
    dst_vec = tf.math.squared_difference(y_true, y_pred)  # shape: (BatchSize, n_labels)
    numerator = tf.reduce_sum(dst_vec, axis=-1)  # shape: (BatchSize,)
    loss_vector = tf.cast(numerator / denominator, dtype="float32")  # shape: (BatchSize,) the loss of each element of the batch
    loss = tf.reduce_sum(loss_vector)  # if you want to sum the losses
    return loss
I am not sure whether you need to return the sum or the average of the losses for the batch.
If you sum, make sure to use a validation dataset with the same batch size, otherwise the loss is not comparable.
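If your per-example function really only accepts vectors, another option (a minimal sketch, assuming my_fn_that_only_takes_in_vectors returns a scalar float32 loss for one example) is to map it over the rows of the batch and average, which mimics how Keras reduces per-example losses:
def loss_fn(output, input):
    # output, input: shape (batch_size, input_dim)
    per_example_losses = tf.map_fn(
        lambda pair: my_fn_that_only_takes_in_vectors(pair[0], pair[1]),
        (output, input),
        dtype=tf.float32)  # shape: (batch_size,)
    return tf.reduce_mean(per_example_losses)
Mapping keeps the vector-only function unchanged, at the cost of running it once per example; a fully vectorized loss like the one above is usually faster.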

Related

Tensorflow Neural Machine Translation Example - Loss Function

I'm stepping through the code here: https://www.tensorflow.org/tutorials/text/nmt_with_attention
as a learning exercise, and I am confused about when the loss function is called and what is passed to it. I added two print statements to loss_function, and when the training loop runs it only prints out
(64,)
(64, 4935)
at the very start multiple times and then nothing again. I am confused on two fronts:
Why doesn't loss_function() get called repeatedly through the training loop and print the shapes? I expected the loss function to be called at the end of each batch, which is of size 64.
I expected the shapes of the actuals to be (batch size, time steps) and the predictions to be (batch size, time steps, vocabulary size). It looks like the loss gets called separately for every time step (64 is the batch size and 4935 is the vocabulary size).
The relevant bits I believe are reproduced below.
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction='none')
def loss_function(real, pred):
    mask = tf.math.logical_not(tf.math.equal(real, 0))
    print(real.shape)
    print(pred.shape)
    loss_ = loss_object(real, pred)
    mask = tf.cast(mask, dtype=loss_.dtype)
    loss_ *= mask  # set padding entries to zero loss
    return tf.reduce_mean(loss_)
@tf.function
def train_step(inp, targ, enc_hidden):
    loss = 0
    with tf.GradientTape() as tape:
        enc_output, enc_hidden = encoder(inp, enc_hidden)
        dec_hidden = enc_hidden
        dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)
        # Teacher forcing - feeding the target as the next input
        for t in range(1, targ.shape[1]):
            # passing enc_output to the decoder
            predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
            print(targ[:, t])
            print(predictions)
            loss += loss_function(targ[:, t], predictions)
            # using teacher forcing
            dec_input = tf.expand_dims(targ[:, t], 1)
    batch_loss = (loss / int(targ.shape[1]))
    variables = encoder.trainable_variables + decoder.trainable_variables
    gradients = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(gradients, variables))
    return batch_loss
EPOCHS = 10
for epoch in range(EPOCHS):
    start = time.time()
    enc_hidden = encoder.initialize_hidden_state()
    total_loss = 0
    for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
        # print(batch)
        batch_loss = train_step(inp, targ, enc_hidden)
        total_loss += batch_loss
        if batch % 100 == 0:
            print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1, batch, batch_loss.numpy()))
    # saving (checkpoint) the model every 2 epochs
    if (epoch + 1) % 2 == 0:
        checkpoint.save(file_prefix=checkpoint_prefix)
    print('Epoch {} Loss {:.4f}'.format(epoch + 1, total_loss / steps_per_epoch))
    print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
The loss is treated similarly to the rest of the graph. In TensorFlow, calls like tf.keras.layers.Dense and tf.nn.conv2d don't actually perform the operation; instead, they define the graph for the operations. I have another post here, How does backpropagation work in tensorflow, that explains backprop and some of the motivation for why this is.
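To make the "calls define the graph" point concrete, here is a minimal graph-mode sketch of my own (not from the original post), using tf.compat.v1 to force TF 1.x-style graph execution:
import tensorflow as tf
tf.compat.v1.disable_eager_execution()  # graph mode, as in TF 1.x

x = tf.compat.v1.placeholder(tf.float32, [None, 5])
w = tf.compat.v1.get_variable("w", shape=[5, 3])
y = tf.matmul(x, w)  # this only adds a MatMul node to the graph; nothing is computed yet
print(y)             # prints a symbolic Tensor (name, shape, dtype), not numbers
with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1., 2., 3., 4., 5.]]}))  # only now are values computed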
The loss function you have above is
def loss_function(real, pred):
    mask = tf.math.logical_not(tf.math.equal(real, 0))
    print(real.shape)
    print(pred.shape)
    loss_ = loss_object(real, pred)
    mask = tf.cast(mask, dtype=loss_.dtype)
    loss_ *= mask  # set padding entries to zero loss
    result = tf.reduce_mean(loss_)
    return result
Think of this function as a generator that returns result. result defines the graph to compute the loss. Perhaps a better name for this function would be loss_function_graph_creator ... but that's another story.
result, a graph that contains the weights, biases, and information about how to do both the forward propagation and the back propagation, is all model.fit needs. It no longer needs this function, and it doesn't need to run the function every loop.
Truly, what is happening under the covers is that given your model (called my_model), the compile line
model.compile(loss=loss_function, optimizer='sgd')
is effectively the following lines
input = tf.keras.Input()
output = my_model(input)
loss = loss_function(input,output)
opt = tf.keras.optimizers.SGD()
gradient = opt.minimize(loss)
get_gradient_model = tf.keras.Model(input,gradient)
and there you have the gradient operation, which can be used in a loop to get the gradients, which is conceptually what model.fit does.
Q and A
Is the fact that this function (@tf.function def train_step(inp, targ, enc_hidden)) has the tf.function decorator (and that the loss function is called in it) what makes this code run as you describe, and not as normal Python?
No. It is not 'normal' python. It only defines the flow of tensors through the graph of matrix operations that will (hopefully) run on your GPU. All the tensorflow operations just set up the operations on the GPU (or a simulated GPU if you don't have one).
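For intuition, here is a minimal sketch of my own (not from the tutorial) showing why a Python print inside a tf.function only fires while the graph is being traced, while tf.print fires on every call:
import tensorflow as tf

@tf.function
def traced_fn(x):
    print("tracing, x.shape =", x.shape)  # Python print: runs only while the graph is being traced
    tf.print("executing, x =", x)         # tf.print: runs every time the traced graph executes
    return x * 2

traced_fn(tf.constant([1.0, 2.0]))  # first call traces the function, so both messages appear
traced_fn(tf.constant([3.0, 4.0]))  # same input signature, no retracing: only the tf.print message appears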
How can I tell the actual shapes being passed into loss_function (the second part of my question)?
No problem at all... simply run this code
loss_function(y, y).shape
This will compute the loss function of your expected output compared exactly to the same output. The loss will (hopefully) be zero, but actually calculating the value of the loss wasn't the point. You want the shape and this will give it to you.

What's the difference between tf.nn.ctc_loss and pytorch.nn.CTCLoss

For the same input and label:
the output of pytorch.nn.CTCLoss is 5.74,
the output of tf.nn.ctc_loss is 129.69,
but the output of math.log(tf ctc loss) is 4.86
So what's the difference between pytorch.nn.CTCLoss and tf.nn.ctc_loss?
tf: 1.13.1
pytorch: 1.1.0
I have tried these:
1. log_softmax the input, and then send it to pytorch.nn.CTCLoss
2. tf.nn.log_softmax the input, and then send it to tf.nn.ctc_loss
3. directly send the input to tf.nn.ctc_loss
4. directly send the input to tf.nn.ctc_loss, and then math.log(output of tf.nn.ctc_loss)
In cases 2, 3, and 4, the result of the calculation is different from pytorch.nn.CTCLoss:
from torch import nn
import torch
import tensorflow as tf
import math
time_step = 50 # Input sequence length
vocab_size = 20 # Number of classes
batch_size = 16 # Batch size
target_sequence_length = 30 # Target sequence length
def dense_to_sparse(dense_tensor, sequence_length):
    indices = tf.where(tf.sequence_mask(sequence_length))
    values = tf.gather_nd(dense_tensor, indices)
    shape = tf.shape(dense_tensor, out_type=tf.int64)
    return tf.SparseTensor(indices, values, shape)
def compute_loss(x, y, x_len):
    ctclosses = tf.nn.ctc_loss(
        y,
        tf.cast(x, dtype=tf.float32),
        x_len,
        preprocess_collapse_repeated=False,
        ctc_merge_repeated=False,
        ignore_longer_outputs_than_inputs=False
    )
    ctclosses = tf.reduce_mean(ctclosses)
    with tf.Session() as sess:
        ctclosses = sess.run(ctclosses)
        print(f"tf ctc loss: {ctclosses}")
        print(f"tf log(ctc loss): {math.log(ctclosses)}")
minimum_target_length = 10
ctc_loss = nn.CTCLoss(blank=vocab_size - 1)
x = torch.randn(time_step, batch_size, vocab_size) # [size] = T,N,C
y = torch.randint(0, vocab_size - 2, (batch_size, target_sequence_length), dtype=torch.long) # low, high, [size]
x_lengths = torch.full((batch_size,), time_step, dtype=torch.long) # Length of inputs
y_lengths = torch.randint(minimum_target_length, target_sequence_length, (batch_size,), dtype=torch.long)  # Length of targets can be variable (even if target sequences are constant length)
loss = ctc_loss(x.log_softmax(2).detach(), y, x_lengths, y_lengths)
print(f"torch ctc loss: {loss}")
x = x.numpy()
y = y.numpy()
x_lengths = x_lengths.numpy()
y_lengths = y_lengths.numpy()
x = tf.cast(x, dtype=tf.float32)
y = tf.cast(dense_to_sparse(y, y_lengths), dtype=tf.int32)
compute_loss(x, y, x_lengths)
I expected the output of tf.nn.ctc_loss to be the same as the output of pytorch.nn.CTCLoss, but they are not. How can I make them the same?
The automatic mean reduction of pytorch's CTCLoss is not the same as computing all the individual losses and then taking the mean (as you are doing in the Tensorflow implementation). Indeed, from the doc of CTCLoss (pytorch):
``'mean'``: the output losses will be divided by the target lengths and
then the mean over the batch is taken.
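For intuition, here is a hedged sketch (reusing the torch tensors x, y, x_lengths, y_lengths and vocab_size from the snippet above, before they are converted to numpy) of how the 'mean' reduction relates to the raw per-example losses:
ctc_none = nn.CTCLoss(blank=vocab_size - 1, reduction='none')
per_example = ctc_none(x.log_softmax(2).detach(), y, x_lengths, y_lengths)  # shape: (batch_size,)
mean_style = (per_example / y_lengths.float()).mean()  # what reduction='mean' returns
sum_style = per_example.sum()                          # what reduction='sum' returns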
To obtain the same value:
1- Change the reduction method to sum:
ctc_loss = nn.CTCLoss(reduction='sum')
2- Divide the loss computed by the batch_size:
loss = ctc_loss(x.log_softmax(2).detach(), y, x_lengths, y_lengths)
loss = (loss.item())/batch_size
3- Change the parameter ctc_merge_repeated of Tensorflow to True (I am assuming it is the case in the pytorch CTC as well)
ctclosses = tf.nn.ctc_loss(
    y,
    tf.cast(x, dtype=tf.float32),
    x_len,
    preprocess_collapse_repeated=False,
    ctc_merge_repeated=True,
    ignore_longer_outputs_than_inputs=False
)
You will now get very close results between the pytorch loss and the tensorflow loss (without taking the log of the value). The small remaining difference probably comes from slight differences between the implementations.
In my last three runs, I got the following values:
pytorch loss : 113.33 vs tf loss = 113.52
pytorch loss : 116.30 vs tf loss = 115.57
pytorch loss : 115.67 vs tf loss = 114.54

Tf.Print() doesn't print the shape of the tensors?

I have written a simple classification program using TensorFlow and I get the output, but when I try to print the shapes of the tensors for the model parameters, features, and bias, it does not work.
The function definitions:
import tensorflow as tf, numpy as np
from tensorflow.examples.tutorials.mnist import input_data
def get_weights(n_features, n_labels):
    # Return weights
    return tf.Variable(tf.truncated_normal((n_features, n_labels)))
def get_biases(n_labels):
    # Return biases
    return tf.Variable(tf.zeros(n_labels))
def linear(input, w, b):
    # Linear Function (xW + b)
    # return np.dot(input, w) + b
    return tf.add(tf.matmul(input, w), b)
def mnist_features_labels(n_labels):
    """Gets the first <n> labels from the MNIST dataset
    """
    mnist_features = []
    mnist_labels = []
    mnist = input_data.read_data_sets('dataset/mnist', one_hot=True)
    # In order to make quizzes run faster, we're only looking at 10000 images
    for mnist_feature, mnist_label in zip(*mnist.train.next_batch(10000)):
        # Add features and labels if it's for the first <n>th labels
        if mnist_label[:n_labels].any():
            mnist_features.append(mnist_feature)
            mnist_labels.append(mnist_label[:n_labels])
    return mnist_features, mnist_labels
The graph creation:
# Number of features (28*28 image is 784 features)
n_features = 784
# Number of labels
n_labels = 3
# Features and Labels
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# Weights and Biases
w = get_weights(n_features, n_labels)
b = get_biases(n_labels)
# Linear Function xW + b
logits = linear(features, w, b)
# Training data
train_features, train_labels = mnist_features_labels(n_labels)
print("Total {0} data points of Training Data, each having {1} features \n \
Total {2} number of labels,each having 1-hot encoding {3}".format(len(train_features),len(train_features[0]),\
len(train_labels),train_labels[0]
)
)
# global variables initialiser
init= tf.global_variables_initializer()
with tf.Session() as session:
session.run(init)
The problem is here:
# shapes =tf.Print ( tf.shape(features), [tf.shape(features),
# tf.shape(labels),
# tf.shape(w),
# tf.shape(b),
# tf.shape(logits)
# ], message= "The shapes are:" )
# print("Verify shapes",shapes)
logits = tf.Print(logits, [tf.shape(features),
                           tf.shape(labels),
                           tf.shape(w),
                           tf.shape(b),
                           tf.shape(logits)],
                  message="The shapes are:")
print(logits)
I looked here, but didn't find much that was useful.
# Softmax
prediction = tf.nn.softmax(logits)
# Cross entropy
# This quantifies how far off the predictions were.
# You'll learn more about this in future lessons.
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
# You'll learn more about this in future lessons.
loss = tf.reduce_mean(cross_entropy)
# Rate at which the weights are changed
# You'll learn more about this in future lessons.
learning_rate = 0.08
# Gradient Descent
# This is the method used to train the model
# You'll learn more about this in future lessons.
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# Run optimizer and get loss
_, l = session.run(
    [optimizer, loss],
    feed_dict={features: train_features, labels: train_labels})
# Print loss
print('Loss: {}'.format(l))
The output I am getting is:
Extracting dataset/mnist/train-images-idx3-ubyte.gz
Extracting dataset/mnist/train-labels-idx1-ubyte.gz
Extracting dataset/mnist/t10k-images-idx3-ubyte.gz
Extracting dataset/mnist/t10k-labels-idx1-ubyte.gz
Total 3118 data points of Training Data, each having 784 features
Total 3118 number of labels,each having 1-hot encoding [0. 1. 0.]
Tensor("Print_22:0", shape=(?, 3), dtype=float32)
Loss: 5.339271068572998
Could anyone help me understand why I am not able to see the shapes of the tensors?
That is not how you use tf.Print. It is an op that does nothing on its own (simply returns the input) but prints the requested tensors as a side effect. You should do something like
logits = tf.Print(logits, [tf.shape(features),
                           tf.shape(labels),
                           tf.shape(w),
                           tf.shape(b),
                           tf.shape(logits)],
                  message="The shapes are:")
Now, whenever logits is evaluated (as it will be for computing the loss/gradients), the shape information will be printed.
What you are doing right now is simply printing the return value of the tf.Print op, which is just its input (tf.shape(features)).
After @xdurch0's suggestion, I tried this:
shapes = tf.Print(logits, [tf.shape(features),
                           tf.shape(labels),
                           tf.shape(w),
                           tf.shape(b),
                           tf.shape(logits)],
                  message="The shapes are:")
# Run optimizer and get loss
_, l, resultingShapes = session.run([optimizer, loss, shapes],
                                    feed_dict={features: train_features, labels: train_labels})
print('The shapes are:', resultingShapes.shape)
and it worked partially,
Extracting dataset/mnist/train-images-idx3-ubyte.gz
Extracting dataset/mnist/train-labels-idx1-ubyte.gz
Extracting dataset/mnist/t10k-images-idx3-ubyte.gz
Extracting dataset/mnist/t10k-labels-idx1-ubyte.gz
Total 3118 data points of Training Data, each having 784 features
Total 3118 number of labels, each having 1-hot encoding [0. 1. 0.]
The shapes are: (3118, 3)
Loss: 10.223002433776855
Could @xdurch0 suggest something to get the desired results?
My DESIRED RESULTS are:
tf.shape(features): (3118, 784), tf.shape(labels): (3118, 3),
tf.shape(w): (784, 3), tf.shape(b): (3, 1), tf.shape(logits): (3118, 3)
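One way to get exactly those values (a sketch on my part, not from the original answers) is to skip tf.Print and evaluate the tf.shape ops directly in the same session.run call; note also that tf.Print writes to standard error, so in a notebook its output often appears in the terminal running the kernel rather than in the cell output:
shape_ops = [tf.shape(features), tf.shape(labels), tf.shape(w), tf.shape(b), tf.shape(logits)]
feature_shape, label_shape, w_shape, b_shape, logit_shape = session.run(
    shape_ops, feed_dict={features: train_features, labels: train_labels})
print('tf.shape(features):', feature_shape, 'tf.shape(labels):', label_shape)
print('tf.shape(w):', w_shape, 'tf.shape(b):', b_shape, 'tf.shape(logits):', logit_shape)
# Note: b was created with tf.zeros(n_labels), so its shape will print as (3,) rather than (3, 1).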

tensorflow cross entropy loss for sequence with different lengths

I'm building a seq2seq model with LSTM using TensorFlow. The loss function I'm using is the softmax cross entropy loss. The problem is that my input sequences have different lengths, so I padded them. The output of the model has the shape [max_length, batch_size, vocab_size]. How can I calculate the loss so that the 0-padded values don't affect it? tf.nn.softmax_cross_entropy_with_logits provides an axis parameter so we can calculate the loss on 3-dimensional input, but it doesn't provide weights. tf.losses.softmax_cross_entropy provides a weights parameter, but it receives input with shape [batch_size, nclass(vocab_size)]. Please help!
I think you'd have to write your own loss function. Check out https://danijar.com/variable-sequence-lengths-in-tensorflow/.
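Following that idea, here is a minimal sketch of such a loss (my own, assuming integer targets of shape [max_length, batch_size] and a seq_lengths vector of true sequence lengths, neither of which appears in the question) that masks the padded positions with tf.sequence_mask:
# logits: [max_length, batch_size, vocab_size], targets: [max_length, batch_size] (integer ids)
xent = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=targets, logits=logits)  # [max_length, batch_size]
mask = tf.sequence_mask(seq_lengths, maxlen=tf.shape(logits)[0], dtype=tf.float32)    # [batch_size, max_length]
mask = tf.transpose(mask)                                                             # [max_length, batch_size]
loss = tf.reduce_sum(xent * mask) / tf.reduce_sum(mask)  # average over the non-padded tokens only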
In this case you need to pad logits and labels so that they have the same length. So, if you have a logits tensor of size (batch_size, length, vocab_size) and a labels tensor of size (batch_size, length), in which length is the size of your sequence, you first have to pad them to the same length:
def _pad_tensors_to_same_length(logits, labels):
    """Pad x and y so that the results have the same length (second dimension)."""
    with tf.name_scope("pad_to_same_length"):
        logits_length = tf.shape(logits)[1]
        labels_length = tf.shape(labels)[1]
        max_length = tf.maximum(logits_length, labels_length)
        logits = tf.pad(logits, [[0, 0], [0, max_length - logits_length], [0, 0]])
        labels = tf.pad(labels, [[0, 0], [0, max_length - labels_length]])
        return logits, labels
Then you can do the padded cross entropy:
def padded_cross_entropy_loss(logits, labels, vocab_size):
    """Calculate cross entropy loss while ignoring padding.
    Args:
        logits: Tensor of size [batch_size, length_logits, vocab_size]
        labels: Tensor of size [batch_size, length_labels]
        vocab_size: int size of the vocabulary
    Returns:
        Returns the cross entropy loss
    """
    with tf.name_scope("loss", values=[logits, labels]):
        logits, labels = _pad_tensors_to_same_length(logits, labels)
        # Calculate cross entropy (one-hot the integer labels so they match the shape of logits)
        with tf.name_scope("cross_entropy", values=[logits, labels]):
            targets = tf.one_hot(tf.cast(labels, tf.int32), depth=vocab_size)
            xentropy = tf.nn.softmax_cross_entropy_with_logits_v2(
                logits=logits, labels=targets)
            weights = tf.to_float(tf.not_equal(labels, 0))
            return xentropy * weights
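A usage note on the function above (my own assumption, not part of the original answer): the returned xentropy * weights still has shape [batch_size, max_length], so to get a scalar training loss you would typically normalize by the number of non-padding tokens, for example:
per_token_loss = padded_cross_entropy_loss(logits, labels, vocab_size)  # [batch_size, max_length]
num_real_tokens = tf.reduce_sum(tf.to_float(tf.not_equal(labels, 0)))
loss = tf.reduce_sum(per_token_loss) / tf.maximum(num_real_tokens, 1.0)  # mean over real tokens only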
The function below takes two tensors with shape (batch_size, time_steps, vocab_len), computes a mask that zeroes out the time steps corresponding to padding, and removes the padding loss from the categorical cross entropy.
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K

# padding labels are one-hot vectors that have 1 as the first element
def mask_loss(y_true, y_pred):
    mask_value = np.zeros((vocab_len))
    mask_value[0] = 1
    # find out which timesteps in `y_true` are not the padding character
    mask = K.equal(y_true, mask_value)
    mask = 1 - K.cast(mask, K.floatx())
    mask = K.sum(mask, axis=2) / 2
    # multiplying the loss by the mask. the loss for padding will be zero
    loss = tf.keras.layers.multiply([K.categorical_crossentropy(y_true, y_pred), mask])
    return K.sum(loss) / K.sum(mask)

RNN & Batches in Tensorflow

The batch approach for RNNs in TensorFlow is not clear to me. For example, tf.nn.rnn takes as input a list of Tensors [BATCH_SIZE x INPUT_SIZE]. We normally feed the session batches of data, so why does it take a list of batches rather than a single batch?
This leads to my next confusion:
data = []
for _ in range(0, len(train_input)):
    data.append(tf.placeholder(tf.float32, [CONST_BATCH_SIZE, CONST_INPUT_SIZE]))
lstm = tf.nn.rnn_cell.BasicLSTMCell(CONST_NUM_OF_HIDDEN_STATES)
val, state = tf.nn.rnn(lstm, data, dtype=tf.float32)
I pass a list of Tensors [CONST_BATCH_SIZE x CONST_INPUT_OTPUT_SIZE] to tf.nn.rnn and get an output value that is a list of Tensors [CONST_BATCH_SIZE x CONST_NUM_OF_HIDDEN_STATES]. Now I want to use softmax on all the HIDDEN_STATES outputs and need to calculate the weights with matmul + bias.
Should I use for matmul:
weight = tf.Variable(tf.zeros([CONST_NUM_OF_HIDDEN_STATES, CONST_OTPUT_SIZE]))
for i in val:
    mult = tf.matmul(i, weight)
bias = tf.Variable(tf.zeros([CONST_OTPUT_SIZE]))
prediction = tf.nn.softmax(mult + bias)
Or should I create a 2D array from val and then use tf.matmul without the for loop?
This should work. output is the batched data from the RNN. probs will contain the probabilities for the whole batch input.
logits = tf.matmul(output, softmax_w) + softmax_b
probs = tf.nn.softmax(logits)
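If you would rather avoid the Python for loop entirely, here is a hedged sketch (assuming a TF 1.x tf.concat(values, axis) signature) that flattens the list of per-timestep outputs into one 2-D tensor and does a single matmul:
# val is a list of [CONST_BATCH_SIZE, CONST_NUM_OF_HIDDEN_STATES] tensors, one per time step
output = tf.reshape(tf.concat(val, 1), [-1, CONST_NUM_OF_HIDDEN_STATES])  # [batch * time, hidden]
softmax_w = tf.Variable(tf.zeros([CONST_NUM_OF_HIDDEN_STATES, CONST_OTPUT_SIZE]))
softmax_b = tf.Variable(tf.zeros([CONST_OTPUT_SIZE]))
logits = tf.matmul(output, softmax_w) + softmax_b  # one matmul covering every (batch, time step) pair
probs = tf.nn.softmax(logits)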