TensorFlow weighted vs sigmoid cross-entropy loss

I am trying to implement multi-label classification using TensorFlow (i.e., each output pattern can have many active units). The problem has imbalanced classes (i.e., many more zeros than ones in the label distribution, which makes the label patterns very sparse).
The best way to tackle the problem seems to be the tf.nn.weighted_cross_entropy_with_logits function. However, I get this runtime error:
ValueError: Tensor conversion requested dtype uint8 for Tensor with dtype float32
I can't understand what is wrong here. As input to the loss function, I pass the labels tensor, the logits tensor, and the positive class weight, which is a constant:
positive_class_weight = 10
loss = tf.nn.weighted_cross_entropy_with_logits(targets=labels, logits=logits, pos_weight=positive_class_weight)
Any hints on how to solve this? If I just pass the same labels and logits tensors to the tf.losses.sigmoid_cross_entropy loss function, everything works well (in the sense that TensorFlow runs properly, although the predictions after training are of course always zero).
See related problem here.

The error is most likely thrown downstream of the loss function itself, because the only significant difference between tf.losses.sigmoid_cross_entropy and tf.nn.weighted_cross_entropy_with_logits is the shape of the returned tensor.
Take a look at this example:
import tensorflow as tf

logits = tf.linspace(-3., 5., 10)   # 10 artificial logits
labels = tf.fill([10,], 1.)         # 10 positive labels (float32)
positive_class_weight = 10
weighted_loss = tf.nn.weighted_cross_entropy_with_logits(targets=labels, logits=logits, pos_weight=positive_class_weight)
print(weighted_loss.shape)
sigmoid_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=labels, logits=logits)
print(sigmoid_loss.shape)
The logits and labels tensors here are artificial, and both have shape (10,). What matters is that weighted_loss and sigmoid_loss have different shapes. Here's the output:
(10,)
()
This is because tf.losses.sigmoid_cross_entropy performs a reduction to a scalar (a sum, by default). So in order to replicate it, you have to wrap the weighted loss in tf.reduce_sum(...).
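For example, reducing the weighted loss yourself (a sketch reusing the tensors from the snippet above) gives it the same scalar shape:
reduced_weighted_loss = tf.reduce_sum(weighted_loss)  # or tf.reduce_mean(weighted_loss)
print(reduced_weighted_loss.shape)                    # prints ()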
If this doesn't help, make sure that the labels tensor has type float32. This mistake is easy to make; e.g., the following declaration won't work:
labels = tf.fill([10,], 1) # the type is not float!
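A float literal fixes it, or you can cast an existing integer tensor (int_labels below is just a placeholder name):
labels = tf.fill([10,], 1.)                  # float32
labels = tf.cast(int_labels, tf.float32)     # or cast explicitly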
You might also be interested in reading this question.

Related

What shape does my loss tensor need to be in tensorflow 2 using Keras API?

I have been playing around with custom loss functions for a while with some success, but I'm struggling with a new loss function, and I wonder if it might be due to the loss result tensor's shape.
My y_true and y_pred tensors have shape == (100, 216, 563). Due to the nature of the data and the calculations I'm performing in my loss function, it makes perfect sense to output a loss tensor of shape == (100, 563) because the second dimension gets reduced away with a reduce_prod() operation.
However, if I use this loss function alone, the loss value steadily increases instead of decreasing... I've not seen this before. If it were all over the place, I'd think the loss function was just a bad idea or my maths was wrong somewhere, but as far as I can tell the maths is right.
Will this weird shape with a missing middle dimension throw off the gradient calculations? I've already tried using keepdims=True in my reduce_foo() methods, but this makes no difference to the increasing loss value (and the results still have a different shape, shape == (100, 1, 563)).
Looking through the tensorflow docs, I can find examples of both a loss whose shape matches y_pred and y_true and a loss that is a single scalar value. Are there any specific rules stated anywhere as to what shape the output loss should be, or can anyone give me insights that might help me understand why the loss should be a specific shape (if that is even my problem)?
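For reference, Keras reduces whatever the loss function returns to a scalar before computing gradients (by default, roughly a mean over all remaining elements), so a (100, 563) loss tensor is not by itself a problem. A minimal sketch, with a made-up loss standing in for the real calculation:
import tensorflow as tf

def my_loss(y_true, y_pred):
    # hypothetical stand-in for the real calculation: reduce the middle
    # dimension away with a product, leaving shape (batch, 563)
    return tf.reduce_prod(tf.square(y_true - y_pred), axis=1)

y_true = tf.random.uniform((100, 216, 563))
y_pred = tf.random.uniform((100, 216, 563))

per_element = my_loss(y_true, y_pred)    # shape (100, 563)
scalar = tf.reduce_mean(per_element)     # roughly what Keras optimizes and reports
print(per_element.shape, scalar.shape)   # (100, 563) ()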

Using TensorFlow hessians for second partial derivative test

The second partial derivative test is a simple way to tell whether a critical point is a minimum, a maximum, or a saddle point. I am currently toying with the idea of implementing such a test for a simple neural network in TensorFlow. The following set of weights is used to model an XOR neural network with 2 inputs, 1 hidden layer with 2 hidden units, and 1 output unit:
weights = {
    'h1': tf.Variable(np.empty([2, 2]), name="h1", dtype=tf.float64),
    'b1': tf.Variable(np.empty([2]), name="b1", dtype=tf.float64),
    'h2': tf.Variable(np.empty([2, 1]), name="h2", dtype=tf.float64),
    'b2': tf.Variable(np.empty([1]), name="b2", dtype=tf.float64)
}
Both the gradients and the hessians can now be obtained as follows:
gradients = tf.gradients(mse_op, [weights['h1'], weights['b1'], weights['h2'], weights['b2']])
hessians = tf.hessians(mse_op, [weights['h1'], weights['b1'], weights['h2'], weights['b2']])
Where mse_op is the mean squared error of the network.
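mse_op itself isn't shown; a minimal forward pass that could produce it might look like the following sketch (the sigmoid activations and placeholder shapes are assumptions):
x = tf.placeholder(tf.float64, shape=[None, 2])   # XOR inputs
y = tf.placeholder(tf.float64, shape=[None, 1])   # XOR targets
hidden = tf.sigmoid(tf.matmul(x, weights['h1']) + weights['b1'])
output = tf.sigmoid(tf.matmul(hidden, weights['h2']) + weights['b2'])
mse_op = tf.reduce_mean(tf.square(output - y))    # mean squared error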
Both gradients and hessians compute just fine. The dimensionality of each gradient is equal to the dimensionality of the corresponding variable. The dimensionality of the hessians obviously differs.
The question: is it a good idea, and is it even possible to conveniently compute the eigenvalues of the hessians generated by tf.hessian applied to the given set of weights? Will the eigenvalues be representative of what I think they represent - i.e., will I be able to say that if overall, both positive and negative values are present, then we can conclude that the point is a saddle point?
So far, I have tried the following out-of-the-box approach to calculate the eigenvalues of each of the hessians:
eigenvals1 = tf.self_adjoint_eigvals(hessians[0])
eigenvals2 = tf.self_adjoint_eigvals(hessians[1])
eigenvals3 = tf.self_adjoint_eigvals(hessians[2])
eigenvals4 = tf.self_adjoint_eigvals(hessians[3])
The first, second, and fourth work, but the third one bombs out, complaining that Dimensions must be equal, but are 2 and 1 for 'SelfAdjointEigV2_2' (op: 'SelfAdjointEigV2') with input shapes: [2,1,2,1]. Should I just reshape the hessian somehow and carry on, or am I on the wrong track entirely?
After some fiddling, I have figured out that, given an n×m matrix of input variables, TensorFlow's tf.hessians produces an [n, m, n, m] tensor, which can be reshaped into a square [n*m, n*m] Hessian matrix as follows:
sq_hess = tf.reshape(hessians[0], [tf.size(weights['h1']), tf.size(weights['h1'])])
Further, one can calculate the eigenvalues of the resulting square hessian:
eigenvals = tf.self_adjoint_eigvals(sq_hess)
This might be trivial, but it took me some time to wrap my head around it. I believe the behaviour of tf.hessians is not very well documented. Once you put the dimensionalities together, though, everything makes sense!
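The same recipe generalizes to all four weight tensors; here's a small sketch using the hessians and weights defined above:
variables = [weights['h1'], weights['b1'], weights['h2'], weights['b2']]
all_eigenvals = []
for hess, var in zip(hessians, variables):
    n = tf.size(var)                     # total number of parameters in this variable
    sq_hess = tf.reshape(hess, [n, n])   # e.g. [2, 1, 2, 1] -> [2, 2]
    all_eigenvals.append(tf.self_adjoint_eigvals(sq_hess))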

Simple example of CuDnnGRU based RNN implementation in Tensorflow

I am using the following code for a standard GRU implementation:
def BiRNN_deep_dynamic_FAST_FULL_autolength(x, batch_size, dropout, hidden_dim):
    seq_len = length_rnn(x)
    with tf.variable_scope('forward'):
        lstm_cell_fwd = tf.contrib.rnn.GRUCell(hidden_dim, kernel_initializer=tf.contrib.layers.xavier_initializer(), bias_initializer=tf.contrib.layers.xavier_initializer())
        lstm_cell_fwd = tf.contrib.rnn.DropoutWrapper(lstm_cell_fwd, output_keep_prob=dropout)
    with tf.variable_scope('backward'):
        lstm_cell_back = tf.contrib.rnn.GRUCell(hidden_dim, kernel_initializer=tf.contrib.layers.xavier_initializer(), bias_initializer=tf.contrib.layers.xavier_initializer())
        lstm_cell_back = tf.contrib.rnn.DropoutWrapper(lstm_cell_back, output_keep_prob=dropout)
    outputs, _ = tf.nn.bidirectional_dynamic_rnn(cell_fw=lstm_cell_fwd, cell_bw=lstm_cell_back, inputs=x, sequence_length=seq_len, dtype=tf.float32, time_major=False)
    outputs_fwd, outputs_bck = outputs
    ### fwd_matrix keeps the last valid [-1] vector of each sequence
    fwd_matrix = tf.gather_nd(outputs_fwd, tf.stack([tf.range(batch_size), seq_len - 1], axis=1))  ### e.g. shape 99x64
    outputs_fwd = tf.transpose(outputs_fwd, [1, 0, 2])
    outputs_bck = tf.transpose(outputs_bck, [1, 0, 2])
    return outputs_fwd, outputs_bck, fwd_matrix
Can anyone provide a simple example of how to use tf.contrib.cudnn_rnn.CudnnGRU in a similar fashion? Just swapping out the cells doesn't work.
The first issue is that there is no dropout wrapper for the CuDnnGRU cell, which is fine. Second, it doesn't seem to work with tf.nn.bidirectional_dynamic_rnn. Any help appreciated.
CudnnGRU is not an RNNCell instance. It's more akin to dynamic_rnn.
The tensor manipulations below are equivalent, where input_tensor is a time-major tensor, i.e. of shape [max_sequence_length, batch_size, embedding_size]. CudnnGRU expects the input tensor to be time-major (as opposed to the more standard batch-major format, i.e. of shape [batch_size, max_sequence_length, embedding_size]), and it's good practice to use time-major tensors with RNN ops anyway, since they're somewhat faster.
CudnnGRU:
rnn = tf.contrib.cudnn_rnn.CudnnGRU(
    num_rnn_layers, hidden_size, direction='bidirectional')
rnn_output = rnn(input_tensor)
CudnnCompatibleGRUCell:
rnn_output = input_tensor
sequence_length = tf.reduce_sum(
    tf.sign(inputs),   # `inputs`: the zero-padded id tensor (time-major here)
    axis=0)            # use axis=1 if it is batch-major
for layer in range(num_rnn_layers):
    with tf.variable_scope('birnn_%d' % layer):   # separate scope per layer to avoid variable-name clashes
        fw_cell = tf.contrib.cudnn_rnn.CudnnCompatibleGRUCell(hidden_size)
        bw_cell = tf.contrib.cudnn_rnn.CudnnCompatibleGRUCell(hidden_size)
        outputs, _ = tf.nn.bidirectional_dynamic_rnn(
            fw_cell, bw_cell, rnn_output, sequence_length=sequence_length,
            dtype=tf.float32, time_major=True)     # set `time_major` accordingly
        # concatenate forward and backward outputs so the next layer sees both directions
        rnn_output = tf.concat(outputs, axis=-1)
Note the following:
If you were using LSTMs, you need not use CudnnCompatibleLSTMCell; you can use the standard LSTMCell. But with GRUs, the Cudnn implementation has inherently different math operations, and in particular, more weights (see the documentation).
Unlike dynamic_rnn, CudnnGRU doesn't allow you to specify sequence lengths. Still, it is over an order of magnitude faster, but you will have to be careful about how you extract your outputs (e.g., if you're interested in the final hidden state of each padded, variable-length sequence, you will need each sequence's length).
rnn_output is probably a tuple with lots of (distinct) stuff in both cases. Refer to the documentation, or just print it out, to inspect what parts of the output you need.
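If you do need the last valid output of each sequence from a padded, time-major output tensor, a small helper along these lines (a sketch, not part of the answer above) works:
def last_relevant_output(outputs_time_major, sequence_length):
    # outputs_time_major: [max_time, batch, features]; sequence_length: [batch]
    seq_len = tf.cast(sequence_length, tf.int32)
    batch_size = tf.shape(outputs_time_major)[1]
    indices = tf.stack([seq_len - 1, tf.range(batch_size)], axis=1)  # (time, batch) index pairs
    return tf.gather_nd(outputs_time_major, indices)                 # [batch, features]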

In tensorflow, how to calculate sequence loss using output from dynamic_decode

Hi fellow tensorflowers,
I am trying to implement a sequence-to-sequence model using the new seq2seq module that is under development and released with TF 1.0 and 1.1.
There is a dynamic_decode function that returns the logits in the form of rnn_output.
Then, I need to calculate the loss using the output of the RNN.
When I run it naively, just by calling tf.contrib.seq2seq.loss.sequence_loss with (rnn_output, targets, weights), it crashes with:
InvalidArgumentError (see above for traceback): Incompatible shapes: [1856,1,1024] vs. [9600,1,1024]
[[Node: optimize/gradients/loss/sequence_loss/sampled_softmax_loss/Mul_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](optimize/gradients/loss/sequence_loss/sampled_softmax_loss/Mul_grad/Shape/_3099, optimize/gradients/loss/sequence_loss/sampled_softmax_loss/Mul_grad/Shape_1/_3101)]]
[[Node: optimize/gradients/Add/_824 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:3", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_2787_optimize/gradients/Add", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:3"](^_cloopMainDynamicDecoderWithAttention/decoder/decoder/while/BasicDecoderStep/multi_rnn_cell/cell_1/multi_rnn_cell/cell_2/lstm_cell/zeros/_128)]]
Which is natural, since rnn_output is dynamically shaped.
I have two possible solutions:
1. "Pack" the dynamic tensor into a tensor whose size equals the maximum allowed length. I don't know how to pack a dynamic tensor into a tensor of fixed size, but it probably has something to do with the new interfaces for dynamic shapes: tf.while_loop and TensorArrays. It would be great to hear some advice on that.
2. Dynamically calculate the sequence_loss. But my knowledge of TensorFlow's internals is too limited to assess whether that would be easy to do. Any suggestions here?
The general question
What is the right approach to calculating a sampled/normal softmax cross-entropy loss from the dynamically shaped rnn_output of dynamic_decode?
I have the following code:
decoder_outputs, decoder_state = seq2seq.dynamic_decode(my_decoder, output_time_major=False,
                                                        parallel_iterations=512, swap_memory=True)
self.logits = decoder_outputs.rnn_output
self.loss = loss.sequence_loss(self.logits,
                               tf.transpose(tf.stack(targets), [1, 0], name="targets_"),
                               tf.transpose(tf.stack(self.target_weights), [1, 0], name="weights_"),
                               softmax_loss_function=softmax_loss_function)
ipdb> tf.__version__
'1.1.0-rc0'
python: 2.7
The trouble is with tf.contrib.seq2seq.loss.sequence_loss, for sure.
If you use dynamic RNNs and don't unroll your BPTT manually, you can use a much simpler loss function.
What I did is basically:
loss = tf.reduce_sum(
    tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=self.answers,
        logits=presoftmax
    )
) / self.batch_sz
I know, it's not purely scientific. You'll need to shape it for your task. It's just a hint.
I guess you are using GreedyEmbeddingHelper? During training, you should use TF's TrainingHelper. The output dimension should then match your target dimension, because at every time step the target is used as the input.
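For context, a rough TF 1.x sketch of the training-time wiring (decoder_input_embeddings, target_lengths, decoder_cell, initial_state, and projection_layer are assumed names, not from the post):
helper = tf.contrib.seq2seq.TrainingHelper(
    inputs=decoder_input_embeddings,   # [batch, max_target_len, embed_dim]
    sequence_length=target_lengths)    # [batch], true length of each target sequence
my_decoder = tf.contrib.seq2seq.BasicDecoder(decoder_cell, helper, initial_state,
                                             output_layer=projection_layer)
decoder_outputs, decoder_state = tf.contrib.seq2seq.dynamic_decode(my_decoder, output_time_major=False)
logits = decoder_outputs.rnn_output    # aligned, step for step, with the targets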

understanding tensorflow sequence_loss parameters

The sequence_loss module's source code has three required parameters, listed as outputs, targets, and weights.
Outputs and targets are self-explanatory, but what I'm looking to better understand is the weights parameter.
The other thing I find confusing is the statement that the targets should be the same length as the outputs; what exactly do they mean by the length of a tensor, especially if it's a 3-dimensional one?
Think of the weights as a mask applied to the input tensor. In many NLP applications, the sentences in a batch have different lengths. In order to batch multiple sentences into a minibatch to feed to the neural net, people use a mask matrix to denote which elements of the input tensor are actually valid input. For instance, the weights can be np.ones([batch, max_length]), which means that all of the input elements are legitimate.
We can also use a matrix of the same shape as the labels, such as np.asarray([[1,1,1,0],[1,1,0,0],[1,1,1,1]]) (assuming the labels' shape is 3x4); then the cross-entropy of the first row, last column, will be masked out to 0.
You can also use the weights to compute a weighted accumulation of the cross-entropy.
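For instance (a sketch, not from the answer above), such a mask can be built directly from the sentence lengths with tf.sequence_mask:
import tensorflow as tf

sequence_lengths = [3, 2, 4]   # true length of each sentence in the batch
weights = tf.cast(tf.sequence_mask(sequence_lengths, maxlen=4), tf.float32)
# -> [[1., 1., 1., 0.],
#     [1., 1., 0., 0.],
#     [1., 1., 1., 1.]]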
We used this in a class and our professor said we could just pass it ones of the right shape (the comment says "list of 1D batch-sized float-Tensors of the same length as logits"). That doesn't help with what they mean, but maybe it will help you get your code to run. Worked for me.
This code should do the trick: [tf.ones(batch_size, tf.float32) for _ in logits].
Edit: from TF code:
for logit, target, weight in zip(logits, targets, weights):
    if softmax_loss_function is None:
        # TODO(irving,ebrevdo): This reshape is needed because
        # sequence_loss_by_example is called with scalars sometimes, which
        # violates our general scalar strictness policy.
        target = array_ops.reshape(target, [-1])
        crossent = nn_ops.sparse_softmax_cross_entropy_with_logits(
            logit, target)
    else:
        crossent = softmax_loss_function(logit, target)
    log_perp_list.append(crossent * weight)
The weights that are passed are multiplied by the loss for that particular logit. So I guess if you want to take a particular prediction extra seriously, you can increase its weight above 1.