TensorFlow: Differentiable Primitives

I was under the impression that all TensorFlow primitives are differentiable. Under this "illusion" I wrote the function below, hoping that TensorFlow would just automatically differentiate it so that I could backprop errors through it.
Rank-weight function:
def ranked(a):
    lens = tf.convert_to_tensor(tf.range(1, (tf.size(a) + 1)))
    rankw01 = tf.cast(tf.convert_to_tensor(tf.contrib.framework.argsort(tf.contrib.framework.argsort(a)) + 1),
                      tf.float64)
    rankw02 = tf.convert_to_tensor(rankw01 - ((tf.size(a) + 1)/2))
    rankw03 = tf.divide(rankw02, tf.reduce_sum(tf.gather(rankw02, tf.where(tf.greater(rankw02, 0)))))
    rankw04 = tf.cast(rankw03, tf.float32)
    return rankw04
Unfortunately, the function works as expected in the forward pass but fails in the backward pass because (judging from the error I keep getting) the derivative does not exist.
The function is explained in the attached image.
I have the following questions:
1: Why can't I take the derivative of the function above?
2: If it is an implementation issue, can you suggest how I can rewrite it so that I can take its derivative and backprop errors through it?
3: Are all TensorFlow ops differentiable?

So I followed @DomJack's advice, removed the tf.convert_to_tensor calls, and did a little housecleaning all the way through.
Now the function is differentiable.
def ranked(a):
    rankw01 = tf.cast(tf.contrib.framework.argsort(tf.contrib.framework.argsort(a)) + 1, tf.float32)
    rankw02 = rankw01 - tf.cast((tf.shape(a)[-1] + 1)/2, tf.float32)
    rankw03 = tf.div(rankw02, tf.reduce_sum(tf.nn.relu(rankw02), axis=-1, keepdims=True))
    return rankw03
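For reference, a quick smoke test that backprop now runs (a minimal sketch, assuming TF 1.x; the integer argsort path contributes no gradient by itself, so gradients flow through the multiplication below):

v = tf.Variable([0.3, -1.2, 0.7, 2.5], dtype=tf.float32)  # hypothetical scores
loss = tf.reduce_sum(ranked(v) * v)  # rank-weighted sum
grads = tf.gradients(loss, v)  # should run without the missing-gradient error
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grads))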

Related

How to use the tf.case API of TensorFlow correctly?

I want to design the following function for expanding any 1D/2D/3D matrix to a 4D matrix.
import tensorflow as tf

def inputs_2_4D(inputs):
    _ranks = tf.rank(inputs)
    return tf.case({tf.equal(_ranks, 3): lambda: tf.expand_dims(inputs, 3),
                    tf.equal(_ranks, 2): lambda: tf.expand_dims(tf.expand_dims(inputs, 0), 3),
                    tf.equal(_ranks, 1): lambda: tf.expand_dims(tf.expand_dims(tf.expand_dims(inputs, 0), 0), 3)},
                   default=lambda: tf.identity(inputs))

def run():
    with tf.Session() as sess:
        mat_1d = tf.constant([1, 1])
        mat_2d = tf.constant([[1, 1]])
        mat_3d = tf.constant([[[1, 1]]])
        mat_4d = tf.constant([[[[1, 1]]]])
        result = inputs_2_4D(mat_1d)
        print(result.eval())
The function, however, does not run well. It only outputs a 4D matrix when the mat_3d or mat_4d tensors are passed into it; there are errors if a 1D or 2D matrix is passed to the function.
When passing mat_3d or mat_4d into inputs_2_4D(), they are expanded to a 4D matrix or returned unchanged:
mat_3d -----> [[[[1]
                 [1]]]]
mat_4d -----> [[[[1 1]]]]
When the mat_1d or mat_2d matrices are passed into inputs_2_4D, the error information is:
ValueError: dim 3 not in the interval [-2, 1]. for 'case/cond/ExpandDims' (op: 'ExpandDims') with input shapes: [2], [] and with computed input tensors: input[1] = <3>.
I tested another, similar function before, and that one runs correctly.
import tensorflow as tf

def test_2_4D(inputs):
    _ranks = tf.rank(inputs)
    return tf.case({tf.equal(_ranks, 3): lambda: tf.constant(3),
                    tf.equal(_ranks, 2): lambda: tf.constant(2),
                    tf.equal(_ranks, 1): lambda: tf.constant(1)},
                   default=lambda: tf.identity(inputs))

def run():
    with tf.Session() as sess:
        mat_1d = tf.constant([1, 1])
        mat_2d = tf.constant([[1, 1]])
        mat_3d = tf.constant([[[1, 1]]])
        mat_4d = tf.constant([[[[1, 1]]]])
        result = test_2_4D(mat_3d)
        print(result.eval())
This function correctly outputs the corresponding result for any of the matrices.
test_2_4D() RESULTS:
mat_1d -----> 1
mat_2d -----> 2
mat_3d -----> 3
mat_4d -----> [[[[1 1]]]]
I don't know why the correct branch in inputs_2_4D() cannot be found even though the tf.equal() in each branch is evaluated. My impression is that the 1st and 2nd branches are still being built even when the input matrix is mat_1d or mat_2d, which makes the program crash. Please help me analyze this problem!
I think I worked out what the problem is here. It turns out that all condition/function pairs are evaluated; this can be revealed by giving the ops different names. The problem is that if your input is, say, rank 2, TensorFlow still evaluates tf.equal(_ranks, 3): lambda: tf.expand_dims(inputs, 3). This leads to a crash because a rank-2 tensor cannot be expanded along dim 3 (the maximum allowed value is 2).
This actually makes sense since with tf.case you're basically saying "I don't know which of these cases is going to be true at runtime, so check which one is appropriate and execute the corresponding function". However this means that Tensorflow needs to prepare execution paths for all possible cases, which in this case leads to invalid computations (trying to expand invalid dimensions).
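To illustrate (a minimal sketch on TF 1.x, with hypothetical op names): even when the input rank is fixed, every branch op gets added to the graph:

import tensorflow as tf

x = tf.constant([[1, 1]])  # rank 2
r = tf.rank(x)
out = tf.case({tf.equal(r, 1): lambda: tf.identity(x, name='branch_rank1'),
               tf.equal(r, 2): lambda: tf.identity(x, name='branch_rank2')},
              default=lambda: tf.identity(x, name='branch_default'))
# all three named branch ops exist in the graph, even though only one can ever run
print([op.name for op in tf.get_default_graph().get_operations()
       if 'branch' in op.name])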
At this point it would be nice to know a little more about your problem, i.e. why exactly you need that function. If you have different inputs and you simply want to bring them all to 4D, but each input always has the same dimensionality, consider simply using Python if-statements. Example:
inputs3d = tf.constant([[[1, 1]]])  # this is always 3D
inputs2d = tf.constant([[1, 1]])    # this is always 2D
...

def inputs_2_4D(inputs):
    _rank = len(inputs.shape.as_list())
    if _rank == 3:
        return tf.expand_dims(inputs, 3)
    elif _rank == 2:
        return tf.expand_dims(tf.expand_dims(inputs, 0), 3)
    ...
This will check the input rank while the graph is being built (not at runtime like tf.case) and really only prepare those expand_dims ops that are appropriate for the given input.
However if you have a single inputs tensor and this could have different ranks at different times of your program this would require a different solution. Please let us know which problem you're trying to solve!
I have implemented the functionality I want in two ways. I share my code below.
The 1st method based on tf.cond:
def inputs_2_4D(inputs):
    _rank1d = tf.rank(inputs)
    def _1d_2_2d(): return tf.expand_dims(inputs, 0)
    def _greater_than_1d(): return tf.identity(inputs)
    _tmp_2d = tf.cond(_rank1d < 2, _1d_2_2d, _greater_than_1d)

    _rank2d = tf.rank(_tmp_2d)
    def _2d_2_3d(): return tf.expand_dims(_tmp_2d, 0)
    def _greater_than_2d(): return tf.identity(_tmp_2d)
    _tmp_3d = tf.cond(_rank2d < 3, _2d_2_3d, _greater_than_2d)

    _rank3d = tf.rank(_tmp_3d)
    def _3d_2_4d(): return tf.expand_dims(_tmp_3d, 3)
    def _greater_than_3d(): return tf.identity(_tmp_3d)
    return tf.cond(_rank3d < 4, _3d_2_4d, _greater_than_3d)
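If I trace it correctly, a quick check for the 1D case (a sketch):

with tf.Session() as sess:
    # (2,) -> (1, 2) -> (1, 1, 2) -> (1, 1, 2, 1)
    print(inputs_2_4D(tf.constant([1, 1])).eval().shape)  # expected: (1, 1, 2, 1)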
The 2nd method based on tf.case with tf.cond:
def inputs_2_4D_1(inputs):
    _rank = tf.rank(inputs)
    def _assign_original(): return tf.identity(inputs)
    def _dummy(): return tf.expand_dims(inputs, 0)
    _1d = tf.cond(tf.equal(_rank, 1), _assign_original, _dummy)
    _2d = tf.cond(tf.equal(_rank, 2), _assign_original, _dummy)
    _3d = tf.cond(tf.equal(_rank, 3), _assign_original, _dummy)

    def _1d_2_4d(): return tf.expand_dims(tf.expand_dims(tf.expand_dims(_1d, 0), 0), 3)
    def _2d_2_4d(): return tf.expand_dims(tf.expand_dims(_2d, 0), 3)
    def _3d_2_4d(): return tf.expand_dims(_3d, 3)
    return tf.case({tf.equal(_rank, 1): _1d_2_4d,
                    tf.equal(_rank, 2): _2d_2_4d,
                    tf.equal(_rank, 3): _3d_2_4d},
                   default=_assign_original)
I think the 2nd method should be less efficient than the 1st, because _dummy() always wastes two operations when assigning inputs to _1d, _2d, and _3d respectively.

Custom Keras metric, changing

I am currently trying to create my own loss function for Keras (using the TensorFlow backend). It is a simple categorical crossentropy, but I am applying a factor to the 1st column to penalize loss from the 1st class more heavily.
Yet I am new to Keras and I can't figure out how to translate my function (below) as I have to use symbolic expressions and it seems I can't go element-wise:
def custom_categorical_crossentropy(y_true, y_pred):
    y_pred = np.clip(y_pred, _EPSILON, 1.0 - _EPSILON)
    out = np.zeros(y_true.shape).astype('float32')
    for i in range(0, y_true.shape[0]):
        for j in range(0, y_true.shape[1]):
            # penalize all elements of class 1 more, so that the loss takes its
            # low proportion in the dataset into account
            if j == 0:
                out[i][j] = -(prop_database * (y_true[i][j] * np.log(y_pred[i][j]) + (1.0 - y_true[i][j]) * np.log(1.0 - y_pred[i][j])))
            else:
                out[i][j] = -(y_true[i][j] * np.log(y_pred[i][j]) + (1.0 - y_true[i][j]) * np.log(1.0 - y_pred[i][j]))
    out = np.mean(out.astype('float32'), axis=-1)
    return tf.convert_to_tensor(out,
                                dtype=tf.float32,
                                name='custom_loss')
Can someone help me?
Many thanks!
You can use class_weight in the fit method to penalize classes without creating functions:
weights = {
    0: 2,
    1: 1,
    2: 1,
    3: 1,
    ...
}

model.compile(optimizer=chooseOne, loss='categorical_crossentropy')
model.fit(......., class_weight=weights)
This will make the first class be twice as important as the others.
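If you do want a fully symbolic custom loss instead, a minimal sketch using Keras backend ops (assuming prop_database is a plain Python float; the value here is hypothetical) could look like this:

import keras.backend as K

prop_database = 2.0  # hypothetical weighting factor for the first class

def custom_categorical_crossentropy(y_true, y_pred):
    y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon())
    # element-wise binary crossentropy, matching the numpy loops above
    bce = -(y_true * K.log(y_pred) + (1.0 - y_true) * K.log(1.0 - y_pred))
    # scale only column 0, leave the remaining columns at weight 1
    weights = K.concatenate([prop_database * K.ones_like(y_true[:, :1]),
                             K.ones_like(y_true[:, 1:])], axis=-1)
    return K.mean(weights * bce, axis=-1)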

Output error rate per label / confusion matrix

I train an image classifier using Keras up to around 98% test accuracy. Now I know that the overall accuracy is 98%, but I want to know the accuracy/error per distinct class/label.
Does Keras have a built-in function for that, or would I have to test this myself per class/label?
Update: Thanks @gionni. I didn't know the actual term was "confusion matrix", but that's what I am actually looking for. That being said, is there a function to generate one? I have to use Keras 1.2.2, by the way.
I had a similar issue, so I can share my code with you. The following function computes the accuracy of a single class:
from keras import backend as K

def single_class_accuracy(interesting_class_id):
    def fn(y_true, y_pred):
        class_id_preds = K.argmax(y_pred, axis=-1)
        # Replace class_id_preds with class_id_true for recall here
        positive_mask = K.cast(K.equal(class_id_preds, interesting_class_id), 'int32')
        true_mask = K.cast(K.equal(y_true, interesting_class_id), 'int32')
        acc_mask = K.cast(K.equal(positive_mask, true_mask), 'float32')
        class_acc = K.mean(acc_mask)
        return class_acc
    return fn
Now, if you want the accuracy for class 0, you can add it to the metrics while compiling the model:
model.compile(..., metrics=[..., single_class_accuracy(0)])
If you want the accuracy of all classes, you could type:
model.compile(...,
              metrics=[...] + [single_class_accuracy(i) for i in range(nb_of_classes)])
There may be better options, but you can use this:
import numpy as np

# gather each distinct true label
distinct, counts = np.unique(trueLabels, axis=0, return_counts=True)

for dist, count in zip(distinct, counts):
    selector = (trueLabels == dist).all(axis=-1)
    selectedX = testData[selector]
    selectedY = trueLabels[selector]
    print('\n\nEvaluating for ' + str(count) + ' occurrences of class ' + str(dist))
    print(model.evaluate(selectedX, selectedY, verbose=0))
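Regarding the confusion matrix itself: as far as I know, Keras (including 1.2.2) has no built-in for it, but scikit-learn's confusion_matrix works directly on predicted labels. A minimal sketch, assuming model, x_test, and one-hot y_test come from your own setup:

import numpy as np
from sklearn.metrics import confusion_matrix

y_pred = np.argmax(model.predict(x_test), axis=-1)  # predicted class ids
y_true = np.argmax(y_test, axis=-1)                 # true class ids from one-hot labels
print(confusion_matrix(y_true, y_pred))             # rows: true class, columns: predicted class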

TensorFlow loss function zeroes out after first epoch

I am trying to implement a discriminative loss function for instance segmentation of images based on this paper: https://arxiv.org/pdf/1708.02551.pdf (This link is just for the readers' reference; I don't expect anyone to read it to help me out!)
My problem: Once I move from a simple loss function to a more complicated one (like you see in the attached code snippet), the loss function zeroes out after the first epoch. I checked the weights, and almost all of them seem to hover closely around -300. They are not exactly identical, but very close to each other (differing only in the decimal places).
Relevant code that implements the discriminative loss function:
def regDLF(y_true, y_pred):
    global alpha
    global beta
    global gamma
    global delta_v
    global delta_d
    global image_height
    global image_width
    global nDim

    y_true = tf.reshape(y_true, [image_height * image_width])
    X = tf.reshape(y_pred, [image_height * image_width, nDim])
    uniqueLabels, uniqueInd = tf.unique(y_true)
    numUnique = tf.size(uniqueLabels)

    Sigma = tf.unsorted_segment_sum(X, uniqueInd, numUnique)
    ones_Sigma = tf.ones((tf.shape(X)[0], 1))
    ones_Sigma = tf.unsorted_segment_sum(ones_Sigma, uniqueInd, numUnique)
    mu = tf.divide(Sigma, ones_Sigma)

    Lreg = tf.reduce_mean(tf.norm(mu, axis=1))

    T = tf.norm(tf.subtract(tf.gather(mu, uniqueInd), X), axis=1)
    T = tf.divide(T, Lreg)
    T = tf.subtract(T, delta_v)
    T = tf.clip_by_value(T, 0, T)
    T = tf.square(T)

    ones_Sigma = tf.ones_like(uniqueInd, dtype=tf.float32)
    ones_Sigma = tf.unsorted_segment_sum(ones_Sigma, uniqueInd, numUnique)
    clusterSigma = tf.unsorted_segment_sum(T, uniqueInd, numUnique)
    clusterSigma = tf.divide(clusterSigma, ones_Sigma)
    Lvar = tf.reduce_mean(clusterSigma, axis=0)

    mu_interleaved_rep = tf.tile(mu, [numUnique, 1])
    mu_band_rep = tf.tile(mu, [1, numUnique])
    mu_band_rep = tf.reshape(mu_band_rep, (numUnique * numUnique, nDim))
    mu_diff = tf.subtract(mu_band_rep, mu_interleaved_rep)
    mu_diff = tf.norm(mu_diff, axis=1)
    mu_diff = tf.divide(mu_diff, Lreg)
    mu_diff = tf.subtract(2 * delta_d, mu_diff)
    mu_diff = tf.clip_by_value(mu_diff, 0, mu_diff)
    mu_diff = tf.square(mu_diff)

    numUniqueF = tf.cast(numUnique, tf.float32)
    Ldist = tf.reduce_mean(mu_diff)

    L = alpha * Lvar + beta * Ldist + gamma * Lreg
    return L
Question: I know it's hard to understand what the code does without reading the paper, but I have a couple of questions:
Is there something glaringly wrong with the loss function defined above?
Does anyone have a general idea as to why the loss function could zero out after the first epoch?
Thank you very much for your time and help!
I think your problem comes from tf.norm, which is not safe: a zero vector anywhere in the input produces NaNs in its gradients.
It would be better to replace tf.norm with this custom function:
def tf_norm(inputs, axis=1, epsilon=1e-7, name='safe_norm'):
    squared_norm = tf.reduce_sum(tf.square(inputs), axis=axis, keep_dims=True)
    safe_norm = tf.sqrt(squared_norm + epsilon)
    return tf.identity(safe_norm, name=name)
In your Ldist calculation you use tf.tile and tf.reshape to find the distance between different cluster means in the following manner (suppose we have three clusters):
mu_1 - mu_1
mu_2 - mu_1
mu_3 - mu_1
mu_1 - mu_2
mu_2 - mu_2
mu_3 - mu_2
mu_1 - mu_3
mu_2 - mu_3
mu_3 - mu_3
The problem is that your difference vector contains zero vectors, and you perform a norm operation on it afterwards. tf.norm becomes numerically unstable here, since its gradient divides by the length of the vector; the result is that the gradient comes out as either zero or inf. See this GitHub issue.
The solution would be to remove those zero vectors, in the fashion of this Stack Overflow question.
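A sketch of that masking, reusing the names from regDLF above and assuming mu_diff still has shape (numUnique*numUnique, nDim) at that point:

# the i == i difference vectors (the diagonal pairs) are exactly zero: drop them
off_diag = tf.equal(tf.reshape(tf.eye(numUnique), [-1]), 0)
mu_diff = tf.boolean_mask(mu_diff, off_diag)
mu_diff = tf.norm(mu_diff, axis=1)  # or the safe tf_norm above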

How can I implement a Binarizer Layer in TensorFlow?

I'm trying to implement the binarizer on page 4 of this paper. It's not too difficult of a function. It's simply this: map each value x in [-1, 1] to ±1, where 1 is drawn with probability (1 + x)/2.
No gradients are to be backpropagated through this function. I'm trying to do it in TensorFlow. There are two ways to go about it:
Implementing it in C++ using TensorFlow. However, the instructions are quite unclear to me; it would be great if someone could walk me through them. One thing that was unclear to me: why is the gradient for ZeroOutOp implemented in Python?
I decided to go with the pure Python approach.
Here's the code:
import tensorflow as tf
import numpy as np

def py_func(func, inp, out_type, grad):
    grad_name = "BinarizerGradients_Schin"
    tf.RegisterGradient(grad_name)(grad)
    g = tf.get_default_graph()
    with g.gradient_override_map({"PyFunc": grad_name}):
        return tf.py_func(func, inp, out_type)

'''
This is a hackish implementation to speed things up. Doesn't directly follow the formula.
'''
def _binarizer(x):
    probability_matrix = (x + 1) / float(2)
    probability_matrix = np.matrix.round(probability_matrix, decimals=0)
    np.putmask(probability_matrix, probability_matrix == 0.0, -1.0)
    return probability_matrix

def binarizer(x):
    return py_func(_binarizer, [x], [tf.float32], _BinarizerNoOp)

def _BinarizerNoOp(op, grad):
    return grad
The problem happens here. Inputs are 32x32x3 CIFAR images, and they get reduced to 4x4x64 in the last layer, so my last layer has a shape of (?, 4, 4, 64), where ? is the batch size. After putting it through the binarizer by calling:
binarized = binarizer.binarizer(h_pool3)
h_deconv1 = tf.nn.conv2d_transpose(h_pool3, W_deconv1,
                                   output_shape=[batch_size, img_height/4, img_width/4, 64],
                                   strides=[1, 2, 2, 1], padding='SAME') + b_deconv1
The following error occurs:
ValueError: Shapes (4, 4, 64) and (?, 4, 4, 64) are not compatible
I can kind of guess why this happens: the ? represents the batch size, and after putting the last layer through the binarizer, the ? dimension seems to disappear.
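One workaround I know of (a sketch): tf.py_func outputs lose their static shape information, so you can re-attach it afterwards:

binarized = binarizer(h_pool3)[0]  # tf.py_func returns a list when out_type is a list
binarized.set_shape(h_pool3.get_shape())  # restore the static (?, 4, 4, 64) shape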
I think you can proceed as described in this answer. Applied to our problem:
def binarizer(input):
    prob = tf.truediv(tf.add(1.0, input), 2.0)
    bernoulli = tf.contrib.distributions.Bernoulli(p=prob, dtype=tf.float32)
    return 2 * bernoulli.sample() - 1
Then, where you set up your network:
W_h1, bias_h1 = ...
h1_before_bin = tf.nn.tanh(tf.matmul(x, W_h1) + bias_h1)
# The interesting bits:
t = tf.identity(h1_before_bin)
h1 = t + tf.stop_gradient(binarizer(h1_before_bin) - t)
However, I'm not sure how to verify that this works...
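One possible check (a sketch; feed x as in your own setup): with the straight-through construction, the gradient of h1 with respect to h1_before_bin should be exactly 1 everywhere, while h1 itself evaluates to binarized values:

g = tf.gradients(tf.reduce_sum(h1), h1_before_bin)[0]
# sess.run(g) should be all ones (gradients pass straight through the binarizer),
# sess.run(h1) should contain only -1.0 and 1.0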