My prediction is y_hat = [0.57, 0.05, 0.14, 0.10, 0.14] and my target is
target = [1, 0, 0, 0, 0].
I need to calculate the cross-entropy loss with both NumPy and the PyTorch loss function.
Using NumPy, my formula is -np.sum(target * np.log(y_hat)), and I get 0.5621189181535413.
However, using PyTorch:
import torch
import torch.nn as nn

loss = nn.CrossEntropyLoss()
output = torch.FloatTensor([0.57, 0.05, 0.14, 0.10, 0.14])
label = torch.FloatTensor([1, 0, 0, 0, 0])
loss_value = loss(output, label)
print(loss_value)
This gives tensor(1.2586), which is different.
PyTorch's nn.CrossEntropyLoss expects raw, unnormalized scores (logits) and applies the softmax internally, so you need to apply the softmax function to your y_hat vector before computing the cross-entropy loss in NumPy. For example, you can use scipy.special.softmax().
>>> from scipy.special import softmax
>>> import numpy as np
>>> y_hat = [0.57, 0.05, 0.14, 0.10, 0.14]
>>> target =[1, 0, 0, 0, 0]
>>> y_hat = softmax(y_hat)
>>> -np.sum(target * np.log(y_hat))
1.2586146726011722
This agrees with the result from PyTorch.
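For reference, you can also reproduce this number inside PyTorch itself by applying the log-softmax explicitly and taking the negative log-probability of the target class; a minimal sketch (nn.CrossEntropyLoss is equivalent to log-softmax followed by negative log-likelihood):
import torch
import torch.nn.functional as F

logits = torch.tensor([0.57, 0.05, 0.14, 0.10, 0.14])
target_index = torch.tensor(0)  # the "hot" class in [1, 0, 0, 0, 0]

# log-softmax, then take the negative log-probability of the target class
log_probs = F.log_softmax(logits, dim=0)
loss = -log_probs[target_index]
print(loss)  # tensor(1.2586)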
I am trying to write a custom Keras loss function, but I am having issues implementing and debugging my code. My prediction vector is:
y_pred = [p_conf, p_class_1, p_class_2]
where p_conf is the confidence that an event of interest was detected.
y_true examples:
[0, 0, 0] = no event of interest
[1, 1, 0] = first class event
[1, 0, 1] = second class event
I get relatively good results using multi-label classification (i.e. a sigmoid activation in my final layer and the binary_crossentropy loss function), but I want to experiment and improve my results with a custom loss function that calculates:
binary_crossentropy loss when y_true = [0, ..., ...]
categorical_crossentropy loss when y_true = [1, ..., ...]
This is a simplified loss function used by the YOLO object detection algorithm. I tried adapting an existing Keras / TensorFlow implementation of the YOLO loss function but have not been successful.
Here is my current working code. It runs, but generates unstable results, i.e. the loss and accuracy decrease over time. Any assistance would be greatly appreciated.
import tensorflow as tf
from keras import losses

def custom_loss(y_true, y_pred):
    # Initialisation
    mask_shape = tf.shape(y_true)[:0]
    conf_mask = tf.zeros(mask_shape)
    class_mask = tf.zeros(mask_shape)

    # Labels
    true_conf = y_true[..., 0]
    true_class = tf.argmax(y_true[..., 1:], -1)

    # Predictions
    pred_conf = tf.sigmoid(y_pred[..., 0])
    pred_class = y_pred[..., 1:]

    # Masks for selecting rows based on confidence = {0, 1}
    conf_mask = conf_mask + (1 - y_true[..., 0])
    class_mask = y_true[..., 0]
    nb_class = tf.reduce_sum(tf.to_float(class_mask > 0.0))

    # Calculate loss
    loss_conf = losses.binary_crossentropy(true_conf, pred_conf)
    loss_class = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=true_class, logits=pred_class)
    loss_class = tf.reduce_sum(loss_class * class_mask) / nb_class

    loss = loss_conf + loss_class
    return loss
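For what it's worth, here is a minimal sketch of the conditional-loss idea described in the question, written in plain TensorFlow. The helper name masked_custom_loss is mine, it assumes y_pred carries raw logits, and it only illustrates the masking, not the full YOLO loss:
import tensorflow as tf

def masked_custom_loss(y_true, y_pred):
    # Confidence bit and class bits, assuming y_pred holds raw logits
    true_conf = y_true[..., 0]
    true_class = tf.argmax(y_true[..., 1:], axis=-1)
    pred_conf_logit = y_pred[..., 0]
    pred_class_logits = y_pred[..., 1:]

    # Binary cross-entropy on the confidence bit for every row
    loss_conf = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=true_conf, logits=pred_conf_logit))

    # Softmax cross-entropy on the class bits, only where true_conf == 1
    class_mask = true_conf
    loss_class = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=true_class, logits=pred_class_logits)
    loss_class = tf.reduce_sum(loss_class * class_mask) / tf.maximum(tf.reduce_sum(class_mask), 1.0)

    return loss_conf + loss_class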
There is a function in NumPy that inserts given values into an array:
https://docs.scipy.org/doc/numpy/reference/generated/numpy.insert.html
Is there something similar in tensorflow?
Alternatively, is there a function in tensorflow that can do tensor upsampling using zeros in between values of a tensor?
tf.nn.conv2d_transpose can do this upsampling (with careful design of output_shape and strides). Here is some sample code:
import tensorflow as tf
import numpy as np
input = tf.convert_to_tensor(np.ones((1, 20, 20, 1)))
input = tf.cast(input, tf.float32)
b = np.zeros((3, 3, 1, 1))
b[1, 1, 0, 0] = 1
weight = tf.convert_to_tensor(b)
weight = tf.cast(weight, tf.float32)
output = tf.nn.conv2d_transpose(input, weight, output_shape=(1, 40, 40, 1), strides=[1, 2, 2, 1])
sess = tf.Session()
print(sess.run(output[0, :, :, 0]))
Checking its API documentation will help you further.
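As a side note, if you only need to interleave zeros into a 1-D tensor (closer to what numpy.insert does), you can also stack with a zero tensor and flatten; a small sketch, not part of the answer above:
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0, 4.0])
zeros = tf.zeros_like(x)
# Pair each value with a zero, then flatten: [1, 0, 2, 0, 3, 0, 4, 0]
upsampled = tf.reshape(tf.stack([x, zeros], axis=1), [-1])

sess = tf.Session()
print(sess.run(upsampled))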
I'm trying to compute a softmax over selected indices, using an infinity mask to silence the unwanted entries. However, the gradient of those unwanted entries becomes nan instead of 0.
The reason I didn't use a boolean mask is that the mask indices are different for each example in my batch, so they can't be expressed in a nice matrix form. If there's a workaround here, I'll be more than happy to adopt it.
The code I tested the infinity mask is
import numpy as np
import tensorflow as tf
a = tf.placeholder(tf.float32, [5])
inf_mask = tf.placeholder(tf.float32, [5])
b = tf.multiply(a, inf_mask)
sf = tf.nn.softmax(b)
loss = (sf[2] - 0)
grad = tf.gradients(loss, a)
sess = tf.Session()
a_np = np.ones([5])
np_mask = np.ones([5]) * 4
np_mask[1] = -np.inf
print(sess.run([sf, grad], feed_dict={
    a: a_np,
    inf_mask: np_mask
}))
sess.close()
The output is
[array([ 0.25, 0. , 0.25, 0.25, 0.25], dtype=float32), [array([-0.25, nan, 0.75, -0.25, -0.25], dtype=float32)]]
The mask is working, but the gradient contains a nan, which I think should have been 0.
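One common workaround (a sketch assuming TF 1.x as in the question, and a keep-mask of ones and zeros rather than the scaling mask above) is to add a large negative finite constant to the masked logits instead of multiplying by -inf, so no 0 * inf term shows up in the gradient:
import numpy as np
import tensorflow as tf

a = tf.placeholder(tf.float32, [5])
keep_mask = tf.placeholder(tf.float32, [5])  # 1.0 = keep, 0.0 = mask out

# Additive mask: masked entries get a very negative logit, kept entries are unchanged
b = a + (1.0 - keep_mask) * -1e9
sf = tf.nn.softmax(b)
grad = tf.gradients(sf[2], a)

sess = tf.Session()
print(sess.run([sf, grad], feed_dict={
    a: np.ones([5], dtype=np.float32),
    keep_mask: np.array([1, 0, 1, 1, 1], dtype=np.float32),
}))
sess.close()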
I need to extract the high frequencies from an image in TensorFlow.
Basically, I need the functionality of ndimage.gaussian_filter(img, sigma).
The following code works as expected:
import tensorflow as tf
import numpy as np
import cv2
from matplotlib.pyplot import imshow

img = cv2.imread(imgpath, cv2.IMREAD_GRAYSCALE)
img = cv2.normalize(img.astype('float32'), None, 0.0, 1.0, cv2.NORM_MINMAX)

# Gaussian Filter
K = np.array([[0.003765, 0.015019, 0.023792, 0.015019, 0.003765],
              [0.015019, 0.059912, 0.094907, 0.059912, 0.015019],
              [0.023792, 0.094907, 0.150342, 0.094907, 0.023792],
              [0.015019, 0.059912, 0.094907, 0.059912, 0.015019],
              [0.003765, 0.015019, 0.023792, 0.015019, 0.003765]], dtype='float32')

# as tensorflow constants with correct shapes
x = tf.constant(img.reshape(1, img.shape[0], img.shape[1], 1))
w = tf.constant(K.reshape(K.shape[0], K.shape[1], 1, 1))

with tf.Session() as sess:
    # get low/high pass ops
    lowpass = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
    highpass = x - lowpass

    # get high pass image
    l = sess.run(highpass)
    l = l.reshape(img.shape[0], img.shape[1])
    imshow(l)
However, I don't know how to get the Gaussian weights from within TensorFlow for a given sigma.
Just refer to the tflearn data augmentation module (http://tflearn.org/data_augmentation/). There you can find add_random_blur(sigma_max=5.0), which randomly blurs an image by applying a Gaussian filter with a random sigma in (0., sigma_max).
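If you want to stay with the conv2d approach from the question, here is a small sketch of building the 5x5 kernel from a given sigma in NumPy (the function name gaussian_kernel is mine); the result can be fed in as K above:
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # Normalized 2-D Gaussian kernel of shape (size, size)
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return (kernel / kernel.sum()).astype('float32')

K = gaussian_kernel(5, sigma=1.0)  # drop-in replacement for the hard-coded K above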
I need to save some values to specific places in a tensorflow array:
import tensorflow as tf
import numpy as np

AVG = tf.Variable([0, 0, 0, 0, 0], name='data')
for i in range(5):
    data = np.random.randint(1000, size=10000)
    AVG += np.average(data)
I need to write each iteration's average into a different position of the AVG variable. Is this doable?
You can use tf.scatter_add. Here is a complete working program:
import tensorflow as tf
import numpy as np

AVG = tf.Variable([0, 0, 0, 0, 0], name='data')
for i in range(5):
    data = np.random.randint(1000, size=10000)
    AVG = tf.scatter_add(AVG, [i], [np.average(data).astype('int')])

sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
print(AVG.eval())