I want to implement a custom loss function. I read the following links:
Implementing custom loss function in keras with different sizes for y_true and y_pred
What is y_true and y_pred when creating a custom metric in Keras?
But when I return the following in the metric function:
K.sum(y_true)
it returns a float value.
My label values are all integers, so why does their sum come out as a float value?
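For context: Keras converts the targets it feeds to losses and metrics to the backend float type (floatx, float32 by default), so the sum of integer labels comes back as a float tensor. A minimal sketch of casting it back, if an integer-valued result is wanted (the metric name is illustrative):

from keras import backend as K

def label_sum(y_true, y_pred):
    # y_true arrives as a floatx tensor even when the original labels
    # are integers; round and cast to get an integer-valued sum.
    # (Keras still reports logged metric values as floats.)
    return K.cast(K.round(K.sum(y_true)), 'int32')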
I am trying to write a custom loss function to use in a Tensorflow 2 model.
def loss(y_true, y_pred):
1. I wish to convert y_true and y_pred to numpy arrays (these are images).
2. After that I wish to carry out some operations (such as binarization of the images, some pixel-wise AND or OR operations, etc.).
3. Finally, get a single floating-point number as the loss, feed it to the model, and minimize it.
Can anyone please suggest how to do it? I have already tried many of the options given on the internet, but I am not able to convert y_true and y_pred into numpy arrays.
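One common way to sketch this kind of loss in TensorFlow 2 is to express the image operations directly with TF ops instead of converting to numpy, since gradients cannot flow through plain numpy code. The threshold and the particular pixel-wise combination below are placeholders, not taken from the question, and a hard threshold itself has no useful gradient, so a soft variant would be needed for actual training:

import tensorflow as tf

def image_overlap_loss(y_true, y_pred, threshold=0.5):
    # Binarize both images with a fixed threshold (placeholder value).
    true_bin = tf.cast(y_true > threshold, tf.float32)
    pred_bin = tf.cast(y_pred > threshold, tf.float32)

    # Pixel-wise AND / OR via multiplication and element-wise maximum.
    overlap = true_bin * pred_bin              # AND
    union = tf.maximum(true_bin, pred_bin)     # OR

    # Single scalar: 1 - intersection-over-union over the whole batch.
    iou = tf.reduce_sum(overlap) / (tf.reduce_sum(union) + 1e-7)
    return 1.0 - iou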
I am trying to create a custom loss function in Keras. I want to compute the loss based on the input and the predicted output of the neural network. I created a custom loss function that takes y_true, y_pred, and t as arguments; t is a variable that I would like to use in the loss calculation. The loss function has two parts (please refer to the attached image).
I can create the first part of the loss function (which is the mean squared error). I would like to slice the y_pred tensor and assign it to three tensors (y1_pred, y2_pred, and y3_pred). Is there a way to do that directly in Keras, or do I have to use TensorFlow for that? How can I calculate the gradient in Keras? Do I need to create a session for computing loss2?
def customloss(y_true, y_pred, t):
    loss1 = K.mean(K.square(y_pred - y_true), axis=-1)
    loss2 = tf.gradients(y1_pred, t) - y1_pred*y3_pred
    return loss1 + loss2
Thank you.
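A minimal sketch of one common pattern for this, assuming a graph-mode / TF1-style setup where K.gradients works: wrap the loss in a closure so that t is captured, and slice y_pred along the last axis. The slice boundaries, the shape of t, and the exact form of loss2 are assumptions, since the image with the formula is not shown:

from keras import backend as K

def make_custom_loss(t):
    # t is assumed to be a tensor that y_pred depends on (e.g. the
    # model's input), with the same (batch, 1) shape as y1_pred so
    # the residual below is element-wise.
    def customloss(y_true, y_pred):
        # Slice the prediction tensor into three parts along the last
        # axis (single-column slices here, purely for illustration).
        y1_pred = y_pred[:, 0:1]
        y2_pred = y_pred[:, 1:2]  # unused in this sketch
        y3_pred = y_pred[:, 2:3]

        # Part 1: ordinary mean squared error.
        loss1 = K.mean(K.square(y_pred - y_true), axis=-1)

        # Part 2: K.gradients returns a list, so take its first element.
        dy1_dt = K.gradients(y1_pred, t)[0]
        loss2 = K.mean(K.square(dy1_dt - y1_pred * y3_pred), axis=-1)

        return loss1 + loss2
    return customloss

Because the inner function has the (y_true, y_pred) signature Keras expects, it can be passed with something like model.compile(loss=make_custom_loss(model.input), ...), so no extra session handling is needed for loss2.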
BACKGROUND:
I want to retrieve the equivalent of len(x) and x.shape[0] for y_pred and y_true inside a custom Keras metric, without using anything but the Keras backend.
Consider a minimal Keras metric example:
from keras import backend as K
def binary_accuracy(y_true, y_pred):
    return K.mean(K.equal(y_true, K.round(y_pred)), axis=-1)
Here y_pred and y_true are tensors that represent numpy arrays of a certain shape.
QUESTION:
How do I get the length of the underlying array inside the Keras metric function, so that the resulting code has the form:
def binary_accuracy(y_true, y_pred):
    # some Keras backend code
    return K.mean(K.equal(y_true, K.round(y_pred)), axis=-1)
NOTE: the code has to be Keras backend code, so that it works on any Keras backend.
I've already tried K.ndim(y_pred), which returns 2 even though the length is actually 45, and K.int_shape(y_pred), which returns None.
You need to remember that in some cases, the shape of a given symbolic tensor (e.g. y_true and y_pred in your case) cannot be determined until you feed values to specific placeholders that this tensor relies on.
Keeping that in mind, you have two options:
Use K.int_shape(x) to get a tuple of ints and Nones that represent the shape of the input tensor x. In this case, the dimensions with undetermined lengths will be None.
This is useful in cases where your non-TensorFlow code does not depend on undetermined dimensions, e.g. you cannot do the following:
if K.shape(x)[0] == 5:
    ...
else:
    ...
Use K.shape(x) to get a symbolic tensor that represents the shape of the tensor x.
This is useful in cases where you want to use the shape of a tensor to change your TF graph, e.g.:
t = tf.ones(shape=K.shape(x)[0])
You can access the shape of the tensor through K.int_shape(x).
By taking the first value of the result, you will get the length of the underlying array: K.int_shape(x)[0]
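As a small caveat to both answers: for the batch axis, K.int_shape(y_pred)[0] is typically None, because the batch size is unknown when the graph is built, whereas K.shape(y_pred)[0] gives a symbolic value that resolves at run time. A minimal sketch of using the dynamic length inside a metric (the metric name is made up):

from keras import backend as K

def binary_accuracy_mean(y_true, y_pred):
    # Dynamic length of the first axis (the batch size): a symbolic
    # scalar, available even when K.int_shape reports None for it.
    n = K.shape(y_pred)[0]

    # Per-sample accuracy, summed and divided by the dynamic length.
    matches = K.cast(K.equal(y_true, K.round(y_pred)), K.floatx())
    return K.sum(K.mean(matches, axis=-1)) / K.cast(n, K.floatx())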
I am implementing a custom loss function in Keras using TensorFlow as the backend. The loss function gets a tensor for y_true and y_pred. My output is a 10-dimensional vector. I need to return the loss as the difference between y_true and y_pred, but first I need to find the indices of y_true and y_pred in order to compute that difference.
Let's say y_true=[0,0,0,0,1,0,0,0,0,0] and y_pred=[0,0,1,0,0,0,0,0,0,0]. After decoding, the values will be y_true=4 and y_pred=2, so the loss is 2.
How can I decode the tensor and calculate the loss?
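One way this decoding is commonly sketched is with K.argmax; note that argmax has no useful gradient, so a loss written this way will not train a network by backpropagation, although it works fine as an evaluation metric (the function name is illustrative):

from keras import backend as K

def index_difference_loss(y_true, y_pred):
    # Decode the one-hot / probability vectors to class indices.
    true_idx = K.argmax(y_true, axis=-1)
    pred_idx = K.argmax(y_pred, axis=-1)
    # Absolute difference between the decoded indices, cast back to
    # the float type Keras expects from a loss or metric.
    return K.cast(K.abs(true_idx - pred_idx), K.floatx())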
I am trying to write a custom Keras loss function in which I process the tensors in sub-vector chunks. For example, if an output tensor represents a concatenation of quaternion coefficients (i.e. w,x,y,z,w,x,y,z,...), I might wish to normalize each quaternion before calculating the mean squared error, in a loss function like:
def norm_quat_mse(y_true, y_pred):
    diff = y_pred - y_true
    dist = 0
    for i in range(0, 16, 4):
        dist += K.sum(K.square(diff[i:i+4] / K.sqrt(K.sum(K.square(diff[i:i+4])))))
    return dist/4
While Keras will accept this function without error and use it in training, it reports a different loss value than when I apply the function independently to the outputs of model.predict(), so I suspect it is not working properly. None of the built-in Keras loss functions use this per-chunk processing approach; is it possible to do this within Keras' auto-differentiation framework?
Try:
def norm_quat_mse(y_true, y_pred):
    diff = y_pred - y_true
    dist = 0
    # Slice along the last axis; the first axis is the batch dimension,
    # so every chunk must keep the leading ':' to cover all samples.
    for i in range(0, 16, 4):
        dist += K.sum(K.square(diff[:, i:i+4] / K.sqrt(K.sum(K.square(diff[:, i:i+4])))))
    return dist/4
You need to know that the shape of y_true and y_pred is (batch_size, output_size), so you need to skip the first (batch) dimension during the computations.
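For a quick sanity check of the corrected loss outside of training (the array shapes are illustrative: two samples of four concatenated quaternions each):

import numpy as np
from keras import backend as K

y_true = K.constant(np.random.rand(2, 16))
y_pred = K.constant(np.random.rand(2, 16))

# Evaluating the loss directly should now be consistent with the values
# reported during training, since the batch axis is handled correctly.
print(K.eval(norm_quat_mse(y_true, y_pred)))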