I have read this question, but it didn't help me:
Keras Loss Function with Additional Dynamic Parameter
What can I do if I have two labels, i.e. y_pred1/y_true1 and y_pred2/y_true2? Can my custom loss function have the signature def huber_loss_mean_weighted(y_true1, y_pred1, y_true2, y_pred2, is_weights)?
My current compile/fit calls are as follows; how should I modify the code?
model.compile(optimizer='adagrad',
              loss={'loss1': BinaryCrossentropy(), 'loss2': BinaryCrossentropy(), 'final': custom_error},
              loss_weights=[1, 1, 1])
model.fit(train_model_input,
          {'loss1': label1, 'loss2': label2, 'final': label_final})
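Keras loss functions only receive the two arguments (y_true, y_pred), so there is no built-in signature with two label/prediction pairs plus weights. One common workaround is to pack everything into a single y_true tensor and split it apart inside the loss. A minimal sketch, assuming both labels and the weights can be stacked along a new last axis (the packing convention and the Huber formulation here are illustrative, not the only option):

```python
import tensorflow as tf

def huber(error, delta=1.0):
    """Elementwise Huber value: quadratic near zero, linear in the tails."""
    abs_err = tf.abs(error)
    quad = tf.minimum(abs_err, delta)
    return 0.5 * tf.square(quad) + delta * (abs_err - quad)

def huber_loss_mean_weighted(y_true, y_pred):
    # Assumed packing convention (invented for this sketch):
    # y_true[..., 0] -> first label, y_true[..., 1] -> second label,
    # y_true[..., 2] -> per-sample weights; y_pred holds both predictions.
    y_true1, y_true2, w = y_true[..., 0], y_true[..., 1], y_true[..., 2]
    y_pred1, y_pred2 = y_pred[..., 0], y_pred[..., 1]
    per_sample = huber(y_true1 - y_pred1) + huber(y_true2 - y_pred2)
    return tf.reduce_mean(w * per_sample)
```

At fit time the packed target would then be built with something like np.stack([label1, label2, is_weights], axis=-1) and passed as the single label array.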
Here I need to pass y_true as a tuple of (target, mask), and then inside the custom loss I want to recover it via y_true, y_mask = y_true. That means the mask has to be passed in alongside the target. How can I do that? I am creating the data with the flow_from_dataframe approach in Keras.
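A tuple can't be fed through fit directly, but if target and mask share a shape, one approach (a sketch under that assumption) is to stack them along a new last axis when building the labels and slice them apart inside the loss:

```python
import numpy as np
import tensorflow as tf

def masked_loss(y_true, y_pred):
    # Assumed convention: y_true was built as np.stack([target, mask], axis=-1),
    # so it carries one extra trailing axis compared with y_pred.
    target = y_true[..., 0]
    mask = y_true[..., 1]
    # Squared error, zeroed wherever the mask is 0
    return tf.reduce_mean(mask * tf.square(target - y_pred))

# Building the packed labels before calling fit (shapes are illustrative):
target = np.random.rand(4, 8).astype('float32')
mask = (np.random.rand(4, 8) > 0.5).astype('float32')
y_packed = np.stack([target, mask], axis=-1)
# model.fit(x, y_packed, ...)
```

With flow_from_dataframe the same stacking would have to happen in whatever function builds each batch's label array.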
I am having trouble with a Keras custom loss function. I want to be able to access the ground truth as a numpy array.
Because the loss function is compiled into the graph, I think I am not in eager execution, which means I can't access the values using the backend.get_value() function. I also tried different methods, but it always comes back to the fact that this 'Tensor' object's value isn't available.
Do I need to create a session inside the custom loss function?
I am using TensorFlow 2.2, which is up to date.
def custom_loss(y_true, y_pred):
    # y_true is a 4D array holding the label (channel 0) and an
    # input-dependent multiplier (channel 1)
    truth = backend.get_value(y_true)
    loss = backend.square((y_pred - truth[:, :, 0]) * truth[:, :, 1])
    loss = backend.mean(loss, axis=-1)
    return loss

model.compile(loss=custom_loss, optimizer='Adam')
model.fit(X, np.stack([labels, X[:, 0]], axis=3), batch_size=16)
I want to be able to access truth. It has two components (a label, and a multiplier that is different for each item). I saw a solution that is input dependent, but I am not sure how to access the values: Custom loss function in Keras based on the input data
I think you can do this by enabling run_eagerly=True in model.compile as shown below.
model.compile(loss=custom_loss(weight_building, weight_space),
              optimizer=keras.optimizers.Adam(),
              metrics=['accuracy'],
              run_eagerly=True)
I think you also need to update custom_loss as shown below.
def custom_loss(weight_building, weight_space):
    def loss(y_true, y_pred):
        # With run_eagerly=True this call succeeds and returns a numpy array
        truth = backend.get_value(y_true)
        error = backend.square(y_pred - y_true)
        mse_error = backend.mean(error, axis=-1)
        return mse_error
    return loss
I am demonstrating the idea with simple MNIST data. Please take a look at the code here.
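For a self-contained illustration of the same idea, here is a minimal runnable sketch on toy data (the model, shapes, and label layout are invented for illustration): with run_eagerly=True the loss receives eager tensors, so .numpy() and backend.get_value work inside it.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

def custom_loss(y_true, y_pred):
    # With run_eagerly=True, y_true is an EagerTensor: .numpy() works here
    truth = y_true.numpy()
    # Assumed layout: channel 0 is the label, channel 1 an input-dependent multiplier
    loss = tf.square((y_pred - truth[..., 0]) * truth[..., 1])
    return tf.reduce_mean(loss, axis=-1)

# Toy model and data, invented purely for illustration
model = keras.Sequential([keras.layers.Dense(4, input_shape=(4,))])
model.compile(loss=custom_loss, optimizer='adam', run_eagerly=True)
x = np.random.rand(8, 4).astype('float32')
y = np.stack([np.random.rand(8, 4), np.ones((8, 4))], axis=-1).astype('float32')
model.fit(x, y, epochs=1, verbose=0)
```

Note that run_eagerly=True disables graph compilation for the train step, which is noticeably slower; it is best treated as a debugging aid.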
I am trying to create a custom loss function in Keras. I want to compute the loss based on both the input and the predicted output of the neural network. I created a custom loss function that takes y_true, y_pred, and t as arguments, where t is the variable I would like to use in the loss calculation. The loss has two parts (please refer to the attached image).
I can create the first part (the mean squared error). I would also like to slice the y_pred tensor and assign the slices to three tensors (y1_pred, y2_pred, and y3_pred). Is there a way to do that directly in Keras, or do I have to use TensorFlow for that? How can I calculate the gradient in Keras? Do I need to create a session to compute loss2?
def customloss(y_true, y_pred, t):
    loss1 = K.mean(K.square(y_pred - y_true), axis=-1)
    loss2 = tf.gradients(y1_pred, t) - y1_pred * y3_pred
    return loss1 + loss2
Thank you.
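In TF 2 no session is needed for either step: y_pred can be sliced with ordinary tensor indexing, and the gradient of one output with respect to the input t can be taken with tf.GradientTape. A sketch under the assumption that the model outputs three columns and t is the (watched) input tensor; split_predictions and physics_term are hypothetical names:

```python
import tensorflow as tf

def split_predictions(y_pred):
    # Slice a (batch, 3) prediction into three column tensors
    return y_pred[:, 0:1], y_pred[:, 1:2], y_pred[:, 2:3]

def physics_term(model, t):
    # Gradient of y1 with respect to the input t via GradientTape;
    # 'model' is any callable mapping t to a (batch, 3) tensor
    with tf.GradientTape() as tape:
        tape.watch(t)
        y1, y2, y3 = split_predictions(model(t))
    dy1_dt = tape.gradient(y1, t)
    return dy1_dt - y1 * y3
```

The tape records the forward pass, so tape.gradient can be called once it exits; tf.gradients, by contrast, is a graph-mode API and returns a list rather than a tensor, which is why the subtraction in the question's loss2 fails.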
I'm trying to train a DNN that outputs 3 values (x, y, z), where x and y are the coordinates of the object I'm looking for and z is the probability that the object is present.
I need custom loss function:
If z_true < 0.5 I don't care about the x and y values, so the error should be (0, 0, sqr(z_true - z_pred));
otherwise the error should be (sqr(x_true - x_pred), sqr(y_true - y_pred), sqr(z_true - z_pred)).
I'm struggling to mix tensors and if statements together.
Maybe this example of a custom loss function will get you up and running. It shows how you can mix tensors with if statements.
def conditional_loss_function(l):
    def loss(y_true, y_pred):
        if l == 0:
            return loss_function1(y_true, y_pred)
        else:
            return loss_function2(y_true, y_pred)
    return loss

model.compile(loss=conditional_loss_function(l), optimizer=...)

Note that this only works because l is a plain Python value known when the loss is built; a condition that depends on tensor contents needs a tensor-level op such as backend.switch.
Use switch from Keras backend: https://keras.io/backend/#switch
It is similar to tf.cond
How to create a custom loss in Keras is described here: Make a custom loss function in keras
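Applied to the x/y/z question above, the per-sample conditional can also be written with tf.where, the eager-friendly analogue of the backend switch. A sketch assuming the third column of y_true/y_pred is z:

```python
import tensorflow as tf

def conditional_xyz_loss(y_true, y_pred):
    x_err = tf.square(y_true[:, 0] - y_pred[:, 0])
    y_err = tf.square(y_true[:, 1] - y_pred[:, 1])
    z_err = tf.square(y_true[:, 2] - y_pred[:, 2])
    present = y_true[:, 2] >= 0.5  # is the object present in this sample?
    # Count the x/y error only for samples where the object is present
    xy_err = tf.where(present, x_err + y_err, tf.zeros_like(x_err))
    return tf.reduce_mean(xy_err + z_err)
```

Because tf.where selects elementwise, each sample in the batch gets its own branch, which a Python if statement on a tensor cannot do.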
I would like to get the values of the y_pred and y_true tensors inside this Keras backend function. I need this to perform some custom calculations and change the loss; these calculations are only possible with the real array values.
def mean_squared_error(y_true, y_pred):
    # some code here
    return K.mean(K.square(y_pred - y_true), axis=-1)
Is there a way to do this in Keras, or in any other ML framework (TensorFlow, PyTorch, Theano)?
No, in general you can't compute the loss that way, because Keras is built on frameworks that perform automatic differentiation (like Theano or TensorFlow), and they need to know which operations you apply in between in order to compute the gradient of the loss.
You need to implement your loss computation using keras.backend functions; otherwise there is no way to compute the gradients, and optimization won't be possible.
Try including this within the loss function:
y_true = keras.backend.print_tensor(y_true, message='y_true')
Following is an excerpt from the Keras documentation (https://keras.io/backend/):
print_tensor
keras.backend.print_tensor(x, message='')
Prints message and the tensor value when evaluated.
Note that print_tensor returns a new tensor identical to x which should be used in the later parts of the code. Otherwise, the print operation is not taken into account during evaluation.
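For example, a sketch of the standard MSE loss with print_tensor wired in (note that the returned tensors are the ones used downstream, as the documentation requires):

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def mean_squared_error_debug(y_true, y_pred):
    # print_tensor returns a new tensor; it must be the one used below,
    # otherwise the print op is pruned out of the graph
    y_true = K.print_tensor(y_true, message='y_true =')
    y_pred = K.print_tensor(y_pred, message='y_pred =')
    return K.mean(K.square(y_pred - y_true), axis=-1)
```

During training this prints the batch values of y_true and y_pred to stderr each time the loss is evaluated, without changing the loss value itself.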