tf.boolean_mask in loss function: No gradients provided for any variable - tensorflow

I am trying to use tf.boolean_mask to get a masked mean difference for image segmentation:
def custom_loss(image):
    def loss(predicted_y, target_y):
        pred_mask = tf.math.greater(predicted_y, 0.5)
        target_mask = tf.math.greater(target_y, 0.5)
        mean_diff = (tf.reduce_mean(tf.boolean_mask(image, pred_mask))
                     - tf.reduce_mean(tf.boolean_mask(image, target_mask))) ** 2
        return mean_diff
    return loss
Unfortunately, I am getting a ValueError: No gradients provided for any variable, which is presumably caused by the tf.boolean_mask. Is there any way to do this in TensorFlow 2.0?
Thanks a lot!
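The comparison op tf.math.greater produces booleans and has no gradient, so the masks block backpropagation. One possible workaround is to replace the boolean masks with soft sigmoid weights, which keeps the whole computation differentiable. A minimal sketch (the sharpness constant k is an arbitrary choice, not something from the question):

import tensorflow as tf

def custom_loss(image):
    def loss(predicted_y, target_y):
        # soft masks: sigmoid((x - 0.5) * k) approximates the hard x > 0.5 threshold
        k = 50.0
        pred_w = tf.sigmoid((predicted_y - 0.5) * k)
        target_w = tf.sigmoid((target_y - 0.5) * k)
        # weighted means replace the boolean-masked means; the epsilon avoids division by zero
        pred_mean = tf.reduce_sum(pred_w * image) / (tf.reduce_sum(pred_w) + 1e-8)
        target_mean = tf.reduce_sum(target_w * image) / (tf.reduce_sum(target_w) + 1e-8)
        return (pred_mean - target_mean) ** 2
    return loss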

Related

ValueError: No gradients provided for any variable in semi/self supervised loss function

I am training a neural network for clustering applications in a semi/self-supervised way:
Instead of having the ground truth, I define the loss function by calculating the similarity among the data points assigned to the same clusters, like:
import numpy as np
import tensorflow as tf
from scipy.spatial.distance import pdist

def loss_function(self, y_true, y_pred):
    def get_loss(x_input, y_input):
        similarity = 0.0
        for label in np.unique(y_input):
            # sum of pairwise distances among the points assigned to this cluster
            similarity += pdist(x_input[y_input == label]).sum()
        return np.float32(similarity)
    score = tf.numpy_function(get_loss, [self.x_input, y_pred], tf.float32)
    return score
In calculating the loss, I don't use y_true, and instead, I use self.x_input, which is the original data point.
I'm getting the following error while running my code:
raise ValueError(f"No gradients provided for any variable: {variable}. "
ValueError: No gradients provided for any variable
So my question is: is it possible to train a neural network model this way (without ground truth)? If so, what is causing the above problem?
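As for the error itself: tf.numpy_function has no registered gradient, so nothing inside get_loss can be differentiated. A minimal sketch of the same idea in pure TF ops, assuming y_pred holds soft cluster-assignment probabilities of shape (batch, n_clusters) (the question does not say so explicitly):

import tensorflow as tf

def intra_cluster_loss(x_input, y_pred):
    # pairwise squared Euclidean distances between all points, shape (batch, batch)
    sq_norms = tf.reduce_sum(tf.square(x_input), axis=1, keepdims=True)
    dists = sq_norms - 2.0 * tf.matmul(x_input, x_input, transpose_b=True) + tf.transpose(sq_norms)
    # affinity[i, j] = probability that points i and j fall in the same cluster
    affinity = tf.matmul(y_pred, y_pred, transpose_b=True)
    return tf.reduce_sum(affinity * dists)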

Easy way to clamp Neural Network outputs between 0 and 1?

So I'm working on writing a GAN, and I want to clamp my network's output: set it to 0 if it is less than 0, to 1 if it is greater than 1, and leave it unchanged otherwise. I'm pretty new to TensorFlow, and I don't know of any TensorFlow function or activation that does this without unwanted side effects. So I made my loss function calculate the loss as if the output were clamped, with this code:
def discriminator_loss(real_output, fake_output):
    real_output_clipped = min(max(real_output.numpy()[0], 0), 1)
    fake_output_clipped = min(max(fake_output.numpy()[0], 0), 1)
    real_clipped_tensor = tf.Variable([[real_output_clipped]], dtype="float32")
    fake_clipped_tensor = tf.Variable([[fake_output_clipped]], dtype="float32")
    real_loss = cross_entropy(tf.ones_like(real_output), real_clipped_tensor)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_clipped_tensor)
    total_loss = real_loss + fake_loss
    return total_loss
but I get this error:
ValueError: No gradients provided for any variable: ['dense_50/kernel:0', 'dense_50/bias:0', 'dense_51/kernel:0', 'dense_51/bias:0', 'dense_52/kernel:0', 'dense_52/bias:0', 'dense_53/kernel:0', 'dense_53/bias:0'].
Does anyone know a better way to do this, or a way to fix this error?
Thanks!
You can apply a ReLU layer from Keras as your final layer and set max_value=1.0. For example:
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(32, input_shape=(16,)))
model.add(tf.keras.layers.Dense(32))
model.add(tf.keras.layers.ReLU(max_value=1.0))
You can read more about it here: https://www.tensorflow.org/api_docs/python/tf/keras/layers/ReLU
TF probably does not know how to update your network weights based on this loss. The inputs to the cross entropy are tensors (variables) that are directly assigned from numpy arrays and are not connected to your actual network outputs.
If you want to perform operations on tensors that will remain within the graph and (hopefully) be differentiable, use the available TF operations. There's a "clip_by_value" operation described here: https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/clip_by_value.
E.g. real_output_clipped = tf.clip_by_value(real_output, clip_value_min=0, clip_value_max=1)
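Putting that together, a sketch of the discriminator loss rewritten with tf.clip_by_value (assuming cross_entropy is a tf.keras.losses.BinaryCrossentropy instance, as in the standard GAN tutorials):

def discriminator_loss(real_output, fake_output):
    # clip the raw tensors inside the graph instead of detouring through numpy
    real_clipped = tf.clip_by_value(real_output, clip_value_min=0.0, clip_value_max=1.0)
    fake_clipped = tf.clip_by_value(fake_output, clip_value_min=0.0, clip_value_max=1.0)
    real_loss = cross_entropy(tf.ones_like(real_output), real_clipped)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_clipped)
    return real_loss + fake_loss

Note that the clipped regions have zero gradient, so outputs saturated outside [0, 1] stop learning; the ReLU(max_value=1.0) answer above has the same property.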

Keras Model - Get input in custom loss function

I am having trouble with a Keras custom loss function. I want to be able to access truth as a numpy array.
Because it is a callback function, I think I am not in eager execution, which means I can't access it using backend.get_value(). I also tried different methods, but it always comes back to the fact that this 'Tensor' object doesn't exist.
Do I need to create a session inside the custom loss function ?
I am using Tensorflow 2.2, which is up to date.
def custom_loss(y_true, y_pred):
    # y_true is a 4D array holding the label (channel 0) and an input-dependent multiplier (channel 1)
    truth = backend.get_value(y_true)
    loss = backend.square((y_pred - truth[:, :, 0]) * truth[:, :, 1])
    loss = backend.mean(loss, axis=-1)
    return loss

model.compile(loss=custom_loss, optimizer='Adam')
model.fit(X, np.stack([labels, X[:, 0]], axis=3), batch_size=16)
I want to be able to access truth. It has two components (a label, and a multiplier that is different for each item). I saw a solution that is input-dependent, but I am not sure how to access the value: Custom loss function in Keras based on the input data
I think you can do this by enabling run_eagerly=True in model.compile as shown below.
model.compile(loss=custom_loss(weight_building, weight_space),optimizer=keras.optimizers.Adam(), metrics=['accuracy'],run_eagerly=True)
I think you also need to update custom_loss as shown below.
def custom_loss(weight_building, weight_space):
    def loss(y_true, y_pred):
        truth = backend.get_value(y_true)  # works once run_eagerly=True is set
        error = backend.square(y_pred - y_true)
        mse_error = backend.mean(error, axis=-1)
        return mse_error
    return loss
I am demonstrating the idea with simple MNIST data; please take a look at the code here.

Custom gradient in tensorflow attempts to convert model to tensor

I am trying to use the output of one neural network to compute the loss value for another network. As the first network is approximating another function (L2 distance) I would like to provide the gradients myself, as if it had come from an L2 function.
An example of my loss function in simplified code is:
@tf.custom_gradient
def loss_function(model_1_output):
    def grad(dy, variables=None):
        gradients = 2 * pred
        return gradients
    pred = model_2(model_1_output)
    loss = pred ** 2
    return loss, grad
This is called in a standard tensorflow 2.0 custom training loop such as:
with tf.GradientTape() as tape:
    model_1_output = model_1(training_data)
    loss = loss_function(model_1_output)
gradients = tape.gradient(loss, model_1.trainable_variables)
optimizer.apply_gradients(zip(gradients, model_1.trainable_variables))
However, whenever I try to run this I keep getting the error:
ValueError: Attempt to convert a value (<model.model_2 object at 0x7f41982e3240>) with an unsupported type (<class 'model.model_2'>) to a Tensor.
The whole point of using the custom_gradient decorator is that I don't want the model_2 in the loss function to be included in the backpropagation, as I give it the gradients manually.
How can I make TensorFlow completely ignore anything inside the loss function, so that, for example, I could do non-differentiable operations? I have tried using with tape.stop_recording(), but that always results in a "no gradients provided" error.
Using:
OS: Ubuntu 18.04
tensorflow: 2.0.0
python: 3.7
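Not a verified fix, but one sketch that keeps non-tensor objects out of the decorated function's boundary: run model_2 under tf.stop_gradient so it stays out of backpropagation, and supply the hand-written gradient only for the tensor argument (shapes assumed compatible, as in the question's simplified code):

def make_loss_function(model_2):
    @tf.custom_gradient
    def loss_function(model_1_output):
        # model_2's output is treated as a constant; its weights are not trained here
        pred = tf.stop_gradient(model_2(model_1_output))
        def grad(dy):
            # hand-written gradient, as if the loss were an L2 function
            return dy * 2.0 * pred
        return pred ** 2, grad
    return loss_function

loss_function = make_loss_function(model_2)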

Need custom loss function that uses if statement

I'm trying to train a DNN that outputs 3 values (x, y, z), where x and y are the coordinates of the object I'm looking for and z is the probability that the object is present.
I need a custom loss function:
If z_true < 0.5, I don't care about the x and y values, so the error should be equal to (0, 0, sqr(z_true - z_pred));
otherwise the error should be (sqr(x_true - x_pred), sqr(y_true - y_pred), sqr(z_true - z_pred)).
I'm struggling with mixing tensors and if statements together.
Maybe this example of a custom loss function will get you up and running. It shows how you can mix tensors with if statements.
def conditional_loss_function(l):
    def loss(y_true, y_pred):
        if l == 0:
            return loss_function1(y_true, y_pred)
        else:
            return loss_function2(y_true, y_pred)
    return loss

model.compile(loss=conditional_loss_function(l), optimizer=...)
Use switch from Keras backend: https://keras.io/backend/#switch
It is similar to tf.cond
How to create a custom loss in Keras described here: Make a custom loss function in keras
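Expanding on the switch idea, a sketch of a per-sample conditional loss with tf.where, which is element-wise like K.switch (this assumes the three outputs are stacked as (x, y, z) in the last axis, which the question does not state):

import tensorflow as tf

def detection_loss(y_true, y_pred):
    z_true = y_true[..., 2]
    sq_err = tf.square(y_true - y_pred)          # per-component squared errors
    z_loss = sq_err[..., 2]
    xy_loss = sq_err[..., 0] + sq_err[..., 1]
    # when the object is absent (z_true < 0.5), ignore the coordinate error
    return tf.where(z_true < 0.5, z_loss, xy_loss + z_loss)

model.compile(loss=detection_loss, optimizer='adam')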