TensorFlow gradient returns None for custom loss function - tensorflow

I implemented a model with a custom training loop and a custom loss function. The loss function returns a float value, but when I calculate the gradients with tf.gradient(loss, model.trainable_weights) it gives None gradients. I know the issue is the way I calculate the loss: I tried a custom MSE loss instead and it works fine. The loss I want to implement is something like (count of 0 predictions + count of 1 predictions) / (count of 0 labels + count of 1 labels). It's a binary classification problem. I set the batch size to 100, so the model returns 100 outputs per batch, but I only consider some of them, e.g. out of 100 I filter down to 40 based on the input. The label and the prediction have the same shape, so that is not the issue; both are passed to the custom loss function.
Notebook link
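For context, here is a minimal sketch (not from the linked notebook; it assumes TF 2.x eager execution) showing why a count-based loss of this kind typically yields None gradients: the comparison used to count predictions has no registered gradient, so the tape has no differentiable path back to the weights.

import tensorflow as tf

# Hypothetical reproduction of the problem: counting predictions via a
# threshold comparison breaks the gradient path to the weights.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
x = tf.random.normal((100, 4))
y = tf.cast(tf.random.uniform((100, 1)) > 0.5, tf.float32)

with tf.GradientTape() as tape:
    pred = model(x)
    # `pred > 0.5` is a step function: TensorFlow registers no gradient for it,
    # so everything downstream is disconnected from model.trainable_weights.
    count_pred = tf.reduce_sum(tf.cast(pred > 0.5, tf.float32))
    count_label = tf.reduce_sum(y)
    loss = count_pred / count_label

print(tape.gradient(loss, model.trainable_weights))  # [None, None]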

Related

Keras: Custom loss function with training data not directly related to model

I am trying to convert my CNN written with tensorflow layers to use the keras api in tensorflow (I am using the keras api provided by TF 1.x), and am having issues writing a custom loss function to train the model.
According to this guide, a custom loss function is expected to take the arguments (y_true, y_pred):
https://www.tensorflow.org/guide/keras/train_and_evaluate#custom_losses
def basic_loss_function(y_true, y_pred):
    return ...
However, in every example I have seen, y_true is somehow directly related to the model (in the simple case it is the output of the network). In my problem, this is not the case. How do I implement this if my loss function depends on training data that is unrelated to the tensors of the model?
To be concrete, here is my problem:
I am trying to learn an image embedding trained on pairs of images. My training data includes image pairs and annotations of matching points between the image pairs (image coordinates). The input feature is only the image pairs, and the network is trained in a siamese configuration.
I was able to implement this with tensorflow layers and train it successfully with tensorflow estimators.
My current implementation builds a tf.data Dataset from a large database of TFRecords, where the features are a dictionary containing the images and the arrays of matching points. Previously I could easily feed these arrays of image coordinates to the loss function, but here it is unclear how to do so.
There is a hack I often use: calculate the loss within the model itself, by means of Lambda layers. (This is useful when the loss is independent of the true data, for instance, and the model doesn't really have an output to be compared.)
In a functional API model:
def loss_calc(x):
    loss_input_1, loss_input_2 = x  # arbitrary inputs: you choose them
                                    # according to what you gave to the Lambda layer
    # here you can use some external data that doesn't relate to the samples
    # (use it inside the loss however you need)
    external_data = K.constant(external_numpy_data)
    # calculate and return the loss tensor, for example:
    return K.mean(K.square(loss_input_1 - loss_input_2), axis=-1, keepdims=True)
Use the outputs of the model itself (the tensor(s) that are used in your loss):
loss = Lambda(loss_calc)([model_output_1, model_output_2])
Create the model outputting the loss instead of the outputs:
model = Model(inputs, loss)
Create a dummy keras loss function for compilation:
def dummy_loss(y_true, y_pred):
    return y_pred  # y_pred is the loss itself, the output of the model above

model.compile(loss=dummy_loss, ...)
Use any dummy target array with the correct number of samples for training; it will be ignored:
model.fit(your_inputs, np.zeros((number_of_samples,)), ...)
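Putting those pieces together, a minimal runnable sketch of the whole hack (the shapes, the Dense layers and external_numpy_data below are illustrative placeholders, assuming tf.keras in TF 1.x eager mode or TF 2.x):

import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model

external_numpy_data = np.random.rand(4).astype("float32")       # placeholder extra data

def pair_loss(x):
    out_1, out_2 = x
    external = K.constant(external_numpy_data)                   # data unrelated to the samples
    # example loss: squared difference of the two branches, scaled by the external data
    return K.mean(K.square(out_1 - out_2), axis=-1, keepdims=True) * K.mean(external)

inp_1, inp_2 = Input((8,)), Input((8,))
shared = Dense(4)                                                # shared (siamese-style) layer
loss_out = Lambda(pair_loss)([shared(inp_1), shared(inp_2)])     # the loss is the model output

model = Model([inp_1, inp_2], loss_out)

def dummy_loss(y_true, y_pred):
    return y_pred                                                # y_pred is already the loss

model.compile(optimizer="adam", loss=dummy_loss)

n = 32
model.fit([np.random.rand(n, 8), np.random.rand(n, 8)],
          np.zeros((n, 1)), epochs=1, verbose=0)                 # dummy targets are ignored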
Another way of doing it, is using a custom training loop.
This is much more work, though.
Although you're using TF1, you can still turn eager execution on at the very beginning of your code (tf.enable_eager_execution()) and do things the way they're done in TF2.
Follow the tutorial for custom training loops: https://www.tensorflow.org/tutorials/customization/custom_training_walkthrough
Here you calculate the gradients yourself, of any result with respect to anything you want, which means you don't need to follow Keras' standard way of training.
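A rough sketch of such a loop with tf.GradientTape (siamese_model, dataset and the simple L2 comparison below are placeholders for your actual model, tf.data pipeline and matching-point loss):

import tensorflow as tf

optimizer = tf.keras.optimizers.Adam()

def matching_loss(emb_a, emb_b, matching_points):
    # placeholder: in the real model this would compare embeddings at the
    # annotated matching coordinates; here it is just an L2 example
    return tf.reduce_mean(tf.square(emb_a - emb_b))

def train_step(model, img_a, img_b, matching_points):
    with tf.GradientTape() as tape:
        emb_a = model(img_a, training=True)
        emb_b = model(img_b, training=True)
        # the loss can use any extra tensors, e.g. the matching points
        loss = matching_loss(emb_a, emb_b, matching_points)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# for img_a, img_b, matching_points in dataset:
#     loss = train_step(siamese_model, img_a, img_b, matching_points)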
Finally, you can use the approach you suggested of model.add_loss.
In this case, you calculate the loss exactly the same way I did in the first approach above, and pass this loss tensor to add_loss.
You can probably compile a model with loss=None then (not sure), because you're going to use other losses, not the standard one.
In this case, your model's output will probably be None too, and you should fit with y=None.
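A short sketch of that add_loss route (the layers and the "loss" computed here are illustrative only, assuming a functional tf.keras model in TF 2.x):

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model

inputs = Input((8,))
outputs = Dense(2)(inputs)

# illustrative loss tensor built from the model's own graph
loss_tensor = Lambda(lambda t: tf.reduce_mean(tf.square(t)))(outputs)

model = Model(inputs, outputs)
model.add_loss(loss_tensor)                       # register the custom loss tensor
model.compile(optimizer="adam", loss=None)        # only the added loss is used

model.fit(np.random.rand(16, 8), y=None, epochs=1, verbose=0)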

XGBoost - custom loss function

There are two different guidelines on using a customized loss function in xgboost.
If the predicted probability 'p' = sigmoid(z):
1. https://github.com/dmlc/xgboost/blob/master/demo/guide-python/custom_objective.py, line #25, says the gradient of the customized loss function should be taken w.r.t. 'z'.
2. https://xgboost.readthedocs.io/en/latest/tutorials/custom_metric_obj.html takes the gradient w.r.t. 'p'.
Which is correct?
To keep this as general as possible, you need to calculate the gradient of the total loss function w.r.t. the current predicted values. Normally, your loss function will be of the form $L = \sum_{i=1}^{N} \ell (y_{i}, \hat{y}_{i})$, in which $y_{i}$ is the label of the $i^{th}$ datapoint and $\hat{y}_{i}$ is your prediction (in the binary classification case, you might choose to define it such that $y_{i}$ are the binary labels and $\hat{y}_{i}$ are the probabilities the classifier assigns to one of the classes).
You then need to calculate $\frac{\partial \ell}{\partial \hat{y}_{i}}\Big|_{y_{i}}$.
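For reference, the convention in the linked custom_objective.py demo is to return the first and second derivatives w.r.t. the raw margin z; a minimal sketch of that convention for logistic loss (the parameters in the commented xgb.train call are placeholders):

import numpy as np
import xgboost as xgb

def logistic_obj(z, dtrain):
    """Gradient and hessian of the logistic loss w.r.t. the raw score z."""
    y = dtrain.get_label()
    p = 1.0 / (1.0 + np.exp(-z))   # p = sigmoid(z)
    grad = p - y                   # dL/dz
    hess = p * (1.0 - p)           # d^2L/dz^2
    return grad, hess

# booster = xgb.train({"max_depth": 3}, dtrain, num_boost_round=10, obj=logistic_obj)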

How to get gradients during fit or fit_generator in Keras

I need to monitor the gradients in real time during training when using the fit or fit_generator methods. This should be achievable with a custom callback function; however, I don't know how to access the gradients correctly. The attribute model.optimizer.update returns gradient tensors, but they need to be fed with data. What I want is the value of the gradients that were applied in the last batch during training.
The following answer does not solve this, because it just defines a function that calculates the gradients by feeding extra data.
Getting gradient of model output w.r.t weights using Keras

An Efficient way to Calculate loss function batchwise?

I am using autoencoders to do anomaly detection. I have finished training my model and now I want to calculate the reconstruction loss for each entry in the dataset, so that I can flag data points with a high reconstruction loss as anomalies.
This is my current code to calculate the reconstruction loss.
But this is really slow. By my estimation, it would take 5 hours to go through the dataset, whereas training one epoch takes approximately 55 minutes.
I feel that converting to tensor operations is bottlenecking the code, but I can't find a better way to do it.
I've tried changing the batch size, but it does not make much of a difference. I have to use the convert-to-tensor part because K.eval throws an error if I do it directly.
for i in range(0, encoded_dataset.shape[0], batch_size):
    y_true = tf.convert_to_tensor(encoded_dataset[i:i + batch_size].values,
                                  np.float32)
    y_pred = tf.convert_to_tensor(ae1.predict(encoded_dataset[i:i + batch_size].values),
                                  np.float32)
    # Append the batch losses (numpy array) to the list
    reconstruction_loss_transaction.append(K.eval(loss_function(y_true, y_pred)))
I was able to train in 55 mins per epoch. So I feel prediction should not take 5 hours per epoch. encoded_dataset is a variable that has the entire dataset in main memory as a data frame.
I am using Azure VM instance.
K.eval(loss_function(y_true,y_pred) is to find the loss for each row of the batch
So y_true will be of size (batch_size,2000) and so will y_pred
K.eval(loss_function(y_true,y_pred) will give me an output of
(batch_size,1) evaluating binary cross entropy on each row of y
_true and y_pred
Moved from comments:
My suspicion is that ae1.predict and K.eval(loss_function) are behaving in unexpected ways. ae1.predict can be made to output the loss function value as well as y_pred: when you create the model, specify that the loss value is another output (you can have a list of multiple outputs), then just call predict here once to get both y_pred and the loss value in one call.
But I want the loss for each row . Won't the loss returned by the predict method be the mean loss for the entire batch?
The answer depends on how the loss function is implemented. Both ways produce perfectly valid and identical results in TF under the hood. You can average the loss over the batch before taking the gradient w.r.t. the loss, or take the gradient w.r.t. a vector of losses; the gradient operation in TF will perform the averaging of the losses for you if you use the latter approach (see SO posts on taking the per-sample gradient, which is actually hard to do).
If Keras implements the loss with reduce_mean built into the loss, you can just define your own loss. If you're using square loss, replace 'mean_squared_error' with lambda y_true, y_pred: tf.square(y_pred - y_true). That produces squared error instead of MSE (no difference for the gradient), but look here for the variant that includes the mean.
In any case, this produces a per-sample loss as long as you don't use tf.reduce_mean, which is purely optional in the loss. Another option is to compute the loss separately from what you optimize for and make it an output of the model; that is also perfectly valid.
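As an illustration of that last point, a sketch that computes a per-row binary cross-entropy in one vectorized pass instead of calling K.eval inside a Python loop (it reuses the question's ae1, encoded_dataset and batch_size, and the .numpy() call assumes TF 2.x eager execution):

import numpy as np
import tensorflow as tf

x = encoded_dataset.values.astype(np.float32)    # whole dataset, shape (n_samples, 2000)
x_hat = ae1.predict(x, batch_size=batch_size)    # a single predict call, batched internally

# binary_crossentropy averages over the last axis, giving one loss value per row
per_row_loss = tf.keras.losses.binary_crossentropy(x, x_hat).numpy()  # shape (n_samples,)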

Custom training loop / loss function in tensorflow

I'm trying to define a multi-layer perceptron whose loss function is the L2 distance between the input of the network and a transformation of its output (some black-box function that roughly maps the output back into the input space so the two can be compared), i.e. loss = tf.reduce_sum(tf.square(input - transform(output)), 1).
The problem is that I need the output to be evaluated in order to transform it, but at the time of model definition, output is just a Tensor.
Is it possible to do this kind of custom training loop in TensorFlow?
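One way such a loop might look, assuming transform can be expressed with differentiable TensorFlow ops (mlp, transform and dataset below are placeholders):

import tensorflow as tf

optimizer = tf.keras.optimizers.Adam()

def train_step(mlp, transform, batch_input):
    with tf.GradientTape() as tape:
        output = mlp(batch_input, training=True)
        # L2 distance between the input and the transformed output, per sample
        per_sample = tf.reduce_sum(tf.square(batch_input - transform(output)), axis=1)
        loss = tf.reduce_mean(per_sample)
    grads = tape.gradient(loss, mlp.trainable_variables)
    optimizer.apply_gradients(zip(grads, mlp.trainable_variables))
    return loss

# for batch_input in dataset:
#     train_step(mlp, transform, batch_input)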