I am currently trying to build a deep learning model with three different loss functions in Keras. The first loss function is the typical mean squared error loss. The other two loss functions are ones I built myself, which find the difference between a calculation made from the input image and the same calculation made from the output image (this code is a simplified version of what I'm doing).
from tensorflow.keras import backend as K

def p_autoencoder_loss(yTrue, yPred):
    def loss(yTrue, yPred):
        return K.mean(K.square(yTrue - yPred), axis=-1)

    def a(image):
        return K.mean(K.sin(image))

    def b(image):
        return K.sqrt(K.cos(image))

    a_pred = a(yPred)
    a_true = a(yTrue)
    b_pred = b(yPred)
    b_true = b(yTrue)

    empirical_loss = loss(yTrue, yPred)
    a_loss = K.mean(K.square(a_true - a_pred))
    b_loss = K.mean(K.square(b_true - b_pred))
    final_loss = K.mean(empirical_loss + a_loss + b_loss)
    return final_loss
However, when I train with this loss function, it simply does not converge well. What I want to try is to minimize the three loss functions separately, rather than adding them together into one loss function.
I essentially want to do the second option from "Tensorflow: Multiple loss functions vs Multiple training ops", but in Keras form. I also want the loss functions to be independent of each other. Is there a simple way to do this?
You could have 3 outputs in your Keras model, each with your specified loss, and then Keras has support for weighting these losses. It will also generate a final combined loss for you in the output, but it will be optimising to reduce all three losses. Be wary with this though, as depending on your data/problem/losses you might find it stalls slightly or is slow if you have losses fighting each other. This requires use of the functional API. I'm unsure whether this actually implements separate optimiser instances, but I think this is as close as you will get in pure Keras, as far as I'm aware, without having to start writing more complex TF training regimes.
For example:
loss_out1 = layers.Dense(1, activation='sigmoid', name='loss1')(x)
loss_out2 = layers.Dense(1, activation='sigmoid', name='loss2')(x)
loss_out3 = layers.Dense(1, activation='sigmoid', name='loss3')(x)

model = keras.Model(inputs=[input],
                    outputs=[loss_out1, loss_out2, loss_out3])
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
              # custom losses are passed as functions, not strings
              loss=['binary_crossentropy', 'categorical_crossentropy', custom_loss1],
              loss_weights=[1., 1., 1.])
This should compile a model with 3 outputs at the end from (x), which would be defined above. When you compile, you set the outputs as a list, and you also set the losses and loss weights as lists. Note that when you call fit() you'll need to supply your target outputs three times as a list too, e.g. [y, y, y], as your model now has three outputs.
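For example, a minimal sketch of what the fit call could look like (x_train and y are placeholder names for your inputs and targets; since all three outputs are trained against the same target here, it is passed once per output):

# the target list is ordered to match the model's output list
model.fit(x_train, [y, y, y], batch_size=32, epochs=10)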
I'm not a Keras expert, but it's pretty high-level and I'm not aware of another way using pure Keras. Hopefully someone can come and correct me with a better solution!
Since there is only one output, a few things can be done:
1. Monitor the individual loss components to see how they vary.
def a_loss(y_true, y_pred):
    a_pred = a(y_pred)
    a_true = a(y_true)
    return K.mean(K.square(a_true - a_pred))

model.compile(..., metrics=[..., a_loss, b_loss])
2. Weight the loss components, where lambda_a and lambda_b are hyperparameters:
final_loss = K.mean(empirical_loss + lambda_a * a_loss + lambda_b * b_loss)
3. Use a different loss function like SSIM:
https://www.tensorflow.org/api_docs/python/tf/image/ssim
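For example, a minimal sketch of an SSIM-based reconstruction loss (this assumes the images are scaled to [0, 1]; max_val should match your actual data range):

import tensorflow as tf

def ssim_loss(y_true, y_pred):
    # tf.image.ssim returns a per-image similarity score in [-1, 1];
    # subtracting its mean from 1 turns it into a quantity to minimise
    return 1.0 - tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=1.0))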
Related
I want to train a neural network for a classification task in Keras, using a TensorFlow backend, with a custom loss function. In my loss, I want to give different weights to different training examples. I have some data points I consider important and some I consider less important. I want my loss function to take this into account and punish errors on important examples more than on less important ones.
I have already built my model:
input = tf.keras.Input(shape=(16,))
hidden_layer_1 = tf.keras.layers.Dense(5, kernel_initializer='glorot_uniform', activation='relu')(input)
output = tf.keras.layers.Dense(1, kernel_initializer='normal', activation='softmax')(hidden_layer_1)
model = tf.keras.Model(input, output)
model.compile(loss=custom_loss(input), optimizer='adam', run_eagerly=True, metrics = [tf.keras.metrics.Accuracy(), 'acc'])
and the current state of my loss function is:
def custom_loss(input):
    def loss(y_true, y_pred):
        return ...
    return loss
I'm struggling with implementing the loss function in the way I explained above, mainly because I don't know exactly what input, y_pred and y_true are (KerasTensors, I know - but what is their content? And is it for one training example only or for the whole batch?). I'd appreciate help with:
printing out the values of input, y_true and y_pred
converting the input value to a numpy ndarray ([1,3,7] for example) so I can use the array to look up my weight for this specific training data point
once I have my weight as a number (0.5 for example), how do I implement the computation of the loss function in Keras? My loss for one training example should be 0 if the classification was correct and the weight if it was incorrect.
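For concreteness, here is a rough sketch of the closure pattern I'm aiming for (lookup_weights is a hypothetical helper that maps a batch of inputs to per-example weights, and a weighted binary cross-entropy is used as a differentiable stand-in for the hard 0/weight loss described above):

import tensorflow as tf

def custom_loss(input_tensor):
    def loss(y_true, y_pred):
        # hypothetical: derive one weight per example from the input batch
        weights = lookup_weights(input_tensor)
        # weight the standard per-example loss instead of a hard 0/weight rule,
        # so the result stays differentiable
        per_example = tf.keras.losses.binary_crossentropy(y_true, y_pred)
        return tf.reduce_mean(weights * per_example)
    return loss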
I am trying to find out how exactly the BatchNormalization layer behaves in TensorFlow. I came up with the following piece of code, which to the best of my knowledge should be a perfectly valid Keras model; however, the mean and variance of BatchNormalization don't appear to be updated.
From the docs (https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization):
in the case of the BatchNormalization layer, setting trainable = False on the layer means that the layer will be subsequently run in inference mode (meaning that it will use the moving mean and the moving variance to normalize the current batch, rather than using the mean and variance of the current batch).
I expect the model to return a different value with each subsequent predict call.
What I see, however, are the exact same values returned 10 times.
Can anyone explain to me why the BatchNormalization layer does not update its internal values?
import tensorflow as tf
import numpy as np

if __name__ == '__main__':
    np.random.seed(1)
    x = np.random.randn(3, 5) * 5 + 0.3

    bn = tf.keras.layers.BatchNormalization(trainable=False, epsilon=1e-9)
    z = input = tf.keras.layers.Input([5])
    z = bn(z)

    model = tf.keras.Model(inputs=input, outputs=z)

    for i in range(10):
        print(x)
        print(model.predict(x))
        print()
I am using TensorFlow 2.1.0.
Okay, I found the mistake in my assumptions. The moving averages are updated during training, not during inference as I thought. This makes perfect sense, as updating the moving averages during inference would likely result in an unstable production model (for example, a long sequence of highly pathological input samples [e.g. such that their generating distribution differs drastically from the one on which the network was trained] could potentially bias the network and result in worse performance on valid input samples).
The trainable parameter is useful when you're fine-tuning a pretrained model and want to freeze some of the layers of the network even during training; at inference time it makes no difference, because when you call model.predict(x) (or even model(x) or model(x, training=False)) the layer automatically uses the moving averages instead of the batch statistics.
The code below demonstrates this clearly
import tensorflow as tf
import numpy as np

if __name__ == '__main__':
    np.random.seed(1)
    x = np.random.randn(10, 5) * 5 + 0.3

    z = input = tf.keras.layers.Input([5])
    z = tf.keras.layers.BatchNormalization(trainable=True, epsilon=1e-9, momentum=0.99)(z)

    model = tf.keras.Model(inputs=input, outputs=z)
    # a dummy loss function
    model.compile(loss=lambda x, y: (x - y) ** 2)

    # a dummy fit just to update the batchnorm moving averages
    model.fit(x, x, batch_size=3, epochs=10)

    # first predict uses the moving averages from training
    pred = model(x).numpy()
    print(pred.mean(axis=0))
    print(pred.var(axis=0))
    print()

    # outputs the same thing as the previous predict
    pred = model(x).numpy()
    print(pred.mean(axis=0))
    print(pred.var(axis=0))
    print()

    # here calling the model with training=True results in an update of the moving averages;
    # furthermore, it uses the batch mean and variance as in training,
    # so the result is very different
    pred = model(x, training=True).numpy()
    print(pred.mean(axis=0))
    print(pred.var(axis=0))
    print()

    # here we see again that the moving averages are used, but they differ slightly after
    # the previous call, as expected
    pred = model(x).numpy()
    print(pred.mean(axis=0))
    print(pred.var(axis=0))
    print()
In the end, I found that the documentation (https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization) mentions this:
When performing inference using a model containing batch normalization, it is generally (though not always) desirable to use accumulated statistics rather than mini-batch statistics. This is accomplished by passing training=False when calling the model, or using model.predict.
Hopefully this will help someone with a similar misunderstanding in the future.
I am working on some kind of framework for myself built on top of TensorFlow and Keras. As a start, I wrote just the core of the framework and implemented a first toy example. This toy example is just a classic feed-forward network solving XOR.
It's probably not necessary to explain everything around it, but I implemented the loss function like this:
class MeanSquaredError(Modality):
    def loss(self, y_true, y_pred, sample_weight=None):
        y_true = tf.cast(y_true, dtype=y_pred.dtype)
        loss = tf.keras.losses.MeanSquaredError(reduction=tf.keras.losses.Reduction.NONE)(y_true, y_pred)
        return tf.reduce_sum(loss) / self.model_hparams.model.batch_size
This will be used in the actual model class like this:
class Model(keras.Model):
    def loss(self, y_true, y_pred, weights=None):
        target_modality = self.modalities['targets'](self.problem.hparams, self.hparams)
        return target_modality.loss(y_true, y_pred)
Now, when it comes to training, I can train the model like this:
model.compile(
optimizer=keras.optimizers.Adam(0.001),
loss=model.loss, # Simply setting 'mse' works as well here
metrics=['accuracy']
)
or I can just set loss='mse'. Both cases work as expected without any problems.
However, I have another Modality class which I am using for sequence-to-sequence (e.g. translation) tasks. It looks like this:
class CategoricalCrossentropy(Modality):
    """Simple SymbolModality with one hot as embeddings."""

    def loss(self, y_true, y_pred, sample_weight=None):
        labels = tf.reshape(y_true, shape=(tf.shape(y_true)[0], tf.reduce_prod(tf.shape(y_true)[1:])))
        y_pred = tf.reshape(y_pred, shape=(tf.shape(y_pred)[0], tf.reduce_prod(tf.shape(y_pred)[1:])))
        loss = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)(labels, y_pred)
        return tf.reduce_mean(loss) / self.model_hparams.model.batch_size
What this does is just reshaping the y_true and y_pred tensors [batch_size, seq_len, embedding_size] to [seq_len * batch_size, embedding_size] - effectively stacking all examples. From this, the categorical cross-entropy is calculated and normalized.
Now, the model I am using is a very simple LSTM - this isn't important though. As I am training the model like this:
model.compile(
optimizer=keras.optimizers.Adam(0.001),
loss='categorical_crossentropy', # <-- Setting the loss via string argument (works)
metrics=['accuracy']
)
The model does learn the task as expected. However, if I use the CategoricalCrossentropy modality from above, setting loss=model.loss, the model does not converge at all. The loss oscillates randomly but does not converge.
And this is where I am scratching my head. Since the simple XOR example works both ways, and since setting 'categorical_crossentropy' works as well, I do not quite see why using said modality doesn't work.
Am I doing something obviously wrong?
I am sorry that I cannot provide a small example here, but this is not possible since the framework already consists of quite a few lines of code. Empirically speaking, everything should work.
Any ideas how I could track down the issue or what might be causing this?
You're creating a tuple of tensors for shape. That might not work.
Why not just this?
labels = tf.keras.backend.batch_flatten(y_true)
y_pred = tf.keras.backend.batch_flatten(y_pred)
The standard 'categorical_crossentropy' loss does not perform any kind of flattening, and it considers as classes the last axis.
Are you sure you want to flatten your data? If you flatten, you will multiply the number of classes by the number of steps, this doesn't seem to make much sense.
Also, the standard 'categorical_crossentropy' loss uses from_logits=False!
The standard loss expects outputs from a "softmax" activation, while from_logits=True expects outputs without that activation.
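To make that concrete, a small sketch of the two consistent setups (num_classes and features are placeholder names, not part of the question's code):

import tensorflow as tf
from tensorflow.keras import layers

# num_classes and features are assumed to be defined elsewhere

# Setup 1: softmax on the last layer + the default loss
# (this is what the 'categorical_crossentropy' string resolves to, i.e. from_logits=False)
probs = layers.Dense(num_classes, activation='softmax')(features)
loss_probs = tf.keras.losses.CategoricalCrossentropy(from_logits=False)

# Setup 2: raw logits on the last layer + from_logits=True
logits = layers.Dense(num_classes)(features)
loss_logits = tf.keras.losses.CategoricalCrossentropy(from_logits=True)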
In TensorFlow 2.0 you don't have to worry about the training phase (batch size, number of epochs, etc.), because everything can be handled by the fit method: model.fit(X_train, Y_train, batch_size=64, epochs=100).
But I have seen the following code style:
optimizer = tf.keras.optimizers.Adam(0.001)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

@tf.function
def train_step(inputs, labels):
    with tf.GradientTape() as tape:
        predictions = model(inputs, training=True)
        regularization_loss = tf.math.add_n(model.losses)
        pred_loss = loss_fn(labels, predictions)
        total_loss = pred_loss + regularization_loss
    gradients = tape.gradient(total_loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

for epoch in range(NUM_EPOCHS):
    for inputs, labels in train_data:
        train_step(inputs, labels)
    print("Finished epoch", epoch)
So here you can observe "more detailed" code, where you manually define your training procedure with for loops.
I have the following question: what is the best practice in TensorFlow 2.0? I haven't found any complete tutorial.
Use what is best for your needs.
Both methods are documented in Tensorflow tutorials.
If you don't need anything special, no extra losses, strange metrics or intricate gradient computation, just use model.fit() or model.fit_generator(). This is totally fine and makes your life easier.
A custom training loop might come in handy when you have complicated models with non-trivial loss/gradients calculation.
Up to now, two applications I tried were easier with this:
Training a GAN's generator and discriminator simultaneously without having to do the generation step twice. (It's complicated because you have a loss function that applies to different y_true values, and each case should update only a part of the model.) The other option would require having a few separate models, each with its own trainable=True/False configuration, and training them in separate phases.
Training inputs (good for style transfer models) -- alternatively, create a custom layer that takes dummy inputs and outputs its own trainable weights, as sketched below. But it gets complicated to compile several loss functions for each of the outputs of the base and style networks.
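As a rough illustration of that last workaround, a custom layer that ignores its dummy input and returns its own trainable weights could look something like this (the image shape is a placeholder; this is only a sketch of the idea):

import tensorflow as tf

class TrainableImage(tf.keras.layers.Layer):
    """Layer that ignores its dummy input and outputs a trainable tensor."""

    def __init__(self, shape=(64, 64, 3), **kwargs):  # placeholder shape
        super().__init__(**kwargs)
        self.image = self.add_weight(name='image', shape=shape,
                                     initializer='random_normal', trainable=True)

    def call(self, inputs):
        # broadcast the trainable image across the batch dimension of the dummy input
        batch_size = tf.shape(inputs)[0]
        return tf.tile(self.image[tf.newaxis, ...], [batch_size, 1, 1, 1])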
I'm porting a bunch of code from the tf.estimator.Estimator API to tf.keras using tf.data.Datasets, and I'm hoping to stay as close to the provided compile/fit as possible. I'm being frustrated by compile's loss and metrics args.
Essentially, I'd like to use a loss function which uses multiple outputs and labels in a non-additive way, i.e. I want to provide
def custom_loss(all_labels, model_outputs):
    """
    Args:
        all_labels: all labels in the dataset, as a single tensor, tuple or dict
        model_outputs: all outputs of the model as a single tensor, tuple or dict

    Returns:
        single loss tensor to be averaged.
    """
    ...
I can't provide this to compile because, as far as I'm aware, it only supports weighted sums of per-output/label losses, and it makes assumptions about the shape of each label based on the corresponding model output. I can't create it separately and use model.add_loss because I never have explicit access to a labels tensor if I want to let model.fit handle dataset iteration. I've considered flattening/concatenating all outputs and labels together, but then I can't monitor multiple metrics.
I can write my own training loop using model.train_on_batch, but that forces me to replicate behaviour already implemented in fit such as dataset iteration, callbacks, validation, distribution strategies etc.
As an example, I'd like to replicate the following estimator.
def model_fn(features, labels, mode):
    outputs = get_outputs(features)  # dict
    loss = custom_loss(labels, outputs)
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
    eval_metric_ops = {
        'a_mean': tf.metrics.mean(outputs['a'])
    }
    return tf.estimator.EstimatorSpec(
        loss=loss, train_op=train_op, mode=mode, eval_metric_ops=eval_metric_ops)

estimator = tf.estimator.Estimator(model_fn=model_fn)
estimator.train(dataset_fn)
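For what it's worth, the closest sketch I can picture in newer versions of tf.keras (2.2+, where train_step can be overridden) keeps fit() for iteration, callbacks and distribution while computing the joint loss myself; custom_loss is the same placeholder as above, and the layer stack is assumed to be built in the subclass as usual:

import tensorflow as tf

class EstimatorLikeModel(tf.keras.Model):
    # layers are assumed to be defined in __init__ and used in call(),
    # so that self(features) returns the same dict of outputs as get_outputs()

    def train_step(self, data):
        features, labels = data
        with tf.GradientTape() as tape:
            outputs = self(features, training=True)
            loss = custom_loss(labels, outputs)  # joint loss over all outputs/labels
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        return {'loss': loss, 'a_mean': tf.reduce_mean(outputs['a'])}

# model = EstimatorLikeModel(...)
# model.compile(optimizer=tf.keras.optimizers.Adam(1e-3))
# model.fit(dataset)  # dataset yields (features, labels) pairs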