I want to train my model with Keras. I'm using a huge dataset where one training epoch has more than 30,000 steps. My problem is that I don't want to wait for a full epoch before checking the model's improvement on the validation dataset. Is there any way to make Keras evaluate the validation data every 1000 training steps? I think one option would be to use a callback, but is there any built-in solution in Keras?
if train:
    log('Start training')
    history = model.fit(train_dataset,
                        steps_per_epoch=train_steps,
                        epochs=50,
                        validation_data=val_dataset,
                        validation_steps=val_steps,
                        callbacks=[
                            keras.callbacks.EarlyStopping(
                                monitor='loss',
                                patience=10,
                                restore_best_weights=True,
                            ),
                            keras.callbacks.ModelCheckpoint(
                                filepath='model.h5',
                                monitor='val_loss',
                                save_best_only=True,
                                save_weights_only=True,
                            ),
                            keras.callbacks.ReduceLROnPlateau(
                                monitor='val_loss',
                                factor=0.5,
                                patience=3,
                                min_lr=0.001,
                            ),
                        ],
                        )
With the in-built callbacks, you cannot do that. What you need is to implement a custom callback.
import datetime
import tensorflow as tf

class MyCustomCallback(tf.keras.callbacks.Callback):

    def on_train_batch_begin(self, batch, logs=None):
        print('Training: batch {} begins at {}'.format(batch, datetime.datetime.now().time()))

    def on_train_batch_end(self, batch, logs=None):
        print('Training: batch {} ends at {}'.format(batch, datetime.datetime.now().time()))

    def on_test_batch_begin(self, batch, logs=None):
        print('Evaluating: batch {} begins at {}'.format(batch, datetime.datetime.now().time()))

    def on_test_batch_end(self, batch, logs=None):
        print('Evaluating: batch {} ends at {}'.format(batch, datetime.datetime.now().time()))
This is taken from the TensorFlow documentation.
You can override the on_train_batch_end() function and, since the batch parameter is an integer, check something like batch % 1000 == 0, then call self.model.predict(val_data) (or self.model.evaluate(val_data)) as your needs dictate.
Please check my answer here: How to get other metrics in Tensorflow 2.0 (not only accuracy)? for a good overview of how to override a callback method. Please note that in your case it is on_train_batch_end(), not on_epoch_end(), that matters.
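For illustration, here is a minimal sketch of such a callback. The val_dataset/val_steps names and the 1000-step interval are taken from the question's setup; this is not a built-in Keras feature:
import tensorflow as tf

class PeriodicValidation(tf.keras.callbacks.Callback):
    """Evaluate on the validation data every `interval` training batches."""

    def __init__(self, val_dataset, val_steps, interval=1000):
        super().__init__()
        self.val_dataset = val_dataset
        self.val_steps = val_steps
        self.interval = interval

    def on_train_batch_end(self, batch, logs=None):
        if batch > 0 and batch % self.interval == 0:
            # self.model is set by Keras when the callback is attached to fit()
            results = self.model.evaluate(self.val_dataset,
                                          steps=self.val_steps,
                                          verbose=0)
            print('\nStep {}: validation results {}'.format(batch, results))

# usage: model.fit(train_dataset, ..., callbacks=[PeriodicValidation(val_dataset, val_steps)])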
I am training a model with Keras for one epoch only:
history = model.fit([x], y,
                    validation_split=0.2, epochs=1, batch_size=2)
print(history.history['accuracy'])
The history now obviously only contains one value from the end of the epoch. How can I evaluate the training accuracy or loss during the epoch? I expect these to be the values that are shown in the console during training.
To be clear: I want a history to be written after every batch (not after every epoch, as per usual).
I assume you want to save the accuracy and loss at the end of each batch. To do that, you need to create a custom callback, as shown below:
class BCP(keras.callbacks.Callback):
    batch_accuracy = []  # accuracy at a given batch
    batch_loss = []      # loss at a given batch

    def __init__(self):
        super(BCP, self).__init__()

    def on_train_batch_end(self, batch, logs=None):
        BCP.batch_accuracy.append(logs.get('accuracy'))
        BCP.batch_loss.append(logs.get('loss'))
Now, in model.fit, include
callbacks = [BCP()]
and train for 1 epoch. At the end of the epoch, the accuracy and loss values for each batch are stored in BCP.batch_accuracy and BCP.batch_loss. You can print them out as follows:
print('{0:^7s}{1:^15s}{2:^15s}'.format('Batch', 'Loss', 'Accuracy'))
for i in range(len(BCP.batch_accuracy)):
    print('{0:^7s}{1:15.5f}{2:15.2f}'.format(str(i), BCP.batch_loss[i], BCP.batch_accuracy[i] * 100))
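For completeness, here is a sketch of wiring the callback into the one-epoch fit from the question. It assumes the model has been compiled with metrics=['accuracy'] so that logs contains an 'accuracy' key:
history = model.fit([x], y,
                    validation_split=0.2, epochs=1, batch_size=2,
                    callbacks=[BCP()])

# BCP.batch_loss / BCP.batch_accuracy now hold one entry per training batch
print(len(BCP.batch_loss), 'batches recorded')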
if I "compile" a Keras model with an optimizer using a LearningRateSchedule and run model.fit() for one epoch several times, will it restart the learning rate scheduler every time or will it preserve its state?
model = create_keras_model()
lr_scheduler = create_lr_scheduler()
optimizer = Adam(learning_rate=lr_scheduler)
for i in range(10):
    model.fit(dataset, epochs=1)
Thanks.
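One way to check this empirically is to probe the optimizer's step counter between fit() calls. This is a sketch: create_keras_model, create_lr_scheduler and dataset are the placeholders from the question, and the probe relies on a LearningRateSchedule being driven by the optimizer's iterations counter:
model.compile(optimizer=optimizer, loss='mse')  # assuming the optimizer defined above is used

for i in range(10):
    model.fit(dataset, epochs=1)
    # optimizer.iterations is owned by the optimizer, not by fit(),
    # so it keeps counting across calls and the schedule continues from there
    step = optimizer.iterations.numpy()
    print('iterations so far:', step, 'current lr:', float(lr_scheduler(step)))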
I've been trying to investigate the reason (e.g. by checking weights, gradients and activations during training) why SGD with a 0.001 learning rate worked in training while Adam fails to do so (please see my previous post: "Why is my loss (binary cross entropy) converging on ~0.6? (Task: Natural Language Inference)").
Note: I'm using the same model from my previous post here as well.
Using tf.keras, I trained the neural network with model.fit():
model.compile(optimizer=SGD(learning_rate=0.001),
              loss='binary_crossentropy',
              metrics=['accuracy'])

model.fit(x=ds,
          epochs=80,
          validation_data=ds_val)
This resulted in the epoch loss graphed below: within the first epoch it reached a train loss of 0.46, and it ultimately ended with a train_loss of 0.1241 and a val_loss of 0.2849.
I would have used tf.keras.callbacks.TensorBoard(histogram_freq=1) to train the network with both SGD(0.001) and Adam and compare, but it throws an InvalidArgumentError on Variable:0 that I can't decipher. So I tried to write a custom training loop using GradientTape and to plot the values myself.
Using tf.GradientTape(), I tried to reproduce the results with the exact same model and dataset; however, the epoch loss decreases incredibly slowly, reaching a train loss of only 0.676 after 15 epochs (see graph below). Is there something wrong with my implementation? (Code below.)
from typing import Dict

import tensorflow as tf
from tensorflow.keras.losses import BinaryCrossentropy, Loss
from tensorflow.keras.optimizers import SGD
from tqdm import tqdm

@tf.function
def compute_grads(train_batch: Dict[str, tf.Tensor], target_batch: tf.Tensor,
                  loss_fn: Loss, model: tf.keras.Model):
    with tf.GradientTape(persistent=False) as tape:
        # forward pass
        outputs = model(train_batch)
        # calculate loss
        loss = loss_fn(y_true=target_batch, y_pred=outputs)
    # calculate gradients for each param
    grads = tape.gradient(loss, model.trainable_variables)
    return grads, loss
BATCH_SIZE = 8
EPOCHS = 15

bce = BinaryCrossentropy()
optimizer = SGD(learning_rate=0.001)

for epoch in tqdm(range(EPOCHS), desc='epoch'):
    # - accumulators
    epoch_loss = 0.0
    for (i, (train_batch, target_dict)) in tqdm(enumerate(ds_train.shuffle(1024).batch(BATCH_SIZE)), desc='step'):
        (grads, loss) = compute_grads(train_batch, target_dict['target'], bce, model)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        epoch_loss += loss

    avg_epoch_loss = epoch_loss / (i + 1)
    tensorboard_scalar(writer, name='epoch_loss', data=avg_epoch_loss, step=epoch)  # custom helper function
    print("Epoch {}: epoch_loss = {}".format(epoch, avg_epoch_loss))
Thanks in advance!
Check how you shuffle your dataset: the problem may come from shuffling with the tf.data.Dataset method alone, which only shuffles through one buffer of the dataset at a time. Using Keras Model.fit yielded better results because it probably adds another level of shuffling.
Adding an extra shuffle with numpy.random.shuffle may improve the training performance, as suggested in this reference.
An example of applying it when generating the dataset:
numpy_data = np.hstack([index_rows.reshape(-1, 1), index_cols.reshape(-1, 1), index_data.reshape(-1, 1)])
np.random.shuffle(numpy_data)

indexes = np.array(numpy_data[:, :2], dtype=np.uint32)
labels = np.array(numpy_data[:, 2].reshape(-1, 1), dtype=np.float32)

train_ds = data.Dataset.from_tensor_slices(
    (indexes, labels)
).shuffle(100000).batch(batch_size, drop_remainder=True)
If this does not work, you may need to use Dataset.repeat(epochs_number) and .shuffle(..., reshuffle_each_iteration=True):
train_ds = data.Dataset.from_tensor_slices(
    (np.hstack([index_rows.reshape(-1, 1), index_cols.reshape(-1, 1)]), index_data)
).shuffle(100000, reshuffle_each_iteration=True
).batch(batch_size, drop_remainder=True
).repeat(epochs_number)

for ix, (examples, labels) in train_ds.enumerate():
    train_step(examples, labels)
    current_epoch = ix // (len(index_data) // batch_size)
This workaround is neither beautiful nor natural, but for the moment you can use it to shuffle each epoch. It is a known issue that will be fixed; in the future you will be able to use for epoch in range(epochs_number) instead of .repeat().
The solution provided here may also help a lot. You might want to check it out.
If this is not the case, you may want to speed up the TF 2.0 GradientTape code itself. This can be the solution:
TensorFlow 2.0 introduces the concept of functions, which translate eager code into graph code.
The usage is pretty straightforward. The only change needed is that all relevant functions (like compute_loss and apply_gradients) have to be annotated with @tf.function.
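As a minimal sketch of what that annotation looks like on a combined train step (the function and variable names here are illustrative, not from the original post):
import tensorflow as tf

@tf.function  # traces the eager code below into a graph on the first call
def train_step(model, optimizer, loss_fn, inputs, targets):
    with tf.GradientTape() as tape:
        outputs = model(inputs, training=True)
        loss = loss_fn(targets, outputs)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss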
I have dropout layers in my model, so I want Keras to figure out the training and test phases in order to run or ignore the dropout layers. I found that K.set_learning_phase can do this for me, but how can I add it to the training and test processes? My code is like this:
def discriminator(self):
    x_A = Input(shape=self.shape)
    x_B = Input(shape=self.shape)
    x = concatenate([x_A, x_B], axis=-1)

    self.model = Sequential()
    self.model.add(Dropout(0.5, input_shape=self.shape_double))
    self.model.add(LSTM(200, return_sequences=True, kernel_constraint=unit_norm()))
    self.model.add(Dropout(0.5))
    self.model.add(LSTM(200, return_sequences=True, kernel_constraint=unit_norm()))
    self.model.add(Dropout(0.5))
    self.model.add(Flatten())
    self.model.add(Dense(8, activation="softmax", kernel_constraint=unit_norm()))

    label = self.model(x)
    return Model([x_A, x_B], label)
...
def train(self, epochs, batch_size):
    for epoch in range(epochs):
        for batch, (train_A, train_B, train_label) in enumerate(Load_train(batch_size)):
            Dloss = self.discriminator.train_on_batch([train_A, train_B], train_label)
            ...

def test(self, test_A, test_B, test_label):
    predicted_label_dist = self.discriminator.predict([test_A, test_B])
    ...
Any suggestions will be appreciated. Thanks.
Keras figures out the appropriate learning phase on its own by default when you call fit or predict. Hence, your dropout will only be applied during training and not during testing. However, if you still wish to configure the training phase on your own, i.e. override the default behaviour, you can do it like this (from the Keras docs):
keras.backend.set_learning_phase(value)
Where:
value: Learning phase value, either 0 or 1 (integers).
Simply add this call in your training and testing functions.
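If you do want to set the phase explicitly, applied to the train/test methods from the question it would look roughly like this (a sketch; only the two set_learning_phase calls are the addition, everything else is the question's code):
from keras import backend as K

def train(self, epochs, batch_size):
    K.set_learning_phase(1)  # 1 = training: dropout layers are active
    for epoch in range(epochs):
        for batch, (train_A, train_B, train_label) in enumerate(Load_train(batch_size)):
            Dloss = self.discriminator.train_on_batch([train_A, train_B], train_label)

def test(self, test_A, test_B, test_label):
    K.set_learning_phase(0)  # 0 = testing: dropout layers are skipped
    predicted_label_dist = self.discriminator.predict([test_A, test_B])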
Is there any way I can log the validation loss and accuracy to TensorBoard when using tf-slim? When I was using Keras, the following code could do this for me:
model.fit_generator(generator=train_gen(), validation_data=valid_gen(),...)
Then the model evaluates the validation loss and accuracy after each epoch, which is very convenient. But how can I achieve this using tf-slim? The following snippet uses primitive TensorFlow, which is not what I want:
with tf.Session() as sess:
    for step in range(100000):
        sess.run(train_op, feed_dict={X: X_train, y: y_train})
        if step % (batch_size * batches_per_epoch) == 0:
            print(sess.run(train_op, feed_dict={X: X_train, y: y_train}))
Right now, the steps to train a model using tf-slim are:
tf.contrib.slim.learning.train(
    train_op=train_op,
    logdir="logs",
    number_of_steps=10000,
    log_every_n_steps=10,
    save_summaries_secs=1
)
So how to evaluate validation loss and accuracy after each epoch with the above slim training procedure?
Thanks in advance!
The matter is still being discussed on TF Slim repo (issue #5987).
The framework allows you to easily create an evaluation script to run after / in parallel of your training (solution 1 below), but some people are pushing to be able to implement the "classic cycle of batch training + validation" (solution 2).
1. Use slim.evaluation in another script
TF Slim has evaluation methods, e.g. slim.evaluation.evaluation_loop(), that you can use in another script (which can be run in parallel with your training) to periodically load the latest checkpoint of your model and perform evaluation. The TF Slim page contains a good example of how such a script may look: example.
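A minimal sketch of such an evaluation script is given below. The checkpoint/log directories, the predictions/labels tensors and num_validation_batches are placeholders you would adapt to your own model:
import tensorflow as tf
slim = tf.contrib.slim

# build the same model graph as in training, in eval mode,
# and define the streaming metrics to report
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
    'eval/accuracy': slim.metrics.streaming_accuracy(predictions, labels),
})

slim.evaluation.evaluation_loop(
    master='',
    checkpoint_dir='logs',             # where slim.learning.train writes checkpoints
    logdir='logs/eval',                # where evaluation summaries go
    num_evals=num_validation_batches,  # batches per evaluation pass
    eval_op=list(names_to_updates.values()),
    eval_interval_secs=60)             # re-evaluate the newest checkpoint every 60s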
2. Provide a custom train_step_fn to slim.learning.train()
A patchy solution that the initiator of the discussion came up with makes use of a custom training-step function you can provide to slim.learning.train():
"""
Snippet from code by Kevin Malakoff #kmalakoff
https://github.com/tensorflow/tensorflow/issues/5987#issue-192626454
"""
# ...
accuracy_validation = slim.metrics.accuracy(
tf.argmax(predictions_validation, 1),
tf.argmax(labels_validation, 1)) # ... or whatever metrics needed
def train_step_fn(session, *args, **kwargs):
total_loss, should_stop = train_step(session, *args, **kwargs)
if train_step_fn.step % FLAGS.validation_check == 0:
accuracy = session.run(train_step_fn.accuracy_validation)
print('Step %s - Loss: %.2f Accuracy: %.2f%%' % (str(train_step_fn.step).rjust(6, '0'), total_loss, accuracy * 100))
# ...
train_step_fn.step += 1
return [total_loss, should_stop]
train_step_fn.step = 0
train_step_fn.accuracy_validation = accuracy_validation
slim.learning.train(
train_op,
FLAGS.logs_dir,
train_step_fn=train_step_fn,
graph=graph,
number_of_steps=FLAGS.max_steps
)