TensorFlow Dataset shuffle - data shuffling only starting from the second epoch

I am using the TensorFlow Dataset API for my input data pipeline. I would like to run training without data shuffling in the first epoch and start shuffling the data from the second epoch onward.
Since the graph is usually built before iterative training starts, it is not straightforward to change the Dataset's shuffling behavior during training; it looks to me like changing the graph.
Any ideas?
Thanks,
Harry

The buffer_size argument to Dataset.shuffle() can be a computed tf.Tensor, so you can use Dataset.range(NUM_EPOCHS).flat_map(...) to transform a sequence of epoch numbers into the (shuffled or unshuffled) elements of a per_epoch_dataset:
NUM_EPOCHS = ...         # The total number of epochs.
BUFFER_SIZE = ...        # The shuffle buffer size to use from the second epoch on.
per_epoch_dataset = ...  # A `Dataset` representing the elements of a single epoch.

def shuffle_after_first_epoch(epoch):
    # Set `epoch_buffer_size` to 1 (i.e. no shuffling) in the 0th epoch,
    # and `BUFFER_SIZE` thereafter.
    epoch_buffer_size = tf.cond(tf.equal(epoch, 0),
                                lambda: tf.constant(1, tf.int64),
                                lambda: tf.constant(BUFFER_SIZE, tf.int64))
    return per_epoch_dataset.shuffle(epoch_buffer_size)

dataset = tf.data.Dataset.range(NUM_EPOCHS).flat_map(shuffle_after_first_epoch)
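For concreteness, a minimal runnable sketch (TF 1.x API) with made-up values for the placeholders above; the first five elements print in order, the remaining ten shuffled:

import tensorflow as tf

NUM_EPOCHS = 3
BUFFER_SIZE = 5
per_epoch_dataset = tf.data.Dataset.range(5)  # one epoch = elements 0..4

def shuffle_after_first_epoch(epoch):
    epoch_buffer_size = tf.cond(tf.equal(epoch, 0),
                                lambda: tf.constant(1, tf.int64),   # buffer of 1 = no shuffling
                                lambda: tf.constant(BUFFER_SIZE, tf.int64))
    return per_epoch_dataset.shuffle(epoch_buffer_size)

dataset = tf.data.Dataset.range(NUM_EPOCHS).flat_map(shuffle_after_first_epoch)
next_element = dataset.make_one_shot_iterator().get_next()

with tf.Session() as sess:
    try:
        while True:
            print(sess.run(next_element))
    except tf.errors.OutOfRangeError:
        pass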

Related

Unknown number of steps - Training convolution neural network at Google Colab Pro

I am trying to train my CNN on Google Colab Pro. When I run my code everything works, but the number of steps per epoch is unknown, so training runs as an infinite loop:
Mounted at /content/drive
2.2.0-rc3
Found 10018 images belonging to 2 classes.
Found 1336 images belonging to 2 classes.
WARNING:tensorflow:`period` argument is deprecated. Please use `save_freq` to specify the frequency in number of batches seen.
Epoch 1/300
8/Unknown - 364s 45s/step - loss: 54.9278 - accuracy: 0.5410
I am using ImageDataGenerator() for loading images. How can I fix it?
An iterator does not store anything; it generates the data dynamically. When you use a dataset or dataset iterator, its length is unknown until you iterate through it, so you must provide steps_per_epoch yourself (you could, for example, derive it from the number of data files). So you need to provide steps_per_epoch as shown below.
model.fit_generator(
    train_data_gen,
    steps_per_epoch=total_train // batch_size,
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=total_val // batch_size
)
More details are mentioned here
steps_per_epoch: Integer or None. Total number of steps (batches of
samples) before declaring one epoch finished and starting the next
epoch. When training with input tensors such as TensorFlow data
tensors, the default None is equal to the number of samples in your
dataset divided by the batch size, or 1 if that cannot be determined.
If x is a tf.data dataset, and 'steps_per_epoch' is None, the epoch
will run until the input dataset is exhausted. This argument is not
supported with array inputs.
I notice you are doing binary classification. One more thing to remember when you use ImageDataGenerator is to provide class_mode, as shown below. Otherwise there will be a bug (in keras) or 50% accuracy (in tf.keras).
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
                                                           directory=train_dir,
                                                           shuffle=True,
                                                           target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                           class_mode='binary')
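If you'd rather not hard-code the totals, the iterators returned by flow_from_directory expose a samples attribute; a small sketch assuming the generators above:

import math

steps_per_epoch = math.ceil(train_data_gen.samples / batch_size)
validation_steps = math.ceil(val_data_gen.samples / batch_size)

model.fit_generator(train_data_gen,
                    steps_per_epoch=steps_per_epoch,
                    epochs=epochs,
                    validation_data=val_data_gen,
                    validation_steps=validation_steps)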

Running training the discriminator with more examples

As I understand it, one of the differences between a regular GAN and a WGAN is that we train the discriminator/critic on more examples in each epoch. If in a regular GAN we use one batch for both modules per step, in a WGAN we use five batches (or more) for the discriminator and one for the generator.
So basically we have another inner loop for the discriminator:
real_images_labels = np.ones((BATCH_SIZE, 1))
fake_images_labels = -real_images_labels
for epoch in range(epochs):
    for batch in range(NUM_BATCHES):
        for critic_iter in range(n_critic):
            random_batches_idx = np.random.randint(0, NUM_BATCHES)  # choose a random batch from the dataset
            imgs_data = dataset_list[random_batches_idx]
            c_loss_real = critic.train_on_batch(imgs_data, real_images_labels)  # update the weights after 1 batch
            noise = tf.random.normal([imgs_data.shape[0], noise_dim])  # generate noise data
            generated_images = generator(noise, training=True)
            c_loss_fake = critic.train_on_batch(generated_images, fake_images_labels)  # update the weights after 1 batch
        imgs_data = dataset_list[batch]
        noise = tf.random.normal([imgs_data.shape[0], noise_dim])  # generate noise data
        gen_loss_batch = gen_loss_batch + gan.train_on_batch(noise, real_images_labels)
The training is taking a lot of time, about 3 minutes per epoch. My idea for decreasing the training time: instead of running the forward pass n_critic times per batch, I could increase the discriminator's batch_size and run the forward pass once with a bigger batch.
I am seeking feedback: does this sound reasonable?
(I didn't paste my entire code; this is just part of it.)
Yes, it sounds reasonable. Increasing batch_size typically decreases training time, at the cost of higher memory use and lower accuracy (weaker generalization).
Having said that, you should always do trial and error with regard to batching, as extreme values may actually increase the training time.
For further discussion you can refer to this question
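As a sketch of the proposed change (hypothetical, reusing the names from the loop above, and assuming dataset_list holds numpy batches): sample n_critic batches, concatenate them, and make a single critic update. Note that one update on a batch of n_critic * BATCH_SIZE examples is not mathematically equivalent to n_critic sequential updates, so verify the training dynamics still behave:

big_labels_real = np.ones((n_critic * BATCH_SIZE, 1))
big_labels_fake = -big_labels_real

# One critic step on a single big batch instead of n_critic small steps
random_idx = np.random.randint(0, NUM_BATCHES, size=n_critic)
imgs_data = np.concatenate([dataset_list[i] for i in random_idx], axis=0)
c_loss_real = critic.train_on_batch(imgs_data, big_labels_real)

noise = tf.random.normal([imgs_data.shape[0], noise_dim])
generated_images = generator(noise, training=True)
c_loss_fake = critic.train_on_batch(generated_images, big_labels_fake)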

Train model in batches using fit_generator

My model has 100,000 training samples of images. How do I modify my code below to train it in batches? With model.fit_generator I have to specify the batching inside the generator function:
from numpy import array  # array() below comes from numpy

def data_generator(descriptions, features, n_step, max_sequence):
    # loop until we finish training
    while 1:
        # loop over photo identifiers in the dataset
        for i in range(0, len(descriptions), n_step):
            Ximages, XSeq, y = list(), list(), list()
            for j in range(i, min(len(descriptions), i + n_step)):
                image = features[j]
                # retrieve text input
                desc = descriptions[j]
                # generate input-output pairs
                in_img, in_seq, out_word = preprocess_data([desc], [image], max_sequence)
                for k in range(len(in_img)):
                    Ximages.append(in_img[k])
                    XSeq.append(in_seq[k])
                    y.append(out_word[k])
            # yield this batch of samples to the model
            yield [[array(Ximages), array(XSeq)], array(y)]
My model.fit_generator code:
model.fit_generator(data_generator(texts, train_features, 1, 150),
                    steps_per_epoch=1500, epochs=50, callbacks=callbacks_list, verbose=1)
Any assistance would be great; I'm training on a cloud 16GB V100 Tesla.
Edit: My image caption model creates a training sample for each token in the DSL (250 tokens). With a dataset of 50 images (equivalent to 12,500 training samples) and a batch size of 1, I get an OOM. With about 32 images (equivalent to 8,000 samples) and a batch size of 1, it trains just fine. My question is: can I optimize my code better, or is my only option to use multiple GPUs?
Fix:
steps_per_epoch must be equal to ceil(num_samples / batch_size), so if the dataset has 1,500 samples and the batch size is 1, steps_per_epoch should be 1500. I also reduced my LSTM sliding window from 48 to 24.
steps_per_epoch: Integer. Total number of steps (batches of samples)
to yield from generator before declaring one epoch finished and
starting the next epoch. It should typically be equal to
ceil(num_samples / batch_size). Optional for Sequence: if unspecified,
will use the len(generator) as a number of steps.
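As a worked example with the numbers from the edit above:

import math

math.ceil(12500 / 1)   # batch size 1  -> 12500 steps per epoch
math.ceil(12500 / 32)  # batch size 32 -> 391 steps per epoch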
The generator already returns batches.
Every yield is a batch, and it's entirely up to you to design the generator to batch the data the way you want. In your code, the batch size is n_step.
Here's the proper way of using generators with tf.data: make a generator that yields individual examples, create a Dataset from it, and use the batch method on that object. Tune the parameter to find the largest batch size that won't cause an OOM.
def data_generator(descriptions, features, max_sequence):
    def _gen():
        for img, seq, word in zip(*preprocess_data(descriptions, features, max_sequence)):
            yield {'image': img, 'seq': seq}, word
    return _gen

ds = tf.data.Dataset.from_generator(
    data_generator(descriptions, features, max_sequence),
    output_types=({'image': tf.float32, 'seq': tf.float32}, tf.int32),
    output_shapes=({
        'image': tf.TensorShape([blah, blah]),
        'seq': tf.TensorShape([blah, blah]),
        },
        tf.TensorShape([blah])
    )
)
ds = ds.batch(n_step)
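The batched dataset can then be passed straight to Keras. A hedged sketch: it assumes the model's inputs are named to match the 'image' and 'seq' keys above, and older TF versions may also require an explicit steps_per_epoch:

model.fit(ds, epochs=50, callbacks=callbacks_list, verbose=1)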

Can't use estimator + dataset and train for less than one epoch

TensorFlow 1.4 moves TF Dataset to core (tf.data.Dataset) and doc/tutorial suggest to use tf.estimator to train models.
However, as recommended at the end of this page, the Dataset object and its iterator must be instantiated inside the input_fn function. This means the iteration through the dataset will start over on each call to estimator.train(input_fn, steps). Thus, calling it with steps < the number of samples in an epoch will train the model on a subset of the dataset.
Hence my question: is it possible to implement something like this with Estimator + Dataset:
for i in range(num_epochs):
    # Train for some steps
    estimator.train(input_fn=train_input_fn, steps=valid_freq)

    # Evaluate on the validation set (steps=None, we evaluate on the full validation set)
    estimator.evaluate(input_fn=valid_input_fn)
without restarting the iteration over the training samples from scratch at each call to estimator.train(input_fn=train_input_fn, steps=valid_freq)?
For example, unlike here, instantiate the Dataset and its iterator outside input_fn? I tried it but it does not work because then the input (from the dataset iterator) and the model (from the estimator model_fn) are not part of the same graph.
Thanks
Related GitHub issue
I don't know any way to make the training consistent across runs of estimator.train().
However what you can do is make sure that you build the train_input_fn such that it will be random enough to obtain the same effect.
For instance, suppose you have a dataset of values [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] and you can only train on half the dataset at each call of estimator.train.
If you don't shuffle well enough, you will keep training on the values [0, 1, 2, 3, 4]:
train_size = 10
dataset = tf.data.Dataset.range(train_size)
x = dataset.make_one_shot_iterator().get_next()

sess = tf.Session()
for i in range(train_size // 2):
    print(sess.run(x))
However, if you call tf.data.Dataset.shuffle() with a buffer_size at least as large as the dataset, you will get random values. Calling estimator.train multiple times with this setup will be equivalent to calling it once with multiple epochs:
train_size = 10
dataset = tf.data.Dataset.range(train_size)
dataset = dataset.shuffle(buffer_size=train_size)
x = dataset.make_one_shot_iterator().get_next()

sess = tf.Session()
for i in range(train_size // 2):
    print(sess.run(x))
I wrote another answer to explain the importance of buffer_size here.
You can return a dataset from your input_fn. Something like:
def input_fn():
    dataset = ...
    return dataset
To run evaluation without stopping the training process, you can use tf.contrib.estimator.InMemoryEvaluatorHook.
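A hedged sketch of that hook (argument names from the TF 1.x contrib API as I remember them; double-check against your version):

# Evaluate on the full validation set every 1000 training steps,
# without tearing down the training input pipeline.
evaluator = tf.contrib.estimator.InMemoryEvaluatorHook(
    estimator, valid_input_fn, every_n_iter=1000)
estimator.train(train_input_fn, hooks=[evaluator])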

How to average summaries over multiple batches?

Assuming I have a bunch of summaries defined like:
loss = ...
tf.scalar_summary("loss", loss)
# ...
summaries = tf.merge_all_summaries()
I can evaluate the summaries tensor every few steps on the training data and pass the result to a SummaryWriter.
The result will be noisy summaries, because they're only computed on one batch.
However, I would like to compute the summaries on the entire validation dataset.
Of course, I can't pass the validation dataset as a single batch, because it would be too big.
So, I'll get summary outputs for each validation batch.
Is there a way to average those summaries so that it appears as if the summaries have been computed on the entire validation set?
Do the averaging of your measure in Python and create a new Summary object for each mean. Here is what I do:
accuracies = []
# Calculate your measure over as many batches as you need
for batch in validation_set:
    # `accuracy_op` stands for whatever op computes your measure for one batch
    accuracies.append(sess.run(accuracy_op))

# Take the mean of your measure
accuracy = np.mean(accuracies)

# Create a new Summary object with your measure
summary = tf.Summary()
summary.value.add(tag="%sAccuracy" % prefix, simple_value=accuracy)

# Add it to the Tensorboard summary writer
# Make sure to specify a step parameter to get nice graphs over time
summary_writer.add_summary(summary, global_step)
I would avoid calculating the average outside the graph.
You can use tf.train.ExponentialMovingAverage:
ema = tf.train.ExponentialMovingAverage(decay=my_decay_value, zero_debias=True)
maintain_ema_op = ema.apply(your_losses_list)
# Create an op that will update the moving averages after each training step.
with tf.control_dependencies([your_original_train_op]):
    train_op = tf.group(maintain_ema_op)
Then, use:
sess.run(train_op)
That will call maintain_ema_op because it is defined as a control dependency.
In order to get your exponential moving averages, use:
moving_average = ema.average(an_item_from_your_losses_list_above)
And retrieve its value using:
value = sess.run(moving_average)
This calculates the moving average within your calculation graph.
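Pieced together, the fragments above form roughly this sketch (the 0.99 decay is an arbitrary example value, and your_loss stands for one tensor from your_losses_list):

ema = tf.train.ExponentialMovingAverage(decay=0.99, zero_debias=True)
maintain_ema_op = ema.apply(your_losses_list)

with tf.control_dependencies([your_original_train_op]):
    train_op = tf.group(maintain_ema_op)  # each sess.run(train_op) also updates the averages

moving_average = ema.average(your_loss)  # the shadow value tracked for one loss
tf.summary.scalar('ema_loss', moving_average)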
I think it's always better to let tensorflow do the calculations.
Have a look at the streaming metrics. They have an update function to feed the information of your current batch and a function to get the averaged summary.
It's going to look somewhat like this:
accuracy = ...
streaming_accuracy, streaming_accuracy_update = tf.contrib.metrics.streaming_mean(accuracy)
streaming_accuracy_scalar = tf.summary.scalar('streaming_accuracy', streaming_accuracy)

# set up your session etc.

for i in iterations:
    for b in batches:
        sess.run([streaming_accuracy_update], feed_dict={...})

    streaming_summary = sess.run(streaming_accuracy_scalar)
    writer.add_summary(streaming_summary, i)
Also see the tensorflow documentation: https://www.tensorflow.org/versions/master/api_guides/python/contrib.metrics
and this question:
How to accumulate summary statistics in tensorflow
You can store the running sum and recompute the average after each batch, like:
loss_sum = tf.Variable(0.)
inc_op = tf.assign_add(loss_sum, loss)
clear_op = tf.assign(loss_sum, 0.)
average = loss_sum / batches
avg_summary_op = tf.scalar_summary("average_loss", average)

sess.run(clear_op)
for i in range(batches):
    sess.run([loss, inc_op])
sess.run(average)
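To actually log the averaged value, a hedged usage sketch (the writer and the epoch counter are assumptions, not part of the answer above; on TF >= 0.12 the writer is tf.summary.FileWriter):

writer = tf.train.SummaryWriter("/tmp/logs")

sess.run(clear_op)
for i in range(batches):
    sess.run([loss, inc_op])
# Evaluate the summary op once, after the sum over all batches is complete
writer.add_summary(sess.run(avg_summary_op), global_step=epoch)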
For future reference, the TensorFlow metrics API now supports this by default. For example, take a look at tf.metrics.mean_squared_error:
For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the mean_squared_error. Internally, a squared_error operation computes the element-wise square of the difference between predictions and labels. Then update_op increments total with the reduced sum of the product of weights and squared_error, and it increments count with the reduced sum of weights.
These total and count variables are added to the set of metric variables, so in practice what you would do is something like:
x_batch = tf.placeholder(...)
y_batch = tf.placeholder(...)
model_output = ...

mse, mse_update = tf.metrics.mean_squared_error(y_batch, model_output)
# This operation resets the metric internal variables to zero
metrics_init = tf.variables_initializer(
    tf.get_default_graph().get_collection(tf.GraphKeys.METRIC_VARIABLES))

with tf.Session() as sess:
    # Train...

    # On evaluation step
    sess.run(metrics_init)
    for x_eval_batch, y_eval_batch in ...:
        mse = sess.run(mse_update, feed_dict={x_batch: x_eval_batch, y_batch: y_eval_batch})
    print('Evaluation MSE:', mse)
I found one solution myself. I think it's kind of hacky and I hope there is a more elegant solution.
During setup:
valid_loss_placeholder = tf.placeholder(dtype=tf.float32, shape=[])
valid_loss_summary = tf.scalar_summary("valid loss", valid_loss_placeholder)
Or for tensorflow versions after 0.12 (change in name for tf.scalar_summary):
valid_loss_placeholder = tf.placeholder(dtype=tf.float32, shape=[])
valid_loss_summary = tf.summary.scalar("valid loss", valid_loss_placeholder)
Within training loop:
# Compute valid loss in python by doing sess.run() for each batch
# and averaging
valid_loss = ...
summary = sess.run(valid_loss_summary, {valid_loss_placeholder: valid_loss})
summary_writer.add_summary(summary, step)
As of August 2018, streaming metrics have been deprecated. However, unintuitively, all tf.metrics are streaming. So, use tf.metrics.accuracy.
However, if you want accuracy (or another metric) over only a subset of batches, you can use an exponential moving average, as in the answer by @MZHm, or reset any of the tf.metrics' internal variables by following this very informative blog post.
For quite some time I had only been saving the summary once per epoch. I never knew that the summary would then only reflect the last batch that was run.
Shocked, I looked into this problem. This is the solution I came up with (using the Dataset API):
loss = ...
train_op = ...

loss_metric, loss_metric_update = tf.metrics.mean(loss)
tf.summary.scalar('loss', loss_metric)

merged = tf.summary.merge_all()
train_writer = tf.summary.FileWriter(os.path.join(res_dir, 'train'))
test_writer = tf.summary.FileWriter(os.path.join(res_dir, 'test'))

init_local = tf.initializers.local_variables()
init_global = tf.initializers.global_variables()
sess = tf.Session()
sess.run(init_global)

def train_run(epoch):
    sess.run([dataset.train_init_op, init_local])  # train_init_op switches to the training data
    for i in range(dataset.num_train_batches):     # num_train_batches is the number of batches in the training set
        sess.run([train_op, loss_metric_update])
    summary, cur_loss = sess.run([merged, loss_metric])
    train_writer.add_summary(summary, epoch)
    return cur_loss

def test_run(epoch):
    sess.run([dataset.test_init_op, init_local])  # test_init_op switches to the test data
    for i in range(dataset.num_test_batches):     # num_test_batches is the number of batches in the test set
        sess.run(loss_metric_update)
    summary, cur_loss = sess.run([merged, loss_metric])
    test_writer.add_summary(summary, epoch)
    return cur_loss

for epoch in range(epochs):
    train_loss = train_run(epoch + 1)
    test_loss = test_run(epoch + 1)
    print("Epoch: {0:3}, loss: (train: {1:10.10f}, test: {2:10.10f})".format(epoch + 1, train_loss, test_loss))
For the summary I'm just wrapping the tensor I'm interested in with tf.metrics.mean(). For each batch run I call the metric's update operation. At the end of every epoch the metric tensor returns the correct mean of all batch results.
Don't forget to initialize local variables every time you switch between training and test data. Otherwise your train and test metrics will be near identical.
I had the same problem when I realized I had to iterate over my validation data because memory was cramped and OOM errors were flooding in.
As several of these answers say, tf.metrics has this built in, but I'm not using tf.metrics in my project. So, inspired by that, I made this:
import tensorflow as tf
import numpy as np

def batch_persistent_mean(tensor):
    # Make a variable that keeps track of the sum
    accumulator = tf.Variable(initial_value=tf.zeros_like(tensor), dtype=tf.float32)
    # Keep count of batches in accumulator (needed to estimate mean)
    batch_nums = tf.Variable(initial_value=tf.zeros_like(tensor), dtype=tf.float32)

    # Make an operation for accumulating, increasing batch count
    accumulate_op = tf.assign_add(accumulator, tensor)
    step_batch = tf.assign_add(batch_nums, 1)
    update_op = tf.group([step_batch, accumulate_op])

    eps = 1e-5
    output_tensor = accumulator / (tf.nn.relu(batch_nums - eps) + eps)
    # In regards to the tf.nn.relu, it's a hacky zero guard:
    # if batch_nums is zero the denominator is eps, otherwise it's batch_nums.

    # Make an operation to reset
    flush_op = tf.group([tf.assign(accumulator, 0), tf.assign(batch_nums, 0)])
    return output_tensor, update_op, flush_op

# Make a variable that we want to accumulate
X = tf.Variable(0., dtype=tf.float32)
# Make our persistent mean operations
Xbar, upd, flush = batch_persistent_mean(X)
Now you send Xbar to your summary, e.g. tf.scalar_summary("mean_of_x", Xbar); where you'd previously do sess.run(X), you now do sess.run(upd), and between epochs you do sess.run(flush).
Testing behaviour:
### INSERT ABOVE CODE CHUNK IN S.O. ANSWER HERE ###
with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])

    # Calculate the mean of 0 + 1 + ... + 19
    for i in range(20):
        sess.run(upd, {X: i})
    print(sess.run(Xbar), "=", np.mean(np.arange(20)))

    # Accumulate 40 more values; Xbar is now the mean of (0 + ... + 19 + 0 + ... + 39)
    for i in range(40):
        sess.run(upd, {X: i})
    print(sess.run(Xbar), "=", np.mean(np.concatenate([np.arange(20), np.arange(40)])))

    # Now flush it
    sess.run(flush)
    print("flushed. Xbar =", sess.run(Xbar))
    for i in range(40):
        sess.run(upd, {X: i})
    print(sess.run(Xbar), "=", np.mean(np.arange(40)))