Using tf.data.Dataset in Keras

I have a model written in Keras. Because I'm dealing with large files, I'm using the tf.data.Dataset API to load data and feed it into the Keras fit function. Before I call model.fit(), I reinitialize the dataset using it = ds.make_initializable_iterator() and then pass the X and y tensors that I get from it.get_next() to model.fit(). The problem is that when model.fit() reaches the end of the dataset, it does not continue training; in other words, I can only train for ONE epoch, no matter what I pass as the "epochs" argument to the fit function.
How can I tell Keras to reinitialize the iterator when it reaches the end of the dataset?

Use dataset.repeat(n_epochs) to repeat your dataset for the desired number of epochs.
The epochs argument in the model.fit function defines how many times to iterate over the dataset. If you do not repeat the dataset, however, you will run out of samples after the first epoch. You can use dataset.repeat(n_epochs) to repeat for n_epochs, or you can use dataset.repeat() to repeat infinitely.
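For instance, a minimal sketch against the TF 1.x workflow in the question (ds, model, num_samples, batch_size, and n_epochs are placeholders); with a repeated dataset a one-shot iterator suffices, since there is nothing left to reinitialize:

import tensorflow as tf

# Repeating means the iterator is not exhausted after a single pass.
ds = ds.repeat(n_epochs).batch(batch_size)

it = ds.make_one_shot_iterator()
x_batch, y_batch = it.get_next()

# steps_per_epoch tells Keras where each epoch ends within the stream.
model.fit(x_batch, y_batch,
          epochs=n_epochs,
          steps_per_epoch=num_samples // batch_size)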

Related

Encoder Decoder for time series forecasting

I want to predict 7 days ahead from a training series of 55 days. I tried to apply the models given here and here, but I am getting an output value of 1 for all 7 days.
I am also confused about how to feed a time series into an encoder-decoder and how to code it; I tried based on my understanding.
model.add(LSTM(150, input_shape=(None, 1)))
model.add(RepeatVector(8))
model.add(LSTM(150, return_sequences=True))
model.add(TimeDistributed(Dense(1, activation='softmax')))
model.compile(loss='mse', optimizer='adam')
for i in range(7):
    x = df[i*7:(i+1)*7]
    y = df[(i+1)*7:(i+2)*7]
    x = np.array(x)
    x = np.insert(x, 0, len(x))
    x = x.reshape(1, len(x), 1)
    y = np.array(y)
    y = np.insert(y, 0, len(y))
    y = y.reshape(1, len(y), 1)
    model.fit(x, y, epochs=1, verbose=2)
After training, I predict 7 days ahead from the entire training sequence.
Second, I tried the approach from the second link:
# functions define_models and predict_sequence are the same as in the link
for i in range(0, 47):
    x1 = df[i:i+7]
    print(len(x1))
    x2 = df[i+1:i+8]
    print(len(x2))
    y = df[i+1:i+8]
    x1 = np.array(x1)
    x1 = np.insert(x1, 0, len(x1))
    print(len(x1))
    x1 = x1.reshape(len(x1), 1, 1)
    x2 = np.array(x2)
    x2 = np.insert(x2, 0, 0)
    print(len(x2))
    x2 = x2.reshape(len(x2), 1, 1)
    y = np.array(y)
    y = np.insert(y, 0, len(y))
    y = y.reshape(len(y), 1, 1)
    model.fit([x1, x2], y, epochs=1)
This is also giving 1 as the output.
I don't know exactly what x2 should be here.
Please correct me where I am wrong.
The first problem is with your training procedure; to train a deep network, you should follow these steps:
Create a proper dataset, by which I mean an instance of a tf.data.Dataset object. To create one, first organize your data in a NumPy array with shape (maximum sequence length, batch size, size of each record). In your case, the X array containing the training data should have shape (7, 1, 1), and the Y array containing the labels should have shape (7, 1).
After organizing the data in this format, you can create a tf.data.Dataset instance using tf.data.Dataset.from_tensor_slices().
Call model.fit() with the created tf.data.Dataset instance, specifying a suitable number of epochs greater than 1. This parameter sets how many times the network should iterate over the dataset during training; its value is somewhat arbitrary, so try different values to find the one that best fits your problem.
Note that with this process you no longer need a for-loop; the looping happens inside the model.fit function.
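As a rough sketch of those steps (the array names and shapes below are illustrative placeholders, not taken from the question's data):

import numpy as np
import tensorflow as tf

# Organize all training windows into arrays first (shapes illustrative).
X = np.asarray(input_windows, dtype=np.float32)   # e.g. (n_samples, 7, 1)
Y = np.asarray(target_windows, dtype=np.float32)  # e.g. (n_samples, 7, 1)

# One dataset over every sample; from_tensor_slices slices along axis 0.
dataset = tf.data.Dataset.from_tensor_slices((X, Y)).batch(1).repeat()

# A single fit call replaces the manual for-loop; epochs > 1 makes the
# network iterate over the whole dataset several times.
model.fit(dataset, epochs=50, steps_per_epoch=len(X))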
For more information about how to implement and train an encoder-decoder model in TensorFlow take a look at the official sample for neural machine translation.

Keras: Custom loss function with training data not directly related to model

I am trying to convert my CNN written with tensorflow layers to use the Keras API in tensorflow (I am using the Keras API provided by TF 1.x), and am having trouble writing a custom loss function to train the model.
According to this guide, a custom loss function is expected to take the arguments (y_true, y_pred):
https://www.tensorflow.org/guide/keras/train_and_evaluate#custom_losses
def basic_loss_function(y_true, y_pred):
    return ...
However, in every example I have seen, y_true is somehow directly related to the model (in the simple case it is the output of the network). In my problem, this is not the case. How do I implement this if my loss function depends on some training data that is unrelated to the tensors of the model?
To be concrete, here is my problem:
I am trying to learn an image embedding trained on pairs of images. My training data includes image pairs and annotations of matching points between the image pairs (image coordinates). The input feature is only the image pairs, and the network is trained in a siamese configuration.
I am able to implement this with tensorflow layers and train it successfully with tensorflow estimators.
My current implementation builds a tf.data.Dataset from a large database of TFRecords, where the features are a dictionary containing the images and arrays of matching points. Previously I could easily feed these arrays of image coordinates to the loss function, but here it is unclear how to do so.
There is a hack I often use: calculate the loss within the model itself, by means of Lambda layers. (This helps when the loss is independent of the true data, for instance, and the model doesn't really have an output to be compared.)
In a functional API model:
def loss_calc(x):
    loss_input_1, loss_input_2 = x  # arbitrary inputs, you choose
                                    # according to what you gave to the Lambda layer

    # here you use some external data that doesn't relate to the samples
    externalData = K.constant(external_numpy_data)

    # calculate the loss
    return the_loss
Call it on the outputs of the model itself (the tensor(s) that are used in your loss):
loss = Lambda(loss_calc)([model_output_1, model_output_2])
Create the model outputting the loss instead of the outputs:
model = Model(inputs, loss)
Create a dummy keras loss function for compilation:
def dummy_loss(y_true, y_pred):
    return y_pred  # y_pred is the loss itself, the output of the model above
model.compile(loss = dummy_loss, ....)
Use any dummy array with the correct number of samples for training; it will be ignored:
model.fit(your_inputs, np.zeros((number_of_samples,)), ...)
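Assembled in one place, a minimal self-contained sketch of this pattern (the two Dense heads, the input shape, and the loss math are all placeholders standing in for your real model and loss):

import numpy as np
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model

external_numpy_data = np.ones((1,), dtype=np.float32)  # placeholder external data

def loss_calc(x):
    out_1, out_2 = x
    external = K.constant(external_numpy_data)
    # placeholder loss: any tensor math over the outputs and external data
    return K.mean(K.square(out_1 - out_2), axis=-1, keepdims=True) * external

inputs = Input(shape=(16,))  # placeholder input size
out_1 = Dense(8)(inputs)     # placeholder heads standing in for real outputs
out_2 = Dense(8)(inputs)

loss = Lambda(loss_calc)([out_1, out_2])
model = Model(inputs, loss)

def dummy_loss(y_true, y_pred):
    return y_pred  # the model's output already is the loss

model.compile(optimizer='adam', loss=dummy_loss)

# The targets are ignored; only the sample count has to match.
model.fit(np.random.rand(32, 16), np.zeros((32, 1)), epochs=1)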
Another way of doing it is using a custom training loop.
This is much more work, though.
Although you're using TF1, you can still turn on eager execution at the very beginning of your code (tf.enable_eager_execution()) and do things the way they're done in TF2.
Follow the tutorial for custom training loops: https://www.tensorflow.org/tutorials/customization/custom_training_walkthrough
Here, you calculate the gradients yourself, of any result with respect to whatever you want. This means you don't need to follow Keras's standard way of training.
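A minimal sketch of one such training step with eager execution on (the model, the batch, and the loss formula are placeholders):

import numpy as np
import tensorflow as tf

tf.enable_eager_execution()  # must run before anything else (TF1 only)

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])  # placeholder model
optimizer = tf.train.AdamOptimizer()

x = np.random.rand(32, 4).astype(np.float32)       # placeholder input batch
coords = np.random.rand(32, 1).astype(np.float32)  # placeholder external data

with tf.GradientTape() as tape:
    preds = model(x)
    # the loss may mix model outputs with data that is not a label at all
    loss = tf.reduce_mean(tf.square(preds - tf.constant(coords)))

grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))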
Finally, you can use the approach you suggested of model.add_loss.
In this case, you calculate the loss exactly the same way as in the first approach above, and pass this loss tensor to add_loss.
You can probably compile a model with loss=None then (not sure), because you're going to use other losses, not the standard one.
In this case, your model's output will probably be None too, and you should fit with y=None.
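A sketch of this third approach, mirroring the answer's own caveats about loss=None and y=None (shapes and loss math are placeholders as before):

import numpy as np
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inputs = Input(shape=(16,))  # placeholder shapes
out_1 = Dense(8)(inputs)
out_2 = Dense(8)(inputs)
model = Model(inputs, [out_1, out_2])

# the same kind of loss tensor as before, registered via add_loss
model.add_loss(K.mean(K.square(out_1 - out_2)))

model.compile(optimizer='adam', loss=None)
model.fit(np.random.rand(32, 16), y=None, epochs=1)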

How to get gradients during fit or fit_generator in Keras

I need to monitor the gradients in real time during training when using the fit or fit_generator methods. This should be achievable with a custom callback function. However, I don't know how to access the gradients correctly. The attribute model.optimizer.update returns gradient tensors, but they need to be fed with data. What I want are the values of the gradients that were applied in the last batch during training.
The following answer does not give the solution I need, because it just defines a function that calculates the gradients by feeding extra data:
Getting gradient of model output w.r.t weights using Keras
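For reference, the linked answer's approach amounts to a symbolic function along these lines (model, x_batch, and y_batch are placeholders, and the exact attribute names vary across Keras versions); it recomputes gradients on whatever data you feed it, rather than exposing the gradients actually applied during training:

import numpy as np
import tensorflow.keras.backend as K

# gradients of the compiled loss w.r.t. the trainable weights
grads = K.gradients(model.total_loss, model.trainable_weights)

# a callable that must be fed inputs, targets, and sample weights
get_gradients = K.function(
    model.inputs + model.targets + model.sample_weights, grads)

gradient_values = get_gradients([x_batch, y_batch, np.ones(len(x_batch))])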

The established way to use the TF Dataset API in Keras is to feed `model.fit` with `make_one_shot_iterator()`, but this iterator is only good for one epoch

Edit:
To clarify why this question is different from the suggested duplicates: it follows up on them, asking what exactly Keras is doing with the techniques described there. The suggested duplicates use the Dataset API's make_one_shot_iterator() in model.fit; my follow-up is that make_one_shot_iterator() can only go through the dataset once, yet the solutions given specify several epochs.
This is a follow-up to these SO questions:
How to Properly Combine TensorFlow's Dataset API and Keras?
Tensorflow keras with tf dataset input
Using tf.data.Dataset as training input to Keras model NOT working
Where "Starting from Tensorflow 1.9, one can pass tf.data.Dataset object directly into keras.Model.fit() and it would act similar to fit_generator". Each example has a TF dataset one shot iterator fed into Kera's model.fit.
An example is given below
# Load mnist training data
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
training_set = tfdata_generator(x_train, y_train, is_training=True)
model = # your keras model here
model.fit(
    training_set.make_one_shot_iterator(),
    steps_per_epoch=len(x_train) // 128,
    epochs=5,
    verbose=1)
However, according to the Tensorflow Dataset API guide (https://www.tensorflow.org/guide/datasets):
A one-shot iterator is the simplest form of iterator, which only supports iterating once through a dataset
So it's only good for one epoch. However, the code in those SO questions specifies several epochs, with the example above specifying 5.
Is there any explanation for this contradiction? Does Keras somehow know that when the one-shot iterator has gone through the dataset, it can re-initialize and shuffle the data?
You can simply pass the dataset object to model.fit; Keras will handle the iteration.
Consider one of the pre-made datasets:
train, test = tf.keras.datasets.cifar10.load_data()
dataset = tf.data.Dataset.from_tensor_slices((train[0], train[1]))
This will create a dataset object from the training data of the cifar10 dataset. In this case a parse function isn't needed; you will need one if you create the dataset from paths to images or from a list of NumPy arrays:
dataset = tf.data.Dataset.from_tensor_slices((image_path, labels_path))
In that case you need a function to load the actual data from a filename. A NumPy array can be handled the same way, just without tf.read_file:
def parse_func(filename):
    f = tf.read_file(filename)
    image = tf.image.decode_image(f)
    label = ...  # get label from filename
    return image, label
Then you can shuffle, batch, and map any parse function over this dataset. The shuffle buffer controls how many examples are preloaded. repeat controls the epoch count and is best called without an argument, so the dataset repeats indefinitely. You can either use the plain batch function or combine mapping and batching:
dataset = dataset.shuffle(buffer_size).repeat()
dataset = dataset.apply(tf.data.experimental.map_and_batch(
    map_func=parse_func,
    batch_size=batch_size,
    num_parallel_batches=num_parallel_batches))
Then the dataset object can be passed to model.fit:
model.fit(dataset, epochs=epochs, steps_per_epoch=steps_per_epoch)
Note that steps_per_epoch is a necessary parameter in this case; it defines when a new epoch starts, so you have to know the epoch size in advance.
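End to end, the answer's pieces fit together roughly like this (the batch size, buffer size, and epoch count are arbitrary placeholders, and model is assumed to be a compiled Keras model):

import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()

dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
dataset = dataset.shuffle(buffer_size=10000).repeat().batch(128)

model.fit(dataset,
          epochs=5,
          steps_per_epoch=len(x_train) // 128)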

Training Estimators less than one epoch using dataset API?

I am trying to train a model on a large dataset. I would like to run the evaluation step multiple times before one epoch of training has completed. Looking at the implementation of the Dataset API with Estimators, it looks like every time I restart training after an evaluation step, the Estimator creates a fresh dataset from scratch, so training never covers the full data.
I have written an input function very similar to the one provided on the tensorflow website:
def train_input_fn(features, labels, batch_size):
    """An input function for training"""
    # Convert the inputs to a Dataset.
    dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))
    # Repeat once and batch the examples.
    dataset = dataset.repeat(1).batch(batch_size)
    # Return the read end of the pipeline.
    return dataset
I then use the tf.estimator.Estimator.train to call my input function. I call the above input function with the following method.
classifier.train(input_fn=lambda: train_input_fn(features, labels, batch_size),
                 steps=n_steps)
where n_steps is a number smaller than the total steps needed to complete one epoch.
I then call an evaluation function like this:
classifier.evaluate(input_fn=lambda: eval_input_fn())
I want to run both steps in a loop. Every time the loop reaches the training step, it reinitializes the dataset in train_input_fn, so training is only ever applied to the first n_steps of the training data.
If you want to evaluate multiple times during training, you can check InMemoryEvaluatorHook.
You can refer to this discussion about train_and_evaluate and InMemoryEvaluatorHook for more details on how to use them well.
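A sketch of how the hook is wired in (the input functions and step counts are placeholders; in recent TF 1.x releases the hook lives under tf.estimator.experimental, in older ones under tf.contrib.estimator):

import tensorflow as tf

evaluator = tf.estimator.experimental.InMemoryEvaluatorHook(
    estimator=classifier,
    input_fn=lambda: eval_input_fn(),
    every_n_iter=n_steps)  # evaluate every n_steps training steps

# a single train call; evaluation runs in-process without restarting
# the training input pipeline (and its dataset) each time
classifier.train(
    input_fn=lambda: train_input_fn(features, labels, batch_size),
    steps=total_steps,
    hooks=[evaluator])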