How to get labels when using model.predict() - TensorFlow

In my project, I have a number of cases where I have a Dataset instance and I need to get predictions from some model on every item in the dataset.
The model.predict() API is optimized perfectly for this, as shown in the documentation. However, there seems to be one major catch. I also happen to need the labels to compare with the predicted values, i.e. the dataset contains x,y pairs, and I'd like to end up with (y_predicted, y) pairs after the prediction is complete. This does not seem to be possible with the predict() API though, and I can't think of a clean way to 'split' the dataset so that the x's are fed into the model and the y's are retained to be joined back up with the predicted y's.
EDIT: I know it's quite simple to do by iterating over the dataset manually and calling the model directly, e.g.
for x, y in dataset:
    y_pred = model(x)
    result.append((y, y_pred))
However, this seems like it will be a fair bit slower than using the inbuilt predict(), as TensorFlow won't be able to multi-thread/optimize the input pipeline.
Does anyone have a good way to accomplish this?

Given the concerns you mentioned, it may be best to override predict to suit your needs. You don't actually need to override that whole function, though; it is enough to override predict_step, which is called by predict. Just use this class instead of Model:
class MyModel(tf.keras.Model):
    def predict_step(self, data):
        x, y = data
        return self(x, training=False), y
If your model is currently Sequential, inherit from that instead, as sketched below. Basically, the only change I made from the default implementation is to append , y to the result of the model call.
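For reference, a minimal sketch of the Sequential variant, assuming the same (input, label) batches (the name MySequential is just illustrative):
class MySequential(tf.keras.Sequential):
    def predict_step(self, data):
        x, y = data
        return self(x, training=False), y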
Note that this also makes some assumptions, such as your dataset consisting of (input, label) batch pairs. You may need to adapt it slightly to your needs. Here is a minimal example:
import tensorflow as tf
import numpy as np
(imgs, lbls), (te_imgs, te_lbls) = tf.keras.datasets.mnist.load_data()
imgs = imgs.astype(np.float32).reshape((-1, 784)) / 255.
te_imgs = te_imgs.astype(np.float32).reshape((-1, 784)) / 255.
lbls = lbls.astype(np.int32)
te_lbls = te_lbls.astype(np.int32)
tr_data = tf.data.Dataset.from_tensor_slices((imgs, lbls)).shuffle(60000).batch(128)
te_data = tf.data.Dataset.from_tensor_slices((te_imgs, te_lbls)).batch(128)
class MyModel(tf.keras.Model):
    def predict_step(self, data):
        x, y = data
        return self(x, training=False), y
inp = tf.keras.Input((784,))
logits = tf.keras.layers.Dense(10)(inp)
model = MyModel(inp, logits)
opt = tf.keras.optimizers.Adam()
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(loss=loss, optimizer=opt)
something = model.predict(te_data)
print(something[0].shape, something[1].shape)
This prints (10000, 10) (10000,) -- predict now returns a tuple of (outputs, labels); this can be confirmed by inspecting the returned labels and comparing them to the images in the test set.
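As a quick usage sketch, the returned pair can then be compared directly, e.g. to compute the test accuracy:
preds, labels = something
acc = (preds.argmax(axis=1) == labels).mean()  # fraction of correctly classified test images
print('test accuracy:', acc)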

Related

Using Tensorflow Dataset from_generator() to create multi Input/Output with Custom Generator and ImageDataGenerator

I am trying to scale up my model, which uses a "cluster loss" extension. The implementation works so far on MNIST, but I would like to benefit from data augmentation and multi-processing for the real dataset.
In short, the network follows work done on the "centre loss", and resembles a Siamese network a bit. The important part of the architecture is that the model has 2 inputs and 2 outputs. Therefore, I implemented a custom generator to feed the model as follows:
def my_generator(stop):
    i = 0
    while i < stop:
        batch = train_gen.next()
        img = batch[0]
        labels = batch[1]
        labels_size = np.shape(labels)
        cluster = np.zeros(labels_size)
        x = [img, labels]
        y = [labels, cluster]
        yield x, y
        i += 1
which calls the generator ("train_gen") defined as follows:
generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, horizontal_flip=True)
train_gen = generator.flow_from_dataframe(df, x_col='img_path', y_col='label',
                                          class_mode='categorical',
                                          target_size=(32, 32),
                                          batch_size=batch_size)
The generator works if I set only one worker in the fit function, but that is obviously painfully slow... So I tried to use the recommended tf.data API from TensorFlow (tf.data.Dataset.from_generator) to fit my model, but setting it up as follows,
ds = tf.data.Dataset.from_generator(my_generator,
                                    args=[num_iter],
                                    output_types=([tf.float32, tf.float32], [tf.float32, tf.float32]))
I got the following error:
TypeError: Cannot convert value [tf.float32, tf.float32] to a Tensorflow DType.
From there, I tried multiple things, following this post.
For example, trying to return tuples instead of arrays:
x = (img, labels)
y = (labels, cluster)
But I got:
ValueError: as_list() is not defined on an unknown TensorShape
Does anyone have experience with this? I am not sure I understand the error, and I am thinking that I could perhaps change the "output_types" argument, but TensorFlow has no "list" or "tuple" DType argument.
Here is a link to my code, which constructs a small image dataset from cifar10 to feed a toy model.
I do not think your generator works as you expect. Each time it is called, it sets i = 0, and the code after
        yield x, y
namely the i += 1, never executes. Put a print statement in as below
        yield x, y
        i += 1
        print('the value of i is ', i)
and you will see it never executes.
The above is true if you execute
x, y = next(my_generator(2))
which is how generators are typically consumed. However, if you execute
x, y = my_generator(2)
then the i += 1 statement does execute, because unpacking runs the generator to exhaustion. Normally you use a generator with next(my_generator); model.fit, I believe, gets the next batch by calling next() on the generator you specify.
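As an aside on the original TypeError: tf.data generally treats Python lists as values to be converted into a single tensor, while tuples denote nested structure. So a possible fix (an untested sketch) is to yield tuples from the generator and mirror that structure in output_types; you may also need to pass a matching output_shapes argument so that the static shapes are known downstream:
# inside the generator, yield tuples rather than lists
x = (img, labels)
y = (labels, cluster)
# and describe the same nested structure with tuples of DTypes
ds = tf.data.Dataset.from_generator(my_generator,
                                    args=[num_iter],
                                    output_types=((tf.float32, tf.float32), (tf.float32, tf.float32)))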

tf.keras.layers.BatchNormalization with trainable=False appears to not update its internal moving mean and variance

I am trying to find out how exactly the BatchNormalization layer behaves in TensorFlow. I came up with the following piece of code, which to the best of my knowledge should be a perfectly valid Keras model; however, the mean and variance of BatchNormalization do not appear to be updated.
From the docs (https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization):
in the case of the BatchNormalization layer, setting trainable = False on the layer means that the layer will be subsequently run in inference mode (meaning that it will use the moving mean and the moving variance to normalize the current batch, rather than using the mean and variance of the current batch).
I expect the model to return a different value with each subsequent predict call.
What I see, however, are the exact same values returned 10 times.
Can anyone explain to me why the BatchNormalization layer does not update its internal values?
import tensorflow as tf
import numpy as np

if __name__ == '__main__':
    np.random.seed(1)
    x = np.random.randn(3, 5) * 5 + 0.3

    bn = tf.keras.layers.BatchNormalization(trainable=False, epsilon=1e-9)
    z = input = tf.keras.layers.Input([5])
    z = bn(z)

    model = tf.keras.Model(inputs=input, outputs=z)

    for i in range(10):
        print(x)
        print(model.predict(x))
        print()
I use TensorFlow 2.1.0.
Okay, I found the mistake in my assumptions. The moving average is updated during training, not during inference as I thought. This makes perfect sense, as updating the moving averages during inference would likely result in an unstable production model (for example, a long sequence of highly pathological input samples, e.g. ones whose generating distribution differs drastically from the one on which the network was trained, could potentially bias the network and result in worse performance on valid input samples).
The trainable parameter is useful when you're fine-tuning a pretrained model and want to freeze some of the layers of the network even during training, because when you call model.predict(x) (or even model(x) or model(x, training=False)), the layer automatically uses the moving averages instead of the batch averages.
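For instance, freezing the layers of a pretrained model before fine-tuning might look like this (a sketch; base_model is a hypothetical pretrained model):
for layer in base_model.layers:
    layer.trainable = False  # per the docs quoted above, BatchNormalization layers then run in inference mode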
The longer example below demonstrates this clearly:
import tensorflow as tf
import numpy as np

if __name__ == '__main__':
    np.random.seed(1)
    x = np.random.randn(10, 5) * 5 + 0.3

    z = input = tf.keras.layers.Input([5])
    z = tf.keras.layers.BatchNormalization(trainable=True, epsilon=1e-9, momentum=0.99)(z)

    model = tf.keras.Model(inputs=input, outputs=z)
    # a dummy loss function
    model.compile(loss=lambda x, y: (x - y) ** 2)
    # a dummy fit just to update the batchnorm moving averages
    model.fit(x, x, batch_size=3, epochs=10)

    # the first predict uses the moving averages from training
    pred = model(x).numpy()
    print(pred.mean(axis=0))
    print(pred.var(axis=0))
    print()

    # outputs the same thing as the previous predict
    pred = model(x).numpy()
    print(pred.mean(axis=0))
    print(pred.var(axis=0))
    print()

    # here, calling the model with training=True results in an update of the moving averages;
    # furthermore, it uses the batch mean and variance as in training,
    # so the result is very different
    pred = model(x, training=True).numpy()
    print(pred.mean(axis=0))
    print(pred.var(axis=0))
    print()

    # here we see again that the moving averages are used, but they differ slightly from
    # the previous call, as expected
    pred = model(x).numpy()
    print(pred.mean(axis=0))
    print(pred.var(axis=0))
    print()
In the end, I found that the documentation (https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization) mentions this:
When performing inference using a model containing batch normalization, it is generally (though not always) desirable to use accumulated statistics rather than mini-batch statistics. This is accomplished by passing training=False when calling the model, or using model.predict.
Hopefully this will help someone with a similar misunderstanding in the future.

How to use the TensorFlow dataset API with unknown shapes properly?

I've been trying for several hours to complete this task with no success.
I have a very large dataset in which each row is a sequence: the leading elements are the input X and the last element is the label Y.
I want to split this data into X and Y (and pass Y to tf.to_categorical) using the tf.data.Dataset API, but unfortunately every attempt of mine to use it has ended up with some kind of error.
How do I use tf.data.Dataset to:
Split each row into x and y.
Convert Y to categorical with tf.to_categorical.
Split the dataset into batches.
Feed my model with the dataset.
My current attempt:
def map_sequence():
    for sequence in input_sequences:
        yield sequence[:-1], keras.utils.to_categorical(sequence[-1], total_words)

dataset = tf.data.Dataset.from_generator(map_sequence,
                                         (tf.int32, tf.int32),
                                         (tf.TensorShape(title_length-1), tf.TensorShape(total_words)))
But when I try to train my model with the following code:
inputs = keras.layers.Input(shape=(title_length-1, ))
x = keras.layers.Embedding(total_words, 32)(inputs)
x = keras.layers.Bidirectional(keras.layers.LSTM(64, return_sequences=True))(x)
x = keras.layers.Bidirectional(keras.layers.LSTM(64))(x)
predictions = keras.layers.Dense(total_words, activation='softmax')(x)
model = keras.Model(inputs=inputs, outputs=predictions)
model.compile('Adam', 'categorical_crossentropy', metrics=['acc'])
model.fit(dataset)
I am getting this error: ValueError: Shapes (32954, 1) and (65, 32954) are incompatible
I think you have a similar problem as in this question. Keras expects the dataset that you give it to produce batches, not individual examples. Since you are giving it two one-dimensional vectors at a time, Keras interprets each of them as a batch of examples with one feature. So your X data, which has 65 elements, is interpreted as a batch of 65 examples with a single feature (a 65x1 tensor). This fixes the batch size to 65. The output of the model then has shape 65x32,954 (which I assume is the value of total_words). But your Y vector, with 32,954 elements, is again interpreted as a batch of 32,954 examples with one feature (a 32,954x1 tensor). These two things don't match, hence the error. You should be able to fix it by simply calling batch on the dataset before passing it to fit, as sketched below.
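A minimal sketch of that fix (the batch size of 32 is arbitrary):
dataset = dataset.batch(32)  # yield batches of (sequence, one-hot label) pairs
model.fit(dataset)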
In any case, if your input_sequences is a NumPy array, as it seems to be, your method of producing the dataset is not really good, as using a generator will be really slow. This is a better way to do the same:
def map_sequence(sequence):
    # Using tf.one_hot instead of keras.utils.to_categorical
    # because we are working with TensorFlow tensors now
    return sequence[:-1], tf.one_hot(sequence[-1], total_words)

dataset = tf.data.Dataset.from_tensor_slices(input_sequences)
dataset = dataset.map(map_sequence)
dataset = dataset.batch(batch_size)

How to create a custom layer in Keras with 'stateful' variables/tensors?

I would like to ask for some help with creating my custom layer.
What I am trying to do is actually quite simple: generating an output layer with 'stateful' variables, i.e. tensors whose value is updated at each batch.
In order to make everything more clear, here is a snippet of what I would like to do:
def call(self, inputs):
    c = self.constant
    m = self.extra_constant

    update = inputs * m + c
    X_new = self.X_old + update

    outputs = X_new
    self.X_old = X_new

    return outputs
The idea here is quite simple:
X_old is initialized to 0 in def __init__(self, ...)
update is computed as a function of the inputs to the layer
the output of the layer is computed (i.e. X_new)
the value of X_old is set equal to X_new so that, at the next batch, X_old is no longer equal to zero but equal to X_new from the previous batch.
I have found out that K.update does the job, as shown in the example:
X_new = K.update(self.X_old, self.X_old + update)
The problem here is that, if I then try to define the outputs of the layer as:
outputs = X_new
return outputs
I will receive the following error when I try model.fit():
ValueError: An operation has `None` for gradient. Please make sure that all of your ops have
gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
And I keep getting this error even though I set layer.trainable = False and did not define any bias or weights for the layer. On the other hand, if I just do self.X_old = X_new, the value of X_old does not get updated.
Do you guys have a solution to implement this? I believe it should not be that hard, since stateful RNNs have a 'similar' functioning.
Thanks in advance for your help!
Defining a custom layer can become confusing sometimes. Some of the methods that you override are going to be called once, but they give you the impression that, just like in many other OO libraries/frameworks, they are going to be called many times.
Here is what I mean: when you define a layer and use it in a model, the Python code that you write for overriding the call method is not going to be called directly in the forward or backward passes. Instead, it's called only once, when Keras traces the model, to compile the Python code into a computational graph, and that graph, through which the tensors flow, is what does the computations during training and prediction.
That's why, if you want to debug your model, putting in a print statement won't work; you need to use tf.print to add a print command to the graph.
It is the same situation with the state variable you want to have. Instead of simply assigning old + update to new, you need to call a Keras function that adds that operation to the graph.
And note that tensors are immutable, so you need to define the state as a tf.Variable in the __init__ method.
So I believe this code is more like what you're looking for:
class CustomLayer(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super(CustomLayer, self).__init__(**kwargs)
        # the state is per-feature (shape (3,)) so that the tf.reduce_sum
        # in call, which also produces shape (3,), can be assigned to it
        self.state = tf.Variable(tf.zeros((3,), 'float32'))
        self.constant = tf.constant([[1, 1, 1], [1, 0, -1], [-1, 0, 1]], 'float32')
        self.extra_constant = tf.constant([[1, 1, 1], [1, 0, -1], [-1, 0, 1]], 'float32')
        self.trainable = False

    def call(self, X):
        m = self.constant
        c = self.extra_constant
        outputs = self.state + tf.matmul(X, m) + c
        tf.keras.backend.update(self.state, tf.reduce_sum(outputs, axis=0))
        return outputs
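As a quick sketch of how the state evolves (with the (3, 3) constants above, this toy layer expects input batches of shape (3, 3)):
layer = CustomLayer()
x = tf.ones((3, 3), 'float32')
print(layer(x))  # the state is still all zeros on the first call
print(layer(x))  # differs: the state now holds the column sums from the first call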

Train with target data generated within the model

How can I get the loss function used by tf.keras.Model.fit(x, y) to compare two outputs within the graph instead of one output with externally supplied target data, y?
The manual says you can use tensors for the target values, which sounds like what I want, but then the inputs also need to be tensors. My inputs are NumPy arrays, though, and I don't think I should have to change that.
1 - Easy, best - maybe not good for memory
Why not just precompute the expected targets for the loss?
new_y_train = non_trainable_ops_model.predict(original_y_train)
nn_model.fit(x_train, new_y_train)
This is definitely the best way if your memory can handle it. Simpler model, faster training.
You can even save/load the new data:
np.save(name, new_y_train)
new_y_train = np.load(name)
2 - Make the model output the loss and use a dummy loss for compiling
Losses:
def dummy_loss(true, pred):
    return pred

def true_loss(x):
    true, pred = x
    return loss_function(true, pred)  # you can probably import loss_function from keras.losses
Model:
# given
nn_model = create_nn_model()
non_trainable_ops_model = create_nto_model()

nn_input = Input(nn_input_shape)
nto_input = Input(nto_input_shape)

nn_outputs = nn_model(nn_input)
nto_outputs = non_trainable_ops_model(nto_input)

loss = Lambda(true_loss)([nto_outputs, nn_outputs])

training_model = Model([nn_input, nto_input], loss)
training_model.compile(loss=dummy_loss, ...)

# the np.zeros target is just a placeholder: dummy_loss ignores it and
# returns the model's own 'loss' output
training_model.fit([nn_x_train, nto_x_train], np.zeros((len(nn_x_train),)))
3 - Use model.add_loss instead of compiling a loss
Following the same setup as in the previous option, you can:
training_model = Model([nn_input, nto_input], nn_outputs)
loss = true_loss([nto_outputs, nn_outputs])
training_model.add_loss(loss)
training_model.compile(loss=None, ...)
training_model.fit([nn_x_train, nto_x_train], None)
4 - Enable eager execution and make custom training loops
https://www.tensorflow.org/tutorials/customization/custom_training_walkthrough
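For completeness, a minimal (untested) sketch of such a loop, assuming the same nn_model and non_trainable_ops_model as above, a loss_function such as one imported from keras.losses, and hypothetical num_epochs and train_batches:
optimizer = tf.keras.optimizers.Adam()

for epoch in range(num_epochs):
    for nn_x, nto_x in train_batches:  # e.g. a tf.data.Dataset of input pairs
        targets = non_trainable_ops_model(nto_x)  # target data generated within the model
        with tf.GradientTape() as tape:
            preds = nn_model(nn_x, training=True)
            loss = loss_function(targets, preds)
        grads = tape.gradient(loss, nn_model.trainable_variables)
        optimizer.apply_gradients(zip(grads, nn_model.trainable_variables))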