Do Keras preprocessing layers apply to the validation set?

I was reading the data augmentation tutorial for Keras, which allows one to make the preprocessing layers part of the model:
model = tf.keras.Sequential([
    resize_and_rescale,
    data_augmentation,
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    # Rest of your model
])
I'm wondering whether one or both of the resize_and_rescale and data_augmentation layers are also applied to the validation set during training?

It depends on which type of layer you are using. For example, the Resizing and Rescaling layers are applied even during inference mode, that is, they would also be applied to the validation data in model.fit. For other augmentation layers, like the RandomFlip layer, the documentation states:
During inference time, the output will be identical to input.
So you have to look up this information for the type of layer you are using. Documentation is here. From what I could gather, I think only the resizing and rescaling layers remain active during inference mode.
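As a quick sanity check (a minimal sketch of my own, assuming the standard Keras preprocessing layers; on older TF versions they live under tf.keras.layers.experimental.preprocessing), you can call the layers directly with the training flag and compare outputs:
import tensorflow as tf

rescale = tf.keras.layers.Rescaling(1.0 / 255)    # always active
flip = tf.keras.layers.RandomFlip("horizontal")   # active only while training

image = tf.random.uniform((1, 8, 8, 3), maxval=255.0)

# Rescaling transforms the image regardless of the training flag.
print(tf.reduce_max(rescale(image, training=False)).numpy())  # <= 1.0

# RandomFlip is an identity map in inference mode.
print(bool(tf.reduce_all(flip(image, training=False) == image)))  # True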

Is a validation curve slightly greater or lower than the training curve good in CNN models?

Can you tell me which of the two is a good validation vs. train plot?
Both of them are trained with the same Keras sequential layers, but the second one is trained using a larger number of samples, i.e. an augmented dataset.
I'm a little bit confused about the zigzags in the first plot; otherwise I think it is better than the second.
In the second plot there are no zigzags, but the validation accuracy tends to be a little higher than the training accuracy. Is that overfitting, or is it acceptable?
It is an image detection model where the first model's dataset size is 5170 samples and the second is 9743 samples.
The layers defined for building the model:
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
Can the model be improved?
From the graphs, the second one, where you have more samples, is better. The reason is that with more samples the model is trained on a much wider probability distribution of images, so when validation is run you have a better chance of correctly classifying the image.
You have a lot of dropout in your model. This is good for preventing overfitting; however, it will lower the training accuracy relative to the validation accuracy. Your model seems to be doing well. It might improve if you add additional convolution/max-pooling blocks. An alternative, of course, is to use transfer learning; I would recommend EfficientNetB3.
I also recommend using an adjustable learning rate. The Keras callback ReduceLROnPlateau works well for that purpose. Documentation is here. The code below shows my recommended settings.
rlronp = tf.keras.callbacks.ReduceLROnPlateau(
    monitor='val_loss',
    factor=0.5,
    patience=2,
    verbose=1,
    mode='auto'
)
In model.fit, include callbacks=[rlronp].
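A minimal sketch of how that looks in the training call, assuming train_ds and val_ds are your own training and validation datasets:
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=30,
    callbacks=[rlronp]  # ReduceLROnPlateau halves the LR after 2 stagnant epochs
)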

Extracting activations from a specific layer of neural network

I was working on an image recognition problem. After training the model, I saved the architecture as well as weights. Now I want to use the model for extracting features from other images and perform SVM on that. For this, I want to remove the last two layers of my model and get the values calculated by the CNN and fully connected layers till then. How can I do that in Keras?
from tensorflow import keras

# a simple model
model = keras.models.Sequential([
    keras.layers.Input((32, 32, 3)),
    keras.layers.Conv2D(16, 3, activation='relu'),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation='softmax')
])
# after training
feature_only_model = keras.models.Model(model.inputs, model.layers[-2].output)
feature_only_model takes a (32, 32, 3) image as input, and its output is the feature vector.
If your model is subclassed, just change the call() method.
If not:
if your model is complicated, wrap your model in a subclassed model and change the forward pass in its call() method, or
if your model is simple, create a model without the last layers and load the weights into each layer separately.
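To connect this to the SVM step mentioned in the question, here is a rough sketch (my own illustration, not part of the answer above) using scikit-learn; X_train, y_train, X_test and y_test are placeholders for your own image arrays and labels:
from sklearn.svm import SVC

# Extract fixed-length feature vectors with the truncated model.
train_features = feature_only_model.predict(X_train)  # shape (n_samples, feature_dim)
test_features = feature_only_model.predict(X_test)

# Fit a standard SVM on those features.
svm = SVC(kernel='rbf')
svm.fit(train_features, y_train)
print('SVM accuracy:', svm.score(test_features, y_test))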

efficientnet.tfkeras vs tf.keras.applications.efficientnet

I am trying to use EfficientNet to train on my custom dataset.
I found that, with all other code/data/config the same,
efficientnet.tfkeras.EfficientNetB0 gives ~90% training/prediction accuracy while tf.keras.applications.efficientnet.EfficientNetB0 only gives ~70% accuracy.
But I would expect both to be the same implementation of EfficientNet, or am I missing something here?
I am using the latest efficientnet package and TensorFlow 2.3.0.
with strategy.scope():
    model = tf.keras.Sequential([
        efficientnet.tfkeras.EfficientNetB0(  # tf.keras.applications.efficientnet.EfficientNetB0
            input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3),
            weights='imagenet',
            include_top=False
        ),
        L.GlobalAveragePooling2D(),
        L.Dense(1, activation='sigmoid')
    ])
    model.compile(
        optimizer='adam',
        loss='binary_crossentropy',
        metrics=['binary_crossentropy']
    )
    model.summary()
I ran into the same problem for EfficientNetB4 and found the following:
The total numbers of parameters are not equal. The trainable parameters are equal, but the non-trainable parameters aren't: efficientnet.tfkeras has 7 fewer non-trainable parameters than the tf.keras.applications model.
The numbers of layers are not equal: efficientnet.tfkeras has fewer layers than the tf.keras.applications model.
The differing layers are at the very beginning; the most noteworthy are the normalization and rescaling layers, which are in the tf.keras.applications model but not in the efficientnet.tfkeras model. You can observe this yourself using the model.summary() method.
When applying these layers via model.layers[i](array), it turns out they rescale the image by dividing it by 255 and apply normalization according to:
(input_image - IMAGENET_MEAN) / square_root(IMAGENET_STD)
Thus, it turns out the image normalization is built into the model. If you perform this normalization yourself on the input image, the image will be normalized twice, resulting in extremely small pixel values. The model will therefore have a hard time learning.
TL;DR: Do not normalize the input image, as that is built into the tf.keras.applications model; input images should have values in the range 0-255.
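A minimal sketch of the recommended setup, assuming a binary classification dataset train_ds that yields raw images with values in 0-255 (IMAGE_SIZE as in the question); note that no Rescaling or manual normalization is applied before the model:
import tensorflow as tf

base = tf.keras.applications.efficientnet.EfficientNetB0(
    input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3),
    weights='imagenet',
    include_top=False
)
model = tf.keras.Sequential([
    base,  # rescaling/normalization happens inside this model
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(train_ds, ...)  # feed images in [0, 255]; do not divide by 255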

What is the behavior of data augmentation layer?

I am trying to understand the TensorFlow data augmentation tutorial.
In the model defined below:
model = tf.keras.Sequential([
    resize_and_rescale,
    data_augmentation,
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    # Rest of your model
])
My understanding is that no matter how many rotate/zoom/transform operations are defined in data_augmentation, this data_augmentation layer outputs just one image for one input image. Am I correct?
I saw another post, Does ImageDataGenerator add more images to my dataset?. One answer says that ImageDataGenerator creates different images each epoch; is that the same behavior here?
Otherwise, it would just be the same transformed image trained epoch after epoch, which makes no sense.
Yes! The data augmentation layer just transforms images and returns the same shape as its input (batch_size, *image_dims). But, due to the randomization in the data augmentation layer, you are likely to get a different output each time that layer is called. For instance, in the linked tutorial, the flip, rotation angle and zoom factor are selected at random (within the specified limits) each time the layer is called.
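A small illustrative check of that randomization (my own sketch, using the RandomFlip/RandomRotation layers from the tutorial): calling the same augmentation layer twice on the same image in training mode usually gives different results, while in inference mode it returns the input unchanged:
import tensorflow as tf

data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip('horizontal_and_vertical'),
    tf.keras.layers.RandomRotation(0.2)
])

image = tf.random.uniform((1, 32, 32, 3))

out1 = data_augmentation(image, training=True)
out2 = data_augmentation(image, training=True)
print(bool(tf.reduce_all(out1 == out2)))  # usually False: a new random transform per call
print(bool(tf.reduce_all(data_augmentation(image, training=False) == image)))  # True: identity at inference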

How to create a sliding 2d autoencoder in keras?

I'm working on a toy Keras/Tensorflow project targeting the MNIST dataset. I want to build something akin to a 2D convolutional network, but instead of a stack of filters, I want to produce a dense vector representation.
Here is an example of a model that I used to create an autoencoder for a 3x3 sub-sample of the input:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Reshape

model = Sequential()
model.add(Flatten(input_shape=(3, 3)))
model.add(Dense(32, activation='elu'))
model.add(Dense(4, activation='elu'))
model.add(Dense(32, activation='elu'))
model.add(Dense(9, activation='sigmoid'))
model.add(Reshape((3, 3)))
Using this model, I know that the topology is close to what I want for my 3x3 kernel. What I am trying to figure out is how to replicate/tile the first three layers of this model over my 2D image. I would like to have all of the features of the Conv2D layer, such as strides/padding, but it's not clear to me if/how I could replace the kernel of that layer with an entire multi-layer "sub-model".
Some properties that I would like:
The "kernel" needs to be shared across the tiled instances so that we only have to train a single kernel.
However we define this kernel, it would be nice if it could be expressed in Keras layers.
It has all of the sampling features of Conv2d like padding/strides/dilation
Some things I have tried:
Keras Conv2D custom kernel initialization - seems to require the kernel to be reduced to a single tensor?
Using K.tile, but that seems to require me to reimplement large parts of Conv2D, and it's not clear whether the variables that are created are shared or new instances.
You're in luck, because there's a TensorFlow function that does exactly what you want: tf.image.extract_patches. You can just put it in a tf.keras.layers.Lambda layer to wrap it as a tf.keras.layers.Layer. A cleaner way is to subclass tf.keras.layers.Layer yourself, but that takes slightly more effort. More info on how to do that can be found in the docs for tf.keras.layers.Lambda.
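A rough sketch of that idea (my own illustration, not a definitive implementation), assuming 28x28x1 MNIST-style inputs: extract every 3x3 patch inside a Lambda layer, then apply the shared multi-layer "kernel" as Dense layers, which act on the last axis and therefore reuse the same weights at every patch position:
import tensorflow as tf

def extract_3x3_patches(images):
    # (batch, 28, 28, 1) -> (batch, 26, 26, 9): each position holds a flattened 3x3 patch
    return tf.image.extract_patches(
        images,
        sizes=[1, 3, 3, 1],
        strides=[1, 1, 1, 1],
        rates=[1, 1, 1, 1],
        padding='VALID'
    )

inputs = tf.keras.Input(shape=(28, 28, 1))
patches = tf.keras.layers.Lambda(extract_3x3_patches)(inputs)
# Dense layers operate on the last axis, so the same weights are shared
# across every patch position, mimicking a convolution with a multi-layer kernel.
x = tf.keras.layers.Dense(32, activation='elu')(patches)
encoded = tf.keras.layers.Dense(4, activation='elu')(x)

model = tf.keras.Model(inputs, encoded)
model.summary()  # final output shape: (None, 26, 26, 4)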