I'm trying to implement a SimGAN in Keras, but I believe the question is more generally related to GANs and freezing layers.
As I understand it, I need three models:
The refiner, which processes a synthetic image to make it more realistic. (In other GAN architectures, this might be the generator, which accepts random noise.)
The discriminator, which processes an image and classifies it as synthetic or real.
A combined model which passes the synthetic image through the refiner, and then the refined image into the discriminator.
In the combined model, we want the discriminator layers to be frozen, so that when the combined model is trained, we update only the refiner layers.
Separately, we train the discriminator-only model, where obviously the layers should not be frozen. The layers of the discriminator model and the combined model should share weights so that both get updated.
Here's what I have so far:
from tensorflow.keras import layers, models

refiner_model = make_refiner_model(input_shape=(img_height, img_width, img_channels))
discriminator_model = make_discriminator_model(input_shape=refiner_model.output_shape[1:])

# create combined model with frozen discriminator layers
synthetic_image_tensor = layers.Input(refiner_model.input_shape[1:])
refiner_model_output = refiner_model(synthetic_image_tensor)
combined_output = discriminator_model(refiner_model_output)
combined_model = models.Model(
    inputs=synthetic_image_tensor,
    outputs=[refiner_model_output, combined_output],
    name='combined'
)
How do I freeze the discriminator layers in the combined model without freezing them in the discriminator-only model?
The Keras FAQ explicitly suggests the following:
refiner_model.compile(...)
discriminator_model.compile(...)
discriminator_model.trainable = False
combined_model.compile(...)
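Spelled out with the models above, my understanding of that pattern is the following (the losses and optimizers here are placeholders of my own, not from the FAQ):
# compile the discriminator first, while it is still trainable
discriminator_model.compile(optimizer='adam', loss='binary_crossentropy')
# freeze it only for the combined model; trainability is captured at compile time
discriminator_model.trainable = False
combined_model.compile(optimizer='adam',
                       loss=['mae', 'binary_crossentropy'])  # one loss per output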
But when I then print discriminator_model.summary(), the total number of parameters has doubled:
Total params: 151,812
Trainable params: 75,906
Non-trainable params: 75,906
Then I get warnings about changing .trainable without recompiling, and eventually training fails with this error.
Related
I am following the Transfer learning and fine-tuning guide on the official TensorFlow website. It points out that during fine-tuning, batch normalization layers should be in inference mode:
Important notes about BatchNormalization layer
Many image models contain BatchNormalization layers. That layer is a special case on every imaginable count. Here are a few things to keep in mind.
BatchNormalization contains 2 non-trainable weights that get updated during training. These are the variables tracking the mean and variance of the inputs.
When you set bn_layer.trainable = False, the BatchNormalization layer will run in inference mode, and will not update its mean & variance statistics. This is not the case for other layers in general, as weight trainability & inference/training modes are two orthogonal concepts. But the two are tied in the case of the BatchNormalization layer.
When you unfreeze a model that contains BatchNormalization layers in order to do fine-tuning, you should keep the BatchNormalization layers in inference mode by passing training=False when calling the base model. Otherwise the updates applied to the non-trainable weights will suddenly destroy what the model has learned.
You'll see this pattern in action in the end-to-end example at the end of this guide.
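In code, my understanding is that the guide's recommendation boils down to something like this (a minimal sketch; base_model stands for the pre-trained model being fine-tuned, and the input shape and head are placeholder assumptions):
base_model.trainable = False  # freeze the weights
inputs = keras.Input(shape=(150, 150, 3))
x = base_model(inputs, training=False)  # keep the BN layers in inference mode
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)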
However, some other sources, for example this article (titled Transfer Learning with ResNet), say something completely different:
for layer in resnet_model.layers:
    if isinstance(layer, BatchNormalization):
        layer.trainable = True
    else:
        layer.trainable = False
Anyway, I know that there is a difference between the training argument and the trainable attribute in TensorFlow.
I am loading my model from a file, like so:
model = tf.keras.models.load_model(path)
And I am unfreezing some of the top layers (or rather, freezing all the others) in this way:
model.trainable = True
for layer in model.layers:
    if layer not in model.layers[idx:]:
        layer.trainable = False
Now, about the batch normalization layers, I can do either:
for layer in model.layers:
    if isinstance(layer, keras.layers.BatchNormalization):
        layer.trainable = False
or
for layer in model.layers:
    if layer.name.startswith('bn'):
        layer.call(layer.input, training=False)
Which one should I do? And, in the end, is it better to freeze the batch norm layers or not?
I'm not sure about the training vs. trainable difference, but personally I've gotten good results setting trainable = False.
Now as to whether to freeze them in the first place: I've had good results with not freezing them. The reasoning is simple: the batch norm layers learn the moving averages of the initial training data. This may be cats, dogs, humans, cars, etc. But when you're transfer learning, you could be moving to a completely different domain, and the moving averages of that new domain of images are far different from those of the prior dataset.
By unfreezing those layers and freezing the CNN layers, my model saw a 6-7% increase in accuracy (82% -> 89% ish). My dataset was far different from the initial ImageNet dataset that EfficientNet was trained on.
P.S. Depending on how you plan on running the model post-training, I would advise you to freeze the batch norm layers once the model is trained. For some reason, if you run the model online (1 image at a time), the batch norm gets all funky and gives irregular results. Freezing them post-training fixed the issue for me.
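A sketch of that post-training freeze (my addition; note that changes to trainable only take effect after recompiling):
for layer in model.layers:
    if isinstance(layer, tf.keras.layers.BatchNormalization):
        layer.trainable = False
# recompile so the change takes effect (compile arguments are placeholders)
model.compile(optimizer='adam', loss='categorical_crossentropy')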
Use the code below to see whether the batch norm layers are frozen or not. It will print not only the layer names but also whether they are trainable.
def print_layer_trainable(conv_model):
    for layer in conv_model.layers:
        print("{0}:\t{1}".format(layer.trainable, layer.name))
In my case I tested your method, but it did not freeze my model's batch norm layers.
for layer in model.layers:
    if isinstance(layer, keras.layers.BatchNormalization):
        layer.trainable = False
The code below worked nicely for me. In my case the model is a ResNetV2 and the batch norm layers are named with the suffix "preact_bn". By using the printing code above you can see how the batch norm layers are named and configure them as you want.
for layer in new_model.layers[:]:
    if 'preact_bn' in layer.name:
        trainable = False
    else:
        trainable = True
    layer.trainable = trainable
Just to add to @luciano-dourado's answer:
In my case, I started by following the Transfer Learning guide as is, that is, freezing the BN layers throughout the entire training (classifier + fine-tuning).
What I saw is that training the classifier worked without problems, but as soon as I started fine-tuning, the loss went to NaN after a few batches.
After running the usual checks (input data without NaNs, loss functions yielding correct values, etc.), I checked whether the BN layers were running in inference mode (trainable = False).
But in my case, the dataset was so different from ImageNet that I needed to do the contrary: set all trainable BN attributes to True. I found this empirically, just as @zwang commented. Just remember to freeze them after training, before you deploy the model for inference.
By the way, just as an informative note: ResNet50V2, for example, has a total of 49 BN layers, of which only 16 are pre-activation BNs. This means that the remaining 33 layers were updating their mean and variance values.
Yet another case where one has to run several empirical tests to find out why the "standard" approach does not work in their case. I guess this further reinforces the importance of data in Deep Learning :)
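For reference, those counts can be checked with something like this (a sketch; the name matching relies on ResNet50V2's layer-naming scheme):
import tensorflow as tf

model = tf.keras.applications.ResNet50V2(weights=None)
bn_layers = [l for l in model.layers if isinstance(l, tf.keras.layers.BatchNormalization)]
preact_bn = [l for l in bn_layers if 'preact_bn' in l.name]
print(len(bn_layers), len(preact_bn))  # the answer above reports 49 and 16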
My training and validation loss curves look like the ones below, and yes, similar graphs have received comments like "classic overfitting", and I get it.
My model looks like this:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import (Conv2D, Dense, Dropout, Flatten,
                                     LSTM, MaxPooling2D)

input_shape_0 = keras.Input(shape=(3, 100, 100, 1), name="img3")
model = tf.keras.layers.TimeDistributed(Conv2D(8, 3, activation="relu"))(input_shape_0)
model = tf.keras.layers.TimeDistributed(Dropout(0.3))(model)
model = tf.keras.layers.TimeDistributed(MaxPooling2D(2))(model)
model = tf.keras.layers.TimeDistributed(Conv2D(16, 3, activation="relu"))(model)
model = tf.keras.layers.TimeDistributed(MaxPooling2D(2))(model)
model = tf.keras.layers.TimeDistributed(Conv2D(32, 3, activation="relu"))(model)
model = tf.keras.layers.TimeDistributed(MaxPooling2D(2))(model)
model = tf.keras.layers.TimeDistributed(Dropout(0.3))(model)
model = tf.keras.layers.TimeDistributed(Flatten())(model)
model = tf.keras.layers.TimeDistributed(Dropout(0.4))(model)
model = LSTM(16, kernel_regularizer=tf.keras.regularizers.l2(0.007))(model)
# model = Dense(100, activation="relu")(model)
# model = Dense(200, activation="relu", kernel_regularizer=tf.keras.regularizers.l2(0.001))(model)
model = Dense(60, activation="relu")(model)
# model = Flatten()(model)
model = Dropout(0.15)(model)
out = Dense(30, activation='softmax')(model)
model = keras.Model(inputs=input_shape_0, outputs=out, name="mergedModel")
def get_lr_metric(optimizer):
    def lr(y_true, y_pred):
        return optimizer.lr
    return lr

opt = tf.keras.optimizers.RMSprop()
lr_metric = get_lr_metric(opt)
# merged.compile(loss='sparse_categorical_crossentropy',
#                optimizer='adam', metrics=['accuracy'])
model.compile(loss='sparse_categorical_crossentropy',
              optimizer=opt, metrics=['accuracy', lr_metric])
model.summary()
In the model-building code above, please consider the commented-out lines as some of the approaches I have tried so far.
I have followed the suggestions given in answers and comments to this kind of question, and none of them seems to work for me. Maybe I am missing something really important?
Things that I have tried:
Dropout at different places and in different amounts.
Played with the inclusion and exclusion of dense layers and their numbers of units.
Tried different numbers of units in the LSTM layer (I started from as low as 1; at 16, I have the best performance).
Came across weight regularization techniques and tried to implement them as shown in the code above, at different layers (I would like to know the principled technique for placing them, instead of the simple trial and error I did, which seems wrong).
Implemented a learning rate scheduler, which reduces the learning rate as the epochs progress after a certain number of epochs (see the sketch after this list).
Tried two LSTM layers, with the first one having return_sequences=True.
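For reference, the scheduler mentioned above might look something like this (a sketch under my own assumptions, not the exact code used):
def schedule(epoch, lr):
    # halve the learning rate every 20 epochs (illustrative numbers)
    return lr * 0.5 if epoch > 0 and epoch % 20 == 0 else lr

lr_callback = tf.keras.callbacks.LearningRateScheduler(schedule)
# model.fit(..., callbacks=[lr_callback])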
After all these, I still cannot overcome the overfitting problem.
My data set is properly shuffled and divided into a train/val ratio of 80/20.
Data augmentation is one more thing that I found commonly suggested and that I am yet to try, but I want to see whether I am making some mistake so far that I can correct, and avoid diving into the data augmentation steps for now. My data set has the sizes below:
Training images: 6780
Validation images: 1484
The numbers shown are samples, and each sample has 3 images. So basically, I input 3 images at once as one sample to my time-distributed CNN, which is then followed by the other layers as shown in the model description. Accordingly, my training images number 6780 * 3 and my validation images 1484 * 3. Each image is 100 * 100 with a single channel.
I am using RMSprop as the optimizer, which performed better than Adam in my testing.
UPDATE
I tried some different architectures and some regularizations and dropouts at different places, and I am now able to achieve a val_acc of 59%. Below is the new model.
# kernel_regularizer=tf.keras.regularizers.l2(0.004)
# kernel_constraint=max_norm(3)
model = tf.keras.layers.TimeDistributed(Conv2D(32, 3, activation="relu"))(input_shape_0)
model = tf.keras.layers.TimeDistributed(Dropout(0.3))(model)
model = tf.keras.layers.TimeDistributed(MaxPooling2D(2))(model)
model = tf.keras.layers.TimeDistributed(Conv2D(64, 3, activation="relu"))(model)
model = tf.keras.layers.TimeDistributed(MaxPooling2D(2))(model)
model = tf.keras.layers.TimeDistributed(Conv2D(128, 3, activation="relu"))(model)
model = tf.keras.layers.TimeDistributed(MaxPooling2D(2))(model)
model = tf.keras.layers.TimeDistributed(Dropout(0.3))(model)
model = tf.keras.layers.TimeDistributed(GlobalAveragePooling2D())(model)
model = LSTM(128, return_sequences=True, kernel_regularizer=tf.keras.regularizers.l2(0.040))(model)
model = Dropout(0.60)(model)
model = LSTM(128, return_sequences=False)(model)
model = Dropout(0.50)(model)
out = Dense(30, activation='softmax')(model)
Try performing data augmentation as a preprocessing step. A lack of data samples can lead to such curves. You can also try using k-fold cross-validation.
There are many ways to prevent overfitting, according to the papers below:
Dropout layers (randomly disabling neurons). https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf
Input noise (e.g. random Gaussian noise on the images). https://arxiv.org/pdf/2010.07532.pdf
Random data augmentations (e.g. rotating, shifting, scaling, etc.). https://arxiv.org/pdf/1906.11052.pdf
Adjusting the number of layers & units. https://clgiles.ist.psu.edu/papers/UMD-CS-TR-3617.what.size.neural.net.to.use.pdf
Regularization functions (e.g. L1, L2, etc.). https://www.researchgate.net/publication/329150256_A_Comparison_of_Regularization_Techniques_in_Deep_Neural_Networks
Early stopping: if you notice that for N successive epochs your model's training loss is decreasing but the model performs poorly on the validation data set, it is a good sign to stop the training.
Shuffling the training data or k-fold cross-validation is also a common way of dealing with overfitting.
I found this great repository, which contains examples of how to implement data augmentations:
https://github.com/kochlisGit/random-data-augmentations
Also, this repository seems to have examples of CNNs that implement most of the above methods:
https://github.com/kochlisGit/Tensorflow-State-of-the-Art-Neural-Networks
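To make two of these concrete, here is a minimal Keras sketch of early stopping plus random augmentation layers (the parameter values are illustrative; the preprocessing layers live under experimental in TF 2.3):
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10,
                                              restore_best_weights=True)
augment = tf.keras.Sequential([
    tf.keras.layers.experimental.preprocessing.RandomRotation(0.1),
    tf.keras.layers.experimental.preprocessing.RandomTranslation(0.1, 0.1),
    tf.keras.layers.experimental.preprocessing.RandomZoom(0.1),
])
# model.fit(train_ds.map(lambda x, y: (augment(x, training=True), y)),
#           validation_data=val_ds, callbacks=[early_stop], epochs=200)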
The goal should be to get the model to predict correctly irrespective of the order in which the 3 images in the sample are arranged.
If the order of the images in each sample is not important for training, I think your model does the inverse: the TimeDistributed layers followed by an LSTM take the order of the three images into account. As a solution, first, you can add images by reordering the images of each sample (i.e., augmented data). Second, try considering the three images as one image with three channels and remove the TimeDistributed layers (I'm not sure that the three channels are more efficient, but you can give it a try; see the sketch below).
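A sketch of the second suggestion (the layer sizes here are placeholders of mine):
from tensorflow import keras

inputs = keras.Input(shape=(3, 100, 100, 1), name="img3")
x = keras.layers.Reshape((3, 100, 100))(inputs)  # drop the trailing channel axis
x = keras.layers.Permute((2, 3, 1))(x)           # -> (100, 100, 3): one image, three channels
x = keras.layers.Conv2D(32, 3, activation="relu")(x)
x = keras.layers.GlobalAveragePooling2D()(x)
out = keras.layers.Dense(30, activation="softmax")(x)
model = keras.Model(inputs, out)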
I was working on an image recognition problem. After training the model, I saved the architecture as well as the weights. Now I want to use the model for extracting features from other images and perform SVM on those features. For this, I want to remove the last two layers of my model and get the values calculated by the CNN and fully connected layers up to that point. How can I do that in Keras?
from tensorflow import keras

# a simple model
model = keras.models.Sequential([
    keras.layers.Input((32, 32, 3)),
    keras.layers.Conv2D(16, 3, activation='relu'),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation='softmax')
])
# after training
feature_only_model = keras.models.Model(model.inputs, model.layers[-2].output)
feature_only_model takes a (32, 32, 3) image as input, and its output is the feature vector.
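From there, the SVM part could look like this (a hypothetical sketch; train_images, train_labels, and test_images are placeholders):
from sklearn.svm import SVC

features = feature_only_model.predict(train_images)  # shape (N, feature_dim)
svm = SVC()
svm.fit(features, train_labels)
preds = svm.predict(feature_only_model.predict(test_images))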
If your model is subclassed, just change the call() method.
If not:
if your model is complicated, wrap it in a subclassed model and change the forward pass in the call() method, or
if your model is simple, create a model without the last layers and load the weights into every layer separately.
I am trying to use EfficientNet for custom training on my dataset.
And I found out that, with all other code/data/config the same, efficientnet.tfkeras.EfficientNetB0 gives ~90% training/prediction accuracy and tf.keras.applications.efficientnet.EfficientNetB0 gives only ~70% accuracy.
But I would guess both should be the same implementation of EfficientNet, or am I missing something here?
I am using the latest efficientnet package and TensorFlow 2.3.0.
with strategy.scope():
    model = tf.keras.Sequential([
        efficientnet.tfkeras.EfficientNetB0(  # tf.keras.applications.efficientnet.EfficientNetB0
            input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3),
            weights='imagenet',
            include_top=False
        ),
        L.GlobalAveragePooling2D(),
        L.Dense(1, activation='sigmoid')
    ])
    model.compile(
        optimizer='adam',
        loss='binary_crossentropy',
        metrics=['binary_crossentropy']
    )
model.summary()
I ran into the same problem for EfficientNetB4 and found the following:
The numbers of total parameters are not equal. The trainable parameters are equal, but the non-trainable parameters aren't: efficientnet.tfkeras has 7 fewer non-trainable parameters than the tf.keras.applications model.
The numbers of layers are not equal: efficientnet.tfkeras has fewer layers than the tf.keras.applications model.
The differing layers are at the very beginning; the most noteworthy are the normalization and rescaling layers, which are in the tf.keras.applications model but not in the efficientnet.tfkeras model. You can observe this yourself using the model.summary() method.
When applying these layers, by using model.layers[i](array), it turns out they rescale the image by dividing it by 255 and apply normalization according to: (input_image - IMAGENET_MEAN) / square_root(IMAGENET_STD)
Thus, it turns out the image normalization is built into the model. When you perform this normalization yourself on the input image, the image is normalized twice, resulting in extremely small pixel values. The model will therefore have a hard time learning.
TL;DR: Do not normalize the input image, as this is built into the tf.keras.applications model; input images should have values in the range 0-255.
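You can check for the extra preprocessing layers yourself with something like this (a sketch; it just prints the first few layers of the tf.keras.applications model):
import tensorflow as tf

base = tf.keras.applications.efficientnet.EfficientNetB0(weights=None, include_top=False,
                                                         input_shape=(224, 224, 3))
for layer in base.layers[:5]:
    print(type(layer).__name__, layer.name)
# expect Rescaling and Normalization layers near the input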
I have built an image classifier with 2 classes, say 'A' and 'B'. I have also saved this model, using model.save().
Now, after a certain time, the requirement arose to add one more class, 'C'. Is it possible to load_model() and then add only one class to the previously saved model, so that the final model has 3 classes ('A', 'B' and 'C'), without having to retrain the whole model for classes 'A' and 'B' again?
Can anyone help?
I have tried this:
I used VGG16 as a base model, popped out its last layer, froze the weights, and added one dense layer (DL2), then trained it to predict 2 classes.
Then I added one more dense layer on top of DL2, say DL3, froze the weights, and trained with class C only, but now it always predicts class C.
I think you should check this tutorial:
https://www.tensorflow.org/tutorials/image_retraining.
In short:
You cannot take a trained model and just add new classes.
You should do some additional fine-tuning; maybe not retraining the model from scratch, but at least training the classifier (and possibly some additional layers).
You can also simply change the number of output classes in the last layer and freeze the weights for the remaining layers, retraining the weights of the last layer only.
Just use transfer learning and create a new model.
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

base_model = VGG16(weights='imagenet',
                   include_top=False,  # already excludes the original classification head
                   input_shape=(150, 150, 3))
# note: functional Model instances have no pop(); include_top=False already drops the top layers
base_model_layers = GlobalAveragePooling2D()(base_model.output)  # pool the conv features before the new head
pred = Dense(11, activation='softmax')(base_model_layers)
model = Model(inputs=base_model.input, outputs=pred)
# Freeze all layers except the new head before training it
for layer in model.layers[:-2]:
    layer.trainable = False
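After that, compile and train only the new head as usual (the data generators here are placeholders; categorical_crossentropy assumes one-hot labels):
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(train_generator, validation_data=val_generator, epochs=5)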