Why am I getting an error in transfer learning? - TensorFlow

I am training a model for Optical Character Recognition of the Gujarati language. The input is a single character image. I have 37 classes, with 22200 training images (600 per class) and 5920 test images (160 per class). My input images are 32x32.
Below is my code:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = tf.keras.applications.DenseNet121(include_top=False, weights='imagenet', pooling='max')
base_inputs = model.layers[0].input
base_outputs = model.layers[-1].output # NOTICE -1 not -2
prefinal_outputs = layers.Dense(1024)(base_outputs)
final_outputs = layers.Dense(37)(prefinal_outputs)
new_model = keras.Model(inputs=base_inputs, outputs=base_outputs)
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=False)
test_datagen = ImageDataGenerator(horizontal_flip=False)
training_set = train_datagen.flow_from_directory('C:/Users/shweta/Desktop/characters/train',
                                                 target_size=(32, 32),
                                                 batch_size=64,
                                                 class_mode='categorical')
test_set = test_datagen.flow_from_directory('C:/Users/shweta/Desktop/characters/test',
                                            target_size=(32, 32),
                                            batch_size=64,
                                            class_mode='categorical')
new_model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
new_model.fit_generator(training_set,
                        epochs=25,
                        validation_data=test_set, shuffle=True)
new_model.save('alphanumeric.mod')
I am getting the following output:
Thanks in advance!

First of all, very well written code.
These are some of the things I noticed while going through the code and the tf.keras docs.
What kind of labels do you have? categorical_crossentropy expects ONE-HOT-ENCODED labels (check the docs). So if your labels are integers, use sparse_categorical_crossentropy instead.
Similar issue: there was a post where someone was trying to classify into 2 classes and used categorical instead of binary crossentropy, if you want to look at it.
Cheers, let me know how it goes!
PS: Gerry made a very good point, and if your labels are one-hot encoded, use categorical_crossentropy!
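As a minimal sketch of that distinction (reusing the generator setup from the question, where class_mode='categorical' already yields one-hot labels; the sparse variant is shown only for comparison):
# One-hot labels (class_mode='categorical') pair with categorical_crossentropy:
new_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# Integer labels (class_mode='sparse') would pair with sparse_categorical_crossentropy instead:
new_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])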

The code should be:
from tensorflow.keras.optimizers import Adam

model = tf.keras.applications.DenseNet121(include_top=False, weights='imagenet', pooling='max', input_shape=(32,32,3))
base_outputs = model.layers[-1].output
prefinal_outputs = layers.Dense(1024)(base_outputs)
final_outputs = layers.Dense(37, activation='softmax')(prefinal_outputs)  # softmax so categorical_crossentropy receives probabilities
new_model = keras.Model(inputs=model.input, outputs=final_outputs)
new_model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
Also, you should use model.fit in the future. model.fit can now work with generators, and model.fit_generator will be deprecated in future versions of TensorFlow. I ran against your dataset and got accurate results in about 10 epochs. Here is some additional advice. It is best to use an adjustable learning rate. The Keras callback ReduceLROnPlateau makes this easy to do; documentation is here. Set it to monitor the validation loss. My use is shown below.
lr_adjust = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=1, verbose=1, mode="auto",
                                                 min_delta=0.00001, cooldown=0, min_lr=0)
Also I recommend using the callback ModelCheckpoint. Documentation is here. Set it up to monitor validation loss and it will save the weights that achieved the lowest validation loss. My implementation is shown below.
save_loc = r'c:\Temp'  # set this to the path where you want to save the weights
checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath=save_loc, monitor='val_loss', verbose=1, save_best_only=True,
                                                save_weights_only=True, mode='auto', save_freq='epoch', options=None)
callbacks=[checkpoint, lr_adjust]
In model.fit include callbacks=callbacks. When training is completed you want to load these saved weights into the model, then save the model. You can use the saved model to make predictions. Code is below.
model.load_weights(save_loc)
model.save(save_loc)
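A minimal sketch of how the pieces fit together, assuming the generators and the callbacks list defined above (the epoch count is arbitrary):
# Hedged sketch, not from the original answer: wiring the callbacks into model.fit
history = new_model.fit(training_set,
                        epochs=25,
                        validation_data=test_set,
                        callbacks=callbacks)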

Related

Deep Learning model (LSTM) predicts same class label

I am trying to solve the Spoken Digit Recognition task using an LSTM model, where the audio files are converted into spectrograms and fed into an LSTM, followed by Global Average Pooling. Here is its architecture:
import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, GlobalAveragePooling1D, Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.regularizers import l2

tf.keras.backend.clear_session()
# input layer
input_ = Input(shape=(64, 35))
lstm = LSTM(100, activation='tanh', return_sequences=True, kernel_regularizer=l2(0.000001),
            recurrent_initializer='glorot_uniform')(input_)
lstm = GlobalAveragePooling1D(data_format='channels_first')(lstm)
dense = Dense(20, activation='relu', kernel_regularizer=l2(0.000001), kernel_initializer='glorot_uniform')(lstm)
drop = Dropout(0.8)(dense)
dense1 = Dense(25, activation='relu', kernel_regularizer=l2(0.000001), kernel_initializer='he_uniform')(drop)
drop = Dropout(0.95)(dense1)
output = Dense(10, activation='softmax', kernel_regularizer=l2(0.000001), kernel_initializer='glorot_uniform')(drop)
model_2 = Model(inputs=[input_], outputs=output)
model_2.summary()
Its model summary is shown in an attached image (not reproduced here).
I need to calculate the F1 score to check the performance of the model. I have implemented a custom callback and also used the TensorFlow Addons F1 score. However, I don't get the correct result; I get a constant F1 score value for every epoch.
On further digging, I found out that my model predicts the same class label for the entire epoch, whereas it is supposed to predict across all 10 classes, as there are 10 class label values present.
Here are my model.compile and model.fit commands. I have used a TensorFlow Addons metric here:
from tensorflow import keras

opt = keras.optimizers.Adam(0.001, clipnorm=0.8)
# 'metric' is defined elsewhere in the question (a TensorFlow Addons F1 score metric)
model_2.compile(loss='categorical_crossentropy', optimizer=opt, metrics=metric)
hist = model_2.fit([X_train_spectrogram],
                   [y_train_converted],
                   validation_data=([X_test_spectrogram], [y_test_converted]),
                   epochs=10,
                   verbose=1,
                   callbacks=[tensorBoard_callbk2, ClearMemory()],
                   # steps_per_epoch = 3,
                   batch_size=32)
Here is what I mean by getting the same prediction: the entire prediction array is filled with the same values.
Why is the model predicting the same class label, and how can I rectify it?
I have tried increasing the number of trainable parameters and increasing/decreasing the batch size, but it doesn't help. If anyone knows, can you please help me out?

How to set custom weights in layers?

I am looking at how to set custom weights in the layers.
Below is the code I am working with:
from tensorflow import keras

batch_size = 64
input_dim = 12
units = 64
output_size = 1  # labels are from 0 to 9

# Build the RNN model
def build_model(allow_cudnn_kernel=True):
    lstm_layer = keras.layers.RNN(
        keras.layers.LSTMCell(units), input_shape=(None, input_dim))
    model = keras.models.Sequential(
        [
            lstm_layer,
            keras.layers.BatchNormalization(),
            keras.layers.Dense(output_size),
        ]
    )
    return model

model = build_model()
model.compile(
    loss=keras.losses.MeanSquaredError(),
    optimizer="Adam",
    metrics=["accuracy"],
)
model.fit(
    x_train, y_train, validation_data=(x_val, y_val), batch_size=batch_size, epochs=15
)
Model summary:
Can anyone help me with how to set_weights in the above code?
Thanks in advance.
You can do it using the set_weights method.
For example, if you want to set the weights of your LSTM layer, it can be accessed using model.layers[0]. If your custom weights are stored in an array, say my_weights_matrix, then you can set them on the first layer (the LSTM) using the code shown below:
model.layers[0].set_weights([my_weights_matrix])
If you don't want the weights to be modified during training, you have to freeze that layer with model.layers[0].trainable = False.
Please let me know if you face any other issue and I will be happy to help.
Hope this helps. Happy learning!
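One caveat worth noting, as a hedged sketch rather than part of the original answer: set_weights expects a list with one array per weight variable of the layer, so for an RNN/LSTM layer you typically need the input kernel, the recurrent kernel, and the bias, each with the shapes reported by get_weights:
import numpy as np

rnn_layer = model.layers[0]            # the RNN(LSTMCell) layer built above
current = rnn_layer.get_weights()      # list of arrays, e.g. [kernel, recurrent_kernel, bias]
print([w.shape for w in current])      # inspect the shapes that set_weights expects

# Replacement arrays must match those shapes exactly (random values here, purely for illustration)
new_weights = [np.random.normal(size=w.shape).astype(w.dtype) for w in current]
rnn_layer.set_weights(new_weights)

rnn_layer.trainable = False            # optionally freeze the layer (re-compile for this to take effect)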

Tensorflow Hub vs Keras application - performance drop

I have an image classification problem and I want to use Keras pretrained models for this task.
When I use a model like this:
import tensorflow as tf
import tensorflow_hub as hub

model = tf.keras.Sequential([
    hub.KerasLayer("https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4",
                   output_shape=[1280],
                   trainable=False),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(num_classes, activation='softmax')
])
model.build([None, image_size[0], image_size[1], 3])
model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss='categorical_crossentropy',
    metrics=['acc'])
I easily get ~90% accuracy and very low loss on a balanced dataset. However, if I use keras.applications like this:
base_model = tf.keras.applications.mobilenet_v2.MobileNetV2(
    input_shape=input_img_size,
    include_top=False,
    weights='imagenet'
)
base_model.trainable = False
model = tf.keras.layers.Dropout(0.5)(base_model.output)
model = tf.keras.layers.Dense(num_classes, activation='softmax')(model)
model = tf.keras.models.Model(inputs=base_model.input, outputs=model)
model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss='categorical_crossentropy',
    metrics=['acc'])
and use the proper tf.keras.applications.mobilenet_v2.preprocess_input function in the data generator (leaving everything else the same), it gets stuck at around 60% validation and 80% training accuracy.
What is the difference between these approaches? Why is one superior to the other?
The data generator:
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    preprocessing_function=preprocessing_function,
    rotation_range=10,
    zoom_range=0.3,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True,
    vertical_flip=True,
    shear_range=0.2,
)
Training:
history = model.fit_generator(
    train_generator,
    epochs=nb_epochs,
    verbose=1,
    steps_per_epoch=steps_per_epoch,
    validation_data=valid_generator,
    validation_steps=val_steps_per_epoch,
    callbacks=[
        checkpoint,
        learning_rate_reduction,
        csv_logger,
        tensorboard_callback,
    ],
)
I believe you are training two different 'models'. In your TensorFlow Hub example, you used MobileNet's feature vector. A feature vector, as I understand it, is not the same as the full model: it is a 1-D tensor of a certain length, probably taken from the last layer before the output of the MobileNet model. This is different from the tf.keras example, where you are invoking the full MobileNet model.
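For comparison, a hedged sketch of a keras.applications setup that is closer to the Hub feature-vector model (input_img_size and num_classes come from the question; pooling='avg' is an assumption that produces the same kind of pooled 1280-dimensional vector as the Hub module):
import tensorflow as tf

base_model = tf.keras.applications.mobilenet_v2.MobileNetV2(
    input_shape=input_img_size,
    include_top=False,
    weights='imagenet',
    pooling='avg')          # global average pooling -> 1280-d vector, similar to the Hub feature vector
base_model.trainable = False

x = tf.keras.layers.Dropout(0.5)(base_model.output)
outputs = tf.keras.layers.Dense(num_classes, activation='softmax')(x)
model = tf.keras.models.Model(inputs=base_model.input, outputs=outputs)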

CNN Model With Low Accuracy

I'm currently working on a CNN model that classifies food images. So far, I have managed to build a functioning CNN, but I would like to improve the accuracy. For the dataset, I have used some images from Kaggle and a few from my own collection.
Here is some information about the dataset:
There are 91 classes of food images.
Each class has around 500 to 650 images.
The dataset has been manually cleaned and checked for unrelated or bad quality images (the photos are of different sizes).
Here is my CNN model:
from keras.models import Sequential
from keras.layers import InputLayer, Conv2D, MaxPooling2D, Dropout, Flatten, Dense

classifier = Sequential()

def cnn_layer_creation(classifier):
    classifier.add(InputLayer(input_shape=[224,224,3]))
    classifier.add(Conv2D(filters=32,kernel_size=5,strides=1,padding='same',activation='relu',data_format='channels_first'))
    classifier.add(MaxPooling2D(pool_size=5,padding='same'))
    classifier.add(Conv2D(filters=50,kernel_size=5,strides=1,padding='same',activation='relu'))
    classifier.add(MaxPooling2D(pool_size=5,padding='same'))
    classifier.add(Conv2D(filters=80,kernel_size=5,strides=1,padding='same',activation='relu',data_format='channels_last'))
    classifier.add(MaxPooling2D(pool_size=5,padding='same'))
    classifier.add(Dropout(0.25))
    classifier.add(Flatten())
    classifier.add(Dense(64,activation='relu'))
    classifier.add(Dropout(rate=0.5))
    classifier.add(Dense(91,activation='softmax'))
    # Compiling the CNN
    classifier.compile(optimizer="RMSprop", loss = 'categorical_crossentropy', metrics = ['accuracy'])
    data_initialization(classifier)

def data_initialization(classifier):
    # Part 2 - Fitting the CNN to the images
    from keras.preprocessing.image import ImageDataGenerator
    train_datagen = ImageDataGenerator(rescale = 1./255,
                                       shear_range = 0.2,
                                       zoom_range = 0.2,
                                       horizontal_flip = True)
    test_datagen = ImageDataGenerator(rescale = 1./255,
                                      shear_range = 0.2,
                                      zoom_range = 0.2,
                                      horizontal_flip = True)
    training_set = train_datagen.flow_from_directory('food_image/train',
                                                     target_size = (224, 224),
                                                     batch_size = 100,
                                                     class_mode = 'categorical')
    test_set = test_datagen.flow_from_directory('food_image/test',
                                                target_size = (224, 224),
                                                batch_size = 100,
                                                class_mode = 'categorical')
    classifier.fit_generator(training_set,
                             steps_per_epoch = 100,
                             epochs = 100,
                             validation_data = test_set,
                             validation_steps = 100)
    classifier.save("brynModelGPULite.h5")
    classifier.summary()

def main():
    cnn_layer_creation(classifier)
Training is done on a GPU (NVIDIA 980M).
Unfortunately, the accuracy has not exceeded 10%. Things I've tried are:
Increase the number of epochs.
Change the optimizer (ADAM, RMSPROP).
Change the activation function.
Reduce the image input size.
Increase the batch size.
Change the filter size to 32, 64, 128.
None of these have improved the accuracy.
Could anyone shed some light on how I might improve my model's accuracy?
You should augment only the training data, not the test data.
The following code
test_datagen = ImageDataGenerator(rescale = 1./255,
                                  shear_range = 0.2,
                                  zoom_range = 0.2,
                                  horizontal_flip = True)
should be
test_datagen = ImageDataGenerator(rescale = 1./255)
Firstly, I assume you are building your model from scratch. In that case, training for only a few epochs (I assume you have not trained the model for more than 1000 epochs) will not help, because the network will not have learnt the representations completely in so few epochs when training from scratch. You can try increasing the number of epochs to around 10000 and see. Better still, why not use transfer learning instead: you can do feature extraction, fine-tuning, or both with a pretrained convnet (a minimal sketch is shown below). For reference, have a look at chapter 5 of the book Deep Learning with Python by Francois Chollet.
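A minimal sketch of what that could look like, under the assumption of VGG16 as the pretrained convnet and the 224x224 input and 91 classes from the question (this is illustrative, not the answerer's code):
from keras.applications.vgg16 import VGG16
from keras.layers import GlobalAveragePooling2D, Dropout, Dense
from keras.models import Model

# Frozen pretrained convolutional base used as a feature extractor
base = VGG16(include_top=False, weights='imagenet', input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False

x = GlobalAveragePooling2D()(base.output)
x = Dropout(0.5)(x)
out = Dense(91, activation='softmax')(x)  # 91 food classes, as in the question

classifier = Model(inputs=base.input, outputs=out)
classifier.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])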
I had the same problem with another dataset, and I replaced the Flatten layer with GlobalAveragePooling2D, and it solved the problem.
I'm not sure this is going to work for you, but as my model has a structure similar to yours, I think this can help you. The difference is that I trained my model on 3 classes.
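For illustration, a hedged sketch of that swap applied to the classifier from the question (only the head changes; the convolutional layers stay as they are):
from keras.layers import GlobalAveragePooling2D

# ... convolutional and pooling layers unchanged ...
classifier.add(Dropout(0.25))
classifier.add(GlobalAveragePooling2D())  # replaces Flatten()
classifier.add(Dense(64, activation='relu'))
classifier.add(Dropout(rate=0.5))
classifier.add(Dense(91, activation='softmax'))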

Keras: inconsistent results when fitting ConvNets in two different ways

I'm trying to use the VGG16 network to do image classification. I've tried two different ways to do it, which as far as I understand should be approximately equivalent, yet the results are very different.
Method 1: Extract features using VGG16 and fit these features using a custom fully connected network. Here is the code:
model = vgg16.VGG16(include_top=False, weights='imagenet',
                    input_shape=(imsize,imsize,3),
                    pooling='avg')
model_pred = keras.Sequential()
model_pred.add(keras.layers.Dense(1024, input_dim=512, activation='sigmoid'))
model_pred.add(keras.layers.Dropout(0.5))
model_pred.add(keras.layers.Dense(512, activation='sigmoid'))
model_pred.add(keras.layers.Dropout(0.5))
model_pred.add(keras.layers.Dense(num_categories, activation='sigmoid'))
model_pred.compile(loss=keras.losses.categorical_crossentropy,
                   optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])

(xtr, ytr) = tools.extract_features(model, 3000, imsize, datagen,
                                    rootdir + '/train',
                                    pickle_name = rootdir + '/testpredstrain.pickle')
(xv, yv) = tools.extract_features(model, 300, imsize, datagen,
                                  rootdir + '/valid1',
                                  pickle_name = rootdir + '/testpredsvalid.pickle')
model_pred.fit(xtr, ytr, epochs = 10, validation_data = (xv, yv), verbose=1)
(The function extract_features() simply uses Keras ImageDataGenerator to generate sample images and returns the output after using model.predict() on those images)
Method 2: Take the VGG16 network without the top part, set all the convolutional layers to non-trainable and add a few densely connected layers that are trainable. Then fit using keras fit_generator(). Here is the code:
model2 = vgg16.VGG16(include_top=False, weights='imagenet',
                     input_shape=(imsize,imsize,3),
                     pooling='avg')
for ll in model2.layers:
    ll.trainable = False
out1 = keras.layers.Dense(1024, activation='softmax')(model2.layers[-1].output)
out1 = keras.layers.Dropout(0.4)(out1)
out1 = keras.layers.Dense(512, activation='softmax')(out1)
out1 = keras.layers.Dropout(0.4)(out1)
out1 = keras.layers.Dense(num_categories, activation='softmax')(out1)
model2 = keras.Model(inputs = model2.input, outputs = out1)
model2.compile(loss=keras.losses.categorical_crossentropy,
               optimizer=keras.optimizers.Adadelta(),
               metrics=['accuracy'])
model2.fit_generator(train_gen,
                     steps_per_epoch = 100,
                     epochs = 10,
                     validation_data = valid_gen,
                     validation_steps = 10)
The number of epochs, samples, etc. are not exactly the same in both methods, but they don't need to be to notice the inconsistency: method 1 yields a validation accuracy of 0.47 after just one epoch and gets as high as 0.7-0.8, and even better when I use a larger number of samples to fit. Method 2, however, gets stuck at a validation accuracy of 0.1-0.15 and never gets any better no matter how much I train.
Also, method 2 is considerably slower than method 1 even though it seems to me that they should be approximately as fast (when taking into account the time it takes to extract the features in method 1).
With your first method you extract features with the pre-trained VGG16 model once, and then you train/fine-tune your small network on those cached features. In your second approach you are passing your images through every layer, including VGG's layers, at every epoch. That is why your model runs slower with the second method.
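To make the contrast concrete, a hedged sketch using the names from the question (the expensive VGG16 forward pass happens once in method 1, but once per epoch in method 2):
# Method 1: VGG16 runs once, up front, to produce cached features (xtr, ytr) and (xv, yv);
# fit() then trains only the small dense head on those features.
model_pred.fit(xtr, ytr, epochs=10, validation_data=(xv, yv), verbose=1)

# Method 2: the frozen VGG16 layers are part of model2, so every epoch pushes
# every image through all of VGG16 again before reaching the trainable head.
model2.fit_generator(train_gen, steps_per_epoch=100, epochs=10,
                     validation_data=valid_gen, validation_steps=10)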