Keras multi-target classification model: test accuracy is far lower than train accuracy - tensorflow

I am building a multi-class classification model. After calling fit, the test accuracy is far lower than the train accuracy, even with dropout layers attached.
The layer definitions are as follows:
# imports implied by the snippet (assuming numpy's unique and tf.keras layers)
from numpy import unique
from tensorflow.keras.layers import Input, Embedding, Dense, Dropout
from tensorflow.keras.models import Model

n_labels = len(unique(X_train_enc))
in_layer = Input(shape=(1,))
em_layer = Embedding(input_dim=int(n_labels) + 1, output_dim=7,
                     input_length=1, name="embedding")(in_layer)
dense = Dense(512, activation='relu', kernel_initializer='he_normal')(em_layer)
dense = Dropout(0.2)(dense)
output = Dense(7, activation='softmax')(dense)
model = Model(inputs=in_layer, outputs=output)
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train_enc, y_train_enc, epochs=100, batch_size=100, verbose=1)
Epoch 98/100
572/572 [==============================] - 11s 19ms/step - loss: 3.7565e-11 - accuracy: 1.0000
Epoch 99/100
313/572 [===============>..............] - ETA: 5s - loss: 2.2852e-11 - accuracy: 1.0000
test_loss, test_acc = model.evaluate(X_test_enc, y_test_enc, verbose=1)
print('Test Accuracy: ', test_acc, '\nTest Loss: ', test_loss)
print('Accuracy: %.2f' % (test_acc*100))
766/766 [==============================] - 7s 9ms/step - loss: 20.4420 - accuracy: 0.1426
Test Accuracy: 0.1426003873348236
Test Loss: 20.441951751708984
Accuracy: 14.26
Any tuning suggestions would be highly appreciated.
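For reference, a quick way to watch the train/test gap during training, rather than only at evaluate time, is to hold out a validation split in fit; a minimal sketch reusing the arrays above:
# Sketch: hold out 20% of the training data so overfitting shows up
# during training; X_train_enc and y_train_enc are the arrays used above.
history = model.fit(X_train_enc, y_train_enc,
                    epochs=100, batch_size=100, verbose=1,
                    validation_split=0.2)
# history.history['val_accuracy'] will expose the gap long before
# model.evaluate on the held-out test set does.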

Related

Transfer learning model not learning much, validation plateaus at 45% while train goes up to 90%

I've been working on this image classification model for days now. I have 70,000 images and 375 classes. I've tried training it with VGG16, Xception, ResNet, and MobileNet, and I always hit the same ceiling of about 45% on the validation set while training accuracy climbs to 90%, as the training curves show (screenshot omitted).
I've tried adding dropout layers and regularization, and I get the same validation result.
Data augmentation didn't do much to help either.
Any ideas why this isn't working?
Here's a snippet of the code of the last model I used:
# imports for Model, GlobalAveragePooling2D, Xception and Adam added;
# the original snippet uses them but omits the imports
from keras.models import Sequential, Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras.applications.xception import Xception
from keras.optimizers import Adam
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from keras import regularizers
from PIL import Image
from PIL import ImageFile

ImageFile.LOAD_TRUNCATED_IMAGES = True
validation_datagen = ImageDataGenerator(rescale=1./255)
target_size = (height, width)
datagen = ImageDataGenerator(rescale=1./255,
                             validation_split=0.2)
train_generator = datagen.flow_from_directory(
    path,
    target_size=(height, width),
    batch_size=batchSize,
    shuffle=True,
    class_mode='categorical',
    subset='training')
validation_generator = datagen.flow_from_directory(
    path,
    target_size=(height, width),
    batch_size=batchSize,
    class_mode='categorical',
    subset='validation')
num_classes = len(train_generator.class_indices)
xception_model = Xception(weights='imagenet', input_shape=(width, height, 3),
                          include_top=False, classes=num_classes)
x = xception_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(512, activation='relu')(x)
out = Dense(num_classes, activation='softmax')(x)
model = Model(inputs=xception_model.input, outputs=out)  # assumed: the original snippet compiles `model` without constructing it
opt = Adam()
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
n_epochs = 15
history = model.fit(
    train_generator,
    steps_per_epoch=train_generator.samples // batchSize,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // batchSize,
    verbose=1,
    epochs=n_epochs)
Yes, you may need a dataset that is balanced across categories for better training performance. Please try again with class_mode='sparse' in the generators and loss='sparse_categorical_crossentropy' in compile, so that the label encoding and the loss match. Also freeze the pretrained model's layers with xception_model.trainable = False.
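With that change, only class_mode in the generators needs to switch; a sketch reusing path, height, width, and batchSize from the question:
train_generator = datagen.flow_from_directory(
    path,
    target_size=(height, width),
    batch_size=batchSize,
    shuffle=True,
    class_mode='sparse',  # integer labels, to pair with sparse_categorical_crossentropy
    subset='training')
validation_generator = datagen.flow_from_directory(
    path,
    target_size=(height, width),
    batch_size=batchSize,
    class_mode='sparse',
    subset='validation')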
Check the below code: (I have used a flower dataset of 5 classes)
xception_model = tf.keras.applications.Xception(weights='imagenet',
                                                input_shape=(width, height, 3),
                                                include_top=False,
                                                classes=num_classes)
xception_model.trainable = False
x = xception_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(32, activation='relu')(x)
out = Dense(num_classes, activation='softmax')(x)
opt = tf.keras.optimizers.Adam()
model = keras.Model(inputs=xception_model.input, outputs=out)
model.compile(optimizer=opt, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(train_generator, epochs=10, validation_data=validation_generator)
Output:
Epoch 1/10
217/217 [==============================] - 23s 95ms/step - loss: 0.5945 - accuracy: 0.7793 - val_loss: 0.4610 - val_accuracy: 0.8337
Epoch 2/10
217/217 [==============================] - 20s 91ms/step - loss: 0.3439 - accuracy: 0.8797 - val_loss: 0.4550 - val_accuracy: 0.8419
Epoch 3/10
217/217 [==============================] - 20s 93ms/step - loss: 0.2570 - accuracy: 0.9150 - val_loss: 0.4437 - val_accuracy: 0.8384
Epoch 4/10
217/217 [==============================] - 20s 91ms/step - loss: 0.2040 - accuracy: 0.9340 - val_loss: 0.4592 - val_accuracy: 0.8477
Epoch 5/10
217/217 [==============================] - 20s 91ms/step - loss: 0.1649 - accuracy: 0.9494 - val_loss: 0.4686 - val_accuracy: 0.8512
Epoch 6/10
217/217 [==============================] - 20s 92ms/step - loss: 0.1301 - accuracy: 0.9589 - val_loss: 0.4805 - val_accuracy: 0.8488
Epoch 7/10
217/217 [==============================] - 20s 93ms/step - loss: 0.0966 - accuracy: 0.9754 - val_loss: 0.4993 - val_accuracy: 0.8442
Epoch 8/10
217/217 [==============================] - 20s 91ms/step - loss: 0.0806 - accuracy: 0.9806 - val_loss: 0.5488 - val_accuracy: 0.8372
Epoch 9/10
217/217 [==============================] - 20s 91ms/step - loss: 0.0623 - accuracy: 0.9864 - val_loss: 0.5802 - val_accuracy: 0.8360
Epoch 10/10
217/217 [==============================] - 22s 100ms/step - loss: 0.0456 - accuracy: 0.9896 - val_loss: 0.6005 - val_accuracy: 0.8360

Loss when starting fine tuning is higher than loss from transfer learning

Since I start fine-tuning with the weights learned during transfer learning, I would expect the loss to be the same or lower. However, it looks like fine-tuning starts from a different set of weights.
Code to start transfer learning:
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')
base_model.trainable = False
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(units=3, activation='sigmoid')
])
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
epochs = 1000
callback = tf.keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)
history = model.fit(train_generator,
                    steps_per_epoch=len(train_generator),
                    epochs=epochs,
                    validation_data=val_generator,
                    validation_steps=len(val_generator),
                    callbacks=[callback])
Output from last epoch:
Epoch 29/1000
232/232 [==============================] - 492s 2s/step - loss: 0.1298 - accuracy: 0.8940 - val_loss: 0.1220 - val_accuracy: 0.8937
Code to start fine tuning:
model.trainable = True
# Fine-tune from this layer onwards
fine_tune_at = -20
# Freeze all the layers before the `fine_tune_at` layer
for layer in model.layers[:fine_tune_at]:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss='binary_crossentropy',
              metrics=['accuracy'])
history_fine = model.fit(train_generator,
                         steps_per_epoch=len(train_generator),
                         epochs=epochs,
                         validation_data=val_generator,
                         validation_steps=len(val_generator),
                         callbacks=[callback])
But this is what I see for the first few epochs:
Epoch 1/1000
232/232 [==============================] - ETA: 0s - loss: 0.3459 - accuracy: 0.8409
/usr/local/lib/python3.7/dist-packages/PIL/Image.py:960: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images
  "Palette images with Transparency expressed in bytes should be "
232/232 [==============================] - 509s 2s/step - loss: 0.3459 - accuracy: 0.8409 - val_loss: 0.7755 - val_accuracy: 0.7262
Epoch 2/1000
232/232 [==============================] - 502s 2s/step - loss: 0.1889 - accuracy: 0.9066 - val_loss: 0.5628 - val_accuracy: 0.8881
Eventually the loss drops and passes the transfer learning loss:
Epoch 87/1000
232/232 [==============================] - 521s 2s/step - loss: 0.0232 - accuracy: 0.8312 - val_loss: 0.0481 - val_accuracy: 0.8563
Why was the loss in the first epoch of fine tuning higher than the last loss from transfer learning?
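One way to rule out the weights themselves being reset by the second compile is to snapshot and compare them (a diagnostic sketch using standard Keras APIs, not something from the original post):
import numpy as np

# Snapshot the weights at the end of transfer learning...
weights_before = [w.copy() for w in model.get_weights()]
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss='binary_crossentropy',
              metrics=['accuracy'])
# ...and confirm that compiling by itself did not change them.
weights_after = model.get_weights()
print(all(np.array_equal(b, a) for b, a in zip(weights_before, weights_after)))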

Why accuracy stays zero in Keras LSTM while other metrics improve when training?

I have constructed an LSTM model for a binary text classification problem. The labels are one-hot encoded. The following is my model:
# imports implied by the snippet
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras import regularizers, metrics
from tensorflow.keras.optimizers import Adam

model = Sequential()
model.add(Embedding(input_dim=dictionary_length, output_dim=60, input_length=max_word_count))
model.add(LSTM(600))
model.add(Dense(units=max_word_count, activation='tanh', kernel_regularizer=regularizers.l2(0.04),
                activity_regularizer=regularizers.l2(0.015)))
model.add(Dense(units=max_word_count, activation='relu', kernel_regularizer=regularizers.l2(0.01),
                bias_regularizer=regularizers.l2(0.01)))
model.add(Dense(2, activation='softmax', kernel_regularizer=regularizers.l2(0.001)))
adam_optimizer = Adam(lr=0.001, decay=0.0001)
model.compile(loss='categorical_crossentropy', optimizer=adam_optimizer,
              metrics=[tf.keras.metrics.Accuracy(), metrics.AUC(), metrics.Precision(), metrics.Recall()])
When I fit this model, the accuracy stays at 0 the whole time, but the other metrics improve. What is the issue here?
Epoch 1/3
94/94 [==============================] - 5s 26ms/step - loss: 3.4845 - accuracy: 0.0000e+00 - auc_4: 0.7583 - precision_4: 0.7251 - recall_4: 0.7251
Epoch 2/3
94/94 [==============================] - 2s 24ms/step - loss: 0.4772 - accuracy: 0.0000e+00 - auc_4: 0.9739 - precision_4: 0.9249 - recall_4: 0.9249
Epoch 3/3
94/94 [==============================] - 3s 27ms/step - loss: 0.1786 - accuracy: 0.0000e+00 - auc_4: 0.9985 - precision_4: 0.9860 - recall_4: 0.9860
Because you need CategoricalAccuracy (https://www.tensorflow.org/api_docs/python/tf/keras/metrics/CategoricalAccuracy) to measure the accuracy for this problem:
model.compile(loss='categorical_crossentropy', optimizer=adam_optimizer,
              metrics=[tf.keras.metrics.CategoricalAccuracy(), metrics.AUC(), metrics.Precision(), metrics.Recall()])
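The difference is that tf.keras.metrics.Accuracy counts exact element-wise matches between labels and predictions, which raw softmax probabilities essentially never produce against one-hot targets, while CategoricalAccuracy compares the argmax of each. A minimal demonstration:
import tensorflow as tf

# Accuracy() checks element-wise equality between y_true and y_pred:
acc = tf.keras.metrics.Accuracy()
acc.update_state([[0., 1.]], [[0.1, 0.9]])
print(acc.result().numpy())  # 0.0 -- no element matches exactly

# CategoricalAccuracy() compares the argmax of labels and predictions:
cat_acc = tf.keras.metrics.CategoricalAccuracy()
cat_acc.update_state([[0., 1.]], [[0.1, 0.9]])
print(cat_acc.result().numpy())  # 1.0 -- the argmax agrees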

Tensorflow NN implementation not converging

I'm trying to implement a simple feedforward neural network using only TensorFlow, and it is not converging. I'm not sure whether the problem is in the network architecture or in the training-loop implementation. A simple two-layer NN built with Keras seems to converge fine:
import numpy as np  # used below but not imported in the original
from keras.layers import LSTM, Dense, Flatten, Conv1D
from keras import Sequential

model = Sequential()
model.add(Dense(32, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(21, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(np.array(train_in), np.array(train_target),
                    epochs=10, validation_split=0.1, batch_size=16)
Epoch 2/10
59717/59717 [==============================] - 4s 71us/sample - loss: 1.4021 - accuracy: 0.6812 - val_loss: 1.1049 - val_accuracy: 0.7066
Epoch 3/10
59717/59717 [==============================] - 4s 70us/sample - loss: 1.0942 - accuracy: 0.7321 - val_loss: 1.2269 - val_accuracy: 0.7015
Epoch 4/10
59717/59717 [==============================] - 4s 70us/sample - loss: 0.9096 - accuracy: 0.7654 - val_loss: 0.8207 - val_accuracy: 0.7905
Epoch 5/10
59717/59717 [==============================] - 4s 70us/sample - loss: 0.8373 - accuracy: 0.7790 - val_loss: 0.6863 - val_accuracy: 0.8267
Epoch 6/10
59717/59717 [==============================] - 4s 72us/sample - loss: 0.7925 - accuracy: 0.7918 - val_loss: 0.8132 - val_accuracy: 0.7929
Epoch 7/10
59717/59717 [==============================] - 4s 73us/sample - loss: 0.7916 - accuracy: 0.7925 - val_loss: 0.6749 - val_accuracy: 0.8210
Epoch 8/10
19600/59717 [========>.....................] - ETA: 2s - loss: 0.7475 - accuracy: 0.8011
Here's my implementation of the same network in Tensorflow:
tf.compat.v1.disable_eager_execution()
batch_size = 10
hid_dim = 32
output_dim = 21
features = train_x.shape[1]
x = tf.compat.v1.placeholder(tf.float32, (batch_size, features), name='x')
y = tf.compat.v1.placeholder(tf.int32, (batch_size, ), name='y')
w1 = tf.Variable(tf.compat.v1.random_normal([features, hid_dim]), dtype=tf.float32)
b1 = tf.Variable(tf.compat.v1.random_normal([hid_dim]), dtype=tf.float32)
w2 = tf.Variable(tf.compat.v1.random_normal([hid_dim, output_dim]), dtype=tf.float32)
b2 = tf.Variable(tf.compat.v1.random_normal([output_dim]), dtype=tf.float32)
h1 = tf.nn.relu(tf.matmul(x, w1) + b1)
h2 = tf.matmul(h1, w2) + b2
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=h2, labels=y))
optimizer = tf.compat.v1.train.AdamOptimizer(0.001).minimize(loss)
pred = tf.nn.softmax(h2)
And here's my implementation of the training procedure. In my case the batch size is fixed, so at each epoch I feed the whole dataset to the network batch by batch. I record the loss of each batch in an array, and after each epoch I average that array to get the overall epoch loss:
train_in = np.array(train_x)
train_target = np.array(train_y)
train_target = np.squeeze(train_target)
y_t = train_target
num_of_train_batches = len(train_in)/batch_size
init = tf.compat.v1.global_variables_initializer()
print('TRAIN BATCHES: ', num_of_train_batches)
epoch_list = []
epoch_losses = []
epochs = 50
with tf.compat.v1.Session() as sess:
    sess.run(init)
    print('TRAINING')
    for epoch in range(epochs):
        lt = []
        ft = 0
        tt = 1
        train_losses = []
        print('EPOCH: ', epoch)
        epoch_list.append(epoch)
        # RUN WHOLE SET
        for it in range(int(num_of_train_batches)):  # len(x_train)/batch_size
            # OPTIMIZE
            _, batch_loss = sess.run([optimizer, loss],
                                     feed_dict={x: train_in[ft*batch_size:tt*batch_size],
                                                y: train_target[ft*batch_size:tt*batch_size]})
            train_losses.append(batch_loss)
            ft += 1
            tt += 1
        epoch_losses.append(np.array(train_losses).mean())
        print('EPOCH: ', epoch)
        print('LOSS: ', np.array(train_losses).mean())
TRAIN BATCHES: 2200.0
TRAINING
EPOCH: 0
EPOCH: 0
LOSS: 1370.9271
EPOCH: 1
EPOCH: 1
LOSS: 64.23466
EPOCH: 2
EPOCH: 2
LOSS: 36.015495
EPOCH: 3
EPOCH: 3
LOSS: 30.292429
EPOCH: 4
EPOCH: 4
LOSS: 26.436918
EPOCH: 5
EPOCH: 5
LOSS: 25.689302
EPOCH: 6
EPOCH: 6
LOSS: 23.730627
EPOCH: 7
EPOCH: 7
LOSS: 22.356762
EPOCH: 8
EPOCH: 8
LOSS: 21.81124
My Keras implementation reached a loss of 0.75 after only 8 epochs using the same number of hidden layers and the same hidden-layer size, but my TF implementation still shows a loss greater than 10 even after 15 epochs.
Can someone please point out why this is happening? I'm guessing the problem has more to do with the training procedure than with the actual NN.
All suggestions are welcome!

Significantly higher testing accuracy on mnist with keras than tensorflow.keras

I was verifying my TensorFlow (v2.2.0), CUDA (10.1), and cuDNN (libcudnn7-dev_7.6.5.32-1+cuda10.1_amd64.deb) installation with a basic example, and I'm getting weird results...
When running the following example in Keras, as shown at https://keras.io/examples/mnist_cnn/, I get ~99% validation accuracy. When I adapt the imports and run it via tensorflow.keras, I get only 86%.
I might be forgetting something.
To run using tensorflow:
from __future__ import print_function
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras import backend as K
batch_size = 128
num_classes = 10
epochs = 12
# input image dimensions
img_rows, img_cols = 28, 28
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = tf.keras.utils.to_categorical(y_train, num_classes)
y_test = tf.keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=tf.keras.losses.categorical_crossentropy,
              optimizer=tf.optimizers.Adadelta(),
              metrics=['accuracy'])
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Sadly, I get the following output:
Epoch 2/12
469/469 [==============================] - 3s 6ms/step - loss: 2.2245 - accuracy: 0.2633 - val_loss: 2.1755 - val_accuracy: 0.4447
Epoch 3/12
469/469 [==============================] - 3s 7ms/step - loss: 2.1485 - accuracy: 0.3533 - val_loss: 2.0787 - val_accuracy: 0.5147
Epoch 4/12
469/469 [==============================] - 3s 6ms/step - loss: 2.0489 - accuracy: 0.4214 - val_loss: 1.9538 - val_accuracy: 0.6021
Epoch 5/12
469/469 [==============================] - 3s 6ms/step - loss: 1.9224 - accuracy: 0.4845 - val_loss: 1.7981 - val_accuracy: 0.6611
Epoch 6/12
469/469 [==============================] - 3s 6ms/step - loss: 1.7748 - accuracy: 0.5376 - val_loss: 1.6182 - val_accuracy: 0.7039
Epoch 7/12
469/469 [==============================] - 3s 6ms/step - loss: 1.6184 - accuracy: 0.5750 - val_loss: 1.4296 - val_accuracy: 0.7475
Epoch 8/12
469/469 [==============================] - 3s 7ms/step - loss: 1.4612 - accuracy: 0.6107 - val_loss: 1.2484 - val_accuracy: 0.7719
Epoch 9/12
469/469 [==============================] - 3s 6ms/step - loss: 1.3204 - accuracy: 0.6402 - val_loss: 1.0895 - val_accuracy: 0.7945
Epoch 10/12
469/469 [==============================] - 3s 6ms/step - loss: 1.2019 - accuracy: 0.6650 - val_loss: 0.9586 - val_accuracy: 0.8097
Epoch 11/12
469/469 [==============================] - 3s 7ms/step - loss: 1.1050 - accuracy: 0.6840 - val_loss: 0.8552 - val_accuracy: 0.8216
Epoch 12/12
469/469 [==============================] - 3s 7ms/step - loss: 1.0253 - accuracy: 0.7013 - val_loss: 0.7734 - val_accuracy: 0.8337
Test loss: 0.7734305262565613
Test accuracy: 0.8337000012397766
Nowhere near the 99.25% I get when I import Keras.
What am I missing?
Discrepancy in optimiser parameters between keras and tensorflow.keras
So the crux of the issue lies in the different default parameters for the Adadelta optimisers in Keras and TensorFlow, specifically the different learning rates. We can see this with a simple check. In the Keras version of the code, print(keras.optimizers.Adadelta().get_config()) outputs
{'learning_rate': 1.0, 'rho': 0.95, 'decay': 0.0, 'epsilon': 1e-07}
And in the TensorFlow version, print(tf.optimizers.Adadelta().get_config()) gives us
{'name': 'Adadelta', 'learning_rate': 0.001, 'decay': 0.0, 'rho': 0.95, 'epsilon': 1e-07}
As we can see, the learning rates of the two Adadelta optimisers differ: Keras defaults to 1.0 while TensorFlow defaults to 0.001 (consistent with its other optimisers).
Effects of a higher learning rate
Since the Keras version of the Adadelta optimiser has a larger learning rate, it converges much faster and achieves a high accuracy within 12 epochs, while the Tensorflow Adadelta optimiser requires a longer training time. If you increased the number of training epochs, the Tensorflow model could potentially achieve a 99% accuracy as well.
The fix
But instead of increasing the training time, we can simply initialise the TensorFlow model to behave like the Keras model by changing the learning rate of Adadelta to 1.0, i.e.
model.compile(
    loss=tf.keras.losses.categorical_crossentropy,
    optimizer=tf.optimizers.Adadelta(learning_rate=1.0),  # note the new learning rate
    metrics=['accuracy'])
Making this change, we get the following performance running on Tensorflow:
Epoch 12/12
60000/60000 [==============================] - 102s 2ms/sample - loss: 0.0287 - accuracy: 0.9911 - val_loss: 0.0291 - val_accuracy: 0.9907
Test loss: 0.029134796149221757
Test accuracy: 0.9907
which is close to the desired 99.25% accuracy.
P.S. Incidentally, it seems that the different default parameters between Keras and TensorFlow are a known issue that was fixed and then reverted:
https://github.com/keras-team/keras/pull/12841. Software development is hard.