Ways to decrease validation loss and increase validation accuracy? - tensorflow

I'm working with an image classification model for gravity wave detection.
I want to check if there is something I could do to lower the validation loss or, more importantly, increase the validation accuracy.
The dataset is about 460 images in total, split into:
300 training images belonging to 2 classes
60 validation images belonging to 2 classes
100 test images belonging to 2 classes
For context, this is the code for preprocessing:
batch_size = 32
Data augmentation:
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    horizontal_flip=True,
)
test_datagen = ImageDataGenerator(
    rescale=1./255)
The generators that read images and produce batches of (augmented) image data:
train_generator = train_datagen.flow_from_directory(
    './train',                 # target directory
    target_size=(256, 256),    # images resized to 256x256
    batch_size=batch_size,
    class_mode='binary')
validation_generator = train_datagen.flow_from_directory(
    './validation',
    target_size=(256, 256),
    batch_size=batch_size,
    class_mode='binary')
test_generator = test_datagen.flow_from_directory(
    './test',
    target_size=(256, 256),
    batch_size=batch_size,
    class_mode=None,
    shuffle=False)
And this is the model used:
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.models import Model
from tensorflow.keras.applications.inception_v3 import InceptionV3

base_model = InceptionV3(input_shape=(256, 256, 3), include_top=False, weights='imagenet')

x = layers.Flatten()(base_model.output)
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(0.2)(x)
x = layers.Dense(1, activation='sigmoid')(x)

model = Model(base_model.input, x)
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
              loss='binary_crossentropy',
              metrics=['accuracy'])
fitness = model.fit(
    train_generator,
    steps_per_epoch=120,
    epochs=100,
    validation_data=validation_generator,
    validation_steps=64)
So far the accuracy and loss have been around:
Average training accuracy: 0.9237500047683715
Average training loss: 0.17309745135484264
Average validation accuracy: 0.6489999979734421
Average validation loss: 0.9121886402368545
The predictions have been around:
validation predictions:
(24, 36)
test predictions:
(45, 55)
And the confusion matrix:
Confusion Matrix:
array([[12, 18],
       [12, 18]])
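Two things stand out in the setup above (hedged observations, not a verified fix): the validation generator reuses the augmenting train_datagen, the same issue the answer to the next question flags, and a Flatten head with a 1024-unit Dense layer adds a lot of trainable parameters for a ~300-image training set. A minimal sketch of a lower-capacity head, reusing the frozen-base plus global-pooling pattern that appears in later snippets on this page:

# A minimal sketch, assuming the same 2-class, 256x256 setup as above.
# Freezing the pretrained base and pooling globally reduces the number
# of trainable parameters, which often narrows the train/validation gap.
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.models import Model
from tensorflow.keras.applications.inception_v3 import InceptionV3

base_model = InceptionV3(input_shape=(256, 256, 3), include_top=False, weights='imagenet')
base_model.trainable = False  # train only the new head

x = layers.GlobalAveragePooling2D()(base_model.output)  # instead of Flatten
x = layers.Dropout(0.5)(x)
x = layers.Dense(1, activation='sigmoid')(x)

model = Model(base_model.input, x)
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
              loss='binary_crossentropy', metrics=['accuracy'])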

Related

how to prevent overfitting in transfer learning with vgg16

I'm trying to train a model to recognize facial expressions, so basically a classification problem with 7 classes:
img_size = 48
batch_size = 64

datagen_train = ImageDataGenerator(
    rotation_range=15,
    width_shift_range=0.15,
    height_shift_range=0.15,
    shear_range=0.15,
    zoom_range=0.15,
    horizontal_flip=True,
    preprocessing_function=preprocess_input
)
train_generator = datagen_train.flow_from_directory(
    train_path,
    target_size=(img_size, img_size),
    # color_mode='grayscale',
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=True
)
datagen_validation = ImageDataGenerator(
    horizontal_flip=True,
    preprocessing_function=preprocess_input)
validation_generator = datagen_train.flow_from_directory(
    valid_path,
    target_size=(img_size, img_size),
    # color_mode='grayscale',
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=True,
)
I'm using ImageDataGenerator, and I built my model with transfer learning from a headless VGG16, like so:
ptm = PretrainedModel(
    input_shape=[48, 48, 3],
    weights='imagenet',
    include_top=False)

x = Flatten()(ptm.output)
x = Dropout(0.5)(x)
x = Dense(512, activation='relu')(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
x = Dense(512, activation='relu')(x)
x = BatchNormalization()(x)
x = Dense(7, activation='softmax',
          kernel_initializer='random_uniform',
          bias_initializer='random_uniform',
          bias_regularizer=regularizers.l2(0.01),
          name='predictions')(x)

opt = optimizers.RMSprop(learning_rate=0.0001)
model = Model(inputs=ptm.input, outputs=x)
model.compile(
    loss='categorical_crossentropy',
    optimizer=opt,
    metrics=['accuracy']
)
model.summary()
I used callbacks for early stopping and learning-rate reduction and ran up to 100 epochs:
early_stopping = EarlyStopping(
    monitor='val_accuracy',
    min_delta=0.00005,
    patience=11,
    verbose=1,
    restore_best_weights=True,
)
lr_scheduler = ReduceLROnPlateau(
    monitor='val_accuracy',
    factor=0.5,
    patience=7,
    min_lr=1e-7,
    verbose=1,
)
callbacks = [
    early_stopping,
    lr_scheduler,
]
Training stopped early after 61 epochs. I got decent accuracy, but val_accuracy was very low compared to it:
loss: 0.6081 - accuracy: 0.7910 - val_loss: 1.4658 - val_accuracy: 0.5608
Any suggestions on how I can fix this overfitting? Thanks!
In your validation generator, remove horizontal_flip=True and set shuffle=False. Also, you have the code
validation_generator = datagen_train.flow_from_directory(...)
You want to change it to
validation_generator = datagen_validation.flow_from_directory(...)
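Put together, the corrected validation pipeline would look roughly like this (a sketch keeping the question's variable names, with the flip removed and shuffling off):

# No augmentation on validation data; only the model's preprocessing.
datagen_validation = ImageDataGenerator(preprocessing_function=preprocess_input)
validation_generator = datagen_validation.flow_from_directory(
    valid_path,
    target_size=(img_size, img_size),
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=False
)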

how to increase the accuracy of an image classifier?

I made an image classifier using TensorFlow/Keras with a CNN architecture. The model works pretty well (at least for the images I have tested it on) and has reached an accuracy of 78.87%. The only issue I'm facing is that I want to push the accuracy to at least 85%.
Please Note:
Dataset: two folders: a train folder with 80 class subfolders of 110 images each, and a validation folder with 80 class subfolders of 22 images each. Image sizes range from [240-260] x [40-60] pixels.
Below is the code I used to create, save and test my model:
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K

# Dimensions of our images.
img_width, img_height = 251, 54

train_data_dir = 'C:/Users/ADEM/Desktop/msi_youssef/PFE/test/numbers/data/train'
validation_data_dir = 'C:/Users/ADEM/Desktop/msi_youssef/PFE/test/numbers/data/valid'
nb_train_samples = 8800
nb_validation_samples = 1763
epochs = 30        # how long to train the model on the data
batch_size = 32

if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(80))               # one unit per class
model.add(Activation('softmax'))
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

# This is the augmentation configuration we will use for training.
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.1,
    zoom_range=0.05,
    horizontal_flip=False)

# This is the augmentation configuration we will use for testing:
# only rescaling.
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)

model.save('testX_2.h5')
Last epoch result:
Epoch 30/30
275/275 [==============================] - 38s 137ms/step - loss: 0.9406 - acc: 0.7562 - val_loss: 0.1268 - val_acc: 0.9688
How I tested my model:
from keras.models import load_model
from keras.preprocessing import image
import matplotlib.pyplot as plt
import numpy as np
import os

result = {"0":"0", "1":"0.25", "2":"0.5", "3":"0.75", "4":"1", "5":"1.25", "6":"1.5", "7":"1.75",
          "47":"2", "48":"2.25", "49":"2.5", "50":"2.75", "52":"3", "53":"3.25", "54":"3.5", "55":"3.75", "56":"4", "57":"4.25", "58":"4.5",
          "59":"4.75", "60":"5", "61":"5.25", "62":"5.5", "63":"5.75", "64":"6", "65":"6.25", "66":"6.5", "67":"6.75", "68":"7", "69":"7.25",
          "70":"7.5", "71":"7.75", "72":"8", "73":"8.25", "74":"8.5", "75":"8.75", "76":"9", "77":"9.25", "78":"9.5", "79":"9.75", "8":"10",
          "9":"10.25", "10":"10.5", "11":"10.75", "12":"11", "13":"11.25", "14":"11.5", "15":"11.75", "16":"12", "17":"12.25", "18":"12.5",
          "19":"12.75", "20":"13", "21":"13.25", "22":"13.5", "23":"13.75", "24":"14", "25":"14.25", "26":"14.5", "27":"14.75", "28":"15",
          "29":"15.25", "30":"15.5", "31":"15.75", "32":"16", "33":"16.25", "34":"16.5", "35":"16.75", "36":"17", "37":"17.25", "38":"17.5",
          "39":"17.75", "40":"18", "41":"18.25", "42":"18.5", "43":"18.75", "44":"19", "45":"19.25", "46":"19.5", "51":"20"}

def load_image(img_path, show=False):
    img = image.load_img(img_path, target_size=(251, 54))
    img_tensor = image.img_to_array(img)             # (height, width, channels)
    img_tensor = np.expand_dims(img_tensor, axis=0)  # (1, height, width, channels); the model expects (batch_size, height, width, channels)
    img_tensor /= 255.                               # imshow expects values in the range [0, 1]
    if show:
        plt.imshow(img_tensor[0])
        plt.axis('off')
        plt.show()
    return img_tensor

if __name__ == "__main__":
    # Load the model.
    model = load_model('C:/Users/ADEM/Desktop/msi_youssef/PFE/other_shit/testX_2.h5')
    # Image path.
    img_path = 'C:/Users/ADEM/Desktop/msi_youssef/PFE/dataset/5.75/a.png'
    # Load a single image.
    new_image = load_image(img_path)
    # Check the prediction.
    pred = model.predict_classes(new_image)
    print(result[str(pred[0])])
Taking into account all the information about the dataset, and considering that your CNN model already reaches around 80% accuracy, you can start by training the model for a higher number of epochs (typically > 100). That should give your model the required boost.
If that alone does not work, you can implement:
Transformation/augmentation: perform transformations/augmentation on the images before feeding them into the model (see the sketch below).
Tweak the model: make changes to the model layers and do hyperparameter tuning.
You can follow this article to learn more.
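For the transformation/augmentation suggestion, a minimal sketch of extending the existing train_datagen (assuming these digit-like images should not be flipped, which is why horizontal_flip stays off):

# A sketch of slightly richer augmentation; the ranges are
# illustrative guesses, not tuned values.
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    rotation_range=3,          # small rotations only
    width_shift_range=0.05,
    height_shift_range=0.05,
    shear_range=0.1,
    zoom_range=0.05,
    fill_mode='nearest',
    horizontal_flip=False)     # flipping would mirror the digits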

Keras: "ValueError: Error when checking target"

I am trying to build a model which will classify videos into certain categories.
For this I used a pretrained model, InceptionV3, and trained it on my own data. The training process completed successfully, but when I tried to classify a video I got this error:
ValueError: Error when checking : expected input_1 to have shape (None, None, None, 3) but got array with shape (1, 1, 104, 2048)
However, for prediction I used the same video as in the training process.
The model definition:
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    horizontal_flip=True,
    rotation_range=10.,
    width_shift_range=0.2,
    height_shift_range=0.2)

test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    os.path.join('data', 'train'),
    target_size=(299, 299),
    batch_size=32,
    classes=data.classes,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    os.path.join('data', 'test'),
    target_size=(299, 299),
    batch_size=32,
    classes=data.classes,
    class_mode='categorical')

base_model = InceptionV3(weights=weights, include_top=False)

# Add a global spatial average pooling layer.
x = base_model.output
x = GlobalAveragePooling2D()(x)
# Let's add a fully-connected layer,
x = Dense(1024, activation='relu')(x)
# and a logistic layer.
predictions = Dense(len(data.classes), activation='softmax')(x)

# This is the model we will train.
model = Model(inputs=base_model.input, outputs=predictions)

model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    validation_data=validation_generator,
    validation_steps=10,
    epochs=nb_epoch,
    callbacks=callbacks)
Predictions:
# Extract features from frames of the video.
files = [f for f in os.listdir('.') if os.path.isfile(f)]
for f in files:
    features = extractor_model.extract(f)
    sequence.append(features)
np.save(sequence_path, sequence)

sequences = np.load("data_final.npy")
# Convert the numpy array to 4 dimensions.
sequences = np.expand_dims(sequences, axis=0)
sequences = np.expand_dims(sequences, axis=0)
prediction = model.predict(sequences)
Feature extractor:
def extract(self, image_path):
    img = image.load_img(image_path, target_size=(299, 299))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    # Get the prediction.
    features = self.model.predict(x)
    if self.weights is None:
        # For imagenet/default network:
        features = features[0]
    else:
        # For loaded network:
        features = features[0]
    return features
Keras complains because the input shape does not match the expected (None, None, None, 3).
I expected to receive predictions from the model, but got this error instead. Please help. Thanks!
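No answer is shown for this question here, but the error message itself points at the mismatch: the model was trained on raw 299x299 RGB image batches, while the prediction code feeds it features already extracted by another InceptionV3 (shape (1, 1, 104, 2048)). A hedged sketch of predicting on a raw frame instead, assuming a hypothetical frame file frame.jpg:

from keras.preprocessing import image
import numpy as np

# A sketch: feed the trained model a raw frame, prepared the same
# way the training generator prepared its inputs.
img = image.load_img('frame.jpg', target_size=(299, 299))  # hypothetical path
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)   # shape (1, 299, 299, 3)
x = x / 255.                    # match the rescale=1./255 used in training
prediction = model.predict(x)   # now matches the expected (None, None, None, 3)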

Validation Accuracy stuck to 50% using keras VGG16 model

I am working on a food classification project using a Keras (TensorFlow backend) VGG16 model with the Food-101 image dataset. However, I have faced some problems related to validation accuracy (I believe the problem is overfitting).
My validation accuracy is not increasing; it always sticks at around 48-51%.
I have 40 classes (40 different foods) with 700 training images and 300 validation images per food, and I evaluated my model on a bunch of random food images. I have tried:
Decreasing Learning Rate
Changing Dropout Layer to 0.75
Image Augmentation
Although these helped a little, they did not increase the validation accuracy drastically. I heard that someone used the preprocess_input() function to increase validation accuracy, but I am not sure about it.
Here's my code:
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense
from keras import applications
from keras.utils import to_categorical
from keras import optimizers

# Dimensions of images.
img_width, img_height = 150, 150

top_model_weights_path = 'test2_classes.h5'
train_data_dir = 'D:/intallation/dataset/dataset-101/food/train'
validation_data_dir = 'D:/intallation/dataset/dataset-101/food/validation'
nb_train_samples = 28000
nb_validation_samples = 12000
epochs = 80
batch_size = 32

def save_bottlebeck_features():
    datagen = ImageDataGenerator(rescale=1. / 255)

    # Build the VGG16 network.
    model = applications.VGG16(include_top=False, weights='imagenet')

    generator = datagen.flow_from_directory(
        train_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode=None,
        shuffle=False)
    bottleneck_features_train = model.predict_generator(
        generator, nb_train_samples // batch_size)
    np.save('test_trained.npy', bottleneck_features_train)

    generator = datagen.flow_from_directory(
        validation_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode=None,
        shuffle=False)
    bottleneck_features_validation = model.predict_generator(
        generator, nb_validation_samples // batch_size)
    np.save('test_validation.npy', bottleneck_features_validation)

def train_top_model():
    # Class labels for the training data.
    datagen_top = ImageDataGenerator(rescale=1./255,
                                     width_shift_range=0.05,
                                     height_shift_range=0.05,
                                     shear_range=0.05,
                                     zoom_range=0.05,
                                     fill_mode='nearest',
                                     channel_shift_range=0.2*255)
    datagen_top_val = ImageDataGenerator(rescale=1./255)

    generator_top = datagen_top_val.flow_from_directory(
        train_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode='categorical',
        shuffle=False)
    np.save('test_class_indices.npy', generator_top.class_indices)
    num_classes = len(generator_top.class_indices)

    train_data = np.load('test_trained.npy')
    train_labels = generator_top.classes  # Get class labels.
    train_labels = to_categorical(train_labels, num_classes=num_classes)

    # Class labels for the validation data.
    generator_top = datagen_top.flow_from_directory(
        validation_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode=None,
        shuffle=False)
    validation_data = np.load('test_validation.npy')
    validation_labels = generator_top.classes
    validation_labels = to_categorical(validation_labels, num_classes=num_classes)

    model = Sequential()
    model.add(Flatten(input_shape=train_data.shape[1:]))
    model.add(Dense(4096, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(num_classes, activation='softmax'))

    sgd = optimizers.SGD(lr=1e-4, momentum=0.9, nesterov=True)
    model.compile(optimizer=sgd,
                  loss='categorical_crossentropy', metrics=['accuracy'])

    model.fit(train_data, train_labels,
              epochs=epochs,
              batch_size=batch_size,
              validation_data=(validation_data, validation_labels))
    model.save_weights(top_model_weights_path)

save_bottlebeck_features()
train_top_model()
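On the preprocess_input() idea the question mentions: VGG16's weights were trained on images preprocessed with its own function (channel-mean subtraction in BGR order), not a plain 1/255 rescale, so using it consistently for both bottleneck extraction and the label generators may help. A sketch of the change, keeping the variable names above:

from keras.applications.vgg16 import preprocess_input

# A sketch: replace rescale=1./255 with VGG16's own preprocessing,
# and use the same function everywhere the images are read.
datagen = ImageDataGenerator(preprocessing_function=preprocess_input)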

CoreMLtools and Keras ValueError: need more than 1 value to unpack

I'm fine-tuning the Inception V3 model with Keras, in order to convert it with coremltools into a .mlmodel file.
However, when converting the model, coremltools throws the following error when the converter reaches the last layer of the model:
coremltools/models/neural_network.py", line 2501, in set_pre_processing_parameters
channels, height, width = array_shape
ValueError: need more than 1 value to unpack
I used the code from the Keras documentation on applications, found here: https://keras.io/applications/#fine-tune-inceptionv3-on-a-new-set-of-classes
and added a piece of code loading my dataset from the VGG example found here: https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
My final script looks like this, using TensorFlow as the backend:
LOAD THE DATA
from keras.preprocessing.image import ImageDataGenerator

img_width, img_height = 299, 299
train_data_dir = 'data/train'
validation_data_dir = 'data/validation'
nb_train_samples = 358
nb_validation_samples = 21
epochs = 1
batch_size = 15

train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')
TRAIN THE MODEL
base_model = InceptionV3(weights='imagenet', include_top=False)

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(7, activation='softmax')(x)

model = Model(inputs=base_model.input, outputs=predictions)

for layer in base_model.layers:
    layer.trainable = False

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)

for i, layer in enumerate(base_model.layers):
    print(i, layer.name)

for layer in model.layers[:249]:
    layer.trainable = False
for layer in model.layers[249:]:
    layer.trainable = True

from keras.optimizers import SGD
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])

model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)

model.save('finetuned_inception.h5')
I'm writing here in response to @SwimBikeRun's request (as I need a bit more space).
I was converting YOLO to Keras, and then Keras to CoreML. For the conversion I was using this script: https://github.com/qqwweee/keras-yolo3/blob/master/convert.py
In the conversion process the model was eventually created like this:
input_layer = Input(shape=(None, None, 3))
...
model = Model(inputs=input_layer, outputs=[all_layers[i] for i in out_index])
And those None inputs were what made the CoreML conversion fail. For CoreML, the input size of your model must be known. So I changed it to this:
input_layer = Input(shape=(416, 416, 3))
Your input size will probably vary.
For your original question:
Maybe check your base_model.input size for the same problem.
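For the Inception V3 script in the question, that would mean building the base model with an explicit input_shape so coremltools can unpack the (channels, height, width) triple it expects. A sketch under that assumption:

# A sketch: a fixed input size avoids the shape-unpacking error
# in set_pre_processing_parameters during conversion.
base_model = InceptionV3(weights='imagenet', include_top=False,
                         input_shape=(299, 299, 3))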