Keras NLP validation loss increases while training accuracy increases - tensorflow

I have looked at other posts with similar problems and it seems that my model is overfitting. However, I've tried regularization, dropout, reducing parameters, decreasing the learning rate and changing the loss function, but nothing seems to help.
Here is my model:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Dropout, Bidirectional, GRU, GlobalMaxPooling1D, Dense

model = Sequential([
    Embedding(max_words, 64),
    Dropout(.5),
    Bidirectional(GRU(64, return_sequences=True), merge_mode='concat'),
    GlobalMaxPooling1D(),
    Dense(64),
    Dropout(.5),
    Dense(1, activation='sigmoid')
])
model.summary()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=32, epochs=25, verbose=1, validation_data=(x_test, y_test), shuffle=True)
And my training output:
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_3 (Embedding) (None, None, 64) 320000
_________________________________________________________________
dropout_6 (Dropout) (None, None, 64) 0
_________________________________________________________________
bidirectional_3 (Bidirection (None, None, 128) 49920
_________________________________________________________________
global_max_pooling1d_3 (Glob (None, 128) 0
_________________________________________________________________
dense_3 (Dense) (None, 64) 8256
_________________________________________________________________
dropout_7 (Dropout) (None, 64) 0
_________________________________________________________________
dense_4 (Dense) (None, 1) 65
=================================================================
Total params: 378,241
Trainable params: 378,241
Non-trainable params: 0
_________________________________________________________________
Epoch 1/25
229/229 [==============================] - 7s 32ms/step - loss: 0.6952 - accuracy: 0.4939 - val_loss: 0.6923 - val_accuracy: 0.5240
Epoch 2/25
229/229 [==============================] - 7s 30ms/step - loss: 0.6917 - accuracy: 0.5144 - val_loss: 0.6973 - val_accuracy: 0.4815
Epoch 3/25
229/229 [==============================] - 7s 30ms/step - loss: 0.6709 - accuracy: 0.5881 - val_loss: 0.7164 - val_accuracy: 0.4784
Epoch 4/25
229/229 [==============================] - 7s 30ms/step - loss: 0.6070 - accuracy: 0.6711 - val_loss: 0.7704 - val_accuracy: 0.4977
Epoch 5/25
229/229 [==============================] - 7s 30ms/step - loss: 0.5370 - accuracy: 0.7325 - val_loss: 0.8411 - val_accuracy: 0.4876
Epoch 6/25
229/229 [==============================] - 7s 30ms/step - loss: 0.4770 - accuracy: 0.7714 - val_loss: 0.9479 - val_accuracy: 0.4784
Epoch 7/25
229/229 [==============================] - 7s 30ms/step - loss: 0.4228 - accuracy: 0.8016 - val_loss: 1.0987 - val_accuracy: 0.4884
Epoch 8/25
229/229 [==============================] - 7s 30ms/step - loss: 0.3697 - accuracy: 0.8344 - val_loss: 1.2714 - val_accuracy: 0.4760
Epoch 9/25
229/229 [==============================] - 7s 30ms/step - loss: 0.3150 - accuracy: 0.8582 - val_loss: 1.4184 - val_accuracy: 0.4822
Epoch 10/25
229/229 [==============================] - 7s 31ms/step - loss: 0.2725 - accuracy: 0.8829 - val_loss: 1.6053 - val_accuracy: 0.4946
Epoch 11/25
229/229 [==============================] - 7s 31ms/step - loss: 0.2277 - accuracy: 0.9056 - val_loss: 1.8131 - val_accuracy: 0.4884
Epoch 12/25
229/229 [==============================] - 7s 31ms/step - loss: 0.1929 - accuracy: 0.9253 - val_loss: 1.9327 - val_accuracy: 0.4977
Epoch 13/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1717 - accuracy: 0.9318 - val_loss: 2.2280 - val_accuracy: 0.4900
Epoch 14/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1643 - accuracy: 0.9324 - val_loss: 2.2811 - val_accuracy: 0.4915
Epoch 15/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1419 - accuracy: 0.9439 - val_loss: 2.4530 - val_accuracy: 0.4830
Epoch 16/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1255 - accuracy: 0.9521 - val_loss: 2.6692 - val_accuracy: 0.4992
Epoch 17/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1124 - accuracy: 0.9558 - val_loss: 2.8106 - val_accuracy: 0.4892
Epoch 18/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1130 - accuracy: 0.9556 - val_loss: 2.6792 - val_accuracy: 0.4907
Epoch 19/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1085 - accuracy: 0.9610 - val_loss: 2.8966 - val_accuracy: 0.5093
Epoch 20/25
229/229 [==============================] - 7s 30ms/step - loss: 0.0974 - accuracy: 0.9656 - val_loss: 2.8636 - val_accuracy: 0.5147
Epoch 21/25
229/229 [==============================] - 7s 30ms/step - loss: 0.0921 - accuracy: 0.9663 - val_loss: 2.9874 - val_accuracy: 0.4977
Epoch 22/25
229/229 [==============================] - 7s 30ms/step - loss: 0.0888 - accuracy: 0.9685 - val_loss: 3.0295 - val_accuracy: 0.4969
Epoch 23/25
229/229 [==============================] - 7s 30ms/step - loss: 0.0762 - accuracy: 0.9731 - val_loss: 3.0607 - val_accuracy: 0.4884
Epoch 24/25
229/229 [==============================] - 7s 30ms/step - loss: 0.0842 - accuracy: 0.9692 - val_loss: 3.0552 - val_accuracy: 0.4900
Epoch 25/25
229/229 [==============================] - 7s 30ms/step - loss: 0.0816 - accuracy: 0.9693 - val_loss: 2.9571 - val_accuracy: 0.5015
My validation loss seems to always increase, no matter what. I am trying to predict political affiliation from tweets. The dataset I am using has worked well on other models, so perhaps there is something wrong with my data preprocessing instead?
import pandas as pd
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

dataset = pd.read_csv('political_tweets.csv')
dataset.head()

dataset = pd.read_csv('political_tweets.csv')["tweet"].values
y_train = pd.read_csv('political_tweets.csv')["dem_or_rep"].values

x_train, x_test, y_train, y_test = train_test_split(dataset, y_train, test_size=0.15, shuffle=True)
print(x_train[0])
print(x_test[0])

max_words = 10000
max_len = 25

tokenizer = Tokenizer(num_words=max_words, filters='!"#$%&()*+,-./:;<=>?#[\\]^_`{|}~\t\n1234567890', lower=False, oov_token="<OOV>")
tokenizer.fit_on_texts(x_train)
x_train = tokenizer.texts_to_sequences(x_train)
x_train = pad_sequences(x_train, max_len, padding='post', truncating='post')
tokenizer.fit_on_texts(x_test)
x_test = tokenizer.texts_to_sequences(x_test)
x_test = pad_sequences(x_test, max_len, padding='post', truncating='post')
I am really stumped. Any help is appreciated.
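One thing worth checking in the preprocessing above: fit_on_texts accumulates word counts and rebuilds word_index on every call, so the second fit_on_texts(x_test) call shifts the vocabulary after x_train has already been converted, which can leave train and test sequences with inconsistent word indices. A minimal sketch that fits the tokenizer once, on the training split only (reusing max_words, max_len, and the splits from above):

tokenizer = Tokenizer(num_words=max_words, oov_token="<OOV>")
tokenizer.fit_on_texts(x_train)  # single fit; word_index is now fixed
x_train = pad_sequences(tokenizer.texts_to_sequences(x_train),
                        max_len, padding='post', truncating='post')
x_test = pad_sequences(tokenizer.texts_to_sequences(x_test),  # reuse the same fit
                       max_len, padding='post', truncating='post')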

You're doing binary classification and your validation accuracy is near 50%. That means your model has learnt nothing useful; it's equivalent to random prediction.
Your training accuracy is very high, which suggests your model is badly overfitting.
Don't apply dropout right after the embedding layer; it can mess everything up.
Remove the Dense(64) after GlobalMaxPooling1D.
Use recurrent_dropout in the GRU.
Train for fewer epochs.
Reduce the vocabulary and remove stop words. With a sequence length of only 25, noisy stop words can fool the model.
import nltk
nltk.download('stopwords')  # fetch the corpus once
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
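As an illustration, the set can be applied to the raw tweets before tokenizing (this remove_stop_words helper is hypothetical, not from the original post):

def remove_stop_words(text):
    # drop tokens found in the NLTK stop-word set
    return ' '.join(w for w in text.split() if w.lower() not in stop_words)

x_train = [remove_stop_words(t) for t in x_train]
x_test = [remove_stop_words(t) for t in x_test]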
If the model is still overfitting after that, try reducing the embedding output_dim and the GRU units, experimenting with several combinations.
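A minimal sketch of the model with these suggestions applied (the layer sizes are illustrative, not tuned):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, GRU, GlobalMaxPooling1D, Dense
from tensorflow.keras.callbacks import EarlyStopping

model = Sequential([
    Embedding(max_words, 32),                   # smaller output_dim, no dropout right after it
    Bidirectional(GRU(32, return_sequences=True,
                      recurrent_dropout=0.2)),  # dropout inside the recurrent step
    GlobalMaxPooling1D(),                       # no intermediate Dense(64)
    Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# stop as soon as val_loss stops improving instead of running all 25 epochs
early_stop = EarlyStopping(monitor='val_loss', patience=2, restore_best_weights=True)
model.fit(x_train, y_train, batch_size=32, epochs=25,
          validation_data=(x_test, y_test), callbacks=[early_stop])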

Related

Loss not changing and accuracy remains 0 after calling fit()

I'm new to Keras and TensorFlow. I have a model that I am trying to train where the loss does not change after epoch #1.
My data is a sequence of numbers that I want the NN to learn from so it can predict the next number:
data[10:15]
Out[3]:
array([[30, 36, 28, 25, 30, 35],
       [36, 28, 25, 30, 35, 28],
       [28, 25, 30, 35, 28, 29],
       [25, 30, 35, 28, 29, 25],
       [30, 35, 28, 29, 25, 38]])
For example, I want [30, 36, 28, 25, 30] to be my input and 35 to be my output.
This is my very simple code and NN:
[I normalized all my data using StandardScaler(), but it didn't change anything.]
from tensorflow import keras
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

data = gen_train_data()  # user-defined data loader
scaler = StandardScaler()
data = scaler.fit_transform(data)
X_train_full, X_test, y_train_full, y_test = train_test_split(data[:, 0:5], data[:, 5])
X_train, X_valid, y_train, y_valid = train_test_split(X_train_full, y_train_full)

model = keras.models.Sequential()
model.add(keras.layers.Dense(5))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dense(30, activation="elu", kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dense(1, activation="softmax"))

optimizer = keras.optimizers.Adam(learning_rate=0.001)
model.compile(loss="mse", optimizer=optimizer, metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)
history = model.fit(X_train, y_train,
                    epochs=16, validation_data=(X_valid, y_valid),
                    callbacks=[early_stopping_cb])
mse_test = model.evaluate(X_test, y_test)
and this is my console after running code above:
Epoch 1/16
1/26 [>.............................] - ETA: 57s - loss: 3.2932 - accuracy: 0.0000e+00
26/26 [==============================] - 3s 23ms/step - loss: 2.0375 - accuracy: 0.0000e+00 - val_loss: 2.2258 - val_accuracy: 0.0000e+00
Epoch 2/16
26/26 [==============================] - 0s 4ms/step - loss: 2.0375 - accuracy: 0.0000e+00 - val_loss: 2.2258 - val_accuracy: 0.0000e+00
Epoch 3/16
26/26 [==============================] - 0s 4ms/step - loss: 2.0375 - accuracy: 0.0000e+00 - val_loss: 2.2258 - val_accuracy: 0.0000e+00
Epoch 4/16
26/26 [==============================] - 0s 5ms/step - loss: 2.0375 - accuracy: 0.0000e+00 - val_loss: 2.2258 - val_accuracy: 0.0000e+00
Epoch 5/16
26/26 [==============================] - 0s 5ms/step - loss: 2.0375 - accuracy: 0.0000e+00 - val_loss: 2.2258 - val_accuracy: 0.0000e+00
Epoch 6/16
26/26 [==============================] - 0s 5ms/step - loss: 2.0375 - accuracy: 0.0000e+00 - val_loss: 2.2258 - val_accuracy: 0.0000e+00
Epoch 7/16
26/26 [==============================] - 0s 4ms/step - loss: 2.0375 - accuracy: 0.0000e+00 - val_loss: 2.2258 - val_accuracy: 0.0000e+00
Epoch 8/16
26/26 [==============================] - 0s 5ms/step - loss: 2.0375 - accuracy: 0.0000e+00 - val_loss: 2.2258 - val_accuracy: 0.0000e+00
Epoch 9/16
26/26 [==============================] - 0s 5ms/step - loss: 2.0375 - accuracy: 0.0000e+00 - val_loss: 2.2258 - val_accuracy: 0.0000e+00
Epoch 10/16
26/26 [==============================] - 0s 5ms/step - loss: 2.0375 - accuracy: 0.0000e+00 - val_loss: 2.2258 - val_accuracy: 0.0000e+00
Epoch 11/16
26/26 [==============================] - 0s 6ms/step - loss: 2.0375 - accuracy: 0.0000e+00 - val_loss: 2.2258 - val_accuracy: 0.0000e+00
12/12 [==============================] - 0s 3ms/step - loss: 1.7458 - accuracy: 0.0000e+00
and the prediction is 1 for all inputs
model.predict(X_test[:5])
Out[2]:
array([[1.],
       [1.],
       [1.],
       [1.],
       [1.]], dtype=float32)
I tried everything (activation functions, learning rates, more/fewer hidden layers, ...); nothing changes the output.
I would really appreciate it if someone can help me
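The constant 1.0 prediction follows directly from the last layer: softmax normalizes across a layer's units, and with a single unit the normalized output is always exactly 1, regardless of the input. A quick NumPy check of that claim:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # numerically stable softmax
    return e / e.sum()

print(softmax(np.array([3.7])))    # [1.]
print(softmax(np.array([-42.0])))  # [1.] -- any single logit maps to 1

For a regression target like the next number in a sequence, the usual choice would be a linear (or no) activation on the output layer.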

Keras load model after saving the model, why start training from the beginning?

Epoch 1/8
222/222 [==============================] - 18s 67ms/step - loss: 1.4523 - accuracy: 0.9709 - val_loss: 1.3310 - val_accuracy: 0.9865
Epoch 2/8
222/222 [==============================] - 14s 63ms/step - loss: 1.3345 - accuracy: 0.9747 - val_loss: 1.2312 - val_accuracy: 0.9865
Epoch 3/8
222/222 [==============================] - 14s 64ms/step - loss: 1.1911 - accuracy: 0.9868 - val_loss: 1.1245 - val_accuracy: 0.9887
Epoch 4/8
222/222 [==============================] - 14s 63ms/step - loss: 1.0926 - accuracy: 0.9873 - val_loss: 1.0798 - val_accuracy: 0.9769
Epoch 5/8
222/222 [==============================] - 14s 63ms/step - loss: 1.0622 - accuracy: 0.9760 - val_loss: 1.0887 - val_accuracy: 0.9555
Epoch 6/8
222/222 [==============================] - 14s 63ms/step - loss: 0.9589 - accuracy: 0.9841 - val_loss: 0.9216 - val_accuracy: 0.9814
Epoch 7/8
222/222 [==============================] - 14s 64ms/step - loss: 0.8648 - accuracy: 0.9885 - val_loss: 0.8241 - val_accuracy: 0.9896
Epoch 8/8
222/222 [==============================] - 14s 63ms/step - loss: 0.7993 - accuracy: 0.9908 - val_loss: 0.7694 - val_accuracy: 0.9893
Model: "model_5"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_6 (InputLayer) [(None, 32, 32, 3)] 0
_________________________________________________________________
model_1 (Functional) (None, 10) 3250058
=================================================================
Total params: 3,250,058
Trainable params: 3,228,170
Non-trainable params: 21,888
_________________________________________________________________
Epoch 1/8
222/222 [==============================] - 18s 66ms/step - loss: 1.4423 - accuracy: 0.9741 - val_loss: 1.3361 - val_accuracy: 0.9839
Epoch 2/8
222/222 [==============================] - 14s 64ms/step - loss: 1.3457 - accuracy: 0.9734 - val_loss: 1.2327 - val_accuracy: 0.9845
Epoch 3/8
222/222 [==============================] - 14s 63ms/step - loss: 1.1927 - accuracy: 0.9893 - val_loss: 1.1287 - val_accuracy: 0.9870
This is my output. As you can see, when I load the model after training, the loss is still the same as before training, which really confuses me.
This is my code. I want to use two models ('After combining' and 'Final combining'), and I use load_model and model.save because I want to mimic a federated learning process.
Hope someone can give me some ideas.
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.layers import Input
from tensorflow.keras import optimizers

def train2():
    img_input = Input(shape=(32, 32, 3))
    Mobilenet2 = load_model('Final combining.h5')
    output = Mobilenet2(img_input)
    model = Model(img_input, output)
    model.summary()
    # set optimizer
    sgd = optimizers.SGD(lr=.1, momentum=0.9, nesterov=True)
    model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
    # start training
    h2 = model.fit(X_train2, y_2_train, batch_size=batch_size,
                   steps_per_epoch=len(X_train2) // batch_size,
                   epochs=epochs1,
                   # callbacks=cbks,
                   validation_data=(X_test, y_test))
    model.save('After combining.h5')

def train3():
    img_input = Input(shape=(32, 32, 3))
    Mobilenet1 = load_model('After combining.h5')
    output = Mobilenet1(img_input)
    model = Model(img_input, output)
    model.summary()
    # set optimizer
    sgd = optimizers.SGD(lr=.1, momentum=0.9, nesterov=True)
    model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
    # start training
    h3 = model.fit(X_train1, y_1_train, batch_size=batch_size,
                   steps_per_epoch=len(X_train1) // batch_size,
                   epochs=epochs1,
                   # callbacks=cbks,
                   validation_data=(X_test, y_test))
    model.save('Final combining.h5')
I use a for loop to control the training process. The output shown is from the last iteration; the accuracy and loss are almost the same as in the first iteration.
for _ in range(5):
    num = 0
    if num % 2 == 0:
        train2()
        num += 1
    else:
        train3()
        num += 1
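Note that num is reset to 0 at the top of every pass, so num % 2 == 0 is always true and only train2() ever runs; each call then reloads the same untouched 'Final combining.h5', which would explain why every iteration starts from the same loss. A sketch of the presumably intended alternation, driven by the loop variable instead:

for i in range(5):
    if i % 2 == 0:
        train2()  # reads 'Final combining.h5', writes 'After combining.h5'
    else:
        train3()  # reads 'After combining.h5', writes 'Final combining.h5'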
I solved it after changing the models so they no longer share the same name:
_________________________________________________________________
Epoch 1/8
222/222 [==============================] - 25s 100ms/step - loss: 0.2912 - accuracy: 0.9854 - val_loss: 0.3016 - val_accuracy: 0.9800
Epoch 2/8
222/222 [==============================] - 22s 98ms/step - loss: 0.2637 - accuracy: 0.9906 - val_loss: 0.3110 - val_accuracy: 0.9800
Epoch 3/8
222/222 [==============================] - 22s 97ms/step - loss: 0.2420 - accuracy: 0.9922 - val_loss: 0.2764 - val_accuracy: 0.9865
Epoch 4/8
222/222 [==============================] - 22s 97ms/step - loss: 0.2960 - accuracy: 0.9743 - val_loss: 0.2632 - val_accuracy: 0.9842
Epoch 5/8
222/222 [==============================] - 22s 98ms/step - loss: 0.2291 - accuracy: 0.9928 - val_loss: 0.2757 - val_accuracy: 0.9789
Epoch 6/8
222/222 [==============================] - 22s 97ms/step - loss: 0.2286 - accuracy: 0.9921 - val_loss: 0.2806 - val_accuracy: 0.9744
Epoch 7/8
222/222 [==============================] - 22s 98ms/step - loss: 0.2161 - accuracy: 0.9920 - val_loss: 0.2381 - val_accuracy: 0.9828
Epoch 8/8
222/222 [==============================] - 22s 98ms/step - loss: 0.1936 - accuracy: 0.9953 - val_loss: 0.2192 - val_accuracy: 0.9887
Model: "model_20"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_22 (InputLayer) [(None, 32, 32, 3)] 0
_________________________________________________________________
model_19 (Functional) (None, 10) 3250058
=================================================================
Total params: 3,250,058
Trainable params: 3,228,170
Non-trainable params: 21,888
_________________________________________________________________
Epoch 1/8
222/222 [==============================] - 25s 101ms/step - loss: 0.1774 - accuracy: 0.9972 - val_loss: 0.2197 - val_accuracy: 0.9876
Epoch 2/8
222/222 [==============================] - 22s 98ms/step - loss: 0.1805 - accuracy: 0.9928 - val_loss: 0.2880 - val_accuracy: 0.9713
Epoch 3/8
222/222 [==============================] - 22s 98ms/step - loss: 0.2062 - accuracy: 0.9852 - val_loss: 0.2234 - val_accuracy: 0.9814
Epoch 4/8
222/222 [==============================] - 22s 97ms/step - loss: 0.1765 - accuracy: 0.9938 - val_loss: 0.2218 - val_accuracy: 0.9769
Epoch 5/8
222/222 [==============================] - 22s 98ms/step - loss: 0.1792 - accuracy: 0.9905 - val_loss: 0.2180 - val_accuracy: 0.9803
Epoch 6/8
222/222 [==============================] - 22s 98ms/step - loss: 0.1608 - accuracy: 0.9942 - val_loss: 0.2602 - val_accuracy: 0.9679
Epoch 7/8
222/222 [==============================] - 22s 98ms/step - loss: 0.1581 - accuracy: 0.9925 - val_loss: 0.1826 - val_accuracy: 0.9873
Epoch 8/8
222/222 [==============================] - 22s 98ms/step - loss: 0.2309 - accuracy: 0.9734 - val_loss: 0.2034 - val_accuracy: 0.9831

Fine tuning a CNN using TensorFlow 2.0

I am currently working on a defect classification problem for solar panels. It's a multi-class classification problem, currently with 3 classes. I have done the coding part, but my accuracy is very low. How can I improve my accuracy?
Total training images: 900
Testing/validation images: 300
Classes: 3
My code is given below:
import tensorflow as tf
import keras_preprocessing
from keras_preprocessing import image
from keras_preprocessing.image import ImageDataGenerator

TRAINING_DIR = "/content/drive/My Drive/solar_images/solar_images/train/"
training_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')

VALIDATION_DIR = "/content/drive/My Drive/solar_images/solar_images/test/"
validation_datagen = ImageDataGenerator(rescale=1./255)

train_generator = training_datagen.flow_from_directory(
    TRAINING_DIR,
    target_size=(150, 150),
    class_mode='categorical',
    batch_size=64
)
validation_generator = validation_datagen.flow_from_directory(
    VALIDATION_DIR,
    target_size=(150, 150),
    class_mode='categorical',
    batch_size=64
)

model = tf.keras.models.Sequential([
    # Note the input shape is the desired size of the image: 150x150 with 3 color channels
    # The first convolution
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    # The second convolution
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    # The third convolution
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    # The fourth convolution
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    # Flatten the results to feed into a dense classifier
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),
    # 512-neuron hidden layer
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax')
])
model.summary()

model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

batch_size = 64
history = model.fit(train_generator,
                    epochs=20,
                    steps_per_epoch=int(894/batch_size),
                    validation_data=validation_generator,
                    verbose=1,
                    validation_steps=int(289/batch_size))
model.save("solar_images_weight.h5")
My accuracy is -
Epoch 1/20
13/13 [==============================] - 1107s 85s/step - loss: 1.2893 - accuracy: 0.3470 - val_loss: 1.0926 - val_accuracy: 0.3594
Epoch 2/20
13/13 [==============================] - 1239s 95s/step - loss: 1.1037 - accuracy: 0.3566 - val_loss: 1.0954 - val_accuracy: 0.3125
Epoch 3/20
13/13 [==============================] - 1203s 93s/step - loss: 1.0964 - accuracy: 0.3904 - val_loss: 1.0841 - val_accuracy: 0.5625
Epoch 4/20
13/13 [==============================] - 1182s 91s/step - loss: 1.0980 - accuracy: 0.3750 - val_loss: 1.0894 - val_accuracy: 0.3633
Epoch 5/20
13/13 [==============================] - 1218s 94s/step - loss: 1.1086 - accuracy: 0.3386 - val_loss: 1.0874 - val_accuracy: 0.3125
Epoch 6/20
13/13 [==============================] - 1214s 93s/step - loss: 1.0953 - accuracy: 0.3257 - val_loss: 1.0763 - val_accuracy: 0.6094
Epoch 7/20
13/13 [==============================] - 1136s 87s/step - loss: 1.0851 - accuracy: 0.3831 - val_loss: 1.0754 - val_accuracy: 0.3164
Epoch 8/20
13/13 [==============================] - 1170s 90s/step - loss: 1.1005 - accuracy: 0.3940 - val_loss: 1.0545 - val_accuracy: 0.5039
Epoch 9/20
13/13 [==============================] - 1138s 88s/step - loss: 1.1294 - accuracy: 0.4337 - val_loss: 1.0130 - val_accuracy: 0.5703
Epoch 10/20
13/13 [==============================] - 1131s 87s/step - loss: 1.0250 - accuracy: 0.4531 - val_loss: 0.8911 - val_accuracy: 0.6055
Epoch 11/20
13/13 [==============================] - 1162s 89s/step - loss: 1.0243 - accuracy: 0.4735 - val_loss: 0.9160 - val_accuracy: 0.4727
Epoch 12/20
13/13 [==============================] - 1153s 89s/step - loss: 0.9978 - accuracy: 0.4783 - val_loss: 0.7754 - val_accuracy: 0.6406
Epoch 13/20
13/13 [==============================] - 1187s 91s/step - loss: 1.0080 - accuracy: 0.4687 - val_loss: 0.7701 - val_accuracy: 0.6602
Epoch 14/20
13/13 [==============================] - 1204s 93s/step - loss: 0.9851 - accuracy: 0.5048 - val_loss: 0.7450 - val_accuracy: 0.6367
Epoch 15/20
13/13 [==============================] - 1181s 91s/step - loss: 0.9699 - accuracy: 0.4892 - val_loss: 0.7409 - val_accuracy: 0.6289
Epoch 16/20
13/13 [==============================] - 1187s 91s/step - loss: 0.8884 - accuracy: 0.5241 - val_loss: 0.7169 - val_accuracy: 0.6133
Epoch 17/20
13/13 [==============================] - 1197s 92s/step - loss: 0.9372 - accuracy: 0.5084 - val_loss: 0.7464 - val_accuracy: 0.5859
Epoch 18/20
13/13 [==============================] - 1224s 94s/step - loss: 0.9230 - accuracy: 0.5229 - val_loss: 0.9198 - val_accuracy: 0.5156
Epoch 19/20
13/13 [==============================] - 1270s 98s/step - loss: 0.9161 - accuracy: 0.5192 - val_loss: 0.6785 - val_accuracy: 0.6289
Epoch 20/20
13/13 [==============================] - 1173s 90s/step - loss: 0.8728 - accuracy: 0.5193 - val_loss: 0.6674 - val_accuracy: 0.5781
The training and validation accuracy plot is given below.
You could use transfer learning: take a pre-trained model such as MobileNet or Inception and train it on your dataset. That should significantly improve your accuracy.
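A minimal sketch of that suggestion, assuming the same generators as above and a frozen MobileNetV2 base (the head and hyper-parameters are illustrative):

import tensorflow as tf

base = tf.keras.applications.MobileNetV2(input_shape=(150, 150, 3),
                                         include_top=False,
                                         weights='imagenet')
base.trainable = False  # train only the new classification head at first

model = tf.keras.models.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation='softmax')
])
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(train_generator, epochs=10, validation_data=validation_generator)

Note that MobileNetV2 was trained on inputs scaled to [-1, 1], so replacing the rescale=1./255 step with tf.keras.applications.mobilenet_v2.preprocess_input would match its expected preprocessing more closely.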

Increase accuracy of CNN model

I have been working on an image classification problem. I have used ImageDataGenerator to load and preprocess the data and then trained my CNN model on the image data set, but the accuracy is stuck at 51%.
My data set is of 1000 signatures, with 4000 real sample images and 4000 fake sample images; in total I have 8000 images.
I have tried:
Different train/test split ratios
Different numbers of epochs and batch sizes
Increasing/decreasing the number of CNN layers
but either the model overfits while the accuracy stays at 51%, or the accuracy decreases further.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.preprocessing.image import ImageDataGenerator

batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150

train_image_generator = ImageDataGenerator(
    rescale=1./255,
    rotation_range=20,
    width_shift_range=0.15,
    height_shift_range=0.15,
    shear_range=0.15,
    zoom_range=0.15,
    horizontal_flip=True,
    fill_mode='nearest',
    validation_split=0.4)
train_data_gen = train_image_generator.flow_from_directory(
    train_dir,
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    batch_size=batch_size,
    class_mode='binary',
    subset='training')
val_data_gen = train_image_generator.flow_from_directory(
    train_dir,  # same directory as training data
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    batch_size=batch_size,
    class_mode='binary',
    subset='validation')

model = Sequential([
    Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
    MaxPooling2D(),
    Conv2D(32, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(1)])  # raw logits; the sigmoid is handled by from_logits=True below
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
history_more = model.fit_generator(
    train_data_gen,
    steps_per_epoch=train_data_gen.samples // batch_size,
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=val_data_gen.samples // batch_size)
Epoch 1/15
37/37 [==============================] - 2886s 78s/step - loss: 0.8010 - accuracy: 0.4994 - val_loss: 0.6933 - val_accuracy: 0.5000
Epoch 2/15
37/37 [==============================] - 985s 27s/step - loss: 0.6934 - accuracy: 0.5015 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 3/15
37/37 [==============================] - 986s 27s/step - loss: 0.6931 - accuracy: 0.4991 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 4/15
37/37 [==============================] - 985s 27s/step - loss: 0.6931 - accuracy: 0.4998 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 5/15
37/37 [==============================] - 988s 27s/step - loss: 0.6930 - accuracy: 0.4961 - val_loss: 0.6927 - val_accuracy: 0.5000
Epoch 6/15
37/37 [==============================] - 991s 27s/step - loss: 0.6934 - accuracy: 0.5021 - val_loss: 0.6923 - val_accuracy: 0.5000
Epoch 7/15
37/37 [==============================] - 979s 26s/step - loss: 0.6917 - accuracy: 0.5028 - val_loss: 0.6909 - val_accuracy: 0.5000
Epoch 8/15
37/37 [==============================] - 974s 26s/step - loss: 0.6858 - accuracy: 0.4998 - val_loss: 0.6897 - val_accuracy: 0.4991
Epoch 9/15
37/37 [==============================] - 967s 26s/step - loss: 0.6802 - accuracy: 0.5078 - val_loss: 0.6909 - val_accuracy: 0.5003
Epoch 10/15
37/37 [==============================] - 970s 26s/step - loss: 0.6808 - accuracy: 0.5045 - val_loss: 0.6943 - val_accuracy: 0.5081
Epoch 11/15
37/37 [==============================] - 967s 26s/step - loss: 0.6741 - accuracy: 0.5103 - val_loss: 0.7072 - val_accuracy: 0.5131
Epoch 12/15
37/37 [==============================] - 950s 26s/step - loss: 0.6732 - accuracy: 0.5128 - val_loss: 0.7064 - val_accuracy: 0.5041
Epoch 13/15
37/37 [==============================] - 947s 26s/step - loss: 0.6707 - accuracy: 0.5171 - val_loss: 0.6996 - val_accuracy: 0.5078
Epoch 14/15
37/37 [==============================] - 951s 26s/step - loss: 0.6675 - accuracy: 0.5103 - val_loss: 0.7122 - val_accuracy: 0.5016
Epoch 15/15
37/37 [==============================] - 952s 26s/step - loss: 0.6724 - accuracy: 0.5197 - val_loss: 0.7105 - val_accuracy: 0.5119

Convnet in 1D gives accuracy of 50%: I can't figure out why

I am trying to build a 1D convnet for time-series binary classification. I tried to build the network, but what I get is no more than 50% accuracy: basically, it's completely random.
Looking at the weights, I figured out that in the first layer of my network the weights don't pick up anything, even some easy-to-spot spikes in the time series.
from tensorflow.keras import models, layers

print(x.shape, y.shape)
# "(1965, 100, 1) (1965, 2)"

timesteps = 100  # series length, taken from x.shape above

# My model
model = models.Sequential()
model.add(layers.Conv1D(filters=1,
                        kernel_size=10,
                        activation='relu',
                        input_shape=(timesteps, 1)))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Flatten())
model.add(layers.Dense(1,
                       activation='sigmoid'))
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d (Conv1D) (None, 91, 1) 11
_________________________________________________________________
global_max_pooling1d (Global (None, 1) 0
_________________________________________________________________
flatten (Flatten) (None, 1) 0
_________________________________________________________________
dense (Dense) (None, 1) 2
=================================================================
Total params: 13
Trainable params: 13
Non-trainable params: 0
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
network = model.fit(x, y[:, 0],
                    epochs=20,
                    batch_size=32,
                    shuffle=True)
Epoch 1/20
1965/1965 [==============================] - 0s 211us/sample - loss: 0.6929 - accuracy: 0.5247
Epoch 2/20
1965/1965 [==============================] - 0s 78us/sample - loss: 0.6925 - accuracy: 0.5288
Epoch 3/20
1965/1965 [==============================] - 0s 79us/sample - loss: 0.6922 - accuracy: 0.5288
Epoch 4/20
1965/1965 [==============================] - 0s 79us/sample - loss: 0.6920 - accuracy: 0.5288
Epoch 5/20
1965/1965 [==============================] - 0s 77us/sample - loss: 0.6919 - accuracy: 0.5288
Epoch 6/20
1965/1965 [==============================] - 0s 76us/sample - loss: 0.6917 - accuracy: 0.5288
Epoch 7/20
1965/1965 [==============================] - 0s 78us/sample - loss: 0.6917 - accuracy: 0.5288
Epoch 8/20
1965/1965 [==============================] - 0s 76us/sample - loss: 0.6916 - accuracy: 0.5288
Epoch 9/20
1965/1965 [==============================] - 0s 79us/sample - loss: 0.6916 - accuracy: 0.5288
Epoch 10/20
1965/1965 [==============================] - 0s 77us/sample - loss: 0.6916 - accuracy: 0.5288
Epoch 11/20
1965/1965 [==============================] - 0s 77us/sample - loss: 0.6916 - accuracy: 0.5288
Epoch 12/20
1965/1965 [==============================] - 0s 77us/sample - loss: 0.6916 - accuracy: 0.5288
Epoch 13/20
1965/1965 [==============================] - 0s 83us/sample - loss: 0.6915 - accuracy: 0.5288
Epoch 14/20
1965/1965 [==============================] - 0s 81us/sample - loss: 0.6915 - accuracy: 0.5288
Epoch 15/20
1965/1965 [==============================] - 0s 81us/sample - loss: 0.6915 - accuracy: 0.5288
Epoch 16/20
1965/1965 [==============================] - 0s 79us/sample - loss: 0.6915 - accuracy: 0.5288
Epoch 17/20
1965/1965 [==============================] - 0s 80us/sample - loss: 0.6916 - accuracy: 0.5288
Epoch 18/20
1965/1965 [==============================] - 0s 81us/sample - loss: 0.6915 - accuracy: 0.5288
Epoch 19/20
1965/1965 [==============================] - 0s 81us/sample - loss: 0.6915 - accuracy: 0.5288
Epoch 20/20
1965/1965 [==============================] - 0s 81us/sample - loss: 0.6915 - accuracy: 0.5288
This is all I get.
I tried to build a small conv1D "by hand", and it shows that it can actually find the spikes in the data.
I believe I am making some small error, but I can't figure it out.
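For what it's worth, with filters=1 the whole network has only 13 trainable parameters, and GlobalMaxPooling1D then reduces each 100-step series to a single scalar, which leaves almost no capacity to respond to the spikes. A sketch of the same architecture with a few more filters (the count of 16 is illustrative, not tuned):

from tensorflow.keras import models, layers

model = models.Sequential()
model.add(layers.Conv1D(filters=16, kernel_size=10, activation='relu',
                        input_shape=(100, 1)))  # several filters instead of one
model.add(layers.GlobalMaxPooling1D())          # pools to a 16-dim vector now
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])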