I have tried a lot to improve my validation accuracy by adding layers and dropout, but nothing changes: my training accuracy is above 95% while my validation accuracy is stuck at 88%.
My split:
x_train, x_validate, y_train, y_validate = train_test_split(
    x_train, y_train, test_size=0.2, random_state=42)
Shapes after splitting the data:
x_train shape: (5850,)
y_train shape: (5850,)
x_validate shape: (1463,)
y_validate shape: (1463,)
x_test shape: (2441,)
y_test shape: (2441,)
Width, height, and number of channels:
width, height, channels = 64, 64, 3
Shapes after converting the images to arrays:
Training set shape : (5850, 64, 64, 3)
Validation set shape : (1463, 64, 64, 3)
Test set shape : (2441, 64, 64, 3)
I have 6 classes.
Augmentation:
datagen = ImageDataGenerator(
    featurewise_center=True,
    samplewise_center=True,
    featurewise_std_normalization=True,
    samplewise_std_normalization=True,
    zca_whitening=False,
    rotation_range=0.9,
    zoom_range=0.7,
    width_shift_range=0.8,
    height_shift_range=0.8,
    horizontal_flip=True,
    vertical_flip=True)
datagen.fit(x_train)
My Sequential model:
model = Sequential()
model.add(Conv2D(16, (3, 3), input_shape=(64, 64, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Conv2D(32, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Conv2D(64, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Conv2D(128, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Conv2D(256, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Flatten())
model.add(Dense(256))
model.add(Activation("relu"))
model.add(Dropout(0.3))
model.add(Dense(256))
model.add(Activation("relu"))
model.add(Dropout(0.2))
model.add(Dense(numberOfClass))  # output
model.add(Activation("softmax"))

model.compile(loss="binary_crossentropy",
              optimizer="adam",
              metrics=["accuracy"])
batch_size = 256
I use early stopping to keep the weights with the lowest validation loss. How can I improve my validation accuracy to at least 92%?
Epoch 1/100
29/29 [==============================] - 62s 2s/step - loss: 0.4635 - accuracy: 0.3040 - val_loss: 0.4227 - val_accuracy: 0.4007
Epoch 00001: val_loss improved from inf to 0.42266, saving model to ./model_best_weights.h5
Epoch 2/100
29/29 [==============================] - 60s 2s/step - loss: 0.4230 - accuracy: 0.3260 - val_loss: 0.4046 - val_accuracy: 0.3314
Epoch 00002: val_loss improved from 0.42266 to 0.40463, saving model to ./model_best_weights.h5
Epoch 3/100
29/29 [==============================] - 60s 2s/step - loss: 0.3833 - accuracy: 0.4234 - val_loss: 0.3417 - val_accuracy: 0.5125
Epoch 00003: val_loss improved from 0.40463 to 0.34174, saving model to ./model_best_weights.h5
Epoch 4/100
29/29 [==============================] - 60s 2s/step - loss: 0.3351 - accuracy: 0.5040 - val_loss: 0.3108 - val_accuracy: 0.5432
Epoch 00004: val_loss improved from 0.34174 to 0.31083, saving model to ./model_best_weights.h5
Epoch 5/100
29/29 [==============================] - 59s 2s/step - loss: 0.3002 - accuracy: 0.5683 - val_loss: 0.2655 - val_accuracy: 0.6247
Epoch 00005: val_loss improved from 0.31083 to 0.26553, saving model to ./model_best_weights.h5
Epoch 6/100
29/29 [==============================] - 60s 2s/step - loss: 0.2794 - accuracy: 0.6025 - val_loss: 0.2677 - val_accuracy: 0.6194
Epoch 00006: val_loss did not improve from 0.26553
Epoch 7/100
29/29 [==============================] - 60s 2s/step - loss: 0.2606 - accuracy: 0.6374 - val_loss: 0.2524 - val_accuracy: 0.6477
Epoch 00007: val_loss improved from 0.26553 to 0.25239, saving model to ./model_best_weights.h5
Epoch 8/100
29/29 [==============================] - 59s 2s/step - loss: 0.2400 - accuracy: 0.6751 - val_loss: 0.2232 - val_accuracy: 0.6997
Epoch 00008: val_loss improved from 0.25239 to 0.22320, saving model to ./model_best_weights.h5
Epoch 9/100
29/29 [==============================] - 60s 2s/step - loss: 0.2307 - accuracy: 0.6875 - val_loss: 0.2092 - val_accuracy: 0.7181
Epoch 00009: val_loss improved from 0.22320 to 0.20916, saving model to ./model_best_weights.h5
Epoch 10/100
29/29 [==============================] - 59s 2s/step - loss: 0.2085 - accuracy: 0.7284 - val_loss: 0.2092 - val_accuracy: 0.7255
Epoch 00010: val_loss did not improve from 0.20916
Epoch 11/100
29/29 [==============================] - 60s 2s/step - loss: 0.1961 - accuracy: 0.7463 - val_loss: 0.1943 - val_accuracy: 0.7603
Epoch 00011: val_loss improved from 0.20916 to 0.19435, saving model to ./model_best_weights.h5
Epoch 12/100
29/29 [==============================] - 60s 2s/step - loss: 0.1894 - accuracy: 0.7621 - val_loss: 0.1829 - val_accuracy: 0.7669
Epoch 00012: val_loss improved from 0.19435 to 0.18294, saving model to ./model_best_weights.h5
Epoch 13/100
29/29 [==============================] - 60s 2s/step - loss: 0.1766 - accuracy: 0.7770 - val_loss: 0.1751 - val_accuracy: 0.7780
Epoch 00013: val_loss improved from 0.18294 to 0.17508, saving model to ./model_best_weights.h5
Epoch 14/100
29/29 [==============================] - 60s 2s/step - loss: 0.1606 - accuracy: 0.8006 - val_loss: 0.1666 - val_accuracy: 0.8005
Epoch 00014: val_loss improved from 0.17508 to 0.16662, saving model to ./model_best_weights.h5
Epoch 15/100
29/29 [==============================] - 60s 2s/step - loss: 0.1531 - accuracy: 0.8105 - val_loss: 0.1718 - val_accuracy: 0.7816
Epoch 00015: val_loss did not improve from 0.16662
Epoch 16/100
29/29 [==============================] - 61s 2s/step - loss: 0.1449 - accuracy: 0.8265 - val_loss: 0.1600 - val_accuracy: 0.8083
Epoch 00016: val_loss improved from 0.16662 to 0.16000, saving model to ./model_best_weights.h5
Epoch 17/100
29/29 [==============================] - 62s 2s/step - loss: 0.1309 - accuracy: 0.8419 - val_loss: 0.1609 - val_accuracy: 0.8202
Epoch 00017: val_loss did not improve from 0.16000
Epoch 18/100
29/29 [==============================] - 60s 2s/step - loss: 0.1165 - accuracy: 0.8607 - val_loss: 0.1572 - val_accuracy: 0.8222
Epoch 00018: val_loss improved from 0.16000 to 0.15722, saving model to ./model_best_weights.h5
Epoch 19/100
29/29 [==============================] - 60s 2s/step - loss: 0.1109 - accuracy: 0.8711 - val_loss: 0.1523 - val_accuracy: 0.8370
Epoch 00019: val_loss improved from 0.15722 to 0.15225, saving model to ./model_best_weights.h5
Epoch 20/100
29/29 [==============================] - 60s 2s/step - loss: 0.1008 - accuracy: 0.8877 - val_loss: 0.1405 - val_accuracy: 0.8484
Epoch 00020: val_loss improved from 0.15225 to 0.14046, saving model to ./model_best_weights.h5
Epoch 21/100
29/29 [==============================] - 60s 2s/step - loss: 0.1063 - accuracy: 0.8764 - val_loss: 0.1514 - val_accuracy: 0.8390
Epoch 00021: val_loss did not improve from 0.14046
Epoch 22/100
29/29 [==============================] - 61s 2s/step - loss: 0.0880 - accuracy: 0.8979 - val_loss: 0.1423 - val_accuracy: 0.8550
Epoch 00022: val_loss did not improve from 0.14046
Epoch 23/100
29/29 [==============================] - 60s 2s/step - loss: 0.0750 - accuracy: 0.9196 - val_loss: 0.1368 - val_accuracy: 0.8632
Epoch 00023: val_loss improved from 0.14046 to 0.13678, saving model to ./model_best_weights.h5
Epoch 24/100
29/29 [==============================] - 60s 2s/step - loss: 0.0712 - accuracy: 0.9218 - val_loss: 0.1520 - val_accuracy: 0.8521
Epoch 00024: val_loss did not improve from 0.13678
Epoch 25/100
29/29 [==============================] - 60s 2s/step - loss: 0.0664 - accuracy: 0.9288 - val_loss: 0.1600 - val_accuracy: 0.8451
Epoch 00025: val_loss did not improve from 0.13678
Epoch 26/100
29/29 [==============================] - 60s 2s/step - loss: 0.0605 - accuracy: 0.9360 - val_loss: 0.1528 - val_accuracy: 0.8636
Epoch 00026: val_loss did not improve from 0.13678
Epoch 00026: early stopping
Plots of my training curves:
https://i.imgur.com/pNYwcE8.jpg
https://i.imgur.com/ZCSRI8e.jpg
You should experiment more, but glancing at your code, I can give you the following tips (a rough sketch combining them follows this list):
- According to the plot, validation accuracy is still increasing slightly at the end, so try increasing the EarlyStopping patience and monitoring validation accuracy instead of validation loss.
- Add batch normalization to your architecture.
- Increase the dropout rate, perhaps to a value between 0.4 and 0.7.
- Tune the learning rate, and consider a learning-rate scheduler such as ReduceLROnPlateau, which can help training continue after validation metrics plateau.
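A rough sketch of those suggestions combined (the patience values, dropout rate, and factor are placeholder guesses, not tuned):

from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

# e.g. insert after each Conv2D/Activation pair:
#   model.add(BatchNormalization())
# and raise the dense-layer dropout toward 0.5

callbacks = [
    EarlyStopping(monitor="val_accuracy", mode="max", patience=10,
                  restore_best_weights=True),
    ReduceLROnPlateau(monitor="val_accuracy", mode="max", factor=0.5,
                      patience=4, min_lr=1e-6),
]
# then pass callbacks=callbacks to model.fit(...)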
Good luck!
I am monitoring the Keras metric val_recall. It has been improving, but the callback keeps the lowest value, 0.9958, as the best, even though better values (0.9978, 0.9985) have been recorded. The monitor mode is set to 'auto'.
Please help me understand why Keras thinks the metric is not improving.
Epoch 1/10
6883/6883 [==============================] - 1982s 287ms/step - loss: 0.1025 - recall: 0.9738 - accuracy: 0.9631 - val_loss: 0.0537 - val_recall: 0.9978 - val_accuracy: 0.9837
Epoch 00001: val_recall improved from inf to 0.99783, saving model to /content/drive/MyDrive/home/repository/mon/kaggle/toxic_comment_classification/toxicity_classification_2021JUL10_1647/model_Ctoxic_B32_L256/model.h5
Epoch 2/10
6883/6883 [==============================] - 1970s 286ms/step - loss: 0.0348 - recall: 0.9946 - accuracy: 0.9901 - val_loss: 0.0412 - val_recall: 0.9958 - val_accuracy: 0.9888
Epoch 00002: val_recall improved from 0.99783 to 0.99583, saving model to /content/drive/MyDrive/home/repository/mon/kaggle/toxic_comment_classification/toxicity_classification_2021JUL10_1647/model_Ctoxic_B32_L256/model.h5
Epoch 3/10
6883/6883 [==============================] - 1970s 286ms/step - loss: 0.0181 - recall: 0.9968 - accuracy: 0.9952 - val_loss: 0.0446 - val_recall: 0.9984 - val_accuracy: 0.9897
Epoch 00003: val_recall did not improve from 0.99583
Epoch 4/10
6883/6883 [==============================] - 1972s 286ms/step - loss: 0.0125 - recall: 0.9976 - accuracy: 0.9967 - val_loss: 0.0429 - val_recall: 0.9985 - val_accuracy: 0.9902
Epoch 00004: val_recall did not improve from 0.99583
Epoch 5/10
6883/6883 [==============================] - 1973s 287ms/step - loss: 0.0094 - recall: 0.9979 - accuracy: 0.9974 - val_loss: 0.0663 - val_recall: 0.9991 - val_accuracy: 0.9873
Epoch 00005: ReduceLROnPlateau reducing learning rate to 5.9999998484272515e-06.
Epoch 00005: val_recall did not improve from 0.99583
Epoch 6/10
6883/6883 [==============================] - 1970s 286ms/step - loss: 0.0031 - recall: 0.9996 - accuracy: 0.9993 - val_loss: 0.0646 - val_recall: 0.9998 - val_accuracy: 0.9901
Epoch 00006: val_recall did not improve from 0.99583
Epoch 7/10
6883/6883 [==============================] - 1967s 286ms/step - loss: 0.0019 - recall: 0.9998 - accuracy: 0.9997 - val_loss: 0.0641 - val_recall: 0.9997 - val_accuracy: 0.9903
Restoring model weights from the end of the best epoch.
Epoch 00007: val_recall did not improve from 0.99583
Epoch 00007: early stopping
Solution
As per the comment by Innat, set mode='max' in the callbacks. With mode='auto', Keras infers 'max' only for metrics whose names look like accuracy; for val_recall it falls back to 'min', so the checkpoint treats a lower recall as an improvement (which is why the log above shows "improved from 0.99783 to 0.99583").
From the comments: setting mode='max' in the callbacks resolved the issue.
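A minimal sketch of the fix (the file path and patience values are placeholders; only monitor and mode come from the discussion above):

from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

checkpoint = ModelCheckpoint("model.h5",
                             monitor="val_recall",
                             mode="max",   # 'auto' would fall back to 'min' here
                             save_best_only=True,
                             verbose=1)
early_stop = EarlyStopping(monitor="val_recall",
                           mode="max",
                           patience=3,
                           restore_best_weights=True,
                           verbose=1)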
I created two Anaconda environments, tensorflow2x and tensorflow1x. In tensorflow2x I installed tensorflow 2.3.2 and keras 2.4.3 (the latest); in tensorflow1x I installed tensorflow-gpu 1.15 and keras 2.3.1. I then ran the toy example mnist_cnn.py and found that the TensorFlow 2 setup gives much lower accuracy than the TensorFlow 1 setup.
Here are the results:
# tensorflow2.3.2 + keras 2.4.3:
Epoch 1/12
60000/60000 [==============================] - 3s 54us/step - loss: 2.2795 - accuracy: 0.1270 - val_loss: 2.2287 - val_accuracy: 0.2883
Epoch 2/12
60000/60000 [==============================] - 3s 52us/step - loss: 2.2046 - accuracy: 0.2435 - val_loss: 2.1394 - val_accuracy: 0.5457
Epoch 3/12
60000/60000 [==============================] - 3s 52us/step - loss: 2.1133 - accuracy: 0.3636 - val_loss: 2.0215 - val_accuracy: 0.6608
Epoch 4/12
60000/60000 [==============================] - 3s 52us/step - loss: 1.9932 - accuracy: 0.4560 - val_loss: 1.8693 - val_accuracy: 0.7147
Epoch 5/12
60000/60000 [==============================] - 3s 52us/step - loss: 1.8430 - accuracy: 0.5239 - val_loss: 1.6797 - val_accuracy: 0.7518
Epoch 6/12
60000/60000 [==============================] - 3s 52us/step - loss: 1.6710 - accuracy: 0.5720 - val_loss: 1.4724 - val_accuracy: 0.7755
Epoch 7/12
60000/60000 [==============================] - 3s 53us/step - loss: 1.5003 - accuracy: 0.6071 - val_loss: 1.2725 - val_accuracy: 0.7928
Epoch 8/12
60000/60000 [==============================] - 3s 52us/step - loss: 1.3414 - accuracy: 0.6363 - val_loss: 1.0991 - val_accuracy: 0.8077
Epoch 9/12
60000/60000 [==============================] - 3s 53us/step - loss: 1.2129 - accuracy: 0.6604 - val_loss: 0.9603 - val_accuracy: 0.8169
Epoch 10/12
60000/60000 [==============================] - 3s 53us/step - loss: 1.1103 - accuracy: 0.6814 - val_loss: 0.8530 - val_accuracy: 0.8281
Epoch 11/12
60000/60000 [==============================] - 3s 52us/step - loss: 1.0237 - accuracy: 0.7021 - val_loss: 0.7689 - val_accuracy: 0.8350
Epoch 12/12
60000/60000 [==============================] - 3s 52us/step - loss: 0.9576 - accuracy: 0.7168 - val_loss: 0.7030 - val_accuracy: 0.8429
Test loss: 0.7029915698051452
Test accuracy: 0.8428999781608582
# tensorflow 1.15.5 + keras 2.3.1
Epoch 1/12
60000/60000 [==============================] - 5s 84us/step - loss: 0.2631 - accuracy: 0.9198 - val_loss: 0.0546 - val_accuracy: 0.9826
Epoch 2/12
60000/60000 [==============================] - 4s 63us/step - loss: 0.0898 - accuracy: 0.9731 - val_loss: 0.0394 - val_accuracy: 0.9866
Epoch 3/12
60000/60000 [==============================] - 4s 63us/step - loss: 0.0674 - accuracy: 0.9799 - val_loss: 0.0341 - val_accuracy: 0.9881
Epoch 4/12
60000/60000 [==============================] - 4s 63us/step - loss: 0.0563 - accuracy: 0.9835 - val_loss: 0.0320 - val_accuracy: 0.9895
Epoch 5/12
60000/60000 [==============================] - 4s 63us/step - loss: 0.0465 - accuracy: 0.9859 - val_loss: 0.0343 - val_accuracy: 0.9889
Epoch 6/12
60000/60000 [==============================] - 4s 63us/step - loss: 0.0423 - accuracy: 0.9872 - val_loss: 0.0327 - val_accuracy: 0.9892
Epoch 7/12
60000/60000 [==============================] - 4s 63us/step - loss: 0.0387 - accuracy: 0.9882 - val_loss: 0.0279 - val_accuracy: 0.9907
Epoch 8/12
60000/60000 [==============================] - 4s 63us/step - loss: 0.0351 - accuracy: 0.9893 - val_loss: 0.0269 - val_accuracy: 0.9909
Epoch 9/12
60000/60000 [==============================] - 4s 63us/step - loss: 0.0330 - accuracy: 0.9902 - val_loss: 0.0311 - val_accuracy: 0.9895
Epoch 10/12
60000/60000 [==============================] - 4s 63us/step - loss: 0.0292 - accuracy: 0.9915 - val_loss: 0.0256 - val_accuracy: 0.9919
Epoch 11/12
60000/60000 [==============================] - 4s 63us/step - loss: 0.0293 - accuracy: 0.9911 - val_loss: 0.0276 - val_accuracy: 0.9911
Epoch 12/12
60000/60000 [==============================] - 4s 63us/step - loss: 0.0269 - accuracy: 0.9917 - val_loss: 0.0264 - val_accuracy: 0.9915
Test loss: 0.026934823030711867
Test accuracy: 0.9918000102043152
What caused the poor results for tensorflow 2.3.2 + keras 2.4.3? Is there a compatibility issue between tensorflow and keras here?
According to the author of Keras, users should switch their Keras code to tf.keras in TensorFlow 2.x. In the toy example above, using from tensorflow import keras in place of import keras also leads to lower accuracy. Does tf.keras give poorer accuracy than keras, or am I running the wrong toy example for TensorFlow 2.x?
Update:
I also noticed that if I downgrade tensorflow to version 2.2.1 (along with keras 2.3.1), the two setups produce about the same result. It seems there were some major changes from keras 2.3.1 to keras 2.4.0 (https://newreleases.io/project/github/keras-team/keras/release/2.4.0).
What are the specific main differences between keras 2.3.1 and keras 2.4.x?
Which versions of tensorflow are compatible with keras 2.4.x?
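Per the 2.4.0 release notes linked above, keras 2.4.x simply redirects to tf.keras, so a useful first step is confirming which versions are actually loaded in each environment (a quick sanity-check sketch of my own; environment names follow the ones above):

import tensorflow as tf
import keras

print(tf.__version__)     # expected: 2.3.2 in tensorflow2x, 1.15.5 in tensorflow1x
print(keras.__version__)  # expected: 2.4.3 in tensorflow2x, 2.3.1 in tensorflow1x

# Under keras>=2.4.0 these two names resolve to the same implementation:
from tensorflow import keras as tfk
print(keras.layers.Dense is tfk.layers.Dense)  # True on keras 2.4.x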
I am training a classifier on the cats-vs-dogs data. The model is a minor variant of ResNet18 and returns softmax probabilities over the classes. However, the validation loss is mostly NaN, whereas the training loss decreases steadily and behaves as expected. Training and validation accuracy both increase epoch by epoch.
Epoch 1/15
312/312 [==============================] - 1372s 4s/step - loss: 0.7849 - accuracy: 0.5131 - val_loss: nan - val_accuracy: 0.5343
Epoch 2/15
312/312 [==============================] - 1372s 4s/step - loss: 0.6966 - accuracy: 0.5539 - val_loss: 13989871201999266517090304.0000 - val_accuracy: 0.5619
Epoch 3/15
312/312 [==============================] - 1373s 4s/step - loss: 0.6570 - accuracy: 0.6077 - val_loss: 747123703808.0000 - val_accuracy: 0.5679
Epoch 4/15
312/312 [==============================] - 1372s 4s/step - loss: 0.6180 - accuracy: 0.6483 - val_loss: nan - val_accuracy: 0.6747
Epoch 5/15
312/312 [==============================] - 1373s 4s/step - loss: 0.5838 - accuracy: 0.6852 - val_loss: nan - val_accuracy: 0.6240
Epoch 6/15
312/312 [==============================] - 1372s 4s/step - loss: 0.5338 - accuracy: 0.7301 - val_loss: 31236203781405710523301888.0000 - val_accuracy: 0.7590
Epoch 7/15
312/312 [==============================] - 1373s 4s/step - loss: 0.4872 - accuracy: 0.7646 - val_loss: 52170.8672 - val_accuracy: 0.7378
Epoch 8/15
312/312 [==============================] - 1372s 4s/step - loss: 0.4385 - accuracy: 0.7928 - val_loss: 2130819335420217655296.0000 - val_accuracy: 0.8101
Epoch 9/15
312/312 [==============================] - 1373s 4s/step - loss: 0.3966 - accuracy: 0.8206 - val_loss: 116842888.0000 - val_accuracy: 0.7857
Epoch 10/15
312/312 [==============================] - 1372s 4s/step - loss: 0.3643 - accuracy: 0.8391 - val_loss: nan - val_accuracy: 0.8199
Epoch 11/15
312/312 [==============================] - 1373s 4s/step - loss: 0.3285 - accuracy: 0.8557 - val_loss: 788904.2500 - val_accuracy: 0.8438
Epoch 12/15
312/312 [==============================] - 1372s 4s/step - loss: 0.3029 - accuracy: 0.8670 - val_loss: nan - val_accuracy: 0.8245
Epoch 13/15
312/312 [==============================] - 1373s 4s/step - loss: 0.2857 - accuracy: 0.8781 - val_loss: 121907.8594 - val_accuracy: 0.8444
Epoch 14/15
312/312 [==============================] - 1373s 4s/step - loss: 0.2585 - accuracy: 0.8891 - val_loss: nan - val_accuracy: 0.8674
Epoch 15/15
312/312 [==============================] - 1374s 4s/step - loss: 0.2430 - accuracy: 0.8965 - val_loss: 822.7968 - val_accuracy: 0.8776
I checked for the following (a sketch of these checks appears after the list):
- infinity/NaN in the validation data
- infinity/NaN introduced when normalizing the data (using tf.keras.applications.resnet.preprocess_input)
- whether the model predicts only one class, which could make the loss function behave oddly
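Roughly how those checks look (a sketch, assuming X_valid is the array of validation images; X_valid, valid_dataset, and BATCH_SIZE are the names from the training code below):

import numpy as np
import tensorflow as tf

# checks 1-2: NaN/Inf in the raw and the preprocessed validation images
x = tf.keras.applications.resnet.preprocess_input(X_valid.astype("float32"))
print(np.isnan(X_valid).any(), np.isinf(X_valid).any())
print(np.isnan(x).any(), np.isinf(x).any())

# check 3: is the model collapsing to a single predicted class?
preds = model.predict(valid_dataset, steps=len(X_valid) // BATCH_SIZE)
print(np.unique(preds.argmax(axis=1)))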
Training code for reference:
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-3)
model = Resnet18(NUM_CLASSES=NUM_CLASSES)  # variant of the original model
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"])
history = model.fit(
    train_dataset,
    steps_per_epoch=len(X_train) // BATCH_SIZE,
    epochs=EPOCHS,
    validation_data=valid_dataset,
    validation_steps=len(X_valid) // BATCH_SIZE,
    verbose=1,
)
The most relevant answer I found was the last paragraph of the accepted answer here. However, that does not seem to apply: the validation loss diverges from the training loss by orders of magnitude and returns NaN. It looks as if the loss function is misbehaving.
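One way to localize the problem (my own suggestion, not from the post, assuming valid_dataset is a tf.data.Dataset of (images, labels) batches) is to evaluate the validation set batch by batch and see exactly where the loss becomes non-finite:

import numpy as np

for i, (xb, yb) in enumerate(valid_dataset.take(len(X_valid) // BATCH_SIZE)):
    loss, acc = model.evaluate(xb, yb, verbose=0)
    if not np.isfinite(loss):
        print("non-finite loss at validation batch", i)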
I am currently studying the book Hands-On Machine Learning. I want to create the simple neural network described in Chapter 10 for the MNIST handwritten-digit data, but my model is stuck and the accuracy is not increasing at all.
Here is my code:
import tensorflow as tf
from tensorflow import keras
import pandas as pd
import numpy as np
data = pd.read_csv('sample_data/mnist_train_small.csv', header=None)
test = pd.read_csv('sample_data/mnist_test.csv', header=None)
labels = data[0]
data = data.drop(0, axis=1)
test_labels = test[0]
test = test.drop(0, axis=1)
model = keras.models.Sequential([
    keras.layers.Dense(300, activation='relu', input_shape=(784,)),
    keras.layers.Dense(100, activation='relu'),
    keras.layers.Dense(10, activation='softmax'),
])
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])
keras.utils.plot_model(model, show_shapes=True)
hist = model.fit(data.to_numpy(), labels.to_numpy(), epochs=20, validation_data=(test.to_numpy(), test_labels.to_numpy()))
The first few epochs of output:
Epoch 1/20
625/625 [==============================] - 2s 3ms/step - loss: 2055059923226079526912.0000 - accuracy: 0.1115 - val_loss: 2.4539 - val_accuracy: 0.1134
Epoch 2/20
625/625 [==============================] - 2s 3ms/step - loss: 2.4160 - accuracy: 0.1085 - val_loss: 2.2979 - val_accuracy: 0.1008
Epoch 3/20
625/625 [==============================] - 2s 2ms/step - loss: 2.3006 - accuracy: 0.1110 - val_loss: 2.3014 - val_accuracy: 0.1136
Epoch 4/20
625/625 [==============================] - 2s 3ms/step - loss: 2.3009 - accuracy: 0.1121 - val_loss: 2.3014 - val_accuracy: 0.1136
Epoch 5/20
625/625 [==============================] - 2s 3ms/step - loss: 2.3009 - accuracy: 0.1121 - val_loss: 2.3014 - val_accuracy: 0.1136
Epoch 6/20
625/625 [==============================] - 2s 3ms/step - loss: 2.3008 - accuracy: 0.1121 - val_loss: 2.3014 - val_accuracy: 0.1136
Epoch 7/20
625/625 [==============================] - 2s 3ms/step - loss: 2.3008 - accuracy: 0.1121 - val_loss: 2.3014 - val_accuracy: 0.1136
Epoch 8/20
625/625 [==============================] - 2s 3ms/step - loss: 2.3008 - accuracy: 0.1121 - val_loss: 2.3014 - val_accuracy: 0.1136
Epoch 9/20
625/625 [==============================] - 2s 2ms/step - loss: 2.3008 - accuracy: 0.1121 - val_loss: 2.3014 - val_accuracy: 0.1136
Epoch 10/20
625/625 [==============================] - 2s 3ms/step - loss: 2.3008 - accuracy: 0.1121 - val_loss: 2.3014 - val_accuracy: 0.1136
Epoch 11/20
625/625 [==============================] - 2s 3ms/step - loss: 2.3008 - accuracy: 0.1121 - val_loss: 2.3014 - val_accuracy: 0.1136
Epoch 12/20
625/625 [==============================] - 2s 3ms/step - loss: 2.3008 - accuracy: 0.1121 - val_loss: 2.3014 - val_accuracy: 0.1136
Your loss choice depends on the label format: sparse_categorical_crossentropy expects integer class labels (which is what you have here), while categorical_crossentropy expects one-hot encoded targets, so switch only if you one-hot encode the labels. Also, you can use data.iloc[] instead of data[], and the adam optimizer would work better than plain SGD on this problem.
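A sketch of those changes, plus input scaling, which is my own addition (raw 0-255 pixels fed to SGD can produce exactly the kind of exploding first-epoch loss shown above):

# scale pixels to [0, 1] (my addition, not part of the answer above)
x = data.to_numpy().astype('float32') / 255.0
x_test = test.to_numpy().astype('float32') / 255.0

model.compile(loss='sparse_categorical_crossentropy',  # labels are integers
              optimizer='adam',
              metrics=['accuracy'])
hist = model.fit(x, labels.to_numpy(), epochs=20,
                 validation_data=(x_test, test_labels.to_numpy()))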