I have 4 different classes of images (bicycle, car, airplane, and musical instrument) and I tried to do image classification with this dataset. When I train the model I get this accuracy: 0.62.
What should I do to increase the accuracy?
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D
from keras.optimizers import RMSprop,Adam
#build the model
#set up the layers
model = Sequential()
model.add(Conv2D(filters = 8, kernel_size = (5,5),padding = 'Same',
activation ='relu', input_shape = (IMG_SIZE,IMG_SIZE,1)))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters = 16, kernel_size = (3,3),padding = 'Same',
activation ='relu'))
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation = "relu"))
model.add(Dropout(0.5))
model.add(Dense(10, activation = "softmax"))
# Define the optimizer
#Adam optimizer: Change the learning rate
optimizer = Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
# Compile the model
model.compile(optimizer = optimizer , loss = "categorical_crossentropy",
metrics=["accuracy"])
#train the model
model.fit(train_images, train_labels, epochs=10,batch_size=250)
#evaluate accuracy
val_loss, val_acc = model.evaluate(val_images, val_labels)
print('validation accuracy:', val_acc)
print('validation loss:' , val_loss)
Here is the result:
Epoch 1/10
18620/18620 [==============================] - 987s 53ms/step - loss: 2.0487 - acc: 0.3938
Epoch 2/10
18620/18620 [==============================] - 985s 53ms/step - loss: 1.1145 - acc: 0.5409
Epoch 3/10
18620/18620 [==============================] - 978s 53ms/step - loss: 1.0331 - acc: 0.5830
Epoch 4/10
18620/18620 [==============================] - 975s 52ms/step - loss: 1.0032 - acc: 0.5942
Epoch 5/10
18620/18620 [==============================] - 973s 52ms/step - loss: 0.9680 - acc: 0.6137
Epoch 6/10
18620/18620 [==============================] - 979s 53ms/step - loss: 0.9308 - acc: 0.6296
Epoch 7/10
18620/18620 [==============================] - 976s 52ms/step - loss: 0.9052 - acc: 0.6386
Epoch 8/10
18620/18620 [==============================] - 1008s 54ms/step - loss: 0.8755 - acc: 0.6507
Epoch 9/10
18620/18620 [==============================] - 994s 53ms/step - loss: 0.8479 - acc: 0.6614
Epoch 10/10
18620/18620 [==============================] - 997s 54ms/step - loss: 0.8108 - acc: 0.6749
4656/4656 [==============================] - 40s 9ms/step
validation accuracy: 0.6265034364261168
validation loss: 0.964772748373628
First of all, you can increase the number of CNN filters (8 or 16 is not enough) and add BatchNormalization layers after the MaxPool layers. If that doesn't work, change the optimizer (e.g. SGD).
You may need to tune the exact filter counts depending on the complexity of your dataset and the depth of your neural network, but I recommend starting with filters in the range [32, 64, 128] in the earlier layers and increasing up to [256, 512, 1024] in the deeper layers.
If your input images are larger than 128×128, you may choose to use a kernel size > 3 to learn larger spatial filters and to help reduce volume size. If your images are smaller than 128×128, you may want to consider sticking with strictly 1×1 and 3×3 filters.
Also use the Adam optimizer with its default values.
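A minimal sketch of such a revision, assuming IMG_SIZE and the 4 classes from the question (the exact filter counts are starting points to tune, not a definitive fix):
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, BatchNormalization, Dropout, Flatten, Dense
from keras.optimizers import Adam

model = Sequential()
# Wider first block: 32 filters instead of 8
model.add(Conv2D(32, (3, 3), padding='same', activation='relu',
                 input_shape=(IMG_SIZE, IMG_SIZE, 1)))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(BatchNormalization())   # BatchNormalization after the pooling layer
model.add(Dropout(0.25))
# Wider second block: 64 filters instead of 16
model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(4, activation='softmax'))  # 4 output units for the 4 classes

# Adam with its default values
model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy'])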
I've been trying to save and reload a model, and whenever I do that the accuracy always goes down.
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(64, kernel_size=3, activation='relu', input_shape=(IMG_SIZE,IMG_SIZE,3)))
model.add(tf.keras.layers.Conv2D(32, kernel_size=3, activation='relu'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(len(SURFACE_TYPES), activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy',
optimizer='adam',
metrics=['acc'])
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=EPOCHS,
validation_steps=10)
Output:
Epoch 1/3
84/84 [==============================] - 2s 19ms/step - loss: 1.9663 - acc: 0.6258 - val_loss: 0.8703 - val_acc: 0.6867
Epoch 2/3
84/84 [==============================] - 1s 18ms/step - loss: 0.2865 - acc: 0.9105 - val_loss: 0.4494 - val_acc: 0.8667
Epoch 3/3
84/84 [==============================] - 1s 18ms/step - loss: 0.1409 - acc: 0.9574 - val_loss: 0.3614 - val_acc: 0.9000
Running the following commands then produces the same training loss but different training accuracies, even though the weights and structures of the two models are identical.
model.save("my_model2.h5")
model2 = load_model("my_model2.h5")
model2.evaluate(train_ds)
model.evaluate(train_ds)
Output:
84/84 [==============================] - 1s 9ms/step - loss: 0.0854 - acc: 0.0877
84/84 [==============================] - 1s 9ms/step - loss: 0.0854 - acc: 0.9862
[0.08536089956760406, 0.9861862063407898]
I have shared a reference link (click here); it covers all the formats to save and load your model.
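For reference, a minimal sketch of the common Keras save/load options (this is not the linked guide itself; the file paths and variable names are placeholders):
import tensorflow as tf

# Whole model in HDF5 format (architecture + weights + optimizer state)
model.save("my_model.h5")
restored_h5 = tf.keras.models.load_model("my_model.h5")

# Whole model in the TensorFlow SavedModel format (a directory)
model.save("my_model_dir")
restored_tf = tf.keras.models.load_model("my_model_dir")

# Weights only -- the architecture must be rebuilt before loading
model.save_weights("my_weights.h5")
model.load_weights("my_weights.h5")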
I am currently working on a defect classification problem for solar panels. It is a multi-class classification problem, currently with 3 classes. I have done the coding part, but my accuracy is very low. How can I improve my accuracy?
Total training images - 900
Testing/validation images - 300
Classes - 3
My code is given below -
import tensorflow as tf
import keras_preprocessing
from keras_preprocessing import image
from keras_preprocessing.image import ImageDataGenerator
TRAINING_DIR = "/content/drive/My Drive/solar_images/solar_images/train/"
training_datagen = ImageDataGenerator(
rescale = 1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
VALIDATION_DIR = "/content/drive/My Drive/solar_images/solar_images/test/"
validation_datagen = ImageDataGenerator(rescale = 1./255)
train_generator = training_datagen.flow_from_directory(
TRAINING_DIR,
target_size=(150,150),
class_mode='categorical',
batch_size=64
)
validation_generator = validation_datagen.flow_from_directory(
VALIDATION_DIR,
target_size=(150,150),
class_mode='categorical',
batch_size=64
)
model = tf.keras.models.Sequential([
# Note the input shape is the desired size of the image 150x150 with 3 bytes color
# This is the first convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
# The second convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The third convolution
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fourth convolution
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# Flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
tf.keras.layers.Dropout(0.5),
# 512 neuron hidden layer
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(3, activation='softmax')
])
model.summary()
model.compile(loss = 'categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
batch_size=64
history = model.fit(train_generator,
epochs=20,
steps_per_epoch=int(894/batch_size),
validation_data = validation_generator,
verbose = 1,
validation_steps=int(289/batch_size))
model.save("solar_images_weight.h5")
My accuracy is -
Epoch 1/20
13/13 [==============================] - 1107s 85s/step - loss: 1.2893 - accuracy: 0.3470 - val_loss: 1.0926 - val_accuracy: 0.3594
Epoch 2/20
13/13 [==============================] - 1239s 95s/step - loss: 1.1037 - accuracy: 0.3566 - val_loss: 1.0954 - val_accuracy: 0.3125
Epoch 3/20
13/13 [==============================] - 1203s 93s/step - loss: 1.0964 - accuracy: 0.3904 - val_loss: 1.0841 - val_accuracy: 0.5625
Epoch 4/20
13/13 [==============================] - 1182s 91s/step - loss: 1.0980 - accuracy: 0.3750 - val_loss: 1.0894 - val_accuracy: 0.3633
Epoch 5/20
13/13 [==============================] - 1218s 94s/step - loss: 1.1086 - accuracy: 0.3386 - val_loss: 1.0874 - val_accuracy: 0.3125
Epoch 6/20
13/13 [==============================] - 1214s 93s/step - loss: 1.0953 - accuracy: 0.3257 - val_loss: 1.0763 - val_accuracy: 0.6094
Epoch 7/20
13/13 [==============================] - 1136s 87s/step - loss: 1.0851 - accuracy: 0.3831 - val_loss: 1.0754 - val_accuracy: 0.3164
Epoch 8/20
13/13 [==============================] - 1170s 90s/step - loss: 1.1005 - accuracy: 0.3940 - val_loss: 1.0545 - val_accuracy: 0.5039
Epoch 9/20
13/13 [==============================] - 1138s 88s/step - loss: 1.1294 - accuracy: 0.4337 - val_loss: 1.0130 - val_accuracy: 0.5703
Epoch 10/20
13/13 [==============================] - 1131s 87s/step - loss: 1.0250 - accuracy: 0.4531 - val_loss: 0.8911 - val_accuracy: 0.6055
Epoch 11/20
13/13 [==============================] - 1162s 89s/step - loss: 1.0243 - accuracy: 0.4735 - val_loss: 0.9160 - val_accuracy: 0.4727
Epoch 12/20
13/13 [==============================] - 1153s 89s/step - loss: 0.9978 - accuracy: 0.4783 - val_loss: 0.7754 - val_accuracy: 0.6406
Epoch 13/20
13/13 [==============================] - 1187s 91s/step - loss: 1.0080 - accuracy: 0.4687 - val_loss: 0.7701 - val_accuracy: 0.6602
Epoch 14/20
13/13 [==============================] - 1204s 93s/step - loss: 0.9851 - accuracy: 0.5048 - val_loss: 0.7450 - val_accuracy: 0.6367
Epoch 15/20
13/13 [==============================] - 1181s 91s/step - loss: 0.9699 - accuracy: 0.4892 - val_loss: 0.7409 - val_accuracy: 0.6289
Epoch 16/20
13/13 [==============================] - 1187s 91s/step - loss: 0.8884 - accuracy: 0.5241 - val_loss: 0.7169 - val_accuracy: 0.6133
Epoch 17/20
13/13 [==============================] - 1197s 92s/step - loss: 0.9372 - accuracy: 0.5084 - val_loss: 0.7464 - val_accuracy: 0.5859
Epoch 18/20
13/13 [==============================] - 1224s 94s/step - loss: 0.9230 - accuracy: 0.5229 - val_loss: 0.9198 - val_accuracy: 0.5156
Epoch 19/20
13/13 [==============================] - 1270s 98s/step - loss: 0.9161 - accuracy: 0.5192 - val_loss: 0.6785 - val_accuracy: 0.6289
Epoch 20/20
13/13 [==============================] - 1173s 90s/step - loss: 0.8728 - accuracy: 0.5193 - val_loss: 0.6674 - val_accuracy: 0.5781
Training and validation accuracy plot is given below -
You could use transfer learning: take a pre-trained model such as MobileNet or Inception and train it on your dataset. This should significantly improve your accuracy.
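A minimal sketch of what that could look like, assuming the 150x150 RGB generators from the question (MobileNetV2 is just one possible base; the head layers and hyperparameters are assumptions to tune, and you may prefer the model's own preprocess_input over the plain 1./255 rescale):
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(input_shape=(150, 150, 3),
                                         include_top=False,
                                         weights='imagenet')
base.trainable = False  # freeze the pre-trained convolutional base

model = tf.keras.models.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(3, activation='softmax')  # 3 defect classes
])

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

history = model.fit(train_generator,
                    epochs=20,
                    validation_data=validation_generator)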
I have looked at other posts with similar problems and it seems that my model is overfitting. However, I've tried regularization, dropout, reducing parameters, decreasing the learning rate and changing the loss function, but nothing seems to help.
Here is my model:
model = Sequential([
Embedding(max_words, 64),
Dropout(.5),
Bidirectional(GRU(64, return_sequences = True), merge_mode='concat'),
GlobalMaxPooling1D(),
Dense(64),
Dropout(.5),
Dense(1, activation='sigmoid')
])
model.summary()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train,y_train, batch_size=32, epochs=25, verbose=1, validation_data=(x_test, y_test),shuffle=True)
And my training output:
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_3 (Embedding) (None, None, 64) 320000
_________________________________________________________________
dropout_6 (Dropout) (None, None, 64) 0
_________________________________________________________________
bidirectional_3 (Bidirection (None, None, 128) 49920
_________________________________________________________________
global_max_pooling1d_3 (Glob (None, 128) 0
_________________________________________________________________
dense_3 (Dense) (None, 64) 8256
_________________________________________________________________
dropout_7 (Dropout) (None, 64) 0
_________________________________________________________________
dense_4 (Dense) (None, 1) 65
=================================================================
Total params: 378,241
Trainable params: 378,241
Non-trainable params: 0
_________________________________________________________________
Epoch 1/25
229/229 [==============================] - 7s 32ms/step - loss: 0.6952 - accuracy: 0.4939 - val_loss: 0.6923 - val_accuracy: 0.5240
Epoch 2/25
229/229 [==============================] - 7s 30ms/step - loss: 0.6917 - accuracy: 0.5144 - val_loss: 0.6973 - val_accuracy: 0.4815
Epoch 3/25
229/229 [==============================] - 7s 30ms/step - loss: 0.6709 - accuracy: 0.5881 - val_loss: 0.7164 - val_accuracy: 0.4784
Epoch 4/25
229/229 [==============================] - 7s 30ms/step - loss: 0.6070 - accuracy: 0.6711 - val_loss: 0.7704 - val_accuracy: 0.4977
Epoch 5/25
229/229 [==============================] - 7s 30ms/step - loss: 0.5370 - accuracy: 0.7325 - val_loss: 0.8411 - val_accuracy: 0.4876
Epoch 6/25
229/229 [==============================] - 7s 30ms/step - loss: 0.4770 - accuracy: 0.7714 - val_loss: 0.9479 - val_accuracy: 0.4784
Epoch 7/25
229/229 [==============================] - 7s 30ms/step - loss: 0.4228 - accuracy: 0.8016 - val_loss: 1.0987 - val_accuracy: 0.4884
Epoch 8/25
229/229 [==============================] - 7s 30ms/step - loss: 0.3697 - accuracy: 0.8344 - val_loss: 1.2714 - val_accuracy: 0.4760
Epoch 9/25
229/229 [==============================] - 7s 30ms/step - loss: 0.3150 - accuracy: 0.8582 - val_loss: 1.4184 - val_accuracy: 0.4822
Epoch 10/25
229/229 [==============================] - 7s 31ms/step - loss: 0.2725 - accuracy: 0.8829 - val_loss: 1.6053 - val_accuracy: 0.4946
Epoch 11/25
229/229 [==============================] - 7s 31ms/step - loss: 0.2277 - accuracy: 0.9056 - val_loss: 1.8131 - val_accuracy: 0.4884
Epoch 12/25
229/229 [==============================] - 7s 31ms/step - loss: 0.1929 - accuracy: 0.9253 - val_loss: 1.9327 - val_accuracy: 0.4977
Epoch 13/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1717 - accuracy: 0.9318 - val_loss: 2.2280 - val_accuracy: 0.4900
Epoch 14/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1643 - accuracy: 0.9324 - val_loss: 2.2811 - val_accuracy: 0.4915
Epoch 15/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1419 - accuracy: 0.9439 - val_loss: 2.4530 - val_accuracy: 0.4830
Epoch 16/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1255 - accuracy: 0.9521 - val_loss: 2.6692 - val_accuracy: 0.4992
Epoch 17/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1124 - accuracy: 0.9558 - val_loss: 2.8106 - val_accuracy: 0.4892
Epoch 18/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1130 - accuracy: 0.9556 - val_loss: 2.6792 - val_accuracy: 0.4907
Epoch 19/25
229/229 [==============================] - 7s 30ms/step - loss: 0.1085 - accuracy: 0.9610 - val_loss: 2.8966 - val_accuracy: 0.5093
Epoch 20/25
229/229 [==============================] - 7s 30ms/step - loss: 0.0974 - accuracy: 0.9656 - val_loss: 2.8636 - val_accuracy: 0.5147
Epoch 21/25
229/229 [==============================] - 7s 30ms/step - loss: 0.0921 - accuracy: 0.9663 - val_loss: 2.9874 - val_accuracy: 0.4977
Epoch 22/25
229/229 [==============================] - 7s 30ms/step - loss: 0.0888 - accuracy: 0.9685 - val_loss: 3.0295 - val_accuracy: 0.4969
Epoch 23/25
229/229 [==============================] - 7s 30ms/step - loss: 0.0762 - accuracy: 0.9731 - val_loss: 3.0607 - val_accuracy: 0.4884
Epoch 24/25
229/229 [==============================] - 7s 30ms/step - loss: 0.0842 - accuracy: 0.9692 - val_loss: 3.0552 - val_accuracy: 0.4900
Epoch 25/25
229/229 [==============================] - 7s 30ms/step - loss: 0.0816 - accuracy: 0.9693 - val_loss: 2.9571 - val_accuracy: 0.5015
My validation loss seems to always increase no matter what I do. I am trying to predict political affiliation from tweets. The dataset I am using has worked well with other models, so perhaps there is something wrong with my data preprocessing instead?
import pandas as pd
dataset = pd.read_csv('political_tweets.csv')
dataset.head()
dataset = pd.read_csv('political_tweets.csv')["tweet"].values
y_train = pd.read_csv('political_tweets.csv')["dem_or_rep"].values
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(dataset, y_train, test_size=0.15, shuffle=True)
print(x_train[0])
print(x_test[0])
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

max_words = 10000
max_len = 25
tokenizer = Tokenizer(num_words=max_words, filters='!"#$%&()*+,-./:;<=>?#[\\]^_`{|}~\t\n1234567890', lower=False, oov_token="<OOV>")
tokenizer.fit_on_texts(x_train)
x_train = tokenizer.texts_to_sequences(x_train)
x_train = pad_sequences(x_train, max_len, padding='post', truncating='post')
tokenizer.fit_on_texts(x_test)
x_test = tokenizer.texts_to_sequences(x_test)
x_test = pad_sequences(x_test, max_len, padding='post', truncating='post')
I am really stumped. Any help is appreciated.
You're doing binary classification and your validation accuracy is near 50%. That means your model has learnt nothing useful; it is equivalent to random prediction.
Your training accuracy is very high, which suggests your model is badly overfitted.
Don't apply dropout right after the embedding layer; it can mess everything up.
Remove the Dense(64) after GlobalMaxPooling1D.
Use recurrent_dropout in the GRU.
Train for fewer epochs.
Reduce the vocabulary and remove stop words. Maybe there is too much noise: since your sequence length is only 25, noisy stop words can fool the model. (A small helper for this is sketched after the snippet below.)
import nltk
from nltk.corpus import stopwords
set(stopwords.words('english'))
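A hypothetical helper (not from the original answer) showing one way to strip those stop words from the raw tweets before tokenizing; remove_stop_words and the variable names are assumptions:
import nltk
from nltk.corpus import stopwords

nltk.download('stopwords')
stop_words = set(stopwords.words('english'))

def remove_stop_words(texts):
    # Drop English stop words from each raw tweet string
    return [' '.join(w for w in t.split() if w.lower() not in stop_words)
            for t in texts]

x_train = remove_stop_words(x_train)
x_test = remove_stop_words(x_test)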
If your model is still overfitting, try reducing the embedding output_dim and the GRU units, experimenting with several combinations of both. A sketch of these suggested changes follows.
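A minimal sketch pulling the suggestions together (smaller embedding and GRU sizes, recurrent_dropout, no dropout right after the embedding, no Dense(64) after pooling, fewer epochs); the exact sizes are assumptions to tune:
from keras.models import Sequential
from keras.layers import Embedding, Bidirectional, GRU, GlobalMaxPooling1D, Dense

model = Sequential([
    Embedding(max_words, 32),                      # smaller output_dim than 64
    Bidirectional(GRU(32, return_sequences=True,
                      recurrent_dropout=0.2),      # recurrent_dropout in the GRU
                  merge_mode='concat'),
    GlobalMaxPooling1D(),
    Dense(1, activation='sigmoid')                 # no extra Dense(64) before this
])

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Fewer epochs than the original 25
model.fit(x_train, y_train, batch_size=32, epochs=5,
          validation_data=(x_test, y_test), shuffle=True)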
I am kind of new to deep learning, and especially Keras, and I have a university assignment to train a CNN and learn about it using Keras. I am using the MURA dataset (skeletal radiographs).
What I have done until now is to go over all images from the dataset and split the training set into train and validation (90/10).
I am using a CNN that has been given in the paper and I am not allowed to modify it until the second task. The first task is to observe and understand the CNN.
def run():
train_datagen = ImageDataGenerator(rescale=1./255)
val_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory('train_data',
target_size=(227,227),
batch_size=BATCH_SIZE,
class_mode='binary',
color_mode='grayscale'
)
val_generator = val_datagen.flow_from_directory('test_data',
target_size=(227,227),
batch_size=BATCH_SIZE,
class_mode='binary',
color_mode='grayscale'
)
classifier = Sequential()
classifier.add(Conv2D(64,(7,7),strides=2, input_shape=(227,227,1)))
classifier.add(Activation('relu'))
classifier.add(MaxPooling2D(pool_size=(2,2), strides=2))
classifier.add(Conv2D(128, (5,5), strides=2 ))
classifier.add(Activation('relu'))
classifier.add(MaxPooling2D(pool_size=(2,2),strides=2))
classifier.add(Conv2D(256, (3,3), strides=1))
classifier.add(Activation('relu'))
classifier.add(Conv2D(384, (3,3), strides=1))
classifier.add(Activation('relu'))
classifier.add(Conv2D(256, (3,3), strides=1))
classifier.add(Activation('relu'))
classifier.add(Conv2D(256, (3,3), strides=1))
classifier.add(Activation('relu'))
classifier.add(MaxPooling2D(pool_size=(2,2),strides=2))
classifier.add(Flatten())
classifier.add(Dropout(0.5))
classifier.add(Dense(units=2048))
classifier.add(Activation('relu'))
classifier.add(Dropout(0.5))
classifier.add(Dense(units=1))
classifier.add(Activation('sigmoid'))
classifier.summary()
# from keras.optimizers import SGD
# sg = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
classifier.compile(optimizer=keras.optimizers.SGD(),loss='binary_crossentropy', metrics=['accuracy'])
classifier.fit_generator(train_generator,
steps_per_epoch= training_len//BATCH_SIZE,
epochs=10,
validation_data=val_generator,
validation_steps= valid_len//BATCH_SIZE,
shuffle=True,
verbose=1)
classifier.save_weights('first_model_weights.h5')
classifier.save('first_model.h5')
The problem I am having is that if I run this, it just does not learn. Or at least I think it doesn't.
The output looks like this:
Epoch 1/10
575/575 [==============================] - 693s 1s/step - loss: 0.6767 - acc: 0.5958 - val_loss: 0.6751 - val_acc: 0.5966
Epoch 2/10
575/575 [==============================] - 207s 359ms/step - loss: 0.6760 - acc: 0.5948 - val_loss: 0.6752 - val_acc: 0.5958
Epoch 3/10
575/575 [==============================] - 258s 448ms/step - loss: 0.6745 - acc: 0.5983 - val_loss: 0.6748 - val_acc: 0.5958
Epoch 4/10
575/575 [==============================] - 165s 287ms/step - loss: 0.6760 - acc: 0.5950 - val_loss: 0.6757 - val_acc: 0.5947
Epoch 5/10
575/575 [==============================] - 166s 288ms/step - loss: 0.6761 - acc: 0.5948 - val_loss: 0.6731 - val_acc: 0.6016
Epoch 6/10
575/575 [==============================] - 167s 290ms/step - loss: 0.6742 - acc: 0.5990 - val_loss: 0.6778 - val_acc: 0.5875
Epoch 7/10
575/575 [==============================] - 206s 359ms/step - loss: 0.6762 - acc: 0.5938 - val_loss: 0.6721 - val_acc: 0.6038
Epoch 8/10
575/575 [==============================] - 165s 286ms/step - loss: 0.6762 - acc: 0.5938 - val_loss: 0.6763 - val_acc: 0.5947
Epoch 9/10
575/575 [==============================] - 164s 286ms/step - loss: 0.6751 - acc: 0.5972 - val_loss: 0.6787 - val_acc: 0.5897
Epoch 10/10
575/575 [==============================] - 168s 292ms/step - loss: 0.6750 - acc: 0.5971 - val_loss: 0.6722 - val_acc: 0.6022
Am I doing something wrong in the code? Is it the dataset splitting? I am currently in a dark spot and I can't seem to figure it out.
I am trying to learn Keras, and none of the code I use ends up learning anything: from the example code in Deep Learning with Python to the code at https://medium.com/nybles/create-your-first-image-recognition-classifier-using-cnn-keras-and-tensorflow-backend-6eaab98d14dd. With the last link, I am not able to use the full 10000-image dataset, but even with my 1589-image training set the accuracy stays at 0.5 the whole time.
I'm almost starting to think the problem is my overclocked CPU and RAM, but that's more of a crazy guess.
I initially thought the problem was that I had TensorFlow 2.0.0-alpha. However, even after I switched to regular tensorflow-gpu, still nothing learns.
#Convolutional Neural Network
# Importing the Keras libraries and packages
from keras.models import Sequential
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.models import model_from_json
import os
#initialize the cnn
classifier = Sequential()
#Step 1 convolution
classifier.add(Convolution2D(32, 3, 3, input_shape = (64, 64, 3), activation = 'relu'))
#Step 2 Pooling
classifier.add(MaxPooling2D(pool_size = (2,2)))
#Step 3 Flattening
classifier.add(Flatten())
#Step 4 Full Connection
classifier.add(Dense(output_dim = 128, activation = 'relu'))
classifier.add(Dense(output_dim = 64, activation = 'relu'))
classifier.add(Dense(output_dim = 32, activation = 'relu'))
classifier.add(Dense(output_dim = 1, activation = 'sigmoid'))
#Compiling the CNN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
#Part 2 Fitting the CNN to the images
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
rescale=1/.255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
training_set = train_datagen.flow_from_directory(
'dataset/training_set',
target_size=(64, 64),
batch_size=32,
class_mode='binary')
test_set = test_datagen.flow_from_directory(
'dataset/test_set',
target_size=(64, 64),
batch_size=32,
class_mode='binary')
from IPython.display import display
from PIL import Image
classifier.fit_generator(
training_set,
steps_per_epoch=1589,
epochs=10,
validation_data=test_set,
validation_steps=378)
import numpy as np
from keras.preprocessing import image
test_image = image.load_img('dataset/test_set/cats/cat.4012.jpg', target_size = (64,64))
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis = 0)
result = classifier.predict(test_image)
training_set.class_indices
if result[0][0] >= 0.5:
prediction = 'dog'
else:
prediction = 'cat'
print(prediction)
Example from Deep Learning with Python:
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words = 10000)
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
results = np.zeros((len(sequences), dimension))
for i,sequence in enumerate(sequences):
results[i, sequence]=1.
return results
x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu',input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val,y_val))
Dogs and cats output:
Epoch 1/10
1589/1589 [==============================] - 112s 70ms/step - loss: 7.8736 - acc: 0.5115 - val_loss: 7.9528 - val_acc: 0.4976
Epoch 2/10
1589/1589 [==============================] - 111s 70ms/step - loss: 7.8697 - acc: 0.5117 - val_loss: 7.9606 - val_acc: 0.4971
Epoch 3/10
1589/1589 [==============================] - 111s 70ms/step - loss: 7.8740 - acc: 0.5115 - val_loss: 7.9499 - val_acc: 0.4978
Epoch 4/10
1589/1589 [==============================] - 111s 70ms/step - loss: 7.8674 - acc: 0.5119 - val_loss: 7.9634 - val_acc: 0.4969
Epoch 5/10
1589/1589 [==============================] - 111s 70ms/step - loss: 7.8765 - acc: 0.5113 - val_loss: 7.9499 - val_acc: 0.4977
Epoch 6/10
1589/1589 [==============================] - 111s 70ms/step - loss: 7.8737 - acc: 0.5115 - val_loss: 7.9634 - val_acc: 0.4970
Epoch 7/10
1589/1589 [==============================] - 129s 81ms/step - loss: 7.8623 - acc: 0.5122 - val_loss: 7.9626 - val_acc: 0.4970
Epoch 8/10
1589/1589 [==============================] - 112s 71ms/step - loss: 7.8758 - acc: 0.5114 - val_loss: 7.9508 - val_acc: 0.4977
Epoch 9/10
1589/1589 [==============================] - 115s 72ms/step - loss: 7.8708 - acc: 0.5117 - val_loss: 7.9519 - val_acc: 0.4976
Epoch 10/10
1589/1589 [==============================] - 112s 70ms/step - loss: 7.8738 - acc: 0.5115 - val_loss: 7.9614 - val_acc: 0.4971
cat
Deep Learning with Python IMDB example output:
WARNING:tensorflow:From C:\Users\Mike\Anaconda3\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 4s 246us/step - loss: 0.6932 - acc: 0.4982 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 2/20
15000/15000 [==============================] - 2s 115us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 3/20
15000/15000 [==============================] - 2s 115us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 4/20
15000/15000 [==============================] - 2s 119us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 5/20
15000/15000 [==============================] - 2s 120us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 6/20
15000/15000 [==============================] - 2s 119us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6933 - val_acc: 0.4947
Epoch 7/20
15000/15000 [==============================] - 2s 113us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 8/20
15000/15000 [==============================] - 2s 113us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 9/20
15000/15000 [==============================] - 2s 119us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6933 - val_acc: 0.4947
Epoch 10/20
15000/15000 [==============================] - 2s 122us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6933 - val_acc: 0.4947
Epoch 11/20
15000/15000 [==============================] - 2s 116us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6933 - val_acc: 0.4947
Epoch 12/20
15000/15000 [==============================] - 2s 116us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6933 - val_acc: 0.4947
Epoch 13/20
15000/15000 [==============================] - 2s 121us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6933 - val_acc: 0.4947
Epoch 14/20
15000/15000 [==============================] - 2s 127us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 15/20
15000/15000 [==============================] - 2s 121us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 16/20
15000/15000 [==============================] - 2s 113us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 17/20
15000/15000 [==============================] - 2s 115us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 18/20
15000/15000 [==============================] - 2s 114us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 19/20
15000/15000 [==============================] - 2s 114us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947
Epoch 20/20
15000/15000 [==============================] - 2s 119us/step - loss: 0.6931 - acc: 0.5035 - val_loss: 0.6932 - val_acc: 0.4947