Wrong accuracy validation set - tensorflow

I have a model for identifying objects by their spectrum:
model = Sequential()
model.add(Conv1D(filters=64, input_shape=(train_generator.get_half_spec_size(160000), 1),
                 kernel_size=20, activation='relu'))
model.add(BatchNormalization())
model.add(Conv1D(filters=64, kernel_size=16, activation='relu',
                 kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4),
                 bias_regularizer=regularizers.l2(1e-4),
                 activity_regularizer=regularizers.l2(1e-5)))
model.add(BatchNormalization())
model.add(MaxPool1D(strides=5))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(256, activation='relu'))
#model.add(BatchNormalization())
model.add(Dense(2, activation='softmax'))
model.compile(optimizer=Adam(), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
Something strange seems to be happening with the 'accuracy' metric. First of all, in almost every epoch I get an accuracy of about 0.66 on the training set and 0.99 on the validation set.
Then, after the model had finished training, I took one sample and checked whether the prediction was correct. It was not (the label was 1 but the model predicted 0), yet the output of evaluate was
[2.060739278793335, 1.0]
so the loss is large (as expected), but the accuracy of 1.0 is wrong.
Then I took another sample and ran evaluate again; the output was
[0.18439120054244995, 1.0]
so this time both the loss and the accuracy look right (and the model's prediction was in fact correct).
My guess is that the 'accuracy' metric is computed incorrectly here. Or is the mistake somewhere else?
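For reference, a minimal sketch of making that single-sample check explicit (x_sample and y_sample are placeholder names; the shapes assume the input layer defined above):
import numpy as np

# x_sample: one spectrum shaped (1, spec_len, 1); y_sample: its integer label shaped (1,)
probs = model.predict(x_sample)                      # softmax output, shape (1, 2)
predicted_class = int(np.argmax(probs, axis=-1)[0])  # class the model actually picks
loss, acc = model.evaluate(x_sample, y_sample, verbose=0)
print(predicted_class, int(y_sample[0]), loss, acc)
Comparing predicted_class against the label directly makes it clear whether predict and evaluate disagree for the same sample.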

Related

Validation loss decreasing but validation accuracy is fluctuating

I am training my first ML model. I am working on a 10-class classification problem. From what I can see, the model is overfitting since there is a significant difference between the training and validation accuracy.
This is the relevant code for the model
model = keras.Sequential()
model.add(keras.Input(shape=x_train[0].shape))
model.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, strides=(3, 3), padding="same",
                                 activation="relu", kernel_regularizer=tf.keras.regularizers.l1_l2(0.01)))
model.add(tf.keras.layers.MaxPool2D(strides=2))
model.add(tf.keras.layers.Conv2D(filters=32, kernel_size=(3, 3), padding='valid', activation='relu',
                                 kernel_regularizer=tf.keras.regularizers.l1_l2(0.01)))
model.add(tf.keras.layers.MaxPool2D(strides=2))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Dense(10))

optimizer = tf.keras.optimizers.Adam(learning_rate=0.001 / 2)
model.summary()
model.compile(optimizer=optimizer,
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
history = model.fit(x_train, y_train, epochs=30, validation_data=(x_val, y_val),
                    callbacks=tf.keras.callbacks.EarlyStopping(verbose=1, patience=4))
There are large fluctuations in the validation accuracy and I am not sure why.
I have tried augmenting the data and have also injected noise into the training data. (This is an audio classification problem with 10 different classes)
(Plot of the training curves: https://i.stack.imgur.com/TXe50.png)
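A minimal sketch of the kind of noise injection described above, assuming x_train and y_train are NumPy arrays of audio features and integer labels (the noise level is purely illustrative, not a tuned value):
import numpy as np

def add_gaussian_noise(x, noise_std=0.01):
    # Add small zero-mean Gaussian noise to every training example.
    noise = np.random.normal(loc=0.0, scale=noise_std, size=x.shape)
    return (x + noise).astype(x.dtype)

x_train_noisy = add_gaussian_noise(x_train)
x_train_aug = np.concatenate([x_train, x_train_noisy], axis=0)  # clean + noisy copies
y_train_aug = np.concatenate([y_train, y_train], axis=0)        # labels are reused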

Why is the input size or shape not valid?

I know that similar questions were asked before, but the solutions didn't help me.
I have the following model:
model = Sequential()
# CNN
model.add(Conv2D(filters=16, kernel_size=2, input_shape=(40, 2000, 1), activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
# CNN
model.add(Conv2D(filters=32, kernel_size=2, activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
# CNN
model.add(Conv2D(filters=64, kernel_size=2, activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
# CNN
model.add(Conv2D(filters=128, kernel_size=2, activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
model.add(GlobalAveragePooling2D())
model.add(Dense(num_labels, activation='softmax'))
optimizer = optimizers.SGD(lr=0.002, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')
I'm trying to fit the model:
model.fit(X_train, y_train_hot, batch_size=10, epochs=50,
          validation_data=(X_test, y_test_hot))
where
X_train.shape is (246, 40, 2000)
From another post I read (Keras input_shape for conv2d and manually loaded images), it seems that my input is right.
But I'm getting the following error:
ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [None, 40, 2000]
What am I missing, and how can I fix it?
As you saw from the link you posted, you must reshape your data to be 4-dimensional.
X_train = X_train.reshape(246, 40, 2000, 1)
or
X_train = X_train.reshape(-1, 40, 2000, 1)
4D: [batch_size, img_height, img_width, number_of_channels]
The error is that you did not include the 4th axis (i.e. axis=3, or axis=-1 for that matter).
Here, you can see the following:
expected min_ndim=4, found ndim=3. Full shape received: [None, 40, 2000]
The None translates to the batch size, which of course is variable and set to None.
Then, you have 40, 2000, which should correspond to height and width respectively.
At the same time, remember that you wrote in your code that the input shape your network expects is input_shape=(40, 2000, 1), not (40, 2000).
You need to explicitly add the "color"/"number of channels" axis, the third axis in this case, so use either reshape or expand_dims to achieve that.
For demonstration purposes, suppose that X is of shape (40, 2000); then reshape it with X = X.reshape(40, 2000, 1).
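A short sketch of both routes on the full training array, assuming X_train has shape (246, 40, 2000) as in the question:
import numpy as np

# Option 1: reshape, letting NumPy infer the batch dimension.
X_train = X_train.reshape(-1, 40, 2000, 1)

# Option 2 (equivalent): append a channel axis with expand_dims.
# X_train = np.expand_dims(X_train, axis=-1)

print(X_train.shape)  # (246, 40, 2000, 1)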

Constant training/validation accuracy problem

I have a dataset of about 500 .mat files (300 train and 200 test); these are really small cropped images of at most 3 kB each. When I train the architecture below with the following parameters, I get a test accuracy and loss of 69%, and the validation accuracy remains around 51% over 25 epochs. I want to know how to improve my test accuracy and fix the constant validation accuracy problem.
Note: this is a binary classification problem and the class split is in a 60:40 ratio.
weight_decay = 1e-3
model = models.Sequential()
model.add(layers.Conv2D(16, (3, 3), kernel_regularizer=regularizers.l2(weight_decay),padding='same',input_shape=X_train.shape[1:]))
model.add(layers.Activation('relu'))
model.add(layers.Dropout(0.2))
model.add(layers.Conv2D(32, (3, 3),kernel_regularizer=regularizers.l2(weight_decay), padding='same'))
model.add(layers.Activation('relu'))
#model.add(layers.Dropout(0.2))
model.add(layers.Flatten())
#model.add(layers.Dropout(0.4))
model.add(layers.Dense(20, activation='relu'))
model.add(layers.Dropout(0.50))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer=optimizers.adam(lr=0.001), metrics=['acc'])
es_callback = callbacks.EarlyStopping(monitor='val_loss', patience=5)
history = model.fit(  # train_generator,
    X_train, Y_train,
    batch_size=batch_size,
    # steps_per_epoch=trainSize,
    epochs=25,
    validation_data=(X_val, Y_val),  # val_generator,
    # validation_steps=valSize,
    # callbacks=[LearningRateScheduler(lr_schedule)]
    callbacks=[es_callback]
)
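Given the 60:40 class split mentioned above, a minimal sketch of one commonly tried adjustment: weighting the classes in fit. The weight values, and which class is the minority, are assumptions here, not tuned settings:
# Weights are illustrative for a 60:40 split; adjust to match which class is underrepresented.
class_weight = {0: 1.0, 1: 60.0 / 40.0}

history = model.fit(X_train, Y_train,
                    batch_size=batch_size,
                    epochs=25,
                    validation_data=(X_val, Y_val),
                    callbacks=[es_callback],
                    class_weight=class_weight)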

Keras input shape mismatch error for multi-feature CNN classification model

Here is my code:
model = Sequential()
model.add(Conv1D(32, kernel_size=3,
                 activation='relu',
                 input_shape=(14, 1)))
model.add(MaxPooling1D(pool_size=1))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='softmax'))
model.summary()
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
model.fit(X_train.values, y_train.values,
          batch_size=4,
          epochs=1,
          verbose=2,
          validation_data=(X_test.values, y_test.values))
And the error is:
Error when checking input: expected conv1d_35_input to have 3 dimensions, but got array with shape (13166, 14)
As suggested by other posts, I tweaked the Flatten layer before the output layer, but that did not work.
My X_train.values.shape is (13166, 14).
Any suggestion on how I should fix this?
You need to reshape X_train.values from (13166, 14) to (13166, 14, 1), since the input shape of your CNN is (None, 14, 1).
This may solve your problem:
X_train.values.reshape([-1,14,1])
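A sketch of applying that reshape to both the training and validation arrays before fitting (X_train_cnn and X_test_cnn are illustrative names; reshape returns a new array rather than modifying the original):
# Keep the reshaped results, since reshape does not modify the DataFrame values in place.
X_train_cnn = X_train.values.reshape(-1, 14, 1)
X_test_cnn = X_test.values.reshape(-1, 14, 1)

model.fit(X_train_cnn, y_train.values,
          batch_size=4,
          epochs=1,
          verbose=2,
          validation_data=(X_test_cnn, y_test.values))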

How to load fine-tuned keras model

I am following this tutorial to try fine-tuning with the VGG16 model. I trained the model and saved the weights to an .h5 file using model.save_weights; the model was built as follows:
vgg_conv = VGG16(include_top=False, weights='imagenet', input_shape=(image_size, image_size, 3))

# Freeze the layers except the last 4 layers
for layer in vgg_conv.layers[:-4]:
    layer.trainable = False

model = Sequential()
model.add(vgg_conv)
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(11, activation='softmax'))
I then tried to rebuild the architecture and load the weights using the code below:
def create_model(self):
    model = Sequential()
    vgg_model = VGG16(include_top=False, weights='imagenet', input_shape=(150, 150, 3))
    model.add(vgg_model)
    model.add(Flatten())
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(11, activation='softmax'))
    model.load_weights(self.top_model_weights_path)  # throws error
    return model
but it then throws this error
ValueError: Cannot feed value of shape (512, 512, 3, 3) for Tensor 'Placeholder:0', which has shape '(3, 3, 3, 64)'
What am I doing wrong?
I am not sure how to interpret the error, but you could try saving the model architecture and the weights together with model.save("model.h5") after fine-tuning.
To load the model you can type
model = load_model('model.h5')
# summarize model.
model.summary()
I think this has the benefit of not having to rebuild the model, and it requires only one line to accomplish the same purpose.
The problem comes from the difference in trainable settings between the two models. If you freeze the same layers in the create_model function (everything except the last 4 layers of the VGG base, as in the training code), it will work.
But as Igna said, model.save and load_model are simpler.
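A sketch of that fix, reproducing the same freezing inside create_model before the weights are loaded (it assumes the 150x150 input size and top_model_weights_path from the question):
def create_model(self):
    vgg_model = VGG16(include_top=False, weights='imagenet', input_shape=(150, 150, 3))

    # Reproduce the trainable configuration used at training time:
    # freeze everything except the last 4 layers of the VGG base.
    for layer in vgg_model.layers[:-4]:
        layer.trainable = False

    model = Sequential()
    model.add(vgg_model)
    model.add(Flatten())
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(11, activation='softmax'))
    model.load_weights(self.top_model_weights_path)
    return model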