I am new to machine learning and I am working on a problem of sequence classification.
Each sample in the dataset is a sequence of shape (20, 9), i.e. 9 features over 20 time steps.
I have tried the model below, but I have not been able to get good accuracy.
import tensorflow as tf

f = 32
model = tf.keras.Sequential()
model.add(tf.keras.layers.LSTM(units=4*f, activation='tanh', return_sequences=True, input_shape=(window_length, 9)))
model.add(tf.keras.layers.Dropout(0.2))
model.add(tf.keras.layers.LSTM(units=4*f, activation='tanh', return_sequences=True))
model.add(tf.keras.layers.Dropout(0.2))
model.add(tf.keras.layers.LSTM(units=4*f, activation='tanh'))
model.add(tf.keras.layers.Dense(units=256, activation='tanh'))
model.add(tf.keras.layers.Dense(units=64, activation='tanh'))
model.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy', TP, TN, FP, FN])
model.build()
model.summary()

history = model.fit(X_train, y_train, epochs=256, batch_size=256,
                    callbacks=[WeightsSaver(model, 1)], validation_data=(X_test, y_test))
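TP, TN, FP and FN in the compile call are custom metrics (not shown here); judging by the logs further down they report rates rather than raw counts, and WeightsSaver is a custom checkpointing callback. For reference, built-in count-based near-equivalents would be:

# Built-in confusion-matrix metrics (these report raw counts per epoch,
# unlike the rate-style values visible in the training logs below)
TP = tf.keras.metrics.TruePositives(name='TP')
TN = tf.keras.metrics.TrueNegatives(name='TN')
FP = tf.keras.metrics.FalsePositives(name='FP')
FN = tf.keras.metrics.FalseNegatives(name='FN')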
I have also tried deeper networks, CNN-LSTMs, and a CNN-LSTM with dense layers at the end.
I am not getting a good confusion matrix; the model mostly produces false negatives, even on the training set.
I have also tried balancing the data with SMOTE and SVM-SMOTE, but it still gives me low accuracy.
The problems I am facing:
1) Without resampling: the majority class dominates and very few minority-class samples are predicted.
2) With resampling: I get many false positives on both the training and test data.
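For reference, class weighting is another route besides resampling; a rough Keras sketch, assuming y_train is a 1-D array of 0/1 labels:

import numpy as np

# Weight each class inversely to its frequency (assumes y_train is a 1-D 0/1 array)
neg, pos = np.bincount(y_train.astype(int))
total = neg + pos
class_weight = {0: total / (2.0 * neg), 1: total / (2.0 * pos)}

history = model.fit(X_train, y_train, epochs=256, batch_size=256,
                    class_weight=class_weight,
                    validation_data=(X_test, y_test))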
I have uploaded the data to the Kaggle page below.
https://www.kaggle.com/jgranth/binary-seqclassification-of-input-shape209
Could someone with experience in this field share their experience and possible solutions, or point out mistakes I might be making?
Below is the training output I get without resampling.
Epoch 1/256
2602/2602 [==============================] - 143s 48ms/step - loss: 0.4011 - accuracy: 0.8606 - TP: 4.3987e-04 - TN: 0.8601 - FP: 7.5963e-04 - FN: 0.1387 - val_loss: 0.4525 - val_accuracy: 0.8285 - val_TP: 5.8129e-04 - val_TN: 0.8281 - val_FP: 5.8710e-04 - val_FN: 0.1708
Epoch 2/256
2602/2602 [==============================] - 122s 47ms/step - loss: 0.3956 - accuracy: 0.8609 - TP: 3.2877e-04 - TN: 0.8605 - FP: 3.4228e-04 - FN: 0.1388 - val_loss: 0.4554 - val_accuracy: 0.8285 - val_TP: 0.0000e+00 - val_TN: 0.8286 - val_FP: 5.2316e-05 - val_FN: 0.1713
Epoch 3/256
2602/2602 [==============================] - 123s 47ms/step - loss: 0.3937 - accuracy: 0.8609 - TP: 4.1434e-04 - TN: 0.8605 - FP: 3.8132e-04 - FN: 0.1387 - val_loss: 0.4495 - val_accuracy: 0.8285 - val_TP: 8.7193e-05 - val_TN: 0.8286 - val_FP: 6.9754e-05 - val_FN: 0.1713
Epoch 4/256
2602/2602 [==============================] - 122s 47ms/step - loss: 0.3926 - accuracy: 0.8610 - TP: 5.7198e-04 - TN: 0.8604 - FP: 4.8640e-04 - FN: 0.1385 - val_loss: 0.4479 - val_accuracy: 0.8289 - val_TP: 0.0016 - val_TN: 0.8275 - val_FP: 0.0012 - val_FN: 0.1698
Related
I've been working on this image classification model for days. I have 70,000 images and 375 classes. I've tried training it with VGG16, Xception, ResNet and MobileNet, and I always hit the same ceiling of 45% on the validation set.
As you can see here
I've tried adding dropout layers and regularization, and the validation result stays the same.
Data augmentation didn't do much to help either.
Any ideas why this isn't working?
Here's a snippet of the code for the last model I used:
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import Xception
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import regularizers
from PIL import Image
from PIL import ImageFile

ImageFile.LOAD_TRUNCATED_IMAGES = True
validation_datagen = ImageDataGenerator(rescale=1./255)

target_size = (height, width)

datagen = ImageDataGenerator(rescale=1./255,
                             validation_split=0.2)

train_generator = datagen.flow_from_directory(
    path,
    target_size=(height, width),
    batch_size=batchSize,
    shuffle=True,
    class_mode='categorical',
    subset='training')

validation_generator = datagen.flow_from_directory(
    path,
    target_size=(height, width),
    batch_size=batchSize,
    class_mode='categorical',
    subset='validation')
num_classes = len(train_generator.class_indices)

xception_model = Xception(weights='imagenet', input_shape=(width, height, 3), include_top=False, classes=num_classes)
x = xception_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(512, activation='relu')(x)
out = Dense(num_classes, activation='softmax')(x)

# build the full model from the Xception base before compiling
model = Model(inputs=xception_model.input, outputs=out)

opt = Adam()
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
n_epochs = 15
history = model.fit(
    train_generator,
    steps_per_epoch=train_generator.samples // batchSize,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // batchSize,
    verbose=1,
    epochs=n_epochs)
Yes, you may need a balanced dataset across the categories for better training performance. Please try again with class_mode='sparse' and loss='sparse_categorical_crossentropy', since you are loading an image dataset from directories. Also freeze the pretrained layers with xception_model.trainable = False.
Check the code below (I have used a flower dataset with 5 classes):
xception_model = tf.keras.applications.Xception(weights='imagenet',input_shape=(width, height, 3), include_top=False,classes=num_classes)
xception_model.trainable = False
x = xception_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(32, activation='relu')(x)
out = Dense(num_classes, activation='softmax')(x)
opt = tf.keras.optimizers.Adam()
model = keras.Model(inputs=xception_model.input, outputs=out)
model.compile(optimizer=opt, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(train_generator, epochs=10, validation_data=validation_generator)
Output:
Epoch 1/10
217/217 [==============================] - 23s 95ms/step - loss: 0.5945 - accuracy: 0.7793 - val_loss: 0.4610 - val_accuracy: 0.8337
Epoch 2/10
217/217 [==============================] - 20s 91ms/step - loss: 0.3439 - accuracy: 0.8797 - val_loss: 0.4550 - val_accuracy: 0.8419
Epoch 3/10
217/217 [==============================] - 20s 93ms/step - loss: 0.2570 - accuracy: 0.9150 - val_loss: 0.4437 - val_accuracy: 0.8384
Epoch 4/10
217/217 [==============================] - 20s 91ms/step - loss: 0.2040 - accuracy: 0.9340 - val_loss: 0.4592 - val_accuracy: 0.8477
Epoch 5/10
217/217 [==============================] - 20s 91ms/step - loss: 0.1649 - accuracy: 0.9494 - val_loss: 0.4686 - val_accuracy: 0.8512
Epoch 6/10
217/217 [==============================] - 20s 92ms/step - loss: 0.1301 - accuracy: 0.9589 - val_loss: 0.4805 - val_accuracy: 0.8488
Epoch 7/10
217/217 [==============================] - 20s 93ms/step - loss: 0.0966 - accuracy: 0.9754 - val_loss: 0.4993 - val_accuracy: 0.8442
Epoch 8/10
217/217 [==============================] - 20s 91ms/step - loss: 0.0806 - accuracy: 0.9806 - val_loss: 0.5488 - val_accuracy: 0.8372
Epoch 9/10
217/217 [==============================] - 20s 91ms/step - loss: 0.0623 - accuracy: 0.9864 - val_loss: 0.5802 - val_accuracy: 0.8360
Epoch 10/10
217/217 [==============================] - 22s 100ms/step - loss: 0.0456 - accuracy: 0.9896 - val_loss: 0.6005 - val_accuracy: 0.8360
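For completeness, the directory generators would also need class_mode='sparse' so their integer labels match the sparse loss; a minimal sketch of that change applied to your generators:

# flow_from_directory must emit integer labels to match sparse_categorical_crossentropy
train_generator = datagen.flow_from_directory(
    path,
    target_size=(height, width),
    batch_size=batchSize,
    shuffle=True,
    class_mode='sparse',
    subset='training')

validation_generator = datagen.flow_from_directory(
    path,
    target_size=(height, width),
    batch_size=batchSize,
    class_mode='sparse',
    subset='validation')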
Since I start fine-tuning from the weights learned during transfer learning, I would expect the loss to be the same or lower. However, it looks like fine-tuning starts from a different set of weights.
Code to start transfer learning:
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')
base_model.trainable = False

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(units=3, activation='sigmoid')
])

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

epochs = 1000
callback = tf.keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)
history = model.fit(train_generator,
                    steps_per_epoch=len(train_generator),
                    epochs=epochs,
                    validation_data=val_generator,
                    validation_steps=len(val_generator),
                    callbacks=[callback])
Output from last epoch:
Epoch 29/1000
232/232 [==============================] - 492s 2s/step - loss: 0.1298 - accuracy: 0.8940 - val_loss: 0.1220 - val_accuracy: 0.8937
Code to start fine tuning:
model.trainable = True

# Fine-tune from this layer onwards
fine_tune_at = -20

# Freeze all the layers before the `fine_tune_at` layer
for layer in model.layers[:fine_tune_at]:
    layer.trainable = False

model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss='binary_crossentropy',
              metrics=['accuracy'])

history_fine = model.fit(train_generator,
                         steps_per_epoch=len(train_generator),
                         epochs=epochs,
                         validation_data=val_generator,
                         validation_steps=len(val_generator),
                         callbacks=[callback])
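A side note on the freezing loop above: model is the three-layer Sequential wrapper (base_model, pooling, dense), so model.layers[:-20] is an empty slice and nothing is actually re-frozen. If the intent is to keep all but the last 20 layers of MobileNetV2 frozen, the loop would presumably have to walk the base model's layers instead, e.g.:

base_model.trainable = True

# Freeze everything except the last 20 layers of the MobileNetV2 base
for layer in base_model.layers[:-20]:
    layer.trainable = False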
But this is what I see for the first few epochs:
Epoch 1/1000
232/232 [==============================] - ETA: 0s - loss: 0.3459 - accuracy: 0.8409
/usr/local/lib/python3.7/dist-packages/PIL/Image.py:960: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images
  "Palette images with Transparency expressed in bytes should be "
232/232 [==============================] - 509s 2s/step - loss: 0.3459 - accuracy: 0.8409 - val_loss: 0.7755 - val_accuracy: 0.7262
Epoch 2/1000
232/232 [==============================] - 502s 2s/step - loss: 0.1889 - accuracy: 0.9066 - val_loss: 0.5628 - val_accuracy: 0.8881
Eventually the loss drops and passes the transfer learning loss:
Epoch 87/1000
232/232 [==============================] - 521s 2s/step - loss: 0.0232 - accuracy: 0.8312 - val_loss: 0.0481 - val_accuracy: 0.8563
Why was the loss in the first epoch of fine tuning higher than the last loss from transfer learning?
I have trained a model with Keras using transfer learning. Since the whole code is fairly long, I will only include the important parts.
For the learning rate, I cloned some code from GitHub to be able to use a cyclic learning rate, and passed it to the model as a callback.
Here is how I defined my learning rate:
from tensorflow.keras.optimizers import RMSprop

opt = RMSprop()

def get_lr_metric(optimizer):
    def lr(y_true, y_pred):
        return optimizer.lr
    return lr

lr_track = get_lr_metric(opt)

MIN_LR = 1e-7
MAX_LR = 1e-3
CLR_METHOD = "triangular"

clr = CyclicLR(
    mode=CLR_METHOD,
    base_lr=MIN_LR,
    max_lr=MAX_LR,
    step_size=steps_per_epoch)
And my model:
def vgg16_fine_tune():
    vgg16_model = VGG16(weights='imagenet', include_top=False)
    x = vgg16_model.output
    x = GlobalAveragePooling2D()(x)
    x = Dense(256, activation='relu')(x)
    x = Dropout(0.3)(x)
    x = Dense(128, activation='relu')(x)
    x = Dropout(0.3)(x)
    x = Dense(128, activation='relu')(x)
    x = Dropout(0.3)(x)
    predictions = Dense(3, activation='softmax')(x)
    model = Model(inputs=vgg16_model.input, outputs=predictions)
    for layer in vgg16_model.layers:
        layer.trainable = False
    return model

model = vgg16_fine_tune()
And I compiled the model:
import keras

model.compile(optimizer=opt, loss='categorical_crossentropy',
              metrics=['accuracy', lr_track, keras.metrics.Precision(), keras.metrics.Recall()])

history_2 = model.fit(datagen.flow(x_train, y_train),
                      epochs=20,
                      shuffle=True,
                      validation_data=(x_val, y_val),
                      callbacks=[chkpt, clr, es])
Epoch 1/20
188/188 [==============================] - 80s 416ms/step - loss: 0.5007 - accuracy: 0.8038 - lr: 4.4728e-06 - precision: 0.8275 - recall: 0.7711 - val_loss: 0.3959 - val_accuracy: 0.8560 - val_lr: 8.7048e-07 - val_precision: 0.8833 - val_recall: 0.8227
Epoch 2/20
188/188 [==============================] - 79s 423ms/step - loss: 0.4116 - accuracy: 0.8442 - lr: 4.8224e-06 - precision: 0.8660 - recall: 0.8215 - val_loss: 0.3621 - val_accuracy: 0.8700 - val_lr: 1.7400e-06 - val_precision: 0.8923 - val_recall: 0.8393
Epoch 3/20
188/188 [==============================] - 79s 421ms/step - loss: 0.3884 - accuracy: 0.8535 - lr: 5.1341e-06 - precision: 0.8775 - recall: 0.8331 - val_loss: 0.3529 - val_accuracy: 0.8767 - val_lr: 2.6094e-06 - val_precision: 0.8953 - val_recall: 0.8547
Epoch 4/20
188/188 [==============================] - 80s 423ms/step - loss: 0.3836 - accuracy: 0.8599 - lr: 5.4058e-06 - precision: 0.8809 - recall: 0.8407 - val_loss: 0.3452 - val_accuracy: 0.8767 - val_lr: 3.4789e-06 - val_precision: 0.8962 - val_recall: 0.8580
Epoch 5/20
188/188 [==============================] - 79s 419ms/step - loss: 0.3516 - accuracy: 0.8662 - lr: 5.6348e-06 - precision: 0.8857 - recall: 0.8448 - val_loss: 0.3324 - val_accuracy: 0.8780 - val_lr: 4.3484e-06 - val_precision: 0.8923 - val_recall: 0.8613
Epoch 6/20
188/188 [==============================] - 79s 422ms/step - loss: 0.3518 - accuracy: 0.8726 - lr: 5.8182e-06 - precision: 0.8905 - recall: 0.8487 - val_loss: 0.3378 - val_accuracy: 0.8733 - val_lr: 5.2179e-06 - val_precision: 0.8952 - val_recall: 0.8540
Epoch 7/20
188/188 [==============================] - 78s 413ms/step - loss: 0.3324 - accuracy: 0.8799 - lr: 5.9525e-06 - precision: 0.8955 - recall: 0.8649 - val_loss: 0.3393 - val_accuracy: 0.8740 - val_lr: 6.0873e-06 - val_precision: 0.8944 - val_recall: 0.8527
Epoch 8/20
188/188 [==============================] - 78s 417ms/step - loss: 0.3312 - accuracy: 0.8759 - lr: 6.0333e-06 - precision: 0.8936 - recall: 0.8549 - val_loss: 0.3149 - val_accuracy: 0.8920 - val_lr: 6.9568e-06 - val_precision: 0.9109 - val_recall: 0.8653
And then, after fitting, I saved it:
model.save_weights('model_weight.h5')
model.save('model_keras.h5')
But when I load the model to use it, I get an error about custom objects.
from tensorflow import keras
import os
model_dir = 'My Directory'
model1 = os.path.join(model_dir, "DenseNet_model_keras.h5")
Vgg16 = keras.models.load_model(model1)
Here is my error:
ValueError: Unknown metric function: lr. Please ensure this object is
passed to the custom_objects argument. See
https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object
for details.
I even tried this code:
Vgg16 = keras.models.load_model(model1 , custom_objects={"lr": lr})
but all I get is:
Vgg16 = keras.models.load_model(model1 , custom_objects={"lr": lr})
Traceback (most recent call last):
File "", line 1, in
Vgg16 = keras.models.load_model(model1 , custom_objects={"lr": lr})
NameError: name 'lr' is not defined
Can someone help me with my problem please?
Because, as the error says, you never defined lr; you called it lr_track in lr_track = get_lr_metric(opt).
You need to call it like this:
Vgg16 = keras.models.load_model(model1 , custom_objects={"lr": lr_track })
You need to use the same keyword as used in the model.compile() for the custom object (metric here).
In this case, you need to write:
keras.models.load_model(model1 , custom_objects={"lr_track": lr_track })
Simply use compile=False, then compile the model. In your case, use this:
Vgg16 = keras.models.load_model(model1, compile=False)
Vgg16.compile(optimizer=opt, loss='categorical_crossentropy',
              metrics=['accuracy', lr_track, keras.metrics.Precision(), keras.metrics.Recall()])
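If the reloaded model is only needed for inference, the compile step can be skipped entirely, since predict in tf.keras does not require a compiled model:

Vgg16 = keras.models.load_model(model1, compile=False)
predictions = Vgg16.predict(x_val)  # x_val assumed to be preprocessed the same way as during training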
I have constructed an LSTM model for a binary text classification problem. The labels are one-hot encoded. Following is my model:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras import regularizers, metrics
from tensorflow.keras.optimizers import Adam
import tensorflow as tf

model = Sequential()
model.add(Embedding(input_dim=dictionary_length, output_dim=60, input_length=max_word_count))
model.add(LSTM(600))
model.add(Dense(units=max_word_count, activation='tanh', kernel_regularizer=regularizers.l2(0.04), activity_regularizer=regularizers.l2(0.015)))
model.add(Dense(units=max_word_count, activation='relu', kernel_regularizer=regularizers.l2(0.01), bias_regularizer=regularizers.l2(0.01)))
model.add(Dense(2, activation='softmax', kernel_regularizer=regularizers.l2(0.001)))

adam_optimizer = Adam(lr=0.001, decay=0.0001)
model.compile(loss='categorical_crossentropy', optimizer=adam_optimizer,
              metrics=[tf.keras.metrics.Accuracy(), metrics.AUC(), metrics.Precision(), metrics.Recall()])
When I fit this model, the accuracy stays at 0 the whole time, but the other metrics improve. What is the issue here?
Epoch 1/3
94/94 [==============================] - 5s 26ms/step - loss: 3.4845 - accuracy: 0.0000e+00 - auc_4: 0.7583 - precision_4: 0.7251 - recall_4: 0.7251
Epoch 2/3
94/94 [==============================] - 2s 24ms/step - loss: 0.4772 - accuracy: 0.0000e+00 - auc_4: 0.9739 - precision_4: 0.9249 - recall_4: 0.9249
Epoch 3/3
94/94 [==============================] - 3s 27ms/step - loss: 0.1786 - accuracy: 0.0000e+00 - auc_4: 0.9985 - precision_4: 0.9860 - recall_4: 0.9860
Because you need CategoricalAccuracy (https://www.tensorflow.org/api_docs/python/tf/keras/metrics/CategoricalAccuracy) to measure accuracy for this problem:
model.compile(loss='categorical_crossentropy', optimizer=adam_optimizer,
metrics=[tf.keras.metrics.CategoricalAccuracy(), metrics.AUC(), metrics.Precision(), metrics.Recall()])
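As a side note, tf.keras also resolves the plain string 'accuracy' to the accuracy class that matches the loss, so the following would behave the same way:

model.compile(loss='categorical_crossentropy', optimizer=adam_optimizer,
              metrics=['accuracy', metrics.AUC(), metrics.Precision(), metrics.Recall()])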
I've been trying to save and reload a model, and whenever I do, the accuracy always goes down.
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(64, kernel_size=3, activation='relu', input_shape=(IMG_SIZE,IMG_SIZE,3)))
model.add(tf.keras.layers.Conv2D(32, kernel_size=3, activation='relu'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(len(SURFACE_TYPES), activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['acc'])

history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=EPOCHS,
    validation_steps=10)
Output:
Epoch 1/3
84/84 [==============================] - 2s 19ms/step - loss: 1.9663 - acc: 0.6258 - val_loss: 0.8703 - val_acc: 0.6867
Epoch 2/3
84/84 [==============================] - 1s 18ms/step - loss: 0.2865 - acc: 0.9105 - val_loss: 0.4494 - val_acc: 0.8667
Epoch 3/3
84/84 [==============================] - 1s 18ms/step - loss: 0.1409 - acc: 0.9574 - val_loss: 0.3614 - val_acc: 0.9000
Running the following commands then produces the same training loss but different training accuracies, even though the weights and structures of the two models are identical.
model.save("my_model2.h5")
model2 = load_model("my_model2.h5")
model2.evaluate(train_ds)
model.evaluate(train_ds)
Output:
84/84 [==============================] - 1s 9ms/step - loss: 0.0854 - acc: 0.0877
84/84 [==============================] - 1s 9ms/step - loss: 0.0854 - acc: 0.9862
[0.08536089956760406, 0.9861862063407898]
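One way to confirm that the saved and reloaded weights really are identical, independently of how the 'acc' metric is restored, is to compare raw predictions from the two models (assuming train_ds is deterministic, i.e. not reshuffled, between the two calls):

import numpy as np

preds_original = model.predict(train_ds)
preds_reloaded = model2.predict(train_ds)
print(np.allclose(preds_original, preds_reloaded))  # True if the weights round-tripped correctly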
I have shared a reference link here; it covers all the formats for saving and loading your model.