I really don't understand what is wrong with my model. Sometimes it gives me excellent results, but in other cases the results are just absurd; during training it suddenly flips from good to absurd results from one moment to the next. I tried the model with 3 dropout layers and without them and get the same strange behaviour. Here's my model definition:
import keras
from keras import Sequential
from keras.layers import Conv1D, LeakyReLU, MaxPooling1D, Dropout, Flatten, Dense

batch_size = 1
epochs = 25
model = Sequential()
model.add(Conv1D(32, input_shape=(1040,1), kernel_size=100,padding='same',name='ConvLayer1', strides=1))
model.add(LeakyReLU(alpha=0.1))
model.add(MaxPooling1D(pool_size=70, strides=1, padding='same',name='PoolingLayer1'))
#model.add(Dropout(0.10))
model.add(Conv1D(64, kernel_size=70,padding='same',name='ConvLayer2',strides=1))
model.add(LeakyReLU(alpha=0.1))
model.add(MaxPooling1D(pool_size=40, strides=1, padding='same',name='PoolingLayer2'))
#model.add(Dropout(0.10))
model.add(Conv1D(128, kernel_size=40,padding='same',name='ConvLayer3',strides=1))
model.add(LeakyReLU(alpha=0.1))
model.add(MaxPooling1D(pool_size=10, strides=1, padding='same',name='PoolingLayer3'))
#model.add(Dropout(0.10))
model.add(Flatten())
model.add(Dense(1,name='output', activation='linear'))
w = model.get_weights()
model.compile(loss='mse', optimizer=keras.optimizers.Adam(lr=0.001),metrics=['mse'])
I get results like this: [results screenshot]
What is happening? Also, how can I improve this model to get better results?
Decrease Kernel and Pool Sizes
From just glancing over your architecture, I would say it's worth trying much smaller values for the conv kernels and pooling windows. Right now the network has to find a pattern while looking at 100 values at the same time. To put this in perspective, when convolutional nets are used in image processing, kernel sizes of 2-4 have been found to work best. The deeper layers can still look at many data points at once, because as the outputs are pooled together they represent combinations of ever larger spans of the input.
Increase Batch Size
It's very hard for a network to establish a good gradient estimate from a single example. You should use larger batch sizes; I would start with 32 and adjust from there.
Start with the above changes and then try...
Adding another dense layer before your output layer
Batch normalization between layers
A different activation function. I'm not sure what your use case is, but you may need to look at that too to optimize performance.
Try something like this to see if it improves.
from keras import Sequential
from keras.layers import Conv1D, LeakyReLU, MaxPooling1D, Flatten, Dense
import keras
batch_size = 32
epochs = 25
model = Sequential()
model.add(Conv1D(32, input_shape=(1040, 1), kernel_size=2, padding='same', name='ConvLayer1', strides=1))
model.add(LeakyReLU(alpha=0.1))
model.add(MaxPooling1D(pool_size=2, strides=1, padding='same', name='PoolingLayer1'))
# model.add(Dropout(0.10))
model.add(Conv1D(64, kernel_size=3, padding='same', name='ConvLayer2', strides=1))
model.add(LeakyReLU(alpha=0.1))
model.add(MaxPooling1D(pool_size=3, strides=1, padding='same', name='PoolingLayer2'))
# model.add(Dropout(0.10))
model.add(Conv1D(128, kernel_size=3, padding='same', name='ConvLayer3', strides=1))
model.add(LeakyReLU(alpha=0.1))
model.add(MaxPooling1D(pool_size=3, strides=1, padding='same', name='PoolingLayer3'))
# model.add(Dropout(0.10))
model.add(Flatten())
model.add(Dense(1, name='output', activation='linear'))
model.compile(loss='mse', optimizer=keras.optimizers.Adam(lr=0.001), metrics=['mse'])
model.summary()
BatchNormalization Example
from keras.layers import BatchNormalization
model.add(Conv1D(32, input_shape=(1040, 1), kernel_size=2, padding='same', name='ConvLayer1', strides=1))
model.add(LeakyReLU(alpha=0.1))
model.add(BatchNormalization()) # Try adding this after each activation function except the output layer
model.add(MaxPooling1D(pool_size=2, strides=1, padding='same', name='PoolingLayer1'))
I would also add early stopping and/or model checkpointing to stop the training once the validation loss stops improving and to let you load the weights from the epoch with the best validation loss.
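A minimal sketch of how that could look (the monitored metric, patience and file name are just example choices, and x_train / y_train stand in for your data, which isn't shown in the question):

from keras.callbacks import EarlyStopping, ModelCheckpoint

# Stop once val_loss has not improved for 10 epochs and keep the best weights on disk
callbacks = [
    EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True),
    ModelCheckpoint('best_model.h5', monitor='val_loss', save_best_only=True),
]

model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.2,
          callbacks=callbacks)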
Related
I have the following CNN topology:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda, Conv2D, MaxPooling2D, Dropout, Flatten, Dense, BatchNormalization

model = Sequential()
#model.add(Lambda(standardize,input_shape=(28,28,1)))
model.add(Conv2D(filters=64, kernel_size = (3,3), activation="relu", input_shape=(32,32,3)))
model.add(Conv2D(filters=64, kernel_size = (3,3), activation="relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.50))
model.add(Conv2D(filters=128, kernel_size = (3,3), activation="relu"))
model.add(Conv2D(filters=128, kernel_size = (3,3), activation="relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.50))
model.add(Conv2D(filters=256, kernel_size = (3,3), activation="relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.50))
model.add(Flatten())
model.add(Dense(512,activation="relu"))
model.add(BatchNormalization())
model.add(Dense(10,activation="softmax"))
This network worked really well on just 500 training images of the MNIST dataset (https://www.tensorflow.org/datasets/catalog/mnist). Input size: (28, 28, 1).
However, on 500 training images of the SVHN dataset (http://ufldl.stanford.edu/housenumbers/), the model doesn't seem to learn: its validation accuracy maxes out at 0.1959. Input size: (32, 32, 3).
Another CNN I created, which is much simpler than this network, reaches ~70% validation accuracy on the SVHN dataset.
I'm failing to understand why this might be the case. Is it because the CNN doesn't work the same on RGB images? Or is the CNN not complex enough to extract the features of the SVHN dataset?
Let me know if there is anything else I could provide to help you guys out with this problem. :)
I am rather new to deep learning and have some questions about performing a multi-label image classification task with Keras convolutional neural networks, mainly about how to evaluate Keras models on multi-label classification tasks. I will structure this a bit to give a better overview first.
Problem Description
The underlying dataset consists of album cover images from different genres; in my case electronic, rock, jazz, pop and hiphop. So we have 5 possible classes that are not mutually exclusive. The task is to predict the possible genres for a given album cover. Each album cover is 300px x 300px. The images are loaded into TensorFlow datasets and resized to 150px x 150px.
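For example, with the class order electronic, rock, jazz, pop, hiphop (the order here is only for illustration), an album cover tagged as both electronic and pop gets a multi-hot label like this:

# class order (for illustration only): electronic, rock, jazz, pop, hiphop
# an album tagged as electronic and pop -> several 1s are allowed per sample
label = [1, 0, 0, 1, 0]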
Model Architecture
The architecture for the model is the following.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
data_augmentation = keras.Sequential(
    [
        layers.experimental.preprocessing.RandomFlip("horizontal",
                                                     input_shape=(img_height,
                                                                  img_width,
                                                                  3)),
        layers.experimental.preprocessing.RandomFlip("vertical"),
        layers.experimental.preprocessing.RandomRotation(0.4),
        layers.experimental.preprocessing.RandomZoom(height_factor=(0.2, 0.6), width_factor=(0.2, 0.6))
    ]
)

def create_model(num_classes=5, augmentation_layers=None):
    model = Sequential()

    # We can pass a list of layers performing data augmentation here
    if augmentation_layers:
        # The first layer of the augmentation layers must define the input shape
        model.add(augmentation_layers)
        model.add(layers.experimental.preprocessing.Rescaling(1./255))
    else:
        model.add(layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)))

    model.add(layers.Conv2D(32, (3, 3), activation='relu'))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(64, (3, 3), activation='relu'))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(128, (3, 3), activation='relu'))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(128, (3, 3), activation='relu'))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Flatten())
    model.add(layers.Dense(512, activation='relu'))

    # Use a sigmoid activation function: we effectively train a binary classifier per class
    # by specifying binary cross-entropy loss and sigmoid activation on the output layer.
    model.add(layers.Dense(num_classes, activation='sigmoid'))

    model.summary()
    return model
I'm not using the usual metrics here, like standard accuracy. In this paper I read that you cannot evaluate multi-label classification models with the usual methods. In chapter 7 (evaluation metrics), the Hamming loss and an adjusted accuracy (a variant of exact match) are presented, which I use for this model.
The Hamming loss is already provided by tensorflow-addons (see here), and I found an implementation of the subset accuracy on Stack Overflow (see here).
from tensorflow_addons.metrics import HammingLoss
hamming_loss = HammingLoss(mode="multilabel", threshold=0.5)
def subset_accuracy(y_true, y_pred):
    # From https://stackoverflow.com/questions/56739708/how-to-implement-exact-match-subset-accuracy-as-a-metric-for-keras
    threshold = tf.constant(.5, tf.float32)
    gtt_pred = tf.math.greater(y_pred, threshold)
    gtt_true = tf.math.greater(y_true, threshold)
    accuracy = tf.reduce_mean(tf.cast(tf.equal(gtt_pred, gtt_true), tf.float32), axis=-1)
    return accuracy
# Create model
model = create_model(num_classes=5, augmentation_layers=data_augmentation)
# Compile model
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=[subset_accuracy, hamming_loss])
# Fit the model
history = model.fit(training_dataset, epochs=epochs, validation_data=validation_dataset, callbacks=callbacks)
Problem with this model
When training the model, subset_accuracy and hamming_loss get stuck at some point, which looks like the following:
What could cause this behaviour? I am honestly a little bit lost right now. Could this be a case of the dying ReLU problem? Or is it wrong use of the metrics mentioned, or is their implementation maybe wrong?
So far I have tried different optimizers and lowering the learning rate (e.g. from 0.01 to 0.001, 0.0001, etc.), but that didn't help either.
Maybe somebody has an idea that can help me.
Thanks in advance!
I think you need to tune your model's hyperparameters properly. For that, I'd recommend trying the Keras Tuner library.
This will take some time to run, but it should find a good set of hyperparameters for you.
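A rough sketch of how that could look for your model (the search space, names and trial count below are only placeholders to illustrate the API, not tuned values; it reuses the img_height/img_width, datasets and metrics you already defined):

# pip install keras-tuner
import keras_tuner as kt
import tensorflow as tf
from tensorflow.keras import Sequential, layers

def build_model(hp):
    # Example search space: number of conv filters, dense units and learning rate
    model = Sequential([
        layers.Conv2D(hp.Int("filters_1", 32, 128, step=32), (3, 3), activation="relu",
                      input_shape=(img_height, img_width, 3)),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(hp.Int("dense_units", 128, 512, step=128), activation="relu"),
        layers.Dense(5, activation="sigmoid"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(hp.Choice("lr", [1e-2, 1e-3, 1e-4])),
        loss="binary_crossentropy",
        metrics=[subset_accuracy, hamming_loss],
    )
    return model

tuner = kt.RandomSearch(build_model, objective="val_loss", max_trials=10)
tuner.search(training_dataset, validation_data=validation_dataset, epochs=10)
best_model = tuner.get_best_models(num_models=1)[0]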
I'm currently trying to build a CNN that can detect whether a patient has pneumonia caused by COVID or not, and no matter what parameters I change, the model accuracy stays at 49%/50%, so it's basically useless; it's the same as a coin flip. Here is my code; I thought I would try using the VGG-16 architecture.
from tensorflow import keras
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, MaxPool2D, GlobalAveragePooling2D
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Loading in the dataset
traindata = ImageDataGenerator(rescale=1/255)
trainingdata = traindata.flow_from_directory(
    directory="Covid-19CT/TrainingData",
    target_size=(224,224),
    batch_size=100,
    class_mode="binary")
testdata = ImageDataGenerator(rescale=1/255)
testingdata = testdata.flow_from_directory(
    directory="Covid-19CT/TestingData",
    target_size=(224,224),
    batch_size=100,
    class_mode="binary")
# Initialize the model w/ Sequential & add layers + input and output <- will refer to the VGG 16 model architecture
model = Sequential()
model.add(Conv2D(input_shape=(224,224,3),filters=64,kernel_size=(2,2),padding="same", activation="relu"))
model.add(Conv2D(filters=64, kernel_size=(3,3), padding="same", activation ="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=2))
model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=2))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=2))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=2))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=2))
model.add(GlobalAveragePooling2D())
model.add(Dense(units=4096, activation="relu"))
model.add(Dense(units=4096, activation="relu"))
model.add(Dense(units=1000, activation="relu"))
model.add(Dense(units=1, activation="softmax"))
# Compile the model
model_optimizer = Adam(lr=0.001)
model.compile(optimizer=model_optimizer, loss=keras.losses.binary_crossentropy, metrics=['accuracy'])
# Add the callbacks
checkpoint = ModelCheckpoint(filepath="Covid-19.hdf5", monitor='val_acc', verbose=1, save_best_only=True, save_weights_only=False, mode='auto')
early = EarlyStopping(monitor='val_acc', min_delta=0, patience=50, verbose=1, mode='auto')
fit = model.fit_generator(steps_per_epoch=25, generator=trainingdata, validation_data=testingdata, validation_steps=10,epochs=10,callbacks=[checkpoint,early])
This always gives:
Epoch 1/10
 6/25 [======>.......................] - ETA: 1:22:37 - loss: 7.5388 - accuracy: 0.5083
Well, it just always gives a really poor accuracy...
Additional info:
Some of the images in the dataset are JPGs, others are PNGs (not sure if this is the culprit)
The Dataset has 2072 images for training Covid CTs and 2098 images for training NonCovid CTs
The Dataset has 576 images for testing Covid CTs and 532 images for testing NonCovid CTs
The file structure looks like this: Covid19ModelImages -> Training Data & Testing Data; Training Data has 2 subfolders, Covid19CT and noncovid19CT, and Testing Data also has 2 subfolders, Covid19CT and noncovid19CT.
Also: am I just being too impatient? I never let it run past the 1st epoch because I just assume it's never going to get better than 50%. Could it be that the model will improve in the later epochs?
If anyone would be willing to help out, or if you need any other additional info to maybe help you gain a better understanding of the problem, please let me know!
Since you are using binary cross entropy, the activation function in the dense layer with 1 unit should be "sigmoid". Since you are not using a GPU, you have very long training times per epoch. To see if the model is working correctly you may want to reduce this time, and there are a few things you could do. Try reducing the image size, say to 128 by 128: with 224 x 224 you have 50176 pixels to process versus 16384 for a 128 x 128 image, so you reduce the computation by about a factor of 3. Also, you have two dense layers with 4096 units each. This is computationally expensive and may also lead to overfitting; try your model initially without these layers and see how it performs. I am not a fan of early stopping because it is a crutch to avoid dealing with the overfitting issue. If you encounter overfitting, add a dropout layer to help avoid it.
Finally, I recommend you use an adjustable learning rate. The ReduceLROnPlateau callback makes this easy to do. Set it to monitor the validation loss. You can set the parameters to reduce the learning rate by a factor < 1 if the loss fails to decrease after "patience" consecutive epochs. I usually use factor=.5 and patience=1. This also enables you to use a larger initial learning rate for faster convergence. Documentation is here. You need to let your model run for several epochs to see if the training loss and validation loss are decreasing.
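A minimal sketch of those two changes, reusing your existing checkpoint and early callbacks (the factor and patience values are just my usual starting point):

from tensorflow.keras.callbacks import ReduceLROnPlateau

# Output layer for binary cross entropy: one unit with a sigmoid activation
# (use this instead of the current Dense(units=1, activation="softmax") line)
model.add(Dense(units=1, activation="sigmoid"))

# Halve the learning rate whenever val_loss has not improved for one epoch
reduce_lr = ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=1, verbose=1)

fit = model.fit_generator(generator=trainingdata,
                          steps_per_epoch=25,
                          validation_data=testingdata,
                          validation_steps=10,
                          epochs=10,
                          callbacks=[checkpoint, early, reduce_lr])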
I'm a beginner in this topic. I have this problem: I have to classify the percentage of 2 classes in each frame of a video.
I created a small dataset with about 500 images (250 of each class), and a CNN with these layers:
import tensorflow as tf

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(224, 224, 3)))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Conv2D(128, kernel_size=(3, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Conv2D(256, kernel_size=(3, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(512, activation='relu'))
model.add(tf.keras.layers.Dropout(0.2))
model.add(tf.keras.layers.Dense(2, activation='sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.Adam(learning_rate=0.00001), metrics=['accuracy'])
1) Is it better for this problem to use binary_crossentropy + sigmoid or binary_crossentropy + softmax?
2) Is it better to use transfer learning / fine-tuning, or to build a CNN from scratch like this one?
3) I'm using ImageDataGenerator for data augmentation because of the small dataset; is that right?
4) Which values should I use for batch_size, steps_per_epoch, learning_rate, ...? I noticed that the model accuracy quickly goes to 1.0 along with val_accuracy, and the predictions don't return the correct percentage of each class but instead values like [9.999e-1 4.444e-5].
Since yours is a binary classification, go with sigmoid. Softmax is for multi-class (>2) problems.
It is always better to use transfer learning. Go with VGG16, ResNet, Inception or others.
Yes, in the case of a small dataset, data augmentation helps a lot.
You need to use one neuron in the last layer rather than 2, since with a single neuron a value greater than 0.5 is considered class 1 and otherwise class 0. If you want to stick with two neurons, then to get your answer you should take np.argmax of the prediction; in the example you have given, pred = [9.999e-1 4.444e-5], the predicted class is 0, as pred[0] > pred[1].
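A small, purely illustrative sketch of both options (x stands for a batch of your preprocessed frames, which isn't shown here):

import numpy as np

# Option 1: a single sigmoid output neuron -> threshold the probability at 0.5
# model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
pred = model.predict(x)            # shape (n_samples, 1), values in [0, 1]
classes = (pred > 0.5).astype(int)

# Option 2: keep the two output neurons -> take the argmax per sample
pred = model.predict(x)            # e.g. [[9.999e-1, 4.444e-5]] -> class 0
classes = np.argmax(pred, axis=1)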
I have a very basic query. I have made 4 almost identical CNNs (the difference being the input shapes) and have merged them by connecting them to a feed-forward network of fully connected layers.
Code for the almost identical CNN(s):
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Dropout, Flatten

model3 = Sequential()
model3.add(Convolution2D(32, (3, 3), activation='relu', padding='same',
                         input_shape=(batch_size[3], seq_len, channels)))
model3.add(MaxPooling2D(pool_size=(2, 2)))
model3.add(Dropout(0.1))
model3.add(Convolution2D(64, (3, 3), activation='relu', padding='same'))
model3.add(MaxPooling2D(pool_size=(2, 2)))
model3.add(Flatten())
But on TensorBoard I see that all the Dropout layers are interconnected, and Dropout1 is a different colour than Dropout2, 3, 4, etc., which are all the same colour.
I know this is an old question, but I had the same issue myself and just now realized what's going on.
Dropout is only applied while we're training the model; it should be deactivated by the time we're evaluating/predicting. For that purpose, Keras creates a learning_phase placeholder, set to 1.0 if we're training the model.
This placeholder is created inside the first Dropout layer you create and is shared across all of them. So that's what you're seeing there!
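You can see the training-only behaviour directly with a standalone Dropout layer (a small tf.keras sketch; which entries get zeroed is random):

import numpy as np
import tensorflow as tf

layer = tf.keras.layers.Dropout(0.5)
x = np.ones((1, 8), dtype="float32")

print(layer(x, training=True).numpy())   # training phase: ~half the entries zeroed, the rest scaled to 2.0
print(layer(x, training=False).numpy())  # inference phase: unchanged, dropout is a no-op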