I'm working on a binary image classification problem where the two classes (~590 instances of class 1 and ~5900 of class 2) are heavily skewed, but still quite distinct.
Is there any way I can fix this? I want to try SMOTE or random weighted oversampling.
I've tried a lot of different things but I'm stuck. I've tried class_weights = [10, 1], [5900, 590], and [1/5900, 1/590], and my model still only predicts class 2.
I've tried using tf.data.experimental.sample_from_datasets, but I couldn't get it to work. I've even tried sigmoid focal cross-entropy loss, which helped a lot but not enough.
I want to oversample class 1 by a factor of 10. The only thing I've tried that has kind of worked is manually oversampling, i.e. copying the train dir's class 1 instances until they match the number of instances in class 2.
Is there not an easier way of doing this? I'm using Google Colab, so doing this manually is extremely inefficient.
Is there a way to specify SMOTE params / oversampling within the data generator or similar?
data/
...class_1/
........image_1.jpg
........image_2.jpg
...class_2/
........image_1.jpg
........image_2.jpg
My data is in the form shown above.
TRAIN_DATAGEN = ImageDataGenerator(rescale = 1./255.,
                                   rotation_range = 40,
                                   width_shift_range = 0.2,
                                   height_shift_range = 0.2,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)

TEST_DATAGEN = ImageDataGenerator(rescale = 1.0/255.)

TRAIN_GENERATOR = TRAIN_DATAGEN.flow_from_directory(directory = TRAIN_DIR,
                                                    batch_size = BATCH_SIZE,
                                                    class_mode = 'binary',
                                                    target_size = (IMG_HEIGHT, IMG_WIDTH),
                                                    subset = 'training',
                                                    seed = DATA_GENERATOR_SEED)

VALIDATION_GENERATOR = TEST_DATAGEN.flow_from_directory(directory = VALIDATION_DIR,
                                                        batch_size = BATCH_SIZE,
                                                        class_mode = 'binary',
                                                        target_size = (IMG_HEIGHT, IMG_WIDTH),
                                                        subset = 'validation',
                                                        seed = DATA_GENERATOR_SEED)
...
...
...
HISTORY = MODEL.fit(TRAIN_GENERATOR,
                    validation_data = VALIDATION_GENERATOR,
                    epochs = EPOCHS,
                    verbose = 2,
                    callbacks = [EARLY_STOPPING],
                    class_weight = CLASS_WEIGHT)
I'm relatively new to TensorFlow, but I have some experience with ML as a whole. I've been tempted to switch to PyTorch several times, as its data loaders can automatically (over/under)sample with sampler=WeightedRandomSampler.
Note: I've looked at many tutorials about how to oversample, but none of them are image classification problems. I want to stick with TF/Keras as it allows for easy transfer learning. Could you help out?
You can use this strategy to calculate class weights based on the imbalance:

from sklearn.utils import class_weight
import numpy as np

class_weights = class_weight.compute_class_weight(class_weight = 'balanced',
                                                  classes = np.unique(train_generator.classes),
                                                  y = train_generator.classes)
train_class_weights = dict(enumerate(class_weights))

model.fit_generator(..., class_weight=train_class_weights)  # or model.fit(...); fit_generator is deprecated in TF 2.x
In Python, you can implement SMOTE using the imblearn library as follows:
from imblearn.over_sampling import SMOTE
oversample = SMOTE()
X, y = oversample.fit_resample(X, y)
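Note that SMOTE operates on 2-D feature arrays, so images have to be flattened first and reshaped afterwards. A minimal sketch, assuming X is an array of images of shape (n_samples, height, width, channels):

from imblearn.over_sampling import SMOTE
import numpy as np

n, h, w, c = X.shape
X_flat = X.reshape(n, h * w * c)              # SMOTE expects shape (n_samples, n_features)
X_res, y_res = SMOTE().fit_resample(X_flat, y)
X_res = X_res.reshape(-1, h, w, c)            # back to image shape for the network

Keep in mind that interpolating raw pixels can produce unrealistic images, so for image data plain random oversampling or augmentation is often preferable.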
As you already define your class_weight as a dictionary, e.g. {0: 10, 1: 1}, you might also try augmenting the minority class. See balancing an imbalanced dataset with keras image generator and the tutorial (mentioned there) at https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
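Since you mentioned tf.data.experimental.sample_from_datasets, here is a minimal sketch of balanced sampling with it, assuming you build one tf.data.Dataset per class directory (class_1_ds and class_2_ds are hypothetical names yielding (image, label) pairs):

import tensorflow as tf

class_1_ds = class_1_ds.repeat()  # repeat so the minority class is never exhausted
class_2_ds = class_2_ds.repeat()
balanced_ds = tf.data.experimental.sample_from_datasets(
    [class_1_ds, class_2_ds], weights=[0.5, 0.5], seed=DATA_GENERATOR_SEED)
balanced_ds = balanced_ds.batch(BATCH_SIZE).prefetch(tf.data.experimental.AUTOTUNE)

Each batch will then contain roughly equal numbers of the two classes; pass balanced_ds directly to model.fit (with infinitely repeating datasets you also need to set steps_per_epoch).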
I am training a model for Optical Character Recognition of the Gujarati language. The input is a character image. I have taken 37 classes. There are 22200 training images in total (600 per class) and 5920 testing images (160 per class). My input images are 32x32.
Below is my code:
model = tf.keras.applications.DenseNet121(include_top=False, weights='imagenet', pooling='max')
base_inputs = model.layers[0].input
base_outputs = model.layers[-1].output # NOTICE -1 not -2
prefinal_outputs = layers.Dense(1024)(base_outputs)
final_outputs = layers.Dense(37)(prefinal_outputs)
new_model = keras.Model(inputs=base_inputs, outputs=base_outputs)
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=False)

test_datagen = ImageDataGenerator(horizontal_flip = False)

training_set = train_datagen.flow_from_directory('C:/Users/shweta/Desktop/characters/train',
                                                 target_size = (32, 32),
                                                 batch_size = 64,
                                                 class_mode = 'categorical')

test_set = test_datagen.flow_from_directory('C:/Users/shweta/Desktop/characters/test',
                                            target_size = (32, 32),
                                            batch_size = 64,
                                            class_mode = 'categorical')
new_model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
new_model.fit_generator(training_set,
                        epochs = 25,
                        validation_data = test_set, shuffle = True)
new_model.save('alphanumeric.mod')
I am getting the following output:
Thanks in advance!
First of all, very well written code.
Here are some things I noticed while going through the code and the tf.keras docs.
I would like to ask what kind of labels you have, because categorical_crossentropy expects one-hot encoded labels (check this). So, if your labels are integers, use sparse_categorical_crossentropy.
Similar issue: there was a post where someone was trying to classify into 2 classes and used categorical instead of binary cross-entropy, if you want to look at it.
Cheers, and let me know how it goes!
PS: Gerry made a very good point: if your labels are one-hot encoded, use categorical_crossentropy!
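As a quick illustration of the difference between the two label formats (a minimal sketch with made-up labels):

import numpy as np

# integer labels -> use sparse_categorical_crossentropy
y_sparse = np.array([0, 3, 36])

# one-hot labels -> use categorical_crossentropy
y_onehot = np.zeros((3, 37))
y_onehot[np.arange(3), y_sparse] = 1.0

Note that ImageDataGenerator with class_mode='categorical' yields one-hot labels, so categorical_crossentropy is the matching loss in your setup.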
The code should be:

model = tf.keras.applications.DenseNet121(include_top=False, weights='imagenet', pooling='max', input_shape=(32,32,3))
base_outputs = model.layers[-1].output
prefinal_outputs = layers.Dense(1024)(base_outputs)
final_outputs = layers.Dense(37, activation='softmax')(prefinal_outputs)  # softmax so categorical_crossentropy receives probabilities
new_model = keras.Model(inputs=model.input, outputs=final_outputs)  # note: final_outputs, not base_outputs
new_model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
Also, you should use model.fit in the future. model.fit now works with generators, and model.fit_generator will be deprecated in future versions of TensorFlow. I ran against your dataset and got accurate results in about 10 epochs. Here is some additional advice: it is best to use an adjustable learning rate, and the Keras callback ReduceLROnPlateau makes this easy to do. Documentation is here. Set it to monitor the validation loss. My usage is shown below.
lr_adjust = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=1, verbose=1, mode="auto",
                                                 min_delta=0.00001, cooldown=0, min_lr=0)
Also I recommend using the callback ModelCheckpoint. Documentation is here. Set it up to monitor validation loss and it will save the weights that achieved the lowest validation loss. My implementation is shown below.
save_loc = r'c:\Temp'  # set this to the path where you want to save the weights
checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath=save_loc, monitor='val_loss', verbose=1, save_best_only=True,
                                                save_weights_only=True, mode='auto', save_freq='epoch', options=None)
callbacks=[checkpoint, lr_adjust]
In model.fit include callbacks=callbacks. When training is completed you want to load these saved weights into the model, then save the model. You can use the saved model to make predictions. Code is below.
model.load_weights(save_loc)
model.save(save_loc)
A little background: I'm making a simple rock, paper, scissors image classifier program. Basically, I want the image classifier to be able to distinguish between a rock, paper, or scissors image.
Problem: the program works great for two of the classes, rock and paper, but completely fails whenever given a scissors test image. I've tried increasing my training data and a few other things, but no luck. I was wondering if anyone has any ideas on how to offset this.
Sidenote: I suspect it also has something to do with overfitting. I say this because the model has about 92% accuracy on the training data but only 55% accuracy on the test data.
import numpy as np
import os
import cv2
import random
import tensorflow as tf
from tensorflow import keras
CATEGORIES = ['rock', 'paper', 'scissors']
IMG_SIZE = 400 # The size of the images that your neural network will use
CLASS_SIZE = len(CATEGORIES)
TRAIN_DIR = "../Train/"
def loadData( directoryPath ):
    data = []
    for category in CATEGORIES:
        path = os.path.join(directoryPath, category)
        class_num = CATEGORIES.index(category)
        for img in os.listdir(path):
            try:
                img_array = cv2.imread(os.path.join(path, img), cv2.IMREAD_GRAYSCALE)
                new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
                data.append([new_array, class_num])
            except Exception as e:
                pass  # silently skip images that fail to load or resize
    return data
training_data = loadData(TRAIN_DIR)
random.shuffle(training_data)
X = [] #features
y = [] #labels
for i in range(len(training_data)):
    features = training_data[i][0]
    label = training_data[i][1]
    X.append(features)
    y.append(label)
X = np.array(X)
y = np.array(y)
X = X/255.0
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(IMG_SIZE, IMG_SIZE)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(CLASS_SIZE)
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(X, y, epochs=25)
TEST_DIR = "../Test/"
test_data = loadData( TEST_DIR )
random.shuffle(test_data)
test_images = []
test_labels = []
for i in range(len(test_data)):
    features = test_data[i][0]
    label = test_data[i][1]
    test_images.append(features)
    test_labels.append(label)
test_images = np.array(test_images)
test_images = test_images/255.0
test_labels = np.array(test_labels)
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
# Saving the model
model_json = model.to_json()
with open("model.json", "w") as json_file :
json_file.write(model_json)
model.save_weights("model.h5")
print("Saved model to disk")
model.save('CNN.model')
If you want to create a massive amount of training data fast: https://github.com/ThomasStuart/RockPaperScissorsMachineLearning/blob/master/source/0.0-collectMassiveData.py
Thanks in advance to any help or ideas :)
You can counter the overfitting by adding a dropout layer. Also, be sure to shuffle your training data after each epoch, so the model keeps its learning general. And if I see this correctly, you are doing multi-class classification but do not have a softmax activation in the last layer; I would recommend adding one.
With dropout and softmax, your model would look like this:
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(IMG_SIZE, IMG_SIZE)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dropout(0.4),  # 0.4 means 40% of the neurons will be randomly unused
    keras.layers.Dense(CLASS_SIZE, activation="softmax")
])
One caveat: if you add the softmax activation, also set from_logits=False in the SparseCategoricalCrossentropy loss, since the model now outputs probabilities rather than logits. As a last piece of advice: CNNs generally perform much better on tasks like this, so you might want to switch to a CNN for even better performance.
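A minimal sketch of such a CNN (the layer sizes are just a starting point; note the grayscale input needs an explicit channel dimension, e.g. X = X[..., np.newaxis]):

model = keras.Sequential([
    keras.layers.Conv2D(32, 3, activation='relu', input_shape=(IMG_SIZE, IMG_SIZE, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation='relu'),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dropout(0.4),
    keras.layers.Dense(CLASS_SIZE, activation='softmax')
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['accuracy'])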
I'm currently working on a CNN model that classifies food images. So far, I have managed to build a functioning CNN, but I would like to improve the accuracy. For the dataset, I have used some images from Kaggle and a few from my own collection.
Here is some information about the dataset:
There are 91 classes of food images.
Each class has around 500 to 650 images.
The dataset has been manually cleaned and checked for unrelated or bad quality images (the photos are of different sizes).
Here is my CNN model:
classifier = Sequential()

def cnn_layer_creation(classifier):
    classifier.add(InputLayer(input_shape=[224,224,3]))
    classifier.add(Conv2D(filters=32, kernel_size=5, strides=1, padding='same', activation='relu', data_format='channels_first'))
    classifier.add(MaxPooling2D(pool_size=5, padding='same'))
    classifier.add(Conv2D(filters=50, kernel_size=5, strides=1, padding='same', activation='relu'))
    classifier.add(MaxPooling2D(pool_size=5, padding='same'))
    classifier.add(Conv2D(filters=80, kernel_size=5, strides=1, padding='same', activation='relu', data_format='channels_last'))
    classifier.add(MaxPooling2D(pool_size=5, padding='same'))
    classifier.add(Dropout(0.25))
    classifier.add(Flatten())
    classifier.add(Dense(64, activation='relu'))
    classifier.add(Dropout(rate=0.5))
    classifier.add(Dense(91, activation='softmax'))
    # Compiling the CNN
    classifier.compile(optimizer="RMSprop", loss = 'categorical_crossentropy', metrics = ['accuracy'])
    data_initialization(classifier)
def data_initialization(classifier):
    # Part 2 - Fitting the CNN to the images
    from keras.preprocessing.image import ImageDataGenerator
    train_datagen = ImageDataGenerator(rescale = 1./255,
                                       shear_range = 0.2,
                                       zoom_range = 0.2,
                                       horizontal_flip = True)
    test_datagen = ImageDataGenerator(rescale = 1./255,
                                      shear_range = 0.2,
                                      zoom_range = 0.2,
                                      horizontal_flip = True)
    training_set = train_datagen.flow_from_directory('food_image/train',
                                                     target_size = (224, 224),
                                                     batch_size = 100,
                                                     class_mode = 'categorical')
    test_set = test_datagen.flow_from_directory('food_image/test',
                                                target_size = (224, 224),
                                                batch_size = 100,
                                                class_mode = 'categorical')
    classifier.fit_generator(training_set,
                             steps_per_epoch = 100,
                             epochs = 100,
                             validation_data = test_set,
                             validation_steps = 100)
    classifier.save("brynModelGPULite.h5")
    classifier.summary()

def main():
    cnn_layer_creation(classifier)
Training is done on GPU (nVidia 980M)
Unfortunately, the accuracy has not exceeded 10%. Things I've tried are:
Increase the number of epochs.
Change the optimizer (ADAM, RMSPROP).
Change the activation function.
Reduce the image input size.
Increase the batch size.
Change the filter size to 32, 64, 128.
None of these have improved the accuracy.
Could anyone shine some light on how I might improve my model accuracy?
You should augment only the training data. The following code

test_datagen = ImageDataGenerator(rescale = 1./255,
                                  shear_range = 0.2,
                                  zoom_range = 0.2,
                                  horizontal_flip = True)

should be

test_datagen = ImageDataGenerator(rescale = 1./255)
Firstly, I assume you are building your model from scratch. In that case, training for few epochs will not help, because the network cannot fully learn the representations in so few epochs when trained from scratch (I assume you have not trained for more than 1000 epochs). You could try increasing the number of epochs to around 10000 and see. But rather than that, why not use transfer learning instead: feature extraction and/or fine-tuning with a pre-trained convnet. For reference, have a look at chapter 5 of Francois Chollet's book, Deep Learning with Python.
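A minimal sketch of the feature-extraction variant (MobileNetV2 is just one choice; any model from keras.applications works similarly):

from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras import layers, models

base = MobileNetV2(include_top=False, weights='imagenet',
                   input_shape=(224, 224, 3), pooling='avg')
base.trainable = False  # feature extraction: freeze the pre-trained weights

model = models.Sequential([
    base,
    layers.Dropout(0.3),
    layers.Dense(91, activation='softmax')  # 91 food classes
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

For fine-tuning, you would later unfreeze some of the top layers of base and continue training with a low learning rate.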
I had the same problem with another dataset, and replacing the Flatten layer with GlobalAveragePooling2D solved it.
I'm not sure this will work for you, but as my model has a structure similar to yours, I think it can help. The difference is that I trained my model on 3 classes.
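A minimal sketch of that change in your model (everything else stays the same):

from keras.layers import GlobalAveragePooling2D

# replace classifier.add(Flatten()) with:
classifier.add(GlobalAveragePooling2D())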
I am currently training a CNN on MNIST, and the output probabilities (softmax) are giving [0.1, 0.1, ..., 0.1] as training goes on. The initial values aren't uniform, so I can't figure out whether I'm doing something stupid here.
I'm only training for 15 steps, just to see how training progresses; even though that's a low number, I don't think it should result in uniform predictions.
import numpy as np
import tensorflow as tf
import imageio
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
# Getting data
from sklearn.model_selection import train_test_split
def one_hot_encode(data):
    new_ = []
    for i in range(len(data)):
        _ = np.zeros([10], dtype=np.float32)
        _[int(data[i])] = 1.0
        new_.append(np.asarray(_))
    return new_
data = np.asarray(mnist["data"],dtype=np.float32)
labels = np.asarray(mnist["target"],dtype=np.float32)
labels = one_hot_encode(labels)
tr_data,test_data,tr_labels,test_labels = train_test_split(data,labels,test_size = 0.1)
tr_data = np.asarray(tr_data)
tr_data = np.reshape(tr_data,[len(tr_data),28,28,1])
test_data = np.asarray(test_data)
test_data = np.reshape(test_data,[len(test_data),28,28,1])
tr_labels = np.asarray(tr_labels)
test_labels = np.asarray(test_labels)
def get_conv(x, shape):
    weights = tf.Variable(tf.random_normal(shape, stddev=0.05))
    biases = tf.Variable(tf.random_normal([shape[-1]], stddev=0.05))
    conv = tf.nn.conv2d(x, weights, [1,1,1,1], padding="SAME")
    return tf.nn.relu(tf.nn.bias_add(conv, biases))

def get_pool(x, shape):
    return tf.nn.max_pool(x, ksize=shape, strides=shape, padding="SAME")

def get_fc(x, shape):
    sh = x.get_shape().as_list()
    dim = 1
    for i in sh[1:]:
        dim *= i
    x = tf.reshape(x, [-1, dim])
    weights = tf.Variable(tf.random_normal(shape, stddev=0.05))
    return tf.nn.relu(tf.matmul(x, weights) + tf.Variable(tf.random_normal([shape[1]], stddev=0.05)))
#Creating model
x = tf.placeholder(tf.float32,shape=[None,28,28,1])
y = tf.placeholder(tf.float32,shape=[None,10])
conv1_1 = get_conv(x,[3,3,1,128])
conv1_2 = get_conv(conv1_1,[3,3,128,128])
pool1 = get_pool(conv1_2,[1,2,2,1])
conv2_1 = get_conv(pool1,[3,3,128,512])
conv2_2 = get_conv(conv2_1,[3,3,512,512])
pool2 = get_pool(conv2_2,[1,2,2,1])
conv3_1 = get_conv(pool2,[3,3,512,1024])
conv3_2 = get_conv(conv3_1,[3,3,1024,1024])
conv3_3 = get_conv(conv3_2,[3,3,1024,1024])
conv3_4 = get_conv(conv3_3,[3,3,1024,1024])
pool3 = get_pool(conv3_4,[1,3,3,1])
fc1 = get_fc(pool3,[9216,1024])
fc2 = get_fc(fc1,[1024,10])
softmax = tf.nn.softmax(fc2)
loss = tf.losses.softmax_cross_entropy(logits=fc2,onehot_labels=y)
train_step = tf.train.AdamOptimizer().minimize(loss)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
for i in range(15):
    print(i)
    indices = np.random.randint(len(tr_data), size=[200])
    batch_data = tr_data[indices]
    batch_labels = tr_labels[indices]
    sess.run(train_step, feed_dict={x: batch_data, y: batch_labels})
Thank you so much.
There are several issues with your code, including elementary ones. I strongly suggest you first go through the Tensorflow step-by-step tutorials for MNIST, MNIST For ML Beginners and Deep MNIST for Experts.
In short, regarding your code:
First, your final layer fc2 should not have a ReLU activation.
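A minimal sketch of a linear variant of the question's helper for that final layer (get_fc_linear is a hypothetical name, same style as get_fc):

def get_fc_linear(x, shape):
    # identical to get_fc, but returns raw logits (no ReLU)
    sh = x.get_shape().as_list()
    dim = 1
    for i in sh[1:]:
        dim *= i
    x = tf.reshape(x, [-1, dim])
    weights = tf.Variable(tf.random_normal(shape, stddev=0.05))
    biases = tf.Variable(tf.random_normal([shape[1]], stddev=0.05))
    return tf.matmul(x, weights) + biases

fc2 = get_fc_linear(fc1, [1024, 10])  # softmax_cross_entropy applies softmax itself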
Second, the way you build your batches, i.e.

indices = np.random.randint(len(tr_data),size=[200])

just grabs random samples (with replacement) in each iteration, which is far from the correct way of doing so: proper training shuffles the whole dataset once per epoch and then iterates over it batch by batch.
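A minimal sketch of epoch-based batching instead (batch_size and num_epochs are placeholder values):

batch_size = 200
num_epochs = 10
for epoch in range(num_epochs):
    perm = np.random.permutation(len(tr_data))  # reshuffle once per epoch
    for start in range(0, len(tr_data), batch_size):
        idx = perm[start:start + batch_size]
        sess.run(train_step, feed_dict={x: tr_data[idx], y: tr_labels[idx]})

This way every sample is seen exactly once per epoch.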
Third, the data you feed into the network are not normalized in [0, 1], as they should be:
np.max(tr_data[0]) # get the max value of your first training sample
# 255.0
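A one-line fix is to scale the pixel values before training:

tr_data = tr_data / 255.0
test_data = test_data / 255.0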
The third point was initially puzzling for me, too, since in the aforementioned Tensorflow tutorials they don't seem to normalize the data either. But close inspection revealed the reason: if you import the MNIST data through the Tensorflow-provided utility functions (instead of the scikit-learn ones, as you do here), they come already normalized in [0, 1], something that is nowhere hinted at:
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
import numpy as np
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
np.max(mnist.train.images[0])
# 0.99607849
This is an admittedly strange design decision - as far as I am aware, in all other similar cases/tutorials normalizing the input data is an explicit part of the pipeline (see e.g. the Keras example), and with good reason (it is something you will certainly be expected to do yourself later, when using your own data).
I am trying to use the scikit-learn wrapper in Keras (KerasRegressor) to fine-tune a model that has one input (images) and 2 outputs (a rotational vector and a translation vector). The code snippet is below:
img_input = Input(shape=(img_rows, img_cols, img_channels))
model = KerasRegressor(build_fn = toy_model, verbose = 1)
loss_weights = [[1.0, 250.0], [1.0, 500.0], [1.0, 750.0]]
epochs = [10, 20]
batches = [5, 10]
param_grid = dict(loss_weight = loss_weights, epochs = epochs,
                  batch_size = batches)
grid = GridSearchCV(estimator = model, param_grid = param_grid)
grid_result = grid.fit(train_imgs, [train_pose_tx, train_pose_rt])
I want to fine-tune the loss_weights parameter for this model. However, I get the following error:

ValueError: Found input variables with inconsistent numbers of samples: [895, 2]

As I understand it, since this model has a single input, this functionality should be supported.
Link to Github gist :
https://gist.github.com/sushant4788/1f84cd2781f96fb752ee1f16a56d1bcb