Recently I have been trying to build a self-driving car in GTA V using a CNN model. I started by collecting about 30k images from the game, driving around while capturing the scene together with the key being pressed at that moment. I also kept the dataset balanced by capping the number of examples per label so every label has an equal share.
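For illustration, a minimal sketch of what that per-label capping might look like (this assumes the label is encoded in the filename, as in the loading code further down; it is not the actual collection script):

import os
import random
from collections import defaultdict

files_by_label = defaultdict(list)
for filename in os.listdir("dataset"):
    files_by_label[filename.split('.')[1]].append(filename)

# Cap every label at the size of the rarest one so all labels are equally represented.
cap = min(len(files) for files in files_by_label.values())
balanced = []
for files in files_by_label.values():
    random.shuffle(files)
    balanced.extend(files[:cap])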
An example of a random image in the dataset: IMAGE.
The labels are the basic driving inputs: LABELS.
Using this dataset on various models, the accuracy never went above 50-60% on the validation set (test accuracy is even lower). To fix this I tried cropping the images to keep only the center, which contains the road, and dropping the outlying content (scenery, buildings, etc.). I also tried RGB images instead of grayscale, collecting data in one specific location and testing in that same place, different model architectures, and different hyperparameters, still with no luck.
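A rough sketch of the center crop described above (the crop margins here are illustrative guesses, not the values actually used):

import cv2

def crop_to_road(frame, target_size=(IMAGE_WIDTH, IMAGE_HEIGHT)):
    # Keep the middle band of the frame, where the road is,
    # and drop the surrounding scenery and buildings.
    h, w = frame.shape[:2]
    road = frame[int(h * 0.35):int(h * 0.85), int(w * 0.15):int(w * 0.85)]
    return cv2.resize(road, target_size)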
All the models were tested in-game by continuously capturing the image of the road and feeding it to the model; the model's output then becomes the input for the game. All models behave in the same general way: they keep outputting the same label, mostly 'WA', until the car crashes into a wall.
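In outline, that in-game loop looks something like this (grab_screen and press_keys stand in for whatever capture and key-press helpers are used; they are hypothetical names, as is the LABELS list):

import numpy as np

while True:
    frame = grab_screen()                        # hypothetical screen-capture helper
    x = crop_to_road(frame) / 255.0              # same preprocessing as training
    probs = model.predict(x[np.newaxis, ...])    # add the batch dimension
    key = LABELS[int(np.argmax(probs))]          # e.g. 'WA'
    press_keys(key)                              # hypothetical key-press helper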
I would love some tips on what I may be doing wrong or on what I can do to improve performance. Let me know if you need any more information about the project.
Thanks in advance.
TWO OF THE MODELS I TRIED:
# First model
model = Sequential()
model.add(Conv2D(filters=96, kernel_size=11, strides=4, activation='relu', input_shape=(IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_CHANNELS), padding='same'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=2))
model.add(Conv2D(filters=256, kernel_size=5, activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=2))
model.add(Conv2D(filters=384, kernel_size=3, activation='relu', padding='same'))
model.add(Conv2D(filters=384, kernel_size=3, activation='relu', padding='same'))
model.add(Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=2))
model.add(Flatten())  # flatten the conv feature maps before the fully connected layers
model.add(Dense(4096, activation='tanh'))
model.add(Dropout(0.5))
model.add(Dense(4096, activation='tanh'))
model.add(Dropout(0.5))
model.add(Dense(7, activation='softmax'))  # one mutually exclusive class per driving input, matching categorical_crossentropy
# Second model
model = Sequential()
model.add(Conv2D(filters=12, kernel_size=11, activation='relu', input_shape=(IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_CHANNELS), padding='same'))
model.add(MaxPooling2D(pool_size=(3,3)))
model.add(Conv2D(filters=256, kernel_size=5, activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(3,3)))
model.add(Conv2D(filters=384, kernel_size=3, activation='relu', padding='same'))
model.add(Conv2D(filters=256, kernel_size=5, activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(3,3)))
model.add(Flatten())
model.add(Dense(7, activation='softmax'))  # softmax to match the categorical_crossentropy loss
The code:
import os
import datetime

import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam

filenames = os.listdir("dataset")
labels = []
for filename in filenames:
    label = filename.split('.')[1]  # the label is encoded in the filename
    labels.append(label)
df = pd.DataFrame({
    'filename': filenames,
    'category': labels
})
model = model1()
model.summary()
model.compile(loss='categorical_crossentropy', optimizer=Adam(learning_rate=0.001), metrics=['accuracy'])
train_df, validate_df = train_test_split(df, test_size=0.40, random_state=42)
train_df = train_df.reset_index(drop=True)
validate_df = validate_df.reset_index(drop=True)
total_train = train_df.shape[0]
total_validate = validate_df.shape[0]
batch_size=32
train_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_dataframe(
    train_df,
    "dataset",
    x_col='filename',
    y_col='category',
    target_size=IMAGE_SIZE,
    class_mode='categorical',
    batch_size=batch_size
)
validation_datagen = ImageDataGenerator(rescale=1./255)
validation_generator = validation_datagen.flow_from_dataframe(
    validate_df,
    "dataset",
    x_col='filename',
    y_col='category',
    target_size=IMAGE_SIZE,
    class_mode='categorical',
    batch_size=batch_size
)
epochs = 25
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
history = model.fit(
    train_generator,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=total_validate // batch_size,
    steps_per_epoch=total_train // batch_size,
    shuffle=True,
    callbacks=[tensorboard_callback]
)
model.save("model.h5")
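With the TensorBoard callback above, the training curves can then be inspected by pointing TensorBoard at the log directory (standard TensorBoard CLI usage):

tensorboard --logdir logs/fit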
Related
I am trying to create a model using TensorFlow and Python; I get the data from a folder on my PC.
[Image: the folder structure]
[Image: an example from the data]
The class counts are similar but not equal: [237, 223, 495, 387, 301] items.
This is how I load my data:
lables = {'Basic T-shrit':0, 'Bikini Bottom':1, 'Cargo Pants':2, 'Jeans':3, 'Oversize T-Shirt':4}

# Data
train_datagen = ImageDataGenerator(rescale=1/255)  # scale pixel values from [0, 255] to [0, 1]
train_generator = train_datagen.flow_from_directory(
    'Dataset',  # this is the source directory for training images
    target_size=(256, 256),  # all images will be resized to 256 x 256
    batch_size=batch_size,
    # specify the classes explicitly; flow_from_directory expects a list of class names
    classes=list(lables),
    # since we use categorical_crossentropy loss, we need categorical labels
    class_mode='categorical')
That's a model I tried:
# Model
model = Sequential()
model.add(Conv2D(32, (3,3), 1, activation='relu', input_shape=(image_width, image_height, 3)))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(32, (3,3), activation='relu'))  # kernel_size as a tuple; Conv2D(32, 3, 3) would set strides=3
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(64, (3,3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(64, (3,3), activation='relu'))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(categorys_size, activation='softmax'))
model.summary()

model.compile(loss='categorical_crossentropy',
              optimizer="adam",
              metrics=['acc'])
Then I start the learning process:
model.fit_generator(train_generator,
                    steps_per_epoch=epoch_steps,
                    epochs=Epoch,
                    validation_data=train_generator)  # note: this validates on the training generator itself
But it's not working well: the model treats the oversize T-shirt and the normal T-shirt as the same class, and classifies any kind of pants as jeans.
[Plot: model training results]
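One way to make that observation concrete is a confusion matrix over one pass of the data; a minimal sketch using the generator above (not from the original post):

import numpy as np
from sklearn.metrics import confusion_matrix

# Collect predictions batch by batch to see which classes get confused
# (e.g. 'Oversize T-Shirt' vs. 'Basic T-shrit').
y_true, y_pred = [], []
for _ in range(len(train_generator)):
    x_batch, y_batch = next(train_generator)
    preds = model.predict(x_batch)
    y_true.extend(np.argmax(y_batch, axis=1))
    y_pred.extend(np.argmax(preds, axis=1))
print(confusion_matrix(y_true, y_pred))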
Then I tested this model:
model = tf.keras.models.Sequential([
    keras.layers.Conv2D(32, kernel_size=(5, 5), activation=tf.keras.activations.relu, input_shape=IMAGE_SHAPE),
    keras.layers.MaxPooling2D(pool_size=(2, 2)),
    keras.layers.BatchNormalization(axis=1),
    keras.layers.Dropout(0.22),
    keras.layers.Conv2D(32, kernel_size=(5, 5), activation=tf.keras.activations.relu),
    keras.layers.AveragePooling2D(pool_size=(2, 2)),
    keras.layers.BatchNormalization(axis=1),
    keras.layers.Dropout(0.25),
    keras.layers.Conv2D(32, kernel_size=(4, 4), activation=tf.keras.activations.relu),
    keras.layers.AveragePooling2D(pool_size=(2, 2)),
    keras.layers.BatchNormalization(axis=1),
    keras.layers.Dropout(0.15),
    keras.layers.Conv2D(32, kernel_size=(3, 3), activation=tf.keras.activations.relu),
    keras.layers.AveragePooling2D(pool_size=(2, 2)),
    keras.layers.BatchNormalization(axis=1),
    keras.layers.Dropout(0.15),
    keras.layers.Flatten(),
    keras.layers.Dense(256, activation=tf.keras.activations.relu, kernel_regularizer=keras.regularizers.l2(0.001)),
    # keras.layers.Dropout(0.25),
    keras.layers.Dense(64, activation=tf.keras.activations.relu, kernel_regularizer=keras.regularizers.l2(0.001)),
    # keras.layers.Dropout(0.1),
    keras.layers.Dense(len(lables), activation=tf.keras.activations.softmax)])
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.sparse_categorical_crossentropy,
              metrics=['accuracy'])
Then I start learning:
# Start Learning
checkpoint_path = "/chk/cp-{epoch:04d}.ckpt"
cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path,
                                                 save_weights_only=True,
                                                 verbose=1,
                                                 period=10)
model.fit(train,
          epochs=Epoch, callbacks=[cp_callback],
          validation_data=val, verbose=1)
And this is how I load the data:
data = tf.keras.utils.image_dataset_from_directory('Dataset',labels = 'inferred',image_size = (192,192))
data_iterator = data.as_numpy_iterator()
batch = data_iterator.next()
fig, ax = plt.subplots(ncols=4, figsize=(20, 20))
for idx, img in enumerate(batch[0][:4]):
    ax[idx].imshow(img.astype(int))
    ax[idx].title.set_text(batch[1][idx])

# Scale Data
data = data.map(lambda x, y: (x / 255, y))  # pixel values span [0, 255]; dividing by 255 normalizes to [0, 1]
data.as_numpy_iterator().next()
train_size = int(len(data)*.7)
val_size = int(len(data)*.2)
test_size = int(len(data)*.1)
print(train_size)
train = data.take(train_size)
val = data.skip(train_size).take(val_size)
test = data.skip(train_size+val_size).take(test_size)
What am I doing wrong? Is the data I collected good enough for what I am trying to do? Am I missing something?
Thanks
I am (very) new to deep learning and I am trying to train a dog breed classifier using TensorFlow/Keras. I have selected a subset of 10 breeds to speed up computation, and I am using all the images available in the Stanford dataset for those breeds, which I have placed in train/test/val directories. I have 1338 images for training, 379 for validation and 200 for test.
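For reference, one common way to produce such a train/val/test directory layout is the split-folders package; a sketch under the assumption that the per-breed folders live in a directory like "stanford_subset" (both the name and the proportions here are illustrative):

import splitfolders  # pip install split-folders

# Split each breed folder into split_output/train, split_output/val and split_output/test.
splitfolders.ratio("stanford_subset", output="split_output",
                   seed=42, ratio=(0.7, 0.2, 0.1))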
I first tried building a simple CNN from scratch without data augmentation; I quickly reached 99% accuracy on the training set and got stuck at 30% on the val set (which I assume is quite normal without augmentation?).
Then I applied data augmentation and tried two approaches: building a CNN from scratch and using transfer learning. With the "home-made" CNN I can't reach more than around 30% accuracy even on the training set, and I can't figure out what the problem is. And I am stuck around 80% with transfer learning, which I guess is not that good either?
Here is the code for data augmentation:
# Creating image generator steps
train_datagen = ImageDataGenerator(rescale=1.0/255.0,
                                   rotation_range=60,
                                   width_shift_range=0.3,
                                   height_shift_range=0.3,
                                   shear_range=0.2,
                                   zoom_range=[0.5, 1.5],
                                   brightness_range=[0.5, 1.5],
                                   horizontal_flip=True
                                   )
val_datagen = ImageDataGenerator(rescale=1.0/255.0)
test_datagen = ImageDataGenerator(rescale=1.0/255.0)
train_generator = train_datagen.flow_from_directory(
    directory="split_output/train",
    target_size=(224, 224),
    color_mode="rgb",
    batch_size=8,
    class_mode='sparse',
    shuffle=True,  # boolean, not the string 'True'
    seed=42
)
val_generator = val_datagen.flow_from_directory(
    directory="split_output/val",
    target_size=(224, 224),
    color_mode="rgb",
    batch_size=8,
    class_mode='sparse',
    shuffle=True,
    seed=42
)
test_generator = test_datagen.flow_from_directory(
    directory="split_output/test",
    target_size=(224, 224),
    color_mode="rgb",
    batch_size=8,
    class_mode='sparse',
    shuffle=False,  # boolean: the string 'False' is truthy and would shuffle the test set
    seed=42
)
Here is the first CNN I tried (for which both accuracies are stuck around 25%):
# The CNN architecture
model = Sequential()
model.add(Conv2D(32, (3,3), padding="same", activation='relu', input_shape=(224, 224, 3)))
model.add(MaxPooling2D((2,2)))
# 32 = number of filters
# (3, 3) = kernel size
model.add(Conv2D(64, (3,3), padding="same", activation='relu'))
model.add(MaxPooling2D((2,2)))
model.add(Conv2D(64, (3,3), padding="same", activation='relu'))
model.add(MaxPooling2D((2,2)))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))

# Fitting the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy']
              )
history = model.fit_generator(train_generator,
                              # steps_per_epoch=1000,
                              epochs=50,
                              validation_data=val_generator,
                              # validation_steps=250,
                              verbose=1
                              )
And the second one, a bit deeper and including BatchNorm and Dropout (accuracies are stuck around 35%):
# The CNN architecture
model = Sequential()
model.add(Conv2D(32, (3,3), padding="same", activation='relu', input_shape=(224, 224, 3)))
model.add(MaxPooling2D((2,2)))
# 32 = number of filters
# (3, 3) = kernel size
model.add(Conv2D(32, (3,3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D((2,2)))
model.add(Conv2D(64, (3,3), padding="same", activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D((2,2)))
model.add(Conv2D(128, (3,3), padding="same", activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D((2,2)))
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(512, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.summary()

opt = Adam(learning_rate=0.0001)
# Fitting the model
model.compile(optimizer=opt,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy']
              )
history = model.fit(train_generator,
                    # steps_per_epoch=1000,
                    epochs=50,
                    validation_data=val_generator,
                    # validation_steps=250,
                    verbose=1
                    )
Here is the history for that second CNN:
[Plot: training and validation accuracies for the second CNN]
And finally I tried with a resnet, which gets stuck around 90% for train and 80% for val:
model = Sequential()
model.add(ResNet50(include_top=False, pooling='avg', weights="imagenet"))
model.add(Flatten())
model.add(BatchNormalization())
model.add(Dense(2048, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(1024, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(10, activation='softmax'))

opt = Adam(learning_rate=0.0001)
model.compile(optimizer=opt, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(train_generator,
                    # steps_per_epoch=1000,
                    epochs=150,
                    validation_data=val_generator,
                    # validation_steps=250,
                    verbose=1
                    )
And the history for this last one:
[Plot: training and validation accuracies for the ResNet model]
I'm a bit surprised at how the accuracies (especially val) get stuck so fast at a nearly constant value...
Again I'm very new at this so there could be very basic mistakes!
I am training a CNN model where:
Training data = 687, validation data = 102, testing data = 79.
The validation accuracy is higher than the training accuracy.
The test accuracy is very low compared to both the validation and training accuracy.
Validation loss is lower than training loss.
code snippet:
train_datagen = ImageDataGenerator(
    rescale=1./255,
    # rotation_range=30,
    zoom_range=0.1,
    horizontal_flip=True,
    # vertical_flip=True,
    # fill_mode='nearest',
    validation_split=.15)  # set validation split
val_datagen = ImageDataGenerator(rescale=1./255, validation_split=0.15)

train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(height, width),
    batch_size=batch_size,
    class_mode='categorical',
    seed=13
)
validation_generator = val_datagen.flow_from_directory(
    train_dir,  # same directory as training data
    target_size=(height, width),
    batch_size=batch_size,
    class_mode='categorical',
    seed=13,
    subset='validation'
)
model = Sequential()
model.add(Conv2D(16,3,padding="same", activation="relu", input_shape=(height, width, 3)))
model.add(AveragePooling2D(strides=(2,2), padding="same"))
# model.add(Dropout(0.2))
model.add(Conv2D(32,3,padding="same", activation="relu"))
model.add(AveragePooling2D(strides=(2,2), padding="same"))
# model.add(Dropout(0.2))
model.add(Conv2D(32, 3, padding="same", activation="relu"))
model.add(AveragePooling2D(strides=(2,2), padding="same"))
# model.add(Dropout(0.2))
# model.add(Conv2D(32, 3, padding="same", activation="relu", kernel_regularizer=l2(0.0001)))
# model.add(AveragePooling2D(strides=(2,2), padding="same"))
# model.add(Dropout(0.4))
model.add(Conv2D(64, 3, padding="same", activation="relu"))
model.add(AveragePooling2D(strides=(2,2), padding="same"))
# model.add(Dropout(0.2))
model.add(Conv2D(64, 3, padding="same", activation="relu"))
model.add(AveragePooling2D(strides=(2,2), padding="same"))
# model.add(Dropout(0.2))
model.add(Conv2D(128, 3, padding="same", activation="relu"))
model.add(AveragePooling2D(strides=(2,2), padding="same"))
# model.add(Dropout(0.2))
model.add(Conv2D(128, 3, padding="same", activation="relu"))
model.add(AveragePooling2D(strides=(2,2), padding="same"))
# model.add(Dropout(0.2))
model.add(Conv2D(256, 3, padding="same", activation="relu"))
model.add(AveragePooling2D(strides=(2,2), padding="same"))
# model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(256,activation="relu"))
model.add(Dropout(.5))
# model.add(Dense(256,activation="relu"))
# model.add(Dropout(.5))
model.add(Dense(4, activation="softmax"))
model.summary()
opt = Adam(learning_rate=0.0001, name='Adam')  # assign the optimizer; a bare Adam(...) call is discarded and the string 'Adam' uses the default learning rate
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
I've done a few things to try to solve this problem:
I didn't use dropout layers between the conv layers.
I decreased the range of the data augmentation.
I trained for a longer period (the test accuracy drops to 62% while the val_acc eventually reaches 100%).
What could be the cause of this issue and how can it be resolved?
How can I display test images in Python that have high levels of inaccuracy?
You need to add subset='training' in train_generator. Right now, you are training on both training and validation data.
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(height, width),
    batch_size=batch_size,
    class_mode='categorical',
    seed=13,
    subset='training'
)
I have a dataset of images (EEG spectrograms) as given below.
Image dimensions are 669x1026. I am using the following code for binary classification of the spectrograms.
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
# dimensions of our images.
img_width, img_height = 669, 1026
train_data_dir = '/home/spectrograms/train'
validation_data_dir = '/home/spectrograms/test'
nb_train_samples = 791
nb_validation_samples = 198
epochs = 100
batch_size = 3
if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)
model = Sequential()
model.add(Conv2D(128, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(256, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(512, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(256, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(16))
model.add(Activation('relu'))
# model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.summary()
# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0,
    zoom_range=0,
    horizontal_flip=False)
# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)
model.save_weights('CNN_model.h5')
But I am not able to get a training accuracy greater than 0.53. I have only a limited amount of data (790 training samples and 198 testing samples), so increasing the number of input images is not an option. What else can I do to improve the accuracy?
Your code
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0,
    zoom_range=0,
    horizontal_flip=False)
is not doing any image augmentation, only rescaling. I'm not sure what type of augmentation may help; it looks like your images do not really rely on color. It probably will not help accuracy, but you could reduce computational expense by converting the images to grayscale. You might get some improvement by using the Keras callbacks ReduceLROnPlateau and EarlyStopping. Documentation is here. My suggested code for these callbacks is shown below.
rlronp = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=1,
                                              verbose=1, mode="auto", min_delta=0.0001,
                                              cooldown=0, min_lr=0)
estop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", min_delta=0, patience=4,
                                         verbose=1, mode="auto", baseline=None,
                                         restore_best_weights=True)
callbacks = [rlronp, estop]
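If you do try grayscale, a minimal sketch is to let the generator do the conversion via the color_mode argument of flow_from_directory (remember to change input_shape to a single channel):

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    color_mode='grayscale',  # 1 channel instead of 3 reduces compute
    batch_size=batch_size,
    class_mode='binary')
# the first Conv2D layer's input_shape must then end in 1, e.g. (img_width, img_height, 1)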
You can try using transfer learning. Most of those models are trained on the ImageNet dataset, which is dissimilar to the type of images you are using, but it might be worth a try. I recommend the MobileNet model. Code for that is shown below.
import tensorflow as tf
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adamax

base_model = tf.keras.applications.mobilenet.MobileNet(
    include_top=False, input_shape=input_shape, pooling='max', weights='imagenet', dropout=.4)
x = base_model.output
x = Dense(64, activation='relu')(x)
x = Dropout(.3, seed=123)(x)
output = Dense(1, activation='sigmoid')(x)
model = Model(inputs=base_model.input, outputs=output)
model.compile(Adamax(learning_rate=.001), loss='binary_crossentropy', metrics=['accuracy'])
Use the callbacks referenced above in model.fit. You may get a warning that MobileNet was trained with an image shape of 224 x 224 x 3, but it should still load the ImageNet weights and work.
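Putting it together, a minimal sketch of the fit call with those callbacks (variable names follow the question's code):

history = model.fit(
    train_generator,
    epochs=epochs,
    validation_data=validation_generator,
    callbacks=callbacks,  # [rlronp, estop] defined above
    verbose=1)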
I'm using Tensorflow 2.0 nightly build, on google colab.
I made a simple CNN model, then compiled it and fit it.
Here's code:
model = tf.keras.models.Sequential([
    tf.keras.layers.Reshape((28, 28, 1)),
    tf.keras.layers.Conv2D(filters=32, kernel_size=(3, 3), padding='SAME',
                           activation=tf.nn.relu),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
    tf.keras.layers.Conv2D(filters=64, kernel_size=(3, 3), padding='SAME',
                           activation=tf.nn.relu),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(10),
    tf.keras.layers.Softmax()
])
optimizer = tf.keras.optimizers.Adam(0.001)
model.compile(optimizer=optimizer,
              loss=tf.keras.losses.CategoricalCrossentropy(),
              matrics=['accuracy'])
log_dir = './logs'
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir,
                                                      histogram_freq=2,
                                                      write_images=True,
                                                      update_freq='batch',
                                                      profile_batch=0)
model.fit(x=x_train, y=y_train, batch_size=100, epochs=15,
          callbacks=[tensorboard_callback], validation_data=(x_test, y_test))
And it doesn't give me accuracy information.
I evaluated the model, which was supposed to report accuracy, but it only gives me the loss.
I printed model.metrics, and it was just [].
Is it a bug? Or did I miss something?
You misspelled metrics as matrics. Change matrics=['accuracy'] to metrics=['accuracy'].
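With that one-character fix, the compile call becomes:

model.compile(optimizer=optimizer,
              loss=tf.keras.losses.CategoricalCrossentropy(),
              metrics=['accuracy'])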