Tensorflow running slow on RTX 2060

I am following a course on deep learning and am currently working on CNNs. The training set is 8000 photos (4000 cats and 4000 dogs), the test set is 2000/2000, and I am using 64x64 RGB images. The model is built with Keras: two Conv2D/MaxPooling2D blocks of 32 filters each, a Flatten layer, and two Dense layers of 128 units and 1 output. My problem is that this setup runs at 15 minutes per epoch, which for 25 epochs means at least 6 hours of training, and on some epochs it also freezes for a while at 7999/8000. I am running this on Windows 10 with Anaconda, Python 3.7, and TensorFlow 1.13. Is this reasonable performance, or can I improve it? I was expecting better performance from the new Turing architecture.
# -*- coding: utf-8 -*-
# Part 1 - Building the convolutional neural network
import tensorflow as tf
from keras import backend as K
config = tf.ConfigProto(intra_op_parallelism_threads=6,
                        inter_op_parallelism_threads=6,
                        allow_soft_placement=True,
                        device_count = {'CPU': 1, 'GPU': 1})
session = tf.Session(config=config)
K.set_session(session)
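# Side note: allow_growth is often suggested for GPU memory issues on TF 1.x;
# it would need to be set on the config before the Session above is created:
#   config.gpu_options.allow_growth = True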
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
# Initialising the CNN
classifier = Sequential()
# Step 1 - Convolution
classifier.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))
# Step 2 - Pooling
classifier.add(MaxPooling2D(pool_size = (2, 2)))
# Adding a second convolutional layer
classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
# Step 3 - Flattening
classifier.add(Flatten())
# Step 4 - Full connection
classifier.add(Dense(units = 128, activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
# Compiling the CNN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
weights = classifier.get_weights()
# Part 2 - Fitting the CNN to the images
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('dataset/training_set',
                                                 target_size = (64, 64),
                                                 batch_size = 32,
                                                 class_mode = 'binary')
test_set = test_datagen.flow_from_directory('dataset/test_set',
                                            target_size = (64, 64),
                                            batch_size = 32,
                                            class_mode = 'binary')
classifier.fit_generator(training_set,
                         steps_per_epoch = 8000,
                         epochs = 25,
                         validation_data = test_set,
                         validation_steps = 2000)
classifier.save("my first model")
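One thing I am not sure about: in Keras 2, steps_per_epoch counts batches rather than individual images, so with batch_size = 32 the call above goes through 8000 x 32 = 256,000 images per epoch instead of a single 8000-image pass, which alone could explain the epoch time (and the long pause at 7999/8000 is presumably the 2000-step validation run). A sketch of what I believe the corrected call would look like, assuming 8000 training and 2000 test images:
batch_size = 32
classifier.fit_generator(training_set,
                         steps_per_epoch = 8000 // batch_size,    # 250 batches = one pass over the data
                         epochs = 25,
                         validation_data = test_set,
                         validation_steps = 2000 // batch_size)   # 62 validation batches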
Thank you

Related

Error with Cats and Dog Convolutional Network: model.fit_generator

I'm using a PDF to build my first Convolutional Neural Network using cats and dogs and am encountering a consistent error. The text is:
WARNING:tensorflow:sample_weight modes were coerced from
  ...
    to
  ['...']
WARNING:tensorflow:sample_weight modes were coerced from
  ...
    to
  ['...']
The relevant code is pasted in two sections below. Any help would be appreciated because I'm hitting a wall on this.
This top bit is working but may be relevant:
#Build the network
#Import needed layers and models from tensorflow.keras
import tensorflow as tf
from tensorflow.keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D, Dropout
from tensorflow.keras.models import Sequential
#Build model--Use sequential model--Most common
model = Sequential()
#Input layer
model.add(Conv2D(32, (3,3), activation = 'relu',
                 input_shape = (150, 150, 3)))
model.add(MaxPooling2D(2,2))
#First hidden layer
model.add(Conv2D(64, (3,3), activation = 'relu'))
model.add(MaxPooling2D(2,2))
#Second hidden layer
model.add(Conv2D(128, (3,3), activation = 'relu'))
model.add(MaxPooling2D(2,2))
#Third hidden layer
model.add(Conv2D(128, (3,3), activation = 'relu'))
model.add(MaxPooling2D(2,2))
#Fourth hidden layer
model.add(Flatten())
model.add(Dense(512, activation = 'relu'))
#Output layer
model.add(Dense(1, activation = 'sigmoid'))
#
from tensorflow.keras import optimizers
#Compilation step
model.compile(loss = 'binary_crossentropy',
              optimizer = 'adam',
              metrics = ['acc'])
#Read images from directories
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255)
test_datagen = ImageDataGenerator(rescale = 1./255)
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size = (150, 150),
    batch_size = 20,
    class_mode = 'binary')
validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size = (150, 150),
    batch_size = 20,
    class_mode = 'binary')
Fit model with a batch generator
This part of the code is what causes the error
history = model.fit_generator(
    train_generator,
    steps_per_epoch = 100,
    epochs = 30,
    validation_data = validation_generator,
    validation_steps = 50)
As a final note, this code is in Python 3 and uses the kagglecatsanddogs dataset from Microsoft.
The warning below is fixed in the nightly version of TensorFlow and will be included in the next stable release, TensorFlow 2.2:
WARNING:tensorflow:sample_weight modes were coerced from
  ...
    to
  ['...']
Currently, to make it work, please install the TensorFlow nightly version as shown below:
!pip install tf-nightly
For more details, please refer to this GitHub issue.

"NaN" result when running multi class classification

When I run these lines of code for binary classification, it runs well without any problem and gets a good result, but when I try to make it work for many classes, e.g. 3 classes, it gives "NaN" in the predict result.
# Importing the Keras libraries and packages
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
# Initialising the CNN
classifier = Sequential()
# Step 1 - Convolution
classifier.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))
# Step 2 - Pooling
classifier.add(MaxPooling2D(pool_size = (2, 2)))
# Adding a second convolutional layer
classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
# Step 3 - Flattening
classifier.add(Flatten())
# Step 4 - Full connection
classifier.add(Dense(units = 128, activation = 'relu'))
classifier.add(Dense(units = 3, activation = 'sigmoid'))
# Compiling the CNN
classifier.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
# Part 2 - Fitting the CNN to the images
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('data/train',
                                                 target_size = (64, 64),
                                                 batch_size = 32,
                                                 class_mode = 'categorical')
test_set = test_datagen.flow_from_directory('data/test',
                                            target_size = (64, 64),
                                            batch_size = 32,
                                            class_mode = 'categorical')
classifier.fit_generator(training_set,
                         steps_per_epoch = 240,
                         epochs = 25,
                         validation_data = test_set,
                         validation_steps = 30)
import numpy as np
from keras.preprocessing import image
test_image = image.load_img('2.jpeg', target_size = (64, 64))
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis = 0)
result = classifier.predict(test_image)
training_set.class_indices
I tried these lines of code with the binary loss function and 2 classes, and it worked well without any problems, giving a good result that helped with my work (accuracy approximately 93%).
But my project is based on multi-class classification, so I changed the loss function to 'categorical_crossentropy' and the class_mode in flow_from_directory to 'categorical' to make it multi-class. The accuracy starts at 60%, grows up to 99%, and then suddenly drops to 33%.
The expected result is the labels of the classes; the actual result is "NaN".
Thanks in advance.
For multi-class classification, softmax is usually applied on the last dense layer instead of sigmoid. Change it to softmax to see whether the issue is still there.
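A minimal sketch of the change (only the output activation differs; the categorical loss and class_mode stay the same):
# softmax turns the 3 outputs into a probability distribution summing to 1,
# whereas independent sigmoids can all saturate and destabilise the categorical loss
classifier.add(Dense(units = 3, activation = 'softmax'))
classifier.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])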

Low GPU usage while training CNN

I'm trying to train a CNN that predicts whether an image is of a cat or a dog, using Keras with TensorFlow on my GPU, but it's taking a lot of time per epoch.
I followed a tutorial to build this CNN from scratch, so I've installed CUDA 10.0, Visual Studio Community 2017, TensorFlow on GPU, and Keras (all of this using Spyder and Anaconda). But when I started training the CNN I opened the Task Manager and saw that CUDA is being used at 6-7%. The same happens when I check the GPU usage with NVSMI.
My GPU is an NVIDIA RTX 2060.
This is the code I'm running:
from keras.models import Sequential
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.preprocessing.image import ImageDataGenerator
classifier = Sequential()
classifier.add(Convolution2D(32, (3, 3), padding = 'same', input_shape = (64, 64, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Convolution2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Flatten())
classifier.add(Dense(128, activation = 'relu'))
classifier.add(Dense(1, activation = 'sigmoid'))
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
training_set = train_datagen.flow_from_directory(
    'dataset/training_set',
    target_size=(64, 64),
    batch_size=32,
    class_mode='binary')
test_set = test_datagen.flow_from_directory(
    'dataset/test_set',
    target_size=(64, 64),
    batch_size=32,
    class_mode='binary')
classifier.fit_generator(
    training_set,
    steps_per_epoch=8000,
    epochs=10,
    validation_data=test_set,
    validation_steps=2000)
I want to know if there's any way to set a specific value for the GPU usage, or at least to make it grow above 6%.
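In case it is useful, a minimal check I can run (assuming TensorFlow 1.x) to confirm TensorFlow actually sees the GPU:
import tensorflow as tf
from tensorflow.python.client import device_lib
print(tf.test.is_gpu_available())        # True if a CUDA-capable GPU is usable
print(device_lib.list_local_devices())   # lists the devices TensorFlow can place ops on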

How can I predict a bigger image if the model is trained on smaller images using Keras

I'm using the following Keras code with the TensorFlow backend to classify the difference between a dog and a cat. It fails to predict on any image above 800x800. How can I resize an HD image so that I can predict on it?
Code to train:
# Importing the Keras libraries and packages
from keras.models import Sequential
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.preprocessing.image import load_img, img_to_array
from keras.models import model_from_json
from scipy.misc import imresize
# Initialising the CNN
classifier = Sequential()
# Step 1 - Convolution
classifier.add(Convolution2D(32, 3, 3, input_shape = (64, 64, 3), activation = 'relu'))
# Step 2 - Pooling
classifier.add(MaxPooling2D(pool_size = (2, 2)))
# Adding a second convolutional layer
classifier.add(Convolution2D(32, 3, 3, activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
# Step 3 - Flattening
classifier.add(Flatten())
# Step 4 - Full connection
classifier.add(Dense(output_dim = 128, activation = 'relu'))
classifier.add(Dense(output_dim = 1, activation = 'sigmoid'))
# Compiling the CNN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# Part 2 - Fitting the CNN to the images
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('dataset/training_set',
                                                 target_size = (64, 64),
                                                 batch_size = 32,
                                                 class_mode = 'binary')
test_set = test_datagen.flow_from_directory('dataset/test_set',
                                            target_size = (64, 64),
                                            batch_size = 32,
                                            class_mode = 'binary')
classifier.fit_generator(training_set,
                         samples_per_epoch = 80,
                         nb_epoch = 100,
                         validation_data = test_set,
                         nb_val_samples = 2000)
print(training_set.class_indices)
Code to predict:
from keras.models import model_from_json
json_file = open('model.json', 'r')
model_json = json_file.read()
json_file.close()
model = model_from_json(model_json)
# load weights into new model
model.load_weights("model.h5")
# evaluate loaded model on test data
model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
import shutil
import matplotlib.pyplot as plt
import requests
url = raw_input("Please enter the image url/link")
response = requests.get(url, stream=True)
with open('test.jpg', 'wb') as out_file:
    shutil.copyfileobj(response.raw, out_file)
from keras.preprocessing import image
import numpy as np
test = image.load_img('test.jpg')
test = image.img_to_array(test)
test = np.expand_dims(test, axis=0)
result = model.predict(test)
if result[0][0] == 1:
    prediction = 'dog'
    print prediction
else:
    prediction = 'cat'
    print prediction
According to the Keras docs you can just specify the target size when loading the image; it should match the input shape the model was trained on, which is (64, 64) here:
test = image.load_img('test.jpg', target_size=(64, 64))
See https://keras.io/applications/ for another example of target_size in use.
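Note also that the training generators rescale pixels by 1./255, so the same scaling should be applied at prediction time; a sketch of the full preprocessing for a single image:
from keras.preprocessing import image
import numpy as np
test = image.load_img('test.jpg', target_size=(64, 64))  # resize to the training input size
test = image.img_to_array(test) / 255.0                  # match the 1./255 rescale used in training
test = np.expand_dims(test, axis=0)                      # add the batch dimension
result = model.predict(test)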

Keras/Tensorflow model returning zeros for everything

I have four variables train_X, train_Y, test_X, test_Y, where train_X, train_Y are the training set and test_X, test_Y are the test set. I have the following Keras neural network:
from keras.models import Sequential, Model
from keras.optimizers import RMSprop
from keras.layers import Input, Dense, Convolution2D, LSTM, MaxPooling2D, \
    UpSampling2D, RepeatVector, Flatten, Dropout, Activation
from keras.callbacks import TensorBoard
from keras.preprocessing.image import ImageDataGenerator
idg = ImageDataGenerator()
nb_epoch = 25
idg.fit(train_X)
input_data = Input(shape=(100, 100, 1))
conv1 = Convolution2D(32, 3, 3, activation='relu', border_mode='same')(input_data)
pool1 = MaxPooling2D((2, 2), border_mode='same')(conv1)
conv2 = Convolution2D(32, 3, 3, activation='relu', border_mode='same')(pool1)
pool2 = MaxPooling2D((2, 2), border_mode='same')(conv2)
conv3 = Convolution2D(64, 3, 3, activation='relu', border_mode='same')(pool2)
pool3 = MaxPooling2D((2, 2), border_mode='same')(conv3)
flatten = Flatten()(pool3)
dense1 = Dense(64)(flatten)
activation = Activation('relu')(dense1)
dropout = Dropout(0.5)(activation)
dense2 = Dense(1)(dropout)
output_data = Activation('sigmoid')(dense2)
model = Model(input_data, output_data)
model.compile(optimizer='adadelta', loss='mean_squared_error')
model.fit_generator(idg.flow(train_X, train_Y, batch_size=32, seed=0),
                    samples_per_epoch=len(train_X), nb_epoch=nb_epoch,
                    validation_data=(test_X, test_Y),
                    callbacks=[TensorBoard(log_dir='log_dir')])
However, the following line is giving me zeros for everything:
predictions = model.predict(test_X)
I checked obvious things, such as test_X being zero. My guess is that the problem is some kind of vanishing gradient issue (a few sanity checks are sketched below). Any help is appreciated; thanks!
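Before assuming a vanishing gradient, a few quick checks (a sketch, assuming the four variables are NumPy arrays):
# a sigmoid output can only match targets in [0, 1]; out-of-range labels push it into saturation
print(train_Y.min(), train_Y.max())
# unscaled 0-255 pixel values can saturate the early layers
print(train_X.min(), train_X.max())
# check whether the model also returns zeros on data it was trained on
print(model.predict(train_X[:32]).ravel())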