Keras with TensorFlow runs fine, until I add callbacks

I'm running a model using Keras with the TensorFlow backend. Everything works perfectly:
model = Sequential()
model.add(Dense(dim, input_dim=dim, activation='relu'))
model.add(Dense(200, activation='relu'))
model.add(Dense(1, activation='linear'))
model.compile(loss='mse', optimizer='Adam', metrics=['mae'])
history = model.fit(X, Y, epochs=12,
                    batch_size=100,
                    validation_split=0.2,
                    shuffle=True,
                    verbose=2)
But as soon as I include a logger callback so I can log for TensorBoard, I get:
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'input_layer_input_2' with dtype float and shape [?,1329]...
Here's my code (and actually, it worked once, the very first time; ever since then I've been getting that error):
model = Sequential()
model.add(Dense(dim, input_dim=dim, activation='relu'))
model.add(Dense(200, activation='relu'))
model.add(Dense(1, activation='linear'))
model.compile(loss='mse', optimizer='Adam', metrics=['mae'])
logger = keras.callbacks.TensorBoard(log_dir='/tf_logs',
                                     write_graph=True,
                                     histogram_freq=1)
history = model.fit(X, Y,
                    epochs=12,
                    batch_size=100,
                    validation_split=0.2,
                    shuffle=True,
                    verbose=2,
                    callbacks=[logger])

The TensorBoard callback uses the tf.summary.merge_all function to collect all tensors for histogram computation. Because of that, your summary is collecting tensors from previous models that were never cleared between runs. To clear these previous models, try:
from keras import backend as K
K.clear_session()
model = Sequential()
model.add(Dense(dim, input_dim=dim, activation='relu'))
model.add(Dense(200, activation='relu'))
model.add(Dense(1, activation='linear'))
model.compile(loss='mse', optimizer='Adam', metrics=['mae'])
logger = keras.callbacks.TensorBoard(log_dir='/tf_logs',
                                     write_graph=True,
                                     histogram_freq=1)
history = model.fit(X, Y,
                    epochs=12,
                    batch_size=100,
                    validation_split=0.2,
                    shuffle=True,
                    verbose=2,
                    callbacks=[logger])
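K.clear_session() destroys the current TensorFlow graph and starts a fresh one, so the stale placeholders and summary tensors from earlier runs no longer exist when merge_all is called. If you rebuild the model repeatedly in the same session (e.g. re-running a notebook cell), call it before each rebuild.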

Related

Unable to achieve good accuracy in dog breed classifier using Keras CNN

I am (very) new to deep learning and I am trying to train a dog breed classifier using TensorFlow/Keras. I have selected a subset of 10 breeds to speed up calculations, and I am using all the images available in the Stanford dataset for those breeds, which I have placed in train/test/val directories. I have 1338 images for training, 379 images for validation and 200 images for test.
I first tried building a simple CNN from scratch without data augmentation; I quickly reached 99% accuracy on the training set and got stuck at 30% on the val set (which I assume is quite normal without augmentation?).
Then I applied data augmentation and tried two approaches: building a CNN from scratch and using transfer learning. With the "home-made" CNN I can't reach more than around 30% accuracy even on the training set, and I can't figure out what the problem is. And I am stuck around 80% with transfer learning, which I guess is not that good either?
Here is the code for data augmentation:
# Creating image generator steps
train_datagen = ImageDataGenerator(rescale=1.0/255.0,
                                   rotation_range=60,
                                   width_shift_range=0.3,
                                   height_shift_range=0.3,
                                   shear_range=0.2,
                                   zoom_range=[0.5, 1.5],
                                   brightness_range=[0.5, 1.5],
                                   horizontal_flip=True)
val_datagen = ImageDataGenerator(rescale=1.0/255.0)
test_datagen = ImageDataGenerator(rescale=1.0/255.0)
train_generator = train_datagen.flow_from_directory(
    directory="split_output/train",
    target_size=(224, 224),
    color_mode="rgb",
    batch_size=8,
    class_mode='sparse',
    shuffle=True,   # a boolean, not the string 'True'
    seed=42
)
val_generator = val_datagen.flow_from_directory(
    directory="split_output/val",
    target_size=(224, 224),
    color_mode="rgb",
    batch_size=8,
    class_mode='sparse',
    shuffle=True,
    seed=42
)
test_generator = test_datagen.flow_from_directory(
    directory="split_output/test",
    target_size=(224, 224),
    color_mode="rgb",
    batch_size=8,
    class_mode='sparse',
    shuffle=False,  # the string 'False' is truthy and would shuffle the test set
    seed=42
)
Here is the first CNN I tried (for which train and val accuracies are both stuck around 25%):
# The CNN architecture
model = Sequential()
model.add(Conv2D(32, (3, 3), padding="same", activation='relu', input_shape=(224, 224, 3)))
model.add(MaxPooling2D((2, 2)))
# 32 = number of filters
# (3, 3) = kernel size
model.add(Conv2D(64, (3, 3), padding="same", activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), padding="same", activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))
# Fitting the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
history = model.fit_generator(train_generator,
                              # steps_per_epoch=1000,
                              epochs=50,
                              validation_data=val_generator,
                              # validation_steps=250,
                              verbose=1)
And the second one, a bit deeper and including BatchNorm and Dropout (accuracies are stuck around 35%):
# The CNN architecture
model = Sequential()
model.add(Conv2D(32, (3, 3), padding="same", activation='relu', input_shape=(224, 224, 3)))
model.add(MaxPooling2D((2, 2)))
# 32 = number of filters
# (3, 3) = kernel size
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), padding="same", activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(128, (3, 3), padding="same", activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(512, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.summary()
opt = Adam(lr=0.0001)
# Fitting the model
model.compile(optimizer=opt,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
history = model.fit(train_generator,
                    # steps_per_epoch=1000,
                    epochs=50,
                    validation_data=val_generator,
                    # validation_steps=250,
                    verbose=1)
Here is the history for that second CNN:
[plot: accuracies for the 2nd CNN]
And finally I tried with a resnet, which gets stuck around 90% for train and 80% for val:
model = Sequential()
model.add(ResNet50(include_top=False, pooling='avg', weights="imagenet"))
model.add(Flatten())
model.add(BatchNormalization())
model.add(Dense(2048, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(1024, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(10, activation='softmax'))
opt = Adam(lr=0.0001)
model.compile(optimizer=opt, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(train_generator,
                    # steps_per_epoch=1000,
                    epochs=150,
                    validation_data=val_generator,
                    # validation_steps=250,
                    verbose=1)
And the history for this last one:
[plot: ResNet training history]
I'm a bit surprised at how the accuracies (especially val) get stuck so fast at a nearly constant value...
Again I'm very new at this so there could be very basic mistakes!

How to use the output of an ANN model as an input to another ANN model with a different dataset?

I am building two ANN models (ANN_1 and ANN_2). I want to use the output of ANN_1 as an input to ANN_2.
The structure of ANN_1 goes as follows:
# define the Keras model
model_1 = Sequential()
model_1.add(Dense(46, input_dim=46, activation='relu'))
model_1.add(Dense(31, activation='relu'))  # input_dim is only needed on the first layer
model_1.add(Dense(1, activation='sigmoid'))
#Training the ANN_1 model
model_1.compile(loss="mean_absolute_error", optimizer='adam')
model_1.summary()
#Training the ANN on the Training set
history = model_1.fit(X_train, y_train, epochs=800, batch_size=15,
                      validation_data=(X_test, y_test))
The structure of ANN_2 goes as follows:
# define the Keras model
model_2 = Sequential()
model_2.add(Dense(52, input_dim=52, activation='relu'))
model_2.add(Dense(40, activation='relu'))
model_2.add(Dense(1, activation='sigmoid'))
#Training the ANN_2 model
model_2.compile(loss="mean_absolute_error", optimizer='Adam')
model_2.summary()
#Training the ANN_2 on the Training set
history = model_2.fit(X_train_2, y_train_2, epochs=1000, batch_size=15,
                      validation_data=(X_test_2, y_test_2))
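One common pattern for this is to run ANN_1 in inference mode and append its prediction to the second dataset as an extra feature column. A minimal sketch, in which the array names X_2_model1_inputs (the second dataset's rows expressed in ANN_1's 46 input features) and X_2_native (its remaining 51 native features, so that the appended prediction yields the 52 inputs ANN_2 expects) are hypothetical, not from the question:
import numpy as np

# Hypothetical arrays, for illustration only:
#   X_2_model1_inputs : second dataset's rows in ANN_1's 46-feature input space
#   X_2_native        : the second dataset's 51 remaining native features
pred_1 = model_1.predict(X_2_model1_inputs)               # shape: (n_samples, 1)
X_train_2 = np.concatenate([X_2_native, pred_1], axis=1)  # shape: (n_samples, 52)

# ANN_2 then trains on features that include ANN_1's output
history = model_2.fit(X_train_2, y_train_2, epochs=1000, batch_size=15)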

How to improve Volatile GPU-Util?

Using Keras to train a model, the Volatile GPU-Util of the two GPUs is too low.
The block of code:
%%time
np.random.seed(seed)
model_d2v_01 = Sequential()
model_d2v_01.add(Dense(64, activation='relu', input_dim=400))
model_d2v_01.add(Dense(1, activation='sigmoid'))
model_d2v_01 = multi_gpu_model(model_d2v_01, gpus=2)
model_d2v_01.compile(loss='binary_crossentropy',
                     optimizer='adam',
                     metrics=['accuracy'])
model_d2v_01.fit(train_vecs_dbow_dmm, y_train,
                 validation_data=(validation_vecs_dbow_dmm, y_validation),
                 epochs=5, batch_size=32*2, verbose=2)
How should I modify this code? Any suggestions, please.
I found that this situation was normal. The cause was having too few layers, so the model was too simple to keep the GPUs busy. When you add more layers, the situation improves. For example:
%%time
np.random.seed(seed)
# with tf.device('/cpu:0'):
model_d2v_12 = Sequential()
model_d2v_12.add(Dense(512, activation='relu', input_dim=400))
model_d2v_12.add(Dense(512, activation='relu'))
model_d2v_12.add(Dense(512, activation='relu'))
model_d2v_12.add(Dense(1, activation='sigmoid'))
model_d2v_12 = multi_gpu_model(model_d2v_12, gpus=2)
model_d2v_12.compile(loss='binary_crossentropy',
                     optimizer='adam',
                     metrics=['accuracy'])
model_d2v_12.fit(train_vecs_dbow_dmm, y_train,
                 validation_data=(validation_vecs_dbow_dmm, y_validation),
                 epochs=10, batch_size=2048*2, verbose=2)
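Note that the example also raises the batch size from 32*2 to 2048*2. A larger batch gives each GPU more work per step relative to the fixed per-step overhead of data transfer and synchronization, which is usually what pushes Volatile GPU-Util up, so it is worth tuning alongside the extra layers.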

How to do early stopping with tensorflow.models.Sequential()?

Using a sequential model generated like this:
def generate_model():
    model = Sequential()
    model.add(Conv1D(64, kernel_size=10, strides=1,
                     activation='relu', padding='same',
                     input_shape=(MAXLENGTH, NAMESPACELENGTH)))
    model.add(MaxPooling1D(pool_size=4, strides=2))
    model.add(Conv1D(32, 3, activation='relu', padding='same'))
    model.add(MaxPooling1D(pool_size=4))
    model.add(Flatten())
    model.add(Dense(10, activation='relu'))
    model.add(Dense(1, activation='linear'))
    model.compile(loss='mean_squared_error',
                  optimizer='adam', metrics=['mean_squared_error'])
    return model
I want to do Kfold cross-validated modeling. So, I train K models in a loop:
models = []
for ndx_train, ndx_val in kfold.split(X, y):
    model = generate_model()
    N_train = len(ndx_train)
    X_batch = X[ndx_train]
    y_batch = y[ndx_train]
    model.fit(X_batch, y_batch, epochs=100, verbose=1, steps_per_epoch=10,
              validation_data=(X[ndx_val], y[ndx_val]), validation_steps=100)
    models.append(model)
Now, I can see when I want each model to stop by looking at the output, i.e. when the validation error increases again. Is it possible to do that easily with pure TF and with this higher-level API setup? There are some suggestions along those lines using tflearn here.
By using the EarlyStopping callback:
from tensorflow.keras.callbacks import EarlyStopping
callbacks = [
    EarlyStopping(monitor='val_mean_squared_error', patience=2, verbose=1),
]
model.fit(..., callbacks=callbacks)
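Applied to the k-fold loop from the question, a fresh callback per fold keeps each fold's stopping decision independent. A minimal sketch, assuming a reasonably recent tf.keras (restore_best_weights, which rolls the model back to its best epoch, was added in later releases):
from tensorflow.keras.callbacks import EarlyStopping

models = []
for ndx_train, ndx_val in kfold.split(X, y):
    model = generate_model()
    # stop once validation loss (the MSE here) fails to improve for 2 epochs
    early_stop = EarlyStopping(monitor='val_loss', patience=2, verbose=1,
                               restore_best_weights=True)
    model.fit(X[ndx_train], y[ndx_train], epochs=100, verbose=1,
              validation_data=(X[ndx_val], y[ndx_val]),
              callbacks=[early_stop])
    models.append(model)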

Keras - Stationary result using model.fit()

I'm implementing this simple neural network, with this input data:
x_train = np.asarray(x_train)
y_train = np.asarray(y_train)
x_test = np.asarray(x_test)
y_test = np.asarray(y_test)
After having defined the network's structure:
model = Sequential()
model.add(Dense(20, input_dim=5, init='normal', activation='sigmoid'))
model.add(Dense(1, init='normal', activation='sigmoid'))
I run this to train and evaluate the NN:
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(x_train, y_train, nb_epoch=10, validation_split=0.2)
and I always get the same result from model.fit(...):
32/143 [=====>........................] - ETA: 0s.
It seems that it doesn't work at all, even though I obtain consistent results on training and validation. How do I have to interpret this stationary result in the model.fit output?