Error Propagation in Keras DNN and/or CNN Regression - tensorflow

Using Keras with TensorFlow as the backend, I have created both a CNN and a DNN to predict three stellar parameters (temperature, gravity, and metallicity) from stellar spectra as input. Both models predict very well when applied to test sets, but in order to make my results useful it is necessary to include errors in my predictions.
The input spectra each have 7000 data points and the output is 3 values. Each spectrum also has an error array of 7000 data points that I would like to propagate through my model so that there is a set of 3 uncertainties associated with each prediction. Does anyone have any experience or insight on how to accomplish this?
My DNN structure looks something like this:
# Imports used below (standalone Keras 1.x-style API with the TensorFlow backend)
from keras.models import Sequential
from keras.layers import InputLayer, Dense
from keras.optimizers import Adam
from keras.callbacks import EarlyStopping
# Define vars
activation = 'relu'
init = 'he_normal'
beta_1 = 0.9
beta_2 = 0.999
epsilon = 1e-08
input_shape = (None,7000)
epochs = 100
lr = 0.0009
batch_size = 64
n_hidden = [2048,1024,512,256,128]
l = 3  # number of output labels (temperature, gravity, metallicity)
# Design DNN Layers
model = Sequential([
    InputLayer(batch_input_shape=input_shape),
    Dense(n_hidden[0], init=init, activation=activation, bias=True),
    Dense(n_hidden[1], init=init, activation=activation, bias=True),
    Dense(n_hidden[2], init=init, activation=activation, bias=True),
    Dense(n_hidden[3], init=init, activation=activation, bias=True),
    Dense(n_hidden[4], init=init, activation=activation, bias=True),
    Dense(l, init=init, activation='linear', bias=True),
])
# Optimization function
optimizer = Adam(lr=lr, beta_1=beta_1, beta_2=beta_2, epsilon=epsilon, decay=0.0)
# Compile and train network
model.compile(optimizer=optimizer, loss='mean_squared_error')
early_stopping = EarlyStopping(monitor='val_loss', min_delta=0.0001,
patience=3, verbose=2, mode='min')
## train_X.shape = (50000,7000)
## train_Y.shape = (50000,3)
## cv_X.shape = (10000,7000)
## cv_Y.shape = (10000,3)
history = model.fit(train_X, train_Y, validation_data=(cv_X, cv_Y),
nb_epoch=epochs, batch_size=batch_size, verbose=2, callbacks=[early_stopping])
My CNN is somewhat similar; it basically replaces the first few layers with 1D-CNN layers and changes the number of filters and filter lengths. Any thoughts on how to propagate errors through the model? I am familiar with some methods of obtaining the error of the model itself (through multiple trainings of the same model and/or dropout layers), but what I am looking for are errors in the actual predictions.
EDIT
Here is the architecture for my CNN
# Define vars
input_shape = (None,7000,1)
epochs = 30
activation = 'relu'
initializer = 'he_normal'
beta_1 = 0.9
beta_2 = 0.999
epsilon = 1e-08
batch_size = 64
n_hidden = [1024,512,256]
n_filters = [16,32,32,64,64]
lr = 0.001
l = 3  # number of output labels
model = Sequential([
    InputLayer(batch_input_shape=input_shape),
    Convolution1D(nb_filter=n_filters[0], filter_length=8, activation=activation, border_mode='same', init=initializer),
    Convolution1D(nb_filter=n_filters[1], filter_length=8, activation=activation, border_mode='same', init=initializer),
    MaxPooling1D(pool_length=4),
    Convolution1D(nb_filter=n_filters[2], filter_length=8, activation=activation, border_mode='same', init=initializer),
    Convolution1D(nb_filter=n_filters[3], filter_length=8, activation=activation, border_mode='same', init=initializer),
    MaxPooling1D(pool_length=4),
    Convolution1D(nb_filter=n_filters[4], filter_length=10, activation=activation),
    MaxPooling1D(pool_length=4),
    Flatten(),
    Dense(output_dim=n_hidden[0], activation=activation, init=initializer),
    Dense(output_dim=n_hidden[1], activation=activation, init=initializer),
    Dense(output_dim=n_hidden[2], activation=activation, init=initializer),
    Dense(output_dim=l, input_dim=n_hidden[2], activation='linear'),
])
# Compile Model
optimizer=Adam(lr=lr, beta_1=beta_1, beta_2=beta_2, epsilon=epsilon, decay=0.0)
model.compile(optimizer=optimizer, loss='mean_squared_error')
early_stopping = EarlyStopping(monitor='val_loss', min_delta=0.0001,
patience=3, verbose=2, mode='min')
history = model.fit(train_X, train_Y, validation_data=(cv_X, cv_Y),
nb_epoch=epochs, batch_size=batch_size, verbose=2, callbacks=[early_stopping])
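One common way to turn per-pixel error arrays into per-prediction uncertainties with an already-trained network is Monte Carlo propagation: draw many noisy realisations of each spectrum from its error array, run them all through the model, and take the spread of the resulting outputs. Below is a minimal sketch of that idea (it captures only the uncertainty due to the input errors, not the model uncertainty obtainable from re-training or dropout); it assumes a trained model as above and a single spectrum x with error array x_err, both of shape (7000,):
import numpy as np

def mc_propagate(model, x, x_err, n_samples=100):
    """Propagate per-pixel input errors through a trained model by Monte Carlo.

    Draws n_samples perturbed copies of the spectrum (independent Gaussian
    noise with standard deviation x_err per pixel) and returns the mean and
    standard deviation of the resulting predictions, each of shape (3,).
    """
    perturbed = np.random.normal(loc=x, scale=x_err, size=(n_samples, x.size))
    # For the CNN version, add the trailing channel axis: perturbed = perturbed[..., None]
    preds = model.predict(perturbed)  # shape (n_samples, 3)
    return preds.mean(axis=0), preds.std(axis=0)

# Hypothetical usage: y_mean, y_std = mc_propagate(model, spectrum, spectrum_err)
Increasing n_samples gives a more stable standard deviation at the cost of more forward passes per spectrum.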

Related

HIGH Resolution Image and Colab PRO Crash

I have 1700 images, each 1000x1000 pixels (height x width). There are minor details in them, so I prefer to keep this size. Now, my Google Colab Pro session crashes. Please help.
##title IMAGE TO DATA, NORMALIZATION AND AUGMENTATION
#Directories with Subdirectories as Classes for training and validation datasets
%%capture
# Imports used by the data pipeline and model below
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dropout, Flatten, Dense
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_dir = '/content/Dataset/Training'
validation_dir = '/content/Dataset/Validation'
# Set batch size and image height and width
batch_size = 32
IMG_HEIGHT, IMG_WIDTH = (1000,1000)
#Image to Data Transform using ImageDataGenerator of Keras
#Image to Data for Training Data
Dataset_Image_Training = ImageDataGenerator(rescale = 1./255, zoom_range=[0.8, 1.5], brightness_range= [0.8, 2.0])
train_data_gen = Dataset_Image_Training.flow_from_directory(
batch_size= batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT,IMG_WIDTH),
class_mode='binary')
#Image to Data for Validation Data
validation_image_generator = ImageDataGenerator(rescale=1./255, zoom_range=[0.8, 1.5], brightness_range= [0.8, 2.0])
val_data_gen = validation_image_generator.flow_from_directory(
batch_size=batch_size,
directory= validation_dir,
shuffle=True,
target_size=(IMG_HEIGHT,IMG_WIDTH),
class_mode= 'binary')
#Check Classes in Dataset
train_data_gen.class_indices
##title Deep Learning CNN Model with Keras Sequential with **Dropout**
#%%capture
model = Sequential([
Conv2D(32, (3,3), padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPool2D(2,2),
Dropout(0.5),
Conv2D(64, (3,3), padding='same', activation='relu'),
MaxPool2D(2,2),
Dropout(0.5),
Conv2D(128, (3,3), padding='same', activation='relu'),
MaxPool2D(2,2),
Dropout(0.5),
Conv2D(256, (3,3), padding='same', activation='relu'),
MaxPool2D(2,2),
Dropout(0.5),
Flatten(),
Dense(512, activation='relu'),
Dropout(0.5),
Dense(1, activation='sigmoid')])
# Model Compilation
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
#Tensorboard Set up
import tensorflow as tf
import datetime
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
#Checkpoint and earlystop setting
filepath = '/content/drive/My Drive/DL_Model.hdf5'
checkpoint = [tf.keras.callbacks.ModelCheckpoint(filepath, monitor='val_accuracy', mode='max', save_best_only=True, save_weights_only=False, verbose=1),
              tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=15, verbose=1),
              tensorboard_callback]
#Model Fitting
hist = model.fit(
train_data_gen,
steps_per_epoch=None,
epochs=500,
validation_data=val_data_gen,
validation_steps=None,
callbacks=checkpoint  # 'checkpoint' is already a list of callbacks
)
#Accuracy Print
train_acc = max(hist.history['accuracy'])
val_acc = max(hist.history['val_accuracy'])
train_loss = min(hist.history['loss'])
val_loss = min(hist.history['val_loss'])
print('Training accuracy is')
print(train_acc)
print('Validation accuracy is')
print(val_acc)
print('Training loss is')
print(train_loss)
print('Validation loss is')
print(val_loss)
#Load Tensorboard
%load_ext tensorboard
%tensorboard --logdir logs
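For what it's worth, assuming the crash is Colab running out of memory (the usual failure mode with inputs this large), a rough back-of-the-envelope estimate shows why 1000x1000 inputs are problematic: the first Conv2D layer alone produces a 1000x1000x32 float32 feature map per image.
# Back-of-the-envelope estimate (assumption: the crash is an out-of-memory error).
# Activations of the first Conv2D layer alone, with 'same' padding and float32:
batch_size = 32
h, w, filters, bytes_per_float = 1000, 1000, 32, 4
first_layer_bytes = batch_size * h * w * filters * bytes_per_float
print(first_layer_bytes / 1024**3, "GiB")  # ~3.8 GiB of forward activations for a single layer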

TF2 code 10 times slower than equivalent PyTorch code for a Conv1D network

I've been trying to translate some PyTorch code to TensorFlow 2, but the TF2 code is around 10 times slower. I've tried looking at where this might come from, and as far as I can tell it comes from the tape.gradient call (performance was the same with Keras's .fit function). I've tried different data loaders, ways of declaring the model, installations, etc., and the results have been consistent.
Any explanation / solution as to why this is happening would be much appreciated.
Here is a minimalist version of the TF2 code:
import time
import tensorflow as tf
from tensorflow.keras import layers
import numpy as np
# Generate some fake data
train_labels = np.random.randint(10, size=1000)
train_data = np.random.rand(1000, 120, 18, 1)
train_dataset = tf.data.Dataset.from_tensor_slices((train_data, train_labels))
train_dataset = train_dataset.batch(256)
# Create a small model
model = tf.keras.Sequential([
layers.Conv1D(64, kernel_size=7, strides=3, padding="same", activation="relu"),
layers.Conv1D(64, kernel_size=5, strides=2, padding="same", activation="relu"),
layers.Conv1D(128, kernel_size=5, strides=2, padding="same", activation="relu"),
layers.Conv1D(128, kernel_size=3, strides=1, padding="same", activation="relu"),
layers.Conv1D(128, kernel_size=3, strides=1, padding="same", activation="relu"),
layers.Conv1D(256, kernel_size=1, strides=1, padding="same", activation="relu"),
layers.GlobalAveragePooling2D(),
layers.Flatten(),
layers.Dense(128, use_bias=True, activation="relu"),
layers.Dense(32, use_bias=True, activation="relu"),
layers.Dense(1, activation='sigmoid', use_bias=True),
])
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3, decay=5e-4)
@tf.function
def train_step(data_batch, labels_batch):
    with tf.GradientTape() as tape:
        y_pred = model(data_batch)
        loss = tf.keras.losses.MSE(labels_batch, y_pred)
    gradients = tape.gradient(loss, model.trainable_weights)
    optimizer.apply_gradients(zip(gradients, model.trainable_weights))

step_times = []
for epoch in range(20):
    for data_batch, labels_batch in train_dataset:
        step_start_time = time.perf_counter()
        train_step(data_batch, labels_batch)
        if epoch != 0:
            step_times.append(time.perf_counter()-step_start_time)
print(f"Average training step time: {np.mean(step_times):.3f}s.")
And the PyTorch equivalent:
import time
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
torch.backends.cudnn.benchmark = True
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Generate some fake data
train_labels = np.random.randint(10, size=1000)
train_data = np.random.rand(1000, 18, 120)
# Create a small model
class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv1d(18, 64, kernel_size=7, stride=3, padding=3)
        self.conv2 = nn.Conv1d(64, 64, kernel_size=5, stride=2, padding=2)
        self.conv3 = nn.Conv1d(64, 128, kernel_size=5, stride=2, padding=2)
        self.conv4 = nn.Conv1d(128, 128, kernel_size=3, stride=1, padding=1)
        self.conv5 = nn.Conv1d(128, 128, kernel_size=3, stride=1, padding=1)
        self.conv6 = nn.Conv1d(128, 256, kernel_size=3, stride=1, padding=1)
        self.fc1 = nn.Linear(256, 128)
        self.fc2 = nn.Linear(128, 32)
        self.fc3 = nn.Linear(32, 1)

    def forward(self, inputs):
        x = F.relu(self.conv1(inputs))
        x = F.relu(self.conv2(x))
        x = F.relu(self.conv3(x))
        x = F.relu(self.conv4(x))
        x = F.relu(self.conv5(x))
        x = F.relu(self.conv6(x))
        x = x.mean(2)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = torch.sigmoid(self.fc3(x))
        return x
model = Model()
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)
loss_fn = torch.nn.MSELoss()
batch_size = 256
train_steps_per_epoch = train_data.shape[0] // batch_size
step_times = []
for epoch in range(20):
    for step in range(train_steps_per_epoch):
        batch_start, batch_end = step * batch_size, (step+1) * batch_size
        data_batch = torch.FloatTensor(train_data[batch_start:batch_end]).to(device)
        labels_batch = torch.FloatTensor(train_labels[batch_start:batch_end]).to(device)
        step_start_time = time.perf_counter()
        optimizer.zero_grad()
        y_pred = model(data_batch)
        loss = loss_fn(labels_batch, torch.squeeze(y_pred))
        loss.backward()
        optimizer.step()
        if epoch != 0:
            step_times.append(time.perf_counter()-step_start_time)
print(f"Average training step time: {np.mean(step_times):.3f}s.")
You're using tf.GradientTape correctly, but both your model and your data differ between the snippets you provided: the TF data has an extra trailing channel axis ((1000, 120, 18, 1) instead of (1000, 120, 18)), the last Conv1D uses kernel_size=1 where the PyTorch conv6 uses kernel_size=3, and the TF model applies GlobalAveragePooling2D plus Flatten instead of a 1-D global average pool.
Here is the TF code that uses the same data and model architecture as your PyTorch model.
import time
import tensorflow as tf
from tensorflow.keras import layers
import numpy as np
# Generate some fake data
train_labels = np.random.randint(10, size=1000)
train_data = np.random.rand(1000, 120, 18)
train_dataset = tf.data.Dataset.from_tensor_slices((train_data, train_labels))
train_dataset = train_dataset.batch(256)
model = tf.keras.Sequential([
layers.Conv1D(64, kernel_size=7, strides=3, padding="same", activation="relu"),
layers.Conv1D(64, kernel_size=5, strides=2, padding="same", activation="relu"),
layers.Conv1D(128, kernel_size=5, strides=2, padding="same", activation="relu"),
layers.Conv1D(128, kernel_size=3, strides=1, padding="same", activation="relu"),
layers.Conv1D(128, kernel_size=3, strides=1, padding="same", activation="relu"),
layers.Conv1D(256, kernel_size=3, strides=1, padding="same", activation="relu"),
layers.GlobalAveragePooling1D(),
layers.Dense(128, use_bias=True, activation="relu"),
layers.Dense(32, use_bias=True, activation="relu"),
layers.Dense(1, activation='sigmoid', use_bias=True),
])
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3, decay=5e-4)
@tf.function
def train_step(data_batch, labels_batch, model):
    with tf.GradientTape() as tape:
        y_pred = model(data_batch, training=True)
        loss = tf.keras.losses.MSE(labels_batch, y_pred)
    gradients = tape.gradient(loss, model.trainable_weights)
    optimizer.apply_gradients(zip(gradients, model.trainable_weights))

step_times = []
for epoch in range(20):
    for data_batch, labels_batch in train_dataset:
        step_start_time = time.perf_counter()
        train_step(data_batch, labels_batch, model)
        if epoch != 0:
            step_times.append(time.perf_counter()-step_start_time)
print(f"Average training step time: {np.mean(step_times):.3f}s.")
So, in reality, TF is 3 times faster than PyTorch: 0.035s vs 0.112s per step.
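As a side note, the @tf.function decorator on train_step matters for these timings: without it, every step runs op by op in eager mode. A rough way to see how much of the per-step cost that accounts for (assuming a recent TF 2 release and the model/dataset defined above) is to force eager execution and re-run the same loop:
# Rough check: force the @tf.function-decorated train_step to run eagerly and
# compare the per-step time against the compiled timings printed above.
tf.config.run_functions_eagerly(True)
eager_times = []
for data_batch, labels_batch in train_dataset:
    start = time.perf_counter()
    train_step(data_batch, labels_batch, model)
    eager_times.append(time.perf_counter() - start)
tf.config.run_functions_eagerly(False)
print(f"Eager step time: {np.mean(eager_times[1:]):.4f}s")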

How can I pass logits to sigmoid_cross_entropy_with_logits before I fit and predict model?

Since I need to train a model with multiple labels, I need to use the loss function tf.nn.sigmoid_cross_entropy_with_logits. This function has two parameters: logits and labels.
Is the logits parameter the predicted value of y? How can I pass this value before I compile the model? I cannot predict y before I compile and fit the model, right?
This is my code:
import tensorflow as tf
from tensorflow import keras
model = keras.Sequential([keras.layers.Dense(50, activation='tanh', input_shape=[100]),
keras.layers.Dense(30, activation='relu'),
keras.layers.Dense(50, activation='tanh'),
keras.layers.Dense(100, activation='relu'),
keras.layers.Dense(8)])
model.compile(optimizer='rmsprop',
              loss=tf.nn.sigmoid_cross_entropy_with_logits(logits=y_pred, labels=y), # <--- How to figure out y_pred here?
              metrics=['accuracy'])
model.fit(x, y, epochs=10, batch_size=32)
y_pred = model.predict(x) # <--- Now I got y_pred after compile, fit and predict
I'm using tensorflow v2.1.0
These arguments (labels and logits) are passed to the loss function within Keras' implementation. To make your code work, do it like this:
import numpy as np
import tensorflow as tf
from tensorflow import keras

def loss_fn(y_true, y_pred):
    return tf.nn.sigmoid_cross_entropy_with_logits(labels=y_true, logits=y_pred)
model = keras.Sequential([keras.layers.Dense(50, activation='tanh', input_shape=[100]),
keras.layers.Dense(30, activation='relu'),
keras.layers.Dense(50, activation='tanh'),
keras.layers.Dense(100, activation='relu'),
keras.layers.Dense(8)])
model.compile(optimizer='rmsprop',
loss=loss_fn,
metrics=['accuracy'])
x = np.random.normal(0, 1, (64, 100))
y = np.random.randint(0, 2, (64, 8)).astype('float32')
model.fit(x, y, epochs=10, batch_size=32)
y_pred = model.predict(x)
The suggested way, though, is to use Keras' loss implementation instead. In your case it would be:
model = keras.Sequential([keras.layers.Dense(50, activation='tanh', input_shape=[100]),
keras.layers.Dense(30, activation='relu'),
keras.layers.Dense(50, activation='tanh'),
keras.layers.Dense(100, activation='relu'),
keras.layers.Dense(8)])
model.compile(optimizer='rmsprop',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
x = np.random.normal(0, 1, (64, 100))
y = np.random.randint(0, 2, (64, 8)).astype('float32')
model.fit(x, y, epochs=10, batch_size=32)
y_pred = model.predict(x)
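One follow-up note on the from_logits=True version: because the last Dense layer has no activation, model.predict returns raw logits, so apply a sigmoid yourself if you want per-label probabilities or hard 0/1 predictions, for example:
# Convert the logits from model.predict into per-label probabilities / hard labels
probs = tf.sigmoid(model.predict(x)).numpy()
hard_labels = (probs > 0.5).astype('float32')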

How to minimize the loss?

This should be a regression problem.
I would like the Neural Network to be able to estimate the length of a line, in pixels, from an image. Here are 3 example images, each 200 x 200 pixels:
[example images a), b), c)]
There are 6000 training images and 1000 validation images.
The labels are the distances in pixels:
a) 1.205404496424333018e+02
b) 1.188780888137086436e+02
c) 1.110180165558725918e+02
Here is my training code:
img_size = 200
def preprocess_image(image):
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, [img_size, img_size])
    image /= 255.0  # normalize to [0,1] range
    return image

def load_and_preprocess_image(path):
    image = tf.read_file(path)
    return preprocess_image(image)
AUTOTUNE = tf.data.experimental.AUTOTUNE
BATCH_SIZE = 16
train_labels = np.loadtxt("train_labels.txt")
val_labels = np.loadtxt("test_labels.txt")
train_images = sorted(glob.glob("train_img/img_*.jpg"))
val_images = sorted(glob.glob("test_img/img_*.jpg"))
steps_per_epoch_count=tf.ceil(len(train_images)/BATCH_SIZE)
train_path_ds = tf.data.Dataset.from_tensor_slices(train_images)
val_path_ds = tf.data.Dataset.from_tensor_slices(val_images)
train_image_ds = train_path_ds.map(load_and_preprocess_image,
                                   num_parallel_calls=AUTOTUNE)
train_label_ds = tf.data.Dataset.from_tensor_slices(tf.cast(train_labels, tf.float32))
train_image_label_ds = tf.data.Dataset.zip((train_image_ds,
                                            train_label_ds))
val_image_ds = val_path_ds.map(load_and_preprocess_image,
num_parallel_calls = AUTOTUNE)
val_label_ds = tf.data.Dataset.from_tensor_slices(tf.cast(val_labels, tf.float32))
val_image_label_ds = tf.data.Dataset.zip((val_image_ds, val_label_ds))
model = tf.keras.models.Sequential([
tf.keras.layers.Convolution2D(16,3,3, input_shape=(img_size,
img_size, 3), activation = 'relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2,2)),
tf.keras.layers.Convolution2D(32,3,3, activation = 'relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2,2)),
# tf.keras.layers.Convolution2D(64,3,3, activation = 'relu'),
# tf.keras.layers.MaxPooling2D(pool_size=(2,2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(400, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(200, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(100, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.05),
tf.keras.layers.Dense(1, activation=tf.nn.relu)
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.01),
loss = "mean_squared_error",
metrics = ["mean_absolute_error", "mean_squared_error"]
)
train_ds = train_image_label_ds.apply(tf.data.experimental.shuffle_and_repeat(buffer_size=len(train_images)))
train_ds = train_ds.batch(BATCH_SIZE)
train_ds = train_ds.prefetch(buffer_size=AUTOTUNE)
val_ds = val_image_label_ds.apply(
tf.data.experimental.shuffle_and_repeat(buffer_size=len(val_images)))
val_ds = val_ds.batch(BATCH_SIZE)
val_ds = val_ds.prefetch(buffer_size=AUTOTUNE)
history = model.fit(
train_ds,
epochs = 80,
validation_data = val_ds,
steps_per_epoch = 374,
validation_steps = 62
)
However, this is the train vs eval mean_squared_error plot:
Question:
Why is the validation loss not stable?
The average mean squared error is about 400 in training, which seems too high. What modifications can I make to improve the estimation?
EDIT:
This is my latest model:
Learning rate = 0.01
Batch size = 16
model = tf.keras.models.Sequential([
tf.keras.layers.Convolution2D(16,3,3, input_shape=(img_size, img_size, 3), activation = 'relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2,2)),
tf.keras.layers.Convolution2D(32,3,3, activation = 'relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2,2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(2, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(2, activation=tf.nn.relu), #, kernel_regularizer = tf.keras.regularizers.l2(0.001)
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(2, activation=tf.nn.relu), #, kernel_regularizer = tf.keras.regularizers.l2(0.001)
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(2, activation=tf.nn.relu), #, kernel_regularizer = tf.keras.regularizers.l2(0.001)
tf.keras.layers.Dense(1, activation="linear")
])
The output looks like this:
As you can see, the train and validation losses are almost identical. The MSE for both stabilizes around 2393, whose square root corresponds to an error of about 48.91 pixels, which is quite high.
Any advice on how to lower it further? Is this normal?

Training the same model with Keras Model API and TensorFlow Estimator API gives different accuracies

I've been experimenting with TensorFlow's higher-level APIs recently and got some strange results: when I train a seemingly identical model with the same hyperparameters using the Keras Model API and the TensorFlow Estimator API, I get different results (using Keras leads to ~4% higher accuracy).
Here's my code:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, Conv2D, MaxPooling2D, GlobalAveragePooling2D, BatchNormalization, Activation, Flatten
from tensorflow.keras.initializers import VarianceScaling
from tensorflow.keras.optimizers import Adam
# Load CIFAR-10 dataset and normalize pixel values
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.cifar10.load_data()
X_train = np.array(X_train, dtype=np.float32)
y_train = np.array(y_train, dtype=np.int32).reshape(-1)
X_test = np.array(X_test, dtype=np.float32)
y_test = np.array(y_test, dtype=np.int32).reshape(-1)
mean = X_train.mean(axis=(0, 1, 2), keepdims=True)
std = X_train.std(axis=(0, 1, 2), keepdims=True)
X_train = (X_train - mean) / std
X_test = (X_test - mean) / std
y_train_one_hot = tf.keras.utils.to_categorical(y_train, num_classes=10)
y_test_one_hot = tf.keras.utils.to_categorical(y_test, num_classes=10)
# Define forward pass for a convolutional neural network.
# This function takes a batch of images as input and returns
# unscaled class scores (aka logits) from the last layer
def conv_net(X):
    initializer = VarianceScaling(scale=2.0)
    X = Conv2D(filters=32, kernel_size=3, padding='valid', activation='relu', kernel_initializer=initializer)(X)
    X = BatchNormalization()(X)
    X = Conv2D(filters=64, kernel_size=3, padding='valid', activation='relu', kernel_initializer=initializer)(X)
    X = BatchNormalization()(X)
    X = MaxPooling2D()(X)
    X = Conv2D(filters=64, kernel_size=3, padding='valid', activation='relu', kernel_initializer=initializer)(X)
    X = BatchNormalization()(X)
    X = Conv2D(filters=128, kernel_size=3, padding='valid', activation='relu', kernel_initializer=initializer)(X)
    X = BatchNormalization()(X)
    X = Conv2D(filters=256, kernel_size=3, padding='valid', activation='relu', kernel_initializer=initializer)(X)
    X = BatchNormalization()(X)
    X = GlobalAveragePooling2D()(X)
    X = Dense(10)(X)
    return X
# For training this model I use the Adam optimizer with learning_rate=3e-3
# Train the model for 10 epochs using keras.Model API
def keras_model():
    inputs = Input(shape=(32,32,3))
    scores = conv_net(inputs)
    outputs = Activation('softmax')(scores)
    model = Model(inputs=inputs, outputs=outputs)
    model.compile(optimizer=Adam(lr=3e-3),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
model1 = keras_model()
model1.fit(X_train, y_train_one_hot, batch_size=128, epochs=10)
results1 = model1.evaluate(X_test, y_test_one_hot)
print(results1)
# The above usually gives 79-82% accuracy
# Now train the same model for 10 epochs using tf.estimator.Estimator API
train_input_fn = tf.estimator.inputs.numpy_input_fn(x={'X': X_train}, y=y_train, \
batch_size=128, num_epochs=10, shuffle=True)
test_input_fn = tf.estimator.inputs.numpy_input_fn(x={'X': X_test}, y=y_test, \
batch_size=128, num_epochs=1, shuffle=False)
def tf_estimator(features, labels, mode, params):
    X = features['X']
    scores = conv_net(X)
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions={'scores': scores})
    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=scores, labels=labels)
    metrics = {'accuracy': tf.metrics.accuracy(labels=labels, predictions=tf.argmax(scores, axis=-1))}
    optimizer = tf.train.AdamOptimizer(learning_rate=params['lr'], epsilon=params['epsilon'])
    step = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode=mode, loss=tf.reduce_mean(loss), train_op=step, eval_metric_ops=metrics)
model2 = tf.estimator.Estimator(model_fn=tf_estimator, params={'lr': 3e-3, 'epsilon': tf.keras.backend.epsilon()})
model2.train(input_fn=train_input_fn)
results2 = model2.evaluate(input_fn=test_input_fn)
print(results2)
# This usually gives 75-78% accuracy
print('Keras accuracy:', results1[1])
print('Estimator accuracy:', results2['accuracy'])
I've trained both models 30 times, for 10 epochs each time: the mean accuracy of the model trained with Keras is 0.8035 and the mean accuracy of the model trained with Estimator is 0.7631 (standard deviations are 0.0065 and 0.0072 respectively). Accuracy is significantly higher if I use Keras. My question is: why is this happening? Am I doing something wrong or missing some important parameters? The architecture of the model is the same in both cases and I'm using the same hyperparameters (I've even set Adam's epsilon to the same value, although it doesn't really affect the overall result), but the accuracies are significantly different.
I also wrote a training loop using raw TensorFlow and got the same accuracy as with the Estimator API (lower than I get with Keras). It made me think that the default value of some parameter in Keras is different from TensorFlow's, but they all actually seem to be the same.
I have also tried other architectures and sometimes I got a smaller difference in accuracies, but I wasn't able to find any particular layer type that causes the difference. It looks like the difference often becomes smaller if I use a shallower network, but not always. For example, the difference in accuracies is even slightly bigger with the following model:
def simple_conv_net(X):
    initializer = VarianceScaling(scale=2.0)
    X = Conv2D(filters=32, kernel_size=5, strides=2, padding='valid', activation='relu', kernel_initializer=initializer)(X)
    X = BatchNormalization()(X)
    X = Conv2D(filters=64, kernel_size=3, strides=1, padding='valid', activation='relu', kernel_initializer=initializer)(X)
    X = BatchNormalization()(X)
    X = Conv2D(filters=64, kernel_size=3, strides=1, padding='valid', activation='relu', kernel_initializer=initializer)(X)
    X = BatchNormalization()(X)
    X = Flatten()(X)
    X = Dense(10)(X)
    return X
Again, I've trained it for 10 epochs 30 times using Adam optimizer with 3e-3 learning rate. Mean accuracy with Keras is 0.6561 and mean accuracy with Estimator is 0.6101 (standard deviations are 0.0084 and 0.0111 respectively). What can be causing such a difference?