I want to train on my local GPU, but training only runs on the CPU even though torch.cuda.is_available() returns True and I can see my GPU. How can I fix this?
My CNN model:
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

# define the CNN architecture
class Net(nn.Module):
    ### TODO: choose an architecture, and complete the class
    def __init__(self):
        super(Net, self).__init__()
        ## Define layers of a CNN
        # convolutional layer (sees the 224x224x3 image tensor)
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        # convolutional layer (sees a 112x112x16 tensor after pooling)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        # convolutional layer (sees a 56x56x32 tensor after pooling)
        self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
        # max pooling layer
        self.pool = nn.MaxPool2d(2, 2)
        # linear layer (64 * 28 * 28 -> 500)
        self.fc1 = nn.Linear(64 * 28 * 28, 500)
        # linear layer (500 -> 133)
        self.fc2 = nn.Linear(500, 133)
        # dropout layer (p=0.25)
        self.dropout = nn.Dropout(0.25)

    def forward(self, x):
        ## Define forward behavior
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.pool(F.relu(self.conv3(x)))
        # flatten image input
        x = x.view(-1, 64 * 28 * 28)
        # add dropout layer
        x = self.dropout(x)
        # add 1st hidden layer, with relu activation function
        x = F.relu(self.fc1(x))
        # add dropout layer
        x = self.dropout(x)
        # add final output layer
        x = self.fc2(x)
        return x

#-#-# You do NOT have to modify the code below this line. #-#-#

# instantiate the CNN
model_scratch = Net()

# move tensors to GPU if CUDA is available
if use_cuda:
    print("TRUE")
    model_scratch = model_scratch.cuda()
My train function:
def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
    """returns trained model"""
    # initialize tracker for minimum validation loss
    valid_loss_min = np.Inf

    for epoch in range(1, n_epochs + 1):
        # initialize variables to monitor training and validation loss
        train_loss = 0.0
        valid_loss = 0.0

        ###################
        # train the model #
        ###################
        model.train()
        for batch_idx, (data, target) in enumerate(loaders['train']):
            # move to GPU
            if use_cuda:
                data, target = data.cuda(), target.cuda()
            # clear the gradients of all optimized variables
            optimizer.zero_grad()
            # forward pass: compute predicted outputs by passing inputs to the model
            output = model(data)
            # calculate the batch loss
            loss = criterion(output, target)
            # backward pass: compute gradient of the loss with respect to model parameters
            loss.backward()
            # perform a single optimization step (parameter update)
            optimizer.step()
            # update training loss
            train_loss += loss.item() * data.size(0)

        ######################
        # validate the model #
        ######################
        model.eval()
        for batch_idx, (data, target) in enumerate(loaders['valid']):
            # move to GPU
            if use_cuda:
                data, target = data.cuda(), target.cuda()
            # forward pass and batch loss
            output = model(data)
            loss = criterion(output, target)
            # update average validation loss
            valid_loss += loss.item() * data.size(0)

        # calculate average losses
        train_loss = train_loss / len(loaders['train'].dataset)
        valid_loss = valid_loss / len(loaders['valid'].dataset)

        # print training/validation statistics
        print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
            epoch, train_loss, valid_loss))

        # save the model if validation loss has decreased
        if valid_loss <= valid_loss_min:
            print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
                valid_loss_min, valid_loss))
            torch.save(model.state_dict(), save_path)
            valid_loss_min = valid_loss

    # return trained model
    return model


# train the model
loaders_scratch = {'train': train_loader, 'valid': valid_loader, 'test': test_loader}
model_scratch = train(100, loaders_scratch, model_scratch, optimizer_scratch,
                      criterion_scratch, use_cuda, 'model_scratch.pt')

# load the model that got the best validation accuracy
model_scratch.load_state_dict(torch.load('model_scratch.pt'))
Even though torch.cuda.is_available() returns True, training still does not run on the GPU; everything runs on the CPU.
The screenshot below shows the CPU at 62% utilization during training.
To utilize CUDA in PyTorch you have to explicitly place your model and tensors on the GPU device.
A line of code like:
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
determines whether CUDA is available and, if so, selects it as your device.
Later in the code you have to move your model and tensors to this device:
net = net.to(device)
and do the same for the other tensors that need to go to the GPU, such as your training and test data.
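As a minimal sketch of how this pattern could look with the names from the question (model_scratch, loaders_scratch, criterion_scratch, optimizer_scratch), every batch has to be moved to the same device as the model inside the loop:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_scratch = model_scratch.to(device)
print(next(model_scratch.parameters()).device)  # quick check: should print cuda:0

for data, target in loaders_scratch['train']:
    # move each batch to the same device as the model
    data, target = data.to(device), target.to(device)
    optimizer_scratch.zero_grad()
    output = model_scratch(data)
    loss = criterion_scratch(output, target)
    loss.backward()
    optimizer_scratch.step()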
If your model only keeps the CPU busy during training even though the GPU is available, the bottleneck is often the data loading and transformation pipeline: loading images from your local directory and applying transforms is done on the CPU, so most of the training time is spent there while the GPU waits for batches.
To resolve this, you can preprocess your data by applying your custom transforms once and saving the results. When you then load the preprocessed data during training, the GPU no longer has to wait for the CPU-side transforms, which can significantly reduce training time.
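A minimal sketch of this caching idea, assuming an existing train_dataset that already applies the transforms (the file name cached_train.pt is just an example):
import torch
from torch.utils.data import DataLoader, TensorDataset

# run the (CPU-heavy) transforms once and cache the tensors
images, labels = [], []
for img, label in train_dataset:   # transforms are applied here
    images.append(img)
    labels.append(label)
torch.save((torch.stack(images), torch.tensor(labels)), 'cached_train.pt')

# later: load the preprocessed tensors and train from memory
cached_images, cached_labels = torch.load('cached_train.pt')
train_loader = DataLoader(TensorDataset(cached_images, cached_labels),
                          batch_size=64, shuffle=True)
Increasing the DataLoader's num_workers argument is another common way to hide the CPU-side loading cost.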
I saw the DB-VAE code, written in TensorFlow, in which the authors sample the data for each batch from the whole training set according to a distribution:
All of the code can be found at: https://github.com/aamini/introtodeeplearning/blob/master/lab2/solutions/Part2_Debiasing_Solution.ipynb
# The training loop -- outer loop iterates over the number of epochs
for i in range(num_epochs):
    p_gep_pos = get_training_sample_probabilities(geps=all_gep_pos, dbvae=dbvae, latent_dim=latent_dim)

    # get a batch of training data and compute the training step
    loss = 0
    class_loss = 0
    for j in tqdm(range(loader.get_train_size() // batch_size)):
        # load a batch of data
        (x, y) = loader.get_batch(batch_size, p_pos=p_gep_pos)  # also gets some negative samples  <-- here
        # loss optimization
        loss = debiasing_train_step(x, y, optimizer=optimizer, dbvae=dbvae)
I want to use this sampling method in my own project. My question is whether it is possible to add it through a custom callback (https://www.tensorflow.org/guide/keras/custom_callback) within the training loop, so that I can still use the compile and fit functions of a standard keras.Model.
Something like:
class CustomCallback(keras.callbacks.Callback):
    def on_train_batch_begin(self, batch, logs=None):
        keys = list(logs.keys())
        print("...Training: start of batch {}; got log keys: {}".format(batch, keys))
        # I don't know how to get the data for the current batch and replace it with sampled data
        self.x, self.y = loader.get_batch(batch_size, p_pos=p_gep_pos)
        return self.x, self.y

model = MyModel()
model.compile(...)
model.fit(x, y, callbacks=CustomCallback())  # <-- add it here
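For reference, a minimal sketch of what such an integration could look like without a callback, by feeding a Python generator to fit instead of fixed arrays; this assumes the loader.get_batch and get_training_sample_probabilities helpers from the linked notebook and is not taken from it:
def sampled_batches(loader, dbvae, batch_size, latent_dim, steps_per_epoch):
    # yield (x, y) batches drawn with the debiasing probabilities
    while True:
        # recompute the sampling probabilities once per epoch
        p_pos = get_training_sample_probabilities(geps=all_gep_pos, dbvae=dbvae, latent_dim=latent_dim)
        for _ in range(steps_per_epoch):
            yield loader.get_batch(batch_size, p_pos=p_pos)

steps = loader.get_train_size() // batch_size
model.fit(sampled_batches(loader, dbvae, batch_size, latent_dim, steps),
          steps_per_epoch=steps, epochs=num_epochs)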
I have a server with two Nvidia GTX 1080 GPUs (driver 384.111). Currently, I am training two larger CNN models in parallel, one on each GPU (TensorFlow backend), by adapting the TF config and passing the session to Keras:
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.visible_device_list = "0"
set_session(tf.Session(config=config))
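For the second model, the same configuration presumably runs in a separate process pinned to the other GPU, e.g.:
config = tf.ConfigProto()
config.gpu_options.visible_device_list = "1"
set_session(tf.Session(config=config))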
I noticed that, for one GPU, utilization stays between 95% and 100% (let's call this GPU A), which is normal. However, on some days(!), the utilization of the other GPU (B) fluctuates heavily between 0% and 100%, resulting in slow model training (I made sure the GPU is really being used, so I am not unintentionally running on the CPU). Power usage also seems rather low and fluctuates with the utilization rate. Memory is allocated properly on both GPUs. The temperature is at 80 degrees Celsius for the well-performing GPU and 90 degrees Celsius for the other (they are mounted sub-optimally, so GPU A also heats up GPU B a bit, hence the temperature difference).
What could be causing this? TensorFlow? Drivers? PSU (the current one is > 1150)? Temperature? A bad GPU?
--- UPDATE 1 ---
Fluctuating performance does not occur when only using 1 GPU.
Furthermore, this phenomenon recently also occurred the other way around: GPU A was performing poorly while GPU B was doing fine.
Also, I figured it is not a temperature issue as this has also happened when both cards were cold (just tested). Therefore, I guess it boils down to (a) Tensorflow, (b) Drivers, (c) PSU ?
--- UPDATE 2 ---
Here are the functions that do the model assembly (1) and the batching and image augmentation (2, 3). I am feeding a pretrained InceptionV3 model to the make_pretrained_model function and the model returned by that function into train_model. Essentially, one of these models, with a slightly different setup, is running on each GPU.
def make_pretrained_model(model, num_classes, frozen_top_layers=0, lr=0.0001, num_size_dense=[]):
    '''
    Building architecture of pretrained model
    '''
    in_inc = model.output
    gap1 = GlobalAveragePooling2D()(in_inc)
    for size, drop in num_size_dense:
        gap1 = Dense(size, activation='relu')(gap1)
        gap1 = Dropout(drop)(gap1)
    softmax = Dense(num_classes, activation='softmax')(gap1)
    model = Model(inputs=model.input, outputs=softmax)
    if frozen_top_layers > 0:
        for layer in model.layers[:-frozen_top_layers]:
            layer.trainable = True
        for layer in model.layers[-frozen_top_layers:]:
            layer.trainable = False
    else:
        for layer in model.layers:
            layer.trainable = True
    opt = rmsprop(lr=lr, decay=1e-6)
    model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
    return model
def augment_image_batch(imgdatagen, x, y, batch_size):
    return imgdatagen.flow(x, y, batch_size=batch_size).__next__()
def train_model(model, x_train, y_train, x_test, y_test, train_batch_size, test_batch_size,
                epochs, model_dir, model_name, num_classes, int_str, preprocessing_fun, patience=5, monitor='val_acc'):
    '''
    Training function
    '''
    train_img_datagen = ImageDataGenerator(
        featurewise_center=False,  # also use in test gen if activated AND fit test_gen on train data
        samplewise_center=False,
        featurewise_std_normalization=False,  # also use in test gen if activated AND fit test_gen on train data
        samplewise_std_normalization=False,
        zca_whitening=False,
        zca_epsilon=0,
        rotation_range=0.05,
        width_shift_range=0.05,
        height_shift_range=0.05,
        channel_shift_range=0,
        fill_mode='nearest',
        cval=0,
        vertical_flip=False,
        preprocessing_function=preprocessing_fun,
        shear_range=0.,
        zoom_range=0.,
        horizontal_flip=False)
    val_img_datagen = ImageDataGenerator(
        preprocessing_function=preprocessing_fun)
    callbacks = [ModelCheckpoint(MODEL_DIR+model_name+'.h5',
                                 monitor=monitor,
                                 save_best_only=True),
                 EarlyStopping(monitor=monitor, patience=patience),
                 TensorBoard(LOG_DIR+model_name+'_'+str(time())),
                 ReduceLROnPlateau(monitor='val_loss', factor=0.75, patience=2)]

    def train_feat_gen(x_train, y_train, train_batch_size):
        while True:
            for batch in range(len(x_train) // train_batch_size + 1):
                if batch > max(range(len(x_train) // train_batch_size)):
                    x, y = x_train[batch*train_batch_size:].astype(float), y_train[batch*train_batch_size:]
                    yield augment_image_batch(train_img_datagen, x, y, train_batch_size)
                else:
                    x, y = x_train[batch*train_batch_size:(1+batch)*train_batch_size].astype(float), y_train[batch*train_batch_size:(1+batch)*train_batch_size]
                    yield augment_image_batch(train_img_datagen, x, y, train_batch_size)

    def val_feat_gen(x_val, y_val, test_batch_size):
        while True:
            for batch in range(len(x_val) // test_batch_size + 1):
                if batch > max(range(len(x_val) // test_batch_size)):
                    x, y = x_val[batch*test_batch_size:].astype(float), y_val[batch*test_batch_size:]
                    yield augment_image_batch(val_img_datagen, x, y, test_batch_size)
                else:
                    x, y = x_val[batch*test_batch_size:(1+batch)*test_batch_size].astype(float), y_val[batch*test_batch_size:(1+batch)*test_batch_size]
                    yield augment_image_batch(val_img_datagen, x, y, test_batch_size)

    train_gen_new = train_feat_gen(x_train, y_train, train_batch_size)
    val_gen_new = val_feat_gen(x_test, y_test, test_batch_size)
    model.fit_generator(
        train_gen_new,
        steps_per_epoch=len(x_train) // train_batch_size,
        epochs=epochs,
        validation_data=val_gen_new,
        validation_steps=len(y_test) // test_batch_size,
        callbacks=callbacks,
        shuffle=True)
I have a TensorFlow graph that reads from .tfrecords files, following the process described here (taken from the TensorFlow docs):
def read_my_file_format(filename_queue):
    reader = tf.SomeReader()
    key, record_string = reader.read(filename_queue)
    example, label = tf.some_decoder(record_string)
    processed_example = some_processing(example)
    return processed_example, label

def input_pipeline(filenames, batch_size, num_epochs=None):
    filename_queue = tf.train.string_input_producer(
        filenames, num_epochs=num_epochs, shuffle=True)
    example, label = read_my_file_format(filename_queue)
    # min_after_dequeue defines how big a buffer we will randomly sample
    # from -- bigger means better shuffling but slower start up and more
    # memory used.
    # capacity must be larger than min_after_dequeue and the amount larger
    # determines the maximum we will prefetch. Recommendation:
    # min_after_dequeue + (num_threads + a small safety margin) * batch_size
    min_after_dequeue = 10000
    capacity = min_after_dequeue + 3 * batch_size
    example_batch, label_batch = tf.train.shuffle_batch(
        [example, label], batch_size=batch_size, capacity=capacity,
        min_after_dequeue=min_after_dequeue)
    return example_batch, label_batch
In my code, a single batch (as returned by input_pipeline above) is used as the input to multiple networks (let's call them A and B) in my graph in each iteration. So if I call:
#...define graph...
sess.run([A,B])
does TensorFlow guarantee that it uses the same batch for both networks in each sess.run call?
If the input of models A and B is example_batch and you evaluate the models simultaneously (as in your example sess.run([A,B])), then I expect them to see the same batch, because both models are fed by the same dequeuing operation. As soon as you break that synchronization (i.e., run them separately), the inputs will differ.
The following code snippet is trivial but illustrates the point.
import tensorflow as tf
import numpy as np
import time

batch_size = 16
input_shape, target_shape = (128), ()  # input with dimensionality 128
num_threads = 4      # for input pipeline
queue_capacity = 10  # for input pipeline

def get_random_data_sample():
    # Random inputs and targets
    np_input = np.float32(np.random.normal(0, 1, input_shape))
    np_target = np.int32(1)
    # Sleep randomly between 1 and 3 seconds.
    # time.sleep(np.random.randint(1,3,1)[0])
    return np_input, np_target

tensorflow_input, tensorflow_target = tf.py_func(get_random_data_sample, [], [tf.float32, tf.int32])

def create_model(inputs, hidden_size, num_hidden_layers):
    # Create a dummy dense network.
    dense_layer = inputs
    for i in range(num_hidden_layers):
        dense_layer = tf.layers.dense(
            inputs=dense_layer,
            units=hidden_size,
            kernel_initializer=tf.zeros_initializer(),
            bias_initializer=tf.zeros_initializer(),
            activation=None,
            use_bias=True,
            reuse=False)
    return dense_layer, inputs

# input pipeline
batch_inputs, batch_targets = tf.train.batch([tensorflow_input, tensorflow_target],
                                             batch_size=batch_size,
                                             num_threads=num_threads,
                                             shapes=[input_shape, target_shape],
                                             capacity=queue_capacity)

# Different models A and B using the same input operation.
modelA, modelA_inputs = create_model(batch_inputs, 32, 1)  # 1 hidden layer
modelB, modelB_inputs = create_model(batch_inputs, 64, 2)  # 2 hidden layers

sess = tf.InteractiveSession()
tf.train.start_queue_runners()
sess.run(tf.group(tf.global_variables_initializer(), tf.local_variables_initializer()))
# (1) Evaluate the models simultaneously.
resultA, resultB, inputsA, inputsB = sess.run([modelA, modelB, modelA_inputs, modelB_inputs])
assert((inputsA == inputsB).all())
# (2) Evaluate the models separately.
resultA2, inputsA2 = sess.run([modelA, modelA_inputs])
resultB2, inputsB2 = sess.run([modelB, modelB_inputs])
assert((inputsA2 == inputsB2).all())
Naturally, the second evaluation uses different input batches and the assertion fails. I hope this helps.
I am new to distributed TensorFlow and am looking for a good example of synchronous training on CPUs.
I have already tried the Distributed Tensorflow Example and it performs asynchronous training successfully over 1 parameter server (1 machine with 1 CPU) and 3 workers (each worker = 1 machine with 1 CPU). However, I cannot get synchronous training to run correctly, although I have followed the tutorials for SyncReplicasOptimizer (V1.0 and V2.0).
I inserted the official SyncReplicasOptimizer code into the working asynchronous training example, but the training process is still asynchronous. My detailed code is below. Any code related to synchronous training is within the blocks marked with ******.
import tensorflow as tf
import sys
import time

# cluster specification ----------------------------------------------------------------------
parameter_servers = ["xx1.edu:2222"]
workers = ["xx2.edu:2222", "xx3.edu:2222", "xx4.edu:2222"]
cluster = tf.train.ClusterSpec({"ps": parameter_servers, "worker": workers})

# input flags
tf.app.flags.DEFINE_string("job_name", "", "Either 'ps' or 'worker'")
tf.app.flags.DEFINE_integer("task_index", 0, "Index of task within the job")
FLAGS = tf.app.flags.FLAGS

# start a server for a specific task
server = tf.train.Server(cluster, job_name=FLAGS.job_name, task_index=FLAGS.task_index)

# Parameters ----------------------------------------------------------------------
N = 3                          # number of replicas
learning_rate = 0.001
training_epochs = int(21/N)
batch_size = 100

# Network Parameters
n_input = 784      # MNIST data input (img shape: 28*28)
n_hidden_1 = 256   # 1st layer number of features
n_hidden_2 = 256   # 2nd layer number of features
n_classes = 10     # MNIST total classes (0-9 digits)

if FLAGS.job_name == "ps":
    print("--- Parameter Server Ready ---")
    server.join()
elif FLAGS.job_name == "worker":
    # Import MNIST data
    from tensorflow.examples.tutorials.mnist import input_data
    mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

    # Between-graph replication
    with tf.device(tf.train.replica_device_setter(
            worker_device="/job:worker/task:%d" % FLAGS.task_index,
            cluster=cluster)):

        # count the number of updates
        global_step = tf.get_variable('global_step', [],
                                      initializer=tf.constant_initializer(0),
                                      trainable=False,
                                      dtype=tf.int32)

        # tf Graph input
        x = tf.placeholder("float", [None, n_input])
        y = tf.placeholder("float", [None, n_classes])

        # Create model
        def multilayer_perceptron(x, weights, biases):
            # Hidden layer with RELU activation
            layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
            layer_1 = tf.nn.relu(layer_1)
            # Hidden layer with RELU activation
            layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
            layer_2 = tf.nn.relu(layer_2)
            # Output layer with linear activation
            out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
            return out_layer

        # Store layers weight & bias
        weights = {
            'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
            'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
            'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
        }
        biases = {
            'b1': tf.Variable(tf.random_normal([n_hidden_1])),
            'b2': tf.Variable(tf.random_normal([n_hidden_2])),
            'out': tf.Variable(tf.random_normal([n_classes]))
        }

        # Construct model
        pred = multilayer_perceptron(x, weights, biases)

        # Define loss and optimizer
        cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))

        # ************************* SyncReplicasOpt Version 1.0 *************************
        ''' This optimizer collects gradients from all replicas, "summing" them,
        then applying them to the variables in one shot, after which replicas can fetch
        the new variables and continue. '''
        # Create any optimizer to update the variables, say a simple SGD
        opt = tf.train.AdamOptimizer(learning_rate=learning_rate)

        # Wrap the optimizer with sync_replicas_optimizer with N replicas:
        # at each step the optimizer collects N gradients before applying them to the variables.
        opt = tf.train.SyncReplicasOptimizer(opt, replicas_to_aggregate=N,
                                             replica_id=FLAGS.task_index, total_num_replicas=N)

        # Now you can call `minimize()` or `compute_gradients()` and `apply_gradients()` normally
        train = opt.minimize(cost, global_step=global_step)

        # You can now call get_init_tokens_op() and get_chief_queue_runner().
        # Note that get_init_tokens_op() must be called before creating session
        # because it modifies the graph.
        init_token_op = opt.get_init_tokens_op()
        chief_queue_runner = opt.get_chief_queue_runner()
        # ********************************************************************************

        # Test model
        correct = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, "float"))

        # Initializing the variables
        init_op = tf.initialize_all_variables()
        print("---Variables initialized---")

    # ********************************************************************************
    is_chief = (FLAGS.task_index == 0)
    # Create a "supervisor", which oversees the training process.
    sv = tf.train.Supervisor(is_chief=is_chief,
                             logdir="/tmp/train_logs",
                             init_op=init_op,
                             global_step=global_step,
                             save_model_secs=600)
    # ********************************************************************************

    with sv.prepare_or_wait_for_session(server.target) as sess:
        # ********************************************************************************
        # After the session is created by the Supervisor and before the main loop:
        if is_chief:
            sv.start_queue_runners(sess, [chief_queue_runner])
            # Insert initial tokens to the queue.
            sess.run(init_token_op)
        # ********************************************************************************

        # Statistics
        net_train_t = 0

        # Training
        for epoch in range(training_epochs):
            total_batch = int(mnist.train.num_examples/batch_size)
            # Loop over all batches
            for i in range(total_batch):
                batch_x, batch_y = mnist.train.next_batch(batch_size)
                # ======== net training time ========
                begin_t = time.time()
                sess.run(train, feed_dict={x: batch_x, y: batch_y})
                end_t = time.time()
                net_train_t += (end_t - begin_t)
                # ===================================
            # Calculate training accuracy
            # acc = sess.run(accuracy, feed_dict={x: mnist.train.images, y: mnist.train.labels})
            # print("Epoch:", '%04d' % (epoch+1), " Train Accuracy =", acc)
            print("Epoch:", '%04d' % (epoch+1))

        print("Training Finished!")
        print("Net Training Time: ", net_train_t, "second")

        # Testing
        print("Testing Accuracy = ", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

    sv.stop()
    print("done")
Is anything wrong with my code? Or can someone point me to a good example to follow?
I think your question is answered by the comments in TensorFlow issue #9596.
The problem is caused by bugs in the new version of tf.train.SyncReplicasOptimizer(). You can use the old version of this API to avoid the problem.
Another solution comes from the TensorFlow distributed benchmarks. If you look at the source code, you will see that they synchronize workers manually through queues in TensorFlow. In my experiments, this benchmark behaves exactly as you would expect.
I hope these comments and resources help you solve your problem. Thanks!
I am not sure whether you would be interested in user-transparent distributed TensorFlow, which uses MPI in the backend. We have recently developed one such version with MaTEx: https://github.com/matex-org/matex.
With it, you would not need to write SyncReplicasOptimizer code for distributed TensorFlow, since all of those changes are abstracted away from the user.
Hope this helps.
One issue is that you need to specify an aggregation_method in the minimize method for it to run synchronously:
train = opt.minimize(cost, global_step=global_step, aggregation_method=tf.AggregationMethod.ADD_N)
I am trying to build a TensorFlow example of a simple multi-layer perceptron (MLP) with one hidden layer. However, when I tested it and compared it with other software, e.g. Kaldi nnet1, the convergence during training is not efficient, or at least not comparable to Kaldi nnet1. I tried my best to keep all the parameters the same (input, targets, batch size, learning rate, etc.), but I am still unsure where the difference comes from. Some pieces of the code are as follows:
Initialization:
self.weight = [tf.Variable(tf.truncated_normal([440, 8192], stddev=0.1))]
self.bias = [tf.Variable(tf.constant(0.01, shape=[8192]))]
self.weight.append(tf.Variable(tf.truncated_normal([8192, 8], stddev=0.1)))
self.bias.append(tf.Variable(tf.constant(0.01, shape=[8])))
self.act = [tf.nn.sigmoid(tf.matmul(self.input, self.weight[0]) + self.bias[0])]
self.nn_out = tf.matmul(self.act[0], self.weight[1]) + self.bias[1]
self.nn_softmax = tf.nn.softmax(self.nn_out)
self.nn_tgt = tf.placeholder("int64", shape=[None,])
self.cost_mean = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(self.nn_out, self.nn_tgt))
self.train_step = tf.train.GradientDescentOptimizer(self.learn_rate).minimize(self.cost_mean)
# saver
self.saver = tf.train.Saver()
self.sess = tf.Session()
self.sess.run(tf.initialize_all_variables())
Training:
for epoch in xrange(20):
    feats_tr, tgts_tr = shuffle(feats_tr, tgts_tr, random_state=777)
    # restore the existing model
    ckpt = tf.train.get_checkpoint_state(ckpt_dir)
    if ckpt and ckpt.model_checkpoint_path:
        self.load(ckpt.model_checkpoint_path)
    # mini-batch training
    tr_loss = []
    for idx_begin in range(0, len(feats_tr), 512):
        idx_end = idx_begin + batch_size
        batch_feats, batch_tgts = feats_tr[idx_begin:idx_end], tgts_tr[idx_begin:idx_end]
        _, loss_val = self.sess.run([self.train_step, self.cost_mean],
                                    feed_dict={self.nn_in: batch_feats,
                                               self.nn_tgt: batch_tgts,
                                               self.learn_rate: learn_rate})
        tr_loss.append(loss_val)
    # cross-validation
    cv_loss = []
    for idx_begin in range(0, len(feats_cv), 512):
        idx_end = idx_begin + batch_size
        batch_feats, batch_tgts = feats_cv[idx_begin:idx_end], tgts_cv[idx_begin:idx_end]
        cv_loss.append(self.sess.run(self.cost_mean,
                                     feed_dict={self.nn_in: batch_feats,
                                                self.nn_tgt: batch_tgts}))
    print("Avg Loss for Training: " + str(np.mean(tr_loss)) +
          " Avg Loss for Validation: " + str(np.mean(cv_loss)))
    # save model per epoch if np.mean(cv_loss) is less than the previous best
    if (epoch+1) % 1 == 0:
        if loss_new < loss:
            loss = loss_new
            print("Model accepted in epoch %d" % (epoch+1))
            # save model to ckpt_dir with mdl_nam
            self.saver.save(self.sess, mdl_nam, global_step=epoch+1)
        else:
            print("Model rejected in epoch %d" % (epoch+1))
I also implemented a simple annealing learning-rate control: if the average cross-validation loss does not improve by a certain threshold, the 'learn_rate' is halved, starting from an initial value of 0.008.
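A minimal sketch of that halving rule, using hypothetical prev_cv_loss and threshold variables that do not appear in the actual code:
threshold = 0.001            # hypothetical improvement threshold
prev_cv_loss = float('inf')
learn_rate = 0.008

def anneal(cv_loss, prev_cv_loss, learn_rate):
    # halve the learning rate if the CV loss did not improve enough
    if prev_cv_loss - cv_loss < threshold:
        learn_rate *= 0.5
    return learn_rate, min(cv_loss, prev_cv_loss)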
I checked all the parameters against Kaldi nnet1, and the only remaining difference is the initialization of the weights and biases. I am not sure whether the initialization matters that much. However, the convergence in terms of 'cv_loss' during training in TensorFlow (avg. CV loss 1.99) is not as good as with Kaldi nnet1 (avg. CV loss 0.95). Can someone help point out where I went wrong or what I missed?
Many thanks in advance!
At each epoch, you call self.load(ckpt.model_checkpoint_path), which appears to load previously saved weights.
Your model cannot learn if it is reset to the initial weights at every epoch.
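A minimal sketch of the fix, assuming the checkpoint only needs to be restored once when training starts rather than at every epoch:
# restore once, before the epoch loop, instead of inside it
ckpt = tf.train.get_checkpoint_state(ckpt_dir)
if ckpt and ckpt.model_checkpoint_path:
    self.load(ckpt.model_checkpoint_path)

for epoch in xrange(20):
    feats_tr, tgts_tr = shuffle(feats_tr, tgts_tr, random_state=777)
    # ... mini-batch training and cross-validation as before ...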