I have the pretty common use case of freezing the bottom layers of Inception and training only the top two layers, after which I lower the learning rate and fine-tune the entire Inception model.
Here is my code for running the first part:
train_dir = '/home/ubuntu/pynb/TF play/log-inceptionv3flowers'
with tf.Graph().as_default():
    tf.logging.set_verbosity(tf.logging.INFO)
    dataset = get_dataset()
    images, _, labels = load_batch(dataset, batch_size=32)
    # Create the model, use the default arg scope to configure the batch norm parameters.
    with slim.arg_scope(inception.inception_v3_arg_scope()):
        logits, _ = inception.inception_v3(images, num_classes=5, is_training=True)
    # Specify the loss function:
    one_hot_labels = slim.one_hot_encoding(labels, 5)
    tf.losses.softmax_cross_entropy(one_hot_labels, logits)
    total_loss = tf.losses.get_total_loss()
    # Create some summaries to visualize the training process:
    tf.summary.scalar('losses/Total Loss', total_loss)
    # Specify the optimizer and create the train op:
    optimizer = tf.train.RMSPropOptimizer(0.001, 0.9,
                                          momentum=0.9, epsilon=1.0)
    train_op = slim.learning.create_train_op(total_loss, optimizer, variables_to_train=get_variables_to_train())
    # Run the training:
    final_loss = slim.learning.train(
        train_op,
        logdir=train_dir,
        init_fn=get_init_fn(),
        number_of_steps=4500,
        save_summaries_secs=30,
        save_interval_secs=30,
        session_config=tf.ConfigProto(gpu_options=gpu_options))
    print('Finished training. Last batch loss %f' % final_loss)
This runs properly. Here is my code for running the second part:
train_dir = '/home/ubuntu/pynb/TF play/log-inceptionv3flowers'
with tf.Graph().as_default():
    tf.logging.set_verbosity(tf.logging.INFO)
    dataset = get_dataset()
    images, _, labels = load_batch(dataset, batch_size=32)
    # Create the model, use the default arg scope to configure the batch norm parameters.
    with slim.arg_scope(inception.inception_v3_arg_scope()):
        logits, _ = inception.inception_v3(images, num_classes=5, is_training=True)
    # Specify the loss function:
    one_hot_labels = slim.one_hot_encoding(labels, 5)
    tf.losses.softmax_cross_entropy(one_hot_labels, logits)
    total_loss = tf.losses.get_total_loss()
    # Create some summaries to visualize the training process:
    tf.summary.scalar('losses/Total Loss', total_loss)
    # Specify the optimizer and create the train op:
    optimizer = tf.train.RMSPropOptimizer(0.0001, 0.9,
                                          momentum=0.9, epsilon=1.0)
    train_op = slim.learning.create_train_op(total_loss, optimizer)
    # Run the training:
    final_loss = slim.learning.train(
        train_op,
        logdir=train_dir,
        init_fn=get_init_fn(),
        number_of_steps=10000,
        save_summaries_secs=30,
        save_interval_secs=30,
        session_config=tf.ConfigProto(gpu_options=gpu_options))
    print('Finished training. Last batch loss %f' % final_loss)
Notice that in the second part, I do not pass anything into create_train_op's variables_to_train parameter. This error is then shown
NotFoundError (see above for traceback): Key InceptionV3/Conv2d_4a_3x3/BatchNorm/beta/RMSProp not found in checkpoint
[[Node: save_1/RestoreV2_49 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save_1/Const_0, save_1/RestoreV2_49/tensor_names, save_1/RestoreV2_49/shape_and_slices)]]
[[Node: save_1/Assign_774/_1550 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_2911_save_1/Assign_774", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
I suspect that it's looking for the RMSProp variables for the InceptionV3/Conv2d_4a_3x3 layer, which don't exist because I didn't train that layer in the previous run. I'm not sure how to achieve what I want, as I can see no examples in the documentation on how to do this.
TF Slim has support for reading from a checkpoint whose variable names do not match, described here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/slim/python/slim/learning.py#L146
You can specify how the variable names in a checkpoint map to the variables in your model.
I hope that helps!
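For example, one way to continue from the first-phase checkpoint without tripping over the missing RMSProp slot variables is to build the init_fn so that it restores only the variables that actually exist in that checkpoint and skips the rest. This is only a sketch of that idea (it assumes the phase-1 checkpoint lives in train_dir and is not necessarily what your existing get_init_fn() does); using a fresh logdir for the second phase, so the training loop doesn't try to restore the old checkpoint with the new graph's saver, is part of the same pattern:

def get_init_fn():
    # Restore whatever the phase-1 checkpoint contains and ignore variables
    # it does not have (e.g. the freshly created RMSProp slot variables).
    checkpoint_path = tf.train.latest_checkpoint(train_dir)
    variables_to_restore = slim.get_variables_to_restore()
    return slim.assign_from_checkpoint_fn(
        checkpoint_path,
        variables_to_restore,
        ignore_missing_vars=True)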
Related
I'm trying to run a simple CNN on CIFAR-10, combining code from two examples:
https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/6_MultiGPU/multigpu_cnn.py
https://github.com/exelban/tensorflow-cifar-10
I'm getting OOM errors.
I first tried the code with the complete CNN, without multi-GPU support, and it worked fine. Next I tried the multi-GPU code on its own, which also ran fine.
Combining them is not working.
with tf.device('/cpu:0'):
    tower_grads = []
    reuse_vars = False

    # tf Graph input
    X = tf.placeholder(tf.float32, shape=[None, _IMAGE_SIZE * _IMAGE_SIZE * _IMAGE_CHANNELS], name='Input')
    Y = tf.placeholder(tf.float32, shape=[None, _NUM_CLASSES], name='Output')
    phase = tf.placeholder(tf.bool, name='phase')
    # learning_rate = tf.placeholder(tf.float32, shape=[], name='learning_rate')
    keep_prob = tf.placeholder(tf.float32)
    global_step = tf.get_variable(name='global_step', trainable=False, initializer=0)

    # Loop over all GPUs and construct their own computation graph
    for i in range(_NUM_GPUS):
        with tf.device('/gpu:{}'.format(i)):
            # learning_rate = tf.placeholder(tf.float32, shape=[], name='learning_rate')
            # keep_prob = tf.placeholder(tf.float32)

            # Split data between GPUs
            _x = X[i * _BATCH_SIZE: (i + 1) * _BATCH_SIZE]
            _y = Y[i * _BATCH_SIZE: (i + 1) * _BATCH_SIZE]
            print("x shape:", _x.shape)
            print("y shape:", _y.shape)

            # Because Dropout has different behavior at training and prediction time, we
            # need to create 2 distinct computation graphs that share the same weights.
            _x = tf.reshape(_x, [-1, _IMAGE_SIZE, _IMAGE_SIZE, _IMAGE_CHANNELS], name='images')

            # Create a graph for training
            logits_train, y_pred_cls = feed_net(_x, _NUM_CLASSES, keep_prob, reuse=reuse_vars, is_training=True)
            # Create another graph for testing that reuses the same weights
            logits_test, y_pred_cls = feed_net(_x, _NUM_CLASSES, keep_prob, reuse=True, is_training=False)

            # Define loss and optimizer (with train logits, for dropout to take effect)
            loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits_train, labels=_y))
            optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
            grads = optimizer.compute_gradients(loss_op)

            # Only the first GPU computes accuracy
            if i == 0:
                # Evaluate model (with test logits, for dropout to be disabled)
                correct_pred = tf.equal(tf.argmax(logits_test, 1), tf.argmax(_y, 1))
                accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

            reuse_vars = True
            tower_grads.append(grads)

    tower_grads = average_gradients(tower_grads)
    train_op = optimizer.apply_gradients(tower_grads)
The error happens when running with more than 1 GPU (I have 4), after about 90 iterations (less than one epoch):
ResourceExhaustedError: Ran out of GPU memory when allocating 0 bytes for
[[Node: softmax_cross_entropy_with_logits_sg_3 = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:3"](softmax_cross_entropy_with_logits_sg_3/Reshape, softmax_cross_entropy_with_logits_sg_3/Reshape_1)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[Node: main_params/map/while/Less_1/_206 = _Send[T=DT_BOOL, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1905_main_params/map/while/Less_1", _device="/job:localhost/replica:0/task:0/device:GPU:0"](main_params/map/while/Less_1)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
UPDATE: the problem was with how the data was divided across the GPUs; see the solution below.
Here's the solution:
The problem was with how the data was divided across the GPUs. I used tf.split(X, _NUM_GPUS) for the data and the labels, so that each GPU could be assigned its own chunk of the data. Also, only one GPU computes accuracy, so it needs to get the full-sized data.
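A minimal sketch of that change, using the same names as the snippet above (only the per-tower slicing changes; the rest of the tower construction stays as it was):

# Split the whole input batch into one chunk per GPU up front...
X_split = tf.split(X, _NUM_GPUS)
Y_split = tf.split(Y, _NUM_GPUS)

for i in range(_NUM_GPUS):
    with tf.device('/gpu:{}'.format(i)):
        # ...and hand each tower its own chunk instead of slicing by _BATCH_SIZE.
        _x = X_split[i]
        _y = Y_split[i]
        # ... rest of the tower construction unchanged ...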
I have used the retrain.py script provided in the TensorFlow GitHub repository to fine-tune a pretrained InceptionV3 model on my own dataset. I have the model saved to disk and now I want to use it as the starting point for another round of training in which I retrain all of the convolutional layers. Below is the code I am attempting to use to create the new graph. I thought that once I had the graph loaded into the default graph I could access its variables with tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES) and set them all to trainable. However, there doesn't appear to be anything in any of the tf.GraphKeys collections (GLOBAL_VARIABLES, TRAINABLE_VARIABLES, MODEL_VARIABLES, etc.). So when I try to create the optimizer I get the error "ValueError: No variables to optimize." What am I doing wrong?
def create_graph(model_path, class_count):
    """Creates a graph from saved GraphDef file and returns a saver."""
    with tf.Graph().as_default() as graph:
        # Creates graph from saved graph_def.pb.
        with tf.gfile.FastGFile(model_path, 'rb') as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
            _ = tf.import_graph_def(graph_def, name='')

        # Import logits from the saved graph
        logits_tensor = graph.get_tensor_by_name("final_training_ops/biases/final_biases:0")

        ground_truth_input = tf.placeholder(tf.float32,
                                            [None, class_count],
                                            name='GroundTruthInput')

        print(tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES))

        # Connect logits to the training ops
        with tf.name_scope('cross_entropy'):
            cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
                labels=ground_truth_input, logits=logits_tensor)
            with tf.name_scope('total'):
                cross_entropy_mean = tf.reduce_mean(cross_entropy)
        tf.summary.scalar('cross_entropy', cross_entropy_mean)

        with tf.name_scope('train'):
            optimizer = tf.train.GradientDescentOptimizer(0.001)
            train_step = optimizer.minimize(cross_entropy_mean)

    return graph, ground_truth_input, cross_entropy_mean, train_step
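A stripped-down sketch of the behaviour in question (the .pb path is hypothetical): tf.import_graph_def only adds ops and constants to the graph; it does not recreate tf.Variable objects or repopulate the variable collections, which is why the collection lookups come back empty:

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.FastGFile('/path/to/output_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    # The imported ops are all present...
    print(len(graph.get_operations()))
    # ...but no Variable objects were created, so this prints an empty list.
    print(graph.get_collection(tf.GraphKeys.GLOBAL_VARIABLES))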
I have a working RNN using the default softmax loss function for tf.contrib.seq2seq.sequence_loss() (which I'm assuming is tf.nn.softmax()) but would instead like to use tf.nn.softmax_cross_entropy_with_logits(). According to the seq2seq.sequence_loss documentation, one may use softmax_loss_function= to override the default loss function:
softmax_loss_function: Function (labels, logits) -> loss-batch to be
used instead of the standard softmax (the default if this is None).
Note that to avoid confusion, it is required for the function to
accept named arguments.
Here is my code that works:
from tensorflow.python.layers.core import Dense

# Build the graph
train_graph = tf.Graph()
# Set the graph to default to ensure that it is ready for training
with train_graph.as_default():
    # Load the model inputs
    input_data, targets, keep_prob, lr, target_sequence_length, max_target_sequence_length, source_sequence_length \
        = get_model_inputs()

    # Create the training and inference logits
    training_decoder_output, inference_decoder_output = seq2seq_model(input_data,
                                                                      targets,
                                                                      lr,
                                                                      target_sequence_length,
                                                                      max_target_sequence_length,
                                                                      source_sequence_length,
                                                                      len(source_letter_to_int),
                                                                      len(target_letter_to_int),
                                                                      encoding_embedding_size,
                                                                      decoding_embedding_size,
                                                                      rnn_size,
                                                                      num_layers,
                                                                      keep_prob)

    # Create tensors for the training logits and inference logits
    training_logits = tf.identity(training_decoder_output.rnn_output, 'logits')
    inference_logits = tf.identity(inference_decoder_output.sample_id, name='predictions')

    # Create the weights for sequence_loss
    masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')

    with tf.name_scope("optimization"):
        # Loss function
        cost = tf.contrib.seq2seq.sequence_loss(training_logits, targets, masks)

        # Optimizer
        optimizer = tf.train.AdamOptimizer(lr)

        # Gradient Clipping
        gradients = optimizer.compute_gradients(cost)
        capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None]
        train_op = optimizer.apply_gradients(capped_gradients)

    # Add variables to collection in order to load them up when retraining a saved graph
    tf.add_to_collection("cost", cost)
    tf.add_to_collection("train_op", train_op)
My attempt to change the loss function is as follows (I've only indicated the code that is different):
with tf.name_scope("optimization"):
    # One-hot encode targets and reshape to match logits, one row per batch_size per step
    y_one_hot = tf.one_hot(targets, len(target_letter_to_int))
    y_reshaped = tf.reshape(y_one_hot, [batch_size, len(target_letter_to_int), 30])

    # Loss function
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=training_logits, labels=y_reshaped)
    loss = tf.reduce_mean(loss)
    cost = tf.contrib.seq2seq.sequence_loss(training_logits, targets, masks, softmax_loss_function=loss)
The line cost = tf.contrib.seq2seq.sequence_loss(training_logits, targets, masks, softmax_loss_function=loss) now gives me "TypeError: 'Tensor' object is not callable." This is one of the most opaque errors I've seen TensorFlow produce, and I haven't found much in the way of explanation on the internet. Any help would be appreciated.
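Reading the docstring again, softmax_loss_function appears to expect a callable with that (labels, logits) signature rather than an already-computed loss tensor, which would explain the "not callable" error. A sketch of what I think such a wrapper would look like (the one-hot handling is my assumption about how the flattened integer labels arrive):

def softmax_loss(labels, logits):
    # labels: flattened integer ids; logits: [batch * time, num_classes]
    one_hot = tf.one_hot(labels, depth=tf.shape(logits)[-1])
    return tf.nn.softmax_cross_entropy_with_logits(labels=one_hot, logits=logits)

cost = tf.contrib.seq2seq.sequence_loss(training_logits, targets, masks,
                                        softmax_loss_function=softmax_loss)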
A more accurate description of this issue is that MobileNet behaves badly when is_training is not explicitly set to True.
And I'm referring to the MobileNet that is provided by TensorFlow in their model repository https://github.com/tensorflow/models/blob/master/slim/nets/mobilenet_v1.py.
This is how I create the net (phase_train=True):
with slim.arg_scope(mobilenet_v1.mobilenet_v1_arg_scope(is_training=phase_train)):
    features, endpoints = mobilenet_v1.mobilenet_v1(
        inputs=images_placeholder, features_layer_size=features_layer_size, dropout_keep_prob=dropout_keep_prob,
        is_training=phase_train)
I'm training a recognition network, and while training I test on LFW. The results I get during training improve over time and reach good accuracy.
Before deployment I freeze the graph. If I freeze the graph with is_training=True, the results I get on LFW are the same as during training.
But if I set is_training=False, I get results as if the network hadn't trained at all...
This behavior also happens with other networks, such as Inception.
I tend to believe that I miss something very fundamental here and that this is not a bug in TensorFlow...
Any help would be appreciated.
Adding more code...
This is how I prepare for training:
images_placeholder = tf.placeholder(tf.float32, shape=(None, image_size, image_size, 1), name='input')
labels_placeholder = tf.placeholder(tf.int32, shape=(None))
dropout_placeholder = tf.placeholder_with_default(1.0, shape=(), name='dropout_keep_prob')
phase_train_placeholder = tf.Variable(True, name='phase_train')
global_step = tf.Variable(0, name='global_step', trainable=False)
# build graph
with slim.arg_scope(mobilenet_v1.mobilenet_v1_arg_scope(is_training=phase_train_placeholder)):
    features, endpoints = mobilenet_v1.mobilenet_v1(
        inputs=images_placeholder, features_layer_size=512, dropout_keep_prob=1.0,
        is_training=phase_train_placeholder)
# loss
logits = slim.fully_connected(inputs=features, num_outputs=train_data.get_class_count(), activation_fn=None,
                              weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
                              weights_regularizer=slim.l2_regularizer(scale=0.00005),
                              scope='Logits', reuse=False)
tf.losses.sparse_softmax_cross_entropy(labels=labels_placeholder, logits=logits,
                                       reduction=tf.losses.Reduction.MEAN)
loss = tf.losses.get_total_loss()
# normalize output for inference
embeddings = tf.nn.l2_normalize(features, 1, 1e-10, name='embeddings')
# optimizer
optimizer = tf.train.AdamOptimizer()
train_op = optimizer.minimize(loss, global_step=global_step)
This is my train step:
batch_data, batch_labels = train_data.next_batch()
feed_dict = {
    images_placeholder: batch_data,
    labels_placeholder: batch_labels,
    dropout_placeholder: dropout_keep_prob
}
_, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)
I could add the code for how I freeze the graph, but it's not really necessary. It's enough to build the graph with is_training=False, load the latest checkpoint, and run the evaluation on LFW to reproduce the problem.
Update...
I found that the problem is in the batch normalization layer. It's enough to set just this layer to is_training=False to reproduce the problem.
References that I found after discovering this:
http://ruishu.io/2016/12/27/batchnorm/
https://github.com/tensorflow/tensorflow/issues/10118
Batch Normalization - Tensorflow
Will update with a solution once I have a tested one.
So I found a solution.
Mainly using this reference: http://ruishu.io/2016/12/27/batchnorm/
From the link:
Note: When is_training is True the moving_mean and moving_variance need to be updated, by default the update_ops are placed in tf.GraphKeys.UPDATE_OPS so they need to be added as a dependency to the train_op, example:
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
if update_ops:
    updates = tf.group(*update_ops)
    total_loss = control_flow_ops.with_dependencies([updates], total_loss)
And to get straight to the point: instead of creating the optimizer like this:
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(total_loss, global_step=global_step)
Do it like this:
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
    train_op = optimizer.minimize(total_loss, global_step=global_step)
That will solve the issue.
is_training should not have this effect. I'd need to see more of your code to understand what is happening, but odds are the variable names are not matching when you set is_training to False, probably because of a variable scope reuse issue.
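One quick way to check that is to compare the variable names stored in the checkpoint with the ones in the graph built with is_training=False, for example (the checkpoint path is hypothetical):

import tensorflow as tf

# Names and shapes stored in the checkpoint
for name, shape in tf.train.list_variables('/path/to/checkpoint_dir'):
    print('ckpt: ', name, shape)

# Names in the graph as it is currently built
for v in tf.global_variables():
    print('graph:', v.op.name)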
I would like to try to use Keras Sequential model in order to train a convnet on an image classification problem.
My training set is 18K images of 455x255 pixels, which is probably too big to fit into memory, so I would like to use some kind of batch pipeline.
In my original TensorFlow implementation I have this code, which is similar to the MNIST TensorFlow example.
How can I feed this pipeline into the Sequential model, to create something like the Keras cifar10_cnn example?
with tf.name_scope('input'):
    # Input data
    images_initializer = tf.placeholder(
        dtype=tf.string,
        shape=[len_all_filepaths])
    labels_initializer = tf.placeholder(
        dtype=tf.int32,
        shape=[len_all_filepaths])
    input_images = tf.Variable(
        images_initializer, trainable=False, collections=[])
    input_labels = tf.Variable(
        labels_initializer, trainable=False, collections=[])

    image, label = tf.train.slice_input_producer(
        [input_images, input_labels], num_epochs=FLAGS.num_epochs)

    # process path and string tensor into an image and a label
    file_contents = tf.read_file(image)
    image_contents = tf.image.decode_jpeg(file_contents, channels=NUM_CHANNELS)
    image_contents.set_shape([None, None, NUM_CHANNELS])

    # Rotate if necessary
    rotated_image_contents, = tf.py_func(rotate, [image_contents], [tf.uint8])
    rotated_image_contents.set_shape([IMAGE_HEIGHT, IMAGE_WIDTH, NUM_CHANNELS])
    rotated_image_contents = tf.image.per_image_whitening(rotated_image_contents)

    images, labels = tf.train.batch(
        [rotated_image_contents, label],
        batch_size=FLAGS.batch_size,
        num_threads=16,
        capacity=3 * FLAGS.batch_size
    )

# Build a Graph that computes predictions from the inference model.
logits = model.inference(images, len(correct_labels))
# Add to the Graph the Ops for loss calculation.
loss = model.loss(logits, labels)
# Add to the Graph the Ops that calculate and apply gradients.
train_op = model.training(loss, FLAGS.learning_rate)
...
I think the ImageDataGenerator from Keras already does batching for you. I don't understand why the Keras datagen.fit() with a specified batch size and a standard generator doesn't work for your use case.
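For example, something along these lines (the directory layout, target size, and the model variable are assumptions, not taken from the question):

from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1. / 255)

# Assumes images are arranged on disk as data/train/<class_name>/<image>.jpg
train_generator = datagen.flow_from_directory(
    'data/train',
    target_size=(255, 455),   # (height, width); adjust to the real image size
    batch_size=32,
    class_mode='categorical')

# 'model' is a compiled Keras Sequential model
model.fit_generator(
    train_generator,
    steps_per_epoch=len(train_generator),
    epochs=10)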