LookupError in TensorFlow with tf.cond()

Work environment
TensorFlow release version : 1.3.0-rc2
TensorFlow git version : v1.3.0-rc1-994-gb93fd37
Operating System : CentOS Linux release 7.2.1511 (Core)
Problem Description
I use tf.cond() to switch between the training and validation datasets at processing time. The following snippet shows how I have done it:
with tf.variable_scope(tf.get_variable_scope()) as vscope:
    for i in range(4):
        with tf.device('/gpu:%d' % i):
            with tf.name_scope('GPU-Tower-%d' % i) as scope:
                worktype = tf.get_variable("wt", [], initializer=tf.zeros_initializer())
                worktype = tf.assign(worktype, 1)
                workcondition = tf.equal(worktype, 1)
                elem = tf.cond(workcondition,
                               lambda: train_iterator.get_next(),
                               lambda: val_iterator.get_next())
                net = vgg16cnn2(elem[0], numclasses=256)
                img = elem[0]
                centropy = tf.reduce_mean(
                    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=elem[1], logits=net))
                reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES, scope)
                regloss = 0.05 * tf.reduce_sum(reg_losses)
                total_loss = centropy + regloss
                t1 = tf.summary.scalar("Training Batch Loss", total_loss)
                tf.get_variable_scope().reuse_variables()
                predictions = tf.cast(tf.argmax(tf.nn.softmax(net), 1), tf.int32)
                correct_predictions = tf.cast(tf.equal(predictions, elem[1]), tf.float32)
                batch_accuracy = tf.reduce_mean(correct_predictions)
                t2 = tf.summary.scalar("Training Batch Accuracy", batch_accuracy)
                correct_detection.append(correct_predictions)
                grads = optim.compute_gradients(total_loss)
So, based on the value of worktype, a minibatch is taken from either the training or the validation set.
When I run this code, I get the following LookupError:
LookupError: No gradient defined for operation 'GPU-Tower-0/cond/IteratorGetNext_1' (op type: IteratorGetNext)
Why does TensorFlow think that IteratorGetNext_1 requires a gradient? How can I remedy this?

The variable worktype is marked as trainable. By default, Optimizer.compute_gradients(...) computes the gradients for all trainable variables.
There are two ways you could solve this:
Set trainable=False in tf.get_variable(...).
Explicitly specify the variables for which the gradients should be computed with the var_list argument of Optimizer.compute_gradients(...).
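For example, a minimal sketch of both options using the names from the snippet above (worktype_var is an extra Python reference introduced here so the variable can be excluded from var_list):
# Option 1: make the bookkeeping variable non-trainable so the optimizer ignores it.
worktype_var = tf.get_variable("wt", [], initializer=tf.zeros_initializer(), trainable=False)
worktype = tf.assign(worktype_var, 1)

# Option 2: pass an explicit var_list that excludes the bookkeeping variable.
model_vars = [v for v in tf.trainable_variables() if v is not worktype_var]
grads = optim.compute_gradients(total_loss, var_list=model_vars)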

Related

GradientTape for variable weighted sum of two Sequential models in TensorFlow

Suppose we want to minimize the following equation using gradient descent:
min f(alpha * v + (1 - alpha) * w), with v and w the model weights and alpha the weight, between 0 and 1, for the sum resulting in the combined model v̄ (here referred to as m).
alpha = tf.Variable(0.01, name='Alpha', constraint=lambda t: tf.clip_by_value(t, 0, 1))
w_weights = tff.learning.ModelWeights.from_model(w)
v_weights = tff.learning.ModelWeights.from_model(v)
m_weights = tff.learning.ModelWeights.from_model(m)
m_weights_trainable = tf.nest.map_structure(lambda v, w: alpha*v + (tf.constant(1.0) - alpha)*w, v_weights.trainable, w_weights.trainable)
tf.nest.map_structure(lambda v, t: v.assign(t), m_weights.trainable, m_weights_trainable)
In the Adaptive Personalized Federated Learning paper, the update step for alpha suggests updating alpha based on the gradients of model m on a minibatch. I tried it with and without tape.watch, but it always leads to No gradients provided for any variable:
with tf.GradientTape(watch_accessed_variables=False) as tape:
    tape.watch([alpha])
    outputs_m = m.forward_pass(batch)
grad = tape.gradient(outputs_m.loss, alpha)
optimizer.apply_gradients(zip([grad], [alpha]))
Some more information about the initialization of the models:
m.forward_pass(batch) is the default implementation from tff.learning.Model (found here), obtained by creating a model with tff.learning.from_keras_model and a tf.keras.Sequential model.
def model_fn():
    keras_model = create_keras_model()
    return tff.learning.from_keras_model(
        keras_model,
        input_spec=element_spec,
        loss=tf.keras.losses.MeanSquaredError(),
        metrics=[tf.keras.metrics.MeanSquaredError(),
                 tf.keras.metrics.MeanAbsoluteError()],
    )

w = model_fn()
v = model_fn()
m = model_fn()
Some more experimenting as suggested below by Zachary Garrett:
It seems that whenever this weighted sum is calculated and the new weights are assigned to the model, it loses track of the trainable variables of both summed models. Again, it leads to No gradients provided for any variable whenever optimizer.apply_gradients(zip([grad], [alpha])) is called; all gradients seem to be None.
with tf.GradientTape() as tape:
    alpha = tf.Variable(0.01, name='Alpha', constraint=lambda t: tf.clip_by_value(t, 0, 1))
    m_weights_trainable = tf.nest.map_structure(
        lambda w, v: tf.math.scalar_mul(alpha, v, name=None) + tf.math.scalar_mul(tf.constant(1.0) - alpha, w, name=None),
        w.trainable,
        v.trainable)
    m_weights = tff.learning.ModelWeights.from_model(m)
    tf.nest.map_structure(lambda v, t: v.assign(t), m_weights.trainable,
                          m_weights_trainable)
    outputs_m = m.forward_pass(batch)
grad = tape.gradient(outputs_m.loss, alpha)
optimizer.apply_gradients(zip([grad], [alpha]))
Another edit:
I think I have a strategy that gets it working, but it feels like bad practice, since manually setting trainable_weights or _trainable_weights does not work. Any tips on improving this?
def do_weighted_combination():
    def _mapper(target_layer, v_layer, w_layer):
        target_layer.kernel = v_layer.kernel * alpha + w_layer.kernel * (1 - alpha)
        target_layer.bias = v_layer.bias * alpha + w_layer.bias * (1 - alpha)
    tf.nest.map_structure(_mapper, m.layers, v.layers, w.layers)

with tf.GradientTape(persistent=True) as tape:
    do_weighted_combination()
    predictions = m(x_data)
    loss = m.compiled_loss(y_data, predictions)

g1 = tape.gradient(loss, v.trainable_weights)  # Not None
g2 = tape.gradient(loss, alpha)  # Not None
For TensorFlow auto-differentiation using tf.GradientTape, operations must occur within the tf.GradientTape Python context manager so that TensorFlow can "see" them.
Possibly what is happening here is that alpha is used outside/before the tape context, when setting the model variables. Then when m.forward_pass is called TensorFlow doesn't see any access to alpha and thus can't compute a gradient for it (instead returning None).
Moving the
alpha*v + (tf.constant(1.0) - alpha)*w, v_weights.trainable, w_weights.trainable
logic inside the tf.GradientTape context manager (possibly inside m.forward_pass) may be a solution.
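As a toy illustration of that principle (independent of the TFF setup above; the tensors and values here are made up), computing the mixture inside the tape does give alpha a gradient:
import tensorflow as tf

x = tf.constant([1.0, 2.0])
v = tf.constant([3.0, 4.0])
w = tf.constant([5.0, 6.0])
alpha = tf.Variable(0.5)

with tf.GradientTape() as tape:
    mixed = alpha * v + (1.0 - alpha) * w  # computed inside the tape, so alpha is traced
    loss = tf.reduce_sum((mixed - x) ** 2)

print(tape.gradient(loss, alpha))  # a real gradient, not None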

How to apply a function to the value of a tensor and then assigning the output to the same tensor

I want to project the updated weights of my network (after the optimization step) onto a special space, and the projection needs the value of the weight tensor. The function that applies the projection takes a NumPy array as input. Is there a way I can do this?
I used tf.assign() as a solution, but it failed because my function accepts arrays, not tensors.
Here is a sketch of what I want to do:
W = tf.Variable(...)
...
opt = tf.train.AdamOptimizer(learning_rate).minimize(loss, var_list=['W'])
W = my_function(W)
It seems that tf.control_dependencies is what you need.
One simple example:
import tensorflow as tf

var = tf.get_variable('var', initializer=0.0)

# replace `tf.add` with your custom function
addop = tf.add(var, 1)
with tf.control_dependencies([addop]):
    updateop = tf.assign(var, addop)

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # pylint: disable=no-member

with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    updateop.eval()
    print(var.eval())
    updateop.eval()
    print(var.eval())
    updateop.eval()
    print(var.eval())
output:
1.0
2.0
3.0
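Since the projection function in the question takes a NumPy array, one way to fit it into this pattern (a sketch not taken from the answer above; project_fn is a hypothetical stand-in for the real projection) is to wrap it with tf.py_func and assign the result back to the variable:
import numpy as np

def project_fn(w_np):
    # hypothetical NumPy projection; replace with the real one
    return np.clip(w_np, -1.0, 1.0).astype(np.float32)

projected = tf.py_func(project_fn, [W], tf.float32)
projected.set_shape(W.get_shape())
project_op = tf.assign(W, projected)
# run the optimizer step first, then the projection, e.g.
# sess.run(opt); sess.run(project_op)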

Tensorflow: Don't Update if gradient is Nan

I have a deep model to train on CIFAR-10. Training works fine with the CPU. However, when I use GPU support, the gradients for some batches become NaNs (I checked it using tf.check_numerics), and it happens randomly but early enough. I believe the problem is related to my GPU.
My question is: is there a way to skip the update when at least one of the gradients contains NaNs, and force the model to proceed to the next batch?
Edit: Perhaps I should elaborate more on my problem.
This is how I apply the gradients:
with tf.control_dependencies([tf.check_numerics(grad, message='Gradient %s check failed, possible NaNs' % var.name)
                              for grad, var in grads]):
    # Apply the gradients to adjust the shared variables.
    apply_gradient_op = opt.apply_gradients(grads, global_step=global_step)
I have thought of using tf.check_numerics first to verify that there are NaNs in the gradients, and then, if there are NaNs (check failed), I can "pass" without using opt.apply_gradients. However, is there a way to catch an error with tf.control_dependencies?
I could figure it out, albeit not in the most elegant way.
My solution is as follows:
1) check all gradients first
2) if the gradients are NaN-free, apply them
3) otherwise, apply a fake update (with zero values); this needs a gradient override
This is my code:
First define custom gradient:
@tf.RegisterGradient("ZeroGrad")
def _zero_grad(unused_op, grad):
    return tf.zeros_like(grad)
Then define an exception-handling function:
# this is added for gradient check of NaNs
def check_numerics_with_exception(grad, var):
    try:
        tf.check_numerics(grad, message='Gradient %s check failed, possible NaNs' % var.name)
    except:
        return tf.constant(False, shape=())
    else:
        return tf.constant(True, shape=())
Then create a conditional node:
num_nans_grads = tf.Variable(1.0, name='num_nans_grads')
check_all_numeric_op = tf.reduce_sum(tf.cast(tf.stack(
    [tf.logical_not(check_numerics_with_exception(grad, var)) for grad, var in grads]),
    dtype=tf.float32))

with tf.control_dependencies([tf.assign(num_nans_grads, check_all_numeric_op)]):
    # Apply the gradients to adjust the shared variables.
    def fn_true_apply_grad(grads, global_step):
        apply_gradients_true = opt.apply_gradients(grads, global_step=global_step)
        return apply_gradients_true

    def fn_false_ignore_grad(grads, global_step):
        # print('batch update ignored due to nans, fake update is applied')
        g = tf.get_default_graph()
        with g.gradient_override_map({"Identity": "ZeroGrad"}):
            for (grad, var) in grads:
                tf.assign(var, tf.identity(var, name="Identity"))
            apply_gradients_false = opt.apply_gradients(grads, global_step=global_step)
        return apply_gradients_false

    apply_gradient_op = tf.cond(tf.equal(num_nans_grads, 0.),
                                lambda: fn_true_apply_grad(grads, global_step),
                                lambda: fn_false_ignore_grad(grads, global_step))
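A simpler alternative (not part of the answer above, and with slightly different semantics: it zeroes individual non-finite gradient entries instead of skipping the whole batch) is to sanitize the gradients before applying them:
# Replace any non-finite gradient values with zeros before applying them.
safe_grads = [(tf.where(tf.is_finite(grad), grad, tf.zeros_like(grad)), var)
              for grad, var in grads]
apply_gradient_op = opt.apply_gradients(safe_grads, global_step=global_step)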

consistent forward / backward pass with tensorflow dropout

In reinforcement learning one usually applies a forward pass of the neural network for each step of the episode in order to compute the policy. Afterwards one can calculate the parameter gradients using backpropagation. A simplified implementation of my network looks like this:
class AC_Network(object):
    def __init__(self, s_size, a_size, scope, trainer, parameters_net):
        with tf.variable_scope(scope):
            self.is_training = tf.placeholder(shape=[], dtype=tf.bool)
            self.inputs = tf.placeholder(shape=[None, s_size], dtype=tf.float32)
            # (...)
            layer = slim.fully_connected(self.inputs,
                                         layer_size,
                                         activation_fn=tf.nn.relu,
                                         biases_initializer=None)
            layer = tf.contrib.layers.dropout(inputs=layer,
                                              keep_prob=parameters_net["dropout_keep_prob"],
                                              is_training=self.is_training)
            self.policy = slim.fully_connected(layer, a_size,
                                               activation_fn=tf.nn.softmax,
                                               biases_initializer=None)
            self.actions = tf.placeholder(shape=[None], dtype=tf.int32)
            self.advantages = tf.placeholder(shape=[None], dtype=tf.float32)
            actions_onehot = tf.one_hot(self.actions, a_size, dtype=tf.float32)
            responsible_outputs = tf.reduce_sum(self.policy * actions_onehot, [1])
            self.policy_loss = -policy_loss_multiplier * tf.reduce_mean(tf.log(responsible_outputs) * self.advantages)
            local_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope)
            self.gradients = tf.gradients(self.policy_loss, local_vars)
Now, during training, I first roll out the episode with consecutive forward passes (again, a simplified version):
s = self.local_env.reset()  # list of input variables for the first step
while done == False:
    a_dist = sess.run([self.policy],
                      feed_dict={self.local_AC.inputs: [s],
                                 self.is_training: True})
    a = np.argmax(a_dist)
    s, r, done, extra_stat = self.local_env.step(a)
    # (...)
and at the end I calculate the gradients with a backward pass:
p_l, grad = sess.run([self.policy_loss,
                      self.gradients],
                     feed_dict={self.inputs: np.vstack(comb_observations),
                                self.is_training: True,
                                self.actions: np.hstack(comb_actions)})
(Please note that I may have made a mistake somewhere above while stripping out as much of the original code as possible that is irrelevant to the issue in question.)
So, finally, the question: is there a way to ensure that all consecutive calls to sess.run() generate the same dropout structure? Ideally I would like exactly the same dropout structure within each episode and only change it between episodes. Things seem to work well as they are, but I continue to wonder.
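One possible workaround, not from this thread and only a sketch: instead of tf.contrib.layers.dropout, sample a Bernoulli keep mask once per episode in NumPy and feed it through a placeholder, so every sess.run in that episode reuses the same mask (dropout_mask is a hypothetical addition to the network above):
# In the graph, replace the dropout layer with an explicit mask multiplication:
self.dropout_mask = tf.placeholder(shape=[None, layer_size], dtype=tf.float32)
layer = layer * self.dropout_mask

# Once per episode, before the rollout (inverted-dropout scaling):
keep_prob = parameters_net["dropout_keep_prob"]
mask = (np.random.rand(1, layer_size) < keep_prob).astype(np.float32) / keep_prob

# ... then feed the same mask in every sess.run of that episode:
# feed_dict = {..., self.local_AC.dropout_mask: mask}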

In what order does TensorFlow evaluate nodes in a computation graph?

I am having a strange bug in TensorFlow. Consider the following code, part of a simple feed-forward neural network:
output = (tf.matmul(layer_3,w_out) + b_out)
prob = tf.nn.sigmoid(output);
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = output, targets = y_, name=None))
optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(loss, var_list = model_variables)
(Notice that prob is not used to define the loss function. This is because sigmoid_cross_entropy applies sigmoid internally in its definition)
I later run the optimizer in the following line:
result,step_loss,_ = sess.run(fetches = [output,loss,optimizer],feed_dict = {x_ : np.array([[x,y,x*x,y*y,x*y]]), y_ : [[1,0]]});
The above works just fine. However, if I instead run the following line, the network seems to perform terribly, even though there shouldn't be any difference!
result,step_loss,_ = sess.run(fetches = [prob,loss,optimizer],feed_dict = {x_ : np.array([[x,y,x*x,y*y,x*y]]), y_ : [[1,0]]});
I have a feeling it has something to do with the order in which TF computes the nodes in the graph during a session, but I'm not sure. What could the issue be?
It's not an issue with the graph, it's just that you are looking at different things.
In the first example you provide:
result,step_loss,_ = sess.run(fetches = [output,loss,optimizer],feed_dict = {x_ : np.array([[x,y,x*x,y*y,x*y]]), y_ : [[1,0]]})
you are saving the result of running the output op in the result python variable.
In the second one:
result,step_loss,_ = sess.run(fetches = [prob,loss,optimizer],feed_dict = {x_ : np.array([[x,y,x*x,y*y,x*y]]), y_ : [[1,0]]})
you are saving the result of the prob op in the result python variable.
Since the two ops are different, it is to be expected that the values they return will differ.
You could run
logits, activation, step_loss, _ = sess.run(fetches = [output, prob, loss, optimizer], ...)
to check your results.
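As a quick sanity check on those fetches (a sketch; logits and activation are the values returned by the suggested run above, and this only compares the fetched arrays, not the training step):
import numpy as np
# prob is just the sigmoid of output, so the two fetched arrays should agree
assert np.allclose(activation, 1.0 / (1.0 + np.exp(-logits)), atol=1e-6)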