How to properly update variables in a while loop in TensorFlow?

Can someone please explain (or point me to the relevant place in the documentation that I've missed) how to properly update a tf.Variable() in a tf.while_loop? I am trying to use the assign() method to update variables in the loop that will store some information until the next iteration, but the assignment isn't doing anything.
The values of mu_tf and sigma_tf are being updated by the minimizer while step_mu isn't, so I am obviously doing something wrong, but I don't understand what it is. To be specific, I know that assign() does not do anything until it is executed when the graph is run, so I know that I can do
sess.run(step_mu.assign(mu_tf))
and that will update step_mu, but I want to do this in the loop correctly. I don't understand how to add an assign operation to the body of the loop.
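For context, here is a minimal sketch (my own illustration, assuming TF 1.x graph mode; it is not the code from this question) of one way to make a while_loop body actually execute an assign: create the returned tensor under a control dependency on the assign op, so the update is forced to run on each iteration.
import tensorflow as tf

v = tf.Variable(0.0)

def body(step):
    assign_op = v.assign_add(1.0)            # the per-iteration update we want to keep
    with tf.control_dependencies([assign_op]):
        # tf.add is created inside the block, so the loop output depends on assign_op
        return tf.add(step, 1)

loop = tf.while_loop(lambda step: tf.less(step, 10), body, [tf.constant(0)])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(loop)
    print(sess.run(v))  # 10.0, because the assign now runs on every iteration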
A simplified working example of what I'm doing follows here:
import numpy as np
import tensorflow as tf

mu_true = 0.5
sigma_true = 1.5
n_events = 100000

# Placeholders
X = tf.placeholder(dtype=tf.float32)

# Variables
mu_tf = tf.Variable(initial_value=tf.random_normal(shape=[], mean=0., stddev=0.1,
                                                   dtype=tf.float32),
                    dtype=tf.float32)
sigma_tf = tf.Variable(initial_value=tf.abs(tf.random_normal(shape=[], mean=1., stddev=0.1,
                                                             dtype=tf.float32)),
                       dtype=tf.float32,
                       constraint=lambda x: tf.abs(x))
step_mu = tf.Variable(initial_value=-99999., dtype=tf.float32)
step_loss = tf.Variable(initial_value=-99999., dtype=tf.float32)

# loss function
gaussian_dist = tf.distributions.Normal(loc=mu_tf, scale=sigma_tf)
log_prob = gaussian_dist.log_prob(value=X)
negative_log_likelihood = -1.0 * tf.reduce_sum(log_prob)

# optimizer
optimizer = tf.train.AdamOptimizer(learning_rate=0.1)

# sample data
x_sample = np.random.normal(loc=mu_true, scale=sigma_true, size=n_events)

# Construct the while loop.
def cond(step):
    return tf.less(step, 10)

def body(step):
    # gradient step
    train_op = optimizer.minimize(loss=negative_log_likelihood)
    # update step parameters
    with tf.control_dependencies([train_op]):
        step_mu.assign(mu_tf)
    return tf.add(step, 1)

loop = tf.while_loop(cond, body, [tf.constant(0)])

# Execute the graph
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    step_loss = sess.run(fetches=negative_log_likelihood, feed_dict={X: x_sample})
    print('Before loop:\n')
    print('mu_tf: {}'.format(sess.run(mu_tf)))
    print('sigma_tf: {}'.format(sess.run(sigma_tf)))
    print('step_mu: {}'.format(sess.run(step_mu)))
    print('step_loss: {}\n'.format(step_loss))

    sess.run(fetches=loop, feed_dict={X: x_sample})

    print('After loop:\n')
    print('mu_tf: {}'.format(sess.run(mu_tf)))
    print('sigma_tf: {}'.format(sess.run(sigma_tf)))
    print('step_mu: {}'.format(sess.run(step_mu)))
    print('step_loss: {}'.format(step_loss))

Related

Gradient Calculation in Tensorflow using GradientTape - Getting unexpected None value

I am having a problem calculating the gradient in TensorFlow 1.15. I think it's something related to the context manager or the Keras session, but I am not sure about it. Following is the code I have written:
def create_adversarial_pattern_CW(input_patch, input_label, target_label):
    input_patch_T = tf.cast(input_patch, tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(input_patch_T)
        patch_pred = model(input_patch_T)
        loss_input_label = soft_dice_loss(input_label, patch_pred[0])
        loss_target_label = soft_dice_loss(target_label, patch_pred[0])
        f = loss_input_label - loss_target_label
    f_grad = tape.gradient(f, input_patch_T)
    #-------------------------#
    print(type(f_grad))
    #-------------------------#
    f_grad_sign = tf.sign(f_grad)
    return f_grad_sign

def DAG():
    sess = K.get_session()
    with sess.as_default() as sess:
        adv_x_old = tf.cast(X, dtype=tf.float32)
        for i in range(iters):
            #-------------------------#
            #y_pred = model(adv_x_old) -> If I uncomment this line the value of f_grad returned is None, otherwise it works fine, but I need this line
            #-------------------------#
            perturbations = create_adversarial_pattern_CW(adv_x_old, y, y_target)
            adv_x_new = adv_x_old - alpha*perturbations
            adv_x_old = adv_x_new
        adv_patch_pred = model(adv_x_old)
To fix it, I tried to wrap the commented line as:
with tf.GradientTape() as tape:
    with tape.stop_recording():
        y_pred = model(adv_x_old)
but I still get the value of f_grad as None.
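As a point of reference, here is a minimal sketch (my own illustration, not the asker's model, assuming eager execution is enabled, as in TF 2.x or via tf.compat.v1.enable_eager_execution() in TF 1.15) of when tape.gradient returns None: the tape only tracks trainable variables and tensors it was explicitly told to watch.
import tensorflow as tf

x = tf.constant(3.0)                 # a plain tensor, not a trainable variable
with tf.GradientTape() as tape:
    tape.watch(x)                    # without this call, tape.gradient(y, x) is None
    y = x * x
print(tape.gradient(y, x))           # tf.Tensor(6.0, ...)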

Why am I getting shape errors when trying to pass a batch from the Tensorflow Dataset API to my session operations?

I am dealing with an issue in my conversion over to the Dataset API, and I guess I just don't have enough experience with the API yet to know how to handle the situation below. We currently perform image augmentation using queueing and batching. I was tasked with checking out the new Dataset API and converting our existing implementation to use it rather than queues.
What we would like to do is get a reference to all the paths and handle all operations from just that reference. As you can see in the dataset initialization, I have mapped the parse_fn to the dataset itself, which then reads the file and extracts the initial values from the filenames. However, when I then call the iterator's next_batch method and pass those values to get_summary, I get an error about shape. I have been trying a number of things, which just keeps changing the error, so I felt I should check whether anyone on SO can see that I am going about this all wrong and should be taking a different route. Does anything jump out as absolutely wrong in my use of the Dataset API?
Should I not be calling the ops this way any longer? In most of the examples I have seen, they get the batch, pass those values to the op, capture the result in a variable, and pass that to sess.run; however, I haven't yet found an easy way of doing that with our setup that doesn't error, so this is the approach I took instead (but it's still erroring). I'll keep trying to trace down the problem and post here if I find anything, but if anyone sees something, please advise. Thanks!
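For comparison, here is a hedged sketch (a hypothetical toy model, not the asker's code) of the pattern described in those examples: the ops are built directly on the iterator's output tensors, so sess.run needs no feed_dict at all.
import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices(tf.random_uniform([64, 4]))
dataset = dataset.batch(8)
iterator = dataset.make_initializable_iterator()
features = iterator.get_next()              # tensors, not placeholders

logits = tf.layers.dense(features, 2)       # the model consumes the batch tensors directly
loss = tf.reduce_mean(tf.square(logits))
train_op = tf.train.AdamOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(iterator.initializer)
    sess.run(train_op)                      # no feed_dict needed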
Current Error:
... in get_summary
    summary, acc = sess.run([self._summary_op, self._accuracy], feed_dict=feed_dict)
ValueError: Cannot feed value of shape (32,) for Tensor 'ph_input_labels:0', which has shape '(?, 1)'
Below is the block where the get_summary method is called and error is fired:
def perform_train():
    if __name__ == '__main__':
        #Get all our image paths
        filenames = data_layer_train.get_image_paths()
        next_batch, iterator = preproc_image_fn(filenames=filenames)
        with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
            with sess.graph.as_default():
                # Set the random seed for tensorflow
                tf.set_random_seed(cfg.RNG_SEED)
                classifier_network = c_common.create_model(len(products_to_class_dict), is_training=True)
                optimizer, global_step_var = c_common.create_optimizer(classifier_network)
                sess.run(tf.local_variables_initializer())
                sess.run(tf.global_variables_initializer())
                # Init tables and dataset iterator
                sess.run(tf.tables_initializer())
                sess.run(iterator.initializer)
                cur_epoch = 0
                blobs = None
                try:
                    epoch_size = data_layer_train.get_steps_per_epoch()
                    num_steps = num_epochs * epoch_size
                    for step in range(num_steps):
                        timer_summary.tic()
                        if blobs is None:
                            #Now populate from our training dataset
                            blobs = sess.run(next_batch)
                        # *************** Below is where it is erroring *****************
                        summary_train, acc = classifier_network.get_summary(sess, blobs["images"], blobs["labels"], blobs["weights"])
                        ...
I believe the error is in preproc_image_fn:
def preproc_image_fn(filenames, images=None, labels=None, image_paths=None, cells=None, weights=None):
    def _parse_fn(filename, label, weight):
        augment_instance = False
        paths = []
        selected_cells = []
        if vals.FIRST_ITER:
            #Perform our check of the path to see if _data_augmentation is within it
            #If so set augment_instance to true and replace the substring with an empty string
            new_filename = tf.regex_replace(filename, "_data_augmentation", "")
            contains = tf.equal(tf.size(tf.string_split([filename], "")), tf.size(tf.string_split([new_filename])))
            filename = new_filename
            if contains is True:
                augment_instance = True
        core_file = tf.string_split([filename], '\\').values[-1]
        product_id = tf.string_split([core_file], ".").values[0]
        label = search_tf_table_for_entry(product_id)
        weight = data_layer_train.get_weights(product_id)
        image_string = tf.read_file(filename)
        img = tf.image.decode_image(image_string, channels=data_layer_train._channels)
        img.set_shape([None, None, None])
        img = tf.image.resize_images(img, [data_layer_train._target_height, data_layer_train._target_width])
        #Previously I was returning the below, but I was getting an error from the op when assigning feed_dict stating that it didnt like the dictionary
        #retval = dict(zip([filename], [img])), label, weight
        retval = img, label, weight
        return retval

    num_files = len(filenames)
    filenames = tf.constant(filenames)
    #*********** Setup dataset below ************
    dataset = tf.data.Dataset.from_tensor_slices((filenames, labels, weights))
    dataset = dataset.map(_parse_fn)
    dataset = dataset.repeat()
    dataset = dataset.batch(32)
    iterator = dataset.make_initializable_iterator()
    batch_features, batch_labels, batch_weights = iterator.get_next()
    return {'images': batch_features, 'labels': batch_labels, 'weights': batch_weights}, iterator
def search_tf_table_for_entry(self, product_id):
    '''Looks up keys in the table and outputs the values. Will return -1 if not found '''
    if product_id is not None:
        return self._products_to_class_table.lookup(product_id)
    else:
        if not self._real_eval:
            logger().info("class not found in training {} ".format(product_id))
        return -1
Here is where I create the model and define the placeholders used previously:
...
def create_model(self):
    weights_regularizer = tf.contrib.layers.l2_regularizer(cfg.TRAIN.WEIGHT_DECAY)
    biases_regularizer = weights_regularizer
    # Input data.
    self._input_images = tf.placeholder(
        tf.float32, shape=(None, self._image_height, self._image_width, self._num_channels), name="ph_input_images")
    self._input_labels = tf.placeholder(tf.int64, shape=(None, 1), name="ph_input_labels")
    self._input_weights = tf.placeholder(tf.float32, shape=(None, 1), name="ph_input_weights")
    self._is_training = tf.placeholder(tf.bool, name='ph_is_training')
    self._keep_prob = tf.placeholder(tf.float32, name="ph_keep_prob")
    self._accuracy = tf.reduce_mean(tf.cast(self._correct_prediction, tf.float32))
    ...
    self.create_summaries()

def create_summaries(self):
    val_summaries = []
    with tf.device("/cpu:0"):
        for var in self._act_summaries:
            self._add_act_summary(var)
        for var in self._train_summaries:
            self._add_train_summary(var)
    self._summary_op = tf.summary.merge_all()
    self._summary_op_val = tf.summary.merge(val_summaries)

def get_summary(self, sess, images, labels, weights):
    feed_dict = {self._input_images: images, self._input_labels: labels,
                 self._input_weights: weights, self._is_training: False}
    summary, acc = sess.run([self._summary_op, self._accuracy], feed_dict=feed_dict)
    return summary, acc
Since the error says:
Cannot feed value of shape (32,) for Tensor 'ph_input_labels:0', which has shape '(?, 1)'
My guess is that your labels in get_summary have shape [32]. Can you just reshape them to (32, 1)? Or maybe reshape the label earlier, in _parse_fn?
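Here is a minimal, self-contained sketch of the suggested reshape (hypothetical values, assuming TF 1.x): a (32,) label batch is reshaped to (32, 1) before being fed to the (?, 1) placeholder.
import numpy as np
import tensorflow as tf

ph_labels = tf.placeholder(tf.int64, shape=(None, 1), name="ph_input_labels")
labels = np.arange(32)                      # shape (32,), as produced by the batched dataset
labels = labels.reshape(-1, 1)              # shape (32, 1), matches the placeholder

with tf.Session() as sess:
    print(sess.run(tf.shape(ph_labels), feed_dict={ph_labels: labels}))  # [32  1]
Equivalently, the label could be given an explicit trailing dimension inside _parse_fn (for example tf.reshape(label, [1])), so that batching already yields shape (32, 1).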

In Tensorflow, is it possible to append some summaries to already-merged summary_op?

Let's say some built-in function returns train_op and summary_op, where summary_op is defined by tf.summary.merge(summaries, name='summary_op'), and I cannot touch the function.
Also, let's say I am going to use the built-in slim.learning.train, which takes train_op and summary_op as input arguments.
# -- typical
train_op, summary_op = model_fn(image)
slim.learning.train(train_op, summary_op=summary_op)
# -- my question
train_op, summary_op = model_fn(image)
some_other_summary_list = some_another_function()
summary_op_ = ... # is it possible to append some_other_summary_list to summary_op?
slim.learning.train(train_op, summary_op=summary_op_)
How can I combine the summaries in the already-merged summary_op with the newly collected summaries in some_other_summary_list?
-- If I do tf.summary.merge_all(tf.GraphKeys.SUMMARIES), there will actually be too many summaries, since model_fn() collected only the useful and necessary summaries.
-- I can think of defining a separate summary_op2 and defining train_step_fn as in:
from tensorflow.contrib.slim.python.slim.learning import train_step

def train_step_fn(...):
    ... = train_step(...)
    if iteration % 100 == 0:
        summaries = session.run(summary_op2)
        summary_writer.add_summary(summaries, iteration)

slim.learning.train(train_op, summary_op=summary_op, train_step_fn=train_step_fn)
However, this seems too much if I can simply somehow append new summaries to summary_op. Is it possible?
If both "summary_op and newly-collected summaries some_other_summary_list" are created by tf.summary.merge, you can simply merge them again by tf.summary.merge([summary_op, summaries some_other_summary_list]), as demonstrated by this code:
import tensorflow as tf
a = tf.summary.scalar('a', tf.constant(0))
b = tf.summary.scalar('b', tf.constant(1))
c = tf.summary.scalar('c', tf.constant(2))
d = tf.summary.scalar('d', tf.constant(3))
ab = tf.summary.merge([a, b])
cd = tf.summary.merge([c, d])
abcd = tf.summary.merge([ab, cd])
with tf.Session() as sess:
    writer = tf.summary.FileWriter('.', sess.graph)
    summary = sess.run(abcd)
    writer.add_summary(summary)

Tensorflow No gradients provided for any variable LSTM encoder-decoder

I'm new to TensorFlow and I'm trying to implement an LSTM encoder-decoder from scratch using TensorFlow, following this blog post:
https://medium.com/@shiyan/understanding-lstm-and-its-diagrams-37e2f46f1714
This is the code for the encoder (using tf.while_loop)
def decode_timestep(self, context, max_dec, result, C_t_d, H_t_d, current_index):
    with tf.variable_scope('decode_scope', reuse=True):
        W_f_d = tf.get_variable('W_f_d', (self.n_dim, self.n_dim), tf.float32)
        U_f_d = tf.get_variable('U_f_d', (self.n_dim, self.n_dim), tf.float32)
        # ...
    # Decoder
    # Forget Gate
    f_t_d = tf.nn.sigmoid(
        tf.add(tf.reduce_sum(tf.add(tf.matmul(context, W_f_d), tf.matmul(H_t_d, U_f_d))), b_f_d))
    C_t_d_f = tf.multiply(C_t_d, f_t_d)
    # Input Gate
    # Part 1. New Memory Impaction
    i_t_d = tf.nn.sigmoid(
        tf.add(tf.reduce_sum(tf.add(tf.matmul(context, W_i_d), tf.matmul(H_t_d, U_i_d))), b_i_d))
    # Part 2. Calculate New Memory
    c_d_new = tf.nn.tanh(tf.add(tf.reduce_sum(tf.add(tf.matmul(context, W_c_d), tf.matmul(H_t_d, U_c_d))), b_c_d))
    # Part 3. Update old Memory
    C_t_d_i = tf.add(C_t_d_f, tf.multiply(i_t_d, c_d_new))
    # Output Gate
    o_t_d = tf.nn.sigmoid(
        tf.add(tf.reduce_sum(tf.add(tf.matmul(context, W_o_d), tf.matmul(H_t_d, U_o_d))), b_o_d))
    # Calculate new H_t
    H_t_d_new = tf.multiply(o_t_d, tf.nn.tanh(C_t_d))
    # Write the result of this timestep to the tensor array at pos current_index
    result.write(tf.subtract(tf.subtract(max_dec, 1), current_index), H_t_d_new)
    # Decrement the current_index by 1
    index_next = tf.subtract(current_index, 1)
    return context, max_dec, result, C_t_d_i, H_t_d_new, index_next
This is the code for the decoder:
def decode_timestep(self, context, max_dec, result, C_t_d, H_t_d, current_index):
    # Same as above
    # Calculate new H_t
    H_t_d_new = tf.multiply(o_t_d, tf.nn.tanh(C_t_d))
    # Write the result of this timestep to the tensor array at pos current_index
    result.write(tf.subtract(tf.subtract(max_dec, 1), current_index), H_t_d_new)
    # Decrement the current_index by 1
    index_next = tf.subtract(current_index, 1)
    return context, max_dec, result, C_t_d_i, H_t_d_new, index_next
And this is the code for the session:
with tf.variable_scope('encode_scope', reuse=True):
    H_t_e = tf.get_variable('H_t_e')
    C_t_e = tf.get_variable('C_t_e')

with tf.variable_scope('global_scope', reuse=True):
    current_index = tf.get_variable('current_index', dtype=tf.int32)

context, _, C_t_e, H_t_e, current_index = tf.while_loop(self.encode_timestep_cond, self.encode_timestep,
                                                        [X_Sent, max_enc, C_t_e, H_t_e, current_index])

result = tf.TensorArray(tf.float32, size=max_dec)
_, _, result, C_t_e, H_t_e, current_index = tf.while_loop(self.decode_timestep_cond, self.decode_timestep,
                                                          [context, max_dec, result, C_t_e, H_t_e, current_index])

loss = tf.reduce_sum(tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(result.concat(), Y_Sent)), reduction_indices=1)))
train_step = tf.train.AdamOptimizer().minimize(loss)
The error is:
ValueError: No gradients provided for any variable, check your graph
for ops that do not support gradients, between variables
["", ...
Please help! This is the implementation that I think should be right, based on what I read and understood from that post. Thank you, and sorry about my English!
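As an aside, here is a minimal, self-contained sketch (my own illustration, not the asker's model) of the TensorArray-in-while_loop pattern: write() returns a new TensorArray that has to be carried as a loop variable, and if the returned object is discarded the writes are effectively dropped from the graph.
import tensorflow as tf

def body(i, ta):
    ta = ta.write(i, tf.cast(i, tf.float32) * 2.0)   # keep the TensorArray returned by write()
    return i + 1, ta

i0 = tf.constant(0)
ta0 = tf.TensorArray(tf.float32, size=5)
_, ta_final = tf.while_loop(lambda i, ta: i < 5, body, [i0, ta0])
values = ta_final.stack()                            # shape (5,)

with tf.Session() as sess:
    print(sess.run(values))                          # [0. 2. 4. 6. 8.]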

tensorflow serving uninitialized

Hello, I want to initialize the variable named result in the code below.
I tried to initialize it with this code when serving:
sess.run(tf.global_variables_initializer(), feed_dict={userLat: 0, userLon: 0})
I just want to initialize the variable.
The reason for using a variable is so that I can pass validate_shape=False.
The reason for using this option is to resolve the error 'Outer dimension for outputs must be unknown, outer dimension of 'Variable:0' is 1' when deploying the model version to the Google Cloud ML Engine.
If I initialize with the following code, a prediction simply outputs the value computed with feed_dict set to 0.
sess.run(tf.global_variables_initializer(), feed_dict={userLat: 0, userLon: 0})
Is there a way to simply initialize the value of result?
Or is it possible to store the list of tensor values as a comma-separated string, without a shape?
It's a very basic question; I'm sorry. I am a beginner with TensorFlow.
I need help. Thank you for reading.
import tensorflow as tf
import sys, os

# define filename queue
filenameQueue = tf.train.string_input_producer(['./data.csv'],
                                               shuffle=False, name='filename_queue')
# define reader
reader = tf.TextLineReader()
key, value = reader.read(filenameQueue)

# define decoder
recordDefaults = [["null"], [0.0], [0.0]]
sId, lat, lng = tf.decode_csv(
    value, record_defaults=recordDefaults, field_delim=',')

taxiData = []
with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    for i in range(18):
        data = sess.run([sId, lat, lng])
        tmpTaxiData = []
        tmpTaxiData.append(data[0])
        tmpTaxiData.append(data[1])
        tmpTaxiData.append(data[2])
        taxiData.append(tmpTaxiData)
    coord.request_stop()
    coord.join(threads)

from math import sin, cos, acos, sqrt, atan2, radians

# server input data
userLat = tf.placeholder(tf.float32, shape=[])
userLon = tf.placeholder(tf.float32, shape=[])
R = 6373.0
radian = 0.017453292519943295

distanceList = []
for i in taxiData:
    taxiId = tf.constant(i[0], dtype=tf.string, shape=[])
    taxiLat = tf.constant(i[1], dtype=tf.float32, shape=[])
    taxiLon = tf.constant(i[2], dtype=tf.float32, shape=[])
    distanceValue = 6371*tf.acos(tf.cos(radian*userLat)*
                                 tf.cos(radian*taxiLat)*tf.cos(radian*taxiLon-
                                 radian*126.8943311)+tf.sin(radian*37.4685225)*tf.sin(radian*taxiLat))
    tmpDistance = []
    tmpDistance.append(taxiId)
    tmpDistance.append(distanceValue)
    distanceList.append(tmpDistance)

# result sort
sId, distances = zip(*distanceList)
indices = tf.nn.top_k(distances, k=len(distances)).indices
gather = tf.gather(sId, indices[::-1])[0:5]
result = tf.Variable(gather, validate_shape=False)
print "Done training!"

# serving
import os
from tensorflow.python.util import compat

model_version = 1
path = os.path.join("Taximodel", str(model_version))
builder = tf.saved_model.builder.SavedModelBuilder(path)

with tf.Session() as sess:
    builder.add_meta_graph_and_variables(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            "serving_default":
                tf.saved_model.signature_def_utils.predict_signature_def(
                    inputs={"userLat": userLat, "userLon": userLon},
                    outputs={"result": result})
        })
    builder.save()
    print 'Done exporting'
You can try to define the graph so that the output tensor preserves the shape (outer dimension) of the input tensor.
For example, something like:
# server input data
userLoc = tf.placeholder(tf.float32, shape=[None, 2])

def calculate_dist(user_loc):
    distanceList = []
    for i in taxiData:
        taxiId = tf.constant(i[0], dtype=tf.string, shape=[])
        taxiLat = tf.constant(i[1], dtype=tf.float32, shape=[])
        taxiLon = tf.constant(i[2], dtype=tf.float32, shape=[])
        distanceValue = 6371*tf.acos(tf.cos(radian*user_loc[0])*
                                     tf.cos(radian*taxiLat)*tf.cos(radian*taxiLon-
                                     radian*126.8943311)+tf.sin(radian*37.4685225)*tf.sin(radian*taxiLat))
        tmpDistance = []
        tmpDistance.append(taxiId)
        tmpDistance.append(distanceValue)
        distanceList.append(tmpDistance)
    # result sort
    sId, distances = zip(*distanceList)
    indices = tf.nn.top_k(distances, k=len(distances)).indices
    return tf.gather(sId, indices[::-1])[0:5]

result = tf.map_fn(calculate_dist, userLoc)