tf.scatter_nd_update Variable Requirement vs RNN.__call__ method - tensorflow

I am developing an RNN using TensorFlow 1.1, and I get the following error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: The node 'model/att_seq2seq/encode/pocmru_rnn_encoder/rnn/while/Variable/Assign' has inputs from different frames. The input 'model/att_seq2seq/encode/pocmru_rnn_encoder/rnn/while/Identity_3' is in frame 'model/att_seq2seq/encode/pocmru_rnn_encoder/rnn/while/model/att_seq2seq/encode/pocmru_rnn_encoder/rnn/while/'. The input 'model/att_seq2seq/encode/pocmru_rnn_encoder/rnn/while/Variable' is in frame ''.
The error is caused by the lambda function in the dynamic RNN method together with a piece of code in my RNN.
In TensorFlow's rnn.py, "dynamic_rnn / _dynamic_rnn_loop / _time_step" uses a lambda function to call the RNN's __call__ method and loop over all inputs.
my code :
if type(myObject) != tf.Variable:
    tp = tf.Variable(myObject, validate_shape=False)
else:
    tp = myObject
Logically, I repeatedly use tf.scatter_nd_update to update myObject. The pseudocode would be myObject = scatter_nd_update(myObject, indices, updates). Since tf.scatter_nd_update requires a Variable as its argument but returns a tensor, I need to wrap the tensor back into a Variable, hence the code above (check whether it is already a Variable and wrap it if not). How should I modify my code to make it work? Thanks!
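One possible workaround (my own suggestion, not from the original post) is to avoid creating a Variable inside the RNN's while loop at all, since that is what produces the "inputs from different frames" error, and to express the update purely on tensors instead. The sketch below rebuilds the updated value with tf.scatter_nd, assuming indices and updates follow the same convention as tf.scatter_nd_update and that updates has the same dtype as myObject:

# Hedged sketch: emulate scatter_nd_update on a plain tensor, so no Variable
# needs to be created inside the RNN's while loop.
shape = tf.shape(myObject)
mask = tf.scatter_nd(indices, tf.ones_like(updates), shape)   # 1 where values are overwritten
delta = tf.scatter_nd(indices, updates, shape)                # new values at those positions
myObject = myObject * (1.0 - mask) + delta                    # keep old values elsewhere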

Related

TypeError: Failed to convert object of type <class 'tuple'> to Tensor. When calling a model with tf.data.dataset.map

I am calling a model in a function detect:
def detect(img):
    detector_output = detector(tf.reshape(img, (1, img.shape[0], img.shape[1], img.shape[2])))
    classes = detector_output['detection_classes'][0].numpy()
    most_likely = tf.convert_to_tensor(classes[0])
    box = detector_output['detection_boxes'][0][0]
    box = tf.math.multiply(box, [img.shape[0], img.shape[1], img.shape[0], img.shape[1]])
    box = tf.cast(box, tf.int16)
    return (box, most_likely)
This is called from another function, reads, via the tf.data.Dataset map API:
dataset = dataset.map(reads, num_parallel_calls = AUTO).batch(32)
I think the issue is that this TensorFlow Hub model (like every object detection model I could find) does not support batching.
Calling the function through reads by itself works fine,
except if I use the tf.function decorator; then, weirdly, even detect(img) on its own throws the same error.
I tried several models from here with the same result.
detector needs the shape with the 1 dimension up front.
I know there should be some reverse of flatten() or a squeeze(), but I couldn't find it; apologies for the bad style!
The issue is also likely here, in the reshaping.
the full error:
TypeError: Failed to convert object of type <class 'tuple'> to Tensor. Contents: (1, None, None, 3). Consider casting elements to a supported type.
Edit: I fixed the error by using tf.expand_dims instead of reshaping above.
I'd still be glad for a good explanation to better understand what went wrong.
Thank you for your help!
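A likely explanation, to complement the edit above: inside dataset.map, and when tracing a tf.function, the image's static dimensions are unknown, so img.shape[0] and img.shape[1] are None and the tuple (1, None, None, 3) cannot be converted to the shape tensor that tf.reshape expects (this is exactly what the error message reports). tf.expand_dims only adds an axis and never needs the static shape. A minimal sketch of the fix:

# Add the leading batch axis without relying on static shapes.
batched = tf.expand_dims(img, axis=0)            # (H, W, 3) -> (1, H, W, 3)
detector_output = detector(batched)
# A dynamic-shape equivalent with tf.reshape would be:
# batched = tf.reshape(img, tf.concat([[1], tf.shape(img)], axis=0))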

How to initialize tf.contrib.lookup.HashTable used in Tensorflow Estimator model_fn?

I have a tf.contrib.lookup.HashTable declared inside a TensorFlow Estimator model_fn. As the session is not directly available to us in Estimators, I am stuck, unable to initialize the table. I am aware that, when not used with Estimators, the table can be initialized with table.init.run() using the session.
I tried to initialize the table by using a SessionRunHook which I was already using for some other purpose. I pass the table init op as an argument to session run in the before_run function, but the table is still not initialized. I also tried passing tf.tables_initializer() instead, but that did not work either. Another option I tried without success is the tf.add_to_collection(tf.GraphKeys.TABLE_INITIALIZERS, ...) command.
# sessionRunHook code below
class SaveToCSVHook(tf.train.SessionRunHook):
    def begin(self):
        samples_weights_table = session.graph.get_tensor_by_name('samples_weights_table:0')
        self.samples_weights_table_init_op = samples_weights_table.init
        self.table_init_op = tf.tables_initializer()  # also tried passing this to self.args instead - same result though
        tf.add_to_collection(tf.GraphKeys.TABLE_INITIALIZERS, samples_weights_table.init)

    def after_create_session(self, session, coord):
        self.args = {'table_init_op': self.samples_weights_table_init_op}

    def before_run(self, run_context):
        return tf.train.SessionRunArgs(self.args)

    def after_run(self, run_context, run_values):
        print(f"Got Values: {run_values.results}")
# Estimator model_fn code below
def model_fn(..):
    samples_weights_table = tf.contrib.lookup.HashTable(
        tf.contrib.lookup.KeyValueTensorInitializer(
            keysb, values, key_dtype=tf.string, value_dtype=tf.float32,
            name='samples_weights_table_init_op'),
        -1.0, name='samples_weights_table')
I get error:
FailedPreconditionError (see above for traceback): Table not initialized
which obviously means the table is not getting initialized
If anyone is interested in the answer: the HashTable need not be explicitly initialized when used with Estimators. Tables are initialized by default by high-level APIs like Estimators. The error goes away when the initializer code is removed, and the table works as expected.
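For reference, a minimal sketch of the working version, assuming the same table as above and a hypothetical features['sample_key'] column for the lookup; the hook and all initializer calls are simply removed:

def model_fn(features, labels, mode, params):
    # Declared exactly as before; Estimators run table initializers automatically,
    # so no SessionRunHook, table.init or tf.tables_initializer() call is needed.
    samples_weights_table = tf.contrib.lookup.HashTable(
        tf.contrib.lookup.KeyValueTensorInitializer(
            keysb, values, key_dtype=tf.string, value_dtype=tf.float32,
            name='samples_weights_table_init_op'),
        -1.0, name='samples_weights_table')

    # Hypothetical usage: look up a per-example weight by its string key.
    sample_weights = samples_weights_table.lookup(features['sample_key'])
    # ... rest of model_fn unchanged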

Error evaluating a TensorArray in a while loop

I've built the following TensorArray:
ta = tf.TensorArray(
    dtype=tf.float32,
    size=0,
    dynamic_size=True,
    element_shape=tf.TensorShape([None, None])
)
and called ta = ta.write(idx, my_tensor) inside a while_loop.
When evaluating the output = ta.stack() tensor in a session, I receive this error message:
ValueError: Cannot use '.../TensorArrayWrite/TensorArrayWriteV3' as
input to '.../TensorArrayStack_1/TensorArraySizeV3' because
'.../TensorArrayWrite/TensorArrayWriteV3' is in a while loop. See info
log for more details.
I don't understand this error message; could you please help me?
Update: A minimal example might be difficult to come up with, but this is what I am doing: I am using the reference to the ta TensorArray inside the cell_input_fn of AttentionWrapper. This callback is used in AttentionWrapper's call method, where another TensorArray named alignment_history is written. So the while_loop code is not designed by me; it is part of TF's dynamic RNN computation, tf.nn.dynamic_rnn.
Not sure if this is what's biting you, but you have to make sure your while_loop body function takes the TensorArray as input and emits an updated one as output; and you have to use the final version of the TensorArray after the while_loop:
def fn(ta_old):
    return ta_old.write(...)

ta_final = tf.while_loop(..., body=fn, loop_vars=[tf.TensorArray(...)])
values = ta_final.stack()
Specifically, you should never access ta_old outside of fn().
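For concreteness, here is a minimal, self-contained sketch of that pattern in TF 1.x graph mode (a toy loop, not the AttentionWrapper case from the question):

import tensorflow as tf

ta = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True)

def cond(i, ta):
    return i < 5

def body(i, ta):
    # write through the loop variable and return the updated TensorArray
    ta = ta.write(i, tf.cast(i, tf.float32) * tf.ones([2]))
    return [i + 1, ta]

_, ta_final = tf.while_loop(cond, body, [tf.constant(0), ta])
values = ta_final.stack()   # stack the final array, outside the loop

with tf.Session() as sess:
    print(sess.run(values))  # shape (5, 2)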

Tensorflow: InvalidArgumentError: Input ... incompatible with expected float_ref

The following code results in a very unhelpful error:
import tensorflow as tf

x = tf.Variable(tf.constant(0.), name="x")

with tf.Session() as s:
    val = s.run(x.assign(1))
    print(val)  # 1
    val = s.run(x, {x: 2})
    print(val)  # 2
    val = s.run(x.assign(1), {x: 0.})  # InvalidArgumentError
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 0 of node Assign_1 was passed float from _arg_x_0_0:0 incompatible with expected float_ref.
How did I get this error?
Why do I get this error?
Here's what I could infer.
How did I get this error?
This error is seen when attempting to perform the following two operations in a single session run:
A Tensorflow variable is assigned a value
That same variable is also passed a value as part of the feed_dict
This is why the first two runs succeed (neither of them attempts to perform both of these operations at once).
Why do I get this error?
I am not sure, but I don't think this was an intentional design choice by Google. Here's my explanation:
Firstly, the TF (TensorFlow) source code (basically) resolves x.assign(1) to tf.assign(x, 1), which gives us a hint for better understanding the error message when it says Input 0.
The error message refers to x when it says Input 0 of the assign op.
It goes on to say that the first argument of the assign op was passed a plain float from _arg_x_0_0:0.
TLDR
Thus, for a run where a TF variable is provided as a feed, that variable is no longer treated as a variable (but instead as the value it was fed), and any attempt to further assign a value to it is erroneous, since only TF variables can be assigned a value in the graph.
Fix
If your graph has a variable assignment operation, don't pass a value for that same variable in your feed_dict. ¯\_(ツ)_/¯ Assuming you're using the feed_dict to provide an initial value, you could instead assign the value in a prior session run. Or, leverage tf.control_dependencies when building your graph to assign it an initial value from a placeholder, as shown below:
import tensorflow as tf

x = tf.Variable(tf.constant(0.), name="x")
initial_x = tf.placeholder(tf.float32)
assign_from_placeholder = x.assign(initial_x)

with tf.control_dependencies([assign_from_placeholder]):
    x_assign = x.assign(1)

with tf.Session() as s:
    val = s.run(x_assign, {initial_x: 0.})  # Success!
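And a sketch of the other workaround mentioned above: set the initial value in a separate, earlier run instead of feeding x in the same run as an assign op.

import tensorflow as tf

x = tf.Variable(tf.constant(0.), name="x")

with tf.Session() as s:
    s.run(x.assign(0.))        # first run: establish the initial value
    val = s.run(x.assign(1))   # second run: no feed on x, so no error
    print(val)                 # 1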

Variable Assignment TensorFlow Sizing Error

Both are scalars. I'm trying to reassign the variable, but I can't, because every iteration changes the size of the variable. I tried all kinds of transformations, but they do not work. Any idea?
I'm just trying to get behaviour like appending an element to a list.
a = tf.Variable(0.00, tf.float32)
b = tf.Variable(0.00, tf.float32)
a.assign(tf.pack([a, b]))
This gives an error:
ValueError: Shapes () and (2,) are not compatible
a is a single scalar variable.
b is a single scalar variable.
You can only assign other single scalar values to these variables (it is not like Python, where you can assign anything to a name).
When you do tf.pack([a, b]) you create a tensor of shape (2,), and you cannot assign a tensor to a single scalar variable. You have to create a new Variable.
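A minimal sketch of what the answer suggests, under the assumption that the goal is simply to hold the packed pair somewhere: put the packed tensor into a new Variable of shape (2,) instead of assigning it back into the scalar a. (tf.pack was renamed tf.stack in TF 1.0, so tf.stack is used here.)

import tensorflow as tf

a = tf.Variable(0.0, dtype=tf.float32)
b = tf.Variable(0.0, dtype=tf.float32)

ab = tf.Variable(tf.zeros([2]), dtype=tf.float32)  # new variable with shape (2,)
pack_op = ab.assign(tf.stack([a, b]))              # shapes now match: (2,) -> (2,)

with tf.Session() as s:
    s.run(tf.global_variables_initializer())
    print(s.run(pack_op))                          # [0. 0.]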