Variable Assignment TensorFlow Sizing Error - tensorflow

Both a and b below are scalars. I'm trying to reassign the variable a, but I can't, because every iteration changes the size of the variable. I tried all kinds of transformations, but none of them work. Any ideas?
I'm just trying to get behaviour like appending an element to a list.
a = tf.Variable(0.00, tf.float32)
b = tf.Variable(0.00, tf.float32)
a.assign(tf.pack([a, b]))
This gives an error:
ValueError: Shapes () and (2,) are not compatible

a is a single scalar variable.
b is a single scalar variable.
You can only assign other single scalar values to these variables (it's not like Python, where you can assign anything to a name).
When you do tf.pack([a, b]) you create a tensor of shape (2,), and you cannot assign a tensor to a single scalar variable. You have to create a new Variable.
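If the goal is append-like behaviour, one workaround (a minimal sketch, assuming TF 1.x, where tf.pack was later renamed tf.stack) is to keep a Python list of scalar tensors and stack it into a new, correctly shaped Variable rather than assigning into the old scalar:
import tensorflow as tf

a = tf.Variable(0.0)
b = tf.Variable(0.0)

# Grow a Python list of scalar tensors; stacking them yields a new
# tensor of shape (2,), stored in a new Variable of matching shape.
elements = [a, b]
stacked = tf.stack(elements)    # tf.pack in pre-1.0 releases
c = tf.Variable(tf.zeros([2]))  # fresh variable shaped like the stacked result

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(c.assign(stacked)))  # [0. 0.]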

Related

TypeError: 'TensorShape' object is not callable

I am new to Tensorflow programming. I was digging into some functions and got this error in the snippet:
with tf.Session() as sess_1:
    c = tf.constant(5)
    d = tf.constant(6)
    e = c + d
    print(sess_1.run(e))
    print(sess_1.run(e.shape()))
Error found:
Traceback (most recent call last):
  File "C:/Users/Ashu/PycharmProjects/untitled/Bored.py", line 15, in <module>
    print(sess_1.run(e.shape()))
TypeError: 'TensorShape' object is not callable
I didn't find it covered here, so can anyone please clarify this silly doubt, as I am a new learner? Sorry for any typing mistakes!
I have one more doubt: when I simply use the eval() function it doesn't print anything in PyCharm; I had to use it along with the print() method. My doubt is that when the print() method is used it doesn't print the dtype of the tensor; it simply prints the tensor's or Python object's value. (Why am I not getting the output in a format like array([1., 1.], dtype=float32)?) Is this the PyCharm way of printing tensors in the new version, or is it something I am doing wrong? So excited to know the reason behind this; please help, and pardon me if I am wrong at any place.
One confusing aspect of tensorflow for beginners is that there are two kinds of shape: the dynamic shape, given by tf.shape(x), and the static shape, given by x.shape (assuming x is a tensor). While they represent the same concept, they are used very differently.
The static shape is the shape of a tensor as known at graph-construction time. It's a data type in its own right, and it can be converted to a list using as_list().
x = tf.placeholder(tf.float32, shape=(None, 3, 4))
static_shape = x.shape
shape_list = x.shape.as_list()
print(shape_list) # [None, 3, 4]
y = tf.reduce_sum(x, axis=1)
print(y.shape.as_list()) # [None, 4]
During operations, tensorflow tracks static shapes as best it can. In the above example, y's shape was calculated from the partially known shape of x. Note we haven't even created a session, but the static shape is still known.
Since the batch size is not known, you can't use the static first entry in calculations.
z = tf.reduce_sum(x) / tf.cast(x.shape.as_list()[0], tf.float32) # ERROR
(We could have divided by x.shape.as_list()[1], since that dimension is known statically - but that wouldn't demonstrate anything here.)
If we need to use a value which is not known statically - i.e. at graph construction time - we can use the dynamic shape of x. The dynamic shape is a tensor - like other tensors in tensorflow - which is evaluated using a session.
z = tf.reduce_sum(x) / tf.cast(tf.shape(x)[0], tf.float32) # all good!
You can't call as_list on the dynamic shape, nor can you inspect its values without going through a session evaluation.
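To make the static/dynamic contrast concrete, here is a minimal sketch (TF 1.x; the fed batch size of 7 is an arbitrary example value) that evaluates the dynamic shape through a session:
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None, 3, 4))
dynamic_shape = tf.shape(x)  # a tensor; its values exist only at run time

with tf.Session() as sess:
    batch = np.zeros((7, 3, 4), dtype=np.float32)  # arbitrary batch of 7
    print(sess.run(dynamic_shape, {x: batch}))     # [7 3 4]
    print(x.shape.as_list())                       # [None, 3, 4], still static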
As stated in the documentation, you can only call a session's run method with tensors, operations, or lists of tensors/operations. Your last line of code calls e.shape(), but e.shape is a TensorShape object, not a function, so calling it raises the TypeError; in any case, a session cannot execute a TensorShape argument.
When you call print with a tensor, Python prints the tensor's representation. If you want to print the tensor's type, use code like print(type(tensor)).
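Putting that together, a corrected version of the snippet from the question might look like this (e.shape is a property, not a method, and tf.shape(e) is the runnable dynamic shape):
import tensorflow as tf

with tf.Session() as sess_1:
    c = tf.constant(5)
    d = tf.constant(6)
    e = c + d
    print(sess_1.run(e))            # 11
    print(e.shape)                  # () -- static shape; no call, no session
    print(sess_1.run(tf.shape(e)))  # [] -- dynamic shape via the session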

Tensorflow: InvalidArgumentError: Input ... incompatible with expected float_ref

The following code results in a very unhelpful error:
import tensorflow as tf

x = tf.Variable(tf.constant(0.), name="x")

with tf.Session() as s:
    val = s.run(x.assign(1))
    print(val)  # 1
    val = s.run(x, {x: 2})
    print(val)  # 2
    val = s.run(x.assign(1), {x: 0.})  # InvalidArgumentError
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 0 of node Assign_1 was passed float from _arg_x_0_0:0 incompatible with expected float_ref.
How did I get this error?
Why do I get this error?
Here's what I could infer.
How did I get this error?
This error is seen when attempting to perform the following two operations in a single session run:
A Tensorflow variable is assigned a value
That same variable is also passed a value as part of the feed_dict
This is why the first two runs succeed: neither of them attempts to perform both of these operations at once.
Why do I get this error?
I am not sure, but I don't think this was an intentional design choice by Google. Here's my explanation:
Firstly, the TF (TensorFlow) source code (basically) resolves x.assign(1) to tf.assign(x, 1), which gives us a hint for better understanding the error message when it says Input 0.
The error message refers to x when it says Input 0 of the assign op.
It goes on to say that the first argument of the assign op was passed a plain float from _arg_x_0_0:0, rather than the float_ref (a mutable variable reference) it expects.
TLDR
Thus, for a run where a TF variable is provided as a feed, that variable is no longer treated as a variable but as the value it was fed. Any attempt to additionally assign a value to it in the same run is therefore erroneous, since only TF variables can be assigned a value in the graph.
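This interpretation is easy to check: feeding a variable overrides its value for that one run only, so it behaves like a plain value rather than an assignable reference. A minimal sketch:
import tensorflow as tf

x = tf.Variable(tf.constant(0.), name="x")
doubled = x * 2.0

with tf.Session() as s:
    s.run(x.initializer)
    # The feed replaces x's value for this run only...
    print(s.run(doubled, {x: 5.0}))  # 10.0
    # ...the variable's stored value is untouched.
    print(s.run(x))                  # 0.0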
Fix
If your graph has a variable assignment operation, don't pass a value for that same variable in your feed_dict. ¯\_(ツ)_/¯ Assuming you're using the feed_dict to provide an initial value, you could instead assign the value in a prior session run (see the sketch after the code below). Or, leverage tf.control_dependencies when building your graph to assign it an initial value from a placeholder, as shown below:
import tensorflow as tf

x = tf.Variable(tf.constant(0.), name="x")
initial_x = tf.placeholder(tf.float32)
assign_from_placeholder = x.assign(initial_x)
with tf.control_dependencies([assign_from_placeholder]):
    x_assign = x.assign(1)

with tf.Session() as s:
    val = s.run(x_assign, {initial_x: 0.})  # Success!
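And, for completeness, a minimal sketch of the prior-session-run alternative mentioned above, which avoids feeding the variable at all:
import tensorflow as tf

x = tf.Variable(tf.constant(0.), name="x")
set_initial = x.assign(5.)   # run once, up front, instead of feeding x
increment = x.assign(x + 1.)

with tf.Session() as s:
    s.run(set_initial)       # first run establishes the initial value
    print(s.run(increment))  # 6.0 -- no feed_dict entry for x needed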

Select variable_scope dynamically at runtime

I want to choose the variable_scope based on the value of some tensors. As an easy example, I defined some very simple code like this:
import tensorflow as tf

def calculate_variable(scope):
    with tf.variable_scope(scope or type(self).__name__, reuse=tf.AUTO_REUSE):
        w = tf.get_variable('ww', shape=[5],
                            initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))
    return w

w = calculate_variable('in_first')
w1 = calculate_variable('in_second')
The function is very simple. It just returns a value defined in a certain variable scope, so w and w1 have different values.
What I want is to select this variable scope based on some condition on tensors. Assuming I have two tensors x and y, if their values are equal I want to get the value from the function above with one variable scope, otherwise with the other:
x = tf.constant(3)
y = tf.constant(3)
condi = tf.cond(tf.equal(x, y), lambda: 'in_first', lambda: 'in_second')
w_cond = calculate_variable(condi)
I tried many other methods and searched the Internet, but whenever I try to determine the variable_scope from a condition on tensors in a way similar to this example, I get an error:
TypeError: Using a `tf.Tensor` as a Python `bool` is not allowed. Use `if t is not None:` instead of `if t:` to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor.
Is there any good workaround?
The way you stated it, this is not possible. The variable_scope class explicitly checks that the name_or_scope argument is either a string or a VariableScope instance:
...
if not isinstance(self._name_or_scope,
                  (VariableScope,) + six.string_types):
    raise TypeError("VariableScope: name_or_scope must be a string or "
                    "VariableScope.")
It does not accept a Tensor. This is reasonable, because a variable scope is part of the graph definition; variables cannot be defined dynamically at run time.
The closest supported expression is this:
x = tf.constant(3)
y = tf.constant(3)
w_cond = tf.cond(tf.equal(x, y),
                 lambda: calculate_variable('in_first'),
                 lambda: calculate_variable('in_second'))
... where you can select either of the two variables at runtime.
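A quick usage sketch: because calculate_variable was already called for both scopes above, tf.AUTO_REUSE makes the lambdas fetch the existing variables rather than create new ones inside the cond. Since x == y here, the session returns the in_first variable:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(w_cond))  # the five values of 'in_first/ww'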

Tensorflow: get variable or tensor by name in a nested scope

I have the following code which defines a nested tf.variable_scope().
def function(inputs):
    with tf.variable_scope("1st") as scope:
        # define some variables here using tf.get_variable()
        with tf.variable_scope("2nd") as scope:
            # define some variables here using tf.get_variable()
            my_wanted_variable = tf.get_variable('my_wanted_variable',
                                                 [dim0, dim1], tf.float32,
                                                 tf.constant_initializer(0.0))
In another class, I want to get my_wanted_variable, so I use:
with tf.variable_scope("function/2nd", reuse=True):
    got_my_wanted_variable = tf.get_variable("my_wanted_variable")
I was told that
ValueError: Variable function/2nd/my_wanted_variable does not exist,
or was not created with tf.get_variable(). Did you mean to set
reuse=None in VarScope?
If I set reuse=None when fetching my_wanted_variable, then:
ValueError: Shape of a new variable (function/2nd/my_wanted_variable)
must be fully defined, but instead was <unknown>.
So, how can I get a variable (or tensor) by name in a nested scope?
Added debug info:
I used print(xxx.name) to see what their names and scopes really are. I found that although their scope is right, e.g. xxx/function/2nd, all the variables defined in scopes 1st and 2nd are not named by their assigned names; for example, my_wanted_variable is xxx/function/2nd/sub:0.
The :0 is normal for every tensor (it denotes the output endpoint of the operation that produces it).
The name sub is not that weird; it just shows that you did not explicitly name the variable, so the tensor was given the name of the operation that produced it (probably tf.sub()).
Explicitly pass the argument name="my_wanted_variable". Try it without a scope first to be sure it is named appropriately. Then use print(nn.name) or inspect the nodes of the graph_def object to check.
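A minimal sketch of that advice (the scope names mirror the question; the [2, 3] shape is made up for illustration): once the variable is created with an explicit name, it can be fetched again by its full scoped path:
import tensorflow as tf

with tf.variable_scope("function"):
    with tf.variable_scope("2nd"):
        v = tf.get_variable("my_wanted_variable", shape=[2, 3],
                            initializer=tf.constant_initializer(0.0))

print(v.name)  # function/2nd/my_wanted_variable:0

with tf.variable_scope("function/2nd", reuse=True):
    got = tf.get_variable("my_wanted_variable")

assert got is v  # the same variable object is returned, not a copy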
Alternatively, we can check all the tensors in debug mode with:
with tf.Session() as sess:
    model = tf.train.import_meta_graph('./model.ckpt-30000.meta')
    model.restore(sess, tf.train.latest_checkpoint('./'))
    graph = tf.get_default_graph()
Then, in debug mode, evaluate
graph._collections
This will list all of the collections: context_tensor, training_op, trainable_variables, variables, and so on.
Or, even better:
[tensor.name for tensor in tf.get_default_graph().as_graph_def().node]

tf.scatter_nd_update Variable Requirement vs RNN.__call__ method

I am developing an RNN and am using Tensorflow 1.1. I got the following error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: The node 'model/att_seq2seq/encode/pocmru_rnn_encoder/rnn/while/Variable/Assign' has inputs from different frames. The input 'model/att_seq2seq/encode/pocmru_rnn_encoder/rnn/while/Identity_3' is in frame 'model/att_seq2seq/encode/pocmru_rnn_encoder/rnn/while/model/att_seq2seq/encode/pocmru_rnn_encoder/rnn/while/'. The input 'model/att_seq2seq/encode/pocmru_rnn_encoder/rnn/while/Variable' is in frame ''.
The error is caused by the lambda function in the dynamic rnn method together with a piece of code in my RNN:
In tensorflow's rnn.py, dynamic_rnn / _dynamic_rnn_loop / _time_step uses a lambda function to call the RNN's __call__ method in a loop over all the inputs.
my code:
if not isinstance(myObject, tf.Variable):
    tp = tf.Variable(myObject, validate_shape=False)
else:
    tp = myObject
Logically, I repeatedly use tf.scatter_nd_update to update myObject. The pseudo-code would be myObject = scatter_nd_update(myObject, indices, updates). Since tf.scatter_nd_update requires a Variable as its argument but returns a tensor, I need to wrap the returned tensor back into a Variable, hence the code above (test, then wrap). How should I modify my code to make it work? Thanks!
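For reference, here is a minimal sketch (TF 1.x) of the tf.scatter_nd_update contract described above: it takes a Variable, mutates it in place, and returns a tensor rather than a Variable:
import tensorflow as tf

v = tf.Variable([1., 2., 3., 4.])
# indices selects rows 0 and 2; updates supplies their new values
updated = tf.scatter_nd_update(v, indices=[[0], [2]], updates=[10., 30.])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(updated))  # [10.  2. 30.  4.]
    print(type(updated))      # a Tensor, not a Variable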