How can I copy a variable in tensorflow - tensorflow

In numpy I can create a copy of the variable with numpy.copy. Is there a similar method, that I can use to create a copy of a Tensor in TensorFlow?

You asked how to copy a variable in the title, but how to copy a tensor in the question. Let's look at the different possible answers.
(1) You want to create a tensor that has the same value that is currently stored in a variable that we'll call var.
tensor = tf.identity(var)
But remember, 'tensor' is a graph node that will have that value when evaluated, and any time you evaluate it, it will grab the current value of var. You can play around with control flow ops such as with_dependencies() to see the ordering of updates to the variable and the timing of the identity.
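For instance, here is a minimal TF1-style sketch (values are illustrative) showing that the identity node re-reads the variable each time it is evaluated:
import tensorflow as tf
var = tf.Variable(1.0)
snapshot = tf.identity(var)   # a graph node, not a frozen copy
increment = var.assign_add(1.0)
sess = tf.Session()
sess.run(tf.initialize_all_variables())
print(sess.run(snapshot))  # 1.0
sess.run(increment)
print(sess.run(snapshot))  # 2.0 -- identity picks up var's current value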
(2) You want to create another variable and set its value to the value currently stored in a variable:
import tensorflow as tf
var = tf.Variable(0.9)
var2 = tf.Variable(0.0)
copy_first_variable = var2.assign(var)  # op that copies var's current value into var2
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
print(sess.run(var2))  # 0.0
sess.run(copy_first_variable)
print(sess.run(var2))  # 0.9
(3) You want to define a new variable whose starting value is the same value you initialized another variable to (this is what nivwu.. answered above):
var2 = tf.Variable(var.initialized_value())
var2 will get initialized when you call tf.initialize_all_variables. You can't use this to copy var after you've already initialized the graph and started running things.
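A minimal usage sketch of option (3), with illustrative values:
import tensorflow as tf
var = tf.Variable(0.9)
var2 = tf.Variable(var.initialized_value())  # var2 starts with var's initial value
sess = tf.Session()
sess.run(tf.initialize_all_variables())      # initializes var first, then var2
print(sess.run(var2))  # 0.9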

You can do this in a couple of ways.
This will create a copy for you: v2 = tf.Variable(v1)
You can also use the identity op: v2 = tf.identity(v1) (which I think is the proper way of doing it).
Here is a code example:
import tensorflow as tf
v1 = tf.Variable([[1, 2], [3, 4]])
v_copy1 = tf.Variable(v1)     # a separate variable initialized from v1
v_copy2 = tf.identity(v1)     # a graph node that reads v1's current value
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
a, b = sess.run([v_copy1, v_copy2])
sess.close()
print(a)
print(b)
Both of them print the same values.
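Note one difference between the two copies (a small self-contained sketch; the assigned values are illustrative): v_copy1 is an independent variable once initialized, while v_copy2 keeps tracking v1:
import tensorflow as tf
v1 = tf.Variable([[1, 2], [3, 4]])
v_copy1 = tf.Variable(v1)      # independent variable, initialized from v1
v_copy2 = tf.identity(v1)      # re-reads v1 every time it is evaluated
update = v1.assign([[5, 6], [7, 8]])
sess = tf.Session()
sess.run(tf.initialize_all_variables())
sess.run(update)
print(sess.run(v_copy1))  # [[1 2] [3 4]] -- unaffected by the assign
print(sess.run(v_copy2))  # [[5 6] [7 8]] -- follows v1's current value
sess.close()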

This performs a deep copy
copied_variable = tf.Variable(source_variable.initialized_value())
It also handles initialization properly, i.e.
tf.initialize_all_variables()
will properly initialize source_variable first and then copy its value to copied_variable.

In TF2:
tf.identity() will do the job for you. I recently ran into a problem using the function in Google Colab. If that's why you're here, this may help you.
Error : Failed copying input tensor from /job:localhost/replica:0/task:0/device:CPU:0 to /job:localhost/replica:0/task:0/device:GPU:0 in order to run Identity: No unary variant device copy function found for direction: 1 and Variant type_index: tensorflow::data::(anonymous namespace)::DatasetVariantWrapper [Op:Identity]
# Erroneous code
tensor1 = tf.data.Dataset.from_tensor_slices([[[1], [2]], [[3], [4]]])
tensor2 = tf.identity(tensor1)

# Correction
tensor1 = tf.data.Dataset.from_tensor_slices([[[1], [2]], [[3], [4]]])
with tf.device('CPU'):
    tensor2 = tf.identity(tensor1)
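More generally in TF2 (eager mode), a minimal sketch of both copy flavours, with illustrative values:
import tensorflow as tf
v1 = tf.Variable([[1., 2.], [3., 4.]])
snapshot = tf.identity(v1)   # eager tensor holding v1's current value
v2 = tf.Variable(v1)         # independent variable initialized from v1
v1.assign_add(tf.ones_like(v1))
print(snapshot.numpy())      # unchanged snapshot
print(v2.numpy())            # unchanged independent copy
print(v1.numpy())            # updated original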

Related

TypeError: 'TensorShape' object is not callable

I am new to TensorFlow programming. I was digging into some functions and got this error in the snippet:
with tf.Session() as sess_1:
    c = tf.constant(5)
    d = tf.constant(6)
    e = c + d
    print(sess_1.run(e))
    print(sess_1.run(e.shape()))
Error found:
Traceback (most recent call last):
File "C:/Users/Ashu/PycharmProjects/untitled/Bored.py", line 15, in
print(sess_1.run(e.shape()))
TypeError: 'TensorShape' object is not callable
I didn't find this answered here, so can anyone please clarify this doubt, as I am a new learner? Sorry for any typing mistakes!
I have one more doubt: when I simply use the eval() function it doesn't print anything in PyCharm; I had to use it together with the print() function. My doubt is that when print() is used it doesn't print the dtype of the tensor, it simply prints the tensor or Python object value in PyCharm. (Why am I not getting the output in a format like: array([1., 1.], dtype=float32)?) Is this the PyCharm way of printing tensors in the new version, or is it something I am doing wrong? So excited to know the reason behind this; please help, and pardon me if I am wrong at any place.
One confusing aspect of TensorFlow for beginners is that there are two types of shape: the dynamic shape, given by tf.shape(x), and the static shape, given by x.shape (assuming x is a tensor). While they represent the same concept, they are used very differently.
Static shape is the shape of a tensor known at graph construction time. It's a data type (TensorShape) in its own right, but it can be converted to a list using as_list().
x = tf.placeholder(tf.float32, shape=(None, 3, 4))
static_shape = x.shape
shape_list = x.shape.as_list()
print(shape_list) # [None, 3, 4]
y = tf.reduce_sum(x, axis=1)
print(y.shape.as_list()) # [None, 4]
During operations, TensorFlow tracks static shapes as best it can. In the above example, y's shape was calculated based on the partially known shape of x. Note we haven't even created a session, but the static shape is still known.
Since the batch size is not known, you can't use the static first entry in calculations.
z = tf.reduce_sum(x) / tf.cast(x.shape.as_list()[0], tf.float32) # ERROR
(we could have divided by x.shape.as_list()[1], since that dimension is known statically - but that wouldn't demonstrate anything here)
If we need to use a value which is not known statically - i.e. at graph construction time - we can use the dynamic shape of x. The dynamic shape is a tensor - like other tensors in tensorflow - which is evaluated using a session.
z = tf.reduce_sum(x) / tf.cast(tf.shape(x)[0], tf.float32) # all good!
You can't call as_list on the dynamic shape, nor can you inspect its values without going through a session evaluation.
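For example, a minimal sketch (assuming the placeholder x and the corrected z defined above, plus import numpy as np) of evaluating the dynamic shape through a session:
import numpy as np
with tf.Session() as sess:
    print(sess.run(tf.shape(x), feed_dict={x: np.zeros((7, 3, 4))}))  # [7 3 4]
    print(sess.run(z, feed_dict={x: np.ones((7, 3, 4))}))             # 12.0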
As stated in the documentation, you can only call a session's run method with tensors, operations, or lists of tensors/operations. Your last line of code calls run with the result of e.shape(), which has type TensorShape. The session can't execute a TensorShape argument, so you're getting an error.
When you call print with a tensor, the system prints the tensor's content. If you want to print the tensor's type, use code like print(type(tensor)).
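For the snippet in the question, a corrected version might look like this (shape is a property, so it is read without parentheses; tf.shape(e) gives the shape as a tensor you can run):
with tf.Session() as sess_1:
    c = tf.constant(5)
    d = tf.constant(6)
    e = c + d
    print(sess_1.run(e))            # 11
    print(e.shape)                  # () -- static shape, no session needed
    print(sess_1.run(tf.shape(e)))  # [] -- dynamic shape, evaluated by the session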

Tensorflow: InvalidArgumentError: Input ... incompatible with expected float_ref

The following code results in a very unhelpful error:
import tensorflow as tf
x = tf.Variable(tf.constant(0.), name="x")

with tf.Session() as s:
    val = s.run(x.assign(1))
    print(val)  # 1
    val = s.run(x, {x: 2})
    print(val)  # 2
    val = s.run(x.assign(1), {x: 0.})  # InvalidArgumentError
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 0 of node Assign_1 was passed float from _arg_x_0_0:0 incompatible with expected float_ref.
How did I get this error?
Why do I get this error?
Here's what I could infer.
How did I get this error?
This error is seen when attempting to perform the following two operations in a single session run:
A Tensorflow variable is assigned a value
That same variable is also passed a value as part of the feed_dict
This is why the first two runs succeed (neither of them attempts both operations at once).
Why do I get this error?
I am not sure, but I don't think this was an intentional design choice by Google. Here's my explanation:
Firstly, the TF (TensorFlow) source code (basically) resolves x.assign(1) to tf.assign(x, 1), which gives us a hint for better understanding the error message when it says Input 0.
The error message refers to x when it says Input 0 of the assign op.
It goes on to say that the first argument of the assign op was passed float from _arg_x_0_0:0.
TLDR
Thus, for a run where a TF variable is provided as a feed, that variable will no longer be treated as a variable (but instead as the value it was fed), and so any attempt at further assigning a value to it is erroneous, since only TF variables can be assigned a value in the graph.
Fix
If your graph has a variable assignment operation, don't pass a value for that same variable in your feed_dict. ¯\_(ツ)_/¯ Assuming you're using the feed_dict to provide an initial value, you could instead assign the value in a prior session run. Or, leverage tf.control_dependencies when building your graph to assign it an initial value from a placeholder, as shown below:
import tensorflow as tf
x = tf.Variable(tf.constant(0.), name="x")
initial_x = tf.placeholder(tf.float32)
assign_from_placeholder = x.assign(initial_x)

with tf.control_dependencies([assign_from_placeholder]):
    x_assign = x.assign(1)

with tf.Session() as s:
    val = s.run(x_assign, {initial_x: 0.})  # Success!
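For completeness, a minimal sketch of the "prior session run" option mentioned above (the value 0. stands in for whatever you were feeding):
import tensorflow as tf
x = tf.Variable(tf.constant(0.), name="x")
set_initial = x.assign(0.)  # illustrative initial value, instead of feeding x
set_to_one = x.assign(1)

with tf.Session() as s:
    s.run(set_initial)        # first run: set the starting value
    val = s.run(set_to_one)   # second run: no feed on x, so no conflict
    print(val)                # 1.0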

Select variable_scope dynamically at runtime

I want to change the variable_scope based on the value of some tensors. As an easy example, I defined some very simple code like this:
import tensorflow as tf

def calculate_variable(scope):
    with tf.variable_scope(scope or type(self).__name__, reuse=tf.AUTO_REUSE):
        w = tf.get_variable('ww', shape=[5], initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))
    return w

w = calculate_variable('in_first')
w1 = calculate_variable('in_second')
The function is very simple. It just returns a variable defined in a certain variable scope. Now, w and w1 will have different values.
What I want to do is to select this variable scope based on some condition on tensors. Assuming I have two tensors x, y: if their values are the same, I want to get the value from the function above with one variable scope, otherwise with the other.
x = tf.constant(3)
y = tf.constant(3)
condi = tf.cond(tf.equal(x, y), lambda: 'in_first', lambda: 'in_second')
w_cond = calculate_variable(condi)
I tried many other methods and searched the Internet. However, whenever I try to determine the variable_scope from a condition on tensors in a way similar to this example, it throws an error.
TypeError: Using a `tf.Tensor` as a Python `bool` is not allowed. Use `if t is not None:` instead of `if t:` to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor.
Is there any good workaround?
The way you stated it, this is not possible. The variable_scope class explicitly checks that the name_or_scope argument is either a string or a VariableScope instance:
...
if not isinstance(self._name_or_scope,
                  (VariableScope,) + six.string_types):
    raise TypeError("VariableScope: name_or_scope must be a string or "
                    "VariableScope.")
It does not accept a Tensor. This is reasonable, because the variable scope is part of the graph definition, and variables can't be defined dynamically at run time.
The closest supported expression is this:
x = tf.constant(3)
y = tf.constant(3)
w_cond = tf.cond(tf.equal(x, y),
                 lambda: calculate_variable('in_first'),
                 lambda: calculate_variable('in_second'))
... where you can select either of the two variables at runtime.

Tensorflow: how to create a local variable?

I'm trying to understand how local and global variables are different in tensorflow and what's the right way to initialize the variables.
According to the doc, tf.local_variables_initializer:
Returns an Op that initializes all local variables.
This is just a shortcut for variables_initializer(local_variables())
So the essential part is tf.local_variables. The doc:
Local variables - per process variables, usually not saved/restored to checkpoint and used for temporary or intermediate values. For example, they can be used as counters for metrics computation or number of epochs this machine has read data.
It sounds logical; however, no matter what I tried, I couldn't make any variable local.
features = 2
hidden = 3

with tf.variable_scope('start'):
    x = tf.placeholder(tf.float32, shape=[None, features], name='x')
    y = tf.placeholder(tf.float32, shape=[None], name='y')

with tf.variable_scope('linear'):
    W = tf.get_variable(name='W', shape=[features, hidden])
    b = tf.get_variable(name='b', shape=[hidden], initializer=tf.zeros_initializer)
    z = tf.matmul(x, W) + b

with tf.variable_scope('optimizer'):
    predict = tf.reduce_sum(z, axis=1)
    loss = tf.reduce_mean(tf.square(y - predict))
    optimizer = tf.train.AdamOptimizer(0.1).minimize(loss)

print(tf.local_variables())
The output is always an empty list. How and should I create local variables?
A local variable is just a regular variable that's added to a "special" collection.
The collection is tf.GraphKeys.LOCAL_VARIABLES.
You can pick any variable definition and just add the parameter collections=[tf.GraphKeys.LOCAL_VARIABLES] to add the variable to the specified collection list.
Think I found it. The magic addition that makes a variable local is collections=[tf.GraphKeys.LOCAL_VARIABLES] in tf.get_variable. This way W becomes a local variable:
W = tf.get_variable(name='W', shape=[features, hidden], collections=[tf.GraphKeys.LOCAL_VARIABLES])
The documentation mentions one more possibility that also works:
q = tf.contrib.framework.local_variable(0.0, name='q')
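To check the effect, a quick sketch (assuming W was defined with collections=[tf.GraphKeys.LOCAL_VARIABLES] as above):
print(tf.local_variables())   # now lists W (and q)
print(tf.global_variables())  # W no longer appears here, since we overrode its collections
sess = tf.Session()
sess.run(tf.local_variables_initializer())  # initializes the local variables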

How to add all variables under a scope into a certain collection

In the TensorFlow Python API, tf.get_variable has a collections parameter that adds the created variable to the specified collections, but tf.variable_scope does not.
What's the suggested way to add all variables under a variable scope into a certain collection?
I don't believe there is a way to do this directly. You could file a feature request on Tensorflow's github issues tracker.
I can suggest two workarounds you might try though:
iterate over the result of tf.all_variables(), and extract variables whose names look like ".../scope_name/...". The scope names are encoded in the variable name, separated by / characters.
write wrappers around tf.VariableScope and tf.get_variable() that store the variables created inside the scope in a data structure (a sketch of this follows below).
I hope that helps!
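A minimal sketch of the second workaround (a thin wrapper; the function name and collection name are illustrative):
import tensorflow as tf

def get_variable_in_collection(name, shape, collection='my_scope_vars', **kwargs):
    # create (or reuse) the variable as usual
    v = tf.get_variable(name, shape, **kwargs)
    # remember it in a custom collection as well
    tf.add_to_collection(collection, v)
    return v

with tf.variable_scope('model'):
    w = get_variable_in_collection('w', [2, 2])

print(tf.get_collection('my_scope_vars'))  # [<tf.Variable 'model/w:0' ...>]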
I have managed to do this:
import tensorflow as tf

def var_1():
    with tf.variable_scope("foo") as foo_scope:
        assert foo_scope.name == "ll/foo"
        a = tf.get_variable("a", [2, 2])
    return foo_scope

def var_2(foo_scope):
    with tf.variable_scope("bar"):
        b = tf.get_variable("b", [2, 2])
        with tf.variable_scope("baz") as other_scope:
            c = tf.get_variable("c", [2, 2])
            assert other_scope.name == "ll/bar/baz"
    with tf.variable_scope(foo_scope) as foo_scope2:
        d = tf.get_variable("d", [2, 2])
        assert foo_scope2.name == "ll/foo"  # Not changed.

def main():
    with tf.variable_scope("ll"):
        scp = var_1()
        var_2(scp)
    all_default_global_variables = tf.get_collection_ref(tf.GraphKeys.GLOBAL_VARIABLES)
    my_collection = tf.get_collection('my_collection')  # my collection (empty so far)
    ll_foo_variables = []
    for variable in all_default_global_variables:
        if "ll/foo" in variable.name:
            ll_foo_variables.append(variable)
    tf.add_to_collection('my_collection', ll_foo_variables)  # note: adds the list as a single entry
    variables_in_my_collection = tf.get_collection_ref("my_collection")
    print(variables_in_my_collection)

main()
main()
You can see that, of the variables a, b, c and d in my code, only a and d share the scope name ll/foo.
The process:
First I grab all variables, which by default are created in the tf.GraphKeys.GLOBAL_VARIABLES collection, then I create a collection named my_collection and add to it only those variables whose names contain 'll/foo'.
And what I get is what I expected:
[[<tf.Variable 'll/foo/a:0' shape=(2, 2) dtype=float32_ref>, <tf.Variable 'll/foo/d:0' shape=(2, 2) dtype=float32_ref>]]
import tensorflow as tf

for var in tf.global_variables(scope='model'):
    tf.add_to_collection(tf.GraphKeys.MODEL_VARIABLES, var)
Instead of using global_variables, you could also iterate over trainable_variables if that is what you're interested in. In both cases, you capture not only the variables you created manually using get_variable() but also the ones created by, e.g., any tf.layers call.
You could just get all variables within the scope instead of getting a collection:
tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='my_scope')
https://stackoverflow.com/a/36536063/9095840