How to add all variables under a scope into a certain collection - tensorflow

In the TensorFlow Python API, tf.get_variable has a collections parameter to add the created variable to the specified collections, but tf.variable_scope does not.
What's the suggested way to add all variables under a variable scope into a certain collection?

I don't believe there is a way to do this directly. You could file a feature request on Tensorflow's github issues tracker.
I can suggest two workarounds you might try though:
iterate over the result of tf.all_variables(), and extract variables whose names look like ".../scope_name/...". The scope names are encoded in the variable name, separated by / characters (see the sketch after this answer).
write wrappers around tf.VariableScope and tf.get_variable() that store the variables created inside the scope in a data structure.
I hope that helps!
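A rough sketch of the first workaround (TF 1.x; the scope prefix "scope_name" and the collection name "my_vars" are placeholders for illustration):
import tensorflow as tf

# Workaround 1: filter existing variables by the scope prefix encoded in their names.
# tf.all_variables() was later renamed tf.global_variables().
for var in tf.all_variables():
    if "scope_name/" in var.name:
        tf.add_to_collection("my_vars", var)

print(tf.get_collection("my_vars"))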

I have managed to do this:
import tensorflow as tf

def var_1():
    with tf.variable_scope("foo") as foo_scope:
        assert foo_scope.name == "ll/foo"
        a = tf.get_variable("a", [2, 2])
    return foo_scope

def var_2(foo_scope):
    with tf.variable_scope("bar"):
        b = tf.get_variable("b", [2, 2])
        with tf.variable_scope("baz") as other_scope:
            c = tf.get_variable("c", [2, 2])
            assert other_scope.name == "ll/bar/baz"
    with tf.variable_scope(foo_scope) as foo_scope2:
        d = tf.get_variable("d", [2, 2])
        assert foo_scope2.name == "ll/foo"  # Not changed.

def main():
    with tf.variable_scope("ll"):
        scp = var_1()
        var_2(scp)
    all_default_global_variables = tf.get_collection_ref(tf.GraphKeys.GLOBAL_VARIABLES)
    my_collection = tf.get_collection('my_collection')  # create my collection
    ll_foo_variables = []
    for variable in all_default_global_variables:
        if "ll/foo" in variable.name:
            ll_foo_variables.append(variable)
    # note: this adds the whole list as a single entry of 'my_collection',
    # which is why the printed result below is a nested list
    tf.add_to_collection('my_collection', ll_foo_variables)
    variables_in_my_collection = tf.get_collection_ref("my_collection")
    print(variables_in_my_collection)

main()
You can see that, of the variables a, b, c and d in my code, only a and d share the scope name ll/foo.
The process:
First I fetch all variables, which by default are placed in the tf.GraphKeys.GLOBAL_VARIABLES collection, then I create a collection named my_collection, and then I add only those variables whose names contain 'll/foo' to my_collection.
And I get what I expected:
[[<tf.Variable 'll/foo/a:0' shape=(2, 2) dtype=float32_ref>, <tf.Variable 'll/foo/d:0' shape=(2, 2) dtype=float32_ref>]]

import tensorflow as tf

for var in tf.global_variables(scope='model'):
    tf.add_to_collection(tf.GraphKeys.MODEL_VARIABLES, var)
Instead of using global_variables, you could also iterate over trainable_variables if that is what you're interested in. In both cases, you capture not only the variables you created manually using get_variable() but also the ones created by, e.g., any tf.layers call.
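A minimal sketch of that point, assuming TF 1.x graph mode; the 'model' scope and the tf.layers.dense call are only illustrative:
import tensorflow as tf

with tf.variable_scope('model'):
    x = tf.placeholder(tf.float32, [None, 4])
    y = tf.layers.dense(x, 3)  # creates 'model/dense/kernel' and 'model/dense/bias'

# The layer's variables are picked up even though get_variable() was never called directly.
for var in tf.global_variables(scope='model'):
    tf.add_to_collection(tf.GraphKeys.MODEL_VARIABLES, var)

print([v.name for v in tf.get_collection(tf.GraphKeys.MODEL_VARIABLES)])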

You could just get all variables within the scope instead of getting a collection:
tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='my_scope')
https://stackoverflow.com/a/36536063/9095840
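A minimal usage sketch (TF 1.x), assuming the variables were created under a scope called 'my_scope':
import tensorflow as tf

with tf.variable_scope('my_scope'):
    w = tf.get_variable('w', [2, 2])

scope_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='my_scope')
print(scope_vars)  # [<tf.Variable 'my_scope/w:0' ...>]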

Related

Accessing learned weights of a DNN in CNTK

How can one access the learned weights of a DNN saved as follows:
lstm_network_output.save(model_path)
The weights/parameters of a network can be accessed by calling lstm_network_output.parameters, which returns a list of Parameter variable objects. The value of a Parameter can be obtained as a numpy array using the value property of the Parameter object. The value of the Parameter can be updated by .value = .
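For example, a minimal sketch of that, assuming model_path is the path passed to lstm_network_output.save(model_path) above:
import cntk as C

model = C.Function.load(model_path)   # load the saved model back
for p in model.parameters:            # tuple of Parameter objects
    print(p.name, p.shape)
    weights = p.value                 # learned values as a numpy array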
If you used name= properties in creating your model, you can also identify layers by name. For example:
model = Sequential([Embedding(300, name='embed'), Recurrence(LSTM(500)), Dense(10)])
E = model.embed.E # accesses the embedding matrix of the embed layer
To know that the parameter is .E, please consult the docstring of the respective function (e.g. help(Embedding)). (In Dense and Convolution, the parameters would be .W and .b.)
The pattern above is for named layers, which are created using as_block(). You can also name intermediate variables, and access them in the same way. E.g.:
W = Parameter((13,42), init=0, name='W')
x = Input(13)
y = times(x, W, name='times1')
W_recovered = y.times1.W
# e.g. check the shape to see that they are the same
W_recovered.shape # --> (13, 42)
W.shape # --> (13, 42)
Technically, this will search all parameters that feed y. In the case of a more complex network, you may end up having multiple parameters of the same name. Then an error will be thrown due to the ambiguity. In that case, you must work with the .parameters tuple mentioned in Anna's response.
This python code worked for me to visualize some weights:
import numpy as np
import cntk as C
dnnFile = C.cntk_py.Function.load('Models\ConvNet_MNIST_5.dnn') # load model from MS example
layer8 = dnnFile.parameters()[8].value()
filter_num = 0
sliced = layer8.asarray()[ filter_num ][ 0 ] # shows filter works on input image
print(sliced)

What is a local variable in tensorflow?

Tensorflow has this API defined:
tf.local_variables()
Returns all variables created with collection=[LOCAL_VARIABLES].
Returns:
A list of local Variable objects.
What exactly is a local variable in TensorFlow? Can someone give me an example?
Short answer: a local variable in TF is any variable which was created with collections=[tf.GraphKeys.LOCAL_VARIABLES]. For example:
e = tf.Variable(6, name='var_e', collections=[tf.GraphKeys.LOCAL_VARIABLES])
LOCAL_VARIABLES: the subset of Variable objects that are local to each
machine. Usually used for temporary variables, like counters. Note:
use tf.contrib.framework.local_variable to add to this collection.
They are usually not saved/restored to checkpoint and used for temporary or intermediate values.
Long answer: this was a source of confusion for me as well. At first I thought that local variables meant the same thing as local variables in almost any programming language, but it is not the same thing:
import tensorflow as tf

def some_func():
    z = tf.Variable(1, name='var_z')

a = tf.Variable(1, name='var_a')
b = tf.get_variable('var_b', 2)

with tf.name_scope('aaa'):
    c = tf.Variable(3, name='var_c')

with tf.variable_scope('bbb'):
    d = tf.Variable(3, name='var_d')

some_func()
some_func()

print [str(i.name) for i in tf.global_variables()]
print [str(i.name) for i in tf.local_variables()]
No matter what I tried, I always received only global variables:
['var_a:0', 'var_b:0', 'aaa/var_c:0', 'bbb/var_d:0', 'var_z:0', 'var_z_1:0']
[]
The documentation for tf.local_variables has not provided a lot of details:
Local variables - per process variables, usually not saved/restored to
checkpoint and used for temporary or intermediate values. For example,
they can be used as counters for metrics computation or number of
epochs this machine has read data. The local_variable() automatically
adds new variable to GraphKeys.LOCAL_VARIABLES. This convenience
function returns the contents of that collection.
But reading the docs for the __init__ method of the tf.Variable class, I found that while creating a variable you can specify what kind of variable you want it to be by assigning a list of collections.
The list of possible collection elements is here. So to create a local variable you need to do something like this; you will then see it in the list of local_variables:
e = tf.Variable(6, name='var_e', collections=[tf.GraphKeys.LOCAL_VARIABLES])
print [str(i.name) for i in tf.local_variables()]
It's the same as a regular variable, but it's in a different collection than the default one (GraphKeys.VARIABLES). That default collection is used by the Saver to build the default list of variables to save, so designating a variable as local has the effect of not saving it by default.
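A minimal sketch of that difference (TF 1.x, fresh default graph assumed):
import tensorflow as tf

g = tf.Variable(1.0, name='g')  # goes to GLOBAL_VARIABLES by default
l = tf.Variable(2.0, name='l', collections=[tf.GraphKeys.LOCAL_VARIABLES])

print([v.name for v in tf.global_variables()])  # ['g:0']
print([v.name for v in tf.local_variables()])   # ['l:0']

saver = tf.train.Saver()  # no var_list given, so it defaults to the global/saveable
                          # variables; 'l' would not be written to a checkpoint

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # initializes 'g' only
    sess.run(tf.local_variables_initializer())   # needed separately for 'l'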
I see only one place in the codebase that uses it, which is limit_epochs:
with ops.name_scope(name, "limit_epochs", [tensor]) as name:
    zero64 = constant_op.constant(0, dtype=dtypes.int64)
    epochs = variables.Variable(
        zero64, name="epochs", trainable=False,
        collections=[ops.GraphKeys.LOCAL_VARIABLES])
I think an understanding of TensorFlow collections is required here.
TensorFlow provides collections, which are named lists of tensors or other objects, such as tf.Variable instances.
The following are built-in collections:
tf.GraphKeys.GLOBAL_VARIABLES #=> 'variables'
tf.GraphKeys.LOCAL_VARIABLES #=> 'local_variables'
tf.GraphKeys.MODEL_VARIABLES #=> 'model_variables'
tf.GraphKeys.TRAINABLE_VARIABLES #=> 'trainable_variables'
In general, at the time a variable is created, it can be added to a given collection by explicitly passing that collection in the collections argument.
Theoretically, a variable can be in any combination of built-in or custom collections, but the built-in collections are used for particular purposes:
tf.GraphKeys.GLOBAL_VARIABLES:
The Variable() constructor or get_variable() automatically adds new variables to the graph collection GraphKeys.GLOBAL_VARIABLES, unless the collections argument is explicitly passed and doesn't include GLOBAL_VARIABLES.
By convention, these variables are shared across distributed environments (model variables are a subset of these).
See tf.global_variables() for more details.
tf.GraphKeys.TRAINABLE_VARIABLES:
When passed trainable=True (which is the default behavior), the Variable() constructor and get_variable() automatically add new variables to this graph collection. But of course, you can use the collections argument to add a variable to any desired collection.
By convention, these are the variables which will be trained by an optimizer.
See tf.trainable_variables() for more details.
tf.GraphKeys.LOCAL_VARIABLES:
You can use tf.contrib.framework.local_variable() to add to this collection. But of course, you can use the collections argument to add a variable to any desired collection.
By convention, these are the variables that are local to each machine. They are per-process variables, usually not saved/restored to checkpoint and used for temporary or intermediate values. For example, they can be used as counters for metrics computation or for the number of epochs this machine has read data.
See tf.local_variables() for more details.
tf.GraphKeys.MODEL_VARIABLES:
You can use tf.contrib.framework.model_variable() to add to this collection. But of course, you can use the collections argument to add a variable to any desired collection.
By convention, these are the variables that are used in the model for inference (feed forward).
See tf.model_variables() for more details.
You can also use your own collections. Any string is a valid collection name, and there is no need to explicitly create a collection. To add a variable (or any other object) to a collection after creating the variable, call tf.add_to_collection().
For example,
tf.__version__ #=> '1.9.0'
# initializing using a Tensor
my_variable01 = tf.get_variable("var01", dtype=tf.int32, initializer=tf.constant([23, 42]))
# initializing using a convenient initializer
my_variable02 = tf.get_variable("var02", shape=[1, 2, 3], dtype=tf.int32, initializer=tf.zeros_initializer)
my_variable03 = tf.get_variable("var03", dtype=tf.int32, initializer=tf.constant([1, 2]), trainable=None)
my_variable04 = tf.get_variable("var04", dtype=tf.int32, initializer=tf.constant([3, 4]), trainable=False)
my_variable05 = tf.get_variable("var05", shape=[1, 2, 3], dtype=tf.int32, initializer=tf.ones_initializer, trainable=True)
my_variable06 = tf.get_variable("var06", dtype=tf.int32, initializer=tf.constant([5, 6]), collections=[tf.GraphKeys.LOCAL_VARIABLES], trainable=None)
my_variable07 = tf.get_variable("var07", dtype=tf.int32, initializer=tf.constant([7, 8]), collections=[tf.GraphKeys.LOCAL_VARIABLES], trainable=True)
my_variable08 = tf.get_variable("var08", dtype=tf.int32, initializer=tf.constant(1), collections=[tf.GraphKeys.MODEL_VARIABLES], trainable=None)
my_variable09 = tf.get_variable("var09", dtype=tf.int32, initializer=tf.constant(2), collections=[tf.GraphKeys.GLOBAL_VARIABLES, tf.GraphKeys.LOCAL_VARIABLES, tf.GraphKeys.MODEL_VARIABLES, tf.GraphKeys.TRAINABLE_VARIABLES, "my_collection"])
my_variable10 = tf.get_variable("var10", dtype=tf.int32, initializer=tf.constant(3), collections=["my_collection"], trainable=True)
[var.name for var in tf.global_variables()] #=> ['var01:0', 'var02:0', 'var03:0', 'var04:0', 'var05:0', 'var09:0']
[var.name for var in tf.local_variables()] #=> ['var06:0', 'var07:0', 'var09:0']
[var.name for var in tf.trainable_variables()] #=> ['var01:0', 'var02:0', 'var05:0', 'var07:0', 'var09:0', 'var10:0']
[var.name for var in tf.model_variables()] #=> ['var08:0', 'var09:0']
[var.name for var in tf.get_collection("trainable_variables")] #=> ['var01:0', 'var02:0', 'var05:0', 'var07:0', 'var09:0', 'var10:0']
[var.name for var in tf.get_collection("my_collection")] #=> ['var09:0', 'var10:0']

How to assign new name or rename an existing tensor in tensorflow?

I am experimenting with the TensorFlow C++ API. I need to name a tensor so that I can feed it by name when referencing it from C++. Take a look at this example:
self.initial_state = cell.zero_state(args.batch_size, tf.float32)
print self.initial_state.name
self.initial_state is a tensor created by the cell.zero_state method. My question is: how do I rename or assign a new name to an existing tensor? I don't want to use the auto-generated name, for easier recall.
Thanks
Maybe you can try something like:
some_tensor = tf.get_variable("my_old_tensor", [1, 2, 3])
new_tensor = tf.identity(some_tensor, name="my_new_tensor")
Refer to How to rename a variable which respects the name scope? The answer there says: if you want to "rename" an op, there is no way to do that directly, because a tf.Operation (or tf.Tensor) is immutable once it has been created. The typical way to rename an op is therefore to use tf.identity().
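A minimal sketch (TF 1.x graph mode) of how the new name can then be used to look the tensor up again, e.g. when fetching it from C++; the tf.zeros below is just a stand-in for cell.zero_state(...):
import tensorflow as tf

state = tf.zeros([1, 4])  # stand-in for cell.zero_state(args.batch_size, tf.float32)
initial_state = tf.identity(state, name="initial_state")

fetched = tf.get_default_graph().get_tensor_by_name("initial_state:0")
assert fetched is initial_state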

What's the difference of name scope and a variable scope in tensorflow?

What are the differences between these functions?
tf.variable_op_scope(values, name, default_name, initializer=None)
Returns a context manager for defining an op that creates variables.
This context manager validates that the given values are from the same graph, ensures that that graph is the default graph, and pushes a name scope and a variable scope.
tf.op_scope(values, name, default_name=None)
Returns a context manager for use when defining a Python op.
This context manager validates that the given values are from the same graph, ensures that that graph is the default graph, and pushes a name scope.
tf.name_scope(name)
Wrapper for Graph.name_scope() using the default graph.
See Graph.name_scope() for more details.
tf.variable_scope(name_or_scope, reuse=None, initializer=None)
Returns a context for variable scope.
Variable scope allows to create new variables and to share already created ones while providing checks to not create or share by accident. For details, see the Variable Scope How To, here we present only a few basic examples.
Let's begin with a short introduction to variable sharing. It is a mechanism in TensorFlow that allows for sharing variables accessed in different parts of the code without passing references to the variable around.
The method tf.get_variable can be used with the name of the variable as the argument to either create a new variable with such name or retrieve the one that was created before. This is different from using the tf.Variable constructor which will create a new variable every time it is called (and potentially add a suffix to the variable name if a variable with such name already exists).
It is for the purpose of the variable sharing mechanism that a separate type of scope (variable scope) was introduced.
As a result, we end up having two different types of scopes:
name scope, created using tf.name_scope
variable scope, created using tf.variable_scope
Both scopes have the same effect on all operations as well as variables created using tf.Variable, i.e., the scope will be added as a prefix to the operation or variable name.
However, name scope is ignored by tf.get_variable. We can see that in the following example:
with tf.name_scope("my_scope"):
v1 = tf.get_variable("var1", [1], dtype=tf.float32)
v2 = tf.Variable(1, name="var2", dtype=tf.float32)
a = tf.add(v1, v2)
print(v1.name) # var1:0
print(v2.name) # my_scope/var2:0
print(a.name) # my_scope/Add:0
The only way to place a variable accessed using tf.get_variable in a scope is to use a variable scope, as in the following example:
with tf.variable_scope("my_scope"):
v1 = tf.get_variable("var1", [1], dtype=tf.float32)
v2 = tf.Variable(1, name="var2", dtype=tf.float32)
a = tf.add(v1, v2)
print(v1.name) # my_scope/var1:0
print(v2.name) # my_scope/var2:0
print(a.name) # my_scope/Add:0
This allows us to easily share variables across different parts of the program, even within different name scopes:
with tf.name_scope("foo"):
with tf.variable_scope("var_scope"):
v = tf.get_variable("var", [1])
with tf.name_scope("bar"):
with tf.variable_scope("var_scope", reuse=True):
v1 = tf.get_variable("var", [1])
assert v1 == v
print(v.name) # var_scope/var:0
print(v1.name) # var_scope/var:0
UPDATE
As of version r0.11, op_scope and variable_op_scope are both deprecated and replaced by name_scope and variable_scope.
Both variable_op_scope and op_scope are now deprecated and should not be used at all.
Regarding the other two, I also had problems understanding the difference between variable_scope and name_scope (they looked almost the same) before I tried to visualize everything by creating a simple example:
import tensorflow as tf

def scoping(fn, scope1, scope2, vals):
    with fn(scope1):
        a = tf.Variable(vals[0], name='a')
        b = tf.get_variable('b', initializer=vals[1])
        c = tf.constant(vals[2], name='c')
        with fn(scope2):
            d = tf.add(a * b, c, name='res')
        print '\n '.join([scope1, a.name, b.name, c.name, d.name]), '\n'
    return d

d1 = scoping(tf.variable_scope, 'scope_vars', 'res', [1, 2, 3])
d2 = scoping(tf.name_scope, 'scope_name', 'res', [1, 2, 3])

with tf.Session() as sess:
    writer = tf.summary.FileWriter('logs', sess.graph)
    sess.run(tf.global_variables_initializer())
    print sess.run([d1, d2])
    writer.close()
Here I create a function that creates some variables and constants and groups them in scopes (depending on the type I provided). In this function, I also print the names of all the variables. After that, I execute the graph to get the resulting values and save the event files so they can be inspected in TensorBoard. If you run this, you will get the following:
scope_vars
 scope_vars/a:0
 scope_vars/b:0
 scope_vars/c:0
 scope_vars/res/res:0

scope_name
 scope_name/a:0
 b:0
 scope_name/c:0
 scope_name/res/res:0
You will see a similar pattern if you open TensorBoard (note that b is outside the scope_name rectangle), and this gives you the answer:
Now you see that tf.variable_scope() adds a prefix to the names of all variables (no matter how you create them), ops and constants. On the other hand, tf.name_scope() ignores variables created with tf.get_variable(), because it assumes that you know which variable you want to use and in which scope.
The documentation on Sharing Variables tells you that
tf.variable_scope(): Manages namespaces for names passed to tf.get_variable().
The same documentation provides more details on how variable scope works and when it is useful.
A namespace is a way to organize names for variables and operators in a hierarchical manner (e.g. "scopeA/scopeB/scopeC/op1").
tf.name_scope creates namespace for operators in the default graph.
tf.variable_scope creates namespace for both variables and operators in the default graph.
tf.op_scope same as tf.name_scope, but for the graph in which specified variables were created.
tf.variable_op_scope same as tf.variable_scope, but for the graph in which specified variables were created.
Links to the sources above help to disambiguate this documentation issue.
This example shows that all types of scopes define namespaces for both variables and operators, with the following differences:
scopes defined by tf.variable_op_scope or tf.variable_scope are compatible with tf.get_variable (the other two scopes are ignored by it)
tf.op_scope and tf.variable_op_scope just select a graph from a list of specified variables to create a scope for. Other than that, their behavior is equal to tf.name_scope and tf.variable_scope respectively
tf.variable_scope and variable_op_scope add a specified or default initializer.
Let's make it simple: just use tf.variable_scope. Quoting a TF developer:
Currently, we recommend everyone to use variable_scope and not use name_scope except for internal code and libraries.
Besides the fact that variable_scope's functionality basically extends that of name_scope, together they behave in a way that may surprise you:
with tf.name_scope('foo'):
    with tf.variable_scope('bar'):
        x = tf.get_variable('x', shape=())
        x2 = tf.square(x**2, name='x2')

print(x.name)
# bar/x:0
print(x2.name)
# foo/bar/x2:0
This behavior has its uses and justifies the coexistence of both scopes, but unless you know what you are doing, sticking to variable_scope only will save you some headaches.
As of API r0.11, op_scope and variable_op_scope are both deprecated.
name_scope and variable_scope can be nested:
with tf.name_scope('ns'):
    with tf.variable_scope('vs'):  # scope creation
        v1 = tf.get_variable("v1", [1.0])       # v1.name = 'vs/v1:0'
        v2 = tf.Variable([2.0], name='v2')      # v2.name = 'ns/vs/v2:0'
        v3 = v1 + v2                            # v3.name = 'ns/vs/add:0'
You can think of them as two groups: variable_op_scope and op_scope take a set of variables as input and are designed to create operations. The difference is in how they affect the creation of variables with tf.get_variable:
def mysum(a, b, name=None):
    with tf.op_scope([a, b], name, "mysum") as scope:
        v = tf.get_variable("v", 1)
        v2 = tf.Variable([0], name="v2")
        assert v.name == "v:0", v.name
        assert v2.name == "mysum/v2:0", v2.name
        return tf.add(a, b)

def mysum2(a, b, name=None):
    with tf.variable_op_scope([a, b], name, "mysum2") as scope:
        v = tf.get_variable("v", 1)
        v2 = tf.Variable([0], name="v2")
        assert v.name == "mysum2/v:0", v.name
        assert v2.name == "mysum2/v2:0", v2.name
        return tf.add(a, b)

with tf.Graph().as_default():
    op = mysum(tf.Variable(1), tf.Variable(2))
    op2 = mysum2(tf.Variable(1), tf.Variable(2))
    assert op.name == 'mysum/Add:0', op.name
    assert op2.name == 'mysum2/Add:0', op2.name
Notice the name of the variable v in the two examples.
The same applies to tf.name_scope and tf.variable_scope:
with tf.Graph().as_default():
    with tf.name_scope("name_scope") as scope:
        v = tf.get_variable("v", [1])
        op = tf.add(v, v)
        v2 = tf.Variable([0], name="v2")
        assert v.name == "v:0", v.name
        assert op.name == "name_scope/Add:0", op.name
        assert v2.name == "name_scope/v2:0", v2.name

with tf.Graph().as_default():
    with tf.variable_scope("name_scope") as scope:
        v = tf.get_variable("v", [1])
        op = tf.add(v, v)
        v2 = tf.Variable([0], name="v2")
        assert v.name == "name_scope/v:0", v.name
        assert op.name == "name_scope/Add:0", op.name
        assert v2.name == "name_scope/v2:0", v2.name
You can read more about variable scope in the tutorial.
A similar question was asked before on Stack Overflow.
From the last section of this page of the TensorFlow documentation: Names of ops in tf.variable_scope()
[...] when we do with tf.variable_scope("name"), this implicitly opens a tf.name_scope("name"). For example:
with tf.variable_scope("foo"):
x = 1.0 + tf.get_variable("v", [1])
assert x.op.name == "foo/add"
Name scopes can be opened in addition to a variable scope, and then they will only affect the names of the ops, but not of variables.
with tf.variable_scope("foo"):
with tf.name_scope("bar"):
v = tf.get_variable("v", [1])
x = 1.0 + v
assert v.name == "foo/v:0"
assert x.op.name == "foo/bar/add"
When opening a variable scope using a captured object instead of a string, we do not alter the current name scope for ops.
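A minimal sketch of that last point (TF 1.x); only the variable name is asserted here, since per the quote above the op simply keeps whatever name scope surrounds the reopened variable scope:
import tensorflow as tf

with tf.variable_scope("foo") as foo_scope:
    pass

with tf.variable_scope("other"):
    with tf.variable_scope(foo_scope):  # reopened via the captured object, not a string
        v = tf.get_variable("v", [1])
        x = 1.0 + v

assert v.name == "foo/v:0"  # the variable lands back in foo_scope
print(x.op.name)            # the op keeps the current (surrounding) name scope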
Tensorflow 2.0 Compatible Answer: the explanations by Andrzej Pronobis and Salvador Dali cover the scope-related functions in great detail.
Of the scope functions discussed above, the ones still active as of now (17th Feb 2020) are variable_scope and name_scope.
The 2.0-compatible calls for those functions are listed below, for the benefit of the community.
Function in 1.x:
tf.variable_scope
tf.name_scope
Respective Function in 2.x:
tf.compat.v1.variable_scope
tf.name_scope (tf.compat.v2.name_scope if migrated from 1.x to 2.x)
For more information about migration from 1.x to 2.x, please refer to this Migration Guide.
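As a minimal sketch of the 2.x-compatible calls; disabling eager execution here is an assumption, used only so the 1.x naming behavior carries over unchanged:
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

with tf.compat.v1.variable_scope("my_scope"):
    v = tf.compat.v1.get_variable("v", [1])
    y = tf.identity(v, name="y")

print(v.name)     # my_scope/v:0
print(y.op.name)  # my_scope/y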

How can I copy a variable in tensorflow

In numpy I can create a copy of a variable with numpy.copy. Is there a similar method that I can use to create a copy of a Tensor in TensorFlow?
You asked how to copy a variable in the title, but how to copy a tensor in the question. Let's look at the different possible answers.
(1) You want to create a tensor that has the same value that is currently stored in a variable that we'll call var.
tensor = tf.identity(var)
But remember, 'tensor' is a graph node that will have that value when evaluated, and any time you evaluate it, it will grab the current value of var. You can play around with control flow ops such as with_dependencies() to see the ordering of updates to the variable and the timing of the identity.
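A minimal sketch (TF 1.x) of that behavior; the variable and values are only illustrative:
import tensorflow as tf

var = tf.Variable(1.0)
snapshot = tf.identity(var)  # a graph node that re-reads var every time it is run
bump = var.assign_add(1.0)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(snapshot))  # 1.0
    sess.run(bump)
    print(sess.run(snapshot))  # 2.0, the identity follows the current value of var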
(2) You want to create another variable and set its value to the value currently stored in a variable:
import tensorflow as tf
var = tf.Variable(0.9)
var2 = tf.Variable(0.0)
copy_first_variable = var2.assign(var)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
print sess.run(var2)
sess.run(copy_first_variable)
print sess.run(var2)
(3) You want to define a variable and set its starting value to the same thing you already initialized a variable to (this is what nivwu.. above answered):
var2 = tf.Variable(var.initialized_value())
var2 will get initialized when you call tf.initialize_all_variables. You can't use this to copy var after you've already initialized the graph and started running things.
You can do this in a couple of ways.
This will create a copy for you: v2 = tf.Variable(v1)
You can also use the identity op: v2 = tf.identity(v1) (which I think is the proper way of doing it).
Here is a code example:
import tensorflow as tf
v1 = tf.Variable([[1, 2], [3, 4]])
v_copy1 = tf.Variable(v1)
v_copy2 = tf.identity(v1)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
a, b = sess.run([v_copy1, v_copy2])
sess.close()
print a
print b
Both of them would print the same tensors.
This performs a deep copy:
copied_variable = tf.Variable(source_variable.initialized_value())
It also handles initialization properly, i.e.
tf.initialize_all_variables()
will properly initialize source_variable first and then copy that value to copied_variable
In TF2:
tf.identity() will do the job for you. Recently I encountered some problems using the function in Google Colab; in case that's why you're here, this will help you.
Error : Failed copying input tensor from /job:localhost/replica:0/task:0/device:CPU:0 to /job:localhost/replica:0/task:0/device:GPU:0 in order to run Identity: No unary variant device copy function found for direction: 1 and Variant type_index: tensorflow::data::(anonymous namespace)::DatasetVariantWrapper [Op:Identity]
# Erroneous code
tensor1 = tf.data.Dataset.from_tensor_slices([[[1], [2]], [[3], [4]]])
tensor2 = tf.identity(tensor1)

# Correction
tensor1 = tf.data.Dataset.from_tensor_slices([[[1], [2]], [[3], [4]]])
with tf.device('CPU'):
    tensor2 = tf.identity(tensor1)