About names of variable scope in tensorflow

Recently I have been trying to learn to use TensorFlow, and I do not understand how variable scopes work exactly. In particular, I have the following problem:
import tensorflow as tf
from tensorflow.models.rnn import rnn_cell
from tensorflow.models.rnn import rnn
inputs = [tf.placeholder(tf.float32,shape=[10,10]) for _ in range(5)]
cell = rnn_cell.BasicLSTMCell(10)
outpts, states = rnn.rnn(cell, inputs, dtype=tf.float32)
print outpts[2].name
# ==> u'RNN/BasicLSTMCell_2/mul_2:0'
Where does the '_2' in 'BasicLSTMCell_2' come from? How does it work when later using tf.get_variable(reuse=True) to get the same variable again?
Edit: I think I found a related problem:
def creating(s):
    with tf.variable_scope('test'):
        with tf.variable_scope('inner'):
            a = tf.get_variable(s, [1])
    return a

def creating_mod(s):
    with tf.variable_scope('test'):
        with tf.variable_scope('inner'):
            a = tf.Variable(0.0, name=s)
    return a

tf.reset_default_graph()
a = creating('a')
b = creating_mod('b')
c = creating('c')
d = creating_mod('d')
print a.name, '\n', b.name, '\n', c.name, '\n', d.name
The output is
test/inner/a:0
test_1/inner/b:0
test/inner/c:0
test_3/inner/d:0
I'm confused...

The answer above is somewhat misleading.
Let me answer why you got two different scope names, even though it looks like you defined two identical functions: creating and creating_mod.
This is simply because you used tf.Variable(0.0, name=s) to create the variable in the function creating_mod.
ALWAYS use tf.get_variable, if you want your variable to be recognized by scope!
Check out this issue for more details.
Thanks!

The "_2" in "BasicLSTMCell_2" relates to the name scope in which the op outpts[2] was created. Every time you create a new name scope (with tf.name_scope()) or variable scope (with tf.variable_scope()) a unique suffix is added to the current name scope, based on the given string, possibly with an additional suffix to make it unique. The call to rnn.rnn(...) has the following pseudocode (simplified and using public API methods for clarity):
outputs = []
with tf.variable_scope("RNN"):
    for timestep, input_t in enumerate(inputs):
        if timestep > 0:
            tf.get_variable_scope().reuse_variables()
        with tf.variable_scope("BasicLSTMCell"):
            outputs.append(...)
return outputs
If you look at the names of the tensors in outpts, you'll see that they are the following:
>>> print [o.name for o in outpts]
[u'RNN/BasicLSTMCell/mul_2:0',
u'RNN/BasicLSTMCell_1/mul_2:0',
u'RNN/BasicLSTMCell_2/mul_2:0',
u'RNN/BasicLSTMCell_3/mul_2:0',
u'RNN/BasicLSTMCell_4/mul_2:0']
When you enter a new name scope (by entering a with tf.name_scope("..."): or with tf.variable_scope("..."): block), TensorFlow creates a new, unique name for the scope. The first time the "BasicLSTMCell" scope is entered, TensorFlow uses that name verbatim, because it hasn't been used before (in the "RNN/" scope). The next time, TensorFlow appends "_1" to the scope to make it unique, and so on up to "RNN/BasicLSTMCell_4".
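As a small sanity check of that uniquification (a minimal sketch, not from the original answer), entering a name scope with the same string twice yields suffixed names:
import tensorflow as tf

with tf.name_scope("BasicLSTMCell"):
    a = tf.constant(1.0, name="c")
with tf.name_scope("BasicLSTMCell"):
    b = tf.constant(1.0, name="c")
print(a.name)  # BasicLSTMCell/c:0
print(b.name)  # BasicLSTMCell_1/c:0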
The main difference between variable scopes and name scopes is that a variable scope also has a set of name-to-tf.Variable bindings. By calling tf.get_variable_scope().reuse_variables(), we instruct TensorFlow to reuse rather than create variables for the "RNN/" scope (and its children), after timestep 0. This ensures that the weights are correctly shared between the multiple RNN cells.
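To make the sharing part concrete, here is a minimal sketch of the same reuse pattern the pseudocode above relies on, using only public API calls and a made-up variable name "weights":
import tensorflow as tf

with tf.variable_scope("RNN"):
    for timestep in range(3):
        if timestep > 0:
            # From the second step on, reuse variables instead of creating new ones.
            tf.get_variable_scope().reuse_variables()
        with tf.variable_scope("BasicLSTMCell"):
            # Variable names follow the *variable* scope, so every step
            # refers to the same "RNN/BasicLSTMCell/weights" variable.
            w = tf.get_variable("weights", shape=[10, 10])
            print(w.name)  # RNN/BasicLSTMCell/weights:0 on every iteration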

Related

Tensorflow: get variable or tensor by name in a nested scope

I have the following code which defines a nested tf.variable_scope().
def function(inputs):
    with tf.variable_scope("1st") as scope:
        # define some variables here using tf.get_variable()
        with tf.variable_scope("2nd") as scope:
            # define some variables here using tf.get_variable()
            my_wanted_variable = tf.get_variable('my_wanted_variable', [dim0, dim1],
                                                 tf.float32,
                                                 tf.constant_initializer(0.0))
In another class, I want to get my_wanted_variable, I use
with tf.variable_scope("function/2nd", reuse=True):
got_my_wanted_variable = tf.get_variable("my_wanted_variable")
I got the following error:
ValueError: Variable function/2nd/my_wanted_variable does not exist,
or was not created with tf.get_variable(). Did you mean to set
reuse=None in VarScope?
If I set reuse=None when fetching my_wanted_variable, I get:
ValueError: Shape of a new variable (function/2nd/my_wanted_variable)
must be fully defined, but instead was .
So, how can I get a variable (or tensor) by name in a nested scope?
Edit (debug info):
I used print(xxx.name) to see their actual names and scopes, and found that although their scope is right, e.g. xxx/function/2nd, the variables defined in scopes 1st and 2nd are not named by their assigned names; for example, my_wanted_variable is xxx/function/2nd/sub:0.
The :0 is normal for every tensor (it is the index of the output of the op that produces it).
The name sub is not that weird either: it just shows that you did not name the variable explicitly, so the tensor was given the name of the operation that produced it (probably tf.sub()).
Explicitly pass the argument name="my_wanted_variable". Try without a scope first to be sure it is named appropriately, then use print(my_wanted_variable.name) or inspect the nodes of the graph_def object to check.
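For instance, a minimal sketch of that suggestion (the scope names and the shape here are placeholders, not the asker's actual code):
import tensorflow as tf

with tf.variable_scope("1st"):
    with tf.variable_scope("2nd"):
        # Name the variable explicitly so it does not inherit the producing op's name.
        my_wanted_variable = tf.get_variable("my_wanted_variable", shape=[2, 3],
                                             initializer=tf.constant_initializer(0.0))

print(my_wanted_variable.name)  # 1st/2nd/my_wanted_variable:0

# Later, fetch the same tensor from the graph by its full name.
graph = tf.get_default_graph()
fetched = graph.get_tensor_by_name("1st/2nd/my_wanted_variable:0")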
Alternatively, you can inspect all the tensors in debug mode with:
with tf.Session() as sess:
    model = tf.train.import_meta_graph('./model.ckpt-30000.meta')
    model.restore(sess, tf.train.latest_checkpoint('./'))
    graph = tf.get_default_graph()
then, in debug mode,
graph._collections
This will list the collections, e.g. context_tensor, training_op, trainable_variables, variables.
Or, even better:
[tensor.name for tensor in tf.get_default_graph().as_graph_def().node]

How can I reroute the training input pipeline to test pipeline in tensorflow using tf.contrib.graph_editor?

Suppose I have a training input pipeline which generates train_x and train_y using tf.train.shuffle_batch. I export the meta graph and re-import it in another code file. Now I want to detach the input pipeline, i.e., train_x and train_y, and connect new test_x and test_y. How can I accomplish this using tf.contrib.graph_editor?
EDIT: As suggested by @iga, I remapped my input filenames using input_map:
filenames = tf.train.match_filenames_once(FLAGS.data_dir + '*', name='matching_filenames')
if FLAGS.ckpt != '':
    latest = FLAGS.log_dir + FLAGS.ckpt
else:
    latest = tf.train.latest_checkpoint(FLAGS.log_dir)
if not latest or not os.path.exists(latest + '.meta'):
    print("checkpoint " + latest + " does not exist")
    sys.exit(1)
saver = tf.train.import_meta_graph(latest + '.meta',
                                   input_map={'matching_filenames:0': filenames},
                                   import_scope='import')
g = tf.get_default_graph()
but I get the following error:
ValueError: graph_def is invalid at node u'matching_filenames/Assign':
Input tensor 'matching_filenames:0' Cannot convert a tensor of type
string to an input of type string_ref.
Is there any elegant way to resolve this?
For this task, you should be able to just use the input_map argument to tf.import_graph_def (https://www.tensorflow.org/api_docs/python/tf/import_graph_def). If you are using import_meta_graph, you can pass input_map into its kwargs and it will get passed down to import_graph_def.
RESPONSE TO EDIT: I am assuming that your original graph (the one you are deserializing) had the same matching_filenames variable. Quite confusingly, the tensor name "matching_filenames:0" actually refers to the tensor going from the VariableV2 op to the Assign op. The type of this edge is string_ref and you don't really want to break that edge.
The output from a variable typically goes through an identity op called matching_filenames/read. This is what you want to use as the key in your input_map. For the value, you want the same tensor in your new filenames. So, your call should probably look like:
tf.train.import_meta_graph(latest + '.meta',
                           input_map={'matching_filenames/read': filenames.read_value()},
                           import_scope='import')
In general, variables are fairly complicated. If this does not work, you can use some placeholder op and feed the names into it manually.
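A rough sketch of that placeholder fallback, continuing from the question's snippet (latest comes from there; the mapped tensor name and the final sess.run target are assumptions for illustration):
import tensorflow as tf

# Map the old filename-reading tensor onto a placeholder, then feed it by hand.
filenames_ph = tf.placeholder(tf.string, shape=[None], name='filenames_ph')
saver = tf.train.import_meta_graph(latest + '.meta',
                                   input_map={'matching_filenames/read': filenames_ph},
                                   import_scope='import')

with tf.Session() as sess:
    saver.restore(sess, latest)
    # 'import/some_op:0' is a stand-in for whatever op you actually want to run.
    sess.run('import/some_op:0', feed_dict={filenames_ph: ['test_0.tfrecords']})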

Tensorflow: How does tf.get_variable work?

I have read about tf.get_variable from this question and also a bit from the documentation available at the tensorflow website. However, I am still not clear and was unable to find an answer online.
How does tf.get_variable work? For example:
var1 = tf.Variable(3., dtype=tf.float64)
var2 = tf.get_variable("var1",[],dtype=tf.float64)
Does it mean that var2 is another variable with initialization similar to var1? Or is var2 an alias for var1 (I tried, and it doesn't seem to be)?
How are var1 and var2 related?
How is a variable constructed when the variable we are getting doesn't really exist?
tf.get_variable(name) creates a new variable called name in the tensorflow graph (or appends a suffix such as _1 if an op with that name already exists in the current scope).
In your example, you're creating a python variable called var1.
The name of that variable in the tensorflow graph is not var1, but Variable:0.
Every node you define has its own name that you can specify or let tensorflow give a default (and always different) one. You can see the name by accessing the name property of the python variable (i.e. print(var1.name)).
On your second line, you're defining a Python variable var2 whose name in the tensorflow graph is var1.
The script
import tensorflow as tf
var1 = tf.Variable(3.,dtype=tf.float64)
print(var1.name)
var2 = tf.get_variable("var1",[],dtype=tf.float64)
print(var2.name)
In fact prints:
Variable:0
var1:0
If you, instead, want to define a variable (node) called var1 in the tensorflow graph and then get a reference to that node, you cannot simply use tf.get_variable("var1"), because it will create a new, different variable called var1_1.
This script
var1 = tf.Variable(3.,dtype=tf.float64, name="var1")
print(var1.name)
var2 = tf.get_variable("var1",[],dtype=tf.float64)
print(var2.name)
prints:
var1:0
var1_1:0
If you want to create a reference to the node var1, you first:
Have to replace tf.Variable with tf.get_variable. The variables created with tf.Variable can't be shared, while the latter can.
Know what the scope of var1 is and allow the reuse of that scope when declaring the reference.
Looking at the code is the best way to understand this:
import tensorflow as tf
#var1 = tf.Variable(3.,dtype=tf.float64, name="var1")
var1 = tf.get_variable(initializer=tf.constant_initializer(3.), dtype=tf.float64, name="var1", shape=())
current_scope = tf.contrib.framework.get_name_scope()
print(var1.name)
with tf.variable_scope(current_scope, reuse=True):
    var2 = tf.get_variable("var1", [], dtype=tf.float64)
print(var2.name)
outputs:
var1:0
var1:0
If you try to create a variable with tf.get_variable() using a name that has already been defined (and reuse is not enabled), TensorFlow throws an exception. Hence, it is convenient to use the tf.get_variable() function instead of tf.Variable(): within a reusing variable scope, tf.get_variable() returns the existing variable with that name if it exists, and otherwise creates a variable with the specified shape and initializer.
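A minimal sketch of that create-or-reuse behavior (the scope and variable names here are arbitrary):
import tensorflow as tf

with tf.variable_scope("layer"):
    w = tf.get_variable("w", shape=[3, 3])   # created: no variable "layer/w" exists yet

with tf.variable_scope("layer", reuse=True):
    w_again = tf.get_variable("w")           # returns the existing "layer/w" variable

assert w is w_again
print(w.name)  # layer/w:0

# Without reuse=True, requesting "layer/w" a second time raises a ValueError
# rather than silently creating "layer/w_1".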

Retrieving an unnamed variable in tensorflow

I've trained up a model and saved it in a checkpoint, but only just realized that I forgot to name one of the variables I'd like to inspect when I restore the model.
I know how to retrieve named variables from tensorflow, (g = tf.get_default_graph() and then g.get_tensor_by_name([name])). In this case, I know its scope, but it is unnamed. I've tried looking in tf.GraphKeys.GLOBAL_VARIABLES, but it doesn't appear there, for some reason.
Here's how it's defined in the model:
with tf.name_scope("contrastive_loss") as scope:
l2_dist = tf.cast(tf.sqrt(1e-4 + tf.reduce_sum(tf.subtract(pred_left, pred_right), 1)), tf.float32) # the variable I want
# I use it here when calculating another named tensor, if that helps.
con_loss = contrastive_loss(l2_dist)
loss = tf.reduce_sum(con_loss, name="loss")
Is there any way of finding the variable without a name?
First of all, following up on my first comment, it makes sense that tf.get_collection with a scope argument is not working here. From the documentation, if you provide a scope, only variables or operations with assigned names will be returned. So that's out.
One thing you can try is to list the name of every node in your Graph with:
print([node.name for node in tf.get_default_graph().as_graph_def().node])
Or possibly, when restoring from a checkpoint:
saver = tf.train.import_meta_graph('/path/to/meta/graph')
sess = tf.Session()
saver.restore(sess, '/path/to/checkpoints')
graph = sess.graph
print([node.name for node in graph.as_graph_def().node])
Another option is to display the graph using TensorBoard, or in a Jupyter Notebook with the show_graph command. There might be a built-in show_graph now, but that link is to a git repository where one is defined. You will then have to search for your operation in the graph and then probably retrieve it with:
my_op = tf.get_collection('full_operation_name')[0]
If you want to set it up in the future so that you can retrieve it by name, you need to add it to a collection using tf.add_to_collection:
my_op = tf.some_operation(stuff, name='my_op')
tf.add_to_collection('my_op_name', my_op)
Then retrieve it by restoring your graph and then using:
my_restored_op = tf.get_collection('my_op_name')[0]
You might also be able to get by just naming it and then specifying its scope in tf.get_collection instead, but I am not sure. More information and a helpful tutorial can be found here.
tf.get_collection does not work with unnamed variables. So list the operations with:
graph = sess.graph
print(graph.get_operations())
... find your tensor in the list and then:
global_step_tensor = graph.get_tensor_by_name('complete_operation_name:0')
And I found this tutorial very helpful to understand the mechanism behind these.

What's the difference between a name scope and a variable scope in tensorflow?

What are the differences between these functions?
tf.variable_op_scope(values, name, default_name, initializer=None)
Returns a context manager for defining an op that creates variables.
This context manager validates that the given values are from the same graph, ensures that that graph is the default graph, and pushes a name scope and a variable scope.
tf.op_scope(values, name, default_name=None)
Returns a context manager for use when defining a Python op.
This context manager validates that the given values are from the same graph, ensures that that graph is the default graph, and pushes a name scope.
tf.name_scope(name)
Wrapper for Graph.name_scope() using the default graph.
See Graph.name_scope() for more details.
tf.variable_scope(name_or_scope, reuse=None, initializer=None)
Returns a context for variable scope.
Variable scope allows to create new variables and to share already created ones while providing checks to not create or share by accident. For details, see the Variable Scope How To, here we present only a few basic examples.
Let's begin with a short introduction to variable sharing. It is a mechanism in TensorFlow that allows for sharing variables accessed in different parts of the code without passing references to the variable around.
The method tf.get_variable can be used with the name of the variable as the argument to either create a new variable with such name or retrieve the one that was created before. This is different from using the tf.Variable constructor which will create a new variable every time it is called (and potentially add a suffix to the variable name if a variable with such name already exists).
It is for the purpose of the variable sharing mechanism that a separate type of scope (variable scope) was introduced.
As a result, we end up having two different types of scopes:
name scope, created using tf.name_scope
variable scope, created using tf.variable_scope
Both scopes have the same effect on all operations as well as variables created using tf.Variable, i.e., the scope will be added as a prefix to the operation or variable name.
However, name scope is ignored by tf.get_variable. We can see that in the following example:
with tf.name_scope("my_scope"):
v1 = tf.get_variable("var1", [1], dtype=tf.float32)
v2 = tf.Variable(1, name="var2", dtype=tf.float32)
a = tf.add(v1, v2)
print(v1.name) # var1:0
print(v2.name) # my_scope/var2:0
print(a.name) # my_scope/Add:0
The only way to place a variable accessed using tf.get_variable in a scope is to use a variable scope, as in the following example:
with tf.variable_scope("my_scope"):
v1 = tf.get_variable("var1", [1], dtype=tf.float32)
v2 = tf.Variable(1, name="var2", dtype=tf.float32)
a = tf.add(v1, v2)
print(v1.name) # my_scope/var1:0
print(v2.name) # my_scope/var2:0
print(a.name) # my_scope/Add:0
This allows us to easily share variables across different parts of the program, even within different name scopes:
with tf.name_scope("foo"):
with tf.variable_scope("var_scope"):
v = tf.get_variable("var", [1])
with tf.name_scope("bar"):
with tf.variable_scope("var_scope", reuse=True):
v1 = tf.get_variable("var", [1])
assert v1 == v
print(v.name) # var_scope/var:0
print(v1.name) # var_scope/var:0
UPDATE
As of version r0.11, op_scope and variable_op_scope are both deprecated and replaced by name_scope and variable_scope.
Both variable_op_scope and op_scope are now deprecated and should not be used at all.
Regarding the other two, I also had problems understanding the difference between variable_scope and name_scope (they looked almost the same) until I tried to visualize everything with a simple example:
import tensorflow as tf

def scoping(fn, scope1, scope2, vals):
    with fn(scope1):
        a = tf.Variable(vals[0], name='a')
        b = tf.get_variable('b', initializer=vals[1])
        c = tf.constant(vals[2], name='c')
        with fn(scope2):
            d = tf.add(a * b, c, name='res')
        print '\n '.join([scope1, a.name, b.name, c.name, d.name]), '\n'
    return d

d1 = scoping(tf.variable_scope, 'scope_vars', 'res', [1, 2, 3])
d2 = scoping(tf.name_scope,     'scope_name', 'res', [1, 2, 3])

with tf.Session() as sess:
    writer = tf.summary.FileWriter('logs', sess.graph)
    sess.run(tf.global_variables_initializer())
    print sess.run([d1, d2])
    writer.close()
Here I create a function that creates some variables and constants and groups them in scopes (depending on the type I provided). In this function I also print the names of all the variables. After that, I execute the graph to get the resulting values and save the event files so they can be investigated in TensorBoard. If you run this, you will get the following:
scope_vars
scope_vars/a:0
scope_vars/b:0
scope_vars/c:0
scope_vars/res/res:0
scope_name
scope_name/a:0
b:0
scope_name/c:0
scope_name/res/res:0
You will see a similar pattern if you open TensorBoard (note that b is outside of the scope_name rectangle). This gives you the answer:
Now you see that tf.variable_scope() adds a prefix to the names of all variables (no matter how you create them), ops, and constants. On the other hand, tf.name_scope() ignores variables created with tf.get_variable(), because it assumes that you know which variable you want to use and in which scope.
The documentation on Sharing variables tells you that
tf.variable_scope(): Manages namespaces for names passed to tf.get_variable().
The same documentation provides more details on how Variable Scope works and when it is useful.
A namespace is a way to organize names for variables and operators in a hierarchical manner (e.g. "scopeA/scopeB/scopeC/op1").
tf.name_scope creates a namespace for operators in the default graph.
tf.variable_scope creates a namespace for both variables and operators in the default graph.
tf.op_scope: the same as tf.name_scope, but for the graph in which the specified variables were created.
tf.variable_op_scope: the same as tf.variable_scope, but for the graph in which the specified variables were created.
Links to the sources above help to disambiguate this documentation issue.
This example shows that all types of scopes define namespaces for both variables and operators, with the following differences:
scopes defined by tf.variable_op_scope or tf.variable_scope are compatible with tf.get_variable (it ignores the two other scopes)
tf.op_scope and tf.variable_op_scope just select a graph from a list of specified variables to create a scope for. Other than that, their behavior is equal to tf.name_scope and tf.variable_scope respectively
tf.variable_scope and tf.variable_op_scope add a specified or default initializer.
Let's make it simple: just use tf.variable_scope. Quoting a TF developer:
Currently, we recommend everyone to use variable_scope and not use name_scope except for internal code and libraries.
Besides the fact that variable_scope's functionality basically extends that of name_scope, together they behave in a way that may surprise you:
with tf.name_scope('foo'):
    with tf.variable_scope('bar'):
        x = tf.get_variable('x', shape=())
        x2 = tf.square(x**2, name='x2')
print(x.name)
# bar/x:0
print(x2.name)
# foo/bar/x2:0
This behavior has its uses and justifies the coexistence of both scopes, but unless you know what you are doing, sticking to variable_scope only will save you some headaches.
As of API r0.11, op_scope and variable_op_scope are both deprecated.
name_scope and variable_scope can be nested:
with tf.name_scope('ns'):
    with tf.variable_scope('vs'):  # scope creation
        v1 = tf.get_variable("v1", [1.0])   # v1.name = 'vs/v1:0'
        v2 = tf.Variable([2.0], name='v2')  # v2.name = 'ns/vs/v2:0'
        v3 = v1 + v2                        # v3.name = 'ns/vs/add:0'
You can think of them as two groups: variable_op_scope and op_scope take a set of variables as input and are designed to create operations. The difference is in how they affect the creation of variables with tf.get_variable:
def mysum(a, b, name=None):
    with tf.op_scope([a, b], name, "mysum") as scope:
        v = tf.get_variable("v", 1)
        v2 = tf.Variable([0], name="v2")
        assert v.name == "v:0", v.name
        assert v2.name == "mysum/v2:0", v2.name
        return tf.add(a, b)

def mysum2(a, b, name=None):
    with tf.variable_op_scope([a, b], name, "mysum2") as scope:
        v = tf.get_variable("v", 1)
        v2 = tf.Variable([0], name="v2")
        assert v.name == "mysum2/v:0", v.name
        assert v2.name == "mysum2/v2:0", v2.name
        return tf.add(a, b)

with tf.Graph().as_default():
    op = mysum(tf.Variable(1), tf.Variable(2))
    op2 = mysum2(tf.Variable(1), tf.Variable(2))
    assert op.name == 'mysum/Add:0', op.name
    assert op2.name == 'mysum2/Add:0', op2.name
Notice the name of the variable v in the two examples.
The same goes for tf.name_scope and tf.variable_scope:
with tf.Graph().as_default():
    with tf.name_scope("name_scope") as scope:
        v = tf.get_variable("v", [1])
        op = tf.add(v, v)
        v2 = tf.Variable([0], name="v2")
        assert v.name == "v:0", v.name
        assert op.name == "name_scope/Add:0", op.name
        assert v2.name == "name_scope/v2:0", v2.name

with tf.Graph().as_default():
    with tf.variable_scope("name_scope") as scope:
        v = tf.get_variable("v", [1])
        op = tf.add(v, v)
        v2 = tf.Variable([0], name="v2")
        assert v.name == "name_scope/v:0", v.name
        assert op.name == "name_scope/Add:0", op.name
        assert v2.name == "name_scope/v2:0", v2.name
You can read more about variable scope in the tutorial.
A similar question was asked before on Stack Overflow.
From the last section of this page of the tensorflow documentation: Names of ops in tf.variable_scope()
[...] when we do with tf.variable_scope("name"), this implicitly opens a tf.name_scope("name"). For example:
with tf.variable_scope("foo"):
x = 1.0 + tf.get_variable("v", [1])
assert x.op.name == "foo/add"
Name scopes can be opened in addition to a variable scope, and then they will only affect the names of the ops, but not of variables.
with tf.variable_scope("foo"):
with tf.name_scope("bar"):
v = tf.get_variable("v", [1])
x = 1.0 + v
assert v.name == "foo/v:0"
assert x.op.name == "foo/bar/add"
When opening a variable scope using a captured object instead of a string, we do not alter the current name scope for ops.
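A related sketch (adapted from the same guide's section on capturing a scope object) showing that a captured variable scope keeps its own name wherever it is reopened:
with tf.variable_scope("foo") as foo_scope:
    assert foo_scope.name == "foo"
with tf.variable_scope("bar"):
    with tf.variable_scope("baz") as other_scope:
        assert other_scope.name == "bar/baz"
        with tf.variable_scope(foo_scope) as foo_scope2:
            assert foo_scope2.name == "foo"  # unchanged: it jumps out of "bar/baz"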
Tensorflow 2.0 Compatible Answer: The explanations by Andrzej Pronobis and Salvador Dali give a very detailed account of the scope-related functions.
Of the scope functions discussed above, the ones that are still active as of now (17th Feb 2020) are variable_scope and name_scope.
Below are the 2.0-compatible calls for those functions, for the benefit of the community.
Function in 1.x:
tf.variable_scope
tf.name_scope
Respective Function in 2.x:
tf.compat.v1.variable_scope
tf.name_scope (tf.compat.v2.name_scope if migrated from 1.x to 2.x)
For more information about migration from 1.x to 2.x, please refer to this Migration Guide.
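For illustration, a small sketch of those compat calls under TF 2.x (a minimal example assuming graph mode, since tensor names are meaningless for eager tensors):
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # use 1.x-style graph building

with tf.compat.v1.variable_scope("block"):
    v = tf.compat.v1.get_variable("v", shape=[1])
with tf.name_scope("ops"):
    y = tf.add(v, 1.0, name="plus_one")

print(v.name)  # block/v:0
print(y.name)  # ops/plus_one:0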