Retrieving an unnamed variable in tensorflow

I've trained up a model and saved it in a checkpoint, but only just realized that I forgot to name one of the variables I'd like to inspect when I restore the model.
I know how to retrieve named tensors from TensorFlow (g = tf.get_default_graph() and then g.get_tensor_by_name(name)). In this case I know the variable's scope, but it is unnamed. I've also tried looking in tf.GraphKeys.GLOBAL_VARIABLES, but for some reason it doesn't appear there.
Here's how it's defined in the model:
with tf.name_scope("contrastive_loss") as scope:
    l2_dist = tf.cast(tf.sqrt(1e-4 + tf.reduce_sum(tf.subtract(pred_left, pred_right), 1)), tf.float32)  # the variable I want
    # I use it here when calculating another named tensor, if that helps.
    con_loss = contrastive_loss(l2_dist)
    loss = tf.reduce_sum(con_loss, name="loss")
Is there any way of finding the variable without a name?

First of all, following up on my first comment, it makes sense that tf.get_collection given a name scope is not working. From the documentation, if you provide a scope, only variables or operations with assigned names will be returned. So that's out.
One thing you can try is to list the name of every node in your Graph with:
print([node.name for node in tf.get_default_graph().as_graph_def().node])
Or possibly, when restoring from a checkpoint:
saver = tf.train.import_meta_graph('/path/to/meta/graph')
sess = tf.Session()
saver.restore(sess, '/path/to/checkpoints')
graph = sess.graph
print([node.name for node in graph.as_graph_def().node])
Another option is to display the graph using TensorBoard, or in a Jupyter notebook with the show_graph command. There might be a built-in show_graph now, but that link is to a git repository where one is defined. You will then have to search for your operation in the graph and probably retrieve it with:
my_op = tf.get_collection('full_operation_name')[0]
If you want to set it up in the future so that you can retrieve it by name, you need to add it to a collection using tf.add_to_collection:
my_op = tf.some_operation(stuff, name='my_op')
tf.add_to_collection('my_op_name', my_op)
Then retrieve it by restoring your graph and then using:
my_restored_op = tf.get_collection('my_op_name')[0]
You might also be able to get by just naming it and then specifying its scope in tf.get_collection instead, but I am not sure. More information and a helpful tutorial can be found here.
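To make this concrete for the loss in the question, a minimal sketch of that approach might look like the following (assuming pred_left, pred_right, and contrastive_loss from the question; the collection key "l2_dist" is just an example):
with tf.name_scope("contrastive_loss") as scope:
    # give the tensor an explicit name and register it in a collection
    l2_dist = tf.cast(tf.sqrt(1e-4 + tf.reduce_sum(tf.subtract(pred_left, pred_right), 1)),
                      tf.float32, name="l2_dist")
    tf.add_to_collection("l2_dist", l2_dist)

# after restoring the graph later:
# l2_dist = tf.get_collection("l2_dist")[0]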

tf.get_collection does not work with unnamed variables. So list the operations with:
graph = sess.graph
print(graph.get_operations())
... find your tensor in the list and then:
global_step_tensor = graph.get_tensor_by_name('complete_operation_name:0')
I found this tutorial very helpful for understanding the mechanism behind these.
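As a concrete sketch for the original question, one could filter the operation list by the known scope and then fetch the tensor by its auto-generated name (the names below are guesses and should be checked against the printed list):
ops_in_scope = [op for op in graph.get_operations()
                if op.name.startswith('contrastive_loss/')]
for op in ops_in_scope:
    print(op.name, op.type)  # look for the Sqrt/Cast op that produces l2_dist
# once identified, grab its output tensor by its auto-generated name, e.g.:
l2_dist = graph.get_tensor_by_name('contrastive_loss/Cast:0')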

Related

Tensorflow: get variable or tensor by name in a nested scope

I have the following code which defines a nested tf.variable_scope().
def function(inputs):
    with tf.variable_scope("1st") as scope:
        # define some variables here using tf.get_variable()
        with tf.variable_scope("2nd") as scope:
            # define some variables here using tf.get_variable()
            my_wanted_variable = tf.get_variable('my_wanted_variable', [dim0, dim1],
                                                 tf.float32,
                                                 tf.constant_initializer(0.0))
In another class, I want to get my_wanted_variable, I use
with tf.variable_scope("function/2nd", reuse=True):
got_my_wanted_variable = tf.get_variable("my_wanted_variable")
but I got the following error:
ValueError: Variable function/2nd/my_wanted_variable does not exist,
or was not created with tf.get_variable(). Did you mean to set
reuse=None in VarScope?
If I set reuse=None when fetching my_wanted_variable, then I get:
ValueError: Shape of a new variable (function/2nd/my_wanted_variable)
must be fully defined, but instead was .
So, how can I get a variable (or tensor) by name in a nested scope?
Added debug info:
I used print(xxx.name) to check their actual names and scopes, and I found that although the scope is right (e.g. xxx/function/2nd), the variables defined in scopes 1st and 2nd are not named with their assigned names; for example, my_wanted_variable shows up as xxx/function/2nd/sub:0.
The :0 is normal for every variable (it symbolizes the endpoint).
The name sub is not that weird: it just shows that you did not name the variable explicitly, so the tensor was given the name of the operation that produced it (probably tf.sub()).
Explicitly use the argument name="my_wanted_variable". Try without a scope first to be sure it is named appropriately, then check with print(nn.name) or by inspecting the nodes of the graph_def object.
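For reference, here is a minimal sketch of creating an explicitly named variable in nested scopes and fetching it back; the shape [3, 4] is a placeholder and the scope names simply mirror the question:
with tf.variable_scope("1st"):
    with tf.variable_scope("2nd"):
        v = tf.get_variable("my_wanted_variable", [3, 4], tf.float32,
                            tf.constant_initializer(0.0))
print(v.name)  # 1st/2nd/my_wanted_variable:0

# elsewhere, re-enter the same scopes with reuse=True to fetch it
with tf.variable_scope("1st"):
    with tf.variable_scope("2nd", reuse=True):
        got_my_wanted_variable = tf.get_variable("my_wanted_variable")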
Alternatively, we can inspect all the tensors in debug mode with:
with tf.Session() as sess:
    model = tf.train.import_meta_graph('./model.ckpt-30000.meta')
    model.restore(sess, tf.train.latest_checkpoint('./'))
    graph = tf.get_default_graph()
then, in debug mode,
graph._collections
This will list all the collections: context_tensor, training_op, trainable_variables, variables, and so on.
Or, even better:
[tensor.name for tensor in tf.get_default_graph().as_graph_def().node]

How can I reroute the training input pipeline to test pipeline in tensorflow using tf.contrib.graph_editor?

Suppose I have a training input pipeline which ultimately produces train_x and train_y using tf.train.shuffle_batch. I export the meta graph and re-import it in another code file. Now I want to detach the input pipeline, i.e. train_x and train_y, and connect new test_x and test_y tensors. How can I accomplish this using tf.contrib.graph_editor?
EDIT: As suggested by @iga, I changed my input directory using input_map:
filenames = tf.train.match_filenames_once(FLAGS.data_dir + '*', name='matching_filenames')
if FLAGS.ckpt != '':
    latest = FLAGS.log_dir + FLAGS.ckpt
else:
    latest = tf.train.latest_checkpoint(FLAGS.log_dir)
if not latest or not os.path.exists(latest+'.meta'):
    print("checkpoint " + latest + " does not exist")
    sys.exit(1)
saver = tf.train.import_meta_graph(latest+'.meta',
                                   input_map={'matching_filenames:0': filenames},
                                   import_scope='import')
g = tf.get_default_graph()
but I get the following error:
ValueError: graph_def is invalid at node u'matching_filenames/Assign':
Input tensor 'matching_filenames:0' Cannot convert a tensor of type
string to an input of type string_ref.
Is there any elegant way to resolve this?
For this task, you should be able to just use the input_map argument of https://www.tensorflow.org/api_docs/python/tf/import_graph_def. If you are using import_meta_graph, you can pass input_map in its kwargs and it will be passed down to import_graph_def.
RESPONSE TO EDIT: I am assuming that your original graph (the one you are deserializing) had the same matching_filenames variable. Quite confusingly, the tensor name "matching_filenames:0" actually refers to the tensor going from the VariableV2 op to the Assign op. The type of this edge is string_ref and you don't really want to break that edge.
The output from a variable typically goes through an identity op called matching_filenames/read. This is what you want to use as the key in your input_map. For the value, you want the same tensor in your new filenames. So, your call should probably look like:
tf.train.import_meta_graph(latest+'.meta',
                           input_map={'matching_filenames/read': filenames.read_value()},
                           import_scope='import')
In general, variables are fairly complicated. If this does not work, you can use some placeholder op and feed the names into it manually.
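If remapping the read tensor still gives trouble, a rough sketch of that placeholder fallback could look like this (the placeholder name and shape are assumptions):
filenames_ph = tf.placeholder(tf.string, shape=[None], name='filenames_ph')
saver = tf.train.import_meta_graph(latest + '.meta',
                                   input_map={'matching_filenames/read': filenames_ph},
                                   import_scope='import')
# at run time, feed the test file list into the placeholder:
# sess.run(fetches, feed_dict={filenames_ph: test_file_list})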

How to "append" Op at the beginning of a TensorFlow graph?

I have a GraphDef proto file which I am importing using tf.import_graph_def. Ops can be added at the end of the graph like this:
final_tensor = tf.import_graph_def(graph_def, name='', return_elements=['final_tensor'])
new_tensor = some_op(final_tensor)
But I want to add Ops at the beginning of the graph, so essentially the first Op in the graph_def needs to take the output of my Op as input, how do I do it?
Finally found a way to do this. I am sure the function Yarolsav mentioned in the comments does something similar internally.
new_input = graph_def.node.add()
new_input.op = 'new_op_name'  # e.g. 'Const', 'Placeholder', 'Add', etc.
new_input.name = 'some_new_name'
# set any attributes you want for new_input here
old_input.input[0] = 'some_new_name'  # 'old_input' is the existing first node; the name must match the one above
For details about how to set the attributes, see this file.
The script @Priyatham gives in the link is a good example of how to add a node to a TF graph_def. name, op, input, and attr are the 4 required elements. name and op can be assigned directly, whereas input should be set with extend and attr with the CopyFrom method, like:
from tensorflow.core.framework import attr_value_pb2, types_pb2

new_node = graph_def.node.add()
new_node.op = "Cast"
new_node.name = "To_Float"
new_node.input.extend(["To_Float"])  # the entry should name the upstream node whose output feeds this Cast
new_node.attr["DstT"].CopyFrom(attr_value_pb2.AttrValue(type=types_pb2.DT_FLOAT))
new_node.attr["SrcT"].CopyFrom(attr_value_pb2.AttrValue(type=types_pb2.DT_FLOAT))
new_node.attr["Truncate"].CopyFrom(attr_value_pb2.AttrValue(b=True))

How to visualize a tensor summary in tensorboard

I'm trying to visualize a tensor summary in tensorboard. However I can't see the tensor summary at all in the board. Here is my code:
out = tf.strided_slice(logits, begin=[self.args.uttWindowSize-1, 0],
                       end=[-self.args.uttWindowSize+1, self.args.numClasses],
                       strides=[1, 1], name='softmax_truncated')
tf.summary.tensor_summary('softmax_input', out)
where out is a multi-dimensional tensor. I guess there must be something wrong with my code. Probably I used the tensor_summary function incorrectly.
You create a summary op, but you never run it and never write the result (see the documentation).
To actually create a summary you need to do the following:
# Create a summary operation
summary_op = tf.summary.tensor_summary('softmax_input', out)
# Create the summary
summary_str = sess.run(summary_op)
# Create a summary writer
writer = tf.train.SummaryWriter(...)
# Write the summary
writer.add_summary(summary_str)
Explicitly writing a summary (last two lines) is only necessary if you don't have a higher level helper like a Supervisor. Otherwise you invoke
sv.summary_computed(sess, summary_str)
and the Supervisor will handle it.
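Note that in more recent TensorFlow 1.x releases the writer class moved to tf.summary.FileWriter; assuming such a version, the writer lines above would become (the log directory is a placeholder):
writer = tf.summary.FileWriter('/tmp/logdir')  # replaces tf.train.SummaryWriter
writer.add_summary(summary_str, global_step=0)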
More info, also see:
How to manually create a tf.Summary()
Hopefully this workaround achieves what you want.
If you wish to view the tensor values, you can convert them using as_string, then use summary.text. The values will appear in the tensorboard text tab.
Not tried with 3D tensors, but feel free to slice according to needs.
Code snippet, which also includes a tf.Print statement to get console output as well:
predictions = tf.argmax(reshaped_logits, 1)
txtPredictions = tf.Print(tf.as_string(predictions),[tf.as_string(predictions)], message='predictions', name='txtPredictions')
txtPredictions_op = tf.summary.text('predictions', txtPredictions)
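The text summary still has to be evaluated and written like any other summary; a minimal sketch (the log directory and step are placeholders):
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter('./logs')
summary_str = sess.run(merged)
writer.add_summary(summary_str, global_step=0)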
Not sure whether this is kinda obvious, but you could use something like
def make_tensor_summary(tensor, name='defaultTensorName'):
    for i in range(tensor.get_shape()[0]):
        for j in range(tensor.get_shape()[1]):
            tf.summary.scalar(name + str(i) + '_' + str(j), tensor[i, j])
in case you know it is a 'matrix-shaped' Tensor in advance.

About names of variable scope in tensorflow

Recently I have been trying to learn to use TensorFlow, and I do not understand how variable scopes work exactly. In particular, I have the following problem:
import tensorflow as tf
from tensorflow.models.rnn import rnn_cell
from tensorflow.models.rnn import rnn
inputs = [tf.placeholder(tf.float32,shape=[10,10]) for _ in range(5)]
cell = rnn_cell.BasicLSTMCell(10)
outpts, states = rnn.rnn(cell, inputs, dtype=tf.float32)
print outpts[2].name
# ==> u'RNN/BasicLSTMCell_2/mul_2:0'
Where does the '_2' in 'BasicLSTMCell_2' come from? How does it work when later using tf.get_variable(reuse=True) to get the same variable again?
Edit: I think I found a related problem:
def creating(s):
    with tf.variable_scope('test'):
        with tf.variable_scope('inner'):
            a = tf.get_variable(s, [1])
    return a

def creating_mod(s):
    with tf.variable_scope('test'):
        with tf.variable_scope('inner'):
            a = tf.Variable(0.0, name=s)
    return a

tf.reset_default_graph()
a = creating('a')
b = creating_mod('b')
c = creating('c')
d = creating_mod('d')
print a.name, '\n', b.name,'\n', c.name,'\n', d.name
The output is
test/inner/a:0
test_1/inner/b:0
test/inner/c:0
test_3/inner/d:0
I'm confused...
The answer above is somewhat misleading.
Let me explain why you got two different scope names, even though it looks like you defined two identical functions: creating and creating_mod.
This is simply because you used tf.Variable(0.0, name=s) to create the variable in the function creating_mod.
ALWAYS use tf.get_variable, if you want your variable to be recognized by scope!
Check out this issue for more details.
Thanks!
The "_2" in "BasicLSTMCell_2" relates to the name scope in which the op outpts[2] was created. Every time you create a new name scope (with tf.name_scope()) or variable scope (with tf.variable_scope()) a unique suffix is added to the current name scope, based on the given string, possibly with an additional suffix to make it unique. The call to rnn.rnn(...) has the following pseudocode (simplified and using public API methods for clarity):
outputs = []
with tf.variable_scope("RNN"):
    for timestep, input_t in enumerate(inputs):
        if timestep > 0:
            tf.get_variable_scope().reuse_variables()
        with tf.variable_scope("BasicLSTMCell"):
            outputs.append(...)
return outputs
If you look at the names of the tensors in outpts, you'll see that they are the following:
>>> print [o.name for o in outpts]
[u'RNN/BasicLSTMCell/mul_2:0',
u'RNN/BasicLSTMCell_1/mul_2:0',
u'RNN/BasicLSTMCell_2/mul_2:0',
u'RNN/BasicLSTMCell_3/mul_2:0',
u'RNN/BasicLSTMCell_4/mul_2:0']
When you enter a new name scope (by entering a with tf.name_scope("..."): or with tf.variable_scope("..."): block), TensorFlow creates a new, unique name for the scope. The first time the "BasicLSTMCell" scope is entered, TensorFlow uses that name verbatim, because it hasn't been used before (in the "RNN/" scope). The next time, TensorFlow appends "_1" to the scope to make it unique, and so on up to "RNN/BasicLSTMCell_4".
The main difference between variable scopes and name scopes is that a variable scope also has a set of name-to-tf.Variable bindings. By calling tf.get_variable_scope().reuse_variables(), we instruct TensorFlow to reuse rather than create variables for the "RNN/" scope (and its children), after timestep 0. This ensures that the weights are correctly shared between the multiple RNN cells.
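A tiny standalone illustration of that sharing mechanism (the variable name "w" is just for the example):
with tf.variable_scope("RNN"):
    w1 = tf.get_variable("w", [10, 10])
with tf.variable_scope("RNN", reuse=True):
    w2 = tf.get_variable("w")  # returns the existing variable instead of creating a new one
print(w1 is w2)  # True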