Param for tf.contrib.summary.graph - tensorflow

I am using TensorFlow 1.12 with eager execution enabled, and I want to write the graph to my TensorBoard log. I found a function called tf.contrib.summary.graph; however, it requires a parameter called param. What should I pass for this parameter? Thanks.

As documented, the param parameter is for the graph object, which in eager can be a tf.Graph, tf.GraphDef, or a string containing a serialized GraphDef protocol buffer.
Note that in eager execution, by definition there isn't a single computation graph any more because the ops execute immediately instead of building a graph, so this is unlikely to be useful unless you're building traditional tf.Graph computation graphs in addition to running logic eagerly. We may introduce some ways to record graphs in eager mode for TF 2.0, but there still won't be a single graph.
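As a rough sketch only (assuming TF 1.12 with eager execution enabled; the 'logs' directory and the tensors are purely illustrative), you could build a separate tf.Graph and pass it as param:
import tensorflow as tf
tf.enable_eager_execution()

writer = tf.contrib.summary.create_file_writer('logs')
with writer.as_default(), tf.contrib.summary.always_record_summaries():
    # Eager ops never form a graph, so build an explicit tf.Graph to record
    g = tf.Graph()
    with g.as_default():
        x = tf.constant(1.0)
        y = x * 2.0
    tf.contrib.summary.graph(g, step=0)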

Related

When to use the @tf.function decorator and when not? I know tf.function builds a graph, but how do I know when to build graphs?

I started my TensorFlow journey when it had already reached 2.0.0, so I never used graphs and sessions as in version 1. But I recently came across tf.function and AutoGraph, which suit me (although, as far as I know, they are typically used only for the train step).
Now, when reading project code, I see many people use the @tf.function decorator on many other functions when they want to build graphs, but I don't exactly get their point. How do I know when to use a graph and when not?
Can anyone help me?
Solution
The @tf.function decorator conveniently converts a Python function to a static TensorFlow graph. TensorFlow operates in eager mode by default since version 2.0.0. Although eager mode helps with line-by-line execution, it comes with the pitfall of relatively slower execution compared to a static graph. Converting a function into a static graph increases execution speed while training your model.
Quoting the tf.function documentation:
Functions can be faster than eager code, especially for graphs with many small ops. But for graphs with a few expensive ops (like convolutions), you may not see much speedup.
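As a rough, illustrative timing sketch (the function and numbers here are made up for this answer; results will vary by machine), a loop of many small ops shows the kind of difference the documentation means:
import timeit
import tensorflow as tf

def many_small_ops(x):
    # lots of tiny ops: the case where a static graph helps the most
    for _ in range(100):
        x = x + 1
    return x

graph_fn = tf.function(many_small_ops)
x = tf.constant(0)
graph_fn(x)  # trace once so the one-off tracing cost is excluded from the timing

print("eager:", timeit.timeit(lambda: many_small_ops(x), number=100))
print("graph:", timeit.timeit(lambda: graph_fn(x), number=100))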
The static graph is created once and does not get updated if the function is called repeatedly with different values that are not passed as input arguments. You should avoid using @tf.function in such scenarios, or update the function definition (if possible) so that all the necessary variability comes in through the input arguments. If your function gets all of its inputs through its arguments, then applying @tf.function will not cause any problems.
Here is an example.
### When not to use @tf.function ###
import time
import tensorflow as tf

# some variable that changes with time, captured from the enclosing scope
var = time.time()

@tf.function
def func(*args, **kwargs):
    # your code; the current value of `var` is baked in when the graph is traced
    return var
In the example above, func() depends on var but does not receive var through its arguments. Thus, when @tf.function traces the function for the first time, it creates a static graph for func() using the value var holds at that moment. When the value of var changes later, the static graph does not get updated. I would also highly encourage you to look through the references section.
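By contrast, here is a minimal sketch of the recommended pattern, where the changing value is passed in through the arguments (the doubling is just for illustration):
### Prefer passing changing values through the arguments ###
import tensorflow as tf

@tf.function
def func(var):
    # var arrives as an input tensor, so every call sees the current value
    return var * 2

print(func(tf.constant(1.0)))   # tf.Tensor(2.0, ...)
print(func(tf.constant(3.0)))   # tf.Tensor(6.0, ...) -- no stale captured value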
For Debugging
Quoting the source:
You can use tf.config.experimental_run_functions_eagerly (which temporarily disables running functions as functions) for debugging purposes.
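For example (a small sketch assuming TF 2.x; the function name is illustrative):
import tensorflow as tf

# Force @tf.function-decorated functions to run eagerly while debugging
tf.config.experimental_run_functions_eagerly(True)

@tf.function
def debug_me(x):
    # With eager forced on, Python-side tools (print, pdb, ...) work on every call
    print("running eagerly, x =", x)
    return x + 1

debug_me(tf.constant(1))

# Restore the default graph behaviour afterwards
tf.config.experimental_run_functions_eagerly(False)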
References
Better performance with tf.function
When to utilize tf.function
TensorFlow 2.0: tf.function and AutoGraph

How to map words to vocabulary index in TF 2.0 without eager execution

I have a Keras model that trains fine when eager mode is on (TF 2.1.0). One of my features is a string that I need to map to its corresponding vocabulary index. However, with eager execution disabled, I cannot find a neat way to do this.
I was initially using tft.apply_vocabulary, which used to work fine but fails without eager execution. I also tried tf.lookup.StaticVocabularyTable:
table = tf.lookup.StaticVocabularyTable(TextFileIdTableInitializer('item_vocab.txt'), 1)
out = table.lookup(input_strings)
which (with eager mode off) fails with:
tensorflow.python.framework.errors_impl.FailedPreconditionError: Table not initialized.
[[{{node transformer/hash_table_Lookup_1/hash_table_Lookup/LookupTableFindV2}}]]
I am able to run the table's _initialize method in a tf.Session, but that feels like too much work for such a common task and is not TF 2.0-compatible.
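For reference, here is roughly what that explicit, session-based initialization looks like (sketched with the standard tf.lookup.TextFileInitializer rather than the contrib initializer above; 'item_vocab.txt' is my vocab file with one item per line):
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

init = tf.lookup.TextFileInitializer(
    'item_vocab.txt',
    key_dtype=tf.string, key_index=tf.lookup.TextFileIndex.WHOLE_LINE,
    value_dtype=tf.int64, value_index=tf.lookup.TextFileIndex.LINE_NUMBER)
table = tf.lookup.StaticVocabularyTable(init, num_oov_buckets=1)

input_strings = tf.compat.v1.placeholder(tf.string, shape=[None])
out = table.lookup(input_strings)

with tf.compat.v1.Session() as sess:
    # The table must be initialized explicitly before the first lookup
    sess.run(tf.compat.v1.tables_initializer())
    print(sess.run(out, feed_dict={input_strings: ['some_item']}))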
So, how do you map strings to integer indexes from a vocab file without eager execution?
Why not eager?
I have the impression that graph-mode training has wider support (e.g. multi-GPU training) and better performance, and I'm trying to make sure my code works with eager mode disabled so that I can eventually turn it off when I'm done developing. Is that a sensible goal?

What is the meaning behind "WARNING:tensorflow:Passing a `GraphDef` to the SummaryWriter is deprecated"?

What is the meaning behind that warning? I'd love it if someone could explain what a Graph object is and why passing sess.graph_def shouldn't be done.
summary_writer = tf.summary.FileWriter('logistic_logs/',graph_def=sess.graph_def)
In TensorFlow 1.x you used to define your operations as an acyclic computational graph. With the introduction of TensorFlow 2.0 we no longer have a session object, and together with the session, the predefined full graph containing all operations is also gone. You can learn more about this under Eager Execution.
You can learn more about tensorflow graphs and how to visualize them here.
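For completeness, the warning itself is only asking for the tf.Graph object rather than its serialized GraphDef, so in TF 1.x the immediate fix is along these lines (a sketch of the questioner's snippet with the argument swapped):
import tensorflow as tf

with tf.Session() as sess:
    # Pass the Graph object, not the GraphDef, to the FileWriter
    summary_writer = tf.summary.FileWriter('logistic_logs/', graph=sess.graph)
    summary_writer.close()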

Is an eager-graph compatible same code solution possible?

I am trying to write code that is both eager and graph compatible. However, there is very little information online on how to do this, it being a literal footnote on TensorFlow's website. Furthermore, what they have written is confusing, saying:
The same code written for eager execution will also build a graph during graph execution. Do this by simply running the same code in a new Python session where eager execution is not enabled.
This implies that a same code solution is possible, where the only change required is the addition or removal of tf.enable_eager_execution().
Currently I use tf.keras to define my model and tf.data for my input pipeline. However, many eager operations don't work in graph, with the opposite also being true.
For example, I keep track of my number of epochs using tf.train.Checkpoint(). In eager mode, after restoring, I can access it using epochs.numpy() to assign its value to a local variable. However, this does not work in graph mode, which instead requires sess.run(epochs), because tensor values are not available until the graph is run.
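Roughly, the divergence looks like this (a sketch; the variable names are just for illustration):
import tensorflow as tf

epochs = tf.Variable(0, dtype=tf.int64, name='epochs')
checkpoint = tf.train.Checkpoint(epochs=epochs)

if tf.executing_eagerly():
    current_epoch = epochs.numpy()          # works in eager mode only
else:
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        current_epoch = sess.run(epochs)    # required in graph mode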
Again, to compute my gradients in eager I need to use some form of autograd, in my case tf.GradientTape(). This is not compatible with graphs, as "tf.GradientTape.gradients() does not support graph control flow."
I see that tfe.py_func exists, but once again, this only works when eager is not enabled, thus not helping for this problem.
So how do I make a same code solution, when it seems that many aspects of eager and graph directly conflict with each other?

Why is TensorFlow while_loop node required?

Why does the basic static, compiled computation-graph structure of TF (as opposed to a dynamic graph) necessitate a dedicated while-loop node rather than allowing the use of "regular" Python control-flow expressions?
Thanks.
TensorFlow builds the computational graph and makes it static (unchangeable) for efficiency. Once it's finalized, telling the TensorFlow graph to do something is like sending input to a separate program that you can no longer change, besides passing in different inputs. So at that point the TensorFlow graph has no knowledge of the Python control flow; it just runs when called. Because of this, it needs to know explicitly, ahead of time, where you want a while loop inside the TensorFlow graph. You can, however, still use Python control flow and just call the TensorFlow graph as though it were a specific function.
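For example, a loop that must run inside the graph has to be expressed with the dedicated node, roughly like this (TF 1.x style; the bound of 10 is arbitrary):
import tensorflow as tf

# Loop as a graph node: the condition and body become part of the graph itself
i = tf.constant(0)
cond = lambda i: tf.less(i, 10)
body = lambda i: tf.add(i, 1)
result = tf.while_loop(cond, body, [i])

with tf.Session() as sess:
    print(sess.run(result))   # 10

# A plain Python loop, by contrast, just unrolls into 10 separate add ops
# at graph-construction time; the finished graph never "sees" the loop itself.
x = tf.constant(0)
for _ in range(10):
    x = x + 1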