It seems that tf.layers comes in two flavours: functions and classes. I normally use the functions directly (e.g., tf.layers.dense) but I'd like to know how to use the classes directly (tf.layers.Dense). I've started experimenting with the new eager execution mode in TensorFlow, and I think using classes is going to be useful there as well, but I haven't seen good examples in the documentation. Is there any part of the TF documentation that shows how these are used?
I guess it would make sense to use them in a class where these layers are instantiated in __init__ and then linked in the __call__ method once the inputs and dimensions are known?
Are these tf.layer classes related to tf.keras.Model? Is there an equivalent wrapper class for using tf.layers?
Update: for eager execution there's tfe.Network, which must be inherited from. There's an example here
tf.layers and tf.keras.layers classes are generally interchangeable, and in fact at head (and thus by the next release, 1.9) the former actually inherit from the latter.
TensorFlow is moving towards consolidating on tf.keras APIs for constructing models as that makes state ownership more explicit (e.g., parameters are "owned" by the Layer object, as opposed to the functional style where all model parameters are put in a "collection" associated with the complete graph). This style works well for both eager execution and graph construction (support for eager execution is improving with every release). I'd recommend using tf.keras.layers and tf.keras.Model.
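For example, here is a minimal sketch of this style (the layer sizes and names are just illustrative): layers are created, and own their parameters, in __init__, and are wired together in call() once the inputs are known, exactly as guessed in the question.

import tensorflow as tf

class TwoLayerNet(tf.keras.Model):
  def __init__(self):
    super(TwoLayerNet, self).__init__()
    # Each Layer object owns its own variables.
    self.dense1 = tf.keras.layers.Dense(64, activation=tf.nn.relu)
    self.dense2 = tf.keras.layers.Dense(10)

  def call(self, inputs):
    # Wiring happens here, when input shapes are known.
    return self.dense2(self.dense1(inputs))

model = TwoLayerNet()
logits = model(tf.zeros([1, 784]))  # variables are built on this first call

The same class definition works under both eager execution and graph construction.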
Some examples that you may find useful:
MNIST in the tensorflow/models repository
The programmer's guide
Other eager execution samples (where the exact same model definition works for both graph execution and eager execution).
Not all existing TensorFlow examples have been moved to this style, but they slowly will.
Hope that helps.
The difference between the two is muddled in my head, notwithstanding the nuances of what is eager and what isn't. From what I gather, the @tf.function decorator has two benefits:
it converts functions into TensorFlow graphs for performance, and
allows for a more Pythonic style of coding by translating many (but not all) commonplace Python operations into tensor operations, e.g. if into tf.cond, etc.
From the definition of tf.py_function, it seems that it does just point 2 above. Hence, why bother with tf.py_function when tf.function does the job with a performance improvement to boot, and without tf.py_function's inability to be serialized?
They do indeed start to resemble each other as they are improved, so it is useful to see where they come from. Initially, the difference was that:
@tf.function turns Python code into a series of TensorFlow graph nodes.
tf.py_function wraps an existing python function into a single graph node.
This means that tf.function requires your code to be relatively simple while tf.py_function can handle any python code, no matter how complex.
While this line is indeed blurring, with tf.py_function doing more interpretation and tf.function accepting lots of complex Python commands, the general rule stays the same:
If you have relatively simple logic in your python code, use tf.function.
When you use complex code, like large external libraries (e.g. connecting to a database, or loading a large external NLP package) use tf.py_function.
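To make the contrast concrete, here is a minimal sketch (the function names are illustrative):

import tensorflow as tf

# Simple numeric logic: AutoGraph rewrites the if-statement into graph
# control flow (tf.cond), so the whole function becomes a TensorFlow graph.
@tf.function
def scaled(x):
  if tf.reduce_sum(x) > 0:
    y = x * 2.0
  else:
    y = x / 2.0
  return y

# Arbitrary Python logic that TensorFlow cannot trace; tf.py_function wraps
# it as a single opaque graph node that runs eagerly (hence .numpy() works).
def python_side_len(t):
  return len(t.numpy().decode('utf-8'))

@tf.function
def string_length(s):
  return tf.py_function(python_side_len, inp=[s], Tout=tf.int32)

print(scaled(tf.constant([1.0, 2.0])))      # tf.Tensor([2. 4.], ...)
print(string_length(tf.constant('hello')))  # tf.Tensor(5, ...)

Note that scaled compiles entirely to graph ops and can be serialized, while string_length contains a node that must call back into the Python interpreter.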
Can anyone please explain this part of the tutorial to me?
Here is the link: https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification
The part in question:
Customizing the model implementation
Keras is the recommended high-level model API for TensorFlow, and we encourage using Keras models (via tff.learning.from_keras_model or tff.learning.from_compiled_keras_model) in TFF whenever possible.
However, tff.learning provides a lower-level model interface, tff.learning.Model, that exposes the minimal functionality necessary for using a model for federated learning. Directly implementing this interface (possibly still using building blocks like tf.keras.layers) allows for maximum customization without modifying the internals of the federated learning algorithms.
So let's do it all over again from scratch.
Defining model variables, forward pass, and metrics
The first step is to identify the TensorFlow variables we're going to work with. In order to make the following code more legible, let's define a data structure to represent the entire set. This will include variables such as weights and bias that we will train, as well as variables that will hold various cumulative statistics and counters we will update during training, such as loss_sum, accuracy_sum, and num_examples.
import collections

MnistVariables = collections.namedtuple(
    'MnistVariables', 'weights bias num_examples loss_sum accuracy_sum')
Roughly analogous to the multiple paths Keras exposes to create a Keras model, TFF exposes multiple distinct ways of creating a tff.learning.Model. One of them is through the constructor functions tff.learning.from_keras_model or tff.learning.from_compiled_keras_model, but each of these constructs and returns an instance of the abstract base class tff.learning.Model. The purpose of this section of the tutorial is to show that it is possible to instead construct such an instance directly, by implementing the appropriate methods of the abstract interface.
If it is the collections.namedtuple MnistVariables you are asking about, it is simply a data container class introduced for convenience, to help group the tf.Variables that the TFF runtime will use to track state during training. One important thing to note from the tff.learning.Model documentation, evidenced by this tutorial, is the line:
All tf.Variables should be introduced in __init__
If you are familiar with TensorFlow Variables, you will understand that controlling their instantiation is quite important.
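For concreteness, the tutorial then fills this container with tf.Variables along the following lines (a sketch closely following the tutorial; the namedtuple is repeated so the snippet stands alone):

import collections
import tensorflow as tf

MnistVariables = collections.namedtuple(
    'MnistVariables', 'weights bias num_examples loss_sum accuracy_sum')

def create_mnist_variables():
  return MnistVariables(
      # Trainable parameters of a simple linear MNIST classifier.
      weights=tf.Variable(tf.zeros([784, 10]), name='weights', trainable=True),
      bias=tf.Variable(tf.zeros([10]), name='bias', trainable=True),
      # Non-trainable counters for the cumulative statistics mentioned above.
      num_examples=tf.Variable(0.0, name='num_examples', trainable=False),
      loss_sum=tf.Variable(0.0, name='loss_sum', trainable=False),
      accuracy_sum=tf.Variable(0.0, name='accuracy_sum', trainable=False))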
I am trying to write code that is eager and graph compatible. However, there is very little information online on how to do this, amounting to a literal footnote on TensorFlow's website. Furthermore, what they have written is confusing:
The same code written for eager execution will also build a graph during graph execution. Do this by simply running the same code in a new Python session where eager execution is not enabled.
This implies that a same code solution is possible, where the only change required is the addition or removal of tf.enable_eager_execution().
Currently I use tf.keras to define my model and tf.data for my input pipeline. However, many eager operations don't work in graph, with the opposite also being true.
For example, I keep track of my number of epochs using tf.train.Checkpoint(). In eager mode, after restoring I can access it with epochs.numpy() to assign its value to a local variable. However, this does not work with graphs, which instead require sess.run(epochs), since tensor values are only available when run inside a session.
Again, to compute my gradients in eager I need to use some form of autograd, in my case tf.GradientTape(). This is not compatible with graphs, as "tf.GradientTape.gradients() does not support graph control flow."
I see that tfe.py_func exists, but once again, this only works when eager is not enabled, so it does not help with this problem.
So how do I make a same code solution, when it seems that many aspects of eager and graph directly conflict with each other?
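To make the conflict concrete, the mode-dependent access described above would look something like this (a sketch; the helper name and session handling are assumptions about the surrounding training loop):

import tensorflow as tf

def read_epochs(epochs, sess=None):
  # Eager tensors and variables expose their values directly ...
  if tf.executing_eagerly():
    return epochs.numpy()
  # ... while in graph mode a value only exists once a session runs it.
  return sess.run(epochs)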
In some deep learning notes (Stanford CS20si), I once saw the following statement regarding eager execution. I don't quite understand what "imperative custom layers" means, or how to understand this code example in the context of imperative custom layers.
Normally, using tensorflow you are not able to access the content of a tensor directly. This means that you cannot use if-statements. Instead, you have to construct both possible branches and then use tf.cond to include a node which switches between the two, depending on the content of a tensor. This sometimes makes it hard to implement imperative commands in layers.
The example you posted above shows that (with eager execution) you are now able to access the content of tensors, which means you can write if-statements, for-loops and so on directly in Python, without having to construct a huge graph covering each possibility yourself. As the code inside the layer is now executed just like a normal imperative programming language, you can call this kind of layer an imperative layer; this is the same motivation that drives PyTorch.
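As a minimal sketch of such an imperative layer (the layer and its logic are hypothetical):

import tensorflow as tf
tf.enable_eager_execution()  # TF 1.x; eager is the default in TF 2.x

# Ordinary Python control flow runs on actual tensor values, with no need
# to build both branches ahead of time via tf.cond.
class ClippedScale(tf.keras.layers.Layer):
  def call(self, x):
    if tf.reduce_max(x) > 1.0:  # the value is known here, so `if` just works
      return x / tf.reduce_max(x)
    return x

layer = ClippedScale()
print(layer(tf.constant([0.5, 2.0])))  # rescaled, because the max exceeds 1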
In Tensorflow (as of v1.2.1), it seems that there are (at least) two parallel APIs to construct computational graphs. There are functions in tf.nn, like conv2d, avg_pool, relu, dropout and then there are similar functions in tf.layers, tf.losses and elsewhere, like tf.layers.conv2d, tf.layers.dense, tf.layers.dropout.
Superficially, it seems that this situation only serves to confuse: for example, tf.nn.dropout uses a 'keep rate' while tf.layers.dropout uses a 'drop rate' as an argument.
Does this distinction have any practical purpose for the end-user / developer?
If not, is there any plan to cleanup the API?
Tensorflow offers, on the one hand, a low-level API (tf.*, tf.nn.*, ...) and, on the other hand, a higher-level API (tf.layers.*, tf.losses.*, ...).
The goal of the higher-level API is to provide functions that greatly simplify the design of the most common neural nets. The lower-level API is there for people with special needs, or who wish to keep finer control of what is going on.
The situation is a bit confusing though, because some functions have the same or similar names, and there is no clear way to tell at first sight which namespace corresponds to which level of the API.
Now, let's look at conv2d for example. A striking difference between tf.nn.conv2d and tf.layers.conv2d is that the latter takes care of all the variables needed for weights and biases. A single line of code, and voilà, you have created a convolutional layer. With tf.nn.conv2d, you have to declare the weights variable yourself before passing it to the function. And as for the biases, they are actually not handled at all: you need to add them yourself afterwards.
Add to that that tf.layers.conv2d also lets you add regularization and activation in the same function call, and you can imagine how this reduces code size when the higher-level API covers your needs.
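A side-by-side sketch of the same convolution written both ways (graph-mode TF 1.x; the shapes and hyperparameters are illustrative):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 1])

# Low level: you declare the weights yourself, and biases are on you too.
w = tf.get_variable('w', shape=[3, 3, 1, 32],
                    initializer=tf.truncated_normal_initializer(stddev=0.1))
b = tf.get_variable('b', shape=[32], initializer=tf.zeros_initializer())
y_nn = tf.nn.relu(tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME') + b)

# High level: one call creates the weights and bias, and can fold in the
# activation and regularization as well.
y_layers = tf.layers.conv2d(
    x, filters=32, kernel_size=3, padding='same',
    activation=tf.nn.relu,
    kernel_regularizer=tf.contrib.layers.l2_regularizer(1e-4))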
The higher level also makes some decisions by default that could be considered best practices. For example, losses in tf.losses are added to the tf.GraphKeys.LOSSES collection by default, which makes recovery and summation of the various components easy and somewhat standardized. If you use the lower-level API, you need to do all of that yourself. Obviously, you need to be careful when you start mixing low- and high-level API functions.
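For instance (a sketch; the placeholders stand in for a real model's labels and logits):

import tensorflow as tf

labels = tf.placeholder(tf.float32, [None, 10])
logits = tf.placeholder(tf.float32, [None, 10])

# Each tf.losses function registers its result in tf.GraphKeys.LOSSES ...
loss = tf.losses.softmax_cross_entropy(onehot_labels=labels, logits=logits)

# ... so all components (plus regularization losses) can be recovered and
# summed in one standardized call.
total_loss = tf.losses.get_total_loss()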
The higher-level API is also an answer to a great need from people who are otherwise used to similarly high-level functions in other frameworks, Theano aside. This is rather obvious when one ponders the number of alternative higher-level APIs built on top of tensorflow, such as keras 2 (now part of the official tensorflow API), slim (in tf.contrib.slim), tflearn, tensorlayer, and the like.
Finally, if I may add a piece of advice: if you are beginning with tensorflow and do not have a preference for a particular API, I would personally encourage you to stick to the tf.keras.* API:
It is friendly and at least as good as the other high-level APIs built on top of the low-level tensorflow API
It has a clear namespace within tensorflow (although it can -- and sometimes should -- be used with parts from other namespaces, such as tf.data)
It is now a first-class citizen of tensorflow (it used to be in tf.contrib.keras), and care is taken to make new tensorflow features (such as eager execution) compatible with keras.
The generic Keras implementation can run on other backends such as CNTK, and so does not lock you into tensorflow.