TF keras layer is no longer saveable? - tensorflow

After a recent upgrade to TensorFlow 2.3 I cannot save TF-Agents layers; I get this:
AttributeError: 'ActorDistributionNetwork' object has no attribute 'save_weights'
Since ActorDistributionNetwork is a subclass of tf.keras.layers.Layer, has the ability of individual Keras layers to save themselves been removed? I could not find anything about this in the release notes for either TensorFlow or TF-Agents.
Using model.save_weights is not very convenient for TF-Agents, since I have to use different combinations of layers for a custom agent.
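For what it's worth, one workaround (a sketch, not an official fix) is to track the network's variables with tf.train.Checkpoint instead of calling save_weights on the layer itself; actor_net below stands in for the ActorDistributionNetwork instance:
import tensorflow as tf

# Track the network with a checkpoint object rather than relying on the
# layer exposing save_weights itself.
checkpoint = tf.train.Checkpoint(actor_net=actor_net)
save_path = checkpoint.save('./checkpoints/actor_net')

# Later, restore into a freshly constructed network with the same structure.
checkpoint.restore(save_path)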

Related

Inspecting functional keras model structure

I would like to inspect the layers and connections in a model, after having created it with the Functional API in Keras. Essentially, I want to start at the output and recursively enumerate the inputs of each layer instance. Is there a way to do this in the Keras or TensorFlow API?
The purpose is to create a more detailed visualisation than the ones provided by Keras (tf.keras.utils.plot_model). The model is generated procedurally based on a parameter file.
I have successfully used attributes of the KerasTensor objects to do this inspection:
output = Dense(1)(...)
print(output)
print(output.node)
print(output.node.keras_inputs)
print(output.node.keras_inputs[0].node)
This wasn't available in TF 2.6, only 2.7, and I realise it's not documented anywhere.
Is there a proper way to do this?
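One route that avoids the undocumented KerasTensor attributes is to walk the model's serialized config, which lists each layer together with the names of the layers feeding it. A minimal sketch of that idea (the toy model is illustrative):
import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))
x = tf.keras.layers.Dense(8, name="hidden")(inputs)
outputs = tf.keras.layers.Dense(1, name="out")(x)
model = tf.keras.Model(inputs, outputs)

# get_config() on a Functional model records, per layer, the inbound
# layers it consumes, which is enough to rebuild the connectivity graph.
config = model.get_config()
for layer_cfg in config["layers"]:
    print(layer_cfg["name"], "<-", layer_cfg.get("inbound_nodes", []))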

How to load tf.keras models with keras

I've been using the keras module from tensorflow 1.12.0 for training and saving models. I recently came across a seemingly useful library for visualization of the weights/outputs, but they require the models be loaded as a Keras model. I'm running into an error trying to load my tf.keras models using keras, and was hoping someone could provide a solution. Python version 3.5.2, Keras version 2.2.4.
I've defined the custom object for the GlorotUniform since keras doesn't recognize that initializer. Afterwards, when I try to load the model, I get a TypeError.
import tensorflow as tf
import keras

# This works
model = tf.keras.models.load_model('./densenet_model.h5')
# This does not work
model = keras.models.load_model('./densenet_model.h5', custom_objects={"GlorotUniform": tf.keras.initializers.glorot_uniform})
# This is the error that happens:
# TypeError: tuple indices must be integers or slices, not list
In summary, I was wondering if there was a simple way to convert a model created with tf.keras to a keras model.
I figured out a workaround. I load the model with tf.keras.models.load_model and call save_weights. Then I build the same model with standalone Keras and just use load_weights. I verified that the weights were loaded correctly by checking the output against my validation dataset.
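Roughly, the workaround looks like this (build_densenet is a placeholder for whatever code originally defined the architecture):
import tensorflow as tf
import keras

# Load the original model with tf.keras and export only its weights.
tf_model = tf.keras.models.load_model('./densenet_model.h5')
tf_model.save_weights('./densenet_weights.h5')

# Rebuild the same architecture with standalone Keras and load the weights.
keras_model = build_densenet()  # placeholder: must define the same layers
keras_model.load_weights('./densenet_weights.h5')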
Instead of from keras.models import load_model I used from tensorflow.python.keras.models import load_model, and the problem was solved.

Should I use the standalone Keras library or tf.keras?

Now that Keras has become an API within TensorFlow, there is a lot of older Keras code out there, such as https://github.com/keiserlab/keras-neural-graph-fingerprint/blob/master/examples.py
from keras import models
With the current version of TensorFlow, do we need to change every Keras import to the following?
from tensorflow.keras import models
You are mixing things up:
Keras (https://keras.io/) is a library independent from TensorFlow, which specifies a high-level API for building and training neural networks and is capable of using one of multiple backends (among which, TensorFlow) for low-level tensor computation.
tf.keras (https://www.tensorflow.org/guide/keras) implements the Keras API specification within TensorFlow. In addition, the tf.keras API is optimized to work well with other TensorFlow modules: you can pass a tf.data Dataset to the .fit() method of a tf.keras model, for instance, or convert a tf.keras model to a TensorFlow estimator with tf.keras.estimator.model_to_estimator. Currently, the tf.keras API is the high-level API to look for when building models within TensorFlow, and the integration with other TensorFlow features will continue in the future.
So to answer your question: no, you don't need to convert Keras code to tf.keras code. Keras code uses the Keras library, potentially even running on top of a backend other than TensorFlow, and will continue to work just fine in the future. Moreover, it's important not to mix Keras and tf.keras objects within the same script, since this can produce incompatibilities, as you can see for example in this question.
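To illustrate, pick one namespace and use it consistently throughout a script, for example:
# Standalone Keras (multi-backend), as in the linked example repository:
# from keras import models, layers

# tf.keras, bundled with TensorFlow:
from tensorflow.keras import models, layers

model = models.Sequential([layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="sgd", loss="mse")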
Update: Keras will be abandoned in favor of tf.keras: https://twitter.com/fchollet/status/1174019423541157888

Cannot Reload saved Keras model using tensorflow

I am working in a single jupyter notebook. I create and train a very simple CNN with keras. It compiles, fits, and predicts fine. I save it with:
model.save("mymodel.hd5")
The model is a keras.models.Sequential.
I then read that back in with:
reload_keras_model = keras.models.load_model("mymodel.hd5")
That also works fine. However if I try to read the model in using tensorflow via:
from tensorflow.keras.models import load_model
reload_tf_mmodel = load_model("mymodel.hd5")
That fails with:
ValueError: Unknown layer:layers
Most of the threads I've read on GitHub say "update your model" or mention custom objects (I'm not using any). My target platform is the RPi Zero; I've been able to install TF but not Keras, which is why I want to load via TF. Why do keras and tf.keras handle this model differently, and what do I need to update/change to read it in with tf.keras?
While Keras can use TF as a backend, that does not guarantee that a model saved with Keras is readable with tf.keras as well.
Note that you can use Keras with both Theano and TF, so reload_keras_model = keras.models.load_model("mymodel.hd5") will work fine with either backend, since saving and loading are done in the "keras" part and do not involve the backend.
You can use a tool such as keras_to_tensorflow, or something similar.
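Alternatively, a weights-only transfer sidesteps the config-format mismatch, assuming you can re-declare the architecture in code on the Pi (the layers below are illustrative, not your actual CNN):
# On the training machine, with standalone Keras:
#   model.save_weights("mymodel_weights.h5")

# On the Raspberry Pi, with only TensorFlow installed:
import tensorflow as tf

pi_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])  # must match the original model layer for layer
pi_model.load_weights("mymodel_weights.h5")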

Weights and biases in tf.layers module in TensorFlow 1.0

How do you access the weights and biases when using tf.layers module in TensorFlow 1.0? The advantage of tf.layers module is that you don't have to separately create the variables when making a fully connected layer or convolution layer.
I couldn't find anything in the documentation about accessing them or adding them to summaries after they are created.
I don't think tf.layers (i.e. TF core) supports summaries yet. Instead, you have to use what's in contrib, keeping in mind that contrib code may eventually move into core and that its current API may change:
The layers module defines convenience functions summarize_variables, summarize_weights and summarize_biases, which set the collection argument of summarize_collection to VARIABLES, WEIGHTS and BIASES, respectively.
Check out:
https://www.tensorflow.org/api_guides/python/contrib.layers#Summaries
https://github.com/tensorflow/tensorflow/blob/131c3a67a7b8d27fd918e0bc5bddb3cb086de57e/tensorflow/python/layers/layers.py
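If you would rather stay in core TF, one alternative (a sketch, assuming the default variable scoping of tf.layers.dense) is to fetch the created variables from the trainable-variables collection and attach summaries by hand:
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 4])
out = tf.layers.dense(x, units=2, name="dense")

# tf.layers.dense creates "dense/kernel" and "dense/bias" and registers them
# in the trainable-variables collection, so they can be fetched by scope.
for var in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="dense"):
    tf.summary.histogram(var.name.replace(":", "_"), var)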