Weights and biases in tf.layers module in TensorFlow 1.0 - tensorflow

How do you access the weights and biases when using the tf.layers module in TensorFlow 1.0? The advantage of the tf.layers module is that you don't have to create the variables separately when making a fully connected or convolutional layer.
I couldn't find anything in the documentation about accessing them, or adding them to summaries, after they are created.

I don't think tf.layers (i.e. TF core) supports summaries yet. Instead you have to use what's in contrib, keeping in mind that stuff in contrib may eventually move into core, but that its current API may change:
The layers module defines convenience functions summarize_variables,
summarize_weights and summarize_biases, which set the collection
argument of summarize_collection to VARIABLES, WEIGHTS and BIASES,
respectively.
Check out:
https://www.tensorflow.org/api_guides/python/contrib.layers#Summaries
https://github.com/tensorflow/tensorflow/blob/131c3a67a7b8d27fd918e0bc5bddb3cb086de57e/tensorflow/python/layers/layers.py
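As for accessing the weights and biases themselves: tf.layers names its variables `<layer_name>/kernel` and `<layer_name>/bias` in the default graph, so they can be fetched by name or filtered out of the global-variables collection. A minimal sketch (written against the `tf.compat.v1` shim so it also runs under TF 2.x; the layer name `fc1` and the shapes are just examples):

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

x = tf.placeholder(tf.float32, shape=[None, 4])
y = tf.layers.dense(x, units=3, name="fc1")

# Option 1: look the variables up by name in the default graph.
graph = tf.get_default_graph()
kernel = graph.get_tensor_by_name("fc1/kernel:0")
bias = graph.get_tensor_by_name("fc1/bias:0")

# Option 2: filter the global-variables collection by name prefix.
fc1_vars = [v for v in tf.global_variables() if v.name.startswith("fc1/")]

# Either handle can be passed straight to a summary op.
tf.summary.histogram("fc1_kernel", kernel)
tf.summary.histogram("fc1_bias", bias)
```

Either option gives you the same variables tf.layers created under the hood, so no separate variable creation is needed.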

Related

Inspecting functional keras model structure

I would like to inspect the layers and connections in a model after creating it with the Functional API in Keras. Essentially, I want to start at the output and recursively enumerate the inputs of each layer instance. Is there a way to do this in the Keras or TensorFlow API?
The purpose is to create a more detailed visualisation than the ones provided by Keras (tf.keras.utils.plot_model). The model is generated procedurally based on a parameter file.
I have successfully used attributes of the KerasTensor objects to do this inspection:
output = Dense(1)(...)
print(output)
print(output.node)
print(output.node.keras_inputs)
print(output.node.keras_inputs[0].node)
These attributes weren't available in TF 2.6, only in 2.7, and I realise they're not documented anywhere.
Is there a proper way to do this?
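One route that sticks to documented attributes (a sketch, using a toy model in place of the procedurally generated one; `Layer.input` and `Layer.output` are public properties of built layers): map each layer's output tensor back to the layer that produced it, then resolve every layer's inputs against that map to recover the connections.

```python
import tensorflow as tf

# Toy Functional model standing in for the real one.
inp = tf.keras.Input(shape=(8,), name="inp")
h = tf.keras.layers.Dense(4, name="hidden")(inp)
out = tf.keras.layers.Dense(1, name="out")(h)
model = tf.keras.Model(inp, out)

# Which layer produced each symbolic tensor?
producer = {id(layer.output): layer.name for layer in model.layers}

# For every non-input layer, look up the producer of each input tensor.
edges = {}
for layer in model.layers:
    if isinstance(layer, tf.keras.layers.InputLayer):
        continue
    inputs = layer.input if isinstance(layer.input, (list, tuple)) else [layer.input]
    edges[layer.name] = [producer.get(id(t), "<external>") for t in inputs]
```

This sketch assumes single-output layers; layers with multiple outputs would need their `layer.output` lists handled as well.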

TF keras layer is no longer saveable?

After a recent upgrade to TensorFlow 2.3 I cannot save TF-Agents layers; I get this:
AttributeError: 'ActorDistributionNetwork' object has no attribute 'save_weights'
Since ActorDistributionNetwork is a subclass of tf.keras.layers.Layer, has the ability of individual Keras layers to save themselves been removed? I could not find anything about this in the release notes for either TensorFlow or TF-Agents.
Using model.save_weights is not very convenient for TF-Agents, since I have to use different combinations of layers for a custom agent.
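If it is the Layer-level save_weights that has gone away, a hedged workaround is the get_weights()/set_weights() pair, which every tf.keras.layers.Layer still exposes: round-trip the layer's variables through plain NumPy arrays. A sketch with a Dense layer standing in for the network:

```python
import numpy as np
import tensorflow as tf

layer = tf.keras.layers.Dense(3)
layer.build((None, 4))  # create the kernel and bias

# Save: dump the layer's variables as plain arrays.
kernel, bias = layer.get_weights()
np.savez("layer_weights.npz", kernel=kernel, bias=bias)

# Restore: build an identically shaped layer and load the arrays back.
restored = tf.keras.layers.Dense(3)
restored.build((None, 4))
with np.load("layer_weights.npz") as data:
    restored.set_weights([data["kernel"], data["bias"]])
```

The restored layer must be built with the same shapes before set_weights is called, but this sidesteps save_weights entirely.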

Extracting representations from different layers of a network in TensorFlow 2

I have the weights of a custom pre-trained model. I need to extract the representations for different inputs that I pass through the model, across its different layers. What would be the best way of doing this?
I am using TensorFlow 2.1.0 and currently load in the weights of the model using either hub.KerasLayer() or tf.saved_model.load()
Any help would be greatly appreciated! I am very new to TensorFlow and have no choice but to use it since the weights were acquired from another source.
tf.saved_model.load() and its wrapper hub.KerasLayer load both the computation graph and the pre-trained weights. I suppose you're dealing with a TF2-style SavedModel that has its computation packaged in TensorFlow functions. If so, there's no easy way to extract intermediate results from within a function. If possible, you could ask the model creator to provide more outputs, or, if you have the model's Python source, build the model from source and initialize its weights with those from the SavedModel (some plumbing required).
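If the rebuild-from-source route is available, the usual trick is a second tf.keras.Model that shares the original layers but lists the intermediate tensors as extra outputs. A sketch with a toy Functional model (layer names and shapes are illustrative; you would copy in the pre-trained weights where indicated):

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for the model rebuilt from source.
inp = tf.keras.Input(shape=(4,))
x = tf.keras.layers.Dense(16, activation="relu", name="fc1")(inp)
x = tf.keras.layers.Dense(8, activation="relu", name="fc2")(x)
out = tf.keras.layers.Dense(1, name="head")(x)
model = tf.keras.Model(inp, out)

# ... here you would load the pre-trained weights, e.g. model.set_weights(...)

# A second model sharing the same layers, exposing every intermediate tensor.
extractor = tf.keras.Model(
    inputs=model.inputs,
    outputs=[model.get_layer(name).output for name in ("fc1", "fc2", "head")],
)
feats = extractor(np.zeros((2, 4), dtype="float32"))  # one array per listed layer
```

Because the extractor reuses the same layer objects, it sees the loaded weights without any copying.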

How to save and use a trained neural network developed in PyTorch / TensorFlow / Keras?

Are there ways to save a model after training and sharing just the model with others? Like a regular script? Since the network is a collection of float matrices, is it possible to just extract these trained weights and run it on new data to make predictions, instead of requiring the users to install these frameworks too? I am new to these frameworks and will make any clarifications as needed.
PyTorch: As explained in this post, you can save a model's parameters as a dictionary, or load a dictionary to set your model's parameters.
You can also save/load a PyTorch model as an object.
Both procedures require the user to have at least one tensor computation framework installed, e.g. for efficient matrix multiplication.
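The "network is just a collection of float matrices" idea can be pushed further: export the trained parameters once, and the recipient can run inference with NumPy alone, no deep-learning framework installed. A minimal sketch (the two-layer relu/affine structure and the parameter names are illustrative assumptions, not any framework's format):

```python
import numpy as np

# Producer side (framework installed): save the weights as plain arrays.
rng = np.random.default_rng(0)
params = {
    "w1": rng.normal(size=(4, 8)).astype("float32"),
    "b1": np.zeros(8, dtype="float32"),
    "w2": rng.normal(size=(8, 1)).astype("float32"),
    "b2": np.zeros(1, dtype="float32"),
}
np.savez("model_weights.npz", **params)

# Consumer side (NumPy only): reload and replay the forward pass.
p = np.load("model_weights.npz")

def predict(x):
    h = np.maximum(x @ p["w1"] + p["b1"], 0.0)  # dense layer + ReLU
    return h @ p["w2"] + p["b2"]                # linear output head

y = predict(np.ones((2, 4), dtype="float32"))
```

This only works if the consumer reimplements the exact forward pass, which is easy for small feed-forward nets and increasingly painful for anything with convolutions, normalisation, or custom ops.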

backpropagation issues with a custom layer (TF/Keras)

I've been working on a prototype and I am having issues with backpropagation. I am currently using the latest Keras and TensorFlow builds (with TensorFlow as the backend; I have looked into CNTK, MXNet, and Chainer, and so far only Chainer would allow me to do it, but the training time is quite slow).
My current layer is similar to a convolutional layer, with more operations than a simple multiplication.
I know that TensorFlow should use automatic differentiation, provided all the operations support it, to calculate the gradients and perform gradient descent.
Currently my layer uses the following operators: reduce_sum, sum, subtraction, multiplication and division.
I also rely on the following methods: extract_image_patches, reshape and transpose.
I doubt any of these would cause an issue with automatic differentiation. I built two layers as tests: one inherits from the base layer in Keras, while the other inherits directly from _Conv. In both cases, whenever I use that layer anywhere in a model, no weights are updated during the training process.
How could I solve this problem and fix backpropagation?
Edit:
(Here is the layer implementation https://github.com/roya0045/cvar2/blob/master/tfvar.py,
for the testing itself see https://github.com/roya0045/cvar2/blob/master/test2.py )
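For debugging this kind of "weights never update" problem, a quick diagnostic (sketched here with the modern TF2 GradientTape API, which postdates the question) is to check that tape.gradient returns a non-None gradient for every trainable weight of the layer: a None entry pinpoints the weight the backward graph cannot reach, typically because it wasn't created via add_weight or the ops in call() break differentiability.

```python
import numpy as np
import tensorflow as tf

class ScaledSum(tf.keras.layers.Layer):
    """Toy layer using only differentiable ops (multiply, reduce_sum)."""
    def build(self, input_shape):
        self.scale = self.add_weight(name="scale", shape=(input_shape[-1],),
                                     initializer="ones", trainable=True)
    def call(self, inputs):
        return tf.reduce_sum(inputs * self.scale, axis=-1)

layer = ScaledSum()
x = tf.constant(np.random.rand(3, 5).astype("float32"))
with tf.GradientTape() as tape:
    loss = tf.reduce_mean(layer(x) ** 2)

# Every entry should be a tensor, never None, if backprop can reach it.
grads = tape.gradient(loss, layer.trainable_weights)
```

Running the same check on the custom layer, weight by weight, separates "the gradient is None" (graph disconnection) from "the gradient is zero" (a numerical problem in the ops).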