How to access weights and biases in TensorBoard - TensorFlow

According to https://www.tensorflow.org/tensorboard/dataframe_api, it seems we can only access scalars (accuracy, loss, etc.). Is there a way to access the weight and bias values for each layer at each epoch/iteration (the data shown in the Histograms tab)?
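For the histogram data specifically, one option is to read the event files directly with TensorBoard's EventAccumulator rather than the DataFrame API. A minimal sketch, assuming a placeholder log directory and tag name (the actual tags depend on what was logged, e.g. via tf.keras.callbacks.TensorBoard(histogram_freq=1)):

from tensorboard.backend.event_processing import event_accumulator

ea = event_accumulator.EventAccumulator(
    "logs/train",                                      # placeholder log directory
    size_guidance={event_accumulator.HISTOGRAMS: 0},   # 0 = keep every histogram event
)
ea.Reload()

print(ea.Tags()["histograms"])                         # list the available histogram tags

# Each HistogramEvent carries wall_time, step and the bucketed summary that backs
# the Histograms tab; the raw per-parameter values are not stored unless you log
# them yourself.
for event in ea.Histograms("dense/kernel_0"):          # "dense/kernel_0" is a placeholder tag
    print(event.step, event.histogram_value.num, event.histogram_value.sum)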

Related

Keras custom loss function with multiple output model

In a segmentation task I wanted my model to have two outputs, because I implemented weight maps as suggested in the original U-Net paper (https://arxiv.org/pdf/1505.04597.pdf).
As per the suggestion, I created weight maps that give some regions of the ground-truth mask higher weights. Now I have a model with:
weightmap = layers.Lambda(lambda x: x)(weight_map)  # a non-trainable layer to output the weight map as a tensor for the loss function
model = Model(inputs=[input, weight_map], outputs=[output, weightmap])
Now I need to compute the binary cross-entropy loss for this model:
def custom_loss(target, outputs):
    loss = K.binary_crossentropy(target, outputs[0])  # outputs[0] should be the model output
    loss = loss * outputs[1]                          # outputs[1] should be the weight map
    return loss
This outputs[0] and outputs[1] slicing of the model's output tensors doesn't work. Is there anything I can do to compute this loss using both outputs of the model in a single loss function?
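One common workaround with tf.keras (2.x) is to feed the weight map and the target in as extra inputs and attach the weighted loss with model.add_loss, so the loss can see both tensors without any slicing of a combined output. A minimal sketch, assuming placeholder input shapes and a single Conv2D layer standing in for the U-Net body:

import tensorflow as tf
from tensorflow.keras import layers, Model
import tensorflow.keras.backend as K

image_in  = layers.Input(shape=(256, 256, 1), name="image")       # assumed shape
weight_in = layers.Input(shape=(256, 256, 1), name="weight_map")  # per-pixel weights
target_in = layers.Input(shape=(256, 256, 1), name="target")      # ground-truth mask

x = layers.Conv2D(16, 3, padding="same", activation="relu")(image_in)  # stand-in for the U-Net body
output = layers.Conv2D(1, 1, activation="sigmoid", name="seg")(x)

model = Model(inputs=[image_in, weight_in, target_in], outputs=output)

# The weighted binary cross-entropy is built inside the graph, so no custom loss
# function has to receive both outputs at once.
weighted_bce = K.mean(K.binary_crossentropy(target_in, output) * weight_in)
model.add_loss(weighted_bce)

model.compile(optimizer="adam")  # no loss= argument; add_loss already supplies it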

Where do class_weights or a weighted loss penalize the network?

I am working on a semantic segmentation project with highly imbalanced multiclass data. I searched for ways to address this during training via the model.fit parameters, namely class_weight and sample_weight.
I can implement this using a class_weight dictionary such as:
{0: 1, 1: 10, 2: 15}
I also saw a method of applying the weights inside the loss function.
But at what point do these weights get applied?
If class_weight is used, where does the penalty apply? I already have a kernel_regularizer for each layer, so if my classes are penalized based on the class weights, will that penalize the output of each layer (y = Wx + b) or only the final layer?
Likewise, if I use a weighted loss function, is the penalty applied only at the final layer before the loss calculation, or at each layer before the final loss is computed?
Any explanation of this would be very helpful.
The class_weights you mention in your dictionary are there to account for your imbalanced data. They never change; they only increase the penalty for misclassified instances of minority classes (that way your network pays more attention to them, and the gradients treat one 'Class2' instance as if it were 15 times more important than one 'Class0' instance).
The kernel_regularizer you mention lives in your loss function and penalizes large weight norms for the weight matrices throughout the network (if you use kernel_regularizer = tf.keras.regularizers.l1(0.01) in a Dense layer, it only affects that layer). So that is a different kind of weighting that has nothing to do with classes, only with the weights inside your network. Your eventual loss will be something like loss = Cross_entropy + a * norm(Weight_matrix), and the network is given the additional task of keeping the weight norms low while minimizing the classification loss (cross-entropy).
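To make the distinction concrete, here is a minimal sketch (with a toy 3-class classifier and random data, purely assumptions) of where each term enters: class_weight rescales each sample's contribution to the classification loss inside fit(), while kernel_regularizer adds an extra penalty term on that one layer's weight matrix; neither touches the outputs of the other layers.

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,),
                          kernel_regularizer=tf.keras.regularizers.l1(0.01)),  # penalizes only this layer's weights
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

x = np.random.rand(100, 10).astype("float32")   # toy data
y = np.random.randint(0, 3, size=(100,))

# Misclassifying a class-2 sample now costs 15x as much as a class-0 sample;
# the l1 regularization term is added on top of this weighted classification loss.
model.fit(x, y, epochs=2, class_weight={0: 1, 1: 10, 2: 15})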

Weights and Neural Networks

Is it possible to know the weight matrix of a fully trained neural network with multiple hidden layers? More specifically, can we check and store these values at every training iteration?
The tf.train.Saver class provides methods to save and restore models. The tf.saved_model.simple_save function is an easy way to build a saved model suitable for serving.
See the official documentation here.
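A minimal TF1-style sketch of that Saver workflow; the variable names, shapes and checkpoint path below are placeholders:

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

W = tf.get_variable("hidden1/weights", shape=[784, 128])
b = tf.get_variable("hidden1/biases", shape=[128])

saver = tf.train.Saver()  # by default saves every variable in the graph

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training steps would go here ...
    save_path = saver.save(sess, "/tmp/model.ckpt", global_step=0)

# Later (or in another process with the same graph rebuilt), restore and inspect:
with tf.Session() as sess:
    saver.restore(sess, save_path)
    weights_value = sess.run(W)  # the trained weight matrix as a NumPy array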
On each iteration you are passing a train_op to sess.run and asking it to compute that, right? Something like this:
sess.run([train_op], feed_dict={...})
You could also ask it to return other values, such as the cost and accuracy tensors using something like this:
_, result_cost, result_accuracy = sess.run([train_op, cost, accuracy], feed_dict={...})
If that all makes sense, then accessing the weight matrix is no more complicated. You just need a reference to the weight matrix tensor (keep it around when you create it or look up the tensor by name):
weight_matrix, _ = sess.run([weight_tensor, train_op], feed_dict={...})
Notice that you can request the value of any tensor (variable or operation) along with your training step. You can also call sess.run separately and ask for just that value:
weight_matrix = sess.run(weight_tensor)
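Putting it together, a sketch of storing the matrix at every iteration, assuming a variable named "hidden1/weights" and a train_op/feed_dict already set up as above (both assumptions):

# Look the tensor up by name if you did not keep a Python reference to it.
weight_tensor = tf.get_default_graph().get_tensor_by_name("hidden1/weights:0")

weight_history = []
for step in range(num_training_steps):
    weight_value, _ = sess.run([weight_tensor, train_op],
                               feed_dict={x: batch_x, y: batch_y})  # placeholder feeds
    weight_history.append(weight_value)  # NumPy snapshot of the matrix at this step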

Style Transfer in TensorFlow

I am having trouble understanding how the content and style filters are trained (e.g. in this paper) in style transfer algorithms using TensorFlow.
I have examined a few implementations of the algorithm in the linked paper, but I can't quite grok their treatment of this step. To that end, I thought it would be helpful to implement a naive version without using the pre-trained model. My understanding of the steps involved is:
Train a CNN on a single image (in the paper they use the pre-trained VGG network)
Using the trained network, feed in a white-noise image. Define a new loss function that is minimized by updating the input image (this is how the image is 'painted'), e.g. the 'content' term is obtained by minimizing the distance between the conv-layer activations of the trained model and those produced by the input (white-noise) image.
Thus, the implementation should be something like:
import tensorflow as tf

x_in = tf.placeholder(tf.float32, shape=[None, num_pixels], name='x')
y_ = tf.placeholder(tf.float32, shape=[None, num_pixels], name='y')
...
diff = y_ - y_out
loss = tf.reduce_sum(tf.abs(diff))  # minimizing 'pixel difference'
train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)

# training the model
for i in range(NUM_TRAINING_STEPS):
    _, loss_val = sess.run([train_step, loss],
                           feed_dict={x_in: input_image, y_: input_image})
After training the model, I can generate a white-noise image, but how can I use the trained model to update my input image? My suspicion is that I need to create a second network in which x_in is a tf.Variable, and load the weights and biases from the trained model, but the details of this elude me.
Yes, you could store your input image in a tf.Variable, load the weights from the trained model, and run an optimization loop with the style-transfer loss function with respect to that input variable.
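A minimal, runnable sketch of that idea in the same TF1 style, using a toy pixel-difference loss as a stand-in for the real content/style losses (the image size, learning rate and step count are arbitrary). The key detail is var_list=[generated]: only the image is updated, never the network weights.

import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

content_image = np.random.rand(1, 64, 64, 3).astype("float32")  # stands in for your content image

# The image being 'painted' is a Variable, so gradients can update its pixels directly.
generated = tf.Variable(np.random.rand(1, 64, 64, 3).astype("float32"), name="generated_image")

# In the real algorithm this would be content_loss + style_weight * style_loss,
# computed from the frozen network's activations on `generated`.
loss = tf.reduce_sum(tf.abs(generated - content_image))

# var_list restricts the optimizer to the image; the trained weights stay fixed.
train_step = tf.train.AdamOptimizer(1e-2).minimize(loss, var_list=[generated])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(200):
        sess.run(train_step)
    result_image = sess.run(generated)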
You can also use a style-transfer-as-a-service site such as http://somatic.io to train styles.

How to get the gradients of loss with respect to activations in TensorFlow

In the cifar10 example, the gradients of loss with respect to parameters can be computed as follows:
grads_and_vars = opt.compute_gradients(loss)
for grad, var in grads_and_vars:
    # ...
Is there any way to get the gradients of loss with respect to activations (not the parameters), and watch them in Tensorboard?
You can use the tf.gradients() function to compute the gradient of any scalar tensor with respect to any other tensor (assuming the gradients are defined for all of the ops between those two tensors):
activations = ...
loss = f(..., activations) # `loss` is some function of `activations`.
grad_wrt_activations, = tf.gradients(loss, [activations])
Visualizing this in TensorBoard is tricky in general, since grad_wrt_activations is (typically) a tensor with the same shape as activations. Adding a tf.histogram_summary() op is probably the easiest way to visualize it:
# Adds a histogram of `grad_wrt_activations` to the graph, which will be logged
# with the other summaries, and shown in TensorBoard.
tf.histogram_summary("Activation gradient", grad_wrt_activations)
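For reference, the same idea with the current TF 2.x APIs (tf.GradientTape in place of tf.gradients, tf.summary.histogram in place of the old tf.histogram_summary) might look like the sketch below; the toy model, data and log directory are placeholders:

import tensorflow as tf

writer = tf.summary.create_file_writer("logs/grads")  # placeholder log directory

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
loss_fn = tf.keras.losses.MeanSquaredError()

x = tf.random.normal([8, 10])
y = tf.random.normal([8, 1])

with tf.GradientTape() as tape:
    activations = model.layers[0](x)   # intermediate activations
    tape.watch(activations)            # not a Variable, so watch it explicitly
    predictions = model.layers[1](activations)
    loss = loss_fn(y, predictions)

grad_wrt_activations = tape.gradient(loss, activations)

with writer.as_default():
    tf.summary.histogram("activation_gradient", grad_wrt_activations, step=0)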