TensorFlow: gradients become NaN even when I clip them

It seems like I have an exploding gradient issue during the training of my reinforcement learning policy.
However, I'm clipping the gradients by norm, with 0.2 as the clipping factor.
I've checked both my inputs and my loss, and none of them are NaN. Only my gradients have the issue.
All of the gradients, without exception, become NaN after just one step, and I don't understand how that is possible since I'm clipping them. Shouldn't TensorFlow turn the NaN gradients into a clipped vector?
Here is the input data at the step where the NaN gradients appear:
INPUT : [0.1, 0.0035909, 0.06, 0.00128137, 0.6, 0.71428571, 0.81645947, 0.46802986, 0.04861736, 0.01430704, 0.08, 0.08966659, 0.02, 0.]
Here are the previous loss values (the last one is from the step where the gradients become NaN):
[-0.0015171316, -0.0015835371, 0.0002261286, 0.0003917102, -0.0024305983, -0.0054471847, 0.00082066684, 0.0038477872, 0.012144111]
Here is the network I'm using. hiddens_dim is a list containing the number of units of the consecutive Dense layers (I'm creating those layers dynamically):
class NeuralNet(tf.keras.Model):
    def __init__(self, hiddens_dim=[4, 4], out_dim=2):  # out_dim of 2 matches the output-layer gradient shown below
        super().__init__()
        self.out_dim = out_dim
        # Hidden layers, built dynamically from the list of layer sizes
        self.hidden_layers = [tf.keras.layers.Dense(hidden_dim,
                                                    activation='elu',
                                                    kernel_initializer=tf.keras.initializers.VarianceScaling(),
                                                    kernel_regularizer=tf.keras.regularizers.L1(l1=0.001),
                                                    name=f'hidden_{i}')
                              for i, hidden_dim in enumerate(hiddens_dim)]
        # Output layer
        self.output_layer = tf.keras.layers.Dense(self.out_dim,
                                                  activation='softmax',
                                                  kernel_initializer=tf.keras.initializers.GlorotNormal(),
                                                  name='output')

    def call(self, input):
        x = input
        for layer in self.hidden_layers:
            x = layer(x)
        output = self.output_layer(x)
        return output
Now here is the part where I update the gradients manually:
model = NeuralNet([4,4])
optim = tf.keras.optimizers.Adam(learning_rate=0.01)
...
with tf.GradientTape() as tape:
    loss = compute_loss(rewards, log_probs)
grads = tape.gradient(loss, self.model.trainable_variables)
grads = [tf.clip_by_norm(grad, clip_norm=self.clip) for grad in grads]
optim.apply_gradients(zip(grads, self.model.trainable_variables))
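For debugging, I plan to add something like this right after tape.gradient, to pinpoint which variable's gradient turns NaN first (just a sketch, assuming TF 2.x):
# Hypothetical debugging aid: raise an error naming the offending variable
# as soon as any gradient contains NaN or Inf, before clipping is applied.
for grad, var in zip(grads, self.model.trainable_variables):
    tf.debugging.check_numerics(grad, message=f'Bad gradient for {var.name}')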
And finally, here are the gradients from the previous iteration, right before the catastrophe:
Gradient Hidden Layer 1 : [
[-0.00839788, 0.00738428, 0.0006091 , 0.00240378],
[-0.00171666, 0.00157034, 0.00012367, 0.00051114],
[-0.0069742 , 0.00618575, 0.00050313, 0.00201353],
[-0.00263796, 0.00235524, 0.00018991, 0.00076653],
[-0.01119559, 0.01178695, 0.0007518 , 0.00383774],
[-0.08692611, 0.07620181, 0.00630627, 0.02480747],
[-0.10398869, 0.09012008, 0.00754619, 0.02933704],
[-0.04725896, 0.04004722, 0.00343443, 0.01303552],
[-0.00493888, 0.0043246 , 0.00035772, 0.00140733],
[-0.00559061, 0.00484629, 0.00040546, 0.00157689],
[-0.00595227, 0.00524359, 0.00042967, 0.00170693],
[-0.02488269, 0.02446024, 0.00177054, 0.00796351],
[-0.00850916, 0.00703857, 0.00062265, 0.00229139],
[-0.00220688, 0.00196331, 0.0001586 , 0.0006386 ]]
Gradient Hidden Layer 2 : [
[-2.6317715e-04, -2.1482834e-04, 3.0761934e-04, 3.1322116e-04],
[ 8.4564053e-03, 6.7548533e-03, -9.8721031e-03, -1.0047102e-02],
[-3.8322039e-05, -3.1298561e-05, 4.3669730e-05, 4.4472294e-05],
[ 3.6933038e-03, 2.9515910e-03, -4.3102605e-03, -4.3875999e-03]]
Gradient Output Layer : [
[-0.0011955 , 0.0011955 ],
[-0.00074397, 0.00074397],
[-0.0001833 , 0.0001833 ],
[-0.00018749, 0.00018749]]
I'm not very familiar with TensorFlow, so maybe I'm not training the model correctly? However, the model seemed to train correctly before the gradients went crazy.
I know there are many other ways to counter exploding gradients (batch norm, dropout, decreasing the learning rate, etc.), but I want to understand why gradient clipping is not working here. I thought that, by definition, a gradient can't explode when we clip it.
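To illustrate what I mean, even a toy example (my own check, assuming TF 2.x) suggests that tf.clip_by_norm cannot repair a gradient that already contains NaN, since the norm of such a tensor is itself NaN:
import numpy as np
import tensorflow as tf

g = tf.constant([np.nan, 1.0, -2.0])
print(tf.clip_by_norm(g, clip_norm=0.2))  # still contains NaN; clipping does not remove NaNs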
Thank you

Related

Extracting gradient of Keras Embedding layer

I want to extract the gradient of an RNN model that starts with an embedding layer, using TensorFlow's GradientTape (TensorFlow 1.14 with eager execution). The model is a simple LSTM binary classifier trained with a binary cross-entropy loss:
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, Activation, Dropout
from tensorflow.keras.models import Model

inputs = Input(name='inputs', shape=[150])
layer = Embedding(2000, 50, input_length=150)(inputs)
layer = LSTM(64)(layer)
layer = Dense(256, name='FC1')(layer)
layer = Activation('relu')(layer)
layer = Dropout(0.5)(layer)
layer = Dense(1, name='out_layer')(layer)
layer = Activation('sigmoid')(layer)
model = Model(inputs=inputs, outputs=layer)
GradientTape should return "... a list or nested structure of Tensors (or IndexedSlices, or None, or CompositeTensor), one for each element in sources". What is the correct way to use it to recover (and apply) the gradient?
I tried the following code:
with tf.GradientTape() as tape:
    y_ = model(inputs)
    loss_value = BinaryCrossentropy()(y_true=targets, y_pred=y_)

grads = tape.gradient(loss_value, model.trainable_variables)
# some custom processing
optimizer = RMSprop(learning_rate=0.001, name="context")
optimizer.apply_gradients(list(zip(grads, model.trainable_variables)), name="context")
I would expect the returned gradient to be of size (2000, 50), i.e., the shape of the weights of the embedding layer. Instead, it has a size that depends on the batch size and cannot be used (at least with the code above) with apply_gradients. Changing the number of inputs consistently changes the first dimension of the gradient to batch_size * 150, while the shapes of the trainable variables stay correct. With 8 inputs, for example, I get the following result:
input shape: (8, 150), output shape: (8, 1)
model.trainable_variables shapes: (2000, 50),(50, 256),(64, 256),(256,),(64, 256),(256,),(256, 1),(1,)
tape.gradient shapes: (1200, 50),(50, 256),(64, 256),(256,),(64, 256),(256,),(256, 1),(1,)
With a batch size of 32, the first component would be (4800, 50), and so on. This doesn't match my understanding of GradientTape.gradient, since the returned gradient doesn't have the same size as the sources parameter. What did I miss?
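For reference, this is the quick inspection I can run on the embedding gradient; my assumption (and it is only an assumption) is that the (1200, 50) tensor is the .values part of a tf.IndexedSlices rather than a dense gradient:
emb_grad = grads[0]                        # gradient w.r.t. the (2000, 50) embedding matrix
print(type(emb_grad))
if isinstance(emb_grad, tf.IndexedSlices):
    print(emb_grad.values.shape)           # (batch_size * 150, 50)?
    print(emb_grad.indices.shape)          # which embedding rows each slice belongs to
    dense = tf.convert_to_tensor(emb_grad) # densified gradient, shape (2000, 50)
    print(dense.shape)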

How to compute saliency map using keras backend

I am trying to construct a basic "vanilla gradient" saliency heatmap (gradient-based feature attribution) for MNIST using keras. I know there are libraries such as this one to compute saliency heatmaps, but I would like to construct this from scratch since the vanilla gradient approach seems conceptually straightforward to implement. I have trained the following digit classifier in Keras using functional model definition:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 10  # MNIST has 10 digit classes

input = layers.Input(shape=(28,28,1), name='input')
conv2d_1 = layers.Conv2D(32, kernel_size=(3, 3), activation='relu')(input)
maxpooling2d_1 = layers.MaxPooling2D(pool_size=(2, 2), name='maxpooling2d_1')(conv2d_1)
conv2d_2 = layers.Conv2D(64, kernel_size=(3, 3), activation='relu')(maxpooling2d_1)
maxpooling2d_2 = layers.MaxPooling2D(pool_size=(2, 2))(conv2d_2)
flatten = layers.Flatten(name='flatten')(maxpooling2d_2)
dropout = layers.Dropout(0.5, name='dropout')(flatten)
dense = layers.Dense(num_classes, activation='softmax', name='dense')(dropout)
model = keras.models.Model(inputs=input, outputs=dense)
Now, I want to compute the saliency map for a single MNIST image. Since the final layer has a softmax activation and the denominator is a normalization term (so that the output nodes add up to 1), I believe I need to either take the pre-softmax output or change the activation of the trained model to linear before computing saliency maps. I will do the latter.
model.layers[-1].activation = tf.keras.activations.linear  # swap activation to linear
input = model.layers[0].input
output = model.layers[-1].output
input_image = x_test[0]  # shape is (28, 28, 1)
pred = np.argmax(model.predict(np.expand_dims(input_image, axis=0)))  # predicted class
However, I am not sure what to do beyond this. I know I can use K.gradients(output, input) to compute gradients. That said, I believe I should compute the gradient of the predicted class with respect to the input image, rather than the gradient of the entire output. How would I do this? Also, I'm not sure how to evaluate the saliency heatmap for a specific image/prediction. I imagine I will have to use sess = tf.keras.backend.get_session() and sess.run(), but I'm not sure exactly how. I would greatly appreciate any help with completing the saliency heatmap code. Thanks!
If you add the activation as a separate layer after the last Dense layer, i.e.
keras.layers.Activation('softmax')
you can build a model that stops at the pre-softmax output:
linear_model = keras.Model(inputs=model.input, outputs=model.layers[-2].output)
and then compute the gradients like this:
def get_saliency_map(model, image, class_idx):
    with tf.GradientTape() as tape:
        tape.watch(image)
        predictions = model(image)
        loss = predictions[:, class_idx]
    # Get the gradients of the loss w.r.t. the input image
    gradient = tape.gradient(loss, image)
    # Take the maximum across the channel dimension
    gradient = tf.reduce_max(gradient, axis=-1)
    # Convert to numpy
    gradient = gradient.numpy()
    # Normalize between 0 and 1
    min_val, max_val = np.min(gradient), np.max(gradient)
    smap = (gradient - min_val) / (max_val - min_val + keras.backend.epsilon())
    return smap
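A minimal usage sketch for the function above (assuming x_test and the predicted class index pred from the question; the matplotlib part is just one way to display the map):
import numpy as np
import matplotlib.pyplot as plt

# add a batch dimension and make the image a float tensor the tape can watch
image = tf.convert_to_tensor(np.expand_dims(x_test[0], axis=0), dtype=tf.float32)
smap = get_saliency_map(linear_model, image, class_idx=pred)

plt.imshow(smap[0], cmap='hot')  # smap has shape (1, 28, 28) after the channel-wise max
plt.axis('off')
plt.show()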

ConvLSTM2D prediction is the same as the image at t-1

I am trying to predict the next image in a sequence of images, and I'm not sure why LSTMs aren't cutting it for me. My predicted image always seems to be a copy of the image at the previous timestep. I had similar results when using Conv3D on my images, and I'm not sure why that is either. My input has been normalized to the range [0, 1], and I multiply my output by 255 because my Ys weren't normalized.
This is my LSTM model:
lstm = tf.keras.models.Sequential([
    tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1), input_shape=X.shape[1:]),
    tf.keras.layers.GaussianNoise(0.05),
    tf.keras.layers.ConvLSTM2D(25, padding='same', kernel_size=(3,3), return_sequences=True, stateful=False),
    tf.keras.layers.ConvLSTM2D(25, padding='same', kernel_size=(3,3), return_sequences=True),
    tf.keras.layers.ConvLSTM2D(25, padding='same', kernel_size=(3,3), return_sequences=False),
    tf.keras.layers.Conv2D(1, padding='same', kernel_size=(1,1), trainable=False),
    tf.keras.layers.Lambda(lambda x: tf.keras.backend.squeeze(x, axis=-1)),
    tf.keras.layers.Lambda(lambda x: 255. * tf.clip_by_value(x, 0., 1.))
])
And this is my Conv3D model:
conv = tf.keras.models.Sequential([
    tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1), input_shape=X.shape[1:]),
    tf.keras.layers.GaussianNoise(0.05),
    tf.keras.layers.Conv3D(25, padding='same', data_format='channels_last', kernel_size=(5,3,3)),
    tf.keras.layers.LeakyReLU(),
    tf.keras.layers.Conv3D(25, padding='same', data_format='channels_last', kernel_size=(5,3,3)),
    tf.keras.layers.LeakyReLU(),
    tf.keras.layers.Conv3D(1, padding='same', data_format='channels_last', kernel_size=(1,1,1), trainable=True),
    tf.keras.layers.Lambda(lambda x: tf.keras.backend.squeeze(x, axis=-1)),
    tf.keras.layers.Conv2D(1, kernel_size=(1,1), data_format='channels_first', trainable=True),
    tf.keras.layers.Lambda(lambda x: tf.keras.backend.squeeze(x, axis=-3)),
    tf.keras.layers.LeakyReLU(),
    tf.keras.layers.Lambda(lambda x: 255. * tf.clip_by_value(x, 0., 1.))
])
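To quantify how close the predictions are to simply copying the previous frame, this is the kind of baseline check I have in mind (only a sketch; X, Y and lstm are as defined above, and I'm assuming the frame at t-1 is X[:, -1]):
import numpy as np

preds = lstm.predict(X)            # model output, already scaled back to [0, 255]
persistence = X[:, -1] * 255.      # "copy the last input frame" baseline
print('model MSE:      ', np.mean((preds - Y) ** 2))
print('persistence MSE:', np.mean((persistence - Y) ** 2))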
I tried making an SSIM loss function, but it makes my model predict that the next image will be extremely bright, and it performs much worse than simply using MSE.
This is the loss function I made. I know it looks extreme, but all my images are structurally very similar to each other, so I believe the harshness is warranted. There were no NaN errors during training.
def custom_err(y_true, y_pred):
    # SSIM has range [-1, 1], with -1 being the worst and 1 being the best
    def SSIM(y_true, y_pred):
        ssim = tf.image.ssim(tf.expand_dims(y_true, -1), tf.expand_dims(y_pred, -1), 255.)
        return ssim
    ssim = SSIM(y_true, y_pred)
    return 10**(abs(ssim - 1) * 20) - 1
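For comparison, the more conventional DSSIM-style formulation (simply 1 minus the mean SSIM) might be worth trying; this is only an alternative sketch, not a verified fix for the brightness issue:
def dssim_loss(y_true, y_pred):
    # SSIM lies in [-1, 1]; 1 - SSIM is 0 when the images match exactly
    ssim = tf.image.ssim(tf.expand_dims(y_true, -1), tf.expand_dims(y_pred, -1), max_val=255.)
    return 1.0 - tf.reduce_mean(ssim)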

Backpropagation Using Tensorflow and Numpy MSE not Dropping

I am trying to implement backpropagation myself, without using the GradientDescentOptimizer from TF; I just want to update my own weights and biases. The problem is that the mean squared error (cost) is not approaching zero. It just stays at around 0.2xxx. Is it because of my inputs, which are 520x1600 (yes, each input has 1600 units and yes, there are 520 of them), or is the number of neurons in my hidden layer the problem? I have tried implementing this with GradientDescentOptimizer and minimize(cost), which works fine (the cost gets close to zero as training goes on), so maybe there is an issue in my code for updating the weights and biases.
Here's my code:
import tensorflow as tf
import numpy as np
from BPInputs40 import pattern, desired;
#get the inputs and desired outputs, 520 inputs, each has 1600 units
train_in = pattern
train_out = desired
learning_rate=tf.constant(0.5)
num_input_neurons = len(train_in[0])
num_output_neurons = len(train_out[0])
num_hidden_neurons = 20
#weight matrix initialization with random values
w_h = tf.Variable(tf.random_normal([num_input_neurons, num_hidden_neurons]), dtype=tf.float32)
w_o = tf.Variable(tf.random_normal([num_hidden_neurons, num_output_neurons]), dtype=tf.float32)
b_h = tf.Variable(tf.random_normal([1, num_hidden_neurons]), dtype=tf.float32)
b_o = tf.Variable(tf.random_normal([1, num_output_neurons]), dtype=tf.float32)
# Model input and output
x = tf.placeholder("float")
y = tf.placeholder("float")
def sigmoid(v):
    return tf.div(tf.constant(1.0), tf.add(tf.constant(1.0), tf.exp(tf.negative(v*0.001))))
def derivative(v):
    return tf.multiply(sigmoid(v), tf.subtract(tf.constant(1.0), sigmoid(v)))
output_h = tf.sigmoid(tf.add(tf.matmul(x,w_h),b_h))
output_o = tf.sigmoid(tf.add(tf.matmul(output_h,w_o),b_o))
error = tf.subtract(output_o,y) #(1x35)
mse = tf.reduce_mean(tf.square(error))
delta_o=tf.multiply(error,derivative(output_o))
delta_b_o=delta_o
delta_w_o=tf.matmul(tf.transpose(output_h), delta_o)
delta_backprop=tf.matmul(delta_o,tf.transpose(w_o))
delta_h=tf.multiply(delta_backprop,derivative(output_h))
delta_b_h=delta_h
delta_w_h=tf.matmul(tf.transpose(x),delta_h)
#updating the weights
train = [
    tf.assign(w_h, tf.subtract(w_h, tf.multiply(learning_rate, delta_w_h))),
    tf.assign(b_h, tf.subtract(b_h, tf.multiply(learning_rate, tf.reduce_mean(delta_b_h, 0)))),
    tf.assign(w_o, tf.subtract(w_o, tf.multiply(learning_rate, delta_w_o))),
    tf.assign(b_o, tf.subtract(b_o, tf.multiply(learning_rate, tf.reduce_mean(delta_b_o, 0))))
]
sess = tf.Session()
sess.run(tf.global_variables_initializer())
err,target=1, 0.005
epoch, max_epochs = 0, 2000000
while epoch < max_epochs:
    epoch += 1
    err, _ = sess.run([mse, train], {x: train_in, y: train_out})
    if epoch % 1000 == 0:
        print('Epoch:', epoch, '\nMSE:', err)
answer = tf.equal(tf.floor(output_o + 0.5), y)
accuracy = tf.reduce_mean(tf.cast(answer, "float"))
print(sess.run([output_o], feed_dict={x: train_in, y: train_out}));
print("Accuracy: ", (1-err) * 100 , "%");
Update: I got it working now. The MSE dropped to almost zero once I increased the number of neurons in the hidden layer. I tried 5200 and 6400 neurons for the hidden layer, and with just 5000 epochs the accuracy was almost 99%. Also, the largest learning rate that worked was 0.1; above that, the MSE would not get close to zero.
I'm not an expert in this field, but it looks like your weights are updated correctly, and the fact that your MSE decreases from higher values down to 0.2xxx is a strong indicator of that. I would definitely try running this problem with many more hidden neurons (e.g. 500).
By the way, are your inputs normalized? If not, that could obviously be the reason.
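For example, a minimal standardization of the inputs from the question could look like this (just a sketch; pattern is the 520x1600 training input):
import numpy as np

train_in = np.asarray(pattern, dtype=np.float32)
# standardize every input unit to zero mean and unit variance across the 520 samples
mean = train_in.mean(axis=0, keepdims=True)
std = train_in.std(axis=0, keepdims=True) + 1e-8   # guard against constant units
train_in = (train_in - mean) / std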

Gradients are always zero

I have written an algorithm using the TensorFlow framework and ran into a problem: tf.train.Optimizer.compute_gradients(loss) returns zeros for all weights. Another problem is that if I set the batch size larger than about 5, tf.histogram_summary for the weights throws an error saying that some of the values are NaN.
I cannot provide a reproducible example here, because my code is quite bulky and I am not experienced enough with TF to make it shorter. I will try to paste some fragments here.
Main loop:
images_ph = tf.placeholder(tf.float32, shape=some_shape)
labels_ph = tf.placeholder(tf.float32, shape=some_shape)

output = inference(BATCH_SIZE, images_ph)
loss = loss(labels_ph, output)
train_op = train(loss, global_step)

session = tf.Session()
session.run(tf.initialize_all_variables())

for i in xrange(MAX_STEPS):
    images, labels = train_dataset.get_batch(BATCH_SIZE, yolo.INPUT_SIZE, yolo.OUTPUT_SIZE)
    session.run([loss, train_op], feed_dict={images_ph: images, labels_ph: labels})
Train op (this is where the problem occurs):
def train(total_loss):
    opt = tf.train.AdamOptimizer()
    grads = opt.compute_gradients(total_loss)
    # Here the gradients are zeros
    for grad, var in grads:
        if grad is not None:
            tf.histogram_summary("gradients/" + var.op.name, grad)
    return opt.apply_gradients(grads, global_step=global_step)
Loss (the loss is calculated correctly, since it changes from sample to sample):
def loss(labels, output):
    return tf.reduce_mean(tf.squared_difference(labels, output))
Inference: a set of convolutional layers with ReLU, followed by 3 fully connected layers with a sigmoid activation in the last layer. All weights are initialized from truncated normal distributions. All labels are vectors of fixed length with real numbers in the range [0, 1].
Thanks in advance for any help! If you have a hypothesis about my problem, please share it and I will try it. I can also share the whole code if you like.
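In case it helps with forming a hypothesis, here is a diagnostic I can add inside train(), using the same old summary API as above, to check whether the gradients are exactly zero or just vanishingly small (a sketch only):
grads = opt.compute_gradients(total_loss)
for grad, var in grads:
    if grad is not None:
        grad_norm = tf.sqrt(tf.reduce_sum(tf.square(grad)))
        tf.scalar_summary("grad_norm/" + var.op.name, grad_norm)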