I have an example of DNN learning XOR (right click to open in new tab): https://colab.research.google.com/drive/1M5xFp4gaXPCbnejM8-5_yLp1B6UvwdL8
I'm confused by these two lines (related to backpropagation):
Grads = T.gradient(Loss,[W1,B1,W2,B2]);
Optim.apply_gradients(zip(Grads,[W1,B1,W2,B2]));
I'm guessing the backward loop happens in T.gradient because those are gradient values related to the loss, but I'm still not clear. The questions are:
Question 1. Is there backpropagation (the backward loop) in those 2 lines?
Question 2. If there is backpropagation, does it happen in T.gradient or in Optim.apply_gradients?
Question 3. Because backpropagation runs backward, is the order of [W1,B1,W2,B2] important? I believe, e.g., the shuffled order [B1,W2,B2,W1] can't give the same result, because backpropagation needs the layer order from output back to input.
From my experiments, when I shuffle the order of weights and biases in the variable list, the optimisation still works. But backpropagation needs the layer order from output back to input, so I don't get this.
Source code:
#!pip install tensorflow==2.0.0rc2
%tensorflow_version 2.x
%reset -f
#libs
import tensorflow as tf;
#data
X = [[0,0],[0,1],[1,0],[1,1]];
Y = [[0], [1], [1], [0] ];
X = tf.convert_to_tensor(X,tf.float32);
Y = tf.convert_to_tensor(Y,tf.float32);
#model
W1 = tf.Variable(tf.random.uniform([2,20],-1,1));
B1 = tf.Variable(tf.random.uniform([ 20],-1,1));
W2 = tf.Variable(tf.random.uniform([20,1],-1,1));
B2 = tf.Variable(tf.random.uniform([ 1],-1,1));
#tf.function
def feedforward(X):
  H1  = tf.nn.leaky_relu(tf.matmul(X,W1) + B1);
  Out = tf.sigmoid(tf.matmul(H1,W2) + B2);
  return Out;
#end def
#train
Optim = tf.keras.optimizers.SGD(1e-1);
Steps = 1000;
for I in range(Steps):
  if I%(Steps/10)==0:
    Out  = feedforward(X);
    Loss = tf.reduce_sum(tf.square(Y-Out));
    print("Loss:",Loss.numpy());
  #end if

  with tf.GradientTape() as T:
    Out  = feedforward(X);
    Loss = tf.reduce_sum(tf.square(Y-Out));
  #end with

  #BACKPROPAGATION HERE?
  Grads = T.gradient(Loss,[W1,B1,W2,B2]);
  Optim.apply_gradients(zip(Grads,[W1,B1,W2,B2]));
#end for
Out = feedforward(X);
Loss = tf.reduce_sum(tf.square(Y-Out));
print("Loss:",Loss.numpy(),"(Last)");
print("\nDone.");
#eof
Let's take this one step at a time.
Step 1: Calculation of Gradients:
Grads = T.gradient(Loss,[W1,B1,W2,B2])
Here, we calculate the gradients of the loss with respect to the variables in the provided list. The list of gradients is indexed based on the indices of the variables. This means that Grads[0] will be the gradients with respect to W1, and so on.
Step 2: Next, we perform the update. This is done in:
Optim.apply_gradients(zip(Grads,[W1,B1,W2,B2]))
Here, Grads[0] are used to update W1, Grads[1] to update B1 and so on.
Note that gradient calculation and the update steps are performed separately. So as long as the variables appear in the same order in both lists, there shouldn't be any problems.
Also, GradientTape has to be used with Eager Execution.
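For example (a minimal sketch, reusing Loss, the tape T, and Optim from the question's code), an order that is shuffled but consistent between the two calls behaves the same:

Vars  = [B1, W2, B2, W1]                 # shuffled relative to [W1,B1,W2,B2]
Grads = T.gradient(Loss, Vars)           # Grads[k] is the gradient for Vars[k]
Optim.apply_gradients(zip(Grads, Vars))  # each gradient is paired with its own variable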
With TensorFlow 2 in default eager mode, and even without the @tf.function decorator to build a graph (it is commented out as #tf.function in the code above), TensorFlow still tracks the relations between tensors during the calculation: https://stats.stackexchange.com/a/272000/142160
TensorFlow tracks every variable here:
with tf.GradientTape() as T:
  Out  = feedforward(X);
  Loss = tf.reduce_sum(tf.square(Y-Out));
It is automatic (algorithmic) differentiation rather than symbolic or numerical differentiation, and thus all the gradients obtained by the following call are already at their proper depths in backpropagation (just like the backward loop that computes the errors at all layers):
Grads = T.gradient(Loss,[W1,B1,W2,B2]);
After that, the optimiser applies the gradients to update the weights and biases:
Optim.apply_gradients(zip(Grads,[W1,B1,W2,B2]));
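For intuition (a rough sketch, not TensorFlow's actual implementation), plain SGD's apply_gradients amounts to this per-variable update:

LearningRate = 1e-1
for G, V in zip(Grads, [W1, B1, W2, B2]):
    V.assign_sub(LearningRate * G)  # gradient descent: move each variable against its gradient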
Related
I'm trying to train a model in self-supervised learning. The flow chart is something like the following (image not shown here): x_1 and x_2 are passed through a pretrained network N1, and N2 is trained so that N2(N1(x_1)) predicts N1(x_2).
Let's assume that N1 is already trained and we want to train just N2. This is my current implementation:
x_1 = tf.placeholder(tf.float32, [None, 128, 128, 1])
x_2 = tf.placeholder(tf.float32, [None, 128, 128, 1])
s_t1 = tf.stop_gradient(N1(x_1)) # treat s_t1 as a constant
s_t2_pred = N2(s_t1)
s_t2 = tf.stop_gradient(N1(x_2)) # treat s_t2 as a constant
loss = some_loss_function(s_t2, s_t2_pred)
train_op = tf.train.AdamOptimizer(lr).minimize(loss)
In this way, I should be optimizing only N2. What confuses me is that if I use the following code instead, I obtain very different results (much better than the above):
# treat everything as a variable:
s_t1 = N1(x_1)
s_t2_pred = N2(s_t1)
s_t2 = N1(x_2)
loss = some_loss_function(s_t2, s_t2_pred)
var_list = take_all_variables_in_N2()
train_op = tf.train.AdamOptimizer(lr).minimize(loss, var_list)
I wonder what is the problem with the first implementation. What is exactly the behaviour of tf.stop_gradient (the documentation is a bit poor)? How does this differ from the second approach?
From a practical perspective in self-supervised learning: what is the difference between the two? Which one is the correct approach?
Thank you :)
I added a possible solution to the problem in the comments below. I would still be happy to receive any feedback from more experienced users and to share some opinions on the best approach to structure a self-supervised learning problem in tensorflow.
Bye, G.
I found a possible solution to my question and I'm posting it here, in case someone may find it useful.
Apparently, tf.stop_gradient() only stops new gradients from being back-propagated through the layers, but: if we have a momentum term (e.g. when using Adam or RMSProp) the variables of such layers can still be updated due to gradients accumulated in the past (contained in the momentum term). Let's look at the simple case of SGD + momentum; the formula would be:
w1 = w0 - a*grad(loss) - b*v0
where w1 and w0 are the weights at times 1 and 0 respectively, a is the learning rate, and v0 is the accumulated velocity (a function of the past gradients). Using tf.stop_gradient() is equivalent to multiplying the second term by zero. Then, the update rule becomes:
w1 = w0 - b*v0
i.e. we still have a momentum component that can update the weights.
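A tiny numeric sketch of that update rule (plain Python, made-up values): even when the current gradient is forced to zero, the accumulated velocity still moves the weight:

a, b = 0.1, 0.9     # learning rate and momentum coefficient
w0, v0 = 1.0, 0.5   # current weight and accumulated velocity
grad = 0.0          # gradient blocked, e.g. by tf.stop_gradient
w1 = w0 - a * grad - b * v0
print(w1)           # 0.55: the weight moved despite a zero gradient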
A workaround to this problem is to explicitly pass the variables to be updated to the optimizer. For example:
var_list = take_all_variables_in_N2()
train_op = tf.train.AdamOptimizer(lr).minimize(loss, var_list)
References:
[1] http://ruder.io/optimizing-gradient-descent/
[2] Using stop_gradient with AdamOptimizer in TensorFlow
Here is what I'm trying to implement:
We calculate loss based on F(X), as usual. But we also define "adversarial loss" which is a loss based on F(X + e). e is defined as dF(X)/dX multiplied by some constant. Both loss and adversarial loss are backpropagated for the total loss.
In tensorflow, this part (getting dF(X)/dX) can be coded like below:
grad, = tf.gradients( loss, X )
grad = tf.stop_gradient(grad)
e = constant * grad
Below is my pytorch code:
class DocReaderModel(object):
    def __init__(self, embedding=None, state_dict=None):
        self.train_loss = AverageMeter()
        self.embedding = embedding
        self.network = DNetwork(opt, embedding)
        self.optimizer = optim.SGD(parameters)

    def adversarial_loss(self, batch, loss, embedding, y):
        self.optimizer.zero_grad()
        loss.backward(retain_graph=True)
        grad = embedding.grad
        grad.detach_()
        perturb = F.normalize(grad, p=2) * 0.5
        self.optimizer.zero_grad()
        adv_embedding = embedding + perturb
        network_temp = DNetwork(self.opt, adv_embedding)  # This is how to get F(X)
        network_temp.training = False
        network_temp.cuda()
        start, end, _ = network_temp(batch)  # This is how to get F(X)
        del network_temp  # I even deleted this instance.
        return F.cross_entropy(start, y[0]) + F.cross_entropy(end, y[1])

    def update(self, batch):
        self.network.train()
        start, end, pred = self.network(batch)
        loss = F.cross_entropy(start, y[0]) + F.cross_entropy(end, y[1])
        loss_adv = self.adversarial_loss(batch, loss, self.network.lexicon_encoder.embedding.weight, y)
        loss_total = loss + loss_adv
        self.optimizer.zero_grad()
        loss_total.backward()
        self.optimizer.step()
I have a few questions:
1) I substituted tf.stop_gradient with grad.detach_(). Is this correct?
2) I was getting "RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time." so I added retain_graph=True to loss.backward, and that specific error went away.
However, now I'm getting a memory error after a few epochs (RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1525909934016/work/aten/src/THC/generic/THCStorage.cu:58). I suspect I'm unnecessarily retaining the graph.
Can someone let me know PyTorch's best practice on this? Any hint or even a short comment will be highly appreciated.
I think you are trying to implement a generative adversarial network (GAN), but from the code I can't follow what you are trying to achieve, as there are a few missing pieces for a GAN to work. I can see there's a discriminator network module, DNetwork, but the generator network module is missing.
If I had to guess, when you say "loss function twice", I assume you mean you have one loss function for the discriminator net and another for the generator net. If that's the case, let me share how I would implement a basic GAN model.
As an example, let's take a look at this Wasserstein GAN Jupyter notebook
I'll skip the less important bits and zoom into the important ones here:
First, import PyTorch libraries and set up
# Set up batch size, image size, and size of noise vector:
bs, sz, nz = 64, 64, 100 # nz is the size of the latent z vector for creating some random noise later
Build a discriminator module
class DCGAN_D(nn.Module):
    def __init__(self):
        ... truncated, the usual neural nets stuffs, layers, etc ...

    def forward(self, input):
        ... truncated, the usual neural nets stuffs, layers, etc ...

Build a generator module

class DCGAN_G(nn.Module):
    def __init__(self):
        ... truncated, the usual neural nets stuffs, layers, etc ...

    def forward(self, input):
        ... truncated, the usual neural nets stuffs, layers, etc ...
Put them all together
netG = DCGAN_G().cuda()
netD = DCGAN_D().cuda()
Optimizer needs to be told what variables to optimize. A module automatically keeps track of its variables.
optimizerD = optim.RMSprop(netD.parameters(), lr = 1e-4)
optimizerG = optim.RMSprop(netG.parameters(), lr = 1e-4)
One forward step and one backward step for Discriminator
Here, the network calculates gradients during the backward pass, depending on the input to this function. So, in my case, I have 3 types of losses: the generator loss, the discriminator real-image loss, and the discriminator fake-image loss. I can get the gradient of the loss function three times, once for each of the 3 different net passes.
def step_D(input, init_grad):
    # input can be from generator's generated image data or input image from dataset
    err = netD(input)
    err.backward(init_grad)  # backward pass net to calculate gradient
    return err  # loss
Control trainable parameters [IMPORTANT]
Trainable parameters in the model are those that require gradients.
def make_trainable(net, val):
    for p in net.parameters():
        p.requires_grad = val  # note: this is later set to False for netD during the netG update in the train loop
In TensorFlow, this part can be coded like below:
grad = tf.gradients(loss, X)
grad = tf.stop_gradient(grad)
So, I think this will answer your first question, "I substituted tf.stop_gradient with grad.detach_(). Is this correct?"
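For reference, a minimal standalone PyTorch sketch of that pattern (the tensors here are made up, just to show the shape of it): detach() cuts a tensor out of the graph, which plays the same role as tf.stop_gradient:

import torch

x = torch.randn(4, 8, requires_grad=True)
loss = (x ** 2).sum()
grad, = torch.autograd.grad(loss, x)  # dF(X)/dX
perturb = 0.5 * grad.detach()         # detached: no gradient will flow back through perturb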
Train loop
You can see here how the 3 different loss functions are called.
def train(niter, first=True):
    for epoch in range(niter):
        # Make iterable from PyTorch DataLoader
        data_iter = iter(dataloader)
        i = 0
        while i < n:
            ###########################
            # (1) Update D network
            ###########################
            make_trainable(netD, True)
            # train the discriminator d_iters times
            d_iters = 100
            j = 0
            while j < d_iters and i < n:
                j += 1
                i += 1
                # clamp parameters to a cube
                for p in netD.parameters():
                    p.data.clamp_(-0.01, 0.01)
                data = next(data_iter)

                ##### train with real #####
                real_cpu, _ = data
                real_cpu = real_cpu.cuda()
                real = Variable(data[0].cuda())
                netD.zero_grad()
                # Real image discriminator loss
                errD_real = step_D(real, one)

                ##### train with fake #####
                fake = netG(create_noise(real.size()[0]))
                input.data.resize_(real.size()).copy_(fake.data)
                # Fake image discriminator loss
                errD_fake = step_D(input, mone)
                # Discriminator loss
                errD = errD_real - errD_fake
                optimizerD.step()

            ###########################
            # (2) Update G network
            ###########################
            make_trainable(netD, False)
            netG.zero_grad()
            # Generator loss
            errG = step_D(netG(create_noise(bs)), one)
            optimizerG.step()

            print('[%d/%d][%d/%d] Loss_D: %f Loss_G: %f Loss_D_real: %f Loss_D_fake %f'
                  % (epoch, niter, i, n,
                     errD.data[0], errG.data[0], errD_real.data[0], errD_fake.data[0]))
"I was getting "RuntimeError: Trying to backward through the graph a second time..."
PyTorch has this behaviour: to reduce GPU memory usage, during the .backward() call all the intermediary results (saved activations, etc.) are deleted when they are no longer needed. Therefore, if you try to call .backward() again, the intermediary results don't exist and the backward pass cannot be performed (and you get the error you see).
It depends on what you are trying to do. You can call .backward(retain_graph=True) to make a backward pass that does not delete the intermediary results, so you will be able to call .backward() again. All but the last call to backward should use the retain_graph=True option.
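A minimal standalone sketch of that rule (not tied to the question's model):

import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 3

y.backward(retain_graph=True)  # keep intermediary results for another pass
print(x.grad)                  # tensor(12.)
y.backward()                   # last call: the graph may now be freed
print(x.grad)                  # tensor(24.), gradients accumulate across calls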
Can someone let me know pytorch's best practice on this
As you can see from the PyTorch code above, and from the way things are done in PyTorch, which tries to stay Pythonic, you can get a sense of PyTorch's best practice there.
If you want to work with higher-order derivatives (i.e. a derivative of a derivative) take a look at the create_graph option of backward.
For example:
loss = get_loss()
loss.backward(create_graph=True)
loss_grad_penalty = loss + loss.grad
loss_grad_penalty.backward()
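A more explicit standalone sketch of the same idea, using torch.autograd.grad (the numbers are illustrative only):

import torch

w = torch.tensor(3.0, requires_grad=True)
loss = w ** 2

# create_graph=True makes the gradient itself differentiable
grad_w, = torch.autograd.grad(loss, w, create_graph=True)  # grad_w = 2*w = 6
penalized = loss + grad_w ** 2                             # loss plus a gradient penalty term
penalized.backward()                                       # d/dw (w**2 + (2*w)**2) = 10*w
print(w.grad)                                              # tensor(30.)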
I am wondering if tf.stop_gradient stops the gradient computation of just a given op, or stops the update of its input tf.Variable. I have the following problem: during the forward-path computation on MNIST, I would like to perform a set of operations on the weights (let's say W to W*) and then do a matmul with the inputs. However, I would like to exclude these operations from the backward path. I want only dE/dW computed during training with backpropagation.
The code I wrote prevents W from getting updated. Could you please help me understand why? If these were variables, I understand I should set their trainable property to False, but these are operations on weights. If stop_gradient cannot be used for this purpose, then how do I build two graphs, one for the forward path and the other for the backward path?
def build_layer(inputs, fmap, nscope, layer_size1, layer_size2, faulty_training):
    with tf.name_scope(nscope):
        if (faulty_training):
            ## trainable weight
            weights_i = tf.Variable(tf.truncated_normal([layer_size1, layer_size2], stddev=1.0 / math.sqrt(float(layer_size1))), name='weights_i')
            ## Operations on weight whose gradient should not be computed during backpropagation
            weights_fx_t = tf.multiply(268435456.0, weights_i)
            weight_fx_t = tf.stop_gradient(weights_fx_t)
            weights_fx = tf.cast(weights_fx_t, tf.int32)
            weight_fx = tf.stop_gradient(weights_fx)
            weights_fx_fault = tf.bitwise.bitwise_xor(weights_fx, fmap)
            weight_fx_fault = tf.stop_gradient(weights_fx_fault)
            weights_fl = tf.cast(weights_fx_fault, tf.float32)
            weight_fl = tf.stop_gradient(weights_fl)
            weights = tf.stop_gradient(tf.multiply((1.0/268435456.0), weights_fl))
            ##### end transformation
        else:
            weights = tf.Variable(tf.truncated_normal([layer_size1, layer_size2], stddev=1.0 / math.sqrt(float(layer_size1))), name='weights')
        biases = tf.Variable(tf.zeros([layer_size2]), name='biases')
        hidden = tf.nn.relu(tf.matmul(inputs, weights) + biases)
    return weights, hidden
I am using the tensorflow gradient descent optimizer to do the training.
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
global_step = tf.Variable(0, name='global_step', trainable=False)
train_op = optimizer.minimize(loss, global_step=global_step)
Stop gradient will prevent the backpropagation from continuing past that node in the graph. Your code doesn't have any path from weights_i to the loss except the one that goes through weights_fx_t, where the gradient is stopped. This is what causes weights_i not to be updated during training. You don't need to put stop_gradient after every step; using it just once stops the backpropagation there.
If stop_gradient doesn't do what you want then you can get the gradients by doing tf.gradients and you can write your own update op by using tf.assign. This will allow you to alter the gradients however you want.
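A rough sketch of that alternative in TF1 style (learning_rate and weights_i here just follow your snippet; adapt the names to your code):

# Gradient of the loss w.r.t. the raw variable
grad_w = tf.gradients(loss, [weights_i])[0]
# ... modify grad_w here however you want ...
# Hand-written update op instead of optimizer.minimize()
update_op = tf.assign_sub(weights_i, learning_rate * grad_w)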
The feature I'm after is to be able to tell what the gradient of a given variable is with respect to my error function given some data.
One way to do this would be to see how much the variable has changed after a call to train, but obviously that can vary massively based on the learning algorithm (for example it would be almost impossible to tell with something like RProp) and just isn't very clean.
Thanks in advance.
The tf.gradients() function allows you to compute the symbolic gradient of one tensor with respect to one or more other tensors—including variables. Consider the following simple example:
data = tf.placeholder(tf.float32)
var = tf.Variable(...) # Must be a tf.float32 or tf.float64 variable.
loss = some_function_of(var, data) # some_function_of() returns a `Tensor`.
var_grad = tf.gradients(loss, [var])[0]
You can then use this symbolic gradient to evaluate the gradient in some specific point (data):
sess = tf.Session()
var_grad_val = sess.run(var_grad, feed_dict={data: ...})
In TensorFlow 2.0 you can use GradientTape to achieve this. A GradientTape records the operations performed within its context so that gradients can be computed afterwards. Below is an example of how you might do that.
import tensorflow as tf
# Here goes the neural network weights as tf.Variable
x = tf.Variable(3.0)
# TensorFlow operations executed within the context of
# a GradientTape are recorded for differentiation
with tf.GradientTape() as tape:
    # Doing the computation in the context of the gradient tape
    # For example computing loss
    y = x ** 2
# Getting the gradient of network weights w.r.t. loss
dy_dx = tape.gradient(y, x)
print(dy_dx) # Returns 6
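The same pattern extends to several variables at once, e.g. the weights of a small layer (a sketch with made-up shapes):

w = tf.Variable(tf.random.normal([3, 2]))
b = tf.Variable(tf.zeros([2]))
x = tf.random.normal([5, 3])

with tf.GradientTape() as tape:
    loss = tf.reduce_sum(tf.square(tf.matmul(x, w) + b))

dw, db = tape.gradient(loss, [w, b])  # one gradient tensor per variable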
I'm wondering how to use stop_gradient in tensorflow, and the documentation is not clear to me.
I'm currently using stop_gradient to produce the gradient of the loss function w.r.t. the word embeddings in a CBOW word2vec model. I want to just get the value, and not do backpropagation (as I'm generating adversarial examples).
Currently, I'm using the code:
lossGrad = gradients.gradients(loss, embed)[0]
real_grad = lossGrad.eval(feed_dict)
But when I run this, it does the backpropagation anyway! What am I doing wrong, and just as importantly, how can I fix this?
CLARIFICATION: To clarify by "backpropagation" I mean "calculating values and updating model parameters".
UPDATE
If I run the two lines above after the first training step, then I get a different loss after 100 training steps than when I don't run those two lines. I might be fundamentally misunderstanding something about TensorFlow.
I've tried using set_random_seed both at the beginning of the graph declaration and before each training step. The total loss is consistent between multiple runs, but not between including/excluding those two lines. So if it's not the RNG causing the disparity, and it's not unanticipated updating of the model parameters between training steps, do you have any idea what would cause this behavior?
SOLUTION
Welp, it's a bit late but here's how I solved it. I only wanted to optimize over some, but not all, variables. I thought that the way to prevent optimizing some variables would be to use stop_grad - but I never found a way to make that work. Maybe there is a way, but what worked for me was to adjust my optimizer to only optimize over a list of variables. So instead of:
opt = tf.train.GradientDescentOptimizer(learning_rate=eta)
train_op = opt.minimize(loss)
I used:
opt = tf.train.GradientDescentOptimizer(learning_rate=eta)
train_op = opt.minimize(loss, var_list=[variables to optimize over])
This prevented opt from updating the variables not in var_list. Hopefully it works for you, too!
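For completeness, a sketch of the same idea in TensorFlow 2 style (eager mode; compute_loss and the variable filter below are placeholders, not real API):

opt = tf.keras.optimizers.SGD(learning_rate=eta)
train_vars = [v for v in all_variables if i_want_to_train(v)]  # placeholder filter

with tf.GradientTape() as tape:
    loss = compute_loss()                    # placeholder for your loss computation
grads = tape.gradient(loss, train_vars)
opt.apply_gradients(zip(grads, train_vars))  # only train_vars get updated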
tf.stop_gradient provides a way to not compute gradient with respect to some variables during back-propagation.
For example, in the code below, we have three variables, w1, w2, w3, and an input x. The loss is square(x.dot(w1) - x.dot(w2 * w3)). We want to minimize this loss w.r.t. w1 but want to keep w2 and w3 fixed. To achieve this we can just put tf.stop_gradient(tf.matmul(x, w2*w3)).
In the figure below, I plotted how w1, w2, and w3 evolve from their initial values as a function of the training iterations. It can be seen that w2 and w3 remain fixed while w1 changes until it becomes equal to w2 * w3.
(An image here shows that only w1 learns, while w2 and w3 do not.)
import tensorflow as tf
import numpy as np
w1 = tf.get_variable("w1", shape=[5, 1], initializer=tf.truncated_normal_initializer())
w2 = tf.get_variable("w2", shape=[5, 1], initializer=tf.truncated_normal_initializer())
w3 = tf.get_variable("w3", shape=[5, 1], initializer=tf.truncated_normal_initializer())
x = tf.placeholder(tf.float32, shape=[None, 5], name="x")
a1 = tf.matmul(x, w1)
a2 = tf.matmul(x, w2*w3)
a2 = tf.stop_gradient(a2)
loss = tf.reduce_mean(tf.square(a1 - a2))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
gradients = optimizer.compute_gradients(loss)
train_op = optimizer.apply_gradients(gradients)
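The answer doesn't show the training loop; a sketch of how the plot could have been produced (TF1 session style, random data just for illustration):

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):
        x_batch = np.random.randn(32, 5).astype(np.float32)
        sess.run(train_op, feed_dict={x: x_batch})
    # w2 and w3 keep their initial values; w1 converges towards w2 * w3
    print(sess.run([w1, w2, w3]))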
tf.gradients(loss, embed) computes the partial derivative of the tensor loss with respect to the tensor embed. TensorFlow computes this partial derivative by backpropagation, so it is expected behavior that evaluating the result of tf.gradients(...) performs backpropagation. However, evaluating that tensor does not perform any variable updates, because the expression does not include any assignment operations.
tf.stop_gradient() is an operation that acts as the identity function in the forward direction but stops the accumulated gradient from flowing through that operator in the backward direction. It does not prevent backpropagation altogether, but instead prevents an individual tensor from contributing to the gradients that are computed for an expression. The documentation for the operation has more details about the operation, and when to use it.
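A minimal sketch of that behaviour in TF2 eager style (values chosen just for illustration):

import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = tf.stop_gradient(x) * x  # forward value is still 3 * 3 = 9
grad = tape.gradient(y, x)
print(float(y), float(grad))     # 9.0 3.0: only the non-stopped factor contributes
                                 # (without stop_gradient the gradient would be 6.0)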