I am implementing DeepFool using TensorFlow, but I do not know how to calculate the gradients, and I hope you can help me work it out.
The source code file is at https://drive.google.com/drive/folders/1JEiIidq8sNi03aliHFaYEESjDu87O6Tv?usp=sharing
with tf.GradientTape() as tape:
tape.watch(x)
fs = model(x)
loss_value = loss_func(one_hot_label_0, fs)
grad_orig = tape.gradient(fs[0, I[0]], x)
I have changed the arguments of tape.gradient() several times, but to no avail.
def loss_func(labels, logits):
return tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
while k_i == label and loop_i < max_iter:
pert = np.inf
one_hot_label_0 = tf.one_hot(label, num_classes)
with tf.GradientTape() as tape:
tape.watch(x)
fs = model(x)
loss_value = loss_func(one_hot_label_0, fs)
grad_orig = tape.gradient(fs[0, I[0]], x)
for k in range(1, num_classes):
one_hot_label_k = tf.one_hot(I[k], num_classes)
with tf.GradientTape() as tape:
tape.watch(x)
fs = model(x)
loss_value = loss_func(one_hot_label_k, fs)
cur_grad = tape.gradient(fs[0, I[k]], x)
Could you elaborate on why the gradient computation is not working? Do you have a stack trace?
Here's an example of a gradient computation performed as part of the fast gradient method implementation in CleverHans. You might also want to look at the PGD implementation, which calls the fast gradient method in a for loop.
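For readers who land here, a rough sketch of the kind of loss-gradient-with-respect-to-the-input computation the fast gradient method performs (illustrative only, not the actual CleverHans code; model, x and y_one_hot are assumed to be a Keras classifier, an input batch and one-hot labels):
import tensorflow as tf

def fgm_perturbation(model, x, y_one_hot, eps=0.1):
    """One fast-gradient-method step: gradient of the loss w.r.t. the input."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)  # x is a plain tensor, not a variable, so watch it explicitly
        logits = model(x)
        loss = tf.nn.softmax_cross_entropy_with_logits(labels=y_one_hot, logits=logits)
    grad = tape.gradient(loss, x)  # computed while the whole forward pass is on the tape
    return x + eps * tf.sign(grad)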
First, many thanks to 'idol' Nicolas, thank you very much. Here I define loss_func as follows:
def loss_func(logits, I, k):
# return tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
return logits[0, I[k]]
...
loss_value = loss_func(fs, I, 0)
grad_orig = tape.gradient(loss_value, x)
Then I can calculate the gradient normally. I have two obvious observations: one is that the attack is model-dependent, since different models produce different adversarial samples for the same image; the other is that many attacks on the MNIST dataset are perceptible, since most images change a lot, although I do find samples with few changes, as reported in the paper. However, I am not sure whether what I did follows the DeepFool paper, especially the gradient computation.
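For anyone hitting the same problem: the gradient most likely comes back as None when fs[0, I[0]] is sliced after the with block, because that slice is never recorded on the tape. A minimal sketch with the slice taken inside the tape (loosely following the variable names used above; the perturbation update and stopping logic are omitted):
with tf.GradientTape() as tape:
    tape.watch(x)
    fs = model(x)
    f_0 = fs[0, I[0]]  # slice taken while the tape is still recording
grad_orig = tape.gradient(f_0, x)

for k in range(1, num_classes):
    with tf.GradientTape() as tape:
        tape.watch(x)
        fs = model(x)
        f_k = fs[0, I[k]]
    cur_grad = tape.gradient(f_k, x)
    w_k = cur_grad - grad_orig   # per-class direction used by DeepFool
    f_k_diff = f_k - f_0         # per-class logit gap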
I'm trying to implement a Physics-Informed Neural Network (PINN). The differential part of the loss did bring some improvement (compared to a classical neural net) in the (supposedly) unknown area. This unknown area is actually known; I just removed it from the training and testing data sets to check the performance of the PINN against other techniques. Here is the code I'm using:
model = tf.keras.Sequential([
layers.Dense(units=64, activation='relu', input_shape=(2,)),
layers.Dense(units=64, activation='relu'),
layers.Dense(units=1,)
])
optimizer = tf.keras.optimizers.Adam()
objective = tf.keras.losses.Huber()
metric = tf.keras.metrics.MeanAbsoluteError()
w_phys = 0.5
w_loss = 1.0 - w_phys
with tf.device('gpu:0'):
for epoch in range(epochs):
cumulative_loss_train = 0.0
metric.reset_states()
for mini_batch, gdth in dataset:
with tf.GradientTape(persistent=True) as tape:
tape.watch(unknown_area_SOCP_tensor)
tape.watch(mini_batch)
# Physics loss
predictions_unkwon = model(unknown_area_SOCP_tensor, training=True)
d_f = tape.gradient(predictions_unkwon, unknown_area_SOCP_tensor)
# Physics part with P #
dp = tf.convert_to_tensor(1/((K*unknown_area_SOCP_tensor[:,0]+L)**2-4*R*unknown_area_SOCP_tensor[:,1]), dtype = np.float64)
phys_loss_p = 10*tf.cast(tf.math.reduce_mean(tf.math.square(d_f[:,1]**2 - dp)), np.float32)
# Traditional loss #
predictions = model(mini_batch, training=True)
loss = objective(gdth, predictions)
# Compute grads #
grads = tape.gradient(w_loss*loss + w_phys*(phys_loss_p), model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
cumulative_loss_train += loss
metric.update_state(gdth, predictions)
del tape
So far so good. K, R and L were fixed parameters.
The next step was to assume they were unknown and to try to figure out whether we could learn them.
I tried first by focusing only on the R parameter. Here is the code I used:
with tf.device('gpu:0'):
for epoch in range(epochs):
cumulative_loss_train = 0.0
metric.reset_states()
for mini_batch, gdth in dataset:
with tf.GradientTape(persistent=True) as tape:
tape.watch(unknown_area_SOCP_tensor)
tape.watch(mini_batch)
tape.watch(R)
# Physics loss
predictions_unkwon = model(unknown_area_SOCP_tensor, training=True)
d_f = tape.gradient(predictions_unkwon, unknown_area_SOCP_tensor)
# Physics part with P #
dp = tf.convert_to_tensor(1/((K*unknown_area_SOCP_tensor[:,0]+L)**2-4*R*unknown_area_SOCP_tensor[:,1]), dtype = np.float64)
phys_loss_p = 10*tf.cast(tf.math.reduce_mean(tf.math.square(d_f[:,1]**2 - dp)), np.float32)
# Traditional loss #
predictions = model(mini_batch, training=True)
loss = objective(gdth, predictions)
# Compute grads #
grads = tape.gradient(w_loss*loss + w_phys*(phys_loss_p), model.trainable_variables + [R])
optimizer.apply_gradients(zip(grads, model.trainable_variables + [R]))
cumulative_loss_train += loss
metric.update_state(gdth, predictions)
del tape
But that led to terrible results (high loss and a poor metric). Worse, the value of R has to be positive, and at the end of training R was estimated as a negative value...
I'm quite confident in the equation, since I have checked it many times and it matches the simulation software I'm using. Moreover, the equation brought value to the learning (predictions in the unknown area were much better).
Did I miss something here?
Thanks for your help!
I post my answer here in case it may help someone one day.
My issue came from gradient values that were too high. Clipping the gradients finally solved my issue.
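For illustration, a minimal sketch of what that can look like inside the training loop above (the clip_norm value is just an example, not the one actually used):
grads = tape.gradient(w_loss * loss + w_phys * phys_loss_p,
                      model.trainable_variables + [R])
# Rescale all gradients together so that their global norm is at most clip_norm
grads, _ = tf.clip_by_global_norm(grads, clip_norm=1.0)
optimizer.apply_gradients(zip(grads, model.trainable_variables + [R]))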
Without using @tf.function, the script works perfectly.
I want to use it to speed up training, but it gives me an error where I reuse the weight matrix from the embedding layer.
I think the error is caused by get_weights(), because it converts the tensors back to numpy arrays.
I tried using a tf.keras.layers.Dense instead of re-using the weights from the embedding, and it worked perfectly.
class Example(tf.keras.Model):
def __init__(self,):
super(Example, self).__init__()
self.embed_dim = embed_dim
self.vocab_size = vocab_size
self.embed = tf.keras.layers.Embedding(self.vocab_size, self.embed_dim)
...
def call(self, inputs, training):
...
embed_matrix = self.embed.get_weights()
# a dense layer
Vhid = tf.matmul(self.kernel, tf.transpose(embed_matrix[0]))
pred_w = tf.matmul(pred, Vhid) + self.bias
In my train script, I did:
@tf.function
def train_step(x, y, training=None):
with tf.GradientTape() as tape:
pred = model(x, y, training)
losses = compute_loss(y, pred)
grads = tape.gradient(losses, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
return losses
/home/thomas/projects/tf_convsent/models/.py:195 call *
embed_matrix = self.embed.get_weights() # [vocab_size, 300]
/home/thomas/.conda/envs/tf2_p37/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py:1177 get_weights
return backend.batch_get_value(params)
/home/thomas/.conda/envs/tf2_p37/lib/python3.7/site-packages/tensorflow/python/keras/backend.py:3011 batch_get_value
raise RuntimeError('Cannot get value inside Tensorflow graph function.')
RuntimeError: Cannot get value inside Tensorflow graph function.
Found the easiest solution, which improved training speed by ~50% (122 hrs to ~65 hrs):
just change
embed_matrix = self.embed.get_weights()
to
embed_matrix = self.embed.weights
and that will do the trick.
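A minimal self-contained illustration of the difference (toy sizes, not the author's model):
import tensorflow as tf

embed = tf.keras.layers.Embedding(1000, 300)
embed.build((None,))

@tf.function
def uses_variables(ids):
    # .weights returns the layer's tf.Variables, so this is traceable under tf.function
    embed_matrix = embed.weights[0]  # shape [vocab_size, embed_dim]
    return tf.matmul(tf.one_hot(ids, 1000), embed_matrix)

# embed.get_weights() would instead return numpy arrays, which is exactly what raises
# "Cannot get value inside Tensorflow graph function." during tracing.
print(uses_variables(tf.constant([1, 2, 3])).shape)  # (3, 300)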
Is there any way to get all the gradient values from backpropagation? I want to calculate the sum of all gradients. Is there a fast way to obtain this and plot it in TensorBoard with tf.summary.scalar?
You can get the explicit gradient tensors by using optimizer.compute_gradients() (refer to: https://www.tensorflow.org/api_docs/python/tf/train/Optimizer#compute_gradients), which will return a list of (grad, var) pairs.
Here is an example:
# given some optimizer
optimizer = tf.train.AdamOptimizer()
# and some loss
loss = ... some loss ...
# var_list = None, means all variable in the graph
grads_vars = optimizer.compute_gradients(loss=loss, var_list=None)
# OR:
# I prefer to state it explicitly
# if you use tf.keras.Model this is straightforward
grads_vars = optimizer.compute_gradients(loss=loss, var_list=model.variables)
# average the grads
mean_grad = tf.zeros(())
for grad, var in grads_vars:
mean_grad += tf.reduce_mean(grad)
mean_grad /= len(grads_vars)
Just pass mean_grad to tf.summary.scalar.
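For example (TF 1.x-style summaries; the log directory and training-loop variables are illustrative):
# Create a scalar summary op for the averaged gradient
mean_grad_summary = tf.summary.scalar('mean_grad', mean_grad)
summary_writer = tf.summary.FileWriter('./logs')
# Inside the training loop:
# summary_str, _ = sess.run([mean_grad_summary, train_op])
# summary_writer.add_summary(summary_str, global_step=step)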
Or you could use the tf.GradientTape() flavor (which requires a newer version of TensorFlow).
# tfe = tf.contrib.eager
with tfe.GradientTape() as tape:
y_hat = model(x)
loss = tf.losses.mean_squared_error(y, y_hat)
grads = tape.gradient(loss, model.variables)
# average the grads
mean_grad = tf.zeros(())
for grad in grads:
mean_grad += tf.reduce_mean(grad)
mean_grad /= len(grads)
Note that on newer versions of TensorFlow, tfe.GradientTape works in both eager and graph modes.
Since the Adam optimizer keeps a pair of running averages (mean/variance) for the gradients, I wonder how it should properly handle weight decay. I have seen two ways of implementing it.
Only update the mean/variance from the gradients of the objective loss, and decay the weights explicitly at each mini-batch (the following code is taken from https://github.com/dmlc/mxnet/blob/v0.7.0/python/mxnet/optimizer.py):
weight[:] -= lr*mean/(sqrt(variance) + self.epsilon)
wd = self._get_wd(index)
if wd > 0.:
weight[:] -= (lr * wd) * weight
Update the mean/variance from the gradients of the objective loss plus the regularization loss, and update the weights as usual (the following code is taken from https://github.com/dmlc/mxnet/blob/master/src/operator/optimizer_op-inl.h#L210):
grad = scalar<DType>(param.rescale_grad) * grad +
scalar<DType>(param.wd) * weight;
// stuff
Assign(out, req[0],
weight -
scalar<DType>(param.lr) * mean /
(F<square_root>(var) + scalar<DType>(param.epsilon)));
These two approaches sometimes show significant differences in training results. I actually think the first one makes more sense (and find that it gives better results from time to time). Caffe and older versions of mxnet follow the first approach, while torch, tensorflow and newer versions of mxnet follow the second one.
Really appreciate your help!
Edit: see also this PR which just got merged into TF.
When using pure SGD (without momentum) as the optimizer, weight decay is the same thing as adding an L2-regularization term to the loss. When using any other optimizer, this is not true.
Weight decay (don't know how to TeX here, so excuse my pseudo-notation):
w[t+1] = w[t] - learning_rate * dw - weight_decay * w
L2-regularization:
loss = actual_loss + lambda * 1/2 sum(||w||_2^2 for w in network_params)
Computing the gradient of the extra term in L2-regularization gives lambda * w and thus inserting it into the SGD update equation
dloss_dw = dactual_loss_dw + lambda * w
w[t+1] = w[t] - learning_rate * dloss_dw
       = w[t] - learning_rate * dactual_loss_dw - learning_rate * lambda * w[t]
gives the same as weight decay, but mixes lambda with the learning_rate. Any other optimizer, even SGD with momentum, gives a different update rule for weight decay than for L2-regularization! See the paper Fixing weight decay in Adam for more details. (Edit: AFAIK, this 1987 Hinton paper introduced "weight decay", literally as "each time the weights are updated, their magnitude is also decremented by 0.4%", on page 10.)
That being said, there doesn't seem to be support for "proper" weight decay in TensorFlow yet. There are a few issues discussing it, specifically because of the above paper.
One possible way to implement it is by writing an op that does the decay step manually after every optimizer step. A different way, which is what I'm currently doing, is using an additional SGD optimizer just for the weight decay, and "attaching" it to your train_op. Both of these are just crude work-arounds, though. My current code:
# In the network definition:
with arg_scope([layers.conv2d, layers.dense],
weights_regularizer=layers.l2_regularizer(weight_decay)):
# define the network.
loss = # compute the actual loss of your problem.
train_op = optimizer.minimize(loss, global_step=global_step)
if args.weight_decay not in (None, 0):
with tf.control_dependencies([train_op]):
sgd = tf.train.GradientDescentOptimizer(learning_rate=1.0)
train_op = sgd.minimize(tf.add_n(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)))
This somewhat makes use of TensorFlow's provided bookkeeping. Note that the arg_scope takes care of appending an L2-regularization term for every layer to the REGULARIZATION_LOSSES graph-key, which I then all sum up and optimize using SGD which, as shown above, corresponds to actual weight-decay.
Hope that helps, and if anyone gets a nicer code snippet for this, or TensorFlow implements it better (i.e. in the optimizers), please share.
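(Follow-up: decoupled weight decay did land in TensorFlow later, likely via the PR mentioned in the question's edit. Depending on your release, something like the following may be available; treat it as a sketch and check the docs for your version.)
# TF 1.x contrib variant of Adam with decoupled weight decay
optimizer = tf.contrib.opt.AdamWOptimizer(weight_decay=1e-4, learning_rate=1e-3)
train_op = optimizer.minimize(loss, global_step=tf.train.get_or_create_global_step())
# In TF 2.x a similar optimizer is available in TensorFlow Addons as tfa.optimizers.AdamW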
I came across the same question. I think this code, which I got from here, will work for you. It implements a weight-decay Adam optimizer by inheriting from tf.train.Optimizer. This is the cleanest solution I have found:
import re
import tensorflow as tf

class AdamWeightDecayOptimizer(tf.train.Optimizer):
"""A basic Adam optimizer that includes "correct" L2 weight decay."""
def __init__(self,
learning_rate,
weight_decay_rate=0.0,
beta_1=0.9,
beta_2=0.999,
epsilon=1e-6,
exclude_from_weight_decay=None,
name="AdamWeightDecayOptimizer"):
"""Constructs a AdamWeightDecayOptimizer."""
super(AdamWeightDecayOptimizer, self).__init__(False, name)
self.learning_rate = learning_rate
self.weight_decay_rate = weight_decay_rate
self.beta_1 = beta_1
self.beta_2 = beta_2
self.epsilon = epsilon
self.exclude_from_weight_decay = exclude_from_weight_decay
def apply_gradients(self, grads_and_vars, global_step=None, name=None):
"""See base class."""
assignments = []
for (grad, param) in grads_and_vars:
if grad is None or param is None:
continue
param_name = self._get_variable_name(param.name)
m = tf.get_variable(
name=param_name + "/adam_m",
shape=param.shape.as_list(),
dtype=tf.float32,
trainable=False,
initializer=tf.zeros_initializer())
v = tf.get_variable(
name=param_name + "/adam_v",
shape=param.shape.as_list(),
dtype=tf.float32,
trainable=False,
initializer=tf.zeros_initializer())
# Standard Adam update.
next_m = (
tf.multiply(self.beta_1, m) + tf.multiply(1.0 - self.beta_1, grad))
next_v = (
tf.multiply(self.beta_2, v) + tf.multiply(1.0 - self.beta_2,
tf.square(grad)))
update = next_m / (tf.sqrt(next_v) + self.epsilon)
# Just adding the square of the weights to the loss function is *not*
# the correct way of using L2 regularization/weight decay with Adam,
# since that will interact with the m and v parameters in strange ways.
#
# Instead we want to decay the weights in a manner that doesn't interact
# with the m/v parameters. This is equivalent to adding the square
# of the weights to the loss with plain (non-momentum) SGD.
if self._do_use_weight_decay(param_name):
update += self.weight_decay_rate * param
update_with_lr = self.learning_rate * update
next_param = param - update_with_lr
assignments.extend(
[param.assign(next_param),
m.assign(next_m),
v.assign(next_v)])
return tf.group(*assignments, name=name)
def _do_use_weight_decay(self, param_name):
"""Whether to use L2 weight decay for `param_name`."""
if not self.weight_decay_rate:
return False
if self.exclude_from_weight_decay:
for r in self.exclude_from_weight_decay:
if re.search(r, param_name) is not None:
return False
return True
def _get_variable_name(self, param_name):
"""Get the variable name from the tensor name."""
m = re.match("^(.*):\\d+$", param_name)
if m is not None:
param_name = m.group(1)
return param_name
And you can use it in the following way (I have made some changes to make it useful in a more general context). This function will return a train_op that can be used in the session:
def create_optimizer(loss, init_lr, num_train_steps, num_warmup_steps):
"""Creates an optimizer training op."""
global_step = tf.train.get_or_create_global_step()
learning_rate = tf.constant(value=init_lr, shape=[], dtype=tf.float32)
# Implements linear decay of the learning rate.
learning_rate = tf.train.polynomial_decay(
learning_rate,
global_step,
num_train_steps,
end_learning_rate=0.0,
power=1.0,
cycle=False)
# Implements linear warmup. I.e., if global_step < num_warmup_steps, the
# learning rate will be `global_step/num_warmup_steps * init_lr`.
if num_warmup_steps:
global_steps_int = tf.cast(global_step, tf.int32)
warmup_steps_int = tf.constant(num_warmup_steps, dtype=tf.int32)
global_steps_float = tf.cast(global_steps_int, tf.float32)
warmup_steps_float = tf.cast(warmup_steps_int, tf.float32)
warmup_percent_done = global_steps_float / warmup_steps_float
warmup_learning_rate = init_lr * warmup_percent_done
is_warmup = tf.cast(global_steps_int < warmup_steps_int, tf.float32)
learning_rate = (
(1.0 - is_warmup) * learning_rate + is_warmup * warmup_learning_rate)
# It is recommended that you use this optimizer for fine tuning, since this
# is how the model was trained (note that the Adam m/v variables are NOT
# loaded from init_checkpoint.)
optimizer = AdamWeightDecayOptimizer(
learning_rate=learning_rate,
weight_decay_rate=0.01,
beta_1=0.9,
beta_2=0.999,
epsilon=1e-6)
tvars = tf.trainable_variables()
grads = tf.gradients(loss, tvars)
# You can clip the gradients at this step if you need to (in general it is not necessary)
# (grads, _) = tf.clip_by_global_norm(grads, clip_norm=1.0)
train_op = optimizer.apply_gradients(
zip(grads, tvars), global_step=global_step)
# Normally the global step update is done inside of `apply_gradients`.
# However, `AdamWeightDecayOptimizer` doesn't do this. But if you use
# a different optimizer, you should probably take this line out.
new_global_step = global_step + 1
train_op = tf.group(train_op, [global_step.assign(new_global_step)])
return train_op
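The returned train_op can then be run in an ordinary session loop, for example (the hyperparameters here are illustrative):
train_op = create_optimizer(loss, init_lr=5e-5,
                            num_train_steps=10000, num_warmup_steps=1000)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(10000):
        sess.run(train_op)  # add a feed_dict here if your graph uses placeholders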
Assume I have the following loss function:
loss_a = tf.reduce_mean(my_loss_fn(model_output, targets))
loss_b = tf.reduce_mean(my_other_loss_fn(model_output, targets))
loss_final = loss_a + tf.multiply(alpha, loss_b)
To visualize the norm of the gradients w.r.t to loss_final one could do this:
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
grads_and_vars = optimizer.compute_gradients(loss_final)
grads, _ = list(zip(*grads_and_vars))
norms = tf.global_norm(grads)
gradnorm_s = tf.summary.scalar('gradient norm', norms)
train_op = optimizer.apply_gradients(grads_and_vars, name='train_op')
However, I would like to plot the norm of the gradients w.r.t. loss_a and loss_b separately. What is the most efficient way to do this? Do I have to call compute_gradients(..) on both loss_a and loss_b separately and then add the two gradients together before passing them to optimizer.apply_gradients(..)? I know this would be mathematically correct due to the sum rule, but it seems a bit cumbersome, and I also don't know how to implement the summation of the gradients correctly. Also, loss_final is rather simple because it's just a sum. What if loss_final were more complicated, e.g. a division?
I'm using Tensorflow 0.12.
You are right that combining gradients could get messy. Instead, just compute the gradients of each of the losses as well as the final loss. Because TensorFlow optimizes the directed acyclic graph (DAG) before execution, this doesn't result in duplicated work.
For example:
import tensorflow as tf
with tf.name_scope('inputs'):
W = tf.Variable(dtype=tf.float32, initial_value=tf.random_normal((4, 1), dtype=tf.float32), name='W')
x = tf.random_uniform((6, 4), dtype=tf.float32, name='x')
with tf.name_scope('outputs'):
y = tf.matmul(x, W, name='y')
def my_loss_fn(output, targets, name):
return tf.reduce_mean(tf.abs(output - targets), name=name)
def my_other_loss_fn(output, targets, name):
return tf.sqrt(tf.reduce_mean((output - targets) ** 2), name=name)
def get_tensors(loss_fn):
loss = loss_fn(y, targets, 'loss')
grads = tf.gradients(loss, W, name='gradients')
norm = tf.norm(grads, name='norm')
return loss, grads, norm
targets = tf.random_uniform((6, 1))
with tf.name_scope('a'):
loss_a, grads_a, norm_a = get_tensors(my_loss_fn)
with tf.name_scope('b'):
loss_b, grads_b, norm_b = get_tensors(my_other_loss_fn)
with tf.name_scope('combined'):
loss = tf.add(loss_a, loss_b, name='loss')
grad = tf.gradients(loss, W, name='gradients')
with tf.Session() as sess:
tf.global_variables_initializer().run(session=sess)
writer = tf.summary.FileWriter('./tensorboard_results', sess.graph)
res = sess.run([norm_a, norm_b, grad])
print(*res, sep='\n')
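To also write the two norms to TensorBoard (same pattern as gradnorm_s in the question), one could add scalar summaries, for example:
# Scalar summaries for the per-loss gradient norms
summ_a = tf.summary.scalar('norm_a', norm_a)
summ_b = tf.summary.scalar('norm_b', norm_b)
merged = tf.summary.merge([summ_a, summ_b])
# Inside the session, alongside the existing sess.run:
# summary_str = sess.run(merged)
# writer.add_summary(summary_str, global_step=0)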
Edit: In response to your comment... You can check the DAG of a tensorflow model using tensorboard. I've updated the code to store the graph.
Run tensorboard --logdir $PWD/tensorboard_results in a terminal and navigate to the URL printed on the command line (typically http://localhost:6006/). Then click on the GRAPH tab to view the DAG. You can recursively expand the tensors, ops, and namespaces to see subgraphs, individual operations, and their inputs.