I want to understand the grad_ys parameter in tf.gradients. I've seen it used like a multiplier of the true gradient, but it's not clear from the definition. Mathematically, what would the whole expression look like?
Edit: better clarification of notation is here
The ys are summed up to make a single scalar y, and then tf.gradients computes dy/dx for each x in xs.
grad_ys represents the "starting" backprop value. It is a vector of 1s by default, but a different value can be passed when you want to chain several tf.gradients calls together -- you can pass the output of a previous tf.gradients call into grad_ys to continue the backprop flow.
For a formal definition, look at the chained expression under Reverse Accumulation here: https://en.wikipedia.org/wiki/Automatic_differentiation#Reverse_accumulation
The term corresponding to dy/dw3 * dw3/dw2 in TensorFlow is a vector of 1s (think of it as if TensorFlow wraps the cost with a dummy identity op). When you specify grad_ys, that vector of 1s is replaced by grad_ys.
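As a small sketch of that chaining (the variable names are illustrative):

import tensorflow as tf

x = tf.Variable(2.0)
y = x * x        # dy/dx = 2x
z = 3.0 * y      # dz/dy = 3

# Direct gradient: dz/dx = 3 * 2x = 12.0 at x = 2.
dz_dx = tf.gradients(z, x)[0]

# Chained: compute dz/dy first, then feed it in as grad_ys so the
# second call continues the backprop flow instead of starting from 1s.
dz_dy = tf.gradients(z, y)[0]
dz_dx_chained = tf.gradients(y, x, grad_ys=dz_dy)[0]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run([dz_dx, dz_dx_chained]))  # [12.0, 12.0]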
Related
Quantization schemes are generally non-differentiable because they contain thresholding operations, such as the round or sign function, whose derivatives are zero almost everywhere. This means we cannot get gradients for the trainable variables through the chain rule.
Instead, we can use a trick called the 'straight-through estimator', which enables us to back-propagate gradients to the individual trainable variables.
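For reference, here is a minimal straight-through-estimator sketch (the general trick, not necessarily the internals of any particular TensorFlow op): the forward pass rounds, while the backward pass behaves like the identity.

import tensorflow as tf

def ste_round(x):
    # Forward value equals tf.round(x); the stop_gradient term drops
    # out of backprop, so the gradient w.r.t. x is 1 everywhere.
    return x + tf.stop_gradient(tf.round(x) - x)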
One such method is tf.fake_quant_with_min_max_vars. The advantages of this format are that it can represent arbitrary magnitudes of ranges, the ranges don't have to be symmetrical, it can represent signed and unsigned values, and the linear spread makes doing multiplications straightforward (Blog, Paper).
So, my question is: can we differentiate the fake_quant function? And if so, does this function apply the straight-through estimator?
I tested this with a small code snippet:
import numpy as np
import tensorflow as tf

x = tf.cast(np.random.normal(0, 1, (10, 10)), tf.float32)
x_q = tf.fake_quant_with_min_max_vars(x, min=tf.reduce_min(x),
                                      max=tf.reduce_max(x), num_bits=3)
grad = tf.gradients(x_q, x)
In that case, almost every element of grad has value 1 (i.e., gradient 1), which means the gradient is passed straight through.
However, sometimes a few samples have gradient 0, or another constant such as 2, 3, 4, ...
Am I missing what's going on?
I want to train a model using TensorFlow.
I have the following variable, which I want the model to learn:
Mj=tf.get_variable('Mj_',dtype=tf.float32, shape=[500,4],initializer=tf.random_uniform_initializer(maxval=1, minval=0))
I want the resulting value of Mj to be between 0 and 1. How can I add this constraint?
The proper way to do this would be to pass the clipping function tf.clip_by_value as the constraint argument to the tf.Variable constructor:
Mj=tf.get_variable('Mj_',
dtype=tf.float32,
shape=[500,4],
initializer=tf.random_uniform_initializer(maxval=1, minval=0),
constraint=lambda t: tf.clip_by_value(t, 0, 1))
From the docs of tf.Variable:
constraint: An optional projection function to be applied to the
variable after being updated by an Optimizer (e.g. used to implement
norm constraints or value constraints for layer weights). The function
must take as input the unprojected Tensor representing the value of
the variable and return the Tensor for the projected value (which must
have the same shape). Constraints are not safe to use when doing
asynchronous distributed training.
Or you might want to consider simply adding a nonlinearity such as tf.sigmoid on top of your variable:
Mj=tf.get_variable('Mj_',dtype=tf.float32, shape=[500,4])
Mj_out=tf.sigmoid(Mj)
This will transform your variable to range between 0 and 1. Read more about activation functions here.
I think the function you're looking for is tf.clip_by_value (see the docs).
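If you prefer not to use the constraint argument, a rough sketch of the same idea is to run an explicit clipping op after each optimizer step (train_op here is a stand-in for your training op):

clip_Mj = tf.assign(Mj, tf.clip_by_value(Mj, 0.0, 1.0))
# After each training step:
# sess.run(train_op)
# sess.run(clip_Mj)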
I am using this function of TensorFlow to get the Jacobian of my function. I came across a few problems:
1. The TensorFlow documentation contradicts itself, if I am not mistaken, in the following two paragraphs:
gradients() adds ops to the graph to output the partial derivatives of ys with respect to xs. It returns a list of Tensor of length len(xs) where each tensor is the sum(dy/dx) for y in ys.
Returns:
A list of sum(dy/dx) for each x in xs.
According to my test, it does, in fact, return a vector of length len(ys) which is the sum(dy/dx) for each x in xs.
2. I do not understand why they designed it in a way that the return is the sum of the columns (or rows, depending on how you define your Jacobian).
3. How can I really get the Jacobian?
4. In the loss, I need the partial derivative of my function with respect to the input (x), but when I am optimizing with respect to the network weights, I define x as a placeholder whose value is fed later, while the weights are variables. In this case, can I still define the symbolic derivative of the function with respect to the input (x) and put it in the loss? (Later, when we optimize with respect to the weights, this will bring in a second-order derivative of the function.)
1. I think you are right and there is a typo there; it was probably meant to be "of length len(ys)".
2. For efficiency. I can't explain exactly the reasoning, but this seems to be a pretty fundamental characteristic of how TensorFlow handles automatic differentiation. See issue #675.
3. There is no straightforward way to get the Jacobian matrix in TensorFlow. Take a look at this answer and again issue #675. Basically, you need one call to tf.gradients per column/row.
4. Yes, of course. You can compute whatever gradients you want; there is no real difference between a placeholder and any other operation. A few operations do not have a gradient because it is not well defined or not implemented (in which case they generally return 0), but that's all.
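To illustrate point 3, here is a minimal sketch that builds the Jacobian with one tf.gradients call per output element (it assumes y is a 1-D tensor with statically known length):

import tensorflow as tf

def jacobian(y, x):
    # One tf.gradients call per row: row i is d(y[i])/dx.
    n = y.shape.as_list()[0]
    return tf.stack([tf.gradients(y[i], x)[0] for i in range(n)])

x = tf.constant([1.0, 2.0])
y = tf.stack([x[0] * x[1], x[0] + x[1]])
J = jacobian(y, x)

with tf.Session() as sess:
    print(sess.run(J))  # [[2. 1.], [1. 1.]]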
I am trying to understand the gradient descent algorithm by plotting the error against the values of the parameters in the function. What would be an example of a simple function of the form y = f(x), with just one input variable x and two parameters w1 and w2, such that it has a non-convex loss function? Is y = w1.tanh(w2.x) an example?
How does one know if the function has a non-convex loss function without plotting the graph ?
In iterative optimization algorithms such as gradient descent or Gauss-Newton, what matters is whether the function is locally convex. This holds (on a convex set) if and only if the Hessian matrix (the Jacobian of the gradient) is positive semi-definite. As for a non-convex function of one variable (see my Edit below), a perfect example is the function you provide, because its second derivative, i.e. its Hessian (which is of size 1x1 here), can be computed as follows:
first_deriv = d(w1*tanh(w2*x))/dx = w1*w2*sech^2(w2*x)
second_deriv = d(first_deriv)/dx = -2*w1*w2^2*sech^2(w2*x)*tanh(w2*x)
The sech^2 part is always positive, so the sign of second_deriv depends on the signs of w1 and tanh(w2*x), which vary with the values you supply for x, w1, and w2. Therefore, we can say that the function is not convex everywhere.
Edit: It wasn't clear to me what you meant by one input variable and two parameters, so I assumed that w1 and w2 were fixed beforehand and computed the derivative w.r.t. x. But if you want to optimize over w1 and w2 (which makes more sense if your function comes from a toy neural net), then you can compute the 2x2 Hessian w.r.t. (w1, w2) in the same way.
The same way as in high-school algebra: the second derivative tells you the direction of curvature. If it is nonnegative in every direction (i.e. the Hessian is positive semi-definite), then the function is convex.
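For intuition, here is a small numeric sketch of the loss surface for y = w1*tanh(w2*x) (the single data point and the squared loss are my own assumptions, just to make the surface concrete). Since (w1, w2) and (-w1, -w2) produce the same output, the loss has at least two symmetric minima and therefore cannot be convex.

import numpy as np

x, y_true = 1.0, 0.5  # one hypothetical training point
w1, w2 = np.meshgrid(np.linspace(-3, 3, 201), np.linspace(-3, 3, 201))
loss = (w1 * np.tanh(w2 * x) - y_true) ** 2

# To visualize the non-convex surface:
# import matplotlib.pyplot as plt
# plt.contourf(w1, w2, loss, levels=50)
# plt.xlabel('w1'); plt.ylabel('w2'); plt.colorbar(); plt.show()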
Since TensorFlow supports variable reuse, some parts of the compute graph may occur multiple times in both the forward and backward passes. So my question is: is it possible to update variables with respect to their individual occurrences in the compute graph?
For example, in X_A->Y_B->Y_A->Y_B, Y_B occurs twice; how do we update each occurrence separately? I mean: first take the latter occurrence as constant and update the former one, then do the opposite.
A simpler example: say X_A, Y_B, Y_A are all scalar variables, and let Z = X_A * Y_B * Y_A * Y_B. Here the gradient of Z w.r.t. each occurrence of Y_B is X_A * Y_B * Y_A, but the total gradient of Z w.r.t. Y_B is 2 * X_A * Y_B * Y_A. In this example computing the gradients separately may seem unnecessary, but those computations are not always commutative.
In the first example, the gradient to the latter occurrence may be computed by calling tf.stop_gradient on X_A->Y_B. But I cannot think of a way to fetch the former one. Is there a way to do it in TensorFlow's Python API?
Edit:
@Seven provided an example of how to deal with this when reusing a single variable. However, it is often a whole variable scope that is reused, containing many variables and the functions that manage them. As far as I know, there is no built-in way to reuse a variable scope while applying tf.stop_gradient to all the variables it contains.
To my understanding, when you use A = tf.stop_gradient(A), A will be treated as a constant during backpropagation. Here is an example; maybe it can help you.
import tensorflow as tf

wa = tf.get_variable('a', shape=(), dtype=tf.float32,
                     initializer=tf.constant_initializer(1.5))
b = tf.get_variable('b', shape=(), dtype=tf.float32,
                    initializer=tf.constant_initializer(7))
x = tf.placeholder(tf.float32, shape=())

# The first factor is wrapped in stop_gradient, so backprop treats it
# as a constant; only the second factor (wa*x + b) contributes to dl/dx.
l = tf.stop_gradient(wa * x) * (wa * x + b)
op_gradient = tf.gradients(l, x)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
# dl/dx = (wa * 11) * wa = 16.5 * 1.5 = 24.75
print(sess.run([op_gradient], feed_dict={x: 11}))
I have a workaround for this question: define a custom getter for the variable scope in question, which wraps the default getter with tf.stop_gradient. This makes every variable returned in that scope a tensor that contributes no gradients, though things sometimes get complicated because it returns a Tensor instead of a Variable, for example when using tf.nn.batch_norm.
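For concreteness, a minimal sketch of that custom-getter workaround (the scope and variable names are illustrative):

import tensorflow as tf

def stop_gradient_getter(getter, *args, **kwargs):
    # Wrap the default getter so every variable fetched in this scope
    # comes back as a Tensor that contributes no gradients.
    return tf.stop_gradient(getter(*args, **kwargs))

# First occurrence: variables are created and trained as usual.
with tf.variable_scope('shared') as scope:
    w1 = tf.get_variable('w', shape=(), dtype=tf.float32)

# Second occurrence: reuse the same scope, but block gradients to it.
with tf.variable_scope(scope, reuse=True,
                       custom_getter=stop_gradient_getter):
    w2 = tf.get_variable('w', shape=(), dtype=tf.float32)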