I used the op "assign" to change the value of variables instead of "=", but the gradient I got was quite different. Could anyone explain the difference, and why? Thanks!
Like changing w = w1 to op1 = tf.assign(w, w1); sess.run(op1)
= and tf.assign are different operations.
= is a Python operation, in which you bind a Python value to a Python name.
tf.assign is a TensorFlow operation that assigns a value to the variable ref it is given and returns the assign operation.
= is executed in python and doesn't affect the computation graph.
tf.assign is a node in the computational graph.
To understand, let's run this simple script
import tensorflow as tf
x = tf.Variable(1)
y = tf.Variable(2)
x = y
print(x.name, y.name)
a = tf.Variable(1)
b = tf.Variable(2)
# overwrite a; otherwise a still refers to the variable whose content is 1
a = a.assign(b)
print(a.name, b.name)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print(sess.run([x, y, a, b]))
print(x.name, y.name) outputs Variable_1:0 Variable_1:0
because = is executed in python and you've overwritten the variable x.
print(a.name, b.name) outputs Assign:0 Variable_3:0 because you defined an assign op in the computational graph; a now refers to that assign op.
When you run the defined graph, you get:
[2, 2, 2, 2]
But these values are computed differently: one is a computation in the graph, the others are not.
If you forget to assign a to the assign op created with tf.assign (i.e. you change the line a = a.assign(b) to just a.assign(b)), then when you evaluate the graph you'll get:
[2, 2, 1, 2]
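The rebinding-vs-mutation distinction can be sketched in plain Python, with a hypothetical Var class standing in for tf.Variable (the class and its assign method are illustrative only, not TensorFlow API):

```python
class Var:
    """Toy stand-in for tf.Variable: a mutable box holding a value."""
    def __init__(self, value):
        self.value = value

    def assign(self, other):
        # Mutates the box in place, like running a tf.assign op does.
        self.value = other.value
        return self

x = Var(1)
y = Var(2)
x = y           # `=` rebinds the *name* x; the box that held 1 is untouched
print(x.value)  # 2, because x now names the same box as y

a = Var(1)
b = Var(2)
a.assign(b)     # mutates a's box; the name a still points to the same box
print(a.value)  # 2
```

The analogy is loose (TensorFlow defers the mutation until the assign node is run in a session), but it shows why `=` only changes what the Python name refers to.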
Related
I want to change my tf.placeholder values such that:
values < SmallConstant is set to 0.
It's not exactly clipping, so I can't use: tf.clip_by_value()
I tried the suggestion in Conditional assignment of tensor values in TensorFlow, and this is what I have so far:
x = tf.placeholder(tf.float32, None)
condition = tf.less(x, tf.constant(SmallConst))
tf.assign(x, tf.where(condition, tf.zeros_like(x), x))
However, on running this, I get an error saying
AttributeError: 'Tensor' object has no attribute 'assign'
It seems tf.assign() can be done on tf.Variable but not on tf.placeholder.
Is there any other way I can do this?
Thank you!
Yes, it's even easier than you think:
x = tf.placeholder(tf.float32, None)
# bool tensor the same shape as x, True where x < SmallConst
condition = x < SmallConst
# tensor equal to x where x < SmallConst, and 0 elsewhere
to_remove = x*tf.to_float(condition)
# subtracting to_remove zeroes out all values of x less than SmallConst
x_clipped = x - to_remove
I'd normally just put that into one line like:
x_clipped = x - x*tf.to_float(x < small_const)
note: using tf.to_float on a tensor of type bool will give you 0.0s in place of Falses and 1.0s in place of Trues
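The masking arithmetic above can be checked directly in NumPy (a sketch of the same logic, not TensorFlow code; small_const and the sample values are made up for illustration):

```python
import numpy as np

small_const = 0.5
x = np.array([0.1, 0.5, 0.9], dtype=np.float32)

# bool mask, True where x < small_const (analogous to the < overload / tf.less)
condition = x < small_const

# cast the mask to 0.0/1.0 like tf.to_float, then subtract the masked values
x_clipped = x - x * condition.astype(np.float32)

print(x_clipped)  # [0.  0.5 0.9]
```

Values below the threshold are subtracted from themselves and become 0; everything else passes through unchanged.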
Additional information for cleaner code:
The numerical operators (e.g. <, >=, +, -, etc., but not ==) are overloaded for TensorFlow tensors, so you can combine native Python values with tensors and get a new tensor that is the result of that operation. This means tf.constant() is fairly rarely needed. Example of this in action:
a = tf.placeholder(tf.int32)
b = a + 1
c = a > 0
print(b) # gives "<tf.Tensor 'add:0' shape=<unknown> dtype=int32>"
print(c) # gives "<tf.Tensor 'Greater:0' shape=<unknown> dtype=bool>"
with tf.Session() as sess:
    sess.run(b, {a: 1}) # gives scalar int32 numpy array with value 2
    sess.run(c, {a: 1}) # gives scalar bool numpy array with value True
This is also true of numpy.
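Since the same overloading exists in NumPy, the behavior can be checked there without building a graph or opening a session:

```python
import numpy as np

a = np.array([0, 1, 2])
b = a + 1   # elementwise add with a native python int
c = a > 0   # elementwise comparison, yields a bool array

print(b)  # [1 2 3]
print(c)  # [False  True  True]
```

The difference is that NumPy computes the result eagerly, while in graph-mode TensorFlow the same expressions only define nodes that a session later evaluates.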
tf.assign() only works on Variables because it will
Update 'ref' by assigning 'value' to it.
Tensors in tensorflow are immutable. The result of any operation on a tensor is another tensor, but the original tensor never changes. Variables, however, are mutable, and you change their value with tf.assign().
Look at the code:
import tensorflow as tf
x = tf.constant(1.0)
y = tf.constant(2.0)
z = tf.constant(3.0)
def f1():
    return tf.Print(x, [x])
def f2():
    return tf.Print(z, [z])
op = tf.cond(x > y, f1, f2)
with tf.Session() as sess:
    sess.run(op)
I'm very puzzled: the output of tf.Print is 3.0.
As I understand it, tf.Print(z, [z]) outputs the value of z only when z is evaluated, but I don't think I have evaluated z.
My other question is about tf.cond: how does it add nodes to the graph? For example, how does it add tf.Print to the graph? I think it must connect some tensor to the return value of tf.Print, otherwise tf.Print would never be executed.
I think you might have got the order of the arguments to tf.cond confused. The call:
tf.cond(predicate, f, g)
is equivalent to "if predicate is true then evaluate f, otherwise evaluate g"
In your example, since your predicate x > y is false, f2 is evaluated
Note
Since tensorflow 1.4, tf.cond accepts the keyword arguments true_fn and false_fn, so you can avoid any confusion by writing:
tf.cond(predicate, true_fn=f, false_fn=g)
# Or equivalently...
tf.cond(predicate, false_fn=g, true_fn=f)
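In plain-Python terms (an eager analogy, not the actual graph mechanics), tf.cond(pred, true_fn, false_fn) behaves like a conditional expression; the helper below is a sketch, not TensorFlow API:

```python
def cond(pred, true_fn, false_fn):
    """Eager analogy of tf.cond: evaluate one branch based on the predicate."""
    return true_fn() if pred else false_fn()

x, y, z = 1.0, 2.0, 3.0

# x > y is False, so the false branch runs and returns z
result = cond(x > y, lambda: x, lambda: z)
print(result)  # 3.0
```

One difference worth knowing: in graph mode, tf.cond calls both branch functions once at graph-construction time to build both subgraphs (which is how the tf.Print nodes get into the graph), but at run time only the selected branch's ops execute.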
import tensorflow as tf
x = tf.Variable(1)
x = x + 1
init = tf.global_variables_initializer()
with tf.Session() as sess:
    init.run()
    print(sess.run(x))
    print(sess.run(x))
The output is
2
2
But I think the output should be
2
3
After the first run, x should have been updated to 2, so on the second run x should be 3.
Can anyone tell me why the second run still gives 2? If x can't be updated by the first run, how do the parameters of a neural network ever get updated?
Update
x = tf.Variable(1)
x.assign(x+1)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    init.run()
    #print(x)
    print(sess.run(x))
    print(sess.run(x))
The output is
1
1
It is amazing.
Here's an analysis of your examples and Ishant Mrinal's; it should help you understand what's going on.
Example 1
x = tf.Variable(1)
Creation of a python variable x. Creation of a Tensorflow node variable_1. The python variable x holds a logical pointer to the node variable_1.
x = x + 1
Python assignment, destructive operation.
x now holds a pointer to the operation sum(variable_1, constant(1)).
init = tf.global_variables_initializer()
with tf.Session() as sess:
init.run()
usual code of initialization of variables.
print(sess.run(x))
Execution of x = execution of sum(variable_1, constant(1)) = 2
print(sess.run(x))
Execution of x = execution of sum(variable_1, constant(1)) = 2
Example 2
x = tf.Variable(1)
Creation of a python variable x. Creation of a Tensorflow node variable_1. The python variable x holds a logical pointer to the node variable_1.
init = tf.global_variables_initializer()
initialization of variable_1.
with tf.Session() as sess:
init.run()
execution of the initialization.
# here x points to the variable
print(sess.run(x))
evaluation of x = 1.
x = x + 1
Definition of a new node, exactly as in the previous example.
print(sess.run(x))
evaluation of the sum operation, thus 2.
Example 3
x = tf.Variable(1)
usual creation.
as_op = x.assign(x+1)
definition of a sum node followed by the definition of an assignment node, held by the python variable as_op.
This operation forces the order of execution between these 2 nodes: first the sum node executes, then its result is assigned to the variable node variable_1.
init = tf.global_variables_initializer()
with tf.Session() as sess:
    init.run()
usual init ops
# here x points to the variable
print(sess.run(x))
evaluation of variable_1, thus 1.
sess.run(as_op)
execution of sum and assignment, thus temp = variable_1 + 1; variable_1 = temp;
print(sess.run(x))
extraction of the value pointed to by x, thus 2.
The tensorflow add op returns a new tensor, not an updated variable
x = x + 1
x = x + 1
with tf.Session() as sess:
    init.run()
    # here x is an addition op; hence the result is the same for both runs
    print(x)
Output: Tensor("add:0", shape=(), dtype=int32)
For both runs, x is just the same add op:
print(sess.run(x)) # result: 2
print(sess.run(x)) # result: 2
If you move the addition op, the values will differ, since the first run reads the variable's initial value and the second reads the add op's result:
x = tf.Variable(1)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    init.run()
    # here x points to the variable
    print(sess.run(x)) # result: 1
    x = x + 1
    # here x points to the add op
    print(sess.run(x)) # result: 2
Using assign op to get the updated value
x = tf.Variable(1)
as_op = x.assign(x+1)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    init.run()
    # here x points to the variable
    print(sess.run(x)) # result: 1
    sess.run(as_op) # variable updated
    print(sess.run(x)) # result: 2
Consider, for example, this code:
train_op = opt.minimize(loss)
gradients = tf.gradients(loss, tf.trainable_variables())
Are the gradients computed twice or just once?
Or this example:
a = y + z
b = y + z
Is the addition y + z computed twice or just once?
It is computed only once. See this post for more info about this and other optimizations tensorflow does at runtime.
When I get the output of the network, it is a tensor with a shape like [batch_size, height, width]. Its content is probabilities. What I want to do is apply a threshold to the tensor and binarize it. What should I do to the tensor?
You can use tf.clip_by_value:
import tensorflow as tf
a = tf.random_uniform(shape=(2, 3, 3), minval=-1, maxval=3)
b = tf.clip_by_value(a, 0, 1)
with tf.Session() as sess:
    A, B = sess.run([a, b])
    print(A, '\n')
    print(B)
Here everything above 1 will become 1, everything below 0 will be 0. Everything else will stay the same.
Also take a look at this answer, which does a similar thing but converts everything to 0 or 1.
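For the binarization originally asked about (thresholding probabilities to exactly 0 or 1 rather than clipping), the logic can be sketched in NumPy; the threshold and sample values below are made up for illustration. In graph code the same shape would be something like tf.cast(x > threshold, tf.float32):

```python
import numpy as np

threshold = 0.5
probs = np.array([[0.2, 0.7],
                  [0.5, 0.9]], dtype=np.float32)

# compare against the threshold, then cast the bool mask to 0.0/1.0
binarized = (probs > threshold).astype(np.float32)

print(binarized)  # [[0. 1.]
                  #  [0. 1.]]
```

Note the strict `>`: values exactly equal to the threshold map to 0 here; use `>=` if they should map to 1 instead.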