I am trying to define a variable y in TensorFlow that depends on x, but even after changing the value of x, y doesn't change.
import tensorflow as tf
sess = tf.Session()
x=tf.Variable(4,name='x')
model = tf.global_variables_initializer()
sess.run(model)
y=tf.Variable(2*x,name='y')
model = tf.global_variables_initializer()
sess.run(model)
sess.run(x)
sess.run(tf.assign(x,2))
print(sess.run(y))
I am expecting an output 4, but I'm getting 8. Any help would be appreciated.
Gramercy...
y = tf.Variable(2*x, name='y') just means that y will be initialized to 2*x; after that, y is an independent variable and does not track x. Changing that line to y = 2 * x will do what you expected.
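For instance, a minimal sketch of the fix, written against the tf.compat.v1 API so it also runs under TensorFlow 2.x (in 1.x, plain tf works the same):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # graph mode, as in TF 1.x

x = tf.Variable(4, name='x')
y = 2 * x  # a tensor recomputed from x on every sess.run, not a snapshot

sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(tf.assign(x, 2))
print(sess.run(y))  # 4
```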
I generally use the TensorFlow Keras backend, but I'm currently working on a project that needs TF 1.x.
I'm trying a simple snippet, but I'm getting an error:
x2 = tf.constant(-2.0, name="x", dtype=tf.float32)
a = tf.placeholder(name='a',dtype=tf.float32)
b = tf.constant(13.0, name="b", dtype=tf.float32)
y = tf.Variable(tf.add(tf.multiply(a, x2), b))
init = tf.global_variables_initializer()
with tf.Session() as session:
    print(session.run(init, feed_dict={a: 5.0}))
ValueError: initial_value must have a shape specified at y=Variable()... line.
Does anyone know the solution? Thanks in advance
The variable y depends on the placeholder a, so defining the shape of a makes the code run properly:
x2 = tf.constant(-2.0, name="x", dtype=tf.float32)
a = tf.placeholder(name='A',shape=(1,),dtype=tf.float32)
b = tf.constant(13.0, name="b", dtype=tf.float32)
y = tf.Variable(tf.add(tf.multiply(a, x2), b))
init = tf.global_variables_initializer()
with tf.Session() as session:
    print(session.run(init, feed_dict={a: [5.0]}))
    print(session.run(y))
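Alternatively, if y doesn't actually need to be a tf.Variable, you can sidestep the problem entirely by building y as a plain tensor; then there is nothing to initialize and the placeholder shape can stay unspecified. A sketch, using tf.compat.v1 so it also runs under TF 2.x:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

x2 = tf.constant(-2.0, name="x", dtype=tf.float32)
a = tf.placeholder(name='a', dtype=tf.float32)  # shape may stay unspecified
b = tf.constant(13.0, name="b", dtype=tf.float32)
y = tf.add(tf.multiply(a, x2), b)  # plain tensor, no Variable needed

with tf.Session() as session:
    result = session.run(y, feed_dict={a: 5.0})
print(result)  # 5.0 * -2.0 + 13.0 = 3.0
```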
I would like to show my example below:
x = tf.placeholder(dtype=...)
a = numpy.asarray([784, 10])
z = slim.fully_connected(x, 10, weights_initializer=?)
I have tried weights_initializer=lambda x1: a, but it reports an error: TypeError: &lt;lambda&gt;() got an unexpected keyword argument 'dtype'
I also found a related post here: https://github.com/tensorflow/tensorflow/issues/4016
However, I still don't know the answer. Thank you very much.
Sorry, I don't really understand what you're trying to do.
If your fully connected layer has 10 hidden units, then your initializer must produce a matrix of shape (input_size, 10); what you're giving it is an array of shape (2,). Secondly, to initialize weights from a constant matrix you should use the tf.constant_initializer(...) function.
Are you trying to do the following? (You can change the NumPy function used to generate the initial values.)
import tensorflow as tf
import numpy as np
slim = tf.contrib.slim
input_size = ?
x = tf.placeholder(dtype=tf.float32, shape=[input_size])
a = np.random.normal(size=(input_size, 10))  # note the size= keyword; otherwise the tuple is taken as the mean
z = slim.fully_connected(x, 10,
                         weights_initializer=tf.constant_initializer(a))
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
Look at the code:
import tensorflow as tf
x = tf.Variable(1)
x = tf.stop_gradient(x)
y = 2 * x
g = tf.gradients(y, x)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    init.run()
    print(sess.run(g))
I want to freeze x so that the gradient of y w.r.t. x is zero, but the output is 2. What's wrong?
Update
import tensorflow as tf
x0 = tf.Variable(1)
x1 = tf.stop_gradient(x0)
y = 2 * x1
g = tf.gradients(y, x0)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    init.run()
    print(sess.run(g))
Here I use x1, which doesn't overwrite x0, but when I run this code it raises an error. If tf.stop_gradient acts like tf.identity, I would think x0 still has a path to y in the computation graph and the gradient would be 0 rather than an error. Can someone tell me what tf.stop_gradient actually does?
tf.stop_gradient() stops the gradient computation at the specified point in the computation graph. Applying it to the variable is "too late", but you can apply it to y.
import tensorflow as tf
x = tf.Variable(1)
y = tf.stop_gradient(2 * x)
g = tf.gradients(y, x)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    init.run()
    print(sess.run(g))
Note that this will not output 0 but instead throws an error, since the gradient of y w.r.t. x is undefined in this case (tf.gradients returns [None]) and you explicitly ask for it. In a real-world scenario this would probably not be the case.
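To illustrate that real-world pattern, here is a sketch in which the gradient still flows to x through one path while the tf.stop_gradient branch is treated as a constant (written with tf.compat.v1 so it also runs under TF 2.x):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

x = tf.Variable(1.0)
frozen = tf.stop_gradient(2.0 * x)  # backprop treats this branch as a constant
y = x + frozen                      # dy/dx = 1, not 3

g = tf.gradients(y, x)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    grad = sess.run(g)
print(grad)  # [1.0]
```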
I've been trying to understand how variables are initialized in TensorFlow. Below, I created a simple example that defines a variable inside a variable_scope, with the creation wrapped in a helper function.
In my understanding, this code creates a variable 'x' inside 'test_scope' at the tf.initialize_all_variables() stage, and it can always be accessed after that using tf.get_variable(). But this code ended up with an Attempting to use uninitialized value error at the print(x.eval()) line.
I don't have any idea how TensorFlow initializes variables. Can I get any help? Thank you.
import tensorflow as tf
def create_var_and_prod_with(y):
    with tf.variable_scope('test_scope'):
        x = tf.Variable(0.0, name='x', trainable=False)
    return x * y
s = tf.InteractiveSession()
y = tf.Variable(1.0, name='x', trainable=False)
create_var_and_prod_with(y)
s.run(tf.initialize_all_variables())
with tf.variable_scope('test_scope'):
    x = tf.get_variable('x', [1], initializer=tf.constant_initializer(0.0), trainable=False)
print(x.eval())
print(y.eval())
If you want to reuse a variable, you have to declare it using tf.get_variable() and then explicitly ask the scope to make its variables reusable.
If you change the line
x = tf.Variable(0.0, name='x', trainable=False)
with:
x = tf.get_variable('x', [1], trainable=False)
And ask the scope to make the already defined variable available:
with tf.variable_scope('test_scope') as scope:
    scope.reuse_variables()
    x = tf.get_variable('x', [1], initializer=tf.constant_initializer(0.0), trainable=False)
Then you can run print(x.eval(), y.eval()) without problems.
If you want to retrieve a variable with tf.get_variable('x'), it has to have been created with tf.get_variable('x') in the first place.
Moreover, when you want to retrieve an existing variable, you need to be in a scope with `reuse=True`.
Here is what your code should look like:
import tensorflow as tf
def create_var_and_prod_with(y):
    with tf.variable_scope('test_scope'):
        x = tf.get_variable('x', [1], initializer=tf.constant_initializer(0.0), trainable=False)
    return x * y
y = tf.Variable(1.0, name='x', trainable=False)
create_var_and_prod_with(y)
with tf.variable_scope('test_scope', reuse=True):
    x = tf.get_variable('x')  # you only need the name to retrieve x
# Try to put the session only at the end when it is needed
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(x.eval())
    print(y.eval())
You can read more about it in this tutorial.
I am trying to initialize zero vectors in tensorflow as follow:
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
init = tf.initialize_all_variables()
# Tensorflow run
sess = tf.Session()
sess.run(init)
However, I am getting the following error:
InvalidArgumentError: dims must be a vector of int32.
Can you please help me fix this problem?
Thanks
It works for me if you do this
W = tf.zeros([784, 10])
b = tf.zeros([10])
init = tf.initialize_all_variables()
# Tensorflow run
sess = tf.Session()
sess.run(init)
Also, be careful if you keep the tf.Variable wrapper: when one variable's initializer depends on another variable, tf.initialize_all_variables() does not guarantee the initialization order, so you can still hit an uninitialized-value error.
W = tf.Variable(tf.zeros([3, 4]), name='x')
b = tf.Variable(W + 6, name='y')
model = tf.initialize_all_variables()
with tf.Session() as session:
    session.run(model)
    # Error: Attempting to use uninitialized value x
The example above will give an error but the one below won't and will give the correct answer.
W = tf.zeros([3, 4], name='x')
b = tf.Variable(W + 6, name='y')
model = tf.initialize_all_variables()
with tf.Session() as session:
    session.run(model)
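If b's initializer genuinely has to depend on the variable W, one workaround (a sketch, using tf.compat.v1 so it also runs under TF 2.x) is to run the individual initializers explicitly in dependency order instead of a single initialize-all op:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

W = tf.Variable(tf.zeros([3, 4]), name='W')
b = tf.Variable(W + 6, name='b')  # b's initial value reads W

with tf.Session() as session:
    session.run(W.initializer)  # initialize W first...
    session.run(b.initializer)  # ...so b's initializer can safely read it
    b_val = session.run(b)
print(b_val)  # a 3x4 matrix of 6.0
```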
If you want to do this with weights and biases (which I'm guessing W and b stand for), I suggest looking here:
https://www.tensorflow.org/versions/r0.9/how_tos/variable_scope/index.html#variable-scope-example
Let me know if you still have any questions.