What is a callable in Tensorflow?

I thought a callable is just a function from the tf-library that I call. This:
tensor = tf.while_loop(tf.less(tf.rank(tensor), ndims),  # cond
                       tf.append(tensor, axis=axis),     # body
                       loop_vars=[tensor])               # loop_vars
errors with TypeError: cond must be callable.
What is a callable condition if not tf.less()?

A callable is anything that can be called, i.e. any object that implements __call__, which includes plain functions and lambdas.
The cond argument should be a function. You can use a lambda to make your condition callable.
Here is a minimal example of how to use tf.while_loop:
i = tf.constant(0)
c = lambda i: tf.less(i, 10)
b = lambda i: tf.add(i, 1)
r = tf.while_loop(c, b, [i])
And finally, it is not a bad idea to post a minimal example that actually runs and reproduces your error.

Calling tf.less(...) immediately adds the op to the graph and returns a Tensor, which is not callable. To make cond and body callable, wrap them in lambdas:
tensor = tf.while_loop(lambda tensor: tf.less(tf.rank(tensor), ndims),  # cond
                       lambda tensor: tf.append(tensor, axis=axis),     # body
                       loop_vars=[tensor])                              # loop_vars
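For completeness, here is a minimal sketch of what the question appears to be aiming for: expanding a tensor's rank until it reaches ndims. Since tf.append does not exist, tf.expand_dims is used as a stand-in for the body, and shape_invariants is needed because the rank changes between iterations (assuming TF 1.x graph mode; ndims is an illustrative value):
import tensorflow as tf

tensor = tf.constant([1., 2., 3.])
ndims = 3  # hypothetical target rank, for illustration only

result = tf.while_loop(
    lambda t: tf.less(tf.rank(t), ndims),     # cond: keep looping while rank < ndims
    lambda t: tf.expand_dims(t, axis=0),      # body: prepend a size-1 dimension
    loop_vars=[tensor],
    shape_invariants=[tf.TensorShape(None)])  # rank changes, so leave the shape fully unknown

with tf.Session() as sess:
    print(sess.run(tf.shape(result)))         # e.g. [1, 1, 3]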

Related

Initialise tensors with ones in tensorflow 1

I want to initialise a variable declared using the get_variable function with 1s.
I tried the following methods:
1. tf.get_variable(name='yd1', shape=shape_t, dtype=tf.float32, initializer=tf.ones())
Error received -> TypeError: ones() takes at least 1 argument (0 given)
2. tf.get_variable(name='yd1', shape=shape_t, dtype=tf.float32, initializer=tf.ones(shape=shape_t))
Error received -> ValueError("If initializer is a constant, do not specify shape.")
What is the best way to initialise a variable with ones?
tf.zeros_initializer can be used to initialise with 0s, but there is no equivalent for ones in tf 1
You need to use tf.ones_initializer:
tf.get_variable(name='yd1', shape=shape_t, dtype=tf.float32,
initializer=tf.ones_initializer())
Alternatively, as the second error message says, you can use a constant value, but then do not pass a shape:
tf.get_variable(name='yd1', initializer=tf.ones(shape=shape_t, dtype=tf.float32))
If tf.ones_initializer is not available in your TensorFlow 1.x version, tf.constant_initializer does the job:
tf.get_variable(name='yd1', shape=shape_t, dtype=tf.float32, initializer=tf.constant_initializer(1))
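As a quick end-to-end check, here is a minimal sketch assuming TF 1.x graph mode; shape_t is just an illustrative value:
import tensorflow as tf

shape_t = [2, 3]  # hypothetical shape, for illustration only
yd1 = tf.get_variable(name='yd1', shape=shape_t, dtype=tf.float32,
                      initializer=tf.constant_initializer(1))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(yd1))  # [[1. 1. 1.]
                          #  [1. 1. 1.]]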

What is the difference between the 'call' and '__call__' RNN methods in tensorflow?

I know what __call__ is, but what confuses me is that some classes like BasicRNNCell or tf.nn.rnn_cell.MultiRNNCell have this 'call' method instead of __call__. What is this plain call method? It seems like the same thing, and if it is not, I didn't see it being called.
I found this explanation somewhere, but it left me with no clue. Can you clarify, please?
"The call function is where the logic of your cell will live. RNNCell’s __call_ method will wrap your call method and help with scoping and other logistics."
sample:
def call(self, inputs, state):
    total_hidden_size = sum(c._h_above_size for c in self._cells)
    # split out the part of the input that stores values of ha
    raw_inp = inputs[:, :-total_hidden_size]       # [B, I]
    raw_h_aboves = inputs[:, -total_hidden_size:]  # [B, sum(ha_l)]
    ha_splits = [c._h_above_size for c in self._cells]
    h_aboves = array_ops.split(value=raw_h_aboves,
                               num_or_size_splits=ha_splits, axis=1)
    z_below = tf.ones([tf.shape(inputs)[0], 1])    # [B, 1]
    raw_inp = array_ops.concat([raw_inp, z_below], axis=1)  # [B, I + 1]
In TensorFlow 2.0, if you define your network by subclassing tf.keras.Model, you need to implement the model's forward pass in call(); the base class's __call__ wraps your call() and takes care of the surrounding bookkeeping, which is the pattern described in the quote above.
https://www.tensorflow.org/api_docs/python/tf/keras/Model
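A rough sketch of that pattern (the TinyModel class and its layer below are made up for illustration): you write the forward pass in call(), and you invoke the model through __call__, which delegates to your call():
import tensorflow as tf

class TinyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(4)

    def call(self, inputs):
        # The forward-pass logic lives here; Model.__call__ wraps this method.
        return self.dense(inputs)

model = TinyModel()
y = model(tf.ones([2, 3]))  # invokes __call__, which in turn runs call()
print(y.shape)              # (2, 4)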

while_loop error in Tensorflow

I tried to use while_loop in Tensorflow, but when I try to return the target output from the callable in the while loop, it gives me an error because the shape grows on every iteration.
The output should contain (0 or 1) values based on the data values (input array): if a data value is larger than 5, return 1, else return 0. The returned value must be appended to the output.
This is the code:
import numpy as np
import tensorflow as tf

data = np.random.randint(10, size=(30))
data = tf.constant(data, dtype=tf.float32)
global output
output = tf.constant([], dtype=tf.float32)
i = tf.constant(0)
c = lambda i: tf.less(i, 30)
def b(i):
    i = tf.add(i, 1)
    cond = tf.cond(tf.greater(data[i-1], tf.constant(5.)),
                   lambda: tf.constant(1.0),
                   lambda: tf.constant([0.0]))
    output = tf.expand_dims(cond, axis=i-1)
    return i, output
r, out = tf.while_loop(c, b, [i])
print(out)
sess = tf.Session()
sess.run(out)
The error:
r, out = tf.while_loop(c, b, [i])
ValueError: The two structures don't have the same number of elements.
First structure (1 elements): [<tf.Tensor 'while/Identity:0' shape=() dtype=int32>]
Second structure (2 elements): [<tf.Tensor 'while/Add:0' shape=() dtype=int32>, <tf.Tensor 'while/ExpandDims:0' shape=unknown dtype=float32>]
I use tensorflow-1.1.3 and python-3.5
How can I change my code to give me the target result?
EDIT:
I edited the code based on @mrry's answer, but I still have an issue: the output is incorrect.
The output should be the summation of the numbers:
a = tf.ones([10, 4])
print(a)
a = tf.reduce_sum(a, axis=1)
i = tf.constant(0)
c = lambda i, _: tf.less(i, 10)
def Smooth(x):
    return tf.add(x, 2)
summ = tf.constant(0.)
def b(i, _):
    global summ
    summ = tf.add(summ, tf.cast(Smooth(a[i]), tf.float32))
    i = tf.add(i, 1)
    return i, summ
r, smooth_l1 = tf.while_loop(c, b, [i, smooth_l1])
print(smooth_l1)
sess = tf.Session()
print(sess.run(smooth_l1))
The output is 6.0 (wrong).
The tf.while_loop() function requires that the following four lists have the same length, and the same type for each element:
The list of arguments to the cond function (c in this case).
The list of arguments to the body function (b in this case).
The list of return values from the body function.
The list of loop_vars representing the loop variables.
Therefore, if your loop body has two outputs, you must add a corresponding argument to b and c, and a corresponding element to loop_vars:
c = lambda i, _: tf.less(i, 30)

def b(i, _):
    i = tf.add(i, 1)
    cond = tf.cond(tf.greater(data[i-1], tf.constant(5.)),
                   lambda: tf.constant(1.0),
                   lambda: tf.constant([0.0]))
    # NOTE: This line fails with a shape error, because the output of `cond` has
    # a rank of either 0 or 1, but axis may be as large as 28.
    output = tf.expand_dims(cond, axis=i-1)
    return i, output

# NOTE: Use a shapeless `tf.placeholder_with_default()` because the shape
# of the output will vary from one iteration to the next.
r, out = tf.while_loop(c, b, [i, tf.placeholder_with_default(0., None)])
As noted in the comments, the body of the loop (specifically the call to tf.expand_dims()) seems to be incorrect and this program won't work as-is, but hopefully this is enough to get you started.
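One way to make the accumulation itself work (a sketch under the assumption of TF 1.x graph mode, not necessarily the only fix) is to grow a 1-D output tensor with tf.concat instead of tf.expand_dims, and declare its shape invariant as [None] so its length may change between iterations:
import numpy as np
import tensorflow as tf

data = tf.constant(np.random.randint(10, size=30), dtype=tf.float32)

i0 = tf.constant(0)
out0 = tf.zeros([0], dtype=tf.float32)  # empty 1-D accumulator

def cond(i, out):
    return tf.less(i, 30)

def body(i, out):
    val = tf.cond(tf.greater(data[i], 5.),
                  lambda: tf.constant(1.0),
                  lambda: tf.constant(0.0))
    out = tf.concat([out, tf.reshape(val, [1])], axis=0)  # append one element
    return i + 1, out

_, out = tf.while_loop(
    cond, body, [i0, out0],
    shape_invariants=[i0.get_shape(), tf.TensorShape([None])])

with tf.Session() as sess:
    print(sess.run(out))  # thirty 0.0 / 1.0 values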
If you see this error in a while_loop:
ValueError: The two structures don't have the same number of elements.
it means that the structure of your loop variables going into the loop does not match the structure coming out of the loop body. I solved it by making sure that I return the same structure of loop_vars from my loop body function; the condition function must also accept the same loop vars.
Here is an example:
loop_vars = [i, loss, batch_size, smaller_str_lens]

def condition(*loop_vars):
    i = loop_vars[0]
    batch_size = loop_vars[2]
    return tf.less(i, batch_size)

def body(*loop_vars):
    i, loss, batch_size, smaller_str_lens = loop_vars
    tf.print("The loop passed here")
    ## logic here
    i = tf.add(i, 1)
    return i, loss, batch_size, smaller_str_lens

loss = tf.while_loop(condition, body, loop_vars)[1]
The body function must return the loop vars, and the condition function must accept the same loop vars.

Using scatter_nd_add op inside a while loop?

I'm trying to do a tf.scatter_nd_add inside a tf.while_loop. It gives me an error message though.
TypeError: 'ScatterNdAdd' Op requires that input 'ref' be a mutable tensor (e.g.: a tf.Variable)
It seems that the way while loops work is that they change the input variable from being of type tf.Variable to some other type. Is it possible to do a scatter_nd_add op inside a while loop?
Here's a code excerpt:
self.hough = tf.Variable(tf.zeros((num_points, num_points), dtype=tf.float32),
                         name='hough')
self.i = tf.constant(0)
c = lambda i, hough: tf.less(i, tf.squeeze(tf.shape(self.img_x)))
b = lambda i, hough: self.ProcessPixel(i, hough)
self.r = tf.while_loop(c, b, [self.i, self.hough], back_prop=False)
This is the body of the loop:
def ProcessPixel(self, i, hough):
    pixel_x, pixel_y = self.img_x[self.i], self.img_y[self.i]
    result = self.GetLinesThroughPixel(pixel_x, pixel_y)
    idx = tf.stack([tf.range(num_points, dtype=tf.int64), result])
    pixel_val = self.img[tf.to_int32(pixel_x), tf.to_int32(pixel_y)]
    print type(hough)
    updated_hough = tf.scatter_nd_add(hough, tf.transpose(idx),
                                      updates=pixel_val * tf.ones(num_points))
    return tf.add(i, 1), updated_hough

Confused by the behavior of `tf.cond`

I need conditional control flow in my graph: if pred is True, the graph should call an op that updates a variable and then returns it; otherwise, it returns the variable unchanged. A simplified version is:
pred = tf.constant(True)
x = tf.Variable([1])
assign_x_2 = tf.assign(x, [2])
def update_x_2():
    with tf.control_dependencies([assign_x_2]):
        return tf.identity(x)
y = tf.cond(pred, update_x_2, lambda: tf.identity(x))
with tf.Session() as session:
    session.run(tf.initialize_all_variables())
    print(y.eval())
However, I find that both pred=True and pred=False lead to the same result y=[2], which means the assign op is also executed when update_x_2 is not selected by tf.cond. How can this be explained? And how can I solve this problem?
TL;DR: If you want tf.cond() to perform a side effect (like an assignment) in one of the branches, you must create the op that performs the side effect inside the function that you pass to tf.cond().
The behavior of tf.cond() is a little unintuitive. Because execution in a TensorFlow graph flows forward through the graph, all operations that you refer to in either branch must execute before the conditional is evaluated. This means that both the true and the false branches receive a control dependency on the tf.assign() op, and so y always gets set to 2, even if pred is False.
The solution is to create the tf.assign() op inside the function that defines the true branch. For example, you could structure your code as follows:
pred = tf.placeholder(tf.bool, shape=[])
x = tf.Variable([1])
def update_x_2():
    with tf.control_dependencies([tf.assign(x, [2])]):
        return tf.identity(x)
y = tf.cond(pred, update_x_2, lambda: tf.identity(x))
with tf.Session() as session:
    session.run(tf.initialize_all_variables())
    print(y.eval(feed_dict={pred: False}))  # ==> [1]
    print(y.eval(feed_dict={pred: True}))   # ==> [2]
pred = tf.constant(False)
x = tf.Variable([1])
def update_x_2():
    assign_x_2 = tf.assign(x, [2])
    with tf.control_dependencies([assign_x_2]):
        return tf.identity(x)
y = tf.cond(pred, update_x_2, lambda: tf.identity(x))
with tf.Session() as session:
    session.run(tf.initialize_all_variables())
    print(y.eval())
This will give the result [1].
This answer is essentially the same as the one above. What I want to share is that you can put every op you want to use inside its branch function, because, given your example code, the tensor x can be used directly by the update_x_2 function.