Using scatter_nd_add op inside a while loop?

I'm trying to do a tf.scatter_nd_add inside a tf.while_loop, but it gives me an error message:
TypeError: 'ScatterNdAdd' Op requires that input 'ref' be a mutable tensor (e.g.: a tf.Variable)
It seems that the while loop turns its loop variables from tf.Variable into plain (immutable) tensors. Is it possible to do a scatter_nd_add op inside a while loop?
Here's a code excerpt:
self.hough = tf.Variable(tf.zeros((num_points, num_points), dtype=tf.float32),
                         name='hough')
self.i = tf.constant(0)
c = lambda i, hough: tf.less(i, tf.squeeze(tf.shape(self.img_x)))
b = lambda i, hough: self.ProcessPixel(i, hough)
self.r = tf.while_loop(c, b, [self.i, self.hough], back_prop=False)
This is the body of the loop:
def ProcessPixel(self, i, hough):
    # Note: index with the loop variable i, not self.i (which stays 0).
    pixel_x, pixel_y = self.img_x[i], self.img_y[i]
    result = self.GetLinesThroughPixel(pixel_x, pixel_y)
    idx = tf.stack([tf.range(num_points, dtype=tf.int64), result])
    pixel_val = self.img[tf.to_int32(pixel_x), tf.to_int32(pixel_y)]
    print(type(hough))
    updated_hough = tf.scatter_nd_add(hough, tf.transpose(idx),
                                      updates=pixel_val * tf.ones(num_points))
    return tf.add(i, 1), updated_hough
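One workaround (a sketch assuming the same class attributes as above, not a confirmed answer) is to avoid the ref-based op altogether: build a dense delta with tf.scatter_nd and add it to the loop-carried tensor, which never needs a mutable ref:
def ProcessPixel(self, i, hough):
    pixel_x, pixel_y = self.img_x[i], self.img_y[i]
    result = self.GetLinesThroughPixel(pixel_x, pixel_y)
    idx = tf.stack([tf.range(num_points, dtype=tf.int64), result])
    pixel_val = self.img[tf.to_int32(pixel_x), tf.to_int32(pixel_y)]
    # Scatter the updates into a zero tensor of the same shape as `hough`,
    # then add; no mutable ref is required, so this works in a while loop.
    delta = tf.scatter_nd(tf.transpose(idx),
                          pixel_val * tf.ones(num_points),
                          tf.shape(hough, out_type=tf.int64))
    return tf.add(i, 1), hough + delta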

Tensorflow 2: Nested TensorArray

What's wrong with this code? Edit: It works on CPU, but fails when run on GPU. It runs for a few iterations, then fails with one of these errors (GitHub issue here):
2019-12-02 12:59:29.727966: F tensorflow/core/framework/tensor_shape.cc:445] Check failed: end <= dims() (1 vs. 0)
Process finished with exit code -1073740791 (0xC0000409)
or
tensorflow.python.framework.errors_impl.InvalidArgumentError: Tried to set a tensor with incompatible shape at a list index. Item element shape: [3,3] list shape: [3]
[[{{node while/body/_1/TensorArrayV2Write/TensorListSetItem}}]] [Op:__inference_computeElement_73]
@tf.function
def computeElement_byBin():
    c = tf.TensorArray(tf.int64, size=1, infer_shape=False, element_shape=(3,))
    const = tf.cast(tf.constant([1, 2, 3]), tf.int64)
    c = c.write(0, const)
    c_c = c.concat()
    return c_c

@tf.function
def computeElement():
    c = tf.TensorArray(tf.int64, size=1, infer_shape=False, element_shape=(3,))
    for x in tf.range(50):
        byBinVariant = computeElement_byBin()
        c = c.write(0, byBinVariant)
    return c.concat()

k = 0
while True:
    k += 1
    r = computeElement()
    print('iteration: %s, result: %s' % (k, r))
I played around with it more and narrowed it down a bit:
@tf.function
def computeElement():
    tarr = tf.TensorArray(tf.int32, size=1, clear_after_read=False)
    tarr = tarr.write(0, [1])
    concat = tarr.concat()
    # PROBLEM HERE
    for x in tf.range(50):
        concat = tarr.concat()
    return concat
If you set tf.config.threading.set_inter_op_parallelism_threads(1) the bug goes away, which means it has to do with parallelization of the unrolled TensorFlow loop. Knowing that TensorFlow unrolls the loop statically when looping over a Python variable rather than a tensor, I could confirm that this code worked:
@tf.function
def computeElement(arr):
    tarr = tf.TensorArray(tf.int32, size=1)
    tarr = tarr.write(0, [1])
    concat = tarr.concat()
    a = 0
    while a < arr:
        concat = tarr.concat()
        a += 1
    return concat

k = 0
while True:
    k += 1
    r = computeElement(50)
So the solution for now is to loop over a Python variable rather than a tensor.
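For completeness, the other mitigation mentioned above is a single global setting; note that it disables inter-op parallelism for the whole program, so it can slow down unrelated work:
# Must run before any ops execute; serializes independent ops globally.
tf.config.threading.set_inter_op_parallelism_threads(1)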

How to define (sparse) variable diagonal tensors

In a problem I want to solve using Tensorflow, I want to build a rank-n tensor that is 'diagonal' by blocks. That is, I want to generate a tensor object from a concatenation of low-order tensors.
I have tried defining the whole tf.Variable tensor and then forcing the value 0 on some entries, but Tensorflow does not allow assignments when working with variable tensors.
Moreover, I would want the 'diagonal' blocks to share the same independent variables, as, for example, in a stacked 2D representation with A a 2-dimensional tensor:
T = [A, 0; 0, A]
My current source code:
shape1 = [3,3,10,10]
shape2 = [3,3]
i1 = tf.truncated_normal(shape1, stddev=1.0, dtype=tf.float32)
i2 = tf.truncated_normal(shape2, stddev=1.0, dtype=tf.float32)
A = tf.Variable(i1)
V = tf.Variable(i2)
for i in range(10):
    for j in range(10):
        if i != j:
            A[:,:,i,j] = tf.zeros((3,3))
        else:
            A[:,:,i,j] = V
Of course, this code returns the error Variable object does not support item assignment.
What I want, at the end of the day, is to define a variable tensor such that:
T[:,:,i,j] = tf.zeros([D0,D1]), if i != j
and
T[:,:,i,j] = A, if i == j
with A = tf.Variable([D0,D1])
Thank you very much in advance!
One way would be to use tf.stack, which converts a list of tensors of dimension n to a tensor of dimension n+1.
l = []
for i in range(10):
    li = [V * 0.0 if i != j else V for j in range(10)]
    Ai = tf.stack(li)
    l.append(Ai)
A = tf.stack(l)
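One caveat: stacking this way yields a tensor of shape (10, 10, 3, 3), whereas the question used the (3, 3, 10, 10) layout. If you need the latter, permute the axes afterwards (a small follow-up sketch):
# (10, 10, 3, 3) -> (3, 3, 10, 10), so that A[:, :, i, j] is the (i, j) block.
A = tf.transpose(A, perm=[2, 3, 0, 1])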

Got an error when setting read_batch_size in tf.contrib.learn.read_batch_examples; the default is OK

I modified the code of the wide & deep tutorial to read large input from a file using tf.contrib.learn.read_batch_examples. To speed up the training process, I set read_batch_size and got the error ValueError: All shapes must be fully defined: [TensorShape([]), TensorShape([Dimension(None)])]
My piece of code:
def input_fn_pre(batch_size, filename):
    examples_op = tf.contrib.learn.read_batch_examples(
        filename,
        batch_size=5000,
        reader=tf.TextLineReader,
        num_epochs=5,
        num_threads=5,
        read_batch_size=2500,
        parse_fn=lambda x: tf.decode_csv(x, [tf.constant(['0'], dtype=tf.string)] * len(COLUMNS) * 2500, use_quote_delim=False))
    examples_dict = {}
    for i, col in enumerate(COLUMNS):
        examples_dict[col] = examples_op[:, i]
    feature_cols = {k: tf.string_to_number(examples_dict[k], out_type=tf.float32) for k in CONTINUOUS_COLUMNS}
    feature_cols.update({k: dense_to_sparse(examples_dict[k]) for k in CATEGORICAL_COLUMNS})
    label = tf.string_to_number(examples_dict[LABEL_COLUMN], out_type=tf.int32)
    return feature_cols, label
while using the default parameter settings is OK:
def input_fn_pre(batch_size, filename):
    examples_op = tf.contrib.learn.read_batch_examples(
        filename,
        batch_size=5000,
        reader=tf.TextLineReader,
        num_epochs=5,
        num_threads=5,
        parse_fn=lambda x: tf.decode_csv(x, [tf.constant(['0'], dtype=tf.string)] * len(COLUMNS), use_quote_delim=False))
    examples_dict = {}
    for i, col in enumerate(COLUMNS):
        examples_dict[col] = examples_op[:, i]
    feature_cols = {k: tf.string_to_number(examples_dict[k], out_type=tf.float32) for k in CONTINUOUS_COLUMNS}
    feature_cols.update({k: dense_to_sparse(examples_dict[k]) for k in CATEGORICAL_COLUMNS})
    label = tf.string_to_number(examples_dict[LABEL_COLUMN], out_type=tf.int32)
    return feature_cols, label
There is not enough explanation in the tensorflow doc.
I did not see any difference between your two code snippets. Could you update your code?
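Update for newer TensorFlow versions: tf.contrib.learn has since been removed, and batched CSV reading is usually done with tf.data instead. A rough equivalent, sketched with a hypothetical file name and column list (not part of the original exchange):
import tensorflow as tf

COLUMNS = ['col1', 'col2', 'label']  # hypothetical column names

# Read lines in batches of 5000 and decode each batch of CSV records;
# tf.io.decode_csv returns one tensor per column, each of shape [batch].
dataset = (tf.data.TextLineDataset('train.csv')
           .batch(5000)
           .map(lambda lines: tf.io.decode_csv(
                lines, record_defaults=[['0']] * len(COLUMNS)))
           .repeat(5)
           .prefetch(tf.data.AUTOTUNE))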

while_loop error in Tensorflow

I tried to use while_loop in Tensorflow, but when I try to return the target output from the callable in the while loop, it gives me an error because the shape increases every time.
The output should contain values (0 or 1) based on the data values (input array): if a data value is larger than 5, return 1, else return 0. The returned value must be appended to the output.
This is the code:
import numpy as np
import tensorflow as tf

data = np.random.randint(10, size=(30))
data = tf.constant(data, dtype=tf.float32)
global output
output = tf.constant([], dtype=tf.float32)
i = tf.constant(0)
c = lambda i: tf.less(i, 30)

def b(i):
    i = tf.add(i, 1)
    cond = tf.cond(tf.greater(data[i-1], tf.constant(5.)), lambda: tf.constant(1.0), lambda: tf.constant([0.0]))
    output = tf.expand_dims(cond, axis=i-1)
    return i, output

r, out = tf.while_loop(c, b, [i])
print(out)
sess = tf.Session()
sess.run(out)
The error:
r, out = tf.while_loop(c, b, [i])
ValueError: The two structures don't have the same number of elements.
First structure (1 elements): [<tf.Tensor 'while/Identity:0' shape=() dtype=int32>]
Second structure (2 elements): [<tf.Tensor 'while/Add:0' shape=() dtype=int32>, <tf.Tensor 'while/ExpandDims:0' shape=unknown dtype=float32>]
I use tensorflow-1.1.3 and python-3.5.
How can I change my code so that it gives me the target result?
EDIT:
I edited the code based on @mrry's answer, but I still have an issue: the output is incorrect. The output should be the sum of the numbers.
a = tf.ones([10,4])
print(a)
a = tf.reduce_sum(a, axis=1)
i = tf.constant(0)
c = lambda i, _: tf.less(i, 10)

def Smooth(x):
    return tf.add(x, 2)

summ = tf.constant(0.)

def b(i, _):
    global summ
    summ = tf.add(summ, tf.cast(Smooth(a[i]), tf.float32))
    i = tf.add(i, 1)
    return i, summ

# The second loop var must be the initial value of the running sum.
r, smooth_l1 = tf.while_loop(c, b, [i, summ])
print(smooth_l1)
sess = tf.Session()
print(sess.run(smooth_l1))
The output is 6.0 (wrong).
The tf.while_loop() function requires that the following four lists have the same length, and the same type for each element:
The list of arguments to the cond function (c in this case).
The list of arguments to the body function (b in this case).
The list of return values from the body function.
The list of loop_vars representing the loop variables.
Therefore, if your loop body has two outputs, you must add a corresponding argument to b and c, and a corresponding element to loop_vars:
c = lambda i, _: tf.less(i, 30)

def b(i, _):
    i = tf.add(i, 1)
    cond = tf.cond(tf.greater(data[i-1], tf.constant(5.)),
                   lambda: tf.constant(1.0),
                   lambda: tf.constant([0.0]))
    # NOTE: This line fails with a shape error, because the output of `cond` has
    # a rank of either 0 or 1, but axis may be as large as 28.
    output = tf.expand_dims(cond, axis=i-1)
    return i, output

# NOTE: Use a shapeless `tf.placeholder_with_default()` because the shape
# of the output will vary from one iteration to the next.
r, out = tf.while_loop(c, b, [i, tf.placeholder_with_default(0., None)])
As noted in the comments, the body of the loop (specifically the call to tf.expand_dims()) seems to be incorrect and this program won't work as-is, but hopefully this is enough to get you started.
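One concrete way to finish the repair (a sketch building on the answer above, not part of it) is to collect the 0/1 values in a tf.TensorArray, which is designed for loop outputs whose size grows with the iteration count:
import numpy as np
import tensorflow as tf

data = tf.constant(np.random.randint(10, size=30), dtype=tf.float32)

i = tf.constant(0)
ta = tf.TensorArray(tf.float32, size=30)

c = lambda i, ta: tf.less(i, 30)

def b(i, ta):
    # 1.0 if data[i] > 5 else 0.0, written at position i.
    val = tf.cond(tf.greater(data[i], 5.),
                  lambda: tf.constant(1.0),
                  lambda: tf.constant(0.0))
    return tf.add(i, 1), ta.write(i, val)

_, ta = tf.while_loop(c, b, [i, ta])
out = ta.stack()  # shape (30,), one 0/1 value per input element

with tf.Session() as sess:
    print(sess.run(out))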
If you see this error:
ValueError: The two structures don't have the same number of elements.
in a while_loop, it means that the inputs and outputs of the while loop have different structures.
I solved it by making sure that the body function returns the same structure of loop_vars that it receives; the condition function must also accept the same loop vars.
Here is some example code:
loop_vars = [i, loss, batch_size, smaller_str_lens]

def condition(*loop_vars):
    i = loop_vars[0]
    batch_size = loop_vars[2]
    return tf.less(i, batch_size)

def body(*loop_vars):
    i, loss, batch_size, smaller_str_lens = loop_vars
    tf.print("The loop passed here")
    ## logic here
    i = tf.add(i, 1)
    return i, loss, batch_size, smaller_str_lens

# Note: pass `body` here (the original snippet referred to `compare_strings`).
loss = tf.while_loop(condition, body, loop_vars)[1]
The body function must return the loop vars, and the condition function must accept the same loop vars.

Tensorflow: constructing the params tensor for tf.map_fn

import tensorflow as tf
import numpy as np
def lineeqn(slope, intercept, y, x):
    return np.sign(y - (slope * x) - intercept)
# data size
DS = 100000
N = 100
x1 = tf.random_uniform([DS], -1, 0, dtype=tf.float32, seed=0)
x2 = tf.random_uniform([DS], 0, 1, dtype=tf.float32, seed=0)
# line representing the target function
rand1 = np.random.randint(0, DS)
rand2 = np.random.randint(0, DS)
T_x1 = x1[rand1]
T_x2 = x1[rand2]
T_y1 = x2[rand1]
T_y2 = x2[rand2]
slope = (T_y2 - T_y1)/(T_x2 - T_x1)
intercept = T_y2 - (slope * T_x2)
# extracting training samples from the data set
training_indices = np.random.randint(0, DS, N)
training_x1 = tf.gather(x1, training_indices)
training_x2 = tf.gather(x2, training_indices)
training_x1_ex = tf.expand_dims(training_x1, 1)
training_x2_ex = tf.expand_dims(training_x2, 1)
slope_tensor = tf.fill([N], slope)
slope_ex = tf.expand_dims(slope_tensor, 1)
intercept_tensor = tf.fill([N], intercept)
intercept_ex = tf.expand_dims(intercept_tensor, 1)
params = tf.concat(1, [slope_ex, intercept_ex, training_x2_ex, training_x1_ex])
training_y = tf.map_fn(lineeqn, params)
The lineeqn function requires 4 parameters, so params should be a tensor where each element is 4-element tensor. When I try to run the above code, I get the error TypeError: lineeqn() takes exactly 4 arguments (1 given). Can someone please explain what is wrong with the way I have constructed the params tensor? What does tf.map_fn do to the params tensor?
A similar question has been asked here. The reason you are getting this error is because the function called by map_fn - lineeqn in your case - is required to take exactly one tensor argument.
Rather than a list of arguments to the function, the parameter elems is expected to be a list of items, where the mapped function is called for each item contained in the list.
So in order to take multiple arguments to your function, you would have to unpack them yourself from each item, e.g.
def lineeqn(item):
    # Unpack the four packed arguments; use tf.sign (not np.sign) on tensors.
    slope, intercept, y, x = tf.unstack(item, num=4)
    return tf.sign(y - (slope * x) - intercept)
and call it as
training_y = tf.map_fn(lineeqn, list_of_parameter_tensors)
Here, you call the line equation for each tensor in the list_of_parameter_tensors, where each tensor would describe a tuple (slope, intercept, y, x) of packed arguments.
(Note that depending on the shape of the actual argument tensors, it might also be that instead of tf.concat you would have to use tf.stack, which was called tf.pack before TensorFlow 1.0.)
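Putting it together with the question's code, a sketch under the assumption that each row of params packs one (slope, intercept, y, x) tuple (note that since TensorFlow 1.0 the argument order of tf.concat is (values, axis), not (axis, values) as in the question):
# Each row of params is one packed (slope, intercept, y, x) item, shape (N, 4).
params = tf.concat([slope_ex, intercept_ex, training_x2_ex, training_x1_ex], axis=1)

def lineeqn(item):
    slope, intercept, y, x = tf.unstack(item, num=4)
    return tf.sign(y - (slope * x) - intercept)

# map_fn calls lineeqn once per row, yielding a tensor of shape (N,).
training_y = tf.map_fn(lineeqn, params)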