while_loop error in TensorFlow

I tried to use while_loop in TensorFlow, but when I try to return the target output from the callable in the while loop, I get an error because the shape grows on every iteration.
The output should contain 0/1 values based on the data values (the input array): if a data value is larger than 5, return 1, else return 0. Each returned value must be appended to output.
This is the code:
import numpy as np
import tensorflow as tf

data = np.random.randint(10, size=(30))
data = tf.constant(data, dtype=tf.float32)
global output
output = tf.constant([], dtype=tf.float32)
i = tf.constant(0)
c = lambda i: tf.less(i, 30)

def b(i):
    i = tf.add(i, 1)
    cond = tf.cond(tf.greater(data[i-1], tf.constant(5.)),
                   lambda: tf.constant(1.0),
                   lambda: tf.constant([0.0]))
    output = tf.expand_dims(cond, axis=i-1)
    return i, output

r, out = tf.while_loop(c, b, [i])
print(out)
sess = tf.Session()
sess.run(out)
The error:
r, out = tf.while_loop(c, b, [i])
ValueError: The two structures don't have the same number of elements.
First structure (1 elements): [<tf.Tensor 'while/Identity:0' shape=() dtype=int32>]
Second structure (2 elements): [<tf.Tensor 'while/Add:0' shape=() dtype=int32>, <tf.Tensor 'while/ExpandDims:0' shape=unknown dtype=float32>]
I use tensorflow-1.1.3 and python-3.5.
How can I change my code to give me the target result?
EDIT:
I edited the code based on @mrry's answer, but I still have an issue: the output is incorrect.
The intended output is the sum of the numbers:
a = tf.ones([10, 4])
print(a)
a = tf.reduce_sum(a, axis=1)
i = tf.constant(0)
c = lambda i, _: tf.less(i, 10)

def Smooth(x):
    return tf.add(x, 2)

summ = tf.constant(0.)

def b(i, _):
    global summ
    summ = tf.add(summ, tf.cast(Smooth(a[i]), tf.float32))
    i = tf.add(i, 1)
    return i, summ

r, smooth_l1 = tf.while_loop(c, b, [i, summ])
print(smooth_l1)
sess = tf.Session()
print(sess.run(smooth_l1))
The output is 6.0 (wrong).

The tf.while_loop() function requires that the following four lists have the same length, and the same type for each element:
The list of arguments to the cond function (c in this case).
The list of arguments to the body function (b in this case).
The list of return values from the body function.
The list of loop_vars representing the loop variables.
Therefore, if your loop body has two outputs, you must add a corresponding argument to b and c, and a corresponding element to loop_vars:
c = lambda i, _: tf.less(i, 30)

def b(i, _):
    i = tf.add(i, 1)
    cond = tf.cond(tf.greater(data[i-1], tf.constant(5.)),
                   lambda: tf.constant(1.0),
                   lambda: tf.constant([0.0]))
    # NOTE: This line fails with a shape error, because the output of `cond` has
    # a rank of either 0 or 1, but `axis` may be as large as 28.
    output = tf.expand_dims(cond, axis=i-1)
    return i, output

# NOTE: Use a shapeless `tf.placeholder_with_default()` because the shape
# of the output will vary from one iteration to the next.
r, out = tf.while_loop(c, b, [i, tf.placeholder_with_default(0., None)])
As noted in the comments, the body of the loop (specifically the call to tf.expand_dims()) seems to be incorrect and this program won't work as-is, but hopefully this is enough to get you started.
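As a follow-up, here is a rough sketch (assuming TF 1.x) of one way to make the accumulation itself work: grow a 1-D output with tf.concat, and declare its shape as variable through the shape_invariants argument so that it is allowed to change between iterations.
import numpy as np
import tensorflow as tf

data = tf.constant(np.random.randint(10, size=30), dtype=tf.float32)

i0 = tf.constant(0)
output0 = tf.zeros([0], dtype=tf.float32)  # empty 1-D accumulator

def cond(i, output):
    return tf.less(i, 30)

def body(i, output):
    # Both branches return shape (1,), so the result can be concatenated.
    val = tf.cond(tf.greater(data[i], 5.0),
                  lambda: tf.constant([1.0]),
                  lambda: tf.constant([0.0]))
    return i + 1, tf.concat([output, val], axis=0)

_, out = tf.while_loop(cond, body, [i0, output0],
                       shape_invariants=[i0.get_shape(), tf.TensorShape([None])])

with tf.Session() as sess:
    print(sess.run(out))  # 30 values, each 0.0 or 1.0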

If you see this error:
ValueError: The two structures don't have the same number of elements.
in a while_loop, it means that the loop variables you pass in and the values your body function returns have different structures.
I solved it by making sure that I return the same structure of loop_vars from my while-loop body; the condition function must also accept the same loop vars.
Here is some example code:
loop_vars = [i, loss, batch_size, smaller_str_lens]

def condition(*loop_vars):
    i = loop_vars[0]
    batch_size = loop_vars[2]
    return tf.less(i, batch_size)

def body(*loop_vars):
    i, loss, batch_size, smaller_str_lens = loop_vars
    tf.print("The loop passed here")
    ## logic here
    i = tf.add(i, 1)
    return i, loss, batch_size, smaller_str_lens

loss = tf.while_loop(condition, body, loop_vars)[1]
The body function must return the loop vars, and the condition function must accept the same loop vars.
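For reference, here is a minimal self-contained illustration of the same rule (a sketch, assuming TF 1.x): the condition and body both take every loop variable, and the body returns all of them.
import tensorflow as tf

i0 = tf.constant(0)
total0 = tf.constant(0.0)

def condition(i, total):
    return tf.less(i, 5)

def body(i, total):
    # Return updated versions of *all* loop variables, in the same order.
    return tf.add(i, 1), tf.add(total, tf.cast(i, tf.float32))

_, total = tf.while_loop(condition, body, [i0, total0])
with tf.Session() as sess:
    print(sess.run(total))  # 10.0 == 0 + 1 + 2 + 3 + 4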

Related

How to compute batch-wise Jacobians using vmap in JAX?

I want to solve a 2D differential equation using a neural network, working with the JAX library. The neural network function I am using basically approximates the function u = f(x,y) and goes something like this:
def f(params, inputs_x, inputs_y):
    inputs = jnp.concatenate((inputs_x, inputs_y), axis=1)
    for w, b in params:
        outputs = jnp.dot(inputs, w) + b
        inputs = jnn.swish(outputs)
    return outputs
params is a PyTree that contains the weight and bias matrices. For the 2D problem, let's take the layer sizes as something like [2,5,1]. There are 10 batches of (x_inputs, y_inputs) passed to the function, so inputs_x and inputs_y both have shape (10,1). Therefore, the output I want should also have shape (10,1). The real problem comes when I try to find du/dx, du/dy, d2u/dx2 or d2u/dy2. I am writing something like this:
u = lambda x, y: f(params, x, y)
u_x = lambda x, y: vmap(jacfwd(u, argnums=0), in_axes=(0, 0))(x, y)
u_xx = lambda x, y: vmap(jacfwd(u_x, argnums=0), in_axes=(0, 0))(x, y)
I am getting errors.
When I solve a 1D differential equation, everything works fine. In that case, the neural network function is something like this:
def f(params, inputs):
    for w, b in params:
        outputs = jnp.dot(inputs, w) + b
        inputs = jnn.swish(outputs)
    return outputs

u = lambda x: f(params, x)
u_x = lambda x: vmap(jacfwd(u, argnums=0))(x)
The layer sizes are [1,5,1], and I pass 10 batches of inputs into the neural network function and compute the gradients using vmap. Everything works fine!
As soon as I have a 2D problem with two input neurons, the layer sizes become [2,5,1], and when I pass 10 batches of inputs for both x and y together, vmap doesn't work anymore. I want to find du/dx, du/dy, d2u/dx2 and d2u/dy2 using the neural network and four functions like the ones above, and I expect all four to return results of shape (10,1), but I am getting errors.
It looks like your function is not compatible with vmap, because it expects explicit batch dimensions. You can fix this by concatenating along axis=-1 rather than axis=1. Then your function calls could look something like the following:
from functools import partial
import jax
import jax.numpy as jnp
from jax import nn as jnn

def f(params, inputs_x, inputs_y):
    inputs = jnp.concatenate((inputs_x, inputs_y), axis=-1)
    for w, b in params:
        outputs = jnp.dot(inputs, w) + b
        inputs = jnn.swish(outputs)
    return outputs

# Some example inputs and parameters
inputs_x = jnp.ones((10, 1))
inputs_y = jnp.ones((10, 1))
params = [
    (jnp.ones((2, 5)), 1),
    (jnp.ones((5, 1)), 1),
]

u = partial(f, params)

# u: (10,1)->(10,1)
print(u(inputs_x, inputs_y).shape)
# (10, 1)

# u: (1)->(1) batched to (10,1)->(10,1)
print(jax.vmap(u)(inputs_x, inputs_y).shape)
# (10, 1)

# ∇u: (1) -> (1,1) batched to (10,1)->(10,1,1)
print(jax.vmap(jax.jacobian(u))(inputs_x, inputs_y).shape)
# (10, 1, 1)

# ∇²u: (1) -> (1,1,1) batched to (10,1)->(10,1,1,1)
print(jax.vmap(jax.hessian(u))(inputs_x, inputs_y).shape)
# (10, 1, 1, 1)
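If you specifically want the (10, 1)-shaped derivatives from the question, one option (a hedged follow-up to the sketch above, using the same u, inputs_x and inputs_y) is to squeeze the extra Jacobian axes after vmapping:
# du/dx per example: (10, 1, 1) squeezed back to (10, 1)
u_x = jax.vmap(jax.jacfwd(u, argnums=0))(inputs_x, inputs_y).squeeze(-1)
print(u_x.shape)  # (10, 1)

# d2u/dx2 per example: (10, 1, 1, 1) squeezed back to (10, 1)
u_xx = jax.vmap(jax.hessian(u, argnums=0))(inputs_x, inputs_y).squeeze((-1, -2))
print(u_xx.shape)  # (10, 1)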

Keras custom layer on ragged tensor to reduce dimensionality

I'm trying to write a custom layer that will handle variable-length vectors and reduce them all to vectors of the same length.
The lengths are known in advance: the vectors vary because I have several different data types, each encoded with a different number of features.
In a sense, it is similar to Embedding, only for numerical values.
I've tried using padding, but the results were bad, so I'm trying this approach instead.
So, for example, let's say I have 3 data types, which I encode with vectors of lengths 3, 4 and 6:
arr = [
    # example one (data type 1 [len()==3], data type 3 [len()==6]) - force values as floats
    [[1.0, 2.0, 3], [1, 2, 3, 4, 5, 6]],
    # example two (data type 2 [len()==4], data type 3 [len()==6]) - force values as floats
    [[1.0, 2, 3, 4], [1, 2, 3, 4, 5, 6]],
]
I tried implementing a custom layer like this:
class DimensionReducer(tf.keras.layers.Layer):
    def __init__(self, output_dim, expected_lengths):
        super(DimensionReducer, self).__init__()
        self._supports_ragged_inputs = True
        self.output_dim = output_dim
        for l in expected_lengths:
            setattr(self, f'w_{l}', self.add_weight(shape=(l, self.output_dim),
                                                    initializer='random_normal', trainable=True))
            setattr(self, f'b_{l}', self.add_weight(shape=(self.output_dim,),
                                                    initializer='random_normal', trainable=True))

    def call(self, inputs):
        print(inputs.shape)
        # batch
        if len(inputs.shape) == 3:
            print("batch")
            result = []
            for i, x in enumerate(inputs):
                _result = []
                for v in x:
                    l = len(v)
                    print(l)
                    print(v)
                    w = getattr(self, f'w_{l}')
                    b = getattr(self, f'b_{l}')
                    out = tf.matmul([v], w) + b
                    _result.append(out)
                result.append(tf.concat(_result, 0))
            r = tf.stack(result)
            print("batch output:", r.shape)
            return r
This seems to work when called directly:
dim = DimensionReducer(3, [3, 4, 6])
dim(tf.ragged.constant(arr))
But when I try to incorporate it into a model, it fails:
import tensorflow as tf

val_ragged = tf.ragged.constant(arr)
inputs_ragged = tf.keras.layers.Input(shape=(None, None), ragged=True)
outputs_ragged = DimensionReducer(3, [3, 4, 6])(inputs_ragged)
model_ragged = tf.keras.Model(inputs=inputs_ragged, outputs=outputs_ragged)
# this call with a RaggedTensor fails
print(model_ragged(val_ragged))
with:
AttributeError: 'DimensionReducer' object has no attribute 'w_Tensor("dimension_reducer_98/strided_slice:0", shape=(), dtype=int32)'
I'm not sure how to implement such a layer, or what I'm doing wrong.

How to use the tf.case api of TensorFlow correctly?

I want to design the following function for expanding any 1D/2D/3D matrix to a 4D matrix.
import tensorflow as tf

def inputs_2_4D(inputs):
    _ranks = tf.rank(inputs)
    return tf.case({tf.equal(_ranks, 3): lambda: tf.expand_dims(inputs, 3),
                    tf.equal(_ranks, 2): lambda: tf.expand_dims(tf.expand_dims(inputs, 0), 3),
                    tf.equal(_ranks, 1): lambda: tf.expand_dims(tf.expand_dims(tf.expand_dims(inputs, 0), 0), 3)},
                   default=lambda: tf.identity(inputs))

def run():
    with tf.Session() as sess:
        mat_1d = tf.constant([1, 1])
        mat_2d = tf.constant([[1, 1]])
        mat_3d = tf.constant([[[1, 1]]])
        mat_4d = tf.constant([[[[1, 1]]]])
        result = inputs_2_4D(mat_1d)
        print(result.eval())
The function, however, does not run correctly. It only outputs a 4D matrix when the mat_3d or mat_4d tensors are passed into it; there are errors if a 1D or 2D matrix is passed to the function.
When passing mat_3d or mat_4d into inputs_2_4D(), they are expanded to a 4D matrix or returned as the original matrix:
mat_3d -----> [[[[1]
[1]]]]
mat_4d -----> [[[[1 1]]]]
When the mat_1d or mat_2d matrices are passed into inputs_2_4D, the error information is:
ValueError: dim 3 not in the interval [-2, 1]. for 'case/cond/ExpandDims' (op: 'ExpandDims') with input shapes: [2], [] and with computed input tensors: input[1] = <3>.
I tested another similar function before. That function runs correctly:
import tensorflow as tf

def test_2_4D(inputs):
    _ranks = tf.rank(inputs)
    return tf.case({tf.equal(_ranks, 3): lambda: tf.constant(3),
                    tf.equal(_ranks, 2): lambda: tf.constant(2),
                    tf.equal(_ranks, 1): lambda: tf.constant(1)},
                   default=lambda: tf.identity(inputs))

def run():
    with tf.Session() as sess:
        mat_1d = tf.constant([1, 1])
        mat_2d = tf.constant([[1, 1]])
        mat_3d = tf.constant([[[1, 1]]])
        mat_4d = tf.constant([[[[1, 1]]]])
        result = test_2_4D(mat_3d)
        print(result.eval())
This function correctly outputs the corresponding result for every one of the matrices.
test_2_4D() RESULTS:
mat_1d -----> 1
mat_2d -----> 2
mat_3d -----> 3
mat_4d -----> [[[[1 1]]]]
I don't know why the correct branch in inputs_2_4D() cannot be found, given that the tf.equal() in each branch is evaluated. It seems that the 1st and 2nd branches in the function are still executed even when the input matrix is mat_1d or mat_2d, so the program crashes. Please help me analyze this problem!
I think I worked out what the problem is here. It turns out all condition/function pairs are evaluated. This can be revealed by giving the ops different names. The problem is that if your input is, say, rank 2, TensorFlow seems to still evaluate tf.equal(_ranks, 3): lambda: tf.expand_dims(inputs, 3). This leads to a crash because it cannot expand dim 3 for a rank-2 tensor (the maximum allowed value is 2).
This actually makes sense, since with tf.case you're basically saying "I don't know which of these cases is going to be true at runtime, so check which one is appropriate and execute the corresponding function". However, this means that TensorFlow needs to prepare execution paths for all possible cases, which in this case leads to invalid computations (trying to expand invalid dimensions).
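Here is a small sketch that makes this visible (the op names are purely illustrative): both branch ops end up in the graph, even though only one of them can ever run.
import tensorflow as tf

x = tf.constant([1, 1])  # rank 1
r = tf.rank(x)
out = tf.case({tf.equal(r, 2): lambda: tf.expand_dims(x, 0, name='branch_2d')},
              default=lambda: tf.identity(x, name='branch_default'))
# Both named branch ops now exist in the default graph:
print([op.name for op in tf.get_default_graph().get_operations()
       if 'branch_' in op.name])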
At this point it would be nice to know a little more about your problem, i.e. why exactly you need that function. If you have different inputs and you simply want to bring them all to 4D, but each input always has the same dimensionality, consider simply using Python if-statements. Example:
inputs3d = tf.constant([[[1, 1]]])  # this is always 3D
inputs2d = tf.constant([[1, 1]])    # this is always 2D
...

def inputs_2_4D(inputs):
    _rank = len(inputs.shape.as_list())
    if _rank == 3:
        return tf.expand_dims(inputs, 3)
    elif _rank == 2:
        return tf.expand_dims(tf.expand_dims(inputs, 0), 3)
    ...
This will check the input rank while the graph is being built (not at runtime like tf.case) and really only prepare those expand_dims ops that are appropriate for the given input.
However if you have a single inputs tensor and this could have different ranks at different times of your program this would require a different solution. Please let us know which problem you're trying to solve!
I have implemented the functionality I want in 2 ways. Now I provide my code to share.
The 1st method is based on tf.cond:
def inputs_2_4D(inputs):
    _rank1d = tf.rank(inputs)
    def _1d_2_2d(): return tf.expand_dims(inputs, 0)
    def _greater_than_1d(): return tf.identity(inputs)
    _tmp_2d = tf.cond(_rank1d < 2, _1d_2_2d, _greater_than_1d)

    _rank2d = tf.rank(_tmp_2d)
    def _2d_2_3d(): return tf.expand_dims(_tmp_2d, 0)
    def _greater_than_2d(): return tf.identity(_tmp_2d)
    _tmp_3d = tf.cond(_rank2d < 3, _2d_2_3d, _greater_than_2d)

    _rank3d = tf.rank(_tmp_3d)
    def _3d_2_4d(): return tf.expand_dims(_tmp_3d, 3)
    def _greater_than_3d(): return tf.identity(_tmp_3d)
    return tf.cond(_rank3d < 4, _3d_2_4d, _greater_than_3d)
The 2nd method is based on tf.case with tf.cond:
def inputs_2_4D_1(inputs):
    _rank = tf.rank(inputs)
    def _assign_original(): return tf.identity(inputs)
    def _dummy(): return tf.expand_dims(inputs, 0)
    _1d = tf.cond(tf.equal(_rank, 1), _assign_original, _dummy)
    _2d = tf.cond(tf.equal(_rank, 2), _assign_original, _dummy)
    _3d = tf.cond(tf.equal(_rank, 3), _assign_original, _dummy)

    def _1d_2_4d(): return tf.expand_dims(tf.expand_dims(tf.expand_dims(_1d, 0), 0), 3)
    def _2d_2_4d(): return tf.expand_dims(tf.expand_dims(_2d, 0), 3)
    def _3d_2_4d(): return tf.expand_dims(_3d, 3)

    return tf.case({tf.equal(_rank, 1): _1d_2_4d,
                    tf.equal(_rank, 2): _2d_2_4d,
                    tf.equal(_rank, 3): _3d_2_4d},
                   default=_assign_original)
I think the efficiency of the 2nd method should be lower than the 1st method's, because the function _dummy() always wastes 2 operations when assigning inputs to _1d, _2d and _3d respectively.

Tensorflow: constructing the params tensor for tf.map_fn

import tensorflow as tf
import numpy as np
def lineeqn(slope, intercept, y, x):
    return np.sign(y - (slope*x) - intercept)
# data size
DS = 100000
N = 100
x1 = tf.random_uniform([DS], -1, 0, dtype=tf.float32, seed=0)
x2 = tf.random_uniform([DS], 0, 1, dtype=tf.float32, seed=0)
# line representing the target function
rand1 = np.random.randint(0, DS)
rand2 = np.random.randint(0, DS)
T_x1 = x1[rand1]
T_x2 = x1[rand2]
T_y1 = x2[rand1]
T_y2 = x2[rand2]
slope = (T_y2 - T_y1)/(T_x2 - T_x1)
intercept = T_y2 - (slope * T_x2)
# extracting training samples from the data set
training_indices = np.random.randint(0, DS, N)
training_x1 = tf.gather(x1, training_indices)
training_x2 = tf.gather(x2, training_indices)
training_x1_ex = tf.expand_dims(training_x1, 1)
training_x2_ex = tf.expand_dims(training_x2, 1)
slope_tensor = tf.fill([N], slope)
slope_ex = tf.expand_dims(slope_tensor, 1)
intercept_tensor = tf.fill([N], intercept)
intercept_ex = tf.expand_dims(intercept_tensor, 1)
params = tf.concat(1, [slope_ex, intercept_ex, training_x2_ex, training_x1_ex])
training_y = tf.map_fn(lineeqn, params)
The lineeqn function requires 4 parameters, so params should be a tensor where each element is a 4-element tensor. When I try to run the above code, I get the error TypeError: lineeqn() takes exactly 4 arguments (1 given). Can someone please explain what is wrong with the way I have constructed the params tensor? What does tf.map_fn do to the params tensor?
A similar question has been asked here. The reason you are getting this error is that the function called by map_fn - lineeqn in your case - is required to take exactly one tensor argument.
Rather than a list of arguments to the function, the parameter elems is expected to be a list of items, where the mapped function is called for each item contained in the list.
So in order to take multiple arguments to your function, you would have to unpack them yourself from each item, e.g.
def lineeqn(item):
    slope, intercept, y, x = tf.unstack(item, num=4)
    # tf.sign instead of np.sign, so it works on symbolic tensors
    return tf.sign(y - (slope * x) - intercept)
and call it as
training_y = tf.map_fn(lineeqn, list_of_parameter_tensors)
Here, you call the line equation for each tensor in the list_of_parameter_tensors, where each tensor would describe a tuple (slope, intercept, y, x) of packed arguments.
(Note that, depending on the shape of the actual argument tensors, you might have to use tf.pack instead of tf.concat.)
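For concreteness, here is a minimal self-contained sketch (assuming TF 1.x, with made-up numbers) of the pack-then-unpack pattern:
import tensorflow as tf

def lineeqn(item):
    slope, intercept, y, x = tf.unstack(item, num=4)
    return tf.sign(y - (slope * x) - intercept)

# Each row packs one (slope, intercept, y, x) tuple.
packed = tf.constant([[1.0, 0.0, 2.0, 1.0],
                      [1.0, 0.0, 0.5, 1.0]])
result = tf.map_fn(lineeqn, packed)
with tf.Session() as sess:
    print(sess.run(result))  # [ 1. -1.]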

How to use maxout activation function in tensorflow?

I want to use the maxout activation function in TensorFlow, but I don't know which function to use.
I sent a pull request for maxout; here is the link:
https://github.com/tensorflow/tensorflow/pull/5528
Code is as follows:
def maxout(inputs, num_units, axis=None):
    shape = inputs.get_shape().as_list()
    if axis is None:
        # Assume that channel is the last dimension
        axis = -1
    num_channels = shape[axis]
    if num_channels % num_units:
        raise ValueError('number of features({}) is not a multiple of num_units({})'
                         .format(num_channels, num_units))
    shape[axis] = -1
    shape += [num_channels // num_units]
    outputs = tf.reduce_max(tf.reshape(inputs, shape), -1, keep_dims=False)
    return outputs
Here is how it works: the feature axis is reshaped into num_units groups and the maximum is taken over each group.
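For instance, with the maxout function above (a small usage sketch):
x = tf.random_normal([32, 8])   # 8 features per example
y = maxout(x, num_units=4)      # reshaped to (32, 4, 2), max over the last axis
print(y.shape)                  # (32, 4)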
I don't think there is a maxout activation, but there is nothing stopping you from making it yourself. You could do something like the following.
with tf.variable_scope('maxout'):
    layer_input = ...
    layer_output = None
    for i in range(n_maxouts):
        W = tf.get_variable('W_%d' % i, (n_input, n_output))
        b = tf.get_variable('b_%d' % i, (n_output,))
        y = tf.matmul(layer_input, W) + b
        if layer_output is None:
            layer_output = y
        else:
            layer_output = tf.maximum(layer_output, y)
Note that this is code I just wrote in my browser so there may be syntax errors but you should get the general idea. You simply perform a number of linear transforms and take the maximum across all the transforms.
How about this code?
This seems to work in my test.
def max_out(input_tensor, output_size):
    shape = input_tensor.get_shape().as_list()
    if shape[1] % output_size == 0:
        return tf.transpose(tf.reduce_max(tf.split(input_tensor, output_size, 1), axis=2))
    else:
        raise ValueError("Output size or input tensor size is not fine. Please check it. The remainder needs to be zero.")
I referred to a diagram of maxout when writing this.
From version 1.4 on you can use tf.contrib.layers.maxout.
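A quick usage sketch (assuming TF 1.4+, where this contrib op is available):
x = tf.random_normal([32, 8])
y = tf.contrib.layers.maxout(x, num_units=4)
print(y.shape)  # (32, 4)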
Maxout is a layer that computes N*M outputs for an N*1 input, and then returns the maximum value across the columns, i.e., the final output has shape N*1 as well. Basically, it uses multiple linear fits to mimic a complex function.
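In symbols (the standard maxout formulation, stated here for clarity): given an input x and k affine maps,
maxout(x) = max(x·W_1 + b_1, ..., x·W_k + b_k)
where the maximum is taken elementwise across the k linear fits. This is exactly what the loop-based code above computes with tf.maximum.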