How to compute the ranks of an array in TensorFlow?

How to get the ranking of an array, like pandas.DataFrame.rank()?
For example, for this array:
a = tf.constant([0, 2, 3, 3])
The result I would expect is:
([0, 1, 2, 2])

This is the answer I found:
__, rank = tf.unique(a)
print(rank)
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([0, 1, 2, 2], dtype=int32)>
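Note that this coincides with a rank only because a is already sorted: tf.unique returns, for each element, the index of its value in first-appearance order, not its rank. For an unsorted array, a double argsort over the unique values gives a dense rank; here is a minimal sketch (my own addition, the helper name dense_rank is made up):
import tensorflow as tf

def dense_rank(a):
    # tf.unique gives each element's index into the unique values
    # (in first-appearance order), not its rank
    y, idx = tf.unique(a)
    # argsort of argsort yields the rank of each unique value
    ranks_of_unique = tf.argsort(tf.argsort(y))
    # Map every element to the rank of its value
    return tf.gather(ranks_of_unique, idx)

print(dense_rank(tf.constant([3, 0, 2, 3])))
# tf.Tensor([2 0 1 2], shape=(4,), dtype=int32)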

Related

Shuffling the contents of a Tensor

I have a tensor which is composed of 2D arrays representing audio frames, with shape 121*400. I want to shuffle the rows of the individual arrays, not the arrays contained in the tensor. Is this possible without iterating through the tensor and shuffling each array?
To shuffle each array you could use this:
def tf_shuffle_second_axis(t):
    # Uniquely random sort keys, argsorted along the second axis
    rnd = tf.argsort(tf.random.uniform(t.shape), axis=1)
    # Prepend the row index to each column index for gather_nd
    rnd = tf.concat([
        tf.repeat(tf.range(t.shape[0])[..., tf.newaxis, tf.newaxis], tf.shape(rnd)[1], axis=1),
        rnd[..., tf.newaxis]
    ], axis=2)
    # Return shuffled tensor
    return tf.gather_nd(t, rnd, batch_dims=0)
For example
a = tf.reshape(tf.range(16), [4,4])
<tf.Tensor: shape=(4, 4), dtype=int32, numpy=
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15]], dtype=int32)>
tf_shuffle_second_axis(a)
Output
<tf.Tensor: shape=(4, 4), dtype=int32, numpy=
array([[ 2,  1,  0,  3],
       [ 4,  6,  5,  7],
       [ 9, 11, 10,  8],
       [14, 12, 13, 15]], dtype=int32)>
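A more compact alternative (my own sketch; note that tf.map_fn does still iterate over the rows internally, which the question hoped to avoid) is to map tf.random.shuffle over the first axis:
# Shuffle the elements within each row of `a` independently
shuffled = tf.map_fn(tf.random.shuffle, a)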
UPDATE
To shuffle whole rows instead, use tf.random.shuffle:
tf.random.shuffle(a)
Output
<tf.Tensor: shape=(4, 4), dtype=int32, numpy=
array([[ 8,  9, 10, 11],
       [ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [12, 13, 14, 15]], dtype=int32)>

A question about axis in tensorflow.stack (tensorflow==1.14)

Using tensorflow.stack what does it mean to have axis=-1 ?
I'm using tensorflow==1.14
Using axis=-1 simply means stacking the tensors along the last axis, following Python's negative list-indexing convention.
Let's see what this looks like using three tensors of shape (2, 2):
>>> x = tf.constant([[1, 2], [3, 4]])
>>> y = tf.constant([[5, 6], [7, 8]])
>>> z = tf.constant([[9, 10], [11, 12]])
The default behavior of tf.stack, as described in the documentation, is to stack the tensors along the first axis (index 0), resulting in a tensor of shape (3, 2, 2):
>>> tf.stack([x, y, z], axis=0)
<tf.Tensor: shape=(3, 2, 2), dtype=int32, numpy=
array([[[ 1,  2],
        [ 3,  4]],

       [[ 5,  6],
        [ 7,  8]],

       [[ 9, 10],
        [11, 12]]], dtype=int32)>
Using axis=-1, the three tensors are stacked along the last axis instead, resulting in a tensor of shape (2, 2, 3):
>>> tf.stack([x, y, z], axis=-1)
<tf.Tensor: shape=(2, 2, 3), dtype=int32, numpy=
array([[[ 1,  5,  9],
        [ 2,  6, 10]],

       [[ 3,  7, 11],
        [ 4,  8, 12]]], dtype=int32)>
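For completeness (my own addition, following the same pattern), axis=1 stacks along the middle axis and yields shape (2, 3, 2); for a rank-3 result, axis=-1 is simply shorthand for axis=2:
>>> tf.stack([x, y, z], axis=1)
<tf.Tensor: shape=(2, 3, 2), dtype=int32, numpy=
array([[[ 1,  2],
        [ 5,  6],
        [ 9, 10]],

       [[ 3,  4],
        [ 7,  8],
        [11, 12]]], dtype=int32)>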

How to generate a [1,1,1] according to a tensor of shape [5,4,3]?

I want to apply the operation tf.tile, e.g. tf.tile(A, [1, 1, b]), where A has shape [5,4,3]. How can I generate [1, 1, 1] according to A, and then set its third element to b, where b is a placeholder?
This is my code, but it doesn't work. How can I fix it?
d = tf.shape(A)
for i in range(tf.rank(A)):  # wrong: tf.rank(A) is a tensor and can't be used here
    d[i] = 1
d[2] = b
result = tf.tile(A, d)
The easiest solution is probably to use tf.one_hot to build your multiples tensor directly.
>>> b = 2
>>> tf.one_hot(indices=b, depth=tf.rank(A), on_value=b, off_value=1)
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([1, 1, 2], dtype=int32)>
Alternatively, you can use tf.ones_like to generate a tensor of ones with the same shape and dtype as the tensor passed as an argument:
>>> A = tf.random.uniform((5,4,3))
>>> tf.shape(A)
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([5, 4, 3], dtype=int32)>
>>> tf.ones_like(tf.shape(A))
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([1, 1, 1], dtype=int32)>
Note that in TensorFlow you can't do item assignment on a tensor (so d[2] = b won't work, for example). To generate your tensor [1, 1, b] you can use tf.concat:
>>> b = 2
>>> tf.concat([tf.ones_like(tf.shape(A)[:-1]), [b]], axis=0)
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([1, 1, 2], dtype=int32)>
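Either way, the resulting multiples vector can be passed straight to tf.tile; continuing with A of shape (5, 4, 3) and b = 2 (my own check, tiling doubles the last dimension):
>>> multiples = tf.concat([tf.ones_like(tf.shape(A)[:-1]), [b]], axis=0)
>>> tf.shape(tf.tile(A, multiples))
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([5, 4, 6], dtype=int32)>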

Tensorflow 2 - tf.slice and its NumPy slice syntax incompatible behavior

Question
Please confirm whether the behavior below is as designed and expected, an issue with tf.slice, or a mistake in my usage of tf.slice. If it is a mistake, kindly suggest how to correct it.
Background
The guide Introduction to tensor slicing - Extract tensor slices says NumPy-like slice syntax is an alternative to tf.slice:
Perform NumPy-like tensor slicing using tf.slice.
t1 = tf.constant([0, 1, 2, 3, 4, 5, 6, 7])
print(tf.slice(t1,
               begin=[1],
               size=[3]))
Alternatively, you can use a more Pythonic syntax. Note that tensor slices are evenly spaced over a start-stop range.
print(t1[1:4])
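For reference (my own check), both forms print the same slice:
tf.Tensor([1 2 3], shape=(3,), dtype=int32)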
Problem
The goal is to update a rectangular region of the target variable (rows 0-2, columns 1-2; shown as a dark orange region in a figure in the original post).
TYPE = tf.int32
N = 4
D = 5
shape = (N,D)
# Target to update
Y = tf.Variable(
    initial_value=tf.reshape(tf.range(N*D, dtype=TYPE), shape=shape),
    trainable=True
)
print(f"Target Y: \n{Y}\n")
---
Target Y:
<tf.Variable 'Variable:0' shape=(4, 5) dtype=int32, numpy=
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14],
       [15, 16, 17, 18, 19]], dtype=int32)>
tf.slice does not work.
# --------------------------------------------------------------------------------
# Slice region in the target to be updated
# --------------------------------------------------------------------------------
S = tf.slice(        # Error: "'EagerTensor' object has no attribute 'assign'"
    Y,
    begin=[0, 1],    # Coordinate (n,d) as the start point
    size=[3, 2]      # Shape (3,2) -> (n+3, d+2) as the end point
)
print(f"Slice to update S: \n{S}\n")

# Values to set
V = tf.ones(shape=tf.shape(S), dtype=TYPE)
print(f"Values to set V: \n{V}\n")

# Assign V to the S region of Y
S.assign(V)
---
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-17-e5692b1750c8> in <module>
     24
     25 # Assign V to the S region of Y
---> 26 S.assign(V)
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'assign'
However, the NumPy-like slice syntax works:
S = Y[
    0:3,  # From coordinate (n=0, d), slice rows (0,1,2), i.e. size=3 -> shape (3,?)
    1:3   # From coordinate (n=0, d=1), slice columns (1,2), i.e. size=2 -> shape (3,2)
]
print(f"Slice to update S: \n{S}\n")

# Values to set
V = tf.ones(shape=tf.shape(S), dtype=TYPE)
print(f"Values to set V: \n{V}\n")

# Assign V to the S region of Y
S.assign(V)
---
<tf.Variable 'UnreadVariable' shape=(4, 5) dtype=int32, numpy=
array([[ 0,  1,  1,  3,  4],
       [ 5,  1,  1,  8,  9],
       [10,  1,  1, 13, 14],
       [15, 16, 17, 18, 19]], dtype=int32)>
In my understanding, the above behavior is expected, or at least not a bug. As the error says, there is no attribute called assign on tf.Tensor (EagerTensor under eager execution), but there is on tf.Variable. And in general, tf.slice returns a tensor as its output, which therefore has no assign attribute.
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'assign'
But when we use NumPy-like slicing on the original tf.Variable and modify it through the slice, it works seamlessly.
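A quick way to see the difference (my own check, assuming the same eager TF 2.x setup as above): the object returned by slicing a tf.Variable carries an assign method, while the output of tf.slice is a plain EagerTensor:
>>> hasattr(Y[0:3, 1:3], "assign")
True
>>> hasattr(tf.slice(Y, begin=[0, 1], size=[3, 2]), "assign")
False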
Possible Solution
A workaround is to use tf.strided_slice instead of tf.slice. If we follow its source code, we see that it takes a var argument, which is a variable corresponding to input_:
@tf_export("strided_slice")
@dispatch.add_dispatch_support
def strided_slice(input_,
                  begin,
                  end,
                  ..........
                  var=None,
                  name=None):
And when we pass a parameter for var, basically corresponding to input_, it then calls an assign function that is defined within it:
def assign(val, name=None):
  """Closure that holds all the arguments to create an assignment."""
  if var is None:
    raise ValueError("Sliced assignment is only supported for variables")
  else:
    if name is None:
      name = parent_name + "_assign"
    return var._strided_slice_assign(
        begin=begin,
        end=end,
        strides=strides,
        value=val,
        name=name,
        begin_mask=begin_mask,
        end_mask=end_mask,
        ellipsis_mask=ellipsis_mask,
        new_axis_mask=new_axis_mask,
        shrink_axis_mask=shrink_axis_mask)
So, when we pass var to tf.strided_slice, it returns an assignable object.
Code
Here is the full working code for reference.
import tensorflow as tf
print(tf.__version__)
TYPE = tf.int32
N = 4
D = 5
shape = (N,D)
# Target to update
Y = tf.Variable(
    initial_value=tf.reshape(tf.range(N*D, dtype=TYPE), shape=shape),
    trainable=True
)
Y
2.4.1
<tf.Variable 'Variable:0' shape=(4, 5) dtype=int32, numpy=
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14],
       [15, 16, 17, 18, 19]], dtype=int32)>
Now, we use tf.strided_slice instead of tf.slice.
S = tf.strided_slice(
    Y,
    begin=[0, 1],
    end=[3, 3],
    var=Y,
    name='slice_op'
)
S
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[ 1,  2],
       [ 6,  7],
       [11, 12]], dtype=int32)>
The update now succeeds with no AttributeError.
# Values to set
V = tf.ones(shape=tf.shape(S), dtype=TYPE)
print(V)
print()
# Assign V to the S region of Y
S.assign(V)
tf.Tensor(
[[1 1]
 [1 1]
 [1 1]], shape=(3, 2), dtype=int32)

<tf.Variable 'UnreadVariable' shape=(4, 5) dtype=int32, numpy=
array([[ 0,  1,  1,  3,  4],
       [ 5,  1,  1,  8,  9],
       [10,  1,  1, 13, 14],
       [15, 16, 17, 18, 19]], dtype=int32)>
Using NumPy-like slicing.
# slicing
S = Y[
    0:3,
    1:3
]
S
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[ 1,  2],
       [ 6,  7],
       [11, 12]], dtype=int32)>
# Values to set
V = tf.ones(shape=tf.shape(S), dtype=TYPE)
print(V)
# Assign V to the S region of Y
S.assign(V)
tf.Tensor(
[[1 1]
 [1 1]
 [1 1]], shape=(3, 2), dtype=int32)

<tf.Variable 'UnreadVariable' shape=(4, 5) dtype=int32, numpy=
array([[ 0,  1,  1,  3,  4],
       [ 5,  1,  1,  8,  9],
       [10,  1,  1, 13, 14],
       [15, 16, 17, 18, 19]], dtype=int32)>
Materials
tf.Variable - tf.Tensor.

Calling reshape on an LSTMStateTuple turns it into a tensor

I was using dynamic_rnn with an LSTMCell, which puts out an LSTMStateTuple containing the inner state. Calling reshape on this object (by mistake) results in a tensor without causing any error at graph creation. I didn't get any error at runtime when feeding input through the graph, either.
Code:
cell = tf.contrib.rnn.LSTMCell(size, state_is_tuple=True, ...)
outputs, states = tf.nn.dynamic_rnn(cell, inputs, ...)
print(states) # state is an LSTMStateTuple
states = tf.reshape(states, [-1, size])
print(states) # state is a tensor of shape [?, size]
Is this a bug (I ask because it's not documented anywhere)? What is the reshaped tensor holding?
I have conducted a similar experiment which may give you some hints:
>>> s = tf.constant([[0, 0, 0, 1, 1, 1],
...                  [2, 2, 2, 3, 3, 3]])
>>> t = tf.constant([[4, 4, 4, 5, 5, 5],
...                  [6, 6, 6, 7, 7, 7]])
>>> g = tf.reshape((s, t), [-1, 3])  # <tf.Tensor 'Reshape_1:0' shape=(8, 3) dtype=int32>
>>> sess.run(g)
array([[0, 0, 0],
       [1, 1, 1],
       [2, 2, 2],
       [3, 3, 3],
       [4, 4, 4],
       [5, 5, 5],
       [6, 6, 6],
       [7, 7, 7]], dtype=int32)
We can see that it just concatenates the two tensors along the first dimension and then performs the reshape. Since LSTMStateTuple is a namedtuple, it behaves like a plain tuple here, and I think this is also what happens in your case.
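To make the "concatenate, then reshape" reading concrete (my own check in the same session), the result is identical to an explicit tf.concat along the first axis followed by the reshape:
>>> sess.run(tf.reshape(tf.concat([s, t], axis=0), [-1, 3]))
array([[0, 0, 0],
       [1, 1, 1],
       [2, 2, 2],
       [3, 3, 3],
       [4, 4, 4],
       [5, 5, 5],
       [6, 6, 6],
       [7, 7, 7]], dtype=int32)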
Let's go further,
>>> st = tf.contrib.rnn.LSTMStateTuple(s, t)
>>> gg = tf.reshape(st, [-1, 3])
>>> sess.run(gg)
array([[0, 0, 0],
       [1, 1, 1],
       [2, 2, 2],
       [3, 3, 3],
       [4, 4, 4],
       [5, 5, 5],
       [6, 6, 6],
       [7, 7, 7]], dtype=int32)
We can see that if we create an LSTMStateTuple explicitly, the result confirms our assumption.
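In short, tf.reshape implicitly converts the LSTMStateTuple into a single tensor before reshaping, so the cell state c and the hidden state h end up mixed in one tensor. If the goal is to reshape the states while keeping them separate, reshape the fields individually; a minimal sketch under the question's TF 1.x contrib API:
# states is an LSTMStateTuple(c=..., h=...); reshape each field on its own
c = tf.reshape(states.c, [-1, size])
h = tf.reshape(states.h, [-1, size])
states = tf.contrib.rnn.LSTMStateTuple(c, h)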