How to shift values in a tensor - TensorFlow

I have tensor T of shape [batch_size, A] with values and tensor S of shape [batch_size] with shift parameters.
I would like to shift the values in T[b] by S[b] positions to the right; the last S[b] elements of T[b] should be dropped and the new elements should be set to 0.
So basically I want to do something like:
for i in range(batch_size):
    T[i] = zeros[:S[i]] + T[i, :A - S[i]]
Example:
For:
T = [[1, 2, 3], [4, 5, 6]]
S = [1, 2]
Return:
T' = [[0, 1, 2], [0, 0, 4]]
Is there some easy way to do it?

You can use tf.concat and tf.stack for that purpose:
zeros = tf.zeros((batch_size, A), dtype=T.dtype)
tmp = []
for i in range(batch_size):
    # prepend S[i] zeros, then keep the first A - S[i] values of T[i]
    tmp.append(tf.concat([zeros[i, :S[i]], T[i, :A - S[i]]], axis=0))
T_shift = tf.stack(tmp)

If you are working in TensorFlow 2, you can use tf.roll for that purpose:
"The elements are shifted positively (towards larger indices) by the
offset of shift along the dimension of axis. Negative shift values
will shift elements in the opposite direction. Elements that roll
passed the last position will wrap around to the first and vice versa.
Multiple shifts along multiple axes may be specified."
tf.roll(input, shift, axis, name=None)
# 't' is [0, 1, 2, 3, 4]
roll(t, shift=2, axis=0) ==> [3, 4, 0, 1, 2]
# shifting along multiple dimensions
# 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
roll(t, shift=[1, -2], axis=[0, 1]) ==> [[7, 8, 9, 5, 6], [2, 3, 4, 0, 1]]
# shifting along the same axis multiple times
# 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
roll(t, shift=[2, -3], axis=[1, 1]) ==> [[1, 2, 3, 4, 0], [6, 7, 8, 9, 5]]
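Note that tf.roll wraps elements around rather than dropping them and filling with zeros, and its shift argument applies per axis, not per batch element, so it does not directly give the behaviour asked for in the question. A minimal vectorized sketch for the original per-row shift with zero fill, assuming TF 2 and using tf.gather with batch_dims plus a mask (my own addition, not part of either answer above):
import tensorflow as tf

def shift_right(T, S):
    # T: [batch_size, A], S: [batch_size] non-negative shift amounts
    A = tf.shape(T)[1]
    cols = tf.range(A)[None, :] - S[:, None]    # source column for each output position
    valid = cols >= 0                           # positions that should come from T
    gathered = tf.gather(T, tf.maximum(cols, 0), batch_dims=1)
    return tf.where(valid, gathered, tf.zeros_like(gathered))

T = tf.constant([[1, 2, 3], [4, 5, 6]])
S = tf.constant([1, 2])
shift_right(T, S)   # [[0, 1, 2], [0, 0, 4]]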

Related

convert CSR format to dense/COO format in tensorflow

The tf.sparse_to_dense() function in TensorFlow only supports the ((data, (row_ind, col_ind)), [shape=(M, N)]) (COO) format. How can I convert a standard CSR tensor (((data, indices, indptr), [shape=(M, N)])) to a dense representation in TensorFlow?
For example, given data, indices and indptr, the function should return the dense tensor.
e.g., inputs:
indices = [1 3 3 0 1 2 2 3]
indptr = [0 2 3 6 8]
data = [2 4 1 3 2 1 1 5]
expected output:
[[0, 2, 0, 4],
[0, 0, 0, 1],
[3, 2, 1, 0],
[0, 0, 1, 5]]
According to Scipy documentation, we can convert it back by the following:
"the column indices for row i are stored in indices[indptr[i]:indptr[i+1]] and their corresponding values are stored in data[indptr[i]:indptr[i+1]]. If the shape parameter is not supplied, the matrix dimensions are inferred from the index arrays."
It is relatively easy to convert from the CSR format to COO by expanding the indptr argument to get the row indices. Here is an example using a subtraction, tf.repeat and tf.range. The shape of the final sparse tensor is inferred from the max indices in the rows/columns respectively (but can also be provided explicitly).
import tensorflow as tf

def csr_to_sparse(data, indices, indptr, dense_shape=None):
    # number of non-zero entries in each row
    rep = tf.math.subtract(indptr[1:], indptr[:-1])
    # expand indptr into explicit row indices (COO)
    row_indices = tf.repeat(tf.range(tf.size(rep)), rep)
    sparse_indices = tf.cast(tf.stack((row_indices, indices), axis=-1), tf.int64)
    if dense_shape is None:
        max_row = tf.math.reduce_max(row_indices)
        max_col = tf.math.reduce_max(indices)
        dense_shape = (max_row + 1, max_col + 1)
    return tf.SparseTensor(indices=sparse_indices, values=data, dense_shape=dense_shape)
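To see how the indptr expansion behaves on the sample inputs (worked out by hand, as a sanity check):
indptr      = [0, 2, 3, 6, 8]
rep         = indptr[1:] - indptr[:-1]      # [2, 1, 3, 2]  (non-zeros per row)
row_indices = repeat([0, 1, 2, 3], rep)     # [0, 0, 1, 2, 2, 2, 3, 3]
Paired with indices = [1, 3, 3, 0, 1, 2, 2, 3], these give the COO coordinates of the 8 non-zero entries.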
With your example:
>>> indices = [1, 3, 3, 0, 1, 2, 2, 3]
>>> indptr = [0, 2, 3, 6, 8,]
>>> data = [2, 4, 1, 3, 2, 1, 1, 5]
>>> tf.sparse.to_dense(csr_to_sparse(data, indices, indptr))
<tf.Tensor: shape=(4, 4), dtype=int32, numpy=
array([[0, 2, 0, 4],
       [0, 0, 0, 1],
       [3, 2, 1, 0],
       [0, 0, 1, 5]], dtype=int32)>

pytorch repeat 3rd dimension

I'm following this example from the docs:
In [42]: x = torch.tensor([1,2,3])
In [45]: x.repeat(4,2)
Out[45]: tensor([[1, 2, 3, 1, 2, 3],
[1, 2, 3, 1, 2, 3],
[1, 2, 3, 1, 2, 3],
[1, 2, 3, 1, 2, 3]])
In [46]: x.repeat(4,2).shape
Out[46]: torch.Size([4, 6])
So far, so good.
But why does repeating just 1 time along the 3rd dimension expand the 3rd dim to 3 (not 1)?
[From the docs]
>>> x.repeat(4, 2, 1).size()
torch.Size([4, 2, 3])
Double checking.
In [43]: x.repeat(4,2,1)
Out[43]:
tensor([[[1, 2, 3],
         [1, 2, 3]],
        [[1, 2, 3],
         [1, 2, 3]],
        [[1, 2, 3],
         [1, 2, 3]],
        [[1, 2, 3],
         [1, 2, 3]]])
Why does it behave this way?
x.repeat(4, 2, 1) repeats the size-(3,) tensor only once along its last (and only) dim. The (4, 2, 1) is the number of times you want to repeat a (3,) tensor along each dimension. So the final tensor is (4, 2, 3), because you repeat the (3,) once over the last axis, twice over the second-to-last axis and 4 times over the first axis.
x = torch.tensor([1, 2, 3])
x.shape
torch.Size([3])
Then,
xx = x.repeat(4,2,1)
xx.shape
torch.Size([4, 2, 3])
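In other words, when the repeat tuple has more entries than the tensor has dimensions, the tensor is treated as if extra leading dimensions of size 1 were added first. A quick check of that reading (my own addition, not from the original answer):
import torch

x = torch.tensor([1, 2, 3])
# repeating a (3,) tensor with (4, 2, 1) behaves like repeating a (1, 1, 3) view
torch.equal(x.repeat(4, 2, 1), x.view(1, 1, 3).repeat(4, 2, 1))   # True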

Elementwise concatenation in numpy

I'm trying to concatenate 2 arrays element-wise. I have the concatenation working to produce the correct shape but it has not been applied element-wise.
So I have this array
[0, 1]
[2, 3]
[4, 5]
I want to concatenate each row of the array with every row; the target result would be
[0, 1, 0, 1]
[0, 1, 2, 3]
[0, 1, 4, 5]
[2, 3, 0, 1]
[2, 3, 2, 3]
[2, 3, 4, 5]
[4, 5, 0, 1]
[4, 5, 2, 3]
[4, 5, 4, 5]
I think I may need to change an axis but then I can't get the broadcasting to work.
Any help would be greatly appreciated. Lots to learn in numpy!
a = np.arange(6).reshape(3, 2)
b = np.concatenate((a, a), axis=1)
One way would be stacking replicated versions created with np.repeat and np.tile -
In [52]: n = len(a)
In [53]: np.hstack((np.repeat(a,n,axis=0),np.tile(a,(n,1))))
Out[53]:
array([[0, 1, 0, 1],
[0, 1, 2, 3],
[0, 1, 4, 5],
[2, 3, 0, 1],
[2, 3, 2, 3],
[2, 3, 4, 5],
[4, 5, 0, 1],
[4, 5, 2, 3],
[4, 5, 4, 5]])
Another would be with broadcasted-assignment, since you mentioned broadcasting -
def create_mesh(a):
    m,n = a.shape
    out = np.empty((m,m,2*n),dtype=a.dtype)
    out[...,:n] = a[:,None]
    out[...,n:] = a
    return out.reshape(-1,2*n)
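A quick usage check of create_mesh (my own addition), assuming a is the (3, 2) array from the question:
a = np.arange(6).reshape(3, 2)
create_mesh(a)
# gives the same 9 x 4 array as the hstack approach above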
One solution is to build on senderle's cartesian_product to extend this to 2D arrays. Here's how I usually do this:
# Your input array.
arr
# array([[0, 1],
# [2, 3],
# [4, 5]])
idxs = cartesian_product(*[np.arange(len(arr))] * 2)
arr[idxs].reshape(idxs.shape[0], -1)
# array([[0, 1, 0, 1],
# [0, 1, 2, 3],
# [0, 1, 4, 5],
# [2, 3, 0, 1],
# [2, 3, 2, 3],
# [2, 3, 4, 5],
# [4, 5, 0, 1],
# [4, 5, 2, 3],
# [4, 5, 4, 5]])
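For completeness, here is a minimal stand-in for cartesian_product that is sufficient for this use case (all combinations of 1-D index arrays); it is a simplified sketch of my own, not senderle's optimized implementation:
import numpy as np

def cartesian_product(*arrays):
    # all combinations of the input 1-D arrays, one combination per row
    grids = np.meshgrid(*arrays, indexing='ij')
    return np.stack(grids, axis=-1).reshape(-1, len(arrays))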

Construct matrix using selection of rows and columns in numpy

In NumPy, suppose I have a matrix X:
X = array([[3, 1, 4, 5], [5, 1, 2, 1], [4, 4, 0, 1], [0, 3, 0, 3], [1, 2, 3, 4]])
How can I construct a new matrix using the first row (row 0) and the last two rows (rows 3, 4) of X?
The resulting matrix is:
Y = array([[3, 1, 4, 5], [0, 3, 0, 3], [1, 2, 3, 4]])
I cannot list all the rows I want to include for the new matrix because for the data I have, it will be like choosing rows (20, 60) and (90, 120) of the original matrix to construct a new matrix.
Use np.r_ to get those concatenated row indices and simply index into the rows of the input array, like so -
X[np.r_[0, 3:5]] # for sample case
X[np.r_[20:60, 90:120]] # for actual case
Sample run -
In [146]: X
Out[146]:
array([[3, 1, 4, 5],
[5, 1, 2, 1],
[4, 4, 0, 1],
[0, 3, 0, 3],
[1, 2, 3, 4]])
In [147]: X[np.r_[0, 3:5]]
Out[147]:
array([[3, 1, 4, 5],
[0, 3, 0, 3],
[1, 2, 3, 4]])
Sample run for shape test on a bigger random array -
In [150]: X = np.random.rand(200,10)
In [151]: X[np.r_[20:60, 90:120]].shape
Out[151]: (70, 10) # 70 rows selected
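If you prefer not to rely on the np.r_ slice syntax, the same row-index array can be built explicitly (an equivalent I'm adding for illustration, not part of the original answer):
idx = np.concatenate(([0], np.arange(3, 5)))   # array([0, 3, 4])
X[idx]                                         # same result as X[np.r_[0, 3:5]]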

Calling reshape on an LSTMStateTuple turns it into a tensor

I was using dynamic_rnn with an LSTMCell, which outputs an LSTMStateTuple containing the inner state. Calling reshape on this object (by my mistake) results in a tensor without causing any error at graph creation. I didn't get any error at runtime when feeding input through the graph, either.
Code:
cell = tf.contrib.rnn.LSTMCell(size, state_is_tuple=True, ...)
outputs, states = tf.nn.dynamic_rnn(cell, inputs, ...)
print(states) # state is an LSTMStateTuple
states = tf.reshape(states, [-1, size])
print(states) # state is a tensor of shape [?, size]
Is this a bug (I ask because it's not documented anywhere)? What is the reshaped tensor holding?
I have conducted a similar experiment which may give you some hints:
>>> s = tf.constant([[0, 0, 0, 1, 1, 1],
[2, 2, 2, 3, 3, 3]])
>>> t = tf.constant([[4, 4, 4, 5, 5, 5],
[6, 6, 6, 7, 7, 7]])
>>> g = tf.reshape((s, t), [-1, 3]) # <tf.Tensor 'Reshape_1:0' shape=(8, 3) dtype=int32>
>>> sess.run(g)
array([[0, 0, 0],
[1, 1, 1],
[2, 2, 2],
[3, 3, 3],
[4, 4, 4],
[5, 5, 5],
[6, 6, 6],
[7, 7, 7]], dtype=int32)
We can see that it just concatenates the two tensors along the first dimension and then performs the reshaping. Since LSTMStateTuple is a namedtuple, it behaves like a plain tuple here, and I think this is also what happens in your case.
Let's go further,
>>> st = tf.contrib.rnn.LSTMStateTuple(s, t)
>>> gg = tf.reshape(st, [-1, 3])
>>> sess.run(gg)
array([[0, 0, 0],
[1, 1, 1],
[2, 2, 2],
[3, 3, 3],
[4, 4, 4],
[5, 5, 5],
[6, 6, 6],
[7, 7, 7]], dtype=int32)
We can see that if we create an LSTMStateTuple, the result verifies our assumption.
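In other words, the reshape is effectively doing the following (a minimal sketch using the same s and t as above, in the same TF1 session style):
explicit = tf.reshape(tf.concat([s, t], axis=0), [-1, 3])
sess.run(explicit)   # same 8 x 3 array as sess.run(g) and sess.run(gg)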