I have a NumPy array that consists of several square sub-blocks. For example:
A = [A_1 | A_2 | ... A_n],
each of them has the same size. I would like to transpose it in the following way:
B = [A_1^T | A_2^T| ... A_n^T].
Is there a way to do this without slicing the original array and transposing each sub-block individually?
Assuming that A_i has shape (M, M), I can see two scenarios:
Your entire array A is already in shape (N, M, M). In this case, you can transpose the submatrices A_i using np.ndarray.swapaxes or np.ndarray.transpose. Example:
import numpy as np

A = np.arange(36).reshape(4, 3, 3)
# 4 submatrices A_0 ... A_3, each with shape (3, 3):
# array([[[ 0,  1,  2],
#         [ 3,  4,  5],
#         [ 6,  7,  8]],
#
#        [[ 9, 10, 11],
#         [12, 13, 14],
#         [15, 16, 17]],
#
#        [[18, 19, 20],
#         [21, 22, 23],
#         [24, 25, 26]],
#
#        [[27, 28, 29],
#         [30, 31, 32],
#         [33, 34, 35]]])
B = A.swapaxes(1, 2)
# The submatrices are transposed:
# array([[[ 0,  3,  6],
#         [ 1,  4,  7],
#         [ 2,  5,  8]],
#
#        [[ 9, 12, 15],
#         [10, 13, 16],
#         [11, 14, 17]],
#
#        [[18, 21, 24],
#         [19, 22, 25],
#         [20, 23, 26]],
#
#        [[27, 30, 33],
#         [28, 31, 34],
#         [29, 32, 35]]])
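Equivalently, np.ndarray.transpose with an explicit axis order gives the same result:

# Keep axis 0 (the block index), swap the last two axes of each block
B2 = A.transpose(0, 2, 1)
assert (B2 == A.swapaxes(1, 2)).all()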
Your entire array A has only two dimensions, i.e. shape (M, N * M). In this case, you can bring your array to three dimensions first, swap the axes, and then reshape back to two dimensions. Example:
A = np.arange(36).reshape(3, 12)
# array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
#        [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
#        [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35]])
# A_i:    ^^^^^^^^^^  ^^^^^^^^^^  ^^^^^^^^^^  ^^^^^^^^^^
B = A.reshape(3, 4, 3).swapaxes(0, 2).reshape(3, 12)
# array([[ 0, 12, 24,  3, 15, 27,  6, 18, 30,  9, 21, 33],
#        [ 1, 13, 25,  4, 16, 28,  7, 19, 31, 10, 22, 34],
#        [ 2, 14, 26,  5, 17, 29,  8, 20, 32, 11, 23, 35]])
# A_i^T:  ^^^^^^^^^^  ^^^^^^^^^^  ^^^^^^^^^^  ^^^^^^^^^^
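For general M and N, the same steps can be wrapped in a small helper; a minimal sketch (transpose_blocks is just an illustrative name, assuming A has shape (M, N * M)):

import numpy as np

def transpose_blocks(A, M):
    # Split the columns into N blocks of width M, swap each block's
    # row and column axes, then flatten back to (M, N * M).
    N = A.shape[1] // M
    return A.reshape(M, N, M).swapaxes(0, 2).reshape(M, N * M)

A = np.arange(36).reshape(3, 12)
B = transpose_blocks(A, 3)   # same result as the example above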
I have a batch of data with shape [?, dim],
x = [[ 0,  1,  2,  3,  4],
     [ 5,  6,  7,  8,  9],
     [10, 11, 12, 13, 14],
     [15, 16, 17, 18, 19],
     [20, 21, 22, 23, 24]]
and a tensor that indicates the repetition number for each row, with shape [?, 1], say:
rep_nums = [[1], [2], [1], [3], [1]]
and the expected result is:
[[ 0,  1,  2,  3,  4],
 [ 5,  6,  7,  8,  9],
 [ 5,  6,  7,  8,  9],
 [10, 11, 12, 13, 14],
 [15, 16, 17, 18, 19],
 [15, 16, 17, 18, 19],
 [15, 16, 17, 18, 19],
 [20, 21, 22, 23, 24]]
I tried dynamic_partition as mentioned here, but it only works in TF 2.x, which is not compatible with my pre-existing project.
I think tf.repeat will help.
import tensorflow as tf

c1 = tf.constant([[ 0,  1,  2,  3,  4],
                  [ 5,  6,  7,  8,  9],
                  [10, 11, 12, 13, 14],
                  [15, 16, 17, 18, 19],
                  [20, 21, 22, 23, 24]])
times = tf.constant([1, 2, 1, 3, 1])
# Repeat each row of c1 the corresponding number of times along axis 0
res = tf.repeat(c1, times, axis=0)

with tf.Session() as sess:
    print(sess.run(res))
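If your TensorFlow version predates tf.repeat (it was added around TF 1.15), a possible workaround is to build the repeated row indices yourself and gather the rows; a minimal sketch, assuming a TF 1.x graph session:

import tensorflow as tf

c1 = tf.constant([[ 0,  1,  2,  3,  4],
                  [ 5,  6,  7,  8,  9],
                  [10, 11, 12, 13, 14],
                  [15, 16, 17, 18, 19],
                  [20, 21, 22, 23, 24]])
times = tf.constant([1, 2, 1, 3, 1])

# Boolean mask of shape (5, max(times)); row i has times[i] True entries.
mask = tf.sequence_mask(times)
# Row-major coordinates of the True entries; column 0 is the row index,
# repeated times[i] times: [0, 1, 1, 2, 3, 3, 3, 4]
idx = tf.where(mask)[:, 0]
res = tf.gather(c1, idx)

with tf.Session() as sess:
    print(sess.run(res))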
Looking at the answers to this question: How to understand numpy's combined slicing and indexing example
I'm still unable to understand the result of indexing with a combination of a slice and two 1d arrays, like this:
>>> m = np.arange(36).reshape(3, 3, 4)
>>> m
array([[[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]],

       [[12, 13, 14, 15],
        [16, 17, 18, 19],
        [20, 21, 22, 23]],

       [[24, 25, 26, 27],
        [28, 29, 30, 31],
        [32, 33, 34, 35]]])
>>> m[1:3, [2, 1], [2, 1]]
array([[22, 17],
       [34, 29]])
Why is the result equivalent to this?
np.array([[m[1, 2, 2], m[1, 1, 1]],
          [m[2, 2, 2], m[2, 1, 1]]])
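For context, a minimal sketch of the broadcasting at work, assuming NumPy's standard advanced-indexing rules: the two 1-D index arrays broadcast together to shape (2,), and that pair-selection is applied once for each row kept by the slice 1:3, so the sliced axis and the broadcast index axis become the result's two dimensions:

import numpy as np

m = np.arange(36).reshape(3, 3, 4)

# The paired index arrays select (axis1, axis2) = (2, 2) and (1, 1);
# the slice 1:3 keeps axis 0, so result[i, j] = m[(1, 2)[i], (2, 1)[j], (2, 1)[j]].
rows = [1, 2]                  # from the slice 1:3
pairs = [(2, 2), (1, 1)]       # zipped from [2, 1] and [2, 1]
manual = np.array([[m[r, a, b] for (a, b) in pairs] for r in rows])

assert (manual == m[1:3, [2, 1], [2, 1]]).all()
print(manual)
# [[22 17]
#  [34 29]]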
PyTorch doesn't seem to have documentation for tensor.stride().
Can someone confirm my understanding?
My questions are three-fold.
Stride is for accessing an element in the storage, so the stride tuple has one entry per tensor dimension. Correct?
For each dimension, the corresponding stride element tells how many positions to move in the underlying 1-dimensional storage to advance one step along that dimension. Correct?
For example:
In [15]: x = torch.arange(1, 25)

In [16]: x
Out[16]:
tensor([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
        19, 20, 21, 22, 23, 24])

In [17]: a = x.view(4, 3, 2)

In [18]: a
Out[18]:
tensor([[[ 1,  2],
         [ 3,  4],
         [ 5,  6]],

        [[ 7,  8],
         [ 9, 10],
         [11, 12]],

        [[13, 14],
         [15, 16],
         [17, 18]],

        [[19, 20],
         [21, 22],
         [23, 24]]])

In [20]: a.stride()
Out[20]: (6, 2, 1)
How does having this information help perform tensor operations efficiently? Basically this is showing the memory layout. So how does it help?
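To make the second point concrete, a small sketch (assuming a contiguous tensor, so a and x share one flat storage buffer) that recomputes an element's storage offset from its indices and the stride tuple:

import torch

x = torch.arange(1, 25)
a = x.view(4, 3, 2)
s0, s1, s2 = a.stride()              # (6, 2, 1)

# For a contiguous tensor, element a[i, j, k] lives at flat offset
# i*s0 + j*s1 + k*s2 in the shared storage.
i, j, k = 2, 1, 0
offset = i * s0 + j * s1 + k * s2    # 2*6 + 1*2 + 0*1 = 14
assert a[i, j, k] == x[offset]       # both are 15

# Strides are also why ops like transpose are cheap: they just swap
# stride entries instead of moving any data.
print(a.transpose(0, 2).stride())    # (1, 2, 6)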
I do not understand why my slicing operation does not work. My intention is to apply the slice [::2] to each sub-array of a so that the shape of x is (3, 5), but things don't go as expected.
a = np.arange(0, 30)
a.shape = (3, -1)
x = a[:][::2]

a: array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
          [10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
          [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]])
The actual output is
x: array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
          [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]])
The desired output is
x: array([[ 0,  2,  4,  6,  8],
          [10, 12, 14, 16, 18],
          [20, 22, 24, 26, 28]])
Typo:
x = a[:, ::2]
Otherwise you select the full array along the first dimension, and then [::2] slices along that same first dimension again, not the second.
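To see the difference, a quick check:

import numpy as np

a = np.arange(30).reshape(3, 10)

# a[:][::2] chains two indexing operations on axis 0: a[:] is just a view
# of a, and [::2] then takes every other *row*, giving shape (2, 10).
print(a[:][::2].shape)   # (2, 10)

# a[:, ::2] slices axis 1 in a single operation: all rows, every other
# column, giving the intended shape (3, 5).
print(a[:, ::2])
# [[ 0  2  4  6  8]
#  [10 12 14 16 18]
#  [20 22 24 26 28]]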
X.shape == (10, 4)
y.shape == (10,)
I'd like to produce M, where each entry in M is defined as M[r, c] == X[r, y[r]]; that is, use y to index into the appropriate column of X.
How can I do this efficiently (without loops)?
M could have a single column, though eventually I need to broadcast it so that it has the same shape as X. r goes from the first row of X (0) to the last (9).
Just do:
X = np.arange(40).reshape(10, 4)
Y = np.random.randint(0, 4, 10)
M = X[range(10), Y]
For example, with these values:
In [8]: X
Out[8]:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15],
       [16, 17, 18, 19],
       [20, 21, 22, 23],
       [24, 25, 26, 27],
       [28, 29, 30, 31],
       [32, 33, 34, 35],
       [36, 37, 38, 39]])

In [9]: Y
Out[9]: array([1, 1, 3, 3, 1, 2, 2, 3, 2, 1])

In [10]: M
Out[10]: array([ 1,  5, 11, 15, 17, 22, 26, 31, 34, 37])
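Since the question mentions broadcasting M back to X's shape, a short sketch of one way to do that; np.take_along_axis (NumPy >= 1.15) is an equivalent alternative to the fancy indexing above:

import numpy as np

X = np.arange(40).reshape(10, 4)
Y = np.random.randint(0, 4, 10)

# take_along_axis wants indices with the same ndim as X, so use a
# (10, 1) column; the result keeps that column shape, which broadcasts
# cleanly back to X's shape.
M_col = np.take_along_axis(X, Y[:, None], axis=1)   # shape (10, 1)
M_full = np.broadcast_to(M_col, X.shape)            # shape (10, 4)

assert (M_col[:, 0] == X[np.arange(10), Y]).all()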