trainX.size == 43120000
trainX = trainX.reshape([-1, 28, 28, 1])
(1) Does reshape accept a list as an argument instead of a tuple?
(2) Are the following two statements equivalent?
trainX = trainX.reshape([-1, 28, 28, 1])
trainX = trainX.reshape((55000, 28, 28, 1))
Try the variations:
In [1]: np.arange(12).reshape(3,4)
Out[1]:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
In [2]: np.arange(12).reshape([3,4])
Out[2]:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
In [3]: np.arange(12).reshape((3,4))
Out[3]:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
With the reshape method, the shape can be given as separate arguments, a tuple, or a list. With the reshape function, it has to be a tuple or a list, to separate it from the first (array) argument:
In [4]: np.reshape(np.arange(12), (3,4))
Out[4]:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
And yes, one -1 can be used. The total size of the reshaped array is fixed, so that one value can be deduced from the others.
In [5]: np.arange(12).reshape(-1,4)
Out[5]:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
The method documentation has this note:
Unlike the free function numpy.reshape, this method on ndarray allows
the elements of the shape parameter to be passed in as separate arguments.
For example, a.reshape(10, 11) is equivalent to
a.reshape((10, 11)).
It's a built-in method, but its signature effectively looks like x.reshape(*shape), and it tries to be flexible as long as the values make sense.
From the numpy documentation:
newshape : int or tuple of ints
The new shape should be compatible with the original shape. If an
integer, then the result will be a 1-D array of that length. One shape
dimension can be -1. In this case, the value is inferred from the
length of the array and remaining dimensions.
So yes, -1 for one dimension is fine and your two statements are equivalent. As for the tuple requirement:
>>> import numpy as np
>>> a = np.arange(9)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8])
>>> a.reshape([3,3])
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
>>>
So apparently a list is good as well.
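To make it concrete, here is a minimal sketch (assuming trainX is the flat MNIST training array with 55000 * 28 * 28 == 43,120,000 elements, as in the question) showing that the list form with -1 and the tuple form with an explicit 55000 give the same result:
import numpy as np

# Stand-in for the flat MNIST training array from the question
trainX = np.zeros(55000 * 28 * 28, dtype=np.float32)

a = trainX.reshape([-1, 28, 28, 1])      # list form, -1 inferred as 55000
b = trainX.reshape((55000, 28, 28, 1))   # tuple form, explicit first dimension

assert a.shape == b.shape == (55000, 28, 28, 1)
assert np.shares_memory(a, trainX)       # both are views of the same data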
Related
I'd like to write a numpy function that takes an MxN array A, a window length L, and an MxP array idxs of starting indices into the M rows of A, and selects P arbitrary slices of length L from each of the M rows of A. Except I would love for this to work on the last dimension of A without caring how many dimensions A has, so that all dims of A and idxs match except the last one. Examples:
If A is just 1D:
A = np.array([1, 2, 3, 4, 5, 6])
window_len = 3
idxs = np.array([1, 3])
result = magical_routine(A, idxs, window_len)
Where result is a 2x3 array since I selected 2 slices of len 3:
np.array([[ 2, 3, 4],
[ 4, 5, 6]])
If A is 2D:
A = np.array([[ 1, 2, 3, 4, 5, 6],
[ 7, 8, 9,10,11,12],
[13,14,15,16,17,18]])
window_len = 3
idxs = np.array([[1, 3],
[0, 1],
[2, 2]])
result = magical_routine(A, idxs, window_len)
Where result is a 3x2x3 array since there are 3 rows of A, and I selected 2 slices of len 3 from each row:
np.array([[[ 2, 3, 4], [ 4, 5, 6]],
[[ 7, 8, 9], [ 8, 9,10]],
[[15,16,17], [15,16,17]]])
And so on.
I have discovered a number of inefficient ways to do this, along with ways that work for a specific number of dimensions of A. For 2D, the following is pretty tidy:
col_idxs = np.add.outer(idxs, np.arange(window_len))
np.take_along_axis(A[:, np.newaxis], col_idxs, axis=-1)
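For reference, a quick check of this 2-D version on the example above (window_len = 3):
col_idxs = np.add.outer(idxs, np.arange(window_len))            # shape (3, 2, 3)
print(np.take_along_axis(A[:, np.newaxis], col_idxs, axis=-1))  # the 3x2x3 result shown above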
I can't see a nice way to generalize this for 1D and other D's though...
Is anyone aware of an efficient way that generalizes to any number of dims?
For your 1d case
In [271]: A=np.arange(1,7)
In [272]: idxs = np.array([1,3])
Using the kind of iteration that questions like this usually get:
In [273]: np.vstack([A[i:i+3] for i in idxs])
Out[273]:
array([[2, 3, 4],
[4, 5, 6]])
Alternatively, generate all the indices up front and do a single indexing operation. linspace is handy for this (though it's not the only option):
In [278]: j = np.linspace(idxs,idxs+3,3,endpoint=False)
In [279]: j
Out[279]:
array([[1., 3.],
[2., 4.],
[3., 5.]])
In [282]: A[j.T.astype(int)]
Out[282]:
array([[2, 3, 4],
[4, 5, 6]])
For the 2d case:
In [284]: B
Out[284]:
array([[ 1, 2, 3, 4, 5, 6],
[ 7, 8, 9, 10, 11, 12],
[13, 14, 15, 16, 17, 18]])
In [285]: idxs = np.array([[1, 3],
...: [0, 1],
...: [2, 2]])
In [286]: j = np.linspace(idxs,idxs+3,3,endpoint=False)
In [287]: j
Out[287]:
array([[[1., 3.],
[0., 1.],
[2., 2.]],
[[2., 4.],
[1., 2.],
[3., 3.]],
[[3., 5.],
[2., 3.],
[4., 4.]]])
With a bit of trial and error, pair up the indices to get:
In [292]: B[np.arange(3)[:,None,None],j.astype(int).transpose(1,2,0)]
Out[292]:
array([[[ 2, 3, 4],
[ 4, 5, 6]],
[[ 7, 8, 9],
[ 8, 9, 10]],
[[15, 16, 17],
[15, 16, 17]]])
Or iterate as in the first case, but with an extra layer:
In [294]: np.array([[B[j,i:i+3] for i in idxs[j]] for j in range(3)])
Out[294]:
array([[[ 2, 3, 4],
[ 4, 5, 6]],
[[ 7, 8, 9],
[ 8, 9, 10]],
[[15, 16, 17],
[15, 16, 17]]])
With sliding windows:
In [295]: aa = np.lib.stride_tricks.sliding_window_view(A,3)
In [296]: aa.shape
Out[296]: (4, 3)
In [297]: aa
Out[297]:
array([[1, 2, 3],
[2, 3, 4],
[3, 4, 5],
[4, 5, 6]])
In [298]: aa[[1,3]]
Out[298]:
array([[2, 3, 4],
[4, 5, 6]])
and for the 2d case:
In [300]: bb = np.lib.stride_tricks.sliding_window_view(B,(1,3))
In [301]: bb.shape
Out[301]: (3, 4, 1, 3)
In [302]: bb[np.arange(3)[:,None],idxs,0,:]
Out[302]:
array([[[ 2, 3, 4],
[ 4, 5, 6]],
[[ 7, 8, 9],
[ 8, 9, 10]],
[[15, 16, 17],
[15, 16, 17]]])
I got it! I was almost there:
def magical_routine(A, idxs, window_len=2000):
    col_idxs = np.add.outer(idxs, np.arange(window_len))
    return np.take_along_axis(A[..., np.newaxis, :], col_idxs, axis=-1)
I just needed to always add the new axis at A's second-to-last dim, and leave the remaining axes alone.
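A quick sanity check of this on the 1-D and 2-D examples from the question (using window_len=3 rather than the 2000 default):
A1 = np.array([1, 2, 3, 4, 5, 6])
print(magical_routine(A1, np.array([1, 3]), window_len=3))
# [[2 3 4]
#  [4 5 6]]

A2 = np.array([[ 1,  2,  3,  4,  5,  6],
               [ 7,  8,  9, 10, 11, 12],
               [13, 14, 15, 16, 17, 18]])
idxs2 = np.array([[1, 3], [0, 1], [2, 2]])
print(magical_routine(A2, idxs2, window_len=3).shape)   # (3, 2, 3)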
I have a 2D numpy array
a=np.array([[1,2,3,4,5,6],
[7,8,9,10,11,12],
[13,14,15,16,17,18]])
and am trying to convert it into a 3D array of shape (3,3,2), i.e.,
np.array([[ 1,2,3],
[7,8,9],
[13,14,15]])
in the 3rd dimension with index 1, and
np.array([[4,5,6],
[10,11,12],
[16,17,18]])
in the 3rd dimension with index 2.
I tried a.reshape(3,2,3) and got this:
array([[[ 1, 2, 3],
[ 4, 5, 6]],
[[ 7, 8, 9],
[10, 11, 12]],
[[13, 14, 15],
[16, 17, 18]]])
Any suggestions to convert this?
Use swapaxes:
a.reshape(3,2,3).swapaxes(0,1)
output:
array([[[ 1, 2, 3],
[ 7, 8, 9],
[13, 14, 15]],
[[ 4, 5, 6],
[10, 11, 12],
[16, 17, 18]]])
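For this particular rearrangement, np.moveaxis (or an explicit transpose) is an equivalent alternative to the swapaxes call, as a sketch:
np.moveaxis(a.reshape(3,2,3), 1, 0)      # same (2, 3, 3) result
a.reshape(3,2,3).transpose(1, 0, 2)      # likewise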
For example, you have array
a = np.array([[[ 0, 1, 2],
[ 3, 4, 5]],
[[ 6, 7, 8],
[ 9, 10, 11]]])
We want to iterate through the slices along the last dimension, i.e. [0,1,2], [3,4,5], [6,7,8], [9,10,11]. Any way to achieve this without the for loop? Thanks!
I tried this, but it does not work, because numpy does not interpret the tuple the way we wanted: a[(0, 0),:] is not the same as a[0, 0, :]
from itertools import product
[a[i,:] for i in zip(*product(*(range(ii) for ii in a.shape[:-1])))]
More generally, any way for the last k dimensions? Something equivalent to looping through a[i,j,k, ...].
In [26]: a = np.array([[[ 0, 1, 2],
...: [ 3, 4, 5]],
...:
...: [[ 6, 7, 8],
...: [ 9, 10, 11]]])
In [27]: [a[i,j,:] for i in range(2) for j in range(2)]
Out[27]: [array([0, 1, 2]), array([3, 4, 5]), array([6, 7, 8]), array([ 9, 10, 11])]
or
In [31]: list(np.ndindex(2,2))
Out[31]: [(0, 0), (0, 1), (1, 0), (1, 1)]
In [32]: [a[i,j] for i,j in np.ndindex(2,2)]
Another option:
list(a.reshape(-1,3))
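For the "last k dimensions" generalization asked about above, a sketch along the same lines: flatten all the leading axes with reshape, or enumerate them with np.ndindex (here k is the number of trailing dimensions to keep intact):
k = 1                                                   # trailing dims to keep
slices = list(a.reshape(-1, *a.shape[-k:]))             # flatten all leading axes
same = [a[idx] for idx in np.ndindex(*a.shape[:-k])]    # same slices via explicit indices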
For example, I have a tensor of shape [30, 6, 6, 3]: 30 is the batch_size, 6x6 is height x width, and 3 is the number of channels.
How could I rearrange its elements from every 3x3 block to 1x9, like pixels in MATLAB, as the picture described?
tf.reshape() alone seems unworkable.
You can do these kinds of transformations using a combination of transpose and reshape. The numpy and TensorFlow logic is the same, so here's a simpler example using numpy. Suppose you have a 4x4 array and want to split it into 4 sub-arrays by skipping rows/columns, as in your example.
I.e., starting with
a = np.array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15]])
You want to obtain 4 sub-images like
[0, 2]
[8, 10]
and
[1, 3]
[9, 11]
etc
First you can generate subarrays by stepping over the columns:
b = a.reshape((4,2,2)).transpose([2,0,1])
This generates the following array
array([[[ 0, 2],
[ 4, 6],
[ 8, 10],
[12, 14]],
[[ 1, 3],
[ 5, 7],
[ 9, 11],
[13, 15]]])
Now you skip the rows
c = b.reshape([2,2,2,2]).transpose(2,0,1,3)
This generates the following array:
array([[[[ 0, 2],
[ 8, 10]],
[[ 1, 3],
[ 9, 11]]],
[[[ 4, 6],
[12, 14]],
[[ 5, 7],
[13, 15]]]])
Now notice that you have the desired subarrays, but the leftmost shape is 2x2 while you want 4, so you reshape:
c.reshape([4,2,2])
which gives you
array([[[ 0, 2],
[ 8, 10]],
[[ 1, 3],
[ 9, 11]],
[[ 4, 6],
[12, 14]],
[[ 5, 7],
[13, 15]]])
Note that the general technique for combining dimensions of sizes n and m into a single dimension of size n*m is to do reshape(n*m, ...). Because of row-major order, the dimensions to flatten must be on the left for reshape to work as a flattening operation. So if in your example the channels are the last dimension, you will need to transpose them to the left, flatten (using reshape), and then transpose them back.
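As a sketch of that last point (with a hypothetical array matching the question's [30, 6, 6, 3] shape): move the channel axis to the left, flatten it together with the batch axis via reshape, and undo it with the inverse transpose:
x = np.arange(30*6*6*3).reshape(30, 6, 6, 3)       # batch, height, width, channels
y = x.transpose(3, 0, 1, 2)                        # channels moved to the left: (3, 30, 6, 6)
y = y.reshape(3*30, 6, 6)                          # the two leftmost dims flattened together
z = y.reshape(3, 30, 6, 6).transpose(1, 2, 3, 0)   # inverse: split, move channels back
assert (z == x).all()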
Consider a 3D tensor T of shape (w x h x d).
The goal is to create a tensor R of shape (w x h x K), where K = d x k, by tiling along the 3rd dimension in a particular way.
The tensor should repeat each slice along the 3rd dimension k times, meaning:
T[:,:,0] = R[:,:,0:k] and T[:,:,1] = R[:,:,k:2*k] (each of those k slices of R equals the corresponding slice of T).
There's a subtle difference from standard tiling, which gives T[:,:,0] = R[:,:,::k], i.e. repeats at every k-th position in the 3rd dimension.
Use np.repeat along that axis -
np.repeat(T,k,axis=2)
Sample run -
In [688]: # Setup
...: w,h,d = 2,3,4
...: k = 2
...: T = np.random.randint(0,9,(w,h,d))
...:
...: # Original approach
...: R = np.zeros((w,h,d*k),dtype=T.dtype)
...: for i in range(4):
...:     R[:,:,i*k:(i+1)*k] = T[:,:,i][...,None]
...:
In [692]: T
Out[692]:
array([[[4, 5, 6, 4],
[5, 4, 4, 3],
[8, 0, 0, 8]],
[[7, 3, 8, 0],
[8, 7, 0, 8],
[3, 6, 8, 5]]])
In [690]: R
Out[690]:
array([[[4, 4, 5, 5, 6, 6, 4, 4],
[5, 5, 4, 4, 4, 4, 3, 3],
[8, 8, 0, 0, 0, 0, 8, 8]],
[[7, 7, 3, 3, 8, 8, 0, 0],
[8, 8, 7, 7, 0, 0, 8, 8],
[3, 3, 6, 6, 8, 8, 5, 5]]])
In [691]: np.allclose(R, np.repeat(T,k,axis=2))
Out[691]: True
Alternatively with np.tile and reshape -
np.tile(T[...,None],k).reshape(w,h,-1)
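A small check (a sketch, reusing the w, h, d, k setup from the sample run above) that the tile-based expression matches np.repeat:
w, h, d, k = 2, 3, 4, 2
T = np.random.randint(0, 9, (w, h, d))
assert np.array_equal(np.repeat(T, k, axis=2),
                      np.tile(T[..., None], k).reshape(w, h, -1))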