How to multiply rows of list by another list? - numpy

I have two arrays (l1, l2) with arbitrary dimensions, say l1.shape = (3,2) and l2.shape = (2,2,2):
l1 = np.array([[1, 10],
               [2, 20],
               [3, 30]])
l2 = np.array([[[1, 2],
                [3, 4]],
               [[5, 6],
                [7, 8]]])
I want to multiply each row of l1 by the entire l2 and save each result, creating a new array (l3) with an extra dimension, so l3.shape = (3,2,2,2). So I expect:
l3 = np.array([[[[1, 20],
                 [3, 40]],
                [[5, 60],
                 [7, 80]]],
               [[[2, 40],
                 [6, 80]],
                [[10, 120],
                 [14, 160]]],
               [[[3, 60],
                 [9, 120]],
                [[15, 180],
                 [21, 240]]]])
I have tried multiplying the rows separately, relying on the fact that numpy broadcasts arrays so their shapes become compatible. However, I haven't been able to do it for arrays of arbitrary size, and this is important for my application.
I have tried:
l3 = l1[0,:] * l2
array([[[ 1, 20],
        [ 3, 40]],
       [[ 5, 60],
        [ 7, 80]]])
But I haven't been able to build the index into l1 automatically in a general way, so that the number of dimensions of both arrays doesn't matter.

Try the following:
(l1[:,:,None,None]*l2.T).swapaxes(1,3)
This should give you the desired output.
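A more general variant, assuming only that l1 is 2-D and that its last axis matches the last axis of l2, is to insert l2.ndim - 1 new axes into l1 so the two shapes broadcast no matter how many dimensions l2 has. This is a sketch of that idea, not part of the original answer:
import numpy as np
l1 = np.array([[1, 10], [2, 20], [3, 30]])
l2 = np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
# keep l1's row axis, add l2.ndim - 1 new axes, and keep l1's last axis
# so it lines up with l2's last axis under broadcasting
idx = (slice(None),) + (None,) * (l2.ndim - 1) + (slice(None),)
l3 = l1[idx] * l2   # shape (3, 2, 2, 2)
Each row of l1 is then broadcast against the whole of l2, whatever l2.ndim happens to be.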

Related

Algorithms of Joining arrays in numpy

I'm new to numpy. I understand the "joining arrays" methods for lower-dimensional shapes such as (n1, n2), because we can visualize them like a matrix.
But I don't understand the logic in higher dimensions (n0, ..., n_{d-1}), which of course I can't visualize. To visualize, I usually imagine a multidimensional array as a tree, so (n0, ..., n_{d-1}) means that at level (axis) i of the tree every node has n_i children. So at level 0 (the root) we have n0 children, and so on.
In substance, what is the formal, exact definition of the "joining arrays" algorithms?
https://numpy.org/doc/stable/reference/routines.array-manipulation.html
Let's see if I can illustrate some basic array operations.
First make a 2d array. Start with a 1d array, [0,1,...,5], and reshape it to (2,3):
In [1]: x = np.arange(6).reshape(2,3)
In [2]: x
Out[2]:
array([[0, 1, 2],
       [3, 4, 5]])
I can join 2 copies of x along the 1st dimension (vstack, v for vertical, also does this):
In [3]: np.concatenate([x,x], axis=0)
Out[3]:
array([[0, 1, 2],
       [3, 4, 5],
       [0, 1, 2],
       [3, 4, 5]])
Note that the result is (4,3); no new dimension.
Or join them 'horizontally':
In [4]: np.concatenate([x,x], axis=1)
Out[4]:
array([[0, 1, 2, 0, 1, 2],    # (2,6) shape
       [3, 4, 5, 3, 4, 5]])
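As a side note, hstack and vstack are just thin wrappers around these two concatenate calls for 2d inputs; a quick check of that equivalence, added here as a sketch:
import numpy as np
x = np.arange(6).reshape(2, 3)
# vstack joins 2d arrays on axis 0, hstack on axis 1
print(np.array_equal(np.vstack([x, x]), np.concatenate([x, x], axis=0)))   # True
print(np.array_equal(np.hstack([x, x]), np.concatenate([x, x], axis=1)))   # True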
But if I supply them to np.array I make a 3d array (2,2,3) shape:
In [5]: np.array([x,x])
Out[5]:
array([[[0, 1, 2],
        [3, 4, 5]],
       [[0, 1, 2],
        [3, 4, 5]]])
This action of np.array is really no different from making a 2d array from nested lists, np.array([[1,2],[3,4]]). We could just add a layer of nesting, just like Out[5] without the line breaks. I tend to think of this 3d array as having 2 blocks, each with 2 rows and 3 columns. But the names are just a convenience.
stack acts like np.array, making a 3d array. It actually changes the input arrays to (1,2,3) shape, and concatenates on the first axis.
In [6]: np.stack([x,x])
Out[6]:
array([[[0, 1, 2],
        [3, 4, 5]],
       [[0, 1, 2],
        [3, 4, 5]]])
stack lets us join the arrays in other ways:
In [7]: np.stack([x,x], axis=1)   # expand to (2,1,3) and concatenate
Out[7]:
array([[[0, 1, 2],
        [0, 1, 2]],
       [[3, 4, 5],
        [3, 4, 5]]])
In [8]: np.stack([x,x], axis=2) # expand to (2,3,1) and concatenate
Out[8]:
array([[[0, 0],
        [1, 1],
        [2, 2]],
       [[3, 3],
        [4, 4],
        [5, 5]]])
concatenate and the other stack functions don't add anything new to basic numpy arrays. They just provide ways of making a new array from existing ones. There aren't any special algorithms.
If it helps, you can think of these join functions as creating a new "blank" array and filling it with copies of the source arrays. For example, that last stack can be done with:
In [9]: res = np.zeros((2,3,2), int)
In [10]: res
Out[10]:
array([[[0, 0],
        [0, 0],
        [0, 0]],
       [[0, 0],
        [0, 0],
        [0, 0]]])
In [11]: res[:,:,0] = x
In [12]: res[:,:,1] = x
In [13]: res
Out[13]:
array([[[0, 0],
        [1, 1],
        [2, 2]],
       [[3, 3],
        [4, 4],
        [5, 5]]])
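And indeed the filled-in result is exactly what np.stack([x,x], axis=2) produced above; a quick self-contained check (my addition):
import numpy as np
x = np.arange(6).reshape(2, 3)
res = np.zeros((2, 3, 2), int)
res[:, :, 0] = x
res[:, :, 1] = x
# same array as joining the two copies along a new last axis
print(np.array_equal(res, np.stack([x, x], axis=2)))   # True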

Difference between : and , in numpy

Some resources have mentioned that in numpy's array slicing, array[2,:,1] gives the same result as array[2][:][1], but I do not get the same result in this case:
array3d = np.array([[[1, 2], [3, 4]],[[5, 6], [7, 8]], [[9, 10], [11, 12]]])
array3d[2,:,1]
out: array([10, 12])
and:
array3d[2][:][1]
out: array([11, 12])
What is the difference?
Those resources are wrong!
In [1]: array3d = np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]])
In [2]: array3d
Out[2]:
array([[[ 1,  2],
        [ 3,  4]],
       [[ 5,  6],
        [ 7,  8]],
       [[ 9, 10],
        [11, 12]]])
When the indices are all scalar this kind of decomposition works:
In [3]: array3d[2,0,1]
Out[3]: 10
In [4]: array3d[2][0][1]
Out[4]: 10
One index reduces the dimension, picking one 'plane':
In [5]: array3d[2]
Out[5]:
array([[ 9, 10],
       [11, 12]])
[:] on that does nothing - it is not a placeholder by itself. Within a multidimensional index it is a slice - the whole thing in that dimension. We see the same behavior with lists: alist[2] returns an element, alist[:] returns a copy of the whole list.
In [6]: array3d[2][:]
Out[6]:
array([[ 9, 10],
       [11, 12]])
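For comparison, the same pattern with a plain Python list (my illustration, not part of the original answer):
alist = [[9, 10], [11, 12]]
alist[:]       # a copy of the whole list: [[9, 10], [11, 12]]
alist[:][1]    # so this is just alist[1]: [11, 12]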
Remember, numpy is a python package. Python syntax still applies at all levels. x[a][b][c] does 3 indexing operations in sequence, 'chaining' them. x[a,b,c] is one indexing operation, passing the tuple (a, b, c) to x. It's numpy code that interprets that tuple.
We have to use a multidimensional index on the remaining dimensions:
In [7]: array3d[2][:,1]
Out[7]: array([10, 12])
In [8]: array3d[2,:,1]
Out[8]: array([10, 12])
The interpreter actually does:
In [9]: array3d.__getitem__((2,slice(None),1))
Out[9]: array([10, 12])
In [11]: array3d.__getitem__(2).__getitem__((slice(None),1))
Out[11]: array([10, 12])

delete more than one row at a time numpy array python

I want to delete all rows after the second row. However, when I try to apply the following code, the function deletes only the third and the 5th rows and keeps the fourth. Any idea on how to improve this without using a loop?
from numpy import array, concatenate, delete

arr1 = array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
arr2 = array([[10, 11, 12], [13, 14, 15]])
arr1 = concatenate((arr1, arr2), axis=0)
print(arr1)
print(delete(arr1, (2, 4), axis=0))
If the data you want to delete is contiguous (like in your example), using numpy's array indexing is arguably the easiest way to achieve what you want.
import numpy as np
arr1 = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
arr2 = np.array([[10, 11, 12], [13, 14, 15]])
arr3 = np.r_[arr1, arr2]
# First dimension corresponds to rows, second dimension corresponds to columns.
print(arr3[:2, :])
You can just pass a tuple as the second argument.
>>> from numpy import array, delete
>>> Arr = array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> delete(Arr, (0, 2), axis=0)
array([[4, 5, 6]])
Btw, delete(Arr, (3), axis=0) does not work in your example, since the maximum index for this array is 2.
Concerning your edit, the error is the same as above: you are using an index (3 or 5) which does not correspond to an actual index of the array. Arr[3] or Arr[5] does not make any sense for an array of shape (3,3).
I used the following code and it works well:
from numpy import array, concatenate, delete

arr1 = array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
arr2 = array([[10, 11, 12], [13, 14, 15]])
arr1 = concatenate((arr1, arr2), axis=0)
print(arr1)
print(delete(arr1, range(2, 5), axis=0))
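Since the rows being dropped form a contiguous block, np.delete also accepts a slice object, which avoids spelling out the range; a small sketch of that alternative (my addition):
import numpy as np
arr = np.arange(15).reshape(5, 3)
# drop every row from index 2 onwards in one call
print(np.delete(arr, np.s_[2:], axis=0))   # keeps only the first two rows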

Delete specified column index from every row of 2d numpy array

I have a numpy array A as follows:
array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]])
and another numpy array column_indices_to_be_deleted as follows:
array([1, 0, 2])
I want to delete the element from every row of A specified by the column indices in column_indices_to_be_deleted. So, column index 1 from row 0, column index 0 from row 1 and column index 2 from row 2 in this case, to get a new array that looks like this:
array([[1, 3],
       [5, 6],
       [7, 8]])
What would be the simplest way of doing that?
One way, with a mask created by broadcasted comparison -
In [43]: a # input array
Out[43]:
array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]])
In [44]: remove_idx # indices to be removed from each row
Out[44]: array([1, 0, 2])
In [45]: n = a.shape[1]
In [46]: a[remove_idx[:,None]!=np.arange(n)].reshape(-1,n-1)
Out[46]:
array([[1, 3],
       [5, 6],
       [7, 8]])
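To see what that broadcasted comparison builds, here is the intermediate boolean mask, worked through outside the In/Out session as a small sketch (my addition):
import numpy as np
a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
remove_idx = np.array([1, 0, 2])
n = a.shape[1]
# True wherever the column index differs from the one to remove in that row
mask = remove_idx[:, None] != np.arange(n)
print(mask)
# [[ True False  True]
#  [False  True  True]
#  [ True  True False]]
print(a[mask].reshape(-1, n - 1))   # same 3x2 result as above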
Another mask-based approach, with the mask created by array assignment -
In [47]: mask = np.ones(a.shape,dtype=bool)
In [48]: mask[np.arange(len(remove_idx)), remove_idx] = 0
In [49]: a[mask].reshape(-1,a.shape[1]-1)
Out[49]:
array([[1, 3],
       [5, 6],
       [7, 8]])
Another with np.delete -
In [64]: m,n = a.shape
In [66]: np.delete(a.flat,remove_idx+n*np.arange(m)).reshape(m,-1)
Out[66]:
array([[1, 3],
       [5, 6],
       [7, 8]])
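For reference, the flat-index trick works because remove_idx + n*np.arange(m) converts each (row, column) pair into its offset in the flattened array; a quick check (my addition):
import numpy as np
a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
remove_idx = np.array([1, 0, 2])
m, n = a.shape
flat = remove_idx + n * np.arange(m)   # row i contributes index i*n + remove_idx[i]
print(a.flat[flat])                    # [2 4 9] -- exactly the elements being deleted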

how to rearrange elements in a tensor, like in MATLAB?

For example, I have a tensor of shape [30, 6, 6, 3]: 30 is the batch_size, 6x6 is height x width, and 3 is the channels.
How could I rearrange its elements from every 3x3 block to 1x9, like pixels in MATLAB, as the picture describes?
tf.reshape() on its own seems unworkable.
You can do these kinds of transformations by using a combination of transpose and reshape. Numpy and TensorFlow logic is the same, so here's a simpler example using numpy. Suppose you have a 4x4 array and want to split it into 4 sub-arrays by skipping rows/columns, like in your example.
I.e., starting with
a = array([[ 0,  1,  2,  3],
           [ 4,  5,  6,  7],
           [ 8,  9, 10, 11],
           [12, 13, 14, 15]])
You want to obtain 4 sub-images like
[0, 2]
[8, 10]
and
[1, 3]
[9, 11]
etc
First you can generate subarrays by stepping over columns
b = a.reshape((4,2,2)).transpose([2,0,1])
This generates the following array
array([[[ 0,  2],
        [ 4,  6],
        [ 8, 10],
        [12, 14]],
       [[ 1,  3],
        [ 5,  7],
        [ 9, 11],
        [13, 15]]])
Now you skip the rows
c = b.reshape([2,2,2,2]).transpose(2,0,1,3)
This generates the following array:
array([[[[ 0,  2],
         [ 8, 10]],
        [[ 1,  3],
         [ 9, 11]]],
       [[[ 4,  6],
         [12, 14]],
        [[ 5,  7],
         [13, 15]]]])
Now notice that you have the desired subarrays, but the leading shape is 2x2 while you want 4, so you reshape:
c.reshape([4,2,2])
which gives you
array([[[ 0,  2],
        [ 8, 10]],
       [[ 1,  3],
        [ 9, 11]],
       [[ 4,  6],
        [12, 14]],
       [[ 5,  7],
        [13, 15]]])
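Putting those three steps together, the whole rearrangement is a single chain of reshape/transpose calls; a recap sketch using the same 4x4 example (my addition, not part of the original answer):
import numpy as np
a = np.arange(16).reshape(4, 4)
# the three steps above, chained into one expression
out = (a.reshape(4, 2, 2)
        .transpose(2, 0, 1)      # step over columns
        .reshape(2, 2, 2, 2)
        .transpose(2, 0, 1, 3)   # step over rows
        .reshape(4, 2, 2))       # merge the leading 2x2 into 4
print(out[0])   # [[ 0  2]
                #  [ 8 10]]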
Note that the general technique for combining dimensions n and m into a single dimension of size n*m is to do reshape(n*m, ...). Because of row-major order, the dimensions to flatten must be on the left for reshape to work as a flattening operation. So if in your example the channels are the last dimension, you will need to transpose them to the left, flatten (using reshape), and then transpose them back.