Slice 3D ndarray with 2D ndarray in numpy?

My apologies if this has been answered many times, but I just can't find a solution.
Assume the following code:
import numpy as np
A, _, _ = np.meshgrid(np.arange(5), np.arange(7), np.arange(10))
B = (np.random.rand(7, 10) * 5).astype(int)
How can I slice A using B, so that B spans the first and last dimensions of A while its values index the middle one (i.e., A[magic] equals B)?
I have tried
A[:,B,:] which doesn't work due to peculiarities of advanced indexing.
A[:,B,np.arange(10)] generates 7 copies of the matrix I'm after
A[np.arange(7),B,np.arange(10)] gives the error:
ValueError: shape mismatch: objects cannot be broadcast to a single shape
Any other suggestions?

These both work:
A[0, B, 0]
A[B, B, B]
Really, only the B in axis 1 matters; the others can be any arrays that broadcast to B.shape, with values limited by A.shape[0] (for axis 0) and A.shape[2] (for axis 2). For a ridiculous example:
A[list(range(7)) + list(range(3)), B, list(range(9, -1, -1))]
But you don't want to use : because then you'll get, as you said, 7 or 10 (or both!) "copies" of the array you want.
A, _, _ = np.meshgrid(np.arange(5), np.arange(7), np.arange(10))
B = (np.random.rand(7, 10) * A.shape[1]).astype(int)
np.allclose(B, A[0, B, 0])
#True
np.allclose(B, A[B, B, B])
#True
np.allclose(B, A[list(range(7)) + list(range(3)), B, list(range(9, -1, -1))])
#True
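For completeness, a minimal sketch of the broadcast form the question's A[np.arange(7), B, np.arange(10)] attempt was reaching for: reshape the two ranges so they broadcast against B.shape instead of clashing with it.
import numpy as np

A, _, _ = np.meshgrid(np.arange(5), np.arange(7), np.arange(10))
B = (np.random.rand(7, 10) * A.shape[1]).astype(int)

rows = np.arange(7)[:, None]   # shape (7, 1), indexes axis 0
cols = np.arange(10)[None, :]  # shape (1, 10), indexes axis 2
np.allclose(B, A[rows, B, cols])
# True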

Related

Expand a dimension of 3-dimensional array into a diagonal matrix with vectorized computations

I have np.ndarray A of shape (N, M, D).
I'd like to create np.ndarray B of shape (N, M, D, D) such that for every pair of fixed indices n, m along axes 0 and 1
B[n, m] = np.diag(A[n, m])
I understand how to solve this problem using loops, yet I'd like to write code that performs this in a vectorized manner. How can this be done with numpy?
import numpy as np
A = ... # Your array here
n, m, d = A.shape
indices = np.arange(d)
B = np.zeros((n, m, d, d))
# Indexing with (indices, indices) addresses the diagonal of the last two
# axes, so each vector A[n, m] lands on the diagonal of B[n, m].
B[:, :, indices, indices] = A
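A quick sanity check of the vectorized result against the loop formulation, with illustrative shapes:
import numpy as np

A = np.random.rand(2, 3, 4)  # illustrative (N, M, D)
n, m, d = A.shape
indices = np.arange(d)
B = np.zeros((n, m, d, d))
B[:, :, indices, indices] = A

# Every (n, m) slice of B should be the diagonal matrix built from A[n, m]
for i in range(n):
    for j in range(m):
        assert np.allclose(B[i, j], np.diag(A[i, j]))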

How to find common members of matrices in Numpy

I have a 2D matrix A and a vector B. I want to find all row indices of elements in A that are also contained in B.
A = np.array([[1,9,5], [8,4,9], [4,9,3], [6,7,5]], dtype=int)
B = np.array([2, 4, 8, 10, 12, 18], dtype=int)
My current solution is only to compare A to one element of B at a time but that is horribly slow:
res = np.array([], dtype=int)
for i in range(B.shape[0]):
    cres, _ = (B[i] == A).nonzero()
    res = np.append(res, cres)
res = np.unique(res)
The following Matlab statement would solve my issue:
find(any(reshape(any(reshape(A, prod(size(A)), 1) == B, 2),size(A, 1),size(A, 2)), 2))
However, comparing a row and a column vector in Numpy does not create a Boolean intersection matrix as it does in Matlab.
Is there a proper way to do this in Numpy?
We can use np.isin masking.
To get all the row numbers, it would be -
np.where(np.isin(A,B).T)[1]
If you need them split based on each element's occurrence -
[np.flatnonzero(i) for i in np.isin(A,B).T if i.any()]
Posted MATLAB code seems to be doing broadcasting. So, an equivalent one would be -
np.where(B[:,None,None]==A)[1]
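A quick demo with the posted A and B:
import numpy as np

A = np.array([[1, 9, 5], [8, 4, 9], [4, 9, 3], [6, 7, 5]], dtype=int)
B = np.array([2, 4, 8, 10, 12, 18], dtype=int)

np.where(np.isin(A, B).T)[1]
# array([1, 2, 1]) -> rows 1 and 2 of A contain members of B

np.unique(np.where(np.isin(A, B).T)[1])
# array([1, 2])    -> the distinct matching row indices

np.where(B[:, None, None] == A)[1]
# array([1, 2, 1]) -> same rows via explicit broadcasting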

How to create a new array of tensors from old one

I have a tensor [a, b, c, d, e, f, g, h, i] with dimensions 9 x 1536. I need to create a new tensor like [(a,b), (a,c), (a,d), (a,e), (a,f), (a,g), (a,h), (a,i)] with dimensions [8 x 2 x 1536]. How can I do it with TensorFlow?
I tried like this
x = tf.zeros((9, 1536))
x_new = tf.stack([(x[0], x[1]),
                  (x[0], x[2]),
                  (x[0], x[3]),
                  (x[0], x[4]),
                  (x[0], x[5]),
                  (x[0], x[6]),
                  (x[0], x[7]),
                  (x[0], x[8])])
This seems to work, but I would like to know if there is a better solution or approach that could be used instead.
You can obtain the desired output with a combination of tf.concat, tf.tile and tf.expand_dims:
import tensorflow as tf
import numpy as np
_in = tf.constant(np.random.randint(0,10,(9,1536)))
tile_shape = [(_in.shape[0]-1).value] + [1]*len(_in.shape[1:].as_list())
_out = tf.concat(
    [
        tf.expand_dims(tf.tile([_in[0]], tile_shape), 1),
        tf.expand_dims(_in[1:], 1),
    ],
    1,
)
tf.tile repeats the first element of _in, creating a tensor of length len(_in)-1 (the tile shape is computed separately because we want to tile only along the first dimension).
tf.expand_dims adds a dimension we can then concatenate along.
Finally, tf.concat stitches together the two tensors giving the desired result.
EDIT: Rewrote to fit the OP's actual use-case with multidimensional tensors.
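If you are on TF 2.x (an assumption; the answer above uses the TF 1.x shape API), a shorter sketch with tf.repeat and tf.stack would be:
import tensorflow as tf

x = tf.random.uniform((9, 1536))
# Repeat row 0 once per remaining row, then pair it with each of them
first = tf.repeat(x[:1], repeats=tf.shape(x)[0] - 1, axis=0)  # (8, 1536)
out = tf.stack([first, x[1:]], axis=1)                        # (8, 2, 1536)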

Python MemoryError on scipy stats; scipy linalg lstsq vs. manual beta

Not sure if this question belongs here or on Cross Validated, but since the primary issue is programming-language related, I am posting it here.
Inputs:
Y= big 2D numpy array (300000,30)
X= 1D array (30,)
Desired Output:
B = 1D array (300000,), each element of which is the regression coefficient from regressing the corresponding row of Y (of length 30) against X
So B[0] = scipy.stats.linregress(X,Y[0])[0]
I tried this first:
B = scipy.stats.linregress(X,Y)[0]
hoping that it would broadcast X according to the shape of Y. Next, I broadcast X myself to match the shape of Y. But on both occasions, I got this error:
File "C:\...\scipy\stats\stats.py", line 3011, in linregress
ssxm, ssxym, ssyxm, ssym = np.cov(x, y, bias=1).flat
File "C:\...\numpy\lib\function_base.py", line 1766, in cov
return (dot(X, X.T.conj()) / fact).squeeze()
MemoryError
I used a manual approach to calculate beta, and on Sascha's suggestion below also used scipy.linalg.lstsq, as follows:
B = lstsq(Y.T, X)[0]  # first estimate of beta
Y1 = Y - Y.mean(1)[:, None]
X1 = X - X.mean()
B1 = np.dot(Y1, X1) / np.dot(X1, X1)  # second estimate of beta
The two estimates of beta are very different, however:
>>> B1
Out[10]: array([0.135623, 0.028919, -0.106278, ..., -0.467340, -0.549543, -0.498500])
>>> B
Out[11]: array([0.000014, -0.000073, -0.000058, ..., 0.000002, -0.000000, 0.000001])
Scipy's linregress outputs slope + intercept, which define the regression line.
If you want to access the coefficients naturally, scipy's lstsq might be more appropriate, which is an equivalent formulation.
Of course, you need to feed it the correct dimensions (your data is not ready; it needs preprocessing; swap the dims).
Code
import numpy as np
from scipy.linalg import lstsq
Y = np.random.random((300000,30))
X = np.random.random(30)
x, res, rank, s = lstsq(Y.T, X) # Y transposed!
print(x)
print(x.shape)
Output
[ 1.73122781e-05 2.70274135e-05 9.80840639e-06 ..., -1.84597771e-05
5.25035470e-07 2.41275026e-05]
(300000,)
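Side note: the OP's manual broadcast formula does reproduce per-row scipy.stats.linregress slopes. A minimal check, using a smaller Y purely to keep it fast:
import numpy as np
from scipy import stats

Y = np.random.random((1000, 30))  # smaller stand-in for the (300000, 30) array
X = np.random.random(30)

# Vectorized per-row slope: cov(X, Y[i]) / var(X) for every row i
X1 = X - X.mean()
B1 = (Y - Y.mean(1)[:, None]) @ X1 / (X1 @ X1)

B_loop = np.array([stats.linregress(X, y)[0] for y in Y])
assert np.allclose(B1, B_loop)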

Convolution along one axis only

I have two 2-D arrays with the same first axis dimensions. In Python, I would like to convolve the two matrices along the second axis only. I would like to get C below without computing the convolution along the first axis as well.
import numpy as np
import scipy.signal as sg
M, N, P = 4, 10, 20
A = np.random.randn(M, N)
B = np.random.randn(M, P)
C = sg.convolve(A, B, 'full')[(2*M-1)//2]
Is there a fast way?
You can use np.apply_along_axis to apply np.convolve along the desired axis. Here is an example of applying a boxcar filter to a 2d array:
import numpy as np
a = np.arange(10)
a = np.vstack((a,a)).T
filt = np.ones(3)
np.apply_along_axis(lambda m: np.convolve(m, filt, mode='full'), axis=0, arr=a)
This is an easy way to generalize many functions that don't have an axis argument.
With ndimage.convolve1d, you can specify the axis...
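For instance (a sketch; note that ndimage.convolve1d returns an output the same length as the input along that axis, not the 'full' convolution):
import numpy as np
from scipy import ndimage

a = np.arange(10, dtype=float)
a = np.vstack((a, a)).T  # shape (10, 2)
filt = np.ones(3)
# Convolve each column with filt along axis 0
out = ndimage.convolve1d(a, filt, axis=0, mode='constant')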
np.apply_along_axis won't really help you, because you're trying to iterate over two arrays. Effectively, you'd have to use a loop, as described here.
Now, loops are fine if your arrays are small, but if N and P are large, then you probably want to use FFT to convolve instead.
However, you need to appropriately zero pad your arrays first, so that your "full" convolution has the expected shape:
M, N, P = 4, 10, 20
A = np.random.randn(M, N)
B = np.random.randn(M, P)
A_ = np.zeros((M, N+P-1), dtype=A.dtype)
A_[:, :N] = A
B_ = np.zeros((M, N+P-1), dtype=B.dtype)
B_[:, :P] = B
A_fft = np.fft.fft(A_, axis=1)
B_fft = np.fft.fft(B_, axis=1)
C_fft = A_fft * B_fft
C = np.real(np.fft.ifft(C_fft))
# Test
C_test = np.zeros((M, N+P-1))
for i in range(M):
    C_test[i, :] = np.convolve(A[i, :], B[i, :], 'full')
assert np.allclose(C, C_test)
For 2D arrays, the function scipy.signal.convolve2d is faster, and scipy.signal.fftconvolve can be even faster (depending on the dimensions of the arrays):
Here is the same code with N = 100000:
import time
import numpy as np
import scipy.signal as sg
M, N, P = 10, 100000, 20
A = np.random.randn(M, N)
B = np.random.randn(M, P)
T1 = time.time()
C = sg.convolve(A, B, 'full')
print(time.time()-T1)
T1 = time.time()
C_2d = sg.convolve2d(A, B, 'full')
print(time.time()-T1)
T1 = time.time()
C_fft = sg.fftconvolve(A, B, 'full')
print(time.time()-T1)
>>> 12.3
>>> 2.1
>>> 0.6
The answers are all the same, with slight differences due to the different computation methods used (e.g., FFT vs. direct multiplication; I don't know exactly what convolve2d uses):
print(np.max(np.abs(C - C_2d)))
>>>7.81597009336e-14
print(np.max(np.abs(C - C_fft)))
>>>1.84741111298e-13
Late answer, but worth posting for reference. Quoting from comments of the OP:
Each row in A is being filtered by the corresponding row in B. I could implement it like that, just thought there might be a faster way.
A is on the order of 10s of gigabytes in size and I use overlap-add.
Naive / Straightforward Approach
import numpy as np
import scipy.signal as sg
M, N, P = 4, 10, 20
A = np.random.randn(M, N) # (4, 10)
B = np.random.randn(M, P) # (4, 20)
C = np.vstack([sg.convolve(a, b, 'full') for a, b in zip(A, B)])
>>> C.shape
(4, 29)
Each row in A is convolved with each respective row in B, essentially convolving M 1D arrays/vectors.
No Loop + CUDA Supported Version
It is possible to replicate this operation by using PyTorch's F.conv1d. We have to imagine A as a 4-channel, 1D signal of length 10. We wish to convolve each channel in A with a specific kernel of length 20. This is a special case called a depthwise convolution, often used in deep learning.
Note that torch's conv is implemented as cross-correlation, so we need to flip B in advance to do actual convolution.
import torch
import torch.nn.functional as F
@torch.no_grad()
def torch_conv(A, B):
    M, N, P = A.shape[0], A.shape[1], B.shape[1]
    C = F.conv1d(A, B[:, None, :], bias=None, stride=1, groups=M, padding=N + (P - 1) // 2)
    return C.numpy()
# Convert A and B to torch tensors + flip B
X = torch.from_numpy(A) # (4, 10)
W = torch.from_numpy(np.fliplr(B).copy()) # (4, 20)
# Do grouped conv and get np array
Y = torch_conv(X, W)
>>> Y.shape
(4, 29)
>>> np.allclose(C, Y)
True
Advantages of using a depthwise convolution with torch:
No loops!
The above solution can also run on CUDA/GPU, which can really speed things up if A and B are very large matrices. (From OP's comment, this seems to be the case: A is 10GB in size.)
Disadvantages:
Overhead of converting from array to tensor (should be negligible)
Need to flip B once before the operation