Vectorized version of argmax across 2D heatmaps - numpy

Input: a (C, H, W) numpy array of C=17 body-joint HxW heatmaps (C = channels).
Desired output: an array of C=17 (Y, X) pairs giving the coordinates of the maximum value within each HxW channel.
I would like a fully vectorized solution (ideally a one-liner) to replace my current per-channel solution, which is wrapped in an inefficient for loop:
kpts = [np.array(np.unravel_index(j.argmax(), j.shape)) for j in input]

Something like:
import numpy as np
C, H, W = 17, 10, 10
arr = np.random.rand(C, H, W)
out = np.array(np.unravel_index(arr.reshape(C, -1).argmax(axis=1), (H, W)))
out has shape (2, 17); transpose it to get 17 rows of (Y, X) pairs.
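As a quick sanity check (a minimal sketch reusing arr and out from above), the vectorized result matches the per-channel loop:
# Compare against the original per-channel solution
kpts = np.array([np.unravel_index(j.argmax(), j.shape) for j in arr])  # (17, 2)
assert np.array_equal(out.T, kpts)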

Related

How to find common members of matrices in NumPy

I have a 2D matrix A and a vector B. I want to find all row indices of elements in A that are also contained in B.
A = np.array([[1,9,5], [8,4,9], [4,9,3], [6,7,5]], dtype=int)
B = np.array([2, 4, 8, 10, 12, 18], dtype=int)
My current solution compares A to one element of B at a time, but that is horribly slow:
res = np.array([], dtype=int)
for i in range(B.shape[0]):
    cres, _ = (B[i] == A).nonzero()
    res = np.append(res, cres)
res = np.unique(res)
The following Matlab statement would solve my issue:
find(any(reshape(any(reshape(A, prod(size(A)), 1) == B, 2),size(A, 1),size(A, 2)), 2))
However, comparing a row and a column vector in NumPy does not create a Boolean intersection matrix as it does in MATLAB.
Is there a proper way to do this in Numpy?
We can use np.isin masking.
To get all the row numbers, it would be -
np.where(np.isin(A,B).T)[1]
If you need them split based on each element's occurrence -
[np.flatnonzero(i) for i in np.isin(A,B).T if i.any()]
The posted MATLAB code seems to be doing broadcasting. So, an equivalent one would be -
np.where(B[:,None,None]==A)[1]
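As a quick check (a minimal sketch using the A and B from the question), the mask-based approach gives the expected rows:
rows = np.where(np.isin(A, B).T)[1]
print(rows)             # [1 2 1] -> rows 1 and 2 contain members of B
print(np.unique(rows))  # [1 2] if duplicates are not wanted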

Efficient way to calculate the pairwise matrix product between one tensor and all the rolling of another tensor

Suppose we have two tensors:
tensor A whose shape is (d,m,n)
tensor B whose shape is (d,n,l).
If we want the pairwise matrix products over the right-most two dimensions of A and B, we can use np.einsum('dmn,...nl->d...ml', A, B), whose result has shape (d, d, m, l). However, I do not want all of the d*d pairs.
Given a parameter k, 1 <= k <= d, I want only the following pairwise matrix products, where B is indexed in a rolling fashion (like numpy.roll):
A(0,...)@B(0,...) up to A(0,...)@B(k-1,...);
A(1,...)@B(1,...) up to A(1,...)@B(k,...);
...
A(d-2,...)@B(d-2,...), A(d-2,...)@B(d-1,...), ... up to A(d-2,...)@B(k-3,...);
A(d-1,...)@B(d-1,...) up to A(d-1,...)@B(k-2,...).
In other words, slice i of A is multiplied by slices i, i+1, ..., i+k-1 of B, with the B index taken modulo d. The final result is a tensor of shape (d, k, m, l).
What's the most efficient way to do this?
I know several ways like:
First get np.einsum('dmn,...nl->d...ml',A,B), then use a mask to extract the (d,k) pairs.
tile B first, then use einsum in some way.
But I think there exists a better way.
I doubt you can do much better than a for loop. Here is, for example, a vectorized version using einsum and stride_tricks compared to a double for loop:
Code:
from simple_benchmark import BenchmarkBuilder, MultiArgument
import numpy as np
from numpy.lib.stride_tricks import as_strided

B = BenchmarkBuilder()

@B.add_function()
def loopy(A, B, k):
    d, m, n = A.shape
    l = B.shape[-1]
    out = np.empty((d, k, m, l), int)
    for i in range(d):
        for j in range(k):
            out[i, j] = A[i] @ B[(i + j) % d]
    return out

@B.add_function()
def vectory(A, B, k):
    d, m, n = A.shape
    l = B.shape[-1]
    # Wrap B around by k-1 slices, then build a (d, k, n, l) rolling view
    BB = np.concatenate([B, B[:k - 1]], 0)
    BB = as_strided(BB, (d, k, n, l), np.repeat(BB.strides, (2, 1, 1)))
    return np.einsum("ikl,ijln->ijkn", A, BB)

@B.add_arguments('d x k x m x n x l')
def argument_provider():
    for exp in range(10):
        d, k, m, n, l = (np.r_[1.6, 1.5, 1.5, 1.5, 1.5]**exp * (4, 2, 2, 2, 2)).astype(int)
        print(d, k, m, n, l)
        A = np.random.randint(0, 10, (d, m, n))
        B = np.random.randint(0, 10, (d, n, l))
        yield k * d * m * n * l, MultiArgument([A, B, k])

r = B.run()
r.plot()

import pylab
pylab.savefig('diagwa.png')
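As a quick sanity check (a small sketch with toy sizes, assuming loopy and vectory are defined as above; the second array is named B2 here to avoid clashing with the BenchmarkBuilder B):
d, k, m, n, l = 6, 3, 4, 5, 2
A = np.random.randint(0, 10, (d, m, n))
B2 = np.random.randint(0, 10, (d, n, l))
assert np.array_equal(loopy(A, B2, k), vectory(A, B2, k))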

How to create a new array of tensors from old one

I have a tensor [a, b, c, d, e, f, g, h, i] with dimensions 9 x 1536. I need to create a new tensor of the form [(a,b), (a,c), (a,d), (a,e), (a,f), (a,g), (a,h), (a,i)] with dimensions 8 x 2 x 1536. How can I do it with TensorFlow?
I tried it like this:
x = tf.zeros((9, 1536))
x_new = tf.stack([(x[0], x[1]),
                  (x[0], x[2]),
                  (x[0], x[3]),
                  (x[0], x[4]),
                  (x[0], x[5]),
                  (x[0], x[6]),
                  (x[0], x[7]),
                  (x[0], x[8])])
This seems to work but I would like to know if there is a better solution or approach which can be used instead of this
You can obtain the desired output with a combination of tf.concat, tf.tile and tf.expand_dims:
import tensorflow as tf
import numpy as np

_in = tf.constant(np.random.randint(0, 10, (9, 1536)))
# TensorFlow 1.x shape API (.value on Dimension objects)
tile_shape = [(_in.shape[0] - 1).value] + [1] * len(_in.shape[1:].as_list())
_out = tf.concat([
    tf.expand_dims(
        tf.tile([_in[0]], tile_shape),
        1),
    tf.expand_dims(_in[1:], 1)
    ],
    1)
tf.tile repeats the first element of _in, creating a tensor of length len(_in)-1 (the tile shape is computed separately because we only want to tile along the first dimension).
tf.expand_dims adds a dimension that we can then concatenate on.
Finally, tf.concat stitches together the two tensors giving the desired result.
EDIT: Rewrote to fit the OP's actual use-case with multidimensional tensors.
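For reference, on newer TensorFlow (2.x, where the Dimension .value API is gone) the same pairing can be sketched more compactly with tf.tile and tf.stack; this is an illustrative alternative under that assumption, not the answer's original code:
import tensorflow as tf
import numpy as np

x = tf.constant(np.random.randint(0, 10, (9, 1536)))
# Repeat row 0 eight times, then pair it with rows 1..8 along a new axis
x_new = tf.stack([tf.tile(x[:1], [8, 1]), x[1:]], axis=1)
print(x_new.shape)  # (8, 2, 1536)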

Transform a numpy 3D ndarray to a symmetric form with respect to a specific index

In the case of an n x n matrix mat, I can do the following:
sym = 0.5 * (mat + mat.T)
This operation gives the desired result sym[i,j] = sym[j,i].
Suppose we have a 3D array ndarr[i,j,k], where i, j, k = 0, 1, ..., n-1, so ndarr is n x n x n. The idea is to obtain the "symmetric" form
nsym[i,j,k] = nsym[j,i,k] from ndarr. I tried this:
import numpy as np
# Generate some random matrix, n = 5
ndarr = np.random.beta(0.1,1,(5,5,5))
# First attempt to symmetrize
sym1 = np.array([0.5*(ndarr[:,:,k]+ndarr[:,:,k].T) for k in range(5)])
The problem here is that sym1[i,j,k] != sym1[j,i,k] as it is required. In fact I obtain sym1[i,j,k] = sym1[i,k,j], symmetric under the exchange of the last two symbols!
# Second attempt
sym2 = 0.5*(ndarr+ndarr.T)
Same problem here: sym2 is symmetric under the exchange of the first and third indices, sym2[i,j,k] = sym2[k,j,i].
To summarize, the goal is to make the 3D array symmetric in its first two indices (for each fixed third index) while preserving the diagonal values of the original, ndarr[i,i,i].
The problem here is that you're not using the correct transpose:
sym = 0.5 * (ndarr + np.transpose(ndarr, (1, 0, 2)))
By default, np.transpose and the .T property will reverse the order of the axes. In your case, we want to only flip the first two axes: (0,1,2) -> (1,0,2).
EDIT: The reason your first attempt failed is that you were stacking each symmetrized slice along a new first axis, not the last. It's clearer if you make ndarr with shape (5, 5, 3):
In [16]: sym = np.array([0.5*(ndarr[:,:,k]+ndarr[:,:,k].T) for k in range(3)])
In [17]: sym.shape
Out[17]: (3L, 5L, 5L)
In any case, the version above with np.transpose is cleaner and more efficient.
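As a quick check (a minimal sketch), the transposed version has the required symmetry and leaves the diagonal ndarr[i,i,i] untouched:
import numpy as np
ndarr = np.random.beta(0.1, 1, (5, 5, 5))
nsym = 0.5 * (ndarr + np.transpose(ndarr, (1, 0, 2)))
assert np.allclose(nsym, np.transpose(nsym, (1, 0, 2)))  # nsym[i,j,k] == nsym[j,i,k]
assert np.allclose(np.einsum('iii->i', nsym), np.einsum('iii->i', ndarr))  # diagonal preserved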

Convolution along one axis only

I have two 2-D arrays with the same first axis dimensions. In python, I would like to convolve the two matrices along the second axis only. I would like to get C below without computing the convolution along the first axis as well.
import numpy as np
import scipy.signal as sg
M, N, P = 4, 10, 20
A = np.random.randn(M, N)
B = np.random.randn(M, P)
C = sg.convolve(A, B, 'full')[(2 * M - 1) // 2]
Is there a fast way?
You can use np.apply_along_axis to apply np.convolve along the desired axis. Here is an example of applying a boxcar filter to a 2d array:
import numpy as np
a = np.arange(10)
a = np.vstack((a,a)).T
filt = np.ones(3)
np.apply_along_axis(lambda m: np.convolve(m, filt, mode='full'), axis=0, arr=a)
This is an easy way to generalize many functions that don't have an axis argument.
With ndimage.convolve1d, you can specify the axis...
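A minimal sketch of that (reusing a and filt from the example above; note that convolve1d takes a single 1-D kernel, so it also does not cover the row-wise two-array case):
from scipy import ndimage
ndimage.convolve1d(a, filt, axis=0, mode='constant')  # filter each column of a with filt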
np.apply_along_axis won't really help you, because you're trying to iterate over two arrays. Effectively, you'd have to use a loop, as described here.
Now, loops are fine if your arrays are small, but if N and P are large, then you probably want to use FFT to convolve instead.
However, you need to appropriately zero pad your arrays first, so that your "full" convolution has the expected shape:
M, N, P = 4, 10, 20
A = np.random.randn(M, N)
B = np.random.randn(M, P)
A_ = np.zeros((M, N+P-1), dtype=A.dtype)
A_[:, :N] = A
B_ = np.zeros((M, N+P-1), dtype=B.dtype)
B_[:, :P] = B
A_fft = np.fft.fft(A_, axis=1)
B_fft = np.fft.fft(B_, axis=1)
C_fft = A_fft * B_fft
C = np.real(np.fft.ifft(C_fft))
# Test
C_test = np.zeros((M, N+P-1))
for i in range(M):
    C_test[i, :] = np.convolve(A[i, :], B[i, :], 'full')
assert np.allclose(C, C_test)
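As a side note, newer SciPy versions (1.2+, if I recall correctly) expose an axes argument on scipy.signal.fftconvolve, which should give the same row-wise result directly; a minimal sketch:
import scipy.signal as sg
C_axes = sg.fftconvolve(A, B, mode='full', axes=1)  # convolve along axis 1 only
assert np.allclose(C, C_axes)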
For 2D arrays, the function scipy.signal.convolve2d is faster, and scipy.signal.fftconvolve can be even faster (depending on the dimensions of the arrays):
Here is the same code with N = 100000:
import time
import numpy as np
import scipy.signal as sg
M, N, P = 10, 100000, 20
A = np.random.randn(M, N)
B = np.random.randn(M, P)
T1 = time.time()
C = sg.convolve(A, B, 'full')
print(time.time()-T1)
T1 = time.time()
C_2d = sg.convolve2d(A, B, 'full')
print(time.time()-T1)
T1 = time.time()
C_fft = sg.fftconvolve(A, B, 'full')
print(time.time()-T1)
>>> 12.3
>>> 2.1
>>> 0.6
The answers are all the same, with slight differences due to the different computation methods used (e.g., FFT vs. direct multiplication; I don't know what exactly convolve2d uses):
print(np.max(np.abs(C - C_2d)))
>>>7.81597009336e-14
print(np.max(np.abs(C - C_fft)))
>>>1.84741111298e-13
Late answer, but worth posting for reference. Quoting from comments of the OP:
Each row in A is being filtered by the corresponding row in B. I could
implement it like that, just thought there might be a faster way.
A is on the order of 10s of gigabytes in size and I use overlap-add.
Naive / Straightforward Approach
import numpy as np
import scipy.signal as sg
M, N, P = 4, 10, 20
A = np.random.randn(M, N) # (4, 10)
B = np.random.randn(M, P) # (4, 20)
C = np.vstack([sg.convolve(a, b, 'full') for a, b in zip(A, B)])
>>> C.shape
(4, 29)
Each row in A is convolved with its respective row in B, essentially convolving M 1D arrays/vectors.
No Loop + CUDA Supported Version
It is possible to replicate this operation by using PyTorch's F.conv1d. We have to imagine A as a 4-channel, 1D signal of length 10. We wish to convolve each channel in A with a specific kernel of length 20. This is a special case called a depthwise convolution, often used in deep learning.
Note that torch's conv is implemented as cross-correlation, so we need to flip B in advance to do actual convolution.
import torch
import torch.nn.functional as F
@torch.no_grad()
def torch_conv(A, B):
    M, N, P = A.shape[0], A.shape[1], B.shape[1]
    # padding = P - 1 yields the 'full' convolution length N + P - 1
    C = F.conv1d(A, B[:, None, :], bias=None, stride=1, groups=M, padding=P - 1)
    return C.numpy()
# Convert A and B to torch tensors + flip B
X = torch.from_numpy(A) # (4, 10)
W = torch.from_numpy(np.fliplr(B).copy()) # (4, 20)
# Do grouped conv and get np array
Y = torch_conv(X, W)
>>> Y.shape
(4, 29)
>>> np.allclose(C, Y)
True
Advantages of using a depthwise convolution with torch:
No loops!
The above solution can also run on CUDA/GPU, which can really speed things up if A and B are very large matrices. (From OP's comment, this seems to be the case: A is 10GB in size.)
Disadvantages:
Overhead of converting from array to tensor (should be negligible)
Need to flip B once before the operation
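For completeness, a minimal GPU sketch (an assumption-laden illustration: it requires a CUDA-capable device, and the result must be moved back to the CPU before calling .numpy()):
if torch.cuda.is_available():
    Xg, Wg = X.cuda(), W.cuda()
    Yg = F.conv1d(Xg, Wg[:, None, :], bias=None, stride=1,
                  groups=Xg.shape[0], padding=Wg.shape[1] - 1).cpu().numpy()
    assert np.allclose(C, Yg)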