Referencing rows in a matrix using indices from another matrix - numpy

You have an original sparse matrix X:
>>print type(X)
<class 'scipy.sparse.csr.csr_matrix'>
>>print X.todense()
[[1, 4, 3]
 [3, 4, 1]
 [2, 1, 1]
 [3, 6, 3]]
You have a second sparse matrix Z, which is derived from some rows of X (say the values are doubled so we can see the difference between the two matrices). In pseudo-code:
>>Z = X[[0,2,3]]
>>print Z.todense()
[[1, 4, 3]
 [2, 1, 1]
 [3, 6, 3]]
>>Z = Z*2
>>print Z.todense()
[[2, 8, 6]
 [4, 2, 2]
 [6, 12, 6]]
What's the best way of retrieving the rows in Z using the ORIGINAL indices from X? For instance, in pseudo-code:
>>print Z[[0,3]]
[[2, 8, 6]    # row 0 of Z, which was row 0 of X
 [6, 12, 6]]  # row 2 of Z, which was row 3 of X
That is, how can you retrieve rows from Z using indices that refer to those rows' original positions in X? You can't modify X in any way (for example, you can't add an index column to X), but there are no other limits.

If you have the original indices in an array i, and the values in i are in increasing order (as in your example), you can use numpy.searchsorted(i, [0, 3]) to find the indices in Z that correspond to indices [0, 3] in the original X. Here's a demonstration in an IPython session:
In [39]: X = csr_matrix([[1,4,3],[3,4,1],[2,1,1],[3,6,3]])
In [40]: X.todense()
Out[40]:
matrix([[1, 4, 3],
        [3, 4, 1],
        [2, 1, 1],
        [3, 6, 3]])
In [41]: i = array([0, 2, 3])
In [42]: Z = 2 * X[i]
In [43]: Z.todense()
Out[43]:
matrix([[ 2,  8,  6],
        [ 4,  2,  2],
        [ 6, 12,  6]])
In [44]: Zsub = Z[searchsorted(i, [0, 3])]
In [45]: Zsub.todense()
Out[45]:
matrix([[ 2,  8,  6],
        [ 6, 12,  6]])
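If the original indices in i are not guaranteed to be sorted, a plain lookup table mapping original row numbers to positions in Z works just as well. A minimal sketch of that alternative (the names pos, want and Zsub here are illustrative, not from the question):
import numpy as np
from scipy.sparse import csr_matrix
X = csr_matrix([[1, 4, 3], [3, 4, 1], [2, 1, 1], [3, 6, 3]])
i = np.array([0, 2, 3])                        # rows of X that were copied into Z
Z = 2 * X[i]
pos = {orig: k for k, orig in enumerate(i)}    # original row index -> row in Z
want = [0, 3]                                  # indices in X's numbering
Zsub = Z[[pos[w] for w in want]]
print(Zsub.todense())
# [[ 2  8  6]
#  [ 6 12  6]]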

Given two arrays, a and b, how to find efficiently all combinations of elements in b that have equal value in a?
Here is an example:
Given
a = [0, 0, 0, 1, 1, 2, 2, 2, 2]
b = [1, 2, 4, 5, 9, 3, 7, 22, 10]
how would you calculate
c = [[1, 2],
     [1, 4],
     [2, 4],
     [5, 9],
     [3, 7],
     [3, 22],
     [3, 10],
     [7, 22],
     [7, 10],
     [22, 10]]
?
a can be assumed to be sorted.
I can do this with loops, a la:
import torch
a = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2, 2])
b = torch.tensor([1, 2, 4, 5, 9, 3, 7, 22, 10])
jumps = torch.cat((torch.tensor([0]),
                   torch.where(a.diff() > 0)[0] + 1,
                   torch.tensor([len(a)])))
cs = []
for i in range(len(jumps) - 1):
    cs.append(torch.combinations(b[jumps[i]:jumps[i + 1]]))
c = torch.cat(cs)
Is there any efficient way to avoid the loop? The solution should work on both CPU and CUDA.
Also, the solution should have runtime O(m * m), where m is the largest number of equal elements in a, not O(n * n), where n is the length of a.
I prefer solutions for PyTorch, but I am curious about NumPy solutions as well.
I think the overhead of using torch is only justified for bigger datasets, as there is basically no computational difficulty in this function; in my opinion you can achieve the same results with:
from collections import Counter
def find_combinations1(a, b):
    count_a = Counter(a)
    combinations = []
    for x in set(b):
        if count_a[x] == b.count(x):
            combinations.append(x)
    return combinations
or even simpler:
def find_combinations2(a, b):
    return list(set(a) & set(b))
With PyTorch I assume the simplest approach is:
import torch
def find_combinations3(a, b):
    a = torch.tensor(a)
    b = torch.tensor(b)
    eq = torch.eq(a, b.view(-1, 1))
    indices = torch.nonzero(eq)
    return indices[:, 1]
This option has, of course, a time complexity of O(n*m), where n is the size of a and m is the size of b, and O(n+m) memory for the tensors.
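For producing the pairs c themselves without a Python loop, one hedged alternative (my addition, not part of the answer above) is a pairwise equality mask; note that it uses O(n * n) memory, so it does not meet the O(m * m) goal stated in the question:
import torch
a = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2, 2])
b = torch.tensor([1, 2, 4, 5, 9, 3, 7, 22, 10])
idx = torch.arange(len(a), device=a.device)
# mask[i, j] is True where a[i] == a[j] and i < j, so each pair is taken once
mask = (a.unsqueeze(1) == a.unsqueeze(0)) & (idx.unsqueeze(1) < idx.unsqueeze(0))
i, j = torch.nonzero(mask, as_tuple=True)
c = torch.stack((b[i], b[j]), dim=1)
# tensor([[ 1,  2], [ 1,  4], [ 2,  4], [ 5,  9], [ 3,  7], [ 3, 22], [ 3, 10], [ 7, 22], [ 7, 10], [22, 10]])
This runs on CPU or CUDA without changes, since it only uses elementwise tensor operations.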

Multidimensional numpy.outer without flatten

x is an N by M matrix.
y is a 1 by L vector.
I want to return the "outer product" between x and y; let's call it z.
z[n,m,l] = x[n,m] * y[l]
I could probably do this using einsum:
np.einsum("ij,k->ijk", x[:, :, k], y[:, k])
or by reshaping afterwards:
np.outer(x[:, :, k], y).reshape((x.shape[0],x.shape[1],y.shape[0]))
But I'm wondering whether there is a way to do this with np.outer alone, or something simpler and more memory efficient.
Is there a way?
It's one of those numpy "can't know unless you happen to know" bits: np.outer flattens multidimensional inputs while np.multiply.outer doesn't:
m,n,l = 3,4,5
x = np.arange(m*n).reshape(m,n)
y = np.arange(l)
np.multiply.outer(x,y).shape
# (3, 4, 5)
The code for outer is:
multiply(a.ravel()[:, newaxis], b.ravel()[newaxis, :], out)
As its docs say, it flattens (i.e. ravels) its inputs. If the arrays are already 1d, that expression could be written as
a[:,None] * b[None,:]
a[:,None] * b # broadcasting auto adds the None to b
We could apply broadcasting rules to your (n,m)*(1,l):
In [2]: x = np.arange(12).reshape(3,4); y = np.array([[1,2]])
In [3]: x.shape, y.shape
Out[3]: ((3, 4), (1, 2))
You want a (n,m,l), which a (n,m,1) * (1,1,l) achieves. We need to add a trailing dimension to x. The extra leading 1 on y is automatic:
In [4]: z = x[...,None]*y
In [5]: z.shape
Out[5]: (3, 4, 2)
In [6]: z
Out[6]:
array([[[ 0,  0],
        [ 1,  2],
        [ 2,  4],
        [ 3,  6]],

       [[ 4,  8],
        [ 5, 10],
        [ 6, 12],
        [ 7, 14]],

       [[ 8, 16],
        [ 9, 18],
        [10, 20],
        [11, 22]]])
Using einsum:
In [8]: np.einsum('nm,kl->nml', x, y).shape
Out[8]: (3, 4, 2)
The fact that you get:
In [9]: np.multiply.outer(x,y).shape
Out[9]: (3, 4, 1, 2)
suggests y isn't really (1,l) but rather (l,). Adjusting for either is easy.
I don't think there's much difference in memory efficiency among these. In this small example, In [4] is the fastest, but not by much.
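As a quick sanity check that these forms agree when y really is 1-D of length l (this snippet is an illustration I'm adding, not part of the original answer):
import numpy as np
x = np.arange(12).reshape(3, 4)
y = np.array([1, 2])                  # shape (l,)
z1 = np.multiply.outer(x, y)          # outer without flattening -> (3, 4, 2)
z2 = x[..., None] * y                 # broadcasting with a trailing axis
z3 = np.einsum('nm,l->nml', x, y)     # explicit einsum spelling
assert z1.shape == z2.shape == z3.shape == (3, 4, 2)
assert np.array_equal(z1, z2) and np.array_equal(z1, z3)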

Elegantly generate result array in numpy

I have my X and Y numpy arrays:
X = np.array([0,1,2,3])
Y = np.array([0,1,2,3])
And my function which maps x,y values to Z points:
def z(x,y):
    return x+y
I wish to produce the obvious thing required for a 3D plot: the 2-dimensional numpy array for the corresponding Z-values. I believe it should look like:
Z = np.array([[0, 1, 2, 3],
              [1, 2, 3, 4],
              [2, 3, 4, 5],
              [3, 4, 5, 6]])
I can do this in several lines, but I'm looking for the briefest, most elegant piece of code.
For a function that is array aware it is more economical to use an open grid:
>>> import numpy as np
>>>
>>> X = np.array([0,1,2,3])
>>> Y = np.array([0,1,2,3])
>>>
>>> def z(x,y):
...     return x+y
...
>>> XX, YY = np.ix_(X, Y)
>>> XX, YY
(array([[0],
        [1],
        [2],
        [3]]), array([[0, 1, 2, 3]]))
>>> z(XX, YY)
array([[0, 1, 2, 3],
       [1, 2, 3, 4],
       [2, 3, 4, 5],
       [3, 4, 5, 6]])
If your grid axes are ranges, you can create the grid directly using np.ogrid:
>>> XX, YY = np.ogrid[:4, :4]
>>> XX, YY
(array([[0],
        [1],
        [2],
        [3]]), array([[0, 1, 2, 3]]))
If the function is not array aware you can make it so using np.vectorize:
>>> def f(x, y):
...     if x > y:
...         return x
...     else:
...         return -x
...
>>> np.vectorize(f)(*np.ogrid[-3:4, -3:4])
array([[ 3,  3,  3,  3,  3,  3,  3],
       [-2,  2,  2,  2,  2,  2,  2],
       [-1, -1,  1,  1,  1,  1,  1],
       [ 0,  0,  0,  0,  0,  0,  0],
       [ 1,  1,  1,  1, -1, -1, -1],
       [ 2,  2,  2,  2,  2, -2, -2],
       [ 3,  3,  3,  3,  3,  3, -3]])
One very short way to achieve what you want is to produce a meshgrid from your coordinates:
XX, YY = np.meshgrid(X, Y)
Z = XX + YY
or, more generally:
Z = z(XX, YY)
or even in one line:
Z = z(*np.meshgrid(X, Y))
EDIT:
If your function may also return a constant, you have to somehow infer the dimensions that the result should have. If you want to keep using meshgrids, one very simple way would be to rewrite your function like this:
def f(x,y):
    return x*0 + y*0 + a
where a would be your constant. numpy would then take care of the dimensions for you. This of course looks a bit odd, so instead you could write
def f(x,y):
    return np.full(x.shape, a)
If you really want functions that work on both scalars and arrays, it's probably best to go with np.vectorize as in @PaulPanzer's answer.
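As a side note for the specific example z(x, y) = x + y: since addition is a NumPy ufunc, its outer method gives another one-liner. This is an extra observation of mine, not part of the answers above, and it only applies when the function is a ufunc:
import numpy as np
X = np.array([0, 1, 2, 3])
Y = np.array([0, 1, 2, 3])
Z = np.add.outer(X, Y)   # Z[i, j] == X[i] + Y[j], the same 4x4 result as above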

Dot product between 2D and 3D numpy arrays

I have two arrays x and y with shapes (2, 3, 3) and (3, 3), respectively. I want to compute the dot product z with shape (2, 3) in the following way:
x = np.array([[[a111, a121, a131], [a211, a221, a231], [a311, a321, a331]],
              [[a112, a122, a132], [a212, a222, a232], [a312, a322, a332]]])
y = np.array([[b11, b12, b13], [b21, b22, b23], [b31, b32, b33]])
z = np.array([[a111*b11+a121*b12+a131*b13, a211*b21+a221*b22+a231*b23, a311*b31+a321*b32+a331*b33],
              [a112*b11+a122*b12+a132*b13, a212*b21+a222*b22+a232*b23, a312*b31+a322*b32+a332*b33]])
Any ideas on how to do this in a vectorized way?
Looking at the sum-reductions shown in the question, the reduction is along the last axis of both arrays, while the second axis of x is kept aligned with the first axis of y. Because of that axis-alignment requirement, we can use np.einsum. Thus, one vectorized solution would be -
np.einsum('ijk,jk->ij',x, y)
Sample run -
In [255]: x
Out[255]:
array([[[5, 1, 7],
        [2, 1, 7],
        [5, 1, 2]],

       [[6, 4, 7],
        [3, 8, 1],
        [1, 7, 7]]])
In [256]: y
Out[256]:
array([[5, 4, 7],
       [8, 2, 5],
       [2, 3, 3]])
In [260]: np.einsum('ijk,jk->ij',x, y)
Out[260]:
array([[78, 53, 19],
[95, 45, 44]])
In [261]: 5*5 + 1*4 + 7*7
Out[261]: 78
In [262]: 2*8 + 1*2 + 7*5
Out[262]: 53
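An equivalent spelling without einsum (my addition, not from the answer above) is to let y broadcast against x and then reduce the last axis:
import numpy as np
x = np.array([[[5, 1, 7], [2, 1, 7], [5, 1, 2]],
              [[6, 4, 7], [3, 8, 1], [1, 7, 7]]])
y = np.array([[5, 4, 7], [8, 2, 5], [2, 3, 3]])
z = (x * y).sum(axis=-1)   # y broadcasts over x's first axis
# array([[78, 53, 19],
#        [95, 45, 44]])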

Reduce a dimension of numpy array by selecting

I have a 3d array
A = np.random.random((4,4,3))
and an index matrix
B = np.int_(np.random.random((4,4))*3)
How do I get a 2D array from A based on index matrix B?
In general, how do you get an (N-1)-dimensional array from an N-dimensional array and an (N-1)-dimensional index array?
Let's take an example:
>>> A = np.random.randint(0,10,(3,3,2))
>>> A
array([[[0, 1],
        [8, 2],
        [6, 4]],

       [[1, 0],
        [6, 9],
        [7, 7]],

       [[1, 2],
        [2, 2],
        [9, 7]]])
Use fancy indexing to take simple indices. Note that all the indices must be of the same shape, and the shape of the indices is the shape of what is returned.
>>> ind = np.arange(2)
>>> A[ind,ind,ind]
array([0, 9])   # indexes (0,0,0) and (1,1,1)
>>> ind = np.arange(2).reshape(2,1)
>>> A[ind,ind,ind]
array([[0],
       [9]])
So for your example we need to supply the grid for the first two dimensions:
>>> A = np.random.random((4,4,3))
>>> B = np.int_(np.random.random((4,4))*3)
>>> A
array([[[ 0.95158697,  0.37643036,  0.29175815],
        [ 0.84093397,  0.53453123,  0.64183715],
        [ 0.31189496,  0.06281937,  0.10008886],
        [ 0.79784114,  0.26428462,  0.87899921]],

       [[ 0.04498205,  0.63823379,  0.48130828],
        [ 0.93302194,  0.91964805,  0.05975115],
        [ 0.55686047,  0.02692168,  0.31065731],
        [ 0.92822499,  0.74771321,  0.03055592]],

       [[ 0.24849139,  0.42819062,  0.14640117],
        [ 0.92420031,  0.87483486,  0.51313695],
        [ 0.68414428,  0.86867423,  0.96176415],
        [ 0.98072548,  0.16939697,  0.19117458]],

       [[ 0.71009607,  0.23057644,  0.80725518],
        [ 0.01932983,  0.36680718,  0.46692839],
        [ 0.51729835,  0.16073775,  0.77768313],
        [ 0.8591955 ,  0.81561797,  0.90633695]]])
>>> B
array([[1, 2, 0, 0],
       [1, 2, 0, 1],
       [2, 1, 1, 1],
       [1, 2, 1, 2]])
>>> x,y = np.meshgrid(np.arange(A.shape[0]),np.arange(A.shape[1]))
>>> x
array([[0, 1, 2, 3],
       [0, 1, 2, 3],
       [0, 1, 2, 3],
       [0, 1, 2, 3]])
>>> y
array([[0, 0, 0, 0],
       [1, 1, 1, 1],
       [2, 2, 2, 2],
       [3, 3, 3, 3]])
>>> A[x,y,B]
array([[ 0.37643036,  0.48130828,  0.24849139,  0.71009607],
       [ 0.53453123,  0.05975115,  0.92420031,  0.36680718],
       [ 0.10008886,  0.02692168,  0.86867423,  0.16073775],
       [ 0.26428462,  0.03055592,  0.16939697,  0.90633695]])
If you prefer to use mesh as suggested by Daniel, you may also use
A[tuple( np.ogrid[:A.shape[0], :A.shape[1]] + [B] )]
to work with sparse indices. In the general case you could use
A[tuple( np.ogrid[ [slice(0, end) for end in A.shape[:-1]] ] + [B] )]
Note that this may also be used when you'd like to index by B an axis different from the last one (see for example this answer about inserting an element into a list).
Otherwise you can do it using broadcasting:
A[np.arange(A.shape[0])[:, np.newaxis], np.arange(A.shape[1])[np.newaxis, :], B]
This may be generalized too but it's a bit more complicated.
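For the specific case of picking one entry along the last axis, np.take_along_axis (available since NumPy 1.15) is yet another option. This is an addition of mine for completeness, not part of the answers above:
import numpy as np
A = np.random.random((4, 4, 3))
B = np.random.randint(0, 3, (4, 4))
# Select A[i, j, B[i, j]] for every (i, j); the result has shape (4, 4)
C = np.take_along_axis(A, B[..., None], axis=-1)[..., 0]
# Same result as the open-grid indexing shown above
i, j = np.ogrid[:A.shape[0], :A.shape[1]]
assert np.array_equal(C, A[i, j, B])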