Tensordot for numpy array and scipy sparse matrix - numpy

For a current project I have to compute the inner product of a lot of vectors with the same matrix (which is quite sparse). The vectors are associated with a two-dimensional grid, so I store the vectors in a three-dimensional array:
E.g.:
X is an array of dim (I,J,N). The matrix A is of dim (N,N). Now the task is to compute A.dot(X[i,j]) for each i,j in I,J.
For numpy arrays, this is quite easily accomplished with
Y = X.dot(A.T)
Now I'd like to store A as a sparse matrix, since it only contains a very limited number of nonzero entries and the dense product performs a lot of unnecessary multiplications. Unfortunately, the above solution won't work since numpy's dot doesn't work with sparse matrices. And to the best of my knowledge there is no tensordot-like operation for scipy sparse.
Does anybody know a nice and efficient way to compute the above array Y with a sparse matrix A?

The obvious approach is to run a loop over your vectors and use the sparse matrix's .dot method:
def naive_sps_x_dense_vecs(sps_mat, dense_vecs):
    rows, cols = sps_mat.shape
    I, J, _ = dense_vecs.shape
    out = np.empty((I, J, rows))
    for i in range(I):
        for j in range(J):
            out[i, j] = sps_mat.dot(dense_vecs[i, j])
    return out
But you may be able to speed things up a little by reshaping your 3d array to 2d and avoiding the Python loop:
def sps_x_dense_vecs(sps_mat, dense_vecs):
    rows, cols = sps_mat.shape
    vecs_shape = dense_vecs.shape
    dense_vecs = dense_vecs.reshape(-1, cols)
    out = sps_mat.dot(dense_vecs.T).T
    return out.reshape(vecs_shape[:-1] + (rows,))
The problem is that the sparse matrix has to be the first argument so that we can call its .dot method, which means the result comes out transposed; after transposing it back, the final reshape triggers a copy of the whole array. So for fairly large values of I and J combined with not-so-large values of N, the latter method is several times faster than the former, but performance may be reversed for other combinations of the parameters:
n, i, j = 100, 500, 500
a = sps.rand(n, n, density=1/n, format='csc')
vecs = np.random.rand(i, j, n)
>>> np.allclose(naive_sps_x_dense_vecs(a, vecs), sps_x_dense_vecs(a, vecs))
True
n, i, j = 100, 500, 500
%timeit naive_sps_x_dense_vecs(a, vecs)
1 loops, best of 3: 3.85 s per loop
%timeit sps_x_dense_vecs(a, vecs)
1 loops, best of 3: 576 ms per loop
n, i, j = 1000, 200, 200
%timeit naive_sps_x_dense_vecs(a, vecs)
1 loops, best of 3: 791 ms per loop
%timeit sps_x_dense_vecs(a, vecs)
1 loops, best of 3: 1.3 s per loop
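As a quick sanity check (my addition, not part of the original answer), you can densify a small A and confirm that sps_x_dense_vecs reproduces the dense X.dot(A.T) formulation from the question:
import numpy as np
import scipy.sparse as sps

# small sizes so that densifying A for the reference result stays cheap
n, i, j = 50, 8, 9
a = sps.rand(n, n, density=0.05, format='csc')
vecs = np.random.rand(i, j, n)

# sparse-aware version vs. the dense formulation from the question
print(np.allclose(sps_x_dense_vecs(a, vecs), vecs.dot(a.toarray().T)))  # expected: True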

You could use jax to achieve what you are looking for. Let's suppose your sparse matrix is in csr_array format. You would first transform it into a jax BCOO array:
from scipy import sparse
from jax.experimental import sparse as jaxsparse
import jax.numpy as jnp
def convert_to_BCOO(x):
    x = x.transpose()  # get the transpose
    x = x.tocoo()
    x = jaxsparse.BCOO((x.data, jnp.column_stack((x.row, x.col))),
                       shape=x.shape)
    x = x.sort_indices()
    return x
You could then use jaxsparse.sparsify to create a sparsified dot product as follows.
def dot(x, y):
    return jnp.dot(x, y)

sp_dot = jaxsparse.sparsify(dot)
A_transpose = convert_to_BCOO(A)
Y = sp_dot(X, A_transpose)
The function sp_dot now follows the exact same rules as numpy.dot.
Hope this helps!
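As a quick check (my addition, not part of the original answer), for a small A you can densify it and compare against the plain numpy result; note that jax computes in float32 by default, so a loose tolerance is used:
import numpy as np

Y_ref = X.dot(A.toarray().T)  # dense reference from the question
print(np.allclose(np.asarray(Y), Y_ref, atol=1e-4))  # expected: True for a small A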

Related

Efficient way to calculate the pairwise matrix product between one tensor and all the rolling of another tensor

Suppose we have two tensors:
tensor A whose shape is (d,m,n)
tensor B whose shape is (d,n,l).
If we want the pairwise matrix products of the right-most matrices of A and B, I think we can use np.einsum('dmn,...nl->d...ml',A,B), whose result has shape (d,d,m,l). However, I do not want the products of all the pairs, only some of them.
Given a parameter k, 1 <= k <= d, I want the following pairwise matrix products, where the index into B wraps around modulo d (i.e. B is treated in a rolling way, like numpy.roll):
A(0,...) @ B(0,...), ..., A(0,...) @ B(k-1,...)
A(1,...) @ B(1,...), ..., A(1,...) @ B(k,...)
...
A(d-2,...) @ B(d-2,...), A(d-2,...) @ B(d-1,...), ..., A(d-2,...) @ B(k-3,...)
A(d-1,...) @ B(d-1,...), ..., A(d-1,...) @ B(k-2,...)
Collecting these, we finally get a tensor of shape (d,k,m,l).
What's the most efficient way to do this?
I know several ways, like:
First get np.einsum('dmn,...nl->d...ml',A,B), then use a mask to extract the (d,k) pairs.
Tile B first, then use einsum in some way.
But I think there exists a better way.
I doubt you can do much better than a for loop. Here is, for example, a vectorized version using einsum and stride_tricks compared to a double for loop:
Code:
from simple_benchmark import BenchmarkBuilder, MultiArgument
import numpy as np
from numpy.lib.stride_tricks import as_strided

B = BenchmarkBuilder()

@B.add_function()
def loopy(A, B, k):
    d, m, n = A.shape
    l = B.shape[-1]
    out = np.empty((d, k, m, l), int)
    for i in range(d):
        for j in range(k):
            out[i, j] = A[i] @ B[(i+j) % d]
    return out

@B.add_function()
def vectory(A, B, k):
    d, m, n = A.shape
    l = B.shape[-1]
    # append the first k-1 blocks so the rolled windows are contiguous,
    # then build a strided (d, k, n, l) view without copying
    BB = np.concatenate([B, B[:k-1]], 0)
    BB = as_strided(BB, (d, k, n, l), np.repeat(BB.strides, (2, 1, 1)))
    return np.einsum("ikl,ijln->ijkn", A, BB)

@B.add_arguments('d x k x m x n x l')
def argument_provider():
    for exp in range(10):
        d, k, m, n, l = (np.r_[1.6, 1.5, 1.5, 1.5, 1.5]**exp * (4, 2, 2, 2, 2)).astype(int)
        print(d, k, m, n, l)
        A = np.random.randint(0, 10, (d, m, n))
        B = np.random.randint(0, 10, (d, n, l))
        yield k*d*m*n*l, MultiArgument([A, B, k])

r = B.run()
r.plot()

import pylab
pylab.savefig('diagwa.png')
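As a quick correctness check (my addition, not part of the original answer), the two kernels should agree on any small example:
d, k, m, n, l = 5, 3, 2, 4, 3
A = np.random.randint(0, 10, (d, m, n))
B2 = np.random.randint(0, 10, (d, n, l))  # named B2 so the BenchmarkBuilder bound to B above is not clobbered
print(np.array_equal(loopy(A, B2, k), vectory(A, B2, k)))  # expected: True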

Numpy implementation of conditional update

I have a 2d np array of values 0 and 1, called t. I have another 2d array, initialized with random values, called q. They are the same size. I want to update q so that everywhere t is 0, the corresponding entry of q is changed to -np.inf.
I believe the loops below work:
for i in range(0, np.shape(t)[0]):
    for j in range(0, np.shape(t)[1]):
        if t[i, j] == 0:
            q[i, j] = -np.inf
However I am wondering if there is a more efficient solution using numpy.
Using a boolean mask with np.isclose would be a better way, e.g.
q[np.isclose(t, 0.0)] = -np.inf
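If you prefer not to modify q in place, an equivalent variant (my addition, not from the original answer) builds a new array with np.where; since t holds exact integer 0/1 values, comparing against 0 directly also works:
import numpy as np

t = np.random.randint(0, 2, (4, 5))   # example 0/1 mask
q = np.random.rand(4, 5)

q_new = np.where(t == 0, -np.inf, q)  # -inf where t is 0, q elsewhere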

Loop through numpy array on indexes and apply function [duplicate]

I have two arrays that have the shapes N X T and M X T. I'd like to compute the correlation coefficient across T between every possible pair of rows n and m (from N and M, respectively).
What's the fastest, most pythonic way to do this? (Looping over N and M would seem to me to be neither fast nor pythonic.) I'm expecting the answer to involve numpy and/or scipy. Right now my arrays are numpy arrays, but I'm open to converting them to a different type.
I'm expecting my output to be an array with the shape N X M.
N.B. When I say "correlation coefficient," I mean the Pearson product-moment correlation coefficient.
Here are some things to note:
The numpy function correlate requires input arrays to be one-dimensional.
The numpy function corrcoef accepts two-dimensional arrays, but they must have the same shape.
The scipy.stats function pearsonr requires input arrays to be one-dimensional.
Correlation (default 'valid' case) between two 2D arrays:
You can simply use matrix-multiplication np.dot like so -
out = np.dot(arr_one,arr_two.T)
The correlation (default "valid" case) between each pairwise row combination (row1, row2) of the two input arrays corresponds to the multiplication result at position (row1, row2).
Row-wise Correlation Coefficient calculation for two 2D arrays:
def corr2_coeff(A, B):
    # Row-wise mean of input arrays & subtract from input arrays themselves
    A_mA = A - A.mean(1)[:, None]
    B_mB = B - B.mean(1)[:, None]

    # Sum of squares across rows
    ssA = (A_mA**2).sum(1)
    ssB = (B_mB**2).sum(1)

    # Finally get corr coeff
    return np.dot(A_mA, B_mB.T) / np.sqrt(np.dot(ssA[:, None], ssB[None]))
This is based upon this solution to How to apply corr2 functions in Multidimentional arrays in MATLAB
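As a sanity check (my addition, not part of the original answer), the result should match the cross block of np.corrcoef applied to the row-stacked inputs:
import numpy as np

A = np.random.rand(4, 10)
B = np.random.rand(3, 10)

expected = np.corrcoef(A, B)[:A.shape[0], A.shape[0]:]  # the N x M cross block
print(np.allclose(corr2_coeff(A, B), expected))         # expected: True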
Benchmarking
This section compares the runtime performance of the proposed approach against the generate_correlation_map function and the loopy pearsonr-based approach listed in the other answer (taken from the function test_generate_correlation_map(), without the value-correctness verification code at the end of it). Please note that the timings for the proposed approach also include a check at the start for an equal number of columns in the two input arrays, as is also done in that other answer. The runtimes are listed next.
Case #1:
In [106]: A = np.random.rand(1000, 100)
In [107]: B = np.random.rand(1000, 100)
In [108]: %timeit corr2_coeff(A, B)
100 loops, best of 3: 15 ms per loop
In [109]: %timeit generate_correlation_map(A, B)
100 loops, best of 3: 19.6 ms per loop
Case #2:
In [110]: A = np.random.rand(5000, 100)
In [111]: B = np.random.rand(5000, 100)
In [112]: %timeit corr2_coeff(A, B)
1 loops, best of 3: 368 ms per loop
In [113]: %timeit generate_correlation_map(A, B)
1 loops, best of 3: 493 ms per loop
Case #3:
In [114]: A = np.random.rand(10000, 10)
In [115]: B = np.random.rand(10000, 10)
In [116]: %timeit corr2_coeff(A, B)
1 loops, best of 3: 1.29 s per loop
In [117]: %timeit generate_correlation_map(A, B)
1 loops, best of 3: 1.83 s per loop
The other loopy pearsonr based approach seemed too slow, but here are the runtimes for one small datasize -
In [118]: A = np.random.rand(1000, 100)
In [119]: B = np.random.rand(1000, 100)
In [120]: %timeit corr2_coeff(A, B)
100 loops, best of 3: 15.3 ms per loop
In [121]: %timeit generate_correlation_map(A, B)
100 loops, best of 3: 19.7 ms per loop
In [122]: %timeit pearsonr_based(A, B)
1 loops, best of 3: 33 s per loop
@Divakar provides a great option for computing the unscaled correlation, which is what I originally asked for.
In order to calculate the correlation coefficient, a bit more is required:
import numpy as np
def generate_correlation_map(x, y):
    """Correlate each n with each m.

    Parameters
    ----------
    x : np.array
      Shape N X T.
    y : np.array
      Shape M X T.

    Returns
    -------
    np.array
      N X M array in which each element is a correlation coefficient.
    """
    mu_x = x.mean(1)
    mu_y = y.mean(1)
    n = x.shape[1]
    if n != y.shape[1]:
        raise ValueError('x and y must ' +
                         'have the same number of timepoints.')
    s_x = x.std(1, ddof=n - 1)
    s_y = y.std(1, ddof=n - 1)
    cov = np.dot(x,
                 y.T) - n * np.dot(mu_x[:, np.newaxis],
                                   mu_y[np.newaxis, :])
    return cov / np.dot(s_x[:, np.newaxis], s_y[np.newaxis, :])
Here's a test of this function, which passes:
from scipy.stats import pearsonr

def test_generate_correlation_map():
    x = np.random.rand(10, 10)
    y = np.random.rand(20, 10)
    desired = np.empty((10, 20))
    for n in range(x.shape[0]):
        for m in range(y.shape[0]):
            desired[n, m] = pearsonr(x[n, :], y[m, :])[0]
    actual = generate_correlation_map(x, y)
    np.testing.assert_array_almost_equal(actual, desired)
For those interested in computing the Pearson correlation coefficient between a 1D and 2D array, I wrote the following function, where x is a 1D array and y a 2D array.
def pearsonr_2D(x, y):
    """Computes the Pearson correlation coefficient
    where x is a 1D and y a 2D array."""
    upper = np.sum((x - np.mean(x)) * (y - np.mean(y, axis=1)[:, None]), axis=1)
    lower = np.sqrt(np.sum(np.power(x - np.mean(x), 2)) *
                    np.sum(np.power(y - np.mean(y, axis=1)[:, None], 2), axis=1))
    rho = upper / lower
    return rho
Example run:
>>> x
Out[1]: array([1, 2, 3])
>>> y
Out[2]: array([[ 1, 2, 3],
[ 6, 7, 12],
[ 9, 3, 1]])
>>> pearsonr_2D(x, y)
Out[3]: array([ 1. , 0.93325653, -0.96076892])
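As a quick cross-check (my addition, not part of the original answer), the same values can be obtained from np.corrcoef by stacking x on top of y and reading off the first row:
import numpy as np

x = np.array([1, 2, 3])
y = np.array([[1, 2, 3],
              [6, 7, 12],
              [9, 3, 1]])

print(np.corrcoef(x, y)[0, 1:])  # array([ 1.        ,  0.93325653, -0.96076892])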

numpy n-dimensional smart indexing over large tensors - memory efficiency

I'm working with large tensors, so numpy memory allocations for temporary tensors begin to significantly influence execution time, and the code sometimes raises memory allocation errors during those intermediate steps. Here are two approaches I came up with for indexing one tensor with the int values of another tensor (like result_ijk = a[i, b[i, j], k]), and even though the second one seems more memory-efficient, I feel like creating this enormous index matrix and iterating over all its values (even in parallel) is kind of weird (and hits memory limits quite often):
def test():
    i, j, k, l = 10, 20, 30, 40  # in reality, they're like 1e3..1e6
    a = np.random.rand(i, j, k)
    b = np.random.randint(0, j, size=i*l).reshape((i, l))

    # c_ilk = c[i, b[i, l], k]; shape(c) = (10, 40, 30)
    tmp = a[:, b, :]  # <= i*ijk additional memory allocated (!) crazy
    c1 = np.diagonal(tmp, axis1=0, axis2=1).transpose([2, 0, 1])
    print(c1.shape)

    # another approach:
    ii, ll = np.indices((i, l))  # <= 2*i*l of temporary ints allocated
    tmp2 = b[ii, ll]  # i*l of ints allocated, slow ops
    c2 = a[ii, tmp2]  # slow ops over tensor
    print(c2.shape)
    print(np.allclose(c1, c2))

test()
Any suggestions on how one could optimize this type of n-dim smart indexing code?
If I use this piece of ~vectorized code in Theano, is it also going to allocate all those temporary buffers, or can it somehow manage to build them on the fly? Is there any package that would perform such indexing in a lazy/more efficient manner, without allocating these ii-like tensors?
(note: I need to take gradients over it in the end, so I can't use fancy jit-compilers like numba :( )
You only need to allocate an array of integers of length i to get your desired result:
i_idx = np.arange(i)
c = a[i_idx[:, None], b[i_idx, :], :]
# or you can use the terser c = a[i_idx[:, None], b[i_idx]]
Broadcasting takes care of duplicating values as needed on the fly, without having to allocate memory for them.
If you time this for large-ish arrays, you'll notice it is only marginally faster than your second approach: as noted by others, the intermediate indexing array is going to be several orders of magnitude smaller than your overall computation, so optimizing it has a small effect on the total runtime or memory footprint.
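A minimal, self-contained check of this indexing (my addition, using the shapes from the question):
import numpy as np

i, j, k, l = 10, 20, 30, 40
a = np.random.rand(i, j, k)
b = np.random.randint(0, j, size=(i, l))

i_idx = np.arange(i)
c = a[i_idx[:, None], b]  # shape (i, l, k)

# reference: explicit double loop
ref = np.empty((i, l, k))
for ii in range(i):
    for ll in range(l):
        ref[ii, ll] = a[ii, b[ii, ll]]
print(c.shape, np.allclose(c, ref))  # (10, 40, 30) True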
Some methods:
i, j, k, l = [100]*4
a = np.random.randint(0, 5, (i, j, k))
b = np.random.randint(0, j, (i, l))

def test1():
    # c_ilk = c[i, b[i, l], k]; shape(c) = (2,3,5)
    tmp = a[:, b, :]  # <= i*ijk additional memory allocated (!) crazy
    c1 = np.diagonal(tmp, axis1=0, axis2=1).transpose([2, 0, 1])
    return c1

def test2():
    ii, ll = np.indices((i, l))  # <= 2*i*l of temporary ints allocated
    tmp2 = b[ii, ll]  # i*l of ints allocated, slow ops
    c2 = a[ii, tmp2]  # slow ops over tensor
    # print(c2.shape)
    return c2

def test3():
    c3 = np.empty((i, l, k), dtype=a.dtype)
    for ii in range(i):
        for ll in range(l):
            c3[ii, ll] = a[ii, b[ii, ll]]
    return c3

from numba import jit
test4 = jit(test3)
And the corresponding benchmarks :
In [54]: %timeit test1()
1 loop, best of 3: 720 ms per loop
In [55]: %timeit test2()
100 loops, best of 3: 7.79 ms per loop
In [56]: %timeit test3()
10 loops, best of 3: 43.7 ms per loop
In [57]: %timeit test4()
100 loops, best of 3: 4.99 ms per loop
That seems to show (see @Eelco Hoogendoorn's comment) that your second method is nearly optimal for big sizes, while the first is a bad choice.
For numba you can just use this part of the code and apply the gradient in a non-"jitted" function.

Calculate "v^T A v" for a matrix of vectors v

I have a k*n matrix X and a k*k matrix A. For each column of X, I'd like to calculate the scalar
X[:, i].T.dot(A).dot(X[:, i])
(or, mathematically, Xi' * A * Xi).
Currently, I have a for loop:
out = np.empty((n,))
for i in range(n):
    out[i] = X[:, i].T.dot(A).dot(X[:, i])
but since n is large, I'd like to do this faster if possible (i.e. using some NumPy functions instead of a loop).
This seems to do it nicely:
(X.T.dot(A)*X.T).sum(axis=1)
Edit: This is a little faster. np.einsum('...i,...i->...', X.T.dot(A), X.T). Both work better if X and A are Fortran contiguous.
You can use numpy.einsum:
np.einsum('ji,jk,ki->i',x,a,x)
This will get the same result. Let's see if it is much faster:
Looks like dot is still the fastest option, particularly because it uses threaded BLAS, as opposed to einsum which runs on one core.
import perfplot
import numpy as np

def setup(n):
    k = n
    x = np.random.random((k, n))
    A = np.random.random((k, k))
    return x, A

def loop(data):
    x, A = data
    n = x.shape[1]
    out = np.empty(n)
    for i in range(n):
        out[i] = x[:, i].T.dot(A).dot(x[:, i])
    return out

def einsum(data):
    x, A = data
    return np.einsum('ji,jk,ki->i', x, A, x)

def dot(data):
    x, A = data
    return (x.T.dot(A)*x.T).sum(axis=1)

perfplot.show(
    setup=setup,
    kernels=[loop, einsum, dot],
    n_range=[2**k for k in range(10)],
    logx=True,
    logy=True,
    xlabel='n, k'
)
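For completeness, a quick agreement check between the three kernels (my addition, not part of the original benchmark):
x, A = setup(50)
r_loop, r_einsum, r_dot = loop((x, A)), einsum((x, A)), dot((x, A))
print(np.allclose(r_loop, r_einsum) and np.allclose(r_loop, r_dot))  # expected: True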
You can't do it faster unless you parallelize the whole thing: one thread per column. You'll still use loops - you can't get away from that.
Map-reduce is a nice way to look at this problem: the map step multiplies, the reduce step sums.