Explanation of numpy's einsum - numpy

I am currently doing some studies on computing a 4th order tensor in numpy with the einsum function.
The tensor I am computing is written in Einstein notation, and the function einsum does the work perfectly! But I would like to know what it is doing in the following case:
import numpy as np
a=np.array([[2,0,3],[0,1,0],[0, 0, 4]])
b= np.eye(3)
r1=np.einsum("ij,kl->ijkl", a, b)
r2=np.einsum("ik,jl->ijkl", a, b)
In r1 I am basically doing the standard tensor product (equivalent to np.tensordot(a, b, axes=0)).
What about in r2?
I know I can get the value by doing a[:,None,:,None]*b[None,:,None,:] but I do not know what the indexing is doing. Does this operation have a name?
Sorry if this is too basic!

I tried to use the transpose definition to change multiple axes.
It works for 'ij,kl -> ijkl' , 'ik,jl->ijkl' ,'kl,ij->ijkl'
but fails for 'il,jk->ijkl', 'jl,ik->ijkl' and 'jk,il->ijkl'.
import numpy as np

a = np.eye(3)
a[0][0] = 2
a[0][-1] = 3
a[-1][-1] = 4
b = np.eye(3)

def permutation(str_, Arr):
    Arr = np.reshape(Arr, [3, 3, 3, 3])

    def splitString(str_):
        tmp1 = str_.split(',')
        tmp2 = tmp1[1].split('->')
        str_idx1 = tmp1[0]
        str_idx2 = tmp2[0]
        str_idx_out = tmp2[1]
        return str_idx1, str_idx2, str_idx_out

    idx_a, idx_b, idx_out = splitString(str_)
    dict_ = {'i': 0, 'j': 1, 'k': 2, 'l': 3}

    def split(word):
        return [char for char in word]

    a, b = split(idx_a)
    c, d = split(idx_b)
    Arr = np.transpose(Arr, (dict_[a], dict_[b], dict_[c], dict_[d]))
    return Arr

str_ = 'jk,il->ijkl'
d = np.outer(a, b)
f = np.einsum(str_, a, b)
check = permutation(str_, d)

if np.count_nonzero(f - check) == 0:
    print('Code is working!')
else:
    print("Something is wrong...")
Appreciate your suggestions!

r2 is essentially the same tensor as r1, but with the indices rearranged. In particular, r2[i,j,k,l] is equal to a[i,k]*b[j,l].
For instance:
>>> r2[0,1,2,1]
3.0
This corresponds to the fact that a[0,2]*b[1,1] is 3 * 1, which is indeed 3.
Another way to think about this is to observe that r2[:,j,:,l] is equal to a whenever j == l and is a zero matrix otherwise (because b is the identity).
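A quick numerical check of this (a small verification sketch, not part of the original post):
import numpy as np

a = np.array([[2, 0, 3], [0, 1, 0], [0, 0, 4]])
b = np.eye(3)
r2 = np.einsum("ik,jl->ijkl", a, b)

# The broadcasting expression from the question builds the same tensor:
# index [i,j,k,l] picks up a[i,k] * b[j,l].
check = a[:, None, :, None] * b[None, :, None, :]
print(np.allclose(r2, check))  # True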

Related

Computing quick convex hull using Numba

I came across this nice implementation of computing the convex hull of 2D points using Numpy. I would like to be able to @njit this function to use it inside my other Numba-jitted code. However, I'm not able to modify it to run, as it uses recursion and unsupported Numba features. Can anybody help me rewrite this?
import numpy as np
from numba import njit

def process(S, P, a, b):
    signed_dist = np.cross(S[P] - S[a], S[b] - S[a])
    K = [i for s, i in zip(signed_dist, P) if s > 0 and i != a and i != b]
    if len(K) == 0:
        return (a, b)
    c = max(zip(signed_dist, P))[1]
    return process(S, K, a, c)[:-1] + process(S, K, c, b)

def quickhull_2d(S: np.ndarray) -> np.ndarray:
    a, b = np.argmin(S[:,0]), np.argmax(S[:,0])
    max_index = np.argmax(S[:,0])
    max_element = S[max_index]
    return process(S, np.arange(S.shape[0]), a, max_index)[:-1] + process(S, np.arange(S.shape[0]), max_index, a)[:-1]
Example data input and output
points = np.array([[0, 0], [1, 1], [0.5, 0.5], [0, 1], [1, 0]])
ch = quickhull_2d(points)
print(ch)
[0, 4, 1, 3]
print(points[ch])
[[0. 0.]
[1. 0.]
[1. 1.]
[0. 1.]]
There are many issues in this code that prevent Numba from being used.
First of all, returning variable-sized tuples is not possible in Numba, because the type of a tuple implicitly includes its size. A tuple is basically a structured type and not a list. See this post and this one for more information about this issue. The solution is basically to return a list (slow) or an array (fast).
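For illustration, a minimal sketch (a hypothetical helper, not from the original post) of the array-returning workaround:
import numpy as np
from numba import njit

@njit
def endpoints(a, b):
    # A tuple's length is baked into its Numba type, so returning a
    # variable number of items as a tuple cannot be typed. An array's
    # length is runtime data, so the return type stays the same.
    out = np.empty(2, dtype=np.int64)
    out[0] = a
    out[1] = b
    return out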
Moreover, the type of the parameters changes from one function to another. Indeed, process is called in quickhull_2d with P defined as a Numpy array, and then called from process itself with P defined as a list. Lists and arrays are completely different things. It is better to use arrays when possible in Numba, unless you use a list to add an unknown (neither small nor bounded) number of items.
Additionally, max(zip(signed_dist, P))[1] is apparently unsupported by Numba, and it is not very efficient anyway (nor idiomatic for Numpy code). P[np.argmax(signed_dist)] should be used instead.
Furthermore, np.cross also does not seem to be supported for the general case, and you currently need to use cross2d instead (from numba.np.extensions).
Finally, when you use a recursive function like this, it is better to specify the input types of the parameters so as to avoid weird errors. This can be done with a signature string.
The resulting code is:
import numpy as np
from numba import njit
from numba.np.extensions import cross2d

@njit('(float64[:,:], int64[:], int64, int64)')
def process(S, P, a, b):
    signed_dist = cross2d(S[P] - S[a], S[b] - S[a])
    K = np.array([i for s, i in zip(signed_dist, P) if s > 0 and i != a and i != b], dtype=np.int64)
    if len(K) == 0:
        return [a, b]
    c = P[np.argmax(signed_dist)]
    return process(S, K, a, c)[:-1] + process(S, K, c, b)

@njit('(float64[:,:],)')
def quickhull_2d(S: np.ndarray) -> np.ndarray:
    a, b = np.argmin(S[:,0]), np.argmax(S[:,0])
    max_index = np.argmax(S[:,0])
    max_element = S[max_index]
    return process(S, np.arange(S.shape[0]), a, max_index)[:-1] + process(S, np.arange(S.shape[0]), max_index, a)[:-1]

points = np.array([[0, 0], [1, 1], [0.5, 0.5], [0, 1], [1, 0]])
ch = quickhull_2d(points)
points = np.array([[0, 0], [1, 1], [0.5, 0.5], [0, 1], [1, 0]])
ch = quickhull_2d(points)
print(ch) # print [0, 4, 1, 3]
Note that the compilation time is slow and the execution time will not be great either. This is due to the lists (and thus the temporary arrays created at runtime). The next step is simply to use arrays. The bad news is that concatenate is not supported by Numba (because the general case is not easy to implement, though specific cases are trivial). You can create a new array and copy each part (or even better: preallocate an array and slice it during the recursive calls), as sketched below.
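A sketch of the copy-based alternative (a hypothetical helper, assuming int64 index arrays):
import numpy as np
from numba import njit

@njit('int64[:](int64[:], int64[:])')
def concat_int64(a, b):
    # Allocate the result once, then copy each part into its slice.
    out = np.empty(a.size + b.size, dtype=np.int64)
    out[:a.size] = a
    out[a.size:] = b
    return out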
Also note that any recursive function can be transformed into a non-recursive function using a manual stack. That being said, it may be slower and it makes the code more verbose. There are still some benefits to this approach: it avoids stack overflows when the recursion is deep, and it may be faster if the function is rewritten so as not to stack one of the recursive calls (in the spirit of tail-call optimization). A generic sketch follows.
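To illustrate the manual-stack transformation (a generic sketch, not the author's code): a divide-and-conquer recursion that concatenates two sub-results can be rewritten with an explicit LIFO stack, pushing the right sub-problem first so that the left one is processed first and the output order is preserved.
def split_recursive(lo, hi):
    if hi - lo <= 1:
        return [lo]
    mid = (lo + hi) // 2
    return split_recursive(lo, mid) + split_recursive(mid, hi)

def split_iterative(lo, hi):
    out = []
    stack = [(lo, hi)]  # explicit stack replaces the call stack
    while stack:
        lo_, hi_ = stack.pop()
        if hi_ - lo_ <= 1:
            out.append(lo_)
            continue
        mid = (lo_ + hi_) // 2
        stack.append((mid, hi_))  # pushed first, processed second
        stack.append((lo_, mid))  # processed first: keeps ordering
    return out

assert split_recursive(0, 8) == split_iterative(0, 8)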

numpy.corrcoef() MemoryError

I can't understand the MemoryError I get when using numpy.corrcoef() to find the correlation coefficient between the two vectors smin and smax, as follows:
import numpy as np
from numpy import random as rn

r = 0.01
sigma = 0.2
T = 1
K = 1
N = 252
h = T/N
M = 50000
Z = rn.randn(M, N)
S = np.ones((M, N+1))
smax = np.ones((M, 1))
smin = np.ones((M, 1))

for i in range(0, N):
    S[:,i+1] = S[:,i]*(np.exp((r-(sigma**2)/2)*h+sigma*Z[:,i]*np.sqrt(h)))

for j in range(0, M):
    smax[j,:] = np.exp(-r*T)*(np.max(S[j,:])>K)*(np.max(S[j,:])-K)
    smin[j,:] = np.exp(-r*T)*(np.min(S[j,:])<K)*(K-np.min(S[j,:]))

c = np.corrcoef(smax, smin)
print(c)
If there is another way to find the correlation coefficient, e.g. using pandas, that is also fine.
The shape of your arrays here is the problem. The function documentation states that x is a "1-D or 2-D array containing multiple variables and observations. Each row of x represents a variable, and each column a single observation of all those variables." and that y is an additional set of variables and observations. So, with two (50000, 1) inputs treated as 50000 variables each, this tries to allocate an array of shape (100000, 100000), which is huge.
If you just want to calculate the Pearson correlation coefficient between two one-dimensional vectors, you can use a much simpler formula than what is implemented here. This documentation has the formula I am referring to:
https://hydroerr.readthedocs.io/en/stable/api/HydroErr.HydroErr.pearson_r.html#HydroErr.HydroErr.pearson_r
But to still be able to use the numpy version, you need to pass the observations and predictions in the same parameter x, and they need to be 1-D arrays.
import numpy as np
simulated_array = np.random.rand(50000)
observed_array = np.random.rand(50000)
c = np.corrcoef([simulated_array, observed_array])[1, 0]
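And a sketch of the simpler direct formula mentioned above (plain NumPy, assuming two same-length 1-D arrays), which never allocates an (n, n) matrix:
import numpy as np

def pearson_r(x, y):
    # Pearson r: covariance of x and y divided by the product of the
    # norms of their deviations from the mean.
    xm = x - x.mean()
    ym = y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())

simulated_array = np.random.rand(50000)
observed_array = np.random.rand(50000)
print(pearson_r(simulated_array, observed_array))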

Triple tensor product with Tensorflow

Suppose I have a matrix A and two vectors x,y, of appropriate dimensions. I want to compute the dot product x' * A * y, where x' denotes the transpose. This should result in a scalar.
Is there a convenient API function in Tensorflow to do this?
(Note that I am using Tensorflow 2).
Use tf.linalg.tensordot(); see the documentation.
As you mentioned in the question, you are trying to compute a dot product. In this case plain tf.matmul() will not work directly, as it performs matrix multiplication on matrices rather than vectors.
Demo code snippet
import tensorflow as tf
A = tf.constant([[1,4,6],[2,1,5],[3,2,4]])
x = tf.constant([3,2,7])
result = tf.linalg.tensordot(tf.transpose(x), A, axes=1)
result = tf.linalg.tensordot(result, x, axes=1)
print(result)
And the result will be
>>> tf.Tensor(532, shape=(), dtype=int32)
A few points I want to mention here:
Don't forget the axes argument inside tf.linalg.tensordot().
When you create tf.zeros(5), you get a rank-1 tensor of shape (5,), i.e. [0, 0, 0, 0, 0]; when you transpose it, you get the same tensor back. But if you create it as tf.zeros((5, 1)), it is a column vector of shape (5, 1):
[
[0],[0],[0],[0],[0]
]
Transposing this does give a different result, but I recommend following the code snippet above; for a dot product you don't have to worry much about this. A short shape demo follows.
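A short shape demo of that point (a sketch):
import tensorflow as tf

v = tf.zeros(5)        # rank-1 tensor of shape (5,)
m = tf.zeros((5, 1))   # column vector of shape (5, 1)
print(tf.transpose(v).shape)  # (5,): transposing a rank-1 tensor is a no-op
print(tf.transpose(m).shape)  # (1, 5): the column becomes a row vector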
If you are still facing issues, I will be very happy to help you.
Just do the following,
import tensorflow as tf
x = tf.constant([1,2])
a = tf.constant([[2,3],[3,4]])
y = tf.constant([2,3])
z = tf.reshape(tf.matmul(tf.matmul(x[tf.newaxis,:], a), y[:, tf.newaxis]),[])
print(z.numpy())
Returns
>>> 49
Just use tf.transpose and the multiplication operator, like this:
tf.transpose(x) * A * y
Based on your example:
x = tf.zeros(5)
A = tf.zeros((5,5))
How about
x = tf.expand_dims(x, -1)
tf.matmul(tf.matmul(x, A, transpose_a=True), x)
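Not from the original answers, but as one more sketch: tf.einsum can express the whole bilinear form x' * A * y in a single contraction.
import tensorflow as tf

x = tf.constant([1., 2.])
A = tf.constant([[2., 3.], [3., 4.]])
y = tf.constant([2., 3.])

# 'i,ij,j->' contracts both vector indices against the matrix,
# leaving a scalar.
z = tf.einsum('i,ij,j->', x, A, y)
print(z.numpy())  # 49.0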

Efficient way to calculate the pairwise matrix product between one tensor and all the rolling of another tensor

Suppose we have two tensors:
tensor A whose shape is (d,m,n)
tensor B whose shape is (d,n,l).
If we want the pairwise matrix products over the right-most two axes of A and B, I think we can use np.einsum('dmn,...nl->d...ml', A, B), whose result has shape (d, d, m, l). However, I would like to get the pairwise products of not all the pairs.
Given a parameter k, 1 <= k <= d, I want the following pairwise matrix products:
from A(0,...) @ B(0,...) to A(0,...) @ B(k-1,...);
from A(1,...) @ B(1,...) to A(1,...) @ B(k,...);
...;
from A(d-2,...) @ B(d-2,...), A(d-2,...) @ B(d-1,...), ... to A(d-2,...) @ B(k-3,...);
from A(d-1,...) @ B(d-1,...) to A(d-1,...) @ B(k-2,...).
Note that we use a rolling scheme for the indices of tensor B (like numpy.roll), taken modulo d.
Finally, we actually get a tensor whose shape is (d, k, m, l).
What's the most efficient way to do this?
I know several ways like:
First get np.einsum('dmn,...nl->d...ml',A,B), then use a mask to extract the (d,k) pairs.
tile B first, then use einsum in some way.
But I think there exists a better way.
I doubt you can do much better than a for loop. Here is, for example, a vectorized version using einsum and stride_tricks compared to a double for loop:
Code:
from simple_benchmark import BenchmarkBuilder, MultiArgument
import numpy as np
from numpy.lib.stride_tricks import as_strided

B = BenchmarkBuilder()

@B.add_function()
def loopy(A, B, k):
    d, m, n = A.shape
    l = B.shape[-1]
    out = np.empty((d, k, m, l), int)
    for i in range(d):
        for j in range(k):
            out[i, j] = A[i] @ B[(i+j) % d]
    return out

@B.add_function()
def vectory(A, B, k):
    d, m, n = A.shape
    l = B.shape[-1]
    BB = np.concatenate([B, B[:k-1]], 0)
    BB = as_strided(BB, (d, k, n, l), np.repeat(BB.strides, (2, 1, 1)))
    return np.einsum("ikl,ijln->ijkn", A, BB)

@B.add_arguments('d x k x m x n x l')
def argument_provider():
    for exp in range(10):
        d, k, m, n, l = (np.r_[1.6, 1.5, 1.5, 1.5, 1.5]**exp*(4, 2, 2, 2, 2)).astype(int)
        print(d, k, m, n, l)
        A = np.random.randint(0, 10, (d, m, n))
        B = np.random.randint(0, 10, (d, n, l))
        yield k*d*m*n*l, MultiArgument([A, B, k])

r = B.run()
r.plot()

import pylab
pylab.savefig('diagwa.png')

Python Memory error on scipy stats. Scipy linalg lstsq <> manual beta

Not sure if this question belongs here or on Cross Validated, but since the primary issue is programming related, I am posting it here.
Inputs:
Y = big 2D numpy array (300000, 30)
X = 1D array (30,)
Desired Output:
B = 1D array (300000,), each element of which is the regression coefficient from regressing the corresponding row of Y (of length 30) against X
So B[0] = scipy.stats.linregress(X, Y[0])[0]
I tried this first:
B = scipy.stats.linregress(X,Y)[0]
hoping that it would broadcast X according to the shape of Y. Next, I broadcast X myself to match the shape of Y. But on both occasions I got this error:
File "C:\...\scipy\stats\stats.py", line 3011, in linregress
ssxm, ssxym, ssyxm, ssym = np.cov(x, y, bias=1).flat
File "C:\...\numpy\lib\function_base.py", line 1766, in cov
return (dot(X, X.T.conj()) / fact).squeeze()
MemoryError
I used a manual approach to calculate beta, and on Sascha's suggestion below I also used scipy.linalg.lstsq, as follows:
B = lstsq(Y.T, X)[0] # first estimate of beta
Y1=Y-Y.mean(1)[:,None]
X1=X-X.mean()
B1= np.dot(Y1,X1)/np.dot(X1,X1) # second estimate of beta
The two estimates of beta are very different, however:
>>> B1
Out[10]: array([0.135623, 0.028919, -0.106278, ..., -0.467340, -0.549543, -0.498500])
>>> B
Out[11]: array([0.000014, -0.000073, -0.000058, ..., 0.000002, -0.000000, 0.000001])
Scipy's linregress outputs slope + intercept, which define the regression line.
If you want to access the coefficients naturally, scipy's lstsq might be more appropriate; it is an equivalent formulation.
Of course you need to feed it the correct dimensions (your data is not ready as-is; it needs preprocessing: swap the dims).
Code
import numpy as np
from scipy.linalg import lstsq
Y = np.random.random((300000,30))
X = np.random.random(30)
x, res, rank, s = lstsq(Y.T, X) # Y transposed!
print(x)
print(x.shape)
Output
[ 1.73122781e-05 2.70274135e-05 9.80840639e-06 ..., -1.84597771e-05
5.25035470e-07 2.41275026e-05]
(300000,)