I came across this nice implementation of computing the convex hull of 2D points using NumPy. I would like to be able to @njit this function so I can use it inside my other Numba-jitted code. However, I'm not able to modify it to run, as it uses recursion and unsupported Numba features. Can anybody help me rewrite it?
import numpy as np
from numba import njit
def process(S, P, a, b):
    signed_dist = np.cross(S[P] - S[a], S[b] - S[a])
    K = [i for s, i in zip(signed_dist, P) if s > 0 and i != a and i != b]
    if len(K) == 0:
        return (a, b)
    c = max(zip(signed_dist, P))[1]
    return process(S, K, a, c)[:-1] + process(S, K, c, b)
def quickhull_2d(S: np.ndarray) -> np.ndarray:
    a, b = np.argmin(S[:,0]), np.argmax(S[:,0])
    max_index = np.argmax(S[:,0])
    max_element = S[max_index]
    return process(S, np.arange(S.shape[0]), a, max_index)[:-1] + process(S, np.arange(S.shape[0]), max_index, a)[:-1]
Example input and output:
points = np.array([[0, 0], [1, 1], [0.5, 0.5], [0, 1], [1, 0]])
ch = quickhull_2d(points)
print(ch)
[0, 4, 1, 3]
print(points[ch])
[[0. 0.]
[1. 0.]
[1. 1.]
[0. 1.]]
There are many issues in this code that prevent Numba from compiling it.
First of all, returning variable-sized tuples is not possible in Numba because the type of a tuple implicitly includes its size: a tuple is essentially a structured type, not a list. See this post and this one for more information about this issue. The solution is to return a list (slow) or an array (fast).
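To make that concrete, here is a minimal hypothetical example (not from the original code) of the pattern Numba rejects:

from numba import njit

@njit
def bad(n):
    # (int64 x 2) and (int64 x 3) are *different* Numba types, so the
    # two branches cannot be unified and compilation fails with a
    # TypingError the first time the function is called.
    if n > 0:
        return (1, 2)
    return (1, 2, 3)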
Moreover, the type of the parameters changes from one call to another. Indeed, process is called in quickhull_2d with P defined as a NumPy array, and then called from process itself with P defined as a list. Lists and arrays are completely different types for Numba. It is better to use arrays when possible, unless you need a list to accumulate an unknown number of items (neither small nor bounded).
Additionally, max(zip(signed_dist, P))[1] is apparently unsupported by Numba, and it is not very efficient anyway (nor idiomatic NumPy code). P[np.argmax(signed_dist)] should be used instead.
Furthermore, np.cross does not seem to be supported for this case either, so you currently need to use cross2d instead (from numba.np.extensions).
Finally, when you write a recursive function like this, it is better to specify the parameter types explicitly so as to avoid weird errors. This can be done with a signature string.
The resulting code is:
import numpy as np
from numba import njit
from numba.np.extensions import cross2d
@njit('(float64[:,:], int64[:], int64, int64)')
def process(S, P, a, b):
    signed_dist = cross2d(S[P] - S[a], S[b] - S[a])
    K = np.array([i for s, i in zip(signed_dist, P) if s > 0 and i != a and i != b], dtype=np.int64)
    if len(K) == 0:
        return [a, b]
    c = P[np.argmax(signed_dist)]
    return process(S, K, a, c)[:-1] + process(S, K, c, b)
@njit('(float64[:,:],)')
def quickhull_2d(S: np.ndarray) -> np.ndarray:
    # Leftmost and rightmost points are guaranteed to be on the hull.
    a, b = np.argmin(S[:,0]), np.argmax(S[:,0])
    return process(S, np.arange(S.shape[0]), a, b)[:-1] + process(S, np.arange(S.shape[0]), b, a)[:-1]
points = np.array([[0, 0], [1, 1], [0.5, 0.5], [0, 1], [1, 0]])
ch = quickhull_2d(points)
print(ch)  # [0, 4, 1, 3]
Note that the compilation time is slow and the execution time should not be great either. This is due to the lists (and thus to the temporary arrays created at runtime). The next step is simply to use arrays. The bad news is that concatenation is not well supported by Numba (the general case is not easy to implement, though specific cases are trivial). You can create a new array and copy each part (or even better: you can preallocate an array and slice it during the recursive calls).
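As an illustration, here is one possible shape of that copy-based variant (a sketch under the same signature; process_arrays is a hypothetical name, and quickhull_2d would need the same treatment):

import numpy as np
from numba import njit
from numba.np.extensions import cross2d

@njit('(float64[:,:], int64[:], int64, int64)')
def process_arrays(S, P, a, b):
    signed_dist = cross2d(S[P] - S[a], S[b] - S[a])
    # Boolean masking replaces the list comprehension.
    K = P[(signed_dist > 0) & (P != a) & (P != b)]
    if K.size == 0:
        out = np.empty(2, dtype=np.int64)
        out[0] = a
        out[1] = b
        return out
    c = P[np.argmax(signed_dist)]
    left = process_arrays(S, K, a, c)
    right = process_arrays(S, K, c, b)
    # Manual concatenation of left[:-1] and right.
    out = np.empty(left.size - 1 + right.size, dtype=np.int64)
    out[:left.size - 1] = left[:-1]
    out[left.size - 1:] = right
    return out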
Also note that any recursive function can be transformed into a non-recursive one using a manual stack. That being said, it may be slower and makes the code more verbose. There are some benefits to this approach though: it avoids stack overflows when the recursion is deep, and it may be faster if the function is rewritten so as not to stack one of the calls (akin to tail-call optimization).
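As an illustration, a sketch of that transformation in plain NumPy (process_iterative is a hypothetical name; the same structure works under @njit with a typed list as the stack). Each leaf emits only its first point, which mirrors the [:-1] slicing of the recursive version, and the two subproblems are pushed in reverse order so the output order is preserved:

import numpy as np

def process_iterative(S, P, a, b):
    hull = []
    stack = [(a, b, P)]
    while stack:
        a, b, P = stack.pop()
        signed_dist = np.cross(S[P] - S[a], S[b] - S[a])
        K = P[(signed_dist > 0) & (P != a) & (P != b)]
        if K.size == 0:
            hull.append(a)  # `b` is emitted as the start of the next segment
            continue
        c = P[np.argmax(signed_dist)]
        stack.append((c, b, K))  # processed second (LIFO order)
        stack.append((a, c, K))  # processed first
    return hull

The driver then becomes process_iterative(S, idx, a, m) + process_iterative(S, idx, m, a) with idx = np.arange(S.shape[0]), matching quickhull_2d.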
Related
I am currently doing some studies on computing a 4th-order tensor in NumPy with the einsum function.
The tensor I am computing is written in Einstein notation, and the function einsum does the work perfectly! But I would like to know what it is doing in the following case:
import numpy as np
a=np.array([[2,0,3],[0,1,0],[0, 0, 4]])
b= np.eye(3)
r1=np.einsum("ij,kl->ijkl", a, b)
r2=np.einsum("ik,jl->ijkl", a, b)
In r1 I am basically doing the standard tensor product (equivalent to np.tensordot(a, b, axes=0)).
What about in r2?
I know I can get the same values by doing a[:,None,:,None]*b[None,:,None,:], but I do not know what this indexing is doing. Does this operation have a name?
Sorry if this is too basic!
I tried to use the transpose definition to change multiple axes. It works for 'ij,kl->ijkl', 'ik,jl->ijkl', and 'kl,ij->ijkl', but fails for 'il,jk->ijkl', 'jl,ik->ijkl', and 'jk,il->ijkl':
import numpy as np

a = np.eye(3)
a[0][0] = 2
a[0][-1] = 3
a[-1][-1] = 4
b = np.eye(3)

def permutation(str_, Arr):
    Arr = np.reshape(Arr, [3, 3, 3, 3])

    def splitString(str_):
        tmp1 = str_.split(',')
        tmp2 = tmp1[1].split('->')
        str_idx1 = tmp1[0]
        str_idx2 = tmp2[0]
        str_idx_out = tmp2[1]
        return str_idx1, str_idx2, str_idx_out

    idx_a, idx_b, idx_out = splitString(str_)
    dict_ = {'i': 0, 'j': 1, 'k': 2, 'l': 3}

    def split(word):
        return [char for char in word]

    a, b = split(idx_a)
    c, d = split(idx_b)
    Arr = np.transpose(Arr, (dict_[a], dict_[b], dict_[c], dict_[d]))
    return Arr
str_ = 'jk,il->ijkl'
d = np.outer(a, b)
f = np.einsum(str_, a, b)
check = permutation(str_, d)
if np.count_nonzero(f - check) == 0:
    print('Code is working!')
else:
    print("Something is wrong...")
Appreciate your suggestions!
r2 is essentially the same tensor as r1, but with the indices rearranged. In particular, r2[i,j,k,l] is equal to a[i,k]*b[j,l].
For instance:
>>> r2[0,1,2,1]
3.0
This corresponds to the fact that a[0,2]*b[1,1] is 3 * 1, which is indeed 3.
Another way to think about this is to observe that r2[:,j,:,l] is equal to a whenever j == l, and is a zero matrix otherwise (since b is the identity).
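A quick check of this reading (reusing a and b from the question):

import numpy as np

a = np.array([[2, 0, 3], [0, 1, 0], [0, 0, 4]], dtype=float)
b = np.eye(3)
r2 = np.einsum("ik,jl->ijkl", a, b)

# r2[i,j,k,l] == a[i,k] * b[j,l]: the broadcast product below builds
# the same outer product with the axes already interleaved.
assert np.allclose(r2, a[:, None, :, None] * b[None, :, None, :])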
I am not using NumPy but the Eigen::Tensor C++ API, which only has contraction operations; this is just to help me think through the implementation from Python.
So 'ij,ijk->ik' is basically like doing a matrix-vector product for each index of the first dimension:
import numpy as np

a = np.random.uniform(size=[10, 4])
b = np.random.uniform(size=[10, 4, 4])
vec = []
for i in range(10):
    vec.append(a[i].dot(b[i]))
print(np.stack(vec, axis=0))
## or with einsum
print(np.einsum('ij,ijk->ik', a, b))
This does not seem to be easily doable with tensordot. Any suggestions?
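One possible workaround (my suggestion, not from the original post): tensordot has no batch dimension, but np.matmul does, so the same batched product can be written as:

import numpy as np

a = np.random.uniform(size=[10, 4])
b = np.random.uniform(size=[10, 4, 4])

# matmul broadcasts over the leading (batch) dimension; treat each
# a[i] as a 1x4 row vector, then drop the singleton row axis.
out = np.matmul(a[:, None, :], b)[:, 0, :]
assert np.allclose(out, np.einsum('ij,ijk->ik', a, b))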
Not sure if this question belongs here or on Cross Validated, but since the primary issue is programming-language related, I am posting it here.
Inputs:
Y = big 2D NumPy array, shape (300000, 30)
X = 1D array, shape (30,)
Desired output:
B = 1D array, shape (300000,), each element of which is the regression coefficient from regressing the corresponding row of Y (a vector of length 30) against X.
So B[0] = scipy.stats.linregress(X, Y[0])[0]
I tried this first:
B = scipy.stats.linregress(X,Y)[0]
hoping that it would broadcast X according to the shape of Y. Next, I broadcast X myself to match the shape of Y. But on both occasions, I got this error:
File "C:\...\scipy\stats\stats.py", line 3011, in linregress
ssxm, ssxym, ssyxm, ssym = np.cov(x, y, bias=1).flat
File "C:\...\numpy\lib\function_base.py", line 1766, in cov
return (dot(X, X.T.conj()) / fact).squeeze()
MemoryError
I used a manual approach to calculate beta, and following Sascha's suggestion below, I also used scipy.linalg.lstsq, as follows:
from scipy.linalg import lstsq

B = lstsq(Y.T, X)[0]                  # first estimate of beta
Y1 = Y - Y.mean(1)[:, None]
X1 = X - X.mean()
B1 = np.dot(Y1, X1) / np.dot(X1, X1)  # second estimate of beta
The two estimates of beta are very different, however:
>>> B1
Out[10]: array([0.135623, 0.028919, -0.106278, ..., -0.467340, -0.549543, -0.498500])
>>> B
Out[11]: array([0.000014, -0.000073, -0.000058, ..., 0.000002, -0.000000, 0.000001])
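For reference, a quick sanity check (my own sketch, reusing X, Y, B and B1 from above) compares both estimates against the row-wise definition for a few rows:

from scipy.stats import linregress

# Compare both estimates against the row-wise definition
# B[i] = linregress(X, Y[i])[0] for a handful of rows.
for i in range(3):
    print(linregress(X, Y[i])[0], B1[i], B[i])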
SciPy's linregress outputs the slope and the intercept, which together define the regression line.
If you want to access the coefficients directly, SciPy's lstsq might be more appropriate, as an equivalent formulation.
Of course, you need to feed it the correct dimensions (your data is not ready as-is; it needs preprocessing; swap the dims).
Code
import numpy as np
from scipy.linalg import lstsq
Y = np.random.random((300000,30))
X = np.random.random(30)
x, res, rank, s = lstsq(Y.T, X) # Y transposed!
print(x)
print(x.shape)
Output
[ 1.73122781e-05 2.70274135e-05 9.80840639e-06 ..., -1.84597771e-05
5.25035470e-07 2.41275026e-05]
(300000,)
Suppose I want to create an array b which is a version of array a with the i-th row set to zero.
Currently, I have to do:
b = a.copy()
b[i, :] = 0
This is a bit annoying, because you can't do that in a lambda, and everything else in NumPy is functional. I'd like a function similar to Theano's set_subtensor, where you could write:
b = a.set_subtensor((i, slice(None)), 0)
or
b = np.set_subtensor(a, (i, slice(None)), 0)
As far as I can tell, there's nothing like that in numpy. Or is there?
Edit
The answer appears to be no, there is no such function, you need to define one yourself. See hpaulj's response.
Do you mean a simple function like this:
def subtensor(a, ind, val):
    b = a.copy()
    b[ind] = val
    return b
In [192]: a=np.arange(12).reshape(3,4)
In [194]: subtensor(a,(1,slice(None)),0)
Out[194]:
array([[ 0, 1, 2, 3],
[ 0, 0, 0, 0],
[ 8, 9, 10, 11]])
Indexing takes a tuple like (1, slice(None)).
There are some alternative assignment functions like put, place, and copyto, but none of them looks like it does this task.
These are equivalent:
b[0,:] = 1
b.__setitem__((0,slice(None)),1)
That is, the Python interpreter converts [] syntax into a method call.
This is an in-place operation. I don't know of anything that first makes a copy.
Functions like choose and where return copies, but they (normally) work with boolean masks, not indexing tuples.
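That said, for the row-zeroing example specifically, where can be pressed into service with a broadcast boolean mask, giving a functional one-liner (a sketch, not a general set_subtensor replacement):

import numpy as np

a = np.arange(12).reshape(3, 4)
i = 1

# The (3, 1) mask broadcasts against a's (3, 4) shape, selecting 0
# for row i and the original values everywhere else; `a` is untouched.
b = np.where(np.arange(a.shape[0])[:, None] == i, 0, a)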
My apologies if this has been answered many times, but I just can't find a solution.
Assume the following code:
import numpy as np
A,_,_ = np.meshgrid(np.arange(5),np.arange(7),np.arange(10))
B = (np.random.rand(7,10)*5).astype(int)
How can I index A using B so that B gives, for each position over the first and last dimensions of A, an index into the middle dimension (i.e. A[magic] == B)?
I have tried
A[:,B,:] which doesn't work due to peculiarities of advanced indexing.
A[:,B,np.arange(10)] generates 7 copies of the matrix I'm after
A[np.arange(7),B,np.arange(10)] gives the error:
ValueError: shape mismatch: objects cannot be broadcast to a single shape
Any other suggestions?
These both work:
A[0, B, 0]
A[B, B, B]
Really, only the B in position 1 matters; the others can be any ranges that broadcast to B.shape, with values limited by A.shape[0] (for axis 0) and A.shape[2] (for axis 2). For a ridiculous example:
A[list(range(7)) + list(range(3)), B, range(9, -1, -1)]
But you don't want to use : because then you'll get, as you said, 7 or 10 (or both!) "copies" of the array you want.
A, _, _ = np.meshgrid(np.arange(5),np.arange(7),np.arange(10))
B = (np.random.rand(7,10)*A.shape[1]).astype(int)
np.allclose(B, A[0, B, 0])
#True
np.allclose(B, A[B, B, B])
#True
np.allclose(B, A[list(range(7)) + list(range(3)), B, range(9, -1, -1)])
#True