How to combine scipy interp1d with mpmath quadosc - numpy

I have a density function (from quantum mechanics calculations) that needs to be multiplied by a spherical Bessel function on a momentum grid (momentum q is a 1d array, real-space distance r is a 1d array, so jn(q*r) must be evaluated as a 2d array). The product is then integrated over real space to obtain a result as a function of momentum (a 1d array with the same shape as q).
The Bessel function oscillates, while the density function decays rapidly beyond a threshold distance. I used the adaptive quadrature in quadpy, which is fine when the oscillation is slow, but it fails when the oscillation is fast (for high momentum values or high Bessel-function orders). mpmath's quadosc could be a nice option, but currently I hit the error "object arrays are not supported", which seems to be the same issue as in Relation between mpmath and scipy: Type Error. What would be the best way to solve this, given that the density function is computed outside of mpmath?
import numpy as np
from mpmath import besselj, sqrt, pi, besseljzero, inf, quadosc
from scipy.interpolate import interp1d
n = 1
q = np.geomspace(1e-7, 500, 1000)
# let's create a fake Gaussian density
x = np.geomspace(1e-7, 10, 1000)
y = np.exp(-(x-5)**2)
density = interp1d(x, y, kind='cubic', fill_value=0, bounds_error=False)
# if we just want to integrate the spherical bessel function
def spherical_jn(x, n=n):
    return besselj(n + 1 / 2, x) * sqrt(pi / 2 / x)
# this is fine
vals = quadosc(
    spherical_jn, [0, inf], zeros=lambda m: besseljzero(n + 1 / 2, m)
)
# now we want to integrate the spherical bessel function times the density
def spherical_jn_density(x, n=n):
    grid = q[..., None] * x
    return besselj(n + 1 / 2, grid) * sqrt(pi / 2 / grid) * density(x)
# this will fail
vals_density = quadosc(
    spherical_jn_density, [0, inf], zeros=lambda m: besseljzero(n + 1 / 2, m)
)
Expected: an accurate integral of a highly oscillatory spherical Bessel function multiplied by an arbitrary decaying function (one that decays to zero at large distance).

Your density is an interp1d callable, which works like this:
In [33]: density(.5)
Out[33]: array(1.60522789e-09)
It does not work when given an mpmath object:
In [34]: density(mpmath.mpf(.5))
ValueError: object arrays are not supported
It works if x is first converted to an ordinary float:
In [37]: density(float(mpmath.mpf(.5)))
Out[37]: array(1.60522789e-09)
Tweaking your function:
def spherical_jn_density(x, n=1):
    print(repr(x))
    grid = q[..., None] * x
    return besselj(n + 1 / 2, grid) * sqrt(pi / 2 / grid) * density(x)
and trying to run quadosc (with a smaller q):
In [57]: vals_density = quadosc(
...: spherical_jn_density, [0, inf], zeros=lambda m: besseljzero(n + 1 / 2, m))
mpf('0.506414729137261838698106')
TypeError: cannot create mpf from array([[mpf('5.06414729137261815781894e-8')],
[mpf('0.000000473559111442409924364745')],
[mpf('0.00000442835129247081824275722')],
[mpf('0.0000414104484439061558283487')],
[mpf('0.000387237851532012775822723')],
[mpf('0.00362114295531604773233197')],
[mpf('0.0338620727569835882851491')],
[mpf('0.316651395857188250996884')],
[mpf('2.96107409661232278850947')],
[mpf('27.6896294168963266721213')]], dtype=object)
In other words,
besselj(n + 1 / 2, grid)
is already having problems, even before density(x) is evaluated. mpmath functions don't work with numpy arrays, and many numpy/scipy functions don't work with mpmath objects.
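One way around the type mismatch, as a minimal sketch (not tested on the real data), is to keep the integrand scalar: fix one momentum value at a time, convert the mpmath argument to a plain float before calling the interpolator, and loop over the momentum grid. The helper name make_integrand and the tiny illustrative q grid are my additions, not part of the original code.
import numpy as np
from mpmath import besselj, sqrt, pi, besseljzero, inf, quadosc
from scipy.interpolate import interp1d

n = 1
x_grid = np.geomspace(1e-7, 10, 1000)
density = interp1d(x_grid, np.exp(-(x_grid - 5)**2),
                   kind='cubic', fill_value=0, bounds_error=False)

def make_integrand(q_scalar, n=n):
    # scalar in, scalar out: mpmath handles the Bessel factor,
    # the interpolator only ever sees an ordinary float
    def f(x):
        rho = float(density(float(x)))
        return besselj(n + 1 / 2, q_scalar * x) * sqrt(pi / 2 / (q_scalar * x)) * rho
    return f

q = np.geomspace(0.1, 50, 5)   # small illustrative grid; the real one is much larger
vals = [quadosc(make_integrand(qi), [0, inf],
                zeros=lambda m: besseljzero(n + 1 / 2, m) / qi)
        for qi in q]
This is slow for a long momentum grid, but it sidesteps the object-array problem because nothing mpmath-valued is ever passed to the scipy interpolator.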

How do I efficiently find the distance from a set of lines/rays to a point cloud

How can I efficiently find the Euclidean distance between N unbounded rays (each parametrized by a point and a direction) and M points in 3D, using Python/NumPy/PyTorch?
The goal is to end up with N distances, one from each ray to its nearest point.
The naive solution is to compute the distance between each ray and each point, but this has complexity O(NM).
Does there exist any algorithm that may speed up this query? Perhaps one based on r-trees?
Answering my own question:
I ended up using a ray-marching approximation based on scikit-learn's BallTree:
from sklearn.neighbors import BallTree
import numpy as np

def rays_pc_distance(
    ray_origins: np.ndarray,  # (N, 3)
    ray_dirs: np.ndarray,     # (N, 3)
    points: np.ndarray,       # (M, 3)
    n_steps: int = 10,
) -> np.ndarray:
    index = BallTree(points)
    # distance from each ray origin to its nearest point
    min_d = index.query(ray_origins + np.zeros_like(ray_dirs), k=1, return_distance=True)[0]
    acc_d = min_d.copy()
    for _ in range(n_steps):
        # march along each ray by the accumulated distance and query again
        current_d = index.query(ray_origins + acc_d * ray_dirs, k=1, return_distance=True)[0]
        np.minimum(current_d, min_d, out=min_d)
        acc_d += current_d  # * 0.8
    return min_d  # (N, 1)
Please let me know if there is something seriously wrong with this approach.
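For reference, the brute-force exact answer can be computed in O(NM) time and memory; here is a sketch (the function name rays_pc_distance_exact and the assumption that rays are parametrized as origin + t*dir with t >= 0 are mine) that can be used to validate the approximation on small inputs:
import numpy as np

def rays_pc_distance_exact(ray_origins, ray_dirs, points):
    # unit direction vectors, shape (N, 3)
    d = ray_dirs / np.linalg.norm(ray_dirs, axis=1, keepdims=True)
    # vectors from each ray origin to each point, shape (N, M, 3)
    v = points[None, :, :] - ray_origins[:, None, :]
    # parameter of the closest point on each ray, clamped to t >= 0 (ray, not line)
    t = np.clip(np.einsum('nmk,nk->nm', v, d), 0, None)
    # residual from the closest ray point to each target point
    diff = v - t[..., None] * d[:, None, :]
    return np.linalg.norm(diff, axis=-1).min(axis=1, keepdims=True)  # (N, 1)
On random data, the ray-marching min_d should be close to, and never smaller than, this exact result, since it measures distances from points that actually lie on the ray.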

How to efficiently compute an L2 distance between rows of two array using only basic numpy operations? [duplicate]

I have 2 lists of points as numpy.ndarray, each row is the coordinate of a point, like:
a = np.array([[1,0,0],[0,1,0],[0,0,1]])
b = np.array([[1,1,0],[0,1,1],[1,0,1]])
Here I want to calculate the Euclidean distance between all pairs of points in the two lists: for each point p_a in a, I want the distance between it and every point p_b in b. So the result is
d = np.array([[1,sqrt(3),1],[1,1,sqrt(3)],[sqrt(3),1,1]])
How to use matrix multiplication in numpy to compute the distance matrix?
Using direct numpy broadcasting, you can do this:
dist = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
Alternatively, scipy has a routine that will compute this slightly more efficiently (particularly for large matrices)
from scipy.spatial.distance import cdist
dist = cdist(a, b)
I would avoid solutions that depend on factoring-out matrix products (of the form A^2 + B^2 - 2AB), because they can be numerically unstable due to floating point roundoff errors.
To compute the squared Euclidean distance for each pair of rows, one from X and one from Y, we need to find:
(Xik-Yjk)**2 = Xik**2 + Yjk**2 - 2*Xik*Yjk
and then sum along k to get the squared distance dist(Xi, Yj).
Distributing the sum, this reduces to:
dist(Xi,Yj) = sum_k(Xik**2) + sum_k(Yjk**2) - 2*sum_k(Xik*Yjk)
Bringing in matrix-multiplication for the last part, we would have all the distances, like so -
dist = sum_rows(X^2)[:, None] + sum_rows(Y^2) - 2*matrix_multiplication(X, Y.T)
Hence, putting into NumPy terms, we would end up with the euclidean distances for our case with a and b as the inputs, like so -
np.sqrt((a**2).sum(1)[:,None] + (b**2).sum(1) - 2*a.dot(b.T))
Leveraging np.einsum, we could replace the first two summation-reductions with -
np.einsum('ij,ij->i',a,a)[:,None] + np.einsum('ij,ij->i',b,b)
More info could be found on eucl_dist package's wiki page (disclaimer: I am its author).
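Putting those pieces together, here is a small self-contained sketch; the function name pairwise_l2 and the clamp at zero (to guard against the tiny negative values from round-off mentioned above) are my additions:
import numpy as np

def pairwise_l2(a, b):
    # squared norms of the rows, shaped for broadcasting
    a2 = np.einsum('ij,ij->i', a, a)[:, None]   # (N, 1)
    b2 = np.einsum('ij,ij->i', b, b)[None, :]   # (1, M)
    # clamp at zero to avoid sqrt of tiny negative values caused by round-off
    sq = np.maximum(a2 + b2 - 2 * a.dot(b.T), 0)
    return np.sqrt(sq)

a = np.array([[1., 0, 0], [0, 1, 0], [0, 0, 1]])
b = np.array([[1., 1, 0], [0, 1, 1], [1, 0, 1]])
print(pairwise_l2(a, b))   # matches the expected d above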
If you have two 1-dimensional arrays, x and y, you can convert them into matrices with repeated columns, transpose, and apply the distance formula. This assumes that x and y are coordinate pairs. The result is a symmetric distance matrix.
x = [1, 2, 3]
y = [4, 5, 6]
xx = np.repeat(x, 3, axis=0).reshape(3, 3)
yy = np.repeat(y, 3, axis=0).reshape(3, 3)
dist = np.sqrt((xx - xx.T)**2 + (yy - yy.T)**2)
dist
Out[135]:
array([[0. , 1.41421356, 2.82842712],
[1.41421356, 0. , 1.41421356],
[2.82842712, 1.41421356, 0. ]])
L2 distance = (a^2 + b^2 - 2ab)^0.5
a = np.random.randn(5, 3)
b = np.random.randn(2, 3)
a2 = np.sum(np.square(a), axis = 1)[..., None]
b2 = np.sum(np.square(b), axis = 1)[None, ...]
ab = -2*np.dot(a, b.T)
dist = np.sqrt(a2 + b2 + ab)
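A quick check of this against scipy's cdist (a sketch added here for verification):
from scipy.spatial.distance import cdist
print(np.allclose(dist, cdist(a, b)))  # expected: True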

KNN: give more weight to specific features in the distance

I'm using the Kobe Bryant Dataset.
I wish to predict the shot_made_flag with KnnRegressor.
I've used game_date to extract year and month features:
# covert season to years
kobe_data_encoded['season'] = kobe_data_encoded['season'].apply(lambda x: int(re.compile('(\d+)-').findall(x)[0]))
# add year and month using game_date
kobe_data_encoded['year'] = kobe_data_encoded['game_date'].apply(lambda x: int(re.compile('(\d{4})').findall(x)[0]))
kobe_data_encoded['month'] = kobe_data_encoded['game_date'].apply(lambda x: int(re.compile('-(\d+)-').findall(x)[0]))
kobe_data_encoded = kobe_data_encoded.drop(columns=['game_date'])
I wish to use the season, year, and month features to give them more weight in the distance function, so that events closer in date to the current event become closer neighbors, while still maintaining reasonable distances to other potential data points. For example, I don't want an event within the same day to become the closest neighbor just because of the date features; the other features, such as shot_range, should still be taken into account.
To give these features more weight I've tried the metric argument with a custom distance function, but the arguments passed to that function are plain numpy arrays without the pandas column information, so I'm not sure how to implement what I'm trying to do.
EDIT:
Using larger weights for the date features to find the optimal k with 10-fold CV, running over k in [1, 100]:
from IPython.display import display
from sklearn import preprocessing
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
import numpy as np
import matplotlib.pyplot as plt

# scaling
min_max_scaler = preprocessing.MinMaxScaler()
scaled_features_df = kobe_data_encoded.copy()
column_names = ['loc_x', 'loc_y', 'minutes_remaining', 'period',
                'seconds_remaining', 'shot_distance', 'shot_type', 'shot_zone_range']
scaled_features = min_max_scaler.fit_transform(scaled_features_df[column_names])
scaled_features_df[column_names] = scaled_features

not_classified_df = scaled_features_df[scaled_features_df['shot_made_flag'].isnull()]
classified_df = scaled_features_df[scaled_features_df['shot_made_flag'].notnull()]
X = classified_df.drop(columns=['shot_made_flag'])
y = classified_df['shot_made_flag']

cv = StratifiedKFold(n_splits=10, shuffle=True)
neighbors = [x for x in range(1, 100)]
cv_scores = []

weight = np.ones((X.shape[1],))
weight[[X.columns.get_loc("season"),
        X.columns.get_loc("year"),
        X.columns.get_loc("month")
        ]] = 5
weight = weight / weight.sum()  # normalize weights

def my_distance(x, y):
    dist = (x - y) ** 2
    return np.dot(dist, weight)

for k in neighbors:
    print('k: ', k)
    knn = KNeighborsClassifier(n_neighbors=k, metric=my_distance)
    cv_scores.append(np.mean(cross_val_score(knn, X, y, cv=cv, scoring='roc_auc')))

# optimal k (ROC AUC is maximized, so take the max score)
optimal_k_index = cv_scores.index(max(cv_scores))
optimal_k = neighbors[optimal_k_index]
print('best k: ', optimal_k)

plt.plot(neighbors, cv_scores)
plt.xlabel('Number of Neighbors K')
plt.ylabel('ROC AUC')
plt.show()
This runs really slowly; any idea how to make it faster?
The idea of the weighted features is to favor neighbors whose dates are close to the data point's date (to avoid data leakage), with cross-validation used to find the optimal k.
First, you have to prepare a numpy 1D weight array, specifying weight for each feature. You could do something like:
weight = np.ones((M,)) # M is no of features
weight[[1,7,10]] = 2 # increase the weight of the features at indices 1, 7 and 10
weight = weight/weight.sum() #Normalize weights
You can use kobe_data_encoded.columns to find indexes of season, year, month features in your dataframe to replace 2nd line above.
Now define a distance function, which per the guidelines has to take two 1D numpy arrays.
def my_dist(x, y):
    global weight  # 1D array, same shape as x or y
    dist = (x - y) ** 2  # 1D array, same shape as x or y
    return np.dot(dist, weight)  # a scalar float
And initialize KNeighborsRegressor as:
knn = KNeighborsRegressor(metric=my_dist)
EDIT:
To make things efficient, you can precompute the distance matrix and reuse it in KNN. This should bring a significant speedup by reducing calls to my_dist, since a non-vectorized custom Python distance function is quite slow. So now:
Xv = X.values  # plain numpy array, so rows can be indexed positionally
dist = np.zeros((len(Xv), len(Xv)))  # computing the NxN distance matrix
for i in range(len(Xv)):  # you can halve this by using the fact that dist[i,j] = dist[j,i]
    for j in range(len(Xv)):
        dist[i, j] = my_dist(Xv[i], Xv[j])

for k in neighbors:
    print('k: ', k)
    knn = KNeighborsClassifier(n_neighbors=k, metric='precomputed')  # note: metric='precomputed'
    cv_scores.append(np.mean(cross_val_score(knn, dist, y, cv=cv, scoring='roc_auc')))  # note: passing dist instead of X
I couldn't test it, so let me know if something isn't alright.
Just to add to Shihab's answer regarding the distance computation: you can use scipy's pdist, as suggested in this post, which is faster and more efficient.
from scipy.spatial.distance import pdist, squareform
# create the custom weight array
weight = ...
# calculate pairwise distances, using the Minkowski norm (p=2) with custom weights
distances = pdist(X, 'minkowski', p=2, w=weight)
# reformat the result as a square matrix
distances_as_2d_matrix = squareform(distances)
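Putting this together with the precomputed-metric idea above, here is a minimal end-to-end sketch; the random stand-in data, the feature indices, and the assumption of SciPy >= 1.6 (where the Minkowski metric accepts a w weight vector) are mine:
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 8))            # stand-in for the scaled feature matrix
y = rng.integers(0, 2, size=200)    # stand-in for shot_made_flag

weight = np.ones(X.shape[1])
weight[[1, 2, 3]] = 5               # hypothetical indices of season/year/month
weight = weight / weight.sum()

# weighted distances computed once, then reused for every k
dist = squareform(pdist(X, 'minkowski', p=2, w=weight))

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for k in (5, 15, 25):
    knn = KNeighborsClassifier(n_neighbors=k, metric='precomputed')
    print(k, cross_val_score(knn, dist, y, cv=cv, scoring='roc_auc').mean())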

Linear interpolation of two 2D arrays

In a previous question (fastest way to use numpy.interp on a 2-D array) someone asked for the fastest way to implement the following:
np.array([np.interp(X[i], x, Y[i]) for i in range(len(X))])
Assume X and Y are matrices with many rows, so the for loop is costly. There is a nice solution in this case that avoids the for loop (see the linked answer above).
I am faced with a very similar problem, but I am unclear on whether the for loop can be avoided in this case:
np.array([np.interp(x, X[i], Y[i]) for i in range(len(X))])
In other words, I want to use linear interpolation to upsample a large number of signals stored in the rows of two matrices X and Y.
I was hoping to find a function in numpy or scipy (scipy.interpolate.interp1d) that supported this operation via broadcasting semantics, but so far I can't seem to find one.
Other points:
If it helps, the rows X[i] and x are pre-sorted in my application. Also, in my case len(x) is quite a bit larger than len(X[i]).
The function scipy.signal.resample almost does what I want, but it doesn't use linear interpolation...
This is a vectorized approach that directly implements linear interpolation. First, for each x value and each i, j compute the weight w expressing how much of the interval (X[i, j], X[i, j+1]) is to the left of x.
If the entire interval is to the left of x, the weight of that interval is 1.
If none of the interval is to the left of x, the weight is 0.
Otherwise, the weight is a number between 0 and 1, expressing the proportion of that interval to the left of x.
Then the value of the piecewise-linear interpolant is computed as Y[i, 0] plus the sum of the differences dY[i, j] multiplied by the corresponding weights. The logic is to track how much the interpolant changes from interval to interval: the differences dY = np.diff(Y, axis=1) give the change over each entire interval, and multiplying by the weight prorates that change.
Setup, with some small data arrays
import numpy as np
X = np.array([[0, 2, 5, 6, 9], [1, 3, 4, 7, 8]])
Y = np.array([[3, 5, 2, 4, 1], [8, 6, 9, 5, 4]])
x = np.linspace(1, 8, 20)
The computation
dX = np.diff(X, axis=1)
dY = np.diff(Y, axis=1)
w = np.clip((x - X[:, :-1, None])/dX[:, :, None], 0, 1)
y = Y[:, [0]] + np.sum(w*dY[:, :, None], axis=1)
Demonstration
This is only to show that the interpolation is correct. Blue points: original data, red ones are computed.
import matplotlib.pyplot as plt
plt.plot(x, y[0], 'ro')
plt.plot(X[0], Y[0], 'bo')
plt.plot(x, y[1], 'rd')
plt.plot(X[1], Y[1], 'bd')
plt.show()
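A quick numerical check (a sketch added here) against the looped np.interp baseline from the question:
y_loop = np.array([np.interp(x, X[i], Y[i]) for i in range(len(X))])
print(np.allclose(y, y_loop))  # expected: True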

Python Memory error on scipy stats. Scipy linalg lstsq <> manual beta

Not sure if this question belongs here or on crossvalidated but since the primary issue is programming language related, I am posting it here.
Inputs:
Y = big 2D numpy array, shape (300000, 30)
X = 1D array, shape (30,)
Desired Output:
B = 1D array, shape (300000,), where each element is the slope from regressing the corresponding row of Y (length 30) against X
So B[0] = scipy.stats.linregress(X,Y[0])[0]
I tried this first:
B = scipy.stats.linregress(X,Y)[0]
hoping that it would broadcast X according to the shape of Y. Next I broadcast X myself to match the shape of Y. But on both occasions, I got this error:
File "C:\...\scipy\stats\stats.py", line 3011, in linregress
ssxm, ssxym, ssyxm, ssym = np.cov(x, y, bias=1).flat
File "C:\...\numpy\lib\function_base.py", line 1766, in cov
return (dot(X, X.T.conj()) / fact).squeeze()
MemoryError
I used a manual approach to calculate beta and, on Sascha's suggestion below, also used scipy.linalg.lstsq, as follows:
B = lstsq(Y.T, X)[0] # first estimate of beta
Y1=Y-Y.mean(1)[:,None]
X1=X-X.mean()
B1= np.dot(Y1,X1)/np.dot(X1,X1) # second estimate of beta
The two estimates of beta are very different however:
>>> B1
Out[10]: array([0.135623, 0.028919, -0.106278, ..., -0.467340, -0.549543, -0.498500])
>>> B
Out[11]: array([0.000014, -0.000073, -0.000058, ..., 0.000002, -0.000000, 0.000001])
Scipy's linregress will output slope+intercept which defines the regression-line.
If you want to access the coefficients naturally, scipy's lstsq might be more appropriate, which is an equivalent formulation.
Of course you need to feed it with the correct dimensions (your data is not ready; needs preprocessing; swap dims).
Code
import numpy as np
from scipy.linalg import lstsq
Y = np.random.random((300000,30))
X = np.random.random(30)
x, res, rank, s = lstsq(Y.T, X) # Y transposed!
print(x)
print(x.shape)
Output
[ 1.73122781e-05 2.70274135e-05 9.80840639e-06 ..., -1.84597771e-05
5.25035470e-07 2.41275026e-05]
(300000,)