Polynomial surface fit with numpy

How do I fit a 2D surface z=f(x,y) with a polynomial in numpy with full cross terms?

This is inherently ill-conditioned numerically, but you could do something like this:
import numpy as np
x = np.random.randn(500)
y = np.random.randn(500)
z = np.random.randn(500)  # dependent variable
# Design matrix: one row per term 1, x, y, x**2, x*y, y**2; transposed for lstsq
v = np.array([np.ones(500), x, y, x**2, x * y, y**2])
coefficients, residuals, rank, singular_values = np.linalg.lstsq(v.T, z, rcond=None)
The more terms you add, the worse things get, numerically. Are you sure you want a polynomial interpolant?
There are other polynomial bases for which the matrix of values is much better conditioned (orthogonal families such as the Chebyshev or Legendre polynomials); any college-level numerical analysis textbook covers this material.

You can use a combination of polyvander2d and polyval2d, but you will need to do the fit yourself using the design matrix output from polyvander2d, probably involving scaling and such. It should be possible to build a class Polynomial2d from those tools.
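A minimal sketch of that manual route, with made-up sample data and an illustrative degree (2, 2) in x and y (polyvander2d and polyval2d are the real numpy.polynomial.polynomial functions; the data and degree choice are just assumptions for the example):
import numpy as np
from numpy.polynomial import polynomial as P
x = np.random.randn(200)
y = np.random.randn(200)
z = 1.0 + 2.0 * x - y + 0.5 * x * y + 0.01 * np.random.randn(200)  # made-up data
deg = (2, 2)  # maximum exponent of x and of y
V = P.polyvander2d(x, y, deg)  # design matrix, shape (200, 9)
coef_flat, residuals, rank, sv = np.linalg.lstsq(V, z, rcond=None)
coef = coef_flat.reshape(deg[0] + 1, deg[1] + 1)  # coef[i, j] multiplies x**i * y**j
z_fit = P.polyval2d(x, y, coef)  # evaluate the fitted surface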

Related

Projecting a vector in a given plane using numpy

Using numpy, how can I do an orthogonal projection of, for example, the vector np.array([0.3,0.5,0.2]) onto the plane 3x+2y-2z=0?
EDIT:
I think one may simply use numpy.linalg.lstsq to find the orthogonal projection?
Your hyperplane is defined by the set of x such that <a,x>=0, where a is a vector orthogonal to the plane. In your example,
a = (3,2,-2).
The projection of a point p onto the hyperplane is the point p_proj such that p - p_proj is orthogonal to the plane. This means that it is parallel to a, in other words p - p_proj = lambda*a, so
p_proj = p - lambda*a. (1)
Since p_proj is in the hyperplane, <p_proj, a> = 0, so taking the inner product of both sides of (1) with a gives
lambda = <p,a>/<a,a>. (2)
Substituting (2) into (1), you get
Projection(p) = p_proj = p - (<p,a>/<a,a>) * a,
which can be done easily in numpy using np.dot(v_1,v_2) wherever we encounter <v_1,v_2>:
import numpy as np
def projection(p, a):
    # Orthogonal projection of p onto the hyperplane through the origin with normal a
    lambda_val = np.dot(p, a) / np.dot(a, a)
    return p - lambda_val * a
(Note that this is one step of Gram-Schmidt orthogonalization.)
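Applied to the vector and plane from the question (the printed values are approximate):
p = np.array([0.3, 0.5, 0.2])
a = np.array([3.0, 2.0, -2.0])  # normal vector of the plane 3x + 2y - 2z = 0
p_proj = projection(p, a)
print(p_proj)             # approximately [0.035, 0.324, 0.376]
print(np.dot(p_proj, a))  # approximately 0, confirming p_proj lies in the plane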

Difficulty fitting with Gaussian distribution

I am given two (long) finite sequences (i.e. numpy arrays) x and y of the same length. Their graph is shown here:
[figure: plot of y against x]
Array x is plotted on the x-axis and is monotonically increasing. My goal is to fit the graph with a Gaussian distribution such that the "major peak" is preserved, which looks something like this:
[figure: the desired fit, a smooth bell-shaped curve over the major peak]
Here is a part of my code:
import numpy as np
import matplotlib.pyplot as plt
from astropy import modeling
fitter = modeling.fitting.LevMarLSQFitter()
model = modeling.models.Gaussian1D(amplitude = np.max(y), mean = y[np.argmax(x)],stddev = 1) #(1)
fitted_model = fitter(model, x, y)
plt.plot(x,fitted_model(x),linewidth=0.7, color = 'black')
plt.plot(x,y,linewidth=0.1, color = 'black')
plt.savefig('result.png', dpi = 1200)
My code results in the following:
[figure: plot of the fitted model and the original data]
It remains the same if I change the standard deviation in line (1). I figure I must have made some mistakes in line (1), but I have no idea why it is not working. If this is not possible in astropy, are there any workarounds?
Update:
As noted in the comments, I think a Gaussian may not be the best distribution. I think I am actually looking for something similar to a perfusion curve. (In the picture, AUC means "area under curve for infinite time" and mTT means "mean transit time".) The equation in the picture is not precise. The goal is to make sure the peak is fitted as well as possible. The curve does not need to follow the original data very closely as x is close to 0 or infinity. It only needs to remain smooth and to go roughly down to zero (as a Gaussian does). I need hints on what kind of function may best satisfy such a demand.
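One possible direction (only a suggestion, not something established in this post) is a gamma-variate function, which is often used to model perfusion curves: it rises to a single peak and decays smoothly back to zero. A minimal sketch with scipy.optimize.curve_fit, assuming x and y are the arrays above and using rough, made-up initial guesses:
import numpy as np
from scipy.optimize import curve_fit
def gamma_variate(x, A, alpha, beta, t0):
    # Zero before the onset time t0, then a skewed single peak that decays back to zero
    t = np.clip(x - t0, 0.0, None)
    return A * t**alpha * np.exp(-t / beta)
p0 = [np.max(y), 2.0, 1.0, x[0]]  # crude initial guesses; adapt these to the data
params, cov = curve_fit(gamma_variate, x, y, p0=p0, maxfev=10000)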

Automatic Differentiation with respect to rank-based computations

I'm new to automatic differentiation programming, so this may be a naive question. Below is a simplified version of what I'm trying to solve.
I have two input arrays - a vector A of size N and a matrix B of shape (N, M) - as well as a parameter vector theta of size M. I define a new array C(theta) = B * theta to get a new vector of size N. I then obtain the indices of elements that fall in the upper and lower quartile of C, and use them to create new arrays A_low(theta) = A[lower quartile indices of C] and A_high(theta) = A[upper quartile indices of C]. Clearly these two do depend on theta, but is it possible to differentiate A_low and A_high w.r.t. theta?
My attempts so far seem to suggest no - I have tried the Python libraries autograd, JAX and TensorFlow, but they all return a gradient of zero. (The approaches I have tried so far involve using argsort or extracting the relevant sub-arrays using tf.top_k.)
What I'm seeking help with is either a proof that the derivative is not defined (or cannot be analytically computed) or, if it does exist, a suggestion on how to estimate it. My eventual goal is to minimize some function f(A_low, A_high) w.r.t. theta.
This is the JAX computation that I wrote based on your description:
import numpy as np
import jax.numpy as jnp
import jax
from jax import lax
N = 10
M = 20
rng = np.random.default_rng(0)
A = jnp.array(rng.random((N,)))
B = jnp.array(rng.random((N, M)))
theta = jnp.array(rng.random(M))
def f(A, B, theta, k=3):
    C = B @ theta                  # C(theta), a vector of size N
    _, i_upper = lax.top_k(C, k)   # indices of the k largest entries of C
    _, i_lower = lax.top_k(-C, k)  # indices of the k smallest entries of C
    return A[i_lower], A[i_upper]
x, y = f(A, B, theta)
dx_dtheta, dy_dtheta = jax.jacobian(f, argnums=2)(A, B, theta)
The derivatives are all zero, and I believe this is correct, because the change in value of the outputs does not depend on the change in value of theta.
But, you might ask, how can this be? After all, theta enters into the computation, and if you put in a different value for theta, you get different outputs. How could the gradient be zero?
What you must keep in mind, though, is that differentiation doesn't measure whether an input affects an output. It measures the change in output given an infinitesimal change in input.
Let's use a slightly simpler function as an example:
import jax
import jax.numpy as jnp
A = jnp.array([1.0, 2.0, 3.0])
theta = jnp.array([5.0, 1.0, 3.0])
def f(A, theta):
    # Select the entry of A at the position where theta is largest
    return A[jnp.argmax(theta)]
x = f(A, theta)
dx_dtheta = jax.grad(f, argnums=1)(A, theta)
Here the result of differentiating f with respect to theta is all zero, for the same reasons as above. Why? If you make an infinitesimal change to theta, it will in general not affect the sort order of theta. Thus, the entries you choose from A do not change given an infinitesimal change in theta, and thus the derivative with respect to theta is zero.
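A quick one-sided finite-difference probe on this small example makes the same point concrete (eps is just an arbitrary small step size):
eps = 1e-6
for i in range(len(theta)):
    theta_perturbed = theta.at[i].add(eps)  # bump one component of theta
    print((f(A, theta_perturbed) - f(A, theta)) / eps)  # prints 0.0 for every component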
Now, you might argue that there are circumstances where this is not the case: for example, if two values in theta are very close together, then certainly perturbing one even infinitesimally could change their respective rank. This is true, but the gradient resulting from this procedure is undefined (the change in output is not smooth with respect to the change in input). The good news is this discontinuity is one-sided: if you perturb in the other direction, there is no change in rank and the gradient is well-defined. In order to avoid undefined gradients, most autodiff systems will implicitly use this safer definition of a derivative for rank-based computations.
The result is that the value of the output does not change when you infinitesimally perturb the input, which is another way of saying the gradient is zero. And this is not a failure of autodiff – it is the correct gradient given the definition of differentiation that autodiff is built on. Moreover, were you to try changing to a different definition of the derivative at these discontinuities, the best you could hope for would be undefined outputs, so the definition that results in zeros is arguably more useful and correct.

Coefficients of 2D Chebyshev series in numpy.polynomial.chebyshev

I understand that chebvander2d and chebval2d return the Vandermonde matrix and fitted values for 2D inputs, and chebfit returns the coefficients for 1D-input series, but how do I get the coefficients for 2D-input series?
Short answer: It looks to me like this is not yet implemented. The whole of 2D polynomials seems more like a draft with some stub functions (as of June 2020).
Long answer (I came looking for the same thing, so I dug a little deeper):
First of all, this applies to all of the polynomial classes, not only Chebyshev, so you also cannot fit an "ordinary" polynomial (power series). In fact, you cannot even construct one.
To understand the programming problem, let me recap what a 2D polynomial looks like as a formula, using an example polynomial of degree 2:
p(x, y) = c_00 + c_10 x + c_01 y + c_20 x^2 + c_11 xy + c_02 y^2
Here the indices of c refer to the powers of x and y (the sum of the exponents must be <= the degree).
The first thing to notice is that, for degree d, there are (d+1)(d+2)/2 coefficients.
They could be stored in the upper-left triangle of a matrix or in a 1D array, e.g. arranged as in the formula above.
The documentation of functions like numpy.polynomial.polynomial.polyval2d implies that numpy expects the matrix variant: p(x, y) = sum_i,j c_i,j * x^i * y^j.
Side note: it may be confusing that the row index i ("y-coordinate") of the matrix is used as the exponent of x, not y; maybe the roles of i and j should be switched if this is eventually implemented, or at least there should be a note in the documentation.
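A quick check confirms this convention (the first index is the exponent of x):
import numpy as np
from numpy.polynomial import polynomial as P
c = np.zeros((2, 2))
c[1, 0] = 1.0                    # coefficient of x**1 * y**0
print(P.polyval2d(2.0, 3.0, c))  # prints 2.0, so the row index is indeed the exponent of x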
This leads to the core problem: the data structure for the 2D coefficients is not defined anywhere; it can only be guessed indirectly, as above, that a matrix should be used. But compared to a 1D array this wastes space, and evaluating the polynomial takes two nested loops instead of just one. Also: does the matrix have to be initialized with np.zeros, or do the implemented functions make sure that the lower-right part is never touched, so that np.empty could be used?
If the whole (d+1)^2 matrix were used, as the polyval2d function doc suggests, the degree of the polynomial would actually be 2*d (if c_d,d != 0).
To test this, I wanted to construct a numpy.polynomial.polynomial.Polynomial (yes, three times polynomial) and check its degree:
import numpy as np
import numpy.polynomial.polynomial as poly
coef = np.array([
[5.00, 5.01, 5.02],
[5.10, 5.11, 0. ],
[5.20, 0. , 0. ]
])
polyObj = poly.Polynomial(coef)
print(polyObj.degree())  # never reached
This gave a ValueError: Coefficient array is not 1-d before the print statement was reached. So while polyval2d expects a 2D coefficient array, it is not (yet) possible to construct such a polynomial - not manually like this at least. With this insight, it is not surprising that there is no function (yet) that computes a fit for 2D polynomials.
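Until this is implemented, one workaround is the same manual route described for the power basis above: build the design matrix with chebvander2d, solve the least-squares problem yourself, and reshape the solution into the coefficient matrix that chebval2d expects. A minimal sketch with made-up data:
import numpy as np
from numpy.polynomial import chebyshev as C
x = np.random.uniform(-1, 1, 200)
y = np.random.uniform(-1, 1, 200)
z = np.cos(x) * np.sin(y) + 0.01 * np.random.randn(200)  # made-up sample data
deg = (3, 3)  # Chebyshev degree in x and in y
V = C.chebvander2d(x, y, deg)  # design matrix, shape (200, 16)
coef_flat, residuals, rank, sv = np.linalg.lstsq(V, z, rcond=None)
coef = coef_flat.reshape(deg[0] + 1, deg[1] + 1)  # coef[i, j] multiplies T_i(x) * T_j(y)
z_fit = C.chebval2d(x, y, coef)  # evaluate the fitted series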

A Pure Pythonic Pairwise Euclidean distance of rows of a numpy ndarray

I have a matrix of size (n_classes, n_features) and I want to compute the pairwise Euclidean distance of each pair of classes, so the output would be an (n_classes, n_classes) matrix where each cell has the value of euclidean_distance(class_i, class_j).
I know that there are the scipy spatial distances (http://docs.scipy.org/doc/scipy-0.14.0/reference/spatial.distance.html) and sklearn.metrics.pairwise.euclidean_distances (http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.euclidean_distances.html), but I want to use this in Theano, so I need a pure mathematical formula rather than functions that compute the results.
For example, I need a series of transformations like A = X * B, D = X.T - X, results = D.T: something that contains just matrix operations, not functions.
You can do this using numpy broadcasting as shown in this gist. It should be straightforward to convert this to Theano code, or just reference @eickenberg's comment above, since he's the one who showed me how to do this!
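The usual broadcasting trick relies on the identity ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2*<x_i, x_j> and uses only matrix operations, so it translates directly to Theano (a sketch with made-up data, not necessarily the exact content of the linked gist):
import numpy as np
X = np.random.randn(5, 3)  # made-up (n_classes, n_features) matrix
sq_norms = np.sum(X**2, axis=1)  # ||x_i||^2 for each row
sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2 * X.dot(X.T)
dists = np.sqrt(np.maximum(sq_dists, 0))  # clip tiny negative values caused by round-off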