Solution to transcendental equation with both Mathematica and Python - numpy

I have the following issue with finding the roots of a non-linear equation. The equation is the following:
tanh(5 * log((2/t)^0.00990099 * (1+x)^0.990099 * (1-x)^(-1))) - x = 0
Solving this with NSolve for {t, 0, 100} in Mathematica returns the following:
This is what I was expecting, judging from the plot of the resulting roots versus the time parameter within this range. Now, I have tried to replicate this result in Python using scipy.optimize.root, but it seems that my code returns whatever value I use as the initial condition as the solution, hence it is nothing more than the identity map. This can also be seen in the picture below, where I used an initial condition of 0.7:
I have provided the code below:
import math
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import root
# Setting up the function
def delta(v, t):
    epsilon = 10**(-20)
    return np.tanh(5*np.log((2/(1.0*t+epsilon))**(0.00990099)*(1+v+epsilon)**(0.990099)*(1-v+epsilon)**(-1))) - v

# Setting up the time parameter
time = np.linspace(0, 101)
res = [root(delta, 0.7, args=(t,)).x[0] for t in time]
print(res)

plt.plot(time, res)
plt.savefig("plot.png")
I am not really sure whether I am using scipy.optimize.root correctly, since the function itself looks fine and behaves as I expect. Perhaps there is a mistake in the way I pass the args?

Root-finding methods that start from a bracketing interval [a, b] (one where f(a) and f(b) have opposite signs) are generally more robust than methods that start from a single point x0. The reason is that the former have a definite interval to work with, which they refine iteratively. The bisection method is the classical example, but it is slow; SciPy implements more sophisticated bracketing methods such as brentq. It works fine here with the bracket [-0.1, 0.1], which should be sufficient judging from the Mathematica plot.
Also, t = 0 is problematic, as the equation is not even defined there. Start from a small positive number like 0.01 instead.
from scipy.optimize import brentq

time = np.linspace(0.01, 101, 500)
res = [brentq(delta, -0.1, 0.1, args=(t,)) for t in time]
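
Putting it together, a minimal self-contained sketch of the bracketing approach (reusing the delta residual from the question, and assuming the [-0.1, 0.1] bracket straddles the root over the whole time range) might look like this:

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import brentq

def delta(v, t):
    # Same residual as in the question, without the epsilon padding
    return np.tanh(5*np.log((2/t)**0.00990099 * (1+v)**0.990099 * (1-v)**(-1))) - v

# Start slightly above t = 0, where the equation is undefined
time = np.linspace(0.01, 101, 500)
# brentq needs f(a) and f(b) to have opposite signs over the bracket
res = [brentq(delta, -0.1, 0.1, args=(t,)) for t in time]

plt.plot(time, res)
plt.savefig("plot.png")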


Efficient solving of generalised eigenvalue problems in python

Given an eigenvalue problem Ax = λBx, which of the two approaches shown here is the more efficient way to solve it?
import scipy as sp
import scipy.linalg   # ensure the linalg subpackage is actually loaded
import numpy as np

def geneivprob(A, B):
    # Use scipy's generalized eigenvalue solver
    lamda, eigvec = sp.linalg.eig(A, B)
    return lamda, eigvec

def geneivprob2(A, B):
    # Reduce the problem to a standard symmetric eigenvalue problem
    Linv = np.linalg.inv(np.linalg.cholesky(B))
    C = Linv @ A @ Linv.transpose()
    #C = np.asmatrix((C + C.transpose())*0.5, np.float32)
    lamda, V = np.linalg.eig(C)
    return lamda, Linv.transpose() @ V
I saw the second version in a codebase and was wondering if it was better than simply using scipy.
Well, there is no obvious advantage to the second approach. Maybe for some class of matrices it will be better; I would suggest you test it on the problems you actually want to solve. Since you are transforming the eigenvectors, this also transforms how errors affect the solution, and maybe that is the reason for using the second method: not efficiency, but numerical accuracy or convergence.
Another thing is that the second method will only work for symmetric (positive-definite) B, since it relies on a Cholesky factorization.
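
As a quick way to follow the suggestion of testing on your own problems, a small sketch (with a randomly generated symmetric positive-definite B, which is an assumption here) can check that both routines agree on the eigenvalues:

import numpy as np
import scipy.linalg

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
M = rng.standard_normal((n, n))
B = M @ M.T + n * np.eye(n)          # symmetric positive definite, so a Cholesky factor exists

# Generalized solver
lam1, _ = scipy.linalg.eig(A, B)

# Reduction to a standard problem via Cholesky, as in geneivprob2
Linv = np.linalg.inv(np.linalg.cholesky(B))
lam2, _ = np.linalg.eig(Linv @ A @ Linv.T)

print(np.allclose(np.sort_complex(lam1), np.sort_complex(lam2)))  # expect True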

change scientific notation abbreviation of y axis units to a string

First I would like to apologize, as I know I am not asking this question correctly (which is why I can't find what is likely a simple answer).
I have a graph.
As you can see, above the y axis it says 1e11, meaning that the units are in hundreds of billions. I would like the graph to read 100 Billion instead of 1e11.
I am not sure what such a notation is called.
To be clear, I am not asking to change the whole y axis to number values like other questions; I only want to change the top 1e11 to be more readable to those who are less mathematical.
ax.get_yaxis().get_major_formatter().set_scientific(False)
gives an undesired result
import numpy as np
from matplotlib.ticker import FuncFormatter

def billions(x, pos):
    # Format each tick value in units of billions
    return '$%1.1fB' % (x * 1e-9)

formatter = FuncFormatter(billions)
ax.yaxis.set_major_formatter(formatter)
adapted from https://matplotlib.org/examples/pylab_examples/custom_ticker1.html
produces
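
For completeness, a self-contained sketch with made-up data (the dollar sign dropped, since the question is about plain counts) could look like this; with a FuncFormatter every tick is written out explicitly, so the 1e11 offset text no longer appears:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter

def billions(x, pos):
    # Render each tick as a multiple of one billion, e.g. 90.0B
    return '%1.1fB' % (x * 1e-9)

fig, ax = plt.subplots()
ax.plot(np.arange(10), np.arange(10) * 1e10)   # values up to 9e10
ax.yaxis.set_major_formatter(FuncFormatter(billions))
plt.savefig("billions.png")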

Numpy - AttributeError: 'Zero' object has no attribute 'exp'

I'm having trouble with a discrepancy: something breaks at runtime, but running the exact same data and operations in the Python console works fine.
# f_err - currently has value 1.11819388872025
# l_scales - currently a numpy array [1.17840183376334 1.13456764589809]
sq_euc_dists = self.se_term(x1, x2, l_scales) # this is fine. It calls cdists on x1/l_scales, x2/l_scales vectors
return (f_err**2) * np.exp(-0.5 * sq_euc_dists) # <-- errors on this line
The error that I get is
AttributeError: 'Zero' object has no attribute 'exp'
However, calling those exact same lines, with the same f_err, l_scales, and x1, x2 in the console right after it errors out, somehow does not produce errors.
I was not able to find a post referring to the 'Zero' object error specifically, and the non-'Zero' ones I found didn't seem to apply to my case here.
EDIT: It was a bit lacking in info, so here's an actual (extracted) runnable example with sample data taken straight out of a failed run, which works fine when run in isolation; I can't reproduce the error except at runtime.
Note that the sqeucl_dist function below is quite bad and I should be using scipy's cdist instead. However, because I'm using sympy's symbols for matrix element-wise gradients with over 15 partial derivatives in my real data, cdist is not an option, as it doesn't deal with arbitrary objects.
import numpy as np
def se_term(x1, x2, l):
    return sqeucl_dist(x1/l, x2/l)

def sqeucl_dist(x, xs):
    return np.sum([(i-j)**2 for i in x for j in xs], axis=1).reshape(x.shape[0], xs.shape[0])
x = np.array([[-0.29932052, 0.40997373], [0.40203481, 2.19895326], [-0.37679417, -1.11028267], [-2.53012051, 1.09819485], [0.59390005, 0.9735], [0.78276777, -1.18787904], [-0.9300892, 1.18802775], [0.44852545, -1.57954101], [1.33285028, -0.58594779], [0.7401607, 2.69842268], [-2.04258086, 0.43581565], [0.17353396, -1.34430191], [0.97214259, -1.29342284], [-0.11103534, -0.15112815], [0.41541759, -1.51803154], [-0.59852383, 0.78442389], [2.01323359, -0.85283772], [-0.14074266, -0.63457529], [-0.49504797, -1.06690869], [-0.18028754, -0.70835799], [-1.3794126, 0.20592016], [-0.49685373, -1.46109525], [-1.41276934, -0.66472598], [-1.44173868, 0.42678815], [0.64623684, 1.19927771], [-0.5945761, -0.10417961]])
f_err = 1.11466725760716
l = [1.18388412685279, 1.02290811104357]
result = (f_err**2) * np.exp(-0.5 * se_term(x, x, l)) # This runs fine, but fails with the exact same calls and data during runtime
Any help greatly appreciated!
Here is how to reproduce the error you are seeing:
import sympy
import numpy
zero = sympy.sympify('0')
numpy.exp(zero)
You will see the same exception you are seeing.
You can fix this (inefficiently) by changing your code to the following to make things floating point.
def sqeucl_dist(x, xs):
    return np.sum([np.vectorize(float)(i-j)**2 for i in x for j in xs],
                  axis=1).reshape(x.shape[0], xs.shape[0])
It would be better to fix your gradient function using lambdify.
Here's an example of how lambdify can be used on partial derivatives:
import sympy
from sympy.abc import x, y, z

expression = x**2 + sympy.sin(y) + z
derivatives = [expression.diff(var, 1) for var in [x, y, z]]
derivatives is now [2*x, cos(y), 1], a list of Sympy expressions. To create a function which will evaluate this numerically at a particular set of values, we use lambdify as follows (passing 'numpy' as an argument like that means to use numpy.cos rather than sympy.cos):
derivative_calc = sympy.lambdify((x, y, z), derivatives, 'numpy')
Now derivative_calc(1, 2, 3) will return [2, -0.41614683654714241, 1]. These are ints and numpy.float64s.
A side note: np.exp(M) will calculate the element-wise exponential of each of the elements of M. If you are trying to do a matrix exponential, you need scipy.linalg.expm instead.
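
A quick sketch of the difference (np.exp works element-wise, while scipy.linalg.expm computes the actual matrix exponential):

import numpy as np
from scipy.linalg import expm

M = np.array([[0.0, 1.0],
              [0.0, 0.0]])

print(np.exp(M))   # element-wise: [[1., 2.718...], [1., 1.]]
print(expm(M))     # matrix exponential: [[1., 1.], [0., 1.]], since M @ M = 0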

SciPy UnivariateSpline Specifying Axis?

Using scipy.interpolate.interp1d it is possible to pass in a (1080, 4) nd.array and compute an interpolation function for each 'row' in a single command:
spline = interp1d(np.arange(1, 5), np.random.random((1080, 4)), kind='cubic')
I am getting slightly different interpolation results (off the knots) than some existing Fortran code. I believe this is because the SciPy source is using a b-spline and the Fortran code is using splines derived from numerical recipes.
I am attempting to perform the same interpolation using UnivariateSpline with s=0, so InterpolatedUnivariateSpline.
I am able to get this working if I pass the data row by row, i.e. using an iterator to step over all 1080 rows - this is highly inefficient.
Using:
spline = UnivariateSpline(np.arange(1, 5).reshape(-1, 1), np.random.random((1080, 4)), s=0, k=3)
I am seeing:
failed in converting 2nd argument `y' of dfitpack.fpcurf0 to C/Fortran array
I believe this is an issue with getting the multi-dimensional array into Fitpack? Any insight into how to avoid an iterator? Additionally, any insight into a SciPy interpolation function that matches the one described in Numerical Recipes (section 3.3, p. 120)? You have to type in the page number; I cannot link directly, it is a Flash viewer...
In older versions of SciPy (I observed it in 0.14) the splines returned by interp1d were of relatively poor quality. In versions 0.19 and later, interp1d is consistent with the other spline routines, and since it accepts vector inputs, I think that answers the question. Here is a comparison of three spline constructors; the latter two take only one row as input.
import numpy as np
from scipy.interpolate import interp1d, UnivariateSpline, splrep, splev
x = np.arange(1, 5)
y = np.random.normal(size=(1080, 4))
spl1 = interp1d(x, y, kind='cubic')
spl2 = UnivariateSpline(x, y[123, :], s=0, k=3)
spl3 = splrep(x, y[123, :], s=0, k=3)
t = 2.345
print(spl1(t)[123], spl2(t), splev(t, spl3))
This prints (with my random numbers)
-0.333973049011 -0.333973049011 -0.333973049011
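
Since a single interp1d object covers all 1080 rows, evaluation is vectorised as well, so no per-row iterator is needed; a short sketch (interpolation runs along the last axis by default):

import numpy as np
from scipy.interpolate import interp1d

x = np.arange(1, 5)
y = np.random.normal(size=(1080, 4))

spl = interp1d(x, y, kind='cubic')   # one call for all 1080 rows
t = np.linspace(1, 4, 50)            # query points inside the data range
print(spl(t).shape)                  # (1080, 50): one interpolated row per input row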

Exponential decay curve fitting in numpy and scipy

I'm having a bit of trouble with fitting a curve to some data, but can't work out where I am going wrong.
In the past I have done this with numpy.linalg.lstsq for exponential functions and scipy.optimize.curve_fit for sigmoid functions. This time I wished to create a script that would let me specify various functions, determine parameters and test their fit against the data. While doing this I noticed that Scipy leastsq and Numpy lstsq seem to provide different answers for the same set of data and the same function. The function is simply y = e^(l*x) and is constrained such that y=1 at x=0.
Excel trend line agrees with the Numpy lstsq result, but as Scipy leastsq is able to take any function, it would be good to work out what the problem is.
import scipy.optimize as optimize
import numpy as np
import matplotlib.pyplot as plt
## Sampled data
x = np.array([0, 14, 37, 975, 2013, 2095, 2147])
y = np.array([1.0, 0.764317544, 0.647136491, 0.070803763, 0.003630962, 0.001485394, 0.000495131])
# function
fp = lambda p, x: np.exp(p*x)
# error function
e = lambda p, x, y: (fp(p, x) - y)
# using scipy least squares
l1, s = optimize.leastsq(e, -0.004, args=(x,y))
print(l1)
# [-0.0132281]
# using numpy least squares
l2 = np.linalg.lstsq(np.vstack([x, np.zeros(len(x))]).T,np.log(y))[0][0]
print(l2)
# -0.00313461628963 (same answer as Excel trend line)
# smooth x for plotting
x_ = np.arange(0, x[-1], 0.2)
plt.figure()
plt.plot(x, y, 'rx', x_, fp(l1, x_), 'b-', x_, fp(l2, x_), 'g-')
plt.show()
Edit - additional information
The MWE above includes a small sample of the dataset. When fitting the actual data the scipy.optimize.curve_fit curve presents an R^2 of 0.82, while the numpy.linalg.lstsq curve, which is the same as that calculated by Excel, has an R^2 of 0.41.
You are minimizing different error functions.
When you use numpy.linalg.lstsq, the error function being minimized is
np.sum((np.log(y) - p * x)**2)
while scipy.optimize.leastsq minimizes the function
np.sum((y - np.exp(p * x))**2)
The first case requires a linear dependency between the dependent and independent variables (here obtained by taking the logarithm), but the solution is known analytically, while the second can handle any dependency but relies on an iterative method.
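
To see that the difference really lies in the error function rather than the solver, a small sketch (reusing x, y and optimize from the question's code) can hand leastsq the log-space residual; it should then reproduce the lstsq/Excel value of about -0.0031:

# Residual in log space: the same quantity numpy.linalg.lstsq minimizes above
e_log = lambda p, x, y: np.log(y) - p * x
l1_log, s = optimize.leastsq(e_log, -0.004, args=(x, y))
print(l1_log)  # expected to be close to l2, about -0.0031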
On a separate note, I cannot test it right now, but when using numpy.linalg.lstsq you don't need to vstack a row of zeros; the following works as well:
l2 = np.linalg.lstsq(x[:, None], np.log(y))[0][0]
To expound a bit on Jaime's point, any non-linear transformation of the data will lead to a different error function and hence to different solutions. These will lead to different confidence intervals for the fitting parameters. So you have three possible criteria to use to make a decision: which error you want to minimize, which parameters you want more confidence in, and finally, if you are using the fitting to predict some value, which method yields less error in the interesting predicted value. Playing around a bit analytically and in Excel suggests that different kinds of noise in the data (e.g. if the noise function scales the amplitude, affects the time constant, or is additive) lead to different choices of solution.
I'll also add that while this trick "works" for exponential decay to 0, it can't be used in the more general (and common) case of damped exponentials (rising or falling) to values that cannot be assumed to be 0.
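
For that more general case, a hedged sketch using scipy.optimize.curve_fit with an offset model (assuming a decay of the form a*exp(b*x) + c, which the log-linearisation trick cannot handle) might look like:

import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    # Exponential decay towards a non-zero offset c
    return a * np.exp(b * x) + c

# Synthetic data: decay from about 1.5 towards 0.5, with a little noise
xdata = np.linspace(0, 10, 50)
ydata = 1.0 * np.exp(-0.7 * xdata) + 0.5 + 0.01 * np.random.normal(size=xdata.size)

popt, pcov = curve_fit(model, xdata, ydata, p0=(1.0, -0.5, 0.0))
print(popt)  # roughly [1.0, -0.7, 0.5]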