fsolve for function with exp - numpy

I'm trying to solve a nonlinear equation with Python and SciPy; here's the simple input:
from numpy import exp
from scipy.optimize import fsolve

def func(x):
    return 5*x*(2*x - 1 + exp(2*x)) - 5

x0 = fsolve(func, 0)
print(x0)
However, executing it produces a "RuntimeWarning: overflow encountered in exp" message.
Using MATLAB's fzero with the same function works fine and returns 0.4385 for the root.
How can I solve this?

Using 0 as the starting estimate causes problems, likely because the derivative of func is zero at x = 0, so the solver's first steps shoot toward very large x, where exp(2*x) overflows. You can use almost any other value; if you want to start from near zero, use something like 1e-6:
from numpy import exp
from scipy.optimize import fsolve

def func(x):
    return 5*x*(2*x - 1 + exp(2*x)) - 5

x0 = fsolve(func, 1e-6)
print(x0)
yields
[0.43848533]
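Alternatively, since MATLAB's fzero is a bracketing root finder, scipy.optimize.brentq is a closer analogue; a sketch, where the bracket [0, 1] works because func(0) = -5 and func(1) > 0:

from scipy.optimize import brentq

root = brentq(func, 0, 1)  # bracketing method, unaffected by the flat slope at x = 0
print(root)  # ~0.43848533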

Related

Nan encountered when using numba

I am using Numba to speed up a function and I ran into the following problem.
When using the @njit (or @jit) decorator, the behaviour of some numpy functions changes.
For example, if I use the following function to calculate tanh:
from numba import njit
import numpy as np

@njit
def check_tanh(z):
    return np.tanh(z)
and I run it for real values of z, I get the same result as np.tanh(z), as it should be.
If instead I move parallel to the real axis with an imaginary offset, for example z = x + 1.j, and increase x, numpy's tanh converges to 1.+0.j, while check_tanh(z) returns nan (on my computer this happens once x > 360).
Does anyone have an idea of what is going on and how it can be fixed?
Thanks in advance!
Thanks in advance!
Using tanh from cmath fixes the issue.
It seems this is still an open problem in Numba:
https://github.com/numba/numba/issues/2919
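For reference, a minimal sketch of that workaround; Numba supports cmath in nopython mode, though cmath.tanh operates on complex scalars rather than arrays:

import cmath
from numba import njit

@njit
def check_tanh_cmath(z):
    # cmath.tanh takes a complex scalar and stays finite for large real parts
    return cmath.tanh(z)

print(check_tanh_cmath(400 + 1j))  # ~ (1+0j), where the np.tanh version gave nan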

Can't minimize function

I just want to minimize a simple function; every example I've looked at didn't get me anywhere.
import math
import numpy as np
import sympy as sp
from scipy.optimize import minimize
import scipy.optimize as optimize

R = 1.5
k_1 = 2
a = 1
n = a
alpha = 0.25
beta = 0.5
delta = 0.9

def f_gob(x, y, z):
    c_1 = ((1/x - y/x) + R*k_1)/(1 + delta*(1 + alpha))
    c_2 = delta*x*(((1/x - y/x) + R*k_1)/(1 + delta*(1 + alpha)))
    l = n - (alpha*(delta*x*(((1/x - y/x) + R*k_1)/((1 + delta*(1 + alpha))))))/(1 - y)
    return -1*(math.log(c_1) + delta*(math.log(c_2) + alpha*math.log(n - l) + beta*math.log(z)))

f_gob(0.9996, 0.332, 0.7765)

x0 = [0.8, 0.2, 0.6]
res = minimize(f_gob, x0)
Thank you very much.
Better is:
def f_gob(a):
    x = a[0]
    y = a[1]
    z = a[2]
    c_1 = ((1/x - y/x) + R*k_1)/(1 + delta*(1 + alpha))
    c_2 = delta*x*c_1
    l = n - (alpha*c_2)/(1 - y)
    return -1*(math.log(c_1) + delta*(math.log(c_2) + alpha*math.log(n - l) + beta*math.log(z)))

f_gob([0.9996, 0.332, 0.7765])
The main issue is that minimize passes the current values of the three decision variables x, y, z as a single array, which I call a. I just unpack the individual members to keep things close to what you had. Passing them as an array makes sense, especially if you want to allow for large numbers of variables (say hundreds).
For further information see the documentation: the third sentence explains the format of the function to be called. Also check the examples.
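For completeness, a sketch of the call itself. An unconstrained search can wander into points where math.log raises a domain error, so the method and bound values below are illustrative assumptions that keep all log arguments positive, not part of the original answer:

x0 = [0.8, 0.2, 0.6]
# Illustrative bounds: x, z in (0, 1]; y in (0, 1) so that 1 - y > 0
bounds = [(1e-6, 1.0), (1e-6, 0.999), (1e-6, 1.0)]
res = minimize(f_gob, x0, method="L-BFGS-B", bounds=bounds)
print(res.x, res.fun)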

Rolling multidimensional function in pandas

Let's say, I have the following code.
import numpy as np
import pandas as pd
x = pd.DataFrame(np.random.randn(100, 3)).rolling(window=10, center=True).cov()
For each index, I have a 3x3 matrix. I would like to calculate the eigenvalues and then some function of those eigenvalues, or perhaps some function of both eigenvalues and eigenvectors. The point is that if I take x.loc[0], I have no problem computing anything from that matrix. How do I do it in a rolling fashion for all the matrices?
Thanks!
You can use the eigenvector/eigenvalue routines in scipy.linalg.
import numpy as np
import pandas as pd
from scipy import linalg as LA

x = pd.DataFrame(np.random.randn(100, 3)).rolling(window=10, center=True).cov()

# x has a MultiIndex; iterate over the outer level, one 3x3 block at a time
for i in x.index.get_level_values(0).unique():
    try:
        e_vals, e_vec = LA.eig(x.loc[i])
        print(e_vals, e_vec)
    except ValueError:
        # LA.eig rejects the NaN blocks the centered window leaves at the edges
        continue
If no NaN values were present, the try/except would be unnecessary and a plain for loop would do; here the centered window leaves NaN blocks at both edges.
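As an alternative to the explicit loop (a sketch, not part of the original answer), you can group by the outer index level and collect the eigenvalues into a single Series; np.linalg.eigvals is used here, and the NaN check replaces the try/except:

import numpy as np

def eigvals_or_nan(block):
    a = block.to_numpy()
    if np.isnan(a).any():  # edge windows of the centered rolling cov
        return np.full(a.shape[0], np.nan)
    return np.linalg.eigvals(a)

eigenvalues = x.groupby(level=0).apply(eigvals_or_nan)
print(eigenvalues.head())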

Plot Scipy ODE solution

I've been trying to solve a nonlinear ordinary differential equation numerically with SciPy, in particular via the scipy.integrate.RK23 class. It returns <scipy.integrate._ivp.rk.RK23 at 0x7f2b1a908390>. How can I plot the solution?
Thank you in advance for your help!
EDIT:
As a simple example for testing:
import numpy
import scipy.integrate

t0 = 0
tf = 1
x0 = numpy.array([0])

def F(t, x):
    return t**2

x = scipy.integrate.RK23(F, t0, x0, tf)
RK23 is a class that implements one way to solve an ODE; that is, it is an OdeSolver, so it is normally not used directly but passed to higher-level functions like solve_ivp:
import numpy
from scipy.integrate import solve_ivp, RK23
import matplotlib.pyplot as plt

t0 = 0
tf = 1
x0 = numpy.array([0])

def F(t, x):
    return t**2

sol = solve_ivp(F, [t0, tf], x0, RK23)
print(sol)
plt.plot(sol.t, sol.y[0])
plt.show()
OdeSolver exists so that developers can add custom methods without rewriting scipy, but since RK23 is a classic method that scipy already implements, you can pass just its name and scipy will look up the appropriate class:
...
sol = solve_ivp(F, [t0, tf], x0, "RK23")
...
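If you ever do need step-level control, the solver class can also be driven by hand through its step()/status interface; a minimal sketch, reusing F, t0, x0, and tf from above:

solver = RK23(F, t0, x0, tf)
ts, ys = [solver.t], [solver.y[0]]
while solver.status == 'running':
    solver.step()                # advance one internal step
    ts.append(solver.t)
    ys.append(solver.y[0])
plt.plot(ts, ys)
plt.show()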

Numpy gradient with non uniform spacing

Something is going wrong here:
import numpy as np
import matplotlib.pyplot as plt

x = np.concatenate((np.linspace(0, 1, 100), np.linspace(1, 2, 50)))
f = np.power(x, 2)
df = 2*x
Df = np.gradient(f, x)
plt.plot(x, df, 'r', x, Df, 'b')
plt.show()
This is what I get (plot omitted): the computed Df does not match the analytic derivative df.
Things work fine, however, if I use a uniformly spaced array and omit the x argument.
Any suggestions?
I think this is because numpy versions before 1.13 expect the x argument to be a constant grid spacing (see https://docs.scipy.org/doc/numpy-1.11.0/reference/generated/numpy.gradient.html#numpy.gradient). Even though these earlier versions expect a scalar dx, they do not check for it, so the result is np.gradient(f) / x, which is a valid elementwise division but not the derivative. This is pretty annoying, since code written for numpy 1.13 may run on earlier versions with incorrect output and no errors.
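A quick way to check which behaviour you have (a sketch; note that the concatenation above duplicates the point x = 1, which on numpy >= 1.13 becomes a zero spacing and a division by zero, so it is dropped here via np.unique):

import numpy as np

print(np.__version__)  # array-valued spacing requires numpy >= 1.13

# np.unique sorts and removes the duplicated x = 1 from the concatenation
x = np.unique(np.concatenate((np.linspace(0, 1, 100), np.linspace(1, 2, 50))))
f = x**2
Df = np.gradient(f, x)  # x passed as a coordinate array, not a scalar dx
print(np.allclose(Df, 2*x, atol=0.05))  # True on numpy >= 1.13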