I'm performing curve fitting with scipy.optimize.leastsq, e.g. for a Gaussian:
import numpy as np
import scipy.optimize

def fitGaussian(x, y, init=[1.0, 0.0, 4.0, 0.1]):
    fitfunc = lambda p, x: p[0]*np.exp(-(x-p[1])**2/(2*p[2]**2)) + p[3]  # Target function
    errfunc = lambda p, x, y: fitfunc(p, x) - y  # Distance to the target function
    final, success = scipy.optimize.leastsq(errfunc, init[:], args=(x, y))
    return fitfunc, final
Now, I want to optionally fix the values of some of the parameters in the fit. The suggestions I found either point to a different package, lmfit, which I want to avoid, or are very general, like here.
Since I need a solution which
works with numpy/scipy (no further packages etc.),
is independent of the parameters themselves,
and is flexible as to which parameters are fixed or not,
I came up with the following, using a condition on each of the parameters:
def fitGaussian2(x, y, init=[1.0, 0.0, 4.0, 0.1], fix=[False, False, False, False]):
    fitfunc = lambda p, x: (p[0] if not fix[0] else init[0])*np.exp(-(x-(p[1] if not fix[1] else init[1]))**2/(2*(p[2] if not fix[2] else init[2])**2)) + (p[3] if not fix[3] else init[3])
    errfunc = lambda p, x, y: fitfunc(p, x) - y  # Distance to the target function
    final, success = scipy.optimize.leastsq(errfunc, init[:], args=(x, y))
    return fitfunc, final
While this works fine, it's neither practical nor beautiful.
So my question is: Are there better ways of performing curve fitting in scipy for fixed parameters? Or are there wrappers, which already include such parameter fixing?
Using scipy, there are no built-in options that I am aware of; you will always have to use a work-around like the one you already did.
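One way to make that work-around less repetitive is to expose only the free parameters to leastsq and fill the fixed ones back in from the initial guess inside the residual function. The sketch below only illustrates the idea; the helper name fitGaussianFixed is hypothetical, not an established API:

import numpy as np
import scipy.optimize

def fitGaussianFixed(x, y, init=[1.0, 0.0, 4.0, 0.1], fix=[False, False, False, False]):
    init = np.asarray(init, dtype=float)
    free = ~np.asarray(fix, dtype=bool)

    def full_params(p_free):
        # Rebuild the full parameter vector: fixed entries come from init.
        p = init.copy()
        p[free] = p_free
        return p

    fitfunc = lambda p, x: p[0]*np.exp(-(x - p[1])**2/(2*p[2]**2)) + p[3]
    errfunc = lambda p_free, x, y: fitfunc(full_params(p_free), x) - y

    # Only the free parameters are passed to the optimiser.
    p_free, success = scipy.optimize.leastsq(errfunc, init[free], args=(x, y))
    return fitfunc, full_params(p_free)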
If you are willing to use a wrapper package, however, may I recommend my own symfit? This is a wrapper around scipy with readability and less boilerplate code as its core principles. In symfit, your problem would be solved as:
from symfit import parameters, variables, exp, Fit, Parameter
a, b, c, d = parameters('a, b, c, d')
x, y = variables('x, y')
model_dict = {y: a * exp(-(x - b)**2 / (2 * c**2)) + d}
fit = Fit(model_dict, x=xdata, y=ydata)
fit_result = fit.execute()
The line a, b, c, d = parameters('a, b, c, d') makes four Parameter objects. To fix e.g. the parameter c to its initial value, do the following anywhere before calling fit.execute():
c.value = 4.0
c.fixed = True
So a possible end result might be:
from symfit import parameters, variables, exp, Fit, Parameter
a, b, c, d = parameters('a, b, c, d')
x, y = variables('x, y')
c.value = 4.0
c.fixed = True
model_dict = {y: a * exp(-(x - b)**2 / (2 * c**2)) + d}
fit = Fit(model_dict, x=xdata, y=ydata)
fit_result = fit.execute()
If you want to be more dynamic in your code, you could make the Parameter objects straight away using:
c = Parameter(4.0, fixed=True)
For more info, check the docs: http://symfit.readthedocs.io/en/latest/tutorial.html#simple-example
The above example using symfit would surely simplify the syntax of the fitting approach; however, does the example given really constrain the variable c?
If you look at the fit_result.param you get the following:
OrderedDict([('a', 16.374368575343127),
             ('b', 0.49201249437123556),
             ('c', 0.5337962977235504),
             ('d', -9.55593614465743)])
The parameter c is not 4.0.
cvxpy has a very neat way to write out an optimisation problem without worrying too much about converting it into a "standard" matrix form, as this is done internally somehow. Best to explain with an example:
import cvxpy as cp

def cvxpy_implementation():
    var1 = cp.Variable()
    var2 = cp.Variable()
    constraints = [
        var1 <= 3,
        var2 >= 2,
    ]
    obj_fun = cp.Minimize(var1**2 + var2**2)
    problem = cp.Problem(obj_fun, constraints)
    problem.solve()
    return var1.value, var2.value
import numpy as np
from scipy.optimize import minimize, LinearConstraint

def scipy_implementation1():
    A = np.diag(np.ones(2))
    lb = np.array([-np.inf, 2])
    ub = np.array([3, np.inf])
    con = LinearConstraint(A, lb, ub)
    def obj_fun(x):
        return (x**2).sum()
    result = minimize(obj_fun, [0, 0], constraints=con)
    return result.x
def scipy_implementation2():
    con = [
        {'type': 'ineq', 'fun': lambda x: 3 - x[0]},
        {'type': 'ineq', 'fun': lambda x: x[1] - 2},
    ]
    def obj_fun(x):
        return (x**2).sum()
    result = minimize(obj_fun, [0, 0], constraints=con)
    return result.x
All of the above give the correct result, but the cvxpy implementation is much "easier" to write out; specifically, I don't have to worry about converting the inequalities and can give the variables useful names when writing them out. Compare that to the scipy1 and scipy2 implementations: in the first case I have to write out these extra infs, and in the second case I have to remember which variable is which. You can imagine a case where I have 100 variables; while concatenating them will ultimately need to be done, I'd like to be able to write it out like in cvxpy.
Question:
Has anyone implemented this for scipy? Or is there an alternative library that could make this work?
Thank you.
I wrote something up that does this and seems to cover the main issues I had in mind.
The general idea is that you define variables, create a simple expression as you would normally write it out, and then the solver class optimises over the defined variables:
https://github.com/evan54/optimisation/blob/master/var.py
The example below illustrates a simple use case
import numpy as np
# Variable and Problem are defined in the linked var.py

# fake data
a = 2
m = 3
x = np.linspace(0, 10)
y = a * x + m + np.random.randn(len(x))

a_ = Variable()
m_ = Variable()
y_ = a_ * x + m_
error = y_ - y
prob = Problem((error**2).sum(), None)
prob.minimize()
print(f'a = {a}, a_ = {a_}')
print(f'm = {m}, m_ = {m_}')
I am trying to solve a differential equation using scipy.integrate.solve_ivp
L*Q'' + R*Q' + (1/C)*Q = E(t), E(t) = 230*sin(50*t)
for Q(t) and Q'(t)
C = 0.0014 #F
dQ_0 = 2.6 #A
L = 1.8 #H
n = 575 #/
Q_0 = 1e-06 #C
R = 43 #Ohm
t_f = 2.8 #s
import numpy as np
from scipy.integrate import solve_ivp

t = np.linspace(0, t_f, n)

def E(x):
    return 230*np.sin(50*x)

y = E(y)

def Q(t, y, R, L, C):
    return (y - L*Q'' - R*Q')*C

init_cond = [Q_0, dQ_0]
y_ivp = solve_ivp(Q, t_span=(0, t_f), y0=init_cond)
I am only trying to understand how to correctly define a function that is passed as an argument 'fun' in scipy.integrate.solve_ivp
The answer below is no longer valid; solve_ivp has since implemented the args parameter, similar to odeint. So indeed
y_ivp = solve_ivp(Q, t_span=(0, t_f), y0=init_cond, args=(R, L, C))
is now valid (do not forget to set appropriate error tolerances, or at least check that the default values rtol=1e-3, atol=1e-6 are appropriate).
Always available was the use of semi-global variables in a closure or lambda expression:
y_ivp = solve_ivp(lambda t, y: Q(t, y, R, L, C), t_span=(0, t_f), y0=init_cond)
(Obsolete part) solve_ivp had no parameter-passing mechanism, so treat the parameters as global variables. You are formulating an ODE for Q; as it is a second-order ODE, the state also contains the first derivative, as you somehow recognized in the composition of the initial state. The ODE function then needs to produce the derivative values at a given state. Identify Q(t) = Q[0] and Q'(t) = Q[1], then
def Q_ode(t, Q):
    return [Q[1], (E(t) - R*Q[1] - (1/C)*Q[0])/L]
I would continue to name the variables containing Q values with the letter Q.
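For completeness, a minimal end-to-end sketch of how this could be called; the t_eval grid and tolerance values below are illustrative choices, not from the original answer:

import numpy as np
from scipy.integrate import solve_ivp

# circuit constants from the question
C, L, R = 0.0014, 1.8, 43
Q_0, dQ_0, t_f, n = 1e-06, 2.6, 2.8, 575

def E(t):
    return 230*np.sin(50*t)

def Q_ode(t, Q):
    # state: Q[0] = Q(t), Q[1] = Q'(t)
    return [Q[1], (E(t) - R*Q[1] - (1/C)*Q[0])/L]

sol = solve_ivp(Q_ode, t_span=(0, t_f), y0=[Q_0, dQ_0],
                t_eval=np.linspace(0, t_f, n), rtol=1e-6, atol=1e-9)
Q_t, dQ_t = sol.y  # charge and current sampled at sol.t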
I have a loss function I would like to try and minimize:
import numpy as np

def lossfunction(X, b, lambs):
    B = b.reshape(X.shape)
    penalty = np.linalg.norm(B, axis=1)**0.5
    return np.linalg.norm(np.dot(X, B) - X) + lambs*penalty.sum()
Gradient descent, or similar methods, might be useful. I can't calculate the gradient of this function analytically, so I am wondering how I can numerically calculate the gradient for this loss function in order to implement a descent method.
Numpy has a gradient function, but it requires me to pass a scalar field at predetermined points.
You could try scipy.optimize.minimize
For your case a sample call would be:
import scipy.optimize

# x0 is your initial guess for the quantity being optimised over
result = scipy.optimize.minimize(lossfunction, x0, args=(b, lambs), method='Nelder-Mead')
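Here is a small self-contained sketch of that idea. Note that minimize varies whatever it passes as the first argument of the objective; judging from the reshape inside lossfunction, I assume b is the quantity to optimise and X is fixed data, so I wrap the call in a lambda. The shapes and values below are made up for illustration:

import numpy as np
from scipy.optimize import minimize

def lossfunction(X, b, lambs):
    B = b.reshape(X.shape)
    penalty = np.linalg.norm(B, axis=1)**0.5
    return np.linalg.norm(np.dot(X, B) - X) + lambs*penalty.sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 3))   # fixed data (made-up shape)
b0 = np.ones(X.size)          # initial guess for the flattened B
lambs = 0.1

# minimize varies its first argument, so wrap lossfunction accordingly
result = minimize(lambda b: lossfunction(X, b, lambs), b0, method='Nelder-Mead')
print(result.x.reshape(X.shape))  # optimised B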
You could estimate the derivative numerically by a central difference:
def derivative(fun, X, b, lambs, h):
    return (fun(X + 0.5*h, b, lambs) - fun(X - 0.5*h, b, lambs))/h
And use it like this:
# assign values to X, b, lambs
# set the value of h
h = 0.001
print(derivative(lossfunction, X, b, lambs, h))
The code above is valid for dimX = 1; some modifications are needed to account for a multidimensional vector X:
def gradient(fun, X, b, lambs, h):
    res = []
    for i in range(0, len(X)):
        t1 = list(X)
        t1[i] = t1[i] + 0.5*h
        t2 = list(X)
        t2[i] = t2[i] - 0.5*h
        res = res + [(fun(t1, b, lambs) - fun(t2, b, lambs))/h]
    return res
Forgive the naivety of the code; I barely know how to write Python :-)
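To tie this back to the descent method mentioned in the question, below is a rough sketch of a plain gradient-descent loop driven by a central-difference gradient. To keep it self-contained it uses a simple quadratic toy objective instead of the lossfunction above, and the step size and iteration count are arbitrary illustrative choices:

import numpy as np

def num_gradient(fun, x, h=1e-5):
    # central-difference gradient of fun at x (x is a 1-D array)
    x = np.asarray(x, dtype=float)
    grad = np.empty_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = 0.5*h
        grad[i] = (fun(x + step) - fun(x - step))/h
    return grad

def toy_loss(x):
    # simple quadratic with minimum at (1, -2)
    return (x[0] - 1.0)**2 + (x[1] + 2.0)**2

x = np.zeros(2)
lr = 0.1  # step size (illustrative)
for _ in range(200):
    x = x - lr*num_gradient(toy_loss, x)

print(x)  # should be close to [1, -2]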
I try to compare the results of some numpy.array calculations with expected results, and I constantly get a failing comparison, but the printed arrays look the same, e.g.:
from numpy import array
import numpy.testing as npt

def test_gen_sine():
    A, f, phi, fs, t = 1.0, 10.0, 1.0, 50.0, 0.1
    expected = array([0.54030231, -0.63332387, -0.93171798, 0.05749049, 0.96724906])
    result = gen_sine(A, f, phi, fs, t)
    npt.assert_array_equal(expected, result)
prints back:
> raise AssertionError(msg)
E AssertionError:
E Arrays are not equal
E
E (mismatch 100.0%)
E x: array([ 0.540302, -0.633324, -0.931718, 0.05749 , 0.967249])
E y: array([ 0.540302, -0.633324, -0.931718, 0.05749 , 0.967249])
My gen_sine function is:
def gen_sine(A, f, phi, fs, t):
    sampling_period = 1 / fs
    num_samples = fs * t
    samples_range = (np.arange(0, num_samples) * 2 * f * np.pi * sampling_period) + phi
    return A * np.cos(samples_range)
Why is that? How should I compare the two arrays?
(I'm using numpy 1.9.3 and pytest 2.8.1)
The problem is that np.assert_array_equal returns None and performs the assertion internally, so it is incorrect to prefix it with a separate assert as you do:
assert np.assert_array_equal(x,y)
Instead, in your test you would just do something like:
import numpy as np
from numpy.testing import assert_array_equal

def test_equal():
    assert_array_equal(np.arange(0, 3), np.array([0, 1, 2]))  # No assertion raised
    assert_array_equal(np.arange(0, 3), np.array([2, 0, 1]))  # Raises AssertionError
Update:
A few comments
Don't rewrite your entire original question, because then it becomes unclear what an answer was actually addressing.
As far as your updated question, the issue is that assert_array_equal is not appropriate for comparing floating point arrays as is explained in the documentation. Instead use assert_allclose and then set the desired relative and absolute tolerances.
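For example, something along these lines (the tolerance values are illustrative; choose ones that match the precision you expect from gen_sine):

import numpy as np
from numpy.testing import assert_allclose

expected = np.array([0.54030231, -0.63332387, -0.93171798, 0.05749049, 0.96724906])
result = gen_sine(1.0, 10.0, 1.0, 50.0, 0.1)
assert_allclose(result, expected, rtol=1e-7, atol=1e-8)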
Compare the following:
par(mfrow = c(1, 2))
image(x = as.POSIXct(1:100, origin = "1970-1-1"), z = matrix(rnorm(100*100), 100))
plot(x = as.POSIXct(1:100, origin = "1970-1-1"), y = rnorm(100))
It seems that image (and so image.default) fails to take the class-defined Axis functions into account when plotting, while plot does. This is problematic, since I'm in the process of implementing some classes with custom pretty and format specifications that would have their own way of plotting an axis, so I want my own axis functions to be called when image is used, rather than always using the numeric version.
I understand there's a way around this by plotting the axis manually, calling image first with xaxt = "n", for instance. But this seems inconvenient and messy. Ideally, I'd like a solution that can just drop in over the existing function while breaking as few things as possible. Any thoughts?
The simplest way is to suppress the axes on the call to image() with axes = FALSE then add them yourself. E.g.:
set.seed(42)
X <- as.POSIXct(1:100, origin = "1970-1-1")
Z <- matrix(rnorm(100*100), 100)
image(x = X, z = Z, axes = FALSE)
axis(side = 2)
axis.POSIXct(side = 1, x = X)
box()
This can also be done using the Axis() S3 generic:
image(x = X, z = Z, axes = FALSE)
axis(side = 2)
Axis(x = X, side = 1)
box()
So to actually try to Answer the Question, I would wrap this into a function that automates the various steps:
Image <- function(x = seq(0, 1, length.out = nrow(z)),
                  y = seq(0, 1, length.out = ncol(z)),
                  z, ...) {
    image(x = x, y = y, z = z, ..., axes = FALSE)
    Axis(x = y, side = 2, ...)
    Axis(x = x, side = 1, ...)
    box()
}
Write your axis functions as S3 methods for the Axis() generic and class x and y appropriately so that your methods are called, and the above should just work. All you need to remember is to change image() to Image().
You could also write your own image() method and add your class to x to have it called instead of image.default(). It depends on whether it makes sense for x to have a class or not.
The reason I would do this is that the only way to change image.default() R-wide is to edit the function and assign it to the graphics namespace or source your version and call it explicitly. This would need to be done each and every time you started R. A custom function could easily be sourced or added to your own local package of misc functions that you arrange to load as R is starting so that it is automagically available. See ?Startup for details of how you might arrange for this.