How to minimize my function over one parameter while holding the others constant - optimization

So I am trying to calculate the value of a parameter, beta, by minimizing the chi-square function. To do this, I'm using the scipy.optimize.minimize() function. I can't seem to get the code to do what I want. Is there a way to do this? I am open to other ways of approaching the problem.
For some background, the variables vr, rms and delta are all 1D tuples of the same length, and zeff, H and beta are scalar parameters. I am trying to calculate an optimized beta value.
def chisq(beta, vr, delta, rvs, rms, zeff, H):
    c = -(H/(1+zeff))*(beta/3)
    model = c*np.multiply(rms, delta)
    q = (vr - model)**2
    p = model**-1
    ratio = np.multiply(p, q)
    chisq = np.sum(ratio)
    return chisq
initial_guess = 0.47663662075855323
res = opt.minimize(chisq,initial_guess,args = (beta,delta,rvs,rms,zeff,H))
I usually get an error saying the dimensions of the arguments don't match what the minimize() function expects.

In your case beta is the optimization variable: minimize() passes its current value to chisq as the first positional argument, so args must contain only the remaining fixed parameters, not beta itself:
res = opt.minimize(chisq, x0=initial_guess, args=(vr, delta, rvs, rms, zeff, H))
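Here is a minimal runnable sketch of the corrected setup; the array contents, zeff and H below are made-up stand-ins for the real data, chosen only so the objective has a well-behaved minimum:

import numpy as np
import scipy.optimize as opt

# made-up stand-ins for the real data (illustrative values only)
vr    = np.array([1.0, 2.0, 3.0])
delta = np.array([-0.1, -0.2, -0.3])
rvs   = np.array([1.0, 1.0, 1.0])   # unused inside chisq, kept for the signature
rms   = np.array([0.5, 0.6, 0.7])
zeff, H = 0.5, 70.0

def chisq(beta, vr, delta, rvs, rms, zeff, H):
    c = -(H/(1+zeff))*(beta/3)
    model = c*np.multiply(rms, delta)
    return np.sum((vr - model)**2 / model)

initial_guess = 0.47663662075855323
res = opt.minimize(chisq, x0=initial_guess, args=(vr, delta, rvs, rms, zeff, H))
print(res.x)   # the optimized beta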

Related

Julia - Defining a Linear Programming Problem with MathOptInterface

I'd like to write an LP problem in the standard form with MathOptInterface, i.e.:
min c'*x
s.t. A*x .== b
     x >= 0
Now, how can one write this problem with MathOptInterface? I'm having many issues; one of them is how to define the variable "model". For example, if I try to run:
x = add_variables(model,3)
I first would need to declare this model variable. But I don't know how one is supposed to do this in MathOptInterface.
IIUC in your situation model has to be an argument to be specified by the user of your function.
The user can then pass GLPK.Optimizer(), Tulip.Optimizer() or any other optimizer inheriting from MathOptInterface.AbstractOptimizer.
See e.g. the section "A complete example" in the MOI manual.
Alternatively you can look at MOI.Utilities.Model but I don't know how to get an optimizer to solve that model.
Here is how to implement the LP solver for standard Simplex format:
function SolveLP(c, A, b, model::MOI.ModelLike)
    x = MOI.add_variables(model, length(c))
    MOI.set(model, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
            MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(c, x), 0.0))
    MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)
    # x >= 0
    for xi in x
        MOI.add_constraint(model, MOI.SingleVariable(xi), MOI.GreaterThan(0.0))
    end
    # A*x == b, row by row
    for (i, row) in enumerate(eachrow(A))
        row_function = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(row, x), 0.0)
        MOI.add_constraint(model, row_function, MOI.EqualTo(b[i]))
    end
    MOI.optimize!(model)
    p = MOI.get(model, MOI.VariablePrimal(), x)
    return p
end
For the model, just choose something like GLPK.Optimizer()
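For readers coming from Python, the same standard-form LP can be sketched with scipy.optimize.linprog; the data below is made up purely for illustration:

import numpy as np
from scipy.optimize import linprog

# illustrative data for: min c'x  s.t.  A x == b,  x >= 0
c = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.0, -1.0]])
b = np.array([1.0, 0.5])

# linprog's default bounds are x >= 0, which matches the standard form
res = linprog(c, A_eq=A, b_eq=b)
print(res.x, res.fun)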

How to solve mixed differential equations? Or how to assign dPsdt = 0 in the first iteration so that it gets a value in later iterations

The error generated looks like this: UnboundLocalError: local variable 'dPsdt' referenced before assignment
dL1dt = (m_si*(h1-hf) + pi*di*alphai_1*L1*(Tm1-Ts1) - d13*dPsdt)/d11
dL2dt = (m_si*hf - m_so*hg + pi*di*alphai_2*L2*(Tm2-Ts2) - d21*dL1dt - d23*dPsdt
         - d24*dhodt)/d22
dPsdt = (m_so*(hg-ho) + pi*di*alphai_3*L3*(Tm3-Ts3) - d31*dL1dt - d32*dL2dt
         - d34*dhodt)/d33
dhodt = (m_si - m_so - (d41*dL1dt) - (d42*dL2dt) - (d43*dPsdt))/d44
dzdt = [dL1dt, dL2dt, dPsdt, dhodt, dTm1dt, dTm2dt, dTm3dt, dTp1dt, dTp2dt, dTp3dt]
return dzdt
It seems that your derivatives are implicitly defined by some linear system
A*dxdt = b
which you are trying to solve via Gauss-Seidel iteration. You would have to actually implement that as an iteration, that is, several passes over the system of equations. Note that you need a convergence condition such as diagonal dominance for that to work at all.
But for these small dimensions you can be faster and more exact by using
dxdt = np.linalg.solve(A, b)
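Concretely, inside the right-hand-side function you can collect the d_ij coefficients into a matrix and move every derivative term to the left-hand side, then solve once per call. A minimal sketch under that assumption (the numbers below are made up; in your code both A and b are computed from the current state):

import numpy as np

def solve_implicit_derivatives(A, b):
    # A holds the d_ij coefficients (with d12 = d14 = 0 in your equations),
    # b holds the non-derivative terms of each equation; one call replaces
    # the circular assignments of dL1dt, dL2dt, dPsdt and dhodt
    return np.linalg.solve(np.asarray(A, float), np.asarray(b, float))

# illustrative call with made-up, diagonally dominant numbers
A = [[4.0, 0.0, 1.0, 0.0],
     [1.0, 5.0, 1.0, 1.0],
     [1.0, 1.0, 6.0, 1.0],
     [1.0, 1.0, 1.0, 7.0]]
b = [1.0, 2.0, 3.0, 4.0]
dL1dt, dL2dt, dPsdt, dhodt = solve_implicit_derivatives(A, b)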

Singular Value Decomposition algorithm not working

I wrote the following function to perform SVD according to page 45 of the Deep Learning book by Ian Goodfellow et al.
def SVD(A):
    #A^T
    AT = np.transpose(A)
    #AA^T
    AAT = A.dot(AT)
    #A^TA
    ATA = AT.dot(A)
    #Left singular vectors
    LSV = np.linalg.eig(AAT)[1]
    U = LSV #some values of U have the wrong sign
    #Right singular vectors
    RSV = np.linalg.eig(ATA)[1]
    V = RSV
    V[:,0] = V[:,0] #V isn't arranged properly
    values = np.sqrt(np.linalg.eig(ATA)[0])
    #descending order
    values = np.sort(values)[::-1]
    rows = A.shape[0]
    columns = A.shape[1]
    D = np.zeros((rows,columns))
    np.fill_diagonal(D,values)
    return U, D, V
However for any given matrix the results are not the same as using
np.linalg.svd(A)
and I have no idea why.
I tested my algorithm by checking whether
abs(UDV^T - A) < 0.0001
holds elementwise, and it doesn't, so the matrix is not being decomposed properly. The problem seems to lie with the U and V components, but I can't see what's going wrong. D seems to be correct.
If anyone can see the problem it would be much appreciated.
I think you have a problem with the order of the eigenpairs that eig(ATA) and eig(AAT) return. The documentation of np.linalg.eig states that no order is guaranteed. Replacing eig by eigh, which returns the eigenpairs in ascending order of eigenvalue, should help. Also, don't re-sort the values afterwards: sorting the values alone breaks their correspondence with the columns of U and V.
By the way, eigh is specific to symmetric (Hermitian) matrices, such as the ones you are passing here, and will not return complex values when the input matrix is real.
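On top of the ordering, the signs of the eigenvectors are arbitrary, and two independent eigendecompositions will generally not pick matching signs for U and V. One way around this is to take only V and the singular values from eigh(ATA) and recover U as A·v/σ. The following is a sketch of that variant (the reduced "thin" SVD, assuming A has full column rank), not the book's exact recipe:

import numpy as np

def svd_via_eigh(A):
    vals, V = np.linalg.eigh(A.T @ A)   # eigenpairs in ascending order
    vals, V = vals[::-1], V[:, ::-1]    # reorder values AND vectors together
    sigma = np.sqrt(np.clip(vals, 0.0, None))
    U = (A @ V) / sigma                 # pairs each u_i with v_i, fixing signs
    return U, np.diag(sigma), V

A = np.random.rand(4, 3)
U, D, V = svd_via_eigh(A)
assert np.allclose(U @ D @ V.T, A)      # the reconstruction test from the question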

How to define a function with 3 variables in GNUPLOT

I am new to GNUPLOT. I am trying to plot 3d vector fields. However I am having trouble defining a function of three variables f(x,y,z). Can anyone show me how to do this correctly?
Defining your own functions in gnuplot is pretty intuitive. According to the gnuplot documentation the syntax is as follows
<func-name>( <dummy1> {,<dummy2>} ... {,<dummy5>} ) = <expression>
Examples:
w = 2
q = floor(tan(pi/2 - 0.1))
f(x) = sin(w*x)
sinc(x) = sin(pi*x)/(pi*x)
delta(t) = (t == 0)
ramp(t) = (t > 0) ? t : 0
min(a,b) = (a < b) ? a : b
comb(n,k) = n!/(k!*(n-k)!)
len3d(x,y,z) = sqrt(x*x+y*y+z*z)
plot f(x) = sin(x*a), a = 0.2, f(x), a = 0.4, f(x)
There is also a large set of built-in mathematical functions which you can use (in the definition of your own function).
For piecewise-defined functions you can use the fact that undefined values are ignored. Therefore, the function
y(x) = x < 0 ? 1/0 : x
is only defined for non-negative arguments.
Powers are defined by **. Hence f(x)=x*x is identical to f(x)=x**2
If you have still problems in defining your own function, please feel free to ask. (Shouldn't a 3d-function only depend on x and y, i.e., f(x,y)=...?)
For examples of 3d plots, also see the gnuplot demo site.

Matlab: how do I run the optimization (fmincon) repeatedly?

I am trying to follow the tutorial of using the optimization tool box in MATLAB. Specifically, I have a function
f = exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1)+b
subject to the constraint:
(x(1))^2+x(2)-1=0,
-x(1)*x(2)-10<=0.
and I want to minimize this function for a range of b=[0,20]. (That is, I want to minimize this function for b=0, b=1,b=2 ... and so on).
Below are the steps taken from MATLAB's tutorial webpage (http://www.mathworks.com/help/optim/ug/nonlinear-equality-and-inequality-constraints.html). How should I change the code so that the optimization runs once for each value of b and saves the optimal values?
Step 1: Write a file objfun.m.
function f = objfun(x)
f = exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1)+b;
Step 2: Write a file confuneq.m for the nonlinear constraints.
function [c, ceq] = confuneq(x)
% Nonlinear inequality constraints
c = -x(1)*x(2) - 10;
% Nonlinear equality constraints
ceq = x(1)^2 + x(2) - 1;
Step 3: Invoke constrained optimization routine.
x0 = [-1,1]; % Make a starting guess at the solution
options = optimoptions(@fmincon,'Algorithm','sqp');
[x,fval] = fmincon(@objfun,x0,[],[],[],[],[],[],...
    @confuneq,options);
After 21 function evaluations, the solution produced is
x, fval
x =
-0.7529 0.4332
fval =
1.5093
Update:
I tried your answer, but I am encountering a problem with your step 2. Basically, I just filled my code into your step 2 (below the comment "optimization just like before").
%initialize list of targets
b = 0:1:20;
%preallocate/initialize result vectors using zeros (increases speed)
opt_x = zeros(length(b));
opt_fval = zeros(length(b));
for idx = 1, length(b)
    objfun = @(x)objfun_builder(x,b)
    %optimization just like before
    x0 = [-1,1]; % Make a starting guess at the solution
    options = optimoptions(@fmincon,'Algorithm','sqp');
    [x,fval] = fmincon(@objfun,x0,[],[],[],[],[],[],...
        @confuneq,options);
    %end the stuff I fill in
    opt_x(idx) = x
    opt_fval(idx) = fval
end
However, it gave me the output is:
Error: "objfun" was previously used as a variable, conflicting
with its use here as the name of a function or command.
See "How MATLAB Recognizes Command Syntax" in the MATLAB
documentation for details.
There are two things you need to change about your code:
Creation of the objective function.
Multiple optimizations using a loop.
1st Step
For more flexibility with regard to b, you need to set up another function that takes b as an additional input, e.g.
function f = objfun_builder(x, b)
f = exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1) + b;
end
Binding b with an anonymous function (see the 2nd step below) then yields an objective in the single variable x, which is what fmincon expects.
A more elegant and shorter approach is an anonymous function, e.g.
objfun_builder = @(x,b)(exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1) + b);
This works out to exactly the same as above, though it might be less intuitive for a MATLAB beginner.
2nd Step
Instead of placing an .m-file objfun.m in your path, you will need to call
fcn = @(x)(objfun_builder(x, myB));
to create an objective function in your workspace (here myB stands for the current value of b). Note that the handle is deliberately not named objfun: reusing that name as a variable collides with its use as a function name, which is exactly the error you quoted. In order to loop over the interval b=[0,20], use the following loop:
%initialize list of targets
b = 0:1:20;
%preallocate/initialize result vectors using zeros (increases speed)
opt_x = zeros(length(b), 2);    % one row per b, two components of x
opt_fval = zeros(length(b), 1);
%start optimization over the list of targets (`b`s)
for idx = 1:length(b)
    fcn = @(x)(objfun_builder(x, b(idx)));
    %optimization just like before
    x0 = [-1,1]; % Make a starting guess at the solution
    options = optimoptions(@fmincon,'Algorithm','sqp');
    [x,fval] = fmincon(fcn,x0,[],[],[],[],[],[],@confuneq,options);
    opt_x(idx,:) = x;
    opt_fval(idx) = fval;
end
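For reference, the same sweep can be written in Python with scipy.optimize.minimize; the following is an analogous sketch under SLSQP, not part of the original MATLAB tutorial:

import numpy as np
from scipy.optimize import minimize

def objfun(x, b):
    return np.exp(x[0])*(4*x[0]**2 + 2*x[1]**2 + 4*x[0]*x[1] + 2*x[1] + 1) + b

# SLSQP conventions: 'eq' means fun(x) == 0, 'ineq' means fun(x) >= 0,
# so -x1*x2 - 10 <= 0 becomes x1*x2 + 10 >= 0
constraints = [{'type': 'eq',   'fun': lambda x: x[0]**2 + x[1] - 1},
               {'type': 'ineq', 'fun': lambda x: x[0]*x[1] + 10}]

bs = np.arange(0, 21)
opt_x = np.zeros((len(bs), 2))
opt_fval = np.zeros(len(bs))
for i, b in enumerate(bs):
    res = minimize(objfun, x0=[-1.0, 1.0], args=(b,),
                   method='SLSQP', constraints=constraints)
    opt_x[i], opt_fval[i] = res.x, res.fun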