How to display optimal variable values of a class-type Pyomo model? - gurobi

I am a new Pyomo/Python user, and I am just wondering how to display the optimal variable values in a class-type Pyomo model.
I have just tried the standard example from the Pyomo example library, the "min-cost-flow" model. The code is available at https://github.com/Pyomo/PyomoGallery/wiki/Min-cost-flow
At the bottom of the code, it says:
sp = MinCostFlow('nodes.csv', 'arcs.csv')
sp.solve()
print('\n\n---------------------------')
print('Cost: ', sp.m.OBJ())
and the output is
Academic license - for non-commercial use only
Read LP format model from file.
Reading time = 0.00 seconds
x8: 7 rows, 8 columns, 16 nonzeros
No parameters matching 'mip_tolerances_integrality' found
No parameters matching 'mip_tolerances_mipgap' found
Optimize a model with 7 rows, 8 columns and 16 nonzeros
Coefficient statistics:
Matrix range [1e+00, 1e+00]
Objective range [1e+00, 5e+00]
Bounds range [0e+00, 0e+00]
RHS range [1e+00, 1e+00]
Presolve removed 7 rows and 8 columns
Presolve time: 0.00s
Presolve: All rows and columns removed
Iteration Objective Primal Inf. Dual Inf. Time
0 5.0000000e+00 0.000000e+00 0.000000e+00 0s
Solved in 0 iterations and 0.01 seconds
Optimal objective 5.000000000e+00
I can only get the optimal objective, but what about the optimal variable values? I have also searched the documentation, which suggests using something like:
print("x[2]=", pyo.value(model.x[2]))
But that did not work for class-type models like the min-cost-flow model.
I also tried to modify the function definition in the class:
def solve(self):
    """Solve the model."""
    solver = pyomo.opt.SolverFactory('gurobi')
    results = solver.solve(self.m, tee=True, keepfiles=False, options_string="mip_tolerances_integrality=1e-9, mip_tolerances_mipgap=0")
    print('\n\n---------------------------')
    print('First Variable: ', self.m.Y[0])
But that did not work either. The output is:
KeyError: "Index '0' is not valid for indexed component 'Y'"
Can you help me with this? Thanks!
Gabriel

The most straightforward way to display the model results after solving is to use the model.display() function. In your case, self.m.display().
The display() function works on Var objects as well, so if you have a variable self.m.x, you could do self.m.x.display().
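If you want individual values rather than the full dump, you can also iterate over the variable and call pyo.value() on each index. Here is a minimal sketch, assuming the flow variable is self.m.Y indexed by the arc set (tuples of node names), as in the gallery model; the KeyError in the question comes from using the integer 0 where a tuple key is expected:

import pyomo.environ as pyo

sp = MinCostFlow('nodes.csv', 'arcs.csv')
sp.solve()

# Dump everything (variables, objective, constraints):
sp.m.display()

# Dump only the flow variable:
sp.m.Y.display()

# Or pull individual values; the keys are arc tuples, not integers:
for arc in sp.m.Y:
    print(arc, pyo.value(sp.m.Y[arc]))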

Related

How to perform dynamic optimization for a nonlinear discrete optimization problem with nonlinear constraints, using non-linear solvers like SNOPT?

I am new to the field of optimization and I need help with the following optimization problem. I have tried to solve it using normal coding to make sure that I got the correct results. However, the results I got are different, and I am not sure whether my analysis is correct. This is a short description of the problem:
The objective function shown in the picture is used to find the optimal temperature of the insulating system that minimizes the total cost over a given horizon.
[This image provides the mathematical description of the objective function and the constraints](https://i.stack.imgur.com/yidrO.png)
The data of the problem are as follows:
1- Problem data:
A=1.07×10^8
h=1
T_ref=87.5
N=20
p1=0.001;
p2=0.0037;
This is the curve I want to obtain
2- Optimization variable:
u_t
3- Model type:
The model is a nonlinear cost function with non-linear constraints and it is solved using non-linear solver SNOPT.
4- The meaning of the symbols in the objective and constraint functions:
The optimization is performed over a prediction horizon of N years.
T_ref is The reference temperature.
X_DP represents the degree of polymerization in the kth year.
u_t represents the temperature of the insulating system in the kth year.
h is the time step (1 year) of the discrete-time model.
R is the ratio of the load loss at the rated load to the no-load loss.
E is the activation energy.
A is the pre-exponential constant.
beta is a linear coefficient representing the cost due to the decrement of the temperature.
I have developed the source code in MATLAB; this code is used to check whether my analysis is correct.
I have tried to initialize the Ut value in its increasing or decreasing state so that I can obtain curves similar to the original one. [This is the curve I obtained](https://i.stack.imgur.com/KVv2q.png)
I have tried to simulate the problem using conventional coding without optimization and I got the figure shown above.
close all; clear all;
h=1;
N=20;
a=250;
R=8.314;
A=1.07*10^8;
E=111000;
Tref=87.5;
p1=0.0019;
p2=0.0037;
p3=0.0037;
Utt=[80,80.7894736842105,81.5789473684211,82.3684210526316,83.1578947368421,... % The value of Utt given here represent the temperature increament over a predictive horizon.
83.9473684210526,84.7368421052632,85.5263157894737,86.3157894736842,...
87.1052631578947,87.8947368421053,88.6842105263158,89.4736842105263,...
90.2631578947369,91.0526315789474,91.8421052631579,92.6315789473684,...
93.4210526315790,94.2105263157895,95];
Utt1 = [95,94.2105263157895,93.4210526315790,92.6315789473684,91.8421052631579,... % The value of Utt1 given here represent the temperature decreament over a predictive horizon.
91.0526315789474,90.2631578947369,89.4736842105263,88.6842105263158,...
87.8947368421053,87.1052631578947,86.3157894736842,85.5263157894737,...
84.7368421052632,83.9473684210526,83.1578947368421,82.3684210526316,...
81.5789473684211,80.7894736842105,80];
Ut1=zeros(1,N);
Ut2=zeros(1,N);
Xdp =zeros(N,N);
Xdp(1,1)=1000;
Xdp1 =zeros(N,N);
Xdp1(1,1)=1000;
for L=1:N-1
    for k=1:N-1
        %vt(k+L)=Ut(k-L+1);
        Xdq(k+1,L) =(1/Xdp(k,L))+A*exp((-1*E)/(R*(Utt(k)+273)))*24*365*h;
        Xdp(k+1,L)=1/(Xdq(k+1,L));
        Xdp(k,L+1)=1/(Xdq(k+1,L));
        Xdq1(k+1,L) =(1/Xdp1(k,L))+A*exp((-1*E)/(R*(Utt1(k)+273)))*24*365*h;
        Xdp1(k+1,L)=1/(Xdq1(k+1,L));
        Xdp1(k,L+1)=1/(Xdq1(k+1,L));
    end
end
% MATLAB code
for j =1:N-1
    Ut1(j)= -p1*(Utt(j)-Tref);
    Ut2(j)= -p2*(Utt1(j)-Tref);
end
sum00=sum(Ut1);
sum01=sum(Ut2);
X1=1./Xdp(:,1);
Xf=1./Xdp(:,20);
Total= table(X1,Xf);
Tdiff =a*(Total.Xf-Total.X1);
X22=1./Xdp1(:,1);
X2f=1./Xdp1(:,20);
Total22= table(X22,X2f);
Tdiff22 =a*(Total22.X2f-Total22.X22);
obj=(sum00+(Tdiff));
ob1 = min(obj);
obj2=sum01+Tdiff22;
ob2 = min(obj2);
plot(Utt,obj,'-o');
hold on
plot(Utt1,obj)
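For reference, the degradation recursion that the nested loops above implement appears to be the following (reconstructed from the MATLAB code, not from the linked image, so treat it as an interpretation):

$$\frac{1}{X_{DP,k+1}} = \frac{1}{X_{DP,k}} + A\,\exp\!\left(\frac{-E}{R\,(u_k + 273)}\right)\cdot 24\cdot 365\cdot h, \qquad X_{DP,1} = 1000,$$

and the plotted objective combines the per-year cost terms $-p_1\,(u_k - T_{ref})$ with a term proportional to the change in $1/X_{DP}$ over the horizon, scaled by $a$.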

Getting "DUAL_INFEASIBLE" when solving a very simple linear programming problem

I am solving a simple LP problem with Gurobi, using dual simplex and presolve. Gurobi reports that the model is unbounded, but I cannot see why it should be. Can anyone help me figure out what goes wrong?
I attached the log and also the content in the .mps file.
Thanks very much in advance.
Kind regards,
Hongyu.
The output log and .mps file:
Link to the .mps file: https://studntnu-my.sharepoint.com/:u:/g/personal/hongyuzh_ntnu_no/EV5CBhH2VshForCL-EtPvBUBiFT8uZZkv-DrPtjSFi8PGA?e=VHktwf
Gurobi Optimizer version 9.5.2 build v9.5.2rc0 (mac64[arm])
Thread count: 8 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 1 rows, 579 columns and 575 nonzeros
Coefficient statistics:
Matrix range [3e-02, 5e+01]
Objective range [7e-01, 5e+01]
Bounds range [0e+00, 0e+00]
RHS range [7e+03, 7e+03]
Iteration Objective Primal Inf. Dual Inf. Time
0 handle free variables 0s
Solved in 0 iterations and 0.00 seconds (0.00 work units)
Unbounded model
The easiest way to debug this is to put a bound on the objective, so the model is no longer unbounded. Then inspect the solution. This is a super easy trick that somehow few people know about.
When we do this with a bound of 100000, we see:
phi = 100000.0000
gamma[11] = -1887.4290
(the rest zero). Indeed we can make gamma[11] as negative as we want to obey R0. Note that gamma[11] is not in the objective.
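In case it helps, this is how the trick can be applied programmatically with gurobipy; a rough sketch, assuming a linear objective and using an illustrative file name and bound value:

import gurobipy as gp

m = gp.read("model.mps")          # illustrative file name
obj = m.getObjective()            # the linear objective expression
# Cap the objective so the model becomes bounded
# (use >= -1e5 instead if the model is a minimization)
m.addConstr(obj <= 1e5, name="obj_bound")
m.optimize()

# See which variables run off toward the artificial bound
for v in m.getVars():
    if abs(v.X) > 1e-6:
        print(v.VarName, v.X)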
More advice: It is also useful to write out the LP file of the model and study that carefully. You probably would have caught the error and that would have prevented this post.

SCIP what is the function for sign?

I am new to SCIP and have read through some of the example problems and documentation, but am still unsure how to formulate the following problem for the SCIP solver:
argmax(w) sum(sign(Aw) == sign(b))
where A is an n×m matrix, w is an m×1 vector, and b is an n×1 vector. The data type is floats/real numbers, and it is a constraint-free problem.
Values for A and b are also contained row-wise in a .txt file. How can I import that?
Overall, I am new to SCIP and have no idea how to start: creating variables (especially the objective function value parameter), importing data, formulating the objective function... It's a bit of a stretch for me to ask this question, but your help is appreciated!
This should work, with beta(i) = sign(b(i)): introduce a binary delta(i) for each row and maximize sum(i, delta(i)) subject to the implication delta(i) = 1 => beta(i)*sum(j, a(i,j)*w(j)) >= 0.
The implication can be implemented using indicator constraints. This way we don't need big-M's.
Most likely the >= 0 constraint should be >= 0.0001 (otherwise we can set all w(j) = 0).
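Written out, the formulation sketched above would look roughly like this (a reconstruction from the description, with the 0.0001 threshold already substituted for the plain >= 0):

$$\max_{w,\,\delta}\ \sum_{i=1}^{n} \delta_i \quad \text{s.t.} \quad \delta_i = 1 \;\Rightarrow\; \beta_i \sum_{j=1}^{m} a_{ij} w_j \ge 0.0001, \qquad \delta_i \in \{0,1\}, \quad \beta_i = \operatorname{sign}(b_i).$$

And, for the "how do I create variables and import data" part of the question, here is a rough PySCIPOpt sketch. It is assumption-laden: the file layout (A stored row-wise in data.txt with b as the last column) is hypothetical, and it assumes addConsIndicator accepts a "<="-form linear inequality together with the binary variable passed as binvar:

import numpy as np
from pyscipopt import Model, quicksum

# Hypothetical layout: each row of data.txt holds a row of A followed by b(i)
data = np.loadtxt("data.txt")
A, b = data[:, :-1], data[:, -1]
n, m_cols = A.shape
beta = np.sign(b)

model = Model("sign_matching")
w = [model.addVar(lb=None, name=f"w{j}") for j in range(m_cols)]       # free continuous variables
delta = [model.addVar(vtype="B", name=f"d{i}") for i in range(n)]      # 1 if row i's sign matches

for i in range(n):
    # delta[i] = 1  =>  beta[i] * (A[i,:] @ w) >= 0.0001, rewritten in <= form
    model.addConsIndicator(
        quicksum(float(-beta[i] * A[i, j]) * w[j] for j in range(m_cols)) <= -0.0001,
        binvar=delta[i])

model.setObjective(quicksum(delta), sense="maximize")
model.optimize()
print("matched rows:", sum(round(model.getVal(d)) for d in delta))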

AMPL IPOPT gives wrong optimal solution while solve result is "solved"

I am trying to solve a very simple optimization problem in AMPL with IPOPT as follows:
var x1 >= 0 ;
minimize obj: -(x1^2)+x1;
Obviously the problem is unbounded, but IPOPT gives me:
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************
This is Ipopt version 3.12.4, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).
Number of nonzeros in equality constraint Jacobian...: 0
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 1
Total number of variables............................: 1
variables with only lower bounds: 1
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 9.8999902e-03 0.00e+00 2.00e-02 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 1.5346023e-04 0.00e+00 1.50e-09 -3.8 9.85e-03 - 1.00e+00 1.00e+00f 1
2 1.7888952e-06 0.00e+00 1.84e-11 -5.7 1.52e-04 - 1.00e+00 1.00e+00f 1
3 -7.5005506e-09 0.00e+00 2.51e-14 -8.6 1.80e-06 - 1.00e+00 1.00e+00f 1
Number of Iterations....: 3
(scaled) (unscaled)
Objective...............: -7.5005505996934397e-09 -7.5005505996934397e-09
Dual infeasibility......: 2.5091040356528538e-14 2.5091040356528538e-14
Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 2.4994494940593761e-09 2.4994494940593761e-09
Overall NLP error.......: 2.4994494940593761e-09 2.4994494940593761e-09
Number of objective function evaluations = 4
Number of objective gradient evaluations = 4
Number of equality constraint evaluations = 0
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 3
Total CPU secs in IPOPT (w/o function evaluations) = 0.001
Total CPU secs in NLP function evaluations = 0.000
EXIT: Optimal Solution Found.
Ipopt 3.12.4: Optimal Solution Found
suffix ipopt_zU_out OUT;
suffix ipopt_zL_out OUT;
ampl: display x1;
x1 = 0
When I change the solver to Gurobi, it gives this message:
Gurobi 6.5.0: unbounded; variable.unbdd returned.
which is what I expected.
I cannot understand why this happens, and now I don't know whether I need to check every problem I solve to make sure it has not converged to a wrong "optimal" solution. As this is a super simple example, it is a little bit strange.
I would appreciate if anybody can help me with this.
Thanks
You've already identified the basic problem, but elaborating a little on why these two solvers give different results:
IPOPT is designed to cover a wide range of optimisation problems, so it uses some fairly general numeric optimisation methods. I'm not familiar with the details of IPOPT, but usually this sort of approach relies on picking a starting point, looking at the curvature of the objective function in the neighbourhood of that starting point, and following the curvature "downhill" until it finds a local optimum. Different starting points can lead to different results. In this case IPOPT is probably defaulting to zero for the starting point, so it's right on top of that local minimum. As Erwin suggested, if you specify a different starting point it might find the unboundedness.
Gurobi is designed specifically for quadratic/linear problems, so it uses very different methods which aren't susceptible to local-minimum issues, and it will probably be much more efficient for quadratics. But it doesn't support more general objective functions.
I think I understand why it happened. The objective function
-(x1^2)+x1;
is not convex, so the solution IPOPT returns is only a local optimum.
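A quick first-order check makes this concrete; this is just the KKT condition for the model as written:

$$f(x_1) = -x_1^2 + x_1, \qquad f'(x_1) = -2x_1 + 1, \qquad f'(0) = 1 > 0.$$

At $x_1 = 0$ the lower bound $x_1 \ge 0$ is active and the objective increases when moving into the interior, so $x_1 = 0$ genuinely is a local minimizer, and IPOPT's "Optimal Solution Found" is correct in the local sense. Globally, $f(x_1) \to -\infty$ as $x_1 \to \infty$, so the problem is unbounded. Starting IPOPT from a point where $f'(x_1) < 0$ (any $x_1 > 0.5$, e.g. via "let x1 := 10;") should send it off toward infinity instead, which is one way to expose the unboundedness.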

scipy.optimize.fmin_l_bfgs_b returns 'ABNORMAL_TERMINATION_IN_LNSRCH'

I am using scipy.optimize.fmin_l_bfgs_b to solve a Gaussian mixture problem. The means of the mixture distributions are modeled by regressions whose weights have to be optimized using the EM algorithm.
sigma_sp_new, func_val, info_dict = fmin_l_bfgs_b(
    func_to_minimize, self.sigma_vector[si][pj],
    args=(self.w_vectors[si][pj], Y, X, E_step_results[si][pj]),
    approx_grad=True, bounds=[(1e-8, 0.5)], factr=1e02, pgtol=1e-05, epsilon=1e-08)
But sometimes I got a warning 'ABNORMAL_TERMINATION_IN_LNSRCH' in the information dictionary:
func_to_minimize value = 1.14462324063e-07
information dictionary: {'task': b'ABNORMAL_TERMINATION_IN_LNSRCH', 'funcalls': 147, 'grad': array([ 1.77635684e-05, 2.87769808e-05, 3.51718654e-05,
6.75015599e-06, -4.97379915e-06, -1.06581410e-06]), 'nit': 0, 'warnflag': 2}
RUNNING THE L-BFGS-B CODE
* * *
Machine precision = 2.220D-16
N = 6 M = 10
This problem is unconstrained.
At X0 0 variables are exactly at the bounds
At iterate 0 f= 1.14462D-07 |proj g|= 3.51719D-05
* * *
Tit = total number of iterations
Tnf = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip = number of BFGS updates skipped
Nact = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F = final function value
* * *
N Tit Tnf Tnint Skip Nact Projg F
6 1 21 1 0 0 3.517D-05 1.145D-07
F = 1.144619474757747E-007
ABNORMAL_TERMINATION_IN_LNSRCH
Line search cannot locate an adequate point after 20 function
and gradient evaluations. Previous x, f and g restored.
Possible causes: 1 error in function or gradient evaluation;
2 rounding error dominate computation.
Cauchy time 0.000E+00 seconds.
Subspace minimization time 0.000E+00 seconds.
Line search time 0.000E+00 seconds.
Total User time 0.000E+00 seconds.
I do not get this warning every time, but sometimes. (Most get 'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL' or 'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH').
I know that it means the minimum cannot be reached in this iteration. I googled this problem. Someone said it often occurs because the objective and gradient functions do not match. But here I do not provide a gradient function, because I am using approx_grad.
What are the possible reasons that I should investigate? What does "rounding errors dominate computation" mean?
======
I also find that the log-likelihood does not monotonically increase:
########## Convergence !!! ##########
log_likelihood_history: [-28659.725891322563, 220.49993177669558, 291.3513633060345, 267.47745327823907, 265.31567762171181, 265.07311121000367, 265.04217683341682]
It usually starts to decrease at the second or third iteration, even when 'ABNORMAL_TERMINATION_IN_LNSRCH' does not occur. I do not know whether this problem is related to the previous one.
Scipy calls the original L-BFGS-B implementation, which is some Fortran 77 (old but beautiful and super-fast code), and our problem is that the descent direction is actually going up. The problem starts on line 2533 (link to the code at the bottom):
          gd = ddot(n,g,1,d,1)
          if (ifun .eq. 0) then
             gdold=gd
             if (gd .ge. zero) then
c                               the directional derivative >=0.
c                               Line search is impossible.
                if (iprint .ge. 0) then
                   write(0,*)' ascent direction in projection gd = ', gd
                endif
                info = -4
                return
             endif
          endif
In other words, you are telling it to go down the hill by going up the hill. The code tries something called line search a total of 20 times in the descent direction that you provide and realizes that you are NOT telling it to go downhill, but uphill. All 20 times.
The guy who wrote it (Jorge Nocedal, who by the way is a very smart guy) put 20 because pretty much that's enough. Machine epsilon is 10E-16, I think 20 is actually a little too much. So, my money for most people having this problem is that your gradient does not match your function.
Now, it could also be that "2. rounding errors dominate computation". By this, he means that your function is a very flat surface on which increases are of the order of machine epsilon (in which case you could perhaps rescale the function).
Now, I was thinking that maybe there should be a third option, when your function is too weird. Oscillations? I could see something like $\sin({\frac{1}{x}})$ causing this kind of problem. But I'm not a smart guy, so don't assume that there's a third case.
So I think the OP's issue is probably that the function is too flat. Or look at the Fortran code.
https://github.com/scipy/scipy/blob/master/scipy/optimize/lbfgsb/lbfgsb.f
Here's line search for those who want to see it. https://en.wikipedia.org/wiki/Line_search
Note. This is 7 months too late. I put it here for future's sake.
As pointed out in the answer by Wilmer E. Henao, the problem is probably in the gradient. Since you are using approx_grad=True, the gradient is calculated numerically. In this case, reducing the value of epsilon, which is the step size used for numerically calculating the gradient, can help.
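For example, a minimal sketch of passing a smaller epsilon (the objective here is a stand-in, not the OP's function):

import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def func_to_minimize(x):
    # Stand-in objective; replace with your own
    return np.sum((x - 0.3) ** 2)

x0 = np.array([0.1, 0.2])
x_opt, f_val, info = fmin_l_bfgs_b(
    func_to_minimize, x0,
    approx_grad=True, bounds=[(1e-8, 0.5)] * len(x0),
    factr=1e2, pgtol=1e-5,
    epsilon=1e-10)   # smaller finite-difference step than the default 1e-8
print(info['task'])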
I also got the error "ABNORMAL_TERMINATION_IN_LNSRCH" using the L-BFGS-B optimizer.
While my gradient function pointed in the right direction, I had rescaled the actual gradient of the function by its L2-norm. Removing that, or adding another appropriate type of rescaling, worked. My guess is that before, the gradient was so large that it went out of bounds immediately.
The problem from OP was unbounded if I read correctly, so this will certainly not help in this problem setting. However, googling the error "ABNORMAL_TERMINATION_IN_LNSRCH" yields this page as one of the first results, so it might help others...
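To make the rescaling point concrete, here is a hypothetical sketch (not the answerer's actual code): the second fprime returns the gradient divided by its L2-norm, which is no longer the gradient of f, i.e. exactly the function/gradient mismatch described above, and it can confuse the line search.

import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def f(x):
    return np.sum((x - 0.3) ** 2)

def grad_ok(x):
    return 2 * (x - 0.3)                          # true gradient of f

def grad_rescaled(x):
    g = 2 * (x - 0.3)
    return g / (np.linalg.norm(g) + 1e-12)        # rescaled: not the gradient of f any more

x0 = np.array([5.0, -5.0])
for fprime in (grad_ok, grad_rescaled):
    _, _, info = fmin_l_bfgs_b(f, x0, fprime=fprime)
    print(fprime.__name__, info['task'], info['warnflag'])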
I had a similar problem recently. I sometimes encounter the ABNORMAL_TERMINATION_IN_LNSRCH message when using scipy's fmin_l_bfgs_b function. Here I try to give an additional explanation of why I get it; I am looking for complementary details, or corrections if I am wrong.
In my case, I provide the gradient function, so approx_grad=False. My cost function and the gradient are consistent. I double-checked this, and the optimization actually works most of the time. When I get ABNORMAL_TERMINATION_IN_LNSRCH, the solution is not optimal, not even close (even if this is a subjective point of view). I can overcome the issue by modifying the maxls argument: increasing maxls helps to finally reach the optimal solution. However, I noted that sometimes a maxls smaller than the one that produces ABNORMAL_TERMINATION_IN_LNSRCH also results in a converging solution. A dataframe summarizes the results. I was surprised to observe this; I expected that reducing maxls would not improve the result. For this reason, I tried to read the paper describing the line search algorithm, but I had trouble understanding it.
The line "search algorithm generates a sequence of
nested intervals {Ik} and a sequence of iterates αk ∈ Ik ∩ [αmin ; αmax] according to the [...] procedure". If I understand well, I would say that the maxls argument specifies the length of this sequence. At the end of the maxls iterations (or less if the algorithm terminates in fewer iterations), the line search stops. A final trial point is generated within the final interval Imaxls. I would say the the formula does not guarantee to get an αmaxls that respects the two update conditions, the minimum decrease and the curvature, especially when the interval is still wide. My guess is that in my case, after 11 iterations the generated interval I11 is such that a trial point α11 respects both conditions. But, even though I12 is smaller and still containing acceptable points, α12 is not. Finally after 24 iterations, the interval is very small and the generated αk respects the update conditions.
Is my understanding / explanation accurate?
If so, I am surprised that when maxls=12, given that α11 is acceptable but α12 is not, α11 is not chosen in this case instead of α12.
Pragmatically, I would recommend trying a few larger maxls values when you get ABNORMAL_TERMINATION_IN_LNSRCH.
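For instance, a minimal sketch (the objective and the value 50 are illustrative, not taken from the case above):

import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def f(x):
    return np.sum(x ** 4 - 3 * x ** 2 + x)

x0 = np.zeros(5)
# maxls caps the number of line-search steps per iteration (the default is 20)
x_opt, f_val, info = fmin_l_bfgs_b(f, x0, approx_grad=True, maxls=50)
print(info['task'], info['warnflag'])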