Find the difference between upper and lower bounds - gams-math

I have a MIP model named "MyModel", and I used these commands as well (before the solve statement):
file opts cplex option file/ cplex.opt /;
MyModel.Optfile =1;
putclose opts /'epgap=0' /'epagap=0';
After solving it with CPLEX, the model status was 8 (INTEGER SOLUTION: a feasible solution to a problem with discrete variables has been found).
How can I get the difference between upper and lower bounds for this feasible solution?
In other words, I want to have the gap.
Thanks

You can see it in your lst file and in the log. In both cases you should find something like this:
Solution satisfies tolerances.
MIP Solution: 21.000000 (4720 iterations, 100 nodes)
Final Solve: 21.000000 (0 iterations)
Best possible: 23.000000
Absolute gap: 2.000000
Relative gap: 0.086957
In the lst file this should be directly after the "S O L V E S U M M A R Y" and in the log you can see it at the end of the Cplex Output.
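For reference, the numbers in that log are consistent with dividing the absolute gap by the best possible bound: $|23 - 21| / 23 = 2/23 \approx 0.086957$. Be aware that solvers and interfaces differ in whether the relative gap is computed with respect to the incumbent or the best bound, so check the documentation if the exact definition matters.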
Edit: You can also calculate it inside your model like this:
Scalar gap;
* objEst holds the best possible bound, objVal the incumbent objective
gap = abs(MyModel.objEst - MyModel.objVal);
Display gap;

Related

Getting "DUAL_INFEASIBLE" when solving a very simple linear programming problem

I am solving a simple LP problem using Gurobi with dual simplex and presolve. Gurobi reports that the model is unbounded, but I cannot see why. Can anyone help me figure out what is going wrong?
I attached the log and also the content in the .mps file.
Thanks very much in advance.
Kind regards,
Hongyu.
The output log and .mps file:
Link to the .mps file: https://studntnu-my.sharepoint.com/:u:/g/personal/hongyuzh_ntnu_no/EV5CBhH2VshForCL-EtPvBUBiFT8uZZkv-DrPtjSFi8PGA?e=VHktwf
Gurobi Optimizer version 9.5.2 build v9.5.2rc0 (mac64[arm])
Thread count: 8 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 1 rows, 579 columns and 575 nonzeros
Coefficient statistics:
Matrix range [3e-02, 5e+01]
Objective range [7e-01, 5e+01]
Bounds range [0e+00, 0e+00]
RHS range [7e+03, 7e+03]
Iteration Objective Primal Inf. Dual Inf. Time
0 handle free variables 0s
Solved in 0 iterations and 0.00 seconds (0.00 work units)
Unbounded model
The easiest way to debug this is to put a bound on the objective, so the model is no longer unbounded. Then inspect the solution. This is a super easy trick that somehow few people know about.
When we do this with a bound of 100000, we see:
phi = 100000.0000
gamma[11] = -1887.4290
(the rest zero). Indeed, we can make gamma[11] as negative as we want in order to keep satisfying R0. Note that gamma[11] is not in the objective.
More advice: It is also useful to write out the LP file of the model and study that carefully. You probably would have caught the error and that would have prevented this post.
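If you want to script this trick rather than edit the model by hand, here is a minimal gurobipy sketch of the idea (the file name model.mps and the cap of 100000 are placeholders, and it assumes a maximization objective; flip the inequality for a minimization):

import gurobipy as gp
from gurobipy import GRB

m = gp.read("model.mps")            # placeholder path; use your own .mps file

# Cap the objective so the model can no longer be unbounded.
# For a minimization model use >= -100000 instead.
m.addConstr(m.getObjective() <= 100000, name="obj_cap")

m.optimize()
if m.Status == GRB.OPTIMAL:
    # Print the nonzero variables to see which ones run off toward the cap
    for v in m.getVars():
        if abs(v.X) > 1e-6:
            print(v.VarName, v.X)

m.write("model.lp")                 # human-readable LP file for inspection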

AMPL: "Bad suffix .npool for Initial" error with poolstub

I need to find a pool of solutions with AMPL (I am relatively new to it) using the option "poolstub", but I get an error when I try to retrieve them. I will try to explain everything step by step. This is my code:
option solver cplex;
model my_model.mod;
data my_data.dat;
option cplex_options 'poolstub=multmip poolcapacity=10 populate=1 poolintensity=4 poolreplace=1';
solve;
At this point AMPL gives me this:
CPLEX 20.1.0.0: poolstub=multmip
poolcapacity=10
populate=1
poolintensity=4
poolreplace=1
CPLEX 20.1.0.0: optimal solution; objective 4.153846154
66 dual simplex iterations (0 in phase I)
It seems like AMPL has not stored the solutions in the pool.
And in fact, if I try to retrieve them with this code
for {i in 1..Current.npool} {
solution ('multmip' & i & '.sol');
display _varname, _var;
}
I get this error:
Bad suffix .npool for Initial
context: for {i in >>> 1..Current.npool} <<< {
Possible suffix values for Initial.suffix:
astatus exitcode message relax
result sstatus stage
I have no integer variables, only real ones, and I read that CPLEX doesn't support the populate method for linear programs. Could this be the problem, or is something else missing? Thank you in advance.
You have identified your problem correctly. Entity Initial does not have the npool suffix, which means the solver (in your case CPLEX) did not return one.
Gurobi can return that information for linear programs, but it seems to be identical to the optimal solution, so it would not give you any extra information (more info on AMPL-Gurobi options).
Here is an example AMPL script:
model net1.mod;
data net1.dat;
option solver gurobi;
option gurobi_options 'ams_stub=allopt ams_mode=1';
solve;
for {n in 1..Total_Cost.npool} {
solution ("allopt" & n & ".sol");
display Ship;
}
Output (on my machine):
Gurobi 9.1.1: ams_stub=allopt
ams_mode=2
ams_epsabs=0.5
Gurobi 9.1.1: optimal solution; objective 1819
1 simplex iterations
Alternative MIP solution 1, objective = 1819
1 alternative MIP solutions written to "allopt1.sol"
... "allopt1.sol".
Alternative solutions do not include dual variable values.
Best solution is available in "allopt1.sol".
suffix npool OUT;
Alternative MIP solution 1, objective = 1819
Ship :=
NE BOS 90
NE BWI 60
NE EWR 100
PITT NE 250
PITT SE 200
SE ATL 70
SE BWI 60
SE EWR 20
SE MCO 50
;
The files net1.mod and net1.dat are from the AMPL book.
When solving a MIP, the solver can store sub-optimal solutions that it finds along the way, as they might be of interest to the modeler for some reason.
In terms of your LP, are you interested in the vertices the simplex algorithm visits?

In GAMS, how to deal with divisions?

In my GAMS model, I have an objective function that involves a division.
GAMS sets the initial values to zero whenever it solves something...brilliant idea, how could that possibly ever go wrong!....oh wait, now there's division by zero.
What is the approach to handle this? I have tried manually setting lower bounds such that division by zero is avoided, but then GAMS reports an "infeasible" solution.
That is wrong, since I know the model is feasible. In fact, removing the division term from my model and re-solving does produce a solution. This solution ought to be feasible for the original problem as well, since we are just adding terms to the objective.
Here are some common approaches:
set a lower bound. E.g. Z =E= X/Y, add Y.LO = 0.0001;
similarly, write something like: Z =E= X/(Y+0.0001)
set an initial value. E.g. Y.L = 1
Multiply both sides by Y: Z*Y =E= X (see the small worked illustration after this list)
For any non-linear variable you should really think carefully about bounds and initial values (irrespective of division).
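As a small worked illustration of the reformulation options above (plain algebra, nothing GAMS-specific): instead of imposing $z = x/y$ directly, one can impose $z \cdot y = x$ together with a lower bound $y \ge \epsilon > 0$. The epsilon-shift variant $z = x/(y+\epsilon)$ introduces an error of
$\frac{x}{y} - \frac{x}{y+\epsilon} = \frac{\epsilon x}{y (y+\epsilon)}$,
which is negligible as long as $\epsilon \ll y$.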
Try using the $ sign. For example: A(i,j)$C(i,j) = B(i,j) / C(i,j)

scipy.optimize.fmin_l_bfgs_b returns 'ABNORMAL_TERMINATION_IN_LNSRCH'

I am using scipy.optimize.fmin_l_bfgs_b to solve a Gaussian mixture problem. The means of the mixture distributions are modeled by regressions whose weights have to be optimized using the EM algorithm.
sigma_sp_new, func_val, info_dict = fmin_l_bfgs_b(func_to_minimize, self.sigma_vector[si][pj],
args=(self.w_vectors[si][pj], Y, X, E_step_results[si][pj]),
approx_grad=True, bounds=[(1e-8, 0.5)], factr=1e02, pgtol=1e-05, epsilon=1e-08)
But sometimes I got a warning 'ABNORMAL_TERMINATION_IN_LNSRCH' in the information dictionary:
func_to_minimize value = 1.14462324063e-07
information dictionary: {'task': b'ABNORMAL_TERMINATION_IN_LNSRCH', 'funcalls': 147, 'grad': array([ 1.77635684e-05, 2.87769808e-05, 3.51718654e-05,
6.75015599e-06, -4.97379915e-06, -1.06581410e-06]), 'nit': 0, 'warnflag': 2}
RUNNING THE L-BFGS-B CODE
* * *
Machine precision = 2.220D-16
N = 6 M = 10
This problem is unconstrained.
At X0 0 variables are exactly at the bounds
At iterate 0 f= 1.14462D-07 |proj g|= 3.51719D-05
* * *
Tit = total number of iterations
Tnf = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip = number of BFGS updates skipped
Nact = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F = final function value
* * *
N Tit Tnf Tnint Skip Nact Projg F
6 1 21 1 0 0 3.517D-05 1.145D-07
F = 1.144619474757747E-007
ABNORMAL_TERMINATION_IN_LNSRCH
Line search cannot locate an adequate point after 20 function
and gradient evaluations. Previous x, f and g restored.
Possible causes: 1 error in function or gradient evaluation;
2 rounding error dominate computation.
Cauchy time 0.000E+00 seconds.
Subspace minimization time 0.000E+00 seconds.
Line search time 0.000E+00 seconds.
Total User time 0.000E+00 seconds.
I do not get this warning every time, but only sometimes. (Most runs end with 'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL' or 'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'.)
I know that it means the minimum cannot be reached in this iteration. I googled this problem. Someone said it often occurs because the objective and gradient functions do not match. But here I do not provide a gradient function, because I am using 'approx_grad'.
What are the possible reasons that I should investigate? What does it mean by "rounding error dominate computation"?
======
I also find that the log-likelihood does not monotonically increase:
########## Convergence !!! ##########
log_likelihood_history: [-28659.725891322563, 220.49993177669558, 291.3513633060345, 267.47745327823907, 265.31567762171181, 265.07311121000367, 265.04217683341682]
It usually starts to decrease at the second or third iteration, even when 'ABNORMAL_TERMINATION_IN_LNSRCH' does not occur. I do not know whether this problem is related to the previous one.
SciPy calls the original L-BFGS-B implementation, which is some Fortran 77 (old but beautiful and superfast code), and our problem is that the descent direction is actually going up. The problem starts on line 2533 (link to the code at the bottom):
      gd = ddot(n,g,1,d,1)
      if (ifun .eq. 0) then
         gdold=gd
         if (gd .ge. zero) then
c                               the directional derivative >=0.
c                               Line search is impossible.
            if (iprint .ge. 0) then
               write(0,*)' ascent direction in projection gd = ', gd
            endif
            info = -4
            return
         endif
      endif
In other words, you are telling it to go down the hill by going up the hill. The code tries something called line search a total of 20 times in the descent direction that you provide and realizes that you are NOT telling it to go downhill, but uphill. All 20 times.
The guy who wrote it (Jorge Nocedal, who by the way is a very smart guy) put 20 because pretty much that's enough. Machine epsilon is 10E-16, I think 20 is actually a little too much. So, my money for most people having this problem is that your gradient does not match your function.
Now, it could also be that "2. rounding errors dominate computation". By this, he means that your function is a very flat surface on which increases are of the order of machine epsilon (in which case you could perhaps rescale the function).
Now, I was thinking that maybe there should be a third option, when your function is too weird. Oscillations? I could see something like $\sin({\frac{1}{x}})$ causing this kind of problem. But I'm not a smart guy, so don't assume that there's a third case.
So I think the OP's issue is that the function is too flat. Or look at the Fortran code.
https://github.com/scipy/scipy/blob/master/scipy/optimize/lbfgsb/lbfgsb.f
Here's line search for those who want to see it. https://en.wikipedia.org/wiki/Line_search
Note. This is 7 months too late. I put it here for future's sake.
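A quick way to test the "gradient does not match the function" hypothesis is scipy.optimize.check_grad, which compares an analytic gradient against a finite-difference one. A minimal sketch with a toy quadratic (the names f and grad_f are placeholders, not the OP's functions):

import numpy as np
from scipy.optimize import check_grad

def f(x):
    # toy objective: a simple quadratic bowl
    return 0.5 * np.sum(x**2)

def grad_f(x):
    # analytic gradient of f
    return x

x0 = np.array([1.0, -2.0, 3.0])
# check_grad returns the norm of the difference between the analytic gradient
# and a finite-difference approximation; it should be close to zero.
print(check_grad(f, grad_f, x0))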
As pointed out in the answer by Wilmer E. Henao, the problem is probably in the gradient. Since you are using approx_grad=True, the gradient is calculated numerically. In this case, reducing the value of epsilon, which is the step size used for numerically calculating the gradient, can help.
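For example, something along these lines (a sketch with a toy objective standing in for the real func_to_minimize; the bounds mirror the question):

import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def func_to_minimize(s):
    # toy stand-in for the real objective from the question
    return np.sum((s - 0.1) ** 2)

x0 = np.full(6, 0.25)
x, fval, info = fmin_l_bfgs_b(func_to_minimize, x0, approx_grad=True,
                              bounds=[(1e-8, 0.5)] * 6, factr=1e2, pgtol=1e-5,
                              epsilon=1e-10)  # smaller finite-difference step than the default 1e-8
print(info['task'])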
I also got the error "ABNORMAL_TERMINATION_IN_LNSRCH" using the L-BFGS-B optimizer.
My gradient function pointed in the right direction, but I had rescaled the actual gradient of the function by its L2-norm. Removing that rescaling, or adding another appropriate type of rescaling, worked. My guess is that, before, the gradient was so large that it went out of bounds immediately.
The OP's problem was unbounded, if I read it correctly, so this will certainly not help in that setting. However, googling the error "ABNORMAL_TERMINATION_IN_LNSRCH" yields this page as one of the first results, so it might help others...
I had a similar problem recently. I sometimes encounter the ABNORMAL_TERMINATION_IN_LNSRCH message after using fmin_l_bfgs_b function of scipy. I try to give additional explanations of the reason why I get this. I am looking for complementary details or corrections if I am wrong.
In my case, I provide the gradient function, so approx_grad=False. My cost function and the gradient are consistent. I double-checked it, and the optimization actually works most of the time. When I get ABNORMAL_TERMINATION_IN_LNSRCH, the solution is not optimal, not even close (even if this is a subjective point of view). I can overcome this issue by modifying the maxls argument. Increasing maxls helps to resolve it and finally reach the optimal solution. However, I noted that sometimes a smaller maxls than the one that produces ABNORMAL_TERMINATION_IN_LNSRCH results in a converging solution. A dataframe summarizes the results. I was surprised to observe this; I expected that reducing maxls would not improve the result. For this reason, I tried to read the paper describing the line search algorithm, but I had trouble understanding it.
The line "search algorithm generates a sequence of
nested intervals {Ik} and a sequence of iterates αk ∈ Ik ∩ [αmin ; αmax] according to the [...] procedure". If I understand well, I would say that the maxls argument specifies the length of this sequence. At the end of the maxls iterations (or less if the algorithm terminates in fewer iterations), the line search stops. A final trial point is generated within the final interval Imaxls. I would say the the formula does not guarantee to get an αmaxls that respects the two update conditions, the minimum decrease and the curvature, especially when the interval is still wide. My guess is that in my case, after 11 iterations the generated interval I11 is such that a trial point α11 respects both conditions. But, even though I12 is smaller and still containing acceptable points, α12 is not. Finally after 24 iterations, the interval is very small and the generated αk respects the update conditions.
Is my understanding / explanation accurate?
If so, I am surprised that, when maxls=12, α_11 is not chosen instead of α_12, since the generated α_11 is acceptable but α_12 is not.
Pragmatically, I would recommend trying a few higher values of maxls when you get ABNORMAL_TERMINATION_IN_LNSRCH.
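maxls can be passed directly to fmin_l_bfgs_b (a sketch with a toy objective and gradient, just to show the argument):

import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def f(x):
    return np.sum((x - 1.0) ** 2)

def grad_f(x):
    return 2.0 * (x - 1.0)

x0 = np.zeros(4)
# maxls (default 20) limits the number of line-search steps per iteration
x, fval, info = fmin_l_bfgs_b(f, x0, fprime=grad_f, maxls=50)
print(info['task'])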

Optimize MATLAB code (nested for loop to compute similarity matrix)

I am computing a similarity matrix based on Euclidean distance in MATLAB. My code is as follows:
for i=1:N   % N is the number of data points (columns of x); M is the number of dimensions (rows)
    for j=1:N
        D(i,j) = sqrt(sum((x(:,i)-x(:,j)).^2));   % D is the similarity (distance) matrix
    end
end
Can anyone help with optimizing this, i.e. reducing the for loops, as my matrix x is of dimension 256x30000?
Thanks a lot!
--Aditya
The function to do this in MATLAB is called pdist. Unfortunately, it is painfully slow and doesn't take MATLAB's vectorization abilities into account.
The following is code I wrote for a project. Let me know what kind of speed up you get.
Qx=repmat(dot(x,x,2),1,size(x,1));
D=sqrt(Qx+Qx'-2*x*x');
Note, though, that this will only work if your data points are in the rows and your dimensions in the columns. So, for example, say I have 256 data points and 100000 dimensions; then on my Mac, using x=rand(256,100000), the above code produces a 256x256 matrix in about half a second.
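The identity this vectorization relies on (with $x_i$ denoting the $i$-th data point, i.e. the $i$-th row of x here) is
$\lVert x_i - x_j \rVert^2 = \lVert x_i \rVert^2 + \lVert x_j \rVert^2 - 2\, x_i \cdot x_j$,
so Qx supplies the $\lVert x_i \rVert^2$ terms, Qx' the $\lVert x_j \rVert^2$ terms, and x*x' the dot products.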
There's probably a better way to do it, but the first thing I noticed was that you could cut the runtime in half by exploiting the symmetry D(i,j)==D(j,i).
You can also use the function norm(x(:,i)-x(:,j),2)
I think this is what you're looking for.
D=zeros(N);
jIndx=repmat(1:N,N,1);iIndx=jIndx';
D(:)=sqrt(sum((x(iIndx(:),:)-x(jIndx(:),:)).^2,2));
Here, I have assumed that the data matrix x is initialized as an NxM array, where M is the number of dimensions of the system and N is the number of points. So if your ordering is different, you'll have to make changes accordingly.
To start with, you are computing twice as much as you need to here, because D will be symmetric. You don't need to calculate the (i,j) entry and the (j,i) entry separately. Change your inner loop to for j=1:i, and add in the body of that loop D(j,i)=D(i,j);
After that, there's really not much redundancy left in what that code does, so your only room left for improvement is to parallelize it: if you have the Parallel Computing Toolbox, convert your outer loop to a parfor and before you run it, say matlabpool(n), where n is the number of threads to use.