Model is infeasible in Gurobi although it has a feasible solution

I am attempting to solve a non-convex quadratic optimization problem with Gurobi, but I have run into an issue. I do have a specific objective function in mind; however, I am only interested in finding a feasible solution. I tried two approaches:
1- Set my objective function as the model objective and set the parameter SolutionLimit to 1. This works fine, and Gurobi gives me a feasible solution.
2- Give Gurobi no objective function (or set the objective to an arbitrary constant such as 0). In this case, Gurobi does not return a feasible solution. The log it prints says:
Optimal solution found (tolerance 1.00e-04)
Warning: max constraint violation (1.5757e+01) exceeds tolerance
(model may be infeasible or unbounded - try turning presolve off)
Best objective -0.000000000000e+00, best bound -0.000000000000e+00, gap 0.0000%
I checked the solution it returned, and it is infeasible. I want the second method to work as well. I have tried modifying solver parameters (such as m.ModelSense = GRB.MAXIMIZE, m.params.MIPFocus = 3, m.params.NoRelHeurTime = 200, m.params.DualReductions = 0, m.params.Presolve = 2, and m.params.Crossover = 0) to resolve the issue, without success. Are there any other parameters I can adjust to solve this problem?
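For concreteness, here is a minimal gurobipy sketch of the two setups, on a toy model standing in for the real one (the variables and constraint below are hypothetical):

import gurobipy as gp
from gurobipy import GRB

m = gp.Model("toy")                       # hypothetical stand-in for the real model
x = m.addVar(lb=-10, ub=10, name="x")
y = m.addVar(lb=-10, ub=10, name="y")
m.addConstr(x * y == 4, name="bilinear")  # non-convex quadratic constraint
m.params.NonConvex = 2                    # allow non-convex quadratics

# Approach 1: real objective, stop at the first incumbent
m.setObjective(x + y, GRB.MINIMIZE)
m.params.SolutionLimit = 1
m.optimize()

# Approach 2: constant objective, i.e. a pure feasibility problem
m.setObjective(0, GRB.MINIMIZE)
m.optimize()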

This model has numerical issues; to understand more, please see the Guidelines for Numerical Issues section in the Gurobi Reference Manual.
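As a first diagnostic (a sketch, assuming the gurobipy interface): print the solution quality report, then re-solve with maximum numerical emphasis and a tight feasibility tolerance before hunting for more exotic parameters:

# After m.optimize() on the feasibility version of the model:
m.printQuality()                # reports max constraint/bound/integrality violations

m.reset()
m.params.NumericFocus = 3       # maximum numerical care
m.params.FeasibilityTol = 1e-9  # tightest allowed feasibility tolerance
m.optimize()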

Related

Why does Gurobi's best bound for the objective go above the optimal objective value?

I am solving an integer programming problem in Gurobi, minimizing the objective. I know that there is an integer solution with objective 322.48, and I verified this by fixing that solution via constraints and solving with Gurobi. However, when solving, Gurobi's best bound starts at 322.48 but always moves beyond it, even reaching 323.80, when in fact I know that a better solution exists. Does anyone know why this would be happening?
I run Gurobi with the following parameters:
opt.Params.Method = 2
opt.Params.Threads = 1
opt.Params.MIPFocus = 1
opt.Params.MIPGap = 1e-9
opt.Params.IntegralityFocus = 1
opt.Params.IntFeasTol = 1e-9
opt.Params.FeasibilityTol = 1e-9
When I set Params.Method to its default, the relaxation objective itself comes out as 323.60, which is incorrect, as I previously verified that there is a solution with objective 322.48. Something is off; does anyone have any idea what?
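One way to reproduce the verification step, assuming the gurobipy interface and a hypothetical known_solution dict mapping variable names to the known values: feed the known point as a MIP start rather than fixing it with constraints, so the log reports whether it is accepted under the current tolerances:

# known_solution: hypothetical dict {variable name: known value}
for v in m.getVars():
    v.Start = known_solution[v.VarName]  # seed the known point as a MIP start
m.optimize()
# The log reports whether the MIP start was accepted as an incumbent or
# rejected (e.g. because it violates a constraint under current tolerances).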

Set the gap for Gurobi.Optimizer in Julia-JuMP

I am trying to understand how to set the gap for Gurobi.Optimizer, because solving takes too long when the default gap threshold (1e-4) is used.
I couldn't find it anywhere (it might be referred to as an attribute or a parameter).
PS: I am also trying to find the right tutorial or instructions on how to use the solver in JuMP. I checked https://juliahub.com/docs/Gurobi/do9v6/0.7.7, but it doesn't explain the meanings of the different attributes and inputs. Please send me one if you know of any.
Best,
A.
You can set the MIP gap via the MIPGap parameter:
using JuMP, Gurobi
model = Model(Gurobi.Optimizer)
set_optimizer_attribute(model, "MIPGap", 0.1)
You can read more about JuMP here: https://github.com/jump-dev/Gurobi.jl
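Note that MIPGap is a relative gap, so 0.1 tells the solver to stop once the incumbent is proven within 10% of the best bound; the Gurobi default is 1e-4.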

Segmentation fault when changing the feastol default value

I have a SCIP project to solve a binary problem with a nonlinear objective function. It works well, but for some instances I get a message saying "the best solution is not feasible", and there is some violation in the constraints (the violation is mostly very small).
To solve this issue, I added SCIP_CALL_EXC( SCIPsetRealParam(scip, "numerics/feastol", 1e-5) ) to change the default value of feastol. But then I get a segmentation fault!
Following your helpful suggestion, the violation value is now much lower. My objective function has the form min AX + L*sqrt(BX). In the previous version I used an auxiliary variable, say Q, such that Q^2 - L^2(BX) >= 0, and expressed the objective as min AX + Q. In the new version I changed the inequality into an equality, and in combination with SCIPsetRealParam(scip, "numerics/feastol", 1e-8) the violations in the constraints are much lower. What more can I do to decrease the violation? Moreover, when I printed the value of feastol, I see numerics/lpfeastol = 1e-8, but lpfeastol is different from feastol! So why is lpfeastol modified instead? I see the modified value of lpfeastol printed on screen several times while SCIP is solving the problem. I appreciate your help in advance.
Maybe try changing the define of UPGSCALE in src/scip/cons_soc.c to some larger value.
The output for lpfeastol is unfortunate, but normal: reducing feastol automatically leads to adjusting lpfeastol, too. I cannot reproduce "I printed the value of feastol, I see numerics/lpfeastol = 1e-8 but lpfeastol is different from feastol".
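For readers using SCIP's Python interface (PySCIPOpt) rather than the C API, the equivalent parameter change looks like this (the instance file name is hypothetical):

from pyscipopt import Model

m = Model()
m.readProblem("instance.cip")              # hypothetical instance file
m.setRealParam("numerics/feastol", 1e-8)   # tighten the feasibility tolerance
m.optimize()
print(m.getStatus())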

Maximum Likelihood Estimation of a log function with several parameters

I am trying to find out the parameters for the function below:
$$
\log L(\alpha,\beta,v) = \frac{v}{\beta}\left(e^{-\beta T} - 1\right) + \frac{\alpha}{\beta} \sum_{i=1}^{N}\left(e^{-\beta(T-t_i)} - 1\right) + \sum_{i=1}^{N}\log\left(v e^{-\beta t_i} + \alpha \sum_{j=1}^{j_{\max}(t_i)} e^{-\beta(t_i - t_j)}\right).
$$
However, conventional methods like fmin and fminsearch do not converge properly. Any suggestions on other methods or open libraries I could use?
I was trying CVXPY, but it doesn't support division by a variable in the expression.
The problem may not be convex (I have not verified this, but it could be why CVXPY refused it). We don't have the data, so we cannot try things out, but I can give some general advice (a sketch follows below):
Provide exact gradients (and second derivatives if needed), or use a modeling system with automatic differentiation. The first derivatives especially should be quite precise; with finite differences you may lose half the precision.
Provide a good starting point, perhaps obtained with an alternative estimation method.
Some solvers can use bounds on the variables to restrict the region where the functions are evaluated. This can be used to confine the search to interesting areas and also to protect operations like division and log.
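Here is a minimal sketch of that advice in Python with scipy.optimize, implementing the displayed log-likelihood directly. It assumes the event times t_i are sorted in [0, T], reads j_max(t_i) as "events strictly before t_i", and uses placeholder data; exact gradients (as recommended above) could be added by hand or via an autodiff library, but are omitted for brevity:

import numpy as np
from scipy.optimize import minimize

def negloglik(params, t, T):
    # Negative of the log-likelihood above; the inner sum over j runs
    # over the events strictly before t_i (our reading of j_max(t_i)).
    alpha, beta, v = params
    term1 = v / beta * (np.exp(-beta * T) - 1.0)
    term2 = alpha / beta * np.sum(np.exp(-beta * (T - t)) - 1.0)
    term3 = 0.0
    for i in range(len(t)):
        excite = np.sum(np.exp(-beta * (t[i] - t[:i])))  # empty sum is 0 for i = 0
        term3 += np.log(v * np.exp(-beta * t[i]) + alpha * excite)
    return -(term1 + term2 + term3)

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 50))   # placeholder event times
T = 10.0
res = minimize(negloglik, x0=[0.5, 1.0, 1.0], args=(t, T),
               method="L-BFGS-B",
               bounds=[(1e-6, None)] * 3)  # keep alpha, beta, v positive
print(res.x, res.success)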

Why could a SCIPcopy model be infeasible when the original model is feasible?

I'm new to SCIP, so I'm not sure if this is a bug or if I'm just doing something wrong.
I have a MIP instance that solves perfectly using SCIP; however, when I try to solve a copy of the model, SCIP says that it is infeasible. The effect seems to be more noticeable when presolve is turned off.
I'm using Windows with the pre-built SCIP v3.2.0. The model only has binary and integer variables.
The following code outlines my attempt:
SCIP *_scip, *subscip;
SCIPcreate(&_scip);
SCIPincludeDefaultPlugins(_scip);
SCIPcreateProbBasic(_scip, "interval_solver"); // create an empty problem
SCIPsetPresolving(_scip, SCIP_PARAMSETTING_OFF, true); //disable presolving
// build model (snipped)
SCIPsolve(_scip); // succeeds and gives feasible solution
SCIP_Bool valid = FALSE;
SCIPcreate(&subscip);
SCIPcopy(_scip, subscip, NULL, NULL, "1", TRUE, FALSE, TRUE, &valid);
SCIPsolve(subscip); // infeasible
Something that might be related (and seems weird to me) is that after solving the original problem (and getting a feasible solution), checking that solution reports it as infeasible, i.e.
SCIP_SOL* sol = SCIPgetBestSol(_scip);
SCIPcheckSol(_scip, sol, TRUE, TRUE, TRUE, TRUE, &valid);
gives:
solution value 1 violates bounds of <t_x71_(6,1275,6805)_(9,1275,6805)>[-0,0] by 1
Any ideas why this could be happening? Thanks!
Propagation in SCIP may take into account the best solution known so far and do reductions which are only valid for the problem of finding a solution better than this.
For example, if you have a minimization problem with n variables x_1,...,x_n with objective coefficients c_1,...,c_n >= 0 and already found a solution with x_1 = 1, x_2 = ... = x_n = 0, then propagation will globally fix x_1 to 0, because the objective of any solution with x_1 = 1 will be at least as large as the objective of the solution you already found.
This means that the solutions found so far may not be feasible anymore for the remaining problem (which looks for a strictly better solution).
In order to check the solution, you should check it in the original problem space, which you can do with SCIPcheckSolOrig().
Disabling presolving and propagation might help, but does not guarantee that the globally presolved problem is unchanged. Presolving in the LP solver should not be the problem, but it might have changed the reported optimal LP solution (if there are multiple optima) and therefore caused a change in the solving process. This might have avoided your issue in this case, but probably by pure luck, and the issue may still appear on other instances. Moreover, the more features you disable, the larger the negative impact on performance.
However, there is an easy solution to your problem: You can copy the original unchanged problem by using SCIPcopyOrig().
Some of the variable bounds were still being presolved. To fix the issue I needed to add:
SCIPsetBoolParam(_scip, "lp/presolving", FALSE);
This fixed most things, but the following also helped fix some 'check solution' issues:
SCIPsetIntParam(_scip, "propagating/maxrounds", 0);
SCIPsetIntParam(_scip, "propagating/maxroundsroot", 0);