I am trying to understand how to set the MIP gap for Gurobi.Optimizer, because solving takes too long when the default gap threshold (1e-4) is used.
I couldn't find the relevant setting anywhere (it might be referred to as an attribute or a parameter).
PS: I am also looking for a good tutorial or instructions on how to use the solver in JuMP. I checked https://juliahub.com/docs/Gurobi/do9v6/0.7.7, but it does not explain the meanings of the different attributes and inputs. Please send me a link if anybody knows of one.
Best,
A.
You can set the MIP gap via the MIPGap parameter:
using JuMP, Gurobi
model = Model(Gurobi.Optimizer)
set_optimizer_attribute(model, "MIPGap", 0.1)  # stop once the relative MIP gap is below 10%
You can read more about the Gurobi.jl wrapper here: https://github.com/jump-dev/Gurobi.jl
Related
I am attempting to solve a non-convex quadratic optimization problem using Gurobi, but I have encountered an issue. I have a particular objective function, but I am only interested in finding a feasible solution. To do this, I tried two approaches:
1- Set my objective function as the model objective and set the parameter "SolutionLimit" to 1. This works fine, and Gurobi gives me a feasible solution (a minimal sketch of this approach is shown below the question).
2- Give Gurobi no objective function (or set the objective to a constant such as 0). In this case, Gurobi does not return a feasible solution. The log it prints says:
Optimal solution found (tolerance 1.00e-04)
Warning: max constraint violation (1.5757e+01) exceeds tolerance
(model may be infeasible or unbounded - try turning presolve off)
Best objective -0.000000000000e+00, best bound -0.000000000000e+00, gap 0.0000%
I checked the solution it returned, and it is infeasible. I want the second method to work as well. I have tried modifying solver parameters (such as m.ModelSense = GRB.MAXIMIZE, m.params.MIPFocus = 3, m.params.NoRelHeurTime = 200, m.params.DualReductions = 0, m.params.Presolve = 2, and m.params.Crossover = 0) to resolve this issue, but without success. Are there any other parameters I can adjust in order to solve this problem?
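For reference, here is a minimal sketch of the first approach using the Python gurobipy API; the variables, the bilinear constraint, and the objective below are placeholders rather than my actual model:

import gurobipy as gp
from gurobipy import GRB

m = gp.Model("feasibility-check")
x = m.addVar(lb=-10, ub=10, name="x")
y = m.addVar(lb=-10, ub=10, name="y")

# Placeholder non-convex quadratic constraint.
m.addQConstr(x * y >= 1.0, name="bilinear")

m.setObjective(x + y, GRB.MINIMIZE)   # the "specific" objective function
m.Params.NonConvex = 2                # allow non-convex quadratic terms
m.Params.SolutionLimit = 1            # stop at the first feasible solution
m.optimize()

if m.SolCount > 0:
    print("feasible point:", x.X, y.X)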
This model has numerical issues; to understand more, please see Guidelines for Numerical Issues in the Gurobi Reference Manual.
I have a SCIP project that solves a binary problem with a nonlinear objective function. It works well, but for some instances I get a message saying "the best solution is not feasible", and there is some violation in the constraints (the violation is mostly very small).
To solve this issue, I added SCIP_CALL_EXC( SCIPsetRealParam(scip, "numerics/feastol", 1e-5) ) to change the default value of feastol, but I get a segmentation fault!
Following your helpful suggestion, the violation value is now much lower. My objective function has the form Min: AX + L*sqrt(BX). In the previous version, I used an auxiliary variable, say Q, such that Q^2 - L^2(BX) >= 0, and the objective function was expressed as Min: AX + Q. In the new version, I changed the inequality into an equality, and in combination with SCIPsetRealParam(scip, "numerics/feastol", 1e-8) the violations in the constraints are much lower. What more can I do to decrease the violation? Moreover, when I printed the value of feastol, I saw numerics/lpfeastol=1e-8, but lpfeastol is different from feastol! So why is lpfeastol modified instead? I see the modified value of lpfeastol printed on screen several times while SCIP is solving the problem. I appreciate your help in advance.
Maybe try changing the define of UPGSCALE in src/scip/cons_soc.c to some larger value.
The output for lpfeastol is unfortunate, but normal. Reducing feastol automatically leads to adjusting lpfeastol, too. I cannot reproduce "I printed the value of feastol, I saw numerics/lpfeastol=1e-8 but lpfeastol is different from feastol".
I'm trying to fit the LPPL model to the KLSE index to predict the most probable crash time. Many papers suggest tabu search to identify initial values for the non-linear parameters, but none of them publish their code. I have tried to fit the index with NLS and the Log-Periodic Power Law (LPPL) in R, but the resulting errors and p-values are not significant. I believe the initial values are not accurate. Can anyone help me find proper initial values?
library(tseries)
library(zoo)
ts<-get.hist.quote(instrument="^KLSE",start="2003-04-18",end="2008-01-30",quote="Close",provider="yahoo",origin="1970-01-01",compression="d",retclass="zoo")
df<-data.frame(ts)
df<-data.frame(Date=as.Date(rownames(df)),Y=df$Close)
df<-df[!is.na(df$Y),]
library(minpack.lm)
library(ggplot2)
df$days<-as.numeric(df$Date-df[1,]$Date)
# LPPL model: a + (tc - t)^m * (b + c*cos(omega*log(tc - t) + phi))
f <- function(pars, xx) {
  pars$a + (pars$tc - xx)^pars$m * (pars$b + pars$c * cos(pars$omega * log(pars$tc - xx) + pars$phi))
}
# Residuals for nls.lm (use the 'observed' argument rather than the global df$Y)
resids <- function(p, observed, xx) { observed - f(p, xx) }
nls.out <- nls.lm(par = list(a = 600, b = -266, tc = 3000, m = .5, omega = 7.8, phi = -4, c = -14),
                  fn = resids, observed = df$Y, xx = df$days,
                  control = nls.lm.control(maxiter = 1024, ftol = 1e-6, maxfev = 1e6))
par<-nls.out$par
nls.final<-nls(Y~(a+(tc-days)^m*(b+c*cos(omega*log(tc-days)+phi))),data=df,start=par,algorithm="plinear",control=nls.control(maxiter=10024,minFactor=1e-8))
summary(nls.final)
I would look at some of the newer research on this topic; there is a good trigonometric reformulation that practically guarantees a well-behaved optimization. Additionally, you can use R's built-in linear equation solver to find the linearizable parameters, so you only need to optimize over 3 dimensions (tc, m, omega). The link below should get you started. Based on recent literature and personal experience, I would strongly advise against a tabu search.
https://www.ethz.ch/content/dam/ethz/special-interest/mtec/chair-of-entrepreneurial-risks-dam/documents/dissertation/master%20thesis/MAS_final_Tuncay.pdf
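To illustrate the idea, here is a rough sketch (written in Python/numpy for brevity; the same solve can be done in R with lm() or qr.solve(), and the data below is synthetic): for fixed (tc, m, omega) the model is linear in the remaining coefficients, so they can be recovered with a single least-squares solve, leaving only (tc, m, omega) for the outer non-linear search.

import numpy as np

def lppl_linear_fit(t, y, tc, m, omega):
    # For fixed (tc, m, omega) the reformulated LPPL model
    #   y ~ A + B*(tc-t)^m + C1*(tc-t)^m*cos(omega*log(tc-t)) + C2*(tc-t)^m*sin(omega*log(tc-t))
    # is linear in (A, B, C1, C2), so they follow from ordinary least squares.
    f = (tc - t) ** m
    g = f * np.cos(omega * np.log(tc - t))
    h = f * np.sin(omega * np.log(tc - t))
    X = np.column_stack([np.ones_like(t), f, g, h])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coeffs
    return coeffs, np.sum(resid ** 2)

# Synthetic placeholder data, using the question's starting values for a, b, tc, m.
t = np.arange(1.0, 1000.0)
y = 600.0 - 266.0 * (3000.0 - t) ** 0.5
coeffs, sse = lppl_linear_fit(t, y, tc=3000.0, m=0.5, omega=7.8)
print(coeffs, sse)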
I am currently trying to solve a complementarity problem with a function that features a downward discontinuity, using the mcpsolve() function of the NLsolve package in Julia. The function is reproduced here for specific parameters, and the numbers below refer to the three panels of the figure.
Unfortunately, the algorithm does not always return the interior solution, even though it exists:
In (1), when starting at 0, the algorithm stays at 0, thinking that the boundary constraint binds,
In (2), when starting at 0, the algorithm stops right before the downward jump, even though the solution lies to the right of this point.
These problems occur regardless of the method used - trust region or Newton's method. Ideally, the algorithm would look for potential solutions in the entire set before stopping.
I was wondering if some of you had worked with similar functions, and had found a clever solution to bypass these issues. Note that
Starting to the right of the solution would not solve these problems, as they would also occur for other parametrizations - see (3) this time,
It is not known a priori where in the parameter space the particular cases occur.
As an illustrative example, consider the following piece of code. Note that this function is smoother, yet here too the algorithm cannot find the solution.
# Piecewise function with a kink at x = 1.8; the root lies at x = sqrt(7) ≈ 2.65
function f!(x, fvec)
    if x[1] <= 1.8
        fvec[1] = 0.1 * (sin(3 * x[1]) - 3)   # strictly negative on this branch
    else
        fvec[1] = 0.1 * (x[1]^2 - 7)
    end
end
NLsolve.mcpsolve(f!, [0.], [Inf], [0.], reformulation = :smooth, autodiff = true)
Once more, setting the initial value to something other than 0 would only postpone the problem. Also, as pointed out by Halirutan, fzero from the Roots package would probably work, but I'd rather use mcpsolve(), as the problem is fundamentally a complementarity problem.
Thank you very much in advance for your help.
Could someone help with the following:
How can I set a timeout for each individual test? And a timeout for the total experiment?
How can I set up a progressive strategy that eliminates/prunes a percentage of the worst-scoring branches of the search space at different stages of the experiment (while still using the current optimization algorithms)? E.g. at 30% of the total experiment budget, it could remove the worst-scoring 50% of classifiers and their entire hyperparameter branches from upcoming tests, then repeat the same process at 60%, and so on.
Thanks a lot!
Following my exchange on hyperopt's github:
There is no per-trial timeout, but hyperopt-sklearn implements its own solution by simply wrapping the objective function. Please look for "fn_with_timeout" at https://github.com/hyperopt/hyperopt-sklearn/.
from issue 210: "the optimizers are stateless, and fmin stores all state of the experiment in the trials object. So if you remove some experiments from the trials object, it's as if they never happened. use fmin's "max_evals" parameter to interrupt search as often as you need to make these sorts of modifications. It should be fine to use repeated calls with e.g. max_evals increasing by 1 every time if you want really fine grained control."
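To make the second point concrete, here is a minimal sketch (the objective and search space are placeholders) of calling fmin repeatedly with a growing max_evals against the same Trials object, so the experiment can be inspected, and if desired modified, between batches:

from hyperopt import fmin, tpe, hp, Trials, STATUS_OK

def objective(params):
    # Placeholder objective; a real one would train and score a classifier.
    x = params["x"]
    return {"loss": (x - 2.0) ** 2, "status": STATUS_OK}

space = {"x": hp.uniform("x", -10.0, 10.0)}
trials = Trials()

for max_evals in range(10, 101, 10):
    # Each call resumes the search from the state stored in `trials`.
    fmin(fn=objective, space=space, algo=tpe.suggest,
         trials=trials, max_evals=max_evals)
    # Between batches the trials object can be inspected (and pruned) before
    # the next fmin call continues the search.
    best_loss = min(t["result"]["loss"] for t in trials.trials)
    print(f"after {max_evals} evaluations, best loss = {best_loss:.4f}")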
Thanks for looking into this, @doxav. I've written some code that addresses question 1, taking part of fn_with_timeout from hyperopt-sklearn and adapting it for standard Hyperopt cost functions.
You can find it here:
https://gist.github.com/hunse/247d91d14aaa8f32b24533767353e35d