What is the maximum number of constraints in MiniZinc? (optimization)

I haven't been able to find any reference for the maximum number of variables and constraints that MiniZinc's solvers can handle. Specifically, I'm interested in MiniZinc's MIP solver. I've been getting stack overflow errors on my Mac with 8 GB of RAM when I have about 15k constraints and about 1,000 variables. Does anyone know whether that is close to MiniZinc's real limits?
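Before concluding the model is too big, it can be worth checking whether the crash is really memory exhaustion or just the default per-process stack limit (commonly 8 MB), which a deep flattening pass can exhaust long before 8 GB of RAM runs out. A minimal Unix-only sketch using Python's standard resource module (the actual limit values are system-dependent):

```python
import resource

# Inspect the current stack limits; -1 means unlimited.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
print(soft, hard)

# Raise the soft limit to the hard limit. A MiniZinc process launched
# from here (e.g. via subprocess) would inherit the raised limit.
resource.setrlimit(resource.RLIMIT_STACK, (hard, hard))
```

If the errors disappear with a larger stack, the 15k-constraint model was never near a hard MiniZinc limit.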

It looks like MiniZinc was crashing due to too many constraints. I was able to model my problem using another open-source MIP solver/optimization framework called SCIP. I had to learn to model it in SCIP's mathematical modeling language, ZIMPL.

Related

GUROBI only uses a single core to set up a problem with CVXPY (Python)

I have a large MILP that I build with CVXPY and want to solve with GUROBI. When I call CVXPY's solve() function, setup takes a really long time and solving does not start for hours. While this is happening, only one core of my cluster is being used, at 100%. I would like to use multiple cores to build the model so that the build step does not take so long. Running grbprobe shows that Gurobi knows about the other cores, and it does use multiple cores for solving.
I have tried running with different flags, e.g. turning presolve on and off, or setting the number of threads to use (this did not seem to have any effect, even for the solving).
I have also reduced the number of constraints in the problem, and it started solving much faster, which means this is definitely not a problem with the model itself.
In its normal state the problem has about 2,200 constraints; when I reduced it to 150, it took only a couple of seconds until it started searching for a solution.
The problem is that I don't see anything, since it takes so long to reach the "set username parameters" line, and I get no information about what the computer is doing in the meantime.
Is there a way to tell GUROBI or CVXPY that it can use more CPUs for the build-up?
Is there another way to solve this problem?
Sorry. The first part of the solve (CVXPY model generation, setup, presolving, scaling, solving the root, preprocessing) is almost completely serial. The parallel part begins when the solver really starts working on the branch-and-bound tree. For many problems the parallel part is by far the most expensive, but not for all.
This is not only the case for Gurobi. Other high-end solvers have the same behavior.
There are options to do less presolving and preprocessing. That may get you into the branch-and-bound phase earlier. However, it is usually better not to touch these options.
Running things with verbose=True may give you more information. If you have more detailed questions, you may want to share the log.

Performance of SCIP: how many variables and constraints can SCIP deal with, and how much time will it take to solve?

I'm new to SCIP, and I have a large-scale MINLP with about 500,000 integer variables, 500,000 linear constraints, and 100,000 nonlinear constraints.
I have read a lot of papers about the performance of SCIP, but can't find how many variables and constraints SCIP can deal with.
One of the papers I found reports the number of solved problems, but not the number of variables and constraints; it is linked below.
https://link.springer.com/content/pdf/10.1007%2Fs11081-018-9411-8.pdf
Is there any experience or paper I can refer to on how many variables and constraints SCIP can deal with, and how much time SCIP will take to solve?
There is hardly a limit on the size of the instances you can pass to SCIP (ignoring some limits imposed by the programming languages) - or any other MIP solver, for that matter. Whether you can solve an instance in acceptable time and without exceeding your memory is mainly a question of the computing resources at your disposal.
So, I'd say: Just give it a try!
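As a back-of-envelope check of whether an instance even fits in memory, you can estimate the constraint-matrix storage. The density and bytes-per-nonzero figures below are assumptions for illustration, not SCIP internals:

```python
def estimate_matrix_memory_mb(n_constraints, n_vars, density=0.01,
                              bytes_per_nonzero=16):
    """Rough sparse constraint-matrix storage estimate in MB.

    Assumes (row, col, value) triplet storage: an 8-byte double plus
    two 4-byte integer indices per nonzero. Real solvers add working
    memory on top of this (LP basis, cuts, the search tree).
    """
    nonzeros = n_constraints * n_vars * density
    return nonzeros * bytes_per_nonzero / 1e6

# The MINLP from the question: ~500k variables, ~600k constraints.
print(estimate_matrix_memory_mb(600_000, 500_000))
```

At 1% density this instance's matrix alone would need tens of gigabytes, which suggests the practical bottleneck is hardware, not any SCIP-imposed limit.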

AnyLogic Custom Experiment stuck without any error

I am using a Custom Experiment for optimization in AnyLogic 8.7.2. My model optimizes the usage of a set of resources (Resource Pool) to meet a certain pre-defined demand. I define my own decision variables and objective function. This Custom Experiment works fine when my model is small (fewer agents and hence fewer decision variables). However, as the model grows, the optimization engine seems to get stuck with no warning or error message.
I have tried a few things, like increasing the heap size and making the upper bounds of the decision variables smaller (varied from 10 to 15), but none of these seems to work.
Any workaround or help on what could be causing this would be really useful. Thanks.

Gurobi resume optimization after model modification

As far as I know, Gurobi resumes optimizing where it left off after calling Model.Terminate() and then calling Model.Optimize() again, so I can terminate, get the best solution so far, and then proceed. Now I want to do the same, but since I want to use parts of the suboptimal solution, I need to fix some variables to values before I call Model.Optimize() again and optimize the rest of the model. How can I do this so that Gurobi does not start all over again?
First, it sounds like you're describing a mixed-integer program (MIP); model modification is different for continuous optimization (linear programming, quadratic programming).
When you modify a MIP model, the tree information is no longer helpful. Instead, you must resolve the continuous (LP) relaxation and create a new branch-and-cut tree. However, the prior solution may still be used as a MIP start, which can reduce the solve time for the second model.
However, your method may duplicate what the RINS heuristic already does automatically inside Gurobi's MIP solver. You can control its behavior via the parameters RINS, SubMIPNodes, and Heuristics.
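The effect of fixing part of an incumbent can be seen even in a toy enumeration. The sketch below is a pure-Python illustration with made-up knapsack data, not Gurobi's API; in gurobipy you would instead set a variable's Start attribute (MIP start) or pin it by setting its LB and UB to the same value:

```python
from itertools import product

def solve_knapsack(values, weights, capacity, fixed=None):
    """Brute-force 0/1 knapsack. `fixed` maps index -> 0/1 for variables
    pinned to values taken from a previous incumbent; only the free
    variables are enumerated, so the search shrinks from 2^n points
    to 2^(n - len(fixed))."""
    fixed = dict(fixed or {})
    free = [i for i in range(len(values)) if i not in fixed]
    best_val, best_x = -1, None
    for bits in product((0, 1), repeat=len(free)):
        x = dict(fixed)
        x.update(zip(free, bits))
        weight = sum(weights[i] for i, on in x.items() if on)
        if weight <= capacity:
            value = sum(values[i] for i, on in x.items() if on)
            if value > best_val:
                best_val, best_x = value, x
    return best_val, best_x

values, weights, capacity = [6, 5, 4, 3], [4, 3, 2, 2], 6
best_val, best_x = solve_knapsack(values, weights, capacity)
# Pin two variables to the incumbent's values, then re-optimize the rest:
partial = {0: best_x[0], 1: best_x[1]}
refixed_val, _ = solve_knapsack(values, weights, capacity, fixed=partial)
print(best_val, refixed_val)
```

Fixing variables restricts the feasible region, so the restricted optimum can be worse than the true one; a MIP start keeps the full search space and only seeds the incumbent, which is why it is usually the safer choice after a model modification.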

Speeding up Binary Integer programming model

Can anyone give me some tips to make a binary integer programming model faster?
I currently have a model that runs well with a very small number of variables, but as soon as I increase the number of variables, SCIP keeps running without giving me an optimal solution. I'm currently using SCIP with SoPlex to find an optimal solution.
You should have a look at the statistics (type "display statistics" in the interactive shell). Watch out for time-consuming heuristics that don't find a solution, and try disabling them. You should also experiment with the parameters to find settings better suited to your instances (a different branching rule or node selection). Without further information, though, we won't be able to help you.