How to access domain information on a node in SCIP?

I am currently solving a MILP (mixed-integer linear program) with SCIP. As the branch-and-bound tree grows, it contains many subproblems, each represented as a SCIP_NODE.
How can I access the domain information of a subproblem (SCIP_NODE)? I would like to sample some feasible (though not necessarily optimal) solutions for that subproblem.
Thanks a lot.

Sorry for the late answer.
You have to be aware that SCIP does not store the whole subproblem in each SCIP_NODE; each node only records its own bound changes relative to its parent. While a node is the current focus node, you can access all of its domain information by calling, e.g., `SCIPvarGetLbLocal()`.
When SCIP switches to a different node during solving, it computes the path to the new node in the tree and then applies all bound changes along the way.
You could do the same for a SCIP_NODE that is not the focus node.
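As an illustration of that path-replay idea, here is a sketch in plain Python with hypothetical classes (this is not the SCIP C API): each node stores only its own bound changes, and the full local domain of a node is reconstructed by applying all bound changes on the path from the root.

```python
# Toy illustration of reconstructing a node's local domains in a
# branch-and-bound tree: each node stores only its own bound changes,
# and the full local domain is obtained by replaying all bound changes
# on the root-to-node path. The classes are hypothetical, for
# illustration only.

class Node:
    def __init__(self, parent=None, bound_changes=None):
        self.parent = parent
        # list of (var_name, 'lb' or 'ub', value) applied at this node
        self.bound_changes = bound_changes or []

def local_domains(node, global_domains):
    """Return {var: (lb, ub)} valid at `node`."""
    # collect the path root -> node
    path = []
    while node is not None:
        path.append(node)
        node = node.parent
    path.reverse()
    domains = dict(global_domains)  # start from the global/root bounds
    for n in path:
        for var, which, value in n.bound_changes:
            lb, ub = domains[var]
            if which == 'lb':
                domains[var] = (max(lb, value), ub)
            else:
                domains[var] = (lb, min(ub, value))
    return domains

root = Node()
left = Node(parent=root, bound_changes=[('x', 'ub', 0)])   # branch x <= 0
leaf = Node(parent=left, bound_changes=[('y', 'lb', 1)])   # branch y >= 1
print(local_domains(leaf, {'x': (0, 1), 'y': (0, 1)}))
# {'x': (0, 0), 'y': (1, 1)}
```

Once the local domains of a node are known, you can sample candidate solutions within those bounds and check them against the remaining constraints.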

Related

Using Bonmin, Couenne and Ipopt for NLP

I want to be sure that I can use Bonmin and Couenne for solving a pure NLP problem (I have no integer variables yet), and I want to obtain a global optimum, not a local one. I also read that Ipopt first searches for the global solution and, if it does not find it, provides a local one. How can I tell whether my answer is a global optimum when using Ipopt? Also, what are the best open-source NLP and MINLP solvers usable from Python that can be combined with Pyomo?
The main reason for my question is the following output using Bonmin:
NOTE: You are using Ipopt by default with the MUMPS linear solver.
Other linear solvers might be more efficient (see Ipopt documentation).
Regards
Some notes:
(1) "Ipopt first search for the global answer and if it does not find that it will provide a local answer" This is probably not how I would phrase it. IPOPT finds local solutions. For some problems these will be the global solution. For convex problems, this is always the case (except for numerical issues).
(2) Bonmin is a local MINLP solver, Couenne is a global NLP/MINLP solver. Typically Bonmin can solve larger problems than Couenne, but you get local solutions.
(3) "NOTE: You are using Ipopt by default with the MUMPS linear solver. Other linear solvers might be more efficient (see Ipopt documentation)." This is just a notification that you are using IPOPT with linear algebra routines from MUMPS. There are other linear sub-solvers that IPOPT can use, and they may perform better on large problems. Often the Harwell (HSL) routines, with names like MA27 or MA57, give better performance. MUMPS is free, while the Harwell routines require a license.
In a follow-up post (really a comment rather than an answer) it is stated:
Regarding Ipopt, how can I tell whether it has found the global
solution or only a local optimum? Will the code notify me? Regarding
Bonmin, the AMPL page says it provides the global solution for convex
problems: "Finds globally optimal solutions to convex nonlinear
problems in continuous and discrete variables, and may be applied
heuristically to nonconvex problems." You were saying that it obtains
a local solution, so I am a bit confused about this part. But the
general question about all these codes is: how can I find out whether
the answer is a global optimum?
(a) Ipopt does not know whether a solution is a local or a global optimum. For convex problems a local optimum is also a global optimum. You will need to convince yourself that the problem you pass to Ipopt is convex (Ipopt will not do this for you).
(b) Bonmin: the same: if the problem is convex it will find global solutions. Otherwise you will get a local solution. You will get no notification whether a solution is a global solution: Bonmin does not know if a solution is a global optimum.
(c) When looking for guaranteed global solutions you can use a local solver only when the problem is convex. For other problems you need a global solver. Another approach is to use a multi-start algorithm with a local solver. That gives you confidence that you are not ending up with a bad local optimum.
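As a minimal illustration of the multi-start idea from point (c): run a local solver from many random starting points and keep the best result. The toy "local solver" below is a naive step-halving descent on a nonconvex 1-D function; in practice you would use a real local solver such as Ipopt (e.g. via Pyomo) in its place.

```python
import random

def f(x):
    # nonconvex: one local minimum near x = 1.97, the global one near x = -2.03
    return x**4 - 8*x**2 + x

def local_descent(x, step=0.5, tol=1e-6):
    """Very naive local minimizer: move downhill, halving the step
    when no improving neighbour exists. Converges to the nearest
    local minimum, which may not be the global one."""
    while step > tol:
        if f(x - step) < f(x):
            x -= step
        elif f(x + step) < f(x):
            x += step
        else:
            step /= 2
    return x

# multi-start: many random starting points, keep the best local result
random.seed(0)
starts = [random.uniform(-4, 4) for _ in range(20)]
best = min((local_descent(x0) for x0 in starts), key=f)
print(round(best, 2), round(f(best), 2))  # near -2.03, f about -18.0
```

This gives confidence, not a guarantee: only a global solver (or a convexity argument) can certify global optimality.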
If possible, I suggest discussing this with your teacher. These concepts are important to understand (and most solver manuals assume you know them).

Convergence of an ant colony algorithm

I use ant colony optimization to solve a problem. In my case, at each iteration, n ants are generated from n nodes (one ant per node per iteration). I obtain solutions that satisfy the constraints of the problem, but I don't achieve convergence (for example, over 30 iterations, the best solution is found at iteration 8 or 9). I want to know whether using only a single ant per node at each iteration is the problem. Also, must an ant colony algorithm converge to an equilibrium state?
Thank you in advance.
Convergence and divergence of heuristic algorithms is a very broad topic. Your problem type, dimension and parameters all affect the behaviour of the algorithm. For basic information about ACO algorithms, you should study the paper at http://iridia.ulb.ac.be/IridiaTrSeries/rev/IridiaTr2009-013r001.pdf.
After that, you should ask a question based on https://stackoverflow.com/help/mcve.
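As background, the pheromone-update rule at the heart of most ACO variants (Ant System style) can be sketched as below. The evaporation rate rho is a key knob for convergence versus stagnation: high rho forgets old solutions quickly, low rho reinforces early tours. The names rho and Q are the usual textbook parameters, not taken from your code.

```python
# Sketch of the standard ACO pheromone update: evaporation on every
# edge, followed by deposits from the ants of the current iteration.

def update_pheromones(tau, ant_tours, rho=0.5, Q=1.0):
    """tau: dict edge -> pheromone level.
    ant_tours: list of (tour_edges, tour_cost) pairs."""
    for edge in tau:
        tau[edge] *= (1.0 - rho)          # evaporation
    for edges, cost in ant_tours:
        for edge in edges:
            tau[edge] += Q / cost         # cheaper tours deposit more
    return tau

tau = {('a', 'b'): 1.0, ('b', 'c'): 1.0}
update_pheromones(tau, [([('a', 'b')], 2.0)])
print(tau)  # {('a', 'b'): 1.0, ('b', 'c'): 0.5}
```

With a single ant per node, the deposit term is very sparse, which can slow the positive feedback that drives convergence.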

Modelling Diffusion in Dymola/Modelica

I'm facing a problem with modelling diffusion in Dymola.
I want to have two separate volumes (filled with air), which can be joined and thus exchange heat via diffusion.
My approach was using the Modelica.Fluid library and connect two ClosedVolumes with a Valve.
But as I found out, this library does not model diffusion.
What would be the best way to accomplish such a model?
This limitation is due to the use of stream connectors in the Modelica.Fluid library.
One way to solve this is to develop a fluid connector that does not rely on stream connectors but only on potential and flow variables. Unfortunately, in that case you will have to deal with the numerical problems of flow reversal and the zero-flow singularity yourself.
One example is described in the paper "A physical solution for solving the zero-flow singularity in static thermal-hydraulics mixing models", presented at the Modelica Conference 2014. Basically, adding diffusion helps to resolve the zero-flow singularity, and the authors use a regularized step function to handle flow reversal. Other regularization functions can be found in Modelica.Fluid.Utilities.
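To illustrate the regularization idea in a language-neutral way (Python rather than Modelica), here is a sketch of a C1-continuous regularized step; Modelica.Fluid.Utilities provides similar helpers (e.g. regStep). The cubic blend below is a generic choice, not necessarily the exact polynomial used in the library.

```python
# Regularized step: instead of switching abruptly at x = 0 on flow
# reversal, blend smoothly between y_neg and y_pos over a small
# interval [-eps, eps] using a cubic with continuous first derivative.

def reg_step(x, y_pos, y_neg, eps=1e-4):
    if x >= eps:
        return y_pos
    if x <= -eps:
        return y_neg
    s = x / eps  # normalized position in (-1, 1)
    # weight w goes smoothly from 0 at s = -1 to 1 at s = +1,
    # with zero slope at both ends (C1 continuity)
    w = 0.5 + s * (0.75 - 0.25 * s * s)
    return w * y_pos + (1.0 - w) * y_neg

print(reg_step(1.0, 10.0, -10.0))   # 10.0 (well above eps)
print(reg_step(0.0, 10.0, -10.0))   # 0.0  (exactly halfway)
```

The smooth transition removes the derivative discontinuity at zero flow, which is what trips up Newton-based solvers.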
Hope this helps,
Best regards.

Finding the best path through a strongly connected component

I have a directed graph which is strongly connected, and every node has some price (positive or negative). I would like to find the best (highest-score) path from node A to node B. My current solution is a kind of brute force, so it takes ages to find that path. Is there an algorithm for this, or any idea how I can do it?
Have you tried the A* algorithm?
It's a fairly popular pathfinding algorithm.
The algorithm itself is not too difficult to implement, and there are plenty of implementations available online.
Dijkstra's algorithm is a special case of A* (one in which the heuristic function is h(x) = 0).
There are other algorithms that can outperform it, but they usually require graph pre-processing. If the problem is not too complex and you're looking for a quick solution, give it a try.
EDIT:
For graphs containing negative edges, there's the Bellman–Ford algorithm. Detecting negative cycles comes at a performance cost, though (it is slower than A*). But it may still be better than what you're currently using.
EDIT 2:
templatetypedef is right that the Bellman-Ford algorithm may not work here.
Bellman-Ford handles graphs with negative-weight edges; however, the algorithm stops upon finding a negative cycle. I believe that is useful behavior: optimizing the shortest path in a graph that contains a negative-weight cycle would be like going down a Penrose staircase.
What should happen when a path of "minus infinity cost" is reachable depends on the problem.
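For reference, here is a minimal sketch of Bellman-Ford with the extra negative-cycle check discussed above. For the original maximisation problem, node prices would first be converted into edge weights (for example, weight(u → v) = -price(v), a modelling choice assumed here) so that a shortest path corresponds to a highest-score path.

```python
def bellman_ford(n, edges, source):
    """n: number of nodes; edges: list of (u, v, w) triples.
    Returns (dist, has_negative_cycle)."""
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):              # at most n-1 rounds of relaxation
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    # one more pass: any further improvement means a negative cycle
    neg_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, neg_cycle

edges = [(0, 1, 4), (0, 2, 1), (2, 1, -2), (1, 3, 3)]
dist, neg = bellman_ford(4, edges, 0)
print(dist, neg)  # [0, -1, 1, 2] False
```

Note that maximizing path score is the longest-path problem in disguise, which is NP-hard in general graphs with cycles; Bellman-Ford on negated weights only helps when no positive-score cycle is reachable.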

Branch and Bound - what to store

I have read in more than one book (including Wolsey) that when implementing a branch and bound algorithm, one does not need to store the whole tree, just a list of active nodes (the leaf nodes, from what I understand).
The thing is, I could not understand how to work out the new bounds after I branch, if I don't have the bounds of every ancestor.
Some clarification on that would be appreciated.
OK, let's work out an example. Let's say you're implementing a fairly naive integer program solver that tries to solve a binary integer program by solving its LP relaxation, and, if it doesn't get an integral solution, branches on some variable. Let's suppose you have a maximisation problem.
Let's consider what happens after exactly one branch. You solved the root subproblem, and suppose it gave you a fractional solution with objective value 10. Then you branched on a variable, giving you a left subproblem with optimal objective value 9 and a right subproblem with optimal objective value 8.
We got a global bound of 10 from the root subproblem. We also know that every integral solution lies in either the left or the right subproblem, and that the left subproblem has no solutions better than 9 while the right subproblem has no solutions better than 8. So there are no solutions better than 9 to the root subproblem, even though the root LP relaxation wasn't enough to tell us that.
In general, your best global bound is the weakest bound of any active subproblem (for a maximisation problem, the largest of the active upper bounds). Fathomed subproblems are irrelevant, since they can't contain a feasible solution with a better objective value than your incumbent. Subproblems that have already been branched on are irrelevant, because their bounds are dominated by the weakest of their children's bounds.
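The bookkeeping described above can be sketched as follows, assuming a maximisation problem; `global_bound` and the list of active bounds are hypothetical names chosen for illustration.

```python
# Only the active (leaf) subproblems are stored, each with the bound
# from its own LP relaxation; the global bound is the largest bound
# among them. No ancestor bounds are needed.

def global_bound(active_bounds, incumbent):
    """active_bounds: LP relaxation bounds of the active subproblems.
    incumbent: objective value of the best integral solution so far."""
    if not active_bounds:
        return incumbent          # tree exhausted: incumbent is optimal
    return max(active_bounds)     # the weakest active bound dominates

# Example from the answer: the root (bound 10) is branched into children
# with bounds 9 and 8. The root leaves the active list, and the new
# global bound is max(9, 8) = 9.
active = [9, 8]
print(global_bound(active, incumbent=float('-inf')))  # 9
```

When a child's bound is no better than the incumbent, it is fathomed and simply dropped from the active list, which is why the full tree never needs to be kept.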