I am kind of new to SCIP. I want to use SCIP as a branch-and-price framework. I have already coded the problem in C++ and implemented the pricer (column generation) as a function. In fact, I have implemented the branch-and-price algorithm for the root node by linking Cplex.dll to the project, and now I need to code the branching tree, so I decided to use SCIP for this purpose.
I want to know the fastest way to solve my problem using SCIP together with the code I already have. Or would using GCG be a better and faster way?
I have read the GCG documentation but don't understand whether I should implement the pricer myself again or not. In fact, I don't understand the difference between the two (SCIP and GCG).
Thanks.
In GCG, you do not need to implement anything yourself. It is a generic solver for branch-and-price. You have to provide the compact formulation, that is, a model which after applying a Dantzig-Wolfe reformulation leads to the master problem you are solving. The reformulation also provides a MIP formulation of the pricing problem, so GCG can solve it as a sub-MIP for pricing. There is, however, the possibility to plug a pricing solver into GCG, to which the pricing MIP to be solved will be passed (with the objective function corresponding to the current pricing round). The pricing solver can then solve this problem with any problem-specific algorithm and pass solutions back to GCG.
In SCIP, on the other hand, you create the master problem you want to solve and implement a pricer which gets dual values from the LP and solves the corresponding pricing problem. This is probably very similar to what you have already.
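To make that shape concrete, here is a rough sketch of a SCIP pricer, shown via PySCIPOpt (the C/C++ API mirrors this structure). solve_pricing() and the column object are hypothetical stand-ins for your existing column-generation code, and the master constraints are assumed to have been created with modifiable=True so that priced variables can be added to them:

from pyscipopt import Model, Pricer, SCIP_RESULT

class MyPricer(Pricer):
    def pricerinit(self):
        # switch to the transformed constraints once solving starts
        self.data["conss"] = [self.model.getTransformedCons(c)
                              for c in self.data["conss"]]

    def pricerredcost(self):
        # 1. fetch the dual values of the master constraints
        duals = [self.model.getDualsolLinear(c) for c in self.data["conss"]]
        # 2. solve the pricing problem with your own algorithm (hypothetical)
        column, redcost = solve_pricing(duals)
        # 3. add a master variable for the column if its reduced cost is negative
        if redcost < -1e-8:
            var = self.model.addVar(obj=column.cost, pricedVar=True)
            for cons, coef in zip(self.data["conss"], column.coefs):
                self.model.addConsCoeff(cons, var, coef)
        return {"result": SCIP_RESULT.SUCCESS}

# registration on the master model:
# pricer = MyPricer(); pricer.data = {"conss": master_conss}
# model.includePricer(pricer, "mypricer", "prices new columns")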
Additionally, if you want to do branch-and-price, you need a branching rule. GCG comes with some generic ones; in SCIP you would have to implement one yourself (since the branching decisions must be taken into account within your pricing procedure).
Overall, SCIP is a framework for branch-and-price, i.e., it provides the tree management, LP solving and updates, etc., but you need to implement some things yourself, like a reader, the pricer, and the branching rules. GCG is a generic solver, so you can just plug in a compact model, which is reformulated and solved in a generic way. The reformulation is either provided by you via an input file, or you can try to let GCG detect an appropriate structure. You do not need to implement anything. It already provides some nice features, like primal heuristics that make use of the reformulation, automatic management of which pricing problem is solved when, and more. On the other hand, the possibilities to extend it further, e.g., by a pricing solver or branching rules, are restricted compared to SCIP, since you have to stick to the structure defined by GCG.
I would say that using SCIP and adding your pricer is probably the easier way and more similar to what you already have (you do not need to formulate the compact model). If you already have an idea on how your branching should work, it should also not be too hard to implement within SCIP.
I am using PySCIPOpt and have a MIP with some quadratic constraints (this works). Now I want to implement a primal heuristic (it should run once before presolving) that fixes certain variables and optimizes afterwards.
In pseudo-code, something like:
for x in ToFIX:
    model.fixVar(x, my_guess(x))
model.optimize()
# any found solution is used as a solution of the original problem
for x in ToFIX:
    model.unFixVar(x)
I worked around that problem by creating a second model, solving that, identifying the variables by their name and using model.trySol().
This mostly works but is slow and certainly not the way it is meant to be implemented.
Any hint about which functionalities to use is appreciated.
Sorry this took a while to answer.
What you want to implement is a sub-SCIP heuristic. This is certainly possible, but if you want to do it in PySCIPOpt you might have to wrap some missing methods from the C API.
I suggest you take a look at heur_rens.c in the SCIP code. The methods you would need to wrap are probably SCIPcopyLargeNeighborhoodSearch and SCIPtranslateSubSol, which should save you a lot of trouble. Please refer to the section on extending the interface in the PySCIPOpt README.
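Until those wrappers exist, your name-matching workaround can at least be packaged as a proper heuristic plugin that runs once before presolving. A minimal sketch, assuming a hypothetical build_model() that reconstructs the MIP, a list TO_FIX of variable names, and a guess function my_guess():

from pyscipopt import Model, Heur, SCIP_RESULT, SCIP_HEURTIMING

class FixAndSolveHeur(Heur):
    def heurexec(self, heurtiming, nodeinfeasible):
        # build a fresh copy of the problem and fix the guessed variables
        sub = build_model()                  # hypothetical: reconstructs the MIP
        subvars = {v.name: v for v in sub.getVars()}
        for name in TO_FIX:
            sub.fixVar(subvars[name], my_guess(name))
        sub.hideOutput()
        sub.optimize()
        if sub.getNSols() == 0:
            return {"result": SCIP_RESULT.DIDNOTFIND}
        # translate the sub-solution back by variable name and offer it to SCIP
        subsol = sub.getBestSol()
        sol = self.model.createSol(self)
        for var in self.model.getVars():
            self.model.setSolVal(sol, var, sub.getSolVal(subsol, subvars[var.name]))
        accepted = self.model.trySol(sol)
        return {"result": SCIP_RESULT.FOUNDSOL if accepted else SCIP_RESULT.DIDNOTFIND}

model = build_model()
model.includeHeur(FixAndSolveHeur(), "fixandsolve",
                  "fix guessed variables and solve", "Y",
                  freq=0, timingmask=SCIP_HEURTIMING.BEFOREPRESOL)
model.optimize()

Rebuilding the model by hand is exactly the slow part you complained about; wrapping SCIPcopyLargeNeighborhoodSearch would replace build_model() with a fast internal copy, and SCIPtranslateSubSol would replace the name matching.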
I am evaluating OptaPlanner for a planning problem I have. I have seen several responses to this topic, but nothing quite like I am looking for.
I am looking for the capability to extend the problem on the fly; that is, as the planner is solving a problem.
For example, in the CloudComputing example, I would like to be able to add computers on the fly (to a point) while the problem is being solved. The easiest case is that the problem is initially over-constrained; to resolve this I would like to be able to add computers and then replan.
Or, I would like to be able to add a lecture, or a lecturer in one of the scheduling problems, etc.
It seems like OptaPlanner requires a static number of entities/variables at solve time.
Any pointers would be appreciated.
Take a look at the Real-time planning section of the OptaPlanner User Guide.
You could also look at the Travelling Salesman Problem example in optaplanner-examples. Specifically, look at the org.optaplanner.examples.tsp.swingui.TspPanel class and traverse down from there. It's a pretty standard implementation of real-time planning AFAIK. I can also recommend running the TSP example first to "see" how it works.
This is about a typical problem in larger embedded system projects:
Data from a text file or database must be hard-coded into C code. The data will change the control flow, table dimensions, etc.
What is the solution of choice?
Background:
In our (large) embedded software we have to connect hundreds of signals (outputs of finite state machines) with busses (e.g. a CAN-bus). We are using Simulink/Stateflow as model-based development tool (state machines) and auto-coding.
The connection will have to scale, do datatype conversions, etc. Usually all the information for the conversion/connection is stored in a database or file (e.g., a dbc text file).
Apparently the usual dynamic approach (reading the database and dynamically connecting/converting accordingly) is ruled out if hard, deterministic real-time behavior must be guaranteed. This data has to be hard-coded.
We have not found a realistic, more practical way other than to use the Simulink API: we write external m-code which reads the data from the file and automates "drawing a picture" of the entangled connections into the Simulink model, which is finally auto-coded to C.
Needless to say, this scripted "painting code", while effective, is not very reusable or maintainable. I can't find an effective solution even taking compiler/code generation, model-driven architecture, AUTOSAR, and model transformations into consideration. Constructing a dedicated compiler for each external data document, which transforms the data to C code, seems unrealistic ...
Is there a practical alternative? Is this a fundamental weakness of the Simulink "language" (i.e., it has no underlying high-level language, unlike other model-driven embedded tools such as Modelica)?
Your actual problem isn't really well described by your question.
The way I read it is that you have a complex set of data that requires a complex code generation process. Whenever one needs a complex translation, you'll end up building complicated analysis and code generation machinery. That's generally not an easy task.
You've done it in two stages: 1) read the database and generate Simulink, 2) let MATLAB compile the Simulink to source code. Your "database" content is in effect a specification (e.g., a DSL). You have a front end that reads the "spec" and interprets it into a Simulink model; you seem to be complaining that there are no underlying semantics for Simulink. Well... are there robust underlying semantics for your specification DSL? That is probably part of your problem. Simulink itself has poorly defined semantics, too. The combination means that the transformations you are doing, from your DSL to Simulink and from Simulink to C, are in effect ad hoc and likely difficult to maintain. We can argue about whether C has cleanly defined semantics; it sort of does, but they aren't easy to see in the standards documents.
In any case, you are building a staged translator. Your first stage probably needs more structure and better analyses. Ideally you'd transform to an intermediate representation that is more formal; I always thought that Colored Petri nets were much better than Simulink, partly because they do have clear, formal semantics. (Modelica is pretty nice, too.) Ideally you'd transform the intermediate stage into C using transformation rules you can control, rather than whatever Simulink happens to do.
This is easier if you have good foundation machinery on which to build translators. The best class of machinery for this purpose, IMHO, is program transformation systems. (I happen to build one of them [see bio], which knows C and Simulink already and could be taught Modelica.) You can read a bit more about what is needed in this SO answer on what it takes to build translators.
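To illustrate the "first stage" idea: a translator from a signal spec to C does not have to be a full compiler. Here is a deliberately tiny, hypothetical sketch in Python (the spec format, the SignalConv struct, and the header name are all made up) that hard-codes the conversion data into a constant table at build time:

import csv

C_TEMPLATE = """/* generated from {src} -- do not edit by hand */
#include "signal_conv.h"

const SignalConv signal_conv_table[] = {{
{rows}
}};
const unsigned signal_conv_count = {count}u;
"""

def generate(spec_path, out_path):
    # spec columns: name, scale, offset (hypothetical dbc-like extract)
    with open(spec_path, newline="") as f:
        signals = list(csv.DictReader(f))
    rows = ",\n".join(
        '    {{ "{name}", {scale}f, {offset}f }}'.format(**s) for s in signals
    )
    with open(out_path, "w") as f:
        f.write(C_TEMPLATE.format(src=spec_path, rows=rows, count=len(signals)))

generate("signals.csv", "signal_conv.c")

The point is not the ten lines of Python but that the spec file, not the Simulink diagram, becomes the single source of truth; growing this into a real staged translator is where the machinery described above comes in.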
It's difficult to make out what your exact question is, but here are two suggestions:
If you're looking for a code generator that can generate code from your state diagrams which you can understand, try fsmpro.io
For making individual logic, you'll have to paste code rather than using any inbuilt utilities to design units.
When I look at optimization problems I see a lot of options. One is linear programming. I understand in abstract terms how LP works, but I find it difficult to see whether a particular problem is suitable for LP or not. Are there any heuristics that can help guide this decision?
For example, the work described in "Is there a good way to do this type of mining?" took weeks before I saw how to structure the problem correctly. Is it possible to know "in advance" that a problem could be solved by LP, without first seeing "how to phrase it"?
Is there a checklist I can use to decide whether a problem is suitable for LP? Is there a standard (readable) reference for this topic?
Heuristics (and/or checklists) to decide if the problem at hand is really a Linear Program.
Here's my attempt at answering, and I have also tried to outline how I'd approach this problem.
Questions that indicate that a given problem is suitable to be formulated as an LP/IP:
Are there decisions that need to be taken regularly, at different time intervals?
Are there a number of resources (workers, machines, vehicles) that need to be assigned tasks (hours, jobs, destinations)?
Is this a routing problem, where different "points" have to be visited?
Is this a location or "layout" problem? (The whole class of stock-cutting problems falls into this group.)
Answering yes to these questions means that an LP formulation might work.
Commonly encountered LPs include: resource allocation (assignment, transportation, trans-shipment, knapsack), portfolio allocation, job scheduling, and network flow problems.
Here's a good list of LP Applications for anyone new to LPs or IPs.
That said, there are literally thousands of different types of problems that can be formulated as LPs/IPs. The people I've worked with (researchers, colleagues) develop an intuition. They are good at recognizing that a problem is a certain type of integer program, even if they don't remember the details, which they can then look up.
Why this question is tricky to answer:
There are many reasons why it is not always straightforward to know if an LP formulation will cut it.
There is a lot of "art" (subjectivity) in the approach to modeling/formulation.
Experience helps a lot. People get good at recognizing that a problem can be "likened" to another known formulation.
Even if a problem is not a straight LP, there are many clever master-slave techniques (sub-problems), or nesting techniques that make the overall formulation work.
What looks like multiple objectives can be combined into one objective function, with an appropriate set of weights attached.
Experienced modelers employ decomposition and constraint-relaxation techniques and later compensate for them.
How to proceed to get the basic formulation done?
The following has always steered me in the right direction. I typically start by listing the Decision Variables, Constraints, and the Objective Function. I then usually iterate among these three to make sure that everything "fits."
So, if you have a problem at hand, ask yourself:
What are the Decision Variables (DVs)? I find that this is always a good place to start the process of formulation. How many types of DVs are there? (Which resource gets which task, and when should it start?)
What are the Constraints?
Some constraints are very readily visible. Others take a little bit of teasing out. The constraints have to be written in terms of your decision variables, and any constants/limits that are imposed.
What is the Objective Function?
What are the quantities that need to be maximized or minimized? Note: Sometimes, it is not clear what the objective function is. That is okay, because it could well be a constraint-satisfaction problem.
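To make the recipe concrete, here is a toy example with made-up data: a tiny knapsack IP written with PySCIPOpt (any modeling layer follows the same DVs / constraints / objective pattern):

from pyscipopt import Model, quicksum

values   = {"a": 10, "b": 13, "c": 7}   # made-up data
weights  = {"a": 4,  "b": 6,  "c": 3}
capacity = 9

model = Model("knapsack")
# Decision variables: take item i or not (binary)
x = {i: model.addVar(vtype="B", name="x_" + i) for i in values}
# Constraint: total weight must stay within capacity
model.addCons(quicksum(weights[i] * x[i] for i in x) <= capacity)
# Objective: maximize total value of the chosen items
model.setObjective(quicksum(values[i] * x[i] for i in x), "maximize")
model.optimize()
print({i: model.getVal(x[i]) for i in x})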
A couple of quick Sanity Checks once you think your LP formulation is done:
I always check whether a trivial solution (all 0s or all big numbers) is part of the solution set. If it is, then the formulation is most probably not correct: some constraint is missing.
Make sure that each and every constraint is "related" to the Decision Variables. (I occasionally find constraints that are just "hanging out there." This means that a "bookkeeping constraint" has been missed.)
In my experience, people who keep at it almost always develop the needed intuition. Hope this helps.
For starters I should let you guys know what I'm trying to do. The project I'm working on has a requirement for a custom scripting system to be built. This will be used by non-programmers who are using the application and should be as close to natural language as possible. For example, if a user needs to run a custom simulation and plot the output, the code they would write would need to look like:
variable input1 is 10;
variable input2 is 20;
variable value1 is AVERAGE(input1, input2);
variable condition1 is true;
if condition1 then PLOT(value1);
It might not make a lot of sense, but it's just an example. AVERAGE and PLOT are functions we'd like to define; users shouldn't be allowed to change them or really even see how they work. Is something like this possible with the DLR? If not, what other options would we have (start with ANTLR to define the grammar and then move on)? In the future this may need to run under XBAP and WPF too, so this is also something we need to consider, but I haven't seen much, if anything, on the DLR and XBAP. Thanks, and hopefully this all makes sense.
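For scale, here is a toy sketch (purely illustrative, in Python) of the kind of language we mean, with AVERAGE and PLOT as locked-down built-ins the user cannot redefine:

import re

BUILTINS = {
    "AVERAGE": lambda *args: sum(args) / len(args),
    "PLOT": lambda v: print("plotting:", v),  # stand-in for a real plot call
}

def evaluate(expr, env):
    call = re.match(r"(\w+)\((.*)\)", expr)   # built-in function call?
    if call:
        args = [evaluate(a.strip(), env) for a in call.group(2).split(",")]
        return BUILTINS[call.group(1)](*args)
    if expr in env:                            # variable reference
        return env[expr]
    return float(expr)                         # numeric literal

def run(script):
    env = {"true": True, "false": False}
    for line in (l.strip().rstrip(";") for l in script.splitlines()):
        if not line:
            continue
        decl = re.match(r"variable (\w+) is (.+)", line)
        if decl:
            env[decl.group(1)] = evaluate(decl.group(2), env)
            continue
        cond = re.match(r"if (\w+) then (.+)", line)
        if cond and env[cond.group(1)]:
            evaluate(cond.group(2), env)

Running it on the script above prints "plotting: 15.0"; obviously a real implementation needs error handling, richer expressions, and sandboxing, which is where ANTLR or the DLR would earn their keep.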
Lua is not an option, as it is too different from what they are already accustomed to.
Ralf, it's going to be reactive, and to be honest the timeframe for when the results should get back to the user may be anywhere from 1/100 of a second up to two weeks or a month (very complex mathematical functions).
Basically they already have a system they purchased that does some of what they need, and it included a custom scripting language that does what I mentioned above. They don't want to have to learn a new one; they basically just want us to copy it and add functionality. I think I'll just start with ANTLR and go from there.
Lua
it's small, fast, easy to embed, portable, extensible, and fun!
Lua is definitely the best choice for soft real-time systems (like computer games).
See http://shootout.alioth.debian.org/ for detailed benchmarks.
However, last time I checked, Lua used a mark-and-sweep garbage collector, which can lead to deadline violations and non-deterministic jitter in real-time systems.
I believe you could theoretically use the DLR, but I'm unsure about its support in an XBAP (partially trusted?) scenario.
If you host the DLR, you would quickly be able to take advantage of IronRuby or IronPython scripting. You would want to look at these implementations when creating your own language implementation. If you post your question to the IronPython mailing list, I'm sure you would get a better reply about the XBAP scenario; some of the developers there created ToyScript.
What kind of real-time requirement are you trying to fulfill? Is the simulation a hard real-time simulation (some kind of hardware-in-the-loop simulation ==> deadline is less than 1/1000 second)?
Or do you want the scripting system to be "reactive" to user input ==> 1/10 of a second should be sufficient.
I am no expert regarding the MS DLR, but as far as I know, it does not support hard real-time systems. You may want to take a look at the Real-Time Specification for Java (RTSJ).
Firstly I think that defining your own language is not the way to go.
Primarily because the biggest productivity gains you can get for programmers or non-programmers come from the development tools. You (and 99.9% of the rest of us) are not going to write tools as good as what is out there.
Language design is hard.
Language support and documentation are also hard.
I would recommend looking for a pre-built solution. If you could find a language that can lock down some functionality, that would be a good starting point. MATLAB would be the first that comes to my mind.
Lastly, ditch the natural-language part; BASIC, COBOL, and YA-TDWTF-Lang all tried and failed at it.
Full disclosure: I work for a company that is developing a generalized domain-specific language "system". It's targeted at data-in/text-out applications, so it's not apropos, and it's not yet in beta. The result is that I'm somewhat knowledgeable and biased.