How to pass a solution into a model as a fixed start in docplex?

I have two models: an initial model and a more complex model with more decision variables. I need to use the solution to the first model as a fixed start for the second model but can't figure out a way to do this automatically. The decision variables are a mix of integer, binary and continuous variables, and the initial solution values must be unchanged in the second model, so I can't use a warm start. What would be the best way to do this? Is there another way to read the starting solution from a .mst file as a fixed start?

The issue here is transferring a solution from one model instance to another.
Assuming the variables in both models have the same names, have a look at Model.import_solution with match='name'.
See the documentation here:
http://ibmdecisionoptimization.github.io/docplex-doc/mp/docplex.mp.model.html#docplex.mp.model.Model.import_solution
Once the solution is imported into the second model, use it as a MIP start with Model.add_mip_start,
and adjust which variables are fixed with the `write_level` parameter.
By default, only discrete variables are fixed.
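For illustration, a minimal sketch of that flow (the toy models and variable names are made up, and WriteLevel.AllVars is just one possible setting; check the docplex documentation for the exact keyword and enum values in your version):
from docplex.mp.model import Model
from docplex.mp.constants import WriteLevel

# Toy first model (stands in for the initial model)
m1 = Model(name="first")
x1 = m1.integer_var(name="x", ub=10)
m1.maximize(x1)
s1 = m1.solve()

# Second, more complex model; it declares a variable with the same name
# plus additional decision variables
m2 = Model(name="second")
x2 = m2.integer_var(name="x", ub=10)
y = m2.continuous_var(name="y", ub=5)
m2.add_constraint(x2 + y <= 12)
m2.maximize(x2 + y)

# Import the first solution into the second model, matching variables by name
imported = m2.import_solution(s1, match="name")

# Use the imported solution as a MIP start; write_level controls which
# variables are written (only discrete variables by default)
m2.add_mip_start(imported, write_level=WriteLevel.AllVars)
m2.solve()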

Related

Is it possible to access SCIP's Statistics output values directly from PyScipOpt Model Object?

I'm using SCIP to solve MILPs in Python using PyScipOpt. After solving a problem, the solver statistics can be either 1) printed as a string using printStatistics(), or 2) saved to an external file using writeStatistics(). For example:
import pyscipopt as pso
model = pso.Model()
model.addVar(name="x", obj=1)
model.optimize()
model.printStatistics()
model.writeStatistics(filename="stats.txt")
There's a lot of information in printStatistics/writeStatistics that doesn't seem to be accessible from the Python model object directly (e.g. the primal-dual integral value, data for individual branching rules or primal heuristics, etc.). It would be helpful to be able to extract this data via, e.g., attributes of the model object or a dictionary.
Is there any way to access this information from the model object without having to parse the raw text/file output?
PySCIPOpt does not provide access to the statistics directly. The data for the various tables (e.g. separators, presolvers, etc.) are stored separately for every single plugin in SCIP and are sometimes not straightforward to collect.
If you are only interested in certain statistics about the general solving process, then you might want to add PySCIPOpt wrappers for a few of the simple get functions defined in scip_solvingstats.c.
Lastly, you might want to check out IPET for parsing the statistics output.
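If parsing the text output is acceptable in the meantime, a rough sketch of that route (the statistics layout varies between SCIP versions, so the regular expression and the assumption that values are plain numbers are illustrative, not a complete parser):
import re

def parse_simple_stats(path):
    # Collect lines of the form "  Some Label    :   12.34" into a dict;
    # lines with composite or non-numeric values are simply skipped
    stats = {}
    with open(path) as fh:
        for line in fh:
            m = re.match(r"^\s*([A-Za-z][^:]*?)\s*:\s*([-+0-9.eE]+)\s*$", line)
            if m:
                try:
                    stats[m.group(1).strip()] = float(m.group(2))
                except ValueError:
                    pass
    return stats

# after model.writeStatistics(filename="stats.txt")
print(parse_simple_stats("stats.txt"))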

Is it possible in my case to implement a strategy pattern with different semantics of algorithms?

Here is my problem.
I have several classes that literally convert one data model to another.
Different classes use different versions of the data model.
Here is my example: I have 3 converters (for now) and two algorithms that convert one data model to another, but they work with different versions of the data model. For example, AlgoVerOne works with an older version of the data model, while AlgoVer2 works with a newer version that contains more or less information.
What matters is that ConverterA and ConverterB use the same version of the data model. So the conversion algorithm is exactly the same because the versions of the data model do not differ.
PROBLEM
My problem is that the semantics of some parts are different for these two classes. Let's say there is an element in a data model that has a value of 100. This value can be converted and inserted into another data model, because both classes use the same version of the data model. But the value 100 means "car" for ConverterA, while it means "bus" for ConverterB.
So the algorithm needed to convert one data model to another is the same, but the value of an element within that data model is semantically different for these two classes.
I don't want to use a completely separate algorithm for each of these classes, because only about 1% of the semantics of the whole data model differs.
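To make the setup concrete, here is a minimal sketch (all names are hypothetical) of one shared algorithm where the per-converter semantics is injected as a mapping rather than duplicated in a second algorithm:
class SharedConversionAlgo:
    # Converts between two versions of the data model; the meaning of raw
    # values is delegated to a semantics mapping supplied by each converter
    def __init__(self, semantics):
        self.semantics = semantics  # e.g. {100: "car"} or {100: "bus"}

    def convert(self, source_model):
        return [
            {"value": item, "meaning": self.semantics.get(item, "unknown")}
            for item in source_model
        ]

converter_a = SharedConversionAlgo({100: "car"})
converter_b = SharedConversionAlgo({100: "bus"})
print(converter_a.convert([100]))  # [{'value': 100, 'meaning': 'car'}]
print(converter_b.convert([100]))  # [{'value': 100, 'meaning': 'bus'}]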

AnyLogic: Is there a way to specify array of decision variables in Optimization Experiment?

I am working on an optimization model using AnyLogic. Is there a way to specify an array of decision variables in AnyLogic, as in IBM CPLEX? For a smaller number of decision variables (say 2 to 5), I used to specify them individually, for example numAgents_1 and numAgents_2 for locations 1 and 2. However, as my model grows in size and more locations are added (up to 40), is there a way I can specify them as an array or list of decision variables?
Any help regarding this would be really useful. Thanks.
Yes, but you need to use a "custom experiment" instead and set it up using an Array of decision variables.
This is not totally straightforward, however; best start by checking the example models that apply custom experiments.

Best neural network model for a lot of if/else conditions

I have a large set of data. The data has 13 parameters, those parameters depend on each other, and the dependencies are established by some rules.
Example: if parameter_one is "A" and parameter_two is "B", and there is a rule stating parameter_one == A and parameter_two == B => parameter_three == C, then parameter_three should (ideally) be C. So basically it's a lot of if/else statements.
Now, I just have the data, and the machine learning model has to learn the rules, so that it can flag any data that doesn't obey them: in the example above, if parameter_three had been 'D' instead of 'C', that would be a violation of the rule. How can I make the model learn these rules?
Also, the rules can't be written manually, since there are a lot of them and writing them by hand doesn't scale.
My try
I thought of using an autoencoder and passing the training data through it. Then, for each record, we would use the reconstruction loss to check whether it is a violation. However, it overfits and does not work well on test data.
I have also previously tried a deep neural network, but it did not help either. Can anyone help me out here?
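For reference, a rough sketch of that autoencoder idea (Keras, with toy data; the architecture and threshold are illustrative assumptions, not the actual code used):
import numpy as np
from tensorflow import keras

X = np.random.rand(1000, 13).astype("float32")  # 13 encoded parameters per record

inp = keras.Input(shape=(13,))
encoded = keras.layers.Dense(6, activation="relu")(inp)
decoded = keras.layers.Dense(13, activation="sigmoid")(encoded)
autoencoder = keras.Model(inp, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=10, batch_size=32, verbose=0)

# Flag records whose reconstruction error exceeds a chosen threshold
recon = autoencoder.predict(X, verbose=0)
errors = np.mean((X - recon) ** 2, axis=1)
violations = errors > np.percentile(errors, 95)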
Thanks in advance.
You could use Association Rule Mining algorithms like Apriori or FP-Growth to generate the frequent item sets.
From the frequent item sets you can generate Association rules.
Once you have the association rules, you can assign a weight to each rule (or use some parameter like confidence/lift of the rule).
When you want to test it on a new data entry, do weighted summing (if the new entry satisfies a rule, use the rule's weight to calculate the score/sum for the new entry).
If the generated score for the new entry is greater than a chosen threshold, you can say the new entry passes the preset rules; otherwise it's in violation of them.
Weighted summing gives you the flexibility to assign different importance to association rules. Alternatively, you can flag a new entry as a violation if it fails even one of the association rules.
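A minimal sketch of this approach, assuming the mlxtend library's classic apriori/association_rules API and a one-hot encoded DataFrame (the column names, toy data, and thresholds are made up for illustration):
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Toy one-hot encoded data: True means "this parameter=value pair holds"
data = pd.DataFrame({
    "p1=A": [True, True, True, False],
    "p2=B": [True, True, True, False],
    "p3=C": [True, True, True, True],
})

itemsets = apriori(data, min_support=0.5, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.8)

def score(entry, rules):
    # Weighted sum: add a rule's confidence when the entry satisfies both the
    # antecedents and the consequents; an applicable but violated rule adds 0
    total = 0.0
    for _, r in rules.iterrows():
        if r["antecedents"].issubset(entry) and r["consequents"].issubset(entry):
            total += r["confidence"]
    return total

new_entry = {"p1=A", "p2=B", "p3=D"}  # breaks the implied rule p1=A & p2=B => p3=C
print(score(new_entry, rules))        # low score relative to a conforming entry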

Additional PlanningEntity in CloudBalancing - bounded-space situation

I successfully amended the nice CloudBalancing example to include the fact that I may only have a limited number of computers open at any given time (thanks OptaPlanner team, that was easy to do). I believe this is referred to as a bounded-space problem. It works dandy.
The processes come in groups, say 20 processes in a given order per group. I would like to amend the example so that OptaPlanner also changes the order of these groups (not the processes within one group). I have therefore added a class ProcessGroup to the domain with a member List<Process>, the instances of ProcessGroup being stored in a List<ProcessGroup>. The desired optimisation would shuffle the members of this list, causing the instances of ProcessGroup to be placed at different indices of the List<ProcessGroup>. The index of a ProcessGroup should be ProcessGroup.index.
The documentation states that "if in doubt, the planning entity is the many side of the many-to-one relationship." This would mean that ProcessGroup is the planning entity, with the member index being a planning variable that gets assigned to (hopefully) different integers. After every new assignment of indices, I would have to re-sort the List<ProcessGroup> in ascending order of ProcessGroup.index. This seems very odd and cumbersome. Any better ideas?
Thank you in advance!
Philip.
The current design has a few disadvantages:
It requires 2 (genuine) entity classes (each with 1 planning variable): this probably increases the search space (= longer to solve, more difficult to find a good or even feasible solution) and it increases configuration complexity. Don't use multiple genuine entity classes if you can reasonably avoid it.
The Integer variable of ProcessGroup needs to be all different across instances and somehow sequential. That smells like a chained planning variable (see the docs about chained variables and the Vehicle Routing example), in which case the entire problem could be represented as a simple VRP with just 1 variable, but does that really apply here?
Train of thought: there's something off in this model:
ProcessGroup has an Integer variable: what does that Integer represent? Shouldn't that Integer variable be on Process instead? Are you ordering Processes or ProcessGroups? If it should be on Process instead, then both of Process's variables can be replaced by a chained variable (like in VRP), which will be far more efficient.
ProcessGroup has a list of Processes, but that is a problem property, which means it doesn't change during planning. I suspect that's correct for your use case, but do assert it.
If none of the reasoning above applies (which would surprise me), then the original model might be valid nonetheless :)