Defining an OR-constraint in a mixed-integer problem with SCIP

I'm trying to use the Python interface of the SCIP solver (https://github.com/scipopt/PySCIPOpt) to solve a mixed-integer optimization problem.
I want to define an OR-constraint over three constraints, of which only one has to be satisfied.
For example, I want to minimize a variable x with the three constraints x>=1, x>=2, x>=3, where only one of them has to hold. The result should of course be x=1.
However, the OR-constraint API addConsOr requires both the constraint list and a result variable (resvar, the resultant variable of the operation). While I can provide the list of constraints, I don't know what the result variable in the second function parameter means. When I set the second parameter to a new variable, the following code cannot run and results in a segmentation fault.
from pyscipopt import Model
model = Model()
x = model.addVar(vtype = "I")
b = model.addVar(vtype="B")
model.addConsOr([x>=1, x>=2, x>=3], b)
model.setObjective(x, "minimize")
model.optimize()
print("Optimal value:", model.getObjVal())
Setting the second parameter to True also results in a segmentation fault:
model.addConsOr([x>=1, x>=2, x>=3], True)

What you are describing is not an OR-constraint. An OR-constraint, as explained in the SCIP documentation, takes a set of binary variables and forces a resultant binary variable (resvar) to equal the logical OR of their values.
What you want is a general disjunctive constraint. Those exist in SCIP as SCIPcreateConsDisjunction but are not wrapped in the Python API yet. Fortunately, you can extend the API yourself quite easily: add the corresponding function to scip.pxd and define the wrapper in scip.pyx, following the pattern used for the existing constraint types. The people over at the PySCIPOpt GitHub will be happy if you create a pull request with your changes.
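Until that wrapper exists, one common workaround is to model the disjunction with one binary indicator per alternative and a big-M reformulation, using only the existing addCons API. The following is a rough sketch (not part of the original answer); the variable names and the value of M are my own assumptions, and M must be chosen large enough to deactivate a relaxed constraint:

from pyscipopt import Model, quicksum

model = Model()
x = model.addVar(vtype="I", name="x")
# one binary per alternative; y[k] = 1 means the k-th bound on x is enforced
y = [model.addVar(vtype="B", name="y%d" % k) for k in range(3)]
bounds = [1, 2, 3]
M = 100  # big-M; must exceed the largest bound minus the lower bound of x
for k in range(3):
    # if y[k] = 0 the constraint is relaxed to x >= bounds[k] - M
    model.addCons(x >= bounds[k] - M * (1 - y[k]))
model.addCons(quicksum(y) >= 1)  # at least one of the three bounds must hold
model.setObjective(x, "minimize")
model.optimize()
print("Optimal value:", model.getObjVal())  # expected: 1.0

With the default lower bound of 0 on x, any M larger than the largest right-hand side works here; for general constraints you need a valid bound for each relaxed inequality.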

Summation iterated over a variable length

I have written an optimization problem in Pyomo and need a constraint that contains a summation of variable length:
u_i_t[i, t]*T_min_run - sum (tnewnew in (t-T_min_run+1)..t-1) u_i_t[i,tnewnew] <= sum (tnew in t..(t+T_min_run-1)) u_i_t[i,tnew]
T is my timeline and N my set of machines.
I usually iterate over t, but I need to guarantee that the machines stay turned on for a certain amount of time.
def HP_on_rule(model, i, t):
    return (model.u_i_t[i, t]*T_min_run
            - sum(model.u_i_t[i, tnewnew] for tnewnew in range((t-T_min_run+1), (t-1)))
            <= sum(model.u_i_t[i, tnew] for tnew in range(t, (t+T_min_run-1))))

model.HP_on_rule = Constraint(N, rule=HP_on_rule)
I hope you can provide me with the correct formulation in pyomo/python.
The problem is that t is a running index and I do not know how to implement this in Python; tnew is only a helper index. For example, with t=6 (variable), T_min_run=3 (constant) and u_i_t the binary sequence [00001111100000...], I get:
1*3 - 1 <= 3
As I said, I do not know how to implement this in my code, and the current version does not run:
TypeError: HP_on_rule() missing 1 required positional argument: 't'
It seems like you didn't provide all the required arguments to your rule function.
Since t is a parameter of your function, I assume that it corresponds to an element of the set T (your timeline).
Then the last line of your code example should include not only the set N but also the set T. Try this:
model.HP_on_rule = Constraint(N, T, rule=HP_on_rule)
Please note: when building a Constraint with a "for each" part, you must provide the Pyomo Sets that you want to iterate over at the beginning of the Constraint construction call. As a rule of thumb, your constraint rule function should have one more argument than the number of Pyomo Sets specified in the Constraint initialization line.
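For completeness, here is a small self-contained sketch of that pattern (not from the original answer). The data for N, T and T_min_run is made up, and the summation windows follow the inclusive bounds of the mathematical formulation and are clipped to the timeline, which is an assumption on my part:

from pyomo.environ import ConcreteModel, Var, Constraint, Binary

model = ConcreteModel()
N = [1, 2]                  # machines (made-up data)
T = list(range(1, 11))      # timeline (made-up data)
T_min_run = 3

model.u_i_t = Var(N, T, domain=Binary)

def HP_on_rule(model, i, t):
    # sum over the T_min_run - 1 steps before t, clipped to the timeline
    past = sum(model.u_i_t[i, tb] for tb in range(t - T_min_run + 1, t) if tb in T)
    # sum over t .. t + T_min_run - 1, clipped to the timeline
    future = sum(model.u_i_t[i, tf] for tf in range(t, t + T_min_run) if tf in T)
    return model.u_i_t[i, t] * T_min_run - past <= future

# one rule argument per indexing set, plus the model itself
model.HP_on_rule = Constraint(N, T, rule=HP_on_rule)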

Atomic updates to variables in TensorFlow

Say I have a TensorFlow variable to track the mean of a value. mean can be updated with the following graph snippet:
mean.assign((step * mean.read() + value) / (step + 1))
Unfortunately, those operations are not atomic, so if two different portions of the graph try to update the same mean variable, one of the updates may be lost.
If instead I were tracking sum, I could just do
sum.assign_add(value, use_locking=True)
and everything would be great. Unfortunately, in other cases a more complicated update to mean (or std, etc.) may be required, and it may be impossible to use tf.assign_add.
Question: Is there any way to make the first code snippet atomic?
Unfortunately I believe the answer is no, since (1) I don't remember any such mechanism and (2) one of our reasons for making optimizers C++ ops was to get atomic behavior. My main source of hope is XLA, but I don't know whether this kind of atomicity can be guaranteed there.
The underlying problem of the example is that there are two operations - read and subsequent assign - that together need to be executed atomically.
At the beginning of 2018, the TensorFlow team added the CriticalSection class to the codebase. However, this only works for resource variables (as pointed out in Geoffrey's comments). Hence, value in the example below needs to be created as:
value = tf.get_variable(..., use_resource=True, ...)
Although I did not test this, according to the class's documentation the atomic-update issue should then be solvable as follows:
def update_mean(step, value):
    old_value = mean.read_value()
    with tf.control_dependencies([old_value]):
        return mean.assign((step * old_value + value) / (step + 1))

cs = tf.CriticalSection()
mean_update = cs.execute(update_mean, step, value)
session.run(mean_update)
Essentially, this provides a lock from the beginning of execute() until its end, i.e. it covers the whole assignment operation, including the read and the subsequent assign.

How to add a new syntax element in HM (HEVC test Model)

I've been working on the HM reference software for a while to improve something in the intra prediction part. A new intra prediction algorithm has now been added to the code, and I let the encoder choose between my algorithm and the default HM algorithm (according to the RD cost, of course).
What I need now is to signal a flag for each PU, so that the decoder can perform the same algorithm that the encoder chose in the rate-distortion loop.
I want to know exactly what I should do to properly add this one-bit flag to the stream without breaking anything in the code.
Assuming that I want to use a CABAC context model to keep track of my flag's statistics, is there anything else I should do besides the following?
adding a new context model such as ContextModel3DBuffer m_cCUIntraAlgorithmSCModel to the TEncSbac.h file;
properly initializing the model (both at the encoder and the decoder side) by looking at how HM initializes its other context models;
calling m_pcBinIf->encodeBin(myFlag, cCUIntraAlgorithmSCModel) at the encoder side and m_pcTDecBinIf->decodeBin(myFlag, cCUIntraAlgorithmSCModel) at the decoder side.
I take these three steps but apparently it breaks something.
P.S.: Even equiprobable signaling (i.e. without using CABAC contexts) would be useful. I just want to send this flag peacefully!
Thanks in advance.
I could finally solve this problem; it was a bug in the CABAC context initialization.
But I want to share the experience, as many people may want to do the same thing.
The three steps that I explained are essentially what is needed to add a new syntax element, but one should be very careful with the following:
First, you need to decide whether you want to use a separate context model for your syntax element or an existing one. In the case of a separate CABAC context, you should define a ContextModel3DBuffer, and the best way to do that is to find a similar syntax element in the code and then duplicate its ContextModel3DBuffer definition and ALL of its occurrences in the code. This ensures that you are not missing anything.
Encoding of each syntax element happens in two different places: first in the RDO loop, to make a "decision", and second during the actual encoding phase, when the decisions are encoded (e.g. in the encodeCtu function).
The order of encoding/decoding syntax elements must be the same on the encoder and decoder sides. For example, if your new syntax element is encoded after splitFlag and before predMode at the encoder side, you must decode it exactly between splitFlag and predMode at the decoder side.
The context model is implemented as a 3D matrix so that the statistics of a syntax element can be tracked separately for different block sizes, components, etc. This means that when you call encodeBin, you must make sure that the correct index is used. I made stupid mistakes in this part!
Apart from the above remarks, I found the function getState very useful for debugging. It returns the state of your CABAC context model at any point in the code where you have access to it. It is very useful for comparing the state at the same point on the encoder and decoder sides when you have a mismatch. For example, it happens a lot that you encode a 1 but decode a 0. In that case, check the state of your CABAC context before encoding and before decoding; they should be the same. If they are not, trace back to find the first point of mismatch.
I hope it was helpful.

How to vary a parameter after compiling in Modelica

I have written a finite-volume model. The parameter n represents the number of volumes. After translation, the parameter can no longer be modified; Dymola gives this message:
Warning: Setting n has no effect in model.
After translation you can only set literal start-values and non-evaluated parameters.
I think the problem is that the parameter n is used in the equation section. There I use the following code:
equation
  ...
  for i in 2:n-1 loop
    T[i] = some equation
  end for
I also use n for the calculation of the initial values of T.
The goal is to write a script that repeatedly runs the model, each time with a different n.
How can I do this?
The issue here is that your parameter n affects the number of variables in the problem. Dymola (and every other Modelica compiler I know of) evaluates such parameters at compile time; in other words, it hard-codes their values into the translated model.
One potential workaround in your case is to perform the translation or simulation inside your loop. Note that the translate and simulate commands in Dymola accept modifications: just add them after the model name, so that MyModel becomes, for example, MyModel(n=10).
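As an illustration only (this is not part of the original answer), such a loop could be driven from Dymola's Python interface; the model name, the values of n, and the exact interface arguments below are assumptions that may need adjusting for your installation:

from dymola.dymola_interface import DymolaInterface  # shipped with Dymola

dymola = DymolaInterface()
try:
    for n in (5, 10, 20):
        # passing the modifier with the model name fixes n before translation
        ok = dymola.simulateModel("MyModel(n=%d)" % n,
                                  resultFile="MyModel_n%d" % n)
        if not ok:
            print(dymola.getLastErrorLog())
finally:
    dymola.close()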

Clearing numerical values in Mathematica

I am working on fairly large Mathematica projects, and the problem arises that I have to intermittently check numerical results but want to easily revert to having all my constructs in analytical form.
The code is fairly fluid, and I don't want to use scoping constructs everywhere, as they add overhead. Is there an easy way to identify and clear all assignments that are numerical?
EDIT: I really do know that scoping is the way to do this correctly ;-). However, for my workflow I am really just looking for a dirty trick to nix all numerical assignments after the fact instead of having the foresight to put down a Block.
If your assignments are on the top level, you can use something like this:
a = 1;
b = c;
d = 3;
e = d + b;
Cases[DownValues[In],
  HoldPattern[lhs_ = rhs_?NumericQ] |
    HoldPattern[(lhs_ = rhs_?NumericQ;)] :> Unset[lhs],
  3]
This will work if you have a sufficient history length ($HistoryLength, which defaults to infinity). Note, however, that in the above example e was assigned 3 + c, and the 3 here was not undone. So the problem is really ambiguous in its formulation, because some numbers can make it into definitions. One way to avoid this is to use SetDelayed for assignments rather than Set.
Another alternative would be to analyze the names in, say, the Global` context (if that is the context where your symbols live), and then the OwnValues and DownValues of those symbols, in a fashion similar to the above, and remove the definitions with a purely numerical right-hand side.
But IMO neither of these approaches is robust. I'd still use scoping constructs and try to isolate the numerics. One possibility is to wrap your final code in Block and assign the numerical values inside that Block. This seems a much cleaner approach. The work overhead is minimal: you just have to remember which symbols you want to assign the values to. Block will automatically ensure that outside it the symbols have no definitions.
EDIT
Yet another possibility is to use local rules. For example, one could define rule[a] = a -> 1; rule[d] = d -> 3 instead of the assignments above. You could then apply these rules, extracting them as, say, DownValues[rule][[All, 2]], whenever you want to test with some numerical arguments.
Building on Andrew Moylan's solution, one can construct a Block-like function that takes rules:
SetAttributes[BlockRules, HoldRest]
BlockRules[rules_, expr_] :=
  Block @@ Append[Apply[Set, Hold @ rules, {2}], Unevaluated[expr]]
You can then save your numeric rules in a variable and use BlockRules[savedrules, code], or even define a function that applies a fixed set of rules, like so:
In[76]:= NumericCheck =
Function[body, BlockRules[{a -> 3, b -> 2`}, body], HoldAll];
In[78]:= a + b // NumericCheck
Out[78]= 5.
EDIT: In response to Timo's comment, it might be possible to use NotebookEvaluate (new in Mathematica 8) to achieve the requested effect.
SetAttributes[BlockRules, HoldRest]
BlockRules[rules_, expr_] :=
  Block @@ Append[Apply[Set, Hold @ rules, {2}], Unevaluated[expr]]
nb = CreateDocument[{
   ExpressionCell[Defer[Plot[Sin[a x], {x, 0, 2 Pi}]], "Input"],
   ExpressionCell[Defer[Integrate[Sin[a x^2], {x, 0, 2 Pi}]], "Input"]}];

BlockRules[{a -> 4}, NotebookEvaluate[nb, InsertResults -> "True"];]
As the result of this evaluation, you get a notebook in which your commands were evaluated with a locally set to 4. To take it further, you would have to take the notebook with your code, open a new notebook, evaluate Notebooks[] to identify the notebook of interest, and then do:
BlockRules[variablerules,
  NotebookEvaluate[NotebookPut[NotebookGet[nbobj]],
    InsertResults -> "True"]]
I hope you can make this idea work.