Zimpl: Non-linear constraint - SCIP

I have a constraint of the following type (in ZIMPL):
sum (i,j) in S1 : x[i,j] * c[i,j] <= 100
where x is a two-dimensional binary variable and c[i,j] is a parameter. I would like to change this to
sum (i,j) in S1 : x[i,j] * c[i, sum (i) x[i,j]] <= 100
Essentially, the parameter's second index depends on the number of selected variables in the i-th row. Is there an effective way to do this?

First: It is not possible to index parameters with variable expressions, because this essentially makes them variables, too.
Instead, I suggest using additional variables to model the desired constraint, and I try to stay as close to plain ZIMPL as possible:
set S2 := { 0..card(S1) }; # new set to model all possible outcomes of the sum operation
var y[S1] >= 0; # y models nonnegative coefficients c[i,j]
var z[S2] binary; # models the value of the x-sum
subto binlink: sum <i,j> in S1: x[i,j] - sum <s> in S2: s * z[s] == 0;
# binlink expresses the outcome of the x-sum in z
subto partition: sum <s> in S2: z[s] == 1;
# maybe redundant because of binlink, but easy to write
subto coeflink: forall <i,j> in S1 do y[i,j] == sum <s> in S2: c[i,s] * z[s];
# links the continuous coefficient variable to the coefficient parameter
subto yourcons: sum <i,j> in S1: x[i,j] * y[i,j] <= 100;
# finally...
Note that this formulation is nonlinear, but I think it is worth a try. Its effectiveness pretty much depends on the number of "dynamic coefficients" in your formulation and the size of the set S2 defined in my answer.
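To see why the linking constraints reproduce the intended left-hand side, here is a small brute-force check in plain Python (not ZIMPL). The 2x2 index set and the coefficients c[i,s] are made-up toy data, and the check follows the formulation above, where s is the value of the total x-sum:
from itertools import product

# toy data, purely for illustration: rows/columns 0..1, so S1 has 4 pairs
S1 = [(i, j) for i in range(2) for j in range(2)]
S2 = range(len(S1) + 1)                                  # possible values of the x-sum
c = {(i, s): 10 * i + s for i in range(2) for s in S2}   # made-up coefficients c[i,s]

for bits in product([0, 1], repeat=len(S1)):
    x = dict(zip(S1, bits))
    s = sum(x.values())                                   # binlink forces sum_s s*z[s] to equal this
    z = {t: int(t == s) for t in S2}                      # partition: exactly one z[s] is 1
    y = {(i, j): sum(c[i, t] * z[t] for t in S2) for (i, j) in S1}   # coeflink
    lhs_reform = sum(x[i, j] * y[i, j] for (i, j) in S1)             # yourcons left-hand side
    lhs_wanted = sum(x[i, j] * c[i, s] for (i, j) in S1)             # what indexing c[i, s] directly would give
    assert lhs_reform == lhs_wanted
print("reformulated LHS matches the intended LHS for every binary x")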

Related

Calling a separation algorithm in Julia

I'm trying to solve a model using Julia/JuMP. The following is the outline of the model I created. Here, z[i,j] is a binary variable and d[i,j] is the cost associated with z[i,j] = 1.
My formulation creates an infinite number of constraints, hence I need to use a separation algorithm to solve it.
First, I solve the model without any of these constraints, so the values of all variables z[i,j] and d[i,j] are zero.
Then I include the separation algorithm (given inside the if condition). Even though I include if z_value == 0, the z values are not passed to it.
Am I missing something in the format of this model?
m = Model(solver=GurobiSolver())
@variable(m, z[N,N], Bin)
@variable(m, d[N,N] >= 0)
@objective(m, Min, sum{ d[i,j]*z[i,j], i in N, j in N} )
z_value = getvalue(z)
d_value = getvalue(d)
if z_value == 0
    statement
elseif z_value == 1
    statement
end
@constraint(m, sum{z[i,j], i in N, j in N} >= 2)
solve(m)
println("Final solution: [ $(getvalue(z)), $(getvalue(d)) ]")
You're multiplying z by d, which are both variables, hence your model is non-linear.
Are the costs d[i,j] constants, or really variables of the problem?
If they are variables, you need to use a non-linear solver.
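For reference, the overall control flow of such a separation (cutting-plane) loop is sketched below in plain Python. It is only a sketch: build_model, solve_model, get_values, find_violated_cut and add_cut are hypothetical placeholders for the corresponding JuMP/solver calls. The key point is that variable values can only be queried after each solve:
def build_model():
    return {"cuts": []}              # placeholder model object

def solve_model(model):
    pass                             # placeholder: call the MIP/LP solver here

def get_values(model):
    return {}                        # placeholder: query variable values *after* solving

def find_violated_cut(values):
    return None                      # placeholder separation routine; None means no violated constraint

def add_cut(model, cut):
    model["cuts"].append(cut)        # placeholder: add the violated constraint to the model

model = build_model()
while True:
    solve_model(model)                   # 1. solve the current relaxation
    values = get_values(model)           # 2. only now are variable values available
    cut = find_violated_cut(values)      # 3. separation: look for a violated constraint
    if cut is None:
        break                            # 4. nothing violated: the solution is feasible, stop
    add_cut(model, cut)                  # 5. otherwise add the cut and re-solve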

Minizinc "var set of int: x" instead of "set of int: x"

I have an array of sets in the Golfers problem (each week the players are split into groups such that no two players play together more than once, and everybody plays exactly once each week):
int: gr; %number of groups
set of int: G=1..gr;
int: sz; %size of groups
set of int: S=1..sz;
int: n=gr*sz; %number of players
set of int: P=1..n;
int: we; % number of weeks
set of int: W=1..we;
include "globals.mzn";
array[G,W] of var set of P: X; %X[g,w] is the set of people that form group g in week w
My constraints are as follows (I'm not sure yet whether everything works correctly):
constraint forall (g in G, w in W) (card (X[g,w]) = sz); %Each group should have size sz
constraint forall (w in W, g,h in G where g > h) (disjoint(X[g,w], X[h,w])); % Nobody plays twice in one week
constraint forall (w,u in W where w > u) (forall (g,h in G) (card(X[g,w] intersect X[h,u]) <= 1 )); % Two players never meet more than once
constraint forall (w in 2..we) (w+sz-1 in X[1,w] /\ 1 in X[1,w]); % Symmetry breaking: week permutations
constraint forall (w in W, g in 1..gr-1) ( min(X[g,w]) < min(X[g+1,w]) ); % Symmetry breaking: group permutations
constraint forall (g in G, s in S) ( s+sz*(g-1) in X[g,1]);
solve satisfy;
output [ show(X[i,j]) ++ if j == we then "\n" else " " endif | i in 1..gr, j in 1..we ];
My problem lies in constraint number 5: I cannot use min on a "var set of int: x", only on a "set of int: x". Unfortunately, I do not understand the difference between the two (from what I've read this may be connected to defining the size of each set, but I'm not sure).
Could someone explain the problem to me and propose a solution? I would be very very grateful. Thanks!
First of all: a var is a decision variable. The goal of every MiniZinc program is to decide the value of all decision variables; you don't know what the values are and you are trying to find them. Anything that is not a var is simply a known value (disregarding the use of sets).
Taking min(X[g,w]) of a decision variable (var) is simply not implemented in MiniZinc. The reason would be that using X[g,w] < X[g+1,w] without the min makes more sense: why constrain only the lowest number in both sets instead of all the numbers? I.e. {1,3,5} < {1,4} instead of 1 < 1.
(I hope MiniZinc has < on sets so this isn't wrong; I am not sure.)
I have found the solution: we should build an array from the elements of the set to make the max (and min) functions usable in this case.
constraint forall (w in 2..we) ( max([i | i in X[1,w-1]]) < max([i | i in X[1,w]])); % Symmetry breaking: week permutations
constraint forall (w in W, g in 1..gr-1) ( min([i | i in X[g,w]]) < min([i | i in X[g+1,w]])); % Symmetry breaking: group permutations (I have been trying to speed up the constraint above, but it does not work with var set of int...)

Max number of consecutive values (Minizinc)

I'm trying to model the following constraint in MiniZinc:
Suppose S is an array of decision variables of size n. I want my decision variables to take values between 1 and k, but there is a maximum 'Cons_Max' on the number of consecutive values used.
For example, suppose Cons_Max = 2, n = 8 and k = 15. Then the sequence [1,2,4,5,7,8,10,11] is a valid sequence, while e.g. [1,2,3,5,6,8,9,11] is not, because the max number of consecutive values equals 3 here (1,2,3).
Important to mention is that the sequence [1,3,5,7,9,10,12,14] is also valid: the values don't need to be consecutive, but the maximum number of consecutive values is bounded by 'Cons_Max'.
Any recommendations on how to model this in Minizinc?
Here's a model with an approach that seems to work. I also added the two constraints all_different and increasing, since they are probably assumed in the problem.
include "globals.mzn";
int: n = 8;
int: k = 15;
int: Cons_Max = 2;
% decision variables
array[1..n] of var 1..k: x;
constraint
forall(i in 1..n-Cons_Max) (
x[i+Cons_Max]-x[i] > Cons_Max
)
;
constraint
increasing(x) /\
all_different(x)
;
%% test cases
% constraint
% % x = [1,2,4,5,7,8,10,11] % valid solution
% % x = [1,3,5,7,9,10,12,14] % valid solution
% % x = [1,2,3,5,6,8,9,11] % -> not valid solution (-> UNSAT)
% ;
solve satisfy;
output ["x: \(x)\n" ];
Suppose you use an array x to represent your decision variables:
array[1..n] of var 1..k: x;
Then you can model the constraint like this, forbidding any window of Cons_Max+1 positions from holding Cons_Max+1 consecutive values:
constraint not exists (i in 1..n-Cons_Max)(
    forall(j in i+1..i+Cons_Max)
        (x[j] = x[i] + (j - i))
);
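As a quick sanity check of the definition (in plain Python, not MiniZinc), the snippet below computes the longest run of consecutive values for the example sequences from the question; it assumes the sequences are sorted and contain distinct values, as in the examples:
def max_consecutive_run(seq):
    # length of the longest run of consecutive values in a sorted, distinct sequence
    best = run = 1
    for a, b in zip(seq, seq[1:]):
        run = run + 1 if b == a + 1 else 1
        best = max(best, run)
    return best

Cons_Max = 2
for seq in ([1,2,4,5,7,8,10,11], [1,3,5,7,9,10,12,14], [1,2,3,5,6,8,9,11]):
    print(seq, "valid" if max_consecutive_run(seq) <= Cons_Max else "invalid")
# expected: the first two sequences are valid, the third is not (it contains the run 1,2,3)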

Sum the binary variables in GLPK

I am new to GLPK. This is some of my code:
set I := setof{(i,r,p,d) in T} i;
var Y{I,I}, binary;
s.t. c1{i in I, j in I}: sum{Y[i,j]} = 6;
I want only six values in Y to be 1. Can anyone tell me how to do this properly? Because s.t. c1{i in I, j in I}: sum{Y[i,j]} = 6; always produces an error.
Thank you.
This is just a syntax problem. The constraint should look like the following:
s.t. c1: sum{i in I, j in I}(Y[i,j]) = 6;
The braces right after the constraint name mean that the constraint is generated once for every single pair (i, j) in I x I. What you want is to fix the sum of all Y in your problem, so the constraint should appear only once (so delete those braces).
In the sum syntax, don't put the variable you want to sum inside the braces; it belongs after them. Inside the braces you define the range of the summation.
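As a plain-Python analogy (not GMPL; the index set I and the dictionary Y below are made-up stand-ins), the braces after the constraint name generate one constraint per index pair, while the braces after sum only define what is summed over:
I = range(3)                             # a small stand-in index set
Y = {(i, j): 1 for i in I for j in I}    # pretend every Y[i,j] equals 1

# c1{i in I, j in I}: ...   -> one constraint generated per pair, 9 of them here
generated = [(i, j) for i in I for j in I]

# sum{i in I, j in I} Y[i,j] -> a single expression summing over all pairs
total = sum(Y[i, j] for i in I for j in I)

print(len(generated), total)             # 9 generated constraints vs. one sum with value 9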

Number of solutions for a particular subset sum

Let's say we have the set {1, 2, ..., n}.
How many subsets of size R, {a_i1, a_i2, ..., a_iR}, sum up to a certain number S? What is the recursion for this problem?
Just define a method that solves the original problem. The parameters it receives are:
the maximum number that may be used (n),
the subset size (R),
the subset sum (S),
and it returns the number of combinations.
To implement this method, first we have to check whether the request is feasible at all. It is not possible to fulfill the task if:
the subset size is larger than the number of available elements (R > n), or
the maximal possible sum is smaller than S: n + (n-1) + ... + (n-R+1) < S, which simplifies to R*n - R*(R-1)/2 < S.
After handling the base case R = 0 (the empty subset has sum 0), it is enough to try all possibilities for the largest element of the subset. In Python it can be implemented like this:
def combinations(n, R, S):
    # base case: the empty subset (R == 0) has sum 0
    if R == 0:
        return 1 if S == 0 else 0
    # infeasible: not enough elements, or even the R largest elements sum to less than S
    if R > n or R * n - R * (R - 1) // 2 < S:
        return 0
    c = 0
    for i in range(R, n + 1):  # try i as the maximal element in the subset; it can go from R to n
        # recursion n is i-1, since i is already used
        # recursion R is R-1, since we put i in the set
        # recursion S is S-i, since i is added to the set and we look for the remaining sum
        c += combinations(i - 1, R - 1, S - i)
    return c
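For example, for the set {1, ..., 5} there are exactly two subsets of size 2 that sum to 5, namely {1,4} and {2,3}, so the call below should print 2:
print(combinations(5, 2, 5))  # -> 2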