How can I state the following constraint in Constraint Programming? (Preferably in Gurobi or Comet).
S is an array of integers of size n. The integers that I can use to fill the array are in the range 1..k. There is a constraint ci for each integer that can be used: ci denotes the minimum number of consecutive occurrences of the integer i, i.e. every maximal run of i's must have length at least ci.
For example, if c1 = 3 and c2 = 2, then 1112211112111 is not a valid sequence, since the lone 2 violates the requirement that there must be at least two consecutive 2's, whereas 1112211122111 is a valid sequence.
Perhaps using the regular constraint (automaton in Comet) would be the best approach.
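As an illustration of that idea for the example instance (k = 2, c = [3,2]), here is a small, untested sketch using MiniZinc's regular constraint; the six states and the transition table are my own encoding (state 0 is the failure state, and states 4 and 6 mean "the current run is already long enough"), so treat it as a sketch rather than a tested model:
include "globals.mzn";
int: n = 13;
array[1..n] of var 1..2: x;
% states: 1 = start, 2 = one 1, 3 = two 1s, 4 = three or more 1s, 5 = one 2, 6 = two or more 2s
constraint regular(x, 6, 2,
  [| 2, 5    % start: read a 1 or a 2
   | 3, 0    % a run of 1s may not stop after one 1
   | 4, 0    % ... nor after two 1s
   | 4, 5    % the run of 1s is long enough; a 2 may start
   | 0, 6    % a lone 2 may not be followed by a 1
   | 2, 6 |],
  1, {4, 6});
solve satisfy;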
However, here is a straightforward solution in MiniZinc which uses a lot of reifications. It should be possible to translate it to Comet at least (I don't think Gurobi supports reifications).
The decision variables (the sequence) are in the array "x". The model also uses a helper array ("starts") which contains the start positions of the sequences; this makes it easier to reason about the sequences in "x". The number of sequences is in "z" (e.g. for optimization problems).
Depending on the size of x and other constraints, one can probably add more (redundant) constraints on how many sequences there can be etc. This is not done here, though.
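For instance (an untested one-liner using the variables declared in the model below), since every sequence must contain at least min(c) elements, one such redundant bound could be:
constraint z <= n div min(c);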
Here's the model: http://www.hakank.org/minizinc/k_consecutive_integers.mzn
It's also shown below.
int: n;
int: k;
% number of consecutive integers for each integer 1..k
array[1..k] of int: c;
% decision variables
array[1..n] of var 1..k: x;
% starts[i] = 1 -> x[i] starts a new sequence
% starts[i] = 0 -> x[i] is in a sequence
array[1..n] of var 0..k: starts;
% sum of sequences
var 1..n: z = sum([bool2int(starts[i] > 0) | i in 1..n]);
solve :: int_search(x, first_fail, indomain_min, complete) satisfy;
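% if x[a] starts a sequence of value b, then it spans at least c[b] positions,
% no new sequence starts inside it, and the values just before and after differ from b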
constraint
forall(a in 1..n, b in 1..k) (
(starts[a] = b ) ->
(
a <= n-c[b]+1 % the run must fit inside the array
/\
forall(d in 0..c[b]-1 where a+d <= n) (x[a+d] = b )
/\
forall(d in 1..c[b]-1 where a+d <= n) (starts[a+d] = 0 )
/\
(if a > 1 then x[a-1] != b else true endif) % before the run
/\
(if a <= n-c[b] then x[a+c[b]] != b else true endif) % after the run
)
)
/\
% more on starts
starts[1] > 0 /\
forall(i in 2..n) (
starts[i] > 0 <-> ( x[i]!=x[i-1] )
)
/\
forall(i in 1..n) (
starts[i] > 0 -> x[i] = starts[i]
)
;
output [
"z : " ++ show(z) ++ "\n" ++
"starts: " ++ show(starts) ++ "\n" ++
"x : " ++ show(x) ++ "\n"
];
%
% data
%
%% From the question above:
%% It's a unique solution.
n = 13;
k = 2;
c = [3,2];
For the above question I went through some of the query optimization techniques like square root decomposition and binary indexed trees, but they don't help in solving my problem optimally. If anybody can suggest a query optimization technique through which I can solve this problem, please do.
You can answer each query in constant time, using O(n) space, where n is the size of your array. The initial construction takes O(n).
Given an array A, you first need to build the array XOR-A, in this way:
XOR-A[0] = A[0]
for i from 1 to n-1:
    XOR-A[i] = XOR-A[i-1] xor A[i]
Then you can answer a query on the range (l, r) as follows:
get_xor_range(l, r, XOR-A):
    return XOR-A[l-1] xor XOR-A[r]    (for l = 0, simply return XOR-A[r])
We use the fact that for any x, x xor x = 0; that is what makes this work: XOR-A[l-1] and XOR-A[r] share the prefix A[0] xor ... xor A[l-1], which cancels out and leaves A[l] xor ... xor A[r].
Edit: Sorry I did not understand the problem well at first.
Here is a method to apply all the updates to the array in O(m log m + n) time (the sorting of the queries dominates the per-query work) and O(n) extra space, where n is the size of the array and m the number of queries.
Notation: the array is A, of size n, the queries are (l_i, r_i, x_i), 0 <= i < m
L <- array of tuples (l_i, x_i) sorted in ascending order by l_i
R <- array of tuples (r_i, x_i) sorted in ascending order by r_i
X <- array initialised at 0 everywhere, of size n
li <- 0
ri <- 0
acc <- 0                               % xor of the x's of all queries whose range contains the current position j
for j from 0 to n-1 do:
    while li < m and L[li].l == j do:  % queries starting at j become active
        acc <- acc xor L[li].x
        li <- li + 1
    X[j] <- acc
    while ri < m and R[ri].r == j do:  % queries ending at j become inactive after j
        acc <- acc xor R[ri].x
        ri <- ri + 1
for j from 0 to n-1 do:
    A[j] <- A[j] xor X[j]
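Note that the sorting is not strictly necessary: the same X can be built by xoring each x_i into X[l_i] and (when r_i + 1 < n) into X[r_i + 1], and then replacing X by its prefix xor, which brings the update phase down to O(m + n).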
Consider N variables, x_1, x_2, ..., x_N. Given 1 <= i <= N and 1 <= j <= N, it holds that dx_i/dx_j = delta_i,j, i.e. the derivative is 1 when i = j and 0 otherwise.
While diff(x[i],x[i]) returns 1, unfortunately diff(x[i],x[j]) returns 0 rather than delta_i,j and sum(diff(x[i],x[j]),j=1..N) returns 0 rather than 1.
Is there a way of getting the correct derivative without specifying the value of N? I.e. a way that can be used for calculations that hold for any N.
The regular diff() command handles the arguments in a literal manner. However, you can try the Physics package, and treat the metric as the Kronecker delta:
restart;
with( Physics ):
Setup( metric = Euclidean ):
Define( x ):
f := diff( Sum( a[i] * x[i], i=1..N ), x[j] );
Simplify( eval( f, g_=KroneckerDelta ) ) assuming j >= 1 and j <= N; # returns a[j]
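With that setup the result matches the hand computation: diff(Sum(a[i]*x[i], i=1..N), x[j]) = Sum(a[i]*delta[i,j], i=1..N) = a[j].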
I have a MiniZinc model which is supposed to find d[1 .. n] and o[1..k, 0 .. n] such that
x[k] = o[k,0] + d[1]*o[k,1] + d[2]*o[k,2] + ... + d[n]*o[k,n] and the sum of the absolute values of the o[k,i]'s is minimized.
I have many different x[i] and d[1..n] should remain the same for all of them.
I have a working model, pasted below, which finds a good solution for the n=2 case really quickly (less than a second). However, if I go to n=3 (num_dims in the code), even after an hour I get no answer except the trivial one (x[i] = o[i,0]), even though the problem is somewhat recursive: a good answer for 2 dimensions can serve as a starting point for 3 dimensions, by using o[i,0] as x[i] for a new 2-dimensional problem.
I have used MiniZinc before; however, I do not have a background in OR or optimization, so I do not really know how to optimize my model. I would be grateful for any hints on how to do that, either by adding constraints or by somehow directing the search. Is there a way to debug such performance problems in MiniZinc?
My current model:
% the 1d offsets
array [1 .. num_stmts] of int : x;
x = [-10100, -10001, -10000, -9999, -9900, -101, -100, -99, -1, 1, 99, 100, 101, 9900, 9999, 10000, 10001, 10100];
int : num_stmts = 18;
% how many dimensions we decompose into
int : num_dims = 2;
% the dimension sizes
array [1 .. num_dims] of var int : dims;
% the access offsets
array [1 .. num_stmts, 1 .. num_dims] of var int : offsets;
% the cost function: make access distance (absolute value of offsets) as small as possible
var int : cost = sum (s in 1 .. num_stmts, d in 1 .. num_dims) (abs(offsets[s,d]));
% dimensions must be positive
constraint forall (d in 1 .. num_dims) (dims[d] >= 1);
% offsets * dimensions must be equal to the original offsets
constraint forall (s in 1 .. num_stmts) (
x[s] = offsets[s,1] + sum(d in 2 .. num_dims) (offsets[s,d] * dims[d-1])
);
% disallow dimension crossing
constraint forall (s in 1 .. num_stmts, d in 1 .. num_dims) (
abs(offsets[s,d]) < dims[d]
);
% all dims together need to match the array size
constraint product (d in 1..num_dims) (dims[d]) = 1300000;
solve minimize cost;
output ["dims = ", show(dims), "\n"] ++
[ if d == 1 then show_int(6, x[s]) ++ " = " else "" endif ++
" " ++ show_int(4, offsets[s, d]) ++ if d>1 then " * " ++ show(dims[d-1]) else "" endif ++
if d == num_dims then "\n" else " + " endif |
s in 1 .. num_stmts, d in 1 .. num_dims];
Are you using the MiniZinc IDE? Have you tried using a different solver?
I was struggling with a problem of dividing n random positive integers into m groups (m < n) where the sum of each group was supposed to be as close as possible to some other number.
When n reached about 100 and m about 10, it took significantly longer (30+ minutes) and the result was not satisfying. This was using the default Gecode (bundled) solver. By chance I went through each and every one of the solvers and found that COIN-OR CBC (bundled) found an optimal solution within 15 s.
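If you run models from the command line rather than the IDE, switching solvers is just a flag (assuming a standard MiniZinc 2.x bundle; minizinc --solvers lists the solver names available on your installation), e.g.:
minizinc --solver Gecode model.mzn data.dzn
minizinc --solver COIN-BC model.mzn data.dzn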
I have an array of sets in the Golfers problem (each week the players are split into groups such that no two players play together more than once, and everybody plays exactly once each week):
int: gr; %number of groups
set of int: G=1..gr;
int: sz; %size of groups
set of int: S=1..sz;
int: n=gr*sz; %number of players
set of int: P=1..n;
int: we; % number of weeks
set of int: W=1..we;
include "globals.mzn";
array[G,W] of var set of P: X; %X[g,w] is the set of people that form group g in week w
My constraints are as follows (I'm not sure if everything works correctly yet):
constraint forall (g in G, w in W) (card (X[g,w]) = sz); %Each group should have size sz
constraint forall (w in W, g,h in G where g > h) (disjoint(X[g,w], X[h,w])); % Nobody plays twice in one week
constraint forall (w,u in W where w > u) (forall (g,h in G) (card(X[g,w] intersect X[h,u]) <= 1 )); % Two players never meet more than once
constraint forall (w in 2..we) (w+sz-1 in X[1,w] /\ 1 in X[1,w]); %Symmetries breaking: week permutations
constraint forall (w in W, g in 1..gr-1) ( min(X[g,w]) < min(X[g+1,w]) ); %Symmetries breaking: group permutations
constraint forall (g in G, s in S) ( s+sz*(g-1) in X[g,1]); %Symmetries breaking: fix the groups of the first week
solve satisfy;
output [ show(X[i,j]) ++ if j == we then "\n" else " " endif | i in 1..gr, j in 1..we ];
My problem lies in constraint number 5. I cannot use min on "var set of int: x", I should use it on "set of int: x". Unfortunately, I do not understand the difference between those two (from what I've read this may be connected to defining the size of each set, but I'm not sure).
Could someone explain the problem to me and propose a solution? I would be very very grateful. Thanks!
First of all: a var is a decision variable. The goal of every MiniZinc program is to decide the value of all decision variables; you don't know what the values are and you are trying to find them. Anything that is not a var is simply a known number (disregarding the use of sets).
Taking min(X[g,w]) of a decision variable (var) is simply not implemented in MiniZinc. The reason would be that using X[g,w] < X[g+1,w] without the min makes more sense: why only constrain the lowest number in both sets instead of all numbers, i.e. {1,3,5} < {1,4} instead of 1 < 1?
(I hope MiniZinc has < on sets so that I am not lying; I am not sure.)
I have found a solution: build an array of the elements of the set, which makes the max (and min) functions usable in this case.
constraint forall (w in 2..we) ( max([i | i in X[1,w-1]]) < max([i | i in X[1,w]])); %Symmetries breaking: week permutations
constraint forall (w in W, g in 1..gr-1) ( min([i | i in X[g,w]]) < min([i | i in X[g+1,w]]));% Symmetries breaking: group permutations (I have been trying to speed up the constraint above, but it does not work with var set of int..)
I'm trying to model the following constraint in MiniZinc:
Suppose S is an array of decision variables of size n. I want my decision variables to take a value between 1 and k, but there is a maximum 'Cons_Max' on the number of consecutive values used.
For example, suppose Cons_Max = 2, n = 8 and k = 15; then the sequence [1,2,4,5,7,8,10,11] is a valid sequence, while e.g. [1,2,3,5,6,8,9,11] is not a valid sequence because the number of consecutive values is equal to 3 here (1,2,3).
Important to mention is that the sequence [1,3,5,7,9,10,12,14] is also valid: the values don't need to be consecutive, but the maximum number of consecutive values is fixed to 'Cons_Max'.
Any recommendations on how to model this in Minizinc?
Here's a model with an approach that seems to work. I also added the two constraints all_different and increasing since they are probably assumed in the problem. The key constraint requires x[i+Cons_Max] - x[i] > Cons_Max: in a strictly increasing array, a run of Cons_Max+1 consecutive values starting at position i would force x[i+Cons_Max] = x[i] + Cons_Max, which is exactly what this forbids.
include "globals.mzn";
int: n = 8;
int: k = 15;
int: Cons_Max = 2;
% decision variables
array[1..n] of var 1..k: x;
constraint
forall(i in 1..n-Cons_Max) (
x[i+Cons_Max]-x[i] > Cons_Max
)
;
constraint
increasing(x) /\
all_different(x)
;
%% test cases
% constraint
% % x = [1,2,4,5,7,8,10,11] % valid solution
% % x = [1,3,5,7,9,10,12,14] % valid solution
% % x = [1,2,3,5,6,8,9,11] % -> not valid solution (-> UNSAT)
% ;
solve satisfy;
output ["x: \(x)\n" ];
Suppose you use array x to represent your decision variables.
array[1..n] of var 1..k: x;
then you can model the constraint like this.
constraint not exists (i in 1..n-Cons_Max)(
    forall(j in i+1..i+Cons_Max)
        (x[j] = x[i] + (j - i))   % a window of Cons_Max+1 positions holding Cons_Max+1 consecutive values
);
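A quick way to sanity-check this formulation (a small hypothetical test model, not part of the original answer; like the question's examples it assumes the values are listed in increasing positions) is to pin x to the examples from the question and look at satisfiability:
int: n = 8;
int: k = 15;
int: Cons_Max = 2;
array[1..n] of var 1..k: x;
constraint not exists (i in 1..n-Cons_Max)(
    forall(j in i+1..i+Cons_Max) (x[j] = x[i] + (j - i))
);
constraint x = [1,2,3,5,6,8,9,11];    % the invalid example: expect UNSATISFIABLE
% constraint x = [1,2,4,5,7,8,10,11]; % the valid example: expect a solution
solve satisfy;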