Sum the binary variables in GLPK

I am new to GLPK. This is some of my code:
set I := setof{(i,r,p,d) in T} i;
var Y{I,I}, binary;
s.t. c1{i in I, j in I}: sum{Y[i,j]} = 6;
I want only six entries of Y to be 1. Can anyone tell me how to do it properly? Because s.t. c1{i in I, j in I}: sum{Y[i,j]} = 6; always produces an error.
Thank you.

This is just a syntax problem. The constraint should look like the following:
s.t. c1: sum{i in I, j in I}(Y[i,j]) = 6;
The braces right after the constraint name form an indexing expression, so your version generates one constraint for every single pair (i, j). What you want is to fix the sum of all Y in your problem, so the constraint must apply only once, which means those braces have to be removed.
In the sum syntax, the variable you want to sum does not go inside the braces; it goes after them. Inside the braces you only define the range of the sum.
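A minimal self-contained sketch showing the corrected constraint in context (the 4-element set and the file name are made up; save it as six.mod and run glpsol --math six.mod):
set I := 1..4;
var Y{I,I}, binary;
s.t. c1: sum{i in I, j in I} Y[i,j] = 6;
solve;
printf{i in I, j in I: Y[i,j] > 0.5} "Y[%d,%d] = 1\n", i, j;
end;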


Print a variable defined on multiple sets

I have defined the variables w in AMPL in the following way:
var w {F,J,T} binary;
where F, J and T are sets.
After solving the problem, I would like to print the tables w[f,*,*] for each f in F. What is the command that does this?
If I just write
display w;
it returns the tables w[*,j,*] for each j in J (I guess it's a question of the size of the results).
Thanks to anyone who answers.
By the way, for future reference, the answer is:
display {f in F} : {j in J, t in T} w[f, j, t];
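To see what the command produces, here is a throwaway toy instance (the sets are invented) that can be pasted at the ampl prompt; before any solve the entries simply show as 0, printed as one j-by-t table per element of F:
set F := {"a", "b"};
set J := 1..2;
set T := 1..3;
var w {F,J,T} binary;
display {f in F}: {j in J, t in T} w[f,j,t];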

Define the function for a distance matrix in AMPL. Keep getting "i is not defined"

I'm trying to set up an AMPL model which clusters given points in a 2-dimensional space according to the model of Saglam et al. (2005). For testing purposes I want to generate some random data points and then calculate the Euclidean distance matrix for them (since I need it). I'm aware that I could just supply the distance matrix without the data points, but in a later step the data points will be given, and then I will need to calculate the distances between the points myself.
Below you'll find the code I've written so far. While loading the model I keep getting the error message "i is not defined". Since i is a subscript that should run over x1, and x1 is a parameter which is defined over the set D and has one subscript, I cannot figure out why this code should be invalid. As far as I understand, I don't have to define variables if I use them only as subscripts?
reset;
# parameters to define the clustering
param m; # numbers of data points
param n; # numbers of clusters
# sets
set D := 1..m; #points to be clustered
set L := 1..n; #clusters
# randomly generate datapoints
param x1 {D} = Uniform(1,m);
param x2 {D} = Uniform(1,m);
param d {D,D} = sqrt((x1[i]-x1[j])^2 + (x2[i]-x2[j])^2);
# variables
var x {D, L} binary;
var D_l {L} >=0;
var D_max >= 0;
# minimization function
minimize max_clus_dis: D_max;
# constraints
subject to C1 {i in D, j in D, l in L}: D_l[l] >= d[i,j] * (x[i,l] + x[j,l] - 1);
subject to C2 {i in D}: sum{l in L} x[i,l] = 1;
subject to C3 {l in L}: D_max >= D_l[l];
So far I have tried to change the line from param x1 to
param x1 {i in D, j in D} = ...
as well as
param d {x1, x2} = ...
Alas, none of this helped. So, any help someone can offer is deeply appreciated. I searched the web but found nothing useful for my task.
I eventually found what was missing. The line in which I calculate the parameter d should be
param d {i in D, j in D} = sqrt((x1[i]-x1[j])^2 + (x2[i]-x2[j])^2);
In retrospect it's clear that the subscripts i and j have to be introduced in the indexing expression on that line; I don't know how I could miss that.
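As a quick check that the corrected declaration parses and computes, here is a tiny stand-alone snippet that can be typed at the ampl prompt (the value m = 5 is made up):
param m := 5;
set D := 1..m;
param x1 {D} = Uniform(1,m);
param x2 {D} = Uniform(1,m);
param d {i in D, j in D} = sqrt((x1[i]-x1[j])^2 + (x2[i]-x2[j])^2);
display d;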

MiniZinc "var set of int: x" instead of "set of int: x"

I have an array of sets in the Golfers problem (in each week groups should be formed such that no two players play together more than once, and everybody plays exactly once each week):
int: gr; %number of groups
set of int: G=1..gr;
int: sz; %size of groups
set of int: S=1..sz;
int: n=gr*sz; %number of players
set of int: P=1..n;
int: we; % number of weeks
set of int: W=1..we;
include "globals.mzn";
array[G,W] of var set of P: X; %X[g,w] is the set of people that form group g in week w
My constraints are as follows (I'm not sure if everything works correctly yet):
constraint forall (g in G, w in W) (card (X[g,w]) = sz); %Each group should have size sz
constraint forall (w in W, g,h in G where g > h) (disjoint(X[g,w], X[h,w])); % Nobody plays twice in one week
constraint forall (w,u in W where w > u) (forall (g,h in G) (card(X[g,w] intersect X[h,u]) <= 1 )); % Two players never meet more than once
constraint forall (w in 2..we) (w+sz-1 in X[1,w] /\ 1 in X[1,w]); %Symmetries breaking: week permutations
constraint forall (w in W, g in 1..gr-1) ( min(X[g,w]) < min(X[g+1,w]) ); %Symmetries breaking: group permutations
constraint forall (g in G, s in S) ( s+sz*(g-1) in X[g,1]);
solve satisfy;
output [ show(X[i,j]) ++ if j == we then "\n" else " " endif | i in 1..gr, j in 1..we ];
My problem lies in constraint number 5. I cannot use min on a "var set of int: x"; I can only use it on a "set of int: x". Unfortunately, I do not understand the difference between those two (from what I've read this may be connected to defining the size of each set, but I'm not sure).
Could someone explain the problem to me and propose a solution? I would be very very grateful. Thanks!
First of all: a var is a decision variable. The goal of every MiniZinc program is to decide the value of all decision variables. You don't know what the values are, and you are trying to find them. Anything that is not a var is simply a known number (disregarding the use of sets).
Doing min(X[g,w]) on a decision variable (var) is simply not implemented in MiniZinc. The reason would be that using X[g,w] < X[g+1,w] without the min makes more sense: why constrain only the lowest number in each set instead of all the numbers, i.e. {1,3,5} < {1,4} instead of 1 < 1?
(I hope MiniZinc has < on sets so that I'm not lying; I am not sure.)
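To make the distinction concrete, here is a toy snippet, completely separate from the golfers model (the names are invented):
set of int: KNOWN = 1..3;     % a plain set: its value is known before solving
var set of 1..3: CHOSEN;      % a decision variable: the solver chooses its elements
constraint card(CHOSEN) = 2;
solve satisfy;
output ["CHOSEN = \(CHOSEN)\n"];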
I have found the solution: build an array of the set's elements (with a comprehension), which makes the min/max functions usable in this case.
constraint forall (w in 2..we) ( max([i | i in X[1,w-1]]) < max([i | i in X[1,w]])); %Symmetries breaking: week permutations
constraint forall (w in W, g in 1..gr-1) ( min([i | i in X[g,w]]) < min([i | i in X[g+1,w]]));% Symmetries breaking: group permutations (I have been trying to speed up the constraint above, but it does not work with var set of int..)

Max number of consecutive values (MiniZinc)

I'm trying to model the next constraint in Minizinc:
Suppose S is an array of decision variables of size n. I want my decision variables to take a value between 1 and k, but there is a maximum Cons_Max on the number of consecutive values used.
For example, suppose Cons_Max = 2, n = 8 and k = 15. Then the sequence [1,2,4,5,7,8,10,11] is valid, while e.g. [1,2,3,5,6,8,9,11] is not, because the maximum number of consecutive values there is 3 (1,2,3).
It is important to mention that the sequence [1,3,5,7,9,10,12,14] is also valid: the values don't need to be consecutive at all, but the maximum number of consecutive values is capped at Cons_Max.
Any recommendations on how to model this in Minizinc?
Here's a model with an approach that seems to work. I also added the two constraints all_different and increasing, since they are probably assumed in the problem.
include "globals.mzn";
int: n = 8;
int: k = 15;
int: Cons_Max = 2;
% decision variables
array[1..n] of var 1..k: x;
constraint
  forall(i in 1..n-Cons_Max) (
    x[i+Cons_Max] - x[i] > Cons_Max
  )
;
constraint
  increasing(x) /\
  all_different(x)
;
%% test cases
% constraint
%   % x = [1,2,4,5,7,8,10,11]  % valid solution
%   % x = [1,3,5,7,9,10,12,14] % valid solution
%   % x = [1,2,3,5,6,8,9,11]   % -> not a valid solution (-> UNSAT)
% ;
solve satisfy;
output ["x: \(x)\n" ];
Suppose you use array x to represent your decision variable.
array[1..n] of var 1..k: x;
then you can model the constraint like this.
constraint not exists (i in 1..n-Cons_Max)(
  forall(j in i+1..i+Cons_Max)
    (x[j] = x[j-1]+1)
);
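As a quick sanity check of this formulation (the data values are taken from the question; the fixed sequence is the one the question calls invalid, so the solver should report unsatisfiability):
int: n = 8;
int: k = 15;
int: Cons_Max = 2;
array[1..n] of var 1..k: x;
constraint x = [1,2,3,5,6,8,9,11];   % contains the run 1,2,3, so it must be rejected
constraint not exists (i in 1..n-Cons_Max)(
  forall(j in i+1..i+Cons_Max)(x[j] = x[j-1]+1)
);
solve satisfy;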

AMPL: Subscript out of bound

Hello fellow optimizers!
I'm having some issues with the following constraint:
# The supply at node i equals what was present in the previous time period, plus any new supply, minus what has been extracted from the node.
subject to Constraint1 {i in I, t in T, c in C}:
l[i,t-1,c] + splus[i,t] - sum{j in J, v in V, a in A} x[i,j,v,t,c,a]= l[i,t,c];
which naturally causes this constraint to produce an error for the first time period, since t-1 is then 0 and l[i,0,c] is not defined. Here
var l{I,T,C} >= 0; # Supply at supply node I in time period T for company C.
param splus{I,T}; # Additional supply at i.
var x{N,N,V,T,C,A} integer >= 0; #Flow from some origin within N to a destination within N using vehicle V, in time T, for company C and product A.
and set T; (in the .mod) is a set defined as:
set T := 1 2 3 4 5 6 7; in the .dat file
I've tried to do:
subject to Constraint1 {i in I, t in T: t >= 2, c in C}:
all else same
which got me a syntax error. I've also tried to include "let l[1,0,1] := 0" for all possible combinations, which got me the error
error processing var l[...]:
no data for set I
I've also tried
subject to Constraint1 {i in I, t in T, p in TT: p>t, c in C}:
l[i,t,c] + splus[i,p] - sum{j in J, v in V, a in A} x[i,j,v,p,c,a]= l[i,p,c];
where
set TT := 2 3 4 5 6;
in the .dat file (and merely set TT; in the .mod) which also gave errors. Does someone have any idea of how to do this?
One way to fix this is to specify the condition t >= 2 at the end of the indexing expression:
subject to Constraint1 {i in I, t in T, c in C: t >= 2}:
...
See also Section A.3 Indexing expressions and subscripts for more details on syntax of indexing expressions.
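Putting it together, here is a sketch of the corrected constraint plus one possible way to handle the first period; the parameter l0 (initial supply) and the extra constraint are assumptions on my part, not part of the original model:
param l0 {I,C} default 0;   # assumed initial supply at node i for company c
subject to Constraint1 {i in I, t in T, c in C: t >= 2}:
  l[i,t-1,c] + splus[i,t] - sum{j in J, v in V, a in A} x[i,j,v,t,c,a] = l[i,t,c];
subject to Constraint1_first {i in I, c in C}:
  l0[i,c] + splus[i,1] - sum{j in J, v in V, a in A} x[i,j,v,1,c,a] = l[i,1,c];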