GAMS uncontrolled index issue

I have this situation in GAMS:
Sets
i mina / m1, m2 / ;
Parameters
k(i) non important description
/ m1 10
m2 20 /;
Variables
x(i) non important description;
Equations
r1 non important description;
r1 .. x(i) =l= k(i);
and r1 gives me error 149, "Uncontrolled set entered as constant".
What can I do to fix it? I've searched all around but nothing makes sense; x(i) and k(i) have the same dimensions, and I just want to say that x(i) <= k(i) for all i.

You need to declare and define your equation over the set i to say that you want it to hold for all i and not just once. Do it like this:
Equations
r1(i) non important description;
r1(i) .. x(i) =l= k(i);
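For completeness, here is the fix embedded in a minimal self-contained model; the dummy objective z and the solve statement are additions for illustration, not part of the original question:

```gams
Sets
   i mina / m1, m2 / ;
Parameters
   k(i) / m1 10, m2 20 /;
Variables
   x(i)
   z 'dummy objective, added so the model can be solved';
Equations
   r1(i) 'one inequality per element of i'
   obj;
r1(i).. x(i) =l= k(i);
obj..   z =e= sum(i, x(i));
Model demo / all /;
solve demo using lp maximizing z;
```

Because r1 is now declared and defined over i, GAMS generates one constraint per element of i, and the index i inside the equation body is controlled.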

Related

GAMS modeling: how do I set an identifier to the last value of a set (index.last) in an equation

I'm modeling a VRP in GAMS language and one of my equations would ideally be:
SUM(i, x(i,n+1)) =e= 0;
with "n+1" being the last value of the set i /0*4/ (so it's 4)
I can't type x(i,"4") because this number (4) is just an example.
The software doesn't recognize this equation; the error says "unknown identifier set as index", which I understand is because "n" isn't a set.
So I made n a set, just like I did with i, but then I'd have to give it a value (3, so that n+1 = 4), and I don't want that.
I just need a way to use "n+1" as a valid index in x(i,n+1).
Assuming that x is declared as x(i,i), you can do something like this:
Alias (i,i2);
Equation eq;
eq.. SUM((i,i2)$i2.last, x(i,i2)) =e= 0;
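Embedded in a minimal model for context (the positive-variable declaration and the dummy objective are illustrative assumptions), the same condition can also be written with ord and card:

```gams
Set i / 0*4 /;
Alias (i,i2);
Positive Variables x(i,i);
Variables z;
Equations eq 'no flow into the last node', obj;
* $i2.last keeps only the last element of i2;
* ord(i2) = card(i2) is an equivalent way to write the same condition
eq..  sum((i,i2)$(ord(i2) = card(i2)), x(i,i2)) =e= 0;
obj.. z =e= sum((i,i2), x(i,i2));
Model m / eq, obj /;
solve m using lp minimizing z;
```

Either form picks out the last element of i2 without hard-coding its label, so the constraint keeps working if the set bounds change.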

Defining the distance matrix function in AMPL; keep getting "i is not defined"

I'm trying to set up an AMPL model which clusters given points in a 2-dimensional space according to the model of Saglam et al. (2005). For testing purposes I want to randomly generate some data points and then calculate the Euclidean distance matrix for them (since I need this one). I'm aware that I could supply just the distance matrix without the data points, but in a later step the data points will be given and then I need to calculate the distances between the points.
Below you'll find the code I've written so far. While loading the model I keep getting the error message "i is not defined". Since i is a subscript that should run over x1, and x1 is a parameter which is defined over the set D and has one subscript, I cannot figure out why this code should be invalid. As far as I understand, I don't have to declare symbols that I use only as subscripts?
reset;
# parameters to define clustered
param m; # numbers of data points
param n; # numbers of clusters
# sets
set D := 1..m; #points to be clustered
set L := 1..n; #clusters
# randomly generate datapoints
param x1 {D} = Uniform(1,m);
param x2 {D} = Uniform(1,m);
param d {D,D} = sqrt((x1[i]-x1[j])^2 + (x2[i]-x2[j])^2);
# variables
var x {D, L} binary;
var D_l {L} >=0;
var D_max >= 0;
#minimization function
minimize max_clus_dis: D_max;
# constraints
subject to C1 {i in D, j in D, l in L}: D_l[l] >= d[i,j] * (x[i,l] + x[j,l] - 1);
subject to C2 {i in D}: sum{l in L} x[i,l] = 1;
subject to C3 {l in L}: D_max >= D_l[l];
So far I tried to change the line from param x1 to
param x1 {i in D, j in D} = ...
as well as
param d {x1, x2} = ...
Alas, none of this helped. So, any help someone can offer is deeply appreciated. I searched the web but found nothing useful for my task.
I found eventually what was missing. The line in which I calculated the parameter d should be
param d {i in D, j in D} = sqrt((x1[i]-x1[j])^2 + (x2[i]-x2[j])^2);
Retrospectively it's clear that the subscripts i and j should have been introduced on that line; I don't know how I could miss that.

Mixed Integer Linear Programming for a Ranking Constraint

I am trying to write a mixed integer linear programming for a constraint related to the rank of a specific variable, as follows:
I have X1, X2, X3, X4 as decision variables.
There is a constraint asking to define i as the rank of X1 (for example, if X1 is the largest number amongst X1, X2, X3, X4, then i=1; if X1 is the second largest number then i=2; if X1 is the 3rd largest number then i=3; else i=4).
How could I write this constraint into a mixed integer linear programming?
Not so easy. Here is an attempt:
First introduce binary variables y(i) for i=2,3,4
Then we can write:
x(1) >= x(i) - (1-y(i))*M i=2,3,4
x(1) <= x(i) + y(i)*M i=2,3,4
rank = 4 - sum(i,y(i))
y(i) ∈ {0,1} i=2,3,4
Here M is a large enough constant (a good choice is the maximum range of the data). If your solver supports indicator constraints, you can simplify things a bit.
A small example illustrates it works:
---- 36 VARIABLE x.L
i1 6.302, i2 8.478, i3 3.077, i4 6.992
---- 36 VARIABLE y.L
i3 1.000
---- 36 VARIABLE rank.L = 3.000
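A minimal GAMS sketch of this formulation, with x fixed to the data from the example above so the determined y and rank can be inspected; the set names, the big-M value, and the dummy objective are illustrative assumptions:

```gams
Set i / i1*i4 /;
Set j(i) 'everything x1 is compared against' / i2*i4 /;
Parameter xval(i) 'data from the example'
          / i1 6.302, i2 8.478, i3 3.077, i4 6.992 /;
Scalar M 'large enough: at least the range of the data' / 10 /;
Variables x(i), rank, z;
Binary Variable y(j);
Equations up(j), lo(j), defrank, obj;
* y(j) = 1 exactly when x('i1') >= x(j)
up(j)..   x('i1') =g= x(j) - (1 - y(j))*M;
lo(j)..   x('i1') =l= x(j) + y(j)*M;
defrank.. rank =e= card(i) - sum(j, y(j));
* dummy objective: y and rank are fully determined by the fixed x values
obj..     z =e= rank;
x.fx(i) = xval(i);
Model rk / all /;
solve rk using mip minimizing z;
display y.l, rank.l;
```

With these data only y('i3') can be 1 (x1 exceeds only x3), so rank = 4 - 1 = 3, matching the listing above.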

Using the sum function in GAMS to sum over a subset of variables

I am working with maximization problems in GAMS where I will choose
X=(x_1,x_2,...,x_n) such that f(X)=c_1*x_1+...+c_n*x_n is maximized. The c's are known scalars and I know n (10 in my case). I want my constraints to be such that the first n-1 = 9 x's sum up to one and the last one is less than 10. How do I use sum to do so?
This is what I have tried:
SET C / c1 .... c2 /;
ALIAS(Assets,i)
Parameter Valuesforc(i) 'C values'/
*( here are my values typed in for all the C1)
POSITIVE VARIABLES
x(i);
EQUATIONS
Const1 First constraint
Const2 Second constraint
Obj The Object;
* here comes the trouble:
Const1 .. x(10) =l= 10
Const2 .. sum((i-1),x(i)) =e= 1
The code is not done all the way, but I believe the essential setup is typed in. How do you make the summation to find x_1 + x_2 + ... + x_(n-1), and how do you refer to x_10?
Try this:
Const1 .. x('10') =l= 10;
Const2 .. sum(i$(ord(i)<card(i)),x(i)) =e= 1;
Edit: Here are some notes to explain what happens in Const2, especially in the "$(ord(i) < card(i))" part.
The "$" starts a condition, so it excludes certain elements of i from the sum (see: https://www.gams.com/latest/docs/UG_CondExpr.html#UG_CondExpr_TheDollarCondition)
The operator ord returns the relative position of a member in a set (see: https://www.gams.com/latest/docs/UG_OrderedSets.html#UG_OrderedSets_TheOrdOperator)
The operator card returns the number of elements in a set (see: https://www.gams.com/latest/docs/UG_OrderedSets.html#UG_OrderedSets_TheCardOperator)
So, all in all, there is a condition saying that all elements of i should be included in the sum except for the last one.
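Put together, a minimal self-contained version of the model could look like this; the element names i1*i10, the random c values, and the objective are illustrative assumptions:

```gams
Set i 'assets' / i1*i10 /;
Parameter c(i) 'objective coefficients';
c(i) = uniform(0, 1);
Positive Variables x(i);
Variables z;
Equations Const1 'last x is at most 10'
          Const2 'first n-1 x sum to one'
          Obj    'objective';
Const1.. x('i10') =l= 10;
Const2.. sum(i$(ord(i) < card(i)), x(i)) =e= 1;
Obj..    z =e= sum(i, c(i)*x(i));
Model port / all /;
solve port using lp maximizing z;
```

Note that x('i10') refers to the element by its label in quotes; writing x(10) without quotes is what triggered the original error.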

Counting infeasible solutions in GAMS

I want to run several mathematical models in GAMS and count the number of infeasible solutions. How should I write the condition of the IF statement?
You can check the modelstat attribute of your models after solving them. Here is a little example:
equation obj;
variable z;
positive variable x;
obj.. z =e= 1;
equation feasible;
feasible.. x =g= 1;
equation infeasible1;
infeasible1.. x =l= -1;
equation infeasible2;
infeasible2.. x =l= -2;
model m1 /obj, feasible /;
model m2 /obj, infeasible1/;
model m3 /obj, infeasible2/;
scalar infCount Number of infeasible models /0/;
solve m1 min z use lp;
if(m1.modelstat = %ModelStat.Infeasible%, infCount = infCount+1;)
solve m2 min z use lp;
if(m2.modelstat = %ModelStat.Infeasible%, infCount = infCount+1;)
solve m3 min z use lp;
if(m3.modelstat = %ModelStat.Infeasible%, infCount = infCount+1;)
display infCount;
If you have an integer problem you should also check for %ModelStat.Integer Infeasible% and not only %ModelStat.Infeasible%, so the check after a solve could become
solve m3 min z use mip;
if(m3.modelstat = %ModelStat.Infeasible% or m3.modelstat = %ModelStat.Integer Infeasible%,
infCount = infCount+1;
)
I hope that helps!
Lutz