GAMS modeling: how do I set an identifier to the last value of a set (index.last) in an equation?

I'm modeling a VRP in GAMS and one of my equations would ideally be:
SUM(i, x(i,n+1)) =e= 0;
with "n+1" being the last value of the set i /0*4/ (so here it is 4).
I can't type x(i,"4") because this number (4) is just an example.
The software doesn't recognize this equation; the error says "unknown identifier set as index", which I understand is because "n" isn't a set.
So I declared n as a set, just like I did with i, but then I'd have to give it a value (3, so that n+1 = 4), and I don't want that.
I just need a way to use "n+1" as a valid index in x(i,n+1).

Assuming that x is declared as x(i,i), you can do something like this:
Alias (i,i2);
Equation eq;
eq.. SUM((i,i2)$i2.last, x(i,i2)) =e= 0;
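If your GAMS version does not have set attributes like .last, an equivalent condition uses the ord and card operators. A minimal sketch with the same declarations (eq2 is a hypothetical name):
Equation eq2;
* ord(i2) = card(i2) holds only for the last element of i2
eq2.. SUM((i,i2)$(ord(i2) = card(i2)), x(i,i2)) =e= 0;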

Related

How to write "then" as an IP constraint in Julia

Hello fellows, I am learning Julia and integer programming, but I am stuck at one point:
how to model "then" in Julia/JuMP for integer programming.
Stuck here:
# Define the variables of the model
@variable(mo, x[1:N,1:S], Bin)
@variable(mo, a[1:S] >= 0)
# Assignment constraint
@constraint(mo, [i=1:N], sum(x[i,j] for j=1:S) == 1)
# @constraint(mo, PLEASE HELP )
In cases like this you usually need to use big-M constraints, so this will be:
a_ij >= s_i^2 - M*(1-x_ij)
where M is a "big enough" number. This means that if x_ij == 0, the inequality is always satisfied (and hence effectively turned off). On the other hand, when x_ij == 1, the M part vanishes and the constraint must hold.
In JuMP terms the code will look like this:
const M = 10_000
@constraint(mo, [i=1:N, j=1:S], a[i, j] >= s[i]^2 - M*(1 - x[i, j]))
However, if s[i] is an external parameter rather than a model variable, you could simply use x[i,j] <= a[j]/s[i]^2, as proposed by @DanGetz. When s[i] is a @variable, though, you really want to avoid dividing or multiplying variables by each other, so the big-M approach is more general across use cases.
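For comparison with the GAMS questions in this thread, the same big-M pattern in GAMS might look like the following. This is a minimal sketch, and all set, parameter, and variable names are hypothetical:
Set i /1*5/, j /1*3/;
Parameter s(i);
s(i) = ord(i);
Scalar M 'big enough constant' / 10000 /;
Binary Variable x(i,j);
Positive Variable a(i,j);
Equation bigM(i,j);
* if x(i,j)=0 the right-hand side is very negative and the row is slack;
* if x(i,j)=1 the M term vanishes and a(i,j) >= sqr(s(i)) must hold
bigM(i,j).. a(i,j) =g= sqr(s(i)) - M*(1 - x(i,j));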

Inline index addition in GAMS

I want to use an indexed equation to iterate over a tensor, where I always want to extract the value at index i and at index i+1. An example:
Variable x; x.up = 10;
Parameter T /1=1,2=2,3=3,4=4,5=5/;
Set a /1,2,4/;
equation eq(a); eq(a).. x =g= T[a+1];
*x is restricted by the values of T at the indices 2, 3 and 5.
Model dummy /all/;
solve dummy using lp minimizing x;
I am aware that GAMS sees the indices as string keys rather than numerical ones, so the addition is not interpreted as intended. Is this possible anyway? This could be solved by defining another tensor, but unfortunately my given conditions require the index operation inline (i.e. I am not allowed to define additional parameters or sets).
Does this work for you?
Variable x; x.up = 10;
Set aa /1*6/;
Parameter T(aa) /1=1,2=2,3=3,4=4,5=5/;
Set a(aa) /1,2,4/;
equation eq(a); eq(a(aa)).. x =g= T[aa+1];
*x is restricted by the values of T at the indices 2, 3 and 5.
Model dummy /all/;
solve dummy using lp minimizing x;
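To see what the lead operator aa+1 picks up here, a quick check outside the model (a sketch only; Tnext is a hypothetical helper parameter, which the question's "no additional parameters" restriction would not allow inside the actual model):
Parameter Tnext(aa) 'value of T at the next index';
Tnext(aa) = T(aa+1);
display Tnext;
* Tnext("1")=2, Tnext("2")=3, Tnext("3")=4, Tnext("4")=5;
* a lead past the end of aa simply evaluates to zero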

Using the sum function in GAMS to sum over a subset of variables

I am working with maximization problems in GAMS where I will choose
X = (x_1, x_2, ..., x_n) such that f(X) = c_1*x_1 + ... + c_n*x_n is maximized. The c's are known scalars and I know n (10 in my case). I want my constraints to be such that the first n-1 = 9 x's sum up to one and the last one is less than 10. How do I use sum to do so?
This is what I have tried:
SET C / c1 .... c2 /;
ALIAS(Assets,i)
Parameter Valuesforc(i) 'C values'/
*( here are my values typed in for all the C1)
POSITIVE VARIABLES
x(i);
EQUATIONS
Const1 First constraint
Const2 Second constraint
Obj The Object;
* here comes the trouble:
Const1 .. x(10) =l= 10
Const2 .. sum((i-1),x(i)) =e= 1
The code is not done all the way, but I believe the essential setup is typed in. How do you make the summation find x_1 + x_2 + ... + x_(n-1), and how do you refer to x_10?
Try this:
Const1 .. x('10') =l= 10;
Const2 .. sum(i$(ord(i)<card(i)),x(i)) =e= 1;
Edit: Here are some notes to explain what happens in Const2, especially in the "$(ord(i) < card(i))" part.
The "$" starts a condition, so it excludes certain elements of i from the sum (see: https://www.gams.com/latest/docs/UG_CondExpr.html#UG_CondExpr_TheDollarCondition)
The operator ord returns the relative position of a member in a set (see: https://www.gams.com/latest/docs/UG_OrderedSets.html#UG_OrderedSets_TheOrdOperator)
The operator card returns the number of elements in a set (see: https://www.gams.com/latest/docs/UG_OrderedSets.html#UG_OrderedSets_TheCardOperator)
So, all in all, there is a condition saying that all elements of i should be included in the sum except for the last one.
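Putting the pieces together, a minimal self-contained version might look like this (a sketch; the coefficient values and all names here are hypothetical):
Set i 'decision index' /1*10/;
Parameter c(i) 'objective coefficients';
c(i) = ord(i);
Positive Variable x(i);
Variable z 'objective value';
Equation Const1, Const2, Obj;
Const1 .. x('10') =l= 10;
* sum over every element of i except the last one
Const2 .. sum(i$(ord(i) < card(i)), x(i)) =e= 1;
Obj .. z =e= sum(i, c(i)*x(i));
Model m /all/;
solve m using lp maximizing z;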

Summation under null set

I have a few sets, and the intersections of these sets give me a few new sets.
I want to sum over these intersections, but some of them are empty, and I get an error for my summation. For example:
Set I/1*3/;
Set j/1*3/;
Set s(I,j)
1.(2,3)
2.(1,3)
3.(2);
Alias (I,i1,j);
Set intersection (I,i1,j);
Intersection (I,i1,j)= s(I,j)*s(i1,j);
Variable x(j) ,z;
Binary variable x;
Equation c1,c2;
C1(I)..sum(j$s(I,j),x(j))=e=z;
C2(I,i1)..sum(j$intersection(i,I1,j),x(j))=g=1;
Model test /all/;
Solve test using lp minimizing z;
I get an error for constraint 2 because the intersection for (2,3) is empty, and I end up with 0 >= 1.
How can I write this summation?
I do not really understand what you are modeling here, but this way it runs without an error (there is still no feasible solution, though, because of equation C1('3')). The $ condition on the domain of C2 generates that equation only for pairs (I,i1) whose intersection is non-empty:
Set I/1*3/;
Set j/1*3/;
Set s(I,j) / 1.(2,3)
2.(1,3)
3.(2) /;
Alias (I,i1);
Set intersection (I,i1,j);
Intersection (I,i1,j)= s(I,j)*s(i1,j);
Variable x(j) ,z;
Binary variable x;
Equation c1,c2;
C1(I).. sum(j$s(I,j),x(j))=e=z;
C2(I,i1)$sum(j$Intersection(I,i1,j), 1)..
    sum(j$Intersection(I,i1,j), x(j)) =g= 1;
Model test /all/;
Solve test using mip minimizing z;

How to declare variable depends on other variable as a constraint in AMPL?

I'm trying to minimize the difference between the maximum and minimum values of a variable.
So, my objective equation is
minimize max{t in 0..T} production[t] - min{t in 0..T} production[t];
(t is the index, T is the time-periods parameter, and production is the decision variable.)
However, this objective is not linear.
Thus, I'm trying to declare 'max{t in 0..T} production[t]' as a variable 'y'.
So, I would like to write 'var y >= all production'.
But it's not working.
The constraint
s.t. max_production{t in 0..T}: y >= production[t];
will ensure that y is greater than or equal to production[t] for all t in 0..T. If you minimize y, then it will be exactly max{t in 0..T} production[t].
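The same linearization handles the full max-minus-min objective. Since most of this thread uses GAMS, here is the pattern as a minimal GAMS sketch; all names are hypothetical, and production would need its own defining constraints to make the model meaningful:
Set t /0*10/;
Variable production(t), ymax, ymin, spread;
Equation upper(t), lower(t), objdef;
* minimizing spread pushes ymax down onto the largest production(t)
upper(t).. ymax =g= production(t);
* and pushes ymin up onto the smallest production(t)
lower(t).. ymin =l= production(t);
objdef..   spread =e= ymax - ymin;
Model m /all/;
solve m using lp minimizing spread;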