Is it possible to solve a bi-objective model directly in GAMS?

Is there any command that can solve a multi-objective model directly?
I mean, without using the weighted-sum or epsilon-constraint methods, can we solve a multi-objective model in GAMS?
Many thanks!

GAMS has no dedicated solve statement for multiple objectives, so multi-objective models are usually scalarized. Below is an epsilon-constraint model in GAMS that solves a bi-objective optimization problem and traces out the Pareto-optimal front.
$title Pareto optimal front determination
$onText
For more details please refer to Chapter 2 (Gcode2.16), of the following book:
Soroudi, Alireza. Power System Optimization Modeling in GAMS. Springer, 2017.
--------------------------------------------------------------------------------
Model type: NLP
--------------------------------------------------------------------------------
Contributed by
Dr. Alireza Soroudi
IEEE Senior Member
email: alireza.soroudi#gmail.com
We do request that publications derived from the use of the developed GAMS code
explicitly acknowledge that fact by citing
Soroudi, Alireza. Power System Optimization Modeling in GAMS. Springer, 2017.
DOI: doi.org/10.1007/978-3-319-62350-4
$offText
Variable of1, of2, x1, x2;
Equation eq1, eq2, eq3, eq4;
eq1.. 4*x1 - 0.5*sqr(x2) =e= of1;
eq2.. -sqr(x1) + 5*x2 =e= of2;
eq3.. 2*x1 + 3*x2 =l= 10;
eq4.. 2*x1 - x2 =g= 0;
x1.lo = 1; x1.up = 2;
x2.lo = 1; x2.up = 3;
Model pareto1 / all /;
Set counter / c1*c21 /;
Scalar E;
Parameter report(counter,*), ranges(*);
solve pareto1 using nlp maximizing of1;
ranges('OF1max') = of1.l;
ranges('OF2min') = of2.l;
solve pareto1 using nlp maximizing of2;
ranges('OF2max') = of2.l;
ranges('OF1min') = of1.l;
loop(counter,
E = (ranges('OF2max') - ranges('OF2min'))*(ord(counter) - 1)/(card(counter) - 1) + ranges('OF2min');
of2.lo = E;
solve pareto1 using nlp maximizing of1;
report(counter,'OF1') = of1.l;
report(counter,'OF2') = of2.l;
report(counter,'E') = E;
);
display report;

Related

Gekko Variable Definition - Primary vs. Utility Decision Variable

I am trying to formulate and solve an optimization problem based on an article. The authors introduced two decision variables: the power of station i at time t, P_i,t, and a binary variable X_i,n which is 1 if vehicle n is assigned to station i.
They introduced some other variables, called utility variables. For instance, the energy delivered from station i up to time t for vehicle n, E_i,t,n, is calculated from the primary decision variables and a few fixed parameters.
My question is: should I define the utility variables as Gekko variables? If yes, which type is more appropriate?
I = 4 # number of stations
T = 24 # hours of simulation
N = 5 # number of vehicles
p = m.Array(m.Var,(I,T),lb=0,ub= params.ev.max_power)
x = m.Array(m.Var,(I,N),lb=0,ub=1, integer = True)
Should I define E as follows to solve these equations, for example? This introduces extra variables that are not primary decision variables and are calculated from other terms that depend on the primary decision variables.
E = m.Array(m.Var,(I,T,N),lb=0)
for i in range(I):
    for n in range(N):
        for t in range(T):
            m.Equation(E[i][t][n] >= np.sum(0.25 * availability[n, :t] * p[i,:t]) - (M * (1 - x[i][n])))
            m.Equation(E[i][t][n] <= np.sum(0.25 * availability[n, :t] * p[i,:t]) + (M * (1 - x[i][n])))
            m.Equation(E[i][t][n] <= M * x[i][n])
            m.Equation(E[i][t][n] >= -M * x[i][n])
All of those variable definitions and equations look correct. Here are a few suggestions:
There is no availability[] variable defined yet. If availability is a function of other decision variables, then it is generally more efficient to use an m.Intermediate() definition to define it.
As the total number of decision variables increases, there is often a large increase in computational time. I recommend starting with a small problem initially and then scaling up to the larger problem.
Try the gekko m.sum() instead of sum or np.sum() for potentially more efficient calculations. Using m.sum() does increase the model compile time but generally decreases the optimization solve time, so it is a trade-off.
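For illustration, here is a minimal sketch combining the m.Intermediate() and m.sum() suggestions above (the instance sizes, variable bounds, and the big-M constant below are hypothetical placeholders, and availability is assumed to be fixed data rather than a decision variable):
import numpy as np
from gekko import GEKKO

# Hypothetical small instance, only to make the sketch self-contained
I, T, N = 2, 4, 2
M = 100.0
availability = np.ones((N, T))                        # fixed data, not a decision variable

m = GEKKO(remote=False)
p = m.Array(m.Var, (I, T), lb=0, ub=10)               # station power
x = m.Array(m.Var, (I, N), lb=0, ub=1, integer=True)  # assignment
E = m.Array(m.Var, (I, T, N), lb=0)                   # delivered energy (utility variable)

for i in range(I):
    for n in range(N):
        for t in range(T):
            terms = [0.25 * availability[n, k] * p[i, k] for k in range(t)]
            # gekko m.sum() instead of np.sum(); m.Intermediate() stores the
            # common subexpression once instead of repeating it in each equation
            delivered = m.Intermediate(m.sum(terms)) if terms else 0
            m.Equation(E[i][t][n] >= delivered - M * (1 - x[i][n]))
            m.Equation(E[i][t][n] <= delivered + M * (1 - x[i][n]))
            m.Equation(E[i][t][n] <= M * x[i][n])
            m.Equation(E[i][t][n] >= -M * x[i][n])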

Hyperpriors for hierarchical models with Stan

I'm looking to fit a model to estimate multiple probabilities for binomial data with Stan. I was using beta priors for each probability, but I've been reading about using hyperpriors to pool information and encourage shrinkage on the estimates.
I've seen this example defining the hyperprior in pymc, but I'm not sure how to do something similar with Stan:
@pymc.stochastic(dtype=np.float64)
def beta_priors(value=[1.0, 1.0]):
    a, b = value
    if a <= 0 or b <= 0:
        return -np.inf
    else:
        return np.log(np.power((a + b), -2.5))

a = beta_priors[0]
b = beta_priors[1]
With a and b then being used as parameters for the beta prior.
Can anybody give me any pointers on how something similar would be done with Stan?
To properly normalize that, you need a Pareto distribution. For example, if you want a distribution p(a, b) ∝ (a + b)^(-2.5), you can use
a + b ~ pareto(L, 1.5);
where a + b > L. There's no way to normalize the density with support for all values greater than or equal to zero; it needs a finite L as a lower bound. There's a discussion of using just this prior as the count component of a hierarchical prior for a simplex.
If a and b are parameters, they can either both be constrained to be positive, or you can leave a unconstrained and declare
real<lower = L - a> b;
to ensure a + b > L. L can be a small constant or something more reasonable given your knowledge of a and b.
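For concreteness, here is a minimal sketch of how the bound and the Pareto prior fit together (assuming L is passed in as data; this fragment only illustrates the declarations and is not a complete, well-identified model on its own):
data {
  real<lower = 0> L;        // chosen lower bound for a + b
}
parameters {
  real a;                   // a left unconstrained
  real<lower = L - a> b;    // guarantees a + b > L
}
model {
  a + b ~ pareto(L, 1.5);   // p(a, b) proportional to (a + b)^(-2.5)
}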
You should be careful because this will not identify a + b. We use this construction as a hierarchical prior for simplexes as:
parameters {
  real<lower = 1> kappa;
  real<lower = 0, upper = 1> phi;
  vector<lower = 0, upper = 1>[K] theta;
}
model {
  kappa ~ pareto(1, 1.5);                        // power law prior
  phi ~ beta(a, b);                              // choose your prior for theta
  theta ~ beta(kappa * phi, kappa * (1 - phi));  // vectorized
}
There's an extended example in my Stan case study of repeated binary trials, which is reachable from the case studies page on the Stan web site (the case study directory is currently linked under the documentation link from the users tab).
Following the suggestions in the comments, I'm not sure that I will follow this approach, but for reference I thought I'd at least post the answer to my question of how this could be accomplished in Stan.
After some asking around on the Stan Discourse forum and further investigation, I found that the solution was to define a custom density and use the target += syntax. So the Stan equivalent of the pymc example would be:
parameters {
  real<lower = 0> a;
  real<lower = 0> b;
  real<lower = 0, upper = 1> p;
  ...
}
model {
  target += log((a + b)^(-2.5));
  p ~ beta(a, b);
  ...
}

GAMS to AMPL OPTIMIZATION

I wondered if someone could help me convert this GAMS model into an AMPL model. I am trying to understand the language.
Thanks in advance! You can see the model below.
GAMS Model
set activity / A*G/;
alias (activity,i,j);
set prec(i,j) /
A.(B,C), (B,E).F, C.D, D.E, F.G /;
parameter duration(activity) / A 2, B 3, C 3, D 4, E 8, F 6, G 2 /;
free variable time;
nonnegative variable s(i);
equations ctime(i)
ptime(i,j) ;
ctime(i).. time =g= s(i) + duration(i);
ptime(prec(i,j)).. s(i) + duration(i) =l= s(j);
model schedule /all/;
solve schedule using lp minimizing time;
display time.l, s.l;
The GAMS CONVERT utility, with the option Ampl, allows you to generate an AMPL input file (*.mod) from a GAMS model.
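A minimal sketch of how this could be applied to the scheduling model above (this is an assumption-level sketch: the option-file keyword Ampl and the default output name ampl.mod should be verified against the CONVERT documentation):
* request CONVERT as the LP "solver" and give it an option file
$onEcho > convert.opt
Ampl
$offEcho
schedule.optFile = 1;
option lp = convert;
solve schedule using lp minimizing time;
* CONVERT does not solve the model; it writes it out as an AMPL model file (ampl.mod)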

Min dominating set software

Cross posting this from CS Theory since it is more of a software question.
I need code for computing an exact MIN-DOM-SET. Currently the best suggestion has been to formulate it as an SMT problem and throw it at an SMT solver.
I'm curious whether there are any good MIN-DOM-SET-specific codes out there, or a good SMT-LIB formulation.
I coded one up in Z3's Python bindings using the new Optimize functionality.
from z3 import Optimize, Int, And, Sum, sat

def min_dom_set(graph):
    """Try to dominate the graph with the fewest vertices possible."""
    s = Optimize()
    nodes_colors = dict((node_name, Int('k%r' % node_name)) for node_name in graph.nodes())
    for node in graph.nodes():
        # each vertex is either a dominator (1) or not (0)
        s.add(And(nodes_colors[node] >= 0, nodes_colors[node] <= 1))
        # every vertex must be dominated by itself or by a neighbor
        dom_neighbor = Sum([nodes_colors[j] for j in graph.neighbors(node)])
        s.add(Sum(nodes_colors[node], dom_neighbor) >= 1)
    s.minimize(Sum([nodes_colors[y] for y in graph.nodes()]))
    if s.check() == sat:
        m = s.model()
        return dict((name, m[color].as_long()) for name, color in nodes_colors.items())
    raise Exception('Could not find a solution.')
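For example, assuming graph follows the networkx interface used above (.nodes() and .neighbors()), a hypothetical usage sketch could be:
import networkx as nx

g = nx.cycle_graph(5)                  # hypothetical test graph: a 5-cycle
assignment = min_dom_set(g)
dominators = [v for v, flag in assignment.items() if flag == 1]
print('dominating set:', dominators)   # a 5-cycle needs 2 dominators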

counting infeasible solutions in GAMS software

I want to run several mathematical models in GAMS and count the number of infeasible solutions. How should I write the condition of the if statement?
You can check the modelstat attribute of your models after solving them. Here is a little example:
equation obj;
variable z;
positive variable x;
obj.. z =e= 1;
equation feasible;
feasible.. x =g= 1;
equation infeasible1;
infeasible1.. x =l= -1;
equation infeasible2;
infeasible2.. x =l= -2;
model m1 /obj, feasible /;
model m2 /obj, infeasible1/;
model m3 /obj, infeasible2/;
scalar infCount Number of infeasible models /0/;
solve m1 min z use lp;
if(m1.modelstat = %ModelStat.Infeasible%, infCount = infCount + 1;);
solve m2 min z use lp;
if(m2.modelstat = %ModelStat.Infeasible%, infCount = infCount + 1;);
solve m3 min z use lp;
if(m3.modelstat = %ModelStat.Infeasible%, infCount = infCount + 1;);
display infCount;
If you have an integer problem, you should also check for %ModelStat.Integer Infeasible% and not only %ModelStat.Infeasible%, so the check after a solve could become
solve m3 min z use mip;
if(m3.modelstat = %ModelStat.Infeasible% or m3.modelstat = %ModelStat.Integer Infeasible%,
   infCount = infCount + 1;
);
I hope that helps!
Lutz