AMPL to JuMP (Julia) - optimization

I need to translate some AMPL code to JuMP.
param f;
set R := 1..N;
set R_OK := 1..M;
set V := 1..N;
param tMax;
set T := 1..tMax;
var primary{R,V}, binary;
var SendPrepReq{T,R,V}, binary;
"param f" would be an int. The varibles I know how to do. But what about the sets? What is its equivalent in JuMP?

One of the most relevant pieces of documentation is the Quickstart guide, which covers the basics of how JuMP works.
For your example, you can just declare your parameters directly:
using JuMP
# declare some parameters
f = 3
N = 10
M = 5
R = 1:N
V = 1:N
R_OK = 1:M
Tmax = 33
T = 1:Tmax
# create the model
m = Model()
# add variables
@variable(m, primary[R, V], Bin)
@variable(m, SendPrepReq[T, R, V], Bin)
EDIT
One might want to provide the parameters independently of the model declaration, as in AMPL. The most straightforward way in Julia is to build and solve the model in a function that takes the problem parameters as arguments:
function build_model(f, N, M, Tmax)
    R = 1:N
    V = 1:N
    R_OK = 1:M
    T = 1:Tmax
    # create the model
    m = Model()
    # add variables
    @variable(m, primary[R, V], Bin)
    @variable(m, SendPrepReq[T, R, V], Bin)
    return (m, primary, SendPrepReq)
end
You can then build the model for any given parameters with, for example, (m, primary, SendPrepReq) = build_model(3, 10, 5, 33).

Related

GEKKO - MINLP in Matrix Form - Errors using m.axb()

I am trying to solve a MINLP problem using GEKKO. My code is the following:
m = GEKKO(remote = True)
m.options.SOLVER = 3
m.solver_options = ['minlp_maximum_iterations 500', \
                    # minlp iterations with integer solution
                    'minlp_max_iter_with_int_sol 10', \
                    # treat minlp as nlp
                    'minlp_as_nlp 0', \
                    # nlp sub-problem max iterations
                    'nlp_maximum_iterations 50', \
                    # 1 = depth first, 2 = breadth first
                    'minlp_branch_method 1', \
                    # maximum deviation from whole number
                    'minlp_integer_tol 0.05', \
                    # convergence tolerance
                    'minlp_gap_tol 0.01']
# Array Variable
rows = nb_phases + 3*b_max*(nb_phases+1) # 48
columns = 1
x = np.empty((rows,columns),dtype=object)
for i in range(3*nb_phases*b_max+nb_phases+1):
    for j in range(columns):
        x[i,j] = m.Var(value = xinit[i,j], lb = LB[i,j], ub = UB[i,j], integer = False)
for i in range(3*nb_phases*b_max+nb_phases+1, (3*nb_phases+3)*b_max+nb_phases):
    for j in range(columns):
        x[i,j] = m.Var(value = xinit[i,j], lb = LB[i,j], ub = UB[i,j], integer = True)
# Constraints
#m.axb(A = A, b = B, x = x, etype = '<=', sparse = False)
m.axb(A, B, etype = '<=', sparse = False)
#m.axb(A = A_eq, b = B_eq, x = x, etype = '=', sparse = False)
m.axb(A_eq, B_eq, etype = '=', sparse = False)
for i in range(rows):
    for j in range(columns):
        m.Minimize((x[i,j]-i*j)**2)
# Solver
m.solve(disp = True)
When calling the axb function, if I declare the variable x in the arguments as follows:
m.axb(A = A, b = B, x = x, etype = '<=', sparse = False)
I get the error: List x must be composed of GEKKO parameters or variables. I don't really understand why I get this error, since x is a GEKKO variable.
If I don't declare the variable x in the arguments of the axb function:
m.axb(A, B, etype = '<=', sparse = False)
I get the following error: AXB Missing Configuration File, Error: AXB object missing: axb1.txt, Example config file: axb1.txt
I was thinking maybe the issue is that x is not defined as an array. Therefore, considering x[i,j], I tried to write out the equation Ax <= b explicitly, coding the matrix product A.x in a loop to avoid calling m.axb, but I am not sure how to declare the equations afterwards. My code is the following:
Ax = []
for i in range(rows):
    temp = []
    for j in range(columns):
        temp.append(A[i,j]*x[j,0])
    Ax.append(sum(temp))
for i in range(rows):
    m.Equations(Ax[i] <= B[i])
I get the error: 'int' object is not subscriptable
Is anyone able to help me figure out how to solve this problem?
Is there a way of defining x as an array? (Since some of its elements are integers and some aren't)
Thanks a lot!
Here is a solution that works with the newer version of Gekko that is not yet released but is available on GitHub. You'll need to put the newest version of gekko.py (v1.0) in the Lib/site-packages/gekko folder and the local executable (apm.exe for Windows, apm_mac for MacOS, apm for Linux) in the Lib/site-packages/gekko/bin folder to use remote=False.
from gekko import GEKKO
import numpy as np
m = GEKKO(remote = False)
m.options.SOLVER = 3
nb_phases = 2
b_max = 3
m.solver_options = ['minlp_maximum_iterations 500', \
                    # minlp iterations with integer solution
                    'minlp_max_iter_with_int_sol 10', \
                    # treat minlp as nlp
                    'minlp_as_nlp 0', \
                    # nlp sub-problem max iterations
                    'nlp_maximum_iterations 50', \
                    # 1 = depth first, 2 = breadth first
                    'minlp_branch_method 1', \
                    # maximum deviation from whole number
                    'minlp_integer_tol 0.05', \
                    # convergence tolerance
                    'minlp_gap_tol 0.01']
# Array Variable
rows = nb_phases + 3*b_max*(nb_phases+1) # 48
columns = 1
xinit = np.ones(rows)
LB = np.zeros(rows)
UB = np.ones(rows)*10.0
#x = m.Array(m.Var,(rows))
x = np.empty(rows,dtype=object)
for i in range(3*nb_phases*b_max+nb_phases+1):
    x[i] = m.Var(value = xinit[i], lb = LB[i], ub = UB[i], integer = False)
for i in range(3*nb_phases*b_max+nb_phases+1, (3*nb_phases+3)*b_max+nb_phases):
    x[i] = m.Var(value = xinit[i], lb = LB[i], ub = UB[i], integer = True)
# Constraints
#m.axb(A = A, b = B, x = x, etype = '<=', sparse = False)
A = np.ones((1,rows)); B = np.zeros(1)
m.axb(A, B, x, etype = '<=', sparse = False)
#m.axb(A = A_eq, b = B_eq, x = x, etype = '=', sparse = False)
m.axb(A, B, x, etype = '=', sparse = False)
for i in range(rows):
    m.Minimize((x[i]-i)**2)
# Solver
m.options.SOLVER = 1
m.solve(disp = True)
This produces the solution:
----------------------------------------------------------------
APMonitor, Version 1.0.0
APMonitor Optimization Suite
----------------------------------------------------------------
--------- APM Model Size ------------
Each time step contains
Objects : 2
Constants : 0
Variables : 29
Intermediates: 0
Connections : 58
Equations : 29
Residuals : 29
Number of state variables: 29
Number of total equations: - 2
Number of slack variables: - 0
---------------------------------------
Degrees of freedom : 27
----------------------------------------------
Steady State Optimization with APOPT Solver
----------------------------------------------
Iter: 1 I: 0 Tm: -0.00 NLPi: 2 Dpth: 0 Lvs: 0 Obj: 7.71E+03 Gap: 0.00E+00
Successful solution
---------------------------------------------------
Solver : APOPT (v1.0)
Solution time : 0.019000000000000003 sec
Objective : 7714.
Successful solution
---------------------------------------------------
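On the last part of the question (defining x as a single array when some elements are integer and some are not): the commented-out m.Array line above hints at gekko's m.Array helper, which creates a NumPy array of variables in one call but forwards the same keyword arguments, including integer, to every element. A minimal sketch under that constraint, with hypothetical block sizes, is to build two arrays and concatenate them:
from gekko import GEKKO
import numpy as np
m = GEKKO(remote = False)
n_cont, n_int = 20, 9  # hypothetical sizes of the continuous and integer blocks
# m.Array passes the same kwargs to every m.Var it creates,
# so continuous and integer variables come from separate calls
x_cont = m.Array(m.Var, n_cont, value = 1, lb = 0, ub = 10)
x_int = m.Array(m.Var, n_int, value = 1, lb = 0, ub = 10, integer = True)
# one NumPy object array, usable with m.axb like the loop-built x above
x = np.concatenate([x_cont, x_int])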

How to add a new variable to an already existing set of variables (based on a SparseAxisArray) in JuMP?

I am currently working with a JuMP model where I define the following example variables:
using JuMP
N = 3
outN = [[4,5],[1,3],[5,7]]
m = Model()
@variable(m, x[i=1:N, j in outN[i]] >= 0)
At some point, I want to add, for example, a variable x[1,7]. How can I do that in an effective way? Likewise, how can I remove it afterwards? Is there an alternative to just fixing it to 0?
Thanks in advance
You're probably better off just using a dictionary:
using JuMP
N = 3
outN = [[4,5],[1,3],[5,7]]
model = Model()
x = Dict(
    (i, j) => @variable(model, lower_bound = 0, base_name = "x[$i, $j]")
    for i in 1:N for j in outN[i]
)
x[1, 7] = @variable(model, lower_bound = 0)
delete(model, x[1, 4])
delete!(x, (1, 4))
Nothing about JuMP restricts you to using only the built-in variable containers: https://jump.dev/JuMP.jl/stable/variables/#User-defined-containers-1

jDE (Adaptive Differential Evolution)

In jDE, each individual has its own F and CR values. How do I assign these values to each individual programmatically? And how do I update them?
Pseudo-code would help.
If you want each individual to have its own F and CR values, you can simply store them in the list that represents each solution. (Pseudo-code in Python)
import numpy as np
ID_POS = 0
ID_FIT = 1
ID_F = 2
ID_CR = 3
def create_solution(problem_size):
    pos = np.random.uniform(lower_bound, upper_bound, problem_size)
    fit = fitness_function(pos)
    F = your_F_value
    CR = your_CR_value
    return [pos, fit, F, CR]
def training(problem_size, pop_size, max_iteration):
    # Initialization
    pop = [create_solution(problem_size) for _ in range(0, pop_size)]
    # Evolution process
    for iteration in range(0, max_iteration):
        for i in range(0, pop_size):
            # Do your stuff here
            pos_new = ...
            fit_new = ...
            F_new = ...
            CR_new = ...
            if pop[i][ID_FIT] < fit_new:  # the new solution has better fitness than the old one
                pop[i][ID_F] = F_new
                pop[i][ID_CR] = CR_new  # this is how you update F and CR for every individual
            ...
You can check out my repo, which contains most of the state-of-the-art meta-heuristics, here:
https://github.com/thieunguyen5991/metaheuristics
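As for the update rule itself: in the original jDE (Brest et al., 2006), each individual's F and CR are regenerated with small probabilities tau1 and tau2 before its trial vector is created, and the regenerated values are kept only if the trial solution survives selection, so good control parameters persist together with the good solutions they produce. A minimal sketch of that rule, using the parameter values from the paper:
import numpy as np
F_L, F_U = 0.1, 0.9    # F is drawn from [F_L, F_L + F_U] = [0.1, 1.0]
TAU1, TAU2 = 0.1, 0.1  # regeneration probabilities
def adapt_F_CR(F_old, CR_old):
    # regenerate F with probability TAU1, CR with probability TAU2
    F_new = F_L + np.random.rand()*F_U if np.random.rand() < TAU1 else F_old
    CR_new = np.random.rand() if np.random.rand() < TAU2 else CR_old
    return F_new, CR_new
# inside the loop over individuals: build the trial vector using F_new/CR_new,
# then store [pos_new, fit_new, F_new, CR_new] only if the trial is better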

How to define a function to use with scipy.integrate.solve_ivp

I am trying to solve a differential equation using scipy.integrate.solve_ivp
L*Q'' + R*Q' + (1/C)*Q = E(t), E(t) = 230*sin(50*t)
for Q(t) and Q'(t)
C = 0.0014 #F
dQ_0 = 2.6 #A
L = 1.8 #H
n = 575 #/
Q_0 = 1e-06 #C
R = 43 #Ohm
t_f = 2.8 #s
import numpy as np
from scipy.integrate import solve_ivp
t = np.linspace(0, t_f, n)
def E(x):
    return 230*np.sin(50*x)
y = E(y)
def Q(t, y, R, L, C):
    return (y - L*Q'' - R*Q')*C
init_cond = [Q_0, dQ_0]
y_ivp = solve_ivp(Q, t_span=(0, t_f), y0=init_cond)
I am only trying to understand how to correctly define a function that is passed as the argument 'fun' in scipy.integrate.solve_ivp.
The answer below is no longer valid: solve_ivp has since implemented the args parameter, similar to odeint. So indeed
y_ivp = solve_ivp(Q, t_span=(0, t_f), y0=init_cond, args=(R, L, C))
is now valid (do not forget to set appropriate error tolerances, or at least check that the default values rtol=1e-3, atol=1e-6 are appropriate).
It was always possible to use semi-global variables in a closure or lambda expression:
y_ivp = solve_ivp(lambda t, y: Q(t, y, R, L, C), t_span=(0, t_f), y0=init_cond)
(Obsolete part) solve_ivp previously had no parameter-passing mechanism, so treat the parameters as global variables. You are formulating an ODE for Q; as it is a second-order ODE, the state also contains the first derivative, as you somehow recognized in the composition of the initial state. The ODE function then needs to produce the derivative values at a given state. Identify Q(t)=Q[0] and Q'(t)=Q[1]; then
def Q_ode(t, Q):
    return [ Q[1], (E(t) - R*Q[1] - (1/C)*Q[0])/L ]
I would continue to name the variables containing Q values with the letter Q.
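Putting the pieces together, here is a minimal runnable sketch that combines the question's parameter values with the ODE function above (the tolerance values are only illustrative):
import numpy as np
from scipy.integrate import solve_ivp
# parameter values from the question
C = 0.0014   # F
L = 1.8      # H
R = 43       # Ohm
Q_0 = 1e-06  # C, initial charge
dQ_0 = 2.6   # A, initial current
t_f = 2.8    # s
n = 575
def E(t):
    return 230*np.sin(50*t)
# state Q = [Q, Q']; L*Q'' + R*Q' + (1/C)*Q = E(t) gives [Q', Q'']
def Q_ode(t, Q, R, L, C):
    return [Q[1], (E(t) - R*Q[1] - (1/C)*Q[0])/L]
sol = solve_ivp(Q_ode, t_span=(0, t_f), y0=[Q_0, dQ_0], args=(R, L, C),
                t_eval=np.linspace(0, t_f, n), rtol=1e-6, atol=1e-9)
Q_t, dQ_t = sol.y  # charge and current at the points in sol.t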

How to use scipy minimize when the constraints are dynamic?

I have the following optimization problem:
Where X and q are endogenous while the other variables are known.
I use the scipy minimize function to solve it. I have no problems with the bounds and constraints:
import numpy as np
import scipy.optimize as sco
# objective function
def objective(q, s):
    return -sumprod(q, s)
def sumprod(l1, l2):
    return sum([x*y for x, y in zip(*[l1, l2])])
# constraints
def cons_periodicflow_min(q):
    return q.sum() - qpmin
con1 = {'type':'ineq','fun':cons_periodicflow_min}
def cons_periodicflow_max(q):
    return qpmax - q.sum()
con2 = {'type':'ineq','fun':cons_periodicflow_max}
def cons_daily_reservoir(q): # xmin,q,X,a,delta
    return X+a-q-delta-xmin
con3 = {'type':'ineq','fun':cons_daily_reservoir}
def cons_end_reservoir(q): # xend,q,X,a,delta
    return X[-1]+a[-1]-q[-1]-delta[-1]-xend
con4 = {'type':'ineq','fun':cons_end_reservoir}
cons = [con1,con2,con3,con4]
# definition of the parameters
T = 3
q0 = np.zeros(T)
s0 = np.array([10,10,10])
qmin = [0,0,0]
qmax = [10,10,10]
delta = np.array([1,1,1])
a = np.array([2,2,2])
X = np.array([10,0,0])
qpmax = 50
qpmin = 10
# (xmin and xend must also be defined for con3 and con4 to evaluate)
b = [(qmin[t],qmax[t]) for t in range(T)]
# s0 is passed through args so that objective(q, s) receives it
sol = sco.minimize(objective,q0,args=(s0,),bounds=b,constraints=cons)
My only problem is that X depends on q, so I need to update X at each time step. Can I add this to the minimize function? If not, how else can I do it?
EDIT:
I can express X in the following way (please don't mind the t / t+1 issues):
Therefore the constraint with Xmin can be rewritten as:
Does this help to express the optimisation problem?
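If the recursion in the EDIT is the reservoir balance X[t+1] = X[t] + a[t] - q[t] - delta[t] (which is what cons_daily_reservoir suggests), one option is to recompute X from the current iterate q inside the constraint function itself, so scipy re-evaluates it on every call and no separate update of X is needed. A sketch under that assumption (the helper X_of_q is illustrative, with X0 = 10 taken from X[0] in the question):
import numpy as np
# assumed dynamics: X[t+1] = X[t] + a[t] - q[t] - delta[t], with X[0] given
def X_of_q(q, X0, a, delta):
    X = np.empty(len(q))
    X[0] = X0
    for t in range(1, len(q)):
        X[t] = X[t-1] + a[t-1] - q[t-1] - delta[t-1]
    return X
# the constraint rebuilds X from the current q on every call
def cons_daily_reservoir(q):
    X = X_of_q(q, 10, a, delta)
    return X + a - q - delta - xmin  # xmin as in the question
con3 = {'type':'ineq','fun':cons_daily_reservoir}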