I’m trying to use the Optim package in Julia to optimize an objective function with 19 variables, and the following inequality constraints:
0 <= x[1]/3 - x[2] <= 1/3
5 <= 1/x[3] + 1/x[4] <= 6
I’m trying to use either IPNewton() or NewtonTrustRegion(), so I need to supply both a Jacobian and a Hessian for the constraints. My question is: what is the correct way to write the Jacobian and Hessian functions?
I believe the constraint function would be
function con_c!(c,x)
c[1] = x[1]/3 - x[2]
c[2] = 1/x[3] + 1/x[4]
c
end
Would the Jacobian function be
function con_jacobian!(J,x)
#first constraint:
J[1,1] = 1/3
J[1,2] = -1.0
#second constraint:
J[2,3] = -1/(x[3])^2
J[2,4] = -1/(x[4])^2
J
end
? (I assume all other indices of J are automatically set to zero?)
My main question: What would the Hessian function be? This is where I’m most confused. My understanding was that we take Hessians of scalar-valued functions. So do we have to enter multiple Hessians, one for each constraint function (2 in my case)?
I’ve looked at the multiple constraints example given here: https://github.com/JuliaNLSolvers/ConstrainedOptim.jl, but I’m still confused. In the example, it looks like they are adding together two Hessian matrices…? Would greatly appreciate some help.
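Based on that example, my current understanding is that there is a single constraint-Hessian function with the signature con_h!(h, x, λ), and that each constraint's Hessian is added into h weighted by its multiplier λ[k]. For my two constraints I would guess something like the following (the first constraint is linear, so only the second contributes), but I'm not sure this is right:
function con_h!(h, x, λ)
    # c1 = x[1]/3 - x[2] is linear, so its Hessian is zero and adds nothing
    # c2 = 1/x[3] + 1/x[4]: d²c2/dx[3]² = 2/x[3]^3 and d²c2/dx[4]² = 2/x[4]^3
    h[3,3] += λ[2] * 2/x[3]^3
    h[4,4] += λ[2] * 2/x[4]^3
    h
end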
Full disclosure: I posted this question on Discourse two days ago but didn't receive a single response, which is why I'm posting it here.
I am trying to formulate and solve an optimization problem based on an article. The authors introduced two decision variables: the power of station i at time t, P_i,t, and a binary variable X_i,n which is 1 if vehicle n is assigned to station i.
They also introduced some other variables, called utility variables. For instance, the energy delivered from station i up to time t for vehicle n, E_i,t,n, is calculated from the primary decision variables and a few fixed parameters.
My question is: should I define the utility variables as Gekko variables? If yes, which type is more appropriate?
I = 4 # number of stations
T = 24 # hours of simulation
N = 5 # number of vehicles
p = m.Array(m.Var,(I,T),lb=0,ub=params.ev.max_power)
x = m.Array(m.Var,(I,N),lb=0,ub=1,integer=True)
Should I define E as follows to solve these equations, as an example? This introduces extra variables that are not primary decision variables and are calculated from terms that depend on the primary decision variables.
E = m.Array(m.Var,(I,T,N),lb=0)
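# Big-M linking constraints: when x[i][n] == 1 they force E[i][t][n] to equal the delivered energy; when x[i][n] == 0 they force it to 0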
for i in range(I):
for n in range(N):
for t in range(T):
m.Equation(E[i][t][n] >= np.sum(0.25 * availability[n, :t] * p[i,:t]) - (M * (1 - x[i][n])))
m.Equation(E[i][t][n] <= np.sum(0.25 * availability[n, :t] * p[i,:t]) + (M * (1 - x[i][n])))
m.Equation(E[i][t][n] <= M * x[i][n])
m.Equation(E[i][t][n] >= -M * x[i][n])
All of those variable definitions and equations look correct. Here are a few suggestions:
There is no availability[] variable defined yet. If availability is a function of other decision variables, then it is generally more efficient to define it with an m.Intermediate() definition.
As the total number of decision variables increases, there is often a large increase in computational time. I recommend starting with a small problem initially and then scaling up to the larger-sized problem.
Try the Gekko m.sum() instead of sum or np.sum() for potentially more efficient calculations. Using m.sum() does increase the model compile time but generally decreases the optimization solve time, so it is a trade-off.
Hello fellows, I am learning Julia and integer programming, but I am stuck at one point.
How do I model a "then" (implication) condition in Julia/JuMP for integer programming?
I am stuck here:
# Define the variables of the model
@variable(mo, x[1:N,1:S], Bin)
@variable(mo, a[1:S] >= 0)
# Assignment constraint
@constraint(mo, [i=1:N], sum(x[i,j] for j=1:S) == 1)
@constraint(mo, PLEASE HELP )
In cases like this you usually need to use Big-M constraints
So this will be:
a_ij >= s_i^2 - M*(1-x_ij)
where M is a "big enough" number. This means that if x_ij == 0 the inequality will always be true (and hence kind of turned-off). On the other hand when x_ij == 1 the M-part will be zeroed and the equation will hold.
In JuMP terms the code will look like this:
const M = 10_000
@constraint(mo, [i=1:N, j=1:S], a[i, j] >= s[i]^2 - M*(1 - x[i, j]))
That said, if s[i] is an external parameter rather than a model variable, you could simply use x[i,j] <= a[j]/s[i]^2, as proposed by @DanGetz. When s[i] is itself a variable, however, you really want to avoid dividing or multiplying variables by each other, so this Big-M approach is more general across use cases.
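For completeness, a minimal self-contained sketch of the whole model (the sizes, the s data vector, and the HiGHS solver are placeholders I made up, and a is indexed by both i and j as in the snippet above):
using JuMP, HiGHS                # HiGHS is only an example solver

N, S = 4, 3                      # placeholder sizes
s = [1.0, 2.0, 3.0, 4.0]         # placeholder data, one value per i
M = 10_000                       # "big enough" constant for the Big-M trick

mo = Model(HiGHS.Optimizer)
@variable(mo, x[1:N, 1:S], Bin)
@variable(mo, a[1:N, 1:S] >= 0)

# each i is assigned to exactly one j
@constraint(mo, [i=1:N], sum(x[i, j] for j=1:S) == 1)

# the bound a[i,j] >= s[i]^2 is only active when x[i,j] == 1
@constraint(mo, [i=1:N, j=1:S], a[i, j] >= s[i]^2 - M * (1 - x[i, j]))

optimize!(mo)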
I need to define a constraint as follows:
mdl.add_constraints(p_pg[plan, segment] == np.exp(u_pg[plan, segment]) for plan in range(1, p+1) for segment in range(1, g+1))
In this constraint, both p_pg and u_pg are variables and are defined as mdl.continuous_var_dict. However, I get the following error:
loop of ufunc does not support argument 0 of type Var which has no callable exp method
Can anyone help with how to define this constraint?
exp is not linear, so you could either try a piecewise linear approximation or use Constraint Programming within CPLEX.
See this example from Easy optimization with Python:
from docplex.cp.model import CpoModel
mdl = CpoModel(name='buses')
nbbus40 = mdl.integer_var(0,1000,name='nbBus40')
nbbus30 = mdl.integer_var(0,1000,name='nbBus30')
mdl.add(nbbus40*40 + nbbus30*30 >= 300)
#non linear objective
mdl.minimize(mdl.exponent(nbbus40)*500 + mdl.exponent(nbbus30)*400)
msol=mdl.solve()
print(msol[nbbus40]," buses 40 seats")
print(msol[nbbus30]," buses 30 seats")
I wrote a Gurobi optimization code, but because of some issues I need to convert it to SciPy. I still have difficulties converting it. Here is the part of the code related to Gurobi:
m = Model()
#x is charging, discharging variable
x = m.addVars(n,lb=-1.5,ub=1.5,vtype=GRB.INTEGER, name="x")
#Y is SOC variable
Y = m.addVars(n+1,lb=0,ub=100,vtype=GRB.CONTINUOUS, name="Y")
# Add constraint: SOC[start]=50, initial SOC
m.addConstr(Y[0]==initialsoc,name='c1')
#Final targeted SOC
m.addConstr(Y[n]>=65,name='c2')
m.addConstrs((Y[i+1]-Y[i] == 3.75*x[i] for i in range(n)), name='c0')
#Objective function. 6 comes from capacity of inverter.
obj1 = sum((load[i+1] - 6*x[i]) * (load[i+1] - 6*x[i]) for i in range(n))
m.setObjective(obj1,GRB.MINIMIZE)
m.optimize()
My x variable can only take the values -1, 0, or 1. The other constraint is on Y, where at each step Y[i+1] - Y[i] equals 3.75*x[i].
Is it possible to convert this code to SciPy? Or do you recommend any other libraries?
I'm trying to solve a model using Julia/JuMP. The following is the outline of the model I created. Here, z[i,j] is a binary variable and d[i,j] is the cost that applies when z[i,j] = 1.
My constraint creates an infinite number of constraints, and hence I need to use a separation algorithm to solve it.
First, I solve the model without any constraint, so all the variables z[i,j] and d[i,j] come out as zero.
Then I include the separation algorithm (which is given inside the if condition). Even though I include if z_value == 0, the z values are not being passed to it.
Am I missing something in the format of this model?
m = Model(solver=GurobiSolver())
@variable(m, z[N,N], Bin)
@variable(m, d[N,N]>=0)
@objective(m, Min, sum{ d[i,j]*z[i,j], i in N, j in N} )
z_value = getvalue(z)
d_value = getvalue(d)
if z_value == 0
statement
elseif z_value == 1
statement
end
@constraint(m, sum{z[i,j], i in N, j in N}>=2)
solve(m)
println("Final solution: [ $(getvalue(z)), $(getvalue(d)) ]")
You're multiplying z by d, which are both variables, so your model is non-linear.
Are the costs d[i,j] constants, or really variables of the problem?
If they are variables, you need to use a non-linear solver.
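If the d[i,j] do turn out to be data rather than variables, the usual separation loop re-solves after each violated constraint is added, and the variable values are only read after a solve. A rough sketch in current JuMP syntax (find_violated_cut is a placeholder for your separation routine, and HiGHS only stands in for Gurobi):
using JuMP, HiGHS

N = 1:5                            # placeholder index set
d = rand(length(N), length(N))     # placeholder costs, assumed to be data

m = Model(HiGHS.Optimizer)
@variable(m, z[N, N], Bin)
@objective(m, Min, sum(d[i, j] * z[i, j] for i in N, j in N))

while true
    optimize!(m)
    z_val = value.(z)              # values exist only after a solve
    cut = find_violated_cut(z_val) # your separation routine (placeholder)
    cut === nothing && break       # nothing violated: done
    # add the violated constraint and re-solve; mirrors the >= 2 cut in the question
    @constraint(m, sum(z[i, j] for (i, j) in cut) >= 2)
end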