Related
I am new to the field of optimization and I need help with the following optimization problem. I have tried to solve it with plain (non-optimization) code to make sure that I get the correct results. However, the results I got are different, and I am not sure whether my analysis is correct. This is a short description of the problem:
The objective function shown in the picture is used to find the optimal temperature of the insulating system that minimizes the total cost over a given horizon.
[This image provides the mathematical description of the objective function and the constraints](https://i.stack.imgur.com/yidrO.png)
The data of the problem are as follows:
1- Problem data:
A=1.07×10^8
h=1
T_ref=87.5
N=20
p1=0.001
p2=0.0037
This is the curve I want to obtain
2- Optimization variable:
u_t
3- Model type:
The model is a nonlinear cost function with nonlinear constraints, and it is solved using the nonlinear solver SNOPT.
4- The meaning of the symbols in the objective and constraint functions:
The optimization is performed over a prediction horizon of N years.
T_ref is the reference temperature.
X_DP represents the degree of polymerization in the kth year.
u_k represents the temperature of the insulating system in the kth year.
h is the time step (1 year) of the discrete-time model.
R is the ratio of the load loss at the rated load to the no-load loss.
E is the activation energy.
A is the pre-exponential constant.
beta is a linear coefficient representing the cost due to the decrement of the temperature.
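For reference, the discrete-time model and the cost terms as I implement them in the MATLAB code below are (this is my reading of the model; the exact formulation is in the linked image):
1/X_DP(k+1) = 1/X_DP(k) + A⋅exp(−E/(R⋅(u_k + 273)))⋅24⋅365⋅h, with X_DP(1) = 1000
J = Σ_{k=1}^{N−1} −p1⋅(u_k − T_ref) + β⋅(1/X_DP(N) − 1/X_DP(1))
where β corresponds to a = 250 in the code.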
I have developed source code in MATLAB; this code is used to check whether my analysis is correct.
I have tried to initialize the u_t values in increasing and decreasing order so that I can obtain curves similar to the original one. [This is the curve I obtained](https://i.stack.imgur.com/KVv2q.png)
I have simulated the problem using conventional coding, without optimization, and obtained the figure shown above.
close all; clear all;
h=1;
N=20;
a=250;
R=8.314;
A=1.07*10^8;
E=111000;
Tref=87.5;
p1=0.0019;
p2=0.0037;
p3=0.0037;
Utt = linspace(80, 95, N);    % increasing temperature profile (temperature increase over the prediction horizon)
Utt1 = linspace(95, 80, N);   % decreasing temperature profile (temperature decrease over the prediction horizon)
Ut1 = zeros(1,N);
Ut2 = zeros(1,N);
Xdp  = zeros(N,N);
Xdp(1,1)  = 1000;    % initial degree of polymerization
Xdp1 = zeros(N,N);
Xdp1(1,1) = 1000;
for L=1:N-1
    for k=1:N-1
        %vt(k+L)=Ut(k-L+1);
        % Degradation model: 1/DP(k+1) = 1/DP(k) + A*exp(-E/(R*(T+273)))*24*365*h
        Xdq(k+1,L)  = (1/Xdp(k,L)) + A*exp((-1*E)/(R*(Utt(k)+273)))*24*365*h;
        Xdp(k+1,L)  = 1/(Xdq(k+1,L));
        Xdp(k,L+1)  = 1/(Xdq(k+1,L));
        Xdq1(k+1,L) = (1/Xdp1(k,L)) + A*exp((-1*E)/(R*(Utt1(k)+273)))*24*365*h;
        Xdp1(k+1,L) = 1/(Xdq1(k+1,L));
        Xdp1(k,L+1) = 1/(Xdq1(k+1,L));
    end
end
% Temperature-deviation cost for each year of the horizon
for j = 1:N-1
    Ut1(j) = -p1*(Utt(j)-Tref);
    Ut2(j) = -p2*(Utt1(j)-Tref);
end
sum00 = sum(Ut1);
sum01 = sum(Ut2);
X1 = 1./Xdp(:,1);                          % 1/DP at the start of the horizon
Xf = 1./Xdp(:,N);                          % 1/DP at the end of the horizon
Total = table(X1,Xf);
Tdiff = a*(Total.Xf-Total.X1);             % terminal ageing cost, increasing profile
X22 = 1./Xdp1(:,1);
X2f = 1./Xdp1(:,N);
Total22 = table(X22,X2f);
Tdiff22 = a*(Total22.X2f-Total22.X22);     % terminal ageing cost, decreasing profile
obj  = sum00+(Tdiff);
ob1  = min(obj);
obj2 = sum01+Tdiff22;
ob2  = min(obj2);
plot(Utt,obj,'-o');
hold on
plot(Utt1,obj)
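For completeness, here is a minimal sketch of how I would hand the same objective to a generic nonlinear solver. I use scipy.optimize.minimize only as a stand-in for SNOPT; the model equations are my reading of the MATLAB script above, and the temperature bounds 80-95 are an assumption taken from the range of Utt:
import numpy as np
from scipy.optimize import minimize

# Problem data copied from the MATLAB script above
A, E, R = 1.07e8, 111000.0, 8.314
h, N = 1.0, 20
Tref, p1, beta = 87.5, 0.0019, 250.0
DP0 = 1000.0                                   # initial degree of polymerization

def inverse_dp(u):
    """Propagate 1/X_DP over the horizon for a temperature profile u of length N."""
    inv = np.empty(N)
    inv[0] = 1.0 / DP0
    for k in range(N - 1):
        inv[k + 1] = inv[k] + A * np.exp(-E / (R * (u[k] + 273.0))) * 24 * 365 * h
    return inv

def objective(u):
    inv = inverse_dp(u)
    running = np.sum(-p1 * (u[:-1] - Tref))    # temperature-deviation cost
    terminal = beta * (inv[-1] - inv[0])       # ageing (loss-of-life) cost
    return running + terminal

u0 = np.full(N, Tref)                          # initial guess: stay at the reference
bounds = [(80.0, 95.0)] * N                    # assumed temperature limits
res = minimize(objective, u0, bounds=bounds, method='SLSQP')
print(res.x)
print(res.fun)
Any constraints on X_DP itself that appear in the picture would have to be added as extra (nonlinear) constraints in the same way.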
I am trying to use MILP (Mixed Integer Linear Programming) to solve the unit commitment problem (an optimization problem that tries to find the best scheduling of generators).
Because the relationship between generator power and cost is a quadratic function, I use a piecewise-linear function to convert power to cost.
I modified the answer on this page:
unit commitment problem using piecewise-linear approximation become MIQP
The simple program structure is like this:
from docplex.mp.model import Model
mdl = Model(name='buses')
nbbus40 = mdl.integer_var(name='nbBus40')
nbbus30 = mdl.integer_var(name='nbBus30')
mdl.add_constraint(nbbus40*40 + nbbus30*30 >= 300, 'kids')
#after 4 buses, additional buses of a given size are cheaper
f1=mdl.piecewise(0, [(0,0),(4,2000),(10,4400)], 0.8)
f2=mdl.piecewise(0, [(0,0),(4,1600),(10,3520)], 0.8)
cost1= f1(nbbus40)
cost2 = f2(nbbus30)
mdl.minimize(cost1 + cost2)
mdl.solve()
mdl.report()
for v in mdl.iter_integer_vars():
    print(v, " = ", v.solution_value)
which gives
* model buses solved with
objective = 3520.000
nbBus40 = 0
nbBus30 = 10.0
The answer works perfectly, but I cannot see how to apply it to my example.
I used a piecewise function to formulate the piecewise-linear relationship between power and cost, obtained a new object (cost1), and then minimized it.
The following is my actual code (simplified):
(min1,miny1), (pw1_1,pw1_1y),(pw1_2,pw1_2y), (max1,maxy1) are the breakpoints on the power-cost curve.
pwl_func_1phase = ucpm.piecewise(
0,
[(0,0),(min1,miny1),
(pw1_1,pw1_1y),
(pw1_2,pw1_2y),
(max1,maxy1)
],
0
)
# df_decision_vars_spinning is a DataFrame that stores the optimization variables
df_decision_vars_spinning.at[
(units,period),
'variable_cost'
] = pwl_func_1phase(
df_decision_vars_spinning.at[
(units,period),
'production'
]
)
total_variable_cost = ucpm.sum(df_decision_vars_spinning.variable_cost)
ucpm.minimize(total_variable_cost)
I don't know why this optimization problem can't be solved. Here is my complete code:
https://colab.research.google.com/drive/1JSKfOf0Vzo3E3FywsxcDdOz4sAwCgOHd?usp=sharing
With an unlimited edition of CPLEX, your model solves (though very slowly). Here are two ideas to better control what happens in solve():
use solve(log_output=True) to print the log: you'll see the gap going down
set a mip gap: setting mip gap to 5% stops the solve at 36s
ucpm.parameters.mip.tolerances.mipgap = 0.05
ucpm.solve(log_output=True)
Not an answer, but to illustrate my comment.
Let's say we have as the cost curve
cost = α + β⋅power^2
Furthermore, we are minimizing cost.
We can approximate using a few linear curves. Here I have drawn a few:
Let's say each linear curve has the form
cost = a(i) + b(i)⋅power
for i=1,...,n (n=number of linear curves).
It is easy to see that if we write:
min cost
cost ≥ a(i) + b(i)⋅power ∀i
we have a good approximation for the quadratic cost curve. This is exactly as I said in the comment.
No binary variables were used here.
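To make the structure concrete, here is a minimal docplex sketch of that formulation; the tangent-line coefficients a(i), b(i) and the demand constraint are made up for illustration only:
from docplex.mp.model import Model

mdl = Model(name='pwl_cost')
power = mdl.continuous_var(lb=0, ub=100, name='power')
cost = mdl.continuous_var(name='cost')

# made-up tangent lines of the convex curve cost = 0.02*power^2,
# taken at power = 25, 50, 75: cost >= a(i) + b(i)*power
lines = [(-12.5, 1.0), (-50.0, 2.0), (-112.5, 3.0)]
for i, (a_i, b_i) in enumerate(lines):
    mdl.add_constraint(cost >= a_i + b_i * power, 'cut_%d' % i)

mdl.add_constraint(power >= 30, 'demand')   # something has to push power up
mdl.minimize(cost)
mdl.solve()
print(power.solution_value, cost.solution_value)
Because cost is minimized, the optimum sits on the upper envelope of the tangent lines, so no binary variables are needed; note that this trick only works when the original cost curve is convex.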
I am trying to run a recurrent neural network where the state update function for each neuron is the following
z = g*y
given that
g = (x<x_max & x>x_max-e) | (x>-x_max & x<-x_max+e)
Note that all the variables here are just scalars.
The variable x is defined in a way that it will always update continually, so that g will always be a pulse as shown in this picture. That is, g won't be 1 for a single update; it will be 1 for several consecutive updates.
Can any of these packages implement an automatic gradient computation given this transfer function?
The gradient can't be computed.
g, as you have defined it, is a binary variable, so its gradient can't be computed. Even the waveform you have plotted has gradient 0 everywhere except at two points (where it is infinite; the function is discontinuous).
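To illustrate, here is a quick sketch with JAX (the thresholds are arbitrary): automatic differentiation goes through, but the gradient it returns is 0 everywhere the function is defined:
import jax
import jax.numpy as jnp

x_max, e = 1.0, 0.2                       # arbitrary thresholds

def g(x):
    # pulse indicator: 1.0 inside the two threshold bands, 0.0 elsewhere
    inside = ((x < x_max) & (x > x_max - e)) | ((x > -x_max) & (x < -x_max + e))
    return jnp.where(inside, 1.0, 0.0)

print(jax.grad(g)(0.95))                  # 0.0 -- flat inside the pulse
print(jax.grad(g)(0.5))                   # 0.0 -- flat outside the pulse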
I have the following equation:
where M is a [Dx3] matrix and V is a [DxD] matrix. Each of these forms a [3x3] block in a larger [3Kx3K] matrix, indexed by i, j. For now, I'm wondering whether anyone has come across doing a reduce-sum of this form in TensorFlow - I'm still getting used to the API structure!
I have browsed the Eigen tutorial at
https://eigen.tuxfamily.org/dox-devel/group__TutorialMatrixArithmetic.html
It says:
"Note: for BLAS users worried about performance, expressions such as c.noalias() -= 2 * a.adjoint() * b; are fully optimized and trigger a single gemm-like function call."
But how about a computation like H.transpose() * H? Because its result is a symmetric matrix, it should only need half the time of a normal A*B; yet in my test, H.transpose() * H takes the same time as H.transpose() * B. Does Eigen have a special optimization for this situation? OpenCV, for example, has a similar function.
I know the symmetric optimization will break vectorization; I just want to know whether Eigen has a solution that provides both the symmetric optimization and vectorization.
You are right, you need to tell Eigen that the result is symmetric this way:
Eigen::MatrixXd H = Eigen::MatrixXd::Random(m,n);
Eigen::MatrixXd Z = Eigen::MatrixXd::Zero(n,n);
Z.template selfadjointView<Eigen::Lower>().rankUpdate(H.transpose());
The last line computes Z += H.transpose() * H within the lower triangular part. The upper part is left unchanged. If you want the full matrix, then copy the lower part to the upper one:
Z.template triangularView<Eigen::Upper>() = Z.transpose();
This rankUpdate routine is fully vectorized and comparable to the BLAS equivalent. For small matrices, it is better to perform the full product.
See also the respective doc.