In GAMS, I get the following error message if I increase a parameter above a certain threshold:
** An initial function value is too large (larger than 1.0E+10).
Scale the variables and/or equations or add bounds.
Is there a way I can get GAMS to tell me which value gets too large?
I am using an NLP model.
I am new to the field of optimization and I need help with the following optimization problem. I have tried to solve it with ordinary (non-optimization) code to make sure that I get the correct results. However, the results I got are different, and I am not sure whether my analysis is correct. This is a short description of the problem:
The objective function shown in the picture is used to find the optimal temperature of the insulating system that minimizes the total cost over a given horizon.
[This image provides the mathematical description of the objective function and the constraints](https://i.stack.imgur.com/yidrO.png)
The details of the problem are as follows:
1- Problem data:
A = 1.07×10^8
h = 1
T_ref = 87.5
N = 20
p1 = 0.001
p2 = 0.0037
This is the curve I want to obtain
2- Optimization variable:
u_t
3- Model type:
The model has a nonlinear cost function with nonlinear constraints, and it is solved using the nonlinear solver SNOPT.
4- The meaning of the symbols in the objective and constraint functions:
The optimization is performed over a prediction horizon of N years.
T_ref is the reference temperature.
X_DP represents the degree of polymerization in the kth year.
u_t represents the temperature of the insulating system in the kth year.
h is the time step (1 year) of the discrete-time model.
R is the ratio of the load loss at the rated load to the no-load loss.
E is the activation energy.
A is the pre-exponential constant.
beta is a linear coefficient representing the cost due to the decrement of the temperature.
I have developed source code in MATLAB; this code is used to check whether my analysis is correct.
I have tried to initialize the u_t values in their increasing and decreasing states so that I can obtain curves similar to the original one. [This is the curve I obtained](https://i.stack.imgur.com/KVv2q.png)
I have tried to simulate the problem using conventional coding, without optimization, and I got the figure shown above.
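For reference, the discrete-time update that the MATLAB code below implements can be written as follows (a sketch reconstructed from the code, not from the linked image; here R = 8.314 is the gas constant used in the Arrhenius term):

\frac{1}{X_{DP,k+1}} = \frac{1}{X_{DP,k}} + A \, e^{-E/(R\,(u_k + 273))} \cdot 24 \cdot 365 \cdot h, \qquad X_{DP,1} = 1000

with a cost of roughly

J \approx \sum_{k=1}^{N-1} \bigl[ -p_1 \,(u_k - T_{ref}) \bigr] + a \left( \frac{1}{X_{DP,N}} - \frac{1}{X_{DP,1}} \right)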
close all; clear all;
h = 1;           % time step of the discrete-time model (1 year)
N = 20;          % prediction horizon (years)
a = 250;         % weight on the change of 1/Xdp over the horizon
R = 8.314;       % gas constant used in the Arrhenius term
A = 1.07*10^8;   % pre-exponential constant
E = 111000;      % activation energy
Tref = 87.5;     % reference temperature
p1 = 0.0019;
p2 = 0.0037;
p3 = 0.0037;     % (not used below)
Utt  = linspace(80, 95, N);   % increasing temperature profile over the prediction horizon
Utt1 = linspace(95, 80, N);   % decreasing temperature profile over the prediction horizon
Ut1=zeros(1,N);
Ut2=zeros(1,N);
Xdp =zeros(N,N);
Xdp(1,1)=1000;
Xdp1 =zeros(N,N);
Xdp1(1,1)=1000;
for L = 1:N-1
    for k = 1:N-1
        % Degree-of-polymerization recursion (Arrhenius ageing) for the increasing profile:
        % 1/Xdp(k+1) = 1/Xdp(k) + A*exp(-E/(R*(T+273)))*24*365*h
        Xdq(k+1,L)  = (1/Xdp(k,L)) + A*exp((-1*E)/(R*(Utt(k)+273)))*24*365*h;
        Xdp(k+1,L)  = 1/(Xdq(k+1,L));
        Xdp(k,L+1)  = 1/(Xdq(k+1,L));
        % Same recursion for the decreasing profile:
        Xdq1(k+1,L) = (1/Xdp1(k,L)) + A*exp((-1*E)/(R*(Utt1(k)+273)))*24*365*h;
        Xdp1(k+1,L) = 1/(Xdq1(k+1,L));
        Xdp1(k,L+1) = 1/(Xdq1(k+1,L));
    end
end
% Temperature-dependent cost terms for the two profiles
for j = 1:N-1
    Ut1(j) = -p1*(Utt(j)  - Tref);
    Ut2(j) = -p2*(Utt1(j) - Tref);
end
sum00 = sum(Ut1);
sum01 = sum(Ut2);
% Change of 1/Xdp between the first and last column, weighted by a
X1 = 1./Xdp(:,1);
Xf = 1./Xdp(:,N);
Total = table(X1, Xf);
Tdiff = a*(Total.Xf - Total.X1);
X22 = 1./Xdp1(:,1);
X2f = 1./Xdp1(:,N);
Total22 = table(X22, X2f);
Tdiff22 = a*(Total22.X2f - Total22.X22);
% Objective values for the increasing and decreasing temperature profiles
obj  = sum00 + Tdiff;
ob1  = min(obj);
obj2 = sum01 + Tdiff22;
ob2  = min(obj2);
plot(Utt, obj, '-o');
hold on
plot(Utt1, obj2, '-o')
I tried to use GAMS to find the flow of material across a network of nodes. I defined
set edge(i,n,nn);
positive variable y(i,n,nn);
y.up(i,n,nn)$( not edge(i,n,nn)) = 0;
My intention is to define a 3D variable for the flux of material i from node n to node nn, and then use the set edge, which specifies which arcs of the complete graph can carry flow.
This apparently works, but when I tried to save y into a GDX file, I got lots and lots of zeros. I only need the subset of y where edge(i,n,nn) is true.
How can I subset y when saving the GDX file?
Thanks!
You could store things in a reduced parameter:
Parameter yLevel(i,n,nn);
yLevel(i,n,nn)$edge(i,n,nn) = y.l(i,n,nn);
execute_unload 'result.gdx' yLevel;
Just a note: do you really need the complete y(i,n,nn)? This could be huge depending on the size of the indexing sets. Or could you alternatively modify your model to just use y(i,n,nn)$edge(i,n,nn)?
I am using TensorFlow's TensorBoard to monitor the training progress of a model. I am getting a bit confused because the two latest points of the validation loss seem to move in different directions:
Time=13:30 Smoothed=18.33 Value=15.41
Time=13:45 Smoothed=17.76 Value=16.92
In this case, is the validation loss increasing or decreasing? Thanks!
As I cannot put figures in the comments, have a look at this graph.
If you look at the falling slope between x = 50 and x = 100, you will see that locally the raw values increase at some points (usually after downward spikes). So locally you could conclude that your function values are increasing. But on a larger scale you will see that the function values are decreasing. The smoothing makes the interpretation easier, but it does not return exact values.
Coming back to your example: the smoothed value tells you that the overall trend is decreasing, but it does not give you accurate loss values.
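For intuition, TensorBoard's smoothing is essentially an exponential moving average of the raw values; here is a minimal sketch (the weight 0.6 and the starting smoothed value 20.0 are only illustrative, not taken from your run):

def smooth(values, weight=0.6, last=None):
    """Exponential moving average, similar to TensorBoard's smoothing slider."""
    smoothed = []
    if last is None:
        last = values[0]
    for v in values:
        last = last * weight + (1 - weight) * v
        smoothed.append(last)
    return smoothed

# Starting from an already-high smoothed state, the smoothed curve keeps
# falling even though the raw values go up:
print(smooth([15.41, 16.92], weight=0.6, last=20.0))
# -> [18.164, 17.6664]  (decreasing, while the raw values increase)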
I am attempting to build a model which has two phases.
The first takes an input image and passes it through a conv-deconv network. The resulting Tensor has entries corresponding to pixels in a desired output image (same size as the input image).
To calculate the final output image I want to take the value generated at each pixel location from the first phase and use it as an additional input to a reduction function that is applied over the entire input image. This second step has no trainable variables, but it does have computation/memory costs that grow exponentially with the size of the input (each output pixel is a function of all input pixels).
I'm currently using tf.map_fn to calculate the output image: I map the per-pixel output calculation onto the results from the first phase. My hope was that TensorFlow would allocate the memory to store the intermediate tensors needed for each pixel calculation and then free that memory before moving on to the next pixel calculation. Instead it seems to never free the intermediate calculations, causing OOM errors.
Is there some way to tell TensorFlow (either explicitly or implicitly) that it should free the memory allocated to hold the data of a tensor that is no longer needed in the calculation?
TensorFlow deallocates memory for the tensor as soon as the tensor is no longer needed for any future calculations. You can verify this by looking at memory deallocation messages as shown in this notebook.
It's possible you are running out of memory because TensorFlow executes nodes in a memory inefficient order.
As an example, consider the following computation:
import tensorflow as tf

n = 10   # number of chained matmuls
k = 2000
a = tf.random_uniform(shape=(k, k))
for i in range(n):
    a = tf.matmul(a, tf.random_uniform(shape=(k, k)))
The order in which it is evaluated is shown below.
All the circle (tf.random_uniform) nodes are evaluated first, followed by the square (tf.matmul) nodes. This has an O(n) memory requirement, compared to O(1) for the optimal order.
You can use control dependencies to force a specific execution order, i.e., using a helper function like the one below:
import tensorflow.contrib.graph_editor as ge

def run_after(a_tensor, b_tensor):
    """Force the op producing a_tensor to run after the op producing b_tensor."""
    ge.reroute.add_control_inputs(a_tensor.op, [b_tensor.op])
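For example, here is one way it could be applied to the matmul chain above (a sketch reusing the run_after helper; the idea is that each new random matrix is only created after the previous product exists, so only one of them is live at a time):

n = 10
k = 2000
a = tf.random_uniform(shape=(k, k))
for i in range(n):
    b = tf.random_uniform(shape=(k, k))
    run_after(b, a)   # generate the next random matrix only after the current product
    a = tf.matmul(a, b)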
When I fitted a histogram with a Gaussian, the errors on the mean and sigma seem to be fine. You can look at it here.
But when I first normalized the histogram and then fitted it with a Gaussian, the parameter values are exactly the same as in the previous case, but the errors on the mean and sigma are almost equal to the parameter values themselves, or greater.
One reason for this may be that the fit takes the error as 1/sqrt(n), and after normalizing, n decreased and hence the error increased.
Please let me know what is happening and how I can fix it.
You probably want to call
hist->Sumw2()
before rescaling the histogram. Otherwise the uncertainties on all bin contents are just the square roots of the bin contents (which is a huge relative error for bin contents smaller than 1, as is the case after rescaling). Sumw2 makes the histogram store the sum of the squared weights in each bin, and not only the bin contents (i.e. the sum of the weights in each bin).
See also the documentation of Sumw2() for further details (and also the explanation of weights on the top of the TH1 documentation page).
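A minimal PyROOT sketch of the order of operations (assuming an unweighted histogram; the histogram name and fill are only illustrative):

import ROOT

h = ROOT.TH1F("h", "example", 100, -5, 5)
h.FillRandom("gaus", 10000)

h.Sumw2()                    # store the sum of squared weights per bin *before* rescaling
h.Scale(1.0 / h.Integral())  # normalize the histogram
h.Fit("gaus")                # the errors on mean and sigma now stay sensible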