I need some assistance creating a loop for demand and supply constraints in GAMS - optimization

I am a newbie to GAMS. Could you do me a favour, please?
I am trying to write two constraints for minimizing the cost of each factory (i) in each month (k):
Constraint 1: When the total crop (tonnes) from the selected farmers (j) cannot meet the factory's demand, they still have to send everything to the factory. For example, if the factory needs 5 tonnes of crops but the crops from farmers 8, 13 and 25 add up to only 3 tonnes, these three farmers should send the 3 tonnes to the main factory i.
Constraint 2: When the total crop (tonnes) from the selected farmers (j) exceeds the factory's demand, the selected farmers can still send all their crops to the factory. For example, if the factory needs 5 tonnes of crops but farmer 23 alone has 6 tonnes, this farmer should send all 6 tonnes to the main factory i.
I tried to use an if-then statement, but it is not allowed there. Maybe a while loop would work. Any suggestion is welcome. This is my idea, but I do not know how to code it correctly:
con1(i,k).. sum(j, a(j,k)*b(j)*x(i,j,k))$(sum(j, a(j,k)*b(j)*x(i,j,k)) < d(i,k)) =e= d(i,k);
con2(i,k).. sum(j, a(j,k)*b(j)*x(i,j,k))$(sum(j, a(j,k)*b(j)*x(i,j,k)) > d(i,k)) =g= d(i,k);
a(j,k) is the planned crop yield of each farmer in month (k); yields are added together via the binary variable x(i,j,k) to meet the factory demand given in d(i,k).
c(j,k) is the number of areas of each farmer.
h(i,j) is the distance between each farmer and the factory.
m(i,j) is the transportation cost.
My objective is to minimize cost. I am afraid that con1 will not work.
I look forward to hearing from you. Thanks.
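For what it is worth, the two cases above describe the same rule: once farmers are selected, they ship their entire crop, whether the total falls short of or exceeds demand. Here is a minimal pure-Python sketch of that case logic (the per-farmer split in case 1 is a hypothetical example; this only illustrates the intended behaviour, not GAMS syntax):

```python
def tonnes_shipped(selected_crops, demand):
    # Per both constraints above, the selected farmers always send
    # their full crop, whether the total is below or above demand.
    return sum(selected_crops)

# Case 1: farmers 8, 13, 25 together hold 3 t against a 5 t demand
print(tonnes_shipped([1.0, 1.0, 1.0], 5))  # 3.0
# Case 2: farmer 23 alone holds 6 t against the same 5 t demand
print(tonnes_shipped([6.0], 5))            # 6.0
```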


How to model a time-dependent vehicle routing problem with time windows in OptaPy?

I am looking to model a vehicle routing problem with time windows in OptaPy. Specifically, the problem involves traffic enforcement on public roads, so parking wardens need to survey carparks and road segments and visit them more than once during a 24-hour period.
I refer to the answer in the following question as foundation to develop my problem:
Is it possible to create a VRP solution using NetworkX?
I have a few questions regarding the modelling:
How does OptaPy model time-dependency, i.e. a different edge weight representing travel duration depending on the time of day?
How do I model the demand points if each point needs to be visited X times?
If a demand point is to be visited X times, how can I enforce a time window gap such that the duration between visits is at least a fixed duration (e.g. 1 hour)?
OptaPy models time-dependency the way you model time-dependency. That is, whatever you use to model it (be it an edge, a list, a matrix, a class, etc.), OptaPy can use in its constraints.
If X is known in advance, create X copies of each demand point and put them in the #problem_fact_collection_property field. If X is not known in advance, consider using real-time planning (https://www.optapy.org/docs/latest/repeated-planning/repeated-planning.html#realTimePlanning).
This depends on how you implement your time-dependency. It would be easier if OptaPy supported the new VariableListener API for List Variable (as well as the built-in list shadow variables) that OptaPlanner has. Until then, you need to do the calculation in a function. Make Edge a #planning_entity and give it an inverse relation shadow variable (https://www.optapy.org/docs/latest/shadow-variable/shadow-variable.html#bidirectionalVariable). Add a method get_arrival_time(edge) to Vehicle that gets the estimated time of visit for a given Edge in its visited_edges_list.
from datetime import timedelta

def less_than_one_hour_between(visit_1: Edge, visit_2: Edge):
    visit_1_arrival_time = visit_1.vehicle.get_arrival_time(visit_1)
    visit_2_arrival_time = visit_2.vehicle.get_arrival_time(visit_2)
    duration = visit_2_arrival_time - visit_1_arrival_time
    return timedelta(hours=0) <= duration <= timedelta(hours=1)

def one_hour_between_consecutive_visits(constraint_factory):
    return (
        constraint_factory.for_each(Edge)
        .join(Edge,
              Joiners.equal(lambda edge: edge.graph_from_node),
              Joiners.equal(lambda edge: edge.graph_to_node))
        .filter(lambda a, b: a is not b and less_than_one_hour_between(a, b))
        .penalize('less than 1 hour between visits', HardSoftScore.ONE_HARD)
    )
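The get_arrival_time method mentioned above might be sketched like this. The attribute and method names (depart_time, visited_edges_list, travel_duration) are assumptions for illustration, not OptaPy API:

```python
from datetime import datetime, timedelta

class Vehicle:
    def __init__(self, depart_time, visited_edges_list):
        self.depart_time = depart_time
        self.visited_edges_list = visited_edges_list

    def get_arrival_time(self, edge):
        # Walk the ordered edge list, adding each edge's (possibly
        # time-dependent) travel duration until we reach the target edge.
        time = self.depart_time
        for e in self.visited_edges_list:
            time += e.travel_duration(time)
            if e is edge:
                return time
        raise ValueError("edge is not on this vehicle's route")
```

Because travel_duration receives the current clock time, a time-dependent duration (different speeds at different times of day) drops in naturally.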

number of instruments GMM estimator in R

I have one question, maybe a very simple one. In Stata, a dynamic panel data model (GMM estimator) reports a "number of instruments". In R, you receive the AR test and the Sargan test, but the number of instruments is not displayed. How do I obtain the number of instruments in R?
Thank you for helping
If you used all the 99 lags available for the instrumental variable, the number of instruments (for each instrumental variable) will be:
0.5 × (t − 1) × (t − 2) + the number of time dummies you used
(t is the time span of your data).
If you used fewer than all the available lags, I don't know how to calculate the number of instruments. If someone knows, please tell me!
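As a quick sanity check of the formula, here is the count it gives for a hypothetical panel with t = 10 periods and 2 time dummies:

```python
def n_instruments(t, n_time_dummies):
    # 0.5 * (t - 1) * (t - 2) instruments per instrumented variable,
    # plus the time dummies (all available lags used)
    return 0.5 * (t - 1) * (t - 2) + n_time_dummies

print(n_instruments(10, 2))  # 0.5 * 9 * 8 + 2 = 38.0
```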

How to model measures that depend on the underlying substance

I'm using the Aconcagua measurement library in Pharo. I've had a lot of success using it to model things like days and kilometers, but I have encountered an interesting problem where converting between units requires information about the underlying substance being measured. The formula for expressing the amount of a substance in air in parts per million, given the amount in milligrams per cubic meter, is:
ppm = (mg/m3 * 24.45) / mw; where mw is the molecular weight of the material.
I'm envisioning usage like:
tlvCO := carbonMonoxide tlv. "returns the Threshold Limit Value as 29 mg/m3"
...
tlvCO convertTo: PPM "where PPM is an Aconcagua unit"
The problem is that, while the examples I've seen of measurements in Aconcagua contain in themselves all the info you need for conversion, in this case you have to know the molecular weight of the underlying substance being measured. Thus mg/m3 -> ppm is not inherently meaningful. A properly formed question would be mg/m3 of ammonia -> ppm.
My instinct is to either:
create a new class like MaterialQuantity which has a material and a measure, or
create a special unit subclass that has a material
But I'm not 100% sold and would like some input...
I don't think the molecular weight is part of the unit, but part of a calculation, like the 24.45 (which is not clear, but seems to be an average you use for the molecular mass of air).
I am not sure that ppm is a unit you can convert to a density unit, because they belong to different domains.
As far as I understand, you need to reify tlv as a compound unit or formula which you can ask for the element. Then you could simply do something like [:tlv | tlv * (24.45 / tlv element)].

Optimizing Portfolio With Bounds on Weights and Costs

I wish to create efficient frontiers for portfolios with bounds on both weights and costs. The following code provides the frontiers for portfolios in which the underlying assets are bounded with minimum and maximum weights. How do I add to this a secondary constraint in which the combined annual charges of the underlying assets do not exceed a maximum? Assume each asset has an annual cost which is applied as a percentage. As such the combined weights*charges should not exceed x%.
lb=Bounds(:,1);
ub=Bounds(:,2);
P = Portfolio('AssetList', AssetList,'LowerBound', lb, 'UpperBound', ub, 'Budget', 1);
P = P.estimateAssetMoments(AssetReturns);
[Passetmean, Passetcovar] = P.getAssetMoments;
Correlations=corrcoef(AssetReturns);
% Estimate Frontier
pwgt = P.estimateFrontier(20);
[prsk, pret] = P.estimatePortMoments(pwgt);
Mary,
once this additional set of constraints enters the model, note that the modified efficient-frontier problem is no longer guaranteed to be a convex-optimisation problem.
Thus one may forget about the comfort of all the popular fmincg(), L-BFGS et al. solvers.
There will not simply be a one-liner that gets the answer(s) out of the box.
Non-linear problems will require (the wilder, the more so) you to assemble another optimisation function, be it either
a brute-force based scanner,
with a fully orthogonal mesh scanned, and a "utility function" defined so that, as the given requirement states, it also incorporates the add-on cost of holding a Portfolio item,
or
a genetic-algorithm based approach,
in the belief that the brute-force one might become so time-extensive as to cease to be a feasible approach, while a GA evolution may yield acceptable sub-optimal (local optima) outputs.
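A toy sketch of the brute-force scanner suggested above, in Python with NumPy: sample candidate weight vectors, keep those satisfying the bounds and the combined annual-charge cap, then evaluate risk and return for each. All numbers here are hypothetical stand-ins for the asker's data:

```python
import numpy as np

rng = np.random.default_rng(0)
mu      = np.array([0.06, 0.09, 0.12, 0.07])    # expected returns
cov     = np.diag([0.02, 0.05, 0.09, 0.03])     # toy covariance matrix
charges = np.array([0.001, 0.0075, 0.015, 0.005])  # annual charges
lb, ub, charge_cap = 0.0, 0.6, 0.008

# Uniform samples on the simplex: each row of weights sums to 1
candidates = rng.dirichlet(np.ones(4), size=50_000)
feasible = candidates[
    (candidates >= lb).all(axis=1)
    & (candidates <= ub).all(axis=1)
    & (candidates @ charges <= charge_cap)   # combined-charge constraint
]
rets  = feasible @ mu
risks = np.einsum('ij,jk,ik->i', feasible, cov, feasible)
# The upper-left envelope of the (risk, return) cloud approximates the
# charge-constrained efficient frontier.
```

This scales poorly with the number of assets, which is exactly the point at which the GA alternative above becomes attractive.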

Can I run a GA to optimize wavelet transform?

I am running a wavelet transform (cmor) to estimate the damping and frequencies that exist in a signal. cmor has 2 parameters I can change to get more accurate results: center frequency (Fc) and bandwidth frequency (Fb). If I construct a signal with a few frequencies and dampings, then I can measure the error of my estimation (fig 2). But in the actual case I have a signal and I don't know its frequencies and dampings, so I can't measure the error. So a friend here suggested I reconstruct the signal and find the error by measuring the difference between the original and reconstructed signal, e(t) = |x(t) − x^(t)|.
so my question is:
Does anyone know a better function than e(t) = |x(t) − x^(t)| for finding the error between the reconstructed and original signal?
Can I use a GA to search for Fb and Fc? Or do you know a better search method?
Hope this picture shows what I mean; the actual case is the last one, the others are for explanation.
Thanks in advance
You say you don't know the error until after running the wavelet transform, but that's fine. You just run a wavelet transform for every individual the GA produces. Those individuals with lower errors are considered fitter and survive with greater probability. This may be very slow, but conceptually at least, that's the idea.
Let's define a Chromosome datatype containing an encoded pair of values, one for the frequency and another for the damping parameter. Don't worry too much about how they're encoded for now; just assume it's an array of two doubles if you like. All that's important is that you have a way to get the values out of the chromosome. For now, I'll just refer to them by name, but you could represent them in binary, as an array of doubles, etc. The other member of the Chromosome type is a double storing its fitness.
We can obviously generate random frequency and damping values, so let's create say 100 random Chromosomes. We don't know how to set their fitness yet, but that's fine. Just set it to zero at first. To set the real fitness value, we're going to have to run the wavelet transform once for each of our 100 parameter settings.
for Chromosome chr in population
    chr.fitness = run_wavelet_transform(chr.frequency, chr.damping)
end
Now we have 100 possible wavelet transforms, each with a computed error, stored in our set called population. What's left is to select fitter members of the population, breed them, and allow the fitter members of the population and offspring to survive into the next generation.
while not done
    offspring = new_population()
    while count(offspring) < N
        parent1, parent2 = select_parents(population)
        child1, child2 = do_crossover(parent1, parent2)
        mutate(child1)
        mutate(child2)
        child1.fitness = run_wavelet_transform(child1.frequency, child1.damping)
        child2.fitness = run_wavelet_transform(child2.frequency, child2.damping)
        offspring.add(child1)
        offspring.add(child2)
    end while
    population = merge(population, offspring)
end while
There are a bunch of different ways to do the individual steps like select_parents, do_crossover, mutate, and merge here, but the basic structure of the GA stays pretty much the same. You just have to run a brand new wavelet decomposition for every new offspring.
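A runnable miniature of the loop above, in Python. The wavelet transform is stubbed with a toy error surface (distance to a hypothetical "true" Fc, Fb) so the sketch is self-contained; in practice you would swap in the real reconstruction error e(t) = |x(t) − x^(t)|:

```python
import random

random.seed(42)
TRUE_FC, TRUE_FB = 1.5, 0.8   # hypothetical targets for the stub

def run_wavelet_transform(fc, fb):
    # Stand-in for the real per-chromosome error computation
    return (fc - TRUE_FC) ** 2 + (fb - TRUE_FB) ** 2

def error(c):
    return run_wavelet_transform(c['fc'], c['fb'])

def crossover(p1, p2):
    # Uniform crossover: each gene comes from either parent
    return {'fc': random.choice([p1['fc'], p2['fc']]),
            'fb': random.choice([p1['fb'], p2['fb']])}

def mutate(c):
    if random.random() < 0.3:
        c['fc'] += random.gauss(0, 0.1)
        c['fb'] += random.gauss(0, 0.1)

N = 50
population = [{'fc': random.uniform(0, 5), 'fb': random.uniform(0, 5)}
              for _ in range(N)]
initial_best = min(error(c) for c in population)

for generation in range(40):
    offspring = []
    while len(offspring) < N:
        # Tournament selection of two parents
        p1 = min(random.sample(population, 3), key=error)
        p2 = min(random.sample(population, 3), key=error)
        child = crossover(p1, p2)
        mutate(child)
        offspring.append(child)
    # Elitist merge: keep the N fittest of parents + offspring
    population = sorted(population + offspring, key=error)[:N]

final_best = error(population[0])
# Elitism guarantees final_best never exceeds initial_best
```

Note that every offspring costs one full wavelet decomposition, so the population size and generation count directly set the compute budget.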