Blending problem - calculate prices of a product given an increase in profits - AMPL

I'm new to AMPL.
In a blending problem, I have solved a model by maximizing profit.
Now I'm trying to calculate what the unit price of a given amount of product has to be in order to increase my profit by a certain percentage.
Is it possible to do this directly in a .run file?
By applying "let" I'm able to change the amount of product available, but I'm struggling to figure out how to treat the price of the product as a variable. How can I do this?
Thank you.

There are a couple of things you're trying to do here.
One is to solve a modified version of the problem with a requirement that the profit be at least X% better than the previous version of the problem. This can be done as follows:
Solve the original version of the problem
Set a param profit_0 equal to the profit from the old problem.
Modify the constraints on the problem as appropriate
Change the objective function, so it's now "maximise the price you're solving for".
Re-solve the problem with a new constraint that the profit in the new solution be at least as good as 1.05*profit_0, or whatever the requirement is.
The other is to treat some prices as fixed and others as variables. Probably the simplest way to do this is define them all as variables but then constrain some of them to fixed values.
Coding these in AMPL isn't too hard, and I'll give an example of the syntax below. Unfortunately, the fact that you're multiplying (variable price) by (variable quantity bought) to find your costs means you end up with a quadratic constraint, which many solvers will reject.
In this example I've used Gecode, which isn't ideal for this kind of problem (in particular, it requires all variables be integer) but does at least allow for quadratic constraints:
reset;
option solver gecode;
# all the "integer" constraints in this example are there because
# gecode won't accept non-integer variables; ideally they wouldn't
# be there.
# To keep the demo simple, we assume that we are simply buying
# a single ingredient from suppliers and reselling it, without
# blending considerations.
param ingredients_budget;
# the maximum we can spend on buying ingredients
set suppliers;
param max_supply{suppliers};
# maximum amount each supplier has available
var prices{suppliers} integer >= 0;
# the amount charged by each supplier - in fact we want to treat
# some of those as var and some as constant, which we'll do by
# fixing some of the values.
param fixed_prices{suppliers} default -1;
# A positive value will be interpreted as "fix price at this value";
# negative will be interpreted as variable price.
s.t. fix_prices{s in suppliers: fixed_prices[s] > 0}:
prices[s] = fixed_prices[s];
param selling_price;
var quantity_bought{s in suppliers} >= 0, <= max_supply[s] integer;
var quantity_sold integer;
s.t. max_sales: quantity_sold = sum{s in suppliers} quantity_bought[s];
var input_costs integer;
s.t. defineinputcosts: input_costs = sum{s in suppliers} quantity_bought[s]*prices[s];
s.t. enforcebudget: input_costs <= ingredients_budget;
var profit integer;
s.t. defineprofit: profit = quantity_sold*selling_price - input_costs;
maximize OF1: profit;
data;
param ingredients_budget := 1000;
set suppliers :=
S1
S2
;
param max_supply :=
S1 100
S2 0
;
param fixed_prices :=
S1 120
;
param selling_price := 150;
model;
print("Running first case with nothing available from S2");
solve;
display profit;
display quantity_bought;
display quantity_sold;
param profit_0;
let profit_0 := profit;
param increase_factor = 0.05;
let max_supply["S2"] := 100;
s.t. improveprofit: profit >= (1+increase_factor)*profit_0;
maximize OF2: prices["S2"];
objective OF2;
print("Now running to increase profit.");
solve;
display profit;
display quantity_bought;
display quantity_sold;
display prices["S2"];
Another option is to use AMPL's looping commands to run the same problem repeatedly, changing the relevant price value each time, to see which price values give an acceptable profit. If you do this you don't need to declare the price as a var; just make it a param and use "let" to change it between scenarios. This avoids the quadratic constraint and allows you to use MIP solvers like CPLEX and Gurobi.
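For illustration, such a scenario loop can also be driven from Python with the amplpy package. This is only a rough sketch: the file names, the price_S2 parameter and the profit objective are placeholder names for whatever your model actually defines, not names from the demo model above.
from amplpy import AMPL

ampl = AMPL()
ampl.read("blend.mod")              # hypothetical model file
ampl.readData("blend.dat")          # hypothetical data file
ampl.setOption("solver", "cplex")

# Solve once with the original data to get the baseline profit.
ampl.solve()
baseline = ampl.getObjective("profit").value()
target = 1.05 * baseline            # e.g. require a 5% improvement

# Sweep candidate values of the price param and re-solve each time.
for price in range(100, 201, 5):
    ampl.getParameter("price_S2").set(price)
    ampl.solve()
    if ampl.getObjective("profit").value() >= target:
        print("price", price, "meets the profit target")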
You could also ask on the Operations Research SE, where you might get some better answers than I can give you.

Related

How to improve the performance of my graph coloring model in MiniZinc?

I have created a model for solving the graph coloring problem in MiniZinc:
include "globals.mzn";
int: n_nodes; % Number of nodes
int: n_edges; % Number of edges
int: domain_ub; % Number of colors
array[int] of int: edges; % All edges of graph as a 1D array
array[1..n_edges, 1..2] of int: edges2d = array2d(1..n_edges, 1..2, edges);
array[1..n_nodes] of var 1..domain_ub: colors;
constraint forall (i in 1..n_edges) (colors[edges2d[i,1]] != colors[edges2d[i,2]]);
solve :: int_search(colors, dom_w_deg, indomain_random)
satisfy;
In order to tackle big problems (around 400-500 nodes), I start with an upper bound on the number of colors and solve successive satisfaction problems, decrementing the number by one until the problem becomes unsatisfiable or times out. This method gives me decent results.
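For reference, such a decrement loop could look roughly like this in Python with the minizinc bindings; the model file name, the starting bound and the data assignments are assumptions, not part of the original setup.
from datetime import timedelta
from minizinc import Instance, Model, Solver

model = Model("coloring.mzn")            # hypothetical model file
solver = Solver.lookup("gecode")

k = 50                                   # assumed starting upper bound
best = None
while k >= 1:
    instance = Instance(solver, model)   # fresh instance per bound
    instance["domain_ub"] = k
    # n_nodes, n_edges and edges would be assigned here as well.
    result = instance.solve(timeout=timedelta(seconds=60))
    if not result.status.has_solution():  # UNSAT or timed out: stop
        break
    best = k
    k -= 1                               # try one color fewer
print("best number of colors found:", best)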
In order to improve my results, I added symmetry breaking constraints to the above model:
constraint colors[1] = 1;
constraint forall (i in 2..n_nodes) ( colors[i] in 1..max(colors[1..i-1])+1 );
This, however, brings down my results both speed-wise and quality-wise.
Why is my model performing badly after adding the additional constraints? How should I go about adding the symmetry breaking constraints?
For cases where the values are fully symmetric, I would recommend the seq_precede_chain constraint, which breaks that symmetry. As commented by @hakank, using indomain_random is probably not a good idea when combined with symmetry breaking; indomain_min is a safer choice.
For graph coloring in general, it may help performance to run a clique-finding algorithm and post all_different constraints over each clique found. That would have to be done when generating a MiniZinc program for each instance. For comparison, see the Gecode graph coloring example, which uses pre-computed cliques.
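As a sketch of that preprocessing step, here is one way to enumerate maximal cliques with Python's networkx and emit the corresponding constraints as MiniZinc text (the toy edge list and the emitted colors identifier are assumptions matching the model in the question):
import networkx as nx

edges = [(1, 2), (1, 3), (2, 3), (3, 4)]    # toy instance
G = nx.Graph(edges)

# Pairs are already covered by the edge constraints, so only post
# all_different over cliques of size >= 3.
for clique in nx.find_cliques(G):           # maximal cliques
    if len(clique) >= 3:
        nodes = ", ".join("colors[%d]" % v for v in sorted(clique))
        print("constraint all_different([%s]);" % nodes)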
I know this is an old question, but I was working on the same problem and I wanted to write what I found about this topic that maybe it will be useful to someone in the future.
To improve the model, the solution is to use symmetry-breaking constraints, as you did, but MiniZinc has a global constraint called value_precede which can be used in this case.
% A new color j+1 is only allowed to appear after color j has been seen before (in any order)
constraint forall(j in 1..domain_ub-1)(value_precede(j, j+1, colors));
Changing the search heuristics does not improve the result much; I have tried different configurations, and the best results were obtained using dom_w_deg and indomain_min (on my data files).
Another way to improve the results is to accept any good-enough solution that uses at most a fixed number of colours (maxcolors below).
But this model does not always lead to the optimal result.
include "globals.mzn";
int: n; % Number of nodes
int: e; % Number of edges
int: maxcolors = 17; % Domain of colors
array[1..e,1..2] of int: E; % 2d array, rows = edges, 2 cols = nodes per edge
array[0..n-1] of var 0..maxcolors: c; % Color of each node
constraint forall(i in 1..e)(c[E[i,1]] != c[E[i,2]] ); % Two linked nodes have diff color
constraint c[0] == 0; % Break symmetry, force first color == 0
% Big symmetry breaker. A new color J is only allowed to appear after colors
% 0..J-1 have been seen before (in any order)
constraint forall(i in 0..n-2)( value_precede(i,i+1, c) );
% Ideally solve would minimize(max(c)), but that's too slow, so we accept any good
% enough solution that's less equal our heuristic "maxcolors"
constraint max(c) <= maxcolors;
solve :: int_search(c, dom_w_deg, indomain_min, complete) satisfy;
output [show(max(c)+1), "\n", show(c)];
A clear and complete explanation can be found here:
https://maxpowerwastaken.gitlab.io/model-idiot/posts/graph_coloring_and_minizinc/

Summation iterated over a variable length

I have written an optimization problem in pyomo and need a constraint, which contains a summation that has a variable length:
u_i_t[i, t]*T_min_run - sum (tnewnew in (t-T_min_run+1)..t-1) u_i_t[i,tnewnew] <= sum (tnew in t..(t+T_min_run-1)) u_i_t[i,tnew]
T is my actual timeline and N my machines
Usually I iterate over t, but I need to guarantee that the machines stay turned on for a certain amount of time.
def HP_on_rule(model, i, t):
    # range() excludes its end point, so end at t and t+T_min_run to keep
    # both sums inclusive, matching the formula above
    return model.u_i_t[i, t]*T_min_run - sum(model.u_i_t[i, tnewnew] for tnewnew in range(t-T_min_run+1, t)) <= sum(model.u_i_t[i, tnew] for tnew in range(t, t+T_min_run))

model.HP_on_rule = Constraint(N, rule=HP_on_rule)
I hope you can provide me with the correct formulation in pyomo/python.
The problem is that t is a running index and I do not know how to implement this in Python; tnew is only a helper index. E.g. with t=6 (variable), T_min_run=3 (constant) and u_i_t binary [00001111100000...], I get:
1*3 - 1 <= 3
As I said, I do not know how to implement this in my code, and the current version does not run:
TypeError: HP_on_rule() missing 1 required positional argument: 't'
It seems like you didn't provide all the arguments to your rule function.
Since t is a parameter of your function, I assume that it corresponds to an element of set T (your timeline).
Then the last line of your code example should include not only the set N but also the set T. Try this:
model.HP_on_rule = Constraint(N, T, rule=HP_on_rule)
Please note: when building a Constraint with a "for each" part, you must provide the Pyomo Sets that you want to iterate over at the beginning of the call for Constraint construction. As a rule of thumb, your constraint rule function should have one more argument than the number of Pyomo Sets specified in the Constraint initialization line.
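Putting it together, a minimal self-contained sketch could look as follows; the toy sets, the T_min_run value and the boundary handling are assumptions to be adjusted to your model:
from pyomo.environ import Binary, ConcreteModel, Constraint, Set, Var

model = ConcreteModel()
model.N = Set(initialize=[1, 2])            # machines (assumed toy data)
model.T = Set(initialize=range(1, 21))      # timeline (assumed toy data)
model.u_i_t = Var(model.N, model.T, domain=Binary)
T_min_run = 3                               # minimum up-time (constant)

def HP_on_rule(model, i, t):
    # Skip periods whose look-ahead window would leave the timeline.
    if t + T_min_run - 1 > max(model.T):
        return Constraint.Skip
    # range() excludes its end point, so these bounds keep the sums
    # inclusive of t-1 and t+T_min_run-1 respectively.
    lhs = model.u_i_t[i, t] * T_min_run - sum(
        model.u_i_t[i, tau]
        for tau in range(t - T_min_run + 1, t)
        if tau >= min(model.T)
    )
    rhs = sum(model.u_i_t[i, tau] for tau in range(t, t + T_min_run))
    return lhs <= rhs

# One constraint per (machine, period): both sets go into the Constraint.
model.HP_on_rule = Constraint(model.N, model.T, rule=HP_on_rule)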

AMPL Sum variables operator

I am trying to solve a set of problems using AMPL and add up their objective values. However, the accumulation does not seem to work: the total only ever holds the most recent value.
set CASES := {1,2,3,4,5,6};
model modelFile.mod;
option solver cplex;
option eexit -123456789;
var total;
let total := 0;
for {j in CASES}
{
    reset data;
    data ("data" & j & ".dat");
    solve;
    display total_Cost;
    let total := total + total_Cost;
    display total;
}
Sample Output:
CPLEX 12.6.3.0: optimal solution; objective 4.236067977
2 dual simplex iterations (0 in phase I)
total_Cost = 4.23607
total = 4.23607
CPLEX 12.6.3.0: optimal solution; objective 5.656854249
5 dual simplex iterations (0 in phase I)
total_Cost = 5.65685
total = 5.65685
where total_Cost is the objective value of the optimization problem.
Since AMPL is an algebraic modeling language rather than a general-purpose programming language, variables in it denote optimization variables, which are determined during the solution process. So each time you call solve, the optimization variable total is reset. What you need here is a parameter which, unlike a variable, is not changed during the optimization:
param total;
I finally realized that this happened due to the "reset data" command that AMPL has. By changing the command to "update data", the code works.
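As an aside, if you drive AMPL from Python with the amplpy package, the running total can live in Python rather than in the model, which sidesteps the var/param issue entirely. A rough sketch, reusing the file and objective names from the script above:
from amplpy import AMPL

total = 0.0
for j in range(1, 7):
    ampl = AMPL()                        # fresh session per case
    ampl.read("modelFile.mod")
    ampl.readData("data%d.dat" % j)
    ampl.setOption("solver", "cplex")
    ampl.solve()
    total += ampl.getObjective("total_Cost").value()
    print("case", j, "running total:", total)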

test for normality

What is the best way to fit/test normality for each unique ilitm in the dataset below? Thanks.
As you know (visible in the edit history), Oracle provides the Shapiro-Wilk test of normality (I use a link to R, as you will find much more reference material for that implementation).
The important thing to know is that the OUT parameter sig corresponds to what statisticians call the p-value.
Example
Example
DECLARE
    sig     NUMBER;
    mean    NUMBER := 0;
    stdev   NUMBER := 1;
BEGIN
    DBMS_STAT_FUNCS.normal_dist_fit (USER,
                                     'DIST',
                                     'DIST1',
                                     'SHAPIRO_WILKS',
                                     mean,
                                     stdev,
                                     sig);
    DBMS_OUTPUT.put_line (sig);
END;
/
you get the following output (the decimal commas come from the session's NLS settings; the second line is the sig value printed by DBMS_OUTPUT):
W value : ,9997023261540432791888281834378157820514
,7136528702727722659486194469256296703232
For comparison, the same test in R on the same data:
> shapiro.test(df$DIST1)
Shapiro-Wilk normality test
data: df$DIST1
W = 0.9997, p-value = 0.7137
The rest is statistics :)
My interpretation: this test is useful if you need to discard the most coarse deviations from the normal distribution.
If sig < .05 you may discard the data as not normally distributed, but a high value of sig doesn't mean the opposite. You only know that you can't reject it as non-normal.
Anyway, a plot of the distribution can provide better insight than a simple true/false test; here R is a good resource as well.
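If you later want the same check outside the database, it is a one-liner in Python with SciPy; a small sketch on generated data (scipy.stats.shapiro returns the W statistic and the p-value, i.e. sig above):
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(loc=0, scale=1, size=500)   # sample to test

w, p_value = stats.shapiro(x)
print("W = %.4f, p-value = %.4f" % (w, p_value))
# p-value < 0.05: reject normality; otherwise: cannot reject it.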

Constrained Single-Objective Optimization

Introduction
I need to split an array filled with items of a certain type (let's take water buckets as an example), each with two values set (in this case weight and volume), while keeping the difference between the totals of the weights to a minimum (preferred) and the difference between the totals of the volumes less than 1000 (required). This doesn't need to be a full-fledged genetic algorithm or something similar, but it should be better than what I currently have...
Current Implementation
Not knowing how to do it better, I started by splitting the array into two arrays of the same length (the array can be filled with an uneven number of items), filling a possibly empty spot with an item whose two values are both 0. The sides don't need to have the same number of items; I just didn't know how to handle it otherwise.
After having these distributed, I'm trying to optimize them like this:
func (main *Main) Optimize() {
    for {
        difference := main.Difference(WEIGHT)
        for i := 0; i < len(main.left); i++ {
            for j := 0; j < len(main.right); j++ {
                if main.DifferenceAfter(i, j, WEIGHT) < main.Difference(WEIGHT) {
                    main.left[i], main.right[j] = main.right[j], main.left[i]
                }
            }
        }
        if difference == main.Difference(WEIGHT) {
            break
        }
    }
    for main.Difference(CAPACITY) > 1000 {
        leftIndex := 0
        rightIndex := 0
        liters := 0
        weight := 100
        for i := 0; i < len(main.left); i++ {
            for j := 0; j < len(main.right); j++ {
                if main.DifferenceAfter(i, j, CAPACITY) < main.Difference(CAPACITY) {
                    newLiters := main.Difference(CAPACITY) - main.DifferenceAfter(i, j, CAPACITY)
                    newWeight := main.Difference(WEIGHT) - main.DifferenceAfter(i, j, WEIGHT)
                    if newLiters > liters && newWeight <= weight || newLiters == liters && newWeight < weight {
                        leftIndex = i
                        rightIndex = j
                        liters = newLiters
                        weight = newWeight
                    }
                }
            }
        }
        main.left[leftIndex], main.right[rightIndex] = main.right[rightIndex], main.left[leftIndex]
    }
}
Functions:
main.Difference(const) calculates the absolute difference between the two sides; the constant passed as an argument decides which value the difference is calculated for.
main.DifferenceAfter(i, j, const) simulates a swap between two buckets, i being the left one and j being the right one, and calculates the resulting absolute difference; the constant again determines the value to check.
Explanation:
Basically this starts by optimizing the weight, which is what the first for loop does. On every iteration it tries every possible pair of buckets that can be swapped, and if the difference after that swap is less than the current difference (resulting in a better distribution) it swaps them. If the weight doesn't change any more, it breaks out of the loop. While not perfect, this works quite well, and I consider it acceptable for what I'm trying to accomplish.
Then it's supposed to optimize the distribution based on the volume, so that the total difference is less than 1000. Here I tried to be more careful and search for the best combination in a pass before swapping. Thus it searches for the bucket swap resulting in the biggest capacity change, and is also supposed to search for a trade-off with the weight, though I see the flaw that the first bucket combination tried sets the liters and weight variables, so the combinations considered afterwards are reduced by a big amount.
Conclusion
I think I need to include some more math here, but I'm honestly stuck and don't know how to continue, so I'd like to get some help from you; basically anything that can help me here is welcome.
As previously said, your problem is actually a constrained optimisation problem with a constraint on your difference of volumes.
Mathematically, this is: minimise the difference of weights under the constraint that the difference of volumes is less than 1000. The simplest way to express it as a linear optimisation problem would be:
min weights . x
subject to |volumes . x| < 1000.0
for all i, x[i] = +1 or -1
Where a . b is the vector dot product. Once this problem is solved, all indices where x = +1 correspond to your first array, all indices where x = -1 correspond to your second array.
Unfortunately, 0-1 integer programming is known to be NP-hard. The simplest way of solving it is to perform an exhaustive brute-force exploration of the space, but it requires testing all 2^n possible vectors x (where n is the length of your original weights and volumes vectors), which can quickly get out of hand. There is a lot of literature on this topic, with more efficient algorithms, but they are often highly specific to a particular set of problems and/or constraints. You can google "linear integer programming" to see what has been done on this topic.
I think the simplest might be to perform a heuristic-based brute force search, where you prune your search tree early when it would get you out of your volume constraint, and stay close to your constraint (as a general rule, the solution of linear optimisation problems are on the edge of the feasible space).
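To make the formulation concrete, here is a minimal exhaustive-search sketch of the 0-1 model above (x[i] = +1 puts bucket i in the first array, -1 in the second; the toy data is made up, and as discussed this is only practical for small n):
from itertools import product

def best_split(weights, volumes, max_vol_diff=1000):
    # Try all 2^n sign vectors; keep the feasible one with the
    # smallest absolute weight difference.
    best_signs, best_diff = None, float("inf")
    for signs in product((1, -1), repeat=len(weights)):
        vol_diff = abs(sum(s * v for s, v in zip(signs, volumes)))
        if vol_diff >= max_vol_diff:       # hard volume constraint
            continue
        w_diff = abs(sum(s * w for s, w in zip(signs, weights)))
        if w_diff < best_diff:
            best_signs, best_diff = signs, w_diff
    return best_signs, best_diff

# toy usage
signs, diff = best_split([3, 7, 4, 6], [900, 1200, 800, 1100])
print(signs, diff)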
Here are a couple of articles you might want to read on this kind of optimisations:
UCLA Linear integer programming
MIT course on Integer programming
Carleton course on Binary programming
Articles on combinatorial optimisation & linear integer programming
If you are not familiar with optimisation articles or math in general, the Wikipedia articles provide a good introduction, and most articles on this topic quickly show some (pseudo)code you can adapt right away.
If your n is large, I think at some point you will have to make a trade off between how optimal your solution is and how fast it can be computed. Your solution is probably suboptimal, but it is much faster than the exhaustive search. There might be a better trade off, depending on the exact configuration of your problem.
It seems that in your case the difference of weight is the objective, while the difference of volume is just a constraint, which means that you are seeking solutions that optimize the difference-of-weight attribute (as small as possible) and satisfy the condition on the difference-of-volume attribute (total < 1000). In this case, it's a single-objective constrained optimization problem.
If, on the other hand, you are interested in multi-objective optimization, you may want to look at the concept of the Pareto frontier: http://en.wikipedia.org/wiki/Pareto_efficiency . It's good for keeping multiple good solutions with advantages in different objectives, i.e., not losing diversity.