mathematical programming to optimize machine utilization? - batch-processing

I have to optimize this (see image) with respect to lambda (machine utilization) and b (batch sizes). I'm working in IntelliJ IDEA with the CPLEX library, but I get an error: CPLEX Error 5002: 'q1' is not convex, and I don't know how to solve it.
For this reason I was considering a different route: first optimize with respect to lambda, plugging in the maximum capacity value (already given) for b; then, once lambda is found, substitute the values back in and optimize with respect to b. Does this make sense? Am I doing it wrong? What would be the right method to solve this?
Obviously the constraint is missing here that the sum of the lambdas over the machines m must equal 1 for each product i.
What I have to do is the third point.
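If it helps to see the alternating scheme concretely, here is a toy sketch in Python/SciPy (not CPLEX) of block coordinate descent on a made-up bilinear objective, the kind of lambda*b coupling that typically triggers CPLEX's non-convexity error. Each subproblem is convex once the other variable is fixed, but the alternation is a heuristic and is not guaranteed to reach the global optimum:

from scipy.optimize import minimize_scalar

def f(lam, b):
    # stand-in for the real objective; lam*b is the non-convex coupling
    return (lam * b - 6.0) ** 2 + 0.1 * lam ** 2 + 0.1 * b ** 2

b = 4.0  # start b at its maximum capacity, as proposed above
for _ in range(20):
    # fix b, solve the convex subproblem in lam, then swap roles
    lam = minimize_scalar(lambda l: f(l, b), bounds=(0, 1), method='bounded').x
    b = minimize_scalar(lambda bb: f(lam, bb), bounds=(0, 4), method='bounded').x

print(lam, b, f(lam, b))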


Numerical Instability in Optim.jl

I'm currently working on a project in Julia where I start with an input beta which is assumed to be incorrect. I run through a sequence of code that updates this beta to the correct value and check the error. As beta gets larger, I expect this error to reach 100%. The code ultimately minimizes some parameter chi, which is why I've chosen the optimize function from Optim.jl. The output I'm getting is below.
When I perform this calculation by hand (using 1st and 2nd derivative to update) I get this
I see that this still has some numerical instability, but it holds up longer than the Optim way does. I would expect it to behave the other way around. My optimize function is set up as
result = optimize(β -> TEfunc(E,nc,onecut,β,pcutoff,μcutoff,N),  # objective in β alone
                  β/2, 2.2*β,                                    # bracketing interval
                  Brent(), abs_tol=tempcutoff, rel_tol=sqrt(tempcutoff))
βstar = Optim.minimizer(result)  # the minimizing β found by Brent's method
Is there an argument that I'm missing in the optimize call? I just want to figure out why I have numerical instability so quickly.
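In case it helps to experiment with how the bracket and tolerances interact, here is the analogous bracketed Brent-style call in Python/SciPy, with TEfunc replaced by a stand-in (everything here is an assumption, not the actual setup):

from scipy.optimize import minimize_scalar

def te(beta):
    # stand-in for TEfunc(E, nc, onecut, beta, pcutoff, mucutoff, N)
    return (beta - 1.7) ** 2

beta0 = 2.0
tempcutoff = 1e-10
res = minimize_scalar(te, bounds=(beta0 / 2, 2.2 * beta0),
                      method='bounded', options={'xatol': tempcutoff ** 0.5})
print(res.x)  # plays the role of Optim.minimizer(result)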

Searching for groups of objects given a reduction function

I have a few questions about a type of search.
First, does the following type of search have a name, and if so, what is it? I want to search for subsets of objects from some collection such that a reduction and a filter function applied to the subset come out true. For example, say I have the following objects, each of which contains an id and a value.
[A,10]
[B,10]
[C,10]
[D,9]
[E,11]
I want to search for "all the sets of objects whose summed values equal 30" and I would expect the output to be, {{A,B,C}, {A,D,E}, {B,D,E}, {C,D,E}}.
Second, is the only strategy to perform this search brute-force? Is there some type of general-purpose algorithm for this? Or are search optimizations dependent on the reduction function?
Third, if you came across this problem, what tools would you use to solve it in a general way? Assume the reduction and filter functions could be anything and are not necessarily the sum function. Does SQL provide a good API for this type of search? What about Prolog? Any interesting tips and tricks would be appreciated.
Thanks.
I can't comment on the problem in general, but a brute-force search can easily be done in Prolog.
w(a,10).   % facts: w(Id, Value)
w(b,10).
w(c,10).
w(d,9).
w(e,11).

solve(0, [], _).                  % remaining target is 0: nothing left to pick
solve(N, [X], [X|_]) :- w(X, N).  % last pick: head's value equals what is left
solve(N, [X|Xs], [X|Bs]) :-       % pick the head of the candidate list
    w(X, W),
    W < N,
    N1 is N - W,
    solve(N1, Xs, Bs).
solve(N, [X|Xs], [_|Bs]) :-       % skip the candidate if the previous clause fails
    solve(N, [X|Xs], Bs).
Which gives
| ?- solve(30, X, [a, b, c, d, e]).
X = [a,b,c] ? ;
X = [a,d,e] ? ;
X = [b,d,e] ? ;
X = [c,d,e] ? ;
(1 ms) no
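For comparison, the same brute-force search is a few lines in any general-purpose language; here is a Python sketch using itertools (no pruning, unlike the Prolog version above):

from itertools import combinations

items = [('a', 10), ('b', 10), ('c', 10), ('d', 9), ('e', 11)]
target = 30

for r in range(1, len(items) + 1):
    for combo in combinations(items, r):
        if sum(value for _, value in combo) == target:
            print([name for name, _ in combo])
# prints ['a', 'b', 'c'], ['a', 'd', 'e'], ['b', 'd', 'e'], ['c', 'd', 'e']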
SQL is TERRIBLE at this kind of problem. Until recently there was no way to get 'all combinations' of row elements. Now you can do so with recursive common table expressions, but their limitations force you to retain all partial results alongside the final results, which you then have to filter out at the end. About the only benefit of SQL's recursive approach is that you can stop evaluating a combination once a sub-path exceeds 30, your target total. That makes it slightly less ugly than an 'evaluate all 2^N combinations' brute-force solution (unless every combination sums to less than the target total).
To solve this with SQL you would be running an algorithm that can be described as:
1. Seed your result set with all table entries whose value does not exceed your target total, with their value as a running sum.
2. Iteratively join your prior result with all table entries not already used in that result whose value added to the running sum is less than or equal to the target total. The running sum becomes the old running sum plus the value, and the ID is appended to the ID list. Union this new result with the old results. Iterate until no more records qualify.
3. Make a final pass over the result set to filter out the partial sums that do not total to your target.
Oh, and unless you make special provisions, solutions {A,B,C}, {C,B,A}, and {A,C,B} all look like different solutions (order is significant).
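To make that iteration concrete, here is the same seed-and-extend scheme sketched in Python rather than SQL; the id-ordering check in the loop is one way to make the special provision mentioned above, so that {A,B,C} and {C,B,A} are not reported separately:

items = {'a': 10, 'b': 10, 'c': 10, 'd': 9, 'e': 11}
target = 30

# seed: every single entry whose value does not exceed the target
frontier = [((name,), value) for name, value in items.items() if value <= target]
results = []

while frontier:
    new_frontier = []
    for ids, running in frontier:
        if running == target:
            results.append(ids)  # a completed subset; do not extend it further
            continue
        for name, value in items.items():
            # extend only with "later" ids so each subset is generated once
            if name > ids[-1] and running + value <= target:
                new_frontier.append((ids + (name,), running + value))
    frontier = new_frontier

print(results)  # [('a','b','c'), ('a','d','e'), ('b','d','e'), ('c','d','e')]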

Multi-objective optimization but the function equation is unknown?

Firstly, I am totally out of my expertise zone, so please bear with me.
I developed a fluid dynamic engine with 5 exposed parameters (say A, B, C, D, E). When you give this engine these 5 parameters, it does its magic and gives out a value 'Z'.
I want to write a script which can explore which combinations of A-E give the lowest (or close to the lowest) value of Z.
I know optimization algorithms exist, but every example I've found uses some explicit function.
So I guess my objective would simply be 'minimize Z'? But where do A-E go?
Not really an answer, but some questions and ideas that might help you think through the best way to address this. We have no understanding of how big a range of values needs to be explored for those parameters, or how Z behaves, so this is very vague...
If you look at the values of Z for given values of A...E, does the value of Z jump around a lot for small changes on the parameter values, or does the Z value change reasonably smoothly?
If the Z value is not too erratic, you could try some kind of gradient descent, using calculated values of Z at nearby parameter values to approximate the gradient. Suppose changing the value of 'A' from 1 to 2 improves Z more than a similar-sized change in the other parameters; then try other values of A while keeping the other parameters fixed until you find the value of A that gives the best Z. Then try changing the other parameter values to see which one gives the steepest descent, and find a better value for that parameter. Repeat this process until you can't find any improvement, and you will have found a (local) minimum. You could then start at a different place in your parameter space and try again; you will probably find several local minima, and may just choose the best of those. Not provably optimal, but it may be good enough. Of course you can get clever and use things like conjugate gradients, Newton-Raphson or similar if Z is smooth enough. (A sketch of this idea appears below.)
If the Z values are very erratic, then you might have to just sample possible combinations of A-E, get the Z values, and choose the best you can find. Again, you might do that in some systematic way (e.g. points on a grid in your parameter space), entirely at random, or a combination of both.
If you find that there are 'clusters' of good solutions with similar values of the parameters then maybe some kind of local search would help - the idea is that there is often a better solution in the local neighbourhood of a known good solution. So maybe try perturbing your parameter values a bit from a known solution to see if that can lead to a better solution - either by some gradient descent method or by random sampling.
Unfortunately, if your Z calculation is complex, then any method using it as a black box will likely be slow as it will need to be re-evaluated many times.
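As a concrete starting point for the descent idea above, here is a minimal coordinate/pattern search sketch in Python that treats the engine as a black box; the `engine` stub, bounds-free steps, and all the constants are assumptions, not your actual simulator:

import random

def engine(a, b, c, d, e):
    # hypothetical stand-in for the fluid dynamic engine returning Z
    return (a - 1) ** 2 + (b + 2) ** 2 + c * c + d * d + e * e

def coordinate_search(x, step=0.5, shrink=0.5, iters=200):
    best = engine(*x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                z = engine(*trial)
                if z < best:  # keep the move only if Z improves
                    x, best, improved = trial, z, True
        if not improved:
            step *= shrink  # no axis move helped: refine the step size
            if step < 1e-6:
                break
    return x, best

x0 = [random.uniform(-5, 5) for _ in range(5)]
print(coordinate_search(x0))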
You could use a Genetic Algorithm, where your chromosomes are formed from the 5 candidate values of the variables you have to optimize to minimize Z, and your optimization/fitness "function" is the simulation itself outputting Z.
Other viable alternatives are the Particle Swarm Optimization algorithm and Ant Colony Optimization. All of these are usable algorithms for this kind of optimization problem.
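A bare-bones GA along those lines might look like the following Python sketch; the chromosome is just the 5 values (A-E), fitness is the engine output Z, and the bounds, rates, and `engine` stub are all assumptions:

import random

BOUNDS = [(-5.0, 5.0)] * 5  # assumed ranges for A..E

def engine(p):
    # hypothetical stand-in for the simulation returning Z
    return sum(v * v for v in p)

def ga(pop_size=40, generations=100, mut_rate=0.2):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=engine)             # lower Z is fitter
        survivors = pop[:pop_size // 2]  # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            for i, (lo, hi) in enumerate(BOUNDS):
                if random.random() < mut_rate:                   # mutation
                    child[i] = random.uniform(lo, hi)
            children.append(child)
        pop = survivors + children
    return min(pop, key=engine)

best = ga()
print(best, engine(best))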

How can I minimize the cost in this situation?

Help, can someone help me?
Minimum cost flow with fixed costs and rewards for saturated arcs.
Consider the following variant of the minimum cost flow problem, where in addition to the network G = (V, A), with values bi associated with the nodes i ∈ V such that ∑i∈V bi = 0, and costs cij for the unit cost of transport along the arc (i, j) ∈ A, we also have that:
• each arc has an associated capacity value dij that indicates the maximum flow transportable along the arc;
• the number of arcs along which a strictly positive flow is sent is no more than a percentage 100p1% of the total number of arcs, and for each of these arcs a fixed cost of K is paid;
• the number of arcs that are saturated (arcs along which a flow equal to their capacity is sent) is at least a percentage 100p2% of the total number of arcs (p2 ≤ p1).
Formulate the mathematical model for this problem, write it in AMPL, define the data for a particular instance, and solve it. Also analyze what happens if some of the instance data change. In particular, find an interval [p1, p2] as small as possible such that the problem still has a solution.
I'm not sure that I clearly understand your problem, but I'll try to give a possible answer to each part:
You should have a positive variable, let's call it Xij, for each arc, which gives the flow currently passing along the arc between nodes i and j.
With this variable and the given parameter Dij you can add a constraint to express the capacity bound: Xij <= Dij for each (i,j) belonging to A.
For the other constraints, I suggest you use a minimization objective function of the form sum{ i in N, j in N } used[i,j] * K, where used[i,j] is a binary variable that denotes whether the corresponding flow is non-zero. To relate the flow to this binary variable, add an additional constraint like this:
x[i,j] <= d[i,j] * used[i,j]
As far as the number of saturated arcs is concerned, you can solve the max flow problem, whose solution is given by successive iterations of an augmenting-path algorithm.
I'm not sure I've answered your questions; feel free to post exactly what your decision problem is (what the objective function is and what the constraints are).
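The question asks for AMPL, but the same formulation can be sketched runnably in Python with PuLP on a made-up toy instance; the `sat` binaries for counting saturated arcs are an addition beyond the answer above, and all the data here are invented:

from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

nodes = {1: 4, 2: 0, 3: -4}  # b_i values (supply > 0, demand < 0), summing to 0
arcs = {(1, 2): (1, 3), (2, 3): (1, 3), (1, 3): (2, 2)}  # (i,j): (c_ij, d_ij)
K, p1, p2 = 5, 1.0, 0.5  # fixed cost per used arc, percentages as fractions

prob = LpProblem("fixed_cost_flow", LpMinimize)
x = {(i, j): LpVariable(f"x_{i}_{j}", lowBound=0) for (i, j) in arcs}
used = {(i, j): LpVariable(f"u_{i}_{j}", cat=LpBinary) for (i, j) in arcs}
sat = {(i, j): LpVariable(f"s_{i}_{j}", cat=LpBinary) for (i, j) in arcs}

# objective: unit transport costs plus fixed cost K on every used arc
prob += lpSum(c * x[a] for a, (c, d) in arcs.items()) + K * lpSum(used.values())

for i, b in nodes.items():  # flow conservation at every node
    prob += (lpSum(x[a] for a in arcs if a[0] == i)
             - lpSum(x[a] for a in arcs if a[1] == i)) == b
for a, (c, d) in arcs.items():
    prob += x[a] <= d * used[a]  # capacity, and x > 0 forces used = 1
    prob += x[a] >= d * sat[a]   # sat = 1 forces the arc to run at capacity
prob += lpSum(used.values()) <= p1 * len(arcs)  # at most 100*p1% arcs used
prob += lpSum(sat.values()) >= p2 * len(arcs)   # at least 100*p2% saturated

prob.solve()
print({a: x[a].value() for a in arcs})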

optimizing a function to find global and local peaks with R

I have 6 parameters for which I know the max and min values. I have a complex function that takes the 6 parameters and returns a 7th value (say Y). I say complex because Y is not directly related to the 6 parameters; there are many embedded functions in between.
I would like to find the combination of the 6 parameters which returns the highest Y value. I first tried to calculate Y for every combination by constructing a hypercube, but I don't have enough memory in my computer. So I am looking for something like a Markov chain that progresses through the delimited parameter space and is able to get past local peaks.
Also, when I give one combination of the 6 parameters, I would like to know the highest local Y value. I tried to write an iterative chain like a Markov chain, but I am not sure how to proceed when the chain reaches an edge of the parameter space. Obviously, some algorithms should already exist for this.
Question: does anybody know the best functions in R to do these two things? I read that optim() could be appropriate for finding the global peak, but I am not sure that it can deal with complex functions (I prefer to ask before engaging in a long (for me) process of code writing). And for the local peaks? optim() should not be able to do this.
Thanks in advance for any leads,
Julien from France
Take a look at the Optimization and Mathematical Programming Task View on CRAN. I've personally found the differential evolution algorithm to be very fast and robust. It's implemented in the DEoptim package. The rgenoud package is another good candidate.
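If you ever want to prototype the same idea outside R, SciPy ships a comparable differential evolution routine. A minimal sketch, with a stand-in for your 6-parameter function (DE minimizes, so Y is negated to maximize it):

from scipy.optimize import differential_evolution

def neg_y(p):
    # stand-in for the real function of the 6 parameters
    return sum((v - 0.3) ** 2 for v in p)

bounds = [(0.0, 1.0)] * 6  # assumed min/max for the 6 parameters
res = differential_evolution(neg_y, bounds, seed=1)
print(res.x, -res.fun)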
I like to use the Metropolis-Hastings algorithm. Since you are limiting each parameter to a range, the simple thing to do is let your proposal distribution simply be uniform over the range. That way, you won't run off the edges. It won't be fast, but if you let it run long enough, it will do a good job of sampling your space. The samples will congregate at each peak, and will spread out around them in a way that reflects the local curvature.
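A sketch of that idea in Python (the question is about R, but the algorithm is identical): target a density proportional to exp(Y/temp) with a uniform independence proposal over the box, and track the best point seen. The objective, ranges, and temperature here are all made up:

import math, random

lo, hi = [0.0] * 6, [1.0] * 6  # assumed parameter ranges

def y(p):
    # hypothetical stand-in for the complex function of the 6 parameters
    return -sum((v - 0.3) ** 2 for v in p)

temp = 0.05  # lower temperature concentrates samples around the peaks
cur = [random.uniform(l, h) for l, h in zip(lo, hi)]
cur_y = y(cur)
best, best_y = cur, cur_y

for _ in range(20000):
    prop = [random.uniform(l, h) for l, h in zip(lo, hi)]  # never leaves the box
    prop_y = y(prop)
    # Metropolis accept/reject for a uniform proposal (the q terms cancel)
    if math.log(random.random()) < (prop_y - cur_y) / temp:
        cur, cur_y = prop, prop_y
        if cur_y > best_y:
            best, best_y = cur, cur_y

print(best, best_y)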