Heuristics / Solver for high dimensional planning problem - optimization

To optimize a production system by planning ~1000 timesteps ahead, I am trying to solve an optimization problem with roughly 20,000 dimensions, containing both binary and continuous variables as well as several complex constraints.
I know this is not much information to go on, but can someone give a hint as to which approach would be suitable for a problem this large? Would you recommend a metaheuristic or a commercial solver?
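For orientation, here is a minimal sketch (all data, costs, and constraints invented) of the kind of time-indexed MILP this question describes, written with PuLP and the bundled CBC solver; a commercial solver such as Gurobi or CPLEX accepts the same kind of formulation and typically scales much further.

```python
import pulp

T = 1000                                                     # planning horizon (timesteps)
demand = [6.0 if t % 3 == 0 else 0.0 for t in range(T)]      # made-up demand profile
cap, startup_cost, unit_cost = 20.0, 50.0, 2.0               # made-up parameters

prob = pulp.LpProblem("production_plan", pulp.LpMinimize)
on = pulp.LpVariable.dicts("on", range(T), cat="Binary")     # binary on/off decision
x = pulp.LpVariable.dicts("x", range(T), lowBound=0)         # continuous output level

prob += pulp.lpSum(startup_cost * on[t] + unit_cost * x[t] for t in range(T))
for t in range(T):
    prob += x[t] <= cap * on[t]          # production is only possible while the unit is on
    prob += x[t] >= demand[t]            # toy demand-coverage constraint

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```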

Related

Is TensorFlow the way to go for this optimization problem?

I have to optimize the result of a process that depends on a large number of variables, e.g. a laser engraving system where the engraving depth depends on the laser speed, distance, power, and so on.
The final objective is the minimization of the engraving time, or equivalently the maximization of the laser speed. All the other parameters can vary, but must stay within safe bounds.
I have never used any machine learning tools, but to my very limited knowledge this seems like a good use case for TensorFlow or any other machine learning library.
I would experimentally gather data points to train a model, test it, and then use a gradient descent optimizer to find the parameters (within bounds) that maximize the laser travel velocity.
Does this sound feasible? How would you approach such a problem? Can you link to any examples available online?
Thank you,
Riccardo
I’m not quite sure I understood the problem correctly; could you add some example data and the desired output?
As far as I understand, it could be feasible to use TensorFlow, but I believe there are better solutions to this problem. Let me expand on this.
TensorFlow is a framework focused on the development of deep learning models. These usually require a lot of data (how much really depends on the problem), and I don’t believe gathering that data manually by yourself would be enough unless your team is quite big or you already have some data collected.
Also, since you have a minimization (or maximization) problem over variables that lie within a known range, I think this is a case for operations research (OR) optimization rather than machine learning. Check this example of OR.
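To illustrate the OR framing, here is a minimal sketch with an invented engraving-depth model and placeholder bounds; in practice the depth model would come from physics or from fitted experimental data, and the bound values are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def depth(params):
    speed, power, distance = params
    return 0.8 * power / (speed * distance)   # hypothetical engraving-depth model

target_depth = 0.5
bounds = [(10.0, 200.0),   # speed (mm/s)  -- the "safe bounds" from the question
          (5.0, 60.0),     # power (W)
          (1.0, 10.0)]     # distance (mm)

# Maximise speed == minimise -speed, while keeping depth(params) >= target_depth.
res = minimize(lambda p: -p[0],
               x0=[50.0, 30.0, 5.0],
               bounds=bounds,
               constraints=[{"type": "ineq",
                             "fun": lambda p: depth(p) - target_depth}],
               method="SLSQP")
print("max speed within bounds:", res.x[0])
```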

How to speed up the process of generating a model in GAMS

I am using GAMS to solve a nonlinear programming problem. The problem size is about 500k rows, 500k columns, and 1M non-zeros. I found that it takes a long time to generate the model, sometimes even longer than solving it. Are there any ways to speed up the generation process inside GAMS? Alternatively, if I switch to other platforms, are there any good choices for resolving this issue? Thanks.

Large scale linearly-constrained convex quadratic optimization - R/Python/Gurobi

I have a series of linearly-constrained convex quadratic optimization problems that have around 100,000 variables, 1 linear constraint, and 100,000 bound constraints (the same as the number of variables, since the solution has to be positive). I am planning to use Gurobi in R and/or Python. I have noticed that, although for small problems the solver can find a solution quite quickly, for medium to large problems (like the one I have) it takes forever (some benchmarks are shown here, credits to Stéphane Caron).
I know that QP methods do not scale very well, but I'd like to know if you are aware of any solver/technique/tool that solves medium to large QP problems faster.
Thanks!
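Since the actual problem data is only linked in the original post, here is a hedged sketch with toy data of how such a QP (one linear constraint plus positivity bounds) could be set up through Gurobi's Python matrix interface; the barrier method with crossover disabled is often the practical choice at this scale.

```python
import numpy as np
import scipy.sparse as sp
import gurobipy as gp
from gurobipy import GRB

n = 100_000
rng = np.random.default_rng(0)
q = rng.uniform(1.0, 2.0, n)             # diagonal of Q: toy convex objective
Q = sp.diags(q)
c = rng.normal(size=n)

m = gp.Model("large_qp")
x = m.addMVar(n, lb=0.0)                 # positivity (bound) constraints
m.setObjective(0.5 * x @ Q @ x + c @ x, GRB.MINIMIZE)
m.addConstr(np.ones(n) @ x == 1.0)       # the single linear constraint
m.Params.Method = 2                      # barrier usually scales best for large QPs
m.Params.Crossover = 0                   # skip crossover if a vertex solution isn't needed
m.optimize()
```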

Why does GLPSOL (GLPK) take a long time to solve a large MIP?

I have a large MIP problem, and I use GLPSOL from GLPK to solve it. However, solving the LP relaxation takes many iterations, and in each iteration the objective and infeasibility values stay the same. I think it has found the optimal solution, but it won't stop and has continued to run for many hours. Will this happen for every large-scale MIP/LP problem? How can I deal with such cases? Can anyone give me any suggestions? Thanks!
Solving MIPs is NP-hard in general, which means there are instances that can't be solved efficiently. But our problems often have enough structure that heuristics can help to solve these models; this has allowed huge gains in solving capabilities over the last decades (overview).
To understand the basic approach, and to understand what exactly the problem is in your case (no progress in the upper bound, no progress in the lower bound, ...), read Practical Guidelines for Solving Difficult Mixed Integer Linear Programs.
Keep in mind that there are, in general, huge gaps between commercial solvers like Gurobi/CPLEX and non-commercial ones (especially in MIP solving). There is a large set of benchmarks here.
There are also a lot of parameters to tune. Gurobi, for example, has different parameter presets: one targets finding feasible solutions quickly; another targets proving the bounds.
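As a concrete illustration (file name and limit values invented), Gurobi's MIPFocus parameter switches between these emphases, and a time limit plus a gap tolerance are the usual way to stop a run that is crawling while still keeping the incumbent:

```python
import gurobipy as gp

m = gp.read("model.lp")          # hypothetical file standing in for your MIP instance

# MIPFocus=1 emphasises finding feasible solutions quickly,
# MIPFocus=2 emphasises proving optimality, MIPFocus=3 emphasises the bound.
m.Params.MIPFocus = 1
m.Params.TimeLimit = 600         # stop after 10 minutes and keep the incumbent
m.Params.MIPGap = 0.01           # accept a solution within 1% of the best bound
m.optimize()

if m.SolCount > 0:
    print("best objective:", m.ObjVal, "remaining gap:", m.MIPGap)
```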
My personal opinion: compared to CBC (open source) and SCIP (open source, but not free for commercial usage), GLPK is quite bad.

What statistics concepts are useful for profiling?

I've been meaning to brush up a little on my statistics. One area where statistics seems helpful is profiling code: profiling almost always involves me trying to pull some information out of a large amount of data.
Are there any subjects in statistics that I could brush up on to get a better understanding of profiler output? Bonus points if you can point me to a book or other resource that will help me understand these subjects better.
I'm not sure books on statistics are that useful when it comes to profiling. Running a profiler should give you a list of functions and the percentage of time spent in each. You then look at the one that took the largest percentage and see if you can optimise it in any way. Repeat until your code is fast enough. Not much scope for standard deviation or chi-squared there, I feel.
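As a minimal sketch of that workflow in Python (the function names are invented), cProfile gives exactly such a per-function breakdown, which you can sort and inspect before deciding what to optimise:

```python
import cProfile
import pstats

def slow_part():
    return sum(i ** 2 for i in range(200_000))

def fast_part():
    return sum(range(1_000))

def main():
    for _ in range(10):
        slow_part()
        fast_part()

cProfile.run("main()", "profile.out")                 # profile and dump the stats
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)   # top 5 by time
```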
All I know about profiling is what I just read in Wikipedia :-) but I do know a fair bit about statistics. The profiling article mentioned sampling and statistical analysis of sampled data. Clearly statistical analysis will be able to use those samples to develop some statistical statements on performance. Let's say you have some measure of performance, m, and you sample that measure 1000 times. Let's also say you know something about the underlying processes that created that value of m. For instance, if m is the SUM of a bunch of random variates, the distribution of m is probably normal. If m is the PRODUCT of a bunch of random variates, the distribution is probably lognormal. And so on...
If you don't know the underlying distribution and you want to make some statement about comparing performance, you may need what are called non-parametric statistics.
Overall, I'd suggest any standard text on statistical inference (DeGroot), a text that covers different probability distributions and where they're applicable (Hastings & Peacock), and a book on non-parametric statistics (Conover). Hope this helps.
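A quick simulation (with made-up uniform "parts") illustrates the sum-versus-product point above: the sums come out roughly normal, while the products are right-skewed and only their logarithms look normal.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)

# 1000 samples of a performance measure m, each built from 50 random "parts".
parts = rng.uniform(0.5, 1.5, size=(1000, 50))

m_sum = parts.sum(axis=1)      # sum of many variates     -> roughly normal
m_prod = parts.prod(axis=1)    # product of many variates -> roughly lognormal

print("skewness of m_sum:      ", skew(m_sum))           # close to 0 (normal-ish)
print("skewness of m_prod:     ", skew(m_prod))          # clearly positive (right-skewed)
print("skewness of log(m_prod):", skew(np.log(m_prod)))  # close to 0 again
```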
Statistics is fun and interesting, but for performance tuning, you don't need it. Here's an explanation why, but a simple analogy might give the idea.
A performance problem is like an object (which may actually be multiple connected objects) buried under an acre of snow, and you are trying to find it by probing randomly with a stick. If your stick hits it a couple of times, you've found it; its exact size is not so important. (If you really want a better estimate of how big it is, take more probes, but that won't change its size.) The number of times you have to probe the snow before you find it depends on how much of the snow's area it lies under.
Once you find it, you can pull it out. Now there is less snow, but there might be more objects under the snow that remains. So with more probing, you can find and remove those as well. In this way, you can keep going until you can't find anything more that you can remove.
In software, the snow is time, and probing is taking random-time samples of the call stack. In this way, it is possible to find and remove multiple problems, resulting in large speedup factors.
And statistics has nothing to do with it.
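For the curious, here is a toy sketch of that idea in Python: one thread takes random-time samples of another thread's call stack and counts which functions show up. The workload function is invented, and real tools (e.g. py-spy) do this far more robustly; this is only meant to make the "probing" concrete.

```python
import collections
import sys
import threading
import time
import traceback

def sample_stacks(thread_id, interval=0.005, duration=1.0):
    """Take random-time samples of one thread's call stack and count functions."""
    counts = collections.Counter()
    end = time.time() + duration
    while time.time() < end:
        frame = sys._current_frames().get(thread_id)
        if frame is not None:
            for fs in traceback.extract_stack(frame):
                counts[fs.name] += 1
        time.sleep(interval)
    return counts

def busy():                                   # invented workload to be "profiled"
    total = 0
    for i in range(20_000_000):
        total += i * i
    return total

worker = threading.Thread(target=busy)
worker.start()
samples = sample_stacks(worker.ident)         # the "probes into the snow"
worker.join()
for name, n in samples.most_common(5):
    print(f"{name} appeared in {n} samples")
```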
Zed Shaw, as usual, has some thoughts on the subject of statistics and programming, but he puts them much more eloquently than I could.
I think that the most important statistical concept to understand in this context is Amdahl's law. Although commonly referred to in the context of parallelization, Amdahl's law has a more general interpretation. Here's an excerpt from the Wikipedia page:
More technically, the law is concerned with the speedup achievable from an improvement to a computation that affects a proportion P of that computation, where the improvement has a speedup of S. (For example, if an improvement can speed up 30% of the computation, P will be 0.3; if the improvement makes the portion affected twice as fast, S will be 2.) Amdahl's law states that the overall speedup of applying the improvement will be 1 / ((1 - P) + P/S).
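Plugging the quoted example into the formula makes the point concrete:

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the runtime is sped up by a factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# The example from the quote: 30% of the computation made twice as fast.
print(amdahl_speedup(0.3, 2))   # ~1.18 -- the untouched 70% limits the gain
```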
One concept related to both statistics and profiling (your original question) that is very useful, and that you see advised from time to time, concerns "micro-profiling": many programmers will rally and yell "you can't micro-profile, micro-profiling simply doesn't work, too many things can influence your measurement".
Yet simply run your measurement n times and keep only the x% of observations around the median, because the median is a "robust statistic" (unlike the mean) that is not influenced by outliers, and outliers are precisely the values you do not want to take into account when doing such profiling.
This is a genuinely useful statistical technique for programmers who want to micro-profile their code.
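A minimal sketch of that idea (the run count, keep fraction, and benchmark target are arbitrary choices for illustration):

```python
import statistics
import time

def robust_time(fn, runs=51, keep_fraction=0.5):
    """Time fn() `runs` times and summarise only the observations around the median."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    samples.sort()
    k = max(1, int(len(samples) * keep_fraction))
    start = (len(samples) - k) // 2
    kept = samples[start:start + k]           # the slice centred on the median
    return statistics.median(kept), min(kept), max(kept)

# Hypothetical micro-benchmark target.
med, lo, hi = robust_time(lambda: sum(i * i for i in range(10_000)))
print(f"median {med*1e6:.1f} us (kept range {lo*1e6:.1f}-{hi*1e6:.1f} us)")
```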
If you apply the MVC programming pattern with PHP, this is what you would need to profile:
Application:
    Controller setup time
    Model setup time
    View setup time
Database:
    Query - time
Cookies:
    Name - value
Sessions:
    Name - value