Mosek runs indefinitely on large MIQCQP

We have a large-scale MIQCQP problem. Problem size:
Decision vars: ~9K (3K continuous and 6K integer vars)
Objective: 1 linear expression
Constraints (linear): 35K linear constraints (9K lower bounds + 9K upper bounds + remaining inequality constraints)
Constraints (quadratic): 1 quadratic constraint (with a 3K x 3K Q matrix, which is PSD)
When we use Mosek (via CVXPY), it runs indefinitely (in the branch & bound logic). Moreover, in the Mosek logs, BEST_INT_OBJ and REL_GAP(%) are displayed as NA throughout.
Since this problem contains proprietary data, it's difficult to share it.
Are there any general tips or tricks to speed up the problem?
(Weirdly, Gurobi can solve the same problem in less than a minute.)
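For reference (not a known fix for this model), native Mosek parameters such as a time limit or a looser MIP gap can be passed through CVXPY via the mosek_params dict. The tiny stand-in model and the parameter values below are illustrative assumptions only:

import cvxpy as cp
import numpy as np

# Tiny stand-in model; the real problem would be the MIQCQP described above.
x = cp.Variable(3, integer=True)
prob = cp.Problem(cp.Minimize(cp.sum_squares(x - np.array([0.4, 1.6, 2.2]))))

# Native Mosek parameters go through the mosek_params dict; the values here
# are illustrative (10-minute time limit, accept a 1% relative MIP gap).
prob.solve(
    solver=cp.MOSEK,
    verbose=True,
    mosek_params={
        "MSK_DPAR_OPTIMIZER_MAX_TIME": 600.0,
        "MSK_DPAR_MIO_TOL_REL_GAP": 1e-2,
    },
)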

Related

SLSQP in ScipyOptimizeDriver only executes one iteration, takes a very long time, then exits

I'm trying to use SLSQP to optimise the angle of attack of an aerofoil to place the stagnation point in a desired location. This is purely a test case to check that my method for calculating the partials for the stagnation position is valid.
When run with COBYLA, the optimisation converges to the correct alpha (6.04144912) after 47 iterations. When run with SLSQP, it completes one iteration, then hangs for a very long time (10-20 minutes or more; I didn't time it exactly), and exits with an incorrect value. The output is:
Driver debug print for iter coord: rank0:ScipyOptimize_SLSQP|0
--------------------------------------------------------------
Design Vars
{'alpha': array([0.5])}
Nonlinear constraints
None
Linear constraints
None
Objectives
{'obj_cmp.obj': array([0.00023868])}
Driver debug print for iter coord: rank0:ScipyOptimize_SLSQP|1
--------------------------------------------------------------
Design Vars
{'alpha': array([0.5])}
Nonlinear constraints
None
Linear constraints
None
Objectives
{'obj_cmp.obj': array([0.00023868])}
Optimization terminated successfully. (Exit mode 0)
Current function value: 0.0002386835700364719
Iterations: 1
Function evaluations: 1
Gradient evaluations: 1
Optimization Complete
-----------------------------------
Finished optimisation
Why might SLSQP be misbehaving like this? As far as I can tell, there are no incorrect analytical derivatives when I look at check_partials().
The code is quite long, so I put it on Pastebin here:
core: https://pastebin.com/fKJpnWHp
inviscid: https://pastebin.com/7Cmac5GF
aerofoil coordinates (NACA64-012): https://pastebin.com/UZHXEsr6
You asked two questions whose answers ended up being unrelated to each other:
Why is the model so slow when you use SLSQP, but fast when you use COBYLA?
Why does SLSQP stop after one iteration?
1) Why is SLSQP so slow?
COBYLA is a gradient-free method. SLSQP uses gradients. So the solid bet was that the slowdown happened when SLSQP asked for the derivatives (which COBYLA never did).
That's where I went to look first. Computing derivatives happens in two steps: a) compute partials for each component, and b) solve a linear system with those partials to compute totals. The slowdown has to be in one of those two steps.
Since you can run check_partials without too much trouble, step (a) is not likely to be the culprit. So that means step (b) is probably where we need to speed things up.
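As an aside, the two steps can be timed separately. A minimal sketch, assuming the Problem instance is called p and using the variable names from the debug output above:

import time

# Hypothetical timing sketch; p is the already-set-up OpenMDAO Problem.
p.run_model()
t0 = time.time()
p.check_partials(compact_print=True)                          # step (a) only
t1 = time.time()
p.compute_totals(of=['obj_cmp.obj'], wrt=['alpha'])           # steps (a) + (b)
t2 = time.time()
print(f"partials: {t1 - t0:.1f} s, totals: {t2 - t1:.1f} s")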
I ran the summary utility (openmdao summary core.py) on your model and saw this:
============== Problem Summary ============
Groups: 9
Components: 36
Max tree depth: 4
Design variables: 1 Total size: 1
Nonlinear Constraints: 0 Total size: 0
equality: 0 0
inequality: 0 0
Linear Constraints: 0 Total size: 0
equality: 0 0
inequality: 0 0
Objectives: 1 Total size: 1
Input variables: 87 Total size: 1661820
Output variables: 44 Total size: 1169614
Total connections: 87 Total transfer data size: 1661820
Then I generated an N2 of your model and saw this:
So we have an output vector that is 1169614 elements long, which means your linear system is a matrix that is about 1e6 x 1e6. That's pretty big, and you are using a DirectSolver to try to compute and store a factorization of it. That's the source of the slowdown. Using DirectSolvers is great for smaller models (a rule of thumb is that the output vector should be less than 10000 elements). For larger ones you need to be more careful and use more advanced linear solvers.
In your case we can see from the N2 that there is no coupling anywhere in your model (nothing in the lower triangle of the N2). Purely feed-forward models like this can use the much simpler and faster LinearRunOnce solver (which is the default if you don't set anything else). So I turned off all the DirectSolvers in your model, and the derivatives became effectively instant; the N2 then shows LinearRunOnce everywhere instead.
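In code, the change is just assigning a different linear solver to each group that currently sets a DirectSolver. A minimal sketch (the group path here is a placeholder, not from your model):

import openmdao.api as om

# Replace the expensive factorization with the default feed-forward solver.
# 'aero_group' is a placeholder; do this for each group with a DirectSolver.
p.model.linear_solver = om.LinearRunOnce()
p.model.aero_group.linear_solver = om.LinearRunOnce()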
The choice of the best linear solver is extremely model-dependent. One factor to consider is computational cost; another is numerical robustness. This issue is covered in some detail in Section 5.3 of the OpenMDAO paper, and I won't cover everything here. But, very briefly, here is a summary of the key considerations.
When just starting out with OpenMDAO, using DirectSolver is both the simplest and usually the fastest option. It is simple because it does not require consideration of your model structure, and it's fast because for small models OpenMDAO can assemble the Jacobian into a dense or sparse matrix and provide that for direct factorization. However, for larger models (or models with very large vectors of outputs), the cost of computing the factorization is prohibitively high. In this case, you need to break the solver structure down more intentionally and use other linear solvers (sometimes in conjunction with the direct solver; see Section 5.3 of the OpenMDAO paper and this OpenMDAO doc).
You stated that you wanted to use the DirectSolver to take advantage of the sparse Jacobian storage. That was a good instinct, but the way OpenMDAO is structured, this is not a problem either way. We are pretty far down in the weeds now, but since you asked, I'll give a short summary explanation. As of OpenMDAO 3.7, only the DirectSolver requires an assembled Jacobian at all (and in fact, it is the linear solver itself that determines this for whatever system it is attached to). All other LinearSolvers work with a DictionaryJacobian (which stores each sub-jac keyed to the [of-var, wrt-var] pair). Each sub-jac can be stored as dense or sparse (depending on how you declared that particular partial derivative). The dictionary Jacobian is effectively a form of sparse matrix, though not a traditional one. The key takeaway here is that if you use LinearRunOnce (or any other solver), then you are getting memory-efficient data storage regardless. It is only the DirectSolver that switches over to a more traditional assembly of an actual matrix object.
Regarding the issue of memory allocation, there is a helpful illustration of this in the OpenMDAO docs.
2) Why does SLSQP stop after one iteration?
Gradient-based optimizations are very sensitive to scaling. I plotted your objective function inside your allowed design space and got this:
So we can see that the minimum is at about 6 degrees, but the objective values are TINY (about 1e-4).
As a general rule of thumb, getting your objective to around order of magnitude 1 is a good idea (we have a scaling report feature that helps with this). I added a reference that was about the order of magnitude of your objective:
p.model.add_objective('obj', ref=1e-4)
Then I got a good result:
Optimization terminated successfully (Exit mode 0)
Current function value: [3.02197589e-11]
Iterations: 7
Function evaluations: 9
Gradient evaluations: 7
Optimization Complete
-----------------------------------
Finished optimization
alpha = [6.04143334]
time: 2.1188600063323975 seconds
Unfortunately, scaling is just hard with gradient-based optimization. Starting by scaling your objective/constraints to order 1 is a decent rule of thumb, but it's common that you need to adjust things beyond that for more complex problems.

Karger's Algorithm - Running Time - Edge Contraction

In Karger's min-cut algorithm for undirected (possibly weighted) multigraphs, the main operation is to contract a randomly chosen edge and merge its incident vertices into one metavertex. This process is repeated until two vertices remain; these vertices correspond to a cut. The algorithm can be implemented with an adjacency list.
Questions:
How can I find the particular edge that has been chosen to be contracted?
How does an edge get contracted (in an unweighted and/or weighted graph)?
Why does this procedure take quadratic time?
Edit: I have found some information that the runtime can be quadratic, due to the fact that we perform n-2 contractions of vertices and each contraction can take O(n) time. It would be great if somebody could explain to me why a contraction takes linear time with an adjacency list. Note that a contraction consists of: deleting one adjacent edge, merging the two vertices into a supernode, and making sure that the remaining adjacent edges are connected to the supernode.
Pseudocode:
procedure contract(G = (V, E)):
    while |V| > 2:
        choose an edge uniformly at random
        contract its endpoints
        delete self-loops
    return the cut
I have read the related topic Karger Min cut algorithm running time, but it did not help me. Also, I do not have that much experience, so a layman's-terms explanation would be very much appreciated!
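For intuition on the linear-time claim, here is a hypothetical Python sketch (mine, not from the question or the linked topic). It stores the multigraph as a nested dict of edge multiplicities, adj[x][y] = number of parallel x-y edges, so contracting (u, v) is just "add row v into row u", which touches at most n entries:

import random
from collections import defaultdict

def random_edge(adj):
    # Pick an edge uniformly at random: first an endpoint with probability
    # proportional to its weighted degree, then a neighbour proportional to
    # the edge multiplicity. (Question 1.)
    verts = list(adj)
    degrees = [sum(adj[x].values()) for x in verts]
    u = random.choices(verts, weights=degrees)[0]
    nbrs = list(adj[u])
    v = random.choices(nbrs, weights=[adj[u][w] for w in nbrs])[0]
    return u, v

def contract(adj, u, v):
    # Merge v into u (questions 2 and 3): every neighbour w of v is visited
    # exactly once, so one contraction costs O(n) dictionary updates.
    for w, cnt in list(adj[v].items()):
        if w != u:              # u-v edges would become self-loops: drop them
            adj[u][w] += cnt    # redirect v's edges to the supernode u
            adj[w][u] += cnt
        del adj[w][v]           # neighbours forget the vanished vertex v
    del adj[v]

def karger(edge_list):
    # One run of the contraction procedure: n - 2 contractions at O(n) each,
    # hence quadratic time per run.
    adj = defaultdict(lambda: defaultdict(int))
    for x, y in edge_list:      # assumes a simple undirected input graph
        adj[x][y] += 1
        adj[y][x] += 1
    while len(adj) > 2:
        u, v = random_edge(adj)
        contract(adj, u, v)
    a, b = list(adj)
    return adj[a][b]            # number of edges crossing the cut

For a weighted graph, the same structure works with adj[x][y] holding the total edge weight instead of a multiplicity, and random_edge sampling proportionally to weight.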

Why does IPOPT converge faster when using path constraints instead of variable bounds?

I am using GPOPS-II (commercial optimisation software, unfortunately) to solve an aircraft trajectory optimisation problem. GPOPS-II transcribes the problem to a NLP problem that is subsequently solved by IPOPT, an NLP solver.
When trying to solve my problem, I impose a bound on the altitude of the aircraft. I am setting an upper limit of 5500 m on the altitude. Now, I can do this in two ways. First of all, I can set a direct upper bound on the state variable altitude of 5500 m. Doing this, IPOPT requires approximately 1000 iterations and 438 seconds until it finds an optimal solution.
Secondly, I can impose a path constraint on the state variable altitude of 5500 m. At the same time, I relax the direct bound on the state variable altitude to 5750 m. Now, these problem formulations are logically equivalent, but apparently not numerically: this time IPOPT takes only 150 iterations and 240 seconds to converge to the exact same optimal solution.
I already found a discussion where someone states that loosening the bounds of an NLP promotes faster convergence, because of the nature of interior point methods. This seems logical to me: an interior point solver transforms the problem into a barrier problem in which the constraints are essentially converted into a steeply increasing cost near the constraint boundaries. As a result, the interior point solver will (initially) avoid the bounds of the problem (because of the increasing penalty near the constraint boundaries) and converge at a slower rate.
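For concreteness, here is the standard log-barrier formulation (a textbook sketch, not GPOPS-II specific; IPOPT's actual algorithm is described in Wächter & Biegler's paper). For a problem

\min_x f(x) \quad \text{s.t.} \quad c(x) = 0, \qquad x^L \le x \le x^U,

the solver approximately solves a sequence of barrier subproblems

\min_x \; f(x) \;-\; \mu \sum_i \ln(x_i - x_i^L) \;-\; \mu \sum_i \ln(x_i^U - x_i) \quad \text{s.t.} \quad c(x) = 0,

driving the barrier parameter \mu \to 0. A general path constraint g(x) \le g^U, by contrast, is typically converted into an equality with a bounded slack variable, g(x) - s = 0 with s \le g^U, so the logarithmic barrier acts on the slack s rather than on g(x) directly, and g(x) \le g^U only has to hold at convergence; this connects to the quote from the IPOPT documentation below.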
My questions are the following:
How do the mathematical formulations of bound and of path constraints differ in an interior point method?
Why doesn't setting the bound of the path constraint to 5500 m slow down convergence in the same way the variable bound does?
Thanks in advance!
P.S. The optimal solution lies near the constraint boundary of the altitude of 5500 m; in the optimal solution, the aircraft should reach h = 5500 m at the final time, and as a consequence, it flies near this altitude for some time before t_f.
I found the answer to my first question in this post. I thought that IPOPT treated path constraints and bounds on variables equally. It turns out that "The only constraints that Ipopt is guaranteed to satisfy at all intermediate iterations are simple upper and lower bounds on variables. Any other linear or nonlinear equality or inequality constraint will not necessarily be satisfied until the solver has finished converging at the final iteration (if it can get to a point that satisfies termination conditions)."
So setting bounds on the variables gives a hard bound on the decision variables, whereas the path constraints give soft bounds.
This also partly answers my second question, in that the difference in convergence is explicable. However, with this knowledge, I'd expect setting bounds to result in faster convergence.

LDPC behaviour as density of parity-check matrix increases

My assignment is to implement a loopy belief propagation algorithm for a low-density parity-check (LDPC) code. This code uses a parity-check matrix H which is rather sparse (say, a 750-by-1000 binary matrix with an average of about 3 ones per column). The code to generate the parity-check matrix is taken from here
Anyway, one of the subtasks is to check the reliability of the LDPC code as the density of the matrix H increases. So, I fix the channel at 0.5 capacity, fix my code rate at 0.35, and begin to increase the density of the matrix. As the average number of ones in a column goes from 3 to 7 in steps of 1, disaster happens. With 3 or 4 the code copes perfectly well. With higher density it begins to fail: not only does it sometimes fail to converge, it oftentimes converges to the wrong codeword and produces errors.
So my question is: what type of behaviour is expected of an LDPC code as its sparse parity-check matrix becomes denser? Bonus question for skilled mind-readers: in my case (as the code performance degrades), is it more likely because the loopy belief propagation algorithm has no guarantee on convergence, or because I made a mistake implementing it?
After talking to my TA and other students I understand the following:
According to Shannon's theorem, the reliability of the code should increase with the density of the parity check matrix. That is simply because more checks are made.
However, since we use loopy belief propagation, the decoder struggles more and more as additional edges in the graph form more and more loops. Therefore, the actual performance degrades.
Whether or not I made a mistake in my code cannot be established from this behaviour alone. However, since my code does work for sparse matrices, it is likely that the implementation is fine.

Standard Errors for Differential Evolution

Is it possible to calculate standard errors for Differential Evolution?
From the Wikipedia entry:
http://en.wikipedia.org/wiki/Differential_evolution
It's not derivative-based (indeed, that is one of its strengths), but how then do you calculate the standard errors?
I would have thought some kind of bootstrapping strategy might be applicable, but I can't seem to find any sources that apply bootstrapping to DE.
Baz
Concerning the standard errors, differential evolution is just like any other evolutionary algorithm.
Using a bootstrapping strategy seems a good idea: the usual formulas assume a normal (Gaussian) distribution for the underlying data. That's almost never true for evolutionary computation (exponential distributions being far more common, probably followed by bimodal distributions).
The simplest bootstrap method involves taking the original data set of N numbers and sampling from it to form a new sample (a "resample") that is also of size N. The resample is taken from the original using sampling with replacement. This process is repeated a large number of times (typically 1000 or 10000 times), and for each of these bootstrap samples we compute its mean / median (each of these is called a bootstrap estimate).
The standard deviation (SD) of the means is the bootstrapped standard error (SE) of the mean and the SD of the medians is the bootstrapped SE of the median (the 2.5th and 97.5th centiles of the means are the bootstrapped 95% confidence limits for the mean).
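As a concrete sketch (mine, not from the answer above), assuming the data are best-of-run objective values collected from repeated DE runs:

import numpy as np

rng = np.random.default_rng(0)

def bootstrap_se(data, stat=np.mean, n_resamples=10_000):
    # Resample with replacement (size N each time), compute the statistic on
    # each resample, and take the SD of those bootstrap estimates as the SE.
    data = np.asarray(data)
    estimates = np.array([
        stat(rng.choice(data, size=data.size, replace=True))
        for _ in range(n_resamples)
    ])
    lo, hi = np.percentile(estimates, [2.5, 97.5])  # 95% confidence limits
    return estimates.std(ddof=1), (lo, hi)

# Hypothetical usage: best-of-run values from 30 independent DE runs
# (exponential data here only to mimic the skewed distributions noted above).
best_of_run = rng.exponential(scale=1.0, size=30)
se_mean, ci_mean = bootstrap_se(best_of_run, stat=np.mean)
se_median, ci_median = bootstrap_se(best_of_run, stat=np.median)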
Warnings:
the word population is used with different meanings in different contexts (bootstrapping vs evolutionary algorithm)
in any GA or GP, the average of the population tells you almost nothing of interest. Use the mean/median of the best-of-run
the average of a set that is not normally distributed produces a value that behaves non-intuitively, especially if the probability distribution is skewed: large values in the "tail" can dominate, and the average tends to reflect the typical value of the "worst" data rather than the typical value of the data in general. In this case the median is better
Some interesting links are:
A short guide to using statistics in Evolutionary Computation
An Introduction to Statistics for EC Experimental Analysis