How to frame a large-scale optimization in Python calling a SCIP solver

I'm trying to use SCIP through Python and have installed the SCIP Optimization Suite 3.2.1. I have trouble framing my optimization problem through PySCIPOPT. Since I have 2000+ variables, I am wondering whether I can use matrix notation to set up the problem in Python.

No, this is not possible, because SCIP is constraint based and does not rely on a central matrix structure. A problem with 2000 variables is not at all large, by the way, and should be processed within a second.
This is how you would transform a quadratic constraint matrix Q of size 2:
Q = [a b; c d], x = [x1; x2]
x'Qx = a*x1^2 + d*x2^2 + (b+c)*x1*x2
This can then be passed to SCIP with the addCons() method.
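For a concrete picture, here is a minimal PySCIPOPT sketch of that expansion, generalized to an n x n matrix; the coefficient matrix Q, the right-hand side rhs and the variable names are made-up example data, not part of the question:

from pyscipopt import Model, quicksum

n = 2
Q = [[1.0, 0.5],
     [1.5, 2.0]]   # example coefficient matrix (assumption)
rhs = 10.0         # example right-hand side (assumption)

model = Model("quadratic_example")
x = [model.addVar(name=f"x{i}", lb=None) for i in range(n)]

# expand x'Qx term by term and hand the expression to addCons()
quad = quicksum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
model.addCons(quad <= rhs)

model.setObjective(quicksum(x[i] for i in range(n)), "minimize")
model.optimize()

The double loop is exactly the 2x2 expansion above written for arbitrary n; with 2000+ variables it simply becomes a longer expression, which SCIP handles without problems.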

Related

Solving nonlinear bin packing optimization problem in python

Is there a straightforward way (e.g. some module with a commonly used solver) to solve a problem derived from the well-known bin packing problem (see https://en.wikipedia.org/wiki/Bin_packing_problem) in Python?
In detail, the bin packing problem, where s(i) are the items' weights,
minimize K = sum_j y_j                                # use as few bins as possible
s.t.  sum_i s(i) * x_{i,j} <= B * y_j   for all j     # bin capacity B must not be exceeded
      sum_j x_{i,j} = 1                 for all i     # each item is placed in exactly one bin
      x_{i,j}, y_j being binary variables in {0,1}
shall be extended towards the following objective function K_nonlinear:
minimize K_nonlinear = sum_j ( y_j + Std({ s(i) : x_{i,j} = 1 }) )
s.t. # the same constraints as the bin packing problem (above)
Hence, not only should the number of bins in use be minimized, but also the standard deviation of the weights of the items sharing a bin (which will in general require some compromise between the two goals). Therefore the problem becomes nonlinear, in my opinion.
I am grateful for any advice on how to attack this problem using Python (a Python API would be sufficient; the algorithm itself could also be implemented in any other language).
Up to now, I have tried to extend an existing bin packing solver (based on the COIN-OR branch-and-cut solver) with the additional term in the objective function, which failed. Presumably this is due to the induced nonlinearity.
Many thanks in advance
Instead of using the standard deviation, it may be easier to minimize the range: max-min. This can be formulated in a linear fashion:
Minimize xmax-xmin
xmax >= x[i] for all i
xmin <= x[i] for all i
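A minimal PySCIPOPT sketch of this linear range formulation; the item weights and bounds are illustrative, and in the full bin packing model the x[i] would be tied to the assignment variables rather than fixed:

from pyscipopt import Model

weights = [4.0, 7.0, 5.5]   # example item weights (assumption)
model = Model("range_min")

x = [model.addVar(name=f"x{i}", lb=0.0, ub=20.0) for i in range(len(weights))]
xmax = model.addVar(name="xmax", lb=None)
xmin = model.addVar(name="xmin", lb=None)

for xi, w in zip(x, weights):
    model.addCons(xi == w)     # stand-in for the real packing constraints
    model.addCons(xmax >= xi)  # xmax bounds every x[i] from above
    model.addCons(xmin <= xi)  # xmin bounds every x[i] from below

model.setObjective(xmax - xmin, "minimize")
model.optimize()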

What is the most reliable method to optimize linear objective function with nonlinear constraints and Why?

I currently solve the following problem:
Basically, this problem is equivalent to finding the confidence interval for logistic regression. The objective function is linear (no second derivative), while the constraint is nonlinear. Specifically, I used n = 1, alpha = 0.05, and theta = logit of p, where p ∈ [0, 1] (for details, please see the binomial distribution). Thus, I have closed-form expressions for the gradient of the objective and the Jacobian of the constraints.
In R, I first tried the alabama::auglag function, which uses the augmented Lagrangian method with BFGS (as a default), and the nloptr::auglag function, which uses the augmented Lagrangian method with SLSQP (i.e. SLSQP as the local minimizer). Although they were able to find the (global) minimizer most of the time, they sometimes failed and produced a far-off solution.
In the end, I obtained the best (most stable) results using the SLSQP method directly (nloptr::nloptr with algorithm=NLOPT_LD_SLSQP).
Now, my question is why SLSQP produced better results in this setting than the first two methods, and why the first two methods (augmented Lagrangian with BFGS and with SLSQP as a local optimizer) did not perform well.
Another question is, considering my problem setting, what would be the best method to find the optimizer?
Any comments and suggestions would be much appreciated.
Thanks.
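For readers working from Python rather than R, the same SLSQP setup is available through NLopt's Python bindings; the toy linear objective and quadratic constraint below are placeholder assumptions, not the logistic-regression problem from the question:

import numpy as np
import nlopt

def objective(x, grad):
    # linear objective: minimize x[0]; gradient filled in place when requested
    if grad.size > 0:
        grad[:] = [1.0, 0.0]
    return x[0]

def constraint(x, grad):
    # nonlinear inequality g(x) <= 0, here x0^2 + x1^2 - 1 <= 0
    if grad.size > 0:
        grad[:] = [2.0 * x[0], 2.0 * x[1]]
    return x[0] ** 2 + x[1] ** 2 - 1.0

opt = nlopt.opt(nlopt.LD_SLSQP, 2)   # gradient-based SLSQP, 2 variables
opt.set_min_objective(objective)
opt.add_inequality_constraint(constraint, 1e-8)
opt.set_xtol_rel(1e-8)
xopt = opt.optimize(np.array([0.5, 0.5]))
print(xopt, opt.last_optimum_value())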

SCIP: what is the function for sign?

I am new to SCIP and have read through some of the example problems and documentation, but am still unsure how to formulate the following problem for the SCIP solver:
argmax(w) sum(sign(Aw) == sign(b))
where A is an n x m matrix, w is an m x 1 vector, and b is an n x 1 vector. The data are floats/real numbers, and the problem is constraint-free.
Values for A and b are also contained row-wise in a .txt file. How can I import them?
Overall, I am new to SCIP and have no idea how to start: creating variables (especially the objective function value parameter), importing data, formulating the objective function... It's a bit of a stretch for me to ask this question, but your help is appreciated!
This should work:
maximize sum_i delta(i)
delta(i) = 1  ==>  beta(i) * sum_j A(i,j) * w(j) >= 0
delta(i) in {0,1}
where beta(i) = sign(b(i)). The implication can be implemented using indicator constraints. This way we don't need big-M's.
Most likely the >= 0 constraint should be >= 0.0001 (otherwise we can set all w(j)=0).
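A hedged PySCIPOPT sketch of this indicator formulation; the matrix A, the vector b, the tolerance eps and the commented-out file import are illustrative assumptions, not part of the original answer:

import numpy as np
from pyscipopt import Model, quicksum

# data = np.loadtxt("data.txt"); A, b = data[:, :-1], data[:, -1]   # row-wise .txt import
A = np.array([[1.0, -2.0], [0.5, 1.5], [-1.0, 1.0]])   # example data (assumption)
b = np.array([3.0, -1.0, 2.0])
n, m = A.shape
beta = np.sign(b)
eps = 1e-4   # the ">= 0.0001" suggested above

model = Model("sign_match")
w = [model.addVar(name=f"w{j}", lb=None) for j in range(m)]
delta = [model.addVar(name=f"d{i}", vtype="B") for i in range(n)]

for i in range(n):
    # delta[i] = 1  =>  beta[i] * (A w)_i >= eps, rewritten as a <= constraint
    lhs = quicksum(-float(beta[i]) * float(A[i, j]) * w[j] for j in range(m))
    model.addConsIndicator(lhs <= -eps, binvar=delta[i])

model.setObjective(quicksum(delta[i] for i in range(n)), "maximize")
model.optimize()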

Solving a Mixed Integer Quadratic Program using SCIP

I have a mixed integer quadratic program (MIQP) which I would like to solve using SCIP. The program has the form that, on fixing the integer variables, it turns into a linear program, and on fixing the continuous variables, it becomes an integer program. A simple example:
max.  sum_i n_i * f_i(x_i)
such that
n_1 * x_1 + n_2 * x_2 < t
n_3 * x_1 + n_2 * x_2 < m
...
(many more random quadratic constraints in the n_i and x_i)
Here each f_i is a concave piecewise linear function.
The x_i are continuous variables (they take real values).
The n_i are integer variables.
I am able to solve the problem using SCIP, but on problems with a large number of variables SCIP takes a lot of time to find the solution. I have particularly noticed that it does not find many primal solutions, and thus the rate at which the upper bound decreases is very slow. However, I could get better results by issuing "set heuristics emphasis aggressive".
It would be great if anyone can guide me on the following questions :
1) Is there any particular algorithm or software package which solves problems that fit exactly into the model described above?
2) Suggestions on how to improve the rate at which primal solutions are found.
3) What type of branching can I use to get better results?
4) Any guidance on improving performance would be really helpful.
I am okay with relaxing the integer constraints as well.
Thanks
1) The algorithm in SCIP should fit your problem. There are other software packages that implement similar algorithms, e.g., BARON and ANTIGONE.
2) Have a look at which primal heuristics were successful in your run and change their parameters so that they run more frequently.
3) No idea. Default should be ok.
4) Make sure that your variables have good bounds. Tighter bounds allow for a tighter relaxation to be constructed.
If you can post an instance of your problem somewhere, or a log of a SCIP run, including the detailed statistics at the end, maybe someone can give more hints on what to improve.
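To make points 2) and 4) concrete, here is a hedged PySCIPOPT sketch that turns on aggressive heuristic emphasis, gives the variables explicit bounds and prints the detailed statistics; the tiny model is purely illustrative, not the asker's MIQP:

from pyscipopt import Model, SCIP_PARAMSETTING

model = Model("miqp_tuning")
n1 = model.addVar(vtype="I", lb=0, ub=50, name="n1")   # tight bounds help the relaxation
x1 = model.addVar(vtype="C", lb=0.0, ub=10.0, name="x1")

model.addCons(n1 * x1 <= 100)                # example quadratic constraint
model.setObjective(3 * n1 + x1, "maximize")

model.setHeuristics(SCIP_PARAMSETTING.AGGRESSIVE)   # "set heuristics emphasis aggressive"
model.optimize()
model.printStatistics()                      # detailed statistics for diagnosis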

Fastest way to multiply X*X.transpose() in Eigen?

I want to multiply a matrix by its own transpose. The matrix X has size about 8 x 100.
Right now it looks like this: MatrixXf h = X*X.transpose();
a) Is it possible to use a faster multiplication by exploiting the explicit facts that:
the result matrix is symmetric,
the X matrix supplies both factors, so a custom multiplication procedure could be used?
b) I can also generate the X matrix already transposed and use X.transpose()*X; which should I prefer for my dimensions?
c) Any tips on faster multiplication of such matrices?
Thanks.
(a) Your matrix is too small to take advantage of the symmetry of the result, because if you do so, then you will lose vectorization. So there is not much you can do.
(b) The default column storage should be fine for that example.
(c) Make sure you compile with optimizations on and that you have enabled SSE2 (this is the default on 64-bit systems); the devel branch is at least twice as fast for such sizes, and you can get an additional speedup by enabling AVX.