Induction for SCIPopt's setppc

Regarding SCIP's "constraint handler for the set partitioning / packing / covering":
Is it smart enough to deduce all forms that it supports without me having to call the setppc functions directly?
Can it handle/detect forms of sum(x) == y where x is a list of binary variables and y is also a binary variable? Same question for less than or equal?
The docs for it state that it requires a right-hand-side equal to 1. What about RHS=0?

If I understand you correctly, you are asking whether SCIP will see that a linear constraint is a setppc constraint and automatically upgrade it? Yes.
Yes, it should not matter how you write it.
A sum of binary variables with rhs = 0 will just propagate and fix all variables to 0. (If only the lhs is 0, that side is redundant.)
If some of the coefficients are -1 instead of +1, SCIP will still try to make it work by negating the variables with negative coefficients (or negating those with positive coefficients and multiplying the constraint by -1 afterwards). SCIP checks every linear constraint that contains only binary variables with +1/-1 coefficients to see whether it can be upgraded in this way.
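Here is a minimal PySCIPOpt sketch (my own toy model, not from the question) of what that looks like in practice: the constraints are added as ordinary linear constraints, and SCIP's presolve is left to upgrade them to setppc form as described above.

from pyscipopt import Model, quicksum

model = Model("setppc-upgrade")
x = [model.addVar(vtype="B", name="x%d" % i) for i in range(4)]
y = model.addVar(vtype="B", name="y")

# sum(x) == y with all variables binary: equivalent to sum(x) - y == 0, which SCIP
# can upgrade to a set partitioning constraint by negating y (sum(x) + (1 - y) == 1).
model.addCons(quicksum(x) == y)

# sum(x) <= 1 over binary variables: a plain set packing constraint.
model.addCons(quicksum(x) <= 1)

model.setObjective(quicksum(x) + y, "maximize")
model.optimize()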

Pseudo-inverse via singular value decomposition in numpy.linalg.lstsq

at the risk of using the wrong SE...
My question concerns the rcond parameter in numpy.linalg.lstsq(a, b, rcond). I know it is used to define the cutoff for small singular values in the singular value decomposition when numpy computes the pseudo-inverse of a.
But why is it advantageous to set "small" singular values (values below the cutoff) to zero rather than just keeping them as small numbers?
PS: Admittedly, I don't know exactly how the pseudo-inverse is computed and exactly what role SVD plays in that.
Thanks!
To determine the rank of a system you need to compare singular values against zero, but as always with floating-point arithmetic you should compare against some finite tolerance instead. Hitting exactly 0 essentially never happens when adding and subtracting messy numbers.
The (reciprocal) condition number lets you specify such a limit in a way that is independent of the units and the scale of your matrix.
You can also effectively opt out of this cutoff in lstsq, e.g. by passing rcond=-1 (the previous default, which cuts only at bare machine precision), if you know for sure your problem cannot run the risk of being under-determined.
The future default sounds like a wise choice from numpy:
FutureWarning: rcond parameter will change to the default of machine precision times max(M, N) where M and N are the input matrix dimensions
because if your singular values are down at that level, you are just fitting the rounding errors of floating-point arithmetic instead of the "true" mathematical system that you hope to model.
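A small self-contained illustration (my own toy matrix, not from the question) of why zeroing a tiny singular value is preferable to keeping it:

import numpy as np

# Nearly rank-deficient matrix: the second column is 2x the first, up to a 1e-8 wobble.
a = np.array([[1.0, 2.0],
              [2.0, 4.0 + 1e-8],
              [3.0, 6.0]])
# Right-hand side: the first column plus a little "measurement noise".
b = np.array([1.0, 2.0 + 1e-3, 3.0])

# Keep the tiny singular value (rcond=None cuts only near machine precision):
# the solver fits the noise through the almost-zero singular direction and
# returns huge, unstable coefficients.
x_keep, *_ = np.linalg.lstsq(a, b, rcond=None)

# Zero it out with a more generous cutoff: lstsq then uses only the rank-1 part
# of a and returns the small, stable minimum-norm solution.
x_cut, *_ = np.linalg.lstsq(a, b, rcond=1e-6)

print(x_keep)   # roughly [-2.0e+05,  1.0e+05], dominated by the noise
print(x_cut)    # roughly [0.2, 0.4], the minimum-norm fit of the rank-1 system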

Worhp: Local point of infeasibility

I have a problem that is solved successfully with Ipopt and fmincon. WORHP terminates with a local point of infeasibility, even though my initial point x0 is feasible.
This may happen with an interior point algorithm, but I would expect an SQP method to always stay in the feasible region?
Maybe also check the derivatives with WORHP by enabling CheckValuesDF, CheckValuesDG, CheckValuesHM, CheckStructureDF, CheckStructureDG and CheckStructureHM, if you provide them. What I am pointing at is that WORHP requires a very specific coordinate storage format (in particular for the Hessian), and mistakes there lead to false search directions.
Due to the approximation error of the QP subproblem this is not something you can expect in general. Consider the problem
which will have the QP subproblems
for a current iterate x and Lagrange multiplier lambda, as can be seen by computing the necessary derivatives. With initial values x_0 = 0 and lambda_0 = 1 we have a feasible initial guess. The first QP to be solved is then
which has the unique solution d = 2. Now, depending on the implemented line search, the full step might be taken, i.e. the next iterate is x_1 = x_0 + d. That means x_1 = 2, which is not a feasible point anymore. In fact, WORHP's SQP algorithm will iterate exactly like this if you disable par.InitialLMest, and it eventually finds the global optimum at x = 1.
Apart from this fundamental property, there can also be other effects leading to iterates that leave the feasible set, and those will very much be specific to the actual solver implementation: for example numerical inaccuracies, difficulties during the solution of a QP, or certain recovery strategies. As to why your particular problem is not solved successfully with the SQP algorithm of WORHP, I cannot say much without knowing anything about the problem itself.
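To make the effect concrete, here is a tiny numerical sketch in Python. It uses my own toy problem (min -x subject to x^2 <= 1, so the true optimum is x = 1), not the example from the answer above, but it shows the same mechanism: the QP linearizes the constraint, the linearization of a convex constraint underestimates it, and so a full step from a feasible point can land outside the feasible set.

# One exact SQP step for:  min -x   s.t.  x^2 <= 1, starting at the feasible x = 0.9.
x, lam = 0.9, 1.0

grad_f = -1.0              # gradient of f(x) = -x
g      = x**2 - 1.0        # constraint value; g(x) <= 0 defines the feasible set
grad_g = 2.0 * x           # gradient of g
hess_L = 2.0 * lam         # Hessian of the Lagrangian: 0 + lam * 2

# QP subproblem:  min grad_f*d + 0.5*hess_L*d^2   s.t.  g + grad_g*d <= 0
d_unconstrained = -grad_f / hess_L    # = 0.5, minimizer of the quadratic model
d_bound         = -g / grad_g         # largest d allowed by the *linearized* constraint, ~0.106
d = min(d_unconstrained, d_bound)     # QP solution (the model decreases up to 0.5)

x_next = x + d                        # full step, no line search
print(x_next, x_next**2 - 1.0 <= 0.0) # ~1.0056, False: the new iterate is infeasible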

SCIP what is the function for sign?

I am new to SCIP and have read through some of the example problems and documentation, but am still unsure how to formulate the following problem for the SCIP solver:
argmax(w) sum(sign(Aw) == sign(b))
where A is an n x m matrix, w is an m x 1 vector, and b is an n x 1 vector. All entries are real numbers, and the problem has no other constraints.
Values for A and b are also contained row-wise in a .txt file. How can I import that?
Overall, I am new to SCIP and have no idea how to start: creating variables (especially the objective-function value parameter), importing data, formulating the objective function... It's a bit of a stretch for me to ask this question, but your help is appreciated!
This should work: introduce a binary variable delta(i) for each row, maximize sum(i, delta(i)), and impose
delta(i) = 1 ==> beta(i) * sum(j, A(i,j)*w(j)) >= 0
where beta(i) = sign(b(i)). The implication can be implemented using indicator constraints, so we don't need big-M's.
Most likely the >= 0 constraint should be >= 0.0001, otherwise we could satisfy every row by setting all w(j) = 0.
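A minimal PySCIPOpt sketch of this formulation (the file name, the assumption that the .txt file stores [A | b] row-wise with whitespace separators, and the 1e-4 threshold are my own assumptions, not from the question):

import numpy as np
from pyscipopt import Model, quicksum

data = np.loadtxt("data.txt")          # assumed layout: each row is A(i,1) ... A(i,m) b(i)
A, b = data[:, :-1], data[:, -1]
n, m = A.shape
beta = np.sign(b)
eps = 1e-4                              # the ">= 0.0001" instead of ">= 0"

model = Model("sign-matching")
w = [model.addVar(vtype="C", lb=None, name="w%d" % j) for j in range(m)]    # free continuous
delta = [model.addVar(vtype="B", name="delta%d" % i) for i in range(n)]     # 1 if row i matches

for i in range(n):
    # delta[i] = 1  =>  beta(i) * (A w)_i >= eps, rewritten as a <= constraint
    row = quicksum(float(A[i, j]) * w[j] for j in range(m))
    model.addConsIndicator(float(-beta[i]) * row <= -eps, binvar=delta[i])

model.setObjective(quicksum(delta), "maximize")   # maximize the number of matched signs
model.optimize()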

Usage of scipy.optimize.fmin_slsqp for Integer design variable

I'm trying to use scipy.optimize.fmin_slsqp for an industrial constrained optimization problem. A highly non-linear FE model is used to generate the objective and the constraint functions, and their derivatives/sensitivities.
The objective function is in the form:
obj=a number calculated from the FE model
A series of constraint functions are set, and most of them are in the form:
cons = real number i - real number j (calculated from the FE model)
I would like to restrict the design variables to integers, as that is what gets fed into the plant machine.
Another consideration is to keep a log file recording which design variables have already been tried: if a set of (integer) design variables has been tried before, skip the calculation, perturb the design variables and try again. By limiting the design variables to integers we can limit the number of trials (with real-valued design variables, a change in e.g. the 8th decimal place would be regarded as an untried value).
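A small sketch of that logging/skipping idea (the names and the placeholder FE evaluation are mine, not from the question): round the design vector to integers, store every evaluated point in a dictionary, and reuse the stored value instead of re-running the FE model when the same integer point comes up again.

import numpy as np

evaluated = {}   # maps a tuple of integer design values -> objective value

def expensive_fe_model(x_int):
    # stand-in for the real finite element evaluation
    return float(np.sum(np.asarray(x_int, dtype=float) ** 2))

def objective(x):
    key = tuple(int(round(v)) for v in x)   # the integer design actually sent to the plant
    if key not in evaluated:                # only run the FE model for new integer designs
        evaluated[key] = expensive_fe_model(key)
    return evaluated[key]

# Both calls map to the integer design (2, 3), so the FE model runs only once.
print(objective([2.0000001, 3.0]), objective([1.9999999, 3.0]), len(evaluated))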
I'm using SLSQP as it is one of the SQP methods (please correct me if I am wrong), and it is said to be powerful on nonlinear problems. I understand the SLSQP algorithm is a gradient-based optimizer and there is no way to enforce integrality of the design variables inside the algorithm coded in FORTRAN. So instead, I modified slsqp.py as follows (at the point where it calls the Python extension built from the FORTRAN routine):
slsqp(m, meq, x, xl, xu, fx, c, g, a, acc, majiter, mode, w, jw)
for i in range(len(x)):
    x[i] = int(x[i])
The code stops at the 2nd iteration and outputs the following:
Optimization terminated successfully. (Exit mode 0)
Current function value: -1.286621577077517
Iterations: 7
Function evaluations: 0
Gradient evaluations: 0
However, one of the constraint functions is violated (its value is about -5.2, while the default convergence tolerance of the optimization code is 10^-6).
Questions:
1. Since the FE model is highly nonlinear, I think it's safe to assume the objective and constraint functions will be highly nonlinear too (regardless of their mathematical form). Is that correct?
2. The convergence criterion of the SLSQP algorithm (see below) requires, among other things, that the sum of all constraint violations (in absolute value) be smaller than a very small tolerance (10^-6). How could the optimization exit with a successful termination message?
IF ((ABS(f-f0).LT.acc .OR. dnrm2_(n,s,1).LT.acc).AND. h3.LT.acc)
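For reference, that Fortran test reads in Python terms roughly as follows; h3 is, per the question, the accumulated absolute constraint violation, and the sample numbers below only mirror the situation reported above, they are not from the actual run.

import numpy as np

acc = 1e-6
f, f0 = -1.286621577077517, -1.286621577077517   # objective barely changed between iterations
s = np.zeros(3)                                   # (near-)zero search direction
h3 = 5.2                                          # reported constraint violation

converged = (abs(f - f0) < acc or np.linalg.norm(s) < acc) and h3 < acc
print(converged)   # False: with h3 = 5.2 this test as written should not report success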
Any help or advice is appreciated. Thank you.

Practical solver for convex QCQP?

I am working with a convex QCQP as the following:
min   e'Ie
s.t.  z'Iz = n
      [some linear equalities and inequalities involving the variables w, z, and e]
      w >= 0,  z in [0,1]^n
So the problem has only one quadratic constraint besides the quadratic objective, and some variables are nonnegative. The matrices of both quadratic forms are identity matrices and thus positive definite.
I can move the quadratic constraint into the objective, but it enters with a negative sign, so the problem becomes nonconvex:
min e'Ie-z'Iz
The problem can have up to 10000 linear constraints, with 100 nonnegative variables and almost as many other variables.
The problem can be rewritten as an MIQP as well, as z_i can be binary, and z'Iz=n can be removed.
So far I have been working with CPLEX via AIMMS for the MIQP, and it is very slow on this problem. Using the QCQP version of the problem with CPLEX, MINOS, SNOPT and CONOPT is hopeless: they either cannot find a solution, or the solution is not even close to an approximation that I know a priori.
Now I have three questions:
Do you know any method/technique to get rid of the quadratic constraint in this form without resorting to an MIQP?
Is there any "good" solver for this QCQP? by good, I mean a solver that efficiently finds the global optimum in a resonable time.
Do you think an SDP relaxation could be a solution to this problem? I have never solved an SDP problem in reality, so I do not know how efficient the SDP version would be. Any advice?
Thanks.