Which solver can handle a linear-fractional cost function with quadratic non-convex equality constraints?

First of all, I am a noob in optimization. I have the following problem:
I have the optimization vector x=(x1, x2, x3, x4, x5, x6). The cost function is:
min. (x3+x4)/x6
The constraints are:
- quadratic equality constraints, e.g.:
k1*x5^2 + k2*x6 = k3*x5 + k4*x5 + k5*x1^2
- xmin < x < xmax
- some other linear constraints...
My biggest problem is finding a suitable solver for this problem. I already found the concept of linear-fractional programming in Boyd's slides: https://web.stanford.edu/~boyd/cvxbook/bv_cvxslides.pdf (4-20)
However, it requires linear constraints. I also found heuristic methods for solving quadratic-equality-constrained problems:
https://pdfs.semanticscholar.org/6008/57c54df025e732238425cf55f55997b4a67c.pdf
https://web.stanford.edu/~boyd/papers/pdf/qcqp.pdf
However, I do not think they can be combined with linear-fractional programming.
I would be very glad if someone could mention any solution to this problem.
Best regards,
Leo
I tried to linearize the constraints around different random points and took the result with the lowest cost. However, the solution does not fulfill the quadratic equality constraints.
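One possible alternative, as a sketch: since the Charnes-Cooper transformation from Boyd's slides requires linear constraints, you could hand the fractional objective and the quadratic equality constraint directly to a general NLP solver that accepts nonlinear equality constraints, such as SciPy's SLSQP, with random restarts because it is only a local method on a nonconvex problem. The constants k1..k5 and the box bounds below are illustrative placeholders, not values from the question:

import numpy as np
from scipy.optimize import minimize

k1, k2, k3, k4, k5 = 1.0, 2.0, 0.5, 0.5, 1.0  # placeholder constants
xmin, xmax = 0.1, 10.0                        # placeholder bounds (x6 > 0 avoids division by zero)

def cost(x):
    return (x[2] + x[3]) / x[5]               # (x3 + x4) / x6, 0-based indexing

def quad_eq(x):
    # k1*x5^2 + k2*x6 - (k3*x5 + k4*x5 + k5*x1^2) = 0
    return k1*x[4]**2 + k2*x[5] - (k3*x[4] + k4*x[4] + k5*x[0]**2)

rng = np.random.default_rng(0)
best = None
for _ in range(20):                           # multistart to mitigate local minima
    x0 = rng.uniform(xmin, xmax, size=6)
    res = minimize(cost, x0, method="SLSQP",
                   bounds=[(xmin, xmax)] * 6,
                   constraints=[{"type": "eq", "fun": quad_eq}])
    if res.success and (best is None or res.fun < best.fun):
        best = res
print(best.x, best.fun)

Unlike the linearize-and-hope approach, SLSQP enforces the quadratic equality at its returned point (up to tolerance), though there is no global optimality guarantee.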

Related

Time complexity of CVXOPT/MOSEK when the number of constraints is much greater than the number of variables

I have a convex quadratic programming problem:
min x^T P x + c^T x
s.t. Ax <= b
where P is a positive definite matrix and A is an m x n matrix with m much greater than n, so the number of constraints is much greater than the number of variables.
My questions are: (1) how do I analyze the time complexity of this problem, and (2) how does the time complexity of a convex quadratic program relate to the number of constraints?
I have tried solving my problem with both CVXOPT and MOSEK, and the results of both suggest that the running time is roughly linear in the number of constraints.
However, when I searched the literature, everything I found discusses only how the complexity relates to the number of variables, or assumes A is a full-rank matrix. I would appreciate it if you could recommend some related references. Thank you.
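For reference, here is a rough benchmark sketch (random data, CVXOPT) that fixes n and grows m, which is one way to check the observed scaling empirically; it is an experiment, not a complexity analysis:

import time
import numpy as np
from cvxopt import matrix, solvers

solvers.options["show_progress"] = False
rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
P = matrix(M @ M.T + np.eye(n))            # random positive definite quadratic term
q = matrix(rng.standard_normal(n))

for m in (1000, 2000, 4000, 8000):         # grow the number of constraints
    A = rng.standard_normal((m, n))
    x0 = rng.standard_normal(n)
    G, h = matrix(A), matrix(A @ x0 + 1.0)  # shift h so x0 is strictly feasible
    t = time.perf_counter()
    solvers.qp(P, q, G, h)
    print(m, round(time.perf_counter() - t, 3))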

Should I transform a constrained optimization problem to an unconstrained one?

I have a two-part question based on the optimization problem,
max f(x) s.t. a <= x <= b
where f is a nonlinear function and a and b are finite.
(1) I have heard that, if possible, one should try to transform a constrained optimization problem into an unconstrained one (I am interested in avoiding local maxima, but the motivation could also be to speed up the optimization). Is this true in general?
For the specific problem at hand, I am using the "optim" function in R with "Nelder-Mead", which is a derivative-free method suited to non-differentiable objectives.
(2) Is there a "best" transformation to use to transform the constrained to unconstrained problem?
I am using a + (b-a)*(sin(x)+1)/2 because it is onto and continuous (and so, by searching the entire interval, I hope to avoid getting stuck in local maxima).
See https://math.stackexchange.com/questions/75077/mapping-the-real-line-to-the-unit-interval for some transformations. The unconstrained problem is then,
max f(a +(b-a)*(sin(x)+1)/2)
Also, in the case of a one-sided constraint a < x, I have seen people use the exponential function a + exp(x). Is this the best thing to do?
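As a sketch of the sine transformation in code (shown here with SciPy's Nelder-Mead, the Python analogue of R's optim; f, a, and b are placeholders):

import numpy as np
from scipy.optimize import minimize

a, b = -1.0, 2.0
def f(x):                        # placeholder objective to maximize
    return -(x - 1.0)**2

def to_box(u):                   # maps all of R onto [a, b]
    return a + (b - a) * (np.sin(u) + 1.0) / 2.0

# maximize f by minimizing -f in the unconstrained variable u
res = minimize(lambda u: -f(to_box(u)), x0=np.array([0.0]),
               method="Nelder-Mead")
x_star = to_box(res.x[0])
print(x_star, f(x_star))

One caveat: the transform is periodic, so many values of u map to the same x, and its derivative vanishes where x hits a or b, which can slow convergence near the boundary.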

Difference of Convex Functions Optimization

I am looking for a method or idea to solve the following optimization problem:
min f(x)
s.t. g(xi, yi) <= f(x), i=1,...,n
where x, y are variables in R^n, f(x) is a convex function with respect to x, and the g(xi, yi) are convex functions with respect to (xi, yi).
This is a difference-of-convex (DC) optimization problem due to the DC structure of the constraints. Since I am fairly new to DC programming, I would like to know the global optimality conditions for DC programs and the efficient, popular approaches for global optimization.
In my specific problem, it is already verified that the necessary optimality condition is g(xi*, yi*)=f(x*) for i=1,...,n.
Any ideas or solution would be appreciated, thanks.
For global methods, I would suggest looking into branch and bound, branch and cut, and cutting-plane methods. These can be notoriously slow, though, depending on the problem size: because the problem is non-convex, efficient algorithms for global optimization are hard to come by.
For local methods, look into the convex-concave procedure; a sketch is given below. In fact, almost any heuristic might work.
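Here is a minimal sketch of the convex-concave procedure for this constraint structure, using CVXPY with placeholder choices f(x) = ||x||^2 + 1 and g_i(x_i, y_i) = (x_i - y_i)^2: at each iteration the convex f on the right-hand side is replaced by its linearization at the current point, which yields a convex inner approximation of the feasible set. It needs a feasible starting point and only finds a local solution:

import numpy as np
import cvxpy as cp

n = 5
x = cp.Variable(n)
y = cp.Variable(n)

xk = np.ones(n)                                 # starting point (assumed feasible)
for _ in range(30):
    # f(xk) + grad f(xk)' (x - xk), with f(x) = ||x||^2 + 1
    f_lin = (xk @ xk + 1.0) + 2.0 * xk @ (x - xk)
    cons = [cp.square(x - y) <= f_lin]          # elementwise g_i <= linearized f
    prob = cp.Problem(cp.Minimize(cp.sum_squares(x) + 1.0), cons)
    prob.solve()
    if np.linalg.norm(x.value - xk) < 1e-6:     # stop once the iterates settle
        break
    xk = x.value
print(xk, prob.value)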

Solving a Mixed Integer Quadratic Program using SCIP

I have a mixed-integer quadratic program (MIQP) which I would like to solve using SCIP. The program is such that, on fixing the integer variables, it becomes a linear program, and on fixing the continuous variables, it becomes an integer program. A simple example:
max. \sum_i n_i * f_i(x_i)
s.t.
n_1 * x_1 + n_2 * x_2 < t
n_3 * x_1 + n_2 * x_2 < m
...
(plus many quadratic constraints in the n_i's and x_i's, and so on)
Here each f_i is a concave piecewise-linear function, the x_i's are continuous variables (they take real values), and the n_i's are integer variables.
I am able to solve the problem using SCIP, but on problems with a large number of variables, SCIP takes a long time to find the solution. In particular, I have noticed that it does not find many primal solutions, so the rate at which the upper bound decreases is very slow. However, I could get better results with "set heuristics emphasis aggressive".
It would be great if anyone can guide me on the following questions :
1) Is there any particular algorithm or software package that solves problems fitting exactly the model described above?
2) Suggestions on how to improve the rate at which primal solutions are found.
3) What type of branching can I use to get better results ?
4) Any guidance on improving performance would be really helpful.
I am okay with relaxing the integer constraints as well.
Thanks
1) The algorithm in SCIP should fit your problem. There are other software packages that implement similar algorithms, e.g., BARON and ANTIGONE.
2) Have a look at which primal heuristics were successful in your run and change their parameters so that they run more frequently.
3) No idea. Default should be ok.
4) Make sure that your variables have good bounds. Tighter bounds allow for a tighter relaxation to be constructed.
If you can post an instance of your problem somewhere, or a log of a SCIP run, including the detailed statistics at the end, maybe someone can give more hints on what to improve.
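To make points 2) and 4) concrete, here is a toy sketch with PySCIPOpt (SCIP's Python interface); the variables, bounds, and constraint are placeholders, not your model:

from pyscipopt import Model, SCIP_PARAMSETTING

model = Model("miqp_sketch")

# Tight variable bounds help SCIP construct tighter relaxations (point 4).
n1 = model.addVar(vtype="I", lb=0, ub=10, name="n1")
x1 = model.addVar(vtype="C", lb=0.0, ub=5.0, name="x1")

model.addCons(n1 * x1 <= 20)                  # a placeholder bilinear constraint
model.setObjective(n1 + x1, sense="maximize")

# API equivalent of "set heuristics emphasis aggressive" (point 2).
model.setHeuristics(SCIP_PARAMSETTING.AGGRESSIVE)

model.optimize()
model.printStatistics()                       # the detailed statistics mentioned above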

Practical solver for convex QCQP?

I am working with a convex QCQP of the following form:
min e'Ie
s.t. z'Iz = n
[some linear equalities and inequalities involving the variables w, z, and e]
w >= 0, z in [0,1]^n
So the problem has only one quadratic constraint besides the objective, and some variables are nonnegative. The matrices of both quadratic forms are identity matrices and thus positive definite.
I can move the quadratic constraint into the objective, but it must carry a negative sign, so the problem becomes nonconvex:
min e'Ie - z'Iz
The size of the problem can be up to 10000 linear constraints, with 100 nonnegative variables and almost the same number of other variables.
The problem can be rewritten as an MIQP as well, since the z_i can be made binary and z'Iz = n can then be removed.
So far, I have been working with CPLEX via AIMMS for the MIQP, and it is very slow on this problem. Using the QCQP version of the problem with CPLEX, MINOS, SNOPT, and CONOPT is hopeless, as they either cannot find a solution or the solution is not even close to an approximation that I know a priori.
Now I have three questions:
Do you know any method or technique to get rid of the quadratic constraint in this form without going to an MIQP?
Is there any "good" solver for this QCQP? By good, I mean a solver that efficiently finds the global optimum in a reasonable time.
Do you think an SDP relaxation could be a solution to this problem? I have never solved an SDP in practice, so I do not know how efficient the SDP version can be. Any advice?
Thanks.
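Regarding the third question, here is a minimal sketch of the standard SDP lifting of z'Iz = n in CVXPY, assuming the rest of the model is filled in: the PSD variable M stands in for the block matrix [zz', z; z', 1], trace of the z-block replaces z'z, and the result is a relaxation, so its optimal value only bounds the original problem:

import cvxpy as cp

n = 20
M = cp.Variable((n + 1, n + 1), PSD=True)   # models [[z z', z], [z', 1]]
Z, z = M[:n, :n], M[:n, n]
e = cp.Variable(n)                          # placeholder for the e variables

constraints = [
    M[n, n] == 1,
    cp.trace(Z) == n,                       # lifted form of z'Iz = n
    cp.diag(Z) <= z,                        # valid because z_i in [0,1] gives z_i^2 <= z_i
    z >= 0, z <= 1,
    # ... the linear equalities/inequalities in w, z, and e go here ...
]
prob = cp.Problem(cp.Minimize(cp.sum_squares(e)), constraints)
prob.solve(solver=cp.SCS)
print(prob.value)

Whether this scales to 10000 linear constraints depends on the SDP solver; the (n+1) x (n+1) PSD block with n around 100 is small by SDP standards, so it may be worth trying.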