What is the appropriate optimization method (algorithm) to solve such problems (linear mixed-integer)?

I have this optimization problem:
In this problem, C_{i,k} is a matrix of binary values (i.e., 0 or 1), w_i is a vector of integers, p_f is a probability, and \epsilon is a constant.
I understand that the problem is a linear mixed-integer problem, but I'm confused about which method or algorithm I should use to solve it, and how I can go further with a convexity analysis.
Your inputs are appreciated.
Thanks a lot.

This is a 0-1 knapsack problem. It can be solved using either dynamic programming or a branch-and-bound algorithm. For branch and bound, you can select any variable z_k and solve two subproblems, one with z_k fixed to 0 and one with z_k fixed to 1; each subproblem has exactly the same structure as the original problem.
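Since the exact form of your problem is not shown here, the following is only a generic sketch of the dynamic-programming route for a 0-1 knapsack with integer weights; the names values, weights, and capacity are placeholders rather than the C_{i,k}, w_i, p_f, and \epsilon from your model.

```python
def knapsack_01(values, weights, capacity):
    """0-1 knapsack by dynamic programming: best[c] = best value achievable with capacity c."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Sweep capacities downwards so each item is taken at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]


# Tiny example: the optimum takes the last two items for a value of 22.
print(knapsack_01(values=[6, 10, 12], weights=[1, 2, 3], capacity=5))
```

The branch-and-bound route follows the description above: fix z_k to 0 and to 1, bound each subproblem (e.g., by its LP relaxation), and prune subproblems whose bound cannot beat the incumbent.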

Difference of Convex Functions Optimization

I am looking for a method or idea to solve the following optimization problem:
min f(x)
s.t. g(x_i, y_i) <= f(x), i = 1, ..., n
where x and y are variables in R^n, f(x) is a convex function of x, and the g(x_i, y_i) are convex functions of (x_i, y_i).
It is a difference-of-convex-functions (DC) optimization problem due to the DC structure of the constraints. Since I am fairly new to DC programming, I would like to know the global optimality conditions of DC programs and the efficient, popular approaches for global optimization.
In my specific problem, it has already been verified that the necessary optimality condition is g(x_i*, y_i*) = f(x*) for i = 1, ..., n.
Any ideas or solution would be appreciated, thanks.
For global methods, I would suggest looking into Branch and Bound, Branch and Cut, and Cutting Plane methods. These methods can be notoriously slow, though, depending on the problem size: because the problem is non-convex, it is difficult to obtain efficient algorithms for global optimization.
For local methods, look into the convex-concave procedure; more generally, many heuristics might work.
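As a concrete illustration of the convex-concave procedure, here is a minimal sketch on a toy DC objective over a box (not your constrained problem), assuming cvxpy is available; at each step the subtracted convex term is linearized at the current iterate and the resulting convex problem is solved.

```python
import cvxpy as cp
import numpy as np

# Toy DC objective: minimize ||x - a||^2 - 2*||x - b||^2 over a box.
a = np.array([1.0, 2.0])
b = np.array([3.0, -1.0])

x = cp.Variable(2)
x_k = np.zeros(2)  # starting point
for _ in range(50):
    # Linearize the subtracted term g(x) = 2*||x - b||^2 at x_k:
    # g(x) ≈ g(x_k) + 4 * (x_k - b)^T (x - x_k)
    g_lin = 2 * np.sum((x_k - b) ** 2) + 4 * (x_k - b) @ (x - x_k)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(x - a) - g_lin),
                      [x >= -5, x <= 5])
    prob.solve()
    if np.linalg.norm(x.value - x_k) < 1e-6:
        break
    x_k = x.value

print("CCP point:", x_k)
```

The same linearize-and-solve idea applies to DC constraints of the form g(x_i, y_i) <= f(x): rewrite them as g(x_i, y_i) - f(x) <= 0 and linearize f around the current iterate, which makes the feasible set convex at every iteration.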

What is the exact meaning of "First LP value" in the SCIP solver for a nonlinear convex model?

I have solved with SCIP a convex mathematical model with binary variables, a linear objective function, and a set of linear constraints, amended with a single nonlinear constraint that makes the model a nonlinear binary problem.
In the output file provided by SCIP there is a term named "First LP value" with a value assigned to it. I cannot figure out exactly what "First LP value" means for my specific nonlinear problem. I would appreciate any detailed explanation.
For solving nonlinear problems, SCIP solves linear programming relaxations (LPs) that describe an outer approximation of the feasible region. The first LP value is the value of the optimal solution to the initial LP that was solved at the root node, after presolving, but before any separation.
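If it helps to see where this number appears: assuming you call SCIP through PySCIPOpt (the same section shows up in the statistics written by the interactive shell), the "Root Node" block of the statistics output contains the "First LP value" line.

```python
from pyscipopt import Model

m = Model("toy")
x = m.addVar(vtype="B", name="x")
y = m.addVar(vtype="B", name="y")
m.setObjective(3 * x + 2 * y, sense="maximize")
m.addCons(x + y <= 1)
m.optimize()

# The "Root Node" block of the statistics reports "First LP value":
# the objective of the first LP relaxation solved at the root,
# after presolving but before separation.
m.printStatistics()
```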

Solving a Mixed Integer Quadratic Program using SCIP

I have a mixed integer quadratic program (MIQP) which I would like to solve using SCIP. The program is such that, on fixing the integer variables, the problem turns into a linear program, and on fixing the continuous variables it becomes an integer program. A simple example:
max  \Sigma_i n_i * f_i(x_i)
s.t. n_1 * x_1 + n_2 * x_2 < t
     n_3 * x_1 + n_2 * x_2 < m
     ...
     (many further quadratic constraints in the n_i and x_i)
Here each f_i is a concave piecewise linear function, the x_i are continuous (real-valued) variables, and the n_i are integer variables.
I am able to solve the problem using SCIP, but on problems with a large number of variables SCIP takes a lot of time to find the solution. In particular, I have noticed that it does not find many primal solutions, so the gap closes very slowly. However, I could get better results by running "set heuristics emphasis aggressive".
It would be great if anyone can guide me on the following questions :
1) Is there any particular algorithm/software package which solves problems that fit exactly into the model described above?
2) Suggestions on how to improve the rate at which primal solutions are found.
3) What type of branching can I use to get better results ?
4) Any guidance on improving performance would be really helpful.
I am okay with relaxing the integer constraints as well.
Thanks
1) The algorithm in SCIP should fit your problem. There are other software packages that implement similar algorithms, e.g., BARON and ANTIGONE.
2) Have a look at which primal heuristics were successful in your run and change their parameters so that they run more frequently.
3) No idea. Default should be ok.
4) Make sure that your variables have good bounds. Tighter bounds allow for a tighter relaxation to be constructed.
If you can post an instance of your problem somewhere, or a log of a SCIP run, including the detailed statistics at the end, maybe someone can give more hints on what to improve.
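To make points 2) and 4) concrete, here is a rough sketch of how this could look through PySCIPOpt; the bounds and the choice of the RENS heuristic below are placeholders, not part of your actual model or a recommended tuning.

```python
from pyscipopt import Model, SCIP_PARAMEMPHASIS

m = Model("miqp")

# Point 4: give every variable the tightest bounds you can justify
# from the model data; the numbers below are placeholders.
n1 = m.addVar(vtype="I", lb=0, ub=50, name="n_1")
x1 = m.addVar(vtype="C", lb=0.0, ub=10.0, name="x_1")
# ... objective and constraints of the actual model go here ...

# Point 2: run primal heuristics more aggressively ...
m.setHeuristics(SCIP_PARAMEMPHASIS.AGGRESSIVE)
# ... or tune individual ones, e.g. call RENS at every node.
m.setIntParam("heuristics/rens/freq", 1)

m.optimize()
m.printStatistics()  # the heuristics table shows which ones found solutions
```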

Practical solver for convex QCQP?

I am working with a convex QCQP of the following form:
min  e'Ie
s.t. z'Iz = n
     [some linear equalities and inequalities that contain the variables w, z, and e]
     w >= 0,  z in [0,1]^n
So the problem has only one quadratic constraint besides the quadratic objective, and some variables are nonnegative. The matrices of both quadratic forms are identity matrices and thus positive definite.
I can move the quadratic constraint into the objective, but it must enter with a negative sign, so the problem becomes nonconvex:
min e'Ie - z'Iz
The size of the problem can be up to 10000 linear constraints, with 100 nonnegative variables and almost the same number of other variables.
The problem can be rewritten as an MIQP as well, as z_i can be binary, and z'Iz=n can be removed.
So far, I have been working with CPLEX via AIMMS for the MIQP and it is very slow for this problem. Using the QCQP version of the problem with CPLEX, MINOS, SNOPT, and CONOPT is hopeless, as they either cannot find a solution or the solution is not even close to an approximation that I know a priori.
Now I have three questions:
Do you know any method/technique to get rid of the quadratic constraint in this form without going to an MIQP?
Is there any "good" solver for this QCQP? By good, I mean a solver that efficiently finds the global optimum in a reasonable time.
Do you think an SDP relaxation can be a solution to this problem? I have never solved an SDP problem in practice, so I do not know how efficient the SDP version can be. Any advice?
Thanks.

Using NP Reductions

I have been having some difficulty understanding reductions using NP problems and would like clarification. Consider the following problem:
Show that the following problem is NP-Complete by designing
a polynomial-time reduction algorithm from an already known
NP-Complete problem.
Problem: Given an undirected graph G=(V,E) and integer k,
test whether G has a cycle of length k.
I know there are other topics regarding this subject, but I am still not sure I understand how reductions like this would be done.
It is my understanding that this is how you would approach a problem such as this.
Assume the given problem can be solved in polynomial time.
Use the given problem to solve a problem that we know is NP-Hard in polynomial time
This would mean an NP-hard problem is solvable in polynomial time, which contradicts the assumption that P ≠ NP
Thus, the given problem cannot be solvable in polynomial time (unless P = NP)
So, for a problem like this, would this be a proper approach?
If we choose k = |V| (the length a Hamiltonian cycle would have), then this problem could be used to decide whether the graph has a Hamiltonian cycle.
Because deciding whether a graph has a Hamiltonian cycle is NP-complete, this problem must be at least as hard, i.e., NP-hard.
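A tiny sketch of that reduction, assuming "cycle of length k" means a simple cycle on k distinct vertices:

```python
def reduce_ham_cycle_to_k_cycle(graph):
    """Map a Hamiltonian-cycle instance to a k-cycle instance.

    graph: dict mapping each vertex to the set of its neighbours.
    G has a Hamiltonian cycle exactly when it has a simple cycle of
    length k = |V|, so the same graph is returned together with k = |V|.
    """
    return graph, len(graph)


def has_hamiltonian_cycle(graph, has_cycle_of_length):
    # has_cycle_of_length is the hypothetical oracle for the k-cycle
    # problem; if it ran in polynomial time, so would this function,
    # making Hamiltonian cycle polynomial-time solvable.
    g, k = reduce_ham_cycle_to_k_cycle(graph)
    return has_cycle_of_length(g, k)
```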
This looks rather like homework, so I'll only give you a hint: consider an unweighted graph G = (V, E) and take k = |V|. What is finding a cycle of length k then equivalent to, and what would the algorithm you assumed to be polynomial give you? Try to proceed from there.