What is the exact meaning of "First LP value" in the SCIP solver for a nonlinear convex model?

I have solved with SCIP a convex mathematical model with binary variables, a linear objective function, and a set of linear constraints, amended with a single nonlinear constraint that makes the model a nonlinear binary problem.
In the output file produced by SCIP there is a term named "First LP value" with a value assigned to it. I cannot figure out what exactly "First LP value" means for my specific nonlinear problem. I would appreciate any detailed explanation.

For nonlinear problems, SCIP solves linear programming relaxations (LPs) that describe an outer approximation of the feasible region. The "First LP value" is the optimal objective value of the initial LP solved at the root node, after presolving but before any separation (i.e., before any cutting planes are added).
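A toy sketch of that mechanism (plain Python with a made-up one-variable problem, not SCIP itself): the LP relaxation starts from the variable bounds only, gradient cuts then tighten the outer approximation, and the objective value of the very first of these LPs is what the log reports.

```python
# Toy outer-approximation loop (not SCIP): maximize x subject to the convex
# constraint x**2 <= 4 and the bounds 0 <= x <= 3. All numbers are made up.

def solve_lp(cut_bounds):
    # The LP here is one-dimensional: "max x" subject to x <= b for every
    # gradient cut b collected so far, plus the original bound x <= 3.
    return min([3.0] + cut_bounds)

cuts = []
x = solve_lp(cuts)       # initial LP: no cuts yet, only the variable bounds
first_lp_value = x       # analogous to SCIP's "First LP value"

for _ in range(50):
    if x * x <= 4 + 1e-9:            # LP optimum satisfies x**2 <= 4: done
        break
    # Gradient (tangent) cut at x0:  x0**2 + 2*x0*(x - x0) <= 4,
    # which rearranges to:           x <= (4 + x0**2) / (2*x0)
    cuts.append((4 + x * x) / (2 * x))
    x = solve_lp(cuts)

print(first_lp_value)    # 3.0 -- value of the first LP, before any separation
print(round(x, 4))       # 2.0 -- the true optimum of the convex problem
```

The first LP value (3.0 here) can be far from the final optimum, because none of the nonlinearity has been approximated yet.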


How can I get the properties of a Gurobi's Presolve model?

I have an integer programming problem with a linear objective function and some quadratic constraints. When I use Gurobi to solve this problem, Gurobi's presolve creates a quadratically constrained integer programming model. Now I would like to know whether the objective function of the presolved model is quadratic too.
Thanks in advance.
Gurobi will give you the presolved model via the presolve method on the model object. That object is a regular model object, and you can query its attributes. The attribute IsQCP is 1 if there are any quadratic constraints. The attribute IsQP indicates that the model has a quadratic objective but no quadratic constraints. The attribute NumQConstrs counts the quadratic constraints.
You can also use the printStats method to print these numbers, or the write method to write the presolved model to a file.
# assuming `model` is an already-built gurobipy Model
presolved_model = model.presolve()
print(presolved_model.IsQCP)   # 1 if any quadratic constraints remain
print(presolved_model.IsQP)    # 1 if quadratic objective but no quadratic constraints
presolved_model.printStats()   # print summary statistics of the presolved model
presolved_model.write("presolved.lp")  # write the presolved model to a file

Solving an optimization problem bounded by conditional constraints

Basically, I have a dataset that contains 'weights' for some (207) variables; some are more important than others for determining the (binary) class variable and therefore have larger weights. In the end, the weights are summed across all columns so that a cumulative weight is obtained for each observation.
If this weight is higher than some number, the class variable is 1; otherwise it is 0. I do have true labels for the class variable, so the problem is to minimize false positives.
The thing is, to me this looks like an OR problem, as it is about finding optimal weights. However, I am not sure whether there is an OR method for such a problem; at least I have not heard of one. Question: does anyone recognize this type of problem and can you send some keywords for me to research?
Another option, of course, would be to predict this with machine learning rather than deterministic methods, but I need to do it this way.
Thank you!
Are the variables discrete (integers, etc.) or continuous (floating-point numbers)?
If they are discrete, it sounds like the knapsack problem, which constraint solvers like OptaPlanner (see this training that builds a knapsack solver) excel at.
If they are continuous, look for an LP solver, such as CPLEX.
Either way, you'll get much better results than with machine-learning approaches, because neural nets et al. are great at pattern-recognition use cases (image/voice recognition, prediction, categorization, ...) but consistently inferior for constraint optimization problems (like this one, I presume).
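For intuition about the question's setup, here is a toy sketch with made-up numbers. It only tunes the cutoff with the weights held fixed; choosing the weights themselves is the harder combinatorial problem that the knapsack/LP pointers address.

```python
# Made-up data: cumulative weight (score) and true label per observation.
scores = [2.0, 3.5, 3.0, 4.2, 2.8, 0.7]
labels = [0,   1,   0,   1,   1,   0]

# Classification rule assumed here: predict 1 iff score >= cutoff.
# Any cutoff at or below the smallest positive-class score produces zero
# false negatives; among those, the largest such cutoff minimizes false
# positives.
cutoff = min(s for s, y in zip(scores, labels) if y == 1)
fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= cutoff)
print(cutoff, fp)  # 2.8 1
```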

What is the appropriate optimization method (algorithm) to solve such problems (Linear mixed-integer)?

I have this optimization problem:
In this problem, C_{i,k} is a matrix of binary values (i.e., 0 or 1), w_i is a vector of integers, p_f is a probability, and \epsilon is a constant.
I understand that the problem is a linear mixed-integer problem. But I'm confused about which method or algorithm I should use to solve the problem, and how I can go further with a convexity analysis.
Your inputs are appreciated.
Thanks a lot.
This is a 0-1 knapsack problem, which can be solved with either dynamic programming or a branch-and-bound algorithm. For branch and bound, you can select any variable z_k and solve two subproblems, one with z_k = 0 and one with z_k = 1. Each subproblem has the same structure as the original problem.
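The dynamic-programming route can be sketched in a few lines (a generic 0-1 knapsack with made-up data; the actual C_{i,k}, w_i, and \epsilon from the question would have to be mapped onto values, weights, and a capacity):

```python
# Generic 0-1 knapsack DP: best[c] is the best total value achievable within
# capacity c after processing the items seen so far.
def knapsack(values, weights, capacity):
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):  # reverse so each item is used at most once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```

This runs in O(n * capacity) time, which is pseudo-polynomial; for large capacities or non-integer data, branch and bound (or an off-the-shelf MIP solver) is the more practical route.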

Can I use a lookup table instead of a degree-5 polynomial equation relating three variables in a nonlinear optimization model?

I have a nonlinear optimization model with several variables, and a certain function between three of them should be defined as a constraint (say, the efficiency of a machine depends on the inlet and outlet temperatures). I have calculated some values in a table to visualize the dependency on T_inlet and T_outlet, and it gives back a pretty ugly surface. A good fit would be something like a degree-5 polynomial if I wanted to define a function directly, but I do not think that would help my computation speed. So instead I am considering simply keeping the table and using it as a lookup table. Can a nonlinear solver interpret this? I am using Ipopt in a Pyomo environment.
Another idea would be to limit my feasible temperature range and simplify the relationship, maybe using piecewise linearization. Is that doable with 3D surfaces?
Thanks in advance!
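One note on the lookup-table idea: Ipopt is derivative-based, so it cannot consume a raw table directly; the table has to be turned into a function, e.g. an interpolant, or the piecewise-linear reformulation mentioned in the question (which for a surface means triangulating the (T_inlet, T_outlet) grid). A minimal bilinear-interpolation sketch with made-up numbers:

```python
import bisect

# Made-up tabulated efficiency eta(T_in, T_out) on a small grid.
T_in  = [300.0, 350.0, 400.0]
T_out = [500.0, 550.0]
eta = [[0.30, 0.34],   # eta[i][j] = efficiency at (T_in[i], T_out[j])
       [0.36, 0.40],
       [0.41, 0.44]]

def bilinear(x, y):
    # Locate the grid cell containing (x, y), clamping at the edges.
    i = max(0, min(bisect.bisect_right(T_in, x) - 1, len(T_in) - 2))
    j = max(0, min(bisect.bisect_right(T_out, y) - 1, len(T_out) - 2))
    tx = (x - T_in[i]) / (T_in[i + 1] - T_in[i])
    ty = (y - T_out[j]) / (T_out[j + 1] - T_out[j])
    # Continuous everywhere, but not differentiable across grid lines --
    # which is exactly why a raw lookup is awkward for a smooth NLP solver.
    return ((1 - tx) * (1 - ty) * eta[i][j] + tx * (1 - ty) * eta[i + 1][j]
            + (1 - tx) * ty * eta[i][j + 1] + tx * ty * eta[i + 1][j + 1])

print(round(bilinear(325.0, 525.0), 6))  # 0.35, the average of the four corner values
```

The kinks at grid lines can stall a smooth solver like Ipopt; a smoother interpolant (e.g. cubic splines) or the piecewise-linear MIP route avoids that problem in different ways.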

Practical solver for convex QCQP?

I am working with a convex QCQP of the following form:
min   e'Ie
s.t.  z'Iz = n
      [some linear equalities and inequalities that contain the variables w, z, and e]
      w >= 0,  z in [0,1]^n
So, apart from the objective, the problem has only one quadratic constraint, and some variables are nonnegative. The matrices of both quadratic forms are identity matrices and thus positive definite.
I can move the quadratic constraint into the objective, but it must enter with a negative sign, so the problem becomes nonconvex:
min e'Ie - z'Iz
The problem can have up to 10000 linear constraints, with 100 nonnegative variables and almost the same number of other variables.
The problem can also be rewritten as an MIQP, since z_i can be made binary and z'Iz = n can then be removed.
So far I have been working with CPLEX via AIMMS for the MIQP, and it is very slow on this problem. Using the QCQP form with CPLEX, MINOS, SNOPT, and CONOPT is hopeless: they either cannot find a solution, or the solution is not even close to an approximation that I know a priori.
Now I have three questions:
Do you know any method/technique to get rid of the quadratic constraint in this form without going to an MIQP?
Is there any "good" solver for this QCQP? By good, I mean a solver that efficiently finds the global optimum in a reasonable time.
Do you think an SDP relaxation could be a solution to this problem? I have never solved an SDP problem in reality, so I do not know how efficient the SDP version can be. Any advice?
Thanks.