SCIP: get sensitivity range of the objective function coefficient of a variable

I am implementing a branching rule in SCIP (using the C API). During the BRANCHEXECLP callback, I need to get the sensitivity range of the objective function coefficient of the candidate variables. Is there a way to get this information?
I want a function to get a range [x1, x2] for a variable x such that changing the objective coefficient of x by a value in this range does not change the optimal solution of the LP relaxation.

OK, so I think you only need to query the reduced costs as well as the tableau coefficients. The reduced cost you get by calling SCIPgetVarRedcost; for the tableau row, you should call SCIPgetLPBInvARow.
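For illustration, here is a minimal numpy sketch of the textbook sensitivity computation for a minimization LP, assuming you have already extracted the reduced costs of the nonbasic columns and, for a basic variable, the corresponding entries of its row of B^-1*A (which is what SCIPgetLPBInvARow provides in the C API). The function names and the way the data is passed in are hypothetical:

import numpy as np

def objcoef_range_nonbasic(c_j, redcost_j):
    # nonbasic variable at its lower bound (minimization): c_j may
    # drop by at most its reduced cost before the basis changes
    return (c_j - redcost_j, np.inf)

def objcoef_range_basic(c_j, redcosts, tableau_row):
    # basic variable: a change delta in c_j keeps the basis optimal as
    # long as every nonbasic reduced cost d_k - delta * alpha_k stays >= 0,
    # where alpha_k are this variable's entries of B^-1*A
    d = np.asarray(redcosts, dtype=float)
    a = np.asarray(tableau_row, dtype=float)
    lo = np.max(d[a < 0] / a[a < 0], initial=-np.inf)
    hi = np.min(d[a > 0] / a[a > 0], initial=np.inf)
    return (c_j + lo, c_j + hi)

Note that this range keeps the optimal basis (and hence the optimal solution) fixed; the optimal objective value itself of course still changes with the coefficient.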

Related

Get covariance of best-fit parameters obtained by lmfit using non-"Leastsq" methods

I have some observational data and I want to fit some model parameters by using lmfit.Minimizer() to minimize an error function which, for reasons I won't get into here, must return a float instead of an array of residuals. This means that I cannot use the Leastsq method to minimize the function. In practice, methods nelder, BFGS and powell converge fine, but these methods do not provide the covariance of the best-fit parameters (MinimizerResult.covar).
I would like to know if there is a simple way to compute this covariance when using any of the non-Leastsq methods.
It is true that the leastsq method is the only method that can calculate error bars and that this requires a residual array (with more elements than variables!).
It turns out that some work has been done in lmfit toward the goal of being able to compute uncertainties for scalar minimizers, but it is not complete. See https://github.com/lmfit/lmfit-py/issues/169 and https://github.com/lmfit/lmfit-py/pull/481. If you're interested in helping, that would be great!
But, yes, you could compute the covariance by hand. For each variable, you would need to make a small perturbation to its value (ideally around 1-sigma, but since that is what you're trying to calculate, you probably don't know it) and then fix that value and optimize all the other values. In this way you can compute the Jacobian matrix (derivative of the residual array with respect to the variables).
From the Jacobian matrix, the covariance matrix is (assuming there are no singularities):
covar = numpy.linalg.inv(numpy.dot(numpy.transpose(jacobian), jacobian))
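If it helps, here is a minimal numpy sketch of that computation with a plain forward-difference Jacobian (simpler than the procedure described above, since it does not re-optimize the other parameters after each perturbation); residual_func and best_params are hypothetical stand-ins for your residual function and best-fit values:

import numpy as np

def covariance_estimate(residual_func, best_params, eps=1e-8):
    # finite-difference Jacobian of the residual array w.r.t. the parameters
    p0 = np.asarray(best_params, dtype=float)
    r0 = residual_func(p0)
    n, p = r0.size, p0.size
    jac = np.empty((n, p))
    for i in range(p):
        step = eps * max(1.0, abs(p0[i]))
        pi = p0.copy()
        pi[i] += step
        jac[:, i] = (residual_func(pi) - r0) / step
    covar = np.linalg.inv(jac.T @ jac)
    # usually scaled by the reduced chi-square so the error bars
    # reflect the actual scatter of the data
    return covar * (r0 @ r0) / (n - p)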

Does Z3 give the maximum value for a summation

I am using Z3 to solve an optimization problem. The objective is to maximize the value of a variable, call it X, where X is the summation:
X = x1 + x2 + x3 + x4 + ... + xi
Each term from x1 to xi represents a non-linear equation, so I can't use the optimization APIs. Instead, I first get a value for X and begin a loop; in each iteration, I add another constraint requiring X to be greater than the previously generated value.
I noticed that the first value is already the maximum: each time the program enters the loop, I wait a very long time for another, greater value, but it never generates one. I changed the input values, and this happens every time.
Is that a coincidence, or is Z3 designed so that it generates the maximum value for such formulas?
Z3 doesn't really do non-linear optimization: Depending on the heuristics it uses, it may or may not give you an answer. (Most likely it'll either say unknown or run forever.) The hack you're implementing is likely the best you can get if you have truly non-linear constraints and you're not getting any mileage from z3 out-of-the-box. Another option would be to use strategies/tactics to guide the solver, but that is not for the faint of heart and is not guaranteed to work.
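For concreteness, here is a z3py sketch of that maximize-by-iteration hack (the variables and the nonlinear constraint are made up for illustration):

from z3 import Real, Solver, sat

x1, x2 = Real('x1'), Real('x2')
X = x1 + x2

s = Solver()
s.add(x1 * x1 + x2 * x2 <= 10)  # stand-in nonlinear constraint

best = None
while s.check() == sat:
    best = s.model().eval(X, model_completion=True)
    s.add(X > best)  # demand strict improvement in the next round
# if the loop exits with unsat, best is the maximum; if check() keeps
# returning unknown or never finishes, best is only a lower bound
print(best)

Keep in mind that over the reals this loop need not terminate at all; getting the maximum from the very first model, as you observed, is possible but not something z3 guarantees.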
See here for the original optimization z3 paper, which clearly states it is for the linear fragment: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/nbjorner-nuz.pdf
For a good read on strategies in z3, see: http://www.cs.tau.ac.il/~msagiv/courses/asv/z3py/strategies-examples.htm

Usage of scipy.optimize.fmin_slsqp for Integer design variable

I'm trying to use scipy.optimize.fmin_slsqp for an industrial constrained optimization problem. A highly non-linear FE model is used to generate the objective and the constraint functions, together with their derivatives/sensitivities.
The objective function is in the form:
obj = a number calculated from the FE model
A series of constraint functions are set, and most of them are in the form:
cons = real number i - real number j (calculated from the FE model)
I would like to restrict the design variables to integers, as that is what would be fed into the plant machine.
Another consideration is to keep a log file recording which design variables have already been tried: if a set of (integer) design variables has already been tried, skip the calculation, perturb the design variables, and try again. By limiting the design variables to integers, we can limit the number of trials (if the design variables are left as reals, a change in, say, the 8th decimal place could be regarded as an untried value), as sketched below.
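A minimal sketch of such a cache, assuming the FE evaluation is wrapped in a Python callable (fe_objective is a hypothetical stand-in for your FE model):

import numpy as np

tried = {}  # maps integer design tuples to objective values

def evaluate_design(x, fe_objective):
    # round the design to integers and evaluate only unseen designs
    key = tuple(int(round(v)) for v in x)
    if key not in tried:
        tried[key] = fe_objective(np.array(key, dtype=float))
    return tried[key]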
I'm using SLSQP as it is one of the SQP methods (please correct me if I am wrong), and it is said to be powerful for nonlinear problems. I understand that the SLSQP algorithm is a gradient-based optimizer and that there is no way to impose integrality on the design variables inside the algorithm coded in FORTRAN. So instead, I modified the slsqp.py file as follows (where it calls the Python extension built from the FORTRAN algorithm):
slsqp(m, meq, x, xl, xu, fx, c, g, a, acc, majiter, mode, w, jw)
# truncate every design variable to an integer after each SLSQP step
for i in range(len(x)):
    x[i] = int(x[i])
The code stops at the 2nd iteration and outputs the following:
Optimization terminated successfully. (Exit mode 0)
Current function value: -1.286621577077517
Iterations: 7
Function evaluations: 0
Gradient evaluations: 0
However, one of the constraint functions is violated (its value is about -5.2, while the default convergence tolerance of the optimization code is 10^-6).
Questions:
1. Since the FE model is highly nonlinear, I think it's safe to assume the objective and constraint functions will be highly nonlinear too (regardless of their mathematical form). Is that correct?
2. The convergence criterion of the SLSQP algorithm (please see below) requires, among other things, that the sum of all constraint violations (absolute values) be less than a very small value (10^-6). How could the optimization exit with a successful termination message?
IF ((ABS(f-f0).LT.acc .OR. dnrm2_(n,s,1).LT.acc).AND. h3.LT.acc)
Any help or advice is appreciated. Thank you.

"Precomputation" of a matrix in mathprog

I have a domain problem formulation in MathProg, where the cost function uses geometrical distances. The data sets contain only X,Y coordinates and not the actual distances. Right now, my formulation calculates the distances directly:
minimize total: sum{(f, c) in S} x[f, c] * sqrt(((facilityXs[f] - customerXs[c])**2) + ((facilityYs[f] - customerYs[c])**2));
I want to know whether the MathProg compiler is smart enough to see that the expression inside sqrt is constant, so the whole thing can be precomputed, or whether it recalculates the expression every time, and how I can write it in a more elegant way.
Yes, the MathProg 'compiler' is smart enough. It will precompute all expressions containing solely parameters (and then create a computation matrix containing just one numeric value per cell). If you put variables inside non-linear functions like sqrt(), the precomputation will fail.
A more elegant way is to keep your core set of equations linear. I often use separate parameters calculated by 'prequations', to keep the main formulations clean and simple.
param distance{(f,c) in S} := sqrt(((facilityXs[f] - customerXs[c])**2) + ((facilityYs[f] - customerYs[c])**2));
minimize total: sum{(f, c) in S} x[f, c] * distance[f, c];
If the expression inside sqrt doesn't contain variables, then it will be evaluated at the translation stage and sent to the solver as a constant (coefficient of x[f, c]).

Fitting curves to a set of points

Basically, I have a set of up to 100 co-ordinates, along with the desired tangents to the curve at the first and last point.
I have looked into various methods of curve fitting, by which I mean an algorithm which takes the input data points and tangents and outputs the equation of the curve, such as the Gaussian method and interpolation, but I really struggled to understand them.
I am not asking for code (if you choose to give it, that's welcome though :) ); I am simply looking for help with this algorithm. It will eventually be converted to Objective-C for an iPhone app, if that changes anything.
EDIT:
I know the order of all of the points. They are not too close together, so passing through all points is necessary, i.e. interpolation (unless anyone can suggest something else). And as far as I know, an algebraic curve is what I'm looking for. This is all being done on a 2D plane, by the way.
I'd recommend considering cubic splines. There is an explanation and code to calculate them in plain C in the Numerical Recipes book (chapter 3.3).
Most interpolation methods originally work with functions: given a set of x and y values, they compute a function which produces a y value for every x value, meeting the specified constraints. As a function can only ever compute a single y value for every x value, such a curve cannot loop back on itself.
To turn this into a real 2D setup, you want two functions which compute x and y values, respectively, based on some parameter that is conventionally called t. So the first step is computing t values for your input data. You can usually get a good approximation by summing over Euclidean distances: think about a polyline connecting all your points with straight segments. Then the parameter would be the distance along this line for every input pair.
So now you have two interpolation problems: one computing x from t and the other computing y from t. You can formulate this as a spline interpolation, e.g. using cubic splines. That gives you a large system of linear equations which you can solve iteratively up to the desired precision.
The result of a spline interpolation will be a piecewise description of a suitable curve. If you wanted a single equation, then a Lagrange interpolation would fit that bill, but the result might have odd twists and turns for many sets of input data.
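If you prototype in Python before porting to Objective-C, here is a minimal sketch of the parametric approach described above, using chord-length parameterization and scipy's CubicSpline with clamped (first-derivative) end conditions; the tangent arguments are assumed to be dx/dt, dy/dt direction vectors expressed in the chosen parameterization:

import numpy as np
from scipy.interpolate import CubicSpline

def parametric_spline(points, start_tangent, end_tangent):
    # points: (n, 2) array in traversal order
    pts = np.asarray(points, dtype=float)
    # chord-length parameterization: t grows with Euclidean distance
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate(([0.0], np.cumsum(seg)))
    # clamped ends: prescribe dx/dt and dy/dt at the first and last point
    sx = CubicSpline(t, pts[:, 0],
                     bc_type=((1, start_tangent[0]), (1, end_tangent[0])))
    sy = CubicSpline(t, pts[:, 1],
                     bc_type=((1, start_tangent[1]), (1, end_tangent[1])))
    return sx, sy, t

Evaluating sx(u) and sy(u) for u between 0 and t[-1] then traces a curve through all input points with the prescribed end tangents.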