Deriving equations for finite domain constraint system - optimization

The following system of equations and inequalities is solved for x1 and x2 over the integers.
x1 + x2 = l
x1 >= y1
x2 >= y2
x1 <= z1
x2 <= z2
l - z1 <= x2
l - z2 <= x1
l, y1, y2, z1, z2 are arbitrary but fixed and >= 0.
With the example values
l = 8
y1 = 1
y2 = 2
z1 = z2 = 6
I solve the system and get the following equations:
2 <= x1 <= 6
x2 = 8 - x1
When I ask WolframAlpha to solve it over the integers, it only enumerates all possible values.
My question is whether I can derive such equations/ranges for x1 and x2 for any given l, y1, y2, z1, z2 programmatically. This problem is related to constraint programming, and I found an old paper about it: "Compiling Constraint Solving using Projection" by Harvey et al.
Is this approach used in any modern constraint solving libraries?
The reason I ask is that I need to solve systems like the one above several thousand times with different parameters, and this takes a long time if the whole system is read, optimized, and solved over and over again. If I could compile my parameterized systems once and then just use the compiled versions, I would expect a massive speed gain.
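For this particular template, the projection onto x1 can be written down in closed form, which is essentially what a compiled version of the system would evaluate. A minimal Python sketch (the function name and structure are my own, not taken from the paper):
def compiled_bounds(l, y1, y2, z1, z2):
    # Substituting x2 = l - x1 turns every constraint into a bound on x1:
    #   lower bounds: x1 >= y1 and x1 >= l - z2
    #   upper bounds: x1 <= z1 and x1 <= l - y2 (from x2 >= y2)
    #   l - z1 <= x2 just duplicates x1 <= z1
    lo = max(y1, l - z2)
    hi = min(z1, l - y2)
    return lo, hi  # the domain is empty if lo > hi; x2 is then l - x1

print(compiled_bounds(8, 1, 2, 6, 6))  # (2, 6), matching 2 <= x1 <= 6 above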

Numerically stable calculation of invariant mass in particle physics?

In particle physics, we have to compute the invariant mass a lot. For a two-body decay it is given by
M**2 = m1**2 + m2**2 + 2*(E1*E2 - p1·p2), with Ei = sqrt(|pi|**2 + mi**2),
where p1·p2 is the dot product of the two momentum vectors. The momenta (p1, p2) are sometimes very large (up to a factor of 1000 or more) compared to the masses (m1, m2). In that case, there is large cancellation between the last two terms when the calculation is carried out with floating-point numbers on a computer.
What kind of numerical tricks can be used to compute this accurately for any inputs?
The question is about suitable numerical tricks to improve the accuracy of the calculation with floating-point numbers, so the solution should be language-agnostic. For demonstration purposes, implementations in Python are preferred. Solutions which reformulate the problem and increase the number of elementary operations are acceptable, but solutions which suggest using other number types, like decimal or multi-precision floating-point numbers, are not.
Note: The original question presented a simplified 1D problem in the form of a Python expression, but the question is about the general case where the momenta are given in three dimensions. The question was reformulated in this way.
With a few tricks listed on Stack Overflow and the transformation described by Jakob Stark in his answer, it is possible to rewrite the equation into a form that no longer suffers from catastrophic cancellation.
The original question asked for the 1D case, which has a simple solution, but in practice we need the formula in 3D, where the solution is more complicated. See this notebook for a full derivation.
Example implementation of numerically stable calculation in 3D in Python:
import numpy as np

# numerically stable implementation
@np.vectorize  # vectorized so that the scalar branch below also works on arrays
def msq2(px1, py1, pz1, px2, py2, pz2, m1, m2):
    p1_sq = px1 ** 2 + py1 ** 2 + pz1 ** 2
    p2_sq = px2 ** 2 + py2 ** 2 + pz2 ** 2
    m1_sq = m1 ** 2
    m2_sq = m2 ** 2
    x1 = m1_sq / p1_sq
    x2 = m2_sq / p2_sq
    x = x1 + x2 + x1 * x2
    a = angle(px1, py1, pz1, px2, py2, pz2)
    cos_a = np.cos(a)
    if cos_a >= 0:
        y1 = (x + np.sin(a) ** 2) / (np.sqrt(x + 1) + cos_a)
    else:
        y1 = -cos_a + np.sqrt(x + 1)
    y2 = 2 * np.sqrt(p1_sq * p2_sq)
    return m1_sq + m2_sq + y1 * y2

# numerically stable calculation of the angle between the two momenta
def angle(x1, y1, z1, x2, y2, z2):
    # cross product (only its norm is used, so the sign convention does not matter)
    cx = y1 * z2 - y2 * z1
    cy = x1 * z2 - x2 * z1
    cz = x1 * y2 - x2 * y1
    # norm of cross product
    c = np.sqrt(cx * cx + cy * cy + cz * cz)
    # dot product
    d = x1 * x2 + y1 * y2 + z1 * z2
    return np.arctan2(c, d)
The numerically stable implementation can never produce a negative result, a problem that commonly occurs with naive implementations, even in double precision.
Let's compare the numerically stable function with a naive implementation.
# naive implementation
def msq1(px1, py1, pz1, px2, py2, pz2, m1, m2):
    p1_sq = px1 ** 2 + py1 ** 2 + pz1 ** 2
    p2_sq = px2 ** 2 + py2 ** 2 + pz2 ** 2
    m1_sq = m1 ** 2
    m2_sq = m2 ** 2
    # energies of particles 1 and 2
    e1 = np.sqrt(p1_sq + m1_sq)
    e2 = np.sqrt(p2_sq + m2_sq)
    # dangerous cancellation in the third term
    return m1_sq + m2_sq + 2 * (e1 * e2 - (px1 * px2 + py1 * py2 + pz1 * pz2))
For the accuracy comparison, the momenta p1 and p2 are randomly picked from 1 to 1e5 and the masses m1 and m2 are randomly picked from 1e-5 to 1e5. All implementations get the input values in single precision. The reference in both cases is calculated with mpmath, using the naive formula with 100 decimal places.
The naive implementation loses all accuracy for some inputs, while the numerically stable implementation does not.
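A quick single-precision spot check (values chosen by me, with both momenta along the x-axis) shows the failure mode directly:
f32 = np.float32
args = [f32(v) for v in (1, 0, 0, 1, 0, 0, 1e-4, 1e-4)]  # px1, py1, pz1, px2, py2, pz2, m1, m2
print(msq1(*args))  # naive: roughly 2e-8, only half of the true value
print(msq2(*args))  # stable: roughly 4e-8, which is correct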
If you put e.g. m1 = 1e-4, m2 = 1e-4, p1 = 1 and p2 = 1 into the expression, you get about 4e-8 with double precision but 0.0 with a single-precision calculation. I assume that your question is about how one can also get the 4e-8 with a single-precision calculation.
What you can do is a Taylor expansion (around m1 = 0 and m2 = 0) of the expression above.
e ~ e|(m1=0,m2=0) + de/dm1|(m1=0,m2=0) * m1 + de/dm2|(m1=0,m2=0) * m2 + ...
If I calculated correctly, the zeroth- and first-order terms are 0, and the second-order expansion is
e ~ (p1+p2)/p1 * m1**2 + (p1+p2)/p2 * m2**2
This yields exactly 4e-8 even with a single-precision calculation. You can of course include more terms in the expansion if you need them, until you hit the precision limit of a single-precision float.
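As a quick check, the second-order formula can be evaluated directly in single precision (the function name below is mine):
import numpy as np

def msq_taylor2(p1, p2, m1, m2):
    # second-order expansion in m1 and m2 for the 1D (collinear) case
    return (p1 + p2) / p1 * m1 ** 2 + (p1 + p2) / p2 * m2 ** 2

f32 = np.float32
print(msq_taylor2(f32(1), f32(1), f32(1e-4), f32(1e-4)))  # ~4e-8, even in float32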
Edit
If the mi are not always much smaller than the pi, you can massage the equation further to get
e = m1**2 + m2**2 + 2*p1*p2*[ sqrt((1 + m1**2/p1**2) * (1 + m2**2/p2**2)) - 1 ]
The complicated part is now the one in the square brackets. It is essentially sqrt(x+1) - 1 for a wide range of x values, with x = m1**2/p1**2 + m2**2/p2**2 + m1**2*m2**2/(p1**2*p2**2). If x is very small, we can use the Taylor expansion of the square root (e.g. like here). If the x value is larger, the formula works just fine, because the addition and subtraction of 1 no longer discard the value of x due to floating-point precision. So a threshold for x must be chosen below which one switches to the Taylor expansion.
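A minimal sketch of this switching strategy (the threshold and the number of Taylor terms are illustrative choices of mine, not prescribed values):
import numpy as np

def sqrt1pm1(x, threshold=1e-3):
    # sqrt(1 + x) - 1 without catastrophic cancellation for small x:
    # below the threshold, use a truncated Taylor series of the square root;
    # above it, the direct formula no longer loses the value of x.
    x = np.asarray(x)
    taylor = x / 2 - x ** 2 / 8 + x ** 3 / 16
    direct = np.sqrt(1 + x) - 1
    return np.where(np.abs(x) < threshold, taylor, direct)
How many terms to keep and where to put the threshold has to be balanced against the target precision, as noted above.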

How to get multiple BFS using lpsolve

I'm trying to use the lpsolve IDE to solve LPs with multiple basic feasible solutions (BFS); however, only one solution is generated. What should I do?
min 2x1 + 4x2 + 7x3
st 2x1 + x2 + 6x3 >= 5
4x1 - 6x2 + 5x3 >= 8
x1 >= 0
x2 >= 0
I only get x1 = 2.5, x2 = x3 = 0
But there are other BFS, e.g. (0, 0, 8.5) and (19/8, 1/4, 0).
LP solvers typically just return one solution. With some solvers you may be able to find out which solutions were visited (I am not sure whether that can be done with lpsolve).
Enumerating all BFS is not that easy. Here is a (somewhat complicated) way to enumerate optimal solutions; it can also be used to enumerate all (or many) feasible solutions.
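For illustration only (with scipy rather than lpsolve, and with all variables non-negative, as lpsolve assumes by default), solving the example LP indeed returns just a single optimal vertex:
from scipy.optimize import linprog

c = [2, 4, 7]             # min 2*x1 + 4*x2 + 7*x3
A_ub = [[-2, -1, -6],     # 2*x1 +   x2 + 6*x3 >= 5, flipped to <=
        [-4,  6, -5]]     # 4*x1 - 6*x2 + 5*x3 >= 8, flipped to <=
b_ub = [-5, -8]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x)              # one basic optimal solution, e.g. [2.5, 0, 0]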

In an ILP problem, is it possible to constrain/penalize the number of decision variables used?

I'm trying to set up minimization problems with restrictions on the number of decision variables used.
Is it possible to do this within a linear programming framework? Or am I forced to use a more sophisticated optimization framework?
Suppose all x's are non-negative integers:
x1, x2, x3, x4, x5 >= 0
1) Constraint: Is it possible to set up the problem so that no more than 3 of the x's can be non-zero? e.g. if
x1 = 1, x2 = 2, x3 = 3 then x4 = 0 and x5 = 0
2) Penalty: Suppose there are 3 possible solutions to the problem:
a) x1 = 1, x2 = 2, x3 = 3, x4 = 0, x5 = 0
b) x1 = 2, x2 = 3, x3 = 0, x4 = 0, x5 = 0
c) x1 = 3, x2 = 0, x3 = 0, x4 = 0, x5 = 0
Because it is simpler, solution (c) is preferred over solution (b), which is preferred over solution (a), i.e. 'using' fewer decision variables is preferable.
In both questions I've simplified the problem down to 5 x's, but in reality I have 100's of x's to optimise over.
I can see how I might do this in a general optimisation framework using indicator/delta variables, but can't figure out how to do it in linear programming. Any help would be appreciated!
You can build your own indicators (and, unless your problem has some very specific structure, you also need to).
Assuming there is an upper bound ub_i for each of your integer variables x0, x1, ..., xn, introduce binary variables u0, u1, ..., un and post new constraints like:
u1 * ub_1 >= x1
u2 * ub_2 >= x2
...
(the ub_i constants are often called big-M constants; we keep them as small as possible to get better relaxations)
Then your cardinality constraint is simply:
sum(u) <= 3
Of course, you can also use those u variables in whatever penalty design you might want to use.
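A small sketch of this construction in PuLP (the data, bounds, and objective below are placeholders of my own; the linking and cardinality constraints are the point):
import pulp

n, ub, k = 5, 10, 3                    # assumed: 5 variables, upper bound 10, at most 3 non-zero
cost = [3, 2, 2, 1, 1]                 # placeholder objective coefficients

prob = pulp.LpProblem("cardinality_demo", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{i+1}", lowBound=0, upBound=ub, cat="Integer") for i in range(n)]
u = [pulp.LpVariable(f"u{i+1}", cat="Binary") for i in range(n)]

prob += pulp.lpSum(c * xi for c, xi in zip(cost, x))  # objective
prob += pulp.lpSum(x) >= 6                            # placeholder constraint

for xi, ui in zip(x, u):
    prob += xi <= ub * ui                             # x_i > 0 forces u_i = 1

prob += pulp.lpSum(u) <= k                            # 1) at most k non-zero x_i
# 2) for the penalty variant, drop the line above and add a small
#    weight times pulp.lpSum(u) to the objective instead

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([v.value() for v in x], [v.value() for v in u])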

Mixed Integer Linear Programming for a Ranking Constraint

I am trying to write a mixed-integer linear program for a constraint related to the rank of a specific variable, as follows:
I have X1, X2, X3, X4 as decision variables.
There is a constraint asking to define i as the rank of X1 (for example, if X1 is the largest number amongst X1, X2, X3, X4, then i=1; if X1 is the second largest number, then i=2; if X1 is the 3rd largest number, then i=3; else i=4).
How could I write this constraint in a mixed-integer linear program?
Not so easy. Here is an attempt:
First introduce binary variables y(i) for i=2,3,4
Then we can write:
x(1) >= x(i) - (1-y(i))*M i=2,3,4
x(1) <= x(i) + y(i)*M i=2,3,4
rank = 4 - sum(i,y(i))
y(i) ∈ {0,1} i=2,3,4
Here M is a large enough constant (a good choice is the maximum range of the data). If your solver supports indicator constraints, you can simplify things a bit.
A small example illustrates that it works:
---- 36 VARIABLE x.L
i1 6.302, i2 8.478, i3 3.077, i4 6.992
---- 36 VARIABLE y.L
i3 1.000
---- 36 VARIABLE rank.L = 3.000
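The same construction can be reproduced in Python with PuLP (a sketch of my own; the x values are fixed to the data above so that the model only has to derive the rank):
import pulp

vals = {1: 6.302, 2: 8.478, 3: 3.077, 4: 6.992}
M = 100                                       # large enough for this data range

prob = pulp.LpProblem("rank_of_x1", pulp.LpMinimize)
x = {i: pulp.LpVariable(f"x{i}") for i in vals}
y = {i: pulp.LpVariable(f"y{i}", cat="Binary") for i in (2, 3, 4)}
rank = pulp.LpVariable("rank", lowBound=1, upBound=4)

prob += rank                                  # dummy objective; the y_i are forced by the data
for i, v in vals.items():
    prob += x[i] == v                         # fix x to the example values
for i in (2, 3, 4):
    prob += x[1] >= x[i] - (1 - y[i]) * M     # y_i = 1  ->  x1 >= x_i
    prob += x[1] <= x[i] + y[i] * M           # y_i = 0  ->  x1 <= x_i
prob += rank == 4 - pulp.lpSum(y.values())

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(rank.value())                           # 3.0, as in the listing above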

Variable not defined in AMPL

I keep running into an error with AMPL: whenever I try to run my .mod file I get the error "Y1 is already defined". This is the first time I am using AMPL and I am not sure where I am going wrong. The following is my code, and I would really appreciate any help with this. I tried changing the variable name from Y1 to something else, but then I started getting the same error for another variable:
#Creating Variables
var Y1;
var Y2;
var Y3;
#writing the objective function
maximize Throughput:500 * Y1 + 450 * Y2 + 600 * Y3;
#writing constraints
subject to 1_limit: 8 * Y1 + 5 * Y2 + 8 * Y3 <=60;
subject to 2_limit: 10 * Y1 + 20 * Y2 + 10 * Y3 <=150;
subject to 3_limit: 0 <= Y1 <=8;
Put the line reset; at the front of your program.
AMPL remembers code previously run, so it is complaining that you have already defined Y1.