Inequality constraints of convex relaxation with McCormick envelope - optimization

I have a nonconvex optimization problem for which I am calculating a lower bound using the McCormick envelope. Each bilinear term x_i * x_j is replaced with an auxiliary variable w_{ij}, which is subject to the following constraints:
w_{ij} >= x_i^L * x_j + x_i * x_j^L - x_i^L * x_j^L
w_{ij} >= x_i^U * x_j + x_i * x_j^U - x_i^U * x_j^U
w_{ij} <= x_i^U * x_j + x_i * x_j^L - x_i^U * x_j^L
w_{ij} <= x_i^L * x_j + x_i * x_j^U - x_i^L * x_j^U
where
x^L <= x <= x^U
I am given a function taking in several arguments:
def convex_bounds(n, m, c, H, Q, A, b, lb, ub):
    # n is the number of optimization variables
    # m is the number of equality constraints
    # H is the positive semidefinite matrix from the objective function (n x n)
    # Q is (mxn) x n
    # A is m x n
    # b is the RHS of the nonlinear equality constraints (m x 1)
    # c, lb, ub are vectors of size (n x 1)
    ......................................
    # Create matrix B & b_ineq for inequality constraints
    # where B*x <= b_ineq
    B = np.eye(3)
    b_ineq = np.array((10, 10, 10))
    ## these values would work in a scenario with no bilinear terms
My problem is that I don't know how to specify the inequality constraint matrix B and vector b_ineq. For this particular exercise my variables are x1, x2 and x3, with bounds 0 (x^L) and 10 (x^U). My bilinear terms are x1*x2 and x2*x3, which lead to the auxiliary variables w_12 and w_23. How can I encode both the known bounds (0 and 10) for x1, x2 and x3 and the McCormick constraints from the theory above in B and b_ineq?
I don't actually know how to proceed with this.
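For what it's worth, here is a minimal sketch of how B and b_ineq could be assembled for this exercise, assuming the lifted variable vector is ordered as z = [x1, x2, x3, w12, w23] (the ordering, the add_row helper and the pair list are illustrative assumptions, not part of the given template):
import numpy as np

n = 3                        # original variables x1, x2, x3
pairs = [(0, 1), (1, 2)]     # (i, j) index pairs of the bilinear terms x1*x2 and x2*x3
lb, ub = np.zeros(n), np.full(n, 10.0)
n_tot = n + len(pairs)       # the x's followed by the auxiliary w's

rows, rhs = [], []

def add_row(coeffs, bound):
    # coeffs maps column index -> coefficient for one row of B
    r = np.zeros(n_tot)
    for col, val in coeffs.items():
        r[col] += val
    rows.append(r)
    rhs.append(bound)

# simple variable bounds:  x_i <= ub_i  and  -x_i <= -lb_i
for i in range(n):
    add_row({i: 1.0}, ub[i])
    add_row({i: -1.0}, -lb[i])

# the four McCormick inequalities for each w_k ~ x_i * x_j, rearranged into "<=" form
for k, (i, j) in enumerate(pairs):
    w = n + k
    add_row({j: lb[i], i: lb[j], w: -1.0}, lb[i] * lb[j])    # w >= xL_i*x_j + x_i*xL_j - xL_i*xL_j
    add_row({j: ub[i], i: ub[j], w: -1.0}, ub[i] * ub[j])    # w >= xU_i*x_j + x_i*xU_j - xU_i*xU_j
    add_row({j: -ub[i], i: -lb[j], w: 1.0}, -ub[i] * lb[j])  # w <= xU_i*x_j + x_i*xL_j - xU_i*xL_j
    add_row({j: -lb[i], i: -ub[j], w: 1.0}, -lb[i] * ub[j])  # w <= xL_i*x_j + x_i*xU_j - xL_i*xU_j

B = np.vstack(rows)          # here: 14 rows, 5 columns
b_ineq = np.array(rhs)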

Sum aggregate parts of a piecewise function [SymPy]

Here is a self-contained script.
from sympy import *
x = Symbol('x', real=True)
A = Symbol('A', real=True, positive=True, constant=True)
a = Symbol('a', real=True, positive=True, constant=True)
b = Symbol('b', real=True, positive=True, constant=True)
# Define wavefunction
psi_x_0 = Piecewise(
    (0, x < 0),
    (A * x / a, x <= a),
    (A * (b - x) / (b - a), x <= b),
    (0, True)
)
# Square Norm
square_norm = integrate(psi_x_0**2, (x, a, b))
Is there a way to get a sum of the branches of square_norm? I have tried applying Sum and sum to it, but these give errors. I want to ignore the branch conditions, which is not really the intended use case I realize.
A simple list comprehension should do the trick:
# square_norm.args is a tuple of 2-element tuples:
# ((expr1, cond1), (expr2, cond2), ...)
# let's sum expr1, expr2, ...
sum(arg[0] for arg in square_norm.args)
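For example, the same idea on a toy Piecewise (not the integral from the question):
from sympy import Piecewise, Symbol

t = Symbol('t', real=True)
pw = Piecewise((t, t < 1), (t**2, True))
# each element of pw.args is an (expression, condition) pair
print(sum(arg[0] for arg in pw.args))   # t**2 + t, conditions ignored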

How to convert the following if conditions to Linear integer programming constraints?

These are the conditions:
if (x > 0)
{
    y >= a;
    z <= b;
}
It would be quite easy to convert the conditions into linear programming constraints if x were a binary variable, but I am not finding a way to do this here.
You can do this in 2 steps
Step 1: Introduce a binary dummy variable
Since x is continuous, we can introduce a binary 0/1 dummy variable. Let's call it x_positive.
If x > 0 then we want x_positive = 1. We can achieve that via the following constraint, where M is a very large number (strict inequalities are not allowed in an LP, so we use <=):
x <= x_positive * M
Note that this forces x_positive to become 1, if x is itself positive. If x is negative, x_positive can be anything. (We can force it to be zero by adding it to the objective function with a tiny penalty of the appropriate sign.)
Step 2: Use the dummy variable to implement the next 2 constraints
In English: if x_positive = 1, then y >= a
However, if x_positive = 0, y can be anything (y > -inf)
y >= a - M * (1 - x_positive)
Similarly,
if x_positive = 1, then z <= b
z <= b + M * (1 - x_positive)
Both of the linear constraints above will kick in if x > 0 and will be trivially satisfied if x <= 0.
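For illustration, here is a minimal sketch of the whole formulation using the PuLP modelling library (the library choice, the variable bounds, and the values of a, b and M are assumptions for this sketch, not part of the original answer):
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary

a_val, b_val, M = 5.0, 20.0, 1e4   # hypothetical constants for illustration

prob = LpProblem("big_M_example", LpMinimize)
x = LpVariable("x", lowBound=-100, upBound=100)
y = LpVariable("y", lowBound=-100, upBound=100)
z = LpVariable("z", lowBound=-100, upBound=100)
x_positive = LpVariable("x_positive", cat=LpBinary)

# tiny penalty keeps x_positive at 0 whenever it is not forced to 1
prob += 1e-3 * x_positive

# Step 1: x > 0 forces x_positive = 1
prob += x <= M * x_positive
# Step 2: the conditional constraints, active only when x_positive = 1
prob += y >= a_val - M * (1 - x_positive)
prob += z <= b_val + M * (1 - x_positive)

prob.solve()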

Numerically stable calculation of invariant mass in particle physics?

In particle physics, we have to compute the invariant mass a lot. For a two-body decay it is
M^2 = m1^2 + m2^2 + 2*(E1*E2 - p1·p2), with E_i = sqrt(|p_i|^2 + m_i^2)
The momenta (p1, p2) are sometimes very large (up to a factor 1000 or more) compared to the masses (m1, m2). In that case, there is large cancellation happening between the last two terms when the calculation is carried out with floating point numbers on a computer.
What kind of numerical tricks can be used to compute this accurately for any inputs?
The question is about suitable numerical tricks to improve the accuracy of the calculation with floating point numbers, so the solution should be language-agnostic. For demonstration purposes, implementations in Python are preferred. Solutions which reformulate the problem and increase the amount of elementary operations are acceptable, but solutions which suggest to use other number types like decimal or multi-precision floating point numbers are not.
Note: The original question presented a simplified 1D problem in the form of a Python expression, but the question is about the general case where the momenta are given in 3D. The question was reformulated accordingly.
With a few tricks listed on Stackoverflow and the transformation described by Jakob Stark in his answer, it is possible to rewrite the equation into a form that does not suffer anymore from catastrophic cancellation.
The original question asked for a solution in 1D, which has a simple solution, but in practice, we need the formula in 3D and then the solution is more complicated. See this notebook for a full derivation.
Example implementation of numerically stable calculation in 3D in Python:
import numpy as np
# numerically stable implementation
# np.vectorize
def msq2(px1, py1, pz1, px2, py2, pz2, m1, m2):
    p1_sq = px1 ** 2 + py1 ** 2 + pz1 ** 2
    p2_sq = px2 ** 2 + py2 ** 2 + pz2 ** 2
    m1_sq = m1 ** 2
    m2_sq = m2 ** 2
    x1 = m1_sq / p1_sq
    x2 = m2_sq / p2_sq
    x = x1 + x2 + x1 * x2
    a = angle(px1, py1, pz1, px2, py2, pz2)
    cos_a = np.cos(a)
    if cos_a >= 0:
        y1 = (x + np.sin(a) ** 2) / (np.sqrt(x + 1) + cos_a)
    else:
        y1 = -cos_a + np.sqrt(x + 1)
    y2 = 2 * np.sqrt(p1_sq * p2_sq)
    return m1_sq + m2_sq + y1 * y2

# numerically stable calculation of angle
def angle(x1, y1, z1, x2, y2, z2):
    # cross product
    cx = y1 * z2 - y2 * z1
    cy = x1 * z2 - x2 * z1
    cz = x1 * y2 - x2 * y1
    # norm of cross product
    c = np.sqrt(cx * cx + cy * cy + cz * cz)
    # dot product
    d = x1 * x2 + y1 * y2 + z1 * z2
    return np.arctan2(c, d)
The numerically stable implementation can never produce a negative result, which is a commonly occurring problem with naive implementations, even in double precision.
Let's compare the numerically stable function with a naive implementation.
# naive implementation
def msq1(px1, py1, pz1, px2, py2, pz2, m1, m2):
    p1_sq = px1 ** 2 + py1 ** 2 + pz1 ** 2
    p2_sq = px2 ** 2 + py2 ** 2 + pz2 ** 2
    m1_sq = m1 ** 2
    m2_sq = m2 ** 2
    # energies of particles 1 and 2
    e1 = np.sqrt(p1_sq + m1_sq)
    e2 = np.sqrt(p2_sq + m2_sq)
    # dangerous cancellation in third term
    return m1_sq + m2_sq + 2 * (e1 * e2 - (px1 * px2 + py1 * py2 + pz1 * pz2))
For the comparison in the image, the momenta p1 and p2 are randomly picked from 1 to 1e5, and the masses m1 and m2 are randomly picked from 1e-5 to 1e5. All implementations get the input values in single precision. The reference in both cases is calculated with mpmath using the naive formula with 100 decimal places.
The naive implementation loses all accuracy for some inputs, while the numerically stable implementation does not.
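For reference, the two implementations are called like this (the specific kinematics below are only for illustration and are not from the original comparison):
f32 = np.float32
p1 = (f32(5e4), f32(1e2), f32(0.0))     # px1, py1, pz1
p2 = (f32(-5e4), f32(2e2), f32(0.0))    # px2, py2, pz2
m1, m2 = f32(0.1), f32(0.1)

print(msq1(*p1, *p2, m1, m2))   # naive, single precision
print(msq2(*p1, *p2, m1, m2))   # stable, single precision
print(msq1(*[float(v) for v in (*p1, *p2, m1, m2)]))  # naive, double precision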
If you put e.g. m1 = 1e-4, m2 = 1e-4, p1 = 1 and p2 = 1 into the expression, you get about 4e-8 with a double precision calculation but 0.0 with a single precision calculation. I assume that your question is about how one can get the 4e-8 with a single precision calculation as well.
What you can do is a Taylor expansion (around m1 = 0 and m2 = 0) of the expression above.
e ~ e|(m1=0,m2=0) + de/dm1|(m1=0,m2=0) * m1 + de/dm2|(m1=0,m2=0) * m2 + ...
If I calculated correctly, the zeroth and first order terms are 0 and the second order expansion would be
e ~ (p1+p2)/p1 * m1**2 + (p1+p2)/p2 * m2**2
This yields exactly 4e-8 even with single precision calculation. You can of course do more terms in the expansion if you need, until you hit the precision limit of a single float.
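To make these numbers concrete, here is a small check, assuming the 1D expression from the original question is the squared invariant mass (E1 + E2)**2 - (p1 + p2)**2 (the helper names below are just for illustration):
import numpy as np

def msq_naive_1d(p1, p2, m1, m2):
    e1 = np.sqrt(p1 * p1 + m1 * m1)
    e2 = np.sqrt(p2 * p2 + m2 * m2)
    return (e1 + e2) ** 2 - (p1 + p2) ** 2

def msq_taylor_1d(p1, p2, m1, m2):
    # second-order expansion around m1 = 0, m2 = 0
    return (p1 + p2) / p1 * m1 ** 2 + (p1 + p2) / p2 * m2 ** 2

f32 = np.float32
print(msq_naive_1d(f32(1), f32(1), f32(1e-4), f32(1e-4)))   # 0.0 in single precision
print(msq_taylor_1d(f32(1), f32(1), f32(1e-4), f32(1e-4)))  # ~4e-8 even in single precision
print(msq_naive_1d(1.0, 1.0, 1e-4, 1e-4))                   # ~4e-8 in double precision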
Edit
If the mi are not always much smaller than the pi you could further massage the equation to get
The complicated part is now the one in the square brackets. It is essentially sqrt(x+1)-1 for a wide range of x values. If x is very small, we can use the Taylor expansion of the square root (e.g. like here). If the x value is larger, the formula works just fine, because the addition and subtraction of 1 no longer discard the value of x due to floating point precision. So a threshold for x must be chosen below which one switches to the Taylor expansion.
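As a small sketch of that switch (the threshold value is an assumption and would have to be tuned to the working precision):
import numpy as np

def sqrt1pm1(x, threshold=1e-4):
    # computes sqrt(x + 1) - 1 while avoiding cancellation for small x
    if abs(x) < threshold:
        # second-order Taylor expansion of sqrt(1 + x) - 1 around x = 0
        return 0.5 * x - 0.125 * x * x
    return np.sqrt(x + 1.0) - 1.0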

Beginner Finite Element Code does not solve equation properly

I am trying to write the code for solving the extremely difficult differential equation:
x' = 1
with the finite element method.
As far as I understood, I can obtain the solution u as
u(x) = sum_i u_i * phi_i(x)
with the basis functions phi_i(x), while I can obtain the u_i as the solution of the system of linear equations:
sum_i ( integral phi_j(x) * D phi_i(x) dx ) * u_i = integral phi_j(x) * f(x) dx
with the differential operator D (here only the first derivative). As a basis I am using the tent function:
def tent(l, r, x):
    m = (l + r) / 2
    if x >= l and x <= m:
        return (x - l) / (m - l)
    elif x < r and x > m:
        return (r - x) / (r - m)
    else:
        return 0

def tent_half_down(l, r, x):
    if x >= l and x <= r:
        return (r - x) / (r - l)
    else:
        return 0

def tent_half_up(l, r, x):
    if x >= l and x <= r:
        return (x - l) / (r - l)
    else:
        return 0

def tent_prime(l, r, x):
    m = (l + r) / 2
    if x >= l and x <= m:
        return 1 / (m - l)
    elif x < r and x > m:
        return 1 / (m - r)
    else:
        return 0

def tent_half_prime_down(l, r, x):
    if x >= l and x <= r:
        return - 1 / (r - l)
    else:
        return 0

def tent_half_prime_up(l, r, x):
    if x >= l and x <= r:
        return 1 / (r - l)
    else:
        return 0

def sources(x):
    return 1
Discretizing my space:
n_vertex = 30
n_points = (n_vertex-1) * 40
space = (0,5)
x_space = np.linspace(space[0],space[1],n_points)
vertx_list = np.linspace(space[0],space[1], n_vertex)
tent_list = np.zeros((n_vertex, n_points))
tent_prime_list = np.zeros((n_vertex, n_points))
tent_list[0,:] = [tent_half_down(vertx_list[0],vertx_list[1],x) for x in x_space]
tent_list[-1,:] = [tent_half_up(vertx_list[-2],vertx_list[-1],x) for x in x_space]
tent_prime_list[0,:] = [tent_half_prime_down(vertx_list[0],vertx_list[1],x) for x in x_space]
tent_prime_list[-1,:] = [tent_half_prime_up(vertx_list[-2],vertx_list[-1],x) for x in x_space]
for i in range(1, n_vertex-1):
    tent_list[i, :] = [tent(vertx_list[i-1], vertx_list[i+1], x) for x in x_space]
    tent_prime_list[i, :] = [tent_prime(vertx_list[i-1], vertx_list[i+1], x) for x in x_space]
Calculating the system of linear equations:
b = np.zeros((n_vertex))
A = np.zeros((n_vertex,n_vertex))
for i in range(n_vertex):
    b[i] = np.trapz(tent_list[i, :] * sources(x_space))
    for j in range(n_vertex):
        A[j, i] = np.trapz(tent_prime_list[j] * tent_list[i])
And then solving and reconstructing it
u = np.linalg.solve(A,b)
sol = tent_list.T.dot(u)
But it does not work, I am only getting some up and down pattern. What am I doing wrong?
First, a couple of comments on terminology and notation:
1) You are using the weak formulation, though you've done this implicitly. A formulation being "weak" has nothing to do with the order of derivatives involved. It is weak because you are not satisfying the differential equation exactly at every location. FE minimizes the weighted residual of the solution, integrated over the domain. The functions phi_j actually discretize the weighting function. The difference when you only have first-order derivatives is that you don't have to apply the Gauss divergence theorem (which simplifies to integration by parts for one dimension) to eliminate second-order derivatives. You can tell this wasn't done because phi_j is not differentiated in the LHS.
2) I would suggest not using "A" as the differential operator. You also use this symbol for the global system matrix, so your notation is inconsistent. People often use "D", since this fits better to the idea that it is used for differentiation.
Secondly, about your implementation:
3) You are using way more integration points than necessary. Your elements use linear interpolation functions, which means you only need one integration point located at the center of the element to evaluate the integral exactly. Look into the details of Gauss quadrature to see why. Also, you've specified the number of integration points as a multiple of the number of nodes. This should be done as a multiple of the number of elements instead (in your case, n_vertex-1), because the elements are the domains on which you're integrating.
4) You have built your system by simply removing the two end nodes from the formulation. This isn't the correct way to specify boundary conditions. I would suggest building the full system first and using one of the typical methods for applying Dirichlet boundary conditions. Also, think about what constraining two nodes would imply for the differential equation you're trying to solve. What function exists that satisfies x' = 1, x(0) = 0, x(5) = 0? You have overconstrained the system by trying to apply 2 boundary conditions to a first-order differential equation.
Unfortunately, there isn't a small tweak that can be made to get the code to work, but I hope the comments above help you rethink your approach.
EDIT to address your changes:
1) Assuming the matrix A is addressed with A[row,col], then your indices are backwards. You should be integrating with A[i,j] = ...
2) A simple way to apply a constraint is to replace one row with the constraint desired. If you want x(0) = 0, for example, set A[0,j] = 0 for all j, then set A[0,0] = 1 and set b[0] = 0. This substitutes one of the equations with u_0 = 0. Do this after integrating.
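Put together, a minimal sketch of those two changes, reusing the arrays already built in the question (everything else left as is):
for i in range(n_vertex):
    b[i] = np.trapz(tent_list[i, :] * sources(x_space))
    for j in range(n_vertex):
        # row index = weighting (tent) function, not differentiated;
        # column index = trial function, differentiated
        A[i, j] = np.trapz(tent_prime_list[j] * tent_list[i])

# apply the single Dirichlet condition u(0) = 0 by replacing the first row
A[0, :] = 0.0
A[0, 0] = 1.0
b[0] = 0.0

u = np.linalg.solve(A, b)
sol = tent_list.T.dot(u)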

Negative values in Log likelihood of a bivariate gaussian

I am trying to implement a loss function which minimizes the negative log likelihood of obtaining the ground truth values (x, y) from the predicted bivariate Gaussian distribution parameters. I am implementing this in TensorFlow.
Here is the code -
def tf_2d_normal(self, x, y, mux, muy, sx, sy, rho):
    '''
    Function that implements the PDF of a 2D normal distribution
    params:
    x : input x points
    y : input y points
    mux : mean of the distribution in x
    muy : mean of the distribution in y
    sx : std dev of the distribution in x
    sy : std dev of the distribution in y
    rho : Correlation factor of the distribution
    '''
    # eq 3 in the paper
    # and eq 24 & 25 in Graves (2013)
    # Calculate (x - mux) and (y - muy)
    normx = tf.sub(x, mux)
    normy = tf.sub(y, muy)
    # Calculate sx*sy
    sxsy = tf.mul(sx, sy)
    # Calculate the exponential factor
    z = tf.square(tf.div(normx, sx)) + tf.square(tf.div(normy, sy)) - 2 * tf.div(tf.mul(rho, tf.mul(normx, normy)), sxsy)
    negRho = 1 - tf.square(rho)
    # Numerator
    result = tf.exp(tf.div(-z, 2 * negRho))
    # Normalization constant
    denom = 2 * np.pi * tf.mul(sxsy, tf.sqrt(negRho))
    # Final calculation: negative log of the PDF
    result = -tf.log(tf.div(result, denom))
    return result
When I am doing the training, I can see the loss value decreasing, but it goes well below 0. I understand that this should be because we are minimizing the 'negative' likelihood. But even though the loss values are decreasing, I can't get accurate results. Can someone help verify whether the code I have written for the loss function is correct?
Also, is such a loss behaviour desirable for training neural nets (specifically RNNs)?
Thanks
I see you've found the sketch-rnn code from magenta, I'm working on something similar. I found this piece of code not to be stable by itself. You'll need to stabilize it using constraints, so the tf_2d_normal code can't be used or interpreted in isolation. NaNs and Infs will start appearing all over the place if your data isn't normalized properly in advance or in your loss function.
Below is a more stable loss function version I'm building with Keras. There may be some redundancy in here, it may not be perfect for your needs but I found it to be working and you can test/adapt it. I included some inline comments on how large negative log values can arise:
# imports assumed for this snippet (not shown in the original answer)
import numpy as np
from keras import backend as K
from keras.backend import epsilon

def r3_bivariate_gaussian_loss(true, pred):
    """
    Rank 3 bivariate Gaussian loss function
    Returns results of eq # 24 of http://arxiv.org/abs/1308.0850
    :param true: truth values with at least [mu1, mu2, sigma1, sigma2, rho]
    :param pred: values predicted from a model with the same shape requirements as truth values
    :return: the log of the summed max likelihood
    """
    x_coord = true[:, :, 0]
    y_coord = true[:, :, 1]
    mu_x = pred[:, :, 0]
    mu_y = pred[:, :, 1]

    # exponentiate the sigmas and also make correlative rho between -1 and 1.
    # eq. # 21 and 22 of http://arxiv.org/abs/1308.0850
    # analogous to https://github.com/tensorflow/magenta/blob/master/magenta/models/sketch_rnn/model.py#L326
    sigma_x = K.exp(K.abs(pred[:, :, 2]))
    sigma_y = K.exp(K.abs(pred[:, :, 3]))
    rho = K.tanh(pred[:, :, 4]) * 0.1  # avoid drifting to -1 or 1 to prevent NaN, you will have to tweak this multiplier value to suit the shape of your data

    norm1 = K.log(1 + K.abs(x_coord - mu_x))
    norm2 = K.log(1 + K.abs(y_coord - mu_y))

    variance_x = K.softplus(K.square(sigma_x))
    variance_y = K.softplus(K.square(sigma_y))
    s1s2 = K.softplus(sigma_x * sigma_y)  # very large if sigma_x and/or sigma_y are very large

    # eq 25 of http://arxiv.org/abs/1308.0850
    z = ((K.square(norm1) / variance_x) +
         (K.square(norm2) / variance_y) -
         (2 * rho * norm1 * norm2 / s1s2))  # z → -∞ if rho * norm1 * norm2 → ∞ and/or s1s2 → 0
    neg_rho = 1 - K.square(rho)  # → 0 if rho → {1, -1}
    numerator = K.exp(-z / (2 * neg_rho))  # → ∞ if z → -∞ and/or neg_rho → 0
    denominator = (2 * np.pi * s1s2 * K.sqrt(neg_rho)) + epsilon()  # → 0 if s1s2 → 0 and/or neg_rho → 0
    pdf = numerator / denominator  # → ∞ if denominator → 0 and/or if numerator → ∞
    return K.log(K.sum(-K.log(pdf + epsilon())))  # → -∞ if pdf → ∞
Hope you find this of value.
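A hypothetical usage sketch (the architecture, shapes and layer sizes below are assumptions, not from the original answer): the loss expects rank-3 tensors of shape (batch, timesteps, 5), with the ground-truth x/y in the first two channels of the target.
from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed

model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(20, 8)),  # e.g. 20 timesteps, 8 input features
    TimeDistributed(Dense(5)),  # predicts [mu_x, mu_y, sigma_x, sigma_y, rho] per timestep
])
model.compile(optimizer='adam', loss=r3_bivariate_gaussian_loss)
# model.fit(X, Y) with X shaped (n, 20, 8) and Y shaped (n, 20, 5)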