Is Romberg integration implemented as a weighted sum of function values numerically correct?

I have to integrate the expression f(x) * g(x) for many different functions f but just one g.
I want to compute the integral as a sum of weighted values of f(x) * g(x) instead of building the Romberg table each time. Note that in Python I may write:
sum(w[i] * f(x[i]) * g(x[i]) for i in range(2 ** k + 1))
as:
wg = [w[i] * g(x[i]) for i in range(2 ** k + 1)]
sum(wg[i] * f(x[i]) for i in range(2 ** k + 1))
where w[i] are the weights of the function values used by the Romberg method, which may be calculated like this:
import numpy as np
from scipy.integrate import romb
w = romb(np.eye(2 ** k + 1))
Is such an implementation of the Romberg method numerically safe?
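As a quick sanity check (a small self-contained sketch; k = 4 and f = exp are arbitrary choices for illustration), the weights extracted this way reproduce romb up to rounding for any integrand, because Romberg extrapolation is a fixed linear combination of the sample values:
import numpy as np
from scipy.integrate import romb

k = 4
n = 2 ** k + 1
a, b = 0.0, 1.0
x = np.linspace(a, b, n)
dx = (b - a) / (n - 1)

# integrating the i-th unit vector yields the weight attached to sample i
w = romb(np.eye(n), dx=dx)

f = np.exp  # any integrand
assert np.isclose(w @ f(x), romb(f(x), dx=dx))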
(The question has also been asked on Computational Science SE: https://scicomp.stackexchange.com/questions/35469/is-romberg-integration-method-implemented-as-weighted-function-values-numericall)

Inequality constraints of convex relaxation with McCormick envelope

I have a nonconvex optimization problem for which I am calculating a lower bound using the McCormick envelope. Each bilinear term is replaced with an auxiliary variable which has the following constraints defined:
w_{ij} >= x_i^L * x_j + x_i * x_j^L - x_i^L * x_j^L
w_{ij} >= x_i^U * x_j + x_i * x_j^U - x_i^U * x_j^U
w_{ij} <= x_i^U * x_j + x_i * x_j^L - x_i^U * x_j^L
w_{ij} <= x_i^L * x_j + x_i * x_j^U - x_i^L * x_j^U
where
x_i^L <= x_i <= x_i^U
I am given a function taking in several arguments:
def convex_bounds(n, m, c, H, Q, A, b, lb, ub):
    # n is the number of optimization variables
    # m is the number of equality constraints
    # H is the positive semidefinite matrix from the objective function (n x n)
    # Q is (m x n) x n
    # A is m x n
    # b is the RHS of the nonlinear equality constraints (m x 1)
    # c, lb, ub are vectors of size (n x 1)
    ......................................
    # Create matrix B & vector b_ineq for the inequality constraints
    # where B*x <= b_ineq
    B = np.eye(3)
    b_ineq = np.array((10, 10, 10))
    ## these values would work in a scenario with no bilinear terms
My problem is that I don't know how to specify the inequality constraint matrix B and the vector b_ineq. For this particular exercise my variables are x1, x2 and x3 with bounds 0 (x^L) and 10 (x^U). My bilinear terms are x_1*x_2 and x_2*x_3 (which lead to the auxiliary variables w_12 and w_23). How can I encode the known bounds (0 and 10) for x1, x2 and x3 and the McCormick constraints above in B and b_ineq?
I don't actually know how to proceed with this.
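For concreteness, one way to stack the simple bounds and the McCormick rows into a single B*z <= b_ineq system is sketched below. The variable ordering z = [x1, x2, x3, w12, w23] is an assumption made for illustration, not something fixed by the exercise; every ">=" constraint is multiplied by -1 so that all rows read "<=".
import numpy as np

n_x, n_w = 3, 2
n_var = n_x + n_w
lb = np.zeros(n_x)            # x^L
ub = 10.0 * np.ones(n_x)      # x^U
bilinear = [(0, 1), (1, 2)]   # (i, j) pairs -> w12, w23

rows, rhs = [], []

# simple bounds as rows of B*z <= b_ineq:  x_i <= x_i^U  and  -x_i <= -x_i^L
for i in range(n_x):
    r = np.zeros(n_var); r[i] = 1.0
    rows.append(r); rhs.append(ub[i])
    r = np.zeros(n_var); r[i] = -1.0
    rows.append(r); rhs.append(-lb[i])

# four McCormick rows per bilinear term, all rewritten in "<=" form
for k, (i, j) in enumerate(bilinear):
    w = n_x + k
    # -w + x_i^L*x_j + x_j^L*x_i <= x_i^L*x_j^L
    r = np.zeros(n_var); r[w] = -1.0; r[j] = lb[i]; r[i] = lb[j]
    rows.append(r); rhs.append(lb[i] * lb[j])
    # -w + x_i^U*x_j + x_j^U*x_i <= x_i^U*x_j^U
    r = np.zeros(n_var); r[w] = -1.0; r[j] = ub[i]; r[i] = ub[j]
    rows.append(r); rhs.append(ub[i] * ub[j])
    # w - x_i^U*x_j - x_j^L*x_i <= -x_i^U*x_j^L
    r = np.zeros(n_var); r[w] = 1.0; r[j] = -ub[i]; r[i] = -lb[j]
    rows.append(r); rhs.append(-ub[i] * lb[j])
    # w - x_i^L*x_j - x_j^U*x_i <= -x_i^L*x_j^U
    r = np.zeros(n_var); r[w] = 1.0; r[j] = -lb[i]; r[i] = -ub[j]
    rows.append(r); rhs.append(-lb[i] * ub[j])

B = np.vstack(rows)
b_ineq = np.array(rhs)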

Numpy nditer for non-broadcastable algorithms

TLDR: how to set up nditer when my algorithm needs a different number of values from each operand, but I want broadcasting to be applied over the "other" axes.
I'm in the process of converting some algorithms to cython since I've got a lot of looping overhead in the implementation.
Originally I implemented the algorithms with support for broadcasting to allow various use-cases, and I would like to keep that.
The algorithms are quite involved, but the issue can be summarized by the following example code:
a = np.arange(5)
b = np.arange(11)
c = 0
for idx in range(len(a)):
    c += a[idx] * b[2 * idx]
    c += a[idx] * b[2 * idx + 1]
Broadcasting of this could be implemented along the first axis with the exact same code:
a = np.arange(5 * 7).reshape((5, 7, 1))
b = np.arange(11 * 6).reshape((11, 1, 6))
c = 0
for idx in range(a.shape[0]):
    c += a[idx] * b[2 * idx]
    c += a[idx] * b[2 * idx + 1]
or along the last axis with some slight modifications (not the same result, but that's not of importance here):
a = np.arange(5 * 7).reshape((7, 1, 5))
b = np.arange(11 * 6).reshape((1, 6, 11))
c = 0
for idx in range(a.shape[-1]):
    c += a[..., idx] * b[..., 2 * idx]
    c += a[..., idx] * b[..., 2 * idx + 1]
The actual algorithms I have need multiple nested for-loops for each "broadcastable unit", can involve more "inputs" (a and b here), and the "output" (c) can also be another array instead of a single value.
When the inner loops are moved over to cython, broadcasting is no longer an option. It seems like nditer would be the way to go here, but I cannot figure out how to make it ignore the fact that one of the axes is not broadcastable. I expected that
a = np.arange(5 * 7).reshape((7, 1, 5))
b = np.arange(11 * 6).reshape((1, 6, 11))
it = np.nditer([a, b], flags=['external_loop'])
would allow me to loop over all axes except the one where I apply my custom algorithm, but that does not seem to be the case. Instead I'm met with a ValueError: operands could not be broadcast together with shapes (7,1,5) (1,6,11).
Ideally I would be able to loop as
for a_inner, b_inner, out_inner in it:
    out_inner[...] = call_to_cythonized_algorithm(a_inner, b_inner)
where the shapes of the _inner variables match what I need for the algorithm (5 and 11, with a scalar output, in the examples above), or potentially have one extra dimension which I could loop over in the cython code.
I've tried a couple of other flags as well, but I don't really know what I'm doing, and none of them give me an iterator that works.
Is this possible with the current API, or have I found a limitation of nditer?
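One workaround that sidesteps nditer entirely is to broadcast only the "loop" axes by hand and iterate over them with np.ndindex, passing the untouched core axes to the kernel. A minimal sketch (requires NumPy 1.20+ for np.broadcast_shapes; core_op is a hypothetical stand-in for the cythonized kernel, and the core axis is assumed to be the last one):
import numpy as np

def core_op(a_vec, b_vec):
    # placeholder for the cythonized algorithm: (5,) and (11,) -> scalar
    c = 0
    for idx in range(len(a_vec)):
        c += a_vec[idx] * b_vec[2 * idx]
        c += a_vec[idx] * b_vec[2 * idx + 1]
    return c

a = np.arange(5 * 7).reshape((7, 1, 5))
b = np.arange(11 * 6).reshape((1, 6, 11))

# broadcast only the leading ("loop") axes; leave the last ("core") axis alone
loop_shape = np.broadcast_shapes(a.shape[:-1], b.shape[:-1])   # (7, 6)
a_b = np.broadcast_to(a, loop_shape + a.shape[-1:])
b_b = np.broadcast_to(b, loop_shape + b.shape[-1:])

out = np.empty(loop_shape)
for idx in np.ndindex(*loop_shape):
    out[idx] = core_op(a_b[idx], b_b[idx])
Alternatively, np.vectorize with a gufunc-style signature, e.g. np.vectorize(core_op, signature='(n),(m)->()'), applies the same outer-loop broadcasting, though it still loops in Python.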

Gurobi objective function

I am trying to convert an objective function from scipy to Gurobi as follows, but I am getting "unsupported operand type(s) for ** or pow(): 'gurobipy.LinExpr' and 'float'".
Any idea how I could rewrite the below? Thanks in advance.
from gurobipy import *
import scipy.optimize as optimize
price = 95.0428
par = 100.0
T = 1.5
coup = 5.75
freq = 2
guess = 0.05
freq = float(freq)
periods = T * freq
coupon = coup / 100. * par / freq
dt = [(i + 1) / freq for i in range(int(periods))]
# converting the scipy.optimize version below to Gurobi
#ytm_func = lambda y: sum([coupon / (1 + y / freq) ** (freq * t) for t in dt]) + (par / (1 + y / freq) ** (freq * T)) - price
#optimize.newton(ytm_func, guess)
m = Model()
y = m.addVar(vtype=GRB.CONTINUOUS, name='y')
m.setObjective(quicksum([coupon / (1 + y / freq) ** (freq * t) for t in dt]) + (par / (1 + y / freq) ** (freq * T)) - price, GRB.MINIMIZE)
m.optimize()
m.printAttr('X')
Hi, I think what you are trying to do is not supported by Gurobi yet, at least not as a quadratic program.
First, you have your variable in the denominator, which is not advised / supported directly.
Second, what you are defining is not a quadratic problem but a polynomial one. As far as I know, Gurobi currently supports only quadratic programs, with expressions such as y*y.
This is an unconstrained problem, so I wonder why you need Gurobi at all. Scientific solvers handle such problems well using gradient descent, Newton's method, and so on.
I hope this helps.
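For reference, the commented-out SciPy approach from the question already solves this directly as a root-finding problem; a minimal standalone sketch:
import scipy.optimize as optimize

price, par, T = 95.0428, 100.0, 1.5
coup, freq, guess = 5.75, 2.0, 0.05
periods = int(T * freq)
coupon = coup / 100. * par / freq
dt = [(i + 1) / freq for i in range(periods)]

# bond pricing residual: PV of coupons + PV of par - market price
def ytm_func(y):
    return sum(coupon / (1 + y / freq) ** (freq * t) for t in dt) \
           + par / (1 + y / freq) ** (freq * T) - price

ytm = optimize.newton(ytm_func, guess)  # Newton's method on the residual
print(ytm)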

How to get correct phase values using numpy.fft

import numpy as np
import matplotlib.pyplot as plt
n = 500
T = 10
dw = 2 * np.pi / T
t = np.linspace(0, T, n)
x = 5 * np.sin(20 * t + np.pi) + 10 * np.sin( 40 * t + np.pi/2)
fftx = np.fft.rfft(x)
freq = np.fft.rfftfreq(n) * n * dw
amps = np.abs(fftx) * 2 / n
angs = np.angle(fftx)
_, ax = plt.subplots(3, 1)
ax[0].plot(t, x)
ax[1].plot(freq, amps)
ax[2].plot(freq, angs)
I get correct values for frequency and amplitude, but as seen from the plot the phase values are not correct. How do I extract the correct phase values from the FFT, and what exactly am I looking at in the phase plot?
I am expecting approximately π (3.14) and π/2 (1.57) for the frequencies 20 and 40, respectively.
There are two issues with computing the phase:
Your input signal does not contain an integer number of periods. If you replicate the signal repeatedly, you'll see you actually have a different set of frequency components than you assume when you construct the signal (the DFT can be thought of as using an infinite repetition of your signal as input). This causes the peaks to have some width to them; it also causes the phase to shift a bit.
This issue you can fix by either windowing your signal, or creating it so it has an integer number of periods. The latter is:
T = 3 * np.pi
t = np.linspace(0, T, n, endpoint=False)
At the frequencies where there is no signal (which, after the fix above, is every frequency except two), the phase is given by noise. You can set the phase there to zero:
angs[amps < 1] = 0
Now the phase plot is clean: zero everywhere except at the two signal frequencies.
The phases are not as you expected, because the sine has a phase of -π/2. Repeat the experiment with cos instead of sin and you get the phases you were expecting.
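Putting the two fixes together, a minimal sketch of the corrected script (the expected values follow from sin(wt + p) = cos(wt + p - π/2), since the FFT phase is measured relative to a cosine):
import numpy as np

n = 500
T = 3 * np.pi                       # both tones now fit an integer number of periods
dw = 2 * np.pi / T
t = np.linspace(0, T, n, endpoint=False)
x = 5 * np.sin(20 * t + np.pi) + 10 * np.sin(40 * t + np.pi / 2)

fftx = np.fft.rfft(x)
freq = np.fft.rfftfreq(n) * n * dw
amps = np.abs(fftx) * 2 / n
angs = np.angle(fftx)
angs[amps < 1] = 0                  # suppress the noise phase where there is no signal

for w_target, expected in [(20, np.pi / 2), (40, 0.0)]:
    k = np.argmin(np.abs(freq - w_target))
    print(w_target, angs[k], expected)
This prints roughly π/2 for the 20 rad/s component (the sine's phase π minus π/2) and 0 for the 40 rad/s component (π/2 minus π/2).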

Beginner finite element code does not solve equation properly

I am trying to write the code for solving the extremely difficult differential equation:
x' = 1
with the finite element method.
As far as I understood, I can obtain the solution u as
u(x) = sum_i u_i * phi_i(x)
with the basis functions phi_i(x), while I can obtain the u_i as the solution of the system of linear equations
sum_i u_i * integral(phi_j(x) * D phi_i(x) dx) = integral(phi_j(x) * f(x) dx)
with the differential operator D (here only the first derivative). As a basis I am using the tent function:
def tent(l, r, x):
    m = (l + r) / 2
    if x >= l and x <= m:
        return (x - l) / (m - l)
    elif x < r and x > m:
        return (r - x) / (r - m)
    else:
        return 0

def tent_half_down(l, r, x):
    if x >= l and x <= r:
        return (r - x) / (r - l)
    else:
        return 0

def tent_half_up(l, r, x):
    if x >= l and x <= r:
        return (x - l) / (r - l)
    else:
        return 0

def tent_prime(l, r, x):
    m = (l + r) / 2
    if x >= l and x <= m:
        return 1 / (m - l)
    elif x < r and x > m:
        return 1 / (m - r)
    else:
        return 0

def tent_half_prime_down(l, r, x):
    if x >= l and x <= r:
        return - 1 / (r - l)
    else:
        return 0

def tent_half_prime_up(l, r, x):
    if x >= l and x <= r:
        return 1 / (r - l)
    else:
        return 0

def sources(x):
    return 1
Discretizing my space:
n_vertex = 30
n_points = (n_vertex-1) * 40
space = (0,5)
x_space = np.linspace(space[0],space[1],n_points)
vertx_list = np.linspace(space[0],space[1], n_vertex)
tent_list = np.zeros((n_vertex, n_points))
tent_prime_list = np.zeros((n_vertex, n_points))
tent_list[0,:] = [tent_half_down(vertx_list[0],vertx_list[1],x) for x in x_space]
tent_list[-1,:] = [tent_half_up(vertx_list[-2],vertx_list[-1],x) for x in x_space]
tent_prime_list[0,:] = [tent_half_prime_down(vertx_list[0],vertx_list[1],x) for x in x_space]
tent_prime_list[-1,:] = [tent_half_prime_up(vertx_list[-2],vertx_list[-1],x) for x in x_space]
for i in range(1, n_vertex - 1):
    tent_list[i, :] = [tent(vertx_list[i-1], vertx_list[i+1], x) for x in x_space]
    tent_prime_list[i, :] = [tent_prime(vertx_list[i-1], vertx_list[i+1], x) for x in x_space]
Calculating the system of linear equations:
b = np.zeros((n_vertex))
A = np.zeros((n_vertex,n_vertex))
for i in range(n_vertex):
    b[i] = np.trapz(tent_list[i, :] * sources(x_space))
    for j in range(n_vertex):
        A[j, i] = np.trapz(tent_prime_list[j] * tent_list[i])
And then solving and reconstructing it
u = np.linalg.solve(A,b)
sol = tent_list.T.dot(u)
But it does not work; I only get an oscillating up-and-down pattern. What am I doing wrong?
First, a couple of comments on terminology and notation:
1) You are using the weak formulation, though you've done this implicitly. A formulation being "weak" has nothing to do with the order of derivatives involved. It is weak because you are not satisfying the differential equation exactly at every location. FE minimizes the weighted residual of the solution, integrated over the domain. The functions phi_j actually discretize the weighting function. The difference when you only have first-order derivatives is that you don't have to apply the Gauss divergence theorem (which simplifies to integration by parts for one dimension) to eliminate second-order derivatives. You can tell this wasn't done because phi_j is not differentiated in the LHS.
2) I would suggest not using "A" as the differential operator. You also use this symbol for the global system matrix, so your notation is inconsistent. People often use "D", since this fits better to the idea that it is used for differentiation.
Secondly, about your implementation:
3) You are using way more integration points than necessary. Your elements use linear interpolation functions, which means you only need one integration point located at the center of the element to evaluate the integral exactly. Look into the details of Gauss quadrature to see why. Also, you've specified the number of integration points as a multiple of the number of nodes. This should be done as a multiple of the number of elements instead (in your case, n_vertex-1), because the elements are the domains on which you're integrating.
4) You have built your system by simply removing the two end nodes from the formulation. This isn't the correct way to specify boundary conditions. I would suggest building the full system first and using one of the typical methods for applying Dirichlet boundary conditions. Also, think about what constraining two nodes would imply for the differential equation you're trying to solve. What function exists that satisfies x' = 1, x(0) = 0, x(5) = 0? You have overconstrained the system by trying to apply 2 boundary conditions to a first-order differential equation.
Unfortunately, there isn't a small tweak that can be made to get the code to work, but I hope the comments above help you rethink your approach.
EDIT to address your changes:
1) Assuming the matrix A is addressed with A[row,col], then your indices are backwards. You should be integrating with A[i,j] = ...
2) A simple way to apply a constraint is to replace one row with the constraint desired. If you want x(0) = 0, for example, set A[0,j] = 0 for all j, then set A[0,0] = 1 and set b[0] = 0. This substitutes one of the equations with u_0 = 0. Do this after integrating.
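A minimal sketch of how points 1) and 2) could look when applied to the assembly code from the question (the choice x(0) = 0 is just an example condition; x_space is also passed to np.trapz here so the physical grid spacing is used instead of the default unit spacing):
# 1) swapped indices: row i is the weight function, column j the (differentiated) trial function
for i in range(n_vertex):
    b[i] = np.trapz(tent_list[i, :] * sources(x_space), x_space)
    for j in range(n_vertex):
        A[i, j] = np.trapz(tent_prime_list[j] * tent_list[i], x_space)

# 2) replace the first equation with the constraint u_0 = 0, i.e. x(0) = 0
A[0, :] = 0.0
A[0, 0] = 1.0
b[0] = 0.0

u = np.linalg.solve(A, b)
sol = tent_list.T.dot(u)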