Another question on the maximization API in Z3.
I get wrong answers if I switch maximization objectives midway through:
from z3 import Real, Optimize
x = Real('x')
y = Real('y')
opt = Optimize()
opt.add(x >= 0)
opt.add(y >= 0)
opt.add(x + y <= 15)
print "Optimizing", x
h = opt.maximize(x)
print opt.check()
print opt.upper(h)
print opt.model()
print "Optimizing", y
h = opt.maximize(y)
print opt.check()
print opt.upper(h)
print opt.model()
The latter call to opt.model() returns y = 0, whereas clearly the answer should be y = 15.
Is this a bug or simply an unsupported feature? (And should I manually re-add the constraints each time I want to switch the objective?)
Moreover, a second bug shows up when I remove the non-negativity constraints, but that's a separate issue (bad handling of unbounded objectives, I presume?):
from z3 import Real, Optimize
x = Real('x')
y = Real('y')
opt = Optimize()
opt.add(x + y <= 15)
print "Optimizing", x
h = opt.maximize(x)
print opt.check()
print opt.upper(h)
print opt.model()
Dies with
Optimizing x
terminate called after throwing an instance of 'std::bad_typeid'
what(): std::bad_typeid
fish: Job 1, 'python opt.py' terminated by signal SIGABRT (Abort)
Thanks for the bug report on the crash. That should not happen.
For the first problem you report: the semantics of adding objectives is additive. That is, you instruct the engine to optimize both x and also y (in the second call). By default it chooses the lexicographic combination of x and y, and in the lexicographic ordering the value (15, 0) dominates the other values.
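If you want each objective treated independently of the other, a sketch of one option (assuming the priority option is available in your build of the opt branch) is to request the "box" combination instead of the default lexicographic one; alternatively, use a fresh Optimize object per objective:
from z3 import Real, Optimize

x = Real('x')
y = Real('y')
opt = Optimize()
opt.set(priority='box')  # optimize each objective independently
opt.add(x >= 0)
opt.add(y >= 0)
opt.add(x + y <= 15)
hx = opt.maximize(x)
hy = opt.maximize(y)
print opt.check()
print opt.upper(hx)  # expected: 15
print opt.upper(hy)  # expected: 15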
These are the conditions:
if(x > 0)
{
y >= a;
z <= b;
}
It is quite easy to convert the conditions into linear programming constraints if x were a binary variable, but I am not finding a way to do this for a continuous x.
You can do this in 2 steps.
Step 1: Introduce a binary dummy variable
Since x is continuous, we can introduce a binary 0/1 dummy variable; let's call it x_positive.
If x > 0, then we want x_positive = 1. We can achieve that via the following constraint, where M is a very large number (strict inequalities are not allowed in linear programs, hence the <=):
x <= x_positive * M
Note that this forces x_positive to become 1 if x is positive. If x <= 0, x_positive can be anything. (We can force it to be zero by adding it to the objective function with a tiny penalty of the appropriate sign.)
Step 2: Use the dummy variable to implement the next 2 constraints
In English: if x_positive = 1, then y >= a.
However, if x_positive = 0, y can be anything (y > -inf):
y >= a - M * (1 - x_positive)
Similarly, if x_positive = 1, then z <= b:
z <= b + M * (1 - x_positive)
Both linear constraints above will kick in if x > 0 and will be trivially satisfied if x <= 0.
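If it helps, here is a minimal sketch of the whole construction in PuLP (my choice of modeling library, not part of the original answer; a, b, M, the bounds, and the objective are illustrative placeholders):
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary

# hypothetical data; M must dominate the ranges of x, y, and z
a, b, M = 3.0, 7.0, 1e6

prob = LpProblem("big_M_demo", LpMinimize)
x = LpVariable("x", lowBound=-100, upBound=100)
y = LpVariable("y", lowBound=-100, upBound=100)
z = LpVariable("z", lowBound=-100, upBound=100)
x_positive = LpVariable("x_positive", cat=LpBinary)

prob += y - z                          # placeholder objective
prob += x <= M * x_positive            # x > 0 forces x_positive = 1
prob += y >= a - M * (1 - x_positive)  # reduces to y >= a when x_positive = 1
prob += z <= b + M * (1 - x_positive)  # reduces to z <= b when x_positive = 1
prob.solve()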
I am trying to write the code for solving the extremely difficult differential equation:
x' = 1
with the finite element method.
As far as I understood, I can obtain the solution u as
u(x) = sum_i u_i * phi_i(x)
with the basis functions phi_i(x), while I can obtain the u_i as the solution of the system of linear equations
sum_i u_i * integral(phi_j * D phi_i dx) = integral(phi_j * f dx)
with the differential operator D (here only the first derivative). As a basis I am using the tent function:
def tent(l, r, x):
    # interior hat function: rises on [l, m], falls on [m, r]
    m = (l + r) / 2
    if x >= l and x <= m:
        return (x - l) / (m - l)
    elif x < r and x > m:
        return (r - x) / (r - m)
    else:
        return 0

def tent_half_down(l, r, x):
    # left boundary half-hat: 1 at l, falling to 0 at r
    if x >= l and x <= r:
        return (r - x) / (r - l)
    else:
        return 0

def tent_half_up(l, r, x):
    # right boundary half-hat: 0 at l, rising to 1 at r
    if x >= l and x <= r:
        return (x - l) / (r - l)
    else:
        return 0

def tent_prime(l, r, x):
    # derivative of the interior hat function
    m = (l + r) / 2
    if x >= l and x <= m:
        return 1 / (m - l)
    elif x < r and x > m:
        return 1 / (m - r)
    else:
        return 0

def tent_half_prime_down(l, r, x):
    if x >= l and x <= r:
        return -1 / (r - l)
    else:
        return 0

def tent_half_prime_up(l, r, x):
    if x >= l and x <= r:
        return 1 / (r - l)
    else:
        return 0

def sources(x):
    # right-hand side f(x) = 1
    return 1
Discretizing my space:
import numpy as np

n_vertex = 30
n_points = (n_vertex - 1) * 40
space = (0, 5)
x_space = np.linspace(space[0], space[1], n_points)
vertx_list = np.linspace(space[0], space[1], n_vertex)

tent_list = np.zeros((n_vertex, n_points))
tent_prime_list = np.zeros((n_vertex, n_points))

tent_list[0,:] = [tent_half_down(vertx_list[0], vertx_list[1], x) for x in x_space]
tent_list[-1,:] = [tent_half_up(vertx_list[-2], vertx_list[-1], x) for x in x_space]
tent_prime_list[0,:] = [tent_half_prime_down(vertx_list[0], vertx_list[1], x) for x in x_space]
tent_prime_list[-1,:] = [tent_half_prime_up(vertx_list[-2], vertx_list[-1], x) for x in x_space]

for i in range(1, n_vertex - 1):
    tent_list[i, :] = [tent(vertx_list[i-1], vertx_list[i+1], x) for x in x_space]
    tent_prime_list[i, :] = [tent_prime(vertx_list[i-1], vertx_list[i+1], x) for x in x_space]
Calculating the system of linear equations:
b = np.zeros((n_vertex))
A = np.zeros((n_vertex, n_vertex))

for i in range(n_vertex):
    b[i] = np.trapz(tent_list[i,:] * sources(x_space))
    for j in range(n_vertex):
        A[j, i] = np.trapz(tent_prime_list[j] * tent_list[i])
And then solving and reconstructing it
u = np.linalg.solve(A,b)
sol = tent_list.T.dot(u)
But it does not work; I am only getting some up-and-down pattern. What am I doing wrong?
First, a couple of comments on terminology and notation:
1) You are using the weak formulation, though you've done this implicitly. A formulation being "weak" has nothing to do with the order of derivatives involved. It is weak because you are not satisfying the differential equation exactly at every location. FE minimizes the weighted residual of the solution, integrated over the domain. The functions phi_j actually discretize the weighting function. The difference when you only have first-order derivatives is that you don't have to apply the Gauss divergence theorem (which simplifies to integration by parts for one dimension) to eliminate second-order derivatives. You can tell this wasn't done because phi_j is not differentiated in the LHS.
2) I would suggest not using "A" as the differential operator. You also use this symbol for the global system matrix, so your notation is inconsistent. People often use "D", since this fits better with the idea that it is used for differentiation.
Secondly, about your implementation:
3) You are using way more integration points than necessary. Your elements use linear interpolation functions, which means you only need one integration point located at the center of the element to evaluate the integral exactly. Look into the details of Gauss quadrature to see why. Also, you've specified the number of integration points as a multiple of the number of nodes. This should be done as a multiple of the number of elements instead (in your case, n_vertex-1), because the elements are the domains on which you're integrating.
4) You have built your system by simply removing the two end nodes from the formulation. This isn't the correct way to specify boundary conditions. I would suggest building the full system first and using one of the typical methods for applying Dirichlet boundary conditions. Also, think about what constraining two nodes would imply for the differential equation you're trying to solve. What function exists that satisfies x' = 1, x(0) = 0, x(5) = 0? You have overconstrained the system by trying to apply 2 boundary conditions to a first-order differential equation.
Unfortunately, there isn't a small tweak that can be made to get the code to work, but I hope the comments above help you rethink your approach.
EDIT to address your changes:
1) Assuming the matrix A is addressed with A[row,col], then your indices are backwards. You should be integrating with A[i,j] = ...
2) A simple way to apply a constraint is to replace one row with the constraint desired. If you want x(0) = 0, for example, set A[0,j] = 0 for all j, then set A[0,0] = 1 and set b[0] = 0. This substitutes one of the equations with u_0 = 0. Do this after integrating.
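For instance, with the A and b arrays from the question (a sketch; apply it after the integration loop), imposing u_0 = 0 looks like:
A[0, :] = 0.0   # zero out the first equation
A[0, 0] = 1.0   # replace it with 1 * u_0 = 0
b[0] = 0.0
u = np.linalg.solve(A, b)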
As a warm-up to writing my own elastic net solver, I'm trying to get a fast enough version of ordinary least squares implemented using coordinate descent.
I believe I've implemented the coordinate descent algorithm correctly, but when I use the "fast" version (see below), the algorithm is insanely unstable, outputting regression coefficients that routinely overflow a 64-bit float when the number of features is of moderate size compared to the number of samples.
Linear Regression and OLS
If b = A*x, where A is a matrix, x is a vector of unknown regression coefficients, and b is the output vector, I want to find the x that minimizes
||b - Ax||^2
If A[j] is the jth column of A and A[-j] is A without column j, and the columns of A are normalized so that ||A[j]||^2 = 1 for all j, the coordinate-wise update is then
Coordinate Descent:
x[j] <-- A[j]^T * (b - A[-j] * x[-j])
I'm following along with these notes (pages 9-10), but the derivation is simple calculus.
It's pointed out that instead of recomputing A[j]^T(b - A[-j] * x[-j]) all the time, a faster way to do it is with
Fast Coordinate Descent:
x[j] <-- A[j]^T*r + x[j]
where the total residual r = b - Ax is computed outside the loop over coordinates. The equivalence of these update rules follows from noting that Ax = A[j]*x[j] + A[-j]*x[-j], so b - A[-j]*x[-j] = r + A[j]*x[j]; multiplying by A[j]^T and using ||A[j]||^2 = 1 gives A[j]^T * (b - A[-j]*x[-j]) = A[j]^T*r + x[j].
My problem is that while the second method is indeed faster, it's wildly numerically unstable for me whenever the number of features isn't small compared to the number of samples. I was wondering if anyone might have some insight as to why that's the case. I should note that the first method, which is more stable, still starts disagreeing with more standard methods as the number of features approaches the number of samples.
Julia code
Below is some Julia code for the two update rules:
function OLS_builtin(A, b)
    x = A\b
    return(x)
end

function OLS_coord_descent(A, b)
    N, P = size(A)
    x = zeros(P)
    for cycle in 1:1000
        for j = 1:P
            x[j] = dot(A[:,j], b - A[:,1:P .!= j]*x[1:P .!= j])
        end
    end
    return(x)
end

function OLS_coord_descent_fast(A, b)
    N, P = size(A)
    x = zeros(P)
    for cycle in 1:1000
        r = b - A*x
        for j = 1:P
            x[j] += dot(A[:,j], r)
        end
    end
    return(x)
end
Example of the problem
I generate data with the following:
n = 100
p = 50
σ = 0.1
β_nz = float([i*(-1)^i for i in 1:10])
β = append!(β_nz,zeros(Float64,p-length(β_nz)))
X = randn(n,p); X .-= mean(X,1); X ./= sqrt(sum(abs2(X),1))
y = X*β + σ*randn(n); y .-= mean(y);
Here I use p = 50, and I get good agreement between OLS_coord_descent(X,y) and OLS_builtin(X,y), whereas OLS_coord_descent_fast(X,y) returns exponentially large values for the regression coefficients.
When p is less than about 20, OLS_coord_descent_fast(X,y) agrees with the other two.
Conjecture
Since the methods agree in the regime p << n, I think the algorithm is formally correct but numerically unstable. Does anyone have any thoughts on whether this guess is correct, and if so, how to correct the instability while retaining (most of) the performance gains of the fast version of the algorithm?
The quick answer: you forgot to update r after each x[j] update. The following is the fixed function, which behaves like OLS_coord_descent:
function OLS_coord_descent_fast(A, b)
    N, P = size(A)
    x = zeros(P)
    for cycle in 1:1000
        r = b - A*x
        for j = 1:P
            x[j] += dot(A[:,j], r)
            r -= A[:,j]*dot(A[:,j], r)  # add this line: keep r in sync with x
        end
    end
    return(x)
end
I am using the following Python code to find two binary numbers that:
sum to a certain number
their highest bits cast to integers must sum up to 2
The second constraint is more important to me, and in my case it will scale: let's say it might become that the highest bits of [N] numbers must sum up to [M].
I am not sure why z3 does not give the correct result. Any hints? Thanks a lot.
from z3 import *

def BV2Int(var):
    # wrap Z3_mk_bv2int, since this z3py build does not expose BV2Int directly
    return ArithRef(Z3_mk_bv2int(var.ctx_ref(), var.as_ast(), 0), var.ctx)

def main():
    s = Solver()
    s.set(':models', True)
    s.set(':auto-cfgig', False)
    s.set(':smt.bv.enable_int2bv', True)

    x = BitVec('x', 4)
    y = BitVec('y', 4)

    s = Solver()
    s.add(x + y == 16, Extract(3, 3, x) + Extract(3, 3, y) == 2)
    s.check()
    print s.model()
    # result: [y = 0, x = 0], fails both constraints

    s = Solver()
    s.add(x + y == 16, BV2Int(Extract(3, 3, x)) + BV2Int(Extract(3, 3, y)) == 2)
    s.check()
    print s.model()
    # result: [y = 15, x = 1], fails the second constraint
Update: Thanks to Christoph for the answer. Here is a quick fix:
Extract(3,3,x) -> ZeroExt(SZ, Extract(3,3,x)), where SZ is the bit width of the RHS minus 1.
(Aside: auto-cfgig should be auto-config.)
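Concretely, the fixed constraint looks like this (a sketch of the quick fix above; here SZ = 1, so the 1-bit extracts are widened to 2 bits, where the numeral 2 is representable):
from z3 import BitVec, Extract, ZeroExt, Solver

x = BitVec('x', 4)
y = BitVec('y', 4)
s = Solver()
s.add(x + y == 16)  # beware: 16 still wraps to 0 in 4-bit arithmetic (see the answer below)
s.add(ZeroExt(1, Extract(3, 3, x)) + ZeroExt(1, Extract(3, 3, y)) == 2)
s.check()
print s.model()  # result: [y = 8, x = 8]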
Note that bv2int and int2bv are essentially treated as uninterpreted, so if this part is crucial to your problem, then don't use them (see documentation and previous questions).
The problem with this example is the widths of the bit-vectors. Both x and y are 4-bit variables, and the numeral 16 as a 4-bit vector is 0 (modulo 2^4), so, indeed, x + y is equal to 16 when x = 0 and y = 0.
Further, the Extract(...) terms extract 1-bit vectors, which means that the sum Ex.. + Ex.. is again a 1-bit value, and the numeral 2 as a 1-bit vector is 0 (modulo 2^1), i.e., it is indeed the case that Ex... + Ex... = 2.
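You can see both wraparounds directly (a small check of my own, using z3py's value constructors):
from z3 import BitVecVal, simplify

print simplify(BitVecVal(16, 4))  # prints 0: 16 is reduced modulo 2^4
print simplify(BitVecVal(2, 1))   # prints 0: 2 is reduced modulo 2^1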
I'm currently playing with the maximization API for Z3 (opt branch), and I've stumbled upon the following bug:
Whenever I give it any unbounded problem, it simply returns OPT and gives zero in the resulting model (e.g. maximize Real('x') with no constraints on the model).
Python example:
from z3 import *
context = main_ctx()
x = Real('x')
optimize_context = Z3_mk_optimize(context.ctx)
Z3_optimize_assert(context.ctx, optimize_context, (x >= 0).ast)
Z3_optimize_maximize(context.ctx, optimize_context, x.ast)
out = Z3_optimize_check(context.ctx, optimize_context)
print out
And I get the value of out to be 1 (OPT), while it seems like it should be -1.
Thanks for trying out this experimental branch.
Development is still churning quite a bit these days, but most of the features are reasonably stable and you are invited to try them out.
To answer your question: there is a native way to use the optimization features from Z3.
To paraphrase your example, here is what is relevant:
from z3 import *
x = Real('x')
opt = Optimize()
opt.add(x >= 0)
h = opt.maximize(x)
print opt.check()
print opt.upper(h)
print opt.model()
When running it, you will see the following output:
sat
oo
[x = 0]
The first line says that the assertions are satisfiable.
The second line prints the value of the handle "h" after the satisfiability call.
The value of the handle holds an expression that meets the maximization/minimization criteria declared by the call to opt.maximize/opt.minimize.
In this case the expression is "oo". It is somewhat of a "hack" because it is going to be up to you to guess that "oo" means infinity. If you interpret this value back to Z3, you will not get infinity.
(I am here restricting the use of Z3 where we don't expose non-standard numbers, there is another part of Z3 that includes non-standard numbers, but that is another story).
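If you need to detect unboundedness programmatically, one pragmatic option (a sketch that leans on the "oo" printing hack described above) is to inspect the textual form of the bound:
from z3 import Real, Optimize

x = Real('x')
opt = Optimize()
opt.add(x >= 0)
h = opt.maximize(x)
opt.check()
ub = opt.upper(h)
if str(ub) == 'oo':  # unbounded above, per the "oo" hack
    print "objective is unbounded"
else:
    print "maximum:", ub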
Note that the opt.maximize call returns the handle "h", which is later used to query what the optimal value was.
The last line is some model satisfying the constraints. When the objective is bounded, the model will be what you expect, but in this case the objective is unbounded: there is no finite best value.
Try for example instead:
x = Real('x')
opt = Optimize()
opt.add(x >= 0)
opt.add(x <= 10)
h = opt.maximize(x)
print opt.check()
print opt.upper(h)
print opt.model()
This time you get a model that sets x = 10, and this is also the maximal value.
You could also try:
x = Real('x')
opt = Optimize()
opt.add(x >= 0)
opt.add(x < 10)
h = opt.maximize(x)
print opt.check()
print opt.upper(h)
print opt.model()
The output is now:
sat
10 + -1*epsilon
[x = 9]
epsilon refers to a non-standard number (infinitesimal). You can set it arbitrarily small.
Again the model uses only standard numbers, so it picks some number, in this case 9.