Julia JuMP StackOverflowError when defining a constraint on linear matrix inequalities

Inspired by this post, I switched from MATLAB's LMI toolbox to a Julia implementation and tried to adapt the suggested script to a problem I am working on. However, I have failed: Julia consistently crashes or throws a StackOverflowError while setting up the following LMI problem with JuMP:
Lyapunov stability of hybrid systems
for unknown symmetric matrices Ui, Wi, M (and Pi), where Ui and Wi are positive semidefinite.
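In symbols, the constraints I am imposing are $U_i \succeq 0$, $W_i \succeq 0$, $E_i^T M E_i \succeq 0$, $P_i - E_i^T W_i E_i \succeq 0$, and $A_i^T P_i + P_i A_i + E_i^T U_i E_i \preceq 0$.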
I have run the model set-up line by line and found that the error occurs when I define the last constraint:
@SDconstraint(model, Ai'*Pi + Pi*Ai + Ei'*Ui*Ei ⪯ 0)
I saw other posts about this with JuMP, and they were usually the result of improper variable definitions, but that knowledge has not helped me figure out the issue. Here is a minimal form of the code that produces the error:
using JuMP
using MosekTools
using LinearAlgebra
E1p = [1 0; 0 1];
A1 = [-1 10; -100 -1];
Ei = E1p
Ai = A1
n=2
model = Model(Mosek.Optimizer)
@variable(model, Pi[i=1:n, j=1:n], Symmetric)
@variable(model, Ui[i=1:n, j=1:n], Symmetric)
@variable(model, Wi[i=1:n, j=1:n], Symmetric)
@variable(model, M[i=1:n, j=1:n], Symmetric)
@SDconstraint(model, Ui ⪰ 0)
@SDconstraint(model, Wi ⪰ 0)
@SDconstraint(model, Ei'*M*Ei ⪰ 0)
@SDconstraint(model, Pi - Ei'*Wi*Ei ⪰ 0)
@SDconstraint(model, Ai'*Pi + Pi*Ai + Ei'*Ui*Ei ⪯ 0)
optimize!(model)
Pi = value.(Pi)
The first line of the error, which repeats at great length, is:
in mutable_operate! at MutableArithmetics/bPWR4/src/linear_algebra.jl:132

That looks like a bug.
Here's an MWE:
using JuMP
model = Model()
E = [1 1; 1 1]
@variable(model, P[1:size(E, 1), 1:size(E, 2)], Symmetric)
julia> @SDconstraint(model, E' * P + P * E + E' * P * E <= 0)
ERROR: StackOverflowError:
Stacktrace:
[1] mutable_operate!(::typeof(MutableArithmetics.sub_mul), ::Matrix{AffExpr}, ::LinearAlgebra.Symmetric{VariableRef, Matrix{VariableRef}}, ::Matrix{Int64}) (repeats 79984 times)
@ MutableArithmetics ~/.julia/packages/MutableArithmetics/bPWR4/src/linear_algebra.jl:132
If I drop the first term, I get
julia> @SDconstraint(model, P * E + E' * P * E <= 0)
ERROR: MethodError: no method matching *(::Matrix{AffExpr})
Closest candidates are:
*(::StridedMatrix{T} where T, ::LinearAlgebra.AbstractQ) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/qr.jl:680
*(::StridedMatrix{T} where T, ::LinearAlgebra.Adjoint{var"#s832", var"#s831"} where {var"#s832", var"#s831"<:LinearAlgebra.AbstractQ}) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/qr.jl:720
*(::StridedVecOrMat{T} where T, ::LinearAlgebra.Adjoint{var"#s832", var"#s831"} where {var"#s832", var"#s831"<:LinearAlgebra.LQPackedQ}) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/lq.jl:254
...
Stacktrace:
[1] mutable_operate!(::typeof(MutableArithmetics.sub_mul), ::Matrix{AffExpr}, ::LinearAlgebra.Adjoint{Int64, Matrix{Int64}}, ::Matrix{AffExpr}) (repeats 2 times)
@ MutableArithmetics ~/.julia/packages/MutableArithmetics/bPWR4/src/linear_algebra.jl:132
[2] operate_fallback!(::MutableArithmetics.IsMutable, ::Function, ::Matrix{AffExpr}, ::LinearAlgebra.Adjoint{Int64, Matrix{Int64}}, ::LinearAlgebra.Symmetric{VariableRef, Matrix{VariableRef}}, ::Matrix{Int64})
@ MutableArithmetics ~/.julia/packages/MutableArithmetics/bPWR4/src/interface.jl:330
[3] operate!(::typeof(MutableArithmetics.sub_mul), ::Matrix{AffExpr}, ::LinearAlgebra.Adjoint{Int64, Matrix{Int64}}, ::LinearAlgebra.Symmetric{VariableRef, Matrix{VariableRef}}, ::Matrix{Int64})
@ MutableArithmetics ~/.julia/packages/MutableArithmetics/bPWR4/src/rewrite.jl:80
[4] macro expansion
@ ~/.julia/packages/MutableArithmetics/bPWR4/src/rewrite.jl:276 [inlined]
[5] macro expansion
@ ~/.julia/packages/JuMP/y5vgk/src/macros.jl:447 [inlined]
[6] top-level scope
@ REPL[15]:1
So it's probably related to https://github.com/jump-dev/MutableArithmetics.jl/issues/84.
I'll open an issue. (Edit: https://github.com/jump-dev/MutableArithmetics.jl/issues/86)
As a work-around, split it into an expression first:
julia> @expression(model, ex, E' * P + P * E + E' * P * E)
2×2 Matrix{AffExpr}:
3 P[1,1] + 4 P[1,2] + P[2,2] 4 P[1,2] + 2 P[2,2] + 2 P[1,1]
2 P[1,1] + 4 P[1,2] + 2 P[2,2] 4 P[1,2] + 3 P[2,2] + P[1,1]
julia> @SDconstraint(model, ex <= 0)
[-3 P[1,1] - 4 P[1,2] - P[2,2] -2 P[1,1] - 4 P[1,2] - 2 P[2,2];
-2 P[1,1] - 4 P[1,2] - 2 P[2,2] -P[1,1] - 4 P[1,2] - 3 P[2,2]] ∈ PSDCone()
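Applied to the original model, the same workaround is to build ex = Ai'*Pi + Pi*Ai + Ei'*Ui*Ei with @expression(model, ex, ...) first and then add @SDconstraint(model, ex ⪯ 0).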

Related

Transportation cost optimisation using OMPR for a large data set

I am solving a transport optimization problem given a set of constraints.
The following are the three key data sets that I have:
#demand file
demand - has demand (DEMAND) across 4821 (DPP) sale points (D)
head(demand)
D PP DEMAND DPP
1 ADILABAD (V) - T:11001 OPC:PACK 131.00 ADILABAD (V) - T:11001:OPC:PACK
2 ADILABAD (V) - T:13003 OPC:PACK 235.00 ADILABAD (V) - T:13003:OPC:PACK
3 ADILABAD (V) - T:2006 PPC:PACK 30.00 ADILABAD (V) - T:2006:PPC:PACK
4 ADILABAD (V) - T:4001 OPC:PACK 30.00 ADILABAD (V) - T:4001:OPC:PACK
5 ADILABAD (V) - T:7006 OPC:NPACK 34.84 ADILABAD (V) - T:7006:OPC:NPACK
6 AHMEDABAD:1001 OPC:PACK 442.10 AHMEDABAD:1001:OPC:PACK
#Capacity file
cc - has capacity constraints (MinP, MaxP) across 1823 sources (SOURCE)
head(cc,4)
SOURCE MinP MaxP
1 CHILAMKUR:P:OPC:NPACK:0:R 900 10806
2 CHILAMKUR:P:OPC:NPACK:0:W 900 10806
3 CHILAMKUR:P:OPC:PACK:0:R 5628 67536
4 CHILAMKUR:P:OPC:PACK:0:W 5628 67536
#LandingCost file
LCMat - This is a matrix with the landing cost to deliver the product to each demand location (DPP) from a given source (SOURCE). It is an 1823 x 4821 matrix. Since landing costs do not exist from every source to every location, I have replaced the missing entries with a huge cost (10^6) for such DPPs.
I am using the OMPR package in R to optimize shipping material to meet the demand.
This is potentially a very simple transport problem, but it is taking a lot of time. I am using a 16 GB RAM machine.
The following is the code. Could anyone guide me on what I should do better?
a = Sys.time()
grid = expand.grid(i = 1:nrow(LCMat), j = 1:ncol(LCMat))
grid_solve = grid[which(LCMat < 10^6), ]
grid_notsolve = grid[which(LCMat >= 10^6), ]
model <- MILPModel() %>%
  add_variable(x[grid$i, grid$j], lb = 0, type = "continuous") %>%
  add_constraint(x[grid_notsolve$i, grid_notsolve$j] == 0) %>%
  add_constraint(sum_over(x[i, j], i = 1:nrow(LCMat)) <= demand$DEMAND[j], j = 1:ncol(LCMat)) %>%
  add_constraint(sum_over(x[i, j], j = 1:ncol(LCMat)) <= cc$MaxP[i], i = 1:nrow(LCMat)) %>%
  add_constraint(sum_over(x[i, j], j = 1:ncol(LCMat)) >= cc$MinP[i], i = 1:nrow(LCMat)) %>%
  set_objective(sum_expr(LCMat[grid_solve$i, grid_solve$j] * x[grid_solve$i, grid_solve$j]), "min")
solution = model %>% solve_model(with_ROI(solver = "glpk", verbose = TRUE))
Sys.time() - a
Two options to potentially speed things up:
Make sure you use the latest CRAN versions of ompr and listcomp.
Try to use filter conditions to only create/use variables that are relevant to the model, instead of adding all nrow(LCMat)*ncol(LCMat) variables and then setting (potentially) a lot of them to 0. See the code below for an example. Depending on how sparse your problem is, that could help as well.
The following code takes a sparse matrix (i.e. a matrix with many 0 elements or 10^6 elements in your case) and only generates x[i,j] variables that have an entry in sparse_matrix which is greater than 0. It hopefully illustrates how to use that feature and apply it to your case.
library(ompr)
sparse_matrix <- matrix(
  c(
    1, 0, 0, 1,
    0, 1, 0, 1,
    0, 0, 0, 1,
    1, 0, 0, 0
  ), byrow = TRUE, ncol = 4
)
is_connected <- function(i, j) {
  sparse_matrix[i, j] > 0
}
n <- nrow(sparse_matrix)
m <- ncol(sparse_matrix)
model <- MIPModel() |>
  add_variable(x[i, j], i = 1:n, j = 1:m, is_connected(i, j)) |>
  set_objective(sum_over(x[i, j], i = 1:n, j = 1:m, is_connected(i, j))) |>
  add_constraint(sum_over(x[i, j], i = 1:n, is_connected(i, j)) <= 1, j = 1:m)
variable_keys(model)
#> [1] "x[1,1]" "x[1,4]" "x[2,2]" "x[2,4]" "x[3,4]" "x[4,1]"
extract_constraints(model)
#> $matrix
#> 3 x 6 sparse Matrix of class "dgCMatrix"
#>
#> [1,] 1 . . . . 1
#> [2,] . . 1 . . .
#> [3,] . 1 . 1 1 .
#>
#> $sense
#> [1] "<=" "<=" "<="
#>
#> $rhs
#> [1] 1 1 1
Created on 2022-03-12 by the reprex package (v2.0.1)
Both OMPR and GLPK are slow for large models.
You are duplicating sum_over(x[i,j], j = 1:ncol(LCMat)). That leads to more nonzero elements than needed. I usually try to prevent that (even at the expense of more variables), for example by introducing an auxiliary variable s[i] that is constrained once to equal that sum and then bounded by cc$MinP[i] and cc$MaxP[i].

How to solve simple linear programming problem with lpSolve

I am trying to maximize the function $a_1x_1 + \cdots +a_nx_n$ subject to the constraints $b_1x_1 + \cdots + b_nx_n \leq c$ and $x_i \geq 0$ for all $i$. For the toy example below, I've chosen $a_i = b_i$, so the problem is to maximize $0x_1 + 25x_2 + 50x_3 + 75x_4 + 100x_5$ given $0x_1 + 25x_2 + 50x_3 + 75x_4 + 100x_5 \leq 100$. Trivially, the maximum value of the objective function should be 100, but when I run the code below I get a solution of 2.5e+31. What's going on?
library(lpSolve)
a <- seq.int(0, 100, 25)
b <- seq.int(0, 100, 25)
c <- 100
optimal_val <- lp(direction = "max",
                  objective.in = a,
                  const.mat = b,
                  const.dir = "<=",
                  const.rhs = c,
                  all.int = TRUE)
optimal_val
b is not a proper matrix. You should do, before the lp call:
b <- seq.int(0, 100, 25)
b <- matrix(b,nrow=1)
That will give you an explicit 1 x 5 matrix:
> b
[,1] [,2] [,3] [,4] [,5]
[1,] 0 25 50 75 100
Now you will see:
> optimal_val
Success: the objective function is 100
Background: by default R will consider a vector as a column matrix:
> matrix(c(1,2,3))
[,1]
[1,] 1
[2,] 2
[3,] 3
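As a quick cross-check outside R (purely illustrative), the same toy LP solved with SciPy also gives 100; note that linprog minimizes, so the objective is negated:
import numpy as np
from scipy.optimize import linprog

a = np.arange(0, 101, 25)          # objective coefficients
res = linprog(-a, A_ub=a.reshape(1, -1), b_ub=[100],
              bounds=[(0, None)] * len(a))
print(-res.fun)                    # 100.0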

How can I solve an exponential equation in Maxima CAS

I have a function in Maxima CAS:
f(t) := (2*exp(2*%i*%pi*t) - exp(4*%pi*t*%i))/4;
Here:
t is a real number between 0 and 1
the function should give a point on the boundary of the main cardioid of the Mandelbrot set
How can I solve the equation
eq1:c=f(t);
(where c is a complex number)?
solve doesn't work:
solve( eq1,t);
The result is an empty list:
[]
Solving this equation should give the real number t (the internal angle, or rotation number) from the complex point c.
EDIT: Thanks to a comment by @JosehDoggie,
I can draw the initial curve with:
load(draw)$
f(t):=(2*exp(%i*t) - exp(2*t*%i))/4;
draw2d(
  key="main cardioid",
  nticks=200,
  parametric(0.5*cos(t) - 0.25*cos(2*t), 0.5*sin(t) - 0.25*sin(2*t), t, 0, 2*%pi),
  title="main cardioid of M set"
)$
or
draw2d(polar(abs(exp(t*%i)/2 -exp(2*t*%i)/4),t,0,2*%pi));
A similar image (a cardioid) is here.
Edit2:
(%i1) eq1:c = exp(%pi*t*%i)/2 - exp(2*%pi*t*%i)/4;
(%o1) c = %e^(%pi*t*%i)/2 - %e^(2*%pi*t*%i)/4
(%i2) solve(eq1,t);
(%o2) [t = -%i*log(1 - sqrt(1 - 4*c))/%pi, t = -%i*log(sqrt(1 - 4*c) + 1)/%pi]
So:
f1(c):=float(cabs( - %i* log(1 - sqrt(1 - 4* c))/%pi));
f2(c):=float(cabs( - %i* log(1 + sqrt(1 - 4* c))/%pi));
but the results are not good.
Edit 3:
Maybe I should start from this.
I have:
complex numbers c (the boundary of the cardioid)
real numbers t (from 0 to 1, or sometimes from 0 to 2*pi)
a function f which computes c from t: c = f(t)
I want to find the function which computes t from c: t = g(c)
Testing values:
t = 0 , c = 1/4
t = 1/2 , c = -3/4
t = 1/3 , c = -0.125 +0.649519052838329*%i
t = 2/5 , c = -0.481762745781211 +0.531656755220025*%i
t = 0.118033988749895 , c = 0.346828007859920 +0.088702386914555*%i
t = 0.618033988749895 , c = -0.390540870218399 -0.586787907346969*%i
t = 0.718033988749895 , c = 0.130349371041523 -0.587693986342220*%i
load("to_poly_solve") $
e: (2*exp(2*%i*%pi*t) - exp(4*%pi*t*%i))/4 - c $
s: to_poly_solve(e, t) $
s: maplist(lambda([e], rhs(first(e))), s) $ /* unpack arguments of %union */
ratexpand(s);
Outputs
(%o6) [%z7 - %i*log(1 - sqrt(1 - 4*c))/(2*%pi), %z9 - %i*log(sqrt(1 - 4*c) + 1)/(2*%pi)]
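Up to the integer period terms %z7 and %z9, the first branch says t = arg(1 - sqrt(1 - 4*c)) / (2*%pi). A quick numerical check against the testing values above, as a Python sketch (the name g is mine):
import cmath

def g(c):
    w = 1 - cmath.sqrt(1 - 4 * c)       # w = e^(2*%pi*%i*t) on the boundary
    return cmath.phase(w) / (2 * cmath.pi) % 1.0

print(g(0.25))                           # expect 0
print(g(-0.75))                          # expect 0.5
print(g(-0.125 + 0.649519052838329j))    # expect ~1/3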

Sample without replacement

How to sample without replacement in TensorFlow? Like numpy.random.choice(n, size=k, replace=False) for some very large integer n (e.g. 100k-100M), and smaller k (e.g. 100-10k).
Also, I want it to be efficient and on the GPU, so other solutions like this with tf.py_func are not really an option for me. Anything which would use tf.range(n) or so is also not an option because n could be very large.
This is one way:
n = ...
sample_size = ...
idx = tf.random_shuffle(tf.range(n))[:sample_size]
EDIT:
I had posted the answer below but then read the last line of your post. I don't think there is a good way to do it if you absolutely cannot produce a tensor with size O(n) (numpy.random.choice with replace=False is also implemented as a slice of a permutation). You could resort to a tf.while_loop until you have unique indices:
n = ...
sample_size = ...
# Draw a vector of indices, then redraw it while it still contains duplicates.
idx = tf.random_uniform([sample_size], maxval=n, dtype=tf.int64)
idx = tf.while_loop(
    lambda idx: tf.size(idx) != tf.size(tf.unique(idx)[0]),
    lambda idx: tf.random_uniform([sample_size], maxval=n, dtype=tf.int64),
    [idx])
EDIT 2:
About the average number of iterations of the previous method: if we call n the number of possible values and k the length of the desired vector (with k ≤ n), the probability that an iteration is successful is:
p = product((n - (i - 1)) / n for i in 1 .. k)
Since each iteration can be considered a Bernoulli trial, the average number of trials until the first success is 1 / p (proof here). Here is a function that calculates the average number of trials in Python for some k and n values:
def avg_iter(k, n):
    if k > n or n <= 0 or k < 0:
        raise ValueError()
    avg_it = 1.0
    for p in (float(n) / (n - i) for i in range(k)):
        avg_it *= p
    return avg_it
And here are some results:
+-------+------+----------+
| n | k | Avg iter |
+-------+------+----------+
| 10 | 5 | 3.3 |
| 100 | 10 | 1.6 |
| 1000 | 10 | 1.1 |
| 1000 | 100 | 167.8 |
| 10000 | 10 | 1.0 |
| 10000 | 100 | 1.6 |
| 10000 | 1000 | 2.9e+22 |
+-------+------+----------+
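For example, the n = 1000, k = 100 row can be reproduced with:
print(round(avg_iter(1000, 100), 1))  # 167.8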
You can see it varies wildly depending on the parameters.
It is possible, though, to construct a vector in a fixed number of steps, although the only algorithm I can think of is O(k²). In pure Python it goes like this:
import random
def sample_wo_replacement(n, k):
    sample = [0] * k
    # Draw the i-th value from a range that shrinks by one at each step.
    for i in range(k):
        sample[i] = random.randint(0, n - 1 - i)
    # Shift each value past the earlier draws so all final values are distinct.
    for i, v in reversed(list(enumerate(sample))):
        for p in reversed(sample[:i]):
            if v >= p:
                v += 1
        sample[i] = v
    return sample
random.seed(100)
print(sample_wo_replacement(10, 5))
# [2, 8, 9, 7, 1]
print(sample_wo_replacement(10, 10))
# [6, 5, 8, 4, 0, 9, 1, 2, 7, 3]
This is a possible way to do it in TensorFlow (not sure if the best one):
import tensorflow as tf

def sample_wo_replacement_tf(n, k):
    # First loop
    sample = tf.constant([], dtype=tf.int64)
    i = 0
    sample, _ = tf.while_loop(
        lambda sample, i: i < k,
        # This is ugly but I did not want to define more functions
        lambda sample, i: (tf.concat([sample,
                                      tf.random_uniform([1], maxval=tf.cast(n - tf.shape(sample)[0], tf.int64), dtype=tf.int64)],
                                     axis=0),
                           i + 1),
        [sample, i], shape_invariants=[tf.TensorShape((None,)), tf.TensorShape(())])
    # Second loop
    def inner_loop(sample, i):
        sample_size = tf.shape(sample)[0]
        v = sample[i]
        j = i - 1
        v, _ = tf.while_loop(
            lambda v, j: j >= 0,
            lambda v, j: (tf.cond(v >= sample[j], lambda: v + 1, lambda: v), j - 1),
            [v, j])
        return (tf.where(tf.equal(tf.range(sample_size), i), tf.tile([v], (sample_size,)), sample), i - 1)
    i = tf.shape(sample)[0] - 1
    sample, _ = tf.while_loop(lambda sample, i: i >= 0, inner_loop, [sample, i])
    return sample
And an example:
with tf.Graph().as_default(), tf.Session() as sess:
    tf.set_random_seed(100)
    sample = sample_wo_replacement_tf(10, 5)
    for i in range(10):
        print(sess.run(sample))
# [3 0 6 8 4]
# [5 4 8 9 3]
# [1 4 0 6 8]
# [8 9 5 6 7]
# [7 5 0 2 4]
# [8 4 5 3 7]
# [0 5 7 4 3]
# [2 0 3 8 6]
# [3 4 8 5 1]
# [5 7 0 2 9]
This is quite intensive on tf.while_loops, though, which are well known not to be particularly fast in TensorFlow, so I wouldn't know how fast you can really get with this method without some kind of benchmarking.
EDIT 4:
One last possible method. You can divide the range of possible values (0 to n) into "chunks" of size c, pick a random number of values from each chunk, and then shuffle everything. The amount of memory that you use is limited by c, and you don't need nested loops. If n is divisible by c, then you should get an essentially perfect random distribution; otherwise, values in the last "short" chunk would receive some extra probability (this may be negligible depending on the case). Here is a NumPy implementation. It is somewhat long to account for different corner cases and pitfalls, but if c ≥ k and n mod c = 0 several parts get simplified.
import numpy as np
def sample_chunked(n, k, chunk=None):
    chunk = chunk or n
    last_chunk = chunk
    parts = n // chunk
    # Distribute k among chunks
    max_p = min(float(chunk) / k, 1.0)
    max_p_last = max_p
    if n % chunk != 0:
        parts += 1
        last_chunk = n % chunk
        max_p_last = min(float(last_chunk) / k, 1.0)
    p = np.full(parts, 2)
    # Iterate until a valid distribution is found
    while not np.isclose(np.sum(p), 1) or np.any(p > max_p) or p[-1] > max_p_last:
        p = np.random.uniform(size=parts)
        p /= np.sum(p)
    dist = (k * p).astype(np.int64)
    sample_size = np.sum(dist)
    # Account for rounding errors
    while sample_size < k:
        i = np.random.randint(len(dist))
        while (dist[i] >= chunk) or (i == parts - 1 and dist[i] >= last_chunk):
            i = np.random.randint(len(dist))
        dist[i] += 1
        sample_size += 1
    while sample_size > k:
        i = np.random.randint(len(dist))
        while dist[i] == 0:
            i = np.random.randint(len(dist))
        dist[i] -= 1
        sample_size -= 1
    assert sample_size == k
    # Generate sample parts
    sample_parts = []
    for i, v in enumerate(np.nditer(dist)):
        if v <= 0:
            continue
        c = chunk if i < parts - 1 else last_chunk
        base = chunk * i
        sample_parts.append(base + np.random.choice(c, v, replace=False))
    sample = np.concatenate(sample_parts, axis=0)
    np.random.shuffle(sample)
    return sample
np.random.seed(100)
print(sample_chunked(15, 5, 4))
# [ 8 9 12 13 3]
A quick benchmark of sample_chunked(100000000, 100000, 100000) takes about 3.1 seconds on my computer, while I haven't been able to run the previous algorithm (the sample_wo_replacement function above) to completion with the same parameters. It should be possible to implement it in TensorFlow, maybe using tf.TensorArray, although it would require significant effort to get it exactly right.
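For reference, a minimal timing harness for that benchmark (the exact figures will vary by machine):
import time

start = time.perf_counter()
sample = sample_chunked(100_000_000, 100_000, chunk=100_000)
print(len(sample), round(time.perf_counter() - start, 1))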
Use the Gumbel-max trick, as described here: https://github.com/tensorflow/tensorflow/issues/9260
z = -tf.log(-tf.log(tf.random_uniform(tf.shape(logits), 0, 1)))
_, indices = tf.nn.top_k(logits + z, K)
indices are what you want. This trick is that easy!
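For what it's worth, this also covers non-uniform sampling: keeping the top k of logits + Gumbel noise is known to be equivalent to sequentially sampling k distinct items with probabilities proportional to exp(logits). A small NumPy sketch of the same idea (the weights here are made up for illustration):
import numpy as np

rng = np.random.default_rng(0)
logits = np.log(np.array([0.1, 0.2, 0.3, 0.4]))       # unnormalized log-probabilities
k = 2
z = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
indices = np.argsort(logits + z)[-k:]                 # indices of the k largest scores
print(indices)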
The following works fairly fast on the GPU, and I did not encounter memory issues when using n~100M and k~10k (using NVIDIA GeForce GTX 1080 Ti):
def random_choice_without_replacement(n, k):
    """equivalent to 'numpy.random.choice(n, size=k, replace=False)'"""
    return tf.math.top_k(tf.random.uniform(shape=[n]), k, sorted=False).indices
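A quick sanity check of that function (assuming TF 2.x in eager mode; the sizes are just examples):
import tensorflow as tf

idx = random_choice_without_replacement(1_000_000, 10_000)
# top_k returns k distinct positions, so all indices should be unique:
assert int(tf.size(tf.unique(idx)[0])) == 10_000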

Is it a bug in Z3? Incorrect answer on Real and ForAll applied

I'm trying to find the minimal value of the parabola y = (x + 2)**2 - 3; the answer should be y == -3, at x == -2.
But z3 gives the answer [x = 0, y = 1], which doesn't satisfy the ForAll assertion.
Am I assuming something incorrectly?
Here is the Python code:
from z3 import *
x, y, z = Reals('x y z')
print(Tactic('qe').apply(And(y == (x + 2) ** 2 - 3,
                             ForAll([z], y <= (z + 2) ** 2 - 3))))
solve(y == x * x + 4 * x + 1,
      ForAll([z], y <= z * z + 4 * z + 1))
And the result:
[[y == (x + 2)**2 - 3, True]]
[x = 0, y = 1]
The result shows that the 'qe' tactic eliminated the ForAll assertion to True, although it is NOT always true.
Is that the reason the solver gives a wrong answer?
What should I write to find the minimal (or maximal) value of such an expression?
BTW, the Z3 version is 4.3.2 for Mac.
I referred to
How does Z3 handle non-linear integer arithmetic?
and found a partial solution, using the 'qfnra-nlsat' and 'smt' tactics.
from z3 import *
x, y, z = Reals('x y z')
s1 = Then('qfnra-nlsat', 'smt').solver()
print(s1.check(And(y == (x + 2) ** 2 - 3,
                   ForAll([z], y <= (z + 2) ** 2 - 3))))
print(s1.model())
s2 = Then('qe', 'qfnra-nlsat', 'smt').solver()
print(s2.check(And(y == (x + 2) ** 2 - 3,
                   ForAll([z], y <= (z + 2) ** 2 - 3))))
print(s2.model())
And the result:
sat
[x = -2, y = -3]
sat
[x = 0, y = 1]
Still, the 'qe' tactic and the default solver seem buggy; they don't give the correct result.
Further comments and discussion are welcome.
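As a final note, newer Z3 releases (4.4 and later) include an Optimize engine that can search for the minimum directly, without any quantifier; I am not sure how well it handles nonlinear constraints like this one, but it sidesteps the 'qe' issue entirely. A sketch, assuming a recent z3 Python binding:
from z3 import Optimize, Reals, sat

x, y = Reals('x y')
opt = Optimize()
opt.add(y == (x + 2) ** 2 - 3)
opt.minimize(y)              # objective: minimize y
if opt.check() == sat:
    print(opt.model())       # expect x = -2, y = -3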