I have been reading the paper "Constraint-Based Linear-Relations Analysis" by Sriram Sankaranarayanan, Henny B. Sipma, and Zohar Manna. It checks fixpoint equations arising from abstract interpretation by applying Farkas' Lemma to a given template inequality with unknown coefficients; this yields constraints on the values of the coefficients such that substituting any solution back into the template produces a valid invariant relationship.
I have followed Example 1 in that paper.
Let V = {x, y} and L = {l0}. Consider the transition system shown below. Each transition models a concurrent process that updates the variables x, y atomically.
Θ = (x = 0 ∧ y = 0)
T = {τ1 , τ2 }
τ1 = <l0 , l0 , [x' = x + 2y ∧ y' = 1 − y]>
τ2 = <l0, l0 , [x' = x + 1 ∧ y' = y + 2]>
I have encoded initiation using Farkas' Lemma (Example 2 in that paper), as well as consecution (through transitions τ1 and τ2).
The authors say:
We fix a linear transition system Π with variables {x1, . . . , xn}, collectively referred to as x. The system is assumed to have a single location l0 to simplify the presentation. The template assertion at location l0 is α(c) = c1 x1 + · · · + cn xn + d ≥ 0. The coefficient variables {c1, . . . , cn, d} are collectively referred to as c. The system's transitions are {τ1, . . . , τm}, where τi : <l0, l0, ρi>. The initial condition is denoted by Θ. The system in Example 1 will be used as a running example to illustrate the presented ideas.
I have arrived at the overall constraint, obtained as the conjunction of the constraints from initiation and from consecution for each transition (Example 4 in that paper).
At that point I guess it is possible to solve the constraint by encoding all of it in a solver like Z3.
In fact I did that by encoding the linear arithmetic directly in Z3:
(define-sort MyType () Int)
(declare-const myzero MyType)
(declare-const mi1 MyType)
(declare-const mi2 MyType)
(declare-const c1 MyType)
(declare-const c2 MyType)
(declare-const d MyType)
(assert (= myzero 0))
;initiation
(assert (>= d 0) )
;transition 1
(assert (and
(= (- (* mi1 c1) c1) 0)
(= (- (+ (* mi1 c2) c2) (* 2 c1) ) 0)
(<= (- (- (* mi1 d) d) c2) 0)
(>= mi1 0)
))
;transition 2
(assert (and
(= (- (* mi2 c1) c1) 0)
(= (- (* mi2 c2) c2) 0)
(<= (- (- (- (* mi2 d) d) c1) (* 2 c2) ) 0)
(>= mi2 0)
))
(check-sat)
(get-model)
I guess I am doing something wrong, since I have not obtained an inductive invariant at location l0 in the form of individual (or ranges of) values for c1, c2, ..., cn, d.
Z3 gave me zero for all the coefficients (with only d nonzero):
sat
(model
(define-fun mi2 () Int 0)
(define-fun c2 () Int 0)
(define-fun mi1 () Int 0)
(define-fun c1 () Int 0)
(define-fun d () Int 4)
(define-fun myzero () Int 0)
)
I have tried to find related examples, but so far without luck.
If I understand your Z3 encoding correctly, you are indeed getting correct but trivial invariants, like 0 <= 4.
To get interesting invariants, I suggest adding constraints like c1 <> 0 to see if the solver gives you something interesting back.
Our work was done long before Z3 existed: we used the solver REDLOG, part of REDUCE, which is still available. You are welcome to email me with your queries.
Best,
Sriram
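As an illustrative aside (not from the original thread): the suggestion of forcing c1 ≠ 0 can be tried without an SMT solver at all, by brute-forcing the Farkas constraints from the question over a small integer range in plain Python. A nontrivial solution such as c1 = c2 = 1, d = 0, i.e. the invariant x + y >= 0, then shows up.

```python
# Illustrative brute force over the Farkas constraints from the question,
# restricted to small integer ranges. This is only a sanity check, not a
# replacement for an SMT (or quantifier-elimination) solver.

def farkas_solutions(lo=-3, hi=3):
    rng = range(lo, hi + 1)
    for c1 in rng:
        if c1 == 0:                       # force a nontrivial invariant
            continue
        for c2 in rng:
            for d in rng:
                for mi1 in range(0, hi + 1):   # Farkas multipliers are >= 0
                    for mi2 in range(0, hi + 1):
                        if (d >= 0                              # initiation
                                and mi1 * c1 - c1 == 0          # consecution, tau1
                                and mi1 * c2 + c2 - 2 * c1 == 0
                                and mi1 * d - d - c2 <= 0
                                and mi2 * c1 - c1 == 0          # consecution, tau2
                                and mi2 * c2 - c2 == 0
                                and mi2 * d - d - c1 - 2 * c2 <= 0):
                            yield (c1, c2, d)

print((1, 1, 0) in set(farkas_solutions()))  # -> True: x + y >= 0 is inductive
```

Indeed, τ1 maps x + y to (x + 2y) + (1 − y) = x + y + 1, and τ2 maps it to x + y + 3, so x + y >= 0 holds initially and is preserved by both transitions.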
I'm trying to create a function that gets the sum of the squares of the larger two of three numbers passed in. (It's exercise 1.3 in SICP.)
When I run the following code I get the error ";The object #f is not applicable." If I switch the 3 and the 1 in my function call, the error message says #t instead of #f.
(define (sumOfSquareOfLargerTwoNumbers a b c) (
cond (
( (and (> (+ a b) (+ a c) ) (> (+ a b) (+ b c) ) ) (+ (square a) (square b) ) )
( (and (> (+ a c) (+ a b) ) (> (+ a c) (+ b c) ) ) (+ (square a) (square c) ) )
( (and (> (+ b c) (+ a b) ) (> (+ b c) (+ a c) ) ) (+ (square b) (square c) ) )
)
))
(sumOfSquareOfLargerTwoNumbers 1 2 3)
I was assuming the appropriate condition would return true and I'd get the square of the larger two numbers. Could someone please explain why I'm getting this error instead?
There are too many brackets in front of cond and that's causing the problem:
(cond (((and
The proper syntax for your solution should be:
(define (sumOfSquareOfLargerTwoNumbers a b c)
(cond ((and (> (+ a b) (+ a c)) (> (+ a b) (+ b c)))
(+ (square a) (square b)))
((and (> (+ a c) (+ a b)) (> (+ a c) (+ b c)))
(+ (square a) (square c)))
((and (> (+ b c) (+ a b)) (> (+ b c) (+ a c)))
(+ (square b) (square c)))))
What was happening is that the condition evaluated to a boolean, and the unexpected surrounding brackets made it look like a procedure application, so you ended up with something like this:
(#t 'something)
Which of course fails, because #t or #f are not procedures and cannot be applied. Just be careful with the brackets and use a good IDE with syntax coloring and code formatting, and you won't have this problem again.
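As a side note (not part of the original answer), the corrected logic can be transliterated into Python to confirm the expected result for the call with 1, 2, 3:

```python
# Python transliteration of the corrected Scheme solution. Like the
# original, it does not handle ties between the pairwise sums.

def square(n):
    return n * n

def sum_of_square_of_larger_two(a, b, c):
    if a + b > a + c and a + b > b + c:
        return square(a) + square(b)
    if a + c > a + b and a + c > b + c:
        return square(a) + square(c)
    if b + c > a + b and b + c > a + c:
        return square(b) + square(c)

print(sum_of_square_of_larger_two(1, 2, 3))  # -> 13, i.e. 2^2 + 3^2
```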
I don't understand why a and b work in this code. Shouldn't we define the variables a and b before the do?
(define v1 3)
(define v2 2)
(do ((a 1 (+ a v1))
(b 2 (+ b v2)))
((>= a b) (if (= a b) 'YES 'NO)))
After (do, the local variables for the do loop are defined:
(a 1 (+ a v1)) means: define the local loop variable a with starting value 1, and assign (+ a v1) to a at the beginning of each new round.
(b 2 (+ b v2)) means: define the local loop variable b with starting value 2, and assign (+ b v2) to b at the beginning of each new round.
So, a and b are defined by the do loop itself.
In Scheme, there are no control-flow operations other than procedure calls.
do is just a macro. The R5RS report gives an implementation:
(define-syntax do
(syntax-rules ()
((do ((var init step ...) ...)
(test expr ...)
command ...)
(letrec
((loop
(lambda (var ...)
(if test
(begin
(if #f #f)
expr ...)
(begin
command
...
(loop (do "step" var step ...)
...))))))
(loop init ...)))
((do "step" x)
x)
((do "step" x y)
y)))
Your code turns into something like this:
(let loop ((a 1) (b 2))
(if (>= a b)
(if (= a b) 'YES 'NO)
(loop (+ a v1) (+ b v2))))
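To make the control flow concrete, here is a plain-Python transliteration of the same loop (an illustrative aside, not from the original answer):

```python
# Python transliteration of the do loop: a starts at 1 and grows by
# v1 each round, b starts at 2 and grows by v2; the loop stops as soon
# as a >= b and reports whether the two met exactly.

def race(v1=3, v2=2):
    a, b = 1, 2
    while a < b:                   # loop while the do-test (>= a b) is false
        a, b = a + v1, b + v2      # the "step" expressions of the do loop
    return 'YES' if a == b else 'NO'

print(race())       # a: 1, 4; b: 2, 4 -> they meet at 4, so YES
print(race(5, 1))   # a: 1, 6; b: 2, 3 -> a jumps past b, so NO
```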
When mixing assert and assert-soft in optimization tasks (e.g. maximize), the soft assertions are disregarded if they would lead to a non-optimal result.
Is it possible to restrict the "softness" to the satisfiability search only? That is: if a soft assertion is satisfiable at all, it is kept and from then on treated as a "hard" assertion during optimization?
Example exhibiting the aforementioned:
(declare-fun x () Int)
(declare-fun y () Int)
(assert (< (+ x y) (* x y)))
(assert (>= x 0))
(assert (>= y x))
;(assert-soft (>= (* 4 x) y)); x->2, y->500
(assert (>= (* 4 x) y)); x->16, y->62
(assert (<= (* x y) 1000))
(maximize (+ x y))
(set-option :opt.priority pareto)
(check-sat)
(get-value (x y (+ x y) (* x y)))
(check-sat)
(get-value (x y (+ x y) (* x y)))
;...
This would be required to fulfill the following use case:
Given a complex fixed ruleset on a large (>1000) number of variables (mostly having a finite domain), a user can choose the desired values for any of those, which may lead to conflicts with the ruleset.
The individual values for a variable have ratings/weights.
So, given a set of user-selected (possibly conflicting) selections, the ruleset itself and finally the ratings of the set of all possible selections, one solution for all variables is to be found, which, while respecting all non-conflicting selections by the user, maximizes the total rating score.
I had the idea of using assert-soft for user selections to cancel out conflicting ones, while combining it with the optimization of z3 to get the "best" solution. However, this failed, which is the reason for this question.
My answer is yes, if you do it incrementally.
Assume that the set of soft-clauses has only a unique Boolean assignment which makes its associated MaxSMT problem optimal.
Then, it is easy to make satisfiable soft-clauses hard by fixing the value of the associated objective function. Although z3 does not (AFAIK) allow putting explicit constraints on the name of a soft-clause group, one can do it implicitly by using a lexicographic multi-objective combination rather than a pareto one.
Now, in your example we can safely switch from pareto to lexicographic search because there is only one extra objective function in addition to the one implicitly defined by assert-soft. Assuming the value of the assert-soft group is fixed, the whole problem degenerates into a single-objective formula, which in turn always has at most one Pareto-optimal solution.
Of course, this is not possible if one plans to add more objectives to the formula. In this case, the only option is to solve the formula incrementally, as follows:
(set-option :produce-models true)
(declare-fun x () Int)
(declare-fun y () Int)
(declare-fun LABEL () Bool)
(assert (and
(or (not LABEL) (>= (* 4 x) y))
(or LABEL (not (>= (* 4 x) y)))
)) ; ~= LABEL <-> (>= (* 4 x) y)
(assert-soft LABEL)
(assert (< (+ x y) (* x y)))
(assert (>= x 0))
(assert (>= y x))
(assert (>= (* 4 x) y))
(assert (<= (* x y) 1000))
(check-sat)
(get-model)
(push 1)
; assert LABEL or !LABEL depending on its value in the model
(maximize (+ x y))
; ... add other objective functions ...
(set-option :opt.priority pareto)
(check-sat)
(get-value (x y (+ x y) (* x y)))
(check-sat)
(get-value (x y (+ x y) (* x y)))
...
(pop 1)
This solves the MaxSMT problem first, fixes the Boolean assignment of the soft-clauses in a way that makes them hard, and then proceeds with optimizing multiple objectives with the pareto combination.
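As a purely illustrative aside, the two-phase idea can be mimicked on this small example with a brute-force search in plain Python (the search bounds are my own assumption, chosen to cover the feasible region):

```python
# Brute-force mirror of the two-phase scheme for the example:
#   hard: x + y < x*y,  x >= 0,  y >= x,  x*y <= 1000
#   soft: 4*x >= y
#   goal: maximize x + y
# The bounds below are assumptions that cover the feasible region
# (x*y <= 1000 together with y >= x forces x <= 31, and y <= 1000).

def hard(x, y):
    return x + y < x * y and x >= 0 and y >= x and x * y <= 1000

def soft(x, y):
    return 4 * x >= y

candidates = [(x, y) for x in range(32) for y in range(1001) if hard(x, y)]

# Phase 1: is the soft constraint satisfiable alongside the hard ones?
soft_ok = any(soft(x, y) for x, y in candidates)

# Phase 2: if so, promote it to a hard constraint, then optimize.
pool = [(x, y) for x, y in candidates if soft(x, y)] if soft_ok else candidates
best = max(pool, key=lambda p: p[0] + p[1])
print(best)  # -> (16, 62), matching the comment in the question
```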
Note that, if the MaxSMT problem admits multiple same-weight solutions for the same optimal value, then the previous approach would discard them and focus on only one assignment. To circumvent this, the ideal solution would be to fix the value of the MaxSMT objective rather than fixing its associated Boolean assignment, e.g. as follows:
(set-option :produce-models true)
(declare-fun x () Int)
(declare-fun y () Int)
(declare-fun LABEL () Bool)
(assert-soft (>= (* 4 x) y) :id soft)
(assert (< (+ x y) (* x y)))
(assert (>= x 0))
(assert (>= y x))
(assert (>= (* 4 x) y))
(assert (<= (* x y) 1000))
(minimize soft)
(check-sat)
(get-model)
(push 1)
; assert `soft` equal to its value in model, e.g.:
; (assert (= soft XXX))
(maximize (+ x y))
; ... add other objective functions ...
(set-option :opt.priority pareto)
(check-sat)
(get-value (x y (+ x y) (* x y)))
(check-sat)
(get-value (x y (+ x y) (* x y)))
...
(pop 1)
At the time of writing, this kind of syntax for objective combination is supported only by OptiMathSAT. Unfortunately, OptiMathSAT does not yet support non-linear arithmetic in conjunction with optimization, so it can't be reliably used on your example.
Luckily, one can still use this approach with z3!
If there is only one extra objective function in addition to the group of soft-clauses, one can still use the lexicographic combination rather than pareto one and get the same result.
Otherwise, if there are multiple objectives in addition to the group of soft-clauses, it suffices to explicitly encode the MaxSMT problem with a Pseudo-Boolean objective as opposed to using the assert-soft command. In this way, one retains full control of the associated objective function and can easily fix its value to any number. Note, however, that this might decrease the performance of the solver in dealing with the formula depending on the quality of the MaxSMT encoding.
I'm trying to prove the following with the Z3 SMT solver: ((x*x) + x) = ((~x * ~x) + ~x).
This is correct because of the wraparound (overflow) semantics of 32-bit unsigned arithmetic in the C programming language.
Now I have written the following smt-lib code:
(declare-fun a () Int)
(define-fun myadd ((x Int) (y Int)) Int (mod (+ x y) 4294967296) )
(define-fun mynot ((x Int)) Int (- 4294967295 (mod x 4294967296)) )
(define-fun mymul ((x Int) (y Int)) Int (mod (* x y) 4294967296) )
(define-fun myfun1 ((x Int)) Int (myadd (mynot x) (mymul (mynot x) (mynot x))) )
(define-fun myfun2 ((x Int)) Int (myadd x (mymul x x)) )
(simplify (myfun1 0))
(simplify (myfun2 0))
(assert (= (myfun1 a) (myfun2 a)))
(check-sat)
(exit)
The output from z3 is:
0
0
unsat
Now my question: why is the result "unsat"? The simplify commands in my code show that there is an assignment for which myfun1 and myfun2 produce the same result.
Is something wrong with my code, or is this a bug in Z3?
Can anybody help me? Thanks!
The incorrect result was due to a bug in the Z3 formula/expression preprocessor. The bug has been fixed and is already part of the current release (v4.3.1). The bug affected benchmarks that use terms of the form (mod (+ a b) c) or (mod (* a b) c).
We can retry the example online here, and get the expected result.
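As a side check independent of Z3 (not part of the original answer), the identity itself is easy to confirm: under 32-bit wraparound, ~x ≡ -(x + 1), so ~x*~x + ~x = (x + 1)^2 - (x + 1) = x*x + x (mod 2^32). A quick Python sketch:

```python
# Check x*x + x == ~x*~x + ~x under 32-bit unsigned wraparound.
# Since ~x == -(x + 1) (mod 2**32), we have
# ~x*~x + ~x == (x + 1)**2 - (x + 1) == x*x + x (mod 2**32).

M = 2 ** 32

def f(x):
    return (x * x + x) % M

def g(x):
    nx = (~x) % M                 # 32-bit bitwise NOT: 2**32 - 1 - x
    return (nx * nx + nx) % M

samples = list(range(1000)) + [2**31, M - 2, M - 1]
print(all(f(x) == g(x) for x in samples))  # -> True
```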
I have some code in Z3 which aims to solve an optimization problem for a Boolean formula:
(set-option :PI_NON_NESTED_ARITH_WEIGHT 1000000000)
(declare-const a0 Int) (assert (= a0 2))
(declare-const b0 Int) (assert (= b0 2))
(declare-const c0 Int) (assert (= c0 (- 99999)))
(declare-const d0 Int) (assert (= d0 99999))
(declare-const e0 Int) (assert (= e0 49))
(declare-const f0 Int) (assert (= f0 49))
(declare-const a1 Int) (assert (= a1 3))
(declare-const b1 Int) (assert (= b1 3))
(declare-const c1 Int) (assert (= c1 (- 99999)))
(declare-const d1 Int) (assert (= d1 99999))
(declare-const e1 Int) (assert (= e1 48))
(declare-const f1 Int) (assert (= f1 49))
(declare-const c Int)
(declare-const d Int)
(declare-const e Int)
(declare-const f Int)
(define-fun max ((x Int) (y Int)) Int
(ite (>= x y) x y))
(define-fun min ((x Int) (y Int)) Int
(ite (< x y) x y))
(define-fun goal ((c Int) (d Int) (e Int) (f Int)) Int
(* (- d c) (- f e)))
(define-fun sat ((c Int) (d Int) (e Int) (f Int)) Bool
(and (and (>= d c) (>= f e))
(forall ((x Int)) (=> (and (<= a0 x) (<= x b0))
(> (max c (+ x e)) (min d (+ x f)))))))
(assert (and (sat c d e f)
(forall ((cp Int) (dp Int) (ep Int) (fp Int)) (=> (sat cp dp ep fp)
(>= (goal c d e f) (goal cp dp ep fp))))))
(check-sat)
I guess it is because of the quantifiers and the implication that this code is so expensive. When I tested it online, it gave me two warnings, and the final result is unknown:
failed to find a pattern for quantifier (quantifier id: k!33)
using non nested arith. pattern (quantifier id: k!48), the weight was increased to 1000000000 (this value can be modified using PI_NON_NESTED_ARITH_WEIGHT=<val>). timeout
Could anyone tell me if these two warnings are what prevents me from getting a good result? Is there any way to optimize this piece of code so that it runs?
I have solved optimization problems in Z3 in the following, iterative way: essentially a loop that searches for a solution using several invocations of Z3.
1. Find one solution (in your case, a solution to (sat c d e f)).
2. Compute the value of that solution: if your solution is c0, d0, e0, f0, evaluate (goal c0 d0 e0 f0). Call that value v0.
3. Find a solution to the new problem (and (sat c1 d1 e1 f1) (> (goal c1 d1 e1 f1) v0)).
4. If step 3 returns UNSAT, v0 is your maximum. If not, take the value of the new solution as the new v0 and go back to step 3.
You can sometimes speed up the process by guessing an upper bound first (i.e. values cu, du, eu, fu such that (and (sat c d e f) (<= (goal cu du eu fu) (goal c d e f))) is UNSAT) and then proceeding by dichotomy.
In my experience, the iterative way is much faster than using quantifiers for optimization problems.
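This iterative scheme can be sketched in plain Python, with a tiny brute-force feasibility check standing in for the calls to Z3 (the toy problem below, maximizing x*y subject to x + y <= 10 over small nonnegative integers, is made up for illustration):

```python
# Sketch of the iterative maximization loop, with a brute-force
# find_better(v) standing in for a Z3 query of the form
# (and (sat ...) (> (goal ...) v)).

def find_better(v):
    # Toy problem, made up for illustration: maximize x*y
    # subject to x + y <= 10, with x, y in 0..10.
    for x in range(11):
        for y in range(11):
            if x + y <= 10 and x * y > v:
                return (x, y)
    return None                    # UNSAT: nothing strictly better exists

best, value = None, -1
while True:
    sol = find_better(value)
    if sol is None:                # UNSAT means the current value is optimal
        break
    best, value = sol, sol[0] * sol[1]

print(best, value)  # -> (5, 5) 25
```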
SoftTimur: Since your problem involves non-linear arithmetic (in the goal function) over integers, Z3 is likely to respond "unknown" to your problem even if you can solve other issues that you've come across. Non-linear integer arithmetic is undecidable, and it's unlikely that the current solver in Z3 can efficiently handle your problem in the presence of quantifiers. (Of course, the amazing Z3 folks can tweak their solver just "right" to address this particular problem, but the undecidability issue remains in general.) Even if you didn't have any non-linear constructs, quantifiers are a soft spot for SMT solvers, and you're unlikely to go far with the quantified approach.
So, you're essentially left with Philippe's idea of using iteration. I want to emphasize, however, that the two methods (iteration vs quantification) are not equivalent! In theory, the quantified approach is more powerful. For instance, if you ask Z3 to give you the largest integer value (a simple maximization problem, where the cost is the value of the integer itself), it'll correctly tell you that no such integer exists. If you follow the iterative approach, however, you'll loop forever. In general, the iterative approach will fail in cases where there is no global maximum/minimum to the optimization problem. Ideally, the quantifier based approach can deal with such cases, but then it's subject to other limitations as you've observed yourself.
As great as Z3 (and SMT solvers in general) are, programming them using SMT-Lib is a bit of a pain. That's why many people are building easier-to-use interfaces. If you're open to using Haskell, for instance, you can try the SBV bindings, which let you script Z3 from Haskell. In fact, I've coded up your problem in Haskell: http://gist.github.com/1485092. (Bear in mind that I might've misunderstood your SMT-Lib code or made a coding mistake, so please double-check.)
Haskell's SBV library allows both quantified and iterative approaches to optimization. When I try Z3 with quantifiers, it indeed returns "unknown", meaning it cannot decide the problem. (See the function "test1" in the program.) When I tried the iterative version (see the function "test2"), it kept finding better and better solutions; I killed it after about 10 minutes with the following solution found:
*** Round 3128 ****************************
*** Solution: [4,42399,-1,0]
*** Value : 42395 :: SInteger
Do you happen to know whether this particular instance of your problem actually has an optimal solution? If that's the case, you can let the program run for longer and it'll eventually find it, otherwise it'll go forever.
Let me know if you choose to explore the Haskell path and if you have any issues with it.