Context: I do research on bounded Java program verification using Z3. I want to get an optimization model for a linearization problem. A standard approach would be to incrementally search for models until an unsat case is found. But performance seems to be a problem, and it destroys code portability by introducing JNI, which integrates the Z3 C/C++ API into my tool.
Now I want to add constraints on all inputs of a Java method. I use quantified arrays (I use the theory of arrays to model heaps). However, Z3 always returns "unknown" immediately on a satisfiable problem; it seems impossible to generate a model. I noticed that there is a Z3 option, INST_GEN, and I am trying to understand it. I feed the following formulas to Z3.
(set-option :INST_GEN true)
(define-sort S () (_ BitVec 2))
(declare-fun s () S)
(assert (= s (_ bv0 2)))
(define-sort A () (Array S S))
(push) ;; 1st case
(assert (forall ((a A)) (= (select a s) s)))
(check-sat)
(get-model)
(pop)
(push) ;; 2nd case
(declare-fun a () A)
(assert (forall ((t S)) (= (select a t) t)))
(check-sat)
(get-model)
(pop)
(push) ;; 3rd case
(check-sat)
(get-model)
(pop)
In both the 1st and 2nd cases, Z3 gives a segmentation fault on Linux and crashes on Windows 7. Both builds are Z3 version 4.0, x64.
The 3rd case is quantifier-free, and Z3 successfully generates a model:
sat
(model (define-fun s () (_ BitVec 2) #b00) )
My first question is: how does this option work? Does it enumerate arrays?
My second question: I notice that Z3 can successfully return "unsat" on an unsatisfiable problem with quantified arrays. Does Z3 support some option or approach to generate a model for a satisfiable problem with quantified arrays, given bounded indices and elements, e.g. using if-then-else clauses?
First, the option INST_GEN was part of an experiment. It should not have been exposed to external users, and it was not seriously tested. It will be hidden in future versions. Sorry about that.
Second, in general, Z3 will fail on satisfiable problems that quantify over arrays. The following tutorial (section Quantifiers) describes many fragments where Z3 is complete.
Finally, Z3 has many different engines/solvers. However, only one of them supports incremental solving. Whenever push/pop commands are used, Z3 automatically switches to this incremental solver.
If we remove the push and pop commands, then Z3 can show the second problem to be satisfiable.
Here is a link with the modified example: http://rise4fun.com/Z3/apcQ.
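Regarding the bounded case from the second question: when the index sort is finite (here a 2-bit bit-vector), the universal quantifier can be eliminated by hand, replacing it with one ground assertion per index value. A minimal sketch of that expansion, as a hypothetical Python helper that emits SMT-LIB text (not part of Z3's API):

```python
# Sketch: expand (forall ((t S)) (= (select a t) t)) over a finite
# bit-vector sort into ground SMT-LIB assertions, one per index value.

def expand_forall(width: int, array_name: str = "a") -> list:
    """Instantiate the quantified array axiom for every value of a
    bit-vector sort of the given width."""
    asserts = []
    for value in range(2 ** width):
        bv = format(value, "0{}b".format(width))
        asserts.append(
            "(assert (= (select {0} #b{1}) #b{1}))".format(array_name, bv)
        )
    return asserts

for line in expand_forall(2):
    print(line)
```

Feeding the four resulting ground assertions to Z3 instead of the quantified formula keeps the problem quantifier-free, so model generation works as in the 3rd case.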
I am writing an interpreter for the lambda calculus in C#. So far I have gone down the following avenues for interpretation.
Compilation of terms to MSIL, such that lazy evaluation is still preserved.
Evaluation on a tree of terms (term rewriting).
At this moment, the MSIL compilation strategy is well over an order of magnitude faster in almost any case I have been able to test. However, I am looking into optimizing the term rewriter by identifying patterns often used in the construction of LC terms. So far, I have come up with one method in particular that provides a relatively small speedup: identification of exponentiated applications. E.g. f (f (f (f x))) is simplified to f^4 x. Then, a rule for applications of exponentials of the same function is used, namely f^m (f^n x) = f^(m + n) x. This rule works particularly well for the exponentiation of Church numerals.
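The exponentiation rewrite described above can be sketched with a toy term encoding (this is illustrative Python, not the original C# interpreter; the tuple shapes are my own invention):

```python
# Toy term encoding: ("app", f, x) is an application, strings are
# variables, and ("pow", f, n, x) is the collapsed form f^n x.

def collapse_pow(term):
    """Rewrite f (f (... (f x))) into ("pow", f, n, x), merging
    f^m (f^n x) into f^(m+n) x as it goes."""
    if not isinstance(term, tuple) or term[0] != "app":
        return term
    f, arg = term[1], collapse_pow(term[2])
    if isinstance(arg, tuple) and arg[0] == "pow" and arg[1] == f:
        # f^m (f^n x) = f^(m+n) x, with m = 1 for a single application
        return ("pow", f, arg[2] + 1, arg[3])
    return ("pow", f, 1, arg)
```

A real implementation would also need a rule to expand ("pow", f, n, x) back into applications when no further merging applies.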
This optimization has me wondering: Are there other pattern-based approaches to optimization in LC?
I have a Common Lisp function that merges two ordered lists of symbols, without duplicates (two ordered sets):
(defun my-merge (x y)
"merge two lists of symbols *already sorted and without duplicates*
(and return the resulting list sorted and without duplicates)"
(let* ((first (cons nil nil))
(last first))
(loop while (and x y)
for cx = (car x)
for cy = (car y)
if (string= cx cy)
do (setf x (cdr x))
else if (string< cx cy)
do (rplacd last (cons cx nil))
and do (setf last (cdr last)
x (cdr x))
else do (rplacd last (cons cy nil))
and do (setf last (cdr last)
y (cdr y)))
(rplacd last (or x y))
(cdr first)))
Since I have found only scarce information about the use of type declarations in practical cases to compile code efficiently, I am unsure whether it is sufficient to declare the variables, for instance in this way:
(defun my-merge (x y)
"merge two list of symbols *already sorted and without duplicates*"
(declare (list x y))
(let* ((first (cons nil nil))
(last first))
(declare (cons first last))
(loop while (and x y)
for cx symbol = (car x)
for cy symbol = (car y)
...
or whether, as I suppose, it is also necessary to add the THE specifier to my code? But then, where and in which cases should I add it?
Is there some rule that one can follow?
Should I also declare the types of my functions, again for optimization purposes?
Style
Since you don't actually use the extended LOOP features in any useful way and the LOOP syntax isn't that great for your example, I would propose to write it with the primitive LOOP. See how COND makes it more readable for a Lisp programmer:
(defun my-merge (x y &aux (first (list nil)) (last first) cx cy)
(macrolet ((cdr! (v)
`(setf ,v (cdr ,v))))
(loop (unless (and x y)
(return))
(setf cx (car x) cy (car y))
(cond ((string= cx cy)
(cdr! x))
((string< cx cy)
(rplacd last (list cx))
(cdr! last)
(cdr! x))
(t
(rplacd last (list cy))
(cdr! last)
(cdr! y))))
(rplacd last (or x y))
(cdr first)))
Compiling
Given the level of sophistication of a compiler:
fully stupid = compiler ignores all declarations -> declarations don't help
mostly stupid = compiler needs declarations everywhere, but optimizes -> you need to write a lot of declarations
example:
(let ((a 1) (b 2))
(declare (integer a b))
(let ((c (the integer (* (the integer (+ a b))
(the integer (- a b))))))
(declare (integer c))
(the integer (* c c))))
Note that it might not be enough to know what the argument types are; it might be necessary to declare the types of results as well. Thus the use of THE. DISASSEMBLE and the profiler are your friends.
basic = compiler needs type declarations, optimizes, but can also infer some types. Types for the standard language are known.
Even better compilers complain about type errors, can propagate types across functions and can complain when certain optimizations are not possible.
Sequence functions
Note that sequence functions are a particularly tough case. Sequences have lists and vectors (including strings) as subtypes.
Let's say a sequence function is:
(foo result-type sequence-1 sequence-2 fn)
if the sequences are of the same type, one might want an optimized code version for lists and another one for vectors.
if the sequences are of different types, it might be useful to convert one sequence to a different type. Maybe not.
the result type also has influence; depending on the result type, different algorithms may be possible/necessary.
So the degree of freedom is quite high. The compiler might contribute to fast code. But also the implementation of the particular sequence function might be able to do some optimization at runtime.
Then fn is a function which takes elements and produces new elements. It might be helpful to know its type signature - or not.
I can't really say which current Common Lisp implementation has a sophisticated implementation of the sequence functions. Though I remember that the Symbolics Common Lisp implementations put some effort into it.
Documentation and papers
Often what the compiler can optimize and how is not well documented, if at all. There are some papers about this topic, but often they are old and/or outdated.
The Python compiler of CMUCL: The Compiler.
The Python compiler of CMUCL: Advanced Compiler Use.
The Python compiler for CMU Common Lisp (Postscript)
SBCL Compiler
Allegro CL: Compiling
LispWorks: Optimizing your code
Performance beyond expectations
How to make Lisp code go faster than C
An evaluation of major Lisp compilers
A question about typed/racket. I'm currently working my way through the Project Euler problems to better learn Racket. Some of my solutions are really slow, especially when dealing with primes and factors. So for some problems, I've tried to make a typed/racket version, and I find no improvement in speed, quite the opposite. (I try to minimize the impact of overhead by using really big numbers; calculations take around 10 seconds.)
I know from the Racket docs that the best optimizations happen when using Floats/Flonums. So... yeah, I've tried to make float versions of problems dealing with integers. As in this problem with a racket version using integers, and a typed/racket one artificially turning integers to floats. I have to use tricks: checking equality between two numbers actually means checking that they are "close enough", like in this function which checks if x can be divided by y:
(: divide? (-> Flonum Flonum Boolean))
(define (divide? x y)
(let ([r (/ x y)])
(< (- r (floor r)) 1e-6)))
It works (well... the solution is correct) and I have a 30%-40% speed improvement.
How acceptable is this? Do people actually do that in real life? If not, what is the best way to optimize typed/racket solutions when using integers? Or should typed/racket be abandoned altogether when dealing with integers and reserved for problems with float calculations?
In most cases the solution is to use better algorithms rather than converting to Typed Racket.
Since most problems at Project Euler concern integers, here are a few tips and tricks:
The division operator / needs to compute the greatest common divisor of the denominator and the numerator in order to cancel out common factors. This makes / a bad choice if you only want to know whether one number divides another. Use (= (remainder n m) 0) to check whether m divides n. Also: use quotient rather than / when you know the division has a zero remainder.
Use memoization to avoid recomputation. I.e. use a vector to store already computed results. Example: https://github.com/racket/math/blob/master/math-lib/math/private/number-theory/eulerian-number.rkt
First implement a naive algorithm. Then consider how to reduce the number of cases. A rule of thumb: brute force works best if you can reduce the number of cases to 1-10 million.
To reduce the number of cases, look for parametrizations of the search space. Example: if you need to find a Pythagorean triple, loop over numbers m and n and then compute a = m^2 - n^2, b = 2mn, and c = m^2 + n^2. This will be faster than looping over a, b, and c, skipping those triples where a^2 + b^2 = c^2 is not true.
Look for tips and tricks in the source of math/number-theory.
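Two of the tips above can be sketched in Python (function names are illustrative): the remainder-based divisibility test, and the (m, n) parametrization of Pythagorean triples instead of a brute-force loop over (a, b, c).

```python
import math

def divides(m, n):
    """True when m divides n -- a remainder check, cheaper than
    computing the exact rational n / m."""
    return n % m == 0

def pythagorean_triples(limit):
    """Generate triples (a, b, c) with c <= limit by looping over the
    parameters m > n >= 1 instead of over a, b, c directly."""
    triples = []
    for m in range(2, math.isqrt(limit) + 1):
        for n in range(1, m):
            a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
            if c <= limit:
                triples.append((a, b, c))
    return triples
```

The parametrized loop visits on the order of sqrt(limit)^2 pairs rather than limit^3 candidate triples, which is where the speedup comes from.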
Not aspiring to be a real answer, since I can't provide any general tips soegaard hasn't posted already, but since I recently did "Amicable numbers
Problem 21", I thought I might as well leave you my solution here (sadly, not many Lisp solutions get posted on Euler...).
(define (divSum n)
(define (aux i sum)
(if (> (sqr i) n)
(if (= (sqr (sub1 i)) n) ; final check if n is a perfect square
(- sum (sub1 i))
sum)
(aux (add1 i) (if (= (modulo n i) 0)
(+ sum i (/ n i))
sum))))
(aux 2 1))
(define (amicableSum n)
(define (aux a sum)
(if (>= a n)
sum
(let ([b (divSum a)])
(aux (add1 a)
(if (and (> b a) (= (divSum b) a))
(+ sum a b)
sum)))))
(aux 2 0))
> (time (amicableSum 10000))
cpu time: 47 real time: 46 gc time: 0
When dealing with divisors, one can often stop at the square root of n, as divSum does here. And when you find an amicable pair, you may as well add both members to the sum at once, which saves an unnecessary computation of (divSum b) in my code.
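The square-root bound used by divSum can be sketched in Python (an illustrative translation, not the original Racket): each divisor i <= sqrt(n) is paired with its cofactor n // i, and a perfect square root is counted only once.

```python
import math

def div_sum(n):
    """Sum of the proper divisors of n (excluding n itself)."""
    total = 1  # 1 divides every n > 1
    for i in range(2, math.isqrt(n) + 1):
        if n % i == 0:
            total += i
            if i != n // i:  # avoid double-counting a square root
                total += n // i
    return total
```

With the pairing trick the loop runs sqrt(n) times instead of n times, the same saving the Racket version achieves.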
{WW} - Decidable, but not context free
{WW^R} - Context free, but not regular
Σ* - Regular language
How can you determine which class they belong to?
Maybe my answer will be helpful to you:
L1 = {ww | w ∈ {a, b}* }
is not a context-free language, because no pushdown automaton (PDA) is possible for it (not even a non-deterministic PDA). Why? Suppose you push the first w onto the stack. To match the second w against the first w, the stack would have to give back the first w in its original order, but a stack pops in reverse order (and we can't read the input in reverse order), so this is not possible with a stack. It is nevertheless decidable, because we can construct a Turing machine for L1 that always halts after a finite number of steps.
L3 = {wwR | w ∈ {a, b}* }
Language L3 is a non-deterministic context-free language, because a non-deterministic PDA is possible for it, but no finite automaton is. You can also prove this using the pumping lemma for regular languages.
Σ* - Regular Language(RL)
Σ* can be described by a regular expression (RE), e.g.
if Σ = {a, b} then the RE is (a + b)*. An RE is possible only for RLs.
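The Σ* case can be checked mechanically: (a + b)* is written (a|b)* in common regex syntax, and every string over the alphabet matches it. A small sketch in Python (the function name is illustrative):

```python
import re

def in_sigma_star(s):
    """True when s consists only of letters from Sigma = {a, b},
    i.e. s is in the language (a|b)* -- which is all of Sigma*."""
    return re.fullmatch("(a|b)*", s) is not None
```

Since a regular expression recognizes it, Σ* is regular by definition.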
The examples in my question may be more helpful to you.
I want to implement the vim commandT plugin in emacs. This code is mostly a translation from the matcher.
I've got some elisp here that's still too slow to use on my netbook -
how can I speed it up?
(eval-when-compile (require 'cl))
(defun commandT-fuzzy-match (choices search-string)
(sort (loop for choice in choices
for score = (commandT-fuzzy-score choice search-string (commandT-max-score-per-char choice search-string))
if (> score 0.0) collect (list score choice))
#'(lambda (a b) (> (first a) (first b)))
))
(defun* commandT-fuzzy-score (choice search-string &optional (score-per-char (commandT-max-score-per-char choice search-string)) (choice-pointer 0) (last-found nil))
(condition-case error
(loop for search-char across search-string
sum (loop until (char-equal search-char (elt choice choice-pointer))
do (incf choice-pointer)
finally return (let ((factor (cond (last-found (* 0.75 (/ 1.0 (- choice-pointer last-found))))
(t 1.0))))
(setq last-found choice-pointer)
(max (commandT-fuzzy-score choice search-string score-per-char (1+ choice-pointer) last-found)
(* factor score-per-char)))))
(args-out-of-range 0.0) ; end of string hit without match found.
))
(defun commandT-max-score-per-char (choice search-string)
(/ (+ (/ 1.0 (length choice)) (/ 1.0 (length search-string))) 2))
Be sure to compile that part, as that already helps a lot.
And a benchmark:
(let ((choices (split-string (shell-command-to-string "curl http://sprunge.us/FcEL") "\n")))
(benchmark-run-compiled 10
(commandT-fuzzy-match choices "az")))
Here are some micro optimizations you can try:
Use car-less-than-car instead of your lambda expression. This has no visible effect since the time is not spent in sort but in commandT-fuzzy-score.
Use defun instead of defun*: those optional arguments with a non-nil default have a non-negligible hidden cost. This reduces the GC cost by almost half (and you started with more than 10% of the time spent in the GC).
(* 0.75 (/ 1.0 XXX)) is equal to (/ 0.75 XXX).
use eq instead of char-equal (though that changes the behavior to always be case-sensitive). This makes a fairly large difference.
use aref instead of elt.
I don't understand why you pass last-found in your recursive call, so I obviously don't fully understand what your algorithm is doing. But assuming that was an error, you can turn it into a local variable instead of passing it as an argument. This saves you time.
I don't understand why you make a recursive call for every search-char that you find, instead of only for the first one. Another way to look at this is that your max compares a "single-char score" with a "whole search-string score" which seems rather odd. If you change your code to do the max outside of the two loops with the recursive call on (1+ first-found), that speeds it up by a factor of 4 in my test case.
The multiplication by score-per-char can be moved outside of the loop (this doesn't seem to be true for your original algorithm).
Also, the Elisp as implemented in Emacs is pretty slow, so you're often better off using "big primitives" so as to spend less time interpreting Elisp (byte-)code and more time running C code. Here is for example an alternative implementation (not of your original algorithm but of the one I got after moving the max outside of the loops), using regexp pattern matching to do the inner loop:
(defun commandT-fuzzy-match-re (choices search-string)
(let ((search-re (regexp-quote (substring search-string 0 1)))
(i 1))
(while (< i (length search-string))
(setq search-re (concat search-re
(let ((c (aref search-string i)))
(format "[^%c]*\\(%s\\)"
c (regexp-quote (string c))))))
(setq i (1+ i)))
(sort
(delq nil
(mapcar (lambda (choice)
(let ((start 0)
(best 0.0))
(while (string-match search-re choice start)
(let ((last-found (match-beginning 0)))
(setq start (1+ last-found))
(let ((score 1.0)
(i 1)
(choice-pointer nil))
(while (setq choice-pointer (match-beginning i))
(setq i (1+ i))
(setq score (+ score (/ 0.75 (- choice-pointer last-found))))
(setq last-found choice-pointer))
(setq best (max best score)))))
(when (> best 0.0)
(list (* (commandT-max-score-per-char
choice search-string)
best)
choice))))
choices))
#'car-less-than-car)))