Equivalent of define-fun in Z3 API

Using Z3 with the textual format, I can use define-fun to define functions for reuse later on. For example:
(define-fun mydiv ((x Real) (y Real)) Real
  (ite (not (= y 0.0))
       (/ x y)
       0.0))
I wonder how to create a define-fun with the Z3 API (I use F#) instead of repeating the body of the function everywhere. I want to use it to avoid duplication and to make formulas easier to debug. I tried Context.MkFuncDecl, but it seems to generate only uninterpreted functions.

The define-fun command is just creating a macro. Note that the SMT 2.0 standard doesn’t allow recursive definitions.
Z3 will expand every occurrence of mydiv at parse time.
The command define-fun may be used to make the input file simpler and easier to read, but internally it does not really help Z3.
In the current API, there is no support for creating macros.
This is not a real limitation, since we can define a C or F# function that creates instances of a macro.
However, it seems you want to display (and manually inspect) formulas created using the Z3 API. In this case, macros will not help you.
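For instance, with the Python bindings (z3py), used here purely as an illustration (the same pattern carries over to the F#/.NET API), the "macro" is just a host-language function that builds the ite term wherever you need it:
from z3 import *

def mydiv(a, b):
    # Builds (ite (not (= b 0)) (/ a b) 0) at every call site,
    # mimicking what define-fun expansion would produce.
    return If(b != RealVal(0), a / b, RealVal(0))

x, y = Reals('x y')
s = Solver()
s.add(mydiv(x, y) > 1, y > 0)
print(s.check())   # expected: sat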
One alternative is to use quantifiers. You can declare an uninterpreted function mydiv and assert the universally quantified formula:
(declare-fun mydiv (Real Real) Real)
(assert (forall ((x Real) (y Real))
  (= (mydiv x y)
     (ite (not (= y 0.0))
          (/ x y)
          0.0))))
Now, you can create your formula using the uninterpreted function mydiv.
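As a rough sketch of how this looks through an API, here is the same quantified definition built with the Python bindings (z3py); in F# the corresponding calls would be (roughly) Context.MkFuncDecl and Context.MkForall:
from z3 import *

mydiv = Function('mydiv', RealSort(), RealSort(), RealSort())
x, y = Reals('x y')

s = Solver()
# Universally quantified "definition" of the uninterpreted function.
s.add(ForAll([x, y], mydiv(x, y) == If(y != RealVal(0), x / y, RealVal(0))))

# mydiv can now be used like an ordinary function symbol in formulas.
a, b = Reals('a b')
s.add(b > 0, mydiv(a, b) == 2)
print(s.check())   # expected: sat (handled by MBQI or the macro finder)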
This kind of quantified formula can be handled by Z3. Actually, there are two options to handle this kind of quantifier:
Use the macro finder: this preprocessing step identifies quantifiers that are essentially defining macros and expands them. However, the expansion only happens at preprocessing time, not during parsing (i.e., formula construction time). To enable the macro finder, you have to use MACRO_FINDER=true.
The other option is to use MBQI (model-based quantifier instantiation). This module can also handle this kind of quantifier. However, the quantifiers will be expanded on demand.
Of course, the solving time may heavily depend on which approach you use. For example, if your formula is unsatisfiable independently of the “meaning” of mydiv, then approach 2 is probably better.

Related

How to know whether a racket variable is defined or not

How can you have different behaviour depending on whether a variable is defined or not in the Racket language?
There are several ways to do this. But I suspect that none of these is what you want, so I'll only provide pointers to the functions (and explain the problems with each one):
namespace-variable-value is a function that retrieves the value of a toplevel variable from some namespace. This is useful only with REPL interaction and REPL code though, since code that is defined in a module is not going to use these things anyway. In other words, you can use this function (and the corresponding namespace-set-variable-value!) to get values (if any) and set them, but the only use of these values is in code that is not itself in a module. To put this differently, using this facility is as good as keeping a hash table that maps symbols to values, only it's slightly more convenient at the REPL since you just type names...
More likely, these kinds of things are done in macros. The first way to do this is to use the special #%top macro. This macro gets inserted automatically for all names in a module that are not known to be bound. The usual thing that this macro does is throw an error, but you can redefine it in your code (or make up your own language that redefines it) to do something else with these unknown names.
A slightly more sophisticated way to do this is to use the identifier-binding function -- again, in a macro, not at runtime -- and use it to get information about some name that is given to the macro and decide what to expand to based on that name.
The last two options are the more useful ones, but they're not the newbie-level kind of macros, which is why I suspect that you're asking the wrong question. To clarify, you can use them to write a kind of defined? special form that checks whether some name is defined, but that question is one that would be answered by a macro, based on the rest of the code, so it's not really useful to ask it. If you want something like that, to enable the kind of code you see in other dynamic languages where you use such a predicate, then the best way to go about this is to redefine #%top to do some kind of lookup (hash table or global namespace) instead of throwing a compilation error -- but again, the difference between that and using a hash table explicitly is mostly cosmetic (and again, this is not a newbie thing).
First, read Eli's answer. Then, based on Eli's answer, you can implement the defined? macro this way:
#lang racket
; The macro
(define-syntax (defined? stx)
  (syntax-case stx ()
    [(_ id)
     (with-syntax ([v (identifier-binding #'id)])
       #''v)]))
; Tests
(define x 3)
(if (defined? x) 'defined 'not-defined) ; -> defined
(let ([y 4])
  (if (defined? y) 'defined 'not-defined)) ; -> defined
(if (defined? z) 'defined 'not-defined) ; -> not-defined
It works for this basic case, but it has a problem: if z is undefined, the branch of the if that considers that it is defined and uses its value will raise a compile-time error, because the normal if checks its condition value at run-time (dynamically):
; This doesn't work because z in `(list z)' is undefined:
(if (defined? z) (list z) 'not-defined)
So what you probably want is an if-defined macro, which decides at compile time (instead of at run time) which branch of the if to take:
#lang racket
; The macro
(define-syntax (if-defined stx)
  (syntax-case stx ()
    [(_ id iftrue iffalse)
     (let ([where (identifier-binding #'id)])
       (if where #'iftrue #'iffalse))]))
; Tests
(if-defined z (list z) 'not-defined) ; -> not-defined
(if-defined t (void) (define t 5))
t ; -> 5
(define x 3)
(if-defined x (void) (define x 6))
x ; -> 3

Lambda Calculus Expression Test-bed?

I would like to test out the Lambda Calculus interpreter that I've written against a fairly large test set of Lambda Calculus expressions. Does anyone know of a Lambda Calc expression generator I can use (couldn't find anything upon an initial search on Google)? These expressions would obviously have to be properly formed.
Better yet, while I have created various examples myself and worked out the solutions so I could check the results, does anyone know of a good (and large) set of worked out Lambda Calculus reduction problems with solutions? I can type in the expressions myself so it's more important to just have a good variety of simpler (and larger) lambda calculus expressions upon which I can test my interpreter (which at the moment models Normal Order and Call by Name evaluation strategies).
Any help or guidance would be greatly appreciated.
Asperti and Guerrini (1998, The Optimal Implementation of Functional Programming Languages, Cambridge University Press; see especially chapters 5 and 6) describe some of the more painful lambda terms that arise from Jean-Jacques Lévy's theory of families of redexes and labelled reduction: these give measures of the complexity of interactions between colliding beta reductions, where reducing either redex creates work for the other.
A relatively simple example of colliding reductions is:
let D = λx.(x x); F = λf.(f (f y)); and I = λx.x in
(D (F I))
which has two beta-redexes and reduces to (y y): reduce either one of them by regular substitution and you will create two new redexes, each of which is related to a piece of structure in the original term.
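Concretely, the two orders play out roughly like this (a hand-worked trace):
reduce the outer D-redex first: the argument (F I) is duplicated, giving two new copies of that redex
(D (F I))  →  ((F I) (F I))  →*  (y y)
reduce the inner F-redex first: this creates the two new redexes (I (I y)) and (I y)
(D (F I))  →  (D (I (I y)))  →  (D (I y))  →  (D y)  →  (y y)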
Iterating Church numerals is good in the same way:
let T = λfx. f (f x) in
λfx.(T (T (T (T T))) f x)
(which reduces to the Church numeral for 65,536, since each application squares the previous numeral: 2, 4, 16, 256, 65,536), and this generates a lot of colliding redexes.
Generally, applying higher-order terms to each other, regardless of whether they are "well-typed" or make obvious sense, is a good source of hard work that generates complex intermediate structure.

Transition from infix to prefix notation

I started learning Clojure recently. Generally it looks interesting, but I can't get used to some syntactic inconveniences (compared to my previous Ruby/C# experience).
Prefix notation for nested expressions. In Ruby I got used to writing complex expressions by chaining/piping them left-to-right: some_object.map { some_expression }.select { another_expression }. It's really convenient: you move from the input value to the result step by step, you can focus on a single transformation, and you don't need to move the cursor as you type. In contrast, when I write nested expressions in Clojure, I write the code from the inner expression to the outer one and I have to move the cursor constantly. It slows me down and distracts me. I know about the -> and ->> macros, but I've noticed that they're not idiomatic. Did you have the same problem when you started coding in Clojure/Haskell etc.? How did you solve it?
I felt the same about Lisps initially so I feel your pain :-)
However the good news is that you'll find that with a bit of time and regular usage you will probably start to like prefix notation. In fact with the exception of mathematical expressions I now prefer it to infix style.
Reasons to like prefix notation:
Consistency with functions - most languages use a mix of infix (mathematical operators) and prefix (function call) notation. In Lisps it is all consistent, which has a certain elegance if you consider mathematical operators to be functions
Macros - become much more sane if the function call is always in the first position.
Varargs - it's nice to be able to have a variable number of parameters for pretty much all of your operators. (+ 1 2 3 4 5) is nicer IMHO than 1 + 2 + 3 + 4 + 5
A trick then is to use -> and ->> liberally when it makes logical sense to structure your code this way. This is typically useful when dealing with subsequent operations on objects or collections, e.g.
(->> "Hello World"
     distinct
     sort
     (take 3))
==> (\space \H \W)
The final trick I found very useful when working in prefix style is to make good use of indentation when building more complex expressions. If you indent properly, then you'll find that prefix notation is actually quite clear to read:
(defn add-foobars [x y]
  (+ (bar x y)
     (foo y)
     (foo x)))
To my knowledge -> and ->> are idiomatic in Clojure. I use them all the time, and in my opinion they usually lead to much more readable code.
Here are some examples of these macros being used in popular projects from around the Clojure "ecosystem":
Ring cookie parsing
Leiningen internals
ClojureScript compiler
Proof by example :)
If you have a long expression chain, use let. Long runaway expressions or deeply nested expressions are not especially readable in any language. This is bad:
(do-something (map :id (filter #(> (:age %) 19) (fetch-data :people))))
This is marginally better:
(do-something (map :id
                   (filter #(> (:age %) 19)
                           (fetch-data :people))))
But this is also bad:
fetch_data(:people).select{|x| x.age > 19}.map{|x| x.id}.do_something
If we're reading this, what do we need to know? We're calling do_something on some attributes of some subset of people. This code is hard to read because there's so much distance between the first part and the last that we forget what we're looking at by the time we travel between them.
In the case of Ruby, do_something (or whatever is producing our final result) is lost way at the end of the line, so it's hard to tell what we're doing to our people. In the case of Clojure, it's immediately obvious that do-something is what we're doing, but it's hard to tell what we're doing it to without reading through the whole thing to the inside.
Any code more complex than this simple example is going to become pretty painful. If all of your code looks like this, your neck is going to get tired scanning back and forth across all of these lines of spaghetti.
I'd prefer something like this:
(let [people (fetch-data :people)
      adults (filter #(> (:age %) 19) people)
      ids    (map :id adults)]
  (do-something ids))
Now it's obvious: I start with people, I goof around, and then I do-something to them.
And you might get away with this:
fetch_data(:people).select{|x|
  x.age > 19
}.map{|x|
  x.id
}.do_something
But I'd probably rather do this, at the very least:
adults = fetch_data(:people).select{|x| x.age > 19}
do_something( adults.map{|x| x.id} )
It's also not unheard of to use let even when your intermediary expressions don't have good names. (This style is occasionally used in Clojure's own source code, e.g. the source code for defmacro)
(let [x (complex-expr-1 x)
      x (complex-expr-2 x)
      x (complex-expr-3 x)
      ...
      x (complex-expr-n x)]
  (do-something x))
This can be a big help in debugging, because you can inspect things at any point by doing:
(let [x (complex-expr-1 x)
      x (complex-expr-2 x)
      _ (prn x)
      x (complex-expr-3 x)
      ...
      x (complex-expr-n x)]
  (do-something x))
I did indeed face the same hurdle when I first started with a Lisp, and it was really annoying until I saw the ways it makes code simpler and clearer; once I understood the upside, the annoyance faded.
initial + scale + offset
became
(+ initial scale offset)
And then try (+): prefix notation allows functions to specify their own identity values.
user> (*)
1
user> (+)
0
There are lots more examples and my point is NOT to defend prefix notation. I just hope to convey that the learning curve flattens (emotionally) as the positive sides become apparent.
Of course, once you start writing macros, prefix notation becomes a must-have instead of a convenience.
To address the second part of your question: the thread-first and thread-last macros are idiomatic any time they make the code more clear :) They are more often used in function calls than in pure arithmetic, though nobody will fault you for using them when they make the equation more palatable.
ps: (.. object object2 object3) -> object().object2().object3();
(doto my-object
  (.setX 4)
  (.setY 5))

Is there any way to simplify the path conditions

For example, in the code below the path condition will be x>0 && x+1>0. But since x>0 implies x+1>0, is there any way in the Z3 or Pex API to get only x>0 and not both?
if (x > 0)
    if (x + 1 > 0)
        // get path condition
Thanks
With the Z3 API, you can check whether A implies B by asserting A and (not B) (Z3_assert_cnstr function), and then checking whether the result is unsatisfiable (Z3_check function). One simple idea is to keep asserting the path conditions in a Z3 context. Before asserting A, you check whether it is already implied by the context. You can accomplish that using the following piece of C code.
Z3_push(ctx); // create a backtracking point
Z3_assert_cnstr(ctx, Z3_mk_not(ctx, A));
Z3_lbool r = Z3_check(ctx);
Z3_pop(ctx, 1); // remove the backtracking point and 'not A' from the context
if (r != Z3_L_FALSE)
    Z3_assert_cnstr(ctx, A); // assert A only if it is not implied
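If you prefer working with the newer Python bindings (z3py) instead of the C API, the same idea looks roughly like this (the helper name assert_if_new is just for illustration):
from z3 import *

def assert_if_new(s, cond):
    # Assert cond only if it is not already implied by the assertions in s.
    s.push()                       # create a backtracking point
    s.add(Not(cond))
    implied = (s.check() == unsat)
    s.pop()                        # remove the backtracking point and 'not cond'
    if not implied:
        s.add(cond)

x = Int('x')
s = Solver()
assert_if_new(s, x > 0)
assert_if_new(s, x + 1 > 0)        # skipped: already implied by x > 0
print(s.assertions())              # [x > 0]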
Z3 3.2 has a little language for specifying strategies for solving and simplifying expressions.
In this language, you can write:
(declare-const x Int)
(assert (> x 0))
(assert (> (+ x 1) 0))
(apply (and-then simplify propagate-bounds))
This simple strategy will produce (>= x 1) as expected. It is based on much cheaper (but incomplete) methods.
Another problem is that this functionality is only available in the interactive shell.
The plan is to have these capabilities available in the programmatic API in the next release.

(x86) Assembler Optimization

I'm building a compiler/assembler/linker in Java for the x86-32 (IA32) processor targeting Windows.
High-level concepts (I do not have any "source code": there is no syntax nor lexical translation, and all languages are regular) are translated into opcodes, which are then wrapped and output to a file. The translation process has several phases; one is the translation between regular languages: the highest-level code is translated into medium-level code, which is then translated into the lowest-level code (probably more than 3 levels).
My problem is the following: if I have higher-level code (X and Y) translated to lower-level code (x, y, U and V), then an example of such a translation is, in pseudo-code:
x + U(f) // generated by X
+
V(f) + y // generated by Y
(An easy example) where V is the opposite of U (compare with a stack push as U and a pop as V). This needs to be 'optimized' into:
x + y
(essentially removing the "useless" code)
My idea was to use regular expressions. For the above case, it'll be a regular expression looking like this: x:(U(x)+V(x)):null, meaning for all x find U(x) followed by V(x) and replace by null. Imagine more complex regular expressions, for more complex optimizations. This should work on all levels.
What do you suggest? What would be a good approach to optimize and produce fast x86 assembly?
What you should actually do is build an Abstract Syntax Tree (AST).
It is a representation of the source code in the form of a tree, that is much easier to work with, especially to make transformations and optimizations.
That code, represented as a tree, would be something like:
(+
  (+
    x
    (U f))
  (+
    (V f)
    y))
You could then try to make some transformations: a sum of sums is a sum of all the terms:
(+
  x
  (U f)
  (V f)
  y)
Then you could scan the tree and you could have the following rules:
(+ (U x) (V x)) = 0, for all x
(+ 0 x1 x2 ...) = (+ x1 x2 ...), for all x1, x2, ...
Then you would obtain what you are looking for:
(+ x y)
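To make this concrete, here is a tiny sketch of such a tree-rewriting pass in Python (just to illustrate the idea; in your Java compiler the nodes would be objects rather than tuples, and 'U'/'V' stand for the hypothetical push/pop-like opcodes from the question):
# Sums are nested tuples, e.g. ('+', ('+', 'x', ('U', 'f')), ('+', ('V', 'f'), 'y')).

def flatten(node):
    # A sum of sums becomes a single sum of all the terms.
    if isinstance(node, tuple) and node[0] == '+':
        terms = []
        for t in node[1:]:
            ft = flatten(t)
            if isinstance(ft, tuple) and ft[0] == '+':
                terms.extend(ft[1:])
            else:
                terms.append(ft)
        return ('+',) + tuple(terms)
    return node

def cancel(node):
    # Drop matching (U x) / (V x) pairs from a flattened sum.
    if not (isinstance(node, tuple) and node[0] == '+'):
        return node
    terms = list(node[1:])
    for t in list(terms):
        if isinstance(t, tuple) and t[0] == 'U':
            partner = ('V', t[1])
            if t in terms and partner in terms:
                terms.remove(t)
                terms.remove(partner)
    return ('+',) + tuple(terms)

tree = ('+', ('+', 'x', ('U', 'f')), ('+', ('V', 'f'), 'y'))
print(cancel(flatten(tree)))   # ('+', 'x', 'y')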
Any good book on compiler-writing will discuss a lot on ASTs. Functional programming languages are especially suited for this task, since in general it is easy to represent trees and to do pattern matching to parse and transform the tree.
Usually, for this task, you should avoid using regular expressions. Regular expressions define what mathematicians call regular languages. Any regular language can be parsed by a set of regular expressions. However, I think your language is not regular, so it cannot be properly parsed by regexps.
People try, and try, and try to parse languages such as HTML using regular expressions. This has been extensively discussed here in SO, and you cannot parse HTML with regular expressions. There will always be an exceptional case in which your regular expressions would fail, and you would have to adapt it.
It might be the same with your language: if it is not regular, you will save yourself lots of headaches by not trying to parse it (and especially "transform" it) using regular expressions.
I'm having a lot of trouble understanding this question, but I think you will find it useful to learn something about term-rewriting systems, which seems to be what you are proposing. Whether the mechanism is tree rewriting (always works) or regular expressions (will work for some languages some of the time and other languages all of the time) is of secondary importance.
It is definitely possible to optimize object code by term rewriting. You probably also will benefit from learning something about peephole optimization; a good place to start, because it is very strong on the fundamentals, is a paper by Davidson and Fraser on a retargetable peephole optimizer. There's also excellent later work by Benitez and Davidson.