Why am I getting these errors, and how can I solve them? - interpreter

I'm writing an interpreter in Standard ML, but I'm having trouble with the syntax in this function, and I cannot figure out what's wrong.
Here's the relevant code:
| eval (rho, SetExp (name, value)) =
    (case rhoContains rho name of
       true => rhoSet rho name value (rho, value)
     | false => globalSet (name, value))

fun rhoSet [] key value = [(key, value)]
  | rhoSet ((elt as (k, v)) :: tail) key value =
      if key = k then (key, value) :: tail else elt :: rhoSet tail key value

fun rhoContains rho name =
  case rhoGet rho name of SOME _ => true | NONE => false
This is where SetExp comes from:
datatype expression =
  SetExp of (string * expression)
Running this gives me a long list of errors, but I think this is the relevant section. Line 62 is the line that starts with true in eval:
eval.sml:62: error: Type error in function application.
Function: rhoSet rho name value : (string * expression) list
Argument: (rho, value) : (string * expression) list * expression
Reason: Value being applied does not have a function type

You're passing too many arguments to rhoSet: remove the trailing (rho, value) pair.
| eval (rho, SetExp (name, value)) =
    (case rhoContains rho name of
       true => rhoSet rho name value
     | false => globalSet (name, value))
You can also make this more readable with a conditional:
| eval (rho, SetExp (name, value)) =
    if rhoContains rho name
    then rhoSet rho name value
    else globalSet (name, value)

In SML, function application is left-associative, so in the expression rhoSet rho name value (rho, value), the subexpression rhoSet rho name value is applied as a function to the argument (rho, value). However, by the definition of rhoSet, rhoSet rho name value is necessarily a list and not a function. That is exactly what the error message is saying.
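To see the left-associativity concretely, here is a minimal sketch that runs on its own (with int values standing in for expression):

fun rhoSet [] key value = [(key, value)]
  | rhoSet ((elt as (k, _)) :: tail) key value =
      if key = k then (key, value) :: tail else elt :: rhoSet tail key value

val rho = [("x", 1)]
val updated = rhoSet rho "x" 2   (* parses as ((rhoSet rho) "x") 2 : (string * int) list *)
(* val bad = rhoSet rho "x" 2 (rho, 2)
   -- rejected: the list that rhoSet rho "x" 2 produces is not a function,
      which is precisely the reported type error *)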

Related

Why can't I specify the type of the names, in a named tuple, as a tuple of symbols in Julia?

There's this function:
function Γ(state::Dict{Symbol, <:Tuple{Vararg{Symbol}}})::Vector{NamedTuple{<:Tuple{Vararg{Symbol}}, <:Tuple{Vararg{Symbol}}}}
    kwargs = Dict(kwargs)
    keys = [key for (key, value) ∈ state]
    instances_vec = [value for (key, value) ∈ state]
    data = Vector{Vector{Symbol}}()
    function recursive(data, current, depth = 1)
        if depth <= length(instances_vec)
            for instance ∈ instances_vec[depth]
                recursive(data, push!(copy(current), instance), depth + 1)
            end
        else
            push!(data, current)
        end
    end
    recursive(data, [])
    return [(;zip(keys,configuration)...) for configuration ∈ data]
end
which when run like this:
identifiers = Dict(:Id=>(:One, :Two, :Three, :Four, :Five), :Nationality=>(:Brit, :German), :Color=>(:Red, :Blue, :Green))
Γ(identifiers)
gives the following error:
ERROR: MethodError: Cannot `convert` an object of type
NamedTuple{(:Id, :Nationality, :Color),Tuple{Symbol,Symbol,Symbol}} to an object of type
NamedTuple{#s589,#s588} where #s588<:Tuple{Vararg{Symbol,N} where N} where #s589<:Tuple{Vararg{Symbol,N} where N}
The type of the returned is: Array{NamedTuple{(:Id, :Nationality, :Color),Tuple{Symbol,Symbol,Symbol}},1}
and the type of (:Id, :Nationality, :Color) is Tuple{Symbol,Symbol,Symbol}
so I don't see why this gives a conversion error when there is no need for conversion especially since:
typeof((:Id, :Nationality, :Color)) <: Tuple{Vararg{Symbol}} && Tuple{Symbol,Symbol,Symbol} <: Tuple{Vararg{Symbol}} = true
First, the conversion comes from the fact that your function declares a return type. If you remove (or comment out) this type annotation
function Γ(state::Dict{Symbol, <:Tuple{Vararg{Symbol}}}) #::Vector{NamedTuple{<:Tuple{Vararg{Symbol}}, <:Tuple{Vararg{Symbol}}}}
    kwargs = Dict() # <- there was a typo here in your minimal example
    keys = [key for (key, value) ∈ state]
    instances_vec = [value for (key, value) ∈ state]
    data = Vector{Vector{Symbol}}()
    function recursive(data, current, depth = 1)
        if depth <= length(instances_vec)
            for instance ∈ instances_vec[depth]
                recursive(data, push!(copy(current), instance), depth + 1)
            end
        else
            push!(data, current)
        end
    end
    recursive(data, [])
    return [(;zip(keys,configuration)...) for configuration ∈ data]
end
then everything is fine:
julia> identifiers = Dict(:Id=>(:One, :Two, :Three, :Four, :Five), :Nationality=>(:Brit, :German), :Color=>(:Red, :Blue, :Green))
Dict{Symbol,Tuple{Symbol,Symbol,Vararg{Symbol,N} where N}} with 3 entries:
:Id => (:One, :Two, :Three, :Four, :Five)
:Nationality => (:Brit, :German)
:Color => (:Red, :Blue, :Green)
julia> Γ(identifiers)
30-element Array{NamedTuple{(:Id, :Nationality, :Color),Tuple{Symbol,Symbol,Symbol}},1}:
(Id = :One, Nationality = :Brit, Color = :Red)
(Id = :One, Nationality = :Brit, Color = :Blue)
(Id = :One, Nationality = :Brit, Color = :Green)
[...]
Note that typing the arguments of a function/method where necessary is considered good practice. However, code that provides type assertions for return values is often less idiomatic in Julia: the compiler is really good at figuring out what the return type is!
Now the reason why this conversion fails is because the element type provided in
the signature is not consistent with the actual element type of the returned
vector:
julia> elem = (Id = :One, Nationality = :Brit, Color = :Red)
(Id = :One, Nationality = :Brit, Color = :Red)
julia> typeof(elem)
NamedTuple{(:Id, :Nationality, :Color),Tuple{Symbol,Symbol,Symbol}}
julia> elem isa NamedTuple{<:Tuple{Vararg{Symbol}}, <:Tuple{Vararg{Symbol}}}
false
For NamedTuple{A,B} to match NamedTuple{<:Tuple{Vararg{Symbol}},
<:Tuple{Vararg{Symbol}}}, both conditions below have to hold:
A <: Tuple{Vararg{Symbol}}
B <: Tuple{Vararg{Symbol}}
Here, we have:
A = (:Id, :Nationality, :Color)
B = Tuple{Symbol,Symbol,Symbol}
so that, like you said, the condition for B holds:
julia> B <: Tuple{Vararg{Symbol}}
true
but not the condition for A:
julia> A <: Tuple{Vararg{Symbol}}
ERROR: TypeError: in <:, expected Type, got Tuple{Symbol,Symbol,Symbol}
Stacktrace:
[1] top-level scope at REPL[10]:1
# A is a value of type Tuple{...}, not the type itself
julia> typeof(A) <: Tuple{Vararg{Symbol}}
true
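If you do want an annotation, the names parameter has to be bound as a value by a where clause rather than constrained with <: (a sketch of the idea, not from the original answer):

julia> elem isa NamedTuple{names, <:Tuple{Vararg{Symbol}}} where names
true

# so a return annotation along these lines would match:
# ::Vector{<:NamedTuple{names, <:Tuple{Vararg{Symbol}}} where names}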

Polynomial evaluation in Isabelle

In the book Concrete Semantics, exercise 2.11 writes:
Define arithmetic expressions in one variable over integers
(type int) as a data type:
datatype exp = Var | Const int | Add exp exp | Mult exp exp
Define a function eval :: exp ⇒ int ⇒ int such that eval e x evaluates e at the value x. A polynomial can be represented as a list of coefficients, starting with the constant. For example, [4, 2, −1, 3] represents the polynomial 4 + 2x − x² + 3x³. Define a function evalp :: int list ⇒ int ⇒ int that evaluates a polynomial at the given value. Define a function coeffs :: exp ⇒ int list that transforms an expression into a polynomial. This may require auxiliary functions. Prove that coeffs preserves the value of the expression: evalp (coeffs e) x = eval e x. Hint: consider the hint in Exercise 2.10.
As a first try, and because earlier exercises encouraged writing iterative versions of functions, I wrote the following:
datatype exp = Var | Const int | Add exp exp | Mult exp exp
fun eval :: "exp ⇒ int ⇒ int" where
"eval Var x = x"
| "eval (Const n) _ = n"
| "eval (Add e1 e2) x = (eval e1 x) + (eval e2 x)"
| "eval (Mult e1 e2) x = (eval e1 x) * (eval e2 x)"
fun evalp_it :: "int list ⇒ int ⇒ int ⇒ int ⇒ int" where
"evalp_it [] x xpwr acc = acc"
| "evalp_it (c # cs) x xpwr acc = evalp_it cs x (xpwr*x) (acc + c*xpwr)"
fun evalp :: "int list ⇒ int ⇒ int" where
"evalp coeffs x = evalp_it coeffs x 1 0"
fun add_coeffs :: "int list ⇒ int list ⇒ int list" where
"add_coeffs [] [] = []"
| "add_coeffs (a # as) (b# bs) = (a+b) # (add_coeffs as bs)"
| "add_coeffs as [] = as"
| "add_coeffs [] bs = bs"
(there might be some zip function to do this)
fun mult_coeffs_it :: "int list ⇒ int list ⇒ int list ⇒ int list ⇒ int list" where
"mult_coeffs_it [] bs accs zeros = accs"
| "mult_coeffs_it (a#as) bs accs zeros =
mult_coeffs_it as bs (add_coeffs accs zeros#bs) (0#zeros)"
fun mult_coeffs :: "int list ⇒ int list ⇒ int list" where
"mult_coeffs as bs = mult_coeffs_it as bs [] []"
fun coeffs :: "exp ⇒ int list" where
"coeffs (Var) = [0,1]"
| "coeffs (Const n) = [n]"
| "coeffs (Add e1 e2) = add_coeffs (coeffs e1) (coeffs e2)"
| "coeffs (Mult e1 e2) = mult_coeffs (coeffs e1) (coeffs e2)"
I tried to verify the sought theorem
lemma evalp_coeffs_eval: "evalp (coeffs e) x = eval e x"
but could not. I was once advised that writing good definitions is very important in theorem proving, though the adviser did not give details.
So, what is the problem with my definitions, conceptually? Please do not write the good definitions, but point out the conceptual problems with my definitions.
UPDATE: upon advice I started to use
fun evalp2 :: "int list ⇒ int ⇒ int" where
"evalp2 [] v = 0"|
"evalp2 (p#ps) v = p + v * (evalp2 ps v) "
and looking into src/HOL/Algebra/Polynomials.thy I formulated
fun add_cffs :: "int list ⇒ int list ⇒ int list" where
"add_cffs as bs =
( if length as ≥ length bs
then map2 (+) as (replicate (length as - length bs) 0) # bs
else add_cffs bs as)"
but that did not help much: simp add: algebra_simps or arith did not solve the corresponding subgoal.
Some hints:
Try running quickcheck and nitpick on the lemma until no more counterexamples are found. For the lemma in your question I get the following counterexample:
Quickcheck found a counterexample:
e = Mult Var Var
x = - 2
Evaluated terms:
evalp (coeffs e) x = - 10
eval e x = 4
Try to prove some useful lemmas about your auxiliary functions first. For example:
lemma "evalp (mult_coeffs A B) x = evalp A x * evalp B x"
Read about calculating with polynomials (e.g. https://en.wikipedia.org/wiki/Polynomial_ring) and choose definitions close to what mathematicians do, although that might spoil the fun of coming up with a definition on your own.
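In the same spirit, and deliberately stopping short of full definitions, a companion lemma for addition (my suggestion; it is what the Add case of the main induction will need) would be:

lemma "evalp (add_coeffs A B) x = evalp A x + evalp B x"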

Hofstadter Female and Male sequences in SML

This is my first SML program. I am trying to write a function that returns the first number to the nth number of Hofstadter's Female or Male sequence in list form. What I have so far is:
val m = fn (n) => if n = 0 then 1 :: [] else m f (n - 1);
val f = fn (n) => if n = 0 then 0 :: [] else f m (n - 1);
You can learn about the sequence here:
https://en.wikipedia.org/wiki/Hofstadter_sequence#Hofstadter_Female_and_Male_sequences
The error that I am getting is:
[opening sequence.sml]
sequence.sml:1.49 Error: unbound variable or constructor: f
sequence.sml:1.47-1.58 Error: operator is not a function [tycon mismatch]
operator: int list
in expression:
(m <errorvar>) (n - 1)
val it = () : unit
How can I correct this?
I ended up taking this approach:
fun m n = if n = 0 then 0 else n - (f (m (n - 1)))
and f n = if n = 0 then 1 else n - (m (f (n - 1)));

val seq = fn n => List.tabulate (n, f);
It is quite slow. If anybody has a faster version, then I'd love to see it.
Although you have already fixed them, there were two problems with your original approach:
Function application is left-associative in SML so m f (n - 1) was being interpreted as (m f) (n - 1), not the desired m (f (n - 1)). You can fix this by explicitly specifying the bracketing m (f (n - 1)).
To be able to call f from m and m from f, you need to use the keyword fun instead of val on the first declaration (to make the function recursive), and the keyword and instead of fun or val on the second declaration (to make the function mutually recursive with the first function). This would look like
fun f n = ... (* I can call f or m from here! *)
and m n = ... (* I can call f or m from here! *)
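Filled in with the actual definitions, that is essentially the version you arrived at:

fun f n = if n = 0 then 1 else n - m (f (n - 1))
and m n = if n = 0 then 0 else n - f (m (n - 1))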
To make it faster, you can memoize! The trick is to make f and m take as arguments memoized versions of themselves.
(* Convenience function: Update arr[i] to x, and return x. *)
fun updateAndReturn arr i x = (Array.update (arr, i, SOME x); x)
(*
* Look up result of f i in table; if it's not found, calculate f i and
* store in the table. The token is used so that deeper recursive calls
* to f can also try to store in the table.
*)
fun memo table f token i =
  case Array.sub (table, i) of
    NONE => updateAndReturn table i (f token i)
  | SOME x => x
(*
* Given f, g, and n : int, returns a tuple (f', g') where f' and g' are memoized
* versions of f and g, respectively. f' and g' are defined only on the domain
* [0, n).
*)
fun memoizeMutual (f, g) n =
  let
    val fTable = Array.array (n, NONE)
    val gTable = Array.array (n, NONE)
    fun fMemo i = memo fTable f (fMemo, gMemo) i
    and gMemo i = memo gTable g (gMemo, fMemo) i
  in
    (fMemo, gMemo)
  end
fun female _ 0 = 1
  | female (f, m) n = n - m (f (n - 1))

fun male _ 0 = 0
  | male (m, f) n = n - f (m (n - 1))

fun hofstadter upTo =
  let
    val (male', female') = memoizeMutual (male, female) upTo
  in
    (List.tabulate (upTo, male'), List.tabulate (upTo, female'))
  end
I renamed f and m to female and male. The memoized fMemo and gMemo are threaded through female and male by memoizeMutual. Interestingly, if we call male', then results for both male' and female' are memoized.
To confirm it's indeed faster, try evaluating hofstadter 10000. It's much faster than the forever that your version would take.
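As a quick sanity check, the first ten values agree with the sequences on the Wikipedia page linked above:

- hofstadter 10;
val it = ([0,0,1,2,2,3,4,4,5,6],[1,1,2,2,3,3,4,5,5,6]) : int list * int list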
As a final note, the only recursive functions are fMemo and gMemo. Every other function I wrote could be written as an anonymous function (val memoizeMutual = fn ..., val female = fn ..., etc.), but I chose not to do so because the syntax for writing recursive functions is much more compact in SML.
To generalize this, you could replace the array version of memoizing with something like a hash table. Then we wouldn't have to specify the size of the memoization up front.
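For example, with the HashTable structure from SML/NJ's smlnj-lib (an assumption: it is that library, not the SML Basis, that provides it), the table-handling piece could look like the sketch below; memoizeMutual would then build tables with intTable () instead of Array.array (n, NONE) and drop the size argument.

exception NotFound

(* A growable memo table: no fixed domain size needed up front. *)
fun intTable () : (int, int) HashTable.hash_table =
  HashTable.mkTable (Word.fromInt, op =) (64, NotFound)

fun memo table f token i =
  case HashTable.find table i of
    SOME x => x
  | NONE =>
      let val x = f token i
      in HashTable.insert table (i, x); x end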

F# HashCode to enum conversion

I have an enum of bit-masked error codes with a string representation and a binary int representation:
type ErrorCodes =
    | NoError = 0
    | InvalidInputError = 1
    | AuthenticationFailedError = 2
    | InvalidArgumentError = 4
    | ItemNotFoundError = 8
    | UnknownError = 16
As I run through the program, I collect all the errors using the bitwise OR operator (|||). So now I have something that looks like 01100. How can I print "InvalidArgumentError" and "ItemNotFoundError" to the console?
I had an idea of just using:
for i = 0 to 32 do
    if ((err.GetHashCode() % 2) = 1) then
        Console.WriteLine("ErrorCode: {0}", err.GetHashCode())
But now I'm stuck on how to print the actual string.
If you decorate your ErrorCodes type with the System.Flags attribute, then .ToString will format the value as a list of the flag names that are set.
[<System.Flags>]
type ErrorCodes = ...
let errors = ErrorCodes.InvalidInputError ||| ErrorCodes.UnknownError
printfn "%O" errors
If, for whatever reason, you don't want the default flags ToString implementation, you could do something like this:
let inline printFlags (flags: 'e) =
    let ty = typeof<'e>
    (Enum.GetValues ty :?> 'e[], Enum.GetNames ty)
    ||> Array.zip
    |> Seq.filter (fun (v, _) -> v <> enum 0 && flags &&& v = v)
    |> Seq.iter (snd >> printfn "%s")
printFlags (ErrorCodes.InvalidInputError ||| ErrorCodes.UnknownError)
Output:
InvalidInputError
UnknownError
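As a usage note (my addition; this is the standard .NET enum API, not something from the snippet above): to test for a single flag rather than print them all, Enum.HasFlag or the explicit bitwise test from the question both work:

let err = ErrorCodes.InvalidArgumentError ||| ErrorCodes.ItemNotFoundError

// both print true: the ItemNotFoundError bit is set
printfn "%b" (err.HasFlag(ErrorCodes.ItemNotFoundError))
printfn "%b" (err &&& ErrorCodes.ItemNotFoundError = ErrorCodes.ItemNotFoundError)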

F# Code Optimization for Left Leaning Red Black Tree

I've been working on porting a C# implementation of an LLRBT to F#, and I now have it running correctly. My question is: how would I go about optimizing this?
Some ideas I have:
- Using a Discriminated Union for Node to remove the use of null
- Remove getters and setters
- You can't have a null attribute and a struct at the same time
Full source can be found here. C# code taken from Delay's Blog.
Current performance
F# Elapsed = 00:00:01.1379927 Height: 26, Count: 487837
C# Elapsed = 00:00:00.7975849 Height: 26, Count: 487837
module Erik

let Black = true
let Red = false

[<AllowNullLiteralAttribute>]
type Node(_key, _value, _left : Node, _right : Node, _color : bool) =
    let mutable key = _key
    let mutable value = _value
    let mutable left = _left
    let mutable right = _right
    let mutable color = _color
    let mutable siblings = 0

    member this.Key with get() = key and set(x) = key <- x
    member this.Value with get() = value and set(x) = value <- x
    member this.Left with get() = left and set(x) = left <- x
    member this.Right with get() = right and set(x) = right <- x
    member this.Color with get() = color and set(x) = color <- x
    member this.Siblings with get() = siblings and set(x) = siblings <- x
    static member inline IsRed(node : Node) =
        if node = null then
            // "Virtual" leaf nodes are always black
            false
        else
            node.Color = Red

    static member inline Flip(node : Node) =
        node.Color <- not node.Color
        node.Right.Color <- not node.Right.Color
        node.Left.Color <- not node.Left.Color

    static member inline RotateLeft(node : Node) =
        let x = node.Right
        node.Right <- x.Left
        x.Left <- node
        x.Color <- node.Color
        node.Color <- Red
        x

    static member inline RotateRight(node : Node) =
        let x = node.Left
        node.Left <- x.Right
        x.Right <- node
        x.Color <- node.Color
        node.Color <- Red
        x
    static member inline MoveRedLeft(_node : Node) =
        let mutable node = _node
        Node.Flip(node)
        if Node.IsRed(node.Right.Left) then
            node.Right <- Node.RotateRight(node.Right)
            node <- Node.RotateLeft(node)
            Node.Flip(node)
            if Node.IsRed(node.Right.Right) then
                node.Right <- Node.RotateLeft(node.Right)
        node

    static member inline MoveRedRight(_node : Node) =
        let mutable node = _node
        Node.Flip(node)
        if Node.IsRed(node.Left.Left) then
            node <- Node.RotateRight(node)
            Node.Flip(node)
        node
    static member DeleteMinimum(_node : Node) =
        let mutable node = _node
        if node.Left = null then
            null
        else
            if not (Node.IsRed(node.Left)) && not (Node.IsRed(node.Left.Left)) then
                node <- Node.MoveRedLeft(node)
            node.Left <- Node.DeleteMinimum(node.Left)
            Node.FixUp(node)
    static member FixUp(_node : Node) =
        let mutable node = _node
        if Node.IsRed(node.Right) then
            node <- Node.RotateLeft(node)
        if Node.IsRed(node.Left) && Node.IsRed(node.Left.Left) then
            node <- Node.RotateRight(node)
        if Node.IsRed(node.Left) && Node.IsRed(node.Right) then
            Node.Flip(node)
        if node.Left <> null && Node.IsRed(node.Left.Right) && not (Node.IsRed(node.Left.Left)) then
            node.Left <- Node.RotateLeft(node.Left)
            if Node.IsRed(node.Left) then
                node <- Node.RotateRight(node)
        node
type LeftLeaningRedBlackTree(?isMultiDictionary) =
    let mutable root = null
    let mutable count = 0

    member this.IsMultiDictionary =
        Option.isSome isMultiDictionary

    member this.KeyAndValueComparison(leftKey, leftValue, rightKey, rightValue) =
        let comparison = leftKey - rightKey
        if comparison = 0 && this.IsMultiDictionary then
            leftValue - rightValue
        else
            comparison

    member this.Add(key, value) =
        root <- this.add(root, key, value)

    member private this.add(_node : Node, key, value) =
        let mutable node = _node
        if node = null then
            count <- count + 1
            new Node(key, value, null, null, Red)
        else
            if Node.IsRed(node.Left) && Node.IsRed(node.Right) then
                Node.Flip(node)
            let comparison = this.KeyAndValueComparison(key, value, node.Key, node.Value)
            if comparison < 0 then
                node.Left <- this.add(node.Left, key, value)
            elif comparison > 0 then
                node.Right <- this.add(node.Right, key, value)
            else
                if this.IsMultiDictionary then
                    node.Siblings <- node.Siblings + 1
                    count <- count + 1
                else
                    node.Value <- value
            if Node.IsRed(node.Right) then
                node <- Node.RotateLeft(node)
            if Node.IsRed(node.Left) && Node.IsRed(node.Left.Left) then
                node <- Node.RotateRight(node)
            node
I'm surprised there's such a perf difference, since this looks like a straightforward transliteration. I presume both are compiled in Release mode? Did you run the two separately (cold start), or, if both versions ran in the same program, did you reverse their order (e.g. warm cache)? Have you done any profiling (do you have a good profiler)? Have you compared memory consumption (even fsi.exe can help with that)?
(I don't see any obvious improvements to be had for this mutable data structure implementation.)
I wrote an immutable version and it's performing better than the above mutable one. I've only implemented insert so far. I'm still trying to figure out what the performance issues are.
type ILLRBT =
    | Red of ILLRBT * int * ILLRBT
    | Black of ILLRBT * int * ILLRBT
    | Nil

let flip node =
    let inline flip node =
        match node with
        | Red(l, v, r) -> Black(l, v, r)
        | Black(l, v, r) -> Red(l, v, r)
        | Nil -> Nil
    match node with
    | Red(l, v, r) -> Black(flip l, v, flip r)
    | Black(l, v, r) -> Red(flip l, v, flip r)
    | Nil -> Nil
let lRot = function
    | Red(l, v, Red(l', v', r'))
    | Red(l, v, Black(l', v', r')) -> Red(Red(l, v, l'), v', r')
    | Black(l, v, Red(l', v', r'))
    | Black(l, v, Black(l', v', r')) -> Black(Red(l, v, l'), v', r')
    | _ -> Nil // could raise an error here

let rRot = function
    | Red(Red(l', v', r'), v, r)
    | Red(Black(l', v', r'), v, r) -> Red(l', v', Red(r', v, r))
    | Black(Red(l', v', r'), v, r)
    | Black(Black(l', v', r'), v, r) -> Black(l', v', Red(r', v, r))
    | _ -> Nil // could raise an error here
let rec insert node value =
    match node with
    | Nil -> Red(Nil, value, Nil)
    | n ->
        n
        |> function
            | Red(Red(_), v, Red(_))
            | Black(Red(_), v, Red(_)) as node -> flip node
            | x -> x
        |> function
            | Red(l, v, r) when value < v -> Red(insert l value, v, r)
            | Black(l, v, r) when value < v -> Black(insert l value, v, r)
            | Red(l, v, r) when value > v -> Red(l, v, insert r value)
            | Black(l, v, r) when value > v -> Black(l, v, insert r value)
            | x -> x
        |> function
            | Red(l, v, Red(_))
            | Black(l, v, Red(_)) as node -> lRot node
            | x -> x
        |> function
            | Red(Red(Red(_),_,_), v, r)
            | Black(Red(Red(_),_,_), v, r) as node -> rRot node
            | x -> x
let rec iter node =
    seq {
        match node with
        | Red(l, v, r)
        | Black(l, v, r) ->
            yield! iter l
            yield v
            yield! iter r
        | Nil -> ()
    }
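For reference, a quick way to exercise it (my example, not from the original post): fold insert over a list and flatten the tree back into sorted order.

let tree = List.fold insert Nil [5; 1; 9; 3]
let sorted = iter tree |> Seq.toList   // [1; 3; 5; 9]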
If you're willing to consider an immutable implementation, you might want to look at Chris Okasaki's paper on red-black trees in a functional setting here.
My question is how would I go about optimizing this?
In the mutable case you should be able to get substantially better performance by using an array of Node structs rather than heap allocating each individual Node. In the immutable case you might try turning the red nodes into structs.
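To make the array-of-structs suggestion concrete, here is a minimal sketch (my own layout, untested against the linked source): nodes become value types living in a single array and refer to their children by index, with -1 standing in for null.

[<Struct>]
type PackedNode =
    val mutable Key : int
    val mutable Value : int
    val mutable Left : int    // pool index of the left child, -1 = leaf
    val mutable Right : int   // pool index of the right child, -1 = leaf
    val mutable Color : bool

let pool : PackedNode[] = Array.zeroCreate 1000000
let mutable next = 0

// Allocation is a bump of the cursor, so inserts cause no per-node GC work.
let alloc key value color =
    let i = next
    next <- next + 1
    pool.[i].Key <- key
    pool.[i].Value <- value
    pool.[i].Left <- -1
    pool.[i].Right <- -1
    pool.[i].Color <- color
    i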