Is it possible to do pattern binding a la Haskell in Idris? - idris

An example would be:
fib: Stream Integer
fib@(1::tfib) = 1 :: 1 :: [ a+b | (a,b) <- zip fib tfib]
But this generates the error:
50 | fib@(1::tfib) = 1 :: 1 :: [ a+b | (a,b) <- zip fib tfib]
| ^
unexpected "@(1::tfib)"
expecting "<==", "using", "with", ':', argument expression, constraint argument, expression, function right hand side, implementation
block, implicit function argument, or with pattern
This doesn't look promising, given that the parser doesn't recognize @ at the likely position.
Note that the related concept of as-patterns works the same in Haskell and Idris:
growHead : List a -> List a
growHead nnl@(x::_) = x::nnl
growHead ([]) = []
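For reference, the Haskell idiom the question is trying to port is a lazy top-level pattern binding with an as-pattern, which looks roughly like this (a sketch; the name fibs mirrors fib from the question):
fibs@(1:tfibs) = 1 : 1 : [ a + b | (a, b) <- zip fibs tfibs ]
-- Pattern bindings in Haskell are lazy, so fibs and tfibs may be used on the
-- right-hand side; take 6 fibs evaluates to [1,1,2,3,5,8].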

Related

How do you operate on dependent pairs in a proof?

This is a follow-up to this question. Thanks to Kwartz I now have a statement of the proposition that if b divides a, then b divides a * c for any integer c, namely:
alsoDividesMultiples : (a, b, c : Integer) ->
                       DivisibleBy a b ->
                       DivisibleBy (a * c) b
Now, the goal has been to prove that statement. I realized that I do not understand how to operate on dependent pairs. I tried a simpler problem, which was to show that every number is divisible by 1. After a shameful amount of thought on it, I thought I had come up with a solution:
-- All numbers are divisible by 1.
DivisibleBy a 1 = let n = a in
  (n : Integer ** a = 1 * n)
This compiles, but I had doubts it was valid. To verify that I was wrong, I changed it slightly to:
-- All numbers are divisible by 1.
DivisibleBy a 1 = let n = a in
  (n : Integer ** a = 2 * n)
This also compiles, which means my "English" interpretation is certainly incorrect, for I would interpret this as "All numbers are divisible by one since every number is two times another integer". Thus, I am not entirely sure what I am demonstrating with that statement. So, I went back and tried a more conventional way of stating the problem:
oneDividesAll : (a : Integer) ->
                (DivisibleBy a 1)
oneDividesAll a = ?sorry
For the implementation of oneDividesAll I am not really sure how to "inject" the fact that (n = a). For example, I would write (in English) this proof as:
We wish to show that 1 | a. If so, it follows that a = 1 * n for some n. Let n = a, then a = a * 1, which is true by identity.
I am not sure how to really say: "Consider when n = a". From my understanding, the rewrite tactic requires a proof that n = a.
I tried adapting my fallacious proof:
oneDividesAll : (a : Integer) ->
                (DivisibleBy a 1)
oneDividesAll a = let n = a in (n : Integer ** a = 1 * n)
But this gives:
|
12 | oneDividesAll a = let n = a in (n : Integer ** a = b * n)
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When checking right hand side of oneDividesAll with expected type
DivisibleBy a 1
Type mismatch between
Type (Type of DPair a P)
and
(n : Integer ** a = prim__mulBigInt 1 n) (Expected type)
Any help/hints would be appreciated.
First off, if you want to prove properties about numbers, you should use Nat (or other inductive types). Integer is built on primitives, so the type checker cannot reason past prim__mulBigInt : Integer -> Integer -> Integer; all it knows is that you pass in two Integers and get one back. The compiler knows nothing about what the resulting Integer looks like, so it cannot prove anything about it.
So I'll go along with Nat:
DivisibleBy : Nat -> Nat -> Type
DivisibleBy a b = (n : Nat ** a = b * n)
Again, this is a proposition, not a proof. DivisibleBy 6 0 is a valid type, but you won't find a proof : DivisibleBy 6 0. So you were right with
oneDividesAll : (a : Nat) ->
                (DivisibleBy a 1)
oneDividesAll a = ?sorry
With that, you could generate proofs of the form oneDividesAll a : DivisibleBy a 1. So, what goes into the hole ?sorry? :t sorry gives us sorry : (n : Nat ** a = plus n 0) (which is just DivisibleBy a 1 resolved as far as Idris can go). You got confused about the right part of the pair: x = y is a type, but now we need a value – that's what your last, cryptic error message hints at. = has only one constructor, Refl : x = x. So we need to get both sides of the equality to the same value, and the result will look something like (n ** Refl).
As you thought, we need to set n to a:
oneDividesAll a = (a ** ?hole)
For the needed rewrite tactic we check :search plus a 0 = a and see that plusZeroRightNeutral has the right type.
oneDividesAll a = (a ** rewrite plusZeroRightNeutral a in ?hole)
Now :t hole gives us hole : a = a so we can just auto-complete to Refl:
oneDividesAll a = (a ** rewrite plusZeroRightNeutral a in Refl)
A good tutorial on theorem proving (where it is also explained why plus a Z does not reduce) is in the Idris documentation.

OCaml User-Defined Type and Function Return Error

I was writing a function with user-defined types in OCaml when I encountered an error message that I don't understand.
I'm currently using the OCaml interactive toplevel and also Visual Studio Code to write my code. The strange thing is that when I run the code in Visual Studio Code, it compiles fine but encounters the error in the interactive toplevel.
The OCaml code that I am referring to is as follows:
type loc = int;;
type id = string;;
type value =
| Num of int
| Bool of bool
| Unit
| Record of (id -> loc)
;;
type memory = (loc * value) list;;
exception NotInMemory;;
let rec memory_lookup : (memory * loc) -> value
  = fun (mem, l) ->
      match mem with
      | [] -> raise NotInMemory
      | hd :: tl -> (match hd with
                     | (x, a) -> if x = l then a else (memory_lookup (tl, l))
                    )
;;
The code that I wrote is basically my rudimentary attempt at implementing/emulating looking up memory and returning corresponding values.
Here's an example input:
let memory1 = [ (1, Num 1) ; (2, Bool true) ; (3, Unit) ];;
Here's the expected output:
memory_lookup (memory1, 2);;
- : value = Bool true
However, here's the actual output:
Characters: 179-180:
| (x, a) -> if x = l then "a" else (memory_lookup (tl, l)))
Error: This expression has type value/1076
but an expression was expected of type value/1104
(Just for clarification: the error is regarding character a)
Does anybody know what type value/1076 and type value/1104 mean? Also, if there is anything wrong with the code that I wrote, would anybody be kind enough to point it out?
Thank you.
This kind of error happens in the toplevel when a type is defined multiple times, and some values of the old type are left in scope. A simple example is
type t = A
let x = A;;
type t = A
let y = A;;
x = y;;
Error: This expression has type t/1012 but an expression was expected of type
t/1009
The numerical part after the type name in value/1076 is a binding time for the type value. This binding time is used as a last resort to differentiate between two different types that happen to have the same name. Thus
Error: This expression has type value/1076
but an expression was expected of type value/1104
means that the value memory1 was defined with a type value defined at time 1076, whereas the function memory_lookup expected values of the type value defined later (i.e., at time 1104). The binding times are a bit arbitrary, so they may be replaced by simply value/1 and value/2 in OCaml 4.08.

Elm Maybe.withDefault

I need to unwrap a Maybe value in one of my update functions:
update msg model =
    case msg of
        UpdateMainContent val ->
            Maybe.withDefault 100 (Just 42)
            model
This of course is dummy code and the
Maybe.withDefault 100 (Just 42)
is taken straight out of the documentation for Maybe and not supposed to actually do anything. The compiler is complaining and saying:
Detected errors in 1 module.
-- TYPE MISMATCH ----------------------------------- ./src/Review/Form/State.elm
The 1st argument to function `withDefault` is causing a mismatch.
15|> Maybe.withDefault 100 (Just 42))
16| -- Maybe.withDefault 100 (model.activeItem)
17| model
Function `withDefault` is expecting the 1st argument to be:
a -> b
But it is:
number
Why is it saying that "withDefault" is expecting the first argument to be
a -> b
when it is defined as
a -> Maybe a -> a
in the documentation?
You accidentally left in model:
UpdateMainContent val ->
    Maybe.withDefault 100 (Just 42)
    model -- <-- here
This makes the type inference algorithm think that Maybe.withDefault 100 (Just 42) should evaluate to a function that can take this model argument. For that to make sense, it expects 100 and 42 to be functions, but they aren't, and so it tells you.
It might help to see an example where this works:
f : Int -> Int
f x = x + 1
Maybe.withDefault identity (Just f) 0
This will evaluate to 1.

Does "<-" mean assigning a variable in Haskell?

Just started Haskell; it's said that everything in Haskell is "immutable" except the IO package. So when I bind a name to something, is it always something immutable? For example:
Prelude> let removeLower x=[c|c<-x, c `elem` ['A'..'Z']]
Prelude> removeLower "aseruiiUIUIdkf"
"UIUI"
So here:
1. "removeLower" is immutable? Even though it's a function object? But I can still use "let" to assign something else to this name.
2. Inside the function, "c<-x" seems to make "c" a variable: it is assigned the values of the list x.
I'm using the word "variable" in the C sense; I'm not sure what Haskell calls all of its names.
Thanks.
If you're familiar with C, think of the distinction between declaring a variable and assigning a value to it. For example, you can declare a variable on its own and later assign to it:
int i;
i = 7;
Or you can declare a variable and assign an initial value at the same time:
int i = 7;
And in either case, you can mutate the value of a variable by assigning to it once more after the first initialization or assignment:
int i = 7; // Declaration and initial assignment
i = 5; // Mutation
Assignment in Haskell works exclusively like the second example—declaration with initialization:
You declare a variable;
Haskell doesn't allow uninitialized variables, so you are required to supply a value in the declaration;
There's no mutation, so the value given in the declaration will be the only value for that variable throughout its scope.
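For comparison, the Haskell counterpart of the C snippet above is just a declaration with an initial value (a minimal sketch; the name i mirrors the C example):
i :: Int
i = 7
-- A later "i = 5" in the same scope is not mutation; the compiler rejects it
-- as a duplicate definition of i.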
I bolded and hyperlinked "scope" because it's the second critical component here. This goes to one of your questions:
"removeLower" is immutable? Even though it's a function object? But I can still use "let" to assign something else to this name.
After you bind removeLower to the function you define in your example, the name removeLower will always refer to that function within the scope of that definition. This is easy to demonstrate in the interpreter. First, let's define a function foo:
Prelude> let foo x = x + 2
Prelude> foo 4
6
Now we define a bar that uses foo:
Prelude> let bar x = foo (foo x)
Prelude> bar 4
8
And now we "redefine" foo to something different:
Prelude> let foo x = x + 3
Prelude> foo 4
7
Now what do you think happens to bar?
Prelude> bar 4
8
It remains the same! Because the "redefinition" of foo doesn't mutate anything—it just says that, in the new scope created by the "redefinition", the name foo stands for the function that adds three. The definition of bar was made in the earlier scope where foo x = x + 2, so that's the meaning that the name foo has in that definition of bar. The original value of foo was not destroyed or mutated by the "redefinition."
In a Haskell program as much as in a C program, the same name can still refer to different values in different scopes of the program. This is what makes "variables" variable. The difference is that in Haskell you can never mutate the value of a variable within one scope. You can shadow a definition, however—uses of a variable will refer to the "nearest" definition of that name in some sense. (In the case of the interpreter, the most recent let declaration for that variable.)
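To make the shadowing point concrete, here is a minimal sketch in a source file (the names x and example are invented for illustration):
x :: Int
x = 1

example :: Int
example =
  let x = 2      -- this x shadows the top-level x within the let body
  in  x + 10     -- evaluates to 12; the top-level x is still 1 elsewhere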
Now, with that out of the way, here are the syntaxes that exist in Haskell for variable binding ("assignment"). First, there are top-level declarations in a module:
module MyLibrary (addTwo) where
addTwo :: Int -> Int
addTwo x = x + 2
Here the name addTwo is declared with the given function as its value. A top-level declaration can have private, auxiliary declarations in a where block:
addSquares :: Integer -> Integer -> Integer
addSquares x y = squareOfX + squareOfY
  where square z = z * z
        squareOfX = square x
        squareOfY = square y
Then there's the let ... in ... expression, which allows you to declare a local variable for any expression:
addSquares :: Integer -> Integer -> Integer
addSquares x y =
  let square z = z * z
      squareOfX = square x
      squareOfY = square y
  in squareOfX + squareOfY
Then there's the do-notation that has its own syntax for declaring variables:
example :: IO ()
example = do
  putStrLn "Enter your first name:"
  firstName <- getLine
  putStrLn "Enter your last name:"
  lastName <- getLine
  let fullName = firstName ++ " " ++ lastName
  putStrLn ("Hello, " ++ fullName ++ "!")
The var <- action assigns a value that is produced by an action (e.g., reading a line from standard input), while let var = expr assigns a value that is produced by a function (e.g., concatenating some strings). Note that the let in a do block is not the same thing as the let ... in ... from above!
And finally, in list comprehensions you get the same assignment syntax as in do-notation.
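As a small illustration of that correspondence (the names pairs and pairs' are invented for this sketch), a generator in a comprehension plays the same role as <- in a do block over the list monad:
pairs :: [(Int, Char)]
pairs = [ (n, c) | n <- [1, 2], c <- "ab" ]

pairs' :: [(Int, Char)]
pairs' = do
  n <- [1, 2]
  c <- "ab"
  return (n, c)
-- Both evaluate to [(1,'a'),(1,'b'),(2,'a'),(2,'b')].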
It's referring to the monadic bind operator >>=. You just don't have to write the lambda on its right-hand side explicitly. The list comprehension is compiled down to the corresponding monadic actions, so it means exactly the same thing as it would in any other monadic context.
In fact you can replace the list comprehension with a simple call to filter:
filter (`elem` ['A' .. 'Z']) x
To answer your question regarding the <- syntactic structure a bit more clearly:
[c| c <- x]
is the same as
do c <- x
   return c
is the same as
x >>= \c -> return c
is the same as
x >>= return
See the official Haskell documentation for further reading: https://hackage.haskell.org/package/base-4.8.2.0/docs/Control-Monad.html#v:-62--62--61-
[c|c<-x, c `elem` ['A'..'Z']]
is a list comprehension, and c <- x is a generator in which the pattern c is successively bound to the elements of the input list x, which are a, s, e, u, ... when you evaluate removeLower "aseruiiUIUIdkf".
c `elem` ['A'..'Z']
is a predicate which is applied to each successive binding of c inside the comprehension and an element of the input only appears in the output list if it passes this predicate.
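Putting the pieces together, here is a small self-contained sketch of three equivalent ways to write the function (removeLower2 and removeLower3 are names invented here for illustration):
removeLower, removeLower2, removeLower3 :: String -> String

-- the original list comprehension
removeLower x = [ c | c <- x, c `elem` ['A'..'Z'] ]

-- the same computation with do-notation in the list monad
removeLower2 x = do
  c <- x
  if c `elem` ['A'..'Z'] then return c else []

-- the same computation as a plain filter
removeLower3 = filter (`elem` ['A'..'Z'])

-- All three yield "UIUI" when applied to "aseruiiUIUIdkf".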

How to deal with variable references in yacc/bison (with ocaml)

I was wondering how to deal with variable references inside statements while writing grammars with ocamlyacc and ocamllex.
The problem is that statements of the form
var x = y + z
var b = true | f;
should both be correct, but in the first case the variables refer to numbers while in the second case f is a boolean variable.
In the grammar I'm writing I have got this:
numeric_exp_val:
| nint { Syntax.Int $1 }
| FLOAT { Syntax.Float $1 }
| LPAREN; ne = numeric_exp; RPAREN { ne }
| INCR; r = numeric_var_ref { Syntax.VarIncr (r,1) }
| DECR; r = numeric_var_ref { Syntax.VarIncr (r,-1) }
| var_ref { $1 }
;
boolean_exp_val:
| BOOL { Syntax.Bool $1 }
| LPAREN; be = boolean_exp; RPAREN { be }
| var_ref { $1 }
;
which obviously can't work, since both var_ref non-terminals reduce to the same thing (a reduce/reduce conflict). But I would like to have type checking that is mostly done statically (with respect to variable references) during the parsing phase itself.
That's why I'm wondering which is the best way to have variable references and keep this structure. Just as an additional info I have functions that compile the syntax tree by translating it into a byte code similar to this one:
let rec compile_numeric_exp exp =
  match exp with
    Int i -> [Push (Types.I.Int i)]
  | Float f -> [Push (Types.I.Float f)]
  | Bop (BNSum,e1,e2) -> (compile_numeric_exp e1) @ (compile_numeric_exp e2) @ [Types.I.Plus]
  | Bop (BNSub,e1,e2) -> (compile_numeric_exp e1) @ (compile_numeric_exp e2) @ [Types.I.Minus]
  | Bop (BNMul,e1,e2) -> (compile_numeric_exp e1) @ (compile_numeric_exp e2) @ [Types.I.Times]
  | Bop (BNDiv,e1,e2) -> (compile_numeric_exp e1) @ (compile_numeric_exp e2) @ [Types.I.Div]
  | Bop (BNOr,e1,e2) -> (compile_numeric_exp e1) @ (compile_numeric_exp e2) @ [Types.I.Or]
  | VarRef n -> [Types.I.MemoryGet (Memory.index_for_name n)]
  | VarIncr ((VarRef n) as vr,i) -> (compile_numeric_exp vr) @ [Push (Types.I.Int i);Types.I.Plus;Types.I.Dupe] @ (compile_assignment_to n)
  | _ -> []
Parsing is simply not the right place to do type-checking. I don't understand why you insist on doing this in this pass. You would have much clearer code and greater expressive power by doing it in a separate pass.
Is it for efficiency reasons? I'm confident you could devise efficient incremental-typing routines elsewhere, to be called from the grammar production (but I'm not sure you'll win that much). This looks like premature optimization.
There has been work on writing type systems as attribute grammars (which could be seen as a declarative way to express typing derivations), but I don't think it is meant to conflate parsing and typing in a single pass.
If you really want to go further in this direction, I would advise you to use a simple lexical differentiation between num-typed and bool-typed variables. This sounds ugly but is simple.
If you want to treat numeric expressions and boolean expressions as different syntactic categories, then consider how you must parse var x = ( ( y + z ) ). You don't know which type of expression you're parsing until you hit the +. Therefore, you need to eat up several tokens before you know whether you are seeing a numeric_exp_val or a boolean_exp_val: you need some unbounded lookahead. Yacc does not provide such lookahead (Yacc only provides a restricted form of lookahead, roughly described as LALR, which puts bounds on parsing time and memory requirements). There is even an ambiguous case that makes your grammar context-sensitive: with a definition like var x = y, you need to look up the type of y.
You can solve this last ambiguity by feeding back the type information into the lexer, and you can solve the need for lookahead by using a parser generator that supports unbounded lookahead. However, both of these techniques will push your parser towards a point where it can't easily evolve if you want to expand the language later on (for example to distinguish between integer and floating-point numbers, to add strings or lists, etc.).
If you want a simple but constraining fix with a low technological overhead, I'll second gasche's suggestion of adding a syntactic distinguisher for numeric and boolean variable definitions, something like bvar b = … and nvar x = …. There again, this will make it difficult to support other types later on.
You will have an easier time overall if you separate the type checking from the parsing. Once you've built an abstract syntax tree, do a pass of type checking (in which you will infer the types of variables).
type numeric_expression = Nconst of float | Nplus of numeric_expression * numeric_expression | …
and boolean_expression = Bconst of bool | Bor of boolean_expression * boolean_expression | …
type typed_expression = Tnum of numeric_expression | Tbool of boolean_expression
type typed_statement = Tvar of string * typed_expression
let rec type_expression : Syntax.expression -> typed_expression = function
  | Syntax.Float x -> Tnum (Nconst x)
  | Syntax.Plus (e1, e2) ->
    begin match type_expression e1, type_expression e2 with
    | Tnum n1, Tnum n2 -> Tnum (Nplus (n1, n2))
    | _, (Tbool _ as t2) -> raise (Invalid_argument_type ("+", t2))
    | (Tbool _ as t1), _ -> raise (Invalid_argument_type ("+", t1))
    end
  | …