I am working on a small project whose goal is to give a definition of Gcd that produces the gcd of two numbers along with a proof that the result is correct. But I am unable to give a total definition of Gcd. The definition of gcd in Idris 1.3.0 is total, but it uses assert_total to force totality, which defeats the purpose of my project. Does anyone have a total definition of Gcd which does not use assert_total?
P.S. - My code is uploaded at https://github.com/anotherArka/Idris-Number-Theory.git
I have a version which uses the Accessible relation to show that the sum of the two numbers you're finding the gcd of gets smaller on every recursive call: https://gist.github.com/edwinb/1907723fbcfce2fde43a380b1faa3d2c#file-gcd-idr-L25
It relies on this, from Prelude.WellFounded:
data Accessible : (rel : a -> a -> Type) -> (x : a) -> Type where
  Access : (rec : (y : a) -> rel y x -> Accessible rel y) ->
           Accessible rel x
The general idea is that you can make a recursive call by explicitly stating what gets smaller, and providing a proof on each recursive call that it really does get smaller. For gcd, it looks like this (gcdt for the total version since gcd is in the prelude):
gcdt : Nat -> Nat -> Nat
gcdt m n with (sizeAccessible (m + n))
  gcdt m Z | acc = m
  gcdt Z n | acc = n
  gcdt (S m) (S n) | (Access rec)
      = if m > n
           then gcdt (minus m n) (S n) | rec _ (minusSmaller_1 _ _)
           else gcdt (S m) (minus n m) | rec _ (minusSmaller_2 _ _)
sizeAccessible is defined in the prelude and allows you to state explicitly that it's the sum of the inputs that's getting smaller here. Each recursive call goes through rec, the argument of the Access constructor, which requires a proof that the measure really does get smaller.
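For reference, the size-based machinery in Prelude.WellFounded looks roughly like this (paraphrased from memory rather than quoted, so check the actual source for the exact definitions):

interface Sized a where
  size : a -> Nat

Smaller : Sized a => a -> a -> Type
Smaller x y = LT (size x) (size y)

-- sizeAccessible proves that every value is accessible along the Smaller relation
sizeAccessible : Sized a => (x : a) -> Accessible Smaller x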
If you want to see in a bit more detail what's going on, you can try replacing the minusSmaller_1 and minusSmaller_2 calls with holes, to see what you have to prove:
gcdt : Nat -> Nat -> Nat
gcdt m n with (sizeAccessible (m + n))
  gcdt m Z | acc = m
  gcdt Z n | acc = n
  gcdt (S m) (S n) | (Access rec)
      = if m > n
           then gcdt (minus m n) (S n) | rec _ ?smaller1
           else gcdt (S m) (minus n m) | rec _ ?smaller2
For example:
*gcd> :t smaller1
  m : Nat
  n : Nat
  rec : (y : Nat) ->
        LTE (S y) (S (plus m (S n))) -> Accessible Smaller y
--------------------------------------
smaller1 : LTE (S (plus (minus m n) (S n))) (S (plus m (S n)))
I don't know of anywhere that documents Accessible in much detail, at least for Idris (you might find examples for Coq), but there are more examples in the base libraries in Data.List.Views, Data.Vect.Views and Data.Nat.Views.
FYI: the implementation in Idris 1.3.0 (and probably 1.2.0) is total, but it uses the assert_total function to achieve this.
:printdef gcd
gcd : (a : Nat) ->
      (b : Nat) -> {auto ok : NotBothZero a b} -> Nat
gcd a 0 = a
gcd 0 b = b
gcd a (S b) = assert_total (gcd (S b)
                                (modNatNZ a (S b) SIsNotZ))
Related
I'm deciding between parametrising my type by a Vect r Nat and List Nat. At first, I thought that with the former I can constrain the vector length like
foo : Vect m Nat -> Vect m Nat -> Vect (2 * m) Nat
and this makes Vect more powerful, but then I realised I can do the same, albeit more verbosely, with lists and proofs
foo : (x : List Nat) -> (y : List Nat)
   -> {auto ok : length x = length y} -> (zs : List Nat ** length zs = 2 * length x)
Now I'm wondering if this is generally true. Is parametrising with values any different to using type-level functions with proofs?
Let's say we want to prove that weakening the upper bound of a Data.Fin doesn't change the value of the number. The intuitive way to state this is:
weakenEq : (num : Fin n) -> num = weaken num
Let's now generate the defini... Hold on! Let's think a bit about that statement. num and weaken num have different types. Can we state the equality in this case?
The documentation on = suggests we can try to, but we might want to use ~=~ instead. Well, anyway, let's go ahead and generate the definition and case-split, resulting in
weakenEq : (num : Fin n) -> num = weaken num
weakenEq FZ = ?weakenEq_rhs_1
weakenEq (FS x) = ?weakenEq_rhs_2
The goal in the weakenEq_rhs_1 hole is FZ = FZ, which still makes sense from the value point of view. So we optimistically replace the hole with Refl, but only to fail:
When checking right hand side of weakenEq with expected type
FZ = weaken FZ
Unifying k and S k would lead to infinite value
A somewhat cryptic error message, so we wonder if that's really related to the types being different.
Anyway, let's try again, but now with ~=~ instead of =. Unfortunately, the error is still the same.
So, how would one state and prove that weaken x doesn't change the value of x? Does it really make sense to? What should I do if that's part of a larger proof where I might want to rewrite a Vect n (Fin k) into the Vect n (Fin (S k)) that's obtained by mapping weaken over the original vector?
If you really want to prove that the value of the Fin n does not change after applying the weaken function, you would need to prove the equality of these values:
weakenEq : (num : Fin n) -> finToNat num = finToNat $ weaken num
weakenEq FZ = Refl
weakenEq (FS x) = cong $ weakenEq x
To your second problem/Markus' comment about map (Data.Fin.finToNat) v = map (Data.Fin.finToNat . Data.Fin.weaken) v:
vectorWeakenEq : (v : Vect n (Fin k)) ->
                 map Fin.finToNat v = map (Fin.finToNat . Fin.weaken) v
vectorWeakenEq [] = Refl
vectorWeakenEq (x :: xs) =
  rewrite sym $ weakenEq x in
    cong {f=(::) (finToNat x)} (vectorWeakenEq xs)
And to see why num = weaken num won't work, let's take a look at a counterexample:
getSize : Fin n -> Nat
getSize _ {n} = n
Now with x : Fin n, getSize x = n, whereas getSize (weaken x) = S n, so the two differ. This can't happen with functions that depend only on the constructors, like finToNat. So you have to restrict yourself to such functions and prove, as above for finToNat, that they give the same result before and after weakening.
I'm trying to define the dependent type of n-ary functions (built as a tree out of binary and unary functions; I suspect this is isomorphic to the type of (Vect n a) -> a) as an exercise in learning Idris.
While trying to define a function that applies an argument to an n-ary function (producing an (n-1)-ary function) I got a very suspicious error:
Type mismatch between
        ArityFn m a (Type of ng)
and
        ArityFn (minus m 0) a (Expected type)

Specifically:
        Type mismatch between
                m
        and
                minus m 0
Here's the code in question, for reference:
data ArityFn : Nat -> (ty : Type) -> Type where
  Val      : (x : ty) -> ArityFn 0 ty
  UnaryFn  : (ty -> ty) -> ArityFn 1 ty
  BinaryFn : (ty -> ty -> ty) -> ArityFn 2 ty
  NAryFn   : (ty -> ty -> ty) -> (ArityFn n ty) -> (ArityFn m ty) -> ArityFn (n + m) ty

%name ArityFn nf, ng, nh
applyArityFn : a -> (ArityFn n a) -> (LTE 1 n) -> ArityFn (n - 1) a
... (some definitions elided)
applyArityFn x (NAryFn h (UnaryFn f) ng) _ = mkNAryFn h (Val (f x)) ng
Is this a bug in the typechecker?
When in doubt, look for the definition of the function which got stuck:
:printdef minus returns (among other things, modulo some cleanup):
Original definition:
minus 0 right = 0
minus left 0 = left
minus (S left) (S right) = minus left right
You can see that minus left 0 = left won't hold definitionally because the pattern minus 0 right = 0 comes before it. Now, of course, both equations return the same result when they happen to coincide, but Idris doesn't know that.
To get the result you want, you can:
either pattern match on m, so that the head constructor of minus's first argument is exposed and the call reduces,
or rewrite by a proof that minus m 0 = m (a sketch of this option follows below).
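For example, a minimal sketch of the second option (the names minusZeroRight' and adjust are mine, not from the question; Prelude.Nat may already provide a similar lemma):

-- Once the head constructor of the first argument is exposed, minus reduces,
-- so Refl suffices in both branches.
minusZeroRight' : (m : Nat) -> minus m 0 = m
minusZeroRight' Z = Refl
minusZeroRight' (S k) = Refl

-- Rewriting by that proof turns the expected type ArityFn (minus m 0) a
-- into ArityFn m a, so the argument can be returned unchanged.
adjust : ArityFn m a -> ArityFn (minus m 0) a
adjust {m} ng = rewrite minusZeroRight' m in ng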
The following declaration of vector cons
cons : a -> Vect n a -> Vect (n + 1) a
cons x xs = x :: xs
fails with the error
Type mismatch between
        S n
and
        plus n 1
while the following vector append compiles and works
append : Vect n a -> Vect m a -> Vect (n + m) a
append xs ys = xs ++ ys
Why is the type-level plus accepted in the second case but not in the first? What's the difference?
Why does x :: xs : Vect (n + 1) a lead to a type error?
(+) is defined by induction on its first argument so n + 1 is stuck (because n is a stuck expression, a variable in this case).
(::) is defined with the type a -> Vect m a -> Vect (S m) a.
So Idris needs to solve the unification problem n + 1 =? S m and because you have a stuck term vs. an expression with a head constructor these two things simply won't unify.
If you had written 1 + n on the other hand, Idris would have reduced that expression down to S n and the unification would have been successful.
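Spelled out as a small self-contained sketch (not code from the question):

import Data.Vect

-- 1 + n reduces to S n, which is exactly the index produced by (::),
-- so no proof is needed.
cons : a -> Vect n a -> Vect (1 + n) a
cons x xs = x :: xs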
Why does xs ++ ys : Vect (n + m) a succeed?
(++) is defined with the type Vect p a -> Vect q a -> Vect (p + q) a.
Under the assumption that xs : Vect n a and ys : Vect m a, you will have to solve the constraints:
Vect n a ?= Vect p a (xs is the first argument passed to (++))
Vect m a ?= Vect q a (ys is the second argument passed to (++))
Vect (n + m) a ?= Vect (p + q) a (xs ++ ys is the result of append)
The first two constraints lead to n = p and m = q respectively which make the third constraint hold: everything works out.
Consider append : Vect n a -> Vect m a -> Vect (m + n) a.
Notice how I have swapped the two arguments to (+) in this one. You would then be in a situation similar to your first question: after a bit of unification, you would end up with the constraint m + n ?= n + m, which Idris, not knowing that (+) is commutative, wouldn't be able to solve.
Solutions? Workarounds?
Whenever you can, it is infinitely more convenient to define a function using the same recurrence pattern as the computation that happens in its type. Indeed, when that is the case, the type will be simplified by computation in the various branches of the function definition.
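For instance (a hypothetical snoc, not from the original exchange), a function that recurses on the vector the same way (+) recurses on its first argument keeps the n + 1 in its type computing in every branch:

import Data.Vect

snoc : Vect n a -> a -> Vect (n + 1) a
snoc [] y = [y]                     -- expected type Vect (0 + 1) a computes to Vect 1 a
snoc (x :: xs) y = x :: snoc xs y   -- expected type Vect (S k + 1) a computes to Vect (S (k + 1)) a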
When you can't, you can rewrite by proofs that two things are equal (e.g. that n + 1 = S n for all n) to bridge the mismatch between a term's type and the expected one. Even though this may seem more convenient than refactoring your code to use a different recurrence pattern, and is sometimes necessary, it is usually the start of a path full of pitfalls.
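That said, if you want to keep cons with the n + 1 index, a minimal sketch of the rewrite approach (using plusCommutative from Prelude.Nat; this is my addition, not part of the original answer) looks like this:

import Data.Vect

-- plusCommutative n 1 : n + 1 = 1 + n, and 1 + n reduces to S n, so the
-- rewrite turns the expected type Vect (n + 1) a into Vect (S n) a,
-- which is the type of x :: xs.
cons : a -> Vect n a -> Vect (n + 1) a
cons {n} x xs = rewrite plusCommutative n 1 in (x :: xs)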
I am just starting to learn Idris, and I figured a nice little project to start off would be to implement finite sequences as 2-3 finger trees. Each internal node in a tree needs to be annotated at run time with the total number of elements stored below it in order to support fast splitting and indexing. This size information also needs to be managed at compile time in order to (eventually) prove that splitting with appropriate indices and zipping a sequence with another sequence are total operations.
I can think of two ways to deal with this:
What I'm currently doing, having written only a tiny fraction of the total necessary code: handle the sizes entirely in the types, and then use something like proof {intros; exact s;} to get at them. I don't know what, if any, horrible efficiency consequences this might have. Among the potential problems in my mind: a) unnecessarily storing a size with each leaf node; b) I think this is unlikely, but it would be really bad if sizes were calculated from the bottom up rather than lazily from the top down.
Include an explicit size field in each node constructor, along with a proof that the number in the size field matches the size the type system requires. This approach seems extremely awkward. On the plus side, I should be able to be quite certain that the type-level numbers and the equality proofs get erased, leaving just one number per internal node at run time.
Which of these, if either, is the right way to go?
The current code
Please feel free to give style tips, and maybe to explain how to inline the size code. I could only figure out what to do interactively, but it seems a bit weird to have proofs at the bottom for such simple things.
data Tree23 : Nat -> Nat -> Type -> Type where
  Elem : a -> Tree23 0 1 a
  Node2 : Lazy (Tree23 d s1 a) -> Lazy (Tree23 d s2 a) ->
          Tree23 (S d) (s1 + s2) a
  Node3 : Lazy (Tree23 d s1 a) -> Lazy (Tree23 d s2 a) -> Lazy (Tree23 d s3 a) ->
          Tree23 (S d) (s1 + s2 + s3) a

size23 : Tree23 d s a -> Nat
size23 t = ?size23RHS

data Digit : Nat -> Nat -> Type -> Type where
  One : Lazy (Tree23 d s a) -> Digit d s a
  Two : Lazy (Tree23 d s1 a) -> Lazy (Tree23 d s2 a) -> Digit d (s1+s2) a
  Three : Lazy (Tree23 d s1 a) -> Lazy (Tree23 d s2 a) ->
          Lazy (Tree23 d s3 a) -> Digit d (s1+s2+s3) a
  Four : Lazy (Tree23 d s1 a) -> Lazy (Tree23 d s2 a) ->
         Lazy (Tree23 d s3 a) -> Lazy (Tree23 d s4 a) -> Digit d (s1+s2+s3+s4) a

sizeDig : Digit d s a -> Nat
sizeDig t = ?sizeDigRHS

data FingerTree : Nat -> Nat -> Type -> Type where
  Empty : FingerTree d 0 a
  Single : Tree23 d s a -> FingerTree d s a
  Deep : Digit d spr a -> Lazy (FingerTree (S d) sm a) -> Digit d ssf a ->
         FingerTree d (spr + sm + ssf) a

data Seq' : Nat -> Type -> Type where
  MkSeq' : FingerTree 0 n a -> Seq' n a

Seq : Type -> Type
Seq a = (n ** Seq' n a)

---------- Proofs ----------

try.sizeDigRHS = proof
  intros
  exact s

try.size23RHS = proof
  intros
  exact s
Edit
Another option I've explored a bit is to try to separate the data structure from its validity. This leads to the following:
data Tree23 : Nat -> Type -> Type where
  Elem : a -> Tree23 0 a
  Node2 : Nat -> Lazy (Tree23 d a) -> Lazy (Tree23 d a) ->
          Tree23 (S d) a
  Node3 : Nat -> Lazy (Tree23 d a) -> Lazy (Tree23 d a) -> Lazy (Tree23 d a) ->
          Tree23 (S d) a

size23 : Tree23 d a -> Nat
size23 (Elem x) = 1
size23 (Node2 s _ _) = s
size23 (Node3 s _ _ _) = s

data Valid23 : Tree23 d a -> Type where
  ElemValid : Valid23 (Elem x)
  Node2Valid : Valid23 x -> Valid23 y -> Valid23 (Node2 (size23 x + size23 y) x y)
  Node3Valid : Valid23 x -> Valid23 y -> Valid23 z
               -> Valid23 (Node3 (size23 x + size23 y + size23 z) x y z)

data Digit : Nat -> Type -> Type where
  One : Lazy (Tree23 d a) -> Digit d a
  Two : Lazy (Tree23 d a) -> Lazy (Tree23 d a) -> Digit d a
  Three : Lazy (Tree23 d a) -> Lazy (Tree23 d a) ->
          Lazy (Tree23 d a) -> Digit d a
  Four : Lazy (Tree23 d a) -> Lazy (Tree23 d a) ->
         Lazy (Tree23 d a) -> Lazy (Tree23 d a) -> Digit d a

data ValidDig : Digit d a -> Type where
  OneValid : Valid23 x -> ValidDig (One x)
  TwoValid : Valid23 x -> Valid23 y -> ValidDig (Two x y)
  ThreeValid : Valid23 x -> Valid23 y -> Valid23 z -> ValidDig (Three x y z)
  FourValid : Valid23 x -> Valid23 y -> Valid23 z -> Valid23 w -> ValidDig (Four x y z w)

sizeDig : Digit d a -> Nat
sizeDig (One x) = size23 x
sizeDig (Two x y) = size23 x + size23 y
sizeDig (Three x y z) = size23 x + size23 y + size23 z
sizeDig (Four x y z w) = (size23 x + size23 y) + (size23 z + size23 w)

data FingerTree : Nat -> Type -> Type where
  Empty : FingerTree d a
  Single : Tree23 d a -> FingerTree d a
  Deep : Nat -> Digit d a -> Lazy (FingerTree (S d) a) -> Digit d a ->
         FingerTree d a

sizeFT : FingerTree d a -> Nat
sizeFT Empty = 0
sizeFT (Single x) = size23 x
sizeFT (Deep k x y z) = k

data ValidFT : FingerTree d a -> Type where
  ValidEmpty : ValidFT Empty
  ValidSingle : Valid23 x -> ValidFT (Single x)
  ValidDeep : ValidDig pr -> ValidFT m -> ValidDig sf ->
              ValidFT (Deep (sizeDig pr + sizeFT m + sizeDig sf) pr m sf)

record Seq : Type -> Type where
  MkSeq : FingerTree 0 a -> Seq a

data ValidSeq : Seq a -> Type where
  MkValidSeq : ValidFT t -> ValidSeq (MkSeq t)
Then each function is accompanied by a (separate) proof of its validity.
I kind of like the way this approach separates "code" from "proofs", but I've run into a couple problems with it:
While the "code" gets easier, the proofs seem to get a good bit harder to construct. I imagine a large part of that is probably a result of my not being familiar with the system.
I haven't actually gotten near the point where I will write this code, but the indexing, splitting, and zipping functions will all have to insist on getting a proof of the validity of their input(s). It seems a bit weird for some functions to work with just sequences, and others to insist on proofs, but maybe that's just me.
Your problem can be simplified to having a type like
data Steps : Nat -> Type where
  Nil : Steps 0
  Cons : Steps n -> Steps (S n)
and wanting to write
size : Steps n -> Nat
This is very easy to do since the implicitly-quantified arguments (n in this case) are passed to size as implicit arguments! So the above type of size is the same as
size : {n : _} -> Steps n -> Nat
which means it can be defined as
size {n} _ = n
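Applied to the first Tree23 and Digit from the question, where the size s is a type index, that means the proof scripts can be replaced by (a sketch):

size23 : Tree23 d s a -> Nat
size23 {s} _ = s

sizeDig : Digit d s a -> Nat
sizeDig {s} _ = s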