Let's say I want to define the Fibonacci function as follows:
fibo : Int -> Int
fibo 1 = 1
fibo 2 = 2
fibo n = fibo (n-1) + fibo (n-2)
This function is obviously not total since it's undefined for integers below 1, so I need to constrain the input argument somehow.
I've tried playing around with defining a new data type MyInt, something along the lines of:
-- bottom is the lower limit
data MyInt : (bottom : Int) -> (n : Int) -> Type where
...
fibo : MyInt 1 n -> Int
...
However I get lost rather quickly.
How can I constrain the input argument to, for example, my fibo function to be integer values of 1 or above?
There are actually two reasons why Idris will not recognise the fibo function as total. Firstly, as you pointed out, it is not defined for integers less than 1, but secondly, it calls itself recursively. Although Idris is capable of recognising the totality of recursive functions, it can generally only do so when it can be shown that the argument to the recursive call is 'smaller' (i.e. closer to a base case*) than the original argument (for example, if a function receives a list as an argument, it can call itself with the tail of the list without necessarily sacrificing totality, because the tail is a substructure of the original list and thus closer to Nil). The problem with expressions like (n-1) and (n-2), when they are of type Int, is that although they are numerically smaller than n, they are not structurally smaller, because Int is not inductively defined and so has no base cases. Therefore the totality checker is unable to satisfy itself that the recursion will always eventually reach a base case (even though it might seem obvious to us), and so it will not consider fibo to be total.
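For instance, a definition like the following is accepted as total, because each recursive call is on the tail, a strict substructure of the input (an illustrative example of mine, not from the question):
-- total: the recursive call is on xs, which is structurally smaller than (x :: xs)
total
len : List a -> Nat
len []        = Z
len (x :: xs) = S (len xs)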
First off, let's solve the recursion problem. Instead of Int, we can use an inductively-defined datatype such as Nat:
data Nat = Z | S Nat
(A natural number is either zero, or the successor of another natural number.)
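Numeric literals for Nat elaborate to these constructors; for instance, the literal 3 is shorthand for S (S (S Z)) (the name three below is mine):
three : Nat
three = S (S (S Z))   -- what the literal 3 means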
This allows us to rewrite fibo as:
fibo : Nat -> Int
fibo (S Z) = 1
fibo (S (S Z)) = 2
fibo (S (S n)) = fibo (S n) + fibo n
(Note how in the recursive case, instead of calculating (n-1) and (n-2) explicitly, we produce them by pattern matching on the argument, thereby demonstrating to Idris that they are structurally smaller.)
This new definition of fibo is still not entirely total, though, because it lacks a case for Z (i.e. zero). If we don't want to provide for such a case, then we need to give Idris some assurance that it will not be allowed to occur. One way we can do this is to require a proof that the argument to fibo is greater than or equal to one (or equivalently, one is less than or equal to the argument):
fibo : (n : Nat) -> LTE 1 n -> Int
fibo Z LTEZero impossible
fibo Z (LTESucc _) impossible
fibo (S Z) _ = 1
fibo (S (S Z)) _ = 2
fibo (S (S (S n))) _ = fibo (S (S n)) (LTESucc LTEZero) + fibo (S n) (LTESucc LTEZero)
LTE 1 n is the type whose values are proofs that 1 ≤ n (within the natural numbers). LTEZero represents the axiom that zero ≤ any natural number, and LTESucc represents the rule that if n ≤ m, then (successor of n) ≤ (successor of m). The impossible keyword indicates that a given case cannot occur. In the above definition, it is impossible for the first argument to fibo to be zero because there is no way to prove that 1 ≤ 0. For any other natural number n, we can prove that 1 ≤ n using (LTESucc LTEZero).
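To see how such a proof is built in practice, here is a proof that 1 ≤ 3, for example (the name oneLTEThree is mine):
oneLTEThree : LTE 1 3
oneLTEThree = LTESucc LTEZero   -- LTEZero : LTE 0 2, and LTESucc bumps both sides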
Now at last fibo is total, but it's rather cumbersome to have to provide it with an explicit proof that its argument is greater than or equal to 1. Luckily, we can mark the proof argument as auto implicit:
fibo : (n : Nat) -> {auto p : LTE 1 n} -> Int
fibo Z {p = LTEZero} impossible
fibo Z {p = (LTESucc _)} impossible
fibo (S Z) = 1
fibo (S (S Z)) = 2
fibo (S (S (S n))) = fibo (S (S n)) + fibo (S n)
Idris will now automatically find a proof that 1 ≤ n where possible; otherwise we will still be required to provide one ourselves.
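For example (a sketch of mine; the names ok and okExplicit are not from the original answer):
ok : Int
ok = fibo 3                                -- a proof of LTE 1 3 is found by auto search

okExplicit : Int
okExplicit = fibo 3 {p = LTESucc LTEZero}  -- the same call with the proof given by hand

-- fibo 0 would be rejected: there is no value of type LTE 1 0 to be found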
* There may well be some codata-related subtleties that I'm glossing over here without realising, but this is the broad principle.
Related
I'm deciding between parametrising my type by a Vect r Nat and by a List Nat. At first, I thought that with the former I can constrain the vector length like
foo : Vect m Nat -> Vect m Nat -> Vect (2 * m) Nat
and this makes Vect more powerful, but then I realised I can do the same, albeit more verbosely, with lists and proofs
foo : (x : List Nat) -> (y : List Nat)
   -> {auto prf : length x = length y}
   -> (zs : List Nat ** length zs = 2 * length x)
Now I'm wondering if this is generally true. Is parametrising with values any different to using type-level functions with proofs?
Let's say we want to prove that weakening the upper bound of a Data.Fin doesn't change the value of the number. The intuitive way to state this is:
weakenEq : (num : Fin n) -> num = weaken num
Let's now generate the defini... Hold on! Let's think a bit about that statement. num and weaken num have different types. Can we state the equality in this case?
The documentation on = suggests we can try to, but we might want to use ~=~ instead. Well, anyway, let's go ahead and generate the definition and case-split, resulting in
weakenEq : (num : Fin n) -> num = weaken num
weakenEq FZ = ?weakenEq_rhs_1
weakenEq (FS x) = ?weakenEq_rhs_2
The goal in the weakenEq_rhs_1 hole is FZ = FZ, which still makes sense from the value point of view. So we optimistically replace the hole with Refl, but only to fail:
When checking right hand side of weakenEq with expected type
FZ = weaken FZ
Unifying k and S k would lead to infinite value
A somewhat cryptic error message, so we wonder if that's really related to the types being different.
Anyway, let's try again, but now with ~=~ instead of =. Unfortunately, the error is still the same.
So, how would one state and prove that weaken x doesn't change the value of x? Does it really make sense to? What should I do if that's a part of a larger proof where I might want to rewrite a Vect n (Fin k) with a Vect n (Fin (S k)) that's obtained by mapping weaken over the original vector?
If you really want to prove that the value of the Fin n does not change after applying the weaken function, you would need to prove the equality of these values:
weakenEq: (num: Fin n) -> finToNat num = finToNat $ weaken num
weakenEq FZ = Refl
weakenEq (FS x) = cong $ weakenEq x
To your second problem, showing map (Data.Fin.finToNat) v = map (Data.Fin.finToNat . Data.Fin.weaken) v:
vectorWeakenEq : (v : Vect n (Fin k)) ->
                 map Fin.finToNat v = map (Fin.finToNat . Fin.weaken) v
vectorWeakenEq [] = Refl
vectorWeakenEq (x :: xs) =
  rewrite sym $ weakenEq x in
    cong {f=(::) (finToNat x)} (vectorWeakenEq xs)
And to see why num = weaken num won't work, let's take a look at a counterexample:
getSize : Fin n -> Nat
getSize _ {n} = n
Now with x : Fin n, getSize x = n != (n + 1) = getSize (weaken x). This won't happen with functions which only depend on the constructors, like finToNat. So you have to constrain yourself to those and prove that they behave like that.
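For reference, finToNat is defined by recursion on the constructors alone (roughly its definition in Data.Fin):
finToNat : Fin n -> Nat
finToNat FZ     = Z
finToNat (FS k) = S (finToNat k)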
I am working on a small project with the goal of giving a definition of Gcd which produces the gcd of two numbers along with a proof that the result is correct. But I am unable to give a total definition of Gcd. The definition of Gcd in Idris 1.3.0 is total but uses assert_total to force totality, which defeats the purpose of my project. Does anyone have a total definition of Gcd which does not use assert_total?
P.S. My code is uploaded at https://github.com/anotherArka/Idris-Number-Theory.git
I have a version which uses the Accessible relation to show that the sum of the two numbers you're finding the gcd of gets smaller on every recursive call: https://gist.github.com/edwinb/1907723fbcfce2fde43a380b1faa3d2c#file-gcd-idr-L25
It relies on this, from Prelude.WellFounded:
data Accessible : (rel : a -> a -> Type) -> (x : a) -> Type where
  Access : (rec : (y : a) -> rel y x -> Accessible rel y) ->
           Accessible rel x
The general idea is that you can make a recursive call by explicitly stating what gets smaller, and providing a proof on each recursive call that it really does get smaller. For gcd, it looks like this (gcdt for the total version since gcd is in the prelude):
gcdt : Nat -> Nat -> Nat
gcdt m n with (sizeAccessible (m + n))
  gcdt m Z | acc = m
  gcdt Z n | acc = n
  gcdt (S m) (S n) | (Access rec)
      = if m > n
           then gcdt (minus m n) (S n) | rec _ (minusSmaller_1 _ _)
           else gcdt (S m) (minus n m) | rec _ (minusSmaller_2 _ _)
sizeAccessible is defined in the prelude and allows you to explicitly state here that it's the sum of the inputs that's getting smaller. The recursive call is smaller than the input because rec is an argument of Access rec.
If you want to see in a bit more detail what's going on, you can try replacing the minusSmaller_1 and minusSmaller_2 calls with holes, to see what you have to prove:
gcdt : Nat -> Nat -> Nat
gcdt m n with (sizeAccessible (m + n))
  gcdt m Z | acc = m
  gcdt Z n | acc = n
  gcdt (S m) (S n) | (Access rec)
      = if m > n
           then gcdt (minus m n) (S n) | rec _ ?smaller1
           else gcdt (S m) (minus n m) | rec _ ?smaller2
For example:
*gcd> :t smaller1
m : Nat
n : Nat
rec : (y : Nat) ->
LTE (S y) (S (plus m (S n))) -> Accessible Smaller y
--------------------------------------
smaller1 : LTE (S (plus (minus m n) (S n))) (S (plus m (S n)))
I don't know of anywhere that documents Accessible in much detail, at least for Idris (you might find examples for Coq), but there are more examples in the base libraries in Data.List.Views, Data.Vect.Views and Data.Nat.Views.
FYI: the implementation in Idris 1.3.0 (and probably 1.2.0) is total, but uses the assert_total function to achieve this.
:printdef gcd
gcd : (a : Nat) ->
      (b : Nat) -> {auto ok : NotBothZero a b} -> Nat
gcd a 0 = a
gcd 0 b = b
gcd a (S b) = assert_total (gcd (S b)
                                (modNatNZ a (S b) SIsNotZ))
The following declaration of vector cons
cons : a -> Vect n a -> Vect (n + 1) a
cons x xs = x :: xs
fails with the error
Type mismatch between
S n
and
plus n 1
while the following vector append compiles and works
append : Vect n a -> Vect m a -> Vect (n + m) a
append xs ys = xs ++ ys
Why is type-level plus accepted in the second case but not in the first? What's the difference?
Why does x :: xs : Vect (n + 1) a lead to a type error?
(+) is defined by induction on its first argument so n + 1 is stuck (because n is a stuck expression, a variable in this case).
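For reference, the Prelude defines addition on Nat roughly as follows, recursing on the first argument, which is why 1 + n reduces while n + 1 does not:
plus : Nat -> Nat -> Nat
plus Z     right = right             -- reduces once the first argument is a constructor
plus (S k) right = S (plus k right)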
(::) is defined with the type a -> Vect m a -> Vect (S m) a.
So Idris needs to solve the unification problem n + 1 =? S m and because you have a stuck term vs. an expression with a head constructor these two things simply won't unify.
If you had written 1 + n on the other hand, Idris would have reduced that expression down to S n and the unification would have been successful.
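That is, this variant of the declaration from the question typechecks as written (a sketch reusing the name cons):
cons : a -> Vect n a -> Vect (1 + n) a
cons x xs = x :: xs    -- 1 + n reduces to S n, matching the type of (::)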
Why does xs ++ ys : Vect (n + m) a succeed?
(++) is defined with the type Vect p a -> Vect q a -> Vect (p + q) a.
Under the assumption that xs : Vect n a and ys : Vect m a, you will have to solve the constraints:
Vect n a ?= Vect p a (xs is the first argument passed to (++))
Vect m a ?= Vect q a (ys is the second argument passed to (++))
Vect (n + m) a ?= Vect (p + q) a (xs ++ ys is the result of append)
The first two constraints lead to n = p and m = q respectively which make the third constraint hold: everything works out.
Consider append : Vect n a -> Vect m a -> Vect (m + n) a.
Notice how I have swapped the two arguments to (+) in this one. You would then be in a situation similar to your first question: after a bit of unification, you would end up with the constraint m + n ?= n + m which Idris, not knowing that (+) is commutative, wouldn't be able to solve.
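A sketch of that failing variant (my own illustration):
appendSwapped : Vect n a -> Vect m a -> Vect (m + n) a
appendSwapped xs ys = xs ++ ys    -- rejected: Idris cannot unify plus n m with plus m n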
Solutions? Workarounds?
Whenever you can, it is infinitely more convenient to have a function defined using the same recurrence pattern as the computation that happens in its type. Indeed, when that is the case, the type will be simplified by computation in the various branches of the function definition.
When you can't, you can rewrite by proofs that two things are equal (e.g. that n + 1 = S n for all n) to adjust the mismatch between a term's type and the expected one. Even though this may seem more convenient than refactoring your code to use a different recurrence pattern, and is sometimes necessary, it is usually the start of a path full of pitfalls.
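For example, here is a sketch of the rewriting approach applied to cons from the question, using plusCommutative from the Prelude (the use of rewrite here is my own illustration, not code from the question):
cons : a -> Vect n a -> Vect (n + 1) a
cons {n} x xs = rewrite plusCommutative n 1 in x :: xs
-- plusCommutative n 1 : n + 1 = 1 + n, so the expected type Vect (n + 1) a
-- is rewritten to Vect (1 + n) a, which reduces to Vect (S n) a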
Type-Driven Development with Idris presents:
twoPlusTwoNotFive : 2 + 2 = 5 -> Void
twoPlusTwoNotFive Refl impossible
Is the above a function or a value? If it's the former, then why are there no variable arguments, as in, e.g.
add1 : Int -> Int
add1 x = x + 1
In particular, I'm confused by the lack of = in twoPlusTwoNotFive.
impossible calls out combinations of arguments which are, well, impossible. Idris absolves you of the responsibility to provide a right-hand side when a case is impossible.
In this instance, we're writing a function of type (2 + 2 = 5) -> Void. Void is a type with no values, so if we succeed in implementing such a function we should expect that all of its cases will turn out to be impossible. Now, = has only one constructor (Refl : x = x), and it can't be used here because it requires ='s arguments to be definitionally equal - they have to be the same x. So, naturally, it's impossible. There's no way anyone could successfully call this function at runtime, and we're saved from having to prove something that isn't true, which would have been quite a big ask.
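Such a proof is still useful: once you have a value of Void, you can conclude anything from it with void (or absurd). A small sketch (the name fromBadArith is mine):
fromBadArith : 2 + 2 = 5 -> Int
fromBadArith prf = void (twoPlusTwoNotFive prf)   -- Void eliminates into any type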
Here's another example: you can't index into an empty vector. Scrutinising the Vect and finding it to be [] tells us that n ~ Z; since Fin n is the type of natural numbers less than n there's no value a caller could use to fill in the second argument.
at : Vect n a -> Fin n -> a
at [] FZ impossible
at [] (FS i) impossible
at (x::xs) FZ = x
at (x::xs) (FS i) = at xs i
Much of the time you're allowed to omit impossible cases altogether.
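For instance, at can be written with the [] cases simply left out, and Idris will check that the omitted cases are unreachable (this mirrors how Data.Vect.index is written in the library):
at : Vect n a -> Fin n -> a
at (x::xs) FZ     = x
at (x::xs) (FS i) = at xs i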
I slightly prefer Agda's notation for the same concept, which uses the symbol () to explicitly pinpoint which bit of the input expression is impossible.
twoPlusTwoNotFive : (2 + 2 ≡ 5) -> ⊥
twoPlusTwoNotFive () -- again, no RHS
at : forall {n}{A : Set} -> Vec A n -> Fin n -> A
at [] ()
at (x ∷ xs) zero = x
at (x ∷ xs) (suc i) = at xs i
I like it because sometimes you only learn that a case is impossible after doing some further pattern matching on the arguments; when the impossible thing is buried several layers down it's nice to have a visual aid to help you spot where it was.