Plus vs S in a function type - idris

The following declaration of vector cons
cons : a -> Vect n a -> Vect (n + 1) a
cons x xs = x :: xs
fails with the error
Type mismatch between
S n
and
plus n 1
while the following vector append compiles and works
append : Vect n a -> Vect m a -> Vect (n + m) a
append xs ys = xs ++ ys
Why is type-level plus accepted in the second case but not in the first? What's the difference?

Why does x :: xs : Vect (n + 1) a lead to a type error?
(+) is defined by induction on its first argument so n + 1 is stuck (because n is a stuck expression, a variable in this case).
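For reference, (+) on Nat is the prelude's plus, which pattern matches on its first argument (this is essentially the prelude definition):
plus : Nat -> Nat -> Nat
plus Z right = right
plus (S left) right = S (plus left right)
So plus n 1 cannot reduce until n is known to be Z or S of something.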
(::) is defined with the type a -> Vect m a -> Vect (S m) a.
So Idris needs to solve the unification problem n + 1 =? S m, and because you have a stuck term on one side and an expression with a head constructor on the other, the two simply won't unify.
If you had written 1 + n on the other hand, Idris would have reduced that expression down to S n and the unification would have been successful.
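For instance, the following variant (renamed cons' purely for illustration) typechecks as written, because the index 1 + n reduces to S n before unification:
import Data.Vect

cons' : a -> Vect n a -> Vect (1 + n) a
cons' x xs = x :: xs  -- 1 + n reduces to S n, the index produced by (::)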
Why does xs ++ ys : Vect (n + m) a succeed?
(++) is defined with the type Vect p a -> Vect q a -> Vect (p + q) a.
Under the assumption that xs : Vect n a and ys : Vect m a, you will have to solve the constraints:
Vect n a ?= Vect p a (xs is the first argument passed to (++))
Vect m a ?= Vect q a (ys is the second argument passed to (++))
Vect (n + m) a ?= Vect (p + q) a (xs ++ ys is the result of append)
The first two constraints give n = p and m = q respectively, which makes the third constraint hold: everything works out.
Consider append : Vect n a -> Vect m a -> Vect (m + n) a.
Notice how I have swapped the two arguments to (+) in this one. You would then be in a situation similar to your first question: after a bit of unification, you would end up with the constraint m + n ?= n + m, which Idris, not knowing that (+) is commutative, wouldn't be able to solve.
Solutions? Workarounds?
Whenever you can, it is infinitely more convenient to define a function using the same recurrence pattern as the computation that happens in its type. Indeed, when that is the case, the type is simplified by computation in the various branches of the function definition.
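For example, an append that recurses on its first vector follows the same recursion as plus, so the index computes in every branch. The sketch below (named myAppend to avoid clashing with the question's append) mirrors how (++) is defined for Vect:
import Data.Vect

myAppend : Vect n a -> Vect m a -> Vect (n + m) a
myAppend [] ys = ys                          -- goal index Z + m reduces to m
myAppend (x :: xs) ys = x :: myAppend xs ys  -- goal index (S k) + m reduces to S (k + m)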
When you can't, you can rewrite with proofs that two things are equal (e.g. that n + 1 = S n for all n) to fix the mismatch between a term's type and the expected one. Even though this may seem more convenient than refactoring your code to use a different recurrence pattern, and is sometimes necessary, it is usually the start of a path full of pitfalls.
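For instance, here is a sketch of the original cons repaired with such a proof, using plusCommutative from the prelude (the implicit n is brought into scope so the proof can mention it):
import Data.Vect

cons : a -> Vect n a -> Vect (n + 1) a
cons {n} x xs = rewrite plusCommutative n 1 in x :: xs
-- plusCommutative n 1 : n + 1 = 1 + n, so after the rewrite the goal is
-- Vect (1 + n) a, which reduces to Vect (S n) a, the type of x :: xs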

Related

Dependent parameters or type-level functions with proofs?

I'm deciding between parametrising my type by a Vect r Nat and List Nat. At first, I thought that with the former I can constrain the vector length like
foo : Vect m Nat -> Vect m Nat -> Vect (2 * m) Nat
and this makes Vect more powerful, but then I realised I can do the same, albeit more verbosely, with lists and proofs
foo : (x : List Nat) -> (y : List Nat)
      -> {auto _ : length x = length y} -> List (2 * length x) Nat
Now I'm wondering if this is generally true. Is parametrising with values any different to using type-level functions with proofs?

Check a function application isn't possible?

I'd like to check my function signatures mean what I think they mean. I can of course check the positive variants
test_concat : Vect [3] Nat -> Vect [2] Nat -> Vect [5] Nat
test_concat x y = x ++ y
but I'd like to test some negative variants, something like
test_concat_invalid : Vect [3] Nat -> Vect [2] Nat -> Vect [1] Nat -> Void
test_concat_invalid x y = x ++ y impossible
though obviously the syntax is made up. I've found I can do something similar to test whether I've written a data constructor for proofs correctly. Is this possible?
You can create a type which is indexed by a separate type and value:
data Typed : Type -> a -> Type where
  MkTyped : {A : Type} -> {x : A} -> Typed A x
But provide only one constructor, which links the type index with the value index; i.e. Typed A x can only be constructed if the value x has the type A.
This lets us write a proposition like so:
test_concat_invalid
  : (x : Vect 3 Nat)
  -> (y : Vect 2 Nat)
  -> Typed (Vect 1 Nat) (x ++ y)
  -> Void
test_concat_invalid _ _ MkTyped impossible
When Idris tries to unify the types, it sees that 1 and 5 (the length of x ++ y, by construction) are different. This means Idris will happily accept that the MkTyped constructor is impossible in this proposition.
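For comparison, a positive variant can be stated with the same device; this is a sketch assuming the Typed type above (the name test_concat_valid is not from the question):
test_concat_valid
  : (x : Vect 3 Nat)
  -> (y : Vect 2 Nat)
  -> Typed (Vect 5 Nat) (x ++ y)
test_concat_valid _ _ = MkTyped  -- x ++ y : Vect (3 + 2) Nat, and 3 + 2 reduces to 5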

Proving `weaken` doesn't change the value of a number

Let's say we want to prove that weakening the upper bound of a Data.Fin doesn't change the value of the number. The intuitive way to state this is:
weakenEq : (num : Fin n) -> num = weaken num
Let's now generate the defini... Hold on! Let's think a bit about that statement. num and weaken num have different types. Can we state the equality in this case?
The documentation on = suggests we can try to, but we might want to use ~=~ instead. Well, anyway, let's go ahead and generate the definition and case-split, resulting in
weakenEq : (num : Fin n) -> num = weaken num
weakenEq FZ = ?weakenEq_rhs_1
weakenEq (FS x) = ?weakenEq_rhs_2
The goal in the weakenEq_rhs_1 hole is FZ = FZ, which still makes sense from the value point of view. So we optimistically replace the hole with Refl, only to fail:
When checking right hand side of weakenEq with expected type
FZ = weaken FZ
Unifying k and S k would lead to infinite value
A somewhat cryptic error message, so we wonder if that's really related to the types being different.
Anyway, let's try again, but now with ~=~ instead of =. Unfortunately, the error is still the same.
So, how would one state and prove that weaken x doesn't change the value of x? Does it really make sense to? What should I do if that's part of a larger proof where I might want to replace a Vect n (Fin k) with the Vect n (Fin (S k)) obtained by mapping weaken over the original vector?
If you really want to prove that the value of the Fin n does not change after applying the weaken function, you would need to prove the equality of these values:
weakenEq: (num: Fin n) -> finToNat num = finToNat $ weaken num
weakenEq FZ = Refl
weakenEq (FS x) = cong $ weakenEq x
To your second problem/Markus' comment about map (Data.Fin.finToNat) v = map (Data.Fin.finToNat . Data.Fin.weaken) v:
vectorWeakenEq : (v : Vect n (Fin k)) ->
                 map Fin.finToNat v = map (Fin.finToNat . Fin.weaken) v
vectorWeakenEq [] = Refl
vectorWeakenEq (x :: xs) =
  rewrite sym $ weakenEq x in
    cong {f=(::) (finToNat x)} (vectorWeakenEq xs)
And to see why num = weaken num won't work, let's take a look at a counterexample:
getSize : Fin n -> Nat
getSize _ {n} = n
Now with x : Fin n, getSize x = n, while getSize (weaken x) = S n, so the two are not equal. This won't happen with functions which only depend on the constructors, like finToNat. So you have to restrict yourself to those and prove that they behave that way.
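As a quick illustration (the names sizeBefore and sizeAfter are mine; getSize is the function defined just above, and Fin, FZ and weaken come from Data.Fin as in the rest of the answer):
sizeBefore : getSize (the (Fin 2) FZ) = 2
sizeBefore = Refl
sizeAfter : getSize (weaken (the (Fin 2) FZ)) = 3
sizeAfter = Refl  -- weaken moved FZ from Fin 2 to Fin 3, so getSize reports 3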

Total definition of Gcd in Idris

I am working on a small project whose goal is to give a definition of Gcd that returns the gcd of two numbers along with a proof that the result is correct. But I am unable to give a total definition of Gcd. The definition of gcd in Idris 1.3.0 is total but uses assert_total to force totality, which defeats the purpose of my project. Does someone have a total definition of Gcd which does not use assert_total?
P.S. My code is uploaded at https://github.com/anotherArka/Idris-Number-Theory.git
I have a version which uses the Accessible relation to show that the sum of the two numbers you're finding the gcd of gets smaller on every recursive call: https://gist.github.com/edwinb/1907723fbcfce2fde43a380b1faa3d2c#file-gcd-idr-L25
It relies on this, from Prelude.WellFounded:
data Accessible : (rel : a -> a -> Type) -> (x : a) -> Type where
  Access : (rec : (y : a) -> rel y x -> Accessible rel y) ->
           Accessible rel x
The general idea is that you can make a recursive call by explicitly stating what gets smaller, and providing a proof on each recursive call that it really does get smaller. For gcd, it looks like this (gcdt for the total version since gcd is in the prelude):
gcdt : Nat -> Nat -> Nat
gcdt m n with (sizeAccessible (m + n))
  gcdt m Z | acc = m
  gcdt Z n | acc = n
  gcdt (S m) (S n) | (Access rec)
      = if m > n
           then gcdt (minus m n) (S n) | rec _ (minusSmaller_1 _ _)
           else gcdt (S m) (minus n m) | rec _ (minusSmaller_2 _ _)
sizeAccessible is defined in the prelude and allows you to state explicitly that it's the sum of the inputs that's getting smaller. The recursion is accepted because each recursive call goes through rec, which is an argument of the Access rec pattern and therefore structurally smaller.
If you want to see in a bit more detail what's going on, you can try replacing the minusSmaller_1 and minusSmaller_2 calls with holes, to see what you have to prove:
gcdt : Nat -> Nat -> Nat
gcdt m n with (sizeAccessible (m + n))
  gcdt m Z | acc = m
  gcdt Z n | acc = n
  gcdt (S m) (S n) | (Access rec)
      = if m > n
           then gcdt (minus m n) (S n) | rec _ ?smaller1
           else gcdt (S m) (minus n m) | rec _ ?smaller2
For example:
*gcd> :t smaller1
  m : Nat
  n : Nat
  rec : (y : Nat) ->
        LTE (S y) (S (plus m (S n))) -> Accessible Smaller y
--------------------------------------
smaller1 : LTE (S (plus (minus m n) (S n))) (S (plus m (S n)))
I don't know of anywhere that documents Accessible in much detail, at least for Idris (you might find examples for Coq), but there are more examples in the base libraries in Data.List.Views, Data.Vect.Views and Data.Nat.Views.
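As a smaller self-contained use of the same pattern (the halve function and its termination proof are mine, not from the gist), here is halving driven by sizeAccessible; the proof term lteSuccRight (LTESucc lteRefl) has type LTE (S k) (S (S k)), i.e. k is Smaller than S (S k):
halve : Nat -> Nat
halve n with (sizeAccessible n)
  halve Z | acc = Z
  halve (S Z) | acc = Z
  halve (S (S k)) | (Access rec)
      = S (halve k | rec k (lteSuccRight (LTESucc lteRefl)))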
FYI: the implementation in Idris 1.3.0 (and probably 1.2.0) is total, but uses the assert_total function to achieve this.
:printdef gcd
gcd : (a : Nat) ->
      (b : Nat) -> {auto ok : NotBothZero a b} -> Nat
gcd a 0 = a
gcd 0 b = b
gcd a (S b) = assert_total (gcd (S b)
                           (modNatNZ a (S b) SIsNotZ))

Struggling with rewrite tactic in Idris

I'm going through Terry Tao's real analysis textbook, which builds up fundamental mathematics from the natural numbers up. By formalizing as many of the proofs as possible, I hope to familiarize myself with both Idris and dependent types.
I have defined the following datatype:
data GE : Nat -> Nat -> Type where
  Ge : (n : Nat) -> (m : Nat) -> GE n (n + m)
to represent the proposition that one natural number is greater than or equal to another.
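For concreteness (the name geExample is mine), GE 3 5 is inhabited because 5 is 3 + 2:
geExample : GE 3 5
geExample = Ge 3 2  -- Ge 3 2 : GE 3 (3 + 2), and 3 + 2 reduces to 5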
I'm currently struggling to prove reflexivity of this relation, i.e. to construct the proof with signature
geRefl : GE n n
My first attempt was to simply try geRefl {n} = Ge n Z, but this has type GE n (plus n Z). To get this to unify with the desired signature, we obviously have to perform some kind of rewrite, presumably involving the lemma
plusZeroRightNeutral : (left : Nat) -> left + fromInteger 0 = left
My best attempt is the following:
geRefl : GE n n
geRefl {n} = x
  where x : GE n (n + Z)
        x = rewrite plusZeroRightNeutral n in Ge n Z
but this does not typecheck.
Can you please give a correct proof of this theorem, and explain the reasoning behind it?
The first problem is superficial: you are trying to apply the rewriting at the wrong place. If you have x : GE n (n + Z), then you'll have to rewrite its type if you want to use it as the definition of geRefl : GE n n, so you'd have to write
geRefl : GE n n
geRefl {n} = rewrite plusZeroRightNeutral n in x
But that still won't work. The real problem is you only want to rewrite a part of the type GE n n: if you just rewrite it using n + 0 = n, you'd get GE (n + 0) (n + 0), which you still can't prove using Ge n 0 : GE n (n + 0).
What you need to do is use the fact that if a = b, then x : GE n a can be turned into x' : GE n b. This is exactly what the replace function in the standard library provides:
replace : (x = y) -> P x -> P y
Using this, by setting P = GE n, and using Ge n 0 : GE n (n + 0), we can write geRefl as
geRefl {n} = replace {P = GE n} (plusZeroRightNeutral n) (Ge n 0)
(note that Idris is perfectly able to infer the implicit parameter P, so it works without that, but I think in this case it makes it clearer what is happening)