Type-Driven Development with Idris presents:
twoPlusTwoNotFive : 2 + 2 = 5 -> Void
twoPlusTwoNotFive Refl impossible
Is the above a function or a value? If it's the former, then why are there no variable arguments, as there are in, e.g.
add1 : Int -> Int
add1 x = x + 1
In particular, I'm confused at the lack of = in twoPlusTwoNotFive.
impossible calls out combinations of arguments which are, well, impossible. Idris absolves you of the responsibility to provide a right-hand side when a case is impossible.
In this instance, we're writing a function of type (2 + 2 = 5) -> Void. Void is a type with no values, so if we succeed in implementing such a function we should expect that all of its cases will turn out to be impossible. Now, = has only one constructor (Refl : x = x), and it can't be used here because it requires ='s arguments to be definitionally equal - they have to be the same x. So, naturally, it's impossible. There's no way anyone could successfully call this function at runtime, and we're saved from having to prove something that isn't true, which would have been quite a big ask.
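To see the proof doing some work: a value of type Void can be eliminated into any type at all (for instance with void from the Prelude), so if anyone could hand us a proof of 2 + 2 = 5 we could conclude whatever we liked. A quick sketch (the name fromAbsurdity is mine, not from the book):
-- from a proof of Void we can conclude anything
fromAbsurdity : 2 + 2 = 5 -> a
fromAbsurdity eq = void (twoPlusTwoNotFive eq)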
Here's another example: you can't index into an empty vector. Scrutinising the Vect and finding it to be [] tells us that n ~ Z; since Fin n is the type of natural numbers less than n there's no value a caller could use to fill in the second argument.
at : Vect n a -> Fin n -> a
at [] FZ impossible
at [] (FS i) impossible
at (x::xs) FZ = x
at (x::xs) (FS i) = at xs i
Much of the time you're allowed to omit impossible cases altogether.
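For instance, this shorter definition of at (my abbreviation of the one above, same type) should also be accepted, since Idris can rule out the empty-vector cases on its own:
at : Vect n a -> Fin n -> a
at (x::xs) FZ = x
at (x::xs) (FS i) = at xs i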
I slightly prefer Agda's notation for the same concept, which uses the symbol () to explicitly pinpoint which bit of the input expression is impossible.
twoPlusTwoNotFive : (2 + 2 ≡ 5) -> ⊥
twoPlusTwoNotFive () -- again, no RHS
at : forall {n}{A : Set} -> Vec A n -> Fin n -> A
at [] ()
at (x ∷ xs) zero = x
at (x ∷ xs) (suc i) = at xs i
I like it because sometimes you only learn that a case is impossible after doing some further pattern matching on the arguments; when the impossible thing is buried several layers down it's nice to have a visual aid to help you spot where it was.
Related
This is a follow-up to this question. Thanks to Kwartz I now have a statement of the proposition "if b divides a, then b divides a * c for any integer c", namely:
alsoDividesMultiples : (a, b, c : Integer) ->
DivisibleBy a b ->
DivisibleBy (a * c) b
Now, the goal has been to prove that statement. I realized that I do not understand how to operate on dependent pairs. I tried a simpler problem, which was to show that every number is divisible by 1. After a shameful amount of thought on it, I thought I had come up with a solution:
-- All numbers are divisible by 1.
DivisibleBy a 1 = let n = a in
(n : Integer ** a = 1 * n)
This compiles, but I had doubts it was valid. To verify that I was wrong, I changed it slightly to:
-- All numbers are divisible by 1.
DivisibleBy a 1 = let n = a in
(n : Integer ** a = 2 * n)
This also compiles, which means my "English" interpretation is certainly incorrect, for I would interpret this as "All numbers are divisible by one since every number is two times another integer". Thus, I am not entirely sure what I am demonstrating with that statement. So, I went back and tried a more conventional way of stating the problem:
oneDividesAll : (a : Integer) ->
(DivisibleBy a 1)
oneDividesAll a = ?sorry
For the implementation of oneDividesAll I am not really sure how to "inject" the fact that (n = a). For example, I would write (in English) this proof as:
We wish to show that 1 | a. If so, it follows that a = 1 * n for some n. Let n = a, then a = a * 1, which is true by identity.
I am not sure how to really say: "Consider when n = a". From my understanding, the rewrite tactic requires a proof that n = a.
I tried adapting my fallacious proof:
oneDividesAll : (a : Integer) ->
(DivisibleBy a 1)
oneDividesAll a = let n = a in (n : Integer ** a = b * n)
But this gives:
|
12 | oneDividesAll a = let n = a in (n : Integer ** a = b * n)
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When checking right hand side of oneDividesAll with expected type
DivisibleBy a 1
Type mismatch between
Type (Type of DPair a P)
and
(n : Integer ** a = prim__mulBigInt 1 n) (Expected type)
Any help/hints would be appreciated.
First off, if you want to prove properties of numbers, you should use Nat (or other inductive types). Integer is built on primitives, so an argument can't be followed any further than prim__mulBigInt : Integer -> Integer -> Integer; all Idris knows is that you pass in two Integers and get one back. The compiler doesn't know anything about what the resulting Integer looks like, so it cannot prove anything about it.
So I'll go along with Nat:
DivisibleBy : Nat -> Nat -> Type
DivisibleBy a b = (n : Nat ** a = b * n)
Again, this is a proposition, not a proof. DivisibleBy 6 0 is a valid type, but you won't find a proof : DivisibleBy 6 0. So you were right with
oneDividesAll : (a : Nat) ->
(DivisibleBy a 1)
oneDividesAll a = ?sorry
With that, you could generate proofs of the form oneDividesAll a : DivisibleBy a 1. So, what goes into the hole ?sorry? :t sorry gives us sorry : (n : Nat ** a = plus n 0) (which is just DivisibleBy a 1, resolved as far as Idris can take it). You got confused about the right part of the pair: x = y is a type, but now we need a value – that's what your last, cryptic error message hints at. = has only one constructor, Refl : x = x. So we need to get both sides of the equality to the same value, and the result will look something like (n ** Refl).
As you thought, we need to set n to a:
oneDividesAll a = (a ** ?hole)
For the needed rewrite tactic we check out :search plus a 0 = a, and see plusZeroRightNeutral has the right type.
oneDividesAll a = (a ** rewrite plusZeroRightNeutral a in ?hole)
Now :t hole gives us hole : a = a so we can just auto-complete to Refl:
oneDividesAll a = (a ** rewrite plusZeroRightNeutral a in Refl)
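As a quick sanity check (these examples are mine, not part of the answer), the finished proof specialises to concrete numbers, and other divisibility facts can be witnessed directly by choosing the right first component:
fourDivisibleByOne : DivisibleBy 4 1
fourDivisibleByOne = oneDividesAll 4
-- 3 * 2 reduces to 6, so Refl checks
sixDivisibleByThree : DivisibleBy 6 3
sixDivisibleByThree = (2 ** Refl)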
A good tutorial on theorem proving (where it's also explained why plus a Z does not reduce) is in the Idris Doc.
Let's say we want to prove that weakening the upper bound of a Data.Fin doesn't change the value of the number. The intuitive way to state this is:
weakenEq : (num : Fin n) -> num = weaken num
Let's now generate the defini... Hold on! Let's think a bit about that statement. num and weaken num have different types. Can we state the equality in this case?
The documentation on = suggests we can try to, but we might want to use ~=~ instead. Well, anyway, let's go ahead and generate the definition and case-split, resulting in
weakenEq : (num : Fin n) -> num = weaken num
weakenEq FZ = ?weakenEq_rhs_1
weakenEq (FS x) = ?weakenEq_rhs_2
The goal in the weakenEq_rhs_1 hole is FZ = FZ, which still makes sense from the value point of view. So we optimistically replace the hole with Refl, but only to fail:
When checking right hand side of weakenEq with expected type
FZ = weaken FZ
Unifying k and S k would lead to infinite value
A somewhat cryptic error message, so we wonder if that's really related to the types being different.
Anyway, let's try again, but now with ~=~ instead of =. Unfortunately, the error is still the same.
So, how would one state and prove that weaken x doesn't change the value of x? Does it really make sense to? What should I do if that's part of a larger proof where I might want to rewrite a Vect n (Fin k) into a Vect n (Fin (S k)) obtained by mapping weaken over the original vector?
If you really want to prove that the value of the Fin n does not change after applying the weaken function, you would need to prove the equality of these values:
weakenEq: (num: Fin n) -> finToNat num = finToNat $ weaken num
weakenEq FZ = Refl
weakenEq (FS x) = cong $ weakenEq x
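In case the FS case looks mysterious: cong lifts an equality through a function, here S. A sketch with the implicit function argument spelled out (weakenEqFS is just my name for the specialised statement):
weakenEqFS : (x : Fin n) -> finToNat (FS x) = finToNat (weaken (FS x))
weakenEqFS x = cong {f = S} (weakenEq x)
Both sides reduce: finToNat (FS x) to S (finToNat x), and finToNat (weaken (FS x)) to S (finToNat (weaken x)), so lifting weakenEq x through S is exactly what the goal asks for.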
To your second problem/Markus' comment about map (Data.Fin.finToNat) v = map (Data.Fin.finToNat . Data.Fin.weaken) v:
vectorWeakenEq : (v: Vect n (Fin k)) ->
map Fin.finToNat v = map (Fin.finToNat . Fin.weaken) v
vectorWeakenEq [] = Refl
vectorWeakenEq (x :: xs) =
rewrite sym $ weakenEq x in
cong {f=(::) (finToNat x)} (vectorWeakenEq xs)
And to see why num = weaken num won't work, let's take a look at a counterexample:
getSize : Fin n -> Nat
getSize _ {n} = n
Now with x : Fin n, getSize x = n, while getSize (weaken x) = n + 1, so the two sides differ. This can't happen with functions that only depend on the constructors, like finToNat. So you have to constrain yourself to those and prove that they behave that way.
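To make that concrete (my own sketch): picking a particular x shows the two sides evaluate to different naturals, so no general equality between x and weaken x can hold.
getSizeExample : (Nat, Nat)
getSizeExample = (getSize (the (Fin 2) FZ), getSize (weaken (the (Fin 2) FZ)))
-- evaluates to (2, 3)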
In the Idris Tutorial a function for filtering vectors is based on dependent pairs.
filter : (a -> Bool) -> Vect n a -> (p ** Vect p a)
filter f [] = (_ ** [])
filter f (x :: xs) with (filter f xs)
  | (_ ** xs') = if f x then (_ ** x :: xs') else (_ ** xs')
But why is it necessary to put this in terms of a dependent pair instead of something more direct, such as the following?
filter' : (a -> Bool) -> Vect n a -> Vect p a
In both cases the type of p must be determined, but in my supposed alternative the redundancy of listing p twice is eliminated.
My naive attempts at implementing filter' failed, so I was wondering: is there a fundamental reason that it can't be implemented? Or can filter' be implemented, and perhaps filter was just a poor example to showcase dependent pairs in Idris? But if that is the case, then in what situations would dependent pairs be useful?
Thanks!
The difference between filter and filter' is the difference between existential and universal quantification. If (a -> Bool) -> Vect n a -> Vect p a were the correct type for filter, that would mean filter returns a vector of length p, where the caller gets to specify what p should be.
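Two ways to see it (my own sketch): with the universally quantified type the caller could demand any length, whereas with the dependent pair the caller has to unpack whatever length filter actually produced.
-- hypothetical: if filter' : (a -> Bool) -> Vect n a -> Vect p a existed,
-- a caller could demand a 3-element result from a predicate that rejects everything:
--   demandThree : Vect 3 Int
--   demandThree = filter' (const False) [1, 2, 3]

-- with the dependent-pair version the caller unpacks the length filter chose:
countPositive : Vect n Int -> Nat
countPositive xs with (filter (> 0) xs)
  | (p ** _) = p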
Kim Stebel's answer is right on the money. Let me just note that this was already discussed on the Idris mailing list back in 2012 (!!):
filter for vector, a question - Idris Programming Language
What raichoo posted there can help clarify it, I think; the real signature of your filter' is
filter' : {p : Nat} -> {n : Nat} -> {a : Type} -> (a -> Bool) -> Vect n a -> Vect p a
from which it should be obvious that this is not what filter should (or even could) do; p actually depends on the predicate and the vector you are filtering, and you can (actually need to) express this using a dependent pair. Note that in the pair (p ** Vect p a), p (and thus Vect p a) implicitly depends on the (unnamed) predicate and vector appearing before it in its signature.
Expanding on this, why a dependent pair? You want to return a vector, but there's no "Vector with unknown length" type; you need a length value for obtaining a Vector type. But then you can just think "OK, I will return a Nat together with a vector with that length". The type of this pair is, unsurprisingly, an example of a dependent pair. In more detail, a dependent pair DPair a P is a type built out of
A type a
A function P: a -> Type
A value of that type DPair a P is a pair of values
x: a
y: P x
At this point I think it is just the syntax that might be misleading you. The type (p ** Vect p a) is DPair Nat (\p => Vect p a); the p there is not a parameter of filter or anything like it. All this can be a bit confusing at first; if so, maybe it helps to think of (p ** Vect p a) as a substitute for the missing "vector with unknown length" type.
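A small example of the desugaring (mine, not from the tutorial): the same value inhabits the type written either way.
anonLengthVec : (p ** Vect p Char)
anonLengthVec = (2 ** ['a', 'b'])

-- identical type, with DPair written out explicitly
anonLengthVec' : DPair Nat (\p => Vect p Char)
anonLengthVec' = (2 ** ['a', 'b'])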
Not an answer, but additional context
Idris 1 documentation - https://docs.idris-lang.org/en/latest/tutorial/typesfuns.html#dependent-pairs
Idris 2 documentation - https://idris2.readthedocs.io/en/latest/tutorial/typesfuns.html?highlight=dependent#dependent-pairs
In Idris 2 the dependent pair is defined here, and is similar to Exists and Subset, but BOTH of its values are NOT erased at runtime.
I want to be able to say, for a function f with signature t -> t, that for all x in t, f (f x) = x.
When I run this:
%default total
-- The type of parity values - either Even or Odd
data Parity = Even | Odd
-- Even is the opposite of Odd and Odd is the opposite of Even
opposite: Parity -> Parity
opposite Even = Odd
opposite Odd = Even
-- The 'opposite' function is its own inverse
opposite_its_own_inverse : (p : Parity) -> opposite (opposite p) = p
opposite_its_own_inverse Even = Refl
opposite_its_own_inverse Odd = Refl
-- abstraction of being one's own inverse
IsItsOwnInverse : {t : Type} -> (f: t->t) -> Type
IsItsOwnInverse {t} f = (x: t) -> f (f x) = x
opposite_IsItsOwnInverse : IsItsOwnInverse {t=Parity} opposite
opposite_IsItsOwnInverse = opposite_its_own_inverse
I get this error message:
- + Errors (1)
`-- own_inverse_example.idr line 22 col 25:
When checking right hand side of opposite_IsItsOwnInverse with expected type
IsItsOwnInverse opposite
Type mismatch between
(p : Parity) ->
opposite (opposite p) = p (Type of opposite_its_own_inverse)
and
(x : Parity) -> opposite (opposite x) = x (Expected type)
Specifically:
Type mismatch between
opposite (opposite v0)
and
opposite (opposite v0)
Am I doing something wrong, or is that just a bug?
If I replace the last 'opposite_its_own_inverse' with '?hole', I get:
Holes
This buffer displays the unsolved holes from the currently-loaded code. Press
the [P] buttons to solve the holes interactively in the prover.
- + Main.hole [P]
`-- opposite : Parity -> Parity
-------------------------------------------------------
Main.hole : (x : Parity) -> opposite (opposite x) = x
The name for this property is an involution. Your type for this property is pretty good but I like writing it like so:
Involution : (t -> t) -> t -> Type
Involution f x = f (f x) = x
The first problem with your opposite_IsItsOwnInverse is that you haven't fully applied Involution, so you haven't yet gotten a type. You also need to apply a Parity so that Involution gives a Type, like so:
opposite_IsItsOwnInverse : Involution opposite p
That p is an implicit argument. Implicit arguments are implicitly created by lowercase identifiers in type signatures. This is like writing:
opposite_IsItsOwnInverse : {p : Parity} -> Involution opposite p
But this leads to another problem with the signature - opposite is also lowercase, so it's getting treated as an implicit argument! (This is why you get the confusing error message: Idris has created another variable called opposite.) You have 2 possible solutions here: qualify the identifier, or use an identifier which starts with an uppercase letter.
I'll assume the module you're writing uses the default name of Main. The final type signature looks like:
opposite_IsItsOwnInverse : Involution Main.opposite p
And the implementation will just use the implicit argument and supply it to the function you've already written:
opposite_IsItsOwnInverse {p} = opposite_its_own_inverse p
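As a quick check (my own usage sketch), specialising the implicit p recovers the concrete equations proved earlier:
oppositeEvenTwice : opposite (opposite Even) = Even
oppositeEvenTwice = opposite_IsItsOwnInverse {p = Even}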
Let's say I want to define the Fibonacci function as the following function:
fibo : Int -> Int
fibo 1 = 1
fibo 2 = 2
fibo n = fibo (n-1) + fibo (n-2)
This function is obviously not total since it's undefined for integers below 1, so I need to constrain the input argument somehow...
I've tried playing around with defining a new data type MyInt. Something along the lines:
-- bottom is the lower limit
data MyInt : (bottom: Int) -> (n: Int) -> Type
where
...
fibo : MyInt 1 n -> Int
...
However I get lost rather quickly.
How can I constrain the input argument to, for example, my fibo function to be integer values of 1 or above?
There are actually two reasons why Idris will not recognise the fibo function as total. Firstly, as you pointed out, it is not defined for integers less than 1, but secondly, it calls itself recursively. Although Idris is capable of recognising the totality of recursive functions, it can generally only do so when it can be shown that the argument to the recursive call is 'smaller' (i.e. closer to a base case*) than the original argument (for example, if a function receives a list as an argument, it can call itself with the tail of the list without necessarily sacrificing totality, because the tail is a substructure of the original list and thus closer to Nil). The problem with expressions like (n-1) and (n-2), when they are of type Int, is that although they are numerically smaller than n, they are not structurally smaller, because Int is not inductively defined and so has no base cases. Therefore the totality checker is unable to satisfy itself that the recursion will always eventually reach a base case (even though it might seem obvious to us), and so it will not consider fibo to be total.
First off, let's solve the recursion problem. Instead of Int, we can use an inductively-defined datatype such as Nat:
data Nat =
Z | S Nat
(A natural number is either zero, or the successor of another natural number.)
This allows us to rewrite fibo as:
fibo : Nat -> Int
fibo (S Z) = 1
fibo (S (S Z)) = 2
fibo (S (S n)) = fibo (S n) + fibo n
(Note how in the recursive case, instead of calculating (n-1) and (n-2) explicitly, we produce them by pattern matching on the argument, thereby demonstrating to Idris that they are structurally smaller.)
This new definition of fibo is still not entirely total, though, because it lacks a case for Z (i.e. zero). If we don't want to provide for such a case, then we need to give Idris some assurance that it will not be allowed to occur. One way we can do this is to require a proof that the argument to fibo is greater than or equal to one (or equivalently, one is less than or equal to the argument):
fibo : (n : Nat) -> LTE 1 n -> Int
fibo Z LTEZero impossible
fibo Z (LTESucc _) impossible
fibo (S Z) _ = 1
fibo (S (S Z)) _ = 2
fibo (S (S (S n))) _ = fibo (S (S n)) (LTESucc LTEZero) + fibo (S n) (LTESucc LTEZero)
LTE 1 n is the type whose values are proofs that 1 ≤ n (within the natural numbers). LTEZero represents the axiom that zero ≤ any natural number, and LTESucc represents the rule that if n ≤ m, then (successor of n) ≤ (successor of m). The impossible keyword indicates that a given case cannot occur. In the above definition, it is impossible for the first argument to fibo to be zero because there is no way to prove that 1 ≤ 0. For any other natural number n, we can prove that 1 ≤ n using (LTESucc LTEZero).
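For instance (my own illustration, not part of the answer), a proof that 1 ≤ 3 uses one application of each constructor, and can then be passed to fibo explicitly:
oneLTEThree : LTE 1 3
oneLTEThree = LTESucc LTEZero

fiboThree : Int
fiboThree = fibo 3 (LTESucc LTEZero)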
Now at last fibo is total, but it's rather cumbersome to have to provide it with an explicit proof that its argument is greater than or equal to 1. Luckily, we can mark the proof argument as auto implicit:
fibo : (n : Nat) -> {auto p : LTE 1 n} -> Int
fibo Z {p = LTEZero} impossible
fibo Z {p = (LTESucc _)} impossible
fibo (S Z) = 1
fibo (S (S Z)) = 2
fibo (S (S (S n))) = fibo (S (S n)) + fibo (S n)
Idris will now automatically find a proof that 1 ≤ n where possible, otherwise we will still be required to provide one ourselves.
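For example (a usage sketch of mine), a call on a concrete argument now needs no visible proof at all:
fiboFive : Int
fiboFive = fibo 5  -- Idris finds the LTE 1 5 proof automatically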
* There may well be some codata-related subtleties that I'm glossing over here without realising, but this is the broad principle.