Defining the predecessor function (with pred 0 = 0) for the natural numbers in Lean

I'm learning the Lean proof assistant. An exercise in https://leanprover.github.io/theorem_proving_in_lean/inductive_types.html is to define the predecessor function for the natural numbers. Can someone help me with that?

You are probably familiar with pattern-matching from Lean or some functional programming language, so here is a solution that uses this mechanism:
open nat
definition pred : ℕ → ℕ
| zero := zero
| (succ n) := n
Another way of doing this is using a recursor like so:
def pred (n : ℕ) : ℕ :=
nat.rec_on n 0 (λ p _, p)
Here, 0 is what we return when the argument is zero, and (λ p _, p) is an anonymous function that takes two arguments: the predecessor p of the argument and the result of the recursive call (pred p). The anonymous function ignores the second argument and simply returns the predecessor.
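As a quick check of my own (not part of the exercise), with the second definition in scope both equations hold by computation, so rfl closes them (Lean 3 syntax):
-- both reduce definitionally
example : pred 0 = 0 := rfl
example : pred (nat.succ 3) = 3 := rfl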

Related

Type Level Fix Point while Ensuring Termination

Is it possible to somehow make unFix total? Possibly by restricting what f is?
record Fix (f : Type -> Type) where
  constructor MkFix
  unFix : f (Fix f)
> :total unFix
Fix.unFix is possibly not total due to:
MkFix, which is not strictly positive
The problem here is that Idris has no way of knowing that the base functor you are using for your datatype is strictly positive. If it were to accept your definition, you could then use it with a concrete, negative functor and prove Void from it.
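To make that concrete, here is a small sketch of my own (not from the original answer) using the Fix record above: instantiating it at a negative functor lets you write a term of type Void, so a sound totality checker has to reject the whole construction.
-- Hypothetical illustration: a negative functor plugged into Fix.
-- Idris flags all of this as not total, which is exactly the point.
Neg : Type -> Type
Neg t = t -> Void

notFix : Fix Neg -> Void
notFix (MkFix f) = f (MkFix f)

boom : Void
boom = notFix (MkFix notFix)   -- diverges; only accepted as partial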
There are two ways to represent strictly positive functors: by defining a universe or by using containers. I have put everything in two self-contained gists (but there are no comments there).
A Universe of Strictly Positive Functors
You can start with a basic representation: we can decompose a functor into either a sigma type (Sig), a (strictly-positive) position for a recursive substructure (Rec) or nothing at all (End). This gives us this description and its decoding as a Type -> Type function:
-- A universe of positive functors
data Desc : Type where
  Sig : (a : Type) -> (a -> Desc) -> Desc
  Rec : Desc -> Desc
  End : Desc
-- The decoding function
desc : Desc -> Type -> Type
desc (Sig a d) x = (v : a ** desc (d v) x)
desc (Rec d) x = (x, desc d x)
desc End x = ()
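As a quick sanity check (dmap is my own helper, not part of the answer or the linked gists), every description really decodes to a functor; the map is defined by recursion on the description:
-- Hypothetical helper: functorial map over the decoding of a description
dmap : (d : Desc) -> (f : a -> b) -> desc d a -> desc d b
dmap (Sig s k) f (v ** rest) = (v ** dmap (k v) f rest)
dmap (Rec e)   f (x, rest)   = (f x, dmap e f rest)
dmap End       f ()          = ()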
Once you have this universe of functors which are guaranteed to be strictly positive, you can take their least fixpoint:
-- The least fixpoint of such a positive functor
data Mu : Desc -> Type where
  In : desc d (Mu d) -> Mu d
You can now define your favourite datatype.
Example: Nat
We start with a sum type of tags for each one of the constructors.
data NatC = ZERO | SUCC
We then define the strictly positive base functor by storing the constructor tag in a sigma and computing the rest of the description based on the tag value. The ZERO tag is associated to End (there is nothing else to store in a zero constructor) whilst the SUCC one demands a Rec End, that is to say one recursive substructure corresponding to the Nat's predecessor.
natD : Desc
natD = Sig NatC $ \ c => case c of
  ZERO => End
  SUCC => Rec End
Our inductive type is then obtained by taking the fixpoint of the description:
nat : Type
nat = Mu natD
We can naturally recover the constructors we expect:
zero : nat
zero = In (ZERO ** ())
succ : nat -> nat
succ n = In (SUCC ** (n, ()))
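As a sanity check of my own (not part of the answer), the encoded type can be converted back to the built-in Nat by matching on the decoded sigma; depending on the Idris version the recursive call may need an assert_total hint:
-- Hypothetical round trip to the built-in Nat
toNat : nat -> Nat
toNat (In (ZERO ** ()))      = Z
toNat (In (SUCC ** (n, ()))) = S (toNat n)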
References
This specific universe is taken from 'Ornamental Algebras, Algebraic Ornaments' by McBride but you can find more details in 'The Gentle Art of Levitation' by Chapman, Dagand, McBride, and Morris.
Strictly Positive Functors as the Extension of Containers
The second representation is based on another decomposition: each inductive type is seen as a general shape (i.e. its constructors and the data they store) plus a number of recursive positions (which can depend on the specific value of the shape).
record Container where
  constructor MkContainer
  shape : Type
  position : shape -> Type
Once more we can give it an interpretation as a Type -> Type function:
container : Container -> Type -> Type
container (MkContainer s p) x = (v : s ** p v -> x)
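As a quick sanity check (cmap is my own name, not from the answer), the interpretation of any container is functorial; the map just post-composes with the position function:
-- Hypothetical helper: functorial map over the interpretation of a container
cmap : (c : Container) -> (f : a -> b) -> container c a -> container c b
cmap (MkContainer s p) f (v ** g) = (v ** \ r => f (g r))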
And take the fixpoint of the strictly positive functor thus defined:
data W : Container -> Type where
  In : container c (W c) -> W c
You can once more recover your favourite datatypes by defining Containers of interest.
Example: Nat
Natural numbers have two constructors, each storing nothing at all. So the shape will be a Bool. If we are in the zero case then there are no recursive positions (Void) and in the succ one there is exactly one (()).
natC : Container
natC = MkContainer Bool (\ b => if b then Void else ())
Our type is obtained by taking the fixpoint of the container:
nat : Type
nat = W natC
And we can recover the usual constructors:
zero : nat
zero = In (True ** \ v => absurd v)
succ : nat -> nat
succ n = In (False ** \ _ => n)
References
This is based on 'Containers: Constructing Strictly Positive Types' by Abbott, Altenkirch, and Ghani.

How do you operate on dependent pairs in a proof?

This is a follow-up to this question. Thanks to Kwartz I now have a statement of the proposition "if b divides a, then b divides a * c for any integer c", namely:
alsoDividesMultiples : (a, b, c : Integer) ->
  DivisibleBy a b ->
  DivisibleBy (a * c) b
Now the goal is to prove that statement. I realized that I do not understand how to operate on dependent pairs, so I tried a simpler problem: showing that every number is divisible by 1. After a shameful amount of thought, I believed I had come up with a solution:
-- All numbers are divisible by 1.
DivisibleBy a 1 = let n = a in
  (n : Integer ** a = 1 * n)
This compiles, but I had doubts it was valid. To verify that I was wrong, I changed it slightly to:
-- All numbers are divisible by 1.
DivisibleBy a 1 = let n = a in
  (n : Integer ** a = 2 * n)
This also compiles, which means my "English" interpretation is certainly incorrect, for I would interpret this as "All numbers are divisible by one since every number is two times another integer". Thus, I am not entirely sure what I am demonstrating with that statement. So, I went back and tried a more conventional way of stating the problem:
oneDividesAll : (a : Integer) ->
  (DivisibleBy a 1)
oneDividesAll a = ?sorry
For the implementation of oneDividesAll I am not really sure how to "inject" the fact that (n = a). For example, I would write (in English) this proof as:
We wish to show that 1 | a. If so, it follows that a = 1 * n for some n. Let n = a, then a = a * 1, which is true by identity.
I am not sure how to really say: "Consider when n = a". From my understanding, the rewrite tactic requires a proof that n = a.
I tried adapting my fallacious proof:
oneDividesAll : (a : Integer) ->
  (DivisibleBy a 1)
oneDividesAll a = let n = a in (n : Integer ** a = b * n)
But this gives:
|
12 | oneDividesAll a = let n = a in (n : Integer ** a = b * n)
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When checking right hand side of oneDividesAll with expected type
DivisibleBy a 1
Type mismatch between
Type (Type of DPair a P)
and
(n : Integer ** a = prim__mulBigInt 1 n) (Expected type)
Any help/hints would be appreciated.
First off, if you want to prove properties of numbers, you should use Nat (or other inductive types). Integer is built on primitives, so you cannot argue any further than prim__mulBigInt : Integer -> Integer -> Integer: you pass in two Integers and get one back. The compiler doesn't know anything about what the resulting Integer looks like, so it cannot prove anything about it.
So I'll go along with Nat:
DivisibleBy : Nat -> Nat -> Type
DivisibleBy a b = (n : Nat ** a = b * n)
Again, this is a proposition, not a proof. DivisibleBy 6 0 is a valid type, but you won't find a proof : DivisibleBy 6 0. So you were right with
oneDividesAll : (a : Nat) ->
  (DivisibleBy a 1)
oneDividesAll a = ?sorry
With that, you can generate proofs of the form oneDividesAll a : DivisibleBy a 1. So, what goes into the hole ?sorry? :t sorry gives us sorry : (n : Nat ** a = plus n 0) (which is just DivisibleBy a 1, resolved as far as Idris can). You got confused about the right part of the pair: x = y is a type, but here we need a value of that type; that is what your last, cryptic error message hints at. = has only one constructor, Refl : x = x. So we need to get both sides of the equality to the same value, and the result will look something like (n ** Refl).
As you thought, we need to set n to a:
oneDividesAll a = (a ** ?hole)
For the needed rewrite tactic we check out :search plus a 0 = a, and see plusZeroRightNeutral has the right type.
oneDividesAll a = (a ** rewrite plusZeroRightNeutral a in ?hole)
Now :t hole gives us hole : a = a so we can just auto-complete to Refl:
oneDividesAll a = (a ** rewrite plusZeroRightNeutral a in Refl)
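Coming back to the original alsoDividesMultiples goal, the same ingredients give a possible sketch for the Nat version (my own, untested; it assumes the Prelude lemmas cong, trans, sym and multAssociative):
alsoDividesMultiples : (a, b, c : Nat) ->
  DivisibleBy a b ->
  DivisibleBy (a * c) b
alsoDividesMultiples a b c (n ** prf) =
  -- witness: n * c; proof: a * c = (b * n) * c = b * (n * c)
  (n * c ** trans (cong {f = (\x => x * c)} prf) (sym (multAssociative b n c)))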
A good tutorial on theorem proving (where it's also explained why plus a Z does not reduce) is in the Idris Doc.

Why is filter based on dependent pair?

In the Idris Tutorial a function for filtering vectors is based on dependent pairs.
filter : (a -> Bool) -> Vect n a -> (p ** Vect p a)
filter f [] = (_ ** [])
filter f (x :: xs) with (filter f xs)
  | (_ ** xs') = if (f x) then (_ ** x :: xs') else (_ ** xs')
But why is it necessary to put this in terms of a dependent pair instead of something more direct such as?
filter' : (a -> Bool) -> Vect n a -> Vect p a
In both cases the type of p must be determined, but in my supposed alternative the redundancy of listing p twice is eliminated.
My naive attempts at implementing filter' failed, so I was wondering is there a fundamental reason that it can't be implemented? Or can filter' be implemented, and perhaps filter was just a poor example to showcase dependent pairs in Idris? But if that is the case then in what situations would dependent pairs be useful?
Thanks!
The difference between filter and filter' is the difference between existential and universal quantification. If (a -> Bool) -> Vect n a -> Vect p a were the correct type for filter, that would mean filter returns a Vect of length p where the caller can specify what p should be.
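With the dependent-pair version the roles are reversed: the callee picks p and the caller only learns it by unpacking the pair. A small sketch of my own (countKept is not from the answers), reusing the filter from the question:
-- The length is discovered by pattern matching on the dependent pair
countKept : (a -> Bool) -> Vect n a -> Nat
countKept f xs with (filter f xs)
  | (p ** _) = p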
Kim Stebel's answer is right on the money. Let me just note that this was already discussed on the Idris mailing list back in 2012 (!!):
filter for vector, a question - Idris Programming Language
What raichoo posted there can help clarify it, I think; the real signature of your filter' is
filter' : {p : Nat} -> {n : Nat} -> {a : Type} -> (a -> Bool) -> Vect n a -> Vect p a
from which it should be obvious that this is not what filter should (or even could) do; p actually depends on the predicate and the vector you are filtering, and you can (actually need to) express this using a dependent pair. Note that in the pair (p ** Vect p a), p (and thus Vect p a) implicitly depends on the (unnamed) predicate and vector appearing before it in its signature.
Expanding on this, why a dependent pair? You want to return a vector, but there's no "Vector with unknown length" type; you need a length value for obtaining a Vector type. But then you can just think "OK, I will return a Nat together with a vector with that length". The type of this pair is, unsurprisingly, an example of a dependent pair. In more detail, a dependent pair DPair a P is a type built out of
A type a
A function P : a -> Type
A value of that type DPair a P is a pair of values
x : a
y : P x
At this point I think it is just the syntax that might be misleading you. The type (p ** Vect p a) is DPair Nat (\p => Vect p a); the p there is not a parameter of filter or anything like it. All this can be a bit confusing at first; if so, maybe it helps to think of (p ** Vect p a) as a substitute for the missing "Vector with unknown length" type.
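For instance, a concrete inhabitant of such a type looks like this (twoChars is my own example; it assumes Data.Vect is imported):
-- the first component is the length, the second a vector of exactly that length
twoChars : (p ** Vect p Char)
twoChars = (2 ** ['h', 'i'])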
Not an answer, but additional context
Idris 1 documentation - https://docs.idris-lang.org/en/latest/tutorial/typesfuns.html#dependent-pairs
Idris 2 documentation - https://idris2.readthedocs.io/en/latest/tutorial/typesfuns.html?highlight=dependent#dependent-pairs
In Idris 2 the dependent pair is similar to Exists and Subset, but BOTH of its values are NOT erased at runtime.

Understanding 'impossible'

Type-Driven Development with Idris presents:
twoPlusTwoNotFive : 2 + 2 = 5 -> Void
twoPlusTwoNotFive Refl impossible
Is the above a function or a value? If it's the former, then why are there no variable arguments, as in e.g.
add1 : Int -> Int
add1 x = x + 1
In particular, I'm confused at the lack of = in twoPlusTwoNotFive.
impossible calls out combinations of arguments which are, well, impossible. Idris absolves you of the responsibility to provide a right-hand side when a case is impossible.
In this instance, we're writing a function of type (2 + 2 = 5) -> Void. Void is a type with no values, so if we succeed in implementing such a function we should expect that all of its cases will turn out to be impossible. Now, = has only one constructor (Refl : x = x), and it can't be used here because it requires ='s arguments to be definitionally equal - they have to be the same x. So, naturally, it's impossible. There's no way anyone could successfully call this function at runtime, and we're saved from having to prove something that isn't true, which would have been quite a big ask.
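By contrast, when the two sides do compute to the same value, Refl is accepted (a tiny example of my own):
twoPlusTwoIsFour : 2 + 2 = 4
twoPlusTwoIsFour = Refl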
Here's another example: you can't index into an empty vector. Scrutinising the Vect and finding it to be [] tells us that n ~ Z; since Fin n is the type of natural numbers less than n there's no value a caller could use to fill in the second argument.
at : Vect n a -> Fin n -> a
at [] FZ impossible
at [] (FS i) impossible
at (x::xs) FZ = x
at (x::xs) (FS i) = at xs i
Much of the time you're allowed to omit impossible cases altogether.
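For instance, the at function above is typically accepted with the two [] clauses simply left out, because the coverage checker works out on its own that no such cases can arise (my own restatement of the point, not from the answer):
at : Vect n a -> Fin n -> a
at (x::xs) FZ     = x
at (x::xs) (FS i) = at xs i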
I slightly prefer Agda's notation for the same concept, which uses the symbol () to explicitly pinpoint which bit of the input expression is impossible.
twoPlusTwoNotFive : (2 + 2 ≡ 5) -> ⊥
twoPlusTwoNotFive () -- again, no RHS
at : forall {n}{A : Set} -> Vec A n -> Fin n -> A
at [] ()
at (x ∷ xs) zero = x
at (x ∷ xs) (suc i) = at xs i
I like it because sometimes you only learn that a case is impossible after doing some further pattern matching on the arguments; when the impossible thing is buried several layers down it's nice to have a visual aid to help you spot where it was.

Is there a way to automate a Coq proof with rewrite steps?

I am working on a proof and one of my subgoals looks a bit like this:
Goal forall
(a b : bool)
(p: Prop)
(H1: p -> a = b)
(H2: p),
negb a = negb b.
Proof.
intros.
apply H1 in H2. rewrite H2. reflexivity.
Qed.
The proof does not rely on any outside lemmas and just consists of applying one hypothesis in the context to another hypothesis and doing rewriting steps with a known hypothesis.
Is there a way to automate this? I tried doing intros. auto. but it had no effect. I suspect that this is because auto can only do apply steps but no rewrite steps but I am not sure. Maybe I need some stronger tactic?
The reason I want to automate this is that in my original problem I actually have a large number of subgoals that are very similar to this one, but with small differences in the names of the hypotheses (H1, H2, etc), the number of hypotheses (sometimes there is an extra induction hypothesis or two) and the boolean formula at the end. I think that if I could use automation to solve this my overall proof script would be more concise and robust.
edit: What if there is a forall in one of the hypotheses?
Goal forall
(a b c : bool)
(p: bool -> Prop)
(H1: forall x, p x -> a = b)
(H2: p c),
negb a = negb b.
Proof.
intros.
apply H1 in H2. subst. reflexivity.
Qed.
When you see a repetitive pattern in the way you prove some lemmas, you can often define your own tactics to automate the proofs.
In your specific case, you could write the following:
Ltac rewrite_all' :=
match goal with
| H : _ |- _ => rewrite H; rewrite_all'
| _ => idtac
end.
Ltac apply_in_all :=
match goal with
| H : _, H2 : _ |- _ => apply H in H2; apply_in_all
| _ => idtac
end.
Ltac my_tac :=
intros;
apply_in_all;
rewrite_all';
auto.
Goal forall (a b : bool) (p: Prop) (H1: p -> a = b) (H2: p), negb a = negb b.
Proof.
my_tac.
Qed.
Goal forall (a b c : bool) (p: bool -> Prop)
(H1: forall x, p x -> a = b)
(H2: p c),
negb a = negb b.
Proof.
my_tac.
Qed.
If you want to follow this path of writing proofs, a reference that is often recommended (but that I haven't read) is CPDT by Adam Chlipala.
This particular goal can be solved like this:
Goal forall (a b : bool) (p: Prop) (H1: p -> a = b) (H2: p),
negb a = negb b.
Proof.
now intuition; subst.
Qed.
Or, using the destruct_all tactic (provided you don't have a lot of boolean variables):
intros; destruct_all bool; intuition.
The above has been modeled after the destr_bool tactic, defined in Coq.Bool.Bool:
Ltac destr_bool :=
intros; destruct_all bool; simpl in *; trivial; try discriminate.
You could also try using something like
destr_bool; intuition.
to fire up the more powerful intuition after the simpler destr_bool.
now is defined in Coq.Init.Tactics as follows
Tactic Notation "now" tactic(t) := t; easy.
easy is defined right above it and (as its name suggests) can solve easy goals.
intuition can solve goals which require applying the laws of (intuitionistic) logic. E.g. the following two hypotheses from the original version of the question require an application of the modus ponens law.
H1 : p -> false = true
H2 : p
auto, on the other hand, doesn't do that by default, and it doesn't solve contradictions either.
If your hypotheses include some first-order logic statements, the firstorder tactic may be the answer (like in this case) -- just replace intuition with it.
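For example, here is a sketch of that suggestion applied to the edited goal with the quantified hypothesis (my own, untested): firstorder discharges the instantiation of H1 at c, and the rest is substitution.
Goal forall (a b c : bool) (p : bool -> Prop)
  (H1 : forall x, p x -> a = b)
  (H2 : p c),
  negb a = negb b.
Proof.
  (* firstorder instantiates H1 with c and applies it to H2 *)
  intros; assert (a = b) by firstorder; subst; reflexivity.
Qed.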