How to prove a relation at compile-time in Lean?

Say I have a type:
inductive is_sorted {α: Type} [decidable_linear_order α] : list α -> Prop
| is_sorted_zero : is_sorted []
| is_sorted_one : Π (x: α), is_sorted [x]
| is_sorted_many : Π {x y: α} {ys: list α}, x < y -> is_sorted (y::ys) -> is_sorted (x::y::ys)
And it's decidable:
instance decidable_sorted {α: Type} [decidable_linear_order α] : ∀ (l : list α), decidable (is_sorted l)
If I have a particular list:
def l1: list ℕ := [2,3,4,5,16,66]
Is it possible to prove that it's sorted at "compile time"; to produce an is_sorted l1 at the top level?
I've tried def l1_sorted: is_sorted l1 := if H: is_sorted l1 then H else sorry, but I don't know how to show the latter case is impossible. I've also tried the simp tactic but that didn't seem to help.
I can prove it with #reduce, but it doesn't seem possible to assign the output of that to a variable.

You should be able to use dec_trivial to prove l1_sorted. This will try to infer an instance of decidable (is_sorted l1), and if the instance evaluates to is_true p, it will reduce to p.
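Concretely, something like the following should work (a minimal sketch; it assumes your decidable_sorted instance actually computes, i.e. it is not defined via classical choice):
def l1_sorted : is_sorted l1 := dec_trivial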

Related

Equality between paths

Using the cubical-demo library, I thought the following would be trivial to prove:
{-# OPTIONS --cubical #-}
open import Cubical.PathPrelude
foo : ∀ {ℓ} {A : Set ℓ} {x y : A} (p : x ≡ y) → trans refl p ≡ p
foo p = ?
But alas, it doesn't hold definitionally: trying to use refl fails with
primComp (λ _ → ;A) (~ i ∨ i) (λ { i₁ (i = i0) → ;x ; i₁ (i = i1) → p i₁ }) (refl i)
!= p i
of type ;A
and I don't know where to start.
No, sadly we lose some definitional equalities when using Path, because we don't know how to keep the system confluent if we were to add those reductions.
The eliminator of the Id type instead has the usual reduction rules.
https://github.com/Saizan/cubical-demo/blob/master/src/Cubical/Id.agda
In the case of the lemma you want to prove about trans you can find a proof at
https://github.com/Saizan/cubical-demo/blob/master/src/Cubical/Lemmas.agda
By the way, cubical-demo grew up organically, and we are starting fresh with hopefully a cleaner setup (although with different primitives) at
https://github.com/agda/cubical
cubical has a better Id module for example:
https://github.com/agda/cubical/blob/master/Cubical/Core/Id.agda
Based on Saizan's answer I looked up the proof in cubical-demo and ported it to the new cubical library. I can see how it works out (as in, I can see that the value of the given path is x on all three designated edges) but I don't see yet how I would come up with a similar proof for a similar situation:
{-# OPTIONS --cubical #-}
module _ where
open import Cubical.Core.Prelude
refl-compPath : ∀ {ℓ} {A : Set ℓ} {x y : A} (p : x ≡ y) → compPath refl p ≡ p
refl-compPath {x = x} p i j = hcomp {φ = ~ j ∨ j ∨ i}
  (λ { k (j = i0) → x
     ; k (j = i1) → p k
     ; k (i = i1) → p (k ∧ j)
     })
  x

Dependent types: enforcing global properties in inductive types

I have the following inductive type MyVec:
import Data.Vect
data MyVec: {k: Nat} -> Vect k Nat -> Type where
  Nil: MyVec []
  (::): {k, n: Nat} -> {v: Vect k Nat} -> Vect n Nat -> MyVec v -> MyVec (n :: v)
-- example:
val: MyVec [3,2,3]
val = [[2,1,2], [0,2], [1,1,0]]
That is, the type specifies the lengths of all vectors inside a MyVec.
The problem is, val will have k = 3 (k is the number of vectors inside a MyVec), but the ctor :: does not know this fact. It will first build a MyVec with k = 1, then with 2, and finally with 3. This makes it impossible to define constraints based on the final shape of the value.
For example, I cannot constrain the values to be strictly less than k. Accepting Vects of Fin (S k) instead of Vects of Nat would rule out some valid values, because the last vectors (the first inserted by the ctor) would "know" a smaller value of k, and thus impose a stricter constraint.
Or, another example, I cannot enforce the following constraint: the vector at position i cannot contain the number i. Because the final position of a vector in the container is not known to the ctor (it would be automatically known if the final value of k was known).
So the question is, how can I enforce such global properties?
There are (at least) two ways to do it, both of which may require tracking additional information in order to check the property.
Enforcing properties in the data definition
Enforcing all elements < k
I cannot constrain the values to be strictly less than k. Accepting Vects of Fin (S k) instead of Vects of Nat would rule out some valid values...
You are right that simply changing the definition of MyVec to have Vect n (Fin (S k)) in it would not be correct.
However, it is not too hard to fix this by generalizing MyVec to be polymorphic, as follows.
data MyVec: (A : Type) -> {k: Nat} -> Vect k Nat -> Type where
  Nil: {A : Type} -> MyVec A []
  (::): {A : Type} -> {k, n: Nat} -> {v: Vect k Nat} -> Vect n A -> MyVec A v -> MyVec A (n :: v)
val : MyVec (Fin 3) [3,2,3]
val = [[2,1,2], [0,2], [1,1,0]]
The key to this solution is separating the type of the vector from k in the definition of MyVec, and then, at top level, using the "global" value of k to constrain the type of vector elements.
Enforcing vector at position i does not contain i
I cannot enforce that the vector at position i cannot contain the number i because the final position of a vector in the container is not known to the constructor.
Again, the solution is to generalize the data definition to keep track of the necessary information. In this case, we'd like to keep track of what the current position in the final value will be. I call this index. I then generalize A to be passed the current index. Finally, at top level, I pass in a predicate enforcing that the value does not equal the index.
data MyVec': (A : Nat -> Type) -> (index : Nat) -> {k: Nat} -> Vect k Nat -> Type where
  Nil: {A : Nat -> Type} -> {index : Nat} -> MyVec' A index []
  (::): {A : Nat -> Type} -> {k, n, index: Nat} -> {v: Vect k Nat} ->
        Vect n (A index) -> MyVec' A (S index) v -> MyVec' A index (n :: v)
val : MyVec' (\n => (m : Nat ** (n == m = False))) 0 [3,2,3]
val = [[(2 ** Refl),(1 ** Refl),(2 ** Refl)], [(0 ** Refl),(2 ** Refl)], [(1 ** Refl),(1 ** Refl),(0 ** Refl)]]
Enforcing properties after the fact
Another, sometimes simpler way to do it, is to not enforce the property immediately in the data definition, but to write a predicate after the fact.
Enforcing all elements < k
For example, we can write a predicate that checks whether all elements of all vectors are < k, and then assert that our value has this property.
wf : (final_length : Nat) -> {k : Nat} -> {v : Vect k Nat} -> MyVec v -> Bool
wf final_length [] = True
wf final_length (v :: mv) = isNothing (find (\x => x >= final_length) v) && wf final_length mv
val : (mv : MyVec [3,2,3] ** wf 3 mv = True)
val = ([[2,1,2], [0,2], [1,1,0]] ** Refl)
Enforcing vector at position i does not contain i
Again, we can express the property by checking it, and then asserting that the value has the property.
wf : (index : Nat) -> {k : Nat} -> {v : Vect k Nat} -> MyVec v -> Bool
wf index [] = True
wf index (v :: mv) = isNothing (find (\x => x == index) v) && wf (S index) mv
val : (mv : MyVec [3,2,3] ** wf 0 mv = True)
val = ([[2,1,2], [0,2], [1,1,0]] ** Refl)

Using sets in lean

I'd like to do some work in topology using lean.
As a good start, I wanted to prove a couple of simple lemmas about sets in lean.
For example
def inter_to_union (H : a ∈ set.inter A B) : a ∈ set.union A B :=
sorry
or
def set_deMorgan : a ∈ set.inter A B → a ∈ set.compl (set.union (set.compl A) (set.compl B)) :=
sorry
or, perhaps more interestingly
def set_deMorgan2 : set.inter A B = set.compl (set.union (set.compl A) (set.compl B)) :=
sorry
But I can't find elimination rules for set.union or set.inter anywhere, so I just don't know how to work with them.
How do I prove the lemmas?
Also, looking at the definition of sets in lean, I can see bits of syntax which look very much like paper maths but which I don't understand at the level of dependent type theory, for example:
protected def sep (p : α → Prop) (s : set α) : set α :=
{a | a ∈ s ∧ p a}
How can one break down the above example into simpler notions of dependent/inductive types?
That module identifies sets with predicates on some type α (α is usually called 'the universe'):
def set (α : Type u) := α → Prop
If you have a set s : set α and for some x : α you can prove s x, this is interpreted as 'x belongs to s'.
In this case, x ∈ A is notation (let us not mind about typeclasses for now) for set.mem x A, which is defined as follows:
protected def mem (a : α) (s : set α) :=
s a
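So a membership proof is literally a proof of the underlying predicate; for instance (a small check of my own, not from the original answer):
example {α : Type} (A : set α) (x : α) (h : A x) : x ∈ A := h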
The above explains why the empty set is represented as the predicate always returning false:
instance : has_emptyc (set α) :=
⟨λ a, false⟩
And also, the universe is unsurprisingly represented like so:
def univ : set α :=
λ a, true
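Consequently (two small illustrative examples of my own, not from the original answer), x ∈ univ unfolds to true and x ∈ ∅ unfolds to false, so the corresponding membership facts are proved by trivial and by the identity function:
example {α : Type} (x : α) : x ∈ (set.univ : set α) := trivial
example {α : Type} (x : α) : x ∉ (∅ : set α) := λ h, h  -- x ∈ ∅ reduces to false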
I'll show how to prove the first lemma:
def inter_to_union {α : Type} {A B : set α} {a : α} : A ∩ B ⊆ A ∪ B :=
assume (x : α) (xinAB : x ∈ A ∩ B), -- unfold the definition of `subset`
have xinA : x ∈ A, from and.left xinAB,
@or.inl _ (x ∈ B) xinA
This is all like the usual "pointful" proofs for these properties in basic set theory.
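In the same pointful style, here is an untested sketch of your set_deMorgan (my own attempt, not part of the original answer; it relies on the same definitional unfolding of ∈, set.inter, set.union and set.compl as above):
def set_deMorgan {α : Type} {A B : set α} {a : α}
  (h : a ∈ set.inter A B) : a ∈ set.compl (set.union (set.compl A) (set.compl B)) :=
assume hc : a ∈ set.union (set.compl A) (set.compl B),
or.elim hc
  (assume hA : a ∈ set.compl A, hA (and.left h))  -- a ∈ compl A means a ∈ A → false
  (assume hB : a ∈ set.compl B, hB (and.right h))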
Regarding your question about sep -- you can see through notations like so:
set_option pp.notation false
#print set.sep
Here is the output:
protected def set.sep : Π {α : Type u}, (α → Prop) → set α → set α :=
λ {α : Type u} (p : α → Prop) (s : set α),
set_of (λ (a : α), and (has_mem.mem a s) (p a))
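One last piece: the set_of appearing in that output is, in the core library, just the identity on predicates, so {a | a ∈ s ∧ p a} is literally the function λ a, a ∈ s ∧ p a:
def set_of {α : Type u} (p : α → Prop) : set α :=
p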

Distributivity of `subst`

Suppose I have a transitive relation ~ with two endomaps f and g.
Assuming f and g agree everywhere and f a ~ f b ~ f c,
then there are two ways to show g a ~ g c:
transform each f into a g by the given equality then apply transitivity,
or apply transitivity then transform along the equality.
Are the resulting proofs identical? Apparently so,
open import Relation.Binary.PropositionalEquality
postulate A : Set
postulate _~_ : A → A → Set
postulate _⟨~~⟩_ : ∀{a b c} → a ~ b → b ~ c → a ~ c
postulate f g : A → A
subst-dist : ∀ {a b c} {ef : f a ~ f b} {psf : f b ~ f c} → (eq : ∀ {z} → f z ≡ g z)
           → subst₂ _~_ eq eq ef ⟨~~⟩ subst₂ _~_ eq eq psf
             ≡ subst₂ _~_ eq eq (ef ⟨~~⟩ psf)
subst-dist {a} {b} {c} {ef} {psf} eq rewrite eq {a} | eq {b} | eq {c} = refl
I just recently learned about the rewrite keyword and thought it might help here; apparently it does. However, I honestly do not understand what is going on here. I've used rewrite at other times and understood it; however, all these substs are confusing me.
I'd like to know:
is there a simpler way to obtain subst-dist? Maybe something similar exists in the libraries?
what is going on with this particular usage of rewrite
an alternate proof of subst-dist without using rewrite (most important)
is there another way to obtain g a ~ g c without using subst?
what are some of the downsides of using heterogeneous equality? It doesn't seem like most people are fond of it. (also important)
Any help is appreciated.
rewrite is just sugared with, which in turn is just sugared "top-level" pattern matching. See Agda's documentation.
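To illustrate (a rough sketch of my own, reusing the postulates A and _≡_ from the snippet above; the translation Agda actually performs differs in some details), a clause that rewrites along eq : a ≡ b behaves like a with-abstraction over the equation's left-hand side followed by a match on refl:
sym-rw : ∀ {a b : A} → a ≡ b → b ≡ a
sym-rw eq rewrite eq = refl

-- roughly the with-desugaring of the clause above:
sym-with : ∀ {a b : A} → a ≡ b → b ≡ a
sym-with {a} eq with a | eq
... | _ | refl = refl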
what are some of the downsides of using heterogeneous equality? It doesn't seem like most people are fond of it. (also important)
This is OK
types-equal : ∀ {α} {A B : Set α} {x : A} {y : B} -> x ≅ y -> A ≡ B
types-equal refl = refl
this is OK as well
A-is-Bool : {A : Set} {x : A} -> x ≅ true -> A ≡ Bool
A-is-Bool refl = refl
This is an error
fail : ∀ {n m} {i : Fin n} {j : Fin m} -> i ≅ j -> n ≡ m
fail refl = {!!}
-- n != m of type ℕ
-- when checking that the pattern refl has type i ≅ j
because Fin n ≡ Fin m doesn't immediately imply n ≡ m (you can make it so by enabling --injective-type-constructors, but that makes Agda anti-classical) (Fin n ≡ Fin m -> n ≡ m is provable though).
Originally Agda permitted pattern matching on x ≅ y when x and y have non-unifiable types, but that allows one to write weird things like the following (quoting from this thread):
P : Set -> Set
P S = Σ S (\s → s ≅ true)
pbool : P Bool
pbool = true , refl
¬pfin : ¬ P (Fin 2)
¬pfin ( zero , () )
¬pfin ( suc zero , () )
¬pfin ( suc (suc ()) , () )
tada : ¬ (Bool ≡ Fin 2)
tada eq = ⊥-elim ( ¬pfin (subst (\ S → P S) eq pbool ) )
Saizan: or maybe it's just ignoring the types and comparing the constructor names?
pigworker: Saizan: that's exactly what I think is happening
Andreas Abel:
If I slightly modify the code, I can prove Bool unequal Bool2, where true2, false2 : Bool2 (see file ..22.agda).
However, if I rename the constructors to true, false : Bool2, then suddenly I cannot prove that Bool is unequal to Bool2 anymore (see other file). So, at the moment Agda2 compares apples and oranges in certain situations. ;-)
So in order to pattern match on i ≅ j, where i : Fin n, j : Fin m, you first need to unify n with m
OK : ∀ {n m} {i : Fin n} {j : Fin m} -> n ≡ m -> i ≅ j -> ...
OK refl refl = ...
That's the main drawback of heterogeneous equality: you need to provide proofs of equality of indices everywhere. The usual cong and subst are non-indexed, so you also have to provide indexed versions of them (or use the even more annoying cong₂ and subst₂).
There is no such problem with "heteroindexed" (I don't know if it has a proper name) equality
data [_]_≅_ {ι α} {I : Set ι} {i} (A : I -> Set α) (x : A i) : ∀ {j} -> A j -> Set where
  refl : [ A ] x ≅ x
e.g.
OK : ∀ {n m} {i : Fin n} {j : Fin m} -> [ Fin ] i ≅ j -> n ≡ m
OK refl = refl
More generally, whenever you have x : A i, y : A j, p : [ A ] x ≅ y, you can pattern match on p and j will be unified with i, so you don't need to carry an additional proof of n ≡ m.
Heterogeneous equality, as it is presented in Agda, is also inconsistent with the univalence axiom.
EDIT
Pattern matching on x : A, y : B, x ≅ y amounts to pattern matching on A ≡ B and then changing every y in the context to x. So when you write
fail : ∀ {n m} {i : Fin n} {j : Fin m} -> i ≅ j -> n ≡ m
fail refl = {!!}
it's the same as
fail' : ∀ {n m} {i : Fin n} {j : Fin m} -> Fin n ≡ Fin m -> i ≅ j -> n ≡ m
fail' refl refl = {!!}
but you can't pattern match on Fin n ≡ Fin m
fail-coerce : ∀ {n m} -> Fin n ≡ Fin m -> Fin n -> Fin m
fail-coerce refl = {!!}
-- n != m of type ℕ
-- when checking that the pattern refl has type Fin n ≡ Fin m
like you cannot pattern match on
fail'' : ∀ {n m} -> Nat.pred n ≡ Nat.pred m -> n ≡ m
fail'' refl = {!!}
-- n != m of type ℕ
-- when checking that the pattern refl has type Nat.pred n ≡ Nat.pred m
In general
f-inj : ∀ {n m} -> f n ≡ f m -> ...
f-inj refl = ...
works only if f is obviously injective, i.e. if f is a series of constructors (e.g. suc (suc n) ≡ suc (suc m)) or computes to one (e.g. 2 + n ≡ 2 + m). Type constructors (which Fin is) are not injective, because that would make Agda anti-classical, so you cannot pattern match on Fin n ≡ Fin m unless you enable --injective-type-constructors.
Indices unify for
data [_]_≅_ {ι α} {I : Set ι} {i} (A : I -> Set α) (x : A i) : ∀ {j} -> A j -> Set where
  refl : [ A ] x ≅ x
because you don't try to unify A i with A j, but instead explicitly carry the indices in the type of [_]_≅_, which makes them available for unification. When the indices are unified, both types become the same A i and it's possible to proceed as with propositional equality.
EDIT
Another problem with heterogeneous equality is that it's not fully heterogeneous: in x : A, y : B, x ≅ y, A and B must lie in the same universe. The treatment of universe levels in data definitions was changed recently, and now we can define a fully heterogeneous equality:
data _≅_ {α} {A : Set α} (x : A) : ∀ {β} {B : Set β} -> B -> Set where
  refl : x ≅ x
But this doesn't work
levels-equal : ∀ {α β} -> Set α ≅ Set β -> α ≅ β
levels-equal refl = refl
-- Refuse to solve heterogeneous constraint Set α : Set (suc α) =?=
-- Set β : Set (suc β)
because Agda doesn't think suc is injective
suc-inj : {α β : Level} -> suc α ≅ suc β -> α ≅ β
suc-inj refl = refl
-- α != β of type Level
-- when checking that the pattern refl has type suc α ≅ suc β
If we postulate it, then we can prove levels-equal:
hcong : ∀ {α β δ} {A : Set α} {B : Set β} {D : Set δ} {x : A} {y : B}
      -> (f : ∀ {γ} {C : Set γ} -> C -> D) -> x ≅ y -> f x ≅ f y
hcong f refl = refl
levelOf : ∀ {α} {A : Set α} -> A -> Level
levelOf {α} _ = α
postulate
  suc-inj : {α β : Level} -> suc α ≅ suc β -> α ≅ β
levels-equal : ∀ {α β} -> Set α ≅ Set β -> α ≅ β
levels-equal p = suc-inj (suc-inj (hcong levelOf p))

refl in Agda: explaining the congruence property

With the following definition of equality, we have refl as its only constructor:
data _≡_ {a} {A : Set a} (x : A) : A → Set a where
  refl : x ≡ x
and we can prove that functions are congruent with respect to equality
cong : ∀ {a b} {A : Set a} {B : Set b}
       (f : A → B) {m n} → m ≡ n → f m ≡ f n
cong f refl = refl
I am not sure I can parse what is going on exactly here.
I think we are pattern matching refl on the hidden parameters: if we replace the first occurrence of refl by another identifier, we get a type error.
After pattern matching, I imagine that m and n are the same by the definition of refl. Then magic occurs (a definition of functionality of a relation is applied? or is it built in?).
Is there an intuitive description of what is going on?
Yes, the arguments in curly braces {} are implicit, and they only need to be supplied or matched if Agda cannot figure them out. It is necessary to specify them, since dependent types need to refer to the values they depend on, but dragging them around all the time would make the code rather clunky.
The expression cong f refl = refl matches the explicit arguments (A → B) and (m ≡ n). If you wanted to match the implicit arguments, you'd need to put the matching expressions in {}, but here there is no need for that. Then on the right-hand side it is indeed the construction of (f m ≡ f n) using refl, and it works "by magic". Agda has a built-in axiom that proves this to be true. That axiom is similar to (but stronger than) the J axiom, the induction axiom: if something C : (x y : A) → (x ≡ y) → Set is true for C x x refl, then it is also true for any x y : A and p : x ≡ y.
J : ∀ {ℓ ℓ'} {A : Set ℓ} {C : (x y : A) → (x ≡ y) → Set ℓ'} →
    (c : ∀ x → C x x refl) →
    (x y : A) → (p : x ≡ y) → C x y p
-- this really is an axiom, but in Agda there is a stronger built-in,
-- which can be used to prove this
J c x .x refl = c x -- this _looks_ to only cover x ≡ x,
                    -- but Agda's built-in extends this proof to all cases
                    -- for which x ≡ y can be constructed - that's the point
                    -- of having induction

cong : ∀ {a b} {A : Set a} {B : Set b}
       (f : A → B) {m n} → m ≡ n → f m ≡ f n
cong f {x} {y} p = J {C = \x y p → f x ≡ f y} -- the type of equality
                                              -- of function results
                     (\_ → refl)              -- f x ≡ f x is true indeed
                     x y p
(In this last definition we match the explicit arguments f and p, and also the implicit arguments m=x and n=y. Then we pass to J one implicit argument, but it is not the first positional implicit, so we tell Agda that it is the C in the definition; without doing that, Agda won't see what type is meant by refl in \_ → refl.)
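For instance (a tiny usage example of my own, not part of the original answer; it assumes ℕ and suc are imported from Data.Nat), specialising cong to a concrete function shows what the pattern match buys you:
open import Data.Nat using (ℕ; suc)

suc-cong : ∀ {m n : ℕ} → m ≡ n → suc m ≡ suc n
suc-cong = cong suc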