Strange error message in Idris - dependent-type

I'm implementing in Idris the algorithm and proofs of first-order unification by structural recursion (current status of the development available here).
Idris is giving me the following error message:
`-- UnifyProofs.idr line 130 col 60:
When checking right hand side of maxEquiv with expected type
(Max p n f -> Max q n f, Max q n f -> Max p n f)
When checking argument b to constructor Builtins.MkPair:
No such variable k
when it tries to type check the following function
maxEquiv : p .=. q -> Max p .=. Max q
maxEquiv pr n f = ( \ a => ( fst (pr n f) (fst a)
                           , \ n1 => \ g => \pr1 => (snd a) n1 g (snd (pr n1 g) pr1))
                  , \ a' => (snd (pr n f) (fst a')
                            , \ n2 => \ g' => \ pr2 => (snd a') n2 g' (fst (pr n2 g') pr2)))
where Max and .=. are defined as
Property : (m : Nat) -> Type
Property m = (n : Nat) -> (Fin m -> Term n) -> Type
infix 5 .=.
(.=.) : Property m -> Property m -> Type
(.=.) {m = m} P Q = (n : Nat) -> (f : Fin m -> Term n) ->
                    Pair (P n f -> Q n f)
                         (Q n f -> P n f)
Max : (p : Property m) -> Property m
Max {m = m} p = \n => \f => (p n f , (k : Nat) -> (f' : Fin m -> Term k) ->
                                     p k f' -> f' .< f)
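Unfolding these definitions, Max p n f is the pair type
(p n f , (k : Nat) -> (f' : Fin m -> Term k) -> p k f' -> f' .< f)
so the k mentioned in the error message is presumably this bound variable.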
I've tried to pass all function arguments explicitly in order to avoid problems with implicit argument inference, but the error persists. Could someone give me a tip on how to solve this?

Here is the solution to my question:
Max : (p : Property m) -> Property m
Max {m = m} p = \n => \f => (p n f , (k : Nat) -> (f' : Fin m -> Term k) -> p k f' -> f' .< f)
applySnd : Max {m = m} p n f -> (k : Nat) -> (f' : Fin m -> Term k) -> p k f' -> f' .< f
applySnd = snd
maxEquiv : p .=. q -> Max p .=. Max q
maxEquiv pr n f = ( \ a => ( fst (pr n f) (fst a)
                           , \ n1 => \ g => \pr1 => (applySnd a) n1 g (snd (pr n1 g) pr1))
                  , \ a' => (snd (pr n f) (fst a')
                            , \ n2 => \ g' => \ pr2 => (applySnd a') n2 g' (fst (pr n2 g') pr2)))
I just used a function applySnd that does the same thing as snd. I do not know why this is necessary... probably an Idris bug...
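Presumably applySnd helps because its standalone type signature pins down the implicit arguments (m, p, n and f) of the dependent pair type that snd projects from, which the elaborator fails to infer at the original call site. An untested alternative along the same lines would be to annotate the projection in place, e.g. something like
the ((k : Nat) -> (f' : Fin m -> Term k) -> p k f' -> f' .< f) (snd a)
with the implicit m and p brought into scope, but the named helper keeps the proof term readable.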

Related

Hole with Delay in type. How to prove?

I was trying to prove that replicate1 works correctly by showing that all elements of replicate1 n x are x:
all1 : (p : a -> Bool) -> List a -> Bool
all1 p [] = True
all1 p (x :: xs) = p x && all1 p xs
replicate1 : (n: Nat) -> a -> List a
replicate1 Z x = [x]
replicate1 (S k) x = x :: replicate1 k x
all_replicate_is_x : Eq a => {x: a} -> all1 (== x) (replicate1 n x) = True
all_replicate_is_x {n = Z} = ?hole
all_replicate_is_x {n = (S k)} = ?all_replicate_is_x_rhs_2
The base case hole is
Test.hole [P]
`-- a : Type
constraint : Eq a
x : a
-----------------------------------------
Test.hole : x == x && Delay True = True
How to prove this?
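One possible way forward, sketched under the assumption that strengthening the type with an extra hypothesis is acceptable: the Eq interface provides no laws, so for an arbitrary a there is nothing forcing x == x to evaluate to True, and the hole cannot be closed as stated. A workaround is to require a reflexivity proof for == explicitly and rewrite with it:
all_replicate_is_x : Eq a => {x : a} -> (eqRefl : x == x = True) -> all1 (== x) (replicate1 n x) = True
all_replicate_is_x {n = Z} eqRefl = rewrite eqRefl in Refl
all_replicate_is_x {n = (S k)} eqRefl = rewrite eqRefl in all_replicate_is_x {n = k} eqRefl
The Delay in the hole comes from && being lazy in its second argument; once x == x is rewritten to True, the conjunction reduces and the Delay disappears.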

Idris rewrite does not happen

import Data.Vect
import Data.Vect.Quantifiers
sameKeys : Vect n (lbl, Type) -> Vect n (lbl, Type) -> Type
sameKeys xs ys = All (uncurry (=)) (zip (map fst xs) (map fst ys))
g : {xs,ys : Vect n (lbl, Type)} -> sameKeys xs ys -> map (\b => fst b) xs = map (\b => fst b) ys
g {xs = []} {ys = []} [] = Refl
g {xs = x::xs} {ys = y::ys} (p::ps) = rewrite g ps in ?q
This is the error I see:
*main> :load main.idr
Type checking ./main.idr
main.idr:57:3:When checking right hand side of g with expected type
map (\b => fst b) (x :: xs) = map (\b6 => fst b6) (y :: ys)
rewriting
Data.Vect.Vect n implementation of Prelude.Functor.Functor, method map (\b => fst b) xs
to
Data.Vect.Vect n implementation of Prelude.Functor.Functor, method map (\b6 => fst b6) ys
did not change type
fst x :: Data.Vect.Vect n implementation of Prelude.Functor.Functor, method map (\b => fst b) xs = fst y :: Data.Vect.Vect n implementation of Prelude.Functor.Functor, method map (\b6 => fst b6) ys
Holes: Main.g
Why does it not rewrite it?
This is happening because Idris somehow fails to infer the correct implicit arguments to g; instead, it introduces fresh vectors into the context.
As a workaround, I suggest proving it as follows. First, we'll need a congruence lemma for two-argument functions:
total
cong2 : {f : a -> b -> c} -> (a1 = a2) -> (b1 = b2) -> f a1 b1 = f a2 b2
cong2 Refl Refl = Refl
Now the proof of the original lemma is trivial:
total
g : sameKeys xs ys -> map (\b => fst b) xs = map (\b => fst b) ys
g {xs = []} {ys = []} x = Refl
g {xs = x :: xs} {ys = y :: ys} (p :: ps) = cong2 p $ g ps
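In the cons case the pieces line up as follows: p has type fst x = fst y (uncurry (=) applied to the pair of heads), g ps proves map (\b => fst b) xs = map (\b => fst b) ys for the tails, and cong2 with f instantiated to (::) combines them into the required equality of the two cons cells.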

List Equality w/ `cong`

Following my other question, I tried to implement the actual exercise in Type-Driven Development with Idris for same_cons: proving that, given two equal lists, prepending the same element to each list results in two equal lists.
Example:
prove that 1 :: [1,2,3] == 1 :: [1,2,3]
So I came up with the following code that compiles:
sameS : {xs : List a} -> {ys : List a} -> (x: a) -> xs = ys -> x :: xs = x :: ys
sameS {xs} {ys} x prf = cong prf
same_cons : {xs : List a} -> {ys : List a} -> xs = ys -> x :: xs = x :: ys
same_cons prf = sameS _ prf
I can call it via:
> same_cons {x=5} {xs = [1,2,3]} {ys = [1,2,3]} Refl
Refl : [5, 1, 2, 3] = [5, 1, 2, 3]
Regarding the cong function, my understanding is that it takes a proof, i.e. a = b, but I don't understand its second argument: f a.
> :t cong
cong : (a = b) -> f a = f b
Please explain.
If you have two values u : c and v : c, and a function f : c -> d, then if you know that u = v, it has to follow that f u = f v, simply by referential transparency.
cong is the proof of the above statement.
In this particular use case, you are setting (via unification) c and d to List a, u to xs, v to ys, and f to (x ::), since you want to prove that xs = ys -> x :: xs = x :: ys.
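For illustration, since f is an ordinary implicit argument of cong, the same proof can also be written with the function spelled out (same_cons' is just an illustrative name):
same_cons' : {xs : List a} -> {ys : List a} -> xs = ys -> x :: xs = x :: ys
same_cons' {x} prf = cong {f = (x ::)} prf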

In Idris, how to write a "vect generator" function that takes a function of the index as a parameter

I'm trying to write in Idris a function that creates a Vect, given the size of the Vect and a function taking the index as a parameter.
So far, I have this:
import Data.Fin
import Data.Vect
generate: (n:Nat) -> (Nat -> a) ->Vect n a
generate n f = generate' 0 n f where
  generate': (idx:Nat) -> (n:Nat) -> (Nat -> a) -> Vect n a
  generate' idx Z f = []
  generate' idx (S k) f = (f idx) :: generate' (idx + 1) k f
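For example, generate 3 (\i => i) should evaluate to [0, 1, 2] : Vect 3 Nat.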
But I would like to ensure that the function passed as a parameter only takes indices less than the size of the Vect.
I tried this:
generate: (n:Nat) -> (Fin n -> a) ->Vect n a
generate n f = generate' 0 n f where
  generate': (idx:Fin n) -> (n:Nat) -> (Fin n -> a) -> Vect n a
  generate' idx Z f = []
  generate' idx (S k) f = (f idx) :: generate' (idx + 1) k f
But it doesn't compile, giving the error:
Can't convert
Fin n
with
Fin (S k)
My question is: is what I want to do possible, and how?
The key idea is that the first element of the vector is f 0, and for the tail, if you have k : Fin n, then FS k : Fin (S n) is a "shift" of the finite number that increments its value and its type at the same time.
Using this observation, we can rewrite generate as
generate : {n : Nat} -> (f : Fin n -> a) -> Vect n a
generate {n = Z} f = []
generate {n = S _} f = f 0 :: generate (f . FS)
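For example, with the version just defined and finToNat from Data.Vect's companion module Data.Fin, the REPL should give:
> generate {n = 3} finToNat
[0, 1, 2] : Vect 3 Nat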
Another possibility is to do what @dfeuer suggested and generate a vector of Fins, then map f over it:
fins : (n : Nat) -> Vect n (Fin n)
fins Z = []
fins (S n) = FZ :: map FS (fins n)
generate' : {n : Nat} -> (f : Fin n -> a) -> Vect n a
generate' f = map f $ fins _
Proving generate f = generate' f is left as an exercise to the reader.
Cactus's answer appears to be about the best way to get what you asked for, but if you want something that can be used at runtime, it will be quite inefficient. The essential reason is that weakening a Fin n to a Fin (n + m) requires completely deconstructing it to change the type of its FZ and then building it back up again, so there can be no sharing at all between the Fin values produced for each vector element. An alternative is to combine a Nat with a proof that it is below a given bound, which leads to the possibility of erasure:
data NFin : Nat -> Type where
  MkNFin : (m : Nat) -> .(LT m n) -> NFin n
lteSuccLeft : LTE (S n) m -> LTE n m
lteSuccLeft {n = Z} prf = LTEZero
lteSuccLeft {n = (S k)} {m = Z} prf = absurd (succNotLTEzero prf)
lteSuccLeft {n = (S k)} {m = (S j)} (LTESucc prf) = LTESucc (lteSuccLeft prf)
countDown' : (m : Nat) -> .(m `LTE` n) -> Vect m (NFin n)
countDown' Z mLTEn = []
countDown' (S k) mLTEn = MkNFin k mLTEn :: countDown' k (lteSuccLeft mLTEn)
countDown : (n : Nat) -> Vect n (NFin n)
countDown n = countDown' n lteRefl
countUp : (n : Nat) -> Vect n (NFin n)
countUp n = reverse $ countDown n
generate : (n : Nat) -> (NFin n -> a) -> Vect n a
generate n f = map f (countUp n)
As in the Fin approach, the function you pass to generate does not need to work on all naturals; it only needs to handle ones less than n.
I used the NFin type to explicitly indicate that I want the LT m n proof to be erased in all cases. If I didn't want/need that, I could just use (m ** LT m n) instead.
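As a quick usage sketch (nfinToNat is just an illustrative helper, not part of the code above):
nfinToNat : NFin n -> Nat
nfinToNat (MkNFin m _) = m
With it, generate 3 nfinToNat should evaluate to [0, 1, 2] : Vect 3 Nat.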

Promoting free variables in type terms to implicit function arguments

In order for my question to be meaningful, I must provide some background.
I think it would be useful to have a dependently typed language that can infer the existence and type of an argument a for a function whose other parameters and/or return value have types that depend on a. Consider the following snippet in a language I am designing:
(* Backticks are used for infix functions *)
def Cat (`~>` : ob -> ob -> Type) :=
  sig
    exs id : a ~> a
    exs `.` : b ~> c -> a ~> b -> a ~> c
    exs lid : id . f = f
    exs rid : f . id = f
    exs asso : (h . g) . f = h . (g . f)
  end
If we make two (admittedly, unwarranted) assumptions:
No dependencies must exist that cannot be inferred from explicitly provided information.
Every free variable must be converted into an implicit argument of the last identifier introduced using def or exs.
We can interpret the above snippet as being equivalent to the following one:
def Cat {ob} (`~>` : ob -> ob -> Type) :=
  sig
    exs id : all {a} -> a ~> a
    exs `.` : all {a b c} -> b ~> c -> a ~> b -> a ~> c
    exs lid : all {a b} {f : a ~> b} -> id . f = f
    exs rid : all {a b} {f : a ~> b} -> f . id = f
    exs asso : all {a b c d} {f : a ~> b} {g} {h : c ~> d}
               -> (h . g) . f = h . (g . f)
  end
Which is more or less the same as the following Agda snippet:
record Cat {ob : Set} (_⇒_ : ob → ob → Set) : Set₁ where
  field
    id : ∀ {a} → a ⇒ a
    _∙_ : ∀ {a b c} → b ⇒ c → a ⇒ b → a ⇒ c
    lid : ∀ {a b} {f : a ⇒ b} → id ∙ f ≡ f
    rid : ∀ {a b} {f : a ⇒ b} → f ∙ id ≡ f
    asso : ∀ {a b c d} {f : a ⇒ b} {g} {h : c ⇒ d} → (h ∙ g) ∙ f ≡ h ∙ (g ∙ f)
Clearly, two unwarranted assumptions have saved us a lot of typing!
Note: Of course, this mechanism only works as long as the original assumptions hold. For example, we cannot correctly infer the implicit arguments of the dependent function composition operator:
(* Only infers (?2 -> ?3) -> (?1 -> ?2) -> (?1 -> ?3) *)
def `.` g f x := g (f x)
In this case, we have to explicitly provide some additional information:
(* If we omitted {x}, it would become an implicit argument of `.` *)
def `.` (g : all {x} (y : B x) -> C y) (f : all x -> B x) x := g (f x)
Which can be expanded into the following:
def `.` {A} {B : A -> Type} {C : all {x} -> B x -> Type}
(g : all {x} (y : B x) -> C y) (f : all x -> B x) x := g (f x)
Here is the equivalent Agda definition, for comparison:
_∘_ : ∀ {A : Set} {B : A → Set} {C : ∀ {x} → B x → Set}
(g : ∀ {x} (y : B x) → C y) (f : ∀ x → B x) x → C (f x)
(g ∘ f) x = g (f x)
End of Note
Is the mechanism described above feasible? Even better, is there any language that implements something resembling this mechanism?
This sounds like implicit generalization in Coq.
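For comparison, Idris itself implements a restricted, signature-level form of this: any unbound lowercase name in a type signature is automatically bound as an implicit argument, so, for instance,
append : Vect n a -> Vect m a -> Vect (n + m) a
is treated roughly as
append : {a : Type} -> {n : Nat} -> {m : Nat} -> Vect n a -> Vect m a -> Vect (n + m) a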