Equality in Agda

I have a dependent type that consists of a value plus some proofs about its properties. Naturally I would like my notion of equality over this type to be equivalent to equality over the value component. This is all fine, except that I run into problems when proving properties of lifted notions of this equality (for example, equality over lists of this type).
For example:
open import Data.Nat
open import Data.Product
open import Relation.Binary
open import Relation.Binary.PropositionalEquality
open import Relation.Binary.List.Pointwise renaming (Rel to ListRel) hiding (refl)
module Test where
  _≈_ : Rel (ℕ × ℕ) _
  (a₁ , _) ≈ (a₂ , _) = a₁ ≡ a₂
  ≈-sym : Symmetric _≈_
  ≈-sym refl = refl
  ≋-sym : Symmetric (ListRel _≈_)
  ≋-sym = symmetric ≈-sym
In the last line, Agda complains that it can't solve for the second projections of the pairs. Interestingly, changing the last line to the following eta-equivalent expression means that it can solve them:
≋-sym = symmetric (λ {x} {y} → ≈-sym {x} {y})
Now, naturally, I know that sometimes Agda can't solve all the implicit arguments and needs a bit of extra help, but I don't understand what new information I'm providing it by doing this...
I'm doing a lot of lifting of equality and I'd rather avoid adding these ugly eta expansions everywhere in my code. I was wondering if anyone has any suggestions that would allow something similar to the original code to pass?
I have looked into irrelevance, but the second projection is used elsewhere, even if it is computationally irrelevant for equality.

I'm guessing, but I think the problem is that the order of implicit arguments doesn't matter for (a part of) unification. Consider
flipped : (({n m : ℕ} -> n ≡ m) -> ℕ) -> ({m n : ℕ} -> n ≡ m) -> ℕ
flipped k f = k f
Here k receives something of type {n m : ℕ} -> n ≡ m and f is of type {m n : ℕ} -> n ≡ m (m comes before n), but Agda happily unifies these two expressions, since each implicit argument becomes a metavariable during elaboration and metas are not ordered by when they were introduced; they are ordered by how they are instantiated (e.g. you can't instantiate α to β -> β and then β to α, as it would result in α being equal to α -> α, and handling such loops (called equirecursive types) is an unsound nightmare). So when you write
≋-sym = symmetric ≈-sym
Agda is confused, because it could mean any of
≋-sym = symmetric (λ {x} {y} → ≈-sym {x} {y})
≋-sym = symmetric (λ {x} {y} → ≈-sym {y} {x})
(well, not quite, because the second expression is ill-typed, but Agda doesn't backtrack and therefore can't solve such problems)
This is different from the flipped example, because flipped explicitly specifies how n and m are used, so unification of
{n1 m1 : ℕ} -> n1 ≡ m1
{m2 n2 : ℕ} -> n2 ≡ m2
results in n1 ≡ n2 and m1 ≡ m2 and hence the problem is solved. If you drop this specification
unsolved : (({n m : ℕ} -> ℕ) -> ℕ) -> ({m n : ℕ} -> ℕ) -> ℕ
unsolved k f = k f
you'll get unsolved metas back.
The exact problem with your definition is that only the first projections of pairs are mentioned in the RHS of _≈_, so Agda doesn't know how to unify the second projections. Here is a workaround:
record Sing {α} {A : Set α} (x : A) : Set where
module Test where
  _≈_ : Rel (ℕ × ℕ) _
  (a₁ , b₁) ≈ (a₂ , b₂) = a₁ ≡ a₂ × Sing (b₁ , b₂)
  ≈-sym : Symmetric _≈_
  ≈-sym (refl , _) = (refl , _)
  ≋-sym : Symmetric (ListRel _≈_)
  ≋-sym = symmetric ≈-sym
Sing is a dummy record that has only one inhabitant, which can be inferred automatically. But Sing makes it possible to mention b₁ and b₂ in the RHS of _≈_ in an inference-friendly way, which makes those metas in ≋-sym solvable.
Though it seems that (a₁ , b₁) ≈ (a₂ , b₂) = a₁ ≡ a₂ × Sing b₁ already gives Agda enough hints to solve the metas in ≋-sym.
You can also define a pattern synonym to make the code slightly nicer:
pattern refl₁ = (refl , _)
≈-sym : Symmetric _≈_
≈-sym refl₁ = refl₁

Related

How can I learn more about how type checking happens in Idris?

I am learning programming with dependent types and started using Idris recently. I have some background in programming languages and understand how type checking happens in languages like Haskell and ML. I wanted to know if there is some description of how type checking happens in Idris; even a flag that can be passed to the Idris compiler to dump the output of the type checker would also be useful.
For example, I want to understand how Idris type checks the correctness of the function append,
append : Vect m elem -> Vect n elem -> Vect (m + n) elem
append [] ys = ys
append (x::xs) ys = x :: (append xs ys)
The reason is that when I wrongly define append as append (x::xs) ys = x :: (append ys xs), i.e. with the arguments interchanged in the recursive call, Idris gives a type error saying that (plus k n) is not equal to (plus n k).
My guess is that even if I fix this by giving a proof that plus is commutative, Idris must still give a type error. Therefore, I want to understand how type checking happens in Idris and see whether it really works this way and, if so, how it figures this out. Thanks in advance.
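For reference (background, not part of the question): Nat addition in Idris's prelude is defined by recursion on its first argument, which is why the error shows up in one direction but not the other. A sketch of that definition:
-- plus, as defined in Prelude.Nat: it matches on its first argument only, so
-- (Z + n) reduces to n and (S k + n) reduces to S (k + n), while (n + Z) and
-- (n + S k) do not reduce.
plus : Nat -> Nat -> Nat
plus Z right = right
plus (S left) right = S (plus left right)
So in append [] ys the expected result type Vect (0 + n) elem reduces to Vect n elem and the clause checks, whereas the swapped recursive call asks the checker to equate plus k n with plus n k, which are not definitionally equal, hence the reported error.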

confused about lazy evaluation in Idris

Reading and playing around with some examples from the official Idris tutorial has left me a bit confused regarding lazy evaluation.
As stated in the tutorial, Idris uses eager evaluation, and the tutorial goes on to give an example where this would not be appropriate:
ifThenElse : Bool -> a -> a -> a
ifThenElse True t e = t
ifThenElse False t e = e
The tutorial then proceeds to show an example using lazy evaluation:
ifThenElse : Bool -> Lazy a -> Lazy a -> a
ifThenElse True t e = t
ifThenElse False t e = e
I like to try things out while reading, so I created an inefficient Fibonacci function to test out the non-lazy and lazy ifThenElse.
fibo : Nat -> Nat
fibo Z = Z
fibo (S Z) = S Z
fibo (S(S Z)) = S(S Z)
fibo (S(S(S n))) = fibo (S(S n)) + fibo (S n)
-- the non lazy
ifThenElse1 : Bool -> (t: a) -> (f: a) -> a
ifThenElse1 False t f = f
ifThenElse1 True t f = t
-- should be slow when applied to True
iftest1 : Bool -> Nat
iftest1 b = ifThenElse1 b (fibo 5) (fibo 25)
-- the lazy
ifThenElse2 : Bool -> (t: Lazy a) -> (f: Lazy a) -> a
ifThenElse2 False t f = f
ifThenElse2 True t f = t
-- should be fast when applied to True
iftest2 : Bool -> Nat
iftest2 b = ifThenElse2 b (fibo 5) (fibo 25)
Given that Idris should be performing eager evaluation, I would expect the execution of iftest1 to be slowed down by (fibo 25) even when applied to True. However, both iftest1 and iftest2 execute very fast when applied to True. So maybe my understanding of laziness/eagerness is fundamentally flawed?
What is a good example to observe the difference between laziness and eagerness in Idris?
You probably tried iftest1 and iftest2 from the Idris REPL. The REPL uses a different evaluation order than compiled code:
Being a fully dependently typed language, Idris has two phases where it evaluates things, compile-time and run-time. At compile-time it will only evaluate things which it knows to be total (i.e. terminating and covering all possible inputs) in order to keep type checking decidable. The compile-time evaluator is part of the Idris kernel, and is implemented in Haskell using a HOAS (higher order abstract syntax) style representation of values. Since everything is known to have a normal form here, the evaluation strategy doesn’t actually matter because either way it will get the same answer, and in practice it will do whatever the Haskell run-time system chooses to do.
The REPL, for convenience, uses the compile-time notion of evaluation.
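To actually observe the difference, compile and run the program instead of evaluating it at the REPL. A minimal sketch, reusing iftest1 and iftest2 from the question (compiled with, e.g., idris Main.idr -o main):
-- With eager evaluation, iftest1 True still evaluates (fibo 25) before the call,
-- so at run time it is noticeably slower than iftest2 True, whose unused Lazy
-- argument is never forced.
main : IO ()
main = do
  printLn (iftest1 True)
  printLn (iftest2 True)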

Defining Functor Instance for Tensor Type (Idris)

I've been learning Idris recently, and decided I'd try to write a simple tensor library. I started by defining the type.
data Tensor : Vect n Nat -> Type -> Type where
  Scalar : a -> Tensor [] a
  Dimension : Vect n (Tensor d a) -> Tensor (n::d) a
As you can see, the type Tensor is parameterized by a Vect of Nats describing the tensor's dimensions, and by a type describing its contents. So far so good. Next I decided to try making the Tensor type a Functor.
instance Functor (Tensor d) where
  map f (Scalar x) = f x
  map f (Dimension x) = map f x
And Idris gave me the following error.
Unifying `b` and `Tensor [] b` would lead to infinite type
Okay. From the error, I figured that maybe the issue was that the first pattern of map was too specific (i.e., would only accept scalars when the type declaration of map is such that it accepts any tensor). That seemed odd, but I figured I'd try rewriting it using a with statement.
dimensions : {d : Vect n Nat} -> Tensor d a -> Vect n Nat
dimensions {d} _ = d
instance Functor (Tensor d) where
  map f t with (dimensions t)
    map f (Scalar x) | [] = f x
    map f (Dimension x) | (_::_) = map f x
But I got the same error. I have quite a bit of experience in Haskell, but I'm still not quite used to the lingo used in dependently typed programming in general and by Idris in particular, so any help understanding the error message would be greatly appreciated.
(Note: as of Idris 0.10, the instance keyword is deprecated and should simply be left out.)
The task is to apply the function to all elements in the Scalar constructors, but otherwise leave the structure unchanged. So, we need to map Scalar to Scalar and Dimension to Dimension, and since Dimension contains a vector of recursive occurrences, we should use Vect's map to access them.
Functor (Tensor d) where
  map f (Scalar x) = Scalar (f x)
  map f (Dimension xs) = Dimension (map (map f) xs)
So, in map (map f) xs, the first map is for mapping over Vect, and map f is the recursive call.
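For instance, a small usage sketch under the definitions above (the example tensor itself is made up for illustration):
-- A one-dimensional tensor with two elements, and the result of mapping over it.
example : Tensor [2] Nat
example = Dimension [Scalar 1, Scalar 2]
doubled : Tensor [2] Nat
doubled = map (\n => n * 2) example   -- Dimension [Scalar 2, Scalar 4]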

Is there a modulo function in Idris?

In Haskell, there are the mod and rem functions. Are there similar functions in Idris, particularly defined over Nat?
In Prelude.Nat there are
modNatNZ : Nat -> (y : Nat) -> Not (y = 0) -> Nat
modNat : Nat -> Nat -> Nat
The first one needs a proof that the divisor is not zero, while the second is partial (i.e. it can crash at runtime). Handily, there is also a proof,
SIsNotZ : {x : Nat} -> Not (S x = Z)
that a successor cannot be zero. So you can just use modNatNZ 10 3 SIsNotZ and the unification system will prove Not (3 = 0). You can see how modNatNZ works in the Prelude.Nat source. As Nat is always non-negative, a remainder function would behave the same.
Otherwise, a generic
mod : Integral ty => ty -> ty -> ty
is defined for all types implementing Integral (e.g. Int).
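Putting both options together, a quick sketch (the results in the comments assume Idris 1's Prelude and the names described above):
natRemainder : Nat
natRemainder = modNatNZ 10 3 SIsNotZ   -- 1; SIsNotZ discharges the Not (3 = 0) obligation
intRemainder : Int
intRemainder = mod 10 3                -- 1, via the Integral implementation for Int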

Apply list of values as arguments to a function

Can I apply a list of n values to a function that takes n values, where n varies?
A first naive attempt is the following, but the compiler (fairly) complains of a weird self-referential type for applyN:
applyN f xs =
  case xs of
    [] -> f
    (x::xs) -> applyN (f x) xs
I can't see how a fold would work and respect its type signature either.
For context, I want to take a list of N Json Decoders and evaluate
Json.objectN ConstructorN n1 n2 ... nN
Clearly if n is known (let's say 2) then we have
case lst of
  (d1 :: d2 :: _) -> Json.object2 Constructor2 d1 d2
  otherwise -> ....
but that's a lot of code to write if I cannot generalise for n.
I fear it is not possible, as in Haskell it needs some special compiler flags.
No, you cannot do that, at least not without dependent types or some type-level trickery, which there is none of in Elm (for reference: How do I define Lisp's apply in Haskell?).
(This is why there are all the objectN functions by the way.)
Try to restructure your code - can't f just take a list?
In this context of Json decoding, if you have a list literal with decoders, you can do something equivalent to applyN. This pattern uses the functions map : (a -> b) -> Decoder a -> Decoder b and andMap : Decoder (a -> b) -> Decoder a -> Decoder b. You use it like so:
Constructor
  `Json.map` n1
  `Json.andMap` n2
  `Json.andMap` ...
  `Json.andMap` nN
Sadly, andMap isn't offered by every core module. You can define it yourself if there is a map2 or andThen. In this case object2 works; it's basically the same as map2. So:
andMap : Decoder (a -> b) -> Decoder a -> Decoder b
andMap decFun decA =
  object2 (\f a -> f a) decFun decA
You can also use Json.Decode.Extra.apply, which is the same thing, just named in a non-standard way*.
*non-standard within the Elm world anyway
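As a concrete sketch of the pattern (pre-0.18 Elm syntax with backticks, assuming import Json.Decode as Json exposing ((:=)) and the andMap defined above; the Person type here is made up for illustration):
type alias Person = { name : String, age : Int }
personDecoder : Json.Decoder Person
personDecoder =
  Person
    `Json.map` ("name" := Json.string)
    `andMap` ("age" := Json.int)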