Idris - use same interface instance

I have a data structure
record IdentityPreservingMorphism domain codomain where
  constructor MkMorphismOfMonoids
  func : domain -> codomain
  funcRespId : (Monoid domain, Monoid codomain) => func (Algebra.neutral) = Algebra.neutral
which just says that an IdentityPreservingMorphism is a morphism between monoids which needs to respect the identity.
I'm trying to prove that the identity morphism is an IdentityPreservingMorphism:
monoidIdentity : Monoid m => IdentityPreservingMorphism m m
monoidIdentity = MkMorphismOfMonoids
  id
  ?respId
The easy shot of filling ?respId with Refl does not work because there are too many Monoid instances available. How can I tell the compiler that I would like to use only the instance coming from the monoidIdentity definition?

The "proper" solution to this requires (1) writing a proof of a form (m1 : Monoid m, m2 : Monoid m) => m1 = m2 and 2) being able to reify the two Monoid implementations from funcRespId to feed them into step 1. While the former can be simulated with a postulate/assert, it's the latter step that becomes problematic, which is probably related to https://github.com/idris-lang/Idris-dev/issues/4591.
A simpler workaround is to trivialize reification by storing the implementations directly in the record:
record MorphismOfMonoids domain codomain where
  constructor MkMorphismOfMonoids
  func : domain -> codomain
  mon1 : Monoid domain
  mon2 : Monoid codomain
  funcRespId : func (Algebra.neutral @{mon1}) = Algebra.neutral @{mon2}

monoidIdentity : Monoid m => MorphismOfMonoids m m
monoidIdentity @{mon} = MkMorphismOfMonoids id mon mon Refl
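As a quick check of the workaround, a usage sketch, assuming the prelude's Monoid (List a) implementation is in scope:
-- the identity morphism on lists, with both stored implementations
-- picked up from the prelude's Monoid (List a)
listIdentity : MorphismOfMonoids (List Int) (List Int)
listIdentity = monoidIdentity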


Is there a notion of "heterogeneous collection of a given shape"?

A common pattern in functional programming languages with a sufficiently advanced type system is to have a type of "heterogeneous lists". For instance, given a list defined as:
data List a = Nil | Cons a (List a)
(Note: For concreteness, I will use Idris in this question, but this could also be answered in Haskell (with the right extensions), Agda, etc...)
We can define HList:
data HList : List a -> Type where
  Nil : HList []
  (::) : a -> HList as -> HList (a :: as)
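For instance, a small usage sketch (Nil and (::) here are the HList constructors just defined, disambiguated from the list ones by the expected type):
example : HList [Bool, Nat]
example = True :: 42 :: Nil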
So HList is a list which holds a value of a different type (specified by the type-level List a) at each "position" of the list. This made me wonder: can we generalize this construction? For instance, given a simple tree-like structure:
data Tree a = Branch a [Tree a]
Does it make sense to define a heterogeneous tree?
data HTree : Tree a -> Type where
  ...
More generally in a dependently-typed language, is it possible to define a general construction:
data Hetero : (f : Type -> Type) -> f a -> Type where
  ...
that takes a data type of kind Type -> Type and returns the "heterogeneous container" of shape f? Has anyone made use of this construction before?
We can talk about the shape of any functor using map and propositional equality. In Idris 2:
Hetero : (f : Type -> Type) -> Functor f => f Type -> Type
Hetero f tys = (x : f (A : Type ** A) ** map fst x = tys)
The type (A : Type ** A) is the type of non-empty types, in other words, values of arbitrary type. We get heterogeneous collections by putting arbitrarily typed values into functors, then constraining the types elementwise to particular types.
Some examples:
ex1 : Hetero List [Bool, Nat, Bool]
ex1 = ([(_ ** True), (_ ** 10), (_ ** False)] ** Refl)
data Tree : Type -> Type where
  Leaf : a -> Tree a
  Node : Tree a -> Tree a -> Tree a

Functor Tree where
  map f (Leaf a) = Leaf (f a)
  map f (Node l r) = Node (map f l) (map f r)

ex2 : Hetero Tree (Node (Leaf Bool) (Leaf Nat))
ex2 = (Node (Leaf (_ ** False)) (Leaf (_ ** 10)) ** Refl)

Defining groups in Idris

I defined a monoid in Idris as
interface Is_monoid (ty : Type) (op : ty -> ty -> ty) where
  id_elem : () -> ty
  proof_of_left_id : (a : ty) -> ((op a (id_elem ())) = a)
  proof_of_right_id : (a : ty) -> ((op (id_elem ()) a) = a)
  proof_of_associativity : (a, b, c : ty) -> ((op a (op b c)) = (op (op a b) c))
then tried to define groups as
interface (Is_monoid ty op) => Is_group (ty : Type) (op : ty -> ty -> ty) where
  inverse : ty -> ty
  proof_of_left_inverse : (a : ty) -> (a = (id_elem ()))
but during compilation it showed
When checking type of Group.proof_of_left_inverse:
Can't find implementation for Is_monoid ty op
Is there a way around this?
The error message is a bit misleading, but indeed, the compiler does not know which implementation of Is_monoid to use for your call to id_elem in your definition of proof_of_left_inverse. You can make it work by making the call more explicit:
proof_of_left_inverse : (a : ty) -> (a = (id_elem {ty = ty} {op = op} ()))
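With that change, the whole interface goes through; as a sketch, the fixed definition reads:
interface (Is_monoid ty op) => Is_group (ty : Type) (op : ty -> ty -> ty) where
  inverse : ty -> ty
  -- the explicit {ty} and {op} pin down which Is_monoid implementation is meant
  proof_of_left_inverse : (a : ty) -> (a = (id_elem {ty = ty} {op = op} ()))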
Now, why is this necessary? If we have a simple interface like
interface Pointed a where
  x : a
we can just write a function like
origin : (Pointed b) => b
origin = x
without specifying any type parameters explicitly.
One way to understand this is to look at interfaces and implementations through the lens of other, more basic Idris features. x can be thought of as a function
x : {a : Type} -> {auto i : PointedImpl a} -> a
where PointedImpl is some pseudo type that represents the implementations of Pointed. (Think a record of functions.)
Similarly, origin looks something like
origin : {b : Type} -> {auto j : PointedImpl b} -> b
x notably has two implicit arguments, which the compiler tries to infer during type checking and unification. In the above example, we know that origin has to return a b, so we can unify a with b.
Now i is also auto, so it is not only subject to unification (which does not help here): in addition, the compiler looks for "surrounding values" that can fill that hole if no explicit one was specified. The first place to look, after local variables (of which we have none), is the parameter list, where we indeed find j.
Thus, our call to origin resolves without us having to explicitly specify any additional arguments.
Your case is more akin to this:
interface Test a b where
  x : a
  y : b
test : (Test c d) => c
test = x
This will error in the same manner your example did. Going through the same steps as above, we can write
x : {a : Type} -> {b : Type} -> {auto i : TestImpl a b} -> a
test : {c : Type} -> {d : Type} -> {auto j : TestImpl c d} -> c
As above, we can unify a and c, but there is nothing that tells us what d is supposed to be. Specifically, we can't unify it with b, and consequently we can't unify TestImpl a b with TestImpl c d and thus we can't use j as value for the auto-parameter i.
Note that I don't claim that this is how things are implemented under the covers. This is just an analogy in a sense, but one that holds up to at least some scrutiny.
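For completeness, the same explicit-argument trick unblocks the toy Test example; pinning the unconstrained parameter by hand is enough (a sketch, reusing the interface above):
test : (Test c d) => c
test = x {b = d}  -- telling the compiler that x's b is our d resolves the search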

The signature for this packaged module couldn't be inferred in recursive function

I'm still trying to figure out how to split code when using mirage and its myriad of first class modules.
I've put everything I need in a big ugly Context module, to avoid having to pass ten modules to all my functions; one is pain enough.
I have a function to receive commands over TCP:
let recvCmds (type a) (module Ctx : Context with type chan = a) nodeid chan = ...
After hours of trial and error, I figured out that I needed to add (type a) and the "explicit" type chan = a to make it work. Looks ugly, but it compiles.
But if I want to make that function recursive:
let rec recvCmds (type a) (module Ctx : Context with type chan = a) nodeid chan =
  Ctx.readMsg chan >>= fun res ->
  ... more stuff ...
  |> OtherModule.getStorageForId (module Ctx)
  ... more stuff ...
  recvCmds (module Ctx) nodeid chan
I pass the module twice; the first time is no problem, but I get an error on the recursion line:
The signature for this packaged module couldn't be inferred.
and if I try to specify the signature I get
This expression has type a but an expression was expected of type 'a
The type constructor a would escape its scope
And it seems like I can't use the whole (type chan = a) thing.
If someone could explain what is going on, and ideally a way to work around it, it'd be great.
I could just use a while loop of course, but I'd rather finally understand these damn modules. Thanks!
The practical answer is that recursive functions should universally quantify their locally abstract types with let rec f : type a. ... = fun ....
More precisely, your example can be simplified to
module type T = sig type t end
let rec f (type a) (m: (module T with type t = a)) = f m
which yields the same error as yours:
Error: This expression has type (module T with type t = a)
but an expression was expected of type 'a
The type constructor a would escape its scope
This error can be fixed with an explicit forall quantification, which can be done with the short-hand notation for universally quantified locally abstract types:
let rec f: type a. (module T with type t = a) -> 'never = fun m -> f m
The reason behind this behavior is that locally abstract types should not escape the scope of the function that introduced them. For instance, this code
let ext_store = ref None
let store x = ext_store := Some x
let f (type a) (x:a) = store x
should visibly fail because it tries to store a value of type a, which is a nonsensical type outside of the body of f.
Consequently, values with a locally abstract type can only be used by polymorphic functions. For instance, this example
let id x = x
let f (type a) (x:a) : a = id x
is fine because id x works for any x.
The problem with a function like
let rec f (type a) (m: (module T with type t = a)) = f m
is then that the type of f is not yet generalized inside its own body, because type generalization in ML happens at let definitions. The fix is therefore to tell the compiler explicitly that f is polymorphic in its argument:
let rec f: 'a. (module T with type t = 'a) -> 'never =
  fun (type a) (m:(module T with type t = a)) -> f m
Here, 'a. ... is a universal quantification that should be read as forall 'a. .... The first line tells the compiler that the function f is polymorphic in its first argument, whereas the second line explicitly introduces the locally abstract type a to refine the packed module type. Splitting these two declarations is quite verbose, thus the short-hand notation combines both:
let rec f: type a. (module T with type t = a) -> 'never = fun m -> f m
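Applied back to a recvCmds-shaped function, the fix could look like the following sketch; Context, readMsg and the argument types here are assumptions standing in for the asker's real module type:
module type Context = sig
  type chan
  val readMsg : chan -> string
end

(* the explicit polymorphic annotation makes the recursive call well-typed *)
let rec recvCmds : type a. (module Context with type chan = a) -> int -> a -> unit =
  fun (module Ctx : Context with type chan = a) nodeid chan ->
    let _msg = Ctx.readMsg chan in
    (* repacking with an explicit package type keeps inference happy *)
    recvCmds (module Ctx : Context with type chan = a) nodeid chan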

Theorem Proving in Idris

I was reading Idris tutorial. And I can't understand the following code.
disjoint : (n : Nat) -> Z = S n -> Void
disjoint n p = replace {P = disjointTy} p ()
  where
    disjointTy : Nat -> Type
    disjointTy Z = ()
    disjointTy (S k) = Void
So far, what I have figured out is:
Void is the empty type which is used to prove something is impossible.
replace : (x = y) -> P x -> P y
replace uses an equality proof to transform a predicate.
My questions are:
which one is an equality proof? (Z = S n)?
which one is a predicate? the disjointTy function?
What's the purpose of disjointTy? Does disjointTy Z = () mean that Z is in one type "land", (), and S k is in another land, Void?
In what way can an Void output represent contradiction?
P.S. What I know about proving is either "nothing matches, so it is false" or "find one thing that is contradictory"...
which one is an equality proof? (Z = S n)?
The p parameter is the equality proof here. p has type Z = S n.
which one is a predicate? the disjointTy function?
Yes, you are right.
What's the purpose of disjointTy?
Let me repeat the definition of disjointTy here:
disjointTy : Nat -> Type
disjointTy Z = ()
disjointTy (S k) = Void
The purpose of disjointTy is to be the predicate that the replace function needs. This consideration determines the type of disjointTy, viz. [domain] -> Type. Since we have an equality between natural numbers, [domain] is Nat.
To understand how the body has been constructed we need to take a look at replace one more time:
replace : (x = y) -> P x -> P y
Recall that we have p of type Z = S n, so x from the above type is Z and y is S n. To call replace we need to construct a term of type P x, i.e. P Z in our case. This means the type P Z must be easily constructible; the unit type is the ideal candidate for this. This justifies the disjointTy Z = () clause of the definition of disjointTy. Of course it's not the only option: we could have used any other inhabited (non-empty) type, like Bool or Nat, etc.
The return value in the second clause of disjointTy is obvious now -- we want replace to return a value of type Void, so P (S n) has to be Void.
Next, we use disjointTy like so:
replace {P = disjointTy} p ()
        ^                ^ ^
        |                | |
        |                | the value of `()` type
        |                |
        |                proof term of `Z = S n`
        |
        we are saying "this is the predicate"
As a bonus, here is an alternative proof:
disjoint : (n : Nat) -> Z = S n -> Void
disjoint n p = replace {P = disjointTy} p False
  where
    disjointTy : Nat -> Type
    disjointTy Z = Bool
    disjointTy (S k) = Void
I have used False, but could have used True -- it doesn't matter. What matters is our ability to construct a term of type disjointTy Z.
In what way can an Void output represent contradiction?
Void is defined like so:
data Void : Type where
It has no constructors! There is no way to create a term of this type whatsoever (assuming Idris's implementation is correct and its underlying logic is sound, etc.). So if some function claims it can return a term of type Void, there must be something fishy going on. Our function says: if you give me a proof of Z = S n, I will return a term of the empty type. This means Z = S n cannot be constructed in the first place, because it leads to a contradiction.
Yes, p : x = y is an equality proof. So p is an equality proof and Z = S n is an equality type.
Also yes, usually any P : a -> Type is called a predicate, like IsSucc : Nat -> Type. In boolean logic, a predicate would map Nat to true or false. Here, a predicate holds if we can construct a proof for it. It is true if we can construct such a proof (prf : IsSucc 4), and it is false if we cannot (there is no member of IsSucc Z).
At the end, we want Void. Read the replace call as Z = S n -> disjointTy Z -> disjointTy (S n), that is Z = S n -> () -> Void. So replace needs two arguments: the proof p : Z = S n and the unit () : (), and voilà, we have a Void. By the way, instead of () you could use any type that you can construct, e.g. disjointTy Z = Nat and then use Z instead of ().
In dependent type theory we construct proofs like prf : IsSucc 4. We would say that we have a proof prf that IsSucc 4 is true. prf is also called a witness for IsSucc 4. But with this alone we could only prove things to be true. This is the definition of Void:
data Void : Type where
There is no constructor, so we cannot construct a witness that Void holds. If you somehow end up with a prf : Void, something is wrong and you have a contradiction.
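As a small usage sketch of how such a Void output gets consumed: the prelude's void : Void -> a lets you conclude anything from the impossible hypothesis.
-- with disjoint from the question, an absurd hypothesis proves anything
oneNotZero : Z = S Z -> Void
oneNotZero = disjoint Z

anythingFollows : Z = S Z -> 2 + 2 = 5
anythingFollows p = void (oneNotZero p)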

Is there a more convenient way to use nested records?

As I said before, I'm working on a library about algebra, matrices and category theory. I have decomposed the algebraic structures into a "tower" of record types, each one representing an algebraic structure. For example, to specify a monoid we first define a semigroup, and to define a commutative monoid we use the monoid definition, following the same pattern as the Agda standard library.
My trouble is that when I need a property of an algebraic structure that is deep within another one (e.g. a property of a Monoid that is part of a CommutativeSemiring), I need to use a number of projections equal to the depth of the desired algebraic structure.
As an example of my problem, consider the following "lemma":
open import Algebra
open import Algebra.Structures
open import Data.Vec
open import Relation.Binary.PropositionalEquality
open import Algebra.FunctionProperties
open import Data.Product
module _ {Carrier : Set} {_+_ _*_ : Op₂ Carrier} {0# 1# : Carrier}
         (ICSR : IsCommutativeSemiring _≡_ _+_ _*_ 0# 1#) where

  csr : CommutativeSemiring _ _
  csr = record { isCommutativeSemiring = ICSR }

  zipWith-replicate-0# : ∀ {n} (xs : Vec Carrier n) → zipWith _+_ (replicate 0#) xs ≡ xs
  zipWith-replicate-0# [] = refl
  zipWith-replicate-0# (x ∷ xs) =
    cong₂ _∷_
          (proj₁ (IsMonoid.identity
                   (IsCommutativeMonoid.isMonoid
                     (IsCommutativeSemiring.+-isCommutativeMonoid
                       (CommutativeSemiring.isCommutativeSemiring csr))))
                 x)
          (zipWith-replicate-0# xs)
Note that in order to access the left identity property of a monoid, I need to project it from the monoid that is within the commutative monoid that lies in the structure of a commutative semiring.
My concern is that, as I add more and more algebraic structures, such lemmas will become unreadable.
My question is: Is there a pattern or trick that can avoid this "ladder" of record projections?
Any clue on this is highly welcome.
If you look at the Agda standard library, you'll see that for most specialized algebraic structures, the record defining them re-exports the less specialized structure with open ... public. E.g. in Algebra.AbelianGroup, we have:
record AbelianGroup c ℓ : Set (suc (c ⊔ ℓ)) where
  -- ... snip ...
  open IsAbelianGroup isAbelianGroup public

  group : Group _ _
  group = record { isGroup = isGroup }

  open Group group public using (setoid; semigroup; monoid; rawMonoid)
  -- ... snip ...
So an AbelianGroup record will have not just the AbelianGroup/IsAbelianGroup fields available, but also setoid, semigroup, monoid and rawMonoid from Group. In turn, setoid, monoid and rawMonoid in Group come from similarly re-exporting these fields from Monoid with open ... public.
Similarly, the witnesses of algebraic properties re-export the less specialized version's fields; e.g. in IsAbelianGroup we have
record IsAbelianGroup
         {a ℓ} {A : Set a} (≈ : Rel A ℓ)
         (∙ : Op₂ A) (ε : A) (⁻¹ : Op₁ A) : Set (a ⊔ ℓ) where
  -- ... snip ...
  open IsGroup isGroup public
  -- ... snip ...
and then IsGroup reexports IsMonoid, IsMonoid reexports IsSemigroup, and so on. And so, if you have IsAbelianGroup open, you can still use assoc (coming from IsSemigroup) without having to write out the whole path to it by hand.
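The same pattern carries over to a home-grown hierarchy like the one in the question. A minimal sketch (the primed names are mine, chosen to avoid clashing with the standard library):
open import Relation.Binary.PropositionalEquality

record IsSemigroup′ {A : Set} (_∙_ : A → A → A) : Set where
  field
    assoc : ∀ x y z → ((x ∙ y) ∙ z) ≡ (x ∙ (y ∙ z))

record IsMonoid′ {A : Set} (_∙_ : A → A → A) (ε : A) : Set where
  field
    isSemigroup : IsSemigroup′ _∙_
    identityˡ   : ∀ x → (ε ∙ x) ≡ x
  -- re-export assoc so users of IsMonoid′ reach it without a projection ladder
  open IsSemigroup′ isSemigroup public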
The bottom line is that you can write your function as follows:
open CommutativeSemiring csr hiding (refl)
open import Function using (_⟨_⟩_)
zipWith-replicate-0# : ∀ {n}(xs : Vec Carrier n) → zipWith _+_ (replicate 0#) xs ≡ xs
zipWith-replicate-0# [] = refl
zipWith-replicate-0# (x ∷ xs) = proj₁ +-identity x ⟨ cong₂ _∷_ ⟩ zipWith-replicate-0# xs