Is there a more convenient way to use nested records?

As I mentioned before, I'm working on a library about algebra, matrices and category theory. I have decomposed the algebraic structures into a "tower" of record types, each one representing an algebraic structure. For example, to specify a monoid we first define a semigroup, and to define a commutative monoid we use the monoid definition, following the same pattern as the Agda standard library.
My trouble is that when I need a property of an algebraic structure that is deep within another one (e.g. a property of a Monoid that is part of a CommutativeSemiring), I need to use a number of projections equal to the depth of the desired algebraic structure.
As an example of my problem, consider the following "lemma":
open import Algebra
open import Algebra.Structures
open import Data.Vec
open import Relation.Binary.PropositionalEquality
open import Algebra.FunctionProperties
open import Data.Product
module _ {Carrier : Set} {_+_ _*_ : Op₂ Carrier} {0# 1# : Carrier}
         (ICSR : IsCommutativeSemiring _≡_ _+_ _*_ 0# 1#) where

  csr : CommutativeSemiring _ _
  csr = record { isCommutativeSemiring = ICSR }

  zipWith-replicate-0# : ∀ {n} (xs : Vec Carrier n) → zipWith _+_ (replicate 0#) xs ≡ xs
  zipWith-replicate-0# []       = refl
  zipWith-replicate-0# (x ∷ xs) =
    cong₂ _∷_
          (proj₁ (IsMonoid.identity
                    (IsCommutativeMonoid.isMonoid
                      (IsCommutativeSemiring.+-isCommutativeMonoid
                        (CommutativeSemiring.isCommutativeSemiring csr))))
                 x)
          (zipWith-replicate-0# xs)
Note that in order to access the left identity property of a monoid, I need to project it from the monoid that is within the commutative monoid that lies in the structure of a commutative semiring.
My concern is that, as I add more and more algebraic structures, such lemmas will become unreadable.
My question is: Is there a pattern or trick that can avoid this "ladder" of record projections?
Any clue on this is highly welcome.

If you look at the Agda standard library, you'll see that for most specialized algebraic structures, the record defining them re-exports the less specialized structure via open ... public. E.g. in Algebra.AbelianGroup, we have:
record AbelianGroup c ℓ : Set (suc (c ⊔ ℓ)) where
  -- ... snip ...

  open IsAbelianGroup isAbelianGroup public

  group : Group _ _
  group = record { isGroup = isGroup }

  open Group group public using (setoid; semigroup; monoid; rawMonoid)

  -- ... snip ...
So an AbelianGroup record will have not just the AbelianGroup/IsAbelianGroup fields available, but also setoid, semigroup, monoid and rawMonoid from Group. In turn, setoid, monoid and rawMonoid in Group come from similarly re-exporting these fields from Monoid with open ... public.
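For instance (a minimal sketch; the anonymous module and the name reassociate are mine), opening an AbelianGroup brings assoc into scope directly, even though it originates in IsSemigroup:

module _ {c ℓ} (AG : AbelianGroup c ℓ) where
  open AbelianGroup AG

  -- assoc is inherited through the chain of re-exports,
  -- so no ladder of projections is needed:
  reassociate : ∀ x y z → ((x ∙ y) ∙ z) ≈ (x ∙ (y ∙ z))
  reassociate = assoc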
Similarly, for algebraic property witnesses, they re-export the less specialized version's fields, e.g. in IsAbelianGroup we have
record IsAbelianGroup
         {a ℓ} {A : Set a} (≈ : Rel A ℓ)
         (∙ : Op₂ A) (ε : A) (⁻¹ : Op₁ A) : Set (a ⊔ ℓ) where
  -- ... snip ...
  open IsGroup isGroup public
  -- ... snip ...
and then IsGroup re-exports IsMonoid, IsMonoid re-exports IsSemigroup, and so on. So, if you have IsAbelianGroup open, you can still use assoc (coming from IsSemigroup) without having to write out the whole path to it by hand.
The bottom line is you can write your function as follows:
open CommutativeSemiring csr hiding (refl)
open import Function using (_⟨_⟩_)

zipWith-replicate-0# : ∀ {n} (xs : Vec Carrier n) → zipWith _+_ (replicate 0#) xs ≡ xs
zipWith-replicate-0# []       = refl
zipWith-replicate-0# (x ∷ xs) = proj₁ +-identity x ⟨ cong₂ _∷_ ⟩ zipWith-replicate-0# xs


Is there a notion of "heterogeneous collection of a given shape"?

A common pattern in functional programming languages with a sufficiently advanced type system is to have a type of "heterogeneous lists". For instance, given a list defined as:
data List a = Nil | Cons a (List a)
(Note: For concreteness, I will use Idris in this question, but this could also be answered in Haskell (with the right extensions), Agda, etc...)
We can define HList:
data HList : List a -> Type where
  Nil  : HList []
  (::) : a -> HList as -> HList (a :: as)
This is a list which holds a value of a different type (specified by the type-level List a) at each "position" of the List data type. This made me wonder: can we generalize this construction? For instance, given a simple tree-like structure:
data Tree a = Branch a [Tree a]
Does it make sense to define a heterogeneous tree?
data HTree : Tree a -> Type where
  ...
More generally, in a dependently-typed language, is it possible to define a general construction:
data Hetero : (f : Type -> Type) -> f a -> Type where
  ....
that takes a data type of kind Type -> Type and returns the "heterogeneous container" of shape f? Has anyone made use of this construction before?
We can talk about the shape of any functor using map and propositional equality. In Idris 2:
Hetero : (f : Type -> Type) -> Functor f => f Type -> Type
Hetero f tys = (x : f (A : Type ** A) ** map fst x = tys)
The type (A : Type ** A) is the type of non-empty types, in other words, values of arbitrary type. We get heterogeneous collections by putting arbitrarily typed values into functors, then constraining the types elementwise to particular types.
Some examples:
ex1 : Hetero List [Bool, Nat, Bool]
ex1 = ([(_ ** True), (_ ** 10), (_ ** False)] ** Refl)
data Tree : Type -> Type where
  Leaf : a -> Tree a
  Node : Tree a -> Tree a -> Tree a

Functor Tree where
  map f (Leaf a)   = Leaf (f a)
  map f (Node l r) = Node (map f l) (map f r)

ex2 : Hetero Tree (Node (Leaf Bool) (Leaf Nat))
ex2 = (Node (Leaf (_ ** False)) (Leaf (_ ** 10)) ** Refl)
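As a sanity check (a sketch; the names HList' and ex3 are mine), the heterogeneous list from the question falls out as the List instance of this construction:

HList' : List Type -> Type
HList' tys = Hetero List tys

ex3 : HList' [Bool, Nat]
ex3 = ([(_ ** True), (_ ** 10)] ** Refl)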

Defining groups in Idris

I defined a monoid in Idris as
interface Is_monoid (ty : Type) (op : ty -> ty -> ty) where
  id_elem : () -> ty
  proof_of_left_id : (a : ty) -> ((op a (id_elem ())) = a)
  proof_of_right_id : (a : ty) -> ((op (id_elem ()) a) = a)
  proof_of_associativity : (a, b, c : ty) -> ((op a (op b c)) = (op (op a b) c))
then tried to define groups as
interface (Is_monoid ty op) => Is_group (ty : Type) (op : ty -> ty -> ty) where
  inverse : ty -> ty
  proof_of_left_inverse : (a : ty) -> (a = (id_elem ()))
but during compilation it showed
When checking type of Group.proof_of_left_inverse:
Can't find implementation for Is_monoid ty op
Is there a way around it?
The error message is a bit misleading, but indeed, the compiler does not know which implementation of Is_monoid to use for your call to id_elem in your definition of proof_of_left_inverse. You can make it work by making the call more explicit:
proof_of_left_inverse : (a : ty) -> (a = (id_elem {ty = ty} {op = op} ()))
Now, why is this necessary? If we have a simple interface like
interface Pointed a where
  x : a
we can just write a function like
origin : (Pointed b) => b
origin = x
without specifying any type parameters explicitly.
One way to understand this is to look at interfaces and implementations through the lens of other, more basic Idris features. x can be thought of as a function
x : {a : Type} -> {auto i : PointedImpl a} -> a
where PointedImpl is some pseudo type that represents the implementations of Pointed. (Think a record of functions.)
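To make the analogy concrete, such a pseudo type could be rendered as an ordinary record (purely illustrative; this is not what the compiler actually generates):

-- A hypothetical dictionary for Pointed: one field per interface method.
record PointedImpl a where
  constructor MkPointedImpl
  pointedX : a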
Similarly, origin looks something like
origin : {b : Type} -> {auto j : PointedImpl b} -> b
x notably has two implicit arguments, which the compiler tries to infer during type checking and unification. In the above example, we know that origin has to return a b, so we can unify a with b.
Now i is also auto, so it is not only subject to unification (which does not help here); in addition, the compiler looks for "surrounding values" that can fill that hole if no explicit one was specified. The first place to look, after local variables (which we don't have), is the parameter list, where we indeed find j.
Thus, our call to origin resolves without us having to explicitly specify any additional arguments.
Your case is more akin to this:
interface Test a b where
  x : a
  y : b

test : (Test c d) => c
test = x
This will error in the same manner your example did. Going through the same steps as above, we can write
x : {a : Type} -> {b : Type} -> {auto i : TestImpl a b} -> a
test : {c : Type} -> {d : Type} -> {auto j : TestImpl c d} -> c
As above, we can unify a and c, but there is nothing that tells us what d is supposed to be. Specifically, we can't unify it with b, and consequently we can't unify TestImpl a b with TestImpl c d and thus we can't use j as value for the auto-parameter i.
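By analogy with the id_elem fix above, one way out would be to name the unsolvable implicit explicitly (an untested sketch following the same recipe; it assumes the interface parameters can be addressed by name, as in the fix above):

test : (Test c d) => c
test = x {b = d}  -- b cannot be unified with anything, so we pin it down by hand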
Note that I don't claim that this is how things are implemented under the covers. This is just an analogy in a sense, but one that holds up to at least some scrutiny.

Idris - use same interface instance

I have a data structure
record MorphismOfMonoids domain codomain where
  constructor MkMorphismOfMonoids
  func : domain -> codomain
  funcRespId : (Monoid domain, Monoid codomain) => func (Algebra.neutral) = Algebra.neutral
which just says that a MorphismOfMonoids is a morphism between monoids which needs to respect the identity.
I'm trying to prove that the identity morphism is such a MorphismOfMonoids:
monoidIdentity : Monoid m => MorphismOfMonoids m m
monoidIdentity = MkMorphismOfMonoids
                   id
                   ?respId
The easy shot of filling ?respId with Refl does not work, because there are too many Monoid instances available. How can I tell the compiler that I would like to use only the instance coming from the monoidIdentity definition?
The "proper" solution to this requires (1) writing a proof of a form (m1 : Monoid m, m2 : Monoid m) => m1 = m2 and 2) being able to reify the two Monoid implementations from funcRespId to feed them into step 1. While the former can be simulated with a postulate/assert, it's the latter step that becomes problematic, which is probably related to https://github.com/idris-lang/Idris-dev/issues/4591.
A simpler workaround is to trivialize reification by storing implementations directly in the record:
record MorphismOfMonoids domain codomain where
  constructor MkMorphismOfMonoids
  func : domain -> codomain
  mon1 : Monoid domain
  mon2 : Monoid codomain
  funcRespId : func (Algebra.neutral @{mon1}) = Algebra.neutral @{mon2}

monoidIdentity : Monoid m => MorphismOfMonoids m m
monoidIdentity @{mon} = MkMorphismOfMonoids id mon mon Refl

Theorem Proving in Idris

I was reading the Idris tutorial. And I can't understand the following code.
disjoint : (n : Nat) -> Z = S n -> Void
disjoint n p = replace {P = disjointTy} p ()
  where
    disjointTy : Nat -> Type
    disjointTy Z     = ()
    disjointTy (S k) = Void
So far, what I have figured out is:
Void is the empty type which is used to prove something is impossible.
replace : (x = y) -> P x -> P y
replace uses an equality proof to transform a predicate.
My questions are:
which one is an equality proof? (Z = S n)?
which one is a predicate? the disjointTy function?
What's the purpose of disjointTy? Does disjointTy Z = () mean that Z is in one type "land", (), and (S k) is in another land, Void?
In what way can a Void output represent contradiction?
P.S. What I know about proving is "if no case matches, then it is false" or "find one thing that is contradictory"...
which one is an equality proof? (Z = S n)?
The p parameter is the equality proof here. p has type Z = S n.
which one is a predicate? the disjointTy function?
Yes, you are right.
What's the purpose of disjointTy?
Let me repeat the definition of disjointTy here:
disjointTy : Nat -> Type
disjointTy Z = ()
disjointTy (S k) = Void
The purpose of disjointTy is to be the predicate that the replace function needs. This consideration determines the type of disjointTy, viz. [domain] -> Type. Since we have an equality between natural numbers, [domain] is Nat.
To understand how the body has been constructed we need to take a look at replace one more time:
replace : (x = y) -> P x -> P y
Recall that we have p of type Z = S n, so x from the above type is Z and y is S n. To call replace we need to construct a term of type P x, i.e. P Z in our case. This means P Z must be a type whose terms are easy to construct; the unit type is the ideal candidate for this. That justifies the disjointTy Z = () clause of the definition of disjointTy. Of course it's not the only option; we could have used any other inhabited (non-empty) type, like Bool or Nat, etc.
The return value in the second clause of disjointTy is obvious now -- we want replace to return a value of Void type, so P (S n) has to be Void.
Next, we use disjointTy like so:
replace {P = disjointTy} p ()
        ^                ^ ^
        |                | |
        |                | the value of `()` type
        |                |
        |                proof term of Z = S n
        |
        we are saying "this is the predicate"
As a bonus, here is an alternative proof:
disjoint : (n : Nat) -> Z = S n -> Void
disjoint n p = replace {P = disjointTy} p False
  where
    disjointTy : Nat -> Type
    disjointTy Z     = Bool
    disjointTy (S k) = Void
I have used False, but could have used True -- it doesn't matter. What matters is our ability to construct a term of type disjointTy Z.
In what way can an Void output represent contradiction?
Void is defined like so:
data Void : Type where
It has no constructors! There is no way to create a term of this type whatsoever (assuming Idris's implementation is correct and the underlying logic of Idris is sound, etc.). So if some function claims it can return a term of type Void, there must be something fishy going on. Our function says: if you give me a proof of Z = S n, I will return a term of the empty type. This means Z = S n cannot be constructed in the first place, because that would lead to a contradiction.
Yes, p : x = y is an equality proof. So p is an equality proof and Z = S n is an equality type.
Also yes: usually any P : a -> Type is called a predicate, like IsSucc : Nat -> Type. In Boolean logic, a predicate would map Nat to true or false. Here, a predicate holds if we can construct a proof for it: it is true if we can construct a witness (prf : IsSucc 4), and false if we cannot (there is no member of IsSucc Z).
At the end, we want Void. Read the replace call as Z = S n -> disjointTy Z -> disjointTy (S n), that is, Z = S n -> () -> Void. So replace needs two arguments: the proof p : Z = S n and the unit () : (), and voilà, we have a Void. By the way, instead of () you could use any type that you can construct, e.g. disjointTy Z = Nat and then use Z instead of ().
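Spelled out, that Nat-based variant has the same shape as the Bool version above (a direct transcription of the remark just made):

disjoint : (n : Nat) -> Z = S n -> Void
disjoint n p = replace {P = disjointTy} p Z
  where
    disjointTy : Nat -> Type
    disjointTy Z     = Nat
    disjointTy (S k) = Void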
In dependent type theory we construct proofs like prf : IsSucc 4. We would say we have a proof prf that IsSucc 4 is true; prf is also called a witness for IsSucc 4. But with this alone we could only prove things to be true. This is the definition of Void:
data Void : Type where
There is no constructor. So we cannot construct a witness that Void holds. If you somehow ended up with a prf : Void, something is wrong and you have a contradiction.

Can I avoid committing to particular types in a module type and still get pattern matching?

I have two module types:
module type ORDERED =
sig
  type t
  val eq  : t * t -> bool
  val lt  : t * t -> bool
  val leq : t * t -> bool
end

module type STACK =
sig
  exception Empty
  type 'a t
  val empty   : 'a t
  val isEmpty : 'a t -> bool
  val cons    : 'a * 'a t -> 'a t
  val head    : 'a t -> 'a
  val tail    : 'a t -> 'a t
  val length  : 'a t -> int
end
I want to write a functor which "lifts" the order relation from the basic ORDERED type to STACKs of that type. That can be done by saying, for example, that two stacks are equal if all their individual elements are equal, and that stacks s1 and s2 satisfy s1 < s2 if their first elements, e1 and e2, satisfy e1 < e2, and so on.
Now, if I don't commit to explicitly defining the type in the module type, I will have to write something like this (or won't I?):
module StackLift (O : ORDERED) (S : STACK) : ORDERED =
struct
  type t = O.t S.t

  let rec eq (x, y) =
    if S.isEmpty x
    then if S.isEmpty y
         then true
         else false
    else if S.isEmpty y
         then false
         else if O.eq (S.head x, S.head y)
              then eq (S.tail x, S.tail y)
              else false

  (* etc. for lt and leq *)
end
which is a very clumsy way of doing what pattern matching serves so well. An alternative would be to impose the definition of type STACK.t using explicit constructors, but that would tie my general module somewhat to a particular implementation, which I don't want to do.
Question: can I define something different above so that I can still use pattern matching while at the same time keeping the generality of the module types?
As an alternative or supplement to the other access functions, the module can provide a view function that returns a variant type to use in pattern matching.
type ('a, 's) stack_view = Nil | Cons of 'a * 's

module type STACK =
sig
  val view : 'a t -> ('a, 'a t) stack_view
  ...
end
module StackLift (O : ORDERED) (S : STACK) : ORDERED =
struct
  let rec eq (x, y) =
    match S.view x, S.view y with
    | Cons (x, xs), Cons (y, ys) -> O.eq (x, y) && eq (xs, ys)
    | Nil, Nil -> true
    | _ -> false
  ...
end
Any stack with a head and tail function can have a view function too, regardless of the underlying data structure.
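For instance (a sketch; it would live inside an implementation of the original STACK signature, with isEmpty, head and tail behaving as usual), view can be derived from the existing operations:

(* Derive a pattern-matchable view from the abstract operations. *)
let view (s : 'a t) : ('a, 'a t) stack_view =
  if isEmpty s then Nil
  else Cons (head s, tail s)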
I believe you've answered your own question. A module type in OCaml is an interface which you cannot look behind. Otherwise, there's no point. You cannot keep the generality of the interface while exposing details of the implementation; the only thing you can use is what's been exposed through the interface.
My answer to your question is yes: there might be something you can do to your definition of stack that would make the type of a stack a little more complex, thereby making it match a different pattern than just a single value, (val, val) for instance. However, you've got a fine definition of a stack to work with, and adding more type-fluff is probably a bad idea.
Some suggestions with regard to your definitions:
Rename the following functions: cons => push, head => peek, tail => pop_. I would also add a function val pop : 'a t -> 'a * 'a t, in order to combine head and tail into one function, as well as to mirror cons. Your current naming scheme seems to imply that a list is backing your stack, which is a mental leak of the implementation :D.
Why do eq, lt, and leq take a pair as their parameter? In constraining the type of eq to be val eq : t * t -> bool, you're forcing the programmer that uses your interface to keep one side of the equality predicate around until they've got the other side, before finally applying the function. Unless you have a very good reason, I would use the curried form of the function, since it provides a little more freedom to the user (val eq : t -> t -> bool). The freedom comes in that they can partially apply eq and pass the resulting function around instead of the value and function together. Both suggestions are sketched below.
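Put together, the suggested signatures might read as follows (illustrative only; the names follow the suggestions above):

module type ORDERED =
sig
  type t
  val eq  : t -> t -> bool   (* curried instead of taking a pair *)
  val lt  : t -> t -> bool
  val leq : t -> t -> bool
end

module type STACK =
sig
  exception Empty
  type 'a t
  val empty   : 'a t
  val isEmpty : 'a t -> bool
  val push    : 'a * 'a t -> 'a t   (* was cons *)
  val peek    : 'a t -> 'a          (* was head *)
  val pop_    : 'a t -> 'a t        (* was tail *)
  val pop     : 'a t -> 'a * 'a t   (* new: peek and pop_ combined *)
  val length  : 'a t -> int
end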