Is there a modulo function in Idris?

In Haskell, there are the mod and rem functions. Are there similar functions in Idris, particularly defined over Nat?

In Prelude.Nat there are
modNatNZ : Nat -> (y : Nat) -> Not (y = 0) -> Nat
modNat : Nat -> Nat -> Nat
The first one needs a proof that the divisor is not zero, while the second is partial (i.e. it can crash at runtime). Conveniently, there is also a proof,
SIsNotZ : {x: Nat} -> Not (S x = Z)
that a successor cannot be zero. So you can just use modNatNZ 10 3 SIsNotZ and the unification system will prove Not (3 = 0). You can see how modNatNZ works here. Since a Nat is never negative, mod and rem would behave the same.
Otherwise, a generic
mod : Integral ty => ty -> ty -> ty
is defined for all types implementing Integral (e.g. Int).
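For a concrete feel, a minimal sketch (assuming Idris 1, where Prelude.Nat is in scope by default):

```idris
-- Total: the unifier discharges the Not (3 = 0) obligation via SIsNotZ.
tenMod3 : Nat
tenMod3 = modNatNZ 10 3 SIsNotZ   -- 10 mod 3 = 1

-- The generic version, available for any Integral type such as Int.
tenMod3' : Int
tenMod3' = 10 `mod` 3
```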

Related

Can't find implementation with constraint in signature

I have an interface that depends on another interface, with some dependent types, and I can't get the compiler to constrain a type in a related function
import Data.Vect
interface Distribution (0 event : Nat) (dist : Nat -> Nat -> Type) where
data Gaussian : Nat -> Nat -> Type where
Distribution e Gaussian where
interface Distribution targets marginal =>
          Model (targets : Nat) (marginal : Nat -> Nat -> Type) model where
  marginalise : model -> Vect s Double -> marginal targets s
foo : m -> Model 1 Gaussian m => Vect s Double -> Nat
foo model x = let marginal = marginalise model x in ?rhs
I get
While processing right hand side of foo. Can't find an implementation for Model ?targets ?marginal m.
foo model x = let marginal = marginalise model x in ?rhs
^^^^^^^^^^^^^^^^^^^
How could this be?
If I use marginalise {marginal=Gaussian} {targets=1} model x it type checks, but I don't get why this isn't already determined by the type signature.
I don't think this question I asked about the same area applies here.
I started writing this as a comment and realized halfway through that it might work as a full-blown answer.
Model 1 Gaussian m means that you have an implementation of the Model interface with targets = 1, marginal = Gaussian and model = m. Then, the let-binding of marginal requires Model a b m, i.e. an implementation of Model where targets = a, marginal = b and model = m. But there is no requirement that a = 1 and b = Gaussian!
My guess is that once you read up on determining parameters, you will discover that you want something like:
interface Distribution targets marginal =>
          Model (targets : Nat) (marginal : Nat -> Nat -> Type) model | model where
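Spelled out with the method, the suggestion would look like this (untested sketch; `| model` tells interface search to match implementations on `model` alone and read `targets` and `marginal` off whatever it finds):

```idris
interface Distribution targets marginal =>
          Model (targets : Nat) (marginal : Nat -> Nat -> Type) model | model where
  marginalise : model -> Vect s Double -> marginal targets s

-- Search for `Model ?targets ?marginal m` now matches the in-scope
-- `Model 1 Gaussian m` constraint via `m`, so the explicit
-- {targets=1} {marginal=Gaussian} hints should no longer be needed.
```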

How can I learn more about how type checking happens in Idris?

I am learning programming using dependent types and started using Idris recently. I have some background in programming languages and understand how type checking happens in languages like Haskell and ML. I wanted to know if there is some description of how type checking happens in Idris; even a flag that can be passed to the Idris compiler to dump the type checker's output would be useful.
For example, I want to understand how Idris type checks the correctness of the function append,
append : Vect m elem -> Vect n elem -> Vect (m + n) elem
append [] ys = ys
append (x::xs) ys = x :: (append xs ys)
The reason is that when I wrongly define append as append (x::xs) ys = x :: (append ys xs), interchanging the arguments in the recursive call, Idris gives a type error saying that (plus k n) is not equal to (plus n k).
My guess is that even if I fix this by giving a proof that plus is commutative, Idris must still give a type error. Therefore, I want to understand type checking in Idris and see whether this really happens and, if so, how Idris figures it out. Thanks in advance.
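For what it's worth, the swapped recursive call can in fact be made to type-check by rewriting the goal with a commutativity proof; a sketch in Idris 1 syntax (untested):

```idris
import Data.Vect

appendSwap : Vect m elem -> Vect n elem -> Vect (m + n) elem
appendSwap [] ys = ys
appendSwap {m = S k} {n} (x :: xs) ys =
  -- The tail's expected type is Vect (k + n) elem, but the recursive call
  -- yields Vect (n + k) elem; plusCommutative k n : k + n = n + k lets
  -- `rewrite` adjust the goal so the two match.
  x :: rewrite plusCommutative k n in appendSwap ys xs
```

The checker accepts any definition whose declared type it can verify, with whatever proofs are supplied; it does not check that the definition is the intended append.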

confused about lazy evaluation in Idris

Reading and playing around with some examples from the official Idris tutorial has left me a bit confused regarding lazy evaluation.
As stated in the tutorial, Idris uses eager evaluation, and it goes on to give an example where this would not be appropriate:
ifThenElse : Bool -> a -> a -> a
ifThenElse True t e = t
ifThenElse False t e = e
And they then proceed to show an example using lazy evaluation
ifThenElse : Bool -> Lazy a -> Lazy a -> a
ifThenElse True t e = t
ifThenElse False t e = e
I like to try things out while reading, so I created an inefficient Fibonacci function to test out the non-lazy and lazy ifThenElse.
fibo : Nat -> Nat
fibo Z = Z
fibo (S Z) = S Z
fibo (S(S Z)) = S(S Z)
fibo (S(S(S n))) = fibo (S(S n)) + fibo (S n)
-- the non lazy
ifThenElse1 : Bool -> (t: a) -> (f: a) -> a
ifThenElse1 False t f = f
ifThenElse1 True t f = t
-- should be slow when applied to True
iftest1 : Bool -> Nat
iftest1 b = ifThenElse1 b (fibo 5) (fibo 25)
-- the lazy
ifThenElse2 : Bool -> (t: Lazy a) -> (f: Lazy a) -> a
ifThenElse2 False t f = f
ifThenElse2 True t f = t
-- should be fast when applied to True
iftest2 : Bool -> Nat
iftest2 b = ifThenElse2 b (fibo 5) (fibo 25)
Given that Idris performs eager evaluation, I would expect the execution of iftest1 to be slowed down by (fibo 25) even when applied to True. However, both iftest1 and iftest2 execute very fast when applied to True. So maybe my understanding of laziness/eagerness is fundamentally flawed?
What is a good example to observe the difference between laziness and eagerness in Idris?
You probably tried iftest1 and iftest2 from the Idris REPL. The REPL uses a different evaluation order than compiled code:
Being a fully dependently typed language, Idris has two phases where it evaluates things, compile-time and run-time. At compile-time it will only evaluate things which it knows to be total (i.e. terminating and covering all possible inputs) in order to keep type checking decidable. The compile-time evaluator is part of the Idris kernel, and is implemented in Haskell using a HOAS (higher order abstract syntax) style representation of values. Since everything is known to have a normal form here, the evaluation strategy doesn’t actually matter because either way it will get the same answer, and in practice it will do whatever the Haskell run-time system chooses to do.
The REPL, for convenience, uses the compile-time notion of evaluation.
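So to observe the difference, compile the program instead; a hypothetical driver (assuming the definitions above live in Main.idr, built with `idris Main.idr -o main` in Idris 1):

```idris
main : IO ()
main = do
  printLn (iftest1 True)  -- eager: evaluates fibo 25 too, noticeably slow
  printLn (iftest2 True)  -- Lazy arguments: fibo 25 is never forced, fast
```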

Equality in Agda - irrelevant arguments

I have a dependent type that consists of a value plus some proofs about its properties. Naturally I would like my notion of equality over this type to be equivalent to equality over the value component. This is all fine except that I run into problems when proving properties of lifted notions of this equality (for example equality over lists of this type).
For example:
open import Data.Nat
open import Data.Product
open import Relation.Binary
open import Relation.Binary.PropositionalEquality
open import Relation.Binary.List.Pointwise renaming (Rel to ListRel) hiding (refl)
module Test where
  _≈_ : Rel (ℕ × ℕ) _
  (a₁ , _) ≈ (a₂ , _) = a₁ ≡ a₂
  ≈-sym : Symmetric _≈_
  ≈-sym refl = refl
  ≋-sym : Symmetric (ListRel _≈_)
  ≋-sym = symmetric ≈-sym
In the last line Agda complains that it can't solve for the second projections out of the pairs. Interestingly changing the last line to the following eta-equivalent expression means that it can solve them:
≋-sym = symmetric (λ {x} {y} → ≈-sym {x} {y})
Now naturally I know that sometimes Agda can't solve for all the implicit arguments and needs a bit of extra help but I don't understand what new information I'm providing it by doing this...
I'm doing a lot of lifting of equality and I'd rather avoid adding these ugly eta expansions everywhere in my code. I was wondering if anyone has any suggestions to allow something similar to the original code to pass?
I have looked into irrelevance but the second projection is used elsewhere, even if it is computationally irrelevant for equality.
I'm guessing, but I think the problem is that the order of implicit arguments doesn't matter for (a part of) unification. Consider
flipped : (({n m : ℕ} -> n ≡ m) -> ℕ) -> ({m n : ℕ} -> n ≡ m) -> ℕ
flipped k f = k f
Here k receives something of type {n m : ℕ} -> n ≡ m and f has type {m n : ℕ} -> n ≡ m (m comes before n), but Agda happily unifies these two expressions, since each implicit argument becomes a metavariable during elaboration, and metas are not ordered by when they were introduced; they are ordered by how they are instantiated (e.g. you can't instantiate α to β -> β and then β to α, as that would result in α ≡ α -> α, and handling such loops (called equirecursive types) is an unsound nightmare). So when you write
≋-sym = symmetric ≈-sym
Agda is confused, because it could mean any of
≋-sym = symmetric (λ {x} {y} → ≈-sym {x} {y})
≋-sym = symmetric (λ {x} {y} → ≈-sym {y} {x})
(well, not quite, because the second expression is ill-typed, but Agda doesn't backtrack and therefore can't solve such problems)
This is different from the flipped example, because flipped explicitly specifies how n and m are used, so unification of
{n1 m1 : ℕ} -> n1 ≡ m1
{m2 n2 : ℕ} -> n2 ≡ m2
results in n1 ≡ n2 and m1 ≡ m2 and hence the problem is solved. If you drop this specification
unsolved : (({n m : ℕ} -> ℕ) -> ℕ) -> ({m n : ℕ} -> ℕ) -> ℕ
unsolved k f = k f
you'll get unsolved metas back.
The exact problem with your definition is that only the first projections of pairs are mentioned in the RHS of _≈_, so Agda doesn't know how to unify the second projections. Here is a workaround:
record Sing {α} {A : Set α} (x : A) : Set where
module Test where
  _≈_ : Rel (ℕ × ℕ) _
  (a₁ , b₁) ≈ (a₂ , b₂) = a₁ ≡ a₂ × Sing (b₁ , b₂)
  ≈-sym : Symmetric _≈_
  ≈-sym (refl , _) = (refl , _)
  ≋-sym : Symmetric (ListRel _≈_)
  ≋-sym = symmetric ≈-sym
Sing is a dummy record that has only one inhabitant, which can be inferred automatically. But Sing allows us to mention b₁ and b₂ in the RHS of _≈_ in an inference-friendly way, which makes those metas in ≋-sym solvable.
Though it seems that (a₁ , b₁) ≈ (a₂ , b₂) = a₁ ≡ a₂ × Sing b₁ already gives Agda enough hints to solve the metas in ≋-sym.
You can also define a pattern synonym to make the code slightly nicer:
pattern refl₁ = (refl , _)
≈-sym : Symmetric _≈_
≈-sym refl₁ = refl₁

Defining Functor Instance for Tensor Type (Idris)

I've been learning Idris recently, and decided I'd try to write a simple tensor library. I started by defining the type.
data Tensor : Vect n Nat -> Type -> Type where
  Scalar : a -> Tensor [] a
  Dimension : Vect n (Tensor d a) -> Tensor (n::d) a
As you can see, the type Tensor is parameterized by a Vect of Nats describing the tensor's dimensions, and a type, describing its contents. So far so good. Next I decided to try making the Tensor type a Functor.
instance Functor (Tensor d) where
  map f (Scalar x) = f x
  map f (Dimension x) = map f x
And Idris gave me the following error.
Unifying `b` and `Tensor [] b` would lead to infinite type
Okay. From the error, I figured that maybe the issue was that the first pattern of map was too specific (i.e. it would only accept scalars, whereas the type of map says it accepts any tensor). That seemed odd, but I figured I'd try rewriting it using a with statement.
dimensions : {d : Vect n Nat} -> Tensor d a -> Vect n Nat
dimensions {d} _ = d
instance Functor (Tensor d) where
  map f t with (dimensions t)
    map f (Scalar x) | [] = f x
    map f (Dimension x) | (_::_) = map f x
But I got the same error. I have quite a bit of experience in Haskell, but I'm still not quite used to the lingo used in dependently typed programming in general and by Idris in particular, so any help understanding the error message would be greatly appreciated.
(Note: from Idris 0.10, the instance keyword is deprecated and should be simply left out).
The task is to apply the function to all elements in the Scalar constructors, but otherwise leave the structure unchanged. So, we need to map Scalar to Scalar and Dimension to Dimension, and since Dimension contains a vector of recursive occurrences, we should use Vect's map to access them.
Functor (Tensor d) where
  map f (Scalar x) = Scalar (f x)
  map f (Dimension xs) = Dimension (map (map f) xs)
So, in map (map f) xs, the first map is for mapping over Vect, and map f is the recursive call.
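As a quick sanity check of the instance, a hypothetical usage sketch (untested):

```idris
-- A 2×2 tensor built from the constructors above.
t : Tensor [2, 2] Nat
t = Dimension [ Dimension [Scalar 1, Scalar 2]
              , Dimension [Scalar 3, Scalar 4] ]

-- map rewraps every Scalar, so the shape index [2, 2] is preserved.
doubled : Tensor [2, 2] Nat
doubled = map (* 2) t
```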