Adjusting dependently typed function spaces

Suppose we have the following definition that represents a bounded integer in the half-open interval [x₁, x₂).
def zbound (x₁ x₂ : ℤ) :=
{ n : ℤ // x₁ ≤ n ∧ n < x₂ }
Now I want a function space from zbound x₁ x₂ to bool, packed with a proof that x₁ < x₂, so that the range is properly represented.
structure bounded_f (x₁ x₂ : ℤ) :=
(map : zbound x₁ x₂ → bool)
(prf : x₁ < x₂)
The problem is now as follows. I would like to have an update function of sorts that treats bounded_f as a map and updates it to return true for a particular value x; otherwise, and this is important, it delegates to the original bounded_f.
def inject_true {x₁ x₂} (x : ℤ) (f : bounded_f x₁ x₂) :
bounded_f (min x₁ x) (max x x₂) :=
⟨λ⟨x', prf⟩, if x = x' then true else f.map ⟨x', sorry⟩, _⟩
The resulting function should have a modified domain: it no longer operates on the bounded range from x₁ to x₂, but rather on the range from min x₁ x to max x x₂. Intuitively, this simply extends the range as needed for the new update.
Sadly I have no idea what to put in place of sorry in the cases where x is smaller than the lower bound or greater than the upper bound. My problem is that Lean wants me to show:
x₁ ≤ x' ∧ x' < x₂
This is, however, not the case, as I want the extended range rather than the original one. Is there something wrong with applying f.map right away, and do I need to... modify it somehow?
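One possible direction, shown here only as a hedged sketch (it assumes Lean 3 with mathlib's min_le_left / le_max_right lemmas and picks one particular semantics for out-of-range inputs), is to delegate to f.map only when x' can be shown to lie in the original range, and default to ff otherwise (bool's literals being tt/ff):
def inject_true' {x₁ x₂} (x : ℤ) (f : bounded_f x₁ x₂) :
  bounded_f (min x₁ x) (max x x₂) :=
⟨λ ⟨x', _⟩,
    -- tt at the newly injected point, delegate only when x' is provably
    -- inside the old range, and (by choice) ff everywhere else
    if x = x' then tt
    else if h : x₁ ≤ x' ∧ x' < x₂ then f.map ⟨x', h⟩ else ff,
  -- min x₁ x ≤ x₁ < x₂ ≤ max x x₂
  lt_of_le_of_lt (min_le_left x₁ x) (lt_of_lt_of_le f.prf (le_max_right x x₂))⟩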

How to use data from Maybe as Vect size variable

I'm trying to write a simple program that asks the user for a list size and shows the content of that list at an index also given by the user. But I'm stuck. I have a function that builds a list from a number. Now I want to create another function that takes a Maybe Nat as input and returns a Maybe (Vect n Nat). But I have no idea how to accomplish this. Here is the code:
module Main
import Data.Fin
import Data.Vect
import Data.String
getList: (n: Nat) -> Vect n Nat
getList Z = []
getList (S k) = (S k) :: getList k
mbGetList : (Maybe Nat) -> Maybe (Vect n Nat)
mbGetList mbLen = case mbLen of
    Just len => Just (getList len)
    Nothing => Nothing
main : IO ()
main = do
    len <- readNum
    -- list <- mbGetList len
    putStrLn (show len)
And here is the error:
   |
55 |     Just len => Just (getList len)
   |                       ~~~~~~~~~~~
When checking right hand side of Main.case block in mbGetList at main.idr:54:24-28 with expected type
Maybe (Vect n Nat)
When checking argument n to function Main.getList:
Type mismatch between
n (Inferred value)
and
len (Given value)
I've tried to declare an implicit variable. The code compiles, but I can't use it (at least through the REPL). Also, I've tried to use a dependent pair and also failed. Maybe I should use Dec instead of Maybe? But how??? Another attempt was to use the map function. But in that case, I get an error like: Can't infer argument n to Functor.
So, what have I missed?
This answer doesn't feel optimal, but here goes
mbGetList : (mbLen: Maybe Nat) -> case mbLen of
                                    (Just len) => Maybe (Vect len Nat)
                                    Nothing => Maybe (Vect Z Nat)
mbGetList (Just len) = Just (getList len)
mbGetList Nothing = Nothing
I think the difficulty comes from the fact that there's no well-defined length for the Vect if you don't have a valid input.
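A hedged alternative sketch (not part of the answer above): if the caller does not need to know the length at compile time, the usual trick is to return a dependent pair that packages the length together with the vector:
-- the length is existentially packaged, so no free n appears in the type
mbGetList' : Maybe Nat -> Maybe (n ** Vect n Nat)
mbGetList' (Just len) = Just (len ** getList len)
mbGetList' Nothing = Nothing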

Theorem Proving in Idris

I was reading Idris tutorial. And I can't understand the following code.
disjoint : (n : Nat) -> Z = S n -> Void
disjoint n p = replace {P = disjointTy} p ()
  where
    disjointTy : Nat -> Type
    disjointTy Z = ()
    disjointTy (S k) = Void
So far, what I've figured out is...
Void is the empty type which is used to prove something is impossible.
replace : (x = y) -> P x -> P y
replace uses an equality proof to transform a predicate.
My questions are:
which one is an equality proof? (Z = S n)?
which one is a predicate? the disjointTy function?
What's the purpose of disjointTy? Does disjointTy Z = () mean Z is in one type "land" () and (S k) is in another land, Void?
In what way can an Void output represent contradiction?
P.S. What I know about proving is "if nothing matches, then it is false" or "find one thing that is contradictory"...
which one is an equality proof? (Z = S n)?
The p parameter is the equality proof here. p has type Z = S n.
which one is a predicate? the disjointTy function?
Yes, you are right.
What's the purpose of disjointTy?
Let me repeat the definition of disjointTy here:
disjointTy : Nat -> Type
disjointTy Z = ()
disjointTy (S k) = Void
The purpose of disjointTy is to be the predicate that the replace function needs. This consideration determines the type of disjointTy, viz. [domain] -> Type. Since we have an equality between natural numbers, [domain] is Nat.
To understand how the body has been constructed we need to take a look at replace one more time:
replace : (x = y) -> P x -> P y
Recall that we have p of type Z = S n, so x from the above type is Z and y is S n. To call replace we need to construct a term of type P x, i.e. P Z in our case. This means the type P Z must be easily constructible; the unit type is the ideal candidate for this. That justifies the disjointTy Z = () clause of the definition of disjointTy. Of course it's not the only option: we could have used any other inhabited (non-empty) type, like Bool or Nat, etc.
The return value in the second clause of disjointTy is obvious now -- we want replace to return a value of Void type, so P (S n) has to be Void.
Next, we use disjointTy like so:
replace {P = disjointTy} p ()
             ^           ^ ^
             |           | |
             |           | the value of `()` type
             |           |
             |           proof term of Z = S n
             |
             we are saying "this is the predicate"
As a bonus, here is an alternative proof:
disjoint : (n : Nat) -> Z = S n -> Void
disjoint n p = replace {P = disjointTy} p False
  where
    disjointTy : Nat -> Type
    disjointTy Z = Bool
    disjointTy (S k) = Void
I have used False, but could have used True -- it doesn't matter. What matters is our ability to construct a term of type disjointTy Z.
In what way can an Void output represent contradiction?
Void is defined like so:
data Void : Type where
It has no constructors! There is no way to create a term of this type whatsoever (under some conditions: like Idris' implementation is correct and the underlying logic of Idris is sane, etc.). So if some function claims it can return a term of type Void there must be something fishy going on. Our function says: if you give me a proof of Z = S n, I will return a term of the empty type. This means Z = S n cannot be constructed in the first place because it leads to a contradiction.
Yes, p : x = y is an equality proof. So p is an equality proof and Z = S k is an equality type.
Also yes, usually any P : a -> Type is called a predicate, like IsSucc : Nat -> Type. In boolean logic, a predicate would map Nat to true or false. Here, a predicate holds if we can construct a proof of it: it is true if we can construct such a proof (prf : IsSucc 4), and false if we cannot (there is no member of IsSucc Z).
At the end, we want Void. Read the replace call as Z = S k -> disjointTy Z -> disjointTy (S k), that is Z = S k -> () -> Void. So replace needs two arguments: the proof p : Z = S k and the unit () : (), and voilà, we have a Void. By the way, instead of () you could use any type that you can construct, e.g. disjointTy Z = Nat and then use Z instead of ().
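To make that last remark concrete, here is the Nat variant spelled out (my own sketch of the suggestion above, not taken from the tutorial):
disjointNat : (n : Nat) -> Z = S n -> Void
disjointNat n p = replace {P = disjointTy} p Z
  where
    -- same shape as before, but the inhabited case is Nat and the witness is Z
    disjointTy : Nat -> Type
    disjointTy Z = Nat
    disjointTy (S k) = Void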
In dependent type theory we construct proofs like prf : IsSucc 4. We would say we have a proof prf that IsSucc 4 is true; prf is also called a witness for IsSucc 4. But with this alone we could only prove things to be true. This is the definition of Void:
data Void : Type where
There is no constructor. So we cannot construct a witness that Void holds. If you somehow ended up with a prf : Void, something is wrong and you have a contradiction.
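As a final illustration (my own addition, assuming the prelude's void : Void -> a eliminator): once you have a term of type Void you can conclude any proposition at all, which is exactly what makes it a contradiction:
-- from the impossible premise Z = S n we can "prove" even 2 + 2 = 5
anythingFollows : (n : Nat) -> Z = S n -> 2 + 2 = 5
anythingFollows n p = void (disjoint n p)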

Totality and searching for elements in Streams

I want a find function for Streams of size-bounded types which is analogous to the find functions for Lists and Vects.
total
find : MaxBound a => (a -> Bool) -> Stream a -> Maybe a
The challenge is it to make it:
be total
consume no more than constant log_2 N space where N is the number of bits required to encode the largest a.
take no longer than a minute to check at compile time
impose no runtime cost
Generally a total find implementation for Streams sounds absurd. Streams are infinite and a predicate of const False would make the search go on forever. A nice way to handle this general case is the infinite fuel technique.
data Fuel = Dry | More (Lazy Fuel)
partial
forever : Fuel
forever = More forever
total
find : Fuel -> (a -> Bool) -> Stream a -> Maybe a
find Dry _ _ = Nothing
find (More fuel) f (value :: xs) = if f value
                                   then Just value
                                   else find fuel f xs
That works well for my use case, but I wonder if in certain specialized cases the totality checker could be convinced without using forever. Otherwise, somebody may suffer a boring life waiting for find forever ?predicateWhichHappensToAlwaysReturnFalse (iterate S Z) to finish.
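For instance, a quick usage sketch (necessarily partial, because it relies on forever, and assuming the prelude's iterate for Streams):
partial
example : Maybe Nat
example = find forever (> 10) (iterate S Z) -- evaluates to Just 11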
Consider the special case where a is Bits32.
find32 : (Bits32 -> Bool) -> Stream Bits32 -> Maybe Bits32
find32 f (value :: xs) = if f value then Just value else find32 f xs
Two problems: it's not total and it can't possibly return Nothing even though there's a finite number of Bits32 inhabitants to try. Maybe I could use take (pow 2 32) to build a List and then use List's find...uh, wait...the list alone would take up GBs of space.
In principle it doesn't seem like this should be difficult. There's finitely many inhabitants to try, and a modern computer can iterate through all 32-bit permutations in seconds. Is there a way to have the totality checker verify the (Stream Bits32) $ iterate (+1) 0 eventually cycles back to 0 and once it does assert that all the elements have been tried since (+1) is pure?
Here's a start, although I'm unsure how to fill the holes and specialize find enough to make it total. Maybe an interface would help?
total
IsCyclic : (init : a) -> (succ : a -> a) -> Type
data FinStream : Type -> Type where
  MkFinStream : (init : a) ->
                (succ : a -> a) ->
                {prf : IsCyclic init succ} ->
                FinStream a
partial
find : Eq a => (a -> Bool) -> FinStream a -> Maybe a
find pred (MkFinStream {prf} init succ) = if pred init
                                          then Just init
                                          else find' (succ init)
  where
    partial
    find' : a -> Maybe a
    find' x = if x == init
                then Nothing
                else
                  if pred x
                    then Just x
                    else find' (succ x)
total
all32bits : FinStream Bits32
all32bits = MkFinStream 0 (+1) {prf=?prf}
Is there a way to tell the totality checker to use infinite fuel verifying a search over a particular stream is total?
Let's define what it means for a sequence to be cyclic:
%default total
iter : (n : Nat) -> (a -> a) -> (a -> a)
iter Z f = id
iter (S k) f = f . iter k f
isCyclic : (init : a) -> (next : a -> a) -> Type
isCyclic init next = DPair (Nat, Nat) $ \(m, n) => (m `LT` n, iter m next init = iter n next init)
The above means that we have a situation which can be depicted as follows:
-- x0 -> x1 -> ... -> xm -> ... -> x(n-1) --
--                    ^                    |
--                    |--------------------
where m is strictly less than n (but m can be equal to zero). n is some number of steps after which we get an element of the sequence we previously encountered.
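As a tiny sanity check of the definition (my own example, not needed for the rest of the answer): boolean not, started from True, revisits True after two steps, so (0, 2) is a valid witness:
notCycles : isCyclic True not
notCycles = ((0, 2) ** (LTESucc LTEZero, Refl))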
data FinStream : Type -> Type where
  MkFinStream : (init : a) ->
                (next : a -> a) ->
                {prf : isCyclic init next} ->
                FinStream a
Next, let's define a helper function, which uses an upper bound called fuel to break out from the loop:
findLimited : (p : a -> Bool) -> (next : a -> a) -> (init : a) -> (fuel : Nat) -> Maybe a
findLimited p next x Z = Nothing
findLimited p next x (S k) = if p x then Just x
                                    else findLimited p next (next x) k
Now find can be defined like so:
find : (a -> Bool) -> FinStream a -> Maybe a
find p (MkFinStream init next {prf = ((_,n) ** _)}) =
  findLimited p next init n
Here are some tests:
-- I don't have patience to wait until all32bits typechecks
all8bits : FinStream Bits8
all8bits = MkFinStream 0 (+1) {prf=((0, 256) ** (LTESucc LTEZero, Refl))}
exampleNothing : Maybe Bits8
exampleNothing = find (const False) all8bits -- Nothing
exampleChosenByFairDiceRoll : Maybe Bits8
exampleChosenByFairDiceRoll = find ((==) 4) all8bits -- Just 4
exampleLast : Maybe Bits8
exampleLast = find ((==) 255) all8bits -- Just 255

Is there a more convenient way to use nested records?

As I mentioned before, I'm working on a library about algebra, matrices and category theory. I have decomposed the algebraic structures into a "tower" of record types, each one representing an algebraic structure. As an example, to specify a monoid we first define a semigroup, and to define a commutative monoid we use the monoid definition, following the same pattern as the Agda standard library.
My trouble is that when I need a property of an algebraic structure that is deep within another one (e.g. a property of a Monoid that is part of a CommutativeSemiring), we need a number of projections equal to the depth of the desired algebraic structure.
As an example of my problem, consider the following "lemma":
open import Algebra
open import Algebra.Structures
open import Data.Vec
open import Relation.Binary.PropositionalEquality
open import Algebra.FunctionProperties
open import Data.Product
module _ {Carrier : Set} {_+_ _*_ : Op₂ Carrier} {0# 1# : Carrier} (ICSR : IsCommutativeSemiring _≡_ _+_ _*_ 0# 1#) where
  csr : CommutativeSemiring _ _
  csr = record{ isCommutativeSemiring = ICSR }
  zipWith-replicate-0# : ∀ {n}(xs : Vec Carrier n) → zipWith _+_ (replicate 0#) xs ≡ xs
  zipWith-replicate-0# [] = refl
  zipWith-replicate-0# (x ∷ xs) = cong₂ _∷_ (proj₁ (IsMonoid.identity (IsCommutativeMonoid.isMonoid
                                              (IsCommutativeSemiring.+-isCommutativeMonoid
                                                (CommutativeSemiring.isCommutativeSemiring csr)))) x)
                                            (zipWith-replicate-0# xs)
Note that in order to access the left identity property of a monoid, I need to project it from the monoid that is within the commutative monoid that lies in the structure of a commutative semiring.
My concern is that, as I add more and more algebraic structures, such lemmas will become unreadable.
My question is: Is there a pattern or trick that can avoid this "ladder" of record projections?
Any clue on this is highly welcome.
If you look at the Agda standard library, you'll see that for most specialized algebraic structures, the record defining them has the less specialized structure open public. E.g. in Algebra.AbelianGroup, we have:
record AbelianGroup c ℓ : Set (suc (c ⊔ ℓ)) where
  -- ... snip ...
  open IsAbelianGroup isAbelianGroup public
  group : Group _ _
  group = record { isGroup = isGroup }
  open Group group public using (setoid; semigroup; monoid; rawMonoid)
  -- ... snip ...
So an AbelianGroup record will have not just the AbelianGroup/IsAbelianGroup fields available, but also setoid, semigroup and monoid and rawMonoid from Group. In turn, setoid, monoid and rawMonoid in Group come from similarly open public-reexporting these fields from Monoid.
Similarly, for algebraic property witnesses, they re-export the less specialized version's fields, e.g. in IsAbelianGroup we have
record IsAbelianGroup
  {a ℓ} {A : Set a} (≈ : Rel A ℓ)
  (∙ : Op₂ A) (ε : A) (⁻¹ : Op₁ A) : Set (a ⊔ ℓ) where
  -- ... snip ...
  open IsGroup isGroup public
  -- ... snip ...
and then IsGroup reexports IsMonoid, IsMonoid reexports IsSemigroup, and so on. And so, if you have IsAbelianGroup open, you can still use assoc (coming from IsSemigroup) without having to write out the whole path to it by hand.
The bottom line is you can write your function as follows:
open CommutativeSemiring CSR hiding (refl)
open import Function using (_⟨_⟩_)
zipWith-replicate-0# : ∀ {n}(xs : Vec Carrier n) → zipWith _+_ (replicate 0#) xs ≡ xs
zipWith-replicate-0# [] = refl
zipWith-replicate-0# (x ∷ xs) = proj₁ +-identity x ⟨ cong₂ _∷_ ⟩ zipWith-replicate-0# xs
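A small variant sketch (mine, assuming the same standard-library version): the record can also be opened locally in a where clause, which keeps the projections out of the surrounding scope:
zipWith-replicate-0#′ : ∀ {n}(xs : Vec Carrier n) → zipWith _+_ (replicate 0#) xs ≡ xs
zipWith-replicate-0#′ [] = refl
zipWith-replicate-0#′ (x ∷ xs) = cong₂ _∷_ (proj₁ +-identity x) (zipWith-replicate-0#′ xs)
  where open CommutativeSemiring csr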

Idris: proof that specific terms are impossible

Idris version: 0.9.16
I am attempting to describe constructions generated from a base value and an iterated step function:
namespace Iterate
  data Iterate : (base : a) -> (step : a -> a) -> a -> Type where
    IBase : Iterate base step base
    IStep : Iterate base step v -> Iterate base step (step v)
Using this I can define Plus, describing constructs from iterated addition of a jump value:
namespace Plus
  Plus : (base : Nat) -> (jump : Nat) -> Nat -> Type
  Plus base jump = Iterate base (\v => jump + v)
Simple example uses of this:
namespace PlusExamples
  Even : Nat -> Type; Even = Plus 0 2
  even0 : Even 0; even0 = IBase
  even2 : Even 2; even2 = IStep even0
  even4 : Even 4; even4 = IStep even2
  Odd : Nat -> Type; Odd = Plus 1 2
  odd1 : Odd 1; odd1 = IBase
  odd3 : Odd 3; odd3 = IStep odd1
  Fizz : Nat -> Type; Fizz = Plus 0 3
  fizz0 : Fizz 0; fizz0 = IBase
  fizz3 : Fizz 3; fizz3 = IStep fizz0
  fizz6 : Fizz 6; fizz6 = IStep fizz3
  Buzz : Nat -> Type; Buzz = Plus 0 5
  buzz0 : Buzz 0; buzz0 = IBase
  buzz5 : Buzz 5; buzz5 = IStep buzz0
  buzz10 : Buzz 10; buzz10 = IStep buzz5
The following describes that values below the base are impossible:
noLess : (base : Nat) ->
         (i : Fin base) ->
         Plus base jump (finToNat i) ->
         Void
noLess Z FZ m impossible
noLess (S b) FZ IBase impossible
noLess (S b) (FS i) IBase impossible
And the following for values between base and jump + base:
noBetween : (base : Nat) ->
            (predJump : Nat) ->
            (i : Fin predJump) ->
            Plus base (S predJump) (base + S (finToNat i)) ->
            Void
noBetween b Z FZ m impossible
noBetween b (S s) FZ IBase impossible
noBetween b (S s) (FS i) IBase impossible
I am having trouble defining the following function:
noJump : (Plus base jump n -> Void) -> Plus base jump (jump + n) -> Void
noJump f m = ?noJump_rhs
That is: if n isn't base plus a natural multiple of jump, then neither is jump + n.
If I ask Idris to case split m it only shows me IBase - then I get stuck.
Would someone point me in the right direction?
Edit 0:
Applying induction to m gives me the following message:
Induction needs an eliminator for Iterate.Iterate.Iterate
Edit 1:
Name updates and here is a copy of the source: http://lpaste.net/125873
I think there's a good reason to get stuck on the IBase case of this proof, which is that the theorem is false! Consider:
noplus532 : Plus 5 3 2 -> Void
noplus532 IBase impossible
noplus532 (IStep _) impossible
plus535 : Plus 5 3 (3 + 2)
plus535 = IBase
Here n = 2 is below the base, so Plus 5 3 2 is uninhabited, yet jump + n = 5 is the base itself, so Plus 5 3 (3 + 2) is inhabited by IBase; the implication therefore fails.
To Edit 0: to induct on a type, it needs a special qualifier:
%elim data Iterate = <your definition>
To the main question: sorry that I haven't read through all your code; I only want to make some suggestions about falsifying proofs. From my experience (I even delved into the standard library sources to find some help), when you need to prove Not a (a -> Void), often you can use some Not b (b -> Void) together with a way to convert a to b, and then just pass the converted value to the second proof. For example, here is a very simple proof that one list cannot be a prefix of another if they have different heads:
%elim data Prefix : List a -> List a -> Type where
  pEmpty : Prefix Nil ys
  pNext : Prefix xs ys -> Prefix (x :: xs) (x :: ys)
prefixNotCons : Not (x = y) -> Not (Prefix (x :: xs) (y :: ys))
prefixNotCons r (pNext _) = r refl
In your case, I suppose you need to combine several proofs.
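For completeness, a hedged usage sketch of prefixNotCons (my own, written in the same 0.9-era style with lowercase refl):
zeroNotOne : Not (Z = S Z)
zeroNotOne refl impossible
notPrefix : Not (Prefix [Z, S Z] [S Z, Z])
notPrefix = prefixNotCons zeroNotOne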