import Data.Nat.Views
total toBinary : Nat -> String
toBinary k with (halfRec k)
  toBinary Z | HalfRecZ = ""
  toBinary (n + n) | (HalfRecEven rec) = (toBinary n | rec) ++ "0"
  toBinary (S (n + n)) | (HalfRecOdd rec) = (toBinary n | rec) ++ "1"
When testing the above function in the REPL with the expression toBinary 2, the following gets printed out.
prim__concat (with block in Exercises.toBinary 1
(with block in with block in Data.Nat.Views.halfRec 0
(half 0)
(Access (\z, zLTy =>
Prelude.WellFounded.sizeAccessible, acc Nat
2
(constructor of Prelude.WellFounded.Sized (\meth =>
meth))
1
z
(lteTransitive zLTy
(LTESucc LTEZero))))))
"0" : String
When I run :exec toBinary 2, the expected result of "10" gets printed out.
Could anyone please explain what's happening?
I'm new to Idris and am confused by this definition, as I don't understand how it works.
The definition is as follows.
sameS : (k : Nat) -> (j : Nat) -> (k = j) -> ((S k) = (S j))
sameS x x Refl = Refl
Let us start by breaking down the type signature:
sameS : (k : Nat) -> (j : Nat) -> (k = j) -> ((S k) = (S j))
sameS is a function.
sameS takes the following arguments:
(k : Nat) - a parameter k of type Nat
(j : Nat) - a parameter j of type Nat
(k = j) - a proof that k and j are equal
sameS returns:
((S k) = (S j)) - a proof that S k and S j are equal.
Now let us break down the definition:
sameS x x Refl = Refl
The type of Refl is a = a.
x stands for both the first and second argument because they must be identical; we know this because the third argument is Refl.
Refl is returned because S x = S x.
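To see how such a lemma is typically used (a hedged sketch, not from the original question; plusZeroRight is a name made up for this example), sameS can supply the successor step of an inductive proof:
-- Hedged sketch: sameS provides the S-step of the induction
plusZeroRight : (n : Nat) -> n + Z = n
plusZeroRight Z = Refl
plusZeroRight (S k) = sameS (k + Z) k (plusZeroRight k)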
Suppose we'd like to have a "proper" minus on Nats, requiring m <= n for n `minus` m to make sense:
%hide minus
minus : (n, m : Nat) -> { auto prf : m `LTE` n } -> Nat
minus { prf = LTEZero } n Z = n
minus { prf = LTESucc prevPrf } (S n) (S m) = minus n m
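(A hedged aside on how this is called: the auto keyword makes Idris search for the LTE proof at each call site, so ordinary applications still typecheck. The name five below is made up for illustration.)
-- Hedged usage sketch: the proof of 2 `LTE` 7 is filled in by auto proof search
five : Nat
five = minus 7 2   -- evaluates to 5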
Now let's try to prove the following lemma, stating that (n + (1 + m)) - k = ((1 + n) + m) - k, assuming both sides are valid:
minusPlusTossS : (n, m, k : Nat) ->
                 { auto prf1 : k `LTE` n + S m } ->
                 { auto prf2 : k `LTE` S n + m } ->
                 minus (n + S m) k = minus (S n + m) k
The goal suggests the following sublemma might help:
plusTossS : (n, m : Nat) -> n + S m = S n + m
plusTossS Z m = Refl
plusTossS (S n) m = cong $ plusTossS n m
so we try to use it:
minusPlusTossS n m k =
  let tossPrf = plusTossS n m
  in rewrite tossPrf in ?rhs
And here we fail:
When checking right hand side of minusPlusTossS with expected type
minus (n + S m) k = minus (S n + m) k
When checking argument prf to function Main.minus:
Type mismatch between
LTE k (S n + m) (Type of prf2)
and
LTE k replaced (Expected type)
Specifically:
Type mismatch between
S (plus n m)
and
replaced
If I understand this error correctly, it just means that it tries to rewrite the RHS of the target equality (which is minus { prf = prf2 } (S n + m) k) to minus { prf = prf2 } (n + S m) k and fails. Rightfully, of course, since prf2 proves a different inequality! And while replace could be used to produce a proof of k `LTE` (n + S m) (or prf1 would do as well), it does not look like it's possible to simultaneously rewrite and change the proof object so that it matches the rewrite.
How do I work around this? Or, more generally, how do I prove this lemma?
Ok, I guess I solved it. Bottom line: if you don't know what to do, do a lemma!
So we have a proof of two minuends n1, n2 being equal, and we need to produce a proof of n1 `minus` m = n2 `minus` m. Let's write this down!
minusReflLeft : { n1, n2, m : Nat } -> (prf : n1 = n2) -> (prf_n1 : m `LTE` n1) -> (prf_n2 : m `LTE` n2) -> n1 `minus` m = n2 `minus` m
minusReflLeft Refl LTEZero LTEZero = Refl
minusReflLeft Refl (LTESucc prev1) (LTESucc prev2) = minusReflLeft Refl prev1 prev2
I don't even need plusTossS anymore, which can be replaced by a more directly applicable lemma:
plusRightS : (n, m : Nat) -> n + S m = S (n + m)
plusRightS Z m = Refl
plusRightS (S n) m = cong $ plusRightS n m
After that, the original one becomes trivial:
minusPlusTossS : (n, m, k : Nat) ->
                 { auto prf1 : k `LTE` n + S m } ->
                 { auto prf2 : k `LTE` S n + m } ->
                 minus (n + S m) k = minus (S n + m) k
minusPlusTossS {prf1} {prf2} n m k = minusReflLeft (plusRightS n m) prf1 prf2
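As a quick sanity check (a hedged sketch; check is a made-up name, and the auto LTE proofs are found by proof search at these small values):
-- Both sides reduce to minus 6 4
check : minus (2 + S 3) 4 = minus (S 2 + 3) 4
check = minusPlusTossS 2 3 4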
I am working through Chapter 8 of Type-Driven Development with Idris, and I have a question about how rewrite interacts with Refl.
This code is shown as an example of how rewrite works on an expression:
myReverse : Vect n elem -> Vect n elem
myReverse [] = []
myReverse {n = S k} (x :: xs)
    = let result = myReverse xs ++ [x] in
          rewrite plusCommutative 1 k in result
where plusCommutative 1 k will look for any instances of 1 + k and replace it with k + 1.
My question is about this solution to the exercise of writing plusCommutative yourself as myPlusCommutes, the answer being:
myPlusCommutes : (n : Nat) -> (m : Nat) -> n + m = m + n
myPlusCommutes Z m = rewrite plusZeroRightNeutral m in Refl
myPlusCommutes (S k) m = rewrite myPlusCommutes k m in
                         rewrite plusSuccRightSucc m k in Refl
I am having trouble with this line:
myPlusCommutes Z m = rewrite plusZeroRightNeutral m in Refl
because from what I can understand by using Refl on its own in that line as such:
myPlusCommutes Z m = Refl
I get this error:
When checking right hand side of myPlusCommutes with expected type
0 + m = m + 0
Type mismatch between
plus m 0 = plus m 0 (Type of Refl)
and
m = plus m 0 (Expected type)
Specifically:
Type mismatch between
plus m 0
and
m
First off, one thing I did not realize is that it appears Refl works from the right side of the = and seeks reflection from that direction.
Next, it would seem that rewriting Refl results in a change from plus m 0 = plus m 0 to m = plus m 0, rewriting from the left but stopping after the first replacement and not going so far as to replace all instances of plus m 0 with m, as I would have expected.
Ultimately, that is my question: why does rewriting behave this way? Is rewriting on equality types different, such that rewrite only replaces on the left side of the =?
To understand what is going on here we need to take into account the fact that Refl is polymorphic:
λΠ> :set showimplicits
λΠ> :t Refl
Refl : {A : Type} -> {x : A} -> (=) {A = A} {B = A} x x
That means Idris is trying to ascribe a type to the term Refl using information from the context. E.g. Refl in myPlusCommutes Z m = Refl has type plus m 0 = plus m 0. Idris could instead have picked the LHS of myPlusCommutes' output type and tried to ascribe the type m = m to Refl. You can also specify the x expression yourself, like so: Refl {x = m}.
Now, rewrite works with respect to your current goal, i.e. rewrite Eq replaces all the occurrences of the LHS of Eq with its RHS in your goal, not in some possible typing of Refl.
Let me give you a silly example of using a sequence of rewrites to illustrate what I mean:
foo : (n : Nat) -> n = (n + Z) + Z
foo n =
  rewrite sym $ plusAssociative n Z Z in -- 1
  rewrite plusZeroRightNeutral n in      -- 2
  Refl                                   -- 3
We start with goal n = (n + Z) + Z, then
line 1 turns the goal into n = n + (Z + Z) using the law of associativity, then
line 2 turns the current goal n = n + Z (which is definitionally equal to n = n + (Z + Z)) into n = n
line 3 provides a proof term for the current goal (if we wanted to be more explicit, we could have written Refl {x = n} in place of Refl).
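To make that last remark concrete (a hedged variant of foo; fooExplicit is just a name for this sketch), here is the same proof with the final implicit spelled out:
-- Same rewrites as foo, but with Refl's implicit written explicitly
fooExplicit : (n : Nat) -> n = (n + Z) + Z
fooExplicit n =
  rewrite sym $ plusAssociative n Z Z in
  rewrite plusZeroRightNeutral n in
  Refl {x = n}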
The following function compiles:
onlyModByFive : (n : Nat) -> (k : Nat ** 5 * k = n) -> Nat
onlyModByFive n k = 100
But what does k represent with its Nat ** 5 * k = n syntax?
Also, how can I call it? Here's what I tried, but I don't understand the output.
*Test> onlyModByFive 5 5
When checking an application of function Main.onlyModByFive:
(k : Nat ** plus k (plus k (plus k (plus k (plus k 0)))) = 5) is not a
numeric type
Source of answer: https://groups.google.com/d/msg/idris-lang/ZPi9wCd95FY/eo3tRijGAAAJ
(k : Nat) ** (5 * k = n) is a dependent pair consisting of
A first element k : Nat
A second element prf : 5 * k = n
In other words, this is an existential type that says "there exists some k : Nat such that 5 * k = n". To be constructive, you must give such a k and a proof that it indeed satisfies 5 * k = n.
In your example, if you partially apply onlyModByFive to 5, you get something of type
onlyModByFive 5 : ((k : Nat) ** (5 * k = 5)) -> Nat
so the second argument has to be of type (k : Nat) ** (5 * k = 5). There is only one choice of k we can make here, by setting it to 1, and proving that 5 * 1 = 5:
foo : Nat
foo = onlyModByFive 5 (1 ** Refl)
This works because 5 * 1 reduces to 5, so we have to prove 5 = 5, which can be trivially done by using Refl : a = a directly (unifying a ~ 5).
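For another data point (a hedged example; bar is a made-up name), with n = 10 the witness is k = 2, since 5 * 2 also reduces to 10:
-- Witness k = 2 for n = 10; 5 * 2 reduces to 10, so Refl closes the proof
bar : Nat
bar = onlyModByFive 10 (2 ** Refl)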
I'm trying to create a small module for doing decimal-based calculations. A number is stored as an integer mantissa, with a precision value specified by an Int:
data APNum = APNum
  { getMantisse :: Integer
  , getPrecision :: Int }
For instance:
APNum 123 0 -> 123
APNum 123 1 -> 1.23
APNum 123 2 -> 12.3
...
(negative precision is not allowed).
Now I wrote this function, which adjusts the precision automatically by stripping as many trailing zeros as possible:
autoPrecision :: APNum -> APNum
autoPrecision x@(APNum m p) = if p > maxPrecision
  then autoPrecision $ setPrecision x maxPrecision
  else autoPrecision' m p
  where
    autoPrecision' m p = let (m',r) = m `divMod` 10 in
      if r /= 0 || p <= 0 then APNum m p else autoPrecision' m' (pred p)
(maxPrecision and setPrecision should be obvious, I think.)
The problem is, this snippet has very bad performance, especially on numbers with more than 10,000 digits. Are there any simple optimizations?
You can use binary search to find the highest power of 10 which divides m, instead of trying all consecutive values.
import Numeric.Search.Range
import Data.Maybe
data APNum = APNum { getMantisse :: Integer, getPrecission :: Int } deriving Show

setPrecision (APNum m _) x = APNum m x

maxPrecission = 200000

findDiv x = pred $ fromJust $ searchFromTo (p x) 0 maxPrecission where
  p x n = x `mod` 10^n /= 0

autoPrecision :: APNum -> APNum
autoPrecision x@(APNum m p)
  = if p > maxPrecission then
      autoPrecision $ setPrecision x maxPrecission else APNum m' p'
  where d = min (findDiv m) p
        p' = p - d
        m' = m `div` 10^d
I'm using the binary-search package here which provides searchFromTo :: Integral a => (a -> Bool) -> a -> a -> Maybe a. This should give you a big speedup.
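For a rough idea of the behaviour (a hedged GHCi sketch, assuming the definitions above):
-- Hedged GHCi sketch: three trailing zeros are stripped off the mantissa
-- λ> autoPrecision (APNum 123000 5)
-- APNum {getMantisse = 123, getPrecission = 2}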
Looks like even a straightforward string operation is still faster:
maxPrecision = 2000000

autoPrecision (APNum m p) =
  let p' = min p maxPrecision
      (n', ds) = genericDropNWhile (== '0') p' $ reverse $ show m
  in APNum (read $ reverse ds) n'
  where
    genericDropNWhile p n (x:xs) | n > 0 && p x = genericDropNWhile p (n-1) xs
    genericDropNWhile _ n xs = (n, xs)
Test with this:
main = print $ autoPrecision $ APNum (10^100000) (100000-3)
EDIT: Oops, faster only for numbers with lots of zeroes. Otherwise this double conversion is definitely slower.
Also, x `mod` 10 == 0 implies x `mod` 2 == 0, and the latter is cheaper to test for.
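A hedged sketch of how that pre-test could look in the stripping loop (stripZeros is a made-up name; parity of an Integer is cheap to check, so odd mantissas skip the mod-10 test entirely):
-- Hedged sketch: any multiple of 10 is even, so odd m short-circuits the mod-10 test
stripZeros :: Integer -> Int -> (Integer, Int)
stripZeros m p
  | p <= 0 || odd m || m `mod` 10 /= 0 = (m, p)
  | otherwise                          = stripZeros (m `div` 10) (p - 1)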