Compile-time invariant/property checking in Elm

I have some value constraints/properties that I want to check at compile-time. In this example, I want to track whether a vector is normalized or not. I think I have a solution, using type tags, but I need someone with some Elm/FP experience to tell me if I have missed something obvious. Thank you.
module TagExperiment exposing (..)

type Vec t = Vec Float Float Float

type Unit = Unit

type General = General

toGeneral : Vec t -> Vec General
toGeneral (Vec x y z) =
    Vec x y z

scaleBy : Vec t -> Vec t -> Vec t
scaleBy (Vec bx by bz) (Vec ax ay az) =
    {- Operate on two Generals or two Units. If you have one of each,
       then the Unit needs to be cast to a General.
    -}
    let
        mag =
            sqrt ((bx * bx) + (by * by) + (bz * bz))
    in
    Vec (ax * mag) (ay * mag) (az * mag)

-- These cases work as desired.
a : Vec Unit
a = Vec 0 0 1

b : Vec General
b = Vec 2 2 2

d : Vec Unit
d = scaleBy a a

e : Vec General
e = scaleBy b b

g : Vec General
g = scaleBy (toGeneral a) b

h : Vec General
h = scaleBy b (toGeneral a)

-- Here is where I have trouble.
c : Vec t -- unknown... uh-oh
c = Vec 3 3 3

f : Vec t -- still unknown... sure
f = scaleBy c c

i : Vec Unit -- wrong !!!
i = scaleBy a c

j : Vec Unit -- wrong !!!
j = scaleBy c a

k : Vec General -- lucky
k = scaleBy b c

l : Vec General -- lucky
l = scaleBy c b

{- The trouble is that I am not required to specify a tag for c. I guess the
   solution is to write "constructors" and make the built-in Vec constructor
   opaque?
-}
newUnitVec : Float -> Float -> Float -> Vec Unit
newUnitVec x y z =
    -- add normalization
    Vec x y z

newVec : Float -> Float -> Float -> Vec General
newVec x y z =
    Vec x y z
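Concretely, making the constructor opaque would just mean replacing exposing (..) with an explicit export list, something like the sketch below (the exposed names are the ones defined above):

module TagExperiment exposing (Vec, Unit, General, toGeneral, scaleBy, newUnitVec, newVec)

-- Exposing Vec without (..) hides the Vec constructor from other modules,
-- so an unconstrained value like c = Vec 3 3 3 can no longer be written
-- outside this module; callers must go through newUnitVec or newVec.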

Yes, without dependent types the most ergonomic way to enforce constraints on values at compile time is to use opaque types along with a "parse, don't validate" approach.
Perhaps something like:
module Vec exposing (UnitVector, Vector, vectorToUnit)

type UnitVector
    = UnitVector Float Float Float

type Vector
    = Vector Float Float Float

vectorToUnit : Vector -> Maybe UnitVector
vectorToUnit (Vector x y z) =
    -- Elm does not allow pattern matching on Float literals, so compare instead.
    if x == 0 && y == 0 && z == 0 then
        Nothing
    else
        Just (normalize x y z)

-- Hypothetical helper; the original answer leaves it undefined.
normalize : Float -> Float -> Float -> UnitVector
normalize x y z =
    let
        mag = sqrt (x * x + y * y + z * z)
    in
    UnitVector (x / mag) (y / mag) (z / mag)
With the only ways to get a UnitVector both defined inside this module and known to obey the constraints, any time you see a UnitVector at compile time it is safe to assume the constraints are met.
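For example, in a hypothetical consumer module (Direction and describeDirection below are made-up names for illustration), the only way to obtain a UnitVector is to go through vectorToUnit, so the zero-vector case has to be handled up front:

module Direction exposing (describeDirection)

import Vec exposing (Vector, vectorToUnit)

describeDirection : Vector -> String
describeDirection v =
    case vectorToUnit v of
        Just _ ->
            -- Any UnitVector we hold from here on was produced by
            -- vectorToUnit, so it is known to be normalized.
            "unit direction available"

        Nothing ->
            "zero vector: no direction"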
For vectors in particular, it may be worth having a look at ianmackenzie/elm-geometry for comparison.

Related

Applying known proofs in Idris 1 interactive elaborator

I am trying to get some familiarity with theorem proving in Idris 1 by working through exercises, and I am running into trouble.
Suppose I have the following definition for naturals and the following theorems that I want to prove:
data Natural = Z | S Natural
plus : Natural -> Natural -> Natural
plus x Z = x
plus x (S y) = S (plus x y)
succBoth : {a : Natural} -> {b : Natural} -> (a = b) -> (S a = S b)
succBoth = ?succBothProof
plusZero : (y : Natural) -> plus Z y = y
plusZero = ?plusZeroProof
plusSwitch : (x : Natural) -> (y : Natural) -> plus (S x) y = S (plus x y)
plusSwitch = ?plusSwitchProof
plusComm : (x : Natural) -> (y : Natural) -> plus x y = plus y x
plusComm = ?plusCommProof
I have already written proofs for the first three. Now, when I try to prove the last theorem, I need to apply one of the earlier proofs.
Idris> :l Peano.idr
Holes: Peano.plusCommProof
*Peano> :elab plusCommProof
-Peano.plusCommProof> intro `{{x}}
...
-Peano.plusCommProof> intro `{{y}}
...
-Peano.plusCommProof> induction (Var `{{y}})
...
-Peano.plusCommProof> compute
...
-Peano.plusCommProof> attack
---------- Other goals: ----------
{Z_103},{S_104}
---------- Assumptions: ----------
x : Natural
y : Natural
---------- Goal: ----------
{hole_7} : x = plus Z x
It would be natural to apply plusZero at this stage, but I run into issues trying to do that. I try to apply it via rewriteWith, keeping in mind that plusZero takes an argument of type Natural. I supply it with the x variable, expecting that its Natural type can be inferred from the assumptions, but no luck:
-Peano.plusCommProof> rewriteWith `(plusZero (Var `{{x}}))
(input):1:15-35:When checking argument y to function Peano.plusZero:
Type mismatch between
Raw (Type of Var _)
and
Natural (Expected type)
How does one "cast" the Raw variable into its type in context?
I couldn't get the Idris 1 version to work but I did install Idris 2 and wrote the proofs in its style instead.
module Peano

data Natural = Zero | Succ Natural

plus : Natural -> Natural -> Natural
plus x Zero = x
plus x (Succ y) = Succ (plus x y)

succBoth : {a : Natural} -> {b : Natural} -> (a = b) -> (Succ a = Succ b)
succBoth rfl = cong (\ a => Succ a) rfl

plusZero : (y : Natural) -> plus Zero y = y
plusZero Zero = Refl
plusZero (Succ y)
  = let assumption = plusZero y in
    rewrite assumption in Refl

plusSwitch : (x : Natural) -> (y : Natural) ->
             plus (Succ x) y = Succ (plus x y)
plusSwitch x Zero = Refl
plusSwitch x (Succ y)
  = let assumption = plusSwitch x y in
    rewrite assumption in Refl

plusComm : (x : Natural) -> (y : Natural) -> plus x y = plus y x
plusComm x Zero = rewrite plusZero x in Refl
plusComm x (Succ y)
  = let assumption = plusComm x y in
    rewrite plusSwitch y x in
    rewrite assumption in Refl
Admittedly, the Idris 2 version is much more compact, but I prefer the Idris 1 :elab style for readability.

Difference between parameters and members of a class

I am new to Coq and was wondering what is the difference between the following things:
Class test (f g: nat -> nat) := {
  init: f 0 = 0 /\ g 0 = 0;
  output: ...another proposition about f and g...;
}.
and
Class test := {
  f: nat -> nat;
  g: nat -> nat;
  init: f 0 = 0 /\ g 0 = 0;
  output: ...another proposition about f and g...;
}.
Could anyone provide an explanation?
The difference between them is referred to as bundling.
Class test (f g: nat -> nat) := {
  init: f 0 = 0 /\ g 0 = 0;
  output: ...another proposition about f and g...;
}.
is unbundled, and
Class test := {
  f: nat -> nat;
  g: nat -> nat;
  init: f 0 = 0 /\ g 0 = 0;
  output: ...another proposition about f and g...;
}.
is bundled.
The advantage of bundling is that you don't need to always provide f and g. The advantage of unbundling is that you can have different instances of the same class sharing the same f and g. If you bundle them, Coq will not be easily convinced that different instances share parameters.
You can read more about this in Type Classes for Mathematics in Type Theory.
To complement Ana’s excellent answer, here is a practical difference:
the unbundled version (call it utest) allows you to write the logical statement utest f g about a specific pair of functions f and g,
whereas the bundled version (call it btest) allows you to state that there exists a pair of functions which satisfies the properties; you can later refer to these functions by the projection names f and g.
So, roughly speaking:
btest is “equivalent” to ∃ f g, utest f g;
utest f' g' is “equivalent” to btest ∧ “the f (resp. g) in the aforementioned proof of btest is equal to f' (resp. g')”.
More formally, here are the equivalences for a minimal example
(in this code, the notation { x : A | B } is a dependent pair type,
i.e. the type of (x, y) where x : A and y : B
and the type B depends on the value x):
(* unbundled: *)
Class utest (x : nat) : Prop := {
  uprop : x = 0;
}.

(* bundled: *)
Class btest : Type := {
  bx : nat;
  bprop : bx = 0;
}.

(* [btest] is equivalent to: *)
Goal { x : nat | utest x } -> btest.
Proof.
  intros [x u]. econstructor. exact (@uprop x u).
Qed.

Goal btest -> { x : nat | utest x }.
Proof.
  intros b. exists (@bx b). constructor. exact (@bprop b).
Qed.

(* [utest x] is equivalent to: *)
Goal forall x, { b : btest | @bx b = x } -> utest x.
Proof.
  intros x [b <-]. constructor. exact (@bprop b).
Qed.

Goal forall x, utest x -> { b : btest | @bx b = x }.
Proof.
  intros x u. exists {| bx := x ; bprop := @uprop x u |}. reflexivity.
Qed.

(* NOTE: Here I have written all implicit arguments explicitly; in fact, you
   can let Coq infer them and write just [bx], [bprop], [uprop]
   instead of [@bx b], [@bprop b], [@uprop x u]. *)
In this example, we can also observe a difference with respect to computational relevance: utest can live in Prop, because its only member, uprop, is a proposition. On the other hand, I cannot really put btest in Prop, because that would require both bx and bprop to live in Prop, but bx is computationally relevant. In other words, Coq gives you this warning:
Class btest : Prop := {
  bx : nat;
  bprop : bx = 0;
}.
(* WARNING:
   bx cannot be defined because it is informative and btest is not.
   bprop cannot be defined because the projection bx was not defined.
*)

Why are these two tuples in Idris equal?

I am reading Type-Driven Development with Idris, and one of the exercises asks the reader to define a type TupleVect, such that a vector can be represented as:
TupleVect 2 ty = (ty, (ty, ()))
I solved it by defining the following type:
TupleVect : Nat -> Type -> Type
TupleVect Z ty = ()
TupleVect (S k) ty = (ty, TupleVect k ty)
The following test typechecks:
test : TupleVect 4 Nat
test = (1,2,3,4,())
My question is: why is (1,2,3,4,()) == (1,(2,(3,(4,()))))? I would have thought that the right-hand side is a 2-tuple, consisting of an Int and another tuple.
Checking the documentation at http://docs.idris-lang.org/en/latest/tutorial/typesfuns.html#tuples, you can see that tuples are represented as nested pairs. Hence (x, y, z) == (x, (y, z)) for all x, y, z.

Problems with equational proofs and interface resolution in Idris

I'm trying to model Agda-style equational reasoning proofs for setoids (types with an equivalence relation). My setup is as follows:
infix 1 :=:

interface Equality a where
  (:=:) : a -> a -> Type

interface Equality a => VerifiedEquality a where
  eqRefl : {x : a} -> x :=: x
  eqSym : {x, y : a} -> x :=: y -> y :=: x
  eqTran : {x, y, z : a} -> x :=: y -> y :=: z -> x :=: z
Using such interfaces, I could model some equational reasoning combinators like Syntax.PreorderReasoning from the Idris library.
syntax [expr] "QED" = qed expr
syntax [from] "={" [prf] "}=" [to] = step from prf to

namespace EqReasoning
  using (a : Type, x : a, y : a, z : a)
    qed : VerifiedEquality a => (x : a) -> x :=: x
    qed x = eqRefl {x = x}

    step : VerifiedEquality a => (x : a) -> x :=: y -> (y :=: z) -> x :=: z
    step x prf prf1 = eqTran {x = x} prf prf1
The main difference from the Idris library version is just that propositional equality and its related functions are replaced with the ones from the VerifiedEquality interface.
So far, so good. But when I try to use these combinators, I run into problems that, I believe, are related to interface resolution. Since the code is part of a matrix library that I'm working on, I posted the relevant part of it in the following gist.
The error occurs in the following proof
zeroMatAddRight : ( VerifiedSemiring s
                  , VerifiedEquality s ) =>
                  {r, c : Shape} ->
                  (m : M s r c) ->
                  (m :+: (zeroMat r c)) :=: m
zeroMatAddRight {r = r}{c = c} m
  = m :+: (zeroMat r c)
      ={ addMatComm {r = r}{c = c} m (zeroMat r c) }=
    (zeroMat r c) :+: m
      ={ zeroMatAddLeft {r = r}{c = c} m }=
    m
  QED
that returns the following error message:
When checking right hand side of zeroMatAddRight with expected type
m :+: (zeroMat r c) :=: m
Can't find implementation for Semiring a
Possible cause:
./Data/Matrix/Operations/Addition.idr:112:11-118:1:When checking an application of function Algebra.Equality.EqReasoning.step:
Type mismatch between
m :=: m (Type of qed m)
and
y :=: z (Expected type)
At least to me, it appears that this error is related to interface resolution not interacting well with syntax extensions.
My experience is that such strange errors can often be fixed by passing implicit parameters explicitly. The problem is that such a solution would kill the "readability" of proofs written with the equational reasoning combinators.
Is there a way to solve this? The relevant part for reproducing the error is available in the previously linked gist.

Dependently typed map - can't get it wrong?

Suppose I define my own list type.
data MyVec : Nat -> Type -> Type where
  MyNil : MyVec Z a
  (::) : a -> MyVec k a -> MyVec (S k) a
And a myMap function serving as fmap for MyVec:
myMap : (a -> b) -> MyVec k a -> MyVec k b
myMap f MyNil = ?rhs_nil
myMap f (x :: xs) = ?rhs_cons
Attempting to solve the ?rhs_nil hole in my mind:
:t ?rhs_nil is MyVec 0 b
fair enough - I need a constructor that returns a MyVec parameterized by b, with k equal to 0 (or Z). I can see that MyNil is indexed by Z and parameterized by anything, so I can use b easily. Therefore ?rhs_nil = MyNil, and it typechecks. Dandy.
:t ?rhs_cons is MyVec (S k) b
I need a constructor that returns a MyVec parameterized by b and indexed by (S k).
I can see that the constructor (::) builds a list indexed by (S k), so I try to use it. The first argument needs to be of type b, since I am building a MyVec <> b, and the only way to get one is to apply f to x.
myMap f (x :: xs) = f x :: <>
Now I get confused. The second argument of (::) is supposed to be a MyVec k b, so why can I not simply use the MyNil constructor, with the unification/substitution k == Z that MyNil gives me, getting:
myMap f (x :: xs) = f x :: MyNil
I do understand that I need to recurse and write = f x :: myMap f xs, but how does the compiler know how many times the (::) constructor needs to be applied? How does it infer the correct k for this case, preventing me from using Z there?
The k is already fixed by xs : MyVec k a. In f x :: ?rest, the result must have type MyVec (S k) b, so ?rest must have type MyVec k b for that same k; MyNil : MyVec Z b only fits when k is Z. So you cannot unify k with Z if xs contains any elements.