Why do I get sum(<P/3> <P/3>) when I use Show in Oz?

I'm learning Oz and was trying to run an example that I found in a book. It simulates a full adder, but what I get is sum(<P/3> <P/3>), so I do not know where the mistake is. I would appreciate your help.
Here is part of the code:
fun {XorG X Y}
   fun {$ X Y}
      fun {GateLoop X Y}
         case X#Y of (X|Xr)#(Y|Yr) then
            {X+Y-2*X*Y}|{GateLoop Xr Yr}
         end
      end
   in
      thread {GateLoop X Y} end
   end
end
proc {FullAdder X Y ?C ?S}
   K L M
in
   K={AndG X Y}
   L={AndG Y Z}
   M={AndG X Z}
   C={OrG K {OrG L M}}
   S={XorG Z {XorG X Y}}
end
declare
X=1|1|0|_
Y=0|1|0|_ C S in
{FullAdder X Y C S}
{Show sum(C S)}
AndG and OrG are similar to XorG.

A full adder has 3 inputs and 2 outputs. Indeed, you use Z in the FullAdder procedure but never declare it, so first add it as an argument. Then you have to define a stream for Z as you did for X and Y, namely:
declare
X=1|1|0|_
Y=0|1|0|_
Z=1|1|1|_ C S in
{FullAdder X Y Z C S}
{Show sum(C S)}
But your main problem is that your XOR gate function is not well defined: it returns an anonymous function, so a call like {XorG A B} returns a two-argument function instead of a stream. That is also why Show prints sum(<P/3> <P/3>): C and S end up bound to functions, which Oz displays as procedures of arity 3. A better way to implement logic gates is to use a generic function GateMaker, in order not to duplicate code:
fun {GateMaker F}
   fun {$ Xs Ys}
      fun {GateLoop Xs Ys}
         case Xs#Ys of (X|Xr)#(Y|Yr) then
            {F X Y}|{GateLoop Xr Yr}
         end
      end
   in
      thread {GateLoop Xs Ys} end
   end
end
Then you only have to define your gates like this:
AndG = {GateMaker fun {$ X Y} X*Y end}
OrG = {GateMaker fun {$ X Y} X+Y-X*Y end}
XorG = {GateMaker fun {$ X Y} X+Y-2*X*Y end}
...
Once you define the gates correctly, your FullAdder should work properly.
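For completeness, here is a sketch of the corrected FullAdder with Z added as the third input stream (assuming AndG, OrG and XorG are the GateMaker-built gates above):
proc {FullAdder X Y Z ?C ?S}
   K L M
in
   K={AndG X Y}
   L={AndG Y Z}
   M={AndG X Z}
   C={OrG K {OrG L M}}
   S={XorG Z {XorG X Y}}
end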

Related

Compile-time invariant/property checking in Elm

I have some value constraints/properties that I want to check at compile-time. In this example, I want to track whether a vector is normalized or not. I think I have a solution, using type tags, but I need someone with some Elm/FP experience to tell me if I have missed something obvious. Thank you.
module TagExperiment exposing (..)

type Vec t = Vec Float Float Float

type Unit = Unit
type General = General

toGeneral : Vec t -> Vec General
toGeneral (Vec x y z) =
    Vec x y z

scaleBy : Vec t -> Vec t -> Vec t
scaleBy (Vec bx by bz) (Vec ax ay az) =
    {- Operate on two General's or two Unit's. If you have one of each,
       then the Unit needs to be cast to a General.
    -}
    let
        mag =
            sqrt ((bx * bx) + (by * by) + (bz * bz))
    in
    Vec (ax * mag) (ay * mag) (az * mag)

-- These cases work as desired.
a : Vec Unit
a = Vec 0 0 1

b : Vec General
b = Vec 2 2 2

d : Vec Unit
d = scaleBy a a

e : Vec General
e = scaleBy b b

g : Vec General
g = scaleBy (toGeneral a) b

h : Vec General
h = scaleBy b (toGeneral a)

-- Here is where I have trouble.
c : Vec t -- unknown... uh-oh
c = Vec 3 3 3

f : Vec t -- still unknown... sure
f = scaleBy c c

i : Vec Unit -- wrong !!!
i = scaleBy a c

j : Vec Unit -- wrong !!!
j = scaleBy c a

k : Vec General -- lucky
k = scaleBy b c

l : Vec General -- lucky
l = scaleBy c b

{- The trouble is that I am not required to specify a tag for c. I guess the
   solution is to write "constructors" and make the built-in Vec constructor
   opaque?
-}
newUnitVec : Float -> Float -> Float -> (Vec Unit)
newUnitVec x y z =
    -- add normalization
    Vec x y z

newVec : Float -> Float -> Float -> (Vec General)
newVec x y z =
    Vec x y z
Yes, without Dependent Types the most ergonomic way to ensure constraints on values at compile time is to use opaque types along with a "Parse" approach.
Perhaps something like:
module Vec exposing (UnitVector, Vector, vectorToUnit)

type UnitVector
    = UnitVector Float Float Float

type Vector
    = Vector Float Float Float

vectorToUnit : Vector -> Maybe UnitVector
vectorToUnit (Vector x y z) =
    case ( x, y, z ) of
        ( 0, 0, 0 ) ->
            Nothing

        _ ->
            normalize x y z
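The normalize helper is not shown above; here is a minimal sketch of what it might look like (the name and Maybe-returning signature are assumptions taken from the call site):
-- Hypothetical helper: the (0, 0, 0) case has already been ruled out,
-- so dividing by the magnitude is safe here.
normalize : Float -> Float -> Float -> Maybe UnitVector
normalize x y z =
    let
        mag =
            sqrt (x * x + y * y + z * z)
    in
    Just (UnitVector (x / mag) (y / mag) (z / mag))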
Then, with the only ways to get a UnitVector both defined inside this module and known to obey the constraints, any time you see a UnitVector at compile time it is safe to assume the constraints are met.
For vectors in particular, it may be worth having a look at ianmackenzie/elm-geometry for comparison?

Compiler introducing extra interface requirements when using functions returning dependent pairs

This is rather complicated; sorry I couldn't make the example simpler. I'm trying to formalize a theory, and interfaces A and B represent my axioms. X and Y are some objects in the theory, mkY creates a Y from two Xs, and PropA, PropY and PropYY are some statements about these objects:
interface A a where
  PropA : a -> Type

interface B where
  X : Type
  Y : Type
  PropY : Y -> Type
  mkY : A X => (x, y : X) -> (z : Y ** PropY z)
  PropYY : Y -> Type
  mkPropY : A X => {x : X} -> {y : Y} -> PropA x -> PropY y -> PropYY y

lemma1 : (B, A X) => (x, y : X) -> PropA x -> (z : Y ** PropYY z)
lemma1 x y prop_a =
  let (z ** propZ) = mkY x y in
  (z ** mkPropY prop_a propZ)
Unfortunately, the rather obvious lemma1 does not compile:
When checking right hand side of Example.case block in lemma1 at /home/sven/code/idris/geometry/src/Euclid/Example.idr:17:9-20 with expected type
(z1 : Y ** PropYY z1)
When checking an application of function Example.mkPropY:
Type mismatch between
PropA x1 (Type of prop_a)
and
PropA x (Expected type)
It seems to me that Idris refuses to unify the requirement A X from the function header with the one introduced by the mkY function. When I replace mkPropY prop_a propZ with a hole and ask for its type, I get this:
constraint : B
z : Y
propZ : PropY z
x : X
y : X
constraint1 : A X
prop_a : PropA x
constraint2 : A X
--------------------------------------
whatIsIt : PropYY z
Here constraint1 and constraint2 are the same, and yet there are two of them, which seems to be the root cause of the problem. So why does Idris introduce this additional constraint, and how do I make it work?
I'm not sure why Idris thinks with mkY there could be another constraint in play (as by the looks of it nothing in the hole is constrained by it, even with :set showimplicits). Maybe someone else can explain why, but for now it usually helps to make constraints explicit:
lemma1 : (B, a_const : A X) => (x, y : X) -> PropA x -> (z : Y ** PropYY z)
lemma1 #{a_const} x y prop_a =
  let (z ** propZ) = mkY #{a_const} x y in
  (z ** mkPropY prop_a propZ)
(Maybe rewriting B as interface B x y where … would help, so that the scope over X and Y is clearer, but I didn't try this.)

Proving isomorphism between Martin-Lof's equality and Path Induction in Coq

I am studying Coq and trying to prove the isomorphism between Martin-Löf's equality and the one given by path induction.
I defined the two equalities as follows.
Module MartinLof.
Axiom eq : forall A, A -> A -> Prop.
Axiom refl : forall A, forall x : A, eq A x x.
Axiom el : forall (A : Type) (C : forall x y : A, eq A x y -> Prop),
(forall x : A, C x x (refl A x)) ->
forall a b : A, forall p : eq A a b, C a b p.
End MartinLof.
Module PathInduction.
Axiom eq : forall A, A -> A -> Prop.
Axiom refl : forall A, forall x : A, eq A x x.
Axiom el : forall (A : Type) (x : A) (P : forall a : A, eq A x a -> Prop),
P x (refl A x) -> forall y : A, forall p : eq A x y, P y p.
End PathInduction.
And I defined the two functions involved in the isomorphism as follows.
Definition f {A} : forall x y: A, forall m: MartinLof.eq A x y, PathInduction.eq A x y.
Proof.
apply: (MartinLof.el A (fun a b p => PathInduction.eq A a b) _ x y m).
move=> x0.
exact: PathInduction.refl A x0.
Defined.
Definition g {A} : forall x y: A, forall p: PathInduction.eq A x y, MartinLof.eq A x y.
Proof.
apply: (PathInduction.el A x (fun a p => MartinLof.eq A x a) _ y p).
exact: MartinLof.refl A x.
Defined.
Now I am trying to define the following proof-terms, but I cannot move from the initial statement because I actually don't know what tactic to apply.
Definition pf1 {A}: forall x y: A, forall m: MartinLof.eq A x y,
eq m (g x y (f x y m)).
Definition pf2 {A} : forall x y: A, forall p: PathInduction.eq A x y,
eq p (f x y (g x y p)).
I thought I could simplify the expression
(g x y (f x y m))
but I don't know how to do that. Does anyone know how I can proceed with this proof?
Thanks in advance.
The problem is that your definition of the identity types is incomplete, because it does not specify how el interacts with refl. Here is a complete solution.
From mathcomp Require Import ssreflect.
Module MartinLof.
Axiom eq : forall A, A -> A -> Prop.
Axiom refl : forall A, forall x : A, eq A x x.
Axiom el : forall (A : Type) (C : forall x y : A, eq A x y -> Prop),
(forall x : A, C x x (refl A x)) ->
forall a b : A, forall p : eq A a b, C a b p.
Axiom el_refl : forall (A : Type) (C : forall x y : A, eq A x y -> Prop)
(CR : forall x : A, C x x (refl A x)),
forall x : A, el A C CR x x (refl A x) = CR x.
End MartinLof.
Module PathInduction.
Axiom eq : forall A, A -> A -> Prop.
Axiom refl : forall A, forall x : A, eq A x x.
Axiom el : forall (A : Type) (x : A) (P : forall a : A, eq A x a -> Prop),
P x (refl A x) -> forall y : A, forall p : eq A x y, P y p.
Axiom el_refl : forall (A : Type) (x : A) (P : forall y : A, eq A x y -> Prop)
(PR : P x (refl A x)),
el A x P PR x (refl A x) = PR.
End PathInduction.
Definition f {A} (x y: A) (m: MartinLof.eq A x y) : PathInduction.eq A x y.
Proof.
apply: (MartinLof.el A (fun a b p => PathInduction.eq A a b) _ x y m).
move=> x0.
exact: PathInduction.refl A x0.
Defined.
Definition g {A} (x y: A) (p: PathInduction.eq A x y) : MartinLof.eq A x y.
Proof.
apply: (PathInduction.el A x (fun a p => MartinLof.eq A x a) _ y p).
exact: MartinLof.refl A x.
Defined.
Definition pf1 {A} (x y: A) (m: MartinLof.eq A x y) : eq m (g x y (f x y m)).
Proof.
apply: (MartinLof.el A (fun x y p => p = g x y (f x y p))) => x0.
by rewrite /f MartinLof.el_refl /g PathInduction.el_refl.
Qed.
Definition pf2 {A} (x y: A) (m: PathInduction.eq A x y) : eq m (f x y (g x y m)).
Proof.
apply: (PathInduction.el A x (fun y p => p = f x y (g x y p))).
by rewrite /f /g PathInduction.el_refl MartinLof.el_refl.
Qed.
Alternatively, you could have just defined the two identity types as Coq inductive types, which would have given you the same principles without the need to state separate axioms; Coq's computation behavior already yields the equations you need. I have used pattern-matching syntax, but similar definitions are possible using the standard combinators eq1_rect and eq2_rect.
Inductive eq1 (A : Type) (x : A) : A -> Type :=
| eq1_refl : eq1 A x x.
Inductive eq2 (A : Type) : A -> A -> Type :=
| eq2_refl x : eq2 A x x.
Definition f {A} {x y : A} (p : eq1 A x y) : eq2 A x y :=
match p with eq1_refl _ _ => eq2_refl A x end.
Definition g {A} {x y : A} (p : eq2 A x y) : eq1 A x y :=
match p with eq2_refl _ z => eq1_refl A z end.
Definition fg {A} (x y : A) (p : eq2 A x y) : f (g p) = p :=
match p with eq2_refl _ _ => eq_refl end.
Definition gf {A} (x y : A) (p : eq1 A x y) : g (f p) = p :=
match p with eq1_refl _ _ => eq_refl end.
Although not immediately clear, eq1 corresponds to your PathInduction.eq, and eq2 corresponds to your MartinLof.eq. You can check this by asking Coq to print the types of their recursion principles:
Check eq1_rect.
Check eq2_rect.
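The output should look roughly like this (up to variable names); eq1_rect has the shape of PathInduction.el and eq2_rect the shape of MartinLof.el, with Prop replaced by Type:
eq1_rect : forall (A : Type) (x : A) (P : forall a : A, eq1 A x a -> Type),
           P x (eq1_refl A x) -> forall (y : A) (e : eq1 A x y), P y e
eq2_rect : forall (A : Type) (P : forall a b : A, eq2 A a b -> Type),
           (forall x : A, P x x (eq2_refl A x)) ->
           forall (a b : A) (e : eq2 A a b), P a b e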
You may notice that I have defined the two in Type instead of Prop. I just did this to make the recursion principles generated by Coq closer to the ones you had; by default, Coq uses simpler recursion principles for things defined in Prop (though that behavior can be changed with a few commands).
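For example, if eq1 and eq2 had been declared in Prop, the Scheme command would generate the dependent induction principles explicitly (a sketch, with arbitrary names):
Scheme eq1_ind_dep := Induction for eq1 Sort Prop.
Scheme eq2_ind_dep := Induction for eq2 Sort Prop.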

Problems with equational proofs and interface resolution in Idris

I'm trying to model Agda-style equational reasoning proofs for Setoids (types with an equivalence relation). My setup is as follows:
infix 1 :=:

interface Equality a where
  (:=:) : a -> a -> Type

interface Equality a => VerifiedEquality a where
  eqRefl : {x : a} -> x :=: x
  eqSym : {x, y : a} -> x :=: y -> y :=: x
  eqTran : {x, y, z : a} -> x :=: y -> y :=: z -> x :=: z
Using such interfaces I could model some equational reasoning combinators like Syntax.PreorderReasoning from the Idris library.
syntax [expr] "QED" = qed expr
syntax [from] "={" [prf] "}=" [to] = step from prf to

namespace EqReasoning
  using (a : Type, x : a, y : a, z : a)
    qed : VerifiedEquality a => (x : a) -> x :=: x
    qed x = eqRefl {x = x}

    step : VerifiedEquality a => (x : a) -> x :=: y -> (y :=: z) -> x :=: z
    step x prf prf1 = eqTran {x = x} prf prf1
The main difference from the Idris library is just the replacement of propositional equality and its related functions with the ones from the VerifiedEquality interface.
So far, so good. But when I try to use such combinators, I run into problems that, I believe, are related to interface resolution. Since the code is part of a matrix library that I'm working on, I posted the relevant part of it in the following gist.
The error occurs in the following proof
zeroMatAddRight : ( VerifiedSemiring s
                  , VerifiedEquality s ) =>
                  {r, c : Shape} ->
                  (m : M s r c) ->
                  (m :+: (zeroMat r c)) :=: m
zeroMatAddRight {r = r}{c = c} m
  = m :+: (zeroMat r c)
      ={ addMatComm {r = r}{c = c} m (zeroMat r c) }=
    (zeroMat r c) :+: m
      ={ zeroMatAddLeft {r = r}{c = c} m }=
    m
    QED
that returns the following error message:
When checking right hand side of zeroMatAddRight with expected type
m :+: (zeroMat r c) :=: m
Can't find implementation for Semiring a
Possible cause:
./Data/Matrix/Operations/Addition.idr:112:11-118:1:When checking an application of function Algebra.Equality.EqReasoning.step:
Type mismatch between
m :=: m (Type of qed m)
and
y :=: z (Expected type)
At least to me, it appears that this error is related to interface resolution not interacting well with syntax extensions.
My experience is that such strange errors can be solved by passing implicit parameters explicitly. The problem is that such a solution would kill the readability of equational reasoning combinator proofs.
Is there a way to solve this? The relevant part for reproducing this error is available in the previously linked gist.

Proof in Coq that equality is reflexivity

The HoTT book writes on page 51:
... we can prove by path induction on p : x = y that
$(x, y, p) =_{\sum_{(x,y:A)} (x=y)} (x, x, \mathsf{refl}_x)$.
Can someone show me how to prove this in Coq?
Actually, it is possible to prove this result in Coq:
Notation "y ; z" := (existT _ y z) (at level 80, right associativity).
Definition hott51 T x y e :
(x; y; e) = (x; x; eq_refl) :> {x : T & {y : T & x = y} } :=
match e with
| eq_refl => eq_refl
end.
Here, I've used a semicolon tuple notation to express dependent pairs; in Coq, {x : T & P x} is the sigma type \sum_{x : T} P x. There is also a slightly easier-to-read variant, where we do not mention y:
Definition hott51' T x e : (x; e) = (x; eq_refl) :> {y : T & x = y} :=
match e with
| eq_refl => eq_refl
end.
If you're not used to writing proof terms by hand, this code might look a bit mysterious, but it is doing exactly what the HoTT book says: proceeding by path induction. There is one crucial bit of information missing here, namely the type annotations needed to do path induction. Coq is able to infer those, but we can ask it to tell us what they are explicitly by printing the term. For hott51', we get the following (after some rewriting):
hott51' =
fun (T : Type) (x : T) (e : x = x) =>
match e as e' in _ = y' return (y'; e') = (x; eq_refl) with
| eq_refl => eq_refl
end
: forall (T : Type) (x : T) (e : x = x),
(x; e) = (x; eq_refl)
The important detail there is that, in the return type of the match, both the endpoint x and the proof e are generalized to y' and e'. The only reason this is possible is that we wrapped x in a pair. Consider what would happen if we tried proving UIP:
Fail Definition uip T (x : T) (e : x = x) : e = eq_refl :=
match e as e' in _ = y' return e' = eq_refl with
| eq_refl => eq_refl
end.
Here, Coq complains, saying:
The command has indeed failed with message:
In environment
T : Type
x : T
e : x = x
y' : T
e' : x = y'
The term "eq_refl" has type "x = x" while it is expected to have type
"x = y'" (cannot unify "x" and "y'").
What this error message is saying is that, in the return type of the match, e' has type x = y', where y' is generalized. This means that the equality e' = eq_refl is ill-typed, because the right-hand side would have to have type x = y', whereas eq_refl can only have a type of the form x = x or y' = y'.
Simple answer: you can't. Not all proofs of x = y in Coq are instances of eq_refl x. You would have to assume Uniqueness of Identity Proofs (UIP) to have such a result. This is a very nice axiom, but it's still an axiom in the Calculus of Inductive Constructions.
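For reference, a minimal sketch of how UIP can be stated as an axiom in Coq (the name is arbitrary here):
Axiom UIP : forall (A : Type) (x y : A) (p q : x = y), p = q.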