Coq: Is it possible to prove that, if two records are different, then one of their fields is different?

Say you have a record:
Record Example := {
fieldA : nat;
fieldB : nat
}.
Can we prove:
Lemma record_difference : forall (e1 e2 : Example),
e1 <> e2 ->
(fieldA e1 <> fieldA e2)
\/
(fieldB e1 <> fieldB e2).
If so, how?
On the one hand, it looks true, since records are completely determined by their fields. On the other, without knowing what made e1 different from e2 in the first place, how are we supposed to decide which side of the disjunction to prove?
As a comparison, note that if there is only one field in the record, we are able to prove the respective lemma:
Record SmallExample := {
field : nat
}.
Lemma record_dif_small : forall (e1 e2 : SmallExample),
e1 <> e2 -> field e1 <> field e2.
Proof.
unfold not; intros; apply H.
destruct e1; destruct e2; simpl in H0.
f_equal; auto.
Qed.

"On the other, without knowing what made e1 different from e2 in the first place, how are we supposed to decide which side of the disjunction to prove?"
That is precisely the point: we need to figure out what makes both records different. We can do this by testing whether fieldA e1 = fieldA e2.
Require Import Coq.Arith.PeanoNat.
Record Example := {
fieldA : nat;
fieldB : nat
}.
Lemma record_difference : forall (e1 e2 : Example),
e1 <> e2 ->
(fieldA e1 <> fieldA e2)
\/
(fieldB e1 <> fieldB e2).
Proof.
intros [n1 m1] [n2 m2] He1e2; simpl.
destruct (Nat.eq_dec n1 n2) as [en|nen]; try now left.
right. intros em. congruence.
Qed.
Here, Nat.eq_dec is a function from the standard library that allows us to check whether two natural numbers are equal:
Nat.eq_dec : forall n m, {n = m} + {n <> m}.
The {P} + {~ P} notation denotes a special kind of boolean that gives you a proof of P or ~ P when destructed, depending on which side it lies on.
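As a small aside (not part of the original answer), a sumbool can also be used like a boolean in an if-then-else, while still yielding proofs when destructed in a proof script. The definition name below is made up for illustration:
(* Reuses the Example record and the PeanoNat import from above. *)
Definition same_fieldA (e1 e2 : Example) : bool :=
if Nat.eq_dec (fieldA e1) (fieldA e2) then true else false.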
It is worth stepping through this proof to see what is going on. On the third line of the proof, for instance, executing intros em leads to the following goal.
n1, m1, n2, m2 : nat
He1e2 : {| fieldA := n1; fieldB := m1 |} <> {| fieldA := n2; fieldB := m2 |}
en : n1 = n2
em : m1 = m2
============================
False
If en and em hold, then the two records must be equal, contradicting He1e2. The congruence tactic simply instructs Coq to try to figure this out by itself.
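If you prefer to see that reasoning spelled out, the same goal can also be closed by hand (a variant of the last step, not part of the original proof):
(* en and em turn the two records into the same record, contradicting He1e2. *)
subst. apply He1e2. reflexivity.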
Edit
It is interesting to see how far one can get without decidable equality. The following similar statement can be proved trivially:
forall (A B : Type) (p1 p2 : A * B),
p1 = p2 <-> fst p1 = fst p2 /\ snd p1 = snd p2.
By contraposition, we get
forall (A B : Type) (p1 p2 : A * B),
p1 <> p2 <-> ~ (fst p1 = fst p2 /\ snd p1 = snd p2).
It is here that we get stuck without a decidability assumption: De Morgan's laws would allow us to convert the right-hand side to a statement of the form ~ P \/ ~ Q, but the direction we need, ~ (P /\ Q) -> ~ P \/ ~ Q, is not provable in Coq's constructive logic without a decidability (or excluded-middle) assumption.
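For concreteness, the forward direction of that contrapositive does go through constructively; it is only the final step from ~ (P /\ Q) to ~ P \/ ~ Q that is blocked. Here is a sketch (the lemma name is mine):
Lemma pair_neq_not_both : forall (A B : Type) (p1 p2 : A * B),
p1 <> p2 -> ~ (fst p1 = fst p2 /\ snd p1 = snd p2).
Proof.
intros A B [a1 b1] [a2 b2] Hneq [Hfst Hsnd]; simpl in *.
(* Both components are equal, so the pairs are equal, contradicting Hneq. *)
apply Hneq; subst; reflexivity.
Qed.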


Avoid repetition in Coq

I'm currently trying to implement Hilbert's geometry in Coq. When proving, very often a section of the proof is repeated multiple times; for example, here I'm trying to prove that there exist 3 lines which are different from each other.
Proposition prop3_2 : (exists l m n: Line, (l<>m/\m<>n/\n<>l)).
Proof.
destruct I3 as [A [B [C [[AneB [BneC CneA]] nAlgn]]]].
destruct ((I1 A B) AneB) as [AB [incAB unAB]].
destruct ((I1 B C) BneC) as [BC [incBC unBC]].
destruct ((I1 C A) CneA) as [CA [incCA unCA]].
refine (ex_intro _ AB _).
refine (ex_intro _ BC _).
refine (ex_intro _ CA _).
split.
(* Proving AB <> BC through contradiction *)
case (classic (AB = BC)).
intros AB_e_BC.
rewrite AB_e_BC in incAB.
pose (conj incBC (proj2 incAB)) as incABC.
specialize (nAlgn BC).
tauto.
trivial.
split.
(* Proving BC <> CA through contradiction *)
case (classic (BC = CA)).
intros BC_e_CA.
rewrite BC_e_CA in incBC.
pose (conj incCA (proj2 incBC)) as incABC.
specialize (nAlgn CA).
tauto.
trivial.
(* Proving CA <> AB through contradiction *)
case (classic (CA = AB)).
intros CA_e_AB.
rewrite CA_e_AB in incCA.
pose (conj incAB (proj2 incCA)) as incABC.
specialize (nAlgn AB).
tauto.
trivial.
Qed.
It'd be very nice if there were something like a macro for these cases.
I thought about creating a sub-proof halfway through:
Lemma prop3_2_a: (forall (A B C:Point) (AB BC:Line)
(incAB:(Inc B AB /\ Inc A AB)) (incBC:(Inc C BC /\ Inc B BC))
(nAlgn : forall l : Line, ~ (Inc A l /\ Inc B l /\ Inc C l)),
AB <> BC).
Proof.
...
But that's pretty cumbersome, and I'd have to create three different versions of nAlgn ordered differently, which is manageable but annoying.
The code can be found here: https://github.com/GiacomoMaletto/Hilbert/blob/master/hilbert.v
(Btw, any other comments on style or whatever are appreciated.)
First, some simple advice to refactor the three cases individually.
At the start of each of them, the goal looks like this:
...
--------------
AB <> BC
The subsequent case analysis on (AB = BC) is somewhat redundant. The first case (AB = BC) is the interesting one, where you need to prove a contradiction, and the second case (AB <> BC) is trivial. A shorter way is intro AB_e_BC, which asks you just to prove the first case. This works because AB <> BC actually means AB = BC -> False.
The other steps are mostly straightforward propositional reasoning that can be brute-forced via tauto, except for a bit of rewriting and a crucial use of specialize. The rewriting only uses an equality between the variables AB and BC; in that case you can use the subst shorthand, which rewrites with all equalities where one side is a variable. So this fragment:
(* Proving AB <> BC through contradiction *)
case (classic (AB = BC)).
intros AB_e_BC.
rewrite AB_e_BC in incAB.
pose (conj incBC (proj2 incAB)) as incABC.
specialize (nAlgn BC).
tauto.
trivial.
becomes
intro; specialize (nAlgn BC); subst; tauto.
Now you still don't want to write that three times. The only varying part now is the variable BC. Luckily, you can read that off the goal before intro.
--------------
AB <> BC
^----- there's BC (and in the other two cases, CA and AB)
Actually picking either AB or BC is fine, since intro makes the assumption they're equal. You can use match goal with to parameterize your tactic by bits from the goal.
match goal with
| [ |- _ <> ?l ] => intro; specialize (nAlgn l); subst; tauto
end.
(* The syntax is:
match goal with
| [ |- ??? ] => tactics
end.
where ??? is a pattern with wildcards (_) and pattern
variables (?l) that can be referred to inside the body "tactics"
(without the question mark). *)
Next, moving up before the split:
-------------------------------------------
AB <> BC /\ BC <> CA /\ CA <> AB
You can compose tactics to get three subgoals at once: split; [| split]. (meaning, split once, and in the second subgoal split again).
Finally, you want to apply the match tactic above for each subgoal, that's another semicolon:
split; [| split];
match goal with
| [ |- _ <> ?l ] => intro; specialize (nAlgn l); subst; tauto
end.
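Putting the pieces together, the whole proof could then look roughly like this (a sketch reusing the names I1, I3, Inc and nAlgn from the question; I have not run it against the full development):
Proposition prop3_2 : exists l m n : Line, l <> m /\ m <> n /\ n <> l.
Proof.
destruct I3 as [A [B [C [[AneB [BneC CneA]] nAlgn]]]].
destruct (I1 A B AneB) as [AB [incAB unAB]].
destruct (I1 B C BneC) as [BC [incBC unBC]].
destruct (I1 C A CneA) as [CA [incCA unCA]].
exists AB, BC, CA.
(* Three symmetric subgoals, each solved by the same parameterized tactic. *)
split; [| split];
match goal with
| [ |- _ <> ?l ] => intro; specialize (nAlgn l); subst; tauto
end.
Qed.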
I would also recommend using bullets and braces to structure your proof, so that when your definitions change, you avoid entering confusing proof states because tactics get applied to the wrong subgoal. Here are some possible layouts for a three-case proof:
split.
- ...
...
- split.
+ ...
...
+ ...
...
split; [| split].
- ...
...
- ...
...
- ...
...
split; [| split].
{ ...
...
}
{ ...
...
}
{ ...
...
}

From set inclusion to set equality in Lean

Given a proof of set inclusion and its converse I'd like to be able to show that two sets are equal.
For example, I know how to prove the following statement, and its converse:
open set
universe u
variable elem_type : Type u
variable A : set elem_type
variable B : set elem_type
def set_deMorgan_incl : A ∩ B ⊆ set.compl ((set.compl A) ∪ (set.compl B)) :=
sorry
Given these two inclusion proofs, how do I prove set equality, i.e.
def set_deMorgan_eq : A ∩ B = set.compl ((set.compl A) ∪ (set.compl B)) :=
sorry
You will want to use anti-symmetry of the subset relation, as proved in the stdlib package:
def set_deMorgan_eq : A ∩ B = set.compl ((set.compl A) ∪ (set.compl B)) :=
subset.antisymm (set_deMorgan_incl _ _ _) (set_deMorgan_incl_conv _ _ _)
As you can see in the proof of subset.antisymm, it combines both functional and propositional extensionality.

Why is this grammar not context-sensitive?

I have got this grammar:
G = (N, Epsilon, P, S)
with
N = {S, A, B}
Epsilon = {a},
P: S -> e
S -> ABA
AB -> aa
aA -> aaaA
A -> a
Why is this a grammar of only type 0?
I think it is because of aA -> aaaA, but I don't see how it is in conflict with the rules.
The rules have to be built like this:
x1 A x2 -> x1 B x2, where:
A is an element of N;
x1, x2 are elements of V*;
and B is an element of VV*.
With V = N united with Epsilon, I don't see the problem here.
a is from V, and A is from N, while to the right of A there could be the empty word, which would also be part of V*, so the left side would be okay.
On the right side, there is x1 again, being a; then we could say aaA is part of VV*, with a being from V and aA from V*, while the right part is x2, so empty again.
"The rules have to be built like this:
x1 A x2 -> x1 B x2 while:...."
Yes, that's correct. But there exists an equivalent definition of the rules of type-1 grammars:
p -> q, where
p, q are elements of V^+, length(p) <= length(q), and (naturally) p contains at least one element of N.
Every rule of your grammar satisfies this form, so your grammar is type-1.
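To make this concrete, check each rule against length(p) <= length(q): S -> ABA gives 1 <= 3, AB -> aa gives 2 <= 2, aA -> aaaA gives 2 <= 4, and A -> a gives 1 <= 1. The remaining rule S -> e (the empty word) is admitted by the usual convention that the start symbol may derive the empty word as long as S never occurs on a right-hand side, which is the case here.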

vector reflexivity under setoid equality using CoRN MathClasses

I have a simple lemma:
Lemma map2_comm: forall A (f:A->A->B) n (a b:t A n),
(forall x y, (f x y) = (f y x)) -> map2 f a b = map2 f b a.
which I was able to prove using standard equality (≡). Now I need to prove a similar lemma using setoid equality (using CoRN MathClasses). I am new to this library and to type classes in general, and I am having difficulty doing so. My first attempt is:
Lemma map2_setoid_comm `{Equiv B} `{Equiv (t B n)} `{Commutative B A}:
forall (a b: t A n),
map2 f a b = map2 f b a.
Proof.
intros.
induction n.
dep_destruct a.
dep_destruct b.
simpl.
(here '=' is 'equiv'). After 'simpl' the goal is "(nil B)=(nil B)", or "[]=[]" using VectorNotations. Normally I would finish it with the 'reflexivity' tactic, but it gives me:
Tactic failure: The relation equiv is not a declared reflexive relation. Maybe you need to require the Setoid library.
I guess I need somehow to define reflexivity for vector types, but I am not sure how to do that. Please advise.
First of all, the lemma statement needs to be adjusted to:
Lemma map2_setoid_comm : forall `{CO:Commutative B A f} `{SB: !Setoid B},
forall n:nat, Commutative (map2 f (n:=n)).
To be able to use reflexivity:
Definition vec_equiv `{Equiv A} {n}: relation (vector A n) := Vforall2 (n:=n) equiv.
Instance vec_Equiv `{Equiv A} {n}: Equiv (vector A n) := vec_equiv.
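As an aside, here is a minimal, library-independent illustration of what the error message is about (the relation and instance names are mine, not part of CoRN/MathClasses): reflexivity works on a goal of the form R x x only once R is registered as a Reflexive (or Equivalence) relation.
Require Import Coq.Setoids.Setoid.
Require Import Coq.Classes.RelationClasses.
(* A toy relation that is not syntactically Leibniz equality. *)
Definition le_both (x y : nat) : Prop := x <= y /\ y <= x.
(* Without this instance, "reflexivity" on a goal "le_both x x" fails
with the same error as in the question. *)
Instance le_both_refl : Reflexive le_both.
Proof. intros x. split; apply le_n. Qed.
Goal le_both 2 2.
Proof. reflexivity. Qed.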

CTL Equivalence checking

I'm told the following CTL formulas aren't equivalent. However, I can't find a model in which one is true and the other isn't. CTL is Computation Tree Logic, a branching-time temporal logic.
Formula 1: AF p OR AF q
Formula 2: AF( p OR q )
The first says: for all paths starting from the initial state, there is a future point at which p holds, OR for all paths starting from the initial state, there is a future point at which q holds.
The second: for all paths starting from the initial state, there is a future point at which p OR q holds.
The model is a little bit tricky. Firstly, one should note that AF p OR AF q implies AF (p OR q). So, we are looking for a model in which AF (p OR q) is true but AF p OR AF q is false.
I am assuming that you are familiar with Kripke model notation described in Logic in Computer Science textbook by M. Huth and M. Ryan (see http://www.cs.bham.ac.uk/research/projects/lics/).
Let M = (S, R, L) be a model with S = {s0, s1, s2} as the set of possible states, R = {(s0,s1), (s0,s2), (s1,s1), (s1,s2), (s2,s2)} as the transition relation, and L is a labeling function defined as follows: L(s0) = {} (empty set), L(s1) = {p}, and L(s2) = {q}.
Suppose the starting state is s0. It is clear that AF (p OR q) holds at s0. However, AF p OR AF q is not satisfied at s0. To prove this, we have to show that s0 does not satisfy AF p *and* s0 does not satisfy AF q.
AF p is not satisfied at s0 since we can choose the path s0 -> s2 -> s2 -> s2 -> ...
Similarly, AF q is not satisfied at s0, since we can choose the path s0 -> s1 -> s1 -> s1 -> ...