I'm told the following CTL formulas aren't equivalent. However, I can't find a model in which one is true and the other isn't. (CTL is computation tree logic, a branching-time temporal logic.)
Formula 1: AF p OR AF q
Formula 2: AF( p OR q )
The first says: for all paths starting from the initial state there is a future state in which p holds, OR for all paths starting from the initial state there is a future state in which q holds.
The second: for all paths starting from the initial state there is a future state in which p OR q holds.
The model is a little bit tricky. Firstly, one should note that AF p OR AF q implies AF (p OR q), but the converse does not hold. So, we are looking for a model in which AF (p OR q) is true but AF p OR AF q is false.
I am assuming that you are familiar with the Kripke model notation described in the textbook Logic in Computer Science by M. Huth and M. Ryan (see http://www.cs.bham.ac.uk/research/projects/lics/).
Let M = (S, R, L) be a model with S = {s0, s1, s2} as the set of possible states, R = {(s0,s1), (s0,s2), (s1,s1), (s1,s2), (s2,s2)} as the transition relation, and L is a labeling function defined as follows: L(s0) = {} (empty set), L(s1) = {p}, and L(s2) = {q}.
Suppose the starting state is s0. It is clear that AF (p OR q) holds at s0. However, AF p OR AF q is not satisfied at s0. To prove this, we have to show that s0 does not satisfy AF p *and* s0 does not satisfy AF q.
AF p is not satisfied at s0 since we can choose the path s0 -> s2 -> s2 -> s2 -> ...
Similarly, AF q is not satisfied at s0 since we can choose the path s0 -> s1 -> s1 -> s1 -> ...
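The claims above can be double-checked mechanically via the least-fixpoint characterisation of AF. Here is a small Haskell sketch (the integer state encoding and helper names are my own, not part of the original answer) that computes the set of states satisfying AF phi on the model M:

```haskell
import Data.List (nub, sort)

-- States 0, 1, 2 stand for s0, s1, s2 of the model above.
states :: [Int]
states = [0, 1, 2]

-- The transition relation R.
succs :: Int -> [Int]
succs 0 = [1, 2]
succs 1 = [1, 2]
succs 2 = [2]
succs _ = []

-- AF phi as a least fixpoint: a state satisfies AF phi iff phi holds
-- there, or every successor already satisfies AF phi.
af :: (Int -> Bool) -> [Int]
af phi = go (filter phi states)
  where
    go xs =
      let xs' = nub (xs ++ [ s | s <- states, all (`elem` xs) (succs s) ])
      in if sort xs' == sort xs then sort xs else go xs'

-- Labelling: p holds only in s1, q holds only in s2.
p, q :: Int -> Bool
p s = s == 1
q s = s == 2
```

Running it, af (\s -> p s || q s) contains state 0, while neither af p nor af q does, matching the path argument above.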
Say you have a record:
Record Example := {
  fieldA : nat;
  fieldB : nat
}.
Can we prove:
Lemma record_difference : forall (e1 e2 : Example),
e1 <> e2 ->
(fieldA e1 <> fieldA e2)
\/
(fieldB e1 <> fieldB e2).
If so, how?
On the one hand, it looks true, since Records are absolutely defined by their fields. On the other, without knowing what made e1 different from e2 in the first place, how are we supposed to decide which side of the disjunction to prove?
As a comparison, note that if there is only one field in the record, we are able to prove the respective lemma:
Record SmallExample := {
  field : nat
}.
Lemma record_dif_small : forall (e1 e2 : SmallExample),
e1 <> e2 -> field e1 <> field e2.
Proof.
unfold not; intros; apply H.
destruct e1; destruct e2; simpl in H0.
f_equal; auto.
Qed.
"On the other, without knowing what made e1 different from e2 in the first place, how are we supposed to decide which side of the disjunction to prove?"
That is precisely the point: we need to figure out what makes both records different. We can do this by testing whether fieldA e1 = fieldA e2.
Require Import Coq.Arith.PeanoNat.
Record Example := {
  fieldA : nat;
  fieldB : nat
}.
Lemma record_difference : forall (e1 e2 : Example),
e1 <> e2 ->
(fieldA e1 <> fieldA e2)
\/
(fieldB e1 <> fieldB e2).
Proof.
intros [n1 m1] [n2 m2] He1e2; simpl.
destruct (Nat.eq_dec n1 n2) as [en|nen]; try now left.
right. intros em. congruence.
Qed.
Here, Nat.eq_dec is a function from the standard library that allows us to check whether two natural numbers are equal:
Nat.eq_dec : forall n m, {n = m} + {n <> m}.
The {P} + {~ P} notation denotes a special kind of boolean that gives you a proof of P or ~ P when destructed, depending on which side it lies on.
It is worth stepping through this proof to see what is going on. On the third line of the proof, for instance, executing intros em leads to the following goal.
n1, m1, n2, m2 : nat
He1e2 : {| fieldA := n1; fieldB := m1 |} <> {| fieldA := n2; fieldB := m2 |}
en : n1 = n2
em : m1 = m2
============================
False
If en and em hold, then the two records must be equal, contradicting He1e2. The congruence tactic simply instructs Coq to try to figure this out by itself.
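The computational content of the proof can also be seen outside Coq: with decidable equality, "which side of the disjunction" is literally computed by an equality test, just as the proof computes it with Nat.eq_dec. An illustrative Haskell analogue (the record mirrors the Coq one, but the function is my own):

```haskell
-- A Haskell counterpart of the Coq record, with decidable (==) for free.
data Example = Example { fieldA :: Int, fieldB :: Int }
  deriving (Eq, Show)

-- Compute which field witnesses the difference between two records,
-- mirroring the case split on Nat.eq_dec in the proof above.
whichDiffers :: Example -> Example -> Maybe String
whichDiffers e1 e2
  | fieldA e1 /= fieldA e2 = Just "fieldA"
  | fieldB e1 /= fieldB e2 = Just "fieldB"
  | otherwise              = Nothing  -- the records are in fact equal
```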
Edit
It is interesting to see how far one can get without decidable equality. The following similar statement can be proved trivially:
forall (A B : Type) (p1 p2 : A * B),
p1 = p2 <-> fst p1 = fst p2 /\ snd p1 = snd p2.
By contraposition, we get
forall (A B : Type) (p1 p2 : A * B),
p1 <> p2 <-> ~ (fst p1 = fst p2 /\ snd p1 = snd p2).
It is here that we get stuck without a decidability assumption. De Morgan's laws would allow us to convert the right-hand side to a statement of the form ~ P \/ ~ Q; however, their proof appeals to decidability, which is not generally available in Coq's constructive logic.
I have got this grammar:
G = (N, Epsilon, P, S)
with
N = {S, A, B}
Epsilon = {a},
P: S -> e
S -> ABA
AB -> aa
aA -> aaaA
A -> a
Why is this a grammar of only type 0?
I think it is because of aA -> aaaA, but I don't see how it is in conflict with the rules.
The rules have to be built like this:
x1 A x2 -> x1 B x2 while:
A is element of N;
x1,x2 are elements of V*;
and B is element of VV*;
With V = N ∪ Epsilon, I don't see the problem here.
a is from V, and A is from N, while right of A there could be the empty word, which would also be part of V*, so the left side would be okay.
On the right side, there is x1 again, being a, then we could say aaA is part of VV*, with aa being V and A being V*, while the right part is x2, so empty again.
"The rules have to be built like this:
x1 A x2 -> x1 B x2 while:...."
Yes, that's correct. But there is an equivalent definition of the rules of type-1 grammars:
p -> q, where p and q are elements of V^+, length(p) <= length(q), and (naturally) p contains an element of N.
Every rule of your grammar satisfies this form, so your grammar is type-1. (S -> e is the standard permitted exception: it is allowed because S does not occur on any right-hand side.)
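The check is concrete enough to run. Here is a small Haskell sketch (the encoding is mine: rules as string pairs, uppercase letters as nonterminals, S -> e written with an empty right-hand side) that tests the monotone condition on this grammar:

```haskell
-- The rules of the grammar as (lhs, rhs) pairs.
rules :: [(String, String)]
rules = [("S", ""), ("S", "ABA"), ("AB", "aa"), ("aA", "aaaA"), ("A", "a")]

-- The monotone (type-1) condition: every rule has a nonterminal on the
-- left and does not shrink, with S -> e allowed only when S never
-- occurs on a right-hand side.
isMonotone :: [(String, String)] -> Bool
isMonotone rs = all ok rs
  where
    sOnRhs = any (elem 'S' . snd) rs
    ok ("S", "") = not sOnRhs
    ok (p, q)    = any (`elem` ['A' .. 'Z']) p && length p <= length q
```

isMonotone rules evaluates to True, while a shrinking rule such as AB -> a would make it fail.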
{ a^i b^j c^k d^m | i+j = k+m and i < m }
The letters must appear in that order: first the a's, then the b's, then the c's, then the d's (e.g. abbccd is in the right form, cbbcda is not).
I know that you must "count" the number of a's and b's you are adding to make sure there are an equivalent number of c's and d's. I just can't seem to figure out how to make sure there are more c's than a's in the language. I appreciate any help anyone can give. I've been working on this for many hours now.
Edit:
The grammar should be Context Free
I have only got these two currently because all others turned out to be very wrong:
S -> C A D
| B
B -> C B D
|
C -> a
| b
D -> c
| d
and
S -> a S d
| A
A -> b A c
|
(which is close but doesn't satisfy the i < k part)
EDIT: This is for when i < k, not i < m. OP changed the problem, but I figure this answer may still be useful.
This is not a context-free language, which can be proven with the pumping lemma. The lemma states that if a language is context-free, there exists an integer p > 0 such that every string in the language of length >= p can be split as uvwxy, where len(vx) >= 1, len(vwx) <= p, and u v^n w x^n y is in the language for all n >= 0.
Suppose that a value of p exists. We can create a string such that:
k = i + 1
j = m + 1
j > p
k > p
v and x cannot each contain more than one type of character, nor can both lie in the left half or both in the right half, because pumping them up would immediately produce strings outside the language. They cannot consist of the same character as each other, because pumping would then break i + j = k + m. v cannot be a's if x is d's, because then w would contain all the b's and c's, making len(vwx) > p. By the same reasoning, v cannot be a's if x is c's, and v cannot be b's if x is d's. The only remaining option is v among the b's and x among the c's, but setting n to 0 then either breaks i + j = k + m (if v and x shrink by different amounts) or removes as many c's as b's, forcing k <= i and contradicting i < k.
Therefore, it is not a context free grammar.
There has to be at least one d because i < m, so there has to be a b somewhere to offset it in i + j = k + m. T and V both guarantee this; S, the start symbol, only wraps the string in matching a's and d's.
T ::= bd | bTd
U ::= bc | bUc
V ::= bUd | bVd
S ::= T | V | aSd
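One way to gain confidence in this grammar is to enumerate derivations to a bounded depth and check each string against the side conditions. A Haskell sketch (the encoding is mine), for the i < m version of the language:

```haskell
-- Membership test for { a^i b^j c^k d^m | i+j = k+m, i < m }.
inLang :: String -> Bool
inLang w = null r4 && i + j == k + m && i < m
  where
    (i, r1) = run 'a' w
    (j, r2) = run 'b' r1
    (k, r3) = run 'c' r2
    (m, r4) = run 'd' r3
    run c s = (length (takeWhile (== c) s), dropWhile (== c) s)

-- Strings derivable from T, U, V, S above, up to recursion depth d.
genT, genU, genV, genS :: Int -> [String]
genT 0 = []
genT d = "bd" : [ "b" ++ t ++ "d" | t <- genT (d - 1) ]
genU 0 = []
genU d = "bc" : [ "b" ++ u ++ "c" | u <- genU (d - 1) ]
genV 0 = []
genV d = [ "b" ++ u ++ "d" | u <- genU (d - 1) ]
      ++ [ "b" ++ v ++ "d" | v <- genV (d - 1) ]
genS 0 = []
genS d = genT (d - 1) ++ genV (d - 1)
      ++ [ "a" ++ s ++ "d" | s <- genS (d - 1) ]
```

Every string in genS 6 passes inLang, while a string like "ad" (where i = m) is correctly rejected.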
I'm curious how to optimize this code:
fun n = (sum l, f $ f0 l, g $ g0 l)
where l = map h [1..n]
Assuming that f, f0, g, g0, and h are all costly, but the creation and storage of l is extremely expensive.
As written, l is stored until the returned tuple is fully evaluated or garbage collected. Instead, sum l, f0 l, and g0 l should all be executed whenever any one of them is forced, but f and g should be delayed.
It appears this behavior could be fixed by writing:
fun n = a `seq` b `seq` c `seq` (a, f b, g c)
where
l = map h [1..n]
a = sum l
b = inline f0 $ l
c = inline g0 $ l
Or the very similar:
fun n = (a,b,c) `deepseq` (a, f b, g c)
where ...
We could perhaps specify a bunch of internal types to achieve the same effects as well, which looks painful. Are there any other options?
Also, I'm obviously hoping with my inlines that the compiler fuses sum, f0, and g0 into a single loop that constructs and consumes l term by term. I could make this explicit through manual inlining, but that'd suck. Are there ways to explicitly prevent the list l from ever being created and/or compel inlining? Pragmas that produce warnings or errors if inlining or fusion fail during compilation perhaps?
As an aside, I'm curious about why seq, inline, lazy, etc. are all defined to be let x = x in x in the Prelude. Is this simply to give them a definition for the compiler to override?
If you want to be sure, the only way is to do it yourself. For any given compiler version, you can try out several source-formulations and check the generated core/assembly/llvm byte-code/whatever whether it does what you want. But that could break with each new compiler version.
If you write
fun n = a `seq` b `seq` c `seq` (a, f b, g c)
where
l = map h [1..n]
a = sum l
b = inline f0 $ l
c = inline g0 $ l
or the deepseq version thereof, the compiler might be able to merge the computations of a, b and c to be performed in parallel (not in the concurrency sense) during a single traversal of l, but for the time being, I'm rather convinced that GHC doesn't, and I'd be surprised if JHC or UHC did. And for that the structure of computing b and c needs to be simple enough.
The only way to obtain the desired result portably across compilers and compiler versions is to do it yourself. For the next few years, at least.
Depending on f0 and g0, it might be as simple as doing a strict left fold with appropriate accumulator type and combining function, like the famous average
import Data.List (foldl')

-- strict pair: element count and running sum
data P = P {-# UNPACK #-} !Int {-# UNPACK #-} !Double

average :: [Double] -> Double
average = ratio . foldl' count (P 0 0)
  where
    ratio (P n s) = s / fromIntegral n
    count (P n s) x = P (n + 1) (s + x)
but if the structure of f0 and/or g0 doesn't fit, say one's a left fold and the other a right fold, it may be impossible to do the computation in one traversal. In such cases, the choice is between recreating l and storing l. Storing l is easy to achieve with explicit sharing (where l = map h [1..n]), but recreating it may be difficult to achieve if the compiler does some common subexpression elimination (unfortunately, GHC does have a tendency to share lists of that form, even though it does little CSE). For GHC, the flags -fno-cse and -fno-full-laziness can help avoid unwanted sharing.
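When both f0 and g0 do happen to be left folds, the "do it yourself" route is a direct generalisation of the average example: one strict accumulator carrying all three running results. A sketch, using product and maximum as stand-ins for f0 and g0 and doubling as a stand-in for h (none of these concrete functions come from the question):

```haskell
import Data.List (foldl')

-- One strict accumulator for all three results, so the list produced by
-- map h [1..n] is consumed element by element and never retained.
data Acc = Acc !Int !Int !Int

fun :: Int -> (Int, Int, Int)
fun n = done (foldl' step (Acc 0 1 minBound) (map h [1 .. n]))
  where
    h = (* 2)                          -- stand-in for the costly h
    step (Acc s p m) x = Acc (s + x) (p * x) (max m x)
    done (Acc s p m) = (s, p, m)
```

With list fusion, GHC can often turn this single foldl' over map h [1..n] into a loop that never materialises the list, which is the effect the inline annotations were aiming for.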
Language
X = { 0^m | m = 2n + 1, n >= 0 }
Can someone help me find a context-sensitive grammar for X? I've been trying for ages but I'm still not close.
What I have right now:
S -> B0C|00
B0 -> DD0|00
BD -> DD
0C -> 0EE|00
EC -> EE
D -> B
E -> C
But this doesn't work. I can't figure out how to double the number of zeros.
Why not just use a simple grammar, such as the following? It is even context-free (though I could also give one that isn't), and since it has no ε-productions it is context-sensitive as well:
S -> 0 | 00S
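The rule pair is easy to sanity-check by enumerating derivations; a short Haskell sketch (encoding mine), indexed by expansion depth:

```haskell
-- Strings derivable from S -> 0 | 00S within d expansion steps:
-- exactly the odd-length strings of zeros.
gen :: Int -> [String]
gen 0 = []
gen d = "0" : [ "00" ++ s | s <- gen (d - 1) ]
```

map length (gen 4) gives [1,3,5,7]: each extra application of S -> 00S adds two zeros, so every derivable length has the form 2n + 1.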