Context-sensitive grammar for this language

Language
X = { 0^m such that m = 2n+1 where n >= 0}
Can someone help me find a context-sensitive grammar for X? I've been trying for ages but I'm still not close.
What I have right now:
S -> B0C|00
B0 -> DD0|00
BD -> DD
0C -> 0EE|00
EC -> EE
D -> B
E -> C
But this doesn't work. I can't figure out how to double the number of zeros.

Why not just use a simple grammar, such as the one below? It is context-free (though I could also write one that isn't), and since it has no lambda-productions it also qualifies as context-sensitive:
S -> 0 | 00S
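
As a quick sanity check, here is a short Python sketch (my own illustration, not part of the original answer) that expands S -> 0 | 00S a few levels deep and confirms every derived string has an odd number of zeros:

def derive(depth):
    """All terminal strings derivable from S in at most `depth` expansions."""
    results = set()
    def expand(s, d):
        if "S" not in s:
            results.add(s)
            return
        if d == 0:
            return
        expand(s.replace("S", "0", 1), d - 1)    # S -> 0
        expand(s.replace("S", "00S", 1), d - 1)  # S -> 00S
    expand("S", depth)
    return results

for w in sorted(derive(6), key=len):
    assert len(w) % 2 == 1  # every derived string has odd length
    print(w)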

Related

Creating Context-Free-Grammar with restrictions

I'm trying to understand CFG by using an example with some obstacles in it.
For example, I want to match the declaration of a double variable:
double d; In this case, "d" could be any other valid identifier.
There are some cases that should not be matched, e.g. "double double;", but I don't understand how to avoid a match of the second "double"
My approach:
G = (Σ, V, S, P)
Σ = {a-z}
V = {S,T,U,W}
P = { S -> doubleTUW
T -> _(space)
U -> (a-z)U | (a-z)
W -> ;
}
Now there must be a way to limit the possible outcomes of this grammar, i.e. its language L(G). Unfortunately, I couldn't find a syntax that meets my requirement of denying a second "double".
Here's a somewhat tedious regular expression to match any identifier other than double:
([a-ce-z]|d[a-np-z]|do[a-tv-z]|dou[ac-z]|doub[a-km-z]|doubl[a-df-z]|double[a-z])[a-z]*
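
A quick way to convince yourself the pattern behaves as intended is to try it in Python (a sketch of my own; note that, as written, the pattern does not match strict prefixes of double such as d or dou):

import re

# The regular expression from above, split across two literals for readability.
IDENT = re.compile(
    r"([a-ce-z]|d[a-np-z]|do[a-tv-z]|dou[ac-z]|doub[a-km-z]"
    r"|doubl[a-df-z]|double[a-z])[a-z]*"
)

for word in ["dot", "data", "doubles", "doubled", "double"]:
    print(word, bool(IDENT.fullmatch(word)))
# prints True for everything except "double" itself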
Converting it to a CFG can be done mechanically but it is even more tedious:
ALPHA → a|b|c|d|e|f|g|h|i|j|k|l|m|n|o|p|q|r|s|t|u|v|w|x|y|z
NOT_B → a|c|d|e|f|g|h|i|j|k|l|m|n|o|p|q|r|s|t|u|v|w|x|y|z
NOT_D → a|b|c|e|f|g|h|i|j|k|l|m|n|o|p|q|r|s|t|u|v|w|x|y|z
NOT_E → a|b|c|d|f|g|h|i|j|k|l|m|n|o|p|q|r|s|t|u|v|w|x|y|z
NOT_L → a|b|c|d|e|f|g|h|i|j|k|m|n|o|p|q|r|s|t|u|v|w|x|y|z
NOT_O → a|b|c|d|e|f|g|h|i|j|k|l|m|n|p|q|r|s|t|u|v|w|x|y|z
NOT_U → a|b|c|d|e|f|g|h|i|j|k|l|m|n|o|p|q|r|s|t|v|w|x|y|z
WORD → NOT_D
| d NOT_O
| do NOT_U
| dou NOT_B
| doub NOT_L
| doubl NOT_E
| double ALPHA
| WORD ALPHA
This is why many of us usually use scanner generators like (f)lex which handle such exclusions automatically.

Constructing a linear grammar for the language

I have difficulty constructing grammars for languages, especially linear grammars.
Can anyone please give me some basic tips/methodology I can use to construct a grammar for any language? Thanks in advance.
I also have a doubt whether this answer to the question "Construct a linear grammar for the language" is right:
L = {a^n b c^n | n is a natural number}
Solution:
Right-Linear Grammar :
S--> aS | bA
A--> cA | ^
Left-Linear Grammar:
S--> Sc | Ab
A--> Aa | ^
As pointed out in the comments, these grammars are wrong since they generate strings not in the language. Here's a derivation of abcc in both grammars:
S -> aS -> abA -> abcA -> abccA -> abcc
S -> Sc -> Scc -> Abcc -> Aabcc -> abcc
Also as pointed out in the comments, there is a simple linear grammar for this language, where a linear grammar is defined as having at most one nonterminal symbol in the RHS of any production:
S -> aSc | b
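
As a quick check (my own sketch, in Python), every string derivable from S -> aSc | b does have the form a^n b c^n:

def derive(depth):
    """Terminal strings derivable from S -> aSc | b in at most `depth` steps."""
    if depth == 0:
        return set()
    smaller = derive(depth - 1)
    return {"b"} | {"a" + w + "c" for w in smaller}

for w in sorted(derive(4), key=len):
    n = w.index("b")
    assert w == "a" * n + "b" + "c" * n  # a^n b c^n
    print(w)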
There are some general rules for constructing grammars for languages. These are either obvious simple rules or rules derived from closure properties and the way grammars work. For instance:
if L = {a} for an alphabet symbol a, then S -> a is a grammar for L.
if L = {e} for the empty string e, then S -> e is a grammar for L.
if L = R U T for languages R and T, then S -> S' | S'' along with the grammars for R and T are a grammar for L if S' is the start symbol of the grammar for R and S'' is the start symbol of the grammar for T.
if L = RT for languages R and T, then S -> S'S'' is a grammar for L if S' is the start symbol of the grammar for R and S'' is the start symbol of the grammar for T.
if L = R* for language R, then S -> S'S | e is a grammar for L if S' is the start symbol of the grammar for R.
Rules 4 and 5, as written, do not preserve linearity. Linearity can be preserved for left-linear and right-linear grammars (since those grammars describe regular languages, and regular languages are closed under these kinds of operations); but linearity cannot be preserved in general. To prove this, an example suffices:
R -> aRb | ab
T -> cTd | cd
L = RT = {a^n b^n c^m d^m | n, m > 0}
L' = R* = (a^n b^n)*, with n > 0 in each repeated factor
Suppose there were a linear grammar for L. We must have a production for the start symbol S that produces something. To produce something, we require a string of terminal and nonterminal symbols. To be linear, we must have at most one nonterminal symbol. That is, our production must be of the form
S -> xYz
where x is a string of terminals, Y is a single nonterminal, and z is a string of terminals. If x is non-empty, reflection shows the only useful choice is a; anything else fails to derive known strings in the language. Similarly, if z is non-empty, the only useful choice is d. This gives four cases:
x empty, z empty. This is useless, since we now have the same problem to solve for nonterminal Y as we had for S.
x = a, z empty. Y must now generate exactly a^n' b^n' b c^m d^m where n' = n - 1. But then the exact same argument applies to the grammar whose start symbol is Y.
x empty, z = d. Y must now generate exactly a^n b^n c c^m' d^m' where m' = m - 1. But then the exact same argument applies to the grammar whose start symbol is Y.
x = a, z = d. Y must now generate exactly a^n' b^n' bc c^m' d^m' where n' and m' are as in 2 and 3. But then the exact same argument applies to the grammar whose start symbol is Y.
None of the possible choices of a production for S makes progress: each case leaves us with exactly the same problem for Y that we started with for S, an infinite regress. Therefore no terminal string is ever derived, a contradiction, so no grammar for L can be linear.
Suppose there were a linear grammar for L'. Then that grammar would have to generate all the strings in (a^n b^n)R(a^m b^m), plus those in e + R. But it can't generate the former, by the argument used above: any production useful for that purpose would get us no closer to a string in the language.

Simplification of lambda-productions,unary rules and non-useful symbols of a Grammar

I know this is not a general question, but I would like to learn how to do it with an example I've already been working on a bit. That said:
I have the following grammar. I tried to simplify it, but I'm unsure of the result's correctness. Could someone help me confirm whether it is correct or not?
S -> BC | lambda
A -> aA | lambda
B -> bB
C -> c
If I have to simplify the grammar, I first apply lambda-elimination, after which I have something like:
S -> BC | B | C
A -> aA | a
B -> bB
C -> c
And finally I have to eliminate non-useful symbols:
Firstly I eliminate the ones that are not productive, and then the ones that are unreachable, so:
S -> BC | bB | C
A -> aA | a
B -> bB ---> non-productive
C -> c
S -> C | b | C
A -> aA | a --> unreachable
C -> c
Finally I have something like this: I eliminate C because it is unnecessary, and I also eliminate BC because B was eliminated, so it should be something like:
S -> b | c
But to be honest, I don't think what I've done is correct; I just don't know exactly where it goes wrong.
It looks like there might be some problems with your simplification, or I'm having trouble following it. I'll walk through what I would do, and you can compare it to your understanding.
S -> BC | lambda
A -> aA | lambda
B -> bB
C -> c
I presume the goal is to eliminate as many lambda-productions and non-terminal symbols as possible. The first thing to note is that the production S -> lambda cannot be eliminated without changing the language; but all other lambda-productions can be eliminated. We see one other lambda-production, and we are assured it can be eliminated. How do we eliminate A -> lambda?
We note that A is unreachable from S, the start symbol. So we can quite easily eliminate A -> lambda by eliminating A altogether. We arrive at this simpler, equivalent, grammar:
S -> BC | lambda
B -> bB
C -> c
Now, as our goal is to eliminate non-terminal symbols (we have already eliminated all extraneous lambda-productions), we can look at S, B and C. We know we need a start symbol, so we might as well keep S as it is. B can only generate bB, which contains a non-terminal; and bB can never lead to a string of only terminals. B is unproductive and we can eliminate it. When we eliminate an unproductive symbol, any concatenated term in which it appears must also be eliminated, since the concatenated expression can never arrive at a string of only terminals (any concatenated expression in which an unproductive symbol appears is also unproductive):
S -> lambda
C -> c
Applying our analysis to C, we easily see it is as unreachable as A was initially and so it can be eliminated in the same way:
S -> lambda
This grammar is in simplest terms and is minimal in terms of non-terminal symbols and productions for the language {lambda}.
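
For reference, here is a small Python sketch (my own, under the usual convention that uppercase letters are non-terminals and lowercase letters are terminals) of the two fixpoint computations used above: finding the productive symbols and the reachable ones.

prods = {
    "S": ["BC", ""],  # S -> BC | lambda ("" stands for lambda)
    "A": ["aA", ""],  # A -> aA | lambda
    "B": ["bB"],      # B -> bB
    "C": ["c"],       # C -> c
}

def productive(prods):
    """Non-terminals that can derive some string of only terminals."""
    prod = set()
    changed = True
    while changed:
        changed = False
        for nt, alts in prods.items():
            if nt not in prod and any(
                all(s.islower() or s in prod for s in alt) for alt in alts
            ):
                prod.add(nt)
                changed = True
    return prod

def reachable(prods, start="S"):
    """Non-terminals reachable from the start symbol."""
    seen, todo = {start}, [start]
    while todo:
        for alt in prods.get(todo.pop(), []):
            for s in alt:
                if s.isupper() and s not in seen:
                    seen.add(s)
                    todo.append(s)
    return seen

print(productive(prods))  # B is missing: it is unproductive
print(reachable(prods))   # A is missing: it is unreachable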

How can I construct a grammar that generates this language?

I'm studying for a finite automata & grammars test and I'm stuck with this question:
Construct a grammar that generates L:
L = {a^n b^m c^2n | n>=0, m>=0}
How can I construct a grammar that generates this language?
I think this should do the trick. I verified this on http://mdaines.github.io/grammophone/ .
S -> a B c c
| a S c c
| .
B -> b B
| .
I find it always helps with these kinds of questions to come up with some rules for how to build big strings out of little strings. First, identify the littlest strings in your language. In our case, we can start with the observation that if n = 0, b^m is in our language; that is, w in b* is in our language. We then note that if x is a string in our language we get another string by adding one a on the left and two cs on the right; that is, axcc is a string in our language also. So our rules are:
b* in L
if x in L then axcc in L
Writing this in terms of a CFG is now straightforward:
S -> B
S -> aScc
Here, S generates our language L and B generates the language b*. We complete the grammar by providing a grammar for b* with start symbol B:
(1) S -> B
(2) S -> aScc
(3) B -> e
(4) B -> bB
Any string a^n b^m c^2n can be generated using n applications of rule 2, 1 application of rule 1, m applications of rule 4 and 1 application of rule 3. That this grammar generates no strings not in the language is left as an exercise.
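
That derivation recipe is easy to mechanize; here is a short Python sketch of my own that applies the rules in exactly that order and checks the result:

def derive(n, m):
    s = "S"
    for _ in range(n):
        s = s.replace("S", "aScc")  # rule 2: S -> aScc
    s = s.replace("S", "B")         # rule 1: S -> B
    for _ in range(m):
        s = s.replace("B", "bB")    # rule 4: B -> bB
    return s.replace("B", "")       # rule 3: B -> e

for n in range(4):
    for m in range(4):
        w = derive(n, m)
        assert w == "a" * n + "b" * m + "c" * (2 * n)  # a^n b^m c^2n
print("all derivations check out")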

CTL Equivalence checking

I'm told the following CTL formulas aren't equivalent. However, I can't find a model in which one is true and the other isn't. CTL is computation tree logic, a branching-time temporal logic.
Formula 1: AF p OR AF q
Formula 2: AF( p OR q )
The first says: for all paths starting from the initial state there is a future in which p holds, OR for all paths starting from the initial state there is a future in which q holds.
The second: for all paths starting from the initial state there is a future in which p OR q holds.
The model is a little bit tricky. Firstly, one should note that AF p OR AF q implies AF (p OR q). So, we are looking for a model in which AF (p OR q) is true but AF p OR AF q is false.
I am assuming that you are familiar with Kripke model notation described in Logic in Computer Science textbook by M. Huth and M. Ryan (see http://www.cs.bham.ac.uk/research/projects/lics/).
Let M = (S, R, L) be a model with S = {s0, s1, s2} as the set of possible states, R = {(s0,s1), (s0,s2), (s1,s1), (s1,s2), (s2,s2)} as the transition relation, and L is a labeling function defined as follows: L(s0) = {} (empty set), L(s1) = {p}, and L(s2) = {q}.
Suppose the starting state is s0. It is clear that AF (p OR q) holds at s0. However, AF p OR AF q is not satisfied at s0. To prove this, we have to show that s0 does not satisfy AF p *and* s0 does not satisfy AF q.
AF p is not satisfied at s0, since we can choose the path s0 -> s2 -> s2 -> s2 -> ..., on which p never holds.
Similarly, AF q is not satisfied at s0, since we can choose the path s0 -> s1 -> s1 -> s1 -> ..., on which q never holds.
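
To double-check the claim, here is a small Python sketch of my own that computes the states satisfying AF f on this Kripke model by the standard least-fixpoint iteration (a state satisfies AF f if f holds there, or if all of its successors already satisfy AF f):

states = {"s0", "s1", "s2"}
trans = {"s0": {"s1", "s2"}, "s1": {"s1", "s2"}, "s2": {"s2"}}
label = {"s0": set(), "s1": {"p"}, "s2": {"q"}}

def sat_af(goal):
    """Return the set of states satisfying AF goal (goal is a set of states)."""
    z = set(goal)
    changed = True
    while changed:
        changed = False
        for s in states - z:
            if trans[s] <= z:  # every successor of s already satisfies AF goal
                z.add(s)
                changed = True
    return z

p = {s for s in states if "p" in label[s]}
q = {s for s in states if "q" in label[s]}

print("s0" in sat_af(p))      # False: s0 -> s2 -> s2 -> ... avoids p
print("s0" in sat_af(q))      # False: s0 -> s1 -> s1 -> ... avoids q
print("s0" in sat_af(p | q))  # True: every successor of s0 satisfies p OR q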