Expressing the language of a grammar

How do I express the language accepted by a grammar in set notation? For example, the following grammar:
S → aSa|A
A → bAb|B
B → ccB|λ
Thanks.

How can I derive strings from this grammar?

G = (V={S,X,Y}, T={0,1,2},P,S)
S -> 0X1
X ->S | 00S2 | Y | ε
Y ->X | 1
The problem is that I don't know how to derive number strings.
How can I derive this:
00111 ∈ L(G)
And here I have to give a derivation tree:
0000121 ∈ L(G)
To do a derivation you start with the start symbol (in this case S, which is the fourth item in the grammar tuple). You then apply the production rules (P) in whatever order seems appropriate to you.
A production like:
X → S | 00S2 | Y | ε
means that you can replace an X with
S, or
00S2, or
Y, or
nothing.
In other words, you read production rules as follows:
→ means "can be replaced with".
| means "or"
ε means "nothing" (Replacing a symbol with nothing means deleting it from the current string.)
Everything else is just a possible symbol in the string. You keep doing replacements, one at a time, until you reach the string you are trying to derive.
Here's a quick example:
S
→ 0X1 (using S → 0X1)
→ 000S21 (using X → 00S2)
→ 0000X121 (using S → 0X1)
→ 0000121 (using X → ε)
That's it. Nothing complicated at all. Just a bunch of search and replace operations. (You don't have to replace the first occurrence if there is more than one possibility; you can do the replacements in any order you like, but it's convenient to be systematic.)
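If it helps to make those search-and-replace steps concrete, here is a minimal sketch (my own, not part of the answer) that replays the derivation above with Python's str.replace, printing each sentential form along the way:

# Replay the derivation of 0000121 as literal one-occurrence replacements.
steps = [
    ("S", "0X1"),    # S → 0X1
    ("X", "00S2"),   # X → 00S2
    ("S", "0X1"),    # S → 0X1
    ("X", ""),       # X → ε (replace with nothing)
]

form = "S"
print(form)
for lhs, rhs in steps:
    form = form.replace(lhs, rhs, 1)   # replace the leftmost occurrence only
    print("→", form)
# The last line printed is 0000121, confirming 0000121 ∈ L(G).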

Formal grammar and arity

I have the following grammar:
S --> LR .
L --> aL | a .
R --> bR | b .
This grammar generates the language a^n b^k, where n,k > 0.
I want a grammar that generates the language a^n b^n where n > 0, so
my goal is to obtain a grammar that ensures the number of a's is always equal to the number of b's, while still keeping the non-terminals L and R.
Is there a way to do this?
In a context-free grammar, the derivations of L and R in S → L R are independent of each other. That is what "context free" means: the derivation of a non-terminal is not affected by the context in which the non-terminal occurs.
So if you want a grammar in which L and R must derive strings of equal length, it will have to be a context-sensitive grammar. No context-free grammar can do that.
Of course, there is a simple CFG for the language:
S → a b
S → a S b
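As a small sanity check (my own sketch, not the answerer's), you can unfold that grammar in a few lines of Python and confirm that every generated string has matching numbers of a's and b's:

# Unfold S → a b | a S b, applying the recursive rule n-1 times.
def gen(n):
    return "ab" if n == 1 else "a" + gen(n - 1) + "b"

for n in range(1, 6):
    s = gen(n)
    assert s == "a" * n + "b" * n      # exactly a^n b^n, so #a = #b = n
    print(s)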

How can I show that this grammar is ambiguous?

I want to prove that this grammar is ambiguous, but I'm not sure how I am supposed to do that. Do I have to use parse trees?
S -> if E then S | if E then S else S | begin S L | print E
L -> end | ; S L
E -> i
You can show it is ambiguous if you can find a string that parses more than one way:
if i then ( if i then print i else print i ; )
if i then ( if i then print i ) else print i ;
This happens to be the classic "dangling else" ambiguity. Googling your tag(s), title & grammar gives other hits.
However, if you don't happen to guess an ambiguous string, googling your tag(s) & title turns up the general answer: there is no easy method for proving a context-free grammar ambiguous -- in fact, the question is undecidable, by reduction from the Post correspondence problem.
What you can do is put the grammar into a parser generator that supports arbitrary context-free grammars (a general context-free parser generator), generate the parser, then parse a string you think is ambiguous and look at the parser's output.
A general context-free parser generator produces parsers that report all derivations in polynomial time. Examples of such parser generators include SDF2, Rascal, DMS, Elkhound, ART. There is also a backtracking version of yacc (btyacc), but I don't think it does this in polynomial time. Usually the output is encoded as a parse forest: a graph in which alternative trees for sub-sentences are represented as nested sets of alternatives.
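For a grammar this small you can also skip the tooling and count parses by brute force. The following is a rough sketch of my own (not from the answer): it counts distinct derivation trees for a token string under the question's grammar, and reports 2 for the dangling-else sentence, which demonstrates the ambiguity.

from functools import lru_cache

# The question's grammar, with tokens written out explicitly.
GRAMMAR = {
    "S": [["if", "E", "then", "S"],
          ["if", "E", "then", "S", "else", "S"],
          ["begin", "S", "L"],
          ["print", "E"]],
    "L": [["end"], [";", "S", "L"]],
    "E": [["i"]],
}

def count_parses(tokens, start="S"):
    tokens = tuple(tokens)

    @lru_cache(maxsize=None)
    def count(sym, i, j):
        # Number of distinct derivation trees of tokens[i:j] from sym.
        if sym not in GRAMMAR:                      # terminal symbol
            return 1 if j == i + 1 and tokens[i] == sym else 0
        return sum(count_seq(tuple(rhs), i, j) for rhs in GRAMMAR[sym])

    @lru_cache(maxsize=None)
    def count_seq(rhs, i, j):
        # Number of ways the symbol sequence rhs derives tokens[i:j].
        if not rhs:
            return 1 if i == j else 0
        total = 0
        for k in range(i + 1, j + 1):               # no ε-productions, so rhs[0] eats ≥ 1 token
            left = count(rhs[0], i, k)
            if left:
                total += left * count_seq(rhs[1:], k, j)
        return total

    return count(start, 0, len(tokens))

print(count_parses("if i then if i then print i else print i".split()))   # prints 2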

Left-Linear and Right-Linear Grammar for a simple Regular Expression

I am having trouble coming up with a left-linear and right-linear grammar for the following regular expression.
0(0+1)*+10^+
I am also quite confused about what the plus-closure does.
This is what I got for the left linear grammar, but I am not sure if this is correct:
P: S--> 0A | 1A
A--> A0|A1|0S|0| epsilon
Thank you!
One good general way to find left- and right-linear grammars is to find an NFA that has the same language as your regex, then convert that NFA into a left- or right-linear grammar using the following mechanical transform:
For each state q, introduce a nonterminal Tq.
For each transition from q to r on character a (or with a = ε), add the production Tq → aTr (for right-linear grammars) and Tr → Tqa (for left-linear grammars).
Then, for right-linear grammars:
For each accepting state q, add the production Tq → ε.
Make the start symbol Tq0, where q0 is the start state.
Then, for left-linear grammars:
Add a start symbol S with the production S → Tq for each accepting state q.
Add the production Tq0 → ε for the start state q0.
Try applying this idea here and you'll end up producing left- and right-linear grammars for your language. They might not be the most efficient grammars, but they'll work.
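As a rough illustration (my own sketch, not part of the answer), here is the right-linear half of that transform applied mechanically in Python. The NFA below is one hand-built reading of 0(0+1)*+10^+ (taking ^+ as plus-closure), so treat the state names and transitions as assumptions:

from collections import defaultdict

START = "q0"
ACCEPTING = {"q1", "q3"}                 # q1 accepts 0(0+1)*, q3 accepts 10+
TRANSITIONS = [                          # (source state, input symbol, target state)
    ("q0", "0", "q1"),
    ("q1", "0", "q1"),
    ("q1", "1", "q1"),
    ("q0", "1", "q2"),
    ("q2", "0", "q3"),
    ("q3", "0", "q3"),
]

def right_linear(transitions, accepting):
    rules = defaultdict(list)
    for q, a, r in transitions:
        rules["T" + q].append(a + " T" + r)   # Tq → a Tr for each transition
    for q in accepting:
        rules["T" + q].append("ε")            # Tq → ε for each accepting state
    return rules

for lhs, rhss in right_linear(TRANSITIONS, ACCEPTING).items():
    print(lhs, "→", " | ".join(rhss))
print("start symbol:", "T" + START)
# The left-linear grammar follows the same recipe with the mirrored rules above.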

Kleene Closure in Chomsky Normal Form

Let n be any terminal.
Consider the following, presumably correct, representation of the Kleene star over n:
N → n N | ε
(where ε denotes the empty string.)
Wikipedia says:
Every grammar in Chomsky normal form is context-free, and conversely, every context-free grammar can be transformed into an equivalent one which is in Chomsky normal form.
I cannot see how the above grammar could be transformed to CNF.
Is the grammar not context-free?
Is there in fact a way to represent it in CNF?
Fortunately, this can be written in CNF. Here is one such grammar:
S → ε | n | NA
N → n
A → n | NA
Therefore, the language is context-free.
Hope this helps!
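As a quick sanity check (my own sketch, not part of the answer), you can exhaustively expand sentential forms of that CNF grammar and confirm that, up to a length bound, it generates exactly ε, n, nn, and so on:

# Expand the CNF grammar S → ε | n | NA, N → n, A → n | NA up to MAX_LEN.
RULES = {"S": ["", "n", "NA"], "N": ["n"], "A": ["n", "NA"]}
MAX_LEN = 5

def generate(max_len, start="S"):
    results, frontier, seen = set(), [start], set()
    while frontier:
        form = frontier.pop()
        i = next((k for k, c in enumerate(form) if c in RULES), None)
        if i is None:                      # no nonterminals left: a generated string
            results.add(form)
            continue
        for rhs in RULES[form[i]]:         # expand the leftmost nonterminal
            new = form[:i] + rhs + form[i + 1:]
            # N and A each derive at least one terminal, so a form's length
            # is a lower bound on the length of any string it still yields.
            if len(new) <= max_len and new not in seen:
                seen.add(new)
                frontier.append(new)
    return results

print(sorted(generate(MAX_LEN), key=len))  # ['', 'n', 'nn', 'nnn', 'nnnn', 'nnnnn']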