I am new to the field of automata. I have read many articles and watched many videos, but I am stuck on one of the first topics. It may be easy for others, but after spending a lot of time I am still unable to understand it.
The topic is: ambiguity in an alphabet.
The alphabet is {A, Aa, bab, d}
and the string is s = AababA.
The author says that this is an ambiguous alphabet, because when a computer reads the string, it reads from left to right: after reading the capital A, that A is also a prefix of Aa (the A followed by the small a), which creates ambiguity. A letter (symbol) should not be a prefix of another letter.
Moreover, the author says that we will tokenize it (AababA) in two ways:
(Aa) (bab) (A)
(A) (abab) (A)
and after that, the first one is OK, but the second is not OK due to the ambiguity in the alphabet defined above.
What is the procedure to tokenize the above string in two ways? Is there any specific rule?
How is the alphabet ambiguous because of the second grouping?
If it is invalid because of the prefix A, then how? What is the role of a prefix in the ambiguity of an alphabet?
If we don't think about prefixes and simply match both groupings against the alphabet above, we can easily judge that the second does not match the alphabet, so why do we need to discuss prefixes at all?
I hope this question will be considered important, so that an answer will help me get out of this confusion. I would be very thankful.
The author chose a confusing example. If you share the source where you got this example, I could give a better answer, but I would argue that in this case, there is no practical ambiguity. If you see Aa, you can know that the first lexeme must be "Aa", because nothing in the alphabet starts with "a".
For an easier example, consider the alphabet {A, a, Aa} and string "AAaAaaA"
You could tokenize this in the following ways:
(A) (A) (a) (A) (a) (a) (A)
(A) (Aa) (A) (a) (a) (A)
(A) (A) (a) (Aa) (a) (A)
(A) (Aa) (Aa) (a) (A)
This is most often resolved by choosing the longest lexeme that matches in each case, which would yield the last tokenization.
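If it helps to see this mechanically, here is a rough Python sketch (my own illustration, not from any textbook) that enumerates every tokenization of "AAaAaaA" over {A, a, Aa} by trying each symbol as a prefix, and then applies the longest-match rule:

    ALPHABET = ["A", "a", "Aa"]

    def all_tokenizations(s):
        """Every way to split s into symbols of ALPHABET."""
        if s == "":
            return [[]]
        results = []
        for sym in ALPHABET:
            if s.startswith(sym):
                for rest in all_tokenizations(s[len(sym):]):
                    results.append([sym] + rest)
        return results

    def greedy_tokenization(s):
        """Always take the longest symbol that matches at the current position."""
        tokens = []
        while s:
            matches = [sym for sym in ALPHABET if s.startswith(sym)]
            if not matches:
                raise ValueError("no symbol matches at: " + s)
            best = max(matches, key=len)
            tokens.append(best)
            s = s[len(best):]
        return tokens

    print(all_tokenizations("AAaAaaA"))    # the four tokenizations listed above
    print(greedy_tokenization("AAaAaaA"))  # ['A', 'Aa', 'Aa', 'a', 'A']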
Now let us return to your example, but let's make the string a little bit different: "AababAe".
You could tokenize the string in the following ways:
(Aa) (bab) (A) <error>
(A) <error>
In one branch, the error shows up immediately; in the other, you get most of the way through the string before hitting it. As you noted, the tokenizer should choose the first, but both branches end in an error. The point is that there is an explicit choice here to prefer the longest valid tokenization. Nothing in the alphabet forces you to make this choice; it is just as valid to choose the shortest matching option. That would be massively impractical, but it is a valid choice.
As context, my textbook uses this style for EBNF:
Sebesta, Robert W. Concepts of Programming Languages, 11th ed., Pearson, 2016, p. 150.
The problem:
Convert the following BNF rule with three RHSs to an EBNF rule with a single RHS.
Note: Conversion to EBNF should remove all explicit recursion and yield a single RHS EBNF rule.
A ⟶ B + A | B – A | B
My solution:
A ⟶ B [ (+ | –) A ]
My professor tells me:
"First, you should use { } instead of [ ],
Second, according to the BNF rule, <"term"> is B." (He is referring to the style guide posted above.)
Is he correct? I assume so but have read other EBNF styles and wonder if I am entitled to credit.
You were clearly asked to remove explicit recursion and your proposed solution doesn't do that; A is still defined in terms of itself. So independent of naming issues, you failed to do the requested conversion and your prof is correct to mark you down for it. The correct solution for the problem as presented, ignoring the names of non-terminals, is A ⟶ B { (+ | –) B }, using indefinite repetition ({…}) instead of optionality ([…]). With this solution, the right-hand side of the production for A only references B, so there is no recursion (at least, in this particular production).
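To see concretely why the repetition form removes the recursion, here is a minimal recursive-descent sketch in Python (my own illustration, not from Sebesta's text; treating B as a plain integer token is an assumption I'm making for the sketch): the braces become a loop, and parse_A never calls itself.

    def parse_B(tokens, pos):
        # Assumption for this sketch: B is a single integer token.
        return int(tokens[pos]), pos + 1

    def parse_A(tokens, pos=0):
        """Parse B { (+ | -) B } and return (value, next position)."""
        value, pos = parse_B(tokens, pos)
        while pos < len(tokens) and tokens[pos] in ("+", "-"):
            op = tokens[pos]
            rhs, pos = parse_B(tokens, pos + 1)
            value = value + rhs if op == "+" else value - rhs
        return value, pos

    print(parse_A(["1", "-", "2", "+", "3"]))  # (2, 5): evaluates (1 - 2) + 3

Note that the loop naturally gives the usual left-associative reading, whereas the right-recursive BNF you started from is right-associative, a point that comes up again below.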
Now, for naming: clearly, your textbook's EBNF style is to use angle brackets around the non-terminal names. That's a common style, and many would say that it is more readable than using single capital letters which mean nothing to a human reader. Now, I suppose your prof thinks you should have changed the name of B to <term> on the basis that that is the "textbook" name for the non-terminal representing the operand of an additive operator. The original BNF you were asked to convert does show the two additive operators. However, it makes them right-associative, which is definitely non-standard. So you might be able to construct an argument that there's no reason to assume that these operators are additive and that their operands should be called "terms" [Note 1]. But even on that basis, you should have used some name written in lower-case letters and surrounded in angle brackets. To me, that's minor compared with the first issue, but your prof may have their own criteria.
In summary, I'm afraid I have to say that I don't believe you are entitled to credit for that solution.
Notes
If you had actually come up with that explanation, your prof might have been justified in suggesting a change of major to Law.
Let S = {a, bb, bab, abaab} be an alphabet, and let the Kleene closure S* contain all possible combinations of its elements.
Does the string abaabbabbaab exist in S*?
What is the method of factorizing it to check whether it is in S* or not?
I have tried it in the following ways.
Possible factorization:
(abaab)(bab)(b)(a)(a)(b)
(abaab)(bab)(b)(aa)(b)
(abaab)(bab)(ba)(ab)
(abaab)(bab)(baa)(b)
(abaab)(bab)(b)(aab)
We can see that (abaab)(bab) matches, but the later part does not match any combination in S*. I have factorized the later part in many ways, but it still does not match.
I want to ask:
Is it correct?
Is this the correct way to factorize (tokenize) the string?
Are all the factorization pairs correct?
Is this a correct method to check whether a string belongs to a language or not?
Some of your factorizations contain $(b)$, which is not in $S$. So they are not correct.
I think your method is exhaustive trial and error. If you do that correctly, it is a correct way to find a factorization. For checking membership of a language, it works if the language is given in the form of the Kleene closure of a finite language.
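As a sanity check, the same exhaustive search is easy to mechanize. Here is a rough Python sketch (my own, not a standard routine) that tries every word of S as a prefix and recurses on the rest of the string:

    from functools import lru_cache

    S = ("a", "bb", "bab", "abaab")

    @lru_cache(maxsize=None)
    def in_star(s):
        """True if s can be written as a concatenation of words from S."""
        if s == "":
            return True
        return any(s.startswith(w) and in_star(s[len(w):]) for w in S)

    print(in_star("abaabbabbaab"))  # False: no factorization exists
    print(in_star("abaabbab"))      # True: (abaab)(bab)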
Noam Chomsky - formal languages - type 1 - context sensitive grammar
Does AB->BA violate the rule? I assume it does.
A -> aAB does not violate the condition?
Does aAB -> ABc violate the condition?
Using the wikipedia link provided, you can answer each question if you can map your production rules to the form:
iAr -> ibr, where A is a single non-terminal, i and r are (possibly empty) strings of terminals and non-terminals, and b is a non-empty string of terminals and non-terminals.
In other words, look at each of your rules, and try to make suitable choices for i, A, r, and b.
Before we look at your questions, let's look at some hypothetical examples:
Is CRC -> CRRRRRC a valid context-sensitive rule?
Yes. I can choose i=empty, A=C, r=RC, and b=CRRRR. Note, I could have made other choices that work, too.
Is xYz -> xWzv a valid context-sensitive rule?
No. There is no choice for i, A, and r that allows a match. If I chose i=x, A=Y, r=z, and b=W, that trailing v screws things up.
Is xY -> xWzv a valid context-sensitive rule?
Yes. I can choose i=x, A=Y, r=empty, and b=Wzv.
This is the scheme you should use to answer your questions. Now, let's look at those:
AB -> BA: Assume you choose either A or B to be your single non-terminal. The choice fixes i and r (one will be empty, the other will be the non-terminal you didn't choose). Is there a string of the form ibr that can match based on how you fixed i and r? In other words, can you choose the string to replace b that maps to your rule?
A -> aAB. I hope the choice of your single non-terminal on the left is intuitively obvious. This choice will again fix i and r. Does the right map to a suitable ibr form where b is a nonempty string of terminals and nonterminals?
aAB -> ABc. Again, choose A or B to be your single non-terminal. This fixes i and r. Is there a choice that allows you to choose a suitable ibr?
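If you want to check your reasoning mechanically, here is a rough Python sketch of the same scheme (my own; it assumes upper-case letters are non-terminals, which matches the examples): it tries every choice of the single non-terminal A on the left and tests whether the right-hand side keeps the same contexts i and r around a non-empty b.

    def fits_csg_form(lhs, rhs):
        """Can lhs -> rhs be written as iAr -> ibr, with A a single
        non-terminal (upper-case here) and b non-empty?"""
        for k, A in enumerate(lhs):
            if not A.isupper():
                continue                      # A must be a non-terminal
            i, r = lhs[:k], lhs[k + 1:]
            if (rhs.startswith(i) and rhs.endswith(r)
                    and len(rhs) > len(i) + len(r)):   # b is non-empty
                return True
        return False

    print(fits_csg_form("CRC", "CRRRRRC"))  # True
    print(fits_csg_form("xYz", "xWzv"))     # False
    print(fits_csg_form("xY",  "xWzv"))     # True

Try it on your three rules to confirm the answers you work out by hand.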
While creating the first and follow sets for a given grammar, I noticed a scenario not described in my reference for the algorithm.
Namely, how does one calculate the follow set for a nonterminal with a rule such as this?
<exp-list_tail> --> COMMA <exp> <exp-list_tail>
Expressions surrounded by <..> are nonterminals, COMMA is a terminal.
My best guess is I should just add the empty string to the follow set, but I'm not sure.
Normally, when a nonterminal appears at the end of a production rule, you would just add the follow set of the left-hand nonterminal, but you can see how that is a problem here, since the nonterminal at the end is the left-hand nonterminal itself.
To answer this properly, it would be helpful to know your entire grammar. However, here is an attempt at a general answer:
Here is the algorithm for calculating follow sets:
Initialize all follow sets to {}, except for the start symbol S, which is initialized to {$}.
While there are changes, for each A∈V do:
For each Y → αAβ do:
follow(A) = follow(A) ∪ first(β)
If β ⇒* ε, also do: follow(A) = follow(A) ∪ follow(Y)
Note that this is a deterministic algorithm: it will give you a single answer, depending only on your (entire) grammar.
Specifically, I don't think that this particular rule will affect <exp-list_tail>'s follow set (it could, but probably wouldn't).
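If it helps, here is a rough sketch of that fixpoint loop in Python (my own illustration; the grammar is just a guess at the shape of yours, with your quoted rule included, so only the structure matters):

    # Grammar: nonterminal -> list of right-hand sides; [] is an epsilon rule.
    GRAMMAR = {
        "exp-list":      [["exp", "exp-list_tail"]],
        "exp-list_tail": [["COMMA", "exp", "exp-list_tail"], []],
        "exp":           [["ID"]],
    }
    START = "exp-list"
    NONTERMS = set(GRAMMAR)

    def nullable(sym, seen=frozenset()):
        if sym not in NONTERMS or sym in seen:
            return False
        return any(all(nullable(s, seen | {sym}) for s in rhs)
                   for rhs in GRAMMAR[sym])

    def first(seq):
        """FIRST set of a sequence of symbols (grammar is not left-recursive)."""
        out = set()
        for sym in seq:
            if sym not in NONTERMS:
                out.add(sym)
                return out
            for rhs in GRAMMAR[sym]:
                out |= first(rhs)
            if not nullable(sym):
                return out
        return out

    follow = {nt: set() for nt in NONTERMS}
    follow[START].add("$")
    changed = True
    while changed:                                   # iterate until a fixpoint
        changed = False
        for lhs, rhss in GRAMMAR.items():
            for rhs in rhss:
                for idx, sym in enumerate(rhs):
                    if sym not in NONTERMS:
                        continue
                    beta = rhs[idx + 1:]
                    new = first(beta)
                    if all(nullable(s) for s in beta):   # beta =>* epsilon
                        new |= follow[lhs]
                    if not new <= follow[sym]:
                        follow[sym] |= new
                        changed = True

    print(follow)   # here FOLLOW(exp-list_tail) ends up as {'$'}

Notice that the occurrence of <exp-list_tail> at the end of the quoted rule just pours its own follow set back into itself, which is exactly why that rule barely affects the result.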
Hi, there is this question in the book that says:
Given this grammar
A --> AA | (A) | epsilon
a- What does it generate?
b- Show that it is ambiguous.
Now, the answers that I can think of are:
a- adjacent parentheses
b- It generates different parse trees for the same string, so it is ambiguous, and I drew a diagram showing two scenarios.
Is this right, or is there a better answer?
a is almost correct.
The grammar does generate the sequences (), ()(), ()()(), …
But due to the second rule it can also generate (()), ()((())), etc.
b is not correct.
This grammar is ambiguous due to immediate left recursion: A → AA.
How to avoid left recursion: one, two.
a) Nearly right...
This grammar generates exactly the set of strings composed of balanced parentheses. To see why that is so, let's make a quick demonstration.
First: everything that comes out of your grammar is a balanced parenthesis string. Why? Simple induction:
Epsilon is a balanced (empty) parenthesis string.
If A is a balanced parenthesis string, then (A) is also balanced.
If A1 and A2 are balanced, so is A1A2 (I'm using two different identifiers just to make explicit that A -> AA doesn't necessarily produce the same string for each A).
Second: every balanced parenthesis string is produced by your grammar. Let's prove it by induction on the size of the string.
If the string is zero-sized, it must be epsilon.
If not, then let N be the size of the string and M the length of its shortest non-empty balanced prefix (note that the rest of the string is also balanced):
If M = N, then you can produce the string with (A).
If M < N, then you can produce it with A -> AA: the first M characters with the first A and the last N - M with the second A.
In either case, you only have to produce strings shorter than N characters, so by induction you can do that. QED.
For example: (()())(())
We can generate this string using exactly the idea of the demonstration.
A -> AA -> (A)A -> (AA)A -> ((A)(A))A -> (()())A -> (()())(A) -> (()())((A)) -> (()())(())
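Here is a small Python sketch of that construction (my own, just to make the proof concrete): it peels off the shortest non-empty balanced prefix, exactly as in the M = N and M < N cases above, and records which rule was used.

    def shortest_balanced_prefix(s):
        """Split a non-empty balanced string into its shortest non-empty
        balanced prefix and the remainder."""
        depth = 0
        for i, c in enumerate(s):
            depth += 1 if c == "(" else -1
            if depth == 0:
                return s[:i + 1], s[i + 1:]

    def derivation(s):
        """Nested tuples recording which rule of A -> AA | (A) | epsilon is used."""
        if s == "":
            return "epsilon"
        head, tail = shortest_balanced_prefix(s)
        if tail == "":                       # M = N: the string is ( inner )
            return ("(A)", derivation(s[1:-1]))
        return ("AA", derivation(head), derivation(tail))   # M < N

    print(derivation("(()())(())"))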
b) Of course, having both left and right recursion (A -> AA) is enough to say it's ambiguous, but to see specifically why this grammar is ambiguous, follow the same idea as in the demonstration:
It is ambiguous because you don't need to take the shortest balanced prefix. You could take the longest balanced prefix (or, in general, any non-empty balanced prefix) that is not the whole string, and the demonstration (and the generation) would follow the same process.
Ex: (())()()
You can choose A -> AA and generate, with the first A, either the (()) substring or the (())() substring.
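To make that concrete, here is a quick sketch (my own) that lists every way to split "(())()()" into two non-empty balanced halves under A -> AA; each split leads to a different parse tree:

    def balanced(s):
        depth = 0
        for c in s:
            depth += 1 if c == "(" else -1
            if depth < 0:
                return False
        return depth == 0

    s = "(())()()"
    splits = [(s[:i], s[i:]) for i in range(1, len(s))
              if balanced(s[:i]) and balanced(s[i:])]
    print(splits)   # [('(())', '()()'), ('(())()', '()')]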
Yes, you are right.
That is what an ambiguous grammar means.
The problem with ambiguous grammars is that if you are writing a compiler and you want to identify each token in a certain line of code (or something like that), then the ambiguity will get in your way, as you will have "two explanations" for that line of code.
It sounds like your approach for part b is correct: showing two independent derivations for the same string in the language defined by the grammar.
However, I think your answer to part A needs a little work. Clearly you can use the second clause recursively to obtain strings like (((((epsilon))))), but there are other types of derivations possible using the first clause and second clause together.