Constructing a Follow Set - grammar

While constructing follow sets for a given grammar, I noticed a scenario not described in my reference for the algorithm.
Namely, how does one calculate the follow set for a nonterminal with a rule such as this:
<exp-list_tail> --> COMMA <exp> <exp-list_tail>
Expressions surrounded by <..> are nonterminals, COMMA is a terminal.
My best guess is that I should just add the empty string to the follow set, but I'm not sure.
Normally, when a nonterminal sits at the end of a production rule, you would add the follow set of the left-hand-side nonterminal to its follow set, but you can see how that is a problem here: the nonterminal at the end of the rule is the left-hand side itself.

To answer this properly, it would be helpful to know your entire grammar. However, here is an attempt at a general answer:
Here is the algorithm for calculating follow sets:
Initialize all follow sets to {}, except follow(S), which is initialized to {$}.
While there are changes, for each A ∈ V do:
For each production Y → αAβ do:
follow(A) = follow(A) ∪ (first(β) ∖ {ε})
If β ⇒* ε, also do: follow(A) = follow(A) ∪ follow(Y)
Note that this is a deterministic algorithm: it will give you a single answer, depending only on your (entire) grammar.
Specifically, this particular rule cannot add anything to <exp-list_tail>'s follow set: since <exp-list_tail> appears only at the end of its own production, the rule merely adds follow(<exp-list_tail>) to itself, which changes nothing. (The rule does, however, contribute to follow(<exp>).)
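For concreteness, the fixed-point computation above can be sketched in Python. The grammar below is a hypothetical completion of the rule in the question (the exp_list and exp productions are my assumptions, not the asker's actual grammar):

```python
# Fixed-point FIRST/FOLLOW computation. The grammar is a hypothetical
# completion of the rule in the question; [] stands for an ε production.
GRAMMAR = {
    "exp_list":      [["exp", "exp_list_tail"]],
    "exp_list_tail": [["COMMA", "exp", "exp_list_tail"], []],
    "exp":           [["ID"]],
}
NONTERMS = set(GRAMMAR)
START = "exp_list"

def first_of_seq(seq, first):
    """FIRST of a symbol sequence; includes '' if the sequence derives ε."""
    out = set()
    for sym in seq:
        if sym not in NONTERMS:        # terminal: it begins the sequence
            out.add(sym)
            return out
        out |= first[sym] - {""}
        if "" not in first[sym]:       # not nullable: stop here
            return out
    out.add("")                        # every symbol was nullable
    return out

# FIRST sets by fixed point
first = {a: set() for a in NONTERMS}
changed = True
while changed:
    changed = False
    for a, rules in GRAMMAR.items():
        for rhs in rules:
            new = first_of_seq(rhs, first)
            if not new <= first[a]:
                first[a] |= new
                changed = True

# FOLLOW sets by fixed point
follow = {a: set() for a in NONTERMS}
follow[START].add("$")
changed = True
while changed:
    changed = False
    for a, rules in GRAMMAR.items():
        for rhs in rules:
            for i, sym in enumerate(rhs):
                if sym not in NONTERMS:
                    continue
                tail = first_of_seq(rhs[i + 1:], first)
                new = (tail - {""}) | (follow[a] if "" in tail else set())
                if not new <= follow[sym]:
                    follow[sym] |= new
                    changed = True

print(follow["exp_list_tail"])  # {'$'}
```

Note that the self-recursive occurrence of exp_list_tail at the end of its own rule triggers only the no-op update follow(exp_list_tail) ∪= follow(exp_list_tail); everything in its follow set comes from its other occurrences.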

Is this conversion from BNF to EBNF correct?

As context, my textbook uses this style for EBNF:
Sebesta, Robert W. Concepts of Programming Languages 11th ed., Pearson, 2016, 150.
The problem:
Convert the following BNF rule with three RHSs to an EBNF rule with a single RHS.
Note: Conversion to EBNF should remove all explicit recursion and yield a single RHS EBNF rule.
A ⟶ B + A | B – A | B
My solution:
A ⟶ B [ (+ | –) A ]
My professor tells me:
"First, you should use { } instead of [ ],
Second, according to the BNF rule, <"term"> is B." (He is referring to the style guide posted above.)
Is he correct? I assume so but have read other EBNF styles and wonder if I am entitled to credit.
You were clearly asked to remove explicit recursion and your proposed solution doesn't do that; A is still defined in terms of itself. So independent of naming issues, you failed to do the requested conversion and your prof is correct to mark you down for it. The correct solution for the problem as presented, ignoring the names of non-terminals, is A ⟶ B { (+ | –) B }, using indefinite repetition ({…}) instead of optionality ([…]). With this solution, the right-hand side of the production for A only references B, so there is no recursion (at least, in this particular production).
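As a sanity check that the iterative EBNF form really describes the same strings as the recursive BNF, here is a small enumeration sketch (modeling B as the single terminal "b" and the en dash as plain "-" are my choices, not the textbook's):

```python
from itertools import product

def bnf_strings(depth):
    """A -> B+A | B-A | B, expanded to a bounded recursion depth."""
    if depth == 0:
        return {"b"}
    smaller = bnf_strings(depth - 1)
    return {"b"} | {"b" + op + s for op in "+-" for s in smaller}

def ebnf_strings(reps):
    """A -> B { (+|-) B }, with up to `reps` repetitions of the group."""
    out = set()
    for n in range(reps + 1):
        for ops in product("+-", repeat=n):
            out.add("b" + "".join(op + "b" for op in ops))
    return out

print(bnf_strings(3) == ebnf_strings(3))  # True: same strings, no recursion
```

Depth-k recursion in the BNF corresponds exactly to k repetitions of the braced group in the EBNF, which is the point of the { } conversion.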
Now, for naming: clearly, your textbook's EBNF style is to use angle brackets around non-terminal names. That's a common style, and many would say it is more readable than single capital letters, which mean nothing to a human reader. I suppose your prof thinks you should have changed the name of B to <term>, on the basis that that is the "textbook" name for the non-terminal representing the operand of an additive operator.
The original BNF you were asked to convert does show the two additive operators. However, it makes them right-associative, which is definitely non-standard. So you might be able to construct an argument that there's no reason to assume that these operators are additive and that their operands should be called "terms" [Note 1]. But even on that basis, you should have used some name written in lower-case letters and surrounded by angle brackets. To me, that's minor compared with the first issue, but your prof may have their own criteria.
In summary, I'm afraid I have to say that I don't believe you are entitled to credit for that solution.
Notes
If you had actually come up with that explanation, your prof might have been justified in suggesting a change of major to Law.

Construct CFG from {w ∈ {a, b}* : 2#a(w) = 3#b(w)}

If I have the following language {x ∈ {a, b}* : 2#a(x) = 3#b(x)}, then a CFG for that language is:
S => SaSaSaSbSbS | SaSaSbSaSbS | SaSaSbSbSaS | SaSbSaSaSbS | SaSbSaSbSaS | SaSbSbSaSaS | SbSaSaSaSbS | SbSaSaSbSaS | SbSaSbSaSaS | SbSbSaSaSaS | ε
Is this correct? If it isn't, or if there is a simpler form, can you tell me? I have no clue about any form other than this one.
At a glance it looks like this probably works:
your base case is good; the empty string is in the language
you cover all your inductive cases: each production adds exactly 3 a's and 2 b's (preserving 2#a = 3#b), and you cover all 10 arrangements
I am not seeing a fundamentally simpler solution than this, although you might be able to remove either the leading or the trailing S from the right-hand side of all productions; then, by choosing a production you'd be committing to that first or last terminal symbol, but I think that still works out. Possibly even removing both leading and trailing S so you commit to both the first and the last. Any other simplification seems like it would increase the number of productions or the number of nonterminals, or both, which while possibly reducing the total number of symbols needed to encode the grammar, arguably doesn't make the grammar any simpler (indeed, more nonterminals and productions is typically seen as more complicated, not less). If you want to experiment with adding productions or nonterminals, consider e.g. T => Sa and R => Sb, just to cut down on repetition.
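The two claims above can be checked mechanically: each production's terminal content has exactly three a's and two b's, and the ten productions are exactly the C(5,2) = 10 arrangements of aaabb. A small Python sketch (the production list is copied from the question):

```python
from itertools import permutations

prods = ("SaSaSaSbSbS SaSaSbSaSbS SaSaSbSbSaS SaSbSaSaSbS SaSbSaSbSaS "
         "SaSbSbSaSaS SbSaSaSaSbS SbSaSaSbSaS SbSaSbSaSaS SbSbSaSaSaS").split()

# Strip the nonterminals; what remains is each production's terminal content.
terminals = [p.replace("S", "") for p in prods]

# Each production adds exactly 3 a's and 2 b's, so 2#a = 3#b is preserved...
assert all(t.count("a") == 3 and t.count("b") == 2 for t in terminals)

# ...and the 10 productions cover all 10 arrangements of aaabb.
assert set(terminals) == {"".join(p) for p in set(permutations("aaabb"))}
print("all arrangements covered:", sorted(terminals))
```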

What is ambiguity in an alphabet in automata theory?

I am new to the field of automata. I have read many articles and watched many videos, but I am stuck on one of the first topics. It may be easy for others, but after spending a lot of time I am still unable to understand it.
TOPIC is: Ambiguity in alphabet
An alphabet is {A, Aa, bab, d}
and a string is s = AababA
and the author says that this is an ambiguous alphabet, because when a computer reads the string, it reads from left to right. After reading a capital A, the next character may be an a, and since A is a prefix of the symbol Aa, this creates ambiguity. One letter (symbol) should not be a prefix of another letter.
Moreover, the author says:
we will tokenize it (AababA) in two ways:
(Aa) (bab) (A)
(A) (abab) (A)
after that, the first one is OK; the second is not OK, due to the ambiguity in the alphabet defined above.
What is the procedure to tokenize the above string in two ways? Is there any specific rule?
How is the alphabet ambiguous because of the second group?
If the second group is invalid due to A being a prefix, then how? What is the role of prefixes in the ambiguity of an alphabet?
If we don't think about prefixes and simply match both groups against the above alphabet, we can easily judge that the second does not match the alphabet; so why do we need to discuss prefixes at all?
I hope this question will be considered important, so that the answer helps me out of this confusion. I will be very thankful.
The author chose a confusing example. If you share the source where you got this example, I could give a better answer, but I would argue that in this case there is no practical ambiguity. If you see Aa, you know that the first lexeme must be "Aa", because nothing in the alphabet starts with a lowercase a.
For an easier example, consider the alphabet {A, a, Aa} and string "AAaAaaA"
You could tokenize this in the following ways:
(A) (A) (a) (A) (a) (a) (A)
(A) (Aa) (A) (a) (a) (A)
(A) (A) (a) (Aa) (a) (A)
(A) (Aa) (Aa) (a) (A)
This is most often resolved by choosing the longest lexeme that matches in each case, which would yield the last tokenization.
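The enumeration and the longest-match rule can both be sketched in a few lines of Python (a toy illustration of the idea, not a quoted implementation):

```python
# Enumerate every tokenization of "AAaAaaA" over the alphabet {A, a, Aa},
# then apply the longest-match (maximal munch) rule described above.
ALPHABET = ["A", "a", "Aa"]

def tokenizations(s):
    """Every way to split s into symbols of ALPHABET."""
    if s == "":
        return [[]]
    results = []
    for sym in ALPHABET:
        if s.startswith(sym):
            results += [[sym] + rest for rest in tokenizations(s[len(sym):])]
    return results

def longest_match(s):
    """Greedy maximal munch: always take the longest matching symbol."""
    tokens = []
    while s:
        matches = [sym for sym in ALPHABET if s.startswith(sym)]
        if not matches:
            return None          # lexing error: nothing matches
        best = max(matches, key=len)
        tokens.append(best)
        s = s[len(best):]
    return tokens

print(len(tokenizations("AAaAaaA")))  # 4 tokenizations, as listed above
print(longest_match("AAaAaaA"))       # ['A', 'Aa', 'Aa', 'a', 'A']
```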
Now let us return to your example, but let's make the string a little bit different: "AababAe".
You could tokenize the string in the following ways:
(Aa) (bab) (A) <error>
(A) <error>
With the original string AababA, the first branch succeeds and the second fails; with AababAe, both branches end in an error, but the first consumes almost the whole input before failing on the final e, while the second fails immediately: after the initial A, nothing in the alphabet starts with a lowercase a. As you noted, the tokenizer should choose the first. The point is that there is an explicit choice here to prefer the longest valid tokenization. Nothing in the alphabet forces you to make this choice; it would be just as valid to choose the shortest matching option. That would be massively impractical, but it is a valid choice.
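To make the strategy concrete, here is a greedy longest-match lexer over the question's alphabet (a sketch of the resolution described above, not a quoted implementation):

```python
# Greedy longest-match lexing over the question's alphabet: "AababA"
# lexes cleanly, while "AababAe" fails at the final character.
ALPHABET = ["A", "Aa", "bab", "d"]

def lex(s):
    """Maximal munch; returns None on a lexing error."""
    tokens = []
    while s:
        matches = [sym for sym in ALPHABET if s.startswith(sym)]
        if not matches:
            return None          # no symbol matches the remaining input
        best = max(matches, key=len)
        tokens.append(best)
        s = s[len(best):]
    return tokens

print(lex("AababA"))   # ['Aa', 'bab', 'A']
print(lex("AababAe"))  # None
```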

Using wolfram to check biconditional involving predicates and quantifiers

I have a habit of checking the correctness of logical statements with Wolfram Alpha by generating truth tables for them. For example, I can check whether this:
((¬x→y)∧(¬x→¬y))→x
is correct by generating a truth table for ((¬x→y)∧(¬x→¬y)), whose column turns out to be the same as the x column in the same truth table; hence the statement above is correct. However, is there any way I can check the same for biconditionals involving nested existential and universal quantifiers over predicates? For example, can I somehow verify rules of this kind:
(∀x)(∀y)ϕ(x,y)⇔(∀y)(∀x)ϕ(x,y)
Update
I am able to check ∀x,y (x∨y) as follows:
Resolve[ForAll[{x,y}, x or y]]
which correctly returns False, as (x∨y) does not hold for all x and y.
So now I thought I could do something similar to obtain True for the following (which is a general fact): ¬(∀x)ϕ(x)⇔(∃x)¬ϕ(x). I tried this:
Resolve[ForAll[x,(not ForAll[x, x]) xnor (exists[x,not x])]]
But it did not work. Note that ⇔ is nothing but Xnor. So how do I do this, especially given that something like the following correctly returns True:
Resolve[not ForAll[x, x]]
which stands for ¬∀x(x).
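As an aside: over the Boolean domain, a quantifier is just a finite conjunction or disjunction (∀x ϕ(x) is ϕ(False) ∧ ϕ(True), and ∃x ϕ(x) is ϕ(False) ∨ ϕ(True)), so such laws can also be brute-force checked outside Wolfram. A Python sketch of that idea (Python stands in here, since the exact Wolfram incantation is what is in question):

```python
# Brute-force check of quantified Boolean laws by expanding quantifiers
# over the two-element domain {False, True}.
from itertools import product

def forall(p):
    return all(p(v) for v in (False, True))

def exists(p):
    return any(p(v) for v in (False, True))

# ¬(∀x)ϕ(x) ⇔ (∃x)¬ϕ(x), checked for all 4 unary Boolean predicates ϕ
unary = [lambda x, t=t: t[int(x)] for t in product((False, True), repeat=2)]
law1 = all((not forall(p)) == exists(lambda x: not p(x)) for p in unary)

# (∀x)(∀y)ϕ(x,y) ⇔ (∀y)(∀x)ϕ(x,y), for all 16 binary Boolean predicates ϕ
binary = [lambda x, y, t=t: t[2 * int(x) + int(y)]
          for t in product((False, True), repeat=4)]
law2 = all(forall(lambda x: forall(lambda y: p(x, y)))
           == forall(lambda y: forall(lambda x: p(x, y))) for p in binary)

print(law1, law2)  # True True
```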

Ambiguous grammar?

Hi,
there is this question in the book that says:
Given this grammar
A --> AA | (A) | epsilon
a- what does it generate?
b- show that it is ambiguous
Now, the answers that I think of are:
a- adjacent parentheses
b- it generates different parse trees, so it's ambiguous, and I made a drawing showing two scenarios.
Is this right, or is there a better answer?
a is almost correct.
The grammar really does generate the sequences (), ()(), ()()(), …
But due to the second rule it can also generate nested strings such as (()), ()((())), etc.
b is not correct.
This grammar is ambiguous because of the rule A → AA, which is both left- and right-recursive.
How to avoid left recursion: one, two.
a) Nearly right...
This grammar generates exactly the set of strings of balanced parentheses. To see why that is so, let's try a quick demonstration.
First: everything that comes out of your grammar is a balanced parenthesis string. Why? Simple induction:
Epsilon is a balanced (empty) parenthesis string.
if A is a balanced parenthesis string, then (A) is also balanced.
if A1 and A2 are balanced, so is A1A2 (I'm using two different identifiers just to make explicit that A -> AA doesn't necessarily produce the same string for each A).
Second: every balanced string is produced by your grammar. Let's do it by induction on the size of the string.
If the string is zero-sized, it must be Epsilon.
If not, then let N be the size of the string and M the length of its shortest nonempty balanced prefix (note that the rest of the string is also balanced):
If M = N, then you can produce that string with (A).
If M < N, then you can produce it with A -> AA: the first M characters with the first A and the last N - M with the second A.
In either case, you have to produce strings shorter than N characters, so by induction you can do that. QED.
For example: (()())(())
We can generate this string using exactly the idea of the demonstration.
A -> AA -> (A)A -> (AA)A -> ((A)(A))A -> (()())A -> (()())(A) -> (()())((A)) -> (()())(())
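That chain can be replayed mechanically; here is a small sketch that rewrites the leftmost A at each step (the compressed steps of the chain above are expanded into one substitution at a time):

```python
def derive(steps, start="A"):
    """Replay a leftmost derivation: each step rewrites the leftmost A."""
    s = start
    for rhs in steps:
        i = s.index("A")
        s = s[:i] + rhs + s[i + 1:]
    return s

# The derivation of (()())(()), one leftmost step at a time ("" is epsilon)
steps = ["AA", "(A)", "AA", "(A)", "", "(A)", "", "(A)", "(A)", ""]
print(derive(steps))  # (()())(())
```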
b) Of course, left plus right recursion is enough to say it's ambiguous, but to see why this particular grammar is ambiguous, follow the same idea as in the demonstration:
It is ambiguous because you don't need to take the shortest balanced prefix. You could take the longest balanced prefix (or, in general, any balanced prefix) that is not the whole string, and the demonstration (and generation) would follow the same process.
Ex: (())()()
You can choose A -> AA and generate with the first A either the (()) substring or the (())() substring.
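The two choices can be written out as two distinct leftmost derivations of the same string, which is exactly what ambiguity requires; a small sketch:

```python
def derive(steps, start="A"):
    """Replay a leftmost derivation: each step rewrites the leftmost A."""
    s = start
    for rhs in steps:
        i = s.index("A")
        s = s[:i] + rhs + s[i + 1:]
    return s

# Two distinct leftmost derivations of (())()() ("" is epsilon): the first
# A of A -> AA yields (()) in d1, but (())() in d2.
d1 = ["AA", "(A)", "(A)", "", "AA", "(A)", "", "(A)", ""]
d2 = ["AA", "AA", "(A)", "(A)", "", "(A)", "", "(A)", ""]
print(derive(d1), derive(d2))  # (())()() (())()()
```

Two different leftmost derivations of the same string correspond to two different parse trees, so the grammar is ambiguous.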
Yes, you are right.
That is what an ambiguous grammar means.
The problem with ambiguous grammars is that if you are writing a compiler and you want to interpret a certain line of code (or something like that), then the ambiguity will get in your way, as you will have "two explanations" for that line of code.
It sounds like your approach for part b is correct: showing two distinct parse trees (independent derivations) for the same string in the language defined by the grammar.
However, I think your answer to part A needs a little work. Clearly you can use the second clause recursively to obtain strings like (((((epsilon))))), but there are other types of derivations possible using the first clause and second clause together.