Prove the language is not context-free?

How can you prove that the language L given below is not context-free? I would like to know whether my proof below makes any sense; if not, what would be the correct method?
L = {a^n b^n c^i | n ≤ i ≤ 2n}
I am trying to prove this by contradiction. Suppose L is regular with pumping length p, and let S = a^p b^p c^p. Observe that S ∉ L. Since there must be a pumping cycle xy with length less than p, we can duplicate y, which consists of some number of b's, to cause x(y^2)z to enter the language, because the number of b's exceeds the number of c's and is no longer bound by the given condition n ≤ i ≤ 2n. Therefore we have a contradiction, and hence the language L is not context-free.

The proof is by contradiction. Assume the language is context-free. Then, by the pumping lemma for context-free languages, there is a pumping length p such that any string in L of length at least p can be written as uvxyz where |vxy| ≤ p, |vy| > 0 and, for all natural numbers k, u(v^k)x(y^k)z is in the language as well. Choose a^p b^p c^(p+1). Then we must be able to write this string as uvxyz so that |vy| > 0. Note first that if v or y contains two different symbols, pumping immediately produces a string that is not even of the form a*b*c*, so we may assume each of v and y repeats a single symbol; and since |vxy| ≤ p, the window vxy cannot touch both the a's and the c's. There are several possibilities to consider:
1. v and y consist only of a's. In this case, pumping in either direction changes the number of a's but not the number of b's, producing a string not in the language; so this cannot be the case.
2. v and y consist only of a's and b's. In this case, pumping might keep the numbers of a's and b's equal, but pumping up will eventually make the number of a's exceed the number of c's, violating n ≤ i; so this cannot be the case.
3. v and y consist only of b's. This case is similar to (1) and so cannot be a valid choice.
4. v and y consist only of b's and c's. Similar to (1) and (3): pumping changes the number of b's without changing the number of a's, causing the counts of a's and b's to differ.
5. v and y consist only of c's. Pumping up will eventually cause there to be more c's than twice the number of a's, violating i ≤ 2n; so this cannot be the case either.
No matter how we choose v and y, pumping produces strings not in the language. This is a contradiction, so our assumption that the language is context-free must have been incorrect.
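To make the case analysis concrete, here is a quick Python sanity check; the membership test in_L and the concrete value p = 4 are illustrative choices, not part of the proof:

    import re

    def in_L(s):
        """Membership test for L = { a^n b^n c^i | n <= i <= 2n }."""
        m = re.fullmatch(r"(a*)(b*)(c*)", s)
        if not m:
            return False
        na, nb, i = (len(g) for g in m.groups())
        return na == nb and na <= i <= 2 * na

    p = 4  # stand-in for the unknown pumping length
    assert in_L("a" * p + "b" * p + "c" * (p + 1))          # the chosen string is in L
    # Case 5 (v and y all c's): pumping up eventually pushes i past 2n.
    assert not in_L("a" * p + "b" * p + "c" * (2 * p + 1))
    # Case 1 (v and y all a's): pumping unbalances the a's and b's.
    assert not in_L("a" * (p + 1) + "b" * p + "c" * (p + 1))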

Related

Context-free grammar for L = { b^n c^n a^n , n>=1}

I have a language L, which is defined as: L = { b^n c^n a^n , n>=1}
The corresponding grammar would be able to create words such as:
bca
bbccaa
bbbcccaaa
...
What would such a grammar look like? Making two variables dependent on each other is relatively simple, but I have trouble doing it for three.
Thanks in advance!
L = { b^n c^n a^n , n>=1}
As pointed out in the comments, this is a canonical example of a language which is not context-free. It can be shown using the pumping lemma for context-free languages: consider a string like b^p c^p a^p, where p is the pumping length, and then show that no matter which part you pump, you throw off the balance (basically, the pumped part has length less than p, so it cannot "span" all three symbols to keep them in sync).
L = { a^m b^n c^n a^(m+n) | m ≥ 0, n ≥ 1 }
As suggested in the comments, this is not context-free either. It can be shown using the pumping lemma for context-free languages as well. However, given a proof (or acceptance) of the above, there is an easier way. Recall that the intersection of a regular language and a context-free language must be context-free. Assume L is context-free. Then so must be its intersection with the regular language (b+c)(b+c)* a*. However, that intersection is exactly b^n c^n a^n (since m is forced to be zero), which we know is not context-free; a contradiction. Therefore our assumption was wrong, and L is not context-free either.
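If you want to see the closure argument in action, here is a brute-force Python check; the helper in_L and the length bound 9 are my own illustrative choices:

    import re
    from itertools import product

    def in_L(s):
        """Membership test for L = { a^m b^n c^n a^(m+n) | m >= 0, n >= 1 }."""
        m = re.fullmatch(r"(a*)(b+)(c+)(a+)", s)
        if not m:
            return False
        am, bn, cn, tail = (len(g) for g in m.groups())
        return bn == cn and tail == am + bn

    regular = re.compile(r"[bc][bc]*a*")  # the regular language (b+c)(b+c)*a*

    # Every short string in the intersection has the shape b^n c^n a^n.
    for length in range(1, 10):
        for tup in product("abc", repeat=length):
            s = "".join(tup)
            if in_L(s) and regular.fullmatch(s):
                n = length // 3
                assert s == "b" * n + "c" * n + "a" * n
    print("intersection = { b^n c^n a^n } for all strings up to length 9")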

Topcoder Binary search Tutorial

What we can call the main theorem states that binary search can be used if and only if for all x in S, p(x) implies p(y) for all y > x. This property is what we use when we discard the second half of the search space. It is equivalent to saying that ¬p(x) implies ¬p(y) for all y < x (the symbol ¬ denotes the logical not operator), which is what we use when we discard the first half of the search space.
Please explain this paragraph in simpler and detailed terms.
Consider that p(x) is some property of x. When using binary search, this property is usually a comparison of x with some other value k that you are looking for: greater than, less than, or equal to.
What we can call the main theorem states that binary search can be used if and only if for all x in S, p(x) implies p(y) for all y > x.
Let's say that x is some value in the middle of the list and you are looking for where k is. Let's also say that p(x) means that x is greater than k. If the list is sorted in ascending order, then all values y to the right of x (y > x) must also be greater than k (the comparison is transitive), and as such p(y) also holds for every such y. This is the basis of binary search: if you are looking for k and some value x is known to be greater than k, then all elements to its right are also greater than k. Notice that this is only true if the list is sorted. Consider the list [a, b, c] and a value k that you are looking for. If it is known that a < b and b < c, then if k < b is true, k < c must also be true.
This property is what we use when we discard the second half of the search space.
This is what the previous conclusion allows you to do. Since you know that the property that holds for x also holds for all y to its right (that is, they are not the element you are looking for, because they are greater), it is safe to discard them, and so you keep looking for k only in the lower half.
The rest of the paragraph says pretty much the same thing for discarding the lower half.
In short, p(x) is some transitive property that, once it holds for a given value x, must hold for all values to the right of x (again, because it is transitive). ¬p(x), on the other hand, holds for all values to the left of x. By being able to conclude that those are not the elements you are looking for, you can conclude that it is safe to discard either half of the list.
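To tie this back to code, here is a minimal Python sketch of the predicate view the tutorial describes; the function name first_true and the half-open interval convention are my own choices:

    def first_true(lo, hi, p):
        """Smallest x in [lo, hi) with p(x) true, or hi if there is none.
        Requires the main theorem's property: p(x) implies p(y) for all y > x."""
        while lo < hi:
            mid = (lo + hi) // 2
            if p(mid):
                hi = mid      # p holds at mid and everywhere right of it,
                              # so the answer is mid or earlier: drop the second half
            else:
                lo = mid + 1  # not p(mid), hence not p anywhere to the left,
                              # so the answer is after mid: drop the first half
        return lo

    # Example: locate k in a sorted list via the monotone predicate lst[x] >= k.
    lst, k = [1, 3, 3, 5, 8, 13], 5
    i = first_true(0, len(lst), lambda x: lst[x] >= k)
    print(i, lst[i])  # 3 5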

construction of a^(2^i) language grammar

I'm kind of stuck on an automata and grammars problem. I've searched a lot, but without any success.
Is it even possible to construct a grammar generating this language L?
L = { a^(2^i) | i >= 0 }
Can anyone provide me with a simple solution?
It's certainly possible to write a grammar for this language, but it won't be a context-free grammar. That's easy to demonstrate using the pumping lemma.
The pumping lemma states that for any CFL there is some integer p such that any string s in the language whose length is at least p can be written as uvxyz, where u, v, x, y and z are strings, vy is not empty, and for all integers n ≥ 0 the string u(v^n)x(y^n)z is also in the language.
That is, for any string in the language whose length l is at least p, there is some k > 0 such that strings of length l + nk are in the language for every n. That is not the case for the language a^(2^i), since its strings have exponentially growing lengths: the gap between consecutive lengths eventually exceeds any fixed k, so the language cannot be context-free.
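A quick numeric illustration of that length argument; the threshold 100 is just a stand-in for an arbitrary fixed k:

    # Lengths in a^(2^i) are 1, 2, 4, 8, ...; the gap between consecutive
    # lengths is itself 2^i, so no arithmetic progression l + n*k fits them.
    lengths = [2 ** i for i in range(12)]
    gaps = [b - a for a, b in zip(lengths, lengths[1:])]
    print(gaps)                        # [1, 2, 4, 8, ...]: unbounded
    assert any(g > 100 for g in gaps)  # eventually exceeds any fixed k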
Constructing a non-context-free grammar for the language is not that difficult, but I don't know how useful it is.
The following is a Type 0 grammar (i.e. it's not context-sensitive either), but only because of the productions used to get rid of the metacharacters. The basic idea is that we put start and end markers around the string ([ and ]) and we have a "duplicator" (↣) which moves from left to right doubling the a's; when it hits the end marker, it either turns into a back-shuttle (↢) or it eats the end marker and turns into a start-marker-destroyer (⇐).
Start: [↣a]
↣a: aa↣
↣]: ↢]
↣]: ⇐
a↢: ↢a
a⇐: ⇐a
[↢: [↣
[⇐: (empty right-hand side)
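If you want to convince yourself that the rewriting system works, here is a small Python simulation; the ASCII letters D, S and K (standing in for ↣, ↢ and ⇐), the start string and the length cap are my encoding choices. Note that, as written, the grammar produces a^(2^i) only for i >= 1; covering i = 0 would take an extra production such as Start: a.

    from collections import deque

    # D = duplicator, S = back-shuttle, K = start-marker-destroyer.
    RULES = [
        ("Da", "aaD"),  # the duplicator doubles each a it walks over
        ("D]", "S]"),   # at the end marker: turn into a back-shuttle ...
        ("D]", "K"),    # ... or eat the end marker, becoming the destroyer
        ("aS", "Sa"),   # the shuttle moves left over the a's
        ("aK", "Ka"),   # the destroyer moves left over the a's
        ("[S", "[D"),   # shuttle reaches the start marker: start another pass
        ("[K", ""),     # destroyer erases the start marker and itself
    ]

    def generate(limit=20):
        """Breadth-first search over sentential forms reachable from [Da]."""
        start = "[Da]"
        seen, words = {start}, set()
        queue = deque([start])
        while queue:
            s = queue.popleft()
            if len(s) > limit:
                continue              # prune forms that grew past the cap
            if set(s) <= {"a"}:
                words.add(s)          # no metacharacters left: a word of L
                continue
            for lhs, rhs in RULES:
                i = s.find(lhs)
                while i != -1:
                    t = s[:i] + rhs + s[i + len(lhs):]
                    if t not in seen:
                        seen.add(t)
                        queue.append(t)
                    i = s.find(lhs, i + 1)
        return sorted(words, key=len)

    print(generate())  # ['aa', 'aaaa', 'aaaaaaaa', 'aaaaaaaaaaaaaaaa']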

How can I prove that derivations in Chomsky Normal Form require 2n - 1 steps?

I'm trying to prove the following:
If G is a context-free grammar in Chomsky Normal Form, then for any string w ∈ L(G) of length n ≥ 1, any derivation of w takes exactly 2n - 1 steps.
How would I go about proving this?
As a hint - since every production in Chomsky Normal Form either has the form
S → AB, for nonterminals A and B, or the form
S → x, for terminal x,
deriving a string must work in the following way:
Create a string of exactly n nonterminals, then
Expand each nonterminal out to a single terminal.
Applying a production of the first form increases the number of nonterminals from k to k + 1, since you replace one nonterminal (-1) with two nonterminals (+2), for a net gain of +1 nonterminal. Since you start with one nonterminal, this means you need exactly n - 1 productions of the first form. You then need n more of the second form to convert the nonterminals to terminals, giving a total of (n - 1) + n = 2n - 1 productions.
To show that you need exactly this many, I would suggest doing a proof by contradiction and showing that you can't do it with any more or any fewer. As a hint, try counting the number of productions of each type that are made and showing that if it isn't 2n - 1, either the string is too short, or you will still have nonterminals remaining.
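As a sanity check of the counting argument, here is a small Python experiment; the CNF grammar for a^n b^n and the helper are my own illustration, not something from the question:

    import random

    # A CNF grammar for { a^n b^n | n >= 1 }:
    #   S -> A T | A B,   T -> S B,   A -> a,   B -> b
    GRAMMAR = {
        "S": [["A", "T"], ["A", "B"]],
        "T": [["S", "B"]],
        "A": [["a"]],
        "B": [["b"]],
    }

    def random_derivation(start="S", cap=60):
        """Expand the leftmost nonterminal with random productions, counting
        the steps taken; retry if the random walk grows past the cap."""
        while True:
            sent, steps = [start], 0
            while any(sym in GRAMMAR for sym in sent):
                if steps >= cap:
                    break                 # runaway derivation: start over
                i = next(i for i, sym in enumerate(sent) if sym in GRAMMAR)
                sent[i:i + 1] = random.choice(GRAMMAR[sent[i]])
                steps += 1
            else:
                return "".join(sent), steps

    for _ in range(5):
        w, steps = random_derivation()
        assert steps == 2 * len(w) - 1    # matches the 2n - 1 count
        print(f"{w}: n = {len(w)}, steps = {steps}")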
Hope this helps!

Is the language of all strings over the alphabet "a,b,c" with the same number of substrings "ab" & "ba" regular?

Is the language of all strings over the alphabet "a,b,c" with the same number of substrings "ab" & "ba" regular?
I believe the answer is NO, but it is hard to give a formal demonstration of it, or even an informal one.
Any ideas on how to approach this?
It's clearly not regular. How is an FA going to recognize (abc)^n c (cba)^n? Strings like this are in your language, right? The argument is a simple one, based on the fact that there are infinitely many equivalence classes under the indistinguishability relation I_l.
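A few lines of Python make both points concrete; the helper in_L and the chosen exponents are illustrative:

    def in_L(s):
        """Equal numbers of occurrences of the substrings "ab" and "ba"."""
        return s.count("ab") == s.count("ba")

    # The family from above: (abc)^n c (cba)^n has n of each substring.
    for n in range(6):
        assert in_L("abc" * n + "c" + "cba" * n)

    # Indistinguishability flavour: the prefixes (abc)^m c are pairwise
    # distinguishable, since appending (cba)^m lands in L only when the
    # exponents match, so there are infinitely many equivalence classes.
    m, n = 3, 5
    assert in_L("abc" * m + "c" + "cba" * m)
    assert not in_L("abc" * n + "c" + "cba" * m)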
The most common way to prove a language is NOT regular is using one of the pumping lemmas.
Using the lemma is a little tricky, since it has all those "exists" and so on. To prove a language L is not regular using the pumping lemma, you have to prove that:
for any integer p,
there is a word w in L of length n, with n >= p, such that
for all possible ways to decompose w as xyz, with len(xy) <= p and y non-empty,
there exists an i such that x(y^i)z (repeating the y part i times) is NOT in L.
whooo!
I'll show how the proof looks for the "same number of a's and b's" language. It should be straightforward to convert it to your case:
for any given p, we can make a word of length n = 2p:
a^p b^p (p a's followed by p b's).
Any way you decompose this into xyz with |xy| <= p, y will contain only a's.
Thus, pumping the y part will make the word have more a's than b's,
thus NOT belonging to L.
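Here is that sketch as a few lines of Python; the decomposition shown is one representative choice (in the proof, every decomposition with |xy| <= p has y made of a's only):

    def in_L(s):
        """Same number of a's and b's."""
        return s.count("a") == s.count("b")

    p = 5                                       # stand-in for the pumping length
    x, y, z = "", "a", "a" * (p - 1) + "b" * p  # y falls inside the leading a's
    assert in_L(x + y + z)                      # the original word a^p b^p
    assert not in_L(x + y * 2 + z)              # pumping adds only a's: not in L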
If you need intuition on why this works: to verify whether a word belongs to one of these languages, you need to be able to count up to arbitrarily large numbers. Regular languages, however, are described by finite automata, and no finite automaton can represent the infinite number of states required to track all those counts. (The Wikipedia article has a formal proof.)
EDIT: It looks like you can't use the pumping lemma directly in this particular case: if you always make y one character long, you can never make a word stop being accepted (aba becoming abbbba makes no difference, and so on).
Just do the equivalence-class approach suggested by Patrick87; it will probably turn out to be cleaner than any of the dirty hacks you would need to make the pumping lemma applicable here.