Pumping lemma for context-sensitive language? - grammar

I have googled for a pumping lemma for context-sensitive languages, but the search only seems to turn up results for context-free languages.
Does the pumping lemma apply only to context-free languages, and not to context-sensitive ones?
Any idea how to proceed?

Pumping lemmas exist for regular, context-free, tree-adjoining, and multiple context-free languages. There is a good survey in Johan Behrenfeldt's master's thesis:
http://www.flov.gu.se/digitalAssets/1302/1302983_behrenfeldt-johan-alinguists.pdf
There is no pumping lemma for context-sensitive languages. Indeed, this class has considerably more generative power and includes languages without any kind of "pumping" property, e.g. {a^p | p prime}.
Each pumping lemma states a property that holds of every language in its class. It can therefore be used, in a proof by contradiction, to show that a language is not in that class. It cannot be used to prove that a language is in the class.
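To make the proof-by-contradiction pattern concrete, here is a small Python sketch; the language {a^n b^n} and the fixed pumping length p = 5 are illustrative choices, not part of the lemma itself. For s = a^p b^p, every legal split s = xyz (with |xy| <= p and |y| >= 1) puts y inside the block of a's, so pumping y breaks the balance:

```python
def in_language(s):
    """Membership test for the context-free language {a^n b^n : n >= 0}."""
    n = len(s) // 2
    return s == "a" * n + "b" * n

def pumping_fails_everywhere(p):
    """For s = a^p b^p, check that every split s = xyz with |xy| <= p and
    |y| >= 1 yields a pumped string x y^2 z outside the language --
    exactly the contradiction used to show the language is not regular."""
    s = "a" * p + "b" * p
    for i in range(p + 1):             # x = s[:i]
        for j in range(i + 1, p + 1):  # y = s[i:j], non-empty, |xy| <= p
            x, y, z = s[:i], s[i:j], s[j:]
            if in_language(x + y * 2 + z):
                return False           # some split survives pumping
    return True                        # no split survives: contradiction

print(pumping_fails_everywhere(5))  # True
```

Exhaustively checking all splits for one fixed p is of course only a demonstration; the actual proof makes the same argument for an arbitrary pumping length.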

The pumping-lemma-like result for tree-adjoining languages is in fact called the "pumping lemma for tree-adjoining languages" throughout the literature. It makes it possible to prove that a language is not tree-adjoining, and therefore not mildly context-sensitive. Maybe this is the one you had in mind?
It was introduced by Vijay-Shanker in his PhD thesis, which is unfortunately not available online. It is nonetheless easy to find out how it works by searching the web. Many courses, for instance this one from the University of Tübingen, give a good account.

There are two pumping lemmas in common use. The pumping lemma for regular languages lets you prove that a language is not regular. The pumping lemma for context-free languages lets you prove that a language is not context-free, and hence not regular either.
There is no pumping lemma beyond these. To show that a language is context-sensitive, you could first use the context-free pumping lemma to prove that it is not context-free, and then supply a context-sensitive grammar that actually generates the language.
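For instance, a classic target is {a^n b^n c^n : n >= 1}: the context-free pumping lemma shows it is not context-free, while the standard non-contracting grammar S -> aSBC | aBC, CB -> BC, aB -> ab, bB -> bb, bC -> bc, cC -> cc generates it, which puts it in the context-sensitive class. A minimal Python membership check, just to illustrate the language itself (not the grammar machinery):

```python
def in_anbncn(s):
    """Decide membership in {a^n b^n c^n : n >= 1}, the standard example
    of a context-sensitive language that is not context-free."""
    n = len(s) // 3
    return n >= 1 and s == "a" * n + "b" * n + "c" * n

print(in_anbncn("aabbcc"))  # True
print(in_anbncn("aabbc"))   # False
```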

Related

Whether a given context-free language is regular

Is it decidable whether:
A given grammar is context-free?
A given recursive language is context-free?
A given context-free language is regular?
A given grammar gives us its language, and using that language and the pumping lemma we can easily decide whether the given grammar is context-free.
By using Greibach's theorem, we can show that it is undecidable whether a context-free language is regular or not.

Is there some other way to describe a formal language other than grammars?

I'm looking for the mathematical theory that deals with describing formal languages (sets of strings) in general, not just grammar hierarchies.
Grammars give you an algorithm that lists all possible strings in the language. You could specify that algorithm in any other way, but grammars are a concise and well-accepted format for doing so.
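That "grammar as a listing algorithm" view can be sketched as a breadth-first expansion of sentential forms; the toy grammar S -> aSb | ab used below is an illustrative assumption:

```python
from collections import deque

def enumerate_language(rules, start="S", limit=5):
    """Breadth-first expansion of sentential forms: returns terminal
    strings of the grammar in order of discovery."""
    found = []
    queue = deque([start])
    seen = {start}
    while queue and len(found) < limit:
        form = queue.popleft()
        # find the leftmost nonterminal, if any
        nt = next((c for c in form if c in rules), None)
        if nt is None:
            found.append(form)        # all terminals: a word of the language
            continue
        i = form.index(nt)
        for rhs in rules[nt]:
            new = form[:i] + rhs + form[i + 1:]
            if new not in seen:
                seen.add(new)
                queue.append(new)
    return found

grammar = {"S": ["aSb", "ab"]}        # generates {a^n b^n : n >= 1}
print(enumerate_language(grammar))    # ['ab', 'aabb', 'aaabbb', 'aaaabbbb', 'aaaaabbbbb']
```

For infinite languages the enumeration never finishes, which is exactly why a grammar is an algorithm that *lists* the language rather than a finite description you can read off directly.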
Another way is to list every string that belongs to the language -- this only works if the set of strings in the language is small (and certainly not when the set is infinite).
Regular expressions, for instance, are another formalism for describing languages. Although there are algorithms for converting between regular grammars and regular expressions in both directions, they are still two different theories. Automata (the plural of automaton) can also describe languages -- not only the DFAs and NFAs, which describe exactly the regular languages, but also 2DFAs and stack automata. A two-stack automaton, for example, is as powerful as a Turing machine. Finally, Turing machines themselves are a formalism for languages: for any Turing machine, the set of all strings on which it halts in a finite number of steps is a formally defined language.

chomsky hierarchy in plain english

I'm trying to find a plain (i.e. non-formal) explanation of the 4 levels of formal grammars (unrestricted, context-sensitive, context-free, regular) as set out by Chomsky.
It's been an age since I studied formal grammars, and the various definitions are now hard for me to visualize. To be clear, I'm not looking for the formal definitions you'll find everywhere (e.g. here and here -- I can google as well as anyone else), nor really formal definitions of any sort. Instead, what I was hoping to find is clean and simple explanations that don't sacrifice clarity for the sake of completeness.
Maybe you will get a better understanding if you recall the automata that recognize these languages.
Regular languages are recognized by finite automata. These have only finite knowledge of the past (their working memory is bounded), so whenever a language has suffixes that depend on arbitrarily long prefixes (e.g. the palindrome language), it cannot be regular.
Context-free languages are recognized by nondeterministic pushdown automata. They have a kind of knowledge of the past (the stack, which, unlike a finite automaton's memory, is unbounded), but a stack can only be viewed from the top, so you still don't have complete knowledge of the past.
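The "stack viewed only from the top" intuition can be sketched with an explicit stack. The following recognizes {a^n b^n : n >= 1}, a deterministic special case chosen for illustration (real pushdown automata are nondeterministic):

```python
def pda_accepts_anbn(s):
    """Deterministic pushdown-style recognizer for {a^n b^n : n >= 1}:
    push a marker for each 'a', pop one for each 'b',
    accept when the stack empties exactly at the end."""
    stack = []
    seen_b = False
    for c in s:
        if c == "a":
            if seen_b:
                return False      # an 'a' after a 'b': reject
            stack.append("A")
        elif c == "b":
            seen_b = True
            if not stack:
                return False      # more b's than a's
            stack.pop()
        else:
            return False          # symbol outside the alphabet
    return seen_b and not stack

print(pda_accepts_anbn("aaabbb"))  # True
print(pda_accepts_anbn("aabbb"))   # False
```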
Context-sensitive languages are recognized by nondeterministic linear-bounded Turing machines (linear bounded automata). They know the past and can deal with different contexts, because they are nondeterministic and can revisit all of their (linearly bounded) tape at any time.
Unrestricted languages (the recursively enumerable languages) are recognized by Turing machines. By the Church–Turing thesis, Turing machines can compute anything that is computable at all.
As for regular languages, there are many equivalent characterizations. They give many different ways of looking at regular languages. It is hard to give a "plain English" definition, and if you find it hard to understand any of the characterizations of regular languages, it is unlikely that a "plain English" explanation will help. One thing to note from the definitions and various closure properties is that regular languages embody the notion of "finiteness" somehow. But this is again hard to appreciate without better familiarity with regular languages.
Do you find the notion of a finite automaton to be not simple and clean?
Let me mention some of the many equivalent characterizations (at least for other readers):
Languages accepted by deterministic finite automata
Languages accepted by nondeterministic finite automata
Languages accepted by alternating finite automata
Languages accepted by two-way deterministic finite automata
Languages generated by left-linear grammars
Languages generated by right-linear grammars
Languages denoted by regular expressions.
A union of some equivalence classes of a right-congruence of finite index.
A union of some equivalence classes of a congruence of finite index.
The inverse image under a monoid homomorphism of a subset of a finite monoid.
Languages expressible in monadic second order logic over words.
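The first characterization in the list, a deterministic finite automaton, is also easy to write down directly. A minimal sketch in Python (the particular language, "even number of a's over {a, b}", is just an illustrative choice):

```python
def make_dfa(states, alphabet, delta, start, accepting):
    """Return a membership function for the DFA (Q, Sigma, delta, q0, F)."""
    def accepts(word):
        q = start
        for c in word:
            if c not in alphabet:
                return False          # symbol outside the alphabet
            q = delta[(q, c)]         # deterministic transition
        return q in accepting
    return accepts

# DFA for: words over {a, b} with an even number of a's
even_as = make_dfa(
    states={"even", "odd"},
    alphabet={"a", "b"},
    delta={("even", "a"): "odd", ("odd", "a"): "even",
           ("even", "b"): "even", ("odd", "b"): "odd"},
    start="even",
    accepting={"even"},
)
print(even_as("abab"))  # True
print(even_as("aaa"))   # False
```

The two states are exactly the two equivalence classes of the right-congruence mentioned above, which is how the automaton and the algebraic characterizations line up.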
Regular: membership in these languages can be answered yes/no by a finite automaton.
Context-free: given an input word, a machine with finite control plus a stack can always answer yes/no membership.
Context-sensitive: as long as no production in the grammar shrinks (every α -> β has |α| ≤ |β|), we can answer yes/no using finite control plus a chunk of memory linear in the size of the input.
Recursively enumerable: the machine can answer yes, but in the "no" case it may run forever.
See this video for a full explanation.

How does yacc generate the syntactic parser from grammar rules?

I've understood how lexical analysis works, but I have no idea how the syntactic analysis is done, even though in principle the two should be similar (the only difference lies in the type of their input symbols: characters versus tokens), yet the generated parser code is very different.
In particular, there is nothing like yy_action or yy_lookahead in the lexical analyzer...
The grammars used to generate lexical analyzers are generally regular grammars, while the grammars used to generate syntactic analyzers are generally context-free grammars. Although they might look the same on the surface, they have very different characteristics and capabilities. Regular grammars can be recognized by deterministic finite automata, which are relatively simple to construct and fast to run. Context-free grammars are more challenging to build a recognizer for, and a parser generator tool will usually handle only a subset of the context-free grammars. For example, yacc constructs parsers for context-free grammars that are also LALR(1) grammars, using a table-driven push-down automaton.
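As a sketch of what such a table-driven push-down parser looks like, here is a hand-built SLR(1) parser in Python for the toy grammar E -> E + n | n. The ACTION/GOTO tables play the same role as the action and lookahead tables a parser generator emits, though the layout here is a simplified illustration, not yacc's actual encoding:

```python
# Hand-built SLR(1) tables for the toy grammar:  E -> E + n | n
ACTION = {
    (0, "n"): ("shift", 2),
    (1, "+"): ("shift", 3),
    (1, "$"): ("accept", None),
    (2, "+"): ("reduce", 2), (2, "$"): ("reduce", 2),
    (3, "n"): ("shift", 4),
    (4, "+"): ("reduce", 1), (4, "$"): ("reduce", 1),
}
GOTO = {(0, "E"): 1}
RULES = {1: ("E", 3), 2: ("E", 1)}   # rule number -> (lhs, length of rhs)

def parse(tokens):
    """Table-driven shift-reduce (LR) parse; returns True iff accepted."""
    tokens = list(tokens) + ["$"]    # end-of-input marker
    stack = [0]                      # stack of parser states
    pos = 0
    while True:
        act = ACTION.get((stack[-1], tokens[pos]))
        if act is None:
            return False                     # syntax error
        kind, arg = act
        if kind == "shift":
            stack.append(arg)                # push next state, consume token
            pos += 1
        elif kind == "reduce":
            lhs, rhs_len = RULES[arg]
            del stack[-rhs_len:]             # pop |rhs| states
            stack.append(GOTO[(stack[-1], lhs)])
        else:                                # accept
            return True

print(parse(["n", "+", "n", "+", "n"]))  # True
print(parse(["n", "+", "+"]))            # False
```

The whole point of a tool like yacc is that it computes tables of this shape automatically from the grammar, which is why the generated code looks nothing like the hand-written switch statements of a lexer.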
For more information on parsing, I would highly recommend Parsing Techniques, which walks through all the nuances of parsing in excruciating (but well described!) detail.

When you are proving a language is decidable, what are you effectively doing?

When you are proving a language is decidable, what are you effectively doing?
If you are asking HOW it is done, I'm unsure, but I can check.
Basically, a decidable language is one for which you can construct an algorithm (i.e. a Turing machine) that halts on ANY finite input, either accepting or rejecting it.
An undecidable language is one that is not decidable.
http://en.wikipedia.org/wiki/Recursive_language ... more on the subject can easily be found; this link has only a quick mention of the term.
P.S. So, by constructing the above-mentioned algorithm, you are effectively proving that the language is decidable.
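As a concrete, if trivial, illustration: a decider is just a procedure that is guaranteed to halt with yes or no on every finite input. The language of balanced parentheses, for example, is decidable because the following always terminates:

```python
def decides_balanced(s):
    """A decider for the language of balanced parentheses: it halts with
    True or False on EVERY finite input and never loops forever --
    which is exactly what makes the language decidable."""
    depth = 0
    for c in s:
        if c == "(":
            depth += 1
        elif c == ")":
            depth -= 1
            if depth < 0:
                return False   # closing with nothing open
        else:
            return False       # symbol outside the alphabet
    return depth == 0

print(decides_balanced("(()())"))  # True
print(decides_balanced("(()"))     # False
```

The single for-loop over the input makes the halting guarantee obvious here; for a general language, proving that the algorithm halts on every input is the substance of the decidability proof.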