This case is different, I think: I have a word and about 100 regexes. I want to check which of the regexes the word matches. How do I do that in an optimised way?
The most efficient way would be to combine all of those regular expressions into a deterministic finite automaton (a finite state machine). Then run the string through that finite state machine.
Michael Sipser's Introduction to the Theory of Computation explains how to do this. The construction is fairly involved, hence the reference to the book.
After you have constructed the DFA by hand, you can implement it in code.
There are tools that can do this for you, such as flex. flex takes the regular expressions as input and generates the DFA as a .c file, which you can then use in your project. You can configure flex to return a token indicating which regular expression was matched.
flex is a unix tool and is part of OS X 10.8.
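As a lighter-weight illustration of the same one-pass idea, here is a sketch in Python (note that Python's re module uses backtracking rather than a true DFA, and the pattern list and group names here are invented for the example):

    import re

    # Stand-ins for the ~100 regexes (hypothetical patterns).
    patterns = [r"\d+", r"[a-z]+", r"[A-Z][a-z]*"]

    # Combine them into one alternation, tagging each alternative with a
    # named group so a single scan tells you which pattern matched.
    combined = re.compile("|".join(f"(?P<r{i}>{p})" for i, p in enumerate(patterns)))

    def first_matching_pattern(word):
        m = combined.fullmatch(word)
        return m.lastgroup if m else None  # e.g. "r1" -> patterns[1]

    print(first_matching_pattern("hello"))  # r1
    print(first_matching_pattern("Hello"))  # r2
    print(first_matching_pattern("123!"))   # None

Keep in mind that an alternation reports only the first alternative that matches; if you need every matching regex, you still have to test each one, or build the combined DFA the answer describes, which tools like flex do for you.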
Right now we would like to have grammars for PL/I and COBOL based on ANTLR4. Does anyone provide these grammars?
If not, can you please share your thoughts/experience on developing these grammars from scratch?
Thanks
I assume you mean IBM PL/I and COBOL. (Not many other PL/Is around, but I don't think that really changes the answer much).
The obvious place to look for mature ANTLR grammars is the ANTLR3 grammar library; there are no PL/1 or COBOL grammars there. The ANTLR4 (a very new, radical, backwards-incompatible reengineering of ANTLR3) main page talks about Java and C#; no hint of PL/1 or COBOL there either; given its newness, no surprise. If you are really lucky, somebody who has one may speak up and share it.
Developing such grammars is difficult for several reasons (based on personal experience building production-quality parsers for these two specific languages, using a very strong parser system different from ANTLR [see my bio for more details]):
The character set and column layout rules (columns 1-5, 6 and 72-80 are special) may be an issue: these languages were historically written in EBCDIC, in punch-card 80-column format, without line-break characters between lines. Translation to ASCII sometimes produces nasty glitches; the ASCII end-of-line character is occasionally found in the middle of COBOL literal strings as a binary value, and because it has the same code in EBCDIC and ASCII, after translation it will be (and appear to be) an ASCII newline character. Character strings can also be long but split across multiple lines, yet columns 72-80 by definition have to be ignored. Column 6 may contain a "D" character, which determines whether the following source line is interpreted as "debug" or not. This means you need to get 80-column processing right. I don't know what ANTLR has to support processing characters-in-column-areas. You'll also need to worry about DBCS encoding of string literals, and variations of that when the source code is used in non-English-speaking countries such as Japan.
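For concreteness, here is a minimal sketch in Python of the fixed-column split such a reader has to perform before any real lexing can start (the field names are my own invention, and exact column conventions vary by compiler; this follows the common COBOL reference format):

    def split_card(line):
        """Split one 80-column card image into its fixed fields."""
        line = line.ljust(80)         # pad short lines to full card width
        return {
            "sequence":  line[0:6],   # sequence-number area
            "indicator": line[6],     # e.g. '*' comment, '-' continuation, 'D' debug
            "code":      line[7:72],  # the source text that actually gets lexed
            "ignored":   line[72:80], # identification area: must be skipped
        }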
These languages are large and complex; IBM has had 40 years to decorate them with cruft. The IBM COBOL manual is some 600 pages ... then you discover that COBOL also includes a Report Writer, which is another 600-page document. Capturing all the nuances of the lexical tokens and the grammar rules will take effort, and you have to do that from the IBM manuals, which don't contain nice BNF-style descriptions, which means guessing from the textual description and some examples. For COBOL, expect several thousand grammar rules; PL/1 is less complicated in the abstract. Expect a certain amount of "lies": we've encountered a number of places where the reference documentation clearly says certain things are not legal, and yet the IBM compilers (fed real, running source code) accept them, and vice versa. The only way you find these is by empirical experiment.
Both languages have constructs that are difficult to parse, e.g., requiring arbitrary lookahead and/or tolerating local ambiguity. ANTLR4 is, from my understanding, much better than ANTLR3 on these, but that doesn't mean these aspects will be easy. PL/1 is particularly nasty in this regard: it has no reserved keywords, but hundreds of keywords-in-context. To resolve these, one has to get the lexer and the parser to cooperate, and even then there may be many locally ambiguous parses. ANTLR3 doesn't do these well; ANTLR4 is supposed to be better, but I don't know how it handles this, if it does at all.
To verify these parsers are right, you will need to run them on millions of lines of code (which means you have to have access to such code samples), and correct any errors you find. This takes a long time (in our case, several years of more or less continuous work/improvement to get production quality grammars that work on large code bases). You might be miraculously faster than this; good luck.
You need to build a preprocessor for COBOL (COPY ... REPLACING), whose details are poorly documented, and eventually another one for PL/1 (whose preprocessor I understand to be fully Turing-capable). A toy sketch of the COBOL case follows.
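To give a flavor of what that preprocessor does, here is a deliberately oversimplified sketch in Python (the copybook text and names are invented; the real COPY ... REPLACING rules work on tokens and pseudo-text, not raw strings, and are much subtler):

    # Toy copybook library (hypothetical).
    copybooks = {
        "PAYREC": "01 :TAG:-RECORD.\n   05 :TAG:-AMOUNT PIC 9(7)V99.",
    }

    def expand_copy(name, replacing):
        """Expand COPY <name> REPLACING old BY new, as naive text substitution."""
        text = copybooks[name]
        for old, new in replacing.items():
            text = text.replace(old, new)
        return text

    print(expand_copy("PAYREC", {":TAG:": "PAY"}))
    # 01 PAY-RECORD.
    #    05 PAY-AMOUNT PIC 9(7)V99.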
After you build a parser, you need to capture a syntax tree; here ANTLR4 is supposed to be pretty good in that it will capture one for the grammar you give it. That may or may not be the AST you want; with several thousand grammar rules, I'd expect not. ANTLR3 requires you to add, manually, indications of where and how to form the AST.
After you get the AST, you'll want to do something with it. This means you will need to build at least symbol tables (mappings from identifier instances to their declarations and any related type information). ANTLR provides nothing special to support this AFAIK, except support for walking the ASTs. This, too, is hard to get right: COBOL has crazy rules about how an unqualified identifier reference is resolved to a specific data field, provided there are no other conflicting interpretations (see the toy sketch below). (There's lots more to Life After Parsing if you want good semantic information about the program; see my bio for more details. For each of these semantic aspects, you have to develop them and then, for validation, go back and run them on large code bases again.)
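As a toy illustration of the kind of resolution involved (the data and the rule here are drastically simplified inventions, not the actual COBOL qualification rules), consider resolving unqualified data references in Python:

    # Map each data name to the group items that declare it (hypothetical data).
    declarations = {
        "CUSTOMER-NAME": ["CUSTOMER-RECORD"],
        "AMOUNT": ["INVOICE-RECORD", "PAYMENT-RECORD"],  # needs qualification
    }

    def resolve(name):
        """Resolve an unqualified reference only if exactly one declaration fits."""
        owners = declarations.get(name, [])
        if len(owners) == 1:
            return f"{name} OF {owners[0]}"
        raise LookupError(f"{name}: ambiguous or undeclared reference")

    print(resolve("CUSTOMER-NAME"))  # CUSTOMER-NAME OF CUSTOMER-RECORD
    # resolve("AMOUNT") would raise: ambiguous without qualification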
TL;DR
Building parsers (well, "front ends") for these languages is a lot of work no matter which parsing engine you choose. That likely explains why they aren't already in ANTLR's grammar zoo.
Have a look at the open-source COBOL-85 parser from ProLeap, based on ANTLR4, which builds ASTs and ASGs as well.
And, best of all, it really works!
https://github.com/uwol/proleap-cobol-parser
I am not aware of a comparable PL/I grammar, but a very good start is the EBNF definition by Ralf Lämmel (CWI, Amsterdam) & Chris Verhoef (WINS, Universiteit van Amsterdam):
http://www.cs.vu.nl/grammarware/browsable/os-pli-v2r3/
I'm looking for the mathematical theory which deals with describing formal languages (sets of strings) in general, and not just grammar hierarchies.
Grammars give you the algorithm that lists all possible strings in the language. You could specify the algorithm any other way, but grammars are a concise and well-accepted format to do so.
Another way is to list every string that belongs to the language -- this will only work if the set of strings in the language is small (and definitely not when the set is infinite).
Regular expressions, for instance, are a formalism for describing a set of languages. Although there are algorithms for converting between regular grammars and regular expressions in both directions, they are still two different theories. Automata (the plural of automaton) can also help you describe languages: not just DFAs and NFAs, which describe exactly the regular languages, but also 2DFAs and stack automata. A two-stack automaton, for example, is as powerful as a Turing machine. Finally, Turing machines themselves are a formalism for languages: for any Turing machine, the set of all strings on which it halts in a finite number of steps is a formally defined language.
I'm trying to find a plain (i.e. non-formal) explanation of the 4 levels of formal grammars (unrestricted, context-sensitive, context-free, regular) as set out by Chomsky.
It's been an age since I studied formal grammars, and the various definitions are now hard for me to keep straight. To be clear, I'm not looking for the formal definitions you'll find everywhere (e.g. here and here -- I can google as well as anyone else), or really even formal definitions of any sort. Instead, what I was hoping to find was clean and simple explanations that don't sacrifice clarity for the sake of completeness.
Maybe you'll get a better understanding if you remember the automata that recognize these languages.
Regular languages are recognized by finite automata. They have only finite knowledge of the past (their working memory is bounded), so whenever you have a language whose suffixes depend on arbitrarily long prefixes (e.g., the palindrome language), it cannot be a regular language.
Context-free languages are recognized by nondeterministic pushdown automata. They have a kind of knowledge of the past (the stack, which, in contrast to a finite automaton's memory, is unbounded), but the stack can only be viewed from the top, so you don't have complete knowledge of the past.
Context-sensitive languages are recognized by linear-bounded nondeterministic Turing machines. They know the past and can deal with different contexts because they are nondeterministic and can access all of the past at any time.
Unrestricted languages are recognized by Turing machines. According to the Church-Turing thesis, Turing machines are able to compute anything that is effectively computable at all.
As for regular languages, there are many equivalent characterizations. They give many different ways of looking at regular languages. It is hard to give a "plain English" definition, and if you find it hard to understand any of the characterizations of regular languages, it is unlikely that a "plain English" explanation will help. One thing to note from the definitions and various closure properties is that regular languages embody the notion of "finiteness" somehow. But this is again hard to appreciate without better familiarity with regular languages.
Do you find the notion of a finite automaton to be not simple and clean?
Let me mention some of the many equivalent characterizations (at least for other readers):
Languages accepted by deterministic finite automata
Languages accepted by nondeterministic finite automata
Languages accepted by alternating finite automata
Languages accepted by two-way deterministic finite automata
Languages generated by left-linear grammars
Languages generated by right-linear grammars
Languages generated by regular expressions.
A union of some equivalence classes of a right-congruence of finite index.
A union of some equivalence classes of a congruence of finite index.
The inverse image under a monoid homomorphism of a subset of a finite monoid.
Languages expressible in monadic second order logic over words.
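To make the first characterization concrete, here is a minimal DFA sketch in Python, accepting the binary strings with an even number of 1s (the states and table are invented for the example):

    # Transition table: (state, symbol) -> next state.
    delta = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd",  "0"): "odd",  ("odd",  "1"): "even",
    }

    def accepts(word, start="even", accepting=("even",)):
        """Run the DFA over the word; accept iff we end in an accepting state."""
        state = start
        for symbol in word:
            state = delta[(state, symbol)]
        return state in accepting

    print(accepts("1001"))  # True: two 1s
    print(accepts("1011"))  # False: three 1s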
Regular: These languages can be decided (yes/no) with finite automata.
Context free: Given an input word, we can always answer yes/no as to membership, using a state machine plus a stack (a pushdown automaton).
Context sensitive: As long as no production in the grammar shrinks (in every production α -> β, the right side is at least as long as the left), we can answer yes/no using a state machine plus a chunk of memory that is linear in the size of the input.
Recursively enumerable: It can answer yes, but in the "no" case it may go into an infinite loop.
See this video for a full explanation.
I just wanted to know whether it is 100% possible, if my language is Turing-complete, to write a program in it that prints itself out (of course without using a file-reading function).
So if the language has just the things really necessary to make it Turing-complete (I would prove that by translating Brainf*ck code to it), like output, variables, conditions and gotos (hell yes, gotos), can I try writing a quine in it?
I'm also asking this because I'm not sure that a quine fits directly under the thesis that a Turing machine is capable of any computational task.
I just want to know so I don't try for years without knowing that it may be impossible.
Any programming language which is Turing-complete, and which is able to output any string (by a computable function of the string as program; this is a technical condition that is satisfied in every programming language in existence), has a quine program (and, in fact, infinitely many quine programs, and many similar curiosities), as follows from the fixed-point theorem.
See here
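To see that such programs really exist, here is the classic construction in Python (one standard fixed-point trick, not the only one):

    # The string is a template for the whole program; printing the template
    # formatted with itself reproduces the source exactly.
    s = 's = %r\nprint(s %% s)'
    print(s % s)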
I ran into this issue a couple of months ago.
While writing a quine doesn't necessarily prove that a language is Turing-complete, it is a strong suggestion ;) As far as Turing-completeness goes, if you can (like you said) provide a valid translation from a known Turing-complete language (such as Brainf*ck) into your language, then your language is Turing-complete.
That being said, any Turing-complete language that can output a string should be able to generate a quine. Also, from Wikipedia:
A quine is a fixed point of an execution environment, when the execution environment is viewed as a function. Quines are possible in any programming language that has the ability to output any computable string, as a direct consequence of Kleene's recursion theorem. For amusement, programmers sometimes attempt to develop the shortest possible quine in any given programming language.
It is possible to have a programming language that cannot print all the symbols in its representation. For example, the I/O may be limited to 7-bit ASCII characters with language keywords in Arabic. That's the only exception I can think of.
Well, technically, not always. According to the proof on Wikipedia, the programming language has to be an admissible numbering. Practical and sane Turing-complete programming languages are all admissible numberings. A Turing-complete programming language is an admissible numbering if it's possible to translate, in both directions, between it and another admissible numbering.
An example of a Turing-complete programming language that is not an admissible numbering:
The source code always contains one or two doublequoted escaped strings. If the input is empty, output the first string if there are two strings, or loop forever if there is one. Otherwise, evaluate the last string in Python, using the original input as input.
It's not an admissible numbering because, given a Python program, we would have to know its behavior on empty input in order to translate it into this language. But we may never know whether it loops forever on empty input, since we cannot solve the halting problem. We know a translation always exists, though.
It's impossible to write quines in this language.
I've got a BNF and EBNF for a grammar. The BNF is obviously more verbose. I have a fairly good idea as far as using the BNF to build a recursive-descent parser; there are many resources for this. I am having trouble finding resources to convert an EBNF to a recursive-descent parser. Is this because it's more difficult? I recall from my CS theory classes that we went over EBNFs, but we didn't go over converting them into a recursive-descent parser. We did go over converting BNF's into a recursive-descent parser.
The reason I'm asking is because the EBNF is more compact.
From looking at the EBNF's in general, I notice that terms enclosed between { and } can be converted into a while loop. Are there any other guidelines or rules?
You should investigate so-called metacompilers, which essentially compile EBNF into recursive-descent parsers. How they do it is exactly the answer to your question.
(It's pretty straightforward, but it's good to understand the details.)
A really wonderful paper is the "MetaII" paper by Val Schorre. This is metacompiler technology from honest-to-God 1964. In 10 pages, he shows you how to build a metacompiler, and provides not just that, but another compiler too and the output of both! There's an astonishing moment that you come to if you go build one of these, where you realize how the metacompiler compiles itself using its own grammar. This moment got me hooked on compilers back in about 1970, when I first tripped over this paper. This is one of those computer science papers that everybody in the software business should read.
James Neighbors (the inventor of the term "domain" in software engineering, and builder of the first program transformation system, based on these metacompilers) has a great online MetaII tutorial, for those of you who don't want the do-it-from-scratch experience. (I have nothing to do with this except that Neighbors and I were undergraduates together.)
Both ways are a fine way to learn about metacompilers and generating parsers from EBNF.
The key idea is that the left-hand side of a rule creates a function that parses that nonterminal: it returns true and advances the input stream on a match, and returns false without advancing the input stream otherwise.
The contents of the function are determined by the right-hand side. Literal tokens are matched directly.
Nonterminals cause calls to the functions generated for the other rules.
Kleene star maps to while loops, and alternations map to conditional branches. What EBNF doesn't address, and the metacompilers do, is: how does parsing do anything other than saying "matched" or not?
The secret is weaving output operations into the EBNF. The MetaII paper makes all this crystal clear.
Neither is harder than the other. It is really the difference between implementing something iteratively and implementing something recursively. In BNF, everything is recursive. In EBNF, some of the recursion is expressed iteratively. There are different variations in EBNF syntax, so I'll just use the English... "zero or more" is a simple while loop as you have discovered. "One or more" is the same as one followed by "zero or more". "Zero or one times" is a simple if statement. That should cover most of the cases.
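As a sketch of those mappings in Python (the grammar rule, token list, and method names are invented for illustration), take the EBNF rule list = item { "," item }:

    class Parser:
        def __init__(self, tokens):
            self.tokens = tokens
            self.pos = 0

        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else None

        def match(self, token):
            """Literal token: consume it and return True if it is next."""
            if self.peek() == token:
                self.pos += 1
                return True
            return False

        def item(self):
            """Stand-in nonterminal: here an item is any identifier token."""
            tok = self.peek()
            if tok is not None and tok.isidentifier():
                self.pos += 1
                return True
            return False

        def list_(self):
            """EBNF: list = item { "," item }"""
            if not self.item():        # the one mandatory item
                return False
            while self.match(","):     # { ... } becomes a while loop
                if not self.item():
                    return False
            return True

    print(Parser(["a", ",", "b", ",", "c"]).list_())  # True

An optional [ x ] would likewise become a single match that is allowed to fail, and x { x } gives "one or more".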
The early metacompilers META II and TREEMETA and their kin are not exactly recursive-descent parsers. They were described as using recursive functions; that just meant the functions could call themselves.
We do not call C a recursive language. A C or C++ function is recursive in the same way the early metacompilers are recursive.
Recursion can be used, but they were programming languages: recursion is generally used only when analyzing nested language constructs, for example parenthesized expressions and nested blocks.
They are more of an LR / recursive-descent combination. CWIC, the last documented one, has extensive backtracking and lookahead features. The '-' not operator can match any language construct and inverts its success or failure; -term fails if a term is matched, for example, and the input is never advanced. The '?' operator looks ahead and matches any language construct: ?expr, for example, would try to parse an expr. The construct matched by the lookahead '?' is not kept, nor is the input advanced.