Is it possible in ANTLR 4 to create a parser rule that takes a token type as an argument, i.e. a sort of rule
list[elem Token] : '[' elem (',' elem)* ']';
which should match a list of tokens of type 'elem'. For example, list[ID] should match a list of identifiers, while list[String] should match a list of strings, both following the syntax given in the above rule.
No, such semantic checks are generally done after parsing, in a listener or visitor (which ANTLR generates as well).
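For illustration, here is a minimal sketch of such a post-parse check, assuming a grammar along the lines of list : '[' value (',' value)* ']' ; with value : ID | STRING ;, and the classes ANTLR would generate for a grammar named MyGrammar (all of these names are hypothetical):

// The parser accepts any mix of IDs and strings; this listener then
// enforces that all elements of one list share the same token type.
public class HomogeneousListListener extends MyGrammarBaseListener {
    @Override
    public void exitList(MyGrammarParser.ListContext ctx) {
        int expectedType = -1;
        for (MyGrammarParser.ValueContext value : ctx.value()) {
            int type = value.getStart().getType();
            if (expectedType == -1) {
                expectedType = type;              // the first element fixes the type
            } else if (type != expectedType) {
                throw new IllegalStateException(
                        "mixed element types in list at line " + value.getStart().getLine());
            }
        }
    }
}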
I'm writing Java software to parse SQL queries. To do so I'm using ANTLR with presto.g4.
The code I'm currently using is pretty standard:
PrestoLexer lexer = new PrestoLexer(
        new CaseChangingCharStream(CharStreams.fromString(query), true));
lexer.removeErrorListeners();
lexer.addErrorListener(errorListener);

CommonTokenStream tokens = new CommonTokenStream(lexer);
PrestoParser parser = new PrestoParser(tokens);
I wonder whether it's possible to pass a parameter to the lexer so that the lexing differs depending on that parameter?
Update:
I've used @Mike's suggestion below: my lexer now inherits from the built-in lexer and adds a predicate function. My issue is now purely a grammar one.
This is my string definition:
STRING
    : '\'' ( '\\' .                                                  // match \ followed by any char
           | '\\\\' . {HelperUtils.isNeedSpecialEscaping(this)}?     // match \\ followed by any char, when the predicate holds
           | ~[\\']                                                  // match anything other than \ and '
           | '\'\''                                                  // match ''
           )*
      '\''
    ;
I sometimes have a query with weird escaping for which the predicate returns true. For example:
select
table1(replace(replace(some_col,'\\'',''),'\"' ,'')) as features
from table1
And when I try to parse it, I'm getting:
'\'',''),'
as a single string.
How can I handle this?
I don't know what you need the parameter for, but you mentioned SQL, so let me present a solution I have been using for years: predicates.
In MySQL (which is the dialect I work with) the syntax differs depending on the MySQL version number. So in my grammar I use semantic predicates to switch language parts that belong to a specific version on and off. The approach is simple:
test:
    {serverVersion < 80014}? ADMIN_SYMBOL
    | ONLY_SYMBOL
;
The ADMIN keyword is only acceptable for version < 8.0.14 (just an example, not true in reality), while the ONLY keyword is a possible alternative in any version.
The variable serverVersion is a member of a base class from which I derive my parser. That base class can be specified in the grammar options:
options {
    superClass = MySQLBaseRecognizer;
    tokenVocab = MySQLLexer;
}
The lexer is also derived from that class, so the version number is available in both the lexer and the parser (in addition to other important settings like the SQL mode). With this approach you can also implement more complex predicate functions that need additional processing.
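A minimal sketch of what the parser-side base class could look like (the real MySQLBaseRecognizer does considerably more; the field name serverVersion matches the predicate above, everything else here is illustrative):

import org.antlr.v4.runtime.Parser;
import org.antlr.v4.runtime.TokenStream;

public abstract class MySQLBaseRecognizer extends Parser {
    // The version number the predicates compare against, e.g. 80014 for MySQL 8.0.14.
    public int serverVersion = 0;

    public MySQLBaseRecognizer(TokenStream input) {
        super(input);
    }
}

Since the generated parser extends this class, serverVersion is an inherited field, which is what makes an embedded predicate like {serverVersion < 80014}? compile.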
You can find the full code + grammars at the MySQL Workbench Github repository.
I wonder whether it's possible to pass a parameter to the lexer so that the lexing differs depending on that parameter?
No, the lexer works independently from the parser. You cannot direct the lexer while parsing.
I am looking for a solution to a simple problem.
The example :
SELECT date, date(date)
FROM date;
This is a rather stupid example where a table, its column, and a function all have the name "date".
A snippet of my grammar (very simplified):
simple_select
    : SELECT selected_element (',' selected_element)* FROM from_element ';'
    ;

selected_element
    : function
    | REGULAR_WORD
    ;

function
    : REGULAR_WORD '(' function_argument ')'
    ;

function_argument
    : REGULAR_WORD
    ;

from_element
    : REGULAR_WORD
    ;
DATE: D A T E;
FROM: F R O M;
SELECT: S E L E C T;
REGULAR_WORD
    : (SIMPLE_LETTER) (SIMPLE_LETTER | '0'..'9')*
    ;

fragment SIMPLE_LETTER
    : 'a'..'z'
    | 'A'..'Z'
    ;
DATE is a keyword (it is used somewhere else in the grammar).
If I want it to be recognised by my grammar as a normal word, here are my solutions:
1) I add it everywhere I used REGULAR_WORD, next to it.
Example :
selected_element
    : function
    | REGULAR_WORD
    | DATE
    ;
=> I don't want this solution. DATE is not my only keyword, and I have many rules using REGULAR_WORD, so I would need to add a list of many (50+) keywords like DATE to many (20+) parser rules: it would be absolutely ugly.
PROS: make a clean tree
CONS: make a dirty grammar
2) I use a parser rule in between to catch all those keywords, and then I replace every occurrence of REGULAR_WORD with that parser rule.
Example :
word
    : REGULAR_WORD
    | DATE
    ;

selected_element
    : function
    | word
    ;
=> I do not want this solution either, as it adds one more parser rule to the tree and pollutes the information (I do not want to know that "date" is a word; I want to know that it's a selected_element, a function, a function_argument or a from_element...).
PROS: make a clean grammar
CONS: make a dirty tree
Either way, I have a dirty tree or a dirty grammar. Isn't there a way to have both clean?
I looked for aliases or a parser-rule equivalent of lexer fragments, but it doesn't seem like ANTLR 4 has any.
Thank you, have a nice day!
There are four different grammars for SQL dialects in the ANTLR 4 grammar repository, and all four of them use your second strategy. So it seems like there is a consensus among ANTLR 4 SQL grammar writers. I don't believe there is a better solution given the design of the ANTLR 4 lexer.
As you say, that leads to a bit of noise in the full parse tree, but the relevant non-terminal (function, selected_element, etc.) is certainly present and it does not seem to me to be very difficult to collapse the unit productions out of the parse tree.
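As a rough illustration of that collapsing step (using the hypothetical rule names from the question plus the extra word rule from option 2, in a grammar assumed to be called MySql so that ANTLR generates MySqlParser and MySqlBaseVisitor):

public class SelectedElementVisitor extends MySqlBaseVisitor<String> {
    @Override
    public String visitSelected_element(MySqlParser.Selected_elementContext ctx) {
        if (ctx.function() != null) {
            return visit(ctx.function());
        }
        // The intermediate word node is stepped over here, so whoever consumes
        // the visitor's result never sees it.
        return ctx.word().getText();
    }
}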
As I understand it, when ANTLR 4 was being designed, a decision was made to only automatically produce full parse trees, because the design of condensed ("abstract") syntax trees is too idiosyncratic to fit into a grammar DSL. So if you find an AST more convenient, you have the responsibility to generate one yourself. That's generally straightforward, although it involves a lot of boilerplate.
Other parser generators do have mechanisms which can handle "semireserved keywords". In particular, the Lemon parser generator, which is part of the Sqlite project, includes a %fallback declaration which allows you to specify that one or more tokens should be automatically reclassified in a context in which no grammar rule allows them to be used. Unfortunately, Lemon does not generate Java parsers.
Another similar option would be to use a parser generator which supports "scannerless" parsing. Such parsers typically use algorithms like Earley/GLL/GLR, capable of parsing arbitrary CFGs, to get around the need for more lookahead than can conveniently be supported in fixed-lookahead algorithms such as LALR(1).
This is the so-called keywords-as-identifiers problem, which has been discussed many times before. For instance, I asked a similar question on the ANTLR mailing list six years ago. There are also questions here on Stack Overflow touching this area, for instance Trying to use keywords as identifiers in ANTLR4; not working.
Terence Parr wrote a wiki article for ANTLR3 in 2008 that briefly describes two possible solutions:
This grammar allows "if if call call;" and "call if;".
grammar Pred;

prog: stat+ ;

stat: keyIF expr stat
    | keyCALL ID ';'
    | ';'
    ;

expr: ID ;

keyIF   : {input.LT(1).getText().equals("if")}? ID ;
keyCALL : {input.LT(1).getText().equals("call")}? ID ;

ID : 'a'..'z'+ ;
WS : (' '|'\n')+ {$channel=HIDDEN;} ;
You can make those semantic predicates more efficient by intern'ing those strings so that you can do integer comparisons instead of string compares.
The other alternative is to do something like this:
identifier : KEY1 | KEY2 | ... | ID ;
which is a set comparison and should be faster.
Normally, as @rici already mentioned, people prefer the solution where you keep all keywords in a dedicated rule and add that rule to your normal identifier rule (wherever such a keyword is allowed).
The other solution in the wiki can be generalized for any keyword by using a lookup table/list in an action in the ID lexer rule, which is used to check whether a given string is a keyword. This solution is not only slower, but also sacrifices clarity in your parser grammar, since you can no longer use keyword tokens in your parser rules.
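A rough sketch of that generalized lookup-table variant, assuming the lexer uses a superClass called SqlLexerBase (an invented name) whose helper an ID rule would call from an action or predicate:

import java.util.Set;

import org.antlr.v4.runtime.CharStream;
import org.antlr.v4.runtime.Lexer;

public abstract class SqlLexerBase extends Lexer {
    // Keywords to check the text of a freshly matched ID against.
    private static final Set<String> KEYWORDS = Set.of("select", "from", "date", "if", "call");

    protected SqlLexerBase(CharStream input) {
        super(input);
    }

    // In the lexer, getText() returns the text matched so far for the current
    // token, so this can be called from an action or predicate in the ID rule.
    protected boolean isKeyword() {
        return KEYWORDS.contains(getText().toLowerCase());
    }
}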
I have a lexer grammar that defines a lexer that is used in two ways: to identify tokens for a syntax-aware editor, and to identify tokens for the parser. In the first case, the lexer should return comments and whitespace, but in the second case, the comments and whitespace are not wanted. Do I need two different lexer classes, each defined by its own variant of the grammar? Or can I accomplish this with a single lexer by using channels? How?
If I need two separate grammars, I assume I can factor out all the rules except for comments and whitespace, and then import those rules from that separate "common" grammar.
Usually you filter out tokens (like whitespace) via token channels (or skip them entirely). This is part of your grammar, and hence you'd need two grammars if you want whitespace in one use case and not in the other. And yes, you can import a base grammar with all the common rules into specialized grammars which only hold the differences. You can even override rules (define, for example, the whitespace rule in the base grammar and redefine it in your main grammar).
But keep in mind that not filtering whitespace will have consequences for all your other rules. In that case you would have to explicitly add whitespace handling to your parser rules everywhere. For instance:
blah: a or b;
versus
blah: a WS* or WS* b;
Is there any API in ANTLR4 for obtaining the original productions from the grammar?
For example, if there was a rule:
functionHeader : identifier LPAREN parameterDecl RPAREN
... is there some function on the parser that, given the functionHeader rule, would return the list ["identifier", "LPAREN", "parameterDecl", "RPAREN"]?
Well, it is rarely as simple as the list of elements you specify there, but you can look at the augmented transition network (ATN) via parser.getATN(), then get the rule start state, etc. See ATNState.
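For a simple, alternative-free rule like the one above, a rough sketch of that walk could look like the following (class and method names of the helper are invented; it only follows the first outgoing transition of each state, so it breaks down as soon as a rule has alternatives or loops):

import java.util.ArrayList;
import java.util.List;

import org.antlr.v4.runtime.Parser;
import org.antlr.v4.runtime.atn.*;

public final class RuleElements {
    // Collect the element names along the single path through a rule's ATN.
    public static List<String> elementsOf(Parser parser, String ruleName) {
        ATN atn = parser.getATN();
        int ruleIndex = parser.getRuleIndexMap().get(ruleName);
        List<String> elements = new ArrayList<>();

        ATNState state = atn.ruleToStartState[ruleIndex];
        while (!(state instanceof RuleStopState)) {
            Transition t = state.transition(0);                 // first outgoing edge only
            if (t instanceof RuleTransition) {
                RuleTransition rt = (RuleTransition) t;
                elements.add(parser.getRuleNames()[rt.ruleIndex]);
                state = rt.followState;                         // continue after the sub-rule invocation
            } else if (t instanceof AtomTransition) {
                elements.add(parser.getVocabulary().getSymbolicName(((AtomTransition) t).label));
                state = t.target;
            } else {                                            // epsilon and similar transitions
                state = t.target;
            }
        }
        return elements;
    }
}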
I'd like to exclude certain tokens from the parse-tree in antlr4.
Say I have this definition:
assignStatement: assignable EQ expression EOS;
EQ: '=';
EOS: ';';
The resulting parse-tree contains the assignable, EQ, expression and EOS as children of assignStatement. Is there any way to get rid of EQ and EOS here, since I only need them during parse-time for matching purposes?
ANTLR 4 does not omit matched terminals from the parse tree. While your application does not require access to those tokens, our experience has been that new applications using a previously written grammar frequently require access to elements that earlier applications did not. By including all elements in the parse tree, we are accounting for this case in advance to improve the long-term maintainability of applications using ANTLR 4.
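In practice this usually just means not touching those terminals when walking the tree. A minimal sketch, assuming the rule above lives in a grammar called Stmt, so that ANTLR generates StmtParser and StmtBaseVisitor (both names hypothetical):

public class AssignVisitor extends StmtBaseVisitor<Void> {
    @Override
    public Void visitAssignStatement(StmtParser.AssignStatementContext ctx) {
        // Only the sub-rules of interest are visited; ctx.EQ() and ctx.EOS()
        // remain in the tree but are simply never accessed.
        visit(ctx.assignable());
        visit(ctx.expression());
        return null;
    }
}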