How can I hide parens in ANTLR4?

For example, take the input '(1+2)*3'.
The resulting parse tree looks like this: '(expr (expr ((expr (expr 1) + (expr 2)) ))*(expr 3))'
I would like to hide or delete the '(' and ')' nodes in the tree, since they are not needed any more. I tried to do this, but could not.
expr : ID LPAREN exprList? RPAREN
| '-' expr
| '!' expr
| expr op=('*'|'/') expr
| expr op=('+'|'-') expr
| ID
| INT
| LPAREN expr RPAREN //### Parens Here ####
;
LPAREN : '(' ;
RPAREN : ')' ;
What I want is NOT the following:
PAREN : ( '(' | ')' ) -> channel(HIDDEN) ;

Standard parser generator schemes separate parsing from tree building.
This allows one to design custom actions to build an AST, and fine-tune its structure to the targeted language (including leaving out concrete syntax such as "parentheses").
The price is that one must not only specify the grammar, but also specify the rules for building the AST. That makes defining a "grammar + tree builder" about twice as much work as just defining a grammar. When your grammar is tiny, this doesn't matter, but a tiny grammar usually means "toy problem". With big, real, production, gnarly grammars, it matters a lot; there is usually a bunch of initial churn in getting such grammars right, and the AST-building machinery just gets in the way during this phase. Clever people delay adding AST-building rules until the churn phase is over, but that only partially works, and it turns out you may want to reshape the grammar based on the AST you want to build, so this delay actually increases the churn somewhat. One also pays a maintenance cost: if your grammar has any scale, you will change it, and then the AST-building part must change, too.
My company builds a tool, the DMS Software Reengineering Toolkit, which contains a parser generator. We decided, the first to do so AFAIK, some 20 years ago, that this extra AST-building step was too much work for the benefit, given the many big grammars we expected to (and did) build. So we designed DMS to automatically build a concrete syntax tree as it parses. Voila: write a grammar, get a parser, and the tree comes for free. This decision has turned out to be a really good one.
The price is that the final tree retains all the concrete syntax, e.g., the parentheses. While it may not look elegant, it turns out that this does not matter much in practice when manipulating trees (inspecting, traversing, analyzing, modifying, ...). We've traded a bit of inelegance for much easier tree building and grammar maintenance.
The ANTLR guy(!) decided for ANTLR4, unlike his previous ANTLR1/2/3 systems, to follow our lead and switch to building "ASTs" automatically from the grammar, as concrete syntax trees. (I don't know if you can actually write your own AST-building rules to override the built-in behaviour in ANTLR4. Comments on this answer suggest that the way to get an AST from ANTLR4 is to walk the CST and build what you want. I'm not keen on that solution; it strikes me as paying the price of building and managing the AST while also paying the parsing overhead [time and space] of building the CST. If you only build small trees, maybe you don't care. For DMS, we regularly read thousands of files for processing together; space and time matter!)
For some discussion of how to make this a bit more elegant (effectively even more AST-like), see my SO answer on ASTs vs. CSTs.
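For the grammar in the question, walking the CST and building your own structure (as those comments suggest) can look like the following minimal sketch. It assumes the grammar above lives in a file called Expr.g4, so that ANTLR4 generates ExprParser and ExprBaseVisitor (those names are assumptions); the bracketed prefix rendering is just for illustration, build whatever AST suits you instead.
public class ParenDroppingVisitor extends ExprBaseVisitor<String> {
    @Override
    public String visitExpr(ExprParser.ExprContext ctx) {
        // Alternative LPAREN expr RPAREN: drop the parentheses, keep the inner expression.
        if (ctx.LPAREN() != null && ctx.ID() == null) {
            return visit(ctx.expr(0));
        }
        // Binary alternatives expr op=('*'|'/') expr and expr op=('+'|'-') expr.
        if (ctx.op != null) {
            return ctx.op.getText() + "[" + visit(ctx.expr(0)) + ", " + visit(ctx.expr(1)) + "]";
        }
        // Unary alternatives '-' expr and '!' expr.
        if (ctx.expr().size() == 1) {
            return ctx.getChild(0).getText() + "[" + visit(ctx.expr(0)) + "]";
        }
        // ID, INT and the call form ID LPAREN exprList? RPAREN: keep the original text.
        return ctx.getText();
    }
}
For the input '(1+2)*3' this produces '*[+[1, 2], 3]': the parentheses in the source steer the shape of the result but never appear in it.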

In ANTLR3 you can suppress useless tokens in the generated AST by putting the '!' operator after the corresponding tokens:
// remove the ',' comma from the output
list: LISTNAME LISTMEMBER (','! LISTMEMBER)*;
(from http://meri-stuff.blogspot.com/2011/09/antlr-tutorial-expression-language.html). Note that this tree operator was removed in ANTLR4, so it does not answer the question as asked; with ANTLR4 you leave the parse tree alone and ignore the unwanted tokens when you walk it, as sketched below.
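A hedged ANTLR4-flavoured sketch of that idea (the class names ListsParser and ListsBaseListener assume the list rule above lives in a grammar file called Lists.g4): the generated context exposes typed accessors, so a listener can collect just the LISTMEMBER tokens and never touch the ',' terminals.
import java.util.ArrayList;
import java.util.List;

import org.antlr.v4.runtime.tree.TerminalNode;

public class MemberCollector extends ListsBaseListener {
    final List<String> members = new ArrayList<>();

    @Override
    public void enterList(ListsParser.ListContext ctx) {
        // ctx.LISTMEMBER() returns only the LISTMEMBER terminals; the ',' tokens
        // stay in the parse tree but are simply never asked for.
        for (TerminalNode member : ctx.LISTMEMBER()) {
            members.add(member.getText());
        }
    }
}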

Related

ANTLR4 : clean grammar and tree with keywords (aliases ?)

I am looking for a solution to a simple problem.
The example:
SELECT date, date(date)
FROM date;
This is a rather stupid example where a table, its column, and a function all have the name "date".
The snippet of my grammar (very simplified):
simple_select
: SELECT selected_element (',' selected_element) FROM from_element ';'
;
selected_element
: function
| REGULAR_WORD
;
function
: REGULAR_WORD '(' function_argument ')'
;
function_argument
: REGULAR_WORD
;
from_element
: REGULAR_WORD
;
DATE: D A T E;
FROM: F R O M;
SELECT: S E L E C T;
REGULAR_WORD
: (SIMPLE_LETTER) (SIMPLE_LETTER | '0'..'9')*
;
fragment SIMPLE_LETTER
: 'a'..'z'
| 'A'..'Z'
;
DATE is a keyword (it is used somewhere else in the grammar).
If I want it to be recognised by my grammar as a normal word, here are my solutions :
1) I add it everywhere I used REGULAR_WORD, next to it.
Example:
selected_element
: function
| REGULAR_WORD
| DATE
;
=> I don't want this solution. DATE is not my only keyword, and I have many rules using REGULAR_WORD, so I would need to add a list of many (50+) keywords like DATE to many (20+) parser rules: it would be absolutely ugly.
PROS: make a clean tree
CONS: make a dirty grammar
2) I use a parser rule in between to get all those keywords, and then, I replace every occurrence of REGULAR_WORD by that parser rule.
Example:
word
: REGULAR_WORD
| DATE
;
selected_element
: function
| word
;
=> I do not want this solution either, as it adds one more parser rule to the tree and pollutes the information (I do not want to know that "date" is a word; I want to know that it is a selected_element, a function, a function_argument or a from_element...).
PROS: make a clean grammar
CONS: make a dirty tree
Either way, I end up with a dirty tree or a dirty grammar. Isn't there a way to have both clean?
I looked for aliases or a parser-fragment equivalent, but it doesn't seem like ANTLR4 has any?
Thank you, have a nice day!
There are four different grammars for SQL dialects in the Antlr4 grammar repository and all four of them use your second strategy. So it seems like there is a consensus among Antlr4 sql grammar writers. I don't believe there is a better solution given the design of the Antlr4 lexer.
As you say, that leads to a bit of noise in the full parse tree, but the relevant non-terminal (function, selected_element, etc.) is certainly present and it does not seem to me to be very difficult to collapse the unit productions out of the parse tree.
As I understand it, when Antlr4 was being designed, a decision was made to only automatically produce full parse trees, because the design of condensed ("abstract") syntax trees is too idiosyncratic to fit into a grammar DSL. So if you find an AST more convenient, you have the responsibility to generate one yourself. That's generally straightforward, although it involves a lot of boilerplate.
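For instance, here is a hedged sketch of that collapsing step, written only against the generic ANTLR4 runtime types (nothing here is specific to your grammar):
import org.antlr.v4.runtime.ParserRuleContext;
import org.antlr.v4.runtime.tree.ParseTree;

public final class UnitCollapser {
    /**
     * Starting from a node you already care about (e.g. each selected_element),
     * skip the single-child wrapper rules underneath it (word and friends) and
     * return the first descendant that either branches or is a terminal.
     */
    public static ParseTree collapse(ParseTree node) {
        while (node instanceof ParserRuleContext && node.getChildCount() == 1) {
            node = node.getChild(0);
        }
        return node;
    }
}
Apply it to each child you actually care about; the outer non-terminal (selected_element and so on) is still there to tell you what role the collapsed node plays.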
Other parser generators do have mechanisms which can handle "semireserved keywords". In particular, the Lemon parser generator, which is part of the Sqlite project, includes a %fallback declaration which allows you to specify that one or more tokens should be automatically reclassified in a context in which no grammar rule allows them to be used. Unfortunately, Lemon does not generate Java parsers.
Another similar option would be to use a parser generator which supports "scannerless" parsing. Such parsers typically use algorithms like Earley/GLL/GLR, capable of parsing arbitrary CFGs, to get around the need for more lookahead than can conveniently be supported in fixed-lookahead algorithms such as LALR(1).
This is the so-called keywords-as-identifiers problem and it has been discussed many times before. For instance, I asked a similar question on the ANTLR mailing list 6 years ago. There are also questions here on Stack Overflow touching this area, for instance Trying to use keywords as identifiers in ANTLR4; not working.
Terence Parr wrote a wiki article for ANTLR3 in 2008 that briefly describes two possible solutions:
This grammar allows "if if call call;" and "call if;".
grammar Pred;
prog: stat+ ;
stat: keyIF expr stat
| keyCALL ID ';'
| ';'
;
expr: ID
;
keyIF : {input.LT(1).getText().equals("if")}? ID ;
keyCALL : {input.LT(1).getText().equals("call")}? ID ;
ID : 'a'..'z'+ ;
WS : (' '|'\n')+ {$channel=HIDDEN;} ;
You can make those semantic predicates more efficient by intern'ing those strings so that you can do integer comparisons instead of string compares.
The other alternative is to do something like this
identifier : KEY1 | KEY2 | ... | ID ;
which is a set comparison and should be faster.
Normally, as @rici already mentioned, people prefer the solution where you keep all keywords in a separate rule and add that rule to your normal identifier rule (wherever such a keyword is allowed).
The other solution in the wiki can be generalized to any keyword by using a lookup table/list in an action in the ID lexer rule, which is used to check whether a given string is a keyword. This solution is not only slower, but also sacrifices clarity in your parser grammar, since you can no longer use keyword tokens in your parser rules.
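One hedged way to sketch that generalization (the helper class and the grammar fragments in the comments are illustrative assumptions, not ANTLR API): every word is lexed as a plain ID and a single shared table answers "is this spelling a keyword?", whether you consult it from the ID lexer action or, in the predicate style shown above, from the parser.
import java.util.Locale;
import java.util.Set;

// Hypothetical helper consulted from predicates such as
//   keyword   : {KeywordTable.isKeyword(_input.LT(1).getText())}?  ID ;
//   plainWord : {!KeywordTable.isKeyword(_input.LT(1).getText())}? ID ;
public final class KeywordTable {
    private static final Set<String> KEYWORDS = Set.of("select", "from", "date");

    public static boolean isKeyword(String text) {
        return KEYWORDS.contains(text.toLowerCase(Locale.ROOT));
    }
}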

Preferentially match shorter token in ANTLR4

I'm currently attempting to write a UCUM parser using ANTLR4. My current approach has involved defining every valid unit and prefix as a token.
Here's a very small subset of the defined tokens. I could make a cut-down version of the grammar as an example, but it seems like it shouldn't be necessary to resolve this problem (or to point out that I'm going about this entirely the wrong way).
MILLI_OR_METRE: 'm' ;
OSMOLE: 'osm' ;
MONTH: 'mo' ;
SECOND: 's' ;
One of the standard test cases is mosm, from which the lexer should generate the token stream MILLI_OR_METRE OSMOLE. Unfortunately, because ANTLR preferentially matches longer tokens, it generates the token stream MONTH SECOND MILLI_OR_METRE instead, which then causes the parser to raise an error.
Is it possible to make an ANTLR4 lexer try to match using shorter tokens first? Adding lookahead-type rules to MONTH isn't a great solution, as there are all sorts of potential lexing conflicts that I'd need to take account of (for example mol being lexed as MONTH LITRE instead of MOLE and so on).
EDIT:
StefanA below is of course correct; this is a job for a parser capable of backtracking (e.g. recursive descent, packrat, PEG and probably various others; Coco/R is one reasonable package to do this). In an attempt to avoid adding a dependency on another parser generator (or moving other bits of the project from ANTLR to this new generator), I've hacked my way around the problem like this:
MONTH: 'mo' { _input.La(1) != 's' && _input.La(1) != 'l' && _input.La(1) != '_' }? ;
// (note: this is a C# project; java would use _input.LA instead)
but this isn't really a very extensible or maintainable solution, and it has likely introduced other subtle issues I've not come across yet.
Your problem does not require shorter tokens to be preferred (in that case MONTH would never be matched). What you need is backtracking behaviour that depends on whether the rest of the text can be matched or not. Right?
ANTLR separates tokenization and parsing strictly. Consequently every solution to your problem will seem like a hack.
However other parser generators are specialized on problems like yours. Packrat Parsers (PEG) are backtracking and allow tokenization on the fly. Try out parboiled for this purpose.
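To make the backtracking idea concrete outside of any particular generator, here is a tiny PEG-style sketch; the prefix/atom split and the symbol lists are invented simplifications of UCUM, not its real tables.
import java.util.List;

public final class SimpleUnitParser {
    // Longest symbols first, so the greedy candidate is the one tried first.
    private static final List<String> ATOMS    = List.of("osm", "mol", "mo", "m", "s", "L");
    private static final List<String> PREFIXES = List.of("m");   // milli

    /** Reads the whole input as `atom` or `prefix atom`; returns null if neither fits. */
    public static String parse(String input) {
        // First alternative: a single atom covering the whole input.
        // For "mosm" no atom does, so this alternative fails as a whole ...
        for (String atom : ATOMS) {
            if (input.equals(atom)) {
                return "atom " + atom;
            }
        }
        // ... and only then do we fall back to the second alternative (this retry
        // is the backtracking): a prefix plus an atom that consume everything.
        for (String prefix : PREFIXES) {
            if (!input.startsWith(prefix)) {
                continue;
            }
            String rest = input.substring(prefix.length());
            for (String atom : ATOMS) {
                if (rest.equals(atom)) {
                    return "prefix " + prefix + " + atom " + atom;
                }
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(parse("mosm"));   // prefix m + atom osm
        System.out.println(parse("mol"));    // atom mol
    }
}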
It appears that the question is not being framed correctly.
I'm currently attempting to write a UCUM parser using ANTLR4. My current approach has involved defining every valid unit and prefix as a token.
But, according to the UCUM:
The expression syntax of The Unified Code for Units of Measure generates an infinite number of codes with the consequence that it is impossible to compile a table of all valid units.
The most to expect from the lexer is an unambiguous identification of the measurement string without regard to its semantic value. Similarly, a parser alone will be unable to validly select between unit sequences like MONTH LITRE and MOLE - both could reasonably apply to a leak rate - unless the problem space is statically constrained in the parser definition.
A heuristic, structural (explicitly identifying the problem space) or contextual (considering the relative nature of other units in the problem space), is most likely required to select the correct unit interpretation.
The best tool to use is the one that puts you in the best position to implement the heuristics necessary to disambiguate the unit strings. Antlr could do it using parse-tree walkers. Whether that is the appropriate approach requires further analysis.

ANTLR4 parse tree simplification

Is there any means to get ANTLR4 to automatically remove redundant nodes in generated parse trees?
More specifically, I've been experimenting with a grammar for GLSL and you end up with long linear sequences of "expressions" in the parse tree due to the rule forwarding needed to give the automatic handling of operator precedence.
Most of the generated tree nodes are simply "forward to the next level of precedence", so don't provide any useful syntactic information - you only really need the last expression node in each sequence (i.e. the point at which the rule forwarding stopped), or the point where it becomes an actual tree node with more than one child (i.e. an actual expression was encountered in the source) ...
I was hoping there would be an easy way to kill off the dummy intermediate expression nodes - this type of structure must be common in any grammar with operator precedence.
The basic structure of the grammar is a fairly direct clone taken from the Khronos specification for the language:
https://www.khronos.org/registry/gles/specs/3.1/es_spec_3.1.pdf
ANTLR v4 is able to generate code from a single recursive rule dealing with different precedence levels, if you use a grammar like this (example for basic math):
expr : '(' expr ')'
| '-' expr
| expr ('*'|'/') expr
| expr ('+'|'-') expr
| INT
;
ANTLR v3 was unable to do so and basically required you to write one rule per precedence level. So I'd advise you to rewrite your grammar to avoid these boilerplate rules.
Then, I think you're confusing the parse tree (aka concrete syntax tree) with the AST (abstract syntax tree). The AST is like a simplified version of the parse tree, which keeps only what's needed for your purpose. For instance, with the expr rule above, the AST wouldn't contain any node for parentheses, since the precedence is encoded in the tree itself and you usually don't need to know whether a part of a given expression was parenthesized or not.
Your program should build an AST from the parse tree and then go from there. Don't deal with parse trees directly, even if it seems convenient at first sight because the tool generates them for you. It'll quickly become cumbersome. Build your own tree structure (AST), tailored for the task at hand.
Use the Visitor implementation to access each node in sequence. Build your own tree by adding nodes to parents as they are visited. Decide at the time the node is visited whether to add it to your new tree or not. For example:
@Override
public T visitExpression(@NotNull AcParser.ExpressionContext ctx) {
    // Expressionable parent = getParent(Expressionable.class, ctx);
    // Class<? extends AcExpression> expClass = AcExpression.class;
    AcExpression obj = null;
    String text = ctx.getText();
    // do something with the text or the children
    for (int i = 0; i < ctx.getChildCount(); i++) {
        System.out.print(ctx.getChild(i).getText() + "/");
    }
    return visitChildren(ctx);
}

xtext: expression/factor/term grammar

This has got to be one of those well-known examples that's somewhere on the internet, but I can't seem to find it.
I'm trying to learn XText and I figured a calculator expression parser would be a good start. But I'm getting syntax errors in my grammar:
Expression:
Term (('+'|'-') Term)*;
Term:
Factor (('*'|'/') Factor)*;
Factor:
number=Number | variable=ID | ('(' expression=Expression ')');
I get this error in the Expression and Term lines:
Multiple markers at this line
- Cannot change type twice within a rule
- An unassigned rule call is not allowed, when the 'current'
was already created.
What gives? How can I fix this? And when do I have instanceName=Rule vs. Rule entries in a grammar?
I downloaded Xtext integrated with Eclipse, and it comes with a calculator example called Arithmetics that does approximately what you want. From what I can gather, you need to add the tree-building actions and assignments shown below, so that each alternative creates an object and assigns the parsed parts to features of it (which also makes the operators left-associative in the resulting AST). This grammar runs fine for me:
Expression:
Term (({Plus.left=current}'+'|{Minus.left=current}'-') right=Term)*;
Term:
Factor (({Multiply.left=current} '*'| {Division.left=current}'/') right=Factor)*;
Factor:
number=NUMBER | variable=ID | ('(' expression=Expression ')');
The example grammar they have for arithmetics can be viewed here. It includes a bit more than yours, like function calls, but the basics are the same.

Stripping actions from ANTLR grammar changes its parsing algorithm

I have a grammar Foo.xtext (too complex to include here). Xtext generates InternalFoo.g from it. After some tweaking it also generates DebugInternalFoo.g, which claims to be the same thing without actions. Now I strip off the actions with ANTLR directly:
java -cp antlr-3.4.jar org.antlr.tool.Strip Internal.g > Stripped.g
I'd expect the three grammars to behave the same way when I check them. But here is what I experienced
InternalFoo.g - error, rule assignment has non-LL(*) decision
DebugInternalFoo.g - no problem, parses fine
Stripped.g - warnings at rule assignment, decision can match using multiple alternatives. It fails to parse properly.
Is it possible that a grammar parses a text differently with or without actions? Or is it a bug in any of the action-remover tools? (The rule in question has syntactic predicates, and without them, it would really have a non-LL(*) decision.)
UPDATE:
I partly found what caused the problem. The rule in question was like this
trickyRule:
({ some complex action})
(expression '=')=>...
Stripping with Antlr removed the action, but left an empty group there:
// Stripped.g
trickyRule:
() (expression '=')=>...
The generation of the debug grammar removes both the action, and the now empty group around it:
// DebugInternalFoo.g
trickyRule:
(expression '=')=>...
So the lesson learned is: an empty group before a syntactic predicate is not the same as nothing at all.
Is it possible that a grammar parses a text differently with or without actions?
Yes, that is possible. org.antlr.tool.Strip leaves syntactic predicates [1], but removes validating [2] and gated [3] semantic predicates (and member sections that these semantic predicates might use).
For example, the following rules would only match an A_TOKEN:
parser_rule1
: (parser_rule2)=> parser_rule2
;
parser_rule2
: {input.LT(1).getType() == A_TOKEN}? .
;
but if you use the Strip tool on it, it leaves the following:
parser_rule1
: (parser_rule2)=> parser_rule2
;
parser_rule2
: /*{input.LT(1).getType() == A_TOKEN}?*/ .
;
making it match any token.
In other words, Strip could change the behavior of the generated lexer or parser.
[1] syntactic predicate: ( ... )=>
[2] validating semantic predicate: { ... }?
[3] gated semantic predicate: { ... }?=>