VARIABLE: ...
UNARYOP: 'not' Expression; // unary operation
BINARYOP: 'or' VARIABLE;
Expression : (NIL | INTEGER | UNARYOP) BINARYOP?;
In the above scenario, 'or' can either be reached through
Expression->BINARYOP
or
Expression->UNARYOP->Expression->BINARYOP
Is there a systematic way to remove ambiguities such as the above?
I think that removing ambiguities from a grammar is not an automatically solvable task, because the choice of which alternative is the right one is a 'subjective' decision.
Once you have identified the problem, build the different alternative trees and add new production rules to disallow the invalid parse trees.
I am afraid there is no magic solution like there is for removing left recursion... Maybe I am wrong.
In your case you could define
Expression : NIL
| INTEGER
| VARIABLE
| 'not' Expression
| Expression 'or' Expression;
Or do you want to limit the right side of 'or' to variables only?
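With that combined left-recursive form, ANTLR4 resolves the ambiguity by itself: precedence follows the order of the alternatives, with earlier alternatives binding more tightly. A possible ordering, as a sketch (the alternative labels are illustrative, not from the question):

```antlr
// 'not' is listed before 'or', so it binds more tightly;
// the binary 'or' is left-associative by default.
expression
    : 'not' expression            # NotExpr
    | expression 'or' expression  # OrExpr
    | NIL                         # NilExpr
    | INTEGER                     # IntExpr
    | VARIABLE                    # VarExpr
    ;
```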
Related
Following along with http://blog.ptsecurity.com/2016/06/theory-and-practice-of-source-code.html#java--and-java8-grammars, I am trying to reduce left recursion in my fairly complex grammar. From what I understand, the non-primitive form of recursion can lead to performance problems both in terms of memory and processing time.
So I am trying to refactor these rules in my grammar to use only "primitive" recursion. Of course, that blog post is the only place I have seen the phrase "primitive" recursion in regard to ANTLR, so I am just guessing at its meaning/intent. It seems to me it means a rule that refers to itself on the left-hand side of at most a single alternative. Correct?
At the moment I have an expression rule like:
expression
: expression DOUBLE_PIPE expression # ConcatenationExpression
| expression PLUS expression # AdditionExpression
| expression MINUS expression # SubtractionExpression
| expression ASTERISK expression # MultiplicationExpression
| expression SLASH expression # DivisionExpression
| expression PERCENT expression # ModuloExpression
...
;
The ... includes quite a few sub-rules that also refer back to expression. But these are the only ones with direct recursion.
If I understand correctly, refactoring these to be "primitive" recursion would look something like:
expression
: binaryOpExpression # BinaryOpExpression
...
;
binaryOpExpression
: expression DOUBLE_PIPE expression # ConcatenationExpression
| expression PLUS expression # AdditionExpression
| expression MINUS expression # SubtractionExpression
| expression ASTERISK expression # MultiplicationExpression
| expression SLASH expression # DivisionExpression
| expression PERCENT expression # ModuloExpression
;
First, is that the correct refactoring?
Secondly, will that really help performance? At the end of the day it is still the same decisions, so I'm not really understanding how this helps performance (aside from maybe producing fewer ATNConfig objects).
Thanks
I have not heard "primitive recursion" before in this context; the author probably just means to name a specific form of recursion in ANTLR4.
Fact is, there are 3 relevant forms of recursion in ANTLR4:
Direct left recursion: recursion from the first rule reference in a rule (to the same rule). For example: a: a b | c;
Indirect left recursion: left recursion not directly from the same rule. For example: a: b | c; b: c | d; c: a | e; (not allowed in ANTLR4)
Right recursion: any other recursion in a rule. For example: a: b a | c;. The name "right recursion" is, however, only correct in the case of binary expressions, but it is often used to differentiate from left recursion in general.
Having said that, it becomes clear that your rewrite is wrong, as it would create indirect left recursion, which ANTLR4 does not support. Direct left recursion is usually not a problem (from a memory or performance standpoint) because ANTLR4 converts it to non-recursive ATN rule graphs.
What can become a problem are right recursions, because they are implemented by code recursion (recursive function calls in the runtime), which may quickly exhaust the CPU stack. I have seen cases with big expressions which could not be parsed in a separate thread, because I couldn't set the thread stack size to a larger value (the main thread's stack size usually can be adjusted via linker settings).
The only solution for the latter case which I have found useful is to lower the number of parser rules in the grammar that call each other. Of course, it's a matter of structure, readability etc. to put certain expression elements in different rules (for example andExpression, orExpression, bitExpression etc.), but that may lead to pretty deep invocation stacks, which may exhaust the CPU stack and/or require a lot of time to process.
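A sketch of what that advice means for the expression rule from the question: keep every binary operator as a direct left-recursive alternative of a single rule (which ANTLR4 converts to a non-recursive ATN) instead of spreading them over a chain of sub-rules. The alternatives are reordered here so that multiplicative operators bind more tightly, and the primary rule is hypothetical:

```antlr
expression
    : expression DOUBLE_PIPE expression # ConcatenationExpression
    | expression ASTERISK expression    # MultiplicationExpression
    | expression SLASH expression       # DivisionExpression
    | expression PERCENT expression     # ModuloExpression
    | expression PLUS expression        # AdditionExpression
    | expression MINUS expression       # SubtractionExpression
    | primary                           # PrimaryExpression
    ;

// Hypothetical non-recursive alternatives (literals, identifiers, ...);
// the fewer rule-to-rule calls, the shallower the invocation stack.
primary
    : IDENTIFIER
    ;
```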
I am looking for a solution to a simple problem.
The example :
SELECT date, date(date)
FROM date;
This is a rather stupid example where a table, its column, and a function all have the name "date".
The snippet of my grammar (very simplified) :
simple_select
: SELECT selected_element (',' selected_element)* FROM from_element ';'
;
selected_element
: function
| REGULAR_WORD
;
function
: REGULAR_WORD '(' function_argument ')'
;
function_argument
: REGULAR_WORD
;
from_element
: REGULAR_WORD
;
DATE: D A T E;
FROM: F R O M;
SELECT: S E L E C T;
REGULAR_WORD
: (SIMPLE_LETTER) (SIMPLE_LETTER | '0'..'9')*
;
fragment SIMPLE_LETTER
: 'a'..'z'
| 'A'..'Z'
;
DATE is a keyword (it is used somewhere else in the grammar).
If I want it to be recognised by my grammar as a normal word, here are my solutions :
1) I add it everywhere I used REGULAR_WORD, next to it.
Example :
selected_element
: function
| REGULAR_WORD
| DATE
;
=> I don't want this solution. "DATE" is not my only keyword, and I have many rules using REGULAR_WORD, so I would need to add a list of many (50+) keywords like DATE to many (20+) parser rules: it would be absolutely ugly.
PROS: make a clean tree
CONS: make a dirty grammar
2) I use a parser rule in between to get all those keywords, and then, I replace every occurrence of REGULAR_WORD by that parser rule.
Example :
word
: REGULAR_WORD
| DATE
;
selected_element
: function
| word
;
=> I do not want this solution either, as it adds one more parser rule to the tree and pollutes the information (I do not want to know that "date" is a word; I want to know that it's a selected_element, a function, a function_argument or a from_element ...).
PROS: make a clean grammar
CONS: make a dirty tree
Either way, I have a dirty tree or a dirty grammar. Isn't there a way to have both clean ?
I looked for aliases, parser fragment equivalent, but it doesn't seem like ANTLR4 has any ?
Thank you, have a nice day !
There are four different grammars for SQL dialects in the Antlr4 grammar repository and all four of them use your second strategy. So it seems like there is a consensus among Antlr4 sql grammar writers. I don't believe there is a better solution given the design of the Antlr4 lexer.
As you say, that leads to a bit of noise in the full parse tree, but the relevant non-terminal (function, selected_element, etc.) is certainly present and it does not seem to me to be very difficult to collapse the unit productions out of the parse tree.
As I understand it, when Antlr4 was being designed, a decision was made to only automatically produce full parse trees, because the design of condensed ("abstract") syntax trees is too idiosyncratic to fit into a grammar DSL. So if you find an AST more convenient, you have the responsibility to generate one yourself. That's generally straightforward, although it involves a lot of boilerplate.
Other parser generators do have mechanisms which can handle "semireserved keywords". In particular, the Lemon parser generator, which is part of the Sqlite project, includes a %fallback declaration which allows you to specify that one or more tokens should be automatically reclassified in a context in which no grammar rule allows them to be used. Unfortunately, Lemon does not generate Java parsers.
Another similar option would be to use a parser generator which supports "scannerless" parsing. Such parsers typically use algorithms like Earley/GLL/GLR, capable of parsing arbitrary CFGs, to get around the need for more lookahead than can conveniently be supported in fixed-lookahead algorithms such as LALR(1).
This is the so-called keywords-as-identifiers problem and it has been discussed many times before. For instance, I asked a similar question already 6 years ago on the ANTLR mailing list. But also here at Stack Overflow there are questions touching this area, for instance Trying to use keywords as identifiers in ANTLR4; not working.
Terence Parr wrote a wiki article for ANTLR3 in 2008 that shortly describes 2 possible solutions:
This grammar allows "if if call call;" and "call if;".
grammar Pred;
prog: stat+ ;
stat: keyIF expr stat
| keyCALL ID ';'
| ';'
;
expr: ID
;
keyIF : {input.LT(1).getText().equals("if")}? ID ;
keyCALL : {input.LT(1).getText().equals("call")}? ID ;
ID : 'a'..'z'+ ;
WS : (' '|'\n')+ {$channel=HIDDEN;} ;
You can make those semantic predicates more efficient by interning those strings so that you can do integer comparisons instead of string comparisons.
The other alternative is to do something like this
identifier : KEY1 | KEY2 | ... | ID ;
which is a set comparison and should be faster.
Normally, as @rici already mentioned, people prefer the solution where you keep all keywords in their own rule and add that to your normal identifier rule (wherever such a keyword is allowed).
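Applied to the grammar in the question, that preferred solution would look roughly like this (a sketch; only DATE is spelled out, the other 50+ keywords would go into the same rule):

```antlr
identifier
    : REGULAR_WORD
    | DATE
    // | FROM | SELECT | ...  // every keyword that may double as a name
    ;

selected_element
    : function
    | identifier
    ;

function_argument
    : identifier
    ;

from_element
    : identifier
    ;
```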
The other solution in the wiki can be generalized for any keyword by using a lookup table/list in an action in the ID lexer rule, which is used to check if a given string is a keyword. This solution is not only slower, but also sacrifices clarity in your parser grammar, since you can no longer use keyword tokens in your parser rules.
I am having a problem while parsing some SQL typed string with ANTLR4.
The parsed string is :
WHERE a <> 17106
AND b BETWEEN c AND d
AND e BTW(f, g)
Here is a snippet of my grammar :
where_clause
: WHERE element
;
element
: element NOT_EQUAL_INFERIOR element
| element BETWEEN element AND element
| element BTW LEFT_PARENTHESIS element COMMA_CHAR element RIGHT_PARENTHESIS
| element AND element
| WORD
;
NOT_EQUAL_INFERIOR: '<>';
LEFT_PARENTHESIS: '(';
RIGHT_PARENTHESIS: ')';
COMMA_CHAR: ',';
BETWEEN: B E T W E E N;
BTW: B T W;
WORD ... //can be anything ... it doesn't matter for the problem.
(parse-tree image omitted; source: hostpic.xyz)
But as you can see in that image, the tree is not the "correct" one.
ANTLR4 being greedy, it swallows everything that follows the BETWEEN into a single "element", but we want it to take only "c" and "d".
Naturally, since it swallows everything into the element rule, it misses the second AND of the BETWEEN, so it fails.
I have tried changing the order of the rules (putting AND before BETWEEN), and I tried making those rules right-associative (<assoc=right>), but neither worked. They change the tree but don't make it the way I want it to be.
I feel like the error is a mix of greediness, associativity, recursion... It makes it quite difficult to search for the same kind of issue, but maybe I'm just missing the correct words.
Thanks, have a nice day !
I think you misuse the rule element. I don't think SQL allows you to put anything as left and right limits of BETWEEN.
Not tested, but I'd try this:
expression
: expression NOT_EQUAL_INFERIOR expression
| term BETWEEN term AND term
| term BTW LEFT_PARENTHESIS term COMMA_CHAR term RIGHT_PARENTHESIS
| expression AND expression
| term
;
term
: WORD
;
Here your element becomes expression in most places, but in others it becomes term. The latter is a dummy rule for now, but I'm pretty sure you'd want to also add e.g. literals to it.
Disclaimer: I don't actually use ANTLR (I use my own), and I haven't worked with the (rather hairy) SQL grammar in a while, so this may be off the mark, but I think to get what you want you'll have to do something along the lines of:
...
where_clause
: WHERE disjunction
;
disjunction
: conjunction OR disjunction
| conjunction
;
conjunction
: element AND conjunction
| element
;
element
: element NOT_EQUAL_INFERIOR element
| element BETWEEN element AND element
| element BTW LEFT_PARENTHESIS element COMMA_CHAR element RIGHT_PARENTHESIS
| WORD
;
...
This is not the complete refactoring needed but illustrates the first steps.
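One way to continue that refactoring (a sketch with hypothetical rule names): pull the operands of BETWEEN and the comparisons down into their own level, whose contents cannot contain a bare AND, so the second AND of a BETWEEN can no longer be swallowed:

```antlr
// element's operands are now 'operand', not 'element', so
// 'b BETWEEN c AND d' cannot absorb the surrounding AND.
element
    : operand NOT_EQUAL_INFERIOR operand
    | operand BETWEEN operand AND operand
    | operand BTW LEFT_PARENTHESIS operand COMMA_CHAR operand RIGHT_PARENTHESIS
    | operand
    ;

operand
    : WORD
    ;
```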
The following lexer grammar snippet is supposed to tokenize 'custom names' depending on a predicate that is defined in a class LexerHelper:
fragment NUMERICAL : [0-9];
fragment XML_NameStartChar
: [:a-zA-Z]
| '\u2070'..'\u218F'
| '\u2C00'..'\u2FEF'
| '\u3001'..'\uD7FF'
| '\uF900'..'\uFDCF'
| '\uFDF0'..'\uFFFD'
;
fragment XML_NameChar : XML_NameStartChar
| '-' | '_' | '.' | NUMERICAL
| '\u00B7'
| '\u0300'..'\u036F'
| '\u203F'..'\u2040'
;
fragment XML_NAME_FRAG : XML_NameStartChar XML_NameChar*;
CUSTOM_NAME : XML_NAME_FRAG ':' XML_NAME_FRAG {LexerHelper.myPredicate(getText())}?;
The correct match for CUSTOM_NAME is always the longest possible match. Now if the lexer encounters a custom name such as some:cname then I would like it to lex the entire string some:cname and then call the predicate once with 'some:cname' as argument.
Instead, the lexer calls the predicate with each possible 'valid' match it finds along the way, so some:c, some:cn, some:cna, some:cnam until finally some:cname.
Is there a way to change the behaviour to force ANTLR4 to first find the longest possible match before calling the predicate? Alternatively, is there an efficient way for the predicate to determine that the match is not yet the longest one, and simply return false in that case?
EDIT: The funny thing about this behavior is that as long as only partial matches are passed to the predicate, the result of the predicate seems to be completely ignored by the lexer anyway. This seems oddly inefficient.
As it turns out, the behavior is known and permitted by Antlr. Antlr may or may not call predicates more than necessary (see here for more details). To avoid that behavior I am now using actions instead, which only get executed once the rule has completely and successfully matched. This allows me to e.g. switch modes in an action.
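The original post does not show the reworked rule, but the action-based variant could look something like this (setType is real lexer API; the OTHER token type is a hypothetical fallback):

```antlr
// An action, unlike a predicate, runs only after the whole rule has
// matched, so LexerHelper is called exactly once with 'some:cname'.
CUSTOM_NAME
    : XML_NAME_FRAG ':' XML_NAME_FRAG
      { if (!LexerHelper.myPredicate(getText())) setType(OTHER); }
    ;
```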
I need an idea how to express a statement like the following:
Int<Double<Float>>
So, in an abstract form we should have:
1. (easiest case): a<b>
2. a<a<b>>
3. a<a<a<b>>>
4. ... and so on ...
The thing is that I should allow a statement of the form a<b> to be embedded within the <..> signs so that I have a nested statement. In other words, I should be able to replace the b with a<b>.
The second thing is that the number of opening and closing <> signs should be equal.
How can I do that in ANTLR ?
A rule can refer to itself without any problem¹. Let's say we have a rule type which describes your case, in a minimalist approach:
type: typeLiteral ('<' type '>')?;
typeLiteral: 'Int' | 'Double' | 'Float';
Note that the ('<' type '>') part is optional, denoted by the ? symbol, so using only a typeLiteral is a valid type. Here are the syntax trees generated by these rules for your example Int<Double<Float>>:
¹: As long as some terminals (like '<' or '>') can differentiate where the recursion stops.
Image generated by http://ironcreek.net/phpsyntaxtree/