Exclude certain tokens from the parse tree - antlr

I'd like to exclude certain tokens from the parse-tree in antlr4.
Say I have this definition:
assignStatement: assignable EQ expression EOS;
EQ: '=';
EOS: ';';
The resulting parse-tree contains the assignable, EQ, expression and EOS as children of assignStatement. Is there any way to get rid of EQ and EOS here, since I only need them during parse-time for matching purposes?

ANTLR 4 does not omit matched terminals from the parse tree. While your application does not require access to those tokens, our experience has been that new applications using a previously written grammar frequently require access to elements that earlier applications did not. By including all elements in the parse tree, we are accounting for this case in advance to improve the long-term maintainability of applications using ANTLR 4.
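That said, the tokens are easy to ignore when you consume the tree: the generated rule context exposes an accessor per sub-rule, so a listener or visitor never has to touch EQ or EOS. A minimal sketch, assuming the assignStatement rule above lives in a grammar called Demo (all generated class names below follow from that assumption):
public class AssignListener extends DemoBaseListener {
    @Override
    public void enterAssignStatement(DemoParser.AssignStatementContext ctx) {
        DemoParser.AssignableContext lhs = ctx.assignable();    // the EQ terminal is simply never read
        DemoParser.ExpressionContext rhs = ctx.expression();    // likewise EOS
        System.out.println(lhs.getText() + " = " + rhs.getText());
    }
}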

How to get a parameter to the ANTLR lexer object?

I'm writing Java software to parse SQL queries. In order to do so I'm using ANTLR with presto.g4.
The code I'm currently using is pretty standard:
PrestoLexer lexer = new PrestoLexer(
        new CaseChangingCharStream(CharStreams.fromString(query), true));
lexer.removeErrorListeners();
lexer.addErrorListener(errorListener);
CommonTokenStream tokens = new CommonTokenStream(lexer);
PrestoParser parser = new PrestoParser(tokens);
I wonder whether it's possible to pass a parameter to the lexer so that the lexing differs depending on that parameter?
Update:
I've used @Mike's suggestion below: my lexer now inherits from the built-in lexer and adds a predicate function. My issue is now purely a grammar one.
This is my string definition:
STRING
    : '\'' ( '\\' .                                               // match \ followed by any char
           | '\\\\' . {HelperUtils.isNeedSpecialEscaping(this)}?  // match \\ followed by any char, gated by the predicate
           | ~[\\']                                               // match anything other than \ and '
           | '\'\''                                               // match ''
           )*
      '\''
    ;
I sometimes have a query with weird escaping for which the predicate returns true. For example:
select
table1(replace(replace(some_col,'\\'',''),'\"' ,'')) as features
from table1
And when I try to parse it I'm getting:
'\'',''),'
as a single string.
How can I handle this one?
I don't know what you need the parameter for, but you mentioned SQL, so let me present a solution I have been using for years: predicates.
In MySQL (which is the dialect I work with) the syntax differs depending on the MySQL version number, so in my grammar I use semantic predicates to switch language parts that belong to a specific version on and off. The approach is simple:
test:
{serverVersion < 80014}? ADMIN_SYMBOL
| ONLY_SYMBOL
;
The ADMIN keyword is only acceptable for version < 8.0.14 (just an example, not true in reality), while the ONLY keyword is a possible alternative in any version.
The variable serverVersion is a member of a base class from which I derive my parser. That can be specified by:
options {
superClass = MySQLBaseRecognizer;
tokenVocab = MySQLLexer;
}
The lexer is also derived from that class, so the version number is available in both lexer and parser (in addition to other important settings like the SQL mode). With this approach you can also implement more complex predicate functions that need additional processing.
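A heavily simplified sketch of what such a base class might look like (the serverVersion default and the sqlMode field here are illustrative, not the real implementation):
import org.antlr.v4.runtime.Parser;
import org.antlr.v4.runtime.TokenStream;

// Sketch only: fields of this base class are visible to semantic predicates
// such as {serverVersion < 80014}? in the generated parser.
public abstract class MySQLBaseRecognizer extends Parser {
    public int serverVersion = 80014;  // set by the application before parsing
    public String sqlMode = "";        // another setting predicates may consult

    protected MySQLBaseRecognizer(TokenStream input) {
        super(input);
    }
}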
You can find the full code + grammars at the MySQL Workbench GitHub repository.
I wonder whether it's possible to pass a parameter to the lexer so that the lexing differs depending on that parameter?
No, the lexer works independently from the parser. You cannot direct the lexer while parsing.
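What the question's update describes (subclassing the generated lexer and letting a predicate helper consult its state) is one way to hand the lexer a fixed parameter up front. A rough sketch; the class, constructor parameter and method names are assumptions, not part of presto.g4:
import org.antlr.v4.runtime.CharStream;

// Hypothetical subclass of the generated PrestoLexer: the flag is fixed at
// construction time, and a predicate helper such as HelperUtils can read it
// back by casting the lexer instance it receives.
public class ParameterizedPrestoLexer extends PrestoLexer {
    private final boolean specialEscaping;

    public ParameterizedPrestoLexer(CharStream input, boolean specialEscaping) {
        super(input);
        this.specialEscaping = specialEscaping;
    }

    public boolean needsSpecialEscaping() {
        return specialEscaping;
    }
}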

Preferentially match shorter token in ANTLR4

I'm currently attempting to write a UCUM parser using ANTLR4. My current approach has involved defining every valid unit and prefix as a token.
Here's a very small subset of the defined tokens. I could make a cut-down version of the grammar as an example, but it seems like it shouldn't be necessary to resolve this problem (or to point out that I'm going about this entirely the wrong way).
MILLI_OR_METRE: 'm' ;
OSMOLE: 'osm' ;
MONTH: 'mo' ;
SECOND: 's' ;
One of the standard testcases is mosm, from which the lexer should generate the token stream MILLI_OR_METRE OSMOLE. Unfortunately, because ANTLR preferentially matches longer tokens, it generates the token stream MONTH SECOND MILLI_OR_METRE, which then causes the parser to raise an error.
Is it possible to make an ANTLR4 lexer try to match using shorter tokens first? Adding lookahead-type rules to MONTH isn't a great solution, as there are all sorts of potential lexing conflicts that I'd need to take account of (for example mol being lexed as MONTH LITRE instead of MOLE and so on).
EDIT:
StefanA below is of course correct; this is a job for a parser capable of backtracking (e.g. recursive descent, packrat, PEG and probably various others... Coco/R is one reasonable package to do this). In an attempt to avoid adding a dependency on another parser generator (or moving other bits of the project from ANTLR to this new generator) I've hacked my way around the problem like this:
MONTH: 'mo' { _input.La(1) != 's' && _input.La(1) != 'l' && _input.La(1) != '_' }? ;
// (note: this is a C# project; Java would use _input.LA instead)
but this isn't really a very extensible or maintainable solution, and like as not will have introduced other subtle issues I've not come across yet.
Your problem does not require shorter tokens to be preferred (in that case MONTH would never be matched). What you need is backtracking behaviour that depends on whether the surrounding text matches or not. Right?
ANTLR separates tokenization and parsing strictly. Consequently every solution to your problem will seem like a hack.
However other parser generators are specialized on problems like yours. Packrat Parsers (PEG) are backtracking and allow tokenization on the fly. Try out parboiled for this purpose.
It appears that the question is not being framed correctly.
I'm currently attempting to write a UCUM parser using ANTLR4. My current approach has involved defining every valid unit and prefix as a token.
But, according to the UCUM:
The expression syntax of The Unified Code for Units of Measure generates an infinite number of codes with the consequence that it is impossible to compile a table of all valid units.
The most one can expect from the lexer is an unambiguous identification of the measurement string without regard to its semantic value. Similarly, a parser alone will be unable to validly select between unit sequences like MONTH LITRE and MOLE - both could reasonably apply to a leak rate - unless the problem space is statically constrained in the parser definition.
A heuristic, structural (explicitly identifying the problem space) or contextual (considering the relative nature of other units in the problem space), is most likely required to select the correct unit interpretation.
The best tool to use is the one that puts you in the best position to implement the heuristics necessary to disambiguate the unit strings. Antlr could do it using parse-tree walkers. Whether that is the appropriate approach requires further analysis.

ANTLR4 parse tree simplification

Is there any means to get ANTLR4 to automatically remove redundant nodes in generated parse trees?
More specifically, I've been experimenting with a grammar for GLSL, and you end up with long linear sequences of "expressions" in the parse tree due to the rule forwarding needed to handle operator precedence automatically.
Most of the generated tree nodes simply mean "forward to the next level of precedence", so they don't provide any useful syntactic information - you only really need the last expression node in each sequence (i.e. the point at which the rule forwarding stopped), or the point where the node has more than one child (i.e. an actual expression was encountered in the source)...
I was hoping there would be an easy way to kill off the dummy intermediate expression nodes - this type of structure must be common in any grammar with operator precedence.
The basic structure of the grammar is a fairly direct clone taken from the Khronos specification for the language:
https://www.khronos.org/registry/gles/specs/3.1/es_spec_3.1.pdf
ANTLR v4 is able to generate code from a single left-recursive rule that covers all the precedence levels, if you use a grammar like this (example for basic math):
expr : '(' expr ')'
     | '-' expr
     | expr ('*'|'/') expr
     | expr ('+'|'-') expr
     | INT
     ;
ANTLR v3 was unable to do so and basically required you to write one rule per precedence level. So I'd advise you to rewrite your grammar to avoid these boilerplate rules.
Then, I think you're confusing the parse tree (aka concrete syntax tree) with the AST (abstract syntax tree). The AST is like a simplified version of the parse tree, which keeps only what's needed for your purpose. For instance, with the expr rule above, the AST wouldn't contain any node for parentheses, since the precedence is encoded in the tree itself and you usually don't need to know whether a part of a given expression was parenthesized or not.
Your program should build an AST from the parse tree and then go from there. Don't deal with parse trees directly, even if it seems convenient at first sight because the tool generates them for you. It'll quickly become cumbersome. Build your own tree structure (AST), tailored for the task at hand.
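As an illustration only (none of this is ANTLR API), an AST node for the expr rule above can be as small as this:
import java.util.ArrayList;
import java.util.List;

// Illustrative AST node: an operator applied to child expressions, or an INT leaf.
public class ExprNode {
    public final String op;        // "+", "-", "*", "/", "neg", or null for a leaf
    public final Integer value;    // set only for INT leaves
    public final List<ExprNode> children = new ArrayList<>();

    public ExprNode(String op, Integer value) {
        this.op = op;
        this.value = value;
    }
}
Note that parentheses deliberately get no node of their own; the nesting of ExprNode instances already encodes the precedence.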
Use the Visitor implementation to access each node in sequence. Build your own tree by adding nodes to parents as they are visited. Decide at the time the node is visited whether to add it to your new tree or not. For example:
public T visitExpression(@NotNull AcParser.ExpressionContext ctx) {
    // Expressionable parent = getParent(Expressionable.class, ctx);
    // Class<? extends AcExpression> expClass = AcExpression.class;
    AcExpression obj = null;
    String text = ctx.getText();
    // do something with text or children
    for (int i = 0; i < ctx.getChildCount(); i++) {
        System.out.print(ctx.getChild(i).getText() + "/");
    }
    return visitChildren(ctx);
}
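To run such a visitor, hand it the tree produced by the parser. A hedged usage sketch; AcLexer, the start rule program and the visitor class name MyAstVisitor are guesses based on the snippet above:
AcLexer lexer = new AcLexer(CharStreams.fromString(source));
AcParser parser = new AcParser(new CommonTokenStream(lexer));
ParseTree tree = parser.program();
MyAstVisitor visitor = new MyAstVisitor();  // the class containing visitExpression above
visitor.visit(tree);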

Stripping actions from ANTLR grammar changes its parsing algorithm

I have a grammar Foo.xtext (too complex to include it here). Xtext generates InternalFoo.g from it. After some tweaking it also generates DebugInternalFoo.g which claims to be the same thing without actions. Now, I strip off actions with ANTLR directly
java -cp antlr-3.4.jar org.antlr.tool.Strip InternalFoo.g > Stripped.g
I'd expect the three grammars to behave the same way when I check them, but here is what I experienced:
InternalFoo.g - error, rule assignment has non-LL(*) decision
DebugInternalFoo.g - no problem, parses fine
Stripped.g - warnings at rule assignment, decision can match using multiple alternatives. It fails to parse properly.
Is it possible that a grammar parses a text differently with or without actions? Or is it a bug in any of the action-remover tools? (The rule in question has syntactic predicates, and without them, it would really have a non-LL(*) decision.)
UPDATE:
I partly found what caused the problem. The rule in question was like this
trickyRule:
({ some complex action})
(expression '=')=>...
Stripping with ANTLR removed the action, but left an empty group there:
// Stripped.g
trickyRule:
() (expression '=')=>...
The generation of the debug grammar removes both the action and the now-empty group around it:
// DebugInternalFoo.g
trickyRule:
(expression '=')=>...
So the lesson learned is: an empty group before a syntactic predicate is not the same as nothing at all.
Is it possible that a grammar parses a text differently with or without actions?
Yes, that is possible. org.antlr.tool.Strip leaves syntactic predicates¹, but removes validating² and gated³ semantic predicates (and member sections that these semantic predicates might use).
For example, the following rules would only match an A_TOKEN:
parser_rule1
: (parser_rule2)=> parser_rule2
;
parser_rule2
: {input.LT(1).getType() == A_TOKEN}? .
;
but if you use the Strip tool on it, it leaves the following:
parser_rule1
: (parser_rule2)=> parser_rule2
;
parser_rule2
: /*{input.LT(1).getType() == A_TOKEN}?*/ .
;
making it match any token.
In other words, Strip could change the behavior of the generated lexer or parser.
¹ syntactic predicate: ( ... )=>
² validating semantic predicate: { ... }?
³ gated semantic predicate: { ... }?=>

Bison input analyzer - basic question on optional grammar and input interpretation

I am very new to Flex/Bison, so this is a very naive question. Pardon me if so. It may look like a homework question, but I need to implement a project based on the concept below.
My question is related to two parts,
Question 1
In a Bison parser, how do I provide rules for optional input?
Say I need to parse the statement below.
Example:
-country='USA' -state='INDIANA' -population='100' -ratio='0.5' -comment='Census study for Indiana'
Here the ratio token can be optional. Similarly, if many tokens are optional, how do I express that in the parser's grammar?
My code looks like this:
%start program
program : TK_COUNTRY TK_IDENTIFIER TK_STATE TK_IDENTIFIER TK_POPULATION TK_IDENTIFIER ...
where all the tokens are defined in the lexer. Since many of the tokens are optional, using "|" would mean spelling out a large number of possible input combinations.
Question 2
There is a good chance that the comment contains quotes as part of the input, so I have added a -tag token which the user can provide to control how those quotes are interpreted.
Example:
-country='USA' -state='INDIANA' -population='100' -ratio='0.5' -comment='Census study for Indiana$'s population' -tag=$
Now, I need to reinterpret Indiana$'s as Indiana's since -tag=$.
Please provide any input or related material that would help me understand these topics.
Q1: I am assuming we have 4 possible tokens: NAME, '-', '=' and VALUE.
Then the grammar could look like this:
attrs:
attr attrs
| attr
;
attr:
'-' NAME '=' VALUE
;
Note that, unless you make specific attribute names distinct tokens, there is no way to say "We must have country, state and population, but ratio is optional."
This would be the task of that part of the program that analyses the data produced by the parser.
Q2: As I understand it, you are thinking of changing the way lexical analysis works while the parser is running. That is not a good idea, at least not for a beginner. Have you even started to think about lexical analysis, as opposed to parsing?