I'm using the Java grammar defined at https://github.com/antlr/grammars-v4/tree/master/java/java
One of my users somehow entered his Java code as follows:
class HelloWorld {
    public static void main(String[] args) {
        * my first program !
        */
        System.out.println("Hello, World!");
    }
}
He simply forgot the /* before line 3, but it throws my parser completely off.
var stream = CharStreams.fromString(input);
ITokenSource lexer = new JavaLexer(stream);
ITokenStream tokens = new CommonTokenStream(lexer);
Parser parser = new JavaParser(tokens);
var tree = parser.compilationUnit();
The Definitive ANTLR 4 Reference says ANTLR can do single-token insertion and single-token deletion, but I don't see it inserting the /* for me.
How can I ask ANTLR to recover from the missing /*?
Using C# and ANTLR4, I'm trying to parse a simple grammar: just a simple assign statement, which would look like
int someinteger = 3;.
Below are my parser rules, which contain a compile unit, a block, and a basic statement.
//The final compile unit sent to the interpreter.
compileUnit
: block EOF
;
//A block, array of statements.
block: statement*
;
//A single statement.
statement: stat_ass;
//An assign statement.
stat_ass: IDENTIFIER IDENTIFIER SET_EQUALS INTEGER ENDLINE;
When parsing int banana = 142;, the tokens returned are:
[IDENTIFIER, int]
[IDENTIFIER, banana]
[SET_EQUALS, =]
[INTEGER, 142]
[ENDLINE, ;]
However, when printing my parse tree, it just contains a block which has no statements.
ANTLR Parse Tree:
([] [10] <EOF>)
Can someone enlighten me on why this fails? Apologies if this is a simple mistake; I've run out of ideas for fixing it.
Program.cs:
using Antlr4.Runtime;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace stork
{
    class Program
    {
        static void Main(string[] args)
        {
            //Test input string.
            string input = "int banana = 142;";
            var chars = new AntlrInputStream(input);
            var lexer = new storkLexer(chars);
            var tokens = new CommonTokenStream(lexer);

            //Debug print.
            ANTLRDebug.PrintTokens(lexer);

            //Debug print tree.
            var parser = new storkParser(tokens);
            ANTLRDebug.PrintParseList(parser);

            //Getting tree.
            parser.BuildParseTree = true;
            var tree = parser.compileUnit();
        }
    }
}
ANTLRDebug.cs
https://github.com/c272/stork-lang/blob/master/stork/ANTLRDebug.cs
stork.g4
https://github.com/c272/stork-lang/blob/master/stork/Stork.g4
Your ANTLRDebug.PrintTokens method iterates over all the tokens from the lexer, consuming all of them. Afterwards the lexer is empty (it's like an iterator that way), so you're invoking the parser on an empty token stream.
You should call lexer.reset() after calling ANTLRDebug.PrintTokens (or call it at the end of that method) to reset the lexer to the beginning of the input stream.
PS: I recommend calling ToStringTree(parser) instead of just ToStringTree() as that will produce more readable output (rule names instead of numbers).
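Applied to the Program.cs from the question, the reworked body of Main would look roughly like this. This is only an untested sketch (Reset() is the C# spelling of reset(), and the stork lexer/parser and ANTLRDebug names are taken from the question):
//Test input string.
string input = "int banana = 142;";
var chars = new AntlrInputStream(input);
var lexer = new storkLexer(chars);
var tokens = new CommonTokenStream(lexer);

//Debug print - this consumes every token from the lexer.
ANTLRDebug.PrintTokens(lexer);

//Rewind the lexer so the token stream can read the input again.
lexer.Reset();

//Debug print tree.
var parser = new storkParser(tokens);
ANTLRDebug.PrintParseList(parser);

//Getting tree.
parser.BuildParseTree = true;
var tree = parser.compileUnit();
Console.WriteLine(tree.ToStringTree(parser)); //rule names instead of numbers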
I have a lexer which puts every token the parser is interested in into the default channel and all comment tokens into channel 1.
The default channel is used to create the actual tree, while the comment channel is used to separate out the comment tokens and to store all comments.
In chapter 12.1, pp. 206-208, of The Definitive ANTLR4 Reference there is a comparable situation where comment tokens are shifted inside the token stream. The approach presented there is to read out the comment channel in an exit method inside the parser.
In my opinion this is a very rough option for my problem, because I don't want to overwhelm my listener with those backward-looking operations. Is there a way to override a method that puts tokens into the comment channel?
It looks like you misunderstand how channels work in ANTLR. What happens is that the lexer, as it comes along a token, assigns the default channel (just a number) during initialization of the token. That value is only changed when the lexer finds a -> channel() command or you change it explicitly in a lexer action. So there is nothing to do in a listener or anywhere else to filter out such tokens.
Later, when you want to get all tokens "in" a given channel (i.e. all tokens that have a specific channel number assigned), you can just iterate over all tokens returned by your token stream and compare the channel value. Alternatively, you can create a new CommonTokenStream instance and pass it the channel you are interested in. It will then only give you the tokens from that channel (it uses a token source, e.g. a lexer, to get the actual tokens and cache them).
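For illustration, here is a rough, untested C# sketch of both options. It is not from the original answer: it assumes a grammar that sends comments to channel 1 (e.g. via -> channel(1)), borrows the MCRLexer name from the code further down, and uses a placeholder headerSource string:
using System.Linq;
using Antlr4.Runtime;

// headerSource is a placeholder for whatever input you are lexing.
var lexer = new MCRLexer(new AntlrInputStream(headerSource));
var tokens = new CommonTokenStream(lexer);   // the parser only sees the default channel
tokens.Fill();

// Option 1: iterate over all buffered tokens and keep the ones on channel 1.
var comments = tokens.GetTokens().Where(t => t.Channel == 1).ToList();

// Option 2: build a second stream that yields only channel-1 tokens.
var commentStream = new CommonTokenStream(new MCRLexer(new AntlrInputStream(headerSource)), 1);
commentStream.Fill();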
I found out that there is an easy way to override how tokens are created. To do this, one can override a method in CommonTokenFactory and hand the factory to the lexer. At that point I can check the channel and push the tokens into a separate collection.
In my opinion this is a little bit hacky, but I do not need to iterate over the whole CommonTokenStream later on.
This code (in C#) is only meant to demonstrate the idea:
internal class HeadAnalyzer
{
    #region Methods

    internal void AnalyzeHeader(Stream headerSourceStream)
    {
        var antlrFileStream = new AntlrInputStream(headerSourceStream);
        var mcrLexer = new MCRLexer(antlrFileStream);
        var commentSaverTokenFactory = new MyTokenFactory();
        mcrLexer.TokenFactory = commentSaverTokenFactory;
        var commonTokenStream = new CommonTokenStream(mcrLexer);
        var mcrParser = new MCRParser(commonTokenStream);
        mcrParser.AddErrorListener(new DefaultErrorListener());

        MCRParser.ProgramContext tree;
        try
        {
            tree = mcrParser.program(); // create the tree
        }
        catch (SyntaxErrorException syntaxErrorException)
        {
            throw new NotImplementedException();
        }

        var headerContext = new HeaderContext();
        var headListener = new HeadListener(headerContext);
        ParseTreeWalker.Default.Walk(headListener, tree);
        var comments = commentSaverTokenFactory.CommentTokens; // contains all comments :)
    }

    #endregion
}

internal class MyTokenFactory : CommonTokenFactory
{
    internal readonly List<CommonToken> CommentTokens = new List<CommonToken>();

    public override CommonToken Create(Tuple<ITokenSource, ICharStream> source, int type, string text, int channel, int start, int stop, int line, int charPositionInLine)
    {
        var token = base.Create(source, type, text, channel, start, stop, line, charPositionInLine);
        if (token.Channel == 1)
        {
            CommentTokens.Add(token);
        }
        return token;
    }
}
Maybe there are better approaches, but for my use case it works as expected.
Main question
Is there an easy way to get a list of tokens (ideally in the form of a TokenStream) from the parser rule class ParserRuleContext?
Related answers
In an answer to the question Traversal of tokens using ParserRuleContext in listener - ANTLR4, this solution appeared:
ParserRuleContext pctx = ctx.getParent();
List<TerminalNode> nodes = pctx.getTokens(pctx.getStart(), pctx.getStop());
But there is no method with signature ParserRuleContext::getTokens(Token, Token) in ANTLRv4.
My solution
I thought about retrieving a list of tokens from the TokenStream by using the TokenStream::get(int index) method, with the index running over the range between the start and stop token indices of the given ParserRuleContext.
Additional question
Is there a way to get a subset of tokens from a TokenStream in the form of another TokenStream?
It turns out I had overlooked some classes and their interfaces in the ANTLRv4 API.
Answer to main question
The solution proposed above is correct. Also, the BufferedTokenStream and CommonTokenStream classes have the method public List<Token> getTokens(int start, int stop), which retrieves the list of tokens in a given range (in particular, the range between the start and stop tokens of a ParserRuleContext).
Answer to additional question
You can use the ListTokenSource class, which implements the TokenSource interface. Then you can create a CommonTokenStream, passing it the ListTokenSource.
Code example of ParserRuleRewriter
I encapsulated the ideas above into a small code example featuring ParserRuleRewriter, a TokenStreamRewriter that rewrites only a given parser rule. In the code, the tokenStream parameter is the token stream of the full program.
import org.antlr.v4.runtime.*;
import java.util.List;

public class ParserRuleRewriter {

    private TokenStreamRewriter rewriter;

    public ParserRuleRewriter(ParserRuleContext parserRule, CommonTokenStream tokenStream) {
        Token start = parserRule.getStart();
        Token stop = parserRule.getStop();
        List<Token> ruleTokens = tokenStream.getTokens(start.getTokenIndex(), stop.getTokenIndex());
        ListTokenSource tokenSource = new ListTokenSource(ruleTokens);
        CommonTokenStream commonTokenStream = new CommonTokenStream(tokenSource);
        commonTokenStream.fill();
        rewriter = new TokenStreamRewriter(commonTokenStream);
    }

    public void replace(Token token, ParserRuleContext rule) {
        rewriter.replace(token, rule.getText());
    }

    @Override
    public String toString() {
        return rewriter.getText();
    }
}
How do I parse multiple source files and end up with just one AST to perform analysis and code generation on? Typically, I find example usage of ANTLR in the form of:
public void process(String source)
{
    ANTLRStringStream input = new ANTLRStringStream(source);
    TLexer lex = new TLexer(input);
    CommonTokenStream tokens = new CommonTokenStream(lex);
    TParser parser = new TParser(tokens);
    var tree = parser.parse().Tree;
}
but neither the lexer nor the parser seems to be able to take additional files. Am I supposed to create a lexer and parser per input file and use tree.Add() to add the trees from the other files to the tree of the first file?
Here are three ways you could do this:
Use Bart's suggestion and combine the files into a single buffer. This would require adding a lexer rule that implements identical functionality to the C++ #line directive.
Combine the trees returned by the parser rule.
Use multiple input streams with a single lexer. This can be done by using code similar to that which handles include files by pushing all buffers onto the stack before lexing.
The second option would probably be the easiest. I don't use the Java target, so I can't give the code details that all of these solutions would require.
I think this is close to what you are after. I've hard-coded two files to process, but you can process as many as needed by adding a loop. At the step // create new parent node and merge trees here into fulltree, see Bart's answer on duplicating a tree; it has the steps to create a parent node and attach children to it (sorry, but I've not done this myself and didn't have time to integrate and test his code).
import java.io.IOException;

import org.antlr.runtime.*;
import org.antlr.runtime.tree.*;

public class OneASTfromTwoFiles {

    public static String source1 = "file1.txt";
    public static String source2 = "file2.txt";

    public static void main(String[] args) throws RecognitionException {
        CommonTree nodes1 = process(source1);
        CommonTree nodes2 = process(source2);

        // create new parent node and merge trees here into fulltree
        // (see Bart's answer on duplicating a tree for building the parent and attaching children)
        CommonTree fulltree = null;

        CommonTreeNodeStream nodes = new CommonTreeNodeStream(fulltree); //create node stream
        treeEval walker = new treeEval(nodes);
        walker.startRule(); //walk the combined tree
    }

    public static CommonTree process(String source) throws RecognitionException {
        CharStream afs = null;
        // read file; exit if error
        try {
            afs = new ANTLRFileStream(source);
        }
        catch (IOException e) {
            System.out.println("file not found");
            System.exit(1);
        }

        TLexer lex = new TLexer(afs);
        CommonTokenStream tokens = new CommonTokenStream(lex);
        TParser parser = new TParser(tokens);

        //note startRule is the name of the first rule in your parser grammar
        TParser.startRule_return r = parser.startRule(); //parse this file
        CommonTree ast = (CommonTree) r.getTree();       //create AST from parse of this file
        return ast;                                      //and return it
    }
}
I need to create a grammar for a language with forward references. I think that the easiest way to achieve this is to make several passes on the generated AST, but I need a way to store symbol information in the tree.
Right now my parser correctly generates an AST and computes scopes of the variables and function definitions. The problem is, I don't know how to save the scope information into the tree.
Fragment of my grammar:
composite_instruction
scope JScope;
#init {
$JScope::symbols = new ArrayList();
$JScope::name = "level "+ $JScope.size();
}
#after {
System.out.println("code block scope " +$JScope::name + " = " + $JScope::symbols);
}
: '{' instruction* '}' -> ^(INSTRUCTION_LIST instruction*)
;
I would like to put a reference to the current scope into the tree, something like:
: '{' instruction* '}' -> ^(INSTRUCTION_LIST instruction* {$JScope::symbols})
Is it even possible? Is there any other way to store current scopes in a generated tree? I can generate the scope info in a tree grammar, but it won't change anything, because I still have to store it somewhere for the second pass on the tree.
To my knowledge, the syntax for the rewrite rules doesn't allow for directly assigning values as your tentative snippet suggests. This is in part due to the fact that the parser wouldn't really know to what part of the tree/node the values should be added.
However, one of the cool features of ANTLR-produced ASTs is that the parser makes no assumptions about the type of the nodes. One just needs to implement a TreeAdaptor which serves as a factory for new nodes and as a navigator of the tree structure. One can therefore stuff whatever info may be needed into the nodes, as explained below.
ANTLR provides a default tree node implementation, CommonTree, and in most cases (as in the situation at hand) we merely need to
subclass CommonTree by adding some custom fields to it
subclass the CommonTreeAdaptor to override its create() method, i.e. the way it produces new nodes.
but one could also create a novel type of node altogether, for some odd graph structure or whatnot. For the case at hand, the following should be sufficient (adapt for the specific target language if this isn't Java):
import java.util.ArrayList;

import org.antlr.runtime.tree.*;
import org.antlr.runtime.Token;

public class NodeWithScope extends CommonTree {
    /* Just declare the extra fields for the node */
    public ArrayList symbols;
    public String name;
    public Object whatever_else;

    public NodeWithScope(Token t) {
        super(t);
    }
}

/* TreeAdaptor: we just need to override the create method */
class NodeWithScopeAdaptor extends CommonTreeAdaptor {
    public Object create(Token standardPayload) {
        return new NodeWithScope(standardPayload);
    }
}
One then needs to slightly modify the way the parsing process is started, so that ANTLR (or rather the ANTLR-produced parser) knows to use the NodeWithScopeAdaptor rather than the default CommonTreeAdaptor.
(Step 4.1 below; the rest is a rather standard ANTLR test rig.)
// ***** Typical ANTLR pipe rig *****
// ** 1. input stream
ANTLRInputStream input = new ANTLRInputStream(my_input_file);
// ** 2. Lexer
MyGrammarLexer lexer = new MyGrammarLexer(input);
// ** 3. token stream produced by lexer
CommonTokenStream tokens = new CommonTokenStream(lexer);
// ** 4. Parser
MyGrammarParser parser = new MyGrammarParser(tokens);
// 4.1 !!! Specify the TreeAdapter
NodeWithScopeAdaptor adaptor = new NodeWithScopeAdaptor();
parser.setTreeAdaptor(adaptor); // use my adaptor
// ** 5. Start process by invoking the root rule
MyGrammarParser.MyTopRule_return r = parser.MyTopRule();
// ** 6. AST tree
NodeWithScope t = (NodeWithScope)r.getTree();
// ** 7. etc. parse the tree or do whatever is needed on it.
Finally, your grammar would have to be adapted with something akin to what follows
(note that the node [for the current rule] is only available in the #after section. It may however reference any token attribute and other contextual variables from the grammar level, using the usual $rule.attribute notation):
composite_instruction
scope JScope;
#init {
$JScope::symbols = new ArrayList();
$JScope::name = "level "+ $JScope.size();
}
#after {
((NodeWithScope)$composite_instruction.tree).symbols = $JScope::symbols;
((NodeWithScope)$composite_instruction.tree).name = $JScope::name;
((NodeWithScope)$composite_instruction.tree).whatever_else
= new myFancyObject($x.Text, $y.line, whatever, blah);
}
: '{' instruction* '}' -> ^(INSTRUCTION_LIST instruction*)
;