I'm writing a code indenter using ANTLR4 and Java. I have successfully generated the lexer and the parser, and the approach I am using is to walk the generated parse tree.
ParseTreeWalker mywalker = new ParseTreeWalker();
mywalker.walk(myListener, myTree);
The auto-generated *BaseListener has methods like the one below:
@Override public void enterEveryRule(ParserRuleContext ctx) { }
I'm very new to ANTLR, but as I understand it, I need to extend *BaseListener, override the relevant methods, and write the indentation code there. So my question is: which methods should I override to indent the input code file? Or, if there is an alternate approach I should take, please let me know.
Thanks!
None. You don't need a parser for this task, and by requiring one you limit yourself to valid code (you could not reformat code containing a syntax error). Instead, take the lexer and iterate over all tokens. Keep state to know where you are (a block, a function, whatever) and indent according to that.
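For illustration, a minimal sketch of that token-driven approach, assuming an ANTLR4-generated lexer named MyLexer with token types LBRACE, RBRACE, and SEMI (hypothetical names; substitute the ones from your grammar):
import org.antlr.v4.runtime.*;

String indent(String source) {
    MyLexer lexer = new MyLexer(CharStreams.fromString(source));
    StringBuilder out = new StringBuilder();
    int depth = 0;  // the state: current brace-nesting level
    for (Token t : lexer.getAllTokens()) {
        if (t.getChannel() != Token.DEFAULT_CHANNEL) {
            continue;  // drop original whitespace; a real tool would keep comments
        }
        switch (t.getType()) {
            case MyLexer.LBRACE:
                out.append("{\n").append(tabs(++depth));
                break;
            case MyLexer.RBRACE:
                depth = Math.max(0, depth - 1);
                out.append("\n").append(tabs(depth)).append("}\n").append(tabs(depth));
                break;
            case MyLexer.SEMI:
                out.append(";\n").append(tabs(depth));
                break;
            default:
                out.append(t.getText()).append(' ');
        }
    }
    return out.toString();
}

String tabs(int depth) {
    return "    ".repeat(depth);  // four spaces per nesting level
}
This is deliberately crude (it only reacts to braces and semicolons), but it shows the shape of the state machine: every token either changes the indentation state or is emitted under the current state.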
I am editing code programmatically by modifying the AST in Eclipse CDT, but after performing the refactoring with the Change#perform method, I found the code had been reformatted. In this example I am just casting a function call to void:
//Old code
Publish(Author_id, Content);
//New code
(void) Publish(
Author_id,
Content);
As you can see, the call has been reformatted onto three lines. How do I stop this behavior?
My code sample that does the refactoring:
INodeFactory factory = ast.getASTNodeFactory();
IASTNode newNode = rewriter.createLiteralNode("( void )"+selectedNode.getRawSignature());
rewriter.replace(selectedNode, newNode, null);
Change change = rewriter.rewriteAST();
change = change.perform(new NullProgressMonitor());
I believe you have two options:
Configure CDT's built-in formatter to format the code the way you like it. In this case, based on the appearance of the "New code", it looks like Line Wrapping -> Function calls -> Arguments -> Line wrapping policy is set to one of the "Wrap all elements" options. You can change it to something you like better.
Implement the org.eclipse.cdt.core.CodeFormatter extension point to provide an alternative formatter that behaves differently from the default code formatter (for example, it could do nothing; a sketch follows the steps below).
UPDATE: I don't know where to find an example of this, but assuming you're familiar with implementing Eclipse extension points in general, here is the reference for this one. The steps would be:
Write a class that extends org.eclipse.cdt.core.formatter.CodeFormatter and implements the format() method
Add an <extension ...> tag to your plugin's plugin.xml referencing your class
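Here is a rough sketch of step 1, a formatter that leaves the code untouched; the abstract methods of org.eclipse.cdt.core.formatter.CodeFormatter vary between CDT releases, so verify the exact signatures against your target version before relying on this:
import java.util.Map;
import org.eclipse.cdt.core.formatter.CodeFormatter;
import org.eclipse.text.edits.MultiTextEdit;
import org.eclipse.text.edits.TextEdit;

public class NoOpFormatter extends CodeFormatter {
    @Override
    public TextEdit format(int kind, String source, int offset, int length,
            int indentationLevel, String lineSeparator) {
        // An empty edit leaves the refactored code exactly as it was produced.
        return new MultiTextEdit();
    }

    @Override
    public void setOptions(Map<String, ?> options) {
        // Nothing to configure for a formatter that never changes anything.
    }
}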
I have a generated grammar that does two things:
Check the syntax of a domain specific language
Evaluate input against that domain specific language
These two functions are separate; let's call them validate() and evaluate().
The validate() function builds the tree from a String input while ensuring it meets the requirements of the BNF for the language. The evaluate() function plugs in values to that tree to get a result (usually true or false).
What the code is currently doing is running validate() each time on the input, just to generate the tree that evaluate() uses. Some of the inputs take up to 60 seconds to be checked. What I would LIKE to do is serialize the results of validate() (assuming it meets the syntax requirements), store the serialized form in the backend database, and just load it from the database as part of evaluate().
I noticed that I can execute the method toStringTree() on the parse tree and retrieve a LISP-style tree. However, can I restore a LISP-style tree to an ANTLR parse tree? If not, can anyone recommend another way to serialize and store the generated parse tree?
Thanks for any help.
Jason
ANTLR 4's ParserRuleContext data structure (the specific implementation of ParseTree used by generated parsers to represent grammar rules in the parse tree) is not serializable by default. Open issue #233 on the project issue tracker covers the feature request. However, based on my experience with many applications using ANTLR for parsing, I'm not convinced serializing the parse trees would be useful in the long run. For each problem serializing the parse tree is meant to address, a better solution already exists.
Another option is to store a hash of the last known valid file in the database. After you use the parser to create a parse tree, you could skip the validation step if the input file has the same hash as the last time it was validated (see the sketch after this list). This leverages two aspects of ANTLR 4:
For the same input file, running the parser twice will produce the same parse tree.
The ANTLR 4 parser is extremely fast in almost all cases (e.g. the Java grammar can process around 20MB of source per second). The remaining cases tend to be caused by poorly structured grammar rules that the new parser interpreter feature in ANTLRWorks 2.2 can analyze and make suggestions for improvement.
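A minimal sketch of that hash check; the hashStore map stands in for whatever database access you have (it is an assumption, not an ANTLR API):
import java.math.BigInteger;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.Map;

boolean needsValidation(Path inputFile, Map<String, String> hashStore) throws Exception {
    byte[] digest = MessageDigest.getInstance("SHA-256")
            .digest(Files.readAllBytes(inputFile));
    String hash = new BigInteger(1, digest).toString(16);
    // Skip validate() when the content hash matches the last validated one.
    return !hash.equals(hashStore.get(inputFile.toString()));
}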
If you need performance beyond what you get with this, then a parse tree isn't the data structure you should be using. StringTemplate 4's enormous performance advantage over StringTemplate 3 came primarily from the fact that the interpreter switched from using ASTs (equivalent to parse trees for this reasoning) to a linear bytecode representation/interpreter. The ASTs for ST4 would never need to be serialized for performance reasons because the bytecode would be serialized instead. In fact, the C# port of StringTemplate 4 provides exactly this feature.
If the input data to your grammar is made of several independent blocks, you could try to store the string of each block separately, and run the parsing process again for each block independently, using a thread pool for example.
Say for example your input data is a set of method declarations:
int add(int a, int b) {
return a+b;
}
int mul(int a, int b) {
return a*b;
}
...
and the grammar is something like:
methodList : methodDeclaration methodList
|
;
methodDeclaration : // your method declaration rules...
The first run of the parser just collects each method's text and stores it. The parser starts the process at the methodList rule.
void visitMethodList(MethodListContext ctx) {
if(ctx.methodDeclaration() != null) {
String methodStr = formatParseTree(ctx.methodDeclaration(), " ");
// store methodStr for later parsing
}
// visit next method list item, if any
if(ctx.methodList() != null) {
visit(ctx.methodList());
}
}
The second run launches the parsing of each method declaration (in a separate thread, for example). For this, the parser starts at the methodDeclaration rule.
void visitMethodDeclaration(MethodDeclarationContext ctx) {
// parse the method block
}
The reason the text of a methodDeclaration rule is formatted is that calling ctx.methodDeclaration().getText() directly would concatenate the text of all child nodes with no separators (see the ANTLR doc), possibly making it unusable for parsing again. If whitespace is a token separator in the grammar, then adding one space between tokens should not change the parse tree.
String formatParseTree(ParseTree tree, String separator) {
StringBuilder builder = new StringBuilder();
for(int i = 0; i < tree.getChildCount(); i ++) {
ParseTree child = tree.getChild(i);
if(child instanceof TerminalNode) {
builder.append(child.getText());
builder.append(separator);
} else if(child instanceof RuleContext) {
builder.append(formatParseTree(child, separator));
}
}
return builder.toString();
}
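To complete the picture, a sketch of the second run on a thread pool; MyLexer and MyParser are assumed names for your generated classes, and CharStreams requires ANTLR 4.7 or later:
import java.util.List;
import java.util.concurrent.*;
import org.antlr.v4.runtime.*;
import org.antlr.v4.runtime.tree.ParseTree;

void parseAllMethods(List<String> methodStrings) throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(
            Runtime.getRuntime().availableProcessors());
    for (String methodStr : methodStrings) {
        pool.submit(() -> {
            MyLexer lexer = new MyLexer(CharStreams.fromString(methodStr));
            MyParser parser = new MyParser(new CommonTokenStream(lexer));
            ParseTree tree = parser.methodDeclaration();  // start rule for one block
            // evaluate(tree) ...
        });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.MINUTES);
}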
Does the ANTLR4-generated code include anything like an unparser that can use the grammar and the parse tree to reconstruct the original source? How would I invoke it if it exists? I ask because it might be useful in some applications and for debugging.
It really depends on what you want to achieve. Remember that lexer tokens put onto the HIDDEN channel (like comments and whitespace) are not parsed at all.
The approach I used was
use additional user-specific information in the lexer token class
parse the source and get the AST
rewind the lexer (token source) and loop over all lexemes, including the hidden ones
for each hidden lexeme, append a reference to it on the corresponding AST leaf
so every AST leaf "knows" which whitespace lexemes follow it
recursively traverse the AST and print all the lexemes (a condensed sketch follows)
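A condensed sketch of steps 3 to 6, using ANTLR 4's stock machinery instead of custom token fields: walk the terminal nodes and, after each one, also print the hidden tokens that follow it. The tokens parameter is assumed to be the BufferedTokenStream the parser consumed; hidden tokens before the very first token are left out for brevity:
import java.util.List;
import org.antlr.v4.runtime.*;
import org.antlr.v4.runtime.tree.*;

void unparse(ParseTree node, BufferedTokenStream tokens, StringBuilder out) {
    if (node instanceof TerminalNode term) {
        Token t = term.getSymbol();
        if (t.getType() == Token.EOF) {
            return;
        }
        out.append(t.getText());
        List<Token> hidden = tokens.getHiddenTokensToRight(t.getTokenIndex());
        if (hidden != null) {
            for (Token h : hidden) {
                out.append(h.getText());  // trailing whitespace and comments
            }
        }
    } else {
        for (int i = 0; i < node.getChildCount(); i++) {
            unparse(node.getChild(i), tokens, out);
        }
    }
}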
Yes! ANTLR's infrastructure (usually) makes the original source data available.
In the default case, you will be using a CommonTokenStream. This inherits from BufferedTokenStream, which offers a whole slew of methods for getting at stuff.
The methods getHiddenTokensToLeft (and ...ToRight) will get you lists of tokens not appearing on the DEFAULT channel. Those tokens will reveal their source text via getText().
What I find even more convenient is BufferedTokenStream.getText(interval), which will give you the text (including hidden) on an Interval, which you can get from your tree element (RuleContext).
To make use of your CommonTokenStream and its methods, you just need to pass it from where you create it and set up your parser to whatever class examines the parse tree, such as your XXXBaseListener. I simply gave my listener a constructor that stores the CommonTokenStream as an instance field.
So when I want the complete text for a rule ctx, I use this little method:
String originalString(ParserRuleContext ctx) {
return this.tokenStream.getText(ctx.getSourceInterval());
}
Alternatively, the tokens also contain line numbers and offsets, if you want to fiddle with those.
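Putting it all together, a sketch of that wiring; the grammar name My (and hence the generated MyBaseListener) is an assumption:
import org.antlr.v4.runtime.*;

public class OriginalTextListener extends MyBaseListener {
    private final CommonTokenStream tokenStream;

    public OriginalTextListener(CommonTokenStream tokenStream) {
        this.tokenStream = tokenStream;  // the same stream the parser consumed
    }

    String originalString(ParserRuleContext ctx) {
        // Includes hidden-channel text (whitespace, comments) inside the rule's span.
        return this.tokenStream.getText(ctx.getSourceInterval());
    }
}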
Let's suppose I have two grammars (and that there is a Lexer defined somewhere), ParserA and ParserB.
In ParserA I have the following code:
parser grammar ParserA;
classDeclaration
scope {
ST mList;
}
...
ParserB is something like:
parser grammar ParserB;
import ParserA;
methodDeclaration : something something { $classDeclaration::mList.add(...) };
The code in the action will fail to compile (by javac) since classDeclaration is in a different class (and file). Any tips on how to fix it?
Any tips on how to fix it?
No, there's (AFAIK) no ANTLR shortcut here: there's no communication possible between imported grammars (either by using scopes or by providing parameters to imported grammar rules).
Aside from getting any real work done, I have an itch. My itch is to write a view engine that closely mimics a template system from another language (Template Toolkit/Perl). This is one of those if I had time/do it to learn something new kind of projects.
I've spent time looking at CoCo/R and ANTLR, and honestly, it makes my brain hurt, but some of CoCo/R is sinking in. Unfortunately, most of the examples are about creating a compiler that reads source code, but none seem to cover how to create a processor for templates.
Yes, those are the same thing, but I can't wrap my head around how to define the language for templates where most of the source is HTML, rather than actual code to be parsed and run.
Are there any good beginner resources out there for this kind of thing? I've taken a gander at Spark, which didn't appear to have the grammar in the repo.
Maybe that is overkill, and one could just text-replace the template syntax with C# in the file and compile it. http://msdn.microsoft.com/en-us/magazine/cc136756.aspx#S2
If you were in my shoes and weren't a language creating expert, where would you start?
The Spark grammar is implemented with a kind-of-fluent domain specific language.
It's declared in a few layers. The rules which recognize the html syntax are declared in MarkupGrammar.cs - those are based on grammar rules copied directly from the xml spec.
The markup rules refer to a limited subset of C# syntax rules declared in CodeGrammar.cs - those are a subset because Spark only needs to recognize enough C# to adjust single quotes around strings to double quotes, match curly braces, etc.
The individual rules are ParseAction<TValue> delegates, which accept a Position and return a ParseResult. The ParseResult is a simple class which contains the TValue data item parsed by the action and a new Position instance which has been advanced past the content that produced the TValue.
That isn't very useful on its own until you introduce a small number of operators, as described in Parsing expression grammar, which can combine single parse actions to build very detailed and robust expressions about the shape of different syntax constructs.
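The real Spark types are C#, but the shape of the idea fits in a few lines; here is a Java sketch, with names that mirror Spark's for illustration only:
import java.util.function.Function;

record Position(String input, int offset) { }
record ParseResult<T>(T value, Position rest) { }
interface ParseAction<T> extends Function<Position, ParseResult<T>> { }

class Combinators {
    // Matches a literal string at the current position, or returns null on failure.
    static ParseAction<String> literal(String expected) {
        return pos -> pos.input().startsWith(expected, pos.offset())
                ? new ParseResult<>(expected,
                        new Position(pos.input(), pos.offset() + expected.length()))
                : null;
    }

    // PEG-style ordered choice: try the first action, fall back to the second.
    static <T> ParseAction<T> choice(ParseAction<T> first, ParseAction<T> second) {
        return pos -> {
            ParseResult<T> r = first.apply(pos);
            return r != null ? r : second.apply(pos);
        };
    }
}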
The technique of using a delegate as a parse action came from Luke H's blog post Monadic Parser Combinators using C# 3.0. I also wrote a post about Creating a Domain Specific Language for Parsing.
It's also entirely possible, if you like, to reference the Spark.dll assembly and inherit a class from the base CharGrammar to create an entirely new grammar for a particular syntax. It's probably the quickest way to start experimenting with this technique, and an example of that can be found in CharGrammarTester.cs.
Step 1. Use regular expressions (regexp substitution) to split your input template string into a token list. For example, split
hel<b>lo[if foo]bar is [bar].[else]baz[end]world</b>!
to
write('hel<b>lo')
if('foo')
write('bar is')
substitute('bar')
write('.')
else()
write('baz')
end()
write('world</b>!')
Step 2. Convert your token list to a syntax tree:
* Sequence
** Write
*** ('hel<b>lo')
** If
*** ('foo')
*** Sequence
**** Write
***** ('bar is')
**** Substitute
***** ('bar')
**** Write
***** ('.')
*** Write
**** ('baz')
** Write
*** ('world</b>!')
class Instruction {
}
class Write : Instruction {
    string text;
}
class Substitute : Instruction {
    string varname;
}
class Sequence : Instruction {
    Instruction[] items;
}
class If : Instruction {
    string condition;
    Instruction then;
    Instruction otherwise; // "else" is a reserved word, so the field needs another name
}
Step 3. Write a recursive function (called the interpreter), which can walk your tree and execute the instructions there.
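A Java sketch of that interpreter, assuming Java equivalents of the Instruction classes above; treating a variable as true whenever it is defined is an assumption, so adjust the If branch to your own truthiness rules:
import java.util.Map;

void execute(Instruction ins, StringBuilder out, Map<String, String> vars) {
    if (ins instanceof Write w) {
        out.append(w.text);                 // literal template text
    } else if (ins instanceof Substitute s) {
        out.append(vars.get(s.varname));    // a variable's value
    } else if (ins instanceof Sequence seq) {
        for (Instruction item : seq.items) {
            execute(item, out, vars);       // children in order
        }
    } else if (ins instanceof If cond) {
        Instruction branch = vars.containsKey(cond.condition) ? cond.then : cond.otherwise;
        if (branch != null) {
            execute(branch, out, vars);
        }
    }
}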
An alternative approach (instead of steps 1-3), if your language supports eval() (such as Perl, Python, or Ruby): use a regexp substitution to convert the template to an eval()-able string in the host language, and run eval() to instantiate the template.
There are so many things still to do, but it does work for one simple GET statement plus a test. That's a start.
http://github.com/claco/tt.net/
In the end, I already had too much time invested in ANTLR to give loudejs' method a go. I wanted to spend a little more time on the whole process rather than the parser/lexer. Maybe in version 2 I can have a go at the Spark way, when my brain understands things a little more.
Vici Parser (formerly known as LazyParser.NET) is an open-source tokenizer/template parser/expression parser which can help you get started.
If it's not what you're looking for, then you may get some ideas by looking at the source code.