byacc shift/reduce - yacc

I'm having trouble figuring this one out, as well as the shift/reduce problem.
Adding ';' to the end doesn't solve the problem, since I can't change the language; it has to stay exactly as in the following example. Would any %prec operand work?
The example is the following:
A variable can be declared either as a pointer or as an integer, so both of these are valid:
<int> a = 0
int a = 1
The grammar goes:
%left '<'
declaration: variable
| declaration variable
variable : type tNAME '=' expr
| type tNAME
type : '<' type '>'
| tINT
expr : tINTEGER
| expr '<' expr
It obviously gives a shift/reduce conflict after expr, since the parser can either shift the '<' as a "less-than" operator or reduce so that another variable declaration can start.
I want precedence to be given to the variable declaration. I have tried creating a %nonassoc prec_aux and putting %prec prec_aux after '<' type '>' and after type tNAME, but it doesn't solve my problem.
How can I solve this?
The output was:
35: shift/reduce conflict (shift 47, reduce 7) on '<'
state 35
variable : type tNAME '=' expr . (7)
expr : expr . '+' expr (26)
expr : expr . '-' expr (27)
expr : expr . '*' expr (28)
expr : expr . '/' expr (29)
expr : expr . '%' expr (30)
expr : expr . '<' expr (31)
expr : expr . '>' expr (32)
'>' shift 46
'<' shift 47
'+' shift 48
'-' shift 49
'*' shift 50
'/' shift 51
'%' shift 52
$end reduce 7
tINT reduce 7
That's the output, and the error seems to be the one I mentioned.
Does anyone know a different solution? Adding a new terminal to the language isn't really an option.
I think the solution is to rewrite the grammar so that it can somehow look ahead and see whether a type or an expr follows the '<', but I'm not seeing how.
Precedence is unlikely to work since it's the same character. Is there a way to give precedence to nonterminals that we define, such as declaration?
Thanks in advance

Your grammar gets confused in text like this:
int a = b
<int> c
That '<' on the second line could be part of an expression in the first declaration. It would have to look ahead further to find out.
This is the reason most languages have a statement terminator. This produces no conflicts:
%token tNAME;
%token tINT;
%token tINTEGER;
%token tTERM;
%left '<';
%%
declaration: variable
| declaration variable
variable : type tNAME '=' expr tTERM
| type tNAME tTERM
type : '<' type '>'
| tINT
expr : tINTEGER
| expr '<' expr
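For completeness, here is a hedged sketch (not part of the original answer) of where tTERM could come from, assuming the end of a line may act as the terminator, which the question's fixed language may not permit. The flex rules might look like:
[ \t]+                  ;                       /* skip blanks */
\n                      { return tTERM; }       /* line end terminates a declaration */
"int"                   { return tINT; }
[0-9]+                  { return tINTEGER; }
[A-Za-z_][A-Za-z_0-9]*  { return tNAME; }
.                       { return yytext[0]; }   /* '<', '>', '=' pass through */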
It helps when creating a parser to know how to design a grammar to eliminate possible conflicts. For that you would need an understanding of how parsers work, which is outside the scope of this answer :)

The basic problem here is that you need more lookahead than the one token you get with yacc/bison. When the parser sees a '<', it has no way of telling whether it is done with the previous declaration and is looking at the beginning of a bracketed type, or whether this is a less-than operator. There are two basic things you can do here:
Use a parsing method such as bison's %glr-parser option or btyacc, which can deal with non-LR(1) grammars
Use the lexer to do extra lookahead and return disambiguating tokens
For the latter, you would have the lexer do extra lookahead after a '<' and return a different token if it's followed by something that looks like a type. The easiest approach is to use flex's / trailing-context (lookahead) operator. For example:
"<"/[ \t\n\r]*"<" return OPEN_ANGLE;
"<"/[ \t\n\r]*"int" return OPEN_ANGLE;
"<" return '<';
Then you change your bison rules to expect OPEN_ANGLE in types instead of <:
type : OPEN_ANGLE type '>'
| tINT
expr : tINTEGER
| expr '<' expr
For more complex problems, you can use flex start states, or even insert an entire token filter/transform pass between the lexer and the parser.
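As a rough illustration of that token-filter idea, here is a hedged C sketch (not from the answer): real_yylex stands for the flex-generated scanner, renamed in whatever way is convenient, and the parser calls this yylex instead; the token names follow the grammar above.
#include "y.tab.h"                       /* token codes: tINT, OPEN_ANGLE, ... */

int real_yylex(void);

static int pending = -1;                 /* one token of buffered lookahead */

int yylex(void)
{
    int t;
    if (pending >= 0) { t = pending; pending = -1; }
    else              { t = real_yylex(); }

    if (t == '<') {
        pending = real_yylex();          /* peek at the next token */
        if (pending == tINT || pending == '<')
            return OPEN_ANGLE;           /* this '<' starts a bracketed type */
    }
    /* yylval set by a peeked token is still intact when that token is
       handed to the parser on the next call */
    return t;                            /* ordinary token, or a real less-than */
}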

Here is a fix, though not an entirely satisfactory one:
%{
%}
%token tNAME tINT tINTEGER
%left '<'
%left '+'
%nonassoc '=' /* <-- LOOK */
%%
declaration: variable
| declaration variable
variable : type tNAME '=' expr
| type tNAME
type : '<' type '>'
| tINT
expr : tINTEGER
| expr '<' expr
| expr '+' expr
;
The issue is a conflict between these two LR items. The dot-final item:
variable : type tNAME '=' expr .
and this one:
expr : expr . '<' expr
Notice that these two items involve different operators ('=' and '<'). It is not, as you seem to think, a conflict between different productions involving the '<' operator.
By adding = to the precedence ranking, we fix the issue in the sense that the conflict diagnostic goes away.
Note that I gave = a high precedence. This will resolve the conflict by favoring the reduce. This means that you cannot use a '<' expression as an initializer:
int name = 4 < 3 // syntax error
When the < is seen, the int name = 4 wants to be reduced, and the idea is that < must be the start of the next declaration, as part of a type production.
To allow < relational expressions to be used as initializers, add the support for parentheses into the expression grammar. Then users can parenthesize:
int foo = (4 < 3) <int> bar = (2 < 1)
There is no way to fix that without a more powerful parsing method or hacks.
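For reference, supporting the parenthesized form shown above only takes one more production in the expression part of the grammar at the top of this answer (a sketch):
expr : tINTEGER
     | expr '<' expr
     | expr '+' expr
     | '(' expr ')'
     ;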
What if you move the %nonassoc before %left '<', giving it low precedence? Then the shift will be favored. Unfortunately, that has the consequence that you cannot write another <int> declaration after a declaration.
int foo = 3 <int> bar = 4
^ // error: the machine shifted and is now doing: expr '<' . expr.
So that is the wrong way to resolve the conflict; you want to be able to write multiple such declarations.
Another Note:
My TXR language, which implements something equivalent to Parsing Expression Grammars, handles this grammar fine. This is essentially LL(infinite), which trumps LALR(1).
We don't even have to have a separate lexical analyzer and parser! That's just something made necessary by the limitations of one-symbol lookahead, and the need for utmost efficiency on 1970s hardware.
Example output from the shell command line, demonstrating the parse by translation to a Lisp-like abstract syntax tree, which is bound to the variable dl (declaration list). So this is complete with semantic actions, yielding output that can be further processed in TXR Lisp. Identifiers are translated to Lisp symbols via calls to intern, and numbers are translated to number objects.
$ txr -l type.txr -
int x = 3 < 4 int y
(dl (decl x int (< 3 4)) (decl y int nil))
$ txr -l type.txr -
< int > x = 3 < 4 < int > y
(dl (decl x (pointer int) (< 3 4)) (decl y (pointer int) nil))
$ txr -l type.txr -
int x = 3 + 4 < 9 < int > y < int > z = 4 + 3 int w
(dl (decl x int (+ 3 (< 4 9))) (decl y (pointer int) nil)
(decl z (pointer int) (+ 4 3)) (decl w int nil))
$ txr -l type.txr -
<<<int>>>x=42
(dl (decl x (pointer (pointer (pointer int))) 42))
The source code of type.txr:
#(define ws)#/[ \t]*/#(end)
#(define int)#(ws)int#(ws)#(end)
#(define num (n))#(ws)#{n /[0-9]+/}#(ws)#(filter :tonumber n)#(end)
#(define id (id))#\
#(ws)#{id /[A-Za-z_][A-Za-z_0-9]*/}#(ws)#\
#(set id #(intern id))#\
#(end)
#(define type (ty))#\
#(local l)#\
#(cases)#\
#(int)#\
#(bind ty #(progn 'int))#\
#(or)#\
<#(type l)>#\
#(bind ty #(progn '(pointer ,l)))#\
#(end)#\
#(end)
#(define expr (e))#\
#(local e1 op e2)#\
#(cases)#\
#(additive e1)#{op /[<>]/}#(expr e2)#\
#(bind e #(progn '(,(intern op) ,e1 ,e2)))#\
#(or)#\
#(additive e)#\
#(end)#\
#(end)
#(define additive (e))#\
#(local e1 op e2)#\
#(cases)#\
#(num e1)#{op /[+\-]/}#(expr e2)#\
#(bind e #(progn '(,(intern op) ,e1 ,e2)))#\
#(or)#\
#(num e)#\
#(end)#\
#(end)
#(define decl (d))#\
#(local type id expr)#\
#(type type)#(id id)#\
#(maybe)=#(expr expr)#(or)#(bind expr nil)#(end)#\
#(bind d #(progn '(decl ,id ,type ,expr)))#\
#(end)
#(define decls (dl))#\
#(coll :gap 0)#(decl dl)#(end)#\
#(end)
#(freeform)
#(decls dl)

Related

Is there a way to eliminate these reduce/reduce conflicts?

I am working on a toy language for fun (called NOP) using Bison and Flex, and I have hit a wall. I am trying to parse sequences that look like name1.name2 and name1.func1().name2, and I get a lot of reduce/reduce conflicts. I know why I am getting them, but I am having a heck of a time figuring out what to do about it.
So my question is whether this is a legitimate irregularity that can't be "fixed", or if my grammar is just wrong. The productions in question are compound_name and compound_symbol. It seems to me that they should parse separately. If I try to combine them I get conflicts with that as well. In the grammar, I am trying to illustrate what I want to do, rather than anything "clever".
%debug
%defines
%locations
%{
%}
%define parse.error verbose
%locations
%token FPCONST INTCONST UINTCONST STRCONST BOOLCONST
%token SYMBOL
%token AND OR NOT EQ NEQ LTE GTE LT GT
%token ADD_ASSIGN SUB_ASSIGN MUL_ASSIGN DIV_ASSIGN MOD_ASSIGN
%token DICT LIST BOOL STRING FLOAT INT UINT NOTHING
%right ADD_ASSIGN SUB_ASSIGN
%right MUL_ASSIGN DIV_ASSIGN MOD_ASSIGN
%left AND OR
%left EQ NEQ
%left LT GT LTE GTE
%right ':'
%left '+' '-'
%left '*' '/' '%'
%left NEG
%right NOT
%%
program
: {} all_module {}
;
all_module
: module_list
;
module_list
: module_element {}
| module_list module_element {}
;
module_element
: compound_symbol {}
| expression {}
;
compound_name
: SYMBOL {}
| compound_name '.' SYMBOL {}
;
compound_symbol_element
: compound_name {}
| func_call {}
;
compound_symbol
: compound_symbol_element {}
| compound_symbol '.' compound_symbol_element {}
;
func_call
: compound_name '(' expression_list ')' {}
;
formatted_string
: STRCONST {}
| STRCONST '(' expression_list ')' {}
;
type_specifier
: STRING {}
| FLOAT {}
| INT {}
| UINT {}
| BOOL {}
| NOTHING {}
;
constant
: FPCONST {}
| INTCONST {}
| UINTCONST {}
| BOOLCONST {}
| NOTHING {}
;
expression_factor
: constant { }
| compound_symbol { }
| formatted_string {}
;
expression
: expression_factor {}
| expression '+' expression {}
| expression '-' expression {}
| expression '*' expression {}
| expression '/' expression {}
| expression '%' expression {}
| expression EQ expression {}
| expression NEQ expression {}
| expression LT expression {}
| expression GT expression {}
| expression LTE expression {}
| expression GTE expression {}
| expression AND expression {}
| expression OR expression {}
| '-' expression %prec NEG {}
| NOT expression { }
| type_specifier ':' SYMBOL {} // type cast
| '(' expression ')' {}
;
expression_list
: expression {}
| expression_list ',' expression {}
;
%%
This is a very stripped down parser. The "real" one is about 600 lines. It has no conflicts (and passes a bunch of tests) if I don't try to use a function call in a variable name. I am looking at re-writing it as a packrat grammar if I cannot get Bison to do what I want. The rest of the project is here: https://github.com/chucktilbury/nop
$ bison -tvdo temp.c temp.y
temp.y: warning: 4 shift/reduce conflicts [-Wconflicts-sr]
temp.y: warning: 16 reduce/reduce conflicts [-Wconflicts-rr]
All of the reduce/reduce conflicts are the result of:
module_element
: expression
| compound_symbol
That creates an ambiguity because you also have
expression
: expression_factor
expression_factor
: compound_symbol
So the parser can't tell whether or not you need the unit productions to be reduced. Eliminating module_element: compound_symbol doesn't change the set of sentences which can be produced; it just requires that a compound_symbol be reduced through expression before becoming a module_element.
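A minimal sketch of that change (just dropping the compound_symbol alternative):
module_element
    : expression {}     /* a bare compound_symbol now reaches this point only via expression */
    ;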
As Chris Dodd points out in a comment, the fact that two module_elements can appear consecutively without a delimiter creates an additional ambiguity: the grammar allows a - b to be parsed either as a single expression (and consequently a single module_element) or as two consecutive expressions, a and -b, and thus two consecutive module_elements. That ambiguity accounts for three of the four shift/reduce conflicts.
Both of these are probably errors introduced when you simplified the grammar, since it appears that module elements in the full grammar are definitions, not expressions. Removing modules altogether and using expression as the starting symbol leaves only a single conflict.
That conflict is indeed the result of an ambiguity between compound_symbol and compound_name, as noted in your question. The problem is seen in these productions (non-terminals shortened to make typing easier):
name: SYMBOL
| name '.' SYMBOL
symbol
: element
| symbol '.' element
element
: name
That means that both a and a.b are names and hence elements. But a symbol is a .-separated list of elements, so a.b could be derived in two ways:
symbol → element → name → a.b
or
symbol → symbol . element
       → element . element
       → name . element
       → a . element
       → a . name
       → a . b
I fixed this by simplifying the grammar to:
compound_symbol
: compound_name
| compound_name '(' expression_list ')'
compound_name
: SYMBOL
| compound_symbol '.' SYMBOL
That gets rid of func_call and compound_symbol_element, which as far as I can see serve no purpose. I don't know if the non-terminal names remaining really capture anything sensible; I think it would make more sense to call compound_symbol something like name_or_call.
This grammar could be simplified further if higher-order functions were possible; the existing grammar forbids hof()(), presumably because you don't contemplate allowing a function to return a function object.
But even with higher-order functions, you might want to differentiate between function calls and member access/array subscript, because in many languages a function cannot return an object reference and hence a function call cannot appear on the left-hand side of an assignment operator. In other languages, such as C, the requirement that the left-hand side of an assignment operator be a reference ("lvalue") is enforced outside of the grammar. (And in C++, a function call or even an overloaded operator can return a reference, so the restriction needs to be enforced after type analysis.)

How does one remove shift/reduce errors caused by limited lookahead?

In my grammar, it's usually possible to use only declarations that look like:
int x, y, z = 23;
int i = 1;
int j;
In a for statement I'd like to use a set of comma-separated declarations of different types, e.g.
for (int i = 0, double d = 2.0; i < 0; i++) { ... }
Using yacc, the limited lookahead creates problems. Here is the naive grammar:
variables_decl
: type_expression IDENT
| variables_decl ',' IDENT
;
declaration
: variables_decl
| variables_decl '=' initializer
;
declaration_list
: declaration
| declaration_list ',' declaration
;
This causes a shift/reduce conflict on ',':
state 149
100 variables_decl: variables_decl . ',' IDENT
101 declaration: variables_decl .
102 | variables_decl . '=' initializer
',' shift, and go to state 261
'=' shift, and go to state 262
',' [reduce using rule 101 (declaration)]
$default reduce using rule 101 (declaration)
I'd like to fix this issue so that this actually works:
for (double x, y, int i, j = 0, long l = 1; i < 0; i++) { ... }
But it's not obvious to me how to do that.
In general terms, you avoid this type of shift/reduce conflict by avoiding forcing the parser to make a decision until absolutely necessary.
It's understandable why you have structured the grammar as you have; intuitively, a declaration list is a list of declarations, where each declaration is a type and a list of variables of that type. The problem is that this definition makes it impossible to know whether a comma belongs to an inner or outer list.
Moreover, one extra lookahead token might not be enough, since the following IDENT could be a typename or the name of a variable to be declared, assuming type-expression is the usual C syntax which can start with an identifier corresponding to a typename.
But that's not the only way to look at the declaration list syntax. Instead, you can think of it as a list of individual declarations, each of which starts with an optional type (except the first in the list, which must have an explicit type), using the semantic convention that an omitted type is the same as the type of the previous variable. That leads to the following conflict-free grammar:
declaration_list: explicit_decl
| declaration_list ',' declaration
declaration : explicit_decl
| implicit_decl
explicit_decl : type_expression implicit_decl
implicit_decl : IDENT opt_init
opt_init : %empty | '=' expr
That does not capture the syntax of C declarations, since not all C declarations have the form type_expression IDENT. The IDENT being defined can be buried inside the declaration, as with, for example, int a[4] or int f(int i); fortunately, these forms are of limited use in a for loop.
On the other hand, unlike your grammar it does allow all declared variables to be initialised, so
int a = 1, b = 0, double x = -1.0, y = 0.0
should work.
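The grammar only enforces the syntax; the "an omitted type is the same as the type of the previous variable" convention lives in the semantic actions. Here is a hedged, self-contained sketch of one way to carry it, under simplifying assumptions: typenames are a single TYPENAME token, initializers are plain numbers, and current_type and declare() are invented names for illustration, not anything from the question.
%{
static int current_type;                 /* type of the last explicit_decl */
void declare(int type, char *name);      /* hypothetical symbol-table hook */
%}
%union { int type; char *name; }
%token <name> IDENT
%token <type> TYPENAME
%token NUMBER
%type  <type> type_expression
%%
declaration_list: explicit_decl
        | declaration_list ',' declaration
        ;
declaration : explicit_decl
        | implicit_decl
        ;
explicit_decl : type_expression { current_type = $1; } implicit_decl
        ;
implicit_decl : IDENT opt_init  { declare(current_type, $1); }
        ;
opt_init: %empty
        | '=' expr
        ;
type_expression: TYPENAME
        ;
expr    : NUMBER
        ;
Bison accepts this sketch without conflicts; the mid-rule action runs after the type has been parsed, so every implicit_decl sees the most recent explicit type.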
Another note: the first item in a C for clause can be empty, a declaration (possibly in the form of a list) or an expression. In the last case, a top-level , is an operator, not a list indicator.
In short, the above fragment might or might not be a solution in the context of your actual grammar. But it is conflict-free in a simple test framework where typed declarations are always of the form typename identifier.

how to resolve an ambiguity

I have a grammar:
grammar Test;
s : ID OP (NUMBER | ID);
ID : [a-z]+ ;
NUMBER : '.'? [0-9]+ ;
OP : '/.' | '/' ;
WS : [ \t\r\n]+ -> skip ;
An expression like x/.123 can either be parsed as (s x /. 123), or as (s x / .123). With the grammar above I get the first variant.
Is there a way to get both parse trees? Is there a way to control how it is parsed? Say, if there is a number after the /., then emit /; otherwise emit /. in the tree.
I am new to ANTLR.
An expression like x/.123 can either be parsed as (s x /. 123), or as (s x / .123)
I'm not sure. In the ReplaceAll page(*), Possible Issues paragraph, it is said that "Periods bind to numbers more strongly than to slash", so that /.123 will always be interpreted as a division operation by the number .123. Next it is said that to avoid this issue, a space must be inserted in the input between the /. operator and the number, if you want it to be understood as a replacement.
So there is only one possible parse tree (otherwise how could the Wolfram parser decide how to interpret the statement?).
ANTLR4 lexers and parsers are greedy: the lexer (parser) tries to consume as many input characters (tokens) as it can while matching a rule. With your OP rule OP : '/.' | '/' ; the lexer will always match the input /. to the /. alternative (even if the rule is written OP : '/' | '/.' ;). This means there is no ambiguity, and there is no chance of the input being interpreted as OP=/ and NUMBER=.123.
Given my small experience with ANTLR, I have found no other solution than to split the ReplaceAll operator into two tokens.
Grammar Question.g4 :
grammar Question;
/* Parse Wolfram ReplaceAll. */
question
@init {System.out.println("Question last update 0851");}
: s+ EOF
;
s : division
| replace_all
;
division
: expr '/' NUMBER
{System.out.println("found division " + $expr.text + " by " + $NUMBER.text);}
;
replace_all
: expr '/' '.' replacement
{System.out.println("found ReplaceAll " + $expr.text + " with " + $replacement.text);}
;
expr
: ID
| '"' ID '"'
| NUMBER
| '{' expr ( ',' expr )* '}'
;
replacement
: expr '->' expr
| '{' replacement ( ',' replacement )* '}'
;
ID : [a-z]+ ;
NUMBER : '.'? [0-9]+ ;
WS : [ \t\r\n]+ -> skip ;
Input file t.text :
x/.123
x/.x -> 1
{x, y}/.{x -> 1, y -> 2}
{0, 1}/.0 -> "zero"
{0, 1}/. 0 -> "zero"
Execution :
$ export CLASSPATH=".:/usr/local/lib/antlr-4.6-complete.jar"
$ alias a4='java -jar /usr/local/lib/antlr-4.6-complete.jar'
$ alias grun='java org.antlr.v4.gui.TestRig'
$ a4 Question.g4
$ javac Q*.java
$ grun Question question -tokens -diagnostics t.text
[#0,0:0='x',<ID>,1:0]
[#1,1:1='/',<'/'>,1:1]
[#2,2:5='.123',<NUMBER>,1:2]
[#3,7:7='x',<ID>,2:0]
[#4,8:8='/',<'/'>,2:1]
[#5,9:9='.',<'.'>,2:2]
[#6,10:10='x',<ID>,2:3]
[#7,12:13='->',<'->'>,2:5]
[#8,15:15='1',<NUMBER>,2:8]
[#9,17:17='{',<'{'>,3:0]
...
[#29,47:47='}',<'}'>,4:5]
[#30,48:48='/',<'/'>,4:6]
[#31,49:50='.0',<NUMBER>,4:7]
...
[#40,67:67='}',<'}'>,5:5]
[#41,68:68='/',<'/'>,5:6]
[#42,69:69='.',<'.'>,5:7]
[#43,71:71='0',<NUMBER>,5:9]
...
[#48,83:82='<EOF>',<EOF>,6:0]
Question last update 0851
found division x by .123
found ReplaceAll x with x->1
found ReplaceAll {x,y} with {x->1,y->2}
found division {0,1} by .0
line 4:10 extraneous input '->' expecting {<EOF>, '"', '{', ID, NUMBER}
found ReplaceAll {0,1} with 0->"zero"
The input x/.123 is ambiguous until the slash. Then the parser has two choices: '/' NUMBER in the division rule or '/' '.' replacement in the replace_all rule. I think that NUMBER absorbs the input and so there is no more ambiguity.
(*) the link was yesterday in a comment that has disappeared, i.e. Wolfram Language & System, ReplaceAll

How do I find reduce/shift conflicts in my yacc code?

I'm writing a flex/yacc program that should read some tokens and an easy grammar, using Cygwin.
I'm guessing something is wrong with my BNF grammar, but I can't seem to locate the problem. Below is some code
%start statement_list
%%
statement_list: statement
|statement_list statement
;
statement: condition|simple|loop|call_func|decl_array|decl_constant|decl_var;
call_func: IDENTIFIER'('ID_LIST')' {printf("callfunc\n");} ;
ID_LIST: IDENTIFIER
|ID_LIST','IDENTIFIER
;
simple: IDENTIFIER'['NUMBER']' ASSIGN expr
|PRINT STRING
|PRINTLN STRING
|RETURN expr
;
bool_expr: expr E_EQUAL expr
|expr NOT_EQUAL expr
|expr SMALLER expr
|expr E_SMALLER expr
|expr E_LARGER expr
|expr LARGER expr
|expr E_EQUAL bool
|expr NOT_EQUAL bool
;
expr: expr ADD expr {$$ = $1+$3;}
| expr MULT expr {$$ = $1*$3;}
| expr MINUS expr {$$ = $1-$3;}
| expr DIVIDE expr {if($3 == 0) yyerror("divide by zero");
else $$ = $1 / $3;}
|expr ASSIGN expr
| NUMBER
| IDENTIFIER
;
bool: TRUE
|FALSE
;
decl_constant: LET IDENTIFIER ASSIGN expr
|LET IDENTIFIER COLON "bool" ASSIGN bool
|LET IDENTIFIER COLON "Int" ASSIGN NUMBER
|LET IDENTIFIER COLON "String" ASSIGN STRING
;
decl_var: VAR IDENTIFIER
|VAR IDENTIFIER ASSIGN NUMBER
|VAR IDENTIFIER ASSIGN STRING
|VAR IDENTIFIER ASSIGN bool
|VAR IDENTIFIER COLON "Bool" ASSIGN bool
|VAR IDENTIFIER COLON "Int" ASSIGN NUMBER
|VAR IDENTIFIER COLON "String" ASSIGN STRING
;
decl_array: VAR IDENTIFIER COLON "Int" '[' NUMBER ']'
|VAR IDENTIFIER COLON "Bool" '[' NUMBER ']'
|VAR IDENTIFIER COLON "String" '[' NUMBER ']'
;
condition: IF '(' bool_expr ')' statement ELSE statement;
loop: WHILE '(' bool_expr ')' statement;
I've tried changing statement into
statement:';';
and reading a simple token to test whether it works, but it seems like my code refuses to enter that part of the grammar.
Also, when I compile it, it tells me there are 18 shift/reduce conflicts. Should I try to locate and solve all of them?
EDIT: I have edited my code using Chris Dodd's answer, trying to solve each conflict by looking at the output file. The last few conflicts seem to be located in the below code.
expr: expr ADD expr {$$ = $1+$3;}
| expr MULT expr {$$ = $1*$3;}
| expr MINUS expr {$$ = $1-$3;}
| expr DIVIDE expr {if($3 == 0) yyerror("divide by zero");
else $$ = $1 / $3;}
|expr ASSIGN expr
| NUMBER
| IDENTIFIER
;
And here is part of the output file telling me what's wrong.
state 60
28 expr: expr . ADD expr
29 | expr . MULT expr
30 | expr . MINUS expr
31 | expr . DIVIDE expr
32 | expr . ASSIGN expr
32 | expr ASSIGN expr .
ASSIGN shift, and go to state 36
ADD shift, and go to state 37
MINUS shift, and go to state 38
MULT shift, and go to state 39
DIVIDE shift, and go to state 40
ASSIGN [reduce using rule 32 (expr)]
ADD [reduce using rule 32 (expr)]
MINUS [reduce using rule 32 (expr)]
MULT [reduce using rule 32 (expr)]
DIVIDE [reduce using rule 32 (expr)]
$default reduce using rule 32 (expr)
I don't understand: why would it choose rule 32 when it reads ADD, MULT, DIVIDE or other tokens? What's wrong with this part of my grammar?
Also, even if the above part of the grammar is wrong, shouldn't my compiler be able to read the rest of the grammar correctly? For instance,
let a = 5
should be readable, yet the program returns a syntax error?
Your grammar looks reasonable, though it does have ambiguities in expressions, most of which could be resolved by precedence declarations. You should definitely look at ALL the conflicts reported and understand why they occur, and preferably change the grammar to get rid of them.
As for your specific issue, if you change it to have statement: ';' ;, it should accept that. You don't show any of your lexing code, so the problem may be there. It may be helpful to compile your parser with -DYYDEBUG=1 to enable the debugging code generated by yacc/bison, and to set the global variable yydebug to 1 before calling yyparse, which will dump a trace of everything the parser is doing to stderr.
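A hedged sketch of that setup, assuming a conventional standalone driver (the question does not show its main):
/* compile the generated parser with -DYYDEBUG=1 so the trace code is built in */
extern int yydebug;
extern int yyparse(void);

int main(void)
{
    yydebug = 1;     /* trace every shift, reduce and goto to stderr */
    return yyparse();
}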

Controlling Parameter Slurping

I'm trying to write a grammar that supports functions calls without using parentheses:
f x, y
As in Haskell, I'd like function calls to minimally slurp up their parameters. That is, I want
g 5 + 3
to mean
(g 5) + 3
instead of
g (5 + 3)
Unfortunately, I'm getting the second parse with this grammar:
grammar Parameters;
expr
: '(' expr ')'
| expr MULTIPLICATIVE_OPERATOR expr
| expr ADDITIVE_OPERATOR expr
| ID (expr (',' expr)*?)??
| INT
;
MULTIPLICATIVE_OPERATOR: [*/%];
ADDITIVE_OPERATOR: '+';
ID: [a-z]+;
INT: '-'? [0-9]+;
WHITESPACE: [ \t\n\r]+ -> skip;
The parse tree I'm getting groups the expression as g (5 + 3).
I had thought that the subrule listed first would get attempted first. In this case, expr ADDITIVE_OPERATOR expr appears before the ID subrule, so why is the ID subrule taking higher precedence?
In this case ANTLR does not do the correct rule transformation (to eliminate left recursion and to handle precedence):
expr
: expr_1[0]
;
expr_1[int p]
: ('(' expr_1[0] ')' | INT | ID (expr_1[0] (',' expr_1[0])*?)??)
( {4 >= $p}? MULTIPLICATIVE_OPERATOR expr_1[5]
| {3 >= $p}? ADDITIVE_OPERATOR expr_1[4]
)*
;
leading to (expr (expr_1 a (expr_1 5 + (expr_1 3))))
correct would be:
expr
: expr_1[0]
;
expr_1[int p]
: ('(' expr_1[0] ')' | INT | ID (expr_1[5] (',' expr_1[5])*?)??)
( {4 >= $p}? MULTIPLICATIVE_OPERATOR expr_1[5]
| {3 >= $p}? ADDITIVE_OPERATOR expr_1[4]
)*
;
leading to (expr (expr_1 a (expr_1 5) + (expr_1 3)))
I am not certain whether this is a bug in ANTLR4 or a trade-off of the transformation algorithm. Perhaps one should file an issue in the ANTLR4 JIRA.
To solve your problem, you can simply put the correctly transformed grammar into your code and it should work. The explanation of the rule transformation is found in "The Definitive ANTLR 4 Reference" on pages 249ff (and perhaps somewhere on the web).