Note: This is a self-answered question that aims to provide a reference about one of the most common mistakes made by ANTLR users.
When I test this very simple grammar:
```antlr
grammar KeyValues;

keyValueList: keyValue*;
keyValue: key=IDENTIFIER '=' value=INTEGER ';';

IDENTIFIER: [A-Za-z0-9]+;
INTEGER: [0-9]+;
WS: [ \t\r\n]+ -> skip;
```
With the following input:
```
foo = 42;
```
I end up with the following run-time errors:
```
line 1:6 mismatched input '42' expecting INTEGER
line 1:8 mismatched input ';' expecting '='
```
Why doesn't ANTLR recognize `42` as an `INTEGER` in this case? It should match the pattern `[0-9]+` just fine.

If I invert the order in which `INTEGER` and `IDENTIFIER` are defined, it seems to work, but why does the order matter in the first place?
In ANTLR, the lexer is isolated from the parser: it splits the text into typed tokens according to the lexer grammar rules, and the parser has no influence on this process (it cannot say "give me an `INTEGER` now", for instance). The lexer produces the token stream on its own. Furthermore, the parser doesn't care about the token text; it only cares about token types when matching its rules.

This easily becomes a problem when several lexer rules can match the same input text. In that case, the token type is chosen according to these precedence rules:
- First, select the lexer rule that matches the longest input substring.
- If the matched text is also an implicitly defined token (like `'='`), use the implicit rule as the token type.
- If several lexer rules match input of the same length, choose the one defined first in the grammar.

These rules are very important to keep in mind in order to use ANTLR effectively.
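One way to see this isolation in action is to run the generated lexer on its own, without any parser, and print the tokens it emits. Here is a minimal sketch, assuming the Java target (`DumpTokens` is just an illustrative wrapper class; `KeyValuesLexer` is the class ANTLR generates for the `KeyValues` grammar):

```java
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.Token;

public class DumpTokens {
    public static void main(String[] args) {
        // Only the lexer is involved here: it produces the token stream by itself.
        KeyValuesLexer lexer = new KeyValuesLexer(CharStreams.fromString("foo = 42;"));
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        tokens.fill();

        for (Token token : tokens.getTokens()) {
            // Each token prints as [@index,start:stop='text',<type>,line:column]
            System.out.println(token);
        }
    }
}
```

With the rule order from the question, this shows `42` being emitted with the `IDENTIFIER` token type, which is exactly what trips up the parser below.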
In the example from the question, the parser expects to see the following token stream in order to match the `keyValue` parser rule:

```
IDENTIFIER '=' INTEGER ';'
```

where `'='` and `';'` are implicit token types.

Since `42` can match both `INTEGER` and `IDENTIFIER`, and `IDENTIFIER` is defined first, the parser will receive the following input instead:

```
IDENTIFIER '=' IDENTIFIER ';'
```

which it won't be able to match to the `keyValue` rule. Remember, the parser cannot communicate with the lexer; it can only receive data from it, so it cannot say "try to match `INTEGER` next".

It's advisable to minimize overlap between lexer rules to limit the impact of this effect. In the example above, we have several options:
- Redefine `IDENTIFIER` as `[A-Za-z] [A-Za-z0-9]*` (require it to start with a letter). This avoids the problem entirely but prevents identifier names starting with a number from being defined, so it changes the intent of the grammar.
- Swap the order of `INTEGER` and `IDENTIFIER`. This solves the problem for most cases, but prevents fully numeric identifiers from being defined, therefore it also changes the intent of the grammar in a subtle, not so obvious way.
- First, swap `INTEGER` and `IDENTIFIER` in order to give priority to `INTEGER`. Then, define a parser rule `id: IDENTIFIER | INTEGER;` and use that rule instead of `IDENTIFIER` in other parser rules, which would change `keyValue` to `key=id '=' value=INTEGER ';'`.
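For illustration, applying this last option to the grammar from the question could look like the following sketch (only the changes described above are made):

```antlr
grammar KeyValues;

keyValueList: keyValue*;
keyValue: key=id '=' value=INTEGER ';';
id: IDENTIFIER | INTEGER;   // accepts fully numeric keys as well

INTEGER: [0-9]+;            // defined first, so it now wins for all-digit input
IDENTIFIER: [A-Za-z0-9]+;
WS: [ \t\r\n]+ -> skip;
```

With this ordering, `42` is tokenized as `INTEGER`, while a fully numeric key such as `123` still parses through the `INTEGER` alternative of `id`.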
To sum up, here's a second example of lexer behavior. Take a combined grammar along these lines:
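```antlr
grammar LexerPriorityRulesExample;   // grammar name chosen for illustration

// Parser rules
randomParserRule: 'foo';             // implicitly declares the 'foo' token type

// Lexer rules
BAR: 'bar';
IDENTIFIER: [A-Za-z]+;
BAZ: 'baz';                          // will never match, see below
WS: [ \t\r\n]+ -> skip;
```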
Given the following input:
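```
aaa foo bar baz barz
```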
The lexer will produce the following token sequence:
```
IDENTIFIER 'foo' BAR IDENTIFIER IDENTIFIER EOF
```
- `aaa` is of type `IDENTIFIER`: only the `IDENTIFIER` rule can match this token; there is no ambiguity.
- `foo` is of type `'foo'`: the parser rule `randomParserRule` introduces the implicit `'foo'` token type, which takes priority over the `IDENTIFIER` rule.
- `bar` is of type `BAR`: this text matches the `BAR` rule, which is defined before the `IDENTIFIER` rule and therefore has precedence.
- `baz` is of type `IDENTIFIER`: this text matches the `BAZ` rule, but it also matches the `IDENTIFIER` rule. The latter is chosen because it is defined before `BAZ`. Given this grammar, `BAZ` will never match, as the `IDENTIFIER` rule already covers everything `BAZ` can match.
- `barz` is of type `IDENTIFIER`: the `BAR` rule can match the first 3 characters of this string (`bar`), but the `IDENTIFIER` rule matches all 4 characters. As `IDENTIFIER` matches a longer substring, it is chosen over `BAR`.
- `EOF` (end of file) is an implicitly defined token type which always occurs at the end of the input.

As a rule of thumb, specific rules should be defined before more generic rules: if a rule can only match input which is already covered by a previously defined rule, it will never be used.
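A typical case where this ordering matters is keywords versus identifiers. A minimal sketch (grammar and rule names are illustrative):

```antlr
grammar KeywordsExample;

statement: WHILE | IDENTIFIER;  // minimal parser rule so both tokens are used

// The specific keyword rule must come before the generic identifier rule;
// otherwise IDENTIFIER would win the tie for 'while' (same length, defined earlier)
// and WHILE would never match.
WHILE: 'while';
IDENTIFIER: [A-Za-z]+;
WS: [ \t\r\n]+ -> skip;
```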
Implicitly defined rules such as `'foo'` act as if they were defined before all other lexer rules. As they add complexity, it's advisable to avoid them altogether and declare explicit lexer rules instead. Just having a list of tokens in one place, instead of having them scattered across the grammar, is a compelling advantage of this approach.
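For example, the implicit `'foo'` token from the grammar above could be replaced with an explicit lexer rule (a sketch; the name `FOO` is only illustrative):

```antlr
// Reference an explicit token instead of the 'foo' literal:
randomParserRule: FOO;

// Declared before IDENTIFIER so 'foo' keeps its own token type,
// just like the implicit token did:
FOO: 'foo';
```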