Antlr parser rule fails to match either of specified lexer rules - java

I have a small work-in-progress Antlr grammar that looks like:
filterExpression returns [ActivityPredicate pred]
: NAME OPERATOR (PACE | NUMBER) {
if ($PACE != null) {
$pred = new SingleActivityPredicate($NAME.text, Operator.fromCharacter($OPERATOR.text), $PACE.text);
} else {
$pred = new SingleActivityPredicate($NAME.text, Operator.fromCharacter($OPERATOR.text), $NUMBER.text);
}
};
OPERATOR: ('>' | '<' | '=') ;
NAME: ('A'..'Z' | 'a'..'z')+ ;
NUMBER: ('0'..'9')+ ('.' ('0'..'9')+)? ;
PACE: ('0'..'9')('0'..'9')? ':' ('0'..'5')('0'..'9');
WS: (' ' | '\t' | '\r'| '\n')+ -> skip;
Hoping to parse things like:
distance = 4 or pace < 8:30
However, with either of those inputs, both $PACE and $NUMBER come back null in the action. Dropping the alternative and just picking PACE works fine (it also works fine the other way, opting for NUMBER):
filterExpression returns [ActivityPredicate pred]
: NAME OPERATOR PACE { ... };
Why is it that when I provide the option, they're both null?

Try this:
filterExpression returns [ActivityPredicate pred]
: n=NAME o=OPERATOR (p=PACE | i=NUMBER) {
if ($p != null) {
$pred = new SingleActivityPredicate(
$n.text, Operator.fromCharacter($o.text), $p.text);
} else {
$pred = new SingleActivityPredicate(
$n.text, Operator.fromCharacter($o.text), $i.text);
}
};
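If it helps to sanity-check the labelled version, a minimal driver along the following lines should print a non-null predicate for both sample inputs. This is only a sketch: it assumes ANTLR 4 (which the -> skip lexer command suggests) and a grammar named Filter, so the FilterLexer/FilterParser class names are assumptions rather than names from the question.
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;

public class FilterDemo {
    public static void main(String[] args) {
        for (String input : new String[] { "distance = 4", "pace < 8:30" }) {
            // Assumed generated classes for a grammar named "Filter"
            FilterLexer lexer = new FilterLexer(CharStreams.fromString(input));
            FilterParser parser = new FilterParser(new CommonTokenStream(lexer));
            // The declared return value 'pred' is exposed as a public field on the rule context
            System.out.println(input + " -> " + parser.filterExpression().pred);
        }
    }
}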

Related

Antlr3 grammar generates parsing error on encountering the Pound char

ANTLR 3 generates an error on encountering the pound char ("£") of the French keyboard layout (the equivalent of the hash "#" char on an English layout), even though the Unicode values for the three special characters @, #, and $ are specified in the lexer rule.
FYI: the Unicode value of the pound char (on the French layout) = the Unicode value of the hash char (on the English layout).
The lexer/parser rules:
grammar SimpleCalc;
options
{
k = 8;
language = Java;
//filter = true;
}
tokens {
PLUS = '+' ;
MINUS = '-' ;
MULT = '*' ;
DIV = '/' ;
}
/*------------------------------------------------------------------
* PARSER RULES
*------------------------------------------------------------------*/
expr : n1=NUMBER ( exp = ( PLUS | MINUS ) n2=NUMBER )*
{
if ($exp.text.equals("+"))
System.out.println("Plus Result = " + $n1.text + $n2.text);
else
System.out.println("Minus Result = " + $n1.text + $n2.text);
}
;
/*------------------------------------------------------------------
* LEXER RULES
*------------------------------------------------------------------*/
NUMBER : (DIGIT)+ ;
WHITESPACE : ( '\t' | ' ' | '\r' | '\n'| '\u000C' )+ { $channel = HIDDEN; } ;
fragment DIGIT : '0'..'9' | '£' | ('\u0040' | '\u0023' | '\u0024');
The text file is also read in as UTF-8:
public static void main(String[] args) throws Exception
{
try
{
args = new String[1];
args[0] = new String("antlr_test.txt");
SimpleCalcLexer lex = new SimpleCalcLexer(new ANTLRFileStream(args[0], "UTF-8"));
CommonTokenStream tokens = new CommonTokenStream(lex);
SimpleCalcParser parser = new SimpleCalcParser(tokens);
parser.expr();
//System.out.println(tokens);
}
catch (Exception e)
{
e.printStackTrace();
}
}
The input file has only one line:
£3 + 4£
The error is:
antlr_test.txt line 1:1 no viable alternative at character '£'
antlr_test.txt line 1:7 no viable alternative at character '£'
What is wrong with my approach, or did I miss something?
I cannot reproduce what you describe. When I test your grammar without modifications, I get a NumberFormatException, which is expected, because Integer.parseInt("£3") cannot succeed.
When I change your embedded code into this:
{
if ($exp.text.equals("+"))
System.out.println("Result = " + (Integer.parseInt($n1.text.replaceAll("\\D", "")) + Integer.parseInt($n2.text.replaceAll("\\D", ""))));
else
System.out.println("Result = " + (Integer.parseInt($n1.text.replaceAll("\\D", "")) - Integer.parseInt($n2.text.replaceAll("\\D", ""))));
}
and regenerate the lexer and parser classes (something you might not have done), then rerun the driver code, I get the following output:
Result = 7
EDIT
Perhaps the pound sign in the grammar is the issue? What if you try:
fragment DIGIT : '0'..'9' | '\u00A3' | ('\u0040' | '\u0023' | '\u0024');
instead of:
fragment DIGIT : '0'..'9' | '£' | ('\u0040' | '\u0023' | '\u0024');
?
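One more thing worth checking is what the lexer actually sees after the UTF-8 decode. The following is a stand-alone diagnostic sketch (not part of the grammar) that dumps the code points of the input file named in the question:
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class DumpCodePoints {
    public static void main(String[] args) throws Exception {
        String text = new String(Files.readAllBytes(Paths.get("antlr_test.txt")), StandardCharsets.UTF_8);
        // Print each character together with its Unicode code point
        text.codePoints().forEach(cp -> System.out.printf("U+%04X %c%n", cp, cp));
    }
}
A genuine pound sign shows up as U+00A3. Seeing U+FFFD instead means the file bytes are not valid UTF-8 (for example, it was saved as Latin-1), and seeing the pair U+00C2 U+00A3 means it was double-encoded; in either case the lexer never receives the '£' the grammar expects.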

AST how to deal with empty nodes

I am building an expression evaluator using Java, JFlex (lexer generator) and Jacc (parser generator). I need to:
generate the lexer
generate the parser
generate the AST
display the AST graph
evaluate expression
I was able to create the lexer, the parser, and the AST. Now I am trying to draw the AST graph using the visitor pattern, but this made a problem with my generated AST evident (so to speak): in my calculator I need to handle parentheses, and they create empty nodes in my AST (which, I suppose, makes my tree a parse tree rather than an AST). Here is the relevant part of my grammar:
Calc : /* empty */
| AddExpr { ast = new Calc($1); }
;
AddExpr : ModExpr
| AddExpr '+' ModExpr { $$ = new AddExpr($1, $3, "+"); }
| AddExpr '-' ModExpr { $$ = new AddExpr($1, $3, "-"); }
;
ModExpr : IntDivExpr
| ModExpr MOD IntDivExpr { $$ = new ModExpr($1, $3); }
;
IntDivExpr : MultExpr
| IntDivExpr DIV MultExpr { $$ = new IntDivExpr($1, $3); }
;
MultExpr : UnaryExpr
| MultExpr '*' UnaryExpr { $$ = new MultExpr($1, $3, "*"); }
| MultExpr '/' UnaryExpr { $$ = new MultExpr($1, $3, "/"); }
;
UnaryExpr : ExpExpr
| '-' UnaryExpr { $$ = new UnaryExpr($2, "-"); }
| '+' UnaryExpr { $$ = new UnaryExpr($2, "+"); }
;
ExpExpr : Value
| ExpExpr '^' Value { $$ = new ExpExpr($1, $3); }
;
Value : DoubleLiteral
| '(' AddExpr ')' { $$ = new Value($2); }
;
DoubleLiteral : DOUBLE { $$ = $1; }
;
Here is an example expression:
1*(2+3)/(4-5)*((((6))))
The resulting AST graph (image not included here) shows a Value node for each pair of parentheses. I have a few ideas on how to handle this, but I am not sure how to proceed:
Try to handle this in my grammar (not sure how as I am not allowed to use precedence directives)
Handle this in my evaluator
If you don't want Value nodes, then just replace { $$ = new Value($2); } with { $$ = $2; }.
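To make the effect concrete, here is a self-contained toy in plain Java (the class names are mine, not the asker's generated Jacc node classes): once the action simply passes the inner expression through, nothing in the tree corresponds to a parenthesis, so the evaluator never has to special-case grouping.
interface Expr { double eval(); }

class Num implements Expr {
    final double value;
    Num(double value) { this.value = value; }
    public double eval() { return value; }
}

class BinOp implements Expr {
    final Expr left, right;
    final char op;
    BinOp(Expr left, char op, Expr right) { this.left = left; this.op = op; this.right = right; }
    public double eval() {
        double l = left.eval(), r = right.eval();
        switch (op) {
            case '+': return l + r;
            case '-': return l - r;
            case '*': return l * r;
            default:  return l / r; // '/'
        }
    }
}

public class ParenDemo {
    public static void main(String[] args) {
        // 1*(2+3)/(4-5): no node anywhere corresponds to a parenthesis
        Expr e = new BinOp(
                new BinOp(new Num(1), '*', new BinOp(new Num(2), '+', new Num(3))),
                '/',
                new BinOp(new Num(4), '-', new Num(5)));
        System.out.println(e.eval()); // prints -5.0
    }
}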

JTB generating jj file that compiles but throws NullPointerExceptions on grammatically valid input

I am very new to JavaCC and JTB. I am trying to get a moderately complicated parser/validator going, using a JTB file to generate the AST for me. I'm having a lot of problems making that work, so I decided to do the simplest example possible.
I am using Eclipse 3.8.1. My JavaCC and JTB plugins are:
plugins/sf.eclipse.javacc_1.5.30/jars/javacc-5.0.jar
plugins/sf.eclipse.javacc_1.5.30/jars/jtb-1.4.9.jar
I believe these to be fairly recent versions of what's available and so they should work OK.
I created a project and inside that project, I created a new JTB file. The plugin generated a bunch of code.
/**
* JTB template file created by SF JavaCC plugin 1.5.28+ wizard for JTB 1.4.0.2+ and JavaCC 1.5.0+
*/
options
{
static = true;
JTB_P = "";
}
PARSER_BEGIN(SimpleGrammar)
// this import is not needed as it is generated by JTB
// import syntaxtree.*;
// this import is needed as it is not generated by JTB
import visitor.*;
public class SimpleGrammar
{
public static void main(String args [])
{
System.out.println("Reading from standard input...");
System.out.print("Enter an expression like \"1+(2+3)*var;\" :");
new SimpleGrammar(System.in);
try
{
Start start = SimpleGrammar.Start();
DepthFirstVoidVisitor v = new MyVisitor();
start.accept(v);
}
catch (Exception e)
{
System.out.println("Oops.");
System.out.println(e);
System.out.println(e.getMessage());
}
}
}
class MyVisitor extends DepthFirstVoidVisitor
{
public void visit(NodeToken n)
{
System.out.println("visit " + n.tokenImage);
}
}
PARSER_END(SimpleGrammar)
SKIP :
{
" "
| "\t"
| "\n"
| "\r"
| < "//" (~[ "\n", "\r" ])*
(
"\n"
| "\r"
| "\r\n"
) >
| < "/*" (~[ "*" ])* "*"
(
~[ "/" ] (~[ "*" ])* "*"
)*
"/" >
}
TOKEN : /* LITERALS */
{
< INTEGER_LITERAL :
< DECIMAL_LITERAL > ([ "l", "L" ])?
| < HEX_LITERAL > ([ "l", "L" ])?
| < OCTAL_LITERAL > ([ "l", "L" ])?
>
| < #DECIMAL_LITERAL : [ "1"-"9" ] ([ "0"-"9" ])* >
| < #HEX_LITERAL : "0" [ "x", "X" ] ([ "0"-"9", "a"-"f", "A"-"F" ])+ >
| < #OCTAL_LITERAL : "0" ([ "0"-"7" ])* >
}
TOKEN : /* IDENTIFIERS */
{
< IDENTIFIER :
< LETTER >
(
< LETTER >
| < DIGIT >
)* >
| < #LETTER : [ "_", "a"-"z", "A"-"Z" ] >
| < #DIGIT : [ "0"-"9" ] >
}
void Start() :
{}
{
Expression() ";"
}
void Expression() :
{}
{
AdditiveExpression()
}
void AdditiveExpression() :
{}
{
MultiplicativeExpression()
(
(
"+"
| "-"
)
MultiplicativeExpression()
)*
}
void MultiplicativeExpression() :
{}
{
UnaryExpression()
(
(
"*"
| "/"
| "%"
)
UnaryExpression()
)*
}
void UnaryExpression() :
{}
{
"(" Expression() ")"
| Identifier()
| MyInteger()
}
void Identifier() :
{}
{
< IDENTIFIER >
}
void MyInteger() :
{}
{
< INTEGER_LITERAL >
}
The generated file actually had a bunch of ?parser_name? placeholders wherever you see SimpleGrammar, which I noticed and changed.
So I right-click on the SimpleGrammar.jtb file and hit "Compile with JavaCC | JJTree | JTB" and it generates all the files just like it's supposed to. So I then press F11 to run the program. Here is the output of that console session:
Reading from standard input...
Enter an expression like "1+(2+3)*var;" :1+2
Oops.
java.lang.NullPointerException
null
That's funny, it probably shouldn't be doing that. So I change the exception catching to ParseException so that it won't catch the NullPointerException and I'll get a debug console. After I do that and hunt around a bit, I find this snippet of code:
static final public MyInteger MyInteger() throws ParseException {
// --- JTB generated node declarations ---
NodeToken n0 = null;
Token n1 = null;
jj_consume_token(INTEGER_LITERAL);
n0 = JTBToolkit.makeNodeToken(n1);
{if (true) return new MyInteger(n0);}
throw new Error("Missing return statement in function");
}
So what's happening here is that n1 is still null when it is passed to JTBToolkit.makeNodeToken(), because nothing is ever assigned to it. I don't know why this is happening.
I've also looked in the generated jtb.out.jj file to see what's there. Here's the snippet corresponding to the code that I'm seeing:
MyInteger MyInteger() :
{
// --- JTB generated node declarations ---
NodeToken n0 = null;
Token n1 = null;
}
{
< INTEGER_LITERAL >
{ n0 = JTBToolkit.makeNodeToken(n1); }
{ return new MyInteger(n0); }
}
Is that wrong? I'm honestly not sure. I was under the impression that you did everything in the JTB file and all the files that were generated after that were not to be touched.
Any insight as to what's going wrong here would be greatly appreciated.
EDIT
So I've done a bit of poking around and I've found that if I change
<INTEGER_LITERAL>
to
n1 = <INTEGER_LITERAL>
in the above-mentioned snippet, there is no longer a NullPointerException; I can successfully enter "1;" at the console and it parses.
I think what I'm trying to figure out is why JTB is generating a bogus jtb.out.jj, rather than what I can do to stop the NullPointerExceptions. I've got a decent-sized, non-toy grammar I want to work with, and manually editing the jtb.out.jj file every time is not a scalable solution.
I had the same problem; it's a bug in the generation. If you use JTB version 1.4.7, it will probably work.
You can find the available versions here: https://java.net/projects/jtb/sources/svn/show/trunk/lib?rev=75 (the download section doesn't really work for me)

superfluous LOOKAHEAD in javacc causes error?

I have the following TT.jj. If I uncomment the SomethingElse part below, it successfully parses a language of the form create create blahblah or create blahblah. But if I keep the SomethingElse part commented out while retaining the LOOKAHEAD, JavaCC complains that the lookahead is not necessary and is "ignored", yet the resulting parser only accepts an empty string.
I thought that since JavaCC says it is "ignored", it should not have any effect? Basically, a superfluous LOOKAHEAD causes an error. How does that work exactly? Maybe JavaCC's implementation of LOOKAHEAD is not exactly up to the spec?
options{
IGNORE_CASE=true ;
STATIC=false;
DEBUG_PARSER=true;
DEBUG_LOOKAHEAD=false;
DEBUG_TOKEN_MANAGER=false;
// FORCE_LA_CHECK=true;
UNICODE_INPUT=true;
}
PARSER_BEGIN(TT)
import java.util.*;
/**
* The parser generated by JavaCC
*/
public class TT {
}
PARSER_END(TT)
///////////////////////////////////////////// main stuff concerned
void Statement() :
{ }
{
LOOKAHEAD(2)
CreateTable()
//|
//SomethingElse()
}
void CreateTable():
{
}
{
<K_CREATE> <K_CREATE> <S_IDENTIFIER>
}
//void SomethingElse():
//{}{
// <K_CREATE> <S_IDENTIFIER>
//}
//
//////////////////////////////////////////////////////////
SKIP:
{
" "
| "\t"
| "\r"
| "\n"
}
TOKEN: /* SQL Keywords. prefixed with K_ to avoid name clashes */
{
<K_CREATE: "CREATE">
}
TOKEN : /* Numeric Constants */
{
< S_DOUBLE: ((<S_LONG>)? "." <S_LONG> ( ["e","E"] (["+", "-"])? <S_LONG>)?
|
<S_LONG> "." (["e","E"] (["+", "-"])? <S_LONG>)?
|
<S_LONG> ["e","E"] (["+", "-"])? <S_LONG>
)>
| < S_LONG: ( <DIGIT> )+ >
| < #DIGIT: ["0" - "9"] >
}
TOKEN:
{
< S_IDENTIFIER: ( <LETTER> | <ADDITIONAL_LETTERS> )+ ( <DIGIT> | <LETTER> | <ADDITIONAL_LETTERS> | <SPECIAL_CHARS>)* >
| < #LETTER: ["a"-"z", "A"-"Z", "_", "$"] >
| < #SPECIAL_CHARS: "$" | "_" | "#" | "@">
| < S_CHAR_LITERAL: "'" (~["'"])* "'" ("'" (~["'"])* "'")*>
| < S_QUOTED_IDENTIFIER: "\"" (~["\n","\r","\""])+ "\"" | ("`" (~["\n","\r","`"])+ "`") | ( "[" ~["0"-"9","]"] (~["\n","\r","]"])* "]" ) >
/*
To deal with database names (columns, tables) using not only latin base characters, one
can expand the following rule to accept additional letters. Here is the addition of german umlauts.
There seems to be no way to recognize letters by an external function to allow
a configurable addition. One must rebuild JSqlParser with this new "Letterset".
*/
| < #ADDITIONAL_LETTERS: ["ä","ö","ü","Ä","Ö","Ü","ß"] >
}
The lookahead specification that JavaCC says it is ignoring is not ignored. Moral: Don't put lookahead specifications at nonchoice points.
In more detail: when a lookahead (other than a purely semantic lookahead) appears at a non-choice point, it appears to generate a lookahead method that always returns false; therefore the lookahead fails and, there being no other choice, an exception is thrown.
Here is the generated code from the bad .jj:
final public void Statement() throws ParseException {
trace_call("Statement");
try {
if (jj_2_1(5)) {
} else {
jj_consume_token(-1);
throw new ParseException();
}
CreateTable();
} finally {
trace_return("Statement");
}
}
Here is the good one:
final public void Statement() throws ParseException {
trace_call("Statement");
try {
if (jj_2_1(3)) {
CreateTable();
} else {
switch ((jj_ntk==-1)?jj_ntk():jj_ntk) {
case K_CREATE:
SomethingElse();
break;
default:
jj_la1[0] = jj_gen;
jj_consume_token(-1);
throw new ParseException();
}
}
} finally {
trace_return("Statement");
}
}
That is, the superfluous LOOKAHEAD is not ignored at all: JavaCC mechanically tries to list all the options (of which there are none in the bad case) in the if-else structure, which leads to a parser that goes straight to jj_consume_token(-1) and fails.

How to handle escape sequences in string literals in ANTLR 3?

I've been looking through the ANTLR v3 documentation (and my trusty copy of "The Definitive ANTLR reference"), and I can't seem to find a clean way to implement escape sequences in string literals (I'm currently using the Java target). I had hoped to be able to do something like:
fragment
ESCAPE_SEQUENCE
: '\\' '\'' { setText("'"); }
;
STRING
: '\'' (ESCAPE_SEQUENCE | ~('\'' | '\\'))* '\''
{
// strip the quotes from the resulting token
setText(getText().substring(1, getText().length() - 1));
}
;
For example, I would want the input token "'Foo\'s House'" to become the String "Foo's House".
Unfortunately, the setText(...) call in the ESCAPE_SEQUENCE fragment sets the text for the entire STRING token, which is obviously not what I want.
Is there a way to implement this grammar without adding a method to go back through the resulting string and manually replace escape sequences (e.g., with something like setText(escapeString(getText())) in the STRING rule)?
Here is how I accomplished this in the JSON parser I wrote.
STRING
@init{StringBuilder lBuf = new StringBuilder();}
:
'"'
( escaped=ESC {lBuf.append(getText());} |
normal=~('"'|'\\'|'\n'|'\r') {lBuf.appendCodePoint(normal);} )*
'"'
{setText(lBuf.toString());}
;
fragment
ESC
: '\\'
( 'n' {setText("\n");}
| 'r' {setText("\r");}
| 't' {setText("\t");}
| 'b' {setText("\b");}
| 'f' {setText("\f");}
| '"' {setText("\"");}
| '\'' {setText("\'");}
| '/' {setText("/");}
| '\\' {setText("\\");}
| ('u')+ i=HEX_DIGIT j=HEX_DIGIT k=HEX_DIGIT l=HEX_DIGIT
{setText(ParserUtil.hexToChar(i.getText(),j.getText(),
k.getText(),l.getText()));}
)
;
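ParserUtil.hexToChar above is the answerer's own helper rather than part of the ANTLR runtime; a minimal equivalent, assuming it simply turns four hex digits into a one-character string, could look like this:
public final class ParserUtil {
    // Assumed equivalent of the helper used above: combine four hex digits
    // (e.g. "0", "0", "4", "1") into the corresponding character ("A").
    public static String hexToChar(String a, String b, String c, String d) {
        int codeUnit = Integer.parseInt(a + b + c + d, 16);
        return String.valueOf((char) codeUnit);
    }
}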
For ANTLR 4, the Java target, and a standard escaped-string grammar, I used a dedicated class, CharSupport, to translate the string. It is available in the ANTLR API:
STRING : '"'
( ESC
| ~('"'|'\\'|'\n'|'\r')
)*
'"' {
setText(
org.antlr.v4.misc.CharSupport.getStringFromGrammarStringLiteral(
getText()
)
);
}
;
As I saw in the v4 documentation and by experiment, @init is no longer supported in the lexer part!
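Note that org.antlr.v4.misc.CharSupport comes from the ANTLR tool internals rather than the org.antlr.v4.runtime package, so it is typically only on the classpath when the complete (tool) jar is. If that dependency is unwanted, a small hand-rolled unescaper covering the usual escapes works too; the following is a sketch (the class name is mine), to be called as setText(Unescape.stringLiteral(getText())):
public final class Unescape {
    public static String stringLiteral(String quoted) {
        // Drop the surrounding quotes, then rewrite the standard backslash escapes.
        String body = quoted.substring(1, quoted.length() - 1);
        StringBuilder out = new StringBuilder(body.length());
        for (int i = 0; i < body.length(); i++) {
            char ch = body.charAt(i);
            if (ch != '\\') { out.append(ch); continue; }
            char esc = body.charAt(++i);
            switch (esc) {
                case 'n': out.append('\n'); break;
                case 'r': out.append('\r'); break;
                case 't': out.append('\t'); break;
                case 'b': out.append('\b'); break;
                case 'f': out.append('\f'); break;
                case 'u': // \ uXXXX: four hex digits follow
                    out.append((char) Integer.parseInt(body.substring(i + 1, i + 5), 16));
                    i += 4;
                    break;
                default:  out.append(esc); // \" \' \\ and '/' pass through unchanged
            }
        }
        return out.toString();
    }
}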
Another (possibly more efficient) alternative is to use rule arguments:
STRING
@init { final StringBuilder buf = new StringBuilder(); }
:
'"'
(
ESCAPE[buf]
| i = ~( '\\' | '"' ) { buf.appendCodePoint(i); }
)*
'"'
{ setText(buf.toString()); };
fragment ESCAPE[StringBuilder buf] :
'\\'
( 't' { buf.append('\t'); }
| 'n' { buf.append('\n'); }
| 'r' { buf.append('\r'); }
| '"' { buf.append('\"'); }
| '\\' { buf.append('\\'); }
| 'u' a = HEX_DIGIT b = HEX_DIGIT c = HEX_DIGIT d = HEX_DIGIT { buf.append(ParserUtil.hexChar(a, b, c, d)); }
);
I needed to do just that, but my target was C and not Java. Here's how I did it, based on answer #1 (and its comment), in case anyone needs something similar:
QUOTE : '\'';
STR
@init{ pANTLR3_STRING unesc = GETTEXT()->factory->newRaw(GETTEXT()->factory); }
: QUOTE ( reg = ~('\\' | '\'') { unesc->addc(unesc, reg); }
| esc = ESCAPED { unesc->appendS(unesc, GETTEXT()); } )+ QUOTE { SETTEXT(unesc); };
fragment
ESCAPED : '\\'
( '\\' { SETTEXT(GETTEXT()->factory->newStr8(GETTEXT()->factory, (pANTLR3_UINT8)"\\")); }
| '\'' { SETTEXT(GETTEXT()->factory->newStr8(GETTEXT()->factory, (pANTLR3_UINT8)"\'")); }
)
;
HTH.
