ANTLR AST construction - Java

I am trying to build an AST for the grammar below using ANTLR:
condition_in
: column_identifier ('NOT')? 'IN' (sql_element_list | LPAREN select_stmt RPAREN)
;
For the above, how do I build a tree rooted at NOT IN or IN depending on the input? Or is there a better way?
Also, for Python-like dicts, how do I construct an AST? A tree with MAP as root and a child MAP_PAIR for each key:value pair would be great, I guess:
map : '{' collection_element ':' collection_element (',' collection_element ':' collection_element)* '}' ;
I tried several alternatives with labels and tree rewrites, but ANTLRWorks always complains.
Any help would be appreciated.

Try something like this:
grammar T;
options {
output=AST;
}
tokens {
NOT_IN;
MAP_PAIR;
MAP;
}
condition_in
: column_identifier ( 'NOT' 'IN' in_end -> ^(NOT_IN column_identifier in_end)
| 'IN' in_end -> ^('IN' column_identifier in_end)
)
;
in_end
: sql_element_list
| LPAREN select_stmt RPAREN -> select_stmt
;
map
: '{' (entry (',' entry)*)? '}' -> ^(MAP entry*)
;
entry
: k=collection_element ':' v=collection_element -> ^(MAP_PAIR $k $v)
;
// ...
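To check the result, a minimal ANTLR 3 test harness could look like this. It is only a sketch: it assumes the grammar above is named T, that the missing rules such as column_identifier and collection_element are filled in, and that TLexer/TParser have been generated.
import org.antlr.runtime.*;
import org.antlr.runtime.tree.*;

public class MapTreeTest {
    public static void main(String[] args) throws Exception {
        // hypothetical input; assumes collection_element matches identifiers and numbers
        ANTLRStringStream in = new ANTLRStringStream("{a:1, b:2}");
        TLexer lexer = new TLexer(in);
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        TParser parser = new TParser(tokens);
        // with output=AST, each rule's return object carries the rewritten tree
        CommonTree tree = (CommonTree) parser.map().getTree();
        System.out.println(tree.toStringTree()); // expected: (MAP (MAP_PAIR a 1) (MAP_PAIR b 2))
    }
}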

Related

ANTLR Grammar line 1:6 mismatched input '<EOF>' expecting '.'

I am playing with ANTLR4 grammar files, and I wanted to write my own JSONPath grammar.
I've come up with this:
grammar ObjectPath;
objectPath : dnot;
dnot : ROOT expr ('.' expr)
| EOF
;
expr : select #selectExpr
| ID #idExpr
;
select : ID '[]' #selectAll
| ID '[' INT ']' #selectIndex
| ID '[' INT (',' INT)* ']' #selectIndexes
| ID '[' INT ':' INT ']' #selectRange
| ID '[' INT ':]' #selectFrom
| ID '[:' INT ']' #selectUntil
| ID '[-' INT ':]' #selectLast
| ID '[?(' query ')]' #selectQuery
;
query : expr (AND|OR) expr # andOr
| ALL # all
| QPREF ID # prop
| QPREF ID GT INT # gt
| QPREF ID LT INT # lt
| QPREF ID EQ INT # eq
| QPREF ID GTE INT # gte
| QPREF ID LTE INT # lte
;
/** Lexer **/
ROOT : '$.' ;
QPREF : '#.' ;
ID : [a-zA-Z][a-zA-Z0-9]* ;
INT : '0' | [1-9][0-9]* ;
AND : '&&' ;
OR : '||' ;
GT : '>' ;
LT : '<' ;
EQ : '==' ;
GTE : '>=' ;
LTE : '<=' ;
ALL : '*' ;
After running this on a simple expression:
CharStream input = CharStreams.fromString("$.name");
ObjectPathLexer lexer = new ObjectPathLexer(input);
CommonTokenStream tokens = new CommonTokenStream(lexer);
ObjectPathParser parser = new ObjectPathParser(tokens);
ParseTree parseTree = parser.dnot();
ObjectPathDefaultVisitor visitor = ...
System.out.println(visitor.visit(parseTree));
System.out.println(parseTree.toStringTree(parser));
The output is OK, meaning that "name" is actually retrieved from the JSON, but there's a warning I cannot explain:
line 1:6 mismatched input '<EOF>' expecting '.'
I've read that I need to explicitly add an EOF token to my start rule (dnot), but this doesn't seem to work.
Any idea what I can do?
Your input $.name cannot be parsed by your rule:
dnot : ROOT expr ('.' expr)
| EOF
;
$.name produces 2 tokens:
ROOT
ID
But your first alternative, ROOT expr ('.' expr), expects 2 expressions separated by a '.'. Perhaps you meant to make the second expr optional, like this:
dnot : ROOT expr ('.' expr)*
| EOF
;
And the EOF is generally added at the end of your start rule, to force the parser to consume all tokens. As you have it now, the parser successfully parses ROOT expr, but then fails to parse further and produces the warning you saw (expecting '.').
Since objectPath seems to be your start rule, I think this is what you want to do:
objectPath : dnot EOF;
dnot : ROOT expr ('.' expr)?
;
Also, tokens like '[]', '[?(', etc. look suspicious. I'm not really familiar with Object Path, but by gluing these characters together, input like [ ] (a [ and ] separated by a space) will not be matched by '[]'. So if foo[ ] is valid, I'd write it like this instead:
select : ID '[' ']' #selectAll
| ...
and skip spaces in the lexer:
SPACES : [ \t\r\n]+ -> skip;
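For completeness, here is a sketch of the test harness after regenerating the parser from the fixed grammar (same class names as in your snippet, visitor left out). Starting at objectPath, which now ends with EOF, the warning disappears:
import org.antlr.v4.runtime.*;
import org.antlr.v4.runtime.tree.*;

public class ObjectPathTest {
    public static void main(String[] args) {
        CharStream input = CharStreams.fromString("$.name");
        ObjectPathLexer lexer = new ObjectPathLexer(input);
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        ObjectPathParser parser = new ObjectPathParser(tokens);
        // objectPath ends with EOF, so the parser must consume the whole input
        ParseTree parseTree = parser.objectPath();
        // prints the LISP-style parse tree, without the mismatched '<EOF>' warning
        System.out.println(parseTree.toStringTree(parser));
    }
}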

How to integrate the generated lexer/parser from Antlr4 into my java project

Please bear with me, I'm not a coding expert.
I built a grammar in ANTLR4 using ANTLRWorks 2. I tested the grammar with various test strings and it works fine in there. What I'm having trouble with now is using the generated lexer and parser in my own code. I'm using Java as the code generation target.
Here is the code I'm trying:
String s = "query(std::map .find(x) == y): bla";
ANTLRInputStream input = new ANTLRInputStream(s);
TokenStream tokens = new CommonTokenStream(new pqlcLexer(input));
pqlcParser parser = new pqlcParser(tokens);
ParseTree tree = parser.query();
System.out.println(tree.toStringTree());
The output of that is just "query", which is my start rule. I would expect something like the output from ANTLRWorks:
"(query (quant_expr query ( (match std::map . find ( (cm x) ) == (cm (numeral 256))) ) : (query (qexpr bla))))"
Here is the tree visually: http://puu.sh/94Nlx/00dc35bb05.png
Which methods do I have to call to get the proper syntax tree as output?
Here is the generated Parser for reference: http://pastebin.com/Lb34TyRW and the grammar:
// Lexer
// keywords
EXISTS: 'exists';
REDUCE: 'reduce';
QUERY: 'query';
INT: 'int';
DOUBLE: 'double';
CONST: 'const';
STDVECTOR: 'std::vector';
STDMAP: 'std::map';
STDSET: 'std::set';
INTEGER_LITERAL : (DIGIT)+ ;
fragment DIGIT: '0'..'9';
DOUBLE_LITERAL : DIGIT '.' DIGIT+;
LPAREN : '(';
RPAREN : ')';
LBRACK : '[';
RBRACK : ']';
DOT : '.';
EQUAL : '==';
LE : '<=';
GE : '>=';
GT : '>';
LT : '<';
ADD : '+';
MUL : '*';
AND : '&&';
COLON : ':';
IDENTIFIER : JavaLetter JavaLetterOrDigit*;
fragment JavaLetter : [a-zA-Z$_]; // these are the "java letters" below 0xFF
fragment JavaLetterOrDigit : [a-zA-Z0-9$_]; // these are the "java letters or digits" below 0xFF
WS
: [ \t\r\n\u000C]+ -> skip
;
COMMENT
: '/*' .*? '*/' -> skip
;
LINE_COMMENT
: '//' ~[\r\n]* -> skip
;
// Parser
//start_rule: query;
query :
quant_expr
| qexpr+
| IDENTIFIER // order IDENTIFIER and qexpr+?
| numeral
//| c_expr TODO
;
c_type : INT | DOUBLE | CONST;
bin_op: AND | ADD | MUL | EQUAL | LT | GT | LE| GE;
qexpr:
LPAREN query RPAREN bin_op_query?
// query bin_op query
| IDENTIFIER bin_op_query? // copied from query to resolve left recursion problem
| numeral bin_op_query? // ^
| quant_expr bin_op_query? // ^
// query.find(query)
| IDENTIFIER find_query? // copied from query to resolve left recursion problem
| numeral find_query? // ^
| quant_expr find_query?
// query[query]
| IDENTIFIER array_query? // copied from query to resolve left recursion problem
| numeral array_query? // ^
| quant_expr array_query?
// | qexpr bin_op_query // bad, resolved by quexpr+ in query
;
bin_op_query: bin_op query bin_op_query?; // resolve left recursion of query bin_op query
find_query: '.''find' LPAREN query RPAREN;
array_query: LBRACK query RBRACK;
quant_expr:
quant id ':' query
| QUERY LPAREN match RPAREN ':' query
| REDUCE LPAREN IDENTIFIER RPAREN id ':' query
;
match:
STDVECTOR LBRACK id RBRACK EQUAL cm
| STDMAP '.''find' LPAREN cm RPAREN EQUAL cm
| STDSET '.''find' LPAREN cm RPAREN
;
cm:
IDENTIFIER
| numeral
// | c_expr TODO
;
quant :
EXISTS;
id :
c_type IDENTIFIER
| IDENTIFIER // according to page 2 but not the overview; according to the overview id ->, but then rule 1 would be without the +
;
numeral :
INTEGER_LITERAL
| DOUBLE_LITERAL
;
Apart from the fact that Java classes should start with an uppercase letter (so you should rename your grammar accordingly), your last line should be
System.out.println(tree.toStringTree(parser));
to print the tree. Otherwise the tree doesn't know which parser to use and only outputs what you described.
EDIT
When naming your grammar PQLC, the following code
import org.antlr.v4.runtime.*;
import org.antlr.v4.runtime.tree.*;
public class Test {
public static void main(String[] args) throws Exception {
String query = "query(std::map .find(x) == y): bla";
ANTLRInputStream input = new ANTLRInputStream(query);
PQLCLexer lexer = new PQLCLexer(input);
CommonTokenStream tokens = new CommonTokenStream(lexer);
PQLCParser parser = new PQLCParser(tokens);
ParseTree tree = parser.query(); // begin parsing at query rule
System.out.println(tree.toStringTree(parser)); // print LISP-style tree
}
}
produces this output with ANTLR v4.2 on my machine:
(query (quant_expr query ( (match std::map . find ( (cm x) ) == (cm y)) ) : (query (qexpr bla))))

ANTLR4-based lexer loses syntax highlighting during typing on NetBeans

I've coded a simple lexer and parser using ANTLR4 grammars to make a language plugin for NetBeans 7.3, to help our team write our layout files more quickly (a mix of XHTML and widget definitions, also in the form of XHTML tags but with custom properties, characteristics, and some differences from XHTML syntax).
Template file example:
<div style="dyn_layout_panel">
#symbol#
<w_label=label, text="Try to close this window" />
<w_buttonclose=button, text = "CLOSE", on_press=press_close />
<w_buttonterminate=button, text="TERMINATE", on_press=press_terminate />
<w_mydatepicker=datepicker, parent=tab0, ary=[10, "str", /regex/i], start_date=2013-10-05, on_selected=datepicker_selected />
<w_myeditbox=editbox, parent=tab0, validation=USER_REGEX, validation_regex=/^[0-9]+[a-z]*$/i,
validation_msg="User regex don't match editbox contents.", on_keyreturn=tab0_editbox_keyreturn />
<div style="dyn_layout_panel">
$SYMBOL_2$
Some text that make a text node.
</div>
</div>
I use ANTLRWorks 2 to write and debug the lexer and parser and all seems to be fine; in NetBeans I also don't get any exceptions and the parser works properly, but during editing/typing I lose token colors near the cursor.
Screenshot of problem:
Adding debug console output for each keystroke, I see that the lexer enters IN_TAG or IN_WIDGET mode correctly, but after a WHITESPACE it returns to the default mode and matches the rest of the text inside a tag as a TEXT_NODE token.
I know that a lexer can have only one active mode at a time, so why does it match the TEXT_NODE rule when in the IN_TAG or IN_WIDGET modes?
Lexer grammar file:
lexer grammar LayoutLexer;
COMMENT
: '/*' .*? '*/' -> channel(HIDDEN)
;
WS : ( ' '
| '\t'
| EOL
)+? -> channel(HIDDEN)
;
WDG_START_OPEN : '<w_' PROPERTY -> pushMode(IN_WIDGET) ;
WDG_END_OPEN : '</w_' PROPERTY -> pushMode(IN_WIDGET) ;
TAG_START_OPEN : '<' ATTRIBUTE -> pushMode(IN_TAG) ;
TAG_END_OPEN : '</' ATTRIBUTE -> pushMode(IN_TAG) ;
EXT_REF
: ( ('#' REF_NAME '#') | ('$' SYMBOL '$') | ('§' REF_NAME '§') )
;
fragment
REF_NAME
: ( [a-z]+ [0-9a-z_]*? )
;
fragment
EOL : ( '\r\n' | '\n\r' | '\n' )
;
EQUAL
: '='
;
TEXT_NODE
: ( (~('\r'|'\n'|'<'|'#'|'$'|'§'))+ )
;
ERROR
: ( .+? )
;
mode IN_TAG;
TAG_CLOSE : '>' -> popMode ;
TAG_EMPTY_CLOSE : '/>' -> popMode ;
TAG_WS : WS -> type(WS), channel(HIDDEN) ;
TAG_COMMENT : COMMENT -> type(COMMENT), channel(HIDDEN) ;
TAG_EQ : EQUAL -> type(EQUAL) ;
ATTRIBUTE
: ( LITERAL [0-9a-zA-Z_]* )
;
VAL
: ( '"' ( ESC_SEQ | ~('\\'|'"') )*? '"'
| '\'' ( ESC_SEQ | ~('\\'|'\'') )*? '\'' )
;
TAG_ERR : ERROR -> type(ERROR) ;
mode IN_WIDGET;
WDG_CLOSE : '>' -> popMode ;
WDG_EMPTY_CLOSE : '/>' -> popMode ;
WDG_WS : WS -> type(WS), mode(IN_WIDGET), channel(HIDDEN) ;
WDG_COMMENT : COMMENT -> type(COMMENT), channel(HIDDEN) ;
WDG_EQ : EQUAL -> type(EQUAL), pushMode(WDG_ASSIGN) ;
COMMA
: ','
;
fragment
MINUS
: '-'
;
STRING
: ( '"' ( ESC_SEQ | ~('\\'|'"') )*? '"'
| '\'' ( ESC_SEQ | ~('\\'|'\'') )*? '\'' )
;
fragment
ESC_SEQ
: '\\' ('b'|'t'|'n'|'f'|'r'|'\"'|'\''|'\\')
| UNICODE_ESC
| OCTAL_ESC
;
fragment
OCTAL_ESC
: '\\' ('0'..'3') ('0'..'7') ('0'..'7')
| '\\' ('0'..'7') ('0'..'7')
| '\\' ('0'..'7')
;
fragment
UNICODE_ESC
: '\\' 'u' HEX_DIGIT HEX_DIGIT HEX_DIGIT HEX_DIGIT
;
fragment
HEX_DIGIT
: [0-9a-fA-F]
;
fragment
DIGIT
: [0-9]
;
fragment
HEX_NUMBER
: '0x' HEX_DIGIT+
;
fragment
HTML_NUMBER
: (INT_NUMBER | FLOAT_NUMBER) HTML_UNITS
;
fragment
FLOAT_NUMBER
: MINUS? INT_NUMBER '.' DIGIT+
;
fragment
INT_NUMBER
: MINUS? DIGIT+
;
EVENT_HANDLER
: 'on_' PROPERTY
;
PROPERTY
: ( LITERAL [0-9a-zA-Z_]* )
;
fragment
LITERAL
: ( LITERAL_U | LITERAL_L )
;
fragment
LITERAL_U
: [A-Z]+
;
fragment
LITERAL_L
: [a-z]+
;
WDG_ERR : ERROR -> type(ERROR) ;
mode WDG_ASSIGN;
PHP_REF
: ( LITERAL_L ('_' | LITERAL_L | [0-9])* ) -> popMode
;
VALUE : (WDG_VAL | ARRAY) -> popMode;
ASGN_WS : WS -> type(WS), channel(HIDDEN);
ASGN_COMMA : COMMA -> type(COMMA);
ARY_START
: '['
;
ARY_END
: ']'
;
BIT_OR
: '|'
;
ARRAY
: ARY_START ARY_VALUE (ASGN_COMMA ARY_VALUE)* ARY_END
;
fragment
ARY_VALUE : ASGN_WS? WDG_VAL ASGN_WS? -> type(VALUE);
fragment
WDG_VAL
: (STRING
| UTC_DATE
| HEX_NUMBER
| HTML_NUMBER
| FLOAT_NUMBER
| INT_NUMBER
| BOOLEAN
| BITFIELD
| REGEX
| CSS_CLASS)
;
fragment
HTML_UNITS
: ('%'|'in'|'cm'|'mm'|'em'|'ex'|'pt'|'pc'|'px')
;
fragment
BOOLEAN
: ('true'|'false')
;
fragment
BITFIELD
: SYMBOL (WS? BIT_OR WS? SYMBOL)*
;
SYMBOL
: LITERAL_U [0-9A-Z_]*
;
UTC_DATE
: (DIGIT DIGIT DIGIT DIGIT '-' DIGIT DIGIT '-' DIGIT DIGIT)
;
REGEX
: ('/' ('\\'.|.)*? '/' ('g'|'m'|'i')* )
;
CSS_CLASS
: ( LITERAL_L ('-' | '_' | LITERAL_L | [0-9])* )
;
WDG_ASSIGN_ERR : ERROR -> type(ERROR), popMode;
Parser grammar file:
parser grammar LayoutParser;
options
{
tokenVocab=LayoutLexer;
language=Java;
}
document : (element | TEXT_NODE | EXT_REF)* EOF;
element
locals
[
String currentTag
]
: ( ( html_open_tag (element | TEXT_NODE | EXT_REF)* html_close_tag )
| ( wdg_open_tag (element | TEXT_NODE | EXT_REF)* wdg_close_tag )
| ( html_empty_tag | wdg_empty_tag ) )
;
html_empty_tag
: TAG_START_OPEN (ATTRIBUTE EQUAL VAL)* TAG_EMPTY_CLOSE
;
html_open_tag
: ( tag=TAG_START_OPEN (ATTRIBUTE EQUAL VAL)* TAG_CLOSE )
{$element::currentTag = $tag.text.substring(1);}
;
html_close_tag
: tag=TAG_END_OPEN TAG_CLOSE
{
if (!$element::currentTag.equals($tag.text.substring(2)))
notifyErrorListeners("HTML tag mismatch '" + $element::currentTag + "' - '" + $tag.text.substring(2) + "'");
}
;
wdg_empty_tag
: WDG_START_OPEN EQUAL PHP_REF ( COMMA (wdg_prop | wdg_event) )* WDG_EMPTY_CLOSE
;
wdg_open_tag
: tag=WDG_START_OPEN EQUAL PHP_REF ( COMMA (wdg_prop | wdg_event) )* WDG_CLOSE
{$element::currentTag = $tag.text.substring(1);}
;
wdg_close_tag
: tag=WDG_END_OPEN WDG_CLOSE
{
if (!$element::currentTag.equals($tag.text.substring(2)))
notifyErrorListeners("Widget alias mismatch '" + $element::currentTag + "' - '" + $tag.text + "'");
}
;
wdg_prop
: PROPERTY (EQUAL (ARRAY | VALUE | PHP_REF | UTC_DATE | REGEX | CSS_CLASS))?
;
wdg_event
: EVENT_HANDLER EQUAL PHP_REF
;
Depending on the implementation of syntax highlighting, the IDE may or may not start at the beginning of the document when lexing the input for syntax highlighting. If it does not start at the beginning of the document, then before returning any tokens, you need to ensure that the lexer instance is initialized in the correct mode (both the _mode and _modeStack fields need to be initialized to their correct state at the point where lexing starts).
If your lexer reads or writes any custom fields during lexing, you may need to restore those fields as well.
Examples
GoWorks (NetBeans based, LGPL License). This implementation does not use the lexer facilities in the NetBeans API, but instead implements the functionality at a lower level. For now you can ignore the MarkOccurrences* and SemanticHighlighter classes.
package org.tvl.goworks.editor.go.highlighter
package org.antlr.works.editor.antlr4.highlighting
ANTLR 4 IntelliJ Plugin (IntelliJ IDEA, BSD license).
package org.antlr.intellij.adaptor.lexer
package org.antlr.intellij.plugin (in particular, the SyntaxHighlighter classes)
Additional efficiency notes
Your REF_NAME, VAL, and STRING rules use non-greedy loops that do not need to be non-greedy. In each of these rules, change +? to + and change *? to *.
Your WS and ERROR rules use a non-greedy operator +? which is equivalent to not having a closure at all. The unnecessary use of a non-greedy operator in these cases only serves to slow down your lexer. To preserve the existing behavior, you can remove +? from these rules (replacing with + would change behavior).
Additional functionality notes
ANTLR 4 does not perform any error correction during lexing. If the input does not match a token, then the input simply does not match a token. This issue affects your VAL and STRING tokens in particular, which will not get syntax highlighting prior to adding the closing " or ' character. For syntax highlighting these types of tokens, I prefer to use an additional mode in the lexer, allowing me to produce separate tokens for the escape sequences embedded in the string, as well as syntax highlighting an unterminated string at the end of the line (unless your language allows strings to span multiple lines, in which case you'd stop at the end of the input).
For future references
All problems were related to my incorrect implementation of the NetBeans Lexer<T> class; many tutorials on the web do not take into account that a lexer may have more than one mode and that the lexer state must be backed up and restored between Lexer allocations/releases, as mentioned by 280Z28.
This is the code I use to make syntax highlighting consistent:
public class LayoutEditorLexer implements Lexer<LayoutTokenId> {
private LexerRestartInfo<LayoutTokenId> info;
private LayoutLexer lexer;
private class LexerState {
public int Mode = -1;
public IntegerStack Stack = null;
public LexerState(int mode, IntegerStack stack)
{
Mode = mode;
Stack = new IntegerStack(stack);
}
}
public LayoutEditorLexer(LexerRestartInfo<LayoutTokenId> info) {
this.info = info;
AntlrCharStream charStream = new AntlrCharStream(info.input(), "LayoutEditor", false);
lexer = new LayoutLexer(charStream);
lexer.removeErrorListeners();
lexer.addErrorListener(ErrorListener.INSTANCE);
LexerState lexerMode = (LexerState)info.state();
if (lexerMode != null)
{
lexer._mode = lexerMode.Mode;
lexer._modeStack.addAll(lexerMode.Stack);
}
}
@Override
public org.netbeans.api.lexer.Token<LayoutTokenId> nextToken() {
Token token = lexer.nextToken();
int ttype = token.getType();
if (ttype != LayoutLexer.EOF)
{
LayoutTokenId tokenId = LayoutLanguageHierarchy.getToken(ttype);
return info.tokenFactory().createToken(tokenId);
}
return null;
}
@Override
public Object state()
{
// Here many tutorials simply return null.
return new LexerState(lexer._mode, lexer._modeStack);
}
@Override
public void release()
{
}
}

ANTLR generating invalid Java throws code

I've been using ANTLRWorks 1.5 these days, together with the ANTLR runtime 3.5. Here is a weird thing I found:
ANTLR is generating this kind of Java code for me:
public final BLABLABLAParser.addExpression_return addExpression() throws {
blablabla...
}
Notice that this function throws nothing, which is invalid in Java, so I need to correct these mistakes manually.
Does anyone know why?
Here is the sample grammar; it's taken directly from the book Language Implementation Patterns.
// START: header
grammar Cymbol; // my grammar is called Cymbol
options {
output = AST;
ASTLabelType = CommonTree;
}
tokens{
METHOD_DECL;
ARG_DECL;
BLOCK;
VAR_DECL;
CALL;
ELIST;
EXPR;
}
// define a SymbolTable field in generated parser
compilationUnit // pass symbol table to start rule
: (methodDeclaration | varDeclaration)+ // recognize at least one variable declaration
;
// END: header
methodDeclaration
: type ID '(' formalParameters? ')' block
-> ^(METHOD_DECL type ID formalParameters? block)
;
formalParameters
: type ID (',' type ID)* -> ^(ARG_DECL type ID)+
;
// START: type
type
: 'float'
| 'int'
| 'void'
;
// END: type
block : '{' statement* '}' -> ^(BLOCK statement*)
;
// START: decl
varDeclaration
: type ID ('=' expression)? ';' -> ^(VAR_DECL type ID expression?)// E.g., "int i = 2;", "int i;"
;
// END: decl
statement
: block
| varDeclaration
| 'return' expression? ';' -> ^('return' expression?)
| postfixExpression
(
'=' expression -> ^('=' postfixExpression expression)
| -> ^(EXPR postfixExpression)
) ';'
;
expressionList
: expression(',' expression)* -> ^(ELIST expression+)
| -> ELIST
;
expression
: addExpression -> ^(EXPR addExpression)
;
addExpression
: postfixExpression('+'^ postfixExpression)*
;
postfixExpression
: primary (lp='('^ expressionList ')'! {$lp.setType(CALL);})*
;
// START: primary
primary
: ID // reference variable in an expression
| INT
| '(' expression ')' -> expression
;
// END: primary
// LEXER RULES
ID : LETTER (LETTER | '0'..'9')*
;
fragment
LETTER : ('a'..'z' | 'A'..'Z')
;
INT : '0'..'9'+
;
WS : (' '|'\r'|'\t'|'\n') {$channel=HIDDEN;}
;
SL_COMMENT
: '//' ~('\r'|'\n')* '\r'? '\n' {$channel=HIDDEN;}
;
Edit: This is a bug in ANTLRWorks 1.5 that has already been fixed for the next release.
#5: ANTLRworks fails to generate proper Java Code
I used the exact configuration you described above, with a copy/pasted grammar. The signature generated for the rule you mention was the following:
// $ANTLR start "addExpression"
// C:\\dev\\Cymbol.g:72:1: addExpression : postfixExpression ( '+' ^ postfixExpression )* ;
public final CymbolParser.addExpression_return addExpression() throws RecognitionException {
Can you post the first line of the generated file? It should start with // $ANTLR 3.5 like the following:
// $ANTLR 3.5 C:\\dev\\Cymbol.g 2013-02-13 09:55:44

ANTLR Tree Grammar

I am having trouble moving from a parser grammar to a tree grammar; the problem comes when I use tree operators (^, !) instead of rewrite rules (->):
where_clause
: 'where'! condition_or
;
condition_or
: condition_and ( 'or'^ condition_and )*
;
condition_and
: condition_expr ( 'and'^ condition_expr )*
;
condition_expr
: condition_comparision
// | condition_in
// | condition_like
;
condition_comparision
: column_identifier ('=' | '!=' | '>' | '<')^ sql_element
;
For the above parser grammar, what would the tree grammar look like? Since this isn't recursive, I won't be able to collapse it into a single rule in the tree grammar.
The other alternative is to forcefully rewrite the parser grammar using rewrite syntax:
condition_or
: condition_and -> condition_and
( 'or' x=condition_and -> ^('or' condition_or $x))*
;
Is there any simpler way to do this?
Thanks
The corresponding tree grammar would look like this:
where_clause
: condition_or
;
condition_or
: ^('or' condition_or condition_or)
| condition_and
;
condition_and
: ^('and' condition_and condition_and)
| condition_expr
;
condition_expr
: condition_comparision
;
condition_comparision
: ^('=' column_identifier sql_element)
| ^('!=' column_identifier sql_element)
| ^('>' column_identifier sql_element)
| ^('<' column_identifier sql_element)
;
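In case it helps, this is roughly how the tree grammar is driven from Java once ANTLR 3 has generated both the parser and the tree walker. The grammar and class names here are hypothetical (e.g. Sql and SqlWalker), and the input assumes column_identifier and sql_element are defined:
import org.antlr.runtime.*;
import org.antlr.runtime.tree.*;

public class WhereClauseDemo {
    public static void main(String[] args) throws Exception {
        // 1. lex and parse the text, building the AST with the ^/! operators
        ANTLRStringStream in = new ANTLRStringStream("where a = 1 and b > 2");
        SqlLexer lexer = new SqlLexer(in);
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        SqlParser parser = new SqlParser(tokens);
        CommonTree ast = (CommonTree) parser.where_clause().getTree();
        System.out.println(ast.toStringTree()); // e.g. (and (= a 1) (> b 2))

        // 2. walk the AST with the tree grammar (a generated TreeParser subclass)
        CommonTreeNodeStream nodes = new CommonTreeNodeStream(ast);
        SqlWalker walker = new SqlWalker(nodes);
        walker.where_clause();
    }
}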
