Alloy API throws a NullPointerException when executing an Alloy command - java

I have been using the Alloy API from Java. My goal is to compile an Alloy model, display it visually, and narrow down the search for instances.
Right now I need to execute the commands of an Alloy source, which either runs correctly or throws a NullPointerException, depending on the source. I have inspected the relevant API classes in the Eclipse debugger, but I cannot make sense of what is going on.
The issue: the debugger shows that TranslateAlloyToKodkod.execute_command throws a java.lang.NullPointerException.
According to the Alloy API documentation,
TranslateAlloyToKodkod.execute_command returns null if the user chose "save to FILE" as the SAT solver, and nonnull if the solver finishes the entire solving and is either satisfiable or unsatisfiable.
But I never chose "save to FILE" as the SAT solver in the execution options. For reference, the Alloy Analyzer itself finishes solving both of the sources below.
Could you let me know how to fix this problem?
Here is the Java code I created, with some additions from the API example:
import java.io.File;
import edu.mit.csail.sdg.alloy4.A4Reporter;
import edu.mit.csail.sdg.alloy4.Err;
import edu.mit.csail.sdg.alloy4.ErrorWarning;
import edu.mit.csail.sdg.alloy4compiler.ast.Command;
import edu.mit.csail.sdg.alloy4compiler.ast.Module;
import edu.mit.csail.sdg.alloy4compiler.parser.CompUtil;
import edu.mit.csail.sdg.alloy4compiler.translator.A4Options;
import edu.mit.csail.sdg.alloy4compiler.translator.A4Solution;
import edu.mit.csail.sdg.alloy4compiler.translator.TranslateAlloyToKodkod;
import edu.mit.csail.sdg.alloy4viz.VizGUI;
public final class exportXML {
private static String outputfilepath;
public static void main(String[] args) throws Err {
VizGUI viz = null;
A4Reporter rep = new A4Reporter() {
@Override public void warning(ErrorWarning msg) {
System.out.print("Relevance Warning:\n"+(msg.toString().trim())+"\n\n");
System.out.flush();
}
};
String args_filename = args[0];
String[] path_split = args_filename.split("/");
int pos_fname = path_split.length -1;
String[] filename_split = path_split[pos_fname].split("\\.");
for ( int i=0; i<filename_split.length; i++ ) {
System.out.println(filename_split[i]);
}
String dir = "";
for ( int i = 0; i < path_split.length - 1; i++ ) {
dir = dir.concat(path_split[i]) + "/";
}
String out_fname = "Instance_of_" + filename_split[0];
outputfilepath = dir + out_fname;
File outdir = new File(outputfilepath);
outdir.mkdir();
for(String filename:args) {
System.out.println("=========== parse + typechecking: "+filename+" =============");
Module world = CompUtil.parseEverything_fromFile(rep, null, filename);
A4Options options = new A4Options();
options.solver = A4Options.SatSolver.SAT4J;
for (Command command: world.getAllCommands()) {
System.out.println("=========== command : "+command+" ============");
A4Solution ans = TranslateAlloyToKodkod.execute_command(rep, world.getAllReachableSigs(), command, options);
System.out.println(ans);
if (ans.satisfiable()) {
int cnt = 1;
A4Solution tmp = ans.next();
while ( tmp.satisfiable() ) {
tmp = tmp.next();
cnt++;
}
System.out.println("=========== "+cnt+" satisfiable solution found ============");
tmp = ans;
String[] outXml = new String[cnt];
for ( int i = 0; i < cnt; i++ ) {
outXml[i] = outputfilepath + "/" + out_fname + String.valueOf(i+1) + ".xml";
tmp.writeXML(outXml[i]);
tmp = tmp.next();
}
}
}
}
}
}
This is a sample Alloy source that executes successfully:
module adressBook
open ordering [Book]
abstract sig Target {}
sig Addr extends Target {}
abstract sig Name extends Target {}
sig Alias, Group extends Name {}
sig Book {
names: set Name,
addr: names -> some Target
}
{
no n: Name | n in n.^(addr)
all a: Alias | lone a.addr
}
pred add (b, b': Book, n: Name, t: Target) {
t in Addr or some lookup [b, t]
b'.addr = b.addr + n -> t
}
pred del (b, b': Book, n: Name, t: Target) {
no b.addr.n or some n.(b.addr) - t
b'.addr = b.addr - n -> t
}
fun lookup (b: Book, n: Name): set Addr {
n.^(b.addr) & Addr
}
pred init (b: Book) {no b.addr}
fact traces {
init [first]
all b: Book - last | let b' = next [b] |
some n: Name, t: Target | add [b, b', n, t] or del [b, b', n, t]
}
pred show {}
run show for 10
assert lookupYields {
all b: Book, n: b.names | some lookup [b, n]
}
check lookupYields for 3 but 4 Book
check lookupYields for 6
This is the Alloy source that fails to execute (it triggers the NullPointerException):
sig Element {}
one sig Group {
elements: set Element,
unit: one elements,
mult: elements -> elements -> one elements,
inv: elements -> one elements
}
fact NoRedundantElements {
all e: Element | e in Group.elements
}
fact UnitLaw1 {
all a: Group.elements | Group.mult [a] [Group.unit] = a
}
fact UnitLaw2 {
all a: Group.elements |
Group.mult [Group.unit] [a] = a
}
fact AssociativeLaw {
all a: Group.elements | all b: Group.elements | all c:Group.elements |
Group.mult [Group.mult [a] [b]] [c] = Group.mult [a] [Group.mult [b] [c]]
}
fact InvLaw1{
all a: Group.elements | Group.mult [Group.inv[a]] [a] = Group.unit
}
assert InvLaw2 {
all a: Group.elements | Group.mult [a] [Group.inv[a]] = Group.unit
}
check InvLaw2
assert Commutativity {
all a: Group.elements | all b: Group.elements | Group.mult [a] [b] = Group.mult [b] [a]
}
check Commutativity for 6
pred subgroup (g: set Element, h: set Element) {
(all a: g | a in h) and
(Group.unit in g) and
(all a, b: g | Group.mult [a] [b] in g) and
(all a: g | Group.inv[a] in g)
}
pred regularSubgroup(n: set Element, g: set Element) {
subgroup [n, g] and
(all n0: n, g0: g | Group.mult [Group.mult [g0] [n0]] [Group.inv[g0]] in n)
}
pred main(n1: set Element, n2: set Element) {
let g = Group.elements |
regularSubgroup [n1, g] and
(some g0: g | (not g0 in n1)) and
regularSubgroup [n2, n1] and
(some n10: n1 | (not n10 in n2)) and
(not regularSubgroup [n2, g])
}
run main for 8

I think this should be reported as an issue on https://github.com/alloytools/org.alloytools.alloy, preferably with a PR that fixes it.
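In the meantime, a defensive guard around execute_command at least narrows down which command triggers the exception and whether a null comes from the documented "save to FILE" case or from inside the translator. This is only a diagnostic sketch reusing the question's loop and variables (no new API assumed):

for (Command command : world.getAllCommands()) {
    System.out.println("=========== command : " + command + " ============");
    try {
        A4Solution ans = TranslateAlloyToKodkod.execute_command(
                rep, world.getAllReachableSigs(), command, options);
        if (ans == null) {
            // Documented to happen only when "save to FILE" is chosen as the solver.
            System.out.println("execute_command returned null for: " + command);
            continue;
        }
        System.out.println(ans);
        // ... enumerate solutions and write XML as in the question ...
    } catch (NullPointerException e) {
        // The reported failure happens inside execute_command itself.
        System.out.println("NullPointerException while executing: " + command);
        e.printStackTrace();
    }
}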

Related

How to find duplicate elements in a Stream in Java

I'm trying to find duplicate entries in the map values, but each value is a list of records with multiple attributes/properties. Basically, if a title shows up more than once in the database, I want to mark one entry as unique and mark the rest as duplicates.
Here's my current code:
// I have a Map that looks like...
host1 : id | title | host1 | url | state | duplicate
id | title | host1 | url | state | duplicate
host2 : id | title | host2 | url | state | duplicate
id | title | host2 | url | state | duplicate
for (Map.Entry<String, List<Record>> e : recordsByHost.entrySet()) {
boolean executed = false;
for (Record r : e.getValue()) {
int frequency = Collections.frequency(
e
.getValue()
.stream()
.map(Record::getTitle)
.collect(Collectors.toList()),
r.getTitle()
);
if ((frequency > 1) && (!executed)) {
markDuplicates(r.getId(), r.getTitle());
executed = true;
} else {
executed = false;
}
The issue is that when the frequency is more than 2 (three records with the same title), the condition evaluates to false and the third record (the second duplicate) is treated as "unique".
I've been trying to rework my logic but I'm afraid I'm stuck. Any help / suggestions to get me unstuck would be greatly appreciated.
Set.add (and in fact, Collection.add) returns true if and only if the value was actually added to the Set. Since a Set always enforces uniqueness, you can use this to find duplicates:
void markDuplicates(Iterable<? extends Record> records) {
Set<String> foundTitles = new HashSet<>();
for (Record r : records) {
String title = r.getTitle();
if (title != null && !foundTitles.add(title)) {
// title was not added, because it's already been found.
markAsDuplicate(r);
}
}
}
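A hypothetical way to wire this into the original map (reusing the names from the question) is one pass per host, so the set of seen titles is rebuilt for each host:

for (Map.Entry<String, List<Record>> e : recordsByHost.entrySet()) {
    // titles are compared per host; duplicates across hosts are not marked
    markDuplicates(e.getValue());
}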

How to manage list of element of a grammar in the Abstract Syntax Tree generated by Bison

I'm making a Java to Python translator with the help of the flex and bison tools. The bison rules cover a restricted subset of the Java grammar. In addition to writing the bison rules, I also created an Abstract Syntax Tree as an intermediate representation. The AST nodes are created in the semantic actions alongside the bison rules.
My problem concerns the management of lists of elements (or recursion) in the bison rules.
When I give the translator the following text file, parsing completes without syntax errors, but when I traverse the AST in pre-order for testing purposes, the traversal seems to stop at the first child node of each list and therefore never visits the remaining children of the lists.
TEXT FILE IN INPUT:
import java.util.*;
class table {
int a;
int c;
}
class ball {
int a;
}
Here are the bison grammar rules involved:
Program
: ImportStatement ClassDeclarations { set_parse_tree($$ = program_new($1,$2,2));}
;
ImportStatement
: IMPORT LIBRARY SEMICOLON {$$ = import_new($2,1); printf("Type di import: %d \n", $$->type);}
| %empty {$$ = import_new(NULL,0); }
;
ClassDeclarations
: ClassDeclaration { $$ = list_new(CLASS_DECLARATIONS,$1,NULL,2); }
| ClassDeclarations ClassDeclaration { list_append( $$ = $1, list_new(CLASS_DECLARATIONS,$2,NULL,2)); }
;
ClassDeclaration
: CLASS NameID LBRACE FieldDeclarations RBRACE { $$ = classDec_new($2,$4,2); }
| PUBLIC CLASS NameID LBRACE FieldDeclarations RBRACE { $$ = classDec_new($3,$5,2);}
;
FieldDeclarations
: FieldDeclaration {$$ = list_new(FIELD_DECLARATIONS,$1,NULL,2); }
| FieldDeclarations FieldDeclaration { list_append( $$ = $1, list_new(FIELD_DECLARATIONS,$2,NULL,2)); }
;
FieldDeclaration
: VariableFieldDeclaration {$$ = fieldDec_new($1,NULL,NULL,3);}
| PUBLIC VariableFieldDeclaration {$$ = fieldDec_new($2,NULL,NULL,3);}
| MethodFieldDeclaration {$$ = fieldDec_new(NULL,$1,NULL,3);}
| ConstructorDeclaration {$$ = fieldDec_new(NULL,NULL,$1,3);}
;
VariableFieldDeclaration
: Type VariableDeclarations SEMICOLON {$$ = variableFieldDec_new($1,$2,2);}
;
VariableDeclarations
: VariableDeclaration {$$ = list_new(VARIABLE_DECLARATIONS,$1,NULL,2); }
| VariableDeclarations COMMA VariableDeclaration { list_append( $$ = $1, list_new(VARIABLE_DECLARATIONS,$3,NULL,2)); }
;
VariableDeclaration
: NameID {$$ = varDec_new($1,NULL,NULL,NULL,NULL,5);}
| NameID ASSIGNOP ExpressionStatement {$$ = varDec_new($1,$3,NULL,NULL,NULL,5);}
| NameID LSBRACKET RSBRACKET {$$ = varDec_new($1,NULL,NULL,NULL,NULL,5); }
| LSBRACKET RSBRACKET NameID {$$ = varDec_new($3,NULL,NULL,NULL,NULL,5); }
| NameID LSBRACKET RSBRACKET ASSIGNOP NEW Type LSBRACKET Dimension RSBRACKET {$$ = varDec_new($1,NULL,$6,$8,NULL,5); }
| LSBRACKET RSBRACKET NameID ASSIGNOP NEW Type LSBRACKET Dimension RSBRACKET {$$ = varDec_new($3,NULL,$6,$8,NULL,5); }
| NameID LSBRACKET RSBRACKET ASSIGNOP LBRACE VariableInitializers RBRACE {$$ = varDec_new($1,NULL,NULL,NULL,$6,5); }
| LSBRACKET RSBRACKET NameID ASSIGNOP LBRACE VariableInitializers RBRACE {$$ = varDec_new($3,NULL,NULL,NULL,$6,5); }
| NameID LSBRACKET RSBRACKET ASSIGNOP LBRACE RBRACE {$$ = varDec_new($1,NULL,NULL,NULL,NULL,5); }
| LSBRACKET RSBRACKET NameID ASSIGNOP LBRACE RBRACE {$$ = varDec_new($3,NULL,NULL,NULL,NULL,5); }
;
Type
: INT {$$ = typeId_new($1,1);}
| CHAR {$$ = typeId_new($1,1);}
| FLOAT {$$ = typeId_new($1,1);}
| DOUBLE {$$ = typeId_new($1,1);}
;
NameID
: ID {$$ = nameID_new($1, 1);}
;
The general structure of an AST node contains:
the type of the node,
a union containing the different structures for each possible kind of node,
an integer (numLeaf) representing the maximum possible number of children of the node (passed from bison as the last parameter of the semantic-action functions),
an array of pointers (LeafVet) of that size, where each slot points to a possible child (NULL if the child is not present).
These last two fields are used to traverse the tree: I loop over the array to descend to the children of each node.
I think the problem lies mainly in the list structures (ClassDeclarations, FieldDeclarations, VariableDeclarations, ...).
The structure of each list is as follows; it is part of the union of possible structures of each node.
STRUCT LIST:
struct {
int type;
struct ast_node *head; //pointer to the head of the list
struct ast_node *tail; //pointer to the tail of the list
} list;
The functions that refer to the creation of list nodes are the following:
static ast_node *newast(int type)
{
ast_node *node = malloc(sizeof(ast_node));
node->type = type;
return node;
}
ast_list *list_new(int type, ast_node *head, ast_list *tail, int numLeaf)
{
ast_list *l = newast(AST_LIST); //allocates memory for the AST_LIST type node
l->list.type = type;
l->list.head = head;
l->list.tail = tail;
l->numLeaf = numLeaf;
l->LeafVet[0] = head;
l->LeafVet[1] = tail;
return l;
}
void list_append(ast_list *first, ast_list *second)
{
while (first && first->list.tail)
{
first = first->list.tail;
}
if (first)
{
first->list.tail = second;
}
first->numLeaf = 2;
}
I think the error could be in the list_append function, because when I traverse the tree in pre-order it reaches the first leaf node of a list but does not proceed to the remaining leaf nodes. Specifically, for the text file above, the traversal stops after reaching the NameID node of VariableDeclaration (to be precise, at the first variable of the first class) without reporting any error. Immediately afterwards it should visit the second child of FieldDeclarations, since there is a second variable declaration (VariableFieldDeclaration), but when I print the number of non-NULL children of each list I always get 1, so it seems the list append does not work properly.
The error could also be in the traversal algorithm, shown below:
void print_ast(ast_node *node) //ast preorder
{
int leaf;
leaf = node->numLeaf;
printf("Num leaf: %d \n",leaf);
switch(node->type)
{
case AST_LIST:
break;
case AST_PROGRAM:
break;
case AST_IMPORT:
printf("Import: %s \n", node->import.namelib);
break;
case AST_CLASSDEC:
printf("name class: %s\n", node->classDec.nameClass->nameID.name);
break;
case AST_TYPEID:
break;
case AST_VARFIELDDEC:
break;
case AST_VARDEC:
break;
case AST_FIELDDEC:
break;
case AST_NAMEID:
printf("Il valore della variabile e': %s \n", node->nameID.name);
break;
default:
printf("Error in node selection!\n");
exit(1);
}
for (int i=0; i<leaf; i++)
{
if(node->LeafVet[i] == NULL ){
continue;
} else{
printf("%d \n", node->LeafVet[i]->type);
print_ast(node->LeafVet[i]);
}
}
}
I hope you can help me, thanks a lot.

How to pass by Value in Scala [duplicate]

This question already has answers here:
Is Java "pass-by-reference" or "pass-by-value"?
(93 answers)
Can Scala call by reference?
(3 answers)
Closed 6 years ago.
I wrote this simple program (I pass a Map from String to Int to a method as a parameter) and it seems that it is passed by reference. How do I make it pass by value?
import scala.collection.immutable._
import scala.io.Source._
/**
* Created by Alex on 5/11/16.
*/
object ScalaPassByValue {
type Env = scala.collection.mutable.Map[String,Int]
def mt_env : Env = {
scala.collection.mutable.Map.empty[String,Int]
}
def extend_env (sym: String, v: Int, env: Env) : Env = {
if(env.contains(sym)) {
env(sym) = v
env}
else {
env += (sym -> v)
env}
}
def main(args: Array[String]): Unit = {
bar(mt_env)
}
def bar (env: Env) : Unit = {
println("In A")
extend_env("a", 666,env)
print_env(env)
bar2(env)
bar3(env)
}
def bar2 (env: Env) : Unit = {
println("In AB")
extend_env("b", 326,env)
print_env(env)
}
def bar3 (env: Env) : Unit = {
println("In AC")
extend_env("c", 954,env)
print_env(env)
}
def print_env(env: Env) : Unit = {
//println("Environment")
for ((k,v) <- env){
v match {
case value: Int => print("Arg: "+k+" = ")
print(value+"\n")
case _ =>
}
}
}
}
Ideally I want the main method to pass an empty map to method bar, which adds a mapping from 'a' to 666 and then calls two methods to add 'b' and 'c' respectively. At the end I want this printed:
In A
Arg: a = 666
In AB
Arg: a = 666
Arg: b = 326
In AC
Arg: a = 666
Arg: c = 954
but get this:
In A
Arg: a = 666
In AB
Arg: b = 326
Arg: a = 666
In AC
Arg: b = 326
Arg: a = 666
Arg: c = 954
How can I make Scala pass my Map by value so that the modification in the call to bar2 doesn't modify the original map?
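For what it's worth, the JVM passes the reference itself by value, so the callee always sees the same underlying mutable map; value-like behaviour comes from handing the callee a copy (in Scala, e.g. env.clone(), or switching Env to an immutable Map). A minimal Java sketch of that distinction, with hypothetical names:

import java.util.HashMap;
import java.util.Map;

public class PassCopySketch {
    // The callee receives the same map object: mutations are visible to the caller.
    static void extend(Map<String, Integer> env, String sym, int v) {
        env.put(sym, v);
    }

    public static void main(String[] args) {
        Map<String, Integer> env = new HashMap<>();
        extend(env, "a", 666);

        // Passing a copy isolates the caller's map from the callee's mutations.
        Map<String, Integer> copy = new HashMap<>(env);
        extend(copy, "b", 326);

        System.out.println(env);   // {a=666}
        System.out.println(copy);  // {a=666, b=326}
    }
}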

Xtext: Inferring type in variable declaration not working in interface generation

I am writing my DSL's model inferrer, which extends AbstractModelInferrer. So far I have successfully generated classes for some grammar constructs; however, when I try to generate an interface, the type inference does not work and I get the following exception:
0 [Worker-2] ERROR org.eclipse.xtext.builder.BuilderParticipant - Error during compilation of 'platform:/resource/pascani/src/org/example/namespaces/SLA.pascani'.
java.lang.IllegalStateException: equivalent could not be computed
The Model inferrer code is:
def dispatch void infer(Namespace namespace, IJvmDeclaredTypeAcceptor acceptor, boolean isPreIndexingPhase) {
acceptor.accept(processNamespace(namespace, isPreIndexingPhase))
}
def JvmGenericType processNamespace(Namespace namespace, boolean isPreIndexingPhase) {
namespace.toInterface(namespace.fullyQualifiedName.toString) [
if (!isPreIndexingPhase) {
documentation = namespace.documentation
for (e : namespace.expressions) {
switch (e) {
Namespace: {
members +=
e.toMethod("get" + Strings.toFirstUpper(e.name), typeRef(e.fullyQualifiedName.toString)) [
abstract = true
]
members += processNamespace(e, isPreIndexingPhase);
}
XVariableDeclaration: {
members += processNamespaceVarDecl(e)
}
}
}
}
]
}
def processNamespaceVarDecl(XVariableDeclaration decl) {
val EList<JvmMember> members = new BasicEList();
val field = decl.toField(decl.name, inferredType(decl.right))[initializer = decl.right]
// members += field
members += decl.toMethod("get" + Strings.toFirstUpper(decl.name), field.type) [
abstract = true
]
if (decl.isWriteable) {
members += decl.toMethod("set" + Strings.toFirstUpper(decl.name), typeRef(Void.TYPE)) [
parameters += decl.toParameter(decl.name, field.type)
abstract = true
]
}
return members
}
I have tried using the lazy initializer after the acceptor.accept method, but it still does not work.
When I uncomment the line members += field, which adds a field to an interface, the model inferrer works fine; however, as you know, interfaces cannot have fields.
This seems like a bug to me. I have read tons of posts in the Eclipse forum but nothing seems to solve my problem. In case it is needed, this is my grammar:
grammar org.pascani.Pascani with org.eclipse.xtext.xbase.Xbase
import "http://www.eclipse.org/xtext/common/JavaVMTypes" as types
import "http://www.eclipse.org/xtext/xbase/Xbase"
generate pascani "http://www.pascani.org/Pascani"
Model
: ('package' name = QualifiedName ->';'?)?
imports = XImportSection?
typeDeclaration = TypeDeclaration?
;
TypeDeclaration
: MonitorDeclaration
| NamespaceDeclaration
;
MonitorDeclaration returns Monitor
: 'monitor' name = ValidID
('using' usings += [Namespace | ValidID] (',' usings += [Namespace | ValidID])*)?
body = '{' expressions += InternalMonitorDeclaration* '}'
;
NamespaceDeclaration returns Namespace
: 'namespace' name = ValidID body = '{' expressions += InternalNamespaceDeclaration* '}'
;
InternalMonitorDeclaration returns XExpression
: XVariableDeclaration
| EventDeclaration
| HandlerDeclaration
;
InternalNamespaceDeclaration returns XExpression
: XVariableDeclaration
| NamespaceDeclaration
;
HandlerDeclaration
: 'handler' name = ValidID '(' param = FullJvmFormalParameter ')' body = XBlockExpression
;
EventDeclaration returns Event
: 'event' name = ValidID 'raised' (periodically ?= 'periodically')? 'on'? emitter = EventEmitter ->';'?
;
EventEmitter
: eventType = EventType 'of' emitter = QualifiedName (=> specifier = RelationalEventSpecifier)? ('using' probe = ValidID)?
| cronExpression = CronExpression
;
enum EventType
: invoke
| return
| change
| exception
;
RelationalEventSpecifier returns EventSpecifier
: EventSpecifier ({RelationalEventSpecifier.left = current} operator = RelationalOperator right = EventSpecifier)*
;
enum RelationalOperator
: and
| or
;
EventSpecifier
: (below ?= 'below' | above ?= 'above' | equal ?= 'equal' 'to') value = EventSpecifierValue
| '(' RelationalEventSpecifier ')'
;
EventSpecifierValue
: value = Number (percentage ?= '%')?
| variable = QualifiedName
;
CronExpression
: seconds = CronElement // 0-59
minutes = CronElement // 0-59
hours = CronElement // 0-23
days = CronElement // 1-31
months = CronElement // 1-2 or Jan-Dec
daysOfWeek = CronElement // 0-6 or Sun-Sat
| constant = CronConstant
;
enum CronConstant
: reboot // Run at startup
| yearly // 0 0 0 1 1 *
| annually // Equal to #yearly
| monthly // 0 0 0 1 * *
| weekly // 0 0 0 * * 0
| daily // 0 0 0 * * *
| hourly // 0 0 * * * *
| minutely // 0 * * * * *
| secondly // * * * * * *
;
CronElement
: RangeCronElement | PeriodicCronElement
;
RangeCronElement hidden()
: TerminalCronElement ({RangeCronElement.start = current} '-' end = TerminalCronElement)?
;
TerminalCronElement
: expression = (IntLiteral | ValidID | '*' | '?')
;
PeriodicCronElement hidden()
: expression = TerminalCronElement '/' elements = RangeCronList
;
RangeCronList hidden()
: elements += RangeCronElement (',' elements +=RangeCronElement)*
;
IntLiteral
: INT
;
UPDATE
The use of a field was a workaround so I could keep working on other things until I find a solution. The actual code is:
def processNamespaceVarDecl(XVariableDeclaration decl) {
val EList<JvmMember> members = new BasicEList();
val type = if (decl.right != null) inferredType(decl.right) else decl.type
members += decl.toMethod("get" + Strings.toFirstUpper(decl.name), type) [
abstract = true
]
if (decl.isWriteable) {
members += decl.toMethod("set" + Strings.toFirstUpper(decl.name), typeRef(Void.TYPE)) [
parameters += decl.toParameter(decl.name, type)
abstract = true
]
}
return members
}
From the answer in the Eclipse forum:
I don't know if what you are doing is a good idea. The inferrer maps your concepts to Java concepts, and this enables the scoping for the expressions. If you do not have a place for your expressions then it won't work; their types will never be computed.
Thus I think you have a use case which is not possible using Xbase without customizations. Your semantics is not quite clear to me.
Christian Dietrich
My answer:
Thanks Christian, I thought I was doing something wrong. If this isn't a common use case, then there is no problem; I will make sure the user explicitly defines a variable type.
Just to clarify a little bit, a Namespace is intended to define variables that are used in Monitors. That's why a Namespace becomes an interface, and a Monitor becomes a class.
Read the Eclipse forum thread

Create abstract tree problem from parser

I need help. I have two simple classes, Tree and Node (I show just the interface to save space on the forum; I can easily modify these classes). I also have a flex file and a parser file, and I need to create an AST (abstract syntax tree), i.e. put tokens into Node objects and fill the Tree in the right way.
public class Tree {
Node root;
public void AddNode(Node n){}
public void Evaluate(){}
}
public class Node {
public String value;
public int type;
Node left, right;
}
This is the parser file:
import java_cup.runtime.*;
parser code {:
public boolean result = true;
public void report_fatal_error(String message, Object info) throws java.lang.Exception {
done_parsing();
System.out.println("report_fatal_error");
report_error();
}
public void syntax_error(Symbol cur_token) {
System.out.println("syntax_error");
report_error();
}
public void unrecovered_syntax_error(Symbol cur_token) throws java.lang.Exception {
System.out.println("unrecovered_syntax_error");
report_fatal_error("Fatalna greska, parsiranje se ne moze nastaviti", cur_token);
}
public void report_error(){
System.out.println("report_error");
result = false;
}
:}
init with {: result = true; :};
/* Terminals (tokens returned by the scanner). */
terminal AND, OR, NOT;
terminal LPAREN, RPAREN;
terminal ITEM;
terminal OPEN, CLOSE, MON, MOFF, TIMEOUT, ESERR, BAE, I, O, BUS, EXT, PUSHB;
terminal VAL, OK, BUS_BR_L, BUS_BR_R, SH_CRT_L, SH_CRT_R, BUS_ALL, EXT_ALL, NO_TIMEOUT, NO_ES_ERR, IBUS_OK, CFG_OK, SYNTAX;
terminal OUT;
/* Non-terminals */
non terminal extension;
non terminal Integer expr;
/* Precedences */
precedence left AND, OR;
/* The grammar */
expr ::=
|
expr:e1 AND expr:e2
{:
//System.out.println("AND");
RESULT = 1;
:}
|
expr:e1 OR expr:e2
{:
//System.out.println("OR");
RESULT = 2;
:}
|
NOT expr:e1
{:
//System.out.println("NOT");
RESULT = 3;
:}
|
LPAREN expr:e RPAREN
{:
//System.out.println("()");
RESULT = 4;
:}
|
ITEM extension:e1
{:
//System.out.println("ITEM.");
RESULT = 5;
:}
|
error
{:
System.out.println("error");
parser.report_error();
RESULT = 0;
:}
;
extension ::=
OPEN
|
MON
|
CLOSE
|
MOFF
|
TIMEOUT
|
ESERR
|
BAE
|
I
|
O
|
BUS
|
EXT
|
PUSHB
|
VAL
|
OK
|
BUS_BR_L
|
BUS_BR_R
|
SH_CRT_L
|
SH_CRT_R
|
BUS_ALL
|
EXT_ALL
|
NO_TIMEOUT
|
NO_ES_ERR
|
IBUS_OK
|
CFG_OK
|
SYNTAX
|
OUT
;
This is the flex scanner specification:
%%
%{
public boolean result = true;
//Puni expression sa tokenima radi reimenovanja
public Expression expression=new Expression();
//
public ArrayList<String> items = new ArrayList<String>();
public ArrayList<Integer> extensions = new ArrayList<Integer>();
// ukljucivanje informacije o poziciji tokena
private Symbol new_symbol(int type) {
return new Symbol(type, yyline+1, yycolumn);
}
// ukljucivanje informacije o poziciji tokena
private Symbol new_symbol(int type, Object value) {
return new Symbol(type, yyline+1, yycolumn, value);
}
%}
%cup
%xstate COMMENT
%eofval{
return new_symbol(sym.EOF);
%eofval}
%line
%column
%%
" " {}
"\b" {}
"\t" {}
"\r\n" {}
"\f" {}
"open" {extensions.add(sym.OPEN); return new_symbol(sym.OPEN);}
"close" {extensions.add(sym.CLOSE); return new_symbol(sym.CLOSE);}
"m_on" {extensions.add(sym.MON); return new_symbol(sym.MON);}
"m_off" {extensions.add(sym.MOFF); return new_symbol(sym.MOFF);}
"timeout" {extensions.add(sym.TIMEOUT); return new_symbol(sym.TIMEOUT);}
"es_err" {extensions.add(sym.ESERR); return new_symbol(sym.ESERR);}
"bae" {extensions.add(sym.BAE); return new_symbol(sym.BAE);}
"i" {extensions.add(sym.I); return new_symbol(sym.I);}
"o" {extensions.add(sym.O); return new_symbol(sym.O);}
"bus" {extensions.add(sym.BUS); return new_symbol(sym.BUS);}
"ext" {extensions.add(sym.EXT); return new_symbol(sym.EXT);}
"pushb" {extensions.add(sym.PUSHB); return new_symbol(sym.PUSHB);}
"val" {extensions.add(sym.VAL); return new_symbol(sym.VAL);}
"ok" {extensions.add(sym.OK); return new_symbol(sym.OK);}
"bus_br_l" {extensions.add(sym.BUS_BR_L); return new_symbol(sym.BUS_BR_L);}
"bus_br_r" {extensions.add(sym.BUS_BR_R); return new_symbol(sym.BUS_BR_R);}
"sh_crt_l" {extensions.add(sym.SH_CRT_L); return new_symbol(sym.SH_CRT_L);}
"sh_crt_r" {extensions.add(sym.SH_CRT_R); return new_symbol(sym.SH_CRT_R);}
"bus_all" {extensions.add(sym.BUS_ALL); return new_symbol(sym.BUS_ALL);}
"ext_all" {extensions.add(sym.EXT_ALL); return new_symbol(sym.EXT_ALL);}
"no_timeout" {extensions.add(sym.NO_TIMEOUT); return new_symbol(sym.NO_TIMEOUT);}
"no_es_err" {extensions.add(sym.NO_ES_ERR); return new_symbol(sym.NO_ES_ERR);}
"ibus_ok" {extensions.add(sym.IBUS_OK); return new_symbol(sym.IBUS_OK);}
"cfg_ok" {extensions.add(sym.CFG_OK); return new_symbol(sym.CFG_OK);}
"syntax" {extensions.add(sym.SYNTAX); return new_symbol(sym.SYNTAX);}
"out" {extensions.add(sym.OUT); return new_symbol(sym.OUT);}
"!" { return new_symbol(sym.NOT);}
"&" { return new_symbol(sym.AND);}
"|" { return new_symbol(sym.OR);}
"(" { return new_symbol(sym.LPAREN);}
")" { return new_symbol(sym.RPAREN);}
([[:jletter:]])[[:jletterdigit:]]* \. {items.add(yytext().substring(0, yytext().length()-1)); return new_symbol (sym.ITEM);}
. {result = false;}
The problem is how to create the AST from here. As input I get an expression like
A.open && b.i
Can anybody help?
The lines in your parser where you have commented out print statements like:
//System.out.println("OR");
are where you'll want to build your AST using the Tree data structure you have. Work out which token creates which kind of node and where it belongs in the tree, based on your grammar.
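For example, Node could be given a constructor, and the semantic action bodies (which are plain Java) would then build Node objects instead of integers. This is only a sketch under assumptions not in the original code: the constructor below is invented, and expr would have to be declared as non terminal Node expr so that RESULT and the labels e1/e2 are typed as Node.

public class Node {
    public String value; // token text or operator symbol, e.g. "&", "!", "ITEM"
    public int type;     // token type, e.g. sym.AND, sym.NOT, sym.ITEM
    Node left, right;    // children; null for leaves

    public Node(String value, int type, Node left, Node right) {
        this.value = value;
        this.type = type;
        this.left = left;
        this.right = right;
    }
}

// Inside the CUP semantic actions (Java code), assuming expr is a Node non-terminal:
//   expr:e1 AND expr:e2  ->  RESULT = new Node("&", sym.AND, e1, e2);
//   NOT expr:e1          ->  RESULT = new Node("!", sym.NOT, e1, null);
//   ITEM extension       ->  RESULT = new Node("ITEM", sym.ITEM, null, null);
// (to keep the item's text, the scanner would need to pass it along, e.g.
//  return new_symbol(sym.ITEM, yytext()) instead of new_symbol(sym.ITEM))
// The topmost RESULT is the root of the tree, which can then be stored,
// e.g. tree.AddNode(root).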
