No method prog() to build a ParseTree object - java

I'm experiencing issues with ANTLR 4, using the visitor classes.
I'm trying to write the following code:
    import bla.gen.InputLexer;
    import bla.gen.InputParser;
    import org.antlr.v4.runtime.ANTLRFileStream;
    import org.antlr.v4.runtime.CommonTokenStream;
    import org.antlr.v4.runtime.tree.ParseTree;

    public class Main {
        public static void main(String[] args) throws Exception {
            InputLexer lexer = new InputLexer(new ANTLRFileStream("pl_example.lp"));
            InputParser parser = new InputParser(new CommonTokenStream(lexer));
            parser.setBuildParseTree(true);
            ParseTree tree = parser.prog();
            ParserVisitor visitor = new ParserVisitor();
            visitor.visit(tree);
        }
    }
I'm trying to mimic the code found in the book examples here:
https://pragprog.com/titles/tpantlr2/source_code
(I have no access to the book, just the examples.)
But I get an error because the method parser.prog() does not exist...
I use ANTLR 4.5.
Do you know how to generate a ParseTree with this version?

The name of the method used to retrieve the parse tree matches the name of the entry rule of your grammar. If you used a different name for the entry rule, the generated method will carry that name instead.

The problem is that you deleted the start symbol from your grammar. In the book's LabeledExpr.g4, the start rule is prog, which matches one or more stat rules:
    prog: stat+ ;
Without it, the parser has no prog rule to serve as the entry point for building the tree.
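As a minimal sketch of the fix, assuming the missing rule is restored to the grammar exactly as in the book (prog: stat+ ;) and ANTLR is rerun so that InputParser.prog() is generated:

    // The start rule name determines the generated method name:
    //   prog : stat+ ;  in the grammar  ->  parser.prog() in InputParser
    ParseTree tree = parser.prog();
    ParserVisitor visitor = new ParserVisitor();
    visitor.visit(tree);   // hand the tree to the visitor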

How do I test a single Instance in Weka using a model that I have built?

I am trying to test a single instance using the Weka API in Java. My aim is to predict the class value of the single instance in the test.arff file.
My Java code looks like this:
    import weka.core.Instances;
    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.classifiers.*;
    import java.io.*;
    import java.util.Random;

    public class WekaNew {
        public static void main(String[] args) throws Exception {
            System.out.println("Weka Tool");

            // load training data
            BufferedReader breader = new BufferedReader(new FileReader("train.arff"));
            Instances train = new Instances(breader);
            train.setClassIndex(train.numAttributes() - 1);
            breader.close();

            // load testing data
            BufferedReader treader = new BufferedReader(new FileReader("test.arff"));
            Instances test = new Instances(treader);
            test.setClassIndex(test.numAttributes() - 1);
            treader.close();

            Classifier cls = new J48();
            cls.buildClassifier(train);
            Evaluation eval = new Evaluation(train);
            eval.evaluateModelOnce(cls, test);
            System.out.println(eval.toMatrixString("\nConfusion Matrix\n========\n"));
        }
    }
train.arff has 7 attributes + 1 class label, with 132 instances of data.
test.arff has the same 7 attributes + 1 class label (set to ?), with ONE instance.
I want to predict the class label of the single instance in test.arff.
How do I go about predicting the label, and what changes need to be made to the dataset and the code?
I tried compiling the Java file with javac -cp "/classpath" WekaNew.java, and it gives the following error: "No suitable method found for evaluateModelOnce()".
I am new to the Weka API and to Java in general; apologies in advance if the question seems repeated.
I have also referred to the following questions on Stack Overflow:
1. Test single instance in weka which has no class label
2. Test a single instance in Weka
but they do not seem to solve my problem.
This is the signature of evaluateModelOnce:
    public double evaluateModelOnce(Classifier classifier,
                                    Instance instance)
(see http://weka.sourceforge.net/doc.stable/weka/classifiers/Evaluation.html#evaluateModelOnce-weka.classifiers.Classifier-weka.core.Instance-)
However, you pass in Instances instead of Instance, and those are two different classes; the compiler therefore cannot find a matching method.
To evaluate a single Weka Instance, you might want to try
    eval.evaluateModelOnce(cls, test.firstInstance());
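If the goal is the predicted label itself rather than evaluation statistics, here is a hedged sketch (reusing the cls and test variables from the question, and assuming a nominal class attribute):

    // classifyInstance returns the index of the predicted class value
    double predicted = cls.classifyInstance(test.firstInstance());
    // map that index back to the label string via the class attribute
    String label = test.classAttribute().value((int) predicted);
    System.out.println("Predicted label: " + label);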

Cast from GrammaticalStructure to Tree

I am trying out the new NN Dependency Parser from Stanford. According to the demo they have provided, this is how the parsing is done:
    import edu.stanford.nlp.process.DocumentPreprocessor;
    import edu.stanford.nlp.trees.GrammaticalStructure;
    import edu.stanford.nlp.parser.nndep.DependencyParser;
    ...

    GrammaticalStructure gs = null;
    DocumentPreprocessor tokenizer = new DocumentPreprocessor(new StringReader(sentence));
    for (List<HasWord> sent : tokenizer) {
        List<TaggedWord> tagged = tagger.tagSentence(sent);
        gs = parser.predict(tagged);
        // Print typed dependencies
        System.out.println("Grammatical structure: " + gs);
    }
Now, what I want is for this object gs, which is of class GrammaticalStructure, to be cast to a Tree object from edu.stanford.nlp.trees.Tree.
I naively tried a simple cast:
    Tree t = (Tree) gs;
but this is not possible (the IDE gives an error: Cannot cast from GrammaticalStructure to Tree).
How do I do this?
You should be able to get the Tree using gs.root().
According to the documentation, that method returns a Tree (actually, a TreeGraphNode) which represents the grammatical structure.
You could print that tree in a human-friendly way with gs.root().pennPrint().
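Putting it together, a minimal sketch using the gs variable from the question's loop:

    // root() returns a TreeGraphNode, a subclass of Tree, so no cast is needed
    Tree t = gs.root();
    t.pennPrint();   // prints the tree in Penn Treebank format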

Apache pig script, Error 1070: Java UDF could not resolve import

I am trying to write a Java UDF with the end goal of extending/overriding the load method of PigStorage to support entries that take multiple lines.
My Pig script is as follows:
    REGISTER udf.jar;
    register 'userdef.py' using jython as parser;
    A = LOAD 'test_data' USING PigStorage() AS row:chararray;
    C = FOREACH A GENERATE myTOKENIZE.test();
    DUMP C;
udf.jar looks like:
    udf/myTOKENIZE.class
myTOKENIZE.java imports org.apache.pig.* and extends EvalFunc. The test method just returns a "Hello world" String.
The problem I am having is that when I try to call the method test() of class myTOKENIZE, I get:
    ERROR 1070: Could not resolve myTOKENIZE.test using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
Thoughts?
As your UDF extends EvalFunc, there should be a method called exec() in the class myTOKENIZE.
Your Pig code would then look as follows:
    C = FOREACH A GENERATE udf.myTOKENIZE(*);
Please read http://pig.apache.org/docs/r0.7.0/udf.html#How+to+Write+a+Simple+Eval+Function
Hope that helps.
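As a minimal sketch of that shape (the package name udf and the return value are assumptions based on the question's description):

    package udf;

    import java.io.IOException;
    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;

    public class myTOKENIZE extends EvalFunc<String> {
        // Pig calls exec() for every input tuple; a method named test()
        // is never invoked by FOREACH ... GENERATE
        @Override
        public String exec(Tuple input) throws IOException {
            return "Hello world";
        }
    }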
So is myTOKENIZE in the package udf? In that case you'd need
    C = FOREACH A GENERATE udf.myTOKENIZE.test();
After waaaaay too much time (and coffee) and a bunch of trial and error, I figured out my issue.
Important note: for some jar myudfs.jar, the classes contained within must have their package defined as myudfs.
The corrected code is as follows:
    REGISTER myudfs.jar;
    register 'userdef.py' using jython as parser;
    A = LOAD 'test_data' USING PigStorage() AS row:chararray;
    C = FOREACH A GENERATE myudfs.myTOKENIZE('');
    DUMP C;
myTOKENIZE.java:
    package myudfs;

    import java.io.IOException;
    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;
    import org.apache.pig.impl.util.WrappedIOException;

    public class myTOKENIZE extends EvalFunc<String> {
        public String exec(Tuple input) throws IOException {
            if (input == null || input.size() == 0)
                return null;
            try {
                String str = (String) input.get(0);
                return str.toUpperCase();
            } catch (Exception e) {
                throw WrappedIOException.wrap("Caught exception processing input row ", e);
            }
        }
    }
The structure of myudfs.jar:
    myudfs/myTOKENIZE.class
Hopefully this proves useful to someone else with similar issues!
This is very late, but I think the solution is that when using the UDF in your Pig script you have to give the fully qualified name of the class, including its package name.
For example, with package com.evalfunc.udf; and Power as the class name:
    public class Power extends EvalFunc<Integer> { ... }
Then, when using it in Pig, first register the jar file and then use the UDF with its full package name:
    record = LOAD '/user/fsbappdev/maitytest/pig/pigudf/power_data' USING PigStorage(',');
    pow_result = foreach record generate com.evalfunc.udf.Power(base,exponent);

How to create a dynamic Interface with properties file at compile time?

The problem here is that the property file we use has insanely long names as keys, and most of us run into incorrect key naming issues. So it got me thinking: is there a way to generate the following interface from the property file? Every change we make to the property file would then auto-adjust the Properties interface. Or is there another solution?
Property file:
    A=Apple
    B=Banana
    C=Cherry
Should generate the following interface:
    interface Properties {
        public static final String A = "A"; // keys
        public static final String B = "B";
        public static final String C = "C";
    }
So in my application code:
    String a_value = PROP.getString(Properties.A);
There is an old rule about programming (and not only about programming): if something looks beautiful, then most probably it is the right way to do it.
This approach does not look good, from my point of view.
The first thing:
Do not declare constants in interfaces. It violates encapsulation. Please check this article: http://en.wikipedia.org/wiki/Constant_interface
The second thing:
Use a prefix for the names of the properties that are somehow special, let's say key_.
When you load your properties file, iterate over the keys, extract those whose names start with key_, and use their values as you planned to use the constants in your question.
UPDATE
Assume we generate a huge properties file during the build, using our Apache Ant script.
For example, let's say the properties file (myapp.properties) looks like this:
    key_A = Apple
    key_B = Banana
    key_C = Cherry
    anotherPropertyKey1 = blablabla1
    anotherPropertyKey2 = blablabla2
The special properties we want to handle have key names starting with the key_ prefix.
So we write the following code (please note it is not optimized; it is just a proof of concept):
    package propertiestest;

    import java.io.FileInputStream;
    import java.io.FileNotFoundException;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Arrays;
    import java.util.Enumeration;
    import java.util.HashSet;
    import java.util.Properties;
    import java.util.Set;

    public class PropertiesTest {
        public static void main(String[] args) throws IOException {
            final String PROPERTIES_FILENAME = "myapp.properties";
            SpecialPropertyKeysStore spkStore =
                    new SpecialPropertyKeysStore(PROPERTIES_FILENAME);
            System.out.println(Arrays.toString(spkStore.getKeysArray()));
        }
    }

    class SpecialPropertyKeysStore {

        private final Set<String> keys;

        public SpecialPropertyKeysStore(String propertiesFileName)
                throws FileNotFoundException, IOException {
            // prefix of the name of a special property key
            final String KEY_PREFIX = "key_";
            Properties propertiesHandler = new Properties();
            keys = new HashSet<>();
            try (InputStream input = new FileInputStream(propertiesFileName)) {
                propertiesHandler.load(input);
                Enumeration<?> enumeration = propertiesHandler.propertyNames();
                while (enumeration.hasMoreElements()) {
                    String key = (String) enumeration.nextElement();
                    if (key.startsWith(KEY_PREFIX)) {
                        keys.add(key);
                    }
                }
            }
        }

        public boolean isKeyPresent(String keyName) {
            return keys.contains(keyName);
        }

        public String[] getKeysArray() {
            String[] strTypeParam = new String[0];
            return keys.toArray(strTypeParam);
        }
    }
The class SpecialPropertyKeysStore filters and collects all special keys into its instance.
You can get an array of these keys, or check whether a key is present or not.
If you run this code, you will get:
    [key_C, key_B, key_A]
That is the string representation of the returned array of special key names.
Change this code as you want to meet your requirements.
I would not generate a class or interface from properties, because you would lose the ability to:
- document those properties, since hand-written constants can carry Javadoc
- reference those properties in your code, since hand-written constants are plain old Java constants and the compiler has full knowledge of them. Refactoring them would also be possible, while it would not be with automatically generated names.
You can also use enums, or create some special Property class with a name as its only and final field. Then you only need a get method that takes a Properties, a Map, or whatever (a minimal sketch of the enum variant follows at the end of this answer).
As for your request, you can execute code with the exec-maven-plugin.
You would simply create a main that reads your properties file and, for each key:
- converts the key to a valid Java identifier (you can use Character.isJavaIdentifierStart and Character.isJavaIdentifierPart to replace invalid characters with a _)
- writes your class/interface/whatever you like in plain old Java (and don't forget to escape any double quotes or backslashes!)
Since it would be part of your build, say before building other classes that depend on those constants, I would recommend creating a specific Maven project to isolate that build step.
Still, I really would not do that; I would use a POJO loaded with whatever you need (CDI, Spring, static initialization, etc.).
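For the enum variant mentioned above, a minimal sketch (the keys match the question's property file; the enum and method names are illustrative, not an existing API):

    import java.util.Properties;

    // Each constant names one property key, so a typo in a key
    // becomes a compile error instead of a runtime lookup failure.
    enum PropertyKey {
        A, B, C;

        // Look the key up in an already-loaded Properties object.
        public String in(Properties props) {
            return props.getProperty(name());
        }
    }

    // usage: String a_value = PropertyKey.A.in(props);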

Is there any way to convert the test code to a MethodDeclaration using Eclipse JDT?

Assume that I already have a CompilationUnit parsed from a Java file. Now I want to add some new, complex methods to the Java file. A simple method such as
    public static void main(String[] args) {
        System.out.println("Hello" + " world");
    }
we can program manually, as in this example: http://publib.boulder.ibm.com/infocenter/rsmhelp/v7r0m0/index.jsp?topic=/org.eclipse.jdt.doc.isv/guide/jdt_api_manip.htm
However, with a complex method, it seems to be impossible. I think another solution is this: store the entire method in a String, then parse it and add the result to the existing compilation unit.
Is there any way to convert the test code to a MethodDeclaration, then append it to the existing compilation unit using Eclipse JDT?
I assume here, that you are speaking of a CompilationUnit and a MethodDeclaration in the org.eclipse.jdt.core.dom package.
However, with a complex method, it seems to be impossible.
Actually you could theoretically create any possible legal Java code using the org.eclipse.jdt.core.dom API, by creating and linking the corresponding ASTNodes using the various AST#new-methods.
For complex code, it is however more convenient to parse existing statements using an ASTParser. To do that, you must first set the source of the parser to the code of the statements, and then set the parser kind to ASTParser.K_STATEMENTS. Then, when creating an ASTNode by calling ASTParser#createAST, the returned node will be of type Block. Before you can set this block as e.g. the block of a MethodDeclaration, you must copy the block to the existing AST by calling ASTNode.copySubtree(ast, block).
Here is a complete example, that shows how this could be done:
    import org.eclipse.jdt.core.dom.AST;
    import org.eclipse.jdt.core.dom.ASTNode;
    import org.eclipse.jdt.core.dom.ASTParser;
    import org.eclipse.jdt.core.dom.Block;
    import org.eclipse.jdt.core.dom.CompilationUnit;
    import org.eclipse.jdt.core.dom.MethodDeclaration;
    import org.eclipse.jdt.core.dom.TypeDeclaration;

    public class JdtDomExample {
        public static void main(String[] args) {
            // (1) somehow get an org.eclipse.jdt.core.dom.CompilationUnit, a TypeDeclaration, and a MethodDeclaration
            AST ast = AST.newAST(AST.JLS8);
            CompilationUnit cu = ast.newCompilationUnit();
            TypeDeclaration typeDecl = ast.newTypeDeclaration();
            typeDecl.setName(ast.newSimpleName("MyClass"));
            cu.types().add(typeDecl);
            MethodDeclaration method = cu.getAST().newMethodDeclaration();
            method.setName(ast.newSimpleName("myMethod"));
            typeDecl.bodyDeclarations().add(method);

            // (2) create an ASTParser and parse the method body as ASTParser.K_STATEMENTS
            ASTParser parser = ASTParser.newParser(AST.JLS8);
            parser.setSource("System.out.println(\"Hello\" + \" world\");".toCharArray());
            parser.setKind(ASTParser.K_STATEMENTS);
            Block block = (Block) parser.createAST(null);

            // (3) copy the statements into the existing AST
            block = (Block) ASTNode.copySubtree(ast, block);
            method.setBody(block);

            // show the result
            System.out.println(cu);
        }
    }
Output:
    class MyClass {
      void myMethod(){
        System.out.println("Hello" + " world");
      }
    }
