JClouds-Chef BootstrapConfig Builder MissingMethodException - java

Please note: although this question involves the JClouds-Chef library and Groovy, I think this is a Java API question at heart.
On JClouds-Chef 1.7.3 here:
List<String> runlist = new RunListBuilder().addRole("typicalapp").build();
ArrayList<String> runList2 = new ArrayList<String>();
for (String item : runlist) {
    runList2.add(item);
}
System.out.println("runList2 is of type: " + runList2.getClass().getName());
BootstrapConfig bootstrapConfig = BootstrapConfig.builder().runlist(runList2).build();
Produces the following output/exception:
runList2 is of type: java.util.ArrayList
Exception in thread "main" groovy.lang.MissingMethodException: No signature of method: org.jclouds.chef.domain.BootstrapConfig$Builder.runlist() is applicable for argument types: (java.util.ArrayList) values: [[role[typicalapp]]]
Possible solutions: runList(java.lang.Iterable), build(), split(groovy.lang.Closure)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:55)
at org.codehaus.groovy.runtime.callsite.PojoMetaClassSite.call(PojoMetaClassSite.java:46)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
at net.myuser.chef.test.ChefPlugin.provision(ChefPlugin.groovy:71)
at net.myuser.chef.test.ChefPlugin$provision.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:112)
at net.myuser.chef.test.ChefPlugin.main(ChefPlugin.groovy:27)
I'm pretty sure the code for this version of BootstrapConfig#Builder is here. As far as I can tell, ArrayList implements Iterable, so I can't see what's going on here.

You are calling runlist instead of runList: the builder method name is camel-cased, which is exactly what the "Possible solutions" line in the exception is hinting at.
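With the camel-cased method name the call resolves, and since runList takes any Iterable<String>, the List returned by RunListBuilder can be passed directly without copying it into an ArrayList:
// runList(Iterable) accepts the List built by RunListBuilder as-is
List<String> runlist = new RunListBuilder().addRole("typicalapp").build();
BootstrapConfig bootstrapConfig = BootstrapConfig.builder()
        .runList(runlist)   // note the capital L in the method name
        .build();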

Related

Getting the below exception on using Date.parse function in Groovy java.lang.ClassNotFoundException: groovy.ui.Console

Upon using the below code I am getting the following exception:
use(TimeCategory) {
    def dt1 = Date.parse("dd-mm-yyyy", a) + b.year;
    def dt2 = Date.parse("dd-mm-yyyy", c);
    if (dt1 > dt2) ..... else .. }
java.lang.ClassNotFoundException: groovy.ui.Console
at org.codehaus.groovy.tools.RootLoader.findClass(RootLoader.java:179)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at org.codehaus.groovy.tools.RootLoader.loadClass(RootLoader.java:151)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.codehaus.groovy.tools.GroovyStarter.rootLoader(GroovyStarter.java:104)
at org.codehaus.groovy.tools.GroovyStarter.main(GroovyStarter.java:136)
Can you please let me know the reason for it? Thanks in advance.
import java.text.SimpleDateFormat

def dt1 = new SimpleDateFormat("dd-mm-yyyy").parse(a) + b.year
def dt2 = new SimpleDateFormat("dd-mm-yyyy").parse(c)
Changing the code as above resolved the issue.
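For reference, here is a minimal stand-alone Java sketch of the same approach; the input strings are hypothetical placeholders for the script's a and c variables:
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateCompare {
    public static void main(String[] args) throws Exception {
        String a = "01-01-2017"; // placeholder values
        String c = "15-06-2018";
        // dd-MM-yyyy parses day-month-year; lowercase mm would mean minutes
        SimpleDateFormat format = new SimpleDateFormat("dd-MM-yyyy");
        Date dt1 = format.parse(a);
        Date dt2 = format.parse(c);
        System.out.println(dt1.after(dt2) ? "dt1 is later" : "dt2 is later or equal");
    }
}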

Java Spark GroupByFailure

I'm attempting to use the Java Spark libraries with a cluster running Spark 2.3.0 over Hadoop 3.1.0 (and using those versions of the Java libraries).
I've run into a problem where I simply cannot use groupByKey, and I am at a loss to explain why. Any attempted usage of groupByKey for any reason in any circumstance is returning a java.lang.IllegalArgumentException.
I've boiled this down to about the simplest test I can think of:
package com.failuretest;
import java.util.ArrayList;
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;
public class TestReport {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("TestReport").set("spark.executor.memory", "20G");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> test = sc.parallelize(generateTestData());
        test.saveAsTextFile("/TEST/testfile1");
        test.mapToPair(line -> {
            String[] testParts = line.split(" ");
            return new Tuple2<String, String>(testParts[0], testParts[1]);
        }).groupByKey().saveAsTextFile("/TEST/testfile2");
        sc.close();
    }

    private static List<String> generateTestData() {
        List<String> testList = new ArrayList<String>();
        int keyCount = 0;
        int valCount = 0;
        while (valCount++ < 2000000) {
            if (valCount % 10 == 0) {
                keyCount++;
            }
            testList.add("Key" + keyCount + " " + "Val" + valCount);
        }
        return testList;
    }
}
I'm just programmatically creating an RDD that produces 10 values per key, then creating my JavaPairRDD with a simple split, then attempting groupByKey.
When it runs, I receive the following stack:
Exception in thread "main" java.lang.IllegalArgumentException
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$combineByKeyWithClassTag$1.apply(PairRDDFunctions.scala:88)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$combineByKeyWithClassTag$1.apply(PairRDDFunctions.scala:77)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.combineByKeyWithClassTag(PairRDDFunctions.scala:77)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$groupByKey$1.apply(PairRDDFunctions.scala:505)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$groupByKey$1.apply(PairRDDFunctions.scala:498)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.groupByKey(PairRDDFunctions.scala:498)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$groupByKey$3.apply(PairRDDFunctions.scala:641)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$groupByKey$3.apply(PairRDDFunctions.scala:641)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.groupByKey(PairRDDFunctions.scala:640)
at org.apache.spark.api.java.JavaPairRDD.groupByKey(JavaPairRDD.scala:559)
at com.failuretest.TestReport.main(TestReport.java:22)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
It doesn't get any further than the groupByKey (I'm writing a file above with the results, but it really doesn't matter since it never gets there).
I can run it all day long in my local dev instance, but running spark-submit with a jar containing the above fails every time in the cluster.
I'm really not sure where to go from here - what I am trying to do is a bit of a challenge if I cannot group by key.
Am I messing up? Is this a version conflict somewhere?
Dave
I actually figured this out before posting this, but in the interests of helping others...
I discovered that one of my colleagues had decided to have a play around with Java 10 on this particular cluster. Moving it back to Java 8 (sorry, I didn't try 9) made the problem go away.
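If anyone hits the same thing, a cheap first check is to confirm which JVM the driver is actually running on (the Spark UI's Environment tab also lists the JVM details under Runtime Information):
// Spark 2.3.x targets Java 8; the ClosureCleaner/ASM failure above is typical
// of running on a newer JVM, so checking the runtime version is a quick sanity test.
System.out.println("Driver Java version: " + System.getProperty("java.version"));
System.out.println("Driver Java home:    " + System.getProperty("java.home"));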
Dave

NullPointerException in Groovy script while trying to add XPath assertion

I am getting the error
java.lang.NullPointerException: Cannot invoke method getAssertionByName() on null object error at line: 5
even though I am able to add an XPath assertion to the test case manually.
As I am new to Groovy, I want to know:
What is the reason that I am getting this error?
How can I implement code to select from the current option in the XPath assertion, so that I can add a real XPath instead of printing some junk value (I have printed "hello" as of now)?
log.info("Testing Start")
def project = context.testCase.testSuite.project
TSName = "ManagePostpayInsurance_1_0"
StepName = "getInsuranceDetails_FC_004"
project.getTestSuiteList().each {
if(it.name == TSName) {
TS = it.name
it.getTestCaseList().each {
TC =it.name
def asserting = project.getTestSuiteByName(TS).getTestCaseByName(TC).getTestStepByName(StepName).getAssertionByName("XPath Match")
log.info(asserting)
if (asserting instanceof com.eviware.soapui.impl.wsdl.teststeps.assertions.basic.XPathContainsAssertion){
project.getTestSuiteByName(TS).getTestCaseByName(TC).getTestStepByName(StepName).removeAssertion(asserting)
}
def assertion = project.getTestSuiteByName(TS).getTestCaseByName(TC)getTestStepByName(StepName).addAssertion("XPath Match")
assertion.path = "declare namespace cor='http://soa.o2.co.uk/coredata_1';\ndeclare namespace man='http://soa.o2.co.uk/managepostpayinsurancedata_1';\ndeclare namespace soapenv='http://schemas.xmlsoap.org/soap/envelope/';\n//man:getInsuranceDetails_1Response"
assertion.expectedContent = "hello"
}
}
}
log.info("Testing Over")
I have attached the error log below.
Mon Nov 27 17:04:12 IST 2017:ERROR:java.lang.NullPointerException: Cannot invoke method getAssertionByName() on null object
java.lang.NullPointerException: Cannot invoke method getAssertionByName() on null object
at org.codehaus.groovy.runtime.NullObject.invokeMethod(NullObject.java:77)
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:45)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.NullCallSite.call(NullCallSite.java:32)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at com.eviware.soapui.model.testsuite.Assertable$getAssertionByName.call(Unknown Source)
at Script10$_run_closure1_closure2.doCall(Script10.groovy:11)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:233)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:272)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:909)
at groovy.lang.Closure.call(Closure.java:411)
at groovy.lang.Closure.call(Closure.java:427)
at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:1325)
at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:1297)
at org.codehaus.groovy.runtime.dgm$148.invoke(Unknown Source)
at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoMetaMethodSiteNoUnwrapNoCoerce.invoke(PojoMetaMethodSite.java:271)
at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:53)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
at Script10$_run_closure1.doCall(Script10.groovy:9)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:233)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:272)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:909)
at groovy.lang.Closure.call(Closure.java:411)
at groovy.lang.Closure.call(Closure.java:427)
at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:1325)
at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:1297)
at org.codehaus.groovy.runtime.dgm$148.invoke(Unknown Source)
at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoMetaMethodSiteNoUnwrapNoCoerce.invoke(PojoMetaMethodSite.java:271)
at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:53)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
at Script10.run(Script10.groovy:5)
at com.eviware.soapui.support.scripting.groovy.SoapUIGroovyScriptEngine.run(SoapUIGroovyScriptEngine.java:90)
at com.eviware.soapui.impl.wsdl.teststeps.WsdlGroovyScriptTestStep.run(WsdlGroovyScriptTestStep.java:141)
at com.eviware.soapui.impl.wsdl.panels.teststeps.GroovyScriptStepDesktopPanel$RunAction$1.run(GroovyScriptStepDesktopPanel.java:250)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
I'm badly stuck with the above issue; quick help is really appreciated!
Thank you very much
Here you go:
Since you are already iterating through the hierarchy, you do not need to repeat the full chain of method calls starting from project.
Instead, you can use the step object directly once you have browsed down to step level. This way, the NPE can be avoided.
Here is the fixed script; see the relevant inline comment.
import com.eviware.soapui.impl.wsdl.teststeps.assertions.basic.XPathContainsAssertion
log.info("Testing Start")
def project = context.testCase.testSuite.project
def suiteName = "ManagePostpayInsurance_1_0"
def stepName = "getInsuranceDetails_FC_004"
project.testSuiteList.each { suite ->
    if (suiteName == suite.name) {
        suite.testCaseList.each { kase ->
            kase.testStepList.each { step ->
                if (stepName == step.name) {
                    //Note the change here, directly getting the object from step object
                    def asserting = step.getAssertionByName("XPath Match")
                    log.info(asserting)
                    if (asserting instanceof XPathContainsAssertion) {
                        step.removeAssertion(asserting)
                    }
                    def assertion = step.addAssertion("XPath Match")
                    assertion.path = "declare namespace cor='http://soa.o2.co.uk/coredata_1';\ndeclare namespace man='http://soa.o2.co.uk/managepostpayinsurancedata_1';\ndeclare namespace soapenv='http://schemas.xmlsoap.org/soap/envelope/';\n//man:getInsuranceDetails_1Response"
                    assertion.expectedContent = "hello"
                }
            }
        }
    }
}
log.info 'Testing Over'

Conditional formatting in Excel using Apache POI

I have a problem using SheetConditionalFormatting. Just to test whether a cell contains a particular string (in my case just "test"), I run the following code:
SheetConditionalFormatting sheetConditionalFormatting = excelSheet.getSheetConditionalFormatting();
ConditionalFormattingRule rule = sheetConditionalFormatting.createConditionalFormattingRule(ComparisonOperator.EQUAL, "test");
PatternFormatting fill1 = rule.createPatternFormatting();
fill1.setFillBackgroundColor(IndexedColors.BLUE.index);
fill1.setFillPattern(PatternFormatting.SOLID_FOREGROUND);
CellRangeAddress[] regions = {
CellRangeAddress.valueOf("A1")
};
sheetConditionalFormatting.addConditionalFormatting(regions, rule);
And I get a message that 'test' does not exist in the workbook. This is the error from the console:
Exception in thread "main" org.apache.poi.ss.formula.FormulaParseException: Specified named range 'test' does not exist in the current workbook.
at org.apache.poi.ss.formula.FormulaParser.parseNonRange(FormulaParser.java:569)
at org.apache.poi.ss.formula.FormulaParser.parseRangeable(FormulaParser.java:429)
at org.apache.poi.ss.formula.FormulaParser.parseRangeExpression(FormulaParser.java:268)
at org.apache.poi.ss.formula.FormulaParser.parseSimpleFactor(FormulaParser.java:1119)
at org.apache.poi.ss.formula.FormulaParser.percentFactor(FormulaParser.java:1079)
at org.apache.poi.ss.formula.FormulaParser.powerFactor(FormulaParser.java:1066)
at org.apache.poi.ss.formula.FormulaParser.Term(FormulaParser.java:1426)
at org.apache.poi.ss.formula.FormulaParser.additiveExpression(FormulaParser.java:1526)
at org.apache.poi.ss.formula.FormulaParser.concatExpression(FormulaParser.java:1510)
at org.apache.poi.ss.formula.FormulaParser.comparisonExpression(FormulaParser.java:1467)
at org.apache.poi.ss.formula.FormulaParser.unionExpression(FormulaParser.java:1447)
at org.apache.poi.ss.formula.FormulaParser.parse(FormulaParser.java:1568)
at org.apache.poi.ss.formula.FormulaParser.parse(FormulaParser.java:176)
at org.apache.poi.hssf.model.HSSFFormulaParser.parse(HSSFFormulaParser.java:70)
at org.apache.poi.hssf.record.CFRuleRecord.parseFormula(CFRuleRecord.java:525)
at org.apache.poi.hssf.record.CFRuleRecord.create(CFRuleRecord.java:146)
at org.apache.poi.hssf.usermodel.HSSFSheetConditionalFormatting.createConditionalFormattingRule(HSSFSheetConditionalFormatting.java:80)
at org.apache.poi.hssf.usermodel.HSSFSheetConditionalFormatting.createConditionalFormattingRule(HSSFSheetConditionalFormatting.java:32)
at MainApp.main(MainApp.java:26)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
It turns out that the string passed to createConditionalFormattingRule is parsed as a formula, so it must be a cell reference:
ConditionalFormattingRule rule = sheetConditionalFormatting.createConditionalFormattingRule(ComparisonOperator.EQUAL, "B1");
To compare against a string literal instead, the value needs to be enclosed in quotes inside the formula, which in Java source looks like
"\"string\""

Java Stanford NLP: ArrayIndexOutOfBounds after loading second lexicon

I am using the Stanford Natural Language processing toolkit. I've been trying to find spelling errors with Lexicon's isKnown method, but it produces quite a few false positives. So I thought I'd load a second lexicon, and check that too. However, that causes a problem.
private static LexicalizedParser lp = new LexicalizedParser(Constants.stdLexFile);
private static LexicalizedParser wsjLexParse = new LexicalizedParser(Constants.wsjLexFile);
static {
    lp.setOptionFlags(Constants.lexOptionFlags);
    wsjLexParse.setOptionFlags(Constants.lexOptionFlags);
}

public ParseTree(String input) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
    initialInput = input;
    DocumentPreprocessor process = new DocumentPreprocessor();
    sentences = process.getSentencesFromText(new StringReader(input));
    for (List<? extends HasWord> sent : sentences) {
        if (lp.parse(sent)) { // line 65
            forest.add(lp.getBestParse()); // non-determinism?
        }
    }
    partsOfSpeech = pos();
    runAnalysis();
}
The following stack trace is produced:
java.lang.ArrayIndexOutOfBoundsException: 45547
at edu.stanford.nlp.parser.lexparser.BaseLexicon.initRulesWithWord(BaseLexicon.java:300)
at edu.stanford.nlp.parser.lexparser.BaseLexicon.isKnown(BaseLexicon.java:160)
at edu.stanford.nlp.parser.lexparser.BaseLexicon.ruleIteratorByWord(BaseLexicon.java:212)
at edu.stanford.nlp.parser.lexparser.ExhaustivePCFGParser.initializeChart(ExhaustivePCFGParser.java:1299)
at edu.stanford.nlp.parser.lexparser.ExhaustivePCFGParser.parse(ExhaustivePCFGParser.java:388)
at edu.stanford.nlp.parser.lexparser.LexicalizedParser.parse(LexicalizedParser.java:234)
at nth.compling.ParseTree.<init>(ParseTree.java:65)
at nth.compling.ParseTreeTest.constructor(ParseTreeTest.java:33)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.junit.internal.runners.BeforeAndAfterRunner.invokeMethod(BeforeAndAfterRunner.java:74)
at org.junit.internal.runners.BeforeAndAfterRunner.runBefores(BeforeAndAfterRunner.java:50)
at org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:33)
at org.junit.internal.runners.TestClassRunner.run(TestClassRunner.java:52)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:45)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
If I comment out this line (and the other references to wsjLexParse):
private static LexicalizedParser wsjLexParse = new LexicalizedParser(Constants.wsjLexFile);
then everything works fine. What am I doing wrong here?
Looks like a bug in the Stanford library. You should report it to them.
Does the second lexicon work when you load only it (and not the other one)?
Does the same error occur when you load the two lexica in different order?
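To help with those checks, here is a diagnostic sketch reusing the declarations from the question (Constants.wsjLexFile and Constants.lexOptionFlags come from the original code):
// Load ONLY the WSJ lexicon and run the same input through it.
// If this alone also throws ArrayIndexOutOfBoundsException, that lexicon file is
// suspect; if it works in isolation, the failure is specific to loading two
// lexicons in one JVM (possibly shared static state inside the parser).
private static LexicalizedParser wsjLexParse = new LexicalizedParser(Constants.wsjLexFile);
static {
    wsjLexParse.setOptionFlags(Constants.lexOptionFlags);
}
// ...then call wsjLexParse.parse(sent) and wsjLexParse.getBestParse() as before.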
