Eclipse - Drools - Exception in thread "main" java.lang.ExceptionInInitializerError - java

I am trying to run a Drools app. My Eclipse installation got corrupted, so I reinstalled it and reloaded Drools, jBPM, and Maven, but I cannot figure out why I get this error in every Drools app I run, even working demos from GitHub:
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.drools.compiler.rule.builder.dialect.asm.InvokerGenerator.createStubGenerator(InvokerGenerator.java:34)
Sample code (confirmed working before I had to reinstall):
package com.jenn.DroolsDemo;

import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;

import org.drools.compiler.compiler.DroolsParserException;
import org.drools.compiler.compiler.PackageBuilder;
import org.drools.compiler.rule.builder.dialect.*;
import org.drools.core.RuleBase;
import org.drools.core.RuleBaseFactory;
import org.drools.core.WorkingMemory;

/**
 * @author Binod Suman
 * Binod Suman Academy YouTube
 */
public class DemoTest {

    public static void main(String[] args) throws DroolsParserException, IOException {
        DemoTest client = new DemoTest();
        client.execteRule();
    }

    public void execteRule() throws DroolsParserException, IOException {
        PackageBuilder builder = new PackageBuilder();
        String ruleFile = "/offers.drl";
        InputStream resourceAsStream = getClass().getResourceAsStream(ruleFile);
        Reader ruleReader = new InputStreamReader(resourceAsStream);
        builder.addPackageFromDrl(ruleReader);

        org.drools.core.rule.Package rulePackage = builder.getPackage();
        RuleBase ruleBase = RuleBaseFactory.newRuleBase();
        ruleBase.addPackage(rulePackage);

        WorkingMemory workingMemory = ruleBase.newStatefulSession();
        PaymentOffer paymentOffer = new PaymentOffer();
        paymentOffer.setChannel("paytm");
        workingMemory.insert(paymentOffer);
        workingMemory.fireAllRules();
        System.out.println("The cashback for this payment channel " + paymentOffer.getChannel() + " is " + paymentOffer.getDiscount());
    }
}

After some research, and based on Roddy's post above, it is confirmed that my development partner is using a very old drools-core. I am working to get that upgraded in the code.
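For reference, a minimal sketch of the same flow on the newer KIE API (Drools 6/7), assuming the .drl files are picked up from the classpath via a META-INF/kmodule.xml; the class name and default-session setup here are illustrative, not taken from the project:

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class DemoTestKie {

    public static void main(String[] args) {
        // Build a container from the KIE modules found on the classpath
        // (requires a kmodule.xml next to the .drl resources).
        KieServices kieServices = KieServices.Factory.get();
        KieContainer kieContainer = kieServices.getKieClasspathContainer();
        KieSession kieSession = kieContainer.newKieSession();

        PaymentOffer paymentOffer = new PaymentOffer();
        paymentOffer.setChannel("paytm");

        kieSession.insert(paymentOffer);
        kieSession.fireAllRules();
        kieSession.dispose();

        System.out.println("The cashback for this payment channel " + paymentOffer.getChannel()
                + " is " + paymentOffer.getDiscount());
    }
}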

Related

How to load the data from marklogic using DatabaseClient? java.net.SocketException: Connection reset

I am trying to write data to MarkLogic using DatabaseClient, but I am getting java.net.SocketException: Connection reset and cannot see where I am going wrong.
Below is the code I am running in IntelliJ IDEA.
import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.document.XMLDocumentManager;
import com.marklogic.client.io.InputStreamHandle;

import java.io.FileInputStream;
import java.io.FileNotFoundException;

public class data_movement {

    public static void main(String[] args) throws FileNotFoundException {
        DatabaseClient client = DatabaseClientFactory.newClient("localhost", 8000,
                new DatabaseClientFactory.DigestAuthContext("admin", "admin"));

        XMLDocumentManager XMLDocMgr = client.newXMLDocumentManager();
        String docId = "/example/xml1.xml";
        FileInputStream docStream = new FileInputStream("D:/Embargo/CodeChanges/DM/xmlDoc1.xml");
        InputStreamHandle handle = new InputStreamHandle(docStream);
        XMLDocMgr.write(docId, handle);
        client.release();
    }
}

How to get Log Aggregation in code by yarnClient

I'm following this example on creating a YARN app with the Java API:
https://github.com/hortonworks/simple-yarn-app
It works fine, but the log only exists during execution; after that it is gone.
How can I capture it in code, or is there maybe an option I can enable?
You can find the logs with LogCLIHelpers, by application id, after the application has finished:
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.exceptions.YarnException;
import org.apache.hadoop.yarn.logaggregation.LogCLIHelpers;

import java.io.IOException;
import java.io.PrintStream;

public static void getLogs(YarnConfiguration conf, YarnClientApplication app) throws IOException, YarnException {
    ApplicationSubmissionContext appContext = app.getApplicationSubmissionContext();
    ApplicationId appId = appContext.getApplicationId();

    LogCLIHelpers logCLIHelpers = new LogCLIHelpers();
    logCLIHelpers.setConf(conf);

    FileSystem fs = FileSystem.get(conf);
    Path logFile = new Path("/path/to/log/file.log");
    fs.create(logFile, false);

    try (PrintStream printStream = new PrintStream(logFile.toString())) {
        logCLIHelpers.dumpAllContainersLogs(appId, UserGroupInformation.getCurrentUser().getShortUserName(), printStream);
    }
}
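Note that dumpAllContainersLogs only finds anything if log aggregation is switched on for the cluster, which is normally configured in yarn-site.xml (yarn.log-aggregation-enable=true). As a rough sketch, the same flag can also be set on the configuration object, but it only helps if that configuration is what the NodeManagers actually use:

// Sketch only: log aggregation is a cluster-side (NodeManager) setting.
YarnConfiguration conf = new YarnConfiguration();
conf.setBoolean(YarnConfiguration.LOG_AGGREGATION_ENABLED, true); // yarn.log-aggregation-enable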

Getting java.nio.file.AccessDeniedException when trying to create a zip file using Java NIO.2 on macOS

import java.io.IOException;
import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.util.HashMap;
import java.util.Map;

public class ZipFileSystem {

    public static void main(String[] args) throws IOException {
        URI uri = URI.create("jar:file:///sample.zip");
        Map<String, String> options = new HashMap<>();
        options.put("create", "true");
        FileSystem fileSystem = FileSystems.newFileSystem(uri, options);
    }
}
I have this simple Java code. When I try to run it on macOS I get the exception below. Am I missing anything?
Exception in thread "main" java.nio.file.AccessDeniedException: /sample.zip
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
at java.nio.file.Files.newOutputStream(Files.java:216)
at com.sun.nio.zipfs.ZipFileSystem.<init>(ZipFileSystem.java:116)
at com.sun.nio.zipfs.ZipFileSystemProvider.newFileSystem(ZipFileSystemProvider.java:117)
at java.nio.file.FileSystems.newFileSystem(FileSystems.java:326)
at java.nio.file.FileSystems.newFileSystem(FileSystems.java:276)
at ZipFileSystem.main(ZipFileSystem.java:14)
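No accepted fix is recorded here, but the path in the trace suggests the zip provider is trying to create /sample.zip at the root of the filesystem, which an ordinary user cannot write to on macOS. A minimal sketch that points the archive at a user-writable location instead (the home-directory path below is just an example):

import java.io.IOException;
import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

public class ZipFileSystemWritable {

    public static void main(String[] args) throws IOException {
        // Put the archive somewhere the current user can write to.
        Path zipPath = Paths.get(System.getProperty("user.home"), "sample.zip");
        URI uri = URI.create("jar:" + zipPath.toUri());

        Map<String, String> options = new HashMap<>();
        options.put("create", "true"); // create the archive if it does not exist

        try (FileSystem fileSystem = FileSystems.newFileSystem(uri, options)) {
            // entries can now be added via fileSystem.getPath(...) and Files.copy(...)
        }
    }
}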

Handler error while trying to execute Jetty Server

I get the error: incompatible types: org.eclipse.jetty.servlet.ServletHandler cannot be converted to org.mortbay.jetty.Handler
while trying to run the code below. I'm new to Java and not sure why this is happening. Any ideas? (I'm using JDK 11, the latest Jetty 9.3, and IntelliJ IDEA.)
package newJetty;

import newJetty.handler.PingHandler;
import org.eclipse.jetty.servlet.ServletHandler;
import org.mortbay.jetty.Handler;
import org.mortbay.jetty.Server;

/**
 * Hello world!
 */
public class JettyServer {

    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);

        ServletHandler handler = new ServletHandler();
        handler.addServletWithMapping(PingHandler.class, "/ping");
        server.setHandler(handler);

        server.start();
        server.join();
    }
}
You are importing the wrong classes.
Remove the imports:
import org.mortbay.jetty.Handler;
import org.mortbay.jetty.Server;
and change to the following imports:
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.Handler;
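Putting that together, a minimal sketch of the corrected class using only the Eclipse Jetty classes it actually needs (PingHandler is the asker's own servlet and is assumed to extend HttpServlet):

package newJetty;

import newJetty.handler.PingHandler;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletHandler;

public class JettyServer {

    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);

        // Map the servlet before starting the server.
        ServletHandler handler = new ServletHandler();
        handler.addServletWithMapping(PingHandler.class, "/ping");
        server.setHandler(handler);

        server.start();
        server.join();
    }
}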

Spark Throwing a "NoClassDefFoundError" Despite jar Indicating Class is Present

I am receiving a NoClassDefFoundError even though 7-Zip shows that the jar containing the class is present in the uber-jar being submitted to run the program. I am submitting with the line below:
spark-submit --class org.dia.red.ctakes.spark.CtakesSparkMain target/spark-ctakes-0.1-job.jar
The error being thrown is:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/uima/cas/FSIndex
at org.dia.red.ctakes.spark.CtakesSparkMain.main(CtakesSparkMain.java:50)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.uima.cas.FSIndex
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 10 more
The CtakesSparkMain class below calls the CtakesFunction class:
package org.dia.red.ctakes.spark;

import java.util.List;
import java.io.PrintWriter;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.uima.jcas.cas.FSArray;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.storage.StorageLevel;
import org.json.JSONObject;

public class CtakesSparkMain {

    /**
     * @param args
     */
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("ctakes");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaRDD<String> lines = sc.textFile("/mnt/d/metistream/ctakes-streaming/SparkStreamingCTK/testdata100.txt").map(new CtakesFunction());
        String first = lines.take(2).get(0);

        PrintWriter out = new PrintWriter("/mnt/d/metistream/ctakes-streaming/SparkStreamingCTK/test_outputs/output.txt");
        out.println(first);
        out.close();
        sc.close();
    }
}
CtakesFunction:
package org.dia.red.ctakes.spark;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;

import org.apache.ctakes.typesystem.type.refsem.OntologyConcept;
import org.apache.ctakes.typesystem.type.textsem.*;
import org.apache.uima.UIMAException;
import org.apache.uima.cas.FSIndex;
import org.apache.uima.cas.Type;
import org.apache.uima.jcas.JCas;
import org.apache.uima.analysis_engine.AnalysisEngineDescription;
import org.apache.uima.cas.impl.XmiCasSerializer;
import org.apache.uima.fit.factory.JCasFactory;
import org.apache.uima.fit.pipeline.SimplePipeline;
import org.apache.uima.jcas.cas.FSArray;
import org.apache.uima.util.XMLSerializer;
import org.apache.spark.api.java.function.Function;

import it.cnr.iac.CTAKESClinicalPipelineFactory;
import org.json.*;

/**
 * @author Selina Chu, Michael Starch, and Giuseppe Totaro
 */
public class CtakesFunction implements Function<String, String> {

    transient JCas jcas = null;
    transient AnalysisEngineDescription aed = null;

    private void setup() throws UIMAException {
        System.setProperty("ctakes.umlsuser", "");
        System.setProperty("ctakes.umlspw", "");
        this.jcas = JCasFactory.createJCas();
        this.aed = CTAKESClinicalPipelineFactory.getDefaultPipeline();
    }

    private void readObject(ObjectInputStream in) {
        try {
            in.defaultReadObject();
            this.setup();
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (UIMAException e) {
            e.printStackTrace();
        }
    }

    @Override
    public String call(String paragraph) throws Exception {
        this.jcas.setDocumentText(paragraph);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        SimplePipeline.runPipeline(this.jcas, this.aed);
        FSIndex index = this.jcas.getAnnotationIndex(IdentifiedAnnotation.type);
        Iterator iter = index.iterator();

        JSONArray annotationsArray = new JSONArray();
        JSONObject allAnnotations = new JSONObject();

        ArrayList<String> types = new ArrayList<String>();
        types.add("org.apache.ctakes.typesystem.type.textsem.SignSymptomMention");
        types.add("org.apache.ctakes.typesystem.type.textsem.DiseaseDisorderMention");
        types.add("org.apache.ctakes.typesystem.type.textsem.AnatomicalSiteMention");
        types.add("org.apache.ctakes.typesystem.type.textsem.ProcedureMention");
        types.add("org.apache.ctakes.typesystem.type.textsem.MedicationMention");

        String type;
        String[] splitType;
        FSArray snomedArray;
        ArrayList<String> snomedStringArray = new ArrayList<String>();

        while (iter.hasNext()) {
            IdentifiedAnnotation annotation = (IdentifiedAnnotation) iter.next();
            type = annotation.getType().toString();
            if (types.contains(type)) {
                JSONObject annotations = new JSONObject();
                splitType = type.split("[.]");
                annotations.put("id", annotation.getId());
                annotations.put("subject", annotation.getSubject());
                annotations.put("type", splitType[splitType.length - 1]);
                annotations.put("text", annotation.getCoveredText());
                annotations.put("polarity", annotation.getPolarity());
                annotations.put("confidence", annotation.getConfidence());

                snomedArray = annotation.getOntologyConceptArr();
                for (int i = 0; i < snomedArray.size(); i++) {
                    snomedStringArray.add(((OntologyConcept) snomedArray.get(i)).getCode());
                }
                annotations.put("snomed_codes", snomedStringArray);
                snomedStringArray.clear();
                annotationsArray.put(annotations);
            }
        }
        allAnnotations.put("Annotations", annotationsArray);
        this.jcas.reset();
        return allAnnotations.toString();
    }
}
I was attempting to modify the repository at https://github.com/selinachu/SparkStreamingCTK to leverage regular Spark (Spark 2.0) instead of Spark Streaming, but haven't been able to resolve this.
This is because the jar Maven generates for this project is not a true uber-jar. spark-submit cannot load a class from a jar nested inside another jar; that would require a special class loader. The right approach is to explode all the jars so that every contained class ends up directly in the uber-jar, similar to what the maven-shade-plugin does: https://maven.apache.org/plugins/maven-shade-plugin/
So you have to change the pom.xml file to generate a correct uber-jar for this project.
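As a rough, illustrative sketch (the plugin version and transformer choice are assumptions, not taken from the project's pom.xml), the shade configuration would look something like this:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.4</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- merge META-INF/services descriptors so UIMA/cTAKES SPI files survive shading -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>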
Inspired by YuGagarin's feedback, I used sbt-assembly to build an uber-jar of cTAKES itself. Compiling everything into one "true" fat jar resolved the above issue.
However, I should point out there are still some residual issues with cTAKES and Spark that I am currently working through.
