I am receiving a NoClassDefFoundError even though 7-Zip shows that the jar containing the class is present in the uber-jar being submitted to run the program. I am submitting with the line below:
spark-submit --class org.dia.red.ctakes.spark.CtakesSparkMain target/spark-ctakes-0.1-job.jar
The error being thrown is:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/uima/cas/FSIndex
at org.dia.red.ctakes.spark.CtakesSparkMain.main(CtakesSparkMain.java:50)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.uima.cas.FSIndex
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 10 more
The CtakesSparkMain class below calls the CtakesFunction class:
package org.dia.red.ctakes.spark;
import java.util.List;
import java.io.PrintWriter;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.uima.jcas.cas.FSArray;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.storage.StorageLevel;
import org.json.JSONObject;
public class CtakesSparkMain {
/**
* @param args
*/
public static void main(String[] args) throws Exception {
SparkConf conf = new SparkConf().setAppName("ctakes");
JavaSparkContext sc = new JavaSparkContext(conf);
JavaRDD<String> lines = sc.textFile("/mnt/d/metistream/ctakes-streaming/SparkStreamingCTK/testdata100.txt").map(new CtakesFunction());
String first = lines.take(2).get(0);
PrintWriter out = new PrintWriter("/mnt/d/metistream/ctakes-streaming/SparkStreamingCTK/test_outputs/output.txt");
out.println(first);
out.close();
sc.close();
}
}
CtakesFunction:
package org.dia.red.ctakes.spark;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import org.apache.ctakes.typesystem.type.refsem.OntologyConcept;
import org.apache.ctakes.typesystem.type.textsem.*;
import org.apache.uima.UIMAException;
import org.apache.uima.cas.FSIndex;
import org.apache.uima.cas.Type;
import org.apache.uima.jcas.JCas;
import org.apache.uima.analysis_engine.AnalysisEngineDescription;
import org.apache.uima.cas.impl.XmiCasSerializer;
import org.apache.uima.fit.factory.JCasFactory;
import org.apache.uima.fit.pipeline.SimplePipeline;
import org.apache.uima.jcas.cas.FSArray;
import org.apache.uima.util.XMLSerializer;
import org.apache.spark.api.java.function.Function;
import it.cnr.iac.CTAKESClinicalPipelineFactory;
import org.json.*;
/**
* @author Selina Chu, Michael Starch, and Giuseppe Totaro
*
*/
public class CtakesFunction implements Function<String, String> {
transient JCas jcas = null;
transient AnalysisEngineDescription aed = null;
private void setup() throws UIMAException {
System.setProperty("ctakes.umlsuser", "");
System.setProperty("ctakes.umlspw", "");
this.jcas = JCasFactory.createJCas();
this.aed = CTAKESClinicalPipelineFactory.getDefaultPipeline();
}
private void readObject(ObjectInputStream in) {
try {
in.defaultReadObject();
this.setup();
} catch (ClassNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} catch (UIMAException e) {
e.printStackTrace();
}
}
@Override
public String call(String paragraph) throws Exception {
this.jcas.setDocumentText(paragraph);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
SimplePipeline.runPipeline(this.jcas, this.aed);
FSIndex index = this.jcas.getAnnotationIndex(IdentifiedAnnotation.type);
Iterator iter = index.iterator();
JSONArray annotationsArray = new JSONArray();
JSONObject allAnnotations = new JSONObject();
ArrayList<String> types = new ArrayList<String>();
types.add("org.apache.ctakes.typesystem.type.textsem.SignSymptomMention");
types.add("org.apache.ctakes.typesystem.type.textsem.DiseaseDisorderMention");
types.add("org.apache.ctakes.typesystem.type.textsem.AnatomicalSiteMention");
types.add("org.apache.ctakes.typesystem.type.textsem.ProcedureMention");
types.add("import org.apache.ctakes.typesystem.type.textsem.MedicationMention");
String type;
String[] splitType;
FSArray snomedArray;
ArrayList<String> snomedStringArray = new ArrayList<String>();
while (iter.hasNext()){
IdentifiedAnnotation annotation = (IdentifiedAnnotation)iter.next();
type = annotation.getType().toString();
if (types.contains(type)){
JSONObject annotations = new JSONObject();
splitType = type.split("[.]");
annotations.put("id", annotation.getId());
annotations.put("subject", annotation.getSubject());
annotations.put("type", splitType[splitType.length - 1]);
annotations.put("text", annotation.getCoveredText());
annotations.put("polarity", annotation.getPolarity());
annotations.put("confidence", annotation.getConfidence());
snomedArray = annotation.getOntologyConceptArr();
for (int i = 0; i < snomedArray.size(); i++){
snomedStringArray.add(((OntologyConcept)snomedArray.get(i)).getCode());
}
annotations.put("snomed_codes", snomedStringArray);
snomedStringArray.clear();
annotationsArray.put(annotations);
}
}
allAnnotations.put("Annotations", annotationsArray);
this.jcas.reset();
return allAnnotations.toString();
}
}
I was attempting to modify the repository https://github.com/selinachu/SparkStreamingCTK to use regular Spark (and Spark 2.0) instead of Spark Streaming, but haven't been able to resolve this.
This happens because the jar being submitted is not a true uber-jar generated by Maven for this project. spark-submit cannot load a class from a jar nested inside another jar; that would require a special class loader. The right approach is to explode all dependency jars so that all contained classes end up directly in the uber-jar, which is what the Maven Shade plugin does: https://maven.apache.org/plugins/maven-shade-plugin/
So you have to change the pom.xml file to generate a correct uber-jar for this project.
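For example, a minimal maven-shade-plugin section for the pom.xml could look like the sketch below (the plugin version is an assumption; the main class is taken from the spark-submit command above):

<!-- sketch only: goes under build/plugins in pom.xml -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.2.4</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <!-- sets Main-Class in the manifest of the shaded jar -->
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>org.dia.red.ctakes.spark.CtakesSparkMain</mainClass>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>

Running mvn package then produces a single shaded jar with the dependency classes unpacked inside it, rather than nested jar-in-jar.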
Inspired by YuGagarin's feedback, I used SBT assembly to assemble an UberJar of cTAKES itself. Compiling everything into one "true" fat jar resolved the above issue.
However, I should point out there are still some residual issues with cTAKES and Spark that I am currently working through.
Related
I have no idea why I get this error. The code is from JXLS's own example and the jars are downloaded from their page.
Error code:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/poi/ss/usermodel/Workbook
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.jxls.util.TransformerFactory.loadPoiTransformer(TransformerFactory.java:85)
at org.jxls.util.TransformerFactory.getTransformerClass(TransformerFactory.java:69)
at org.jxls.util.TransformerFactory.createTransformer(TransformerFactory.java:27)
at org.jxls.util.JxlsHelper.createTransformer(JxlsHelper.java:250)
at org.jxls.util.JxlsHelper.processTemplate(JxlsHelper.java:108)
at test_excel.ObjectCollectionDemo.main(ObjectCollectionDemo.java:32)
Caused by: java.lang.ClassNotFoundException: org.apache.poi.ss.usermodel.Workbook
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 8 more
Referenced Libraries:
/slf4j-1.7.25/slf4j-api-1.7.25.jar
/slf4j-1.7.25/slf4j-jdk14-1.7.25.jar
/jxls-2.4.1 2/dist/jxls-reader-2.0.3.jar
/jxls-2.4.1 2/dist/jxls-poi-1.0.13.jar
/xls-2.4.1 2/dist/jxls-jexcel-1.0.6.jar
/jxls-2.4.1 2/dist/jxls-2.4.1.jar
Main code:
package test_excel;
import org.jxls.common.Context;
import org.jxls.util.JxlsHelper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
public class ObjectCollectionDemo {
private static Logger logger = LoggerFactory.getLogger(ObjectCollectionDemo.class);
public static void main(String[] args) throws ParseException, IOException {
logger.info("Running Object Collection demo");
List<Employee> employees = generateSampleEmployeeData();
try(InputStream is = ObjectCollectionDemo.class.getResourceAsStream("object_collection_template.xls")) {
try (OutputStream os = new FileOutputStream("object_collection_output.xls")) {
Context context = new Context();
context.putVar("employees", employees);
logger.info("OK so far...");
JxlsHelper.getInstance().processTemplate(is, os, context);
}
}
}
public static List<Employee> generateSampleEmployeeData() throws ParseException {
List<Employee> employees = new ArrayList<Employee>();
SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MMM-dd", Locale.US);
employees.add( new Employee("Elsa", dateFormat.parse("1970-Jul-10"), 1500, 0.15) );
employees.add( new Employee("Oleg", dateFormat.parse("1973-Apr-30"), 2300, 0.25) );
employees.add( new Employee("Neil", dateFormat.parse("1975-Oct-05"), 2500, 0.00) );
employees.add( new Employee("Maria", dateFormat.parse("1978-Jan-07"), 1700, 0.15) );
employees.add( new Employee("John", dateFormat.parse("1969-May-30"), 2800, 0.20) );
return employees;
}
}
Looks like you need poi-3.XX.jar on your classpath.
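If the project is built with Maven rather than by adding jars to Referenced Libraries by hand, a dependency like the sketch below would pull POI onto the classpath (the 3.17 version is an assumption; match it to whatever your jxls-poi release expects):

<dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi</artifactId>
    <version>3.17</version>
</dependency>
<!-- also needed if the templates are .xlsx rather than .xls -->
<dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi-ooxml</artifactId>
    <version>3.17</version>
</dependency>

Otherwise, download the Apache POI distribution and add its poi jar (plus its supporting jars) to the Referenced Libraries alongside the jxls jars.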
I am trying to run an algorithm in Apache Spark. I am getting the error
"A master URL must be set in your configuration" even though I set the configuration:
SparkSession spark = SparkSession.builder().appName("Sp_LogistcRegression").config("spark.master", "local").getOrCreate();
This is the code I am working with:
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.ml.classification.LogisticRegression;
import org.apache.spark.ml.classification.LogisticRegressionModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.mllib.util.MLUtils;
public class Sp_LogistcRegression {
public void trainLogisticregression(String path, String model_path) throws IOException {
//SparkConf conf = new SparkConf().setAppName("Linear Regression Example");
// JavaSparkContext sc = new JavaSparkContext(conf);
SparkSession spark = SparkSession.builder().appName("Sp_LogistcRegression").config("spark.master", "local").getOrCreate();
Dataset<Row> training = spark.read().option("header","true").csv(path);
System.out.print(training.count());
LogisticRegression lr = new LogisticRegression().setMaxIter(10).setRegParam(0.3);
// Fit the model
LogisticRegressionModel lrModel = lr.fit(training);
lrModel.save(model_path);
spark.close();
}
}
This is my test case:
import java.io.File;
import java.io.IOException;
import org.junit.Test;
public class Sp_LogistcRegressionTest {
Sp_LogistcRegression spl =new Sp_LogistcRegression ();
@Test
public void test() throws IOException {
String filename = "datas/seg-large.csv";
ClassLoader classLoader = getClass().getClassLoader();
File file1 = new File(classLoader.getResource(filename).getFile());
spl.trainLogisticregression(file1.getAbsolutePath(), "/tmp");
}
}
Why am I getting this error? I checked the solutions here:
Spark - Error "A master URL must be set in your configuration" when submitting an app
It doesn't work.
Any clues?
Your
SparkSession spark = SparkSession.builder().appName("Sp_LogistcRegression").config("spark.master", "local").getOrCreate();
should be
SparkSession spark = SparkSession.builder().appName("Sp_LogistcRegression").master("local").getOrCreate();
Or, when you run it with spark-submit, you need to pass the master on the command line:
spark-submit --class mainClass --master local yourJarFile
I have a class that I need to test:
package com.mycompany.openapi.cms.core.support.liquibase;
import liquibase.change.custom.CustomTaskChange;
import liquibase.database.Database;
import liquibase.database.jvm.JdbcConnection;
import liquibase.exception.CustomChangeException;
import liquibase.exception.SetupException;
import liquibase.exception.ValidationErrors;
import liquibase.resource.ResourceAccessor;
import org.slf4j.LoggerFactory;
import org.springframework.util.StringUtils;
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.sql.Statement;
import java.util.Set;
public class ApplySqlFileIfExistsChange implements CustomTaskChange {
private final org.slf4j.Logger logger = LoggerFactory.getLogger(getClass());
private String file;
private ResourceAccessor resourceAccessor;
@Override
public void execute(Database database) throws CustomChangeException {
JdbcConnection databaseConnection = (JdbcConnection) database.getConnection();
try {
Set<InputStream> files = resourceAccessor.getResourcesAsStream(file);
if(files != null){
for (InputStream inputStream : files) {
BufferedReader in = new BufferedReader(
new InputStreamReader(inputStream));
String str;
String sql;
StringBuilder sqlBuilder = new StringBuilder("");
while ((str = in.readLine()) != null) {
sqlBuilder.append(str).append(" ");
}
in.close();
sql = sqlBuilder.toString().trim();
if(StringUtils.isEmpty(sql)){
return;
}
Statement statement = databaseConnection.createStatement();
statement.execute(sql);
statement.close();
}
}
} catch (FileNotFoundException e) {
logger.error(e.getMessage(), e);
} catch (Exception e) {
throw new CustomChangeException(e);
}
}
public String getFile() {
return file;
}
public void setFile(String file) {
this.file = file;
}
@Override
public void setFileOpener(ResourceAccessor resourceAccessor) {
this.resourceAccessor = resourceAccessor;
}
}
I have written the following test:
package com.mycompany.openapi.cms.core.support.liquibase;
import com.google.common.collect.Sets;
import liquibase.database.Database;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ResourceAccessor;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.sql.Statement;
import static org.mockito.Matchers.anyString;
import static org.mockito.Mockito.verify;
import static org.powermock.api.mockito.PowerMockito.*;
@RunWith(PowerMockRunner.class)
@PrepareForTest({LoggerFactory.class})
public class TestApplySqlFileIfExistsChange {
@InjectMocks
ApplySqlFileIfExistsChange applySqlFileIfExistsChange;
@Mock
private ResourceAccessor resourceAccessor;
@Mock
private JdbcConnection jdbcConnection;
@Mock
private Database database;
@Mock
Statement statement;
@BeforeClass
public static void setUpClass() {
mockStatic(LoggerFactory.class);
when(LoggerFactory.getLogger(ApplySqlFileIfExistsChange.class)).thenReturn(mock(Logger.class));
}
@Before
public void setUp() throws Exception {
when(database.getConnection()).thenReturn(jdbcConnection);
InputStream inp1, inp2;
inp1 = new ByteArrayInputStream("FirstTestQuery".getBytes(StandardCharsets.UTF_8));
inp2 = new ByteArrayInputStream("SecondTestQuery".getBytes(StandardCharsets.UTF_8));
when(resourceAccessor.getResourcesAsStream(anyString())).thenReturn(Sets.newHashSet(inp1, inp2));
when(jdbcConnection.createStatement()).thenReturn(statement);
}
@Test
public void execute() throws Exception {
applySqlFileIfExistsChange.execute(database);
verify(statement).execute("FirstTestQuery");
verify(statement).execute("SecondTestQuery");
}
}
The problem is that the test above passes in my IDE, but when I push to the repository the TeamCity build fails on my test. I can't understand why, because the code is the same in both places. Here is the stack trace from TeamCity:
java.lang.IllegalStateException: Failed to transform class with name org.slf4j.LoggerFactory. Reason: java.io.IOException: invalid constant type: 15
at org.powermock.core.classloader.MockClassLoader.loadMockClass(MockClassLoader.java:267)
at org.powermock.core.classloader.MockClassLoader.loadModifiedClass(MockClassLoader.java:180)
at org.powermock.core.classloader.DeferSupportingClassLoader.loadClass(DeferSupportingClassLoader.java:70)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at sun.reflect.generics.factory.CoreReflectionFactory.makeNamedType(CoreReflectionFactory.java:114)
at sun.reflect.generics.visitor.Reifier.visitClassTypeSignature(Reifier.java:125)
at sun.reflect.generics.tree.ClassTypeSignature.accept(ClassTypeSignature.java:49)
at sun.reflect.annotation.AnnotationParser.parseSig(AnnotationParser.java:439)
at sun.reflect.annotation.AnnotationParser.parseClassValue(AnnotationParser.java:420)
at sun.reflect.annotation.AnnotationParser.parseClassArray(AnnotationParser.java:724)
at sun.reflect.annotation.AnnotationParser.parseArray(AnnotationParser.java:531)
at sun.reflect.annotation.AnnotationParser.parseMemberValue(AnnotationParser.java:355)
at sun.reflect.annotation.AnnotationParser.parseAnnotation2(AnnotationParser.java:286)
at sun.reflect.annotation.AnnotationParser.parseAnnotations2(AnnotationParser.java:120)
at sun.reflect.annotation.AnnotationParser.parseAnnotations(AnnotationParser.java:72)
at java.lang.Class.createAnnotationData(Class.java:3521)
at java.lang.Class.annotationData(Class.java:3510)
at java.lang.Class.getAnnotation(Class.java:3415)
at org.junit.internal.MethodSorter.getDeclaredMethods(MethodSorter.java:52)
at org.junit.internal.runners.TestClass.getAnnotatedMethods(TestClass.java:45)
at org.junit.internal.runners.MethodValidator.validateTestMethods(MethodValidator.java:71)
at org.junit.internal.runners.MethodValidator.validateStaticMethods(MethodValidator.j
I apologize for the amount of code in my question.
You probably have conflicting PowerMock/Mockito versions.
Check your local Maven repository for the versions, and get rid of the older ones
in your pom and/or update the other dependency that includes them.
Check in the root of the project with:
mvn dependency:tree -Dverbose -Dincludes=NAME_OF_DEPENDENCY
You have a conflict between the dependent libraries in the TeamCity repository and your local repository.
Confirm the TeamCity build is using the exact same version of PowerMock that your local build is using.
There may be an older version being transitively included.
mvn dependency:tree -Dverbose
This will list all the resolved dependencies. See if there are different versions of PowerMock; you can then stop them from being pulled in by using the exclusions tag in the pom.xml.
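A sketch of what such an exclusion could look like (the outer dependency is a placeholder, not the actual offender from your build; only the PowerMock coordinates are real):

<dependency>
    <groupId>com.example</groupId>
    <artifactId>library-that-drags-in-powermock</artifactId>
    <version>1.0</version>
    <exclusions>
        <!-- stop this library from pulling in its own (older) PowerMock -->
        <exclusion>
            <groupId>org.powermock</groupId>
            <artifactId>powermock-module-junit4</artifactId>
        </exclusion>
    </exclusions>
</dependency>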
I have integrated Stanford NER into UIMA and developed a pipeline.
The pipeline contains a FileSystemCollectionReader, an NERAnnotator and a CasConsumer, but the output is not what I expect. My input directory contains two files, and after running the pipeline I get two output files, but in the second output the second file's content is merged with the first file's content. I don't know what's happening here.
The code for CasConsumer:
package org.gds.uima;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;
import java.util.UUID;
import org.apache.uima.UimaContext;
import org.apache.uima.analysis_component.AnalysisComponent_ImplBase;
import org.apache.uima.analysis_component.CasAnnotator_ImplBase;
import org.apache.uima.analysis_component.JCasAnnotator_ImplBase;
import org.apache.uima.analysis_engine.AnalysisEngineProcessException;
import org.apache.uima.cas.CAS;
import org.apache.uima.cas.CASException;
import org.apache.uima.cas.Type;
import org.apache.uima.cas.text.AnnotationFS;
import org.apache.uima.fit.component.CasConsumer_ImplBase;
import org.apache.uima.fit.component.JCasConsumer_ImplBase;
import org.apache.uima.fit.descriptor.ConfigurationParameter;
import org.apache.uima.fit.util.CasUtil;
import org.apache.uima.jcas.JCas;
import org.apache.uima.resource.ResourceInitializationException;
public class CasConsumer extends JCasConsumer_ImplBase
{
public final static String PARAM_OUTPUT="outputDir";
@ConfigurationParameter(name = PARAM_OUTPUT)
private String outputDirectory;
public final static String PARAM_ANNOTATION_TYPES = "annotationTypes";
@ConfigurationParameter(name = PARAM_ANNOTATION_TYPES, defaultValue = "String")
public List<String> annotationTypes;
public void initialize(final UimaContext context) throws ResourceInitializationException
{
super.initialize(context);
}
@Override
public void process(JCas jcas)
{
String original = jcas.getDocumentText();
try
{
String onlyText="";
JCas sofaText = jcas.getView(NERAnnotator.SOFA_NAME);
onlyText = sofaText.getDocumentText();
String name = UUID.randomUUID().toString().substring(20);
File outputDir = new File(this.outputDirectory+"/"+name);
System.out.print("Saving file to "+outputDir.getAbsolutePath());
FileOutputStream fos = new FileOutputStream(outputDir.getAbsoluteFile());
PrintWriter pw = new PrintWriter(fos);
pw.println(onlyText);
pw.close();
}
catch(CASException cae)
{
System.out.println(cae);
}
catch(FileNotFoundException fne)
{
System.out.print(fne);
}
}
}
I implemented the example from this plugin's docs, but I get an exception stating that a parameter named Initiator is missing. I don't see this parameter anywhere.
My code is:
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.util.logging.Level;
import org.eclipse.birt.core.exception.BirtException;
import org.eclipse.birt.core.framework.Platform;
import org.eclipse.birt.report.engine.api.EngineConfig;
import org.eclipse.birt.report.engine.api.IReportEngine;
import org.eclipse.birt.report.engine.api.IReportEngineFactory;
import org.eclipse.birt.report.engine.api.IReportRunnable;
import org.eclipse.birt.report.engine.api.IRunAndRenderTask;
import org.eclipse.birt.report.engine.emitter.csv.CSVRenderOption;
public class RunExport {
static void runReport() throws FileNotFoundException, BirtException {
String resourcePath = "C:\\Users\\hpsa\\workspace\\My Reports\\";
FileInputStream fs = new FileInputStream(resourcePath + "new_report_1.rptdesign");
IReportEngine engine = null;
EngineConfig config = new EngineConfig();
config.setLogConfig("C:\\birtre\\", Level.FINE);
config.setResourcePath(resourcePath);
Platform.startup(config);
IReportEngineFactory factory = (IReportEngineFactory) Platform.createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
engine = factory.createReportEngine(config);
engine.changeLogLevel(Level.FINE);
IReportRunnable design = engine.openReportDesign(fs);
IRunAndRenderTask task = engine.createRunAndRenderTask(design);
CSVRenderOption csvOption = new CSVRenderOption();
String format = CSVRenderOption.OUTPUT_FORMAT_CSV;
csvOption.setOutputFormat(format);
csvOption.setOutputFileName("newBIRTcsv.csv");
csvOption.setShowDatatypeInSecondRow(true);
csvOption.setExportTableByName("SecondTable");
csvOption.setDelimiter("\t");
csvOption.setReplaceDelimiterInsideTextWith("-");
task.setRenderOption(csvOption);
task.setEmitterID("org.eclipse.birt.report.engine.emitter.csv");
task.run();
task.close();
Platform.shutdown();
System.out.println("Report Generated Successfully!!");
}
public static void main(String[] args) {
try {
runReport();
} catch (Exception e) {
e.printStackTrace();
}
}
}
I have an exception:
org.eclipse.birt.report.engine.api.impl.ParameterValidationException: Required parameter Initiator is not set.
at org.eclipse.birt.report.engine.api.impl.EngineTask.validateAbstractScalarParameter(EngineTask.java:803)
at org.eclipse.birt.report.engine.api.impl.EngineTask.access$0(EngineTask.java:789)
at org.eclipse.birt.report.engine.api.impl.EngineTask$ParameterValidationVisitor.visitScalarParameter(EngineTask.java:706)
at org.eclipse.birt.report.engine.api.impl.EngineTask$ParameterVisitor.visit(EngineTask.java:1531)
at org.eclipse.birt.report.engine.api.impl.EngineTask.doValidateParameters(EngineTask.java:692)
at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:95)
at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.run(RunAndRenderTask.java:77)
at com.demshin.RunExport.runReport(RunExport.java:44)
at com.demshin.RunExport.main(RunExport.java:54)
[1]: https://code.google.com/a/eclipselabs.org/p/csv-emitter-birt-plugin/
I tried to find this parameter in csvOption, but there is nothing like that.
What am I doing wrong?
It is not a parameter of the emitter. This exception means a report parameter named "Initiator" is defined in the report "new_report_1.rptdesign", and its property "required" is checked.
For example, edit the report design, disable "required" for this parameter, and set a default value instead.
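Alternatively, if the parameter should remain required, you can supply a value for it from your code before running the task. A minimal sketch, assuming "Initiator" is a string report parameter (the value here is only a placeholder):

IRunAndRenderTask task = engine.createRunAndRenderTask(design);
// supply the required report parameter before task.run(); the value is a placeholder
task.setParameterValue("Initiator", "placeholder-value");
// optional: validateParameters() reports whether any required parameter is still missing
task.validateParameters();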