Java LibreOffice Draw - Set text of a shape

I'm using Java and the LibreOffice API, and I'd like to draw rectangles and set their names, or put some text fields on them. Drawing shapes was relatively easy, but adding text has been surprisingly hard. I haven't found a solution in the documentation or on the forums.
I am declaring the shape and text like this:
Object drawShape = xDrawFactory.createInstance("com.sun.star.drawing.RectangleShape");
XShape xDrawShape = UnoRuntime.queryInterface(XShape.class, drawShape);
xDrawShape.setSize(new Size(10000, 20000));
xDrawShape.setPosition(new Point(5000, 5000));
xDrawPage.add(xDrawShape);
XText xShapeText = UnoRuntime.queryInterface(XText.class, drawShape);
XPropertySet xShapeProps = UnoRuntime.queryInterface(XPropertySet.class, drawShape);
And then I am trying to set XText:
xShapeText.setString("ABC");
And this is where the problem appears (the exception is still not clear to me, even after reading the explanation in the documentation):
com.sun.star.lang.DisposedException
at com.sun.star.lib.uno.environments.remote.JobQueue.removeJob(JobQueue.java:210)
at com.sun.star.lib.uno.environments.remote.JobQueue.enter(JobQueue.java:330)
at com.sun.star.lib.uno.environments.remote.JobQueue.enter(JobQueue.java:303)
at com.sun.star.lib.uno.environments.remote.JavaThreadPool.enter(JavaThreadPool.java:87)
at com.sun.star.lib.uno.bridges.java_remote.java_remote_bridge.sendRequest(java_remote_bridge.java:636)
at com.sun.star.lib.uno.bridges.java_remote.ProxyFactory$Handler.request(ProxyFactory.java:146)
at com.sun.star.lib.uno.bridges.java_remote.ProxyFactory$Handler.invoke(ProxyFactory.java:128)
at com.sun.proxy.$Proxy6.setString(Unknown Source)
at com.ericsson.stpdiagramgenerator.presentation.core.HelloTextTableShape.manipulateText(HelloTextTableShape.java:265)
at com.ericsson.stpdiagramgenerator.presentation.core.HelloTextTableShape.useWriter(HelloTextTableShape.java:65)
at com.ericsson.stpdiagramgenerator.presentation.core.HelloTextTableShape.useDocuments(HelloTextTableShape.java:52)
at com.ericsson.stpdiagramgenerator.presentation.core.HelloTextTableShape.main(HelloTextTableShape.java:42)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Caused by: java.io.IOException: com.sun.star.io.IOException: EOF reached - socket,host=localhost,port=8100,localHost=localhost.localdomain,localPort=34456,peerHost=localhost,peerPort=8100
at com.sun.star.lib.uno.bridges.java_remote.XConnectionInputStream_Adapter.read(XConnectionInputStream_Adapter.java:55)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at com.sun.star.lib.uno.protocols.urp.urp.readBlock(urp.java:355)
at com.sun.star.lib.uno.protocols.urp.urp.readMessage(urp.java:92)
at com.sun.star.lib.uno.bridges.java_remote.java_remote_bridge$MessageDispatcher.run(java_remote_bridge.java:105)
Maybe you have another solution for inserting text, a text box, or a text field on a shape with the LibreOffice API.

Your code works fine on my machine. I tested it in LibreOffice 5.1.0.3 on Windows. Here is the code I used:
com.sun.star.frame.XDesktop xDesktop = null;
// getDesktop() is from
// https://wiki.openoffice.org/wiki/API/Samples/Java/Writer/BookmarkInsertion
xDesktop = getDesktop();
com.sun.star.lang.XComponent xComponent = null;
try {
    xComponent = xDesktop.getCurrentComponent();
    XDrawPagesSupplier xDrawPagesSupplier =
            (XDrawPagesSupplier) UnoRuntime.queryInterface(
                    XDrawPagesSupplier.class, xComponent);
    Object drawPages = xDrawPagesSupplier.getDrawPages();
    XIndexAccess xIndexedDrawPages =
            (XIndexAccess) UnoRuntime.queryInterface(XIndexAccess.class, drawPages);
    Object drawPage = xIndexedDrawPages.getByIndex(0);
    XMultiServiceFactory xDrawFactory =
            (XMultiServiceFactory) UnoRuntime.queryInterface(
                    XMultiServiceFactory.class, xComponent);
    Object drawShape = xDrawFactory.createInstance(
            "com.sun.star.drawing.RectangleShape");
    XDrawPage xDrawPage = (XDrawPage) UnoRuntime.queryInterface(XDrawPage.class, drawPage);
    XShape xDrawShape = UnoRuntime.queryInterface(XShape.class, drawShape);
    xDrawShape.setSize(new Size(10000, 20000));
    xDrawShape.setPosition(new Point(5000, 5000));
    xDrawPage.add(xDrawShape);
    XText xShapeText = UnoRuntime.queryInterface(XText.class, drawShape);
    XPropertySet xShapeProps = UnoRuntime.queryInterface(XPropertySet.class, drawShape);
    xShapeText.setString("DEF");
} catch (Exception e) {
    e.printStackTrace(System.err);
    System.exit(1);
}
To run it, I opened a new LibreOffice Draw file, then pressed "Run Project" in NetBeans. The result was a rectangle on the page containing the text "DEF".
It looks like the exception may be caused by a problem with connecting to the document. How exactly are you running the macro?
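For reference, here is a minimal sketch of how a remote UNO connection over a socket is usually established from Java. This is only an illustration on my side: it assumes soffice was started with --accept="socket,host=localhost,port=8100;urp;" (matching the port in your stack trace), and the ConnectToDraw class name is just for the example.
import com.sun.star.beans.XPropertySet;
import com.sun.star.bridge.XUnoUrlResolver;
import com.sun.star.frame.XDesktop;
import com.sun.star.lang.XMultiComponentFactory;
import com.sun.star.uno.UnoRuntime;
import com.sun.star.uno.XComponentContext;

public class ConnectToDraw {
    public static void main(String[] args) throws Exception {
        // Local context and service manager, used only to create the URL resolver.
        XComponentContext localContext =
                com.sun.star.comp.helper.Bootstrap.createInitialComponentContext(null);
        XMultiComponentFactory localFactory = localContext.getServiceManager();
        XUnoUrlResolver resolver = UnoRuntime.queryInterface(XUnoUrlResolver.class,
                localFactory.createInstanceWithContext(
                        "com.sun.star.bridge.UnoUrlResolver", localContext));

        // Resolve the remote service manager of the running soffice process.
        Object initial = resolver.resolve(
                "uno:socket,host=localhost,port=8100;urp;StarOffice.ServiceManager");
        XMultiComponentFactory remoteFactory =
                UnoRuntime.queryInterface(XMultiComponentFactory.class, initial);
        XPropertySet props = UnoRuntime.queryInterface(XPropertySet.class, remoteFactory);
        XComponentContext remoteContext = UnoRuntime.queryInterface(
                XComponentContext.class, props.getPropertyValue("DefaultContext"));

        // From here on, the desktop gives access to the current Draw document.
        XDesktop xDesktop = UnoRuntime.queryInterface(XDesktop.class,
                remoteFactory.createInstanceWithContext(
                        "com.sun.star.frame.Desktop", remoteContext));
        System.out.println("Connected, current component: "
                + xDesktop.getCurrentComponent());
    }
}
If the soffice process on the other end of that socket has been closed or restarted, any further call on an old proxy raises exactly the DisposedException you are seeing.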
Related: This question is also posted at https://forum.openoffice.org/en/forum/viewtopic.php?f=20&p=395334, which contains a solution in Basic.

Related

Error: java.lang.UnsatisfiedLinkError with Charuco Camera Calibration (DetectorParameters)

Good morning everyone. I am currently working on a project in which I have to detect object coordinates within an image with high precision. I tried using a normal chessboard for camera calibration, but the reprojection error was too high, so I decided to use the Charuco calibration pattern instead. I am working with OpenCV 3.4 and Java (a project constraint). Since the Aruco functions are not included in the OpenCV Java bindings, I created a new package in my project which includes the necessary classes. The Aruco code is the one you can find at the following link:
Aruco Code Github
The code that I'm executing is the following:
protected void captureImagesCharuco() {
    int squaresX = 5;
    int squaresY = 7;
    float squareLength = (float) 37.0;
    float markerLength = (float) 22.0;
    int calibrationFlags = 0;
    float aspectRatio = 1;
    DetectorParameters detectParams = DetectorParameters.create();
    detectParams.set_adaptiveThreshWinSizeMin(3);
    detectParams.set_adaptiveThreshWinSizeMax(23);
    detectParams.set_adaptiveThreshWinSizeStep(10);
    detectParams.set_adaptiveThreshConstant(7);
    detectParams.set_minMarkerPerimeterRate(0.03);
    detectParams.set_maxMarkerPerimeterRate(4.0);
    detectParams.set_polygonalApproxAccuracyRate(0.05);
    detectParams.set_minCornerDistanceRate(10);
    detectParams.set_minDistanceToBorder(3);
    detectParams.set_minMarkerDistanceRate(0.05);
    detectParams.set_cornerRefinementWinSize(5);
    detectParams.set_cornerRefinementMaxIterations(30);
    detectParams.set_cornerRefinementMinAccuracy(0.1);
    detectParams.set_markerBorderBits(1);
    detectParams.set_perspectiveRemovePixelPerCell(8);
    detectParams.set_perspectiveRemoveIgnoredMarginPerCell(0.13);
    detectParams.set_maxErroneousBitsInBorderRate(0.04);
    detectParams.set_minOtsuStdDev(5.0);
    detectParams.set_errorCorrectionRate(0.6);
    Dictionary dictionary = Aruco.getPredefinedDictionary(0);
    CharucoBoard charucoBoard = CharucoBoard.create(squaresX, squaresY, squareLength, markerLength, dictionary);
    List<List<Mat>> charucoCorners = new ArrayList<>();
    List<Mat> charucoIds = new ArrayList<>();
    List<Mat> validImgs = new ArrayList<>();
    Size imgSize = new Size();
    int nFrame = 0;
    File[] listImages = imageDirectory.listFiles();
    for (File file : listImages) {
        String src = file.getAbsolutePath();
        Mat imgRead = Imgcodecs.imread(src, Imgcodecs.IMREAD_COLOR);
        imgSize = imgRead.size();
        Mat imgCopy = new Mat();
        Mat ids = new Mat();
        List<Mat> rejectedCorners = new ArrayList<>();
        List<Mat> corners = new ArrayList<>();
        if (!imgRead.empty()) {
            Aruco.detectMarkers(imgRead, dictionary, corners, ids);
            Aruco.refineDetectedMarkers(imgRead, (Board) charucoBoard, corners, ids, rejectedCorners);
            Mat currentCharucoCorners = new Mat();
            Mat currentCharucoIds = new Mat();
            int idsSize = ids.rows() * ids.cols();
            if (idsSize > 0) {
                Aruco.interpolateCornersCharuco(corners, ids, imgRead, charucoBoard, currentCharucoCorners, currentCharucoIds);
            }
            imgRead.copyTo(imgCopy);
            if (idsSize < 0) {
                Aruco.drawDetectedCornersCharuco(imgCopy, currentCharucoCorners);
            }
            if (currentCharucoCorners.total() > 0) {
                Aruco.drawDetectedCornersCharuco(imgCopy, currentCharucoCorners, currentCharucoIds, new Scalar(255, 0, 0));
            }
            charucoCorners.add(corners);
            charucoIds.add(currentCharucoIds);
            validImgs.add(imgRead);
            nFrame++;
        }
    }
    intrinsic.put(0, 0, 1);
    intrinsic.put(1, 1, 1);
    List<Mat> allCharucoCorners = new ArrayList<>();
    List<Mat> allCharucoIds = new ArrayList<>();
    for (int i = 0; i < nFrame; i++) {
        Mat currentCharucoCorners = new Mat();
        Mat currentCharucoIds = new Mat();
        Aruco.interpolateCornersCharuco(charucoCorners.get(i), charucoIds.get(i), validImgs.get(i), charucoBoard, currentCharucoCorners, currentCharucoIds, intrinsic, distCoeffs, 4);
        allCharucoCorners.add(currentCharucoCorners);
        allCharucoIds.add(currentCharucoIds);
    }
    double repError = Aruco.calibrateCameraCharuco(allCharucoCorners, charucoIds, charucoBoard, imgSize, intrinsic, distCoeffsCharuco, rvecs, tvecs, calibrationFlags);
    System.out.println("reprojection error : " + repError);
}
I then simply call captureImagesCharuco() from the main program. However, when I do so I get the following error:
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.sun.javafx.application.LauncherImpl.launchApplicationWithArgs(LauncherImpl.java:389)
at com.sun.javafx.application.LauncherImpl.launchApplication(LauncherImpl.java:328)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.launcher.LauncherHelper$FXHelper.main(LauncherHelper.java:767)
Caused by: java.lang.UnsatisfiedLinkError: application.DetectorParameters.create_0()J
at application.DetectorParameters.create_0(Native Method)
at application.DetectorParameters.create(DetectorParameters.java:24)
at application.CameraCalibrate.captureImagesCharuco(CameraCalibrate.java:115)
at application.Main.main(Main.java:64)
... 11 more
Exception running application application.Main
I have searched for how to solve this error (UnsatisfiedLinkError) and found that it is usually caused by using a library that isn't included in the build path of the project (though I am not sure). I guess the library in question here is the Aruco package, but I don't know how to include a package in the build path of the project.
Any kind of help will be more than welcome! Thank you! :)
The error indicates that your OpenCV build does not include the extra module you're using.
To fix that, you either need to find a prebuilt OpenCV that contains the module you need (in your case it is located in the opencv_contrib repository), or build OpenCV from source yourself.
If you build from source, you have to enable the module in the CMake configuration: point the main OpenCV build at the contrib modules (the OPENCV_EXTRA_MODULES_PATH CMake variable) and rebuild, making sure the Java bindings are generated as well. For building OpenCV and contrib, please follow the official documentation.
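As a quick sanity check, here is a minimal sketch (not your code; it assumes an OpenCV Java build that was compiled with the contrib/aruco module, so the generated classes live in org.opencv.aruco rather than in your copied application package). If the native library was built without aruco, the same UnsatisfiedLinkError appears on the create() call:
import org.opencv.core.Core;
import org.opencv.aruco.Aruco;
import org.opencv.aruco.DetectorParameters;
import org.opencv.aruco.Dictionary;

public class ArucoLinkCheck {
    public static void main(String[] args) {
        // Load the native library; it must actually contain the aruco symbols.
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // These calls only resolve if the native build includes the aruco module.
        DetectorParameters params = DetectorParameters.create();
        Dictionary dict = Aruco.getPredefinedDictionary(Aruco.DICT_6X6_250);
        System.out.println("aruco native bindings loaded: " + (params != null && dict != null));
    }
}
Copying only the Java wrapper classes into your own package is not enough, because methods like create_0() are native stubs that have to exist in the compiled opencv_java library.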

No data found exception using oracle olapi

I started experimenting with the Oracle OLAP API ('olapi'), but I'm having some issues while running the examples package.
When I run MakingQueriesExamples.java from the source package, I get this error:
oracle.olapi.data.cursor.NoDataAvailableException
at oracle.express.olapi.data.full.DefinitionManager.handleException(Unknown Source)
at oracle.express.olapi.data.full.DefinitionManager.createCursorManagerInterfaces(Unknown Source)
at oracle.express.olapi.data.full.DefinitionManager.createCursorManagers(Unknown Source)
at oracle.olapi.data.source.DataProvider.createCursorManagers(Unknown Source)
at oracle.olapi.data.source.DataProvider.createCursorManagers(Unknown Source)
at oracle.olapi.data.source.DataProvider.createCursorManagers(Unknown Source)
at oracle.olapi.data.source.DataProvider.createCursorManagers(Unknown Source)
at oracle.olapi.data.source.DataProvider.createCursorManager(Unknown Source)
at olap.Context11g._displayResult(Context11g.java:650)
at olap.Context11g.displayResult(Context11g.java:631)
at olap.source.MakingQueriesExamples.controllingMatchingWithAlias(MakingQueriesExamples.java:114)
at olap.source.MakingQueriesExamples.run(MakingQueriesExamples.java:40)
at olap.BaseExample11g.execute(BaseExample11g.java:54)
at olap.BaseExample11g.execute(BaseExample11g.java:74)
at olap.source.MakingQueriesExamples.main(MakingQueriesExamples.java:478)
The part that causes the error is here (the last line, in MakingQueriesExamples):
println("\nControlling Input-to-Source Matching With the alias Method");
MdmMeasure mdmUnits = getMdmMeasure("UNITS");
// Get the Source objects for the measure and for the default hierarchies
// of the dimensions.
NumberSource units = (NumberSource) mdmUnits.getSource();
StringSource prodHier = (StringSource)
getMdmPrimaryDimension("PRODUCT").getDefaultHierarchy().getSource();
StringSource custHier = (StringSource)
getMdmPrimaryDimension("CUSTOMER").getDefaultHierarchy().getSource();
StringSource chanHier = (StringSource)
getMdmPrimaryDimension("CHANNEL").getDefaultHierarchy().getSource();
StringSource timeHier = (StringSource)
getMdmPrimaryDimension("TIME").getDefaultHierarchy().getSource();
// Select single values for the hierarchies.
//Source prodSel = prodHier.selectValue("PRODUCT_PRIMARY::ITEM::ENVY ABM");
Source prodSel = prodHier.selectValue("PRIMARY::ITEM::ENVY ABM");
//Source custSel = custHier.selectValue("SHIPMENTS::SHIP_TO::BUSN WRLD SJ");
Source custSel = custHier.selectValue("SHIPMENTS::SHIP_TO::BUSN WRLD SJ");
//Source timeSel = timeHier.selectValue("CALENDAR_YEAR::MONTH::2001.01");
Source timeSel = timeHier.selectValue("CALENDAR::MONTH::2001.01");
// Produce a Source that specifies the units values for the selected
// dimension values.
Source unitsSel = units.join(timeSel).join(custSel).join(prodSel);
// Create aliases for the Channel dimension hierarchy.
Source chanAlias1 = chanHier.alias();
Source chanAlias2 = chanHier.alias();
// Join the aliases to the Source representing the units values specified
// by the selected dimension elements, using the value method to make the
// alias an input.
NumberSource unitsSel1 = (NumberSource) unitsSel.join(chanAlias1.value());
NumberSource unitsSel2 = (NumberSource) unitsSel.join(chanAlias2.value());
// chanAlias2 is the first output of result, so its values are the row
// (slower varying) values; chanAlias1 is the second output of result
// so its values are the column (faster varying) values.
Source result = unitsSel1.gt(unitsSel2)
.join(chanAlias1) // Output 2, column
.join(chanAlias2); // Output 1, row
getContext().commit();
getContext().displayResult(result);
And here (the first line, in Context11g.java):
CursorManager cursorManager =
dp.createCursorManager(source);
Cursor cursor = cursorManager.createCursor();
cpw.printCursor(cursor, displayLocVal);
// Close the CursorManager.
cursorManager.close();
I'm using Oracle Database 11.2.0.1.0 with the OLAP option enabled and Analytic Workspace Manager 11.2.0.4B.
I started by installing the 'global' schema as instructed here:
https://www.oracle.com/technetwork/database/options/olap/global-11g-readme-082667.html
I verified everything in AWM (cubes, dimensions and measures), and the data in SQL Developer.
I noticed that some of the hierarchies' names have changed, so I updated them in the Java code.
Any help would be appreciated!
Thanks in advance.

Out of memory error when using Scala to process some XML

I have broken a wiki XML dump into many small parts of 1M each and tried to clean them (after they were already cleaned with another program by somebody else).
I get an out-of-memory error which I don't know how to solve. Can anyone enlighten me?
I get the following error message:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.<init>(FreqProxTermsWriterPerField.java:212)
at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.newInstance(FreqProxTermsWriterPerField.java:235)
at org.apache.lucene.index.ParallelPostingsArray.grow(ParallelPostingsArray.java:48)
at org.apache.lucene.index.TermsHashPerField$PostingsBytesStartArray.grow(TermsHashPerField.java:252)
at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:292)
at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:151)
at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:645)
at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:342)
at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:301)
at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:241)
at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:454)
at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1541)
at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1256)
at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1237)
at qa.main.ja.Indexing$$anonfun$5$$anonfun$apply$4.apply(SearchDocument.scala:234)
at qa.main.ja.Indexing$$anonfun$5$$anonfun$apply$4.apply(SearchDocument.scala:224)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.Iterator$class.foreach(Iterator.scala:750)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1202)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at qa.main.ja.Indexing$$anonfun$5.apply(SearchDocument.scala:224)
at qa.main.ja.Indexing$$anonfun$5.apply(SearchDocument.scala:220)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
Line 234 is the following:
writer.addDocument(document)
It adds documents to Lucene.
Line 224 is the following:
for (doc <- target_xml \\ "doc") yield {
It is the first line of a for loop that adds various elements as fields in the index.
Is this a code problem, a settings problem, or a hardware problem?
EDIT
Hi, this is my for loop:
(for (knowledgeFile <- knowledgeFiles) yield {
  System.err.println(s"processing file: ${knowledgeFile}")
  val target_xml = XML.loadString(" <file>" + cleanFile(knowledgeFile).mkString + "</file>")
  for (doc <- target_xml \\ "doc") yield {
    val id = (doc \ "#id").text
    val title = (doc \ "#title").text
    val text = doc.text
    val document = new Document()
    document.add(new StringField("id", id, Store.YES))
    document.add(new TextField("text", new StringReader(title + text)))
    writer.addDocument(document)
    val xml_doc = <page><title>{ title }</title><text>{ text }</text></page>
    id -> xml_doc
  }
}).flatten.toArray
The inner loop just loops through every doc element. The outer loop loops through every file. Is the nested for loop the source of the problem?
Below is the cleanFile function for reference:
def cleanFile(fileName: String): Array[String] = {
  val tagRe = """<\/?doc.*?>""".r
  val lines = Source.fromFile(fileName).getLines.toArray
  val outLines = new Array[String](lines.length)
  for ((line, lineNo) <- lines.zipWithIndex) yield {
    if (tagRe.findFirstIn(line) != None) {
      outLines(lineNo) = line
    } else {
      outLines(lineNo) = StringEscapeUtils.escapeXml11(line)
    }
  }
  outLines
}
Thanks again
It looks like you should try increasing the maximum heap size by passing the -Xmx JVM argument.
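For example, a sketch only (the heap size, classpath and main class are placeholders; adapt them to however you actually launch the indexer):
java -Xmx4g -cp <your-classpath> qa.main.ja.Indexing
If you launch through sbt, the equivalent is sbt -J-Xmx4g run, or setting javaOptions += "-Xmx4g" together with fork := true in build.sbt.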

Conditional formatting in Excel using Apache POI

I have a problem using SheetConditionalFormatting. Just to test whether a cell contains a particular string (in my case simply "test"), I run the following code:
SheetConditionalFormatting sheetConditionalFormatting = excelSheet.getSheetConditionalFormatting();
ConditionalFormattingRule rule = sheetConditionalFormatting.createConditionalFormattingRule(ComparisonOperator.EQUAL, "test");
PatternFormatting fill1 = rule.createPatternFormatting();
fill1.setFillBackgroundColor(IndexedColors.BLUE.index);
fill1.setFillPattern(PatternFormatting.SOLID_FOREGROUND);
CellRangeAddress[] regions = {
CellRangeAddress.valueOf("A1")
};
sheetConditionalFormatting.addConditionalFormatting(regions, rule);
And I get a message that 'test' does not exist in the workbook. This is the error from the console:
Exception in thread "main" org.apache.poi.ss.formula.FormulaParseException: Specified named range 'test' does not exist in the current workbook.
at org.apache.poi.ss.formula.FormulaParser.parseNonRange(FormulaParser.java:569)
at org.apache.poi.ss.formula.FormulaParser.parseRangeable(FormulaParser.java:429)
at org.apache.poi.ss.formula.FormulaParser.parseRangeExpression(FormulaParser.java:268)
at org.apache.poi.ss.formula.FormulaParser.parseSimpleFactor(FormulaParser.java:1119)
at org.apache.poi.ss.formula.FormulaParser.percentFactor(FormulaParser.java:1079)
at org.apache.poi.ss.formula.FormulaParser.powerFactor(FormulaParser.java:1066)
at org.apache.poi.ss.formula.FormulaParser.Term(FormulaParser.java:1426)
at org.apache.poi.ss.formula.FormulaParser.additiveExpression(FormulaParser.java:1526)
at org.apache.poi.ss.formula.FormulaParser.concatExpression(FormulaParser.java:1510)
at org.apache.poi.ss.formula.FormulaParser.comparisonExpression(FormulaParser.java:1467)
at org.apache.poi.ss.formula.FormulaParser.unionExpression(FormulaParser.java:1447)
at org.apache.poi.ss.formula.FormulaParser.parse(FormulaParser.java:1568)
at org.apache.poi.ss.formula.FormulaParser.parse(FormulaParser.java:176)
at org.apache.poi.hssf.model.HSSFFormulaParser.parse(HSSFFormulaParser.java:70)
at org.apache.poi.hssf.record.CFRuleRecord.parseFormula(CFRuleRecord.java:525)
at org.apache.poi.hssf.record.CFRuleRecord.create(CFRuleRecord.java:146)
at org.apache.poi.hssf.usermodel.HSSFSheetConditionalFormatting.createConditionalFormattingRule(HSSFSheetConditionalFormatting.java:80)
at org.apache.poi.hssf.usermodel.HSSFSheetConditionalFormatting.createConditionalFormattingRule(HSSFSheetConditionalFormatting.java:32)
at MainApp.main(MainApp.java:26)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
It turns out that the string passed to createConditionalFormattingRule is parsed as a formula, so by default it is treated as a cell reference or named range:
ConditionalFormattingRule rule = sheetConditionalFormatting.createConditionalFormattingRule(ComparisonOperator.EQUAL, "B1");
To compare against a string literal instead, the value needs to be enclosed in double quotes inside the formula, which in Java source looks like:
"\"test\""

Labeling an unlabeled instance in Weka (Java code)

I am a beginner with Java and the Weka tool. I want to use the LogitBoost algorithm with DecisionStump as the weak learner in my Java code, but I don't know how to do this. I create a vector with six features (without the label feature) and I want to feed it into LogitBoost to get a label and the probability of its assignment. Labels are 1 or -1, and the train/test data is in an ARFF file. This is my code, but the algorithm always returns 0!
Thanks
double candidate_similarity(ha_nodes ha, WeightMatrix[][] wm, LogitBoost lgb, ArrayList<Attribute> atts) {
    LogitBoost lgb = new LogitBoost();
    lgb.buildClassifier(newdata); // newdata is an arff file with some labeled data
    Evaluation eval = new Evaluation(newdata);
    eval.crossValidateModel(lgb, newdata, 10, new Random(1));
    try {
        feature_vector[0] = IP_sim(Main.a_new.dip, ha.candidate.dip_cand);
        feature_vector[1] = IP_sim(Main.a_new.sip, ha.candidate.sip_cand);
        feature_vector[2] = IP_s_d_sim(Main.a_new.sip, ha);
        feature_vector[3] = Dport_sim(Main.a_new.dport, ha);
        freq_weight(Main.a_new.Atype, ha, freq_avg, weight_avg, wm);
        feature_vector[4] = weight_avg;
        feature_vector[5] = freq_avg;
        double[] values = new double[]{feature_vector[0], feature_vector[1], feature_vector[2],
                feature_vector[3], feature_vector[4], feature_vector[5]};
        DenseInstance newInst = new DenseInstance(1.0, values);
        Instances dataUnlabeled = new Instances("TestInstances", atts, 0);
        dataUnlabeled.add(newInst);
        dataUnlabeled.setClassIndex(dataUnlabeled.numAttributes() - 1);
        double clslable = lgb.classifyInstance(inst);
    } catch (Exception ex) {
        // Logger.getLogger(Module2.class.getName()).log(Level.SEVERE, null, ex);
    }
    return clslable;
}
Where does this newdata come from? You need to load the file properly to get a correct classification; use this class to load the features from the file:
http://weka.sourceforge.net/doc/weka/core/converters/ArffLoader.html
I'm not posting example code because I use Weka from MATLAB, so I don't have examples in Java.
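For what it's worth, a minimal Java sketch of the loading step the answer refers to (the file name "train.arff" is a placeholder, and it assumes the class attribute is the last column of the ARFF file):
import java.io.File;
import weka.core.Instances;
import weka.core.converters.ArffLoader;
import weka.classifiers.meta.LogitBoost;

public class LoadArffExample {
    public static void main(String[] args) throws Exception {
        // Load the training data from an ARFF file.
        ArffLoader loader = new ArffLoader();
        loader.setSource(new File("train.arff"));
        Instances newdata = loader.getDataSet();
        // Weka needs to know which attribute is the class label.
        newdata.setClassIndex(newdata.numAttributes() - 1);

        // LogitBoost uses DecisionStump as its default weak learner.
        LogitBoost lgb = new LogitBoost();
        lgb.buildClassifier(newdata);

        // Classify an instance and print the predicted class and its distribution.
        double label = lgb.classifyInstance(newdata.instance(0));
        double[] dist = lgb.distributionForInstance(newdata.instance(0));
        System.out.println("predicted class index: " + label + ", P(class 0) = " + dist[0]);
    }
}
Note that classifyInstance returns the index of the predicted class value, so a 0 does not mean "no result"; map it back through the class attribute's values to get the -1/1 label.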
