Using WEKA in Java

I am loading a model that I saved from the WEKA Explorer into my Java code, as seen below. I am now trying to give it an instance in the form of an .arff file so that I can get a prediction, but it outputs NaN 0.0 every time.
The prediction should be in the form of levels (e.g. Level 1).
The first attached screenshot shows the output I get; the second shows the dummy .arff file I am giving the model.
try {
    // Load the Naive Bayes model previously saved from the WEKA Explorer.
    NaiveBayes nb = (NaiveBayes) weka.core.SerializationHelper.read("Models/NaiveBayesModel.model");
    // Load the test instances from the .arff file.
    DataSource source1 = new DataSource(final_filePath);
    Instances testDataSet = source1.getDataSet();
    // The class attribute is the last attribute.
    testDataSet.setClassIndex(testDataSet.numAttributes() - 1);
    double actualValue = testDataSet.instance(0).classValue();
    Instance newInst = testDataSet.instance(0);
    double prediction = nb.classifyInstance(newInst);
    System.out.println(actualValue + " " + prediction);
} catch (Exception e) {
    e.printStackTrace();
}
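For reference, classifyInstance returns the (zero-based) index of the predicted class, so mapping it back to a nominal label such as "Level 1" can be done like this (a minimal sketch, assuming the class attribute is nominal):
// Convert the numeric prediction back to the class label.
String predictedLabel = testDataSet.classAttribute().value((int) prediction);
System.out.println("Predicted: " + predictedLabel);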
How can I fix this, please?
Thanks,
Andre

Related

Rendering a big PostScript file with Ghost4J in Java

I made a Java application whose purpose is to offer a print preview for PS files.
My program uses Ghostscript and Ghost4J to load the PostScript file and produce a list of Images (one for each page) using the SimpleRenderer.render method. Then, using a simple JList, I show only the image corresponding to the page the user selected in the JList.
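For context, the flow that works for small files looks like this (a minimal sketch; setResolution is assumed to be SimpleRenderer's resolution setter):
PSDocument pdocument = new PSDocument(new File(filename));
SimpleRenderer renderer = new SimpleRenderer();
renderer.setResolution(72); // screen resolution is enough for a preview
List<Image> images = renderer.render(pdocument); // one Image per page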
This worked fine until a really big PS file occurred, causing an OutOfMemoryError when executing the code
PSDocument pdocument = new PSDocument(new File(filename));
I know that it is possible to read a file a little at a time using InputStreams; the problem is that I can't think of a way to connect the bytes that I read with the actual pages of the document.
For example, I tried to read 100 MB at a time from the PS file:
int buffer_size = 100000000;
byte[] buffer = new byte[buffer_size];
FileInputStream partial = new FileInputStream(filename);
partial.read(buffer, 0, buffer_size);
document.load(new ByteArrayInputStream(buffer));
SimpleRenderer renderer = new SimpleRenderer();
// how many pages do I have to read?
List<Image> images = renderer.render(document, firstpage ??, lastpage ??);
Am I missing some Ghost4J functionality for reading a file partially?
Or does someone have other suggestions / approaches for solving this problem in a different way?
I am really struggling.
I found out I can use the Ghost4J Core API to retrieve a reduced set of pages from a PostScript file as Images:
Ghostscript gs = Ghostscript.getInstance();
String[] gsArgs = new String[9];
gsArgs[0] = "-dQUIET";
gsArgs[1] = "-dNOPAUSE";
gsArgs[2] = "-dBATCH";
gsArgs[3] = "-dSAFER";
gsArgs[4] = "-sDEVICE=display";
gsArgs[5] = "-sDisplayHandle=0";
gsArgs[6] = "-dDisplayFormat=16#804";
gsArgs[7] = "-sPageList="+firstPage+"-"+lastPage;
gsArgs[8] = "-f"+filename;
//create display callback (capture display output pages as images)
ImageWriterDisplayCallback displayCallback = new ImageWriterDisplayCallback();
//set display callback
gs.setDisplayCallback(displayCallback);
//run PostScript (also works with PDF) and exit interpreter
try {
    gs.initialize(gsArgs);
    gs.exit();
    Ghostscript.deleteInstance();
} catch (GhostscriptException e) {
    System.out.println("ERROR: " + e.getMessage());
    e.printStackTrace();
}
return displayCallback.getImages(); //return List<Image>
This solves the problem of rendering pages as images in the preview.
However, I could not find a way to use Ghost4J to get the total number of pages of a PS file (in case the file is too big to open with Document.load()).
So, I am still here needing some help.
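One hedged idea for the page count, reusing only the calls shown above (an untested sketch): run Ghostscript over the whole file with the same display device but at a very low resolution, and count the pages the callback captures; each page image stays tiny, so memory use stays low. -rN is Ghostscript's resolution switch.
// Hypothetical page counter: render everything at ~5 DPI and count the pages.
Ghostscript gs = Ghostscript.getInstance();
String[] gsArgs = new String[] {
    "-dQUIET", "-dNOPAUSE", "-dBATCH", "-dSAFER",
    "-sDEVICE=display",
    "-sDisplayHandle=0",
    "-dDisplayFormat=16#804",
    "-r5", // tiny resolution keeps each page image small
    "-f" + filename
};
ImageWriterDisplayCallback callback = new ImageWriterDisplayCallback();
gs.setDisplayCallback(callback);
try {
    gs.initialize(gsArgs);
    gs.exit();
    Ghostscript.deleteInstance();
} catch (GhostscriptException e) {
    e.printStackTrace();
}
int totalPages = callback.getImages().size(); // one image per page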

Need help to parse a PDF file in a structured way using Java [duplicate]

I need to parse a PDF file which contains tabular data. I'm using PDFBox to extract the file text to parse the result (String) later. The problem is that the text extraction doesn't work as I expected for tabular data. For example, I have a file which contains a table like this (7 columns: the first two always have data, only one Complexity column has data, only one Financing column has data):
+----------------------------------------------------------------+
| AIH | Value | Complexity | Financing |
| | | Medium | High | Not applicable | MAC/Other | FAE |
+----------------------------------------------------------------+
| xyz | 12.43 | 12.34 | | | 12.34 | |
+----------------------------------------------------------------+
| abc | 1.56 | | 1.56 | | | 1.56|
+----------------------------------------------------------------+
Then I use PDFBox:
PDDocument document = PDDocument.load(pathToFile);
PDFTextStripper s = new PDFTextStripper();
String content = s.getText(document);
Those two lines of data would be extracted like this:
xyz 12.43 12.4312.43
abc 1.56 1.561.56
There are no white spaces between the last two numbers, but this is not the biggest problem. The problem is that I don't know what the last two numbers mean: Medium, High, Not applicable? MAC/Other, FAE? I don't have the relation between the numbers and their columns.
It is not required for me to use the PDFBox library, so a solution that uses another library is fine. What I want is to be able to parse the file and know what each parsed number means.
You will need to devise an algorithm to extract the data in a usable format. Regardless of which PDF library you use, you will need to do this. Characters and graphics are drawn by a series of stateful drawing operations, i.e. move to this position on the screen and draw the glyph for character 'c'.
I suggest that you extend org.apache.pdfbox.pdfviewer.PDFPageDrawer and override the strokePath method. From there you can intercept the drawing operations for horizontal and vertical line segments and use that information to determine the column and row positions for your table. Then it's a simple matter of setting up text regions and determining which numbers/letters/characters are drawn in which region. Since you know the layout of the regions, you'll be able to tell which column the extracted text belongs to.
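A rough sketch of that idea (assuming PDFBox 2.x, where PageDrawer exposes strokePath() and getLinePath(); check your version before relying on it):
// Hypothetical: collect the bounding boxes of stroked paths (the table's ruling lines).
public class LineCapturingDrawer extends PageDrawer {
    private final List<Rectangle2D> ruleBounds = new ArrayList<>();

    public LineCapturingDrawer(PageDrawerParameters parameters) throws IOException {
        super(parameters);
    }

    @Override
    public void strokePath() throws IOException {
        ruleBounds.add(getLinePath().getBounds2D()); // record this rule's extent
        super.strokePath();
    }

    public List<Rectangle2D> getRuleBounds() { return ruleBounds; }
}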
Also, the reason you may not have spaces between text that is visually separated is that very often, a space character is not drawn by the PDF. Instead the text matrix is updated and a drawing command for 'move' is issued to draw the next character and a "space width" apart from the last one.
Good luck.
You can extract text by area in PDFBox. See the ExtractByArea.java example file, in the pdfbox-examples artifact if you're using Maven. A snippet looks like
PDFTextStripperByArea stripper = new PDFTextStripperByArea();
stripper.setSortByPosition( true );
Rectangle rect = new Rectangle( 464, 59, 55, 5);
stripper.addRegion( "class1", rect );
stripper.extractRegions( page );
String string = stripper.getTextForRegion( "class1" );
The problem is getting the coordinates in the first place. I've had success extending the normal TextStripper, overriding processTextPosition(TextPosition text) and printing out the coordinates for each character and figuring out where in the document they are.
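That coordinate dump might look like this (a minimal sketch; getCharacter() is the PDFBox 1.8-style accessor, renamed getUnicode() in 2.x):
PDFTextStripper coordDumper = new PDFTextStripper() {
    @Override
    protected void processTextPosition(TextPosition text) {
        // Print every character together with its page coordinates.
        System.out.println("'" + text.getCharacter() + "' at ("
                + text.getX() + ", " + text.getY() + ")");
        super.processTextPosition(text);
    }
};
coordDumper.getText(document);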
But there's a much simpler way, at least if you're on a Mac. Open the PDF in Preview, ⌘I to show the Inspector, choose the Crop tab and make sure the units are in Points, from the Tools menu choose Rectangular selection, and select the area of interest. If you select an area, the inspector will show you the coordinates, which you can round and feed into the Rectangle constructor arguments. You just need to confirm where the origin is, using the first method.
I had used many tools to extract tables from pdf files, but none of them worked for me.
So I implemented my own algorithm (its name is traprange) to parse tabular data in pdf files.
Following are some sample pdf files and results:
Input file: sample-1.pdf, result: sample-1.html
Input file: sample-4.pdf, result: sample-4.html
Visit my project page at traprange.
It may be too late for my answer, but I think this is not that hard. You can extend the PDFTextStripper class and override the writePage() and processTextPosition(...) methods. In your case I assume that the column headers are always the same. That means that you know the x-coordinate of each column heading, and you can compare the x-coordinate of the numbers to those of the column headings. If they are close enough (you have to test to decide how close), then you can say that the number belongs to that column.
Another approach would be to intercept the "charactersByArticle" Vector after each page is written:
@Override
public void writePage() throws IOException {
    super.writePage();
    final Vector<List<TextPosition>> pageText = getCharactersByArticle();
    // now you have all the characters on that page,
    // to do what you want with them
}
Knowing your columns, you can do your comparison of the x-coordinates to decide what column every number belongs to.
The reason you don't have any spaces between the numbers is that you have to set the word separator string.
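For instance (a one-line sketch; setWordSeparator is the PDFTextStripper hook for the string inserted between words):
stripper.setWordSeparator(" "); // emit a space between words during extraction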
I hope this is useful to you or to others who might be trying similar things.
There's PDFLayoutTextStripper, which was designed to keep the format of the data.
From the README:
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.pdfbox.pdfparser.PDFParser;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.util.PDFTextStripper;
// plus the PDFLayoutTextStripper import from its own package

public class Test {
    public static void main(String[] args) {
        String string = null;
        try {
            PDFParser pdfParser = new PDFParser(new FileInputStream("sample.pdf"));
            pdfParser.parse();
            PDDocument pdDocument = new PDDocument(pdfParser.getDocument());
            PDFTextStripper pdfTextStripper = new PDFLayoutTextStripper();
            string = pdfTextStripper.getText(pdDocument);
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
        System.out.println(string);
    }
}
I've had decent success with parsing text files generated by the pdftotext utility (sudo apt-get install poppler-utils).
File convertPdf() throws Exception {
    File pdf = new File("mypdf.pdf");
    String outfile = "mytxt.txt";
    String proc = "/usr/bin/pdftotext";
    ProcessBuilder pb = new ProcessBuilder(proc, "-layout", pdf.getAbsolutePath(), outfile);
    Process p = pb.start();
    p.waitFor();
    return new File(outfile);
}
Try using TabulaPDF (https://github.com/tabulapdf/tabula). It is a very good library for extracting table content from a PDF file, and it works very much as expected.
Good luck. :)
Extracting data from PDF is bound to be fraught with problems. Are the documents created through some kind of automatic process? If so, you might consider converting the PDFs to uncompressed PostScript (try pdf2ps) and seeing if the PostScript contains some sort of regular pattern which you can exploit.
I had the same problem reading a pdf file in which data is in tabular format. After a regular parse using PDFBox, each row was extracted with a comma as a separator... losing the columnar positions.
To resolve this I used PDFTextStripperByArea, and using coordinates I extracted the data column by column for each row. This works provided that you have a fixed-format pdf.
File file = new File("fileName.pdf");
PDDocument document = PDDocument.load(file);
PDFTextStripperByArea stripper = new PDFTextStripperByArea();
stripper.setSortByPosition( true );
Rectangle rect1 = new Rectangle( 50, 140, 60, 20 );
Rectangle rect2 = new Rectangle( 110, 140, 20, 20 );
stripper.addRegion( "row1column1", rect1 );
stripper.addRegion( "row1column2", rect2 );
List allPages = document.getDocumentCatalog().getAllPages();
PDPage firstPage = (PDPage) allPages.get(0); // pages are zero-indexed; get(0) is the first page
stripper.extractRegions( firstPage );
System.out.println(stripper.getTextForRegion( "row1column1" ));
System.out.println(stripper.getTextForRegion( "row1column2" ));
Then row 2 and so on...
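The per-row regions can also be generated in a loop instead of by hand (a sketch, assuming a fixed row height of 20 points matching the rectangles above; numRows is a placeholder):
// Hypothetical: one pair of regions per row, stepping down 20 points each time.
int rowHeight = 20;
int numRows = 10; // assumed row count for the fixed layout
for (int row = 0; row < numRows; row++) {
    int y = 140 + row * rowHeight;
    stripper.addRegion("row" + (row + 1) + "column1", new Rectangle(50, y, 60, rowHeight));
    stripper.addRegion("row" + (row + 1) + "column2", new Rectangle(110, y, 20, rowHeight));
}
stripper.extractRegions(firstPage);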
You can use PDFBox's PDFTextStripperByArea class to extract text from a specific region of a document. You can build on this by identifying the region of each cell of the table. This isn't provided out of the box, but the example DrawPrintTextLocations class demonstrates how you can parse the bounding boxes of individual characters in a document (it would be great to parse bounding boxes of strings or paragraphs, but I haven't seen support in PDFBox for this - see this question). You can use this approach to group up all touching bounding boxes to identify the distinct cells of a table. One way to do this is to maintain a set "boxes" of Rectangle2D regions and then, for each parsed character, find the character's bounding box as in DrawPrintTextLocations.writeString(String string, List<TextPosition> textPositions) and merge it with the existing contents.
Rectangle2D bounds = s.getBounds2D();
// Pad sides to detect almost touching boxes
Rectangle2D hitbox = bounds.getBounds2D();
final double dx = 1.0; // This value works for me, feel free to tweak (or add setter)
final double dy = 0.000; // Rows of text tend to overlap, so no need to extend
hitbox.add(bounds.getMinX() - dx , bounds.getMinY() - dy);
hitbox.add(bounds.getMaxX() + dx , bounds.getMaxY() + dy);
// Find all overlapping boxes
List<Rectangle2D> intersectList = new ArrayList<Rectangle2D>();
for(Rectangle2D box: boxes) {
if(box.intersects(hitbox)) {
intersectList.add(box);
}
}
// Combine all touching boxes and update
for(Rectangle2D box: intersectList) {
bounds.add(box);
boxes.remove(box);
}
boxes.add(bounds);
You can then pass these regions to PDFTextStripperByArea.
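Feeding the merged boxes into PDFTextStripperByArea could then look like this (a sketch; addRegion accepts a Rectangle2D, and pdPage stands for the page being processed):
PDFTextStripperByArea regionStripper = new PDFTextStripperByArea();
regionStripper.setSortByPosition(true);
int i = 0;
for (Rectangle2D box : boxes) {
    regionStripper.addRegion("cell" + i++, box); // one named region per detected cell
}
regionStripper.extractRegions(pdPage);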
You can also go one step further and separate out the horizontal and vertical components of these regions, and so infer the regions of all the table's cells, regardless of whether they hold any content.
I have had cause to perform these steps, and eventually wrote my own PDFTableStripper class using PDFBox. I've shared my code as a gist on GitHub. The main method gives an example of how the class can be used:
try (PDDocument document = PDDocument.load(new File(args[0])))
{
    final double res = 72; // PDF units are at 72 DPI
    PDFTableStripper stripper = new PDFTableStripper();
    stripper.setSortByPosition(true);
    // Choose a region in which to extract a table
    // (here a 6" wide, 9" high rectangle offset 1" from top left of page)
    stripper.setRegion(new Rectangle(
            (int) Math.round(1.0 * res),
            (int) Math.round(1 * res),
            (int) Math.round(6 * res),
            (int) Math.round(9.0 * res)));
    // Repeat for each page of PDF
    for (int page = 0; page < document.getNumberOfPages(); ++page)
    {
        System.out.println("Page " + page);
        PDPage pdPage = document.getPage(page);
        stripper.extractTable(pdPage);
        for (int c = 0; c < stripper.getColumns(); ++c)
        {
            System.out.println("Column " + c);
            for (int r = 0; r < stripper.getRows(); ++r)
            {
                System.out.println("Row " + r);
                System.out.println(stripper.getText(r, c));
            }
        }
    }
}
It is not required for me to use the PDFBox library, so a solution that uses another library is fine
Camelot and Excalibur
You may want to try Camelot, an open source table-extraction library for Python. If you are not inclined to write code, you can use Excalibur, a web interface created around Camelot: you "upload" the document to a localhost web server and "download" the result from that server.
Here is an example using this Python code:
import camelot
tables = camelot.read_pdf('foo.pdf', flavor="stream")
tables[0].to_csv('foo.csv')
The input is a pdf containing this table:
(Image: a sample table from the PDF-TREX set.)
No hints are given to Camelot; it works on its own by looking at the relative alignment of pieces of text. The result is returned in a csv file:
(Image: the table extracted from the sample by Camelot.)
"Rules" can be added to help Camelot identify where the ruling lines are in sophisticated tables:
(Image: a rule added in Excalibur.)
GitHub:
Camelot: https://github.com/camelot-dev/camelot
Excalibur: https://github.com/camelot-dev/excalibur
The two projects are active.
Here is a comparison with other software (with tests based on actual documents): Tabula, pdfplumber, pdftables, pdf-table-extract.
"What I want is to be able to parse the file and know what each parsed number means"
You cannot do that automatically, because pdf is not semantically structured.
Book versus document
Pdf "documents" are unstructured from a semantic standpoint (it's like a notepad file), the pdf document gives instructions on where to print a text fragment, unrelated to other fragments of the same section, there is no separation between content (what to print, and whether this is a fragment of a title, a table or a footnote) and the visual representation (font, location, etc). Pdf is an extension of PostScript, which describes a Hello world! page this way:
%!PS
/Courier % font
20 selectfont % size
72 500 moveto % current location to print at
(Hello world!) show % add text fragment
showpage % print all on the page
(Wikipedia).
One can imagine what a table looks like with the same instructions.
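For instance, a one-row "table" is nothing more than independent show operations at chosen coordinates (an illustrative sketch in the same style; nothing marks the strings as cells of one table):
%!PS
/Courier 12 selectfont
72 700 moveto (Cell A1) show    % a string at a position
144 700 moveto (Cell B1) show   % another string; no table structure anywhere
showpage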
We could say html is no clearer; however, there is a big difference: html describes the content semantically (title, paragraph, list, table header, table cell, ...) and associates css with it to produce the visual form, hence the content is fully accessible. In this sense, html is a simplified descendant of sgml, which puts constraints in place to allow data processing:
Markup should describe a document's structure and other attributes rather than specify the processing that needs to be performed, because it is less likely to conflict with future developments.
exactly the opposite of PostScript/Pdf. SGML is used in publishing. Pdf doesn't embed this semantic structure; it carries only the css-equivalent, associated with plain character strings which may not be complete words or sentences. Pdf is used for closed documents and now for so-called workflow management.
After having experienced the uncertainty and difficulty of trying to extract data from pdf, it's clear pdf is not at all a solution for preserving a document's content for the future (even though Adobe has had a pdf standard adopted by its peers).
What is actually preserved well is the printed representation, as the pdf was fully dedicated to this aspect when created. Pdfs are nearly as dead as printed books.
When reusing the content matters, one must rely again on manual re-entering of data, as from a printed book (possibly trying to do some OCR on it). This is more and more true, as many pdfs even prevent the use of copy-paste, introduce multiple spaces between words, or produce unordered character gibberish when some "optimization" is done for web use.
When the content of the document, not its printed representation, is valuable, then pdf is not the correct format. Even Adobe is unable to recreate perfectly the source of a document from its pdf rendering.
So open data should never be released in pdf format; this limits their use to reading and printing (when allowed), and makes reuse harder or impossible.
Building on the TabulaPDF suggestion above, the same extraction can be driven from Java:
ObjectExtractor oe = new ObjectExtractor(document);
SpreadsheetExtractionAlgorithm sea = new SpreadsheetExtractionAlgorithm(); // Tabula's algorithm
Page page = oe.extract(1); // extract only the first page
List<Table> tables = sea.extract(page); // run the extraction once and reuse it
for (int y = 0; y < tables.size(); y++) {
    System.out.println("table: " + y);
    Table table = tables.get(y);
    for (int i = 0; i < table.getColCount(); i++) {
        for (int x = 0; x < table.getRowCount(); x++) {
            System.out.println("col:" + i + "/lin:" + x + " >>" + table.getCell(x, i).getText());
        }
    }
}
How about printing the pdf to an image and doing OCR on that?
It sounds terribly ineffective, but since it's practically the very purpose of PDF to make text inaccessible, you gotta do what you gotta do.
http://swftools.org/ — these guys have a pdf2swf component, and they are also able to show tables.
They also provide the source, so you could check it out.
This works fine with pdfbox 2.0.6 if the PDF file has only a rectangular table. It won't work with any other kind of table.
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.text.PDFTextStripper;

public class PDFTableExtractor {

    public static void main(String[] args) throws IOException {
        // Enter file path, start page, end page, number of columns in the rectangular table
        ArrayList<String[]> objTableList = readParaFromPDF("C:\\sample1.pdf", 1, 1, 6);
    }

    public static ArrayList<String[]> readParaFromPDF(String pdfPath, int pageNoStart, int pageNoEnd, int noOfColumnsInTable) {
        ArrayList<String[]> objArrayList = new ArrayList<>();
        try {
            PDDocument document = PDDocument.load(new File(pdfPath));
            if (!document.isEncrypted()) {
                PDFTextStripper tStripper = new PDFTextStripper();
                tStripper.setStartPage(pageNoStart);
                tStripper.setEndPage(pageNoEnd);
                String pdfFileInText = tStripper.getText(document);
                // split the extracted text into lines
                String[] documentLines = pdfFileInText.split("\\r?\\n");
                for (String line : documentLines) {
                    // split each line by whitespace; keep only lines matching the column count
                    String[] lineArr = line.split("\\s+");
                    if (lineArr.length == noOfColumnsInTable) {
                        for (String linedata : lineArr) {
                            System.out.print(linedata + " ");
                        }
                        System.out.println("");
                        objArrayList.add(lineArr);
                    }
                }
            }
            document.close();
        } catch (Exception e) {
            System.out.println("Exception " + e);
        }
        return objArrayList;
    }
}
For anyone wanting to do the same thing as the OP (as I do), after days of research: Amazon Textract is the best option (if your volume is low, the free tier might be enough).
Consider using the PDFTableStripper class.
The class is available on GitHub:
https://gist.github.com/beldaz/8ed6e7473bd228fcee8d4a3e4525be11#file-pdftablestripper-java-L1
I'm not familiar with PDFBox, but you could try looking at iText. Even though the homepage says PDF generation, you can also do PDF manipulation and extraction. Have a look and see if it fits your use case.
To read the content of a table from a pdf file, you only have to convert the pdf file into a text file using any API (I have used PdfTextExtractor.getTextFromPage() from iText) and then read that txt file with your Java program. After reading it, the major task is done: you have to filter out the data you need, which you can do by repeatedly using the split method of the String class until you find the record of interest. Here is my code, with which I have extracted part of the records from a PDF file and written them into a .CSV file. The URL of the PDF file is http://www.cea.nic.in/reports/monthly/generation_rep/actual/jan13/opm_02.pdf
Code:
public static void genrateCsvMonth_Region(String pdfpath, String csvpath) {
    try {
        String line = null;
        // Create the CSV file if it does not exist yet (FileWriter in append
        // mode creates an empty file)...
        BufferedWriter writer1 = new BufferedWriter(new FileWriter(csvpath, true));
        writer1.close();
        // ...and append the header only if the file is still empty.
        BufferedReader br = new BufferedReader(new FileReader(csvpath));
        line = br.readLine();
        br.close();
        if (line == null) {
            BufferedWriter writer = new BufferedWriter(new FileWriter(csvpath, true));
            writer.append("REGION,");
            writer.append("YEAR,");
            writer.append("MONTH,");
            writer.append("THERMAL,");
            writer.append("NUCLEAR,");
            writer.append("HYDRO,");
            writer.append("TOTAL\n");
            writer.close();
        }
        // Reading the pdf file..
        PdfReader reader = new PdfReader(pdfpath);
        BufferedWriter writer = new BufferedWriter(new FileWriter(csvpath, true));
        // Extracting the records of page 1 into a String..
        String page = PdfTextExtractor.getTextFromPage(reader, 1);
        // Extracting month and year from the String
        // (the split strings below match the PDF's own spelling, e.g. "PEROID")..
        String[] period1 = page.split("PEROID");
        String[] period2 = period1[0].split(":");
        String[] month = period2[1].split("-");
        String[] period3 = month[1].split("ENERGY");
        String[] year = period3[0].split("VIS");
        // Extracting the Northern region (spelled "NORTHEN" in the PDF)
        String[] northen = page.split("NORTHEN REGION");
        String[] nthermal1 = northen[0].split("THERMAL");
        String[] nthermal2 = nthermal1[1].split(" ");
        String[] nnuclear1 = northen[0].split("NUCLEAR");
        String[] nnuclear2 = nnuclear1[1].split(" ");
        String[] nhydro1 = northen[0].split("HYDRO");
        String[] nhydro2 = nhydro1[1].split(" ");
        String[] ntotal1 = northen[0].split("TOTAL");
        String[] ntotal2 = ntotal1[1].split(" ");
        // Appending filtered data into the CSV file..
        writer.append("NORTHEN" + ",");
        writer.append(year[0] + ",");
        writer.append(month[0] + ",");
        writer.append(nthermal2[4] + ",");
        writer.append(nnuclear2[4] + ",");
        writer.append(nhydro2[4] + ",");
        writer.append(ntotal2[4] + "\n");
        // Extracting the Western region
        String[] western = page.split("WESTERN");
        String[] wthermal1 = western[1].split("THERMAL");
        String[] wthermal2 = wthermal1[1].split(" ");
        String[] wnuclear1 = western[1].split("NUCLEAR");
        String[] wnuclear2 = wnuclear1[1].split(" ");
        String[] whydro1 = western[1].split("HYDRO");
        String[] whydro2 = whydro1[1].split(" ");
        String[] wtotal1 = western[1].split("TOTAL");
        String[] wtotal2 = wtotal1[1].split(" ");
        // Appending filtered data into the CSV file..
        writer.append("WESTERN" + ",");
        writer.append(year[0] + ",");
        writer.append(month[0] + ",");
        writer.append(wthermal2[4] + ",");
        writer.append(wnuclear2[4] + ",");
        writer.append(whydro2[4] + ",");
        writer.append(wtotal2[4] + "\n");
        // Extracting the Southern region
        String[] southern = page.split("SOUTHERN");
        String[] sthermal1 = southern[1].split("THERMAL");
        String[] sthermal2 = sthermal1[1].split(" ");
        String[] snuclear1 = southern[1].split("NUCLEAR");
        String[] snuclear2 = snuclear1[1].split(" ");
        String[] shydro1 = southern[1].split("HYDRO");
        String[] shydro2 = shydro1[1].split(" ");
        String[] stotal1 = southern[1].split("TOTAL");
        String[] stotal2 = stotal1[1].split(" ");
        // Appending filtered data into the CSV file..
        writer.append("SOUTHERN" + ",");
        writer.append(year[0] + ",");
        writer.append(month[0] + ",");
        writer.append(sthermal2[4] + ",");
        writer.append(snuclear2[4] + ",");
        writer.append(shydro2[4] + ",");
        writer.append(stotal2[4] + "\n");
        // Extracting the Eastern region (no nuclear column in this region)
        String[] eastern = page.split("EASTERN");
        String[] ethermal1 = eastern[1].split("THERMAL");
        String[] ethermal2 = ethermal1[1].split(" ");
        String[] ehydro1 = eastern[1].split("HYDRO");
        String[] ehydro2 = ehydro1[1].split(" ");
        String[] etotal1 = eastern[1].split("TOTAL");
        String[] etotal2 = etotal1[1].split(" ");
        // Appending filtered data into the CSV file..
        writer.append("EASTERN" + ",");
        writer.append(year[0] + ",");
        writer.append(month[0] + ",");
        writer.append(ethermal2[4] + ",");
        writer.append(" " + ",");
        writer.append(ehydro2[4] + ",");
        writer.append(etotal2[4] + "\n");
        // Extracting the North Eastern region
        String[] neestern = page.split("NORTH");
        String[] nethermal1 = neestern[2].split("THERMAL");
        String[] nethermal2 = nethermal1[1].split(" ");
        String[] nehydro1 = neestern[2].split("HYDRO");
        String[] nehydro2 = nehydro1[1].split(" ");
        String[] netotal1 = neestern[2].split("TOTAL");
        String[] netotal2 = netotal1[1].split(" ");
        writer.append("NORTH EASTERN" + ",");
        writer.append(year[0] + ",");
        writer.append(month[0] + ",");
        writer.append(nethermal2[4] + ",");
        writer.append(" " + ",");
        writer.append(nehydro2[4] + ",");
        writer.append(netotal2[4] + "\n");
        writer.close();
    } catch (IOException ioe) {
        ioe.printStackTrace();
    }
}

How to export a CSV table with gephi-toolkit

Recently I have been trying to use gephi-toolkit to compute some statistics for several networks. It works fine to import the csv files and do some metrics computation, but if I want to export the csv table as I do below in Gephi, I cannot find any tutorial, and there is no description in the javadoc.
Since I want to do the same operations for different networks, I want to use gephi-toolkit to implement the same function, but I've checked the demos and googled for it and failed to get the result I want.
To describe my question more clearly, I post my current code below. So far I've tried two methods. One is using the methods of ExporterCSV, which only returned the matrix in csv file format, while what I want is a node csv file and an edge csv file after computing several metrics (betweenness centrality, closeness centrality and so on) for each node of each network. The other method I've tried is using DataTablesControllerImpl, but it seems that no files are created. I want to know if there is something wrong with my code; any help is appreciated.
public class Transfer95 {

    public void script() {
        //Init a project - and therefore a workspace
        ProjectController pc = Lookup.getDefault().lookup(ProjectController.class);
        pc.newProject();
        Workspace workspace = pc.getCurrentWorkspace();
        //Get controllers and models
        ImportController importController = Lookup.getDefault().lookup(ImportController.class);
        //Get models and controllers for this new workspace - will be useful later
        GraphModel graphModel = Lookup.getDefault().lookup(GraphController.class).getGraphModel();
        //Import file
        Container container, container2;
        try {
            File file_node = new File(getClass().getResource("/resource/season2/club_1_1995.csv").toURI());
            container = importController.importFile(file_node);
            container.getLoader().setEdgeDefault(EdgeDirectionDefault.DIRECTED); //Force DIRECTED
            container.getLoader().setAllowAutoNode(true); //create missing nodes
            container.getLoader().setEdgesMergeStrategy(EdgeMergeStrategy.SUM);
            container.getLoader().setAutoScale(true);
            File file_edge = new File(getClass().getResource("/resource/season2/transfer_1_1995.csv").toURI());
            container2 = importController.importFile(file_edge);
            container2.getLoader().setEdgeDefault(EdgeDirectionDefault.DIRECTED); //Force DIRECTED
            container2.getLoader().setAllowAutoNode(true); //create missing nodes
            container2.getLoader().setEdgesMergeStrategy(EdgeMergeStrategy.SUM);
            container2.getLoader().setAutoScale(true);
        } catch (Exception ex) {
            ex.printStackTrace();
            return;
        }
        //Append imported data to GraphAPI
        importController.process(container, new DefaultProcessor(), workspace);
        importController.process(container2, new AppendProcessor(), workspace); //Use AppendProcessor to append to current workspace
        //See if graph is well imported
        DirectedGraph graph = graphModel.getDirectedGraph();
        System.out.println("Nodes: " + graph.getNodeCount());
        System.out.println("Edges: " + graph.getEdgeCount());
        //count several metrics
        Degree degree = new Degree();
        degree.execute(graph.getModel());
        System.out.println("Average Degree: " + degree.getAverageDegree());
        WeightedDegree weightedDegree = new WeightedDegree();
        weightedDegree.execute(graph.getModel());
        System.out.println("Average Weighted Degree: " + weightedDegree.getAverageDegree());
        ClusteringCoefficient clusteringcoefficient = new ClusteringCoefficient();
        clusteringcoefficient.execute(graph.getModel());
        System.out.println("Average Clustering Coefficient: " + clusteringcoefficient.getAverageClusteringCoefficient());
        GraphDistance graphDistance = new GraphDistance();
        graphDistance.execute(graph.getModel());
        System.out.println("Average Path Length: " + graphDistance.getPathLength());
        System.out.println("Network Diameter: " + graphDistance.getDiameter());
        Modularity modularity = new Modularity();
        modularity.execute(graph.getModel());
        System.out.println("Modularity: " + modularity.getModularity());
        GraphDensity graphDensity = new GraphDensity();
        graphDensity.execute(graph.getModel());
        System.out.println("Graph Density: " + graphDensity.getDensity());
        //Export method 1
        // ExportController ec = Lookup.getDefault().lookup(ExportController.class);
        // ExporterCSV exporterCSV = (ExporterCSV) ec.getExporter("csv");
        // try {
        //     // ec.exportFile(new File("src/resource/output/test_95.csv"));
        //     ec.exportFile(new File("src/resource/output/test_95.csv"), exporterCSV);
        // } catch (IOException ex) {
        //     ex.printStackTrace();
        //     return;
        // }
        //Export method 2
        // Lookup.getDefault().lookup(DataTablesController.class).setDataTablesEventListener(DataTableTopComponent.this);
        DataTablesControllerImpl csvExp = new DataTablesControllerImpl();
        // Lookup.getDefault().lookup(DataTablesControllerImpl.class).setDataTablesEventListener(csvExp.getDataTablesEventListener());
        // DataTablesControllerImpl dataTablesController = Lookup.getDefault().lookup(DataTablesController.class);
        csvExp.exportCurrentTable();
    }

    public static void main(String[] args) {
        Transfer95 test = new Transfer95();
        test.script();
    }
}
This is not the best solution, but in my case it worked.
Import into your project the ExporterSpreadsheet.java class from the Gephi source code (GitHub link), and use it.
For example, to export the nodes table:
try {
    ExporterSpreadsheet exporter = new ExporterSpreadsheet();
    exporter.setWorkspace(workspace);
    exporter.setWriter(new FileWriter(new File("nodes.csv")));
    exporter.setTableToExport(ExporterSpreadsheet.ExportTable.NODES);
    exporter.execute();
} catch (Exception ex) {
    ex.printStackTrace();
}
In the same way, for the edges table:
try {
    ExporterSpreadsheet exporter = new ExporterSpreadsheet();
    exporter.setWorkspace(workspace);
    exporter.setWriter(new FileWriter(new File("edges.csv")));
    exporter.setTableToExport(ExporterSpreadsheet.ExportTable.EDGES);
    exporter.execute();
} catch (Exception ex) {
    ex.printStackTrace();
}
As output, you get the same CSV files that Gephi exports using the Export table button from Data Laboratory.

Speech recognition with CMU Sphinx - doesn't work properly

I'm trying to use CMU Sphinx for speech recognition in Java, but the result I'm getting is not correct and I don't know why.
I have a .wav file I recorded with my voice saying some sentence in English.
Here is my code in Java:
try {
    Configuration configuration = new Configuration();
    // Set path to acoustic model.
    configuration.setAcousticModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us");
    // Set path to dictionary.
    configuration.setDictionaryPath("resource:/edu/cmu/sphinx/models/en-us/cmudict-en-us.dict");
    // Set language model.
    configuration.setLanguageModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us.lm.dmp");
    StreamSpeechRecognizer recognizer = new StreamSpeechRecognizer(configuration);
    recognizer.startRecognition(new FileInputStream("assets/voice/some_wav_file.wav"));
    SpeechResult result = null;
    while ((result = recognizer.getResult()) != null) {
        System.out.println("~~ RESULTS: " + result.getHypothesis());
    }
    recognizer.stopRecognition();
} catch (Exception e) {
    System.out.println("ERROR: " + e.getMessage());
}
I also have other code for Android that doesn't work either:
Assets assets = new Assets(context);
File assetDir = assets.syncAssets();
String prefix = assetDir.getPath();
Config c = Decoder.defaultConfig();
c.setString("-hmm", prefix + "/en-us-ptm");
c.setString("-lm", prefix + "/en-us.lm");
c.setString("-dict", prefix + "/cmudict-en-us.dict");
Decoder d = new Decoder(c);
InputStream stream = context.getResources().openRawResource(R.raw.some_wav_file);
d.startUtt();
byte[] b = new byte[4096];
try {
    int nbytes;
    while ((nbytes = stream.read(b)) >= 0) {
        ByteBuffer bb = ByteBuffer.wrap(b, 0, nbytes);
        short[] s = new short[nbytes / 2];
        bb.asShortBuffer().get(s);
        d.processRaw(s, nbytes / 2, false, false);
    }
} catch (IOException e) {
    Log.d("ERROR: ", "Error when reading file" + e.getMessage());
}
d.endUtt();
Log.d("TOTAL RESULT: ", d.hyp().getHypstr());
for (Segment seg : d.seg()) {
    Log.d("RESULT: ", seg.getWord());
}
I used this website to convert the wav file to 16-bit, 16 kHz, mono and little-endian (I tried all of its options).
Any ideas why it doesn't work? I use the built-in dictionaries and acoustic models, and my accent in English is not perfect (I don't know if that matters).
EDIT:
This is my file. I recorded myself saying: "My baby is cute", and that's what I expect the output to be.
In the pure Java code I get: "i've amy's youth", and in the Android code I get: "it".
Here is a file containing the logs.
Your audio is somewhat corrupted by the conversion. You should record into wav originally, or into some other lossless format. Your pronunciation is also far from US English. For conversion between formats you can use sox instead of an external website. Your Android sample seems correct, but it feels like you are decoding a different file on Android. You might check that you actually have the proper file in resources.
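For reference, a typical sox invocation for that conversion (a sketch; file names are placeholders, and 16 kHz / 16-bit / mono matches what the default en-us models expect):
# Convert a recording to 16 kHz, 16-bit, mono WAV for CMU Sphinx.
sox input.wav -r 16000 -b 16 -c 1 some_wav_file.wav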

How to read a shapes group as an image from a Word document (.doc or .docx) using Apache POI?

I have a simple requirement to extract all the images and diagrams drawn in an MS Word file.
I am able to extract only images, but not groups of shapes (like a use case diagram or an activity diagram). I want to save all the diagrams as images.
I have used Apache POI.
Here is the code I have written:
public class worddocreader {
    public static void main(String args[]) {
        try {
            FileInputStream fs = new FileInputStream("F:/1.docx");
            XWPFDocument docx = new XWPFDocument(fs);
            List<XWPFPictureData> piclist = docx.getAllPictures();
            Iterator<XWPFPictureData> iterator = piclist.iterator();
            int i = 0;
            while (iterator.hasNext()) {
                XWPFPictureData pic = iterator.next();
                byte[] bytepic = pic.getData();
                BufferedImage imag = ImageIO.read(new ByteArrayInputStream(bytepic));
                // Note: ImageIO.write expects a format name like "jpg", not a MIME type.
                ImageIO.write(imag, "jpg", new File("F:/docParsing/imagefromword" + i + ".jpg"));
                i++;
            }
            ArrayList<PackagePart> packArrayList = docx.getPackageRelationship().getPackage().getParts();
            int size = packArrayList.size();
            System.out.println("Array List Size : " + packArrayList.size());
            while (size-- > 0) {
                PackagePart packagePart = packArrayList.get(size);
                System.out.println(packagePart.getContentType());
                try {
                    BufferedImage bfrImage = ImageIO.read(packagePart.getInputStream());
                    ImageIO.write(bfrImage, "png", new File("F:/docParsing_emb/size" + size + ".png"));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
            System.out.println("Done");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
It only extracts images, not shapes.
Does anybody know how I can do this?
So you are after the stuff defined in [MS-ODRAW], i.e. so-called OfficeDrawings which can be created directly in Word using its Drawing palette?
Unfortunately, POI offers only little help here. With HWPF (the old binary *.doc file format) you can get a handle to such data like so:
HWPFDocument document;
OfficeDrawings officeDrawings = document.getOfficeDrawingsMain();
OfficeDrawing drawing = officeDrawings.getOfficeDrawingAt(OFFSET);
// OFFSET is a global character offset describing the position of the drawing in question
// i.e. document.getRange().getStartOffset() + x
This drawing can then be further processed into individual records:
EscherRecordManager escherRecordManager = new EscherRecordManager(drawing.getOfficeArtSpContainer());
EscherSpRecord escherSpRecord = escherRecordManager.getSpRecord();
EscherOptRecord escherOptRecord = escherRecordManager.getOptRecord();
Using the data from all these records you can theoretically render out the original drawing again. But it's rather painful...
So far I've only done this in a single case where I had lots of simple arrows floating around on a page. Those had to be converted to a textual representation (something like: "Positions (x1, y1) and (x2, y2) are connected by an arrow"). Doing this essentially meant implementing a subset of [MS-ODRAW] relevant to those arrows using the above-mentioned records. Not exactly a pleasant task.
MS Word backup solution
If using MS Word itself is an option for you, then there is another pragmatic way:
1. Extract all relevant offsets that contain OfficeDrawings using POI.
2. Inside Word: iterate over the document with VBA and copy all the drawings at the given offsets to the clipboard.
3. Use some other application (I chose Visio) to dump the clipboard contents into a PNG.
The necessary check for a drawing in step 1 is very simple (see below). The rest can be completely automated in Word. If anyone is in need, I can share the respective VBA code.
if (characterRun.isSpecialCharacter()) {
    for (char currentChar : characterRun.text().toCharArray()) {
        if ('\u0008' == currentChar) return true;
    }
}
If you mean Office Art objects, then in the class org.apache.poi.hwpf.HWPFDocument there is an _officeDrawingsMain field that contains the Office Art objects.
Check this link: https://poi.apache.org/apidocs/org/apache/poi/hwpf/HWPFDocument.html
