StandardAnalyzer - Apache Lucene - java

I'm developing a system where you feed text files to a StandardAnalyzer, and the contents of each file are then replaced by the output of the StandardAnalyzer (which tokenizes the text and removes all the stop words). The code I've developed so far is:
File f = new File(path);
TokenStream stream = analyzer.tokenStream("contents",
        new StringReader(readFileToString(f)));
CharTermAttribute charTermAttribute = stream.getAttribute(CharTermAttribute.class);
stream.reset(); // required before incrementToken()
while (stream.incrementToken()) {
    String term = charTermAttribute.toString();
    System.out.print(term);
}
stream.end();
stream.close();
// Following is the readFileToString(File f) function
private static String readFileToString(File f) throws FileNotFoundException {
    StringBuilder textBuilder = new StringBuilder();
    String ls = System.getProperty("line.separator");
    Scanner scanner = new Scanner(new FileInputStream(f));
    while (scanner.hasNextLine()) {
        textBuilder.append(scanner.nextLine()).append(ls);
    }
    scanner.close();
    return textBuilder.toString();
}
readFileToString(f) is a simple function which converts the file contents to a string representation.
The output I'm getting is the words with the spaces and newlines between them removed. Is there a way to preserve the original spaces or newline characters in the analyzer output, so that I can replace the original file contents with the filtered contents from the StandardAnalyzer and present it in a readable form?

Tokenizers save the term position and character offsets, so in theory you could look at those to determine how many characters there are between each pair of tokens, but they don't save the text that was between the tokens. So you could get back spaces, but not newlines.
If you're comfortable with JFlex you could modify the tokenizer to treat newlines as a token. That's probably harder than any gain you'd get from it though.
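If spaces are enough, here is a minimal sketch of the offset idea (it reuses the analyzer and readFileToString from the question and assumes a Lucene 4.x-style API); removed stop words and newlines both come back as plain spaces:
TokenStream stream = analyzer.tokenStream("contents", new StringReader(readFileToString(f)));
CharTermAttribute termAtt = stream.getAttribute(CharTermAttribute.class);
OffsetAttribute offsetAtt = stream.getAttribute(OffsetAttribute.class);
StringBuilder out = new StringBuilder();
int lastEnd = 0;
stream.reset();
while (stream.incrementToken()) {
    // Pad with one space per character that sat between the previous token and this one
    // (this covers removed stop words too, but newlines also become spaces).
    for (int i = lastEnd; i < offsetAtt.startOffset(); i++) {
        out.append(' ');
    }
    out.append(termAtt.toString());
    lastEnd = offsetAtt.endOffset();
}
stream.end();
stream.close();
System.out.println(out.toString());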

Related

CSVParser processes LF as CRLF

I am trying to parse a CSV file as below
String NEW_LINE_SEPARATOR = "\r\n";
CSVFormat csvFileFormat = CSVFormat.DEFAULT.withRecordSeparator(NEW_LINE_SEPARATOR);
FileReader fr = new FileReader("201404051539.csv");
CSVParser csvParser = csvFileFormat.withHeader().parse(fr);
List<CSVRecord> recordsList = csvParser.getRecords();
The file has normal lines ending with CRLF characters; however, a few lines have an additional LF character appearing in the middle, i.e.
a,b,c,dCRLF --line1
e,fLF,g,h,iCRLF --line2
Due to this, the parse operation creates three records, whereas there are actually only two.
Is there a way to have the LF character in the middle of the second line not treated as a line break, so that parsing yields only two records?
Thanks
I think uniVocity-parsers is the only parser you will find that will work with line endings as you expect.
The equivalent code using univocity-parsers will be:
CsvParserSettings settings = new CsvParserSettings(); //many options here, check the tutorial
settings.getFormat().setLineSeparator("\r\n");
settings.getFormat().setNormalizedNewline('\u0001'); //uses a special character to represent a new record instead of \n.
settings.setNormalizeLineEndingsWithinQuotes(false); //does not replace \r\n by the normalized new line when reading quoted values.
settings.setHeaderExtractionEnabled(true); //extract headers from file
settings.trimValues(false); //does not remove whitespaces around values
CsvParser parser = new CsvParser(settings);
List<Record> recordsList = parser.parseAllRecords(new File("201404051539.csv"));
If you define a line separator to be \r\n then this is the ONLY sequence of characters that should identify a new record (when outside quotes). All values can have either \r or \n without being enclosed in quotes because that's NOT the line separator sequence.
When parsing the input sample you gave:
String input = "a,b,c,d\r\ne,f\n,g,h,i\r\n";
parser.parseAll(new StringReader(input));
The result will be:
LINE1 = [a, b, c, d]
LINE2 = [e, f
, g, h, i]
Disclosure: I'm the author of this library. It's open-source and free (Apache 2.0 license)

How to print pretty JSON using docx4j into a word document?

I want to print a simple pretty-printed JSON string (containing multiple line breaks, i.e. many \n) into a Word document. I tried the following, but docx4j just prints all the contents inline on one single line (without the \n), when ideally it should print the multiline pretty JSON as-is, recognising the \n characters the JSON string contains:
1)
wordMLPackage.getMainDocumentPart().addParagraphOfText({multiline pretty json String})
2)
ObjectFactory factory = Context.getWmlObjectFactory();
P p = factory.createP();
Text t = factory.createText();
t.setValue(text);
R run = factory.createR();
run.getContent().add(t);
p.getContent().add(run);
PPr ppr = factory.createPPr();
p.setPPr(ppr);
ParaRPr paraRpr = factory.createParaRPr();
ppr.setRPr(paraRpr);
wordMLPackage.getMainDocumentPart().addObject(p);
Looking for help. Thanks.
The docx file format doesn't treat \n as a newline.
So you'll need to split your string on \n, and for each piece either create a new P (paragraph) or insert a w:br (line break), like so:
Br br = wmlObjectFactory.createBr();
run.getContent().add( br);
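For example, here is a minimal sketch of that approach, splitting the JSON string on \n and inserting a w:br between the pieces (jsonString stands in for your pretty-printed string):
ObjectFactory factory = Context.getWmlObjectFactory();
P p = factory.createP();
R run = factory.createR();
String[] lines = jsonString.split("\n");
for (int i = 0; i < lines.length; i++) {
    Text t = factory.createText();
    t.setValue(lines[i]);
    t.setSpace("preserve"); // keep the leading spaces of the pretty-printed JSON
    run.getContent().add(t);
    if (i < lines.length - 1) {
        run.getContent().add(factory.createBr()); // line break between lines, not after the last one
    }
}
p.getContent().add(run);
wordMLPackage.getMainDocumentPart().addObject(p);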

How to access values of a line, while reading in a text file in Java

I am trying to load two files at the same time, but I also need to work with the first file, gps1. I want to go through the gps1 file line by line and, depending on the sentence type (which I explain below), do different things with that line before moving on to the next one.
Basically, gps1 has multiple lines, and each line falls under one of a few categories, all starting with $GP followed by other characters. Some of these sentence types have a timestamp, which I need to collect, and some do not.
File gps1File = new File(gpsFile1);
File gps2File = new File(gpsFile2);
FileReader filegps1 = new FileReader(gpsFile1);
FileReader filegps2 = new FileReader(gpsFile2);
BufferedReader buffer1 = new BufferedReader(filegps1);
BufferedReader buffer2 = new BufferedReader(filegps2);
String gps1;
String gps2;
while ((gps1 = buffer1.readLine()) != null) {
    // process each line of gps1 here, depending on its sentence type
}
The gps1 data file is as follows
$GPGSA,A,3,28,09,26,15,08,05,21,24,07,,,,1.6,1.0,1.3*3A
$GPRMC,151018.000,A,5225.9627,N,00401.1624,W,0.11,104.71,210214,,*14
$GPGGA,151019.000,5225.9627,N,00401.1624,W,1,09,1.0,38.9,M,51.1,M,,0000*72
$GPGSA,A,3,28,09,26,15,08,05,21,24,07,,,,1.6,1.0,1.3*3A
Thanks
I don't really understand the problem you are facing, but anyway, if you want to get at each line's contents you can use a StringTokenizer:
StringTokenizer st = new StringTokenizer(gps1, ",");
And then access the data one by one:
while (st.hasMoreTokens()) {
    String s = st.nextToken();
}
EDIT:
NB: the first token will be your "$GPXXX" attribute
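Putting that together with the loop from the question, a rough sketch that branches on the sentence type and pulls the timestamp out of the sentences that carry one ($GPRMC and $GPGGA have it as their second field in the sample data, $GPGSA does not) could look like this; note that StringTokenizer skips empty fields, so use String.split(",") instead if field positions matter:
while ((gps1 = buffer1.readLine()) != null) {
    StringTokenizer st = new StringTokenizer(gps1, ",");
    String sentenceType = st.nextToken(); // e.g. "$GPRMC", "$GPGGA", "$GPGSA"
    if (sentenceType.equals("$GPRMC") || sentenceType.equals("$GPGGA")) {
        String timestamp = st.nextToken(); // e.g. "151018.000" (hhmmss.sss)
        // ...handle a timestamped sentence
    } else {
        // ...handle a sentence without a timestamp, such as $GPGSA
    }
}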

Lucene remove stopwords from file

I'm new to Lucene and I wish to remove stopwords from sentences in a large text file. Every sentence is stored on a separate line in the text file. The code I have currently is:
Tokenizer tokenizer = new StandardTokenizer(Version.LUCENE_41,
        new StringReader("if everyone got spam from me im extremely sorry"));
final StandardFilter standardFilter = new StandardFilter(Version.LUCENE_41, tokenizer);
final StopFilter stopFilter = new StopFilter(Version.LUCENE_41, standardFilter, sa.getStopwordSet());
final CharTermAttribute charTermAttribute = tokenizer.addAttribute(CharTermAttribute.class);
try {
    stopFilter.reset();
    while (stopFilter.incrementToken()) {
        final String token = charTermAttribute.toString();
        System.out.printf("%s ", token);
    }
} catch (Exception ex) {
    ex.printStackTrace();
}
However, as you can see, the StringReader only has one predefined sentence. How can I change this so that the program reads in all the sentences from my text file?
Thanks in advance!
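One way to extend the snippet above to a whole file, as a rough sketch (it reuses the sa analyzer from your code for the stop word set, assumes one sentence per line as you describe, and "sentences.txt" is a placeholder for your file):
try (BufferedReader reader = new BufferedReader(new FileReader("sentences.txt"))) {
    String sentence;
    while ((sentence = reader.readLine()) != null) {
        Tokenizer tokenizer = new StandardTokenizer(Version.LUCENE_41, new StringReader(sentence));
        TokenStream stream = new StopFilter(Version.LUCENE_41,
                new StandardFilter(Version.LUCENE_41, tokenizer), sa.getStopwordSet());
        CharTermAttribute charTermAttribute = stream.addAttribute(CharTermAttribute.class);
        stream.reset();
        while (stream.incrementToken()) {
            System.out.printf("%s ", charTermAttribute.toString());
        }
        stream.end();
        stream.close();
        System.out.println(); // one filtered sentence per output line
    }
}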

How to get proper string array when parsing CSV?

Using jcsv, I'm trying to parse a CSV to a specified type. When I parse it, it says the length of the data param is 1, which is incorrect. I tried removing line breaks, but it still says 1. Am I just missing something in plain sight?
This is my input string (the csvString variable):
"Symbol","Last","Chg(%)","Vol",
INTC,23.90,1.06,28419200,
GE,26.83,0.19,22707700,
PFE,31.88,-0.03,17036200,
MRK,49.83,0.50,11565500,
T,35.41,0.37,11471300,
This is the Parser
public class BuySignalParser implements CSVEntryParser<BuySignal> {
    @Override
    public BuySignal parseEntry(String... data) {
        // console says "Length 1"
        System.out.println("Length " + data.length);
        if (data.length != 4) {
            throw new IllegalArgumentException("data is not a valid BuySignal record");
        }
        String symbol = data[0];
        double last = Double.parseDouble(data[1]);
        double change = Double.parseDouble(data[2]);
        double volume = Double.parseDouble(data[3]);
        return new BuySignal(symbol, last, change, volume);
    }
}
And this is where I use the parser (right from the example)
CSVReader<BuySignal> cReader = new CSVReaderBuilder<BuySignal>(new StringReader(csvString))
        .entryParser(new BuySignalParser())
        .build();
List<BuySignal> signals = cReader.readAll();
jcsv allows different delimiter characters; the default is a semicolon. Use CSVStrategy.UK_DEFAULT to switch to commas (see the sketch below).
Also, you have four commas per line, and that usually indicates five values. You might want to remove the trailing delimiters from the end of each line.
I don't know how to make jcsv ignore the first (header) line.
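For the delimiter part, a minimal sketch of the jcsv change (assuming the CSVReaderBuilder.strategy(...) call from jcsv's own examples) would be:
CSVReader<BuySignal> cReader = new CSVReaderBuilder<BuySignal>(new StringReader(csvString))
        .strategy(CSVStrategy.UK_DEFAULT) // comma-delimited instead of the semicolon default
        .entryParser(new BuySignalParser())
        .build();
List<BuySignal> signals = cReader.readAll();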
I typically use CSVHelper to parse CSV files, and while jcsv seems pretty good, here is how you would do it with CSVHelper:
Reader reader = new InputStreamReader(new FileInputStream("persons.csv"), "UTF-8");
// bring in the first line with the headers if you want them
List<String> firstRow = CSVHelper.parseLine(reader);
List<String> dataRow = CSVHelper.parseLine(reader);
while (dataRow != null) {
    // ...put your code here to construct your objects from the strings
    dataRow = CSVHelper.parseLine(reader);
}
You shouldn't have commas at the end of lines. Generally there are cell delimiters (commas) and line delimiters (newlines). By placing commas at the end of the line it looks like the entire file is one long line.
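If you can't change the source data, one hedged workaround is to strip the trailing comma from each line before handing the string to the parser:
// Drop a single trailing comma at the end of every line of csvString before parsing,
// so each record has four values instead of four plus an empty fifth.
String cleaned = csvString.replaceAll("(?m),$", "");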
