CSVParser processes LF as CRLF - java

I am trying to parse a CSV file as below
String NEW_LINE_SEPARATOR = "\r\n";
CSVFormat csvFileFormat = CSVFormat.DEFAULT.withRecordSeparator(NEW_LINE_SEPARATOR);
FileReader fr = new FileReader("201404051539.csv");
CSVParser csvParser = csvFileFormat.withHeader().parse(fr);
List<CSVRecord> recordsList = csvParser.getRecords();
The file's lines normally end with CRLF; however, a few lines have an additional LF character in the middle, i.e.
a,b,c,dCRLF --line1
e,fLF,g,h,iCRLF --line2
Due to this, the parse operation creates three records when there are actually only two.
Is there a way to keep the LF character in the middle of the second line from being treated as a line break, so that parsing yields only two records?
Thanks

I think uniVocity-parsers is the only parser you will find that handles line endings the way you expect.
The equivalent code using univocity-parsers will be:
CsvParserSettings settings = new CsvParserSettings(); //many options here, check the tutorial
settings.getFormat().setLineSeparator("\r\n");
settings.getFormat().setNormalizedNewline('\u0001'); //uses a special character to represent a new record instead of \n.
settings.setNormalizeLineEndingsWithinQuotes(false); //does not replace \r\n by the normalized new line when reading quoted values.
settings.setHeaderExtractionEnabled(true); //extract headers from file
settings.trimValues(false); //does not remove whitespaces around values
CsvParser parser = new CsvParser(settings);
List<Record> recordsList = parser.parseAllRecords(new File("201404051539.csv"));
If you define a line separator to be \r\n then this is the ONLY sequence of characters that should identify a new record (when outside quotes). All values can have either \r or \n without being enclosed in quotes because that's NOT the line separator sequence.
When parsing the input sample you gave:
String input = "a,b,c,d\r\ne,f\n,g,h,i\r\n";
parser.parseAll(new StringReader(input));
The result will be:
LINE1 = [a, b, c, d]
LINE2 = [e, f
, g, h, i]
Disclosure: I'm the author of this library. It's open-source and free (Apache 2.0 license)
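If switching libraries isn't an option, a pre-processing workaround (a hedged sketch, not part of either library's API) is to strip lone LF characters that are not part of a CRLF pair before handing the text to the parser. Note that unlike the univocity approach above, this drops the embedded LF instead of preserving it inside the value:

```java
public class LoneLfCleaner {
    // Remove any \n that is not preceded by \r, leaving CRLF record
    // separators intact. The embedded LF is dropped, not preserved.
    static String stripLoneLf(String s) {
        return s.replaceAll("(?<!\r)\n", "");
    }

    public static void main(String[] args) {
        String input = "a,b,c,d\r\ne,f\n,g,h,i\r\n";
        // After cleaning, a CRLF-based parser sees exactly two records.
        System.out.println(stripLoneLf(input).replace("\r\n", "<CRLF>"));
    }
}
```

The negative lookbehind `(?<!\r)` is what keeps the CRLF sequences untouched.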

Related

Univocity CSV parser glues the whole line if it begins with quote "

I'm using univocity 2.7.5 to parse a CSV file. Until now it worked fine, parsing each row as a String array with n elements, where n = the number of columns in the row. But now I have a file where rows start with a quote ("), and the parser cannot handle it: it returns the row as a String array with only one element, which contains the whole row's data. I tried removing that quote from the CSV file and it worked fine, but there are about 500,000 rows. What should I do to make it work?
Here is the sample line from my file (it has quotes in source file too):
"100926653937,Kasym Amina,620414400630,Marzhan Erbolova,""Kazakhstan, Almaty, 66, 3"",87029845662"
And here's my code:
CsvParserSettings settings = new CsvParserSettings();
settings.setDelimiterDetectionEnabled(true);
CsvParser parser = new CsvParser(settings);
List<String[]> rows = parser.parseAll(csvFile);
Author of the library here. The input you have there is a well-formed CSV, with a single value consisting of:
100926653937,Kasym Amina,620414400630,Marzhan Erbolova,"Kazakhstan, Almaty, 66, 3",87029845662
If that row appeared in the middle of your input, I suppose your input has unescaped quotes (somewhere before you got to that line). Try playing with the unescaped quote handling setting:
For example, this might work:
settings.setUnescapedQuoteHandling(UnescapedQuoteHandling.STOP_AT_CLOSING_QUOTE);
If nothing works, and all your lines look like the one you posted, then you can parse the input twice (which is ugly and slow, but will work):
CsvParser parser = new CsvParser(settings);
parser.beginParsing(csvFile);
List<String[]> out = new ArrayList<>();
String[] row;
while ((row = parser.parseNext()) != null) {
    // got a row with unexpected length?
    if (row.length == 1) {
        // break it down again.
        row = parser.parseLine(row[0]);
    }
    out.add(row);
}
Hope this helps.

How to print pretty JSON using docx4j into a word document?

I want to print a simple pretty JSON string (containing multiple line breaks - many \n) into a Word document. I tried the following, but docx4j just prints all the content inline on one single line (ignoring the \n). Ideally it should print the multiline pretty JSON as-is, recognising the \n characters the JSON string contains:
1)
wordMLPackage.getMainDocumentPart().addParagraphOfText({multiline pretty json String})
2)
ObjectFactory factory = Context.getWmlObjectFactory();
P p = factory.createP();
Text t = factory.createText();
t.setValue(text);
R run = factory.createR();
run.getContent().add(t);
p.getContent().add(run);
PPr ppr = factory.createPPr();
p.setPPr(ppr);
ParaRPr paraRpr = factory.createParaRPr();
ppr.setRPr(paraRpr);
wordMLPackage.getMainDocumentPart().addObject(p);
Looking for help. Thanks.
The docx file format doesn't treat \n as a newline.
So you'll need to split your string on \n, and either create a new P, or use w:br, like so:
Br br = wmlObjectFactory.createBr();
run.getContent().add( br);
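The splitting step can be sketched as follows; this only illustrates splitting while keeping trailing empty lines (the docx4j `Br` creation is indicated in comments, since it needs the library on the classpath):

```java
public class NewlineSplit {
    // Split on \n, keeping trailing empty strings (limit -1) so blank
    // lines at the end of the JSON are not silently dropped.
    // For each element you would create a Text inside a run, and add a
    // w:br (factory.createBr()) between consecutive elements.
    static String[] splitLines(String text) {
        return text.split("\n", -1);
    }

    public static void main(String[] args) {
        for (String line : splitLines("{\n  \"a\": 1\n}")) {
            System.out.println(line);
        }
    }
}
```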

OpenCSV not escaping the quotes(")

I have a CSV file which can have the delimiter or unclosed quotes inside a quoted field. How do I make CSVReader ignore the quotes and delimiters inside quotes?
For example:
123|Bhajji|Maga|39|"I said Hey|" I am "5|'10."|"I a do "you"|get that"
This is the content of the file.
Below is the program that reads the CSV file:
@Test
public void readFromCsv() throws IOException {
    FileInputStream fis = new FileInputStream("/home/netspurt/awesomefile.csv");
    InputStreamReader isr = new InputStreamReader(fis, "UTF-8");
    CSVReader reader = new CSVReader(isr, '|', '\"');
    for (String[] row; (row = reader.readNext()) != null;) {
        System.out.println(Arrays.toString(row));
    }
    reader.close();
    isr.close();
    fis.close();
}
I get output something like this:
[123, Bhajji, Maga, 39, I said Hey| I am "5|'10., I a do "you|get that]
What happened to the quote after "you"?
Edit:
The OpenCSV dependency:
<dependency>
    <groupId>com.opencsv</groupId>
    <artifactId>opencsv</artifactId>
    <version>3.4</version>
</dependency>
from the source code of com.opencsv:opencsv:
/**
 * Constructs CSVReader.
 *
 * @param reader    the reader to an underlying CSV source.
 * @param separator the delimiter to use for separating entries
 * @param quotechar the character to use for quoted elements
 * @param escape    the character to use for escaping a separator or quote
 */
public CSVReader(Reader reader, char separator,
                 char quotechar, char escape) {
    this(reader, separator, quotechar, escape, DEFAULT_SKIP_LINES, CSVParser.DEFAULT_STRICT_QUOTES);
}
see http://sourceforge.net/p/opencsv/source/ci/master/tree/src/main/java/com/opencsv/CSVReader.java
There is a constructor with an additional parameter, escape, which allows escaping separators and quotes (as per the javadoc).
As the CSV format specifies, a quote (") inside a field must be preceded by another quote ("). Doubling the quotes solved my problem:
123|Bhajji|Maga|39|"I said Hey|"" I am ""5|'10."|"I a do ""you""|get that"
Reference: https://www.ietf.org/rfc/rfc4180.txt
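The quote-doubling rule from RFC 4180 can be sketched as a small helper (the name quoteField is hypothetical, not part of OpenCSV):

```java
public class Rfc4180 {
    // Wrap a field in quotes and double any embedded quote characters,
    // per RFC 4180. The '|' delimiter from the question needs no special
    // treatment once the field is quoted.
    static String quoteField(String field) {
        return "\"" + field.replace("\"", "\"\"") + "\"";
    }

    public static void main(String[] args) {
        System.out.println(quoteField("I a do \"you\"|get that"));
    }
}
```

Running this over each raw field reproduces the escaped form shown above.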
Sorry but I don't have enough rep to add a comment so I will have to add an answer.
For your original question of what happened to the quote after "you": the answer is the same as what happened to the quote before "I".
In CSV data, the quotes immediately before and after a separator mark the start and end of the field data and are thus removed. That is why those two quotes are missing.
You need to escape the quotes that are part of the field. The default escape character is the backslash (\).
Taking a guess as to which quotes you want to escape, the string should look like:
123|Bhajji|Maga|39|"I said \"Hey I am \"5'10. Do \"you\" get that?\""

How to get proper string array when parsing CSV?

Using jcsv, I'm trying to parse a CSV to a specified type. When I parse it, it says the length of the data param is 1. This is incorrect. I tried removing line breaks, but it still says 1. Am I just missing something in plain sight?
This is my input string (the csvString variable):
"Symbol","Last","Chg(%)","Vol",
INTC,23.90,1.06,28419200,
GE,26.83,0.19,22707700,
PFE,31.88,-0.03,17036200,
MRK,49.83,0.50,11565500,
T,35.41,0.37,11471300,
This is the Parser
public class BuySignalParser implements CSVEntryParser<BuySignal> {
    @Override
    public BuySignal parseEntry(String... data) {
        // console says "Length 1"
        System.out.println("Length " + data.length);
        if (data.length != 4) {
            throw new IllegalArgumentException("data is not a valid BuySignal record");
        }
        String symbol = data[0];
        double last = Double.parseDouble(data[1]);
        double change = Double.parseDouble(data[2]);
        double volume = Double.parseDouble(data[3]);
        return new BuySignal(symbol, last, change, volume);
    }
}
And this is where I use the parser (right from the example)
CSVReader<BuySignal> cReader = new CSVReaderBuilder<BuySignal>(new StringReader( csvString)).entryParser(new BuySignalParser()).build();
List<BuySignal> signals = cReader.readAll();
jcsv allows different delimiter characters; the default is the semicolon. Use CSVStrategy.UK_DEFAULT to use commas instead.
Also, you have four commas per line, which usually indicates five values. You might want to remove the trailing delimiters from the ends of the lines.
I don't know how to make jcsv ignore the first line
I typically use CSVHelper to parse CSV files, and while jcsv seems pretty good, here is how you would do it with CSVHelper:
Reader reader = new InputStreamReader(new FileInputStream("persons.csv"), "UTF-8");
// bring in the first line with the headers if you want them
List<String> firstRow = CSVHelper.parseLine(reader);
List<String> dataRow = CSVHelper.parseLine(reader);
while (dataRow != null) {
    // ...put your code here to construct your objects from the strings
    dataRow = CSVHelper.parseLine(reader);
}
You shouldn't have commas at the end of lines. Generally there are cell delimiters (commas) and line delimiters (newlines). By placing commas at the end of the line it looks like the entire file is one long line.
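The effect of the trailing comma can be seen with plain String.split; with limit -1 the empty fifth field shows up (a generic illustration of the point above, not jcsv's internal logic):

```java
public class TrailingComma {
    // limit -1 keeps the trailing empty field produced by the final comma,
    // which is what makes a four-value line look like five fields.
    static String[] fields(String line) {
        return line.split(",", -1);
    }

    public static void main(String[] args) {
        System.out.println(fields("INTC,23.90,1.06,28419200,").length); // 5, not 4
    }
}
```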

StandardAnalyzer - Apache Lucene

I'm developing a system where you input text files to a StandardAnalyzer, and the contents of each file are then replaced by the output of the StandardAnalyzer (which tokenizes and removes all the stop words). The code I've developed so far is:
File f = new File(path);
TokenStream stream = analyzer.tokenStream("contents",
        new StringReader(readFileToString(f)));
CharTermAttribute charTermAttribute = stream.getAttribute(CharTermAttribute.class);
while (stream.incrementToken()) {
    String term = charTermAttribute.toString();
    System.out.print(term);
}

// Following is the readFileToString(File f) function
StringBuilder textBuilder = new StringBuilder();
String ls = System.getProperty("line.separator");
Scanner scanner = new Scanner(new FileInputStream(f));
while (scanner.hasNextLine()) {
    textBuilder.append(scanner.nextLine() + ls);
}
scanner.close();
return textBuilder.toString();
readFileToString(f) is a simple function which converts the file contents to a string.
The output I'm getting is the words, with the spaces and newlines between them removed. Is there a way to preserve the original spaces and newline characters in the analyzer output, so that I can replace the original file contents with the filtered contents of the StandardAnalyzer and present them in readable form?
Tokenizers save the term position, so in theory you could look at the positions to determine how many characters lie between tokens, but they don't save the text that was between the tokens. So you could get back spaces, but not newlines.
If you're comfortable with JFlex you could modify the tokenizer to treat newlines as a token. That's probably harder than any gain you'd get from it though.
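Alternatively, if you also record each token's character offsets (Lucene exposes these via OffsetAttribute), you can rebuild the text yourself: drop the stop-word spans and keep whatever whitespace lay between tokens, newlines included. A sketch of that reconstruction, independent of Lucene:

```java
public class StopwordFilter {
    // Remove the given [start, end) spans from the text while keeping
    // everything between them (spaces, newlines, punctuation).
    // Spans must be sorted and non-overlapping.
    static String dropSpans(String text, int[][] spans) {
        StringBuilder sb = new StringBuilder();
        int pos = 0;
        for (int[] s : spans) {
            sb.append(text, pos, s[0]); // keep the gap before the dropped token
            pos = s[1];                 // skip the dropped token itself
        }
        sb.append(text.substring(pos));
        return sb.toString();
    }

    public static void main(String[] args) {
        String text = "the cat\nand the dog";
        // offsets of "the", "and", "the" (as a tokenizer would report them)
        int[][] stopSpans = { {0, 3}, {8, 11}, {12, 15} };
        System.out.println(dropSpans(text, stopSpans));
    }
}
```

The spans here are hard-coded for illustration; in practice you would collect them from OffsetAttribute while iterating the token stream.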
