Apache Commons CSV Mapping not found - java

I am trying to read a CSV file with certain headers into a Java object using Apache Commons CSV. However, when I run the code, I get the following exception:
Exception in thread "main" java.lang.IllegalArgumentException: Mapping for Color not found, expected one of [Color, Name, Price, House Cost, Rent, 1 House, 2 Houses, 3 Houses, 4 Houses, Hotel, Mortgage]
at org.apache.commons.csv.CSVRecord.get(CSVRecord.java:102)
at GameBoard.<init>(GameBoard.java:25)
at Game.main(Game.java:3)
Can someone explain where the exception is coming from? It appears to me that Apache Commons somehow is not matching my input to a column. Is there something wrong on my part, or is something else broken? Here is my code snippet:
Reader in;
Iterable<CSVRecord> records = null;
try {
    in = new FileReader(new File(Objects.requireNonNull(getClass().getClassLoader().getResource("Properties.csv")).getFile()));
    records = CSVFormat.EXCEL.withFirstRecordAsHeader().parse(in);
} catch (IOException | NullPointerException e) {
    e.printStackTrace();
    System.exit(1);
}
for (CSVRecord record : records) {
    spaces.add(new Property(
        record.get("Color"),
        record.get("Name"),
        Integer.parseInt(record.get("Price")),
And here are my csv headers (sorry, one was cut off but that's not the point):
Thanks!

I had the same problem, which only occurs if you reference the first column; all other column names work. The problem is that the UTF-8 representation of the file starts with a byte order mark, the bytes "0xEF,0xBB,0xBF" (see the Wikipedia page), which gets attached to the first header name. This is a known problem for commons-csv, but since it is considered application specific, it won't be fixed there (even though CSVFormat.EXCEL.parse should arguably handle byte order marks).
However, there is a documented workaround for this:
http://commons.apache.org/proper/commons-csv/user-guide.html#Handling_Byte_Order_Marks
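The gist of that workaround, as a minimal sketch: wrap the input stream in BOMInputStream (from org.apache.commons.io.input, so Commons IO needs to be on the classpath) so the marker never reaches the parser.
InputStream is = getClass().getClassLoader().getResourceAsStream("Properties.csv");
// BOMInputStream silently consumes a leading UTF-8 byte order mark if one is present
Reader in = new InputStreamReader(new BOMInputStream(is), StandardCharsets.UTF_8);
Iterable<CSVRecord> records = CSVFormat.EXCEL.withFirstRecordAsHeader().parse(in);
for (CSVRecord record : records) {
    String color = record.get("Color");   // no longer fails, the BOM is not glued to "Color"
}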

I got the same weird exception. It actually said "Expecting one of ..." and then listed the field it said it could not find - just like in your case.
The reason was that I had set the wrong CSVFormat:
CSVFormat csvFormat = CSVFormat.newFormat(';');
This meant that my code was trying to separate fields on semi-colons in a file that actually had comma separators.
Once I used the DEFAULT CSVFormat, everything started to work.
CSVFormat csvFormat = CSVFormat.DEFAULT;
So the answer is that you probably need to set the CSVFormat correctly for your file.
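If your file really is semicolon-separated, a rough sketch of setting just the delimiter while still reading the header from the first record (using the Builder API available in newer commons-csv releases) might be:
CSVFormat csvFormat = CSVFormat.DEFAULT.builder()
        .setDelimiter(';')
        .setHeader()                 // use the first record as the header
        .setSkipHeaderRecord(true)
        .build();
Starting from DEFAULT rather than newFormat(';') also keeps the usual quote handling, which newFormat does not configure.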

Moving to Spring Boot version 2.6.7 from 2.4.5 brought about this error. I had to convert each csvRecord to a map before assigning it to my POJO, as follows:
for (CSVRecord csvRecord : csvRecords) {
    Map<String, String> csvMap = csvRecord.toMap();
    Model newModel = new Model();
    newModel.setSomething(csvMap.get("your_item"));
}

I also got the same exception when the header name in the CSV file (e.g. xyz) did not match the name I used in the code, e.g. calling csvRecord.get("x_z").
I resolved my problem by making the header name in the code match xyz.
try {
    fileReader = new BufferedReader(new InputStreamReader(is, "UTF-8"));
    csvParser = new CSVParser(fileReader,
            CSVFormat.DEFAULT.withFirstRecordAsHeader().withIgnoreHeaderCase().withTrim());
    Iterable<CSVRecord> csvRecords = csvParser.getRecords();
    for (CSVRecord csvRecord : csvRecords) {
        // read the columns by the exact header names, e.g. csvRecord.get("xyz")
    }
} catch (Exception e) {
    System.out.println("Reading CSV Error!");
    e.printStackTrace();
} finally {
    try {
        fileReader.close();
        csvParser.close();
    } catch (IOException e) {
        System.out.println("Closing fileReader/csvParser Error!");
        e.printStackTrace();
    }
}

Related

Resolving invalid data in CSV file with Apache Commons

Using the Apache Commons CSV library for parsing CSV data, I encounter an error:
java.lang.IllegalStateException: IOException reading next record: java.io.IOException:
(line 46196) invalid char between encapsulated token and delimiter
I am using the following setup:
try {
    File csvInput = getLatestFilefromDir(CSV_PATH);
    reader = new FileReader(csvInput);
    final CSVFormat csvFormat = CSVFormat.Builder.create()
            .setHeader(HEADERS)
            .setDelimiter(';')
            .setQuote('"')
            .setEscape('\\')
            .setSkipHeaderRecord(true)
            .build();
    Iterable<CSVRecord> csvRecords = csvFormat.parse(reader);
    for (CSVRecord csvRecord : csvRecords) {
        // processing
    }
} catch (Exception e) {
    log.error("Error retrieving CSV data.");
    e.printStackTrace();
}
As the error suggests, the data has a defect; here is the invalid entry:
"TABLE_NAME";"ATTRIBUTE";"VALUE"
"SWAP_LEG_TYPE";"SWAP_LEG_TYPE_DESC";"The payments (PAY or RECEIVE) of this \"Leg\" are based on the yield linked to a specific equity or an index. (or to the actual market price of the equity or the index ???)"
"CNTPTY_TYPE";"CNTPTY_TYPE_DESC";"With Local Government we mean the so called \Regional Governments or Local Authorities\\" (RGLA) as defined by the EBA (European Banking Authority).\""
Changing the data is out of my control. Assuming the backslash is used for escaping quotes as in the other entry, in this case it is used poorly and made it into the CSV file; presumably it should have been
...Authorities\ \" (RGLA)...
Is there a way to replace the string before parsing?
Or what can I do to extend the CSVFormat builder to accept such data?
I am thinking of a simple method that reads the whole input and just replaces the string \\ with \, as this is the only such instance in a million lines, but that seems wrong.
This is a slightly modified version of your original format that should solve your issue; setQuote(null) does all the magic.
final CSVFormat csvFormat = CSVFormat.Builder.create()
        .setHeader(HEADERS)
        .setDelimiter(';')
        .setQuote(null)
        .setEscape('\\')
        .setSkipHeaderRecord(true)
        .build();
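One caveat: with the quote character disabled, the wrapping double quotes are no longer stripped by the parser, so they arrive as part of the field text. A rough sketch of consuming the records and trimming them yourself (assuming HEADERS contains a column named VALUE):
try (Reader reader = new FileReader(csvInput)) {
    for (CSVRecord record : csvFormat.parse(reader)) {
        String value = record.get("VALUE");   // hypothetical column name from HEADERS
        // the literal quotes stay in the data, so strip them manually if needed
        if (value.startsWith("\"") && value.endsWith("\"")) {
            value = value.substring(1, value.length() - 1);
        }
        // processing...
    }
} catch (IOException e) {
    e.printStackTrace();
}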

Getting Duplicate while trying to read CSV file with Apache Common CSV

I have a class that tries to read a CSV file using Apache Commons CSV. So far my code is working, except that I am not getting the result I am expecting.
My code is displaying duplicated values from the csv file, as below:
support#gmail.com
google
google.com
support#gmail.com
google
tutorialspoint
info#tuto.com
google
My CSV File
Name,User Name,Password
google.com,support#gmail.com,google
tutorialspoint,info#tuto.com,google
i expect to get something like this:
google.com
support#gmail.com
google
tutorialspoint
info#tuto.com
google
Here is my block that parses the csv using Apache CSV
public List<String> readCSV(String[] fields) {
    // HERE WE START PROCESSING THE READ CSV CONTENTS
    List<String> contents = new ArrayList<String>();
    FileReader fileReader = null;
    CSVParser csvFileParser = null;
    // HERE WE START PROCESSING
    if (fields != null) {
        // Create the CSVFormat object with the header mapping
        CSVFormat csvFileFormat = CSVFormat.DEFAULT.withHeader(FILE_HEADER_MAPPING);
        try {
            // Create a new list of student to be filled by CSV file data
            List<String> content = new ArrayList<String>();
            // initialize FileReader object
            fileReader = new FileReader(FilePath);
            // initialize CSVParser object
            csvFileParser = new CSVParser(fileReader, csvFileFormat);
            // Get a list of CSV file records
            List<CSVRecord> csvRecords = csvFileParser.getRecords();
            // Read the CSV file records starting from the second record to skip the header
            for (int i = 1; i < csvRecords.size(); i++) {
                CSVRecord record = csvRecords.get(i);
                // Create a new student object and fill his data
                for (int j = 0; j < fields.length; j++) {
                    content.add(record.get(fields[j]));
                }
                // Here we submit to contents
                contents.addAll(content);
                System.out.println(contents.size());
            } // end of loop
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                fileReader.close();
                csvFileParser.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    // Here we return
    return contents;
}
I just can't figure out what I am missing here; any help will be welcome.
The reason is that you're re-adding the entire String list content on each iteration:
contents.addAll(content);
Either clear content on each iteration, or just change
content.add(record.get(fields[j]));
to
contents.add(record.get(fields[j]));
and remove the
contents.addAll(content);
line.
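A minimal sketch of the fixed loop with the second option (adding straight to contents and dropping the intermediate content list):
for (int i = 1; i < csvRecords.size(); i++) {
    CSVRecord record = csvRecords.get(i);
    for (int j = 0; j < fields.length; j++) {
        contents.add(record.get(fields[j]));   // add each field directly to the result list
    }
}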

how to create a hash table in java

I have a CSV file with 2 columns. I'm trying to create a hash table for each dimension - only adding a value if I haven't seen it before. I want to create 2 separate hash tables, one per column. The columns contain string and numeric values. From the class definition I found the containsKey(Object key) method, which tests if the specified object is a key in this hashtable. To explain in a bit more detail, my CSV file may look like the one below:
New York, 50
Sydney, jessi
california, 10
New York, 10
So for column 1, New York appears twice, and in the hash table I'd like to put the key New York with the value 2.
Can anyone help me create a hash table like this using the Java Hashtable class, or should I maintain a separate array?
Try this open source project on SourceForge called OpenCSV.
Then you could code something like this to read the CSV into your Map.
try {
    CSVReader reader = new CSVReader(new InputStreamReader(new FileInputStream(new File("/path/to/your/file.csv"))));
    Map<String, String> result = new HashMap<String, String>();
    for (String[] row : reader.readAll()) {
        result.put(row[0], row[1]);
    }
} catch (FileNotFoundException e1) {
    e1.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
You can read more on the OpenCSV documentation here.
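If the goal is to count how many times each value appears per column (as in the New York example), a rough sketch building two separate maps, one per column, with the containsKey check you mentioned could look like this (assuming a fresh CSVReader opened the same way as above; the names are only illustrative):
Map<String, Integer> column1Counts = new HashMap<String, Integer>();
Map<String, Integer> column2Counts = new HashMap<String, Integer>();
for (String[] row : reader.readAll()) {
    String col1 = row[0].trim();
    String col2 = row[1].trim();
    // bump the count for column 1, starting at 1 the first time the value is seen
    if (column1Counts.containsKey(col1)) {
        column1Counts.put(col1, column1Counts.get(col1) + 1);
    } else {
        column1Counts.put(col1, 1);
    }
    // same idea for column 2
    if (column2Counts.containsKey(col2)) {
        column2Counts.put(col2, column2Counts.get(col2) + 1);
    } else {
        column2Counts.put(col2, 1);
    }
}
// column1Counts.get("New York") is then 2 for the sample data above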

Read email contents using Apache Commons Email

I have different properties files, as shown below:
abc_en.properties
abc_ch.properties
abc_de.properties
All of these contain HTML tags and some static content, along with some image URLs.
I want to send an email message using Apache Commons Email, and I'm able to compose the name of the template in Java using the locale as well.
String name = "abc_ch.properties";
Now, how do I read it so I can send it as the HTML message parameter using Java?
HtmlEmail e = new HtmlEmail();
e.setHostName("my.mail.com");
...
e.setHtmlMsg(msg);
How do I get the msg parameter to hold the contents of the file? Any efficient and clean solution?
Can anyone provide sample Java code?
Note: The properties file has dynamic entries for the username and some other fields, like "Dear ,". How do I substitute those dynamically?
Thanks
I would assume that *.properties is a text file.
If so, then do a File read into a String
eg:
String name = getContents(new java.io.File("/path/file.properties"));
public static String getContents(File aFile) {
    StringBuffer contents = new StringBuffer();
    BufferedReader input = null;
    try {
        InputStreamReader fr = new InputStreamReader(new FileInputStream(aFile), "UTF8");
        input = new BufferedReader(fr);
        String line = null;
        while ((line = input.readLine()) != null) {
            contents.append(line);
            contents.append(System.getProperty("line.separator"));
        }
    } catch (FileNotFoundException ex) {
        //ex.printStackTrace();
    } catch (IOException ex) {
        //ex.printStackTrace();
    } finally {
        try {
            if (input != null) {
                input.close();
            }
        } catch (IOException ex) {
            //ex.printStackTrace();
        }
    }
    return contents.toString();
}
regards
Hi Mike,
Well, I guess that you are trying to send mails in multiple languages by rendering the elements from different property files at runtime. Also, you said "locale". Are you using the concept of Resource Bundles? In that case, before you send mails:
1) You need to understand the naming conventions for the property files, without which Java will not be able to load the appropriate property file at run time.
For this, read the first section of the Resource Bundles page.
2) Once your naming convention is fine, you can load the appropriate properties file like this:
Locale yourLocale = new Locale("en", "US");
ResourceBundle rb = ResourceBundle.getBundle("resourceBundleFileName", yourLocale);
3) A resource bundle properties file is nothing but (key, value) pairs. Hence you can retrieve the value of a key like this:
String dearString = rb.getString("Dear");
String emailBody= rb.getString("emailBody");
4) You can later use these values for setting the attributes in your commons-email API.
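As a rough sketch of that last step (assuming a bundle base name of abc and a hypothetical emailBody key whose value contains {0}-style placeholders for the dynamic parts such as the username), the lookup, the substitution with java.text.MessageFormat, and the commons-email call could be combined like this:
Locale yourLocale = new Locale("de");
ResourceBundle rb = ResourceBundle.getBundle("abc", yourLocale);   // loads abc_de.properties

String template = rb.getString("emailBody");             // e.g. "<p>Dear {0},</p> ..."
String msg = MessageFormat.format(template, userName);   // substitute the dynamic field

HtmlEmail email = new HtmlEmail();
email.setHostName("my.mail.com");
email.setHtmlMsg(msg);
// email.addTo(...), email.setFrom(...), email.setSubject(...), email.send();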
Hope you find this useful!

HL7 parsing to get ORC-2

I am having trouble reading the ORC-2 field from an ORM^O01 order message. I am using HapiStructures-v23-1.2.jar to read it, but this method (getFillerOrderNumber()) is returning a null value:
MSH|^~\\&|recAPP|20010|BIBB|HCL|20110923192607||ORM^O01|11D900220|D|2.3|1\r
PID|1|11D900220|11D900220||TEST^FOURTYONE||19980808|M|||\r
ZRQ|1|11D900220||CHARTMAXX TESTING ACCOUNT 2|||||||||||||||||Y\r
ORC|NW|11D900220||||||||||66662^NOT INDICATED^X^^^^^^^^^^U|||||||||CHARTMAXX
TESTING ACCOUNT 2|^695 S.BROADWAY^DENVER^CO^80209\r
OBR|1|11D900220||66^BHL, 9P21 GENOTYPE^L|NORMAL||20110920001800|
||NOTAVAILABLE|N||Y|||66662^NOT INDICATED^X^^^^^^^^^^U\r
I want to parse this message, read the ORC-2 field, and save it in the database.
public static String getOrderNumber() {
    Message hapiMsg = null;
    String fn = null;
    Parser p = new GenericParser();
    p.setValidationContext(null);
    try {
        hapiMsg = p.parse(hl7Message);
    } catch (Exception e) {
        logger.error(e);
    }
    Terser terser = new Terser(hapiMsg);
    try {
        ORM_O01 getOrc = (ORM_O01) hapiMsg;
        ORC orc = new ORC(getOrc, null);
        fn = orc.getFillerOrderNumber().toString();
    } catch (Exception e) {
        logger.error(e);
    }
    return fn;
}
I read in some posts that I have to walk down through the message structure to reach the ORC, OBR, and NTE segments. Can someone show me how to do this with a piece of code? Thanks in advance.
First I have to point out that ORC-2 is Placer Order Number and ORC-3 is Filler Order Number, not the other way round. So, what you might want to do is this:
ORM_O01 msg = ...
ORC orc = msg.getORDER().getORC();
String placerOrderNumber = orc.getPlacerOrderNumber().getEntityIdentifier().getValue();
String fillerOrderNumber = orc.getFillerOrderNumber().getEntityIdentifier().getValue();
I would suggest you read the Hapi documentation yourself: http://hl7api.sourceforge.net/v23/apidocs/index.html
Based on this code:
ORM_O01 getOrc = (ORM_O01)hapiMsg;
ORC orc = new ORC(getOrc, null);
String fn= orc.getFillerOrderNumber().toString();
It looks like you are creating a new ORC rather than pulling out the existing one from the message. I unfortunately can't provide the exact code as I'm only familiar with HL7, not HAPI.
EDIT: It looks like you may be able to do ORC orc = getOrc.getORDER().getORC();
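Putting the two answers together, a hedged sketch of pulling the existing ORC out of the parsed message and reading ORC-2 (the Placer Order Number) might look like this (assuming hapiMsg parsed successfully):
ORM_O01 orm = (ORM_O01) hapiMsg;
ORC orc = orm.getORDER().getORC();   // the ORC segment already present in the message
String orc2 = orc.getPlacerOrderNumber().getEntityIdentifier().getValue();   // ORC-2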
