How to use Commons CSV to remove duplicate columns in a CSV file using Java?

I have a CSV file that contains several duplicate columns. I am trying to remove these duplicates using Java. I found the Apache Commons CSV library, which some people use to remove duplicate rows. How can I use it to remove or skip duplicate columns?
For example, my CSV header is:
ID Name Email Email
So far my code is:
Reader reader = Files.newBufferedReader(Paths.get("user.csv"));
// read csv file
Iterable<CSVRecord> records = CSVFormat.DEFAULT.withFirstRecordAsHeader()
        .withIgnoreHeaderCase()
        .withTrim()
        .parse(reader);
for (CSVRecord record : records) {
    System.out.println("Record #: " + record.getRecordNumber());
    System.out.println("ID: " + record.get("ID"));
    System.out.println("Name: " + record.get("Name"));
    System.out.println("Email: " + record.get("Email"));
}
// close the reader
reader.close();

Your code is close to what you need - you just need to use CSVPrinter to write out your data to a new file.
import java.io.IOException;
import java.io.Reader;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVPrinter;
import org.apache.commons.csv.CSVRecord;

public class App {
    public static void main(String[] args) throws IOException {
        try (final Reader reader = Files.newBufferedReader(Paths.get("source.csv"),
                StandardCharsets.UTF_8)) {
            final Writer writer = Files.newBufferedWriter(Paths.get("target.csv"),
                    StandardCharsets.UTF_8,
                    StandardOpenOption.CREATE); // overwrites existing output file
            try (final CSVPrinter printer = CSVFormat.DEFAULT
                    .withHeader("ID", "Name", "Email")
                    .print(writer)) {
                // read each input file record:
                Iterable<CSVRecord> records = CSVFormat.DEFAULT
                        .withFirstRecordAsHeader()
                        .withIgnoreHeaderCase()
                        .withTrim()
                        .parse(reader);
                // write each output file record
                for (CSVRecord record : records) {
                    printer.print(record.get("ID"));
                    printer.print(record.get("Name"));
                    printer.print(record.get("Email"));
                    printer.println();
                }
            }
        }
    }
}
This transforms the following source file:
ID,Name,Email,Email
1,Albert,foo#bar.com,foo#bar.com
2,Brian,baz#bat.com,baz#bat.com
To this target file:
ID,Name,Email
1,Albert,foo#bar.com
2,Brian,baz#bat.com
Some points to note:
I was wrong in my comment. You do not need to use column indexes - you can use headings (as I do above) in your specific case.
Whenever reading and writing a file, it is recommended to provide the character encoding. In my case, I use UTF-8. (This assumes the original file was created as a UTF-8 file, of course - or is compatible with that encoding.)
When opening the reader and the writer I use "try-with-resources" statements. These mean I do not have to explicitly close the file resources - Java takes care of that for me.
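If you prefer not to hard-code the output header, another option is to derive it from the input and keep only the first occurrence of each column name. The sketch below is my own addition, not part of the answer above; it assumes a Commons CSV version that tolerates duplicate header names and provides getHeaderNames() (1.7 or later):
import java.io.Reader;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.LinkedHashSet;
import java.util.Set;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVPrinter;
import org.apache.commons.csv.CSVRecord;

public class DropDuplicateColumns {
    public static void main(String[] args) throws Exception {
        try (Reader reader = Files.newBufferedReader(Paths.get("source.csv"), StandardCharsets.UTF_8);
             Writer writer = Files.newBufferedWriter(Paths.get("target.csv"), StandardCharsets.UTF_8);
             CSVParser parser = CSVFormat.DEFAULT.withFirstRecordAsHeader().withTrim().parse(reader);
             CSVPrinter printer = new CSVPrinter(writer, CSVFormat.DEFAULT)) {

            // keep the first occurrence of each header name, preserving column order
            Set<String> headers = new LinkedHashSet<>(parser.getHeaderNames());
            printer.printRecord(headers);

            for (CSVRecord record : parser) {
                for (String header : headers) {
                    // record.get(name) returns the value of the column mapped to that name
                    printer.print(record.get(header));
                }
                printer.println();
            }
        }
    }
}
With duplicate names, record.get(name) simply returns whichever column Commons CSV mapped to that name, which is fine here because the duplicated columns hold the same data.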

Related

Java Iterator is skipping half elements

I wrote a small script to read from a CSV in Java. It takes a CSV and pushes some values from it into a HashMap. My CSV has 110 records (109 without the header), however I get a HashMap with 54 values. When I debug, I can see that at each iteration a line from my CSV is skipped.
Here's the code
package **CENSORED**.utils;

import com.day.cq.dam.api.Asset;
import com.day.cq.dam.api.Rendition;
import com.day.text.csv.Csv;
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.*;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;

public class DateFormatUtils {

    private static String dateFormatCsvPath = "/content/dam/csv/country_date_format.csv";

    public static String getDateFormatByLocale(Locale Locale, ResourceResolver resourceResolver) {
        Resource res = resourceResolver.getResource(dateFormatCsvPath);
        Asset asset = res.adaptTo(Asset.class);
        Rendition rendition = asset.getOriginal();
        InputStream is = rendition.adaptTo(InputStream.class);
        HashMap<String, String> localeToFormat = new HashMap<String, String>();
        Csv csv = new Csv();
        try {
            Iterator<String[]> rowIterator = csv.read(is, StandardCharsets.UTF_8.name());
            while (rowIterator.hasNext()) {
                String[] row = rowIterator.next();
                String country = row[1];
                String locale = row[4];
                String dateFormat = row[6];
                localeToFormat.put(locale.toLowerCase() + "_" + country.toLowerCase(), dateFormat);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        // lookup and return of the format for the given locale is omitted in the question
    }
}
Here are a few screenshots from my debugging:
At the 1st iteration, line 2 of my CSV gets added to my HashMap. The header has been skipped.
At the 2nd iteration, line 5 gets added to my HashMap, but lines 3-4 are not.
At the 3rd iteration, line 8 gets added to my HashMap, but lines 6-7 are not.
At the end I end up with 53 elements in my HashMap, while I expect 109.
Here is also a sample of my CSV:
ISO 3166 Country Code,ISO639-2 Country Code,Country,ISO 3166 Country Code,ISO639-2 Lang,Language,Date Format
ALB,AL,Albania,sqi,sq,Albanian,yyyy-MM-dd
ARE,AE,United Arab Emirates,ara,ar,Arabic,dd/MM/yyyy
ARG,AR,Argentina,spa,es,Spanish,dd/MM/yyyy
AUS,AU,Australia,eng,en,English,d/MM/yyyy
AUT,AT,Austria,deu,de,German,dd.MM.yyyy
BEL,BE,Belgium,fra,fr,French,d/MM/yyyy
BEL,BE,Belgium,nld,nl,Dutch,d/MM/yyyy
BGR,BG,Bulgaria,bul,bg,Bulgarian,yyyy-M-d
BHR,BH,Bahrain,ara,ar,Arabic,dd/MM/yyyy
BIH,BA,Bosnia and Herzegovina,srp,sr,Serbian,yyyy-MM-dd
BLR,BY,Belarus,bel,be,Belarusian,d.M.yyyy
BOL,BO,Bolivia,spa,es,Spanish,dd-MM-yyyy
BRA,BR,Brazil,por,pt,Portuguese,dd/MM/yyyy
CAN,CA,Canada,fra,fr,French,yyyy-MM-dd
CAN,CA,Canada,eng,en,English,dd/MM/yyyy
Finally, a last screenshot shows that my CSV has the correct EOL at the end of each line.
This is the csv.read() function, from a class made by Adobe for AEM:
public Iterator<String[]> read(InputStream in, String charset) throws IOException {
    if (charset == null) {
        charset = System.getProperty("file.encoding");
    }
    in = new BufferedInputStream(in, 4096);
    this.input = new InputStreamReader(in, charset);
    return this.read();
}
I finally went with another solution since I wasn't able to use this one. For posterity: I was developing this for an AEM project, and I decided to leverage the Generic List item in ACS Commons to get a dictionary with all the values I needed instead of reading from a CSV. As @Artistotle stated, there is definitely something wrong with the reader, so I'd advise against using com.day.text.csv.Csv.
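For anyone who still wants to read the file directly rather than through a Generic List, here is a rough sketch that parses the same columns with Apache Commons CSV instead of com.day.text.csv.Csv. The column indexes (1, 4, 6) come from the original code; the rest (class and method names, header skipping by record number) is illustrative only:
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVRecord;

public class DateFormatCsvReader {
    // builds the same locale -> date-format map as the original method,
    // but with Commons CSV doing the parsing
    public static Map<String, String> readLocaleToFormat(InputStream is) throws Exception {
        Map<String, String> localeToFormat = new HashMap<>();
        try (Reader reader = new InputStreamReader(is, StandardCharsets.UTF_8);
             CSVParser parser = CSVFormat.DEFAULT.parse(reader)) {
            for (CSVRecord row : parser) {
                if (row.getRecordNumber() == 1) {
                    continue; // skip the header row
                }
                String country = row.get(1);     // ISO639-2 Country Code
                String locale = row.get(4);      // ISO639-2 Lang
                String dateFormat = row.get(6);  // Date Format
                localeToFormat.put(locale.toLowerCase() + "_" + country.toLowerCase(), dateFormat);
            }
        }
        return localeToFormat;
    }
}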

Unexpected record type (org.apache.poi.hssf.record.HyperlinkRecord)

The problem:
I'm just trying to open a .xls file using the Apache POI 4.1.0 library, and it gives the same error as a similar question from 4 years ago.
I already tried versions 3.12 through 3.16, including 3.13.
All of these versions can open a blank .xls, or one I filled in myself, but not this one.
This document is generated automatically and I need to make a program that accepts it.
I already made a .NET Standard library in C# which works. I tried to use Xamarin Android and it was a horror (the app weighs 50 MB vs 3 MB due to various terrible SDK link errors), but that's a different story. So I decided to do it in Kotlin.
The code is from the documentation.
You can check the file on git.
val inputStream = FileInputStream("./test.xls")
val wb = HSSFWorkbook(inputStream)
I expect no errors while opening the .xls file.
The actual output is:
Exception in thread "main" java.lang.RuntimeException: Unexpected record type (org.apache.poi.hssf.record.HyperlinkRecord)
at org.apache.poi.hssf.record.aggregates.RowRecordsAggregate.<init>(RowRecordsAggregate.java:97)
at org.apache.poi.hssf.model.InternalSheet.<init>(InternalSheet.java:183)
at org.apache.poi.hssf.model.InternalSheet.createSheet(InternalSheet.java:122)
at org.apache.poi.hssf.usermodel.HSSFWorkbook.<init>(HSSFWorkbook.java:354)
at org.apache.poi.hssf.usermodel.HSSFWorkbook.<init>(HSSFWorkbook.java:400)
at org.apache.poi.hssf.usermodel.HSSFWorkbook.<init>(HSSFWorkbook.java:381)
at ru.plumber71.toolbox.ExcelParcerKt.main(ExcelParcer.kt:19)
at ru.plumber71.toolbox.ExcelParcerKt.main(ExcelParcer.kt)
The document will not be modified in any way. If there are any other libraries that can just read the dataset or strings from the .xls file, that will be OK.
After some investigation I found the problem with your test.xls file.
According to the file format specifications, all HyperlinkRecords should be together in the Hyperlink Table. It is contained in the Sheet Substream, following the cell records. In your case the HyperlinkRecords are between other records (between NumberRecords and LabelSSTRecords in this case). So I suspect it was not Excel that created that test.xls file.
Excel might be tolerant enough to open that file nevertheless. But you cannot expect that Apache POI also tries to tolerate all possible violations of the file format. If you open the file using Excel and then re-save it, Apache POI is able to create the Workbook after that.
Apache POI is not able to repair this the way Excel can. But one could read the POIFSFileSystem in a low-level way and filter out the HyperlinkRecords that are between other records. That way one could read the content using Apache POI, except of course for the hyperlinks.
Example:
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.poi.poifs.filesystem.POIFSFileSystem;
import org.apache.poi.poifs.filesystem.DirectoryNode;
import org.apache.poi.hssf.record.Record;
import org.apache.poi.hssf.record.NameRecord;
import org.apache.poi.hssf.record.NameCommentRecord;
import org.apache.poi.hssf.record.HyperlinkRecord;
import org.apache.poi.hssf.record.RecordFactoryInputStream;
import org.apache.poi.hssf.record.RecordFactory;
import org.apache.poi.hssf.model.RecordStream;
import org.apache.poi.hssf.model.InternalWorkbook;
import org.apache.poi.hssf.model.InternalSheet;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFName;
import org.apache.poi.ss.usermodel.DataFormatter;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.util.CellReference;
import java.util.List;
import java.util.ArrayList;
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.lang.reflect.Constructor;
class ExcelOpenHSSF {
public static void main(String[] args) throws Exception {
String fileName = "test(2).xls";
try (InputStream is = new FileInputStream(fileName);
POIFSFileSystem fileSystem = new POIFSFileSystem(is)) {
//find workbook directory entry
DirectoryNode directory = fileSystem.getRoot();
String workbookName = "";
for(String wbName : InternalWorkbook.WORKBOOK_DIR_ENTRY_NAMES) {
if(directory.hasEntry(wbName)) {
workbookName = wbName;
break;
}
}
InputStream stream = directory.createDocumentInputStream(workbookName);
//loop over all records and manipulate if needed
List<Record> records = new ArrayList<Record>();
RecordFactoryInputStream recStream = new RecordFactoryInputStream(stream, true);
//here we filter out the HyperlinkRecords that are between other records (NumberRecords and LabelSSTRecords in that case)
//System.out.println prints the problematic records
Record record1 = null;
Record record2 = null;
while ((record1 = recStream.nextRecord()) != null) {
record2 = recStream.nextRecord();
if (!(record1 instanceof HyperlinkRecord) && (record2 instanceof HyperlinkRecord)) {
System.out.println(record1);
System.out.println(record2);
records.add(record1);
} else if ((record1 instanceof HyperlinkRecord) && !(record2 instanceof HyperlinkRecord)) {
System.out.println(record1);
System.out.println(record2);
records.add(record2);
} else {
records.add(record1);
if (record2 != null) records.add(record2);
}
}
//now create the HSSFWorkbook
//see https://svn.apache.org/viewvc/poi/tags/REL_4_1_0/src/java/org/apache/poi/hssf/usermodel/HSSFWorkbook.java?view=markup#l322
InternalWorkbook internalWorkbook = InternalWorkbook.createWorkbook(records);
HSSFWorkbook wb = HSSFWorkbook.create(internalWorkbook);
int recOffset = internalWorkbook.getNumRecords();
Method convertLabelRecords = HSSFWorkbook.class.getDeclaredMethod("convertLabelRecords", List.class, int.class);
convertLabelRecords.setAccessible(true);
convertLabelRecords.invoke(wb, records, recOffset);
RecordStream rs = new RecordStream(records, recOffset);
while (rs.hasNext()) {
InternalSheet internelSheet = InternalSheet.createSheet(rs);
Constructor constructor = HSSFSheet.class.getDeclaredConstructor(HSSFWorkbook.class, InternalSheet.class);
constructor.setAccessible(true);
HSSFSheet hssfSheet = (HSSFSheet)constructor.newInstance(wb, internelSheet);
Field _sheets = HSSFWorkbook.class.getDeclaredField("_sheets");
_sheets.setAccessible(true);
@SuppressWarnings("unchecked")
List<HSSFSheet> sheets = (ArrayList<HSSFSheet>)_sheets.get(wb);
sheets.add(hssfSheet);
}
for (int i = 0 ; i < internalWorkbook.getNumNames() ; ++i){
NameRecord nameRecord = internalWorkbook.getNameRecord(i);
Constructor constructor = HSSFName.class.getDeclaredConstructor(HSSFWorkbook.class, NameRecord.class, NameCommentRecord.class);
constructor.setAccessible(true);
HSSFName name = (HSSFName)constructor.newInstance(wb, nameRecord, internalWorkbook.getNameCommentRecord(nameRecord));
Field _names = HSSFWorkbook.class.getDeclaredField("names");
_names.setAccessible(true);
@SuppressWarnings("unchecked")
List<HSSFName> names = (ArrayList<HSSFName>)_names.get(wb);
names.add(name);
}
//now the workbook is created properly
System.out.println(wb);
/*
//getting the data
DataFormatter formatter = new DataFormatter();
Sheet sheet = wb.getSheetAt(0);
for (Row row : sheet) {
for (Cell cell : row) {
CellReference cellRef = new CellReference(row.getRowNum(), cell.getColumnIndex());
System.out.print(cellRef.formatAsString());
System.out.print(" - ");
String text = formatter.formatCellValue(cell);
System.out.println(text);
}
}
*/
}
}
}
I was able to open a file of this "corrupted" type by using the JExcel API.
Apache POI can also open the file if you manually re-save it using the Excel application (which may not be suitable for everyone).
Sorry for asking strange questions. Thank you all, and I hope someone finds this useful.
import java.io.FileInputStream
import jxl.Workbook

val inputStream = FileInputStream("./testCorrupted.xls")
val workbook = Workbook.getWorkbook(inputStream)
val sheet = workbook.getSheet(0)
val cell1 = sheet.getCell(0, 0)
print(cell1.contents + ":")

How to store multiple columns of a csv dataset in a single variable in java so that the variable can be used as input feature for ml model

I have done this in Python. Here is my Python code, where X is the input variable in which I stored all the input columns of the CSV file and y is the target variable:
dataset=pandas.read_csv("newone.csv")
features = [0,1,4,5,6,7]
X =dataset.iloc[:,features]
y =dataset.iloc[:,2]
How can I do this in Java?
Here is my Java code, in which I read the CSV file but am only able to store one column value of the CSV in a variable.
public static void main(String[] args) throws IOException {
    BufferedReader reader = Files.newBufferedReader(Paths.get("C:/Users/N/Desktop/newone.csv"));
    CSVParser csvParser = new CSVParser(reader,
            CSVFormat.DEFAULT.withHeader("Encounter", "Relation", "Event", "Tag",
                    "Encounter_no", "Diagonosis", "User_Id", "Client_Id")
                    .withIgnoreHeaderCase().withTrim());
    for (CSVRecord csvRecord : csvParser) {
        String encntr = csvRecord.get("Encounter");
    }
}
----------
Consider using MyKong's code on how to read CSV files in Java. Something like the following snippet (heavily borrowed from MyKong's better version):
br = new BufferedReader(new FileReader(csvFile));
while ((line = br.readLine()) != null) {
    String[] valuesFromLine = line.split(cvsSplitBy);
    String secondValue = valuesFromLine[1];
    // do something with second value
}
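To get something closer to the pandas snippet, where all of the selected columns end up in a single variable, you can collect the chosen columns of every record into a two-dimensional array. The following is only a sketch using Commons CSV (which the question already uses); the column indexes match the Python example, and it assumes the selected columns hold numeric values:
import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVRecord;

public class FeatureMatrixExample {
    public static void main(String[] args) throws Exception {
        int[] features = {0, 1, 4, 5, 6, 7}; // like "features" in the pandas code
        int target = 2;                      // like "y" in the pandas code
        List<double[]> xRows = new ArrayList<>();
        List<Double> yValues = new ArrayList<>();

        try (Reader reader = Files.newBufferedReader(Paths.get("newone.csv"));
             CSVParser parser = CSVFormat.DEFAULT.withFirstRecordAsHeader().withTrim().parse(reader)) {
            for (CSVRecord record : parser) {
                double[] row = new double[features.length];
                for (int i = 0; i < features.length; i++) {
                    row[i] = Double.parseDouble(record.get(features[i])); // assumes numeric columns
                }
                xRows.add(row);
                yValues.add(Double.parseDouble(record.get(target)));
            }
        }

        double[][] X = xRows.toArray(new double[0][]); // one variable holding all input columns
        System.out.println("Loaded " + X.length + " rows with " + features.length + " features each");
    }
}
X is then one variable holding all the input columns, which can be fed to whatever model API you are using.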
This depends entirely on what the relationship between your columns is like. It is impossible to answer this question in a general manner as this changes from dataset to dataset and even from algorithm to algorithm, but here are a few approaches you might like to try:
Use Principal Component Analysis to identify if there any variables in your desired tuple of columns you can omit because they contribute very little to the row class variable.
Use Feature Hashing to reduce the dimensionality of your dataset by bundling together related properties (this does not work as a blanket solution - indeed, nothing in ML ever does. Try it before you commit to it).
If the columns you'd like to unite are numerical, you may want to think of an algorithm to join them in a unique way, or a way which makes sense. If they are categorical, a sparse one-hot bit vector may help you, as sketched below.
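As an illustration of that last point, here is a minimal, library-free sketch of one-hot encoding a categorical column (the category values are made up):
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class OneHotExample {
    public static void main(String[] args) {
        // hypothetical categorical column values
        List<String> column = Arrays.asList("Cardiology", "Oncology", "Cardiology", "Neurology");

        // assign each distinct category a stable index
        Map<String, Integer> index = new LinkedHashMap<>();
        for (String value : column) {
            index.putIfAbsent(value, index.size());
        }

        // encode each row as a 0/1 vector with a single 1 at the category's index
        List<int[]> encoded = new ArrayList<>();
        for (String value : column) {
            int[] vector = new int[index.size()];
            vector[index.get(value)] = 1;
            encoded.add(vector);
        }
        encoded.forEach(v -> System.out.println(Arrays.toString(v)));
    }
}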
Download these jars: https://mvnrepository.com/artifact/org.apache.poi/poi-ooxml/3.15 and https://mvnrepository.com/artifact/org.apache.poi/poi/3.15, and add them to your build path.
import java.io.FileInputStream;
import java.io.IOException;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class ExcelReader {
    public static void main(String[] args) throws IOException {
        // specify your file path
        FileInputStream file = new FileInputStream("D:\\test.xlsx");
        XSSFWorkbook workbook = new XSSFWorkbook(file);
        // fetch the first sheet
        XSSFSheet sheet = workbook.getSheetAt(0);
        // iterate through rows
        for (int c = 0; c <= sheet.getLastRowNum(); c++) {
            Row rows = sheet.getRow(c);
            // iterate through columns (getLastCellNum() is one past the last cell index)
            for (int b = 0; b < rows.getLastCellNum(); b++) {
                Cell cells = rows.getCell(b);
                // read the cell value as a string
                String comp = cells.getStringCellValue();
            }
        }
    }
}

Delete specific row numbers in .csv file using Java

For example: I am trying to search for the text "abc" in a .csv file; it is present in column 6 of multiple rows, and I need to delete those rows.
I tried the code below. I am able to get the line/row number where the text "abc" is present in column 6, but it is not deleting the rows.
import java.io.BufferedReader;
import java.io.*;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;
import com.opencsv.CSVReader;
import com.opencsv.CSVWriter;

public class ReadExcel {
    public static void main(String[] args) throws Exception {
        String csvFile = "csv filelocation";
        CSVReader reader = new CSVReader(new FileReader(csvFile));
        List<String[]> allElements = reader.readAll();
        String[] nextLine;
        int lineNumber = 0;
        while ((nextLine = reader.readNext()) != null) {
            lineNumber++;
            if (nextLine[5].equalsIgnoreCase("abc")) {
                System.out.println("Line # " + lineNumber);
                allElements.remove(lineNumber);
            }
        }
    }
}
For reading files in CSV format, I am currently using the super-csv library. There are various examples.
Let me know if you need help using it.
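For example, a minimal sketch with super-csv's CsvListReader might look like the following (the file name and column index are placeholders):
import java.io.FileReader;
import java.util.List;
import org.supercsv.io.CsvListReader;
import org.supercsv.io.ICsvListReader;
import org.supercsv.prefs.CsvPreference;

public class SuperCsvExample {
    public static void main(String[] args) throws Exception {
        try (ICsvListReader reader = new CsvListReader(new FileReader("yourfile.csv"),
                CsvPreference.STANDARD_PREFERENCE)) {
            List<String> row;
            while ((row = reader.read()) != null) {
                // row.get(5) would be column 6, as in the question
                System.out.println(row);
            }
        }
    }
}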
So, if you would like to use the opencsv library, here is a new example for writing the new content to a CSV file, taking inspiration from your example code.
List<String[]> allElements; /* This list will contain the lines that cover your criteria */
/*
...
*/
CSVWriter writer = new CSVWriter(new FileWriter("yourfile.csv"));
writer.writeAll(allElements);
writer.close();
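Putting it together, a complete sketch of the flow could read everything once, keep only the rows whose sixth column is not "abc", and write the survivors back out. This is only an illustration based on the question's code; note that it does not mix readAll() with readNext() on the same reader, and it actually writes the result to disk, which the original code never did:
import java.io.FileReader;
import java.io.FileWriter;
import java.util.ArrayList;
import java.util.List;
import com.opencsv.CSVReader;
import com.opencsv.CSVWriter;

public class RemoveMatchingRows {
    public static void main(String[] args) throws Exception {
        String csvFile = "csv filelocation";
        List<String[]> kept = new ArrayList<>();

        // read everything once and keep only the rows we want
        try (CSVReader reader = new CSVReader(new FileReader(csvFile))) {
            for (String[] row : reader.readAll()) {
                if (row.length < 6 || !row[5].equalsIgnoreCase("abc")) {
                    kept.add(row);
                }
            }
        }

        // write the surviving rows back (here to the same file, overwriting it)
        try (CSVWriter writer = new CSVWriter(new FileWriter(csvFile))) {
            writer.writeAll(kept);
        }
    }
}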

Convert .csv to .xls in Java

Does anyone here know of any quick, clean way to convert csv files to xls or xlsx files in java?
I have something to manage csv files already in place and I need the extra compatibility for other programs.
Sample code in addition to package names is always well appreciated.
Many thanks,
Justian
Here's my code thus far. I need to remove the returns ("\n") from the lines. Some of my cells contain multiple lines of information (a list), so I can use "\n" in csv to indicate multiple lines within a cell, but xls treats these as if I mean to put them on a new line.
The code is modified from the internet and a little messy at the moment. You might notice some deprecated methods, as it was written in 2004, and be sure to ignore the terrible return statements. I'm just using S.o.p at the moment for testing and I'll clean that up later.
package jab.jm.io;
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import org.apache.poi.hssf.usermodel.HSSFCell;
import org.apache.poi.hssf.usermodel.HSSFRow;
import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
public class FileConverter {
public static String ConvertCSVToXLS(String file) throws IOException {
if (file.indexOf(".csv") < 0)
return "Error converting file: .csv file not given.";
String name = FileManager.getFileNameFromPath(file, false);
ArrayList<ArrayList<String>> arList = new ArrayList<ArrayList<String>>();
ArrayList<String> al = null;
String thisLine;
DataInputStream myInput = new DataInputStream(new FileInputStream(file));
while ((thisLine = myInput.readLine()) != null) {
al = new ArrayList<String>();
String strar[] = thisLine.split(",");
for (int j = 0; j < strar.length; j++) {
// My Attempt (BELOW)
String edit = strar[j].replace('\n', ' ');
al.add(edit);
}
arList.add(al);
System.out.println();
}
try {
HSSFWorkbook hwb = new HSSFWorkbook();
HSSFSheet sheet = hwb.createSheet("new sheet");
for (int k = 0; k < arList.size(); k++) {
ArrayList<String> ardata = (ArrayList<String>) arList.get(k);
HSSFRow row = sheet.createRow((short) 0 + k);
for (int p = 0; p < ardata.size(); p++) {
System.out.print(ardata.get(p));
HSSFCell cell = row.createCell((short) p);
cell.setCellValue(ardata.get(p).toString());
}
}
FileOutputStream fileOut = new FileOutputStream(
FileManager.getCleanPath() + "/converted files/" + name
+ ".xls");
hwb.write(fileOut);
fileOut.close();
System.out.println(name + ".xls has been generated");
} catch (Exception ex) {
}
return "";
}
}
Don't know if you know this already, but:
Excel (if that's your real target) is easily able to read .csv files directly, so any conversion you'd do would only be a courtesy to your less "gifted" users.
CSV is a lowest-common-denominator format. It's unlikely for any converter to add information to that found in a .csv file that will make it more useful. In other words, CSV is a "dumb" format and converting it to .xls will (probably) increase file size but not make the format any smarter.
Curtis' suggestion of POI is the first thing that would come to my mind too.
If you're doing this conversion on a Windows machine, another alternative could be Jacob, a Java-COM bridge that would allow you to effectively remote control Excel from a Java program so as to do things like open a file and save in a different format, perhaps even applying some formatting changes or such.
Finally, I've also had some success doing SQL INSERTs (via JDBC) into an Excel worksheet accessed via the JDBC-ODBC bridge; i.e., ODBC can make an Excel file look like a database. It's not very flexible, though: you can't ask the DB to create arbitrarily named .XLS files.
EDIT:
It looks to me like readLine() is already not giving you whole lines. How is it to know that carriage return is not a line terminator? You should be able to verify this with debug print statements right after the readLine().
If this is indeed so, it would suck because the way forward would be for you to
either recognize incomplete lines and paste them together after the fact,
or write your own substitute for readLine(). A simple approach would be to read character by character, replacing CRs within a CSV string and accumulating text in a StringBuilder until you feel you have a complete line.
Both alternatives are work you probably weren't looking forward to; a rough sketch of the second approach follows.
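For what it's worth, the second alternative could read character by character and treat line breaks as part of the field while inside a quoted value. This is illustrative only and deliberately naive (it does not handle escaped quotes); a CSV library such as OpenCSV or Commons CSV handles these cases more robustly:
import java.io.IOException;
import java.io.Reader;

public class CsvLineReader {
    /**
     * Reads one logical CSV record, treating line breaks inside double-quoted
     * fields as part of the field rather than as record terminators.
     * Returns null at end of stream.
     */
    public static String readRecord(Reader in) throws IOException {
        StringBuilder sb = new StringBuilder();
        boolean inQuotes = false;
        int c;
        while ((c = in.read()) != -1) {
            if (c == '"') {
                inQuotes = !inQuotes;      // naive toggle; does not handle escaped quotes ("")
                sb.append((char) c);
            } else if ((c == '\n' || c == '\r') && !inQuotes) {
                if (sb.length() == 0) {
                    continue;              // skip the LF of a CRLF pair and blank lines
                }
                return sb.toString();      // end of a record
            } else {
                sb.append((char) c);
            }
        }
        return sb.length() == 0 ? null : sb.toString();
    }
}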
If you want to read or write XLS or XLSX files in Java, Apache POI is a good bet: http://poi.apache.org/
Copy and paste the program below. I ran it and it works fine; let me know if you have any concerns about it. (You need the Apache POI jar to run this program.)
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import org.apache.poi.hssf.usermodel.HSSFCell;
import org.apache.poi.hssf.usermodel.HSSFRow;
import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import org.apache.poi.ss.usermodel.Cell;
public class CSVToExcelConverter {
public static void main(String args[]) throws IOException
{
ArrayList arList=null;
ArrayList al=null;
String fName = "test.csv";
String thisLine;
int count=0;
FileInputStream fis = new FileInputStream(fName);
DataInputStream myInput = new DataInputStream(fis);
int i=0;
arList = new ArrayList();
while ((thisLine = myInput.readLine()) != null)
{
al = new ArrayList();
String strar[] = thisLine.split(",");
for(int j=0;j<strar.length;j++)
{
al.add(strar[j]);
}
arList.add(al);
System.out.println();
i++;
}
try
{
HSSFWorkbook hwb = new HSSFWorkbook();
HSSFSheet sheet = hwb.createSheet("new sheet");
for(int k=0;k<arList.size();k++)
{
ArrayList ardata = (ArrayList)arList.get(k);
HSSFRow row = sheet.createRow((short) 0+k);
for(int p=0;p<ardata.size();p++)
{
HSSFCell cell = row.createCell((short) p);
String data = ardata.get(p).toString();
if(data.startsWith("=")){
cell.setCellType(Cell.CELL_TYPE_STRING);
data=data.replaceAll("\"", "");
data=data.replaceAll("=", "");
cell.setCellValue(data);
}else if(data.startsWith("\"")){
data=data.replaceAll("\"", "");
cell.setCellType(Cell.CELL_TYPE_STRING);
cell.setCellValue(data);
}else{
data=data.replaceAll("\"", "");
cell.setCellType(Cell.CELL_TYPE_NUMERIC);
cell.setCellValue(data);
}
//*/
// cell.setCellValue(ardata.get(p).toString());
}
System.out.println();
}
FileOutputStream fileOut = new FileOutputStream("test.xls");
hwb.write(fileOut);
fileOut.close();
System.out.println("Your excel file has been generated");
} catch ( Exception ex ) {
ex.printStackTrace();
} //main method ends
}
}
The tools in Excel are not adequate for what the OP wants to do. He's on the right track there. Excel cannot import multiple CSV files into different worksheets in the same file, which is why you'd want to do it in code. My suggestion is to use OpenCSV to read the CSV, as it can automatically correct for newlines in data and missing columns, and it's free and open source. It's actually very, very robust and can handle all sorts of different non-standard CSV files.
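As a sketch of what that might look like, the following reads several CSV files with OpenCSV and writes each one to its own sheet of a single .xls workbook via POI; the file names are placeholders and error handling is omitted:
import java.io.FileOutputStream;
import java.io.FileReader;
import java.util.List;
import com.opencsv.CSVReader;
import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import org.apache.poi.ss.usermodel.Row;

public class CsvFilesToWorkbook {
    public static void main(String[] args) throws Exception {
        String[] csvFiles = {"first.csv", "second.csv"}; // placeholder input files
        HSSFWorkbook workbook = new HSSFWorkbook();

        for (String csvFile : csvFiles) {
            // one sheet per CSV file, named after the file
            HSSFSheet sheet = workbook.createSheet(csvFile.replace(".csv", ""));
            try (CSVReader reader = new CSVReader(new FileReader(csvFile))) {
                List<String[]> rows = reader.readAll();
                for (int r = 0; r < rows.size(); r++) {
                    Row row = sheet.createRow(r);
                    String[] values = rows.get(r);
                    for (int c = 0; c < values.length; c++) {
                        row.createCell(c).setCellValue(values[c]);
                    }
                }
            }
        }

        try (FileOutputStream out = new FileOutputStream("combined.xls")) {
            workbook.write(out);
        }
        workbook.close();
    }
}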
You wrote:
I have something to manage csv files already in place and I need the extra compatibility for other programs.
What are those other programs? Are they required to access your data through Excel files, or could they work with a JDBC or ODBC connection to a database? Using a database as the central location, you could extract the data into CSV files or other formats as needed.
I created a small piece of software called csv2xls. It requires Java.
