how to export dynamic column data to excel - java

I have to export data to Excel with a dynamic number of columns, using the Apache POI workbook in Java. On every execution the column details are generated dynamically and saved in a list:
List<Object> expColName = new ArrayList<Object>();
From this list I have to obtain the individual values and export them into each column of the Excel sheet:
for (int i = 0; i < expColName.size(); i++) {
    data.put("1", new Object[] { expColName.get(i) });
}
The above code gives only the last column value in the Excel sheet.

What type is data, and how do you read the values from the map?
It seems like you are putting every object into the same key of the Map; that's why you only get the last item from the list.
You could try to give it a test with:
for (int i = 0; i < expColName.size(); i++) {
    data.put(i + "", new Object[] { expColName.get(i) });
}
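To see why the original loop loses data, it helps to strip Apache POI out entirely and look at the Map behavior alone. This is a minimal, self-contained sketch (class and method names are mine, not from the question) contrasting the same-key loop with the distinct-key fix:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MapKeyDemo {

    // Putting every column under the same key overwrites the previous entry
    // on each iteration, so only the last column survives.
    static Map<String, Object[]> sameKey(String[] cols) {
        Map<String, Object[]> data = new LinkedHashMap<>();
        for (int i = 0; i < cols.length; i++) {
            data.put("1", new Object[] { cols[i] });
        }
        return data;
    }

    // A distinct key per column keeps every value.
    static Map<String, Object[]> perKey(String[] cols) {
        Map<String, Object[]> data = new LinkedHashMap<>();
        for (int i = 0; i < cols.length; i++) {
            data.put(i + "", new Object[] { cols[i] });
        }
        return data;
    }

    public static void main(String[] args) {
        String[] expColName = { "Name", "Age", "City" };
        System.out.println(sameKey(expColName).size()); // prints 1
        System.out.println(perKey(expColName).size());  // prints 3
    }
}
```

With distinct keys the map holds one entry per column, which is what the Excel writer then needs to iterate over.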

Related

writing to a csv file in java based on some condition

I am fetching data by reading a CSV file and storing it as List<List<String>> data. While writing that data to another CSV file, I need to check whether the id in this row data matches the id column of another CSV file. Please help with how to check this condition.
The condition is data.id == value.id; // value is data from another CSV in the form List<List<String>> value
public void writeRecords(List<List<String>> data) throws IOException {
    FileWriter csvWriter1 = new FileWriter(OUTPUT_PATH);
    Cable c1 = new Cable();
    List<List<String>> value = c1.readRecords();
    for (List<String> rowData : data) {
        csvWriter1.append(String.join(",", rowData));
        csvWriter1.append("\n");
    }
    csvWriter1.flush();
    csvWriter1.close();
}
I would not suggest operating with a data structure like List<List<String>> data. However, assuming all your lists are sorted and you don't care about the complexity, you can do something like this to compare the data from your two lists:
for (int i = 0; i < value.size(); i++) {
    for (int j = 0; j < value.get(i).size(); j++) {
        if (value.get(i).get(j).equals(data.get(i).get(j))) { // compare strings with equals(), not ==
            // write data to another csv
        }
    }
}
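If only the id column has to match (rather than every cell), a set of ids from the reference file avoids the nested positional comparison entirely. A minimal sketch, assuming the id is the first column of each row (the class and method names here are illustrative, not from the question):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CsvIdMatch {

    // Collect the ids (first column, by assumption) of the reference rows,
    // then keep only the data rows whose id appears in that set.
    static List<List<String>> matchingRows(List<List<String>> data, List<List<String>> value) {
        Set<String> ids = new HashSet<>();
        for (List<String> row : value) {
            ids.add(row.get(0));
        }
        List<List<String>> matched = new ArrayList<>();
        for (List<String> row : data) {
            if (ids.contains(row.get(0))) { // Set.contains uses equals(), never ==
                matched.add(row);
            }
        }
        return matched;
    }

    public static void main(String[] args) {
        List<List<String>> data = Arrays.asList(
                Arrays.asList("1", "a"), Arrays.asList("2", "b"));
        List<List<String>> value = Arrays.asList(
                Arrays.asList("2", "x"));
        System.out.println(matchingRows(data, value)); // prints [[2, b]]
    }
}
```

The matched rows can then be appended to the FileWriter exactly as in writeRecords above; this also drops the assumption that both files are sorted the same way.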

Retrieve DataFrame Values in a Java Array

I am using Apache Spark. I want to retrieve the values of a DataFrame into a String array. I have created a table using the DataFrame.
dataframe.registerTempTable("table_name");
DataFrame d2=sqlContext.sql("Select * from table_name");
Now I want this data retrieved into a Java array (String type would be fine). How can I do that?
You can use the collect() method to get a Row[]. Each Row contains the column values of your DataFrame. If there is a single value in each row, you can add the values to an ArrayList of String. If there is more than one column in each row, use an ArrayList of your custom object type and set its properties. In the code below, instead of printing "Row Data" you can add the values to the ArrayList.
Row[] dataRows = d2.collect();
for (Row row : dataRows) {
    System.out.println("Row : " + row);
    for (int i = 0; i < row.length(); i++) {
        System.out.println("Row Data : " + row.get(i));
    }
}

Excel Columns into Java Using API

I had to create a program that calculates a GPA, using Apache POI to read an xlsx Excel file. It contains 220 rows and 4 columns, such as:
Course Number Course Name Credit Hours Course Multipler
110 Eng 1 CP 5.0 1.0
There are 220 other courses.
I was able to print the data using cell.getStringCellValue() and cell.getNumericCellValue(), but I can't get the printed data into arrays.
I wanted to create an array called courseNumList and put the first course number in courseNumList[0], the second course number in courseNumList[1], and so on.
I want to create 4 such arrays, but what is a good way?
private static ArrayList<Object> c = new ArrayList<Object>();

public static void readXLSXFile() throws IOException {
    InputStream ExcelFileToRead = new FileInputStream("C:/Users/14KimTa/Desktop/Downloads/Course_List.xlsx");
    XSSFWorkbook wb = new XSSFWorkbook(ExcelFileToRead);
    XSSFSheet sheet = wb.getSheetAt(0);
    XSSFRow row;
    XSSFCell cell;
    Iterator rows = sheet.rowIterator();
    while (rows.hasNext()) {
        row = (XSSFRow) rows.next();
        Iterator cells = row.cellIterator();
        while (cells.hasNext()) {
            cell = (XSSFCell) cells.next();
            if (cell.getCellType() == XSSFCell.CELL_TYPE_STRING) {
                System.out.print(cell.getStringCellValue() + " ");
                c.add(cell.getStringCellValue());
            } else if (cell.getCellType() == XSSFCell.CELL_TYPE_NUMERIC) {
                System.out.print(cell.getNumericCellValue() + " ");
            }
        }
        System.out.println();
    }
}
This is my code so far.
I tried to put each column into its own array, but it is not working at all.
Thanks!
I would create a new class to define your data, Course, with one field for each of your columns (4 fields). Then I would create some kind of List (ArrayList<Course> looks good) to hold all Courses. An array of Courses would work too, since you know how many there are from the start. In a loop, I would create one Course object for each row, setting the fields based on the values from cell.getStringCellValue() and cell.getNumericCellValue(), adding the Course to the List (or array) after processing each row.
I don't think creating one array per each column is a good idea. Keeping track of data in the same row by following the same indexes in the 4 arrays may be cumbersome and bad practice.
I would rather create a Java class - Course - with 4 fields -courseNumber, courseName, creditHours and courseMultiplier. Then, I would create a collection of such objects, e.g. Collection<Course> courses = new ArrayList<Course>();, and populate it according to the data read from Excel - one object per row.
EDIT:
I'd suggest you create a custom type instead of using Object for your ArrayList type parameter. You're not gaining much by using Object.
Then, for each row, you'd do the following:
//...obtain your values from cells and populate `courseNumber`, `courseName`,`creditHours` and `courseMultiplier` accordingly
Course course = new Course();
course.setCourseNumber(courseNumber);
course.setCourseName(courseName);
course.setCreditHours(creditHours);
course.setCourseMultiplier(courseMultiplier);
c.add(course);
This snippet of code should be placed inside the loop you use for iterating through rows.
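A minimal, self-contained sketch of that Course idea (field names are taken from the question's column headers; the demo class and sample values are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class CourseDemo {

    // One typed object per spreadsheet row, instead of four parallel arrays.
    static class Course {
        double courseNumber;
        String courseName;
        double creditHours;
        double courseMultiplier;

        Course(double courseNumber, String courseName, double creditHours, double courseMultiplier) {
            this.courseNumber = courseNumber;
            this.courseName = courseName;
            this.creditHours = creditHours;
            this.courseMultiplier = courseMultiplier;
        }
    }

    public static void main(String[] args) {
        List<Course> courses = new ArrayList<>();
        // In the real program these values would come from
        // cell.getNumericCellValue() / cell.getStringCellValue() per row.
        courses.add(new Course(110, "Eng 1 CP", 5.0, 1.0));

        Course first = courses.get(0);
        System.out.println(first.courseName + " / " + first.creditHours);
    }
}
```

Because each row is one object, a later GPA computation can simply loop over courses and multiply creditHours by courseMultiplier, with no risk of the four arrays drifting out of sync.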

How to read an array from columns in Java?

I have a .csv file with 177 rows and 18,000-odd columns. Given a column label, I should pick that particular column plus, by default, the first two label columns.
Please help me with this,
Thanks all,
Priya
So, what's the question? Parse the CSV file. You can either implement this yourself or use third-party code.
If you implement it yourself, read the file line by line, split each line with line.split(",") into elements, and put them into a data structure that should be a map of lists:
Map<String, List<String>> table = new LinkedHashMap<String, List<String>>();
Use column name as a key and column values as a list elements.
LinkedHashMap is preferable here to preserve the order of your columns.
Read first line that contains the column names and create list instances:
table.put(columnName, new LinkedList<String>());
Additionally create an array of the column names:
String[] columns = table.keySet().toArray(new String[0]);
Now continue iterating over your data and populate your table:
String[] data = line.split(",");
for (int i = 0; i < data.length; i++) {
    table.get(columns[i]).add(data[i]);
}
TBD...
Good luck.
Have you looked into OpenCSV?
You may go for OpenCSV
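The map-of-lists approach sketched above can be assembled into one runnable piece. This is a minimal sketch using in-memory lines instead of file I/O (the ColumnTable class name and sample data are illustrative):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;

public class ColumnTable {

    // Parse CSV lines into a column-name -> column-values map.
    // LinkedHashMap preserves the original column order.
    static Map<String, List<String>> parse(List<String> lines) {
        Map<String, List<String>> table = new LinkedHashMap<>();
        String[] columns = lines.get(0).split(",");
        for (String column : columns) {
            table.put(column, new LinkedList<String>());
        }
        for (String line : lines.subList(1, lines.size())) {
            String[] data = line.split(",");
            for (int i = 0; i < data.length; i++) {
                table.get(columns[i]).add(data[i]);
            }
        }
        return table;
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("id,label,score", "1,a,10", "2,b,20");
        Map<String, List<String>> table = parse(lines);
        System.out.println(table.get("label")); // prints [a, b]
    }
}
```

Once the table is built, picking the requested column plus the first two label columns is just three table.get(...) lookups. Note that naive split(",") does not handle quoted fields containing commas; that is where a library like OpenCSV earns its keep.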

How to merge CSV files in Java

My first CSV file looks like this with header included (header is included only at the top not after every entry):
NAME,SURNAME,AGE
Fred,Krueger,Unknown
.... n records
My second file might look like this:
NAME,MIDDLENAME,SURNAME,AGE
Jason,Noname,Scarry,16
.... n records with this header template
The merged file should look like this:
NAME,SURNAME,AGE,MIDDLENAME
Fred,Krueger,Unknown,
Jason,Scarry,16,Noname
....
Basically if headers don't match, all new header titles (columns) should be added after original header and their values according to that order.
Update
Above CSV were made smaller so I can illustrate what I want to achieve, in reality CSV files are generated one step before this (merge) and can be up to 100 columns
How can I do this?
I'd create a model for the 'bigger' format (a simple class with four fields and a collection for instances of this class) and implement two parsers, one for each format. Create records for all rows of both CSV files and implement a writer to output the CSV in the correct format. In brief:
public void convert(File output, File... input) {
    List<Record> records = new ArrayList<Record>();
    for (File file : input) {
        if (file.isThreeColumnFormat()) {
            records.addAll(ThreeColumnFormatParser.parse(file));
        } else {
            records.addAll(FourColumnFormatParser.parse(file));
        }
    }
    CsvWriter.write(output, records);
}
From your comment I see that you have a lot of different CSV formats with some common columns.
You could define the model for any row in the various csv files like this:
public class Record {
    Object id; // some sort of unique identifier
    Map<String, String> values = new HashMap<String, String>(); // all key/values of a single row

    public Record(Object id) { this.id = id; }

    public void put(String key, String value) {
        values.put(key, value);
    }

    public String get(String key) {
        return values.get(key);
    }
}
For parsing any file you would first read the header and add the column headers to a global keystore (will be needed later on for outputting), then create records for all rows, like:
//...
List<Record> records = new ArrayList<Record>();
for (File file : getAllFiles()) {
    List<String> keys = getColumnsHeaders(file);
    keyStore.addAll(keys); // the store is a Set
    int rowNumber = 0;
    for (String line : file.getLines()) {
        String[] values = line.split(DELIMITER);
        Record record = new Record(file.getName() + rowNumber++); // as an example for an id
        for (int i = 0; i < values.length; i++) {
            record.put(keys.get(i), values[i]);
        }
        records.add(record);
    }
}
// ...
Now the keystore has all used column header names and we can iterate over the collection of all records, get all values for all keys (and get null if the file for this record didn't use the key), assemble the csv lines and write everything to a new file.
Read in the header of the first file and create a list of the column names. Now read the header of the second file and add any column names that don't exist already in the list to the end of the list. Now you have your columns in the order that you want and you can write this to the new file first.
Next I would parse each file and for each row I would create a Map of column name to value. Once the row is parsed you could then iterate over the new list of column names and pull the values from the map and write them immediately to the new file. If the value is null don't print anything (just a comma, if required).
There might be more efficient solutions available, but I think this meets the requirements you set out.
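The header-union algorithm described above can be sketched end to end with plain string handling and no file I/O (the CsvMerge class name and in-memory representation of each file as a list of lines are my own assumptions; this ignores quoted fields):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CsvMerge {

    // Merge rows from CSV files with different headers: the combined header is
    // the first file's columns followed by any new columns from later files,
    // and each row leaves a blank for columns its file did not have.
    static List<String> merge(List<List<String>> files) {
        List<String> header = new ArrayList<>();
        List<Map<String, String>> rows = new ArrayList<>();
        for (List<String> file : files) {
            String[] columns = file.get(0).split(",");
            for (String c : columns) {
                if (!header.contains(c)) header.add(c); // append only unseen columns
            }
            for (String line : file.subList(1, file.size())) {
                String[] values = line.split(",");
                Map<String, String> row = new LinkedHashMap<>();
                for (int i = 0; i < values.length; i++) row.put(columns[i], values[i]);
                rows.add(row);
            }
        }
        List<String> out = new ArrayList<>();
        out.add(String.join(",", header));
        for (Map<String, String> row : rows) {
            List<String> cells = new ArrayList<>();
            for (String c : header) cells.add(row.getOrDefault(c, ""));
            out.add(String.join(",", cells));
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> merged = merge(Arrays.asList(
                Arrays.asList("NAME,SURNAME,AGE", "Fred,Krueger,Unknown"),
                Arrays.asList("NAME,MIDDLENAME,SURNAME,AGE", "Jason,Noname,Scarry,16")));
        for (String line : merged) System.out.println(line);
        // prints:
        // NAME,SURNAME,AGE,MIDDLENAME
        // Fred,Krueger,Unknown,
        // Jason,Scarry,16,Noname
    }
}
```

On the question's sample input this reproduces the expected merged file. Because rows are keyed by column name rather than position, the sketch scales to the 100-column case without any per-format parser.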
Try this:
http://ondra.zizka.cz/stranky/programovani/ruzne/querying-transforming-csv-using-sql.texy
crunch input.csv output.csv "SELECT AVG(duration) AS durAvg FROM (SELECT * FROM indata ORDER BY duration LIMIT 2 OFFSET 6)"
