I have a standalone application which connects to a SQL database and saves the ResultSet in a list of Maps. This is what I have so far:
List<Map<String, Object>> rows = new ArrayList<>();
stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(queryString);
ResultSetMetaData rsmd = rs.getMetaData(); // properties of the ResultSet object
int columnCount = rsmd.getColumnCount();
while (rs.next()) {
    Map<String, Object> rowResult = new HashMap<String, Object>(columnCount);
    for (int i = 1; i <= columnCount; i++) {
        rowResult.put(rsmd.getColumnName(i), rs.getObject(i));
    }
    rows.add(rowResult);
}
//WRITE TO CSV
String csv = "C:\\Temp\\data.csv";
CSVWriter writer = new CSVWriter(new FileWriter(csv));
//Write the record to file
writer.writeNext(rows);
//close the writer
writer.close();
How do I write this "rows" List to a CSV with columns? Any clues or suggestions? Your help is appreciated.
Since every record will have the same columns in the same order, I would just use a List<List<Object>> for the rows.
For the headers, you don't need to get them on every row. Just get them once like so:
List<String> headers = new ArrayList<>();
for (int i = 1; i <= columnCount; i++ )
{
String colName = rsmd.getColumnName(i);
headers.add(colName);
}
Next, you can get the rows like this:
List<List<Object>> rows = new ArrayList<>();
while (rs != null && rs.next())
{
List<Object> row = new ArrayList<>();
for (int i = 1; i <= columnCount; i++)
{
row.add(rs.getObject(i));
}
rows.add(row);
}
Finally, to create the CSV file, you can do this:
// create the CSVWriter
String csv = "C:\\Temp\\data.csv";
CSVWriter writer = new CSVWriter(new FileWriter(csv));
// write the header line
for (String colName : headers)
{
writer.write(colName);
}
writer.endRecord();
// write the data records
for (List<Object> row : rows)
{
for (Object o : row)
{
// handle nulls how you wish here
String val = (o == null) ? "null" : o.toString();
writer.write(val);
}
writer.endRecord();
}
// you should close the CSVWriter in a finally block or use a
// try-with-resources Statement
writer.close();
Note: In my code examples, I'm using type inference (the diamond operator <>).
See: Try-With-Resources Statement.
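If you happen to be using opencsv (which the writeNext call in the question suggests), the same header-then-rows flow with automatic closing could be sketched like this; the package name is an assumption (com.opencsv in current releases, au.com.bytecode.opencsv in older ones):
import com.opencsv.CSVWriter; // older opencsv: au.com.bytecode.opencsv.CSVWriter
import java.io.FileWriter;
import java.io.IOException;
import java.util.List;

public class CsvExportSketch {
    // Sketch: write a header row plus data rows; the writer is closed automatically.
    static void writeCsv(String path, List<String> headers, List<List<Object>> rows) throws IOException {
        try (CSVWriter writer = new CSVWriter(new FileWriter(path))) {
            writer.writeNext(headers.toArray(new String[0])); // header line
            for (List<Object> row : rows) {
                String[] line = new String[row.size()];
                for (int i = 0; i < row.size(); i++) {
                    Object o = row.get(i);
                    line[i] = (o == null) ? "" : o.toString(); // handle nulls how you wish
                }
                writer.writeNext(line);
            }
        }
    }
}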
Honestly, for what you are trying to do, I would recommend you use the writeAll method in CSVWriter and pass in the ResultSet.
writer.writeAll(rs, true);
The second parameter is the "include column names" flag, so the first row in your CSV file will be the column names. Then when you read the file you can translate that back into your Map if you want to (though it will be all strings unless you know, when you are reading it, what the types are).
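For example, a minimal sketch of that call in context (assuming opencsv's CSVWriter; conn, queryString and the output path are placeholders):
import com.opencsv.CSVWriter; // older opencsv: au.com.bytecode.opencsv.CSVWriter
import java.io.FileWriter;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class ResultSetDumpSketch {
    // Sketch: dump an entire ResultSet to a CSV file, column names included.
    static void export(Connection conn, String queryString, String csvPath) throws Exception {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(queryString);
             CSVWriter writer = new CSVWriter(new FileWriter(csvPath))) {
            writer.writeAll(rs, true); // true => write the column names as the first row
        }
    }
}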
Hope that helps.
Scott :)
My company asked me to improve one method from an older project.
The task is to create a CSV file from a SQL table (select *) and download it with a JSP. The main problem is that there aren't any model classes and there are a lot of tables, and I'd prefer not to create one for each. My idea was to get the list of rows and then, for each entity, get one row.
The service class:
public List<String> searchBersaniFile() {
Query q = em.createNativeQuery(SQL_SELECT_REPNORM_BERSANI);
List<String> resultList = (List<String>) q.getResultList(); // I get a ClassCastException here
System.out.println(resultList);
if (resultList == null) {
resultList = new ArrayList<>();
}
return resultList;
}
The main class:
try (ServletOutputStream outServlet = response.getOutputStream()) {
switch (flowType) {
case STATO_BERSANI:
listResult = awpSapNewRepository.searchBersaniFile();
break;
}
StringBuffer buffer = new StringBuffer();
String str;
for (String result : listResult) {
System.out.println(result);
str = result.replaceAll("\\[|\\]", "").replaceAll(",", ";");
System.out.println(str);
buffer.append(str);
buffer.append("\r\n");
}
String fileName = buildFileName(flowType);
response.setContentType("text/csv");
response.setHeader("Content-Disposition", "attachment; filename=" + fileName);
outServlet.write(buffer.toString().getBytes());
outServlet.flush();
}
I tried to get the result as Strings, as you can see above (via debug I see it returns like this: [column1, column2, column3][column1,....]).
Do you have any idea how I can get the CSV without creating any model?
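One observation that may help, as a sketch rather than a full answer: for a multi-column native query, getResultList() actually returns a List of Object[] (one array per row), not a List<String>, which is why the cast throws a ClassCastException. Joining each array with the delimiter gives one CSV line per row; the method and variable names here are just placeholders:
import javax.persistence.EntityManager;
import javax.persistence.Query;
import java.util.List;
import java.util.StringJoiner;

public class NativeQueryCsvSketch {
    // Sketch: each element of the result list is an Object[] holding one row's columns.
    static String toCsv(EntityManager em, String sql) {
        Query q = em.createNativeQuery(sql);
        @SuppressWarnings("unchecked")
        List<Object[]> rows = q.getResultList();
        StringBuilder csv = new StringBuilder();
        for (Object[] row : rows) {
            StringJoiner line = new StringJoiner(";");
            for (Object col : row) {
                line.add(col == null ? "" : col.toString());
            }
            csv.append(line.toString()).append("\r\n");
        }
        return csv.toString();
    }
}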
Be gentle, this is my first time using Apache Commons CSV 1.7. I am creating a service to process some CSV inputs, add some additional information from exterior sources, then write out this CSV for ingestion into another system. I store the information that I have gathered into a list of HashMap<String, String>, one for each row of the final output CSV. The HashMap contains the <ColumnName, Value for column>. I have issues using the CSVPrinter to correctly assign the values of the HashMaps into the rows. I can concatenate the values into a string with commas between the variables; however, this just inserts the whole string into the first column.
I cannot define or hardcode the headers since they are obtained from a config file and may change depending on which project uses the service.
Here is some of my code:
try (BufferedWriter writer = Files.newBufferedWriter(
Paths.get(OUTPUT + "/" + project + "/" + project + ".csv"));)
{
CSVPrinter csvPrinter = new CSVPrinter(writer,
CSVFormat.RFC4180.withFirstRecordAsHeader());
csvPrinter.printRecord(columnList);
for (HashMap<String, String> row : rowCollection)
{
//Need to map __record__ to column -> row.key, value -> row.value for whole map.
csvPrinter.printRecord(__record__);
}
csvPrinter.flush();
}
Thanks for your assistance.
You actually have multiple concerns with your technique:
How do you maintain column order?
How do you print the column names?
How do you print the column values?
Here are my suggestions.
Maintain column order. Do not use HashMap, because it is unordered. Instead, use LinkedHashMap, which has a "predictable iteration order" (i.e. it maintains order).
Print column names. Every row in your list contains the column names in the form of key values, but you only need to print the column names once, as the first row of output. The solution is to print the column names before you loop through the rows. Get them from the first element of the list.
Print column values. The "billal GHILAS" answer demonstrates a way to print the values of each row.
Here is some code:
try (BufferedWriter writer = Files.newBufferedWriter(
Paths.get(OUTPUT + "/" + project + "/" + project + ".csv"));)
{
CSVPrinter csvPrinter = new CSVPrinter(writer,
CSVFormat.RFC4180.withFirstRecordAsHeader());
// This assumes that the rowCollection will never be empty.
// An anonymous scope block just to limit the scope of the variable names.
{
HashMap<String, String> firstRow = rowCollection.get(0);
int valueIndex = 0;
String[] valueArray = new String[firstRow.size()];
for (String currentValue : firstRow.keySet())
{
valueArray[valueIndex++] = currentValue;
}
csvPrinter.printRecord(valueArray);
}
for (HashMap<String, String> row : rowCollection)
{
int valueIndex = 0;
String[] valueArray = new String[row.size()];
for (String currentValue : row.values())
{
valueArray[valueIndex++] = currentValue;
}
csvPrinter.printRecord(valueArray);
}
csvPrinter.flush();
}
for (HashMap<String,String> row : rowCollection) {
Object[] record = new Object[columnList.size()];
for (int i = 0; i < columnList.size(); i++) {
record[i] = row.get(columnList.get(i));
}
csvPrinter.printRecord(record);
}
I want to export the results of an SQL query, fired through JDBC, to a file; and then import that result, at some point later.
I'm currently doing it by querying the database through a NamedParameterJdbcTemplate of Spring which returns a SqlRowSet that I can iterate. In each iteration, I extract desired fields and dump the result into a CSV file, using PrintWriter.
final SqlRowSet rs = namedJdbcTemplate.queryForRowSet(sql,params);
while (rs.next()) {
The problem is that when I read back the file, they are all Strings and I need to cast them to their proper types, e.g. Integer, String, Date, etc.
while (line != null) {
String[] csvLine = line.split(";");
Object[] params = new Object[14];
params[0] = csvLine[0];
params[1] = csvLine[1];
params[2] = Integer.parseInt(csvLine[2]);
params[3] = csvLine[3];
params[4] = csvLine[4];
params[5] = Integer.parseInt(csvLine[5]);
params[6] = Integer.parseInt(csvLine[6]);
params[7] = Long.parseLong(csvLine[7]);
params[8] = formatter.parse(csvLine[8]);
params[9] = Integer.parseInt(csvLine[9]);
params[10] = Double.parseDouble(csvLine[10]);
params[11] = Double.parseDouble(csvLine[11]);
params[12] = Double.parseDouble(csvLine[12]);
params[13] = Double.parseDouble(csvLine[13]);
batchParams.add(params);
line = reader.readLine();
}
Is there a better way to export this SqlRowSet to a file in order to facilitate the import process later on; some way to store the schema for an easier insertion into the DB?
If parsing is your concern, one way of handling this is:
Create a parser Factory interface, say ParserFactory
Create a parse interface, say MyParser
Have MyParser implemented using Factory Method Pattern, i.e. IntegerParser implements MyParser etc.
Have your factory class names as a header in your CSV
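A rough sketch of what those pieces could look like (the parse signature and the concrete parser names below are assumptions, not an existing API):
// Sketch of the suggested interfaces; adjust types and names to your data.
interface MyParser {
    Object parse(String raw);
}

class IntegerParser implements MyParser {
    public Object parse(String raw) { return Integer.parseInt(raw); }
}

class DoubleParser implements MyParser {
    public Object parse(String raw) { return Double.parseDouble(raw); }
}

class StringParser implements MyParser {
    public Object parse(String raw) { return raw; }
}

class ParserFactory {
    // Assumes each header cell carries the parser name, e.g. "id:IntegerParser".
    static MyParser get(String header) {
        if (header.endsWith("IntegerParser")) return new IntegerParser();
        if (header.endsWith("DoubleParser")) return new DoubleParser();
        return new StringParser();
    }
}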
This way your calling code would look like,
String[] headerRow = reader.readLine().split(";"); // Get the 1st row here
Map<String, MyParser> parsers = new HashMap<>();
for (int i = 0; i < headerRow.length; i++) {
    if (!parsers.containsKey(headerRow[i]))
        parsers.put(headerRow[i], ParserFactory.get(headerRow[i]));
}
line = reader.readLine();
while (line != null) { // From 2nd row onwards
    String[] row = line.split(";");
    Object[] params = new Object[row.length];
    for (int i = 0; i < row.length; i++) {
        params[i] = parsers.get(headerRow[i]).parse(row[i]);
    }
    batchParams.add(params);
    line = reader.readLine();
}
You might like to extract the creation of the parser map into its own method. Or let your ParserFactory take headerRow as a parameter and return the respective parsers as a result. Something like,
String[] headerRow = reader.readLine().split(";"); // Get the 1st row here
Map<String, MyParser> parsers = ParserFactory.getParsers(headerRow);
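Reusing the MyParser/ParserFactory sketch above, the hypothetical getParsers helper could simply build the map in one pass:
import java.util.HashMap;
import java.util.Map;

class ParserFactoryHelpers {
    // Hypothetical helper: one parser per header cell, keyed by the header value.
    static Map<String, MyParser> getParsers(String[] headerRow) {
        Map<String, MyParser> parsers = new HashMap<>();
        for (String header : headerRow) {
            parsers.put(header, ParserFactory.get(header));
        }
        return parsers;
    }
}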
I tried importing Excel data to MongoDB in the following document format:
[
  {
    "productId": "",
    "programeName": "",
    "programeThumbImageURL": "",
    "programeURL": "",
    "programEditors": ["editor1", "editor2"],
    "programChapters": [
      {
        "chapterName": "chapter1",
        "authorNames": ["authorName1", "authorname2"]
      },
      {
        "chapterName": "chapter2",
        "authorNames": ["authorName1", "authorName2"]
      },
      ...
    ]
  },
  ...
]
There are many products in the Excel sheet, and a chapterName can have multiple authors. The following code shows my experiment: I could insert the data, but I couldn't merge the authorNames corresponding to a particular chapterName as shown above, so currently the programChapters array contains objects with duplicated chapterNames.
private static XSSFWorkbook myWorkBook;
public static void main(String[] args) throws IOException {
String[] programs = {"programName1","programName2","programName3","programName4",...};
@SuppressWarnings("deprecation")
Mongo mongo = new Mongo("localhost", 27017);
@SuppressWarnings("deprecation")
DB db = mongo.getDB("dbName");
DBCollection collection = db.getCollection("programsCollection");
File myFile =
new File("dsm_article_author_details.xlsx");
FileInputStream fis = new FileInputStream(myFile); // Finds the workbook instance for XLSX file
myWorkBook = new XSSFWorkbook(fis);
XSSFSheet mySheet = myWorkBook.getSheetAt(0); // Get iterator to all the rows in current sheet
@SuppressWarnings("unused")
Iterator<Row> rowIterator = mySheet.iterator(); // Traversing over each row of XLSX file
for (String program : programs) {
String programName = "";
String chapterName = "";
String authorName = "";
BasicDBObject product = new BasicDBObject();
BasicDBList programChaptersList = new BasicDBList();
// For Each Row , Create Chapters Object here
for (int i = 0; i <= mySheet.getLastRowNum(); i++) { // points to the starting of excel i.e
// excel first row
Row row = (Row) mySheet.getRow(i); // sheet number
System.out.println("Row is :" + row.getRowNum());
BasicDBObject programChapters = new BasicDBObject();
if (row.getCell(0).getCellType() == Cell.CELL_TYPE_STRING) {
programName = row.getCell(0).getStringCellValue();
System.out.println("programName : " + programName);
}
if (row.getCell(1).getCellType() == Cell.CELL_TYPE_STRING) {
chapterName = row.getCell(1).getStringCellValue().replaceAll("\n", "");
System.out.println("chapterName : " + chapterName);
}
if (row.getCell(2).getCellType() == Cell.CELL_TYPE_STRING) {
authorName = row.getCell(2).getStringCellValue();
System.out.println("authorName : " + authorName);
}
List<String> authors = new ArrayList<String>();
programChapters.put("chapterName", chapterName);
authors.add(authorName);
programChapters.put("authorName", authors);
if (programName.trim().equals(program.trim())) {
programChaptersList.add(programChapters);
}
}
product.put("programName", program);
product.put("programThumbImageURL", "");
product.put("programeURL", "");
product.put("programChapters", programChaptersList);
collection.insert(product);
System.out.println("*#*#*#*#*#");
}
}
I think this is the part that went wrong. I need to store all chapterNames in an array, compare each upcoming value against them, and accordingly create new objects and store them in a list:
List<String> authors = new ArrayList<String>();
programChapters.put("chapterName", chapterName);
authors.add(authorName);
programChapters.put("authorName", authors);
Can someone suggest available solutions? :-)
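Not a full answer, but one way to do that merging is to collect the authors per chapter name in a map while reading the sheet, and only then build the chapter objects; a sketch, where the field names follow the document format above and everything else is an assumption:
import com.mongodb.BasicDBList;
import com.mongodb.BasicDBObject;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ChapterGroupingSketch {
    // While iterating the rows, accumulate authors instead of adding a chapter per row.
    static void accumulate(Map<String, List<String>> authorsByChapter,
                           String chapterName, String authorName) {
        authorsByChapter.computeIfAbsent(chapterName, k -> new ArrayList<>()).add(authorName);
    }

    // After the sheet is read, build one chapter object per distinct chapterName.
    static BasicDBList buildChapters(Map<String, List<String>> authorsByChapter) {
        BasicDBList programChaptersList = new BasicDBList();
        for (Map.Entry<String, List<String>> entry : authorsByChapter.entrySet()) {
            BasicDBObject chapter = new BasicDBObject();
            chapter.put("chapterName", entry.getKey());
            chapter.put("authorNames", entry.getValue());
            programChaptersList.add(chapter);
        }
        return programChaptersList;
    }
    // Pass a new LinkedHashMap<String, List<String>>() as authorsByChapter to keep sheet order.
}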
I am connecting to an Oracle database and querying multiple tables. My current code creates the connection and calls a PL/SQL function which contains the query. Once I have the result set, I add it to a Vector (as I am unsure of the number of records each query will return).
My problem is that I am unsure how to write a delimited file from the Vector. I imagine once I have added my result set to it, it is simply one gigantic string. I need to be able to receive each field from the query and delimit between them, as well as keep rows separate.
Here is my code:
public static void main(String[] args) throws SQLException {
// instantiate db connection
try {
Class.forName("oracle.jdbc.driver.OracleDriver");
}catch (Exception e) {
throw new SQLException("Oracle JDBC is not available", e);
}
// define connection string and parameters
String url = "jdbc:oracle:thin:@//host:port/sid";
Connection conn = DriverManager.getConnection(url, "USERNAME","PASSWORD");
CallableStatement stmt = conn.prepareCall("{? = CALL <functionname>(?)}");
// get result set and add to a Vector
ResultSet rs = stmt.executeQuery();
Vector<String> results = new Vector();
while ( rs.next() ){
results.add(rs.getString(1));
}
// close result set, sql statement, and connection
rs.close();
stmt.close();
conn.close();
// write Vector to output file,
// where the file name format is MMddyyyy.txt
try {
Calendar cal = Calendar.getInstance();
SimpleDateFormat sdf = new SimpleDateFormat("MMddyyyy");
String dateStr = sdf.format(cal.getTime());
FileWriter fwrite = new FileWriter(dateStr + ".txt");
BufferedWriter out = new BufferedWriter(fwrite);
for(int i = 0; i < results.size(); i++)
{
String temp = results.elementAt(i);
out.write(temp);
}
out.close();
}catch (Exception e) {
System.err.println("Error: " + e.getMessage());
}
}
I am just unsure how to go about getting the information from the db and writing it to a delimited file. Thanks in advance!
If you are unsure about the number of fields in each of your rows, then it probably won't be possible, because to fetch all the field values from the database you need to know the type of each field and the number of fields.
But I'll post an example for when you have a fixed number of fields that you know.
Suppose you have 4 columns per row. To display it in tabular form, you would have to use a List of Lists. If you are using Vector, use a Vector of Lists.
Here's an example for a List of Lists:
List<List<String>> results = new ArrayList<List<String>>();
while ( rs.next() ) {
List<String> tempList = new ArrayList<String>();
tempList.add(rs.getString(1));
tempList.add(rs.getString(2));
tempList.add(rs.getString(3));
tempList.add(rs.getString(4));
results.add(tempList);
}
Then to print it, use this loop:
for (List<String> innerList: results) {
for (String fields: innerList) {
System.out.print(fields + "\t");
}
System.out.println();
}
You can write it in the same form to your BufferedWriter.
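For example, writing each inner list as one delimited line could be sketched like this (the delimiter and file name are placeholders):
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.List;

public class DelimitedWriterSketch {
    // Sketch: one output line per row, fields joined by the chosen delimiter.
    static void writeRows(List<List<String>> results, String fileName, String delimiter)
            throws IOException {
        try (BufferedWriter out = new BufferedWriter(new FileWriter(fileName))) {
            for (List<String> row : results) {
                out.write(String.join(delimiter, row));
                out.newLine();
            }
        }
    }
}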
Use results.toString() and truncate the brackets ([]) from the resulting string to write all values, comma separated, at once to the file.
//toString() returns comma separated string values enclosed in []
String resultString = results.toString();
//truncate the leading '[' and terminating ']'
resultString = resultString.substring(1, resultString.length()-1);
//if comma is not your delimiter then use String.replaceAll()
//to replace `,` with your delimiter
//write comma separated elements all at once
out.write(resultString);
So here, if you have added str1, str2 to the results Vector, then resultString will have the value str1, str2, which you may write at once using your BufferedWriter out.
Also, please use generics on both sides of the initialization:
Vector<String> results = new Vector<String>();