I need to write my query.getResultList() into a .CSV file.
I call the query like this:
final Query q = em.createNamedQuery("getalljobs");
final List<Job> joblist = q.getResultList();
The named query just does a SELECT * FROM TABLE, and the result of query.getResultList() looks like this:
[id;name, id;name, ... ]
I can't use OpenCSV.
The CSV file needs to have headers.
You can use something like this:
Query q = em.createNamedQuery("getalljobs");
List<Job> jobList = q.getResultList();
String csvHeader = getHeader(); // builds the header line, e.g. "id;name"
try (PrintWriter fw = new PrintWriter(new FileWriter("output.csv"))) {
    fw.println(csvHeader);
    for (Job job : jobList) {
        // Job's getters are assumed from the "id;name" output shown in the question
        fw.println(job.getId() + ";" + job.getName());
    }
}
Solution
Write the file line by line with StringBuilder.append, adding your separator between values and a newline after each row, as in the sketch below.
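A minimal sketch of that StringBuilder approach (not from the original answer), again assuming a Job entity with getId() and getName() getters inferred from the "id;name" output in the question:
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

// build the CSV content in memory, then write it in one go
// (Files.write throws IOException, so handle or declare it)
StringBuilder sb = new StringBuilder();
sb.append("id;name").append(System.lineSeparator()); // header row
for (Job job : jobList) {
    sb.append(job.getId()).append(';')       // value plus separator
      .append(job.getName())
      .append(System.lineSeparator());       // newline after each row
}
Files.write(Paths.get("output.csv"), sb.toString().getBytes(StandardCharsets.UTF_8));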
My company asked me to improve a method from an older project.
The task is to create a CSV file from a SQL table (SELECT *) and download it with a JSP. The main problem: there aren't any model classes, there are a lot of tables, and I'd rather not create a model for each one. My idea was to get the list of rows and then, for each entity, build one row.
The service class:
public List<String> searchBersaniFile() {
    Query q = em.createNativeQuery(SQL_SELECT_REPNORM_BERSANI);
    List<String> resultList = (List<String>) q.getResultList(); // I get a ClassCastException here
    System.out.println(resultList);
    if (resultList == null) {
        resultList = new ArrayList<>();
    }
    return resultList;
}
The main class:
try (ServletOutputStream outServlet = response.getOutputStream()) {
    switch (flowType) {
        case STATO_BERSANI:
            listResult = awpSapNewRepository.searchBersaniFile();
            break;
    }
    StringBuffer buffer = new StringBuffer();
    String str;
    for (String result : listResult) {
        System.out.println(result);
        str = result.replaceAll("\\[|\\]", "").replaceAll(",", ";");
        System.out.println(str);
        buffer.append(str);
        buffer.append("\r\n");
    }
    String fileName = buildFileName(flowType);
    response.setContentType("text/csv");
    response.setHeader("Content-Disposition", "attachment; filename=" + fileName);
    outServlet.write(buffer.toString().getBytes());
    outServlet.flush();
}
I tried to read the results as Strings, as you can see above (via the debugger I see it returns something like: [column1, column2, column3][column1, ...]).
Do you have any idea how I can get the CSV without creating any model?
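One possible direction, sketched under the assumption that the native query selects multiple columns: in that case getResultList() returns a List<Object[]> (one array per row), which is why the cast to List<String> throws a ClassCastException, and you can build the CSV rows straight from those arrays without any model class:
Query q = em.createNativeQuery(SQL_SELECT_REPNORM_BERSANI);
@SuppressWarnings("unchecked")
List<Object[]> rows = q.getResultList(); // one Object[] per row, no entity needed

StringBuilder buffer = new StringBuilder();
for (Object[] row : rows) {
    for (int i = 0; i < row.length; i++) {
        if (i > 0) {
            buffer.append(';'); // column separator
        }
        buffer.append(row[i] == null ? "" : row[i].toString());
    }
    buffer.append("\r\n");
}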
I want to join two CSV files based on a common column. My two CSV files and the final CSV file look like this.
Here are the example files - 1st file looks like:
sno,first name,last name
--------------------------
1,xx,yy
2,aa,bb
2nd file looks like:
sno,place
-----------
1,pp
2,qq
Output:
sno,first name,last name,place
------------------------------
1,xx,yy,pp
2,aa,bb,qq
Code:
CSVReader r1 = new CSVReader(new FileReader("c:/csv/file1.csv"));
CSVReader r2 = new CSVReader(new FileReader("c:/csv/file2.csv"));
HashMap<String,String[]> dic = new HashMap<String,String[]>();
int commonCol = 1;
r1.readNext(); // skip header
String[] line = null;
while ((line = r1.readNext()) != null)
{
    dic.put(line[commonCol], line);
}
commonCol = 1;
r2.readNext();
String[] line2 = null;
while ((line2 = r2.readNext()) != null)
{
    if (dic.keySet().contains(line2[commonCol]))
    {
        // append line to existing entry
    }
    else
    {
        // create a new entry and pre-pend it with default values
        // for the columns of file1
    }
}
for (String[] row : dic.values())
{
    // write row to the output file.
}
I don't know how to proceed further to get the desired output. Any help will be appreciated.
Thanks
First, you need to use zero as your commonCol value, as the first column has index zero rather than one.
if (dic.containsKey(line2[commonCol]))
{
    // Get the whole line from the first file.
    String[] firstPart = dic.get(line2[commonCol]);
    // Get the line from the second file, without the common column.
    String[] secondPart = Arrays.copyOfRange(line2, 1, line2.length);
    // Join the two arrays and put the merged row back in the map.
    dic.put(line2[commonCol], concat(firstPart, secondPart));
}
else
{
    // Create a new entry and pre-pend it with default values
    // for the columns of file1 (the key, plus defaults for the other columns).
    String[] firstPart = { line2[commonCol], "default", "default" };
    String[] secondPart = Arrays.copyOfRange(line2, 1, line2.length);
    dic.put(line2[commonCol], concat(firstPart, secondPart));
}
A small helper to join the two arrays, keeping the map's String[] value type:
private static String[] concat(String[] a, String[] b)
{
    String[] merged = Arrays.copyOf(a, a.length + b.length);
    System.arraycopy(b, 0, merged, a.length, b.length);
    return merged;
}
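To fill in the final write loop, a sketch using OpenCSV's CSVWriter (assumed available, since the question already uses CSVReader; the output path and header row are assumptions):
// write the header plus every merged row; NO_QUOTE_CHARACTER keeps the
// output as plain comma-separated values
try (CSVWriter w = new CSVWriter(new FileWriter("c:/csv/output.csv"),
        ',', CSVWriter.NO_QUOTE_CHARACTER))
{
    w.writeNext(new String[] { "sno", "first name", "last name", "place" });
    for (String[] merged : dic.values())
    {
        w.writeNext(merged);
    }
}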
I want to export the results of an SQL query, fired through JDBC, to a file; and then import that result, at some point later.
I'm currently doing it by querying the database through a NamedParameterJdbcTemplate of Spring which returns a SqlRowSet that I can iterate. In each iteration, I extract desired fields and dump the result into a CSV file, using PrintWriter.
final SqlRowSet rs = namedJdbcTemplate.queryForRowSet(sql,params);
while (rs.next()) {
The problem is that when I read the file back, the values are all Strings and I need to cast them to their proper types, e.g. Integer, String, Date, etc.
while (line != null) {
    String[] csvLine = line.split(";");
    Object[] params = new Object[14];
    params[0] = csvLine[0];
    params[1] = csvLine[1];
    params[2] = Integer.parseInt(csvLine[2]);
    params[3] = csvLine[3];
    params[4] = csvLine[4];
    params[5] = Integer.parseInt(csvLine[5]);
    params[6] = Integer.parseInt(csvLine[6]);
    params[7] = Long.parseLong(csvLine[7]);
    params[8] = formatter.parse(csvLine[8]);
    params[9] = Integer.parseInt(csvLine[9]);
    params[10] = Double.parseDouble(csvLine[10]);
    params[11] = Double.parseDouble(csvLine[11]);
    params[12] = Double.parseDouble(csvLine[12]);
    params[13] = Double.parseDouble(csvLine[13]);
    batchParams.add(params);
    line = reader.readLine();
}
Is there a better way to export this SqlRowSet to a file, in order to facilitate the import process later on? Some way to store the schema for easier insertion into the DB?
If parsing is your concern, one way of handling this is:
Create a parser factory interface, say ParserFactory
Create a parser interface, say MyParser
Have MyParser implemented using the Factory Method pattern, i.e. IntegerParser implements MyParser, etc. (a sketch of these types follows below)
Have your factory class names as a header row in your CSV
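A minimal sketch of those types, using the names suggested above (the concrete parsers shown are assumptions):
// one parser per column type
interface MyParser {
    Object parse(String value);
}

class IntegerParser implements MyParser {
    public Object parse(String value) { return Integer.parseInt(value); }
}

class StringParser implements MyParser {
    public Object parse(String value) { return value; }
}

// maps a header name (e.g. "IntegerParser") to a parser instance
class ParserFactory {
    static MyParser get(String name) {
        switch (name) {
            case "IntegerParser": return new IntegerParser();
            default:              return new StringParser();
        }
    }
}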
This way, your calling code would look like this:
String[] headerRow = reader.readLine().split(";"); // Get the 1st row here
Map<String, MyParser> parsers = new HashMap<>();
for (int i = 0; i < headerRow.length; i++) {
    if (!parsers.containsKey(headerRow[i]))
        parsers.put(headerRow[i], ParserFactory.get(headerRow[i]));
}
line = reader.readLine();
while (line != null) { // From 2nd row onwards
    String[] row = line.split(";");
    Object[] params = new Object[row.length];
    for (int i = 0; i < row.length; i++) {
        params[i] = parsers.get(headerRow[i]).parse(row[i]);
    }
    batchParams.add(params);
    line = reader.readLine();
}
You might like to extract the creation of the parser map into its own method. Or let your ParserFactory take headerRow as a parameter and return the respective parsers as a result. Something like:
String[] headerRow = reader.readLine().split(";"); // Get the 1st row here
Map<String, MyParser> parsers = ParserFactory.getParsers(headerRow);
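One possible shape for that getParsers method (hypothetical, matching the sketch above):
static Map<String, MyParser> getParsers(String[] headerRow) {
    Map<String, MyParser> parsers = new HashMap<>();
    for (String column : headerRow) {
        parsers.put(column, ParserFactory.get(column));
    }
    return parsers;
}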
I have a standalone application, which connects to a SQL database and saves ResultSet in a list of Map. This is what I have so far:
List<Map<String, Object>> rows = new ArrayList<>();
stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(queryString);
ResultSetMetaData rsmd = rs.getMetaData(); // properties of the ResultSet: column names and count
int columnCount = rsmd.getColumnCount();
while (rs.next()) {
    Map<String, Object> rowResult = new HashMap<String, Object>(columnCount);
    for (int i = 1; i <= columnCount; i++) {
        rowResult.put(rsmd.getColumnName(i), rs.getObject(i));
    }
    rows.add(rowResult);
}
// WRITE TO CSV
String csv = "C:\\Temp\\data.csv";
CSVWriter writer = new CSVWriter(new FileWriter(csv));
// Write the record to file -- this is where I'm stuck:
// writeNext expects a String[], not a list of Maps
writer.writeNext(rows);
// close the writer
writer.close();
How do I write this list of rows to a CSV with columns? Any clues and suggestions are appreciated.
Since every record will have the same columns in the same order, I would just use a List<List<Object>> for the rows.
For the headers, you don't need to get them for every row. Just get them once, like so:
List<String> headers = new ArrayList<>();
for (int i = 1; i <= columnCount; i++)
{
    String colName = rsmd.getColumnName(i);
    headers.add(colName);
}
Next, you can get the rows like this:
List<List<Object>> rows = new ArrayList<>();
while (rs != null && rs.next())
{
    List<Object> row = new ArrayList<>();
    for (int i = 1; i <= columnCount; i++)
    {
        row.add(rs.getObject(i));
    }
    rows.add(row);
}
Finally, to create the CSV file, you can do this:
// create the CSVWriter
String csv = "C:\\Temp\\data.csv";
CSVWriter writer = new CSVWriter(new FileWriter(csv));
// write the header line
writer.writeNext(headers.toArray(new String[0]));
// write the data records
for (List<Object> row : rows)
{
    String[] record = new String[row.size()];
    for (int i = 0; i < row.size(); i++)
    {
        Object o = row.get(i);
        // handle nulls how you wish here
        record[i] = (o == null) ? "null" : o.toString();
    }
    writer.writeNext(record);
}
// you should close the CSVWriter in a finally block or use a
// try-with-resources statement
writer.close();
Note: In my code examples, I'm using Type Inference
See: Try-With-Resources Statement.
Honestly, for what you are trying to do, I would recommend using the writeAll method in CSVWriter and passing in the ResultSet.
writer.writeAll(rs, true);
The second parameter tells it to include the column names, so the first row in your CSV file will be the column names. Then, when you read the file back, you can translate it into your Map if you want to (though the values will all be Strings unless you know the types when you are reading).
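For context, a minimal sketch of that call, assuming OpenCSV and an open ResultSet named rs (the file path is an assumption):
String csv = "C:\\Temp\\data.csv";
try (CSVWriter writer = new CSVWriter(new FileWriter(csv))) {
    writer.writeAll(rs, true); // true = write the column names as the first row
}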
Hope that helps.
Scott :)
I have two CSV files. One master CSV file with around 500,000 records. Another daily CSV file with 50,000 records.
The daily CSV file is missing a few columns, which have to be fetched from the master CSV file.
For example
DailyCSV File
id,name,city,zip,occupation
1,Jhon,Florida,50069,Accountant
MasterCSV File
id,name,city,zip,occupation,company,exp,salary
1, Jhon, Florida, 50069, Accountant, AuditFirm, 3, $5000
What I have to do is read both files and match the records by ID. If an ID is present in the master file, then I have to fetch company, exp, and salary and write them to a new CSV file.
How can I achieve this?
What I have done currently:
while (true) {
    line = bstream.readLine();
    lineMaster = bstreamMaster.readLine();
    if (line == null || lineMaster == null)
    {
        break;
    }
    else
    {
        while (lineMaster != null)
            readlineSplit = line.split(",(?=([^\"]*\"[^\"]*\")*[^\"]*$)", -1);
        String splitId = readlineSplit[4];
        String[] readLineSplitMaster = lineMaster.split(",(?=([^\"]*\"[^\"]*\")*[^\"]*$)", -1);
        String SplitIDMaster = readLineSplitMaster[13];
        System.out.println(splitId + "|" + SplitIDMaster);
        //System.out.println(splitId.equalsIgnoreCase(SplitIDMaster));
        if (splitId.equalsIgnoreCase(SplitIDMaster)) {
            String writeLine = readlineSplit[0] + "," + readlineSplit[1] + "," + readlineSplit[2] + "," + readlineSplit[3] + "," + readlineSplit[4] + "," + readlineSplit[5] + "," + readLineSplitMaster[15] + "," + readLineSplitMaster[16] + "," + readLineSplitMaster[17];
            System.out.println(writeLine);
            pstream.print(writeLine + "\r\n");
        }
    }
}
pstream.close();
fout.flush();
bstream.close();
bstreamMaster.close();
First of all, your current parsing approach will be painfully slow. Use a dedicated CSV parsing library to speed things up. With uniVocity-parsers you can process your 500K records in less than a second. Here is how you can use it to solve your problem:
First let's define a few utility methods to read/write your files:
// opens the file for reading (using UTF-8 encoding)
private static Reader newReader(String pathToFile) {
    try {
        return new InputStreamReader(new FileInputStream(new File(pathToFile)), "UTF-8");
    } catch (Exception e) {
        throw new IllegalArgumentException("Unable to open file for reading at " + pathToFile, e);
    }
}

// creates a file for writing (using UTF-8 encoding)
private static Writer newWriter(String pathToFile) {
    try {
        return new OutputStreamWriter(new FileOutputStream(new File(pathToFile)), "UTF-8");
    } catch (Exception e) {
        throw new IllegalArgumentException("Unable to open file for writing at " + pathToFile, e);
    }
}
Then, we can start reading your daily CSV file, and generate a Map:
public static void main(String... args) {
    // First we parse the daily update file.
    CsvParserSettings settings = new CsvParserSettings();
    // here we tell the parser to read the CSV headers
    settings.setHeaderExtractionEnabled(true);
    // and to select ONLY the following columns.
    // This ensures rows of a fixed size will be returned in case some records come with fewer or more columns than anticipated.
    settings.selectFields("id", "name", "city", "zip", "occupation");
    CsvParser parser = new CsvParser(settings);

    // Here we parse all data into a list.
    List<String[]> dailyRecords = parser.parseAll(newReader("/path/to/daily.csv"));

    // And convert them to a map. IDs are the keys.
    Map<String, String[]> mapOfDailyRecords = toMap(dailyRecords);
    ... // we'll get back here in a second.
This is the code to generate a Map from the list of daily records:
/* Converts a list of records to a map. Uses the element at index 0 as the key. */
private static Map<String, String[]> toMap(List<String[]> records) {
    HashMap<String, String[]> map = new HashMap<String, String[]>();
    for (String[] row : records) {
        // column 0 will always have an ID.
        map.put(row[0], row);
    }
    return map;
}
With the map of records, we can process your master file and generate the list of updates:
private static List<Object[]> processMasterFile(final Map<String, String[]> mapOfDailyRecords) {
    // we'll put the updated data here
    final List<Object[]> output = new ArrayList<Object[]>();

    // configures the parser to process only the columns you are interested in.
    CsvParserSettings settings = new CsvParserSettings();
    settings.setHeaderExtractionEnabled(true);
    settings.selectFields("id", "company", "exp", "salary");

    // All parsed rows will be submitted to the following RowProcessor. This way the bigger master file won't
    // have all its rows stored in memory.
    settings.setRowProcessor(new AbstractRowProcessor() {
        @Override
        public void rowProcessed(String[] row, ParsingContext context) {
            // Incoming rows from MASTER will have the ID at index 0.
            // If the daily update map contains the ID, we'll get the daily row
            String[] dailyData = mapOfDailyRecords.get(row[0]);
            if (dailyData != null) {
                // We got a match. Let's join the data from the daily row with the master row.
                Object[] mergedRow = new Object[8];
                for (int i = 0; i < dailyData.length; i++) {
                    mergedRow[i] = dailyData[i];
                }
                for (int i = 1; i < row.length; i++) { // starts from 1 to skip the ID at index 0
                    mergedRow[i + dailyData.length - 1] = row[i];
                }
                output.add(mergedRow);
            }
        }
    });

    CsvParser parser = new CsvParser(settings);
    // the parse() method will submit all rows to the RowProcessor defined above.
    parser.parse(newReader("/path/to/master.csv"));
    return output;
}
Finally, we can get the merged data and write everything to another file:
    ... // getting back to the main method here
    // Now we process the master data and get a list of updates
    List<Object[]> updatedData = processMasterFile(mapOfDailyRecords);

    // And write the updated data to another file
    CsvWriterSettings writerSettings = new CsvWriterSettings();
    writerSettings.setHeaders("id", "name", "city", "zip", "occupation", "company", "exp", "salary");
    writerSettings.setHeaderWritingEnabled(true);
    CsvWriter writer = new CsvWriter(newWriter("/path/to/updates.csv"), writerSettings);

    // Here we write everything, and get the job done.
    writer.writeRowsAndClose(updatedData);
}
This should work like a charm. Hope it helps.
Disclosure: I am the author of this library. It's open-source and free (Apache V2.0 license).
I would approach the problem in a step-by-step manner.
First, I would parse/read the master CSV file and keep its content in a HashMap, where the key is each record's unique 'id'. For the value, you can store the columns in another hash, or simply create a Java class to hold the information.
Example of hash:
{
    '1' : { 'name': 'Jhon',
            'City': 'Florida',
            'zip' : 50069,
            ....
          }
}
Next, read your daily (comparer) CSV file. For each row, read the 'id' and check whether that key exists in the HashMap you created earlier.
If it exists, access the information you need from the HashMap and write it to a new CSV file.
Also, you might want to consider using a third-party CSV parser to make this task easier; a sketch along those lines follows below.
If you use Maven, you can follow this example I found on the net; otherwise you can just google for an Apache 'csv parser' example:
http://examples.javacodegeeks.com/core-java/apache/commons/csv-commons/writeread-csv-files-with-apache-commons-csv-example/
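As a hedged sketch of this approach with Apache Commons CSV (the library from the link above; file names and the output column order are assumptions, and master values are trimmed because the sample data has spaces after the commas):
import java.io.FileReader;
import java.io.FileWriter;
import java.util.HashMap;
import java.util.Map;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVPrinter;
import org.apache.commons.csv.CSVRecord;

public class CsvMerge {
    public static void main(String[] args) throws Exception {
        // 1. Index the master file by its 'id' column.
        Map<String, CSVRecord> master = new HashMap<>();
        for (CSVRecord r : CSVFormat.DEFAULT.withFirstRecordAsHeader()
                .parse(new FileReader("master.csv"))) {
            master.put(r.get("id").trim(), r);
        }
        // 2. Stream the daily file; on an id match, emit the joined row.
        try (CSVPrinter out = new CSVPrinter(new FileWriter("merged.csv"), CSVFormat.DEFAULT)) {
            out.printRecord("id", "name", "city", "zip", "occupation", "company", "exp", "salary");
            for (CSVRecord d : CSVFormat.DEFAULT.withFirstRecordAsHeader()
                    .parse(new FileReader("daily.csv"))) {
                CSVRecord m = master.get(d.get("id").trim());
                if (m != null) {
                    out.printRecord(d.get("id"), d.get("name"), d.get("city"),
                            d.get("zip"), d.get("occupation"),
                            m.get("company").trim(), m.get("exp").trim(), m.get("salary").trim());
                }
            }
        }
    }
}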