Exporting SqlRowSet to a file - java

I want to export the results of an SQL query, fired through JDBC, to a file; and then import that result, at some point later.
I'm currently doing it by querying the database through a NamedParameterJdbcTemplate of Spring which returns a SqlRowSet that I can iterate. In each iteration, I extract desired fields and dump the result into a CSV file, using PrintWriter.
final SqlRowSet rs = namedJdbcTemplate.queryForRowSet(sql, params);
while (rs.next()) {
// extract the desired fields and write each row to the CSV via PrintWriter
}
The problem is that when I read the file back, the values are all Strings and I need to convert them to their proper types, e.g. Integer, String, Date, etc.
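// Note: in the snippet below, reader is assumed to be a BufferedReader over the
// exported CSV file, and formatter a date formatter such as SimpleDateFormat.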
while (line != null) {
String[] csvLine = line.split(";");
Object[] params = new Object[14];
params[0] = csvLine[0];
params[1] = csvLine[1];
params[2] = Integer.parseInt(csvLine[2]);
params[3] = csvLine[3];
params[4] = csvLine[4];
params[5] = Integer.parseInt(csvLine[5]);
params[6] = Integer.parseInt(csvLine[6]);
params[7] = Long.parseLong(csvLine[7]);
params[8] = formatter.parse(csvLine[8]);
params[9] = Integer.parseInt(csvLine[9]);
params[10] = Double.parseDouble(csvLine[10]);
params[11] = Double.parseDouble(csvLine[11]);
params[12] = Double.parseDouble(csvLine[12]);
params[13] = Double.parseDouble(csvLine[13]);
batchParams.add(params);
line = reader.readLine();
}
Is there a better way to export this SqlRowSet to a file in order to facilitate the import process later on; some way to store the schema for an easier insertion into the DB?

If parsing is your concern, one way of handling this is:
Create a parser factory interface, say ParserFactory
Create a parser interface, say MyParser
Have MyParser implemented using the Factory Method pattern, i.e. IntegerParser implements MyParser, etc.
Have your parser class names as the header row in your CSV
This way your calling code would look like:
String[] headerRow = reader.readLine().split(";"); // Get the 1st row here
Map<String, MyParser> parsers = new HashMap<>();
for (int i = 0; i < headerRow.length; i++) {
if (!parsers.containsKey(headerRow[i]))
parsers.put(headerRow[i], ParserFactory.get(headerRow[i]));
}
line = reader.readLine();
while (line != null) { // From 2nd row onwards
String[] row = line.split(";");
Object[] params = new Object[row.length];
for (int i = 0; i < row.length; i++) {
params[i] = parsers.get(headerRow[i]).parse(row[i]);
}
batchParams.add(params);
line = reader.readLine();
}
You might like to extract the creation of the parser map into its own method. Or let your ParserFactory take the headerRow as a parameter and return the respective parsers as a result. Something like:
String[] headerRow = reader.readLine().split(";"); // Get the 1st row here
Map<String, MyParser> parsers = ParserFactory.getParsers(headerRow);
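The answer never defines MyParser or ParserFactory; here is a minimal sketch of what they could look like (the concrete parser classes, the date format, and the type names in the switch are illustrative assumptions, and Java 8 is assumed for the lambda):
interface MyParser {
    Object parse(String raw);
}
class IntegerParser implements MyParser {
    public Object parse(String raw) { return Integer.parseInt(raw); }
}
class DateParser implements MyParser {
    private final java.text.SimpleDateFormat formatter =
            new java.text.SimpleDateFormat("yyyy-MM-dd"); // format is an assumption
    public Object parse(String raw) {
        try {
            return formatter.parse(raw);
        } catch (java.text.ParseException e) {
            throw new IllegalArgumentException("Bad date: " + raw, e);
        }
    }
}
final class ParserFactory {
    static MyParser get(String typeName) {
        switch (typeName) {
            case "Integer": return new IntegerParser();
            case "Date": return new DateParser();
            default: return raw -> raw; // fall back to the raw String
        }
    }
}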

Related

Create CSV file with columns and values from HashMap

Be gentle, this is my first time using Apache Commons CSV 1.7. I am creating a service to process some CSV inputs, add some additional information from exterior sources, then write out this CSV for ingestion into another system. I store the information that I have gathered in a list of HashMap<String, String>, one for each row of the final output CSV. The HashMap contains the <ColumnName, Value for column> pairs.
I have issues using the CSVPrinter to correctly assign the values of the HashMaps into the rows. I can concatenate the values into a string with commas between the variables; however, this just inserts the whole string into the first column. I cannot define or hardcode the headers since they are obtained from a config file and may change depending on which project uses the service.
Here is some of my code:
try (BufferedWriter writer = Files.newBufferedWriter(
Paths.get(OUTPUT + "/" + project + "/" + project + ".csv"));)
{
CSVPrinter csvPrinter = new CSVPrinter(writer,
CSVFormat.RFC4180.withFirstRecordAsHeader());
csvPrinter.printRecord(columnList);
for (HashMap<String, String> row : rowCollection)
{
//Need to map __record__ to column -> row.key, value -> row.value for whole map.
csvPrinter.printRecord(__record__);
}
csvPrinter.flush();
}
Thanks for your assistance.
You actually have multiple concerns with your technique: how do you maintain column order, how do you print the column names, and how do you print the column values? Here are my suggestions.
Maintain column order. Do not use HashMap, because it is unordered. Instead, use LinkedHashMap, which has a "predictable iteration order" (i.e. it maintains insertion order).
Print column names. Every row in your list contains the column names in the form of key values, but you only want to print the column names once, as the first row of output. The solution is to print the column names before you loop through the rows; get them from the first element of the list.
Print column values. The "billal GHILAS" answer demonstrates a way to print the values of each row.
Here is some code:
try (BufferedWriter writer = Files.newBufferedWriter(
Paths.get(OUTPUT + "/" + project + "/" + project + ".csv"));)
{
CSVPrinter csvPrinter = new CSVPrinter(writer,
CSVFormat.RFC4180.withFirstRecordAsHeader());
// This assumes that the rowCollection will never be empty.
// An anonymous scope block just to limit the scope of the variable names.
{
HashMap<String, String> firstRow = rowCollection.get(0);
int valueIndex = 0;
String[] valueArray = new String[firstRow.size()];
for (String currentValue : firstRow.keySet())
{
valueArray[valueIndex++] = currentValue;
}
csvPrinter.printRecord(valueArray);
}
for (HashMap<String, String> row : rowCollection)
{
int valueIndex = 0;
String[] valueArray = new String[row.size()];
for (String currentValue : row.values())
{
valueArray[valueIndex++] = currentValue;
}
csvPrinter.printRecord(valueArray);
}
csvPrinter.flush();
}
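For reference, here is the "billal GHILAS" approach mentioned above; it looks up each value by column name, so the output order always matches columnList: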
for (HashMap<String,String> row : rowCollection) {
Object[] record = new Object[row.size()];
for (int i = 0; i < columnList.size(); i++) {
record[i] = row.get(columnList.get(i));
}
csvPrinter.printRecord(record);
}

Write List of Maps to a CSV

I have a standalone application, which connects to a SQL database and saves the ResultSet in a list of Maps. This is what I have so far:
List<Map<String, Object>> rows = new ArrayList<>();
stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(queryString);
ResultSetMetaData rsmd = rs.getMetaData(); // properties of the ResultSet
int columnCount = rsmd.getColumnCount();
while (rs.next()) {
Map<String, Object> rowResult = new HashMap<String, Object>(columnCount);
for (int i = 1; i <= columnCount; i++) {
rowResult.put(rsmd.getColumnName(i), rs.getObject(i));
}
rows.add(rowResult);
}
//WRITE TO CSV
String csv = "C:\\Temp\\data.csv";
CSVWriter writer = new CSVWriter(new FileWriter(csv));
//Write the record to file
writer.writeNext(rows);
//close the writer
writer.close();
How do I write this "rows" List to a CSV file with columns? Any clues or suggestions? Your help is appreciated.
Since every record will have the same columns in the same order, then I would just use a List<List<Object>> for the rows.
For the headers, you don't need to get them on every row. Just get them once like so:
List<String> headers = new ArrayList<>();
for (int i = 1; i <= columnCount; i++ )
{
String colName = rsmd.getColumnName(i);
headers.add(colName);
}
Next, you can get the rows like this:
List<List<Object>> rows = new ArrayList<>();
while (rs != null && rs.next())
{
List<Object> row = new ArrayList<>();
for(int i =1; i <=columnCount; i++)
{
row.add(rs.getObject(i));
}
rows.add(row);
}
Finally, to create the CSV file, you can do this:
// create the CSVWriter
String csv = "C:\\Temp\\data.csv";
CSVWriter writer = new CSVWriter(new FileWriter(csv));
// write the header line
writer.writeNext(headers.toArray(new String[0]));
// write the data records
for (List<Object> row : rows)
{
String[] record = new String[row.size()];
for (int i = 0; i < row.size(); i++)
{
Object o = row.get(i);
// handle nulls how you wish here
record[i] = (o == null) ? "null" : o.toString();
}
writer.writeNext(record);
}
// you should close the CSVWriter in a finally block or use a
// try-with-resources statement
writer.close();
Note: in my code examples, I'm using type inference (the <> diamond operator).
See: Try-With-Resources Statement.
Honestly, for what you are trying to do, I would recommend you use the writeAll method in CSVWriter and pass in the ResultSet.
writer.writeAll(rs, true);
The second parameter tells it to include the column names, so the first row in your csv file will be the column names. Then when you read the file back you can translate that into your Map if you want to (though the values will all be Strings unless you know the types when you are reading it).
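For completeness, a minimal sketch of that route, assuming opencsv's CSVWriter as in the question (the output path is the question's own, and the exact package name varies by opencsv version):
String csv = "C:\\Temp\\data.csv";
try (CSVWriter writer = new CSVWriter(new FileWriter(csv))) {
    writer.writeAll(rs, true); // true = include the column names as the first row
}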
Hope that helps.
Scott :)

Accessing custom Lucene attribute from DirectoryReader

I added a custom attribute to my Lucene pipeline like described here (in the "Adding a custom Attribute" section).
Now, after I have built my index (by adding all the documents via IndexWriter), I want to be able to access this attribute when reading the index directory. How do I do this?
What I'm doing now is the following:
DirectoryReader reader = DirectoryReader.open(index);
TermsEnum iterator = null;
for (int i = 0; i < reader.maxDoc(); i++) {
Terms terms = reader.getTermVector(i, "content");
iterator = terms.iterator(iterator);
AttributeSource attributes = iterator.attributes();
SentenceAttribute sentence = attributes.addAttribute(SentenceAttribute.class);
while (true) {
BytesRef term = iterator.next();
if (term == null) {
break;
}
System.out.println(term.utf8ToString());
System.out.println(sentence.getStringSentenceId());
}
}
It doesn't seem to work: I get the same sentenceId all the time.
I use Lucene 4.9.1.
Finally, I solved it. To do it, I used PayloadAttribute to store the data I needed.
To store payloads in the index, first set the storeTermVectorPayloads property of the FieldType, along with the other term-vector flags:
fieldType.setStoreTermVectors(true);
fieldType.setStoreTermVectorOffsets(true);
fieldType.setStoreTermVectorPositions(true);
fieldType.setStoreTermVectorPayloads(true);
Then, for each token during the analysis phase, set the payload attribute:
private final PayloadAttribute payloadAtt = addAttribute(PayloadAttribute.class);
// in incrementToken()
payloadAtt.setPayload(new BytesRef(String.valueOf(myAttr)));
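For context, a minimal sketch of where that call could live, assuming a custom TokenFilter (the class name and the sentence-tracking field are hypothetical; only the payload wiring is from the answer):
import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.PayloadAttribute;
import org.apache.lucene.util.BytesRef;

public final class SentencePayloadFilter extends TokenFilter {
    private final PayloadAttribute payloadAtt = addAttribute(PayloadAttribute.class);
    private int sentenceId = 0; // hypothetical: advance this with your own sentence detection

    public SentencePayloadFilter(TokenStream input) {
        super(input);
    }

    @Override
    public boolean incrementToken() throws IOException {
        if (!input.incrementToken()) {
            return false;
        }
        // attach the current sentence id to the token as its payload
        payloadAtt.setPayload(new BytesRef(String.valueOf(sentenceId)));
        return true;
    }
}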
Then build the index, and after that it's possible to read the payload back this way:
DocsAndPositionsEnum payloads = null;
TermsEnum iterator = null;
BytesRef ref;
Terms termVector = reader.getTermVector(docId, "field");
iterator = termVector.iterator(iterator);
while ((ref = iterator.next()) != null) {
payloads = iterator.docsAndPositions(null, payloads, DocsAndPositionsEnum.FLAG_PAYLOADS);
while (payloads.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
int freq = payloads.freq();
for (int i = 0; i < freq; i++) {
payloads.nextPosition();
BytesRef payload = payloads.getPayload();
// do something with the payload
}
}
}

BIRT: How to remove a dataset parameter programmatically

I want to modify an existing *.rptdesign file and save it under a new name.
The existing file contains a Data Set with a template SQL select statement and several DS parameters.
I'd like to use an actual SQL select statement which uses only part of the DS parameters.
However, the following code results in the exception:
Exception in thread "main" java.lang.RuntimeException: The structure is floating, and its handle is invalid!
at org.eclipse.birt.report.model.api.StructureHandle.getStringProperty(StructureHandle.java:207)
at org.eclipse.birt.report.model.api.DataSetParameterHandle.getName(DataSetParameterHandle.java:143)
at org.eclipse.birt.report.model.api.DataSetHandle$DataSetParametersPropertyHandle.removeParamBindingsFor(DataSetHandle.java:851)
at org.eclipse.birt.report.model.api.DataSetHandle$DataSetParametersPropertyHandle.removeItems(DataSetHandle.java:694)
--
OdaDataSetHandle dsMaster = (OdaDataSetHandle) report.findDataSet("Master");
// find out which DS parameters are actually used
HashSet<String> bindVarsUsed = new HashSet<String>();
...
ArrayList<OdaDataSetParameterHandle> toRemove = new ArrayList<OdaDataSetParameterHandle>();
for (Iterator iter = dsMaster.parametersIterator(); iter.hasNext(); ) {
OdaDataSetParameterHandle dsPara = (OdaDataSetParameterHandle)iter.next();
String name = dsPara.getName();
if (name.startsWith("param_")) {
String bindVarName = name.substring(6);
if (!bindVarsUsed.contains(bindVarName)) {
toRemove.add(dsPara);
}
}
}
PropertyHandle paramsHandle = dsMaster.getPropertyHandle( OdaDataSetHandle.PARAMETERS_PROP );
paramsHandle.removeItems(toRemove);
What is wrong here?
Has anyone used the DE API to remove parameters from an existing Data Set?
I had a similar issue. I resolved it by calling removeItem multiple times, and I also had to re-evaluate the parametersIterator every time.
protected void updateDataSetParameters(OdaDataSetHandle dataSetHandle) throws SemanticException {
int countMatches = StringUtils.countMatches(dataSetHandle.getQueryText(), "?");
int paramIndex = 0;
do {
paramIndex = 0;
PropertyHandle odaDataSetParameterProp = dataSetHandle.getPropertyHandle(OdaDataSetHandle.PARAMETERS_PROP);
Iterator parametersIterator = dataSetHandle.parametersIterator();
while(parametersIterator.hasNext()) {
Object next = parametersIterator.next();
paramIndex++;
if(paramIndex > countMatches) {
odaDataSetParameterProp.removeItem(next);
break;
}
}
if(paramIndex < countMatches) {
paramIndex++;
OdaDataSetParameter dataSetParameter = createDataSetParameter(paramIndex);
odaDataSetParameterProp.addItem(dataSetParameter);
}
} while(countMatches != paramIndex);
}
private OdaDataSetParameter createDataSetParameter(int paramIndex) {
OdaDataSetParameter dataSetParameter = StructureFactory.createOdaDataSetParameter();
dataSetParameter.setName("param_" + paramIndex);
dataSetParameter.setDataType(DesignChoiceConstants.PARAM_TYPE_INTEGER);
dataSetParameter.setNativeDataType(1);
dataSetParameter.setPosition(paramIndex);
dataSetParameter.setIsInput(true);
dataSetParameter.setIsOutput(false);
dataSetParameter.setExpressionProperty("defaultValue", new Expression("<evaluation script>", ExpressionType.JAVASCRIPT));
return dataSetParameter;
}
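A hedged usage sketch, building on the question's dsMaster handle (the new SQL text and the output file name are assumptions):
dsMaster.setQueryText(actualSelectStatement); // actualSelectStatement is an assumption
updateDataSetParameters(dsMaster);
report.saveAs("new-report.rptdesign"); // illustrative file name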

How do I parse a column from a CSV string using jackson CsvMapper or another csv parser?

I have a Java method that receives a CSV string of values and an integer index to reference which column in the CSV string to parse. The method returns the value associated with the integer index in the CSV string.
For example if I have a CSV string with a header and a second row with values defined as:
String csvString = "Entry #,Date Created,Date Updated, IP Address\n"
+ "165,8/22/13 14:46,,11.222.33.444";
and the integer index received by the method was 1, I'd expect the method to return the string "165"
And if the integer index received by the method was 2, I'd expect the method to return the string "8/22/13 14:46"
etc,...
I don't want to just split up the CSV string by commas as that could get ugly and I'm sure that there is a CSV parser that already does some parsing like this. From my Google searches it sounds like OpenCSV or the Jackson CsvMapper can do this.
I've been playing with the com.fasterxml.jackson.dataformat.csv.CsvMapper libary to parse out the appropriate column of this CSV string and here's what I have so far:
int csvFieldIndex; // the integer index passed into my method
String csvString = "Entry #,Date Created,Date Updated, IP Address\n"
+ "165,8/22/13 14:46,,11.222.33.444";
CsvSchema csvSchema = CsvSchema.emptySchema().withHeader();
ObjectReader mapper = new CsvMapper().reader(Map.class).with(csvSchema);
MappingIterator<Map<String, Object>> iter = null;
iter = mapper.readValues(csvString);
while (iter.hasNext()) {
Map<String, Object> row = iter.next();
System.out.Println("row= " + row.toString());
}
But this iterator gives me all the csv values, which is not what I want; I just want the one value associated with my integer index.
Here's the output I get when I run this snippet of code:
row= {Entry #=165, Date Created=8/22/13 14:46, Date Updated=, IP Address=11.222.33.444}
Is there a way I can use the Jackson CsvMapper to do this?
======== UPDATE: ========
Based on feedback from keshlam I was able to parse a column from a CSV with the following code:
CsvSchema csvSchema = CsvSchema.emptySchema().withHeader();
ObjectReader mapper = new CsvMapper().reader(Map.class).with(csvSchema);
MappingIterator<Map<String, Object>> iter = null;
iter = mapper.readValues(csvString);
// iterate over whole csv and store in a map
Map<String, Object> row = null;
while (iter.hasNext()) {
row = iter.next();
}
// now put the set of field names (keys) from this map into an array
String[] csvKeysArray = row.keySet().toArray(new String[0]);
int j = 0;
// loop over this array of keys and compare with csvFieldIndex
for (int i = 0; i < csvKeysArray.length; i++) {
// increment j by 1 since array index starts at 0 but input index starts at 1
j = i + 1;
if (j == csvFieldIndex) {
csvValue = row.get(csvKeysArray[i]).toString();
}
}
return csvValue;
If I'm following what you're asking, and understanding your code correctly (I've never used that CsvMapper), change the final loop to
while (iter.hasNext()) {
Map<String, Object> row = iter.next();
System.out.Println("Entry #= " + row.get("Entry #");
}
If you want to access the columns numerically rather than by name (why?), set up a mapping from number to column name and use that to retrieve the key you pass to the get() operation.
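A minimal sketch of that numeric mapping, under the question's own assumptions (Jackson binds each row to a LinkedHashMap, so the key order follows the CSV header order; csvFieldIndex is the question's 1-based index):
// build the index-to-name mapping once from the row's key order
List<String> columnNames = new ArrayList<>(row.keySet());
// csvFieldIndex is 1-based, the list is 0-based
Object value = row.get(columnNames.get(csvFieldIndex - 1));
String csvValue = String.valueOf(value);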
