Java / MySQL Save data as CSV / all values in one cell - java

Good day,
I would like to export values from a MySQL database as CSV and this only works halfway.
I can export the values as a CSV file, but each value does not get its own cell; instead, everything ends up in a single column.
ResultSet rsE10 = stmt.executeQuery("select * from e10");
datenSchreiber.SchreibDaten("E10", rsE10);

public void SchreibDaten(String fuel, ResultSet kraftstoff) {
    try (CSVPrinter csvPrinter = new CSVPrinter(
            new FileWriter("/Users/XXXXXX/Desktop/" + fuel + ".csv", StandardCharsets.UTF_8),
            CSVFormat.EXCEL)) {
        csvPrinter.printRecords(kraftstoff, true);
    } catch (IOException e) {
        System.err.println(e.getMessage());
    } catch (java.sql.SQLException e) {
        System.err.println(e.getMessage());
    }
}
How do I get each value into its own cell?
Many thanks in advance.
I tried setting the CSV format to DEFAULT, EXCEL and MYSQL, but that doesn't help either.
I can't figure it out from the JavaDoc of the API either.
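A likely cause (an assumption, since the locale isn't stated in the question): the CSV file itself is fine, but Excel in locales that use ';' as the list separator imports a comma-separated file into a single column. A minimal stdlib sketch of writing a semicolon-delimited file with basic quoting; the file name and values are made up:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class SemicolonCsv {

    // Join fields with ';', quoting any field that contains the
    // delimiter, a quote, or a line break (embedded quotes are doubled).
    static String toCsvLine(String... fields) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fields.length; i++) {
            if (i > 0) sb.append(';');
            String f = fields[i];
            if (f.contains(";") || f.contains("\"") || f.contains("\n")) {
                sb.append('"').append(f.replace("\"", "\"\"")).append('"');
            } else {
                sb.append(f);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        Path out = Files.createTempFile("e10", ".csv");
        Files.write(out, List.of(
                toCsvLine("date", "price"),
                toCsvLine("2023-01-01", "1,799")), // decimal comma stays in one cell
                StandardCharsets.UTF_8);
        System.out.println(Files.readString(out));
    }
}
```

With Commons CSV the same effect should be achievable by configuring the delimiter on the format (check your version's API, e.g. CSVFormat.EXCEL.builder().setDelimiter(';').build() in recent releases).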

Related

Spark Save as Text File grouped by Key

I would like to save an RDD to text files grouped by key. Currently I can't figure out how to split the output into multiple files: all output for keys that share the same partition gets written to the same file. I would like to have a different file for each key. Here's my code snippet:
JavaPairRDD<String, Iterable<Customer>> groupedResults = customerCityPairRDD.groupByKey();
groupedResults.flatMap(x -> x._2().iterator())
        .saveAsTextFile(outputPath + "/cityCounts");
This can be achieved by using foreachPartition to save each partition into a separate file.
You can develop your code as follows:
groupedResults.foreachPartition(new VoidFunction<Iterator<Tuple2<String, Iterable<Customer>>>>() {
    @Override
    public void call(Iterator<Tuple2<String, Iterable<Customer>>> rec) throws Exception {
        FSDataOutputStream fsOutputStream = null;
        BufferedWriter writer = null;
        try {
            fsOutputStream = FileSystem.get(new Configuration()).create(new Path("path1"));
            // FSDataOutputStream is a byte stream, so wrap it in an OutputStreamWriter
            writer = new BufferedWriter(new OutputStreamWriter(fsOutputStream, StandardCharsets.UTF_8));
            while (rec.hasNext()) {
                Tuple2<String, Iterable<Customer>> entry = rec.next();
                for (Customer cust : entry._2()) {
                    writer.write(cust.toString());
                    writer.newLine();
                }
            }
        } catch (Exception exp) {
            exp.printStackTrace();
            // Handle exception
        } finally {
            if (writer != null) {
                writer.close(); // also closes the underlying stream
            }
        }
    }
});
Hope this helps.
Ravi
So I figured out how to solve this: convert the RDD to a DataFrame, then partition by key during the write.
Dataset<Row> dataFrame = spark.createDataFrame(customerRDD, Customer.class);
dataFrame.write()
.partitionBy("city")
.text("cityCounts"); // write as text file at file path cityCounts
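One caveat (an assumption, since the Customer schema isn't shown): DataFrameWriter.text() expects a single string column besides the partition columns, so a multi-column Customer DataFrame may need csv() instead, for example:

```java
dataFrame.write()
        .partitionBy("city")
        .csv("cityCounts"); // one sub-directory per distinct city value
```

Either way, partitionBy produces a directory per key value (city=London/, city=Paris/, ...) rather than a single flat file per key.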

How to export different tables from mysql database to different XML file using dbunit?

I am trying to export a database into XML files using DBUnit. I am facing a problem generating a separate XML file for each table; I have not been able to do this.
Can someone help me with this?
The following is the code:
QueryDataSet partialDataSet = new QueryDataSet(connection);
addTables(partialDataSet);
// XML file into which data needs to be extracted
FlatXmlDataSet.write(partialDataSet, new FileOutputStream("C:/Users/name/Desktop/test-dataset_temp.xml"));
System.out.println("Data set written");

static private void addTables(QueryDataSet dataSet) {
    if (tableList == null) return;
    for (Iterator k = tableList.iterator(); k.hasNext(); ) {
        String table = (String) k.next();
        try {
            dataSet.addTable(table);
        } catch (AmbiguousTableNameException e) {
            e.printStackTrace();
        }
    }
}
Now my problem is how do I separate the tables so that I can generate a separate XML file for each table.
Thanks in advance.
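One approach (a sketch, assuming the same connection and tableList as in the question; not tested against a live database): instead of collecting all tables into one data set, build a fresh QueryDataSet per table and write each one to its own file:

```java
for (Iterator k = tableList.iterator(); k.hasNext(); ) {
    String table = (String) k.next();
    QueryDataSet singleTableSet = new QueryDataSet(connection);
    singleTableSet.addTable(table);
    // one XML file per table, named after the table
    try (FileOutputStream out =
            new FileOutputStream("C:/Users/name/Desktop/" + table + ".xml")) {
        FlatXmlDataSet.write(singleTableSet, out);
    }
}
```

The file path is the one from the question; AmbiguousTableNameException and the stream's IOException still need handling as in the original addTables method.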

java: reading large file with charset

My file is 14 GB and I would like to read it line by line and export the data to an Excel file.
As the file includes different languages, such as Chinese and English,
I tried to use FileInputStream with UTF-16 for reading the data,
but it results in java.lang.OutOfMemoryError: Java heap space.
I have tried increasing the heap space but the problem still exists.
How should I change my file-reading code?
createExcel(); // open an Excel file
try {
    // succeeds, but cannot read and output the different languages:
    // br = new BufferedReader(
    //         new FileReader("C:\\Users\\brian_000\\Desktop\\appdatafile.json"));

    // results in java.lang.OutOfMemoryError: Java heap space:
    br = new BufferedReader(new InputStreamReader(
            new FileInputStream("C:\\Users\\brian_000\\Desktop\\appdatafile.json"),
            "UTF-16"));
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (UnsupportedEncodingException e) {
    e.printStackTrace();
}
System.out.println("can be printed");
String line;
int i = 0;
try {
    while ((line = br.readLine()) != null) {
        // process the line
        try {
            System.out.println("cannot be printed");
            // some statements for storing the data in variables,
            // then a function for writing the variables into Excel
            writeToExcel(platform, kind, title, shareUrl, contentRating,
                    userRatingCount, averageUserRating, marketLanguage, pricing,
                    majorVersionNumber, releaseDate, downloadsCount);
        } catch (com.google.gson.JsonSyntaxException exception) {
            System.out.println("error");
        }
        // only take the first 1000 rows
        i++;
        if (i == 1000) {
            br.close();
            break;
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}
closeExcel();
public static void writeToExcel(String platform, String kind, String title, String shareUrl,
        String contentRating, String userRatingCount, String averageUserRating,
        String marketLanguage, String pricing, String majorVersionNumber,
        String releaseDate, String downloadsCount) {
    currentRow++;
    System.out.println(currentRow);
    if (currentRow > 1000000) {
        currentsheet++;
        sheet = workbook.createSheet("apps" + currentsheet, 0);
        createFristRow();
        currentRow = 1;
    }
    try {
        // character id
        Label label = new Label(0, currentRow, String.valueOf(currentRow), cellFormat);
        sheet.addCell(label);
        // 12 statements like these write the data to Excel
        label = new Label(1, currentRow, platform, cellFormat);
        sheet.addCell(label);
    } catch (WriteException e) {
        e.printStackTrace();
    }
}
Excel, UTF-16
As mentioned, the problem is likely caused by the Excel document construction, not by the reading. Try whether UTF-8 yields a smaller size; for instance, Chinese HTML still compresses better with UTF-8 than with UTF-16 because of the many ASCII characters.
Object creation in Java
You can share common small strings. This is useful for String.valueOf(row) and the like; cache only strings with a small length. I assume the cellFormat is fixed.
DIY with xlsx
Excel builds a costly DOM.
If CSV text (with a Unicode BOM marker) is no option (you could give it the extension .xls so Excel opens it), try generating an xlsx yourself.
Create an example workbook in xlsx format.
This is a zip format you can process in Java most easily with a zip file system.
For Excel there is a content XML and a shared-strings XML; cell values are shared via an index from the content to the shared strings.
Then no overflow happens, as you write buffer-wise.
Or use a JDBC driver for Excel. (No recent experience on my side; maybe JDBC/ODBC.)
Best
Excel is hard to use with that much data. Consider putting more effort into using a database, or write every N rows into a separate, proper Excel file. Maybe you can later import them with Java into one document. (I doubt it.)
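To illustrate the CSV-with-BOM route suggested above (a sketch with made-up file names; the JSON parsing and the Excel part are left out): read the large file line by line with an explicit charset, streaming each row straight to a UTF-8 CSV that Excel can detect via the BOM, so memory use stays flat regardless of file size:

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class StreamToCsv {

    // Copy up to maxRows lines from input (UTF-16) to a UTF-8 file,
    // prefixing a BOM so Excel detects the encoding. Returns rows written.
    static int convert(Path input, Path output, int maxRows) throws IOException {
        int rows = 0;
        try (BufferedReader br = Files.newBufferedReader(input, StandardCharsets.UTF_16);
             BufferedWriter bw = Files.newBufferedWriter(output, StandardCharsets.UTF_8)) {
            bw.write('\uFEFF'); // byte-order mark
            String line;
            while (rows < maxRows && (line = br.readLine()) != null) {
                bw.write(line); // real code would parse JSON and emit CSV fields here
                bw.newLine();
                rows++;
            }
        }
        return rows;
    }

    public static void main(String[] args) throws IOException {
        Path in = Files.createTempFile("appdata", ".json");
        Files.write(in, "{\"title\":\"你好\"}\n{\"title\":\"hello\"}\n"
                .getBytes(StandardCharsets.UTF_16));
        Path out = Files.createTempFile("appdata", ".csv");
        System.out.println(convert(in, out, 1000) + " rows written");
    }
}
```

Only one line is held in memory at a time, which avoids the heap-space error; the 1000-row cap mirrors the question's early break.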

using CSVWriter to export database tables with BLOB

I've already tried exporting my database tables to CSV using the CSVWriter.
But my tables contain BLOB data. How can I include it in my export?
Later on, I'm going to import the exported CSV using CSVReader. Can anyone share some concepts?
This is part of my code for the export:
ResultSet res = st.executeQuery("select * from " + db + "." + obTableNames[23]);
int columnCount = getColumnCount(res);
try {
    File filename = new File(dir, obTableNames[23] + ".csv");
    fw = new FileWriter(filename);
    CSVWriter writer = new CSVWriter(fw);
    writer.writeAll(res, false);
    int colType = res.getMetaData().getColumnType(columnCount);
    dispInt(colType);
    writer.flush();
    writer.close();
} catch (IOException e) {
    e.printStackTrace();
}
Did you take a look at the encodeBase64String(byte[] data) method from the Base64 class provided by Apache Commons Codec?
It "encodes binary data using the base64 algorithm but does not chunk the output".
This should allow you to turn your Binary Large Object into an encoded string and incorporate it in your CSV.
People on the other side can then use decodeBase64(String data) to get the BLOB back again.
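Since Java 8 the same round trip also works without Apache Commons, using the stdlib java.util.Base64 (a sketch with made-up bytes; fetching the BLOB from the ResultSet, e.g. via getBytes, is left out):

```java
import java.util.Arrays;
import java.util.Base64;

public class BlobCsvRoundTrip {

    // Encode raw BLOB bytes into a single-line string safe for a CSV cell.
    static String encodeBlob(byte[] blob) {
        return Base64.getEncoder().encodeToString(blob);
    }

    // Decode the CSV cell back into the original bytes on import.
    static byte[] decodeBlob(String cell) {
        return Base64.getDecoder().decode(cell);
    }

    public static void main(String[] args) {
        byte[] blob = {0, 1, 2, (byte) 0xFF}; // pretend image bytes
        String cell = encodeBlob(blob);
        System.out.println("CSV cell: " + cell);
        System.out.println("round trip ok: " + Arrays.equals(blob, decodeBlob(cell)));
    }
}
```

Base64 output contains no commas, quotes or newlines, so the encoded cell needs no special CSV escaping.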

Java ME record store problem

I save data to a record store. While the application is running it works fine, but when I restart the application the data in the record store is lost.
Here is my load command:
try {
    int i = 1;
    display.setCurrent(list2);
    RecordStore RS = RecordStore.openRecordStore("recordStore", true);
    RecordEnumeration re = RS.enumerateRecords(null, null, true);
    adresaURL ad = new adresaURL();
    System.out.println("nacteno");
    while (re.hasNextElement()) {
        byte br[] = RS.getRecord(i);
        ad.setPopis(new String(br));
        br = RS.getRecord(i + 1);
        ad.setUrl(new String(br));
        System.out.println(ad.getPopis());
        System.out.println(ad.getUrl());
        i += 2;
        adresy.addElement(ad);
        list2.append(ad.getPopis(), null);
        System.out.println("nacteno2");
    }
    RS.closeRecordStore();
} catch (Exception e) {
    e.printStackTrace();
}
Yeah, that won't work.
If you use a RecordEnumeration to iterate through your RMS (as you are), you must use RecordEnumeration.nextRecord() to retrieve the record data; you are using RecordStore.getRecord() instead.
RecordEnumeration.nextRecord() advances your RecordEnumeration on by one. As you never call it, your loop:
while (re.hasNextElement()) {
...
}
will never end!
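A corrected loop along those lines (a sketch, assuming records were stored as popis/url pairs as in the question; untested, since it targets MIDP):

```java
while (re.hasNextElement()) {
    adresaURL ad = new adresaURL(); // a fresh object per pair, not one shared instance
    ad.setPopis(new String(re.nextRecord())); // nextRecord() advances the enumeration
    ad.setUrl(new String(re.nextRecord()));
    adresy.addElement(ad);
    list2.append(ad.getPopis(), null);
}
```

Note that nextRecord() returns the record data directly and moves the enumeration forward, so the loop terminates and no record IDs need to be tracked by hand.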
