CSVWriter adds blank line between each row - java

I am trying to write some data into a CSV file, but some values end up at the end of the previous line and there is a blank line between each row. I have also tried collecting the data into an ArrayList of strings and then using the csvWriter.writeAll method, but it gives me the same result.
try (FileWriter fileWriter1 = new FileWriter(csvPath);
     CSVWriter csvWriter = new CSVWriter(fileWriter1)) {
    // Set up Git and the repository.
    Git git = Git.open(new File(Cpath));
    Repository repository = FileRepositoryBuilder.create(new File(Cpath));
    logger.info(String.valueOf(repository));
    List<Ref> branches = git.branchList().call();
    for (Ref ref : branches) {
        logger.info(ref.getName());
    }
    Iterable<RevCommit> commits = git.log().all().call();
    DateFormat df = new SimpleDateFormat("MM/dd/yyyy HH:mm:ss");
    for (RevCommit revCommit : commits) { // iterate over all commits
        String date = df.format(revCommit.getAuthorIdent().getWhen());
        csvWriter.writeNext(new String[]{date, revCommit.getFullMessage()});
    }
}

The CSV format loosely follows RFC 4180, and most CSV writers consistently use \r\n (the MS-DOS/Windows convention) as the end-of-line sequence.
When you read that with an editor (or a terminal) that expects a single \n as the line ending (the Unix convention), you wrongly see additional blank lines even though the file is perfectly correct.
Ref: Wikipedia
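To illustrate the effect independently of OpenCSV: if a \r\n-terminated file is split naively on \n alone, every row keeps a trailing carriage return, which some viewers render as an extra blank line. A minimal stdlib sketch:

```java
public class CrlfDemo {
    // Splitting on '\n' alone leaves a trailing '\r' on every row.
    static String[] naiveSplit(String fileContent) {
        return fileContent.split("\n");
    }

    public static void main(String[] args) {
        String csv = "date,message\r\n01/02/2020 10:00:00,initial commit\r\n";
        for (String row : naiveSplit(csv)) {
            // Make the leftover '\r' visible as an escape sequence.
            System.out.println(row.replace("\r", "\\r"));
        }
    }
}
```

If you really need Unix line endings in the output, OpenCSV's CSVWriter also has a constructor overload that takes an explicit lineEnd string.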

Related

How to replace a CSV file after removing a column in Java

I am new to Java. I was successfully able to read my CSV file from my local file location and was able to identify which column needed to be deleted for my requirements. However, I was not able to delete the required column and write the file into my local folder. Is there a way to resolve this issue? I have used the following code:
CSVReader reader = new CSVReader(new FileReader(fileName));
String[] nextLine;
while ((nextLine = reader.readNext()) != null) {
    System.out.println(nextLine[15]);
}
All I would like to do is to remove the column having index 15 and write the file as a CSV file in my local folder.
I'm assuming you're using the OpenCSV library.
In order to make your code work, you have to fix 2 issues:
You need a writer to write your modified CSV to. OpenCSV provides a CSVWriter class for this purpose.
You need to convert your line (which is currently a String array) into a list to be able to remove an element, then convert it back into an array to match what the CSVWriter.writeNext method expects.
Here's some code that does this:
CSVReader reader = new CSVReader(new FileReader(fileName));
CSVWriter writer = new CSVWriter(new FileWriter(outFileName));
String[] origLine;
while ((origLine = reader.readNext()) != null) {
    List<String> lineList = new ArrayList<>(Arrays.asList(origLine));
    lineList.remove(15);
    String[] newLine = lineList.toArray(new String[lineList.size()]);
    writer.writeNext(newLine, true);
}
writer.close();
reader.close();
Some additional remarks:
The code probably needs a bit more error handling etc if it's to be used in a production capacity.
List indices in Java start at 0, so remove(15) actually removes the 16th column of the file.
The code writes its output to a separate file. Trying to use the same file name for input and output will not work.
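If you would rather avoid the array-to-list round trip, the same column drop can be done with System.arraycopy; a minimal stdlib sketch (dropColumn is a hypothetical helper name, not part of OpenCSV):

```java
public class DropColumn {
    // Returns a copy of row with the element at idx removed.
    static String[] dropColumn(String[] row, int idx) {
        String[] out = new String[row.length - 1];
        System.arraycopy(row, 0, out, 0, idx);
        System.arraycopy(row, idx + 1, out, idx, row.length - idx - 1);
        return out;
    }

    public static void main(String[] args) {
        String[] row = {"a", "b", "c", "d"};
        // Drops index 2, leaving a,b,d
        System.out.println(String.join(",", dropColumn(row, 2)));
    }
}
```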

Export CSV with date stamp in the filename

I have written a simple SQL query to fetch a few columns from a table.
I have created an OIM script which currently runs the SQL query and exports the result to a CSV. However, I would like to add the date to the file name and am not able to find any clue how.
I am using CSVWriter.
CSVWriter writer = new CSVWriter(new FileWriter("Path", false));
ResultSetMetaData Mdata = rs.getMetaData();
What is the exact statement to add to get the date into the file name?
Expected file name: Filename_Date.csv
Maybe get the current date as a string when creating the FileWriter and then just append it to the name as follows:
String date = new SimpleDateFormat("yyyy-MM-dd").format(new Date(System.currentTimeMillis()));
new FileWriter("Filename_" + date + ".csv", false);
You can also change the format of the timestamp using SimpleDateFormat's formatting rules: https://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html
Full timestamp:
String fileName = String.format("%s.csv", DateTimeFormatter.ISO_DATE_TIME.format(LocalDateTime.now()));
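Note that ISO_DATE_TIME output contains ':' characters, which are not allowed in Windows file names. A filename-safe variant using java.time (the pattern and helper name here are just one possible choice):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class TimestampedName {
    // Colon-free pattern so the name is valid on Windows as well.
    static String csvName(String base, LocalDateTime now) {
        return base + "_" + now.format(DateTimeFormatter.ofPattern("yyyyMMdd_HHmmss")) + ".csv";
    }

    public static void main(String[] args) {
        System.out.println(csvName("Filename", LocalDateTime.now()));
    }
}
```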
Thank you everyone for your responses; I finally figured it out.
String filepath = "C:\\Users\\Test\\";
String filename = filepath + "Output_" + new SimpleDateFormat("d MMMM yyyy").format(new Date()) + ".csv";
CSVWriter writer = new CSVWriter(new FileWriter(filename, false));

Splitting CSV input

I am trying to get data out of a CSV file. However, if I try to read the data so I can use the individual values inside it, it prints extra stuff like:
x����sr��java.util.ArrayListx����a���I��sizexp������w������t��17 mei 2017t��Home - Gastt��4 - 1t��(4 - 0)t��
With this code:
FileInputStream in = openFileInput("savetest13.dat");
BufferedReader reader = new BufferedReader(new InputStreamReader(in));
List<String[]> resultList = new ArrayList<>();
String csvLine;
while ((csvLine = reader.readLine()) != null) {
    String[] row = csvLine.split(",");
    out.println("while gepakt");
    out.println(row);
    date = row[0];
    out.println("date: " + date);
    resultList.add(row);
    txtTest.setText(date);
}
But whenever I read the file to check what data it contains, I get exactly the data I put in. However, I can't manage to split the data in stuff:
FileInputStream in = openFileInput("savetest13.dat");
ObjectInputStream ois = new ObjectInputStream(in);
List stuff = (List) ois.readObject();
txtTest.setText(String.valueOf(stuff));
[17 mei 2017, Home - Guest, 2 - 1, (2 - 0), ]
I am trying to get them separated into date, names, score1, and score2.
Which of the two would be better to use, and how can I get the correct output, which I am failing to obtain?
You are not writing CSV to your output file; rather, you are using standard Java serialization (ObjectOutputStream#writeObject(...)) to create that file. Try using a CSV library to write/read data in CSV format (see here), and before that, start here to learn about CSV, because
[17 mei 2017, Home - Guest, 2 - 1, (2 - 0), ]
is not CSV, but only the output of toString() of the list you are using.
Here is an easy way to write a correctly formatted CSV file. This is a simple example which does not take into account the need to escape any commas found in your data. You can open the file created in Excel, Google Sheets, OpenOffice, etc. and see that it is formatted correctly.
final String COMMA = ",";
final String NEW_LINE = System.getProperty("line.separator");

ArrayList<String> myRows = new ArrayList<String>();
// add comma-delimited rows to the ArrayList
myRows.add("date" + COMMA +
        "names" + COMMA +
        "score1" + COMMA +
        "score2"); // etc.
// optional - insert field names into the first row
myRows.add(0, "[Date]" + COMMA +
        "[Names]" + COMMA +
        "[Score1]" + COMMA +
        "[Score2]");
// get a writer
final String fileName = "myFileName.csv";
final String dirPath = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOCUMENTS).getAbsolutePath();
File myDirectory = new File(dirPath);
FileWriter writer = new FileWriter(new File(myDirectory, fileName));
// write the rows to the file
for (String myRow : myRows) {
    writer.append(myRow + NEW_LINE);
}
writer.close();

How can I parse a CSV file (Excel, not comma-separated) in Java?

I have CSV files (Excel) which have data in them, and I need to parse the data using Java.
The data in those files is not separated by commas; the CSV files have a number of columns and a number of rows (each cell has data) where all the data is written.
I need to go through all the files until I reach the EOF (end of file) of each one and parse the data.
The files also contain empty rows, so an empty row is not a criterion for stopping the parse; I think only EOF indicates that I've reached the end of a specific file.
Many thanks.
You can use opencsv to parse the Excel CSV. I've used this myself; all you need to do is split on the ';'. Empty cells will be parsed as well.
You can find info here : http://opencsv.sourceforge.net/
And to parse the Excel CSV you can do:
CSVReader reader = new CSVReader(new FileReader("yourfile.csv"), ';');
Aside from other suggestions, I would offer Jackson CSV module. Jackson has very powerful data-binding functionality, and CSV module allows reading/writing as CSV as an alternative to JSON (or XML, YAML, and other supported formats). So you can also do conversions between other data formats, in addition to powerful CSV-to/from-POJO binding.
Create a stream object to read the CSV file:
FileInputStream fis = new FileInputStream("FileName.CSV");
BufferedInputStream bis = new BufferedInputStream(fis);
InputStreamReader isr = new InputStreamReader(bis);
Read the InputStream and store the file contents in a String object.
Then use a StringTokenizer with ',' (comma) as the delimiter; you will get the tokens.
Then manipulate the tokens to get the values.
String str = "This is String , split by StringTokenizer, created by mkyong";
StringTokenizer st = new StringTokenizer(str);

System.out.println("---- Split by space ------");
while (st.hasMoreElements()) {
    System.out.println(st.nextElement());
}

System.out.println("---- Split by comma ',' ------");
StringTokenizer st2 = new StringTokenizer(str, ",");
while (st2.hasMoreElements()) {
    System.out.println(st2.nextElement());
}
Thanks,
Pavan
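One caution with StringTokenizer for CSV: it silently skips empty tokens, so empty cells (which the question explicitly has) disappear. String.split with a negative limit preserves them, including trailing ones. A minimal stdlib sketch:

```java
import java.util.Arrays;
import java.util.StringTokenizer;

public class EmptyCellDemo {
    public static void main(String[] args) {
        String line = "a;;c;";

        // StringTokenizer collapses consecutive delimiters: empty cells vanish.
        StringTokenizer st = new StringTokenizer(line, ";");
        int count = 0;
        while (st.hasMoreTokens()) {
            st.nextToken();
            count++;
        }
        System.out.println("StringTokenizer saw " + count + " cells"); // 2

        // split with limit -1 keeps empty cells, including trailing ones.
        String[] cells = line.split(";", -1);
        System.out.println(Arrays.toString(cells)); // [a, , c, ]
    }
}
```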
Suppose you have the CSV file content in the form of a string:
String fileContent;
Generally, CSV file content is parsed into a List<List<String>>.
First split the file content into a list of rows:
final List<String> rows = Lists.newArrayList(fileContent.split("[\\r\\n]+"));
Then use OpenCSV's CSVParser to parse each comma-separated line into a List<String>:
final CSVParser parser = new CSVParser();
final List<List<String>> csvDetails = new ArrayList<>();
rows.forEach(t -> {
    try {
        csvDetails.add(Lists.newArrayList(parser.parseLine(t)));
    } catch (Exception e) {
        throw new RuntimeException("Exception occurred while parsing the data", e);
    }
});

DbUnit NoSuchTableException - Workaround for long table names in Oracle

I'm working on creating a test suite that runs on multiple databases using DbUnit XML. Unfortunately, yesterday I discovered that some table names in our schema are over 30 characters and are truncated for Oracle. For example, a table named unusually_long_table_name_error in MySQL is named unusually_long_table_name_erro in Oracle. This means that my DbUnit file contains lines like <unusually_long_table_name_error col1="value1" col2="value2" />. These lines throw a NoSuchTableException when running the tests on Oracle.
Is there a programmatic workaround for this? I'd really like to avoid generating special XML files for Oracle. I looked into a custom MetadataHandler, but it returns lots of java.sql datatypes that I don't know how to intercept/spoof. I could read the XML myself, truncate each table name to 30 characters, write that out to a temp file or StringBufferInputStream, and then use that as input to my DataSetBuilder, but that seems like a whole lot of steps to accomplish very little. Maybe there's some ninja Oracle trick with synonyms or stored procedures or goodness-knows-what-else that could help me. Is one of these ideas clearly better than the others? Is there some other approach that would blow me away with its simplicity and elegance? Thanks!
In light of the lack of answers, I ended up going with my own suggested approach, which:
Reads the .xml file
Regex's out the table name
Truncates the table name if it's over 30 characters
Appends the (potentially modified) line to a StringBuilder
Feeds that StringBuilder into a ByteArrayInputStream, suitable for passing into a DataSetBuilder
public InputStream oracleWorkaroundStream(String fileName) throws IOException
{
    String ls = System.getProperty("line.separator");
    // This pattern isolates the table name from the rest of the line
    Pattern pattern = Pattern.compile("(\\s*<)(\\w+)(.*/>)");
    FileInputStream fis = new FileInputStream(fileName);
    // Use a StringBuilder for better performance over repeated concatenation
    StringBuilder sb = new StringBuilder(fis.available() * 2);
    InputStreamReader isr = new InputStreamReader(fis, "UTF-8");
    BufferedReader buff = new BufferedReader(isr);
    while (buff.ready())
    {
        // Read a line from the source xml file
        String line = buff.readLine();
        Matcher matcher = pattern.matcher(line);
        // See if the line contains a table name
        if (matcher.matches())
        {
            String tableName = matcher.group(2);
            if (tableName.length() > 30)
            {
                tableName = tableName.substring(0, 30);
            }
            // Append the (potentially modified) line
            sb.append(matcher.group(1));
            sb.append(tableName);
            sb.append(matcher.group(3));
        }
        else
        {
            // Some lines don't have table names (<dataset>, <?xml?>, etc.)
            sb.append(line);
        }
        sb.append(ls);
    }
    return new ByteArrayInputStream(sb.toString().getBytes("UTF-8"));
}
EDIT: Switched from repeated String concatenation to StringBuilder, which gives a huge performance boost
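The table-name rewrite can be exercised in isolation. A minimal stdlib sketch of the same regex and truncation logic (truncateTable is a hypothetical helper name introduced here for illustration):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TruncateDemo {
    private static final Pattern ROW = Pattern.compile("(\\s*<)(\\w+)(.*/>)");

    // Truncates the element name to 30 characters, Oracle's classic identifier limit.
    static String truncateTable(String line) {
        Matcher m = ROW.matcher(line);
        if (!m.matches()) {
            return line; // <dataset>, <?xml?>, etc. pass through unchanged
        }
        String name = m.group(2);
        if (name.length() > 30) {
            name = name.substring(0, 30);
        }
        return m.group(1) + name + m.group(3);
    }

    public static void main(String[] args) {
        System.out.println(truncateTable("<unusually_long_table_name_error col1=\"value1\" />"));
        // → <unusually_long_table_name_erro col1="value1" />
    }
}
```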
