Checking which parameter is missing from file content - java

I have a TransferReader class which reads a file containing transfer data from one bank account to another, using the following form:
SenderAccountID,ReceiverAccountID,Amount,TransferDate
"473728292,474728298,1500.00,2019-10-17 12:34:12" (unmodified string)
Suppose that the file has been modified before being read so that one of the above-mentioned parameters is missing, and I want to check which of them is missing.
"474728298,1500.00,2019-10-17 12:34:12" (modified string)
I am using a BufferedReader to read each line, and then splitting each line into a String[] using String.split(",") with a comma as the delimiter.

As you have already realized, because the Sender Account ID and the Receiver Account ID sit right next to one another within a record, there is no real way of knowing which of the two is missing unless a delimiter remains in its place to indicate a null value. There are, however, mechanisms available to determine that it is indeed one of the two that is missing; deciding which one will require user scrutiny, and even then that may not be good enough. The other record columns, like Amount and Transfer Date, can be validated easily or, if missing, flagged in a specific file data status log.
Below is some code that will read a data file (named Data.csv) and log potential data line (record) errors into a List, which is iterated through and displayed in the console window when the read is complete. There are also some small helper methods. Here is the code:
private void checkDataFile(String filePath) {
String ls = System.lineSeparator();
List<String> validationFailures = new ArrayList<>();
StringBuilder sb = new StringBuilder();
// 'Try With Resources' used here to auto-close reader.
try (BufferedReader reader = new BufferedReader(new FileReader(filePath))) {
String line;
int lineCount = 0;
// Read the file line-by-line.
while ((line = reader.readLine()) != null) {
line = line.trim();
lineCount++;
if (lineCount == 1 || line.equals("")) {
continue;
}
sb.delete(0, sb.capacity()); // Clear the StringBuilder object
// Start the Status Log
sb.append("File Line Number: ").append(lineCount)
.append(" (\"").append(line).append("\")").append(ls);
// Split line into an Array based on a comma delimiter,
// regardless of any spacing around the delimiter.
String[] lineParts = line.split("\\s*,\\s*");
/* Validate each file line. Log any line that fails
any validation for any record column data into a
List Interface object named: validationFailures
*/
// Are there 4 Columns of data in each line...
if (lineParts.length < 4) {
sb.append("\t- Invalid Column Count!").append(ls);
// Which column is missing...
// *** You may need to add more conditions to suit your needs. ***
if (checkAccountIDs(lineParts[0]) && lineParts.length >= 2 && !checkAccountIDs(lineParts[1])) {
sb.append("\t- Either the 'Sender Account ID' or the "
+ "'ReceiverAccountID' is missing!").append(ls);
}
else if (lineParts.length >= 3 && !checkAmount(lineParts[2])) {
sb.append("\t- The 'Amount' value is missing!").append(ls);
}
else if (lineParts.length < 4) {
sb.append("\t- The 'Transfer Date' is missing!").append(ls);
}
}
else {
// Is SenderAccountID data valid...
if (!checkAccountIDs(lineParts[0])) {
sb.append("\t- Invalid Sender Account ID in column 1! (")
.append(lineParts[0].equals("") ? "Null" :
lineParts[0]).append(")");
if (lineParts[0].length() < 9) {
sb.append(" <-- Not Enough Or No Digits!").append(ls);
}
else if (lineParts[0].length() > 9) {
sb.append(" <-- Too Many Digits!").append(ls);
}
else {
sb.append(" <-- Not All Digits!").append(ls);
}
}
// Is ReceiverAccountID data valid...
if (!checkAccountIDs(lineParts[1])) {
sb.append("\t- Invalid Receiver Account ID in coloun 2! (")
.append(lineParts[1].equals("") ? "Null" :
lineParts[1]).append(")");
if (lineParts[1].length() < 9) {
sb.append(" <-- Not Enough Or No Digits!").append(ls);
}
else if (lineParts[1].length() > 9) {
sb.append(" <-- Too Many Digits!").append(ls);
}
else {
sb.append(" <-- Not All Digits!").append(ls);
}
}
// Is Amount data valid...
if (!checkAmount(lineParts[2])) {
sb.append("\t- Invalid Amount Value in column 3! (")
.append(lineParts[2].equals("") ? "Null" :
lineParts[2]).append(")").append(ls);
}
// Is TransferDate data valid...
if (!checkTransferDate(lineParts[3], "yyyy-MM-dd HH:mm:ss")) {
sb.append("\t- Invalid Transfer Date Timestamp in column 4! (")
.append(lineParts[3].equals("") ? "Null" :
lineParts[3]).append(")").append(ls);
}
}
if (!sb.toString().equals("")) {
validationFailures.add(sb.toString());
}
}
}
catch (FileNotFoundException ex) {
System.err.println(ex.getMessage());
}
catch (IOException ex) {
System.err.println(ex.getMessage());
}
// Display the Log...
String timeStamp = new SimpleDateFormat("yyyy/MM/dd - hh:mm:ssa").
format(new Timestamp(System.currentTimeMillis()));
String dispTitle = "File Data Status at " + timeStamp.toLowerCase()
+ " <:-:> (" + filePath + "):";
System.out.println(dispTitle + ls + String.join("",
Collections.nCopies(dispTitle.length(), "=")) + ls);
if (validationFailures.size() > 0) {
for (String str : validationFailures) {
if (str.split(ls).length > 1) {
System.out.println(str);
System.out.println(String.join("", Collections.nCopies(80, "-")) + ls);
}
}
}
else {
System.out.println("No Issues Detected!" + ls);
}
}
private boolean checkAccountIDs(String accountID) {
return (accountID.matches("\\d+") && accountID.length() == 9);
}
private boolean checkAmount(String amount) {
return amount.matches("-?\\d+(\\.\\d+)?");
}
private boolean checkTransferDate(String transferDate, String format) {
return isValidDateString(transferDate, format);
}
private boolean isValidDateString(String dateToValidate, String dateFormat) {
if (dateToValidate == null || dateToValidate.equals("")) {
return false;
}
SimpleDateFormat sdf = new SimpleDateFormat(dateFormat);
sdf.setLenient(false);
try {
// If not valid, it will throw a ParseException
Date date = sdf.parse(dateToValidate);
return true;
}
catch (ParseException e) {
return false;
}
}
I'm not exactly sure what your particular application process will ultimately entail, but if other processes are accessing the file and making modifications to it, then it may be wise to utilize a locking mechanism to lock the file during your particular process and unlock it when it is done. This will, however, most likely require you to use a different reading algorithm, since an exclusive lock on a file must be obtained through a writable channel. The FileChannel and FileLock classes from the java.nio package could possibly assist you here. There are examples of how to use these classes on Stack Overflow.
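For illustration only, here is a minimal sketch of that idea (the class and method names are my own, not part of the code above): a shared (read) lock is taken on a read-only FileChannel before the file is read. FileLock is advisory, so it only protects against other processes that also use locking, and an exclusive lock would additionally require a writable channel.
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class LockedRead {
    // Reads the whole file while holding a shared (read) lock. Cooperating
    // writers that also use FileLock are blocked until the lock is released.
    static String readWhileLocked(Path file) throws IOException {
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ);
             FileLock lock = channel.lock(0L, Long.MAX_VALUE, true)) { // true = shared lock
            ByteBuffer buffer = ByteBuffer.allocate((int) channel.size());
            while (channel.read(buffer) > 0) {
                // keep filling the buffer until the whole file has been read
            }
            return new String(buffer.array(), 0, buffer.position(), StandardCharsets.UTF_8);
        }
    }
}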

Related

The operator && is undefined for the argument String,boolean on JAVA?

I'm a junior Java developer who has been entrusted with a Java tool.
I have the following problem:
The tool takes 2 CSV files with specific fields as input.
It then generates 2 CSV files as output, a first and a second output.
Both output files have the same fields; the first output is based on some conditions and the second output on others.
The two output files contain different reconciliations of the data. Some records of the file have the same ID.
Example:
record1 = ID10-name One,
record2 = ID10-Blue Two,
record3 = ID10-name Three
One of the conditions is as follows:
if (line.getName().toLowerCase().contains("Blue".toLowerCase())
        || line.getName().equalsIgnoreCase("Orange")) {
    return true;
The method that implements this returns a boolean, and all the logic of the tool is based on it. The tool processes the file line by line.
Iterator<BaseElaborazione> itElab = result.iterator();
while (itElab.hasNext()) {
    BaseElaborazione riga = itElab.next();
In the SECOND OUTPUT file I find a line/record whose name begins with Blue. The tool rightly takes that line and inserts it into the second output file, because every record whose name (getName) contains Blue or Orange goes there.
I should instead group all the lines with the same ID, even if only one of them has a name containing Blue.
Currently the tool does this:
FIRST FILE OUTPUT
record1 = ID10-name
record3 = ID10-name Three
SECOND FILE OUTPUT
record2 = ID10-Blue Two
The expected output is
FIRST FILE OUTPUT
nothing, because one record in the group of IDs contains a name with Blue
SECOND FILE OUTPUT
record1 = ID10-name
record2 = ID10-Blue Two
record3 = ID10-name Three
I think it's something like this, but it doesn't work:
if (line.getID() && line.getCollector().toLowerCase().contains("Blue".toLowerCase())
|| line.getName().equalsIgnoreCase("black")) {
return true;
How can I group lines with the same ID in Java and apply the exclusion to the output? (A rough sketch of one approach is at the end of this question.)
CODE
Output
private void creaCSVOutput() throws IOException, CsvDataTypeMismatchException, CsvRequiredFieldEmptyException, ParseException {
Writer writerOutput = new FileWriter(pathOutput);
Writer writerEsclusi = new FileWriter(pathOutputEsclusi);
StatefulBeanToCsv<BaseElaborazione> beanToCsv = new StatefulBeanToCsvBuilder<BaseElaborazione>(writerOutput)
.withSeparator(';').withQuotechar('"').build();
StatefulBeanToCsv<BaseElaborazione> beanToCsvEsclusi = new StatefulBeanToCsvBuilder<BaseElaborazione>(writerEsclusi)
.withSeparator(';').withQuotechar('"').build();
beanToCsv.write(CsvHelper.genHeaderBeanBase());
beanToCsvEsclusi.write(CsvHelper.genHeaderBeanBase());
Iterator<BaseElaborazione> itElab = result.iterator();
while (itElab.hasNext()) {
BaseElaborazione riga = itElab.next();
// ... some sets, ifs and conditions etc. ...
esclusi.add(riga);
itElab.remove();
}
for (BaseElaborazione riga : result) {
if(riga.getNota() == null || riga.getNota().isEmpty()) {
riga.setNota(mapNota.get(cuvNota.get(riga.getCuv())));
}
beanToCsv.write(riga);
}
for (BaseElaborazione riga : esclusi) {
if(riga.getNota() == null || riga.getNota().isEmpty()) {
riga.setNota(mapNota.get(cuvNota.get(riga.getCuv())));
}
beanToCsvEsclusi.write(riga);
}
writerOutput.close();
writerEsclusi.close();
}
The method for the esclusi (the second output):
private boolean checkPerimetroJunk(BaseElaborazione riga) {
if (riga.getMercato().toLowerCase().contains("Energia Libero".toLowerCase())) {
if (riga.getStrategia().toLowerCase().startsWith("STRATEGIA FO".toLowerCase())
|| (riga.getStrategia().toLowerCase().contains("CREDITI CEDUTI".toLowerCase())
|| (riga.getAttivita().equalsIgnoreCase("Proposta di Recupero Stragiudiziale FO")
|| (riga.getAttivita().toLowerCase().contains("Cessione".toLowerCase())
|| (riga.getLegalenome().equalsIgnoreCase("Euroservice junk STR FO")
|| (riga.getLegalenome().equalsIgnoreCase("Euroservice_FO"))))))) {
onlyCUV=true;
}
else if(Collections.frequency(storedIds,riga.getCuv()) >= 1 ){
onlyCUV = true;
}
return onlyCUV;
}
else if (riga.getMercato().equals("MAGGIOR TUTELA")) {
if(riga.getCollector().toLowerCase().contains("Cessione".toLowerCase())
|| (riga.getCollector().equalsIgnoreCase("Euroservice_Fo"))
|| (riga.getAttivitaCrabb().toLowerCase().contains("*FO".toLowerCase())
|| (riga.getaNomeCluster().equalsIgnoreCase("Full Outsourcing")))) {
onlyCUV = true;
}
else if(Collections.frequency(storedIds,riga.getCuv()) >= 1 ){
onlyCUV = true;
}
return onlyCUV;
}
return false;
}
Here riga = line. "Cessione" etc. are people who have "black" etc.; it is just an example.
Right now the MAGGIOR TUTELA part is working, but the LIBERO part is not, and I don't know why.
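Here is the rough sketch referenced above: a two-pass approach that first collects every ID whose group contains at least one matching record, then routes whole groups by ID. Record, getId() and getName() below are simplified stand-ins, not the tool's actual BaseElaborazione API.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class GroupByIdSketch {

    static class Record {
        final String id;
        final String name;
        Record(String id, String name) { this.id = id; this.name = name; }
        String getId()   { return id; }
        String getName() { return name; }
    }

    // A record matches when its name contains "Blue" or equals "Orange" (case-insensitive).
    static boolean matches(Record r) {
        return r.getName().toLowerCase().contains("blue")
                || r.getName().equalsIgnoreCase("Orange");
    }

    public static void main(String[] args) {
        List<Record> result = Arrays.asList(
                new Record("ID10", "name One"),
                new Record("ID10", "Blue Two"),
                new Record("ID10", "name Three"),
                new Record("ID20", "name Four"));

        // Pass 1: remember every ID that has at least one matching record.
        Set<String> matchingIds = new HashSet<>();
        for (Record r : result) {
            if (matches(r)) {
                matchingIds.add(r.getId());
            }
        }

        // Pass 2: route whole groups, not single records.
        List<Record> firstOutput  = new ArrayList<>(); // no match anywhere in the group
        List<Record> secondOutput = new ArrayList<>(); // the group contains at least one match
        for (Record r : result) {
            if (matchingIds.contains(r.getId())) {
                secondOutput.add(r);
            } else {
                firstOutput.add(r);
            }
        }

        System.out.println("First output:  " + firstOutput.size() + " record(s)");
        System.out.println("Second output: " + secondOutput.size() + " record(s)");
    }
}
The same idea could be applied to checkPerimetroJunk: a first pass that only stores the matching CUVs/IDs, then a second pass that writes complete groups to the esclusi output.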

Using Jackcess to retrieve numeric values stored in a text field gives ClassCastException

I am working with Jackcess to read and categorize an Access database. The code is simply meant to open the database, loop through each line, and print individual row data to the console when certain conditions are met. It works fine, except when I try to read numeric values. My code is below. (This code is built into a Swing GUI and gets executed when a JButton is pressed.)
if (inv == null) { // Check to see if inventory file has been set. If not, then set it to the default reference path.
inv = rPath;
}
if (inventoryFile.exists()) { // Check to see if the reference path exists.
List<String> testTypes = jList1.getSelectedValuesList();
List<String> evalTypes = jList3.getSelectedValuesList();
List<String> grainTypes = jList2.getSelectedValuesList();
StringBuilder sb = new StringBuilder();
for (int i=0; i<=evalTypes.size()-1; i++) {
if (i<evalTypes.size()-1) {
sb.append(evalTypes.get(i)).append(" ");
}
else {
sb.append(evalTypes.get(i));
}
}
String evalType = sb.toString();
try (Database db = DatabaseBuilder.open(new File(inv));) {
Table sampleList = db.getTable("NTEP SAMPLES LIST");
Cursor cursor = CursorBuilder.createCursor(sampleList);
for (int i=0; i<=testTypes.size()-1; i++) {
if ("Sample Volume".equals(testTypes.get(i))) {
if (grainTypes.size() == 1 && "HRW".equals(grainTypes.get(0))) {
switch (evalType) {
case "GMM":
for (Row row : sampleList){
if (null != row.getString("CURRENTGAC")) {}
if ("HRW".equals(row.get("GRAIN")) && row.getDouble("CURRENTGAC")>=12.00) {
System.out.print(row.get("GRAIN") + "\t");
System.out.println(row.get("CURRENTGAC"));
}
}
break;
case "NIRT":
// some conditional code
break;
case "TW":
// some more code
break;
}
}
else {
JOptionPane.showMessageDialog(null, "Only HRW samples can be used for the selected test(s).", "Error", JOptionPane.ERROR_MESSAGE);
}
break;
}
}
}
catch (IOException ex) {
Logger.getLogger(SampleFilterGUI.class.getName()).log(Level.SEVERE, null, ex);
}
When the code is run I get the following error:
java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Double
The following condition looks to be what is throwing the error.
row.getDouble("CURRENTGAC")>=12.00
It appears that when the data is read from the database, the program reads everything as a string, even though some fields are numeric. I was attempting to cast this field to a double, but Java doesn't seem to like that. I have tried using the Double.parseDouble() and Double.valueOf() methods to convert the value (as mentioned here), but without success.
My question is, how can I convert these fields to numeric values? Is trying to type cast the way to go, or is there a different method I'm not aware of? You will also notice in the code that I created a cursor, but am not using it. The original plan was to use it for navigating through the database, but I found some example code from the jackcess webpage and decided to use that instead. Not sure if that was the right move or not, but it seemed like a simpler solution. Any help is much appreciated. Thanks.
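For what it's worth, a defensive read could look something like the sketch below; readAsDouble is a hypothetical helper of mine, and the right fix ultimately depends on the column's actual Access type, as discussed in the answer further down.
import com.healthmarketscience.jackcess.Row;

public class RowValues {
    // Hypothetical helper: handles both a genuinely numeric Access column and a
    // Text column that merely contains digits (e.g. "12.98").
    static Double readAsDouble(Row row, String column) {
        Object raw = row.get(column);
        if (raw == null) {
            return null;                              // missing value
        }
        if (raw instanceof Number) {
            return ((Number) raw).doubleValue();      // numeric column type
        }
        return Double.parseDouble(raw.toString().trim()); // numeric text
    }
}
With that, the commented-out condition would become something like: Double gac = readAsDouble(row, "CURRENTGAC"); if (gac != null && gac >= 12.00) { ... }.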
EDIT:
To ensure the program was reading a string value from my database, I input the following code
row.get("CURRENTGAC").getClass().getName()
The output was java.lang.String, so this confirms that it is a string. As was suggested, I changed the following code
case "GMM":
for (Row row : sampleList){
if (null != row.get("CURRENTGAC"))
//System.out.println(row.get("CURRENTGAC").getClass().getName());
System.out.println(String.format("|%s|", row.getString("CURRENTGAC")));
/*if ("HRW".equals(row.get("GRAIN")) && row.getDouble("CURRENTGAC")>=12.00 && row.getDouble("CURRENTGAC")<=14.00) {
System.out.print(row.get("GRAIN") + "\t");
System.out.println(row.get("CURRENTGAC"));
}*/
}
break;
The output to the console from these changes is below:
|9.85|
|11.76|
|9.57|
|12.98|
|10.43|
|13.08|
|10.53|
|11.46|
...
This output, although it looks numeric, is still of type String. So when I try to run it with my conditional statement (which is commented out in the updated sample code) I still get the same java.lang.ClassCastException error that I was getting before.
Jackcess does not return all values as strings. It will retrieve the fields (columns) of a table as the appropriate Java type for that Access field type. For example, with a test table named "Table1" ...
ID  DoubleField  TextField
--  -----------  ---------
 1         1.23       4.56
... the following Java code ...
Table t = db.getTable("Table1");
for (Row r : t) {
Object o;
Double d;
String fieldName;
fieldName = "DoubleField";
o = r.get(fieldName);
System.out.println(String.format(
"%s comes back as: %s",
fieldName,
o.getClass().getName()));
System.out.println(String.format(
"Value: %f",
o));
System.out.println();
fieldName = "TextField";
o = r.get(fieldName);
System.out.println(String.format(
"%s comes back as: %s",
fieldName,
o.getClass().getName()));
System.out.println(String.format(
"Value: %s",
o));
try {
d = r.getDouble(fieldName);
} catch (Exception x) {
System.out.println(String.format(
"r.getDouble(\"%s\") failed - %s: %s",
fieldName,
x.getClass().getName(),
x.getMessage()));
}
try {
d = Double.parseDouble(r.getString(fieldName));
System.out.println(String.format(
"Double.parseDouble(r.getString(\"%s\")) succeeded. Value: %f",
fieldName,
d));
} catch (Exception x) {
System.out.println(String.format(
"Double.parseDouble(r.getString(\"%s\")) failed: %s",
fieldName,
x.getClass().getName()));
}
System.out.println();
}
... produces:
DoubleField comes back as: java.lang.Double
Value: 1.230000
TextField comes back as: java.lang.String
Value: 4.56
r.getDouble("TextField") failed - java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Double
Double.parseDouble(r.getString("TextField")) succeeded. Value: 4.560000
If you are unable to get Double.parseDouble() to parse the string values from your database then either
they contain "funny characters" that are not apparent from the samples you posted, or
you're doing it wrong.
Additional information re: your sample file
Jackcess is returning CURRENTGAC as String because it is a Text field in the table:
The following Java code ...
Table t = db.getTable("NTEP SAMPLES LIST");
int countNotNull = 0;
int countAtLeast12 = 0;
for (Row r : t) {
String s = r.getString("CURRENTGAC");
if (s != null) {
countNotNull++;
Double d = Double.parseDouble(s);
if (d >= 12.00) {
countAtLeast12++;
}
}
}
System.out.println(String.format(
"Scan complete. Found %d non-null CURRENTGAC values, %d of which were >= 12.00.",
countNotNull,
countAtLeast12));
... produces ...
Scan complete. Found 100 non-null CURRENTGAC values, 62 of which were >= 12.00.

Iterating over tokens in HIDDEN channel

I am currently working on creating an IDE for the custom, very Lua-like scripting language MobTalkerScript (MTS), which provides me with an ANTLR4 lexer. Since the language specification for MTS puts comments into the HIDDEN_CHANNEL channel, I need to tell the lexer to actually read from that channel. This is how I tried to do that:
Mts3Lexer lexer = new Mts3Lexer(new ANTLRInputStream("<replace this with the input>"));
lexer.setTokenFactory(new CommonTokenFactory(false));
lexer.setChannel(Token.HIDDEN_CHANNEL);
Token token = lexer.emit();
int type = token.getType();
do {
switch(type) {
case Mts3Lexer.LINE_COMMENT:
case Mts3Lexer.COMMENT:
System.out.println("token "+token.getText()+" is a comment");
default:
System.out.println("token "+token.getText()+" is not a comment");
}
} while((token = lexer.nextToken()) != null && (type = token.getType()) != Token.EOF);
Now, if I use this code on the following input, nothing but token ... is not a comment gets printed to the console.
function foo()
-- this should be a single-line comment
something = "blah"
--[[ this should
be a multi-line
comment ]]--
end
The tokens containing the comments never show up, though. So I searched for the source of this problem and found the following method in the ANTLR4 Lexer class:
/** Return a token from this source; i.e., match a token on the char
* stream.
*/
@Override
public Token nextToken() {
if (_input == null) {
throw new IllegalStateException("nextToken requires a non-null input stream.");
}
// Mark start location in char stream so unbuffered streams are
// guaranteed at least have text of current token
int tokenStartMarker = _input.mark();
try{
outer:
while (true) {
if (_hitEOF) {
emitEOF();
return _token;
}
_token = null;
_channel = Token.DEFAULT_CHANNEL;
_tokenStartCharIndex = _input.index();
_tokenStartCharPositionInLine = getInterpreter().getCharPositionInLine();
_tokenStartLine = getInterpreter().getLine();
_text = null;
do {
_type = Token.INVALID_TYPE;
// System.out.println("nextToken line "+tokenStartLine+" at "+((char)input.LA(1))+
// " in mode "+mode+
// " at index "+input.index());
int ttype;
try {
ttype = getInterpreter().match(_input, _mode);
}
catch (LexerNoViableAltException e) {
notifyListeners(e); // report error
recover(e);
ttype = SKIP;
}
if ( _input.LA(1)==IntStream.EOF ) {
_hitEOF = true;
}
if ( _type == Token.INVALID_TYPE ) _type = ttype;
if ( _type ==SKIP ) {
continue outer;
}
} while ( _type ==MORE );
if ( _token == null ) emit();
return _token;
}
}
finally {
// make sure we release marker after match or
// unbuffered char stream will keep buffering
_input.release(tokenStartMarker);
}
}
The line that caught my eye was the following.
_channel = Token.DEFAULT_CHANNEL;
I don't know much about ANTLR, but apparently this line keeps the lexer in the DEFAULT_CHANNEL channel.
Is the way I tried to read from the HIDDEN_CHANNEL channel right or can't I use nextToken() with the hidden channel?
I found out why the lexer didn't give me any tokens containing the comments - I seem to have missed that the grammar file skips comments instead of putting them into the hidden channel. Contacted the author, changed the grammar file and now it works.
Note to myself: pay more attention to what you read.
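For completeness, once the grammar routes comments to channel(HIDDEN) instead of skip, the lexer itself needs no setChannel() call; nextToken() returns tokens from every channel, and the channel can be inspected on each token. A rough sketch, reusing the names from the question:
// Assumes the MTS grammar now uses "-> channel(HIDDEN)" for its comment rules.
Mts3Lexer lexer = new Mts3Lexer(new ANTLRInputStream("<replace this with the input>"));
for (Token token = lexer.nextToken();
        token.getType() != Token.EOF;
        token = lexer.nextToken()) {
    if (token.getChannel() == Token.HIDDEN_CHANNEL) {
        System.out.println("token " + token.getText() + " is a comment (hidden channel)");
    }
}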
For Go (golang) this snippet works for me:
import (
"github.com/antlr/antlr4/runtime/Go/antlr"
)
type antlrparser interface {
GetParser() antlr.Parser
}
func fullText(prc antlr.ParserRuleContext) string {
p := prc.(antlrparser).GetParser()
ts := p.GetTokenStream()
tx := ts.GetTextFromTokens(prc.GetStart(), prc.GetStop())
return tx
}
just pass your ctx.GetSomething() into fullText. Of course, as shown above, whitespace has to go to the hidden channel in the *.g4 file:
WS: [ \t\r\n] -> channel(HIDDEN);

Hiding headers from a query in java program

So I have a program that writes data from Google out to a CSV file. What I want to do is allow users to choose whether or not to display the headers, using a string.
Here is my printer:
...some code
// getting the queries to print
if (results.getRows() == null || results.getRows().isEmpty()) {
pw.println("No results Found.");
System.out.println("No results Found.");
} else {
// Print column headers.
for (ColumnHeaders header : results.getColumnHeaders()) {
pw.print(header.getName() + ", ");
}
pw.println();
// Print actual data.
for (List<String> row : results.getRows()) {
for (String column : row) {
pw.print(column + ",");
}
pw.println();
}
pw.close();
}
}
}
I have a properties file connected to my program, and I want it so that when a user types no for the header setting in the properties file, the headers are not shown.
I was thinking about reading the header setting into a string and using it in the if statement. Any suggestions? Thanks in advance.
EDIT:
// column headers statement
if (headers=="yes") {
for (ColumnHeaders header : results.getColumnHeaders())
pw.print(header.getName() + ", ");
} else {
// Print column headers.
for (ColumnHeaders header : results.getColumnHeaders()) {
pw.print("" + ", ");
}
pw.println();
}
// getting the queries to print
if (results.getRows() == null || results.getRows().isEmpty()) {
pw.println("No results Found.");
System.out.println("No results Found.");
} else {
// Print actual data.
for (List<String> row : results.getRows()) {
for (String column : row) {
pw.print(column + ",");
}
pw.println();
}
pw.close();
}
}
}
But what I have now is not working correctly.
First things first: you are not checking the user input correctly.
You need to change
if (headers=="yes") {
to
if (headers.equals("yes")) {
I would also get rid of the else statement for printing out nothing in the first row except commas. Do you really want the first row to just be commas?
Make sure to close your stream at the end no matter what, too. It looks like your pw.close() is in an else statement.
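Putting those points together, the printing section might look something like the sketch below; results, pw and ColumnHeaders are the objects from the question, and headers is assumed to be the string read from the properties file.
boolean showHeaders = "yes".equalsIgnoreCase(headers); // value from the properties file
try {
    if (results.getRows() == null || results.getRows().isEmpty()) {
        pw.println("No results Found.");
        System.out.println("No results Found.");
    } else {
        // Print column headers only when requested.
        if (showHeaders) {
            for (ColumnHeaders header : results.getColumnHeaders()) {
                pw.print(header.getName() + ", ");
            }
            pw.println();
        }
        // Print actual data.
        for (List<String> row : results.getRows()) {
            for (String column : row) {
                pw.print(column + ",");
            }
            pw.println();
        }
    }
} finally {
    pw.close(); // closed no matter which branch ran
}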

how to read two consecutive commas from .csv file format as unique value in java

Suppose the CSV file contains:
1,112,,ASIF
The following code eliminates the null value between two consecutive commas.
(The code provided is more than is required.)
String p1=null, p2=null;
while ((lineData = Buffreadr.readLine()) != null)
{
row = new Vector(); int i=0;
StringTokenizer st = new StringTokenizer(lineData, ",");
while(st.hasMoreTokens())
{
row.addElement(st.nextElement());
if (row.get(i).toString().startsWith("\"")==true)
{
while(row.get(i).toString().endsWith("\"")==false)
{
p1= row.get(i).toString();
p2= st.nextElement().toString();
row.set(i,p1+", "+p2);
}
String CellValue= row.get(i).toString();
CellValue= CellValue.substring(1, CellValue.length() - 1);
row.set(i,CellValue);
//System.out.println(" Final Cell Value : "+row.get(i).toString());
}
eror=row.get(i).toString();
try
{
eror=eror.replace('\'',' ');
eror=eror.replace('[' , ' ');
eror=eror.replace(']' , ' ');
//System.out.println("Error "+ eror);
row.remove(i);
row.insertElementAt(eror, i);
}
catch (Exception e)
{
System.out.println("Error exception "+ eror);
}
//}
i++;
}
How can I read two consecutive commas in a .csv file as a distinct (empty) value in Java?
Here is an example of doing this by splitting to String array. Changed lines are marked as comments.
// Start of your code.
row = new Vector(); int i=0;
String[] st = lineData.split(","); // Changed
for (String s : st) { // Changed
row.addElement(s); // Changed
if (row.get(i).toString().startsWith("\"") == true) {
while (row.get(i).toString().endsWith("\"") == false) {
p1 = row.get(i).toString();
p2 = s.toString(); // Changed
row.set(i, p1 + ", " + p2);
}
...// Rest of Code here
}
StringTokenizer skips empty tokens. That is its behaviour. From the Javadoc:
StringTokenizer is a legacy class that is retained for compatibility reasons although its use is discouraged in new code. It is recommended that anyone seeking this functionality use the split method of String or the java.util.regex package instead.
Just use String.split(",") and you are done.
Just read the whole line into a string then do string.split(",").
The resulting array should have exactly what you are looking for...
If you need to check for "escaped" commas then you will need some regex for the query instead of a simple ",".
while ((lineData = Buffreadr.readLine()) != null) {
String[] row = lineData.split(",");
// Now process the array however you like; each cell in the CSV is one entry in the array
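One detail worth adding to the split() suggestion above: by default split(",") drops trailing empty fields, so passing a limit of -1 keeps them, and a quote-aware regex can be used if commas may appear inside quoted fields. A small self-contained sketch:
import java.util.Arrays;

public class SplitDemo {
    public static void main(String[] args) {
        String lineData = "1,112,,ASIF,";

        // A limit of -1 keeps trailing empty fields, so both ",," and the
        // trailing comma survive as empty strings.
        String[] simple = lineData.split(",", -1);
        System.out.println(Arrays.toString(simple));     // [1, 112, , ASIF, ]

        // Splits only on commas outside double quotes (simple CSV without escaped quotes).
        String[] quoteAware = "a,\"b,c\",,d".split(",(?=(?:[^\"]*\"[^\"]*\")*[^\"]*$)", -1);
        System.out.println(Arrays.toString(quoteAware)); // [a, "b,c", , d]
    }
}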
