I have a .csv file. The data is separated by commas and I need to extract information from this file. The thing is, if I just write the following, it only partially works:
String file = "FinalProject/src/Data.csv";
BufferedReader rd = null;
String line = "";
HashSet<String> platforms = new HashSet<String>();
try {
    rd = new BufferedReader(new FileReader(file));
    rd.readLine(); // skip the header line
    while ((line = rd.readLine()) != null) {
        // split on quotes: even-indexed chunks are outside quotes, odd-indexed chunks are quoted fields
        String[] arr = line.split("\"");
        var words = new ArrayList<String>();
        for (int i = 0; i < arr.length; i++) {
            if (i % 2 == 0) {
                words.addAll(Arrays.asList(arr[i].split(",")));
            } else {
                words.add(arr[i]);
            }
        }
        platforms.add(words.get(2)); // third column
    }
} catch (Exception e) {
    e.printStackTrace(); // don't swallow the exception silently
} finally {
    try {
        if (rd != null) {
            rd.close();
        }
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
When I check the contents of the Set and compare it with the same data extracted from the database created from this .csv file, there is a difference. For example, my set has 38 values while the database has 40, and all of them are unique (nothing is repeated). I think the problem is caused by the commas used to separate the data in the .csv file: some of them appear inside quotes, and this probably causes some of the values I need to be lost. Is there a solution to this problem? Or is there a more efficient way to deal with commas inside quotes so that they are ignored?
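One common workaround, shown here as a minimal sketch assuming the file uses simple RFC 4180-style quoting with no escaped quotes or embedded newlines: split only on commas that fall outside quotes, using a lookahead regex. The path and the column index 2 are taken from the question above; everything else is illustrative.

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashSet;
import java.util.Set;

public class PlatformExtractor {

    // Matches a comma only if it is followed by an even number of quotes,
    // i.e. a comma that is not inside a quoted field.
    private static final String OUTSIDE_QUOTES = ",(?=(?:[^\"]*\"[^\"]*\")*[^\"]*$)";

    public static void main(String[] args) throws Exception {
        Set<String> platforms = new HashSet<>();
        try (BufferedReader rd = new BufferedReader(new FileReader("FinalProject/src/Data.csv"))) {
            rd.readLine(); // skip the header
            String line;
            while ((line = rd.readLine()) != null) {
                String[] fields = line.split(OUTSIDE_QUOTES, -1);
                if (fields.length > 2) {
                    // strip surrounding quotes, if any, from the third column
                    platforms.add(fields[2].replaceAll("^\"|\"$", ""));
                }
            }
        }
        System.out.println(platforms.size() + " platforms: " + platforms);
    }
}

This does not cope with escaped quotes inside fields or with values spanning several lines; for those cases a real CSV parser (OpenCSV, Apache Commons CSV, univocity-parsers, as used in the later examples on this page) is the safer choice.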
Related
I have a CSV file named barcode. I have successfully imported that file into my SQLite DB, but the problem is that the data is saved exactly as it appears in the CSV file.
The data I'm getting: 8.90103E+12. The data I want: 8901030382253.
Example:
ITEM - EAN_CODE
100047253 - 8.90103E+12
100047252 - 8.90103E+12
I have two columns, and the data in EAN_CODE is not coming into my DB properly.
I have used the trim function, but the output remains the same.
My code:
if (exportDir.exists()) {
    FileReader file = null;
    try {
        file = new FileReader(exportDir);
        BufferedReader buffer = new BufferedReader(file);
        String line = "";
        int iteration = 0;
        ArrayList<MasterDataModel2> arrayList_stock2 = new ArrayList<>();
        while ((line = buffer.readLine()) != null) {
            if (iteration == 0) {
                iteration++;
                continue;
            }
            //StringBuilder sb = new StringBuilder();
            String[] str = line.split(",");
            arrayList_stock2.add(new MasterDataModel2(str[0].replace("\"", ""), str[1].replace("\"", "")));
            //arrayList_stock2.add(new MasterDataModel2(str[0],str[1]));
            Log.d("insertTotal", "Msg:" + lastId);
        }
        Log.e("size", String.valueOf(arrayList_stock2.size()));
        db.addAllMasterData2(arrayList_stock2);
    } catch (FileNotFoundException e) {
        e.printStackTrace();
        //CallingImportantMethod.showToast(this, "File is not available");
    } catch (IOException e) {
        e.printStackTrace();
        //CallingImportantMethod.showToast(this, "Something went wrong");
    }
}
The line below is unnecessary:
arrayList_stock2.add(new MasterDataModel2(str[0].replace("\"", ""), str[1].replace("\"", "")));
Note that \ is a special escape character in Java string literals, so "\"" here is just an escaped double-quote character (a literal backslash would be written as \\). Even though this is unrelated to the original problem, I suggest reading up on backslash escapes in Java to get a clear idea of how they are used.
So, back to the problem: I guess you really don't need to call str[].replace() at all. Correct me if I am wrong.
arrayList_stock2.add(new MasterDataModel2(str[0].trim(), str[1].trim()));
I think the above code segment will be enough.
Edit: for your actual problem you can use the code below. To avoid null issues, you may need to add a check for that as well.
String eanCodeString;
if (str[1] != null) {
    double eanCodeDouble = Double.parseDouble(str[1]);
    NumberFormat nf = NumberFormat.getInstance();
    nf.setGroupingUsed(false); // avoid grouping separators such as "8,901,030,..."
    eanCodeString = nf.format(eanCodeDouble);
    System.out.println(eanCodeString);
} else {
    eanCodeString = "";
}
if (str[0] == null) {
    str[0] = "";
}
arrayList_stock2.add(new MasterDataModel2(str[0].trim(), eanCodeString));
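A hedged alternative sketch: if the goal is simply to turn the scientific-notation text back into plain digits, BigDecimal avoids locale-dependent formatting altogether. Digits the spreadsheet already rounded away (8.90103E+12 vs 8901030382253) cannot be recovered this way, so the real fix is to export the EAN column as text in the first place. str and arrayList_stock2 here refer to the question's loop.

import java.math.BigDecimal;

// "8.90103E+12" -> "8901030000000" (plain digits, no grouping separators)
String eanCodeString = (str[1] == null || str[1].trim().isEmpty())
        ? ""
        : new BigDecimal(str[1].trim()).toPlainString();
arrayList_stock2.add(new MasterDataModel2(str[0] == null ? "" : str[0].trim(), eanCodeString));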
I'm looping over a CSV file. I have two questions:
1) I'm selecting the second column by name, like
if (tab[1].equals("Col2"))
but I don't want to put in the column name; I want to select just the second column by position.
2) How do I skip the first line (the header)?
Here is a sample of the code that loops over the CSV:
String csvFile = "C:\\test.csv";
BufferedReader br = null;
String line = "";
String cvsSplitBy = ";";
try {
    br = new BufferedReader(new FileReader(csvFile));
    while ((line = br.readLine()) != null) {
        String[] tab = line.split(cvsSplitBy);
        int tmp;
        if (tab[1].equals("Col2")) {
            tmp = Integer.parseInt(tab[2]);
            for (int i = 0; i < tmp; i++) {
                // TO DO
            }
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}
Better to make use of OpenCSV's CSVReader for this, which provides a lot of APIs for processing your CSV files. Here is complete working code, with only minimal exception handling.
String csvFile = "C:\\test.csv";
CSVReader reader;
String[] nextRow;
char cvsSplitBy = ';';
try {
    // The last argument determines how many lines to skip; 1 means skip the header.
    reader = new CSVReader(new FileReader(csvFile), cvsSplitBy, CSVParser.DEFAULT_QUOTE_CHARACTER, 1);
    while ((nextRow = reader.readNext()) != null) {
        if (nextRow.length > 1) {
            // nextRow[1] always gives the second column's value
            int tmp = Integer.parseInt(nextRow[1]);
            for (int i = 0; i < tmp; i++) {
                // TO DO
            }
        }
    }
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
Here is an example using Apache Commons CSV, and its CSVParser.
The first line is considered to be the header and is skipped (withFirstRecordAsHeader()); the "columns" of each record can be accessed by their index (get(int)). The indexes are 0-based.
Just adapt the charset and CSVFormat to your needs.
CSVParser parser = null;
try {
    parser = CSVParser.parse(new File(csvFile), Charset.forName("UTF-8"),
            CSVFormat.RFC4180.withFirstRecordAsHeader());
    List<CSVRecord> records = parser.getRecords();
    for (CSVRecord record : records) {
        int tmp = Integer.parseInt(record.get(1));
        for (int i = 0; i < tmp; i++) {
            // TO DO
        }
    }
} catch (IOException e) {
    e.printStackTrace();
} finally {
    try {
        if (parser != null) {
            parser.close();
        }
    } catch (IOException e) {
    }
}
With univocity-parsers this becomes a piece of cake:
CsvParserSettings parserSettings = new CsvParserSettings(); // many options here, check the tutorial
parserSettings.setHeaderExtractionEnabled(true); // the header is extracted and not part of the result
parserSettings.selectIndexes(1); // select the 2nd column (indexes are 0-based)
CsvParser parser = new CsvParser(parserSettings);
List<String[]> allRows = parser.parseAll(new File(csvFile));
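Since only the second column was selected, each returned row should contain a single element at index 0; a small hedged usage sketch under that assumption:

// iterate the selected column; row[0] holds the 2nd CSV column for that record
for (String[] row : allRows) {
    int tmp = Integer.parseInt(row[0]);
    for (int i = 0; i < tmp; i++) {
        // TO DO
    }
}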
Note that this will work even if some of the rows are empty or have just one column, whereas all the other solutions posted here will fail unless you handle such situations yourself.
Not only does this involve far less code (and complexity), the parser is also ~4 times faster than Commons CSV and ~3 times faster than OpenCSV.
Disclaimer: I'm the author of this library, it's open-source and free (Apache v2.0 License)
I want to split a text file into pieces of 50 lines each.
For example, if the file has 1010 lines, I should end up with 21 files.
I know how to count the number of files and the number of lines, but as soon as I write the files, it doesn't work.
I use Camel Simple (Talend), but it's Java code.
private void ExtractOrderFromBAC02(ProducerTemplate producerTemplate, InputStream content, String endpoint, String fileName, HashMap<String, Object> headers) {
    ArrayList<String> list = new ArrayList<String>();
    BufferedReader br = new BufferedReader(new InputStreamReader(content));
    String line;
    long numSplits = 50;
    int sourcesize = 0;
    int nof = 0;
    int number = 800;
    try {
        while ((line = br.readLine()) != null) {
            sourcesize++;
            list.add(line);
        }
        System.out.println("Lines in the file: " + sourcesize);

        double numberFiles = (sourcesize / numSplits);
        int numberFiles1 = (int) numberFiles;
        if (sourcesize <= 50) {
            nof = 1;
        } else {
            nof = numberFiles1 + 1;
        }
        System.out.println("No. of files to be generated :" + nof);

        for (int j = 1; j <= nof; j++) {
            number++;
            String Filename = "" + number;
            System.out.println(Filename);

            StringBuilder builder = new StringBuilder();
            for (String value : list) {
                builder.append("\n" + value);
            }
            producerTemplate.sendBodyAndHeader(endpoint, builder.toString(), "CamelFileName", Filename);
        }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        try {
            if (br != null) br.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
For people who don't know Camel, this line is used to send the file:
producerTemplate.sendBodyAndHeader(endpoint, builder.toString(), "CamelFileName", Filename);
endpoint ==> the destination (this part works fine in other code)
builder.toString() ==> the values to write
Filename ==> the name of the output file (this also works fine in other code)
You count the lines first:

while ((line = br.readLine()) != null) {
    sourcesize++;
}

and then you're at the end of the file, so this reads nothing:

for (int i = 1; i <= numSplits; i++) {
    while ((line = br.readLine()) != null) {

You would have to seek back to the start of the file before reading again, but that's a waste of time and power because you'd read the file twice.
It's better to read the file once and for all, put it in a List<String> (resizable), and do your split using the lines stored in memory.
EDIT: it seems you followed my advice and stumbled on the next issue. You should maybe have asked another question, but anyway: this creates a buffer containing all the lines.
for (String value : list) {
    builder.append("\n" + value);
}
You have to use indexes on the list to build small files.
for (int k = 0; k < numSplits; k++) {
    builder.append("\n" + list.get(current_line++));
}
with current_line being the global line counter over your file. That way you create files of 50 different lines each time :)
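Putting that together, a minimal sketch of the chunked send, under the question's assumptions (list, numSplits, producerTemplate and endpoint as in the posted method; the starting counter 800 is kept from the question):

// read the file once into 'list', then emit one message per block of up to numSplits lines
int current_line = 0;
int number = 800;
while (current_line < list.size()) {
    StringBuilder builder = new StringBuilder();
    for (int k = 0; k < numSplits && current_line < list.size(); k++) {
        if (builder.length() > 0) {
            builder.append("\n");
        }
        builder.append(list.get(current_line++));
    }
    number++;
    producerTemplate.sendBodyAndHeader(endpoint, builder.toString(), "CamelFileName", String.valueOf(number));
}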
I am trying to convert binary numbers from a file to decimal. I am able to get the numbers to convert; however, if I have more than one binary number on a line in my text file, the code just skips it.
List<Integer> list = new ArrayList<Integer>();
File file = new File("binary.txt");
BufferedReader reader = null;
try {
    reader = new BufferedReader(new FileReader(file));
    String text = null;
    while ((text = reader.readLine()) != null) {
        try {
            list.add(Integer.parseInt(text, 2));
        } catch (Exception ex) {
            continue;
        }
    }
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
Here is the text file that I am using as well:
00100101
00100110 01000001
01100000
01111011
10010100 00100101
01101011
11000111 00011010
When I run my code I get: [37, 96, 123, 107]
The code skips the lines that contain two binary numbers.
I'm having trouble figuring out how to convert the integers without giving up reader.readLine() in the while loop. Any help is greatly appreciated!
Split each line read by the while loop using text.split("\\s+"), and iterate the split values:
String text = null;
while ((text = reader.readLine()) != null) {
    for (String value : text.split("\\s+")) {
        try {
            list.add(Integer.parseInt(value, 2));
        } catch (Exception ex) {
            continue; // should throw error: file is corrupt
        }
    }
}
This should handle the case where you have multiple values on one line: you loop over the values and add them separately.
try {
    for (String s : text.split(" "))
        list.add(Integer.parseInt(s, 2));
} catch (NumberFormatException ex) {
    // handle the bad value instead of silently ignoring it
}
Also, as Andreas wrote, it is not recommended to ignore exceptions.
I've been working on this on and off today.
Here is my method. It needs to accept a .data (txt) file location, then go through the contents of that text file and break it up into strings based on the delimiters present. These are the two files.
The person file.
Person ID,First Name,Last Name,Street,City
1,Ola,Hansen,Timoteivn,Sandnes
2,Tove,Svendson,Borgvn,Stavanger
3,Kari,Pettersen,Storgt,Stavanger
The order file.
Order ID|Order Number|Person ID
10|2000|1
11|2001|2
12|2002|1
13|2003|10
public static void openFile(String url) {
    // initialize array for data to be held
    String[][] myStringArray = new String[10][10];
    int row = 0;
    try {
        // open the file
        FileInputStream fstream = new FileInputStream(url);
        BufferedReader br = new BufferedReader(new InputStreamReader(fstream));
        String strLine;
        // read the file line by line
        while ((strLine = br.readLine()) != null) {
            // ignore any blank entries
            if (!"".equals(strLine)) {
                // splits by comma (\\| for order) and places values individually into the array
                String[] splitStr = new String[5];
                //splitStr = strLine.split("\\|");
                /*
                 * This is the part that I am struggling to get working.
                 */
                if (strLine.contains("\\|")) {
                    splitStr = strLine.split("\\|");
                } else if (strLine.contains(",")) {
                    splitStr = strLine.split(",");
                } else {
                    System.out.println("error no delimiter detected");
                }
                for (int i = 0; i < splitStr.length; i++) {
                    myStringArray[row][i] = splitStr[i];
                    System.out.println(myStringArray[row][i]);
                }
            }
        }
        // close the input stream
        br.close();
    } catch (FileNotFoundException ex) {
        Logger.getLogger(Client.class.getName()).log(Level.SEVERE, null, ex);
    } catch (IOException ex) {
        Logger.getLogger(Client.class.getName()).log(Level.SEVERE, null, ex);
    }
}
The person file is correctly read and parsed, but the order file with the "|" delimiter is having none of it; I just get 'null' printouts.
What's confusing me is that when I just use splitStr = strLine.split("\\|"); it works, but I need this method to detect which delimiter is present and then apply the correct split.
Any help will be much appreciated.
Apart from the fact that this should be done using a CSV library, the reason this code is failing is that contains() doesn't accept a regular expression; it matches literal text. Remove the escape characters so the pipe character can be detected:
if (strLine.contains("|")) {
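A minimal sketch of the corrected detection block, assuming the rest of the method stays as posted in the question (split() still needs the escaped "\\|" form because it does take a regex):

String[] splitStr;
if (strLine.contains("|")) {
    // contains() does a literal match, so no escaping here
    splitStr = strLine.split("\\|");
} else if (strLine.contains(",")) {
    splitStr = strLine.split(",");
} else {
    System.out.println("error no delimiter detected");
    splitStr = new String[] { strLine };
}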