How to create Strings from a JTable txt file? - java

I need to read from a txt file and sort everything into different arrays or strings, allowing me to set the text for my JLabels: one array/string each for ID, Item Name, Price and Stock.
This is a preview of my txt file:
Here is my code to read the txt file to import it to my JTable:
String filePath = "C:\\Users\\zagad\\IdeaProjects\\DATABYTES\\stock\\consoles\\consoles.txt";
File file = new File(filePath);
// try-with-resources so the reader is closed automatically
try (BufferedReader br = new BufferedReader(new FileReader(file))) {
    // first line holds the column names
    String firstLine = br.readLine().trim();
    String[] columnsName = firstLine.split(", ");
    DefaultTableModel model3 = (DefaultTableModel) productTable.getModel();
    model3.setColumnIdentifiers(columnsName);
    // remaining lines are data rows, with columns separated by "/"
    Object[] tableLines = br.lines().toArray();
    for (int i = 0; i < tableLines.length; i++) {
        String line = tableLines[i].toString().trim();
        String[] dataRow = line.split("/");
        model3.addRow(dataRow);
    }
} catch (IOException ex) {
    ex.printStackTrace();
}
How do I separate them? Any help would be appreciated.
TXT FILE:
ID , Item Name ,Price , Stock
00016 / Apple Airpods / 8999 / 20
00017 / Samsung Galaxy Buds / 6999 / 13
00018 / Apple Airpods Pro / 14999 / 5
00019 / Beats Powerbeats Pro / 13490 / 8
00020 / Sony WF-1000XM3 / 10799 / 10

It appears that the format separates the columns with /, so you can split each line on that character. We can read all lines from a given Path object pointing at the file, then skip the first line, which we know holds the column names. We can then map each remaining line to a String array by removing all whitespace with the String#replaceAll method and splitting on / with String#split, which gives us the columns of each row. Finally we collect these String arrays into a List using the Stream#collect method.
List<String> lines = Files.readAllLines(Paths.get("first.txt"));
String[] columnNames = lines.stream().findFirst().orElseThrow(IOException::new).split(",");
List<MyRow> rows = lines.stream()
        .skip(1)                                          // skip the header row
        .map(line -> line.replaceAll(" ", "").split("/")) // strip spaces, split columns
        .map(MyRow::valueOf)
        .collect(Collectors.toList());
DefaultTableModel model3 = new DefaultTableModel();
model3.setColumnIdentifiers(columnNames);
rows.forEach(row -> model3.addRow(new Object[] { row.getId(), row.getItemName(), row.getPrice(), row.getStock() }));
List<Integer> ids = rows.stream().map(MyRow::getId).collect(Collectors.toList());
Output:
[00016, AppleAirpods, 8999, 20]
[00017, SamsungGalaxyBuds, 6999, 13]
[00018, AppleAirpodsPro, 14999, 5]
[00019, BeatsPowerbeatsPro, 13490, 8]
[00020, SonyWF-1000XM3, 10799, 10]
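The answer above relies on a MyRow class that isn't shown. A minimal sketch of what it might look like (the field names, types, and the valueOf factory are assumptions inferred from the calls MyRow::valueOf, getId(), getItemName(), getPrice() and getStock()):

```java
// Hypothetical MyRow: a simple holder for one parsed line of the stock file.
// Field names and the valueOf factory are assumptions inferred from the
// answer's code; the real class may differ.
class MyRow {
    private final Integer id;
    private final String itemName;
    private final int price;
    private final int stock;

    MyRow(Integer id, String itemName, int price, int stock) {
        this.id = id;
        this.itemName = itemName;
        this.price = price;
        this.stock = stock;
    }

    // Builds a MyRow from the String[] produced by line.split("/").
    static MyRow valueOf(String[] parts) {
        return new MyRow(Integer.valueOf(parts[0]), parts[1],
                Integer.parseInt(parts[2]), Integer.parseInt(parts[3]));
    }

    Integer getId() { return id; }
    String getItemName() { return itemName; }
    int getPrice() { return price; }
    int getStock() { return stock; }
}
```

Note that parsing the ID as an Integer (as the answer's List&lt;Integer&gt; ids implies) drops the leading zeros; keep it a String if "00016" must survive round-tripping.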

Related

Join csv files based on a common column in java

I want to join two csv files based on a common column. My two csv files and the final csv file look like this.
Here are the example files - 1st file looks like:
sno,first name,last name
--------------------------
1,xx,yy
2,aa,bb
2nd file looks like:
sno,place
-----------
1,pp
2,qq
Output:
sno,first name,last name,place
------------------------------
1,xx,yy,pp
2,aa,bb,qq
Code:
CSVReader r1 = new CSVReader(new FileReader("c:/csv/file1.csv"));
CSVReader r2 = new CSVReader(new FileReader("c:/csv/file2.csv"));
HashMap<String, String[]> dic = new HashMap<String, String[]>();
int commonCol = 1;
r1.readNext(); // skip header
String[] line = null;
while ((line = r1.readNext()) != null)
{
    dic.put(line[commonCol], line);
}
commonCol = 1;
r2.readNext();
String[] line2 = null;
while ((line2 = r2.readNext()) != null)
{
    if (dic.keySet().contains(line2[commonCol]))
    {
        // append line to existing entry
    }
    else
    {
        // create a new entry and pre-pend it with default values
        // for the columns of file1
    }
}
for (String[] joined : dic.values())
{
    // write line to the output file.
}
I don't know how to proceed further to get desired output. Any help will be appreciated.
Thanks
First, you need to use zero as your commonCol value: the first column (sno) has index zero, not one.
if (dic.containsKey(line2[commonCol]))
{
    // Get the whole line from the first file.
    String[] firstPart = dic.get(line2[commonCol]);
    // The line from the second file, without the common column.
    String[] secondPart = Arrays.copyOfRange(line2, 1, line2.length);
    // Merge the two arrays and put the result back in the map.
    String[] joined = new String[firstPart.length + secondPart.length];
    System.arraycopy(firstPart, 0, joined, 0, firstPart.length);
    System.arraycopy(secondPart, 0, joined, firstPart.length, secondPart.length);
    dic.put(line2[commonCol], joined);
}
else
{
    // Create a new entry and pre-pend it with default values
    // for the columns of file1.
    String[] firstPart = { "some", "default", "values" };
    String[] secondPart = Arrays.copyOfRange(line2, 1, line2.length);
    String[] joined = new String[firstPart.length + secondPart.length];
    System.arraycopy(firstPart, 0, joined, 0, firstPart.length);
    System.arraycopy(secondPart, 0, joined, firstPart.length, secondPart.length);
    dic.put(line2[commonCol], joined);
}
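For reference, the whole join can also be sketched without the OpenCSV dependency, working on the lines of each file directly. This is a minimal sketch (the class and method names are illustrative, and rows of file2 with no match in file1 are simply skipped here rather than padded with defaults as the question's else branch intends):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class CsvJoin {
    // Joins rows of file2 onto rows of file1 by the value in column 0 (sno).
    // Each list holds the raw lines of one CSV file, header first.
    static List<String> join(List<String> lines1, List<String> lines2) {
        // Index file1 by its first column; LinkedHashMap keeps input order.
        Map<String, String> dic = new LinkedHashMap<>();
        for (String line : lines1.subList(1, lines1.size())) {
            dic.put(line.split(",")[0], line);
        }
        List<String> out = new ArrayList<>();
        // Merged header: file1's header plus file2's header minus the key column.
        out.add(lines1.get(0) + "," + lines2.get(0).split(",", 2)[1]);
        for (String line : lines2.subList(1, lines2.size())) {
            String[] parts = line.split(",", 2); // [key, rest-of-line]
            String left = dic.get(parts[0]);
            if (left != null) {
                out.add(left + "," + parts[1]);
            }
        }
        return out;
    }
}
```

With the question's two example files this produces the sno,first name,last name,place rows shown in the desired output.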

Cannot iterate through CSV columns

I'm building a stock screener that applies a calculation through each column of a csv file. However, when I run the for loop, I only get one result back.
String path = "C:/Users/0/Desktop/Git/Finance/Data/NQ100.csv";
Reader buf = Files.newBufferedReader(Paths.get(path));
CSVParser parsed = new CSVParser(buf, CSVFormat.DEFAULT.withFirstRecordAsHeader()
        .withIgnoreHeaderCase().withTrim());
// Parse tickers
Map<String, Integer> header = parsed.getHeaderMap();
List<String> tickerList = new ArrayList<>(header.keySet());
for (int x = 1; x < tickerList.size(); x++) { // <----------------------- PROBLEM
    // Accessing closing price by Header names
    List<Double> closeList = new ArrayList<>();
    for (CSVRecord record : parsed) {
        String stringClose = record.get(x);
        Double close = Double.valueOf(stringClose);
        closeList.add(close);
    }
    // Percentage Change
    List<Double> pctList = new ArrayList<>();
    for (int i = 1; i < closeList.size(); i++) {
        Double pct = closeList.get(i) / closeList.get(i - 1) - 1;
        pctList.add(pct);
    }
    // Statistics
    Double sum = 0.0, var = 0.0, mean, sd, rfr, sr;
    // Mean
    for (Double num : pctList) sum += num;
    mean = sum / pctList.size();
    // Standard Deviation
    for (Double num : pctList) var += Math.pow(num - mean, 2);
    sd = Math.sqrt(var / pctList.size());
    // Risk Free Rate
    rfr = Math.pow((1 + 0.03), (1 / 252.0)) - 1;
    // Sharpe Ratio
    sr = Math.sqrt(252) * ((mean - rfr) / sd);
    System.out.println(tickerList.get(x) + " " + sr);
}
My data looks like this:
,AAL,AAPL,ADBE
2007-10-25,26.311651,23.141403,47.200001
2007-10-26,26.273216,23.384495,47.0
2007-10-29,26.004248,23.43387,47.0
So I was expecting:
AAL XXX
AAPL XXX
ADBE XXX
But I got just:
AAL 0.3604941921663456
Would be grateful if you guys can help me find the problem!
A CSVParser can be iterated only once: it streams the underlying file, and in your case CSVParser parsed implements Iterable<CSVRecord> over that single pass.
So you iterate through it only the first time, when you calculate the statistics for AAL; while analyzing the data for AAPL and ADBE it behaves as if it were empty.
You can handle this by introducing a helper list initialized from parsed (this is a one-liner in Java 8, of course, but the version below works for earlier versions too). Add this before the for cycle:
List<CSVRecord> records = new ArrayList<>();
for (CSVRecord record : parsed) {
    records.add(record);
}
Then change the line:
for (CSVRecord record : parsed) {
to:
for (CSVRecord record : records) {
For the CSV you've provided, you will then get this output:
AAL -21.583101145880306
AAPL 23.417753561072438
ADBE -16.75343297000953
Here's a block of code that works for me. If I understand your question, you only want to "read" each column and row from a CSV file; hope it helps.
String cvsSplitBy = ",";  // column delimiter
int a = 0;                // line counter, used to skip the header row
String line;
br = new BufferedReader(new InputStreamReader(new FileInputStream(archivo), "UTF8"));
while ((line = br.readLine()) != null) {
    if (a != 0) { // skip the first (header) line
        String[] datos = line.split(cvsSplitBy);
        System.out.println(datos[0] + " - " + datos[1] + " - " + datos[2]);
    }
    a++;
}

Reading and matching contents of two big files

I have two files, each with the same format and approximately 100,000 lines. For each line in file one I extract the second component (column), and if I find a match in the second column of the second file, I extract their third components, combine them, and store or output the result.
Though my implementation works, the program runs extremely slowly; it takes more than an hour to iterate over the files, compare, and output all the results.
I am reading and storing the data of both files in ArrayLists, then iterating over those lists and doing the comparison. Below is my code. Is there any performance-related glitch, or is this just normal for such an operation?
Note: I was using String.split(), but I understand from other posts that StringTokenizer is faster.
public ArrayList<String> match(String file1, String file2) throws IOException {
    ArrayList<String> finalOut = new ArrayList<>();
    try {
        ArrayList<String> data = readGenreDataIntoMemory(file1);
        ArrayList<String> data1 = readGenreDataIntoMemory(file2);
        StringTokenizer st = null;
        for (String line : data) {
            HashSet<String> genres = new HashSet<>();
            boolean sameMovie = false;
            String movie2 = "";
            st = new StringTokenizer(line, "|");
            //String line[] = fline.split("\\|");
            String ratingInfo = st.nextToken();
            String movie1 = st.nextToken();
            String genreInfo = st.nextToken();
            if (!genreInfo.equals("null")) {
                for (String s : genreInfo.split(",")) {
                    genres.add(s);
                }
            }
            StringTokenizer st1 = null;
            for (String line1 : data1) {
                st1 = new StringTokenizer(line1, "|");
                st1.nextToken();
                movie2 = st1.nextToken();
                String genreInfo2 = st1.nextToken();
                // If the movie names are the same they should have the same genre,
                // so update their genres to match.
                if (!genreInfo2.equals("null") && movie1.equals(movie2)) {
                    for (String s : genreInfo2.split(",")) {
                        genres.add(s);
                    }
                    sameMovie = true;
                    break;
                }
            }
            if (sameMovie) {
                finalOut.add(ratingInfo + "" + movie1 + "" + genres.toString() + "\n");
            } else {
                finalOut.add(line);
            }
        }
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    }
    return finalOut;
}
I would use the Streams API
String file1 = "files1.txt";
String file2 = "files2.txt";
// get all the lines by movie name for each file.
Map<String, List<String[]>> map = Stream.of(Files.lines(Paths.get(file1)),
                                            Files.lines(Paths.get(file2)))
        .flatMap(p -> p)
        .parallel()
        .map(s -> s.split("[|]", 3))
        .collect(Collectors.groupingByConcurrent(sa -> sa[1], Collectors.toList()));
// merge all the genres for each movie.
map.forEach((movie, lines) -> {
    Set<String> genres = lines.stream()
            .flatMap(l -> Stream.of(l[2].split(",")))
            .collect(Collectors.toSet());
    System.out.println("movie: " + movie + " genres: " + genres);
});
This has the advantage of being O(n) instead of O(n^2) and it's multi-threaded.
Do a hash join.
As of now you are doing a nested-loop join, which is O(n^2); the hash join will be amortized O(n).
Put the contents of each file in a hash map, keyed on the field you want (the second field).
Map<String, String> map1 = new HashMap<>();
// build map1 from file1, and map2 from file2 the same way
Then do the hash join:
for (String key1 : map1.keySet()) {
    if (map2.containsKey(key1)) {
        // do your thing, you found the match
    }
}
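A runnable sketch of that hash join on the movie data from the question above (the rating|movie|genres field layout is taken from the question; the class and method names are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

class HashJoin {
    // Index lines by movie name (second |-separated field) for O(1) lookups.
    static Map<String, String[]> index(List<String> lines) {
        Map<String, String[]> map = new LinkedHashMap<>();
        for (String line : lines) {
            String[] parts = line.split("\\|");
            map.put(parts[1], parts);
        }
        return map;
    }

    // For each movie present in both files, merge the comma-separated
    // genre fields (third column) into one set.
    static Map<String, Set<String>> join(List<String> file1, List<String> file2) {
        Map<String, String[]> map1 = index(file1);
        Map<String, String[]> map2 = index(file2);
        Map<String, Set<String>> merged = new LinkedHashMap<>();
        for (Map.Entry<String, String[]> e : map1.entrySet()) {
            if (map2.containsKey(e.getKey())) { // found the match
                Set<String> genres = new LinkedHashSet<>();
                genres.addAll(List.of(e.getValue()[2].split(",")));
                genres.addAll(List.of(map2.get(e.getKey())[2].split(",")));
                merged.put(e.getKey(), genres);
            }
        }
        return merged;
    }
}
```

Each line of both files is visited exactly once, which is where the amortized O(n) behavior comes from.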

Reading a txt file, then reorganizing it into an array

So basically what I need to do is:
Read a text file like this:
[Student ID], [Student Name], Asg 1, 10, Asg 2, 10, Midterm, 40, Final, 40
01234567, Timture Choi, 99.5, 97, 100.0, 99.0
02345678, Elaine Tam, 89.5, 88.5, 99.0, 100
and present it like this (with calculations of rank and average):
ID        Name          Asg 1  Asg 2  Midterm  Final  Overall  Rank
01234567  Timture Choi  99.5   97.0   100.0    99.0   99.3     1
02345678  Elaine Tam    89.5   88.5   99.0     100.0  97.4     2
Average:                94.5   92.75  99.5     99.5   98.3
using the printf() function.
now this is what I have done so far:
import java.io.*;
import java.util.Scanner;

class AssignmentGrades {
    public static void main(String args[]) throws Exception {
        Scanner filename = new Scanner(System.in);
        System.out.println("Enter your name of file : ");
        String fn = filename.nextLine(); // scanning the file name
        FileReader fr = new FileReader(fn + ".txt");
        BufferedReader br = new BufferedReader(fr);
        String list;
        while ((list = br.readLine()) != null) {
            System.out.println(list);
        }
        fr.close();
    }
}
So I can ask the user for the name of the file, then read it and print.
Now... I'm stuck. I think I probably need to put it into an array and split?
String firstrow = br.readLine();
String[] firstrow = firstrow.split(", ");
Something like that?.. ugh, I've been stuck here for more than an hour.
I really need help!! I appreciate your attention!! (I started to learn Java this week)
There are two ways to split the input line just read from the file:
Using the String object's split() method, which returns an array. Read more about split here.
The StringTokenizer class, which divides the input string into separate tokens based on a set of delimiters. Here is a good tutorial to get started.
You should be able to get more examples using google :)
In case you want to parse integers from String. Check this.
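Both approaches can be tried on a line shaped like the question's input; a small sketch (class and method names are arbitrary):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

class SplitDemo {
    // 1) String#split: one call, returns an array.
    static String[] bySplit(String line) {
        return line.split(", ");
    }

    // 2) StringTokenizer: pull tokens one at a time, trimming the
    //    space that follows each comma.
    static List<String> byTokenizer(String line) {
        List<String> tokens = new ArrayList<>();
        StringTokenizer st = new StringTokenizer(line, ",");
        while (st.hasMoreTokens()) {
            tokens.add(st.nextToken().trim());
        }
        return tokens;
    }
}
```

Both produce the same tokens for this input; split() is usually the more convenient choice when you want an array anyway.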
Here I store the columns as an array of Strings, and I store the record set as an ArrayList of String arrays. In the while loop, if the column set is not yet initialized (first iteration), I initialize it with the split; otherwise I add the split to the ArrayList. Remember to import java.util.ArrayList.
String[] columns = null;
ArrayList<String[]> values = new ArrayList<String[]>();
String list;
while ((list = br.readLine()) != null) {
    if (columns == null) {
        // first line: the column headers
        columns = list.split(", ");
    } else {
        values.add(list.split(", "));
    }
}
fr.close();
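Once the rows are split, System.out.printf (or String.format) with width specifiers can produce the aligned table the question shows; a minimal sketch (the helper's name and the column widths are arbitrary, and Locale.ROOT is used so the decimal point is always a dot):

```java
import java.util.Locale;

class TablePrint {
    // Formats one parsed record {id, name, asg1, asg2, midterm, final}
    // plus a computed overall score and rank as a fixed-width row.
    static String formatRow(String[] r, double overall, int rank) {
        return String.format(Locale.ROOT,
                "%-10s %-14s %7.1f %7.1f %8.1f %7.1f %8.1f %5d",
                r[0], r[1],
                Double.parseDouble(r[2]), Double.parseDouble(r[3]),
                Double.parseDouble(r[4]), Double.parseDouble(r[5]),
                overall, rank);
    }
}
```

%-10s left-justifies the ID in a 10-character field, while %7.1f right-justifies each grade with one decimal place, so rows line up under the header.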

TSV file into 2d array - java

I have a TSV txt file containing data in 3 columns.
It looks like:
HG sn FA
PC 2 16:0
PI 1 18:0
PS 3 20:0
PE 2 24:0
26:0
16:1
18:2
I want to read this file into a 2 dimensional array in java.
But I get an error all the time, no matter what I try.
File file = new File("table.txt");
Scanner scanner = new Scanner(file);
final int maxLines = 100;
String[][] resultArray = new String[maxLines][];
int linesCounter = 0;
while (scanner.hasNextLine() && linesCounter < maxLines) {
    resultArray[linesCounter] = scanner.nextLine().split("\t");
    linesCounter++;
}
System.out.print(resultArray[1][1]);
I keep getting this error
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 1
at exercise.exercise2.main(exercise2.java:31)
Line 31 is
System.out.print(resultArray[1][1]);
I cannot find any reasons why this error keeps emerging
In your case I would use Java 7's Files.readAllLines.
Something like:
String[][] resultArray;
List<String> lines = Files.readAllLines(Paths.get("table.txt"), StandardCharsets.UTF_8);
//lines.removeAll(Arrays.asList("", null)); // <- remove empty lines
resultArray = new String[lines.size()][];
for (int i = 0; i < lines.size(); i++) {
    resultArray[i] = lines.get(i).split("\t"); // tab-separated
}
Output:
[[HG, sn, FA], [PC, 2, 16:0], [PI, 1, 18:0], [PS, 3, 20:0], [PE, 2, 24:0], [, , 26:0], [, , 16:1], [, , 18:2]]
And this is the file (press edit and grab the content, it should be tab separated):
HG sn FA
PC 2 16:0
PI 1 18:0
PS 3 20:0
PE 2 24:0
26:0
16:1
18:2
[EDIT]
To get 16:1:
System.out.println(resultArray[6][2]);
