I am trying to get a project finished but am having no luck. It is an online course, so my only communication with the instructor is through email, and he has yet to reply to my four emails over the last five days.
For this assignment we had to download a CSV file containing NASDAQ stock price info for a specific company. I chose GOOG (Google). Below are the requirements for the code portion.
Create a second file ReadFiles.java. This is the file that will read in the data from your csv file. Note: You will want to use a smaller version of your data file (20 rows) for testing.
Your ReadFiles.java class requires the following methods:
Method: check to see if the file exists
Method: find number of rows in csv file
Method: Converts the csv file to a multi-dimensional array
Method: PrintArray
Method: Return array using a get method
Create a file DataAnalyzer.java. This file will be used to call the methods in ReadFiles.java. Be sure to demonstrate that all of your methods work through DataAnalyzer.java.
This is what I have so far.
package Analysis;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.io.Reader;
import java.util.StringTokenizer;
import java.util.Scanner;
public class ReadFiles
{
public static int numberOfRows;
public static int rowNumber = 0;
public static int columnNumber = 0;
public static void main(String[] args)
{
Scanner kb = new Scanner (System.in);
String fileName;
System.out.print("Enter the file name >> ");
fileName = kb.nextLine();
File f = new File("D:\\Java\\Assignment 3\\" + fileName);
if(f.exists())
{
System.out.print("File exists.");
}
fileName="D:\\Java\\Assignment 3\\" + fileName;
try
{
BufferedReader br = new BufferedReader(new FileReader(fileName));
StringTokenizer st = null;
while((fileName = br.readLine()) != null)
{
rowNumber++;
numberOfRows++;
st = new StringTokenizer(fileName, ",");
while(st.hasMoreTokens())
{
columnNumber++;
System.out.println("Row " + rowNumber +
", Column " + columnNumber
+ ", Entry : "+ st.nextToken());
}
columnNumber = 0;
}
}
catch (FileNotFoundException e)
{
e.printStackTrace();
}
catch (IOException e)
{
e.printStackTrace();
}
}
public static void rows()
{
System.out.println("Total Rows: " + numberOfRows);
}
}
The book we have been given for the course is no help. All of the "Examples" and "You do it" portions give errors. Also, in the entire chapter this assignment is based on, there is not one mention of an array.
When I run this code I do not get any error. I am shown the following:
File exists.
Row 1, Column 1, Entry : 30/12/2011
Row 1, Column 2, Entry : 642.02
Row 1, Column 3, Entry : 646.76
Row 1, Column 4, Entry : 642.02
Row 1, Column 5, Entry : 645.9
Row 1, Column 6, Entry : 1782300
Row 1, Column 7, Entry : 645.9
Row 2, Column 1, Entry : 29/12/2011
Row 2, Column 2, Entry : 641.49
I am shown rows 1 through 19 (the entire file).
What I do not understand is how to create separate methods in this class to convert to an array, print the array, and return the array.
Any help would be much appreciated.
Thanks
You need to define 2 classes, DataAnalyzer and ReadFiles. You usually have one file per class, although this is not a requirement. The structure of ReadFiles has been provided, so you will have a file called ReadFiles.java like this:
public class ReadFiles{
//instance var(s)
...
//constructor(s)
...
//methods(s)
/**
* Checks whether the file exists
*/
public boolean exists(){
....
}
/*
* Number of rows in the file
*/
public int getRowCount(){
....
}
// add the rest yourself!!
}
You'll also need a file called DataAnalyzer.java:
public class DataAnalyzer{
public static void main(String[] args){
//create a ReadFiles object, call its methods, and check they return what is expected
}
}
Assume the ReadFiles object manages a single input file; it probably needs an instance variable to hold that information. The DataAnalyzer will need to tell the ReadFiles which file to analyse (a constructor seems a good choice).
My advice is to create your skeleton structure (you already have been told what it is) and start building the functionality of each method one at a time.
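To make that concrete, here is a rough sketch of what a couple of those methods might look like once the file name lives in an instance variable. The names and details here are my own assumptions, not requirements of the assignment, and the remaining methods are left for you:
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ReadFiles {
    private final String fileName;   // set once by the constructor
    private String[][] data;         // filled in by toArray()

    public ReadFiles(String fileName) {
        this.fileName = fileName;
    }

    // Method: find number of rows in the csv file
    public int getRowCount() throws IOException {
        int rows = 0;
        try (BufferedReader br = new BufferedReader(new FileReader(fileName))) {
            while (br.readLine() != null) {
                rows++;
            }
        }
        return rows;
    }

    // Method: converts the csv file to a multi-dimensional array
    public void toArray() throws IOException {
        data = new String[getRowCount()][];   // re-reads the file once just to size the array
        try (BufferedReader br = new BufferedReader(new FileReader(fileName))) {
            String line;
            int row = 0;
            while ((line = br.readLine()) != null) {
                data[row++] = line.split(",");   // one row per line, one column per comma
            }
        }
    }

    // Method: return array using a get method
    public String[][] getArray() {
        return data;
    }
}
DataAnalyzer's main would then construct a ReadFiles with the file name and call each method in turn, printing the results to demonstrate that they work.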
package Testing;
import java.io.File;
import java.io.FileNotFoundException;
import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;
public class testing {
// map to store the number of errors per user
private static Map<String, Integer> errorsPerUser = new HashMap<>();
// variable to store the number of jobs started
private static int jobsStarted = 0;
// variable to store the number of jobs completed
private static int jobsCompleted = 0;
public static void main(String[] args) {
// specify the path to the log file
String filePath = "C:/Users/Wafiq/Documents/WIX1002/GroupAssignment/extracted_log.txt";
try (Scanner scanner = new Scanner(new File(filePath))) {
while (scanner.hasNextLine()) {
String line = scanner.nextLine();
int timestampEndIndex = line.indexOf("]");
String lineWithoutTimestamp = line.substring(timestampEndIndex+2);
// check if line contains error message
if (lineWithoutTimestamp.contains("error: This association")) {
// extract the user from the line
String user = extractUser(lineWithoutTimestamp);
// increment the error count for the user
incrementErrorCount(user);
}
// check if line indicates job start
if (lineWithoutTimestamp.contains("sched: Allocate")) {
jobsStarted++;
}
// check if line indicates job completion
if (lineWithoutTimestamp.contains("_job_complete: JobId")) {
jobsCompleted++;
}
}
} catch (FileNotFoundException e) {
e.printStackTrace();
}
// print the results
System.out.println("Number of jobs started: " + jobsStarted);
System.out.println("Number of jobs completed: " + jobsCompleted);
System.out.println("Number of errors per user:");
for (Map.Entry<String, Integer> entry : errorsPerUser.entrySet()) {
System.out.println(": " + entry.getValue());
}
}
// method to extract the user from the line
private static String extractUser(String line) {
// assuming the user is the string before "error" in the line
return line.substring(0, line.indexOf("error")).trim();
}
// method to increment the error count for the user
private static void incrementErrorCount(String user) {
if (errorsPerUser.containsKey(user)) {
errorsPerUser.put(user, errorsPerUser.get(user) + 1);
} else {
errorsPerUser.put(user, 1);
}
}
}
I'm trying to extract the number of jobs causing errors and the corresponding users. I have managed to count the jobs causing errors, but I don't know how to extract the corresponding user.
(P.S. Please don't slander me, I'm a first-year Computer Science student. I have tried my best.)
The user is not at the same index in each line, so I don't know how to extract it from the line.
While the user is not at the same index across lines, it always comes after user=' and ends on the next '. Search for these substrings in your line and you are done.
int startIndex = line.indexOf("user='");
if (startIndex >= 0) {
startIndex += "user='".length();              // move past the marker itself
int endIndex = line.indexOf("'", startIndex); // closing quote after the user name
String user = line.substring(startIndex, endIndex);
System.out.println("user=" + user);
} else {
System.out.println("no user in line");
}
Edit: I saw there is another pattern also in use. I think you can change the above algorithm to also allow for the second one.
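If juggling indexOf positions gets awkward, a regular expression does the same extraction. This is only a rough sketch that assumes the user='…' form shown above; the sample line in main is made up for illustration, and a second pattern would need its own alternation once you know what it looks like:
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class UserExtractorSketch {
    // Captures whatever sits between user=' and the next single quote.
    private static final Pattern USER = Pattern.compile("user='([^']+)'");

    static String extractUser(String line) {
        Matcher m = USER.matcher(line);
        return m.find() ? m.group(1) : null;   // null when the line carries no user
    }

    public static void main(String[] args) {
        // Made-up sample line, only to show the call.
        System.out.println(extractUser("sched: Allocate JobId=123 user='wafiq' partition=normal"));
    }
}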
I am learning how to work with files in Java. I have a sample file which contains key-value pairs. I am trying to find a key and, if it matches, the output file should be updated with both the key and its value. I am able to get the keys into the output file but unable to get the values too. StringBuilder may work here to append strings, but I don't know how.
Below are my input and output files.
Input File:
born time 9 AM London -- kingNumber 1234567890 -- address: abc/cd/ef -- birthmonth: unknown
born time 9 AM Europe -- kingNumber 1234567890 -- address: abc/cd/ef -- birthmonth: december
Expected Output File:
kingNumber 1234567890 birthmonth unknown
kingNumber 1234567890 birthmonth december
Current Output File:
kingNumber birthmonth
kingNumber birthmonth
I am able to write the keys ("kingNumber" and "birthmonth" in this case) to the output file, but I am not sure what I can do to get their values too.
String kn = "kingNumber:";
String bd = "birthmonth:";
try {
File f = new File("sample.txt");
Scanner sc = new Scanner(f);
FileWriter fw = new FileWriter("output.txt");
while(sc.hasNextLine()) {
String lineContains = sc.next();
if(lineContains.contains(kn)) {
fw.write(kn + "\n");
// This is where I am stuck. What
// can I do to get it's value (number in this case).
}
else if(lineContains.contains(bd)) {
fw.write(bd);
// This is where I am stuck. What
// can I do to get it's value (birthday in this case).
}
}
} catch (IOException e) {
e.printStackTrace();
}
You could use java.util.regex.Pattern & java.util.regex.Matcher with a pattern like:
^born\stime\s([a-zA-Z0-9\s]*)\s--\skingNumber\s(\d+)\s--\saddress:\s([a-zA-Z0-9\s/]*)\s--\sbirthmonth:\s([a-zA-Z0-9\s]*)$
write less, do more.
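For instance, here is a minimal sketch of how that pattern could be applied with Pattern and Matcher to pull out the two values the question needs (the class name is just illustrative, and the sample line is taken from the input file above):
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexSketch {
    public static void main(String[] args) {
        // The pattern from above, escaped for a Java string literal.
        Pattern p = Pattern.compile(
                "^born\\stime\\s([a-zA-Z0-9\\s]*)\\s--\\skingNumber\\s(\\d+)\\s--\\saddress:\\s([a-zA-Z0-9\\s/]*)\\s--\\sbirthmonth:\\s([a-zA-Z0-9\\s]*)$");

        String line = "born time 9 AM London -- kingNumber 1234567890 -- address: abc/cd/ef -- birthmonth: unknown";
        Matcher m = p.matcher(line);
        if (m.matches()) {
            // Group 2 is the kingNumber value, group 4 the birthmonth value.
            System.out.println("kingNumber " + m.group(2) + " birthmonth " + m.group(4));
        }
    }
}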
I have written a simple parser that follows the data format from your example.
You will need to call it like this:
PairParser parser = new PairParser(lineContains);
Then you can get a value from the parser by key, like this:
parser.getValue("kingNumber")
Note that keys do not have a trailing colon character.
The parser code is here:
package com.grenader.example;
import java.util.HashMap;
import java.util.Map;
public class PairParser {
private Map<String, String> data = new HashMap<>();
/**
* Constructor, prepare the data
* @param dataString line from the given data file
*/
public PairParser(String dataString) {
if (dataString == null || dataString.isEmpty())
throw new IllegalArgumentException("Data line cannot be empty");
// Split the input line into an array of string blocks using '--' as the separator
String[] blocks = dataString.split("--");
for (String block : blocks)
{
if (block.startsWith("born time")) // skip this one because it doesn't look like a key/value pair
continue;
String[] strings = block.split("\\s");
if (strings.length != 3) // does not have exactly 3 items (the first item is empty), skip this one as well
continue;
String key = strings[1];
String value = strings[2];
if (key.endsWith(":"))
key = key.substring(0, key.length()-1).trim();
data.put(key.trim(), value.trim());
}
}
/**
* Return value based on key
* @param key
* @return
*/
public String getValue(String key)
{
return data.get(key);
}
/**
* Return number of key/value pairs
* @return
*/
public int size()
{
return data.size();
}
}
And here is the Unit Test to make sure that the code works
package com.grenader.example;
import com.grenader.example.PairParser;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;
public class PairParserTest {
@Test
public void getValue_Ok() {
PairParser parser = new PairParser("born time 9 AM London -- kingNumber 1234567890 -- address: abc/cd/ef -- birthmonth: unknown");
assertEquals("1234567890", parser.getValue("kingNumber"));
assertEquals("unknown", parser.getValue("birthmonth"));
}
@Test(expected = IllegalArgumentException.class)
public void getValue_Null() {
new PairParser(null);
fail("This test should fail with Exception");
}
@Test(expected = IllegalArgumentException.class)
public void getValue_EmptyLine() {
new PairParser("");
fail("This test should fail with Exception");
}
@Test()
public void getValue_BadData() {
PairParser parser = new PairParser("bad data bad data");
assertEquals(0, parser.size());
}
}
I have an Excel sheet whose first column contains the following data: "What is ${v1} % of ${v2}?". Two more columns (v1 and v2) in this sheet contain {"type":"int", "minimum":15, "maximum":58} and {"type":"int", "minimum":30, "maximum":100}; these are the ranges of the variables v1 and v2. I need to replace v1 and v2 in the expression with a random value from the given range and store the expression in another spreadsheet using Java. How can I do this by making use of JETT?
For example: I should store "What is 25% of 50?"
This is what I have done. I am able to read the column in my Java program, but not to replace the values.
import java.io.FileInputStream;
import java.util.ArrayList;
import java.util.List;
import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import org.apache.poi.poifs.filesystem.POIFSFileSystem;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.Row;
public class ACGS {
public static void main(String[] args) throws Exception {
//test file is located in your project path
FileInputStream fileIn = new FileInputStream("C://users/user/Desktop/Content.xls");
//read file
POIFSFileSystem fs = new POIFSFileSystem(fileIn);
HSSFWorkbook filename = new HSSFWorkbook(fs);
//open sheet 0 which is first sheet of your worksheet
HSSFSheet sheet = filename.getSheetAt(0);
//we will search row 0 (the first row of the worksheet) for the cell whose text contains the wanted string
String columnWanted = "${v1}";
Integer columnNo = null;
//output all not null values to the list
List<Cell> cells = new ArrayList<Cell>();
Row firstRow = sheet.getRow(0);
for(Cell cell:firstRow){
if (cell.getStringCellValue().contains(columnWanted)){
columnNo = cell.getColumnIndex();
System.out.println("cell contains "+cell.getStringCellValue());
}
}
if (columnNo != null){
for (Row row : sheet) {
Cell c = row.getCell(columnNo);
if (c == null || c.getCellType() == Cell.CELL_TYPE_BLANK) {
// Nothing in the cell in this row, skip it
} else {
cells.add(c);
}
}
} else{
System.out.println("could not find column " + columnWanted + " in first row of " + fileIn.toString());
}
}
}
First, it looks like you aren't using JETT at all. You appear to be attempting to read the spreadsheet yourself and do some processing.
Here is how you would do this in JETT. JETT doesn't provide its own random number support, but together with its Apache Commons JEXL expression support, and Java's own Random, you can publish the expected ranges of your random variables as beans to JETT, and you can calculate a random variable with an expression.
First, create your template spreadsheet, populating it with expressions (between ${ and }) that JETT will evaluate. One cell might contain something like this.
What is ${rnd.nextInt(v1Max - v1Min + 1) + v1Min}% of ${rnd.nextInt(v2Max - v2Min + 1) + v2Min}?
Next, create beans to be supplied to JETT. These beans are the named objects that are available to JEXL expressions in your spreadsheet template.
Map<String, Object> beans = new HashMap<String, Object>();
beans.put("v1Min", 15);
beans.put("v1Max", 58);
beans.put("v2Min", 30);
beans.put("v2Max", 100);
beans.put("rnd", new Random());
Next, create your code that invokes the JETT ExcelTransformer.
try
{
ExcelTransformer transformer = new ExcelTransformer();
// template file name, destination file name, beans
transformer.transform("Content.xls", "Populated.xls", beans);
}
catch (IOException e)
{
System.err.println("IOException caught: " + e.getMessage());
}
catch (InvalidFormatException e)
{
System.err.println("InvalidFormatException caught: " + e.getMessage());
}
In the resultant spreadsheet, you will see the expressions evaluated. In the cell that contained the expressions above, you will see for example:
What is 41% of 38?
(Or you will see different numbers, depending on the random numbers generated.)
I am a novice to Java; however, I cannot seem to figure this one out. I have a CSV file in the following format:
String1,String2
String1,String2
String1,String2
String1,String2
Each line is a pair. The 2nd line is a new record, same with the 3rd. In the real world the CSV file will change in size; sometimes it will be 3 records, or 4, or even 10.
My issue is: how do I read the values into an array and dynamically adjust its size? I would imagine we would first have to parse through the CSV file, get the number of records/elements, then create the array based on that size, then go through the CSV again and store the values in the array.
I'm just not sure how to accomplish this.
Any help would be appreciated.
You can use an ArrayList instead of an array. An ArrayList is a dynamic array. For example:
Scanner scan = new Scanner(new File("yourfile"));
ArrayList<String[]> records = new ArrayList<String[]>();
while(scan.hasNextLine())
{
String[] record = scan.nextLine().split(",");
records.add(record);
}
//now records has your records.
//here is a way to loop through the records (process)
for(String[] temp : records)
{
for(String temp1 : temp)
{
System.out.print(temp1 + " ");
}
System.out.print("\n");
}
Just replace "yourfile" with the absolute path to your file.
If you don't like the enhanced for loop in the first example, a more traditional indexed for loop works too:
for(int i = 0; i < records.size(); i++)
{
for(int j = 0; j < records.get(i).length; j++)
{
System.out.print(records.get(i)[j] + " ");
}
System.out.print("\n");
}
Both for loops are doing the same thing though.
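If the assignment insists on a plain two-dimensional array rather than a list, the records list from the sketch above can be converted once reading is finished:
// Assumes the 'records' list from the example above has already been filled.
String[][] table = records.toArray(new String[records.size()][]);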
You can read the CSV into the equivalent of a 2-dimensional array (a List of String arrays) in just a couple of lines with the open-source library uniVocity-parsers.
Refer to the following code as an example:
import com.univocity.parsers.csv.CsvParser;
import com.univocity.parsers.csv.CsvParserSettings;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.util.List;
public static void main(String[] args) throws FileNotFoundException {
/**
* ---------------------------------------
* Read CSV rows into 2-dimensional array
* ---------------------------------------
*/
// 1st, creates a CSV parser with the configs
CsvParser parser = new CsvParser(new CsvParserSettings());
// 2nd, parses all rows from the CSV file into a 2-dimensional array
List<String[]> resolvedData = parser.parseAll(new FileReader("/examples/example.csv"));
// 3rd, process the 2-dimensional array with business logic
// ......
}
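For instance, the processing step left as a placeholder above could be as simple as looping over the parsed rows (continuing from the resolvedData list):
for (String[] row : resolvedData) {
    System.out.println(String.join(",", row));   // or any other business logic per row
}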
tl;dr
Use the Java Collections rather than arrays, specifically a List or Set, to auto-expand as you add items.
Define a class to hold your data read from CSV, instantiating an object for each row read.
Use the Apache Commons CSV library to help with the chore of reading/writing CSV files.
Class to hold data
Define a class to hold the data of each row being read from your CSV. Let's use a Person class with a given name and surname, to be more concrete than the example in your Question.
In Java 16 and later, more briefly define the class as a record.
record Person ( String givenName , String surname ) {}
In older Java, define a conventional class.
package work.basil.example;
public class Person {
public String givenName, surname;
public Person ( String givenName , String surname ) {
this.givenName = givenName;
this.surname = surname;
}
@Override
public String toString ( ) {
return "Person{ " +
"givenName='" + givenName + '\'' +
" | surname='" + surname + '\'' +
" }";
}
}
Collections, not arrays
Using the Java Collections is generally better than using mere arrays. The collections are more flexible and more powerful. See Oracle Tutorial.
Here we will use the List interface to collect each Person object instantiated from data read in from the CSV file. We use the concrete ArrayList implementation of List which uses arrays in the background. The important part here, related to your Question, is that you can add objects to a List without worrying about resizing. The List implementation is responsible for any needed resizing.
If you happen to know the approximate size of your list to be populated, you can supply an optional initial capacity as a hint when creating the List.
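For example (the 1_000 here is only an illustrative guess at the expected row count, not something you need to know exactly):
List<Person> people = new ArrayList<>(1_000);   // capacity hint only; the list still grows as needed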
Apache Commons CSV
The Apache Commons CSV library does a nice job of reading and writing several variants of CSV and Tab-delimited formats.
Example app
Here is an example app, in a single PersonIo.java file. The Io is short for input-output.
Example data.
GivenName,Surname
Alice,Albert
Bob,Babin
Charlie,Comtois
Darlene,Deschamps
Source code.
package work.basil.example;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVRecord;
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
public class PersonIo {
public static void main ( String[] args ) {
PersonIo app = new PersonIo();
app.doIt();
}
private void doIt ( ) {
Path path = Paths.get( "/Users/basilbourque/people.csv" );
List < Person > people = this.read( path );
System.out.println( "People: \n" + people );
}
private List < Person > read ( final Path path ) {
Objects.requireNonNull( path );
if ( Files.notExists( path ) ) {
System.out.println( "ERROR - no file found for path: " + path + ". Message # de1f0be7-901f-4b57-85ae-3eecac66c8f6." );
}
List < Person > people = List.of(); // Default to empty list.
try {
// Hold data read from file.
int initialCapacity = ( int ) Files.lines( path ).count();
people = new ArrayList <>( initialCapacity );
// Read CSV file.
BufferedReader reader = Files.newBufferedReader( path );
Iterable < CSVRecord > records = CSVFormat.RFC4180.withFirstRecordAsHeader().parse( reader );
for ( CSVRecord record : records ) {
// GivenName,Surname
// Alice,Albert
// Bob,Babin
// Charlie,Comtois
// Darlene,Deschamps
String givenName = record.get( "GivenName" );
String surname = record.get( "Surname" );
// Use read data to instantiate.
Person p = new Person( givenName , surname );
// Collect
people.add( p );
}
} catch ( IOException e ) {
e.printStackTrace();
}
return people;
}
}
When run.
People:
[Person{ givenName='Alice' | surname='Albert' }, Person{ givenName='Bob' | surname='Babin' }, Person{ givenName='Charlie' | surname='Comtois' }, Person{ givenName='Darlene' | surname='Deschamps' }]
The attached file is a CSV file.
Write a Java program to read data from this file and populate each service in separate maps and print it.
CSV File
--------
CompanyName location foundedby profit noofyears
HCL chennai shivnadar 3.2 8
XYZ chennai XCV 9 10
How do I pass the column names as keys in two separate maps (one map for HCL and another for XYZ)?
Each row has its own mapping.
My code is
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;
public class ReadServicesExercise {
public static void main(String[] args) {
String file = "D://ReadCsv.csv";
try{
BufferedReader br = new BufferedReader(new FileReader(file));
Map<Integer,String> values = new HashMap<Integer,String>();
String line = "";
StringTokenizer tokens = null;
int lineNo = 0;
int tokenNo = 0;
//reading the csv file line by line
while ((line = br.readLine()) != null) {
//increment the lineNo after every line is being read
lineNo++;
System.out.println("Reading Line No. : "+lineNo);
tokens = new StringTokenizer(line, ",");
while (tokens.hasMoreTokens()) {
//increment the token no!
tokenNo++;
//Print csv values
System.out.print(tokens.nextToken() + " ");
//Need to write the values to the hashmap
values.put(arg0, arg1);
}
System.out.println();
//reset token number
tokenNo = 0;
}
}
catch(Exception ex){
System.err.println("CSV file not found : " + ex);
}
}
}
This prints every line, but I want the first line's column names to be used as keys, with the corresponding values displayed for each company name.
You can achieve this using maps within a map. Create a map for each row, with the column name as key and the cell value as value. The parent map then uses a unique row value as key, say the company name, and the row maps as values.
While retrieving, first retrieve the row map using the unique key, and then, out of that map, use the column name as key to get the value.
Row maps: Map<String, String>
Parent map: Map<String, Map<String, String>>
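A rough sketch of that idea, reusing the file path from the question and assuming the file really is comma-separated (as the StringTokenizer in the question implies); the class name and the LinkedHashMap choice are just illustrative:
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class ReadServicesSketch {
    public static void main(String[] args) throws IOException {
        String file = "D://ReadCsv.csv";
        Map<String, Map<String, String>> companies = new HashMap<>();

        try (BufferedReader br = new BufferedReader(new FileReader(file))) {
            String[] headers = br.readLine().split(",");   // first line holds the column names
            String line;
            while ((line = br.readLine()) != null) {
                String[] cells = line.split(",");
                Map<String, String> row = new LinkedHashMap<>();
                for (int i = 0; i < headers.length && i < cells.length; i++) {
                    row.put(headers[i], cells[i]);          // column name -> cell value
                }
                companies.put(cells[0], row);               // key each row map by CompanyName
            }
        }

        System.out.println(companies);                      // one inner map per company
        System.out.println(companies.get("HCL").get("profit"));   // e.g. look up one value
    }
}
Printing companies then shows one inner map per company, keyed by the column names from the first line.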