Design a report with OO in Java

I want to implement 2 reports in an object-oriented way. The reports all look like this (but with different columns and data):
name age gender phone_number
A 10 male 1234
B 20 female 5678
C 30 n/a 9012
As you can see, each column in the report has its own header and parser (for parsing the data). I have designed a Column class:
class Column<T>
{
    private String header;
    private ColumnParser<T> parser;

    public Column(String header)
    {
        this.header = header;
        this.parser = new ColumnParser<T>()
        {
            public String parse(T t)
            {
                return t.toString();
            }
        };
    }

    public Column(String header, ColumnParser<T> parser)
    {
        this.header = header;
        this.parser = parser;
    }

    public interface ColumnParser<T>
    {
        public String parse(T t);
    }
}
That way, each column has its own parser for the data in that column. But beyond this, I don't know how to store the data so that it can be mapped to each column and parsed.
Please advise.

First, it would be helpful to know what format your original data (in memory) is in - e.g. is it an Object[][]?
Second, the output you require looks like it's tab separated. Is that correct?
Third, to write to a text file you have to append row by row. Your current code seems to suggest you want to append column by column - this would be much harder to implement.
If you can convert your data into a String[][] - which should be straightforward - you can then use the following to write to a file. If you want tab-delimited output, use "\t" as the delimiter (the tab character is the same on every platform, unlike the newline sequence).
public static void writeToFile(File file, String[][] data, String delimiter) throws IOException {
    PrintWriter out = new PrintWriter(new FileWriter(file));
    for (String[] row : data) {
        out.write(makeLine(row, delimiter));
    }
    out.close();
}

private static String makeLine(String[] row, String delimiter) {
    StringBuilder str = new StringBuilder();
    for (String cell : row) {
        str.append("\"").append(cell).append("\"").append(delimiter);
    }
    str.deleteCharAt(str.length() - 1);  // drop the trailing delimiter
    str.append("\n");
    return str.toString();
}
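For example, called with the sample data from the question (a sketch; the file name and the header-in-row-zero convention are my assumptions):
// Headers go in the first row, data rows follow, tab-delimited output.
String[][] data = {
    {"name", "age", "gender", "phone_number"},
    {"A", "10", "male",   "1234"},
    {"B", "20", "female", "5678"},
    {"C", "30", "n/a",    "9012"}
};
writeToFile(new File("report.txt"), data, "\t");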

Related

Avro SchemaBuilder - "Can't overwrite property: scale" for Decimal logical type

I am attempting to generate an Avro schema from Java to describe a table that I can access via JDBC.
I use the JDBC getMetaData() method to retrieve the relevant column metadata and store it in an ArrayList of columnDetail objects.
columnDetail is defined as:
private static class columnDetail {
public String tableName;
public String columnName;
public String dataTypeName;
public int dataTypeId;
public String size;
public String scale;
}
I then iterate through this array list and build up the Avro schema using the org.apache.avro.SchemaBuilder class.
My issue is around decimal logical types.
I iterate through the array list twice: the first time to add all fields to the FieldAssembler, the second to modify certain bytes fields to add the decimal logical type.
The issue I am experiencing is that I get an error if the Decimal scale value changes between iterations.
As it iterates through the columnDetail array, it will work so long as the value "scale" does not change. If it does change, the following occurs:
Exception in thread "main" org.apache.avro.AvroRuntimeException: Can't overwrite property: scale
at org.apache.avro.JsonProperties.addProp(JsonProperties.java:187)
at org.apache.avro.Schema.addProp(Schema.java:134)
at org.apache.avro.JsonProperties.addProp(JsonProperties.java:191)
at org.apache.avro.Schema.addProp(Schema.java:139)
at org.apache.avro.LogicalTypes$Decimal.addToSchema(LogicalTypes.java:193)
at GenAvroSchema.main(GenAvroSchema.java:85)
I can prevent this by hardcoding the decimal precision and scale, i.e. I can replace
org.apache.avro.LogicalTypes.decimal(Integer.parseInt(cd.size),Integer.parseInt(cd.scale)).addToSchema(schema.getField(cd.columnName).schema());
with
org.apache.avro.LogicalTypes.decimal(18,2).addToSchema(schema.getField(cd.columnName).schema());
This, however, ends up with the same precision and scale for all decimal fields, which is not desirable.
Can someone help with this ?
Java: 1.8.0_202
Avro: avro-1.8.2.jar
My java code:
public static void main(String[] args) throws Exception{
String jdbcURL = "jdbc:sforce://login.salesforce.com";
String jdbcUser = "userid";
String jdbcPassword = "password";
String avroDataType = "";
HashMap<String, String> dtmap = new HashMap<String, String>();
dtmap.put("VARCHAR", "string");
dtmap.put("BOOLEAN", "boolean");
dtmap.put("NUMERIC", "bytes");
dtmap.put("INTEGER", "int");
dtmap.put("TIMESTAMP", "string");
dtmap.put("DATE", "string");
ArrayList<columnDetail> columnDetails = new ArrayList<columnDetail>();
columnDetails = populateMetadata(jdbcURL, jdbcUser, jdbcPassword); // This works so have not included code here
SchemaBuilder.FieldAssembler<Schema> fields = SchemaBuilder.builder().record("account").doc("Account Detials").fields() ;
for(columnDetail cd:columnDetails) {
avroDataType = dtmap.get(JDBCType.valueOf(cd.dataTypeId).getName());
switch(avroDataType)
{
case "string":
fields.name(cd.columnName).type().unionOf().nullType().and().stringType().endUnion().nullDefault();
break;
case "int":
fields.name(cd.columnName).type().unionOf().nullType().and().intType().endUnion().nullDefault();
break;
case "boolean":
fields.name(cd.columnName).type().unionOf().booleanType().and().nullType().endUnion().booleanDefault(false);
break;
case "bytes":
if(Integer.parseInt(cd.scale) == 0) {
fields.name(cd.columnName).type().unionOf().nullType().and().longType().endUnion().nullDefault();
} else {
fields.name(cd.columnName).type().bytesType().noDefault();
}
break;
default:
fields.name(cd.columnName).type().unionOf().nullType().and().stringType().endUnion().nullDefault();
break;
}
}
Schema schema = fields.endRecord();
for(columnDetail cd:columnDetails) {
avroDataType = dtmap.get(JDBCType.valueOf(cd.dataTypeId).getName());
if(avroDataType == "bytes" && Integer.parseInt(cd.scale) != 0) {
//org.apache.avro.LogicalTypes.decimal(Integer.parseInt(cd.size),Integer.parseInt(cd.scale)).addToSchema(schema.getField(cd.columnName).schema());
org.apache.avro.LogicalTypes.decimal(18,2).addToSchema(schema.getField(cd.columnName).schema());
}
}
BufferedWriter writer = new BufferedWriter(new FileWriter("./account.avsc"));
writer.write(schema.toString());
writer.close();
}
Thanks,
Eoin.

Java sanitizing Arraylist records suggestions

I am looking for ideas on how to accomplish this task, so I'll start with how my program works.
My program reads a CSV file of key-value pairs separated by a comma:
L1234456,ygja-3bcb-iiiv-pppp-a8yr-c3d2-ct7v-giap-24yj-3gie
L6789101,zgna-3mcb-iiiv-pppp-a8yr-c3d2-ct7v-gggg-zz33-33ie
etc
A function takes the file and parses it into an ArrayList of String[], and returns that ArrayList.
public ArrayList<String[]> parseFile(File csvFile) {
Scanner scan = null;
try {
scan = new Scanner(csvFile);
} catch (FileNotFoundException e) {
}
ArrayList<String[]> records = new ArrayList<String[]>();
String[] record = new String[2];
while (scan.hasNext()) {
record = scan.nextLine().trim().split(",");
records.add(record);
}
return records;
}
Here is the code where I call parseFile and pass in the CSV file.
ArrayList<String[]> Records = parseFile(csvFile);
I then created another ArrayList for records that could not be parsed.
ArrayList<String> NotParsed = new ArrayList<String>();
The program then sanitizes the key-value pairs. We start with the first key in the record, e.g. L1234456. If the record could not be sanitized, the current key is replaced with the text "CouldNotBeParsed".
for (int i = 0; i < Records.size(); i++) {
if(!validateRecord(Records.get(i)[0].toString())) {
Logging.info("Records could not be parsed " + Records.get(i)[0]);
NotParsed.add(srpRecords.get(i)[0].toString());
Records.get(i)[0] = "CouldNotBeParsed";
} else {
Logging.info(Records.get(i)[0] + " has been sanitized");
}
}
Next we do the second part of the key-value pair, e.g. ygja-3bcb-iiiv-pppp-a8yr-c3d2-ct7v-giap-24yj-3gie
for (int i = 0; i < Records.size(); i++) {
if(!validateRecordKey(Records.get(i)[1].toString())) {
Logging.info("Record Key could not be parsed " + Records.get(i)[0]);
NotParsed.add(Records.get(i)[1].toString());
Records.get(i)[1] = "CouldNotBeParsed";
} else {
Logging.info(Records.get(i)[1] + " has been sanitized");
}
}
The problem is that I need both parts of the key-value pair to be sanitized, plus a separate list of the pairs that could not be sanitized and a list of the ones that were, so the good ones can be inserted into a database. The ones that could not be sanitized will be printed out to the user.
I thought about looping through the records and removing the ones with the "CouldNotBeParsed" text, which would leave just the ones that could be parsed. I also tried removing records during the for loop with Records.remove(i); however, that messes up the loop: if the first record could not be sanitized and is removed, the next record gets skipped on the following iteration because record 2 has become record 1. That's why I went with adding the text instead.
Actually I need two lists: one for the records that were sanitized and another for those that weren't.
So I was thinking there must be a better way to do this, or a better method of sanitizing both parts of each key-value pair at the same time, or something of that nature. Suggestions?
Start by changing the data structure: rather than using a list of two-element String[] arrays, define a class for your key-value pairs:
class KeyValuePair {
    private final String key;
    private final String value;

    public KeyValuePair(String k, String v) { key = k; value = v; }

    public String getKey()   { return key; }
    public String getValue() { return value; }
}
Note that the class is immutable.
Now make an object with three lists of KeyValuePair objects:
class ParseResult {
    private final List<KeyValuePair> sanitized;
    private final List<KeyValuePair> badKey;
    private final List<KeyValuePair> badValue;

    public ParseResult(List<KeyValuePair> s, List<KeyValuePair> bk, List<KeyValuePair> bv) {
        sanitized = s;
        badKey = bk;
        badValue = bv;
    }

    public List<KeyValuePair> getSanitized() { return sanitized; }
    public List<KeyValuePair> getBadKey()    { return badKey; }
    public List<KeyValuePair> getBadValue()  { return badValue; }
}
Finally, populate these three lists in a single loop that reads from the file:
public static ParseResult parseFile(File csvFile) {
    Scanner scan = null;
    try {
        scan = new Scanner(csvFile);
    } catch (FileNotFoundException e) {
        // Do something about this exception.
        // Consider not catching it here, letting the caller deal with it.
    }
    final List<KeyValuePair> sanitized = new ArrayList<KeyValuePair>();
    final List<KeyValuePair> badKey = new ArrayList<KeyValuePair>();
    final List<KeyValuePair> badValue = new ArrayList<KeyValuePair>();
    while (scan.hasNext()) {
        String[] tokens = scan.nextLine().trim().split(",");
        if (tokens.length != 2) {
            // Do something about this - either throw an exception,
            // or log a message and continue.
        }
        KeyValuePair kvp = new KeyValuePair(tokens[0], tokens[1]);
        // Do the validation on the spot
        if (!validateRecordKey(kvp.getKey())) {
            badKey.add(kvp);
        } else if (!validateRecord(kvp.getValue())) {
            badValue.add(kvp);
        } else {
            sanitized.add(kvp);
        }
    }
    return new ParseResult(sanitized, badKey, badValue);
}
Now you have a single function that produces a single result with all your records cleanly separated into three buckets - i.e. sanitized records, records with bad keys, and records with good keys but bad values.
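A short usage sketch of the caller (insertIntoDatabase is a hypothetical placeholder for your own DB code):
ParseResult result = parseFile(csvFile);

// Clean records go to the database.
for (KeyValuePair kvp : result.getSanitized()) {
    insertIntoDatabase(kvp.getKey(), kvp.getValue());   // hypothetical DB helper
}
// Everything else is reported to the user.
for (KeyValuePair kvp : result.getBadKey()) {
    System.out.println("Key could not be sanitized: " + kvp.getKey());
}
for (KeyValuePair kvp : result.getBadValue()) {
    System.out.println("Value could not be sanitized: " + kvp.getValue());
}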

How to display records in a JTable from an arraylist .TXT file in java MVC?

Currently, this is my main screen: [screenshot omitted]
I have 2 files: “patient.txt” and “treatment.txt” which hold records of multiple patients and treatments.
What I’m trying to do is to display all of those records in a nice JTable whenever I click “Display Treatments” or “Display Patients”, in a screen like so: [screenshot omitted]
I am using an MVC model for this Hospital Management System (with HMSGUIModel.java, HMSGUIView.java, HMSGUIController.java, HMSGUIInterface.java files), and add records using the following code:
FileWriter tfw = new FileWriter(file.getAbsoluteFile(), true);
BufferedWriter tbw = new BufferedWriter(tfw);
tbw.write(this.view.gettNumber() + "," + this.view.gettName() + "," + this.view.gettDoctor() + "," + this.view.gettRoom());
tbw.newLine();
tbw.flush();
JOptionPane.showMessageDialog(null, "Successfully added treatment!");
Please advise on how I can add a reader as well, to display all the records from the text file in a table.
Many thanks in advance!!
Keeping in line with your MVC approach, you could create a TableModel which knows how to read a given patient record.
Personally, though, I'd prefer to separate the management of the patient data from the view, so the view doesn't care about where the data came from.
To this end, I would start by creating a Patient object and a Treatment object; these would hold the data in self-contained entities, making the management simpler...
You would need to read this data in and parse the results...
List<Treatment> treatments = new ArrayList<Treatment>(25);
try (BufferedReader br = new BufferedReader(new FileReader(file))) {
    String text = null;
    while ((text = br.readLine()) != null) {
        String[] parts = text.split(",");
        Treatment treatment = new Treatment(parts[0],
                                            parts[1],
                                            parts[2],
                                            parts[3]);
        treatments.add(treatment);
    }
} // Handle exception as required...
I'd wrap this into a readTreatments method in some utility class to make it easier to use...
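A minimal sketch of that utility and the Treatment entity it returns (the names match the TreatmentUtilities.readTreatments call used further down; I've kept all four fields as plain Strings for brevity, so a real Treatment would convert the number, doctor and room values into the richer types the table model below expects):
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// One row of treatment.txt as a self-contained entity.
class Treatment {
    private final String number;
    private final String name;
    private final String doctor;
    private final String roomNumber;

    public Treatment(String number, String name, String doctor, String roomNumber) {
        this.number = number;
        this.name = name;
        this.doctor = doctor;
        this.roomNumber = roomNumber;
    }

    public String getNumber()     { return number; }
    public String getName()       { return name; }
    public String getDoctor()     { return doctor; }
    public String getRoomNumber() { return roomNumber; }
}

// Utility class wrapping the reading code from above.
class TreatmentUtilities {

    public static List<Treatment> readTreatments(File file) throws IOException {
        List<Treatment> treatments = new ArrayList<>(25);
        try (BufferedReader br = new BufferedReader(new FileReader(file))) {
            String text;
            while ((text = br.readLine()) != null) {
                String[] parts = text.split(",");
                treatments.add(new Treatment(parts[0], parts[1], parts[2], parts[3]));
            }
        }
        return treatments;
    }
}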
Around about here, I'd be considering using a stand alone database or even an XML document, but that's just me.
Once you have this, you can design a TableModel to support it...
import java.util.ArrayList;
import java.util.List;
import javax.swing.table.AbstractTableModel;

public class TreatmentTableModel extends AbstractTableModel {

    protected static final String[] COLUMN_NAMES = {
        "Treatment-Number",
        "Treatment-Name",
        "Doctor-in-charge",
        "Room-No",
    };

    protected static final Class[] COLUMN_CLASSES = new Class[]{
        Integer.class,
        String.class,
        Doctor.class,
        Integer.class,
    };

    private List<Treatment> treatments;

    public TreatmentTableModel() {
        this.treatments = new ArrayList<>();
    }

    public TreatmentTableModel(List<Treatment> treatments) {
        this.treatments = new ArrayList<>(treatments);
    }

    @Override
    public int getRowCount() {
        return treatments.size();
    }

    @Override
    public int getColumnCount() {
        return 4;
    }

    @Override
    public String getColumnName(int column) {
        return COLUMN_NAMES[column];
    }

    @Override
    public Class<?> getColumnClass(int columnIndex) {
        return COLUMN_CLASSES[columnIndex];
    }

    @Override
    public Object getValueAt(int rowIndex, int columnIndex) {
        Treatment treatment = treatments.get(rowIndex);
        Object value = null;
        switch (columnIndex) {
            case 0:
                value = treatment.getNumber();
                break;
            case 1:
                value = treatment.getName();
                break;
            case 2:
                value = treatment.getDoctor();
                break;
            case 3:
                value = treatment.getRoomNumber();
                break;
        }
        return value;
    }
}
Then you simply apply it to what ever JTable you need...
private JTable treatments;
//...
treatments = new JTable(new TreatmentTableModel());
add(new JScrollPane(treatments));
Then, when you need to, you would load the List of Treatments and apply it to the table...
File file = new File("...");
treatments.setModel(new TreatmentTableModel(TreatmentUtilities.readTreatments(file)));
Depending on your needs for the table, you can look at using the DefaultTableModel and populating your data using that model. The downside is that you may want special capabilities from your table, like not being able to edit cells, storing more than strings, etc., in which case you might look into extending AbstractTableModel and defining your own behavior for the model.
A simple thing to do would be to start with the default model and expand on that.
String[] myColumns = {"Treatment-Number", "Treatment-Name", "Doctor-in-charge", "Room-No"};
// init a model sized to the data, with the specified column names
DefaultTableModel myModel = new DefaultTableModel(new Object[myList.size()][4], myColumns);
// assuming you have a list of lists...
int i = 0;
for (ArrayList<Object> list : myList) {
    int j = 0;                        // restart at the first column for each row
    for (Object o : list) {
        myModel.setValueAt(o, i, j);  // set the value at cell i,j to o
        j++;
    }
    i++;
}
JTable myTable = new JTable(myModel); // make a new table with the specified data model
// ... do other stuff with the table
// ... do other stuff with the table
If you want to access the table data, you use myTable.getModel() and update the data. This will automatically update the view of the table (completing the MVC connection)
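For example (the cell value here is arbitrary):
// Update a cell through the model; the JTable view refreshes itself.
DefaultTableModel model = (DefaultTableModel) myTable.getModel();
model.setValueAt("Dr. Smith", 0, 2);   // row 0, "Doctor-in-charge" column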
Look here for more info on using tables.

Is there an SQL wrapper to query an HTML table

There are lots of questions on how to format the results of an SQL query as an HTML table, but I'd like to go the other way - given an arbitrary HTML table with a header row, I'd like to be able to extract information from one or more rows using SQL (or an SQL-like language). Simple to state, but apparently not so simple to accomplish.
Ultimately, I'd prefer to parse the HTML properly with something like libtidy or JSoup, but while the API documentation is usually reasonable, when it comes to examples or tutorials on actually using them, you usually find an example of extracting the <title> tag (which could be accomplished with regexes) with no real-world examples of how to use the library. So, a good resource or example code for one of the existing, established libraries would also be good.
Simple code for transforming a table into a list of tuples using JSoup looks like this:
import java.util.LinkedList;
import java.util.List;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class Main {

    public static void main(String[] args) throws Exception {
        final String html =
                "<html><head/><body>" +
                "<table id=\"example\">" +
                "<tr><td>John</td><td>Doe</td></tr>" +
                "<tr><td>Michael</td><td>Smith</td>" +
                "</table>" +
                "</body></html>";
        final List<Tuple> tuples = parse(html, "example");
        //... Here the table is parsed
    }

    private static final List<Tuple> parse(final String html, final String tableId) {
        final List<Tuple> tuples = new LinkedList<Tuple>();
        final Element table = Jsoup.parse(html).getElementById(tableId);
        final Elements rows = table.getElementsByTag("tr");
        for (final Element row : rows) {
            final Elements children = row.children();
            final int childCount = children.size();
            final Tuple tuple = new Tuple(childCount);
            for (final Element child : children) {
                tuple.addColumn(child.text());
            }
            tuples.add(tuple); // collect the completed row
        }
        return tuples;
    }
}
public final class Tuple {

    private final String[] columns;
    private int cursor;

    public Tuple(final int size) {
        columns = new String[size];
        cursor = 0;
    }

    public String getColumn(final int no) {
        return columns[no];
    }

    public void addColumn(final String value) {
        columns[cursor++] = value;
    }
}
From this on you can e.g. create an in-memory table with H2 and use a regular SQL.
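For instance, a rough sketch of that H2 idea (the two-column person table mirrors the example above; table and column names are made up, and the H2 driver jar is assumed to be on the classpath):
import java.sql.*;
import java.util.List;

public class TupleToH2 {

    // Load the parsed tuples into an in-memory H2 table and query it with plain SQL.
    public static void queryTuples(List<Tuple> tuples) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:tabledb")) {
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE person (first_name VARCHAR(255), last_name VARCHAR(255))");
            }
            try (PreparedStatement ps =
                         conn.prepareStatement("INSERT INTO person VALUES (?, ?)")) {
                for (Tuple t : tuples) {
                    ps.setString(1, t.getColumn(0));
                    ps.setString(2, t.getColumn(1));
                    ps.executeUpdate();
                }
            }
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT last_name FROM person WHERE first_name = 'John'")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}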

Parsing a .txt file (considering performance measure)

DurationOfRun:5
ThreadSize:10
ExistingRange:1-1000
NewRange:5000-10000
Percentage:55 - AutoRefreshStoreCategories Data:Previous/30,New/70 UserLogged:true/50,false/50 SleepTime:5000 AttributeGet:1,16,10106,10111 AttributeSet:2060/30,10053/27
Percentage:25 - CrossPromoEditItemRule Data:Previous/60,New/40 UserLogged:true/50,false/50 SleepTime:4000 AttributeGet:1,10107 AttributeSet:10108/34,10109/25
Percentage:20 - CrossPromoManageRules Data:Previous/30,New/70 UserLogged:true/50,false/50 SleepTime:2000 AttributeGet:1,10107 AttributeSet:10108/26,10109/21
I am trying to parse the .txt file above (the first four lines are fixed; the number of Percentage lines can grow beyond three). I wrote the code below and it works, but it looks messy. Is there a better way to parse this file, and if we consider performance, which approach would be best?
private static int noOfThreads;
private static List<Command> commands;
public static int startRange;
public static int endRange;
public static int newStartRange;
public static int newEndRange;
private static BufferedReader br = null;
private static String sCurrentLine = null;
private static List<String> values;
private static String commandName;
private static String percentage;
private static List<String> attributeIDGet;
private static List<String> attributeIDSet;
private static LinkedHashMap<String, Double> dataCriteria;
private static LinkedHashMap<Boolean, Double> userLoggingCriteria;
private static long sleepTimeOfCommand;
private static long durationOfRun;
br = new BufferedReader(new FileReader("S:\\Testing\\PDSTest1.txt"));
values = new ArrayList<String>();
while ((sCurrentLine = br.readLine()) != null) {
if(sCurrentLine.startsWith("DurationOfRun")) {
durationOfRun = Long.parseLong(sCurrentLine.split(":")[1]);
} else if(sCurrentLine.startsWith("ThreadSize")) {
noOfThreads = Integer.parseInt(sCurrentLine.split(":")[1]);
} else if(sCurrentLine.startsWith("ExistingRange")) {
startRange = Integer.parseInt(sCurrentLine.split(":")[1].split("-")[0]);
endRange = Integer.parseInt(sCurrentLine.split(":")[1].split("-")[1]);
} else if(sCurrentLine.startsWith("NewRange")) {
newStartRange = Integer.parseInt(sCurrentLine.split(":")[1].split("-")[0]);
newEndRange = Integer.parseInt(sCurrentLine.split(":")[1].split("-")[1]);
} else {
attributeIDGet = new ArrayList<String>();
attributeIDSet = new ArrayList<String>();
dataCriteria = new LinkedHashMap<String, Double>();
userLoggingCriteria = new LinkedHashMap<Boolean, Double>();
percentage = sCurrentLine.split("-")[0].split(":")[1].trim();
values = Arrays.asList(sCurrentLine.split("-")[1].trim().split("\\s+"));
for(String s : values) {
if(s.startsWith("Data")) {
String[] data = s.split(":")[1].split(",");
for (String n : data) {
dataCriteria.put(n.split("/")[0], Double.parseDouble(n.split("/")[1]));
}
//dataCriteria.put(data.split("/")[0], value)
} else if(s.startsWith("UserLogged")) {
String[] userLogged = s.split(":")[1].split(",");
for (String t : userLogged) {
userLoggingCriteria.put(Boolean.parseBoolean(t.split("/")[0]), Double.parseDouble(t.split("/")[1]));
}
//userLogged = Boolean.parseBoolean(s.split(":")[1]);
} else if(s.startsWith("SleepTime")) {
sleepTimeOfCommand = Long.parseLong(s.split(":")[1]);
} else if(s.startsWith("AttributeGet")) {
String[] strGet = s.split(":")[1].split(",");
for(String q : strGet) attributeIDGet.add(q);
} else if(s.startsWith("AttributeSet:")) {
String[] strSet = s.split(":")[1].split(",");
for(String p : strSet) attributeIDSet.add(p);
} else {
commandName = s;
}
}
Command command = new Command();
command.setName(commandName);
command.setExecutionPercentage(Double.parseDouble(percentage));
command.setAttributeIDGet(attributeIDGet);
command.setAttributeIDSet(attributeIDSet);
command.setDataUsageCriteria(dataCriteria);
command.setUserLoggingCriteria(userLoggingCriteria);
command.setSleepTime(sleepTimeOfCommand);
commands.add(command);
Well, parsers usually are messy once you get down to the lower layers of them :-)
However, one possible improvement, at least in terms of code quality, would be to recognize the fact that your grammar is layered.
By that, I mean every line is an identifying token followed by some properties.
In the case of DurationOfRun, ThreadSize, ExistingRange and NewRange, the properties are relatively simple. Percentage is somewhat more complex but still okay.
I would structure the code as (pseudo-code):
def parseFile (fileHandle):
    while (currentLine = fileHandle.getNextLine()) != EOF:
        if currentLine.beginsWith ("DurationOfRun:"):
            processDurationOfRun (currentLine[14:])
        elsif currentLine.beginsWith ("ThreadSize:"):
            processThreadSize (currentLine[11:])
        elsif currentLine.beginsWith ("ExistingRange:"):
            processExistingRange (currentLine[14:])
        elsif currentLine.beginsWith ("NewRange:"):
            processNewRange (currentLine[9:])
        elsif currentLine.beginsWith ("Percentage:"):
            processPercentage (currentLine[11:])
        else:
            raise error
Then, in each of those processWhatever() functions, you parse the remainder of the line based on the expected format. That keeps your code small and readable and easily changed in future, without having to navigate a morass :-)
For example, processDurationOfRun() simply gets an integer from the remainder of the line:
def processDurationOfRun (line):
    this.durationOfRun = line.parseAsInt()
Similarly, the functions for the two ranges split the string on - and get two integers from the resultant values:
def processExistingRange (line):
    values[] = line.split("-")
    this.existingRangeStart = values[0].parseAsInt()
    this.existingRangeEnd = values[1].parseAsInt()
The processPercentage() function is the tricky one but that is also easily doable if you layer it as well. Assuming those things are always in the same order, it consists of:
an integer;
a literal -;
some sort of textual category; and
a series of key:value pairs.
And even these values within the pairs can be parsed by lower levels, splitting first on commas to get subvalues like Previous/30 and New/70, then splitting each of those subvalues on slashes to get individual items. That way, a logical hierarchy can be reflected in your code.
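As a concrete illustration of that lowest layer, here is a small helper (a sketch; the method name is mine, and it mirrors the Data/UserLogged splitting already done in the question, assuming the usual java.util imports):
// Sketch: "Data:Previous/30,New/70" -> {Previous=30.0, New=70.0}
private static Map<String, Double> parseWeightedPairs(String token) {
    Map<String, Double> result = new LinkedHashMap<String, Double>();
    String[] pairs = token.split(":")[1].split(",");   // "Previous/30", "New/70"
    for (String pair : pairs) {
        String[] parts = pair.split("/");              // name and weight
        result.put(parts[0], Double.parseDouble(parts[1]));
    }
    return result;
}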
Unless you're expecting to be parsing this text files many times per second, or unless it's many megabytes in size, I'd be more concerned about the readability and maintainability of your code than the speed of the parsing.
Mostly gone are the days when we need to wring the last ounce of performance from our code but we still have problems in fixing said code in a timely manner when bugs are found or enhancements are desired.
Sometimes it's preferable to optimise for readability.
I would not worry about performance until I was sure there was actually a performance issue. Regarding the rest of the code, if you won't be adding any new line types I would not worry about it. If you do worry about it, however, a factory design pattern can help you separate the selection of the type of processing needed from the actual processing. It makes adding new line types easier without introducing as much opportunity for error.
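For illustration, one way to get that separation is a registry of handlers keyed by the line prefix (a sketch in the spirit of the factory suggestion; the interface, class name and handler bodies are placeholders):
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Each line prefix maps to a handler, so a new line type only needs a new map entry
// instead of another else-if branch.
public class LineDispatcher {

    interface LineHandler {
        void handle(String rest);   // receives the text after the first ':'
    }

    private final Map<String, LineHandler> handlers = new LinkedHashMap<>();

    public LineDispatcher() {
        handlers.put("DurationOfRun", rest -> System.out.println("duration = " + Long.parseLong(rest)));
        handlers.put("ThreadSize",    rest -> System.out.println("threads = " + Integer.parseInt(rest)));
        // ... one entry per line type, including "Percentage"
    }

    public void dispatch(List<String> lines) {
        for (String line : lines) {
            int colon = line.indexOf(':');
            String prefix = line.substring(0, colon);
            LineHandler handler = handlers.get(prefix);
            if (handler == null) {
                throw new IllegalArgumentException("Unknown line type: " + prefix);
            }
            handler.handle(line.substring(colon + 1));
        }
    }
}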
The newer and more convenient class is Scanner. You just need to set the delimiter, and you can read data in the desired format (nextInt, nextLong) in one go - no need for separate Integer.parseInt / Long.parseLong calls.
Second: split your code into small, reusable pieces. They make the program readable, and you can hide details easily.
Don't hesitate to use a struct-like class for a range, for example. It lets a method return multiple values without boilerplate (getter, setter, ctor).
import java.util.*;
import java.io.*;

public class ReadSampleFile
{
    // struct-like classes:
    class PercentageRow {
        public int percentage;
        public String name;
        public int dataPrevious;
        public int dataNew;
        public int userLoggedTrue;
        public int userLoggedFalse;
        public List<Integer> attributeGet;
        public List<Integer> attributeSet;
    }

    class Range {
        public int from;
        public int to;
    }

    private int readInt (String name, Scanner sc) {
        String s = sc.next ();
        if (s.startsWith (name)) {
            return sc.nextInt ();
        }
        err (name + " expected, found: " + s);
        throw new InputMismatchException (name + " expected");
    }

    private long readLong (String name, Scanner sc) {
        String s = sc.next ();
        if (s.startsWith (name)) {
            return sc.nextLong ();
        }
        err (name + " expected, found: " + s);
        throw new InputMismatchException (name + " expected");
    }

    private Range readRange (String name, Scanner sc) {
        String s = sc.next ();
        if (s.startsWith (name)) {
            Range r = new Range ();
            r.from = sc.nextInt ();
            r.to = sc.nextInt ();
            return r;
        }
        err (name + " expected, found: " + s);
        throw new InputMismatchException (name + " expected");
    }

    private PercentageRow readPercentageLine (Scanner sc) {
        // reuse above methods
        PercentageRow percentageRow = new PercentageRow ();
        percentageRow.percentage = readInt ("Percentage", sc);
        // ...
        return percentageRow;
    }

    public ReadSampleFile () throws FileNotFoundException
    {
        /* I only read from my sourcefile for convenience.
           So I could scroll up to see what's the next entry.
           Don't do this at home. :) The dummy later ...
        */
        Scanner sc = new Scanner (new File ("./ReadSampleFile.java"));
        sc.useDelimiter ("[ \n/,:-]");
        // ... is the comment I had to insert.
        String dummy = sc.nextLine ();
        if (sc.hasNext ()) {
            // see how nice the data structure is reflected
            // by this code:
            long duration = readLong ("DurationOfRun", sc);
            int noOfThreads = readInt ("ThreadSize", sc);
            Range eRange = readRange ("ExistingRange", sc);
            Range nRange = readRange ("NewRange", sc);
            List<PercentageRow> percentageRows = new ArrayList<PercentageRow> ();
            // including the repetition ...
            while (sc.hasNext ()) {
                percentageRows.add (readPercentageLine (sc));
            }
        }
    }

    public static void main (String args[]) throws FileNotFoundException
    {
        new ReadSampleFile ();
    }

    public static void err (String msg)
    {
        System.out.println ("Err:\t" + msg);
    }
}
