I want a java program to count the most frequent elements in a file
Is the problem that your "newCount" is a String instead of an Integer?
String newCount = entry.getValue().toString();
if(topN.containsKey(entry.getKey())){
newCount += topN.get(entry.getKey());
}
With the line Parser parser; you declare a variable of your class Parser, but you do not initialize that variable. Use Parser parser = new Parser(); instead.
There also seem to be quite a few type problems along the lines of
String newCount = entry.getValue().toString();
if(topN.containsKey(entry.getKey())){
newCount += topN.get(entry.getKey());
}
topN.put(entry.getKey(), newCount);
It seems like you want to add up the counts, but that will not work if you convert the Integer to a String first! Also, the key of the Entry will be a value, so topN can never contain that key, as it is a Map of Strings and Actors; and even if it did, how would you add an Actor to an Integer (or a String)? Finally, as others have noted, the put will fail, because neither the key type nor the value type matches the declared types of the Map.
Without knowing what those other classes (Sketch, Value, Actor, etc.) do, it is very hard to give clear advice on how to fix your problem.
topN is declared as Map<String, Actor>. So key must be a String and value must be of type Actor.
In topN.put(entry.getKey(), newCount);, newCount (a String) is not an Actor. Also check if entry.getKey() is a String.
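To make that concrete: if the goal is to add up counts, keep them as Integer all the way through and only format them as Strings for output. A minimal, self-contained sketch of that idea (the tokens here are made up, not the Actor data from the question):
import java.util.HashMap;
import java.util.Map;

public class FrequencyCount {
    public static void main(String[] args) {
        // Hypothetical tokens standing in for whatever is read from the file.
        String[] tokens = {"AA", "AB", "AA", "AF", "AA", "AB"};

        // Keep the counts as Integer the whole way through, so they can be added rather than concatenated.
        Map<String, Integer> counts = new HashMap<>();
        for (String token : tokens) {
            // merge() adds 1 to the existing count, or starts a new key at 1.
            counts.merge(token, 1, Integer::sum);
        }

        System.out.println(counts); // {AA=3, AB=2, AF=1} in some order
    }
}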
public class Parser {
    private BufferedReader bf;
    private static final String ACTOR_MOVIE_FILE = "actormovie.txt";
    private Map<String, Actor> actors;

    // this is the input file size
    int fileSize = ACTOR_MOVIE_FILE.length();

    public Parser() {
        try {
            bf = new BufferedReader(new FileReader(ACTOR_MOVIE_FILE), 32768);
        } catch (FileNotFoundException e) {
            JOptionPane.showMessageDialog(null, "file cannot be located ", "File not found exception", JOptionPane.ERROR_MESSAGE);
        }
        actors = new Hashtable<String, Actor>(1713251);
    }

    /**
     * this reads the data one line at a time
     * @return actors in the hash table, with the name of an actor as the key
     *         and the actor object as the value
     */
    public Map<String, Actor> readLines() {
        String line = " ";
        while (true) {
            try {
                line = bf.readLine();
            } catch (IOException e) {
                JOptionPane.showMessageDialog(null, "deadlock file not in proper format", "we have error reading the file", JOptionPane.ERROR_MESSAGE);
            }
            if (line == null) {
                break;
            }
            String[] tokens = line.split("/");
            assemblyLines(tokens);
        }
        try {
            bf.close();
        } catch (IOException e) {
        }
        return actors;
    }

    /**
     * from a line we parse the tokens into the data structures. The first token describes
     * a film, and a Film object is created from it. The remaining tokens are actor names,
     * and Actor objects are created from them.
     * The actor table controls entry space: if an actor has been seen before, the existing
     * object is reused and updated; otherwise a new object is created and added to the table.
     * @param stringTokenizer the line of the text file divided into individual components
     */
    public void assemblyLines(String[] stringTokenizer) {
        Film film = new Film(stringTokenizer[0]);
        for (int i = 1; i < stringTokenizer.length; i++) {
            Actor actor;
            String actorName = stringTokenizer[i];
            if (actors.containsKey(actorName)) {
                actor = actors.get(actorName);
            } else {
                actor = new Actor(actorName);
                actors.put(actorName, actor);
            }
            film.addActor(actor);
            actor.addFilm(film);
        }
    }
}
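And since the original goal was the most frequent elements: once the counts live in a Map<String, Integer>, picking the top N is just a sort over the entry set. A sketch, deliberately independent of the Actor/Film classes above:
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TopN {
    // Returns the n entries with the highest counts, most frequent first.
    static List<Map.Entry<String, Integer>> topN(Map<String, Integer> counts, int n) {
        return counts.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue(Comparator.reverseOrder()))
                .limit(n)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = Map.of("AA", 3, "AB", 2, "AF", 1);
        System.out.println(topN(counts, 2)); // [AA=3, AB=2]
    }
}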
I am a beginner programmer so please excuse any technically incorrect statements/incorrect use of terminology.
I am trying to make a program that reduces CNF SAT in DIMACS format to 3SAT, then 3SAT to 3-Graph-Coloring, and then 3-Graph-Coloring back to SAT. The idea is to make it circular so that the output from one reduction can be piped straight into the input of another; i.e. if you reduce a CNF to 3SAT, the program should automatically reduce the 3SAT result to graph coloring afterwards if the user specifies it.
I have chosen to represent CNFs in a LinkedHashMap in a class called CNFHandler. The LinkedHashMap is keyed by the DIMACS-formatted CNF File, and the value is the corresponding CNF object (which contains an ArrayList of Clause objects).
In my CNFHandler class, I have a reduce method, and it is in this method that I am trying to initiate my piping functionality:
package CNFHandler;

import SAT_to_3SAT_Reducer.Reducer;
import java.io.*;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

public class CNFHandler {
    private Map<File, CNF> allCNFs = new LinkedHashMap<>();
    private CNFReader reader;
    private Reducer reducer = new Reducer();

    // PIPES
    private Optional<ObjectInputStream> inputPipe;
    private Optional<ObjectOutputStream> outputPipe;

    // Instantiate Pipes
    public void setInputPipe(ObjectInputStream inputStream) {
        this.inputPipe = Optional.of(inputStream);
    }

    public void setOutputPipe(ObjectOutputStream outputStream) {
        this.outputPipe = Optional.of(outputStream);
    }

    //...
    // Skipping lines for brevity
    //...

    public void reduce(String filePath) {
        File path = new File(filePath);
        addCNF(filePath);
        CNF result = reducer.reduce(allCNFs.get(path));
        if (!outputPipe.isPresent()) {
            System.out.println(result.toDIMACS());
        } else {
            try {
                outputPipe.get().writeObject(result);
                outputPipe.get().close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
When I try to run "writeObject" (within the try block in the reduce() method) the program doesn't seem to go past that point. I've tried using breakpoints in IntelliJ to see what's going on, but the best I could figure out was as follows:
A Native method called waitForReferencePendingList() seems to be stuck waiting for something, and that's why it won't go past the writeObject method
IntelliJ tells me "Connected to the target VM, address: '127.0.0.1:51236', transport: 'socket'" but I'm not sure why because I'm not using Sockets anywhere in my program
Here is the code for my Main method, where I instantiate the ObjectOutputStreams:
import CNFHandler.CNFHandler;
import GraphHandler.GraphHandler;
import java.io.*;

public class Main {
    public static void main(String[] args) {
        try {
            String inFile = "short_cnf.cnf";

            PipedOutputStream _S_3S_OUT_PIPE_STREAM = new PipedOutputStream();
            PipedInputStream _S_3S_IN_PIPE_STREAM = new PipedInputStream();
            _S_3S_IN_PIPE_STREAM.connect(_S_3S_OUT_PIPE_STREAM);
            ObjectOutputStream _S_3S_OUT_OBJECT_STREAM = new ObjectOutputStream(_S_3S_OUT_PIPE_STREAM);
            ObjectInputStream _S_3S_IN_OBJEECT_STREAM = new ObjectInputStream(_S_3S_IN_PIPE_STREAM);

            CNFHandler handler = new CNFHandler();
            handler.setOutputPipe(_S_3S_OUT_OBJECT_STREAM);
            handler.reduce(inFile);

            PipedOutputStream _3S_G_OUT = new PipedOutputStream();
            PipedInputStream _3S_G_IN = new PipedInputStream();
            _3S_G_IN.connect(_3S_G_OUT);
            ObjectOutputStream _3S_G_OUT_STREAM = new ObjectOutputStream(_3S_G_OUT);
            ObjectInputStream _3S_G_IN_STREAM = new ObjectInputStream(_3S_G_IN);

            GraphHandler graphHandler = new GraphHandler();
            graphHandler.setInputPipe(_S_3S_IN_OBJEECT_STREAM);
            graphHandler.setOutputPipe(_3S_G_OUT_STREAM);
            graphHandler.reduce();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
The other weird thing is that writeObject() seems to work if I use a different kind of object: for example, if I instantiate a String inside the call in the same place it's being made in reduce(), or if I instantiate a new CNF object there, it WILL write the object. But I can't do it that way, because I need to pass along the values of the actual result object (the clauses, etc.), so I don't know what to do.
This is my CNF class, in brief:
package CNFHandler;
import java.io.Serializable;
import java.util.*;
public class CNF implements Serializable {
protected int numVars;
protected int numClauses;
protected String fileName;
// store all variables with no duplicates
protected Set<String> allLiterals = new HashSet<>();
protected ArrayList<Clause> clauses = new ArrayList<>();
/*
for printing to DIMACS: keep track of the max # of
literals that are needed to print a clause
for example if all clauses in the CNF file contain
2 literals, and only one contains 3 literals
then the literalsize will be 3 to ensure things
are printed with proper spacing
*/
protected int literalSize = -20;
/*
keep track of the label referring to the highest #ed literal
just in case they are not stored in order -- this way when we perform
reductions we can just add literals to the end and be sure we are not
duplicating any
*/
protected int highestLiteral = -10;
public CNF(String fileName) {
this.fileName = fileName;
}
protected void addClause(String[] inputs) {
try {
Clause clauseToAdd = new Clause();
// add literals to the hashset, excluding dashes that indicate negative literals
for (int i = 0; i < inputs.length - 1; i++) {
// removing whitespace from the input
String toAdd = inputs[i].replaceAll("\\s+", "");
// in case the variable is false (has a dash before the int), remove the dash to standardize storage
String moddedToAdd = inputs[i].replaceAll("[-]*", "");
/*
if an unknown variable is in the stream, reject it.
(we're basically checking here if the variable set is full,
and if it is and the variable we're trying to add is new,
then it can't be added)
*/
if ((!allLiterals.contains(moddedToAdd)) && (allLiterals.size() == numVars) && (moddedToAdd.trim().length() > 0)) {
throw new FailedCNFException();
}
// add the original literal (with its dash kept, not the dash-stripped copy), so a negative literal stays negative
clauseToAdd.addLiteral(toAdd);
if (!allLiterals.contains(moddedToAdd) && !moddedToAdd.equalsIgnoreCase("")) {
allLiterals.add(moddedToAdd);
/*
change the highestLiteral value if the literal being added is "bigger" than the others that have been seen
*/
if(highestLiteral < Integer.parseInt(moddedToAdd)) {
highestLiteral = Integer.parseInt(moddedToAdd);
}
}
}
if (clauseToAdd.getNumberOfLiterals() > literalSize) {
literalSize = clauseToAdd.getNumberOfLiterals();
}
clauses.add(clauseToAdd);
} catch (FailedCNFException e) {
System.out.println("The number of variables that have been introduced is too many!");
}
}
public void makeClause(String[] inputs) {
try {
if (inputs[inputs.length - 1].equals("0")) {
addClause(inputs);
} else throw new FailedCNFException();
} catch (FailedCNFException f) {
System.out.println("There is no 0 at the end of this line: ");
for (String s : inputs ) {
System.out.print(s + " ");
}
System.out.println();
}
}
public void initializeClauses (String[] inputs) {
setNumVars(inputs[2]);
setNumClauses(inputs[3]);
}
public String toDIMACS () {
String toReturn = "p cnf " + getNumVars() + " " + getNumClauses() + "\n";
for(int i = 0; i < clauses.size()-1; i++){
Clause c = clauses.get(i);
toReturn += c.toDIMACS(literalSize) + "\n";
}
toReturn += clauses.get(clauses.size()-1).toDIMACS(literalSize);
return toReturn;
}
/*
Override tostring method to print clauses in human-readable format
*/
@Override
public String toString () {
if(highestLiteral != -10) {
String toReturn = "(";
for (int i = 0; i < clauses.size() - 1; i++) {
Clause c = clauses.get(i);
toReturn += c + "&&";
}
toReturn += clauses.get(clauses.size() - 1).toString() + ")";
return toReturn;
} else {
return "Add some clauses!";
}
}
public String toString (boolean addFile) {
String toReturn = "";
if (addFile) {
toReturn += "src/test/ExampleCNFs/" + fileName + ".cnf: \n";
}
toReturn += "(";
for(int i = 0; i < clauses.size()-1; i++){
Clause c = clauses.get(i);
toReturn += c + "&&";
}
toReturn += clauses.get(clauses.size()-1).toString() + ")";
return toReturn;
}
//=============================================================================
// HELPER FUNCTIONS
//=============================================================================
public void setNumVars(String vars) {
numVars = Integer.parseInt(vars);
}
public void setNumClauses(String clauses) {
numClauses = Integer.parseInt(clauses);
}
public Clause getClause(int index) {
return clauses.get(index);
}
public void addLiteral(int newLiteral) {
allLiterals.add(String.valueOf(newLiteral));
}
public void addLiterals(Set<String> newLiterals) {
allLiterals.addAll(newLiterals);
}
public void addClauses(ArrayList<Clause> toAdd, int maxLiterals) {
clauses.addAll(toAdd);
numClauses += toAdd.size();
// update literalsize if need be
if (maxLiterals > literalSize) {
literalSize = maxLiterals;
}
}
//=============================================================================
// GETTERS AND SETTERS
//=============================================================================
public void setNumVars(int numVars) {
this.numVars = numVars;
}
public void setNumClauses(int numClauses) {
this.numClauses = numClauses;
}
public int getNumVars() {
return numVars;
}
public int getNumClauses() {
return numClauses;
}
public ArrayList<Clause> getClauses() {
return clauses;
}
public Set<String> getAllLiterals() {
return allLiterals;
}
//
// LITERAL SIZE REPRESENTS THE MAXIMUM NUMBER OF LITERALS A CLAUSE CAN CONTAIN
//
public int getLiteralSize() {
return literalSize;
}
public void setLiteralSize(int literalSize) {
this.literalSize = literalSize;
}
public String getFilePath() {
return "src/test/ExampleCNFs/" + fileName + ".cnf";
}
public String getFileName() {
return fileName;
}
public void setFileName(String fileName) {
this.fileName = fileName;
}
//
// HIGHEST LITERAL REPRESENTS THE HIGHEST NUMBER USED TO REPRESENT A LITERAL
// IN THE DIMACS CNF FORMAT
//
public int getHighestLiteral() {
return highestLiteral;
}
public void setHighestLiteral(int highestLiteral) {
this.highestLiteral = highestLiteral;
}
public void setHighestLiteral(String highestLiteral) {
this.highestLiteral = Integer.parseInt(highestLiteral);
}
}
Can someone give me some insight as to what's going on here, please? Thank you very much.
First of all, neither of the symptoms is actually relevant to your question:
A Native method called waitForReferencePendingList() seems to be stuck waiting for something.
You appear to have found an internal thread that is dealing with the processing of Reference objects following a garbage collection. It is normal for it to be waiting there.
IntelliJ tells me "Connected to the target VM, address: '127.0.0.1:51236', transport: 'socket'"
That is Intellij saying that it has connected to the debug agent in the JVM that is running your application. Again, this is normal.
If you are trying to find the cause of a problem via a debugger, you need to find the application thread that is stuck. Then drill down to the point where it is actually stuck and look at the corresponding source code to figure out what it is doing. In this case, you need to look at the standard Java SE library source code for your platform. Randomly looking for clues rarely works ...
Now to your actual problem.
Without a stacktrace or a minimal reproducible example, it is not possible to say with certainty what is happening.
However, I suspect that writeObject is simply stuck waiting for something to read from the "other end" of the pipeline. It looks like you have set up a PipedInputStream / PipedOutputStream pair. This has only a limited amount of buffering. If the "writer" writes too much to the output stream, it will block until the "reader" has read some data from the input stream.
The other weird thing is that the writeObject() method seems to work if I use a different kind of object ...
The other kind of object probably has a smaller serialization which fits into the available buffer space.
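One common way out of this (a sketch only, not tied to the CNF class) is to drain the pipe from a separate thread, so writeObject() on the writing side can never fill the pipe's buffer (roughly 1 KB by default) with nobody reading:
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class PipeSketch {
    public static void main(String[] args) throws Exception {
        PipedOutputStream pipeOut = new PipedOutputStream();
        PipedInputStream pipeIn = new PipedInputStream(pipeOut);

        // The reader runs on its own thread, so writeObject() on the main thread
        // cannot block forever waiting for buffer space.
        Thread reader = new Thread(() -> {
            try (ObjectInputStream in = new ObjectInputStream(pipeIn)) {
                Object received = in.readObject();
                System.out.println("received: " + received);
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        reader.start();

        try (ObjectOutputStream out = new ObjectOutputStream(pipeOut)) {
            out.writeObject("a stand-in for the serialized CNF"); // hypothetical payload
        }
        reader.join();
    }
}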
I am reading a txt file and storing the data in a Hashtable, but I can't get the correct output. The txt file looks like this (partial):
[image: this is part of my data]
I want to store columns 1 and 2 as the key (a String) in the hashtable, and columns 3 and 4 as the value (an ArrayList).
My code below:
private Hashtable<String, ArrayList<String[]>> readData() throws Exception {
    BufferedReader br = new BufferedReader(new FileReader("MyGridWorld.txt"));
    br.readLine();

    ArrayList<String[]> value = new ArrayList<String[]>();
    String[] probDes = new String[2];
    String key = "";

    // read file line by line
    String line = null;
    while ((line = br.readLine()) != null && !line.equals(";;")) {
        //System.out.println("line ="+line);
        String source;
        String action;
        // split by tab
        String[] splited = line.split("\\t");
        source = splited[0];
        action = splited[1];
        key = source + "," + action;
        probDes[0] = splited[2];
        probDes[1] = splited[3];
        value.add(probDes);
        hashTableForWorld.put(key, value);
        System.out.println("hash table is like this:" + hashTableForWorld);
    }
    br.close();
    return hashTableForWorld;
}
The output looks like this: it is all printed as one very long line.
I think maybe the hashtable is broken, but I don't know why. Thank you for reading my problem.
The first thing we need to establish is that you have a really obvious XY-Problem, in that "what you need to do" and "how you're trying to solve it" are completely at odds with each other.
So let's go back to the original problem and try to work out what we need first.
As best as I can determine, source and action are connected, in that they represent queryable "keys" to your data structure, and probability, destination, and reward are queryable "outcomes" in your data structure. So we'll start by creating objects to represent those two concepts:
public class SourceAction implements Comparable<SourceAction> {
    public final String source;
    public final String action;

    public SourceAction() {
        this("", "");
    }

    public SourceAction(String source, String action) {
        this.source = source;
        this.action = action;
    }

    public int compareTo(SourceAction sa) {
        int comp = source.compareTo(sa.source);
        if (comp != 0) return comp;
        return action.compareTo(sa.action);
    }

    public boolean equals(SourceAction sa) {
        return source.equals(sa.source) && action.equals(sa.action);
    }

    public String toString() {
        return source + ',' + action;
    }
}
public class Outcome {
    public String probability; // You can use double if you've written code to parse the probability
    public String destination;
    public String reward; // You can use double if you've written code to parse the reward

    public Outcome() {
        this("", "", "");
    }

    public Outcome(String probability, String destination, String reward) {
        this.probability = probability;
        this.destination = destination;
        this.reward = reward;
    }

    public boolean equals(Outcome o) {
        return probability.equals(o.probability) && destination.equals(o.destination) && reward.equals(o.reward);
    }

    public String toString() {
        return probability + ',' + destination + ',' + reward;
    }
}
So then, given these objects, what sort of Data Structure can properly encapsulate the relationship between these objects, given that a SourceAction seems to have a One-To-Many relationship to Outcome objects? My suggestion is that a Map<SourceAction, List<Outcome>> represents this relationship.
private Map<SourceAction, List<Outcome>> readData() throws Exception {
It is possible to use a Hash Table (in this case, HashMap) to contain these objects, but I'm trying to keep the code as simple as possible, so we're going to stick to the more generic interface.
Then, we can reuse the logic you used in your original code to insert values into this data structure, with a few tweaks.
private Map<SourceAction, List<Outcome>> readData() {
    // I'm using a TreeMap because that makes the implementation simpler. If you absolutely
    // need to use a HashMap, then make sure you override hashCode() and equals(Object) for SourceAction.
    Map<SourceAction, List<Outcome>> dataStructure = new TreeMap<>();

    // We're using a try-with-resources block to eliminate the later call to close the reader
    try (BufferedReader br = new BufferedReader(new FileReader("MyGridWorld.txt"))) {
        br.readLine(); // Skip the first line because it's just a header

        // read file line by line
        String line = null;
        while ((line = br.readLine()) != null && !line.equals(";;")) {
            // split by tab
            String[] splited = line.split("\\t");
            SourceAction sourceAction = new SourceAction(splited[0], splited[1]);
            Outcome outcome = new Outcome(splited[2], splited[3], splited[4]);
            if (dataStructure.containsKey(sourceAction)) {
                // Entry already found; we're just going to add this outcome to the already
                // existing list.
                dataStructure.get(sourceAction).add(outcome);
            } else {
                List<Outcome> outcomes = new ArrayList<>();
                outcomes.add(outcome);
                dataStructure.put(sourceAction, outcomes);
            }
        }
    } catch (IOException e) {
        // Do whatever, or rethrow the exception
    }
    return dataStructure;
}
Then, if you want to query for all the outcomes associated with a given source + action, you need only construct a SourceAction object and query the Map for it.
Map<SourceAction, List<Outcome>> actionMap = readData();
List<Outcome> outcomes = actionMap.get(new SourceAction("(1,1)", "Up"));
assert(outcomes != null);
assert(outcomes.size() == 3);
assert(outcomes.get(0).equals(new Outcome("0.8", "(1,2)", "-0.04")));
assert(outcomes.get(1).equals(new Outcome("0.1", "(2,1)", "-0.04")));
assert(outcomes.get(2).equals(new Outcome("0.1", "(1,1)", "-0.04")));
This should yield the functionality you need for your problem.
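If you do want the HashMap mentioned in the comment above rather than a TreeMap, the key class also needs hashCode() plus an equals(Object) override that agrees with it (an equals(SourceAction) overload alone is never called by HashMap). A sketch of SourceAction with those additions:
import java.util.Objects;

public class SourceAction implements Comparable<SourceAction> {
    public final String source;
    public final String action;

    public SourceAction(String source, String action) {
        this.source = source;
        this.action = action;
    }

    public int compareTo(SourceAction sa) {
        int comp = source.compareTo(sa.source);
        if (comp != 0) return comp;
        return action.compareTo(sa.action);
    }

    // Override equals(Object), not equals(SourceAction), so HashMap actually uses it.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof SourceAction)) return false;
        SourceAction other = (SourceAction) o;
        return source.equals(other.source) && action.equals(other.action);
    }

    // hashCode must be consistent with equals for HashMap lookups to work.
    @Override
    public int hashCode() {
        return Objects.hash(source, action);
    }

    @Override
    public String toString() {
        return source + ',' + action;
    }
}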
You should change your logic for adding to your hashtable to check for the key you create. If the key exists, then grab your array list of arrays that it maps to and add your array to it. Currently you will overwrite the data.
Try this
if(hashTableForWorld.containsKey(key))
{
value = hashTableForWorld.get(key);
value.add(probDes);
hashTableForWorld.put(key, value);
}
else
{
value = new ArrayList<String[]>();
value.add(probDes);
hashTableForWorld.put(key, value);
}
Then to print the contents try something like this
for (Map.Entry<String, ArrayList<String[]>> entry : hashTableForWorld.entrySet()) {
    String key = entry.getKey();
    ArrayList<String[]> value = entry.getValue();
    System.out.println("Key: " + key + " Value: ");
    for (int i = 0; i < value.size(); i++) {
        System.out.print("Array " + i + ": ");
        for (String val : value.get(i)) {
            System.out.print(val + " :: ");
        }
        System.out.println();
    }
}
Hashtable and ArrayList (and the other collections) do not make a copy of the key and value, so all the values you are storing are the same probDes array that you allocate once at the beginning. (Note that it is normal for a String[] to print in a cryptic form; you would have to format it yourself, but you can still see that it is the very same cryptic thing every time.)
What is sure is that you should allocate a new probDes for each element inside the loop.
Based on your data you could, in my opinion, work with a plain array as the value; there is no real use for the ArrayList.
And the same applies to value: it has to be allocated separately upon encountering a new key:
private Hashtable<String, ArrayList<String[]>> readData() throws Exception {
    try (BufferedReader br = new BufferedReader(new FileReader("MyGridWorld.txt"))) {
        br.readLine();
        Hashtable<String, ArrayList<String[]>> hashTableForWorld = new Hashtable<>();

        // read file line by line
        String line = null;
        while ((line = br.readLine()) != null && !line.equals(";;")) {
            //System.out.println("line ="+line);
            String source;
            String action;
            // split by tab
            String[] split = line.split("\\t");
            source = split[0];
            action = split[1];
            String key = source + "," + action;

            String[] probDesRew = new String[3];
            probDesRew[0] = split[2];
            probDesRew[1] = split[3];
            probDesRew[2] = split[4];

            ArrayList<String[]> value = hashTableForWorld.get(key);
            if (value == null) {
                value = new ArrayList<>();
                hashTableForWorld.put(key, value);
            }
            value.add(probDesRew);
        }
        return hashTableForWorld;
    }
}
Besides relocating the variables to their place of actual use, the return value is also created locally, and the reader is wrapped in a try-with-resources construct, which ensures that it gets closed even if an exception occurs (see the official tutorial).
I have a textfile as such:
type = "Movie"
year = 2014
Producer = "John"
title = "The Movie"
type = "Magazine"
year = 2013
Writer = "Alfred"
title = "The Magazine"
What I'm trying to do is, first, search the file for the type, in this case "Movie" or "Magazine".
If it's a Movie, store all the values below it, i.e.
set the year variable to 2014, Producer to "John", etc.
If it's a Magazine type, store all the variables below it as well, separately.
What I have so far is this:
public static void Parse(String inPath) {
    String value;
    try {
        Scanner sc = new Scanner(new FileInputStream("resources/input.txt"));
        while (sc.hasNextLine()) {
            String line = sc.nextLine();
            if (line.startsWith("type")) {
                value = line.substring(8 - line.length() - 1);
                System.out.println(value);
            }
        }
    } catch (FileNotFoundException ex) {
        Logger.getLogger(LibrarySearch.class.getName()).log(Level.SEVERE, null, ex);
    }
}
However, I'm already having an issue in simply printing out the first type, which is "Movie". My program seems to skip that one, and print out "Magazine" instead.
For this problem solely, is it because line.startsWith("type") checks whether the current line in the file starts with type, but since the String called line is set to the next line, it skips the first "type"?
Also, what would be the best approach to parsing the actual values (right side of equal sign) below the type "Movie" and "Magazine" respectively?
I recommend you try the following:
BufferedReader reader = new BufferedReader(new FileReader(new File("resources/input.txt")));
String line;
while ((line = reader.readLine()) != null) {
    if (line.contains("=")) {
        String[] bits = line.split("=");
        String name = bits[0].trim();
        String value = bits[1].trim();
        if (name.equals("type")) {
            // Make a new object
        } else if (name.equals("year")) {
            // Store in the current object
        }
    } else {
        // It's a new line, so you should make a new object to store stuff in.
    }
}
In your code, the substring looks suspect to me. If you do a split based on the equals sign, then that should be much more resilient.
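Building on that split idea, one possible way to collect the values under each type (a sketch only; the quote-stripping and the per-record Map are assumptions, not part of the original code) is to gather key/value pairs for the current record and start a fresh one whenever a new type line appears:
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RecordParser {
    public static void main(String[] args) throws IOException {
        List<Map<String, String>> records = new ArrayList<>();
        Map<String, String> current = null;

        try (BufferedReader reader = new BufferedReader(new FileReader("resources/input.txt"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (!line.contains("=")) {
                    continue; // blank line between records
                }
                String[] bits = line.split("=", 2);
                String name = bits[0].trim();
                // Strip surrounding quotes if present, e.g. "Movie" -> Movie
                String value = bits[1].trim().replaceAll("^\"|\"$", "");

                if (name.equals("type")) {
                    current = new HashMap<>(); // a "type" line starts a new record
                    records.add(current);
                }
                if (current != null) {
                    current.put(name, value);
                }
            }
        }

        for (Map<String, String> record : records) {
            System.out.println(record.get("type") + " -> " + record);
        }
    }
}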
I have a Vehicle class which contains all information about Vehicle objects including get and set methods. There is also a Showroom class which maintains a list of all of the Vehicle objects, with methods to add/delete and scroll through the list.
In my main (a separate class called VehicleDriverClass) I am trying to use I/O to write Vehicle data to a file and read Vehicle data in from a file. I can write to a file fine. I am using Notepad, so it is a .txt file I read from. The problem I am having is with how to terminate the end of a line when reading from the file. Here is the constructor for the Vehicle class, so you know the parameters.
public Vehicle(String man, String mod, String VIN, String dateOfMan, char taxBand, int costOfVehicle)
{
    this.manufacturer = man;
    this.model = mod;
    this.VIN = VIN;
    this.dateOfManufacture = dateOfMan;
    this.taxBand = taxBand;
    this.costOfVehicle = costOfVehicle;
}
This is what I have for the input method at the moment (without trying to create the object, just reading from the file). The Showroom s being passed to it is for use later, when I create the Vehicle object and add it to the showroom.
// code replaced below.
With this implementation, when dataFromFile is output to the console it is all on one line rather than on separate lines. Does readLine() not terminate the line when '\n' is read in?
Here is how my data is stored in the input file.
Fordtest\n Focus\n frank\n ioCheck\n 09/01/1989\n 23/11/2013\n true\n d\n 1995\n
So for now, how do I get the line to terminate? So that I can then implement the creation of an object from this.
EDIT: I/O is working now. I am now having trouble with the constructor for my Vehicle object needing the data types char and int for the last two variables. With the current method they are in a String array.
I have removed the code from above and added the new implementation below.
public static void addNewVehicleFromFile(Showroom s)
{
    String dataFromFile;
    String[] tokens = null;
    try
    {
        File fileReader = new File("AddNewVehicleFromFile.txt");
        FileReader fr = new FileReader(fileReader);
        BufferedReader br = new BufferedReader(fr);
        while ((dataFromFile = br.readLine()) != null)
        {
            tokens = dataFromFile.split("~");
        }
        System.out.println(Arrays.toString(tokens));
        Vehicle inputVehicle = new Vehicle(tokens[0], tokens[1], tokens[2], tokens[3],
                tokens[4], tokens[5]);
        /*
        Error above here with these two. tokens[4] should be a char and [5] an int
        */
        s.addVehicle(inputVehicle);
        System.out.println("addNewVehicleFromFile Complete");
    }
    catch (FileNotFoundException fnfe)
    {
        System.out.println("File not found exception: " + fnfe.toString());
    }
    catch (IOException ioe)
    {
        System.out.println("I/O exception: " + ioe.toString());
    }
}
Should I be writing my own toChar and toInt methods for these two variables, or parsing to int or similar?
I think you'll do better if you change your input data format. This is what XML and JSON were born for. If you must persist with your current arrangement, change the delimiter between data elements to something like a tilde '~' instead of \n.
So your input looks like this:
Fordtest~Focus~frank~ioCheck~09/01/1989~23/11/2013~true~d~1995
It's easy to parse now:
String [] tokens = data.split("~");
Write yourself some factory methods to create Vehicles:
public class VehicleFactory {
    private static final VehicleFactory INSTANCE = new VehicleFactory();

    private VehicleFactory() {}

    public static VehicleFactory getInstance() { return INSTANCE; }

    public static Vehicle createVehicle(String data) {
        Vehicle value = null;
        String[] tokens = data.split("~");
        if ((tokens != null) && (tokens.length > X)) {
            // Map String to int or Date here
            value = new Vehicle(tokens[0], tokens[1], tokens[2], tokens[3], tokens[4], tokens[5]);
        }
        return value;
    }

    public static List<Vehicle> createVehicles(File f) {
        List<Vehicle> values = new ArrayList<Vehicle>();
        // implementation left for you
        return values;
    }
}
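On the trailing char/int question: there is no need for hand-written toChar/toInt helpers; charAt(0) and Integer.parseInt are enough. A sketch of the conversion (the six-field sample line is hypothetical, trimmed to match the constructor's parameter list):
public class VehicleParsing {
    public static void main(String[] args) {
        // Hypothetical 6-field line matching the Vehicle constructor's parameters.
        String data = "Fordtest~Focus~frank~ioCheck~d~1995";
        String[] tokens = data.split("~");

        // Convert the last two fields before calling the Vehicle constructor.
        char taxBand = tokens[4].trim().charAt(0);              // "d"    -> 'd'
        int costOfVehicle = Integer.parseInt(tokens[5].trim()); // "1995" -> 1995

        // With the real Vehicle class on the classpath:
        // Vehicle v = new Vehicle(tokens[0], tokens[1], tokens[2], tokens[3], taxBand, costOfVehicle);
        System.out.println(taxBand + " / " + costOfVehicle);
    }
}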
readLine() terminates a line when an actual newline character is read, i.e. the character that \n denotes in Java source code. In most text editors you get that by pressing Enter. So use \n only when expressing a newline inside a Java string; when creating the file by hand, just put each value on its own line:
Fordtest
Focus
frank
ioCheck
09/01/1989
23/11/2013
true
d
1995
To speed up lookups into a multi-record file, I wish to store its elements in a String array of arrays so that I can search for a string like "AF" among similar strings only ("AA", "AB", ..., "AZ") and not in the whole file.
The original file is like this:
AA
ABC
AF
(...)
AP
BE
BEND
(...)
BZ
(...)
SHORT
VERYLONGRECORD
ZX
which I want to translate into
AA ABC AF (...) AP
BE BEND (...) BZ
(...)
SHORT
VERYLONGRECORD
ZX
I don't know how many records there are or how many "elements" each "row" will have, as the source file can change over time (even though, after being read into memory, the array is only read).
I tried this solution:
in a class I defined the string array of (string) arrays, without defining its dimensions
public static String[][] tldTabData;
then, in another class, I read the file:
public static void tldLoadTable() {
    String rec = null;
    int previdx = 0;
    int rowidx = 0;
    // this will hold each row
    ArrayList<String> mVector = new ArrayList<String>();
    FileInputStream fStream;
    BufferedReader bufRead = null;

    try {
        fStream = new FileInputStream(eVal.appPath + eVal.tldTabDataFilename);
        // Use DataInputStream to read binary NOT text.
        bufRead = new BufferedReader(new InputStreamReader(fStream));
    } catch (Exception er1) {
        /* if we fail the 1st try, maybe we're working inside some "package" (e.g. debugging),
         * so we'll try a second time with a modified path (e.g. adding "bin\") instead of
         * raising an error and exiting.
         */
        try {
            fStream = new FileInputStream(eVal.appPath +
                    "bin" + File.separatorChar + eVal.tldTabDataFilename);
            // Use DataInputStream to read binary NOT text.
            bufRead = new BufferedReader(new InputStreamReader(fStream));
        } catch (FileNotFoundException er2) {
            System.err.println("Error: " + er2.getMessage());
            er2.printStackTrace();
            System.exit(1);
        }
    }

    try {
        while ((rec = bufRead.readLine()) != null) {
            // strip comments and short (empty) rows
            if (!rec.startsWith("#") && rec.length() > 1) {
                // work with uppercase only (maybe unuseful)
                //rec.toUpperCase();
                // use the 1st char as a row index
                rowidx = rec.charAt(0);
                // if the row changes (e.g. A->B) and it is not the 1st line we read
                if (previdx != rowidx && previdx != 0) {
                    // store the (completed) collection into the Array
                    eVal.tldTabData[previdx] = mVector.toArray(new String[mVector.size()]);
                    // clear the collection itself
                    mVector.clear();
                    // and restart to fill it from scratch
                    mVector.add(rec);
                } else {
                    // continue filling the collection
                    mVector.add(rec);
                }
                // and sync the indexes
                previdx = rowidx;
            }
        }
        streamIn.close();
        // globally flag the table as loaded
        eVal.tldTabLoaded = true;
    } catch (Exception er2) {
        System.err.println("Error: " + er2.getMessage());
        er2.printStackTrace();
        System.exit(1);
    }
}
When executing the program, it correctly accumulates the strings into mVector, but when trying to copy them into eVal.tldTabData I get a NullPointerException.
I bet I have to create/initialize the array at some point, but I'm having problems figuring out where and how.
First time I'm coding in Java... helloworld apart. :-)
You can use a Map to store your strings per row; here is something like what you'll need:
// Assuming that mVector already holds all your input strings
Map<String, List<String>> map = new HashMap<String, List<String>>();
for (String str : mVector) {
    List<String> storedList;
    if (map.containsKey(str.substring(0, 1))) {
        storedList = map.get(str.substring(0, 1));
    } else {
        storedList = new ArrayList<String>();
        map.put(str.substring(0, 1), storedList);
    }
    storedList.add(str);
}

Set<String> unOrdered = map.keySet();
List<String> orderedIndexes = new ArrayList<String>(unOrdered);
Collections.sort(orderedIndexes);

for (String key : orderedIndexes) { // get strings for every row
    List<String> values = map.get(key);
    for (String value : values) { // writing strings on the same row
        System.out.print(value + "\t"); // change this to writing to some file
    }
    System.out.println(); // add a new line at the end of the row
}