I am reading a txt file and storing the data in a hashtable, but I can't get the correct output. The txt file looks like this (part of it, attached image):
this is part of my data
I want to store column 1 and column 2 as the key (String type) in the hashtable, and column 3 and column 4 as the value (ArrayList type).
My code below:
private Hashtable<String, ArrayList<String[]>> readData() throws Exception {
    BufferedReader br = new BufferedReader(new FileReader("MyGridWorld.txt"));
    br.readLine();
    ArrayList<String[]> value = new ArrayList<String[]>();
    String[] probDes = new String[2];
    String key = "";
    //read file line by line
    String line = null;
    while ((line = br.readLine()) != null && !line.equals(";;")) {
        //System.out.println("line ="+line);
        String source;
        String action;
        //split by tab
        String[] splited = line.split("\\t");
        source = splited[0];
        action = splited[1];
        key = source + "," + action;
        probDes[0] = splited[2];
        probDes[1] = splited[3];
        value.add(probDes);
        hashTableForWorld.put(key, value);
        System.out.println("hash table is like this:" + hashTableForWorld);
    }
    br.close();
    return hashTableForWorld;
}
The output looks like this:
it's one very long line
I think maybe the hashtable is broken, but I don't know why. Thank you for reading my problem.
The first thing we need to establish is that you have a really obvious XY-Problem, in that "what you need to do" and "how you're trying to solve it" are completely at odds with each other.
So let's go back to the original problem and try to work out what we need first.
As best as I can determine, source and action are connected, in that they represent queryable "keys" to your data structure, and probability, destination, and reward are queryable "outcomes" in your data structure. So we'll start by creating objects to represent those two concepts:
public class SourceAction implements Comparable<SourceAction> {
    public final String source;
    public final String action;

    public SourceAction() {
        this("", "");
    }

    public SourceAction(String source, String action) {
        this.source = source;
        this.action = action;
    }

    public int compareTo(SourceAction sa) {
        int comp = source.compareTo(sa.source);
        if (comp != 0) return comp;
        return action.compareTo(sa.action);
    }

    public boolean equals(SourceAction sa) {
        return source.equals(sa.source) && action.equals(sa.action);
    }

    public String toString() {
        return source + ',' + action;
    }
}
public class Outcome {
    public String probability; //You can use double if you've written code to parse the probability
    public String destination;
    public String reward; //You can use double if you've written code to parse the reward

    public Outcome() {
        this("", "", "");
    }

    public Outcome(String probability, String destination, String reward) {
        this.probability = probability;
        this.destination = destination;
        this.reward = reward;
    }

    public boolean equals(Outcome o) {
        return probability.equals(o.probability) && destination.equals(o.destination) && reward.equals(o.reward);
    }

    public String toString() {
        return probability + ',' + destination + ',' + reward;
    }
}
So then, given these objects, what sort of Data Structure can properly encapsulate the relationship between these objects, given that a SourceAction seems to have a One-To-Many relationship to Outcome objects? My suggestion is that a Map<SourceAction, List<Outcome>> represents this relationship.
private Map<SourceAction, List<Outcome>> readData() throws Exception {
It is possible to use a Hash Table (in this case, HashMap) to contain these objects, but I'm trying to keep the code as simple as possible, so we're going to stick to the more generic interface.
Then, we can reuse the logic you used in your original code to insert values into this data structure, with a few tweaks.
private Map<SourceAction, List<Outcome>> readData() {
    //I'm using a TreeMap because that makes the implementation simpler. If you absolutely
    //need to use a HashMap, then make sure you override hashCode() and equals() for SourceAction
    Map<SourceAction, List<Outcome>> dataStructure = new TreeMap<>();
    //We're using a try-with-resources block to eliminate the later call to close the reader
    try (BufferedReader br = new BufferedReader(new FileReader("MyGridWorld.txt"))) {
        br.readLine(); //Skip the first line because it's just a header
        //read file line by line
        String line = null;
        while ((line = br.readLine()) != null && !line.equals(";;")) {
            //split by tab
            String[] splited = line.split("\\t");
            SourceAction sourceAction = new SourceAction(splited[0], splited[1]);
            Outcome outcome = new Outcome(splited[2], splited[3], splited[4]);
            if (dataStructure.containsKey(sourceAction)) {
                //Entry already found; we're just going to add this outcome to the already
                //existing list.
                dataStructure.get(sourceAction).add(outcome);
            } else {
                List<Outcome> outcomes = new ArrayList<>();
                outcomes.add(outcome);
                dataStructure.put(sourceAction, outcomes);
            }
        }
    } catch (IOException e) {
        //Do whatever, or rethrow the exception
    }
    return dataStructure;
}
Then, if you want to query for all the outcomes associated with a given source + action, you need only construct a SourceAction object and query the Map for it.
Map<SourceAction, List<Outcome>> actionMap = readData();
List<Outcome> outcomes = actionMap.get(new SourceAction("(1,1)", "Up"));
assert(outcomes != null);
assert(outcomes.size() == 3);
assert(outcomes.get(0).equals(new Outcome("0.8", "(1,2)", "-0.04")));
assert(outcomes.get(1).equals(new Outcome("0.1", "(2,1)", "-0.04")));
assert(outcomes.get(2).equals(new Outcome("0.1", "(1,1)", "-0.04")));
This should yield the functionality you need for your problem.
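If you do want to switch from the TreeMap used above to a HashMap, the SourceAction key has to override hashCode() and equals(Object); the equals(SourceAction) overload above is not the override that hash-based collections actually call. A minimal sketch of those overrides (fields and constructor repeated here so the snippet stands on its own; the compareTo method from above is omitted):

import java.util.Objects;

public class SourceAction {
    public final String source;
    public final String action;

    public SourceAction(String source, String action) {
        this.source = source;
        this.action = action;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (!(obj instanceof SourceAction)) return false;
        SourceAction other = (SourceAction) obj;
        return source.equals(other.source) && action.equals(other.action);
    }

    @Override
    public int hashCode() {
        // Equal keys must produce equal hash codes for HashMap lookups to work
        return Objects.hash(source, action);
    }
}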
You should change the logic for adding to your hashtable so that it checks for the key you create. If the key already exists, grab the ArrayList of arrays it maps to and add your array to it. Currently you overwrite the data.
Try this
if (hashTableForWorld.containsKey(key))
{
    value = hashTableForWorld.get(key);
    value.add(probDes);
    hashTableForWorld.put(key, value);
}
else
{
    value = new ArrayList<String[]>();
    value.add(probDes);
    hashTableForWorld.put(key, value);
}
Then to print the contents try something like this
for (Map.Entry<String, ArrayList<String[]>> entry : hashTableForWorld.entrySet()) {
    String key = entry.getKey();
    ArrayList<String[]> value = entry.getValue();
    System.out.println("Key: " + key + " Value: ");
    for (int i = 0; i < value.size(); i++)
    {
        System.out.print("Array " + i + ": ");
        for (String val : value.get(i))
            System.out.print(val + " :: ");
        System.out.println();
    }
}
Hashtable and ArrayList (and other collections) do not copy keys and values, so every value you are storing is the very same probDes array you allocate once at the beginning. (It is normal that a String[] prints in a cryptic form; you would have to format it yourself, but you can still see that it is the same cryptic reference every time.)
What is certain is that you should allocate a new probDes for each line inside the loop.
Based on your data you could even work with a plain array as the value; in my opinion there is no real use for the ArrayList.
The same applies to value: it has to be allocated separately whenever a new key is encountered:
private Hashtable<String, ArrayList<String[]>> readData() throws Exception {
    try (BufferedReader br = new BufferedReader(new FileReader("MyGridWorld.txt"))) {
        br.readLine();
        Hashtable<String, ArrayList<String[]>> hashTableForWorld = new Hashtable<>();
        //read file line by line
        String line = null;
        while ((line = br.readLine()) != null && !line.equals(";;")) {
            //System.out.println("line ="+line);
            String source;
            String action;
            //split by tab
            String[] split = line.split("\\t");
            source = split[0];
            action = split[1];
            String key = source + "," + action;
            String[] probDesRew = new String[3];
            probDesRew[0] = split[2];
            probDesRew[1] = split[3];
            probDesRew[2] = split[4];
            ArrayList<String[]> value = hashTableForWorld.get(key);
            if (value == null) {
                value = new ArrayList<>();
                hashTableForWorld.put(key, value);
            }
            value.add(probDesRew);
        }
        return hashTableForWorld;
    }
}
Besides relocating the variables to their place of actual usage, the return value is also created locally, and the reader is wrapped in a try-with-resources construct, which ensures that it is closed even if an exception occurs (see the official tutorial here).
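To print the stored values readably rather than the cryptic default String[] representation, you can format each array yourself, for example with java.util.Arrays.toString. A minimal sketch, iterating over the table returned by the readData() above:

Hashtable<String, ArrayList<String[]>> hashTableForWorld = readData();
for (Map.Entry<String, ArrayList<String[]>> entry : hashTableForWorld.entrySet()) {
    System.out.print(entry.getKey() + " -> ");
    for (String[] probDesRew : entry.getValue()) {
        // Arrays.toString turns the array into readable text such as [0.8, (1,2), -0.04]
        System.out.print(Arrays.toString(probDesRew) + " ");
    }
    System.out.println();
}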
Related
So this is the structure of the file that I'm reading from:
[MESSAGE BEGIN]
uan:123
messageID: 111
[MESSAGE END]
[MESSAGE BEGIN]
uan:123
status:test
[MESSAGE END]
What I'm trying to do is, for a given uan, return all the details for it, whilst maintaining the block structure "MESSAGE BEGIN" "MESSAGE END".
This is the code I've written:
startPattern= "uan:123"
endPattern= "[MESSAGE END]"
System.out.println("Matching: " + this.getStartPattern());
List<String> desiredLines = new ArrayList<>();
try (BufferedReader buff = Files.newBufferedReader(getPath())) {
String line = "";
while ((line = buff.readLine()) != null) {
if (line.contains(this.getStartPattern())) {
desiredLines.add(line);
System.out.println(" \nMatch Found! ");
buff.lines().forEach(streamElement -> {
if (!streamElement.contains(this.getEndPattern())) {
desiredLines.add(streamElement);
} else if (streamElement.contains(this.getEndPattern())) {
throw new IndexOutOfBoundsException("Exit Status 0");
}
});
}
Now, the problem is, the while condition breaks when it sees the first "uan" and just captures the message ID. I want the code to also include "status" when I pass the uan.
Can anyone help with this?
EDIT
This is my expected output:
uan:123
messageID: 111
uan:123
status:test
All instances of uan:123 should be captured
What about creating, e.g., a Data class that holds all fields for a given uan? I can see that you have an object with an id (i.e. the uan) and many messages for this object.
I suggest using this approach and collecting all related information (belonging to the same object with that uan) in the same instance:
This is Data class:
final class Data {
    private String uan;
    private final List<Map<String, String>> events = new LinkedList<>();

    public Data(String uan) {
        this.uan = uan;
    }

    public String getUan() {
        return uan;
    }

    public boolean hasUan() {
        return uan != null && !uan.isEmpty();
    }

    public void set(Data data) {
        if (data != null)
            events.addAll(data.events);
    }

    public void addEvent(String key, String value) {
        if ("uan".equalsIgnoreCase(key))
            uan = value;
        else
            events.add(Collections.singletonMap(key, value));
    }
}
This is the method that reads the given file and returns a Map<String, Data>, with the uan as key and all the data for that object as value:
private static final String BEGIN = "[MESSAGE BEGIN]";
private static final String END = "[MESSAGE END]";
private static final Pattern KEY_VALUE_PATTERN = Pattern.compile("\\s*(?<key>[^:]+)\\s*:\\s*(?<value>[^:]+)\\s*");

private static Map<String, Data> readFile(Reader reader) throws IOException {
    try (BufferedReader br = new BufferedReader(reader)) {
        Data data = null;
        Map<String, Data> map = new TreeMap<>();

        for (String str; (str = br.readLine()) != null; ) {
            if (str.equalsIgnoreCase(BEGIN))
                data = new Data(null);
            else if (str.equalsIgnoreCase(END)) {
                if (data != null && data.hasUan()) {
                    String uan = data.getUan();
                    map.putIfAbsent(uan, new Data(uan));
                    map.get(uan).set(data);
                }
                data = null;
            } else if (data != null) {
                Matcher matcher = KEY_VALUE_PATTERN.matcher(str);
                if (matcher.matches())
                    data.addEvent(matcher.group("key"), matcher.group("value"));
            }
        }

        return map;
    }
}
And finally, this is what the client code looks like:
Map<String, Data> map = readFile(new FileReader("data.txt"));
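The Data class above does not expose the collected events, so to actually inspect the result you would need to add an accessor yourself; the getEvents() method below is a hypothetical addition, not part of the original class. With it, a lookup only needs the uan value as the key (the part after "uan:", e.g. "123"):

// Hypothetical accessor to add to the Data class above
public List<Map<String, String>> getEvents() {
    return events;
}

// Client code: look up everything recorded for uan 123
Map<String, Data> map = readFile(new FileReader("data.txt"));
Data data = map.get("123");
if (data != null) {
    // prints the collected key/value maps (messageID, status, ...)
    data.getEvents().forEach(System.out::println);
}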
Grouping filtered messages
Your general approach seems good. Instead of the nested loop I would break it down into simpler and more straightforward logic like this:
String needle = "uan:123";
String startPattern = "[MESSAGE BEGIN]";
String endPattern = "[MESSAGE END]";
List<List<String>>> result = new ArrayList<>();
try (BufferedReader buff = Files.newBufferedReader(getPath())) {
// Lines and flag for current message
List<String> currentMessage = new ArrayList<>();
boolean messageContainedNeedle = false;
// Read all lines
while (true) {
String line = buff.readLine();
if (line == null) {
break;
}
// Collect current line to message, ignore indicator
if (!line.equals(endPattern) && !line.equals(startPattern)) {
currentMessage.add(line);
}
// Set flag if message contains needle
if (!messageContainedNeedle && line.equals(needle)) {
messageContainedNeedle = true;
}
// Message ends
if (line.equals(endPattern)) {
// Collect if needle was contained
if (messageContainedNeedle) {
result.add(currentMessage);
}
// Prepare for next message
messageContainedNeedle = false;
currentMessage = new ArrayList<>();
}
}
}
It's easier to read and understand, and it supports your message items coming in arbitrary order. The resulting result still groups messages in a List<List<String>>; you can easily flat-map that if you still want a List<String>.
The resulting structure is:
[
["uan:123", "messageID: 111"],
["uan:123", "status: test"]
]
Achieving exactly your desired output is simple now:
// Variant 1: Nested for-each
result.forEach(message -> message.forEach(System.out::println));

// Variant 2: Flat-map
result.stream().flatMap(List::stream).forEach(System.out::println);

// Variant 3: Without streams
for (List<String> message : result) {
    for (String line : message) {
        System.out.println(line);
    }
}
Grouping all messages
If you leave out the flag-part you can parse all messages into that structure and then easily stream on them:
public static List<List<String>> parseMessages(Path path) throws IOException {
    String startPattern = "[MESSAGE BEGIN]";
    String endPattern = "[MESSAGE END]";

    List<List<String>> result = new ArrayList<>();
    try (BufferedReader buff = Files.newBufferedReader(path)) {
        // Data for current message
        List<String> currentMessage = new ArrayList<>();

        // Read all lines
        while (true) {
            String line = buff.readLine();
            if (line == null) {
                break;
            }

            // Collect current line to message, ignore indicator
            if (!line.equals(endPattern) && !line.equals(startPattern)) {
                currentMessage.add(line);
            }

            // Message ends
            if (line.equals(endPattern)) {
                // Collect message
                result.add(currentMessage);
                // Prepare for next message
                currentMessage = new ArrayList<>();
            }
        }
    }
    return result;
}
Usage is simple and straightforward. For example, filtering for messages with "uan:123":
List<List<String>> messages = parseMessages(getPath());
String needle = "uan:123";
List<List<String>> messagesWithNeedle = messages.stream()
.filter(message -> message.contains(needle))
.collect(Collectors.toList());
The resulting structure again is:
[
["uan:123", "messageID: 111"],
["uan:123", "status: test"]
]
Achieving your desired output can be done directly in the stream cascade:
messages.stream() // Stream<List<String>>
.filter(message -> message.contains(needle))
.flatMap(List::stream) // Stream<String>
.forEach(System.out::println);
Message Container
A natural idea would be to group the message data in a designated Message container class. Something like that:
public class Message {
    private final Map<String, String> mProperties;

    public Message() {
        mProperties = new HashMap<>();
    }

    public String getValue(String key) {
        return mProperties.get(key);
    }

    public void put(String key, String value) {
        mProperties.put(key, value);
    }

    public static Message fromLines(List<String> lines) {
        Message message = new Message();
        for (String line : lines) {
            String[] data = line.split(":");
            message.put(data[0].trim(), data[1].trim());
        }
        return message;
    }

    // Other methods ...
}
Note the handy Message#fromLines method. Using that you get a List<Message> and working with the data is way more convenient.
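Building on the parseMessages method from the section above, the grouped lines can then be mapped to Message objects; a small sketch (getPath() is assumed to supply the file's Path, as in the question):

// Turn each List<String> into a Message
List<Message> messages = parseMessages(getPath()).stream()
        .map(Message::fromLines)
        .collect(Collectors.toList());

// Filter on a property instead of comparing raw lines
List<Message> matching = messages.stream()
        .filter(m -> "123".equals(m.getValue("uan")))
        .collect(Collectors.toList());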
Just use simple parsing logic and only output data if you see the matching uan. I use a boolean variable to keep track of whether we have hit a matching uan inside a given block. If so, then we output all lines, otherwise we no-op and skip everything.
try (BufferedReader buff = Files.newBufferedReader(getPath())) {
    String line = "";
    String uan = "uan:123";
    String begin = "[MESSAGE BEGIN]";
    String end = "[MESSAGE END]";
    boolean match = false;
    while ((line = buff.readLine()) != null) {
        if (uan.equals(line)) {
            match = true;
        }
        else if (end.equals(line)) {
            match = false;
        }
        else if (!begin.equals(line) && match) {
            System.out.println(line);
        }
    }
}
Note that I don't do any validation to check if, for example, every BEGIN is mirrored by a proper closing END. If you need this you may add extra logic to the above code.
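If you do want that validation, one possible sketch is to track whether you are currently inside a block and fail fast on mismatched markers. This reuses the variables from the snippet above and is only an illustration:

boolean insideBlock = false;
while ((line = buff.readLine()) != null) {
    if (begin.equals(line)) {
        if (insideBlock) {
            throw new IllegalStateException("BEGIN without matching END");
        }
        insideBlock = true;
    } else if (end.equals(line)) {
        if (!insideBlock) {
            throw new IllegalStateException("END without matching BEGIN");
        }
        insideBlock = false;
        match = false;               // reset the uan flag for the next block
    } else if (uan.equals(line)) {
        match = true;
    } else if (match) {
        System.out.println(line);
    }
}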
I have contents in CSV file like this
User1,What is your favorite color?,color
User1,What is the name of your pet?,pet
User1,What is your mother's maiden name?,mother
User2,In what city were you born?,city
User2,What elementary school did you attend?,school
User2,What was your first best friend's name?,friend
I need to call OIM API which will take parameters like this
void setUserChallengeValues(java.lang.String userID,
boolean isUserLogin,
java.util.HashMap quesAnsMap)
where the quesAnsMap parameter is a HashMap of challenge questions and answers.
What is an efficient way of parsing the CSV file into a hashmap with the userid as key, and the questions and answers as value?
My hashmap should have User1 as a key, and the value should itself map each question to its answer.
Any sample snippet to refer?
Thanks
Read the file line by line, splitting it by ',' using String.split().
HashMap<String, Map<String, String>> userAnswers = new HashMap<>();
BufferedReader reader = new BufferedReader(new FileReader("/PATH/TO/YOUR/FILE.cvs"));
String ln;
while ((ln = reader.readLine()) != null)
{
    String[] split = ln.split(",");
    String user = split[0];
    Map<String, String> userMap = userAnswers.get(user);
    if (userMap == null)
    {
        userMap = new HashMap<String, String>();
        userAnswers.put(user, userMap);
    }
    userMap.put(split[1], split[2]);
}
reader.close();
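Once userAnswers is built, each user can be passed on to the OIM call from the question. A rough sketch, assuming you already have whatever OIM service object exposes setUserChallengeValues, and assuming isUserLogin should be false here:

for (Map.Entry<String, Map<String, String>> entry : userAnswers.entrySet()) {
    String userID = entry.getKey();
    // The API signature in the question expects a raw java.util.HashMap of question -> answer
    HashMap<String, String> quesAnsMap = new HashMap<>(entry.getValue());
    // Call this on your OIM service object; passing false for isUserLogin is an assumption
    setUserChallengeValues(userID, false, quesAnsMap);
}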
Here I am writing a method to which you can provide the file (.csv) name as a parameter and get a Map<String, Map<String, String>> back as the result:
public Map<String, Map<String, String>> putYourCSVToHashMap(String prm_csvFile) {
    BufferedReader br = null; //bufferReader
    String line = "";
    HashMap<String, Map<String, String>> hMapData = new HashMap<>();
    Map<String, String> userMap; //referring to the inner HashMap.
    String cvsSplitBy = ","; //delimiter on which each csv line is split into an array.
    try {
        br = new BufferedReader(new FileReader(prm_csvFile)); //read your file into a BufferedReader.
        while ((line = br.readLine()) != null) { //read each line of the file until the last.
            String[] csv_LineAsArray = line.split(cvsSplitBy); //each line is split into a String array.
            String username = csv_LineAsArray[0]; //pick the username available at index 0.
            userMap = hMapData.get(username);
            if (userMap == null) //if this particular user does not have any record yet
            {
                //create a new map for this user where the question is the key and the answer the value.
                userMap = new HashMap<String, String>();
                hMapData.put(username, userMap);
            }
            //put the question as key and the answer as value.
            userMap.put(csv_LineAsArray[1], csv_LineAsArray[2]);
        }
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (br != null) {
            try {
                br.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    return hMapData; //return your csv file contents as a map of maps.
}
I will tell you how I would do it, following the principle "Keep it as simple as possible".
I have read the other answers, and I think using String.split is a bad idea here, since you know exactly where to look for your values in each line of the CSV file.
A much better approach would be to use substring.
So here is a more or less complete solution.
We create the class Tuple to store a Q/A pair. (I am not using a Map here since it would be overkill. :) )
class Tuple {
    public String question;
    public String answer;

    public Tuple(String question, String answer) {
        this.question = question;
        this.answer = answer;
    }
}
It's a simple class, but it will save you lots of code later.
Now for the main class to do all the work.
class Questions {
    private final Map<String, Tuple> csvData;

    public Questions() {
        csvData = new HashMap<String, Tuple>();
    }

    public void setUserChallengeValues(String line) {
        String name = "";
        String question = "";
        String answer = "";

        name = line.substring(0, line.indexOf(","));
        line = line.substring(line.indexOf(",") + 1);
        question = line.substring(0, line.indexOf(","));
        line = line.substring(line.indexOf(",") + 1);
        answer = line;

        this.csvData.put(name, new Tuple(question, answer));
    }
}
There is your method; the logic is very simple (a lot simpler than split, in my opinion): you just look for the index of each ",".
That way you can easily extract the name, question and answer from each line.
And finally the actual parsing becomes a few lines of code.
Questions questions = new Questions();
//Feed the lines here one by one
String line1 = "User1,What is your favorite color?,color";
questions.setUserChallengeValues(line1);
Let me know if you need the whole code snippet.
Good luck :)
I have a textfile as such:
type = "Movie"
year = 2014
Producer = "John"
title = "The Movie"
type = "Magazine"
year = 2013
Writer = "Alfred"
title = "The Magazine"
What I'm trying to do is, first, search the file for the type, in this case "Movie" or "Magazine".
If it's a Movie, store all the values below it, i.e. set the year variable to 2014, Producer to "John", etc.
If it's a Magazine type, store all the variables below it as well separately.
What I have so far is this:
public static void Parse(String inPath) {
    String value;
    try {
        Scanner sc = new Scanner(new FileInputStream("resources/input.txt"));
        while (sc.hasNextLine()) {
            String line = sc.nextLine();
            if (line.startsWith("type")) {
                value = line.substring(8, line.length() - 1);
                System.out.println(value);
            }
        }
    } catch (FileNotFoundException ex) {
        Logger.getLogger(LibrarySearch.class.getName()).log(Level.SEVERE, null, ex);
    }
}
However, I'm already having an issue in simply printing out the first type, which is "Movie". My program seems to skip that one, and print out "Magazine" instead.
For this problem specifically, is it because line.startsWith("type") checks whether the current line in the file starts with type, but since the String called line is set to the next line, it skips the first "type"?
Also, what would be the best approach to parsing the actual values (the right side of the equals sign) below the types "Movie" and "Magazine" respectively?
I recommend you try the following:
BufferedReader reader = new BufferedReader(new FileReader(new File("resources/input.txt")));
String line;
while ((line = reader.readLine()) != null) {
    if (line.contains("=")) {
        String[] bits = line.split("=");
        String name = bits[0].trim();
        String value = bits[1].trim();
        if (name.equals("type")) {
            // Make a new object
        } else if (name.equals("year")) {
            // Store in the current object
        }
    } else {
        // It's a new line, so you should make a new object to store stuff in.
    }
}
In your code, the substring looks suspect to me. If you do a split based on the equals sign, then that should be much more resilient.
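To make the placeholder comments above concrete, here is one hedged way the loop could fill simple holder objects. The Item class and its field names are illustrative only, not something from the question, and reader is the BufferedReader from the snippet above:

// Illustrative holder for one block of the file
class Item {
    String type;
    String year;
    String producerOrWriter;
    String title;
}

List<Item> items = new ArrayList<>();
Item current = null;
String line;
while ((line = reader.readLine()) != null) {
    if (line.contains("=")) {
        String[] bits = line.split("=", 2);
        String name = bits[0].trim();
        String value = bits[1].trim().replace("\"", ""); // strip surrounding quotes
        if (name.equals("type")) {
            current = new Item();          // each block starts with its type line
            items.add(current);
            current.type = value;
        } else if (current != null) {
            if (name.equals("year")) current.year = value;
            else if (name.equals("title")) current.title = value;
            else current.producerOrWriter = value; // Producer or Writer line
        }
    }
}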
So far, I have this project where I read in a properties file using PropertiesConfiguration (from Apache), edit the values I would like to edit, and then save the changes to the file. It keeps the comments and formatting and such, but one thing it does change is that it takes multi-line values formatted like this:
key=value1,\
value2,\
value3
and turns it into the array style:
key=value1,value2,value3
I would like to be able to write those lines out formatted as they were before.
I did the editing and saving via this method:
PropertiesConfiguration config = new PropertiesConfiguration(configFile);
config.setProperty(key,value);
config.save();
I created a workaround in case anyone else needs this functionality. There is probably a better way to do this, but this solution currently works for me.
First, set your PropertiesConfiguration delimiter to the new line character like so:
PropertiesConfiguration config = new PropertiesConfiguration(configFile);
config.setListDelimiter('\n');
Then you will need to iterate through and update all properties (to set the format):
Iterator<String> keys = config.getKeys();
while (keys.hasNext()) {
    String key = keys.next();
    config.setProperty(key, setPropertyFormatter(key, config.getProperty(key)));
}
Then use this method to format your value list data (as shown above):
private List<String> setPropertyFormatter(String key, Object list) {
    List<String> tempProperties = new ArrayList<>();
    Iterator<?> propertyIterator = PropertyConverter.toIterator(list, '\n');
    String indent = new String(new char[key.length() + 1]).replace('\0', ' ');
    Boolean firstIteration = true;
    while (propertyIterator.hasNext()) {
        String value = propertyIterator.next().toString();
        Boolean lastIteration = !propertyIterator.hasNext();
        if (firstIteration && lastIteration) {
            tempProperties.add(value);
            continue;
        }
        if (firstIteration) {
            tempProperties.add(value + ",\\");
            firstIteration = false;
            continue;
        }
        if (lastIteration) {
            tempProperties.add(indent + value);
            continue;
        }
        tempProperties.add(indent + value + ",\\");
    }
    return tempProperties;
}
Then it is going to be almost correct, except that the save function takes the double backslash stored in the List and turns it into four backslashes in the file! So you need to replace those with a single backslash. I did this like so:
try {
    config.save(new File(filePath));
    byte[] readIn = Files.readAllBytes(Paths.get(filePath));
    String replacer = new String(readIn, StandardCharsets.UTF_8).replace("\\\\\\\\", "\\");
    BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(filePath, false), "UTF-8"));
    bw.write(replacer);
    bw.close();
} catch (ConfigurationException | IOException e) {
    e.printStackTrace();
}
With commons-configuration2, you would handle such cases with a custom PropertiesWriter implementation, as described in its documentation under "Custom properties readers and writers" (Reader biased though).
A writer provides a way to govern writing of each character that is to be written to the properties file, so you can achieve pretty much anything you desire with it (via PropertiesWriter.write(String)). There is also a convenient method that writes proper newlines (PropertiesWriter.writeln(String)).
For example, I had to handle classpath entries in a NetBeans Ant project's project.properties file:
public class ClasspathPropertiesWriter extends PropertiesConfiguration.PropertiesWriter {

    public ClasspathPropertiesWriter(Writer writer, ListDelimiterHandler delimiter) {
        super(writer, delimiter);
    }

    @Override
    public void writeProperty(String key, Object value, boolean forceSingleLine) throws IOException {
        switch (key) {
            case "javac.classpath":
            case "run.classpath":
            case "javac.test.classpath":
            case "run.test.classpath":
                String str = (String) value;
                String[] split = str.split(":");
                if (split.length > 1) {
                    write(key);
                    write("=\\");
                    writeln(null);
                    for (int i = 0; i < split.length; i++) {
                        write(" ");
                        write(split[i]);
                        if (i != split.length - 1) {
                            write(":\\");
                        }
                        writeln(null);
                    }
                } else {
                    super.writeProperty(key, value, forceSingleLine);
                }
                break;
            default:
                super.writeProperty(key, value, forceSingleLine);
                break;
        }
    }
}
public class CustomIOFactory extends PropertiesConfiguration.DefaultIOFactory {

    @Override
    public PropertiesConfiguration.PropertiesWriter createPropertiesWriter(
            Writer out, ListDelimiterHandler handler) {
        return new ClasspathPropertiesWriter(out, handler);
    }
}
Parameters params = new Parameters();
FileBasedConfigurationBuilder<Configuration> builder =
        new FileBasedConfigurationBuilder<Configuration>(PropertiesConfiguration.class)
                .configure(params.properties()
                        .setFileName("project.properties")
                        .setIOFactory(new CustomIOFactory()));
Configuration config = builder.getConfiguration();
builder.save();
To speed up a lookup in a multi-record file, I wish to store its elements in a String array of arrays, so that I can search for a string like "AF" among similar strings only ("AA", "AB", ..., "AZ") and not in the whole file.
The original file is like this:
AA
ABC
AF
(...)
AP
BE
BEND
(...)
BZ
(...)
SHORT
VERYLONGRECORD
ZX
which I want to translate into
AA ABC AF (...) AP
BE BEND (...) BZ
(...)
SHORT
VERYLONGRECORD
ZX
I don't know how many records there are or how many "elements" each "row" will have, as the source file can change over time (even though, after being read into memory, the array is only read).
I tried this solution:
In a class I defined the String array of (String) arrays, without defining its dimensions:
public static String[][] tldTabData;
then, in another class, I read the file:
public static void tldLoadTable() {
    String rec = null;
    int previdx = 0;
    int rowidx = 0;
    // this will hold each row
    ArrayList<String> mVector = new ArrayList<String>();
    FileInputStream fStream;
    BufferedReader bufRead = null;
    try {
        fStream = new FileInputStream(eVal.appPath + eVal.tldTabDataFilename);
        // Use DataInputStream to read binary NOT text.
        bufRead = new BufferedReader(new InputStreamReader(fStream));
    } catch (Exception er1) {
        /* if we fail the 1st try maybe we're working inside some "package" (e.g. debugging)
         * so we'll try a second time with a modified path (e.g. adding "bin\") instead of
         * raising an error and exiting.
         */
        try {
            fStream = new FileInputStream(eVal.appPath +
                    "bin" + File.separatorChar + eVal.tldTabDataFilename);
            // Use DataInputStream to read binary NOT text.
            bufRead = new BufferedReader(new InputStreamReader(fStream));
        } catch (FileNotFoundException er2) {
            System.err.println("Error: " + er2.getMessage());
            er2.printStackTrace();
            System.exit(1);
        }
    }
    try {
        while ((rec = bufRead.readLine()) != null) {
            // strip comments and short (empty) rows
            if (!rec.startsWith("#") && rec.length() > 1) {
                // work with uppercase only (maybe unuseful)
                //rec.toUpperCase();
                // use the 1st char as a row index
                rowidx = rec.charAt(0);
                // if the row changes (e.g. A->B) and it is not the 1st line we read
                if (previdx != rowidx && previdx != 0) {
                    // store the (completed) collection into the array
                    eVal.tldTabData[previdx] = mVector.toArray(new String[mVector.size()]);
                    // clear the collection itself
                    mVector.clear();
                    // and restart to fill it from scratch
                    mVector.add(rec);
                } else {
                    // continue filling the collection
                    mVector.add(rec);
                }
                // and sync the indexes
                previdx = rowidx;
            }
        }
        bufRead.close();
        // globally flag the table as loaded
        eVal.tldTabLoaded = true;
    } catch (Exception er2) {
        System.err.println("Error: " + er2.getMessage());
        er2.printStackTrace();
        System.exit(1);
    }
}
When executing the program, it correctly accumulates the strings into mVector, but when trying to copy them into eVal.tldTabData I get a NullPointerException.
I bet I have to create/initialize the array at some point, but I'm having problems figuring out where and how.
First time I'm coding in Java... hello world apart. :-)
You can use a Map to store your strings per row.
Here is something like what you'll need:
//Assuming that mVector already holds all your input strings
Map<String, List<String>> map = new HashMap<String, List<String>>();
for (String str : mVector) {
    List<String> storedList;
    if (map.containsKey(str.substring(0, 1))) {
        storedList = map.get(str.substring(0, 1));
    } else {
        storedList = new ArrayList<String>();
        map.put(str.substring(0, 1), storedList);
    }
    storedList.add(str);
}

Set<String> unOrdered = map.keySet();
List<String> orderedIndexes = new ArrayList<String>(unOrdered);
Collections.sort(orderedIndexes);
for (String key : orderedIndexes) { // get strings for every row
    List<String> values = map.get(key);
    for (String value : values) { // writing strings on the same row
        System.out.print(value + "\t"); // change this to writing to some file
    }
    System.out.println(); // add a new line at the end of the row
}