Properties File multi-line values using PropertiesConfiguration - java

So far, I have this project where I read in a properties file using PropertiesConfiguration (from Apache Commons Configuration), edit the values I would like to edit, and then save the changes to the file. It keeps the comments and formatting and such, but one thing it does change is multi-line values formatted like this:
key=value1,\
value2,\
value3
and turns it into the array style:
key=value1,value2,value3
I would like to be able to print those lines formatted as they were before.
I am currently reading and saving the file like this:
PropertiesConfiguration config = new PropertiesConfiguration(configFile);
config.setProperty(key,value);
config.save();

I created a workaround in case anyone else needs this functionality. There is probably a better way to do this, but this solution currently works for me.
First, set your PropertiesConfiguration list delimiter to the newline character like so:
PropertiesConfiguration config = new PropertiesConfiguration(configFile);
config.setListDelimiter('\n');
Then you will need to iterate through and update all properties (to set the format):
Iterator<String> keys = config.getKeys();
while (keys.hasNext()) {
    String key = keys.next();
    config.setProperty(key, setPropertyFormatter(key, config.getProperty(key)));
}
Use this method to format your value list data (as shown above):
private List<String> setPropertyFormatter(String key, Object list) {
    List<String> tempProperties = new ArrayList<>();
    Iterator<?> propertyIterator = PropertyConverter.toIterator(list, '\n');
    // indent continuation lines so they line up under the value after "key="
    String indent = new String(new char[key.length() + 1]).replace('\0', ' ');
    boolean firstIteration = true;
    while (propertyIterator.hasNext()) {
        String value = propertyIterator.next().toString();
        boolean lastIteration = !propertyIterator.hasNext();
        if (firstIteration && lastIteration) {
            tempProperties.add(value);
            continue;
        }
        if (firstIteration) {
            tempProperties.add(value + ",\\");
            firstIteration = false;
            continue;
        }
        if (lastIteration) {
            tempProperties.add(indent + value);
            continue;
        }
        tempProperties.add(indent + value + ",\\");
    }
    return tempProperties;
}
Then it is going to be almost correct, except that the save function takes the double backslash stored in the List and turns it into four backslashes in the file! So you need to replace those with a single backslash. I did this like so:
try {
    config.save(new File(filePath));
    byte[] readIn = Files.readAllBytes(Paths.get(filePath));
    String replacer = new String(readIn, StandardCharsets.UTF_8).replace("\\\\\\\\", "\\");
    BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(filePath, false), "UTF-8"));
    bw.write(replacer);
    bw.close();
} catch (ConfigurationException | IOException e) {
    e.printStackTrace();
}

With commons-configuration2, you would handle such cases with a custom PropertiesWriter implementation, as described in its documentation under "Custom properties readers and writers" (though the example there is biased toward readers).
A writer lets you govern every character that is written to the properties file, so you can achieve pretty much anything you desire with it (via PropertiesWriter.write(String)). There is also a convenience method that writes a proper newline (PropertiesWriter.writeln(String)).
For example, I had to handle classpath entries in a NetBeans Ant project's project.properties file:
public class ClasspathPropertiesWriter extends PropertiesConfiguration.PropertiesWriter {

    public ClasspathPropertiesWriter(Writer writer, ListDelimiterHandler delimiter) {
        super(writer, delimiter);
    }

    @Override
    public void writeProperty(String key, Object value, boolean forceSingleLine) throws IOException {
        switch (key) {
            case "javac.classpath":
            case "run.classpath":
            case "javac.test.classpath":
            case "run.test.classpath":
                String str = (String) value;
                String[] split = str.split(":");
                if (split.length > 1) {
                    write(key);
                    write("=\\");
                    writeln(null);
                    for (int i = 0; i < split.length; i++) {
                        write(" ");
                        write(split[i]);
                        if (i != split.length - 1) {
                            write(":\\");
                        }
                        writeln(null);
                    }
                } else {
                    super.writeProperty(key, value, forceSingleLine);
                }
                break;
            default:
                super.writeProperty(key, value, forceSingleLine);
                break;
        }
    }
}
public class CustomIOFactory extends PropertiesConfiguration.DefaultIOFactory {

    @Override
    public PropertiesConfiguration.PropertiesWriter createPropertiesWriter(
            Writer out, ListDelimiterHandler handler) {
        return new ClasspathPropertiesWriter(out, handler);
    }
}
Parameters params = new Parameters();
FileBasedConfigurationBuilder<Configuration> builder =
        new FileBasedConfigurationBuilder<Configuration>(PropertiesConfiguration.class)
                .configure(params.properties()
                        .setFileName("project.properties")
                        .setIOFactory(new CustomIOFactory()));
Configuration config = builder.getConfiguration();
builder.save();

Related

read txt file and store data in a hashtable in java

I am reading a txt file and storing the data in a hashtable, but I couldn't get the correct output. The txt file looks like the attached image (this is part of my data).
I want to store columns 1 and 2 as the key (String type) in the hashtable, and columns 3 and 4 as the value (ArrayList type).
My code is below:
private Hashtable<String, ArrayList<String[]>> readData() throws Exception {
    BufferedReader br = new BufferedReader(new FileReader("MyGridWorld.txt"));
    br.readLine();
    ArrayList<String[]> value = new ArrayList<String[]>();
    String[] probDes = new String[2];
    String key = "";
    //read file line by line
    String line = null;
    while ((line = br.readLine()) != null && !line.equals(";;")) {
        //System.out.println("line ="+line);
        String source;
        String action;
        //split by tab
        String[] splited = line.split("\\t");
        source = splited[0];
        action = splited[1];
        key = source + "," + action;
        probDes[0] = splited[2];
        probDes[1] = splited[3];
        value.add(probDes);
        hashTableForWorld.put(key, value);
        System.out.println("hash table is like this:" + hashTableForWorld);
    }
    br.close();
    return hashTableForWorld;
}
The output looks like this: it's one very long line.
I think maybe the hashtable is broken, but I don't know why. Thank you for reading my problem.
The first thing we need to establish is that you have a really obvious XY-Problem, in that "what you need to do" and "how you're trying to solve it" are completely at odds with each other.
So let's go back to the original problem and try to work out what we need first.
As best as I can determine, source and action are connected, in that they represent queryable "keys" to your data structure, and probability, destination, and reward are queryable "outcomes" in your data structure. So we'll start by creating objects to represent those two concepts:
public class SourceAction implements Comparable<SourceAction> {

    public final String source;
    public final String action;

    public SourceAction() {
        this("", "");
    }

    public SourceAction(String source, String action) {
        this.source = source;
        this.action = action;
    }

    public int compareTo(SourceAction sa) {
        int comp = source.compareTo(sa.source);
        if (comp != 0) return comp;
        return action.compareTo(sa.action);
    }

    public boolean equals(SourceAction sa) {
        return source.equals(sa.source) && action.equals(sa.action);
    }

    public String toString() {
        return source + ',' + action;
    }
}
public class Outcome {

    public String probability; //You can use double if you've written code to parse the probability
    public String destination;
    public String reward; //You can use double if you've written code to parse the reward

    public Outcome() {
        this("", "", "");
    }

    public Outcome(String probability, String destination, String reward) {
        this.probability = probability;
        this.destination = destination;
        this.reward = reward;
    }

    public boolean equals(Outcome o) {
        return probability.equals(o.probability) && destination.equals(o.destination) && reward.equals(o.reward);
    }

    public String toString() {
        return probability + ',' + destination + ',' + reward;
    }
}
So then, given these objects, what sort of Data Structure can properly encapsulate the relationship between these objects, given that a SourceAction seems to have a One-To-Many relationship to Outcome objects? My suggestion is that a Map<SourceAction, List<Outcome>> represents this relationship.
private Map<SourceAction, List<Outcome>> readData() throws Exception {
It is possible to use a Hash Table (in this case, HashMap) to contain these objects, but I'm trying to keep the code as simple as possible, so we're going to stick to the more generic interface.
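If you do want a HashMap rather than the TreeMap used below, SourceAction would also need hashCode() and equals(Object) overrides; note that an equals(SourceAction) overload like the one above does not override Object.equals. A minimal sketch of what you would add to SourceAction:
// Hypothetical additions to SourceAction for HashMap use (not part of the answer above)
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof SourceAction)) return false;
    SourceAction sa = (SourceAction) o;
    return source.equals(sa.source) && action.equals(sa.action);
}

@Override
public int hashCode() {
    return java.util.Objects.hash(source, action);
}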
Then, we can reuse the logic you used in your original code to insert values into this data structure, with a few tweaks.
private Map<SourceAction, List<Outcome>> readData() {
    //I'm using a TreeMap because that makes the implementation simpler. If you absolutely
    //need to use a HashMap, make sure SourceAction overrides hashCode() and equals(Object)
    Map<SourceAction, List<Outcome>> dataStructure = new TreeMap<>();
    //We're using a try-with-resources block to eliminate the later call to close the reader
    try (BufferedReader br = new BufferedReader(new FileReader("MyGridWorld.txt"))) {
        br.readLine(); //Skip the first line because it's just a header
        //read file line by line
        String line = null;
        while ((line = br.readLine()) != null && !line.equals(";;")) {
            //split by tab
            String[] splited = line.split("\\t");
            SourceAction sourceAction = new SourceAction(splited[0], splited[1]);
            Outcome outcome = new Outcome(splited[2], splited[3], splited[4]);
            if (dataStructure.containsKey(sourceAction)) {
                //Entry already found; we're just going to add this outcome to the already
                //existing list.
                dataStructure.get(sourceAction).add(outcome);
            } else {
                List<Outcome> outcomes = new ArrayList<>();
                outcomes.add(outcome);
                dataStructure.put(sourceAction, outcomes);
            }
        }
    } catch (IOException e) {
        //Do whatever, or rethrow the exception
    }
    return dataStructure;
}
Then, if you want to query for all the outcomes associated with a given source + action, you need only construct a SourceAction object and query the Map for it.
Map<SourceAction, List<Outcome>> actionMap = readData();
List<Outcome> outcomes = actionMap.get(new SourceAction("(1,1)", "Up"));
assert(outcomes != null);
assert(outcomes.size() == 3);
assert(outcomes.get(0).equals(new Outcome("0.8", "(1,2)", "-0.04")));
assert(outcomes.get(1).equals(new Outcome("0.1", "(2,1)", "-0.04")));
assert(outcomes.get(2).equals(new Outcome("0.1", "(1,1)", "-0.04")));
This should yield the functionality you need for your problem.
You should change your logic for adding to your hashtable to check for the key you create. If the key exists, then grab your array list of arrays that it maps to and add your array to it. Currently you will overwrite the data.
Try this
if (hashTableForWorld.containsKey(key))
{
    value = hashTableForWorld.get(key);
    value.add(probDes);
    hashTableForWorld.put(key, value);
}
else
{
    value = new ArrayList<String[]>();
    value.add(probDes);
    hashTableForWorld.put(key, value);
}
Then to print the contents try something like this
for (Map.Entry<String, ArrayList<String[]>> entry : hashTableForWorld.entrySet()) {
    String key = entry.getKey();
    ArrayList<String[]> value = entry.getValue();
    System.out.println("Key: " + key + " Value: ");
    for (int i = 0; i < value.size(); i++)
    {
        System.out.print("Array " + i + ": ");
        for (String val : value.get(i))
            System.out.print(val + " :: ");
        System.out.println();
    }
}
Hashtable and ArrayList (and other collections) do not make a copy of the key and value, so all the values you are storing are the same probDes array you allocate at the beginning. (It is normal that a String[] prints in a cryptic form; you would have to format it yourself, but you can still see that it is the very same cryptic thing every time.)
In any case, you should allocate a new probDes for each element inside the loop.
Based on your data you could, in my opinion, work with an array as the value; there is no real use for the ArrayList here.
The same applies to value: it has to be allocated separately upon encountering a new key:
private Hashtable<String, ArrayList<String[]>> readData() throws Exception {
    try (BufferedReader br = new BufferedReader(new FileReader("MyGridWorld.txt"))) {
        br.readLine();
        Hashtable<String, ArrayList<String[]>> hashTableForWorld = new Hashtable<>();
        //read file line by line
        String line = null;
        while ((line = br.readLine()) != null && !line.equals(";;")) {
            //System.out.println("line ="+line);
            String source;
            String action;
            //split by tab
            String[] split = line.split("\\t");
            source = split[0];
            action = split[1];
            String key = source + "," + action;
            String[] probDesRew = new String[3];
            probDesRew[0] = split[2];
            probDesRew[1] = split[3];
            probDesRew[2] = split[4];
            ArrayList<String[]> value = hashTableForWorld.get(key);
            if (value == null) {
                value = new ArrayList<>();
                hashTableForWorld.put(key, value);
            }
            value.add(probDesRew);
        }
        return hashTableForWorld;
    }
}
Besides relocating the variables to their place of actual usage, the return value is also created locally, and the reader is wrapped in a try-with-resources construct, which ensures that it gets closed even if an exception occurs (see the official tutorial here).

merging sorted files Java

I'm implementing an external merge sort in Java.
Given a file, I split it into smaller ones, then sort the smaller portions, and finally merge the sorted (smaller) files.
The last step is what I'm having trouble with.
I have a list of files, and at each step I want to take the minimum value among the first rows of those files and then remove that line from its file.
It is supposed to be something like this:
public static void mergeSortedFiles(List<File> sorted, File output) throws IOException {
    BufferedWriter wf = new BufferedWriter(new FileWriter(output));
    String curLine = "";
    while (!sorted.isEmpty()) {
        curLine = findMinLine(sorted);
        wf.write(curLine);
    }
}

public static String findMinLine(List<File> sorted) throws IOException {
    List<BufferedReader> brs = new ArrayList<>();
    for (int i = 0; i < sorted.size(); i++) {
        brs.add(new BufferedReader(new FileReader(sorted.get(i))));
    }
    List<String> lines = new ArrayList<>();
    for (BufferedReader br : brs) {
        lines.add(br.readLine());
    }
    Collections.sort(lines);
    return lines.get(0);
}
I'm not sure how to update the files. Can anyone help with that?
Thanks for helping!
You can create a Comparable wrapper around each file and then place the wrappers in a heap (for example a PriorityQueue).
public class ComparableFile<T extends Comparable<T>> implements Comparable<ComparableFile<T>> {

    private final Deserializer<T> deserializer;
    private final Iterator<String> lines;
    private T buffered;

    public ComparableFile(File file, Deserializer<T> deserializer) {
        this.deserializer = deserializer;
        try {
            this.lines = Files.newBufferedReader(file.toPath()).lines().iterator();
        } catch (IOException e) {
            // deal with it differently if you want, I'm just providing a working example
            // and wanted to use the constructor in a lambda function
            throw new UncheckedIOException(e);
        }
    }

    @Override
    public int compareTo(ComparableFile<T> that) {
        T mine = peek();
        T theirs = that.peek();
        if (mine == null) return theirs == null ? 0 : -1;
        if (theirs == null) return 1;
        return mine.compareTo(theirs);
    }

    public T pop() {
        T tmp = peek();
        if (tmp != null) {
            buffered = null;
            return tmp;
        }
        throw new NoSuchElementException();
    }

    public boolean isEmpty() {
        return peek() == null;
    }

    private T peek() {
        if (buffered != null) return buffered;
        if (!lines.hasNext()) return null;
        return buffered = deserializer.deserialize(lines.next());
    }
}
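Note that the answer uses a Deserializer type without showing it; presumably it is a small functional interface along these lines (the name and the single deserialize method are assumptions inferred from the line -> line lambda used in the demo further down):
// Assumed shape of the Deserializer type referenced above (not shown in the original answer)
@FunctionalInterface
public interface Deserializer<T> {
    T deserialize(String line);
}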
Then, you can merge them this way:
public class MergeFiles<T extends Comparable<T>> {

    private final PriorityQueue<ComparableFile<T>> files;

    public MergeFiles(List<File> files, Deserializer<T> deserializer) {
        this.files = new PriorityQueue<>(files.stream()
                .map(file -> new ComparableFile<>(file, deserializer))
                .filter(comparableFile -> !comparableFile.isEmpty())
                .collect(toList())); // requires a static import of Collectors.toList
    }

    public Iterator<T> getSortedElements() {
        return new Iterator<T>() {
            @Override
            public boolean hasNext() {
                return !files.isEmpty();
            }

            @Override
            public T next() {
                if (!hasNext()) throw new NoSuchElementException();
                ComparableFile<T> head = files.poll();
                T next = head.pop();
                if (!head.isEmpty()) files.add(head);
                return next;
            }
        };
    }
}
And here's some code to demonstrate it works:
public static void main(String[] args) throws IOException {
    List<File> files = Arrays.asList(
            newTempFile(Arrays.asList("hello", "world")),
            newTempFile(Arrays.asList("english", "java", "programming")),
            newTempFile(Arrays.asList("american", "scala", "stackoverflow"))
    );
    Iterator<String> sortedElements = new MergeFiles<>(files, line -> line).getSortedElements();
    while (sortedElements.hasNext()) {
        System.out.println(sortedElements.next());
    }
}

private static File newTempFile(List<String> words) throws IOException {
    File tempFile = File.createTempFile("sorted-", ".txt");
    Files.write(tempFile.toPath(), words);
    tempFile.deleteOnExit();
    return tempFile;
}
Output:
american
english
hello
java
programming
scala
stackoverflow
world
So what you want to do is swap lines in a text file? You could do that with a RandomAccessFile, but it will be horribly slow, since every time you swap two lines you have to wait for the next IO burst.
So I highly recommend using the following code to do the merge sort on the heap:
List<String> lines1 = Files.readAllLines(yourFile1);
List<String> lines2 = Files.readAllLines(yourFile2);
// merge the two already-sorted line lists (sorting the combined list gives the same result)
List<String> merged = new ArrayList<>(lines1);
merged.addAll(lines2);
Collections.sort(merged);
FileWriter writer = new FileWriter(yourOutputFile);
for (String str : merged) {
    writer.write(str + System.lineSeparator());
}
writer.close();
The standard merge technique between a fixed number of files (say, 2) is:
have a variable for the value of the ordering key of the current record of each file (for Java, make that variable Comparable).
start the process by reading the first record of each file (and fill in the corresponding variable)
loop (until end-of-file on both) through a code block that says essentially
if (key_1.compareTo(key_2) == 0) { process both files ; then read both files }
else if (key_1.compareTo(key_2) < 0) { process file 1 ; then read file 1 }
else { process file 2 ; then read file 2 }
Note how this code does essentially nothing more than determine the file with the lowest key, and process that.
If your number of files is variable, then your number of key variables is variable too, and "determining the file with the lowest current key" cannot be done as per above. Instead, have as many current_key_value objects as there are files and store them all in a TreeSet. The first element of the TreeSet is then the lowest current key value across all the files. Make sure you maintain a link between each key value and the file it came from; then you simply process that file, delete the processed key value from the TreeSet, read a new record from the processed file, and add its key value to the TreeSet.
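A rough sketch of that TreeSet-based approach, assuming each record is a single line and ordering by the whole line is acceptable (the Entry class, the merge signature, and the use of BufferedReader are illustrative choices, not taken from the answer above):
import java.io.BufferedReader;
import java.io.IOException;
import java.util.List;
import java.util.TreeSet;

public class TreeSetMergeSketch {

    // Pairs a file's current line with the index of the reader it came from.
    // Ordering is by line first; the reader index is only a tie-break so that
    // identical lines from different files are not collapsed by the TreeSet.
    private static final class Entry implements Comparable<Entry> {
        final String line;
        final int readerIndex;

        Entry(String line, int readerIndex) {
            this.line = line;
            this.readerIndex = readerIndex;
        }

        @Override
        public int compareTo(Entry other) {
            int cmp = line.compareTo(other.line);
            return cmp != 0 ? cmp : Integer.compare(readerIndex, other.readerIndex);
        }
    }

    public static void merge(List<BufferedReader> readers, Appendable out) throws IOException {
        TreeSet<Entry> heads = new TreeSet<>();
        for (int i = 0; i < readers.size(); i++) {
            String line = readers.get(i).readLine();
            if (line != null) heads.add(new Entry(line, i));
        }
        while (!heads.isEmpty()) {
            Entry lowest = heads.pollFirst();        // lowest current key across all files
            out.append(lowest.line).append(System.lineSeparator());
            String next = readers.get(lowest.readerIndex).readLine();
            if (next != null) heads.add(new Entry(next, lowest.readerIndex));
        }
    }
}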

Can I manipulate LineIterator from Apache Commons IO to set its start line in file?

I have two large tab-delimited text files. What I am trying to do is compare both of them and write the changes to a new file. For that I am using the Apache Commons IO Java library. Inputs are done as streams. Since it doesn't need to be fancy and the files have a set structure, the principle of the comparison is simple: go through the first file and use part of each line to search for it in the second file.
As you can see, it loops through the second file for every line of the first file (I simplified the example; it does a bit more parsing on the lines of both files to determine the key and the parts to check for changes in). The key will always be unique, and I can use part of the key to know where lines containing that part start in the file. My problem is that I have no idea how to manipulate the LineIterator so it starts from those lines and not from the beginning of the file. Is that even possible? If so, how? Or should I look at some other way?
public static void main(String[] args) throws IOException {
    File theFile = new File("first.txt");
    File oldFile = new File("second.txt");
    File targetFile = new File("changes.txt");
    int counter = 0;
    boolean found = false;
    String key = null;
    LineIterator update = FileUtils.lineIterator(theFile, "UTF-8");
    try {
        while (update.hasNext()) {
            //update.nextLine();
            counter++;
            String line = update.nextLine();
            String[] splitted = line.split("\\t");
            key = splitted[0];
            LineIterator old = FileUtils.lineIterator(oldFile, "UTF-8");
            found = false;
            while (old.hasNext() && !found) {
                String oldLine = old.nextLine();
                String[] content = oldLine.split("\\t");
                if (oldLine.startsWith(key)) {
                    if (!content[2].equals(splitted[2])) {
                        System.out.println(counter + " [CHANGE FOUND]");
                        FileUtils.writeStringToFile(targetFile, line + "\t[TEXT CHANGE]\n", "UTF-8", true);
                    }
                    found = true;
                }
            }
            if (!found) {
                System.out.println(counter + " [NEW LINE]");
                FileUtils.writeStringToFile(targetFile, line + "\t[NEW LINE]\n", "UTF-8", true);
            }
            old.close();
            key = null;
        }
    } finally {
        LineIterator.closeQuietly(update);
        System.out.println("Checking Done");
    }
}

Reading, comparing and merging multiple files in Java

Given there are some files Customer-1.txt, Customer-2.txt and Customer-3.txt and these files have the following content:
Customer-1.txt
1|1|MARY|SMITH
2|1|PATRICIA|JOHNSON
4|2|BARBARA|JONES
Customer-2.txt
1|1|MARY|SMITH
2|1|PATRICIA|JOHNSON
3|1|LINDA|WILLIAMS
4|2|BARBARA|JONES
Customer-3.txt
2|1|PATRICIA|JOHNSON
3|1|LINDA|WILLIAMS
5|2|ALEXANDER|ANDERSON
These files have a lot of duplicate data, but it is possible that each file contains some data that is unique.
And given that the actual files are sorted, big (a few GB each file) and there are many files...
Then what is the:
a) memory cheapest
b) cpu cheapest
c) fastest
way in Java to create one file out of these three files that will contain all the unique data of each file sorted and concatenated like such:
Customer-final.txt
1|1|MARY|SMITH
2|1|PATRICIA|JOHNSON
3|1|LINDA|WILLIAMS
4|2|BARBARA|JONES
5|2|ALEXANDER|ANDERSON
I looked into the following solution https://github.com/upcrob/spring-batch-sort-merge , but I would like to know if it's possible to perhaps do it with FileInputStream and/or a non-Spring-Batch solution.
A solution that uses an in-memory or real database to join them is not viable for my use case, due to the size of the files and the absence of an actual database.
Since the input files are already sorted, a simple parallel iteration of the files, merging their content, is the memory cheapest, cpu cheapest, and fastest way to do it.
This is a multi-way merge join, i.e. a sort-merge join without the "sort", with elimination of duplicates, similar to a SQL DISTINCT.
Here is a version that can handle an unlimited number of input files (well, as many files as you can have open, anyway). It uses a helper class to stage the next line from each input file, so the leading ID value only has to be parsed once per line.
private static void merge(StringWriter out, BufferedReader... in) throws IOException {
    CustomerReader[] customerReader = new CustomerReader[in.length];
    for (int i = 0; i < in.length; i++)
        customerReader[i] = new CustomerReader(in[i]);
    merge(out, customerReader);
}

private static void merge(StringWriter out, CustomerReader... in) throws IOException {
    List<CustomerReader> min = new ArrayList<>(in.length);
    for (;;) {
        min.clear();
        for (CustomerReader reader : in)
            if (reader.hasData()) {
                int cmp = (min.isEmpty() ? 0 : reader.compareTo(min.get(0)));
                if (cmp < 0)
                    min.clear();
                if (cmp <= 0)
                    min.add(reader);
            }
        if (min.isEmpty())
            break; // all done
        // optional: Verify that lines that compared equal by ID are entirely equal
        out.write(min.get(0).getCustomerLine());
        out.write(System.lineSeparator());
        for (CustomerReader reader : min)
            reader.readNext();
    }
}
private static final class CustomerReader implements Comparable<CustomerReader> {

    private BufferedReader in;
    private String customerLine;
    private int customerId;

    CustomerReader(BufferedReader in) throws IOException {
        this.in = in;
        readNext();
    }

    void readNext() throws IOException {
        if ((this.customerLine = this.in.readLine()) == null)
            this.customerId = Integer.MAX_VALUE;
        else
            this.customerId = Integer.parseInt(this.customerLine.substring(0, this.customerLine.indexOf('|')));
    }

    boolean hasData() {
        return (this.customerLine != null);
    }

    String getCustomerLine() {
        return this.customerLine;
    }

    @Override
    public int compareTo(CustomerReader that) {
        // Order by customerId only. Inconsistent with equals()
        return Integer.compare(this.customerId, that.customerId);
    }
}
TEST
String file1data = "1|1|MARY|SMITH\n" +
        "2|1|PATRICIA|JOHNSON\n" +
        "4|2|BARBARA|JONES\n";
String file2data = "1|1|MARY|SMITH\n" +
        "2|1|PATRICIA|JOHNSON\n" +
        "3|1|LINDA|WILLIAMS\n" +
        "4|2|BARBARA|JONES\n";
String file3data = "2|1|PATRICIA|JOHNSON\n" +
        "3|1|LINDA|WILLIAMS\n" +
        "5|2|ALEXANDER|ANDERSON\n";
try (
    BufferedReader in1 = new BufferedReader(new StringReader(file1data));
    BufferedReader in2 = new BufferedReader(new StringReader(file2data));
    BufferedReader in3 = new BufferedReader(new StringReader(file3data));
    StringWriter out = new StringWriter();
) {
    merge(out, in1, in2, in3);
    System.out.print(out);
}
OUTPUT
1|1|MARY|SMITH
2|1|PATRICIA|JOHNSON
3|1|LINDA|WILLIAMS
4|2|BARBARA|JONES
5|2|ALEXANDER|ANDERSON
The code merges purely by ID value, and doesn't verify that rest of line is actually equal. Insert code at the optional comment to check for that, if needed.
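If you do need that verification, a minimal sketch of what could go at the optional comment inside merge() might look like this (throwing on a mismatch is just one possible policy):
// Hypothetical check: all readers that compared equal by ID should carry identical lines
String expected = min.get(0).getCustomerLine();
for (CustomerReader reader : min) {
    if (!reader.getCustomerLine().equals(expected)) {
        throw new IllegalStateException("Same customer ID but different data: '"
                + expected + "' vs '" + reader.getCustomerLine() + "'");
    }
}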
This might help:
public static void main(String[] args) {
    String files[] = {"Customer-1.txt", "Customer-2.txt", "Customer-3.txt"};
    // TreeMap keeps the entries sorted by id; note the split uses "\\|" because '|' is a regex metacharacter
    TreeMap<Integer, String> customers = new TreeMap<Integer, String>();
    try {
        String line;
        for (int i = 0; i < files.length; i++) {
            BufferedReader reader = new BufferedReader(new FileReader("data/" + files[i]));
            while ((line = reader.readLine()) != null) {
                Integer uuid = Integer.valueOf(line.split("\\|")[0]);
                customers.put(uuid, line);
            }
            reader.close();
        }
        BufferedWriter writer = new BufferedWriter(new FileWriter("data/Customer-final.txt"));
        Iterator<String> it = customers.values().iterator();
        while (it.hasNext()) writer.write(it.next() + "\n");
        writer.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
If you have any questions, ask me.

Read file and get key=value without using java.util.Properties

I'm building an RMI game, and the client would load a file that has some keys and values which are going to be used on several different objects. It is a save-game file, but I can't use java.util.Properties for this (that restriction is part of the specification). I have to read the entire file and ignore commented lines and the keys that are not relevant in some classes. These properties are unique, but they may be sorted in any order. My current file looks like this:
# Bio
playerOrigin=Newlands
playerClass=Warlock
# Armor
playerHelmet=empty
playerUpperArmor=armor900
playerBottomArmor=armor457
playerBoots=boot109
etc
These properties are going to be written and placed according to the player's progress, and the file reader would have to read to the end of the file and pick up only the matching keys. I've tried different approaches, but so far nothing has come close to the results I would have with java.util.Properties. Any ideas?
This will read your "properties" file line by line and parse each input line and place the values in a key/value map. Each key in the map is unique (duplicate keys are not allowed).
package samples;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.TreeMap;

public class ReadProperties {

    public static void main(String[] args) {
        try {
            TreeMap<String, String> map = getProperties("./sample.properties");
            System.out.println(map);
        }
        catch (IOException e) {
            // error using the file
        }
    }

    public static TreeMap<String, String> getProperties(String infile) throws IOException {
        final int lhs = 0;
        final int rhs = 1;
        TreeMap<String, String> map = new TreeMap<String, String>();
        BufferedReader bfr = new BufferedReader(new FileReader(new File(infile)));
        String line;
        while ((line = bfr.readLine()) != null) {
            if (!line.startsWith("#") && !line.isEmpty()) {
                String[] pair = line.trim().split("=");
                map.put(pair[lhs].trim(), pair[rhs].trim());
            }
        }
        bfr.close();
        return (map);
    }
}
The output looks like:
{playerBoots=boot109, playerBottomArmor=armor457, playerClass=Warlock, playerHelmet=empty, playerOrigin=Newlands, playerUpperArmor=armor900}
You access each element of the map with map.get("key string");.
EDIT: this code doesn't check for a malformed or missing "=" string. You could add that yourself on the return from split by checking the size of the pair array.
I'm currently unable to come up with a framework that would just provide that (I'm sure there are plenty, though); however, you should be able to do it yourself.
Basically, you just read the file line by line and check whether the first non-whitespace character is a hash (#) or whether the line is whitespace only. You ignore those lines and try to split the others on =. If such a split doesn't give you an array of exactly 2 strings, you have a malformed entry and can handle that accordingly. Otherwise the first array element is your key and the second is your value.
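A minimal sketch of that approach (the class and method names, and the choice to split only on the first '=', are my assumptions, not part of the answer):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.LinkedHashMap;
import java.util.Map;

public final class SaveFileParser {

    // Reads key=value pairs, skipping blank lines and lines whose first
    // non-whitespace character is '#'.
    public static Map<String, String> parse(String path) throws IOException {
        Map<String, String> result = new LinkedHashMap<>();
        for (String raw : Files.readAllLines(Paths.get(path))) {
            String line = raw.trim();
            if (line.isEmpty() || line.startsWith("#")) continue;
            String[] pair = line.split("=", 2);      // split on the first '=' only
            if (pair.length != 2) {
                continue;                            // malformed entry -- throw or log if you prefer
            }
            result.put(pair[0].trim(), pair[1].trim());
        }
        return result;
    }
}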
Alternately, you could use a regular expression to get the key/value pairs.
(?m)^(?!#)(\w+)=(\w+)$
will return capture groups for each key and its value, and will skip comment lines; the (?!#) lookahead rejects lines starting with # without consuming the first character of the key.
EDIT:
If you are already reading the file line by line and skipping comments yourself, the per-line pattern can be a bit simpler:
(\w+)=(\w+)
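For completeness, a sketch of how the multiline pattern above could be applied to the whole file content with java.util.regex (the class name and the use of a LinkedHashMap are illustrative assumptions):
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class RegexPropertiesReader {

    // One key=value pair per line; comment lines are rejected by the (?!#) lookahead
    private static final Pattern ENTRY = Pattern.compile("(?m)^(?!#)(\\w+)=(\\w+)$");

    public static Map<String, String> read(String path) throws IOException {
        String content = new String(Files.readAllBytes(Paths.get(path)), StandardCharsets.UTF_8);
        Map<String, String> result = new LinkedHashMap<>();
        Matcher m = ENTRY.matcher(content);
        while (m.find()) {
            result.put(m.group(1), m.group(2));   // group 1 = key, group 2 = value
        }
        return result;
    }
}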
After some study I came up with this solution:
public static String[] getUserIdentification(File file) throws IOException {
    String key[] = new String[3];
    FileReader fr = new FileReader(file);
    BufferedReader br = new BufferedReader(fr);
    String lines;
    try {
        while ((lines = br.readLine()) != null) {
            String[] value = lines.split("=");
            if (lines.startsWith("domain=") && key[0] == null) {
                if (value.length <= 1) {
                    throw new IOException("Missing domain information");
                } else {
                    key[0] = value[1];
                }
            }
            if (lines.startsWith("user=") && key[1] == null) {
                if (value.length <= 1) {
                    throw new IOException("Missing user information");
                } else {
                    key[1] = value[1];
                }
            }
            if (lines.startsWith("password=") && key[2] == null) {
                if (value.length <= 1) {
                    throw new IOException("Missing password information");
                } else {
                    key[2] = value[1];
                }
            } else
                continue;
        }
        br.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return key;
}
I'm using this piece of code to check the properties. Of course it would be wiser to use the Properties library, but unfortunately I can't.
A shorter way to do it:
Properties properties = new Properties();
String confPath = "src/main/resources/.env";
try {
properties.load(new FileInputStream(confPath));
} catch (IOException e) {
e.printStackTrace();
}
String specificValueByKey = properties.getProperty("KEY");
Set<Object> allKeys = properties.keySet();
Collection<Object> values = properties.values();
