I'm building an RMI game, and the client has to load a file that holds some keys and values which are used by several different objects. It is a save-game file, but I can't use java.util.Properties for this (the specification doesn't allow it). I have to read the entire file, ignore commented lines, and skip the keys that are not relevant in some classes. These properties are unique, but they may appear in any order. My current file looks like this:
# Bio
playerOrigin=Newlands
playerClass=Warlock
# Armor
playerHelmet=empty
playerUpperArmor=armor900
playerBottomArmor=armor457
playerBoots=boot109
etc
These properties are going to be written and placed according to the player's progress, and the file reader has to go through the whole file and pick up only the matching keys. I've tried different approaches, but so far nothing came close to the results I would have had using java.util.Properties. Any ideas?
This will read your "properties" file line by line, parse each input line, and place the values in a key/value map. Each key in the map is unique (duplicate keys are not allowed).
package samples;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.TreeMap;

public class ReadProperties {

    public static void main(String[] args) {
        try {
            TreeMap<String, String> map = getProperties("./sample.properties");
            System.out.println(map);
        }
        catch (IOException e) {
            // error using the file
        }
    }

    public static TreeMap<String, String> getProperties(String infile) throws IOException {
        final int lhs = 0;
        final int rhs = 1;
        TreeMap<String, String> map = new TreeMap<String, String>();
        BufferedReader bfr = new BufferedReader(new FileReader(new File(infile)));
        String line;
        while ((line = bfr.readLine()) != null) {
            if (!line.startsWith("#") && !line.isEmpty()) {
                String[] pair = line.trim().split("=");
                map.put(pair[lhs].trim(), pair[rhs].trim());
            }
        }
        bfr.close();
        return(map);
    }
}
The output looks like:
{playerBoots=boot109, playerBottomArmor=armor457, playerClass=Warlock, playerHelmet=empty, playerOrigin=Newlands, playerUpperArmor=armor900}
You access each element of the map with map.get("key string");.
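For instance, with the sample file above, a single lookup would look like this (purely illustrative):

String playerClass = map.get("playerClass"); // "Warlock" for the sample file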
EDIT: this code doesn't check for a malformed or missing "=" string. You could add that yourself on the return from split by checking the size of the pair array.
I'm currently unable to come up with a framework that would provide just that (I'm sure there are plenty, though). However, you should be able to do it yourself.
Basically, you just read the file line by line and check whether the first non-whitespace character is a hash (#) or whether the line is whitespace only. You ignore those lines and try to split the others on =. If such a split doesn't give you an array of exactly 2 strings, you have a malformed entry and handle that accordingly. Otherwise, the first array element is your key and the second is your value.
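A minimal sketch of that approach (assuming the whole file fits in memory, reusing the sample.properties name from the answer above, and running inside a method that declares throws IOException; adjust the malformed-entry handling to your needs):

Map<String, String> props = new HashMap<>();
for (String raw : Files.readAllLines(Paths.get("sample.properties"))) {
    String line = raw.trim();
    if (line.isEmpty() || line.startsWith("#")) {
        continue; // skip blank lines and comments
    }
    String[] pair = line.split("=", 2);
    if (pair.length != 2) {
        throw new IOException("Malformed entry: " + raw); // or log and skip
    }
    props.put(pair[0].trim(), pair[1].trim());
}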
Alternately, you could use a regular expression to get the key/value pairs.
(?m)^(?!#)([\w]+)=([\w]+)$
will return capture groups for each key and its value, and will ignore comment lines.
EDIT:
This can be made a bit simpler if you rely on comment lines never containing an = pair:
([\w]+)=([\w]+)
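To actually apply the multiline form in Java, something along these lines should work (a sketch; Files.readString and Path.of need Java 11, on older versions read the file into a String another way):

Pattern p = Pattern.compile("(?m)^(?!#)(\\w+)=(\\w+)$");
Matcher m = p.matcher(Files.readString(Path.of("sample.properties")));
Map<String, String> map = new TreeMap<>();
while (m.find()) {
    map.put(m.group(1), m.group(2)); // group 1 = key, group 2 = value
}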
After some study I came up with this solution:
public static String[] getUserIdentification(File file) throws IOException {
    String key[] = new String[3];
    FileReader fr = new FileReader(file);
    BufferedReader br = new BufferedReader(fr);
    String lines;
    try {
        while ((lines = br.readLine()) != null) {
            String[] value = lines.split("=");
            if (lines.startsWith("domain=") && key[0] == null) {
                if (value.length <= 1) {
                    throw new IOException("Missing domain information");
                } else {
                    key[0] = value[1];
                }
            }
            if (lines.startsWith("user=") && key[1] == null) {
                if (value.length <= 1) {
                    throw new IOException("Missing user information");
                } else {
                    key[1] = value[1];
                }
            }
            if (lines.startsWith("password=") && key[2] == null) {
                if (value.length <= 1) {
                    throw new IOException("Missing password information");
                } else {
                    key[2] = value[1];
                }
            } else
                continue;
        }
        br.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return key;
}
I'm using this piece of code to check the properties. Of course, it would be wiser to use the Properties library, but unfortunately I can't.
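For example, the returned array can then be unpacked like this (the file name here is just a placeholder):

String[] id = getUserIdentification(new File("login.properties")); // placeholder file name
String domain = id[0];
String user = id[1];
String password = id[2];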
A shorter way to do it:
Properties properties = new Properties();
String confPath = "src/main/resources/.env";
try {
    properties.load(new FileInputStream(confPath));
} catch (IOException e) {
    e.printStackTrace();
}
String specificValueByKey = properties.getProperty("KEY");
Set<Object> allKeys = properties.keySet();
Collection<Object> values = properties.values();
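If you need to walk every pair rather than look up a single key, a small illustrative addition:

for (String key : properties.stringPropertyNames()) {
    System.out.println(key + " = " + properties.getProperty(key));
}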
Related
I'm having trouble deciding on the best approach to reading a CSV file in order to extract and compare certain things in it. The file is made up of strings, and I need to keep track of whether there are duplicated items. Here is what I have so far.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
public class CSVReader {

    public static void main(String[] args) {
        String csvFile = "Cchallenge.csv";
        String line = "";
        String cvsSplitBy = ",";

        try (BufferedReader br = new BufferedReader(new FileReader(csvFile))) {
            while ((line = br.readLine()) != null) {
                // use comma as separator
                String[] country = line.split(cvsSplitBy);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
So I made an array called country with all the data. But when I go to print out each array's length, it gives me a lot of different arrays with varying sizes. I am having a hard time traversing the arrays and extracting the duplicates. Any ideas will help, thanks.
If you simply wish to get a list of the items without any duplicates, then you could collect the items into a set, as sets do not allow duplicate items:
Set<String> items = new HashSet<>();
try (BufferedReader br = new BufferedReader(new FileReader(csvFile))) {
    while ((line = br.readLine()) != null) {
        items.addAll(Arrays.asList(line.split(cvsSplitBy)));
    }
} catch (IOException e) {
    e.printStackTrace();
}
If you also want to keep track of the duplicates, you could use another set and add items to it if they already exist in the first set. This is easy to accomplish, since the add method of Set returns a boolean telling you whether the set already contained the element (it returns false if the element was already present):
Set<String> items = new HashSet<>();
Set<String> duplicates = new HashSet<>();
try (BufferedReader br = new BufferedReader(new FileReader(csvFile))) {
    while ((line = br.readLine()) != null) {
        for (String item : line.split(cvsSplitBy)) {
            if (items.add(item)) {
                continue;
            }
            duplicates.add(item);
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}
Tab-Separated File:
2019-06-06 10:00:00 1.0
2019-06-06 11:00:00 2.0
I'd like to iterate over the file once and add the value of each column to a list.
My working approach would be:
import java.util.*;
import java.io.*;
public class Program {

    public static void main(String[] args)
    {
        ArrayList<Double> List_1 = new ArrayList<Double>();
        ArrayList<Double> List_2 = new ArrayList<Double>();
        String[] values = null;
        String fileName = "File.txt";
        File file = new File(fileName);
        try
        {
            Scanner inputStream = new Scanner(file);
            while (inputStream.hasNextLine()){
                try {
                    String data = inputStream.nextLine();
                    values = data.split("\\t");
                    if (values[1] != null && !values[1].isEmpty() == true) {
                        double val_1 = Double.parseDouble(values[1]);
                        List_1.add(val_1);
                    }
                    if (values[2] != null && !values[2].isEmpty() == true) {
                        double val_2 = Double.parseDouble(values[2]);
                        List_2.add(val_2);
                    }
                }
                catch (ArrayIndexOutOfBoundsException exception){
                }
            }
            inputStream.close();
        }
        catch (FileNotFoundException e) {
            e.printStackTrace();
        }
        System.out.println(List_1);
        System.out.println(List_2);
    }
}
I get:
[1.0]
[2.0]
It doesn't work without the checks for null, isEmpty and the ArrayIndexOutOfBoundsException.
I would appreciate any hints on how to save a few lines while keeping the scanner approach.
One option is to create a Map of Lists, using the column number as the key. This approach gives you an "unlimited" number of columns and exactly the same output as the one in the question.
public class Program {

    public static void main(String[] args) throws Exception
    {
        Map<Integer, List<Double>> listMap = new TreeMap<Integer, List<Double>>();
        String[] values = null;
        String fileName = "File.csv";
        File file = new File(fileName);
        Scanner inputStream = new Scanner(file);
        while (inputStream.hasNextLine()){
            String data = inputStream.nextLine();
            values = data.split("\\t");
            for (int column = 1; column < values.length; column++) {
                List<Double> list = listMap.get(column);
                if (list == null) {
                    listMap.put(column, list = new ArrayList<Double>());
                }
                if (!values[column].isEmpty()) {
                    list.add(Double.parseDouble(values[column]));
                }
            }
        }
        inputStream.close();
        for (List<Double> list : listMap.values()) {
            System.out.println(list);
        }
    }
}
You can clean up your code some by using try-with-resources to open and close the Scanner for you:
try (Scanner inputStream = new Scanner(file))
{
//your code...
}
This is useful because the inputStream will be closed automatically once the try block is left and you will not need to close it manually with inputStream.close();.
Additionally if you really want to "save lines" you can also combine these steps:
double val_2 = Double.parseDouble(values[2]);
List_2.add(val_2);
Into a single step each, since you do not actually use the val_2 anywhere else:
List_2.add(Double.parseDouble(values[2]));
Finally, you are also using !values[1].isEmpty() == true, which compares a boolean value to true. This is typically considered bad practice, and you can reduce it to !values[1].isEmpty(), which has the same functionality. Try not to use == with booleans; there is no need.
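Putting those suggestions together, the loop could shrink to something like the sketch below. It keeps your original column indexes; the length checks stand in for the null tests and the ArrayIndexOutOfBoundsException catch, since split never returns null elements:

try (Scanner inputStream = new Scanner(file)) {
    while (inputStream.hasNextLine()) {
        String[] values = inputStream.nextLine().split("\\t");
        if (values.length > 1 && !values[1].isEmpty()) {
            List_1.add(Double.parseDouble(values[1]));
        }
        if (values.length > 2 && !values[2].isEmpty()) {
            List_2.add(Double.parseDouble(values[2]));
        }
    }
} catch (FileNotFoundException e) {
    e.printStackTrace();
}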
You can do it like below:
BufferedReader bfr = Files.newBufferedReader(Paths.get("inputFileDir.tsv"));
String line = null;
List<List<String>> listOfLists = new ArrayList<>(100);
while ((line = bfr.readLine()) != null) {
    String[] cols = line.split("\\t");
    List<String> outputList = new ArrayList<>(Arrays.asList(cols));
    //at this line your expected list of cols of each line is ready to use.
    listOfLists.add(outputList);
}
As a matter of fact, it is simple code in Java. But because it seems that you are a beginner in Java who codes like a Python programmer, I decided to write a sample to give you a good starting point. Good luck.
So far, I have this project where I read in a properties file using PropertiesConfiguration (from Apache), edit the values I would like to edit, and then save the changes to the file. It keeps the comments, formatting and such, but one thing it does change is multi-line values formatted like this:
key=value1,\
value2,\
value3
and turns it into the array style:
key=value1,value2,value3
I would like to be able to print those lines formatted as they were before.
I did this via this method:
PropertiesConfiguration config = new PropertiesConfiguration(configFile);
config.setProperty(key,value);
config.save();
I created a work around in case anyone else needs this functionality. Also, there is probably a better way to do this, but this solution currently works for me.
First, set your PropertiesConfiguration delimiter to the new line character like so:
PropertiesConfiguration config = new PropertiesConfiguration(configFile);
config.setListDelimiter('\n');
Then you will need to iterate through and update all properties (to set the format):
Iterator<String> keys = config.getKeys();
while (keys.hasNext()) {
    String key = keys.next();
    config.setProperty(key, setPropertyFormatter(key, config.getProperty(key)));
}
Use this method to format your value list data (as shown above):
private List<String> setPropertyFormatter(String key, Object list) {
    List<String> tempProperties = new ArrayList<>();
    Iterator<?> propertyIterator = PropertyConverter.toIterator(list, '\n');
    String indent = new String(new char[key.length() + 1]).replace('\0', ' ');
    Boolean firstIteration = true;
    while (propertyIterator.hasNext()) {
        String value = propertyIterator.next().toString();
        Boolean lastIteration = !propertyIterator.hasNext();
        if (firstIteration && lastIteration) {
            tempProperties.add(value);
            continue;
        }
        if (firstIteration) {
            tempProperties.add(value + ",\\");
            firstIteration = false;
            continue;
        }
        if (lastIteration) {
            tempProperties.add(indent + value);
            continue;
        }
        tempProperties.add(indent + value + ",\\");
    }
    return tempProperties;
}
Then it is going to be almost correct, except the save function takes the double backslash that is stored in the List and turns it into 4 backslashes in the file! So you need to replace those with a single backslash. I did this like so:
try {
    config.save(new File(filePath));
    byte[] readIn = Files.readAllBytes(Paths.get(filePath));
    String replacer = new String(readIn, StandardCharsets.UTF_8).replace("\\\\\\\\", "\\");
    BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(filePath, false), "UTF-8"));
    bw.write(replacer);
    bw.close();
} catch (ConfigurationException | IOException e) {
    e.printStackTrace();
}
With commons-configuration2, you would handle such cases with a custom PropertiesWriter implementation, as described in its documentation under "Custom properties readers and writers" (that section leans toward readers, though).
A writer provides a way to govern writing of each character that is to be written to the properties file, so you can achieve pretty much anything you desire with it (via PropertiesWriter.write(String)). There is also a convenient method that writes proper newlines (PropertiesWriter.writeln(String)).
For example, I had to handle classpath entries in a NetBeans Ant project's project.properties file:
public class ClasspathPropertiesWriter extends PropertiesConfiguration.PropertiesWriter {

    public ClasspathPropertiesWriter(Writer writer, ListDelimiterHandler delimiter) {
        super(writer, delimiter);
    }

    @Override
    public void writeProperty(String key, Object value, boolean forceSingleLine) throws IOException {
        switch (key) {
            case "javac.classpath":
            case "run.classpath":
            case "javac.test.classpath":
            case "run.test.classpath":
                String str = (String) value;
                String[] split = str.split(":");
                if (split.length > 1) {
                    write(key);
                    write("=\\");
                    writeln(null);
                    for (int i = 0; i < split.length; i++) {
                        write(" ");
                        write(split[i]);
                        if (i != split.length - 1) {
                            write(":\\");
                        }
                        writeln(null);
                    }
                } else {
                    super.writeProperty(key, value, forceSingleLine);
                }
                break;
            default:
                super.writeProperty(key, value, forceSingleLine);
                break;
        }
    }
}
public class CustomIOFactory extends PropertiesConfiguration.DefaultIOFactory {

    @Override
    public PropertiesConfiguration.PropertiesWriter createPropertiesWriter(
            Writer out, ListDelimiterHandler handler) {
        return new ClasspathPropertiesWriter(out, handler);
    }
}
Parameters params = new Parameters();
FileBasedConfigurationBuilder<FileBasedConfiguration> builder =
    new FileBasedConfigurationBuilder<FileBasedConfiguration>(PropertiesConfiguration.class)
        .configure(params.properties()
            .setFileName("project.properties")
            .setIOFactory(new CustomIOFactory()));
Configuration config = builder.getConfiguration();
builder.save();
I have a text file containing data in the below format:
Vehicle:Bike
MOdel:FZ
Make:
Yamaha
Description
abcdefgh
ijklmn
problems
gear problem, fork bend.
this is auto data
***********************************end***********************
Vehicle:Bike
MOdel:R15
Make:
Yamaha
Description
1234,
567.
890
problems
gear problem, fork bend.
oil leakage
this is auto data
***********************************end***********************
I have given 2 entries, but there are many more like these in the text file. I want to read it and store it in a HashMap such that:
Bike:FZ:Yamaha:abcdefghijklmn:gear problem,fork bend.
Bike:R15:Yamaha:1234,567.890:gear problem,fork bend.oil leakage
My sample code:
public static void main(String[] args) {
    try {
        BufferedReader br = new BufferedReader(new FileReader("data.txt"));
        String sCurrentLine;
        int i = 0;
        int j = 0;
        HashMap<Integer, String> hmap = new HashMap<Integer, String>();
        String vehicle = null;
        String model = null;
        while ((sCurrentLine = br.readLine()) != null) {
            System.out.println(sCurrentLine);
            sCurrentLine = sCurrentLine.trim();
            if (!sCurrentLine.equals("")) // don't write out blank lines
            {
                if (sCurrentLine.startsWith("***********")) {
                    i++;
                } else {
                    if (sCurrentLine.startsWith("Vehicle:")) {
                        String[] veh = sCurrentLine.split(":");
                        vehicle = veh[1];
                    }
                    if (sCurrentLine.startsWith("Model:")) {
                        String[] mod = sCurrentLine.split(":");
                        model = mod[1];
                    }
                    hmap.put(0, i + ":" + vehicle + ":" + model);
                }
            }
            j++;
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Not sure how to read the Make, Description & problems attributes.
You'll need an ObjectInputStream.
An example:
/* Create an ObjectInputStream for your text file
* and a hash map to store the values in. */
ObjectInputStream obj = new ObjectInputStream(new FileInputStream(textFile));
hmap = (HashMap<String, String>) obj.readObject(); // I assume you want strings.
hmap.put("value", var); // Var can be whatever other strings you created.
// It is always a good idea to close streams.
obj.close();
Just remember that, if you need another variable type placed into the hash map, you can create it with something like HashMap<String, byte[]>.
Obviously, you'll need to implement your already-created methods to determine each variable.
If I have not been specific enough, or have missed something important, let me know.
I'm trying to read lines of text from a text file and put each line into a Map so that I can delete duplicate words (e.g. "test test") and print out the lines without the duplicate words. I must be doing something wrong, though, because I basically get just one line as my key, versus each line being read one at a time. Any thoughts? Thanks.
public DeleteDup(File f) throws IOException {
    line = new HashMap<String, Integer>();
    try {
        BufferedReader in = new BufferedReader(new FileReader(f));
        Integer lineCount = 0;
        for (String s = null; (s = in.readLine()) != null;) {
            line.put(s, lineCount);
            lineCount++;
            System.out.println("s: " + s);
        }
    }
    catch(IOException e) {
        e.printStackTrace();
    }
    this.deleteDuplicates(line);
}
private Map<String, Integer> line;
To be honest, your question isn't particularly clear - it's not obvious why you've got the lineCount, or what deleteDuplicates will do, or why you've named the line variable that way when it's not actually a line - it's a map from lines to the last line number on which that line appeared.
Unless you need the line numbers, I'd use a Set<String> instead.
However, all that aside, if you look at the keySet of line afterwards, it will be all the lines. That's assuming that the text file is genuinely in the default encoding for your system (which is what FileReader uses, unfortunately - I generally use InputStreamReader and specify the encoding explicitly).
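A minimal sketch of that combination (assuming the file really is UTF-8; a LinkedHashSet keeps the original line order while dropping duplicate lines):

Set<String> uniqueLines = new LinkedHashSet<>();
try (BufferedReader in = new BufferedReader(
        new InputStreamReader(new FileInputStream(f), StandardCharsets.UTF_8))) {
    for (String s; (s = in.readLine()) != null; ) {
        uniqueLines.add(s); // duplicates are ignored by the Set
    }
}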
If you could give us a short but complete program, the text file you're using as input, the expected output and the actual output, that would be helpful.
What I understood from your question is that you want to print the lines which do not have duplicate words in them.
Maybe you could try the following snippet for it.
public void deleteDup(File f)
{
    try
    {
        BufferedReader in = new BufferedReader(new FileReader(f));
        Integer wordCount = 0;
        boolean isDuplicate = false;
        String[] arr = null;
        for (String line = null; (line = in.readLine()) != null;)
        {
            isDuplicate = false;
            wordCount = 0;
            wordMap.clear();
            arr = line.split("\\s+");
            for (String word : arr)
            {
                wordCount = wordMap.get(word);
                if (null == wordCount)
                {
                    wordCount = 1;
                }
                else
                {
                    wordCount++;
                    isDuplicate = true;
                    break;
                }
                wordMap.put(word, wordCount);
            }
            if (!isDuplicate)
            {
                lines.add(line);
            }
        }
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
}

private Map<String, Integer> wordMap = new HashMap<String, Integer>();
private List<String> lines = new ArrayList<String>();
In this snippet, lines will contain the lines which do not have duplicate words in them.
It would have been easier to find your problem if we knew what
this.deleteDuplicates(line);
tries to do. Maybe it is not clearing the data structures it uses, so words checked on previous lines are counted for later lines too, even though they are not present there.
Your question is not very clear.
But going through your code snippet, I think you are trying to remove duplicate words in each line.
The following code snippet might be helpful.
public class StackOverflow {

    public static void main(String[] args) throws IOException {
        List<Set<String>> unique = new ArrayList<Set<String>>();
        BufferedReader reader = new BufferedReader(
                new FileReader("C:\\temp\\testfile.txt"));
        String line = null;
        while ((line = reader.readLine()) != null) {
            String[] stringArr = line.split("\\s+");
            Set<String> strSet = new HashSet<String>();
            for (String tmpStr : stringArr) {
                strSet.add(tmpStr);
            }
            unique.add(strSet);
        }
    }
}
The only problem I see with your code is that DeleteDup doesn't have a return type specified.
Otherwise the code looks fine and reads from the file properly.
Please post the deleteDuplicates method code and the file used.
You are printing out every line read, not just the unique lines.
Your deleteDuplicateLines() method won't do anything, as there will never be any duplicates in the HashMap.
So it isn't at all clear what your actual problem is.