Adding new key-value pair gets other keys' values replaced in HashMap - java

So, I have a HashMap<String,ArrayList> that stores an ArrayList per String key. But when I add another pair with a new ArrayList value, the other keys' values are replaced too. Hence, the values for all the different keys end up the same.
public class Reports{
private ArrayList<Resource> resourceList;
private HashMap<String,ArrayList<Resource>> consolidatedAttendance = new HashMap<String,ArrayList<Resource>>();
public void readReport(String reportFile){
//initialized with resources from config file
ArrayList<Resource> repResourceList = new ArrayList<Resource>(getResourceList());
try (BufferedReader br = new BufferedReader(new FileReader(reportFile))) {
String line;
line = br.readLine(); // disregards first line (columns)
while ((line = br.readLine()) != null) {
if(line.length()==0){
break;
}
//store each resource status in resourceList
String[] values = line.split(",");
String resourceName = values[1], resourceStatus = values[2];
int resourceIndex = indexOfResource(resourceList, resourceName);
// to add validation
if(resourceIndex!=-1){
repResourceList.get(resourceIndex).setStatus(resourceStatus);
}
}
}catch(IOException e){
e.printStackTrace();
}
//get Date
String reportFilename = reportFile.substring(0, reportFile.indexOf("."));
String strDate = reportFilename.substring(reportFilename.length()-9);
consolidateRecords(strDate, new ArrayList<Resource>(repResourceList));
}
public void consolidateRecords(String strDate, ArrayList<Resource> repResourceList){
//consolidate records in hashmap
consolidatedAttendance.put(strDate, repResourceList);
// test print
for (String key: consolidatedAttendance.keySet()){
ArrayList<Resource> resources = consolidatedAttendance.get(key);
for(Resource resource: resources){
System.out.println(key+": "+resource.getNickname()+" "+resource.getEid()+" "+resource.getStatus());
}
}
}
}
So the output for the map when it is printed is:
First key added:
"21-Dec-20": John Working
"21-Dec-20": Alice Working
"21-Dec-20": Jess Working
For the second report, the list read from the file is different. But when the second key is added (after the put() call), the first key's values have been replaced as well:
"21-Dec-20": John SL
"21-Dec-20": Alice Working
"21-Dec-20": Jess SL
"28-Dec-20": John SL
"28-Dec-20": Alice Working
"28-Dec-20": Jess SL

The values of your Map are Lists whose elements are the same objects as the elements of the List returned by getResourceList(). The fact that you are creating a copy of that List (twice) doesn't change that: copying a List copies the references, not the Resource objects they point to.
If each call to getResourceList() returns a List containing the same instances, all the keys in your Map will be associated with different Lists that contain the same instances, so mutating a Resource through one list is visible through all of them.
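To make the fix concrete, here is a minimal, self-contained sketch (the `Resource` fields and the copy constructor are assumptions, since the class isn't shown): build each report's list from fresh copies, so a later `setStatus` call can't leak into previously stored reports.

```java
import java.util.ArrayList;
import java.util.List;

public class CopyDemo {
    // Minimal stand-in for the Resource class from the question (fields assumed).
    public static class Resource {
        private final String nickname;
        private String status;

        public Resource(String nickname, String status) {
            this.nickname = nickname;
            this.status = status;
        }

        // Copy constructor: produces an independent instance.
        public Resource(Resource other) {
            this(other.nickname, other.status);
        }

        public void setStatus(String status) { this.status = status; }
        public String getStatus() { return status; }
    }

    // Deep-copies the template list so mutating one report's resources
    // cannot affect any other report's resources.
    public static List<Resource> deepCopy(List<Resource> template) {
        List<Resource> copy = new ArrayList<>();
        for (Resource r : template) {
            copy.add(new Resource(r));
        }
        return copy;
    }

    public static void main(String[] args) {
        List<Resource> template = new ArrayList<>();
        template.add(new Resource("John", "Working"));

        List<Resource> report1 = deepCopy(template);
        List<Resource> report2 = deepCopy(template);
        report2.get(0).setStatus("SL"); // only affects report2

        System.out.println(report1.get(0).getStatus()); // Working
        System.out.println(report2.get(0).getStatus()); // SL
    }
}
```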

Related

How to select random text value from specific row using java

I have three input fields.
First Name
Last Name
Date Of Birth
I would like to get random data for each input from a property file.
This is how the property file looks. Field name and = should be ignored.
- First Name= Robert, Brian, Shawn, Bay, John, Paul
- Last Name= Jerry, Adam ,Lu , Eric
- Date of Birth= 01/12/12,12/10/12,1/2/17
Example: for First Name, it should randomly select one name from the following names:
Robert, Brian, Shawn, Bay, John, Paul
Also, I need to ignore anything before the =.
FileInputStream objfile = new FileInputStream(System.getProperty("user.dir") + path);
in = new BufferedReader(new InputStreamReader(objfile ));
String line = in.readLine();
while (line != null && !line.trim().isEmpty()) {
String eachRecord[]=line.trim().split(",");
Random rand = new Random();
//I need to pick first name randomly from the file from row 1.
send(firstName,(eachRecord[0]));
If you know that you're always going to have just those 3 lines in your property file, I would put each into a map with an index as the key, then randomly generate a key in the range of the map.
// your code here to read the file in
HashMap<Integer, String> firstNameMap = new HashMap<Integer, String>();
HashMap<Integer, String> lastNameMap = new HashMap<Integer, String>();
HashMap<Integer, String> dobMap = new HashMap<Integer, String>();
String line;
while ((line = in.readLine()) != null) {
    String[] parts = line.split("=");
    if (parts[0].trim().equals("First Name")) {
        String[] values = parts[1].split(",");
        for (int i = 0; i < values.length; ++i) {
            firstNameMap.put(i, values[i].trim());
        }
    }
    else if (parts[0].trim().equals("Last Name")) {
        // do the same as FN but for lastNameMap
    }
    else if (parts[0].trim().equals("Date of Birth")) {
        // do the same as FN but for dobMap
    }
}
// Now you can use the size of the map and a random number to get a value
// first name for instance:
int randomNum = ThreadLocalRandom.current().nextInt(0, firstNameMap.size());
System.out.println("First Name: " + firstNameMap.get(randomNum));
// and you would do the same for the other fields
// and you would do the same for the other fields
The code can easily be refactored with some helper methods to make it cleaner, we'll leave that as a HW assignment :)
This way you have a cache of all your values that you can query at any time for a random value. I realize this isn't the most optimal solution, with the nested loops and three different maps, but if your input file only contains 3 lines and you're not expecting millions of inputs it should be just fine.
Haven't programmed stuff like this in a long time.
Feel free to test it, and let me know if it works.
The result of this code should be a HashMap object called values
You can then get the specific fields you want from it, using get(field_name)
For example - values.get("First Name"). Make sure to use the correct case, because "first name" won't work.
If you want it all to be lower case, you can just add .toLowerCase() at the end of the line that puts the field and value into the HashMap
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;

public class Test
{
    // arguments are passed using the text field below this editor
    public static void main(String[] args) throws IOException
    {
        // set the value of "in" here, so you actually read from it
        // (the file name below is a placeholder)
        BufferedReader in = new BufferedReader(new FileReader("input.properties"));
        HashMap<String, String> values = new HashMap<String, String>();
        String line;
        while (((line = in.readLine()) != null) && !line.trim().isEmpty()) {
            if (!line.contains("=")) {
                continue;
            }
            String[] lineParts = line.split("=");
            String[] eachRecord = lineParts[1].split(",");
            System.out.println("adding value of field type = " + lineParts[0].trim());
            // now add the mapping to the values HashMap - values[field_name] = random_field_value
            values.put(lineParts[0].trim(), eachRecord[(int) (Math.random() * eachRecord.length)].trim());
        }
        in.close();
        System.out.println("First Name = " + values.get("First Name"));
        System.out.println("Last Name = " + values.get("Last Name"));
        System.out.println("Date of Birth = " + values.get("Date of Birth"));
    }
}

Java, sort and display data from txt file

I'm new to Java and have some trouble with one task.
I have a txt file which looks like this:
John Doe,01-01-1980,Development,Senior Developer
Susan Smith,07-12-1983,Development,Head of Development
Ane Key,06-06-1989,BA,Junior Analyst
Nina Simone,21-09-1979,BA,Head of BA
Tom Popa,23-02-1982,Development,Developer
Tyrion Lannyster,17-03-1988,BA,Analyst
and I want to sort it by department.
for example:
Members are :
[Employee Full Name] - [Employee Age] - [Employee Position] - [Employee Salary(default value x)]
Department : Development
Members are :
Susan Smith ......
John Doe ......
Tom Popa ......
Department : BA
Members are :
Nina Simone .......
Ane Key ...........
Tyrion Lannyster ........
At first I read the file and made a 2D array, but I can't work out how to correctly sort it.
public static void main(String[] args) {
String csvFile = "C:\\Employees.txt";
BufferedReader br = null;
String line = "";
String SplitBy = ",";
String myArray[][] = new String[6][5];
int row = 0;
try {
br = new BufferedReader(new FileReader(csvFile));
while ((line = br.readLine()) != null) {
String nums[] = line.split(SplitBy);
for (int col = 0; col < nums.length; col++){
String n =nums[col];
myArray[row][col] = n;
// System.out.println(n);
}
row++;
}
}
catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} finally {
if (br != null) {
try {
br.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
Providing the code solution here would not help you learn it, but I can give you hints on how to proceed. Using an array is not really recommended.
The easy, but dirty way -
Instead of a two-dimensional array, use a TreeMap<String, String[]> where the key is the department concatenated with the name and the value is the one-dimensional array of the individual's details. As we're using a TreeMap, the result is naturally sorted by department, followed by the person's name. Loop through the entrySet and print all the results.
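A minimal sketch of this "dirty way", assuming the comma-separated layout from the question (name,dob,department,position); the composite key makes the TreeMap order entries by department, then by name:

```java
import java.util.Map;
import java.util.TreeMap;

public class TreeMapSort {
    // Index rows under "department|name" so the natural String ordering
    // sorts by department first, then by name within a department.
    public static Map<String, String[]> index(String[] rows) {
        Map<String, String[]> byDeptAndName = new TreeMap<>();
        for (String row : rows) {
            String[] fields = row.split(",");
            byDeptAndName.put(fields[2] + "|" + fields[0], fields);
        }
        return byDeptAndName;
    }

    public static void main(String[] args) {
        String[] rows = {
            "John Doe,01-01-1980,Development,Senior Developer",
            "Susan Smith,07-12-1983,Development,Head of Development",
            "Ane Key,06-06-1989,BA,Junior Analyst"
        };
        // BA rows print before Development rows because "BA" < "Development".
        for (Map.Entry<String, String[]> e : index(rows).entrySet()) {
            String[] f = e.getValue();
            System.out.println(f[2] + ": " + f[0] + " - " + f[3]);
        }
    }
}
```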
Right way -
Define a new class Person with all the members needed and implement the Comparable interface. Read all the input data, populate it into Person objects, add each object to an ArrayList, and use the Collections API's sort method to sort them. Alternatively, you can go the Comparator way.
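A sketch of that "right way": a small value class implementing Comparable (only two of the columns are shown here; the real class would carry them all):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class PersonSortDemo {
    public static class Person implements Comparable<Person> {
        final String name;
        final String department;

        public Person(String name, String department) {
            this.name = name;
            this.department = department;
        }

        // Order by department first, then by name within a department.
        @Override
        public int compareTo(Person other) {
            int byDept = department.compareTo(other.department);
            return byDept != 0 ? byDept : name.compareTo(other.name);
        }

        @Override
        public String toString() {
            return department + ": " + name;
        }
    }

    public static void main(String[] args) {
        List<Person> people = new ArrayList<>();
        people.add(new Person("Tom Popa", "Development"));
        people.add(new Person("Nina Simone", "BA"));
        people.add(new Person("Ane Key", "BA"));
        Collections.sort(people); // uses compareTo
        people.forEach(System.out::println);
    }
}
```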
The Java Collections API allows you to sort, as does java.util.Arrays.
You will need the array methods for your code, but consider moving to some sort of Collection. Perhaps a List to start with.
The easiest way would put the contents of the lines in Java Beans and then sort them using sort.
public class User {
private String name;
// ... all the fields with getters and setters
}
Then adapt your code to something like this:
// create a nice List for the users.
List<User> userList = new ArrayList<>();
while ((line = br.readLine()) != null) {
User user = new User();
String nums[] = line.split(SplitBy);
user.setName(nums[0]);
// create nice method to convert String to Date
user.setDate(convertStringToDate(nums[1]));
// add the user to the list
userList.add(user);
}
// Then finally sort the data according to the desired field.
userList.sort((a, b) -> a.getName().compareTo(b.getName()));

Reading and matching contents of two big files

I have two files, each with the same format and approximately 100,000 lines. For each line in file one I extract the second column, and if I find a match in the second column of the second file, I extract both third components, combine them, and store or output the result.
Though my implementation works, the program runs extremely slowly; it takes more than an hour to iterate over the files, compare, and output all the results.
I am reading and storing the data of both files in ArrayLists, then iterating over those lists and doing the comparison. Below is my code; is there any performance-related glitch, or is this just normal for such an operation?
Note: I was using String.split(), but I understand from other posts that StringTokenizer is faster.
public ArrayList<String> match(String file1, String file2) throws IOException{
ArrayList<String> finalOut = new ArrayList<>();
try {
ArrayList<String> data = readGenreDataIntoMemory(file1);
ArrayList<String> data1 = readGenreDataIntoMemory(file2);
StringTokenizer st = null;
for(String line : data){
HashSet<String> genres = new HashSet<>();
boolean sameMovie = false;
String movie2 = "";
st = new StringTokenizer(line, "|");
//String line[] = fline.split("\\|");
String ratingInfo = st.nextToken();
String movie1 = st.nextToken();
String genreInfo = st.nextToken();
if(!genreInfo.equals("null")){
for(String s : genreInfo.split(",")){
genres.add(s);
}
}
StringTokenizer st1 = null;
for(String line1 : data1){
st1 = new StringTokenizer(line1, "|");
st1.nextToken();
movie2 = st1.nextToken();
String genreInfo2= st1.nextToken();
//If the movie name are similar then they should have the same genre
//Update their genres to be the same
if(!genreInfo2.equals("null") && movie1.equals(movie2)){
for(String s : genreInfo2.split(",")){
genres.add(s);
}
sameMovie = true;
break;
}
}
if(sameMovie){
finalOut.add(ratingInfo + "|" + movie1 + "|" + genres.toString() + "\n");
} else {
finalOut.add(line);
}
}
} catch (FileNotFoundException e) {
e.printStackTrace();
}
return finalOut;
}
I would use the Streams API
String file1 = "files1.txt";
String file2 = "files2.txt";
// get all the lines by movie name for each file.
Map<String, List<String[]>> map = Stream.of(Files.lines(Paths.get(file1)),
Files.lines(Paths.get(file2)))
.flatMap(p -> p)
.parallel()
.map(s -> s.split("[|]", 3))
.collect(Collectors.groupingByConcurrent(sa -> sa[1], Collectors.toList()));
// merge all the genres for each movie.
map.forEach((movie, lines) -> {
Set<String> genres = lines.stream()
.flatMap(l -> Stream.of(l[2].split(",")))
.collect(Collectors.toSet());
System.out.println("movie: " + movie + " genres: " + genres);
});
This has the advantage of being O(n) instead of O(n^2) and it's multi-threaded.
Do a hash join.
As of now you are doing a nested-loop join, which is O(n^2); the hash join will be amortized O(n).
Put the contents of each file in a hash map, with key the field you want (second field).
Map<String, String> map1 = new HashMap<>();
Map<String, String> map2 = new HashMap<>();
// build map1 from file1 and map2 from file2, keyed on the second field
Then do the hash join
for(String key1 : map1.keySet()){
if(map2.containsKey(key1)){
// do your thing you found the match
}
}
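Putting the two steps together, here is a self-contained sketch of the hash join, assuming the question's rating|movie|genres line layout (the helper name `indexBySecondField` is made up for illustration):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class HashJoinDemo {
    // Builds a map keyed on the second "|"-separated field, as the
    // answer describes: movie name -> genre string.
    public static Map<String, String> indexBySecondField(List<String> lines) {
        Map<String, String> map = new HashMap<>();
        for (String line : lines) {
            String[] parts = line.split("\\|");
            map.put(parts[1], parts[2]);
        }
        return map;
    }

    public static void main(String[] args) {
        List<String> file1 = List.of("5|Heat|Crime,Thriller");
        List<String> file2 = List.of("4|Heat|Action", "3|Up|Animation");

        Map<String, String> map1 = indexBySecondField(file1);
        Map<String, String> map2 = indexBySecondField(file2);

        // The join itself: one O(1) lookup per key instead of a nested scan.
        for (Map.Entry<String, String> e : map1.entrySet()) {
            String other = map2.get(e.getKey());
            if (other != null) {
                System.out.println(e.getKey() + ": " + e.getValue() + "," + other);
                // prints "Heat: Crime,Thriller,Action"
            }
        }
    }
}
```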

How to find different genre then add to arrayed linked list

I'm trying to get genres from a file of movies and then check whether a genre is the same, so movies in the file with the same genre are stored in the same array of linked lists.
I need to check the genre's hashcode, find out whether any other movie in the file has the same hashcode for its genre, and then place that particular genre in a linked list. So basically I have this so far; I also have a linked-list class etc. and am not using java.util.
public class LoadingMovies {
private static final int size = 22;
private static HashMap<String, Movies> hash = new HashMap(size);
private static HashEntry<String, Movies> hasher = new HashEntry();
private static List<Movies> linked = new List<>();
public static void loadMovies(String filename) throws FileNotFoundException {
String split = ","; //split with comma
Scanner in = new Scanner(new File(filename));
ArrayList<String> arraylist = new ArrayList<>();
while (in.hasNextLine()) {
String wordIn = in.nextLine();
// create a fresh Movies object per line, otherwise every
// entry would point at the same (last) movie
Movies movie = new Movies();
String splitter[] = wordIn.split(split);
String movieTitle = splitter[0];
String movieGenre = splitter[1];
String ageRating = splitter[2];
double scoreRating = Double.parseDouble(splitter[3]);
movie.setTitle(movieTitle);
movie.setGenre(movieGenre);
movie.setAgeRating(ageRating);
movie.setScoreRating(scoreRating);
//System.out.println(movie.getGenre());
arraylist.add(movie.getGenre());
// hash.find(movie.getGenre().hashCode());
// hash.insert(movie.getGenre().hashCode(), movie);
}
}
}
This is what I have so far. I have already read in the file; now I want to check whether a genre (String) in the file is the same as another and then add that genre to the linked list. How can I do this?
A HashMap<String, List<Movie>> seems to be what you're looking for:
Map<String, List<Movie>> movieGenres = new HashMap<>();
while (in.hasNextLine()) {
// code you have
List<Movie> moviesInThisGenre = movieGenres.get(genre);
if (moviesInThisGenre == null) {
moviesInThisGenre = new LinkedList<>();
movieGenres.put(genre, moviesInThisGenre);
}
moviesInThisGenre.add(movie);
}
You would need a Map that maps from genre String to Lists of Movies:
HashMap<String, List<Movies>> genres = new HashMap<>();
then when you add a Movie:
String g = movie.getGenre();
if (!genres.containsKey(g))
genres.put(g, new ArrayList<Movies>());
genres.get(g).add(movie);
Explanation
A HashMap stores values for key objects, where each key object can only occur once in the map. Thus, when you want to store multiple movies for one genre String, the value type should be a collection (List, Set, etc.).
e.g.
HashMap<String, Movies> genres ... ;
...
genres.put(g, movie);
will overwrite any Movie value you had for that genre before.
But, since you cannot know at runtime if the genre already exists in your Map, you have to put a new (empty) list for an unknown genre. Any movies with that genre can now be added to that list.

Using Java need to create a hashmap that is populated with data from a file

I am trying to figure out a way to iterate through a file and generate a new HashSet based on the first column. The value in the first column will serve as the key in a HashMap. So for example, say I have a file that contains the following:
element1 value1
element1 value2
element1 value3
element2 value1
element2 value2
I need a HashMap with a key of element1 and values value1, value2, and value3. The next key will be element2, with values value1 and value2. Use of a HashSet of type User is required.
I can get through element1 and populate it fine, but when it gets to element2 I am not sure how to grab that information without erasing the entire HashSet. What would be the best way to generate a new HashSet dynamically, given that the contents of the file are not static?
Of course any help will be appreciated.
public class User {
public String Username;
String key = null;
public HashSet<User> friends = new HashSet<User>();
public HashSet<User> temps = new HashSet<User>();
HashMap<String, HashSet<User>> map = new HashMap<String, HashSet<User>>();
public HashSet readFile(){
String temp = null;
try
{
File f = new File("file.txt");
Scanner input = new Scanner(f);
String line = null;
line = input.nextLine();
int count = 0;
while (input.hasNextLine()){
String parts[] = line.split("\t");
temp = parts[0];
if(!parts[0].equals(temp)){
friends.add(new User(parts[1]));
map.put(parts[0], friends);
//here I had friends.clear(); thinking to clear out
//the set and start to populate with the values
//of new hashset but it clears set for all keys.
}else if(parts[0].equals(temp)){
friends.add(new User(parts[1]));
map.put(temp, friends);
}
temp = parts[0];
line = input.nextLine();
}
}
catch (FileNotFoundException e){
System.out.println("File not found");
}
return friends;
}
public void setKey(String fKey){
key = fKey;
//return key;
}
public static void outputSet(HashSet<User> set){
Iterator<User> i = set.iterator();
while (i.hasNext()){
System.out.print(i.next() + " ");
}
System.out.println();
}
public void buildMap(String fKey, HashSet<User> mappy){
map.put(fKey, mappy);
System.out.println(map);
}
@Override
public String toString() {
return ("" + Username +"");
}
}
It looks like you want friends = new HashSet<User>(); instead of friends.clear() so that each key gets its own set. But this code is really messy; I suggest you head over to Code Review for help cleaning it up.
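As a cleaner alternative (the same idea in sketch form), Map.computeIfAbsent allocates a fresh set the first time a key is seen, so no set is ever shared or cleared across keys. Plain String values stand in here for the question's User objects:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;

public class GroupDemo {
    // Groups "element value" pairs into a Map<String, HashSet<String>>.
    // computeIfAbsent creates a new HashSet per key on first sight,
    // so there is no shared set to accidentally clear.
    public static Map<String, HashSet<String>> group(List<String> lines) {
        Map<String, HashSet<String>> map = new HashMap<>();
        for (String line : lines) {
            String[] parts = line.split("\\s+"); // tab- or space-separated
            map.computeIfAbsent(parts[0], k -> new HashSet<>()).add(parts[1]);
        }
        return map;
    }

    public static void main(String[] args) {
        List<String> lines = List.of(
            "element1 value1", "element1 value2",
            "element1 value3", "element2 value1", "element2 value2");
        System.out.println(group(lines));
    }
}
```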
