Converting a singly linked list to a map - Java

I have been given an assignment to upgrade an existing program:

Figure out how to recode the qualifying exam problem using a Map for each terminal line, on the assumption that the size of the problem is dominated by the number of input lines, not the 500 terminal lines.

The program takes in a text file with lines of the form number, name, where the number is the PC number and the name is the user who logged on. The program returns, for each PC, the user who logged on the most. Here is the existing code:
public class LineUsageData {
    SinglyLinkedList<Usage> singly = new SinglyLinkedList<Usage>();

    //function to add a user to the linked list or to increment count by 1
    public void addObservation(Usage usage){
        for(int i = 0; i < singly.size(); ++i){
            if(usage.getName().equals(singly.get(i).getName())){
                singly.get(i).incrementCount(1);
                return;
            }
        }
        singly.add(usage);
    }

    //returns the user with the most connections to the PC
    public String getMaxUsage(){
        int tempHigh = 0;
        int high = 0;
        String userAndCount = "";
        for(int i = 0; i < singly.size(); ++i){ //goes through list and keeps highest
            tempHigh = singly.get(i).getCount();
            if(tempHigh > high){
                high = tempHigh;
                userAndCount = singly.get(i).getName() + " " + singly.get(i).getCount();
            }
        }
        return userAndCount;
    }
}
I am having trouble on the theoretical side. We can use a HashMap or a TreeMap. I am trying to think through how I would form a map that would hold the list of users for each PC. I can reuse the Usage object, which holds the name and the count of the user, but I am not supposed to alter that object.

When checking whether a Usage is present in the list you perform a linear search each time (O(N)). If you replace your list with a Map<String, Usage>, you'll be able to search by name in sublinear time: TreeMap has O(log N) search and update, HashMap has amortized O(1) (constant) time.
So the most effective data structure in this case is a HashMap.
import java.util.*;

public class LineUsageData {
    Map<String, Usage> map = new HashMap<String, Usage>();

    //function to add a user to the map or to increment count by 1
    public void addObservation(Usage usage) {
        Usage existentUsage = map.get(usage.getName());
        if (existentUsage == null) {
            map.put(usage.getName(), usage);
        } else {
            existentUsage.incrementCount(1);
        }
    }

    //returns the user with the most connections to the PC
    public String getMaxUsage() {
        Usage maxUsage = null;
        for (Usage usage : map.values()) {
            if (maxUsage == null || usage.getCount() > maxUsage.getCount()) {
                maxUsage = usage;
            }
        }
        return maxUsage == null ? null : maxUsage.getName() + " " + maxUsage.getCount();
    }

    // alternative version that uses Collections.max
    public String getMaxUsageAlt() {
        Usage maxUsage = map.isEmpty() ? null :
            Collections.max(map.values(), new Comparator<Usage>() {
                @Override
                public int compare(Usage o1, Usage o2) {
                    return o1.getCount() - o2.getCount();
                }
            });
        return maxUsage == null ? null : maxUsage.getName() + " " + maxUsage.getCount();
    }
}
A map can also be iterated in time proportional to its size, so you can use the same procedure to find the maximum element in it. I gave you two options: a manual approach, or the Collections.max utility method.

In simple words: you use a LinkedList (singly or doubly linked) when you have a list of items that you usually plan to traverse,
and a Map implementation when you have dictionary-like entries, where a key corresponds to a value and you plan to access the value using the key.
In order to convert your SinglyLinkedList to a HashMap or TreeMap, you need to find out which property of your item will be used as your key (it must be a property with unique values).
Assuming you are using the name property from your Usage class, you can do this
(a simple example):
//You could also use TreeMap, depending on your needs.
Map<String, Usage> usageMap = new HashMap<String, Usage>();

//Iterate through your SinglyLinkedList.
for (Usage usage : singly) {
    //Add all items to the Map
    usageMap.put(usage.getName(), usage);
}

//Access a value using its name as the key of the Map.
Usage accessedUsage = usageMap.get("AUsageName");
Also note that:
Map<String, Usage> usageMap = new HashMap<>();
is valid, thanks to the diamond operator (type inference).

I solved this offline and didn't get a chance to see some of the answers, which looked to be very helpful. Sorry about that, Nick and Aivean, and thanks for the responses. Here is the code I ended up writing to get this to work.
public class LineUsageData {
    Map<Integer, Usage> map = new HashMap<Integer, Usage>();
    int hash = 0;

    public void addObservation(Usage usage){
        hash = usage.getName().hashCode();
        System.out.println(hash);
        while((map.get(hash)) != null){
            if(map.get(hash).getName().equals(usage.name)){
                map.get(hash).count++;
                return;
            }else{
                hash++;
            }
        }
        map.put(hash, usage);
    }

    public String getMaxUsage(){
        String str = "";
        int tempHigh = 0;
        int high = 0;
        //for loop
        for(Integer key : map.keySet()){
            tempHigh = map.get(key).getCount();
            if(tempHigh > high){
                high = tempHigh;
                str = map.get(key).getName() + " " + map.get(key).getCount();
            }
        }
        return str;
    }
}

Related

Best Data Structure for fast retrieval, update, and keeping ordering

The problem is as follows:
I need to keep track of url + click count.
I need to be able to update the click count quickly when a user clicks on a url.
I need to be able to retrieve the top 10 URLs by click count quickly.
NOTE: assume you cannot use a database.
What is the best data structure to achieve this?
I have thought about using a map, but a map doesn't keep track of the ordering of the top 10 clicks.
You need an additional List<Map.Entry<URL, Integer>> for holding the top ten, with T being the click count of the lowermost entry.
If you count another click and the new count is still not greater than T: do nothing.
If the increased count is greater than T, check whether the URL is already in the list. If it is, do nothing. If it is not, add the entry to the list, sort, and delete the last entry if the list has more than 10 entries. Update T.
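A minimal sketch of that bookkeeping, assuming String urls and a HashMap for the full counts (the class and method names are made up for illustration; unlike the description above, it also refreshes the stored count when the URL is already in the top ten):

import java.util.*;

class TopTenTracker {
    private final Map<String, Integer> clicks = new HashMap<>();                  // full click counts
    private final List<Map.Entry<String, Integer>> topTen = new ArrayList<>();    // current top ten, highest first
    private int threshold = 0;                                                    // T: count of the lowermost top-ten entry

    public void click(String url) {
        int count = clicks.merge(url, 1, Integer::sum);
        if (topTen.size() == 10 && count <= threshold) {
            return;                                                               // still not greater than T: do nothing
        }
        topTen.removeIf(e -> e.getKey().equals(url));                             // drop a stale entry for this url, if any
        topTen.add(new AbstractMap.SimpleEntry<>(url, count));
        topTen.sort((a, b) -> b.getValue().compareTo(a.getValue()));
        if (topTen.size() > 10) {
            topTen.remove(topTen.size() - 1);                                     // keep only ten entries
        }
        threshold = topTen.get(topTen.size() - 1).getValue();                     // update T
    }

    public List<Map.Entry<String, Integer>> topTen() {
        return Collections.unmodifiableList(topTen);
    }
}

Since the list work is bounded by its fixed size of 10, each click stays close to O(1) beyond the map lookup.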
The best data structure I can think of here is a TreeSet.
The elements of a TreeSet are sorted, so you can easily find the top items.
Also make sure you maintain a separate comparator class for URL which implements
Comparator, so you can put your logic of keeping the elements sorted by count there. Use this comparator while creating the TreeSet. Insertion/update/delete/get all happen in O(log n).
Here is how you would define the structure:
TreeSet<URL> treeSet = new TreeSet<URL>(new URLComparator());

class URL {
    private String url;
    int count;

    public URL(String string, int i) {
        url = string;
        count = i;
    }

    @Override
    public int hashCode() {
        return url.hashCode();
    }

    @Override // No need to write this method. Just used it for testing
    public String toString() {
        return "url : " + url + " ,count : " + count + "\n";
    }
}
One more note: the hashCode of the URL class simply delegates to the hashCode of its url string (as above).
This is how you define the URLComparator class; the compare logic is based on the URL count:
class URLComparator implements Comparator<URL> {
    @Override
    public int compare(URL o1, URL o2) {
        return new Integer(o2.count).compareTo(o1.count);
    }
}
Testing
TreeSet<URL> treeSet = new TreeSet<URL>(new URLComparator());
treeSet.add(new URL("url1", 12));
treeSet.add(new URL("url2", 0));
treeSet.add(new URL("url3", 5));
System.out.println(treeSet);
Output:-
[url : url1 ,count : 12
, url : url3 ,count : 5
, url : url2 ,count : 0
]
To print the top 10 elements, use the following code:
Iterator<URL> iterator = treeSet.iterator();
int count = 0;
while (count < 10 && iterator.hasNext()) {
    System.out.println(iterator.next());
    count++;
}
You can use a Map<String, Integer> for this use case:
It keeps track of the key (url) and value (click count).
You can put an updated click count for a url into the map when a user clicks on that url.
You can retrieve the top 10 click counts after sorting the map entries based on the entry set:
// create a list out of the entryset of your map
Set<Map.Entry<String, Integer>> set = map.entrySet();
List<Map.Entry<String, Integer>> list = new ArrayList<>(set);
// this can be clubbed in another stub to act on top 'N' click counts
list.sort((o1, o2) -> (o2.getValue()).compareTo(o1.getValue()));
list.stream().limit(10).forEach(entry ->
System.out.println(entry.getKey() + " ==== " + entry.getValue()));
Using a Map, you will have to sort the values to get the top 10 urls,
which gives you O(n log n) complexity using a comparator for sorting by values.
Another way is:
Using a doubly linked list (of size 10) together with a HashMap (proceeding the way an LRU cache does).
Retrieve/update will be O(1).
The top 10 results will be the items in the list.
Structure of the doubly linked list:
class UrlAndCountNode {
    String url;
    int count;
    UrlAndCountNode next;
    UrlAndCountNode prev;
}
Structure of Map:
Map<String, UrlAndCountNode>
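One way to flesh that idea out is sketched below. It keeps the whole list ordered by count via a small bubble-up on each click rather than a strict size-10 list (so the top 10 are simply the first ten nodes), and the class and method names are made up; a burst of clicks can move a node several positions, so updates are not strictly O(1) as claimed, but no global re-sort is ever needed.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class UrlClickList {
    static class UrlAndCountNode {
        String url;
        int count;
        UrlAndCountNode next;
        UrlAndCountNode prev;
        UrlAndCountNode(String url) { this.url = url; }
    }

    private final Map<String, UrlAndCountNode> index = new HashMap<>(); // O(1) lookup by url
    private UrlAndCountNode head;                                       // highest count
    private UrlAndCountNode tail;                                       // lowest count

    public void click(String url) {
        UrlAndCountNode node = index.get(url);
        if (node == null) {
            node = new UrlAndCountNode(url);
            index.put(url, node);
            appendAtTail(node);                                         // new urls start at the bottom
        }
        node.count++;
        // bubble the url towards the head while it outgrows its predecessor
        while (node.prev != null && node.count > node.prev.count) {
            swapPayloadWithPrev(node);
            node = node.prev;
        }
    }

    public List<String> topTen() {
        List<String> result = new ArrayList<>();
        for (UrlAndCountNode cur = head; cur != null && result.size() < 10; cur = cur.next) {
            result.add(cur.url + " -> " + cur.count);
        }
        return result;
    }

    private void appendAtTail(UrlAndCountNode node) {
        if (tail == null) {
            head = tail = node;
        } else {
            tail.next = node;
            node.prev = tail;
            tail = node;
        }
    }

    private void swapPayloadWithPrev(UrlAndCountNode node) {
        UrlAndCountNode prev = node.prev;
        // swap the payloads instead of re-linking nodes, then fix the index entries
        String u = prev.url;  int c = prev.count;
        prev.url = node.url;  prev.count = node.count;
        node.url = u;         node.count = c;
        index.put(prev.url, prev);
        index.put(node.url, node);
    }
}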
That's an interesting question IMO. It seems you need something that is sorted by clicks, but at the same time you need to alter these values. The only way to do that with such a data structure is to remove the entry you want to update and put the updated one back in; simply updating the clicks in place will not work. So I think keeping the entries sorted by clicks is the better option.
The downside is that entries with the same number of clicks would override each other, so something like Guava's Multiset is a much better option.
As such I would do this:
static class Holder {
    private final String name;
    private final int clicks;

    public Holder(String name, int clicks) {
        super();
        this.name = name;
        this.clicks = clicks;
    }

    public String getName() {
        return name;
    }

    public int getClicks() {
        return clicks;
    }

    @Override
    public String toString() {
        return "name = " + name + " clicks = " + clicks;
    }
}
And methods would look like this:
private static List<Holder> firstN(Multiset<Holder> set, int n) {
    return set.stream().limit(n).collect(Collectors.toList());
}

private static void updateOne(Multiset<Holder> set, String urlName, int more) {
    Iterator<Holder> iter = set.iterator();
    int currentClicks = 0;
    boolean found = false;
    while (iter.hasNext()) {
        Holder h = iter.next();
        if (h.getName().equals(urlName)) {
            currentClicks = h.getClicks();
            iter.remove();
            found = true;
        }
    }
    if (found) {
        set.add(new Holder(urlName, currentClicks + more));
    }
}
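For completeness, a hedged sketch of how this might be wired up, assuming the Holder class and the two methods above sit in the same class, and using a Guava TreeMultiset ordered by clicks descending so that iteration (and therefore firstN) starts at the most-clicked entry; the tie-break on name keeps entries with equal click counts from colliding:

import com.google.common.collect.Multiset;
import com.google.common.collect.TreeMultiset;
import java.util.Comparator;

public static void main(String[] args) {
    // clicks descending, then name, so entries with equal clicks are kept apart
    Multiset<Holder> set = TreeMultiset.create(
            Comparator.comparingInt(Holder::getClicks).reversed()
                      .thenComparing(Holder::getName));

    set.add(new Holder("url1", 12));
    set.add(new Holder("url2", 0));
    set.add(new Holder("url3", 5));

    updateOne(set, "url2", 7);            // url2 now has 7 clicks
    System.out.println(firstN(set, 10));  // [name = url1 clicks = 12, name = url2 clicks = 7, name = url3 clicks = 5]
}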

Optimisation of searching HashMap with list of values

I have a map in which the values hold references to lists of objects.
//key1.getElements() - produces the following
[Element N330955311 ({}), Element N330955300 ({}), Element N3638066598 ({})]
I would like to search the list of every key and find whether a given element occurs at least twice (>= 2).
Currently my approach to this is very slow; I have a lot of data and I know execution time is relative, but it takes around 40 seconds.
My approach:
public String occurrence>=2 (String id)
//Search for id
//Outer loop through Map
//get first map value and return elements
//inner loop iterating through key.getElements()
//if match with id..then increment count
//return Strings with count == 2 else return null
The reason why this is so slow is because I have a lot of ids which I'm searching for (~8000) and about 3000 keys in my map. So it's > 8000 * 3000 * 8000 (given that every id/element exists in the key/valueSet map at least once).
Please help me with a more efficient way to do this search. I'm not too deep into practicing Java, so perhaps there's something obvious I'm missing.
Edit: added the real code after a request:
public void findAdjacents() {
    for (int i = 0; i < nodeList.size(); i++) {
        count = 0;
        inter = null;
        container = findIntersections(nodeList.get(i));
        if (container != null) {
            intersections.add(container);
        }
    }
}

public String findIntersections(String id) {
    Set<Map.Entry<String, Element>> entrySet = wayList.entrySet();
    for (Map.Entry entry : entrySet) {
        w1 = (Way) wayList.get(entry.getKey());
        for (Node n : w1.getNodes()) {
            container2 = String.valueOf(n);
            if (container2.contains(id)) {
                count++;
            }
            if (count == 2) {
                inter = id;
                count = 0;
            }
        }
    }
    if (inter != null)
        return inter;
    else
        return null;
}
Based on the pseudocode you provided, there is no need to iterate over all the keys in the map. You can directly do a get(id) on the map. If the map has the key, you get back the list of elements, which you can iterate to check whether the element occurs at least twice. If the id is not there, null is returned. So in that case you can optimize your code a bit.
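A minimal sketch of that idea. It assumes you first re-key the data so the map goes from a node id to the ways that contain it (the class and method names below are made up for illustration), which is what makes the direct get(id) possible:

import java.util.*;

public class IntersectionIndex {
    // node id -> ids of the ways that reference it
    private final Map<String, List<String>> nodeToWays = new HashMap<>();

    // call once per (wayId, nodeId) pair while loading the data
    public void record(String wayId, String nodeId) {
        nodeToWays.computeIfAbsent(nodeId, k -> new ArrayList<>()).add(wayId);
    }

    // O(1) lookup instead of scanning every way for every id
    public String findIntersection(String nodeId) {
        List<String> ways = nodeToWays.get(nodeId);
        return (ways != null && ways.size() >= 2) ? nodeId : null;
    }
}

Building the index is a single pass over the ways; after that, each of the ~8000 lookups is constant time instead of a scan over all ~3000 entries.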
Thanks

How to find duplicates in an ArrayList using a HashMap in Java?

My program is reading large txt files (MBs in size) which contain a source IP and destination IP (for example 192.168.125.10,112.25.2.1). Here read is an ArrayList in which the data is present.
I have generated unique ids (uid, an int) using srcip and destip, and now I am storing them in
static ArrayList<Integer[]> prev = new ArrayList<Integer[]>();
where the array is:
static Integer[] multi1;
multi1 = new Integer[]{(int)uid, count, flag};
I have to print all uids with their counts (frequencies) using a HashMap.
Please give some solution...
for (ArrayList<String> read : readFiles.values())
{
    if (file_count <= 2)
    {
        for (int i = 0; i < read.size(); i++)
        {
            String str1 = read.get(i).split(",")[0]; //get only srcIP
            String str2 = read.get(i).split(",")[1]; //get only destIP
            StringTokenizer tokenizer1 = new StringTokenizer(str1, ".");
            StringTokenizer tokenizer2 = new StringTokenizer(str2, ".");
            if (tokenizer1.hasMoreTokens() && tokenizer2.hasMoreTokens())
            {
                sip_oct1 = Integer.parseInt(tokenizer1.nextToken());
                sip_oct2 = Integer.parseInt(tokenizer1.nextToken());
                sip_oct3 = Integer.parseInt(tokenizer1.nextToken());
                sip_oct4 = Integer.parseInt(tokenizer1.nextToken());
                dip_oct1 = Integer.parseInt(tokenizer2.nextToken());
                dip_oct2 = Integer.parseInt(tokenizer2.nextToken());
                dip_oct3 = Integer.parseInt(tokenizer2.nextToken());
                dip_oct4 = Integer.parseInt(tokenizer2.nextToken());
                uid = uniqueIdGenerator(sip_oct1, sip_oct2, sip_oct3, sip_oct4, dip_oct1, dip_oct2, dip_oct3, dip_oct4);
            }
            multi1 = new Integer[]{(int) uid, count, flag};
            prev.add(multi1);
            System.out.println(prev.get(i)[0]); //getting uids from prev
            Map<ArrayList<Integer[]>, Integer> map = new HashMap<ArrayList<Integer[]>, Integer>();
            for (int j = 0; j < prev.size(); j++)
            {
                Integer temp = map.get(prev.get(i)[0]);
                count = map.get(temp);
                map.put(temp, (count == null) ? 1 : count++);
            }
            printMap(map);
            System.out.println("uids--->" + prev.get(i)[0] + " Count--- >" + count + " flag--->" + prev.get(i)[2]);
        }
    }
    file_count++;
}
}

public static void printMap(Map<ArrayList<Integer[]>, Integer> map)
{
    for (Entry<ArrayList<Integer[]>, Integer> entry : map.entrySet())
    {
        System.out.println(" Value : " + entry.getValue() + " key : " + entry.getKey());
    }
}

public static double uniqueIdGenerator(int oc1, int oc2, int oc3, int oc4, int oc5, int oc6, int oc7, int oc8)
{
    int a, b;
    double c;
    a = ((oc1 * 10 + oc2) * 10 + oc3) * 10 + oc4;
    b = ((oc5 * 10 + oc6) * 10 + oc7) * 10 + oc8;
    c = Math.log(a) + Math.log(b);
    return Math.round(c * 1000);
}
Now that I understand what you want, there are (at least) two ways of doing this.
1st: Make a list with the uids. Then a second list where you store a value (your uid) together with a count. I was thinking of a HashMap, but there you cannot easily change the count. Maybe an ArrayList of lists with two values.
Then loop over your list of uids and check with a second loop whether the uid is already in the second list. If it is, add one to the count; if it is not, add it to the list.
2nd: Do the same thing, but with classes (very Java). Then you can put even more info into the class ;)
Hope this helps!
*edit: @RC. indeed gives cleaner code.
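For reference, a minimal sketch of the HashMap-based counting the question asks for, assuming the uids are already available as ints (the class name and sample values below are made up for illustration):

import java.util.HashMap;
import java.util.Map;

public class UidCounter {
    public static void main(String[] args) {
        int[] uids = {4711, 1234, 4711, 9999, 4711, 1234};   // stand-ins for the generated ids

        Map<Integer, Integer> frequency = new HashMap<>();
        for (int uid : uids) {
            // merge() adds 1 to the existing count, or starts at 1 if the uid is new
            frequency.merge(uid, 1, Integer::sum);
        }

        for (Map.Entry<Integer, Integer> entry : frequency.entrySet()) {
            System.out.println("uid ---> " + entry.getKey() + "  count ---> " + entry.getValue());
        }
    }
}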

Search multiple HashMaps at the same time

tldr: How can I search for an entry in multiple (read-only) Java HashMaps at the same time?
The long version:
I have several dictionaries of various sizes stored as HashMap<String, String>. Once they are read in, they are never changed (strictly read-only).
I want to check whether, and in which dictionary, an entry with my key is stored.
My code was originally looking for a key like this:
public DictionaryEntry getEntry(String key) {
    for (int i = 0; i < _numDictionaries; i++) {
        HashMap<String, String> map = getDictionary(i);
        if (map.containsKey(key))
            return new DictionaryEntry(map.get(key), i);
    }
    return null;
}
Then it got a little more complicated: my search string could contain typos, or be a variant of the stored entry. Like, if the stored key was "banana", it is possible that I'd look up "bannana" or "a banana", but would still like the entry for "banana" returned. Using the Levenshtein distance, I now loop through all dictionaries and each entry in them:
public DictionaryEntry getEntry(String key) {
    for (int i = 0; i < _numDictionaries; i++) {
        HashMap<String, String> map = getDictionary(i);
        for (Map.Entry entry : map.entrySet()) {
            // Calculate Levenshtein distance, store closest match etc.
        }
    }
    // return closest match or null.
}
So far everything works as it should and I'm getting the entry I want. Unfortunately I have to look up around 7000 strings, in five dictionaries of various sizes (~30k - 70k entries), and it takes a while. From my processing output I have the strong impression my lookup dominates overall runtime.
My first idea to improve the runtime was to search all dictionaries in parallel. Since none of the dictionaries is to be changed and no more than one thread accesses a dictionary at the same time, I don't see any safety concerns.
The question is just: how do I do this? I have never used multithreading before. My search only came up with ConcurrentHashMap (but to my understanding, I don't need this) and the Runnable class, where I'd have to put my processing into the method run(). I think I could rewrite my current class to fit into Runnable, but I was wondering if there is maybe a simpler method to do this (or how I can do it simply with Runnable; right now my limited understanding thinks I have to restructure a lot).
Since I was asked to share the Levenshtein logic: it's really nothing fancy, but here you go:
private int _maxLSDistance = 10;

public Map.Entry getClosestMatch(String key) {
    Map.Entry _closestMatch = null;
    int lsDist = Integer.MAX_VALUE; // distance of the closest match found so far
    if (key == null) {
        return null;
    }
    for (Map.Entry entry : _dictionary.entrySet()) {
        // Perfect match
        if (entry.getKey().equals(key)) {
            return entry;
        }
        // Similar match
        else {
            int dist = StringUtils.getLevenshteinDistance((String) entry.getKey(), key);
            // If "dist" is smaller than the threshold and smaller than the distance of the already stored entry
            if (dist < _maxLSDistance) {
                if (_closestMatch == null || dist < lsDist) {
                    _closestMatch = entry;
                    lsDist = dist;
                }
            }
        }
    }
    return _closestMatch;
}
In order to use multi-threading in your case, it could look something like this:
A "monitor" class, which basically stores the results and coordinates the threads:
public class Results {

    private int nrOfDictionaries = 4;
    private ArrayList<String> results = new ArrayList<String>();

    public void prepare() {
        nrOfDictionaries = 4;
        results = new ArrayList<String>();
    }

    public synchronized void oneDictionaryFinished() {
        nrOfDictionaries--;
        System.out.println("one dictionary finished");
        notifyAll();
    }

    public synchronized boolean isReady() throws InterruptedException {
        while (nrOfDictionaries != 0) {
            wait();
        }
        return true;
    }

    public synchronized void addResult(String result) {
        results.add(result);
    }

    public ArrayList<String> getAllResults() {
        return results;
    }
}
The thread itself, which can be set to search a specific dictionary:
public class ThreadDictionarySearch extends Thread {

    // the actual dictionary
    private String dictionary;
    private Results results;

    public ThreadDictionarySearch(Results results, String dictionary) {
        this.dictionary = dictionary;
        this.results = results;
    }

    @Override
    public void run() {
        for (int i = 0; i < 4; i++) {
            // search dictionary;
            results.addResult("result of " + dictionary);
            System.out.println("adding result from " + dictionary);
        }
        results.oneDictionaryFinished();
    }
}
And the main method for demonstration:
public static void main(String[] args) throws Exception {
    Results results = new Results();
    ThreadDictionarySearch threadA = new ThreadDictionarySearch(results, "dictionary A");
    ThreadDictionarySearch threadB = new ThreadDictionarySearch(results, "dictionary B");
    ThreadDictionarySearch threadC = new ThreadDictionarySearch(results, "dictionary C");
    ThreadDictionarySearch threadD = new ThreadDictionarySearch(results, "dictionary D");
    threadA.start();
    threadB.start();
    threadC.start();
    threadD.start();
    if (results.isReady())
        // it stays here until all dictionaries are searched
        // because in "Results" it's told to wait() while not finished
        for (String string : results.getAllResults()) {
            System.out.println("RESULT: " + string);
        }
}
I think the easiest would be to use a stream over the entry set:
public DictionaryEntry getEntry(String key) {
    for (int i = 0; i < _numDictionaries; i++) {
        HashMap<String, String> map = getDictionary(i);
        map.entrySet().parallelStream().forEach( (entry) ->
            {
                // Calculate Levenshtein distance, store closest match etc.
            }
        );
    }
    // return closest match or null.
}
Provided you are using Java 8, of course. You could also wrap the outer loop into an IntStream. And you could directly use Stream.reduce to get the entry with the smallest distance.
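A hedged sketch of that last point, using Stream.min (a reduce-style terminal operation) with a comparator on the Levenshtein distance; it assumes the same StringUtils.getLevenshteinDistance the question already uses is on the classpath, and only covers one dictionary:

import java.util.Comparator;
import java.util.Map;
import java.util.Optional;
import org.apache.commons.lang3.StringUtils; // or org.apache.commons.lang, whichever version the project uses

public class ClosestMatchStream {
    // returns the entry of one dictionary closest to `key`, if the dictionary is non-empty
    static Optional<Map.Entry<String, String>> closestEntry(Map<String, String> dictionary, String key) {
        return dictionary.entrySet()
                .parallelStream()
                .min(Comparator.comparingInt(
                        (Map.Entry<String, String> e) -> StringUtils.getLevenshteinDistance(e.getKey(), key)));
    }
}

You could then compare the per-dictionary results the same way to pick the overall closest match.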
Maybe try thread pools:
ExecutorService es = Executors.newFixedThreadPool(_numDictionaries);
for (int i = 0; i < _numDictionaries; i++) {
    // prepare a Runnable implementation that contains the logic of your search
    es.submit(prepared_runnable);
}
I believe you may also try to find a quick estimate for strings that completely do not match (i.e. a significant difference in length), and use it to finish your logic early, moving on to the next candidate.
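A hedged sketch of how those submitted tasks might return their results, using Callables and Futures; searchOneDictionary is only a placeholder for the Levenshtein-based search from the question, and the class and method names are made up:

import java.util.*;
import java.util.concurrent.*;

public class ParallelLookup {

    static String lookup(List<Map<String, String>> dictionaries, String key) throws Exception {
        ExecutorService es = Executors.newFixedThreadPool(dictionaries.size());
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (Map<String, String> dict : dictionaries) {
                // each task searches one dictionary and returns its best candidate (or null)
                Callable<String> task = () -> searchOneDictionary(dict, key);
                futures.add(es.submit(task));
            }
            String best = null;
            for (Future<String> f : futures) {
                String candidate = f.get();      // blocks until that dictionary has been searched
                if (best == null && candidate != null) {
                    best = candidate;            // naive merge: keep the first hit
                }
            }
            return best;
        } finally {
            es.shutdown();
        }
    }

    // placeholder: plug in the getClosestMatch logic from the question here
    static String searchOneDictionary(Map<String, String> dict, String key) {
        return dict.get(key);
    }
}

In practice you would merge by smallest Levenshtein distance rather than keeping the first hit, but the thread-pool wiring stays the same.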
I have my strong doubts that HashMaps are a suitable solution here, especially if you want fuzzy matching and stop words. You should use a proper full-text search solution like Elasticsearch or Apache Solr, or at least an embeddable engine like Apache Lucene.
That being said, you can use a poor man's version: create an array of your maps and a SortedMap, iterate over the array, take the keys of the current HashMap and store them in the SortedMap with the index of their HashMap. To retrieve a key, you first search the SortedMap for said key, get the respective HashMap from the array using the index position, and look up the key in only that one HashMap. Should be fast enough without the need for multiple threads digging through the HashMaps. However, you could turn the code below into a Runnable and have multiple lookups in parallel.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

public class Search {

    public static void main(String[] arg) {
        if (arg.length == 0) {
            System.out.println("Must give a search word!");
            System.exit(1);
        }
        String searchString = arg[0].toLowerCase();

        /*
         * Populating our HashMaps.
         */
        HashMap<String, String> english = new HashMap<String, String>();
        english.put("banana", "fruit");
        english.put("tomato", "vegetable");

        HashMap<String, String> german = new HashMap<String, String>();
        german.put("Banane", "Frucht");
        german.put("Tomate", "Gemüse");

        /*
         * Now we create our ArrayList of HashMaps for fast retrieval
         */
        List<HashMap<String, String>> maps = new ArrayList<HashMap<String, String>>();
        maps.add(english);
        maps.add(german);

        /*
         * This is our index
         */
        SortedMap<String, Integer> index = new TreeMap<String, Integer>(String.CASE_INSENSITIVE_ORDER);

        /*
         * Populating the index:
         */
        for (int i = 0; i < maps.size(); i++) {
            // We iterate through our HashMaps...
            HashMap<String, String> currentMap = maps.get(i);
            for (String key : currentMap.keySet()) {
                /* ...and populate our index with lowercase versions of the keys,
                 * referencing the position of the map from which the key originates.
                 */
                index.put(key.toLowerCase(), i);
            }
        }

        // In case our index contains our search string...
        if (index.containsKey(searchString)) {
            /*
             * ... we find out in which map of the ones stored in maps
             * the word in the index originated from.
             */
            Integer mapIndex = index.get(searchString);
            /*
             * Next, we look up said map.
             */
            HashMap<String, String> origin = maps.get(mapIndex);
            /*
             * Last, we retrieve the value from the origin map
             */
            String result = origin.get(searchString);
            /*
             * The above steps can be shortened to
             * String result = maps.get(index.get(searchString).intValue()).get(searchString);
             */
            System.out.println(result);
        } else {
            System.out.println("\"" + searchString + "\" is not in the index!");
        }
    }
}
Please note that this is a rather naive implementation only provided for illustration purposes. It doesn't address several problems (you can't have duplicate index entries, for example).
With this solution, you are basically trading startup speed for query speed.
Okay!
Since your concern is to get a faster response, I would suggest you divide the work between threads.
Say you have five dictionaries: give three dictionaries to one thread and let another thread take care of the remaining two.
Then whichever thread finds the match halts or terminates the other thread.
You may need some extra logic to do that division of work, but it won't hurt your performance.
And you may need a few more changes in your code to get your closest match:
for (Map.Entry entry : _dictionary.entrySet()) {
You are using the entrySet, but you are not using the values, and getting the entry set is a bit more expensive. I would suggest you just use the keySet, since you are not really interested in the values in that map:
for (String key : _dictionary.keySet()) {
For more details on the performance of maps, please read this link: Map performances
Iteration over the collection-views of a LinkedHashMap requires time proportional to the size of the map, regardless of its capacity. Iteration over a HashMap is likely to be more expensive, requiring time proportional to its capacity.

Java: How do I return the object properly?

Ok, so I have been working on a school project for a while, and I am trying to count up the mutual interests of different users. I am trying to store the "points" they get in a HashMap for each mutual interest they have, and then choose the user with the most mutual interests (highest HashMap key). I have the comparison of the integers done, but how do I return the User with the most points?
Example TXT file it reads from and loads into a List:
Daniel: adcbadcbd
Jimmy: abdcbdcab
public User getMutualUser(User user) {
    final Map<User, Integer> points = new HashMap<User, Integer>();
    for (User u : users) {
        if (u.getName().equals(user.getName())) continue;
        for (int i = 0; i < u.getAnswers().size(); i++) {
            if (u.getAnswers().get(i).equals(user.getAnswers().get(i))) {
                System.out.println(u.getName() + " - " + u.getAnswers().get(i));
                int current = points.get(u);
                points.put(u, current + 1);
            }
        }
    }
    Collections.sort(users, new Comparator<User>() {
        public int compare(User u1, User u2) {
            Integer score1 = points.get(u1);
            Integer score2 = points.get(u2);
            return score1.compareTo(score2);
        }
    });
}
Everything looks good - you just need to return the last user in the list!
Add this line at the end of your method:
return users.get(users.size() - 1);
It can be easier to sort in the opposite direction and then return the first element.
In the comparator, reverse the order:
return score2.compareTo(score1);
Then:
return users.get(0);
A Map can only have one value per key, which means that if more than one User gets the same number of points (i.e. score1.equals(score2)) you will lose data (one of the Users) from the Map.
Additionally, you may end up with the same User more than once if they have been entered into the Map twice as their points get incremented.
Finally, for a Map it is a bad idea to use as a key a value which will end up changing; the whole point of a key is that it is constant for a given value.
I recommend swapping the key and values around in your Map, and in fact, just use a HashMap (or a Guava Multiset), keep a separate List of Users, and run Collections.sort() using a Comparator which uses the Map as a lookup. (The Map would have to be final.)
List<User> users = new ArrayList<User>();
final Map<User, Integer> points = new HashMap<User, Integer>(); // assumes User has hashCode() / equals() defined

for (User u : users) {
    // populate the points Map
}

Collections.sort(users, new Comparator<User>() {
    public int compare(User u1, User u2) {
        Integer score1 = points.get(u1);
        Integer score2 = points.get(u2);
        return score1.compareTo(score2);
    }
});
Finally, return your highest scoring User with users.get(0).
Note: if users.get(0) turns out to be the lowest scoring user, just swap score1.compareTo(score2) for score2.compareTo(score1) in the Comparator.
