We have a linked list called ratings whose nodes each hold three integers:
userId, itemId, and the value of the rating (for example, from 0 to 10).
The method below returns the rating that user i gave to item j (the program reads the ratings from a file), and returns -1 if there is no rating.
The method, which is O(n):
public int getRating(int i, int j) {
    ratings.findFirst();
    while (!ratings.empty()) {
        if (ratings.retrieve().getUserId() == i && ratings.retrieve().getItemId() == j)
            return ratings.retrieve().getValue();
        else
            ratings.findNext();
    }
    return -1;
}
How can I do this in O(log n)?
Or is there any way I can solve it using a binary search tree?
The short answer is: use a different data structure. Linked lists can't be searched in anything better than linear time, since the elements are linked together without any real semblance of order (and even if the list were sorted, you'd still have to do a linear traversal to reach a given element).
One data structure that you could use would be a Table from Guava. With this data structure, you'd have to do more work to add an element in...
Table<Integer, Integer, Rating> ratings = HashBasedTable.create();
ratings.put(rating.getUserId(), rating.getItemId(), rating);
...but you can retrieve very quickly, in roughly O(1) time, since HashBasedTable is backed by a LinkedHashMap<Integer, LinkedHashMap<Integer, Rating>>.
ratings.get(i, j);
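If you specifically want the binary-search-tree route asked about in the question, a TreeMap keyed on a composite (userId, itemId) key gives guaranteed O(log n) lookups. A sketch (the Key class and the method names here are illustrative, not from the original code):

```java
import java.util.TreeMap;

public class TreeRatingIndex {
    // Composite key ordered by userId, then itemId.
    static class Key implements Comparable<Key> {
        final int userId;
        final int itemId;

        Key(int userId, int itemId) {
            this.userId = userId;
            this.itemId = itemId;
        }

        @Override
        public int compareTo(Key o) {
            int c = Integer.compare(userId, o.userId);
            return c != 0 ? c : Integer.compare(itemId, o.itemId);
        }
    }

    private final TreeMap<Key, Integer> ratings = new TreeMap<Key, Integer>();

    public void addRating(int userId, int itemId, int value) {
        ratings.put(new Key(userId, itemId), value);
    }

    // One descent of the red-black tree: O(log n), found or not.
    public int getRating(int i, int j) {
        Integer value = ratings.get(new Key(i, j));
        return value == null ? -1 : value;
    }

    public static void main(String[] args) {
        TreeRatingIndex index = new TreeRatingIndex();
        index.addRating(1, 5, 7);
        System.out.println(index.getRating(1, 5)); // 7
        System.out.println(index.getRating(1, 9)); // -1
    }
}
```

As a bonus, the tree order lets you enumerate all of one user's ratings with a subMap range query, which a plain hash map can't do.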
You can use hashing to achieve your task in O(1). Please read this article to gain a deeper understanding of hashing.
Since you are using Java, you can use a HashMap to accomplish your task. Note that the worst-case time for a hash lookup is O(n) (or O(log n) in Java 8+, where overly long buckets become balanced trees), but on average it is O(1). If you are interested in hash tables and amortized analysis, please go through this article.
Code example: you can create a class with the required attributes and implement the equals and hashCode methods as follows. [read Java collections - hashCode() and equals()]
class Rating {
    public int user_id; // id of the user who rated
    public int item_id; // id of the item being rated

    public Rating(int user_id, int item_id) {
        this.user_id = user_id;
        this.item_id = item_id;
    }

    @Override
    public boolean equals(Object o) {
        if (o == this) {
            return true;
        }
        if (!(o instanceof Rating)) {
            return false;
        }
        Rating ratingObj = (Rating) o;
        return ratingObj.user_id == user_id
                && ratingObj.item_id == item_id;
    }

    @Override
    public int hashCode() {
        int result = 17;
        result = 31 * result + user_id;
        result = 31 * result + item_id;
        return result;
    }
}
Then store values in HashMap as follows:
public static void main(String[] args) {
    HashMap<Rating, Integer> ratingMap = new HashMap<>();
    Rating rt = new Rating(1, 5); // user id = 1, item id = 5
    ratingMap.put(rt, 3);
    rt = new Rating(1, 2); // user id = 1, item id = 2
    ratingMap.put(rt, 4);
    rt = new Rating(1, 3); // user id = 1, item id = 3
    ratingMap.put(rt, 5);
    // now search in the HashMap
    System.out.println(ratingMap.get(new Rating(1, 3))); // prints 5
}
As presented, this can hardly be done in O(log n). You're looking through elements until you find the one you need; in the worst case, you won't find the element until the end of the loop, making it O(n).
Of course, if ratings were a dictionary you'd retrieve the value in almost O(1): user ids as keys and, for example, a list of ratings as the value. Insertion would be a bit slower, but not much.
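A sketch of that dictionary idea (class and method names are illustrative): nest one map per user, so a lookup is just two hash probes.

```java
import java.util.HashMap;
import java.util.Map;

public class RatingIndex {
    // userId -> (itemId -> rating value)
    private final Map<Integer, Map<Integer, Integer>> byUser = new HashMap<>();

    public void addRating(int userId, int itemId, int value) {
        // Create the inner map lazily on a user's first rating.
        byUser.computeIfAbsent(userId, k -> new HashMap<>()).put(itemId, value);
    }

    // Expected O(1): two hash lookups. Returns -1 if no such rating.
    public int getRating(int i, int j) {
        Map<Integer, Integer> items = byUser.get(i);
        if (items == null) {
            return -1;
        }
        return items.getOrDefault(j, -1);
    }

    public static void main(String[] args) {
        RatingIndex index = new RatingIndex();
        index.addRating(1, 5, 7);
        System.out.println(index.getRating(1, 5)); // 7
        System.out.println(index.getRating(1, 9)); // -1
    }
}
```

Insertion costs one extra map allocation per new user; lookups stay expected O(1).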
I have a list of objects that may contain duplicate elements, and I need to check each element against every other element in the list; if I find a duplicate element I have to mark it as a duplicate.
I did this with a normal for loop, as shown:
for (int i = 0; i < records.size() - 1; i++) {
    Record record = records.get(i);
    for (int k = i + 1; k < records.size(); k++) {
        Record currentRecord = records.get(k);
        if (RecordsParser.isDuplicateRecord(record, currentRecord)) {
            currentRecord.setValid(false);
            currentRecord.setErrorCode(ErrorCodes.DUPLICATE_ID);
        }
    }
}
So my question is: is there any way to do this logic with a lambda expression, in a cleaner way?
I would suggest not using a lambda for something like this because, as ParkerHalo said, lambda expressions are not always cleaner. Your implementation has the worst complexity, O(n^2). If I have understood the problem correctly, I would use the following implementation for something more efficient (O(n)) and cleaner:
Set<Record> set = new HashSet<>();
for (Record record : records) {
    // add returns false if an equal element is already present
    if (!set.add(record)) {
        record.setValid(false);
        record.setErrorCode(ErrorCodes.DUPLICATE_ID);
    }
}
I'll try to explain my thoughts on this as clearly as I can, but since I don't know what your Record class looks like, I might miss the mark.
As Dimitris stated, you have a complexity of O(n²) which is really bad for performance. Your goal should be to reach linear complexity O(n) or at least O(n*log(n)).
How could you achieve that?
Use a HashSet to store the elements one by one
If your hash function is good the lookup of an element will (usually) be constant O(1)
Iterating over every single element with a constant lookup will result in a total complexity of O(n)
Small example:
class Record
{
    // The records are compared with these fields:
    int field1;
    int field2;

    @Override
    public int hashCode()
    {
        // You'll have to think about a good hash function for your example!
        return 31 * field1 + 17 * field2;
    }

    @Override
    public boolean equals(Object obj)
    {
        // You'll have to adapt your equals method to your own record class
        if (!(obj instanceof Record))
            return false;
        Record other = (Record) obj;
        return this.field1 == other.field1 && this.field2 == other.field2;
    }
}
And this is how you use it:
HashSet<Record> set = new HashSet<>();
for (Record r : records)
{
    // If your hashCode function is good this will most likely be O(1)
    if (set.contains(r))
    {
        // You found a duplicate. Handle it here accordingly.
        // ...
    }
    else
    {
        // No duplicate, add it to the set. (Good hashCode --> mostly O(1))
        set.add(r);
    }
}
Please note that this is only a vague example and that you'll have to adapt your hashCode and equals methods accordingly!
I have been given an assignment to upgrade an existing program.
Figure out how to recode the qualifying exam problem using a Map for each terminal line, on the
assumption that the size of the problem is dominated by the number of input lines, not the 500
terminal lines
The program takes in a text file with lines of the form number, name. The number is the PC number and the name is the user who logged on. The program returns, for each PC, the user who logged on the most. Here is the existing code:
public class LineUsageData {
    SinglyLinkedList<Usage> singly = new SinglyLinkedList<Usage>();

    // function to add a user to the linked list or to increment count by 1
    public void addObservation(Usage usage) {
        for (int i = 0; i < singly.size(); ++i) {
            if (usage.getName().equals(singly.get(i).getName())) {
                singly.get(i).incrementCount(1);
                return;
            }
        }
        singly.add(usage);
    }

    // returns the user with the most connections to the PC
    public String getMaxUsage() {
        int tempHigh = 0;
        int high = 0;
        String userAndCount = "";
        for (int i = 0; i < singly.size(); ++i) { // goes through the list and keeps the highest
            tempHigh = singly.get(i).getCount();
            if (tempHigh > high) {
                high = tempHigh;
                userAndCount = singly.get(i).getName() + " " + singly.get(i).getCount();
            }
        }
        return userAndCount;
    }
}
I am having trouble on the theoretical side. We can use a HashMap or a TreeMap. I am trying to think through how I would form a map that would hold the list of users for each PC. I can reuse the Usage object, which holds the name and the count for a user; I am not supposed to alter that object, though.
When checking whether a Usage is present in the list you perform a linear search each time (O(N)). If you replace your list with a Map<String, Usage>, you'll be able to search by name in sublinear time. TreeMap has O(log N) search and update; HashMap has amortized O(1) (constant) time.
So the most effective data structure in this case is a HashMap.
import java.util.*;

public class LineUsageData {
    Map<String, Usage> map = new HashMap<String, Usage>();

    // function to add a user to the map or to increment count by 1
    public void addObservation(Usage usage) {
        Usage existentUsage = map.get(usage.getName());
        if (existentUsage == null) {
            map.put(usage.getName(), usage);
        } else {
            existentUsage.incrementCount(1);
        }
    }

    // returns the user with the most connections to the PC
    public String getMaxUsage() {
        Usage maxUsage = null;
        for (Usage usage : map.values()) {
            if (maxUsage == null || usage.getCount() > maxUsage.getCount()) {
                maxUsage = usage;
            }
        }
        return maxUsage == null ? null : maxUsage.getName() + " " + maxUsage.getCount();
    }

    // alternative version that uses Collections.max
    public String getMaxUsageAlt() {
        Usage maxUsage = map.isEmpty() ? null :
                Collections.max(map.values(), new Comparator<Usage>() {
                    @Override
                    public int compare(Usage o1, Usage o2) {
                        return o1.getCount() - o2.getCount();
                    }
                });
        return maxUsage == null ? null : maxUsage.getName() + " " + maxUsage.getCount();
    }
}
A Map can also be iterated in time proportional to its size, so you can use the same procedure to find the maximum element in it. I gave you two options: the manual approach, and the Collections.max utility method.
With simple words: you use a LinkedList (singly or doubly linked) when you have a list of items that you usually plan to traverse,
and a Map implementation when you have dictionary-like entries, where a key corresponds to a value and you plan to access the value using the key.
In order to convert your SinglyLinkedList to a HashMap or TreeMap, you need to find out which property of your items will be used as the key (it must be a property with unique values).
Assuming you are using the name property from your Usage class, you can do this
(a simple example):
//You could also use TreeMap, depending on your needs.
Map<String, Usage> usageMap = new HashMap<String, Usage>();
//Iterate through your SinglyLinkedList.
for (Usage usage : singly) {
    //Add all items to the Map
    usageMap.put(usage.getName(), usage);
}
//Access a value using its name as the key of the Map.
Usage accessedUsage = usageMap.get("AUsageName");
Also note that:
Map<String, Usage> usageMap = new HashMap<>();
is valid, thanks to the diamond operator.
I solved this offline and didn't get a chance to see some of the answers, which look very helpful. Sorry about that, Nick and Aivean, and thanks for the responses. Here is the code I ended up writing to get this to work.
public class LineUsageData {
    Map<Integer, Usage> map = new HashMap<Integer, Usage>();
    int hash = 0;

    public void addObservation(Usage usage) {
        hash = usage.getName().hashCode();
        System.out.println(hash);
        while ((map.get(hash)) != null) {
            if (map.get(hash).getName().equals(usage.name)) {
                map.get(hash).count++;
                return;
            } else {
                hash++;
            }
        }
        map.put(hash, usage);
    }

    public String getMaxUsage() {
        String str = "";
        int tempHigh = 0;
        int high = 0;
        // for loop
        for (Integer key : map.keySet()) {
            tempHigh = map.get(key).getCount();
            if (tempHigh > high) {
                high = tempHigh;
                str = map.get(key).getName() + " " + map.get(key).getCount();
            }
        }
        return str;
    }
}
I have a class
public class User {
    String country;
    Double rating;
    Double status;
}
I need to sort a List of this class based on two conditions.
At the start of the list need to be the Users that have a certain value for country; these users are sorted by rating, and if ratings are equal, compared by status.
Users with any other value for country are simply sorted by rating.
I have tried many attempts, and this is the most recent:
final String country = me.getCountry();
Collections.sort(usersArray, new Comparator<User>() {
    @Override
    public int compare(User lhs, User rhs) {
        User user1 = lhs.getUser();
        String country1 = user1.getCountry();
        int result = country.equals(country1) ? 1 : 0;
        if (result == 0) {
            result = Double.compare(lhs.rating, rhs.rating);
            if (result == 0) {
                return Double.compare(lhs.status, rhs.status);
            } else
                return result;
        }
        return Double.compare(lhs.rating, rhs.rating);
    }
});
Consider using CompareToBuilder from Apache Commons Lang. You tell it which fields to compare in which order and it does the rest. Less code for you, too.
return new CompareToBuilder()
        .append(lhs.country, rhs.country)
        .append(lhs.rating, rhs.rating)
        .append(lhs.status, rhs.status)
        .toComparison();
}
The users which have another country value, not mine, are located at the top.
This happens because when the countries are equal you return 1; otherwise you go on to compare rating/status. If one of those comparisons returns 1, your Comparator also returns 1. So the countries are not equal, but a user from another country with a greater rating makes the Comparator return 1 anyway. That's why the list is not sorted properly.
int result = country.equals(country1) ? 1 : 0; // DEBUG: result = 0
if (result == 0) { // DEBUG: enter
    result = Double.compare(lhs.rating, rhs.rating); // DEBUG: result = 1
    if (result == 0) {
        result = Double.compare(lhs.status, rhs.status);
    } else
        return result; // this could be omitted
}
return result; // DEBUG: method returns 1
After edit:
You have to handle every possible situation:
country can be your or not (2 situations)
rating can be greater, equal, lower (3 situations)
status can be greater, equal, lower (3 situations)
If I calculated correctly (probability is not my strongest side) you have 18 situations.
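One way to cover all of those situations is to chain comparators: group by country first, then compare rating, then status. A sketch (the pared-down User here is illustrative, and it sorts ascending within each group; wrap the inner comparators with reversed() if best-first is wanted):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class CountrySort {
    static class User {
        final String country;
        final double rating;
        final double status;

        User(String country, double rating, double status) {
            this.country = country;
            this.rating = rating;
            this.status = status;
        }
    }

    // My-country users first (false sorts before true), then by
    // rating, breaking ties on status.
    static Comparator<User> order(final String myCountry) {
        Comparator<User> countryFirst =
                Comparator.comparing(u -> !u.country.equals(myCountry));
        return countryFirst.thenComparing(
                Comparator.comparingDouble((User u) -> u.rating)
                        .thenComparingDouble(u -> u.status));
    }

    public static void main(String[] args) {
        List<User> users = new ArrayList<>();
        users.add(new User("DE", 2.0, 1.0));
        users.add(new User("NL", 9.0, 1.0));
        users.add(new User("NL", 2.0, 5.0));
        users.add(new User("NL", 2.0, 3.0));
        Collections.sort(users, order("NL"));
        for (User u : users) {
            System.out.println(u.country + " " + u.rating + " " + u.status);
        }
        // NL 2.0 3.0, NL 2.0 5.0, NL 9.0 1.0, then DE 2.0 1.0
    }
}
```

Because each comparator in the chain is only consulted when the previous one reports a tie, every one of the situations above is handled exactly once.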
Using streams from Java 8, you can do this:
Arrays.stream(usersArray)
        .sorted((lhs, rhs) -> Double.compare(lhs.status, rhs.status))
        .sorted((lhs, rhs) -> Double.compare(lhs.rating, rhs.rating))
        .toArray(size -> new User[size])
Because the resulting stream from the array is ordered, Java guarantees that the sorting is stable. This means that the order of the statuses is kept stable when the ratings are equal. This is why I first sort on status, then on ratings.
By the way, this approach does not change the order of the original array. It returns a new one.
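The two sorted() passes can also be collapsed into a single chained comparator, sorting once with rating as the primary key and status as the tie-breaker (a sketch with a pared-down, illustrative User):

```java
import java.util.Arrays;
import java.util.Comparator;

public class StreamSortOnce {
    static class User {
        final double rating;
        final double status;

        User(double rating, double status) {
            this.rating = rating;
            this.status = status;
        }
    }

    // One sort: rating first, status only consulted on rating ties.
    static User[] sorted(User[] users) {
        return Arrays.stream(users)
                .sorted(Comparator.comparingDouble((User u) -> u.rating)
                        .thenComparingDouble(u -> u.status))
                .toArray(User[]::new);
    }

    public static void main(String[] args) {
        User[] out = sorted(new User[]{
                new User(2.0, 1.0), new User(1.0, 3.0), new User(1.0, 2.0)});
        for (User u : out) {
            System.out.println(u.rating + " " + u.status);
        }
        // 1.0 2.0, 1.0 3.0, 2.0 1.0
    }
}
```

This gives the same result without relying on stability across two separate passes, and it sorts the array only once.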
I need a Collection that sorts the elements but does not remove duplicates.
I have gone for a TreeSet, since TreeSet actually adds the values to a backing TreeMap:
public boolean add(E e) {
    return m.put(e, PRESENT) == null;
}
And the TreeMap removes duplicates using the Comparator's compare logic.
I have written a Comparator that returns 1 instead of 0 for equal elements. Hence, for equal elements, a TreeSet with this Comparator will not overwrite the duplicate and will just sort it.
I have tested this for simple String objects, but I need a Set of custom objects.
public static void main(String[] args)
{
    List<String> strList = Arrays.asList(new String[]{"d", "b", "c", "z", "s", "b", "d", "a"});
    Set<String> strSet = new TreeSet<String>(new StringComparator());
    strSet.addAll(strList);
    System.out.println(strSet);
}

class StringComparator implements Comparator<String>
{
    @Override
    public int compare(String s1, String s2)
    {
        if (s1.compareTo(s2) == 0) {
            return 1;
        } else {
            return s1.compareTo(s2);
        }
    }
}
Is this approach fine or is there a better way to achieve this?
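One caveat worth noting with a comparator that never returns 0: TreeSet also uses the comparator for lookups, so contains() and remove() stop finding elements. A quick check:

```java
import java.util.Comparator;
import java.util.TreeSet;

public class TreeSetLookupCaveat {
    public static void main(String[] args) {
        // Comparator that never reports equality, as in the question.
        Comparator<String> neverEqual = new Comparator<String>() {
            @Override
            public int compare(String s1, String s2) {
                int c = s1.compareTo(s2);
                return c == 0 ? 1 : c;
            }
        };
        TreeSet<String> set = new TreeSet<String>(neverEqual);
        set.add("b");
        set.add("b"); // the "duplicate" is kept...
        System.out.println(set.size());        // 2
        System.out.println(set.contains("b")); // false: lookup never sees compare() == 0
        System.out.println(set.remove("b"));   // false, for the same reason
    }
}
```

So the trick does keep duplicates in sorted order, but the resulting collection only supports insertion and iteration, not lookup or removal.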
EDIT
Actually I am having an ArrayList of the following class:
class Fund
{
    String fundCode;
    BigDecimal fundValue;
    .....

    public boolean equals(Object obj) {
        // uses fundCode for equality
    }
}
I need all the fundCodes with the highest fundValue.
You can use a PriorityQueue.
PriorityQueue<Integer> pQueue = new PriorityQueue<Integer>();
PriorityQueue(): Creates a PriorityQueue with the default initial capacity (11) that orders its elements according to their natural ordering.
This is a link to doc: https://docs.oracle.com/javase/8/docs/api/java/util/PriorityQueue.html
I need all the fundCodes with the highest fundValue
If that's the only reason you want to sort, I would recommend not sorting at all. Sorting generally costs O(n log n); finding the maximum costs only O(n) and is a simple iteration over your list:
List<Fund> maxFunds = new ArrayList<Fund>();
int max = Integer.MIN_VALUE; // so that negative values are handled too
for (Fund fund : funds) {
    if (fund.getFundValue() > max) {
        maxFunds.clear();
        max = fund.getFundValue();
    }
    if (fund.getFundValue() == max) {
        maxFunds.add(fund);
    }
}
You can avoid that code by using a third-party library like Guava. See: How to get max() element from List in Guava
You can sort a List using Collections.sort.
Given your Fund:
List<Fund> sortMe = new ArrayList<>(...);
Collections.sort(sortMe, new Comparator<Fund>() {
    @Override
    public int compare(Fund left, Fund right) {
        return left.fundValue.compareTo(right.fundValue);
    }
});
// sortMe is now sorted
// sortMe is now sorted
In the case of TreeSet, either the Comparator or Comparable is used to compare and store objects; equals is not called, which is why it does not recognize duplicates.
Instead of the TreeSet we can use a List and implement the Comparable interface.
public class Fund implements Comparable<Fund> {
    String fundCode;
    int fundValue;

    public Fund(String fundCode, int fundValue) {
        super();
        this.fundCode = fundCode;
        this.fundValue = fundValue;
    }

    public String getFundCode() {
        return fundCode;
    }

    public void setFundCode(String fundCode) {
        this.fundCode = fundCode;
    }

    public int getFundValue() {
        return fundValue;
    }

    public void setFundValue(int fundValue) {
        this.fundValue = fundValue;
    }

    public int compareTo(Fund compareFund) {
        // descending by fundValue; Integer.compare avoids the
        // overflow risk of plain subtraction
        return Integer.compare(compareFund.getFundValue(), this.fundValue);
    }

    public static void main(String args[]) {
        List<Fund> funds = new ArrayList<Fund>();
        Fund fund1 = new Fund("a", 100);
        Fund fund2 = new Fund("b", 20);
        Fund fund3 = new Fund("c", 70);
        Fund fund4 = new Fund("a", 100);
        funds.add(fund1);
        funds.add(fund2);
        funds.add(fund3);
        funds.add(fund4);
        Collections.sort(funds);
        for (Fund fund : funds) {
            System.out.println("Fund code: " + fund.getFundCode() + " Fund value : " + fund.getFundValue());
        }
    }
}
Add the elements to an ArrayList, then sort them using the Collections.sort utility; implement Comparable and write your own compareTo method according to your key.
A List won't remove duplicates, and can be sorted as well:
List<Integer> list = new ArrayList<>();
Collections.sort(list, new Comparator<Integer>() {
    @Override
    public int compare(Integer left, Integer right) {
        // your logic
        return left.compareTo(right);
    }
});
I found a way to get TreeSet to store duplicate keys.
The problem originated when I wrote some code in Python using SortedContainers: I have a spatial index of objects and want to find all objects between a start and end longitude.
The longitudes can be duplicates, but I still need the ability to efficiently add and remove specific objects from the index. Unfortunately I could not find a Java equivalent of Python's SortedKeyList, which separates the sort key from the type being stored.
To illustrate this consider that we have a large list of retail purchases and we want to get all purchases where the cost is in a specific range.
// We are using TreeSet as a SortedList
TreeSet<PriceBase> _index = new TreeSet<PriceBase>();

// populate the index with the purchases.
// Note that 2 of these have the same cost
_index.add(new Purchase("candy", 1.03));
Purchase _bananas = new Purchase("bananas", 1.45);
_index.add(_bananas);
_index.add(new Purchase("celery", 1.45));
_index.add(new Purchase("chicken", 4.99));

// Range scan. This iterator should return "candy", "bananas", "celery"
NavigableSet<PriceBase> _iterator = _index.subSet(
        new PriceKey(0.99), true, new PriceKey(3.99), true);

// we can also remove specific items from the list, and
// it finds the specific object even though the sort
// key is the same
_index.remove(_bananas);
There are 3 classes created for the list
PriceBase: Base class that returns the sort key (the price).
Purchase: subclass that contains transaction data.
PriceKey: subclass used for the range search.
When I initially implemented this with TreeSet it worked except in the case where the prices are the same. The trick is to define the compareTo() so that it is polymorphic:
If we are comparing Purchase to PriceKey then only compare the price.
If we are comparing Purchase to Purchase then compare the price and the name if the prices are the same.
For example here are the compareTo() functions for the PriceBase and Purchase classes.
// in PriceBase
@Override
public int compareTo(PriceBase _other) {
    return Double.compare(this.getPrice(), _other.getPrice());
}

// in Purchase
@Override
public int compareTo(PriceBase _other) {
    // compare by price
    int _compare = super.compareTo(_other);
    if (_compare != 0) {
        // prices are not equal
        return _compare;
    }
    if (!(_other instanceof Purchase)) {
        throw new RuntimeException("Right side of the compare must be a Purchase");
    }
    // compare by item name
    Purchase _otherPurchase = (Purchase) _other;
    return this.getName().compareTo(_otherPurchase.getName());
}
This trick allows the TreeSet to sort the purchases by price but still do a real comparison when one needs to be uniquely identified.
In summary I needed an object index to support a range scan where the key is a continuous value like double and add/remove is efficient.
I understand there are many other ways to solve this problem but I wanted to avoid writing my own tree class. My solution seems like a hack and I am surprised that I can't find anything else. if you know of a better way then please comment.
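For completeness, one common alternative (a sketch, not the original code): key a TreeMap by price and bucket equal-priced items, so add/remove stay O(log n) and range scans still work. The Purchase objects are simplified to plain names here:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.TreeMap;

public class PriceIndexAlternative {
    // price -> the set of purchases at exactly that price
    private final TreeMap<Double, Set<String>> byPrice = new TreeMap<>();

    public void add(String name, double price) {
        byPrice.computeIfAbsent(price, p -> new HashSet<>()).add(name);
    }

    // O(log n) to find the bucket, O(1) expected to remove from it.
    public void remove(String name, double price) {
        Set<String> bucket = byPrice.get(price);
        if (bucket != null) {
            bucket.remove(name);
            if (bucket.isEmpty()) {
                byPrice.remove(price); // keep the tree free of empty buckets
            }
        }
    }

    // All purchases with low <= price <= high.
    public List<String> range(double low, double high) {
        List<String> out = new ArrayList<>();
        for (Set<String> bucket : byPrice.subMap(low, true, high, true).values()) {
            out.addAll(bucket);
        }
        return out;
    }

    public static void main(String[] args) {
        PriceIndexAlternative index = new PriceIndexAlternative();
        index.add("candy", 1.03);
        index.add("bananas", 1.45);
        index.add("celery", 1.45);
        index.add("chicken", 4.99);
        // candy plus the two 1.45 items; order within a bucket may vary
        System.out.println(index.range(0.99, 3.99));
    }
}
```

The trade-off versus the polymorphic compareTo trick is an extra allocation per distinct price, in exchange for a compareTo that never needs to inspect the runtime type of its argument.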
I am working on an assignment where I have to implement my own HashMap. In the assignment text it is described as an Array of Lists, and whenever you want to add an element, the place it ends up in the Array is determined by its hashCode. In my case the keys are positions from a spreadsheet, so I have just taken columnNumber + rowNumber, converted that to a String and then to an int, and used it as the hashCode; the element is then inserted at that place in the Array. It is inserted in the form of a Node(key, value), where the key is the position of the cell and the value is the value of the cell.
But I must say I do not understand why we need an Array of Lists: if we end up with a list with more than one element, will that not increase the look-up time quite considerably? Should it not rather be an Array of Nodes?
Also, I have found this implementation of a HashMap in Java:
public class HashEntry {
    private int key;
    private int value;

    HashEntry(int key, int value) {
        this.key = key;
        this.value = value;
    }

    public int getKey() {
        return key;
    }

    public int getValue() {
        return value;
    }
}
public class HashMap {
    private final static int TABLE_SIZE = 128;
    HashEntry[] table;

    HashMap() {
        table = new HashEntry[TABLE_SIZE];
        for (int i = 0; i < TABLE_SIZE; i++)
            table[i] = null;
    }

    public int get(int key) {
        int hash = (key % TABLE_SIZE);
        while (table[hash] != null && table[hash].getKey() != key)
            hash = (hash + 1) % TABLE_SIZE;
        if (table[hash] == null)
            return -1;
        else
            return table[hash].getValue();
    }

    public void put(int key, int value) {
        int hash = (key % TABLE_SIZE);
        while (table[hash] != null && table[hash].getKey() != key)
            hash = (hash + 1) % TABLE_SIZE;
        table[hash] = new HashEntry(key, value);
    }
}
So is it correct that the put method looks first at table[hash], and if that slot is not empty and does not hold the key passed to put, it moves on to table[(hash + 1) % TABLE_SIZE]? And if it is the same key, it simply overwrites the value? And is it because get and put use the same way of looking up a place in the Array that, given the same key, they end up at the same place?
I know these questions might be a bit basic, but I have spent quite some time trying to get this sorted out, so any help would be much appreciated!
Edit
So now I have tried implementing the HashMap myself via a Node class, which just
constructs a node with a key and a corresponding value; it also has a getHashCode method, where I just concatenate the two values.
I have also constructed a SinglyLinkedListMap (part of a previous assignment), which I use as the bucket.
My hash function is simply hashCode % hashArray.length.
Here is my own implementation; what do you think of it?
package spreadsheet;

public class HashTableMap {
    private SinglyLinkedListMap[] hashArray;
    private int size;

    public HashTableMap() {
        hashArray = new SinglyLinkedListMap[64];
        size = 0;
    }

    public void insert(final Position key, final Expression value) {
        Node node = new Node(key, value);
        int hashNumber = node.getHashCode() % hashArray.length;
        if (hashArray[hashNumber] == null) {
            // first entry for this bucket: create the bucket
            SinglyLinkedListMap bucket = new SinglyLinkedListMap();
            bucket.insert(key, value);
            hashArray[hashNumber] = bucket;
        } else {
            // bucket exists: insert into it (only once)
            hashArray[hashNumber].insert(key, value);
        }
        size++;
        if (hashArray.length == size) {
            SinglyLinkedListMap[] newhashArray = new SinglyLinkedListMap[size * 2];
            for (int i = 0; i < size; i++) {
                newhashArray[i] = hashArray[i];
            }
            hashArray = newhashArray;
        }
    }

    public Expression lookUp(final Position key) {
        Node node = new Node(key, null);
        int hashNumber = node.getHashCode() % hashArray.length;
        SinglyLinkedListMap foundBucket = hashArray[hashNumber];
        return foundBucket.lookUp(key);
    }
}
The look-up time should be around O(1), so I would like to know whether that is the case, and if not, how I can improve it.
You have to have some plan to deal with hash collisions, in which two distinct keys fall in the same bucket, the same element of your array.
One of the simplest solutions is to keep a list of entries for each bucket.
If you have a good hashing algorithm, and make sure the number of buckets is bigger than the number of elements, you should end up with most buckets having zero or one items, so the list search should not take long. If the lists are getting too long it is time to rehash with more buckets to spread the data out.
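Such a rehash can be sketched as follows (a hypothetical minimal table with int keys; for brevity it does not overwrite duplicate keys the way a real map must, and real implementations track a load factor, typically around 0.75):

```java
import java.util.ArrayList;
import java.util.List;

public class RehashSketch {
    private List<int[]>[] buckets; // each entry is {key, value}
    private int size;

    @SuppressWarnings("unchecked")
    public RehashSketch() {
        buckets = new List[8];
    }

    public void put(int key, int value) {
        // Rehash once the table is about three-quarters full.
        if (size >= buckets.length * 3 / 4) {
            resize();
        }
        insert(buckets, key, value);
        size++;
    }

    public Integer get(int key) {
        List<int[]> bucket = buckets[Math.floorMod(key, buckets.length)];
        if (bucket != null) {
            for (int[] entry : bucket) {
                if (entry[0] == key) {
                    return entry[1];
                }
            }
        }
        return null;
    }

    // Doubling the array changes key % length, so every entry
    // must be re-inserted under its new bucket index.
    @SuppressWarnings("unchecked")
    private void resize() {
        List<int[]>[] bigger = new List[buckets.length * 2];
        for (List<int[]> bucket : buckets) {
            if (bucket == null) continue;
            for (int[] entry : bucket) {
                insert(bigger, entry[0], entry[1]);
            }
        }
        buckets = bigger;
    }

    private static void insert(List<int[]>[] table, int key, int value) {
        int i = Math.floorMod(key, table.length);
        if (table[i] == null) {
            table[i] = new ArrayList<>();
        }
        table[i].add(new int[]{key, value});
    }
}
```

The occasional full re-insertion is what keeps the average bucket short, which is why put remains amortized O(1) even though a single resizing put costs O(n).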
It really depends on how good your hashCode method is. Let's say you tried to make it as bad as possible: you made hashCode return 1 every time. If that were the case, you'd have an array of lists, but only one element of the array would have any data in it, and that element would grow to hold a huge list.
If you did that, you'd have a really inefficient hashmap. But, if your hashcode were a little better, it'd distribute the objects into many different array elements and as a result it'd be much more efficient.
The most ideal case (which often isn't achievable) is to have a hashcode method that returns a unique number no matter what object you put into it. If you could do that, you wouldn't ever need an array of lists. You could just use an array. But since your hashcode isn't "perfect" it's possible for two different objects to have the same hashcode. You need to be able to handle that scenario by putting them in a list at the same array element.
But, if your hashcode method was "pretty good" and rarely had collisions, you rarely would have more than 1 element in the list.
The Lists are often referred to as buckets and are a way of dealing with collisions. When two data elements have the same hash code mod TABLE_SIZE they collide, but both must still be stored.
A worse kind of collision is two different data points having the same key; that is disallowed in hash tables, and one will overwrite the other. If you just add row to column, then (2,1) and (1,2) will both have the key 3, which means they cannot be stored in the same hash table. If you concatenate the strings together without a separator, the problem is (12,1) versus (1,21): both have the key "121". With a separator (such as a comma), all the keys will be distinct.
Distinct keys can still land in the same bucket if their hash codes are equal mod TABLE_SIZE. The lists are one way to store two values in the same bucket.
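Those collision cases are easy to check directly; note also that Objects.hash mixes the fields, so transposed positions get different hash codes (a small demonstration):

```java
import java.util.Objects;

public class PositionHashDemo {
    public static void main(String[] args) {
        // Plain addition: (2,1) and (1,2) collide on 3.
        System.out.println((2 + 1) == (1 + 2)); // true

        // Concatenation without a separator: (12,1) and (1,21) both give "121".
        System.out.println(("" + 12 + 1).equals("" + 1 + 21)); // true

        // A separator keeps the keys distinct...
        System.out.println("12,1".equals("1,21")); // false

        // ...and Objects.hash mixes the fields, so order matters.
        System.out.println(Objects.hash(2, 1) == Objects.hash(1, 2)); // false
    }
}
```

A separator fixes the key-uniqueness problem; field mixing only improves the hash distribution, and equal hashes must still be resolved by equals().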
class SpreadSheetPosition {
    int column;
    int row;

    @Override
    public int hashCode() {
        return column + row;
    }
}

class HashMap {
    private List[] buckets = new List[N];

    public void put(Object key, Object value) {
        int keyHashCode = key.hashCode();
        int bucketIndex = keyHashCode % N;
        ...
    }
}
Compare having N lists with having just one list/array. To search a single list you may have to traverse it entirely. By using an array of lists, you at least shorten the individual lists, possibly even down to lists of one or zero elements (null).
If hashCode() is as unique as possible, the chance of an immediate hit is high.