Sorting a Map by an attribute of object - java

Disclaimer: I have already posted a question like this once and it was marked as a duplicate. Please try to help me. I have gone through all the previous approaches on Stack Overflow and none of them helped. The methods mentioned there for sorting a Map(Key, Value) didn't work in my case because I have to go one step further, i.e. retrieve an attribute of the value. This time, I have tried to give full detail.
I have a Map<String, Object> in Java and I want to sort it using one of the attributes of the Object.
e.g. Suppose I have a class
class Entry
{
    int id;
    String name;
    String address;
    // Rest of the code
}
Now, I created a map
Map<String,Entry>
I want to sort the map by the id attribute of the Entry class (Entry.id). Please help!
For example, I have three objects of Entry class
entry1:
    id=1
    name="abc"
    address="india"
entry2:
    id=2
    name="xyz"
    address="india"
entry3:
    id=3
    name="pqr"
    address="india"
Now, I have the Map initially as follows:
Key   : Value
first : entry2
second: entry3
third : entry1
After sorting, it should be like:
Key   : Value
third : entry1
first : entry2
second: entry3

You can easily accomplish the task with the stream API:
Map<String, Entry> resultSet = myMap.entrySet()
        .stream()
        .sorted(Comparator.comparingInt(e -> e.getValue().getId()))
        .collect(Collectors.toMap(Map.Entry::getKey,
                Map.Entry::getValue,
                (left, right) -> left,
                LinkedHashMap::new));
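Here is a minimal, self-contained sketch of the approach above, using the example data from the question (the Entry record, its accessors, and the map contents are my assumptions, not code from the question):

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class SortByValueAttribute {
    // Stand-in for the question's Entry class; a record gives us the id() accessor for free
    record Entry(int id, String name, String address) {}

    public static void main(String[] args) {
        Map<String, Entry> myMap = new HashMap<>();
        myMap.put("first", new Entry(2, "xyz", "india"));
        myMap.put("second", new Entry(3, "pqr", "india"));
        myMap.put("third", new Entry(1, "abc", "india"));

        // Sort the entries by the id of the value, keeping that order in a LinkedHashMap
        Map<String, Entry> sorted = myMap.entrySet().stream()
                .sorted(Comparator.comparingInt(e -> e.getValue().id()))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,
                        (left, right) -> left, LinkedHashMap::new));

        System.out.println(sorted.keySet()); // [third, first, second]
    }
}
```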

Your requirement is typically a symptom of bad data-structure usage.
If you wanted the map to be sorted by an attribute of key objects, you would just write a custom comparator, but since you want to sort by values, it's a bit more complicated.
Try understanding answers to this question: Sort a Map<Key, Value> by values. And then try using a custom comparator in combination.

You can’t sort a map.
You can have a map that keeps its entries sorted, or you can sort a List<Map.Entry>.
Try this:
Map<String, Entry> map; // your map
Map<String, Entry> sorted = new TreeMap<>(Comparator.comparingInt(s -> map.get(s).getId()));
sorted.putAll(map);
Note that this TreeMap compares keys by the id of the value they map to, so two keys whose values share the same id would be treated as the same key; add a tie-breaker on the key itself if that can happen.

How to remove Keys that would cause Collisions before executing Collectors.toMap()

I have a stream of objects similar to this previous question, however, instead of ignoring duplicate values, I would like to remove any values from that stream beforehand and print them out.
For example, from this snippet:
Map<String, String> phoneBook = people.stream()
.collect(toMap(Person::getName,
Person::getAddress));
If there were duplicate entries, it would cause a java.lang.IllegalStateException: Duplicate key error to be thrown.
The solution proposed in that question used a mergeFunction to keep the first entry if a collision was found.
Map<String, String> phoneBook =
people.stream()
.collect(Collectors.toMap(
Person::getName,
Person::getAddress,
(address1, address2) -> {
System.out.println("duplicate key found!");
return address1;
}
));
Instead of keeping the first entry, if there is a collision from a duplicate key in the stream, I want to know which value caused the collision and make sure that there are no occurrences of that value within the resulting map.
I.e. if "Bob" appeared three times in the stream, it should not be in the map even once.
In the process of creating that map, I would like to filter out any duplicate names and record them some way.
I want to make sure that when creating the map there can be no duplicate entries, and that there is some way to know which entries had duplicate keys in the incoming stream. I was thinking about using groupingBy and filter beforehand to find the duplicate keys, but I am not sure what the best way to do it is.
I would like to remove any values from that stream beforehand.
As @JimGarrison has pointed out, preprocessing the data doesn't make sense.
You can't know in advance whether a name is unique or not until the whole data set has been processed.
Another thing to consider: inside the stream pipeline (before the collector) you have no knowledge of what data has been encountered previously, because the results of intermediate operations must not depend on any state.
In case you are thinking that streams act like a sequence of loops, and are therefore assuming it is possible to preprocess stream elements before collecting them, that is not correct. Elements of the stream pipeline are processed lazily, one at a time, i.e. all the operations in the pipeline are applied to a single element, and each operation is applied only if it is needed (that is what laziness means).
For more information, have a look at this tutorial and the API documentation.
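A small self-contained demo of that lazy, one-element-at-a-time behavior (the class and variable names are mine): limit(2) short-circuits the pipeline, so the third element never flows through the earlier operations.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

public class LazyDemo {
    public static void main(String[] args) {
        List<String> seen = new ArrayList<>();
        List<String> result = Stream.of("a", "b", "c")
                .peek(seen::add)          // records which elements the pipeline actually touches
                .map(String::toUpperCase)
                .limit(2)                 // short-circuits: "c" is never pulled through the pipeline
                .toList();
        System.out.println(result + " seen=" + seen); // [A, B] seen=[a, b]
    }
}
```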
Implementations
You can segregate unique values and duplicates in a single stream statement by utilizing Collectors.teeing() and a custom object that contains separate collections of duplicated and unique entries of the phone book.
Since the primary function of this object is only to carry the data, I've implemented it as a Java 16 record.
public record FilteredPhoneBook(Map<String, String> uniquePersonsAddressByName,
                                List<String> duplicatedNames) {}
Collector teeing() expects three arguments: two collectors and a function that merges the results produced by both collectors.
The map generated by groupingBy() in conjunction with counting() is meant to determine the duplicated names.
Since there's no point in preprocessing the data, toMap(), used as the second collector, will create a map containing all the names.
When both collectors hand their results to the merger function, it takes care of removing the duplicates.
public static FilteredPhoneBook getFilteredPhoneBook(Collection<Person> people) {
    return people.stream()
            .collect(Collectors.teeing(
                    Collectors.groupingBy(Person::getName, Collectors.counting()), // intermediate Map<String, Long>
                    Collectors.toMap(                                              // intermediate Map<String, String>
                            Person::getName,
                            Person::getAddress,
                            (left, right) -> left),
                    (Map<String, Long> countByName, Map<String, String> addressByName) -> {
                        countByName.values().removeIf(count -> count == 1);     // removing unique names
                        addressByName.keySet().removeAll(countByName.keySet()); // removing all duplicates
                        return new FilteredPhoneBook(addressByName, new ArrayList<>(countByName.keySet()));
                    }
            ));
}
Another way to address this problem is to utilize a Map<String, Boolean> as the means of discovering duplicates, as @Holger has suggested.
The first collector will be written using toMap(). It will associate true with a key that has been encountered only once, and its mergeFunction will assign the value false if at least one duplicate is found.
The rest of the logic remains the same.
public static FilteredPhoneBook getFilteredPhoneBook(Collection<Person> people) {
    return people.stream()
            .collect(Collectors.teeing(
                    Collectors.toMap(                               // intermediate Map<String, Boolean>
                            Person::getName,
                            person -> true,                         // not yet proved to be a duplicate
                            (left, right) -> false),                // is a duplicate
                    Collectors.toMap(                               // intermediate Map<String, String>
                            Person::getName,
                            Person::getAddress,
                            (left, right) -> left),
                    (Map<String, Boolean> isUniqueByName, Map<String, String> addressByName) -> {
                        isUniqueByName.values().removeIf(Boolean::booleanValue); // removing unique names
                        addressByName.keySet().removeAll(isUniqueByName.keySet()); // removing all duplicates
                        return new FilteredPhoneBook(addressByName, new ArrayList<>(isUniqueByName.keySet()));
                    }
            ));
}
main() - demo
public static void main(String[] args) {
    List<Person> people = List.of(
            new Person("Alise", "address1"),
            new Person("Bob", "address2"),
            new Person("Bob", "address3"),
            new Person("Carol", "address4"),
            new Person("Bob", "address5")
    );

    FilteredPhoneBook filteredPhoneBook = getFilteredPhoneBook(people);

    System.out.println("Unique entries:");
    filteredPhoneBook.uniquePersonsAddressByName().forEach((k, v) -> System.out.println(k + " : " + v));
    System.out.println("\nDuplicates:");
    filteredPhoneBook.duplicatedNames().forEach(System.out::println);
}
Output
Unique entries:
Alise : address1
Carol : address4
Duplicates:
Bob
You can't know which keys are duplicates until you have processed the entire input stream. Therefore, any pre-processing step has to make a complete pass of the input before your main logic, which is wasteful.
An alternate approach could be:
Use the merge function to insert a dummy value for the offending key
At the same time, insert the offending key into a Set<K>
After the input stream is processed, iterate over the Set<K> to remove offending keys from the primary map.
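A sketch of those three steps, with one adjustment: since the merge function never sees the key, this version marks duplicates with a sentinel dummy value and collects the offending keys afterwards (the record, names, and sentinel are my assumptions):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class DummySentinelApproach {
    record Person(String name, String address) {}

    // A distinct object (not an interned literal) so an identity check can recognize it later
    private static final String DUMMY = new String("<duplicate>");

    public static void main(String[] args) {
        List<Person> people = List.of(
                new Person("Alise", "address1"),
                new Person("Bob", "address2"),
                new Person("Bob", "address3"),
                new Person("Carol", "address4"));

        // Step 1: the merge function replaces the value of any duplicated key with the dummy
        Map<String, String> phoneBook = people.stream()
                .collect(Collectors.toMap(Person::name, Person::address, (l, r) -> DUMMY));

        // Step 2: keys still mapped to the dummy are the offending keys
        Set<String> duplicates = phoneBook.entrySet().stream()
                .filter(e -> e.getValue() == DUMMY)   // identity check on the sentinel
                .map(Map.Entry::getKey)
                .collect(Collectors.toSet());

        // Step 3: remove the offending keys from the primary map
        phoneBook.keySet().removeAll(duplicates);

        System.out.println(phoneBook);
        System.out.println(duplicates);  // [Bob]
    }
}
```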
In mathematical terms you want to partition your grouped aggregate and handle both parts separately.
Map<String, String> makePhoneBook(Collection<Person> people) {
    Map<Boolean, List<Person>> phoneBook = people.stream()
            .collect(Collectors.groupingBy(Person::getName))
            .values()
            .stream()
            .collect(Collectors.partitioningBy(list -> list.size() > 1,
                    Collectors.mapping(r -> r.get(0),
                            Collectors.toList())));

    // handle duplicates
    phoneBook.get(true)
            .forEach(x -> System.out.println("duplicate found " + x));

    return phoneBook.get(false).stream()
            .collect(Collectors.toMap(
                    Person::getName,
                    Person::getAddress));
}

Java 8 Need advice with stream

I have a List:
class DummyClass {
    List<String> rname;
    String name;
}
The values in my List look like this:
list.add(new DummyClass(Arrays.asList("a","b"), "apple"));
list.add(new DummyClass(Arrays.asList("a","b"), "banana"));
list.add(new DummyClass(Arrays.asList("a","c"), "orange"));
list.add(new DummyClass(null, "apple"));
I want to convert the above List into a Map<String, Set<String>>, where the key is each element of rname and the value is the Set of name fields:
{
    "a" -> ["apple", "orange", "banana"],
    "b" -> ["apple", "banana"],
    "c" -> ["orange"]
}
I am trying to use Java streams and am facing a NullPointerException. Can someone please guide me?
Map<String, Set<String>> map = list.stream()
        .collect(Collectors.groupingBy(DummyClass::rname,
                Collectors.mapping(DummyClass::getName,
                        Collectors.toSet())));
I am not able to process each element of the list (e.g. Arrays.asList("a","b")) in the stream.
There is a flaw here:
Collectors.groupingBy(DummyClass::rname,
        Collectors.mapping(DummyClass::getName,
                Collectors.toSet()))
where I am processing the entire list together rather than each element. Shall I use another stream?
You need to filter: many of the utility classes that construct collections no longer allow null, e.g. Map.of or the groupingBy you have above.
You can either filter, or first map, replacing null with a placeholder string, and then group.
Map<String, Set<String>> map = list.stream()
        .filter(v -> v.rname() != null) // drop entries whose grouping key is null
        .collect(Collectors.groupingBy(DummyClass::rname,
                Collectors.mapping(DummyClass::getName,
                        Collectors.toSet())));
Or, if you don't want to drop null values, map first and produce a key that all null names can be grouped under, something like:
Map<String, Set<String>> map = list.stream()
        .map(v -> Map.entry(v.getName() == null ? "null" : v.getName(), v))
        .collect(Collectors.groupingBy(Map.Entry::getKey,
                Collectors.mapping(Map.Entry::getKey,
                        Collectors.toSet())));
The groupingBy that I have above needs to be changed, as it now has a Map.Entry rather than your desired type.
I'm writing this on a mobile without an editor, so I will leave that part to you :)
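For completeness, the element-wise grouping the question asks for can be achieved by first flattening each rname list into (element, name) pairs with flatMap. A sketch, assuming record-style accessors rname() and name() (not code from the question):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class GroupByListElements {
    // Stand-in for the question's class, with record-style accessors
    record DummyClass(List<String> rname, String name) {}

    public static void main(String[] args) {
        List<DummyClass> list = Arrays.asList(
                new DummyClass(Arrays.asList("a", "b"), "apple"),
                new DummyClass(Arrays.asList("a", "b"), "banana"),
                new DummyClass(Arrays.asList("a", "c"), "orange"),
                new DummyClass(null, "apple"));

        Map<String, Set<String>> map = list.stream()
                .filter(d -> d.rname() != null)            // a null rname would otherwise NPE
                .flatMap(d -> d.rname().stream()
                        .map(r -> Map.entry(r, d.name()))) // one (rname-element, name) pair each
                .collect(Collectors.groupingBy(Map.Entry::getKey,
                        Collectors.mapping(Map.Entry::getValue, Collectors.toSet())));

        System.out.println(map);
    }
}
```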

How to convert a nested map to a list<Object>

How to convert a nested map to a list:
the map is:
Map<Integer, Map<Integer, Map<String, Double>>> list
the Object class is:
public class employee {
    private Integer id;
    private Integer number;
    private String name;
    private Double salary;
    // getters, setters, constructor
}
How to convert the nested map to the List?
Iterate over the map entries. For each inner map, also iterate over its entries, etc. For each entry in the innermost map, create an Employee and add it to your list.
The standard way to iterate over a map is to iterate over its entry set. vefthym's answer shows you how to do this with a for loop. You may elaborate that code into what you need.
You may also do it with streams, provided you can use Java 8. I am assuming that your outer map maps from ID to an intermediate map (I would expect that intermediate map to hold exactly one entry; but my code will also work with more or fewer). The next map maps from number to a map from name to salary.
List<Employee> empls = list.entrySet()
        .stream()
        .flatMap(oe -> oe.getValue()
                .entrySet()
                .stream()
                .flatMap((Map.Entry<Integer, Map<String, Double>> me) -> me.getValue()
                        .entrySet()
                        .stream()
                        .map((Map.Entry<String, Double> ie)
                                -> new Employee(oe.getKey(), me.getKey(), ie.getKey(), ie.getValue()))))
        .collect(Collectors.toList());
That was meant to be oe for outer entry, that is, entry in the outer map. Similarly me for middle entry and ie for inner entry. I have renamed your class to begin with a capital E to follow Java naming conventions, and I have assumed a convenient constructor.
EDIT: vefthym, where did your answer go now that I was referring to it? I know you were not too happy about it yourself, it’s fair enough. In any case, the standard way to iterate over a map with a for loop is:
for (Map.Entry<Integer, String> currentEntry : yourMap.entrySet()) {
    // do your stuff here
    // use currentEntry.getKey() and currentEntry.getValue() to get the key and value from the current entry
}
You need to repeat the type arguments from your map declaration in the <> after Map.Entry.
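Spelled out with plain for loops, the triple iteration described above might look like this (the Employee record and the sample data are my assumptions, not from the question):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class NestedMapToList {
    // Stand-in for the question's employee class, renamed per Java conventions
    record Employee(Integer id, Integer number, String name, Double salary) {}

    public static void main(String[] args) {
        Map<Integer, Map<Integer, Map<String, Double>>> list =
                Map.of(1, Map.of(10, Map.of("Alice", 50000.0)),
                       2, Map.of(20, Map.of("Bob", 60000.0)));

        List<Employee> employees = new ArrayList<>();
        // One loop per nesting level; the innermost entry yields one Employee
        for (Map.Entry<Integer, Map<Integer, Map<String, Double>>> outer : list.entrySet()) {
            for (Map.Entry<Integer, Map<String, Double>> middle : outer.getValue().entrySet()) {
                for (Map.Entry<String, Double> inner : middle.getValue().entrySet()) {
                    employees.add(new Employee(outer.getKey(), middle.getKey(),
                            inner.getKey(), inner.getValue()));
                }
            }
        }
        System.out.println(employees.size()); // 2
    }
}
```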

Java 8 streams: iterate over Map of Lists

I have the following Object and a Map:
MyObject
String name;
Long priority;
foo bar;
Map<String, List<MyObject>> anotherHashMap;
I want to convert the Map into another Map. The key of the result map is the key of the input map. The value of the result map is the "name" property of MyObject, ordered by priority.
Ordering and extracting the name is not the problem, but I could not put the result into the result map. I do it the old Java 7 way below, but it would be nice if it were possible to use the streams API.
Map<String, List<String>> result = new HashMap<>();
for (String identifier : anotherHashMap.keySet()) {
    List<String> generatedList = anotherHashMap.get(identifier).stream()...;
    result.put(identifier, generatedList);
}
Has anyone an idea? I tried this, but got stuck:
anotherHashMap.entrySet().stream().collect(Collectors.asMap(..., ...));
Map<String, List<String>> result = anotherHashMap
        .entrySet().stream()                              // Stream over entry set
        .collect(Collectors.toMap(                        // Collect final result map
                Map.Entry::getKey,                        // Key mapping is the same
                e -> e.getValue().stream()                // Stream over list
                        .sorted(Comparator.comparingLong(MyObject::getPriority)) // Sort by priority
                        .map(MyObject::getName)           // Apply mapping to MyObject
                        .collect(Collectors.toList())     // Collect mapping into list
        ));
Essentially, you stream over the entry set and collect it into a new map. To compute the value in the new map, you stream over the List<MyObject> from the old map, sort it, and apply a mapping and collection function to it. In this case I used MyObject::getName as the mapping and collected the resulting names into a list.
For generating another map, we can have something like the following:
Map<String, List<MyObject>> result = anotherHashMap.entrySet().stream()
        .collect(Collectors.toMap(
                elem -> elem.getKey(),
                elem -> elem.getValue())); // can further process the value here
Above I am recreating the map again, but you can process the key or the value according to your needs.
Map<String, List<String>> result = anotherHashMap.entrySet().stream()
        .collect(Collectors.toMap(
                Map.Entry::getKey,
                e -> e.getValue().stream()
                        .sorted(comparing(MyObject::getPriority))
                        .map(MyObject::getName)
                        .collect(Collectors.toList())));
Similar to the answer of Mike Kobit, but the sorting is applied in the correct place (i.e. the values are sorted, not the map entries), and the more concise statically imported Comparator.comparing is used to obtain the Comparator for sorting.

Foreach loop in java for Dictionary

I want to go through every item in a dictionary in Java. To clarify what I want to do, this is the C# code:
Dictionary<string, Label> LableList = new Dictionary<string, Label>();
foreach (KeyValuePair<string, Label> z in LableList) { ... }
I don't know how to do this in Java. For example, I tried
for (Object z : dic)
but it says it's not iterable. Please advise.
I'm assuming you have a Map<String, Label>, which is Java's built-in dictionary structure. Java doesn't let you iterate directly over a Map (i.e. it doesn't implement Iterable) because it would be ambiguous what you're actually iterating over.
It's just a matter of choosing whether to iterate through the keys, the values, or the entries (both).
e.g.
Map<String, Label> map = new HashMap<String, Label>();
//...
for (String key : map.keySet()) {
}

for (Label value : map.values()) {
}

for (Map.Entry<String, Label> entry : map.entrySet()) {
    String key = entry.getKey();
    Label value = entry.getValue();
}
Your C# code seems to be the same as iterating over the entries (the last example).
java.util.Map is the Dictionary equivalent, and below is an example of how you can iterate through each entry:
for (Map.Entry<K, V> e : map.entrySet())
{
    System.out.println(e.getKey() + ": " + e.getValue());
}
Your best bet is to use this:
for (String key : LableList.keySet()) {
    Label value = LableList.get(key);
    // do what you wish with key and value here
}
In Java however, a better bet is to not use Dictionary as you would in .NET but to use one of the Map subclasses, e.g. HashMap. You can iterate through one of these like this:
for (Map.Entry<String, Label> e : myMap.entrySet()) {
    // Do what you wish with e.getKey() and e.getValue()
}
You are also advised against using Dictionary in the official javadoc.
I was trying to add the contents of one HashMap (a) into another HashMap (b).
I found it simple to iterate through HashMap a this way:
a.forEach((k, v) -> b.put(k, v));
You can adapt my example to do whatever you want on the other side of the "->".
Note that this is a Lambda expression, and that you would have to use Java 1.8 (Java 8) or later for this to work. :-)
