Java 8 Distinct by property
In Java 8 how can I filter a collection using the Stream API by checking the distinctness of a property of each object?
For example, I have a list of Person objects and I want to remove people with the same name:
persons.stream().distinct();
This will use the default equality check for a Person object, so I need something like:
persons.stream().distinct(p -> p.getName());
Unfortunately the distinct() method has no such overload. Without modifying the equality check inside the Person class is it possible to do this succinctly?
Consider distinct to be a stateful filter. Here is a function that returns a predicate that maintains state about what it's seen previously, and that returns whether the given element was seen for the first time:
public static <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) {
Set<Object> seen = ConcurrentHashMap.newKeySet();
return t -> seen.add(keyExtractor.apply(t));
}
Then you can write:
persons.stream().filter(distinctByKey(Person::getName))
Note that if the stream is ordered and is run in parallel, this will preserve an arbitrary element from among the duplicates, instead of the first one, as distinct() does.
(This is essentially the same as my answer to this question: Java Lambda Stream Distinct() on arbitrary key?)
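For completeness, here is a minimal self-contained sketch of the predicate in use. The Person record, names, and ages here are stand-ins for illustration, not from the question:

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class DistinctByKeyDemo {
    // Stand-in for the question's Person class.
    record Person(String name, int age) {}

    public static <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) {
        Set<Object> seen = ConcurrentHashMap.newKeySet();
        return t -> seen.add(keyExtractor.apply(t));
    }

    static List<Person> demo() {
        List<Person> persons = List.of(
                new Person("Alice", 30), new Person("Bob", 25), new Person("Alice", 41));
        // Keeps the first Alice and Bob; the second Alice has a duplicate name.
        return persons.stream()
                .filter(distinctByKey(Person::name))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Each call to distinctByKey builds a fresh seen set, so the returned predicate is single-use: create a new one per stream pipeline.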
An alternative would be to place the persons in a map using the name as a key:
persons.collect(Collectors.toMap(Person::getName, p -> p, (p, q) -> p)).values();
Note that, in case of a duplicate name, the Person that is kept will be the first one encountered.
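As an aside (my addition, not part of the original answer): if you also want the surviving values to keep the list's encounter order, the four-argument toMap overload lets you supply a LinkedHashMap as the map factory. A sketch, with a stand-in Person record:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.stream.Collectors;

public class DistinctToMapDemo {
    // Stand-in for the question's Person class.
    record Person(String name) {}

    static List<Person> distinctByName(List<Person> persons) {
        return new ArrayList<>(persons.stream()
                .collect(Collectors.toMap(
                        Person::name,              // key: the property to deduplicate on
                        p -> p,                    // value: the person itself
                        (first, second) -> first,  // on a duplicate name, keep the first
                        LinkedHashMap::new))       // preserve encounter order of keys
                .values());
    }

    public static void main(String[] args) {
        List<Person> in = List.of(new Person("B"), new Person("A"), new Person("B"));
        System.out.println(distinctByName(in)); // B first, then A
    }
}
```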
You can wrap the person objects into another class, that only compares the names of the persons. Afterward, you unwrap the wrapped objects to get a person stream again. The stream operations might look as follows:
persons.stream()
.map(Wrapper::new)
.distinct()
.map(Wrapper::unwrap)
...;
The class Wrapper might look as follows:
class Wrapper {
private final Person person;
public Wrapper(Person person) {
this.person = person;
}
public Person unwrap() {
return person;
}
public boolean equals(Object other) {
if (other instanceof Wrapper) {
return ((Wrapper) other).person.getName().equals(person.getName());
} else {
return false;
}
}
public int hashCode() {
return person.getName().hashCode();
}
}
Another solution uses a Set. It may not be the ideal solution, but it works:
Set<String> set = new HashSet<>(persons.size());
persons.stream().filter(p -> set.add(p.getName())).collect(Collectors.toList());
Or, if you can modify the original list, you can use the removeIf method:
persons.removeIf(p -> !set.add(p.getName()));
There's a simpler approach using a TreeSet with a custom comparator.
persons.stream()
.collect(Collectors.toCollection(
() -> new TreeSet<Person>((p1, p2) -> p1.getName().compareTo(p2.getName()))
));
We can also use RxJava (a very powerful reactive extensions library):
Observable.from(persons).distinct(Person::getName)
or
Observable.from(persons).distinct(p -> p.getName())
You can use the groupingBy collector:
persons.collect(Collectors.groupingBy(p -> p.getName())).values().forEach(t -> System.out.println(t.get(0).getId()));
If you want to have another stream you can use this:
persons.collect(Collectors.groupingBy(p -> p.getName())).values().stream().map(l -> (l.get(0)));
You can use the distinct(HashingStrategy) method in Eclipse Collections.
List<Person> persons = ...;
MutableList<Person> distinct =
ListIterate.distinct(persons, HashingStrategies.fromFunction(Person::getName));
If you can refactor persons to implement an Eclipse Collections interface, you can call the method directly on the list.
MutableList<Person> persons = ...;
MutableList<Person> distinct =
persons.distinct(HashingStrategies.fromFunction(Person::getName));
HashingStrategy is simply a strategy interface that allows you to define custom implementations of equals and hashCode.
public interface HashingStrategy<E>
{
int computeHashCode(E object);
boolean equals(E object1, E object2);
}
Note: I am a committer for Eclipse Collections.
A similar approach to the one Saeed Zarinfam used, but in a more Java 8 style:
persons.collect(Collectors.groupingBy(p -> p.getName())).values().stream()
.map(plans -> plans.stream().findFirst().get())
.collect(toList());
You can use the StreamEx library:
StreamEx.of(persons)
.distinct(Person::getName)
.toList()
I recommend using Vavr, if you can. With this library you can do the following:
io.vavr.collection.List.ofAll(persons)
.distinctBy(Person::getName)
.toJavaSet() // or any other Java 8 collection
Extending Stuart Marks's answer, this can be done in a shorter way and without a concurrent map (if you don't need parallel streams):
public static <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) {
final Set<Object> seen = new HashSet<>();
return t -> seen.add(keyExtractor.apply(t));
}
Then call:
persons.stream().filter(distinctByKey(p -> p.getName()));
My approach is to group all the objects with the same property together, then cut the groups short to a size of 1, and finally collect them as a List.
List<YourPersonClass> listWithDistinctPersons = persons.stream()
//operators to remove duplicates based on person name
.collect(Collectors.groupingBy(p -> p.getName()))
.values()
.stream()
//cut short the groups to size of 1
.flatMap(group -> group.stream().limit(1))
//collect distinct users as list
.collect(Collectors.toList());
A list of distinct objects can be obtained using:
List<Person> distinctPersons = persons.stream()
.collect(Collectors.collectingAndThen(
Collectors.toCollection(() -> new TreeSet<>(Comparator.comparing(Person::getName))),
ArrayList::new));
I made a generic version:
private <T, R> Collector<T, ?, Stream<T>> distinctByKey(Function<T, R> keyExtractor) {
return Collectors.collectingAndThen(
toMap(
keyExtractor,
t -> t,
(t1, t2) -> t1
),
(Map<R, T> map) -> map.values().stream()
);
}
An example:
Stream.of(new Person("Jean"),
new Person("Jean"),
new Person("Paul")
)
.filter(...)
.collect(distinctByKey(Person::getName)) // returns a stream of 2 Persons: Jean and Paul
.map(...)
.collect(toList())
Another library that supports this is jOOλ, and its Seq.distinct(Function<T,U>) method:
Seq.seq(persons).distinct(Person::getName).toList();
Under the hood, it does practically the same thing as the accepted answer, though.
Set<YourPropertyType> set = new HashSet<>();
list
.stream()
.filter(it -> set.add(it.getYourProperty()))
.forEach(it -> ...);
While the highest-voted answer is absolutely the best answer with respect to Java 8, it is at the same time among the worst in terms of performance. If you really want a low-performance application, go ahead and use it. The simple requirement of extracting a unique set of person names can be achieved with a mere for-each loop and a Set.
Things get even worse if the list is above a size of 10.
Suppose you have a collection of 20 objects, like this:
public static final List<SimpleEvent> testList = Arrays.asList(
new SimpleEvent("Tom"), new SimpleEvent("Dick"),new SimpleEvent("Harry"),new SimpleEvent("Tom"),
new SimpleEvent("Dick"),new SimpleEvent("Huckle"),new SimpleEvent("Berry"),new SimpleEvent("Tom"),
new SimpleEvent("Dick"),new SimpleEvent("Moses"),new SimpleEvent("Chiku"),new SimpleEvent("Cherry"),
new SimpleEvent("Roses"),new SimpleEvent("Moses"),new SimpleEvent("Chiku"),new SimpleEvent("gotya"),
new SimpleEvent("Gotye"),new SimpleEvent("Nibble"),new SimpleEvent("Berry"),new SimpleEvent("Jibble"));
where your SimpleEvent object looks like this:
public class SimpleEvent {
private String name;
private String type;
public SimpleEvent(String name) {
this.name = name;
this.type = "type_"+name;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getType() {
return type;
}
public void setType(String type) {
this.type = type;
}
}
And to test, you have JMH code like this (please note, I'm using the same distinctByKey predicate mentioned in the accepted answer):
@Benchmark
@OutputTimeUnit(TimeUnit.SECONDS)
public void aStreamBasedUniqueSet(Blackhole blackhole) throws Exception{
Set<String> uniqueNames = testList
.stream()
.filter(distinctByKey(SimpleEvent::getName))
.map(SimpleEvent::getName)
.collect(Collectors.toSet());
blackhole.consume(uniqueNames);
}
@Benchmark
@OutputTimeUnit(TimeUnit.SECONDS)
public void aForEachBasedUniqueSet(Blackhole blackhole) throws Exception{
Set<String> uniqueNames = new HashSet<>();
for (SimpleEvent event : testList) {
uniqueNames.add(event.getName());
}
blackhole.consume(uniqueNames);
}
public static void main(String[] args) throws RunnerException {
Options opt = new OptionsBuilder()
.include(MyBenchmark.class.getSimpleName())
.forks(1)
.mode(Mode.Throughput)
.warmupBatchSize(3)
.warmupIterations(3)
.measurementIterations(3)
.build();
new Runner(opt).run();
}
Then you'll have Benchmark results like this:
Benchmark Mode Samples Score Score error Units
c.s.MyBenchmark.aForEachBasedUniqueSet thrpt 3 2635199.952 1663320.718 ops/s
c.s.MyBenchmark.aStreamBasedUniqueSet thrpt 3 729134.695 895825.697 ops/s
As you can see, the simple for-each loop is about 3 times better in throughput, with a smaller error score, compared to the Java 8 stream. The higher the throughput, the better the performance.
I would like to improve on Stuart Marks's answer. What if the key is null? It will throw a NullPointerException. Here I ignore null keys by adding one more check, keyExtractor.apply(t) != null.
public static <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) {
Set<Object> seen = ConcurrentHashMap.newKeySet();
return t -> keyExtractor.apply(t)!=null && seen.add(keyExtractor.apply(t));
}
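A quick self-contained check of this null-tolerant variant (the Person record and sample data are mine, for illustration). Note two properties of this version: elements with a null key are dropped entirely rather than grouped together, and the key extractor is invoked twice per element:

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class NullSafeDistinctDemo {
    // Stand-in for the question's Person class; name may be null.
    record Person(String name) {}

    public static <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) {
        Set<Object> seen = ConcurrentHashMap.newKeySet();
        // Without the null check, seen.add(null) would throw NullPointerException,
        // because ConcurrentHashMap-backed sets reject null elements.
        return t -> keyExtractor.apply(t) != null && seen.add(keyExtractor.apply(t));
    }

    static List<Person> demo() {
        return Stream.of(new Person("A"), new Person(null), new Person("A"))
                .filter(distinctByKey(Person::name))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(demo()); // only the first "A" survives
    }
}
```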
This works like a charm:
Group the data by the unique key to form a map.
Return the first object from each value of the map (there could be multiple persons having the same name).
persons.stream()
.collect(groupingBy(Person::getName))
.values()
.stream()
.flatMap(values -> values.stream().limit(1))
.collect(toList());
The easiest way to implement this is to jump on the sort feature, as it already accepts an optional Comparator that can be created from an element’s property. Then you have to filter out the duplicates, which can be done using a stateful Predicate that uses the fact that, for a sorted stream, all equal elements are adjacent:
Comparator<Person> c=Comparator.comparing(Person::getName);
stream.sorted(c).filter(new Predicate<Person>() {
Person previous;
public boolean test(Person p) {
if(previous!=null && c.compare(previous, p)==0)
return false;
previous=p;
return true;
}
})./* more stream operations here */;
Of course, a stateful Predicate is not thread-safe. However, if that’s your need, you can move this logic into a Collector and let the stream take care of the thread-safety when using your Collector. This depends on what you want to do with the stream of distinct elements, which you didn’t tell us in your question.
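One way to move that logic into a Collector, as the paragraph above suggests, is a collector that drops adjacent duplicates of a pre-sorted stream. This is my sketch, not the answer author's code; it assumes equal elements are already adjacent (e.g. after sorted(c)), and its combiner checks the boundary element so partial results merge correctly in parallel runs:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collector;
import java.util.stream.Stream;

public class AdjacentDistinctDemo {
    // Keeps only the first element of each run of equal (per the comparator)
    // adjacent elements. Input must already be sorted by the same comparator.
    static <T> Collector<T, ?, List<T>> dropAdjacentDuplicates(Comparator<? super T> c) {
        return Collector.of(
                ArrayList::new,
                (list, t) -> {
                    if (list.isEmpty() || c.compare(list.get(list.size() - 1), t) != 0) {
                        list.add(t);
                    }
                },
                (left, right) -> {
                    // If the runs meet at the boundary, drop the head of the right part.
                    if (!left.isEmpty() && !right.isEmpty()
                            && c.compare(left.get(left.size() - 1), right.get(0)) == 0) {
                        right.remove(0);
                    }
                    left.addAll(right);
                    return left;
                });
    }

    public static void main(String[] args) {
        Comparator<String> byLength = Comparator.comparingInt(String::length);
        List<String> result = Stream.of("a", "bb", "cc", "d")
                .sorted(byLength)
                .collect(dropAdjacentDuplicates(byLength));
        System.out.println(result); // [a, bb]
    }
}
```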
There are a lot of approaches; this one will also help. Simple, clean and clear:
List<Employee> employees = new ArrayList<>();
employees.add(new Employee(11, "Ravi"));
employees.add(new Employee(12, "Stalin"));
employees.add(new Employee(23, "Anbu"));
employees.add(new Employee(24, "Yuvaraj"));
employees.add(new Employee(35, "Sena"));
employees.add(new Employee(36, "Antony"));
employees.add(new Employee(47, "Sena"));
employees.add(new Employee(48, "Ravi"));
List<Employee> empList = new ArrayList<>(employees.stream().collect(
Collectors.toMap(Employee::getName, obj -> obj,
(existingValue, newValue) -> existingValue))
.values());
empList.forEach(System.out::println);
// Collectors.toMap(
// Employee::getName, - key (the value by which you want to eliminate duplicate)
// obj -> obj, - value (entire employee object)
// (existingValue, newValue) -> existingValue) - to avoid IllegalStateException: duplicate key
Output (toString() overridden):
Employee{id=35, name='Sena'}
Employee{id=12, name='Stalin'}
Employee{id=11, name='Ravi'}
Employee{id=24, name='Yuvaraj'}
Employee{id=36, name='Antony'}
Employee{id=23, name='Anbu'}
Here is an example:
public class PayRoll {
private int payRollId;
private int id;
private String name;
private String dept;
private int salary;
public PayRoll(int payRollId, int id, String name, String dept, int salary) {
super();
this.payRollId = payRollId;
this.id = id;
this.name = name;
this.dept = dept;
this.salary = salary;
}
}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Collector;
import java.util.stream.Collectors;
public class Prac {
public static void main(String[] args) {
int salary=70000;
PayRoll payRoll=new PayRoll(1311, 1, "A", "HR", salary);
PayRoll payRoll2=new PayRoll(1411, 2 , "B", "Technical", salary);
PayRoll payRoll3=new PayRoll(1511, 1, "C", "HR", salary);
PayRoll payRoll4=new PayRoll(1611, 1, "D", "Technical", salary);
PayRoll payRoll5=new PayRoll(711, 3,"E", "Technical", salary);
PayRoll payRoll6=new PayRoll(1811, 3, "F", "Technical", salary);
List<PayRoll>list=new ArrayList<PayRoll>();
list.add(payRoll);
list.add(payRoll2);
list.add(payRoll3);
list.add(payRoll4);
list.add(payRoll5);
list.add(payRoll6);
Map<Object, Optional<PayRoll>> k = list.stream()
.collect(Collectors.groupingBy(p -> p.getId() + "|" + p.getDept(),
Collectors.maxBy(Comparator.comparingInt(PayRoll::getPayRollId))));
k.entrySet().forEach(p->
{
if(p.getValue().isPresent())
{
System.out.println(p.getValue().get());
}
});
}
}
Output:
PayRoll [payRollId=1611, id=1, name=D, dept=Technical, salary=70000]
PayRoll [payRollId=1811, id=3, name=F, dept=Technical, salary=70000]
PayRoll [payRollId=1411, id=2, name=B, dept=Technical, salary=70000]
PayRoll [payRollId=1511, id=1, name=C, dept=HR, salary=70000]
Late to the party but I sometimes use this one-liner as an equivalent:
((Function<Value, Key>) Value::getKey).andThen(new HashSet<>()::add)::apply
The expression is a Predicate<Value>, but since the mapping is inlined, it works as a filter. This is of course less readable, but sometimes it can be helpful to avoid writing a separate method.
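In context, the one-liner looks like this; the Person record standing in for Value/Key is my example, not the answer author's:

```java
import java.util.HashSet;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class OneLinerDistinctDemo {
    record Person(String name) {}

    static List<Person> distinctByName(List<Person> persons) {
        // The cast pins down the functional type; andThen feeds the extracted key
        // into the captured, stateful HashSet's add method, and ::apply re-exposes
        // the composed Function<Person, Boolean> where a Predicate<Person> is expected.
        return persons.stream()
                .filter(((Function<Person, String>) Person::name)
                        .andThen(new HashSet<>()::add)::apply)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Person> in = List.of(new Person("Ann"), new Person("Bob"), new Person("Ann"));
        System.out.println(distinctByName(in)); // Ann, Bob
    }
}
```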
Building on josketres's answer, I created a generic utility method. You could make this more Java 8-friendly by creating a Collector.
public static <T> Set<T> removeDuplicates(Collection<T> input, Comparator<T> comparer) {
return input.stream()
.collect(toCollection(() -> new TreeSet<>(comparer)));
}
@Test
public void removeDuplicatesWithDuplicates() {
ArrayList<C> input = new ArrayList<>();
Collections.addAll(input, new C(7), new C(42), new C(42));
Collection<C> result = removeDuplicates(input, (c1, c2) -> Integer.compare(c1.value, c2.value));
assertEquals(2, result.size());
assertTrue(result.stream().anyMatch(c -> c.value == 7));
assertTrue(result.stream().anyMatch(c -> c.value == 42));
}
@Test
public void removeDuplicatesWithoutDuplicates() {
ArrayList<C> input = new ArrayList<>();
Collections.addAll(input, new C(1), new C(2), new C(3));
Collection<C> result = removeDuplicates(input, (t1, t2) -> Integer.compare(t1.value, t2.value));
assertEquals(3, result.size());
assertTrue(result.stream().anyMatch(c -> c.value == 1));
assertTrue(result.stream().anyMatch(c -> c.value == 2));
assertTrue(result.stream().anyMatch(c -> c.value == 3));
}
private class C {
public final int value;
private C(int value) {
this.value = value;
}
}
Maybe this will be useful for somebody. I had a slightly different requirement: given a list of objects A from a third party, remove all that have the same A.b field for the same A.id (there can be multiple A objects with the same A.id in the list). The stream-partition answer by Tagir Valeev inspired me to use a custom Collector that returns Map<A.id, List<A>>. A simple flatMap will do the rest.
public static <T, K, K2> Collector<T, ?, Map<K, List<T>>> groupingDistinctBy(
        Function<T, K> keyFunction, Function<T, K2> distinctFunction) {
    return groupingBy(keyFunction, Collector.of((Supplier<Map<K2, T>>) HashMap::new,
            (map, item) -> map.putIfAbsent(distinctFunction.apply(item), item),
            (left, right) -> {
                left.putAll(right);
                return left;
            },
            map -> new ArrayList<>(map.values()),
            Collector.Characteristics.UNORDERED));
}
I had a situation where I was supposed to get distinct elements from a list based on 2 keys. If you want distinct elements based on two keys, or a composite key, try this:
class Person {
int rollno;
String name;
int getRollno() { return rollno; }
String getName() { return name; }
}
List<Person> personList;
Function<Person, List<Object>> compositeKey = person ->
Arrays.<Object>asList(person.getName(), person.getRollno());
Map<Object, List<Person>> map = personList.stream().collect(Collectors.groupingBy(compositeKey, Collectors.toList()));
List<Map.Entry<Object, List<Person>>> duplicateEntries = map.entrySet().stream()
.filter(entry -> entry.getValue().size() > 1)
.collect(Collectors.toList());
A variation of the top answer that handles null:
public static <T, K> Predicate<T> distinctBy(final Function<? super T, K> getKey) {
val seen = ConcurrentHashMap.<Optional<K>>newKeySet();
return obj -> seen.add(Optional.ofNullable(getKey.apply(obj)));
}
In my tests:
assertEquals(
asList("a", "bb"),
Stream.of("a", "b", "bb", "aa").filter(distinctBy(String::length)).collect(toList()));
assertEquals(
asList(5, null, 2, 3),
Stream.of(5, null, 2, null, 3, 3, 2).filter(distinctBy(x -> x)).collect(toList()));
val maps = asList(
hashMapWith(0, 2),
hashMapWith(1, 2),
hashMapWith(2, null),
hashMapWith(3, 1),
hashMapWith(4, null),
hashMapWith(5, 2));
assertEquals(
asList(0, 2, 3),
maps.stream()
.filter(distinctBy(m -> m.get("val")))
.map(m -> m.get("i"))
.collect(toList()));
In my case, I needed to track the previous element. I created a stateful Predicate that checks whether the previous element is different from the current one; if it is, I keep it.
public List<Log> fetchLogById(Long id) {
return this.findLogById(id).stream()
.filter(new LogPredicate())
.collect(Collectors.toList());
}
public class LogPredicate implements Predicate<Log> {
private Log previous;
public boolean test(Log current) {
boolean isDifferent = previous == null || verifyIfDifferentLog(current, previous);
if (isDifferent) {
previous = current;
}
return isDifferent;
}
private boolean verifyIfDifferentLog(Log current, Log previous) {
return !current.getId().equals(previous.getId());
}
}
My solution is in this listing:
List<HolderEntry> result ....
List<HolderEntry> dto3s = new ArrayList<>(result.stream().collect(toMap(
HolderEntry::getId,
holder -> holder, //or Function.identity() if you want
(holder1, holder2) -> holder1
)).values());
In my situation, I wanted to find the distinct values and put them in a List.
Java - How to filter items from a list so that only one per element attribute is present? [duplicate]
In Java 8 how can I filter a collection using the Stream API by checking the distinctness of a property of each object? For example I have a list of Person object and I want to remove people with the same name, persons.stream().distinct(); Will use the default equality check for a Person object, so I need something like, persons.stream().distinct(p -> p.getName()); Unfortunately the distinct() method has no such overload. Without modifying the equality check inside the Person class is it possible to do this succinctly?
Consider distinct to be a stateful filter. Here is a function that returns a predicate that maintains state about what it's seen previously, and that returns whether the given element was seen for the first time: public static <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) { Set<Object> seen = ConcurrentHashMap.newKeySet(); return t -> seen.add(keyExtractor.apply(t)); } Then you can write: persons.stream().filter(distinctByKey(Person::getName)) Note that if the stream is ordered and is run in parallel, this will preserve an arbitrary element from among the duplicates, instead of the first one, as distinct() does. (This is essentially the same as my answer to this question: Java Lambda Stream Distinct() on arbitrary key?)
An alternative would be to place the persons in a map using the name as a key: persons.collect(Collectors.toMap(Person::getName, p -> p, (p, q) -> p)).values(); Note that the Person that is kept, in case of a duplicate name, will be the first encontered.
You can wrap the person objects into another class, that only compares the names of the persons. Afterward, you unwrap the wrapped objects to get a person stream again. The stream operations might look as follows: persons.stream() .map(Wrapper::new) .distinct() .map(Wrapper::unwrap) ...; The class Wrapper might look as follows: class Wrapper { private final Person person; public Wrapper(Person person) { this.person = person; } public Person unwrap() { return person; } public boolean equals(Object other) { if (other instanceof Wrapper) { return ((Wrapper) other).person.getName().equals(person.getName()); } else { return false; } } public int hashCode() { return person.getName().hashCode(); } }
Another solution, using Set. May not be the ideal solution, but it works Set<String> set = new HashSet<>(persons.size()); persons.stream().filter(p -> set.add(p.getName())).collect(Collectors.toList()); Or if you can modify the original list, you can use removeIf method persons.removeIf(p -> !set.add(p.getName()));
There's a simpler approach using a TreeSet with a custom comparator. persons.stream() .collect(Collectors.toCollection( () -> new TreeSet<Person>((p1, p2) -> p1.getName().compareTo(p2.getName())) ));
We can also use RxJava (very powerful reactive extension library) Observable.from(persons).distinct(Person::getName) or Observable.from(persons).distinct(p -> p.getName())
You can use groupingBy collector: persons.collect(Collectors.groupingBy(p -> p.getName())).values().forEach(t -> System.out.println(t.get(0).getId())); If you want to have another stream you can use this: persons.collect(Collectors.groupingBy(p -> p.getName())).values().stream().map(l -> (l.get(0)));
You can use the distinct(HashingStrategy) method in Eclipse Collections. List<Person> persons = ...; MutableList<Person> distinct = ListIterate.distinct(persons, HashingStrategies.fromFunction(Person::getName)); If you can refactor persons to implement an Eclipse Collections interface, you can call the method directly on the list. MutableList<Person> persons = ...; MutableList<Person> distinct = persons.distinct(HashingStrategies.fromFunction(Person::getName)); HashingStrategy is simply a strategy interface that allows you to define custom implementations of equals and hashcode. public interface HashingStrategy<E> { int computeHashCode(E object); boolean equals(E object1, E object2); } Note: I am a committer for Eclipse Collections.
Similar approach which Saeed Zarinfam used but more Java 8 style:) persons.collect(Collectors.groupingBy(p -> p.getName())).values().stream() .map(plans -> plans.stream().findFirst().get()) .collect(toList());
You can use StreamEx library: StreamEx.of(persons) .distinct(Person::getName) .toList()
I recommend using Vavr, if you can. With this library you can do the following: io.vavr.collection.List.ofAll(persons) .distinctBy(Person::getName) .toJavaSet() // or any another Java 8 Collection
Extending Stuart Marks's answer, this can be done in a shorter way and without a concurrent map (if you don't need parallel streams): public static <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) { final Set<Object> seen = new HashSet<>(); return t -> seen.add(keyExtractor.apply(t)); } Then call: persons.stream().filter(distinctByKey(p -> p.getName());
My approach to this is to group all the objects with same property together, then cut short the groups to size of 1 and then finally collect them as a List. List<YourPersonClass> listWithDistinctPersons = persons.stream() //operators to remove duplicates based on person name .collect(Collectors.groupingBy(p -> p.getName())) .values() .stream() //cut short the groups to size of 1 .flatMap(group -> group.stream().limit(1)) //collect distinct users as list .collect(Collectors.toList());
Distinct objects list can be found using: List distinctPersons = persons.stream() .collect(Collectors.collectingAndThen( Collectors.toCollection(() -> new TreeSet<>(Comparator.comparing(Person:: getName))), ArrayList::new));
I made a generic version: private <T, R> Collector<T, ?, Stream<T>> distinctByKey(Function<T, R> keyExtractor) { return Collectors.collectingAndThen( toMap( keyExtractor, t -> t, (t1, t2) -> t1 ), (Map<R, T> map) -> map.values().stream() ); } An exemple: Stream.of(new Person("Jean"), new Person("Jean"), new Person("Paul") ) .filter(...) .collect(distinctByKey(Person::getName)) // return a stream of Person with 2 elements, jean and Paul .map(...) .collect(toList())
Another library that supports this is jOOλ, and its Seq.distinct(Function<T,U>) method: Seq.seq(persons).distinct(Person::getName).toList(); Under the hood, it does practically the same thing as the accepted answer, though.
Set<YourPropertyType> set = new HashSet<>(); list .stream() .filter(it -> set.add(it.getYourProperty())) .forEach(it -> ...);
While the highest upvoted answer is absolutely best answer wrt Java 8, it is at the same time absolutely worst in terms of performance. If you really want a bad low performant application, then go ahead and use it. Simple requirement of extracting a unique set of Person Names shall be achieved by mere "For-Each" and a "Set". Things get even worse if list is above size of 10. Consider you have a collection of 20 Objects, like this: public static final List<SimpleEvent> testList = Arrays.asList( new SimpleEvent("Tom"), new SimpleEvent("Dick"),new SimpleEvent("Harry"),new SimpleEvent("Tom"), new SimpleEvent("Dick"),new SimpleEvent("Huckle"),new SimpleEvent("Berry"),new SimpleEvent("Tom"), new SimpleEvent("Dick"),new SimpleEvent("Moses"),new SimpleEvent("Chiku"),new SimpleEvent("Cherry"), new SimpleEvent("Roses"),new SimpleEvent("Moses"),new SimpleEvent("Chiku"),new SimpleEvent("gotya"), new SimpleEvent("Gotye"),new SimpleEvent("Nibble"),new SimpleEvent("Berry"),new SimpleEvent("Jibble")); Where you object SimpleEvent looks like this: public class SimpleEvent { private String name; private String type; public SimpleEvent(String name) { this.name = name; this.type = "type_"+name; } public String getName() { return name; } public void setName(String name) { this.name = name; } public String getType() { return type; } public void setType(String type) { this.type = type; } } And to test, you have JMH code like this,(Please note, im using the same distinctByKey Predicate mentioned in accepted answer) : #Benchmark #OutputTimeUnit(TimeUnit.SECONDS) public void aStreamBasedUniqueSet(Blackhole blackhole) throws Exception{ Set<String> uniqueNames = testList .stream() .filter(distinctByKey(SimpleEvent::getName)) .map(SimpleEvent::getName) .collect(Collectors.toSet()); blackhole.consume(uniqueNames); } #Benchmark #OutputTimeUnit(TimeUnit.SECONDS) public void aForEachBasedUniqueSet(Blackhole blackhole) throws Exception{ Set<String> uniqueNames = new HashSet<>(); for (SimpleEvent 
event : testList) { uniqueNames.add(event.getName()); } blackhole.consume(uniqueNames); } public static void main(String[] args) throws RunnerException { Options opt = new OptionsBuilder() .include(MyBenchmark.class.getSimpleName()) .forks(1) .mode(Mode.Throughput) .warmupBatchSize(3) .warmupIterations(3) .measurementIterations(3) .build(); new Runner(opt).run(); } Then you'll have Benchmark results like this: Benchmark Mode Samples Score Score error Units c.s.MyBenchmark.aForEachBasedUniqueSet thrpt 3 2635199.952 1663320.718 ops/s c.s.MyBenchmark.aStreamBasedUniqueSet thrpt 3 729134.695 895825.697 ops/s And as you can see, a simple For-Each is 3 times better in throughput and less in error score as compared to Java 8 Stream. Higher the throughput, better the performance
I would like to improve Stuart Marks answer. What if the key is null, it will through NullPointerException. Here I ignore the null key by adding one more check as keyExtractor.apply(t)!=null. public static <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) { Set<Object> seen = ConcurrentHashMap.newKeySet(); return t -> keyExtractor.apply(t)!=null && seen.add(keyExtractor.apply(t)); }
This works like a charm: Grouping the data by unique key to form a map. Returning the first object from every value of the map (There could be multiple person having same name). persons.stream() .collect(groupingBy(Person::getName)) .values() .stream() .flatMap(values -> values.stream().limit(1)) .collect(toList());
The easiest way to implement this is to jump on the sort feature as it already provides an optional Comparator which can be created using an element’s property. Then you have to filter duplicates out which can be done using a statefull Predicate which uses the fact that for a sorted stream all equal elements are adjacent: Comparator<Person> c=Comparator.comparing(Person::getName); stream.sorted(c).filter(new Predicate<Person>() { Person previous; public boolean test(Person p) { if(previous!=null && c.compare(previous, p)==0) return false; previous=p; return true; } })./* more stream operations here */; Of course, a statefull Predicate is not thread-safe, however if that’s your need you can move this logic into a Collector and let the stream take care of the thread-safety when using your Collector. This depends on what you want to do with the stream of distinct elements which you didn’t tell us in your question.
There are lot of approaches, this one will also help - Simple, Clean and Clear List<Employee> employees = new ArrayList<>(); employees.add(new Employee(11, "Ravi")); employees.add(new Employee(12, "Stalin")); employees.add(new Employee(23, "Anbu")); employees.add(new Employee(24, "Yuvaraj")); employees.add(new Employee(35, "Sena")); employees.add(new Employee(36, "Antony")); employees.add(new Employee(47, "Sena")); employees.add(new Employee(48, "Ravi")); List<Employee> empList = new ArrayList<>(employees.stream().collect( Collectors.toMap(Employee::getName, obj -> obj, (existingValue, newValue) -> existingValue)) .values()); empList.forEach(System.out::println); // Collectors.toMap( // Employee::getName, - key (the value by which you want to eliminate duplicate) // obj -> obj, - value (entire employee object) // (existingValue, newValue) -> existingValue) - to avoid illegalstateexception: duplicate key Output - toString() overloaded Employee{id=35, name='Sena'} Employee{id=12, name='Stalin'} Employee{id=11, name='Ravi'} Employee{id=24, name='Yuvaraj'} Employee{id=36, name='Antony'} Employee{id=23, name='Anbu'}
Here is an example (the getters and toString() used by the stream code below have been filled in so it compiles):

public class PayRoll {

    private int payRollId;
    private int id;
    private String name;
    private String dept;
    private int salary;

    public PayRoll(int payRollId, int id, String name, String dept, int salary) {
        this.payRollId = payRollId;
        this.id = id;
        this.name = name;
        this.dept = dept;
        this.salary = salary;
    }

    public int getPayRollId() { return payRollId; }
    public int getId() { return id; }
    public String getDept() { return dept; }

    @Override
    public String toString() {
        return "PayRoll [payRollId=" + payRollId + ", id=" + id + ", name=" + name
                + ", dept=" + dept + ", salary=" + salary + "]";
    }
}

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Collectors;

public class Prac {
    public static void main(String[] args) {
        int salary = 70000;
        PayRoll payRoll  = new PayRoll(1311, 1, "A", "HR", salary);
        PayRoll payRoll2 = new PayRoll(1411, 2, "B", "Technical", salary);
        PayRoll payRoll3 = new PayRoll(1511, 1, "C", "HR", salary);
        PayRoll payRoll4 = new PayRoll(1611, 1, "D", "Technical", salary);
        PayRoll payRoll5 = new PayRoll(711, 3, "E", "Technical", salary);
        PayRoll payRoll6 = new PayRoll(1811, 3, "F", "Technical", salary);

        List<PayRoll> list = new ArrayList<>();
        list.add(payRoll);
        list.add(payRoll2);
        list.add(payRoll3);
        list.add(payRoll4);
        list.add(payRoll5);
        list.add(payRoll6);

        Map<Object, Optional<PayRoll>> k = list.stream()
                .collect(Collectors.groupingBy(p -> p.getId() + "|" + p.getDept(),
                        Collectors.maxBy(Comparator.comparingInt(PayRoll::getPayRollId))));

        k.entrySet().forEach(p -> {
            if (p.getValue().isPresent()) {
                System.out.println(p.getValue().get());
            }
        });
    }
}

Output:

PayRoll [payRollId=1611, id=1, name=D, dept=Technical, salary=70000]
PayRoll [payRollId=1811, id=3, name=F, dept=Technical, salary=70000]
PayRoll [payRollId=1411, id=2, name=B, dept=Technical, salary=70000]
PayRoll [payRollId=1511, id=1, name=C, dept=HR, salary=70000]
Late to the party, but I sometimes use this one-liner as an equivalent:

((Function<Value, Key>) Value::getKey).andThen(new HashSet<>()::add)::apply

The expression is a Predicate<Value>, but since the map is inlined, it works as a filter. This is of course less readable, but sometimes it can be helpful to avoid declaring a separate method.
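To make the trick above concrete, here is a self-contained sketch. The Value/Key types from the answer are replaced with hypothetical stand-ins (words keyed by their first letter); the class and method names are illustrative only:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class InlineDistinctDemo {
    // Keeps the first word per starting letter. The Function composed with
    // HashSet::add returns Boolean, which filter() unboxes into a predicate:
    // add() returns false for an already-seen key, dropping the duplicate.
    public static List<String> distinctByFirstLetter(List<String> words) {
        return words.stream()
                .filter(((Function<String, Character>) s -> s.charAt(0))
                        .andThen(new HashSet<>()::add)::apply)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(distinctByFirstLetter(
                Arrays.asList("apple", "avocado", "banana", "blueberry", "cherry")));
        // prints [apple, banana, cherry]
    }
}
```

Note that the HashSet is created once when the filter argument is evaluated, so the state lives exactly as long as the pipeline.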
Building on @josketres's answer, I created a generic utility method. (You could make this more Java 8-friendly by creating a Collector.)

public static <T> Set<T> removeDuplicates(Collection<T> input, Comparator<T> comparer) {
    return input.stream()
            .collect(toCollection(() -> new TreeSet<>(comparer)));
}

@Test
public void removeDuplicatesWithDuplicates() {
    ArrayList<C> input = new ArrayList<>();
    Collections.addAll(input, new C(7), new C(42), new C(42));
    Collection<C> result = removeDuplicates(input, (c1, c2) -> Integer.compare(c1.value, c2.value));
    assertEquals(2, result.size());
    assertTrue(result.stream().anyMatch(c -> c.value == 7));
    assertTrue(result.stream().anyMatch(c -> c.value == 42));
}

@Test
public void removeDuplicatesWithoutDuplicates() {
    ArrayList<C> input = new ArrayList<>();
    Collections.addAll(input, new C(1), new C(2), new C(3));
    Collection<C> result = removeDuplicates(input, (t1, t2) -> Integer.compare(t1.value, t2.value));
    assertEquals(3, result.size());
    assertTrue(result.stream().anyMatch(c -> c.value == 1));
    assertTrue(result.stream().anyMatch(c -> c.value == 2));
    assertTrue(result.stream().anyMatch(c -> c.value == 3));
}

private class C {
    public final int value;

    private C(int value) {
        this.value = value;
    }
}
Maybe this will be useful for somebody. I had a slightly different requirement: given a list of objects A from a 3rd party, remove all that have the same A.b field for the same A.id (multiple A objects with the same A.id in the list). The stream partition answer by Tagir Valeev inspired me to use a custom Collector which returns Map<A.id, List<A>>. A simple flatMap will do the rest.

public static <T, K, K2> Collector<T, ?, Map<K, List<T>>> groupingDistinctBy(
        Function<T, K> keyFunction, Function<T, K2> distinctFunction) {
    return groupingBy(keyFunction,
            Collector.of((Supplier<Map<K2, T>>) HashMap::new,
                    (map, element) -> map.putIfAbsent(distinctFunction.apply(element), element),
                    (left, right) -> {
                        left.putAll(right);
                        return left;
                    },
                    map -> new ArrayList<>(map.values()),
                    Collector.Characteristics.UNORDERED));
}
I had a situation where I was supposed to get distinct elements from a list based on 2 keys. If you want distinct elements based on two keys, or maybe a composite key, try this:

class Person {
    int rollno;
    String name;
}

List<Person> personList;

Function<Person, List<Object>> compositeKey = person ->
        Arrays.<Object>asList(person.getName(), person.getRollno());

Map<Object, List<Person>> map = personList.stream()
        .collect(Collectors.groupingBy(compositeKey, Collectors.toList()));

List<Object> duplicateEntries = map.entrySet().stream()
        .filter(entry -> entry.getValue().size() > 1)
        .collect(Collectors.toList());
A variation of the top answer that handles null keys (note that val is Lombok; use explicit types if you don't use it):

public static <T, K> Predicate<T> distinctBy(final Function<? super T, K> getKey) {
    val seen = ConcurrentHashMap.<Optional<K>>newKeySet();
    return obj -> seen.add(Optional.ofNullable(getKey.apply(obj)));
}

In my tests:

assertEquals(
        asList("a", "bb"),
        Stream.of("a", "b", "bb", "aa").filter(distinctBy(String::length)).collect(toList()));

assertEquals(
        asList(5, null, 2, 3),
        Stream.of(5, null, 2, null, 3, 3, 2).filter(distinctBy(x -> x)).collect(toList()));

val maps = asList(
        hashMapWith(0, 2),
        hashMapWith(1, 2),
        hashMapWith(2, null),
        hashMapWith(3, 1),
        hashMapWith(4, null),
        hashMapWith(5, 2));
assertEquals(
        asList(0, 2, 3),
        maps.stream()
                .filter(distinctBy(m -> m.get("val")))
                .map(m -> m.get("i"))
                .collect(toList()));
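If Lombok is not on the classpath, the same idea works in plain Java. A minimal self-contained sketch (class name is illustrative): wrapping the extracted key in Optional makes a null key a legal, distinct-able set element, since ConcurrentHashMap-backed sets reject null.

```java
import java.util.List;
import java.util.Optional;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class NullSafeDistinct {
    // Null-safe variant of distinctByKey: Optional.empty() stands in for a null key.
    public static <T, K> Predicate<T> distinctBy(Function<? super T, K> getKey) {
        Set<Optional<K>> seen = ConcurrentHashMap.newKeySet();
        return obj -> seen.add(Optional.ofNullable(getKey.apply(obj)));
    }

    public static void main(String[] args) {
        List<Integer> result = Stream.of(5, null, 2, null, 3, 3, 2)
                .filter(distinctBy(x -> x))
                .collect(Collectors.toList());
        System.out.println(result); // [5, null, 2, 3]
    }
}
```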
In my case I needed to compare each element with the previous one. I created a stateful Predicate that checks whether the current element differs from the previous one, and keeps it only in that case:

public List<Log> fetchLogById(Long id) {
    return this.findLogById(id).stream()
            .filter(new LogPredicate())
            .collect(Collectors.toList());
}

public class LogPredicate implements Predicate<Log> {

    private Log previous;

    public boolean test(Log current) {
        boolean isDifferent = previous == null || verifyIfDifferentLog(current, previous);
        if (isDifferent) {
            previous = current;
        }
        return isDifferent;
    }

    private boolean verifyIfDifferentLog(Log current, Log previous) {
        return !current.getId().equals(previous.getId());
    }
}
My solution, in this listing:

List<HolderEntry> result ....

List<HolderEntry> dto3s = new ArrayList<>(result.stream().collect(toMap(
        HolderEntry::getId,
        holder -> holder,  // or Function.identity() if you want
        (holder1, holder2) -> holder1
)).values());

In my situation I wanted to find the distinct values and put them in a List.
Filter stream using correlation between stream elements
Let’s say there’s a Person class that looks like this:

public class Person {
    private int id;
    private String discriminator;
    // some other fields plus getters/setters
}

Now I have a Stream of Person elements, and that stream may contain multiple Person instances that have the same id but different discriminator values, i.e.

[Person{“id”: 1, “discriminator”: “A”}, Person{“id”: 1, “discriminator”: “B”}, Person{“id”: 2, “discriminator”: “A”}, ...]

What I’d like to do is to filter out all Person instances with some id if there’s at least one Person instance with that id that has a particular discriminator value. So, continuing with the above example, filtering by discriminator value “A” would yield an empty collection (after the reduction operation, of course) and filtering by discriminator value “B” would yield a collection not containing any Person instance with id equal to 1.

I know I can reduce the stream using the groupingBy collector, group elements by Person.id, and then remove the mapping from the resulting Map if the mapped list contains a Person element with the specified discriminator value, but I am still wondering if there’s a simpler way to achieve the same result?
If I understood your problem correctly, you would first find all IDs that match the discriminator:

Set<Integer> ids = persons.stream()
        .filter(p -> "A".equalsIgnoreCase(p.getDiscriminator()))
        .map(Person::getId)
        .collect(Collectors.toSet());

And then remove the entries that match those IDs:

persons.removeIf(x -> ids.contains(x.getId()));
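Putting the two steps together, a minimal runnable sketch might look like the following. The Person class here is a hypothetical stand-in for the one in the question, and removeIf is applied to a copy so the input list stays untouched:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class TwoPassFilter {
    public static class Person {
        final int id;
        final String discriminator;

        public Person(int id, String discriminator) {
            this.id = id;
            this.discriminator = discriminator;
        }

        public int getId() { return id; }
        public String getDiscriminator() { return discriminator; }
    }

    // Pass 1: collect the ids to drop. Pass 2: remove every person with such an id.
    public static List<Person> filterOut(List<Person> persons, String discriminator) {
        Set<Integer> ids = persons.stream()
                .filter(p -> discriminator.equalsIgnoreCase(p.getDiscriminator()))
                .map(Person::getId)
                .collect(Collectors.toSet());
        List<Person> copy = new ArrayList<>(persons);
        copy.removeIf(p -> ids.contains(p.getId()));
        return copy;
    }

    public static void main(String[] args) {
        List<Person> persons = Arrays.asList(
                new Person(1, "A"), new Person(1, "B"), new Person(2, "A"));
        System.out.println(filterOut(persons, "A").size()); // 0 - both ids match "A"
        System.out.println(filterOut(persons, "B").size()); // 1 - only id 1 matches "B"
    }
}
```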
Eugene's answer works great, but I personally prefer a single statement, so I took his code and put it all together into a single operation, which looks like this:

final List<Person> result = persons.stream()
        .filter(p -> "B".equalsIgnoreCase(p.getDiscriminator()))
        .map(Person::getId)
        .collect(
                () -> new ArrayList<>(persons),
                (list, id) -> list.removeIf(p -> p.getId() == id),
                (a, b) -> { throw new UnsupportedOperationException(); }
        );

I should probably mention that the copy of persons is needed; otherwise the pipeline would be removing elements from the very list it is consuming and could encounter null values. Side note: this version currently throws an UnsupportedOperationException when used in parallel.
So below I present the solution that I've come up with. First I group the input Person collection/stream by the Person.id attribute, and then I stream over the map entries and filter out those that have at least one value matching the given discriminator.

import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

public class Main {

    public static void main(String[] args) {
        List<Person> persons = Arrays.asList(
                new Person(1, "A"), new Person(1, "B"), new Person(1, "C"),
                new Person(2, "B"), new Person(2, "C"),
                new Person(3, "A"), new Person(3, "C"),
                new Person(4, "B")
        );
        System.out.println(persons);
        System.out.println(filterByDiscriminator(persons, "A"));
        System.out.println(filterByDiscriminator(persons, "B"));
        System.out.println(filterByDiscriminator(persons, "C"));
    }

    private static List<Person> filterByDiscriminator(List<Person> input, String discriminator) {
        return input.stream()
                .collect(Collectors.groupingBy(Person::getId))
                .entrySet().stream()
                .filter(entry -> entry.getValue().stream()
                        .noneMatch(person -> person.getDiscriminator().equals(discriminator)))
                .flatMap(entry -> entry.getValue().stream())
                .collect(Collectors.toList());
    }
}

class Person {

    private final Integer id;
    private final String discriminator;

    public Person(Integer id, String discriminator) {
        Objects.requireNonNull(id);
        Objects.requireNonNull(discriminator);
        this.id = id;
        this.discriminator = discriminator;
    }

    public Integer getId() {
        return id;
    }

    public String getDiscriminator() {
        return discriminator;
    }

    @Override
    public String toString() {
        return String.format("%s{\"id\": %d, \"discriminator\": \"%s\"}",
                getClass().getSimpleName(), id, discriminator);
    }
}
Java 8 Streams & lambdas maintaining strict FP
Java 8 lambdas are very useful in many situations for implementing code in an FP fashion in a compact way. But there are situations where we may have to access/mutate external state, which is not good practice per FP principles (and because Java 8 functional interfaces have strict input and output signatures, we can't pass extra arguments). E.g.:

class Country {
    List<State> states;
}

class State {
    BigInteger population;
    String capital;
}

class Main {
    List<Country> countries; // code to fill
}

Let's say the use case is to get the list of all capitals and the whole population of all states in all countries.

Normal implementation:

List<String> capitals = new ArrayList<>();
BigInteger population = BigInteger.ZERO;
for (Country country : countries) {
    for (State state : country.states) {
        capitals.add(state.capital);
        population = population.add(state.population);
    }
}

How can I implement the same with Java 8 Streams in a more optimized manner? My attempt (note it has to traverse the states twice, since a stream cannot be reused):

List<String> capitals = countries.stream()
        .flatMap(country -> country.getStates().stream())
        .map(state -> state.capital)
        .collect(toList());
BigInteger population = countries.stream()
        .flatMap(country -> country.getStates().stream())
        .map(state -> state.population)
        .reduce(BigInteger.ZERO, BigInteger::add);

But the above implementation is not very efficient. Is there any other, better way to compute more than one result with Java 8 Streams?
If you want to collect multiple results in one pipeline, you should create a result container and a custom Collector.

class MyResult {
    private BigInteger population = BigInteger.ZERO;
    private List<String> capitals = new ArrayList<>();

    public void accumulate(State state) {
        population = population.add(state.population);
        capitals.add(state.capital);
    }

    public MyResult merge(MyResult other) {
        population = population.add(other.population);
        capitals.addAll(other.capitals);
        return this;
    }
}

MyResult result = countries.stream()
        .flatMap(c -> c.getStates().stream())
        .collect(Collector.of(MyResult::new, MyResult::accumulate, MyResult::merge));
BigInteger population = result.population;
List<String> capitals = result.capitals;

Or stream twice, as you did.
You can only consume a stream once, so you need to create an aggregate that can be reduced:

public class CapitalsAndPopulation {
    private List<String> capitals;
    private BigInteger population;

    // constructors and getters omitted for conciseness

    public CapitalsAndPopulation merge(CapitalsAndPopulation other) {
        List<String> mergedCapitals = new ArrayList<>(this.capitals);
        mergedCapitals.addAll(other.capitals);
        return new CapitalsAndPopulation(mergedCapitals,
                this.population.add(other.population));
    }
}

Then you produce the pipeline:

countries.stream()
        .flatMap(country -> country.getStates().stream())
        .map(state -> new CapitalsAndPopulation(
                Collections.singletonList(state.getCapital()), state.population))
        .reduce(CapitalsAndPopulation::merge);

The reason this looks so ugly is that you don't have nice syntax for structures like tuples or maps, so you need to create classes to make the pipelines look nice...
Try this:

class Pair<T, U> {
    T first;
    U second;

    Pair(T first, U second) {
        this.first = first;
        this.second = second;
    }
}

Pair<List<String>, BigInteger> result = countries.stream()
        .flatMap(country -> country.states.stream())
        .collect(() -> new Pair<>(new ArrayList<>(), BigInteger.ZERO),
                (acc, state) -> {
                    acc.first.add(state.capital);
                    acc.second = acc.second.add(state.population);
                },
                (a, b) -> {
                    a.first.addAll(b.first);
                    a.second = a.second.add(b.second);
                });

You can use AbstractMap.SimpleEntry<K, V> (through the Map.Entry<K, V> interface) instead of Pair<T, U>:

Map.Entry<List<String>, BigInteger> result = countries.stream()
        .flatMap(country -> country.states.stream())
        .collect(() -> new AbstractMap.SimpleEntry<>(new ArrayList<>(), BigInteger.ZERO),
                (acc, state) -> {
                    acc.getKey().add(state.capital);
                    acc.setValue(acc.getValue().add(state.population));
                },
                (a, b) -> {
                    a.getKey().addAll(b.getKey());
                    a.setValue(a.getValue().add(b.getValue()));
                });
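Aside (not part of the original answers): if you can use Java 12 or newer, Collectors.teeing feeds every element to two downstream collectors and merges their two results, which expresses this two-results-in-one-pass reduction directly. A sketch under that assumption, with a minimal hypothetical State class standing in for the one from the question:

```java
import java.math.BigInteger;
import java.util.AbstractMap;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TeeingDemo {
    public static class State {
        final String capital;
        final BigInteger population;

        public State(String capital, BigInteger population) {
            this.capital = capital;
            this.population = population;
        }
    }

    // One pass: collector 1 gathers capitals, collector 2 sums populations,
    // and the merger packs both results into a Map.Entry.
    public static Map.Entry<List<String>, BigInteger> summarize(List<State> states) {
        return states.stream()
                .collect(Collectors.teeing(
                        Collectors.mapping((State s) -> s.capital, Collectors.toList()),
                        Collectors.reducing(BigInteger.ZERO,
                                (State s) -> s.population, BigInteger::add),
                        AbstractMap.SimpleEntry::new));
    }

    public static void main(String[] args) {
        Map.Entry<List<String>, BigInteger> result = summarize(Arrays.asList(
                new State("Austin", BigInteger.valueOf(29)),
                new State("Denver", BigInteger.valueOf(6))));
        System.out.println(result.getKey());   // [Austin, Denver]
        System.out.println(result.getValue()); // 35
    }
}
```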
Java 8 Stream API - Selecting only values after Collectors.groupingBy(..)
Say I have the following collection of Student objects, which consist of Name (String), Age (int) and City (String). I am trying to use Java's Stream API to achieve the following SQL-like behavior:

SELECT MAX(age)
FROM Students
GROUP BY city

Now, I found two different ways to do so:

final List<Integer> variation1 = students.stream()
        .collect(Collectors.groupingBy(Student::getCity,
                Collectors.maxBy((s1, s2) -> s1.getAge() - s2.getAge())))
        .values()
        .stream()
        .filter(Optional::isPresent)
        .map(Optional::get)
        .map(Student::getAge)
        .collect(Collectors.toList());

And the other one:

final Collection<Integer> variation2 = students.stream()
        .collect(Collectors.groupingBy(Student::getCity,
                Collectors.collectingAndThen(
                        Collectors.maxBy((s1, s2) -> s1.getAge() - s2.getAge()),
                        optional -> optional.get().getAge())))
        .values();

In both ways, one has to call .values() ... and filter the empty groups returned from the collector. Is there any other way to achieve this required behavior? These methods remind me of OVER (PARTITION BY ...) SQL statements... Thanks

Edit: All the answers below were really interesting, but unfortunately this is not what I was looking for, since what I try to get is just the values. I don't need the keys, just the values.
Do not always stick with groupingBy. Sometimes toMap is the thing you need:

Collection<Integer> result = students.stream()
        .collect(Collectors.toMap(Student::getCity, Student::getAge, Integer::max))
        .values();

Here you just create a Map where the keys are cities and the values are ages. In case several students have the same city, the merge function is used, which here simply selects the maximal age. It's faster and cleaner.
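A self-contained sketch of this approach (the Student class below is a hypothetical minimal stand-in for the one in the question):

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.stream.Collectors;

public class MaxAgePerCity {
    public static class Student {
        final String name;
        final int age;
        final String city;

        public Student(String name, int age, String city) {
            this.name = name;
            this.age = age;
            this.city = city;
        }

        public String getCity() { return city; }
        public int getAge() { return age; }
    }

    // toMap keeps one age per city; Integer::max resolves key collisions,
    // so no Optional ever appears and no post-filtering is needed.
    public static Collection<Integer> maxAgePerCity(List<Student> students) {
        return students.stream()
                .collect(Collectors.toMap(Student::getCity, Student::getAge, Integer::max))
                .values();
    }

    public static void main(String[] args) {
        List<Student> students = Arrays.asList(
                new Student("Ann", 20, "NY"),
                new Student("Bob", 25, "NY"),
                new Student("Cid", 30, "LA"));
        // A view over the map values; iteration order of a HashMap is unspecified.
        System.out.println(maxAgePerCity(students));
    }
}
```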
As an addition to Tagir’s great answer using toMap instead of groupingBy, here is the short solution if you want to stick to groupingBy:

Collection<Integer> result = students.stream()
        .collect(Collectors.groupingBy(Student::getCity,
                Collectors.reducing(-1, Student::getAge, Integer::max)))
        .values();

Note that this three-arg reducing collector already performs a mapping operation, so we don’t need to nest it with a mapping collector; further, providing an identity value avoids dealing with Optional. Since ages are always positive, providing -1 is sufficient, and since a group will always have at least one element, the identity value will never show up as a result. Still, I think Tagir’s toMap-based solution is preferable in this scenario.

The groupingBy-based solution becomes more interesting when you want to get the actual students having the maximum age, e.g.

Collection<Student> result = students.stream().collect(
        Collectors.groupingBy(Student::getCity,
                Collectors.reducing(null, BinaryOperator.maxBy(
                        Comparator.nullsFirst(Comparator.comparingInt(Student::getAge)))))
).values();

well, actually, even this can be expressed using the toMap collector:

Collection<Student> result = students.stream().collect(
        Collectors.toMap(Student::getCity, Function.identity(),
                BinaryOperator.maxBy(Comparator.comparingInt(Student::getAge)))
).values();

You can express almost everything with both collectors, but groupingBy has the advantage on its side when you want to perform a mutable reduction on the values.
The second approach calls get() on an Optional; this is usually a bad idea, as you don't know whether the optional will be empty or not (use the orElse(), orElseGet(), or orElseThrow() methods instead). While you might argue that in this case there will always be a value, since you generate the values from the student list itself, this is something to keep in mind.

Based on that, you might turn variation 2 into:

final Collection<Integer> variation2 = students.stream()
        .collect(collectingAndThen(groupingBy(Student::getCity,
                collectingAndThen(
                        mapping(Student::getAge, maxBy(naturalOrder())),
                        Optional::get)),
                Map::values));

Although it really starts to be difficult to read, so I'd probably use variant 1:

final List<Integer> variation1 = students.stream()
        .collect(groupingBy(Student::getCity,
                mapping(Student::getAge, maxBy(naturalOrder()))))
        .values()
        .stream()
        .map(Optional::get)
        .collect(toList());
Here is my implementation:

public class MaxByTest {

    static class Student {
        private int age;
        private int city;

        public Student(int age, int city) {
            this.age = age;
            this.city = city;
        }

        public int getCity() {
            return city;
        }

        public int getAge() {
            return age;
        }

        @Override
        public String toString() {
            return " City : " + city + " Age : " + age;
        }
    }

    static List<Student> students = Arrays.asList(
            new Student(10, 1),
            new Student(9, 2),
            new Student(8, 1),
            new Student(6, 1),
            new Student(4, 1),
            new Student(8, 2),
            new Student(9, 2),
            new Student(7, 2));

    public static void main(String[] args) {
        final Comparator<Student> comparator =
                (p1, p2) -> Integer.compare(p1.getAge(), p2.getAge());
        final List<Student> result = students.stream()
                .collect(Collectors.groupingBy(Student::getCity,
                        Collectors.maxBy(comparator)))
                .values().stream()
                .map(Optional::get)
                .collect(Collectors.toList());
        System.out.println(result);
    }
}
Another groupingBy example, grouping by date and taking the maximum age per date (note that SimpleDateFormat.parse throws a checked ParseException):

List<BeanClass> list1 = new ArrayList<>();
DateFormat formatter = new SimpleDateFormat("yyyy-MM-dd");
list1.add(new BeanClass(123, "abc", 99.0, formatter.parse("2018-02-01")));
list1.add(new BeanClass(456, "xyz", 99.0, formatter.parse("2014-01-01")));
list1.add(new BeanClass(789, "pqr", 95.0, formatter.parse("2014-01-01")));
list1.add(new BeanClass(1011, "def", 99.0, formatter.parse("2014-01-01")));

Map<Object, Optional<Double>> byDate = list1.stream()
        .collect(Collectors.groupingBy(p -> formatter.format(p.getCurrentDate()),
                Collectors.mapping(BeanClass::getAge, Collectors.maxBy(Double::compare))));