I am trying to get the aggregate marks of each student, grouped by department and sorted by their aggregate marks.
This is what I have tried.
Student class properties:
private String firstName, lastName, branch, nationality, grade, shName;
private SubjectMarks subject;
private LocalDate dob;
SubjectMarks class:
public SubjectMarks(int maths, int biology, int computers) {
this.maths = maths;
this.biology = biology;
this.computers = computers;
}
public double getAverageMarks() {
double avg = (getBiology() + getMaths() + getComputers())/3;
return avg;
}
Main class:
Collections.sort(stList, new Comparator<Student>() {
@Override
public int compare(Student m1, Student m2) {
if(m1.getSubject().getAverageMarks() == m2.getSubject().getAverageMarks()){
return 0;
}
return m1.getSubject().getAverageMarks() < m2.getSubject().getAverageMarks() ? 1 : -1;
}
});
Map<String, List<Student>> groupSt = stList.stream().collect(Collectors.groupingBy(Student::getBranch,
LinkedHashMap::new, Collectors.toList()));
groupSt.forEach((k, v) -> System.out.println("\nBranch Name: " + k + "\n" + v.stream()
.flatMap(stud -> Stream.of(stud.getFirstName(), stud.getSubject().getAverageMarks())).collect(Collectors.toList())));
Updated: this is the output I am getting.
Branch Name: ECE
[Bob, 96.0, TOM, 84.33333333333333]
Branch Name: CSE
[Karthik, 94.33333333333333, Angelina, 91.0, Arun, 80.66666666666667]
Branch Name: EEE
[Meghan, 85.0]
This is the correct sorted order, but the Student objects are getting flattened into one line, separated by commas.
In the above output, since Bob got the highest aggregate marks of all branches, ECE comes first, followed by the other branches sorted by their students' aggregate marks.
The expected result is a list of student names with their aggregate marks, sorted:
Branch Name: ECE
[{Bob, 96.0},{TOM, 84.33333333333333}]
Branch Name: CSE
[{Karthik, 94.33333333333333}, {Angelina, 91.0}, {Arun, 80.66666666666667}]
Branch Name: EEE
[Meghan, 85.0]
Is there any way to map both the name and the average when grouping by a property using streams?
You could instead choose the return type to be a Map<String, Map<String, Double>>, or a custom class with appropriate equals and hashCode implementations to ensure uniqueness within the inner List<Custom>. I will frame the solution based on the former; you can convert it to whichever reads better in your actual code.
Once you have grouped the students of each branch, you can ensure each firstName is mapped to that student's maximum average marks by performing a reduction using toMap with a Double::max merge function, and then collect these entries sorted by the marks (the values).
It might look slightly complicated as a single statement, but it can be broken into steps as well.
Map<String, LinkedHashMap<String, Double>> branchStudentsSortedOnMarks = stList.stream()
.collect(Collectors.groupingBy(Student::getBranch, // you got it!
Collectors.collectingAndThen(
Collectors.toMap(Student::getFirstName,
s -> s.getSubject().getAverageMarks(), Double::max), // max average marks per name
map -> map.entrySet().stream()
.sorted(Map.Entry.<String, Double>comparingByValue().reversed()) // sorted in reverse based on value
.collect(Collectors.toMap(Map.Entry::getKey,
Map.Entry::getValue, (a, b) -> b, LinkedHashMap::new))
)));
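For printing in the question's format, a minimal sketch (assuming the branchStudentsSortedOnMarks map built above; note that the outer map's branch order is not guaranteed):
branchStudentsSortedOnMarks.forEach((branch, students) ->
        System.out.println("\nBranch Name: " + branch + "\n" + students));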
Firstly, in your Map<String, List<Map<String, Double>>>, each map inside the list would contain only one key-value pair, so I would suggest returning Map<String, List<Entry<String, Double>>> instead (Entry from java.util.Map).
Also, create a getAverageMarks method in your Student class which would simply return:
return subject.getAverageMarks();
// First define a function to sort based on average marks
UnaryOperator<List<Entry<String, Double>>> sort =
list -> {
Collections.sort(list, Collections.reverseOrder(Entry.comparingByValue()));
return list;
};
// function to create entry
Function<Student, Entry<String, Double>> getEntry =
s -> Map.entry(s.getFirstName(), s.getAverageMarks());
// return this
list.stream()
.collect(Collectors.groupingBy(
Student::getBranch,
Collectors.mapping(getEntry, // map each student
// collect and apply sort as finisher
Collector.of(ArrayList::new,
List::add,
(x,y) -> {x.addAll(y); return x;},
sort))));
Input:
List<Student> stList = Arrays.asList(
new Student("John", "Wall", "A", "a", "C", "sa", new SubjectMarks(65, 67, 100), LocalDate.now()),
new Student("Arun", "Wall", "B", "a", "C", "sa", new SubjectMarks(45, 61, 95), LocalDate.now()),
new Student("Marry", "Wall", "A", "a", "C", "sa", new SubjectMarks(90, 80, 92), LocalDate.now())
);
Idea:
group by "branch"
For each group (list): sort by average marks and map each student to a singleton map of name → average marks.
Now it's easy to code:
Map<String, List<Map<String, Double>>> branchToSortedStudentsByGrade = stList.stream().collect(Collectors.groupingBy(
Student::getBranch, Collectors.collectingAndThen(Collectors.toList(),
l -> l.stream()
.sorted(Comparator.comparing(st -> st.getSubject().getAverageMarks(), Comparator.reverseOrder()))
.map(student -> Collections.singletonMap(student.getFirstName(), student.getSubject().getAverageMarks()))
.collect(Collectors.toList()))));
Output:
{
A=[{Marry=87.0}, {John=77.0}],
B=[{Arun=67.0}]
}
By the way:
Note that you perform integer division in getAverageMarks (all three operands are ints, so the result is truncated before being assigned to the double):
public double getAverageMarks() {
double avg = (getBiology() + getMaths() + getComputers())/3;
return avg;
}
This will cause all averages to be whole numbers in the format xx.0.
If that is unintentional, I would recommend this approach:
public double getAverageMarks() {
return DoubleStream.of(maths, biology, computers)
.average()
.getAsDouble();
}
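Alternatively, a minimal variant (not from the answer above) that keeps the arithmetic inline is to divide by 3.0, which forces floating-point division:
public double getAverageMarks() {
    // 3.0 makes the division floating-point instead of integer division
    return (getBiology() + getMaths() + getComputers()) / 3.0;
}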
I have two lists of two classes where id and months are common fields:
public class NamePropeties{
private String id;
private Integer name;
private Integer months;
}
public class NameEntries {
private String id;
private Integer retailId;
private Integer months;
}
List<NamePropeties> namePropertiesList = new ArrayList<>();
List<NameEntries> nameEntriesList = new ArrayList<>();
Now I want to JOIN the two lists (like SQL does, joining ON months and id across the two results) and return the data in a new list where months and id are the same in both lists.
If I iterate over one list and look each element up in the other, the nested iteration becomes a size/performance issue.
I have tried to do it in several ways, but is there a stream way?
The general idea has been sketched in the comments: iterate one list, create a map whose keys are the attributes you want to join by, then iterate the other list and check if there's an entry in the map. If there is, get the value from the map and create a new object from the value of the map and the actual element of the list.
It's better to create the map from the list with the higher number of distinct join keys. Why? Because searching a map is O(1), no matter its size. So, if you create the map from the list with more distinct join keys, then when you iterate the second (smaller) list, you'll be iterating over fewer elements.
Putting all this in code:
public static <B, S, J, R> List<R> join(
List<B> bigger,
List<S> smaller,
Function<B, J> biggerKeyExtractor,
Function<S, J> smallerKeyExtractor,
BiFunction<B, S, R> joiner) {
Map<J, List<B>> map = new LinkedHashMap<>();
bigger.forEach(b ->
map.computeIfAbsent(
biggerKeyExtractor.apply(b),
k -> new ArrayList<>())
.add(b));
List<R> result = new ArrayList<>();
smaller.forEach(s -> {
J key = smallerKeyExtractor.apply(s);
List<B> bs = map.get(key);
if (bs != null) {
bs.forEach(b -> {
R r = joiner.apply(b, s);
result.add(r);
});
}
});
return result;
}
This is a generic method that joins bigger List<B> and smaller List<S> by J join keys (in your case, as the join key is a composite of String and Integer types, J will be List<Object>). It takes care of duplicates and returns a result List<R>. The method receives both lists, functions that will extract the join keys from each list and a joiner function that will create new result R elements from joined B and S elements.
Note that the map is actually a multimap. This is because there might be duplicates as per the biggerKeyExtractor join function. We use Map.computeIfAbsent to create this multimap.
You should create a class like this to store joined results:
public class JoinedResult {
private final NameProperties properties;
private final NameEntries entries;
public JoinedResult(NameProperties properties, NameEntries entries) {
this.properties = properties;
this.entries = entries;
}
// TODO getters
}
Or, if you are in Java 14+, you might just use a record:
public record JoinedResult(NameProperties properties, NameEntries entries) { }
Or actually, any Pair class from out there will do, or you could even use Map.Entry.
With the result class (or record) in place, you should call the join method this way:
long propertiesSize = namePropertiesList.stream()
.map(p -> Arrays.asList(p.getMonths(), p.getId()))
.distinct()
.count();
long entriesSize = nameEntriesList.stream()
.map(e -> Arrays.asList(e.getMonths(), e.getId()))
.distinct()
.count();
List<JoinedResult> result = propertiesSize > entriesSize ?
join(namePropertiesList,
nameEntriesList,
p -> Arrays.asList(p.getMonths(), p.getId()),
e -> Arrays.asList(e.getMonths(), e.getId()),
JoinedResult::new) :
join(nameEntriesList,
namePropertiesList,
e -> Arrays.asList(e.getMonths(), e.getId()),
p -> Arrays.asList(p.getMonths(), p.getId()),
(e, p) -> new JoinedResult(p, e));
The key is to use generics and call the join method with the right arguments (they are flipped, as per the join keys size comparison).
Note 1: we can use List<Object> as the key of the map, because all Java lists implement equals and hashCode consistently (thus they can safely be used as map keys)
Note 2: if you are on Java9+, you should use List.of instead of Arrays.asList
Note 3: I haven't checked for null or invalid arguments
Note 4: there is room for improvement, e.g. the key extractor functions could be memoized, join keys could be reused instead of calculated more than once, the multimap could hold plain Object values for single elements and lists only for duplicates, etc.
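As an illustration of note 4, a memoizing key extractor could look roughly like this (a sketch; the memoize helper is an assumption, not part of the code above):
// caches the computed join key per element (relies on the element's equals/hashCode)
static <T, K> Function<T, K> memoize(Function<T, K> keyExtractor) {
    Map<T, K> cache = new HashMap<>();
    return t -> cache.computeIfAbsent(t, keyExtractor);
}
// usage: join(bigger, smaller, memoize(biggerKeyExtractor), memoize(smallerKeyExtractor), joiner)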
If performance and nesting (as discussed) are not too much of a concern, you could employ something along the lines of a cross join with filtering:
Result holder class
public class Tuple<A, B> {
public final A a;
public final B b;
public Tuple(A a, B b) {
this.a = a;
this.b = b;
}
}
Join with a predicate:
public static <A, B> List<Tuple<A, B>> joinOn(
List<A> l1,
List<B> l2,
Predicate<Tuple<A, B>> predicate) {
return l1.stream()
.flatMap(a -> l2.stream().map(b -> new Tuple<>(a, b)))
.filter(predicate)
.collect(Collectors.toList());
}
Call it like this:
List<Tuple<NamePropeties, NameEntries>> joined = joinOn(
properties,
names,
t -> Objects.equals(t.a.id, t.b.id) && Objects.equals(t.a.months, t.b.months)
);
Suppose I have a list as below
Collection<?> mainList = new ArrayList<String>();
mainList=//some method call//
Currently, I am displaying the elements in the list as
System.out.println(mainList.stream().map(Object::toString).collect(Collectors.joining(",")).toString());
And I got the result as
a,b,c,d,e,f,g,h,i
How to print this list by adding a new line after every 3rd element in a list in java, so that it will print the result as below
a,b,c
d,e,f
g,h,i
Note: This is similar to How to Add newline after every 3rd element in arraylist in java?. But there, the formatting is done while reading the file itself.
I want to do it while printing the output.
If you want to stick to the Java Stream API, your problem can be solved by partitioning the initial list into sublists of size 3, representing each sublist as a String, and joining the results with \n.
import java.util.AbstractMap;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;
final class PartitionListExample {
public static void main(String[] args) {
final Collection<String> mainList = Arrays.asList("a", "b", "c", "d", "e", "f", "g", "h", "i");
final AtomicInteger idx = new AtomicInteger(0);
final int size = 3;
// Partition a list into list of lists size 3
final Collection<List<String>> rows = mainList.stream()
.collect(Collectors.groupingBy(
it -> idx.getAndIncrement() / size
))
.values();
// Write each row in the new line as a string
final String result = rows.stream()
.map(row -> String.join(",", row))
.collect(Collectors.joining("\n"));
System.out.println(result);
}
}
There are 3rd party libraries that provide utility classes that make list partitioning easier (e.g. Guava or Apache Commons Collections), but this solution is built on the Java 8 SDK only.
What it does is:
firstly we collect all elements by grouping by an assigned row index and store the values as lists (e.g. {0=[a,b,c], 1=[d,e,f], 2=[g,h,i]})
then we take a list of all the values, like [[a,b,c],[d,e,f],[g,h,i]]
finally we represent the list of lists as a String where each row is separated by \n
Output Demo
Running the above program will print the following output to the console:
a,b,c
d,e,f
g,h,i
Getting more from the example
Alnitak played with this example even further and came up with a shorter solution, using Collectors.joining(",") as the downstream collector in .groupingBy and String.join("\n", rows) at the end instead of triggering another stream reduction.
final Collection<String> rows = mainList.stream()
.collect(Collectors.groupingBy(
it -> idx.getAndIncrement() / size,
Collectors.joining(",")
))
.values();
// Write each row in the new line as a string
final String result = String.join("\n", rows);
System.out.println(result);
Final note
Keep in mind that this is not the most efficient way to print a list of elements in the desired format, but partitioning a list of arbitrary elements gives you flexibility when it comes to building the final result, and it is pretty easy to read and understand.
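For reference, with one of the third-party libraries mentioned above, the partitioning step itself shrinks considerably. A sketch using Guava's Lists.partition (Guava is an assumption here, not something this answer depends on):
// Guava's Lists.partition (com.google.common.collect.Lists) returns consecutive sublists of the given size
List<String> asList = new ArrayList<>(mainList);
String result = Lists.partition(asList, 3).stream()
        .map(row -> String.join(",", row))
        .collect(Collectors.joining(System.lineSeparator()));
System.out.println(result);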
A side remark: in your actual code, map(Object::toString) could be removed if you replace
Collection<?> mainList = new ArrayList<String>(); by
Collection<String> mainList = new ArrayList<String>();.
If you manipulate Strings, create a Collection of String rather than Collection of ?.
But there, the formatting is done while reading the file itself. I want to do it while printing the output.
After getting the joined String, using replaceAll("(\\w*,\\w*,\\w*,)", "$1" + System.lineSeparator()) should do the job.
It searches for every series of three comma-separated word groups (including the trailing comma) and replaces it with the same text ($1, group capturing) concatenated with a line separator.
Besides this:
String collect = mainList.stream().collect(Collectors.joining(","));
could be simplified to:
String collect = String.join(",", mainList);
Sample code :
public static void main(String[] args) {
Collection<String> mainList = Arrays.asList("a","b","c","d","e","f","g","h","i", "j");
String formattedValues = String.join(",", mainList).replaceAll("(\\w*,\\w*,\\w*,)", "$1" + System.lineSeparator());
System.out.println(formattedValues);
}
Output:
a,b,c,
d,e,f,
g,h,i,
j
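If the trailing comma at the end of each row above is unwanted, a small variation (not part of the original answer) is to move the last comma outside the capturing group so the replacement consumes it:
String formattedValues = String.join(",", mainList)
        .replaceAll("(\\w*,\\w*,\\w*),", "$1" + System.lineSeparator());
// a,b,c
// d,e,f
// g,h,i
// j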
Another approach that hasn't been answered here is to create a custom Collector.
import java.util.*;
import java.util.function.BiConsumer;
import java.util.function.BinaryOperator;
import java.util.function.Function;
import java.util.function.Supplier;
import java.util.stream.Collector;
import java.util.stream.Collectors;
public class PartitionListInPlace {
static class MyCollector implements Collector<String, List<List<String>>, String> {
private final List<List<String>> buckets;
private final int bucketSize;
public MyCollector(int numberOfBuckets, int bucketSize) {
this.bucketSize = bucketSize;
this.buckets = new ArrayList<>(numberOfBuckets);
for (int i = 0; i < numberOfBuckets; i++) {
buckets.add(new ArrayList<>(bucketSize));
}
}
@Override
public Supplier<List<List<String>>> supplier() {
return () -> this.buckets;
}
@Override
public BiConsumer<List<List<String>>, String> accumulator() {
return (buckets, element) -> buckets
.stream()
.filter(x -> x.size() < bucketSize)
.findFirst()
.orElseGet(() -> {
ArrayList<String> nextBucket = new ArrayList<>(bucketSize);
buckets.add(nextBucket);
return nextBucket;
})
.add(element);
}
@Override
public BinaryOperator<List<List<String>>> combiner() {
return (b1, b2) -> {
throw new UnsupportedOperationException();
};
}
@Override
public Function<List<List<String>>, String> finisher() {
return buckets -> buckets.stream()
.map(x -> x.stream()
.collect(Collectors.joining(", ")))
.collect(Collectors.joining(System.lineSeparator()));
}
@Override
public Set<Characteristics> characteristics() {
return new HashSet<>();
}
}
public static void main(String[] args) {
Collection<String> mainList = Arrays.asList("a","b","c","d","e","f","g","h","i", "j");
String formattedValues = mainList
.stream()
.collect(new MyCollector(mainList.size() / 3, 3));
System.out.println(formattedValues);
}
}
Explanation
This is a mutable collector that should not be used in parallel. If you need to process the stream in parallel, you will have to make this collector thread safe, which is fairly easy if you don't care about the order of the elements.
The combiner throws an exception because it is never called, since we run the stream sequentially.
The set of Characteristics has none that interest us; you can verify this by reading the javadoc.
The supplier returns the pre-built list of buckets; the accumulator then inserts each element into the first bucket that still has space, or creates a new bucket and adds the element there if none has room.
The finisher is quite simple: join the contents of each bucket with ", " and join the buckets themselves with System.lineSeparator().
Remember
Do not use this collector to process parallel streams.
Output
a, b, c
d, e, f
g, h, i
j
I have a list of class A objects, where A itself contains a List.
public class A {
public double val;
public String id;
public List<String> names = new ArrayList<String>();
public A(double v, String ID, String name)
{
val = v;
id = ID;
names.add(name);
}
static public List<A> createAnExample()
{
List<A> items = new ArrayList<A>();
items.add(new A(8.0,"x1","y11"));
items.add(new A(12.0, "x2", "y21"));
items.add(new A(24.0,"x3","y31"));
items.get(0).names.add("y12");
items.get(1).names.add("y11");
items.get(1).names.add("y31");
items.get(2).names.add("y11");
items.get(2).names.add("y32");
items.get(2).names.add("y33");
return items;
}
}
The aim is to sum, per name, the averaged val (each item's val divided by the number of its names) over the list. I added the code in the main function using some Java 8 streams.
My question is how I can rewrite it in a more elegant way, without using the second list and the for loops.
static public void main(String[] args) {
List<A> items = createAnExample();
List<A> items2 = new ArrayList<A>();
for (int i = 0; i < items.size(); i++) {
List<String> names = items.get(i).names;
double v = items.get(i).val / names.size();
String itemid = items.get(i).id;
for (String n : names) {
A item = new A(v, itemid, n);
items2.add(item);
}
}
Map<String, Double> x = items2.stream().collect(Collectors.groupingBy(item ->
item.names.isEmpty() ? "NULL" : item.names.get(0), Collectors.summingDouble(item -> item.val)));
for (Map.Entry entry : x.entrySet())
System.out.println(entry.getKey() + " --> " + entry.getValue());
}
You can do it with flatMap:
x = items.stream()
.flatMap(a -> a.names.stream()
.map(n -> new AbstractMap.SimpleEntry<>(n, a.val / a.names.size()))
).collect(groupingBy(
Map.Entry::getKey, summingDouble(Map.Entry::getValue)
));
If you find yourself dealing with problems like these often, consider a static method to create a Map.Entry:
static<K,V> Map.Entry<K,V> entry(K k, V v) {
return new AbstractMap.SimpleImmutableEntry<>(k,v);
}
Then you would have a less verbose .map(n -> entry(n, a.val/a.names.size()))
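On Java 9 and later, the built-in java.util.Map.entry factory can serve the same purpose (it returns an immutable entry and rejects null keys and values), so the pipeline could read:
.flatMap(a -> a.names.stream()
        .map(n -> Map.entry(n, a.val / a.names.size())))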
In my free StreamEx library, which extends the standard Stream API, there are special operations that help build such complex maps. Using StreamEx your problem can be solved like this:
Map<String, Double> x = StreamEx.of(createAnExample())
.mapToEntry(item -> item.names, item -> item.val / item.names.size())
.flatMapKeys(List::stream)
.grouping(Collectors.summingDouble(v -> v));
Here mapToEntry creates a stream of map entries (a so-called EntryStream) where keys are lists of names and values are the averaged vals. Next we use flatMapKeys to flatten the keys, leaving the values as is (so we have a stream of Entry<String, Double>). Finally, we group them together, summing the values for repeating keys.
For example, if I intend to partition some elements, I could do something like:
Stream.of("I", "Love", "Stack Overflow")
.collect(Collectors.partitioningBy(s -> s.length() > 3))
.forEach((k, v) -> System.out.println(k + " => " + v));
which outputs:
false => [I]
true => [Love, Stack Overflow]
But to me partitioningBy is only a subcase of groupingBy. Although the former accepts a Predicate as parameter while the latter a Function, I just see a partition as a normal grouping function.
So the same code does exactly the same thing:
Stream.of("I", "Love", "Stack Overflow")
.collect(Collectors.groupingBy(s -> s.length() > 3))
.forEach((k, v) -> System.out.println(k + " => " + v));
which also results in a Map<Boolean, List<String>>.
So is there any reason I should use partitioningBy instead of groupingBy? Thanks
partitioningBy will always return a map with two entries, one for where the predicate is true and one for where it is false.
It is possible that both entries will have empty lists, but they will exist.
That's something that groupingBy will not do, since it only creates entries when they are needed.
At the extreme case, if you send an empty stream to partitioningBy you will still get two entries in the map whereas groupingBy will return an empty map.
EDIT: As mentioned below, this behavior is not documented in the Java 8 docs; however, changing it would take away the added value partitioningBy currently provides. As of Java 9 it is already in the spec.
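For illustration, a minimal sketch of that difference on an empty stream:
Map<Boolean, List<String>> partitioned = Stream.<String>empty()
        .collect(Collectors.partitioningBy(s -> s.length() > 3));
System.out.println(partitioned); // {false=[], true=[]}

Map<Boolean, List<String>> grouped = Stream.<String>empty()
        .collect(Collectors.groupingBy(s -> s.length() > 3));
System.out.println(grouped);     // {}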
partitioningBy is slightly more efficient, using a special Map implementation optimized for when the key is just a boolean.
(It might also help to clarify what you mean; partitioningBy helps to effectively get across that there's a boolean condition being used to partition the data.)
The partitioningBy method will return a map whose keys are always Boolean values, but with the groupingBy method the keys can be of any Object type:
//groupingBy
Map<Object, List<Person>> list2 = new HashMap<Object, List<Person>>();
list2 = list.stream().collect(Collectors.groupingBy(p->p.getAge()==22));
System.out.println("grouping by age -> " + list2);
//partitioningBy
Map<Boolean, List<Person>> list3 = new HashMap<Boolean, List<Person>>();
list3 = list.stream().collect(Collectors.partitioningBy(p->p.getAge()==22));
System.out.println("partitioning by age -> " + list2);
As you can see, the map key in the partitioningBy case is always a Boolean value, while in the groupingBy case the key is of Object type.
Detailed code is as follows:
class Person {
String name;
int age;
Person(String name, int age) {
this.name = name;
this.age = age;
}
public String getName() {
return name;
}
public int getAge() {
return age;
}
public String toString() {
return this.name;
}
}
public class CollectorAndCollectPrac {
public static void main(String[] args) {
Person p1 = new Person("Kosa", 21);
Person p2 = new Person("Saosa", 21);
Person p3 = new Person("Tiuosa", 22);
Person p4 = new Person("Komani", 22);
Person p5 = new Person("Kannin", 25);
Person p6 = new Person("Kannin", 25);
Person p7 = new Person("Tiuosa", 22);
ArrayList<Person> list = new ArrayList<>();
list.add(p1);
list.add(p2);
list.add(p3);
list.add(p4);
list.add(p5);
list.add(p6);
list.add(p7);
// groupingBy
Map<Object, List<Person>> list2 = new HashMap<Object, List<Person>>();
list2 = list.stream().collect(Collectors.groupingBy(p -> p.getAge() == 22));
System.out.println("grouping by age -> " + list2);
// partitioningBy
Map<Boolean, List<Person>> list3 = new HashMap<Boolean, List<Person>>();
list3 = list.stream().collect(Collectors.partitioningBy(p -> p.getAge() == 22));
System.out.println("partitioning by age -> " + list2);
}
}
Another difference between groupingBy and partitioningBy is that the former takes a Function<? super T, ? extends K> and the latter a Predicate<? super T>.
When you pass a method reference or a lambda expression, such as s -> s.length() > 3, they can be used by either of these two methods (the compiler will infer the functional interface type based on the type required by the method you choose).
However, if you have a Predicate<T> instance, you can only pass it to Collectors.partitioningBy(). It won't be accepted by Collectors.groupingBy().
And similarly, if you have a Function<T,Boolean> instance, you can only pass it to Collectors.groupingBy(). It won't be accepted by Collectors.partitioningBy().
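A short sketch of that distinction (variable names are just for illustration):
Predicate<String> isLong = s -> s.length() > 3;
Function<String, Boolean> isLongFn = s -> s.length() > 3;

// compiles: partitioningBy takes a Predicate
Map<Boolean, List<String>> partitioned = Stream.of("I", "Love", "Stack Overflow")
        .collect(Collectors.partitioningBy(isLong));

// compiles: groupingBy takes a Function
Map<Boolean, List<String>> grouped = Stream.of("I", "Love", "Stack Overflow")
        .collect(Collectors.groupingBy(isLongFn));

// Collectors.groupingBy(isLong) and Collectors.partitioningBy(isLongFn) would not compile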
As noted in the other answers, segregating a collection into two groups is useful in some scenarios. Since these two partitions always exist, it is easier to build on them further. In the JDK, partitioningBy is used to segregate the class files and config files:
private static final String SERVICES_PREFIX = "META-INF/services/";
// scan the names of the entries in the JAR file
Map<Boolean, Set<String>> map = jf.versionedStream()
.filter(e -> !e.isDirectory())
.map(JarEntry::getName)
.filter(e -> (e.endsWith(".class") ^ e.startsWith(SERVICES_PREFIX)))
.collect(Collectors.partitioningBy(e -> e.startsWith(SERVICES_PREFIX),
Collectors.toSet()));
Set<String> classFiles = map.get(Boolean.FALSE);
Set<String> configFiles = map.get(Boolean.TRUE);
The code snippet is from jdk.internal.module.ModulePath#deriveModuleDescriptor.