I want to compare getA (e.g. 123) & getB (e.g. 456) and find duplicate records.
P1 getA getB
1 000123000 456
P2 getA getB
2 000123001 456
I tried the below, but it finds duplicates based on the getA & getB combination:
Map<Object, Boolean> findDuplicates = productsList.stream().collect(Collectors.toMap(cm -> Arrays.asList(cm.getB(), cm.getA().substring(3, cm.getA().length() - 3)), cm -> false, (a, b) -> true));
Now I am trying to remove the record whose cm.getA value is lowest, but I am unable to use a comparator here:
productsList.removeIf(cm -> cm.getA() && findDuplicates .get(Arrays.asList(cm.getB(),cm.getA().substring(3, cm.getA().length() - 3))));
Any help would be appreciated.
You can do it in two steps:
Function<Product,Object> dupKey = cm ->
Arrays.asList(cm.getB(), cm.getA().substring(3, cm.getA().length() - 3));
Map<Object, Boolean> duplicates = productsList.stream()
.collect(Collectors.toMap(dupKey, cm -> false, (a, b) -> true));
Map<Object,Product> minDuplicates = productsList.stream()
.filter(cm -> duplicates.get(dupKey.apply(cm)))
.collect(Collectors.toMap(dupKey, Function.identity(),
BinaryOperator.minBy(Comparator.comparing(Product::getA))));
productsList.removeAll(minDuplicates.values());
First, it identifies the keys which have duplicates; then, it collects the minimum for each key, skipping elements that have no duplicates. Finally, it removes the selected values.
In principle, this can be done in one step, but then it requires an object holding two pieces of information: whether there were duplicates for a particular key, and which of them has the minimum value:
BinaryOperator<Product> min = BinaryOperator.minBy(Comparator.comparing(Product::getA));
Set<Product> minDuplicates = productsList.stream()
.collect(Collectors.collectingAndThen(
Collectors.toMap(dupKey, cm -> Map.entry(false,cm),
(a, b) -> Map.entry(true, min.apply(a.getValue(), b.getValue()))),
m -> m.values().stream().filter(Map.Entry::getKey)
.map(Map.Entry::getValue).collect(Collectors.toSet())));
productsList.removeAll(minDuplicates);
This uses Map.Entry instances to hold two values of different type. For keeping the code readable, it uses Java 9’s Map.entry(K,V) factory method. When support for Java 8 is required, it’s recommended to create your own factory method to keep the code simple:
static <K, V> Map.Entry<K, V> entry(K k, V v) {
return new AbstractMap.SimpleImmutableEntry<>(k, v);
}
then use that method instead of Map.entry.
The logic stays the same as in the first variant: it maps each value to false plus the element itself, and merges them to true plus the minimum element, but now in one go. The filtering has to be done afterwards to skip the false entries, then map to the minimum element and collect into a Set.
Then, using removeAll is the same.
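For reference, here is the two-step variant assembled into a runnable sketch; the Product stub and the sample values are assumptions made up for this demo, not part of the original question:

```java
import java.util.*;
import java.util.function.*;
import java.util.stream.*;

public class RemoveMinDuplicatesDemo {

    static class Product {
        private final String a, b;
        Product(String a, String b) { this.a = a; this.b = b; }
        public String getA() { return a; }
        public String getB() { return b; }
        @Override public String toString() { return a + "/" + b; }
    }

    static void removeMinDuplicates(List<Product> productsList) {
        // key: getB plus the middle part of getA (first and last 3 chars stripped)
        Function<Product, Object> dupKey = cm ->
                Arrays.asList(cm.getB(), cm.getA().substring(3, cm.getA().length() - 3));

        // step 1: key -> true iff more than one element maps to it
        Map<Object, Boolean> duplicates = productsList.stream()
                .collect(Collectors.toMap(dupKey, cm -> false, (x, y) -> true));

        // step 2: among duplicates only, pick the element with the minimal getA
        Map<Object, Product> minDuplicates = productsList.stream()
                .filter(cm -> duplicates.get(dupKey.apply(cm)))
                .collect(Collectors.toMap(dupKey, Function.identity(),
                        BinaryOperator.minBy(Comparator.comparing(Product::getA))));

        productsList.removeAll(minDuplicates.values());
    }

    public static void main(String[] args) {
        List<Product> products = new ArrayList<>(List.of(
                new Product("000123000", "456"),
                new Product("000123001", "456"),
                new Product("000999000", "789"))); // no duplicate for this key
        removeMinDuplicates(products);
        System.out.println(products); // the minimal duplicate "000123000" is gone
    }
}
```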
Instead of a map from the duplicate key to a boolean, you can use a map from the duplicate key to a TreeSet. This makes it one step: since a TreeSet always keeps its elements sorted, you don't need a second step to find the minimum value.
public class ProductDups {
public static void main(String[] args) {
List<Product> productsList = new ArrayList<>();
productsList.add(new Product("000123000", "456"));
productsList.add(new Product("000123001", "456"));
productsList.add(new Product("000124003", "567"));
productsList.add(new Product("000124002", "567"));
List<Product> minDuplicates = productsList.stream()
.collect(
Collectors.toMap(
p -> Arrays.asList(p.getB(),
p.getA().substring(3, p.getA().length() - 3)),
p -> {
TreeSet<Product> ts = new TreeSet<>(Comparator.comparing(Product::getA));
ts.add(p);
return ts;
},
(a, b) -> {
a.addAll(b);
return a;
}
)
)
.entrySet()
.stream()
.filter(e -> e.getValue().size() > 1)
.map(e -> e.getValue().first())
.collect(Collectors.toList());
System.out.println(minDuplicates);
}
}
class Product {
String a;
String b;
public Product(String a, String b) {
this.a = a;
this.b = b;
}
public String getA() {
return a;
}
public void setA(String a) {
this.a = a;
}
public String getB() {
return b;
}
public void setB(String b) {
this.b = b;
}
@Override
public String toString() {
return "Product{" +
"a='" + a + '\'' +
", b='" + b + '\'' +
'}';
}
}
You can split the string into a list, then loop through that list and compare it with the other list, if that is what you are trying to do.
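A minimal sketch of that idea; the sample strings and the comma delimiter are assumptions for illustration:

```java
import java.util.*;

public class CommonElementsDemo {

    // split two delimited strings into lists and keep only the shared values
    static List<String> commonValues(String first, String second) {
        List<String> a = new ArrayList<>(Arrays.asList(first.split(",")));
        List<String> b = Arrays.asList(second.split(","));
        a.retainAll(b); // a now holds only the values present in both lists
        return a;
    }

    public static void main(String[] args) {
        System.out.println(commonValues("123,456,789", "456,789,999")); // [456, 789]
    }
}
```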
Related
I have two list on two Class where id and month is common
public class NamePropeties{
private String id;
private Integer name;
private Integer months;
}
public class NameEntries {
private String id;
private Integer retailId;
private Integer months;
}
List<NamePropeties> namePropetiesList = new ArrayList<>();
List<NameEntries> nameEntries = new ArrayList<>();
Now I want to JOIN the two lists (like SQL does, JOIN ON month and id coming from the two results) and return the data in a new list where month and id are the same in the two given lists.
If I start iterating over only one and checking in the other list, there can be a size/iteration issue.
I have tried to do it in many ways, but is there a stream way?
The general idea has been sketched in the comments: iterate one list, create a map whose keys are the attributes you want to join by, then iterate the other list and check if there's an entry in the map. If there is, get the value from the map and create a new object from the value of the map and the actual element of the list.
It's better to create the map from the list with the higher number of joined elements. Why? Because searching a map is O(1), no matter the size of the map. So if you create the map from the larger list, then when you iterate the second (smaller) list, you'll be iterating over fewer elements.
Putting all this in code:
public static <B, S, J, R> List<R> join(
List<B> bigger,
List<S> smaller,
Function<B, J> biggerKeyExtractor,
Function<S, J> smallerKeyExtractor,
BiFunction<B, S, R> joiner) {
Map<J, List<B>> map = new LinkedHashMap<>();
bigger.forEach(b ->
map.computeIfAbsent(
biggerKeyExtractor.apply(b),
k -> new ArrayList<>())
.add(b));
List<R> result = new ArrayList<>();
smaller.forEach(s -> {
J key = smallerKeyExtractor.apply(s);
List<B> bs = map.get(key);
if (bs != null) {
bs.forEach(b -> {
R r = joiner.apply(b, s);
result.add(r);
});
}
});
return result;
}
This is a generic method that joins bigger List<B> and smaller List<S> by J join keys (in your case, as the join key is a composite of String and Integer types, J will be List<Object>). It takes care of duplicates and returns a result List<R>. The method receives both lists, functions that will extract the join keys from each list and a joiner function that will create new result R elements from joined B and S elements.
Note that the map is actually a multimap. This is because there might be duplicates as per the biggerKeyExtractor join function. We use Map.computeIfAbsent to create this multimap.
You should create a class like this to store joined results:
public class JoinedResult {
private final NameProperties properties;
private final NameEntries entries;
public JoinedResult(NameProperties properties, NameEntries entries) {
this.properties = properties;
this.entries = entries;
}
// TODO getters
}
Or, if you are in Java 14+, you might just use a record:
public record JoinedResult(NameProperties properties, NameEntries entries) { }
Or actually, any Pair class out there will do; you could even use Map.Entry.
With the result class (or record) in place, you should call the join method this way:
long propertiesSize = namePropertiesList.stream()
.map(p -> Arrays.asList(p.getMonths(), p.getId()))
.distinct()
.count();
long entriesSize = nameEntriesList.stream()
.map(e -> Arrays.asList(e.getMonths(), e.getId()))
.distinct()
.count();
List<JoinedResult> result = propertiesSize > entriesSize ?
join(namePropertiesList,
nameEntriesList,
p -> Arrays.asList(p.getMonths(), p.getId()),
e -> Arrays.asList(e.getMonths(), e.getId()),
JoinedResult::new) :
join(nameEntriesList,
namePropertiesList,
e -> Arrays.asList(e.getMonths(), e.getId()),
p -> Arrays.asList(p.getMonths(), p.getId()),
(e, p) -> new JoinedResult(p, e));
The key is to use generics and call the join method with the right arguments (they are flipped, as per the join keys size comparison).
Note 1: we can use List<Object> as the key of the map because all Java lists implement equals and hashCode consistently (thus they can safely be used as map keys).
Note 2: if you are on Java 9+, you should use List.of instead of Arrays.asList.
Note 3: I haven't checked for null or invalid arguments.
Note 4: there is room for improvement, e.g. the key extractor functions could be memoized, join keys could be reused instead of computed more than once, and the multimap could hold plain Object values for single elements and lists only for duplicates, etc.
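To sanity-check the approach, here is a compact runnable sketch of the generic join in action; the record definitions and sample values are assumptions made up for this demo (the answer's JoinedResult class is mirrored as a record):

```java
import java.util.*;
import java.util.function.*;

public class JoinDemo {

    record NameProperties(String id, Integer name, Integer months) { }
    record NameEntries(String id, Integer retailId, Integer months) { }
    record JoinedResult(NameProperties properties, NameEntries entries) { }

    static <B, S, J, R> List<R> join(List<B> bigger, List<S> smaller,
            Function<B, J> biggerKeyExtractor, Function<S, J> smallerKeyExtractor,
            BiFunction<B, S, R> joiner) {
        // multimap from join key to all bigger-side elements with that key
        Map<J, List<B>> map = new LinkedHashMap<>();
        bigger.forEach(b -> map.computeIfAbsent(biggerKeyExtractor.apply(b),
                k -> new ArrayList<>()).add(b));
        List<R> result = new ArrayList<>();
        smaller.forEach(s -> {
            List<B> bs = map.get(smallerKeyExtractor.apply(s));
            if (bs != null) {
                bs.forEach(b -> result.add(joiner.apply(b, s)));
            }
        });
        return result;
    }

    public static void main(String[] args) {
        Function<NameProperties, List<Object>> pKey = p -> Arrays.asList(p.months(), p.id());
        Function<NameEntries, List<Object>> eKey = e -> Arrays.asList(e.months(), e.id());
        List<JoinedResult> joined = join(
                List.of(new NameProperties("a", 1, 1), new NameProperties("b", 2, 2)),
                List.of(new NameEntries("a", 10, 1), new NameEntries("c", 30, 3)),
                pKey, eKey, JoinedResult::new);
        joined.forEach(System.out::println); // only the (months=1, id="a") pair matches
    }
}
```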
If performance and nesting (as discussed) is not too much of a concern you could employ something along the lines of a crossjoin with filtering:
Result holder class
public class Tuple<A, B> {
public final A a;
public final B b;
public Tuple(A a, B b) {
this.a = a;
this.b = b;
}
}
Join with a predicate:
public static <A, B> List<Tuple<A, B>> joinOn(
List<A> l1,
List<B> l2,
Predicate<Tuple<A, B>> predicate) {
return l1.stream()
.flatMap(a -> l2.stream().map(b -> new Tuple<>(a, b)))
.filter(predicate)
.collect(Collectors.toList());
}
Call it like this:
List<Tuple<NamePropeties, NameEntries>> joined = joinOn(
properties,
names,
t -> Objects.equals(t.a.id, t.b.id) && Objects.equals(t.a.months, t.b.months)
);
I would like to use the Streams API to process a call log and calculate the total billing amount per phone number. Here's code that achieves it with a hybrid approach, but I would like to use a fully functional approach:
List<CallLog> callLogs = Arrays.stream(S.split("\n"))
.map(CallLog::new)
.sorted(Comparator.comparingInt(callLog -> callLog.phoneNumber))
.collect(Collectors.toList());
for (int i = 0; i< callLogs.size() -1 ;i++) {
if (callLogs.get(i).phoneNumber == callLogs.get(i+1).phoneNumber) {
callLogs.get(i).billing += callLogs.get(i+1).billing;
callLogs.remove(i+1);
}
}
You can use Collectors.groupingBy to group CallLog objects by phoneNumber, with Collectors.summingInt to sum the billing of the grouped elements:
Map<Integer, Integer> billingPerPhone = Arrays.stream(S.split("\n"))
.map(CallLog::new)
.collect(Collectors.groupingBy(CallLog::getPhoneNumber, Collectors.summingInt(CallLog::getBilling)));
Map<Integer, Integer> result = Arrays.stream(S.split("\n"))
.map(CallLog::new)
.sorted(Comparator.comparingInt(callLog -> callLog.phoneNumber))
.collect(Collectors.toMap(
c -> c.phoneNumber(),
c -> c.billing(),
(a, b) -> a+b
));
And if you want to have a List<CallLog> callLogs as a result:
List<CallLog> callLogs = Arrays.stream(S.split("\n"))
.map(CallLog::new)
.collect(Collectors.toMap(
c -> c.phoneNumber(),
c -> c.billing(),
(a, b) -> a+b
))
.entrySet()
.stream()
.map(entry -> toCallLog(entry.getKey(), entry.getValue()))
.sorted(Comparator.comparingInt(callLog -> callLog.phoneNumber))
.collect(Collectors.toList());
You can save yourself the sorting, collecting to a list, and iterating the list for adjacent values if you instead do the following:
Create all CallLog objects.
Merge them by the phoneNumber field
combine the billing fields every time
Return the already merged items
This can be done using Collectors.toMap(Function, Function, BinaryOperator) where the third parameter is the merge function that defines how items with identical keys would be combined:
Collection<CallLog> callLogs = Arrays.stream(S.split("\n"))
.map(CallLog::new)
.collect(Collectors.toMap( //a collector that will produce a map
CallLog::phoneNumber, //using phoneNumber as the key to group
x -> x, //the item itself as the value
(a, b) -> { //and a merge function that returns an object with combined billing
a.billing += b.billing;
return a;
}))
.values(); //just return the values from that map
In the end, you would have CallLog items with unique phoneNumber fields whose billing field is equal to the combination of all billings of the previously duplicate phoneNumbers.
What you are trying to do is remove duplicate phone numbers while adding up their billing. The one thing streams are incompatible with is remove operations. So how can we do what you need without remove?
Well, instead of sorting, I would go with groupingBy on phone numbers; then I would map each group of callLogs into a single callLog with the billing already accumulated.
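A sketch of that groupingBy-then-accumulate idea; the CallLog stub and sample values are assumptions made up for this demo:

```java
import java.util.*;
import java.util.stream.*;

public class GroupCallLogsDemo {

    static class CallLog {
        final int phoneNumber;
        final int billing;
        CallLog(int phoneNumber, int billing) {
            this.phoneNumber = phoneNumber;
            this.billing = billing;
        }
    }

    static List<CallLog> merge(List<CallLog> logs) {
        return logs.stream()
                .collect(Collectors.groupingBy(c -> c.phoneNumber))   // group by number
                .values().stream()
                .map(group -> new CallLog(group.get(0).phoneNumber,   // fold each group
                        group.stream().mapToInt(c -> c.billing).sum()))
                .sorted(Comparator.comparingInt(c -> c.phoneNumber))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        merge(List.of(new CallLog(555, 3), new CallLog(555, 4), new CallLog(666, 5)))
                .forEach(c -> System.out.println(c.phoneNumber + ": " + c.billing));
    }
}
```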
You could group the billing amount by phoneNumber, like VLAZ said. An example implementation could look something like this:
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;
public class Demo {
public static void main(String[] args) {
final String s = "555123456;12.00\n"
+ "555123456;3.00\n"
+ "555123457;1.00\n"
+ "555123457;2.00\n"
+ "555123457;5.00";
final Map<Integer, Double> map = Arrays.stream(s.split("\n"))
.map(CallLog::new)
.collect(Collectors.groupingBy(CallLog::getPhoneNumber, Collectors.summingDouble(CallLog::getBilling)));
map.forEach((key, value) -> System.out.printf("%d: %.2f\n", key, value));
}
private static class CallLog {
private final int phoneNumber;
private final double billing;
public CallLog(int phoneNumber, double billing) {
this.phoneNumber = phoneNumber;
this.billing = billing;
}
public CallLog(String s) {
final String[] strings = s.split(";");
this.phoneNumber = Integer.parseInt(strings[0]);
this.billing = Double.parseDouble(strings[1]);
}
public int getPhoneNumber() {
return phoneNumber;
}
public double getBilling() {
return billing;
}
}
}
which produces the following output:
555123456: 15.00
555123457: 8.00
I want to compare getCode & getMode and find duplicate records.
Then there is one more product attribute, getVode, which always has a different value (either true or false) in the two records.
P1 getCode getMode getVode
1 001 123 true
P2 getCode getMode getVode
2 001 123 false
I tried the below, but it only finds duplicates:
List<ProductModel> uniqueProducts = productsList.stream()
.collect(Collectors.collectingAndThen(
toCollection(() -> new TreeSet<>(
Comparator.comparing(ProductModel::getCode)
.thenComparing(ProductModel::getMode)
)),
ArrayList::new));
So when I find duplicate records, I want to check the getVode value and remove the record whose getVode is false from the list.
Any help would be appreciated.
As far as I understood, you want to remove elements only if they are a duplicate and their getVode method returns false.
We can do this literally. First, we have to identify which elements are duplicates:
Map<Object, Boolean> isDuplicate = productsList.stream()
.collect(Collectors.toMap(pm -> Arrays.asList(pm.getCode(), pm.getMode()),
pm -> false, (a, b) -> true));
Then, remove those fulfilling the condition:
productsList.removeIf(pm -> !pm.getVode()
&& isDuplicate.get(Arrays.asList(pm.getCode(), pm.getMode())));
Or, not modifying the old list:
List<ProductModel> uniqueProducts = new ArrayList<>(productsList);
uniqueProducts.removeIf(pm -> !pm.getVode()
&& isDuplicate.get(Arrays.asList(pm.getCode(), pm.getMode())));
which can also be done via Stream operation:
List<ProductModel> uniqueProducts = productsList.stream()
.filter(pm -> pm.getVode()
|| !isDuplicate.get(Arrays.asList(pm.getCode(), pm.getMode())))
.collect(Collectors.toList());
Here you remove the duplicates regardless of the getVode() value, since it is not considered in the Comparator passed to the TreeSet.
That is not easy with your approach.
You could create a Map<ProductModelId, List<ProductModel>> by grouping the elements according to their getCode() and getMode() values, which you can represent with a ProductModelId class.
Then process each entry of the Map: if the list contains a single element, keep it; otherwise drop all those that have getVode() false.
Map<ProductModelId, List<ProductModel>> map =
productsList.stream()
.collect(groupingBy(p -> new ProductModelId(p.getCode(), p.getMode())));
List<ProductModel> listFiltered =
map.values()
.stream()
.flatMap(l -> {
if (l.size() == 1) {
return Stream.of(l.get(0));
} else {
return l.stream().filter(ProductModel::getVode);
}
}
)
.collect(toList());
Note that ProductModelId should override equals/hashCode considering the values of the two fields, so elements group correctly in the map:
public class ProductModelId {
private String code;
private String mode;
public ProductModelId(String code, String mode) {
this.code = code;
this.mode = mode;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (!(o instanceof ProductModelId)) return false;
ProductModelId that = (ProductModelId) o;
return Objects.equals(code, that.code) &&
Objects.equals(mode, that.mode);
}
@Override
public int hashCode() {
return Objects.hash(code, mode);
}
}
You could group by the combination of code and mode, and then in the merge function keep the element whose vode is true:
Collection<ProductModel> uniqueProducts = products.stream()
.collect(toMap(
p -> Arrays.asList(p.getCode(), p.getMode()),
Function.identity(),
(p1, p2) -> p1.getVode() ? p1 : p2))
.values();
See the Javadoc for toMap.
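A runnable illustration of that merge function; the ProductModel record and the sample values are assumptions made up for this demo:

```java
import java.util.*;
import java.util.function.Function;
import static java.util.stream.Collectors.toMap;

public class VodeMergeDemo {

    record ProductModel(String code, String mode, boolean vode) { }

    static Collection<ProductModel> dedupe(List<ProductModel> products) {
        return products.stream()
                .collect(toMap(
                        p -> List.of(p.code(), p.mode()),     // composite code/mode key
                        Function.identity(),
                        (p1, p2) -> p1.vode() ? p1 : p2))     // keep the vode == true one
                .values();
    }

    public static void main(String[] args) {
        Collection<ProductModel> unique = dedupe(List.of(
                new ProductModel("001", "123", true),
                new ProductModel("001", "123", false),
                new ProductModel("002", "456", false)));
        unique.forEach(System.out::println); // one entry per code/mode combination
    }
}
```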
If your vode can be true for multiple instances of ProductModel (if you expect a single true, this is even simpler; I'll leave that as an exercise for you) and you want to retain them all, maybe this is what you are after:
List<ProductModel> models = List.of(
new ProductModel(1, 123, false),
new ProductModel(1, 123, true)); // just an example
Map<List<Integer>, List<ProductModel>> map = new HashMap<>();
models.forEach(x -> {
map.computeIfPresent(Arrays.asList(x.getMode(), x.getCode()),
(key, value) -> {
value.add(x);
value.removeIf(xx -> !xx.isVode());
return value;
});
map.computeIfAbsent(Arrays.asList(x.getMode(), x.getCode()),
key -> {
List<ProductModel> list = new ArrayList<>();
list.add(x);
return list;
});
});
map.values()
.stream()
.flatMap(List::stream)
.forEachOrdered(x -> System.out.println(x.getCode() + " " + x.getMode()));
where ProductModel is something like:
static class ProductModel {
private final int code;
private final int mode;
private final boolean vode;
// some other fields, getters, setters
}
This isn't that trivial to achieve. You first need to find whether there are duplicates and only act accordingly when they are found. map.computeIfAbsent takes care of putting keys into the map (a key is made from Code/Mode wrapped in Arrays::asList, which properly overrides hashCode/equals).
When a duplicate is found based on that key, we want to act on it via map.computeIfPresent. The "acting" isn't trivial either, considering that vode can be true across multiple instances (yes, this is my assumption). You don't know what vode was put into the map for the previous key: was it false? If so, it must be removed. But is the current one false too? If so, it must be removed as well.
I have a HashMap with key/value as Map<String, List<Trade>>. In the value object, i.e. List<Trade>, I have to compare each object: if any two objects have the same "TradeType" property, I have to remove those two from the list. I am trying to achieve this as below, but I am getting an "Array index out of bound exception". Also, is there a better way to compare the same objects inside a list using streams?
Map<String,List<Trade>> tradeMap = new HashMap<>();
// Insert some data here
tradeMap.entrySet()
.stream()
.forEach(tradeList -> {
List<Trade> tl = tradeList.getValue();
String et = "";
// Is there a better way to refactor this?
for(int i = 0;i <= tl.size();i++){
if(i == 0) {
et = tl.get(0).getTradeType();
}
else {
if(et.equals(tl.get(i).getTradeType())){
tl.remove(i);
}
}
}
});
Your description is not completely in sync with what your code does, so I will provide a couple of solutions from which you can choose the one you're after.
First and foremost, as for the IndexOutOfBoundsException, you can solve it by changing the loop condition from i <= tl.size() to i < tl.size(). This is because the last item in a list is at index tl.size() - 1, as lists are 0-based.
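For illustration, here is the original loop with the bound fixed. Note that, as a separate latent issue, the index also has to be decremented after a removal so the element shifted into position i is not skipped; that decrement is an addition of this sketch, and the Trade stub and sample values are assumptions for the demo:

```java
import java.util.*;

public class FixedLoopDemo {

    static class Trade {
        private final String tradeType;
        Trade(String tradeType) { this.tradeType = tradeType; }
        public String getTradeType() { return tradeType; }
    }

    // remove every later element whose trade type equals the first element's
    static void removeLaterMatchesOfFirst(List<Trade> tl) {
        if (tl.isEmpty()) return;
        String et = tl.get(0).getTradeType();
        for (int i = 1; i < tl.size(); i++) {          // was: i <= tl.size()
            if (et.equals(tl.get(i).getTradeType())) {
                tl.remove(i);
                i--;                                   // re-check the shifted index
            }
        }
    }

    public static void main(String[] args) {
        List<Trade> tl = new ArrayList<>(List.of(
                new Trade("A"), new Trade("B"), new Trade("A"), new Trade("A")));
        removeLaterMatchesOfFirst(tl);
        tl.forEach(t -> System.out.println(t.getTradeType())); // A then B remain
    }
}
```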
To improve upon your current attempt you can do it as follows:
tradeMap.values()
.stream()
.filter(list -> list.size() > 1)
.forEach(T::accept);
where accept is defined as:
private static void accept(List<Trade> list) {
String et = list.get(0).getTradeType();
list.subList(1, list.size()).removeIf(e -> e.getTradeType().equals(et));
}
and T should be substituted with the class containing the accept method.
The above code snippet only removes objects after the first element that are equal to the first element by trade type, which is what your example snippet is attempting to do. If, however, you want distinctness across all objects, then one option would be to override equals and hashCode in the Trade class as follows:
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
Trade trade = (Trade) o;
return Objects.equals(tradeType, trade.tradeType);
}
@Override
public int hashCode() {
return Objects.hash(tradeType);
}
Then the accept method needs to be modified to become:
private static void accept(List<Trade> list) {
List<Trade> distinct = list.stream()
.distinct()
.collect(Collectors.toList());
list.clear(); // clear list
list.addAll(distinct); // repopulate with distinct objects by tradeType
}
Or, if you don't want to override equals and hashCode at all, you can use the toMap collector to get the distinct objects:
private static void accept(List<Trade> list) {
Collection<Trade> distinct = list.stream()
.collect(Collectors.toMap(Trade::getTradeType,
Function.identity(), (l, r) -> l, LinkedHashMap::new))
.values();
list.clear(); // clear list
list.addAll(distinct); // repopulate with distinct objects by tradeType
}
If, however, when you say:
"If any two of the object's property name "TradeType" is same then I have to remove those two from the list."
you actually want to remove all Trade objects equal by tradeType that have 2 or more occurrences, then modify the accept method as follows:
private static void accept(List<Trade> list) {
list.stream()
.collect(Collectors.groupingBy(Trade::getTradeType))
.values()
.stream()
.filter(l -> l.size() > 1)
.map(l -> l.get(0))
.forEach(t -> list.removeIf(trade -> trade.getTradeType().equals(t.getTradeType())));
}
public void test(){
Map<String, List<Trade>> data = new HashMap<>();
List<Trade> list1 = Arrays.asList(new Trade("1"), new Trade("2"), new Trade("1"), new Trade("3"), new Trade("3"), new Trade("4"));
List<Trade> list2 = Arrays.asList(new Trade("1"), new Trade("2"), new Trade("2"), new Trade("3"), new Trade("3"), new Trade("4"));
data.put("a", list1);
data.put("b", list2);
Map<String, List<Trade>> resultMap = data.entrySet()
.stream()
.collect(Collectors.toMap(Entry::getKey, this::filterDuplicateListObjects));
resultMap.forEach((key, value) -> System.out.println(key + ": " + value));
// Don't forget to override the toString() method of Trade class.
}
public List<Trade> filterDuplicateListObjects(Entry<String, List<Trade>> entry){
return entry.getValue()
.stream()
.filter(trade -> isDuplicate(trade, entry.getValue()))
.collect(Collectors.toList());
}
public boolean isDuplicate(Trade trade, List<Trade> tradeList){
return tradeList.stream()
.filter(t -> !t.equals(trade))
.noneMatch(t -> t.getTradeType().equals(trade.getTradeType()));
}
Map<String,List<Trade>> tradeMap = new HashMap<>();
tradeMap.put("1",Arrays.asList(new Trade("A"),new Trade("B"),new Trade("A")));
tradeMap.put("2",Arrays.asList(new Trade("C"),new Trade("C"),new Trade("D")));
Map<String,Collection<Trade>> tradeMapNew = tradeMap.entrySet()
.stream()
.collect(Collectors.toMap(Entry::getKey,
e -> e.getValue().stream() //This is to remove the duplicates from the list.
.collect(Collectors.toMap(Trade::getTradeType,
t->t,
(t1,t2) -> t1,
LinkedHashMap::new))
.values()));
Output:
{1=[A, B], 2=[C, D]}
Map<String, List<Trade>> tradeMap = new HashMap<>();
tradeMap.values()
.stream()
.forEach(trades -> trades.stream()
.collect(Collectors.groupingBy(Trade::getType))
.values()
.stream()
.filter(tradesByType -> tradesByType.size() > 1)
.flatMap(Collection::stream)
.forEach(trades::remove));
I'm porting a piece of code from .NET to Java and stumbled upon a scenario where I want to use a stream to map & reduce.
class Content
{
private String propA, propB, propC;
Content(String a, String b, String c)
{
propA = a; propB = b; propC = c;
}
public String getA() { return propA; }
public String getB() { return propB; }
public String getC() { return propC; }
}
List<Content> contentList = new ArrayList<>();
contentList.add(new Content("A1", "B1", "C1"));
contentList.add(new Content("A2", "B2", "C2"));
contentList.add(new Content("A3", "B3", "C3"));
I want to write a function that can stream through the contents of contentList and return a class with the result:
content { propA = "A1, A2, A3", propB = "B1, B2, B3", propC = "C1, C2, C3" }
I'm fairly new to Java, so you might find some code that resembles C# more than Java.
You can use a proper lambda as the BinaryOperator in the reduce function.
Content c = contentList
.stream()
.reduce((t, u) -> new Content(
t.getA() + ", " + u.getA(),
t.getB() + ", " + u.getB(),
t.getC() + ", " + u.getC())
).get();
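A self-contained version of the same reduce, using a record as a stand-in for the Content class (an assumption for the demo) and ", " as the delimiter to match the requested output:

```java
import java.util.*;

public class ReduceMergeDemo {

    record Content(String a, String b, String c) { }

    static Content merge(List<Content> contentList) {
        return contentList.stream()
                .reduce((t, u) -> new Content(
                        t.a() + ", " + u.a(),
                        t.b() + ", " + u.b(),
                        t.c() + ", " + u.c()))
                .orElseThrow(); // safer than a bare get() on a possibly empty stream
    }

    public static void main(String[] args) {
        List<Content> contentList = List.of(
                new Content("A1", "B1", "C1"),
                new Content("A2", "B2", "C2"),
                new Content("A3", "B3", "C3"));
        System.out.println(merge(contentList).a()); // A1, A2, A3
    }
}
```

Note that reduce creates a new intermediate Content per element; the collector-based answers below avoid that.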
The most generic way to deal with such tasks would be to combine the result of multiple collectors into a single one.
Using the jOOL library, you could have the following:
Content content =
Seq.seq(contentList)
.collect(
Collectors.mapping(Content::getA, Collectors.joining(", ")),
Collectors.mapping(Content::getB, Collectors.joining(", ")),
Collectors.mapping(Content::getC, Collectors.joining(", "))
).map(Content::new);
This creates a Seq from the input list and combines the 3 given collectors to create a Tuple3, which is simply a holder for 3 values. Those 3 values are then mapped into a Content using the constructor new Content(a, b, c). The collectors themselves simply map each Content into its a, b or c value and join the results together, separated with ", ".
Without third-party help, we could create our own combining collector like this (it is based on the StreamEx pairing collector, which does the same thing for 2 collectors). It takes 3 collectors as arguments and applies a finisher operation to the 3 collected values.
public interface TriFunction<T, U, V, R> {
R apply(T t, U u, V v);
}
public static <T, A1, A2, A3, R1, R2, R3, R> Collector<T, ?, R> combining(Collector<? super T, A1, R1> c1, Collector<? super T, A2, R2> c2, Collector<? super T, A3, R3> c3, TriFunction<? super R1, ? super R2, ? super R3, ? extends R> finisher) {
final class Box<A, B, C> {
A a; B b; C c;
Box(A a, B b, C c) {
this.a = a;
this.b = b;
this.c = c;
}
}
EnumSet<Characteristics> c = EnumSet.noneOf(Characteristics.class);
c.addAll(c1.characteristics());
c.retainAll(c2.characteristics());
c.retainAll(c3.characteristics());
c.remove(Characteristics.IDENTITY_FINISH);
return Collector.of(
() -> new Box<>(c1.supplier().get(), c2.supplier().get(), c3.supplier().get()),
(acc, v) -> {
c1.accumulator().accept(acc.a, v);
c2.accumulator().accept(acc.b, v);
c3.accumulator().accept(acc.c, v);
},
(acc1, acc2) -> {
acc1.a = c1.combiner().apply(acc1.a, acc2.a);
acc1.b = c2.combiner().apply(acc1.b, acc2.b);
acc1.c = c3.combiner().apply(acc1.c, acc2.c);
return acc1;
},
acc -> finisher.apply(c1.finisher().apply(acc.a), c2.finisher().apply(acc.b), c3.finisher().apply(acc.c)),
c.toArray(new Characteristics[c.size()])
);
}
and finally use it with
Content content = contentList.stream().collect(combining(
Collectors.mapping(Content::getA, Collectors.joining(", ")),
Collectors.mapping(Content::getB, Collectors.joining(", ")),
Collectors.mapping(Content::getC, Collectors.joining(", ")),
Content::new
));
static Content merge(List<Content> list) {
return new Content(
list.stream().map(Content::getA).collect(Collectors.joining(", ")),
list.stream().map(Content::getB).collect(Collectors.joining(", ")),
list.stream().map(Content::getC).collect(Collectors.joining(", ")));
}
EDIT: Expanding on Federico's inline collector, here is a concrete class dedicated to merging Content objects:
class Merge {
public static Collector<Content, ?, Content> collector() {
return Collector.of(Merge::new, Merge::accept, Merge::combiner, Merge::finisher);
}
private StringJoiner a = new StringJoiner(", ");
private StringJoiner b = new StringJoiner(", ");
private StringJoiner c = new StringJoiner(", ");
private void accept(Content content) {
a.add(content.getA());
b.add(content.getB());
c.add(content.getC());
}
private Merge combiner(Merge second) {
a.merge(second.a);
b.merge(second.b);
c.merge(second.c);
return this;
}
private Content finisher() {
return new Content(a.toString(), b.toString(), c.toString());
}
}
Used as:
Content merged = contentList.stream().collect(Merge.collector());
If you don't want to iterate 3 times over the list, or don't want to create too many Content intermediate objects, then you'd need to collect the stream with your own implementation:
public static Content collectToContent(Stream<Content> stream) {
return stream.collect(
Collector.of(
() -> new StringBuilder[] {
new StringBuilder(),
new StringBuilder(),
new StringBuilder() },
(StringBuilder[] arr, Content elem) -> {
arr[0].append(arr[0].length() == 0 ?
elem.getA() :
", " + elem.getA());
arr[1].append(arr[1].length() == 0 ?
elem.getB() :
", " + elem.getB());
arr[2].append(arr[2].length() == 0 ?
elem.getC() :
", " + elem.getC());
},
(arr1, arr2) -> {
arr1[0].append(arr1[0].length() == 0 ?
arr2[0].toString() :
arr2[0].length() == 0 ?
"" :
", " + arr2[0].toString());
arr1[1].append(arr1[1].length() == 0 ?
arr2[1].toString() :
arr2[1].length() == 0 ?
"" :
", " + arr2[1].toString());
arr1[2].append(arr1[2].length() == 0 ?
arr2[2].toString() :
arr2[2].length() == 0 ?
"" :
", " + arr2[2].toString());
return arr1;
},
arr -> new Content(
arr[0].toString(),
arr[1].toString(),
arr[2].toString())));
}
This collector first creates an array of 3 empty StringBuilder objects. Then it defines an accumulator that appends each Content element's property to the corresponding StringBuilder. Then it defines a merge function, used only when the stream is processed in parallel, which merges two previously accumulated partial results. Finally, it defines a finisher function that transforms the 3 StringBuilder objects into a new instance of Content, with each property corresponding to the accumulated strings of the previous steps.
Please check the Stream.collect() and Collector.of() Javadocs for further reference.