I have an input object:
@Getter
class Txn {
private String hash;
private String withdrawId;
private String depositId;
private Integer amount;
private String date;
}
and the output object is
@Builder
@Getter
class UserTxn {
private String hash;
private String walletId;
private String txnType;
private Integer amount;
}
A Txn object transfers the amount from the withdrawId to the depositId.
What I am doing is adding up all the transactions (Txn objects) into a single amount, grouped by hash.
But for that I have to build two streams, one grouping by withdrawId and a second grouping by depositId, and then a third stream for merging them.
Grouping by withdrawId:
var withdrawStream = txnList.stream().collect(Collectors.groupingBy(Txn::getHash, LinkedHashMap::new,
Collectors.groupingBy(Txn::getWithdrawId, LinkedHashMap::new, Collectors.toList())))
.entrySet().stream().flatMap(hashEntrySet -> hashEntrySet.getValue().entrySet().stream()
.map(withdrawEntrySet ->
UserTxn.builder()
.hash(hashEntrySet.getKey())
.walletId(withdrawEntrySet.getKey())
.txnType("WITHDRAW")
.amount(withdrawEntrySet.getValue().stream().map(Txn::getAmount).reduce(0, Integer::sum))
.build()
));
Grouping by depositId:
var depositStream = txnList.stream().collect(Collectors.groupingBy(Txn::getHash, LinkedHashMap::new,
Collectors.groupingBy(Txn::getDepositId, LinkedHashMap::new, Collectors.toList())))
.entrySet().stream().flatMap(hashEntrySet -> hashEntrySet.getValue().entrySet().stream()
.map(withdrawEntrySet ->
UserTxn.builder()
.hash(hashEntrySet.getKey())
.walletId(withdrawEntrySet.getKey())
.txnType("DEPOSIT")
.amount(withdrawEntrySet.getValue().stream().map(Txn::getAmount).reduce(0, Integer::sum))
.build()
));
Then merging them again, using deposits - withdraws:
var res = Stream.concat(withdrawStream, depositStream).collect(Collectors.groupingBy(UserTxn::getHash, LinkedHashMap::new,
Collectors.groupingBy(UserTxn::getWalletId, LinkedHashMap::new, Collectors.toList())))
.entrySet().stream().flatMap(hashEntrySet -> hashEntrySet.getValue().entrySet().stream()
.map(withdrawEntrySet -> {
var depositAmount = withdrawEntrySet.getValue().stream().filter(userTxn -> userTxn.getTxnType().equals("DEPOSIT")).map(UserTxn::getAmount).reduce(0, Integer::sum);
var withdrawAmount = withdrawEntrySet.getValue().stream().filter(userTxn -> userTxn.getTxnType().equals("WITHDRAW")).map(UserTxn::getAmount).reduce(0, Integer::sum);
var totalAmount = depositAmount-withdrawAmount;
return UserTxn.builder()
.hash(hashEntrySet.getKey())
.walletId(withdrawEntrySet.getKey())
.txnType(totalAmount > 0 ? "DEPOSIT": "WITHDRAW")
.amount(totalAmount)
.build();
}
));
My question is: how can I do this in one stream?
Like by somehow grouping by withdrawId and depositId in one grouping,
something like
res = txnList.stream()
.collect(Collectors.groupingBy(Txn::getHash,
LinkedHashMap::new,
Collectors.groupingBy(Txn::getWithdrawId && Txn::getDepositId,
LinkedHashMap::new, Collectors.toList())))
.entrySet().stream().flatMap(hashEntrySet -> hashEntrySet.getValue().entrySet().stream()
.map(walletEntrySet ->
{
var totalAmount = walletEntrySet.getValue().stream().map(
txn -> Objects.equals(txn.getDepositId(), walletEntrySet.getKey())
? txn.getAmount() : (-txn.getAmount())).reduce(0, Integer::sum);
return UserTxn.builder()
.hash(hashEntrySet.getKey())
.walletId(walletEntrySet.getKey())
.txnType("WITHDRAW")
.amount(totalAmount)
.build();
}
));
TL;DR
For those who didn't understand the question, the OP wants to generate from each Txn instance (Txn probably stands for transaction) two pieces of data: hash and withdrawId + aggregated amount, and hash and depositId + aggregated amount.
And then they want to merge the two parts together (that's why they were creating the two streams and then concatenating them).
Note: it seems there's a logical flaw in the original code: the same amount gets associated with both withdrawId and depositId, which doesn't reflect that this amount has been taken from one account and transferred to another. Hence, it would make sense to use the amount as is for depositId, and negated (i.e. -1 * amount) for withdrawId.
Collectors.teeing()
You can make use of the Java 12 Collector teeing() and internally group stream elements into two distinct Maps:
the first one by grouping the stream data by withdrawId and hash,
and another one by grouping the data by depositId and hash.
teeing() expects three arguments: two downstream Collectors and a Function combining the results produced by those collectors.
As the downstreams of teeing() we can use a combination of the Collectors groupingBy() and summingInt(); the latter is needed to accumulate the integer amount of the transactions.
Note that there's no need to use a nested groupingBy(); instead we can create a custom type that holds the id and the hash (its equals/hashCode should be implemented based on the wrapped id and hash). A Java 16 record fits this role perfectly well:
public record HashWalletId(String hash, String walletId) {}
Instances of HashWalletId would be used as keys in both intermediate Maps.
The merger function of teeing() would combine the results of the two maps together.
The only thing left is to generate instances of UserTxn out of map entries.
List<Txn> txnList = // initializing the list
List<UserTxn> result = txnList.stream()
.collect(Collectors.teeing(
Collectors.groupingBy(
txn -> new HashWalletId(txn.getHash(), txn.getWithdrawId()),
Collectors.summingInt(txn -> -1 * txn.getAmount())), // because amount has been withdrawn
Collectors.groupingBy(
txn -> new HashWalletId(txn.getHash(), txn.getDepositId()),
Collectors.summingInt(Txn::getAmount)),
(map1, map2) -> {
map2.forEach((k, v) -> map1.merge(k, v, Integer::sum));
return map1;
}
))
.entrySet().stream()
.map(entry -> UserTxn.builder()
.hash(entry.getKey().hash())
.walletId(entry.getKey().walletId())
.txnType(entry.getValue() > 0 ? "DEPOSIT" : "WITHDRAW")
.amount(entry.getValue())
.build()
)
.toList(); // remove the terminal operation if your goal is to produce a Stream
I wouldn't use this in my code because I think it's not readable and will be very hard to change and manage in the future (SOLID).
But in case you still want this:
If I got your design right, hash is unique per user and a transaction will only have a deposit or a withdrawal; if so, this will work.
You could triple-group via collector chaining, like you did in your example.
You can create the txnType via a simple map function; just check which id is null.
Map<String, Map<String, Map<String, List<Txn>>>> groupBy =
txnList.stream()
.collect(Collectors.groupingBy(Txn::getHash, LinkedHashMap::new,
Collectors.groupingBy(Txn::getDepositId, LinkedHashMap::new,
Collectors.groupingBy(Txn::getWithdrawId, LinkedHashMap::new, Collectors.toList()))));
then use the logic from your example on this stream.
Related
This question is about Java Streams' groupingBy capability.
Suppose I have a class, WorldCup:
public class WorldCup {
int year;
Country champion;
// all-arg constructor, getter/setters, etc
}
and an enum, Country:
public enum Country {
Brazil, France, USA
}
and the following code snippet:
WorldCup wc94 = new WorldCup(1994, Country.Brazil);
WorldCup wc98 = new WorldCup(1998, Country.France);
List<WorldCup> wcList = new ArrayList<WorldCup>();
wcList.add(wc94);
wcList.add(wc98);
Map<Country, List<Integer>> championsMap = wcList.stream()
    .collect(Collectors.groupingBy(WorldCup::getCountry,
        Collectors.mapping(WorldCup::getYear, Collectors.toList())));
After running this code, championsMap will contain:
Brazil: [1994]
France: [1998]
Is there a succinct way to have this list include an entry for all of the values of the enum? What I'm looking for is:
Brazil: [1994]
France: [1998]
USA: []
There are several approaches you can take.
The map used for accumulating the stream data can be prepopulated with entries corresponding to every enum member. To access all existing enum members you can use the values() method or EnumSet.allOf().
This can be achieved using the three-arg version of collect() or through a custom collector created via Collector.of().
Map<Country, List<Integer>> championsMap = wcList.stream()
.collect(
() -> EnumSet.allOf(Country.class).stream() // supplier
.collect(Collectors.toMap(
Function.identity(),
c -> new ArrayList<>()
)),
(Map<Country, List<Integer>> map, WorldCup next) -> // accumulator
map.get(next.getCountry()).add(next.getYear()),
(left, right) -> // combiner
right.forEach((k, v) -> left.get(k).addAll(v))
);
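For completeness, the same prepopulation idea expressed through Collector.of() might look like this (a sketch only, assuming the same WorldCup/Country types as above):
Map<Country, List<Integer>> championsMap = wcList.stream()
    .collect(Collector.of(
        () -> {                                                   // supplier: map prepopulated with every enum member
            Map<Country, List<Integer>> map = new EnumMap<>(Country.class);
            for (Country c : Country.values()) {
                map.put(c, new ArrayList<>());
            }
            return map;
        },
        (map, wc) -> map.get(wc.getCountry()).add(wc.getYear()),  // accumulator
        (left, right) -> {                                        // combiner (used in parallel streams)
            right.forEach((k, v) -> left.get(k).addAll(v));
            return left;
        }));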
Another option is to add missing entries to the map after reduction of the stream has been finished.
For that we can use built-in collector collectingAndThen().
Map<Country, List<Integer>> championsMap = wcList.stream()
.collect(Collectors.collectingAndThen(
Collectors.groupingBy(WorldCup::getCountry,
Collectors.mapping(WorldCup::getYear,
Collectors.toList())),
map -> {
EnumSet.allOf(Country.class)
.forEach(country -> map.computeIfAbsent(country, k -> new ArrayList<>())); // if you're not going to mutate these lists - use Collections.emptyList()
return map;
}
));
I have a list of Transactions which I want to:
First group by year
Then group by type for every transaction in that year
Then convert the Transactions into Result objects holding the sum of all transaction values in each sub-group.
My code snippet looks like:
Map<Integer, Map<String, Result>> res = transactions.stream().collect(Collectors
.groupingBy(Transaction::getYear,
groupingBy(Transaction::getType),
reducing((a,b)-> new Result("YEAR_TYPE", a.getAmount() + b.getAmount()))
));
Transaction class:
class Transaction {
private int year;
private String type;
private int value;
}
Result class:
class Result {
private String group;
private int amount;
}
It doesn't seem to work. What should I do to fix it, making sure it works on parallel streams too?
In this context, Collectors.reducing would help you reduce two Transaction objects into a final object of the same type. In your existing code, what you could have done to map to the Result type was to use Collectors.mapping and then reduce.
But reducing without an identity produces an Optional-wrapped value to account for possible absence. Hence your code would have looked like:
Map<Integer, Map<String, Optional<Result>>> res = transactions.stream()
.collect(Collectors.groupingBy(Transaction::getYear,
Collectors.groupingBy(Transaction::getType,
Collectors.mapping(t -> new Result("YEAR_TYPE", t.getValue()),
Collectors.reducing((a, b) ->
new Result(a.getGroup(), a.getAmount() + b.getAmount()))))));
Thanks to Holger, one can simplify this further:
…and instead of Collectors.mapping(func, Collectors.reducing(op)) you
can use Collectors.reducing(id, func, op)
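For reference, a sketch of that simplified form (reusing the Result constructor from above); with an identity element, the map values are plain Result objects instead of Optional<Result>:
Map<Integer, Map<String, Result>> res = transactions.stream()
    .collect(Collectors.groupingBy(Transaction::getYear,
        Collectors.groupingBy(Transaction::getType,
            Collectors.reducing(
                new Result("YEAR_TYPE", 0),                            // identity (id)
                t -> new Result("YEAR_TYPE", t.getValue()),            // mapper (func)
                (a, b) -> new Result(a.getGroup(), a.getAmount() + b.getAmount()))))); // reduction (op)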
Instead of using a combination of Collectors.groupingBy and Collectors.reducing, you can transform the logic to use Collectors.toMap:
Map<Integer, Map<String, Result>> result = transactions.stream()
.collect(Collectors.groupingBy(Transaction::getYear,
Collectors.toMap(Transaction::getType,
t -> new Result("YEAR_TYPE", t.getValue()),
(a, b) -> new Result(a.getGroup(), a.getAmount() + b.getAmount()))));
This answer is best complemented by a follow-up read of Java Streams: Replacing groupingBy and reducing by toMap.
I would use a custom collector:
Collector<Transaction, Result, Result> resultCollector =
    Collector.of(Result::new,                                  // supplier: the result of this collector
        (a, b) -> { a.setAmount(a.getAmount() + b.getValue());
                    a.setGroup("YEAR_TYPE"); },                // accumulator: folds a Transaction into the Result
        (l, r) -> { l.setAmount(l.getAmount() + r.getAmount());
                    return l; });                              // combiner: merges two Result instances (used in parallel streams)
then you can use the collector to get the map:
Map<Integer, Map<String, Result>> collect = transactions.parallelStream().collect(
groupingBy(Transaction::getYear,
groupingBy(Transaction::getType, resultCollector) ) );
I have a stream of orders (the source being a list of orders).
Each order has a Customer, and a list of OrderLine.
What I'm trying to achieve is to have a map with the customer as the key, and all order lines belonging to that customer, in a simple list, as value.
What I managed right now returns me a Map<Customer, List<Set<OrderLine>>>, by doing the following:
orders
.collect(
Collectors.groupingBy(
Order::getCustomer,
Collectors.mapping(Order::getOrderLines, Collectors.toList())
)
);
I'm either looking to get a Map<Customer, List<OrderLine>> directly from the orders stream, or by somehow flattening the list from a stream of the Map<Customer, List<Set<OrderLine>>> that I got above.
You can simply use Collectors.toMap.
Something like
orders
    .stream()
    .collect(Collectors.toMap(
        Order::getCustomer,
        Order::getOrderLines,
        (v1, v2) -> {
            List<OrderLine> temp = new ArrayList<>(v1);
            temp.addAll(v2);
            return temp;
        }));
The third argument to toMap is the merge function. If you don't explicitly provide it and there is a duplicate key, an IllegalStateException is thrown while finishing the operation.
Another option would be to use a simple forEach call:
Map<Customer, List<OrderLine>> map = new HashMap<>();
orders.forEach(
o -> map.computeIfAbsent(
o.getCustomer(),
c -> new ArrayList<OrderLine>()
).addAll(o.getOrderLines())
);
You can then continue to use streams on the result with map.entrySet().stream().
For a groupingBy approach, try Flat-Mapping Collector for property of a Class using groupingBy
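On Java 9+, that usually comes down to the built-in Collectors.flatMapping; a sketch, assuming getOrderLines() returns a collection of OrderLine:
Map<Customer, List<OrderLine>> map = orders
    .collect(Collectors.groupingBy(
        Order::getCustomer,
        Collectors.flatMapping(o -> o.getOrderLines().stream(),   // flatten each order's lines into one list per customer
            Collectors.toList())));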
I'm a bit weak with the concepts of lambdas and streams, so there might be stuff that really doesn't make any sense, but I'll try to convey what I want to happen.
I have a class Invoice where there is an item name, price, and quantity.
I'm having to map the item name to the total cost (price * quantity).
Though it doesn't work, I hope it gives an idea of the problem I am having:
invoiceList.stream()
.map(Invoice::getDesc)
.forEach(System.out.println(Invoice::getPrice*Invoice::getQty));
I can already tell the forEach would not work, as the map goes to the description variable (getDesc) and not to the Invoice object, where I could use its methods to get the other variables.
So, if item=pencil, price=1, qty=12, the output I would want is:
Pencil 12.00
This would be done on multiple Invoice objects.
Also, I need to sort them by their total and filter those above a certain amount, e.g. 100. How am I supposed to do that after having placed them in a Map?
If all you want to do is print to the console, then it can be done as follows:
invoiceList.forEach(i -> System.out.println(i.getName() + " " + (i.getPrice() * i.getQty())));
If not then read on:
Using the toMap collector
Map<String, Double> result =
invoiceList.stream()
.collect(Collectors.toMap(Invoice::getName,
e -> e.getPrice() * e.getQuantity()));
This basically creates a map where the keys are the Invoice names and the values are the multiplication of the invoice price and quantity for that given Invoice.
Using the groupingBy collector
However, if there can be multiple invoices with the same name then you can use the groupingBy collector along with summingDouble as the downstream collector:
Map<String, Double> result =
invoiceList.stream()
.collect(groupingBy(Invoice::getName,
Collectors.summingDouble(e -> e.getPrice() * e.getQuantity())));
This groups the Invoices by their names and then, for each group, sums the result of e.getPrice() * e.getQuantity().
Update:
If you want the toMap version with the result filtered and then sorted by value ascending, it can be done as follows:
Map<String, Double> result = invoiceList.stream()
.filter(e -> e.getPrice() * e.getQuantity() > 100)
.sorted(Comparator.comparingDouble(e -> e.getPrice() * e.getQuantity()))
.collect(Collectors.toMap(Invoice::getName,
e -> e.getPrice() * e.getQuantity(),
(left, right) -> left,
LinkedHashMap::new));
Or with the groupingBy approach:
Map<String, Double> result =
invoiceList.stream()
.collect(groupingBy(Invoice::getName,
Collectors.summingDouble(e -> e.getPrice() * e.getQuantity())))
.entrySet()
.stream()
.filter(e -> e.getValue() > 100)
.sorted(Map.Entry.comparingByValue())
.collect(Collectors.toMap(Map.Entry::getKey,
Map.Entry::getValue, (left, right) -> left,
LinkedHashMap::new));
So first, here's the code that I think you were trying to write:
invoiceList.stream()
.forEach(invoice -> System.out.println(invoice.getPrice() * invoice.getQty()));
Now let's look at what's going on here:
Calling .stream() on your list creates an object that has methods available for doing operations on the contents of the list. forEach is one such method that invokes a function on each element. map is another such method, but it returns another stream, where the contents are the contents of your original stream, but each element is replaced by the return value of the function that you pass to map.
If you look inside the forEach call, you can see a lambda. This defines an anonymous function that will be called on each element of your invoiceList. The invoice variable to the left of the -> symbol is bound to each element of the stream, and the expression to the right is executed.
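Putting map and forEach together, here's a sketch of the "Pencil 12.00" output the question asks for (getter names assumed from the question's snippet):
invoiceList.stream()
    .map(inv -> String.format("%s %.2f", inv.getDesc(), (double) (inv.getPrice() * inv.getQty()))) // "name total"
    .forEach(System.out::println);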
I have a small snippet of code where I want to group the results by a combination of 2 properties of the type in the stream. After appropriate filtering, I do a map where I create an instance of a simple type that holds those 2 properties (in this case called AirportDay). Now I want to group them together and order them descending by the count. The trouble I am having is coming up with the correct arguments for the groupingBy method. Here is my code so far:
final int year = getYear();
final int limit = getLimit(10, 1, 100);
repository.getFlightStream(year)
.filter(f -> f.notCancelled())
.map(f -> new AirportDay(f.getOriginAirport(), f.getDate()))
.collect(groupingBy( ????? , counting())) // stuck here
.entrySet()
.stream()
.sorted(comparingByValue(reverseOrder()))
.limit(limit)
.forEach(entry -> {
AirportDay key = entry.getKey();
printf("%-30s\t%s\t%,10d\n",
key.getAirport().getName(),
key.getDate(),
entry.getValue()
);
});
My first instinct was to pass AirportDay::this but that obviously doesn't work...
I'd appreciate any assistance you can provide in coming up with a solution to the above problem.
-Tony
If you want to group by AirportDay, provide the function to create the key to groupingBy:
repository.getFlightStream(year)
.filter(f -> f.notCancelled())
.collect(groupingBy(f -> new AirportDay(f.getOriginAirport(), f.getDate()), counting()))
Note: The AirportDay class must implement sensible equals() and hashCode() methods for this to work.
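If AirportDay is under your control, a Java 16+ record is an easy way to get those methods for free. A sketch (the component types are assumptions, and note that record accessors would be airport() and date() rather than getAirport() and getDate()):
public record AirportDay(Airport airport, LocalDate date) {
    // equals(), hashCode() and toString() are generated from the components,
    // so flights with the same origin airport and date group together
}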