I'm a bit weak on the concepts of lambdas and streams, so some of this might not make sense, but I'll try to convey what I want to happen.
I have a class Invoice where there is an item name, price, and quantity.
I need to map the item name to the total cost (price * quantity).
The following doesn't work, but I hope it gives an idea of the problem I'm having:
invoiceList.stream()
.map(Invoice::getDesc)
.forEach(System.out.println(Invoice::getPrice*Invoice::getQty));
I can already tell the forEach would not work, because the map step has already replaced each Invoice with its description (getDesc), so I no longer have the Invoice object whose methods I could use to get the other fields.
So, if item=pencil, price=1, qty=12, the output I would want is:
Pencil 12.00
This would be done on multiple Invoice objects.
Also, I need to sort them by their total and filter those above a certain amount, e.g. 100. How am I supposed to do that once they have been placed in a Map?
If all you want to do is print to the console, then it can be done as follows:
invoiceList.forEach(i -> System.out.println(i.getName() + " " + (i.getPrice() * i.getQty())));
If not then read on:
Using the toMap collector
Map<String, Double> result =
invoiceList.stream()
.collect(Collectors.toMap(Invoice::getName,
e -> e.getPrice() * e.getQuantity()));
This basically creates a map where the keys are the invoice names and the values are the price multiplied by the quantity for that given Invoice. Note that this version throws an IllegalStateException if two invoices share the same name, because no merge function is supplied.
Using the groupingBy collector
However, if there can be multiple invoices with the same name then you can use the groupingBy collector along with summingDouble as the downstream collector:
Map<String, Double> result =
invoiceList.stream()
.collect(groupingBy(Invoice::getName,
Collectors.summingDouble(e -> e.getPrice() * e.getQuantity())));
This groups the Invoices by their names and then, for each group, sums the result of e.getPrice() * e.getQuantity().
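For instance, assuming a hypothetical Invoice(name, price, quantity) constructor, two entries for the same item would collapse into a single summed value:
List<Invoice> invoiceList = List.of(
        new Invoice("Pencil", 1.0, 12),  // total 12.0
        new Invoice("Pencil", 1.0, 6));  // total 6.0
// the groupingBy/summingDouble version above would produce {Pencil=18.0}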
Update:
If you want the toMap version with the result filtered and then sorted by value ascending, it can be done as follows:
Map<String, Double> result = invoiceList.stream()
        .filter(e -> e.getPrice() * e.getQuantity() > 100)                        // keep only totals above 100
        .sorted(Comparator.comparingDouble(e -> e.getPrice() * e.getQuantity()))  // ascending by total
        .collect(Collectors.toMap(Invoice::getName,
                e -> e.getPrice() * e.getQuantity(),
                (left, right) -> left,       // merge function: keep the first value on duplicate names
                LinkedHashMap::new));        // LinkedHashMap preserves the sorted encounter order
Or with the groupingBy approach:
Map<String, Double> result =
invoiceList.stream()
.collect(groupingBy(Invoice::getName,
Collectors.summingDouble(e -> e.getPrice() * e.getQuantity())))
.entrySet()
.stream()
.filter(e -> e.getValue() > 100)
.sorted(Map.Entry.comparingByValue())
.collect(Collectors.toMap(Map.Entry::getKey,
Map.Entry::getValue, (left, right) -> left,
LinkedHashMap::new));
So first, here's the code that I think you were trying to write:
invoiceList.stream()
.forEach(invoice -> System.out.println(invoice.getPrice() * invoice.getQty()));
Now let's look at what's going on here:
Calling .stream() on your list creates an object that has methods available for doing operations on the contents of the list. forEach is one such method that invokes a function on each element. map is another such method, but it returns another stream, whose contents are the contents of your original stream, with each element replaced by the return value of the function that you pass to map.
If you look inside the forEach call, you can see a lambda. This defines an anonymous function that will be called on each element of your invoiceList. The invoice variable to the left of the -> symbol is bound to each element of the stream, and the expression to the right is executed.
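For example, a minimal sketch (using the getDesc()/getPrice()/getQty() accessors from the question) showing how map and forEach fit together to produce output like "Pencil 12.0":
// map replaces each Invoice with a derived String; forEach then consumes each String
invoiceList.stream()
        .map(invoice -> invoice.getDesc() + " " + (invoice.getPrice() * invoice.getQty()))
        .forEach(System.out::println);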
I have an input object:
@Getter
class Txn {
    private String hash;
    private String withdrawId;
    private String depositId;
    private Integer amount;
    private String date;
}
and the output object is
@Builder
@Getter
class UserTxn {
    private String hash;
    private String walletId;
    private String txnType;
    private Integer amount;
}
A Txn object transfers the amount from the withdrawId to the depositId.
What I am doing is summing all the transactions (Txn objects) into a single amount, grouped by hash.
But for that I have to build two streams, one grouping by withdrawId and a second grouping by depositId, and then a third stream for merging them.
Grouping by withdrawId:
var withdrawStream = txnList.stream().collect(Collectors.groupingBy(Txn::getHash, LinkedHashMap::new,
Collectors.groupingBy(Txn::getWithdrawId, LinkedHashMap::new, Collectors.toList())))
.entrySet().stream().flatMap(hashEntrySet -> hashEntrySet.getValue().entrySet().stream()
.map(withdrawEntrySet ->
UserTxn.builder()
.hash(hashEntrySet.getKey())
.walletId(withdrawEntrySet.getKey())
.txnType("WITHDRAW")
.amount(withdrawEntrySet.getValue().stream().map(Txn::getAmount).reduce(0, Integer::sum))
.build()
));
Grouping by depositId:
var depositStream = txnList.stream().collect(Collectors.groupingBy(Txn::getHash, LinkedHashMap::new,
Collectors.groupingBy(Txn::getDepositId, LinkedHashMap::new, Collectors.toList())))
.entrySet().stream().flatMap(hashEntrySet -> hashEntrySet.getValue().entrySet().stream()
.map(withdrawEntrySet ->
UserTxn.builder()
.hash(hashEntrySet.getKey())
.walletId(withdrawEntrySet.getKey())
.txnType("DEPOSIT")
.amount(withdrawEntrySet.getValue().stream().map(Txn::getAmount).reduce(0, Integer::sum))
.build()
));
Then merging them again, using deposits - withdrawals:
var res = Stream.concat(withdrawStream, depositStream).collect(Collectors.groupingBy(UserTxn::getHash, LinkedHashMap::new,
Collectors.groupingBy(UserTxn::getWalletId, LinkedHashMap::new, Collectors.toList())))
.entrySet().stream().flatMap(hashEntrySet -> hashEntrySet.getValue().entrySet().stream()
.map(withdrawEntrySet -> {
var depositAmount = withdrawEntrySet.getValue().stream().filter(userTxn -> userTxn.txnType.equals("DEPOSIT")).map(UserTxn::getAmount).reduce(0, Integer::sum);
var withdrawAmount = withdrawEntrySet.getValue().stream().filter(userTxn -> userTxn.txnType.equals("WITHDRAW")).map(UserTxn::getAmount).reduce(0, Integer::sum);
var totalAmount = depositAmount-withdrawAmount;
return UserTxn.builder()
.hash(hashEntrySet.getKey())
.walletId(withdrawEntrySet.getKey())
.txnType(totalAmount > 0 ? "DEPOSIT": "WITHDRAW")
.amount(totalAmount)
.build();
}
));
My question is: how can I do this in one stream? That is, by somehow making the grouping by withdrawId and depositId a single grouping.
Something like this:
res = txnList.stream()
.collect(Collectors.groupingBy(Txn::getHash,
LinkedHashMap::new,
Collectors.groupingBy(Txn::getWithdrawId && Txn::getDepositId,
LinkedHashMap::new, Collectors.toList())))
.entrySet().stream().flatMap(hashEntrySet -> hashEntrySet.getValue().entrySet().stream()
.map(walletEntrySet ->
{
var totalAmount = walletEntrySet.getValue().stream().map(
txn -> Objects.equals(txn.getDepositId(), walletEntrySet.getKey())
? txn.getAmount() : (-txn.getAmount())).reduce(0, Integer::sum);
return UserTxn.builder()
.hash(hashEntrySet.getKey())
.walletId(walletEntrySet.getKey())
.txnType("WITHDRAW")
.amount(totalAmount)
.build();
}
));
TL;DR
For those who didn't understand the question, OP wants to generate from each Txn instance (Txn probably stands for transaction) two pieces of data: hash and withdrawId with an aggregated amount, and hash and depositId with an aggregated amount.
And then they want to merge the two parts together (for that reason they were creating the two streams and then concatenating them).
Note: it seems like there's a logical flaw in the original code: the same amount gets associated with both withdrawId and depositId, which doesn't reflect that the amount has been taken from one account and transferred to another. Hence, it would make sense to use the amount as is for depositId, and negated (i.e. -1 * amount) for withdrawId.
Collectors.teeing()
You can make use of the Java 12 Collector teeing() and internally group stream elements into two distinct Maps:
the first one by grouping the stream data by withdrawId and hash,
and another one by grouping the data by depositId and hash.
teeing() expects three arguments: two downstream Collectors and a Function that combines the results produced by those collectors.
As the downstream of teeing() we can use a combination of Collectors groupingBy() and summingInt(), the second one is needed to accumulate integer amount of the transaction.
Note that there's no need to use a nested groupingBy() collector; instead we can create a custom type that holds the id and hash (and its equals/hashCode should be implemented based on the wrapped id and hash). A Java 16 record fits this role perfectly well:
public record HashWalletId(String hash, String walletId) {}
Instances of HashWalletId would be used as Keys in both intermediate Maps.
The finisher function of teeing() would merge the results of the two Maps together.
The only thing left is to generate instances of UserTxn out of map entries.
List<Txn> txnList = // initializing the list
List<UserTxn> result = txnList.stream()
.collect(Collectors.teeing(
Collectors.groupingBy(
txn -> new HashWalletId(txn.getHash(), txn.getWithdrawId()),
Collectors.summingInt(txn -> -1 * txn.getAmount())), // because amount has been withdrawn
Collectors.groupingBy(
txn -> new HashWalletId(txn.getHash(), txn.getDepositId()),
Collectors.summingInt(Txn::getAmount)),
(map1, map2) -> {
map2.forEach((k, v) -> map1.merge(k, v, Integer::sum));
return map1;
}
))
.entrySet().stream()
.map(entry -> UserTxn.builder()
.hash(entry.getKey().hash())
.walletId(entry.getKey().walletId())
.txnType(entry.getValue() > 0 ? "DEPOSIT" : "WITHDRAW")
.amount(entry.getValue())
.build()
)
.toList(); // remove the terminal operation if your goal is to produce a Stream
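As a quick sanity check, here is a hypothetical input (assuming an all-args constructor on Txn, which the original class does not declare) and the entries the pipeline above would produce:
// one transfer of 100 from wallet "w1" to wallet "d1" under hash "h1"
List<Txn> txnList = List.of(new Txn("h1", "w1", "d1", 100, "2023-01-01"));
// expected result (entry order may vary):
//   UserTxn(hash=h1, walletId=w1, txnType=WITHDRAW, amount=-100)
//   UserTxn(hash=h1, walletId=d1, txnType=DEPOSIT,  amount=100)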
I wouldn't use this in my code because I think it's not readable and will be very hard to change and manage in the future (SOLID).
But in case you still want this: if I got your design right, the hash is unique per user and a transaction will only have a deposit or a withdrawal. If so, this will work.
You could triple groupBy via collectors chaining like you did in your example.
You can derive the txnType via a simple map function: just check which id is null.
Map<String, Map<String, Map<String, List<Txn>>>> groupBy =
txnList.stream()
.collect(Collectors.groupingBy(Txn::getHash, LinkedHashMap::new,
Collectors.groupingBy(Txn::getDepositId, LinkedHashMap::new,
Collectors.groupingBy(Txn::getWithdrawId, LinkedHashMap::new, Collectors.toList()))));
Then use the logic from your example on this stream.
I am trying to learn to work with streams and collectors. I know how to do it with multiple for loops, but I want to become a more efficient programmer.
Each project has a map committedHoursPerDay, where the key is the employee and the value is the number of hours as an Integer. I want to loop through all the projects' committedHoursPerDay maps, filter the entries where the committed hours per day are more than 7 (full time), and add each Employee who works full time to a set.
The code that I have written so far is this:
public Set<Employee> getFulltimeEmployees() {
// TODO
Map<Employee,Integer> fulltimeEmployees = projects.stream().filter(p -> p.getCommittedHoursPerDay().entrySet()
.stream()
.filter(map -> map.getValue() >= 8)
.collect(Collectors.toMap(map -> map.getKey(), map -> map.getValue())));
return fulltimeEmployees.keySet();
}
However, in the filter I can access the key and value of the map entry, but in the .collect(Collectors.toMap()) it doesn't recognize the map entry and only sees it as a lambda argument.
There is a one-to-many relationship here. You can first flatten the maps using flatMap and then apply the filter to the map entries.
Map<Employee,Integer> fulltimeEmployees = projects.stream()
.flatMap(p -> p.getCommittedHoursPerDay()
.entrySet()
.stream())
.filter(mapEntry -> mapEntry.getValue() >= 8)
.collect(Collectors.toMap(mapEntry -> mapEntry.getKey(), mapEntry -> mapEntry.getValue()));
The flatMap step returns a Stream<Map.Entry<Employee, Integer>>. The filter thus operates on a Map.Entry<Employee, Integer>.
You can also use method reference on the collect step as .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue))
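Since getFulltimeEmployees() ultimately only needs the key set, a minimal sketch (using the same projects/getCommittedHoursPerDay accessors as the question) could skip the intermediate Map entirely:
public Set<Employee> getFulltimeEmployees() {
    return projects.stream()
            .flatMap(p -> p.getCommittedHoursPerDay().entrySet().stream())
            .filter(entry -> entry.getValue() >= 8)
            .map(Map.Entry::getKey)
            .collect(Collectors.toSet());
}
Collecting straight to a Set also avoids the IllegalStateException that Collectors.toMap would throw if the same employee works full time on more than one project.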
I have a stream of orders (the source being a list of orders).
Each order has a Customer, and a list of OrderLine.
What I'm trying to achieve is to have a map with the customer as the key, and all order lines belonging to that customer, in a simple list, as value.
What I have managed so far returns a Map<Customer, List<Set<OrderLine>>>, by doing the following:
orders
.collect(
Collectors.groupingBy(
Order::getCustomer,
Collectors.mapping(Order::getOrderLines, Collectors.toList())
)
);
I'm either looking to get a Map<Customer, List<OrderLine>> directly from the orders stream, or to somehow flatten the list from a stream of the Map<Customer, List<Set<OrderLine>>> that I got above.
You can simply use Collectors.toMap.
Something like
orders.stream()
      .collect(Collectors.toMap(Order::getCustomer,
                                Order::getOrderLines,
                                (v1, v2) -> {                 // merge function for duplicate customers
                                    List<OrderLine> temp = new ArrayList<>(v1);
                                    temp.addAll(v2);
                                    return temp;
                                }));
The third argument to the toMap function is the merge function. If you don't explicitly provide one and there is a duplicate key, it will throw an IllegalStateException while finishing the operation.
Another option would be to use a simple forEach call:
Map<Customer, List<OrderLine>> map = new HashMap<>();
orders.forEach(
o -> map.computeIfAbsent(
o.getCustomer(),
c -> new ArrayList<OrderLine>()
).addAll(o.getOrderLines())
);
You can then continue to use streams on the result with map.entrySet().stream().
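For example, a hypothetical follow-up (the 10-line threshold is made up for illustration) that keeps only customers with more than 10 order lines in total:
List<Customer> bigCustomers = map.entrySet().stream()
        .filter(e -> e.getValue().size() > 10)
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());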
For a groupingBy approach, try Flat-Mapping Collector for property of a Class using groupingBy
Is there any way to create this hashmap inside the lambda function?
Map<SaleStatus, Long> sales = new HashMap<>();
saleStatusCounters.forEach(saleStatusCounter -> sales.put(saleStatusCounter.getStatus(), saleStatusCounter.getMatches()));
Something like this:
saleStatusCounters.stream()
.map(obj -> new HashMap<SaleStatus, Long>().put(obj.getStatus(), obj.getMatches()))
.collect(Collectors.toMap(???)));
Your code is fine as is. You can, nonetheless, use streams and Collectors.toMap to get the result you want:
Map<SaleStatus, Long> sales = saleStatusCounters.stream()
.collect(Collectors.toMap(obj -> obj.getStatus(), obj -> obj.getMatches()));
Note: this works as long as there are no collisions in the map, i.e. as long as you don't have two or more sale status counter objects with the same status.
In case you have more than one element in your list with the same status, you should use the overloaded version of Collectors.toMap that expects a merge function:
Map<SaleStatus, Long> sales = saleStatusCounters.stream()
.collect(Collectors.toMap(
obj -> obj.getStatus(),
obj -> obj.getMatches(),
Long::sum));
Here Long::sum is a BinaryOperator<Long> that merges two values that are mapped to the same key.
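The same merging behaviour can be seen directly with Map.merge (SHIPPED is a hypothetical SaleStatus constant):
Map<SaleStatus, Long> merged = new HashMap<>();
merged.merge(SaleStatus.SHIPPED, 3L, Long::sum); // {SHIPPED=3}
merged.merge(SaleStatus.SHIPPED, 5L, Long::sum); // {SHIPPED=8}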
The Map is a Map<String, List<User>> and the List is a List<User>. I want to use:
Map<String,List<User>> newMap = oldMap.stream()
.filter(u ->userList.stream()
.filter(ul ->ul.getName().equalsIgnoreCase(u.getKey()).count()>0))
.collect(Collectors.toMap(u.getKey, u.getVaule()));
but I can't collect it into the new Map. Why?
There are several problems with your code:
Map does not have a stream(): its entry set does, so you need to call entrySet() first.
There are a couple of misplaced parentheses.
The Collectors.toMap code is incorrect: you need to use the lambda u -> u.getKey() (or the method reference Map.Entry::getKey) and not just the expression u.getKey(). Also, you misspelled getValue().
This would be the corrected code:
Map<String, List<User>> newMap =
    oldMap.entrySet()
          .stream()
          .filter(u -> userList.stream()
                               .filter(ul -> ul.getName().equalsIgnoreCase(u.getKey()))
                               .count() > 0)
          .collect(Collectors.toMap(u -> u.getKey(), u -> u.getValue()));
But a couple of notes here:
You are filtering only to see if the count is greater than 0: instead you could just use anyMatch(predicate). This is a short-circuiting terminal operation that returns true if the predicate is true for at least one of the elements in the Stream. It also has the advantage that it might not need to process all the elements in the Stream (whereas filtering does); see the sketch after these notes.
It is inefficient, since you are traversing the userList every time you filter a Stream element. It would be better to use a Set, which has O(1) lookup (so first you would convert your userList into a set of names, transforming each user name to lower case, and then you would search that set for the lower-case key).
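Applying the first note, a minimal sketch of the same filter rewritten with anyMatch:
Map<String, List<User>> newMap =
    oldMap.entrySet()
          .stream()
          .filter(u -> userList.stream()
                               .anyMatch(ul -> ul.getName().equalsIgnoreCase(u.getKey())))
          .collect(Collectors.toMap(u -> u.getKey(), u -> u.getValue()));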
Applying the second note as well, this would be the more performant code:
Set<String> userNameSet = userList.stream()
        .map(u -> u.getName().toLowerCase(Locale.ROOT))
        .collect(Collectors.toSet());
Map<String,List<User>> newMap =
oldMap.entrySet()
.stream()
.filter(u -> userNameSet.contains(u.getKey().toLowerCase(Locale.ROOT)))
.collect(Collectors.toMap(u -> u.getKey(), u -> u.getValue()));
Perhaps you intended to create a Stream of the entry Set of the input Map.
Map<String, List<User>> newMap =
    oldMap.entrySet().stream()
          .filter(u -> userList.stream()
                               .filter(ul -> ul.getName().equalsIgnoreCase(u.getKey()))
                               .count() > 0)
          .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
This would create a Map that retains the entries of the original Map whose keys equal the name of at least one of the members of userList (ignoring case).