I would like to use the Streams API to process a call log and calculate the total billing amount per phone number. Here's code that achieves it with a hybrid approach, but I would like to use a fully functional approach:
List<CallLog> callLogs = Arrays.stream(S.split("\n"))
.map(CallLog::new)
.sorted(Comparator.comparingInt(callLog -> callLog.phoneNumber))
.collect(Collectors.toList());
for (int i = 0; i < callLogs.size() - 1; i++) {
    if (callLogs.get(i).phoneNumber == callLogs.get(i + 1).phoneNumber) {
        callLogs.get(i).billing += callLogs.get(i + 1).billing;
        callLogs.remove(i + 1);
        i--; // stay on the same index in case more entries share this phone number
    }
}
You can use Collectors.groupingBy to group the CallLog objects by phoneNumber, with Collectors.summingInt to sum the billing of the grouped elements:
Map<Integer, Integer> billingPerPhoneNumber = Arrays.stream(S.split("\n"))
.map(CallLog::new)
.collect(Collectors.groupingBy(CallLog::getPhoneNumber, Collectors.summingInt(CallLog::getBilling)));
Alternatively, you can use Collectors.toMap with a merge function:
Map<Integer, Integer> result = Arrays.stream(S.split("\n"))
.map(CallLog::new)
.sorted(Comparator.comparingInt(CallLog::phoneNumber))
.collect(Collectors.toMap(
c -> c.phoneNumber(),
c -> c.billing(),
(a, b) -> a+b
));
And if you want to have a List<CallLog> callLogs as a result:
List<CallLog> callLogs = Arrays.stream(S.split("\n"))
.map(CallLog::new)
.collect(Collectors.toMap(
c -> c.phoneNumber(),
c -> c.billing(),
(a, b) -> a+b
))
.entrySet()
.stream()
.map(entry -> toCallLog(entry.getKey(), entry.getValue()))
.sorted(Comparator.comparingInt(CallLog::phoneNumber))
.collect(Collectors.toList());
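The toCallLog helper used above is not shown; a minimal sketch of it, assuming CallLog has a (phoneNumber, billing) constructor, could be:
private static CallLog toCallLog(int phoneNumber, int billing) {
    return new CallLog(phoneNumber, billing); // hypothetical constructor, adapt to your CallLog class
}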
You can save yourself the sorting -> collecting to a list -> iterating the list for adjacent values if you instead do the following:
Create all CallLog objects.
Merge them by the phoneNumber field.
Combine the billing fields every time.
Return the already merged items.
This can be done using Collectors.toMap(Function, Function, BinaryOperator) where the third parameter is the merge function that defines how items with identical keys would be combined:
Collection<CallLog> callLogs = Arrays.stream(S.split("\n"))
.map(CallLog::new)
.collect(Collectors.toMap( //a collector that will produce a map
callLog -> callLog.phoneNumber, //using the phoneNumber field as the key to group
x -> x, //the item itself as the value
(a, b) -> { //and a merge function that returns an object with combined billing
a.billing += b.billing;
return a;
}))
.values(); //just return the values from that map
In the end, you would have CallLog items with unique phoneNumber fields whose billing field is equal to the combination of all billings of the previously duplicate phoneNumbers.
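For a quick check you could print the merged items (assuming the same public fields used above):
callLogs.forEach(c -> System.out.println(c.phoneNumber + ": " + c.billing));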
What you are trying to do is to remove duplicate phone numbers while adding up their billing. The one thing streams are incompatible with is remove operations. So how can we do what you need without remove?
Well, instead of sorting, I would group by phone number and then map each group of call logs into a single CallLog with the billing already accumulated, as sketched below.
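A minimal sketch of that idea, assuming CallLog exposes getPhoneNumber()/getBilling() accessors (as in the answer below) and a (phoneNumber, billing) constructor:
List<CallLog> merged = Arrays.stream(S.split("\n"))
        .map(CallLog::new)
        .collect(Collectors.groupingBy(CallLog::getPhoneNumber)) // Map<Integer, List<CallLog>>
        .values()
        .stream()
        .map(group -> new CallLog(
                group.get(0).getPhoneNumber(),                        // one CallLog per phone number
                group.stream().mapToInt(CallLog::getBilling).sum()))  // with the group's billing summed
        .collect(Collectors.toList());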
You could group the billing amount by phoneNumber, like VLAZ said. An example implementation could look something like this:
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;
public class Demo {
public static void main(String[] args) {
final String s = "555123456;12.00\n"
+ "555123456;3.00\n"
+ "555123457;1.00\n"
+ "555123457;2.00\n"
+ "555123457;5.00";
final Map<Integer, Double> map = Arrays.stream(s.split("\n"))
.map(CallLog::new)
.collect(Collectors.groupingBy(CallLog::getPhoneNumber, Collectors.summingDouble(CallLog::getBilling)));
map.forEach((key, value) -> System.out.printf("%d: %.2f\n", key, value));
}
private static class CallLog {
private final int phoneNumber;
private final double billing;
public CallLog(int phoneNumber, double billing) {
this.phoneNumber = phoneNumber;
this.billing = billing;
}
public CallLog(String s) {
final String[] strings = s.split(";");
this.phoneNumber = Integer.parseInt(strings[0]);
this.billing = Double.parseDouble(strings[1]);
}
public int getPhoneNumber() {
return phoneNumber;
}
public double getBilling() {
return billing;
}
}
}
which produces the following output:
555123456: 15.00
555123457: 8.00
I have a fun puzzler. Say I have a list of String values:
["A", "B", "C"]
Then I have to query another system for a Map<User, Long> of users, where each user has a key attribute corresponding to one of the values in the list, mapped to a count:
{name="Annie", key="A"} -> 23
{name="Paul", key="C"} -> 16
I need to return a new List<UserCount> with a count of each key. So I expect:
{key="A", count=23},
{key="B", count=0},
{key="C", count=16}
But I'm having a hard time computing the result when one of the keys has no corresponding count in the map.
I know that map.computeIfAbsent() does what I need, but how can I apply it based on what's on the contents of the original list?
I think I need to stream over the original list, then apply the compute? So I have:
valuesList.stream()
.map(it -> valuesMap.computeIfAbsent(it.getKey(), k -> 0L))
...
But here's where I get stuck. Can anyone provide any insight as to how I accomplish what I need?
You can create an auxiliary Map<String, Long> which will associate each string key with the count and then generate a list of UserCount based on it.
Example:
public record User(String name, String key) {}
public record UserCount(String key, long count) {}
public static void main(String[] args) {
List<String> keys = List.of("A", "B", "C");
Map<User, Long> countByUser =
Map.of(new User("Annie", "A"), 23L,
new User("Paul", "C"), 16L));
Map<String, Long> countByKey = countByUser.entrySet().stream()
.collect(Collectors.groupingBy(entry -> entry.getKey().key(),
Collectors.summingLong(Map.Entry::getValue)));
List<UserCount> userCounts = keys.stream()
.map(key -> new UserCount(key, countByKey.getOrDefault(key, 0L)))
.collect(Collectors.toList());
System.out.println(userCounts);
}
Output
[UserCount[key=A, count=23], UserCount[key=B, count=0], UserCount[key=C, count=16]]
Regarding the idea of utilizing computeIfAbsent() with a stream - this approach is wrong and discouraged by the documentation of the Stream API.
Sure, you can use computeIfAbsent() to solve this problem, but not in conjunction with streams. It's not a good idea to create a stream that operates via side effects (at least without compelling reason).
And I guess you don't even need Java 8's computeIfAbsent(); plain and simple putIfAbsent() will be sufficient.
The following code will produce the same result:
Map<String, Long> countByKey = new HashMap<>();
countByUser.forEach((k, v) -> countByKey.merge(k.key(), v, Long::sum));
keys.forEach(k -> countByKey.putIfAbsent(k, 0L));
List<UserCount> userCounts = keys.stream()
.map(key -> new UserCount(key, countByKey.getOrDefault(key, 0L)))
.collect(Collectors.toList());
And instead of applying forEach() on the map and the list, you can write two enhanced for loops if this option looks convoluted, as sketched below.
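For reference, a sketch of that loop-based variant (same logic as the forEach() calls above, using the User record's key() accessor):
Map<String, Long> countByKey = new HashMap<>();
for (Map.Entry<User, Long> entry : countByUser.entrySet()) {
    // accumulate the counts per key, summing when several users share a key
    countByKey.merge(entry.getKey().key(), entry.getValue(), Long::sum);
}
for (String key : keys) {
    // make sure every requested key is present, defaulting to 0
    countByKey.putIfAbsent(key, 0L);
}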
Another educational and parallel-friendly version would be to gather the logic in one place and build your own custom accumulator and combiner for the three-argument collect():
public static void main(String[] args) {
Map<User, Long> countByUser =
Map.of(new User("Alice", "A"), 23L,
new User("Bob", "C"), 16L);
List<String> keys = List.of("A", "B", "C");
UserCountAggregator userCountAggregator =
countByUser.entrySet()
.parallelStream()
.collect(UserCountAggregator::new,
UserCountAggregator::accumulator,
UserCountAggregator::combiner);
List<UserCount> userCounts = userCountAggregator.getUserCounts(keys);
System.out.println(userCounts);
}
Output
[UserCount(key=A, count=23), UserCount(key=B, count=0), UserCount(key=C, count=16)]
User and UserCount classes with Lombok's @Value:
@Value
class User {
private String name;
private String key;
}
@Value
class UserCount {
private String key;
private long count;
}
And the UserCountAggregator which contains your custom accumulator and combiner
class UserCountAggregator {
private Map<String, Long> keyCounts = new HashMap<>();
public void accumulator(Map.Entry<User, Long> userLongEntry) {
keyCounts.put(userLongEntry.getKey().getKey(),
keyCounts.getOrDefault(userLongEntry.getKey().getKey(), 0L)
+ userLongEntry.getValue());
}
public void combiner(UserCountAggregator other) {
other.keyCounts
.forEach((key, value) -> keyCounts.merge(key, value, Long::sum));
}
public List<UserCount> getUserCounts(List<String> keys) {
return keys.stream()
.map(key -> new UserCount(key, keyCounts.getOrDefault(key, 0L)))
.collect(Collectors.toList());
}
}
final Map<User,Long> valuesMap = ...
// First, map keys to counts (assuming keys are unique for each user)
final Map<String,Long> keyToCountMap = valuesMap.entrySet().stream()
.collect(Collectors.toMap(e -> e.getKey().key, e -> e.getValue()));
final List<UserCount> list = valuesList.stream()
.map(key -> new UserCount (key, keyToCountMap.getOrDefault(key, 0L)))
.collect(Collectors.toList());
Now I have an object:
public class Room{
private long roomId;
private long roomGroupId;
private String roomName;
... getter
... setter
}
I want to sort a list of rooms by 'roomId', but when room objects have a 'roomGroupId' greater than zero and share the same value, they should be placed next to each other.
Let me give you some example:
input:
[{"roomId":3,"roomGroupId":0},
{"roomId":6,"roomGroupId":0},
{"roomId":1,"roomGroupId":1},
{"roomId":2,"roomGroupId":0},
{"roomId":4,"roomGroupId":1}]
output:
[{"roomId":6,"roomGroupId":0},
{"roomId":4,"roomGroupId":1},
{"roomId":1,"roomGroupId":1},
{"roomId":3,"roomGroupId":0},
{"roomId":2,"roomGroupId":0}]
As shown above, the list is sorted by 'roomId', but 'roomId 4' and 'roomId 1' are kept together because they have the same roomGroupId.
This does not have an easy, clean solution (though I may be wrong).
You can do it like this:
// Key the TreeMap by roomId (or the largest roomId of a group) so the final order is driven by roomId.
TreeMap<Long, List<Room>> roomMap = new TreeMap<>();
rooms.stream()
    .collect(Collectors.groupingBy(Room::getRoomGroupId))
    .forEach((key, value) -> {
        if (key.equals(0L)) {
            // ungrouped rooms (roomGroupId == 0): each room stands alone under its own roomId
            value.forEach(room -> roomMap.put(room.getRoomId(), Arrays.asList(room)));
        } else {
            // grouped rooms: keep the whole group together under the largest roomId of the group,
            // sorted descending within the group
            roomMap.put(
                Collections.max(value, Comparator.comparing(Room::getRoomId)).getRoomId(),
                value.stream()
                    .sorted(Comparator.comparing(Room::getRoomId).reversed())
                    .collect(Collectors.toList())
            );
        }
    });
// Walk the map from the highest key down and flatten the groups into a single list.
List<Room> result = roomMap.descendingMap()
    .entrySet()
    .stream()
    .flatMap(entry -> entry.getValue().stream())
    .collect(Collectors.toList());
If you're in Java 8, you can use code like this
Collections.sort(roomList, Comparator.comparing(Room::getRoomGroupId)
.thenComparing(Room::getRoomId));
If not, you should use a Comparator class:
class SortRoom implements Comparator<Room>
{
    public int compare(Room a, Room b)
    {
        // the Room getters return primitive longs, so compare with Long.compare
        if (a.getRoomGroupId() == b.getRoomGroupId()) {
            return Long.compare(a.getRoomId(), b.getRoomId());
        }
        return Long.compare(a.getRoomGroupId(), b.getRoomGroupId());
    }
}
and then use it like this
Collections.sort(roomList, new SortRoom());
I want to compare getA (e.g. 123) & getB (e.g. 456) and find duplicate records.
P1: getA = 000123000, getB = 456
P2: getA = 000123001, getB = 456
I tried the code below, which finds duplicates based on the getA & getB combination:
Map<Object, Boolean> findDuplicates = productsList.stream()
    .collect(Collectors.toMap(
        cm -> Arrays.asList(cm.getB(), cm.getA().substring(3, cm.getA().length() - 3)),
        cm -> false,
        (a, b) -> true));
Now I am trying to remove the record whose cm.getA value is the lowest, but I am unable to use a comparator here:
productsList.removeIf(cm -> cm.getA() && findDuplicates .get(Arrays.asList(cm.getB(),cm.getA().substring(3, cm.getA().length() - 3))));
Any help would be appreciated.
You can do it in two steps:
Function<Product,Object> dupKey = cm ->
Arrays.asList(cm.getB(), cm.getA().substring(3, cm.getA().length() - 3));
Map<Object, Boolean> duplicates = productsList.stream()
.collect(Collectors.toMap(dupKey, cm -> false, (a, b) -> true));
Map<Object,Product> minDuplicates = productsList.stream()
.filter(cm -> duplicates.get(dupKey.apply(cm)))
.collect(Collectors.toMap(dupKey, Function.identity(),
BinaryOperator.minBy(Comparator.comparing(Product::getA))));
productsList.removeAll(minDuplicates.values());
First, it identifies the keys which have duplicates; then, it collects the minimum for each key, skipping elements that have no duplicates. Finally, it removes the selected values.
In principle, this can be done in one step, but then it requires an object holding both pieces of information: whether there were duplicates for a particular key, and which of them has the minimum value:
BinaryOperator<Product> min = BinaryOperator.minBy(Comparator.comparing(Product::getA));
Set<Product> minDuplicates = productsList.stream()
.collect(Collectors.collectingAndThen(
Collectors.toMap(dupKey, cm -> Map.entry(false,cm),
(a, b) -> Map.entry(true, min.apply(a.getValue(), b.getValue()))),
m -> m.values().stream().filter(Map.Entry::getKey)
.map(Map.Entry::getValue).collect(Collectors.toSet())));
productsList.removeAll(minDuplicates);
This uses Map.Entry instances to hold two values of different types. To keep the code readable, it uses Java 9's Map.entry(K,V) factory method. When support for Java 8 is required, it's recommended to create your own factory method to keep the code simple:
static <K, V> Map.Entry<K, V> entry(K k, V v) {
return new AbstractMap.SimpleImmutableEntry<>(k, v);
}
then use that method instead of Map.entry.
The logic stays the same as in the first variant: it maps values to (false, element) pairs and merges them to (true, minimum element), but now in one go. The filtering has to be done afterwards to skip the false entries, then map to the minimum element and collect into a Set.
Then, using removeAll is the same.
Instead of a map from the duplicate key to a boolean, you can also use a map from the duplicate key to a TreeSet. This makes it a single step. Since the elements in a TreeSet always remain sorted, you don't need an extra sorting step to find the minimum value.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.TreeSet;
import java.util.stream.Collectors;
public class ProductDups {
public static void main(String[] args) {
List<Product> productsList = new ArrayList<>();
productsList.add(new Product("000123000", "456"));
productsList.add(new Product("000123001", "456"));
productsList.add(new Product("000124003", "567"));
productsList.add(new Product("000124002", "567"));
List<Product> minDuplicates = productsList.stream()
.collect(
Collectors.toMap(
p -> Arrays.asList(p.getB(),
p.getA().substring(3, p.getA().length() - 3)),
p -> {
TreeSet<Product> ts = new TreeSet<>(Comparator.comparing(Product::getA));
ts.addAll(Arrays.asList(p));
return ts;
},
(a, b) -> {
a.addAll(b);
return a;
}
)
)
.entrySet()
.stream()
.filter(e -> e.getValue().size() > 1)
.map(e -> e.getValue().first())
.collect(Collectors.toList());
System.out.println(minDuplicates);
}
}
class Product {
String a;
String b;
public Product(String a, String b) {
this.a = a;
this.b = b;
}
public String getA() {
return a;
}
public void setA(String a) {
this.a = a;
}
public String getB() {
return b;
}
public void setB(String b) {
this.b = b;
}
@Override
public String toString() {
return "Product{" +
"a='" + a + '\'' +
", b='" + b + '\'' +
'}';
}
}
You can convert the string to an array list and then loop through it, comparing it with the other array list, if that is what you are trying to do.
I have a simple User class with a String and an int property.
I would like to add two Lists of users this way:
if the Strings are equal, then the numbers should be added and that would be the new value.
The new list should include all users with proper values.
Like this:
List1: { [a:2], [b:3] }
List2: { [b:4], [c:5] }
ResultList: {[a:2], [b:7], [c:5]}
User definition:
public class User {
private String name;
private int comments;
}
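(The snippets below also assume User has a constructor plus getters and a setter for comments, roughly like this sketch:)
public User(String name, int comments) {
    this.name = name;
    this.comments = comments;
}
public String getName() { return name; }
public int getComments() { return comments; }
public void setComments(int comments) { this.comments = comments; }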
My method:
public List<User> addTwoList(List<User> first, List<User> sec) {
List<User> result = new ArrayList<>();
for (int i=0; i<first.size(); i++) {
Boolean bsin = false;
Boolean isin = false;
for (int j=0; j<sec.size(); j++) {
isin = false;
if (first.get(i).getName().equals(sec.get(j).getName())) {
int value= first.get(i).getComments() + sec.get(j).getComments();
result.add(new User(first.get(i).getName(), value));
isin = true;
bsin = true;
}
if (!isin) {result.add(sec.get(j));}
}
if (!bsin) {result.add(first.get(i));}
}
return result;
}
But it adds a whole lot of things to the list.
This is better done via the toMap collector:
Collection<User> result = Stream
.concat(first.stream(), second.stream())
.collect(Collectors.toMap(
User::getName,
u -> new User(u.getName(), u.getComments()),
(l, r) -> {
l.setComments(l.getComments() + r.getComments());
return l;
}))
.values();
First, concatenate both the lists into a single Stream<User> via Stream.concat.
Second, we use the toMap collector to merge users that happen to have the same Name and get back a result of Collection<User>.
If you strictly want a List<User>, then pass the result into the ArrayList constructor, i.e. List<User> resultSet = new ArrayList<>(result);
Kudos to @davidxxx: you could collect to a list directly from the pipeline and avoid creating an intermediate variable:
List<User> result = Stream
.concat(first.stream(), second.stream())
.collect(Collectors.toMap(
User::getName,
u -> new User(u.getName(), u.getComments()),
(l, r) -> {
l.setComments(l.getComments() + r.getComments());
return l;
}))
.values()
.stream()
.collect(Collectors.toList());
You have to use an intermediate map to merge users from both lists by summing their comments.
One way is with streams, as shown in Aomine's answer. Here's another way, without streams:
Map<String, Integer> map = new LinkedHashMap<>();
list1.forEach(u -> map.merge(u.getName(), u.getComments(), Integer::sum));
list2.forEach(u -> map.merge(u.getName(), u.getComments(), Integer::sum));
Now, you can create a list of users, as follows:
List<User> result = new ArrayList<>();
map.forEach((name, comments) -> result.add(new User(name, comments)));
This assumes User has a constructor that accepts name and comments.
EDIT: As suggested by @davidxxx, we could improve the code by factoring out the first part:
BiConsumer<List<User>, Map<String, Integer>> action = (list, map) ->
list.forEach(u -> map.merge(u.getName(), u.getComments(), Integer::sum));
Map<String, Integer> map = new LinkedHashMap<>();
action.accept(list1, map);
action.accept(list2, map);
This refactoring avoids repetition and keeps the code DRY.
There is a pretty direct way using Collectors.groupingBy and Collectors.reducing which doesn't require setters; that is its biggest advantage, since you can keep User immutable:
Collection<Optional<User>> d = Stream
.of(first, second) // start with Stream<List<User>>
.flatMap(List::stream) // flatting to the Stream<User>
.collect(Collectors.groupingBy( // Collecting to Map<String, List<User>>
User::getName, // by name (the key)
// and reducing the list into a single User
Collectors.reducing((l, r) -> new User(l.getName(), l.getComments() + r.getComments()))))
.values(); // return values from Map<String, List<User>>
Unfortunately, the result is a Collection<Optional<User>>, because the reducing collector returns an Optional (the result might not be present after all). You can stream the values and use map() to get rid of the Optional, or use Collectors.collectingAndThen*:
Collection<User> d = Stream
.of(first, second) // start with Stream<List<User>>
.flatMap(List::stream) // flatting to the Stream<User>
.collect(Collectors.groupingBy( // Collecting to Map<String, List<User>>
User::getName, // by name (the key)
Collectors.collectingAndThen( // reduce the list into a single User
Collectors.reducing((l, r) -> new User(l.getName(), l.getComments() + r.getComments())),
Optional::get))) // and extract from the Optional
.values();
* Thanks to @Aomine
As an alternative, a fairly straightforward and efficient approach:
stream the elements
collect them into a Map<String, Integer> to associate each name to the sum of comments (int)
stream the entries of the collected map to create the List of User.
Alternatively, for the third step you could apply a finishing transformation to the map collector with collectingAndThen(groupingBy()..., m -> ...), but I don't always find that very readable, and here we can do without it.
It would give:
List<User> users =
Stream.concat(first.stream(), second.stream())
.collect(groupingBy(User::getName, summingInt(User::getComments)))
.entrySet()
.stream()
.map(e -> new User(e.getKey(), e.getValue()))
.collect(toList());
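Note that the snippet above relies on static imports of the collectors; something along these lines is assumed:
import static java.util.stream.Collectors.groupingBy;
import static java.util.stream.Collectors.summingInt;
import static java.util.stream.Collectors.toList;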
I have a list of the class A, which itself contains a List.
public class A {
public double val;
public String id;
public List<String> names = new ArrayList<String>();
public A(double v, String ID, String name)
{
val = v;
id = ID;
names.add(name);
}
static public List<A> createAnExample()
{
List<A> items = new ArrayList<A>();
items.add(new A(8.0,"x1","y11"));
items.add(new A(12.0, "x2", "y21"));
items.add(new A(24.0,"x3","y31"));
items.get(0).names.add("y12");
items.get(1).names.add("y11");
items.get(1).names.add("y31");
items.get(2).names.add("y11");
items.get(2).names.add("y32");
items.get(2).names.add("y33");
return items;
    }
}
The aim is to sum the averaged val (val divided by the number of names) per name over the list. I added the code in the main function below, using Java 8 streams.
My question is: how can I rewrite it in a more elegant way, without using the second list and the for loops?
static public void main(String[] args) {
List<A> items = createAnExample();
List<A> items2 = new ArrayList<A>();
for (int i = 0; i < items.size(); i++) {
List<String> names = items.get(i).names;
double v = items.get(i).val / names.size();
String itemid = items.get(i).id;
for (String n : names) {
A item = new A(v, itemid, n);
items2.add(item);
}
}
Map<String, Double> x = items2.stream().collect(Collectors.groupingBy(item ->
item.names.isEmpty() ? "NULL" : item.names.get(0), Collectors.summingDouble(item -> item.val)));
for (Map.Entry entry : x.entrySet())
System.out.println(entry.getKey() + " --> " + entry.getValue());
}
You can do it with flatMap:
x = items.stream()
.flatMap(a -> a.names.stream()
.map(n -> new AbstractMap.SimpleEntry<>(n, a.val / a.names.size()))
).collect(groupingBy(
Map.Entry::getKey, summingDouble(Map.Entry::getValue)
));
If you find yourself dealing with problems like these often, consider a static method to create a Map.Entry:
static<K,V> Map.Entry<K,V> entry(K k, V v) {
return new AbstractMap.SimpleImmutableEntry<>(k,v);
}
Then you would have a less verbose .map(n -> entry(n, a.val/a.names.size()))
In my free StreamEx library, which extends the standard Stream API, there are special operations which help build such complex maps. Using StreamEx, your problem can be solved like this:
Map<String, Double> x = StreamEx.of(createAnExample())
.mapToEntry(item -> item.names, item -> item.val / item.names.size())
.flatMapKeys(List::stream)
.grouping(Collectors.summingDouble(v -> v));
Here mapToEntry creates a stream of map entries (a so-called EntryStream) where the keys are the lists of names and the values are the averaged vals. Next we use flatMapKeys to flatten the keys, leaving the values as is (so we have a stream of Entry<String, Double>). Finally we group them together, summing the values for repeating keys.