Java 8 streams with Map? - java

I have the following Map (each key is a String and each value is a List<Message>).
My map is like this:
1st entry: "MONDAY" -> [message1, message2, message3]
2nd entry: "TUESDAY" -> [message4, message5]
...
My goal is to change each message's content.
I was thinking about this:
map.entrySet().stream().peek(entry -> {
    entry.getValue().stream().peek(m -> m.setMessage(changeMessage()))
})
But I don't know how to finish it and do it properly.

Unfortunately, java-stream doesn't provide a straightforward way to change Map values without violating the side-effects principle:
Side-effects in behavioral parameters to stream operations are, in general, discouraged, as they can often lead to unwitting violations of the statelessness requirement, as well as other thread-safety hazards.
Here is a possible solution:
Map<String, List<Message>> result = map.entrySet().stream()
    .map(e -> {                                          // iterate entries
        e.setValue(e.getValue().stream()                 // set a new value
            .map(message -> {
                message.setMessage(changeMessage());     // .. update message
                return message;                          // .. use it
            })
            .collect(Collectors.toList()));              // return as a List
        return e;                                        // return an updated entry
    })
    .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)); // collect to a Map
However, Java provides the well-known for-each loop, which achieves your goal in a more direct and readable way:
for (List<Message> list : map.values()) {
    for (Message message : list) {
        message.setMessage(changeMessage());
    }
}

Iterate the map, change each element of the list, and put the collected list back under the same key of the map.
map.forEach((k, v) -> {
    map.put(k, v.stream().map(i -> i + "-changed").collect(Collectors.toList()));
});
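For what it's worth, Map.replaceAll is the purpose-built API for replacing every value in place; a minimal sketch doing the same transformation:
map.replaceAll((k, v) -> v.stream()                 // recompute each value from its key and old value
    .map(i -> i + "-changed")
    .collect(Collectors.toList()));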

If you just want to update the message of all messages, there is no need to use the whole entry set. You can just stream the values of your map and flat-map the lists into their messages. Then use forEach() to update them:
map.values().stream().flatMap(List::stream)
.forEach(m -> m.setMessage(changeMessage(m.getMessage())));
If you need the key to change the message, you can use this:
map.forEach((key, messages) -> messages.forEach(m ->
m.setMessage(changeMessage(key, m.getMessage()))));

Related

How to remove Keys that would cause Collisions before executing Collectors.toMap()

I have a stream of objects similar to the one in this previous question; however, instead of ignoring duplicate values, I would like to remove any such values from that stream beforehand and print them out.
For example, from this snippet:
Map<String, String> phoneBook = people.stream()
    .collect(toMap(Person::getName,
                   Person::getAddress));
If there were duplicate entries, it would cause a java.lang.IllegalStateException: Duplicate key error to be thrown.
The solution proposed in that question used a mergeFunction to keep the first entry if a collision was found.
Map<String, String> phoneBook =
    people.stream()
          .collect(Collectors.toMap(
              Person::getName,
              Person::getAddress,
              (address1, address2) -> {
                  System.out.println("duplicate key found!");
                  return address1;
              }
          ));
Instead of keeping the first entry, if there is a collision from a duplicate key in the stream, I want to know which value caused the collision and make sure that there are no occurrences of that value within the resulting map.
I.e. if "Bob" appeared three times in the stream, it should not be in the map even once.
In the process of creating that map, I would like to filter out any duplicate names and record them some way.
I want to make sure that when creating the map there can be no duplicate entries, and that there is some way to know which entries had duplicate keys in the incoming stream. I was thinking about using groupingBy and filter beforehand to find the duplicate keys, but I am not sure what the best way to do it is.
I would like to remove any values from that stream beforehand.
As @JimGarrison has pointed out, preprocessing the data doesn't make sense.
You can't know in advance whether a name is unique or not until the whole data set has been processed.
Another thing to consider: inside the stream pipeline (before the collector) you have no knowledge of what data has been encountered previously, because results of intermediate operations should not depend on any state.
In case you are thinking that streams act like a sequence of loops and therefore assume it's possible to preprocess stream elements before collecting them, that's not correct. Elements of the stream pipeline are processed lazily, one at a time, i.e. all the operations in the pipeline get applied to a single element, and each operation is applied only if it's needed (that's what laziness means).
For more information, have a look at this tutorial and the API documentation.
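A tiny sketch (element values are illustrative) that makes the one-element-at-a-time processing visible:
Stream.of("a", "b", "c")
      .peek(s -> System.out.println("pipeline sees " + s))
      .filter(s -> !s.equals("b"))
      .forEach(s -> System.out.println("terminal gets " + s));
// prints interleaved, not phase by phase:
// pipeline sees a / terminal gets a / pipeline sees b / pipeline sees c / terminal gets c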
Implementations
You can segregate unique values and duplicates in a single stream statement by utilizing Collectors.teeing() and a custom object that will contain separate collections of duplicated and unique entries of the phone book.
Since the primary function of this object is only to carry data, I've implemented it as a Java 16 record.
public record FilteredPhoneBook(Map<String, String> uniquePersonsAddressByName,
List<String> duplicatedNames) {}
Collector teeing() (available since Java 12) expects three arguments: two collectors and a function that merges the results produced by both collectors.
The map generated by groupingBy() in conjunction with counting() is meant to determine the duplicated names.
Since duplicates can't be filtered out up front, toMap(), which is used as the second collector, will create a map containing all the names (keeping the first address for each name).
When both collectors hand their results to the merger function, it takes care of removing the duplicates.
public static FilteredPhoneBook getFilteredPhoneBook(Collection<Person> people) {
    return people.stream()
        .collect(Collectors.teeing(
            Collectors.groupingBy(Person::getName, Collectors.counting()), // intermediate Map<String, Long>
            Collectors.toMap(                                              // intermediate Map<String, String>
                Person::getName,
                Person::getAddress,
                (left, right) -> left),
            (Map<String, Long> countByName, Map<String, String> addressByName) -> {
                countByName.values().removeIf(count -> count == 1);     // removing unique names
                addressByName.keySet().removeAll(countByName.keySet()); // removing all duplicates
                return new FilteredPhoneBook(addressByName, new ArrayList<>(countByName.keySet()));
            }
        ));
}
Another way to address this problem is to utilize a Map<String, Boolean> as the means of discovering duplicates, as @Holger has suggested.
The first collector will be written using toMap(). It will associate true with a key that has been encountered only once, and its mergeFunction will assign the value false if at least one duplicate is found.
The rest of the logic remains the same.
public static FilteredPhoneBook getFilteredPhoneBook(Collection<Person> people) {
    return people.stream()
        .collect(Collectors.teeing(
            Collectors.toMap(            // intermediate Map<String, Boolean>
                Person::getName,
                person -> true,          // not proved to be a duplicate and initially considered unique
                (left, right) -> false), // is a duplicate
            Collectors.toMap(            // intermediate Map<String, String>
                Person::getName,
                Person::getAddress,
                (left, right) -> left),
            (Map<String, Boolean> isUniqueByName, Map<String, String> addressByName) -> {
                isUniqueByName.values().removeIf(Boolean::booleanValue);   // removing unique names
                addressByName.keySet().removeAll(isUniqueByName.keySet()); // removing all duplicates
                return new FilteredPhoneBook(addressByName, new ArrayList<>(isUniqueByName.keySet()));
            }
        ));
}
main() - demo
public static void main(String[] args) {
    List<Person> people = List.of(
        new Person("Alise", "address1"),
        new Person("Bob", "address2"),
        new Person("Bob", "address3"),
        new Person("Carol", "address4"),
        new Person("Bob", "address5")
    );
    FilteredPhoneBook filteredPhoneBook = getFilteredPhoneBook(people);
    System.out.println("Unique entries:");
    filteredPhoneBook.uniquePersonsAddressByName().forEach((k, v) -> System.out.println(k + " : " + v));
    System.out.println("\nDuplicates:");
    filteredPhoneBook.duplicatedNames().forEach(System.out::println);
}
Output
Unique entries:
Alise : address1
Carol : address4
Duplicates:
Bob
You can't know which keys are duplicates until you have processed the entire input stream. Therefore, any pre-processing step has to make a complete pass of the input before your main logic, which is wasteful.
An alternate approach could be:
Use the merge function to insert a dummy value for the offending key
At the same time, insert the offending key into a Set<K>
After the input stream is processed, iterate over the Set<K> to remove offending keys from the primary map.
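Since the merge function only sees the two colliding values and not the key, one way to realize this idea is to mark collisions with a unique sentinel value and sweep the marked keys out afterwards. A minimal sketch (the sentinel and helper name are mine, not part of the answer):
static final String DUPLICATE = new String("duplicate"); // unique reference used as the dummy value

static Map<String, String> makePhoneBook(Collection<Person> people) {
    Map<String, String> phoneBook = people.stream()
        .collect(Collectors.toMap(
            Person::getName,
            Person::getAddress,
            (first, second) -> DUPLICATE));      // collision: mark the offending key
    Set<String> duplicateNames = new HashSet<>();
    phoneBook.entrySet().removeIf(e -> {
        if (e.getValue() == DUPLICATE) {         // reference check: a real address can never match
            duplicateNames.add(e.getKey());      // record the offending key
            return true;                         // .. and remove it from the primary map
        }
        return false;
    });
    duplicateNames.forEach(n -> System.out.println("duplicate key found: " + n));
    return phoneBook;
}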
In mathematical terms you want to partition your grouped aggregate and handle both parts separately.
Map<String, String> makePhoneBook(Collection<Person> people) {
    Map<Boolean, List<Person>> phoneBook = people.stream()
        .collect(Collectors.groupingBy(Person::getName))
        .values()
        .stream()
        .collect(Collectors.partitioningBy(list -> list.size() > 1,
                 Collectors.mapping(r -> r.get(0),
                                    Collectors.toList())));

    // handle duplicates
    phoneBook.get(true)
             .forEach(x -> System.out.println("duplicate found " + x));

    return phoneBook.get(false).stream()
        .collect(Collectors.toMap(
            Person::getName,
            Person::getAddress));
}

Transforming a Map in Java using Java 8 Streams - Segregating Map based on the Value of the Map

I have a Map in Java:
Map<String, Set<EntityObject>> itemGroupsMap
class EntityObject {
    boolean isParent;
    String groupId;
}
Now I want to transform itemGroupsMap into:
Map<EntityObject, Set<EntityObject>> parentChildMap
The logic is: in each entry of itemGroupsMap, the value Set contains one EntityObject that is the parent (EntityObject.isParent == true). So for each entry in the map I have to find the parent EntityObject, make it the key of parentChildMap, and put only the rest of the EntityObjects as the Set for that key.
I have tried using 2 for-each loops, and since I am pretty new to Java 8 I was looking at how I can reduce my code using streams.
I looked at Collectors.partitioningBy but it creates 2 maps with false/true keys. I don't really need that.
Any suggestions?
Maybe something like this:
Map<EntityObject, Set<EntityObject>> newMap = itemGroupsMap.entrySet().stream()
.collect(Collectors.toMap(
entry -> entry.getValue().stream().filter(value -> value.isParent).findFirst().orElse(null),
Map.Entry::getValue
));
Note: this code doesn't handle entries of the initial map whose set contains no parent. The key mapper entry.getValue().stream().filter(value -> value.isParent).findFirst().orElse(null) uses null as the key when there is no parent in the set, so if more than one such set is possible, toMap will fail on the duplicate null key with an IllegalStateException.
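If parentless sets can occur, one way to sidestep that (my sketch, not from the answer) is to filter them out up front:
Map<EntityObject, Set<EntityObject>> newMap = itemGroupsMap.entrySet().stream()
    .filter(entry -> entry.getValue().stream().anyMatch(value -> value.isParent)) // skip parentless sets
    .collect(Collectors.toMap(
        entry -> entry.getValue().stream().filter(value -> value.isParent).findFirst().get(),
        Map.Entry::getValue
    ));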
Here are several approaches, assuming I understand what you want. If the key of the map is EntityObject, then it needs to override equals and hashCode to take care of duplicate keys, which are not allowed. This could be done by using the id as part of the equals implementation (which isn't obvious from the class provided).
Map<EntityObject, Set<EntityObject>> parentChildMap =
    itemGroupsMap.entrySet().stream()
        .collect(Collectors.toMap(
            // get the single parent object and use it as a key
            e -> e.getValue().stream()
                  .filter(o -> o.isParent)
                  .findFirst().get(),
            // get the value, remove the parent key, convert to a set
            // and use it as the new value
            e -> e.getValue().stream()
                  .filter(o -> !o.isParent)
                  .collect(Collectors.toSet())));
There may be a better way to do it with streams but I prefer the following as it is straightforward.
// create the map
Map<EntityObject, Set<EntityObject>> parentChildMap = new HashMap<>();
for (Entry<String, Set<EntityObject>> e : itemGroupsMap.entrySet()) {
    // get the set of EntityObjects
    Set<EntityObject> eoSet = e.getValue();
    // get the parent one
    EntityObject parentObject = eoSet.stream()
        .filter(eo -> eo.isParent).findFirst().get();
    // remove the parent one from the set
    eoSet.remove(parentObject);
    // add the parentObject and the set to the map
    parentChildMap.put(parentObject, eoSet);
}
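One caveat worth noting (my observation, not part of the answer): eoSet.remove(parentObject) mutates the set stored in itemGroupsMap itself. If the source map must stay intact, a defensive copy avoids that:
for (Entry<String, Set<EntityObject>> e : itemGroupsMap.entrySet()) {
    // copy the set so the source map's sets are not modified
    Set<EntityObject> eoSet = new HashSet<>(e.getValue());
    EntityObject parentObject = eoSet.stream()
        .filter(eo -> eo.isParent).findFirst().get();
    eoSet.remove(parentObject);
    parentChildMap.put(parentObject, eoSet);
}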

Flatten a Map<Integer, List<String>> to Map<String, Integer> with stream and lambda

I would like to flatten a Map which associates an Integer key to a list of String, without losing the key mapping.
I am curious as to whether it is possible and useful to do so with streams and lambdas.
We start with something like this:
Map<Integer, List<String>> mapFrom = new HashMap<>();
Let's assume that mapFrom is populated somewhere, and looks like:
1: a,b,c
2: d,e,f
etc.
Let's also assume that the values in the lists are unique.
Now, I want to "unfold" it to get a second map like:
a: 1
b: 1
c: 1
d: 2
e: 2
f: 2
etc.
I could do it like this (or very similarly, using foreach):
Map<String, Integer> mapTo = new HashMap<>();
for (Map.Entry<Integer, List<String>> entry : mapFrom.entrySet()) {
    for (String s : entry.getValue()) {
        mapTo.put(s, entry.getKey());
    }
}
Now let's assume that I want to use lambda instead of nested for loops. I would probably do something like this:
Map<String, Integer> mapTo = mapFrom.entrySet().stream().map(e -> {
e.getValue().stream().?
// Here I can iterate on each List,
// but my best try would only give me a flat map for each key,
// that I wouldn't know how to flatten.
}).collect(Collectors.toMap(/*A String value*/,/*An Integer key*/))
I also gave a try to flatMap, but I don't think that it is the right way to go, because although it helps me get rid of the dimensionality issue, I lose the key in the process.
In a nutshell, my two questions are:
Is it possible to use streams and lambdas to achieve this?
Is it useful (performance, readability) to do so?
You need to use flatMap to flatten the values into a new stream, but since you still need the original keys for collecting into a Map, you have to map to a temporary object holding key and value, e.g.
Map<String, Integer> mapTo = mapFrom.entrySet().stream()
.flatMap(e->e.getValue().stream()
.map(v->new AbstractMap.SimpleImmutableEntry<>(e.getKey(), v)))
.collect(Collectors.toMap(Map.Entry::getValue, Map.Entry::getKey));
The Map.Entry is a stand-in for the nonexistent tuple type, any other type capable of holding two objects of different type is sufficient.
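With newer Java, a small record can serve as that tuple; a sketch (the KV record is my own stand-in, not part of the answer):
record KV(Integer key, String value) {} // minimal two-field carrier

Map<String, Integer> mapTo = mapFrom.entrySet().stream()
    .flatMap(e -> e.getValue().stream().map(v -> new KV(e.getKey(), v)))
    .collect(Collectors.toMap(KV::value, KV::key));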
An alternative not requiring these temporary objects, is a custom collector:
Map<String, Integer> mapTo = mapFrom.entrySet().stream().collect(
HashMap::new, (m,e)->e.getValue().forEach(v->m.put(v, e.getKey())), Map::putAll);
This differs from toMap in overwriting duplicate keys silently, whereas toMap without a merger function will throw an exception if there is a duplicate key. Basically, this custom collector is a parallel-capable variant of
Map<String, Integer> mapTo = new HashMap<>();
mapFrom.forEach((k, l) -> l.forEach(v -> mapTo.put(v, k)));
But note that this task wouldn't benefit from parallel processing, even with a very large input map. Only if there were additional computationally intensive tasks within the stream pipeline that could benefit from SMP would there be a chance of getting a benefit from parallel streams. So perhaps the concise, sequential Collection API solution is preferable.
You should use flatMap as follows:
mapFrom.entrySet().stream()
    .flatMap(e -> e.getValue().stream()
        .map(s -> new SimpleImmutableEntry<>(e.getKey(), s)))
    .collect(Collectors.toMap(Map.Entry::getValue, Map.Entry::getKey));
SimpleImmutableEntry is a nested class in AbstractMap.
Hope this does it in the simplest way. :))
mapFrom.forEach((key, values) -> values.forEach(value -> mapTo.put(value, key)));
This should work. Please note that you may lose some of the original keys: a key whose List is empty produces no element in flatMap and will not appear as a value in the new map.
Map<Integer, List<String>> mapFrom = new HashMap<>();
Map<String, Integer> mapTo = mapFrom.entrySet().stream()
    .flatMap(integerListEntry -> integerListEntry.getValue()
        .stream()
        .map(listItem -> new AbstractMap.SimpleEntry<>(listItem, integerListEntry.getKey())))
    .collect(Collectors.toMap(AbstractMap.SimpleEntry::getKey, AbstractMap.SimpleEntry::getValue));
Same as the previous answers with Java 9:
Map<String, Integer> mapTo = mapFrom.entrySet()
.stream()
.flatMap(entry -> entry.getValue()
.stream()
.map(s -> Map.entry(s, entry.getKey())))
.collect(toMap(Entry::getKey, Entry::getValue));

Can TreeMap be used to retrieve all key/value pairs above a given key value?

I have a piece of code that maintains a map of revisions done to samples with a given ID:
private Map<Long, SampleId> sampleRevisionMap = new HashMap<>();
While maintaining this, other threads can call in to get all changes made since the given revision number. To find the relevant IDs I do
public Set<SampleId> getRevisionIDs(long clientRevision) {
return sampleRevisionMap.entrySet().stream()
.filter(k -> k.getKey() > clientRevision)
.map(entry -> entry.getValue())
.collect(Collectors.toSet());
}
In short, give me all values with key above a threshold.
Is there a better way to do this employing an ordered map, i.e. java.util.TreeMap?
Yes, you can do it by calling tailMap:
public Collection<SampleId> getRevisionIDs(long clientRevision) {
return sampleRevisionMap.tailMap(clientRevision).values();
}
The above includes the value mapped to clientRevision as well. If you want everything above it, use clientRevision+1 instead.
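If the field is declared as a NavigableMap (the interface TreeMap implements), the exclusive bound can be expressed directly instead of adding 1; a sketch assuming the map is switched from HashMap to TreeMap:
private final NavigableMap<Long, SampleId> sampleRevisionMap = new TreeMap<>();

public Collection<SampleId> getRevisionIDs(long clientRevision) {
    // inclusive = false: strictly greater than clientRevision,
    // matching the original filter k.getKey() > clientRevision
    return sampleRevisionMap.tailMap(clientRevision, false).values();
}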

How to use Map filter in list by Java 8 lambda

Map is Map<String, List<User>> and List is List<User>. I want to use
Map<String,List<User>> newMap = oldMap.stream()
.filter(u ->userList.stream()
.filter(ul ->ul.getName().equalsIgnoreCase(u.getKey()).count()>0))
.collect(Collectors.toMap(u.getKey, u.getVaule()));
It can't produce the new Map. Why?
There are several problems with your code:
Map does not have a stream(): its entry set does, so you need to call entrySet() first.
There are a couple of misplaced parentheses.
The Collectors.toMap code is incorrect: you need to use the lambda u -> u.getKey() (or the method reference Map.Entry::getKey) and not just the expression u.getKey(). Also, you misspelled getValue().
This would be the corrected code:
Map<String, List<User>> newMap =
    oldMap.entrySet()
          .stream()
          .filter(u -> userList.stream()
                               .filter(ul -> ul.getName().equalsIgnoreCase(u.getKey())).count() > 0)
          .collect(Collectors.toMap(u -> u.getKey(), u -> u.getValue()));
But a couple of notes here:
You are filtering only to see if the count is greater than 0: instead you could just use anyMatch(predicate). This is a short-circuiting terminal operation that returns true if the predicate is true for at least one of the elements in the Stream. It also has the advantage that it might not process all the elements in the Stream (whereas filtering does).
It is inefficient, since you are traversing the userList every time you need to filter a Stream element. It would be better to use a Set, which has O(1) lookup (so first you would convert your userList into a userNameSet, transforming the usernames to lower case, and then you would search this set for the lower-cased key).
This would be a more performant code:
Set<String> userNameSet = userList.stream()
    .map(u -> u.getName().toLowerCase(Locale.ROOT))
    .collect(toSet());

Map<String, List<User>> newMap =
    oldMap.entrySet()
          .stream()
          .filter(u -> userNameSet.contains(u.getKey().toLowerCase(Locale.ROOT)))
          .collect(Collectors.toMap(u -> u.getKey(), u -> u.getValue()));
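For reference, the anyMatch() variant mentioned in the first note would look like this (a sketch combining the corrected code with that suggestion):
Map<String, List<User>> newMap =
    oldMap.entrySet()
          .stream()
          .filter(u -> userList.stream()
                               .anyMatch(ul -> ul.getName().equalsIgnoreCase(u.getKey())))
          .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));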
Perhaps you intended to create a Stream of the entry Set of the input Map.
Map<String, List<User>> newMap =
    oldMap.entrySet().stream()
          .filter(u -> userList.stream()
                               .filter(ul -> ul.getName().equalsIgnoreCase(u.getKey())).count() > 0)
          .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
This would create a Map that retains the entries of the original Map whose keys equal the name of at least one of the members of userList (ignoring case).
