I have the following Object and a Map:
class MyObject {
    String name;
    Long priority;
    Foo bar;
}

Map<String, List<MyObject>> anotherHashMap;
I want to convert the Map into another Map. The key of the result map is the key of the input map. The value of the result map is the property "name" of MyObject, ordered by priority.
The ordering and extracting the name are not the problem, but I could not put the results into the result map. I do it the old Java 7 way, but it would be nice if it were possible to use the Stream API.
Map<String, List<String>> result = new HashMap<>();
for (String identifier : anotherHashMap.keySet()) {
List<String> generatedList = anotherHashMap.get(identifier).stream()...;
result.put(identifier, generatedList);
}
Does anyone have an idea? I tried this, but got stuck:
anotherHashMap.entrySet().stream().collect(Collectors.toMap(..., ...));
Map<String, List<String>> result = anotherHashMap
.entrySet().stream() // Stream over entry set
.collect(Collectors.toMap( // Collect final result map
Map.Entry::getKey, // Key mapping is the same
e -> e.getValue().stream() // Stream over list
.sorted(Comparator.comparingLong(MyObject::getPriority)) // Sort by priority
.map(MyObject::getName) // Apply mapping to MyObject
.collect(Collectors.toList())) // Collect mapping into list
);
Essentially, you stream over the entry set and collect it into a new map. To compute each value in the new map, you stream over the List<MyObject> from the old map, sort it, and apply a mapping and collection function to it. In this case I used MyObject::getName as the mapping and collected the resulting names into a list.
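To make that concrete, here is a self-contained sketch of the same transformation. MyObject is reduced to a record with just the two fields used here (an assumption; the real class has more fields), and the sample data is invented:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class SortAndRename {
    // Hypothetical stand-in for MyObject with only the fields this example needs.
    record MyObject(String name, long priority) {
        String getName() { return name; }
        long getPriority() { return priority; }
    }

    public static void main(String[] args) {
        Map<String, List<MyObject>> anotherHashMap = Map.of(
                "page1", List.of(new MyObject("b", 2), new MyObject("a", 1)),
                "page2", List.of(new MyObject("d", 2), new MyObject("c", 1)));

        Map<String, List<String>> result = anotherHashMap
                .entrySet().stream()
                .collect(Collectors.toMap(
                        Map.Entry::getKey,                      // key is unchanged
                        e -> e.getValue().stream()
                                .sorted(Comparator.comparingLong(MyObject::getPriority))
                                .map(MyObject::getName)         // extract the name
                                .collect(Collectors.toList())));

        System.out.println(result); // each list is sorted by priority; map order may vary
    }
}
```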
For generating another map, we can have something like the following:
HashMap<String, List<String>> result = anotherHashMap.entrySet().stream()
        .collect(Collectors.toMap(
                elem -> elem.getKey(),
                elem -> elem.getValue())); // can further process the value here
Above I am recreating the map again, but you can process the key or the value according to your needs.
Map<String, List<String>> result = anotherHashMap.entrySet().stream().collect(Collectors.toMap(
Map.Entry::getKey,
e -> e.getValue().stream()
.sorted(comparing(MyObject::getPriority))
.map(MyObject::getName)
.collect(Collectors.toList())));
Similar to Mike Kobit's answer, but sorting is applied in the correct place (i.e. the value list is sorted, not the map entries), and the more concise statically imported Comparator.comparing is used to obtain the Comparator for sorting.
I have a list of String.
I want to store each string as key and the string's length as value in a Map (say HashMap).
I'm not able to achieve it.
List<String> ls = Arrays.asList("James", "Sam", "Scot", "Elich");
Map<String,Integer> map = new HashMap<>();
Function<String, Map<String, Integer>> fs = new Function<>() {
@Override
public Map<String, Integer> apply(String s) {
map.put(s,s.length());
return map;
}
};
Map<String, Integer> nmap = ls
.stream()
.map(fs)
.collect(Collectors.toMap()); //Lost here
System.out.println(nmap);
All strings are unique.
There's no need to wrap each and every string with its own map, as the function you've created does.
Instead, you need to provide the proper arguments while calling Collectors.toMap():
keyMapper - a function responsible for extracting a key from the stream element.
valueMapper - a function that generates a value from the stream element.
Since you need the stream element itself to be the key, you can use Function.identity(), which is more descriptive than the lambda str -> str but does precisely the same thing.
Map<String,Integer> lengthByStr = ls.stream()
.collect(Collectors.toMap(
Function.identity(), // extracting a key
String::length // extracting a value
));
In case the source list might contain duplicates, you need to provide a third argument, a mergeFunction, which is responsible for resolving duplicates.
Map<String,Integer> lengthByStr = ls.stream()
.collect(Collectors.toMap(
Function.identity(), // key
String::length, // value
(left, right) -> left // resolving duplicates
));
You said there would be no duplicate Strings, but if one slips by, you can use distinct() (which internally uses a set) to ensure it doesn't cause issues.
a -> a is shorthand for using the stream value itself: essentially a lambda that returns its argument.
distinct() removes any duplicate strings
Map<String, Integer> result = names.stream().distinct()
.collect(Collectors.toMap(a -> a, String::length));
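A quick sketch contrasting the two approaches above on a list that does contain a duplicate (sample data invented); both end up with the same three mappings:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class DuplicateDemo {
    public static void main(String[] args) {
        List<String> ls = List.of("James", "Sam", "Sam", "Scot");

        // Option 1: a merge function resolves the collision ("Sam" appears twice).
        Map<String, Integer> viaMerge = ls.stream()
                .collect(Collectors.toMap(Function.identity(), String::length, (l, r) -> l));

        // Option 2: distinct() removes the duplicate before collecting.
        Map<String, Integer> viaDistinct = ls.stream()
                .distinct()
                .collect(Collectors.toMap(a -> a, String::length));

        System.out.println(viaMerge);    // three entries; iteration order may vary
        System.out.println(viaDistinct); // same mappings as viaMerge
    }
}
```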
If you want the length of a single String, you can get it directly with someString.length(). But suppose you want a map of all the Strings keyed by a particular length. You can do that using Collectors.groupingBy(), which by default collects the values sharing a key into a List. In this case, the grouping key is the length of the String.
use the length of the string as a key.
the value will be a List<String> to hold all strings that match that length.
List<String> names = List.of("James", "Sam", "Scot",
"Elich", "lucy", "Jennifer","Bob", "Joe", "William");
Map<Integer, List<String>> lengthMap = names.stream()
.distinct()
.collect(Collectors.groupingBy(String::length));
lengthMap.entrySet().forEach(System.out::println);
prints
3=[Sam, Bob, Joe]
4=[Scot, lucy]
5=[James, Elich]
7=[William]
8=[Jennifer]
How can I transform a Map<String, List<String>> into a flattened map where the values become the keys?
I.e. from Map<String, List<String>> to Map<String, String> (flattened by values).
For an example.
Source map:
<"fuel", ["BMW", "Honda"]>,
<"electric", ["Tesla", "Nio"]>
Flattened map:
[
"BMW" : "fuel",
"Honda" : "fuel",
"Tesla" : "electric",
"Nio" : "electric"
]
It is not possible to have multiple keys that are identical according to equals/hashCode within the same map, because that contradicts the idea of the map data structure. Every key must be unique; you can't store multiple entries with the same key in a Map.
But you can create a list of Map.Entry objects.
For that, you need to create a stream over the entry set and then flatten each entry, creating a new entry for each car brand (each element of the flattened value list) paired with its key:
Map<String, List<String>> cars =
Map.of("fuel", List.of("BMW", "Honda"),
"electric", List.of("Tesla", "Nio"));
List<Map.Entry<String, String>> entries =
cars.entrySet().stream()
.flatMap(entry -> entry.getValue().stream()
.map(brand -> Map.entry(entry.getKey(), brand)))
.collect(Collectors.toList());
System.out.println(entries);
output
[electric=Tesla, electric=Nio, fuel=BMW, fuel=Honda]
Or maybe your intention was to create a Map<String, String> that allows retrieving the car type based on the brand (like "BMW" : "fuel", "Honda" : "fuel"); then it makes sense.
The overall approach will be similar to the previous one:
create a stream over the entry set;
flatten each entry using flatMap() by turning an element in the value-list into a new entry;
collect elements to a map with Collectors.toMap().
But there's a caveat: all values have to be unique, or there must be a rule on how to combine/discard car types.
I'll assume that all brands in the map are unique; otherwise, the code below will fail (to deal with collisions, Collectors.toMap() requires a third argument, a mergeFunction).
Map<String, String> typeByBrand =
cars.entrySet().stream()
.flatMap(entry -> entry.getValue().stream()
.map(brand -> Map.entry(brand, entry.getKey())))
.collect(Collectors.toMap(Map.Entry::getKey,
Map.Entry::getValue));
System.out.println(typeByBrand);
output
{Nio=electric, Tesla=electric, BMW=fuel, Honda=fuel}
I have a List:
class DummyClass {
List<String> rname;
String name;
}
The values in my List look like this:
list.add(new DummyClass(Arrays.asList("a", "b"), "apple"));
list.add(new DummyClass(Arrays.asList("a", "b"), "banana"));
list.add(new DummyClass(Arrays.asList("a", "c"), "orange"));
list.add(new DummyClass(null, "apple"));
I want to convert the above List into a Map<String, Set<String>>, where each key is an element of rname and the value is the Set of name fields.
{
  "a" -> ["apple", "orange", "banana"],
  "b" -> ["apple", "banana"],
  "c" -> ["orange"]
}
I am trying to use a Java stream and am facing a NullPointerException. Can someone please guide me?
Map<String, Set<String>> map =
list.stream()
.collect(Collectors.groupingBy(DummyClass::rname,
Collectors.mapping(DummyClass::getName,
Collectors.toSet())));
I am not able to process each element of the rname list (e.g. Arrays.asList("a","b")) in the stream.
There is a flaw here:
Collectors.groupingBy(DummyClass::rname,
Collectors.mapping(DummyClass::getName,
Collectors.toSet())))
where I am processing the entire list together, rather than each element. Shall I use another stream?
You need to do a filter: many of the utility methods that construct collections no longer allow null, e.g. Map.of or the groupingBy you have above.
You can filter first, or map each null to a replacement string and then group by.
Map<String, Set<String>> map =
list.stream().filter(v-> v.getName() != null)
.collect(Collectors.groupingBy(DummyClass::rname,
Collectors.mapping(DummyClass::getName,
Collectors.toSet())));
Or if you don't want to drop null values, do a map and produce a key that all null names can be grouped under, something like:
Map<String, Set<String>> map =
list.stream().map(v-> Map.entry(v.getName() == null? "null": v.getName(), v))
.collect(Collectors.groupingBy(Map.Entry::getKey,
Collectors.mapping(Map.Entry::getKey,
Collectors.toSet())));
The groupingBy that I have above needs to be changed as it now has a Map.Entry rather than your desired type.
I'm writing this on a mobile...without an editor so will leave that part to you :)
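Since the answer leaves the final groupingBy adjustment to the reader, here is one possible way to complete it, as a self-contained sketch. DummyClass is reduced to a record (an assumption), the NPE-triggering null rname is filtered out, and flatMap turns each element of rname into its own (rname, name) pair before grouping:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class GroupByRname {
    // Hypothetical minimal version of the asker's class.
    record DummyClass(List<String> rname, String name) {}

    public static void main(String[] args) {
        List<DummyClass> list = List.of(
                new DummyClass(Arrays.asList("a", "b"), "apple"),
                new DummyClass(Arrays.asList("a", "b"), "banana"),
                new DummyClass(Arrays.asList("a", "c"), "orange"),
                new DummyClass(null, "apple"));

        // Skip entries whose rname list is null (the NPE source),
        // then flatten each rname element into its own (rname, name) entry.
        Map<String, Set<String>> map = list.stream()
                .filter(d -> d.rname() != null)
                .flatMap(d -> d.rname().stream()
                        .map(r -> Map.entry(r, d.name())))
                .collect(Collectors.groupingBy(Map.Entry::getKey,
                        Collectors.mapping(Map.Entry::getValue, Collectors.toSet())));

        System.out.println(map);
    }
}
```

Note that the entry with a null rname is dropped entirely here, matching the expected output in the question.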
I am trying to achieve the following
public class MyObject {
Map<String, String> myMap;
}
public class MyOtherObject {
List<MyObject> myObjects;
}
I want to be able to do the following
for (MyObject myObject : myObjects) {
    Map<String, String> newMap = new Hashmap<String, String>();
    for (Map.Entry<String, String> entry : myObject.getMyMap().entrySet()) {
        newMap.put(entry.getKey() + "a", entry.getValue());
    }
}
How do I avoid this nested loop?
Regarding your current solution, you'll need to move the newMap declaration outside the loop, otherwise you're creating a new map at each iteration and it would not contain the result you expect.
You also have a typo in your map instantiation: change Hashmap to HashMap.
As for avoiding nested loops, you can create a stream over the entry set and then simply perform a reduction operation on the stream using collect:
Map<String, String> copy = myObject.getMyMap().entrySet()
.stream()
.collect(Collectors.toMap(p -> p.getKey() + "a", Map.Entry::getValue));
If you want to copy all of the mappings from the specified map to another map, then, as svasa suggested in the comments, you can do:
newMap.putAll(myObject.getMyMap().entrySet().stream().collect(Collectors.toMap(p -> p.getKey() + "a", Map.Entry::getValue)));
Use putAll:
Amap.putAll(Bmap);
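Note that putAll alone copies the entries unchanged (it won't add the "a" suffix to the keys). A minimal sketch of its semantics, with invented data:

```java
import java.util.HashMap;
import java.util.Map;

public class PutAllDemo {
    public static void main(String[] args) {
        Map<String, String> aMap = new HashMap<>(Map.of("k1", "old", "k2", "v2"));
        Map<String, String> bMap = Map.of("k1", "new", "k3", "v3");

        // putAll copies every mapping from bMap; existing keys are overwritten.
        aMap.putAll(bMap);

        System.out.println(aMap); // k1 now maps to "new"; iteration order may vary
    }
}
```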
The complexity will always be O(n x m), with n being the size of the list and m the maximum size of the inner maps.
With java 8, you can use:
Map<String, String> result = myObjects.stream()
.flatMap(myObject -> myObject.getMyMap().entrySet().stream())
.collect(Collectors.toMap(e -> e.getKey() + "a", Entry::getValue));
This assumes you want only one map with all the entries of all the inner maps, with their key modified.
I've used flatMap to flatten all the streams with the entries of the inner maps into one stream. Then, I've used Collectors.toMap to collect all these entries into a new map.
This will fail if there are repeated keys. In such case, you can use the overloaded version of Collectors.toMap, which accepts a function to merge values when there's a collision between keys:
Map<String, String> result = myObjects.stream()
.flatMap(myObject -> myObject.getMyMap().entrySet().stream())
.collect(Collectors.toMap(e -> e.getKey() + "a", Entry::getValue, String::concat));
This simply concatenates the values that are mapped more than once to the same key, but you can use any other merge function.
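Here is a runnable sketch of that flatMap-plus-merge approach, with the inner maps replaced by invented literal data (two inner maps deliberately sharing the key "x" to force a collision):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class FlattenInnerMaps {
    public static void main(String[] args) {
        // Stand-in for the inner maps of the MyObject list; "x" appears twice.
        List<Map<String, String>> innerMaps = List.of(
                Map.of("x", "1", "y", "2"),
                Map.of("x", "3"));

        Map<String, String> result = innerMaps.stream()
                .flatMap(m -> m.entrySet().stream())
                .collect(Collectors.toMap(
                        e -> e.getKey() + "a",   // modified key
                        Map.Entry::getValue,
                        String::concat));        // merge colliding values
    }
}
```

Since the list is ordered, the first map's "x" value is seen first, so the merge concatenates "1" and "3" into "13" under the key "xa".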
I would like to flatten a Map which associates an Integer key to a list of String, without losing the key mapping.
I am curious as though it is possible and useful to do so with stream and lambda.
We start with something like this:
Map<Integer, List<String>> mapFrom = new HashMap<>();
Let's assume that mapFrom is populated somewhere, and looks like:
1: a,b,c
2: d,e,f
etc.
Let's also assume that the values in the lists are unique.
Now, I want to "unfold" it to get a second map like:
a: 1
b: 1
c: 1
d: 2
e: 2
f: 2
etc.
I could do it like this (or very similarly, using foreach):
Map<String, Integer> mapTo = new HashMap<>();
for (Map.Entry<Integer, List<String>> entry: mapFrom.entrySet()) {
for (String s: entry.getValue()) {
mapTo.put(s, entry.getKey());
}
}
Now let's assume that I want to use lambda instead of nested for loops. I would probably do something like this:
Map<String, Integer> mapTo = mapFrom.entrySet().stream().map(e -> {
e.getValue().stream().?
// Here I can iterate on each List,
// but my best try would only give me a flat map for each key,
// that I wouldn't know how to flatten.
}).collect(Collectors.toMap(/*A String value*/,/*An Integer key*/))
I also gave a try to flatMap, but I don't think that it is the right way to go, because although it helps me get rid of the dimensionality issue, I lose the key in the process.
In a nutshell, my two questions are :
Is it possible to use streams and lambda to achieve this?
Is it useful (performance, readability) to do so?
You need to use flatMap to flatten the values into a new stream, but since you still need the original keys for collecting into a Map, you have to map to a temporary object holding key and value, e.g.
Map<String, Integer> mapTo = mapFrom.entrySet().stream()
.flatMap(e->e.getValue().stream()
.map(v->new AbstractMap.SimpleImmutableEntry<>(e.getKey(), v)))
.collect(Collectors.toMap(Map.Entry::getValue, Map.Entry::getKey));
The Map.Entry is a stand-in for the nonexistent tuple type, any other type capable of holding two objects of different type is sufficient.
An alternative not requiring these temporary objects, is a custom collector:
Map<String, Integer> mapTo = mapFrom.entrySet().stream().collect(
HashMap::new, (m,e)->e.getValue().forEach(v->m.put(v, e.getKey())), Map::putAll);
This differs from toMap in overwriting duplicate keys silently, whereas toMap without a merger function will throw an exception, if there is a duplicate key. Basically, this custom collector is a parallel capable variant of
Map<String, Integer> mapTo = new HashMap<>();
mapFrom.forEach((k, l) -> l.forEach(v -> mapTo.put(v, k)));
But note that this task wouldn’t benefit from parallel processing, even with a very large input map. Only if there were additional computationally intensive tasks within the stream pipeline that could benefit from SMP would there be a chance of getting a benefit from parallel streams. So perhaps the concise, sequential Collection API solution is preferable.
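Both variants can be checked against each other with the sample data from the question (requires Java 9+ for Map.entry and Map.of; the data itself is taken from the question):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class InvertMap {
    public static void main(String[] args) {
        Map<Integer, List<String>> mapFrom = Map.of(
                1, List.of("a", "b", "c"),
                2, List.of("d", "e", "f"));

        // Stream version: one temporary entry per list element.
        Map<String, Integer> viaStream = mapFrom.entrySet().stream()
                .flatMap(e -> e.getValue().stream()
                        .map(v -> Map.entry(v, e.getKey())))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));

        // Plain sequential version.
        Map<String, Integer> viaForEach = new HashMap<>();
        mapFrom.forEach((k, l) -> l.forEach(v -> viaForEach.put(v, k)));

        System.out.println(viaStream.equals(viaForEach)); // true
    }
}
```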
You should use flatMap as follows:
entrySet.stream()
    .flatMap(e -> e.getValue().stream()
        .map(s -> new SimpleImmutableEntry<>(e.getKey(), s)));
SimpleImmutableEntry is a nested class in AbstractMap.
Hope this does it in the simplest way. :))
mapFrom.forEach((key, values) -> values.forEach(value -> mapTo.put(value, key)));
This should work. Note that without a merge function, Collectors.toMap will throw if the same value appears under more than one key.
Map<Integer, List<String>> mapFrom = new HashMap<>();
Map<String, Integer> mapTo = mapFrom.entrySet().stream()
.flatMap(integerListEntry -> integerListEntry.getValue()
.stream()
.map(listItem -> new AbstractMap.SimpleEntry<>(listItem, integerListEntry.getKey())))
.collect(Collectors.toMap(AbstractMap.SimpleEntry::getKey, AbstractMap.SimpleEntry::getValue));
Same as the previous answers with Java 9:
Map<String, Integer> mapTo = mapFrom.entrySet()
.stream()
.flatMap(entry -> entry.getValue()
.stream()
.map(s -> Map.entry(s, entry.getKey())))
.collect(toMap(Entry::getKey, Entry::getValue));