Collect to TreeMap in Java 8 - java

I wanted to collect the result to a TreeMap<Integer,Double> by processing a
TreeMap<Integer,ArrayList<String>>.
TreeMap<Integer,Double> result2 = units.entrySet().stream()
        .filter(v -> v.getValue().size() > 3)
        .filter(v -> !v.getValue().get(1).isEmpty() && !v.getValue().get(2).isEmpty())
        .mapToDouble(v -> mult(v.getValue().get(1), v.getValue().get(2)))
        .collect();
Basically, I take the values from the ArrayList of Strings in the stream, filter out the entries with missing values, and compute the product of the 2nd and 3rd elements using the mult function inside the lambda expression. What I don't know is how to collect this into a TreeMap where the key is the key of the processed TreeMap units and the value is the product calculated in the mapToDouble.
NOTE: units is a TreeMap<Integer,ArrayList<String>>

You can collect the data into a TreeMap by using the overloaded toMap(keyMapper, valueMapper, mergeFunction, mapSupplier) method, which allows you to specify which Map implementation to create.
TreeMap<Integer,Double> result2 = units.entrySet().stream()
        .filter(v -> v.getValue().size() > 3)
        .filter(v -> !v.getValue().get(1).isEmpty() && !v.getValue().get(2).isEmpty())
        .collect(Collectors.toMap(
                entry -> entry.getKey(),
                entry -> mult(entry.getValue().get(1), entry.getValue().get(2)),
                (entry1, entry2) -> entry1, // merge function, called for duplicate keys; entry1 is kept by default, but you can change this as needed
                TreeMap::new
        ));
You can refer to the Collectors documentation to see the overloads of the toMap method:
https://docs.oracle.com/javase/8/docs/api/java/util/stream/Collectors.html
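For completeness, here is a minimal runnable sketch of this approach. The mult function is not shown in the question, so it is assumed here to simply parse both strings and multiply them:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class TreeMapCollectExample {

    // Hypothetical stand-in for the question's mult function
    private static double mult(String a, String b) {
        return Double.parseDouble(a) * Double.parseDouble(b);
    }

    public static void main(String[] args) {
        TreeMap<Integer, ArrayList<String>> units = new TreeMap<>();
        units.put(1, new ArrayList<>(Arrays.asList("x", "2.0", "3.0", "y")));
        units.put(2, new ArrayList<>(Arrays.asList("x", "", "5.0", "y"))); // filtered out: empty element

        TreeMap<Integer, Double> result2 = units.entrySet().stream()
                .filter(e -> e.getValue().size() > 3)
                .filter(e -> !e.getValue().get(1).isEmpty() && !e.getValue().get(2).isEmpty())
                .collect(Collectors.toMap(
                        e -> e.getKey(),
                        e -> mult(e.getValue().get(1), e.getValue().get(2)),
                        (a, b) -> a,   // keys of a TreeMap are unique, so this merge function is never called here
                        TreeMap::new));

        System.out.println(result2); // {1=6.0}
    }
}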

Related

How to collect data to List<Object> from Map<Object,Integer> using the Java Stream API?

I have a Map<Nominal, Integer> with objects and their counts:
a -> 3
b -> 1
c -> 2
And I need to get a List<Nominal> like this from it:
a
a
a
b
c
c
How can I do this using the Stream API?
We can use Collections::nCopies to achieve the desired result:
private static <T> List<T> transform(Map<? extends T, Integer> map) {
    return map.entrySet().stream()
            .map(entry -> Collections.nCopies(entry.getValue(), entry.getKey()))
            .flatMap(Collection::stream)
            .collect(Collectors.toList());
}
Ideone demo
Remark
In the demo, I changed the key-type of the Map from Nominal to Object since the definition of Nominal was not provided. Changing the key-type, however, does not influence the solution.
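A brief usage example of the transform method above, using String as a stand-in for Nominal and a LinkedHashMap so the iteration order is predictable:
Map<String, Integer> counts = new LinkedHashMap<>();
counts.put("a", 3);
counts.put("b", 1);
counts.put("c", 2);

List<String> result = transform(counts);
System.out.println(result); // [a, a, a, b, c, c]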
Stream the entries and use flatMap to generate multiple copies of each key based on the value (this assumes static imports of Stream.generate and Collectors.toList):
List<Nominal> expanded = map.entrySet().stream()
        .flatMap(e -> generate(e::getKey).limit(e.getValue()))
        .collect(toList());

How to gather a keyset from multiple maps from a stream that is filtered?

I am trying to learn to work with streams and collectors. I know how to do this with multiple for loops, but I want to become a more efficient programmer.
Each project has a map committedHoursPerDay, where the key is the Employee and the value is the number of hours as an Integer. I want to loop through every project's committedHoursPerDay map, filter the entries where the hours are more than 7 (full-time), and add each Employee who works full-time to the set.
The code that I have written so far is this:
public Set<Employee> getFulltimeEmployees() {
    // TODO
    Map<Employee,Integer> fulltimeEmployees = projects.stream().filter(p -> p.getCommittedHoursPerDay().entrySet()
            .stream()
            .filter(map -> map.getValue() >= 8)
            .collect(Collectors.toMap(map -> map.getKey(), map -> map.getValue())));
    return fulltimeEmployees.keySet();
}
However, inside the filter the entry is recognized, because I can access the key and values, but in the .collect(Collectors.toMap()) it doesn't recognize the map and only sees it as a lambda argument.
There is a one-to-many relationship here. You can first flatten the maps using flatMap and then apply the filter to the map entries.
Map<Employee,Integer> fulltimeEmployees = projects.stream()
        .flatMap(p -> p.getCommittedHoursPerDay()
                .entrySet()
                .stream())
        .filter(mapEntry -> mapEntry.getValue() >= 8)
        .collect(Collectors.toMap(mapEntry -> mapEntry.getKey(), mapEntry -> mapEntry.getValue()));
The flatMap step returns a Stream<Map.Entry<Employee, Integer>>. The filter thus operates on a Map.Entry<Employee, Integer>.
You can also use method references in the collect step: .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue))
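Since only the key set is needed in the end, a slightly shorter variant (a sketch relying on the same Project and Employee classes assumed by the question) skips the intermediate Map and collects the keys directly into a Set:
Set<Employee> fulltimeEmployees = projects.stream()
        .flatMap(p -> p.getCommittedHoursPerDay().entrySet().stream())
        .filter(entry -> entry.getValue() >= 8)
        .map(Map.Entry::getKey)
        .collect(Collectors.toSet());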

Extract keys from Map for which there is no duplicate values

Map is defined as:
Map<Integer,String> map = new HashMap<>();
map.put(2,"ram");
map.put(3,"ram");
map.put(4,"gopal");
map.put(5,"madan");
map.put(6,"shyam");
map.put(7,"gopal");
map.put(8,"ram");
My expected output is a List which contains only the keys whose values have no duplicates.
5
6
My approach and thought process:
Thought process 1:
I would take map.entrySet().stream().map(....), open another stream inside the map, and filter out the values for which duplicates are present.
That approach quickly fell apart, because the value at the current position would be compared against itself again in the nested stream, and I would thus end up filtering out all elements.
Thought process 2
I kept the values in a separate List:
List<String> subList = map.entrySet().stream()
        .map(k -> k.getValue())
        .collect(Collectors.toList());
and then:
map.entrySet().stream()
        .filter(s -> subList.contains(s.getValue()))
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());
But I am getting the output as
2
3
4
5
6
7
8
The output is obvious: the value s that I pick from the stream is compared against a pool in which that value is always present at least once.
I then thought that I could keep a counter that counts occurrences and increments whenever the value is present, but all of that seems very vague now.
Is there any way I can iterate by index using a stream, so that I can always leave out the entry I am currently looking at and compare only against the rest of the values?
I would love a brief explanation.
You can break the task into two steps. First, count how often each value occurs using the groupingBy() and counting() collectors.
Map<String,Long> valueCount = map.values()
        .stream()
        .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
Its result is:
{madan=1, shyam=1, gopal=2, ram=3}
The second step is to find only the keys whose values are not duplicated. To achieve this you can use filter(), filtering the map by the result of the previous step.
map.entrySet()
        .stream()
        .filter(entry -> valueCount.get(entry.getValue()) == 1)
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());
You can filter out the values with frequency 1 while building the lookup set, such as:
Set<String> uniqueSet = map.values().stream()
        .collect(Collectors.groupingBy(a -> a, Collectors.counting()))
        .entrySet().stream()
        .filter(a -> a.getValue() == 1)
        .map(Map.Entry::getKey)
        .collect(Collectors.toSet());
and then perform the same operation as:
Set<Integer> result = map.entrySet().stream()
        .filter(e -> uniqueSet.contains(e.getValue()))
        .map(Map.Entry::getKey)
        .collect(Collectors.toSet());
Or, as Holger pointed out in the comments, instead of counting you could make do with a Boolean value to filter unique values:
Set<String> uniqueSet = map.values().stream()
        .collect(Collectors.toMap(Function.identity(), v -> true, (a, b) -> false))
        .entrySet().stream()
        .filter(Map.Entry::getValue)
        .map(Map.Entry::getKey)
        .collect(Collectors.toSet());
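Putting the two steps together, here is a minimal self-contained sketch of the counting approach (since HashMap iteration order is unspecified, the two keys may appear in either order):
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;

public class UniqueValueKeys {
    public static void main(String[] args) {
        Map<Integer, String> map = new HashMap<>();
        map.put(2, "ram");
        map.put(3, "ram");
        map.put(4, "gopal");
        map.put(5, "madan");
        map.put(6, "shyam");
        map.put(7, "gopal");
        map.put(8, "ram");

        // Step 1: count how often each value occurs
        Map<String, Long> valueCount = map.values().stream()
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));

        // Step 2: keep only the keys whose value occurs exactly once
        List<Integer> result = map.entrySet().stream()
                .filter(entry -> valueCount.get(entry.getValue()) == 1)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());

        System.out.println(result); // [5, 6]
    }
}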

How to convert a HashMap of lists into a single list in Java 8?

I am completely new to Java 8 and I am a bit unclear on how to proceed.
I have a Map<String, List<Value>>. In Java 7 I would just use a for loop over the keys and collect the lists into a single list.
However, I want to be able to do this in Java 8.
What I have is:
List<Value> newList = resultMap.entrySet().stream()
        .flatMap(e -> e.getValue().stream())
        .map( /* get the value in the list */ )
        .collect(Collectors.toList());
However, in this case I would not be able to tell which key in the HashMap a value belongs to.
How can I get the key from the HashMap while doing the above?
You can do something like this:
Map<Key, List<Value>> map = ...;
List<Map.Entry<Key, Value>> list =
        map.entrySet()
           .stream()
           .flatMap(e -> {
               return e.getValue().stream()
                       .map(v -> new AbstractMap.SimpleEntry<>(e.getKey(), v));
           })
           .collect(Collectors.toList());
This creates an entry for each value in each sublist, where the key is the corresponding key for the list that value came from.
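For example, with String keys and Integer values standing in for the unspecified Key and Value types, and a LinkedHashMap for predictable order:
Map<String, List<Integer>> map = new LinkedHashMap<>();
map.put("a", Arrays.asList(1, 2));
map.put("b", Collections.singletonList(3));

List<Map.Entry<String, Integer>> list = map.entrySet()
        .stream()
        .flatMap(e -> e.getValue().stream()
                .map(v -> new AbstractMap.SimpleEntry<String, Integer>(e.getKey(), v)))
        .collect(Collectors.toList());

System.out.println(list); // [a=1, a=2, b=3]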

Transform and filter a Java Map with streams

I have a Java Map that I'd like to transform and filter. As a trivial example, suppose I want to convert all values to Integers then remove the odd entries.
Map<String, String> input = new HashMap<>();
input.put("a", "1234");
input.put("b", "2345");
input.put("c", "3456");
input.put("d", "4567");
Map<String, Integer> output = input.entrySet().stream()
        .collect(Collectors.toMap(
                Map.Entry::getKey,
                e -> Integer.parseInt(e.getValue())
        ))
        .entrySet().stream()
        .filter(e -> e.getValue() % 2 == 0)
        .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
System.out.println(output.toString());
This is correct and yields: {a=1234, c=3456}
However, I can't help but wonder if there's a way to avoid calling .entrySet().stream() twice.
Is there a way I can perform both transform and filter operations and call .collect() only once at the end?
Yes, you can map each entry to a temporary entry that holds the key and the parsed integer value, then filter each entry based on its value.
Map<String, Integer> output =
        input.entrySet()
             .stream()
             .map(e -> new AbstractMap.SimpleEntry<>(e.getKey(), Integer.valueOf(e.getValue())))
             .filter(e -> e.getValue() % 2 == 0)
             .collect(Collectors.toMap(
                     Map.Entry::getKey,
                     Map.Entry::getValue
             ));
Note that I used Integer.valueOf instead of parseInt since we actually want a boxed int.
If you have the luxury to use the StreamEx library, you can do it quite simply:
Map<String, Integer> output =
EntryStream.of(input).mapValues(Integer::valueOf).filterValues(v -> v % 2 == 0).toMap();
One way to solve the problem with much less overhead is to move the mapping and filtering down into the collector.
Map<String, Integer> output = input.entrySet().stream().collect(
        HashMap::new,
        (map, e) -> { int i = Integer.parseInt(e.getValue()); if (i % 2 == 0) map.put(e.getKey(), i); },
        Map::putAll);
This does not require the creation of intermediate Map.Entry instances and even better, will postpone the boxing of int values to the point when the values are actually added to the Map, which implies that values rejected by the filter are not boxed at all.
Compared to what Collectors.toMap(…) does, the operation is also simplified by using Map.put rather than Map.merge as we know beforehand that we don’t have to handle key collisions here.
However, as long as you don’t want to utilize parallel execution you may also consider the ordinary loop
HashMap<String, Integer> output = new HashMap<>();
for(Map.Entry<String, String> e: input.entrySet()) {
    int i = Integer.parseInt(e.getValue());
    if(i % 2 == 0) output.put(e.getKey(), i);
}
or the internal iteration variant:
HashMap<String, Integer> output = new HashMap<>();
input.forEach((k, v) -> { int i = Integer.parseInt(v); if (i % 2 == 0) output.put(k, i); });
the latter being quite compact and at least on par with all other variants regarding single threaded performance.
Guava's your friend:
Map<String, Integer> output = Maps.filterValues(Maps.transformValues(input, Integer::valueOf), i -> i % 2 == 0);
Keep in mind that output is a transformed, filtered view of input. You'll need to make a copy if you want to operate on them independently.
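If you do need a map that is independent of input, copying the view into a new map at that point is enough; a quick sketch using the same Guava view:
Map<String, Integer> copy = new HashMap<>(
        Maps.filterValues(Maps.transformValues(input, Integer::valueOf), i -> i % 2 == 0));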
You could use the Stream.collect(supplier, accumulator, combiner) method to transform the entries and conditionally accumulate them:
Map<String, Integer> even = input.entrySet().stream().collect(
        HashMap::new,
        (m, e) -> Optional.ofNullable(e)
                .map(Map.Entry::getValue)
                .map(Integer::valueOf)
                .filter(i -> i % 2 == 0)
                .ifPresent(i -> m.put(e.getKey(), i)),
        Map::putAll);
System.out.println(even); // {a=1234, c=3456}
Here, inside the accumulator, I'm using Optional methods to apply both the transformation and the predicate, and, if the optional value is still present, I'm adding it to the map being collected.
Another way to do this is to remove the values you don't want from the transformed Map:
Map<String, Integer> output = input.entrySet().stream()
        .collect(Collectors.toMap(
                Map.Entry::getKey,
                e -> Integer.parseInt(e.getValue()),
                (a, b) -> { throw new AssertionError(); },
                HashMap::new
        ));
output.values().removeIf(v -> v % 2 != 0);
This assumes you want a mutable Map as the result, if not you can probably create an immutable one from output.
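For instance, on plain Java 8 an unmodifiable view of the filtered result is a one-liner; a minimal sketch:
Map<String, Integer> readOnly = Collections.unmodifiableMap(output);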
If you are transforming the values into the same type and want to modify the Map in place, this could be a lot shorter with replaceAll:
input.replaceAll((k, v) -> v + " example");
input.values().removeIf(v -> v.length() > 10);
This also assumes input is mutable.
I don't recommend doing this because it will not work for all valid Map implementations and may stop working for HashMap in the future, but you can currently use replaceAll and cast a HashMap to change the type of the values:
((Map)input).replaceAll((k, v) -> Integer.parseInt((String)v));
Map<String, Integer> output = (Map)input;
output.values().removeIf(v -> v % 2 != 0);
This will also give you type-safety warnings, and if you try to retrieve a value from the Map through a reference of the old type like this:
String ex = input.get("a");
It will throw a ClassCastException.
You could move the first transform part into a method to avoid the boilerplate if you expect to use it a lot:
public static <K, VO, VN, M extends Map<K, VN>> M transformValues(
        Map<? extends K, ? extends VO> old,
        Function<? super VO, ? extends VN> f,
        Supplier<? extends M> mapFactory) {
    return old.entrySet().stream().collect(Collectors.toMap(
            Entry::getKey,
            e -> f.apply(e.getValue()),
            (a, b) -> { throw new IllegalStateException("Duplicate keys for values " + a + " " + b); },
            mapFactory));
}
And use it like this:
Map<String, Integer> output = transformValues(input, Integer::parseInt, HashMap::new);
output.values().removeIf(v -> v % 2 != 0);
Note that the duplicate key exception can be thrown if, for example, the old Map is an IdentityHashMap and the mapFactory creates a HashMap.
Here is code using abacus-common:
Map<String, String> input = N.asMap("a", "1234", "b", "2345", "c", "3456", "d", "4567");

Map<String, Integer> output = Stream.of(input)
        .groupBy(e -> e.getKey(), e -> N.asInt(e.getValue()))
        .filter(e -> e.getValue() % 2 == 0)
        .toMap(Map.Entry::getKey, Map.Entry::getValue);

N.println(output.toString());
Declaration: I'm the developer of abacus-common.
