Consider this code:
Function<BigDecimal, BigDecimal> func1 = x -> x; // This could be anything
Function<BigDecimal, BigDecimal> func2 = y -> y; // This could be anything
Map<Integer, BigDecimal> data = new HashMap<>();

Map<Integer, BigDecimal> newData = data.entrySet().stream()
        .collect(Collectors.toMap(Entry::getKey,
                i -> func1.apply(i.getValue())));

List<BigDecimal> list = newData.entrySet().stream()
        .map(i -> func2.apply(i.getValue()))
        .collect(Collectors.toList());
Basically what I'm doing is updating a HashMap with func1, then applying a second transformation with func2 and saving the twice-transformed values in a list.
I did it all in an immutable way, generating the new objects newData and list.
MY QUESTION:
Is it possible to do that by streaming the original HashMap (data) only once?
I tried this:
Function<BigDecimal, BigDecimal> func1 = x -> x;
Function<BigDecimal, BigDecimal> func2 = y -> y;
Map<Integer, BigDecimal> data = new HashMap<>();
List<BigDecimal> list = new ArrayList<>();

Map<Integer, BigDecimal> newData = data.entrySet().stream()
        .collect(Collectors.toMap(Entry::getKey, i -> {
            BigDecimal newValue = func1.apply(i.getValue());
            // SIDE EFFECT!
            list.add(func2.apply(newValue));
            return newValue;
        }));
but doing so I update list as a side effect, so I lose the 'immutable way' requirement.
This seems like an ideal use case for the upcoming Collectors.teeing method in JDK 12. Here's the webrev and here's the CSR. You can use it as follows:
Map.Entry<Map<Integer, BigDecimal>, List<BigDecimal>> result = data.entrySet().stream()
.collect(Collectors.teeing(
Collectors.toMap(
Map.Entry::getKey,
i -> func1.apply(i.getValue())),
Collectors.mapping(
i -> func1.andThen(func2).apply(i.getValue()),
Collectors.toList()),
Map::entry));
Collectors.teeing collects to two different collectors and then merges both partial results into the final result. For this final step I'm using JDK 9's Map.entry(K k, V v) static method, but I could have used any other pair container, e.g. Pair or Tuple2.
For the first collector I'm using your exact code to collect to a Map, while for the second collector I'm using Collectors.mapping along with Collectors.toList, using Function.andThen to compose your func1 and func2 functions for the mapping step.
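If you prefer a dedicated pair type over Map.entry, any two-field container works as the merger's result. Here is a sketch with a hypothetical Pair class (not part of the JDK):

final class Pair<A, B> {
    final A first;
    final B second;
    Pair(A first, B second) { this.first = first; this.second = second; }
}

Pair<Map<Integer, BigDecimal>, List<BigDecimal>> result = data.entrySet().stream()
        .collect(Collectors.teeing(
                Collectors.toMap(Map.Entry::getKey, i -> func1.apply(i.getValue())),
                Collectors.mapping(i -> func1.andThen(func2).apply(i.getValue()),
                        Collectors.toList()),
                Pair::new)); // merger builds the custom pair instead of a Map.Entry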
EDIT: If you cannot wait until JDK 12 is released, you could use this code meanwhile:
public static <T, A1, A2, R1, R2, R> Collector<T, ?, R> teeing(
Collector<? super T, A1, R1> downstream1,
Collector<? super T, A2, R2> downstream2,
BiFunction<? super R1, ? super R2, R> merger) {
class Acc {
A1 acc1 = downstream1.supplier().get();
A2 acc2 = downstream2.supplier().get();
void accumulate(T t) {
downstream1.accumulator().accept(acc1, t);
downstream2.accumulator().accept(acc2, t);
}
Acc combine(Acc other) {
acc1 = downstream1.combiner().apply(acc1, other.acc1);
acc2 = downstream2.combiner().apply(acc2, other.acc2);
return this;
}
R applyMerger() {
R1 r1 = downstream1.finisher().apply(acc1);
R2 r2 = downstream2.finisher().apply(acc2);
return merger.apply(r1, r2);
}
}
return Collector.of(Acc::new, Acc::accumulate, Acc::combine, Acc::applyMerger);
}
Note: The characteristics of the downstream collectors are not considered when creating the returned collector (left as an exercise).
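If you want to tackle that exercise, one plausible approach (a sketch, not the JDK's actual implementation) is to keep only the characteristics both downstream collectors have in common and drop IDENTITY_FINISH, since the returned collector always applies a real finisher:

// Replace the return statement above with:
EnumSet<Collector.Characteristics> characteristics =
        EnumSet.noneOf(Collector.Characteristics.class);
characteristics.addAll(downstream1.characteristics());
characteristics.retainAll(downstream2.characteristics()); // keep only shared characteristics
characteristics.remove(Collector.Characteristics.IDENTITY_FINISH); // we always run a finisher
return Collector.of(Acc::new, Acc::accumulate, Acc::combine, Acc::applyMerger,
        characteristics.toArray(new Collector.Characteristics[0]));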
EDIT 2: Your solution is absolutely OK, even though it uses two streams. My solution above streams the original map only once, but it applies func1 to all the values twice. If func1 is expensive, you might consider memoizing it (i.e. caching its results, so that whenever it's called again with the same input, you return the result from the cache instead of computing it again). Or you might also first apply func1 to the values of the original map, and then collect with Collectors.teeing.
Memoizing is easy. Just declare this utility method:
public <T, R> Function<T, R> memoize(Function<T, R> f) {
Map<T, R> cache = new HashMap<>(); // or ConcurrentHashMap
return k -> cache.computeIfAbsent(k, f);
}
And then use it as follows:
Function<BigDecimal, BigDecimal> func1 = memoize(x -> x); //This could be anything
Now you can use this memoized func1 and it will work exactly as before, except that it will return results from the cache when its apply method is invoked with an argument that has been previously used.
The other solution would be to apply func1 first and then collect:
Map.Entry<Map<Integer, BigDecimal>, List<BigDecimal>> result = data.entrySet().stream()
.map(i -> Map.entry(i.getKey(), func1.apply(i.getValue())))
.collect(Collectors.teeing(
Collectors.toMap(
Map.Entry::getKey,
Map.Entry::getValue),
Collectors.mapping(
i -> func2.apply(i.getValue()),
Collectors.toList()),
Map::entry));
Again, I'm using JDK 9's Map.entry(K k, V v) static method.
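If you're still on Java 8, where Map.entry doesn't exist, AbstractMap.SimpleImmutableEntry can stand in as the pair container. A sketch, assuming the backported teeing method from the EDIT above:

Map.Entry<Map<Integer, BigDecimal>, List<BigDecimal>> result = data.entrySet().stream()
        .map(i -> new AbstractMap.SimpleImmutableEntry<>(i.getKey(), func1.apply(i.getValue())))
        .collect(teeing( // backported teeing from the EDIT above
                Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue),
                Collectors.mapping(i -> func2.apply(i.getValue()), Collectors.toList()),
                AbstractMap.SimpleImmutableEntry::new));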
Your code can be simplified this way:
List<BigDecimal> list = data.values().stream()
        .map(func1)
        .map(func2)
        .collect(Collectors.toList());
Your goal is to apply these functions to all the BigDecimal values in the Map. You can get those values via Map::values, which returns a Collection<BigDecimal>, and stream over that collection only (assuming data already contains some entries).
I discourage you from iterating over all the entries (Set<Entry<Integer, BigDecimal>>) since you only need to work with the values.
Try this; it returns an Object[2] array where the first element is the map and the second is the list:
Map<Integer, BigDecimal> data = new HashMap<>();
data.put(1, BigDecimal.valueOf(30));
data.put(2, BigDecimal.valueOf(40));
data.put(3, BigDecimal.valueOf(50));

Function<BigDecimal, BigDecimal> func1 = x -> x.add(BigDecimal.valueOf(10));  // This could be anything
Function<BigDecimal, BigDecimal> func2 = y -> y.add(BigDecimal.valueOf(-20)); // This could be anything

Object[] o = data.entrySet().stream()
        .map(AbstractMap.SimpleEntry::new) // copy each entry so the original map stays untouched
        .map(entry -> {
            entry.setValue(func1.apply(entry.getValue()));
            return entry;
        })
        .collect(Collectors.collectingAndThen(
                Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue),
                a -> {
                    List<BigDecimal> bigDecimals = a.values().stream()
                            .map(func2)
                            .collect(Collectors.toList());
                    return new Object[] { a, bigDecimals };
                }));

System.out.println("Original Map: " + data);
System.out.println("func1 map: " + o[0]);
System.out.println("func1+func2 list: " + o[1]);
Output:
Original Map: {1=30, 2=40, 3=50}
func1 map: {1=40, 2=50, 3=60}
func1+func2 list: [20, 30, 40]
This question is about Java Streams' groupingBy capability.
Suppose I have a class, WorldCup:
public class WorldCup {
int year;
Country champion;
// all-arg constructor, getter/setters, etc
}
and an enum, Country:
public enum Country {
Brazil, France, USA
}
and the following code snippet:
WorldCup wc94 = new WorldCup(1994, Country.Brazil);
WorldCup wc98 = new WorldCup(1998, Country.France);
List<WorldCup> wcList = new ArrayList<WorldCup>();
wcList.add(wc94);
wcList.add(wc98);
Map<Country, List<Integer>> championsMap = wcList.stream()
    .collect(Collectors.groupingBy(WorldCup::getCountry,
        Collectors.mapping(WorldCup::getYear, Collectors.toList())));
After running this code, championsMap will contain:
Brazil: [1994]
France: [1998]
Is there a succinct way to have this list include an entry for all of the values of the enum? What I'm looking for is:
Brazil: [1994]
France: [1998]
USA: []
There are several approaches you can take.
The map used for accumulating the stream data can be prepopulated with an entry for every enum member. To access all enum members you can use the values() method or EnumSet.allOf().
This can be achieved using the three-arg version of collect() or through a custom collector created via Collector.of(); both variants are shown below.
Map<Country, List<Integer>> championsMap = wcList.stream()
.collect(
() -> EnumSet.allOf(Country.class).stream() // supplier
.collect(Collectors.toMap(
Function.identity(),
c -> new ArrayList<>()
)),
(Map<Country, List<Integer>> map, WorldCup next) -> // accumulator
map.get(next.getCountry()).add(next.getYear()),
(left, right) -> // combiner
right.forEach((k, v) -> left.get(k).addAll(v))
);
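And here is the same idea packaged as a custom collector via Collector.of(); the supplier, accumulator and combiner are the same as above:

Map<Country, List<Integer>> championsMap = wcList.stream()
    .collect(Collector.of(
        () -> EnumSet.allOf(Country.class).stream()          // supplier: prepopulated map
            .collect(Collectors.toMap(
                Function.identity(),
                c -> new ArrayList<>())),
        (Map<Country, List<Integer>> map, WorldCup next) ->  // accumulator
            map.get(next.getCountry()).add(next.getYear()),
        (left, right) -> {                                   // combiner
            right.forEach((k, v) -> left.get(k).addAll(v));
            return left;
        }));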
Another option is to add the missing entries to the map after the reduction of the stream has finished.
For that we can use the built-in collector collectingAndThen().
Map<Country, List<Integer>> championsMap = wcList.stream()
.collect(Collectors.collectingAndThen(
Collectors.groupingBy(WorldCup::getCountry,
Collectors.mapping(WorldCup::getYear,
Collectors.toList())),
map -> {
EnumSet.allOf(Country.class)
.forEach(country -> map.computeIfAbsent(country, k -> new ArrayList<>())); // if you're not going to mutate these lists - use Collections.emptyList()
return map;
}
));
Suppose I have this Java 8 code:
public class Foo {
    private long id;

    public long getId() {
        return id;
    }
    //--snip--
}
//Somewhere else...
List<Foo> listA = getListA();
List<Foo> listB = getListB();
List<Foo> uniqueFoos = ???;
In List<Foo> uniqueFoos I want to add all elements of listA and listB so all Foos have unique IDs. I.e. if there is already a Foo in uniqueFoos that has a particular ID don't add another Foo with the same ID but skip it instead.
Of course there is plain old iteration, but I think there should be something more elegant (probably involving streams, but not mandatory), but I can't quite figure it out...
I can think of good solutions involving an override of the equals() method to basically return id == other.id; and using a Set or distinct(). Unfortunately I can't override equals() because object equality must not change.
What is a clear and efficient way to achieve this?
You can do it with Collectors.toMap:
Collection<Foo> uniqueFoos = Stream.concat(listA.stream(), listB.stream())
.collect(Collectors.toMap(
Foo::getId,
f -> f,
(oldFoo, newFoo) -> oldFoo))
.values();
If you need a List instead of a Collection, simply do:
List<Foo> listUniqueFoos = new ArrayList<>(uniqueFoos);
If you also need to preserve encounter order of elements, you can use the overloaded version of Collectors.toMap that accepts a Supplier for the returned map:
Collection<Foo> uniqueFoos = Stream.concat(listA.stream(), listB.stream())
.collect(Collectors.toMap(
Foo::getId,
f -> f,
(oldFoo, newFoo) -> oldFoo,
LinkedHashMap::new))
.values();
I think it's worth adding a non-stream variant:
Map<Long, Foo> map = new LinkedHashMap<>();
listA.forEach(f -> map.merge(f.getId(), f, (oldFoo, newFoo) -> oldFoo));
listB.forEach(f -> map.merge(f.getId(), f, (oldFoo, newFoo) -> oldFoo));
Collection<Foo> uniqueFoos = map.values();
This could be refactored into a generic method to not repeat code:
@SafeVarargs
static <T, K> Collection<T> uniqueBy(Function<T, K> groupBy, List<T>... lists) {
Map<K, T> map = new LinkedHashMap<>();
for (List<T> l : lists) {
l.forEach(e -> map.merge(groupBy.apply(e), e, (o, n) -> o));
}
return map.values();
}
Which you can use as follows:
Collection<Foo> uniqueFoos = uniqueBy(Foo::getId, listA, listB);
This approach uses the Map.merge method.
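For reference, map.merge(key, value, remapper) inserts value when the key is absent, and otherwise replaces the current mapping with remapper.apply(currentValue, value); so (oldFoo, newFoo) -> oldFoo keeps the first element seen for each id. A tiny standalone illustration:

Map<Long, String> m = new LinkedHashMap<>();
m.merge(1L, "first", (oldV, newV) -> oldV);  // key absent: "first" is put
m.merge(1L, "second", (oldV, newV) -> oldV); // key present: remapper keeps "first"
System.out.println(m); // {1=first}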
Something like this will do.
List<Foo> uniqueFoos = Stream.concat(listA.stream(), listB.stream())
.filter(distinctByKey(Foo::getId))
.collect(Collectors.toList());
public <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) {
Set<Object> seen = ConcurrentHashMap.newKeySet();
return t -> seen.add(keyExtractor.apply(t));
}
You could write this one. It skips the second and subsequent elements that have the same id, thanks to filter() and a Set that stores the ids already encountered:
Set<Long> ids = new HashSet<>();
List<Foo> uniqueFoos = Stream.concat(getListA().stream(), getListB().stream())
.filter(f -> ids.add(f.getId()))
.collect(Collectors.toList());
It is not a full stream solution, but it is rather straightforward and readable.
I have a class defined like
public class TimePeriodCalc {
private double occupancy;
private double efficiency;
private String atDate;
}
I would like to perform the following SQL statement using Java 8 Stream API.
SELECT atDate, AVG(occupancy), AVG(efficiency)
FROM TimePeriodCalc
GROUP BY atDate
I tried :
Collection<TimePeriodCalc> collector = result.stream().collect(groupingBy(p -> p.getAtDate(), ....
What can be put into the code to select multiple attributes ? I'm thinking of using multiple Collectors but really don't know how to do so.
To do it without a custom Collector (i.e., without streaming again over the result), you could do it like this. It's a bit dirty, since it first collects to Map<String, List<TimePeriodCalc>> and then streams each list to compute the average doubles.
Since you need two averages, they are collected into a holder or pair; in this case I'm using AbstractMap.SimpleEntry:
Map<String, SimpleEntry<Double, Double>> map = Stream.of(new TimePeriodCalc(12d, 10d, "A"), new TimePeriodCalc(2d, 16d, "A"))
.collect(Collectors.groupingBy(TimePeriodCalc::getAtDate,
Collectors.collectingAndThen(Collectors.toList(), list -> {
double occupancy = list.stream().collect(
Collectors.averagingDouble(TimePeriodCalc::getOccupancy));
double efficiency = list.stream().collect(
Collectors.averagingDouble(TimePeriodCalc::getEfficiency));
return new AbstractMap.SimpleEntry<>(occupancy, efficiency);
})));
System.out.println(map);
Here's a way with a custom collector. It only needs one pass, but it's not very easy, especially because of generics...
If you have this method:
@SuppressWarnings("unchecked")
@SafeVarargs
static <T, A, C extends Collector<T, A, Double>> Collector<T, ?, List<Double>>
averagingManyDoubles(ToDoubleFunction<? super T>... extractors) {
List<C> collectors = Arrays.stream(extractors)
.map(extractor -> (C) Collectors.averagingDouble(extractor))
.collect(Collectors.toList());
class Acc {
List<A> averages = collectors.stream()
.map(c -> c.supplier().get())
.collect(Collectors.toList());
void add(T elem) {
IntStream.range(0, extractors.length).forEach(i ->
collectors.get(i).accumulator().accept(averages.get(i), elem));
}
Acc merge(Acc another) {
IntStream.range(0, extractors.length).forEach(i ->
averages.set(i, collectors.get(i).combiner()
.apply(averages.get(i), another.averages.get(i))));
return this;
}
List<Double> finish() {
return IntStream.range(0, extractors.length)
.mapToObj(i -> collectors.get(i).finisher().apply(averages.get(i)))
.collect(Collectors.toList());
}
}
return Collector.of(Acc::new, Acc::add, Acc::merge, Acc::finish);
}
This receives an array of functions that extract double values from each element of the stream. These extractors are converted to Collectors.averagingDouble collectors, and then the local Acc class is created with the mutable structures used to accumulate the average for each collector. The accumulator function forwards each element to every collector's accumulator, and likewise for the combiner and finisher functions.
Usage is as follows:
Map<String, List<Double>> averages = list.stream()
.collect(Collectors.groupingBy(
TimePeriodCalc::getAtDate,
averagingManyDoubles(
TimePeriodCalc::getOccupancy,
TimePeriodCalc::getEfficiency)));
Assuming that your TimePeriodCalc class has all the necessary getters, this should get you the list you want:
List<TimePeriodCalc> result = new ArrayList<>(
list.stream()
.collect(Collectors.groupingBy(TimePeriodCalc::getAtDate,
Collectors.collectingAndThen(Collectors.toList(), TimePeriodCalc::avgTimePeriodCalc)))
.values()
);
Where TimePeriodCalc.avgTimePeriodCalc is this method in the TimePeriodCalc class:
public static TimePeriodCalc avgTimePeriodCalc(List<TimePeriodCalc> list){
return new TimePeriodCalc(
list.stream().collect(Collectors.averagingDouble(TimePeriodCalc::getOccupancy)),
list.stream().collect(Collectors.averagingDouble(TimePeriodCalc::getEfficiency)),
list.get(0).getAtDate()
);
}
The above can be combined into this monstrosity:
List<TimePeriodCalc> result = new ArrayList<>(
list.stream()
.collect(Collectors.groupingBy(TimePeriodCalc::getAtDate,
Collectors.collectingAndThen(
Collectors.toList(), a -> {
return new TimePeriodCalc(
a.stream().collect(Collectors.averagingDouble(TimePeriodCalc::getOccupancy)),
a.stream().collect(Collectors.averagingDouble(TimePeriodCalc::getEfficiency)),
a.get(0).getAtDate()
);
}
)))
.values());
With input:
List<TimePeriodCalc> list = new ArrayList<>();
list.add(new TimePeriodCalc(10,10,"a"));
list.add(new TimePeriodCalc(10,10,"b"));
list.add(new TimePeriodCalc(10,10,"c"));
list.add(new TimePeriodCalc(5,5,"a"));
list.add(new TimePeriodCalc(0,0,"b"));
This would give:
TimePeriodCalc [occupancy=7.5, efficiency=7.5, atDate=a]
TimePeriodCalc [occupancy=5.0, efficiency=5.0, atDate=b]
TimePeriodCalc [occupancy=10.0, efficiency=10.0, atDate=c]
You can group by one attribute and average another like this:
Map<String, Double> averageOccupancy = result.stream()
        .collect(Collectors.groupingBy(TimePeriodCalc::getAtDate,
                Collectors.averagingDouble(TimePeriodCalc::getOccupancy)));
(Note that occupancy is a double, so averagingDouble is the appropriate collector.) If you want more aggregates per group, you get the idea; a sketch computing both averages at once follows below.
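For more than one aggregate per group you need to combine downstream collectors. On JDK 12+ the following sketch (assuming getters for both fields) computes both averages in a single pass with Collectors.teeing:

Map<String, List<Double>> averages = result.stream()
        .collect(Collectors.groupingBy(TimePeriodCalc::getAtDate,
                Collectors.teeing(
                        Collectors.averagingDouble(TimePeriodCalc::getOccupancy),
                        Collectors.averagingDouble(TimePeriodCalc::getEfficiency),
                        List::of))); // each value is [averageOccupancy, averageEfficiency]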
Use case:
Process a list of strings via a method which returns an ImmutableTable of type {R, C, V}, for instance ImmutableTable<Integer, String, Boolean> process(String item) {...}.
Collect the results, i.e., merge all results and return an ImmutableTable. Is there a way to achieve that?
Current implementation (as suggested by Bohemian):
How about using a parallel stream? Are there any concurrency issues in the code below? With a parallel stream I am getting a "NullPointerException at index 1800" in tableBuilder.build(), but it works fine with a sequential stream.
ImmutableTable<Integer, String, Boolean> buildData() {
    // list of 4 AwsS3KeyName
    listToProcess.parallelStream()
        // Create new instance via Guice dependency injection
        .map(s3KeyName -> ProcessorInstanceProvider.get()
            .fetchAndBuild(s3KeyName))
        .forEach(tableBuilder::putAll);
    return tableBuilder.build();
}
The code below works great with both a sequential and a parallel stream, but ImmutableTable.Builder.build() fails on a duplicate entry for the same row and column. What could be the best way to prevent duplicates while merging tables?
public static <R, C, V> Collector<ImmutableTable<R, C, V>,
        ImmutableTable.Builder<R, C, V>, ImmutableTable<R, C, V>> toImmutableTable() {
    return Collector.of(ImmutableTable.Builder::new,
            ImmutableTable.Builder::putAll,
            (builder1, builder2) -> builder1.putAll(builder2.build()),
            ImmutableTable.Builder::build);
}
Edit:
If there is any duplicate entry in the ImmutableTable.Builder while merging different tables, it fails. I am trying to avoid the failure by collecting the ImmutableTables into a HashBasedTable first:
ImmutableTable.copyOf(itemListToProcess.parallelStream()
    .map(itemString -> ProcessorInstanceProvider.get()
        .buildImmutableTable(itemString))
    .collect(Collector.of(
        HashBasedTable::create,
        HashBasedTable::putAll,
        (a, b) -> {
            a.putAll(b);
            return a;
        })));
But I am getting the runtime exception "Caused by: java.lang.IllegalAccessError: tried to access class com.google.common.collect.AbstractTable".
How can we use HashBasedTable as the accumulator to collect ImmutableTables? HashBasedTable overwrites an existing entry with the latest one instead of failing on duplicates, and would let us return the aggregated immutable table.
Since Guava 21 you can use ImmutableTable.toImmutableTable collector.
public ImmutableTable<Integer, String, Boolean> processList(List<String> strings) {
return strings.stream()
.map(this::processText)
.flatMap(table -> table.cellSet().stream())
.collect(ImmutableTable.toImmutableTable(
Table.Cell::getRowKey,
Table.Cell::getColumnKey,
Table.Cell::getValue,
(b1, b2) -> b1 && b2 // You can omit the merge function!
));
}
private ImmutableTable<Integer, String, Boolean> processText(String text) {
return ImmutableTable.of(); // Whatever
}
This should work:
List<String> list; // given a list of String
ImmutableTable result = list.parallelStream()
    .map(processor::process) // converts String to ImmutableTable
    .collect(ImmutableTable.Builder::new, ImmutableTable.Builder::putAll,
        (a, b) -> a.putAll(b.build()))
    .build();
This reduction is threadsafe.
Or using HashBasedTable as the intermediate data structure:
ImmutableTable result = ImmutableTable.copyOf(list.parallelStream()
.map(processor::process) // converts String to ImmutableTable
.collect(HashBasedTable::create, HashBasedTable::putAll, HashBasedTable::putAll));
You should be able to do this by creating an appropriate Collector, using the Collector.of static factory method:
ImmutableTable<R, C, V> table =
list.stream()
.map(processor::process)
.collect(
Collector.of(
() -> new ImmutableTable.Builder<R, C, V>(),
(builder, table1) -> builder.putAll(table1),
(builder1, builder2) ->
new ImmutableTable.Builder<R, C, V>()
.putAll(builder1.build())
.putAll(builder2.build()),
ImmutableTable.Builder::build));
I have a Java Map that I'd like to transform and filter. As a trivial example, suppose I want to convert all values to Integers then remove the odd entries.
Map<String, String> input = new HashMap<>();
input.put("a", "1234");
input.put("b", "2345");
input.put("c", "3456");
input.put("d", "4567");
Map<String, Integer> output = input.entrySet().stream()
.collect(Collectors.toMap(
Map.Entry::getKey,
e -> Integer.parseInt(e.getValue())
))
.entrySet().stream()
.filter(e -> e.getValue() % 2 == 0)
.collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
System.out.println(output.toString());
This is correct and yields: {a=1234, c=3456}
However, I can't help but wonder if there's a way to avoid calling .entrySet().stream() twice.
Is there a way I can perform both transform and filter operations and call .collect() only once at the end?
Yes, you can map each entry to another temporary entry that will hold the key and the parsed integer value. Then you can filter each entry based on their value.
Map<String, Integer> output =
input.entrySet()
.stream()
.map(e -> new AbstractMap.SimpleEntry<>(e.getKey(), Integer.valueOf(e.getValue())))
.filter(e -> e.getValue() % 2 == 0)
.collect(Collectors.toMap(
Map.Entry::getKey,
Map.Entry::getValue
));
Note that I used Integer.valueOf instead of parseInt since we actually want a boxed int.
If you have the luxury to use the StreamEx library, you can do it quite simply:
Map<String, Integer> output =
EntryStream.of(input).mapValues(Integer::valueOf).filterValues(v -> v % 2 == 0).toMap();
One way to solve the problem with much less overhead is to move the mapping and filtering down into the collector.
Map<String, Integer> output = input.entrySet().stream().collect(
HashMap::new,
(map,e)->{ int i=Integer.parseInt(e.getValue()); if(i%2==0) map.put(e.getKey(), i); },
Map::putAll);
This does not require the creation of intermediate Map.Entry instances and even better, will postpone the boxing of int values to the point when the values are actually added to the Map, which implies that values rejected by the filter are not boxed at all.
Compared to what Collectors.toMap(…) does, the operation is also simplified by using Map.put rather than Map.merge as we know beforehand that we don’t have to handle key collisions here.
However, as long as you don’t want to utilize parallel execution you may also consider the ordinary loop
HashMap<String,Integer> output=new HashMap<>();
for(Map.Entry<String, String> e: input.entrySet()) {
int i = Integer.parseInt(e.getValue());
if(i%2==0) output.put(e.getKey(), i);
}
or the internal iteration variant:
HashMap<String,Integer> output=new HashMap<>();
input.forEach((k,v)->{ int i = Integer.parseInt(v); if(i%2==0) output.put(k, i); });
the latter being quite compact and at least on par with all other variants regarding single threaded performance.
Guava's your friend:
Map<String, Integer> output = Maps.filterValues(Maps.transformValues(input, Integer::valueOf), i -> i % 2 == 0);
Keep in mind that output is a transformed, filtered view of input. You'll need to make a copy if you want to operate on them independently.
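For example, either of these materializes the view into an independent map (a plain HashMap copies eagerly; ImmutableMap.copyOf gives a Guava immutable copy):

Map<String, Integer> snapshot = new HashMap<>(output);              // independent, mutable copy
ImmutableMap<String, Integer> frozen = ImmutableMap.copyOf(output); // independent, immutable copy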
You could use the Stream.collect(supplier, accumulator, combiner) method to transform the entries and conditionally accumulate them:
Map<String, Integer> even = input.entrySet().stream().collect(
HashMap::new,
(m, e) -> Optional.ofNullable(e)
.map(Map.Entry::getValue)
.map(Integer::valueOf)
.filter(i -> i % 2 == 0)
.ifPresent(i -> m.put(e.getKey(), i)),
Map::putAll);
System.out.println(even); // {a=1234, c=3456}
Here, inside the accumulator, I'm using Optional methods to apply both the transformation and the predicate, and, if the optional value is still present, I'm adding it to the map being collected.
Another way to do this is to remove the values you don't want from the transformed Map:
Map<String, Integer> output = input.entrySet().stream()
.collect(Collectors.toMap(
Map.Entry::getKey,
e -> Integer.parseInt(e.getValue()),
(a, b) -> { throw new AssertionError(); },
HashMap::new
));
output.values().removeIf(v -> v % 2 != 0);
This assumes you want a mutable Map as the result, if not you can probably create an immutable one from output.
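For example (Map.copyOf requires Java 10+; Collections.unmodifiableMap only wraps, so it reflects later changes to output):

Map<String, Integer> immutable = Map.copyOf(output);                 // Java 10+: independent immutable copy
Map<String, Integer> readOnly = Collections.unmodifiableMap(output); // Java 8: read-only view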
If you are transforming the values into the same type and want to modify the Map in place, this can be a lot shorter with replaceAll:
input.replaceAll((k, v) -> v + " example");
input.values().removeIf(v -> v.length() > 10);
This also assumes input is mutable.
I don't recommend doing this because it will not work for all valid Map implementations and may stop working for HashMap in the future, but you can currently use replaceAll and cast a HashMap to change the type of the values:
((Map)input).replaceAll((k, v) -> Integer.parseInt((String)v));
Map<String, Integer> output = (Map)input;
output.values().removeIf(v -> v % 2 != 0);
This will also give you type safety warnings and if you try to retrieve a value from the Map through a reference of the old type like this:
String ex = input.get("a");
It will throw a ClassCastException.
You could move the first transform part into a method to avoid the boilerplate if you expect to use it a lot:
public static <K, VO, VN, M extends Map<K, VN>> M transformValues(
Map<? extends K, ? extends VO> old,
Function<? super VO, ? extends VN> f,
Supplier<? extends M> mapFactory){
return old.entrySet().stream().collect(Collectors.toMap(
Entry::getKey,
e -> f.apply(e.getValue()),
(a, b) -> { throw new IllegalStateException("Duplicate keys for values " + a + " " + b); },
mapFactory));
}
And use it like this:
Map<String, Integer> output = transformValues(input, Integer::parseInt, HashMap::new);
output.values().removeIf(v -> v % 2 != 0);
Note that the duplicate key exception can be thrown if, for example, the old Map is an IdentityHashMap and the mapFactory creates a HashMap.
Here is code using abacus-common:
Map<String, String> input = N.asMap("a", "1234", "b", "2345", "c", "3456", "d", "4567");
Map<String, Integer> output = Stream.of(input)
.groupBy(e -> e.getKey(), e -> N.asInt(e.getValue()))
.filter(e -> e.getValue() % 2 == 0)
.toMap(Map.Entry::getKey, Map.Entry::getValue);
N.println(output.toString());
Declaration: I'm the developer of abacus-common.