I need to create all possible combinations of some kind of Key that is composed of X (in my case, 8) equally important elements. So I came up with code like this:
final LinkedList<Key> keys = new LinkedList<>();
firstElementCreator.getApplicableElements() // All creators return a Set of elements
.forEach( first -> secondElementCreator.getApplicableElements()
.forEach( second -> thirdElementCreator.getApplicableElements()
// ... more creators
.forEach( X -> keys.add( new Key( first, second, third, ..., X ) ) ) ) ) ) ) ) );
return keys;
and it's working, but there are X nested forEach calls and I have a feeling that I'm missing an easier/better/more elegant solution. Any suggestions?
Thanks in advance!
Isn't this a Cartesian product? Many libraries provide an API for it, for example Sets and Lists in Guava:
List<ApplicableElements> elementsList = Lists.newArrayList(firstElementCreator, secondElementCreator...).stream()
.map(c -> c.getApplicableElements()).collect(toList());
List<Key> keys = Lists.cartesianProduct(elementsList).stream()
.map(l -> new Key(l.get(0), l.get(1), l.get(2), l.get(3), l.get(4), l.get(5), l.get(6), l.get(7))).collect(toList());
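Since the question says every creator returns a Set, Guava's Sets.cartesianProduct (from com.google.common.collect.Sets) can also be used on the sets directly; a sketch, where Element and eighthElementCreator are assumed names, not from the question:

Set<List<Element>> combinations = Sets.cartesianProduct(
        firstElementCreator.getApplicableElements(),
        secondElementCreator.getApplicableElements(),
        // ... more creators, up to the eighth
        eighthElementCreator.getApplicableElements());

List<Key> keys = combinations.stream()
        .map(l -> new Key(l.get(0), l.get(1), l.get(2), l.get(3),
                          l.get(4), l.get(5), l.get(6), l.get(7)))
        .collect(Collectors.toList());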
Since the number of input sets is fixed (it has to match the number of arguments in the Key constructor), your solution is actually not bad.
It's more efficient and easier to read without the lambdas, though, like:
for (Element first : firstElementCreator.getApplicableElements()) {
for (Element second : secondElementCreator.getApplicableElements()) {
for (Element third : thirdElementCreator.getApplicableElements()) {
keys.add(new Key(first, second, third));
}
}
}
The canonical solution is to use flatMap. However, the tricky part is to create the Key object from the multiple input levels.
The straightforward approach is to do the evaluation in the innermost function, where every value is in scope:
final List<Key> keys = firstElementCreator.getApplicableElements().stream()
.flatMap(first -> secondElementCreator.getApplicableElements().stream()
.flatMap(second -> thirdElementCreator.getApplicableElements().stream()
// ... more creators
.map( X -> new Key( first, second, third, ..., X ) ) ) )
.collect(Collectors.toList());
but this soon becomes impractical with deep nesting
A solution without deep nesting requires elements to hold intermediate compound values. E.g. if we define Key as
class Key {
    String[] data;

    Key(String... arg) {
        data = arg;
    }

    public Key add(String next) {
        int pos = data.length;
        String[] newData = Arrays.copyOf(data, pos + 1);
        newData[pos] = next;
        return new Key(newData);
    }

    @Override
    public String toString() {
        return "Key(" + Arrays.toString(data) + ')';
    }
}
(assuming String as element type), we can use
final List<Key> keys =
firstElementCreator.getApplicableElements().stream().map(Key::new)
.flatMap(e -> secondElementCreator.getApplicableElements().stream().map(e::add))
.flatMap(e -> thirdElementCreator.getApplicableElements().stream().map(e::add))
// ... more creators
.collect(Collectors.toList());
Note that these flatMap steps are now on the same level, i.e. not nested anymore. Also, all these steps are identical, only differing in the actual creator, which leads to the general solution supporting an arbitrary number of Creator instances.
List<Key> keys = Stream.of(firstElementCreator, secondElementCreator, thirdElementCreator
/* , and, some, more, if you like */)
.map(creator -> (Function<Key,Stream<Key>>)
key -> creator.getApplicableElements().stream().map(key::add))
.reduce(Stream::of, (f1,f2) -> key -> f1.apply(key).flatMap(f2))
.apply(new Key())
.collect(Collectors.toList());
Here, every creator is mapping to the identical stream-producing function of the previous solution, then all are reduced to a single function combining each function with a flatMap step to the next one, and finally the resulting function is executed to get a stream, which is then collected to a List.
I have a small snippet of code where I want to group the results by a combination of 2 properties of the type in the stream. After appropriate filtering, I do a map where I create an instance of a simple type that holds those 2 properties (in this case called AirportDay). Now I want to group them together and order them descending by the count. The trouble I am having is coming up with the correct arguments for the groupingBy method. Here is my code so far:
final int year = getYear();
final int limit = getLimit(10, 1, 100);
repository.getFlightStream(year)
.filter(f -> f.notCancelled())
.map(f -> new AirportDay(f.getOriginAirport(), f.getDate()))
.collect(groupingBy( ????? , counting())) // stuck here
.entrySet()
.stream()
.sorted(comparingByValue(reverseOrder()))
.limit(limit)
.forEach(entry -> {
AirportDay key = entry.getKey();
printf("%-30s\t%s\t%,10d\n",
key.getAirport().getName(),
key.getDate(),
entry.getValue()
);
});
My first instinct was to pass AirportDay::this but that obviously doesn't work...
I'd appreciate any assistance you can provide in coming up with a solution to the above problem.
-Tony
If you want to group by AirportDay, provide the function to create the key to groupingBy:
repository.getFlightStream(year)
.filter(f -> f.notCancelled())
.collect(groupingBy(f -> new AirportDay(f.getOriginAirport(), f.getDate()), counting()))
Note: The AirportDay class must implement sensible equals() and hashCode() methods for this to work.
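For illustration, a minimal sketch of what such an AirportDay could look like; the Airport and LocalDate types are assumptions based on the question's getAirport()/getDate() calls:

import java.time.LocalDate;
import java.util.Objects;

final class AirportDay {
    private final Airport airport;   // Airport type assumed from the question
    private final LocalDate date;    // date type assumed

    AirportDay(Airport airport, LocalDate date) {
        this.airport = airport;
        this.date = date;
    }

    public Airport getAirport() { return airport; }
    public LocalDate getDate()  { return date; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof AirportDay)) return false;
        AirportDay that = (AirportDay) o;
        return airport.equals(that.airport) && date.equals(that.date);
    }

    @Override
    public int hashCode() {
        return Objects.hash(airport, date);
    }
}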
I have googled quite a bit, but didn't find an answer.
Here is what I have:
parentList.forEach(p -> {
childList
.stream()
.filter(c -> p.id() == c.parentId())
.<...continue working on stream...>
});
I cannot find a way to replace the "filter" part with a Predicate, as it seems that I need to pass an argument to the Predicate.
Your problem is that you're using a different Predicate each time, because although c is the parameter to your predicate, p also varies:
final Node p;
Predicate<Node> matchesParentId = c -> p.id() == c.parentId();
The reason your existing code compiles OK is that p is effectively final in the scope of the forEach block, so it can be captured by a Predicate within that scope, with a lifetime of one forEach iteration.
You could do:
parentList.forEach(p -> {
childList
.stream()
.filter(matchesParentId(p))
.<...continue working on stream...>
});
private Predicate<Node> matchesParentId(Node parent) {
    return node -> node.parentId() == parent.id();
}
But you won't be able to create one Predicate and reuse it as p varies.
You could write a BiPredicate and curry it into a Predicate. Unfortunately Java doesn't provide a curry method, so you have to provide your own.
private <T,U> Predicate<U> curry(BiPredicate<T,U> biPredicate, T t) {
return u -> biPredicate.test(t, u);
}
BiPredicate<Node,Node> nodesMatch = (a,b) -> a.id() == b.id();
parentList.forEach(p -> {
childList
.stream()
.filter(curry(nodesMatch, p))
.<...continue working on stream...>
});
This doesn't buy you all that much over and above the previous solution, but it's a bit more FP-nerdy. You're still creating a new Predicate for every p. Of course you could inline it rather than use the curry() method.
.filter(c -> nodesMatch.test(p, c))
It does mean you could have a selection of BiPredicate<Node,Node>s to plug in dynamically. If your BiPredicate were expensive to initialise, the many Predicates wrapped around it by currying would be cheap.
Or, you could map p and c into a single object, which allows you to submit the whole thing to a predicate:
Predicate<Pair<Node,Node>> nodesMatch = pair ->
pair.left().id() == pair.right().id();
parentList.forEach(p -> {
childList
.stream()
.map(c -> new Pair<>(c, p))
.filter(nodesMatch)
.map( pair -> pair.left() )
.<...continue working on stream...>
});
(Pair here is hypothetical, but a number of third-party libraries (e.g. Apache Commons Lang) provide one, or roll your own, or use new Node[] { c, p }.)
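If you roll your own, a minimal sketch that matches the left()/right() calls above could look like this (purely illustrative):

final class Pair<L, R> {
    private final L left;
    private final R right;

    Pair(L left, R right) {
        this.left = left;
        this.right = right;
    }

    L left()  { return left; }
    R right() { return right; }
}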
I would be curious to know how to propagate a variable into a stream in Java 8.
An example is better than a long explanation, so how would you convert the following (abstract) code into streams:
Map<Integer, A> myMap = new HashMap<>();
for (Entry<Integer, A> entry : myMap.entrySet())
{
    int param1 = entry.getValue().getParam1();
    List<B> param2 = entry.getValue().getParam2();
    for (B b : param2)
    {
        System.out.println("" + entry.getKey() + "-" + param1 + "-" + b.toString());
    }
}
Keep in mind that this example is a simplification of the problem (for example, I need "param1" more than once in the inner for loop).
So far, the only idea I have is to store all the information I need in a tuple and finally use the forEach stream method over that tuple.
(Not sure I'm being very clear...)
Edit: I simplified my example too much. My case is more like this:
Map<Integer, A> myMap = new HashMap<>();
for (Entry<Integer, A> entry : myMap.entrySet())
{
    int param1 = entry.getValue().getParam1();
    CustomList param2 = entry.getValue().getParam2();
    for (int i = 0; i < param2.size(); i++)
    {
        System.out.println("" + entry.getKey() + "-" + param1 + "-" + param2.get(i).toString());
    }
}
I could write something like this with streams:
myMap.entrySet().stream()
    .forEach(
        e -> IntStream.range(0, e.getValue().getParam2().size())
            .forEach(
                i -> System.out.println(e.getKey() + "-" + e.getValue().getParam1() + "-" + e.getValue().getParam2().get(i))
            )
    );
However, what I have instead of "e.getValue().getParam2()" in my real case is much more complex (a sequence of 5-6 method calls) and heavier than just retrieving a variable (it executes some logic), so I would like to avoid repeating e.getValue().getParam2() (once just before the forEach, and once inside the forEach).
I know that it's maybe not the best use case for streams, but I am learning about them and would like to know about the limits.
Thanks!
Something like this:
myMap.forEach(
(key, value) -> value.getParam2().forEach(
b -> System.out.println(key+"-"+value.getParam1()+"-"+b)
)
);
That is, for each key/value pair, iterate through value.getParam2(). For each one of those, print out string formatted as you specified. I'm not sure what that gets you, other than being basically what you had before, but using streams.
Update
Responding to updates to your question, this:
myMap.forEach((key, value) -> {
    final CustomList param2 = value.getParam2();
    IntStream.range(0, param2.size()).forEach(
        i -> System.out.println(key + "-" + value.getParam1() + "-" + param2.get(i))
    );
});
Here we assign the result of getParam2() to a final variable, so it is only calculated once. Final (and effectively final) variables are visible inside lambda functions.
(Thank you to Holger for the suggestions.)
Note that there are more features in the Java 8 API than just streams. Especially, if you just want to process all elements of a collection, you don’t need streams.
You can simplify every form of coll.stream().forEach(consumer) to coll.forEach(consumer). This applies to map.entrySet() as well, however, if you want to process all mappings of a Map, you can use forEach on the Map directly, providing a BiConsumer<KeyType,ValueType> rather than a Consumer<Map.Entry<KeyType,ValueType>>, which can greatly improve the readability:
myMap.forEach((key, value) -> {
int param1 = value.getParam1();
CustomList param2 = value.getParam2();
IntStream.range(0, param2.size()).mapToObj(param2::get)
.forEach(obj -> System.out.println(key+"-"+param1+"-"+obj));
});
It’s worth thinking about adding a forEach(Consumer<ElementType>) method to your CustomList, even if the CustomList doesn’t support the other standard collection operations…
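A sketch of what such a method could look like, assuming CustomList is index-based with the size() and get(int) seen in the question (the element type B is also an assumption):

public void forEach(Consumer<? super B> action) {
    for (int i = 0; i < size(); i++) {
        action.accept(get(i));
    }
}

With that in place, the map-processing code above reduces to value.getParam2().forEach(obj -> System.out.println(key + "-" + param1 + "-" + obj));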
Given a list of elements, I want to get the element with a given property and remove it from the list. The best solution I found is:
ProducerDTO p = producersProcedureActive
.stream()
.filter(producer -> producer.getPod().equals(pod))
.findFirst()
.get();
producersProcedureActive.remove(p);
Is it possible to combine get and remove in a lambda expression?
To remove an element from the list:
objectA.removeIf(x -> conditions);
e.g.:
objectA.removeIf(x -> blockedWorkerIds.contains(x));
List<String> str1 = new ArrayList<String>();
str1.add("A");
str1.add("B");
str1.add("C");
str1.add("D");
List<String> str2 = new ArrayList<String>();
str2.add("D");
str2.add("E");
str1.removeIf(x -> str2.contains(x));
str1.forEach(System.out::println);
OUTPUT:
A
B
C
Although the thread is quite old, I still thought I'd provide a solution, using Java 8.
Make use of the removeIf function. Time complexity is O(n).
producersProcedureActive.removeIf(producer -> producer.getPod().equals(pod));
API reference: removeIf docs
Assumption: producersProcedureActive is a List
NOTE: With this approach you won't be able to get hold of the deleted item.
Consider using vanilla Java iterators to perform the task:
public static <T> T findAndRemoveFirst(Iterable<? extends T> collection, Predicate<? super T> test) {
T value = null;
for (Iterator<? extends T> it = collection.iterator(); it.hasNext();)
if (test.test(value = it.next())) {
it.remove();
return value;
}
return null;
}
Advantages:
It is plain and obvious.
It traverses only once and only up to the matching element.
You can do it on any Iterable even without stream() support (at least those implementing remove() on their iterator).
Disadvantages:
You cannot do it in place as a single expression (auxiliary method or variable required)
As for the
Is it possible to combine get and remove in a lambda expression?
other answers clearly show that it is possible, but you should be aware of
Search and removal may traverse the list twice
ConcurrentModificationException may be thrown when removing an element from the list being iterated
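For illustration, a possible usage of the findAndRemoveFirst helper above, using the names from the question (producersProcedureActive, getPod, pod); a null result means no match was found:

ProducerDTO removed = findAndRemoveFirst(producersProcedureActive,
        producer -> producer.getPod().equals(pod));
if (removed != null) {
    // removed is the element that was taken out of producersProcedureActive
}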
The direct solution would be to invoke ifPresent(consumer) on the Optional returned by findFirst(). This consumer will be invoked when the optional is not empty. The benefit also is that it won't throw an exception if the find operation returned an empty optional, like your current code would do; instead, nothing will happen.
If you want to return the removed value, you can map the Optional to the result of calling remove:
producersProcedureActive.stream()
.filter(producer -> producer.getPod().equals(pod))
.findFirst()
.map(p -> {
producersProcedureActive.remove(p);
return p;
});
But note that the remove(Object) operation will again traverse the list to find the element to remove. If you have a list with random access, like an ArrayList, it would be better to make a Stream over the indexes of the list and find the first index matching the predicate:
IntStream.range(0, producersProcedureActive.size())
.filter(i -> producersProcedureActive.get(i).getPod().equals(pod))
.boxed()
.findFirst()
.map(i -> producersProcedureActive.remove((int) i));
With this solution, the remove(int) operation operates directly on the index.
You can use Java 8's filter, and create another list if you don't want to change the old list:
List<ProducerDTO> result = producersProcedureActive
.stream()
.filter(producer -> producer.getPod().equals(pod))
.collect(Collectors.toList());
I'm sure this will be an unpopular answer, but it works...
ProducerDTO[] p = new ProducerDTO[1];
producersProcedureActive
.stream()
.filter(producer -> producer.getPod().equals(pod))
.findFirst()
.ifPresent(producer -> { producersProcedureActive.remove(producer); p[0] = producer; });
p[0] will either hold the found element or be null.
The "trick" here is circumventing the "effectively final" problem by using an array reference that is effectively final, but setting its first element.
With Eclipse Collections you can use detectIndex along with remove(int) on any java.util.List.
List<Integer> integers = Lists.mutable.with(1, 2, 3, 4, 5);
int index = Iterate.detectIndex(integers, i -> i > 2);
if (index > -1) {
integers.remove(index);
}
Assert.assertEquals(Lists.mutable.with(1, 2, 4, 5), integers);
If you use the MutableList type from Eclipse Collections, you can call the detectIndex method directly on the list.
MutableList<Integer> integers = Lists.mutable.with(1, 2, 3, 4, 5);
int index = integers.detectIndex(i -> i > 2);
if (index > -1) {
integers.remove(index);
}
Assert.assertEquals(Lists.mutable.with(1, 2, 4, 5), integers);
Note: I am a committer for Eclipse Collections
The logic below is a solution that does not modify the original list:
List<String> str1 = new ArrayList<String>();
str1.add("A");
str1.add("B");
str1.add("C");
str1.add("D");
List<String> str2 = new ArrayList<String>();
str2.add("D");
str2.add("E");
List<String> str3 = str1.stream()
.filter(item -> !str2.contains(item))
.collect(Collectors.toList());
str1 // ["A", "B", "C", "D"]
str2 // ["D", "E"]
str3 // ["A", "B", "C"]
For the case where you want to get multiple elements from a List into a new list (filtering with a predicate) and remove them from the existing list, I could not find a proper answer anywhere.
Here is how we can do it using Java Streaming API partitioning.
Map<Boolean, List<ProducerDTO>> classifiedElements = producersProcedureActive
.stream()
.collect(Collectors.partitioningBy(producer -> producer.getPod().equals(pod)));
// get two new lists
List<ProducerDTO> matching = classifiedElements.get(true);
List<ProducerDTO> nonMatching = classifiedElements.get(false);
// OR get non-matching elements to the existing list
producersProcedureActive = classifiedElements.get(false);
This way you effectively remove the filtered elements from the original list and add them to a new list.
Refer to the 5.2. Collectors.partitioningBy section of this article.
As others have suggested, this might be a use case for loops and iterables. In my opinion, this is the simplest approach. If you want to modify the list in-place, it cannot be considered "real" functional programming anyway. But you could use Collectors.partitioningBy() in order to get a new list with elements which satisfy your condition, and a new list of those which don't. Of course with this approach, if you have multiple elements satisfying the condition, all of those will be in that list and not only the first.
The task is: get *and* remove an element from the list.
p.stream().collect( Collectors.collectingAndThen( Collector.of(
ArrayDeque::new,
(a, producer) -> {
if( producer.getPod().equals( pod ) )
a.addLast( producer );
},
(a1, a2) -> {
return( a1 );
},
rslt -> rslt.pollFirst()
),
(e) -> {
if( e != null )
p.remove( e ); // remove
return( e ); // get
} ) );
resumoRemessaPorInstrucoes.removeIf(item ->
item.getTipoOcorrenciaRegistro() == TipoOcorrenciaRegistroRemessa.PEDIDO_PROTESTO.getNome() ||
item.getTipoOcorrenciaRegistro() == TipoOcorrenciaRegistroRemessa.SUSTAR_PROTESTO_BAIXAR_TITULO.getNome());
Combining my initial idea and your answers I reached what seems to be the solution
to my own question:
public ProducerDTO findAndRemove(String pod) {
ProducerDTO p = null;
try {
p = IntStream.range(0, producersProcedureActive.size())
.filter(i -> producersProcedureActive.get(i).getPod().equals(pod))
.boxed()
.findFirst()
.map(i -> producersProcedureActive.remove((int)i))
.get();
logger.debug(p);
} catch (NoSuchElementException e) {
logger.error("No producer found with POD [" + pod + "]");
}
return p;
}
It removes the object using remove(int), which does not traverse the list again (as suggested by @Tunaki), and it returns the removed object to the function caller.
I read your answers that suggest choosing safe methods like ifPresent instead of get, but I did not find a way to use them in this scenario.
Are there any important drawbacks in this kind of solution?
Edit following @Holger's advice
This should be the function I needed
public ProducerDTO findAndRemove(String pod) {
return IntStream.range(0, producersProcedureActive.size())
.filter(i -> producersProcedureActive.get(i).getPod().equals(pod))
.boxed()
.findFirst()
.map(i -> producersProcedureActive.remove((int)i))
.orElseGet(() -> {
logger.error("No producer found with POD [" + pod + "]");
return null;
});
}
A variation of the above:
import static java.util.function.Predicate.not;
final Optional<MyItem> myItem = originalCollection.stream().filter(myPredicate(someInfo)).findFirst();
final List<MyItem> myOtherItems = originalCollection.stream().filter(not(myPredicate(someInfo))).toList();
private Predicate<MyItem> myPredicate(Object someInfo) {
return myItem -> myItem.someField() == someInfo;
}
Using Java 8 lambdas, what's the "best" way to effectively create a new List<T> given a List<K> of possible keys and a Map<K,V>? This is the scenario where you are given a List of possible Map keys and are expected to generate a List<T> where T is some type that is constructed based on some aspect of V, the map value types.
I've explored a few and don't feel comfortable claiming one way is better than another (with maybe one exception -- see code). I'll clarify "best" as a combination of code clarity and runtime efficiency. These are what I came up with. I'm sure someone can do better, which is one aspect of this question. I don't like the filter aspect of most as it means needing to create intermediate structures and multiple passes over the names List. Right now, I'm opting for Example 6 -- a plain 'ol loop. (NOTE: Some cryptic thoughts are in the code comments, especially "need to reference externally..." This means external from the lambda.)
public class Java8Mapping {
private final Map<String,Wongo> nameToWongoMap = new HashMap<>();
public Java8Mapping(){
List<String> names = Arrays.asList("abbey","normal","hans","delbrook");
List<String> types = Arrays.asList("crazy","boring","shocking","dead");
for(int i=0; i<names.size(); i++){
nameToWongoMap.put(names.get(i),new Wongo(names.get(i),types.get(i)));
}
}
public static void main(String[] args) {
System.out.println("in main");
Java8Mapping j = new Java8Mapping();
List<String> testNames = Arrays.asList("abbey", "froderick","igor");
System.out.println(j.getBongosExample1(testNames).stream().map(Bongo::toString).collect(Collectors.joining(", ")));
System.out.println(j.getBongosExample2(testNames).stream().map(Bongo::toString).collect(Collectors.joining(", ")));
System.out.println(j.getBongosExample3(testNames).stream().map(Bongo::toString).collect(Collectors.joining(", ")));
System.out.println(j.getBongosExample4(testNames).stream().map(Bongo::toString).collect(Collectors.joining(", ")));
System.out.println(j.getBongosExample5(testNames).stream().map(Bongo::toString).collect(Collectors.joining(", ")));
System.out.println(j.getBongosExample6(testNames).stream().map(Bongo::toString).collect(Collectors.joining(", ")));
}
private static class Wongo{
String name;
String type;
public Wongo(String s, String t){name=s;type=t;}
#Override public String toString(){return "Wongo{name="+name+", type="+type+"}";}
}
private static class Bongo{
Wongo wongo;
public Bongo(Wongo w){wongo = w;}
#Override public String toString(){ return "Bongo{wongo="+wongo+"}";}
}
// 1: Create a list externally and add items inside 'forEach'.
// Needs to externally reference Map and List
public List<Bongo> getBongosExample1(List<String> names){
final List<Bongo> listOne = new ArrayList<>();
names.forEach(s -> {
Wongo w = nameToWongoMap.get(s);
if(w != null) {
listOne.add(new Bongo(nameToWongoMap.get(s)));
}
});
return listOne;
}
// 2: Use stream().map().collect()
// Needs to externally reference Map
public List<Bongo> getBongosExample2(List<String> names){
return names.stream()
.filter(s -> nameToWongoMap.get(s) != null)
.map(s -> new Bongo(nameToWongoMap.get(s)))
.collect(Collectors.toList());
}
// 3: Create custom Collector
// Needs to externally reference Map
public List<Bongo> getBongosExample3(List<String> names){
Function<List<Wongo>,List<Bongo>> finisher = list -> list.stream().map(Bongo::new).collect(Collectors.toList());
Collector<String,List<Wongo>,List<Bongo>> bongoCollector =
Collector.of(ArrayList::new,getAccumulator(),getCombiner(),finisher, Characteristics.UNORDERED);
return names.stream().collect(bongoCollector);
}
// example 3 helper code
private BiConsumer<List<Wongo>,String> getAccumulator(){
return (list,string) -> {
Wongo w = nameToWongoMap.get(string);
if(w != null){
list.add(w);
}
};
}
// example 3 helper code
private BinaryOperator<List<Wongo>> getCombiner(){
return (l1,l2) -> {
l1.addAll(l2);
return l1;
};
}
// 4: Use internal Bongo creation facility
public List<Bongo> getBongosExample4(List<String> names){
return names.stream().filter(s->nameToWongoMap.get(s) != null).map(s-> new Bongo(nameToWongoMap.get(s))).collect(Collectors.toList());
}
// 5: Stream the Map EntrySet. This avoids referring to anything outside of the stream,
// but bypasses the lookup benefit from Map.
public List<Bongo> getBongosExample5(List<String> names){
return nameToWongoMap.entrySet().stream().filter(e->names.contains(e.getKey())).map(e -> new Bongo(e.getValue())).collect(Collectors.toList());
}
// 6: Plain-ol-java loop
public List<Bongo> getBongosExample6(List<String> names){
List<Bongo> bongos = new ArrayList<>();
for(String s : names){
Wongo w = nameToWongoMap.get(s);
if(w != null){
bongos.add(new Bongo(w));
}
}
return bongos;
}
}
If nameToWongoMap is an instance variable, you can't really avoid a capturing lambda.
You can clean up the stream by splitting up the operations a little more:
return names.stream()
.map(n -> nameToWongoMap.get(n))
.filter(w -> w != null)
.map(w -> new Bongo(w))
.collect(toList());
return names.stream()
.map(nameToWongoMap::get)
.filter(Objects::nonNull)
.map(Bongo::new)
.collect(toList());
That way you don't call get twice.
This is very much like the for loop, except, for example, it could theoretically be parallelized if nameToWongoMap isn't being mutated concurrently.
I don't like the filter aspect of most as it means needing to create intermediate structures and multiple passes over the names List.
There are no intermediate structures and there is only one pass over the List. A stream pipeline says "for each element...do this sequence of operations". Each element is visited once and the pipeline is applied.
Here are some relevant quotes from the java.util.stream package description:
A stream is not a data structure that stores elements; instead, it conveys elements from a source such as a data structure, an array, a generator function, or an I/O channel, through a pipeline of computational operations.
Processing streams lazily allows for significant efficiencies; in a pipeline such as the filter-map-sum example above, filtering, mapping, and summing can be fused into a single pass on the data, with minimal intermediate state.
Radiodef's answer pretty much nailed it, I think. The solution given there:
return names.stream()
.map(nameToWongoMap::get)
.filter(Objects::nonNull)
.map(Bongo::new)
.collect(toList());
is probably about the best that can be done in Java 8.
I did want to mention a small wrinkle in this, though. The Map.get call returns null if the name isn't present in the map, and this is subsequently filtered out. There's nothing wrong with this per se, though it does bake null-means-not-present semantics into the pipeline structure.
In some sense we'd want a mapper pipeline operation that has a choice of returning zero or one elements. A way to do this with streams is with flatMap. The flatmapper function can return an arbitrary number of elements into the stream, but in this case we want just zero or one. Here's how to do that:
return names.stream()
.flatMap(name -> {
Wongo w = nameToWongoMap.get(name);
return w == null ? Stream.empty() : Stream.of(w);
})
.map(Bongo::new)
.collect(toList());
I admit this is pretty clunky and so I wouldn't recommend doing this. A slightly better but somewhat obscure approach is this:
return names.stream()
.flatMap(name -> Optional.ofNullable(nameToWongoMap.get(name))
.map(Stream::of).orElseGet(Stream::empty))
.map(Bongo::new)
.collect(toList());
but I'm still not sure I'd recommend this as it stands.
The use of flatMap does point to another approach, though. If you have a more complicated policy of how to deal with the not-present case, you could refactor this into a helper function that returns a Stream containing the result or an empty Stream if there's no result.
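For example, a sketch of such a helper; the name lookupWongo and the exact not-present handling are illustrative, not from the answer:

private Stream<Wongo> lookupWongo(String name) {
    Wongo w = nameToWongoMap.get(name);
    if (w == null) {
        // apply whatever not-present policy is needed here (logging, defaults, ...)
        return Stream.empty();
    }
    return Stream.of(w);
}

// the pipeline then stays flat:
return names.stream()
        .flatMap(this::lookupWongo)
        .map(Bongo::new)
        .collect(toList());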
Finally, JDK 9 -- still under development as of this writing -- has added Stream.ofNullable which is useful in exactly these situations:
return names.stream()
.flatMap(name -> Stream.ofNullable(nameToWongoMap.get(name)))
.map(Bongo::new)
.collect(toList());
As an aside, JDK 9 has also added Optional.stream which creates a zero-or-one stream from an Optional. This is useful in cases where you want to call an Optional-returning function from within flatMap. See this answer and this answer for more discussion.
One approach I didn't see is retainAll:
public List<Bongo> getBongos(List<String> names) {
Map<String, Wongo> copy = new HashMap<>(nameToWongoMap);
copy.keySet().retainAll(names);
return copy.values().stream().map(Bongo::new).collect(
Collectors.toList());
}
The extra Map is a minimal performance hit, since it's just copying pointers to objects, not the objects themselves.