Does Guava (or another Java library) have something like Python's reduce() function?
I'm looking for something like this: http://docs.python.org/library/functions.html#reduce
No. It might eventually, though functional stuff like that isn't a core focus of Guava. See this issue.
I've not (yet) managed to find any Java collections libraries that support map and reduce. (I exclude map/reduce functionality in parallel / distributed processing frameworks ... because you need a "big" problem for these frameworks to be worthwhile.)
The likely reason for this gap is that map/reduce coding without closures is just too cumbersome: too much boilerplate, too much heavyweight syntax. Since the main point of using map/reduce primitives on simple collections is to make your code simple and elegant, the exercise defeats its own purpose.
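To make that boilerplate concrete, here is a minimal sketch of a pre-Java-8 map with a hand-rolled Function interface (the interface and the map helper are illustrative stand-ins, not Guava's actual API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class BoilerplateDemo {
    // Stand-in for the SAM "function" type libraries had to define pre-Java-8
    public interface Function<F, T> {
        T apply(F input);
    }

    // A map over a List: loop, apply, collect
    public static <F, T> List<T> map(List<F> input, Function<F, T> fn) {
        List<T> out = new ArrayList<>();
        for (F f : input) {
            out.add(fn.apply(f));
        }
        return out;
    }

    public static void main(String[] args) {
        // Without closures, even a trivial "add one" needs an anonymous class
        List<Integer> result = map(Arrays.asList(1, 2, 3),
                new Function<Integer, Integer>() {
                    @Override
                    public Integer apply(Integer x) {
                        return x + 1;
                    }
                });
        System.out.println(result); // [2, 3, 4]
    }
}
```

Several lines of ceremony for one line of logic is exactly the trade-off described above.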
@CurtainDog contributed a link to lambdaj. That does the kind of thing the OP is after (though there's no method specifically called reduce). But it illustrates what I was saying about boilerplate: notice that many of the higher-order operations involve creating classes that extend one or another of the Closure classes.
(FWIW, I think that the Lambda.aggregate(...) methods are the lambdaj analog of reduce.)
Java 8 streams allow you to do this.
mylist.stream().map((x) -> x + 1).reduce((a,b) -> a + b)
For more information: http://docs.oracle.com/javase/8/docs/api/java/util/stream/package-summary.html
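A self-contained sketch of that one-liner (the list contents here are arbitrary):

```java
import java.util.Arrays;
import java.util.List;

public class StreamReduceDemo {
    public static void main(String[] args) {
        List<Integer> mylist = Arrays.asList(1, 2, 3, 4);
        // map each element to x + 1, then fold with addition;
        // reduce without an identity returns an Optional, so supply a default
        int sum = mylist.stream()
                .map(x -> x + 1)
                .reduce((a, b) -> a + b)
                .orElse(0);
        System.out.println(sum); // 14
    }
}
```

Alternatively, reduce(0, (a, b) -> a + b) takes an identity value and returns a plain int instead of an Optional.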
I have recently submitted an issue where I requested / discussed something similar. This is what would be needed in my implementation:
/**
 * Aggregate the selected values from the supplied {@link Iterable} using
 * the provided selector and aggregator functions.
 *
 * @param <I>
 *            the element type over which to iterate
 * @param <S>
 *            type of the values to be aggregated
 * @param <A>
 *            type of the aggregated value
 * @param data
 *            elements for aggregation
 * @param selectorFunction
 *            a selector function that extracts the values to be aggregated
 *            from the elements
 * @param aggregatorFunction
 *            function that performs the aggregation on the selected values
 * @return the aggregated value
 */
public static <I, S, A> A aggregate(final Iterable<I> data,
final Function<I, S> selectorFunction,
final Function<Iterable<S>, A> aggregatorFunction){
checkNotNull(aggregatorFunction);
return aggregatorFunction.apply(
Iterables.transform(data, selectorFunction)
);
}
(The selector function can pull the value to aggregate from the object to query, but in many cases it will be Functions.identity(), i.e. the object itself is what's aggregated)
This is not a classic fold, but it requires a Function<Iterable<X>,X> to do the work. But since the actual code is a one-liner, I have instead chosen to request some standard aggregator functions (I'd put them in a class called something like Aggregators, AggregatorFunctions or even Functions.Aggregators):
/** A Function that returns the average length of the Strings in an Iterable. */
public static Function<Iterable<String>,Integer> averageLength()
/** A Function that returns a BigDecimal that corresponds to the average
of all numeric values passed from the iterable. */
public static Function<Iterable<? extends Number>,BigDecimal> averageOfFloats()
/** A Function that returns a BigInteger that corresponds to the average
of all numeric values passed from the iterable. */
public static Function<Iterable<? extends Number>,BigInteger> averageOfIntegers()
/** A Function that returns the length of the longest String in an Iterable. */
public static Function<Iterable<String>,Integer> maxLength()
/** A Function that returns the length of the shortest String in an Iterable. */
public static Function<Iterable<String>,Integer> minLength()
/** A Function that returns a BigDecimal that corresponds to the sum of all
numeric values passed from the iterable. */
public static Function<Iterable<? extends Number>,BigDecimal> sumOfFloats()
/** A Function that returns a BigInteger that corresponds to the integer sum
of all numeric values passed from the iterable. */
public static Function<Iterable<? extends Number>,BigInteger> sumOfIntegers()
(You can see my sample implementations in the issue)
That way, you can do things like this:
int[] numbers = { 1, 5, 6, 9, 11111, 54764576, 425623 };
int sum = Aggregators.sumOfIntegers().apply(Ints.asList(numbers)).intValue();
This is definitely not what you are asking for, but it would make life easier in many cases and would overlap with your request (even if the approach is different).
Jedi has a reduce operation. Jedi also helps reduce the boilerplate by using annotations to generate functors for you. See these examples.
Guava has transform (map). Seems like reduce is missing though?
I have developed a library to do map/filter/reduce with standard J2SE.
Sorry, it is in French, but with Google Translate you can read it:
http://caron-yann.developpez.com/tutoriels/java/fonction-object-design-pattern-attendant-closures-java-8/
You can use it like this:
int sum = dogs.filter(new Predicate<Arguments2<Dog, Integer>>() {
    @Override
    public Boolean invoke(Arguments2<Dog, Integer> arguments) {
        // keep only the males
        return arguments.getArgument1().getGender() == Dog.Gender.MALE;
    }
}).<Integer>map(new Function<Integer, Arguments2<Dog, Integer>>() {
    @Override
    public Integer invoke(Arguments2<Dog, Integer> arguments) {
        // extract the ages
        return arguments.getArgument1().getAge();
    }
}).reduce(new Function<Integer, Arguments2<Integer, Integer>>() {
    @Override
    public Integer invoke(Arguments2<Integer, Integer> arguments) {
        // sum the ages
        return arguments.getArgument1() + arguments.getArgument2();
    }
});
System.out.println("The total age of the males is: " + sum + " years");
Hope this helps.
Use Totally Lazy; it implements all of those things and even more.
It basically copies the whole functional approach from Clojure.
Given the following context:
public interface IAdditive<T> {
/** True if this object is still capable of crunching. */
boolean canCrunch();
/** True if this object can crunch with other. */
boolean canCrunch(T other);
/** Returns a new T which is the sum of this and other */
T crunch(T other);
}
class A implements IAdditive<A> {
..
A crunch(A other) {...}
}
class B extends A {
...
B crunch(B other) {...}
}
class C implements IAdditive<C> {
...
C crunch(C other) {...}
}
Now I want to "crunch" a Stream of implementations:
/** Crunches the stream where possible */
public Stream<A> crunchStream(Stream s) {
return s.map(...);
}
I am stuck with my rather naive approach:
public Set<A> collect(Stream<A> stream) {
Set<I> res = new HashSet<>();
Set<I> set = stream
.filter(IAdditive::canCrunch)
.collect(Collectors.toSet());
set.forEach(setItem -> set.stream()
.filter(concurrentItem -> concurrentItem.canCrunch(setItem))
.map(setItem::crunch)
.forEach(res::add));
return res;
}
This seems flawed: I am unfolding the stream and adding unnecessary complexity, and if I wanted the interface to offer this in a default method I would have to use raw types.
I believe I could use some help :-)
Based on your comments, I think this is what you want:
public static interface Additive<T> {
default public Additive<T> crunch(Additive<T> other) { return null; }
}
public static class A implements Additive<A> {};
public static class B implements Additive<B> {};
public static class C implements Additive<C> {};
/**
 * Takes a stream of arbitrary Additives and returns a Set containing
 * only one crunched Additive for each type
 *
 * @param stream additives to crunch
 * @return crunched set
 */
public Set<Additive<?>> crunch(Stream<Additive<?>> stream) {
return stream
.collect(Collectors.groupingBy(o -> o.getClass()))
.values().stream()
.map(values -> values.stream().reduce(Additive::crunch).get())
.collect(Collectors.toSet());
}
/**
 * Takes a stream of arbitrary Additives and returns a Set containing
 * only one crunched Additive for each type
 *
 * @param stream additives to crunch
 * @return crunched set
 */
public Collection<Additive<Object>> crunchMap(Stream<Additive<?>> stream) {
return stream
.collect(Collectors.toMap(k -> k.getClass(), v -> (Additive<Object>) v, (a, b) -> a.crunch(b))).values();
}
Both methods should produce the desired output. They take a Stream containing arbitrary Additives, group them by actual type, crunch the ones of the same type into a single object, and finally return a Set or Collection containing one object for each type of Additive.
I have laid out two different approaches. The first groups into a map and then crunches all similar types; this is perhaps the easier method to understand.
The second approach uses a mapping collector, mapping each Additive to its class as the key and doing a no-op for the value; then, on a key collision, it crunches the new value with the old value and puts the result into the map. It is a bit more involved, a bit harder to read, and needs a bit more generics-fu to actually get working.
Note that passing a Stream as a parameter doesn't have any benefit - just pass the collection. In fact, passing streams is discouraged, as you can never be sure who has already operated on the stream, which leads to exceptions when the stream has already been consumed.
Note that I do not use your canCrunch method, but rely on the fact that each type can crunch itself and only itself. This is far easier to enforce and deal with than objects of the same type not being able to be crunched. If you want that, you need some way to distinguish them and to change the classifier accordingly.
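To sanity-check the groupingBy variant, here is a self-contained sketch with simplified value-carrying Additives (the record types A and B and the value() accessor are illustrative stand-ins for the original classes; records require Java 16+):

```java
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class CrunchDemo {
    // A simplified Additive carrying an int so the result is observable
    interface Additive {
        Additive crunch(Additive other);
        int value();
    }

    record A(int value) implements Additive {
        public Additive crunch(Additive other) { return new A(value + other.value()); }
    }

    record B(int value) implements Additive {
        public Additive crunch(Additive other) { return new B(value + other.value()); }
    }

    // Group by runtime class, then reduce each group with crunch
    static Set<Additive> crunch(Stream<Additive> stream) {
        return stream
                .collect(Collectors.groupingBy(Object::getClass))
                .values().stream()
                .map(values -> values.stream().reduce(Additive::crunch).get())
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        Set<Additive> result = crunch(Stream.of(new A(1), new A(2), new B(10)));
        // one element per type: an A(3) and a B(10)
        System.out.println(result.size()); // 2
    }
}
```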
I'm having some trouble understanding the validation library, io.vavr.control.Validation. At the risk of asking too broad a question, I do have several sub-questions—however I believe they are closely related and would piece together to help me understand the proper way to use this validation mechanism.
I started with the example here: https://softwaremill.com/javaslang-data-validation.
Validation<String, ValidRegistrationRequest> validate(RegistrationRequest request) {
return combine(
validateCardId(request.getCardId()),
validateTicketType(request.getTicketType()),
validateGuestId(request.getGuestId())
)
.ap(ValidRegistrationRequest::new)
.mapError(this::errorsAsJson);
}
private Validation<String, Card> validateCardId(String cardId) {
// validate cardId
// if correct then return an instance of entity the cardId corresponds to
}
private Validation<String, TicketType> validateTicketType(String ticketType) {
// validate ticketType
// if known then return enumeration representing the ticket
}
private Validation<String, Guest> validateGuestId(String guestId) {
// validate guestId
// if correct then return an instance of entity the questId corresponds to
}
At first, I didn't understand where the generic parameters for Validation<String, ValidRegistrationRequest> came from. I now understand that they are linked to the return types of the methods passed to mapError and ap, respectively. But:
How does combine know to return Validation<String, ValidRegistrationRequest>? I feel the only way this is possible is if combine is actually Validation<String, ValidRegistrationRequest>::combine, so that ap and mapError are defined from this template. But I don't believe the compiler can infer that combine refers to a static method in the class of the return type. What's happening here?
[Minor] What is the use case for using a ValidRegistrationRequest as opposed to just RegistrationRequest again? I'm tempted to do the latter in my coding, until I see an example.
A second example I was reading about is here: http://www.vavr.io/vavr-docs/#_validation.
class PersonValidator {
private static final String VALID_NAME_CHARS = "[a-zA-Z ]";
private static final int MIN_AGE = 0;
public Validation<Seq<String>, Person> validatePerson(String name, int age) {
return Validation.combine(validateName(name), validateAge(age)).ap(Person::new);
}
private Validation<String, String> validateName(String name) {
return CharSeq.of(name).replaceAll(VALID_NAME_CHARS, "").transform(seq -> seq.isEmpty()
? Validation.valid(name)
: Validation.invalid("Name contains invalid characters: '"
+ seq.distinct().sorted() + "'"));
}
private Validation<String, Integer> validateAge(int age) {
return age < MIN_AGE
? Validation.invalid("Age must be at least " + MIN_AGE)
: Validation.valid(age);
}
}
Where did Seq come from? Is that the default when no mapError is supplied? But I'm looking at the decompiled .class file for Validation.class, and the only reference to Seq is here:
static <E, T> Validation<List<E>, Seq<T>> sequence(Iterable<? extends Validation<List<E>, T>> values) {
Objects.requireNonNull(values, "values is null");
List<E> errors = List.empty();
List<T> list = List.empty();
Iterator var3 = values.iterator();
while(var3.hasNext()) {
Validation<List<E>, T> value = (Validation)var3.next();
if (value.isInvalid()) {
errors = errors.prependAll(((List)value.getError()).reverse());
} else if (errors.isEmpty()) {
list = list.prepend(value.get());
}
}
return errors.isEmpty() ? valid(list.reverse()) : invalid(errors.reverse());
}
I don't think that is relevant, though. Perhaps I'm using an outdated Validation? (It is, after all, javaslang.control.Validation in my imports, not io.vavr.control.Validation.)
I had this question for both examples: How does combine know which parameters to pass to the constructor (ap), and in what order? Is the answer, "All its parameters, in the order given"?
Thanks in advance.
You have the same questions and doubts I had when I was looking into the validation mechanism of Vavr for the first time.
Here are my responses to the first two questions:
The combine(...) method returns an instance of a validation builder; in this case a Builder3 class holding the three results of the validate*(...) functions. The ap(...) method is a method of this builder and triggers the building of the Validation instance.
When it is called, the validation results are applied, one by one, to a curried version of the function provided as an argument:
v3.ap(v2.ap(v1.ap(Validation.valid(f.curried()))))
In the example, f is the constructor of the ValidRegistrationRequest class. In the end, we have a validation holding the valid request instance.
On the other hand, if any of the results is invalid, the method creates an invalid result with a list of error messages. Calling mapError(this::errorsAsJson) (on the Validation instance this time!) then transforms it into JSON format.
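The currying mechanics at work here can be sketched with plain java.util.function types (this illustrates the idea only; it is not Vavr's actual implementation):

```java
import java.util.function.Function;

public class CurryDemo {
    public static void main(String[] args) {
        // A 3-argument "constructor" f(a, b, c), curried by hand:
        // each apply fixes one argument and returns the rest of the function
        Function<Integer, Function<Integer, Function<Integer, String>>> curried =
                a -> b -> c -> "Request(" + a + ", " + b + ", " + c + ")";

        // Applying the validated values one by one, as the builder's ap(...) does
        String result = curried.apply(1).apply(2).apply(3);
        System.out.println(result); // Request(1, 2, 3)
    }
}
```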
What's the use case of using ValidRegistrationRequest?
I have used Vavr's validation in one of my projects. I had a request coming in with some identifiers of entities. To validate its correctness, I had to query a database to check whether there was something for each id.
So, if validation returned the original request, I would have to fetch those objects from the database once again. Thus, I decided to return a ValidRegistrationRequest holding the domain objects. By calling the database only once, request processing is significantly faster.
And answers to the second pair of questions:
Yes, you are right. In the case of an invalid result, Validation.combine(...).ap(...) returns an instance of the Invalid class, holding a list of the error messages returned from the validation methods.
If you look into the sources, at the Validation.ap(...) method, you can see that invalid results are gathered into a Vavr List. Because List inherits from Seq, you see that type in the validatePerson example: Seq<String>.
Yes, exactly. "All its parameters, in the order given" :)
The order of arguments in combine must be the same as the order of arguments taken by the function provided to ap(...) method.
With the sources downloaded, it is much easier to track the internals of Vavr.
Okay, this is my attempt at answering my own questions, but confirmation from someone more experienced would be nice. I found the latest source for Validation here.
Example 1
The article I copied the example from actually stated that combine was "statically imported for better readability." I missed that. So I was right: we are calling a static method. Specifically, this one:
static <E, T1, T2, T3> Builder3<E, T1, T2, T3> combine(Validation<E, T1> validation1, Validation<E, T2> validation2, Validation<E, T3> validation3) {
Objects.requireNonNull(validation1, "validation1 is null");
Objects.requireNonNull(validation2, "validation2 is null");
Objects.requireNonNull(validation3, "validation3 is null");
return new Builder3<>(validation1, validation2, validation3);
}
My guess at the use of ValidRegistrationRequest is simply to enforce validation at compile time. That is, this way a developer can never accidentally use an unvalidated RegistrationRequest if all consuming code requires a ValidRegistrationRequest.
Example 2
I think the Seq comes from here:
/**
 * An invalid Validation
 *
 * @param <E> type of the errors of this Validation
 * @param <T> type of the value of this Validation
 */
final class Invalid<E, T> implements Validation<E, T>, Serializable {
...
@Override
public Seq<E> getErrors() {
return errors;
}
...
}
And then something to do with this:
/**
 * Applies a given {@code Validation} that encapsulates a function to this {@code Validation}'s value or combines both errors.
 *
 * @param validation a function that transforms this value (on the 'sunny path')
 * @param <U> the new value type
 * @return a new {@code Validation} that contains a transformed value or combined errors.
 */
@SuppressWarnings("unchecked")
default <U> Validation<E, U> ap(Validation<E, ? extends Function<? super T, ? extends U>> validation) {
Objects.requireNonNull(validation, "validation is null");
if (isValid()) {
return validation.map(f -> f.apply(get()));
} else if (validation.isValid()) {
return (Validation<E, U>) this;
} else {
return invalidAll(getErrors().prependAll(validation.getErrors()));
}
}
@mchmiel answered my question while I was writing mine.
I have been trying to do a basic GA implementation myself. I use a Gene class which wraps a binary bit, and a Chromosome class that has an ArrayList named genes of Gene objects. In the Chromosome class I have an evaluation method value() that simply computes the decimal equivalent of the bits in the chromosome. The overridden toString() method in the Chromosome class uses a lambda expression to build a String representation of the bits in the Gene objects contained in genes. My question is: since the forEach() method is not required to respect encounter order (for the benefit of parallelism), why does it always return the correct string representation of the underlying bits, i.e. in the order in which they were created? Or am I missing something serious here? Could it be that, because of the very short chromosome length, the disregard for ordering is just not prominent?
Here are my classes.
The Gene class
public class Gene {
private short value;
public Gene() {
value = (short) (Math.random() <= 0.5 ? 0 : 1);
System.out.print(value);
}
@Override
public String toString() {
return String.valueOf(value);
}
}
The Chromosome class
import java.util.ArrayList;
import java.util.Arrays;
public class Chromosome {
private ArrayList<Gene> genes;
public Chromosome(int numOfGene) {
this.genes = new ArrayList<>();
for (int i = 0; i < numOfGene; i++) {
this.genes.add(i, new Gene());
}
}
public int value() {
return Integer.parseInt(this.toString(), 2);
}
@Override
public String toString() {
StringBuilder chromosome = new StringBuilder("");
genes.stream().forEach((g) -> chromosome.append(g));
return chromosome.toString();
}
public static void main(String[] args) {
Chromosome c = new Chromosome(10);
System.out.println("");
System.out.println(c);
}
}
The print statement in the Gene constructor is there to see the order in which the genes were created. No matter how many times I run the program, forEach() always gives the correct representation of the bits, which is confusing to me. Or am I completely ignorant of how this is supposed to work? :-(
Since you are using a sequential Stream, the order of the input list is preserved. If you change it to
genes.parallelStream().forEach((g) -> chromosome.append(g));
You would probably get a different order.
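A small sketch contrasting the two; note that forEachOrdered restores encounter order even on a parallel stream, while a parallel forEach may not:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class OrderingDemo {
    public static void main(String[] args) {
        List<Integer> input = IntStream.range(0, 10).boxed().collect(Collectors.toList());

        // Sequential stream: forEach visits elements in encounter order
        StringBuilder sequential = new StringBuilder();
        input.stream().forEach(sequential::append);

        // Parallel stream: forEachOrdered still respects encounter order
        // (plain forEach on a parallel stream gives no such guarantee)
        StringBuilder parallelOrdered = new StringBuilder();
        input.parallelStream().forEachOrdered(parallelOrdered::append);

        System.out.println(sequential);      // 0123456789
        System.out.println(parallelOrdered); // 0123456789
    }
}
```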
Since genes is an ArrayList, and a List is ordered, you are basically printing the List according to how you added the elements to it: the first to be printed is the first that was added.
If it were a parallelStream, the order would not be guaranteed.
Take a look at this link, which explains forEach in an excellent way.
Excellent answers given! But I would like to add this for future readers, as I found this stated very clearly in the link given by bajada93.
When you create a stream, it is always a serial stream unless otherwise specified. To create a parallel stream, invoke the operation Collection.parallelStream.
This means forEach will iterate such a collection in the order in which it was constructed.
Link: JavaSE tutorial
I'd like to start off by saying this is a little more of a general question; not one pertaining to the specific examples that I have given, but simply a conceptual topic.
Example #1:
I'm creating a truly random ID with java.util.UUID. Let's say I never want to have the same UUID generated, ever. Here's an idea of the circumstance:
(Let's assume that I'm saving/loading the List at the top- that's not the point)
Gist URL (I'm new to StackExchange- sorry!)
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
public class Example {
/**
* A final List<String> of all previous UUIDs generated with
* generateUniqueID(), turned into a string with uuid.toString();
*/
private static final List<String> PREVIOUS = new ArrayList<String>();
/**
 * Generates a truly unique UUID.
 *
 * @param previous
 *            A List<String> of previous UUIDs, converted into a string with
 *            uuid.toString();
 * @return a UUID generated with UUID.randomUUID() that is not included in
 *         the given List<String>.
 */
public static UUID generateUniqueID(List<String> previous) {
UUID u = UUID.randomUUID();
if (previous.contains(u.toString())) {
return generateUniqueID(previous);
}
return u;
}
/**
 * Generates a truly unique UUID using the final List<String> PREVIOUS
 * variable defined at the top of the class.
 *
 * @return A truly random UUID created with generateUniqueID(List<String>
 *         previous);
 */
public static UUID generateUniqueID() {
UUID u = generateUniqueID(PREVIOUS);
PREVIOUS.add(u.toString());
return u;
}
}
Example #2: Okay, maybe UUID was a bad example, so let's use Random and a double. Here's another example:
Gist URL
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
public class Example2 {
/**
 * A final List<Double> of all previous doubles generated with
 * generateUniqueDouble(), boxed with Double.valueOf(d);
 */
private static final List<Double> PREVIOUS = new ArrayList<Double>();
/**
* The RANDOM variable used in the class.
*/
private static final Random RANDOM = new Random();
/**
 * Generates a truly unique double.
 *
 * @param previous
 *            A List<Double> of previous doubles, converted into a Double
 *            with Double.valueOf(d);
 * @return a double generated with RANDOM.nextDouble() that is not included
 *         in the given List<Double>.
 */
public static double generateUniqueDouble(List<Double> previous) {
double d = RANDOM.nextDouble();
if (previous.contains(Double.valueOf(d))) {
return generateUniqueDouble(previous);
}
return d;
}
/**
 * Generates a truly unique double using the final List<Double> PREVIOUS
 * variable defined at the top of the class.
 *
 * @return A truly random double created with generateUniqueDouble(List<Double>
 *         previous);
 */
public static double generateUnique() {
double d = generateUniqueDouble(PREVIOUS); // use the checked generator so PREVIOUS is actually consulted
PREVIOUS.add(Double.valueOf(d));
return d;
}
}
The point: Is this the most efficient way of doing something like this? Keep in mind I gave you examples, so they're pretty vague. Preferably I wouldn't like to use any libraries for this, but if they really make a substantial difference in efficiency, please let me know about them.
Please let me know what you think in the responses :)
I suggest you make the generated IDs sequential numbers instead of doubles or uuids. If you want them to appear random to end users, display the sha1 of the number in base64.
Some points have already been discussed in the comments. To summarize and elaborate them here:
It is very unlikely that you create the same double value twice. There are roughly 7*10^12 different double values (assuming that the random number generator can deliver "most" of them). For the UUIDs, the chance of creating the same value twice is even lower, since there are 2^122 different UUIDs. If you created enough elements to have a non-negligible chance of a collision, you'd run out of memory anyhow.
So this approach does not make sense in practice.
However, from a purely theoretical point of view:
Performance
Using a List for this operation is not optimal. The "best case" (and by far the most common case) for you is that the new element is not contained in the list. But for the check whether the element is contained, this is the worst case: You'll have to check each and every element of the list, only to detect that the new element was not yet present. This is said to be linear complexity, or for short, O(n). You could use a different data structure where checking whether an element is contained can be done more quickly, namely in O(1). For example, you could replace the line
private static final List<Double> PREVIOUS = new ArrayList<Double>();
with
private static final Set<Double> PREVIOUS = new HashSet<Double>();
Performance and Correctness
(referring to the recursive approach in general here)
Performance
From a performance point of view, you should not use recursion when it can easily be replaced by an iterative solution. In this case, this would be trivial:
public static double generateUniqueDouble(List<Double> previous) {
double d = RANDOM.nextDouble();
while (previous.contains(d)) {
d = RANDOM.nextDouble();
}
previous.add(d);
return d;
}
(it could be written a bit more compact, but that does not matter now).
Correctness
This is more subtle: When there are many recursive calls, then you might end up with a StackOverflowError. So you should never use recursion unless you can prove that the recursion will end (or better: That it will end "after a few steps").
But here's your main problem:
The algorithm is flawed. You cannot prove that it will be able to create a new random number. The chance that even a single new element is already contained in the collection of PREVIOUS elements is ridiculously low for double (or UUID) values. But it is not zero. And there is nothing preventing the random number generator from creating the random number 0.5 indefinitely, trillions of times in a row.
(Again: These are purely theoretical considerations. But not as far away from practice as they might look at the first glance: If you did not create random double values, but random byte values, then, after 256 calls, there would be no "new" values to return - and you would actually receive the StackOverflowError...)
It would be better to use a hash table than a list. Generate your candidate value, check for a collision in the hash table, and accept it if there is no collision. If you use a list, generating a new value is an O(n) operation. If you use a hash table, generating a new value is an O(1) operation .
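Combining both fixes (iteration instead of recursion, a HashSet instead of a List), a sketch might look like this; the class and method names are illustrative:

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

public class UniqueDoubles {
    // contains() and add() on a HashSet are O(1) on average,
    // versus an O(n) scan for an ArrayList
    private static final Set<Double> PREVIOUS = new HashSet<>();
    private static final Random RANDOM = new Random();

    public static double generateUniqueDouble() {
        double d = RANDOM.nextDouble();
        // Set.add returns false if the value was already present,
        // so checking and recording happen in a single call
        while (!PREVIOUS.add(d)) {
            d = RANDOM.nextDouble();
        }
        return d;
    }

    public static void main(String[] args) {
        double d1 = generateUniqueDouble();
        double d2 = generateUniqueDouble();
        System.out.println(d1 != d2); // true
    }
}
```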
Is it possible to iterate an Enumeration by using Lambda Expression? What will be the Lambda representation of the following code snippet:
Enumeration<NetworkInterface> nets = NetworkInterface.getNetworkInterfaces();
while (nets.hasMoreElements()) {
NetworkInterface networkInterface = nets.nextElement();
}
I didn't find any stream within it.
(This answer shows one of many options. Just because it has the acceptance mark doesn't mean it is the best one. I suggest reading the other answers and picking one depending on the situation you are in. IMO:
for Java 8, Holger's answer is nicest, because aside from being simple it doesn't require the additional iteration that happens in my solution;
for Java 9, I would pick the solution described in Tagir Valeev's answer.)
You can copy elements from your Enumeration to ArrayList with Collections.list and then use it like
Collections.list(yourEnumeration).forEach(yourAction);
If there are a lot of Enumerations in your code, I recommend creating a static helper method that converts an Enumeration into a Stream. The static method might look as follows:
public static <T> Stream<T> enumerationAsStream(Enumeration<T> e) {
return StreamSupport.stream(
Spliterators.spliteratorUnknownSize(
new Iterator<T>() {
public T next() {
return e.nextElement();
}
public boolean hasNext() {
return e.hasMoreElements();
}
},
Spliterator.ORDERED), false);
}
Use the method with a static import. In contrast to Holger's solution, you can benefit from the different stream operations, which might make the existing code even simpler. Here is an example:
Map<...> map = enumerationAsStream(enumeration)
.filter(Objects::nonNull)
.collect(groupingBy(...));
Since Java 9 there is a new default method, Enumeration.asIterator(), which makes a pure-Java solution simpler:
nets.asIterator().forEachRemaining(iface -> { ... });
In case you don’t like the fact that Collections.list(Enumeration) copies the entire contents into a (temporary) list before the iteration starts, you can help yourself out with a simple utility method:
public static <T> void forEachRemaining(Enumeration<T> e, Consumer<? super T> c) {
while(e.hasMoreElements()) c.accept(e.nextElement());
}
Then you can simply do forEachRemaining(enumeration, lambda-expression); (mind the import static feature)…
You can use the following combination of standard functions:
StreamSupport.stream(Spliterators.spliteratorUnknownSize(CollectionUtils.toIterator(enumeration), Spliterator.IMMUTABLE), parallel)
You may also add more characteristics like NONNULL or DISTINCT.
After applying static imports, this becomes more readable:
stream(spliteratorUnknownSize(toIterator(enumeration), IMMUTABLE), false)
Now you have a standard Java 8 Stream to be used in any way! You may pass true for parallel processing.
To convert from Enumeration to Iterator, use any of:
CollectionUtils.toIterator() from Spring 3.2
IteratorUtils.asIterator() from Apache Commons Collections 3.2
Iterators.forEnumeration() from Google Guava
For Java 8, the simplest transformation of an enumeration into a stream is:
Collections.list(NetworkInterface.getNetworkInterfaces()).stream()
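For example (a minimal sketch; Vector is used here only because it still hands out an Enumeration, and requires Java 9+ for List.of):

```java
import java.util.Collections;
import java.util.Enumeration;
import java.util.List;
import java.util.Vector;
import java.util.stream.Collectors;

public class EnumerationStreamDemo {
    public static void main(String[] args) {
        Enumeration<String> e = new Vector<>(List.of("a", "b", "c")).elements();

        // Copy into a List once, then use the full Stream API
        List<String> upper = Collections.list(e).stream()
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        System.out.println(upper); // [A, B, C]
    }
}
```

Note the trade-off mentioned elsewhere in this thread: Collections.list eagerly copies the whole Enumeration before streaming begins.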
I know this is an old question, but I wanted to present an alternative to Collections.list and the Stream functionality. Since the question is titled "Iterate an Enumeration", I recognize that sometimes you want to use a lambda expression, but an enhanced for loop may be preferable: the enumerated object may throw an exception, and a for loop is easier to encapsulate in a larger try-catch segment (lambdas require declared exceptions to be caught within the lambda). To that end, here is a lambda that creates an Iterable, usable in a for loop, which does not preload the enumeration:
/**
 * Creates a lazy Iterable for an Enumeration
 *
 * @param <T> Class being iterated
 * @param e Enumeration as base for Iterator
 * @return Iterable wrapping Enumeration
 */
public static <T> Iterable<T> enumerationIterable(Enumeration<T> e)
{
return () -> new Iterator<T>()
{
@Override
public T next()
{
return e.nextElement();
}
@Override
public boolean hasNext()
{
return e.hasMoreElements();
}
};
}