Java streams: limit the collection elements based on a condition

The code below takes a stream and sorts it. If a maximum limit should be applied, it applies it.
if (maxLimit > 0) {
    return list.stream().sorted(comparator).limit(maxLimit).collect(Collectors.toList());
} else {
    return list.stream().sorted(comparator).collect(Collectors.toList());
}
// maxLimit, list, comparator can be understood in general terms.
Here, the limit operation is present inside the if block and absent in the else block; the other stream operations are the same.
Is there a way to apply limit only when maxLimit is greater than zero? In the code block above, the same logic is repeated, except for the limit operation in one branch.

list.stream().sorted(comparator).limit(maxLimit>0 ? maxLimit: list.size()).collect(Collectors.toList())
Looking at the current implementation of limit, I believe it checks whether the limit is less than the current size, so it would not be as inefficient as I initially expected.

You can split your code into parts like:
Stream stream = list.stream()
        .sorted(comparator);
if (maxLimit > 0) {
    stream = stream.limit(maxLimit);
}
return stream.collect(Collectors.toList());
In this case you don't have to maintain two branches as in your initial example. Note that the stream variable cannot be declared final, because it is reassigned inside the if block.
Also, when assigning the stream variable it's worth using a specific generic type, e.g. if list is a List<String>, then declare the variable as Stream<String>.
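The same idea can be wrapped in a small reusable helper. This is only a sketch; the method name sortAndMaybeLimit and the example values are mine, not from the original answer:

import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class LimitHelper {

    // Sorts the list and applies limit only when maxLimit is positive.
    static <T> List<T> sortAndMaybeLimit(List<T> list, Comparator<? super T> comparator, long maxLimit) {
        Stream<T> stream = list.stream().sorted(comparator);
        if (maxLimit > 0) {
            stream = stream.limit(maxLimit);
        }
        return stream.collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> data = List.of(5, 3, 9, 1, 7); // List.of requires Java 9+
        System.out.println(sortAndMaybeLimit(data, Comparator.naturalOrder(), 3)); // [1, 3, 5]
        System.out.println(sortAndMaybeLimit(data, Comparator.naturalOrder(), 0)); // [1, 3, 5, 7, 9]
    }
}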

Related

For Java streams, does generate + limit guarantee no additional calls to the generator function, or is there a preferred alternative?

I have a source of data that I know has n elements, which I can access by repeatedly calling a method on an object; for the sake of example, let's call it myReader.read(). I want to create a stream of data containing those n elements. Let's also say that I don't want to call the read() method more times than the amount of data I want to return, as it will throw an exception (e.g. NoSuchElementException) if the method is called after the end of the data is reached.
I know I can create this stream by using the IntStream.range method, and mapping each element using the find method. However, this feels a little weird since I'm completely ignoring the int values in the stream (I'm really just using it to produce a stream with exactly n elements).
return IntStream.range(0, n).mapToObj(i -> myReader.read());
An approach I've considered is using Stream.generate(supplier) followed by Stream.limit(maxSize). Based on my understanding of the limit function, this feels like it should work.
Stream.generate(myReader::read).limit(n)
However, nowhere in the API documentation do I see an indication that the Stream.limit() method will guarantee exactly maxSize elements are generated by the stream it's called on. It wouldn't be infeasible that a stream implementation could be allowed to call the generator function more than n times, so long as the end result was just the first n calls, and so long as it meets the API contract for being a short-circuiting intermediate operation.
Stream.limit JavaDocs
Returns a stream consisting of the elements of this stream, truncated to be no longer than maxSize in length.
This is a short-circuiting stateful intermediate operation.
Stream operations and pipelines documentation
An intermediate operation is short-circuiting if, when presented with infinite input, it may produce a finite stream as a result. [...] Having a short-circuiting operation in the pipeline is a necessary, but not sufficient, condition for the processing of an infinite stream to terminate normally in finite time.
Is it safe to rely on Stream.generate(generator).limit(n) only making n calls to the underlying generator? If so, is there some documentation of this fact that I'm missing?
And to avoid the XY Problem: what is the idiomatic way of creating a stream by performing an operation exactly n times?
Stream.generate creates an unordered Stream. This implies that the subsequent limit operation is not required to use the first n elements, as there is no “first” when there’s no order, but may select arbitrary n elements. The implementation may exploit this permission, e.g. for higher parallel processing performance.
The following code
IntSummaryStatistics s =
Stream.generate(new AtomicInteger()::incrementAndGet)
.parallel()
.limit(100_000)
.collect(Collectors.summarizingInt(Integer::intValue));
System.out.println(s);
prints something like
IntSummaryStatistics{count=100000, sum=5000070273, min=1, average=50000,702730, max=100207}
on my machine, where the max number may vary. It demonstrates that the Stream has selected exactly 100000 elements, as required, but not the elements from 1 to 100000. Since the generator produces strictly ascending numbers, it’s clear that it has been called more than 100000 times to get numbers higher than that.
Another example
System.out.println(
Stream.generate(new AtomicInteger()::incrementAndGet)
.parallel()
.map(String::valueOf)
.limit(10)
.collect(Collectors.toList())
);
prints something like this on my machine (JDK-14)
[4, 8, 5, 6, 10, 3, 7, 1, 9, 11]
With JDK-8, it even prints something like
[4, 14, 18, 24, 30, 37, 42, 52, 59, 66]
If a construct like
IntStream.range(0, n).mapToObj(i -> myReader.read())
feels weird due to the unused i parameter, you may use
Collections.nCopies(n, myReader).stream().map(TypeOfMyReader::read)
instead. This doesn’t show an unused int parameter and works equally well, as in fact, it’s internally implemented as IntStream.range(0, n).mapToObj(i -> element). There is no way around some counter, visible or hidden, to ensure that the method will be called n times. Note that, since read likely is a stateful operation, the resulting behavior will always be like an unordered stream when enabling parallel processing, but the IntStream and nCopies approaches create a finite stream that will never invoke the method more than the specified number of times.
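As a quick sanity check of that last claim, here is a small sketch of my own (the read() method is a hypothetical stand-in that only counts its invocations); with these finite sources the method is called exactly n times per pipeline, even in parallel:

import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class FiniteCallCountDemo {
    static final AtomicInteger CALLS = new AtomicInteger();

    // Hypothetical stand-in for myReader.read(); it just counts invocations.
    static String read() {
        return "item-" + CALLS.incrementAndGet();
    }

    public static void main(String[] args) {
        int n = 1_000;

        List<String> viaRange = IntStream.range(0, n)
                .parallel()
                .mapToObj(i -> read())
                .collect(Collectors.toList());

        List<String> viaNCopies = Collections.nCopies(n, "ignored").stream()
                .parallel()
                .map(x -> read())
                .collect(Collectors.toList());

        // Both lists contain n elements, and read() was invoked 2 * n times in total.
        System.out.println(viaRange.size() + " + " + viaNCopies.size()
                + " elements, read() called " + CALLS.get() + " times");
    }
}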
Only answering the XY-problem part of your question: simply create a spliterator for your reader.
class MyStreamSpliterator implements Spliterator<String> { // or whichever datatype
    private final MyReaderClass reader;

    public MyStreamSpliterator(MyReaderClass reader) {
        this.reader = reader;
    }

    @Override
    public boolean tryAdvance(Consumer<? super String> action) {
        try {
            String nextval = reader.read();
            action.accept(nextval);
            return true;
        } catch (NoSuchElementException e) {
            // cleanup if necessary
            return false;
        }
        // Alternative: if you really really want to use n iterations,
        // add a counter and use it.
    }

    @Override
    public Spliterator<String> trySplit() {
        return null; // we don't split
    }

    @Override
    public long estimateSize() {
        return Long.MAX_VALUE; // or the correct value, if you know it before
    }

    @Override
    public int characteristics() {
        // add SIZED if you know the size
        return Spliterator.IMMUTABLE | Spliterator.ORDERED;
    }
}
Then, create your stream as StreamSupport.stream(new MyStreamSpliterator(reader), false)
Disclaimer: I just threw this together in the SO editor, probably there are some errors.
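And if you do want the "exactly n iterations" variant mentioned in the comment, a cap can simply be added when building the stream; a rough usage sketch building on the class above (reader and n are placeholders):

// Requires java.util.List, java.util.stream.Collectors and java.util.stream.StreamSupport.
List<String> firstN = StreamSupport.stream(new MyStreamSpliterator(reader), false)
        .limit(n)   // sequential and ordered, so read() should not be called more than n times
        .collect(Collectors.toList());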

Observable#take(Long) not returning desired size of items in RxJava

I am using RxJava/Kotlin's Observable#take() to get the first 50 items from a list, but the #take() operator is not behaving as it should according to the Rx docs.
In Rx docs, #take() is defined as:
"Emit only the first n items emitted by an Observable"
I have a function where the pageSize argument is 50 and the initial size of the list is 300.
After #take(50) is applied to that Observable, at the next breakpoint I still get the full-size list, i.e. size = 300.
But just as a check, in case something was wrong with the debugger or the observable, I tried to take only the items whose displayName contains "9", and this time I got the expected result: a smaller list with "9" in each displayName field.
I believe RxJava/Kotlin's #take() operator is not that crazy and it's just me.
take behaves correctly as it will give you only 50 List<FollowersEntry> "marbles". Based on your screenshots and wording, I guess you wanted 50 FollowersEntry. There is a fundamental logical difference between a container of objects and the objects themselves. RxJava sees only an object sequence of type List<> but it can't know about the nested objects you intended to work with.
Therefore, you either have to use it.take(50) inside map (or whatever the Kotlin collections function is), or unroll the sequence of lists into a sequence of entries via flatMapIterable:
getFollowers()
.flatMapIterable(entry -> entry)
.take(50 /* entries this time, not lists */)
Take a good look at the return type of your method - Single<List<FollowersEntity>>. The Observable returned from remoteFollowersService.getFollowers() is not an Observable that emits 300 FollowersEntity items - it is an Observable that emits a single item, and that single item is a List containing 300 FollowersEntity items. In other words you need to call take on the list, not on the observable.
return remoteFollowersService.getFollowers()
.map { val size = it.size; it } // for debugging
.map { it.take(pageSize) }
.map { val size = it.size; it } // for debugging
.map { it.filter { item -> item.displayName.contains("9") } }
.single(emptyList())

Is Java 8 stream laziness useless in practice?

I have read a lot about Java 8 streams lately, and several articles about lazy loading with Java 8 streams specifically: here and over here. I can't seem to shake the feeling that lazy loading is COMPLETELY useless (or at best, a minor syntactic convenience offering zero performance value).
Let's take this code as an example:
int[] myInts = new int[]{1,2,3,5,8,13,21};
IntStream myIntStream = IntStream.of(myInts);
int[] myChangedArray = myIntStream
.peek(n -> System.out.println("About to square: " + n))
.map(n -> (int)Math.pow(n, 2))
.peek(n -> System.out.println("Done squaring, result: " + n))
.toArray();
This will log in the console, because the terminal operation, in this case toArray(), is called, and our stream is lazy and executes only when the terminal operation is called. Of course I can also do this:
IntStream myChangedInts = myIntStream
.peek(n -> System.out.println("About to square: " + n))
.map(n -> (int)Math.pow(n, 2))
.peek(n -> System.out.println("Done squaring, result: " + n));
And nothing will be printed, because the map isn't happening, because I don't need the data. Until I call this:
int[] myChangedArray = myChangedInts.toArray();
And voila, I get my mapped data and my console logs. Except I see zero benefit to it whatsoever. I realize I can define the filter code long before I call toArray(), and I can pass around this "not-really-filtered" stream, but so what? Is this the only benefit?
The articles seem to imply there is a performance gain associated with laziness, for example:
In the Java 8 Streams API, the intermediate operations are lazy and their internal processing model is optimized to make it being capable of processing the large amount of data with high performance.
and
Java 8 Streams API optimizes stream processing with the help of short circuiting operations. Short Circuit methods ends the stream processing as soon as their conditions are satisfied. In normal words short circuit operations, once the condition is satisfied just breaks all of the intermediate operations, lying before in the pipeline. Some of the intermediate as well as terminal operations have this behavior.
It sounds literally like breaking out of a loop, and not associated with laziness at all.
Finally, there is this perplexing line in the second article:
Lazy operations achieve efficiency. It is a way not to work on stale data. Lazy operations might be useful in the situations where input data is consumed gradually rather than having whole complete set of elements beforehand. For example consider the situations where an infinite stream has been created using Stream#generate(Supplier<T>) and the provided Supplier function is gradually receiving data from a remote server. In those kind of the situations server call will only be made at a terminal operation when it's needed.
Not working on stale data? What? How does lazy loading keep someone from working on stale data?
TLDR: Is there any benefit to lazy loading besides being able to run the filter/map/reduce/whatever operation at a later time (which offers zero performance benefit)?
If so, what's a real-world use case?
Your terminal operation, toArray(), perhaps supports your argument given that it requires all elements of the stream.
Some terminal operations don't. And for these, it would be a waste if streams weren't lazily executed. Two examples:
//example 1: print first element of 1000 after transformations
IntStream.range(0, 1000)
.peek(System.out::println)
.mapToObj(String::valueOf)
.peek(System.out::println)
.findFirst()
.ifPresent(System.out::println);
//example 2: check if any value has an even key
boolean valid = records.stream()
        .map(this::heavyConversion)
        .filter(this::checkWithWebService)
        .mapToInt(Record::getKey)
        .anyMatch(i -> i % 2 == 0);
The first stream will print:
0
0
0
That is, intermediate operations will be run just on one element. This is an important optimization. If it weren't lazy, then all the peek() calls would have to run on all elements (absolutely unnecessary as you're interested in just one element). Intermediate operations can be expensive (such as in the second example).
Short-circuiting terminal operations (of which toArray isn't one) make this optimization possible.
Laziness can be very useful for the users of your API, especially when the final result of the Stream pipeline evaluation might be very large!
The simple example is the Files.lines method in the Java API itself. If you don't want to read the whole file into the memory and you only need the first N lines, then just write:
Stream<String> stream = Files.lines(path); // lazy operation
List<String> result = stream.limit(N).collect(Collectors.toList()); // read and collect
You're right that there won't be a benefit from map().reduce() or map().collect(), but there's a pretty obvious benefit with findAny(), findFirst(), anyMatch(), allMatch(), etc. Basically, any operation that can be short-circuited.
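A small sketch of that short-circuiting effect (my own example; the counter is only there to show how few elements actually flow through the pipeline):

import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.IntStream;

public class ShortCircuitDemo {
    public static void main(String[] args) {
        AtomicInteger processed = new AtomicInteger();

        boolean hasEvenSquare = IntStream.range(1, 1_000_000)
                .peek(i -> processed.incrementAndGet())  // count elements that reach this stage
                .map(i -> i * i)
                .anyMatch(i -> i % 2 == 0);

        // Prints "true after 2 elements": 1*1 is odd, 2*2 is even, the rest is never evaluated.
        System.out.println(hasEvenSquare + " after " + processed.get() + " elements");
    }
}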
Good question.
Assuming you write textbook-perfect code, the difference in performance between a properly optimized for loop and a stream is not noticeable (streams tend to be slightly better class-loading wise, but the difference should not be noticeable in most cases).
Consider the following example.
// Some lengthy computation
private static int doStuff(int i) {
    try { Thread.sleep(1000); } catch (InterruptedException e) { }
    return i;
}

public static OptionalInt findFirstGreaterThanStream(int value) {
    return IntStream
            .of(MY_INTS)
            .map(Main::doStuff)
            .filter(x -> x > value)
            .findFirst();
}

public static OptionalInt findFirstGreaterThanFor(int value) {
    for (int i = 0; i < MY_INTS.length; i++) {
        int mapped = Main.doStuff(MY_INTS[i]);
        if (mapped > value) {
            return OptionalInt.of(mapped);
        }
    }
    return OptionalInt.empty();
}
Given the above methods, the next test should show they execute in about the same time.
public static void main(String[] args) {
long begin;
long end;
begin = System.currentTimeMillis();
System.out.println(findFirstGreaterThanStream(5));
end = System.currentTimeMillis();
System.out.println(end-begin);
begin = System.currentTimeMillis();
System.out.println(findFirstGreaterThanFor(5));
end = System.currentTimeMillis();
System.out.println(end-begin);
}
OptionalInt[8]
5119
OptionalInt[8]
5001
Anyway, we spend most of the time in the doStuff method. Let's say we want to add more threads to the mix.
Adjusting the stream method is trivial (assuming your operations meet the preconditions of parallel streams).
public static OptionalInt findFirstGreaterThanParallelStream(int value) {
return IntStream
.of(MY_INTS)
.parallel()
.map(Main::doStuff)
.filter(x -> x > value)
.findFirst();
}
Achieving the same behavior without streams can be tricky.
public static OptionalInt findFirstGreaterThanParallelFor(int value, Executor executor) {
    AtomicInteger counter = new AtomicInteger(0);
    CompletableFuture<OptionalInt> cf = CompletableFuture.supplyAsync(() -> {
        while (counter.get() != MY_INTS.length - 1);
        return OptionalInt.empty();
    });
    for (int i = 0; i < MY_INTS.length; i++) {
        final int current = MY_INTS[i];
        executor.execute(() -> {
            int mapped = Main.doStuff(current);
            if (mapped > value) {
                cf.complete(OptionalInt.of(mapped));
            } else {
                counter.incrementAndGet();
            }
        });
    }
    try {
        return cf.get();
    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
        return OptionalInt.empty();
    }
}
The tests execute in about the same time again.
public static void main(String[] args) {
long begin;
long end;
begin = System.currentTimeMillis();
System.out.println(findFirstGreaterThanParallelStream(5));
end = System.currentTimeMillis();
System.out.println(end-begin);
ExecutorService executor = Executors.newFixedThreadPool(10);
begin = System.currentTimeMillis();
System.out.println(findFirstGreaterThanParallelFor(5678, executor));
end = System.currentTimeMillis();
System.out.println(end-begin);
executor.shutdown();
executor.awaitTermination(10, TimeUnit.SECONDS);
executor.shutdownNow();
}
OptionalInt[8]
1004
OptionalInt[8]
1004
In conclusion, although we don't squeeze a big performance benefit out of streams (considering you write excellent multi-threaded code in your for alternative), the code itself tends to be more maintainable.
A (slightly off-topic) final note:
As with programming languages, higher-level abstractions (streams relative to for loops) make stuff easier to develop at the cost of performance. We did not move from assembly to procedural languages to object-oriented languages because the latter offered greater performance. We moved because they made us more productive (develop the same thing at a lower cost). If you are able to get the same performance out of a stream as you would with a for loop and properly written multi-threaded code, I would say it's already a win.
I have a real example from our code base; since I'm going to simplify it, I'm not entirely sure you'll like it or fully grasp it...
We have a service that needs a List<CustomService>, and I am supposed to call it. In order to call it, I go to a database (much simpler than in reality) and obtain a List<DBObject>; to obtain a List<CustomService> from that, some heavy transformations need to be done.
And here are my choices: transform in place and pass the list, which is simple yet probably not that optimal; or refactor the service to accept a List<DBObject> and a Function<DBObject, CustomService>. That sounds trivial, but it enables laziness (among other things). The service might sometimes need only a few elements from that list, or sometimes a max by some property, etc., so there is no need for me to do the heavy transformation for all elements; this is where the Stream API's pull-based laziness is a winner.
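A minimal sketch of that refactoring idea, with all names (DBObject, CustomService, pickBest) as hypothetical stand-ins rather than the real code base:

import java.util.List;
import java.util.Optional;
import java.util.function.Function;

class DBObject { int priority; }
class CustomService { /* heavy object built from a DBObject */ }

class ConsumerService {
    // Instead of demanding a fully transformed List<CustomService>, the service
    // takes the raw rows plus a mapping function and applies it only where needed.
    Optional<CustomService> pickBest(List<DBObject> rows, Function<DBObject, CustomService> mapper) {
        return rows.stream()
                .filter(r -> r.priority > 0)  // perhaps only a few rows qualify
                .findFirst()                  // short-circuits...
                .map(mapper);                 // ...so the heavy transformation runs at most once
    }
}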
Before Streams existed, we used Guava. It had Lists.transform(list, function), which was lazy too.
It's not a fundamental feature of streams as such; it could have been done even without Guava, but it's a lot simpler that way. The example provided here with findFirst is great and the simplest to understand; this is the entire point of laziness: elements are pulled only when needed, they are not passed from one intermediate operation to another in chunks, but flow from one stage to the next one at a time.
One interesting use case that hasn't been mentioned is arbitrary composition of operations on streams, coming from different parts of the code base, responding to different sorts of business or technical requisites.
For example, say you have an application where certain users can see all the data but certain other users can only see part of it. The part of the code that checks user permissions can simply impose a filter on whatever stream is being handed about.
Without lazy streams, that same part of the code could be filtering the already realized full collection, but that may have been expensive to obtain, for no real gain.
Alternatively, that same part of the code might want to append its filter to a data source, but now it has to know whether the data comes from a database, so it can impose an additional WHERE clause, or some other source.
With lazy streams, it's a filter that can be implemented whichever way. Filters imposed on streams from the database can translate into the aforementioned WHERE clause, with obvious performance gains over filtering in-memory collections resulting from whole-table reads.
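For instance, the permission check might look roughly like this; the names below are invented for illustration, and the point is only that the check composes a filter without caring where the stream comes from:

import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;
import java.util.stream.Stream;

class VisibilityFilter {
    // The permissions layer only narrows a stream; it neither realizes the full
    // collection nor knows whether the source is a database cursor or an in-memory list.
    static <T> Stream<T> restrict(Stream<T> data, Predicate<? super T> maySee) {
        return data.filter(maySee);
    }

    public static void main(String[] args) {
        Stream<String> source = List.of("public-a", "secret-b", "public-c").stream(); // List.of requires Java 9+
        List<String> visible = restrict(source, s -> s.startsWith("public"))
                .collect(Collectors.toList());
        System.out.println(visible); // [public-a, public-c]
    }
}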
So, a better abstraction, better performance, better code readability and maintainability, sounds like a win to me. :)
A non-lazy implementation would process all input and collect the output into a new collection after each operation. That is obviously impossible for unbounded or large enough sources, memory-consuming otherwise, and unnecessarily memory-consuming in the case of reducing and short-circuiting operations, so there are great benefits.
Check the following example
Stream.of("0","0","1","2","3","4")
.distinct()
.peek(a->System.out.println("after distinct: "+a))
.anyMatch("1"::equals);
If it were not lazy, you would expect all elements to pass through the distinct filtering first. Because of lazy execution it behaves differently: it streams the minimum number of elements needed to calculate the result.
The above example will print
after distinct: 0
after distinct: 1
How it works analytically:
First "0" goes until the terminal operation but does not satisfy it. Another element must be streamed.
Second "0" is filtered through .distinct() and never reaches terminal operation.
Since the terminal operation is not satisfied yet, next element is streamed.
"1" goes through terminal operation and satisfies it.
No more elements need to be streamed.

Conditionally add an operation to a Java 8 stream

I'm wondering if I can add an operation to a stream, based off of some sort of condition set outside of the stream. For example, I want to add a limit operation to the stream if my limit variable is not equal to -1.
My code currently looks like this, but I have yet to see other examples of streams being used this way, where a Stream object is reassigned to the result of an intermediate operation applied on itself:
// Do some stream stuff
stream = stream.filter(e -> e.getTimestamp() < max);
// Limit the stream
if (limit != -1) {
stream = stream.limit(limit);
}
// Collect stream to list
stream.collect(Collectors.toList());
As stated in this stackoverflow post, the filter isn't actually applied until a terminal operation is called. Since I'm reassigning the value of stream before a terminal operation is called, is the above code still a proper way to use Java 8 streams?
There is no semantic difference between a chained series of invocations and a series of invocations storing the intermediate return values. Thus, the following code fragments are equivalent:
a = object.foo();
b = a.bar();
c = b.baz();
and
c = object.foo().bar().baz();
In either case, each method is invoked on the result of the previous invocation. But in the latter case, the intermediate results are not stored but lost on the next invocation. In the case of the stream API, the intermediate results must not be used after you have called the next method on it, thus chaining is the natural way of using stream as it intrinsically ensures that you don’t invoke more than one method on a returned reference.
Still, it is not wrong to store the reference to a stream as long as you obey the contract of not using a returned reference more than once. By using it the way you do in your question, i.e. overwriting the variable with the result of the next invocation, you also ensure that you don’t invoke more than one method on a returned reference, thus it’s a correct usage. Of course, this only works with intermediate results of the same type, so when you are using map or flatMap, getting a stream of a different reference type, you can’t overwrite the local variable. Then you have to be careful not to use the old local variable again, but, as said, as long as you are not using it after the next invocation, there is nothing wrong with the intermediate storage.
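A short sketch of what that looks like when the element type changes mid-pipeline (variable and method names here are mine): the stream variable is overwritten while the type stays the same, and a new variable is introduced once map changes the type, after which the old variable is not touched again.

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

class IntermediateStorage {
    static List<Integer> lengths(List<String> words, boolean onlyShort) {
        Stream<String> s = words.stream().filter(w -> !w.isEmpty());
        if (onlyShort) {
            s = s.filter(w -> w.length() < 5);           // same element type, safe to overwrite
        }
        Stream<Integer> lengths = s.map(String::length); // new type, new variable; s must not be reused
        return lengths.collect(Collectors.toList());
    }
}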
Sometimes, you have to store it, e.g.
try (Stream<String> stream = Files.lines(Paths.get("myFile.txt"))) {
    stream.filter(s -> !s.isEmpty()).forEach(System.out::println);
}
Note that the code is equivalent to the following alternatives:
try (Stream<String> stream = Files.lines(Paths.get("myFile.txt")).filter(s -> !s.isEmpty())) {
    stream.forEach(System.out::println);
}
and
try (Stream<String> srcStream = Files.lines(Paths.get("myFile.txt"))) {
    Stream<String> tmp = srcStream.filter(s -> !s.isEmpty());
    // must not use the variable srcStream here:
    tmp.forEach(System.out::println);
}
They are equivalent because forEach is always invoked on the result of filter which is always invoked on the result of Files.lines and it doesn’t matter on which result the final close() operation is invoked as closing affects the entire stream pipeline.
To put it in one sentence, the way you use it, is correct.
I even prefer to do it that way, as not chaining a limit operation when you don’t want to apply a limit is the cleanest way of expressing your intent. It’s also worth noting that the suggested alternatives may work in a lot of cases, but they are not semantically equivalent:
.limit(condition? aLimit: Long.MAX_VALUE)
assumes that the maximum number of elements you can ever encounter is Long.MAX_VALUE, but streams can have more elements than that; they might even be infinite.
.limit(condition? aLimit: list.size())
when the stream source is the list, breaks the lazy evaluation of the stream. In principle, a mutable stream source might legally get arbitrarily changed up to the point when the terminal action is commenced. The result will reflect all modifications made up to this point. When you add an intermediate operation incorporating list.size(), i.e. the actual size of the list at this point, subsequent modifications applied to the collection between this point and the terminal operation may give this value a different meaning than the intended “actually no limit” semantic.
Compare with “Non Interference” section of the API documentation:
For well-behaved stream sources, the source can be modified before the terminal operation commences and those modifications will be reflected in the covered elements. For example, consider the following code:
List<String> l = new ArrayList(Arrays.asList("one", "two"));
Stream<String> sl = l.stream();
l.add("three");
String s = sl.collect(joining(" "));
First a list is created consisting of two strings: "one"; and "two". Then a stream is created from that list. Next the list is modified by adding a third string: "three". Finally the elements of the stream are collected and joined together. Since the list was modified before the terminal collect operation commenced the result will be a string of "one two three".
Of course, this is a rare corner case as normally, a programmer will formulate an entire stream pipeline without modifying the source collection in between. Still, the different semantic remains and it might turn into a very hard to find bug when you once enter such a corner case.
Further, since they are not equivalent, the stream API will never recognize these values as “actually no limit”. Even specifying Long.MAX_VALUE implies that the stream implementation has to track the number of processed elements to ensure that the limit has been obeyed. Thus, not adding a limit operation can have a significant performance advantage over adding a limit with a number that the programmer expects to never be exceeded.
There are two ways you can do this:
// Do some stream stuff
List<E> results = list.stream()
        .filter(e -> e.getTimestamp() < max)
        .limit(limit > 0 ? limit : list.size())
        .collect(Collectors.toList());
OR
// Do some stream stuff
stream = stream.filter(e -> e.getTimestamp() < max);
// Limit the stream
if (limit != -1) {
stream = stream.limit(limit);
}
// Collect stream to list
List<E> results = stream.collect(Collectors.toList());
As this is functional programming you should always work on the result of each function. You should specifically avoid modifying anything in this style of programming and treat everything as if it was immutable if possible.
Since I'm reassigning the value of stream before a terminal operation is called, is the above code still a proper way to use Java 8 streams?
It should work; however, it reads as a mix of imperative and functional coding. I suggest writing it as a fixed stream, as in the first example above.
I think your first line needs to be:
stream = stream.filter(e -> e.getTimestamp() < max);
so that you're using the stream returned by filter in subsequent operations rather than the original stream.
I know it is a bit late, but I had the same question myself and didn't find a satisfying answer. However, inspired by this question and its answers, I came to the following solution:
return Stream.of(                                    // wrap the target stream in another stream
        /* do regular stream stuff */
        stream.filter(e -> e.getTimestamp() < max)
    )
    .flatMap(s -> limit != -1 ? s.limit(limit) : s)  // apply limit only if necessary and unwrap the stream of streams back to a "normal" stream
    .collect(Collectors.toList());                   // do final stuff

Get last element of Stream/List in a one-liner

How can I get the last element of a stream or list in the following code?
Where data.careas is a List<CArea>:
CArea first = data.careas.stream()
.filter(c -> c.bbox.orientationHorizontal).findFirst().get();
CArea last = data.careas.stream()
.filter(c -> c.bbox.orientationHorizontal)
.collect(Collectors.toList()).; //how to?
As you can see getting the first element, with a certain filter, is not hard.
However getting the last element in a one-liner is a real pain:
It seems I cannot obtain it directly from a Stream. (It would only make sense for finite streams)
It also seems that you cannot get things like first() and last() from the List interface, which is really a pain.
I do not see any argument for not providing a first() and last() method in the List interface, as the elements in there are ordered, and moreover the size is known.
But as per the original answer: How to get the last element of a finite Stream?
Personally, this is the closest I could get:
int lastIndex = data.careas.stream()
.filter(c -> c.bbox.orientationHorizontal)
.mapToInt(c -> data.careas.indexOf(c)).max().getAsInt();
CArea last = data.careas.get(lastIndex);
However, it does involve using indexOf on every element, which is most likely not what you generally want, as it can impair performance.
It is possible to get the last element with the method Stream::reduce. The following listing contains a minimal example for the general case:
Stream<T> stream = ...; // sequential or parallel stream
Optional<T> last = stream.reduce((first, second) -> second);
This implementation works for all ordered streams (including streams created from Lists). For unordered streams it is, for obvious reasons, unspecified which element will be returned.
The implementation works for both sequential and parallel streams. That might be surprising at first glance, and unfortunately the documentation doesn't state it explicitly. However, it is an important feature of streams, and I try to clarify it:
The Javadoc for the method Stream::reduce states, that it "is not constrained to execute sequentially".
The Javadoc also requires that the "accumulator function must be an associative, non-interfering, stateless function for combining two values", which is obviously the case for the lambda expression (first, second) -> second.
The Javadoc for reduction operations states: "The streams classes have multiple forms of general reduction operations, called reduce() and collect() [..]" and "a properly constructed reduce operation is inherently parallelizable, so long as the function(s) used to process the elements are associative and stateless."
The documentation for the closely related Collectors is even more explicit: "To ensure that sequential and parallel executions produce equivalent results, the collector functions must satisfy an identity and an associativity constraints."
Back to the original question: The following code stores a reference to the last element in the variable last and throws an exception if the stream is empty. The complexity is linear in the length of the stream.
CArea last = data.careas
.stream()
.filter(c -> c.bbox.orientationHorizontal)
.reduce((first, second) -> second).get();
If you have a Collection (or more general an Iterable) you can use Google Guava's
Iterables.getLast(myIterable)
as handy oneliner.
One liner (no need for stream;):
Object lastElement = list.isEmpty() ? null : list.get(list.size()-1);
Guava has dedicated method for this case:
Stream<T> stream = ...;
Optional<T> lastItem = Streams.findLast(stream);
It's equivalent to stream.reduce((a, b) -> b) but creators claim it has much better performance.
From documentation:
This method's runtime will be between O(log n) and O(n), performing
better on efficiently splittable streams.
It's worth mentioning that if the stream is unordered, this method behaves like findAny().
list.stream().sorted(Comparator.comparing(MyObject::getSequence).reversed()).findFirst().get();
Reverse the order and get the first element from the list. Here the element type (called MyObject above for illustration) has a sequence number; Comparator provides multiple factory methods that can be combined as the logic requires.
Another way to get the last element is by using sort.
Optional<CArea> num=data.careas.stream().sorted((a,b)->-1).findFirst();
You can also use the skip() function as below...
long count = data.careas.stream().count();
CArea last = data.careas.stream().skip(count - 1).findFirst().get();
it's super simple to use.
One more approach. The pair list will hold the first and last elements:
List<Object> pair = new ArrayList<>();
dataStream.forEach(o -> {
    if (pair.size() == 0) {
        pair.add(o);
        pair.add(o);
    }
    pair.set(1, o);
});
If you need to get the last N elements, a closure can be used.
The code below maintains an external queue of fixed size until the stream reaches the end.
final Queue<Integer> queue = new LinkedList<>();
final int N = 5;
list.stream().peek(z -> {
    queue.offer(z);
    if (queue.size() > N)
        queue.poll();
}).count();
Another option could be to use the reduce operation with a Queue as the identity.
final int lastN=3;
Queue<Integer> reduce1 = list.stream()
.reduce(
(Queue<Integer>)new LinkedList<Integer>(),
(m, n) -> {
m.offer(n);
if (m.size() > lastN)
m.poll();
return m;
}, (m, n) -> m);
System.out.println("reduce1 = " + reduce1);
