I have a source of data that I know has n elements, which I can access by repeatedly calling a method on an object; for the sake of example, let's call it myReader.read(). I want to create a stream of data containing those n elements. Let's also say that I don't want to call the read() method more times than the amount of data I want to return, as it will throw an exception (e.g. NoSuchElementException) if it is called after the end of the data is reached.
I know I can create this stream by using the IntStream.range method, and mapping each element using the find method. However, this feels a little weird since I'm completely ignoring the int values in the stream (I'm really just using it to produce a stream with exactly n elements).
return IntStream.range(0, n).mapToObj(i -> myReader.read());
An approach I've considered is using Stream.generate(supplier) followed by Stream.limit(maxSize). Based on my understanding of the limit function, this feels like it should work.
Stream.generate(myReader::read).limit(n)
However, nowhere in the API documentation do I see an indication that Stream.limit() guarantees that exactly maxSize elements will be generated by the stream it's called on. It isn't inconceivable that a stream implementation could call the generator function more than n times, so long as the end result contained only the results of the first n calls, and so long as it still met the API contract for a short-circuiting intermediate operation.
Stream.limit JavaDocs
Returns a stream consisting of the elements of this stream, truncated to be no longer than maxSize in length.
This is a short-circuiting stateful intermediate operation.
Stream operations and pipelines documentation
An intermediate operation is short-circuiting if, when presented with infinite input, it may produce a finite stream as a result. [...] Having a short-circuiting operation in the pipeline is a necessary, but not sufficient, condition for the processing of an infinite stream to terminate normally in finite time.
Is it safe to rely on Stream.generate(generator).limit(n) only making n calls to the underlying generator? If so, is there some documentation of this fact that I'm missing?
And to avoid the XY Problem: what is the idiomatic way of creating a stream by performing an operation exactly n times?
Stream.generate creates an unordered Stream. This implies that the subsequent limit operation is not required to use the first n elements, as there is no "first" when there's no order, but may select any n elements. The implementation may exploit this permission, e.g. for higher parallel processing performance.
The following code
IntSummaryStatistics s = Stream.generate(new AtomicInteger()::incrementAndGet)
        .parallel()
        .limit(100_000)
        .collect(Collectors.summarizingInt(Integer::intValue));
System.out.println(s);
prints something like
IntSummaryStatistics{count=100000, sum=5000070273, min=1, average=50000,702730, max=100207}
on my machine, whereas the max number may vary. It demonstrates that the Stream has selected exactly 100000 elements, as required, but not the elements from 1 to 100000. Since the generator produces strictly ascending numbers, it's clear that it has been called more than 100000 times to get numbers higher than that.
Another example
System.out.println(
    Stream.generate(new AtomicInteger()::incrementAndGet)
          .parallel()
          .map(String::valueOf)
          .limit(10)
          .collect(Collectors.toList())
);
prints something like this on my machine (JDK-14)
[4, 8, 5, 6, 10, 3, 7, 1, 9, 11]
With JDK-8, it even prints something like
[4, 14, 18, 24, 30, 37, 42, 52, 59, 66]
If a construct like
IntStream.range(0, n).mapToObj(i -> myReader.read())
feels weird due to the unused i parameter, you may use
Collections.nCopies(n, myReader).stream().map(TypeOfMyReader::read)
instead. This doesn't show an unused int parameter and works equally well; in fact, it's internally implemented as IntStream.range(0, n).mapToObj(i -> element). There is no way around some counter, visible or hidden, to ensure that the method will be called n times. Note that, since read is likely a stateful operation, the resulting behavior will always be like that of an unordered stream when enabling parallel processing, but the IntStream and nCopies approaches create a finite stream that will never invoke the method more than the specified number of times.
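If you want to convince yourself that these bounded variants invoke the method exactly n times, a quick check like the following can help (a sketch that substitutes a counter for the reader; the counter is purely illustrative):
AtomicInteger calls = new AtomicInteger();
int n = 1_000;
List<Integer> values = IntStream.range(0, n)
        .parallel()
        .mapToObj(i -> calls.incrementAndGet())   // stands in for myReader.read()
        .collect(Collectors.toList());
System.out.println(values.size() + " elements, " + calls.get() + " calls"); // 1000 elements, 1000 calls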
Only answering the XY-problem part of your question: simply create a spliterator for your reader.
import java.util.NoSuchElementException;
import java.util.Spliterator;
import java.util.function.Consumer;

class MyStreamSpliterator implements Spliterator<String> { // or whichever datatype
    private final MyReaderClass reader;

    public MyStreamSpliterator(MyReaderClass reader) {
        this.reader = reader;
    }

    @Override
    public boolean tryAdvance(Consumer<? super String> action) {
        // Alternative: if you really really want to use n iterations,
        // add a counter and use it.
        try {
            String nextval = reader.read();
            action.accept(nextval);
            return true;
        } catch (NoSuchElementException e) {
            // cleanup if necessary
            return false;
        }
    }

    @Override
    public Spliterator<String> trySplit() {
        return null; // we don't split
    }

    @Override
    public long estimateSize() {
        return Long.MAX_VALUE; // or the correct value, if you know it before
    }

    @Override
    public int characteristics() {
        // add SIZED if you know the size
        return Spliterator.IMMUTABLE | Spliterator.ORDERED;
    }
}
Then, create your stream as StreamSupport.stream(new MyStreamSpliterator(reader), false)
Disclaimer: I just threw this together in the SO editor, probably there are some errors.
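If you really want to stop after exactly n calls, as the comment in tryAdvance hints, a counter-based variant could look roughly like the following (a quick sketch in the same spirit as the disclaimer above; MyReaderClass, reader and n are the placeholders already used in this answer):
class LimitedReaderSpliterator extends Spliterators.AbstractSpliterator<String> {
    private final MyReaderClass reader;
    private long remaining;

    LimitedReaderSpliterator(MyReaderClass reader, long n) {
        super(n, Spliterator.ORDERED | Spliterator.SIZED); // exact size known up front
        this.reader = reader;
        this.remaining = n;
    }

    @Override
    public boolean tryAdvance(Consumer<? super String> action) {
        if (remaining <= 0) {
            return false; // never call read() more than n times
        }
        remaining--;
        action.accept(reader.read());
        return true;
    }
}

Stream<String> s = StreamSupport.stream(new LimitedReaderSpliterator(reader, n), false);
Because the counter is decremented before every read(), the method is never invoked more than n times, and the SIZED characteristic tells the stream its exact length.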
I think all of the resources I have studied emphasize, one way or another, that a stream can be consumed only once, and that the consumption is done by so-called terminal operations (which is very clear to me).
Just out of curiosity I tried this:
import java.util.stream.IntStream;
class App {
    public static void main(String[] args) {
        IntStream is = IntStream.of(1, 2, 3, 4);
        is.map(i -> i + 1);
        int sum = is.sum();
    }
}
which ends up throwing a Runtime Exception:
Exception in thread "main" java.lang.IllegalStateException: stream has already been operated upon or closed
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:229)
at java.util.stream.IntPipeline.reduce(IntPipeline.java:456)
at java.util.stream.IntPipeline.sum(IntPipeline.java:414)
at App.main(scratch.java:10)
As usual, I am probably missing something, but I still want to ask: as far as I know, map is an intermediate (and lazy) operation and does nothing on the Stream by itself. Only when the terminal operation sum (which is an eager operation) is called does the Stream get consumed and operated on.
But why do I have to chain them?
What is the difference between
is.map(i -> i + 1);
is.sum();
and
is.map(i -> i + 1).sum();
?
When you do this:
int sum = IntStream.of(1, 2, 3, 4).map(i -> i + 1).sum();
Every chained method is being invoked on the return value of the previous method in the chain.
So map is invoked on what IntStream.of(1, 2, 3, 4) returns and sum on what map(i -> i + 1) returns.
You don't have to chain stream methods, but it's more readable and less error-prone than using this equivalent code:
IntStream is = IntStream.of(1, 2, 3, 4);
is = is.map(i -> i + 1);
int sum = is.sum();
Which is not the same as the code you've shown in your question:
IntStream is = IntStream.of(1, 2, 3, 4);
is.map(i -> i + 1);
int sum = is.sum();
As you see, you're disregarding the reference returned by map. This is the cause of the error.
EDIT (as per the comments, thanks to @IanKemp for pointing this out): Actually, this is only the external cause of the error. If you stop to think about it, map must be doing something internally to the stream itself, otherwise how would the terminal operation trigger the transformation passed to map on each element? I agree that intermediate operations are lazy, i.e. when invoked, they do nothing to the elements of the stream. But internally, they must configure some state in the stream pipeline itself, so that they can be applied later.
Although I'm not aware of the full details, conceptually map is doing at least two things:
It's creating and returning a new stream that holds the function passed as an argument somewhere, so that it can be applied to elements later, when the terminal operation is invoked.
It is also setting a flag on the old stream instance, i.e. the one it has been called on, indicating that this stream instance no longer represents a valid state of the pipeline. This is because the new, updated state which holds the function passed to map is now encapsulated by the instance it has returned. (I believe this decision might have been taken by the JDK team to make errors appear as early as possible, i.e. by throwing an early exception instead of letting the pipeline go on with an invalid/old state that doesn't hold the function to be applied, thus letting the terminal operation return unexpected results.)
Later on, when a terminal operation is invoked on this instance flagged as invalid, you get that IllegalStateException. The two items above constitute the deeper, internal cause of the error.
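For instance (an illustration of the behavior of the OpenJDK implementation; the specification only says an implementation may throw), a second intermediate operation on the same instance already fails, before any terminal operation runs:
IntStream is = IntStream.of(1, 2, 3, 4);
is.map(i -> i + 1);      // links 'is' to a new downstream stage and flags it as used
is.filter(i -> i > 2);   // throws IllegalStateException right here, not at a terminal operation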
Another way to see all this is that a Stream instance must be operated on only once, by either an intermediate or a terminal operation. Here you are violating this requirement, because you are calling both map and sum on the same instance.
In fact, javadocs for Stream state it clearly:
A stream should be operated on (invoking an intermediate or terminal stream operation) only once. This rules out, for example, "forked" streams, where the same source feeds two or more pipelines, or multiple traversals of the same stream. A stream implementation may throw IllegalStateException if it detects that the stream is being reused. However, since some stream operations may return their receiver rather than a new stream object, it may not be possible to detect reuse in all cases.
Imagine the IntStream is a wrapper around your data stream with an immutable list of operations. These operations are not executed until you need the final result (sum in your case).
Since the list is immutable, you need a new instance of IntStream with a list that contains the previous items plus the new one, which is what map returns.
This means that if you don't chain, you will operate on the old instance, which does not have that operation.
The stream library also keeps some internal tracking of what's going on; that's why it's able to throw the exception in the sum step.
If you don't want to chain, you can use a variable for each step:
IntStream is = IntStream.of(1, 2, 3, 4);
IntStream is2 = is.map(i -> i + 1);
int sum = is2.sum();
Intermediate operations return a new stream. They are always lazy; executing an intermediate operation such as filter() does not actually perform any filtering, but instead creates a new stream that, when traversed, contains the elements of the initial stream that match the given predicate.
Taken from https://docs.oracle.com/javase/8/docs/api/java/util/stream/package-summary.html under "Stream Operations and Pipelines"
At the lowest level, all streams are driven by a spliterator.
Taken from the same link under "Low-level stream construction"
Traversal and splitting exhaust elements; each Spliterator is useful for only a single bulk computation.
Taken from https://docs.oracle.com/javase/8/docs/api/java/util/Spliterator.html
stream.parallel().skip(1)
vs
stream.skip(1).parallel()
This is about Java 8 streams.
Are both of these skipping the 1st line/entry?
The example is something like this:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.concurrent.atomic.AtomicLong;
public class Test010 {
    public static void main(String[] args) {
        String message =
"a,b,c\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n1,2,3\n4,5,6\n7,8,9\n";
        try (BufferedReader br = new BufferedReader(new StringReader(message))) {
            AtomicLong cnt = new AtomicLong(1);
            br.lines().parallel().skip(1).forEach(
                s -> {
                    System.out.println(cnt.getAndIncrement() + "->" + s);
                }
            );
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Earlier today, I was sometimes getting the header line "a,b,c" in the lambda expression. This was a surprise since I was expecting to have skipped it already. Now I cannot get that example to work, i.e. I cannot get the header line in the lambda expression. So I am pretty confused now; maybe something else was influencing that behavior. Of course this is just an example. In the real world the message is being read from a CSV file, and the message is the full content of that CSV file.
You actually have two questions in one, the first being whether it makes a difference in writing stream.parallel().skip(1) or stream.skip(1).parallel(), the second being whether either or both will always skip the first element. See also “loaded question”.
The first answer is that it makes no difference, because specifying a .sequential() or .parallel() execution policy affects the entire Stream pipeline, regardless of where you place it in the call chain—of course, unless you specify multiple contradicting policies, in which case the last one wins.
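You can verify this yourself: the parallel flag belongs to the whole pipeline, so the position of the call does not matter (a small sketch using isParallel(), which reports the pipeline's execution mode):
System.out.println(Stream.of(1, 2, 3).parallel().skip(1).isParallel()); // true
System.out.println(Stream.of(1, 2, 3).skip(1).parallel().isParallel()); // true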
So in either case you are requesting a parallel execution which might affect the outcome of the skip operation, which is subject of the second question.
The answer is not that simple. If the Stream has no defined encounter order in the first place, an arbitrary element might get skipped, which is a consequence of the fact that there is no “first” element, even if there might be an element you encounter first when iterating over the source.
If you have an ordered Stream, skip(1) should skip the first element, but this has been laid down only recently. As discussed in “Stream.skip behavior with unordered terminal operation”, chaining an unordered terminal operation had an effect on the skip operation in earlier implementations and there was some uncertainty of whether this could even be intentional, as visible in “Is this a bug in Files.lines(), or am I misunderstanding something about parallel streams?”, which happens to be close to your code; apparently skipping the first line is a common case.
The final word is that the behavior of earlier JREs is a bug and skip(1) on an ordered stream should skip the first element, even when the stream pipeline is executed in parallel and the terminal operation is unordered. The associated bug report names jdk1.8.0_60 as the first fixed version, which I could verify. So if you are using an older implementation, you might experience the Stream skipping different elements when using .parallel() and the unordered .forEach(…) terminal operation. It's not contradicting if the implementation occasionally skips the expected element; that's the unpredictability of multi-threading.
So the answer still is that stream.parallel().skip(1) and stream.skip(1).parallel() have the same behavior, even when being used in earlier versions, as both are equally unpredictable when being used with an unordered terminal operation like forEach. They should always skip the first element with ordered Streams and when being used with 1.8.0_60 or newer, they do.
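For instance, with a list source (which has a defined encounter order) and an order-preserving terminal operation, the first element is reliably the one that gets skipped on current implementations (a minimal sketch, not taken from the original answers):
List<String> lines = Arrays.asList("header", "row1", "row2", "row3");
List<String> withoutHeader = lines.stream()
        .parallel()
        .skip(1)                        // ordered source: the header is the element skipped
        .collect(Collectors.toList());  // collect preserves encounter order, even in parallel
System.out.println(withoutHeader);      // [row1, row2, row3]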
Yes, but skip(n) gets slower as n gets larger on a parallel stream.
Here's the API note from skip():
While skip() is generally a cheap operation on sequential stream pipelines, it can be quite expensive on ordered parallel pipelines, especially for large values of n, since skip(n) is constrained to skip not just any n elements, but the first n elements in the encounter order. Using an unordered stream source (such as generate(Supplier)) or removing the ordering constraint with BaseStream.unordered() may result in significant speedups of skip() in parallel pipelines, if the semantics of your situation permit. If consistency with encounter order is required, and you are experiencing poor performance or memory utilization with skip() in parallel pipelines, switching to sequential execution with BaseStream.sequential() may improve performance.
So essentially, if you want better performance with skip(), don't use a parallel stream, or use an unordered stream.
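If the semantics of your situation allow dropping any one element rather than specifically the first, a sketch of the unordered variant mentioned in the API note could look like this (which single element gets skipped is then unspecified):
long sum = LongStream.rangeClosed(1, 1_000_000)
        .parallel()
        .unordered()   // release the ordering constraint: skip(1) may now drop any one element
        .skip(1)
        .sum();
System.out.println(sum);   // 500000500000 minus whichever single element was dropped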
As for it seeming to not work with parallel streams, perhaps you're actually seeing that the elements are no longer ordered? For example, an output of this code:
Stream.of("Hello", "How", "Are", "You?")
.parallel()
.skip(1)
.forEach(System.out::println);
Is
Are
You?
How
Ideone Demo
This is perfectly fine because forEach doesn't enforce the encounter order in a parallel stream. If you want it to enforce the encounter order, use a sequential stream (and perhaps use forEachOrdered so that your intent is obvious).
Stream.of("Hello", "How", "Are", "You?")
.skip(1)
.forEachOrdered(System.out::println);
How
Are
You?
I'm processing a potentially infinite stream of data elements that follow the pattern:
E1 <start mark>
E2 foo
E3 bah
...
En-1 bar
En <end mark>
That is, a stream of <String>s, which must be accumulated in a buffer before I can map them to my object model.
Goal: aggregate a Stream<String> into a Stream<ObjectDefinedByStrings> without the overhead of collecting on an infinite stream.
In English, the code would be something like: "Once you see a start marker, start buffering. Buffer until you see an end marker, then get ready to return the old buffer, and prepare a fresh buffer. Return the old buffer."
My current implementation has the form:
Data<String>.stream()
            .map(functionReturningAnOptionalPresentOnlyIfObjectIsComplete)
            .filter(Optional::isPresent)
I have several questions:
What is this operation properly called? (i.e. what can I Google for more examples? Every discussion I find of .map() talks about 1:1 mapping. Every discussion of .reduce() talks about n:1 reduction. Every discussion of .collect() talks about accumulating as a terminal operation...)
This seems bad in many different ways. Is there a better way of implementing this? (A candidate of the form .collectUntilConditionThenApplyFinisher(Collector,Condition,Finisher)...?)
Thanks!
To avoid your kludge you could filter before mapping.
Data<String>.stream()
            .filter(text -> canBeConvertedToObject(text))
            .map(text -> convertToObject(text))
That works perfectly well on an infinite stream and only constructs objects that need to be constructed. It also avoids the overhead of creating unnecessary Optional objects.
Unfortunately there's no partial reduce operation in the Java 8 Stream API. However, such an operation is implemented in my StreamEx library, which enhances standard Java 8 Streams. So your task can be solved like this:
Stream<ObjectDefinedByStrings> result =
    StreamEx.of(strings)
            .groupRuns((a, b) -> !b.contains("<start mark>"))
            .map(stringList -> constructObjectDefinedByStrings());
Here strings is a normal Java 8 stream or another source like an array, Collection, Spliterator, etc. It works fine with infinite or parallel streams. The groupRuns method takes a BiPredicate which is applied to two adjacent stream elements and returns true if these elements must be grouped. Here we say that elements should be grouped unless the second one contains "<start mark>" (which is the start of a new element). After that you will get a stream of List<String> elements.
If collecting to the intermediate lists is not appropriate for you, you can use the collapse(BiPredicate, Collector) method and specify a custom Collector to perform the partial reduction. For example, you may want to join all the strings together:
Stream<ObjectDefinedByStrings> result =
    StreamEx.of(strings)
            .collapse((a, b) -> !b.contains("<start mark>"), Collectors.joining())
            .map(joinedString -> constructObjectDefinedByStrings());
I propose 2 more use cases for this partial reduction:
1. Parsing SQL and PL/SQL (Oracle procedural) statements
The standard delimiter for SQL statements is the semicolon (;). It separates normal SQL statements from each other. But if you have a PL/SQL statement, the semicolon also separates operators inside the statement, not only statements as a whole.
One way of parsing a script file containing both normal SQL and PL/SQL statements is to first split it by semicolons and then, if a particular statement starts with specific keywords (DECLARE, BEGIN, etc.), join this statement with the following statements according to the rules of the PL/SQL grammar.
By the way, this cannot be done using the StreamEx partial-reduce operations, since they only test two adjacent elements: you need to know about the previous stream elements, starting from the initial PL/SQL keyword, to determine whether the current element should be included in the partial reduction or the reduction should be finished. In this case a mutable partial reduction may be usable, with a collector holding information about the already collected elements and either a Predicate testing only the collector itself (to decide whether the partial reduction should be finished) or a BiPredicate testing both the collector and the current stream element.
In theory, we're speaking about implementing an LR(0) or LR(1) parser (see https://en.wikipedia.org/wiki/LR_parser) using the Stream pipeline ideology. An LR parser can be used to parse the syntax of most programming languages.
A parser is a finite automaton with a stack. In the case of an LR(0) automaton, its transitions depend on the stack only. In the case of an LR(1) automaton, they depend on both the stack and the next element from the stream (theoretically there can be LR(2), LR(3), etc. automata peeking at the next 2, 3, etc. elements to determine the transition, but in practice all programming languages are syntactically LR(1) languages).
To implement such a parser there should be a Collector containing the stack of the finite automaton and a predicate testing whether the final state of this automaton has been reached (so we can stop the reduction). In the case of LR(0) it should be a Predicate testing the Collector itself, and in the case of LR(1) it should be a BiPredicate testing both the Collector and the next element from the stream (since the transition depends on both the stack and the next symbol).
So to implement an LR(0) parser we would need something like the following (T is the stream element type, A is the accumulator holding both the finite-automaton stack and the result, and R is the result of each parser run, forming the output stream):
<R, A> Stream<R> Stream<T>.parse(
        Collector<T, A, R> automataCollector,
        Predicate<A> isFinalState)
(I removed complexity like ? super T instead of T for compactness; the resulting API should contain these.)
To implement an LR(1) parser we would need something like the following:
<R, A> Stream<R> Stream<T>.parse(
        BiPredicate<A, T> isFinalState,
        Collector<T, A, R> automataCollector)
NOTE: In this case the BiPredicate should test the element before it is consumed by the accumulator; remember that an LR(1) parser peeks at the next element to determine the transition. So there is a potential corner case if an empty accumulator refuses to accept the next element (the BiPredicate returns true, signaling that the partial reduction is over, for an empty accumulator just created by the Supplier and the next stream element).
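For illustration, here is a hedged sketch of how the LR(0) variant could be built on top of a Spliterator (this is not an existing library method; the parameter names are the hypothetical ones from above, and it is written as a static helper since we cannot add methods to Stream itself, assuming the usual java.util, java.util.function and java.util.stream imports). It keeps feeding elements into the Collector's accumulator until isFinalState reports that the automaton is done, then emits the finished result and starts a fresh accumulator:
static <T, A, R> Stream<R> parse(Stream<T> source,
                                 Collector<T, A, R> automataCollector,
                                 Predicate<A> isFinalState) {
    Spliterator<T> src = source.spliterator();
    Spliterator<R> parsed = new Spliterators.AbstractSpliterator<R>(Long.MAX_VALUE, Spliterator.ORDERED) {
        @Override
        public boolean tryAdvance(Consumer<? super R> action) {
            A acc = automataCollector.supplier().get();
            boolean[] consumedAny = {false};
            // feed the automaton until it reaches a final state or the source dries up
            while (!isFinalState.test(acc)
                    && src.tryAdvance(t -> {
                           automataCollector.accumulator().accept(acc, t);
                           consumedAny[0] = true;
                       })) {
                // nothing to do here; the work happens in the lambda above
            }
            if (!consumedAny[0]) {
                return false; // source exhausted, no more results
            }
            // emit the finished (or, at end of input, partial) result
            action.accept(automataCollector.finisher().apply(acc));
            return true;
        }
    };
    return StreamSupport.stream(parsed, false).onClose(source::close);
}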
2. Conditional batching based on stream element type
When we're executing SQL statements we want to merge adjacent data-modification (DML) statements into a single batch (see the JDBC API) to improve overall performance. But we don't want to batch queries. So we need conditional batching (instead of unconditional batching like in "Java 8 Stream with batch processing").
For this specific case the StreamEx partial-reduce operations can be used, since if both adjacent elements tested by the BiPredicate are DML statements, they should be included in the same batch. So we don't need to know the previous history of the batch collection.
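For this simple, size-unbounded case, a sketch using the collapse operation mentioned in the earlier answer could look like this (statements and isDml are assumed placeholders):
Stream<List<String>> batches =
    StreamEx.of(statements)
            .collapse((a, b) -> isDml(a) && isDml(b), Collectors.toList());
// each emitted list is either a run of adjacent DML statements (one JDBC batch)
// or a single non-DML statement such as a query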
But we can increase the complexity of the task and say that batches should be limited in size, say, to no more than 100 DML statements per batch. In this case we cannot ignore the previous batch-collection history, and using a BiPredicate to determine whether batch collection should be continued or stopped is insufficient.
Though we could add a flatMap after the StreamEx partial reduction to split long batches into parts, this would delay the execution of a specific 100-element batch until all adjacent DML statements had been collected into one unlimited batch. Needless to say, this is against the pipeline ideology: we want to minimize buffering to maximize the speed between input and output. Moreover, unlimited batch collection may result in an OutOfMemoryError in the case of a very long run of DML statements without any queries in between (say, a million INSERTs as the result of a database export), which is intolerable.
So in the case of this more complex conditional batch collection with an upper limit, we also need something as powerful as the LR(0) parser described in the previous use case.