Will this make a faster parallel stream? - java

The OCP book says that all streams are ordered by default but that it is possible to turn an ordered stream into an unordered stream using the unordered() method.
It also says that this method can greatly improve performance when used as an intermediate operation before making the stream parallel. My question is: will the first parallel stream below be faster than the second one?
Arrays.asList(1,2,3,4,5,6).stream().unordered().parallel()
Arrays.asList(1,2,3,4,5,6).parallelStream()
PS: I know a parallel stream doesn't increase performance when working with a small collection, but let's pretend we are working with a very large collection here.
The second stream is still ordered right? So will the first one have better performance?
Thank you

You state that all streams are ordered by default: that's not the case. For example, if your source is a HashSet, the resulting stream will not be ordered.
Regarding your question on making a parallel stream unordered to "greatly improve performance": as always when it comes to performance, it depends (on the terminal operation, on the intermediate operations, on the size of the stream etc.)
The java.util.stream package javadoc gives some pointers that answer your question, at least in part:
For parallel streams, relaxing the ordering constraint can sometimes enable more efficient execution. Certain aggregate operations, such as filtering duplicates (distinct()) or grouped reductions (Collectors.groupingBy()) can be implemented more efficiently if ordering of elements is not relevant. Similarly, operations that are intrinsically tied to encounter order, such as limit(), may require buffering to ensure proper ordering, undermining the benefit of parallelism. In cases where the stream has an encounter order, but the user does not particularly care about that encounter order, explicitly de-ordering the stream with unordered() may improve parallel performance for some stateful or terminal operations. However, most stream pipelines, such as the "sum of weight of blocks" example above, still parallelize efficiently even under ordering constraints.

For the case that you have shown here, absolutely not. There are way too few elements here. Generally you should measure and then conclude, but this one is almost a no-brainer.
Also read this: Parallel Processing
The thing about unordered is that while executing the terminal operation, the Stream pipeline has to maintain order, and that means additional costs. If there is no order to maintain, the stream is faster.
Notice that once you have called unordered, there is no way to get that order back. You could sort, but that might not reproduce the initial encounter order.
The same applies, for example, to findFirst versus findAny when processing in parallel.
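To make the trade-off concrete, here is a small sketch (illustrative only: with this few elements the difference is not measurable, and real conclusions require benchmarking, e.g. with JMH). It uses distinct(), one of the stateful operations the javadoc names as benefiting from de-ordering:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class UnorderedDemo {
    public static void main(String[] args) {
        // Ordered parallel distinct(): must keep the first occurrence of
        // each value in encounter order, so the result is deterministic.
        List<Integer> ordered = IntStream.range(0, 1_000)
                .map(i -> i % 10)          // lots of duplicates
                .boxed()
                .parallel()
                .distinct()
                .collect(Collectors.toList());
        System.out.println(ordered);       // always [0, 1, 2, ..., 9]

        // Unordered parallel distinct(): same element set, but the
        // implementation is free to use a cheaper, order-agnostic strategy.
        long count = IntStream.range(0, 1_000)
                .map(i -> i % 10)
                .boxed()
                .unordered()
                .parallel()
                .distinct()
                .count();
        System.out.println(count);         // 10
    }
}
```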

Related

Why is takeWhile stateful?

The Javadoc states that
This is a short-circuiting stateful intermediate operation.
Definition of stateful from Javadoc:
Stateful operations, such as distinct and sorted, may incorporate state from previously seen elements when processing new elements.
Stateful operations may need to process the entire input before producing a result. For example, one cannot produce any results from sorting a stream until one has seen all elements of the stream. As a result, under parallel computation, some pipelines containing stateful intermediate operations may require multiple passes on the data or may need to buffer significant data. Pipelines containing exclusively stateless intermediate operations can be processed in a single pass, whether sequential or parallel, with minimal data buffering.
How is default Stream<T> takeWhile​(Predicate<? super T> predicate) stateful? It does not need to look at the entire input, etc.
It's almost like filter but short-circuiting.
Well, takeWhile should process the longest prefix of the Stream that satisfies the given Predicate. This means that in order to know if a given element of the Stream should be processed by takeWhile, you may have to process all the elements preceding it.
Hence, you need to know the state of the processing of the previous elements of the Stream in order to know how to process the current element.
In sequential Streams you don't have to keep state, since once you reach the first element that doesn't match the Predicate, you know you are done.
In parallel Streams, however, this becomes much trickier.
It is stateful in that it changes its behavior based on internal state (whether it has already seen an element matching the predicate). It does not process elements independently from each other. This may disable certain optimizations and may reduce the usefulness of processing in parallel.
So it is stateful in the same way limit and skip are stateful - the outcome does not (only) depend on the current element, but also on elements preceding it.
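A quick contrast with the stateless filter makes this concrete (requires Java 9+ for takeWhile):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class TakeWhileDemo {
    public static void main(String[] args) {
        // filter() decides per element, independently of the others:
        List<Integer> filtered = Stream.of(1, 2, 3, 4, 1, 2)
                .filter(i -> i < 3)
                .collect(Collectors.toList());
        System.out.println(filtered);   // [1, 2, 1, 2]

        // takeWhile() stops at the first non-matching element, so its
        // decision for the trailing 1 and 2 depends on the 3 seen earlier:
        List<Integer> taken = Stream.of(1, 2, 3, 4, 1, 2)
                .takeWhile(i -> i < 3)
                .collect(Collectors.toList());
        System.out.println(taken);      // [1, 2]
    }
}
```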

In Java 8, does a sequential and ordered Stream guarantee performing operations in encounter order?

Is there any guarantee that operations on a sequential and ordered Stream are processed in encounter order?
I mean, if I have code like this:
IntStream.range(0, 5)
    .map(i -> {
        myFunction(i);
        return i * 2;
    })
    .boxed()
    .collect(toList());
is there a guarantee, that it will perform myFunction() calls in the encounter order of the generated range?
I found draft JavaDocs for the Stream class which explicitly state this:
For sequential stream pipelines, all operations are performed in the encounter order of the pipeline source, if the pipeline source has a defined encounter order.
but in the official JavaDocs this line was removed. It now discusses encounter order only for selected methods. The Package java.util.stream doc in the Side-effects paragraph states:
Even when a pipeline is constrained to produce a result that is consistent with the encounter order of the stream source (for example, IntStream.range(0,5).parallel().map(x -> x*2).toArray() must produce [0, 2, 4, 6, 8]), no guarantees are made as to the order in which the mapper function is applied to individual elements, or in what thread any behavioral parameter is executed for a given element.
but it says nothing about sequential streams and the example is for a parallel stream (My understanding is that it's true for both sequential and parallel streams, but this is the part I'm not sure about).
On the other hand, it also states in the Ordering section:
If a stream is ordered, most operations are constrained to operate on the elements in their encounter order; if the source of a stream is a List containing [1, 2, 3], then the result of executing map(x -> x*2) must be [2, 4, 6]. However, if the source has no defined encounter order, then any permutation of the values [2, 4, 6] would be a valid result.
but this time it starts with "operating on the elements", while the example is about the resulting stream, so I'm not sure they are taking side effects into account, and side effects are really what this question is about.
I think we can learn a lot from the fact that this explicit sentence has been removed. This question seems to be closely related to the question “Does Stream.forEach respect the encounter order of sequential streams?”. The answer from Brian Goetz basically says that despite the fact that there’s no scenario where the order is ignored by the Stream’s current implementation when forEach is invoked on a sequential Stream, forEach has the freedom to ignore the encounter order even for sequential Streams per specification.
Now consider the following section of Stream’s class documentation:
To perform a computation, stream operations are composed into a stream pipeline. A stream pipeline consists of a source (which might be an array, a collection, a generator function, an I/O channel, etc), zero or more intermediate operations (which transform a stream into another stream, such as filter(Predicate)), and a terminal operation (which produces a result or side-effect, such as count() or forEach(Consumer)). Streams are lazy; computation on the source data is only performed when the terminal operation is initiated, and source elements are consumed only as needed.
Since it is the terminal operation which determines whether elements are needed and whether they are needed in the encounter order, a terminal action’s freedom to ignore the encounter order also implies consuming, hence processing, the elements in an arbitrary order.
Note that not only forEach can do that. A Collector has the ability to report an UNORDERED characteristic, e.g. Collectors.toSet() does not depend on the encounter order. It’s obvious that also an operation like count() doesn’t depend on the order—in Java 9 it may even return without any element processing. Think of IntStream#sum() for another example.
In the past, the implementation was too eager in propagating an unordered characteristic up the stream, see “Is this a bug in Files.lines(), or am I misunderstanding something about parallel streams?” where the terminal operation affected the outcome of a skip step, which is the reason why the current implementation is reluctant about such optimizations to avoid similar bugs, but that doesn’t preclude the reappearance of such optimizations, then being implemented with more care…
So at the moment it’s hard to imagine how an implementation could ever gain a performance benefit from exploiting the freedom of unordered evaluations in a sequential Stream, but, as stated in the forEach-related question, that doesn’t imply any guarantees.
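The UNORDERED characteristic mentioned above is something you can inspect directly: Collectors.toSet() reports it, while Collectors.toList() does not, which is why a collect into a Set is free to ignore encounter order:

```java
import java.util.stream.Collector;
import java.util.stream.Collectors;

public class CharacteristicsDemo {
    public static void main(String[] args) {
        // toSet() reports UNORDERED: a terminal collect using it may
        // ignore encounter order, even on a sequential stream.
        System.out.println(Collectors.toSet().characteristics()
                .contains(Collector.Characteristics.UNORDERED));   // true

        // toList() does not report it, so it must respect encounter order.
        System.out.println(Collectors.toList().characteristics()
                .contains(Collector.Characteristics.UNORDERED));   // false
    }
}
```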

What's the difference between Stream.map(...) and Collectors.mapping(...)?

I've noticed many functionalities exposed in Stream are apparently duplicated in Collectors, such as Stream.map(Foo::bar) versus Collectors.mapping(Foo::bar, ...), or Stream.count() versus Collectors.counting(). What's the difference between these approaches? Is there a performance difference? Are they implemented differently in some way that affects how well they can be parallelized?
The collectors that appear to duplicate functionality in Stream exist so they can be used as downstream collectors for collector combinators like groupingBy().
As a concrete example, suppose you want to compute "count of transactions by seller". You could do:
Map<Seller, Long> salesBySeller =
txns.stream()
.collect(groupingBy(Txn::getSeller, counting()));
Without collectors like counting() or mapping(), these kinds of queries would be much more difficult.
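Similarly, mapping() transforms the values within each group, which Stream.map() cannot express because mapping before the collect would already have discarded the grouping key. A sketch (Txn here is a hypothetical record standing in for the types above; records require Java 16+):

```java
import java.util.List;
import java.util.Map;
import static java.util.stream.Collectors.groupingBy;
import static java.util.stream.Collectors.mapping;
import static java.util.stream.Collectors.toList;

public class DownstreamDemo {
    // Hypothetical type for illustration only.
    record Txn(String seller, int amount) {}

    public static void main(String[] args) {
        List<Txn> txns = List.of(
                new Txn("alice", 10), new Txn("bob", 5), new Txn("alice", 7));

        // mapping() as a downstream collector: "amounts by seller".
        Map<String, List<Integer>> amountsBySeller = txns.stream()
                .collect(groupingBy(Txn::seller, mapping(Txn::amount, toList())));
        System.out.println(amountsBySeller.get("alice"));  // [10, 7]
        System.out.println(amountsBySeller.get("bob"));    // [5]
    }
}
```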
There's a big difference. The stream operations can be divided into two groups:
Intermediate operations - Stream.map, Stream.flatMap, Stream.filter. These produce an instance of Stream and are always lazy, i.e. no actual traversal of the Stream elements happens. These operations are used to build the transformation chain.
Terminal operations - Stream.collect, Stream.findFirst, Stream.reduce etc. These do the actual work, i.e. perform the operations of the transformation chain on the stream, producing a terminal value, which could be a List, a count of elements, the first element, etc.
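The laziness of intermediate operations can be observed directly: a counter incremented inside map() stays at zero until a terminal operation actually runs the pipeline:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class LazinessDemo {
    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();

        // Intermediate operation: builds the pipeline, runs nothing yet.
        Stream<Integer> pipeline = Stream.of(1, 2, 3)
                .map(i -> { calls.incrementAndGet(); return i * 2; });
        System.out.println(calls.get());   // 0: map() has not executed

        // Terminal operation: triggers the actual traversal.
        pipeline.forEach(i -> {});
        System.out.println(calls.get());   // 3
    }
}
```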
Take a look at the Stream package summary javadoc for more information.

Most efficient collection for filtering a Java Stream?

I'm storing several Things in a Collection. The individual Things are unique, but their types aren't. The order in which they are stored also doesn't matter.
I want to use Java 8's Stream API to search it for a specific type with this code:
Collection<Thing> things = ...;
// ... populate things ...
Stream<Thing> filtered = things.stream().filter(thing -> thing.type.equals(searchType));
Is there a particular Collection that would make the filter() more efficient?
I'm inclined to think no, because the filter has to iterate through the entire collection.
On the other hand, if the collection is some sort of tree that is indexed by the Thing.type then the filter() might be able to take advantage of that fact. Is there any way to achieve this?
Stream operations like filter are not specialized enough to take advantage of special cases. For example, IntStream.range(0, 1_000_000_000).filter(x -> x > 999_999_000) will actually iterate over all the input numbers; it cannot just "skip" the first 999,999,000. So your question reduces to finding the collection with the most efficient iteration.
The iteration is usually performed in the Spliterator.forEachRemaining method (for non-short-circuiting streams) and in the Spliterator.tryAdvance method (for short-circuiting streams), so you can take a look at the corresponding spliterator implementation and check how efficient it is. In my opinion the most efficient is an array (either bare or wrapped into a list with Arrays.asList): it has minimal overhead. ArrayList is also quite fast, but for short-circuiting operations it will check the modCount field (to detect concurrent modification) on every iteration, which adds a very slight overhead. Other types like HashSet or LinkedList are comparatively slower, though in most applications this difference is practically insignificant.
Note that parallel streams should be used with care. For example, the splitting of LinkedList is quite poor and you may experience worse performance than in sequential case.
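The splitting behavior is easy to observe via the collection's Spliterator: an ArrayList splits exactly in half, which is what makes it a good parallel source, whereas a LinkedList has to walk its nodes and splits into uneven batches. A small sketch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Spliterator;

public class SplitDemo {
    public static void main(String[] args) {
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i < 1000; i++) data.add(i);

        // trySplit() hands off a prefix; for ArrayList it is an even halving,
        // leaving the suffix in the original spliterator.
        Spliterator<Integer> suffix = data.spliterator();
        Spliterator<Integer> prefix = suffix.trySplit();
        System.out.println(prefix.estimateSize() + " " + suffix.estimateSize()); // 500 500
    }
}
```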
The most important thing to understand, regarding this question, is that when you pass a lambda expression to a particular library like the Stream API, all the library receives is an implementation of a functional interface, e.g. an instance of Predicate. It has no knowledge about what that implementation will do and therefore has no way to exploit scenarios like filtering sorted data via comparison. The stream library simply doesn’t know that the Predicate is doing a comparison.
An implementation doing such an optimization would need an interaction between the JVM, which knows and understands the code, and the library, which knows the semantics. Such a thing does not happen in the current implementation and is currently far away, at least as far as I can see.
If the source is a tree or sorted list and you want to benefit from that for filtering, you have to do it using APIs operating on the source, before creating the stream. E.g. suppose, we have a TreeSet and want to filter it to get items within a particular range, like
// our made-up source
TreeSet<Integer> tree=IntStream.range(0, 100).boxed()
.collect(Collectors.toCollection(TreeSet::new));
// the naive implementation
tree.stream().filter(i -> i>=65 && i<91).forEach(i->System.out.print((char)i.intValue()));
We can do instead:
tree.tailSet(65).headSet(91).stream().forEach(i->System.out.print((char)i.intValue()));
which will utilize the sorted/tree nature. When we have a sorted list instead, say
List<Integer> list=new ArrayList<>(tree);
utilizing the sorted nature is more complex as the collection itself doesn’t know that it’s sorted and doesn’t offer operations utilizing that directly:
int ix=Collections.binarySearch(list, 65);
if(ix<0) ix=~ix;
if(ix>0) list=list.subList(ix, list.size());
ix=Collections.binarySearch(list, 91);
if(ix<0) ix=~ix;
if(ix<list.size()) list=list.subList(0, ix);
list.stream().forEach(i->System.out.print((char)i.intValue()));
Of course, the stream operations here are only exemplary and you don’t need a stream at all, when all you do then is forEach…
As far as I am aware, there's no such differentiation for normal streaming.
However, you might be better off with parallel streaming if you use a collection that is easily splittable, like an ArrayList rather than a LinkedList or any type of Set.

How stream's pipeline works in java like IntPipeline

I'm learning about Java 8 streams and some questions came up.
Suppose this code:
new Random().ints().forEach(System.out::println);
Internally, at some point, it calls IntPipeline, which I think is responsible for generating those ints indefinitely. The Streams implementation is hard to understand by looking at the Java source.
Can you give a brief explanation, or point to some good, easy-to-understand material, about how streams are generated and how the operations in the pipeline are connected? In the code above, for example, the integers are generated randomly; how is that connection made?
The Stream implementation is separated to Spliterator (which is input-specific code) and pipeline (which is input-independent code). The Spliterator is similar to Iterator. The main differences are the following:
It can split itself into two parts (the trySplit method). For an ordered spliterator the parts are a prefix and a suffix (for an array, for example, they could be the first half and the last half). For unordered sources (like random numbers) both parts can just generate some of the elements. The resulting parts are able to split further (unless they become too small). This feature is crucial for parallel stream processing.
It can report its size, either exact or estimated. The exact size may be used to preallocate memory for some stream operations like toArray(), or simply to return it to the caller (like count() in Java 9). The estimated size is used in parallel stream processing to decide when to stop splitting.
It can report some characteristics like ORDERED, SORTED, DISTINCT, etc.
It implements internal iteration: instead of the two methods hasNext and next you have the single method tryAdvance, which executes the provided Consumer once if there are elements left.
There are also primitive specializations of Spliterator interface (Spliterator.OfInt, etc.) which can help you process primitive values like int, long or double efficiently.
Thus to create your own Stream datasource you have to implement Spliterator, then call StreamSupport.stream(mySpliterator, isParallel) to create the Stream and StreamSupport.int/long/doubleStream for primitive specializations. So actually Random.ints calls StreamSupport.intStream providing its own spliterator. You don't have to implement all the Stream operations by yourself. In general Stream interface is implemented only once per stream type in JDK for different sources. There's basic abstract class AbstractPipeline and four implementations (ReferencePipeline for Stream, IntPipeline for IntStream, LongPipeline for LongStream and DoublePipeline for DoubleStream). But you have much more sources (Collection.stream(), Arrays.stream(), IntStream.range, String.chars(), BufferedReader.lines(), Files.lines(), Random.ints(), and so on, even more to appear in Java-9). All of these sources are implemented using custom spliterators. Implementing the Spliterator is much simpler than implementing the whole stream pipeline (especially taking into account the parallel processing), so such separation makes sense.
If you want to create your own stream source, you may start extending AbstractSpliterator. In this case you only have to implement tryAdvance and call superclass constructor providing the estimated size and some characteristics. The AbstractSpliterator provides default splitting behavior by reading a part of your source into array (calling your implemented tryAdvance method) and creating array-based spliterator for this prefix. Of course such strategy is not very performant and often affords only limited parallelism, but as a starting point it's ok. Later you can implement trySplit by yourself providing better splitting strategy.
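As a minimal sketch of that approach (the Collatz sequence source here is my own illustrative example, not something from the JDK): extend AbstractSpliterator, implement only tryAdvance, and hand the spliterator to StreamSupport.stream.

```java
import java.util.Spliterators;
import java.util.function.Consumer;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public class SpliteratorDemo {
    // A custom stream source emitting the Collatz sequence of a number.
    static class CollatzSpliterator extends Spliterators.AbstractSpliterator<Long> {
        private long current;
        private boolean done;

        CollatzSpliterator(long start) {
            // Size unknown, so report Long.MAX_VALUE; the sequence is
            // ORDERED and the source IMMUTABLE.
            super(Long.MAX_VALUE, ORDERED | IMMUTABLE);
            this.current = start;
        }

        @Override
        public boolean tryAdvance(Consumer<? super Long> action) {
            if (done) return false;
            action.accept(current);
            if (current == 1) { done = true; return true; }
            current = (current % 2 == 0) ? current / 2 : 3 * current + 1;
            return true;
        }
    }

    public static void main(String[] args) {
        Stream<Long> collatz =
                StreamSupport.stream(new CollatzSpliterator(6), false);
        System.out.println(collatz.collect(Collectors.toList()));
        // [6, 3, 10, 5, 16, 8, 4, 2, 1]
    }
}
```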
