Unexpected parallel stream performance in Java 8 - java

I ran into a performance issue when using a stream created via spliterator() over an Iterable, i.e., StreamSupport.stream(integerList.spliterator(), true). I wanted to verify this against a normal collection. Please see some benchmark results below.
Question:
Why is the parallel stream created from an Iterable much slower than the stream created from an ArrayList or an IntStream?
From a range
public void testParallelFromIntRange() {
    long start = System.nanoTime();
    IntStream stream = IntStream.rangeClosed(1, Integer.MAX_VALUE).parallel();
    System.out.println("Is Parallel: " + stream.isParallel());
    stream.forEach(ParallelStreamSupportTest::calculate);
    long end = System.nanoTime();
    System.out.println("ParallelStream from range Takes : " + TimeUnit.MILLISECONDS.convert((end - start),
            TimeUnit.NANOSECONDS) + " milli seconds");
}
Is Parallel: true
ParallelStream from range Takes : 490 milli seconds
From an Iterable
public void testParallelFromIterable() {
    Set<Integer> integerList = ContiguousSet.create(Range.closed(1, Integer.MAX_VALUE), DiscreteDomain.integers());
    long start = System.nanoTime();
    Stream<Integer> stream = StreamSupport.stream(integerList.spliterator(), true);
    System.out.println("Is Parallel: " + stream.isParallel());
    stream.forEach(ParallelStreamSupportTest::calculate);
    long end = System.nanoTime();
    System.out.println("ParallelStream from Iterable Takes : " + TimeUnit.MILLISECONDS.convert((end - start),
            TimeUnit.NANOSECONDS) + " milli seconds");
}
Is Parallel: true
ParallelStream from Iterable Takes : 12517 milli seconds
And the trivial calculate method:
public static Integer calculate(Integer input) {
    return input + 2;
}

Not all spliterators are created equally. One of the tasks of a spliterator is to decompose the source into two parts, that can be processed in parallel. A good spliterator will divide the source roughly in half (and will be able to continue to do so recursively.)
Now, imagine you are writing a spliterator for a source that is only described by an Iterator. What quality of decomposition can you get? Basically, all you can do is divide the source into "first" and "rest". That's about as bad as it gets. The result is a computation tree that is very "right-heavy".
The spliterator that you get from a data structure has more to work with; it knows the layout of the data, and can use that to give better splits, and therefore better parallel performance. The spliterator for ArrayList can always divide in half, and retains knowledge of exactly how much data is in each half. That's really good. The spliterator from a balanced tree can get good distribution (since each half of the tree has roughly half the elements), but isn't quite as good as the ArrayList spliterator because it doesn't know the exact sizes. The spliterator for a LinkedList is about as bad as it gets; all it can do is (first, rest). And the same for deriving a spliterator from an iterator.
Now, all is not necessarily lost; if the work per element is high, you can overcome bad splitting. But if you're doing a small amount of work per element, you'll be limited by the quality of splits from your spliterator.
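The difference is easy to observe directly. The sketch below is my own illustration (not from the answer); it compares one trySplit() on an ArrayList spliterator against one on an iterator-backed spliterator. The exact batch size the latter peels off is a JDK implementation detail, so treat the printed numbers as indicative only.
import java.util.ArrayList;
import java.util.List;
import java.util.Spliterator;
import java.util.Spliterators;

public class SplitQualityDemo {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) list.add(i);

        // ArrayList knows its layout: one split yields two exact halves.
        Spliterator<Integer> fromList = list.spliterator();
        Spliterator<Integer> half = fromList.trySplit();
        System.out.println(half.estimateSize() + " / " + fromList.estimateSize());

        // An iterator-backed spliterator can only peel elements off the front
        // (the JDK peels fixed-size batches) and doesn't know how many remain.
        Spliterator<Integer> fromIterator =
                Spliterators.spliteratorUnknownSize(list.iterator(), 0);
        Spliterator<Integer> prefix = fromIterator.trySplit();
        System.out.println(prefix.estimateSize() + " / " + fromIterator.estimateSize());
    }
}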

There are several problems with your benchmark.
Stream<Integer> cannot be compared to IntStream because of boxing overhead.
You aren't doing anything with the result of the calculation, which makes it hard to know whether the code is actually being run.
You are benchmarking with System.nanoTime instead of using a proper benchmarking tool.
Here's a JMH-based benchmark:
import com.google.common.collect.ContiguousSet;
import com.google.common.collect.DiscreteDomain;
import com.google.common.collect.Range;
import java.util.stream.IntStream;
import java.util.stream.Stream;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class Ranges {
    final static int SIZE = 10_000_000;

    @Benchmark
    public long intStream() {
        Stream<Integer> st = IntStream.rangeClosed(1, SIZE).boxed();
        return st.parallel().mapToInt(x -> x).sum();
    }

    @Benchmark
    public long contiguousSet() {
        ContiguousSet<Integer> cs = ContiguousSet.create(Range.closed(1, SIZE), DiscreteDomain.integers());
        Stream<Integer> st = cs.stream();
        return st.parallel().mapToInt(x -> x).sum();
    }

    public static void main(String[] args) throws RunnerException {
        new Runner(
            new OptionsBuilder()
                .include(".*Ranges.*")
                .forks(1)
                .warmupIterations(5)
                .measurementIterations(5)
                .build()
        ).run();
    }
}
And the output:
Benchmark               Mode   Samples   Score   Score error  Units
b.Ranges.contiguousSet  thrpt        5  13.540         0.924  ops/s
b.Ranges.intStream      thrpt        5  27.047         5.119  ops/s
So IntStream.range is about twice as fast as ContiguousSet, which is perfectly reasonable, given that ContiguousSet doesn't implement its own Spliterator and uses the default one from Set.
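If you want to confirm which spliterator your Guava version actually provides, here is a quick probe (my sketch; newer Guava releases may override spliterator(), so the printed class can differ by version):
import com.google.common.collect.ContiguousSet;
import com.google.common.collect.DiscreteDomain;
import com.google.common.collect.Range;

public class SpliteratorProbe {
    public static void main(String[] args) {
        ContiguousSet<Integer> cs =
                ContiguousSet.create(Range.closed(1, 100), DiscreteDomain.integers());
        // An iterator-backed default typically prints a java.util.Spliterators
        // inner class; a custom implementation prints a Guava class instead.
        System.out.println(cs.spliterator().getClass());
    }
}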

Related

Java Spliterator Continually Splits Parallel Stream

I found some surprising behavior with Java parallel streams. I made my own Spliterator, and the resulting parallel stream gets divided up until each stream has only one element in it. That seems way too small, and I wonder what I'm doing wrong. I'm hoping there are some characteristics I can set to correct this.
Here's my test code. The Float here is just a dummy payload, my real stream class is somewhat more complicated.
public static void main( String[] args ) {
    TestingSpliterator splits = new TestingSpliterator( 10 );
    Stream<Float> test = StreamSupport.stream( splits, true );
    double total = test.mapToDouble( Float::doubleValue ).sum();
    System.out.println( "Total: " + total );
}
This code will continually split this stream until each Spliterator has only one element. That seems way too much to be efficient.
Output:
run:
Split on count: 10
Split on count: 5
Split on count: 3
Split on count: 5
Split on count: 2
Split on count: 2
Split on count: 3
Split on count: 2
Split on count: 2
Total: 5.164293184876442
BUILD SUCCESSFUL (total time: 0 seconds)
Here's the code of the Spliterator. My main concern is what characteristics I should be using, but perhaps there's a problem somewhere else?
public class TestingSpliterator implements Spliterator<Float> {
    int count;
    int splits;

    public TestingSpliterator( int count ) {
        this.count = count;
    }

    @Override
    public boolean tryAdvance( Consumer<? super Float> cnsmr ) {
        if( count > 0 ) {
            cnsmr.accept( (float)Math.random() );
            count--;
            return true;
        } else
            return false;
    }

    @Override
    public Spliterator<Float> trySplit() {
        System.err.println( "Split on count: " + count );
        if( count > 1 ) {
            splits++;
            int half = count / 2;
            TestingSpliterator newSplit = new TestingSpliterator( count - half );
            count = half;
            return newSplit;
        } else
            return null;
    }

    @Override
    public long estimateSize() {
        return count;
    }

    @Override
    public int characteristics() {
        return IMMUTABLE | SIZED;
    }
}
So how can I get the stream to be split in to much larger chunks? I was hoping in the neighborhood of 10,000 to 50,000 would be better.
I know I can return null from the trySplit() method, but that seems like a backwards way of doing it. It seems like the system should have some notion of number of cores, current load, and how complex the code is that uses the stream, and adjust itself accordingly. In other words, I want the stream chunk size to be externally configured, not internally fixed by the stream itself.
EDIT: re. Holger's answer below, when I increase the number of elements in the original stream, there are somewhat fewer stream splits, so StreamSupport does stop splitting eventually.
At an initial stream size of 100 elements, StreamSupport stops splitting when it reaches a stream size of 2 (the last line I see on my screen is Split on count: 4).
And for an initial stream size of 1000 elements, the final size of the individual stream chunks is about 32 elements.
Edit part deux: After looking at the output of the above, I changed my code to list the individual Spliterators created. Here are the changes:
public static void main( String[] args ) {
    TestingSpliterator splits = new TestingSpliterator( 100 );
    Stream<Float> test = StreamSupport.stream( splits, true );
    double total = test.mapToDouble( Float::doubleValue ).sum();
    System.out.println( "Total Spliterators: " + testers.size() );
    for( TestingSpliterator t : testers ) {
        System.out.println( "Splits: " + t.splits );
    }
}
And to the TestingSpliterator's ctor:
static Queue<TestingSpliterator> testers = new ConcurrentLinkedQueue<>();

public TestingSpliterator( int count ) {
    this.count = count;
    testers.add( this ); // OUCH! 'this' escape
}
The result of this code is that the first Spliterator gets split 5 times. The next Spliterator gets split 4 times. The next set of Spliterators gets split 3 times. Etc. The result is that 36 Spliterators get made and the stream is split into as many parts. On typical desktop systems this seems to be the way that the API thinks is best for parallel operations.
I'm going to accept Holger's answer below, which is essentially that the StreamSupport class is doing the right thing, don't worry, be happy. Part of the issue for me was that I was doing my early testing on very small stream sizes and I was surprised at the number of splits. Don't make the same mistake yourself.
You are looking at it from the wrong angle. The implementation did not split “until each spliterator has one element”; rather, it split “until having ten spliterators”.
A single spliterator instance can only be processed by one thread. A spliterator is not required to support splitting after its traversal has been started. Therefore any splitting opportunity that has not been used beforehand may lead to limited parallel processing capabilities afterwards.
It’s important to keep in mind that the Stream implementation received a ToDoubleFunction with an unknown workload¹. It doesn’t know that it is as simple as Float::doubleValue in your case. It could be a function taking a minute to evaluate, and then having a spliterator per CPU core would be just right. Even having more spliterators than CPU cores is a valid strategy to handle the possibility that some evaluations take significantly longer than others.
A typical number of initial spliterators will be “number of CPU cores” × 4, though there might be more split operations later when more knowledge about actual workloads exists. When your input data has fewer elements than that number, it’s not surprising that it gets split down until one element per spliterator is left.
You may try with new TestingSpliterator( 10000 ) or 1000 or 100 to see that the number of splits will not change significantly once the implementation assumes it has enough chunks to keep all CPU cores busy.
Since your spliterator does not know anything about the per-element workload of the consuming stream either, you shouldn’t be concerned about this. If you can smoothly support splitting down to single elements, just do that.
¹ It doesn’t have special optimizations for the case that no operations have been chained, though.
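You can check the numbers this answer refers to on your own machine. A tiny probe follows (my sketch; the × 4 leaf target is an undocumented implementation detail of the JDK's stream framework, not a guarantee):
import java.util.concurrent.ForkJoinPool;

public class SplitTargetProbe {
    public static void main(String[] args) {
        int p = ForkJoinPool.getCommonPoolParallelism();
        System.out.println("common pool parallelism: " + p);
        // Heuristic only: the stream framework has historically aimed for
        // roughly four leaf tasks per worker thread.
        System.out.println("typical initial spliterator target: " + (p * 4));
    }
}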
Unless I am missing the obvious, you could always pass a bufferSize in the constructor and use that for your trySplit:
@Override
public Spliterator<Float> trySplit() {
    if( count > 1 ) {
        splits++;
        if( count > bufferSize ) {
            count = count - bufferSize;
            return new TestingSpliterator( bufferSize, bufferSize );
        }
    }
    return null;
}
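For completeness: the snippet above assumes a bufferSize field and a two-argument constructor that are not shown in the answer. A plausible reconstruction (hypothetical, mirroring the original one-argument constructor) looks like this:
// Hypothetical supporting members implied by the trySplit() above.
private int bufferSize;

public TestingSpliterator( int count, int bufferSize ) {
    this.count = count;
    this.bufferSize = bufferSize;
}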
And with this:
TestingSpliterator splits = new TestingSpliterator(12, 5);
Stream<Float> test = StreamSupport.stream(splits, true);
test.map(x -> new AbstractMap.SimpleEntry<>(
            x.doubleValue(),
            Thread.currentThread().getName()))
    .collect(Collectors.groupingBy(
            Map.Entry::getValue,
            Collectors.mapping(
                    Map.Entry::getKey,
                    Collectors.toList())))
    .forEach((x, y) -> System.out.println("Thread : " + x + " processed : " + y));
You will see that there are 3 threads: two of them process 5 elements each, and one processes 2.

Why does this parallelized Mersenne prime calculation algorithm hang?

Trying out the code from this video of Joshua Bloch, I added .parallel() to the original code, trying to make it a bit faster (in terms of speed of execution). As a result, it began to hang: it didn't complete after 5 minutes of running, while its sequential version completes in seconds.
I wonder, what can be the reason for the following code to hang after it was parallelized?
import java.math.BigInteger;
import java.util.stream.Stream;

import static java.math.BigInteger.ONE;
import static java.math.BigInteger.TWO;

class Scratch {
    static Stream<BigInteger> primes() {
        return Stream.iterate(TWO, BigInteger::nextProbablePrime);
    }

    public static void main(String[] args) {
        primes().map(p -> TWO.pow(p.intValueExact()).subtract(ONE))
                .filter(mersenne -> mersenne.isProbablePrime(50))
                .limit(20)
                .parallel()
                .forEach(System.out::println);
    }
}
P.S.
The parallelism of the underlying ForkJoinPool is 12.
Note
This is not an answer, but posting all of that in a comment is not possible and would be hard to read.
Finding each Mersenne prime takes longer than the previous one, as can be seen by removing .parallel() and changing .limit(20) to a higher value.
Observation 1
One would think that using parallelism on this stream ignores the .limit(20) directive, but that is not the case: if we change the limit to a lower value, for example .limit(10), we get exactly the first 10 Mersenne primes (unordered, but that's expected):
primes().map(p -> TWO.pow(p.intValueExact()).subtract(ONE))
        .filter(mersenne -> mersenne.isProbablePrime(50))
        .limit(10)
        .parallel()
        .forEach(m -> System.out.println(Thread.currentThread().getName() + ": " + m));
Sample output:
ForkJoinPool.commonPool-worker-3: 8191
ForkJoinPool.commonPool-worker-13: 131071
ForkJoinPool.commonPool-worker-19: 524287
ForkJoinPool.commonPool-worker-23: 2305843009213693951
ForkJoinPool.commonPool-worker-5: 31
ForkJoinPool.commonPool-worker-9: 7
ForkJoinPool.commonPool-worker-31: 2147483647
ForkJoinPool.commonPool-worker-27: 3
ForkJoinPool.commonPool-worker-17: 127
ForkJoinPool.commonPool-worker-7: 618970019642690137449562111
(note that all of them use a worker on ForkJoinPool.commonPool - there's a workaround for that, but it is irrelevant to the problem)
Observation 2
If we limit the input stream produced by primes() - for example to 700 - the program completes as expected:
primes().limit(700)
        .map(p -> TWO.pow(p.intValueExact()).subtract(ONE))
        .filter(mersenne -> mersenne.isProbablePrime(50))
        .limit(20)
        .parallel()
        .forEach(m -> System.out.println(Thread.currentThread().getName() + ": " + m));
ForkJoinPool.commonPool-worker-19: 6...
ForkJoinPool.commonPool-worker-3: 21...
ForkJoinPool.commonPool-worker-19: 2...
ForkJoinPool.commonPool-worker-23: 1...
ForkJoinPool.commonPool-worker-5: 68...
ForkJoinPool.commonPool-worker-23: 7...
ForkJoinPool.commonPool-worker-19: 1...
ForkJoinPool.commonPool-worker-13: 4...
ForkJoinPool.commonPool-worker-3: 81...
ForkJoinPool.commonPool-worker-17: 5...
ForkJoinPool.commonPool-worker-7: 25...
ForkJoinPool.commonPool-worker-3: 12...
ForkJoinPool.commonPool-worker-13: 1...
ForkJoinPool.commonPool-worker-19: 3...
ForkJoinPool.commonPool-worker-23: 3...
ForkJoinPool.commonPool-worker-31: 1...
ForkJoinPool.commonPool-worker-5: 52...
ForkJoinPool.commonPool-worker-9: 28...
ForkJoinPool.commonPool-worker-27: 1...
ForkJoinPool.commonPool-worker-21: 1...
Changing the input stream limit to a higher value (1000, 1500, etc.) increases the time required for the program to complete, while the final result remains correct (20 Mersenne primes).
Taking into account both observations above, an educated guess would be that when this stream is .parallel(), more input elements than required to get the desired result are processed.
When the required Mersenne prime count is high (above ~14) and the input stream has no limit (as in the example), the program can take a very long time to complete and is effectively hung.
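One way to test that guess empirically is to count how many elements the source actually hands to the pipeline. The following is my instrumented variant of the bounded example above: the peek counter is an addition, and the extra limit(700) only keeps the run finite.
import java.math.BigInteger;
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.Stream;

import static java.math.BigInteger.ONE;
import static java.math.BigInteger.TWO;

class CountPulled {
    static Stream<BigInteger> primes() {
        return Stream.iterate(TWO, BigInteger::nextProbablePrime);
    }

    public static void main(String[] args) {
        AtomicLong pulled = new AtomicLong(); // thread-safe: the stream is parallel
        primes().peek(p -> pulled.incrementAndGet())
                .limit(700) // bound the source so the run terminates
                .map(p -> TWO.pow(p.intValueExact()).subtract(ONE))
                .filter(mersenne -> mersenne.isProbablePrime(50))
                .limit(20)
                .parallel()
                .forEach(m -> {});
        // Sequentially, roughly the first ~600 primes would suffice; a larger
        // count here indicates speculative processing by the parallel pipeline.
        System.out.println("elements pulled from the source: " + pulled.get());
    }
}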
As it turned out, the program doesn't hang but gets stuck checking some very big numbers for primality.
When there is no limit at the initial source (kudos to @MartinBG for the clue), the underlying Spliterator advances too far (considering the task) and feeds bigger numbers to the downstream operations, resulting in extreme computational cost.
I've modified the input so it doesn't advance that far and returns only the next probable prime as the next chunk of the Spliterator. I.e., every time a thread from the pool is ready to proceed with the following number, it has to receive a new chunk that consists of a single number, in contrast to the big chunks that are typically observed.
import java.math.BigInteger;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

import static java.lang.Long.MAX_VALUE;
import static java.math.BigInteger.ONE;
import static java.math.BigInteger.TWO;

class Scratch {
    static Stream<BigInteger> primes() {
        return StreamSupport.stream(new Spliterator<>() {
            private final AtomicReference<BigInteger> seedReference = new AtomicReference<>(TWO);

            private BigInteger resolveNextProbablePrime() {
                return seedReference.getAndUpdate(BigInteger::nextProbablePrime);
            }

            @Override
            public boolean tryAdvance(Consumer<? super BigInteger> action) {
                action.accept(resolveNextProbablePrime());
                return true;
            }

            @Override
            public Spliterator<BigInteger> trySplit() {
                return Spliterators.spliterator(new BigInteger[]{resolveNextProbablePrime()}, characteristics());
            }

            @Override
            public long estimateSize() {
                return MAX_VALUE;
            }

            @Override
            public int characteristics() {
                // characteristics are bit flags, so they are combined with bitwise OR
                return ORDERED | DISTINCT | SORTED | NONNULL | IMMUTABLE | CONCURRENT;
            }
        }, true);
    }

    public static void main(String[] args) {
        primes().map(p -> TWO.pow(p.intValueExact()).subtract(ONE))
                .filter(mersenne -> mersenne.isProbablePrime(50))
                .limit(20)
                .parallel()
                .forEach(System.out::println);
    }
}
The new version of the program completes in seconds, utilizing multiple cores for its computations.
The given solution is still not a general cure for the encountered behavior; it just works in this particular case, since it's rather safe to take a couple of the next probable primes and perform calculations on them.
Depending on your hardware, it's still possible to get the 20 numbers. The problem here is that the parallel stream may not give you much performance gain. Remove the parallel call:
primes().map(p -> TWO.pow(p.intValueExact()).subtract(ONE))
        .filter(mersenne -> mersenne.isProbablePrime(50))
        .limit(20)
        .forEach(System.out::println);
It can be finished in about 50 seconds on my machine (i5-6300u, 2.4GHz).
Or:
primes().map(p -> TWO.pow(p.intValueExact()).subtract(ONE))
        .filter(mersenne -> mersenne.isProbablePrime(50))
        .limit(20)
        .parallel()
        .sorted()
        .sequential()
        .forEach(System.out::println);

Java stream performance for finding the maximum element from a list

I wrote a simple program to compare the performance of streams for finding the maximum from a list of integers. Surprisingly I found that the performance of the 'stream way' was about 1/10 of the 'usual way'. Am I doing something wrong? Is there a condition under which the stream way will not be efficient? Could anyone offer a nice explanation for this behavior?
The 'stream way' took 80 milliseconds; the 'usual way' took 15 milliseconds.
Please find the code below:
public class Performance {
    public static void main(String[] args) {
        ArrayList<Integer> a = new ArrayList<Integer>();
        Random randomGenerator = new Random();
        for (int i = 0; i < 40000; i++) {
            a.add(randomGenerator.nextInt(40000));
        }

        long start_s = System.currentTimeMillis();
        Optional<Integer> m1 = a.stream().max(Integer::compare);
        long diff_s = System.currentTimeMillis() - start_s;
        System.out.println(diff_s);

        int e = a.size();
        Integer m = Integer.MIN_VALUE;
        long start = System.currentTimeMillis();
        for (int i = 0; i < e; i++)
            if (a.get(i) > m) m = a.get(i);
        long diff = System.currentTimeMillis() - start;
        System.out.println(diff);
    }
}
Yes, streams are slower for such simple operations. But your numbers are completely unrelated. If you think that 15 milliseconds is a satisfactory time for your task, then there's good news: after warm-up, the stream code can solve this problem in around 0.1-0.2 milliseconds, which is 70-150 times faster.
Here's a quick-and-dirty benchmark:
import java.util.concurrent.TimeUnit;
import java.util.*;
import java.util.stream.*;
import org.openjdk.jmh.infra.Blackhole;
import org.openjdk.jmh.annotations.*;

@Warmup(iterations = 5, time = 1000, timeUnit = TimeUnit.MILLISECONDS)
@Measurement(iterations = 10, time = 1000, timeUnit = TimeUnit.MILLISECONDS)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Fork(3)
@State(Scope.Benchmark)
public class StreamTest {
    // Stream API is very nice to get random data for tests!
    List<Integer> a = new Random().ints(40000, 0, 40000).boxed()
                                  .collect(Collectors.toList());

    @Benchmark
    public Integer streamList() {
        return a.stream().max(Integer::compare).orElse(Integer.MIN_VALUE);
    }

    @Benchmark
    public Integer simpleList() {
        int e = a.size();
        Integer m = Integer.MIN_VALUE;
        for (int i = 0; i < e; i++)
            if (a.get(i) > m) m = a.get(i);
        return m;
    }
}
The results are:
Benchmark              Mode  Cnt    Score    Error  Units
StreamTest.simpleList  avgt   30   38.241 ±  0.434  us/op
StreamTest.streamList  avgt   30  215.425 ± 32.871  us/op
These are microseconds. So the stream version is actually much faster than in your test. Nevertheless the simple version is even faster. So if you were fine with 15 ms, you can use whichever of these two versions you like: both will perform much faster.
If you want the best possible performance no matter what, you should get rid of boxed Integer objects and work with a primitive array:
int[] b = new Random().ints(40000, 0, 40000).toArray();

@Benchmark
public int streamArray() {
    return Arrays.stream(b).max().orElse(Integer.MIN_VALUE);
}

@Benchmark
public int simpleArray() {
    int e = b.length;
    int m = Integer.MIN_VALUE;
    for (int i = 0; i < e; i++)
        if (b[i] > m) m = b[i];
    return m;
}
Both versions are faster now:
Benchmark               Mode  Cnt    Score   Error  Units
StreamTest.simpleArray  avgt   30   10.132 ± 0.193  us/op
StreamTest.streamArray  avgt   30  167.435 ± 1.155  us/op
Actually the stream version result may vary greatly, as it involves many intermediate methods which are JIT-compiled at different times, so the speed may change in either direction after some iterations.
By the way, your original problem can be solved by the good old Collections.max method without the Stream API, like this:
Integer max = Collections.max(a);
In general you should avoid testing artificial code which does not solve real problems. With artificial code you will get artificial results which generally say nothing about the API performance in real conditions.
The immediate difference that I see is that the stream way uses Integer::compare, which might require more autoboxing etc., vs. a plain operator in the loop. Perhaps you can call Integer.compare in the loop to see if this is the reason?
EDIT: following the advice from Nicholas Robinson, I wrote a new version of the test. It uses a 400K-element list (the original size yielded zero-diff results), it uses Integer.compare in both cases, and it runs only one of them in each invocation (I alternate between the two methods):
static List<Integer> a = new ArrayList<Integer>();

public static void main(String[] args)
{
    Random randomGenerator = new Random();
    for (int i = 0; i < 400000; i++) {
        a.add(randomGenerator.nextInt(400000));
    }
    long start = System.currentTimeMillis();
    //Integer max = checkLoop();
    Integer max = checkStream();
    long diff = System.currentTimeMillis() - start;
    System.out.println("max " + max + " diff " + diff);
}

static Integer checkStream()
{
    Optional<Integer> max = a.stream().max(Integer::compare);
    return max.get();
}

static Integer checkLoop()
{
    int e = a.size();
    Integer max = Integer.MIN_VALUE;
    for (int i = 0; i < e; i++) {
        if (Integer.compare(a.get(i), max) > 0) max = a.get(i);
    }
    return max;
}
The results for loop: max 399999 diff 10
The results for stream: max 399999 diff 40 (and sometimes I got 50)
In Java 8 a lot of effort went into making use of concurrent processing with the new lambdas. You will find the stream to be much faster because the list can be processed concurrently in the most efficient way possible, whereas the usual way runs through the list sequentially.
Because the lambdas are static, this makes threading easier. However, when you are accessing something like your hard drive (reading a file line by line) you will probably find the stream won't be as efficient, because the hard drive can only access the information serially.
[UPDATE]
The reason your stream took so much longer than the normal way is because you ran it first. The JRE is constantly trying to optimize performance, so a cache will be set up for the usual way. If you run the usual way before the stream way you should get the opposite results. I would recommend running the tests in different mains for the best results.
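The order-of-execution effect is easy to observe without swapping the tests. Here is a rough sketch (my illustration; exact numbers will vary) that times the same stream pipeline twice in one JVM run:
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;

public class WarmupDemo {
    public static void main(String[] args) {
        List<Integer> a = new Random().ints(40000, 0, 40000)
                                      .boxed().collect(Collectors.toList());
        // Same pipeline measured twice in one JVM run: the first run pays
        // for class loading, lambda setup and JIT compilation.
        for (int run = 1; run <= 2; run++) {
            long start = System.nanoTime();
            Integer max = a.stream().max(Integer::compare).get();
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.println("run " + run + ": max=" + max + " in " + micros + " us");
        }
    }
}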

Comparison between legacy for loop, streams and parallelStream in Java 8

import java.util.ArrayList;
import java.util.List;

public class IterationBenchmark {
    public static void main(String args[]) {
        List<String> persons = new ArrayList<String>();
        persons.add("AAA");
        persons.add("BBB");
        persons.add("CCC");
        persons.add("DDD");

        long timeMillis = System.currentTimeMillis();
        for (String person : persons)
            System.out.println(person);
        System.out.println("Time taken for legacy for loop : " +
                (System.currentTimeMillis() - timeMillis));

        timeMillis = System.currentTimeMillis();
        persons.stream().forEach(System.out::println);
        System.out.println("Time taken for sequence stream : " +
                (System.currentTimeMillis() - timeMillis));

        timeMillis = System.currentTimeMillis();
        persons.parallelStream().forEach(System.out::println);
        System.out.println("Time taken for parallel stream : " +
                (System.currentTimeMillis() - timeMillis));
    }
}
Output:
AAA
BBB
CCC
DDD
Time taken for legacy for loop : 0
AAA
BBB
CCC
DDD
Time taken for sequence stream : 49
CCC
DDD
AAA
BBB
Time taken for parallel stream : 3
Why is the Java 8 Stream API performance so low compared to the legacy for loop?
The very first call to the Stream API in your program is always quite slow, because you need to load many auxiliary classes, generate many anonymous classes for lambdas and JIT-compile many methods. Thus usually the very first stream operation takes several dozen milliseconds. Consecutive calls are much faster and may fall below 1 us depending on the exact stream operation. If you swap the parallel-stream test and the sequential-stream test, the sequential stream will be much faster. All the hard work is done by whichever comes first.
Let's write a JMH benchmark to properly warm-up your code and test all the cases independently:
import java.util.concurrent.TimeUnit;
import java.util.*;
import java.util.stream.*;
import org.openjdk.jmh.annotations.*;

@Warmup(iterations = 5, time = 1000, timeUnit = TimeUnit.MILLISECONDS)
@Measurement(iterations = 10, time = 1000, timeUnit = TimeUnit.MILLISECONDS)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Fork(3)
@State(Scope.Benchmark)
public class StreamTest {
    List<String> persons;

    @Setup
    public void setup() {
        persons = new ArrayList<String>();
        persons.add("AAA");
        persons.add("BBB");
        persons.add("CCC");
        persons.add("DDD");
    }

    @Benchmark
    public void loop() {
        for (String person : persons)
            System.err.println(person);
    }

    @Benchmark
    public void stream() {
        persons.stream().forEach(System.err::println);
    }

    @Benchmark
    public void parallelStream() {
        persons.parallelStream().forEach(System.err::println);
    }
}
Here we have three tests: loop, stream and parallelStream. Note that I changed System.out to System.err. That's because System.out is normally used to output the JMH results. I will redirect the output of System.err to nul, so the result should depend less on my filesystem or console subsystem (which is especially slow on Windows).
So the results are (Core i7-4702MQ CPU @ 2.2GHz, 4 cores HT, Win7, Oracle JDK 1.8.0_40):
Benchmark                  Mode  Cnt   Score   Error  Units
StreamTest.loop            avgt   30  42.410 ± 1.833  us/op
StreamTest.parallelStream  avgt   30  76.440 ± 2.073  us/op
StreamTest.stream          avgt   30  42.820 ± 1.389  us/op
What we see is that stream and loop produce exactly the same result; the difference is statistically insignificant. Actually the Stream API is somewhat slower than the loop, but here the slowest part is the PrintStream. Even with output to nul, the IO subsystem is very slow compared to other operations. So we measured not the Stream API or loop speed, but println speed.
Also note that these are microseconds, thus the stream version actually works 1000 times faster than in your test.
Why is parallelStream much slower? Just because you cannot parallelize writes to the same PrintStream, because it is internally synchronized. So the parallelStream did all the hard work of splitting the 4-element list into 4 sub-tasks, scheduling the jobs on different threads and synchronizing them properly, but it's absolutely futile, as the slowest operation (println) cannot be performed in parallel: while one thread is working, the others are waiting. In general it's useless to parallelize code which synchronizes on the same mutex (which is your case).
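By contrast, when the per-element work is CPU-bound and does not synchronize on a shared lock, parallel streams can pay off. A rough non-JMH sketch (my illustration; absolute timings will vary by machine):
import java.util.stream.IntStream;

public class CpuBoundDemo {
    // Artificial CPU-bound work, deliberately free of shared state.
    static double work(int i) {
        double x = i;
        for (int k = 0; k < 10_000; k++) x = Math.sqrt(x + k);
        return x;
    }

    public static void main(String[] args) {
        for (boolean parallel : new boolean[] { false, true }) {
            IntStream s = IntStream.range(0, 10_000);
            if (parallel) s = s.parallel();
            long start = System.nanoTime();
            double sum = s.mapToDouble(CpuBoundDemo::work).sum();
            System.out.printf("parallel=%b sum=%.0f took %d ms%n",
                    parallel, sum, (System.nanoTime() - start) / 1_000_000);
        }
    }
}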

Java performance: Search and removal speed on removeAll()

I had some fun comparing the speed of the removeAll(Collection<?> c) call declared in Collection. Now I know that micro-benchmarks are difficult to do right, and I won’t look at a few milliseconds difference, but I believe my results to be valid, since I ran them repeatedly and they are very reproducible.
Let’s assume I have two collections that are not too tiny, say 100,000 consecutive integer elements, and also that they mostly overlap, for instance 5,000 are in the left but not the right. Now I simply call:
left.removeAll(right);
Of course this all depends on the types of both the left and the right collection. It’s blazingly fast if the right collection is a hash map, because that’s where the look-ups are done. But looking closer, I noticed two results that I cannot explain. I tried all the tests both with an ArrayList that is sorted and with another that is shuffled (using Collections.shuffle(), if that is of importance).
The first weird result is:
00293 025% shuffled ArrayList, HashSet
00090 008% sorted ArrayList, HashSet
Now either removing elements from the sorted ArrayList is faster than removing from the shuffled list, or looking up consecutive values from the HashSet is faster than looking up random values.
Now the other one:
02311 011% sorted ArrayList, shuffled ArrayList
01401 006% sorted ArrayList, sorted ArrayList
Now this suggests that the lookup in the sorted ArrayList (using a contains() call for each element of the list to the left) is faster than in the shuffled list. Now that would be quite easy if we could make use of the fact that it is sorted and use a binary search, but I do not do that.
Both results are mysterious to me. I cannot explain them by looking at the code or with my data-structure knowledge. Does it have anything to do with processor cache access patterns? Is the JIT compiler optimizing stuff? But if so, which? I performed warming up and run the tests a few times in a row, but perhaps there is a fundamental problem with my benchmark?
The reason for the performance difference is the memory access pattern: accessing elements which are consecutive in memory is faster than doing a random memory access (due to memory pre-fetching, cpu caches etc.)
When you initially populate the collection you create all the elements sequentially in the memory, so when you are traversing it (foreach, removeAll, etc) you are accessing consecutive memory regions which is cache friendly. When you shuffle the collection - the elements remain in the same order in memory, but the pointers to those elements are no longer in the same order, so when you are traversing the collection you'll be accessing for instance the 10th, the 1st, then the 5th element which is very cache unfriendly and ruins the performance.
You can look at this question where this effect is visible in greater detail:
Why filtering an unsorted list is faster than filtering a sorted list
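To see this effect in isolation, here is a minimal sketch (my illustration, not a rigorous benchmark; run it with enough heap for the boxed values). Both lists contain the same Integer objects, allocated in ascending order, so shuffling changes only the order in which the references are followed:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class AccessPatternDemo {
    public static void main(String[] args) {
        List<Integer> sorted = new ArrayList<>();
        for (int i = 0; i < 5_000_000; i++) sorted.add(i); // objects allocated in order
        List<Integer> shuffled = new ArrayList<>(sorted);  // same objects, reordered refs
        Collections.shuffle(shuffled);

        for (int warm = 0; warm < 3; warm++) { time(sorted); time(shuffled); } // warm-up
        System.out.println("sorted:   " + time(sorted) + " ms");
        System.out.println("shuffled: " + time(shuffled) + " ms");
    }

    static long time(List<Integer> list) {
        long start = System.nanoTime();
        long sum = 0;
        for (Integer i : list) sum += i;     // pure traversal
        if (sum == 42) System.out.println(); // keep the JIT from eliding the loop
        return (System.nanoTime() - start) / 1_000_000;
    }
}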
Since the asker did not provide any example code, and there have been doubts about the benchmark mentioned in the comments and answers, I created a small test to see whether the removeAll method is slower when the argument is a shuffled list (instead of a sorted list). And I confirmed the observation of the asker: The output of the test was roughly
100000 elements, sortedList and sortedList, 5023,090 ms, size 5000
100000 elements, shuffledList and sortedList, 5062,293 ms, size 5000
100000 elements, sortedList and shuffledList, 10657,438 ms, size 5000
100000 elements, shuffledList and shuffledList, 10700,145 ms, size 5000
I'll omit the code for this particular test here, because it also has been questioned (which - by the way - is perfectly justified! A lot of BS is posted on the web...).
So I did further tests, for which I'll provide the code here.
This may also not be considered as a definite answer. But I tried to adjust the tests so that they at least provide some strong evidence that the reason for the reduced performance is indeed what Svetlin Zarev mentioned in his answer (+1 and accept this if it convinces you). Namely, that the reason for the slowdown lies in the caching effects of the scattered accesses.
First of all: I am aware of many of the possible pitfalls when writing a microbenchmark (and so is the asker, according to his statements). However, I know that nobody will believe a hand-rolled benchmark, even if it is perfectly reasonable, unless it is performed with an appropriate microbenchmarking tool. So in order to show that the performance with a shuffled list is lower than with a sorted list, I created this simple JMH benchmark:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread)
public class RemoveAllBenchmarkJMH
{
    @Param({"sorted", "shuffled"})
    public String method;

    @Param({"1000", "10000", "100000"})
    public int numElements;

    private List<Integer> left;
    private List<Integer> right;

    @Setup
    public void initList()
    {
        left = new ArrayList<Integer>();
        right = new ArrayList<Integer>();
        for (int i = 0; i < numElements; i++)
        {
            left.add(i);
        }
        int n = (int) (numElements * 0.95);
        for (int i = 0; i < n; i++)
        {
            right.add(i);
        }
        if (method.equals("shuffled"))
        {
            Collections.shuffle(right);
        }
    }

    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    public void testMethod(Blackhole bh)
    {
        left.removeAll(right);
        bh.consume(left.size());
    }
}
The output of this one is as follows:
(method) (numElements) Mode Cnt Score Error Units
sorted 1000 avgt 50 52,055 ± 0,507 us/op
shuffled 1000 avgt 50 55,720 ± 0,466 us/op
sorted 10000 avgt 50 5341,917 ± 28,630 us/op
shuffled 10000 avgt 50 7108,845 ± 45,869 us/op
sorted 100000 avgt 50 621714,569 ± 19040,964 us/op
shuffled 100000 avgt 50 1110301,876 ± 22935,976 us/op
I hope that this helps to resolve doubts about the statement itself.
Although I admit that I'm not a JMH expert; if there is something wrong with this benchmark, please let me know.
Now, these results have been roughly in line with my other, manual (non-JMH) microbenchmark. In order to create evidence for the fact that the shuffling is the problem, I created a small test that compares the performance using lists that are shuffled by different degrees. By providing a value between 0.0 and 1.0, one can limit the number of swapped elements, and thus the shuffledness of the list. (Of course, this is rather "pragmatic", as there are different options of how this could be implemented, considering the different possible (statistical) measures for "shuffledness").
The code looks as follows:
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Random;
import java.util.function.Function;
public class RemoveAllBenchmarkExt
{
public static void main(String[] args)
{
for (int n=10000; n<=100000; n+=10000)
{
runTest(n, sortedList() , sortedList());
runTest(n, sortedList() , shuffledList(0.00));
runTest(n, sortedList() , shuffledList(0.25));
runTest(n, sortedList() , shuffledList(0.50));
runTest(n, sortedList() , shuffledList(0.75));
runTest(n, sortedList() , shuffledList(1.00));
runTest(n, sortedList() , reversedList());
System.out.println();
}
}
private static Function<Integer, Collection<Integer>> sortedList()
{
return new Function<Integer, Collection<Integer>>()
{
@Override
public Collection<Integer> apply(Integer t)
{
List<Integer> list = new ArrayList<Integer>(t);
for (int i=0; i<t; i++)
{
list.add(i);
}
return list;
}
@Override
public String toString()
{
return "sorted";
}
};
}
private static Function<Integer, Collection<Integer>> shuffledList(
final double degree)
{
return new Function<Integer, Collection<Integer>>()
{
@Override
public Collection<Integer> apply(Integer t)
{
List<Integer> list = new ArrayList<Integer>(t);
for (int i=0; i<t; i++)
{
list.add(i);
}
shuffle(list, degree);
return list;
}
@Override
public String toString()
{
return String.format("shuffled(%4.2f)", degree);
}
};
}
private static void shuffle(List<Integer> list, double degree)
{
Random random = new Random(0);
int n = (int)(degree * list.size());
for (int i=n; i>1; i--)
{
swap(list, i-1, random.nextInt(i));
}
}
private static void swap(List<Integer> list, int i, int j)
{
list.set(i, list.set(j, list.get(i)));
}
private static Function<Integer, Collection<Integer>> reversedList()
{
return new Function<Integer, Collection<Integer>>()
{
@Override
public Collection<Integer> apply(Integer t)
{
List<Integer> list = new ArrayList<Integer>(t);
for (int i=0; i<t; i++)
{
list.add(i);
}
Collections.reverse(list);
return list;
}
@Override
public String toString()
{
return "reversed";
}
};
}
private static void runTest(int n,
Function<Integer, ? extends Collection<Integer>> leftFunction,
Function<Integer, ? extends Collection<Integer>> rightFunction)
{
Collection<Integer> left = leftFunction.apply(n);
Collection<Integer> right = rightFunction.apply((int)(n*0.95));
long before = System.nanoTime();
left.removeAll(right);
long after = System.nanoTime();
double durationMs = (after - before) / 1e6;
System.out.printf(
"%8d elements, %15s, duration %10.3f ms, size %d\n",
n, rightFunction, durationMs, left.size());
}
}
(Yes, it's very simple. However, if you think that the timings are completely useless, compare them to a JMH run, and after a few hours, you'll see that they are reasonable)
The timings for the last pass are as follows:
100000 elements, sorted, duration 6016,354 ms, size 5000
100000 elements, shuffled(0,00), duration 5849,537 ms, size 5000
100000 elements, shuffled(0,25), duration 7319,948 ms, size 5000
100000 elements, shuffled(0,50), duration 9344,408 ms, size 5000
100000 elements, shuffled(0,75), duration 10657,021 ms, size 5000
100000 elements, shuffled(1,00), duration 11295,808 ms, size 5000
100000 elements, reversed, duration 5830,695 ms, size 5000
One can clearly see that the timings are basically increasing linearly with the shuffledness.
Of course, all this is still not a proof, but at least evidence that the answer by Svetlin Zarev is correct.
Looking at the source code of ArrayList.removeAll() (OpenJDK7-b147), it appears that it delegates to a private method called batchRemove(), which is as follows:
private boolean batchRemove(Collection<?> c, boolean complement) {
    final Object[] elementData = this.elementData;
    int r = 0, w = 0;
    boolean modified = false;
    try {
        for (; r < size; r++)
            if (c.contains(elementData[r]) == complement)
                elementData[w++] = elementData[r];
    } finally {
        // Preserve behavioral compatibility with AbstractCollection,
        // even if c.contains() throws.
        if (r != size) {
            System.arraycopy(elementData, r,
                             elementData, w,
                             size - r);
            w += size - r;
        }
        if (w != size) {
            for (int i = w; i < size; i++)
                elementData[i] = null;
            modCount += size - w;
            size = w;
            modified = true;
        }
    }
    return modified;
}
It practically loops through the array and has a bunch of c.contains() calls. Basically there's no reason why this iteration would go faster for a sorted array.
I second StephenC's doubt about the benchmark, and believe that it'd be more fruitful for you to scrutinize the benchmark code before digging in any deeper into cache access patterns etc.
Also if the benchmark code is not the culprit, it would be interesting to know the java version, and the OS/arch etc.
Now I know that micro-benchmarks are difficult to do right, and I won’t look at a few milliseconds difference, but I believe my results to be valid, since I ran them repeatedly and they are very reproducible.
That does not convince me. The behaviour of a flawed benchmark can be 100% reproducible.
I suspect that ... in fact ... a flaw or flaws in your benchmark >>is<< the cause of your strange results. It often is.
... but perhaps there is a fundamental problem with my benchmark?
Yes (IMO).
Show us the benchmark code if you want a more detailed answer.
