Limiting infinite parallel stream - java

1) How can I use a Supplier (supplier) to create a sized stream of N values in parallel, while ensuring that no more than N calls are made to the supplier? I need this because I have a supplier with a costly supplier.get() operation.
2) The 'obvious' answer to my question, Stream.generate(supplier).limit(N), does not work and often results in more than N calls being made to the supplier. Why is this?
As 'proof' of the fact that Stream.generate(supplier).limit(N) results in more than N calls to supplier.get(), consider the following code:
public class MWE {
static final int N_ELEMENTS=100000;
static Supplier<IntSupplier> mySupplier = () -> new IntSupplier() {
AtomicInteger ai = new AtomicInteger(-1);
@Override
public int getAsInt() {
return ai.incrementAndGet();
}
};
public static void main(String[] args) {
int[] a = IntStream.generate(mySupplier.get()).limit(N_ELEMENTS).toArray();
int[] b = IntStream.generate(mySupplier.get()).parallel().limit(N_ELEMENTS).toArray();
}
}
a is equal to [0, 1, ..., N_ELEMENTS-1] as expected, but contrary to what you might expect, b does not contain the same elements as a. Instead, b often contains elements that are greater than or equal to N_ELEMENTS, which indicates that more than N_ELEMENTS calls were made to the supplier.
Another illustration would be that Stream.generate(new Random(0)::nextDouble).parallel().limit(5) does not always generate the same set of numbers.

The Stream API does not guarantee that IntStream.generate() will call the generator the specified number of times. Also, this source does not respect ordering.
If you actually need a parallel stream of increasing numbers, it's much better to use IntStream.range(0, N_ELEMENTS).parallel(). This not only ensures that you will actually have all the numbers from 0 to N_ELEMENTS-1, but also greatly reduces contention and guarantees order. If you need to generate something more complex, consider using a custom source by defining your own Spliterator class.
Note that the proposed IntStream.iterate solution may not parallelize well, as it is a sequential-by-nature source.

Calling .limit() is not guaranteed to result in a stream of the first N elements generated by the supplier because Stream.generate() creates an unordered stream, which leaves limit() free to decide on what 'part' of the stream to keep. Actually, it is not even semantically sound to refer to "the first N elements" or "(the first) part of the stream", because the stream is unordered. This behavior is clearly laid out in the API documentation; many thanks to everyone who pointed this out to me!
Since asking this question, I have come up with two solutions to my own question. My thanks go to Tagir who set me off in the right direction.
Solution 1: Misusing IntStream.range()
A simple and fairly efficient way of creating an unordered, sized, parallel stream backed by a supplier that makes no more calls to the supplier than is absolutely necessary is to (mis)use IntStream.range() like this:
IntStream.range(0, N_ELEMENTS).parallel().mapToObj($ -> supplier.get())
Basically, we are using IntStream.range() only to create a sized stream that can be processed in parallel.
Solution 2: Custom spliterator
Because we never actually use the integers inside of the stream created by IntStream.range(), it seems like we can do slightly better by creating a custom Spliterator:
final class SizedSuppliedSpliterator<T> implements Spliterator<T> {
private int remaining;
private final Supplier<T> supplier;
private SizedSuppliedSpliterator(Supplier<T> supplier, int remaining) {
this.remaining = remaining;
this.supplier = supplier;
}
static <T> SizedSuppliedSpliterator<T> of(Supplier<T> supplier, int limit) {
return new SizedSuppliedSpliterator<>(supplier, limit);
}
@Override
public boolean tryAdvance(final Consumer<? super T> consumer) {
Objects.requireNonNull(consumer);
if (remaining > 0) {
remaining--;
final T supplied = supplier.get();
consumer.accept(supplied);
return true;
}
return false;
}
@Override
public void forEachRemaining(final Consumer<? super T> consumer) {
while (remaining > 0) {
consumer.accept(supplier.get());
remaining--;
}
}
@Override
public SizedSuppliedSpliterator<T> trySplit() {
int split = remaining / 2;
remaining -= split;
return new SizedSuppliedSpliterator<>(supplier, split);
}
@Override
public long estimateSize() {
return remaining;
}
@Override
public int characteristics() {
return SIZED | SUBSIZED | IMMUTABLE;
}
}
We can use this spliterator to create the stream as follows:
StreamSupport.stream(SizedSuppliedSpliterator.of(supplier, N_ELEMENTS), true)
Of course, computing a couple of integers is hardly expensive, and I have not been able to notice or even measure any improvement in performance over solution 1.


Why does list.size change when executing java parallel stream?

Consider the following code:
static void statefullParallelLambdaSet() {
Set<Integer> s = new HashSet<>(
Arrays.asList(1, 2, 3, 4, 5, 6)
);
List<Integer> list = new ArrayList<>();
int sum = s.parallelStream().mapToInt(e -> { // pipeline start
if (list.size() <= 3) { // list.size() changes while the pipeline operation is executing.
list.add(e); // mapToInt's lambda expression depends on this value, so it's stateful.
return e;
}
else return 0;
}).sum(); // terminal operation
System.out.println(sum);
}
In the code above, it says that list.size() changes while the pipeline operation is running, but I don't understand.
Since list.add(e) is executed in multiple threads at once, because the stream is executed in parallel, is it correct to assume that the value changes each time it is executed?
And is the reason the value changes even when it is executed as a serial stream that there is no order, because it is a set, so the number drawn is different each time it is executed?
Am I right?
The reason this happens is what is called a race condition. A CPU, even a many-threaded one, is running more processes than just your application's, so it can fetch an instruction, evaluate it, then have to jump off to do something for the OS and come back, while another parallel thread of your application has managed to get ahead of it because its core / hyper-thread has not been stolen from its job.
You can read about race conditions in references like: https://link.springer.com/referenceworkentry/10.1007/978-0-387-09766-4_36
But what you're supposed to do to prevent this is implement locks on the memory you're altering; in Java you want to look at java.util.concurrent.locks: https://www.baeldung.com/java-concurrent-locks
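A rough sketch of that idea (not a complete solution; the next answer explains why locking here largely defeats the purpose of a parallel stream) would be to guard the size check and the add with a ReentrantLock so they happen atomically:
ReentrantLock lock = new ReentrantLock();
List<Integer> list = new ArrayList<>();
int sum = s.parallelStream().mapToInt(e -> {
    lock.lock();
    try {
        if (list.size() <= 3) { // check and add are now atomic across threads
            list.add(e);
            return e;
        }
        return 0;
    } finally {
        lock.unlock();
    }
}).sum();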
Note that the problem itself is slightly artificial, because it's not very likely to get a significant performance gain by parallelizing this task.
Issues Explained
Your code accumulates the result by operating via side-effects which is discouraged by the Stream API documentation.
And you've stumbled on the very first bullet point from the link above:
... there are no guarantees as to:
the visibility of those side-effects to other threads;
ArrayList is not a thread-safe Collection, and as a consequence each thread is not guaranteed to observe the same state of the list.
Also, note that the map() operation (and all its flavors) is not intended to perform side effects, and its function, according to the documentation, should be stateless:
mapper - a non-interfering, stateless function to apply to each element
In this case, the correct way to incorporate state from processing the previous stream elements would be to define a Collector.
For that, we would need to define a mutable container which would hold a list.
In a nutshell, a Collector can be implemented as concurrent (i.e. optimized for a multithreaded environment, so that all the threads update the same mutable container) or non-concurrent (each thread creates its own instance of the mutable container and populates it, and the results produced by each thread are then merged).
In order to implement a concurrent Collector, we need to provide a thread-safe mutable container and specify the CONCURRENT characteristic. If you take a look at the implementations of the List interface, you'll find out that the only thread-safe options the JDK offers are CopyOnWriteArrayList and the outdated Vector.
CopyOnWriteArrayList would be a terrible choice since under the hood it copies the whole backing array with every added element; that's a recipe for getting an OutOfMemoryError. This collection is not suitable for frequent updates.
And if we used a synchronized List, it would not buy us anything in terms of performance, because threads would not be able to operate on the list simultaneously. While one thread is adding an element, the others are blocked. In fact, it could be slower than processing the data sequentially, because synchronization has a cost.
For that reason, the locking suggested in another answer would only allow you to get a correct result, but you would not be able to benefit from parallel execution.
What we can do is create a non-concurrent Collector (i.e. a collector that uses a non-thread-safe container) based on a plain ArrayList. It can still be used with a parallel stream: each thread acts independently on a separate container, without locking and without running into concurrency-related issues.
Non-concurrent Collector
Firstly, we need to define a custom accumulation type that encapsulates the ArrayList and the sum of consumed elements.
And in order to create a Collector, we need to use static method Collector.of().
Collector:
public static Collector<Integer, ?, IntSumContainer> toParallelIntSumContainer(int limit) {
return Collector.of(
() -> new IntSumContainer(limit),
IntSumContainer::accept,
IntSumContainer::merge
);
}
Custom accumulation type:
public class IntSumContainer implements IntConsumer {
private int sum;
private List<Integer> list = new ArrayList<>();
private final int limit;
public IntSumContainer(int limit) {
this.limit = limit;
}
@Override
public void accept(int value) {
if (list.size() < limit) {
list.add(value);
sum += value;
}
}
public IntSumContainer merge(IntSumContainer other) {
other.list.stream().limit(limit - list.size()).forEach(this::accept); // there can't be issues related to concurrent access in this case, hence performing side effects via forEach is safe
return this;
}
public List<Integer> getList() {
return list;
}
public int getSum() {
return sum;
}
}
Usage example:
List<Integer> source = List.of(1, 2, 3, 4, 5, 6);
IntSumContainer result = source.parallelStream()
.collect(toParallelIntSumContainer(3));
List<Integer> list = result.getList();
int sum = result.getSum();
System.out.println(list);
System.out.println(sum);
Output:
[1, 2, 3]
6
Concurrent Collector
Since you're using a HashSet as the stream source, which produces an unordered stream, it might not be important which elements end up in the resulting collection and contribute to the resulting sum. And since you were using a Set, you might be fine with getting a Set as a result as well.
In this case we can make use of the concurrent Set which the JDK provides in the form of a view over the keys of a ConcurrentHashMap, obtained via the static method ConcurrentHashMap.newKeySet(). Reads on ConcurrentHashMap do not block, and updates contend only on individual bins.
To accumulate the sum concurrently, we can use a LongAdder, which outperforms AtomicLong when frequent updates from multiple threads are required (which is the case here).
Like in the previous example, a custom accumulation type would encapsulate the Set and the sum of consumed elements.
While defining the collector, in order to make it concurrent we need to specify the CONCURRENT characteristic, and UNORDERED is also handy since we stated that ordering is not important.
Collector:
public static Collector<Integer, ?, ConcurrentIntSumContainer> toConcurrentIntSumContainer(int limit) {
return Collector.of(
() -> new ConcurrentIntSumContainer(limit),
ConcurrentIntSumContainer::accept,
(left, right) -> { throw new AssertionError("the merge function is not expected to be called by a concurrent collector"); },
Collector.Characteristics.UNORDERED, Collector.Characteristics.CONCURRENT
);
}
Custom accumulation type:
public class ConcurrentIntSumContainer implements IntConsumer {
private LongAdder sum = new LongAdder();
private Set<Integer> set = ConcurrentHashMap.newKeySet();
private final int limit;
public ConcurrentIntSumContainer(int limit) {
this.limit = limit;
}
@Override
public void accept(int value) {
if (set.size() < limit && set.add(value)) {
sum.add(value);
}
}
public Set<Integer> getSet() {
return new HashSet<>(set); // because a general purpose set is faster than concurrent set
}
public long getSum() {
return sum.sum();
}
}
Usage example:
List<Integer> source = List.of(1, 2, 3, 4, 5, 6);
ConcurrentIntSumContainer result1 = source.parallelStream()
.collect(toConcurrentIntSumContainer(3));
Set<Integer> set = result1.getSet();
long sum = result1.getSum();
System.out.println(set);
System.out.println(sum);
Output:
[1, 4, 5]
10

Do parallel streams treat upstream iterators in a thread safe way?

Today I was using a stream that was performing a parallel() operation after a map; however, the underlying source is an iterator which is not thread safe, similar to the BufferedReader.lines implementation.
I originally thought that trySplit would be called on the created thread; however, I observed that the accesses to the iterator came from multiple threads.
For example, the following silly iterator implementation is just set up with enough elements to cause splitting and also keeps track of the unique threads that accessed the hasNext method.
class SillyIterator implements Iterator<String> {
private final ArrayDeque<String> src =
IntStream.range(1, 10000)
.mapToObj(Integer::toString)
.collect(toCollection(ArrayDeque::new));
private Map<String, String> ts = new ConcurrentHashMap<>();
public Set<String> threads() { return ts.keySet(); }
private String nextRecord = null;
@Override
public boolean hasNext() {
var n = Thread.currentThread().getName();
ts.put(n, n);
if (nextRecord != null) {
return true;
} else {
nextRecord = src.poll();
return nextRecord != null;
}
}
@Override
public String next() {
if (nextRecord != null || hasNext()) {
var rec = nextRecord;
nextRecord = null;
return rec;
}
throw new NoSuchElementException();
}
}
Using this to create a stream as follows:
var iter = new SillyIterator();
StreamSupport
.stream(Spliterators.spliteratorUnknownSize(
iter, Spliterator.ORDERED | Spliterator.NONNULL
), false)
.map(n -> "value = " + n)
.parallel()
.collect(toList());
System.out.println(iter.threads());
On my system this output the two fork-join threads as well as the main thread, which kind of scared me.
[ForkJoinPool.commonPool-worker-1, ForkJoinPool.commonPool-worker-2, main]
Thread safety does not necessarily imply being accessed by only one thread. The important aspect is that there is no concurrent access, i.e. no access by more than one thread at the same time. If the access by different threads is temporally ordered and this ordering also ensures the necessary memory visibility, which is the responsibility of the caller, it still is a thread safe usage.
The Spliterator documentation says:
Despite their obvious utility in parallel algorithms, spliterators are not expected to be thread-safe; instead, implementations of parallel algorithms using spliterators should ensure that the spliterator is only used by one thread at a time. This is generally easy to attain via serial thread-confinement, which often is a natural consequence of typical parallel algorithms that work by recursive decomposition.
The spliterator doesn’t need to be confined to the same thread throughout its lifetime, but there should be a clear handover at the caller’s side ensuring that the old thread stops using it before the new thread starts using it.
But the important takeaway is, the spliterator doesn’t need to be thread safe, hence, the iterator wrapped by a spliterator also doesn’t need to be thread safe.
Note that a typical behavior is splitting and handing over before starting traversal, but since an ordinary Iterator doesn’t support splitting, the wrapping spliterator has to iterate and buffer elements to implement splitting. Therefore, the Iterator experiences traversal by different threads (but one at a time) when the traversal has not been started from the Stream implementation’s perspective.
That said, the lines() implementation of BufferedReader is a bad example which you should not follow. Since it's centered around a single readLine() call, it would be natural to implement Spliterator directly instead of implementing a more complicated Iterator and having it wrapped via spliteratorUnknownSize(…).
Since your example is likewise centered around a single poll() call, it's also straightforward to implement Spliterator directly:
class SillySpliterator extends Spliterators.AbstractSpliterator<String> {
private final ArrayDeque<String> src = IntStream.range(1, 10000)
.mapToObj(Integer::toString).collect(toCollection(ArrayDeque::new));
SillySpliterator() {
super(Long.MAX_VALUE, ORDERED | NONNULL);
}
@Override
public boolean tryAdvance(Consumer<? super String> action) {
String nextRecord = src.poll();
if(nextRecord == null) return false;
action.accept(nextRecord);
return true;
}
}
Depending on your real life case, you may also pass the actual deque size to the constructor and provide the SIZED characteristic.
Then, you may use it like
var result = StreamSupport.stream(new SillySpliterator(), true)
.map(n -> "value = " + n)
.collect(toList());
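If the size is known up front, as suggested above, one possible variant (the constructor shape here is my own assumption, not code from the answer) takes the deque as a constructor argument so its size can be reported together with the SIZED characteristic:
class SizedSillySpliterator extends Spliterators.AbstractSpliterator<String> {
    private final ArrayDeque<String> src;

    SizedSillySpliterator(ArrayDeque<String> src) {
        // report the exact element count and the SIZED characteristic
        super(src.size(), ORDERED | NONNULL | SIZED);
        this.src = src;
    }

    @Override
    public boolean tryAdvance(Consumer<? super String> action) {
        String nextRecord = src.poll();
        if (nextRecord == null) return false;
        action.accept(nextRecord);
        return true;
    }
}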

Alternative to ConcurrentLinkedQueue, do I need to use LinkedList with locks?

I am currently using a ConcurrentLinkedQueue so that I can have natural FIFO ordering and also use it in a thread-safe application. I have a requirement to log the size of the queue every minute, and given that this collection does not guarantee an accurate size and that the cost of calculating the size is O(N), is there any alternative bounded non-blocking concurrent queue that I can use where obtaining the size is not a costly operation and, at the same time, the add/remove operations are not expensive either?
If there is no collection, do I need to use LinkedList with locks?
If you really (REALLY) need to log a correct, current size of the Queue you are currently dealing with, you need to block. There is simply no other way. You might think that maintaining a separate LongAdder field would help, maybe making your own interface as a wrapper around ConcurrentLinkedQueue, something like:
interface KnownSizeQueue<T> {
T poll();
long size();
}
And an implementation:
static class ConcurrentKnownSizeQueue<T> implements KnownSizeQueue<T> {
private final ConcurrentLinkedQueue<T> queue = new ConcurrentLinkedQueue<>();
private final LongAdder currentSize = new LongAdder();
@Override
public T poll() {
T result = queue.poll();
if(result != null){
currentSize.decrement();
}
return result;
}
@Override
public long size() {
return currentSize.sum();
}
}
I just encourage you to add one more method, like remove, to the interface and try to reason about the code. You will very shortly realize that such implementations will still give you a wrong result. So, do not do it.
The only reliable way to get the size, if you really need it, is to block for each operation. This comes at a high price, because ConcurrentLinkedQueue is documented as:
This implementation employs an efficient non-blocking...
You will lose those properties, but if an exact size is a hard requirement and you do not care about that, you could write your own:
static class ParallelKnownSizeQueue<T> implements KnownSizeQueue<T> {
private final Queue<T> queue = new ArrayDeque<>();
private final ReentrantLock lock = new ReentrantLock();
@Override
public T poll() {
try {
lock.lock();
return queue.poll();
} finally {
lock.unlock();
}
}
@Override
public long size() {
try {
lock.lock();
return queue.size();
} finally {
lock.unlock();
}
}
}
Or, of course, you can use an already existing structure, like LinkedBlockingDeque or ArrayBlockingQueue, etc - depending on what you need.
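For instance, LinkedBlockingQueue maintains an internal element count, so size() is O(1) and accurate at the moment it is read, at the cost of lock-based add/poll operations. A minimal sketch:
LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(10_000); // bounded capacity
queue.offer("task-1");
queue.offer("task-2");
System.out.println(queue.size()); // constant-time, prints 2
queue.poll();
System.out.println(queue.size()); // prints 1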

Is it possible to write a Java Collector that does early exit when it has a result?

Is it possible to implement a Collector that stops processing of the stream as soon as an answer is available?
For example, if the Collector is computing an average, and one of the values is NaN, I know the answer is going to be NaN without seeing any more values, so further computation is pointless.
Thanks for the responses. The comments pointed the way to a solution, which I will describe here. It's very much inspired by StreamEx, but adapted to my particular situation.
Firstly, I define an implementation of Stream called XdmStream which in general delegates all methods to an underlying Stream which it wraps.
This immediately gives me the opportunity to define new methods, so for example my users can do stream.last() instead of stream.reduce((first,second)->second), which is a useful convenience.
As an example of a short-circuiting method I have implemented XdmStream.untilFirst(Predicate) as follows (base is the wrapped Stream). The idea of this method is to return a stream that delivers the same results as the original stream, except that when a predicate is satisfied, no more results are delivered.
public XdmStream<T> untilFirst(Predicate<? super XdmItem> predicate) {
Stream<T> stoppable = base.peek(item -> {
if (predicate.test(item)) {
base.close();
}
});
return new XdmStream<T>(stoppable);
}
When I first create the base Stream I call its onClose() method so that a call on close() triggers the supplier of data to stop supplying data.
The close() mechanism doesn't seem particularly well documented (it relies on the concept of a "stream pipeline" and it's not entirely clear when a new stream returned by some method is part of the same pipeline as the original stream) - but it's working for me. I guess I should probably ensure that this is only an optimization, so that the results will still be correct even if the flow of data isn't immediately turned off (e.g. if there is any buffering in the stream).
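A small illustration of that pipeline behaviour (not the author's actual code): a handler registered via onClose() on the source stream runs when close() is called on a stream derived later in the same pipeline:
Stream<String> base = Stream.of("a", "b", "c")
        .onClose(() -> System.out.println("source asked to stop supplying data"));
Stream<String> mapped = base.map(String::toUpperCase); // still part of the same pipeline
mapped.close(); // prints the message registered on the base stream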
In addition to Federico's comment, it is possible to emulate a short-circuiting Collector by ceasing accumulation once a certain condition has been met. However, this approach will only be beneficial if accumulation is expensive. Here's an example, but keep in mind that there are flaws with this implementation:
public class AveragingCollector implements Collector<Double, double[], Double> {
private final AtomicBoolean hasFoundNaN = new AtomicBoolean();
@Override
public Supplier<double[]> supplier() {
return () -> new double[2];
}
@Override
public BiConsumer<double[], Double> accumulator() {
return (a, b) -> {
if (hasFoundNaN.get()) {
return;
}
if (b.equals(Double.NaN)) {
hasFoundNaN.set(true);
return;
}
a[0] += b;
a[1]++;
};
}
@Override
public BinaryOperator<double[]> combiner() {
return (a, b) -> {
a[0] += b[0];
a[1] += b[1];
return a;
};
}
@Override
public Function<double[], Double> finisher() {
return average -> average[0] / average[1];
}
@Override
public Set<Characteristics> characteristics() {
return new HashSet<>();
}
}
The following use case prints Double.NaN, as expected:
public static void main(String[] args) {
Double result = DoubleStream.of(1, 2, 3, 4, 5, 6, 7, Double.NaN)
.boxed()
.collect(new AveragingCollector());
System.out.println(result);
}
Instead of using a Collector, you could use Stream.allMatch(..) to terminate the Stream early and use the util classes like LongSummaryStatistics directly. If all values (and at least one) were present, you return them, e.g.:
Optional<LongSummaryStatistics> toLongStats(Stream<OptionalLong> stream) {
LongSummaryStatistics stat = new LongSummaryStatistics();
boolean allPresent = stream.allMatch(opt -> {
if (opt.isEmpty()) return false;
stat.accept(opt.getAsLong());
return true;
});
return allPresent && stat.getCount() > 0 ? Optional.of(stat) : Optional.empty();
}
Instead of a Stream<OptionalLong> you might use a DoubleStream and check for your NaN case.
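A sketch of that DoubleStream variant (the method name is illustrative), short-circuiting on the first NaN:
Optional<DoubleSummaryStatistics> toDoubleStats(DoubleStream stream) {
    DoubleSummaryStatistics stat = new DoubleSummaryStatistics();
    boolean noNaN = stream.allMatch(d -> {
        if (Double.isNaN(d)) return false; // allMatch stops here, ending the stream early
        stat.accept(d);
        return true;
    });
    return noNaN && stat.getCount() > 0 ? Optional.of(stat) : Optional.empty();
}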
For the case of NaN, it might be acceptable to consider this an exceptional outcome, and so throw a custom NaNAverageException, short-circuiting the collection operation. Normally, using exceptions for control flow is bad practice; however, it may be justified in this case.
Stream<String> s = Stream.of("1","2","ABC", "3");
try
{
double result = s.collect(Collectors.averagingInt(n -> Integer.parseInt(n)));
System.err.println("Average :"+ result);
}
catch (NumberFormatException e)
{
// the exception is thrown when it encounters ABC, and the collector won't go on to "3"
e.printStackTrace();
}

Write an ArrayList with integers which will be accessed concurrently

The requirement is that I need to write an ArrayList of integers. I need thread-safe access to the different integers (write, read, increase, decrease), and I also need to allow maximum concurrency.
The operations on each integer are also special, like this:
The most frequent operation is to read
The second most frequent operation is to decrease by one, but only if the value is greater than zero; or to increase by one (unconditionally)
Adding/removing elements is rare, but still needed.
I thought about AtomicInteger. However, this doesn't seem to work, because the atomic operation I want is "compare if not zero, then decrease", while the atomic operation provided by AtomicInteger is "compare if equal, then set". If you know how to apply AtomicInteger in this case, please raise it here.
What I am thinking is to synchronize the access to each integer like this:
ArrayList<MutableInt> list;
... ...
// Compare if greater than zero, and decrease
MutableInt n = list.get(index);
boolean success = false;
synchronized (n) {
if (n.intValue()>0) { n.decrement(); success=true; }
}
// To add one
MutableInt n = list.get(index);
synchronized (n) {
n.increment();
}
// To just read, I am thinking no synchronization is needed at all.
int n = list.get(index).intValue();
With my solution, is there any side-effect? Is it efficient to maintain hundreds or even thousands of synchronized integers?
Update: I am also thinking that allowing concurrent access to every element is not practical and not beneficial, as the actual concurrency is limited by the number of processors. Maybe if I just use several synchronization objects to guard different portions of the list, that would be enough?
Another thing is to implement the add/delete operations so that they are thread-safe but do not impact the concurrency of the other operations much. I am thinking of a ReadWriteLock: for add/delete, acquire the write lock; for the other operations (changing the value of one integer), acquire the read lock. Is this the right approach?
I think you're right to use read lock for accessing the list and write lock for add/remove on the list.
You can still use AtomicInteger for the values:
// Increase value
value.incrementAndGet()
// Decrease value, lower bound is 0
int num;
do {
num = value.get();
if (num == 0)
break;
} while (!value.compareAndSet(num, num - 1)); // try again if concurrently updated
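As a side note, since Java 8 the same "decrease only if greater than zero" can also be expressed with AtomicInteger.getAndUpdate, which runs the compare-and-set loop internally; a one-line sketch:
// getAndUpdate returns the previous value; it was decremented only if that value was > 0
boolean success = value.getAndUpdate(n -> n > 0 ? n - 1 : n) > 0;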
I think, if you can live with a fixed size list, using a single AtomicIntegerArray is a better choice than using multiple AtomicIntegers:
public class AtomicIntList extends AbstractList<Integer> {
private final AtomicIntegerArray array;
public AtomicIntList(int size) {
array=new AtomicIntegerArray(size);
}
public int size() {
return array.length();
}
public Integer get(int index) {
return array.get(index);
}
// for code accessing this class directly rather than using the List interface
public int getAsInt(int index) {
return array.get(index);
}
public Integer set(int index, Integer element) {
return array.getAndSet(index, element);
}
// for code accessing this class directly rather than using the List interface
public int setAsInt(int index, int element) {
return array.getAndSet(index, element);
}
public boolean decrementIfPositive(int index) {
for(;;) {
int old=array.get(index);
if(old<=0) return false;
if(array.compareAndSet(index, old, old-1)) return true;
}
}
public int incrementAndGet(int index) {
return array.incrementAndGet(index);
}
}
Code accessing this class directly rather than via the List<Integer> interface may use the methods getAsInt and setAsInt to avoid boxing conversions. This is a common pattern. Since the methods decrementIfPositive and incrementAndGet are not part of the List interface anyway, they always use int values.
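A brief usage sketch of the class above (the values are just illustrative):
AtomicIntList counters = new AtomicIntList(100);
counters.incrementAndGet(7);                          // unconditional increase -> 1
boolean decreased = counters.decrementIfPositive(7);  // true, value is back to 0
boolean again = counters.decrementIfPositive(7);      // false, value stays at 0
int current = counters.getAsInt(7);                   // read without boxing -> 0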
As an update to this question... I found out that the simplest solution, just synchronizing the entire code block for all possibly conflicting methods, turns out to be the best, even from a performance point of view. Synchronizing the code block solves both issues: accessing each counter, and also adding/deleting elements to the counter list.
This is because ReentrantReadWriteLock has a really high overhead, even when only the read lock is applied. Compared to the overhead of the read/write lock, the cost of the operation itself is so tiny that any additional locking is not worth it.
Close attention should be paid to this statement in the API doc of ReentrantReadWriteLock: "ReentrantReadWriteLocks... is typically worthwhile only when ... and entail operations with overhead that outweighs synchronization overhead".
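A hedged sketch of what that "synchronize the whole code block" approach could look like (class and method names are illustrative, not from the original post):
class SynchronizedCounters {
    private final List<Integer> counters = new ArrayList<>();

    synchronized void add(int initialValue) {              // rare structural change
        counters.add(initialValue);
    }

    synchronized int get(int index) {                      // most frequent operation
        return counters.get(index);
    }

    synchronized void increment(int index) {               // unconditional increase
        counters.set(index, counters.get(index) + 1);
    }

    synchronized boolean decrementIfPositive(int index) {  // decrease only if > 0
        int value = counters.get(index);
        if (value <= 0) return false;
        counters.set(index, value - 1);
        return true;
    }
}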
