When is it a good idea to use AtomicReferenceArray? Please explain with an example.
It looks like it's functionally equivalent to AtomicReference[], though it occupies a little less memory.
So it's useful when you need more than a million atomic references - though I can't think of a use case.
If you had a shared array of object references, you would use an AtomicReferenceArray so that individual elements can be updated atomically, e.g. via compareAndSet, with no two threads' updates to the same element interleaving.
An AtomicReference[] (array of AtomicReference) gives you the same per-element atomicity, but needs a separate wrapper object per slot. In both cases multiple threads can still update different elements simultaneously, because the atomicity is on the elements, not on the array as a whole.
It could be useful if you have a large number of objects that are updated concurrently, for example in a large multiplayer game.
An update of reference i would follow the pattern
boolean success = false;
while (!success)
{
    E previous = atomicReferenceArray.get(i);
    E next = ... // compute updated object
    success = atomicReferenceArray.compareAndSet(i, previous, next);
}
Depending on the circumstances this may be faster and/or easier to use than locking (synchronized).
One possible use case would have been ConcurrentHashMap, which makes extensive use of an array internally. The array reference can be volatile, but there are no volatile semantics at the per-element level; that is one of the reasons atomic arrays came into existence.
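To illustrate that point: even if the array reference itself is volatile, writes to individual elements are plain writes, whereas AtomicReferenceArray gives volatile read/write semantics per element. A minimal sketch (the class and field names are mine):

import java.util.concurrent.atomic.AtomicReferenceArray;

class ElementVisibility {
    // 'flags' is a volatile reference, but flags[0] = "done" is still a plain
    // write to the element, so another thread may never observe it.
    static volatile String[] flags = new String[1];

    // Both get(0) and set(0, ...) act like volatile reads/writes of the
    // element itself, so per-element updates are visible across threads.
    static AtomicReferenceArray<String> atomicFlags = new AtomicReferenceArray<>(1);

    static void publish() {
        flags[0] = "done";          // visibility not guaranteed
        atomicFlags.set(0, "done"); // visibility guaranteed
    }
}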
Some notes from a C++ programmer below; please don't condemn my Java too much :)
AtomicReferenceArray allows you to avoid false sharing, where multiple CPU logical cores access the same cache line that is changed by one of the threads. Invalidating and re-fetching the cache line is very expensive. Unfortunately there is no sizeof in Java, so we don't know how many bytes each reference takes, but assuming it's at least 8 bytes (the size of a pointer on 64-bit architectures), you can allocate as follows:
// a lower bound is enough
private final int sizeofAtomicReference = 8;
// good for x86/x64
private final int sizeofCacheLine = 64;
// the number of CPU cores
private final int nLogicalCores = Runtime.getRuntime().availableProcessors();
private final int refsPerCacheLine = (sizeofCacheLine + sizeofAtomicReference - 1) / sizeofAtomicReference;
private AtomicReferenceArray<Task> tasks = new AtomicReferenceArray<Task>(nLogicalCores * refsPerCacheLine);
Now if you assign a task to i-th thread via
tasks.compareAndSet(i*refsPerCacheLine, null, new Task(/*problem definition here*/));
you guarantee that the task references are assigned to different CPU cache lines. Thus there is no expensive false sharing. So the latency of passing tasks from the producer thread to the consumer threads is minimal (for Java, but not for C++/Assembly).
Bonus:
You then poll the tasks array in the worker threads like this:
// consider iWorker is the 0-based index of the logical core this thread is assigned to
final int myIndex = iWorker*refsPerCacheLine;
while(true) {
    Task curTask = tasks.get(myIndex);
    if(curTask == null) continue;
    if(curTask.isTerminator()) {
        return; // exit the thread
    }
    // ... Process the task here ...
    // Signal the producer thread that the current worker is free
    tasks.set(myIndex, null);
}
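One caveat on the polling loop above: it spins at full speed while idle. On Java 9+ you could hint the CPU inside the null branch with Thread.onSpinWait() (my addition, not part of the original answer):

Task curTask = tasks.get(myIndex);
if (curTask == null) {
    Thread.onSpinWait(); // hint that we are busy-waiting (Java 9+)
    continue;
}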
import java.util.concurrent.atomic.AtomicReferenceArray;

public class AtomicReferenceArrayExample {
    AtomicReferenceArray<String> arr = new AtomicReferenceArray<String>(10);

    public static void main(String... args) {
        new Thread(new AtomicReferenceArrayExample().new AddThread()).start();
        new Thread(new AtomicReferenceArrayExample().new AddThread()).start();
    }

    class AddThread implements Runnable {
        @Override
        public void run() {
            // Sets the value at index 0
            arr.set(0, "A");
            // At index 0, if the current reference is "A", it is set to "B".
            arr.compareAndSet(0, "A", "B");
            // At index 0, if the current reference is "B", it is set to "C".
            arr.weakCompareAndSet(0, "B", "C");
            System.out.println(arr.get(0));
        }
    }
}
// Result:
// C
// C
(Note that each thread operates on its own AtomicReferenceArrayExample instance here, so the two threads never actually contend.)
Can someone explain the output of the following program:
import java.util.ArrayList;
import java.util.Random;

public class DataRace extends Thread {
    static ArrayList<Integer> arr = new ArrayList<>();

    public void run() {
        Random random = new Random();
        int local = random.nextInt(10) + 1;
        arr.add(local);
    }

    public static void main(String[] args) {
        DataRace t1 = new DataRace();
        DataRace t2 = new DataRace();
        DataRace t3 = new DataRace();
        DataRace t4 = new DataRace();
        t1.start();
        t2.start();
        t3.start();
        t4.start();
        try {
            t1.join();
            t2.join();
            t3.join();
            t4.join();
        } catch (InterruptedException e) {
            System.out.println("interrupted");
        }
        System.out.println(DataRace.arr);
    }
}
Output:
[8, 5]
[9, 2, 2, 8]
[2]
I am having trouble understanding the varying number of values in my output. Since I join all four threads in the try-catch block, I would expect the main thread either to wait until they have all finished and then print four values, one from each thread, or to print "interrupted" in case of an interruption. Neither is really happening here.
If this is due to a data race, how does it come into play here?
The main problem is that multiple threads are adding to the same shared ArrayList concurrently. ArrayList is not thread-safe. From its documentation one can read:
Note that this implementation is not synchronized.
If multiple threads
access an ArrayList instance concurrently, and at least one of the
threads modifies the list structurally, it must be synchronized
externally. (A structural modification is any operation that adds or
deletes one or more elements, or explicitly resizes the backing array;
merely setting the value of an element is not a structural
modification.) This is typically accomplished by synchronizing on some
object that naturally encapsulates the list. If no such object exists,
the list should be "wrapped" using the Collections.synchronizedList
method. This is best done at creation time, to prevent accidental
unsynchronized access to the list:
In your code, every time you call
arr.add(local);
the add method implementation updates, among other things, a variable that keeps track of the size of the array. Below is the relevant part of ArrayList's add method:
private void add(E e, Object[] elementData, int s) {
    if (s == elementData.length)
        elementData = grow();
    elementData[s] = e;
    size = s + 1; // <--
}
where the size field is:
/**
 * The size of the ArrayList (the number of elements it contains).
 *
 * @serial
 */
private int size;
Notice that the add method is not synchronized, nor is the size field marked volatile. Hence, it is susceptible to race conditions.
Therefore, because you did not ensure mutual exclusion on the accesses to that ArrayList (e.g., by surrounding the calls to it with a synchronized block), and because the ArrayList does not ensure that the size variable is updated atomically, each thread might or might not see the last value written to that variable. Hence, threads might see outdated values of size and add elements into positions where other threads have already added elements. In the extreme, all threads might end up adding an element into the same position (e.g., your output [2]).
The aforementioned race condition leads to unpredictable behavior, which is why
System.out.println(DataRace.arr);
outputs a different number of elements on different executions of your code.
To make the ArrayList thread-safe, or for alternatives, have a look at the following SO thread: How do I make my ArrayList Thread-Safe?, which showcases the use of Collections.synchronizedList() and CopyOnWriteArrayList, among others.
An example of ensuring mutual exclusion of the accesses to the arr structure:
public void run() {
    Random random = new Random();
    int local = random.nextInt(10) + 1;
    synchronized (arr) {
        arr.add(local);
    }
}
or:
static final List<Integer> arr = Collections.synchronizedList(new ArrayList<Integer>());

public void run() {
    Random random = new Random();
    int local = random.nextInt(10) + 1;
    arr.add(local);
}
TL;DR
ArrayList is not thread-safe, so its behaviour under a race condition is undefined. Use synchronized or CopyOnWriteArrayList instead.
Longer answer
ArrayList.add ultimately calls this private method:
private void add(E e, Object[] elementData, int s) {
    if (s == elementData.length)
        elementData = grow();
    elementData[s] = e;
    size = s + 1;
}
When two threads reach this point at the "same" time, they will see the same size (s), and both will try to add an element at the same position and update the size to s + 1, so the result of the second one likely wins.
If the capacity of the ArrayList is reached and it has to grow(), a new, bigger array is created and the contents are copied, likely causing changes made concurrently by other threads to be lost (multiple threads may be trying to grow at the same time).
Alternatives here are to use monitors - a.k.a. synchronized, or to use Thread-Safe alternatives like CopyOnWriteArrayList.
I think there are a lot of similar or closely related questions. For example see this.
Basically the reason for this "unexpected" behaviour is that ArrayList is not thread-safe. You can try List<Integer> arr = new CopyOnWriteArrayList<>() and it will work as expected. This data structure is recommended when reads are frequent and writes are relatively rare. For a good explanation see What is CopyOnWriteArrayList in Java - Example Tutorial.
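For concreteness, a minimal sketch of that fix applied to the question's program (only the list declaration changes; the class name DataRaceFixed is mine):

import java.util.List;
import java.util.Random;
import java.util.concurrent.CopyOnWriteArrayList;

public class DataRaceFixed extends Thread {
    // CopyOnWriteArrayList copies the backing array on every add,
    // so concurrent adds cannot lose elements or corrupt the size.
    static final List<Integer> arr = new CopyOnWriteArrayList<>();

    public void run() {
        Random random = new Random();
        arr.add(random.nextInt(10) + 1);
    }
    // main() is unchanged from the question: start four threads, join them, print arr.
}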
Another option is to use List<Integer> arr = Collections.synchronizedList(new ArrayList<>()).
You can also use Vector but it is not recommended (see here).
This article also will be useful - Vector vs ArrayList in Java.
I have been asked to implement fine-grained locking on a hash list. I have done this using synchronized, but the question tells me to use Lock instead.
I have created a hash list of objects in the constructor:
private LinkedList<E> data[];
private Lock lock[];
private Lock lockR = new ReentrantLock();

// The constructors ensure that both the data and the lock arrays are the same size
@SuppressWarnings("unchecked")
public ConcurrentHashList(int n){
    if(n > 1000) {
        data = (LinkedList<E>[])(new LinkedList[n/10]);
        lock = new Lock[n/10];
    }
    else {
        data = (LinkedList<E>[])(new LinkedList[100]);
        lock = new Lock[100];
    }
    for(int j = 0; j < data.length; j++) {
        data[j] = new LinkedList<E>();
        lock[j] = new ReentrantLock(); // Adding a lock to each bucket index
    }
}
The original method
public void add(E x){
    if(x != null){
        lockR.lock();
        try{
            int index = hashC(x);
            if(!data[index].contains(x))
                data[index].add(x);
        } finally {
            lockR.unlock();
        }
    }
}
Using synchronization to grab a handle on each bucket, allowing multiple threads to work on different indexes concurrently:
public void add(E x){
    if(x != null){
        int index = hashC(x);
        synchronized (lock[index]) { // Getting a handle on the bucket before adding
            if(!data[index].contains(x))
                data[index].add(x);
        }
    }
}
I do not know how to implement this using Lock, though. I cannot lock a single element of an array, only the whole method, which would make it coarse-grained rather than fine-grained.
Using an array of ReentrantLock
public void add(E x){
    if(x != null){
        int index = hashC(x);
        lock[index].lock(); // Getting a handle on the bucket before adding
        try {
            if(!data[index].contains(x))
                data[index].add(x);
        } finally {
            lock[index].unlock();
        }
    }
}
The hash function
private int hashC(E x){
    int k = x.hashCode();
    int h = Math.abs(k % data.length);
    return(h);
}
Presumably, hashC() is a function that is highly likely to produce unique numbers. As in, you have no guarantee that the hashes are unique, but the incidence of non-unique hashes is extremely low. For a data structure with a few million entries, you have a literal handful of collisions, and any given collision always consists of only a pair or maybe 3 conflicts (2 to 3 objects in your data structure have the same hash, but not 'thousands').
Also, assumption: the hash for a given object is constant. hashC(x) will produce the same value no matter how many times you call it, assuming you provide the same x.
Then, you get some fun conclusions:
The 'bucket' (The LinkedList instance found at array slot hashC(x) in data) that your object should go into, is always the same - you know which one it should be based solely on the result of hashC.
Calculating hashC does not require a lock of any sort. It has no side effects whatsoever.
Thus, knowing which bucket you need for a given operation on a single value (Be it add, remove, or check-if-in-collection) can be done without locking anything.
Now, once you know which bucket you need to look at / mutate, okay, now locking is involved.
So, just have 1 lock for each bucket. Not a List<Object> locks[];, that's a whole list worth of locks per bucket. Just Object[] locks is all you need, or ReentrantLock[] locks if you prefer to use lock/unlock instead of synchronized (lock[bucketIdx]) { ... }.
This is effectively fine-grained: After all, the odds that one operation needs to twiddle its thumbs because another thread is doing something, even though that other thread is operating on a different object, is very low; it would require the two different objects to have a colliding hash, which is possible, but extremely rare - as per assumption #1.
NB: The global lockR can therefore go away entirely; you don't need it unless you want to build in the ability for the code to completely redesign its bucket structure. For example, 1000 buckets feels a bit meh if you end up with a billion objects. I don't think 'rebucket everything' is part of the task here, though. The same per-bucket pattern extends to the other operations; see the sketch below.
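A sketch of contains under that scheme, reusing the data, lock, and hashC members from the question (contains here is an illustrative method, not from the original post):

public boolean contains(E x) {
    if (x == null) return false;
    int index = hashC(x);   // computing the bucket needs no lock
    lock[index].lock();     // lock only that bucket
    try {
        return data[index].contains(x);
    } finally {
        lock[index].unlock();
    }
}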
Goal: To know, as I fork off a thread, which processor it's going to land on. Is that possible? Regardless of whether the underlying approach is valid, is there a good answer to that narrow question? Thanks.
(Right now I need to make a copy of one of our classes for each thread, write to it in that thread and merge them all later. Using a synchronized approach is not possible because my Java expert boss thinks it's a bad idea, and after a lot of discussion I agree. If I knew which processor each thread would land on, I would only need to make as many copies of that class as there are processors.)
We use Apache Spark to spread our jobs across a cluster, but in our application it makes sense to run one big executor and then do some multi-threading of our own on each machine in the cluster.
I could save a lot of deep copying if I knew which processor a thread is being sent to. Is that possible? I threw in our code, but it's probably more of a conceptual question:
When I get down to the "do task" part of compute(), can I know which processor it's running on?
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
import java.util.concurrent.RecursiveTask;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TholdExecutor extends RecursiveTask<TholdDropEvaluation> {

    final static Logger logger = LoggerFactory.getLogger(TholdExecutor.class);

    private List<TholdDropResult> partitionOfN = new ArrayList<>();
    private int coreCount;
    private int desiredPartitionSize; // will be updated by whatever is passed into the constructor per-chromosome
    private TholdDropEvaluation localDropEvaluation; // this DropEvaluation
    private TholdDropResult mSubI_DR;

    public TholdExecutor(List<TholdDropResult> subsetOfN, int cores, int partSize,
                         TholdDropEvaluation passedDropEvaluation, TholdDropResult mDrCopy) {
        partitionOfN = subsetOfN;
        coreCount = cores;
        desiredPartitionSize = partSize;
        // the TholdDropEvaluation needs to be a copy for each thread? It can't be the same one passed to threads ... so ...
        localDropEvaluation = makeDECopy(passedDropEvaluation); // THIS NEEDS TO BE A DEEP COPY OF THE DROP EVAL!!! NOT THE ORIGINAL!!
        // we never modify the TholdDropResult that is passed in, we just need to read it all on the same JVM/worker, so
        mSubI_DR = mDrCopy; // this is purely a reference and can point to the passed-in value (by reference, right?)
    }

    // this makes a deep copy of the TholdDropEvaluation for each thread, we copy the SharingRun's startIndex and endIndex only,
    // as LEG events will be calculated during the subsequent dropComparison. The constructor for TholdDropEvaluation must set
    // LEG events to zero.
    private TholdDropEvaluation makeDECopy(TholdDropEvaluation passedDropEvaluation) {
        TholdDropEvaluation tholdDropEvaluation = new TholdDropEvaluation();
        // iterate through the SharingRuns in the SharingRunList from the TholdDropEval that was passed in
        for (SharingRun sr : passedDropEvaluation.getSharingRunList()) {
            SharingRun ourSharingRun = new SharingRun();
            ourSharingRun.startIndex = sr.startIndex;
            ourSharingRun.endIndex = sr.endIndex;
            tholdDropEvaluation.addSharingRun(ourSharingRun);
        }
        return tholdDropEvaluation;
    }

    @Override
    protected TholdDropEvaluation compute() {
        int simsToDo = partitionOfN.size();
        UUID tag = UUID.randomUUID();
        long computeStartTime = System.nanoTime();

        if (simsToDo <= desiredPartitionSize) {
            logger.debug("IN MULTI-THREAD compute() --- UUID {}: Evaluating partitionOfN sublist of length {}", tag, simsToDo);
            // job within size limit, do the task and return the completed TholdDropEvaluation
            // iterate through each TholdDropResult in the sub-partition and do the dropComparison to the reference mSubI_DR,
            // writing to the copy of the DropEval in localDropEvaluation
            for (TholdDropResult currentResult : partitionOfN) {
                mSubI_DR.dropComparison(currentResult, localDropEvaluation);
            }
        } else {
            // job too large, subdivide and call this recursively
            int half = simsToDo / 2;
            logger.info("Splitting UUID = {}, half is {} and simsToDo is {}", tag, half, simsToDo);
            TholdExecutor nextExec = new TholdExecutor(partitionOfN.subList(0, half), coreCount, desiredPartitionSize, localDropEvaluation, mSubI_DR);
            TholdExecutor futureExec = new TholdExecutor(partitionOfN.subList(half, simsToDo), coreCount, desiredPartitionSize, localDropEvaluation, mSubI_DR);
            nextExec.fork();
            TholdDropEvaluation futureEval = futureExec.compute();
            TholdDropEvaluation nextEval = nextExec.join();
            localDropEvaluation.merge(futureEval);
            localDropEvaluation.merge(nextEval);
        }
        logger.info("{} Compute time is {} ns", tag, System.nanoTime() - computeStartTime);
        // NOTE: this was inside the else block in Rob's example, but don't we want it outside the block so it's returned
        // whether or not the task was split?
        return localDropEvaluation;
    }
}
Even if you could figure out where a thread will run initially, there's no reason to assume it will live on that processor/core for the rest of its life. For any task big enough to be worth the cost of spawning a thread, it probably won't, so you'd need to control completely where it ran to offer that level of assurance.
As far as I know there's no standard mechanism for controlling mappings from threads to processor cores inside Java. Typically that's known as "thread affinity" or "processor affinity". On Windows and Linux for example you can control that using:
Windows: SetThreadAffinityMask
Linux: sched_setaffinity or pthread_setaffinity_np
so in theory you could write some C and JNI code that allowed you to abstract this enough on the Java hosts you cared about to make it work.
That feels like the wrong solution to the real problem you seem to be facing, because you end up withdrawing options from the OS scheduler, potentially preventing it from making the smartest scheduling decisions and causing total runtime to increase. Unless you're pushing an unusual workload and modelling/querying processor information/topology down to the level of NUMA and shared caches, it ought to do a better job than you could of figuring out where to run threads for most workloads. Your JVM typically runs a large number of additional threads besides the ones you explicitly create after main() gets called. Additionally, I wouldn't like to promise anything about what the JVM you run today (or tomorrow) might decide to do on its own about thread affinity.
Having said that it seems like the underlying problem is that you want to have one instance of an object per thread. Typically that's much easier than predicting where a thread will run and then manually figuring out a mapping between N processors and M threads at any point in time. Usually you'd use "thread local storage" (TLS) to solve this problem.
Most languages provide this concept in one form or another. In Java this is provided via the ThreadLocal class. There's an example in the linked document given:
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadId {
    // Atomic integer containing the next thread ID to be assigned
    private static final AtomicInteger nextId = new AtomicInteger(0);

    // Thread local variable containing each thread's ID
    private static final ThreadLocal<Integer> threadId =
        new ThreadLocal<Integer>() {
            @Override protected Integer initialValue() {
                return nextId.getAndIncrement();
            }
        };

    // Returns the current thread's unique ID, assigning it if necessary
    public static int get() {
        return threadId.get();
    }
}
Essentially there are two things you care about:
When you call get() it returns the value (Object) belonging to the current thread
If you call get() in a thread which currently has no value, it will call the initialValue() method you implement, which allows you to construct or obtain a new object.
So in your scenario you'd probably want initialValue() to deep copy the initial version of some local state from a read-only global version.
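A minimal sketch of that pattern, where GlobalState and its copy constructor are hypothetical stand-ins for the class being copied per thread:

class GlobalState {
    GlobalState() {}                 // the read-only template instance
    GlobalState(GlobalState other) {
        // deep-copy the fields of 'other' here
    }
}

class PerThreadState {
    private static final GlobalState TEMPLATE = new GlobalState();

    // Each thread lazily gets its own deep copy of the template.
    private static final ThreadLocal<GlobalState> LOCAL =
            ThreadLocal.withInitial(() -> new GlobalState(TEMPLATE));

    static GlobalState get() {
        return LOCAL.get();
    }
}

Each thread then mutates only its own copy obtained via PerThreadState.get().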
One final point of note: if your goal is to divide and conquer; do some work on lots of threads and then merge all their results to one answer the merging part is often known as a reduction. In that case you might be looking for MapReduce which is probably the most well known form of parallelism using reductions.
I am new to multi-threading and I have to write a program using multiple threads to increase its efficiency. At my first attempt, what I wrote produced just the opposite result. Here is what I have written:
class ThreadImpl implements Callable<ArrayList<Integer>> {
    // Bloom filter instance for one of the tables
    BloomFilter<Integer> bloomFilterInstance = null;
    // Data member for complete data access.
    ArrayList<ArrayList<UserBean>> data = null;
    // Store the result of the testing
    ArrayList<Integer> result = null;
    int tableNo;

    public ThreadImpl(BloomFilter<Integer> bloomFilterInstance,
                      ArrayList<ArrayList<UserBean>> data, int tableNo) {
        this.bloomFilterInstance = bloomFilterInstance;
        this.data = data;
        result = new ArrayList<Integer>(this.data.size());
        this.tableNo = tableNo;
    }

    public ArrayList<Integer> call() {
        int[] tempResult = new int[this.data.size()];
        for(int i=0; i<data.size(); ++i) {
            tempResult[i] = 0;
        }
        ArrayList<UserBean> chkDataSet = null;
        for(int i=0; i<this.data.size(); ++i) {
            if(i==tableNo) {
                //do nothing;
            } else {
                chkDataSet = new ArrayList<UserBean>(data.get(i));
                for(UserBean toChk: chkDataSet) {
                    if(bloomFilterInstance.contains(toChk.getUserId())) {
                        ++tempResult[i];
                    }
                }
            }
            this.result.add(new Integer(tempResult[i]));
        }
        return result;
    }
}
In the above class there are two data members, data and bloomFilterInstance, and they (the references) are passed from the main program. So there is actually only one instance of data and bloomFilterInstance, and all the threads access it simultaneously.
The class that launches the threads is (a few irrelevant details have been left out, so you can assume all variables etc. are declared):
class MultithreadedVersion {
    public static void main(String[] args) {
        if(args.length > 1) {
            ExecutorService es = Executors.newFixedThreadPool(noOfTables);
            List<Callable<ArrayList<Integer>>> threadedBloom =
                new ArrayList<Callable<ArrayList<Integer>>>(noOfTables);
            for (int i=0; i<noOfTables; ++i) {
                threadedBloom.add(new ThreadImpl(eval.bloomFilter.get(i),
                                                 eval.data, i));
            }
            try {
                List<Future<ArrayList<Integer>>> answers = es.invokeAll(threadedBloom);
                long endTime = System.currentTimeMillis();
                System.out.println("using more than one thread for bloom filters: "
                        + (endTime - startTime) + " milliseconds");
                System.out.println("**Printing the results**");
                for(Future<ArrayList<Integer>> element: answers) {
                    ArrayList<Integer> arrInt = element.get();
                    for(Integer i: arrInt) {
                        System.out.print(i.intValue());
                        System.out.print("\t");
                    }
                    System.out.println("");
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
I did the profiling with JProfiler and here is a snapshot of the CPU threads (http://tinypic.com/r/wh1v8p/6), where red shows blocked, green runnable, and yellow waiting. The problem is that the threads run one at a time, and I do not know why.
Note: I know that this is not thread-safe, but I will only be doing read operations for now; I just want to analyse the raw performance gain that can be achieved, and later I will implement a better version.
Can anyone please tell me what I have missed?
One possibility is that the cost of creating threads is swamping any possible performance gains from doing the computations in parallel. We can't really tell if this is a real possibility because you haven't included the relevant code in the question.
Another possibility is that you only have one processor / core available. Threads only run when there is a processor to run them. So your expectation of a linear speedup with the number of threads is only (in theory) achievable if there is a free processor for each thread.
Finally, there could be memory contention due to the threads all attempting to access a shared array. If you had proper synchronization, that would potentially add further contention. (Note: I haven't tried to understand the algorithm to figure out if contention is likely in your example.)
My initial advice would be to profile your code, and see if that offers any insights.
And take a look at the way you are measuring performance to make sure that you aren't just seeing some benchmarking artefact; e.g. JVM warmup effects.
That process looks CPU bound. (no I/O, database calls, network calls, etc.) I can think of two explanations:
How many CPUs does your machine have? How many is Java allowed to use? - if the threads are competing for the same CPU, you've added coordination work and placed more demand on the same resource.
How long does the whole method take to run? For very short runs, the additional work of context switching threads can overpower the actual work. The way to deal with this is to make the job longer. Also, run it many times in a loop, not counting the first few iterations (they are like a warm-up and aren't representative); see the sketch below.
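A rough sketch of that kind of warm-up harness (the task and the iteration counts are arbitrary placeholders):

public class WarmupTiming {
    public static void main(String[] args) {
        Runnable task = () -> Math.sqrt(12345.6789); // stand-in for the real work

        int warmup = 10, measured = 50;
        for (int i = 0; i < warmup; i++) {
            task.run(); // discarded: lets the JIT compile the hot paths
        }
        long start = System.nanoTime();
        for (int i = 0; i < measured; i++) {
            task.run();
        }
        System.out.printf("avg: %,d ns per run%n",
                (System.nanoTime() - start) / measured);
    }
}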
Several possibilities come to mind:
There is some synchronization going on inside bloomFilterInstance's implementation (which is not given).
There is a lot of memory allocation going on, e.g., what appears to be an unnecessary copy of an ArrayList when chkDataSet is created, use of new Integer instead of Integer.valueOf. You may be running into overhead costs for memory allocation.
You may be CPU-bound (if bloomFilterInstance#contains is expensive) and threads are simply blocking for CPU instead of executing.
A profiler may help reveal the actual problem.
I have a multithreaded application, where a shared list has write-often, read-occasionally behaviour.
Specifically, many threads will dump data into the list, and then - later - another worker will grab a snapshot to persist to a datastore.
This is similar to the discussion over on this question.
There, the following solution is provided:
class CopyOnReadList<T> {

    private final List<T> items = new ArrayList<T>();

    public void add(T item) {
        synchronized (items) {
            // Add item while holding the lock.
            items.add(item);
        }
    }

    public List<T> makeSnapshot() {
        List<T> copy = new ArrayList<T>();
        synchronized (items) {
            // Make a copy while holding the lock.
            for (T t : items) copy.add(t);
        }
        return copy;
    }
}
However, in this scenario (and, as I've learned from my question here), only one thread can write to the backing list at any given time.
Is there a way to allow high-concurrency writes to the backing list, which are locked only during the makeSnapshot() call?
synchronized (~20 ns) is pretty fast, and even though other approaches can allow more concurrency, they can also be slower.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class Main<T> {
    private final Lock lock = new ReentrantLock();
    private List<T> items = new ArrayList<T>();

    public void add(T item) {
        lock.lock();
        // trivial lock time.
        try {
            // Add item while holding the lock.
            items.add(item);
        } finally {
            lock.unlock();
        }
    }

    public List<T> makeSnapshot() {
        List<T> copy = new ArrayList<T>(), ret;
        lock.lock();
        // trivial lock time.
        try {
            // Swap in a fresh list and hand back the old one.
            ret = items;
            items = copy;
        } finally {
            lock.unlock();
        }
        return ret;
    }

    public static void main(String... args) {
        long start = System.nanoTime();
        Main<Integer> ints = new Main<>();
        for (int j = 0; j < 100 * 1000; j++) {
            for (int i = 0; i < 1000; i++)
                ints.add(i);
            ints.makeSnapshot();
        }
        long time = System.nanoTime() - start;
        System.out.printf("The average time to add was %,d ns%n", time / 100 / 1000 / 1000);
    }
}
prints
The average time to add was 28 ns
This means that if you are creating 30 million entries per second, you will have on average one thread accessing the list at a time. If you are creating 60 million per second, you will have concurrency issues; however, you are likely to have many more resourcing issues by that point.
Using Lock.lock() and Lock.unlock() can be faster when there is a high contention ratio. However, I suspect your threads will spend most of their time building the objects to be added rather than waiting to add them.
You could use a ConcurrentDoublyLinkedList; there is an excellent implementation available under that name.
So long as you iterate forward through the list when you make your snapshot all should be well. This implementation preserves the forward chain at all times. The backward chain is sometimes inaccurate.
First of all, you should investigate if this really is too slow. Adds to ArrayLists are O(1) in the happy case, so if the list has an appropriate initial size, CopyOnReadList.add is basically just a bounds check and an assignment to an array slot, which is pretty fast. (And please, do remember that CopyOnReadList was written to be understandable, not performant.)
If you need a non-locking operation, you can have something like this:
import java.util.concurrent.atomic.AtomicReference;

class ConcurrentStack<T> {
    private final AtomicReference<Node<T>> stack = new AtomicReference<>();

    public void add(T value){
        Node<T> tail, head;
        do {
            tail = stack.get();
            head = new Node<>(value, tail);
        } while (!stack.compareAndSet(tail, head));
    }

    public Node<T> drain(){
        // Get all elements from the stack and reset it
        return stack.getAndSet(null);
    }
}

class Node<T> {
    private final T value;
    private final Node<T> tail;

    Node(T value, Node<T> tail) { this.value = value; this.tail = tail; }

    T getValue() { return value; }
    Node<T> getTail() { return tail; }
}
Note that while adds to this structure should deal pretty well with high contention, it comes with several drawbacks. The output from drain is quite slow to iterate over, it uses quite a lot of memory (like all linked lists), and you also get elements in reverse insertion order; a usage sketch follows below. (Also, it's not really tested or verified, and may actually suck in your application. But that's always the risk with using code from some random dude on the intertubes.)
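For example, a consumer draining the stack walks the chain newest-first (a usage sketch against the class above, relying on the Node getters shown):

ConcurrentStack<String> stack = new ConcurrentStack<>();
stack.add("a");
stack.add("b");

// Elements come out in reverse insertion order: "b", then "a".
for (Node<String> n = stack.drain(); n != null; n = n.getTail()) {
    System.out.println(n.getValue());
}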
Yes, there is a way. It is similar to the way ConcurrentHashMap is made, if you know it.
You should build your data structure not from one list shared by all writing threads, but from several independent lists. Each such list should be guarded by its own lock. The .add() method should choose the list to append the current item to based on Thread.currentThread().getId() (for example, just id % listsCount). This gives .add() good concurrency properties: at best, listsCount threads will be able to write without contention.
On makeSnapshot() you just iterate over all lists, and for each list you grab its lock and copy the contents.
This is just an idea; there are many places to improve it. A sketch is given below.
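A minimal sketch of that idea, with a fixed stripe count and the thread id used to pick a list (all names are illustrative):

import java.util.ArrayList;
import java.util.List;

class StripedList<T> {
    private final int stripes;
    private final List<List<T>> lists = new ArrayList<>(); // one backing list per stripe
    private final Object[] locks;                          // one lock per stripe

    StripedList(int stripes) {
        this.stripes = stripes;
        this.locks = new Object[stripes];
        for (int i = 0; i < stripes; i++) {
            lists.add(new ArrayList<T>());
            locks[i] = new Object();
        }
    }

    public void add(T item) {
        // Spread writers across stripes by thread id to reduce contention.
        int i = (int) (Thread.currentThread().getId() % stripes);
        synchronized (locks[i]) {
            lists.get(i).add(item);
        }
    }

    public List<T> makeSnapshot() {
        List<T> copy = new ArrayList<>();
        for (int i = 0; i < stripes; i++) {
            synchronized (locks[i]) {
                copy.addAll(lists.get(i));
            }
        }
        return copy;
    }
}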
You can use a ReadWriteLock to allow multiple threads to perform add operations on the backing list in parallel, but only one thread to make the snapshot. While the snapshot is being prepared all other add and snapshot request are put on hold.
A ReadWriteLock maintains a pair of associated locks, one for
read-only operations and one for writing. The read lock may be held
simultaneously by multiple reader threads, so long as there are no
writers. The write lock is exclusive.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class CopyOnReadList<T> {
    // free to use any concurrent data structure, ConcurrentLinkedQueue used as an example
    private final ConcurrentLinkedQueue<T> items = new ConcurrentLinkedQueue<T>();
    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
    private final Lock shared = rwLock.readLock();
    private final Lock exclusive = rwLock.writeLock();

    public void add(T item) {
        shared.lock(); // multiple threads can attain the read lock
        // try-finally is overkill if items.add() never throws exceptions
        try {
            // Add item while holding the lock.
            items.add(item);
        } finally {
            shared.unlock();
        }
    }

    public List<T> makeSnapshot() {
        List<T> copy = new ArrayList<T>(); // probably better to use a LinkedList or the ArrayList constructor with an initial size
        exclusive.lock(); // only one thread can attain the write lock; all read locks are also blocked
        // try-finally is overkill if the for loop never throws exceptions
        try {
            // Make a copy while holding the lock.
            for (T t : items) {
                copy.add(t);
            }
        } finally {
            exclusive.unlock();
        }
        return copy;
    }
}
Edit:
The read-write lock is so named because it is based on the readers-writers problem, not on how it is used. A read-write lock lets multiple threads acquire the read lock, but only one thread acquire the write lock, exclusively. In this case the problem is reversed: we want multiple threads to write (add) and only one thread to read (make the snapshot). So we want multiple threads to use the read lock even though they are actually mutating, and only one thread to exclusively make the snapshot using the write lock even though the snapshot only reads. Exclusive means that while the snapshot is being made, no other add or snapshot requests can be serviced by other threads.
As @PeterLawrey pointed out, the concurrent queue will serialize the writes, although the locks will be held for as short a duration as possible. We are free to use any other concurrent data structure, e.g. ConcurrentDoublyLinkedList. The queue is used only as an example; the main idea is the use of read-write locks.