I have a situation where one thread updates an int and another thread at some point reads it. So single-reader, single-writer.
So far I have been using a volatile int for that purpose, but since volatile writes impose memory-barrier costs, I was thinking about something else.
One approach would be AtomicInteger.incrementAndGet(), but I think this has exactly the same effect and would actually be slower.
Another approach would be to use AtomicInteger.lazySet with an extra non-volatile counter for the writer.
So basically we would have:
private int counter;
public AtomicInteger visibleCounter = new AtomicInteger();

private void write() {
    counter++;
    visibleCounter.lazySet(counter);
}

// called by reader
public boolean isCountEqual(int val) {
    return val == visibleCounter.get();
}
as a naive "lazyIncrement".
Would it actually be more performant than a simple increment of a volatile int by the writer?
Thanks
If lazy increment is one of your options, I'd suggest LongAdder.
LongAdder is designed for counters that are updated by multiple threads:
... under high contention, expected throughput of this class is significantly higher (than AtomicLong)
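For illustration, here is a minimal sketch of that counter using LongAdder, mirroring your write()/isCountEqual() methods (the class name is mine, not from your code):

import java.util.concurrent.atomic.LongAdder;

public class ProgressCounter {
    private final LongAdder visibleCounter = new LongAdder();

    // called by the writer thread
    public void write() {
        visibleCounter.increment();
    }

    // called by the reader thread
    public boolean isCountEqual(long val) {
        return val == visibleCounter.sum();
    }
}

Note that sum() is not an atomic snapshot under concurrent updates, but with a single writer it simply returns the running total.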
Related
My question is about synchronisation and preventing deadlocks when using threads. In this example an object simply holds a numeric value, and multiple threads call swapValue on those objects.
public class Data {

    private long value;

    public Data(long value) {
        this.value = value;
    }

    public synchronized long getValue() {
        return value;
    }

    public synchronized void setValue(long value) {
        this.value = value;
    }

    public void swapValue(Data other) {
        long temp = getValue();
        long newValue = other.getValue();
        setValue(newValue);
        other.setValue(temp);
    }
}
The swapValue method should be thread-safe and should not skip swapping the values if the resources are not available. Simply using the synchronized keyword on the method signature can result in a deadlock. I came up with this (apparently) working solution, which relies only on the probability that one thread unlocks its resource and the other tries to claim it while the resource is still unlocked.
private Lock lock = new ReentrantLock();
...

public void swapValue(Data other) {
    lock.lock();
    while (!other.lock.tryLock()) {
        lock.unlock();
        lock.lock();
    }

    long temp = getValue();
    long newValue = other.getValue();
    setValue(newValue);
    other.setValue(temp);

    other.lock.unlock();
    lock.unlock();
}
To me this looks like a hack. Is this a common solution for this kind of problem? Are there solutions that are "more deterministic" in their behaviour and also applicable in practice?
There are two issues at play here:
First, mixing Data.lock with the built-in lock used by the synchronized keyword
Second, inconsistent locking order among four (!) locks - this.lock, other.lock, the built-in lock of this, and the built-in lock of other
Even without synchronized, a.swapValue(b) and b.swapValue(a) can deadlock unless you use your approach to try to spin while locking and unlocking, which is inefficient.
One approach that you could take is add a field with some kind of final unique ID to each Data object - when swapping data of two objects, lock the one with a lower ID before the one with the higher ID, regardless of which is this and which is other. Note that System.identityHashCode is unfortunately not unique so it can't be easily used here.
The unlock ordering isn't critical here, but unlocking in the reverse order of locking is generally a good practice to follow where possible.
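For instance, the ID could simply be handed out from a static counter when each Data object is constructed; a minimal sketch, where the NEXT_ID and id names are mine:

import java.util.concurrent.atomic.AtomicLong;

public class Data {
    private static final AtomicLong NEXT_ID = new AtomicLong();
    private final long id = NEXT_ID.getAndIncrement(); // unique and never changes

    public long getID() {
        return id;
    }

    // value field, getters/setters and swapValue as before
}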
#Nanofarad has the right idea: Give every Data instance a unique, permanent numeric ID, and then use those IDs to decide which object to lock first. Here's what that might look like in practice:
private static void lockBoth(Data a, Data b) {
    Lock first = a.lock;
    Lock second = b.lock;
    if (a.getID() > b.getID()) {
        // always lock the object with the lower ID first
        first = b.lock;
        second = a.lock;
    }
    first.lock();
    second.lock();
}

private static void unlockBoth(Data a, Data b) {
    a.lock.unlock();
    b.lock.unlock();
    // Note: #Queeg suggests in comments below that in the general case,
    // it would be good practice to make this routine always unlock the
    // two locks in the order opposite to which `lockBoth()` locked them.
    // See https://stackoverflow.com/a/8949355/801894 for an explanation.
}

public void swapValue(Data other) {
    lockBoth(this, other);
    ...swap 'em...
    unlockBoth(this, other);
}
In your case, just use AtomicInteger or AtomicLong instead of reinventing the wheel. As for the synchronization and deadlocks part of your question in general: DO NOT RELY ON PROBABILITY. It is far too tricky and too easy to get wrong, unless you are an experienced mathematician who knows exactly what you are doing, and even then it is risky. One example where probability is used is UUIDs, but if computers get fast enough, code that shouldn't reasonably break until the end of the universe can break in a matter of milliseconds. It is better to write code that does not rely on probability, especially concurrent code.
I am trying to count how many instances of a class are created during the runtime of a process in a multi-threaded environment. The way I do it is to increment a static counter in the constructor, following this post:
How to Count Number of Instances of a Class
So in a multi-threaded environment, here is how I define the class:
class Television {
    private static volatile int counter = 0;

    public Television() {
        counter++;
    }
}
However, I am not sure whether there is a potential bug in the code above, since a constructor in Java does not imply synchronization and counter++ is not atomic. If two threads create instances simultaneously, does this code have a bug? I am not quite sure.
There is a bug in this code (specifically, a race condition), because the read of counter and write to counter aren't atomically executed.
In other words, two threads can read the same value of counter, increment that value, and then write the same value back to the variable.
Thread 1       Thread 2
========       ========
Read 0
               Read 0
Increment
               Increment
Write 1
               Write 1
So the value would be 1, not 2, afterwards.
Use AtomicInteger and AtomicInteger.incrementAndGet() instead.
As counter++ is NOT atomic, you can replace it with the JDK's AtomicInteger, which is thread-safe.
You can use AtomicInteger's getAndIncrement() method as shown below:
class Television {
    private static final AtomicInteger counter = new AtomicInteger();

    public Television() {
        counter.getAndIncrement();
    }
}
An AtomicInteger is used in applications such as atomically
incremented counters, and cannot be used as a replacement for an
Integer.
You can look here
There are two ways here to bypass the underlying "++ on int" not being an atomic operation:
A) as others suggested, use AtomicInteger
B) introduce a common LOCK that all constructors can use to synchronize on, like:
private final static Object LOCK = new Object();

public Television() {
    synchronized (LOCK) {
        counter++;
    }
}
The requirement is that I need an ArrayList of integers. I need thread-safe access to the individual integers (write, read, increase, decrease), and I also need to allow maximum concurrency.
The operations on each integer also have a particular profile:
The most frequent operation is to read.
The second most frequent operation is to decrease by one, but only if the value is greater than zero; or to increase by one (unconditionally).
Adding/removing elements is rare, but still needed.
I thought about AtomicInteger. However, it seems unsuitable, because the atomic operation I want is "compare if not zero, then decrease", whereas the atomic operation provided by AtomicInteger is "compare if equal, then set". If you know how to apply AtomicInteger in this case, please raise it here.
What I am thinking is to synchronize the access to each integer, like this:
ArrayList<MutableInt> list;
... ...

// Compare if greater than zero, and decrease
MutableInt n = list.get(index);
boolean success = false;
synchronized (n) {
    if (n.intValue() > 0) { n.decrement(); success = true; }
}

// To add one
MutableInt n = list.get(index);
synchronized (n) {
    n.increment();
}

// To just read, I am thinking no synchronization is needed at all.
int n = list.get(index).intValue();
With my solution, is there any side-effect? Is it efficient to maintain hundreds or even thousands of synchronized integers?
Update: I am also thinking that allowing concurrent access to every element is neither practical nor beneficial, as the actual concurrency is limited by the number of processors. Maybe using just a few synchronization objects to guard different portions of the list would be enough, roughly as sketched below?
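Something like this lock-striping sketch is what I have in mind (the stripe count is arbitrary, and MutableInt is the same type as above):

// hypothetical sketch: each index maps to one of a small pool of monitor objects
private static final int STRIPES = 16;
private final Object[] stripes = new Object[STRIPES];
{
    for (int i = 0; i < STRIPES; i++) stripes[i] = new Object();
}

private Object stripeFor(int index) {
    return stripes[index % STRIPES];
}

// compare if greater than zero, and decrease
public boolean decrementIfPositive(int index) {
    synchronized (stripeFor(index)) {
        MutableInt n = list.get(index);
        if (n.intValue() > 0) { n.decrement(); return true; }
        return false;
    }
}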
Another thing is to implement the add/delete operations so that they are thread-safe but do not impact the concurrency of the other operations much. I am thinking of a ReadWriteLock: for add/delete, acquire the write lock; for the other operations (changing the value of one integer), acquire the read lock. Is this the right approach?
I think you're right to use the read lock for accessing the list and the write lock for add/remove on the list.
You can still use AtomicInteger for the values:
// Increase value
value.incrementAndGet()
// Decrease value, lower bound is 0
int num;
do {
    num = value.get();
    if (num == 0)
        break;
} while (!value.compareAndSet(num, num - 1)); // try again if concurrently updated
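For completeness, here is a rough sketch of the read-write-lock arrangement described above (the class and method names are illustrative only): structural changes take the write lock, while per-element updates take the read lock, since they do not change the list's structure.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class CounterList {
    private final List<AtomicInteger> list = new ArrayList<>();
    private final ReadWriteLock rw = new ReentrantReadWriteLock();

    // structural change: exclusive access
    public void add(int initial) {
        rw.writeLock().lock();
        try {
            list.add(new AtomicInteger(initial));
        } finally {
            rw.writeLock().unlock();
        }
    }

    // element change: shared lock, the AtomicInteger provides the atomicity
    public void increment(int index) {
        rw.readLock().lock();
        try {
            list.get(index).incrementAndGet();
        } finally {
            rw.readLock().unlock();
        }
    }

    // decrease by one only if greater than zero
    public boolean decrementIfPositive(int index) {
        rw.readLock().lock();
        try {
            AtomicInteger value = list.get(index);
            int num;
            do {
                num = value.get();
                if (num == 0) return false;
            } while (!value.compareAndSet(num, num - 1));
            return true;
        } finally {
            rw.readLock().unlock();
        }
    }
}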
I think, if you can live with a fixed-size list, using a single AtomicIntegerArray is a better choice than using multiple AtomicIntegers:
public class AtomicIntList extends AbstractList<Integer> {
    private final AtomicIntegerArray array;

    public AtomicIntList(int size) {
        array = new AtomicIntegerArray(size);
    }

    public int size() {
        return array.length();
    }

    public Integer get(int index) {
        return array.get(index);
    }

    // for code accessing this class directly rather than using the List interface
    public int getAsInt(int index) {
        return array.get(index);
    }

    public Integer set(int index, Integer element) {
        return array.getAndSet(index, element);
    }

    // for code accessing this class directly rather than using the List interface
    public int setAsInt(int index, int element) {
        return array.getAndSet(index, element);
    }

    public boolean decrementIfPositive(int index) {
        for (;;) {
            int old = array.get(index);
            if (old <= 0) return false;
            if (array.compareAndSet(index, old, old - 1)) return true;
        }
    }

    public int incrementAndGet(int index) {
        return array.incrementAndGet(index);
    }
}
Code accessing this class directly rather than via the List<Integer> interface may use the methods getAsInt and setAsInt to avoid boxing conversions. This is a common pattern. Since the methods decrementIfPositive and incrementAndGet are not part of the List interface anyway, they always use int values.
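For example, a caller that keeps a reference of type AtomicIntList could use it like this (the index is arbitrary):

AtomicIntList counters = new AtomicIntList(100);
counters.incrementAndGet(42);                      // unconditional increment
boolean ok = counters.decrementIfPositive(42);     // never drops below zero
int current = counters.getAsInt(42);               // read without boxing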
As an update to this question... I found that the simplest solution, just synchronizing the entire code block for all potentially conflicting methods, turns out to be the best, even from a performance point of view. Synchronizing the code block solves both issues: accessing each counter, and adding/deleting elements in the counter list.
This is because ReentrantReadWriteLock has a really high overhead, even when only the read lock is taken. Compared to the overhead of the read/write lock, the cost of the operation itself is so tiny that any additional locking is not worth it.
The statement in the API doc of ReentrantReadWriteLock deserves close attention: "ReentrantReadWriteLocks... is typically worthwhile only when ... and entail operations with overhead that outweighs synchronization overhead".
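A minimal sketch of what that plain-synchronized version looks like (the class and method names are mine):

import java.util.ArrayList;
import java.util.List;

class SynchronizedCounterList {
    // one intrinsic lock guards both the list structure and every counter value
    private final List<Integer> counters = new ArrayList<>();

    public synchronized void add(int initial) { counters.add(initial); }

    public synchronized void remove(int index) { counters.remove(index); }

    public synchronized int get(int index) { return counters.get(index); }

    public synchronized void increment(int index) {
        counters.set(index, counters.get(index) + 1);
    }

    public synchronized boolean decrementIfPositive(int index) {
        int v = counters.get(index);
        if (v <= 0) return false;
        counters.set(index, v - 1);
        return true;
    }
}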
Suppose that there are many threads that call the method m(int i) and change the value of the array in position i. Is the following code correct, or is there a race condition?
public class A {
    private int[] a = new int[N];
    private Semaphore[] s = new Semaphore[N];

    public A() {
        for (int i = 0; i < N; i++)
            s[i] = new Semaphore(1);
    }

    public void m(int i) throws InterruptedException {
        s[i].acquire();
        a[i]++;
        s[i].release();
    }
}
The code is correct; I see no race condition, although both a and s should be made final. You should also use try/finally every time you work with locks that need to be acquired and released:
s[i].acquire();
try {
    a[i]++;
} finally {
    s[i].release();
}
But for updating an array, individual locks per item are quite unnecessary. A single lock would be just as appropriate, since the major cost is the memory update and the native synchronization itself. That said, if the actual operation is more than an int++, then you are justified in using a Semaphore or another Lock object.
But for simple operations, something like the following is fine:
// make sure it is final if you are synchronizing on it
private final int[] a = new int[N];
...

public void m(int i) {
    synchronized (a) {
        a[i]++;
    }
}
If you are really worried about the blocking then an array of AtomicInteger is another possibility but even this feels like overkill unless a profiler tells you otherwise.
private final AtomicInteger[] a = new AtomicInteger[N];
...

public A() {
    for (int i = 0; i < N; i++)
        a[i] = new AtomicInteger(0);
}

public void m(int i) {
    a[i].incrementAndGet();
}
Edit:
I just wrote a quick, stupid test program that compares a single synchronized lock, synchronizing on an array of locks, an AtomicInteger array, and a Semaphore array. Here are the results:
synchronized on the int[]                10617ms
synchronized on an array of Object[]      1827ms
AtomicInteger array                       1414ms
Semaphore array                           3211ms
But, the kicker is that this is with 10 threads each doing 10 million iterations. Sure it is faster but unless you are truly doing millions of iterations, you won't see any noticeable performance improvement in your application. This is the definition of "premature optimization". You will be paying for code complexity, increasing the likelihood of bugs, adding debugging time, increasing maintenance costs, etc.. To quote Knuth:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
Now, as the OP implies in comments, the i++ is not the real operation that s/he is protecting. If the increment is a lot more time consuming (i.e. if the blocking is increased), then the array of locks will be required.
Consider the code snippet below:
package sync;
public class LockQuestion {

    private String mutable;

    public synchronized void setMutable(String mutable) {
        this.mutable = mutable;
    }

    public String getMutable() {
        return mutable;
    }
}
At time Time1, thread Thread1 updates the ‘mutable’ variable. Synchronization is needed in the setter in order to flush the write from the local cache to main memory.
At time Time2 (Time2 > Time1, no thread contention), thread Thread2 reads the value of mutable.
The question is: do I need to make the getter synchronized as well? It looks like this won't cause any issues: memory should be up to date, and Thread2's local cache should have been invalidated and updated by Thread1's write, but I'm not sure.
Rather than wonder, why not just use the atomic references in java.util.concurrent?
(and for what it's worth, my reading of happens-before does not guarantee that Thread2 will see changes to mutable unless it also uses synchronized ... but I always get a headache from that part of the JLS, so use the atomic references)
It will be fine if you make mutable volatile; details are in the "cheap read-write lock" idiom.
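A minimal sketch of that variant, assuming the setter stays synchronized in case it ever grows into a compound action:

public class LockQuestion {
    private volatile String mutable;

    public synchronized void setMutable(String mutable) {
        this.mutable = mutable;
    }

    public String getMutable() {
        // no lock needed: a volatile read sees the latest completed write
        return mutable;
    }
}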
Are you absolutely sure that the getter will be called only after the setter is called? If so, you don't need the getter to be synchronized, since concurrent reads do not need to be synchronized.
If there is a chance that get and set can be called concurrently then you definitely need to synchronize the two.
If you are worried about performance in the reading thread, read the value once using proper synchronization, volatile, or an atomic reference, and then assign it to a plain old variable.
The assignment to the plain variable is guaranteed to happen after the atomic read (how else could it get the value?), and if the value is never written by another thread again, you are all set.
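A small sketch of that idea (class and field names are hypothetical): the reader performs one volatile read and then works only with an ordinary local variable.

public class Reader {
    private volatile String shared;   // written once by another thread

    public void readerThread() {
        String local = shared;        // the single properly synchronized read
        if (local != null) {
            // from here on, only the plain local variable is used
            System.out.println(local.length());
        }
    }
}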
I think you should start with something which is correct and optimise later when you know you have an issue. I would just use AtomicReference unless a few nano-seconds is too long. ;)
public static void main(String... args) {
    AtomicReference<String> ars = new AtomicReference<String>();
    ars.set("hello");
    long start = System.nanoTime();
    int runs = 1000 * 1000 * 1000;
    int length = test(ars, runs);
    long time = System.nanoTime() - start;
    System.out.printf("get() costs " + 1000 * time / runs + " ps.");
}

private static int test(AtomicReference<String> ars, int runs) {
    int len = 0;
    for (int i = 0; i < runs; i++)
        len = ars.get().length();
    return len;
}
Prints
get() costs 1219 ps.
ps is a picosecond, which is one millionth of a microsecond.
This will probably never result in incorrect behavior, but unless you also guarantee the order in which the threads start up, you cannot necessarily guarantee that the compiler didn't reorder the read in Thread2 before the write in Thread1. More specifically, the Java runtime only has to guarantee that each thread executes as if it were running serially. So, as long as a thread produces the same output as it would when run serially, the entire language stack (compiler, hardware, language runtime) can do pretty much whatever it wants, including allowing Thread2 to cache the result of LockQuestion.getMutable().
In practice, I would be very surprised if that ever happened. If you want to guarantee that it doesn't, declare LockQuestion.mutable as final and initialize it in the constructor. Or use the following initialization-on-demand holder idiom:
private static class LazySomethingHolder {
    public static final Something something = new Something();
}

public static Something getInstance() {
    return LazySomethingHolder.something;
}