How can I use AtomicBoolean, and what is that class for?
Use it when multiple threads need to check and change a boolean. For example:
if (!initialized) {
    initialize();
    initialized = true;
}
This is not thread-safe. You can fix it by using AtomicBoolean:
if (atomicInitialized.compareAndSet(false, true)) {
    initialize();
}
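For completeness, the flag in the snippet above would be declared along these lines (the field name atomicInitialized comes from the snippet; the surrounding class is assumed):
import java.util.concurrent.atomic.AtomicBoolean;

// somewhere in the class that owns the initialization
private final AtomicBoolean atomicInitialized = new AtomicBoolean(false);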
Here are the notes I made (from Brian Goetz's book) that might be of help to you:
AtomicXXX classes
provide a non-blocking compare-and-swap (CAS) implementation.
take advantage of hardware support (the CMPXCHG instruction on Intel). When lots of threads are running through code that uses these atomic concurrency APIs, they will scale much better than code which uses object-level monitors/synchronization. Because Java's synchronization mechanisms make code wait, when there are lots of threads running through your critical sections a substantial amount of CPU time is spent managing the synchronization mechanism itself (waiting, notifying, etc.). Since the new API uses hardware-level constructs (atomic variables) and wait-free and lock-free algorithms to implement thread safety, a lot more CPU time is spent "doing stuff" rather than managing synchronization.
not only offer better throughput, but they also provide greater resistance to liveness problems such as deadlock and priority inversion.
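To make the CAS idea concrete, here is a minimal sketch of the retry idiom those notes describe, written by hand on top of AtomicInteger.compareAndSet (the class name is mine; in practice you would just call incrementAndGet()):
import java.util.concurrent.atomic.AtomicInteger;

public final class CasCounter {
    private final AtomicInteger value = new AtomicInteger();

    public int increment() {
        while (true) {
            int current = value.get();
            int next = current + 1;
            // Succeeds only if no other thread changed the value in between;
            // on failure we simply re-read and retry, and no thread ever blocks.
            if (value.compareAndSet(current, next)) {
                return next;
            }
        }
    }
}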
There are two main reasons why you would use an AtomicBoolean. First, it is mutable: you can pass it in as a reference and change the value associated with the boolean itself, for example:
public final class MyThreadSafeClass {
    private AtomicBoolean myBoolean = new AtomicBoolean(false);
    private SomeThreadSafeObject someObject = new SomeThreadSafeObject();

    public boolean doSomething() {
        someObject.doSomeWork(myBoolean);
        return myBoolean.get(); // will return true
    }
}
and in the SomeThreadSafeObject class:
public final class SomeThreadSafeObject {
    public void doSomeWork(AtomicBoolean b) {
        b.set(true);
    }
}
More importantly, though, it is thread-safe, and it indicates to developers maintaining the class that this variable is expected to be modified and read from multiple threads. If you do not use an AtomicBoolean, you must synchronize the boolean variable you are using, either by declaring it volatile or by synchronizing around every read and write of the field.
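For comparison, here is a minimal sketch of the synchronized alternative mentioned above (class and method names are mine); every read and write of the flag goes through the same lock:
public final class SynchronizedInitializer {
    private boolean initialized = false; // guarded by "this"

    public synchronized void initializeOnce() {
        if (!initialized) {
            initialize();
            initialized = true;
        }
    }

    private void initialize() {
        // expensive one-time setup
    }
}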
The AtomicBoolean class gives you a boolean value that you can update atomically. Use it when you have multiple threads accessing a boolean variable.
The java.util.concurrent.atomic package overview gives you a good high-level description of what the classes in this package do and when to use them. I'd also recommend the book Java Concurrency in Practice by Brian Goetz.
Excerpt from the package description
Package java.util.concurrent.atomic description: A small toolkit of classes that support lock-free thread-safe programming on single variables.[...]
The specifications of these methods enable implementations to employ efficient machine-level atomic instructions that are available on contemporary processors.[...]
Instances of classes AtomicBoolean, AtomicInteger, AtomicLong, and AtomicReference each provide access and updates to a single variable of the corresponding type.[...]
The memory effects for accesses and updates of atomics generally follow the rules for volatiles:
get has the memory effects of reading a volatile variable.
set has the memory effects of writing (assigning) a volatile variable.
weakCompareAndSet atomically reads and conditionally writes a variable, is ordered with respect to other memory operations on that variable, but otherwise acts as an ordinary non-volatile memory operation.
compareAndSet and all other read-and-update operations such as getAndIncrement have the memory effects of both reading and writing volatile variables.
Related
I want to track access to a variable with getVariableAndLogAccess(RequestInfo requestInfo) in the code below. Will it be thread safe if only these two methods access the variable?
What is the standard way to make it thread safe?
public class MyAccessLog {
    private int recordIndex = 0;
    private int variableWithAccessTracking = 42;
    private final Map<Integer, RequestInfo> requestsLog = new HashMap<>();

    public int getVariableAndLogAccess(RequestInfo requestInfo) {
        Integer myID = recordIndex++;
        int variableValue = variableWithAccessTracking;
        requestInfo.saveValue(variableValue);
        requestsLog.put(myID, requestInfo);
        return variableValue;
    }

    public void setValueAndLog(RequestInfo requestInfo, int newValue) {
        Integer myID = recordIndex++;
        variableWithAccessTracking = newValue;
        requestInfo.saveValue(newValue);
        requestsLog.put(myID, requestInfo);
    }

    /* other methods */
}
Will it be thread safe if only these two methods access variable?
No.
For instance, if two threads call setValueAndLog, they might end up with the same myID value.
What is the standard way to make it thread safe?
You should either replace your int with an AtomicInteger, or use a lock or a synchronized block to prevent concurrent modifications.
As a rule of thumb, using an atomic variable such as the previously mentioned AtomicInteger is better than using locks since locks involve the operating system. Calling the operating system is like bringing in the lawyers - both are best avoided for things you can solve yourself.
Note that if you use locks or synchronized blocks, both the setter and getter need to use the same lock. Otherwise the getter could be accessed while the setter is still updating the variable, leading to concurrency errors.
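A minimal sketch of the synchronized approach for the class in the question, under the assumption that every method touching recordIndex, variableWithAccessTracking, or the map uses the same lock (here the instance itself):
public class MyAccessLog {
    private int recordIndex = 0;
    private int variableWithAccessTracking = 42;
    private final Map<Integer, RequestInfo> requestsLog = new HashMap<>();

    // Both methods lock on "this", so the index increment, the read/write of the
    // tracked variable, and the HashMap update happen as one atomic unit.
    public synchronized int getVariableAndLogAccess(RequestInfo requestInfo) {
        Integer myID = recordIndex++;
        int variableValue = variableWithAccessTracking;
        requestInfo.saveValue(variableValue);
        requestsLog.put(myID, requestInfo);
        return variableValue;
    }

    public synchronized void setValueAndLog(RequestInfo requestInfo, int newValue) {
        Integer myID = recordIndex++;
        variableWithAccessTracking = newValue;
        requestInfo.saveValue(newValue);
        requestsLog.put(myID, requestInfo);
    }
}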
Will it be thread safe if only these two methods access variable?
Nope.
Intuitively, there are two reasons:
An increment consists of a read followed by a write. The JLS does not guarantee that the two will be performed as an atomic operation, and indeed Java implementations do not do that.
Modern multi-core systems implement memory access with fast local memory caches and slower main memory. This means that one thread is not guaranteed to see the results of another thread's memory writes ... unless there are appropriate "memory barrier" instructions to force main-memory writes / reads.
Java will only insert these instructions if the memory model says it is necessary. (Because ... they slow the code down!)
Technically, the JLS has a whole chapter describing the Java Memory Model, and it provides a set of rules that allow you to reason about whether memory is being used correctly. For the higher level stuff, you can reason based on the guarantees provided by AtomicInteger, etcetera.
What is the standard way to make it thread safe?
In this case, you could use either an AtomicInteger instance, or you could synchronize using primitive object locking (i.e. the synchronized keyword) or a Lock object.
@Malt is right. Your code is not even close to being thread-safe.
You can use AtomicInteger for your counter, but LongAdder would be more suitable for your case, as it is optimized for situations where you count things and read the result less often than you update it. LongAdder also has the same thread-safety guarantees as AtomicInteger.
From java doc on LongAdder:
This class is usually preferable to AtomicLong when multiple threads update a common sum that is used for purposes such as collecting statistics, not for fine-grained synchronization control. Under low update contention, the two classes have similar characteristics. But under high contention, expected throughput of this class is significantly higher, at the expense of higher space consumption.
This is a common approach to logging in a thread-safe way (a sketch follows the list):
For the counter, use an AtomicInteger counter with its addAndGet(1) method.
Add data using public synchronized void putRecord(Data data){ /**/ }
If you only use recordIndex as a handle for the record, you can replace the map with a synchronized list: List list = Collections.synchronizedList(new LinkedList());
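Putting those three points together, a sketch might look like this (class and method names are mine; RequestInfo comes from the question):
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadSafeAccessLog {
    // 1. Atomic counter: increments are thread-safe without an explicit lock.
    private final AtomicInteger counter = new AtomicInteger();

    // 3. Synchronized list instead of the HashMap, if the index is only a handle.
    private final List<RequestInfo> records =
            Collections.synchronizedList(new LinkedList<RequestInfo>());

    public int nextRecordId() {
        return counter.addAndGet(1);
    }

    // 2. Adding data is guarded; here the guarding is delegated to the synchronized list.
    public void putRecord(RequestInfo data) {
        records.add(data);
    }
}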
I'm working with a framework that requires a callback when sending a request. Each callback has to implement this interface. The methods in the callback are invoked asynchronously.
public interface ClientCallback<RESP extends Response>
{
    public void onSuccessResponse(RESP resp);
    public void onFailureResponse(FailureResponse failure);
    public void onError(Throwable e);
}
To write integration tests with TestNG, I wanted to have a blocking callback. So I used a CountDownLatch to synchronize between threads.
Is the AtomicReference really needed here or is a raw reference okay? I know that if I use a raw reference and a raw integer (instead of CountDownLatch), the code wouldn't work because visibility is not guaranteed. But since the CountDownLatch is already synchronized, I wasn't sure whether I needed the extra synchronization from AtomicReference.
Note: The Result class is immutable.
public class BlockingCallback<RESP extends Response> implements ClientCallback<RESP>
{
    private final AtomicReference<Result<RESP>> _result = new AtomicReference<Result<RESP>>();
    private final CountDownLatch _latch = new CountDownLatch(1);

    public void onSuccessResponse(RESP resp)
    {
        _result.set(new Result<RESP>(resp, null, null));
        _latch.countDown();
    }

    public void onFailureResponse(FailureResponse failure)
    {
        _result.set(new Result<RESP>(null, failure, null));
        _latch.countDown();
    }

    public void onError(Throwable e)
    {
        _result.set(new Result<RESP>(null, null, e));
        _latch.countDown();
    }

    public Result<RESP> getResult(final long timeout, final TimeUnit unit) throws InterruptedException, TimeoutException
    {
        if (!_latch.await(timeout, unit))
        {
            throw new TimeoutException();
        }
        return _result.get();
    }
}
You don't need to use another synchronization object (AtomicReference) here. The point is that the variable is set before countDown() is invoked in one thread and read after await() returns in another thread. CountDownLatch already performs thread synchronization and inserts a memory barrier, so the order of writing before and reading after is guaranteed. Because of this, you don't even need to make that field volatile.
A good starting point is the javadoc (emphasis mine):
Memory consistency effects: Until the count reaches zero, actions in a thread prior to calling countDown() happen-before actions following a successful return from a corresponding await() in another thread.
Now there are two options:
either you never call the onXxx setter methods once the count is 0 (i.e. you only call one of the methods once) and you don't need any extra synchronization
or you may call the setter methods more than once and you do need extra synchronization
If you are in scenario 2, you need to make the variable at least volatile (no need for an AtomicReference in your example).
If you are in scenario 1, you need to decide how defensive you want to be:
to err on the safe side you can still use volatile
if you are happy that the calling code won't mess with the class, you can use a normal variable, but I would at least make it clear in the javadoc of the methods that only the first call to the onXxx methods is guaranteed to be visible
Finally, in scenario 1, you may want to enforce the fact that the setters can only be called once, in which case you would probably use an AtomicReference and its compareAndSet method to make sure that the reference was null beforehand and throw an exception otherwise.
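For that last option, a minimal sketch of one of the setters, reusing the _result and _latch fields from the question (the other callbacks would do the same; the choice of exception is mine):
public void onSuccessResponse(RESP resp)
{
    // compareAndSet succeeds only for the first completion; a second call
    // indicates the callback was invoked twice, which we treat as a bug.
    if (!_result.compareAndSet(null, new Result<RESP>(resp, null, null)))
    {
        throw new IllegalStateException("callback already completed");
    }
    _latch.countDown();
}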
Short answer is you don't need AtomicReference here. You'll need volatile though.
The reason is that you're only writing to and reading from the reference (Result) and not doing any composite operations like compareAndSet().
Reads and writes are atomic for reference variables and for most primitive variables (all types except long and double).
Reference: the Sun Java tutorial on atomic access:
https://docs.oracle.com/javase/tutorial/essential/concurrency/atomic.html
Then there is JLS (Java Language Specification)
Writes to and reads of references are always atomic, regardless of whether they are implemented as 32-bit or 64-bit values.
Java 8
http://docs.oracle.com/javase/specs/jls/se8/html/jls-17.html#jls-17.7
Java 7
http://docs.oracle.com/javase/specs/jls/se7/html/jls-17.html#jls-17.7
Java 6
http://docs.oracle.com/javase/specs/jls/se6/html/memory.html#17.7
Source : https://docs.oracle.com/javase/tutorial/essential/concurrency/atomic.html
Atomic actions cannot be interleaved, so they can be used without fear of thread interference. However, this does not eliminate all need to synchronize atomic actions, because memory consistency errors are still possible. Using volatile variables reduces the risk of memory consistency errors, because any write to a volatile variable establishes a happens-before relationship with subsequent reads of that same variable. This means that changes to a volatile variable are always visible to other threads. What's more, it also means that when a thread reads a volatile variable, it sees not just the latest change to the volatile, but also the side effects of the code that led up to the change.
Since you have only single operation write/read and it's atomic, making the variable volatile will suffice.
Regarding the use of CountDownLatch: it's used to wait for n operations in other threads to complete. Since you have only one operation, you can use a Condition instead of a CountDownLatch.
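A minimal sketch of that Condition-based variant, assuming the same class as in the question (field names are mine, and the Lock, ReentrantLock, and Condition types come from java.util.concurrent.locks):
private final Lock _lock = new ReentrantLock();
private final Condition _done = _lock.newCondition();
private Result<RESP> _result; // guarded by _lock

public void onSuccessResponse(RESP resp)
{
    // the other callbacks would set _result the same way
    _lock.lock();
    try
    {
        _result = new Result<RESP>(resp, null, null);
        _done.signalAll();
    }
    finally
    {
        _lock.unlock();
    }
}

public Result<RESP> getResult(long timeout, TimeUnit unit) throws InterruptedException, TimeoutException
{
    _lock.lock();
    try
    {
        long nanos = unit.toNanos(timeout);
        while (_result == null) // the loop guards against spurious wake-ups
        {
            if (nanos <= 0L)
            {
                throw new TimeoutException();
            }
            nanos = _done.awaitNanos(nanos);
        }
        return _result;
    }
    finally
    {
        _lock.unlock();
    }
}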
If you're interested in usage of AtomicReference, you can check Java Concurrency in Practice (Page 326), find the book below:
https://github.com/HackathonHackers/programming-ebooks/tree/master/Java
Or see the same example used by @Binita Bharti in the following StackOverflow answer:
When to use AtomicReference in Java?
In order for an assignment to be visible across threads some sort of memory barrier must be crossed. This can be accomplished several different ways, depending on what exactly you're trying to do.
You can use a volatile field. Reads and writes to volatile fields are atomic and visible across threads.
You can use an AtomicReference. This is effectively the same as a volatile field, but it's a little more flexible (you can reassign and pass around references to the AtomicReference) and has a few extra operations, like compareAndSet().
You can use a CountDownLatch or similar synchronizer class, but you need to pay close attention to the memory invariants they offer. CountDownLatch, for instance, guarantees that all threads that await() will see everything that occurs in a thread that calls countDown() up to when countDown() is called.
You can use synchronized blocks. These are even more flexible, but require more care - both the write and the read must be synchronized, otherwise the write may not be seen.
You can use a thread-safe collection, such as a ConcurrentHashMap. Overkill if all you need is a cross-thread reference, but useful for storing structured data that multiple threads need to access.
This isn't intended to be a complete list of options, but hopefully you can see there are several ways to ensure a value becomes visible to other threads, and that AtomicReference is simply one of those mechanisms.
I read "thread safety for static variables" and I understand it and agree with it, but in the book for the Java SE 7 Programmer Exam 804, the following code is discussed:
public void run() {
    synchronized(SharedCounter.class) {
        SharedCounter.count++;
    }
}
However, this code is inefficient since it acquires and releases the lock every time just to increment the value of count.
Can someone explain the above quote to me?
The code is not particularly inefficient; it could be slightly more efficient. The main problem is that it is fragile: if any developer forgets to synchronize access to the global SharedCounter.count variable, you have a thread-safety issue. Indeed, since count++ is not an atomic operation, and since changing the value of a variable without synchronization doesn't make the variable's new value visible to other threads, every access to count must be done in a synchronized way.
The synchronization is thus not correctly encapsulated in a single class. Generally, accessing global public fields is bad design. It's even worse in a multi-threaded environment.
Using an AtomicInteger solves the encapsulation problem, and makes it slightly more efficient at the same time.
Synchronizing can be expensive, so it shouldn't be used carelessly. There are better ways, such as using AtomicInteger.incrementAndGet(), which uses different mechanisms to handle the synchronization.
It's inefficient compared to using intrinsic CPU instructions which can do atomic increments without using a lock. See http://en.wikipedia.org/wiki/Fetch-and-add and http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/atomic/AtomicInteger.html
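For concreteness, a sketch of the AtomicInteger version (the class layout is assumed, not taken from the exam book):
import java.util.concurrent.atomic.AtomicInteger;

public final class SharedCounter {
    static final AtomicInteger count = new AtomicInteger();
}

// in the Runnable, the synchronized block is no longer needed:
public void run() {
    // one atomic hardware-level increment; no monitor is acquired or released
    SharedCounter.count.incrementAndGet();
}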
I understand the valid use cases for AtomicInteger, but I am confused about how AtomicBoolean can guarantee atomicity of two actions, (i) changing the boolean value and (ii) executing the one-time logic, e.g. initialize(), in the following often-quoted use case for an AtomicBoolean variable atomicInitialized:
if (atomicInitialized.compareAndSet(false, true)) {
    initialize();
}
This operation will first set atomicInitialized to true (if it is false) and then execute initialize(), which isn't safe. It guarantees that initialize() is only called once, but the second thread to hit the compareAndSet() will not be delayed until the first thread has finished the initialization. So, while the AtomicBoolean provides atomicity in updating the boolean value, it doesn't really provide atomicity for the entire if block, and a synchronize/lock mechanism has to be used to achieve complete atomicity. Hence, the above often-quoted, popular use case isn't really atomic!
The "atomic" classes are meant to provide thread-safe access and manipulation for single variables. They are not meant for synchronization of entire blocks, such as the if block you have as an example here.
From the java.util.concurrent.atomic package description:
Atomic classes are designed primarily as building blocks for implementing non-blocking data structures and related infrastructure classes. The compareAndSet method is not a general replacement for locking. It applies only when critical updates for an object are confined to a single variable.
To synchronize the entire block, don't rely solely on the "atomic" classes. You must provide other synchronization code.
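If the requirement is both "initialize only once" and "no thread proceeds until initialization has finished", one common combination (a sketch, not the only option; names are mine) pairs the AtomicBoolean with a CountDownLatch:
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

public final class OneTimeInitializer {
    private final AtomicBoolean started = new AtomicBoolean(false);
    private final CountDownLatch done = new CountDownLatch(1);

    public void ensureInitialized() throws InterruptedException {
        if (started.compareAndSet(false, true)) {
            try {
                initialize();     // only the winning thread runs this
            } finally {
                done.countDown(); // release the waiters even if initialize() throws
            }
        } else {
            done.await();         // every other thread waits until initialization is done
        }
    }

    private void initialize() {
        // expensive one-time setup
    }
}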
Mutexes are pretty common in many programming languages, like e.g. C/C++. I miss them in Java. However, there are multiple ways I could write my own class Mutex:
Using the plain synchronized keyword on the Mutex.
Using a binary semaphore.
Using atomic variables, like discussed here.
...?
What is the fastest (best runtime) way? I think synchronized is most common, but what about performance?
Mutexes are pretty common in many programming languages, like e.g. C/C++. I miss them in Java.
Not sure I follow you (especially because you give the answer in your question).
public class SomeClass {
    private final Object mutex = new Object();

    public void someMethodThatNeedsAMutex() {
        synchronized(mutex) {
            // here you hold the mutex
        }
    }
}
Alternatively, you can simply make the whole method synchronized, which is equivalent to using this as the mutex object:
public class SomeClass {
    public synchronized void someMethodThatNeedsAMutex() {
        // here you hold the mutex
    }
}
What is the fastest (best runtime) way?
Acquiring / releasing a monitor is not going to be a significant performance issue per se (you can read this blog post to see an analysis of the impact). But if you have many threads fighting for the lock, it will create contention and degrade performance.
In that case, the best strategy is to not use mutexes at all, by using "lock-free" algorithms if you are mostly reading data (as pointed out by Marko in the comments, lock-free uses CAS operations, which may involve retrying writes many times if you have lots of writing threads, eventually leading to worse performance) or, even better, by avoiding sharing too much stuff across threads.
The opposite is the case: Java designers solved it so well that you don't even recognize it: you don't need a first-class Mutex object, just the synchronized modifier.
If you have a special case where you want to juggle your mutexes in a non-nesting fashion, there's always ReentrantLock, and java.util.concurrent offers a cornucopia of synchronization tools that go way beyond the crude mutex.
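For reference, a minimal ReentrantLock sketch (names are illustrative):
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private long value;

    public long increment() {
        lock.lock();       // unlike synchronized, lock() and unlock() need not nest lexically
        try {
            return ++value;
        } finally {
            lock.unlock(); // always release in finally
        }
    }
}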
In Java, each object can be used as a mutex. Such objects are typically named "lock" or "mutex". You can create that object yourself, which is the preferred variant because it avoids external access to the lock:
// usually a field in the class
private final Object mutex = new Object();

// later in methods
synchronized (mutex) {
    // mutually exclusive section for all code that synchronizes
    // on this mutex object
}
Faster still is to avoid the mutex entirely, by thinking about what happens if another thread reads a stale value. In some situations this would produce wrong calculation results; in others it only results in a minimal delay (but is still faster than synchronizing). A detailed explanation is in the book Java Concurrency in Practice.
What is the fastest (best runtime) way?
That depends on many things. For example, ReentrantLock used to perform better under contention than using synchronized, but that changed when a new HotSpot version, optimizing synchronized locking, was released. So there's nothing inherent in any way of locking that favors one flavor of mutexes over the other (from a performance point of view) - in fact, the "best" solution can change with the data you're processing and the machine you're running on.
Also, why did the inventors of Java not solve this question for me?
They did - in several ways: synchronized, Locks, atomic variables, and a whole slew of other utilities in java.util.concurrent.
You can run micro benchmarks of each variant, like atomic, synchronized, locked. As others have pointed out, it depends a lot on the machine and number of threads in use. In my own experiments incrementing long integers, I found that with only one thread on a Xeon W3520, synchronized wins over atomic: Atomic/Sync/Lock: 8.4/6.2/21.8, in nanos per increment operation.
This is of course a borderline case, since there is never any contention. Of course, in that case we can also look at an unsynchronized single-threaded long increment, which comes out six times faster than atomic.
With 4 threads I get 21.8/40.2/57.3. Note that these are all increments across all threads, so we actually see a slowdown. It gets a bit better for locks with 64 threads: 22.2/45.1/45.9.
Another test on a 4-way/64T machine using Xeon E7-4820 yields for 1 thread: 9.1/7.8/29.1, 4 threads: 18.2/29.1/55.2 and 64 Threads: 53.7/402/420.
One more data point, this time a dual Xeon X5560, 1T: 6.6/5.8/17.8, 4T: 29.7/81.5/121, 64T: 31.2/73.4/71.6.
So, on a multi-socket machine, there is a heavy cache coherency tax.
You can use java.util.concurrent.locks.Lock in the same way as a mutex, or java.util.concurrent.Semaphore. But using the synchronized keyword is a better way :-)
Regards
Andrej