I'm working with a framework that requires a callback when sending a request. Each callback has to implement this interface. The methods in the callback are invoked asynchronously.
public interface ClientCallback<RESP extends Response>
{
    public void onSuccessResponse(RESP resp);
    public void onFailureResponse(FailureResponse failure);
    public void onError(Throwable e);
}
To write integration tests with TestNG, I wanted to have a blocking callback. So I used a CountDownLatch to synchronize between threads.
Is the AtomicReference really needed here or is a raw reference okay? I know that if I use a raw reference and a raw integer (instead of CountDownLatch), the code wouldn't work because visibility is not guaranteed. But since the CountDownLatch is already synchronized, I wasn't sure whether I needed the extra synchronization from AtomicReference.
Note: The Result class is immutable.
public class BlockingCallback<RESP extends Response> implements ClientCallback<RESP>
{
    private final AtomicReference<Result<RESP>> _result = new AtomicReference<Result<RESP>>();
    private final CountDownLatch _latch = new CountDownLatch(1);

    public void onSuccessResponse(RESP resp)
    {
        _result.set(new Result<RESP>(resp, null, null));
        _latch.countDown();
    }

    public void onFailureResponse(FailureResponse failure)
    {
        _result.set(new Result<RESP>(null, failure, null));
        _latch.countDown();
    }

    public void onError(Throwable e)
    {
        _result.set(new Result<RESP>(null, null, e));
        _latch.countDown();
    }

    // Blocks the test thread until one of the callback methods fires or the timeout expires.
    public Result<RESP> getResult(final long timeout, final TimeUnit unit) throws InterruptedException, TimeoutException
    {
        if (!_latch.await(timeout, unit))
        {
            throw new TimeoutException();
        }
        return _result.get();
    }
}
You don't need another synchronization object (AtomicReference) here. The point is that the variable is set before countDown() is called in one thread and read after await() returns in another thread. CountDownLatch already performs the thread synchronization and establishes the necessary memory barrier, so the write that happens before countDown() is guaranteed to be visible after await(). Because of this you don't even need to make that field volatile.
A good starting point is the javadoc (emphasis mine):
Memory consistency effects: Until the count reaches zero, actions in a thread prior to calling countDown() happen-before actions following a successful return from a corresponding await() in another thread.
Now there are two options:
either you never call the onXxx setter methods once the count is 0 (i.e. you only call one of the methods once) and you don't need any extra synchronization
or you may call the setter methods more than once and you do need extra synchronization
If you are in scenario 2, you need to make the variable at least volatile (no need for an AtomicReference in your example).
If you are in scenario 1, you need to decide how defensive you want to be:
to err on the safe side you can still use volatile
if you are happy that the calling code won't misuse the class, you can use a normal variable, but I would at least make it clear in the javadoc of the onXxx methods that only the first call is guaranteed to be visible
Finally, in scenario 1, you may want to enforce the fact that the setters can only be called once, in which case you would probably use an AtomicReference and its compareAndSet method to make sure that the reference was null beforehand and throw an exception otherwise.
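For example, a minimal sketch of that last option, reusing the _result and _latch fields from the question (the Result constructor is assumed to be the same as in the original code):

public void onSuccessResponse(RESP resp)
{
    // compareAndSet succeeds only while the reference is still null,
    // so a second completion attempt is detected and rejected
    if (!_result.compareAndSet(null, new Result<RESP>(resp, null, null)))
    {
        throw new IllegalStateException("Callback has already been completed");
    }
    _latch.countDown();
}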
Short answer is you don't need AtomicReference here. You'll need volatile though.
The reason is that you're only writing to and reading from the reference (Result) and not doing any composite operations like compareAndSet().
Reads and writes are atomic for reference variables and for most primitive variables (all types except long and double).
Reference: the Sun Java tutorial on atomic access:
https://docs.oracle.com/javase/tutorial/essential/concurrency/atomic.html
Then there is JLS (Java Language Specification)
Writes to and reads of references are always atomic, regardless of whether they are implemented as 32-bit or 64-bit values.
Java 8
http://docs.oracle.com/javase/specs/jls/se8/html/jls-17.html#jls-17.7
Java 7
http://docs.oracle.com/javase/specs/jls/se7/html/jls-17.html#jls-17.7
Java 6
http://docs.oracle.com/javase/specs/jls/se6/html/memory.html#17.7
Source : https://docs.oracle.com/javase/tutorial/essential/concurrency/atomic.html
Atomic actions cannot be interleaved, so they can be used without fear of thread interference. However, this does not eliminate all need to synchronize atomic actions, because memory consistency errors are still possible. Using volatile variables reduces the risk of memory consistency errors, because any write to a volatile variable establishes a happens-before relationship with subsequent reads of that same variable. This means that changes to a volatile variable are always visible to other threads. What's more, it also means that when a thread reads a volatile variable, it sees not just the latest change to the volatile, but also the side effects of the code that led up the change.
Since you have only a single write and a single read of the reference, and each is atomic, making the variable volatile will suffice.
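As a minimal sketch (reusing the Result, Response and FailureResponse types from the question), the volatile version of the callback would look like this:

public class BlockingCallback<RESP extends Response> implements ClientCallback<RESP>
{
    // volatile is enough here: the reference write is atomic by itself,
    // and the latch provides the happens-before edge to the reading thread
    private volatile Result<RESP> _result;
    private final CountDownLatch _latch = new CountDownLatch(1);

    public void onSuccessResponse(RESP resp)
    {
        _result = new Result<RESP>(resp, null, null);
        _latch.countDown();
    }

    public void onFailureResponse(FailureResponse failure)
    {
        _result = new Result<RESP>(null, failure, null);
        _latch.countDown();
    }

    public void onError(Throwable e)
    {
        _result = new Result<RESP>(null, null, e);
        _latch.countDown();
    }

    public Result<RESP> getResult(long timeout, TimeUnit unit) throws InterruptedException, TimeoutException
    {
        if (!_latch.await(timeout, unit))
        {
            throw new TimeoutException();
        }
        return _result;
    }
}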
Regarding the use of CountDownLatch: it's used to wait for n operations in other threads to complete. Since you have only one operation here, you could use a Condition instead of a CountDownLatch.
If you're interested in usage of AtomicReference, you can check Java Concurrency in Practice (Page 326), find the book below:
https://github.com/HackathonHackers/programming-ebooks/tree/master/Java
Or the same example used by @Binita Bharti in the following StackOverflow answer:
When to use AtomicReference in Java?
In order for an assignment to be visible across threads some sort of memory barrier must be crossed. This can be accomplished several different ways, depending on what exactly you're trying to do.
You can use a volatile field. Reads and writes to volatile fields are atomic and visible across threads.
You can use an AtomicReference. This is effectively the same as a volatile field, but it's a little more flexible (you can reassign and pass around references to the AtomicReference) and has a few extra operations, like compareAndSet().
You can use a CountDownLatch or similar synchronizer class, but you need to pay close attention to the memory invariants they offer. CountDownLatch, for instance, guarantees that all threads that await() will see everything that occurs in a thread that calls countDown() up to when countDown() is called.
You can use synchronized blocks. These are even more flexible, but require more care - both the write and the read must be synchronized, otherwise the write may not be seen.
You can use a thread-safe collection, such as a ConcurrentHashMap. Overkill if all you need is a cross-thread reference, but useful for storing structured data that multiple threads need to access.
This isn't intended to be a complete list of options, but hopefully you can see there are several ways to ensure a value becomes visible to other threads, and that AtomicReference is simply one of those mechanisms.
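For instance, here is a hedged sketch of the synchronized-block option from that list (the ValueHolder name and its fields are made up for illustration):

public class ValueHolder
{
    private final Object lock = new Object();
    private Object value;    // plain field, guarded by lock

    public void publish(Object v)
    {
        synchronized (lock)    // the write is synchronized...
        {
            value = v;
        }
    }

    public Object read()
    {
        synchronized (lock)    // ...and the read is synchronized on the same lock,
        {                      // otherwise the write might not be visible
            return value;
        }
    }
}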
Related
I am trying to wrap my head around thread safety in java (or in general). I have this class (which I hope complies with the definition of a POJO) which also needs to be compatible with JPA providers:
public class SomeClass {

    private Object timestampLock = new Object();

    // are "volatile"s necessary?
    private volatile java.sql.Timestamp timestamp;
    private volatile String timestampTimeZoneName;

    private volatile BigDecimal someValue;

    public ZonedDateTime getTimestamp() {
        // is synchronisation necessary here? is this the correct usage?
        synchronized (timestampLock) {
            return ZonedDateTime.ofInstant(timestamp.toInstant(), ZoneId.of(timestampTimeZoneName));
        }
    }

    public void setTimestamp(ZonedDateTime dateTime) {
        // is this the correct usage?
        synchronized (timestampLock) {
            this.timestamp = java.sql.Timestamp.from(dateTime.toInstant());
            this.timestampTimeZoneName = dateTime.getZone().getId();
        }
    }

    // is synchronisation required?
    public BigDecimal getSomeValue() {
        return someValue;
    }

    // is synchronisation required?
    public void setSomeValue(BigDecimal val) {
        someValue = val;
    }
}
As stated in the comments in the code, is it necessary to declare timestamp and timestampTimeZoneName as volatile, and are the synchronized blocks used as they should be? Or should I use only the synchronized blocks and not declare timestamp and timestampTimeZoneName as volatile? A timestamp's timestampTimeZoneName should never be erroneously matched with another timestamp's.
This link says:
Reads and writes are atomic for all variables declared volatile (including long and double variables)
Should I understand that accesses to someValue in this code through the setter/getter are thread safe thanks to volatile definitions? If so, is there a better (I do not know what "better" might mean here) way to accomplish this?
To determine if you need synchronized, try to imagine a place where you can have a context switch that would break your code.
In this case, if the context switch happens where I put the comment, then in getTimestamp() you're going to be reading a timestamp and a time zone name that come from two different updates.
Also, although assignments are atomic, the expression java.sql.Timestamp.from(dateTime.toInstant()) certainly isn't, so you can get a context switch in between dateTime.toInstant() and the call to from(). In short, you definitely need the synchronized blocks.
synchronized (timestampLock) {
    this.timestamp = java.sql.Timestamp.from(dateTime.toInstant());
    // CONTEXT SWITCH HERE
    this.timestampTimeZoneName = dateTime.getZone().getId();
}

synchronized (timestampLock) {
    return ZonedDateTime.ofInstant(timestamp.toInstant(), ZoneId.of(timestampTimeZoneName));
}
In terms of volatile, I'm pretty sure the declarations are required. You have to guarantee that each thread definitely gets the most up-to-date version of a variable.
That is the contract of volatile. And although it may already be covered by the synchronized block, so volatile is not strictly necessary here, it's good to write it anyway. If the synchronized block already does the job of volatile, the VM won't enforce the guarantee twice. This means volatile won't cost you anything extra, and it's a very good flashing light that says to the programmer: "I'M USED IN MULTIPLE THREADS".
For someValue: if there's no synchronized block here, then volatile is definitely necessary. If you call the setter in one thread, the other thread has no cue that the value may have been updated outside of it, so it may use an old, cached value. The JIT can do a lot of funny optimizations if it assumes a single thread, ones that can simply break your program.
Now I'm not entirely certain if synchronized is required here. My guess is no. I would add it anyway to be safe, though. Or you can let Java worry about the synchronization and use something like http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/atomic/AtomicInteger.html
Nothing new here, this is just a more explicit version of something @Cruncher already said:
You need synchronized whenever it is important for two or more fields in your program to be consistent with one another. Suppose you have two parallel lists, and your code depends on them both being the same length. That's called an invariant, as in: the two lists are invariably the same length.
How can you write a method, append(x,y), that adds a new pair of values to the lists without temporarily breaking the invariant? You can't. The method must add one item to the first list, breaking the invariant, and then add the other item to the second list, fixing it again. There's no other way.
In a single-threaded program, that temporary broken state is no problem because no other method can possibly use the lists while append(x,y) is running. That's no longer true in a multithreaded program. In the worst case, append(x,y) could add x to the x list, and then the scheduler could suspend the thread at that exact moment to allow other threads to run. The CPUs could execute millions of instructions before append(x,y) gets to finish the job and make the lists right again. During all of that time, other threads would see the broken invariant, and possibly corrupt your data or crash the program as a result.
The fix is for append(x,y) to be synchronized on some object, and (this is the important part), for every other method that uses the lists to be synchronized on the same object. Since only one thread can be synchronized on a given object at a given time, it will not be possible for any other thread to see the lists in an inconsistent state.
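Here is a hedged sketch of that idea (the PairList name and its fields are made up for illustration):

import java.util.ArrayList;
import java.util.List;

public class PairList
{
    private final List<String> xs = new ArrayList<String>();
    private final List<String> ys = new ArrayList<String>();

    // Invariant: xs and ys always have the same length.
    public synchronized void append(String x, String y)
    {
        xs.add(x);    // invariant briefly broken here...
        ys.add(y);    // ...and restored here, all while holding the same lock
    }

    // Readers synchronize on the same object, so they can never
    // observe the broken intermediate state.
    public synchronized int size()
    {
        return xs.size();
    }
}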
So, if thread A calls append(x,y), and thread B tries to look at the lists "at the same time", will thread B see what the lists looked like before or after thread A did its work? That's called a data race. And with only the synchronization that I have described so far, there's no way to know which thread will win. All we've done so far is to guarantee one particular invariant.
If it matters which thread wins the race, then that means that there is some higher-level invariant that also needs protection. You will have to add more synchronization to protect that one too. "Thread safety" -- two little words to name a subject that is both broad and deep.
Good Luck, and Have Fun!
// is synchronisation required?
public BigDecimal getSomeValue() {
    return someValue;
}

// is synchronisation required?
public void setSomeValue(BigDecimal val) {
    someValue = val;
}
I think yes, you are required to put in the synchronization block. Consider an example in which one thread is setting the value and at the same time another thread is trying to read it from the getter method; both accesses have to go through a synchronized block (see the sketch below). So if your variable is read and written from multiple threads through these methods, you do need the synchronization block.
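A minimal sketch of what that looks like for the someValue field from the question (the lock name is illustrative):

private final Object someValueLock = new Object();
private BigDecimal someValue;

public BigDecimal getSomeValue() {
    synchronized (someValueLock) {    // the reader takes the same lock...
        return someValue;
    }
}

public void setSomeValue(BigDecimal val) {
    synchronized (someValueLock) {    // ...as the writer, so the update is guaranteed to be visible
        someValue = val;
    }
}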
I could find the answer if I read a complete chapter/book about multithreading, but I'd like a quicker answer. (I know this stackoverflow question is similar, but not sufficiently.)
Assume there is this class:
public class TestClass {
    private int someValue;

    public int getSomeValue() { return someValue; }
    public void setSomeValue(int value) { someValue = value; }
}
There are two threads (A and B) that access the instance of this class. Consider the following sequence:
A: getSomeValue()
B: setSomeValue()
A: getSomeValue()
If I'm right, someValue must be volatile, otherwise the 3rd step might not return the up-to-date value (because A may have a cached value). Is this correct?
Second scenario:
B: setSomeValue()
A: getSomeValue()
In this case, A will always get the correct value, because this is its first access, so it can't have a cached value yet. Is this right?
If a class is accessed only in the second way, there is no need for volatile/synchronization, or is it?
Note that this example was simplified, and actually I'm wondering about particular member variables and methods in a complex class, and not about whole classes (i.e. which variables should be volatile or have synced access). The main point is: if more threads access certain data, is synchronized access needed by all means, or does it depend on the way (e.g. order) they access it?
After reading the comments, I try to present the source of my confusion with another example:
From UI thread: threadA.start()
threadA calls getSomeValue(), and informs the UI thread
UI thread gets the message (in its message queue), so it calls: threadB.start()
threadB calls setSomeValue(), and informs the UI thread
UI thread gets the message, and informs threadA (in some way, e.g. message queue)
threadA calls getSomeValue()
This is a totally synchronized structure, but why does this imply that threadA will get the most up-to-date value in step 6? (if someValue is not volatile, or not put into a monitor when accessed from anywhere)
If two threads are calling the same methods, you can't make any guarantees about the order that said methods are called. Consequently, your original premise, which depends on calling order, is invalid.
It's not about the order in which the methods are called; it's about synchronization. It's about using some mechanism to make one thread wait while the other fully completes its write operation. Once you've made the decision to have more than one thread, you must provide that synchronization mechanism to avoid data corruption.
As we all know, it is the critical state of the data that we need to protect, and the atomic statements that govern that state must be synchronized.
I had an example where I used volatile, with 2 threads that each incremented a counter by 1 until they reached 10000. So the total should have been 20000, but to my surprise that wasn't always the case.
Then I used the synchronized keyword to make it work; see the sketch below.
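Roughly, the experiment looked like this sketch (the names are illustrative): with plain or volatile increments the final count can be below 20000 because counter++ is a read-modify-write, while the synchronized version always ends at 20000.

public class CounterTest {
    private volatile int counter = 0;                          // volatile alone is NOT enough for ++

    private void unsafeIncrement() { counter++; }              // read-modify-write: updates can be lost

    private synchronized void safeIncrement() { counter++; }   // mutual exclusion + visibility

    public static void main(String[] args) throws InterruptedException {
        final CounterTest test = new CounterTest();
        Runnable task = new Runnable() {
            public void run() {
                for (int i = 0; i < 10000; i++) {
                    test.safeIncrement();                      // swap in unsafeIncrement() to see lost updates
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(test.counter);                      // always 20000 with safeIncrement()
    }
}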
Synchronization makes sure that when a thread is accessing the synchronized method, no other thread is allowed to access this or any other synchronized method of that object, making sure that data corruption is not done.
A thread-safe class is one that maintains its correctness in the presence of the scheduling and interleaving of the underlying runtime environment, without any additional thread-safety mechanism on the part of the client code that accesses it.
Let's look at the book.
A field may be declared volatile, in which case the Java memory model (§17) ensures that all threads see a consistent value for the variable.
So volatile is a guarantee that the declared variable won't be copied into thread local storage, which is otherwise allowed. It's further explained that this is an intentional alternative to locking for very simple kinds of synchronized access to shared storage.
Also see this earlier article, which explains that int access is necessarily atomic (but not double or long).
These together mean that if your int field is declared volatile then no locks are necessary to guarantee atomicity: you will always see a value that was last written to the memory location, not some confused value resulting from a half-complete write (as is possible with double or long).
However, you seem to imply that your getters and setters themselves are atomic. This is not guaranteed. The JVM can interrupt execution at intermediate points during the call or return sequence. In this example, that has no consequences. But if the calls had side effects, e.g. setSomeValue(++val), then you would have a different story.
The issue is that Java is simply a specification. There are many JVM implementations and examples of physical operating environments. On any given combination an action may be safe or unsafe. For instance, on single-processor systems the volatile keyword in your example is probably completely unnecessary. Since the writers of the memory and language specifications can't reasonably account for all possible sets of operating conditions, they choose to white-list certain patterns that are guaranteed to work on all compliant implementations. Adhering to these guidelines ensures both that your code will work on your target system and that it will be reasonably portable.
In this case "caching" typically refers to activity that is going on at the hardware level. There are certain events in Java that cause cores on a multi-processor system to "synchronize" their caches. Accesses to volatile variables are an example of this; synchronized blocks are another. Imagine a scenario where the two threads X and Y are scheduled to run on different processors.
X starts and is scheduled on proc 1
Y starts and is scheduled on proc 2
// now you have two threads executing simultaneously;
// to speed things up, the processors check local caches
// before going to main memory, because that is expensive
X calls setSomeValue('x-value')  // assuming proc 1's cache is empty, the cache is set;
                                 // this value is dropped on the bus to be flushed
                                 // to main memory;
                                 // now all gets will retrieve from cache instead
                                 // of engaging the memory bus to go to main memory
Y calls setSomeValue('y-value')  // same thing happens for proc 2
// Now, depending on the order in which things are scheduled and
// which thread you are calling from, calls to getSomeValue() may return 'x-value' or
// 'y-value'. The results are completely unpredictable.
The point is that volatile(on compliant implementations) ensures that ordered writes will always be flushed to main memory and that other processor's caches will be flagged as 'dirty' before the next access regardless of the thread from which that access occurs.
disclaimer: volatile DOES NOT LOCK. This is important especially in the following case:
volatile int counter;

public void incrementSomeValue() {
    counter++; // Bad thread juju - this is at least three instructions:
               // read - increment - write.
               // There is no guarantee that this operation is atomic.
}
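If you need an atomic increment without a lock, a hedged sketch using AtomicInteger (from java.util.concurrent.atomic) would be:

private final AtomicInteger counter = new AtomicInteger();

public void incrementSomeValue() {
    counter.incrementAndGet();   // one atomic read-modify-write, no lock, no lost updates
}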
This could be relevant to your question if your intent is that setSomeValue must always be called before getSomeValue.
If the intent is that getSomeValue() must always reflect the most recent call to setSomeValue(), then this is a good place for the volatile keyword. Just remember that without it there is no guarantee that getSomeValue() will reflect the most recent call to setSomeValue(), even if setSomeValue() was scheduled first.
If I'm right, someValue must be volatile, otherwise the 3rd step might not return the up-to-date value (because A may have a cached value). Is this correct?
If thread B calls setSomeValue(), you need some sort of synchronization to ensure that thread A can read that value. volatile won't accomplish this on its own, and neither will making the methods synchronized. The code that does this is ultimately whatever synchronization code you added that made sure that A: getSomeValue() happens after B: setSomeValue(). If, as you suggest, you used a message queue to synchronize threads, this works because the memory changes made by the writing thread become visible to the reading thread once the reading thread acquires the lock on your message queue.
If a class is accessed only in the second way, there is no need for volatile/synchronization, or is it?
If you are really doing your own synchronization then it doesn't sound like you care whether these classes are thread-safe. Be sure that you aren't accessing them from more than one thread at the same time, though; otherwise, any methods that aren't atomic (assigning an int is) may leave you in an unpredictable state. One common pattern is to put the shared state into an immutable object so that you are sure that the receiving thread isn't calling any setters.
If you do have a class that you want to be updated and read from multiple threads, I'd probably do the simplest thing to start, which is often to synchronize all public methods. If you really believe this to be a bottleneck, you could look into some of the more complex locking mechanisms in Java.
So what does volatile guarantee?
For the exact semantics, you might have to go read tutorials, but one way to summarize it is that 1) any memory changes made by the last thread to access the volatile will be visible to the current thread accessing the volatile, and 2) that accessing the volatile is atomic (it won't be a partially constructed object, or a partially assigned double or long).
Synchronized blocks have analogous properties: 1) any memory changes made by the last thread to access the lock will be visible to this thread, and 2) the changes made within the block are performed atomically with respect to other blocks synchronized on the same lock.
(1) means any memory changes, not just changes to the volatile (we're talking post JDK 1.5) or within the synchronized block. This is what people mean when they refer to ordering, and this is accomplished in different ways on different chip architectures, often by using memory barriers.
Also, in the case of synchronized blocks, (2) only guarantees that you won't see inconsistent values if you are within another block synchronized on the same lock. It's usually a good idea to synchronize all access to shared variables, unless you really know what you are doing.
Consider this class, with no instance variables and only methods that are not synchronized. Can we infer from this information that this class is thread-safe?
public class test {
    public void test1() {
        // do something
    }

    public void test2() {
        // do something
    }

    public void test3() {
        // do something
    }
}
It depends entirely on what state the methods mutate. If they mutate no shared state, they're thread safe. If they mutate only local state, they're thread-safe. If they only call methods that are thread-safe, they're thread-safe.
Not being thread safe means that if multiple threads try to access the object at the same time, something might change from one access to the next, and cause issues. Consider the following:
int incrementCount() {
    this.count++;
    // ... Do some other stuff
    return this.count;
}
would not be thread safe. Why is it not? Imagine thread 1 accesses it, count is increased, then some processing occurs. While going through the function, another thread accesses it, increasing count again. The first thread, which had it go from, say, 1 to 2, would now have it go from 1 to 3 when it returns. Thread 2 would see it go from 1 to 3 as well, so what happened to 2?
In this case, you would want something like this (keeping in mind that this isn't any language-specific code, but closest to Java, one of only 2 I've done threading in)
synchronized int incrementCount() {
    this.count++;
    // ... Do some other stuff
    return this.count;
}
The synchronized keyword here would make sure that as long as one thread is accessing it, no other threads could. This would mean that thread 1 hits it, count goes from 1 to 2, as expected. Thread 2 hits it while 1 is processing and has to wait until thread 1 is done. When it's done, thread 1 gets a return of 2, then thread 2 goes through and gets the expected 3.
Now, an example, similar to what you have there, that would be entirely thread-safe, no matter what:
int incrementCount(int count) {
    count++;
    // ... Do some other stuff
    return count;
}
As the only variables being touched here are fully local to the function, there is no case where two threads accessing it at the same time could try working with data changed from the other. This would make it thread safe.
So, to answer the question, assuming that the functions don't modify anything outside of the specific called function, then yes, the class could be deemed to be thread-safe.
Consider the following quote from an article about thread safety ("Java theory and practice: Characterizing thread safety"):
In reality, any definition of thread safety is going to have a certain degree of circularity, as it must appeal to the class's specification -- which is an informal, prose description of what the class does, its side effects, which states are valid or invalid, invariants, preconditions, postconditions, and so on. (Constraints on an object's state imposed by the specification apply only to the externally visible state -- that which can be observed by calling its public methods and accessing its public fields -- rather than its internal state, which is what is actually represented in its private fields.)
Thread safety
For a class to be thread-safe, it first must behave correctly in a single-threaded environment. If a class is correctly implemented, which is another way of saying that it conforms to its specification, no sequence of operations (reads or writes of public fields and calls to public methods) on objects of that class should be able to put the object into an invalid state, observe the object to be in an invalid state, or violate any of the class's invariants, preconditions, or postconditions.
Furthermore, for a class to be thread-safe, it must continue to behave correctly, in the sense described above, when accessed from multiple threads, regardless of the scheduling or interleaving of the execution of those threads by the runtime environment, without any additional synchronization on the part of the calling code. The effect is that operations on a thread-safe object will appear to all threads to occur in a fixed, globally consistent order.
So your class itself is thread-safe, as long as it doesn't have any side effects. As soon as the methods mutate any external objects (e.g. some singletons, as already mentioned by others) it's not any longer thread-safe.
Depends on what happens inside those methods. If they manipulate / call any method parameters or global variables / singletons which are not themselves thread safe, the class is not thread safe either.
(The methods shown here take no parameters and their bodies are only placeholders, so this is obviously a sketch rather than full working code.)
Yes, as long as there are no instance variables. Method calls using only input parameters and local variables are inherently thread-safe. You might consider making the methods static too, to reflect this.
If it has no mutable state - it's thread safe. If you have no state - you're thread safe by association.
No, I don't think so.
For example, one of the methods could obtain a (non-thread-safe) singleton object from another class and mutate that object.
Yes - this class is thread safe but this does not mean that your application is.
An application is thread safe if the threads in it cannot concurrently access heap state. All objects in Java (and therefore all of their fields) are created on the heap. So, if there are no fields in an object then it is thread safe.
In any practical application, objects will have state. If you can guarantee that these objects are not accessed concurrently then you have a thread safe application.
There are ways of optimizing access to shared state, e.g. atomic variables or careful use of the volatile keyword, but I think this is going beyond what you've asked.
I hope this helps.
From Sun's tutorial:
Synchronized methods enable a simple strategy for preventing thread interference and memory consistency errors: if an object is visible to more than one thread, all reads or writes to that object's variables are done through synchronized methods. (An important exception: final fields, which cannot be modified after the object is constructed, can be safely read through non-synchronized methods, once the object is constructed) This strategy is effective, but can present problems with liveness, as we'll see later in this lesson.
Q1. Is the above statements mean that if an object of a class is going to be shared among multiple threads, then all instance methods of that class (except getters of final fields) should be made synchronized, since instance methods process instance variables?
In order to understand concurrency in Java, I recommend the invaluable Java Concurrency in Practice.
In response to your specific question, although synchronizing all methods is a quick-and-dirty way to accomplish thread safety, it does not scale well at all. Consider the much maligned Vector class. Every method is synchronized, and it works terribly, because iteration is still not thread safe.
No. It means that synchronized methods are a way to achieve thread safety, but they're not the only way and, by themselves, they don't guarantee complete safety in all situations.
Not necessarily. You can, for example, synchronize (i.e. place a lock on a dedicated object) only the part of the method where you access the object's variables; see the sketch below. In other cases, you may delegate the job to some inner object(s) that already handle the synchronization issues.
There are lots of choices; it all depends on the algorithm you're implementing, although the synchronized keyword is usually the simplest one.
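A hedged sketch of that first option, synchronizing only the part of a method that touches shared state, using a dedicated lock object (all names are made up):

public class Worker {
    private final Object stateLock = new Object();
    private int processed;   // shared state, guarded by stateLock

    public void handle(String input) {
        String normalized = input.trim().toLowerCase();   // thread-confined work: no lock needed

        synchronized (stateLock) {                        // lock held only for the shared-state update
            processed++;
        }

        System.out.println("handled " + normalized);      // again outside the lock
    }
}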
edit
There is no comprehensive tutorial on that, each situation is unique. Learning it is like learning a foreign language: never ends :)
But there are certainly helpful resources. In particular, there is a series of interesting articles on Heinz Kabutz's website.
http://www.javaspecialists.eu/archive/Issue152.html
(see the full list on the page)
If other people have any links I'd be interested to see also. I find the whole topic to be quite confusing (and, probably, most difficult part of core java), especially since new concurrency mechanisms were introduced in java 5.
Have fun!
In the most general form yes.
Immutable objects need not be synchronized.
Also, you can use individual monitors/locks for the mutable instance variables (or groups thereof), which will help with liveness. You can also synchronize only the portions where data is changed, rather than the entire method.
synchronized methodName vs synchronized( object )
That's correct, and it is one alternative. I think it would be more efficient to synchronize access to that object only where needed, instead of synchronizing all of its methods.
While the difference may be subtle, it can matter if you also use that same object from a single thread,
i.e. ( using the synchronized keyword on the method ):
class SomeClass {
    private int clickCount = 0;

    public synchronized void click() {
        clickCount++;
    }
}
When a class is defined like this, only one thread at a time may invoke the click method.
What happens if this method is invoked too frequently in a single threaded app? You'll spend some extra time checking if that thread can get the object lock when it is not needed.
class Main {
    public static void main( String [] args ) {
        SomeClass someObject = new SomeClass();
        for( int i = 0 ; i < Integer.MAX_VALUE ; i++ ) {
            someObject.click();
        }
    }
}
In this case, the check to see if the thread can lock the object will be invoked unnecessarily Integer.MAX_VALUE ( 2 147 483 647 ) times.
So removing the synchronized keyword in this situation will run much faster.
So, how would you do that in a multithread application?
You just synchronize the object:
synchronized ( someObject ) {
    someObject.click();
}
Vector vs ArrayList
As an additional note, this usage ( synchronized methodName vs. synchronized( object ) ) is, by the way, one of the reasons why java.util.Vector has now been replaced by java.util.ArrayList: many of the Vector methods are synchronized.
Most of the time a list is used in a single-threaded app or piece of code ( i.e. code inside jsps/servlets is executed in a single thread ), and the extra synchronization of Vector doesn't help performance.
Same goes for Hashtable being replaced by HashMap
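If you do need a synchronized list in the rare multithreaded case, the usual approach today is to wrap an ArrayList rather than reach for Vector; a small sketch:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SynchronizedListExample {
    public static void main(String[] args) {
        List<String> names = Collections.synchronizedList(new ArrayList<String>());
        names.add("a");                // individual calls are synchronized by the wrapper

        synchronized (names) {         // iteration still needs an explicit lock on the list itself
            for (String name : names) {
                System.out.println(name);
            }
        }
    }
}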
In fact the getters should be synchronized too, or the fields made volatile. That is because when you get some value, you're probably interested in the most recent version of the value. You see, synchronized block semantics provide not only atomicity of execution (i.e. they guarantee that only one thread executes the block at a time), but also visibility. This means that when a thread enters a synchronized block it invalidates its local cache, and when it exits it flushes any variables that have been modified back to main memory. volatile variables have the same visibility semantics.
No. Even getters have to be synchronized, except when they access only final fields. The reason is that, for example, when accessing a long value, there is a tiny chance that another thread is currently writing it, and you read it while just the first 4 bytes have been written while the other 4 bytes still hold the old value.
Yes, that's correct. All methods that modify data or access data that may be modified by a different thread need to be synchronized on the same monitor.
The easy way is to mark the methods as synchronized. If these are long-running methods, you may want to synchronize only the parts that do the reading/writing. In that case you would define the monitor yourself, along with wait() and notify(); see the sketch below.
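A rough sketch of that hand-rolled monitor approach (the names are illustrative; in real code you would usually prefer the java.util.concurrent classes):

public class ValueSlot {
    private final Object monitor = new Object();
    private Object value;              // guarded by monitor

    public void put(Object v) {
        synchronized (monitor) {
            value = v;
            monitor.notifyAll();       // wake up any waiting readers
        }
    }

    public Object take() throws InterruptedException {
        synchronized (monitor) {
            while (value == null) {    // loop guards against spurious wakeups
                monitor.wait();
            }
            return value;
        }
    }
}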
The simple answer is yes.
If an object of the class is going to be shared by multiple threads, you need to synchronize the getters and setters to prevent data inconsistency.
If all the threads have their own separate copy of the object, then there is no need to synchronize the methods. If your instance methods do more than merely set and get, you must also consider the risk of threads waiting for a long-running getter/setter to finish.
You could use synchronized methods, synchronized blocks, concurrency tools such as Semaphore or if you really want to get down and dirty you could use Atomic References. Other options include declaring member variables as volatile and using classes like AtomicInteger instead of Integer.
It all depends on the situation, but there are a wide range of concurrency tools available - these are just some of them.
Synchronization can result in hold-wait deadlock where two threads each have the lock of an object, and are trying to acquire the lock of the other thread's object.
Synchronization must also be applied consistently across a class, and an easy mistake to make is to forget to synchronize a method. When a thread holds the lock for an object, other threads can still enter the non-synchronized methods of that object.
In the class below, is the method getIt() thread safe and why?
public class X {
    private long myVar;

    public void setIt(long var) {
        myVar = var;
    }

    public long getIt() {
        return myVar;
    }
}
It is not thread-safe. Non-volatile variables of type long and double in Java may be treated as two separate 32-bit variables. One thread could be writing and have written half the value when another thread reads both halves. In this situation, the reader would see a value that was never supposed to exist.
To make this thread-safe you can either declare myVar as volatile (Java 1.5 or later) or make both setIt and getIt synchronized.
Note that even if myVar was a 32-bit int you could still run into threading issues where one thread could be reading an out of date value that another thread has changed. This could occur because the value has been cached by the CPU. To resolve this, you again need to declare myVar as volatile (Java 1.5 or later) or make both setIt and getIt synchronized.
It's also worth noting that if you are using the result of getIt in a subsequent setIt call, e.g. x.setIt(x.getIt() * 2), then you probably want to synchronize across both calls:
synchronized(x)
{
    x.setIt(x.getIt() * 2);
}
Without the extra synchronization, another thread could change the value in between the getIt and setIt calls causing the other thread's value to be lost.
This is not thread-safe. Even if your platform guarantees atomic writes of long, the lack of synchronized makes it possible that one thread calls setIt() and even after this call has finished it is possible that another thread can call getIt() and this call could return the old value of myVar.
The synchronized keyword does more than an exclusive access of one thread to a block or a method. It also guarantees that the second thread is informed about a change of a variable.
So you either have to mark both methods as synchronized or mark the member myVar as volatile.
There's a very good explanation about synchronization here:
Atomic actions cannot be interleaved, so they can be used without fear of thread interference. However, this does not eliminate all need to synchronize atomic actions, because memory consistency errors are still possible. Using volatile variables reduces the risk of memory consistency errors, because any write to a volatile variable establishes a happens-before relationship with subsequent reads of that same variable. This means that changes to a volatile variable are always visible to other threads. What's more, it also means that when a thread reads a volatile variable, it sees not just the latest change to the volatile, but also the side effects of the code that led up the change.
No, it's not. At least, not on platforms that lack atomic 64-bit memory accesses.
Suppose that Thread A calls setIt, copies 32 bits into memory where the backing value is, and is then pre-empted before it can copy the other 32 bits.
Then Thread B calls getIt.
No it is not, because longs are not atomic in Java, so one thread could have written 32 bits of the long in the setIt method, and then getIt could read the value, and then setIt could write the other 32 bits.
So the end result is that getIt returns a value that was never valid.
It ought to be, and generally is, but is not guaranteed to be thread safe. There could be issues with different cores having different versions in CPU cache, or the store/retrieve not being atomic for all architectures. Use the AtomicLong class.
The getter is not thread safe because it’s not guarded by any mechanism that guarantees the most up-to-date visibility. Your choices are:
making myVar final (but then you can’t mutate it)
making myVar volatile
using synchronized blocks when accessing myVar
AFAIK, modern JVMs no longer split long and double operations. I don't know of any reference which states this is still a problem. For example, see AtomicLong, which doesn't use synchronization in Sun's JVM.
Assuming you want to be sure it is not a problem, you can synchronize both get() and set(). However, if you are performing an operation like add, i.e. set(get()+1), then this synchronization doesn't buy you much; you still have to synchronize on the object for the whole operation. (A better way around this is to provide a single synchronized add(n) operation.)
However, a better solution is to use an AtomicLong. This supports atomic operations like get, set and add and DOESN'T use synchronization.
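A small sketch of what that looks like for the class from the question (AtomicLong lives in java.util.concurrent.atomic; the addToIt method is added only to show the atomic add mentioned above):

import java.util.concurrent.atomic.AtomicLong;

public class X {
    private final AtomicLong myVar = new AtomicLong();

    public void setIt(long var) {
        myVar.set(var);                  // atomic and immediately visible to other threads
    }

    public long getIt() {
        return myVar.get();              // never sees a half-written value
    }

    public long addToIt(long delta) {
        return myVar.addAndGet(delta);   // the whole read-modify-write is atomic
    }
}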
Since getIt() is a read-only method, you only need to synchronize the set method.
EDIT: I see why the get method needs to be synchronized as well. Good job explaining, Phil Ross.