synchronized vs ReadWriteLock - java

Hi, I have extremely simplified a Java problem in the following code snippet:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.PriorityBlockingQueue;

public class WhichJavaSynchroIsBestHere {

    private BlockingQueue<CustomObject> queue = new PriorityBlockingQueue<CustomObject>();

    public void add( CustomObject customO ) {
        // The custom objects never have the same id,
        // so it's no horizontal concurrency but more a vertical one a la producer/consumer
        if ( !queue.contains( customO ) ) {
            // Between the two statements a remove can happen
            queue.add( customO );
        }
    }

    public void remove( CustomObject customO ) {
        queue.remove( customO );
    }

    public static class CustomObject {
        long id;

        @Override
        public boolean equals( Object obj ) {
            if ( obj == null || getClass() != obj.getClass() )
                return false;
            CustomObject other = (CustomObject) obj;
            return id == other.id;
        }
    }
}
So this is more of a producer/consumer problem, because presumably two threads calling add never pass the same CustomObject (id), but it can happen that one thread calls add with the same object that a second thread is passing to remove. The section between the if condition and the add is what seems not thread-safe to me. I was thinking about a Lock object (rather than synchronized blocks) to secure that section, but is a ReadWriteLock better?

It would make no difference, as both sections would require a Write lock anyway.
The advantage of a ReadWriteLock is to easily allow multiple Readers that can work with shared access, and, yet, cooperate well with someone who requires exclusive access for Writing.
You could surround the contains code with a Read lock, and that would make sense if putting in potential duplicates is the bulk of your work. But if it's more a sanity check for a rare edge case, than a primary driver of your work (i.e. the test will pass the vast majority of the time), then there's no reason for the Read lock in this case. Just lock the whole section and be done with it.
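To make this concrete, here is a minimal sketch of what the ReadWriteLock variant would look like (the wrapper class name LockedQueue is hypothetical; CustomObject is the class from the question). Both operations end up behind the write lock, which is why it buys nothing here:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockedQueue {
    private final BlockingQueue<CustomObject> queue = new PriorityBlockingQueue<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public void add(CustomObject customO) {
        // contains + add must form one atomic unit, so the write lock has to
        // cover both; a read lock around contains alone would not close the race.
        lock.writeLock().lock();
        try {
            if (!queue.contains(customO)) {
                queue.add(customO);
            }
        } finally {
            lock.writeLock().unlock();
        }
    }

    public void remove(CustomObject customO) {
        lock.writeLock().lock();
        try {
            queue.remove(customO);
        } finally {
            lock.writeLock().unlock();
        }
    }
}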

The code section in between the if condition and the adding is what seems to me as not thread-safe
You are correct, it isn't thread-safe. Anytime you make multiple calls to an object, even a synchronized one, you need to worry about the state of the object changing between the calls if it is accessed by multiple threads. It's not just that an object might have been removed; a duplicate object could also have been added by a different producer, causing duplicates in the queue. The race would be:
thread #1 tests to see if object A is in the queue, it's not
thread #2 tests to see if object A is in the queue, it's not
thread #1 adds object A to the queue
thread #2 adds object A to the queue
The queue would then have 2 copies of A in it.
i was thinking about the object Lock (no synchronized blocks) to secure that section
If you are shying away from synchronized because of perceived performance problems, then don't. This is a perfect example of where using synchronized is appropriate. You could get rid of the BlockingQueue if you are doing all operations inside of a synchronized block.
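For illustration, a minimal sketch of the fully synchronized variant (the wrapper class name is hypothetical; CustomObject is the class from the question, and PriorityQueue, like the original PriorityBlockingQueue, expects its elements to be Comparable):

import java.util.PriorityQueue;
import java.util.Queue;

public class SynchronizedIdQueue {
    // A plain queue is enough once every access goes through the same monitor.
    private final Queue<CustomObject> queue = new PriorityQueue<>();

    public synchronized void add(CustomObject customO) {
        // contains + add now execute atomically with respect to remove()
        if (!queue.contains(customO)) {
            queue.add(customO);
        }
    }

    public synchronized void remove(CustomObject customO) {
        queue.remove(customO);
    }
}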
is a ReadWriteLock better ?
No, because in both cases the threads are "writing" to the queue. A removal modifies the queue as much as an add. The ReadWriteLock allows multiple reading threads when there are no writers, but gives exclusive access to a writing thread. The test of the queue counts as a read, but that's not going to save you much unless a large percentage of the candidates are duplicates already in the queue.
Also, be very careful of queue.contains(customO). Most queues (this includes PriorityBlockingQueue) run through all items in the queue to look for the one you may be adding (O(N)). This can be very expensive depending on how many items are in the collection.
This feels to me like a good place to use a ConcurrentSkipListSet. You just call queue.add(), which internally does a put-if-absent, and queue.pollFirst() to remove and return the first item. The collection then takes care of the memory synchronization and locking for you and resolves the race conditions.
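For illustration, a minimal sketch of that approach; SkipListTaskSet and the getId() accessor are assumptions (ConcurrentSkipListSet needs either Comparable elements or a Comparator, and the comparator should be consistent with equals):

import java.util.Comparator;
import java.util.concurrent.ConcurrentSkipListSet;

public class SkipListTaskSet {
    // Ordered by id; duplicates (same id) are rejected atomically by add().
    private final ConcurrentSkipListSet<CustomObject> queue =
            new ConcurrentSkipListSet<>(Comparator.comparingLong(CustomObject::getId));

    public boolean add(CustomObject customO) {
        return queue.add(customO);   // put-if-absent: false if an equal element is already present
    }

    public CustomObject removeFirst() {
        return queue.pollFirst();    // removes and returns the smallest element, or null if empty
    }
}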

Related

Is it thread-safe to synchronize only on add to HashSet?

Imagine having a main thread which creates a HashSet and starts a lot of worker threads, passing the HashSet to them.
Just like in code below:
void main() {
    final Set<String> set = new HashSet<>();
    final ExecutorService threadExecutor = Executors.newFixedThreadPool(10);
    threadExecutor.submit(() -> doJob(set));
}

void doJob(final Set<String> pSet) {
    // do some stuff
    final String x = ... // doesn't matter how we received the value.
    if (!pSet.contains(x)) {
        synchronized (pSet) {
            // double check to prevent multiple adds within different threads
            if (!pSet.contains(x)) {
                // do some exclusive work with x.
                pSet.add(x);
            }
        }
    }
    // do some stuff
}
I'm wondering: is it thread-safe to synchronize only on the add method? Are there any possible issues if contains is not synchronized?
My intuition tells me this is fine: after leaving the synchronized block, changes made to the set should be visible to all threads. But the JMM can be counter-intuitive sometimes.
P.S. I don't think it's a duplicate of How to lock multiple resources in java multithreading
Even though answers to both could be similar, this question addresses more particular case.
I'm wondering is it thread-safe to synchronize only on the add method? Are there any possible issues if contains is not synchronized as well?
Short answers: No and Yes.
There are two ways of explaining this:
The intuitive explanation
Java synchronization (in its various forms) guards against a number of things, including:
Two threads updating shared state at the same time.
One thread trying to read state while another is updating it.
Threads seeing stale values because memory caches have not been written to main memory.
In your example, synchronizing on add is sufficient to ensure that two threads cannot update the HashSet simultaneously, and that both calls will be operating on the most recent HashSet state.
However, if contains is not synchronized as well, a contains call could happen simultaneously with an add call. This could lead to the contains call seeing an intermediate state of the HashSet, leading to an incorrect result, or worse. This can also happen if the calls are not simultaneous, due to changes not being flushed to main memory immediately and/or the reading thread not reading from main memory.
The Memory Model explanation
The JLS specifies the Java Memory Model, which sets out the conditions that must be fulfilled by a multi-threaded application to guarantee that one thread sees the memory updates made by another. The model is expressed in mathematical language and is not easy to understand, but the gist is that visibility is guaranteed if and only if there is a chain of happens-before relationships from the write to a subsequent read. If the write and the read are in different threads, then synchronization between the threads is the primary source of these relationships. For example in
// thread one
synchronized (sharedLock) {
sharedVariable = 42;
}
// thread two
synchronized (sharedLock) {
other = sharedVariable;
}
Assuming that the thread one code is run before the thread two code, there is a happens-before relationship between thread one releasing the lock and thread two acquiring it. With this and the "program order" relations, we can build a chain from the write of 42 to the assignment to other. This is sufficient to guarantee that other will be assigned 42 (or possibly a later value of the variable) and NOT any value that was in sharedVariable before 42 was written to it.
Without the synchronized block synchronizing on the same lock, the second thread could see a stale value of sharedVariable; i.e. some value written to it before 42 was assigned to it.
That code is thread-safe for the synchronized (pSet) { } part:
if (!pSet.contains(x)) {
    synchronized (pSet) {
        // Here you are sure to have the updated value of pSet
        if (!pSet.contains(x)) {
            // do some exclusive work with x.
            pSet.add(x);
        }
    }
}
because inside the synchronized statement on the pSet object:
one and only one thread may be in this block,
and inside it, pSet also has its updated state, guaranteed by the happens-before relationship established by the synchronized keyword.
So whatever value the first if (!pSet.contains(x)) statement returned for a waiting thread, when that thread wakes up and enters the synchronized statement, it will see the last updated value of pSet. So even if the same element was added by a previous thread, the second if (!pSet.contains(x)) will return false.
But this code is not thread-safe for the first if (!pSet.contains(x)) statement, which could be executed during a write to the Set.
As a rule of thumb, a collection not designed to be thread-safe should not be used for concurrent reading and writing, because the internal state of the collection could be in an in-progress/inconsistent state for a reading operation that occurs while a writing operation is under way.
While some non-thread-safe collection implementations tolerate such usage in practice, there is no guarantee at all that this will always hold.
So you should use a thread-safe Set implementation to make the whole thing thread-safe.
For example with :
Set<String> pSet = ConcurrentHashMap.newKeySet();
That uses a ConcurrentHashMap under the hood, so there is no lock for reading and a minimal lock for writing (only on the entry to modify, not the whole structure).
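For illustration, a sketch of how the whole check-then-add collapses into a single call with such a set (class and method names are placeholders). Note one behavioural difference: here x becomes visible in the set before the "exclusive work" runs, whereas the original adds it afterwards:

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class Worker {
    private final Set<String> pSet = ConcurrentHashMap.newKeySet();

    void doJob(final String x) {
        // add() is atomic and returns true only for the thread that actually
        // inserted x, so it replaces both contains() calls and the lock.
        if (pSet.add(x)) {
            // do some exclusive work with x.
        }
    }
}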
No.
You don't know what state the HashSet might be in during an add by another thread. There might be fundamental changes in progress, like splitting of buckets, so that contains may return false during the add by another thread even if the element would be present in a single-threaded HashSet. In that case you would try to add the element a second time.
Even worse scenario: contains might get into an endless loop or throw an exception because of a temporarily invalid state of the HashSet in the memory shared by the two threads.

Java ArrayList's set thread-safety

I have a use case where multiple threads can be reading and modifying an ArrayList of Booleans whose default values are True.
The only modification the threads can make is setting an element of that ArrayList from True to False.
All of the threads will also be reading the ArrayList concurrently, but it is okay to read stale values.
Note:
The size of the ArrayList will not change throughout the lifetime of the ArrayList.
Question:
Is it necessary to synchronize the ArrayList across these threads? The only synchronization I'm doing is marking the ArrayList as volatile such that any update to it will be flushed back to the main memory from a thread's local memory. Is this enough?
Here is a sample code on how this ArrayList gets used by threads
myList is the ArrayList in question and its values are initialized to True
if (!myList.get(index)) {
    return;
} else {
    // do some operations to determine if we should
    // update the value of myList to false.
    if (needToUpdateList) {
        myList.set(index, false);
    }
}
Update
I previously said these threads do not care about stale values, which is true. However, I have another thread that only reads these values and performs one final action. This thread does care about staleness. Does the synchronization requirement change?
Is there a cheaper way to "publish" the updated values besides requiring synchronization on every update? I'm trying to minimize locking as much as possible.
As it says in the Javadoc of ArrayList:
Note that this implementation is not synchronized. If multiple threads access an ArrayList instance concurrently, and at least one of the threads modifies the list structurally, it must be synchronized externally.
You're not modifying the list structurally, so you don't need to synchronize for that reason.
The other reason you'd want to synchronize is to avoid reading stale values; but you say you don't care about that.
As such there is no reason to synchronize.
Edit for the update
If you do care about reading stale values, you do need to synchronize.
An alternative to synchronization which would avoid locking the entire list would be to make it a List<AtomicBoolean>: this would not require synchronization of the list, because you aren't changing the values stored in the list; but reads of an AtomicBoolean value guarantees visibility.
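A rough sketch of that List<AtomicBoolean> alternative (the class name, size handling, and method are illustrative):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

class Flags {
    // Structurally immutable after construction; only the AtomicBoolean
    // values change, so the list itself never needs a lock.
    private final List<AtomicBoolean> flags = new ArrayList<>();

    Flags(int size) {
        for (int i = 0; i < size; i++) {
            flags.add(new AtomicBoolean(true));
        }
    }

    void markFalse(int index, boolean needToUpdate) {
        if (!flags.get(index).get()) {
            return;                       // already false, nothing to do
        }
        if (needToUpdate) {
            flags.get(index).set(false);  // immediately visible to other threads
        }
    }
}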
It depends on what you want to do when an element is true. Consider your code, with a separate thread messing with the value you're looking at:
if (!myList.get(index)) { // <--- at this point, the value is true, so go to else branch
    return;
} else {
    // <--- at this point, the other thread sets the value to false.
    // do some operations to determine if we should
    // update the value of myList to false.
    // **** Do these operations assume the value is still true?
    // **** If so, then your application is not thread-safe.
    if (needToUpdateList) {
        myList.set(index, false);
    }
}
Update
I previously said these threads do not care about stale values, which is true. However, I have another thread that only reads these values and performs one final action. This thread does care about staleness. Does the synchronization requirement change?
You just invalidated a lot of perfectly good answers.
Yes, synchronization matters now. In fact probably atomicity matters too. Use a synchronized List or maybe even a Concurrent list or map of some sort.
Anytime you read-modify-write the list, you probably also have to hold the synchronized lock to preserve atomicity:
synchronized (myList) {
    if (!myList.get(index)) {
        return;
    } else {
        // do some operations to determine if we should
        // update the value of myList to false.
        if (needToUpdateList) {
            myList.set(index, false);
        }
    }
}
Edit: to reduce the time the lock is held, a lot depends on how long "do some operations" takes, but a ConcurrentHashMap reduces lock contention at the cost of some additional overhead. It might be worth profiling your code to determine the actual overhead and which method is faster/better.
ConcurrentHashMap<Integer, Boolean> myList = new ConcurrentHashMap<>();
// ...
if (myList.get(index) != null) return;
// "do some operations" here
if (needToUpdate)
    myList.put(index, false);
But I'm still not convinced that this isn't premature optimization. Write correct code first, fully synchronizing the list is probably fine. Then profile working code. If you do find a bottleneck, then you can worry about whether reducing lock contention is a good idea. But probably the code bottleneck won't be in the lock contention and will in fact be somewhere totally different.
I did some more googling and found that each thread might be keeping values in registers or a local cache. The specs only offer certain guarantees about when data will be written back to shared/global memory.
https://docs.oracle.com/javase/specs/jls/se7/html/jls-17.html#jls-17.4.5
basically volatile, synchronized, thread.start(), thread.join()...
So yeah, using an AtomicBoolean will probably be the easiest, but you can also synchronize or make a class with a volatile boolean in it.
check this link out:
http://tutorials.jenkov.com/java-concurrency/volatile.html#variable-visibility-problems

Can objects get lost if a LinkedList is add/remove fast by lots of threads?

Sounds like a silly question. I just started with Java concurrency.
I have a LinkedList that acts as a task queue and is accessed by multiple threads. They removeFirst() and execute it, while other threads put in more tasks (.add()). Tasks can have the thread put them back into the queue.
I notice that when there are a lot of tasks and they are put back into the queue a lot, the number of tasks I initially add to the queue is not what comes out; 1, or sometimes 2, are missing.
I checked everything and I synchronized every critical section + notifyAll().
I already marked the LinkedList as volatile.
Exact number is 384 tasks, each is put back 3072 times.
The problem doesn't occur if there is a small number of tasks & put-backs. Also, if I System.out.println() all the steps then it doesn't happen anymore, so I can't debug it.
Could it be possible that LinkedList.add() is not fast enough so the threads somehow miss it?
Simplified code:
public void callByAllThreads() {
    Task executedTask = null;
    do {
        // accessed by multiple threads
        synchronized (asyncQueue) {
            executedTask = asyncQueue.poll();
            if (executedTask == null) {
                inProcessCount.incrementAndGet(); // mark that there is some processing going on
            }
        }
        if (executedTask != null) {
            executedTask.callMethod(); // subclass of task can override this method
            synchronized (asyncQueue) {
                inProcessCount.decrementAndGet();
                asyncQueue.notifyAll();
            }
        }
    } while (executedTask != null);
}
The Task can override callMethod:
public void callMethodOverride() {
synchronized(getAsyncQueue()) {
getAsyncQueue().add(this);
getAsyncQueue().notifyAll();
}
}
From the docs for LinkedList:
Note that this implementation is not synchronized. If multiple threads access a linked list concurrently, and at least one of the threads modifies the list structurally, it must be synchronized externally.
i.e. you should synchronize access to the list. You say you are, but if you are seeing items get "lost" then you probably aren't synchronizing properly. Instead of trying to do that, you could use a framework class that does it for you ...
... If you are always removing the next available (first) item (effectively a producer/consumer implementation), then you could use a BlockingQueue implementation. This is guaranteed to be thread-safe, and has the advantage of blocking the consumer until an item is available. An example is ArrayBlockingQueue.
For non-blocking thread-safe queues you can look at ConcurrentLinkedQueue
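A minimal sketch of that producer/consumer shape using the ArrayBlockingQueue mentioned above (the class name, capacity, and the use of Runnable in place of the question's Task type are assumptions):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class TaskQueue {
    // The queue does all locking internally; no synchronized or notifyAll needed.
    private final BlockingQueue<Runnable> tasks = new ArrayBlockingQueue<>(1024); // capacity is illustrative

    // Producers (or tasks re-queueing themselves) call this.
    void submit(Runnable task) throws InterruptedException {
        tasks.put(task);                  // blocks if the queue is full
    }

    // Each worker thread runs this loop.
    void workerLoop() throws InterruptedException {
        while (true) {
            Runnable task = tasks.take(); // blocks until a task is available
            task.run();
        }
    }
}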
Marking the list instance variable volatile has nothing to do with your list being synchronized for mutation methods like add or removeFirst. volatile is simply to do with ensuring that read/write for that instance variable is communicated correctly between, and ordered correctly within, threads. Note I said that variable, not the contents of that variable (see the Java Tutorials > Atomic Access)
LinkedList is definitely not thread safe; you cannot use it safely with multiple threads. It's not a question of "fast enough," it's a question of changes made by one thread being visible to other threads. Marking it volatile doesn't help; that only affects references to the LinkedList being changed, not changes to the contents of the LinkedList.
Consider ConcurrentLinkedQueue or ConcurrentLinkedDeque.
LinkedList is not thread safe, so yes, multiple threads accessing it simultaneously will lead to problems. Synchronizing critical sections can solve this, but as you are still having problems you probably made a mistake somewhere. Try wrapping it in a Collections.synchronizedList() to synchronize all method calls.
LinkedList is not thread-safe; you can use ConcurrentLinkedQueue if it fits your needs, which it seems it possibly can.
As documentation says
An unbounded thread-safe queue based on linked nodes. This queue orders elements FIFO (first-in-first-out). The head of the queue is that element that has been on the queue the longest time. The tail of the queue is that element that has been on the queue the shortest time. New elements are inserted at the tail of the queue, and the queue retrieval operations obtain elements at the head of the queue. A ConcurrentLinkedQueue is an appropriate choice when many threads will share access to a common collection. This queue does not permit null elements.
You increment your inProcessCount when executedTask == null which is obviously the opposite of what you want to do. So it’s no wonder that it will have inconsistent values.
But there are other issues as well. You call notifyAll() at several places but as long as there is no one calling wait() that has no use.
Note further that if you access an integer variable consistently from inside synchronized blocks only throughout the code, there is no need to make it an AtomicInteger. On the other hand, if you use it, e.g. because it will be accessed at other places without additional synchronization, you can move the code updating the AtomicInteger outside the synchronized block.
Also, a method which calls a method like getAsyncQueue() three times looks suspicious to a reader. Just call it once and remember the result in a local variable; then everyone can be confident that it is the same reference on all three uses. Generally, you have to ensure that all code is using the same list, hence the appropriate modifier for the variable holding it is final, not volatile.

what to use in multithreaded environment; Vector or ArrayList

I have this situation:
a web application with circa 200 concurrent requests (threads) that need to log something to the local filesystem. I have one class to which all threads direct their calls, and that class internally stores the messages in one list (Vector or ArrayList) which in turn will be written to the filesystem.
The idea is to return from the thread's call ASAP so the thread can do its job as fast as possible; what the thread wanted to log can be written to the filesystem later, it is not so crucial.
So, that class in turn removes the first element from that list and writes it to the filesystem, while in real time there are 10 or 20 threads appending new logs at the end of that list.
I would like to use ArrayList since it is not synchronized and therefore the threads' calls will take less time. The question is:
am I risking deadlocks / data loss? Is it better to use Vector since it is thread safe? Is it slower to use Vector?
Actually both ArrayList and Vector are very bad choices here, not because of synchronization (which you would definitely need), but because removing the first element is O(n).
The perfect data structure for your purpose is the ConcurrentLinkedQueue: it offers both thread safety (without using synchronization) and O(1) adding and removing.
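For illustration, a rough sketch of such a log buffer built on ConcurrentLinkedQueue (all names are made up; the actual file-writing part is left out):

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

class AsyncLog {
    // Lock-free queue: add() and poll() are both thread-safe and O(1).
    private final Queue<String> messages = new ConcurrentLinkedQueue<>();

    // Called by the ~200 request threads; returns immediately.
    void log(String message) {
        messages.add(message);
    }

    // Called by the single writer thread.
    void flushToDisk() {
        String msg;
        while ((msg = messages.poll()) != null) {
            // write msg to the filesystem here
        }
    }
}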
Are you limited to a particular (old) Java version? If not, please consider using java.util.concurrent.LinkedBlockingQueue for this kind of stuff. It's really worth looking at the java.util.concurrent.* package when dealing with concurrency.
Vector is worse than useless. Don't use it even when using multithreading. A trivial example of why it's bad is to consider two threads simultaneously iterating and removing elements on the list at the same time. The methods size(), get(), remove() might all be synchronized but the iteration loop is not atomic so - kaboom. One thread is bound to try removing something which is not there, or skip elements because the size() changes.
Instead use synchronized() blocks where you expect two threads to access the same data.
private ArrayList myList;
void removeElement(Object e)
{
synchronized (myList) {
myList.remove(e);
}
}
Java 5 provides explicit Lock objects which allow more finegrained control, such as being able to attempt to timeout if a resource is not available in some time period.
private final Lock lock = new ReentrantLock();
private ArrayList myList;

void removeElement(Object e) throws InterruptedException {
    if (!lock.tryLock(1, TimeUnit.SECONDS)) {
        // Timeout
        throw new SomeException();
    }
    try {
        myList.remove(e);
    } finally {
        lock.unlock();
    }
}
There actually is a marginal difference in performance between a synchronizedList and a Vector. (http://www.javacodegeeks.com/2010/08/java-best-practices-vector-arraylist.html)

Avoid synchronized(this) in Java?

Whenever a question pops up on SO about Java synchronization, some people are very eager to point out that synchronized(this) should be avoided. Instead, they claim, a lock on a private reference is to be preferred.
Some of the given reasons are:
some evil code may steal your lock (very popular this one, also has an "accidentally" variant)
all synchronized methods within the same class use the exact same lock, which reduces throughput
you are (unnecessarily) exposing too much information
Other people, including me, argue that synchronized(this) is an idiom that is used a lot (also in Java libraries), is safe and well understood. It should not be avoided because you have a bug and you don't have a clue of what is going on in your multithreaded program. In other words: if it is applicable, then use it.
I am interested in seeing some real-world examples (no foobar stuff) where avoiding a lock on this is preferable when synchronized(this) would also do the job.
Therefore: should you always avoid synchronized(this) and replace it with a lock on a private reference?
Some further info (updated as answers are given):
we are talking about instance synchronization
both implicit (synchronized methods) and explicit form of synchronized(this) are considered
if you quote Bloch or other authorities on the subject, don't leave out the parts you don't like (e.g. Effective Java, item on Thread Safety: Typically it is the lock on the instance itself, but there are exceptions.)
if you need granularity in your locking other than synchronized(this) provides, then synchronized(this) is not applicable so that's not the issue
I'll cover each point separately.
Some evil code may steal your lock (very popular this one, also has an
"accidentally" variant)
I'm more worried about accidentally. What it amounts to is that this use of this is part of your class' exposed interface, and should be documented. Sometimes the ability of other code to use your lock is desired. This is true of things like Collections.synchronizedMap (see the javadoc).
All synchronized methods within the same class use the exact same
lock, which reduces throughput
This is overly simplistic thinking; just getting rid of synchronized(this) won't solve the problem. Proper synchronization for throughput will take more thought.
You are (unnecessarily) exposing too much information
This is a variant of #1. Use of synchronized(this) is part of your interface. If you don't want/need this exposed, don't do it.
Well, firstly it should be pointed out that:
public void blah() {
synchronized (this) {
// do stuff
}
}
is semantically equivalent to:
public synchronized void blah() {
// do stuff
}
which is one reason not to use synchronized(this). You might argue that you can do stuff around the synchronized(this) block. The usual reason is to try and avoid having to do the synchronized check at all, which leads to all sorts of concurrency problems, specifically the double checked-locking problem, which just goes to show how difficult it can be to make a relatively simple check threadsafe.
A private lock is a defensive mechanism, which is never a bad idea.
Also, as you alluded to, private locks can control granularity. One set of operations on an object might be totally unrelated to another but synchronized(this) will mutually exclude access to all of them.
synchronized(this) just really doesn't give you anything.
While you are using synchronized(this) you are using the class instance itself as the lock. This means that while the lock is held by thread 1, thread 2 must wait.
Suppose the following code:
public void method1() {
// do something ...
synchronized(this) {
a ++;
}
// ................
}
public void method2() {
// do something ...
synchronized(this) {
b ++;
}
// ................
}
Method 1 modifies the variable a and method 2 modifies the variable b; concurrent modification of the same variable by two threads must be avoided, and it is. BUT while thread1 is modifying a and thread2 is modifying b, this could be done without any race condition.
Unfortunately, the above code will not allow this, since we are using the same reference for the lock. This means that threads have to wait even when they are not in a race condition, and obviously the code sacrifices concurrency of the program.
The solution is to use 2 different locks for two different variables:
public class Test {

    private int a;
    private int b;

    private final Object lockA = new Object();
    private final Object lockB = new Object();

    public void method1() {
        // do something ...
        synchronized (lockA) {
            a++;
        }
        // ................
    }

    public void method2() {
        // do something ...
        synchronized (lockB) {
            b++;
        }
        // ................
    }
}
The above example uses more fine-grained locking (2 locks instead of one: lockA and lockB for variables a and b respectively) and as a result allows better concurrency; on the other hand, it is more complex than the first example...
While I agree about not adhering blindly to dogmatic rules, does the "lock stealing" scenario seem so eccentric to you? A thread could indeed acquire the lock on your object "externally" (synchronized(theObject) {...}), blocking other threads waiting on synchronized instance methods.
If you don't believe in malicious code, consider that this code could come from third parties (for instance if you develop some sort of application server).
The "accidental" version seems less likely, but as they say, "make something idiot-proof and someone will invent a better idiot".
So I agree with the it-depends-on-what-the-class-does school of thought.
Edit following eljenso's first 3 comments:
I've never experienced the lock stealing problem but here is an imaginary scenario:
Let's say your system is a servlet container, and the object we're considering is the ServletContext implementation. Its getAttribute method must be thread-safe, as context attributes are shared data; so you declare it as synchronized. Let's also imagine that you provide a public hosting service based on your container implementation.
I'm your customer and deploy my "good" servlet on your site. It happens that my code contains a call to getAttribute.
A hacker, disguised as another customer, deploys his malicious servlet on your site. It contains the following code in the init method:
synchronized (this.getServletConfig().getServletContext()) {
while (true) {}
}
Assuming we share the same servlet context (allowed by the spec as long as the two servlets are on the same virtual host), my call on getAttribute is locked forever. The hacker has achieved a DoS on my servlet.
This attack is not possible if getAttribute is synchronized on a private lock, because 3rd-party code cannot acquire this lock.
I admit that the example is contrived and an oversimplistic view of how a servlet container works, but IMHO it proves the point.
So I would make my design choice based on security consideration: will I have complete control over the code that has access to the instances? What would be the consequence of a thread's holding a lock on an instance indefinitely?
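To make that choice concrete, here is a hypothetical sketch of the private-lock version of such a getAttribute (this is stand-in code, not taken from any real servlet container):

import java.util.HashMap;
import java.util.Map;

class MyServletContext {
    private final Object attributeLock = new Object();   // invisible to deployed servlets
    private final Map<String, Object> attributes = new HashMap<>();

    public Object getAttribute(String name) {
        synchronized (attributeLock) {   // third-party code cannot block on this lock
            return attributes.get(name);
        }
    }

    public void setAttribute(String name, Object value) {
        synchronized (attributeLock) {
            attributes.put(name, value);
        }
    }
}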
It depends on the situation.
It depends on whether there is only one shared entity or more than one.
See full working example here
A small introduction.
Threads and shareable entities
It is possible for multiple threads to access the same entity, e.g. multiple connection threads sharing a single messageQueue. Since the threads run concurrently, one thread's data may be overwritten by another, leaving things in a messed-up state.
So we need some way to ensure that the shareable entity is accessed by only one thread at a time (CONCURRENCY).
Synchronized block
A synchronized() block is a way to ensure safe access to a shareable entity under concurrency.
First, a small analogy
Suppose there are two persons, P1 and P2 (threads), a washbasin (shareable entity) inside a washroom, and a door (lock).
Now we want one person to use the washbasin at a time.
The approach is that P1 locks the door; while the door is locked, P2 waits until P1 completes his work.
P1 unlocks the door;
only then can P2 use the washbasin.
syntax.
synchronized(this)
{
SHARED_ENTITY.....
}
"this" provided the intrinsic lock associated with the class (Java developer designed Object class in such a way that each object can work as monitor).
Above approach works fine when there are only one shared entity and multiple threads (1: N).
N shareable entities-M threads
Now think of a situation when there is two washbasin inside a washroom and only one door. If we are using the previous approach, only p1 can use one washbasin at a time while p2 will wait outside. It is wastage of resource as no one is using B2 (washbasin).
A wiser approach would be to create a smaller room inside washroom and provide them one door per washbasin. In this way, P1 can access B1 and P2 can access B2 and vice-versa.
Object lock1 = new Object();
Object lock2 = new Object();

synchronized (lock1) {
    // use washbasin1
}
synchronized (lock2) {
    // use washbasin2
}
See more on threads here.
There seems a different consensus in the C# and Java camps on this. The majority of Java code I have seen uses:
// apply mutex to this instance
synchronized(this) {
// do work here
}
whereas the majority of C# code opts for the arguably safer:
// instance level lock object
private readonly object _syncObj = new object();
...
// apply mutex to private instance level field (a System.Object usually)
lock(_syncObj)
{
// do work here
}
The C# idiom is certainly safer. As mentioned previously, no malicious / accidental access to the lock can be made from outside the instance. Java code has this risk too, but it seems that the Java community has gravitated over time to the slightly less safe, but slightly more terse version.
That's not meant as a dig against Java, just a reflection of my experience working on both languages.
Make your data immutable if possible (final variables).
If you can't avoid mutation of shared data across multiple threads, use high level programming constructs [e.g. granular Lock API ]
A Lock provides exclusive access to a shared resource: only one thread at a time can acquire the lock and all access to the shared resource requires that the lock be acquired first.
Sample code to use ReentrantLock which implements Lock interface
class X {
    private final ReentrantLock lock = new ReentrantLock();
    // ...

    public void m() {
        lock.lock(); // block until condition holds
        try {
            // ... method body
        } finally {
            lock.unlock();
        }
    }
}
Advantages of Lock over Synchronized(this)
The use of synchronized methods or statements forces all lock acquisition and release to occur in a block-structured way.
Lock implementations provide additional functionality over the use of synchronized methods and statements by providing
A non-blocking attempt to acquire a lock (tryLock())
An attempt to acquire the lock that can be interrupted (lockInterruptibly())
An attempt to acquire the lock that can timeout (tryLock(long, TimeUnit)).
A Lock class can also provide behavior and semantics that is quite different from that of the implicit monitor lock, such as
guaranteed ordering
non-reentrant usage
Deadlock detection
Have a look at this SE question regarding various type of Locks:
Synchronization vs Lock
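A small illustrative sketch of the non-blocking/timed acquisition mentioned above (class and method names are invented):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class Resource {
    private final ReentrantLock lock = new ReentrantLock();

    boolean tryUpdate() throws InterruptedException {
        // Give up after one second instead of blocking forever.
        if (!lock.tryLock(1, TimeUnit.SECONDS)) {
            return false;                // could not acquire the lock in time
        }
        try {
            // ... update shared state ...
            return true;
        } finally {
            lock.unlock();
        }
    }
}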
You can achieve thread safety by using the advanced concurrency API instead of synchronized blocks. This documentation page provides good programming constructs to achieve thread safety.
Lock Objects support locking idioms that simplify many concurrent applications.
Executors define a high-level API for launching and managing threads. Executor implementations provided by java.util.concurrent provide thread pool management suitable for large-scale applications.
Concurrent Collections make it easier to manage large collections of data, and can greatly reduce the need for synchronization.
Atomic Variables have features that minimize synchronization and help avoid memory consistency errors.
ThreadLocalRandom (in JDK 7) provides efficient generation of pseudorandom numbers from multiple threads.
Refer to java.util.concurrent and java.util.concurrent.atomic packages too for other programming constructs.
The java.util.concurrent package has vastly reduced the complexity of my thread safe code. I only have anecdotal evidence to go on, but most work I have seen with synchronized(x) appears to be re-implementing a Lock, Semaphore, or Latch, but using the lower-level monitors.
With this in mind, synchronizing using any of these mechanisms is analogous to synchronizing on an internal object, rather than leaking a lock. This is beneficial in that you have absolute certainty that you control the entry into the monitor by two or more threads.
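As a purely illustrative sketch of that observation: a wait-for-workers pattern that is often hand-rolled with synchronized/wait/notify can be expressed with a CountDownLatch (the class name and worker count are assumptions):

import java.util.concurrent.CountDownLatch;

class StartupCoordinator {
    private final CountDownLatch ready = new CountDownLatch(3); // e.g. 3 worker threads

    // Each worker calls this once its initialization is done.
    void workerReady() {
        ready.countDown();
    }

    // The main thread blocks here until all workers have counted down.
    void awaitAllWorkers() throws InterruptedException {
        ready.await();
    }
}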
If you've decided that:
the thing you need to do is lock on the current object; and
you want to lock it with granularity smaller than a whole method;
then I don't see any taboo over synchronized(this).
Some people deliberately use synchronized(this) (instead of marking the method synchronized) inside the whole contents of a method because they think it's "clearer to the reader" which object is actually being synchronized on. So long as people are making an informed choice (e.g. understand that by doing so they're actually inserting extra bytecodes into the method and this could have a knock-on effect on potential optimisations), I don't particularly see a problem with this. You should always document the concurrent behaviour of your program, so I don't see the "'synchronized' publishes the behaviour" argument as being so compelling.
As to the question of which object's lock you should use, I think there's nothing wrong with synchronizing on the current object if this would be expected by the logic of what you're doing and how your class would typically be used. For example, with a collection, the object that you would logically expect to lock is generally the collection itself.
I think there is a good explanation of why each of these is a vital technique under your belt in a book called Java Concurrency In Practice by Brian Goetz. He makes one point very clear - you must use the same lock "EVERYWHERE" to protect the state of your object. Synchronized methods and synchronizing on an object often go hand in hand. E.g. Vector synchronizes all its methods. If you have a handle to a Vector object and are going to do "put if absent", then Vector merely synchronizing its own individual methods isn't going to protect you from corruption of state. You need to synchronize using synchronized (vectorHandle). This will result in the SAME lock being acquired by every thread which has a handle to the vector and will protect the overall state of the vector. This is called client side locking.
We do know as a matter of fact that Vector does synchronized (this) / synchronizes all its methods, and hence synchronizing on the object vectorHandle will result in proper synchronization of the vector object's state. It's foolish to believe that you are thread safe just because you are using a thread safe collection. This is precisely the reason ConcurrentHashMap explicitly introduced the putIfAbsent method - to make such operations atomic.
In summary
Synchronising at method level allows client side locking.
If you have a private lock object - it makes client side locking impossible. This is fine if you know that your class doesn't have "put if absent" type of functionality.
If you are designing a library - then synchronising on this or synchronising the method is often wiser. Because you are rarely in a position to decide how your class is going to be used.
Had Vector used a private lock object - it would have been impossible to get "put if absent" right. The client code will never gain a handle to the private lock thus breaking the fundamental rule of using the EXACT SAME LOCK to protect its state.
Synchronising on this or synchronised methods do have a problem as others have pointed out - someone could get a lock and never release it. All other threads would keep waiting for the lock to be released.
So know what you are doing and adopt the one that's correct.
Someone argued that having a private lock object gives you better granularity - e.g. if two operations are unrelated, they could be guarded by different locks, resulting in better throughput. But this, I think, is a design smell and not a code smell: if two operations are completely unrelated, why are they part of the SAME class? Why should a class club unrelated functionalities together at all? Maybe a utility class? Hmmm - some util providing string manipulation and calendar date formatting through the same instance?? ... doesn't make any sense, to me at least!!
No, you shouldn't always. However, I tend to avoid it when there are multiple concerns on a particular object that only need to be threadsafe in respect to themselves. For example, you might have a mutable data object that has "label" and "parent" fields; these need to be threadsafe, but changing one need not block the other from being written/read. (In practice I would avoid this by declaring the fields volatile and/or using java.util.concurrent's AtomicFoo wrappers).
Synchronization in general is a bit clumsy, as it slaps a big lock down rather than thinking exactly how threads might be allowed to work around each other. Using synchronized(this) is even clumsier and anti-social, as it's saying "no-one may change anything on this class while I hold the lock". How often do you actually need to do that?
I would much rather have more granular locks; even if you do want to stop everything from changing (perhaps you're serialising the object), you can just acquire all of the locks to achieve the same thing, plus it's more explicit that way. When you use synchronized(this), it's not clear exactly why you're synchronizing, or what the side effects might be. If you use synchronized(labelMonitor), or even better labelLock.getWriteLock().lock(), it's clear what you are doing and what the effects of your critical section are limited to.
Short answer: You have to understand the difference and make choice depending on the code.
Long answer: In general I would rather try to avoid synchronized(this) to reduce contention, but private locks add complexity you have to be aware of. So use the right synchronization for the right job. If you are not so experienced with multi-threaded programming, I would rather stick to instance locking and read up on this topic. (That said: just using synchronized(this) does not automatically make your class fully thread-safe.) This is not an easy topic, but once you get used to it, the answer whether to use synchronized(this) or not comes naturally.
A lock is used for either visibility or for protecting some data from concurrent modification which may lead to race.
When you need to just make primitive type operations to be atomic there are available options like AtomicInteger and the likes.
But suppose you have two integers which are related to each other, like x and y coordinates, and which should be changed in an atomic manner. Then you would protect them using the same lock.
A lock should only protect the state that is related to each other. No less and no more. If you use synchronized(this) in each method then even if the state of the class is unrelated all the threads will face contention even if updating unrelated state.
class Point{
private int x;
private int y;
public Point(int x, int y){
this.x = x;
this.y = y;
}
//mutating methods should be guarded by same lock
public synchronized void changeCoordinates(int x, int y){
this.x = x;
this.y = y;
}
}
In the above example I have only one method which mutates both x and y, rather than two different methods, because x and y are related; if I had provided two different methods for mutating x and y separately, it would not have been thread-safe.
This example is just to demonstrate and not necessarily the way it should be implemented. The best way to do it would be to make it IMMUTABLE.
Now, in opposition to the Point example, there is the TwoCounters example already provided by @Andreas, where the state is protected by two different locks because the pieces of state are unrelated to each other.
The process of using different locks to protect unrelated states is called Lock Striping or Lock Splitting
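For illustration, a minimal sketch of lock striping, where a fixed array of monitors guards disjoint slices of state (the class, stripe count, and counter array are all invented for the example):

class StripedCounters {
    private static final int STRIPES = 16;
    private final Object[] locks = new Object[STRIPES];
    private final long[] counts = new long[STRIPES];

    StripedCounters() {
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new Object();
        }
    }

    void increment(int key) {
        int stripe = (key & 0x7fffffff) % STRIPES;   // map the key to its stripe
        synchronized (locks[stripe]) {               // contention only within one stripe
            counts[stripe]++;
        }
    }
}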
The reason not to synchronize on this is that sometimes you need more than one lock (the second lock often gets removed after some additional thinking, but you still need it in the intermediate state). If you lock on this, you always have to remember which one of the two locks is this; if you lock on a private Object, the variable name tells you that.
From the reader's viewpoint, if you see locking on this, you always have to answer the two questions:
what kind of access is protected by this?
is one lock really enough, didn't someone introduce a bug?
An example:
class BadObject {
    private Something mStuff;

    synchronized void setStuff(Something stuff) {
        mStuff = stuff;
    }

    synchronized Something getStuff() {
        return mStuff;
    }

    private MyListener myListener = new MyListener() {
        public void onMyEvent(...) {
            setStuff(...);
        }
    };

    synchronized void longOperation(MyListener l) {
        ...
        l.onMyEvent(...);
        ...
    }
}
If two threads begin longOperation() on two different instances of BadObject, they acquire
their locks; when it's time to invoke l.onMyEvent(...), we have a deadlock because neither of the threads may acquire the other object's lock.
In this example we may eliminate the deadlock by using two locks, one for short operations and one for long ones.
As already said here, a synchronized block can use a user-defined variable as the lock object, while a synchronized function can only use "this". And of course you can manipulate which areas of your function should be synchronized, and so on.
But everyone says there is no difference between a synchronized function and a block that covers the whole function using "this" as the lock object. That is not true: the difference is in the bytecode which will be generated in the two situations. In the synchronized-block case, a local variable has to be allocated which holds the reference to "this". As a result the function will be a little bit larger (not relevant if you have only a few functions).
More detailed explanation of the difference you can find here:
http://www.artima.com/insidejvm/ed2/threadsynchP.html
Also usage of synchronized block is not good due to following point of view:
The synchronized keyword is very limited in one area: when exiting a synchronized block, all threads that are waiting for that lock must be unblocked, but only one of those threads gets to take the lock; all the others see that the lock is taken and go back to the blocked state. That's not just a lot of wasted processing cycles: often the context switch to unblock a thread also involves paging memory off the disk, and that's very, very, expensive.
For more details in this area I would recommend you read this article:
http://java.dzone.com/articles/synchronized-considered
This is really just supplementary to the other answers, but if your main objection to using private objects for locking is that it clutters your class with fields that are not related to the business logic, then Project Lombok has @Synchronized to generate the boilerplate at compile-time:
@Synchronized
public int foo() {
    return 0;
}
compiles to
private final Object $lock = new Object[0];
public int foo() {
synchronized($lock) {
return 0;
}
}
A good example of using synchronized(this):
private final Set<IListener> listeners = new HashSet<>();

// add listener
public final synchronized void addListener(IListener l) { listeners.add(l); }
// remove listener
public final synchronized void removeListener(IListener l) { listeners.remove(l); }

// routine that raises events
public void run() {
    // some code here...
    Set<IListener> ls;
    synchronized (this) {
        ls = new HashSet<>(listeners);   // take a snapshot of the listeners under the lock
    }
    for (IListener l : ls) { l.processEvent(event); }
    // some code here...
}
As you can see here, we use synchronized(this) so that the lengthy run method (possibly an infinite loop) can cooperate easily with the synchronized methods of the class.
Of course it can very easily be rewritten to use synchronization on a private field. But sometimes, when we already have a design with synchronized methods (e.g. a legacy class we derive from), synchronized(this) can be the only solution.
It depends on the task you want to do, but I wouldn't use it. Also, check whether the thread-safety you want to accomplish couldn't be done with synchronized(this) in the first place. There are also some nice locks in the API that might help you :)
I just want to mention a possible solution for unique private lock references in atomic parts of code without dependencies. You can use a static HashMap of locks and a simple static method named atomic() that creates the required references automatically using stack information (the full class name and line number). Then you can use this method in synchronized statements without writing a new lock object.
// Synchronization objects (locks)
private static final HashMap<String, Object> locks = new HashMap<String, Object>();

// Simple method (synchronized so that concurrent callers see a consistent map)
private static synchronized Object atomic() {
    StackTraceElement[] stack = Thread.currentThread().getStackTrace(); // get execution point
    StackTraceElement exepoint = stack[2];
    // creates a unique key from the class name and line number of the execution point
    String key = String.format("%s#%d", exepoint.getClassName(), exepoint.getLineNumber());
    Object lock = locks.get(key); // use old or create new lock
    if (lock == null) {
        lock = new Object();
        locks.put(key, lock);
    }
    return lock; // return reference to lock
}
// Synchronized code
void dosomething1() {
// start commands
synchronized (atomic()) {
// atomic commands 1
...
}
// other command
}
// Synchronized code
void dosomething2() {
// start commands
synchronized (atomic()) {
// atomic commands 2
...
}
// other command
}
Avoid using synchronized(this) as a locking mechanism: this locks the whole class instance and can cause deadlocks. In such cases, refactor the code to lock only a specific method or variable; that way the whole class doesn't get locked. synchronized can be used at the method level.
Instead of using synchronized(this), the code below shows how you could lock on a private object instead.
private final Object operationLock = new Object();  // dedicated lock for this check

public void foo() {
    if (operation == null) {
        synchronized (operationLock) {
            if (operation == null) {
                // enter your code that this method has to handle...
            }
        }
    }
}
My two cents in 2019 even though this question could have been settled already.
Locking on 'this' is not bad if you know what you are doing, but locking on 'this' behind the scenes is (which, unfortunately, is what the synchronized keyword in a method definition does).
If you actually want users of your class to be able to 'steal' your lock (i.e. prevent other threads from dealing with it), you actually want all the synchronized methods to wait while another synchronized method is running, and so on.
It should be intentional and well thought out (and hence documented to help your users understand it).
To elaborate further: conversely, you must know what you are 'gaining' (or 'losing out on') if you lock on a non-accessible lock (nobody can 'steal' your lock, you are in total control, and so on...).
The problem for me is that the synchronized keyword in the method definition signature makes it just too easy for programmers not to think about what to lock on, which is a mighty important thing to think about if you don't want to run into problems in a multi-threaded program.
One can't argue that 'typically' you don't want users of your class to be able to do these things, or that 'typically' you want... It depends on what functionality you are coding. You can't make a rule of thumb, as you can't predict all the use cases.
Consider, for example, PrintWriter, which uses an internal lock; people then struggle to use it from multiple threads if they don't want their output to interleave.
Whether your lock should be accessible outside of the class is your decision as a programmer, based on what functionality the class has. It is part of the API. You can't move away, for instance, from synchronized(this) to synchronized(privateObject) without risking breaking changes in the code using it.
Note 1: I know you can achieve whatever synchronized(this) 'achieves' by using an explicit lock object and exposing it, but I think it is unnecessary if your behaviour is well documented and you actually know what locking on 'this' means.
Note 2: I don't concur with the argument that if some code is accidentally stealing your lock it's a bug and you have to solve it. This, in a way, is the same argument as saying I can make all my methods public even if they are not meant to be public. If someone is 'accidentally' calling my intended-to-be-private method, it's a bug. Why enable this accident in the first place!!! If the ability to steal your lock is a problem for your class, don't allow it. As simple as that.
Let me put the conclusion first - locking on private fields does not work for slightly more complicated multi-threaded programs. This is because multi-threading is a global problem. It is impossible to localize synchronization unless you write in a very defensive way (e.g. copy everything when passing to other threads).
Here is the long explanation:
Synchronization includes 3 parts: Atomicity, Visibility and Ordering
A synchronized block is a very coarse level of synchronization. It enforces visibility and ordering just as you would expect. But for atomicity, it does not provide much protection. Atomicity requires global knowledge of the program rather than local knowledge. (And that makes multi-threaded programming very hard.)
Let's say we have a class Account having method deposit and withdraw. They are both synchronized based on a private lock like this:
class Account {
private Object lock = new Object();
void withdraw(int amount) {
synchronized(lock) {
// ...
}
}
void deposit(int amount) {
synchronized(lock) {
// ...
}
}
}
Suppose we need to implement a higher-level class which handles transfers, like this:
class AccountManager {
    void transfer(Account fromAcc, Account toAcc, int amount) {
        if (fromAcc.getBalance() > amount) {
            fromAcc.setBalance(fromAcc.getBalance() - amount);
            toAcc.setBalance(toAcc.getBalance() + amount);
        }
    }
}
Assuming we have 2 accounts now,
Account john;
Account marry;
If Account.deposit() and Account.withdraw() are locked with the internal lock only, that will cause a problem when we have 2 threads working:
// Some thread
void threadA() {
john.withdraw(500);
}
// Another thread
void threadB() {
accountManager.transfer(john, marry, 100);
}
Because it is possible for both threadA and threadB to run at the same time: thread B finishes the conditional check, thread A withdraws, and then thread B withdraws again. This means we can withdraw $100 from John even if his account does not have enough money. This breaks atomicity.
You may propose: why not add withdraw() and deposit() to AccountManager then? But under this proposal, we would need to create a thread-safe Map from accounts to their locks. We would need to delete the lock after execution (otherwise we leak memory). And we would also need to ensure no one else accesses Account.withdraw() directly. This would introduce a lot of subtle bugs.
The correct and most idiomatic way is to expose the lock in the Account and let the AccountManager use that lock. But in this case, why not just use the object itself?
class Account {
    synchronized void withdraw(int amount) {
        // ...
    }
    synchronized void deposit(int amount) {
        // ...
    }
}

class AccountManager {
    void transfer(Account fromAcc, Account toAcc, int amount) {
        // Ensure locking order to prevent deadlock
        Account firstLock = fromAcc.hashCode() < toAcc.hashCode() ? fromAcc : toAcc;
        Account secondLock = fromAcc.hashCode() < toAcc.hashCode() ? toAcc : fromAcc;
        synchronized (firstLock) {
            synchronized (secondLock) {
                if (fromAcc.getBalance() > amount) {
                    fromAcc.setBalance(fromAcc.getBalance() - amount);
                    toAcc.setBalance(toAcc.getBalance() + amount);
                }
            }
        }
    }
}
To conclude in simple English: a private lock does not work for slightly more complicated multi-threaded programs.
(Reposted from https://stackoverflow.com/a/67877650/474197)
I think points one (somebody else using your lock) and two (all methods using the same lock needlessly) can happen in any fairly large application. Especially when there's no good communication between developers.
It's not cast in stone, it's mostly an issue of good practice and preventing errors.
