Locking synchronization in Java using ReentrantLock

I have an application written in Java that uses a reentrant global lock, and I have a problem like this:
One thread acquires the reentrant global lock and, let's say, keeps it for 30 seconds while it performs some operations. During this interval all other threads are blocked.
My problem is that I want some threads, like the RMI threads, to have a chance to execute.
What would be a good locking policy or optimization that lets some other threads acquire the lock for a short period of time?

So you basically have a job queue which should be executed in a single-threaded environment. Each time before polling from this queue you need to sort its entries by priority.
abstract class JobEntry<V> implements Callable<V> {
    Date registeredAt;
    long runEstimationMs;
    JobType type;
}
So you could come up with a weighting function for these entries and sort them by it, or have them implement Comparable.
And that is almost it. You could submit these jobs to a fixed thread pool with a single thread. If you need to interrupt them, cancel the Future, and each job should check the Thread.interrupted() flag.
The most difficult part here is the weighting function; a possible way to build it is to run a set of experiments on your system.
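A minimal sketch of that idea, assuming a single worker thread and a hypothetical weighting heuristic (a PriorityBlockingQueue stands in for the "sort before polling" step; all names are illustrative, not from the original answer):

import java.util.Comparator;
import java.util.Date;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.PriorityBlockingQueue;

// Hypothetical job type mirroring the JobEntry sketch above; the weight()
// heuristic is illustrative, not taken from the original answer.
abstract class PrioritizedJob<V> implements Callable<V> {
    final Date registeredAt = new Date();
    long runEstimationMs;

    // Lower weight runs first: short jobs and jobs that have waited long win.
    long weight() {
        long waitedMs = System.currentTimeMillis() - registeredAt.getTime();
        return runEstimationMs - waitedMs;
    }
}

class SingleThreadJobRunner {
    private static final Comparator<PrioritizedJob<?>> BY_WEIGHT =
            Comparator.comparingLong((PrioritizedJob<?> j) -> j.weight());

    // Note: the ordering is evaluated when jobs are enqueued, not continuously.
    private final PriorityBlockingQueue<PrioritizedJob<?>> queue =
            new PriorityBlockingQueue<>(11, BY_WEIGHT);
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    SingleThreadJobRunner() {
        worker.submit(() -> {
            // The single worker thread drains the queue, lightest job first.
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    queue.take().call();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();   // exit the loop
                } catch (Exception e) {
                    e.printStackTrace();                  // a real system would log this
                }
            }
        });
    }

    void submit(PrioritizedJob<?> job) {
        queue.put(job);
    }

    void shutdown() {
        worker.shutdownNow();   // interrupts the worker, which then stops
    }
}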

If one thread has acquired the lock, other threads can't proceed; we cannot change that behavior. To address your problem, here are a few suggestions.
Try to reduce the lock scope so that other threads also get a chance to execute.
Look for the possibility of acquiring the lock only for the parts that really require it and releasing it right after. Since, as you said, one thread holds the lock for a long time, there is probably code in that section that doesn't need the lock at all.
void operationInLock() {
    lock.lock();
    try {
        // code where the lock is required
    } finally {
        lock.unlock();
    }
    // code where the lock is NOT required
    // (other threads can acquire the lock here)

    lock.lock();
    try {
        // code where the lock is required
    } finally {
        lock.unlock();
    }
    // code where the lock is NOT required
}
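Combined with a fair ReentrantLock, this kind of splitting gives waiting threads (such as the RMI threads from the question) a chance each time the lock is released. A minimal sketch, with hypothetical method names:

import java.util.concurrent.locks.ReentrantLock;

class GlobalLockExample {
    // Fair mode: the longest-waiting thread gets the lock each time it is released.
    private final ReentrantLock globalLock = new ReentrantLock(true);

    // The long operation, split into chunks so waiting threads can slip in between them.
    void longRunningOperation() {
        for (int chunk = 0; chunk < 30; chunk++) {
            globalLock.lock();
            try {
                doOneChunkOfWork(chunk);   // hypothetical unit of the 30-second job
            } finally {
                globalLock.unlock();       // waiting threads (e.g. RMI) may acquire here
            }
        }
    }

    void rmiRequest() {
        globalLock.lock();
        try {
            handleRequest();               // hypothetical short RMI operation
        } finally {
            globalLock.unlock();
        }
    }

    private void doOneChunkOfWork(int chunk) { /* ... */ }

    private void handleRequest() { /* ... */ }
}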
If you don't find this answer useful, give us some more information about the code/functionality. Without seeing the code it is very difficult to give a solution; we can only give suggestions based on best practice.

I believe this can be achieved by having the first thread (the one holding the global lock for 30 seconds) acquire the lock only around the sections of code that really need it. You can also use separate read and write locks, which come with the ReadWriteLock interface in Java.
ReadWriteLock is implemented by the ReentrantReadWriteLock class in the java.util.concurrent.locks package. Multiple threads can hold the read lock at the same time, but only a single thread can hold the mutually exclusive write lock. Threads requesting the read lock have to wait until the write lock is released. A thread is allowed to downgrade from the write lock to the read lock, but not vice versa: allowing a reader to upgrade could lead to deadlock, because more than one thread may try to upgrade at the same time. ReentrantReadWriteLock also supports all the features of ReentrantLock, such as an optional fairness policy, reentrancy, Condition support (on the write lock only), and interruptible acquisition of both the read and the write lock.
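A minimal sketch of that pattern, using a simple map as the shared state (the cache here is illustrative, not from the answer above):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class SimpleCache {
    private final Map<String, String> data = new HashMap<>();
    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();

    String get(String key) {
        rwLock.readLock().lock();          // many readers may hold this at once
        try {
            return data.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    void put(String key, String value) {
        rwLock.writeLock().lock();         // exclusive: blocks readers and writers
        try {
            data.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}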
A Condition object, also known as a condition variable, provides a thread with the ability to suspend its execution until the condition is true. A Condition object is necessarily bound to a Lock and can be obtained using the newCondition() method.
Furthermore, a Condition enables the effect of having multiple wait-sets per object by combining these sets with the use of a Lock implementation. Moreover, because Conditions access portions of state shared among different threads, the use of a Lock is mandatory. It is important to mention that a Condition must atomically release the associated Lock and suspend the current thread's execution.
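A minimal sketch of await()/signalAll() on a Condition bound to a ReentrantLock (the gate/flag here is illustrative):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class Gate {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition opened = lock.newCondition();
    private boolean open = false;

    void waitUntilOpen() throws InterruptedException {
        lock.lock();
        try {
            while (!open) {          // always re-check the condition after waking up
                opened.await();      // atomically releases the lock and suspends
            }
        } finally {
            lock.unlock();
        }
    }

    void open() {
        lock.lock();
        try {
            open = true;
            opened.signalAll();      // wake up all threads waiting on this condition
        } finally {
            lock.unlock();
        }
    }
}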
For reference, here are a couple of URLs:
https://examples.javacodegeeks.com/core-java/util/concurrent/locks-concurrent/condition/java-util-concurrent-locks-condition-example/
https://examples.javacodegeeks.com/core-java/util/concurrent/locks-concurrent/readwritelock/java-readwritelock-example/
Please let me know if you need any other help.

Related

Difference between Locks and .join() method

Let's say you have two threads, thread1 and thread2. If you call thread1.start() and thread2.start() at the same time and they both print out numbers between 1 and 5, they will both run at the same time and they will randomly print out the numbers in any order, if I am not mistaken. To prevent this, you use the .join() method to make sure that a certain thread gets executed first. If this is what the .join() method does, what is the Lock object used for?
Thread.join is used to wait for another thread to finish. The join method uses the implicit lock on the Thread object and calls wait on it. When the thread being waited for finishes, it notifies the waiting thread so it can stop waiting.
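For example, a minimal sketch of the scenario from the question (the loop bodies are illustrative):

public class JoinExample {
    public static void main(String[] args) throws InterruptedException {
        Thread thread1 = new Thread(() -> {
            for (int i = 1; i <= 5; i++) System.out.println("thread1: " + i);
        });
        Thread thread2 = new Thread(() -> {
            for (int i = 1; i <= 5; i++) System.out.println("thread2: " + i);
        });

        thread1.start();
        thread1.join();   // the main thread waits here until thread1 has finished
        thread2.start();  // so thread2 only starts after thread1 is done
        thread2.join();
    }
}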
Java has different ways to use locks to protect access to data. There is implicit locking that uses a lock built into every Java object (this is where the synchronized keyword comes in), and then there are explicit Lock objects. Both of them protect data from concurrent access; the difference is that explicit Locks are more flexible and powerful, while implicit locking is designed to be easier to use.
With implicit locks, for instance, I can't forget to release the lock at the end of a synchronized method or block; the JVM makes sure the lock gets released as the thread leaves. But programming with implicit locks can be limiting. For instance, there are no separate condition objects, so if different threads access a shared object for different things, notifying only a subset of them is not possible.
With explicit Locks you get separate condition objects and can notify only those threads waiting on a particular condition (producers might wait on one condition while consumers wait on another; see the ArrayBlockingQueue class for an example), and you can implement more involved kinds of patterns, like hand-over-hand locking (see the sketch below). But you need to be much more careful, because the extra features introduce complications, and releasing the lock is up to you.
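A minimal sketch of hand-over-hand locking over a linked list, assuming each node carries its own ReentrantLock (the Node type and the contains method are illustrative, not from the answer above):

import java.util.concurrent.locks.ReentrantLock;

// Hypothetical node type carrying its own lock.
class Node {
    final int value;
    Node next;
    final ReentrantLock lock = new ReentrantLock();
    Node(int value) { this.value = value; }
}

class HandOverHandList {
    private final Node head = new Node(Integer.MIN_VALUE);   // sentinel node

    // Traverses the list holding at most two node locks at any time:
    // the next node's lock is acquired before the current one is released.
    boolean contains(int target) {
        Node prev = head;
        prev.lock.lock();
        try {
            Node curr = prev.next;
            while (curr != null) {
                curr.lock.lock();       // grab the next node...
                prev.lock.unlock();     // ...then let go of the previous one
                prev = curr;
                if (curr.value == target) {
                    return true;        // 'prev' (== curr) is unlocked in finally
                }
                curr = curr.next;
            }
            return false;
        } finally {
            prev.lock.unlock();
        }
    }
}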
Locking typically prevents more than one thread from running a block of code at the same time. This is because only ONE thread at a time can acquire the lock and run the code within. If a thread wants the lock but it is already taken, then that thread goes into a wait state until the lock is released. If you have many threads waiting for the lock to be released, which one gets the lock next is INDETERMINATE (can't be predicted). This can lead to "thread starvation" where a thread is waiting for the lock, but it just never gets it because other threads always seem to get it instead. This is a very generic answer because you didn't specify a language. Some languages may differ slightly in that they might have a determinate method of deciding who gets the lock next.

How does using the Lock interface give better performance than the synchronized keyword in concurrent application design?

I was going through "Java Concurrency Cookbook". In it the author mentions that using the Lock interface gives better performance than using the synchronized keyword. Can anyone tell me how, in terms such as stack frames or the number of method calls?
Please help me clear up these Java concurrency concepts.
The raison d'être for Lock and friends isn't that it is inherently faster than synchronized; it is that it can be used in ways that don't necessarily correspond to the lexical block structure, and that it offers additional facilities such as read-write locks, counting semaphores, etc.
Whether a specific Lock implementation is actually faster than synchronized is a moot point and implementation-dependent. There is certainly no such claim in the Javadoc. Doug Lea's book [1], where it all started, makes no claim that I can see stronger than 'often with better performance'.
[1]: Lea, Concurrent Programming in Java, 2nd edition, Addison Wesley 2000.
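As an illustration of one of those extra facilities (a counting semaphore), here is a minimal sketch; the permit count and the guarded operation are hypothetical:

import java.util.concurrent.Semaphore;

class ConnectionThrottle {
    // At most 3 threads may be inside the guarded section at once,
    // something a plain synchronized block cannot express.
    private final Semaphore permits = new Semaphore(3);

    void useConnection() throws InterruptedException {
        permits.acquire();
        try {
            doWorkWithConnection();   // hypothetical operation on a pooled resource
        } finally {
            permits.release();
        }
    }

    private void doWorkWithConnection() { /* ... */ }
}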
1. With synchronization there is no way to back out of a lock attempt, so badly ordered monitors can leave you stuck in a deadlock; the Lock API offers tryLock() and lockInterruptibly(), which make it possible to avoid or recover from such situations.
2. With synchronization, we don't know when a waiting thread will get its turn after the previous thread releases the lock, which can lead to starvation. ReentrantLock has a constructor that takes a fairness flag, which lets the longest-waiting thread acquire the lock next.
3. With synchronization, a thread waiting for a lock cannot do any other activity in the meantime. The Lock interface has a tryLock() method with which you can attempt to acquire the lock and, if you don't get it, perform some alternative task instead; this can improve the performance of the application (see the sketch after this list).
4. There is no API to check how many threads are waiting for an intrinsic lock, whereas this is possible with the Lock implementation ReentrantLock (for example via getQueueLength()).
5. You get better control of locks with the Lock interface, for example through ReentrantLock's getHoldCount() method, which has no equivalent with synchronization.
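A minimal sketch of point 3, with hypothetical task names:

import java.util.concurrent.locks.ReentrantLock;

class TryLockExample {
    private final ReentrantLock lock = new ReentrantLock();

    void process() {
        if (lock.tryLock()) {             // returns immediately instead of blocking
            try {
                updateSharedResource();   // hypothetical work that needs the lock
            } finally {
                lock.unlock();
            }
        } else {
            doAlternativeWork();          // hypothetical work that needs no lock
        }
    }

    private void updateSharedResource() { /* ... */ }

    private void doAlternativeWork() { /* ... */ }
}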

Why use a ReentrantLock if one can use synchronized(this)?

I'm trying to understand what makes the Lock in concurrency so important if one can use synchronized (this). In the dummy code below, I can either:
synchronize the entire method or synchronize the vulnerable area (synchronized(this){...}),
OR lock the vulnerable code area with a ReentrantLock.
Code:
private final ReentrantLock lock = new ReentrantLock();
private static List<Integer> ints;

public Integer getResult(String name) {
    .
    .
    .
    lock.lock();
    try {
        if (ints.size() == 3) {
            ints = null;
            return -9;
        }
        for (int x = 0; x < ints.size(); x++) {
            System.out.println("[" + name + "] " + x + "/" + ints.size() + ". values >>>>" + ints.get(x));
        }
    } finally {
        lock.unlock();
    }
    return random;
}
A ReentrantLock is unstructured, unlike synchronized constructs -- i.e. you don't need to use a block structure for locking and can even hold a lock across methods. An example:
private final ReentrantLock lock = new ReentrantLock();

public void foo() {
    ...
    lock.lock();     // acquired here...
    ...
}

public void bar() {
    ...
    lock.unlock();   // ...and released in a different method
    ...
}
Such flow is impossible to represent via a single monitor in a synchronized construct.
Aside from that, ReentrantLock supports lock polling and interruptible lock waits that support time-out. ReentrantLock also has support for configurable fairness policy, allowing more flexible thread scheduling.
The constructor for this class accepts an optional fairness parameter. When set true, under contention, locks favor granting access to the longest-waiting thread. Otherwise this lock does not guarantee any particular access order. Programs using fair locks accessed by many threads may display lower overall throughput (i.e., are slower; often much slower) than those using the default setting, but have smaller variances in times to obtain locks and guarantee lack of starvation. Note however, that fairness of locks does not guarantee fairness of thread scheduling. Thus, one of many threads using a fair lock may obtain it multiple times in succession while other active threads are not progressing and not currently holding the lock. Also note that the untimed tryLock method does not honor the fairness setting. It will succeed if the lock is available even if other threads are waiting.
ReentrantLock may also be more scalable, performing much better under higher contention. You can read more about this here.
This claim has been contested, however; see the following comment:
In the reentrant lock test, a new lock is created each time, thus there is no exclusive locking and the resulting data is invalid. Also, the IBM link offers no source code for the underlying benchmark so its impossible to characterize whether the test was even conducted correctly.
When should you use ReentrantLocks? According to that developerWorks article...
The answer is pretty simple -- use it when you actually need something it provides that synchronized doesn't, like timed lock waits, interruptible lock waits, non-block-structured locks, multiple condition variables, or lock polling. ReentrantLock also has scalability benefits, and you should use it if you actually have a situation that exhibits high contention, but remember that the vast majority of synchronized blocks hardly ever exhibit any contention, let alone high contention. I would advise developing with synchronization until synchronization has proven to be inadequate, rather than simply assuming "the performance will be better" if you use ReentrantLock. Remember, these are advanced tools for advanced users. (And truly advanced users tend to prefer the simplest tools they can find until they're convinced the simple tools are inadequate.) As always, make it right first, and then worry about whether or not you have to make it faster.
One final aspect that is going to become more relevant in the near future has to do with Java 15 and Project Loom. In the (new) world of virtual threads, the underlying scheduler is able to work much better with ReentrantLock than with synchronized. That is true at least for the initial Java 15 release, but may be optimized later.
In the current Loom implementation, a virtual thread can be pinned in two situations: when there is a native frame on the stack — when Java code calls into native code (JNI) that then calls back into Java — and when inside a synchronized block or method. In those cases, blocking the virtual thread will block the physical thread that carries it. Once the native call completes or the monitor released (the synchronized block/method is exited) the thread is unpinned.
If you have a common I/O operation guarded by a synchronized, replace the monitor with a ReentrantLock to let your application benefit fully from Loom’s scalability boost even before we fix pinning by monitors (or, better yet, use the higher-performance StampedLock if you can).
ReentrantReadWriteLock is a specialized lock whereas synchronized(this) is a general purpose lock. They are similar but not quite the same.
You are right in that you could use synchronized(this) instead of ReentrantReadWriteLock but the opposite is not always true.
If you'd like to better understand what makes ReentrantReadWriteLock special look up some information about producer-consumer thread synchronization.
In general you can remember that whole-method synchronization and general-purpose synchronization (using the synchronized keyword) can be used in most applications without thinking too much about the semantics of the synchronization, but if you need to squeeze performance out of your code you may need to explore other, more fine-grained or special-purpose synchronization mechanisms.
By the way, using synchronized(this) - and in general locking on a publicly accessible object - can be problematic, because it opens your code up to potential deadlocks: somebody else might unknowingly try to lock on your object somewhere else in the program.
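A common way to avoid that is to lock on a private object that outside code cannot reach. A minimal sketch (the counter is illustrative):

class Counter {
    // Private lock object: external code cannot synchronize on it,
    // unlike 'this', which any caller holding a reference could lock.
    private final Object lock = new Object();
    private int count;

    void increment() {
        synchronized (lock) {
            count++;
        }
    }

    int current() {
        synchronized (lock) {
            return count;
        }
    }
}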
From the Oracle documentation page about ReentrantLock:
A reentrant mutual exclusion Lock with the same basic behaviour and semantics as the implicit monitor lock accessed using synchronized methods and statements, but with extended capabilities.
A ReentrantLock is owned by the thread last successfully locking, but not yet unlocking it. A thread invoking lock will return, successfully acquiring the lock, when the lock is not owned by another thread. The method will return immediately if the current thread already owns the lock.
The constructor for this class accepts an optional fairness parameter. When set true, under contention, locks favor granting access to the longest-waiting thread. Otherwise this lock does not guarantee any particular access order.
ReentrantLock key features, as per this article (a short sketch follows the list):
Ability to lock interruptibly.
Ability to time out while waiting for a lock.
Power to create a fair lock.
API to get the list of threads waiting for the lock.
Flexibility to try for the lock without blocking.
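A minimal sketch of two of these features - interruptible acquisition and inspecting the wait queue - with a hypothetical task:

import java.util.concurrent.locks.ReentrantLock;

class InterruptibleWorker {
    private final ReentrantLock lock = new ReentrantLock(true);  // fair lock

    void doWork() throws InterruptedException {
        // Unlike synchronized, this wait can be interrupted by another thread.
        lock.lockInterruptibly();
        try {
            System.out.println("threads currently waiting: " + lock.getQueueLength());
            performTask();   // hypothetical critical-section work
        } finally {
            lock.unlock();
        }
    }

    private void performTask() { /* ... */ }
}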
You can use ReentrantReadWriteLock.ReadLock and ReentrantReadWriteLock.WriteLock to gain more granular control over locking for read and write operations.
Have a look at this article by Benjamen on the usage of the different types of ReentrantLock.
Synchronized locks do not offer any waiting-queue mechanism: after one thread releases the monitor, any of the threads running in parallel can acquire the lock next. Because of this, a thread that has been in the system and waiting for a long time may never get a chance to access the shared resource, which leads to starvation.
Reentrant locks are much more flexible and have an optional fairness policy: when the currently executing thread is done, the thread that has been waiting the longest is guaranteed to get the chance to access the shared resource. This avoids starvation, at the cost of lower throughput and longer overall execution time.
You can use reentrant locks with a fairness policy or a timeout to avoid thread starvation. Applying a thread fairness policy will help avoid a thread waiting forever to get at your resources.
private final ReentrantLock lock = new ReentrantLock(true);
//the param true turns on the fairness policy.
The "fairness policy" picks the next runnable thread to execute. It is based on priority, time since last run, blah blah
also,
Synchronize can block indefinitely if it cant escape the block. Reentrantlock can have timeout set.
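A minimal sketch of such a timed acquisition (the 2-second timeout and task name are illustrative):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class TimedLockExample {
    private final ReentrantLock lock = new ReentrantLock();

    void update() throws InterruptedException {
        // Give up after 2 seconds instead of blocking forever.
        if (lock.tryLock(2, TimeUnit.SECONDS)) {
            try {
                mutateSharedState();   // hypothetical guarded operation
            } finally {
                lock.unlock();
            }
        } else {
            System.out.println("could not acquire the lock within 2 seconds");
        }
    }

    private void mutateSharedState() { /* ... */ }
}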
One thing to keep in mind is:
The name 'ReentrantLock' sends out a misleading message about the other locking mechanism, namely that it is not re-entrant. This is not true: a lock acquired via 'synchronized' is also re-entrant in Java.
The key difference is that 'synchronized' uses the intrinsic lock (the one that every Object has), while the Lock API doesn't.
I think the wait/notify/notifyAll methods don't belong on the Object class, as they pollute all objects with methods that are rarely used. They make much more sense on a dedicated Lock class. So from this point of view, perhaps it's better to use a tool that is explicitly designed for the job at hand - i.e. ReentrantLock.
Let's assume this code is running in a thread:

private static final ReentrantLock lock = new ReentrantLock();

void accessResource() {
    lock.lock();
    try {
        if (checkSomeCondition()) {
            accessResource();   // re-enters the lock the thread already owns
        }
    } finally {
        lock.unlock();          // decrements the hold count
    }
}
Because the thread already owns the lock, it allows multiple calls to lock(), so it re-enters the lock. This is implemented with a hold count, so the lock does not have to be acquired again; each unlock() decrements the count, and the lock is released when it reaches zero.
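A small sketch of that hold count in action (the recursion depth is illustrative):

import java.util.concurrent.locks.ReentrantLock;

public class HoldCountDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    static void recurse(int depth) {
        lock.lock();
        try {
            // Prints 1, 2, 3: the same thread holds the lock multiple times.
            System.out.println("hold count: " + lock.getHoldCount());
            if (depth > 1) {
                recurse(depth - 1);
            }
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        recurse(3);
    }
}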

Java concurrency lock and condition usage

I can use Object.wait, Object.notify and synchronized blocks to solve producer-consumer types of problems. At the same time I can use locks and conditions from the java.util.concurrent package. I am not able to understand why we need Conditions when we can use Object.wait and notify to make threads wait on some condition, like a queue being empty or full. Is there any other benefit to using java.util.concurrent.locks.Condition?
This article provides a good explanation:
Just as Lock is a generalization for synchronization, the Lock framework includes a generalization of wait and notify called Condition. A Lock object acts as a factory object for condition variables bound to that lock, and unlike with the standard wait and notify methods, there can be more than one condition variable associated with a given Lock.
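A minimal bounded-buffer sketch showing why multiple condition variables per Lock are useful: producers wait on notFull while consumers wait on notEmpty, so each side only wakes up threads that can actually make progress (the capacity and element type are illustrative):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class BoundedBuffer<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity = 10;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();   // producers wait here
    private final Condition notEmpty = lock.newCondition();  // consumers wait here

    void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) {
                notFull.await();
            }
            items.addLast(item);
            notEmpty.signal();   // wake one consumer only, not the producers
        } finally {
            lock.unlock();
        }
    }

    T take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await();
            }
            T item = items.removeFirst();
            notFull.signal();    // wake one producer only
            return item;
        } finally {
            lock.unlock();
        }
    }
}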
