As shown in the example below, once the lock is taken on the object in the call method, there is no need for the subsequent methods to have the synchronized keyword.
public class Prac
{
    public static void main(String[] args)
    {
        new Prac().call();
    }

    private synchronized void call()
    {
        further();
    }

    private synchronized void further()
    {
        oneMore();
    }

    private synchronized void oneMore()
    {
        // do something
    }
}
But if I still add the synchronized keyword to further and oneMore, what does Java do on such encounters? Does Java check whether the lock is required or not? Or, since the method call is on the same stack, does it just proceed without checking, because the lock is already acquired?
Note: My doubt is how Java behaves in such a situation. I am not sure, but I think it is different from biased locking.
In fact, Java checks whether the current thread already holds the lock every time it enters a synchronized method.
private synchronized void oneMore()
{
    // do something
}
This is equivalent to
private void oneMore(){
    synchronized(this){
        // do something
    }
}
But because intrinsic locks in Java are reentrant, a thread that already holds the lock does not block trying to reacquire it when it enters another synchronized block, as in your example. Otherwise, this would create a deadlock.
Update: To answer your comment below. From Java Concurrency in Practice:
Reentrancy is implemented by associating with each lock an acquisition count
and an owning thread. When the count is zero, the lock is considered unheld.
When a thread acquires a previously unheld lock, the JVM records the owner
and sets the acquisition count to one. If that same thread acquires the lock
again, the count is incremented, and when the owning thread exits the
synchronized block, the count is decremented. When the count reaches zero,
the lock is released.
Therefore, checking whether a lock is already held is (more or less) equivalent to an if statement comparing the variable that holds the owning thread with the thread trying to acquire the lock.
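To make that counter visible, here is a small sketch of my own (not from the book; the class name ReentrancyDemo is made up) using java.util.concurrent.locks.ReentrantLock, whose getHoldCount() exposes the same kind of per-thread acquisition count. Intrinsic locks behave the same way, they just don't expose the number:

import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    private static final ReentrantLock LOCK = new ReentrantLock();

    public static void main(String[] args) {
        outer();
    }

    private static void outer() {
        LOCK.lock();                                 // count goes 0 -> 1
        try {
            System.out.println(LOCK.getHoldCount()); // prints 1
            inner();
            System.out.println(LOCK.getHoldCount()); // back to 1
        } finally {
            LOCK.unlock();                           // count goes 1 -> 0, lock is released
        }
    }

    private static void inner() {
        LOCK.lock();                                 // same thread: count goes 1 -> 2, no blocking
        try {
            System.out.println(LOCK.getHoldCount()); // prints 2
        } finally {
            LOCK.unlock();                           // count goes 2 -> 1
        }
    }
}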
However, as you pointed out, there is no need for the synchronized keyword on the private methods. In general, you should try to remove unnecessary synchronization since that usually leads to degraded performance.
Related
I hope I can understandably describe the situation.
I want to start some number of threads, all of which will execute one synchronized method. Say the first thread checks the value of a variable in this method, and the lock is released after the check. Then the second thread calls the same method. But the first thread will then (after some ms) modify this variable, which lives in another class, and the second thread will (maybe) check the variable before the first has changed it. How can I force the second thread to wait (without sleep) until the first has finished and changed the variable before the second checks the value? Can the first send some signal like "variable changed, you can check it now"?
Now I try to write this in code: the started threads all execute this run method:
abstract class Animal implements Runnable {
    protected House house;

    abstract boolean eating();

    @Override
    public void run() {
        try {
            while (!Thread.interrupted()) {
                if (eating()) {
                    goEat();   // here house.eatingRoom.count will be changed
                    Thread.sleep(1000);
                    goback();
                }
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
All of them access this method:
class Cat extends Animal {
    @Override
    synchronized boolean eating() {
        if (house.eatingRoom.count.isEmpty())
            return true;   // first thread releases the lock, and a second thread may enter before the value is changed
        else
            return false;
    }
}
And:
class EatingRoom {
    final Set<Animal> count = new HashSet<>();

    synchronized void add(Cat c) {
        count.add(c);
    }
}
to complete:
public class House extends Thread {
    final EatingRoom eatingRoom = new EatingRoom();
    //start all threads here so run in Animal class is executed..
}
The problem you are describing sounds like you could benefit from the Java synchronisation primitives like Object.wait and Object.notify.
A thread that owns the lock/monitor of a given object (such as by using the synchronized keyword) can call wait instead of looping and sleeping in a busy-wait pattern like the one you have in while(!Thread.interrupted()), which may waste many CPU cycles.
Once the thread enters the wait state it will release the lock it holds, which allows another thread to acquire that same lock and potentially change some state before then notifying one or more waiting threads via notify/notifyAll.
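As a rough sketch (the Shared class, its count field, and the method names are made up here for illustration, not taken from your code), the "variable changed, you can check it now" signal is usually written as a guarded block:

class Shared {
    private int count = 0;   // the state both threads care about

    // Called by the thread that must wait until the state changes.
    synchronized void awaitNonZero() throws InterruptedException {
        while (count == 0) {   // always re-check the condition in a loop
            wait();            // releases the lock while waiting
        }
        // the lock is held again here, and count != 0
    }

    // Called by the thread that changes the state.
    synchronized void increment() {
        count++;
        notifyAll();           // wakes up waiting threads so they re-check the condition
    }
}

The condition is re-checked in a loop because wait() can return spuriously and because another thread may change the state again before the woken thread reacquires the lock.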
Note that one must be careful to ensure locks are acquired and released in the same order to help avoid deadlock scenarios when more than one lock is involved. Consider also using timeouts when waiting to ensure that your thread doesn't wait indefinitely for a condition that might never arise. If there are many waiting threads when you call notify be aware that you might not know which thread will be scheduled but you can set a fairness policy to help influence this.
Depending on the structure of your code, you may be able to avoid some of the lower-level primitives like synchronized blocks by using higher-level APIs such as https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/locks/Lock.html, or keywords like volatile for variables that contain shared mutable state (like a condition you want to wait for), to ensure that the result of a write is observed on a subsequent read in a "happens before" relationship.
I have seen some posts about the custom read-write lock implementation in java using wait/notify. It looks like:
public class ReadWriteLock {
    private int readers;
    private int writers;
    private int writeRequests;

    public synchronized void lockRead() throws InterruptedException {
        while (writers > 0 || writeRequests > 0) {
            wait();
        }
        readers++;
    }

    public synchronized void unlockRead() {
        readers--;
        notifyAll();
    }

    public synchronized void lockWrite() throws InterruptedException {
        writeRequests++;
        while (readers > 0 || writers > 0) {
            wait();
        }
        writeRequests--;
        writers++;
    }

    public synchronized void unlockWrite() throws InterruptedException {
        writers--;
        notifyAll();
    }
}
I cannot see how it could work correctly, unless I have misunderstood how wait/notify really works. Assuming the read requests, and consequently the reading threads, greatly outnumber the writers, my questions are:
If reading threads repeatedly acquire the lock on the instance, how can a writing thread ever increment the variable writeRequests, given that it can only be increased inside a synchronized method, so a thread must first acquire the lock to do it (if I am not mistaken)? As long as a reading thread calls wait only when writeRequests or writers is greater than 0, how can a writing thread get the chance to acquire the lock?
Based on the above presumptions, how can more than one reading thread access a method at the same time, since they must first call lockRead(), which is synchronized as well?
Edit: After seeing your edit to the question: you're asking what happens when multiple threads call wait() inside the same synchronized blocks. See this for a detailed explanation of what is called 'releasing the monitor': http://www.artima.com/insidejvm/ed2/threadsynchP.html
To simplify things:
Synchronized methods are like synchronized(this) blocks.
Calling wait() inside a synchronized block releases the lock and switches the thread to the WAITING state. In this scenario other threads can acquire the lock on the same object and possibly notify the other waiting threads on a state change (your unlock methods demonstrate that) by using the same object that was waited on (this in our case, because you're using synchronized methods).
If you map the possible scenarios for calling each method according to that principle, you can see that the methods are either non-waiting (unlockRead()/unlockWrite()), meaning they can block on mutual exclusion upon entry but don't run any blocking code (and end swiftly).
Or they are waiting but non-blocking (lockRead()/lockWrite()): just like the unlock methods, with the addition that their execution could potentially be stalled; however, in such scenarios they don't block but rather wait.
So in any case you can consider your code as non-blocking, and therefore it doesn't pose any real issue (at least none that I can see).
That said, you should protect against unlocking locks that are not held; otherwise the counters can go below 0, which would in turn affect the lock methods.
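For example (a sketch of one possible guard, not part of the original code), unlockRead() could fail fast when no read lock is held:

public synchronized void unlockRead() {
    if (readers == 0) {
        // no thread currently holds a read lock, so this unlock is a programming error
        throw new IllegalMonitorStateException("read lock is not held");
    }
    readers--;
    notifyAll();
}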
I'm wondering if there is an easy way to make a synchronized lock that will respond to changing references. I have code that looks something like this:
private void fus() {
    synchronized (someRef) {
        someRef.roh();
    }
}
...
private void dah() {
    someRef = someOtherRef;
}
What I would like to happen is:
Thread A enters fus, and acquires a lock on someRef as it calls roh(). Assume roh never terminates.
Thread B enters fus, begins waiting for someRef to be free, and stays there (for now).
Thread C enters dah, and modifies someRef.
Thread B is now allowed to enter the synchronized block, as someRef no longer refers to the object Thread A has a lock on.
What actually happens is:
Thread A enters fus, and acquires a lock on someRef as it calls roh(). Assume roh never terminates.
Thread B enters fus, finds the lock, and waits for it to be released (forever).
Thread C enters dah, and modifies someRef.
Thread B continues to wait, as it's no longer looking at someRef; it's looking at the lock held by A.
Is there a way to set this up such that Thread B will either re-check the lock for changing references, or will "bounce off" into other code? (Something like synchronizedOrElse?)
There surely is a way, but not with synchronized. Reasoning: at the point in time where the 2nd thread enters fus(), the first thread holds the intrinsic lock of the object referenced by someRef. Important: the 2nd thread still sees someRef referencing this very object and will try to acquire its lock. Later on, when the 3rd thread changes the reference someRef, it would have to notify the 2nd thread somehow about this event. This is not possible with synchronized.
To my knowledge, there is no built-in language-feature like synchronized to handle this kind of synchronization.
A somewhat different approach would be to either manage a Lock within your class or give someRef an attribute of type Lock. Instead of working with lock() you can use tryLock() or tryLock(long timeout, TimeUnit unit). This is a scheme on how I would implement this (assuming that someRef has a Lock attribute):
volatile SomeRef someRef = ... // important: make this volatile to prevent caching
...
private void fus() {
    while (true) {
        SomeRef someRef = this.someRef;
        Lock lock = someRef.lock;
        boolean unlockNecessary = false;
        try {
            if (lock.tryLock(10, TimeUnit.MILLISECONDS)) { // I have chosen this timeout arbitrarily
                unlockNecessary = true;
                someRef.roh();
                return; // Job is done -> return. Remember: finally will still be executed.
                // Alternatively, break; could be used to exit the loop.
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            if (unlockNecessary) {
                lock.unlock();
            }
        }
    }
}
...
private void dah() {
    someRef = someOtherRef;
}
Now, when someRef is changed, the 2nd thread will see the new value of someRef in its next cycle and therefore will try to synchronize on the new Lock and succeed, if no other thread has acquired the Lock.
What actually happens is ... Thread B continues to wait, as it's no longer looking at someref, it's looking at the lock held by A.
That's right. You can't write code to synchronize on a variable. You can only write code to synchronize on some object.
Thread B found the object on which to synchronize by looking at the variable someRef, but it only ever looks at that variable one time to find the object. The object is what it locks, and until thread A releases the lock on that object, thread B is going to be stuck.
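In other words, the block behaves roughly as if the reference were copied into a local variable before locking (SomeRef and lockTarget are just illustrative names):

private void fus() {
    SomeRef lockTarget = someRef;  // the variable is read exactly once, here
    synchronized (lockTarget) {    // the lock is on this object, not on the variable
        lockTarget.roh();          // reassigning someRef elsewhere does not release this lock
    }
}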
I would like to add some more info on top of the excellent answers by @Turing85 and @james large.
I agree that Thread B continues to wait.
It's better to avoid synchronization for this type of program by using a lock-free API.
Atomic variables have features that minimize synchronization and help avoid memory consistency errors.
From the code you have posted, AtomicReference seems to be the right solution for your problem.
Have a look at the documentation page for the java.util.concurrent.atomic package.
A small toolkit of classes that support lock-free thread-safe programming on single variables. In essence, the classes in this package extend the notion of volatile values, fields, and array elements to those that also provide an atomic conditional update operation of the form:
boolean compareAndSet(expectedValue, updateValue);
One more nice post in SE related to this topic.
When to use AtomicReference in Java?
Sample code:
String initialReference = "value 1";
AtomicReference<String> someRef =
new AtomicReference<String>(initialReference);
String newReference = "value 2";
boolean exchanged = someRef.compareAndSet(initialReference, newReference);
System.out.println("exchanged: " + exchanged);
Refer to this jenkov tutorial for better understanding.
Consider the following:
class A {
    public static void main(String[] args) throws InterruptedException {
        final A a = new A();
        new Thread() {
            public void run() {
                try {
                    a.doSomethingAfterLocking();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }.start();
        Thread.sleep(1000);
        new Thread() {
            public void run() {
                a.intrudeLock();
            }
        }.start();
    }

    synchronized void doSomethingAfterLocking() throws InterruptedException {
        System.out.println("acquired lock");
        Thread.sleep(10000);
        System.out.println("finished stuff");
    }

    void intrudeLock() {
        System.out.println("don't need object's lock");
    }
}
Going by the locking mechanism - the expected output is (at least in most of the cases):
acquired lock
don't need object's lock
finished stuff
I am not asking why this is the output; I understand the reason: the second thread doesn't require a lock for its method call and can thus intrude.
Now here is my doubt: when a thread acquires a lock, its intention is to gain exclusivity over the object, and intuitively the execution environment should prevent any state change by other threads. But this is not how Java implements it. Is there a reason why this mechanism has been designed so?
When a thread acquires lock, its intention is gaining exclusivity over the object and intuitively the execution environment should
prevent any state change by other threads.
Small correction :
When a thread acquires lock, its intention is gaining exclusivity over
the monitor of the object and intuitively the execution environment
should prevent any state change by other threads which are waiting
(which need) to acquire the same lock.
It's completely left to the programmer to specify whether some field/resource should be used only after acquiring a lock. If you have a field that can only ever be accessed by one thread, then it doesn't need synchronization (acquiring the lock).
The important point to note is that it is completely left to the programmer to synchronize access to fields based on the code paths in the program. For example, a field could be accessed by multiple threads in one code path (which calls for synchronization) and by only one thread in another path. But since there is a good probability that both code paths can be taken at the same time by different threads, you should acquire the lock before entering either of them.
Now, the JIT may decide to ignore your lock requests (lock elision) if it can determine that they are unnecessary (such as locking method-local objects that never escape).
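A sketch of the kind of lock that is a candidate for elision (localLock is purely local and never escapes the method, so escape analysis can prove that no other thread will ever contend on it):

int sumLocally(int[] values) {
    Object localLock = new Object();   // never published to another thread
    int sum = 0;
    for (int v : values) {
        synchronized (localLock) {     // provably uncontended, so the JIT may remove the locking
            sum += v;
        }
    }
    return sum;
}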
When we say we lock on an object using the synchronized keyword, does it mean we are acquiring a lock on the whole object or only at the code that exists in the block?
In the following example, the call to listOne.add is synchronized. Does it mean that if another thread accesses listOne.get, it will be blocked until the first thread gets out of this block? What if a second thread accesses the listTwo.get or listTwo.add methods on the instance variables of the same object while the first thread is still in the synchronized block?
List<String> listOne = new ArrayList<String>();
List<String> listTwo = new ArrayList<String>();

/* ... ... ... */

synchronized(this) {
    listOne.add(something);
}
Given the methods:
public void a(String s) {
    synchronized(this) {
        listOne.add(s);
    }
}

public void b(String s) {
    synchronized(this) {
        listTwo.add(s);
    }
}

public void c(String s) {
    listOne.add(s);
}

public void d(String s) {
    synchronized(listOne) {
        listOne.add(s);
    }
}
You can not call a and b at the same time, as they are locked on the same lock.
You can however call a and c at the same time (with multiple threads obviously) as they are not locked on the same lock. This can lead to trouble with listOne.
You can also call a and d at the same time, as d is no different in this context from c. It does not use the same lock.
It is important that you always lock listOne with the same lock, and allow no access to it without a lock. If listOne and listTwo are somehow related and sometimes need updates at the same time / atomically you'd need one lock for access to both of them. Otherwise 2 separate locks may be better.
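For example, a sketch with one dedicated lock object per list (the field and method names are chosen here purely for illustration):

private final List<String> listOne = new ArrayList<>();
private final List<String> listTwo = new ArrayList<>();
private final Object listOneLock = new Object();   // guards listOne only
private final Object listTwoLock = new Object();   // guards listTwo only

public void addToListOne(String s) {
    synchronized (listOneLock) {   // every access to listOne must use this lock
        listOne.add(s);
    }
}

public void addToListTwo(String s) {
    synchronized (listTwoLock) {   // independent lock, so it never blocks listOne callers
        listTwo.add(s);
    }
}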
Of course, you'd probably use the relatively new java.util.concurrent classes if all you need is a concurrent list :)
The lock is on the object instance that you include in the synchronized block.
But take care! That object is NOT intrinsically locked against access by other threads. Only threads that execute a synchronized(obj) on the same object, where obj is this in your example but could in other code be any other reference to that object, wait on that lock.
Thus, threads that don't execute any synchronized statements can access any and all variables of the 'locked' object and you'll probably run into race conditions.
Other threads will block only if you have a synchronized block on the same instance. So no operations on the lists themselves will block.
synchronized(this) {
will only lock the object this. To lock and work with the object listOne:
synchronized(listOne){
listOne.add(something);
}
so that listOne is accessed by only one thread at a time.
See: http://download.oracle.com/javase/tutorial/essential/concurrency/locksync.html
You need to understand that the lock is advisory and is not physically enforced. For example, if you decided that you were going to use an Object to lock access to certain class fields, you must write the code in such a way that it actually acquires the lock before accessing those fields. If you don't, you can still access them and potentially cause deadlocks or other threading issues.
The exception to this is the use of the synchronized keyword on methods where the runtime will automatically acquire the lock for you without you needing to do anything special.
The Java Language specification defines the meaning of the synchronized statement as follows:
A synchronized statement acquires a mutual-exclusion lock (§17.1) on behalf of the executing thread, executes a block, then releases the lock. While the executing thread owns the lock, no other thread may acquire the lock.
SynchronizedStatement:
    synchronized ( Expression ) Block
The type of Expression must be a reference type, or a compile-time error occurs.
A synchronized statement is executed by first evaluating the Expression.
If evaluation of the Expression completes abruptly for some reason, then the synchronized statement completes abruptly for the same reason.
Otherwise, if the value of the Expression is null, a NullPointerException is thrown.
Otherwise, let the non-null value of the Expression be V. The executing thread locks the lock associated with V. Then the Block is executed. If execution of the Block completes normally, then the lock is unlocked and the synchronized statement completes normally. If execution of the Block completes abruptly for any reason, then the lock is unlocked and the synchronized statement then completes abruptly for the same reason.
Acquiring the lock associated with an object does not of itself prevent other threads from accessing fields of the object or invoking unsynchronized methods on the object. Other threads can also use synchronized methods or the synchronized statement in a conventional manner to achieve mutual exclusion.
That is, in your example
synchronized(this) {
listOne.add(something);
}
the synchronized block does not treat the object referred to by listOne in any special way; other threads may work with it as they please. However, it ensures that no other thread may enter a synchronized block for the object referred to by this at the same time. Therefore, if all code working with listOne is in synchronized blocks for the same object, at most one thread may work with listOne at any given time.
Also note that the object being locked on gets no special protection from concurrent access of its state, so the code
void increment() {
    synchronized (this) {
        this.counter = this.counter + 1;
    }
}

void reset() {
    this.counter = 0;
}
is incorrectly synchronized, as a second thread may execute reset while the first thread has read, but not yet written, counter, causing the reset to be overwritten.
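A minimal fix is to make reset acquire the same lock, so the read-modify-write in increment and the write in reset can no longer interleave:

void increment() {
    synchronized (this) {
        this.counter = this.counter + 1;
    }
}

void reset() {
    synchronized (this) {   // same lock as increment
        this.counter = 0;
    }
}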