I'm wondering if there is an easy way to make a synchronized lock that will respond to changing references. I have code that looks something like this:
private void fus(){
    synchronized(someRef){
        someRef.roh();
    }
}
...
private void dah(){
    someRef = someOtherRef;
}
What I would like to happen is:
Thread A enters fus, and acquires a lock on someRef as it calls roh(). Assume roh never terminates.
Thread B enters fus, begins waiting for someRef to be free, and stays there (for now).
Thread C enters dah, and modifies someRef.
Thread B is now allowed to enter the synchronized block, as someRef no longer refers to the object Thread A has a lock on.
What actually happens is:
Thread A enters fus, and acquires a lock on someRef as it calls roh(). Assume roh never terminates.
Thread B enters fus, finds the lock, and waits for it to be released (forever).
Thread C enters dah, and modifies someRef.
Thread B continues to wait, as it's no longer looking at someRef; it's looking at the lock held by A.
Is there a way to set this up such that Thread B will either re-check the lock for changing references, or will "bounce off" into other code? (Something like synchronizedOrElse?)
There surely is a way, but not with synchronized. Reasoning: at the point in time where the second thread enters fus(), the first thread holds the intrinsic lock of the object referenced by someRef. Important: the second thread still sees someRef referencing that very object and will try to acquire its lock. Later on, when the third thread changes the reference someRef, it would somehow have to notify the second thread about this event. That is not possible with synchronized.
To my knowledge, there is no built-in language-feature like synchronized to handle this kind of synchronization.
A somewhat different approach would be to either manage a Lock within your class or give someRef an attribute of type Lock. Instead of working with lock() you can use tryLock() or tryLock(long timeout, TimeUnit unit). This is a scheme for how I would implement it (assuming that someRef has a Lock attribute):
volatile SomeRef someRef = ... // important: volatile, so threads always read the current reference
...
private void fus(){
    while (true) {
        SomeRef someRef = this.someRef;
        Lock lock = someRef.lock;
        boolean unlockNecessary = false;
        try {
            if (lock.tryLock(10, TimeUnit.MILLISECONDS)) { // timeout chosen arbitrarily
                unlockNecessary = true;
                someRef.roh();
                return; // Job is done -> return. Remember: finally will still be executed.
                // Alternatively, break; could be used to exit the loop.
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            if (unlockNecessary) {
                lock.unlock();
            }
        }
    }
}
...
private void dah(){
    someRef = someOtherRef;
}
Now, when someRef is changed, the 2nd thread will see the new value of someRef in its next cycle and therefore will try to synchronize on the new Lock and succeed, if no other thread has acquired the Lock.
What actually happens is ... Thread B continues to wait, as it's no longer looking at someRef, it's looking at the lock held by A.
That's right. You can't write code to synchronize on a variable. You can only write code to synchronize on some object.
Thread B found the object on which to synchronize by looking at the variable someRef, but it only ever looks at that variable one time to find the object. The object is what it locks, and until thread A releases the lock on that object, thread B is going to be stuck.
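To make that concrete, here is a small sketch (the class name is made up) showing that the lock belongs to the object, not to the variable name used to reach it:
public class LockIsOnTheObject {
    public static void main(String[] args) throws InterruptedException {
        Object a = new Object();
        Object b = a;                         // same object, different variable

        Thread t1 = new Thread(() -> {
            synchronized (a) {                // locks the object that 'a' currently refers to
                try { Thread.sleep(3000); } catch (InterruptedException ignored) {}
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (b) {                // same object, so this blocks until t1 is done
                System.out.println("t2 acquired the lock");
            }
        });

        t1.start();
        Thread.sleep(100);                    // give t1 time to grab the lock first
        t2.start();                           // t2 waits about 3 seconds, then prints
    }
}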
I would like to add some more info on top of the excellent answers by @Turing85 and @james large.
I agree that Thread B continues to wait.
It's better to avoid synchronization for this type of program by using a lock-free API.
Atomic variables have features that minimize synchronization and help avoid memory consistency errors.
From the code you have posted, AtomicReference seems to be the right solution for your problem.
Have a look at the documentation page for the java.util.concurrent.atomic package:
A small toolkit of classes that support lock-free thread-safe programming on single variables. In essence, the classes in this package extend the notion of volatile values, fields, and array elements to those that also provide an atomic conditional update operation of the form:
boolean compareAndSet(expectedValue, updateValue);
Here is one more nice SE post related to this topic:
When to use AtomicReference in Java?
Sample code:
String initialReference = "value 1";
AtomicReference<String> someRef = new AtomicReference<String>(initialReference);
String newReference = "value 2";
boolean exchanged = someRef.compareAndSet(initialReference, newReference);
System.out.println("exchanged: " + exchanged);
Refer to this Jenkov tutorial for a better understanding.
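Applied to the code in the question, a rough sketch could look like the following (SomeRef, roh and dah are taken from the question; the rest is illustrative). Note that this makes reference swaps atomic and immediately visible, but it does not by itself provide mutual exclusion around roh():
import java.util.concurrent.atomic.AtomicReference;

class SomeRef {                    // stand-in for the type used in the question
    void roh() { /* ... */ }
}

class Holder {
    // someRef wrapped so that reads and swaps are atomic and visible to all threads
    private final AtomicReference<SomeRef> someRef = new AtomicReference<>(new SomeRef());

    void fus() {
        someRef.get().roh();       // always operates on the latest reference
    }

    void dah(SomeRef someOtherRef) {
        someRef.set(someOtherRef); // safely publishes the new reference
    }
}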
Related
Thread thread = new Thread(() -> {
    synchronized (this) {
        try {
            this.wait();
            System.out.println("Woke");
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
});
thread.start();
TimeUnit.SECONDS.sleep(1);
this.notify();
When calling notify, it says:
java.lang.IllegalMonitorStateException: current thread is not owner
The typical usage of notify is that you call it and then you release the lock implicitly (by leaving the synchronized block) so that the waiting threads may re-acquire the lock.
But the code above calls notify before it even holds the lock, so other threads can simply try to acquire the lock afterwards. Why not allow that? I think holding the lock is not necessary.
I think holding the lock is not necessary.
It is necessary because the javadoc for Object.notify() says it is necessary. It states:
"This method should only be called by a thread that is the owner of
this object's monitor. A thread becomes the owner of the object's
monitor in one of three ways:
By executing a synchronized instance method of that object.
By executing the body of a synchronized statement that synchronizes on the object.
For objects of type Class, by executing a synchronized static method of that class."
But your real question is why is it necessary? Why did they design it this way?
To answer that, we need to understand that Java's wait / notify mechanism is primarily designed for implementing condition variables. The purpose of a condition variable is to allow one thread to wait for a condition to become true and for another thread to notify it that this has occurred. The basic pattern for implementing condition variables using wait() / notify() is as follows:
// Shared lock that provides mutual exclusion for 'theCondition'.
final Object lock = new Object();

// Thread #1
synchronized (lock) {
    // ...
    while (!theCondition) {    // One reason for this loop will become clear later ...
        lock.wait();
    }
    // HERE
}

// Thread #2
synchronized (lock) {
    // ...
    if (theCondition) {
        lock.notify();
    }
}
Thus, when thread #1 reaches // HERE, it knows that theCondition is now true. Furthermore, it is guaranteed that the current values of the variables that make up the condition, and of any others controlled by the lock monitor, will now be visible to thread #1.
But one of the prerequisites for this actually working is that both thread #1 and thread #2 are synchronized on the same monitor. That will guarantee the visibility of the values according to a happens before analysis based on the Java Memory Model (see JLS 17.4).
A second reason that the above needs synchronization is because thread #1 needs exclusive access to the variables to check the condition and then use them. Without mutual exclusion for the shared state between threads #1 and #2, race conditions are possible that can lead to a missed notification.
Since the above only works reliably when threads #1 and #2 hold the monitor when calling wait and notify, the Java designers decided to enforce this in implementations of the wait and notify methods themselves. Hence the javadoc that I quoted above.
Now ... your use-case for wait() / notify() is simpler. No information is shared between the two threads ... apart from the fact that the notify occurred. But it is still necessary to follow the pattern above.
Consider the consequences of this caveat in the javadoc for the wait() methods:
"A thread can wake up without being notified, interrupted, or timing out, a so-called "spurious wakeup". While this will rarely occur in practice, applications must guard against it ..."
So one issue is that a spurious wakeup could cause the child thread to be woken before the main thread's sleep(...) completes.
A second issue is that if the child thread is delayed, the main thread may notify the child before the child has reached the wait. The notification will then be lost. (This might happen due to system load.)
What these issues mean is that your example is incorrect ... in theory, if not in reality. And in fact, it is not possible to solve your problem using wait / notify without following the pattern above.
A corrected version of your example (i.e. one that is not vulnerable to spurious wakeups, and race conditions) looks like this:
// 'lock' and 'wakeUp' need to be fields (not locals) so that the lambda can read
// 'wakeUp' while the main code modifies it.
final Object lock = new Object();
boolean wakeUp = false;

Thread thread = new Thread(() -> {
    synchronized (lock) {
        try {
            while (!wakeUp) {
                lock.wait();
            }
            System.out.println("Woke");
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
});
thread.start();
TimeUnit.SECONDS.sleep(1);
synchronized (lock) {
    wakeUp = true;
    lock.notify();
}
Note that there are simpler and more obviously correct ways to do this using various java.util.concurrent.* classes.
The case where using synchronized makes sense is where the thing using the lock has state that needs to be protected. In that case the lock has to be held while notifying because there are going to be state changes that go along with the notification, so that requiring notify to be called with the lock makes sense.
Using wait/notify without state that indicates when the thread should wait is not safe; it allows race conditions that can result in hanging threads, or in threads that stop waiting without having been notified. It really isn't safe to use wait and notify without keeping state.
If you have code that doesn't otherwise need that state, then synchronized is an overcomplicated/tricky/buggy solution. In the case of the posted code example you could use a CountDownLatch instead, and have something that is simple and safe.
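For the posted example, a CountDownLatch version might look like this (just a sketch of the idea, not the only possible arrangement):
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchExample {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);

        Thread thread = new Thread(() -> {
            try {
                latch.await();          // blocks until the latch reaches zero
                System.out.println("Woke");
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        });

        thread.start();
        TimeUnit.SECONDS.sleep(1);
        latch.countDown();              // no lock to hold, no lost-notification risk
    }
}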
I hope I can describe the situation understandably.
I want to start some number of threads, and all of them will execute one synchronized method. Suppose the first thread checks the value of a variable in this method, and the lock is released after the check. Then the second thread calls the same method. But the first thread will only modify this variable (which lives in another class) some milliseconds later, so the second thread may check the variable before the first has changed it. How can I force the second thread to wait (without sleep) until the first has finished and changed the variable before the second checks the value? Can the first send some signal like "variable changed, you can check it now"?
Now I'll try to write this in code. The threads are started and all of them execute this run method:
abstract class Animal implements Runnable {
    protected House house;

    abstract boolean eating();

    @Override
    public void run() {
        try {
            while (!Thread.interrupted()) {
                if (eating()) {
                    goEat();   // here house.eatingRoom.count will be changed
                    Thread.sleep(1000);
                    goback();
                }
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
All of them access this method:
class Cat extends Animal {
    @Override
    synchronized boolean eating() {
        if (house.eatingRoom.count.isEmpty())
            return true;   // first thread releases the lock and a second thread may check before the value has changed
        else
            return false;
    }
}
And:
class EatingRoom {
    final Set<Animal> count = new HashSet<>();

    synchronized void add(Cat c) {
        count.add(c);
    }
}
to complete:
public class House extends Thread {
    final EatingRoom eatingRoom = new EatingRoom();
    // start all threads here so run in Animal class is executed..
}
The problem you are describing sounds like you could benefit from the Java synchronisation primitives like Object.wait and Object.notify.
A thread that owns the lock/monitor of a given object (such as by using the synchronized keyword) can call wait instead of looping and sleeping in a busy/wait pattern like you have in while(!Thread.interrupted()) which may waste many CPU cycles.
Once the thread enters the wait state it will release the lock it holds, which allows another thread to acquire that same lock and potentially change some state before then notifying one or more waiting threads via notify/notifyAll.
Note that one must be careful to ensure locks are acquired and released in the same order to help avoid deadlock scenarios when more than one lock is involved. Consider also using timeouts when waiting to ensure that your thread doesn't wait indefinitely for a condition that might never arise. If there are many waiting threads when you call notify be aware that you might not know which thread will be scheduled but you can set a fairness policy to help influence this.
Depending on the structure of your code you may be able to avoid some of the lower-level primitives like synchronized blocks by using higher-level APIs such as https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/locks/Lock.html, or keywords like volatile for variables that contain shared mutable state (like a condition you want to wait for), to ensure that the result of a write is observed on a subsequent read in a "happens before" relationship.
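As a rough sketch of how wait/notifyAll could be applied to the eating-room idea from the question (the method names enter and leave are made up; Animal is the question's class), the check-and-wait could live inside the shared object:
import java.util.HashSet;
import java.util.Set;

class EatingRoom {
    private final Set<Animal> count = new HashSet<>();

    synchronized void enter(Animal a) throws InterruptedException {
        while (!count.isEmpty()) {   // re-check the condition after every wake-up
            wait();                  // releases the lock while waiting
        }
        count.add(a);
    }

    synchronized void leave(Animal a) {
        count.remove(a);
        notifyAll();                 // wake waiting animals so they re-check the condition
    }
}
The key points are that wait releases the lock so another thread can change the state, and that the condition is re-checked in a loop after each notification.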
I have a class in java that reads UDP packets and puts them in an object (in a basically infinite loop). This object is then accessed in multiple separate threads, but obviously, since it is being filled at the same time, all these getters/setters are in synchronized methods. Problem is, right now these getters have code like this:
public synchronized SomeObject exampleGetter() {
    if (this.isReceiving)
        return oldCachedObject;
    else
        return currentObject;
}
Obviously, that's not quite the best way of doing things, so how should I go about writing methods (lots of different ones) that totally lock the object to one thread at a time and block the others (including the thread that created the object in the first place)? I looked at synchronized blocks, but I am kind of confused as to what effect the "lock object" has. Is that the object that has access to the block at that given time? Any advice would be appreciated. Thanks!
The synchronized keyword synchronizes on the whole object instance, not just the individual getter or setter. I would rather go for a fine-grained locking strategy or, better, use a thread-safe data structure to store and retrieve the received data. I personally love the BlockingQueue<T>, where T is the type of data you receive on the network.
So suppose you are receiving Objects over a socket:
public class ReceivedDataHolder {
    BlockingQueue<Object> dataBuffer = new LinkedBlockingQueue<Object>();
    //...

    public void dataReceived(Object data) {
        dataBuffer.offer(data);
    }

    public Object getReceivedData() throws InterruptedException {
        return dataBuffer.take();     // take() blocks until an element is available
    }
}
And in your socket you could do this whenever you receive data:
receivedDataHolder.dataReceived(object);
Any thread that wants to get data should do:
receivedDataHolder.getReceivedData();
This latter method call will block the calling thread until there is an element available on the queue (check this out for more details)
I hope this helps
Maybe AtomicReference would be suitable for you.
See:
java.util.concurrent.atomic
Java volatile reference vs. AtomicReference
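For the getter in the question, one way to use it would be to have the receive loop build a complete object and publish it through an AtomicReference, so readers never see a half-filled object. A minimal sketch (Snapshot, publish and current are made-up names):
import java.util.concurrent.atomic.AtomicReference;

final class Snapshot { /* immutable fields parsed from one packet */ }

class PacketHolder {
    private final AtomicReference<Snapshot> latest = new AtomicReference<>();

    // Called by the receive loop once a packet has been fully parsed
    void publish(Snapshot complete) {
        latest.set(complete);      // atomically swaps in the finished object
    }

    // Called by any reader thread; no locking needed
    Snapshot current() {
        return latest.get();
    }
}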
Every object in Java has something called an intrinsic lock. If a thread wants to execute a synchronized block or method on an object, it needs to acquire that object's intrinsic lock; this guarantees that only one thread will execute your block of code at any given time.
A thread can acquire the lock on an object if that object is not locked by any other thread; if it is locked, the thread will wait until the other thread releases the lock on that object.
If you use a synchronized block, your code will look somewhat like this:
public SomeObject exampleGetter() {
    synchronized (this) {
        if (this.isReceiving)
            return oldCachedObject;
        else
            return currentObject;
    }
}
In this case, when your thread reaches the synchronized block, if any other thread holds the lock on this object, it will wait until that thread releases the lock. If the object is free, your thread will acquire the lock, perform the operation, and then release the lock on that object.
For further information on synchronized blocks, methods, and intrinsic locks, refer to
http://docs.oracle.com/javase/tutorial/essential/concurrency/locksync.html
I hope it helped you :)
I came across a code like this
synchronized(obj) {
    obj = new Object();
}
Something does not feel right about this, but I am unable to explain why. Is this piece of code OK, or is there something really wrong with it? Please point it out.
Thanks
It's probably not what you want to do. You're synchronizing on an object that you're no longer holding a reference to. Consider another thread running this method: they may enter and try to hit the lock at the moment after the reference to obj has been updated to point to the new object. At that point, they're synchronizing on a different object than the first thread. This is probably not what you're expecting.
Unless you have a good reason not to, you probably want to synchronize on a final Object (for visibility's sake.) In this case, you would probably want to use a separate lock variable. For example:
class Foo
{
    private final Object lock = new Object();
    private Object obj;

    public void method()
    {
        synchronized(lock)
        {
            obj = new Object();
        }
    }
}
If obj is a local variable and no other thread is evaluating it in order to acquire a lock on it as shown here then it doesn't matter. Otherwise this is badly broken and the following applies:
(Posting this because the other answers are not strongly-worded enough --"probably" is not sufficient here -- and do not have enough detail.)
Every time a thread encounters a synchronized block, before it can acquire the lock it has to figure out what object it needs to lock on, by evaluating the expression in parens following the synchronized keyword.
If the reference is updated after the thread evaluates this expression, the thread has no way of knowing that. It will proceed to acquire the lock on the old object that it identified as the lock before. Eventually it enters the synchronized block holding the old object's lock, while another thread (that tries to enter the block after the reference changed) evaluates the expression to the new object and enters the same block holding the new object's lock. At that point you have no mutual exclusion.
The relevant section in the JLS is 14.19. The thread executing the synchronized statement:
1) evaluates the expression, then
2) acquires the lock on the value that the expression evaluates to, then
3) executes the block.
It doesn't revisit the evaluation step again at the time it successfully acquires the lock.
This code is broken. Don't do this. Lock on things that don't change.
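A compact illustration of the failure mode described above (the class name is made up); the comments mark where two threads can end up inside the block at the same time:
class Broken {
    private Object obj = new Object();

    void method() {
        synchronized (obj) {        // each caller locks whatever 'obj' refers to right now
            obj = new Object();     // later callers will evaluate 'obj' to this new object
            // Two threads can now be in this block at the same time:
            // one holding the old object's lock, one holding the new object's lock.
        }
    }
}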
This is a case where someone might think what they are doing is OK, but it probably isn't what they intended. In this case, you are synchronizing on the current value in the obj variable. Once you create a new instance and place it in the obj variable, the lock conditions will change. If that is all that is occurring in this block, it will probably work - but if it is doing anything else afterwards, the object will not be properly synchronized.
Better to be safe and synchronize on the containing object, or on another mutex entirely.
It's an uncommon usage, but it seems to be valid in some scenarios. Here is one I found in the codebase of JmDNS:
public Collection<? extends DNSEntry> getDNSEntryList(String name) {
    Collection<? extends DNSEntry> entryList = this._getDNSEntryList(name);
    if (entryList != null) {
        synchronized (entryList) {
            entryList = new ArrayList<DNSEntry>(entryList);
        }
    } else {
        entryList = Collections.emptyList();
    }
    return entryList;
}
What it does is synchronize on the returned list so that the list does not get modified by others while a copy of it is being made. In this special situation the lock is only needed for the original object.
so let's say that I have a static variable, which is an array of size 5.
And let's say I have two threads, T1 and T2, they both are trying to change the element at index 0 of that array. And then use the element at index 0.
In this case, I should lock the array until T1 is finished using the element, right?
Another question: let's say T1 and T2 are already running. T1 accesses the element at index 0 first, then locks it. Right after that, T2 tries to access the element at index 0, but T1 hasn't unlocked index 0 yet. In this case, what should T2 do in order to access the element at index 0? Should T2 use a callback function that runs after T1 unlocks index 0 of the array?
Synchronization in Java is (technically) not about refusing other threads access to an object, it's about ensuring exclusive usage of it (at any one time) between threads using synchronization locks. So T2 can access the object while T1 has the synchronization lock, but will be unable to obtain the synchronization lock until T1 releases it.
You synchronize (lock) when you're going to have multiple threads accessing something.
The second thread is going to block until the first thread releases the lock (exits the synchronized block)
More fine-grained control can be had by using java.util.concurrent.locks and using non-blocking checks if you don't want threads to block.
1) Basically, yes. You needn't necessarily lock the array, you could lock at a higher level of granularity (say, the enclosing class if it were a private variable). The important thing is that no part of the code tries to modify or read from the array without holding the same lock. If this condition is violated, undefined behaviour could result (including, but not limited to, seeing old values, seeing garbage values that never existed, throwing exceptions, and going into infinite loops).
2) This depends partly on the synchronization scheme you're using, and your desired semantics. With the standard synchronized keyword, T2 would block indefinitely until the monitor is released by T1, at which point T2 will acquire the monitor and continue with the logic inside the synchronized block.
If you want finer-grained control over the behaviour when a lock is contended, you could use explicit Lock objects. These offer tryLock methods (both with a timeout, and returning immediately) which return true or false according to whether the lock could be obtained. Thus you could then test the return value and take whatever action you like if the lock isn't immediately obtained (such as registering a callback function, incrementing a counter and giving feedback to a user before trying again, etc.).
However, this custom reaction is seldom necessary, and notably increases the complexity of your locking code, not to mention the large possibility of mistakes if you forget to always release the lock in a finally block if and only if it was acquired successfully, etc. As a general rule, just go with synchronized unless/until you can show that it's providing a significant bottleneck to your application's required throughput.
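If you do end up needing that finer-grained control, a tryLock sketch might look like this (the class and method names are made up; the timeout is arbitrary):
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class Slots {
    private final Lock lock = new ReentrantLock();
    private final int[] slots = new int[5];

    // Returns false instead of blocking forever when the lock is contended
    boolean tryUpdate(int index, int value) throws InterruptedException {
        if (lock.tryLock(50, TimeUnit.MILLISECONDS)) {
            try {
                slots[index] = value;
                return true;
            } finally {
                lock.unlock();    // always release in finally, and only if acquired
            }
        }
        return false;             // caller decides: retry, report to the user, etc.
    }
}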
I should lock the array until T1 is finished using the element, right?
Yes, to avoid race conditions that would be a good idea.
what should T2 do
Lock the array, then read the value. At this point you know no one else can modify it. When using locks such as monitors, a queue is automatically kept by the system. Hence, if T2 tries to access an object locked by T1, it will block (hang) until T1 releases the lock.
Sample code:
private Object[] array;
private static final Object lockObject = new Object();

public void modifyObject() {
    synchronized (lockObject) {
        // read or modify the objects
    }
}
Technically you could also synchronize on the array itself.
You don't lock a variable; you lock a mutex, which protects a specific range of code. And the rule is simple: if any thread modifies an object, and more than one thread accesses it (for any reason), all accesses must be fully synchronized. The usual solution is to define a mutex to protect the variable, request a lock on it, and free the lock once the access has finished. When a thread requests a lock, it is suspended until that lock has been freed.
In C++, it is usual to use RAII to ensure that the lock is freed, regardless of how the block is exited. In Java, a synchronized block will acquire the lock at the start (waiting until it is available), and release the lock when the program leaves the block (for whatever reason).
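If explicit Lock objects are used instead of synchronized, the try/finally idiom plays the role that RAII plays in C++. A minimal sketch (class and field names are made up):
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class Shared {
    private final Lock mutex = new ReentrantLock();
    private int value;

    void update(int newValue) {
        mutex.lock();           // suspends the caller until the lock is free
        try {
            value = newValue;   // access protected by the mutex
        } finally {
            mutex.unlock();     // released on every exit path, like RAII in C++
        }
    }
}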
Have you considered using AtomicReferenceArray? http://download.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/atomic/AtomicReferenceArray.html It provides a #getAndSet method that offers a thread-safe, atomic way to update indexes.
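A minimal sketch of the idea (the class and method names are made up; the element type is arbitrary):
import java.util.concurrent.atomic.AtomicReferenceArray;

class SlotStore {
    // Five slots, matching the array size in the question
    private final AtomicReferenceArray<String> slots = new AtomicReferenceArray<>(5);

    // Atomically installs a new value at index 0 and returns the previous one,
    // without any explicit locking
    String swapFirst(String newValue) {
        return slots.getAndSet(0, newValue);
    }
}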
T1 accesses the element at index 0 first, then locks it.
First lock on a static final mutex variable, then access your static variable:
static final Object lock = new Object();

synchronized(lock) {
    // access static reference
}
Or, better, lock on the class reference:
synchronized(YourClassName.class) {
    // access static reference
}