I have a confusion regarding the ReentrantLock's Condition. Here is the documentation:
Waiting threads are signalled in FIFO order.
The ordering of lock reacquisition for threads returning from waiting
methods is the same as for threads initially acquiring the lock, which
is in the default case not specified, but for fair locks favors those
threads that have been waiting the longest.
According to the last bullet, fairness brings a well-specified ordering of lock reacquisition on signalling.
But what is the meaning of the first bullet, Waiting threads are signalled in FIFO order? I presume that in this case signalling means just "signalling", i.e. it "unparks" the threads in FIFO order, but the actual reacquisition order on wake-up is governed by the fairness.
There is a pretty large amount of stuff tied to the cxq and wait queues internal to HotSpot which I, unfortunately, don't understand well.
QUESTION:
Does Waiting threads are signalled in FIFO order mean that waiting threads are unparked in the same order they were parked (even though the lock itself is unfair)?
Does fairness provide reacquisition ordering guarantees, which is necessary since there is an unpark-reacquire race in the general case?
As explained in Difference in internal storing between 'fair' and 'unfair' lock, the actual difference between “fair” and “unfair” is not the organization of the queue, but that in unfair mode, a thread trying to acquire the lock might succeed even when there are already waiting threads in the queue. Such an overtaking thread will not interact with the queue at all.
A thread calling one of the await methods on a Condition must already own the associated lock and will release it so that another thread can acquire it, fulfill the condition and invoke signal or signalAll. So the thread must enqueue itself, so that the other thread knows which thread to signal. When signal is invoked, the thread waiting the longest time for the condition is fetched from the FIFO.
The signalled thread may get unparked, but it's also possible that it hasn't parked yet. In either case, it must reacquire the lock, and this reacquisition is subject to the lock's fairness guarantee. A thread must own the lock by the time it calls signal, therefore the signalled thread can't succeed immediately. When the lock is released, there might be a race between multiple threads.
But the signalling in FIFO order for a condition implies that when two or more threads are waiting on the same condition and one gets signalled, it will be the longest waiting thread and none of the others can overtake, even for an unfair lock. Only when more than one thread is signalled or other threads, not waiting for the condition, try to acquire the lock, the acquisition order of an unfair lock is arbitrary. Also, as the linked answer mentions, tryLock() may overtake even on a fair lock.
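For illustration, here is a minimal sketch of that FIFO signalling (the class name, the permits counter and the sleeps are mine and only serve to make the enqueue order observable; they are not a synchronization mechanism). Even with a default, unfair ReentrantLock, each signal() wakes the longest-waiting thread on the condition:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionFifoDemo {
    static final ReentrantLock lock = new ReentrantLock();   // default (unfair) lock
    static final Condition available = lock.newCondition();
    static int permits = 0;

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 3; i++) {
            int id = i;
            new Thread(() -> {
                lock.lock();
                try {
                    while (permits == 0) {        // guard against spurious wake-ups
                        available.await();        // releases the lock while waiting
                    }
                    permits--;
                    System.out.println("thread " + id + " proceeds");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    lock.unlock();
                }
            }).start();
            Thread.sleep(100);   // crude way to make the await (enqueue) order deterministic
        }
        for (int i = 0; i < 3; i++) {
            lock.lock();
            try {
                permits++;
                available.signal();   // wakes the longest-waiting thread on this condition
            } finally {
                lock.unlock();
            }
            Thread.sleep(100);
        }
    }
}

With the sleeps in place the threads enqueue on the condition in start order, so the expected output is thread 0, 1, 2 even though the lock itself is unfair.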
Reading the source code of ReentrantLock (Java 12), we can see that there is only a small difference between the fair and the non-fair ReentrantLock. The difference lies in the class that extends java.util.concurrent.locks.AbstractQueuedSynchronizer: in one case it is FairSync, in the other NonfairSync. Both are defined in ReentrantLock, and the only difference is that FairSync performs one more check in the method tryAcquire.
Reading the code, it seems that under optimal conditions FIFO order is respected even in a non-fair ReentrantLock, but this is not guaranteed because of cancellation, time-outs and the like. In a fair ReentrantLock any thread, before acquiring the lock (even if it was unparked from the queue), re-checks whether there are older waiting threads.
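Roughly, that extra check looks like this. This is a simplified paraphrase of the tryAcquire logic in NonfairSync and FairSync (it would live inside an AbstractQueuedSynchronizer subclass), not a verbatim copy of the JDK sources, and reentrant acquisition by the current owner is omitted:

// Simplified paraphrase of the non-fair path: barge if the lock looks free,
// regardless of the queue.
protected boolean tryAcquireNonfair(int acquires) {
    if (getState() == 0 && compareAndSetState(0, acquires)) {
        setExclusiveOwnerThread(Thread.currentThread());
        return true;
    }
    return false;
}

// Simplified paraphrase of the fair path: identical except that it first asks
// whether another thread has been queued longer.
protected boolean tryAcquireFair(int acquires) {
    if (getState() == 0 && !hasQueuedPredecessors() && compareAndSetState(0, acquires)) {
        setExclusiveOwnerThread(Thread.currentThread());
        return true;
    }
    return false;
}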
I'm not sure I understand the second question, but note that a thread is unparked from the queue by the thread that releases the lock. Even if the releasing thread unparks the oldest thread in the queue, this is not enough to avoid starvation, because a third thread can request the lock concurrently and gain it before the exiting thread has unparked the waiting one. In fair mode, every thread that tries to gain the lock first checks for already-waiting threads, and this guarantees FIFO order and avoids starvation.
External interrupts of a waiting thread do not change the queue order.
Related
The (Oracle) javadoc for Semaphore.release() includes:
If any threads are trying to acquire a permit, then one is selected and given the permit that was just released.
Is this a hard promise? This implies that if thread A is waiting in acquire() and thread B does this:
sem.release()
sem.acquire()
Then the release() should pass control to A and B will be blocked in acquire(). If these are the only two threads that can hold the semaphore and the doc statement is formally true, then this is a completely deterministic process: Afterward, A will have the permit and B will be blocked.
But this does not appear to be true, or at least that is how it seems to me. I haven't bothered with an SSCCE here since I am really just looking for confirmation that:
Race conditions apply: Even though thread A is waiting on the permit, when it is released it can be immediately re-acquired by thread B, leaving thread A still blocked.
These are "fair" semaphores, if that makes any difference, and I'm actually working in Kotlin.
In comments on the question Slaw pointed out something else from the documentation:
When fairness is set true, the semaphore guarantees that threads invoking any of the acquire methods are selected to obtain permits in the order in which their invocation of those methods was processed (first-in-first-out; FIFO). Note that FIFO ordering necessarily applies to specific internal points of execution within these methods. So, it is possible for one thread to invoke acquire before another, but reach the ordering point after the other, and similarly upon return from the method.
The point here is that acquire() is an interruptible function with a beginning and an end. At some point during its execution the calling thread secures a spot in the fairness queue, but when that happens relative to another thread concurrently executing the same function is still indeterminate. Call this point X and consider two threads, one of which holds the semaphore. At some point the other thread calls:
sem.acquire()
There is no guarantee that the scheduler won't sideline the thread inside acquire() before point X is reached. If the owner thread then does this (this could be, e.g., intended as some kind of synchronization checkpoint or barrier control):
sem.release()
sem.acquire()
It could simply release and acquire the semaphore without it being acquired by another thread even if that thread has already entered acquire.
The injection of Thread.sleep() or yield() between the calls might often work, but it is not a guarantee. To create such a checkpoint with that guarantee you need two locks/semaphores for an exchange:
Owner thread holds semA.
Client thread can take semB and then wait on semA.
Owner can release semA and then wait on semB; if another thread is really waiting for semA while holding semB, this will block and guarantee that semA can now be acquired by the client.
When the client is done, it releases semB, then semA.
When the owner is released from waiting on semB, it can acquire semA and release semB.
If these are properly encapsulated, this mechanism is rock solid; a sketch of the exchange follows below.
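Here is one way that exchange could be coded in Java (the class and method names are mine; fair semaphores are used as in the question, although the handoff itself does not depend on fairness):

import java.util.concurrent.Semaphore;

class Checkpoint {
    // semA starts with 0 permits: the owner conceptually holds it.
    private final Semaphore semA = new Semaphore(0, true);
    // semB starts with 1 permit: free for a client to take.
    private final Semaphore semB = new Semaphore(1, true);

    // Called by a client that wants to run while the owner is parked at its checkpoint.
    void clientSection(Runnable work) throws InterruptedException {
        semB.acquire();              // announce intent; keeps other clients out
        boolean haveA = false;
        try {
            semA.acquire();          // wait until the owner reaches its checkpoint
            haveA = true;
            work.run();              // the owner is blocked on semB, so we run exclusively
        } finally {
            semB.release();          // let the owner resume (it is waiting on semB)
            if (haveA) {
                semA.release();      // hand the ownership permit back
            }
        }
    }

    // Called by the owner; blocks until any client already holding semB has finished.
    void ownerCheckpoint() throws InterruptedException {
        semA.release();              // give up ownership so a waiting client can take it
        semB.acquire();              // if a client holds semB, block here until it is done
        semA.acquire();              // take ownership back (interrupt recovery omitted for brevity)
        semB.release();              // free semB for the next client
    }
}

Note the behaviour when no client is waiting: the owner's checkpoint simply passes straight through, which matches the semantics described above, since the guarantee only applies to a client that already holds semB.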
I would like to ask you a question related to multithreading in Java.
I have a monitor and multiple threads are eager to own it.
Also inside the critical section this.wait() is invoked based on some conditions.
AFAIK, the monitor has 2 sets of threads:
entry set - where just arrived threads congregate and wait for their turn to own the monitor
wait set - where threads that called this.wait() wait to be awakened
But how do they compete when notify/notifyAll is called?
Do threads from the wait set have priority in acquiring the monitor over threads in the entry set, or do they move to the entry set?
Can I be sure that in case of notify the next executed thread will be one from the wait set?
No. The scheduler is in charge of which thread gets the lock next. It might be one from the wait set that got notified. It might be a thread that has just arrived and hasn't entered the wait set. Assuming the thread that just got notified will get the monitor next is not safe.
The standard advice is to call wait in a loop where we check the condition being waited on:
synchronized (lock) {
    while (!condition) {
        lock.wait();
    }
    // ... proceed while still holding the lock
}
That way when a thread comes out of a wait, it makes the same check as any thread that hasn't waited yet to know whether to progress or not.
If you need fairness, where you want the longest-waiting thread to acquire the lock next, then you might try one of the explicit Locks from java.util.concurrent.locks, such as ReentrantLock, but read the fine print:
The constructor for this class accepts an optional fairness parameter. When set true, under contention, locks favor granting access to the longest-waiting thread. Otherwise this lock does not guarantee any particular access order. Programs using fair locks accessed by many threads may display lower overall throughput (i.e., are slower; often much slower) than those using the default setting, but have smaller variances in times to obtain locks and guarantee lack of starvation. Note however, that fairness of locks does not guarantee fairness of thread scheduling. Thus, one of many threads using a fair lock may obtain it multiple times in succession while other active threads are not progressing and not currently holding the lock. Also note that the untimed tryLock method does not honor the fairness setting. It will succeed if the lock is available even if other threads are waiting.
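The last sentence deserves emphasis: the untimed tryLock() barges even on a fair lock, while, according to the same javadoc, the timed tryLock(timeout, unit) does respect fairness, so a zero-timeout timed call can serve as a queue-respecting non-blocking attempt. A small sketch (class name is mine):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockFairness {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock(true);   // fair ordering among queued threads

        // Untimed tryLock(): grabs the lock if it is free, even if other threads are queued.
        if (lock.tryLock()) {
            try {
                System.out.println("acquired by barging");
            } finally {
                lock.unlock();
            }
        }

        // Timed tryLock(): on a fair lock an available lock is NOT taken while others are queued.
        if (lock.tryLock(0, TimeUnit.SECONDS)) {
            try {
                System.out.println("acquired without jumping the queue");
            } finally {
                lock.unlock();
            }
        }
    }
}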
Under what conditions can this happen?
As far as I know
Blocked queue is a buffer between threads producing objects and consuming objects.
Wait queue prevents threads from competing for the same lock.
So a thread gets a lock, but is unable to be passed on to the consumer as it is now busy?
The question only makes sense under the assumption that it actually means “What circumstances can cause a thread to change from the wait state to the blocked state?”
There might be a particular scheduler implementation maintaining these threads in a dedicated queue, having to move threads from one queue to another upon these state changes and influencing the mindset of whoever originally formulated the question, but such a question shouldn’t be loaded with assumed implementation details. As a side note, while a queue of runnable threads would make sense, I can’t imagine a real reason to put blocked or waiting threads into a (global) queue.
If this is the original intention of the question, it should not be confused with Java classes implementing queues and having similar sounding names.
A thread is in the blocked state if it tries to enter a synchronized method or code fragment while another thread owns the object monitor. From there, the thread will turn to the runnable state if the owner releases the monitor and the blocked thread succeeds in acquiring the monitor.
A thread is in the waiting state if it performs an explicit action that can only proceed when another thread performs an associated action, i.e. if the thread calls wait on an object, it can only proceed when another thread calls notify on the same object. If the thread calls LockSupport.park(), another thread has to call LockSupport.unpark() with that thread as argument. When it calls join on another thread, that thread must end its execution to end the wait. The waiting state may also end due to interruption or spurious wakeups.
As a special case, Java considers threads to be in the timed_waiting state if they called the methods mentioned above with a timeout or if they execute Thread.sleep. This state only differs from the waiting state in that it might also end due to elapsed time.
When a thread calls wait on an object, it must own the object’s monitor, i.e. be inside a synchronized method or code block. The monitor is released upon this call and reacquired when returning. When it can’t be reacquired immediately, the thread will go from the waiting or timed_waiting state to the blocked state.
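To make the two states visible, here is a small sketch (the names and the sleeps are mine; the sleeps are only a crude way to sequence the demo) that reports WAITING for a thread parked in wait() and BLOCKED for a thread stuck on monitor entry:

public class StateDemo {
    static final Object monitor = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread waiting = new Thread(() -> {
            synchronized (monitor) {
                try {
                    monitor.wait();              // releases the monitor, state becomes WAITING
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        Thread blocked = new Thread(() -> {
            synchronized (monitor) {             // cannot enter while main holds the monitor
            }
        });

        waiting.start();
        Thread.sleep(200);                       // give it time to call wait() and release the monitor

        synchronized (monitor) {                 // main now owns the monitor
            blocked.start();
            Thread.sleep(200);                   // give it time to block on monitor entry
            System.out.println(waiting.getState()); // WAITING
            System.out.println(blocked.getState()); // BLOCKED
        }

        synchronized (monitor) {
            monitor.notifyAll();                 // ends the wait; the woken thread may briefly be BLOCKED
        }
    }
}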
I have a set of wrapped ReentrantLocks that have unique integer ids, where I require threads to acquire lower-id locks before they acquire higher-id locks in order to prevent deadlock. Three of the locks (lock0, lock1, and lock2) have greater priority than the locks with higher ids, while all of the other locks have the same priority - this means that if a thread acquires one of these three high-priority locks then they need to interrupt the other threads that are holding on to the necessary low-priority locks. For example, Thread1 is holding on to lock4 and lock5, and Thread0 is holding on to lock0 and needs lock4 and lock5, so Thread0 interrupts Thread1 (which acquired its locks using lockInterruptibly and which occasionally queries its isInterrupted method) and takes its locks. Thread1 waits until signaled by Thread0 (i.e. it doesn't try to reacquire the locks until Thread0 is finished with them).
This isn't ideal for several reasons. Ideally I'd like Thread0 to immediately signal that it wants the lock so that other threads don't jump ahead and acquire the lock while Thread0 is taking care of interrupting Thread1 (i.e. Thread0 goes on the queue of waiting threads so that if Thread2 tries to acquire the lock while Thread0 is interrupting Thread1 then Thread0 will still get the lock before Thread2) - e.g. I'd like a tryLock variant that adds the thread to the lock's queue if another thread is holding the lock. Also, at present if there are threads queued on lock4 or lock5 then Thread0 has to wait for them to acquire the lock and then interrupt them - it would be better if Thread0 could empty the waiting queue on the locks (assuming that none of the waiting threads held high-priority locks). Finally, it isn't necessary for Thread1 to give up all of its locks: in the case that Thread0 wants lock0 and lock5 while Thread1 wants lock4 and lock5, then Thread1 need not give up its lock on lock4, but lockInterruptibly will cause the thread to give up all of its locks when interrupted (assuming that the thread was interrupted while waiting to acquire a lock).
Before I reinvent the wheel, I was wondering if there was a lock class that already implemented some or all of these requirements, some sort of PriorityReentrantLock or whatever. I've looked at AbstractQueuedSynchronizer and I don't think it would require too much work on my part to modify it to meet my needs (especially since I only need two priorities), but the less new code I need to test the better.
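For context, the ordered, interruptible acquisition discipline described above is along these lines (a simplified, illustrative sketch; the class and method names are not my actual code, and the priority/interrupt machinery is the part that would need custom work):

import java.util.Arrays;
import java.util.concurrent.locks.ReentrantLock;

class OrderedLocks {
    // One ReentrantLock per id; lower ids must always be taken first to avoid deadlock.
    private final ReentrantLock[] locks;

    OrderedLocks(int count) {
        locks = new ReentrantLock[count];
        for (int i = 0; i < count; i++) {
            locks[i] = new ReentrantLock();
        }
    }

    // Acquires the given lock ids in ascending order; backs out on interrupt.
    void acquireAll(int... ids) throws InterruptedException {
        int[] sorted = ids.clone();
        Arrays.sort(sorted);                       // enforce the global ordering
        int taken = 0;
        try {
            for (int id : sorted) {
                locks[id].lockInterruptibly();     // a high-priority thread can interrupt us here
                taken++;
            }
        } catch (InterruptedException e) {
            for (int i = taken - 1; i >= 0; i--) { // release everything acquired so far
                locks[sorted[i]].unlock();
            }
            throw e;
        }
    }

    void releaseAll(int... ids) {
        int[] sorted = ids.clone();
        Arrays.sort(sorted);
        for (int i = sorted.length - 1; i >= 0; i--) {   // release in reverse order
            locks[sorted[i]].unlock();
        }
    }
}

A thread wanting lock4 and lock5 would call acquireAll(4, 5), do its work, then releaseAll(4, 5); a high-priority thread can interrupt it while it is parked in lockInterruptibly.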
I've read many docs about thread states; some of them say that there are two different states, blocked (before synchronized) and wait (after calling wait), while others say that there is only one state: wait. Moreover, some docs say that you should call notify() for every wait(), and that if you don't, the waiting threads will never be eligible for execution even if the monitor is unlocked.
From your last sentence I see you don't fully understand the difference between synchronized and wait()/notify().
Basically, a monitor has a lock and a condition. These are almost orthogonal concepts.
When a thread enters a synchronized block, it acquires the lock. When the thread leaves that block, it releases the lock. Only one thread at a time can hold the lock of a particular monitor.
When a thread holding the lock calls wait(), it releases the lock and starts waiting on the monitor's condition. When a thread holding the lock calls notify(), one of the threads (all threads in the case of notifyAll()) waiting on the condition becomes eligible for execution (and starts waiting to acquire the lock, since the notifying thread still holds it).
So, waiting to acquire a lock (Thread.State.BLOCKED) and waiting on the monitor's condition (Thread.State.WAITING) are different and independent states.
This behaviour becomes clearer if you look at the Lock class - it implements the same synchronization primitives as a synchronized block (with some extensions), but provides a clear distinction between locks and conditions.
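To illustrate the mapping, here is a small side-by-side sketch (field and method names are mine): synchronized corresponds to lock()/unlock(), wait() to Condition.await(), and notify()/notifyAll() to signal()/signalAll().

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class MonitorStyle {
    private final Object monitor = new Object();
    private boolean ready;

    void awaitReady() throws InterruptedException {
        synchronized (monitor) {          // acquiring the lock; contenders are BLOCKED
            while (!ready) {
                monitor.wait();           // releases the lock and waits; state is WAITING
            }
        }
    }

    void publish() {
        synchronized (monitor) {
            ready = true;
            monitor.notifyAll();          // waiters become eligible, then compete for the lock
        }
    }
}

class LockStyle {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition readyCondition = lock.newCondition();
    private boolean ready;

    void awaitReady() throws InterruptedException {
        lock.lock();                      // explicit lock instead of synchronized
        try {
            while (!ready) {
                readyCondition.await();   // explicit condition instead of wait()
            }
        } finally {
            lock.unlock();
        }
    }

    void publish() {
        lock.lock();
        try {
            ready = true;
            readyCondition.signalAll();   // instead of notifyAll()
        } finally {
            lock.unlock();
        }
    }
}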
There are two different states BLOCKED and WAITING.
The part about waiting forever if no one notifies (or interrupts) you is true.
Standard doc is here
When a thread calls Object.wait method, it releases this acquired monitor and is put into WAITING (or TIMED_WAITING if we call the timeout versions of the wait method) state.
Now when the thread is notified either by notify() or by notifyAll() call on the same object then the waiting state of the thread ends and the thread starts attempting to regain all the monitors which it had acquired at the time of wait call. At one time there may be several threads trying to regain (or maybe gain for the first time) their monitors. If more than one threads attempt to acquire the monitor of a particular object then only one thread (selected by the JVM scheduler) is granted the monitor and all other threads are put into BLOCKED state.
From Java's perspective (Thread.State), there are two different states: BLOCKED and WAITING. When a thread is waiting to synchronize on an Object, it is in the BLOCKED state. After a thread executes wait, it is in the WAITING state.
On the Linux platform, a Java thread is an OS-native thread. The OS thread state for both BLOCKED and WAITING is interruptible sleep. When checked with ps, the state for both BLOCKED and WAITING threads shows as "Sl+".