I know Thread.sleep() keeps holding the lock, while Object.wait() releases it. Some say yield is effectively implemented as sleep(0). Does this mean yield will not release the lock?
Another question: say the current thread has acquired a lock and then calls anotherThread.join(). Does the current thread release the lock?
Unless the javadoc mentions an object's monitor (such as Object.wait()), you should assume that any locks will continue to be held. So:
Does this mean yield will not release the lock?
Yes.
Does the current thread release the lock?
No.
sleep puts the thread into a waiting state, while yield returns the thread directly to the ready pool. (So if a thread yields, it could go straight from running to the ready pool and be picked by the scheduler again without ever waiting.) Neither one has anything to do with locking.
From the Java Language Specification:
Thread.sleep causes the currently executing thread to sleep
(temporarily cease execution) for the specified duration, subject to
the precision and accuracy of system timers and schedulers. The thread
does not lose ownership of any monitors, and resumption of execution
will depend on scheduling and the availability of processors on which
to execute the thread.
It is important to note that neither Thread.sleep nor Thread.yield
have any synchronization semantics. In particular, the compiler does
not have to flush writes cached in registers out to shared memory
before a call to Thread.sleep or Thread.yield, nor does the compiler
have to reload values cached in registers after a call to Thread.sleep
or Thread.yield.
For example, in the following (broken) code fragment, assume that
this.done is a non-volatile boolean field:
while (!this.done)
    Thread.sleep(1000);
The compiler is free to read the field this.done just once, and reuse
the cached value in each execution of the loop. This would mean that
the loop would never terminate, even if another thread changed the
value of this.done.
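For completeness, here is a minimal sketch of how that broken fragment is usually fixed: declaring the flag volatile forces every iteration to re-read it from shared memory. The class and method names here are just illustrative, not from the JLS.

class Worker {
    // volatile guarantees each read of 'done' sees the latest write
    // from another thread, so the loop below can actually terminate.
    private volatile boolean done = false;

    void waitUntilDone() throws InterruptedException {
        while (!this.done) {
            Thread.sleep(1000); // still keeps any monitors this thread holds
        }
    }

    void markDone() {
        this.done = true; // called from another thread
    }
}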
While going through the Blocking/Non-Blocking Algorithms section at the link, I came across the code below, which is used to explain the atomic compareAndSet operation:
boolean updated = false;
while (!updated) {
    long prevCount = this.count.get();
    updated = this.count.compareAndSet(prevCount, prevCount + 1);
}
It states that
Therefore no synchronization is necessary, and no thread suspension is
necessary. This saves the thread suspension overhead.
Does this mean that if two threads call compareAndSet() at the same time in the above code, both of them execute concurrently (or in parallel), in contrast to a synchronized block, where one thread blocks if both access it simultaneously? If that's the case, wouldn't the values get overwritten, just as they would with no synchronization at all?
The value of the AtomicLong lives on a cache line. On x86 there is a feature called cache-line locking, which is used for locked instructions. So when a CAS is done, the cache line is first acquired in modified/exclusive state and then locked.
If a different CPU wants to access the same cache line, its cache-coherence requests, including the request-for-ownership, are ignored until the cache line is unlocked.
So it is a very lightweight form of synchronization. If you are lucky, the other CPU has some out-of-order instructions it can execute while it waits for the cache line.
This approach is called non-blocking even though it can lead to other threads 'blocking' in the sense that they need to wait. The primary difference from a blocking algorithm is that the CPU (thread) owning the locked cache line cannot be suspended while it holds that lock. This is taken care of at the hardware level (the CPU can't be interrupted between cache-line lock acquire and release), so the blocking is guaranteed to be very short instead of unbounded as with a blocking algorithm.
According to BeeOnRope there might also be some optimistic behavior involved, but that goes beyond my knowledge level.
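To see why no update gets overwritten, here is a small self-contained sketch (my own, not from the linked article): two threads each increment an AtomicLong with the same CAS retry loop, and the final count is always the sum of both threads' increments, because a losing compareAndSet simply retries with the fresh value.

import java.util.concurrent.atomic.AtomicLong;

public class CasDemo {
    private static final AtomicLong count = new AtomicLong();

    private static void increment(int times) {
        for (int i = 0; i < times; i++) {
            boolean updated = false;
            while (!updated) {
                long prevCount = count.get();
                // Returns false if another thread changed the value after our
                // get(); in that case we loop and retry with the fresh value.
                updated = count.compareAndSet(prevCount, prevCount + 1);
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> increment(100_000));
        Thread t2 = new Thread(() -> increment(100_000));
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(count.get()); // always 200000: no lost updates
    }
}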
The (Oracle) javadoc for Semaphore.release() includes:
If any threads are trying to acquire a permit, then one is selected and given the permit that was just released.
Is this a hard promise? This implies that if thread A is waiting in acquire() and thread B does this:
sem.release()
sem.acquire()
Then the release() should pass control to A and B will be blocked in acquire(). If these are the only two threads that can hold the semaphore and the doc statement is formally true, then this is a completely deterministic process: Afterward, A will have the permit and B will be blocked.
But this does not seem to hold in practice, or at least that is my impression. I haven't bothered with an SSCCE here since I am really just looking for confirmation that:
Race conditions apply: Even though thread A is waiting on the permit, when it is released it can be immediately re-acquired by thread B, leaving thread A still blocked.
These are "fair" semaphores if that makes any difference, and I'm actually working in kotlin.
In comments on the question Slaw pointed out something else from the documentation:
When fairness is set true, the semaphore guarantees that threads invoking any of the acquire methods are selected to obtain permits in the order in which their invocation of those methods was processed (first-in-first-out; FIFO). Note that FIFO ordering necessarily applies to specific internal points of execution within these methods. So, it is possible for one thread to invoke acquire before another, but reach the ordering point after the other, and similarly upon return from the method.
The point here is that acquire() is an interruptible function with a beginning and an end. At some point during its execution the calling thread secures a spot in the fairness queue, but where that point falls relative to another thread concurrently entering the same function is still indeterminate. Call this point X and consider two threads, one of which holds the semaphore. At some point the other thread calls:
sem.acquire()
There is no guarantee that the scheduler won't sideline the thread inside acquire() before point X is reached. If the owner thread then does this (which could be intended, e.g., as some kind of synchronization checkpoint or barrier control):
sem.release()
sem.acquire()
It could simply release and re-acquire the semaphore without it ever being acquired by the other thread, even if that thread has already entered acquire().
The injection of Thread.sleep() or yield() between the calls might often work, but it is not a guarantee. To create such a checkpoint with that guarantee you need two locks/semaphores for an exchange:
Owner thread holds semA.
Client thread can take semB and then wait on semA.
The owner can release semA and then wait on semB; if another thread really is waiting for semA (and therefore holding semB), this acquire of semB blocks and guarantees that semA can now be taken by the client.
When the client is done, it releases semB, then semA.
When the owner is released from waiting on semB, it can acquire semA and release semB.
If these are properly encapsulated this mechanism is rock solid.
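As a rough sketch of that exchange (class and method names are mine, not from the answer above; semA starts with zero permits to model "the owner thread holds semA"):

import java.util.concurrent.Semaphore;

// Hypothetical sketch of the two-semaphore exchange described above.
class CheckpointExchange {
    private final Semaphore semA = new Semaphore(0, true);
    private final Semaphore semB = new Semaphore(1, true);

    // Client: take semB, then wait on semA.
    void clientEnter() throws InterruptedException {
        semB.acquire();
        semA.acquire();
    }

    // Client: when done, release semB, then semA.
    void clientExit() {
        semB.release();
        semA.release();
    }

    // Owner: release semA and wait on semB; if a client is really waiting on
    // semA (and therefore holds semB), this blocks until the client is done,
    // after which the owner re-acquires semA and re-opens semB.
    void ownerCheckpoint() throws InterruptedException {
        semA.release();
        semB.acquire();
        semA.acquire();
        semB.release();
    }
}

The owner calls ownerCheckpoint() at its barrier point; a client calls clientEnter(), does whatever needs the checkpoint, and then calls clientExit().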
Let's say you have two threads, thread1 and thread2. If you call thread1.start() and thread2.start() at the same time and they both print out numbers between 1 and 5, they will both run at the same time and they will randomly print out the numbers in any order, if I am not mistaken. To prevent this, you use the .join() method to make sure that a certain thread gets executed first. If this is what the .join() method does, what is the Lock object used for?
Thread.join is used to wait for another thread to finish. The join method uses the implicit lock on the Thread object and calls wait on it. When the thread being waited for finishes it notifies the waiting thread so it can stop waiting.
Java has different ways to use locks to protect access to data. There is implicit locking, which uses a lock built into every Java object (this is where the synchronized keyword comes in), and then there are explicit Lock objects. Both of them protect data from concurrent access; the difference is that explicit Locks are more flexible and powerful, while implicit locking is designed to be easier to use.
With implicit locks, for instance, I cannot forget to release the lock at the end of a synchronized method or block; the JVM makes sure the lock gets released as the thread leaves. But programming with implicit locks can be limiting. For instance, there are no separate condition objects, so if different threads access a shared object for different things, notifying only a subset of them is not possible.
With explicit Locks you get separate condition objects and can notify only those threads waiting on a particular condition (producers might wait on one condition while consumers wait on another, see the ArrayBlockingQueue class for an example), and you can implement more involved kinds of patterns, like hand-over-hand locking. But you need to be much more careful, because the extra features introduce complications, and releasing the lock is up to you.
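As a minimal sketch of that producer/consumer pattern (my own simplified version, loosely modeled on what ArrayBlockingQueue does internally, not its actual code):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// A bounded buffer using an explicit Lock with two Condition objects,
// so producers and consumers can be woken separately.
class BoundedBuffer<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();   // producers wait here
    private final Condition notEmpty = lock.newCondition();  // consumers wait here

    BoundedBuffer(int capacity) {
        this.capacity = capacity;
    }

    void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) {
                notFull.await();          // releases the lock while waiting
            }
            items.addLast(item);
            notEmpty.signal();            // wake only a waiting consumer
        } finally {
            lock.unlock();                // releasing the lock is our job
        }
    }

    T take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await();
            }
            T item = items.removeFirst();
            notFull.signal();             // wake only a waiting producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}

Note how releasing the lock in the finally block is the caller's responsibility, which is exactly the extra care the explicit Lock API demands.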
Locking typically prevents more than one thread from running a block of code at the same time. This is because only ONE thread at a time can acquire the lock and run the code within. If a thread wants the lock but it is already taken, then that thread goes into a wait state until the lock is released. If you have many threads waiting for the lock to be released, which one gets the lock next is INDETERMINATE (can't be predicted). This can lead to "thread starvation" where a thread is waiting for the lock, but it just never gets it because other threads always seem to get it instead. This is a very generic answer because you didn't specify a language. Some languages may differ slightly in that they might have a determinate method of deciding who gets the lock next.
I am creating multiple threads and calling yield() inside them.
The java.lang.Thread.yield() method causes the currently executing thread object to temporarily pause and allow other threads to execute.
Will it be possible for other threads that also want to enter the synchronized block to execute?
synchronized (this.lock) {
    // calling yield here
    Thread.yield();
}
thanks.
As far as I know, yield() only gives up the remaining time slice on the CPU and steps back into the queue. It doesn't release any synchronized objects.
yield does not take or release locks; it simply pauses the current thread's execution. So yielding in the synchronized block will not make the current thread release the lock, and it will not let other threads enter the synchronized block. The wait/notify methods should be used to release the lock.
From the Java Language Specification:
Thread.sleep causes the currently executing thread to sleep
(temporarily cease execution) for the specified duration, subject to
the precision and accuracy of system timers and schedulers. The thread
does not lose ownership of any monitors, and resumption of execution
will depend on scheduling and the availability of processors on which
to execute the thread.
It is important to note that neither Thread.sleep nor Thread.yield have any synchronization semantics. In particular, the compiler does
not have to flush writes cached in registers out to shared memory
before a call to Thread.sleep or Thread.yield, nor does the compiler
have to reload values cached in registers after a call to Thread.sleep
or Thread.yield.
yield allows a context switch to other threads, so this thread will not consume the entire CPU usage of the process. The thread still holds the lock. It is the developer's responsibility to take care of deadlocks.
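A small self-contained sketch (my own, just to illustrate the point above): thread B cannot enter the synchronized block while thread A is yielding inside it, because the monitor is never released.

public class YieldInSynchronized {
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> {
            synchronized (lock) {
                System.out.println("A: entered the block, yielding repeatedly");
                for (int i = 0; i < 1_000_000; i++) {
                    Thread.yield(); // gives up the CPU, but keeps the monitor
                }
                System.out.println("A: leaving the block");
            }
        });

        Thread b = new Thread(() -> {
            synchronized (lock) {
                // This only prints after A has left its synchronized block.
                System.out.println("B: finally got the lock");
            }
        });

        a.start();
        Thread.sleep(100); // crude way to let A grab the lock first (sketch only)
        b.start();
        a.join();
        b.join();
    }
}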
I understand that Thread.currentThread().yield() is a notification to the thread scheduler that it may assign CPU cycles to some other thread of the same priority, if any such thread is present.
My question is: if the current thread has a lock on some object and calls yield(), will it lose that lock right away? And when the thread scheduler finds there is no such thread to assign CPU cycles to, will the thread that called yield() again have to fight to get the lock on the object it lost earlier?
I couldn't find this in the javadoc, and forums [http://www.coderanch.com/t/226223/java-programmer-SCJP/certification/does-sleep-yield-release-lock] have 50-50 answers.
I think yield() (called by, let's say, thread1) should release the lock, because if some thread of the same priority (let's say thread2) wants to operate on the same object, it can have a chance when the thread scheduler eventually assigns the CPU to thread2.
No. Thread.yield() is not like Object.wait(). It just gives up control to allow a thread switch. It will have no effect on the concurrency of your program.
There is no guarantee which thread the scheduler will run after a yield.
In the Java Language Specification:
17.3 Sleep and Yield
It is important to note that neither Thread.sleep nor Thread.yield have any synchronization semantics. In particular, the compiler does not have to flush writes cached in registers out to shared memory before a call to Thread.sleep or Thread.yield, nor does the compiler have to reload values cached in registers after a call to Thread.sleep or Thread.yield.
My comment:
In Java's early days, when it did not really support parallel execution but only concurrent execution (green threads), yield() suspended the current thread and the JVM picked another thread to resume. Nowadays yield does not mean much, as thread scheduling is usually handled at the OS level.
So yield is just a hint to the JVM that the current thread is willing to take a rest, nothing more; it is up to the thread scheduler to decide what to do. yield does not have any synchronization semantics. If the thread holds a lock, it will continue to hold it.
Only the wait methods of the Object class release the intrinsic lock of the current instance (the thread may have acquired other locks; those are not released). yield, sleep, and join do not bother about locks at all. However, join is a little more special: you are guaranteed to see all the changes made by the thread you are waiting for once it finishes.
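A minimal sketch of that join() visibility guarantee (my own example, not from the answer above): the worker's write to a plain, non-volatile field is guaranteed to be visible after join() returns, because of the happens-before edge join establishes.

public class JoinVisibility {
    // Deliberately not volatile: visibility is guaranteed by join() here.
    private static int result = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> result = 42);
        worker.start();
        worker.join();              // happens-before: worker's writes become visible
        System.out.println(result); // guaranteed to print 42
    }
}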