When a synchronized method completes, does it push only the data it modified to main memory, or all the member variables? Similarly, when a synchronized method executes, does it read only the data it needs from main memory, or does it invalidate all the member variables in the cache and re-read their values from main memory? For example:
public class SharedData {
    int a; int b; int c; int d;

    public SharedData() {
        a = b = c = d = 10;
    }

    public synchronized void compute() {
        a = b * 20;
        b = a + 10;
    }

    public synchronized int getResult() {
        return b * c;
    }
}
In the above code, assume compute() is executed by threadA and getResult() is executed by threadB. After the execution of compute(), will threadA update main memory with just a and b, or with a, b, c and d? And before executing getResult(), will threadB fetch only the values of b and c from main memory, or will it invalidate the whole cache and fetch values for all member variables a, b, c and d?
synchronized ensures you have a consistent view of the data. This means you will read the latest value, and other caches will get the latest value. Caches are smart enough to talk to each other via a dedicated coherence bus (not something required by the JLS, but allowed). This bus means the CPU doesn't have to touch main memory to get a consistent view.
I think the following thread should answer your question:
Memory effects of synchronization in Java
In practice, the whole cache is not flushed.
1. The synchronized keyword on a method or block locks access to the resource it can modify, by allowing only one thread at a time to acquire the lock.
2. Preventing stale cached reads of a variable is the job of the volatile keyword: it asks the JVM to make every thread that accesses the instance variable reconcile its copy with the one stored in main memory.
3. Moreover, in your example, if threadA executes compute(), then threadB cannot enter getResult() at the same time, because both are synchronized instance methods and only one thread can hold the lock guarding all the synchronized methods of the object. It is not the method that is locked but the object: every object has one lock, and a thread that wants to enter one of its synchronized blocks must acquire that lock.
4. Every class also has a lock (on its Class object), which protects the state of the static variables of the class.
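Points 3 and 4 can be sketched as follows (class and method names are mine, not from the question); the instance lock and the class lock are independent, so a static synchronized method is not blocked by a thread holding the instance lock:

```java
// Sketch: both synchronized instance methods share the *instance* lock,
// while a static synchronized method uses the Class object's lock.
class LockDemo {
    public synchronized void instanceMethod() { }     // locks `this`
    public static synchronized void classMethod() { } // locks LockDemo.class

    public static void main(String[] args) throws InterruptedException {
        LockDemo d = new LockDemo();
        Thread holder = new Thread(() -> {
            synchronized (d) {   // same lock as d's synchronized instance methods
                try { Thread.sleep(200); } catch (InterruptedException ignored) { }
            }
        });
        holder.start();
        Thread.sleep(50);        // let `holder` take the instance lock
        classMethod();           // proceeds at once: the class lock is free
        System.out.println("class lock acquired while instance lock was held");
        holder.join();
    }
}
```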
Before answering your question, let's clarify a few terms related to multi-threaded environments.

Race condition: when two or more threads try to perform read or write operations on the same variable at the same time (here, "same variable" means data shared between threads). In your question, Thread-A executes b = a + 10, a write to b, and at the same time Thread-B can execute b * c, a read of b. So a race condition can happen here. We can handle a race condition in two ways: with a synchronized method or block, or with the volatile keyword.

Volatile: the volatile keyword in Java guarantees that the value of a volatile variable is always read from main memory and not from a thread's local cache. A normal variable without volatile may be held temporarily in a local cache for quick reads and writes. volatile does not block your threads; it just makes sure writes and reads stay in sync. In the context of your example, we could avoid the visibility problem by making all the variables volatile.

Synchronized: synchronization is achieved by blocking threads. It uses a lock-and-key mechanism so that only one thread can execute the block of code at a time. Thread-B waits at the door of the synchronized block until Thread-A finishes completely and releases the key. If you put synchronized on a static method, the lock is taken on your class (.class); if the method is non-static (an instance method, as in your case), the lock is taken on the instance, i.e. the current object.
Now, to the point: let's adapt your example with a few print statements, in Kotlin.
class SharedData {
    var a: Int
    var b: Int
    var c: Int
    var d = 10

    init {
        c = d
        b = c
        a = b
    }

    @Synchronized
    fun compute(): Pair<Int, Int> {
        a = b * 20
        b = a + 10
        return a to b
    }

    @Synchronized
    fun getComputationResult(): Int {
        return b * c
    }
}
@Test
fun testInstanceNotShared() {
    println("Instance Not Shared Example")
    val threadA = Thread {
        val pair = SharedData().compute()
        println("Running inside ${Thread.currentThread().name} compute And get A = ${pair.first}, B = ${pair.second}")
    }
    threadA.name = "threadA"
    threadA.start()
    val threadB = Thread {
        println("Running inside ${Thread.currentThread().name} getComputationResult = ${SharedData().getComputationResult()}")
    }
    threadB.name = "threadB"
    threadB.start()
    threadA.join()
    threadB.join()
}
// Output
//Instance Not Shared Example
//Running inside threadB getComputationResult = 100
//Running inside threadA compute And get A = 200, B = 210
@Test
fun testInstanceShared() {
    println("Instance Shared Example")
    val sharedInstance = SharedData()
    val threadA = Thread {
        val pair = sharedInstance.compute()
        println("Running inside ${Thread.currentThread().name} compute And get A = ${pair.first}, B = ${pair.second}")
    }
    threadA.name = "threadA"
    threadA.start()
    val threadB = Thread {
        println("Running inside ${Thread.currentThread().name} getComputationResult = ${sharedInstance.getComputationResult()}")
    }
    threadB.name = "threadB"
    threadB.start()
    threadA.join()
    threadB.join()
}
//Instance Shared Example
//Running inside threadB getComputationResult = 2100
//Running inside threadA compute And get A = 200, B = 210
From the above two test cases you can see that the answer to your question is actually hidden in the way you call those methods (compute, getComputationResult) in a multi-threaded environment.
After the execution of compute, will threadA update main memory
There is no guarantee that threadA updates the values of the variables a, b, c, d in main memory at any particular moment, but if you mark those variables volatile, every write to them is guaranteed to be visible to subsequent reads by other threads.
before executing getResult will threadB get only the value of b and c from main memory or will it clear the cache and fetch values for all member variables a,b,c and d
No. The memory model does not require clearing the whole cache; it only requires that reads made while holding the lock see all writes made before the lock was last released.
In addition, notice that in the second test, even when the two threads call the methods at about the same time, you get a consistent result: getComputationResult() returns the value updated by compute(). This is because synchronized (and volatile) establish a happens-before ordering, which makes sure every write is visible to the subsequent reads.
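To make the volatile alternative mentioned above concrete, here is a minimal Java sketch (my naming, not from the question) of the same class with volatile fields. Note that volatile gives visibility of each individual write, but it does not make compute() atomic; synchronized gives both:

```java
// Sketch only: volatile guarantees each write is visible to later reads,
// but another thread can still observe `a` updated while `b` is not.
class SharedDataVolatile {
    volatile int a = 10, b = 10, c = 10, d = 10;

    void compute() {      // not synchronized: visible writes, no atomicity
        a = b * 20;       // a = 200
        b = a + 10;       // b = 210
    }

    int getResult() {
        return b * c;     // may observe b before or after compute()'s writes
    }
}
```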
This question already has answers here: How to understand happens-before consistent
I'm trying to understand Java happens-before order concept and there are a few things that seem very confusing. As far as I can tell, happens before is just an order on the set of actions and does not provide any guarantees about real-time execution order. Actually (emphasize mine):
It should be noted that the presence of a happens-before relationship
between two actions does not necessarily imply that they have to take
place in that order in an implementation. If the reordering produces
results consistent with a legal execution, it is not illegal.
So, all it says is that if there are two actions w (a write) and r (a read) such that hb(w, r), then r might actually happen before w in an execution; there is no guarantee that it physically takes place after w, only that the write w is observed by the read r.
How I can determine that two actions are performed subsequently in run-time? For instance:
public volatile int v;
public int c;
Actions:
Thread A
v = 3; //w
Thread B
c = v; //r
Here we have hb(w, r) but that doesn't mean that c will contain value 3 after assignment. How do I enforce that c is assigned with 3? Does synchronization order provide such guarantees?
When the JLS says that some event X in thread A establishes a happens before relationship with event Y in thread B, it does not mean that X will happen before Y.
It means that IF X happens before Y, then both threads will agree that X happened before Y. That is to say, both threads will see the program's memory in a state that is consistent with X happening before Y.
It's all about memory. Threads communicate through shared memory, but when there are multiple CPUs in a system, all trying to access the same memory system, then the memory system becomes a bottleneck. Therefore, the CPUs in a typical multi-CPU computer are allowed to delay, re-order, and cache memory operations in order to speed things up.
That works great when threads are not interacting with one another, but it causes problems when they actually do want to interact: If thread A stores a value into an ordinary variable, Java makes no guarantee about when (or even if) thread B will see the value change.
In order to overcome that problem when it's important, Java gives you certain means of synchronizing threads. That is, getting the threads to agree on the state of the program's memory. The volatile keyword and the synchronized keyword are two means of establishing synchronization between threads.
I think the reason they called it "happens before" is to emphasize the transitive nature of the relationship: If you can prove that A happens before B, and you can prove that B happens before C, then according to the rules specified in the JLS, you have proved that A happens before C.
I would like to associate the above statement with some sample code flow.
To understand this, let us take below class that has two fields counter and isActive.
class StateHolder {
    private int counter = 100;
    private boolean isActive = false;

    public synchronized void resetCounter() {
        counter = 0;
        isActive = true;
    }

    public synchronized void printStateWithLock() {
        System.out.println("Counter : " + counter);
        System.out.println("IsActive : " + isActive);
    }

    public void printStateWithNoLock() {
        System.out.println("Counter : " + counter);
        System.out.println("IsActive : " + isActive);
    }
}
And assume that there are three threads T1, T2, T3 calling the following methods on the same StateHolder object:
T1 calls resetCounter() and T2 calls printStateWithLock() at the same time, and T1 gets the lock first.
T3 calls printStateWithNoLock() after T1 has completed its execution.
The JLS says:
It should be noted that the presence of a happens-before relationship between two actions does not necessarily imply that they have to take place in that order in an implementation. If the reordering produces results consistent with a legal execution, it is not illegal.
As per the above statement, the JVM, OS, or underlying hardware is free to reorder the statements within the resetCounter() method. As T1 executes, it could run the statements in the following order:
public synchronized void resetCounter() {
    isActive = true;
    counter = 0;
}
This is in line with the statement "not necessarily imply that they have to take place in that order in an implementation".
Now looking at it from T2's perspective, this reordering has no negative impact, because T1 and T2 synchronize on the same object, and T2 is guaranteed to see the changes to both fields, whether or not the reordering happened, as there is a happens-before relationship. So the output will always be:
Counter : 0
IsActive : true
This is as per the statement "If the reordering produces results consistent with a legal execution, it is not illegal".
But look at it from T3's perspective: with this reordering it is possible that T3 sees the updated value of isActive as `true` but still sees the `counter` value as `100`, although T1 has completed its execution.
Counter : 100
IsActive : true
The next point in the above link further clarifies the statement and says that:
More specifically, if two actions share a happens-before relationship, they do not necessarily have to appear to have happened in that order to any code with which they do not share a happens-before relationship. Writes in one thread that are in a data race with reads in another thread may, for example, appear to occur out of order to those reads.
In this example T3 runs into this problem because it doesn't have any happens-before relationship with T1 or T2. This is in line with "not necessarily have to appear to have happened in that order to any code with which they do not share a happens-before relationship".
NOTE: To simplify the case, we have a single thread T1 modifying the state, and T2 and T3 reading it. It is possible that:
T1 updates counter to 0; later
T2 modifies isActive to true and sees counter as 0; and after some time
T3, printing the state, still sees only isActive as true while counter is 100, although both T1 and T2 have completed their execution.
As to the last question:
we have hb(w, r) but that doesn't mean that c will contain value 3 after assignment. How do I enforce that c is assigned with 3?
public volatile int v;
public int c;
Thread A
v = 3; //w
Thread B
c = v; //r
Since v is a volatile, as per Happens-before Order
A write to a volatile field (§8.3.1.4) happens-before every subsequent read of that field.
So it is safe to assume that when Thread B tries to read the variable v it will always read the updated value and c will be assigned 3 in the above code.
Interpreting @James' answer to my liking:
// Definition: Some variables
private int first = 1;
private int second = 2;
private int third = 3;
private volatile boolean hasValue = false;
// Thread A
first = 5;
second = 6;
third = 7;
hasValue = true;
// Thread B
System.out.println("Flag is set to : " + hasValue);
System.out.println("First: " + first); // will print 5
System.out.println("Second: " + second); // will print 6
System.out.println("Third: " + third); // will print 7
If you want the state of memory (main memory and CPU caches) seen at the time of a write to a variable by one thread to be seen from every subsequent read of that same variable by another thread, then mark that variable volatile.
Here, the state of memory seen at hasValue = true (the write statement) in Thread A is: first having value 5, second having value 6, third having value 7.
(Why "every subsequent" read, when there is only one read in Thread B in this example? Because we may have a Thread C doing exactly what Thread B does.)
If X (hasValue=true) in Thread A happens before Y (sysout(hasValue)) in Thread B, the behaviour should be as if X happened before Y in the same thread (memory values seen at X should be same starting from Y)
Here we have hb(w, r) but that doesn't mean that c will contain value 3 after assignment. How do I enforce that c is assigned with 3? Does synchronization order provide such guarantees?
And your example
public volatile int v;
public int c;
Actions:
Thread A
v = 3; //w
Thread B
c = v; //r
You don't need volatile for v in your example. Let's take a look at a similar example
int v = 0;
int c = 0;
volatile boolean assigned = false;
Actions:
Thread A
v = 3;
assigned = true;
Thread B
while(!assigned);
c = v;
The assigned field is volatile.
We will reach the c = v statement in Thread B only after assigned becomes true (the while (!assigned) loop ensures that).
Because assigned is volatile, we have a happens-before edge: if we see assigned == true, we also see everything that happened before the assigned = true statement, including v = 3.
So when we have assigned == true, we have v = 3, and therefore c == 3 as a result.
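The flag pattern above can be made runnable. This is a sketch with my own class name, assuming Thread B simply spins on the flag:

```java
// Sketch: a volatile flag publishes the plain write to v. The write to
// `assigned` happens-before Thread B's read that sees true, so B is
// guaranteed to read v == 3 afterwards.
class Publish {
    int v = 0;                         // plain field, published via the flag
    volatile boolean assigned = false;

    int readWhenReady() {              // Thread B's role
        while (!assigned) { }          // spin until the flag is seen
        return v;                      // guaranteed 3 once assigned == true
    }

    public static void main(String[] args) throws InterruptedException {
        Publish p = new Publish();
        Thread b = new Thread(() -> System.out.println("c = " + p.readWhenReady()));
        b.start();
        p.v = 3;                       // Thread A's role: write v first...
        p.assigned = true;             // ...then raise the volatile flag
        b.join();                      // prints c = 3
    }
}
```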
What will happen without volatile
int v = 0;
int c = 0;
boolean assigned = false;
Actions:
Thread A
v = 3;
assigned = true;
Thread B
while(!assigned);
c = v;
We now have assigned without volatile.
The value of c in Thread B can be either 0 or 3 in this situation. So there is no guarantee that c == 3.
I am trying to see how volatile works here. If I declare cc as volatile, I get the output below. I know thread-execution output varies from run to run, but I read somewhere that volatile is the same as synchronized, so why do I get this output? And does it matter if I use two instances of Thread1?
2Thread-0
2Thread-1
4Thread-1
3Thread-0
5Thread-1
6Thread-0
7Thread-1
8Thread-0
9Thread-1
10Thread-0
11Thread-1
12Thread-0
public class Volexample {
    int cc = 0;

    public static void main(String[] args) {
        Volexample ve = new Volexample();
        CountClass count = ve.new CountClass();
        Thread1 t1 = ve.new Thread1(count);
        Thread2 t2 = ve.new Thread2(count);
        t1.start();
        t2.start();
    }

    class Thread1 extends Thread {
        CountClass count = new CountClass();

        Thread1(CountClass count) {
            this.count = count;
        }

        @Override
        public void run() {
            /*for(int i=0;i<=5;i++)
                count.countUp();*/
            for (int i = 0; i <= 5; i++) {
                cc++;
                System.out.println(cc + Thread.currentThread().getName());
            }
        }
    }

    class Thread2 extends Thread {
        CountClass count = new CountClass();

        Thread2(CountClass count) {
            this.count = count;
        }

        @Override
        public void run() {
            /*for(int i=0;i<=5;i++)
                count.countUp();*/
            for (int i = 0; i <= 5; i++) {
                cc++;
                System.out.println(cc + Thread.currentThread().getName());
            }
        }
    }

    class CountClass {
        volatile int count = 0;

        void countUp() {
            count++;
            System.out.println(count + Thread.currentThread().getName());
        }
    }
}
In Java, the semantics of the volatile keyword are very well defined. They ensure that other threads will see the latest changes to a variable. But they do not make read-modify-write operations atomic.
So, if i is volatile and you do i++, you are guaranteed to read the latest value of i and you are guaranteed that other threads will see your write to i immediately, but you are not guaranteed that two threads won't interleave their read/modify/write operations so that the two increments have the effect of a single increment.
Suppose i is a volatile integer whose value was initialized to zero, no writes have occurred other than that yet, and two threads do i++;, the following can happen:
The first thread reads a zero, the latest value of i.
The second threads reads a zero, also the latest value of i.
The first thread increments the zero it read, getting one.
The second thread increments the zero it read, also getting one.
The first thread writes the one it computed to i.
The second thread writes the one it computed to i.
The latest value written to i is one, so any thread that accesses i now will see one.
Notice that an increment was lost, even though every thread always read the latest value written by any other thread. The volatile keyword gives visibility, not atomicity.
You can use synchronized to form complex atomic operations. If you just need simple ones, you can use the various Atomic* classes that Java provides.
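A hedged sketch contrasting the two options (class names and iteration counts are mine): a volatile counter still loses increments under contention, while AtomicInteger performs the read-modify-write as one atomic step.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: incrementAndGet() is a single atomic read-modify-write, so no
// increments are lost, unlike `volatileCounter++`.
class Counters {
    static volatile int volatileCounter = 0;                         // visibility only
    static final AtomicInteger atomicCounter = new AtomicInteger();  // atomic RMW

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                volatileCounter++;               // read-modify-write: can be lost
                atomicCounter.incrementAndGet(); // never lost
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("volatile: " + volatileCounter);     // often < 200000
        System.out.println("atomic:   " + atomicCounter.get()); // always 200000
    }
}
```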
A use-case for volatile is reading/writing memory that is mapped to device registers, for example on a micro-controller where something other than the CPU may read/write values at that "memory" address, so the compiler must not optimise the variable away.
The Java volatile keyword is used to mark a Java variable as "being stored in main memory". That means, that every read of a volatile variable will be read from the computer's main memory, and not from the cache and that every write to a volatile variable will be written to main memory, and not just to the cache.
It guarantees that you are accessing the newest value of this variable.
P.S. Use larger loops to notice such bugs; for example, try iterating 10^9 times.
I have a hunch that using the holder idiom without declaring the holder field final is not thread-safe (due to the way immutability guarantees work in Java). Can somebody confirm this (hopefully with some sources)?
public class Something {
    private int answer = 1;

    private Something() {
        answer += 10;
        answer += 10;
    }

    public int getAnswer() {
        return answer;
    }

    private static class LazyHolder {
        // notice no final
        private static Something INSTANCE = new Something();
    }

    public static Something getInstance() {
        return LazyHolder.INSTANCE;
    }
}
EDIT: I definitely want sourced statements, not just assertions like "it works" -- please explain/prove it's safe
EDIT2: A little modification to make my point clearer: can I be sure that the getAnswer() method will return 21 regardless of the calling thread?
The class initialization procedure guarantees that if a static field's value is set using a static initializer (i.e. static variable = someValue;) that value is visible to all threads:
10 - If the execution of the initializers completes normally, then acquire LC, label the Class object for C as fully initialized, notify all waiting threads, release LC, and complete this procedure normally.
Regarding your edit, let's imagine a situation with two threads T1 and T2, executing in that order from a wall clock's perspective:
T1: Something s = Something.getInstance();
T2: Something s = Something.getInstance(); i = s.getAnswer();
Then you have:
T1 acquires LC, runs Something INSTANCE = new Something();, which initialises answer, then releases LC.
T2 tries to acquire LC, but it is already held by T1, so it waits. When T1 releases LC, T2 acquires it, reads INSTANCE, then reads answer.
So you can see that you have a proper happens-before relationship between the write and the read to answer, thanks to the LC lock.
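A small harness (hypothetical, my own naming, with the question's class inlined so it is self-contained) that exercises this guarantee: several threads race to trigger the holder's class initialization, and each one observes the fully constructed instance.

```java
import java.util.concurrent.*;

// Sketch: the JLS class-initialization lock (LC in the procedure above)
// makes every racing thread wait for the holder's init to finish, so all
// of them see answer == 21, even though INSTANCE is not final.
class HolderRace {
    static class Something {
        private int answer = 1;
        private Something() { answer += 10; answer += 10; }   // answer == 21
        int getAnswer() { return answer; }

        static class LazyHolder { static Something INSTANCE = new Something(); }
        static Something getInstance() { return LazyHolder.INSTANCE; }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        Future<?>[] results = new Future<?>[8];
        for (int i = 0; i < 8; i++) {
            results[i] = pool.submit(() -> Something.getInstance().getAnswer());
        }
        for (Future<?> f : results) {
            System.out.println(f.get());   // 21 every time
        }
        pool.shutdown();
    }
}
```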
It is thread-safe for sure, but the INSTANCE field is mutable: code inside the class could reassign it to something else. And that is the first thing to worry about (even before considering thread-safety).
class Counter {
    public int i = 0;

    public void increment() {
        i++;
        System.out.println("i is " + i);
        System.out.println("i+=22 executing");
        i = i + 22;
        System.out.println("i is (after i+22) " + i);
        System.out.println("i++ executing");
        i++;
        System.out.println("i is (after i++) " + i);
    }

    public void decrement() {
        i--;
        System.out.println("i is " + i);
        System.out.println("i*=2 executing");
        i = i * 2;
        System.out.println("i is (after i*2) " + i);
        System.out.println("i-=1 executing");
        i = i - 1;
        System.out.println("i is (after i-1) " + i);
    }

    public int value() {
        return i;
    }
}
class ThreadA {
    public ThreadA(final Counter c) {
        new Thread(new Runnable() {
            public void run() {
                System.out.println("Thread A trying to increment");
                c.increment();
                System.out.println("Increment completed " + c.i);
            }
        }).start();
    }
}

class ThreadB {
    public ThreadB(final Counter c) {
        new Thread(new Runnable() {
            public void run() {
                System.out.println("Thread B trying to decrement");
                c.decrement();
                System.out.println("Decrement completed " + c.i);
            }
        }).start();
    }
}

class ThreadInterference {
    public static void main(String args[]) throws Exception {
        Counter c = new Counter();
        new ThreadA(c);
        new ThreadB(c);
    }
}
In the above code, ThreadA first gets access to the Counter object and increments the value, along with performing some extra operations. The very first time, ThreadA has no cached value of i, but after executing i++ (on the first line) it will cache the value. Later the value is updated and becomes 24. Since the variable i is not volatile, the changes should be made only in ThreadA's local cache.
Now, when ThreadB accesses the decrement() method, the value of i it sees is the one updated by ThreadA, i.e. 24. How could that be possible?
Assuming that threads won't see each updates that other threads make to shared data is as inappropriate as assuming that all threads will see each other's updates immediately.
The important thing is to take account of the possibility of not seeing updates - not to rely on it.
There's another issue besides not seeing the update from other threads, mind you - all of your operations act in a "read, modify, write" sense... if another thread modifies the value after you've read it, you'll basically ignore it.
So for example, suppose i is 5 when we reach this line:
i = i * 2;
... but half way through it, another thread modifies it to be 4.
That line can be thought of as:
int tmp = i;
tmp = tmp * 2;
i = tmp;
If the second thread changes i to 4 after the first line in the "expanded" version, then even if i is volatile the write of 4 will still be effectively lost - because by that point, tmp is 5, it will be doubled to 10, and then 10 will be written out.
As specified in JLS 8.3.1.4:
The Java programming language allows threads to access shared variables (§17.1). As a rule, to ensure that shared variables are consistently and reliably updated, a thread should ensure that it has exclusive use of such variables by obtaining a lock that, conventionally, enforces mutual exclusion for those shared variables. [...] A field may be declared volatile, in which case the Java Memory Model ensures that all threads see a consistent value for the variable.
Although not always, there is still a chance that values shared among threads are not consistently and reliably updated, which can lead to unpredictable program behaviour. In the code given below:
class Test {
static int i = 0, j = 0;
static void one() { i++; j++; }
static void two() {
System.out.println("i=" + i + " j=" + j);
}
}
If, one thread repeatedly calls the method one (but no more than Integer.MAX_VALUE times in all), and another thread repeatedly calls the method two then method two could occasionally print a value for j that is greater than the value of i, because the example includes no synchronization and, the shared values of i and j might be updated out of order.
But if you declare i and j to be volatile , This allows method one and method two to be executed concurrently, but guarantees that accesses to the shared values for i and j occur exactly as many times, and in exactly the same order, as they appear to occur during execution of the program text by each thread. Therefore, the shared value for j is never greater than that for i,because each update to i must be reflected in the shared value for i before the update to j occurs.
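The volatile variant of the JLS example then reads (renamed here to VolatileTest to avoid clashing with the Test class above):

```java
// The JLS example with volatile fields: each thread's writes to i and j
// occur in program order as seen by every other thread, so two() can
// never print a j that is greater than i.
class VolatileTest {
    static volatile int i = 0, j = 0;

    static void one() { i++; j++; }

    static void two() {
        System.out.println("i=" + i + " j=" + j);
    }
}
```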
Now I came to know that common objects (objects that are shared by multiple threads) are not cached by those threads: as the object is shared, the Java Memory Model is supposedly smart enough to identify that shared objects, if cached by threads, could produce surprising results.
How could that be possible?
Because there is nowhere in the JLS that says values have to be cached within a thread.
This is what the spec does say:
If you have a non-volatile variable x, and it's updated by a thread T1, there is no guarantee that T2 can ever observe the change of x by T1. The only way to guarantee that T2 sees a change of T1 is with a happens-before relationship.
It just so happens that some implementations of Java cache non-volatile variables within a thread in certain cases. In other words, you can't rely on a non-volatile variable being cached.
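The classic illustration of such caching is a stop flag. This sketch is mine, and whether the non-volatile version actually hangs depends on the JIT; with volatile it is guaranteed to terminate:

```java
// Sketch: with a plain (non-volatile) boolean, the JIT is allowed to cache
// `running` in a register, so the reader loop may never observe the change.
// Marking the field volatile forces every read to see the latest write.
class StopFlag {
    static volatile boolean running = true;  // drop volatile and this may hang

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) { }              // spins until the write is visible
            System.out.println("reader saw the update");
        });
        reader.start();
        Thread.sleep(100);
        running = false;                     // volatile write: visible to reader
        reader.join();
    }
}
```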
I understand that synchronizing a block of code means that the code will be accessed by only one thread at a time, even if many threads are waiting to access it.
When we write a thread class, inside the run method we start a synchronized block by giving it an object. For example:
class MyThread extends Thread {
    String sa;

    public MyThread(String s) {
        sa = s;
    }

    public void run() {
        synchronized (sa) {
            if (sa.equals("notdone")) {
                // do something on the object
            }
        }
    }
}
Here we gave the sa object to the synchronized block. What is the need for that? Either way, we are going to provide synchronization for that particular block of code.
I would suggest:
implement Runnable rather than extending Thread;
don't lock in the Runnable on something external. Instead, call a method that uses an internal lock;
a String is not a good choice of lock: it means "hi" and "hi" will share a lock, but new String("hi") will not;
if you are locking out all other threads for the life of the thread, why are you using multiple threads?
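The third point can be checked directly. A small sketch (my naming): equal string literals are interned to one object and therefore share a monitor, while an equal string built at runtime is a distinct object with its own monitor.

```java
// Sketch: lock identity is object identity, not equals() equality.
class StringLockDemo {
    public static void main(String[] args) {
        String a = "hi";
        String b = "hi";                 // same interned object as a
        String c = new String("hi");     // equal, but a different object

        System.out.println(a == b);      // true  -> synchronized(a) and
                                         //          synchronized(b) share a lock
        System.out.println(a == c);      // false -> synchronized(c) is independent
        System.out.println(a.equals(c)); // true  -> equality != lock identity
    }
}
```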
The parameter object of the synchronized block is the object on which the block locks.
Thus all synchronized blocks with the same object are excluding each other's (and all synchronized methods' of this same object) simultaneous execution.
So if you have this example
class ExampleA extends Thread {
    private Object x;

    public ExampleA(Object l) {
        this.x = l;
    }

    public void run() {
        synchronized (x) { // <-- synchronized-block A
            // do something
        }
    }
}

class ExampleB extends Thread {
    private Object x;

    public ExampleB(Object l) {
        this.x = l;
    }

    public void run() {
        synchronized (x) { // <-- synchronized-block B
            // do something else
        }
    }
}

Object o1 = new Object();
Object o2 = new Object();
Thread eA1 = new ExampleA(o1);
Thread eA2 = new ExampleA(o2);
Thread eB1 = new ExampleB(o1);
Thread eB2 = new ExampleB(o2);
eA1.start(); eA2.start(); eB1.start(); eB2.start();
Now we have two synchronized blocks (A and B, in classes ExampleA and ExampleB), and we have two lock objects (o1 and o2).
If we now look at the simultaneous execution, we can see that:
A1 can be executed in parallel to A2 and B2, but not to B1.
A2 can be executed in parallel to A1 and B1, but not to B2.
B1 can be executed in parallel to A2 and B2, but not to A1.
B2 can be executed in parallel to A1 and B1, but not to A2.
Thus, the synchronization depends only on the parameter object, not on the choice of synchronization block.
In your example, you are using this:
synchronized (sa) {
    if (sa.equals("notdone")) {
        // do something on the object
    }
}
This looks like you are trying to prevent someone from changing your instance variable sa to another string while you are comparing it and working with it, but it does not prevent this.
Synchronization does not work on a variable, it works on an object - and the object in question should usually be either some object which contains the variable (the current MyThread object in your case, reachable by this), or a special object used just for synchronization, and which is not changed.
As Peter Lawrey said, String objects are usually bad choices for synchronization locks, since all equal String literals are the same object (i.e. they would exclude each other's synchronized blocks), while an equal non-literal string (e.g. one created at runtime) is not the same object, and thus would not exclude synchronized blocks guarded by other such objects or literals, which often leads to subtle bugs.
All threads synchronizing on this object will wait until the current thread finishes its work. This is useful, for example, if you have read/write operations on a collection that you wish to synchronize: you can put synchronized blocks in the set and get methods. In that case, while one thread is reading or writing, all other threads that want to read or write will wait.
So the question is what is the function of the object that a block synchronizes on?
All instances of Object have what is called a monitor. In normal execution this monitor is unowned.
A thread wishing to enter a synchronized block must take possession of the object's monitor. Only one thread can possess the monitor at a time, however. So, if the monitor is currently unowned, the thread takes possession and executes the synchronized block of code. The thread releases the monitor when it leaves the synchronized block.
If the monitor is currently owned, then the thread needing to enter the synchronized block must wait for the monitor to be freed so it can take ownership and enter the block. More than one thread can be waiting and if so, then only one will be given ownership of the monitor. The rest will go back to waiting.
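This monitor hand-off can be sketched as follows (class and method names are mine): two threads contend for one object's monitor, so the second must wait until the first releases it, and the messages inside the synchronized block never interleave.

```java
// Sketch: both threads enter the same synchronized block, so the monitor
// is acquired by one at a time; the other waits at the block's entry.
class MonitorDemo {
    private final Object monitor = new Object();

    void enter(String name) {
        synchronized (monitor) {           // take the monitor (or wait for it)
            System.out.println(name + " owns the monitor");
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            System.out.println(name + " releasing the monitor");
        }                                  // monitor released on block exit
    }

    public static void main(String[] args) throws InterruptedException {
        MonitorDemo d = new MonitorDemo();
        Thread t1 = new Thread(() -> d.enter("T1"));
        Thread t2 = new Thread(() -> d.enter("T2"));
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}
```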