Mutual exclusion using threads in Java

I have this code; it is a mutual exclusion algorithm:
turn = 0               // shared control variable
while (turn != i);     // busy-wait until it is thread i's turn
// CS (critical section)
turn = (turn + 1) % n; // pass the turn to the next thread
I know how threads work, but I'm a little weak at using them in Java, so could you suggest how to convert this into real code using Java threads?
Sorry for my bad English.
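A direct translation of the pseudocode into Java threads might look like the following sketch. The class name, the iteration count, and the shared counter are all invented for illustration; `turn` must be volatile so each spinning thread sees the other thread's updates, and the initialization runs only once:

```java
public class TurnBased {
    private static final int N = 2;
    // volatile so a spinning thread sees the other thread's write to turn
    private static volatile int turn = 0;  // shared control variable
    private static int counter = 0;        // the data the CS protects

    public static int run(int iterations) throws InterruptedException {
        counter = 0;
        turn = 0;
        Thread[] threads = new Thread[N];
        for (int id = 0; id < N; id++) {
            final int i = id;
            threads[i] = new Thread(() -> {
                for (int k = 0; k < iterations; k++) {
                    while (turn != i) { /* busy-wait for our turn */ }
                    counter++;                 // critical section
                    turn = (turn + 1) % N;     // pass the turn on
                }
            });
        }
        for (Thread t : threads) t.start();
        for (Thread t : threads) t.join();
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        // Strict alternation: no increments are lost.
        System.out.println(run(1000)); // 2000
    }
}
```

Note the significant limitation of this algorithm: it is strict alternation, so a thread cannot enter the critical section twice in a row even when the other thread does not want to enter at all.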

Mutual exclusion is typically achieved, in the simplest form, by marking a method as synchronized. When an object's method is marked synchronized, only one thread at a time can execute any of that object's synchronized methods. The object owning the method is the monitor.
Additionally, you can define a synchronized block in the code itself, passing it the object to act as the monitor.
I believe you could achieve the same thing in a simpler fashion, by defining a Runnable object which has the logic you want done. Where you want the mutual exclusion, define a synchronized method.
Then that Runnable instance can be passed to as many Threads as you need. As they all reference the same Runnable, calls into the synchronized method will be mutually exclusive.
This is not the only way, but it should be what you're after. Hope this helps.
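A minimal sketch of that suggestion (the class name, counter field, and iteration count are invented for illustration): one Runnable instance, shared by several threads, with a synchronized method as the critical section:

```java
// One Runnable instance shared by several threads;
// the synchronized method is the critical section.
public class SharedTask implements Runnable {
    private int count = 0;

    // Only one thread at a time can execute this on this instance.
    private synchronized void criticalSection() {
        count++;
    }

    public synchronized int getCount() { return count; }

    @Override
    public void run() {
        for (int i = 0; i < 1000; i++) {
            criticalSection();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SharedTask task = new SharedTask();   // the shared monitor
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // With mutual exclusion, no increments are lost.
        System.out.println(task.getCount()); // 2000
    }
}
```

If each thread had its own Runnable instance, each would lock its own monitor and there would be no mutual exclusion; sharing one instance is the point.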

This code is not mutually exclusive. Consider this execution:
Thread 0 enters the code, executes the CS, and then increments turn to 1 in the last line.
Thread 1 enters the CS, since turn now equals 1, and stays there.
Thread 0 loops back to the first line, re-executes turn = 0, and then enters the CS together with thread 1.


Intra-thread coherence [duplicate]

This question already has an answer here:
Does the Java Memory Model guarantee visibility of intra-thread writes?
(1 answer)
Closed 2 years ago.
The code is simple.
// not annotated with volatile
public static int I = 0;

public static int test() {
    I = 1;
    return I;
}
There is a thread that invokes the method test.
Is it possible that the method test will return the value 0?
In other words, can the read of a shared variable fail to see a write made by the same thread?
Update
The question is actually very simple, but I made it obscure; I'm sorry about that.
By "a thread" I mean a single thread.
And yes, the question is a duplicate of the linked one.
Any answer that does not explain in terms of the Java Language Specification is only partially correct, if correct at all.
You need to make a clear distinction between actions that happen within a single thread and are tied together by program order, which in turn creates a happens-before connection, specifically via:
If x and y are actions of the same thread and x comes before y in program order, then hb(x, y).
That rule tells you that if you think about this code in single threaded world, it will always print 1.
And, on the other hand, actions that create synchronizes-with connections across different threads, which implicitly create happens-before, via:
If an action x synchronizes-with a following action y, then we also have hb(x, y).
In your case, I is a plain field, so every operation on it is a plain store and/or a plain load. Such stores and loads do not create any such connections according to the JLS. As such, some thread that reads I can always read it as 0 if a writing thread is involved.
No, it will be 1 if no other thread is involved but the one that will invoke the method.
chrylis -cautiouslyoptimistic's answer is worth reading for an alternative scenario as well.
Two reasons:
I is altered only by its owner, and if the other thread just calls test(), there is no way for it to get 0 as I's value.
The second thread won't read Class.I's value directly, but the result of the test() method. The assignment I = 1 happens before the return, so it is guaranteed to offer the latest updated value (which has been updated only by the owner, once).
Yes, it is possible for the test method to return 0, if another thread writes to I between the assignment and the return statement:
Thread 1: assign I = 1
Thread 2: assign I = 0
Thread 1: return I (sees the 0 that Thread 2 just wrote)
To prevent this, all access to I, reads and writes, would need to be synchronized on the same lock. Making I volatile is not sufficient to prevent threads from taking turns modifying it.
Note that it's not that Thread 1 "does not see" the I = 1 write; that is guaranteed, because all statements logically execute in program order. However, another thread might change the value after that write happens but before Thread 1 reads it.
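A minimal sketch of the fix for that scenario (the class name and lock object are invented for illustration): make the write and the read one atomic step, so no other thread synchronizing on the same lock can interleave between the assignment and the return:

```java
public class Holder {
    private static int I = 0;
    private static final Object LOCK = new Object();

    // The assignment and the read now happen atomically; another
    // thread synchronizing on LOCK cannot interleave between them.
    public static int test() {
        synchronized (LOCK) {
            I = 1;
            return I;
        }
    }

    public static void setZero() {
        synchronized (LOCK) {
            I = 0;
        }
    }
}
```

The guarantee only holds if every thread touching I goes through the same lock; an unsynchronized write elsewhere would reintroduce the race.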

Using the lock of a synchronized block

If I have a variable Integer[] arr = new Integer[5] and I use one of the cells as a synchronized block's lock, can I use it inside the block?
synchronized (arr[index])
{
    arr[index]++;
}
If the answer is yes, what exactly does the lock mean? What does the program do with this lock during synchronization?
Another question: does it lock only the cell, or the whole array?
In other words, can another thread use arr[index + 1] in the block in parallel?
Thanks!
1) .... can I use it inside the block?
Yes
2) If the answer is yes, what exactly does the lock mean? What does the program do with this lock during synchronization?
What it means is that some other thread that attempts to synchronize on the same object will be blocked until "this code" releases the lock on the object.
There are also memory coherency effects. If you synchronize (properly) one thread is guaranteed to see changes made by another one.
3) Another question: does it lock only the cell, or the whole array?
Neither. It locks on the object (the Integer instance) that the array cell refers to.
Also the lock applies only to other threads that attempt to synchronize on the same object. If another thread attempts to synchronize on a different object, or if it attempts to access the object without synchronizing, then it is not blocked.
4) In other words, can another thread use arr[index + 1] in the block in parallel?
It depends on precisely what the other thread does. See above.
Aside: Your example is rather odd. An Integer object is immutable, so there seems little point in synchronizing on it. This may be just a contrived example, but if not, then you most likely have a problem in your application design. Unfortunately, the example offers us no clues to understand what you are really trying to do here.
But the simple lessons are:
you synchronize on objects, not array elements, or variables
synchronization only works if all threads synchronize when using a shared object.
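If per-index locking is genuinely wanted, one common workaround (my sketch, not taken from the answer; the class and field names are invented) is a parallel array of dedicated, never-reassigned lock objects:

```java
public class PerIndexLocking {
    private final int[] values = new int[5];
    // One dedicated lock object per index; these are never replaced,
    // unlike the Integer cells, so locking on them is safe.
    private final Object[] locks = new Object[5];

    public PerIndexLocking() {
        for (int i = 0; i < locks.length; i++) {
            locks[i] = new Object();
        }
    }

    public void increment(int index) {
        synchronized (locks[index]) {  // blocks only threads using this index
            values[index]++;
        }
    }

    public int get(int index) {
        synchronized (locks[index]) {
            return values[index];
        }
    }
}
```

Two threads working on different indices never contend, while two threads on the same index are mutually exclusive.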
It locks the object (Integer) that happens to be at position index in the array at the beginning of the synchronized block. Which is not very useful in the case of Integers, because the statement arr[index]++ will replace the object with another (unlocked) one.
UPDATE
It doesn't lock anything useful: neither the full array nor the cell at position index. Besides, Integer objects (which are immutable) can be kept in a cache and reused as a result of valueOf(). You may also get a NullPointerException if the array element is not initialized.
In summary: don't do that.
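The object-replacement problem is easy to observe directly (this snippet is purely illustrative): arr[index]++ unboxes, increments, and re-boxes, so the cell ends up referring to a different Integer object than the one a synchronized block would have locked:

```java
public class IntegerLockDemo {
    public static void main(String[] args) {
        Integer[] arr = new Integer[5];
        arr[0] = 7;
        Integer before = arr[0];  // the object a synchronized block would lock
        arr[0]++;                 // unbox, increment, re-box: a new Integer object
        System.out.println(before == arr[0]); // false: the locked object is gone
        System.out.println(arr[0]);           // 8
    }
}
```

Any thread arriving after the increment would lock a different object than the first thread, so the "lock" protects nothing.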

Java synchronized on atomic operation

Is this Java class thread safe, or does the reset method need to be synchronized too? If so, can someone tell me why?
public class NamedCounter {
    private int count;

    public synchronized void increment() { count++; }
    public synchronized int getCount() { return count; }
    public void reset() { count = 0; }
}
Not without synchronizing reset() and adding more methods. You will run into cases where you need more methods. For example:
NamedCounter counter = new NamedCounter();
counter.increment();
// at this exact point (before reaching the line below) another thread
// might have changed the value of counter!
if (counter.getCount() == 1) {
    // do something ... this is not thread safe, since you depended on a
    // value that might have been changed by another thread
}
To fix the above you need something like:
NamedCounter counter = new NamedCounter();
if (counter.incrementAndGet() == 1) { // incrementAndGet() must be a synchronized method
    // do something ... now it is thread safe
}
Instead, use Java's built-in class AtomicInteger, which covers all these cases. Or, if you are trying to learn thread safety, use AtomicInteger as a reference to learn from.
For production code, go with AtomicInteger without even thinking twice! Please note that using AtomicInteger does not automatically guarantee thread safety in your code. You MUST make use of the methods the API provides. They are there for a reason.
Note that synchronized is not just about mutual exclusion; it is fundamentally about the proper ordering of operations in terms of the visibility of their actions. Therefore reset must be synchronized as well; otherwise the writes it makes may occur concurrently with the other two methods and have no guarantee of being visible.
To conclude, your class is not thread-safe as it stands, but will be as soon as you synchronize the reset method.
You have to synchronize your reset() method also.
To make a class thread safe you have to synchronize all paths that access a variable; otherwise you will get undesired results on the unsynchronized paths.
You need to add synchronized to the reset method too, and then it will be synchronized. But this way you achieve synchronization through locks; that is, each thread accessing the method will lock on the NamedCounter object instance.
However, if you use an AtomicInteger for your count variable, you don't need to synchronize anymore, because it uses a CAS CPU operation to achieve atomicity without the need for locks.
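A sketch of that alternative, mirroring the NamedCounter from the question (renamed here to avoid confusion with the original class):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicNamedCounter {
    private final AtomicInteger count = new AtomicInteger();

    public void increment() { count.incrementAndGet(); }
    public int getCount()   { return count.get(); }
    public void reset()     { count.set(0); }

    // Check-then-act logic still needs a single atomic call, as the
    // earlier answer's incrementAndGet() example pointed out:
    public int incrementAndGet() { return count.incrementAndGet(); }
}
```

Every method is safe without any locking, and incrementAndGet() returns the post-increment value atomically, so "increment and check for 1" cannot be broken up by another thread.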
Not an answer, but too long for a comment:
If reset() is synchronized, then the 0 becomes visible to any thread that reads or increments the counter later. Without synchronization, there is no visibility guarantee. Looking at the interaction of a concurrent increment and the unsynchronized reset: if the 0 becomes visible to the incrementing thread before it enters the method, the result will be 1. If the counter is set to 0 between increment's read and write, the reset will be forgotten. If it is set after the write, the end result will be 0. So, if you want to assert that for every reading thread the counter is 0 after reset, that method must be synchronized, too. But David Schwartz is correct that those low-level synchronizations make little sense without higher-level semantics for those interactions.

Out-of-order writes for Double-checked locking

In the examples given for out-of-order writes in double-checked locking scenarios (ref:
IBM article & Wikipedia article),
I could not understand the simple reason why Thread 1 would come out of the synchronized block before the constructor has fully run. As per my understanding, the new allocation and the constructor call should execute in sequence, and the synchronized lock should not be released until all the work is completed.
Please let me know what I am missing here.
The constructor can have completed - but that doesn't mean that all the writes involved within that constructor have been made visible to other threads. The nasty situation is when the reference becomes visible to other threads (so they start using it) before the contents of the object become visible.
You might find Bill Pugh's article on it helps shed a little light, too.
Personally I just avoid double-checked locking like the plague, rather than trying to make it all work.
The code in question is here:
public static Singleton getInstance()
{
    if (instance == null)
    {
        synchronized(Singleton.class) { //1
            if (instance == null)           //2
                instance = new Singleton(); //3
        }
    }
    return instance;
}
Now, the problem with this cannot be understood as long as you keep thinking that the code executes in the order it is written. Even if it did, there is the issue of cache synchronization across multiple processors (or cores) in a symmetric multiprocessing architecture, which is mainstream today.
Thread 1 could, for example, publish the instance reference to main memory but fail to publish any of the other data inside the Singleton object that was created. Thread 2 will then observe the object in an inconsistent state.
As long as Thread 2 doesn't enter the synchronized block, the cache synchronization doesn't have to happen, so Thread 2 can go on indefinitely without ever observing the Singleton in a consistent state.
Thread 2 checks whether instance is null while Thread 1 is at //3:
public static Singleton getInstance()
{
    if (instance == null)
    {
        synchronized(Singleton.class) { //1
            if (instance == null)           //2
                instance = new Singleton(); //3
        }
    }
    return instance; //4
}
At this point the memory for instance has been allocated from the heap and the pointer to it is stored in the instance reference, so the "if statement" executed by Thread 2 returns "false".
Note that because instance is not null when Thread 2 checks it, Thread 2 does not enter the synchronized block and instead returns a reference to a "fully constructed, but partially initialized, Singleton object."
There's a general problem with code not being executed in the order it's written. In Java, a thread is only obligated to be consistent with itself. An instance created on one line with new has to be ready to go on the next. There's no such obligation to other threads. For instance, if fieldA is 1 and fieldB is 2 going into this code on thread 1:
fieldA = 5;
fieldB = 10;
and thread 2 runs this code:
int x = fieldA;
int y = fieldB;
x/y values of (1, 2), (5, 2), and (5, 10) are all to be expected, but (1, 10), where fieldB was set and/or picked up before fieldA, is perfectly legal, and likely, as well. So double-checked locking is a special case of a more general problem, and if you work with multiple threads you need to be aware of it, particularly if they all access the same fields.
One simple solution from Java 1.5 on should be mentioned: fields marked volatile are guaranteed to be read from main memory immediately before being referenced and written to main memory immediately after being assigned. If fieldA and fieldB above were declared volatile, an x/y value of (1, 10) would not be possible. If instance is volatile, double-checked locking works. There's a cost to using volatile fields, but it's less than synchronizing, so double-checked locking becomes a pretty good idea. It's an even better idea because it avoids having a bunch of threads waiting to synchronize while CPU cores sit idle.
But you do want to understand this (if you can't be talked out of multithreading). On the one hand you need to avoid timing problems, and on the other you need to avoid bringing your program to a halt with all its threads waiting to get into synchronized blocks. And it is very difficult to understand.
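A sketch of double-checked locking made correct with volatile, as described above. The value field and its contents are invented for illustration; the pattern itself is the standard Java 5+ idiom:

```java
public class Singleton {
    // volatile is what makes double-checked locking correct from Java 5 on:
    // when another thread sees the reference, it also sees the fully
    // initialized contents of the object.
    private static volatile Singleton instance;

    private final int value;  // stand-in field, invented for illustration

    private Singleton() { value = 42; }

    public int getValue() { return value; }

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, without the lock
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, with the lock
                    instance = new Singleton();  // safe publication via volatile
                }
            }
        }
        return instance;
    }
}
```

Without volatile, the write of the reference at //3 could become visible to another thread before the writes made inside the constructor, which is exactly the partially initialized observation described in the answers above.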

Should I use lock.lock() in this method?

I wrote this method whose purpose is to give notice that a thread is leaving a specific block of code.
A thread stands for a car that is leaving a bridge so other cars can traverse it.
The bridge is accessible to a given number of cars (limited capacity) and it's one-way only.
public void getout(int direction) {
    // release a semaphore permit (this is not the lock)
    semaphore.release();
    lock.lock(); // acquire before the try block, the standard Lock idiom
    try {
        // access to shared data
        if (direction == Car.NORTH)
            nNordTraversing--; // decrement the count of traversing cars
        else
            nSudTraversing--;
        bridgeCond.signal();
    } finally {
        lock.unlock();
    }
}
My question is: should I use lock.lock() here, or is it nonsense?
Thanks in advance.
As we don't have the complete code (what is that semaphore?), this answer is partly based on guesswork.
If your question is related to the increment and decrement operations, then you should know that those operations aren't, in fact, atomic.
So yes, if you have other threads accessing those variables, you need to protect them to ensure that no other thread can read them or, worse, try to do the same operation, as two parallel decrements may result in only one taking effect.
But as locking has a cost, you may also encapsulate your variables in an AtomicLong.
From the code snippet and the requirements, getout will not be called by simultaneous threads; only the thread at the front of the queue calls it. Hence the method which calls getout should be synchronized, as not all threads (cars) can be at the front of the queue.
I also think you are using the semaphore as your guard lock in the calling method.
If, in your implementation, getout is called from multiple methods, then yes, you need synchronization, and your code is correct.
Well, I assume that nNordTraversing and nSudTraversing are shared data. Since ++ and -- are not atomic operations, it makes sense to lock before changing them. Otherwise the following could happen:
you read the variable nNordTraversing (e.g. 7)
another thread gets scheduled and completes its getout method, decrementing the variable (e.g. 7 → 6)
you are scheduled back and decrement the variable, but based on the old value you read before the other thread changed it (e.g. 7 → 6)
the other thread's change has been overwritten; the count is no longer consistent (it is 6 now, but should be 5)
This is called the lost update problem.
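If the counters were the only shared state touched in getout, they could instead be AtomicIntegers, as the earlier answer suggests. The field names below mirror the question; the class and methods around them are assumed for illustration:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class BridgeCounters {
    // Atomic increment/decrement cannot lose updates, unlike plain ++/--.
    private final AtomicInteger nNordTraversing = new AtomicInteger();
    private final AtomicInteger nSudTraversing = new AtomicInteger();

    public void enterNorth() { nNordTraversing.incrementAndGet(); }
    public void leaveNorth() { nNordTraversing.decrementAndGet(); }
    public void enterSouth() { nSudTraversing.incrementAndGet(); }
    public void leaveSouth() { nSudTraversing.decrementAndGet(); }

    public int northCount() { return nNordTraversing.get(); }
    public int southCount() { return nSudTraversing.get(); }
}
```

Note that atomics only protect each counter individually; if the condition signalled via bridgeCond must stay consistent with both counters at once, the explicit lock is still the right tool.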
