I have an unusual problem.
I have a function whose operation should be performed by at most two threads at a time.
static int iCount = 1;

public synchronized void myFunct() {
    while (iCount >= 3) {
        try {
            wait();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    iCount++;
    // Do stuff
    // After the operation, decrement the count
    iCount--;
    notifyAll();
}
What I am trying to do is allow only two threads to perform the operation, while the other threads must wait.
But here the first two threads increment the count and do the operation, and the other threads go into a wait state but never receive the notification.
I guess I am overlooking something.
Sounds like you want to use a Semaphore: always call acquire() before doing your operation, and then release() in a finally block.
private static final Semaphore semaphore = new Semaphore(2);

public static void myFunct() throws InterruptedException {
    semaphore.acquire();
    try {
        // do stuff
    } finally {
        semaphore.release();
    }
}
Your function is synchronized, so only one thread at a time can be in it.
I'm not sure I understand your question... But if you want to allow two threads to go somewhere at once, have a look at Semaphore.
Is this a singleton class?
If not, that's a problem: many concurrent instances may change the value of iCount, and in addition they will block forever because no thread will ever call notify on their instance's monitor.
In any case, you should move the synchronization inside the function and lock on iCount rather than on the instance, and also make it volatile.
public void myFunct() {
    synchronized (iCount) {
        while (iCount >= 3) {
            try {
                wait();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        iCount++;
    }
    // Do stuff
    // After the operation, decrement the count
    synchronized (iCount) {
        iCount--;
    }
    notifyAll();
}
Why aren't you just using a Semaphore?
An alternative might be to use a ThreadPoolExecutor with a maximum of two threads.
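For illustration, a minimal sketch of that idea (the class name and task body are placeholders): a fixed pool of two worker threads means at most two submitted tasks ever run at the same time.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TwoWorkerPool {
    // at most two tasks execute concurrently; the rest wait in the pool's queue
    private static final ExecutorService pool = Executors.newFixedThreadPool(2);

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            final int taskId = i;
            pool.submit(() -> System.out.println("task " + taskId + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown(); // stop accepting new tasks, let the queued ones finish
    }
}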
You need java.util.concurrent.Semaphore, initialized with 2 permits.
As for your current code - threads may cache values of variables. Try adding the volatile keyword.
There are many problems with this code. Among them:
You have no real control on the number of threads running myFunct, since the method is synchronized on the instance level, while the counter is static. So N different threads operating on N different instances may run the same method concurrently.
Manipulating the counter from multiple threads is not thread safe. Consider synchronizing it or using AtomicInteger (a minimal sketch follows after this list).
Regarding the limit on the number of threads, consider using the Semaphore class.
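For the counter point specifically, a minimal AtomicInteger sketch (the names here are illustrative, not the asker's code). Note that this only makes the increments and decrements thread safe; it does not by itself make a third thread wait, which is what the Semaphore is for.

import java.util.concurrent.atomic.AtomicInteger;

class Counter {
    private static final AtomicInteger iCount = new AtomicInteger(0);

    static void enter() {
        int inUse = iCount.incrementAndGet(); // atomic read-modify-write, no lock needed
        System.out.println("threads currently inside: " + inUse);
    }

    static void leave() {
        iCount.decrementAndGet();
    }
}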
This program attempts to print the numbers 1 to 10 sequentially: one thread prints the odd numbers and a second thread prints the even numbers.
I have been reading JCIP book and it says:
Ensure that the state variables making up the condition predicate are guarded by the lock associated with the condition queue.
In the program below, the condition queue corresponds to the static member 'obj1', while the state variable that makes up the condition predicate is the static volatile member 'count'. (Let me know if I am wrong in my interpretation of condition queue, state variable, and condition predicate.)
The program below works correctly but clearly violates the above idiom. Have I understood correctly what the author is trying to say? Is the code below really poor programming practice (which happens to work correctly)?
Can you give me an example where not following the above idiom will make me run into problems?
public class OddEvenSynchronized implements Runnable {
static Object obj1 = new Object(); // monitor to share data
static volatile int count =1; // condition predicate
boolean isEven;
public OddEvenSynchronized(boolean isEven) { //constructor
this.isEven=isEven;
}
public void run (){
while (count<=10){
if (this.isEven == true){
printEven(); //print an even number
}
else{
printOdd(); //print an odd number
}
}
}
public static void main(String[] args) {
Thread t1 = new Thread (new OddEvenSynchronized(true));
Thread t2 = new Thread (new OddEvenSynchronized(false));
//start the 2 threads
t1.start();
t2.start();
}
void printEven(){
synchronized (obj1) {
while (count%2 != 0){
try{
obj1.wait();
}catch (InterruptedException e) {
e.printStackTrace();
}
}
}
System.out.println("Even"+count);
count++; //unguarded increment (violation)
synchronized (obj1) {
obj1.notifyAll();
}
} //end method
void printOdd(){
synchronized (obj1) {
while (count%2 == 0){
try{
obj1.wait();
}catch (InterruptedException e) {
e.printStackTrace();
}
}
}
System.out.println("Odd"+count);
count++; //unguarded increment (violation)
synchronized (obj1) {
obj1.notifyAll();
}
} //end method
} //end class
Do not read from or write to count unless you're synchronized on obj1. That's a no-no! The prints and the increments should be done inside the synchronized blocks.
synchronized (obj1) {
while (count%2 != 0){
try {
obj1.wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
System.out.println("Even"+count);
}
synchronized (obj1) {
count++;
obj1.notifyAll();
}
You'll notice that there's no reason to drop the synchronization now. Combine the two blocks.
synchronized (obj1) {
while (count%2 != 0){
try {
obj1.wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
System.out.println("Even"+count);
count++;
obj1.notifyAll();
}
The program below works correctly but clearly violates the above idiom.
The insidious danger of multithreaded programming is that a buggy program can appear to work correctly most of the time. Race conditions can be quite devious because they often require very tight timing conditions which rarely happen.
It's really, really important to follow the rules to the letter. It's very difficult to get multithreaded programming right. It's a near certainty that any time you deviate from the rules and try to be clever you will introduce subtle bugs.
The only explanation I have been able to come up with for this question, as hinted at by my discussion with John Kugelman in his answer (please correct me if something is wrong):
1st key insight: In Java, there is only one condition queue associated with an object's monitor. Although the two threads share that condition queue, their condition predicates are different. This sharing results in unnecessary wake-up -> check condition predicate -> sleep again cycles. So, although inefficient, they will still behave like separate condition queues if coded properly ( while (condition predicate) { obj.wait(); } ).
In the above program, the condition predicates
count%2 == 0
count%2 != 0
are different, although they are part of the same condition queue (i.e. calling notifyAll() on this object's monitor will wake both of them; however, only one will be able to proceed at a time).
The 2nd key insight:
The volatile count variable ensures memory visibility.
Conclusion:
As soon as we introduce another thread with the same condition predicate, the program will be susceptible to race conditions (if not other defects).
Also note that the wait()/notify() mechanism is usually employed with threads that share the same condition predicate, for example when waiting for a resource lock. The above program is mostly used in interviews, and I doubt it would be common in real-life code.
So, if there are two or more threads in the same condition queue with different condition predicates, and the condition-predicate variable is volatile (and hence ensures memory visibility), then ignoring the above advice can still produce a correct program. Although this is of little practical significance, it really helped me understand multithreading better.
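As a footnote to the first insight: if you really want one condition queue per predicate, java.util.concurrent provides explicit Condition objects. A rough sketch of how the odd/even handoff could be written with two conditions on one ReentrantLock (this is an alternative technique, not the original program):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class OddEvenWithConditions {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition oddTurn = lock.newCondition();  // waiters expecting an odd count
    private final Condition evenTurn = lock.newCondition(); // waiters expecting an even count
    private int count = 1;

    void printOdd() throws InterruptedException {
        lock.lock();
        try {
            while (count % 2 == 0) {
                oddTurn.await();   // parks only threads waiting for the odd turn
            }
            System.out.println("Odd" + count);
            count++;
            evenTurn.signal();     // wakes only a thread waiting for the even turn
        } finally {
            lock.unlock();
        }
    }

    void printEven() throws InterruptedException {
        lock.lock();
        try {
            while (count % 2 != 0) {
                evenTurn.await();
            }
            System.out.println("Even" + count);
            count++;
            oddTurn.signal();
        } finally {
            lock.unlock();
        }
    }
}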
I'm trying to learn about threads and I do not understand the join() method.
I have a Thread (ThreadAdd.java) which adds 1 to a static int.
import java.util.logging.Level;
import java.util.logging.Logger;

public class ThreadAdd extends Thread{
public static int count;
@Override
public void run() {
try {
Thread.sleep(100);
} catch (InterruptedException ex) {
Logger.getLogger(ThreadAdd.class.getName()).log(Level.SEVERE, null, ex);
}
ThreadAdd.count++;
}
}
In my main method I launch 2 threads :
public static void main(String[] args) throws InterruptedException {
ThreadAdd s1 = new ThreadAdd();
ThreadAdd s2 = new ThreadAdd();
s1.start();s2.start();
s1.join();
s2.join();
System.out.println(ThreadAdd.count);
}
I do not understand why most of the time the result is 2 but sometimes it returns 1.
The reason why you sometimes see 1 is not because join() fails to wait for the thread to finish, but because both threads tried to modify the value concurrently. When this happens, you may see unexpected results: for example, when both threads try to increment count which is zero, they both could read zero, then add 1 to it, and store the result. Both of them will store the same exact result, i.e. 1, so that's what you are going to see no matter how long you wait.
To fix this problem, add synchronized around the increment, or use AtomicInteger:
public static AtomicInteger count = new AtomicInteger(0);
@Override
public void run() {
try {
Thread.sleep(100);
} catch (InterruptedException ex) {
Logger.getLogger(ThreadAdd.class.getName()).log(Level.SEVERE, null, ex);
}
ThreadAdd.count.incrementAndGet();
}
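The synchronized alternative mentioned above would look roughly like this (a sketch; the shared lock object is my own addition):

public class ThreadAdd extends Thread {
    public static int count;
    private static final Object lock = new Object(); // shared by all ThreadAdd instances

    @Override
    public void run() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
        synchronized (lock) {
            count++; // the read-modify-write now happens under one lock
        }
    }
}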
The join method is not the real issue here. The problem is that your counter is not prepared for interthread synchronization, which may lead to each thread observing a different value in count.
It is highly recommended that you study some topics of concurrent programming, including how it is handled in Java.
Because you're not synchronizing the increment of the integer count. The two threads may interleave while incrementing the variable.
See http://docs.oracle.com/javase/tutorial/essential/concurrency/interfere.html for an explanation. The example in the link is similar to your example and a solution provided to avoid this thread interference is to use atomic variables like java.util.concurrent.atomic.AtomicInteger.
Your count variable isn't volatile, and so there's no requirement for threads to check its value each time, and occasionally instruction ordering will cause errors like that.
In fact, though, since count++ is syntactic sugar for count = count + 1, even making the variable volatile won't ensure that you don't have the problem, since there's a race condition between the read and the subsequent write.
To make code like this safe, use an AtomicInteger instead.
This has nothing to do with the join. The thread that waits by using join() is your main thread. The two other threads are not waiting for anything. And the join is not causing them to do anything differently.
And as the other answers said, the two threads are concurrently writing to the same variable, and therefore you get the result you see.
Perhaps you were expecting the join() to delay one of the threads so that it doesn't work concurrently with the other, but that's not how it works. The only thread that is delayed is the caller of join(), not the target thread.
I want to check my understanding: if I surround a block of code with a synchronized(this){} statement, does that mean I am making those statements atomic?
No, it does not ensure your statements are atomic. For example, if you have two statements inside one synchronized block, the first may succeed but the second may fail, so the result is not "all or nothing". But with regard to multiple threads, you do ensure that no statements of two threads are interleaved. In other words, all statements of all threads are strictly serialized, even though there is no guarantee that either all or none of a thread's statements get executed.
Have a look at how Atomicity is defined.
Here is an example showing that the reader is able to read a corrupted state, and hence that the synchronized block was not executed atomically (forgive me the nasty formatting):
import java.util.Arrays;
import java.util.HashSet;
import java.util.Random;
import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Example {
public static void sleep() {
try { Thread.sleep(400); } catch (InterruptedException e) {};
}
public static void main(String[] args) {
final Example example = new Example(1);
ExecutorService executor = Executors.newFixedThreadPool(2);
try {
Future<?> reader = executor.submit(new Runnable() { @Override public void run() {
int value; do {
value = example.getSingleElement();
System.out.println("single value is: " + value);
} while (value != 10);
}});
Future<?> writer = executor.submit(new Runnable() { @Override public void run() {
for (int value = 2; value < 10; value++) example.failDoingAtomic(value);
}});
reader.get(); writer.get();
} catch (Exception e) { e.getCause().printStackTrace();
} finally { executor.shutdown(); }
}
private final Set<Integer> singleElementSet;
public Example(int singleIntValue) {
singleElementSet = new HashSet<>(Arrays.asList(singleIntValue));
}
public synchronized void failDoingAtomic(int replacement) {
singleElementSet.clear();
if (new Random().nextBoolean()) sleep();
else throw new RuntimeException("I failed badly before adding the new value :-(");
singleElementSet.add(replacement);
}
public int getSingleElement() {
return singleElementSet.iterator().next();
}
}
No, synchronization and atomicity are two different concepts.
Synchronization means that a code block can be executed by at most one thread at a time, but other threads (that execute some other code that uses the same data) can see intermediate results produced inside the "synchronized" block.
Atomicity means that other threads do not see intermediate results - they see either the initial or the final state of the data affected by the atomic operation.
It's unfortunate that java uses synchronized as a keyword. A synchronized block in Java is a "mutex" (short for "mutual exclusion"). It's a mechanism that insures only one thread at a time can enter the block.
Mutexes are just one of many tools that are used to achieve "synchronization" in a multi-threaded program: Broadly speaking, synchronization refers to all of the techniques that are used to insure that the threads will work in a coordinated fashion to achieve a desired outcome.
Atomicity is what Oleg Estekhin said, above. We usually hear about it in the context of "transactions." Mutual exclusion (i.e., Java's synchronized) guarantees something less than atomicity: Namely, it protects invariants.
An invariant is any assertion about the program's state that is supposed to be "always" true. E.g., in a game where players exchange virtual coins, the total number of coins in the game might be an invariant. But it's often impossible to advance the state of the program without temporarily breaking the invariant. The purpose of mutexes is to insure that only one thread---the one that is doing the work---can see the temporary "broken" state.
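To make the coin example concrete, here is a hedged sketch (class and method names are invented for illustration): the transfer temporarily breaks the "total number of coins is constant" invariant between the two updates, and the mutex prevents any other thread from observing that broken state.

class CoinGame {
    private final int[] coins = {50, 50}; // invariant: coins[0] + coins[1] == 100

    synchronized void transfer(int from, int to, int amount) {
        coins[from] -= amount; // invariant is broken here...
        coins[to] += amount;   // ...and restored here; no other thread can look in between
    }

    synchronized int total() {
        return coins[0] + coins[1]; // always observes 100
    }
}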
For code that uses synchronized on that object - yes.
For code that doesn't use the synchronized keyword on that object - no.
Can we say that by synchronizing a block of code we are making the contained statements atomic?
You are taking a very big leap there. Atomicity means that an atomic operation completes in one CPU cycle, or the equivalent of one CPU cycle, whereas synchronizing a block means only one thread can access the critical region. It may take multiple CPU cycles to process the code in the critical region (which makes it non-atomic).
I am trying to have some threads on a queue so I can manage them from there. Is this possible? I have some code, but it doesn't work correctly.
The main idea is to generate X threads and put every thread inside a queue in another class. Then, in the class that holds the queue, use the wait() and notify() methods to get a FIFO execution order.
Thanks in advance.
Some of the code:
public synchronized void semWait(Thread petitionerThread){
count--;
if(count < 0){
try {
petitionerThread.wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
FIFOQueue.add(petitionerThread);
}
}
public synchronized void semSignal(Thread noticeThread){
count++;
if(count <= 0)
if(!FIFOQueue.isEmpty())
FIFOQueue.pollLast().notify();
}
Edit: The problem is that when a thread enters the queue and is put to wait, somehow the semSignal method is never executed for any of the other threads (it is called after semWait()).
You might want to check BlockingQueue (concrete class LinkedBlockingQueue) in Java. This queue allows you to put any object into it, even a Thread. queue.put() will wait if the queue is full, and queue.take() will wait if the queue is empty. wait() and notify() are implicitly taken care of.
Then a set of threads can take from the queue and execute the items in order.
We are talking about a producer-consumer problem.
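A rough sketch of that idea, queuing Runnable tasks rather than Thread objects (the names are illustrative): one consumer thread takes tasks in FIFO order; take() blocks while the queue is empty, so no explicit wait()/notify() is needed.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class FifoTaskQueue {
    public static void main(String[] args) {
        BlockingQueue<Runnable> tasks = new LinkedBlockingQueue<>();

        Thread consumer = new Thread(() -> {
            try {
                while (true) {                    // a real program would use a shutdown signal
                    Runnable task = tasks.take(); // blocks while the queue is empty
                    task.run();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        for (int i = 1; i <= 5; i++) {
            final int id = i;
            tasks.add(() -> System.out.println("running task " + id)); // producer side
        }
    }
}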
Your code violates one basic programming rule: let an object govern itself. First, code that waits/notifies should be inside the methods of that object. Then, if you want a thread to behave in some way, program its run method accordingly. In your code you try to manipulate threads as if they were ordinary objects, but they are not. The low-level code which treats threads as objects is already implemented in wait/notify/synchronized and the other synchronization primitives, and you need not reinvent the wheel unless you are writing a new operating system.
It looks like you are trying to implement a Semaphore. In that case your methods need no parameters: semWait should place the current thread in the queue, and semSignal should release a thread from the queue rather than the thread passed as an argument.
One possible implementation is as follows:
class Sem {
int count;
public synchronized void semWait() throws InterruptedException {
while (count <= 0) {
wait();
}
count--;
}
public synchronized void semSignal() {
count++;
notify();
}
}
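A hypothetical usage sketch of that class (count starts at 0 here, so the worker has to wait for a signal):

public class SemDemo {
    public static void main(String[] args) throws InterruptedException {
        Sem sem = new Sem();
        Thread worker = new Thread(() -> {
            try {
                sem.semWait();                    // blocks until a permit is available
                System.out.println("worker proceeding");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
        Thread.sleep(100); // only to make it likely the worker blocks first
        sem.semSignal();   // releases the worker
        worker.join();
    }
}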
Why is it that two synchronized blocks can't be executed simultaneously by two different threads in Java?
EDIT
public class JavaApplication4 {
public static void main(String[] args) {
new JavaApplication4();
}
public JavaApplication4() {
Thread t1 = new Thread() {
@Override
public void run() {
if (Thread.currentThread().getName().equals("Thread-1")) {
test(Thread.currentThread().getName());
} else {
test1(Thread.currentThread().getName());
}
}
};
Thread t2 = new Thread(t1);
t2.start();
t1.start();
}
public synchronized void test(String msg) {
for (int i = 0; i < 10; i++) {
try {
Thread.sleep(100);
} catch (InterruptedException ex) {
}
System.out.println(msg);
}
}
public synchronized void test1(String msg) {
for (int i = 0; i < 10; i++) {
try {
Thread.sleep(100);
} catch (InterruptedException ex) {
}
System.out.println(msg + " from test1");
}
}
}
Your statement is false. Any number of synchronized blocks can execute in parallel as long as they don't contend for the same lock.
But if your question is about blocks contending for the same lock, then it is wrong to ask "why is it so" because that is the purpose of the whole concept. Programmers need a mutual exclusion mechanism and they get it from Java through synchronized.
Finally, you may be asking "Why would we ever need to mutually exclude code segments from executing in parallel". The answer to that would be that there are many data structures that only make sense when they are organized in a certain way and when a thread updates the structure, it necessarily does it part by part, so the structure is in a "broken" state while it's doing the update. If another thread were to come along at that point and try to read the structure, or even worse, update it on its own, the whole thing would fall apart.
EDIT
I saw your example and your comments and now it's obvious what is troubling you: the semantics of the synchronized modifier on a method. It means that the method contends for the lock on the monitor of this. All synchronized methods of the same object contend for the same lock.
That is the whole concept of synchronization: if you are taking a lock on an object (or a class), none of the other threads can access any of the synchronized blocks.
Example
class A {
public void method1()
{
synchronized(this)//Block 1 taking lock on Object
{
//do something
}
}
public void method2()
{
synchronized(this)//Block 2 taking lock on Object
{
//do something
}
}
}
If one thread of an object enters any of the synchronized blocks, all other threads of the same object will have to wait for that thread to come out of the synchronized block before they can enter any of the synchronized blocks. If there are N such blocks, only one thread of the object can access only one block at a time. Please note my emphasis on threads of the same object. The concept does not apply if we are dealing with threads from different objects.
Let me also add that if you take a lock on the class, the above concept expands to every object of the class. So if, instead of saying synchronized(this), I had used synchronized(A.class), the code would instruct the JVM that, irrespective of the object the thread belongs to, it must wait for the other thread to finish the synchronized block execution.
Edit: Please understand that when you take a lock (by using the synchronized keyword), you are not just taking a lock on one block. You are taking a lock on the object. That means you are telling the JVM: "hey, this thread is doing some critical work which might change the state of the object (or class), so don't let any other thread do any other critical work". Critical work here refers to all the code in synchronized blocks which take a lock on that particular object (or class), not only the code in one synchronized block.
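A small sketch of the difference (the class name A follows the example above; the bodies are placeholders):

class A {
    public void method1() {
        synchronized (A.class) { // class-level lock: shared by ALL instances of A
            System.out.println(Thread.currentThread().getName() + " holds the class lock");
        }
    }

    public void method2() {
        synchronized (this) {    // instance-level lock: one monitor per instance
            System.out.println(Thread.currentThread().getName() + " holds the instance lock");
        }
    }
}

With two threads calling method1 on two different A instances, only one can be inside the block at a time; with method2, both can run concurrently because each instance has its own monitor.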
This is not absolutely true. If you are dealing with locks on different objects then multiple threads can execute those blocks.
synchronized(obj1){
//your code here
}
synchronized(obj2){
//your code here
}
In the above case one thread can execute the first block and a second thread can execute the second block; the point is that the threads are working with different locks.
Your statement is correct if the threads are dealing with the same lock. Every object is associated with exactly one lock in Java; if one thread has acquired that lock and is executing, then another thread has to wait until the first thread releases it. The lock can be acquired via a synchronized block or a synchronized method.
Two threads can execute synchronized blocks simultaneously as long as they are not locking the same object.
If the blocks are synchronized on different objects, they can execute simultaneously.
synchronized(object1){
...
}
synchronized(object2){
...
}
EDIT:
Please reason the output for http://pastebin.com/tcJT009i
In your example when you are invoking synchronized methods the lock is acquired over the same object. Try creating two objects and see.