I was practicing multithreading in Java and wrote the code below:
class Printer {
synchronized void printHi(String x) {
System.out.println(x);
}
}
class MyThread extends Thread {
Printer objm;
MyThread(Printer a) {
objm = a;
}
@Override
public void run() {
for (int i = 0; i < 10; i++) {
objm.printHi("MyThread" + i);
}
}
}
class YourThread extends Thread {
Printer objy;
YourThread(Printer a) {
objy = a;
}
@Override
public void run() {
for (int i = 0; i < 10; i++) {
objy.printHi("YourThread" + i);
}
}
}
public class test {
public static void main(String[] args) {
Printer ob = new Printer();
MyThread mt = new MyThread(ob);
YourThread yt = new YourThread(ob);
mt.start();
yt.start();
}
}
Sometimes I get the output as:
MyThread0
YourThread0
MyThread1
YourThread1
MyThread2
YourThread2
MyThread3
YourThread3
YourThread4
MyThread4
MyThread5
MyThread6
YourThread5
MyThread7
YourThread6
MyThread8
MyThread9
YourThread7
YourThread8
YourThread9
which is out of order. Why is that, even after making the function printHi() synchronized?
synchronized means that only one thread at a time can run a block of code guarded by the same lock object (for a synchronized instance method, the lock is the instance itself). It does not mean the threads will run in any particular order. It is perfectly correct for the two threads to enter those synchronized blocks in any order, and any number of times, before the other thread does. What you want is much more difficult and beyond what a simple synchronized block can do.
If you want it to always go Thread A, Thread B, Thread A, Thread B, first I'd question whether those two things should actually be separate threads. Wanting things to run in strict sequence like that is the number one sign that the work isn't really asynchronous and shouldn't be split across threads. But if they must be, you're probably best off with two threads with message handlers, sending messages to each other about when they're allowed to run, or using semaphores to signal each other (see the sketch below). It's hard to give exact advice because the problem here is obviously a trivialized version of something harder, and without the details the right implementation is hard to guess.
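For illustration, here is a minimal sketch of the semaphore idea (class and variable names are my own, not from the question). Each thread must acquire its own permit before printing and then hands the turn to the other thread:
import java.util.concurrent.Semaphore;

public class AlternatingDemo {
    public static void main(String[] args) {
        Semaphore myTurn = new Semaphore(1);    // MyThread may go first
        Semaphore yourTurn = new Semaphore(0);  // YourThread waits until MyThread releases it

        Thread my = new Thread(() -> {
            for (int i = 0; i < 10; i++) {
                try {
                    myTurn.acquire();                    // wait for our turn
                    System.out.println("MyThread" + i);
                    yourTurn.release();                  // hand the turn to the other thread
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        Thread your = new Thread(() -> {
            for (int i = 0; i < 10; i++) {
                try {
                    yourTurn.acquire();
                    System.out.println("YourThread" + i);
                    myTurn.release();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        my.start();
        your.start();
    }
}
With one permit in myTurn and zero in yourTurn at the start, the output alternates strictly: MyThread0, YourThread0, MyThread1, YourThread1, and so on.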
You're synchronizing the printHi method call. That means two calls to printHi on the same object won't run at the same time. That's great. It means you'll never get output like
MyThYourThreadread77
(Side note: Java's printing primitives are already somewhat synchronized if I recall correctly, so that shouldn't happen anyway. But for demonstrative purposes, it'll do)
However, your loop is not synchronized. If you want the whole loop to only happen in one thread at a time, remove synchronized from printHi and write your loop as
synchronized (objm) {
for (int i = 0; i < 10; i++) {
objm.printHi("MyThread" + i);
}
}
(and the same for objy in YourThread)
There's still no guarantee on which one will happen first, but the two loops won't interrupt each other in this case.
Related
Let us say I have two classes, a main class and a Thread class, as follows:
public class A {
public static void main(String []args){
int count = 0;
for(int i = 0; i < 10; i++){
count+=10;
//here on every addition, I want to update the variable countOfAdd of the thread class
//and when countOfAdd value is in multiples of 5 I want to print a statement
}
}
}
class B extends Thread {
int countOfAdd;
@Override
public void run(){
//on value received
count+=1;
}
}
I don't know whether this is possible or not. If it is possible, how do I do it?
Thanks in advance.
The normal way to do that is a queue.
Create a queue and make references to it available to both threads.
The main thread should add() an element to the queue (e.g. the amount of increment).
The other thread should poll() the queue and use this information to update its internal state.
This way none of the intermediate updates are going to be lost between the threads.
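A minimal sketch of that queue idea applied to the question (the class name and message format are my own choices, not from the original code): the main thread publishes a message for every addition, and the worker thread counts the messages it receives.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueHandoff {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> updates = new LinkedBlockingQueue<>();

        Thread b = new Thread(() -> {
            int countOfAdd = 0;
            try {
                while (countOfAdd < 10) {
                    updates.take();                 // blocking cousin of poll(): waits for the next update
                    countOfAdd += 1;
                    if (countOfAdd % 5 == 0) {
                        System.out.println("countOfAdd reached " + countOfAdd);
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        b.start();

        int count = 0;
        for (int i = 0; i < 10; i++) {
            count += 10;
            updates.add(count);                     // publish this addition to the other thread
        }
        b.join();
    }
}
Because every addition goes through the queue, none of the intermediate updates is lost, and the worker never reads a half-updated value.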
The quick and dirty way is direct access and locking.
Both of your threads can keep a reference to a common piece of data, and a common lock object (which can just be a Object commonLock = new Object()).
Every time either thread needs to access the shared data, it does so while holding the lock, e.g.:
synchronized (commonLock) { commonCount +=1; } // One thread.
synchronized (commonLock) { if (commonCount > 1) {...} } // Another thread.
This is harder to reason about, but can be made serviceable if the number of accesses in each thread is made small, preferably just one.
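A compact sketch of that shared-lock idea (field and class names are mine):
public class SharedCountDemo {
    private static final Object commonLock = new Object();
    private static int commonCount = 0;   // only ever touched while holding commonLock

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 10; i++) {
                synchronized (commonLock) {     // one thread increments...
                    commonCount += 1;
                }
            }
        });
        Thread reader = new Thread(() -> {
            for (int i = 0; i < 10; i++) {
                synchronized (commonLock) {     // ...another thread reads under the same lock
                    if (commonCount > 1) {
                        System.out.println("seen count: " + commonCount);
                    }
                }
            }
        });
        writer.start();
        reader.start();
        writer.join();
        reader.join();
    }
}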
I don't know why you are using a Thread here, but anyway:
1. Without Thread
public class A {
public static void main(String []args){
int count = 0;
B objectB = new B();
for(int i = 0; i < 10; i++){
count+=10;
//here on every addition, I want to update the variable countOfAdd
//of the thread class and when countOfAdd value is in multiples of 5
//I want to print a statement
objectB.setCount(YourInput);// set your value
if(objectB.getCount()%5==0){
//do your task
}
}
}
}
class B {
int countOfAdd;
public int getCount(){return countOfAdd;}
public void setCount(int countOfAdd){this.countOfAdd = countOfAdd;}
}
2. With Thread
Use Pub-sub pattern
Implementation of pub sub pattern in Java
The Java multithreading tutorial gives an example of memory consistency errors, but I cannot reproduce it. Is there any other way to simulate a memory consistency error?
The example provided in the tutorial:
Suppose a simple int field is defined and initialized:
int counter = 0;
The counter field is shared between two threads, A and B. Suppose thread A increments counter:
counter++;
Then, shortly afterwards, thread B prints out counter:
System.out.println(counter);
If the two statements had been executed in the same thread, it would be safe to assume that the value printed out would be "1". But if the two statements are executed in separate threads, the value printed out might well be "0", because there's no guarantee that thread A's change to counter will be visible to thread B — unless the programmer has established a happens-before relationship between these two statements.
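For concreteness, here is a minimal sketch of mine (not from the tutorial) of one way to establish that happens-before relationship, using a volatile flag so that the reading thread is guaranteed to print 1:
public class VisibilityDemo {
    static int counter = 0;
    static volatile boolean published = false;   // the volatile write/read pair creates the happens-before edge

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> {
            counter++;          // ordinary write...
            published = true;   // ...made visible to b by the volatile write that follows it
        });
        Thread b = new Thread(() -> {
            while (!published) { }           // spin until the volatile read observes true
            System.out.println(counter);     // guaranteed to print 1, never 0
        });
        b.start();
        a.start();
        a.join();
        b.join();
    }
}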
I answered a question a while ago about a bug in Java 5. Why doesn't volatile in java 5+ ensure visibility from another thread?
Given this piece of code:
public class Test {
volatile static private int a;
static private int b;
public static void main(String [] args) throws Exception {
for (int i = 0; i < 100; i++) {
new Thread() {
@Override
public void run() {
int tt = b; // makes the jvm cache the value of b
while (a==0) {
}
if (b == 0) {
System.out.println("error");
}
}
}.start();
}
b = 1;
a = 1;
}
}
The volatile store to a happens after the normal store to b. So when the thread runs and sees a != 0, the rules defined in the JMM guarantee that it must also see b == 1.
The bug in the JRE allowed the thread to reach the error line; it was subsequently fixed. This definitely can fail if you don't have a declared as volatile.
This might reproduce the problem; at least on my computer, I can reproduce it after a few loops.
Suppose you have a Holder class:
class Holder {
boolean flag = false;
long modifyTime = Long.MAX_VALUE;
}
Let thread_A set flag to true and save the time into modifyTime.
Let another thread, say thread_B, read the Holder's flag. If thread_B still gets false even at a time later than modifyTime, then we can say we have reproduced the problem.
Example code
class Holder {
boolean flag = false;
long modifyTime = Long.MAX_VALUE;
}
public class App {
public static void main(String[] args) {
while (!test());
}
private static boolean test() {
final Holder holder = new Holder();
new Thread(new Runnable() {
@Override
public void run() {
try {
Thread.sleep(10);
holder.flag = true;
holder.modifyTime = System.currentTimeMillis();
} catch (Exception e) {
e.printStackTrace();
}
}
}).start();
long lastCheckStartTime = 0L;
long lastCheckFailTime = 0L;
while (true) {
lastCheckStartTime = System.currentTimeMillis();
if (holder.flag) {
break;
} else {
lastCheckFailTime = System.currentTimeMillis();
System.out.println(lastCheckFailTime);
}
}
if (lastCheckFailTime > holder.modifyTime
&& lastCheckStartTime > holder.modifyTime) {
System.out.println("last check fail time " + lastCheckFailTime);
System.out.println("modify time " + holder.modifyTime);
return true;
} else {
return false;
}
}
}
Result
last check fail time 1565285999497
modify time 1565285999494
This means thread_B got false from the Holder's flag field at time 1565285999497, even though thread_A had set it to true at time 1565285999494 (3 milliseconds earlier).
The example in the tutorial is not well suited to demonstrating the memory consistency issue: making it fail requires brittle reasoning and complicated coding, and even then you may not see the problem. Multithreading issues occur due to unlucky timing, so if you want to increase the chances of observing the issue, you need to increase the chances of unlucky timing.
The following program achieves that.
public class ConsistencyIssue {
static int counter = 0;
public static void main(String[] args) throws InterruptedException {
Thread thread1 = new Thread(new Increment(), "Thread-1");
Thread thread2 = new Thread(new Increment(), "Thread-2");
thread1.start();
thread2.start();
thread1.join();
thread2.join();
System.out.println(counter);
}
private static class Increment implements Runnable{
@Override
public void run() {
for(int i = 1; i <= 10000; i++)
counter++;
}
}
}
Execution 1 output: 10963,
Execution 2 output: 14552
The final count should have been 20000, but it is less than that. The reason is that counter++ is a multi-step operation:
1. read counter
2. increment the value
3. store it back
Two threads may both read, say, the value 1 at the same time, increment it to 2, and each write 2 back. In a serial execution it would have gone 1 -> 2 -> 3.
We need a way to make all three steps atomic, i.e. executed by only one thread at a time.
Solution 1: Synchronized
Surround the increment with synchronized. Since counter is a static variable, you need to use class-level synchronization:
@Override
public void run() {
for (int i = 1; i <= 10000; i++)
synchronized (ConsistencyIssue.class) {
counter++;
}
}
Now it outputs: 20000
Solution 2: AtomicInteger
public class ConsistencyIssue {
static AtomicInteger counter = new AtomicInteger(0);
public static void main(String[] args) throws InterruptedException {
Thread thread1 = new Thread(new Increment(), "Thread-1");
Thread thread2 = new Thread(new Increment(), "Thread-2");
thread1.start();
thread2.start();
thread1.join();
thread2.join();
System.out.println(counter.get());
}
private static class Increment implements Runnable {
@Override
public void run() {
for (int i = 1; i <= 10000; i++)
counter.incrementAndGet();
}
}
}
We could do this with semaphores or explicit locking too, but for this simple code AtomicInteger is enough.
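For example, a sketch of the same counter guarded by an explicit java.util.concurrent.locks.ReentrantLock (my variation, not part of the original answer):
import java.util.concurrent.locks.ReentrantLock;

public class ConsistencyIssueWithLock {
    static int counter = 0;
    static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int i = 1; i <= 10000; i++) {
                lock.lock();          // only one thread can hold the lock at a time
                try {
                    counter++;
                } finally {
                    lock.unlock();    // always release, even if the body throws
                }
            }
        };
        Thread thread1 = new Thread(increment, "Thread-1");
        Thread thread2 = new Thread(increment, "Thread-2");
        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
        System.out.println(counter);  // 20000
    }
}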
Sometimes when I try to reproduce some real concurrency problems, I use the debugger.
Make a breakpoint on the print and a breakpoint on the increment and run the whole thing.
Releasing the breakpoints in different sequences gives different results.
Maybe too simple, but it worked for me.
Please have another look at how the example is introduced in your source.
The key to avoiding memory consistency errors is understanding the happens-before relationship. This relationship is simply a guarantee that memory writes by one specific statement are visible to another specific statement. To see this, consider the following example.
This example illustrates the fact that multi-threading is not deterministic, in the sense that you get no guarantee about the order in which operations of different threads will be executed, which might result in different observations across several runs. But it does not illustrate a memory consistency error!
To understand what a memory consistency error is, you need to first get an insight about memory consistency. The simplest model of memory consistency has been introduced by Lamport in 1979. Here is the original definition.
The result of any execution is the same as if the operations of all the processes were executed in some sequential order and the operations of each individual process appear in this sequence in the order specified by its program
Now, consider the example multi-threaded program shown in this image from a more recent research paper about sequential consistency. It illustrates what a real memory consistency error might look like.
To finally answer your question, please note the following points:
A memory consistency error always depends on the underlying memory model (a particular programming language may allow more behaviours for optimization purposes). What the best memory model is remains an open research question.
The example above shows a violation of sequential consistency, but there is no guarantee that you can observe it in your favorite programming language, for two reasons: it depends on the language's exact memory model, and, due to nondeterminism, you have no way to force a particular incorrect execution.
Memory models are a wide topic. To learn more, you can for example have a look at Torsten Hoefler and Markus Püschel's course at ETH Zürich, from which I learned most of these concepts.
Sources
Leslie Lamport. How to Make a Multiprocessor Computer That Correctly Executes Multiprocessor Programs, 1979
Wei-Yu Chen, Arvind Krishnamurthy, Katherine Yelick, Polynomial-Time Algorithms for Enforcing Sequential Consistency in SPMD Programs with Arrays, 2003
Design of Parallel and High-Performance Computing course, ETH Zürich
I'm learning about multithreaded counters and I'm wondering why, no matter how many times I run the code, it produces the right result.
public class MainClass {
public static void main(String[] args) {
Counter counter = new Counter();
for (int i = 0; i < 3; i++) {
CounterThread thread = new CounterThread(counter);
thread.start();
}
}
}
public class CounterThread extends Thread {
private Counter counter;
public CounterThread(Counter counter) {
this.counter = counter;
}
public void run() {
for (int i = 0; i < 10; i++) {
this.counter.add();
}
this.counter.print();
}
}
public class Counter {
private int count = 0;
public void add() {
this.count = this.count + 1;
}
public void print() {
System.out.println(this.count);
}
}
And this is the result
10
20
30
Not sure if this is just a fluke or is this expected? I thought the result is going to be
10
10
10
Try increasing the loop count from 10 to 10000 and you'll likely see some differences in the output.
The most logical explanation is that with only 10 additions, a thread is too fast to finish before the next thread gets started and adds on top of the previous result.
I'm learning about multithreaded counters and I'm wondering why, no matter how many times I run the code, it produces the right result.
tl;dr: Check out @manouti's answer.
Even though you are sharing the same Counter object, which is unsynchronized, there are a couple of things that are causing your 3 threads to run (or look like they are running) serially with data synchronization. I had to work hard on my 8 proc Intel Linux box to get it to show any interleaving.
When threads start and when they finish, memory barriers are crossed. According to the Java Memory Model, the guarantee is that the thread that calls thread.join() will see the results published by the finished thread, but I suspect a central memory flush happens when a thread finishes. This means that if the threads run serially (and with such a small loop it's hard for them not to), they will act as if there were no concurrency because they will see each other's changes to the Counter.
Putting a Thread.sleep(100); at the front of the thread run() method causes it to not run serially. It also hopefully causes the threads to cache the Counter and not see the results published by other threads that have already finished. Still needed help though.
Starting the threads in a loop after they all have been instantiated helps concurrency.
Another thing that causes synchronization is:
System.out.println(this.count);
System.out is a Printstream which is a synchronized class. Every time a thread calls println(...) it is publishing its results to central memory. If you instead recorded the value and then displayed it later, it might show better interleaving.
I really wonder if some Java compiler inlining of the Counter class at some point is causing part of the artificial synchronization. For example, I'm really surprised that a Thread.sleep(1000) at the front and end of the thread.run() method doesn't show 10,10,10.
It should be noted that on a non-intel architecture, with different memory and/or thread models, this might be easier to reproduce.
Oh, as commentary and apropos of nothing, typically it is recommended to implement Runnable instead of extending Thread.
So the following is my tweaks to your test program.
public class CounterThread extends Thread {
private Counter counter;
int result;
...
public void run() {
try {
Thread.sleep(100);
} catch (InterruptedException e1) {
Thread.currentThread().interrupt(); // good pattern
return;
}
for (int i = 0; i < 10; i++) {
counter.add();
}
result = counter.count;
// no print here
}
}
Then your main could do something like:
Counter counter = new Counter();
List<CounterThread> counterThreads = new ArrayList<>();
for (int i = 0; i < 3; i++) {
counterThreads.add(new CounterThread(counter));
}
// start in a loop after constructing them all which improves the overlap chances
for (CounterThread counterThread : counterThreads) {
counterThread.start();
}
// wait for them to finish
for (CounterThread counterThread : counterThreads) {
counterThread.join();
}
// print the results
for (CounterThread counterThread : counterThreads) {
System.out.println(counterThread.result);
}
Even with this, I never see 10,10,10 output on my box and I often see 10,20,30. Closest I get is 12,12,12.
Shows you how hard it is to properly test a threaded program. Believe me, if this code were in production and you were expecting the "free" synchronization, that is when it would fail you. ;-)
public class Computation extends Thread {
private int num;
private boolean isComplete;
public Computation(int nu) {
num = nu;
}
public void run() {
System.out.println("Thread Called is: " + Thread.currentThread().getName());
}
public static void main(String... args) {
Computation [] c = new Computation[4];
for (int i = 0; i < 3; i++) {
c[i] = new Computation(i);
c[i].start();
}
}
}
My question is: in the main function we create a new Computation object each time and start a thread on it, so why would we need to synchronize the run method? As we know, the 'this' reference is different for every object of the class, so we shouldn't need to synchronize.
Also, in another example:
public class DiffObjSynchronized implements Runnable {
@Override
public void run() {
move(Thread.currentThread().getId());
}
public synchronized void move(long id) {
System.out.print(id + " ");
System.out.print(id + " ");
}
public static void main(String []args) {
DiffObjSynchronized a = new DiffObjSynchronized();
/**** output ****/
// 8 9 8 9
new Thread(a).start();
new Thread(new DiffObjSynchronized()).start();
}
}
Here is a second example, just like the first: we create threads on two different instances of the class. Here we synchronize the move() method, but by definition:
"two different objects can enter the synchronized method at the same time"
Please share your feedback.
If I understand you correctly, your question is: "Why is the move method synchronized?"
The answer is: it shouldn't be, for two reasons:
It doesn't access any fields, so there is nothing that could be corrupted by having many threads inside that method at once.
Each thread gets a different instance of the object, and thus a different lock. So the synchronized modifier makes no difference. Each thread can still enter its own instance's move method because they have separate locks.
You only need to synchronize when you have some data which is being shared between threads, and at least one thread is modifying that data.
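If the instances really did share mutable data, one way to make the synchronization meaningful would be to have every instance lock on something they all share, for example a static lock object. A minimal sketch of mine (not from the question):
public class SharedLockDemo implements Runnable {
    // one lock object shared by all instances, so different instances still exclude each other
    private static final Object LOCK = new Object();

    @Override
    public void run() {
        move(Thread.currentThread().getId());
    }

    public void move(long id) {
        synchronized (LOCK) {
            System.out.print(id + " ");
            System.out.print(id + " ");   // both prints from one thread now stay together
        }
    }

    public static void main(String[] args) {
        new Thread(new SharedLockDemo()).start();
        new Thread(new SharedLockDemo()).start();
    }
}
Alternatively, pass the same instance to both threads, as the first new Thread(a) call in the question does; then the instance's own lock is the shared one.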
Your threads are operating on different objects since you create a new instance for each thread. The intrinsic lock used by synchronized belongs to the instance. So the synchronized methods entered by your threads are guarded by different locks.
You need to understand how synchronization works.
Threads take a 'lock' on the object on which you are synchronizing when they enter the synchronized block. If you have a synchronized method, that object is the 'this' instance. No two threads can take a lock on the same object at the same time. Object locks are mutex-based in philosophy, so only one thread can hold the mutex at a time. When the thread holding the lock exits the synchronized method or block, it releases the mutex, and the object's lock becomes available for other threads to acquire.
This link explains the concepts excellently. It has pictures of disassembled bytecode which show how threads take and release locks and why two threads on two different objects don't block each other.
Is there anything wrong with the thread safety of this Java code? Threads 1-10 add numbers via sample.add(), and threads 11-20 call removeAndDouble() and print the results to stdout. I vaguely recall someone saying that assigning item the way I do in removeAndDouble(), and then using it outside the synchronized block, may not be thread safe; that the compiler may reorder the instructions so they occur out of sequence. Is that the case here? Is my removeAndDouble() method unsafe?
Is there anything else wrong from a concurrency perspective with this code? I am trying to get a better understanding of concurrency and the memory model with java (1.6 upwards).
import java.util.*;
import java.util.concurrent.*;
public class Sample {
private final List<Integer> list = new ArrayList<Integer>();
public void add(Integer o) {
synchronized (list) {
list.add(o);
list.notify();
}
}
public void waitUntilEmpty() {
synchronized (list) {
while (!list.isEmpty()) {
try {
list.wait(10000);
} catch (InterruptedException ex) { }
}
}
}
public void waitUntilNotEmpty() {
synchronized (list) {
while (list.isEmpty()) {
try {
list.wait(10000);
} catch (InterruptedException ex) { }
}
}
}
public Integer removeAndDouble() {
// item declared outside synchronized block
Integer item;
synchronized (list) {
waitUntilNotEmpty();
item = list.remove(0);
}
// Would this ever be anything but that from list.remove(0)?
return Integer.valueOf(item.intValue() * 2);
}
public static void main(String[] args) {
final Sample sample = new Sample();
for (int i = 0; i < 10; i++) {
Thread t = new Thread() {
public void run() {
while (true) {
System.out.println(getName()+" Found: " + sample.removeAndDouble());
}
}
};
t.setName("Consumer-"+i);
t.setDaemon(true);
t.start();
}
final ExecutorService producers = Executors.newFixedThreadPool(10);
for (int i = 0; i < 10; i++) {
final int j = i * 10000;
Thread t = new Thread() {
public void run() {
for (int c = 0; c < 1000; c++) {
sample.add(j + c);
}
}
};
t.setName("Producer-"+i);
t.setDaemon(false);
producers.execute(t);
}
producers.shutdown();
try {
producers.awaitTermination(600, TimeUnit.SECONDS);
} catch (InterruptedException e) {
e.printStackTrace();
}
sample.waitUntilEmpty();
System.out.println("Done.");
}
}
It looks thread safe to me. Here is my reasoning.
Every time you access list you do it synchronized. This is great. Even though you pull a value out of the list into item, that item is not accessed by multiple threads.
As long as you only access list while synchronized, you should be good (in your current design.)
Your synchronization is fine, and will not result in any out-of-order execution problems.
However, I do notice a few issues.
First, your waitUntilEmpty method would be much more timely if you added a list.notifyAll() after the list.remove(0) in removeAndDouble. This will eliminate an up-to-10-second delay in your wait(10000).
Second, your list.notify in add(Integer) should be a notifyAll, because notify only wakes one thread, and it may wake a thread that is waiting inside waitUntilEmpty instead of waitUntilNotEmpty.
Third, none of the above is terminal to your application's liveness, because you used bounded waits, but if you make the two above changes, your application will have better threaded performance (waitUntilEmpty) and the bounded waits become unnecessary and can become plain old no-arg waits.
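Putting those two suggestions together, the two methods inside the asker's Sample class would look roughly like this (a sketch of the proposed change, not definitive code):
public void add(Integer o) {
    synchronized (list) {
        list.add(o);
        list.notifyAll();   // wake all waiters, including any thread in waitUntilNotEmpty()
    }
}

public Integer removeAndDouble() {
    Integer item;
    synchronized (list) {
        waitUntilNotEmpty();
        item = list.remove(0);
        list.notifyAll();   // lets waitUntilEmpty() re-check right away instead of waiting out its timeout
    }
    return Integer.valueOf(item.intValue() * 2);
}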
Your code as-is is in fact thread safe. The reasoning behind this has two parts.
The first is mutual exclusion. Your synchronization correctly ensures that only one thread at a time will modify the collections.
The second has to do with your concern about compiler reordering. You're worried that the compiler could reorder the assignment in a way that wouldn't be thread safe. You don't have to worry about that in this case: synchronizing on the list creates a happens-before relationship, so the remove from the list happens-before the write to Integer item. This tells the compiler that it cannot reorder the write to item in that method.
Your code is thread-safe, but not concurrent (as in parallel). As everything is accessed under a single mutual exclusion lock, you are serialising all access, in effect access to the structure is single-threaded.
If you require the functionality as described in your production code, the java.util.concurrent package already provides a BlockingQueue with (fixed size) array and (growable) linked list based implementations. These are very interesting to study for implementation ideas at the very least.
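For a rough idea of what the BlockingQueue-based version could look like (a sketch assuming blocking behaviour is acceptable, not a drop-in replacement for the asker's class):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueSample {
    // unbounded, linked-list-backed queue; ArrayBlockingQueue would give a fixed capacity instead
    private final BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();

    public void add(Integer o) throws InterruptedException {
        queue.put(o);                  // never blocks for an unbounded queue
    }

    public Integer removeAndDouble() throws InterruptedException {
        Integer item = queue.take();   // blocks until an element is available
        return Integer.valueOf(item.intValue() * 2);
    }
}
The wait/notify choreography and the waitUntilNotEmpty() helper disappear entirely, because take() already blocks until data arrives.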