I have the following code:
public class CheckIfSame implements Runnable {
    private int[][] m;
    private int[][] mNew;
    private int row;
    private boolean same;

    public CheckIfSame(int[][] m, int[][] mNew, int row, boolean same) {
        this.m = m;
        this.mNew = mNew;
        this.row = row;
        this.same = same;
    }

    @Override
    public void run() {
        for (int i = 0; i < mNew[0].length; i++) {
            if (m[row][i] != mNew[row][i]) {
                same = false;
            }
        }
    }
}
Basically, the idea is to use multi-threading to check, row by row, whether the two matrices differ in at least one element.
I activate these threads through my main class, passing rows to an executor pool.
However, for some reason, the boolean same does not seem to update to false, even if the if condition is satisfied.
Multiple threads are trying to access that boolean at the same time: a race condition while updating the same variable.
Another problem with non-volatile booleans in multithreaded applications is compiler/runtime optimization: some threads may never notice changes to the boolean, because the compiler is allowed to assume the value did not change. As a result, until some specific trigger such as a thread state change, those threads may keep reading a stale, cached value.
You could choose an AtomicBoolean. Use it when you have multiple threads accessing a boolean variable. This will guarantee:
Synchronization.
Visibility of the updates (AtomicBoolean uses a volatile int internally).
For example:
public class CheckIfSame implements Runnable
{
    //...
    private AtomicBoolean same;

    public CheckIfSame(..., AtomicBoolean same)
    {
        //...
        this.same = same;
    }

    @Override
    public void run()
    {
        for (int i = 0; i < mNew[0].length; i++)
            if (m[row][i] != mNew[row][i])
                same.set(false); // <-- race conditions hate this
    }
}
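For completeness, here is a minimal sketch of how the main class might share one AtomicBoolean across all row tasks; the pool size, matrix dimensions, and the assumption that the constructor keeps its original parameter order (with the boolean replaced by the AtomicBoolean) are mine, not from the original post:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class Main {
    public static void main(String[] args) throws InterruptedException {
        int[][] m = new int[100][100];    // hypothetical matrices
        int[][] mNew = new int[100][100];

        AtomicBoolean same = new AtomicBoolean(true);          // shared by all row tasks
        ExecutorService pool = Executors.newFixedThreadPool(4);

        for (int row = 0; row < m.length; row++) {
            pool.execute(new CheckIfSame(m, mNew, row, same)); // one task per row
        }

        pool.shutdown();                                       // no new tasks
        pool.awaitTermination(1, TimeUnit.MINUTES);            // wait for all rows to be checked

        System.out.println("Matrices are the same: " + same.get());
    }
}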
Consider the following code:
public static void main(String[] args) throws InterruptedException {
    int nThreads = 10;
    MyThread[] threads = new MyThread[nThreads];
    AtomicReferenceArray<Object> array = new AtomicReferenceArray<>(nThreads);
    for (int i = 0; i < nThreads; i++) {
        MyThread thread = new MyThread(array, i);
        threads[i] = thread;
        thread.start();
    }
    for (MyThread thread : threads)
        thread.join();
    for (int i = 0; i < nThreads; i++) {
        Object obj_i = array.get(i);
        // do something with obj_i...
    }
}
private static class MyThread extends Thread {
    private final AtomicReferenceArray<Object> pArray;
    private final int pIndex;

    public MyThread(final AtomicReferenceArray<Object> array, final int index) {
        pArray = array;
        pIndex = index;
    }

    @Override
    public void run() {
        // some entirely local time-consuming computation...
        pArray.set(pIndex, /* result of the computation */);
    }
}
Each MyThread computes something entirely locally (without need to synchronize with other threads) and writes the result to its specific array cell. The main thread waits until all MyThreads have finished, and then retrieves the results and does something with them.
Using the get and set methods of AtomicReferenceArray provides a memory ordering which guarantees that the main thread will see the results written by the MyThreads.
However, since every array cell is written only once, and no MyThread has to see the result written by any other MyThread, I wonder if these strong ordering guarantees are actually necessary or if the following code, with plain array cell accesses, would be guaranteed to always yield the same results as the code above:
public static void main(String[] args) throws InterruptedException {
    int nThreads = 10;
    MyThread[] threads = new MyThread[nThreads];
    Object[] array = new Object[nThreads];
    for (int i = 0; i < nThreads; i++) {
        MyThread thread = new MyThread(array, i);
        threads[i] = thread;
        thread.start();
    }
    for (MyThread thread : threads)
        thread.join();
    for (int i = 0; i < nThreads; i++) {
        Object obj_i = array[i];
        // do something with obj_i...
    }
}
private static class MyThread extends Thread {
    private final Object[] pArray;
    private final int pIndex;

    public MyThread(final Object[] array, final int index) {
        pArray = array;
        pIndex = index;
    }

    @Override
    public void run() {
        // some entirely local time-consuming computation...
        pArray[pIndex] = /* result of the computation */;
    }
}
On the one hand, under plain mode access the compiler or runtime might happen to optimize away the read accesses to array in the final loop of the main thread and replace Object obj_i = array[i]; with Object obj_i = null; (the implicit initialization of the array) as the array is not modified from within that thread. On the other hand, I have read somewhere that Thread.join makes all changes of the joined thread visible to the calling thread (which would be sensible), so Object obj_i = array[i]; should see the object reference assigned by the i-th MyThread.
So, would the latter code produce the same results as the above?
So, would the latter code produce the same results as the above?
Yes.
The "somewhere" that you've read about Thread.join could be JLS 17.4.5 (The "Happens-before order" bit of the Java Memory Model):
All actions in a thread happen-before any other thread successfully returns from a join() on that thread.
So, all of your writes to individual elements will happen before the final join().
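As a minimal illustration of that rule (a hypothetical example, not taken from the question), even a plain, non-volatile field written by a worker thread is guaranteed to be visible once join() has returned:

public class JoinVisibility {
    static int result; // plain field: no volatile, no synchronization

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> result = 42); // write happens in the worker thread
        worker.start();
        worker.join();               // happens-before: the worker's writes are now visible
        System.out.println(result);  // guaranteed to print 42
    }
}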
With this said, I would strongly recommend that you look for alternative ways to structure your problem that don't require you to be worrying about the correctness of your code at this level of detail (see my other answer).
An easier solution here would appear to be to use the Executor framework, which hides typically unnecessary details about the threads and how the result is stored.
For example:
ExecutorService executor = ...
List<Future<Object>> futures = new ArrayList<>();
for (int i = 0; i < nThreads; i++) {
    futures.add(executor.submit(new MyCallable(i)));
}
executor.shutdown();

Object[] array = new Object[nThreads]; // collect the results
for (int i = 0; i < nThreads; ++i) {
    array[i] = futures.get(i).get();
}
for (int i = 0; i < nThreads; i++) {
    Object obj_i = array[i];
    // do something with obj_i...
}
where MyCallable is analogous to your MyThread:
private static class MyCallable implements Callable<Object> {
    private final int pIndex;

    public MyCallable(final int index) {
        pIndex = index;
    }

    @Override
    public Object call() {
        // some entirely local time-consuming computation...
        return /* result of the computation */;
    }
}
This results in simpler and more-obviously correct code, because you're not worrying about memory consistency: this is handled by the framework. It also gives you more flexibility, e.g. running it on fewer threads than work items, reusing a thread pool etc.
Atomic operations are required to ensure memory barriers are present when multiple threads access the same memory location. Without memory barriers there is no happens-before relationship between the threads, and there is no guarantee that the main thread will see the modifications done by the other threads; hence a data race. So what you really need are memory barriers for the write and read operations. You can achieve that using AtomicReferenceArray or a synchronized block on a common object.
You have Thread.join in the second program before the read operations. That should remove the data race. Without the join, you need explicit synchronization.
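For illustration, a minimal sketch of the synchronized-block alternative (the class name, lock object and placeholder computation are mine); note that with join() in place the extra locking is redundant, it only shows the pattern you would need without it:

import java.util.ArrayList;
import java.util.List;

public class SynchronizedResults {
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        int nThreads = 10;
        Object[] results = new Object[nThreads];
        List<Thread> threads = new ArrayList<>();

        for (int i = 0; i < nThreads; i++) {
            final int index = i;
            Thread t = new Thread(() -> {
                Object value = "result-" + index;  // stand-in for the real computation
                synchronized (lock) {              // write under the common lock
                    results[index] = value;
                }
            });
            threads.add(t);
            t.start();
        }

        for (Thread t : threads)
            t.join();

        for (int i = 0; i < nThreads; i++) {
            Object obj_i;
            synchronized (lock) {                  // read under the same lock
                obj_i = results[i];
            }
            System.out.println(obj_i);
        }
    }
}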
I have two threads doing a calculation on a common variable "n". One thread increases "n" each time and the other decreases "n" each time. When I do not use the volatile keyword on this variable, something happens that I cannot understand; could somebody please help explain? The snippet is as follows:
public class TwoThreads {
    private static int n = 0;
    private static int called = 0;

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            n = 0;
            called = 0;
            TwoThreads two = new TwoThreads();
            Inc inc = two.new Inc();
            Dec dec = two.new Dec();
            Thread t = new Thread(inc);
            t.start();
            t = new Thread(dec);
            t.start();
            while (called != 2) {
                //System.out.println("----");
            }
            System.out.println(n);
        }
    }

    private synchronized void inc() {
        n++;
        called++;
    }

    private synchronized void dec() {
        n--;
        called++;
    }

    class Inc implements Runnable {
        @Override
        public void run() {
            inc();
        }
    }

    class Dec implements Runnable {
        @Override
        public void run() {
            dec();
        }
    }
}
1) What I am expecting is "n=0, called=2" after execution, but sometimes the main thread gets stuck in the while loop;
2) But when I uncomment this line, the program runs as expected:
//System.out.println("----");
3) I know I should use "volatile" on "called", but I cannot explain why the above happens;
4) "called" is "read and load" in working memory of specific thread, but why it's not "store and write" back into main thread after "long" while loop, if it's not, why a simple "print" line can make such a difference
You have synchronized writing of data (in inc and dec), but not reading of data (in main). BOTH should be synchronized to get predictable effects. Otherwise, chances are that main never "sees" the changes done by inc and dec.
You don't know exactly when called++ will be executed. Your main loop keeps creating new pairs of threads, and those threads run under mutual exclusion: only one thread at a time can execute called++, because the methods are synchronized, but you don't know which thread it will be or when it runs relative to the main thread's check.
Also try to read about data races:
while (called != 2) {
    //System.out.println("----");
}
//.. possible data race: n can be changed here
System.out.println(n);
You need to synchronize access to called here:
while (called != 2) {
    //System.out.println("----");
}
I suggest adding a getCalled method
private synchronized int getCalled() {
    return called;
}
and replace called != 2 with getCalled() != 2
If you are interested in why this problem occurs, you can read about visibility in the context of the Java memory model.
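Putting the suggestion together, the main loop would then look like this (a minimal sketch, assuming getCalled() is added to TwoThreads and called on the two instance created in main):

while (two.getCalled() != 2) {
    // busy-wait until both worker threads have finished
}
System.out.println(n);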
edit: 1.) Why is "globalCounter" synchronized, but not "Thread.currentThread().getId()"?
2.) Can I assign a calculation to each thread? How? Can I work with the results?
public class Hauptprogramm {
    public static final int MAX_THREADS = 10;
    public static int globalCounter;
    public static Integer syncObject = new Integer(0);

    public static void main(String[] args) {
        ExecutorService threadPool = Executors.newFixedThreadPool(MAX_THREADS);
        for (int i = 0; i < MAX_THREADS; i++) {
            threadPool.submit(new Runnable() {
                public void run() {
                    synchronized (syncObject) {
                        globalCounter++;
                        System.out.println(globalCounter);
                        System.out.println(Thread.currentThread().getId());
                        try {
                            Thread.sleep(10);
                        } catch (InterruptedException e) {
                        }
                    }
                }
            });
        }
        threadPool.shutdown();
    }
}
1.) Why is "globalCounter" synchronized , but not "Thread.currentThread().getId()"
I can answer why globalCounter is synchronized. To avoid data race and race condition.
If it is not synchronized, consider that the globalCounter++ computation is a three-step process (read-modify-write):
Read the current value of the globalCounter variable.
Modify its value.
Write the modified value back to globalCounter.
In the absence of synchronization in a multi-threaded environment, there is a possibility that a thread reads or modifies the value of globalCounter while another thread is in the middle of this three-step process.
This can result in threads reading stale values or in lost updates to the count.
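As an alternative to the synchronized block, an AtomicInteger turns the read-modify-write into a single atomic step. This is a sketch, not part of the original code:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {
    static final int MAX_THREADS = 10;
    static final AtomicInteger globalCounter = new AtomicInteger();

    public static void main(String[] args) {
        ExecutorService threadPool = Executors.newFixedThreadPool(MAX_THREADS);
        for (int i = 0; i < MAX_THREADS; i++) {
            threadPool.submit(() -> {
                // incrementAndGet is one atomic read-modify-write step,
                // so no synchronized block is needed for the counter itself
                System.out.println(globalCounter.incrementAndGet());
            });
        }
        threadPool.shutdown();
    }
}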
2) Can I assign a calculation to each thread? How? Can I work with the results?
This is possible. You can look into Future / FutureTask to work with the results; a sketch follows.
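A small sketch of that approach (the calculation is just a placeholder):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CalculationPerThread {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        ExecutorService threadPool = Executors.newFixedThreadPool(10);
        List<Future<Integer>> futures = new ArrayList<>();

        for (int i = 0; i < 10; i++) {
            final int input = i;
            Callable<Integer> task = () -> input * input; // placeholder calculation
            futures.add(threadPool.submit(task));
        }

        for (Future<Integer> future : futures) {
            System.out.println(future.get()); // blocks until that calculation has finished
        }
        threadPool.shutdown();
    }
}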
I've been given the task of sharing a local variable of a method that is run by several threads, so that its value is visible to every thread running that method.
Now my code looks like this:
public class SumBarrier2 implements Barrier {
    int thread_num;          // number of threads to handle
    int thread_accessed;     // number of threads that have reached the barrier
    volatile int last_sum;   // sum to be returned after a new lifecycle
    volatile int sum;        // working variable to sum up the values

    public SumBarrier2(int thread_num) {
        this.thread_num = thread_num;
        thread_accessed = 0;
        last_sum = 0;
        sum = 0;
    }

    public synchronized void addValue(int value) {
        sum += value;
    }

    public synchronized void nullValues() {
        thread_accessed = 0;
        sum = 0;
    }

    @Override
    public synchronized int waitBarrier(int value) {
        int shared_local_sum;
        thread_accessed++;
        addValue(value);
        if (thread_accessed < thread_num) {
            // If this is not the last thread
            try {
                this.wait();
            } catch (InterruptedException e) {
                System.out.println("Exception caught");
            }
        } else if (thread_num == thread_accessed) {
            last_sum = sum;
            nullValues();
            this.notifyAll();
        } else if (thread_accessed > thread_num) {
            System.out.println("Something got wrong!");
        }
        return last_sum;
    }
}
So the task is to replace the class member
volatile int last_sum
with a local variable of the waitBarrier method, so that its value is visible to all threads.
Any suggestions?
Is it even possible?
Thanks in advance.
If the variable last_sum is updated by only one thread, then declaring it volatile will work. If not, you should look at AtomicInteger:
An int value that may be updated atomically. See the
java.util.concurrent.atomic package specification for description of
the properties of atomic variables. An AtomicInteger is used in
applications such as atomically incremented counters, and cannot be
used as a replacement for an Integer. However, this class does extend
Number to allow uniform access by tools and utilities that deal with
numerically-based classes.
You can find practical uses of AtomicInteger here: Practical uses for AtomicInteger
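As a small, self-contained illustration of accumulating a sum across threads with an AtomicInteger (not the barrier itself, just the idea):

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicSumDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger sum = new AtomicInteger(0); // shared, thread-safe accumulator
        Thread[] threads = new Thread[4];

        for (int i = 0; i < threads.length; i++) {
            final int value = i + 1;
            threads[i] = new Thread(() -> sum.addAndGet(value)); // atomic read-modify-write
            threads[i].start();
        }
        for (Thread t : threads)
            t.join();

        System.out.println(sum.get()); // always prints 10 (1+2+3+4)
    }
}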
This isn't homework for me, it's a task given to students from some university. I'm interested in the solution out of personal interest.
The task is to create a class (Calc) which holds an integer. The two methods add and mul should add to or multiply this integer.
Two threads are set up simultaneously. One thread should call c.add(3) ten times, the other one should call c.mul(3) ten times (on the same Calc object, of course).
The Calc class should make sure that the operations are done alternately (add, mul, add, mul, add, mul, ...).
I haven't worked with concurrency related problems a lot - even less with Java. I've come up with the following implementation for Calc:
class Calc {
    private int sum = 0;

    // Is volatile actually needed? Or is boolean atomic by default? Or its read operation, at least.
    private volatile boolean b = true;

    public void add(int i) {
        while (!b) { }
        synchronized (this) {
            sum += i;
            b = false;
        }
    }

    public void mul(int i) {
        while (b) { }
        synchronized (this) {
            sum *= i;
            b = true;
        }
    }
}
I'd like to know if I'm on the right track here. And there's surely a more elegant way to handle the while(b) part.
I'd like to hear your guys' thoughts.
PS: The methods' signature mustn't be changed. Apart from that I'm not restricted.
Try using the Lock interface:
class Calc {
    private int sum = 0;
    final Lock lock = new ReentrantLock();
    final Condition addition = lock.newCondition();
    final Condition multiplication = lock.newCondition();

    public void add(int i) throws InterruptedException {
        lock.lock();
        try {
            if (sum != 0) {
                multiplication.await();
            }
            sum += i;
            addition.signal();
        } finally {
            lock.unlock();
        }
    }

    public void mul(int i) throws InterruptedException {
        lock.lock();
        try {
            addition.await();
            sum *= i;
            multiplication.signal();
        } finally {
            lock.unlock();
        }
    }
}
The lock works like your synchronized blocks. The methods block in lock() while another thread holds the lock, and they wait at .await() until .signal() is called on the corresponding condition.
What you did is a busy loop: you're running a loop which only stops when a variable changes. This is a bad technique because it keeps the CPU very busy, instead of simply making the thread wait until the flag is changed.
I would use two semaphores: one for multiply, and one for add. add must acquire the addSemaphore before adding, and releases a permit to the multiplySemaphore when it's done, and vice-versa.
private Semaphore addSemaphore = new Semaphore(1);
private Semaphore multiplySemaphore = new Semaphore(0);

public void add(int i) {
    try {
        addSemaphore.acquire();
        sum += i;
        multiplySemaphore.release();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}

public void mul(int i) {
    try {
        multiplySemaphore.acquire();
        sum *= i;
        addSemaphore.release();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}
As others have said, the volatile in your solution is required. Also, your solution spin-waits, which can waste quite a lot of CPU cycles. That said, I can't see any problems as far as correctness is concerned.
I personally would implement this with a pair of semaphores:
private final Semaphore semAdd = new Semaphore(1);
private final Semaphore semMul = new Semaphore(0);
private int sum = 0;

public void add(int i) throws InterruptedException {
    semAdd.acquire();
    sum += i;
    semMul.release();
}

public void mul(int i) throws InterruptedException {
    semMul.acquire();
    sum *= i;
    semAdd.release();
}
volatile is needed, otherwise the optimizer might optimize the loop to if(b) while(true){}.
But you can do this with wait and notify:
public void add(int i) {
    synchronized (this) {
        while (!b) {
            try {
                wait();
            } catch (InterruptedException e) {
                // swallowing the interrupt is not recommended: log it or restore the interrupt flag
            }
        }
        sum += i;
        b = false;
        notify();
    }
}

public void mul(int i) {
    synchronized (this) {
        while (b) {
            try {
                wait();
            } catch (InterruptedException e) {
            }
        }
        sum *= i;
        b = true;
        notify();
    }
}
However, in this case (b is checked inside the synchronized block) volatile is not needed.
Yes, volatile is needed, not because assigning one boolean to another is not atomic, but to prevent the variable from being cached, which would make its updated value invisible to the other threads reading it. Also, sum should be volatile if you care about the final result.
Having said this, it would probably be more elegant to use wait and notify to create this interleaving effect.
class Calc {
    private int sum = 0;
    private Object event1 = new Object();
    private Object event2 = new Object();

    public void initiate() {
        synchronized (event1) {
            event1.notify();
        }
    }

    public void add(int i) {
        synchronized (event1) {
            try {
                event1.wait();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        sum += i;
        synchronized (event2) {
            event2.notify();
        }
    }

    public void mul(int i) {
        synchronized (event2) {
            try {
                event2.wait();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        sum *= i;
        synchronized (event1) {
            event1.notify();
        }
    }
}
Then after you start both threads, call initiate to release the first thread.
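A sketch of such a driver (hypothetical, not part of the original answer); note that plain wait/notify on bare objects is fragile here, because a notify is lost if the other thread is not already waiting:

public class CalcDriver {
    public static void main(String[] args) throws InterruptedException {
        Calc c = new Calc();

        Thread adder = new Thread(() -> {
            for (int i = 0; i < 10; i++) c.add(3);
        });
        Thread multiplier = new Thread(() -> {
            for (int i = 0; i < 10; i++) c.mul(3);
        });

        adder.start();
        multiplier.start();

        Thread.sleep(100); // crude: give both threads time to reach their wait()
        c.initiate();      // releases the adder; adder and multiplier then wake each other in turn
    }
}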
Hmmm. There are a number of problems with your solution. First, volatile isn't required for atomicity but for visibility. I won't go into this here, but you can read more about the Java memory model. (And yes, boolean is atomic, but it's irrelevant here). Besides, if you access variables only inside synchronized blocks then they don't have to be volatile.
Now, I assume that it's by accident, but your b variable is not accessed only inside synchronized blocks, and it happens to be volatile, so actually your solution would work, but it's neither idiomatic nor recommended, because you're waiting for b to change inside a busy loop. You're burning CPU cycles for nothing (this is what we call a spin-lock, and it may be useful sometimes).
An idiomatic solution would look like this:
class Code {
    private int sum = 0;
    private boolean nextAdd = true;

    public synchronized void add(int i) throws InterruptedException {
        while (!nextAdd)
            wait();
        sum += i;
        nextAdd = false;
        notify();
    }

    public synchronized void mul(int i) throws InterruptedException {
        while (nextAdd)
            wait();
        sum *= i;
        nextAdd = true;
        notify();
    }
}
The program is fully thread safe:
The boolean flag is declared volatile, so the JVM will not cache its value per thread, and every read sees the most recent write.
The two critical sections lock on the current object, which means only one thread will have access at a time. Note that if a thread is inside one of the synchronized blocks, no other thread can be in any of the object's critical sections.
The above applies per instance of the class. For example, if two instances are created, threads can be in multiple critical sections at a time, but still only one thread per instance across its critical sections. Does that make sense?
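To make the per-instance point concrete, here is a small sketch (the Worker class and timings are hypothetical): threads A and C contend for the same monitor, while B runs on a different instance and is not blocked by them.

public class PerInstanceLockDemo {
    static class Worker {
        synchronized void slowTask(String name) { // locks this Worker instance
            System.out.println(name + " entered " + this);
            try {
                Thread.sleep(500);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println(name + " left " + this);
        }
    }

    public static void main(String[] args) {
        Worker w1 = new Worker();
        Worker w2 = new Worker();
        new Thread(() -> w1.slowTask("A")).start(); // A and C share w1's monitor
        new Thread(() -> w2.slowTask("B")).start(); // B uses w2, so it can run alongside A
        new Thread(() -> w1.slowTask("C")).start(); // C contends with A, so they run one at a time
    }
}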