Delay in running thread due to System.out.println statement [duplicate] - java

This question already has an answer here:
Loop doesn't see value changed by other thread without a print statement
(1 answer)
Closed 7 years ago.
In the following code, if I use a sysout statement inside the for loop, the code enters the if block once the condition is met. But if I do not use the sysout statement inside the loop, the infinite loop keeps running and never enters the if block, even though the if condition is satisfied. Can anyone please help me find the exact reason for this? Just a sysout statement makes the if condition become true. Why is that?
The code is as follows:
class RunnableDemo implements Runnable {
    private Thread t;
    private String threadName;

    RunnableDemo(String name) {
        threadName = name;
        System.out.println("Creating " + threadName);
    }

    public void run() {
        System.out.println("Running " + threadName);
        for (;;) {
            // Output 1: without this sysout statement.
            // Output 2: after uncommenting this sysout statement.
            // System.out.println(Thread.currentThread().isInterrupted());
            if (TestThread.i > 3) {
                try {
                    for (int j = 4; j > 0; j--) {
                        System.out.println("Thread: " + threadName + ", " + j);
                    }
                } catch (Exception e) {
                    System.out.println("Thread " + threadName + " interrupted.");
                }
                System.out.println("Thread " + threadName + " exiting.");
            }
        }
    }

    public void start() {
        System.out.println("Starting " + threadName);
        if (t == null) {
            t = new Thread(this, threadName);
            t.start();
        }
    }
}

public class TestThread {
    static int i = 0;

    public static void main(String args[]) {
        RunnableDemo R1 = new RunnableDemo("Thread-1");
        R1.start();
        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        i += 4;
        System.out.println(i);
    }
}
Output without the sysout statement in the infinite loop:
Output with the sysout statement in the infinite loop:

The problem here can be fixed by changing
static int i=0;
to
static volatile int i=0;
Making a variable volatile has a number of complex consequences and I am not an expert at this. So, I'll try to explain how I think about it.
The variable i lives in your main memory, your RAM. But RAM is slow, so your processor copies it to faster (and smaller) memory: the cache. Multiple caches, in fact, but that's irrelevant here.
But when two threads on two different processors put their values in different caches, what happens when the value changes? Well, if thread 1 changes the value in cache 1, thread 2 still uses the old value from cache 2 - unless we tell both threads that this variable i might change at any time, as if by magic. That's what the volatile keyword does.
So why does it work with the print statement? Well, the print statement invokes a lot of code behind the scenes. Some of this code most likely contains a synchronized block or another volatile variable, which (by accident) also refreshes the value of i in both caches. (Thanks to Marco13 for pointing this out).
Next time you try to access i, you get the updated value!
PS: I'm saying RAM here, but it's probably the closest shared memory between the two threads, which could be a cache if they are hyperthreaded, for instance.
This is a great explanation too (with pictures!):
http://tutorials.jenkov.com/java-concurrency/volatile.html

When you are accessing a variable value, the changes aren't written to (or loaded from) the actual memory location every time. The value can be loaded into a CPU register, or cached, and sit there until the caches are flushed. Moreover, because TestThread.i is not modified inside the loop at all, the optimizer might decide to just replace it with a check before the loop, and get rid of the if statement entirely (I do not think it is actually happening in your case, but the point is that it might).
The instruction that makes a thread flush its caches and synchronize them with the current contents of physical memory is called a memory barrier. There are two ways in Java to force a memory barrier: enter or exit a synchronized block, or access a volatile variable.
When either of those events happens, the caches are flushed, and the thread is guaranteed both to see an up-to-date view of the memory contents and to have all the changes it has made locally committed to memory.
So, as suggested in the comments, if you declare TestThread.i as volatile, the problem will go away, because whenever the value is modified the change will be committed immediately, and the optimizer will know not to optimize the check away from the loop and not to cache the value.
Now, why does adding a print statement change the behaviour? Well, there is a lot of synchronization going on inside the I/O code, so the thread hits a memory barrier somewhere and loads the fresh value. This is just a coincidence.
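For what it's worth, you can reproduce this "accidental" fix without println. Below is a sketch (the BARRIER object is my own addition, not part of the original code) where an empty synchronized block stands in for the print statement; on typical JVMs this also stops the JIT from hoisting the read of TestThread.i out of the loop, but unlike volatile it is not something the memory model guarantees.
private static final Object BARRIER = new Object();  // hypothetical helper lock, not in the original code

public void run() {
    System.out.println("Running " + threadName);
    for (;;) {
        synchronized (BARRIER) { }        // empty block, playing the role of the println's hidden synchronization
        if (TestThread.i > 3) {
            System.out.println("Thread " + threadName + " exiting.");
            return;                       // leave the loop once the update is observed
        }
    }
}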

Related

main problem of sharing same object between two threads

I have two threads that increment a shared variable x of an object myObject.
When each thread increments this number 1000 times, I don't get 2000; I get less.
I know this is bad and shouldn't be done, but I am just trying to figure out where the "problem" happens in the code.
Is it because in the run method myObject.x = myObject.x + 1 is interpreted like this:
int temp = myObject.x;
temp = temp + 1; // The compiler/something pauses here and goes to thread 2
myObject.x = temp; // Didn't actually get here, so no incrementation?
Or is it because the two threads tried to access myObject.x at the same time, so it was incremented only once?
The threads look like this:
public class Ex4Incr extends Thread {
    private MyObject myObject;

    public Ex4Incr(MyObject myObject) {
        this.myObject = myObject;
    }

    @Override
    public void run() {
        for (int i = 0; i < 100; i++) {
            myObject.setX(myObject.getX() + 1);
        }
        System.out.println("id: " + Thread.currentThread().getId() + " x is: " + myObject.getX());
    }
}
Sorry I didn't add more code, but this problem is well known; I just didn't know which explanation is true.
It has nothing to do with the compiler. The thread scheduler decides which threads to run and which to pause based on a scheduling algorithm. It may suspend one thread after the variable's value, say 2, has already been read into temp, and run the other thread instead. That thread now also reads the value 2, increments it to 3 and writes it back. If the first thread then continues, it resumes at the second line, so temp = 2; it increments it to 3 and writes it back into memory. Two updates were performed, but they were the same update done twice, so one update "got lost", if you will.
An easy way to solve this is to make your int an AtomicInteger and use its incrementAndGet() method (see the sketch below).
There are, however, other possibilities, such as using a monitor (the synchronized keyword), a Lock, etc., to solve the problem at hand (known as a race condition).
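A minimal sketch of the AtomicInteger approach, assuming MyObject looks roughly like the one implied by the question (incrementX is my own name for the combined read-and-write):
import java.util.concurrent.atomic.AtomicInteger;

public class MyObject {
    private final AtomicInteger x = new AtomicInteger(0);

    public int incrementX() {
        return x.incrementAndGet();   // atomic read-modify-write: no lost updates
    }

    public int getX() {
        return x.get();
    }
}
With two threads each calling incrementX() 1000 times, the final value is always 2000, because each read-increment-write happens as a single atomic step.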

How to manually control which Thread enters critical region using Java Swing?

I am trying to create a simple Java Swing-based application that manually controls two threads which are both trying to continually increment an integer value. The application should be able to 'Start' and 'Stop' either of the threads (both threads incrementing the value simultaneously) and put either of the threads in the critical region (only one thread allowed to increment value).
Here's a screenshot of what I have, so that you may better understand what I am aiming for:
https://i.imgur.com/sQueUD7.png
I've created an "Incrementor" class which does the job of incrementing the int value, but if I try adding the synchronized keyword to the increment() method, I do not get the result I want.
private void increment() {
    while (Thread.currentThread().isAlive()) {
        if (Thread.currentThread().getName().equals("Thread 1")) {
            if (t1Stop.isEnabled()) {
                value++;
                t1TextField.setText("Thread 1 has incremented value by 1. Current value = " + value + "\n");
            }
        } else if (Thread.currentThread().getName().equals("Thread 2")) {
            if (t2Stop.isEnabled()) {
                value++;
                t2TextField.setText("Thread 2 has incremented value by 1. Current value = " + value + "\n");
            }
        }
        try {
            Thread.sleep(1000);
        } catch (InterruptedException ex) {
            ex.printStackTrace();
        }
    }
}
Any advice on how to proceed?
I hope I've made it clear what it is I am looking for, if not, let me know and I'll update this post.
Your problem is the dreaded thread lock!
but if I try adding the synchronized keyword to the increment() method, I do not get the result I want.
Of course not! The thread scheduler switches the running thread whenever it feels like it (and you should post more code here). From a first look, you are running the same method in both threads, so it comes down to two cases:
The good case: the scheduler switches threads after one of them has finished calling the increment method (a win for both threads).
The bad case (and this is what you have faced): a thread enters the method, and before it completes, the scheduler switches to the other thread, which then finds a big nasty synchronized in its face with the lock held by the first thread. From here on there is no guarantee what will happen, but in most of these cases the result only pleases the scheduler.
The application should be able to 'Start' and 'Stop' either of the threads (both threads incrementing the value simultaneously) and put either of the threads in the critical region (only one thread allowed to increment value).
Sorry to break it to you, but the thread scheduler is not controllable, my friend. We can only make suggestions to it, so what you are trying to achieve is not possible with the Java thread scheduler alone.
Stopping threads is okay, but starting a thread again after stopping it is a big NO!
From the Thread.start() documentation:
It is never legal to start a thread more than once. In particular, a thread may not be restarted once it has completed execution.
Throws IllegalThreadStateException if the thread was already started.
Here is a link where you can get the topic explained more widely, in Oracle's concurrency tutorials.
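To see the restart rule in action, here is a small self-contained sketch (my own example, not from the question); the second start() call always throws:
public class RestartDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> System.out.println("running once"));
        t.start();
        t.join();          // wait until the thread has completed
        try {
            t.start();     // restarting a finished thread is illegal
        } catch (IllegalThreadStateException e) {
            System.out.println("Cannot restart: " + e);
        }
    }
}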
You can use an object-level lock using the synchronized keyword.
=> Object-level lock: synchronizes a non-static method or block so that it can be accessed by only one thread at a time for that instance. It is used to protect non-static data.
Example:
import java.util.concurrent.atomic.AtomicInteger;

public class ClasswithCriticalSections {
    private final AtomicInteger count = new AtomicInteger(0);

    public synchronized int increment() {
        return count.incrementAndGet();
    }
}
or
public class ClasswithCriticalSections {
    private final Object lock1 = new Object();
    private final Object lock2 = new Object();
    private final AtomicInteger count = new AtomicInteger(0);

    public int increment() {
        synchronized (lock1) {
            return count.incrementAndGet();
        }
    }

    public int decrement() {
        synchronized (lock2) {
            return count.addAndGet(-1);
        }
    }
}

Java thread synchronization - this shouldn't be working, but it is :)

I am following examples from here
I've modified processCommand as:
private void processCommand() throws InterruptedException {
    this.command = "xyz";
}
Full code:
import java.util.logging.Level;
import java.util.logging.Logger;

public class WorkerThread implements Runnable {
    private String command;

    public WorkerThread(String s) {
        this.command = s;
    }

    @Override
    public void run() {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException ex) {
            Logger.getLogger(WorkerThread.class.getName()).log(Level.SEVERE, null, ex);
        }
        System.out.println(Thread.currentThread().getName() + " Command at start: " + command);
        try {
            processCommand();
        } catch (InterruptedException ex) {
        }
        System.out.println(Thread.currentThread().getName() + " Command after processCommand: " + command);
    }

    private void processCommand() throws InterruptedException {
        this.command = "xyz";
    }
}
Now, I expect to see a synchronization issue, right? Basically, when
System.out.println(Thread.currentThread().getName() + " Start. Command = " + command);
is executed, it CAN pick up the value xyz, right? But I never see it. I've experimented with various values in Thread.sleep.
So what makes the this.command = "xyz"; statement thread-safe in this case?
I am starting the threads this way:
ExecutorService executor = Executors.newFixedThreadPool(5);
for (int i = 0; i < 10; i++) {
    Runnable worker = new WorkerThread("" + i);
    executor.execute(worker);
}
UPDATE
It is still not entirely clear what the complete program looks like ... but based on what I think it is, I cannot see any point where it is not thread-safe.
There are two points where command is assigned and two points where the value is read:
1. The main thread assigns command in the constructor.
2. A second thread reads command in run() before calling processCommand.
3. The second thread assigns command in processCommand.
4. The second thread reads command in run() after calling processCommand.
The last three events occur on the same thread, so no synchronization is required. The first and second events occur on different threads, but there should be a "happens before" relation between the main thread and worker thread at that point.
If the main thread were to start() the second thread, that would provide the happens before. (The JLS says so.)
But actually we are using ThreadPoolExecutor.execute(Runnable) to do the hand-over, and according to the javadoc for Executor:
Memory consistency effects: Actions in a thread prior to submitting a Runnable object to an Executor happen-before its execution begins, perhaps in another thread.
In summary, all 4 events of interest are properly synchronized, and there are no race conditions involving command.
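To make that hand-over guarantee concrete, here is a small self-contained sketch (my own example, not part of the question); per the Executor javadoc quoted above, the plain, non-volatile field written before execute() is guaranteed to be visible inside the task.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HandOffDemo {
    static String message;                            // deliberately not volatile

    public static void main(String[] args) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        message = "written before submit";            // happens-before the task's execution
        executor.execute(() -> System.out.println(message));  // guaranteed to print the value above
        executor.shutdown();
    }
}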
However, even if this was not thread-safe, you would have difficulty demonstrating the non-thread-safe behaviour.
The main reason you cannot demonstrate it is that the actual non-safeness is due to the Java memory model. Changes to the command variable only need to be flushed to main memory if there is a synchronization point or something else to establish the "happens before". But they can be flushed anyway ... and they usually are ... especially if there is a long enough time gap, or a system call that causes a context switch. In this case you have both.
A second reason is that the System.err and System.out objects are internally synchronized, and if you are not careful with the way you call them you can eliminate the thread-safety problem you are trying to demonstrate.
This is "the thing" about thread-safety issues involving non-synchronised access to shared variables. The actual race conditions often involve very small time windows; i.e. two events that need to happen within a few clock cycles (certainly less than a microsecond) for the race to be noticed. This is likely to happen rarely, which is why problems involving race conditions are typically so hard to reproduce.
The reason you don't see a race condition here is
Runnable worker = new WorkerThread("" + i);
A race condition involves a shared resource. All your worker threads, on the other hand, are changing their own private member command. To induce a race condition you would need to do something like
for (int i = 0; i < 10; i++) {
    WorkerThread worker = new WorkerThread("" + 0);
    executor.execute(worker);
    worker.setCommand("" + i);
}
Now when the worker tries to access the command field it could get the stale 0 value or the i value.
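Note that the WorkerThread in the question has no setCommand method; the snippet above assumes you add something like the following (a hypothetical addition, purely to make the race possible):
// Hypothetical setter added to WorkerThread for demonstration only:
public void setCommand(String command) {
    this.command = command;   // written by the main thread after execute(), with no synchronization
}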

Thread value not cached by threads even without volatile?

class Counter {
    public int i = 0;

    public void increment() {
        i++;
        System.out.println("i is " + i);
        System.out.println("i/=2 executing");
        i = i + 22;
        System.out.println("i is (after i+22) " + i);
        System.out.println("i+=1 executing");
        i++;
        System.out.println("i is (after i++) " + i);
    }

    public void decrement() {
        i--;
        System.out.println("i is " + i);
        System.out.println("i*=2 executing");
        i = i * 2;
        System.out.println("i is after i*2 " + i);
        System.out.println("i-=1 executing");
        i = i - 1;
        System.out.println("i is after i-1 " + i);
    }

    public int value() {
        return i;
    }
}

class ThreadA {
    public ThreadA(final Counter c) {
        new Thread(new Runnable() {
            public void run() {
                System.out.println("Thread A trying to increment");
                c.increment();
                System.out.println("Increment completed " + c.i);
            }
        }).start();
    }
}

class ThreadB {
    public ThreadB(final Counter c) {
        new Thread(new Runnable() {
            public void run() {
                System.out.println("Thread B trying to decrement");
                c.decrement();
                System.out.println("Decrement completed " + c.i);
            }
        }).start();
    }
}

class ThreadInterference {
    public static void main(String args[]) throws Exception {
        Counter c = new Counter();
        new ThreadA(c);
        new ThreadB(c);
    }
}
In the above code, ThreadA first gets access to the Counter object and increments the value, along with performing some extra operations. At the very start ThreadA does not have a cached value of i, but after executing i++ (the first line) it will have cached it. Later on the value is updated and reaches 24. As the variable i is not volatile, I expected the changes to stay in ThreadA's local cache.
Now when ThreadB accesses the decrement() method, the value of i is the one updated by ThreadA, i.e. 24. How could that be possible?
Assuming that threads won't see updates that other threads make to shared data is just as inappropriate as assuming that all threads will see each other's updates immediately.
The important thing is to take account of the possibility of not seeing updates - not to rely on it.
There's another issue besides not seeing the update from other threads, mind you - all of your operations act in a "read, modify, write" sense... if another thread modifies the value after you've read it, you'll basically ignore it.
So for example, suppose i is 5 when we reach this line:
i = i * 2;
... but half way through it, another thread modifies it to be 4.
That line can be thought of as:
int tmp = i;
tmp = tmp * 2;
i = tmp;
If the second thread changes i to 4 after the first line in the "expanded" version, then even if i is volatile the write of 4 will still be effectively lost - because by that point, tmp is 5, it will be doubled to 10, and then 10 will be written out.
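If you do need the increments and decrements applied without interference, a common fix (a sketch in the spirit of the question's Counter, not the original code) is to make each read-modify-write atomic by holding the object's lock:
class SafeCounter {
    private int i = 0;

    public synchronized void increment() { i++; }   // whole read-modify-write happens under the lock
    public synchronized void decrement() { i--; }
    public synchronized int value()      { return i; }
}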
As specified in JLS 8.3.1.4:
The Java programming language allows threads to access shared variables (§17.1). As a rule, to ensure that shared variables are consistently and reliably updated, a thread should ensure that it has exclusive use of such variables by obtaining a lock that, conventionally, enforces mutual exclusion for those shared variables. ... A field may be declared volatile, in which case the Java Memory Model ensures that all threads see a consistent value for the variable.
Although not always, there is still a chance that the values shared among threads are not consistently and reliably updated, which can lead to an unpredictable program outcome. In the code given below:
class Test {
    static int i = 0, j = 0;

    static void one() { i++; j++; }

    static void two() {
        System.out.println("i=" + i + " j=" + j);
    }
}
If one thread repeatedly calls the method one (but no more than Integer.MAX_VALUE times in all), and another thread repeatedly calls the method two, then method two could occasionally print a value for j that is greater than the value of i, because the example includes no synchronization and the shared values of i and j might be updated out of order.
But if you declare i and j to be volatile, this allows method one and method two to be executed concurrently while guaranteeing that accesses to the shared values of i and j occur exactly as many times, and in exactly the same order, as they appear to occur during execution of the program text by each thread. Therefore, the shared value for j is never greater than that for i, because each update to i must be reflected in the shared value for i before the update to j occurs.
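For clarity, the volatile variant of that example differs only in the field declaration (a sketch of the same JLS example):
class Test {
    static volatile int i = 0, j = 0;   // writes become visible to other threads in program order

    static void one() { i++; j++; }     // note: each ++ is still not atomic

    static void two() {
        System.out.println("i=" + i + " j=" + j);
    }
}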
Now I came to know that common objects (the objects that are being shared by multiple threads) are not cached by those threads. As the object is common, the Java Memory Model is smart enough to identify that common objects, when cached by threads, could produce surprising results.
How could that be possible?
Because there is nowhere in the JLS that says values have to be cached within a thread.
This is what the spec does say:
If you have a non-volatile variable x, and it's updated by a thread T1, there is no guarantee that T2 can ever observe the change of x by T1. The only way to guarantee that T2 sees a change of T1 is with a happens-before relationship.
It just so happens that some implementations of Java cache non-volatile variables within a thread in certain cases. In other words, you can't rely on a non-volatile variable being cached.

Need advice on synchronization of Java Vector / ConcurrentModificationException

In a legacy application I have a Vector that keeps a chronological list of files to process and multiple threads ask it for the next file to process. (Note that I realize that there are likely better collections to use (feel free to suggest), but I don't have time for a change of that magnitude right now.)
At a scheduled interval, another thread checks the working directory to see if any files appear to have been orphaned because something went wrong. The method called by this thread occasionally throws a ConcurrentModificationException if the system is abnormally busy. So I know that at least two threads are trying to use the Vector at once.
Here is the code. I believe the issue is the use of the clone() on the returned Vector.
private synchronized boolean isFileInDataStore(File fileToCheck) {
    boolean inFile = false;
    for (File wf : (Vector<File>) m_dataStore.getFileList().clone()) {
        File zipName = new File(Tools.replaceFileExtension(fileToCheck.getAbsolutePath(), ZIP_EXTENSION));
        if (wf.getAbsolutePath().equals(zipName.getAbsolutePath())) {
            inFile = true;
            break;
        }
    }
    return inFile;
}
The getFileList() method is as follows:
public synchronized Vector<File> getFileList() {
    synchronized (fileList) {
        return fileList;
    }
}
As a quick fix, would changing the getFileList method to return a copy of the vector as follows suffice?
public synchronized Vector<File> getFileListCopy() {
    synchronized (fileList) {
        return (Vector<File>) fileList.clone();
    }
}
I must admit that I am generally confused by the use of synchronized in Java as it pertains to collections, as simply declaring the method as such is not enough. As a bonus question, is declaring the method as synchronized and wrapping the return call with another synchronized block just crazy coding? Looks redundant.
EDIT: Here are the other methods which touch the list.
public synchronized boolean addFile(File aFile) {
    boolean added = false;
    synchronized (fileList) {
        if (!fileList.contains(aFile)) {
            added = fileList.add(aFile);
        }
    }
    notifyAll();
    return added;
}

public synchronized void removeFile(File dirToImport, File aFile) {
    if (aFile != null) {
        synchronized (fileList) {
            fileList.remove(aFile);
        }
        // Create a dummy list so I can synchronize it.
        List<File> zipFiles = new ArrayList<File>();
        synchronized (zipFiles) {
            // Populate with actual list
            zipFiles = (List<File>) diodeTable.get(dirToImport);
            if (zipFiles != null) {
                zipFiles.remove(aFile);
                // Repopulate list if the number falls below the number of importer threads.
                if (zipFiles.size() < importerThreadCount) {
                    diodeTable.put(dirToImport, getFileList(dirToImport));
                }
            }
        }
        notifyAll();
    }
}
Basically, there are two separate issues here: synchronization and ConcurrentModificationException.
Vector, in contrast to e.g. ArrayList, is synchronized internally, so basic operations like add() or get() do not need external synchronization. But you can get a ConcurrentModificationException even from a single thread if you are iterating over a Vector and modify it in the meantime, e.g. by inserting an element. So if you performed a modifying operation inside your for loop, you could break the iteration even with a single thread.
Now, if you return your Vector outside of your class, you don't prevent anyone from modifying it without proper synchronization in their code. The synchronization on fileList in the original version of getFileList() is pointless. Returning a copy instead of the original could help, as could using a collection which allows modification while iterating, like CopyOnWriteArrayList (but note the additional cost of modifications; it may be a showstopper in some cases).
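To make the copy-on-write suggestion concrete, here is a minimal sketch (class and method names are adapted from the question, not real code):
import java.io.File;
import java.util.concurrent.CopyOnWriteArrayList;

public class FileStore {
    // Iteration never throws ConcurrentModificationException: every write copies the backing array.
    private final CopyOnWriteArrayList<File> fileList = new CopyOnWriteArrayList<>();

    public boolean addFile(File f) {
        return fileList.addIfAbsent(f);          // atomic contains-then-add
    }

    public void removeFile(File f) {
        fileList.remove(f);
    }

    public boolean isFileInDataStore(File zipName) {
        for (File wf : fileList) {               // safe to iterate while other threads modify the list
            if (wf.getAbsolutePath().equals(zipName.getAbsolutePath())) {
                return true;
            }
        }
        return false;
    }
}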
"I am generally confused by the use of synchronized in Java as it
pertains to collections, as simply declaring the method as such is not
enough"
Correct. synchronized on a method means that only one thread at a time may enter the method. But if the same collection is visible from multiple methods, then this doesn't help much.
To prevent two threads accessing the same collection at the same time, they need to synchronize on the same object - e.g. the collection itself. You have done this in some of your methods, but isFileInDataStore appears to access a collection returned by getFileList without synchronizing on it.
Note that obtaining the collection in a synchronized manner, as you have done in getFileList, isn't enough - it's the accessing that needs synchronizing. Cloning the collection would (probably) fix the issue if you only need read-access.
As well as looking at synchronizing, I suggest you track down which threads are involved - e.g. print out the call stack of the exception and/or use a debugger. It's better to really understand what's going on than to just synchronize and clone until the errors go away!
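As a sketch of "synchronize on the same object" applied to isFileInDataStore (assuming the writers keep locking on fileList, as in the EDIT above, and that getFileList() still returns that same Vector):
private boolean isFileInDataStore(File fileToCheck) {
    File zipName = new File(Tools.replaceFileExtension(fileToCheck.getAbsolutePath(), ZIP_EXTENSION));
    Vector<File> files = m_dataStore.getFileList();
    synchronized (files) {                        // same lock object the add/remove methods use
        for (File wf : files) {
            if (wf.getAbsolutePath().equals(zipName.getAbsolutePath())) {
                return true;
            }
        }
    }
    return false;
}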
Where does the m_dataStore get updated? That's a likely culprit if it's not synchronized.
First, you should move this logic into whatever class m_dataStore is, if you haven't already.
Once you've done that, make your list final, and synchronize on it ONLY if you are modifying its elements. Threads that only need to read it, don't need synchronized access. They may end up polling an outdated list, but I suppose that is not a problem. This gets you increased performance.
As far as I can tell, you would only need to synchronize when adding and removing, and only need to lock your list.
e.g.
package answer;

import java.util.logging.Level;
import java.util.logging.Logger;

public class Example {

    public static void main(String[] args) {
        Example c = new Example();
        c.runit();
    }

    public void runit() {
        Thread.currentThread().setName("Thread-1");
        new Thread("Thread-2") {
            @Override
            public void run() {
                test1(true);
            }
        }.start();
        // Force a scenario where Thread-1 allows Thread-2 to acquire the lock
        try {
            Thread.sleep(1000);
        } catch (InterruptedException ex) {
            Logger.getLogger(Example.class.getName()).log(Level.SEVERE, null, ex);
        }
        // At this point, Thread-2 has acquired the lock, but it has entered its wait() method, releasing the lock
        test1(false);
    }

    public synchronized void test1(boolean wait) {
        System.out.println(Thread.currentThread().getName() + " : Starting...");
        try {
            if (wait) {
                // Apparently the current thread is supposed to wait for some other thread to do something...
                wait();
            } else {
                // The current thread is supposed to keep running with the lock
                doSomeWorkThatRequiresALockLikeRemoveOrAdd();
                System.out.println(Thread.currentThread().getName() + " : Our work is done. About to wake up the other thread(s) in 2s...");
                Thread.sleep(2000);
                // Tell Thread-2 that we have done our work and that it doesn't have to spare the CPU anymore.
                // This essentially tells it "hey, don't wait anymore, start checking if you can get the lock".
                // Try commenting this line and you will see that Thread-2 never wakes up...
                notifyAll();
                // This should show you that Thread-1 still has the lock at this point (even after calling notifyAll).
                // Thread-2 will not print "after wait/notify" for as long as Thread-1 is running this method. The lock is still owned by Thread-1.
                Thread.sleep(1000);
            }
            System.out.println(Thread.currentThread().getName() + " : after wait/notify");
        } catch (InterruptedException ex) {
            Logger.getLogger(Example.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    private void doSomeWorkThatRequiresALockLikeRemoveOrAdd() {
        // Do some work that requires a lock, like remove or add
    }
}
