I have this kind of code:
public class RecursiveQueue {
//@Inject
private QueueService queueService;
public static void main(String[] args) {
RecursiveQueue test = new RecursiveQueue();
test.enqueue(new Node("X"), true);
test.enqueue(new Node("Y"), false);
test.enqueue(new Node("Z"), false);
}
private void enqueue(final Node node, final boolean waitTillFinished) {
final AtomicLong totalDuration = new AtomicLong(0L);
final AtomicInteger counter = new AtomicInteger(0);
AfterCallback callback= new AfterCallback() {
@Override
public void onFinish(Result result) {
for(Node aNode : result.getChildren()) {
counter.incrementAndGet();
queueService.requestProcess(aNode, this);
}
totalDuration.addAndGet(result.getDuration());
if(counter.decrementAndGet() <= 0) { //last one
System.out.println("Processing of " + node.toString() + " has finished in " + totalDuration.get() + " ms");
if(waitTillFinished) {
counter.notify();
}
}
}
};
counter.incrementAndGet();
queueService.requestProcess(node, callback);
if(waitTillFinished) {
try {
counter.wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
Imagine there is a queueService which uses a blocking queue and a few consumer threads to process nodes, i.e. it calls a DAO to fetch the children of each node (it's a tree).
So the requestProcess method just enqueues the node and does not block.
Is there some better/safer way to avoid using wait/notify in this sample?
According to my findings I could use a Phaser (but I work on Java 6) or conditions (but I'm not using locks).
There is no synchronized anything in your example. You mustn't call o.wait() or o.notify() except from within a synchronized(o) {...} block.
Your call to wait() is not in a loop. This may never happen in your JVM, but the language spec permits wait() to return prematurely (that is known as a spurious wakeup). More generally, it is good practice to always wait in a loop because it is a familiar design pattern: a while statement costs no more than an if, you need it because of the possibility of spurious wakeups, and you absolutely must have it in a multi-consumer situation, so you might as well always write it that way.
Since you must use synchronized blocks in order to use wait() and notify(), there probably is no reason to use Atomic anything.
This "recursive" thing seems awfully complicated, what with the callback adding more items to the queue. How deep can that go?
I think you are looking for CountDownLatch.
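As a minimal sketch (not the original poster's code), here is how the enqueue() above could be written on Java 6 with a java.util.concurrent.CountDownLatch instead of wait/notify, assuming the question's QueueService, Node, Result and AfterCallback types:
private void enqueue(final Node node, final boolean waitTillFinished) {
    final AtomicLong totalDuration = new AtomicLong(0L);
    final AtomicInteger counter = new AtomicInteger(0);
    final CountDownLatch done = new CountDownLatch(1);
    AfterCallback callback = new AfterCallback() {
        @Override
        public void onFinish(Result result) {
            for (Node aNode : result.getChildren()) {
                counter.incrementAndGet();
                queueService.requestProcess(aNode, this);
            }
            totalDuration.addAndGet(result.getDuration());
            if (counter.decrementAndGet() == 0) { // last one
                System.out.println("Processing of " + node + " has finished in "
                        + totalDuration.get() + " ms");
                done.countDown(); // releases the waiting caller; no monitor involved
            }
        }
    };
    counter.incrementAndGet();
    queueService.requestProcess(node, callback);
    if (waitTillFinished) {
        try {
            done.await(); // no synchronized block, no spurious-wakeup loop needed
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}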
You actually use locks, or, let's put it this way, you should be using them if you try to use wait/notify, as James pointed out. As you are bound to Java 6 and ForkJoin or Phaser are not available to you, the choice is either implementing wait/notify properly or using a Condition with an explicit lock. It is a matter of personal preference.
Another alternative is to try and restructure your algorithm so that you know the entire set of steps you need to execute up front. That is not always possible, though.
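For completeness, a rough sketch of the Condition variant mentioned above (again, not the poster's code), assuming the same counter-based completion check as in the question; ReentrantLock and Condition come from java.util.concurrent.locks:
final ReentrantLock lock = new ReentrantLock();
final Condition finishedCondition = lock.newCondition();
final boolean[] finished = { false }; // guarded by lock

// inside onFinish(), when the counter reaches zero:
lock.lock();
try {
    finished[0] = true;
    finishedCondition.signalAll();
} finally {
    lock.unlock();
}

// in enqueue(), instead of counter.wait():
lock.lock();
try {
    while (!finished[0]) {            // the loop guards against spurious wakeups
        finishedCondition.await();
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
} finally {
    lock.unlock();
}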
I was about to write something about this, but maybe it is better to have a second opinion before appearing like a fool...
So the idea in the next piece of code (Android's Room package v2.4.1, RoomTrackingLiveData) is that the winner thread is kept alive and is forced to check for contention that may have entered the process (coming from losing threads) while it was computing.
Meanwhile, failed CAS operations performed by these losing threads keep them from entering and executing the code, preventing repeated signals (mComputeFunction.call() or postValue()).
final Runnable mRefreshRunnable = new Runnable() {
@WorkerThread
@Override
public void run() {
if (mRegisteredObserver.compareAndSet(false, true)) {
mDatabase.getInvalidationTracker().addWeakObserver(mObserver);
}
boolean computed;
do {
computed = false;
if (mComputing.compareAndSet(false, true)) {
try {
T value = null;
while (mInvalid.compareAndSet(true, false)) {
computed = true;
try {
value = mComputeFunction.call();
} catch (Exception e) {
throw new RuntimeException("Exception while computing database"
+ " live data.", e);
}
}
if (computed) {
postValue(value);
}
} finally {
mComputing.set(false);
}
}
} while (computed && mInvalid.get());
}
};
final Runnable mInvalidationRunnable = new Runnable() {
@MainThread
@Override
public void run() {
boolean isActive = hasActiveObservers();
if (mInvalid.compareAndSet(false, true)) {
if (isActive) {
getQueryExecutor().execute(mRefreshRunnable);
}
}
}
};
The most obvious thing here is that atomics are being used for everything they are not good at:
Identifying losers and ignoring winners (which is what reactive patterns need),
AND a happens-once behavior, performed by the losing thread.
This is completely counterintuitive to what atomics are designed to achieve: they are extremely good at defining winners, and anything that requires a "happens once" makes it impossible to ensure state consistency (that last point is good material for a philosophical debate about concurrency, and I will happily agree with any conclusion).
If atomics are used as "contention checkers" and "contention blockers", then we can implement the exact same principle with a volatile read of the atomic reference after a successful CAS, checking that value against the snapshot/witness during every other step of the process.
private final AtomicInteger invalidationCount = new AtomicInteger();
private final IntFunction<Runnable> invalidationRunnableFun = invalidationVersion -> (Runnable) () -> {
if (invalidationVersion != invalidationCount.get()) return;
try {
T value = computeFunction.call();
if (invalidationVersion != invalidationCount.get()) return; //In case computation takes too long...
postValue(value);
} catch (Exception e) {
e.printStackTrace();
}
};
getQueryExecutor().execute(invalidationRunnableFun.apply(invalidationCount.incrementAndGet()));
In this case, each thread is left with the individual responsibility of checking its position in the contention lane; if its position has moved and it is no longer at the front, that means a new thread has entered the process, and it should stop further processing.
This alternative is so laughably simple that my first question is:
Why didn't they do it like this?
Maybe my solution has a flaw... but the thing about the first alternative (the nested spin-lock) is that it follows the idea that an atomic CAS operation cannot be verified a second time, and that a verification can only be achieved with a cmpxchg process... which is false.
It also follows the common (but wrong) belief that whatever you define after a successful CAS is the sacred word of GOD... I have seldom seen code check for concurrency issues again once it enters the if body.
if (mInvalid.compareAndSet(false, true)) {
// Ummm... yes... mInvalid is still true...
// Let's use a second atomicReference just in case...
}
It also follows common code conventions that involve "double-<enter something>" in concurrency scenarios.
It is only because the first code follows those ideas that I am inclined to believe my solution is a valid and better alternative.
There is an argument in favor of the "nested spin-lock" option, but it does not hold up well:
The first alternative is "safer" precisely because it is SLOWER, so it has MORE time to identify contention at the tail end of the stream of incoming threads.
BUT it is not even 100% safe, because of the "happens once" requirement that is impossible to ensure.
There is also a behavior of that code where, once it reaches the end of a continuous flow of incoming threads, two signals are dispatched one after the other: the second-to-last one and then the last one.
But if it is safer because it is slower, doesn't that defeat the goal of using atomics, since their usage is supposed to be about better performance in the first place?
I understand the overall concepts of multi-threading and synchronization but am new to writing thread-safe code. I currently have the following code snippet:
synchronized(compiledStylesheets) {
if(compiledStylesheets.containsKey(xslt)) {
exec = compiledStylesheets.get(xslt);
} else {
exec = compile(s, imports);
compiledStylesheets.put(xslt, exec);
}
}
where compiledStylesheets is a HashMap (private, final). I have a few questions.
The compile method can take a few hundred milliseconds to return. This seems like a long time to have the object locked, but I don't see an alternative. Also, it is unnecessary to use Collections.synchronizedMap in addition to the synchronized block, correct? This is the only code that hits this object other than initialization/instantiation.
Alternatively, I know of the existence of a ConcurrentHashMap but I don't know if that's overkill. The putIfAbsent() method will not be usable in this instance because it doesn't allow me to skip the compile() method call. I also don't know if it will solve the "modified after containsKey() but before put()" problem, or if that's even really a concern in this case.
For tasks of this nature, I highly recommend Guava caching support.
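For illustration only, a brief sketch of what that looks like with Guava's LoadingCache; the Templates value type and the compile() call are placeholders standing in for whatever the question's compile step returns:
LoadingCache<String, Templates> stylesheets = CacheBuilder.newBuilder()
        .build(new CacheLoader<String, Templates>() {
            @Override
            public Templates load(String xslt) throws Exception {
                return compile(xslt); // invoked at most once per key
            }
        });
Templates exec = stylesheets.get(xslt); // may throw ExecutionException if compile() fails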
If you can't use that library, here is a compact implementation of a Multiton. Use of the FutureTask was a tip from assylias, here, via OldCurmudgeon.
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

public abstract class Cache<K, V>
{
private final ConcurrentMap<K, Future<V>> cache = new ConcurrentHashMap<>();
public final V get(K key)
throws InterruptedException, ExecutionException
{
Future<V> ref = cache.get(key);
if (ref == null) {
FutureTask<V> task = new FutureTask<>(new Factory(key));
ref = cache.putIfAbsent(key, task);
if (ref == null) {
task.run();
ref = task;
}
}
return ref.get();
}
protected abstract V create(K key)
throws Exception;
private final class Factory
implements Callable<V>
{
private final K key;
Factory(K key)
{
this.key = key;
}
@Override
public V call()
throws Exception
{
return create(key);
}
}
}
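A hypothetical usage sketch for the Cache above, in terms of the question's stylesheet scenario; CompiledStylesheet and compile() are placeholder names, not part of the original answer:
Cache<String, CompiledStylesheet> stylesheets = new Cache<String, CompiledStylesheet>() {
    @Override
    protected CompiledStylesheet create(String xslt) throws Exception {
        return compile(xslt); // the expensive step runs at most once per key
    }
};
CompiledStylesheet exec = stylesheets.get(xslt); // concurrent callers block on the same FutureTask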
I think you are looking for a Multiton.
There's a very good Java one here that @assylias posted some time ago.
You can loosen the lock at the risk of an occasional doubly compiled stylesheet in race condition.
Object y;
// lock here if needed
y = map.get(x);
if(y == null) {
y = compileNewY();
// lock here if needed
map.put(x, y); // this may happen twice; if put is thread-safe, one will be ignored
y = map.get(x); // essential because another thread's y may have been put
}
This requires get and put to be atomic, which is true in the case of ConcurrentHashMap and which you can achieve by wrapping the individual calls to get and put with a lock in your own class. (As I tried to explain with the "lock here if needed" comments: the point is that you only need to wrap the individual calls, not hold one big lock.)
This is a standard thread safe pattern to use even with ConcurrentHashMap (and putIfAbsent) to minimize the cost of compiling twice. It still needs to be acceptable to compile twice sometimes, but it should be okay even if expensive.
By the way, you can solve that problem. Usually the above pattern isn't used with a heavy function like compileNewY but a lightweight constructor new Y(). e.g. do this:
class PrecompiledY {
public volatile Y y;
private final AtomicBoolean compiled = new AtomicBoolean(false);
public void compile() {
if(!compiled.getAndSet(true)) {
y = compileNewY(); // the heavy compilation, done at most once
}
}
}
// ...
ConcurrentMap<X, PrecompiledY> map; // alternatively use proper locking
py = map.get(x);
if(py == null) {
py = new PrecompiledY(); // much cheaper than compiling
map.put(x, py); // this may happen twice; if put is thread-safe, one will be ignored
py = map.get(x); // essential because another thread's py may have been put
py.compile(); // an object that didn't get inserted never gets compiled
}
Also:
Alternatively, I know of the existence of a ConcurrentHashMap but I don't know if that's overkill.
Given that your code is heavily locking, ConcurrentHashMap is almost certainly far faster, so not overkill. (And much more likely to be bug-free. Concurrency bugs are not fun to fix.)
Please see Erickson's comment below. Using double-checked locking with HashMaps is not very smart.
The compile method can take a few hundred milliseconds to return. This seems like a long time to have the object locked, but I don't see an alternative.
You can use double-checked locking, and note that you don't need any lock before get since you never remove anything from the map.
if(compiledStylesheets.containsKey(xslt)) {
exec = compiledStylesheets.get(xslt);
} else {
synchronized(compiledStylesheets) {
if(compiledStylesheets.containsKey(xslt)) {
// another thread might have created it while
// this thread was waiting for lock
exec = compiledStylesheets.get(xslt);
} else {
exec = compile(s, imports);
compiledStylesheets.put(xslt, exec);
}
}
}
Also, it is unnecessary to use Collections.synchronizedMap in addition to the synchronized block, correct?
Correct
This is the only code that hits this object other than initialization/instantiation.
First of all, the code as you posted it is race-condition-free because containsKey() result will never change while compile() method is running.
Collections.synchronizedMap() is useless for your case as stated above because it wraps all map methods into a synchronized block using either this as a mutex or another object you provided (for two-argument version).
IMO using ConcurrentHashMap is also not an option, because it stripes its locks based on the key's hashCode() result; its concurrent iterators are also of no use here.
If you really want compile() out of the synchronized block, you may compute the value before checking containsKey(). This may drag the overall performance down, but it may be better than calling compile() inside the synchronized block. To decide, I would personally consider how often a key "miss" happens and, based on that, which option is preferable: holding the lock for longer, or always computing the value.
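A rough sketch of that trade-off, reusing the question's names (CompiledStylesheet is again a placeholder for whatever compile() returns); a losing thread's freshly compiled result is simply discarded:
CompiledStylesheet candidate = compile(s, imports); // always computed, outside the lock
synchronized (compiledStylesheets) {
    CompiledStylesheet existing = compiledStylesheets.get(xslt);
    if (existing == null) {
        compiledStylesheets.put(xslt, candidate);
        exec = candidate;
    } else {
        exec = existing; // another thread won the race; drop our candidate
    }
}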
I want to clarify my understanding: if I surround a block of code with a synchronized(this) {} statement, does this mean that I am making those statements atomic?
No, it does not ensure your statements are atomic. For example, if you have two statements inside one synchronized block, the first may succeed while the second fails. Hence, the result is not "all or nothing". But with regard to multiple threads, you do ensure that no statements of two threads are interleaved. In other words: all statements of all threads are strictly serialized, even though there is no guarantee that either all or none of a thread's statements get executed.
Have a look at how Atomicity is defined.
Here is an example showing that a reader is able to read a corrupted state, so the synchronized block was not executed atomically:
import static java.util.concurrent.Executors.newFixedThreadPool;

import java.util.Arrays;
import java.util.HashSet;
import java.util.Random;
import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public class Example {

    public static void sleep() {
        try { Thread.sleep(400); } catch (InterruptedException e) {}
    }

    public static void main(String[] args) {
        final Example example = new Example(1);
        ExecutorService executor = newFixedThreadPool(2);
        try {
            Future<?> reader = executor.submit(new Runnable() {
                @Override public void run() {
                    int value;
                    do {
                        value = example.getSingleElement();
                        System.out.println("single value is: " + value);
                    } while (value != 10);
                }
            });
            Future<?> writer = executor.submit(new Runnable() {
                @Override public void run() {
                    for (int value = 2; value < 10; value++) example.failDoingAtomic(value);
                }
            });
            reader.get();
            writer.get();
        } catch (Exception e) {
            e.getCause().printStackTrace();
        } finally {
            executor.shutdown();
        }
    }

    private final Set<Integer> singleElementSet;

    public Example(int singleIntValue) {
        singleElementSet = new HashSet<>(Arrays.asList(singleIntValue));
    }

    public synchronized void failDoingAtomic(int replacement) {
        singleElementSet.clear();
        if (new Random().nextBoolean()) sleep();
        else throw new RuntimeException("I failed badly before adding the new value :-(");
        singleElementSet.add(replacement);
    }

    public int getSingleElement() {
        return singleElementSet.iterator().next();
    }
}
No, synchronization and atomicity are two different concepts.
Synchronization means that a code block can be executed by at most one thread at a time, but other threads (that execute some other code that uses the same data) can see intermediate results produced inside the "synchronized" block.
Atomicity means that other threads do not see intermediate results - they see either the initial or the final state of the data affected by the atomic operation.
It's unfortunate that Java uses synchronized as a keyword. A synchronized block in Java is a "mutex" (short for "mutual exclusion"): a mechanism that ensures only one thread at a time can enter the block.
Mutexes are just one of many tools that are used to achieve "synchronization" in a multi-threaded program. Broadly speaking, synchronization refers to all of the techniques used to ensure that the threads work in a coordinated fashion to achieve a desired outcome.
Atomicity is what Oleg Estekhin said, above. We usually hear about it in the context of "transactions." Mutual exclusion (i.e., Java's synchronized) guarantees something less than atomicity: Namely, it protects invariants.
An invariant is any assertion about the program's state that is supposed to be "always" true. For example, in a game where players exchange virtual coins, the total number of coins in the game might be an invariant. But it is often impossible to advance the state of the program without temporarily breaking the invariant. The purpose of a mutex is to ensure that only one thread, the one doing the work, can see the temporary "broken" state.
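A small sketch of that idea with hypothetical names: the invariant (the total number of coins) is temporarily broken inside the mutex, but no other thread can observe the broken state:
class CoinGame {
    private final Object lock = new Object();
    private int aliceCoins = 50;
    private int bobCoins = 50;

    void payAliceToBob(int amount) {
        synchronized (lock) {
            aliceCoins -= amount; // invariant (total == 100) is broken here...
            bobCoins += amount;   // ...and restored before the lock is released
        }
    }

    int totalCoins() {
        synchronized (lock) {
            return aliceCoins + bobCoins; // always observes 100
        }
    }
}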
For code that uses synchronized on that object: yes.
For code that doesn't use the synchronized keyword on that object: no.
Can we say that by synchronizing a block of code we are making the contained statements atomic?
You are taking a very big leap there. Atomicity means that the operation, if atomic, will complete in one CPU cycle or the equivalent of one CPU cycle, whereas synchronizing a block means that only one thread can access the critical region. Processing the code in the critical region may take multiple CPU cycles, which makes it non-atomic.
What are the possible ways to make code thread-safe without using the synchronized keyword?
Actually, lots of ways:
No need for synchronization at all if you don't have mutable state.
No need for synchronization if the mutable state is confined to a single thread. This can be done by using local variables or java.lang.ThreadLocal (see the sketch after this list).
You can also use built-in synchronizers. java.util.concurrent.locks.ReentrantLock has the same functionality as the lock you access when using synchronized blocks and methods, and it is even more powerful.
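A minimal sketch of the thread-confinement option above: each thread gets its own SimpleDateFormat (a classic non-thread-safe class), so the mutable formatter never needs locking. The date-formatting use case is just an illustration, not from the question:
private static final ThreadLocal<SimpleDateFormat> FORMATTER =
        new ThreadLocal<SimpleDateFormat>() {
            @Override
            protected SimpleDateFormat initialValue() {
                return new SimpleDateFormat("yyyy-MM-dd"); // one instance per thread
            }
        };

public static String format(Date date) {
    return FORMATTER.get().format(date); // no synchronization needed
}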
Only have variables/references local to methods. Or ensure that any instance variables are immutable.
You can make your code thread-safe by making all the data immutable, if there is no mutability, everything is thread-safe.
Secondly, you may want to have a look at the java.util.concurrent API, which provides read/write locks that perform better when there are many readers and few writers. The plain synchronized keyword will block two readers as well.
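A minimal sketch of the read/write lock idea, assuming a plain shared map as the guarded state:
private final Map<String, String> data = new HashMap<String, String>();
private final ReadWriteLock rwLock = new ReentrantReadWriteLock();

public String read(String key) {
    rwLock.readLock().lock();      // many readers may hold the read lock at once
    try {
        return data.get(key);
    } finally {
        rwLock.readLock().unlock();
    }
}

public void write(String key, String value) {
    rwLock.writeLock().lock();     // exclusive: blocks both readers and writers
    try {
        data.put(key, value);
    } finally {
        rwLock.writeLock().unlock();
    }
}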
////////////FIRST METHOD USING SINGLE boolean//////////////
public class ThreadTest implements Runnable {
ThreadTest() {
Log.i("Ayaz", "Constructor..");
}
private boolean lockBoolean = false;
public void run() {
Log.i("Ayaz", "Thread started.." + Thread.currentThread().getName());
while (lockBoolean) {
// infinite loop for other thread if one is accessing
}
lockBoolean = true;
synchronizedMethod();
}
/**
* This method is synchronized without using synchronized keyword
*/
public void synchronizedMethod() {
Log.e("Ayaz", "processing...." + Thread.currentThread().getName());
try {
Thread.currentThread().sleep(3000);
} catch (Exception e) {
System.out.println("Exp");
}
Log.e("Ayaz", "complete.." + Thread.currentThread().getName());
lockBoolean = false;
}
} //end of ThreadTest class
//For testing use below line in main method or in Activity
ThreadTest threadTest = new ThreadTest();
Thread threadA = new Thread(threadTest, "A thead");
Thread threadB = new Thread(threadTest, "B thead");
threadA.start();
threadB.start();
///////////SECOND METHOD USING TWO boolean/////////////////
public class ThreadTest implements Runnable {
ThreadTest() {
Log.i("Ayaz", "Constructor..");
}
private boolean isAnyThreadInUse = false;
private boolean lockBoolean = false;
public void run() {
Log.i("Ayaz", "Thread started.." + Thread.currentThread().getName());
while (!lockBoolean)
if (!isAnyThreadInUse) {
isAnyThreadInUse = true;
synchronizedMethod();
lockBoolean = true;
}
}
/**
* This method is synchronized without using synchronized keyword
*/
public void synchronizedMethod() {
Log.e("Ayaz", "processing...." + Thread.currentThread().getName());
try {
Thread.currentThread().sleep(3000);
} catch (Exception e) {
System.out.println("Exp");
}
Log.e("Ayaz", "complete.." + Thread.currentThread().getName());
isAnyThreadInUse = false;
}
} // end of ThreadTest class
//For testing use below line in main method or in Activity
ThreadTest threadTest = new ThreadTest();
Thread t1 = new Thread(threadTest, "a thead");
Thread t2 = new Thread(threadTest, "b thead");
t1.start();
t2.start();
To maintain predictability you must either ensure all access to mutable data is made sequentially or handle the issues caused by parallel access.
The most gross protection uses the synchronized keyword. Beyond that there are at least two layers of possibility, each with their benefits.
Locks/Semaphores
These can be very effective. For example, if you have a structure that is read by many threads but only updated by one you may find a ReadWriteLock useful.
Locks can be much more efficient if you choose your lock to match the algorithm.
Atomics
Use of AtomicReference for example can often provide completely lock free functionality. This can usually provide huge benefits.
The reasoning behind atomics is to allow them to fail but to tell you they failed in a way you can handle it.
For example, if you want to change a value you can read it and then write its new value so long as it is still the old value. This is called a "compare and set" or cas and can usually be implemented in hardware and so is extremely efficient. All you then need is something like:
AtomicLong atomic = new AtomicLong();
long old = atomic.get();
while (!atomic.compareAndSet(old, old + 1)) {
    // The value changed between my get and the CAS. Get it again.
    old = atomic.get();
}
Note, however, that predictability is not always the requirement.
Well there are many ways you can achieve this, but each contains many flavors. Java 8 also ships with new concurrency features.
Some ways you could make sure thread safety are:
Semaphores
Locks-Reentrantlock,ReadWriteLock,StampedLock(Java 8)
Why do you need to do it?
Using only local variables/references will not solve most complex business needs.
Also, even if instance variables are immutable, their references can still be changed by other threads.
One option is to use something like a SingleThreadModel, but it is highly discouraged and deprecated.
You can also look at the concurrent API, as suggested above by Kal.
In a legacy application I have a Vector that keeps a chronological list of files to process and multiple threads ask it for the next file to process. (Note that I realize that there are likely better collections to use (feel free to suggest), but I don't have time for a change of that magnitude right now.)
At a scheduled interval, another thread checks the working directory to see if any files appear to have been orphaned because something went wrong. The method called by this thread occasionally throws a ConcurrentModificationException if the system is abnormally busy. So I know that at least two threads are trying to use the Vector at once.
Here is the code. I believe the issue is the use of the clone() on the returned Vector.
private synchronized boolean isFileInDataStore( File fileToCheck ){
boolean inFile = false;
for( File wf : (Vector<File>)m_dataStore.getFileList().clone() ){
File zipName = new File( Tools.replaceFileExtension(fileToCheck.getAbsolutePath(), ZIP_EXTENSION) );
if(wf.getAbsolutePath().equals(zipName.getAbsolutePath()) ){
inFile = true;
break;
}
}
return inFile;
}
The getFileList() method is as follows:
public synchronized Vector<File> getFileList() {
synchronized(fileList){
return fileList;
}
}
As a quick fix, would changing the getFileList method to return a copy of the vector as follows suffice?
public synchronized Vector<File> getFileListCopy() {
synchronized(fileList){
return (Vector<File>)fileList.clone();
}
}
I must admit that I am generally confused by the use of synchronized in Java as it pertains to collections, as simply declaring the method as such is not enough. As a bonus question, is declaring the method as synchronized and wrapping the return call with another synchronized block just crazy coding? Looks redundant.
EDIT: Here are the other methods which touch the list.
public synchronized boolean addFile(File aFile) {
boolean added = false;
synchronized(fileList){
if( !fileList.contains(aFile) ){
added = fileList.add(aFile);
}
}
notifyAll();
return added;
}
public synchronized void removeFile( File dirToImport, File aFile ) {
if(aFile!=null){
synchronized(fileList){
fileList.remove(aFile);
}
// Create a dummy list so I can synchronize it.
List<File> zipFiles = new ArrayList<File>();
synchronized(zipFiles){
// Populate with actual list
zipFiles = (List<File>)diodeTable.get(dirToImport);
if(zipFiles!=null){
zipFiles.remove(aFile);
// Repopulate list if the number falls below the number of importer threads.
if( zipFiles.size()<importerThreadCount ){
diodeTable.put(dirToImport, getFileList( dirToImport ));
}
}
}
notifyAll();
}
}
Basically, there are two separate issues here: synchronization and ConcurrentModificationException. Vector, in contrast to e.g. ArrayList, is synchronized internally, so basic operations like add() or get() do not need synchronization. But you can get a ConcurrentModificationException even from a single thread if you are iterating over a Vector and modify it in the meantime, e.g. by inserting an element. So, if you performed a modifying operation inside your for loop, you could break the Vector even with a single thread.
Now, if you return your Vector outside of your class, you don't prevent anyone from modifying it without proper synchronization in their own code. Synchronizing on fileList in the original version of getFileList() is pointless. Returning a copy instead of the original could help, as could using a collection which allows modification while iterating, like CopyOnWriteArrayList (but do note the additional cost of modifications; it may be a showstopper in some cases).
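For illustration, a tiny sketch of the CopyOnWriteArrayList alternative mentioned above: iteration runs over a snapshot of the backing array, so a concurrent modification never throws ConcurrentModificationException (but every write copies the whole array):
List<File> fileList = new CopyOnWriteArrayList<File>();
fileList.add(new File("a.zip"));
fileList.add(new File("b.zip"));
for (File f : fileList) {   // iterates over a snapshot
    fileList.remove(f);     // safe: no ConcurrentModificationException
}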
"I am generally confused by the use of synchronized in Java as it
pertains to collections, as simply declaring the method as such is not
enough"
Correct. synchronized on a method means that only one thread at a time may enter the method. But if the same collection is visible from multiple methods, then this doesn't help much.
To prevent two threads accessing the same collection at the same time, they need to synchronize on the same object - e.g. the collection itself. You have done this in some of your methods, but isFileInDataStore appears to access a collection returned by getFileList without synchronizing on it.
Note that obtaining the collection in a synchronized manner, as you have done in getFileList, isn't enough - it's the accessing that needs synchronizing. Cloning the collection would (probably) fix the issue if you only need read-access.
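A rough sketch of what that looks like for isFileInDataStore, reusing the question's names: the iteration holds the lock on the same list object that the other methods synchronize on, and nothing else changes:
private boolean isFileInDataStore(File fileToCheck) {
    File zipName = new File(Tools.replaceFileExtension(
            fileToCheck.getAbsolutePath(), ZIP_EXTENSION));
    Vector<File> files = m_dataStore.getFileList();
    synchronized (files) {   // same monitor that addFile/removeFile lock on
        for (File wf : files) {
            if (wf.getAbsolutePath().equals(zipName.getAbsolutePath())) {
                return true;
            }
        }
    }
    return false;
}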
As well as looking at synchronizing, I suggest you track down which threads are involved - e.g. print out the call stack of the exception and/or use a debugger. It's better to really understand what's going on than to just synchronize and clone until the errors go away!
Where does the m_dataStore get updated? That's a likely culprit if it's not synchronized.
First, you should move your logic into whatever class m_dataStore is, if you haven't already.
Once you've done that, make your list final, and synchronize on it ONLY if you are modifying its elements. Threads that only need to read it don't need synchronized access. They may end up polling an outdated list, but I suppose that is not a problem. This gets you increased performance.
As far as I can tell, you would only need to synchronize when adding and removing, and only need to lock your list.
e.g.
package answer;
import java.util.logging.Level;
import java.util.logging.Logger;
public class Example {
public static void main(String[] args)
{
Example c = new Example();
c.runit();
}
public void runit()
{
Thread.currentThread().setName("Thread-1");
new Thread("Thread-2")
{
@Override
public void run() {
test1(true);
}
}.start();
// Force a scenario where Thread-1 allows Thread-2 to acquire the lock
try {
Thread.sleep(1000);
} catch (InterruptedException ex) {
Logger.getLogger(Example.class.getName()).log(Level.SEVERE, null, ex);
}
// At this point, Thread-2 has acquired the lock, but it has entered its wait() method, releasing the lock
test1(false);
}
public synchronized void test1(boolean wait)
{
System.out.println( Thread.currentThread().getName() + " : Starting...");
try {
if (wait)
{
// Apparently the current thread is supposed to wait for some other thread to do something...
wait();
} else {
// The current thread is supposed to keep running with the lock
doSomeWorkThatRequiresALockLikeRemoveOrAdd();
System.out.println( Thread.currentThread().getName() + " : Our work is done. About to wake up the other thread(s) in 2s...");
Thread.sleep(2000);
// Tell Thread-2 that we have done our work and that it doesn't have to spare the CPU anymore.
// This essentially tells it: "hey, don't wait anymore, start checking if you can get the lock"
// Try commenting this line and you will see that Thread-2 never wakes up...
notifyAll();
// This should show you that Thread-1 will still have the lock at this point (even after calling notifyAll).
//Thread-2 will not print "after wait/notify" for as long as Thread-1 is running this method. The lock is still owned by Thread-1.
Thread.sleep(1000);
}
System.out.println( Thread.currentThread().getName() + " : after wait/notify");
} catch (InterruptedException ex) {
Logger.getLogger(Example.class.getName()).log(Level.SEVERE, null, ex);
}
}
private void doSomeWorkThatRequiresALockLikeRemoveOrAdd()
{
// Do some work that requires a lock like remove or add
}
}