I'm loosely following a tutorial on Java NIO to create my first multi-threading, networking Java application. The tutorial is basically about creating an echo-server and a client, but at the moment I'm just trying to get as far as a server receiving messages from the clients and logging them to the console. By searching the tutorial page for "EchoServer" you can see the class that I base most of the relevant code on.
My problem is (at least I think it is) that I can't find a way to initialize the queue of messages to be processed so that it can be used as I want to.
The application is running on two threads: a server thread, which listens for connections and socket data, and a worker thread which processes data received by the server thread. When the server thread has received a message, it calls processData(byte[] data) on the worker, where the data is added to a queue:
1. public void processData(byte[] data) {
2.     synchronized (queue) {
3.         queue.add(new String(data));
4.         queue.notify();
5.     }
6. }
In the worker thread's run() method, I have the following code:
7.  while (true) {
8.      String msg;
9.
10.     synchronized (queue) {
11.         while (queue.isEmpty()) {
12.             try {
13.                 queue.wait();
14.             } catch (InterruptedException e) { }
15.         }
16.         msg = queue.poll();
17.     }
18.
19.     System.out.println("Processed message: " + msg);
20. }
I have verified in the debugger that the worker thread gets to line 13, but doesn't proceed to line 16, when the server starts. I take that as a sign of a successful wait. I have also verified that the server thread gets to line 4 and calls notify() on the queue. However, the worker thread doesn't seem to wake up.
In the javadoc for wait(), it is stated that
The current thread must own this object's monitor.
Given my inexperience with threads I am not exactly certain what that means, but I have tried instantiating the queue from the worker thread with no success.
Why does my thread not wake up? How do I wake it up correctly?
Update:
As @Fly suggested, I added some log calls to print out System.identityHashCode(queue), and sure enough the queues were different instances.
This is the entire Worker class:
public class Worker implements Runnable {

    Queue<String> queue = new LinkedList<String>();

    public void processData(byte[] data) { ... }

    @Override
    public void run() { ... }
}
The worker is instantiated in the main method and passed to the server as follows:
public static void main(String[] args)
{
    Worker w = new Worker();

    // Give names to threads for debugging purposes
    new Thread(w, "WorkerThread").start();
    new Thread(new Server(w), "ServerThread").start();
}
The server saves the Worker instance to a private field and calls processData() on that field. Why do I not get the same queue?
Update 2:
The entire code for the server and worker threads is now available here.
I've placed the code from both files in the same paste, so if you want to compile and run the code yourself, you'll have to split them up again. Also, there's a bunch of calls to Log.d(), Log.i(), Log.w() and Log.e() - those are just simple logging routines that construct a log message with some extra information (timestamp and such) and output it to System.out and System.err.
I'm going to guess that you are getting two different queue objects because you are creating a whole new Worker instance. You didn't post the code that starts the Worker, but assuming that it also instantiates and starts the Server, then the problem is on the line where you assign this.worker = new Worker(); instead of assigning it to the Worker parameter.
public Server(Worker worker) {
    this.clients = new ArrayList<ClientHandle>();
    this.worker = new Worker(); // <------ THIS SHOULD BE this.worker = worker;
    try {
        this.start();
    } catch (IOException e) {
        Log.e("An error occurred when trying to start the server.", e,
                this.getClass());
    }
}
The thread for the Worker is probably using the worker instance passed to the Server constructor, so the Server needs to assign its own worker reference to that same Worker object.
You might want to use a LinkedBlockingQueue instead; it handles the multithreading part internally, so you can focus more on the logic. For example:
// a shared instance somewhere in your code
LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<String>();
in one of your threads
public void processData(byte[] data) {
    queue.offer(new String(data));
}
and in your other thread
while (running) { // private class member, set to false to exit loop
    String msg = queue.poll(500, TimeUnit.MILLISECONDS);
    if (msg == null) {
        // queue was empty
        Thread.yield();
    } else {
        System.out.println("Processed message: " + msg);
    }
}
Note: for the sake of completeness, the poll method throws an InterruptedException that you may handle as you see fit. In that case, the while loop could be surrounded by the try...catch so the loop exits if the thread is interrupted.
I'm assuming that queue is an instance of some class that implements the Queue interface, and that (therefore) the poll() method doesn't block.
In this case, you simply need to instantiate a single queue object that can be shared by the two threads. The following will do the trick:
Queue<String> queue = new LinkedList<String>();
The LinkedList class is not thread-safe, but provided that you always access and update the queue instance in a synchronized(queue) block, this will take care of thread-safety.
I think that the rest of the code is correct. You appear to be doing the wait / notify correctly. The worker thread should get and print the message.
If this isn't working, then the first thing to check is whether the two threads are using the same queue object. The second thing to check is whether processData is actually being called. A third possibility is that some other code is adding or removing queue entries, and doing it the wrong way.
notify() calls are lost if there is no thread waiting when notify() is called. So if one thread calls notify() first and another thread calls wait() afterwards, you will deadlock.
You want to use a semaphore instead. Unlike condition variables, release()/increment() calls are not lost on semaphores.
Start the semaphore's count at zero. When you add to the queue increase it. When you take from the queue decrease it. You will not get lost wake-up calls this way.
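A minimal sketch of that idea, assuming a LinkedList guarded by its own lock plus a java.util.concurrent.Semaphore whose permit count mirrors the number of queued messages (the class and field names here are illustrative, not from the original code):

import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.Semaphore;

public class MessageQueue {
    private final Queue<String> queue = new LinkedList<String>();
    // The count starts at zero: one permit per queued message.
    private final Semaphore available = new Semaphore(0);

    // Called by the server thread.
    public void processData(byte[] data) {
        synchronized (queue) {
            queue.add(new String(data));
        }
        available.release(); // increment; unlike notify(), this is never lost
    }

    // Called by the worker thread.
    public String take() throws InterruptedException {
        available.acquire(); // decrement; blocks until a message is available
        synchronized (queue) {
            return queue.poll();
        }
    }
}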
Update
To clear up some confusion regarding condition variables and semaphores.
There are two differences between condition variables and semaphores.
Condition variables, unlike semaphores, are associated with a lock. You must acquire the lock before you call wait() and notify(). Semaphores do not have this restriction. Also, wait() calls release the lock.
notify() calls are lost on condition variables, meaning, if you call notify() and no thread is sleeping with a call to wait(), then the notify() is lost. This is not the case with semaphores. The ordering of acquire() and release() calls on semaphores does not matter because the semaphore maintains a count. This is why they are sometimes called counting semaphores.
In the javadoc for wait(), it is stated that
The current thread must own this object's monitor.
Given my inexperience with threads I am not exactly certain what that
means, but I have tried instantiating the queue from the worker thread
with no success.
They use really bizarre and confusing terminology. As a general rule of thumb, "object's monitor" in Java speak means "object's lock". Every object in Java has, inside it, a lock and one condition variable (wait()/notify()). So what that line means is that before you call wait() or notify() on an object (in your case the queue object), you must first acquire the lock with synchronized(object) { ... }. Being "inside" the monitor in Java speak means possessing the lock via synchronized(). The terminology was adopted from research papers and applied to Java concepts, so it is a bit confusing, since these words mean something slightly different from what they originally meant.
The code seems to be correct.
Do both threads use the same queue object? You can check this by object id in a debugger.
Does changing notify() to notifyAll() help? There could be another thread that invoked wait() on the queue.
OK, after some more hours of pointlessly looking around the net I decided to just screw around with the code for a while and see what I could get to. This worked:
private static BlockingQueue<String> queue;

private BlockingQueue<String> getQueue() {
    if (queue == null) {
        queue = new LinkedBlockingQueue<String>();
    }
    return queue;
}
As Yanick Rochon pointed out the code could be simplified slightly by using a BlockingQueue instead of an ordinary Queue, but the change that made the difference was that I implemented the Singleton pattern.
As this solves my immediate problem to get the app working, I'll call this the answer. Large amounts of kudos should go to @Fly and others for pointing out that the Queue instances might not be the same - without that I would never have figured this out. However, I'm still very curious on why I have to do it this way, so I will ask a new question about that in a moment.
Related
I want the server to execute a certain part of the service implementation code for one client at a time, thread-safely and sequentially. Here's the part of the server-side service implementation that does this:
public BorcData getBorcData(String userId) throws GeneralException, EyeksGwtException
{
    StoredProcedure sp = DALDB.storedProcedure("BORCBILDIRIM_GETMUKDATA_SP");
    DALResult spResult;
    Row spRow;
    String vergiNo;
    String asamaOid;

    synchronized (ServerUtility.lock_GeriArama_GetBorcData_GetMukDataSP)
    {
        String curOptime = CSDateUtility.getCurrentDateTimeToSave();
        sp.addParam(curOptime);
        spResult = sp.execute();
        if (!spResult.hasNext())
        {
            throw new GeneralException("53", "");
        }
    }
You see the synchronized block. The object that I use for the lock is defined as:
public static Object lock_GeriArama_GetBorcData_GetMukDataSP = new Object();
My problem is this: I think I saw that while one client had been waiting a long time to enter that synchronized block, another client called this service, executed the block without getting in line, and went on. The first client was still waiting.
I know that the server-side runs pure Java. Is it possible that the server-side is being unfair to the clients and not running the longest waiting client's request first?
EDIT: Actually, the fairness isn't even the real problem. Sometimes clients look like they just hang in that synchronized part, waiting forever for the service to finish.
First, your lock object should always be declared final. This doesn't fix any problems by itself, but it tells you if you coded something wrong (like reassigning the lock to a different object somewhere).
One way to ensure fairness is to use a ReentrantLock initialized with true (fair scheduling). It will ensure that clients do not hang indefinitely, but are executed in a FIFO order. The good thing is that this requires only a minor change to your code, by replacing all those synchronized blocks with:
lock.lock();
try {
    // previous code
} finally {
    lock.unlock();
}
The finally is just a safety measure, should any part inside throw an exception.
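The lock itself would be declared once and shared, for example like this, replacing the plain Object lock from the question (the field name is illustrative; passing true selects the fair ordering policy):

import java.util.concurrent.locks.ReentrantLock;

// Fair lock: waiting threads acquire it in roughly FIFO order.
public static final ReentrantLock lock = new ReentrantLock(true);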
Other than that your code looks perfectly fine, so the issue is most likely in the DB and not caused by using synchronized at all.
Multithreading in Java doesn't guarantee sequential execution.
Read the article : http://www.javaworld.com/javaworld/jw-07-2002/jw-0703-java101.html
It is very helpful for understanding how threads are scheduled.
I have an app that needs to wait for some unknown amount of time. It must wait until several data fields are finished being populated by a server.
The server's API provides me a way to request data, easy enough...
The server's API also provides a way to receive my data back, one field at a time. It does not tell me when all of the fields are finished being populated.
What is the most efficient way to wait until my request is finished being processed by the server? Here's some pseudocode:
public class ServerRequestMethods {
    public void requestData();
}

public interface ServerDeliveryMethods {
    public void receiveData(String field, int value);
}

public class MyApp extends ServerRequestMethods implements ServerDeliveryMethods {

    //store data fields and their respective values
    public Hashtable<String, Integer> myData;

    //implement required ServerDeliveryMethods
    public void receiveData(String field, int value) {
        myData.put(field, value);
    }

    public static void main(String[] args) {
        this.requestData();
        // Now I have to wait for all of the fields to be populated,
        // so that I can decide what to do next.
        decideWhatToDoNext();
        doIt();
    }
}
I have to wait until the server is finished populating my data fields, and the server doesn't let me know when the request is complete. So I must keep checking whether or not my request has finished processing. What is the most efficient way to do this?
wait() and notify(), with a method guarding the while loop that checks if I have all of the required values yet every time I'm woken up by notify()?
Observer and Observable, with a method that checks if I have the all the required values yet every time my Observer.Update() is called?
What's the best approach? Thanks.
If I understood you right, some other thread calls receiveData on your MyApp to fill the data. If that's right, then here's how you do it:
You sleep like this:
do {
    // We are acquiring a monitor on "this" object, so it would require a
    // notification. You should put some time (like 100 msec maybe) to prevent
    // a very rare but still possible deadlock, when the notification comes
    // before this.wait is called.
    this.wait(someSmallTime);
} while (!allFieldsAreFilled());
receiveData should make a notify call, to unpause that wait call of yours. For example like this:
myData.put(field, value);
this.notify();
Both blocks will need to be synchronized on this object to be able to acquire its monitor (that's needed for wait). You need to either declare the methods as synchronized, or put the respective blocks inside a synchronized(this) {...} block.
Use a CompletionService
http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/CompletionService.html
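The linked Javadoc is terse, so here is a rough, self-contained sketch of the pattern it enables; the field names and the fake lookup inside call() are placeholders, not part of the original question:

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FieldFetchDemo {
    public static void main(String[] args) throws Exception {
        List<String> fields = Arrays.asList("a", "b", "c"); // placeholder field names
        ExecutorService pool = Executors.newFixedThreadPool(3);
        CompletionService<Integer> completion =
                new ExecutorCompletionService<Integer>(pool);

        // One task per field; the body just fakes the server call.
        for (final String field : fields) {
            completion.submit(new Callable<Integer>() {
                public Integer call() throws Exception {
                    return field.length(); // stand-in for the real lookup
                }
            });
        }

        // take() blocks until the next task finishes, so this loop ends
        // exactly when every field has been delivered.
        for (int i = 0; i < fields.size(); i++) {
            System.out.println(completion.take().get());
        }
        pool.shutdown();
    }
}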
I think the most efficient method is wait and notify. You can put a thread to sleep with wait() and wake it up from another thread, e.g. your server, with notify(). wait() is a blocking method, so you don't have to poll anything. You can also use the static method Thread.sleep(milliseconds) to wait for a fixed time; if you put sleep into an endless while loop that checks a condition, with a suitable wait time, you'll effectively wait as well.
I prefer wait() and notify(); it's the most efficient of all.
Pretty old question, but I looked for similar problem and found a solution.
First of all, a developer should never create a thread that will wait forever. You really have to create an exit condition if you are using a 'while' loop. Also, waiting for an InterruptedException is tricky: if another thread doesn't call yourThread.interrupt(), you'll wait until the program truly ends.
I used java.util.concurrent.CountDownLatch so in short:
/*as field*/
CountDownLatch semaphore = new CountDownLatch(1);
/*waiting code*/
boolean timeout = !semaphore.await(10, TimeUnit.SECONDS);
/*releasing code*/
semaphore.countDown();
As a result, the 'waiting code' thread will wait until some other thread runs the 'releasing code', or until it times out. If you want to wait for 10 fields to be populated, use 'new CountDownLatch(10)'.
The principle is similar for 'java.util.concurrent.Semaphore', but a semaphore is better suited for access locking, which isn't your case.
It seems like many people have been having trouble with this (myself included) but I have found an easy and sleek solution. Use this method:
public static void delay(int time) {
    long endTime = System.currentTimeMillis() + time;
    while (System.currentTimeMillis() < endTime)
    {
        // do nothing
    }
}
This gets the current time and sets an end time (current time + time to wait) and waits until the current time hits the end time.
I have a ThreadPoolExecutor that seems to be lying to me when I call getActiveCount(). I haven't done a lot of multithreaded programming however, so perhaps I'm doing something incorrectly.
Here's my TPE
@Override
public void afterPropertiesSet() throws Exception {
    BlockingQueue<Runnable> workQueue;
    int maxQueueLength = threadPoolConfiguration.getMaximumQueueLength();
    if (maxQueueLength == 0) {
        workQueue = new LinkedBlockingQueue<Runnable>();
    } else {
        workQueue = new LinkedBlockingQueue<Runnable>(maxQueueLength);
    }

    pool = new ThreadPoolExecutor(
            threadPoolConfiguration.getCorePoolSize(),
            threadPoolConfiguration.getMaximumPoolSize(),
            threadPoolConfiguration.getKeepAliveTime(),
            TimeUnit.valueOf(threadPoolConfiguration.getTimeUnit()),
            workQueue,
            // Default thread factory creates normal-priority,
            // non-daemon threads.
            Executors.defaultThreadFactory(),
            // Run any rejected task directly in the calling thread.
            // In this way no records will be lost due to rejection;
            // however, no records will be added to the workQueue
            // while the calling thread is processing a Task, so set
            // your queue-size appropriately.
            //
            // This also means MaxThreadCount+1 tasks may run
            // concurrently. If you REALLY want a max of MaxThreadCount
            // threads don't use this.
            new ThreadPoolExecutor.CallerRunsPolicy());
}
In this class I also have a DAO that I pass into my Runnable (FooWorker), like so:
@Override
public void addTask(FooRecord record) {
    if (pool == null) {
        throw new FooException(ERROR_THREAD_POOL_CONFIGURATION_NOT_SET);
    }
    pool.execute(new FooWorker(context, calculator, dao, record));
}
FooWorker runs record (the only non-singleton) through a state machine via calculator then sends the transitions to the database via dao, like so:
public void run() {
    calculator.calculate(record);
    dao.save(record);
}
Once my main thread is done creating new tasks I try and wait to make sure all threads finished successfully:
while (pool.getActiveCount() > 0) {
    recordHandler.awaitTermination(terminationTimeout,
            terminationTimeoutUnit);
}
What I'm seeing from output logs (which are presumably unreliable due to the threading) is that getActiveCount() is returning zero too early, and the while() loop is exiting while my last threads are still printing output from calculator.
Note I've also tried calling pool.shutdown() then using awaitTermination but then the next time my job runs the pool is still shut down.
My only guess is that inside a thread, when I send data into the dao (since it's a singleton created by Spring in the main thread...), java is considering the thread inactive since (I assume) it's processing in/waiting on the main thread.
Intuitively, based only on what I'm seeing, that's my guess. But... Is that really what's happening? Is there a way to "do it right" without putting a manual incremented variable at the top of run() and a decremented at the end to track the number of threads?
If the answer is "don't pass in the dao", then wouldn't I have to "new" a DAO for every thread? My process is already a (beautiful, efficient) beast, but that would really suck.
As the JavaDoc of getActiveCount states, it's an approximate value: you should not base any major business logic decisions on this.
If you want to wait for all scheduled tasks to complete, then you should simply use
pool.shutdown();
pool.awaitTermination(terminationTimeout, terminationTimeoutUnit);
If you need to wait for a specific task to finish, you should use submit() instead of execute() and then check the Future object for completion (either using isDone() if you want to do it non-blocking or by simply calling get() which blocks until the task is done).
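As a hedged sketch of that submit-and-wait variant, reusing the FooWorker from the question (the futures list is an assumption about the surrounding class, and the enclosing method is assumed to import java.util.List/ArrayList and java.util.concurrent.Future and to handle InterruptedException and ExecutionException):

// Collect the Future for every task as it is submitted (e.g. in addTask()).
List<Future<?>> futures = new ArrayList<Future<?>>();
futures.add(pool.submit(new FooWorker(context, calculator, dao, record)));

// Once the main thread has finished creating tasks, wait for each one.
for (Future<?> f : futures) {
    f.get(); // blocks until that task is done and rethrows any exception it threw
}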
The documentation suggests that the method getActiveCount() on ThreadPoolExecutor is not an exact number:
getActiveCount
public int getActiveCount()
Returns the approximate number of threads that are actively executing tasks.
Returns: the number of threads
Personally, when I am doing multithreaded work such as this, I use a variable that I increment as I add tasks, and decrement as I grab their output.
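One thread-safe way to keep such a counter is an AtomicInteger; the following is only a sketch of that idea (decrementing in a finally block inside the task is one variant of it), and it assumes the pool and the task body come from your own code:

import java.util.concurrent.atomic.AtomicInteger;

// Tasks submitted but not yet finished.
final AtomicInteger pending = new AtomicInteger(0);

// When submitting a task:
pending.incrementAndGet();
pool.execute(new Runnable() {
    public void run() {
        try {
            // ... do the actual work ...
        } finally {
            pending.decrementAndGet(); // runs even if the work throws
        }
    }
});

// Later, wait (crudely) for everything to drain:
while (pending.get() > 0) {
    Thread.sleep(50); // the caller must handle or declare InterruptedException
}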
This simple sample code demonstrates the problem. I create an ArrayBlockingQueue, and a thread that waits for data on this queue using take(). After the loop is over, in theory both the queue and the thread can be garbage collected, but in practice I soon get an OutOfMemoryError. What is preventing this to be GC'd, and how can this be fixed?
/**
* Produces out of memory exception because the thread cannot be garbage
* collected.
*/
@Test
public void checkLeak() {
    int count = 0;
    while (true) {
        // just a simple demo, not useful code.
        final ArrayBlockingQueue<Integer> abq = new ArrayBlockingQueue<Integer>(2);
        final Thread t = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    abq.take();
                } catch (final InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
        t.start();

        // perform a GC once in a while
        if (++count % 1000 == 0) {
            System.out.println("gc");
            // this should remove all the previously created queues and threads
            // but it does not
            System.gc();
        }
    }
}
I am using Java 1.6.0.
UPDATE: I perform a GC after a few iterations, but this does not help.
Threads are top level objects. They are 'special', so they do not follow the same rules as other objects. They do not rely on references to keep them 'alive' (i.e. safe from GC). A thread will not get garbage collected until it has ended, which doesn't happen in your sample code, since the thread is blocked. Of course, now that the thread object is not garbage collected, any other object referenced by it (the queue in your case) also cannot be garbage collected.
You are creating threads indefinitely, and they all block until ArrayBlockingQueue<Integer> abq has some entry, so eventually you'll get an OutOfMemoryError.
(edit)
Each thread you create will never end, because it blocks until the abq queue has an entry.
If the thread is running, the GC isn't going to collect any object that the thread is referencing including the queue abq and the thread itself.
abq.put(0);
should save your day.
Your threads all wait on their queue's take() but you never put anything in those queues.
Your while loop is an infinite loop and it is creating new threads continuously. Although you start each thread as soon as it is created, the time a thread takes to complete its task is greater than the time it takes to create a thread.
Also, what are you doing with the abq variable by declaring it inside the while loop?
Based on your edits and other comments: System.gc() doesn't guarantee a GC cycle. Read my statement above; the speed of execution of your threads is lower than the speed of their creation.
I checked the comment for the take() method: "Retrieves and removes the head of this queue, waiting if no elements are present on this queue." I see you define the ArrayBlockingQueue, but you never add any elements to it, so all your threads are just waiting on that method; that is why you are getting an OOM.
I do not know how threads are implemented in Java, but one possible reason comes to mind why the queues and threads are not collected: The threads may be wrappers for system threads using system synchronization primitives, in which case the GC cannot automatically collect a waiting thread, since it cannot tell whether the thread is alive or not, i.e. the GC simply does not know that a thread cannot be woken.
I can't say what's the best way to fix it, since I'd need to know what you are trying to do, but you could look at java.util.concurrent to see if it has classes for doing what you need.
You start the thread, so all those new threads will be running asynchronously while the loop continues to create new ones.
Since your code is blocking, the threads are live references in the system and cannot be collected. But even if they were doing some work, the threads are unlikely to be terminating as quickly as they are created (at least in this sample), and therefore the GC cannot collect all memory and will eventually fail with an OutOfMemoryError.
Creating that many threads is not efficient. If it is not a requirement to have all those pending operations run in parallel, you may want to use a thread pool and a queue of runnables to process.
The System.gc() call does nothing because there is nothing to collect. When a thread starts, it increments the thread's reference count; not doing so would mean the thread could be terminated indeterminately. When the thread's run method completes, the thread's reference count is decremented.
while (true) {
    // just a simple demo, not useful code.
    // 0 0 - the first number is thread reference count, the second is abq ref count
    final ArrayBlockingQueue<Integer> abq = new ArrayBlockingQueue<Integer>(2);
    // 0 1
    final Thread t = new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                abq.take();
                // 2 2
            } catch (final InterruptedException e) {
                e.printStackTrace();
            }
        }
    });
    // 1 1
    t.start();
    // 2 2 (because the run calls abq.take)

    // after end of loop
    // 1 1 - each created object's reference count is decreased
}
Now, there is a potential race condition - what if the main loop terminates and does garbage collection before the thread t has a chance to do any processing, i.e. it is suspended by the OS before the abq.take statement is executed? The run method will try to access the abq object after the GC has released it, which would be bad.
To avoid the race condition, you should pass the object as a parameter to the run method. I'm not sure about Java these days, it's been a while, so I'd suggest passing the object as a constructor parameter to a class that implements Runnable. That way, there's an extra reference to abq made before the run method is called, thus ensuring the object is always valid.
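A small sketch of that shape, using a named Runnable with a constructor parameter instead of the anonymous class (the class name Consumer is illustrative):

import java.util.concurrent.ArrayBlockingQueue;

// A Runnable that receives the queue it blocks on through its constructor.
class Consumer implements Runnable {
    private final ArrayBlockingQueue<Integer> queue;

    Consumer(ArrayBlockingQueue<Integer> queue) {
        this.queue = queue;
    }

    public void run() {
        try {
            queue.take(); // the field keeps the queue strongly referenced
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

// Usage inside the loop:
// new Thread(new Consumer(abq)).start();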
I'm making a Java application with an application-logic-thread and a database-access-thread.
Both of them persist for the entire lifetime of the application and both need to be running at the same time (one talks to the server, one talks to the user; when the app is fully started, I need both of them to work).
However, on startup, I need to make sure that initially the app thread waits until the db thread is ready (currently determined by polling a custom method dbthread.isReady()).
I wouldn't mind if app thread blocks until the db thread was ready.
Thread.join() doesn't look like a solution - the db thread only exits at app shutdown.
while (!dbthread.isReady()) {} kind of works, but the empty loop consumes a lot of processor cycles.
Any other ideas? Thanks.
Use a CountDownLatch with a counter of 1.
CountDownLatch latch = new CountDownLatch(1);
Now in the app thread do-
latch.await();
In the db thread, after you are done, do -
latch.countDown();
I would really recommend that you go through a tutorial like Sun's Java Concurrency before you commence in the magical world of multithreading.
There are also a number of good books out there (google for "Concurrent Programming in Java", "Java Concurrency in Practice").
To get to your answer:
In your code that must wait for the dbThread, you must have something like this:
//do some work
synchronized (objectYouNeedToLockOn) {
    while (!dbThread.isReady()) {
        objectYouNeedToLockOn.wait();
    }
}
//continue with work after dbThread is ready
In your dbThread's method, you would need to do something like this:
//do db work
synchronized (objectYouNeedToLockOn) {
    //set ready flag to true (so isReady returns true)
    ready = true;
    objectYouNeedToLockOn.notifyAll();
}
//end thread run method here
The objectYouNeedToLockOn I'm using in these examples is preferably the object that you need to manipulate concurrently from each thread, or you could create a separate Object for that purpose (I would not recommend making the methods themselves synchronized):
private final Object lock = new Object();
//now use lock in your synchronized blocks
To further your understanding:
There are other (sometimes better) ways to do the above, e.g. with CountdownLatches, etc. Since Java 5 there are a lot of nifty concurrency classes in the java.util.concurrent package and sub-packages. You really need to find material online to get to know concurrency, or get a good book.
Requirement ::
To wait execution of the next thread until the previous one has finished.
The next thread must not start until the previous thread stops, irrespective of time consumption.
It must be simple and easy to use.
Answer ::
@See the java.util.concurrent.Future.get() doc.
future.get() waits if necessary for the computation to complete, and then retrieves its result.
Job done!! See the example below.
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.junit.Test;

public class ThreadTest {

    public void print(String m) {
        System.out.println(m);
    }

    public class One implements Callable<Integer> {

        public Integer call() throws Exception {
            print("One...");
            Thread.sleep(6000);
            print("One!!");
            return 100;
        }
    }

    public class Two implements Callable<String> {

        public String call() throws Exception {
            print("Two...");
            Thread.sleep(1000);
            print("Two!!");
            return "Done";
        }
    }

    public class Three implements Callable<Boolean> {

        public Boolean call() throws Exception {
            print("Three...");
            Thread.sleep(2000);
            print("Three!!");
            return true;
        }
    }

    /**
     * @See java.util.concurrent.Future.get() doc
     * <p>
     * Waits if necessary for the computation to complete, and then
     * retrieves its result.
     */
    @Test
    public void poolRun() throws InterruptedException, ExecutionException {
        int n = 3;
        // Build a fixed-size thread pool
        ExecutorService pool = Executors.newFixedThreadPool(n);
        // Wait until One finishes its task.
        pool.submit(new One()).get();
        // Wait until Two finishes its task.
        pool.submit(new Two()).get();
        // Wait until Three finishes its task.
        pool.submit(new Three()).get();
        pool.shutdown();
    }
}
Output of this program ::
One...
One!!
Two...
Two!!
Three...
Three!!
You can see that One takes 6 seconds to finish its task, which is longer than the other threads, so Future.get() waits until that task is done.
If you don't use future.get(), it doesn't wait for the task to finish, and the tasks complete based on their own time consumption.
Good Luck with Java concurrency.
A lot of correct answers, but without a simple example. Here is an easy and simple way to use CountDownLatch:
//inside your current thread... let's call it Thread_Main
//1
final CountDownLatch latch = new CountDownLatch(1);

//2
// launch thread#2
new Thread(new Runnable() {
    @Override
    public void run() {
        //4
        //do your logic here in thread#2
        //then release the lock
        //5
        latch.countDown();
    }
}).start();

try {
    //3 this method blocks the current thread on the latch until it's released later from thread#2
    latch.await();
} catch (InterruptedException e) {
    e.printStackTrace();
}

//6
// You reach here after latch.countDown() is called from thread#2
public class ThreadEvent {

    private final Object lock = new Object();

    public void signal() {
        synchronized (lock) {
            lock.notify();
        }
    }

    public void await() throws InterruptedException {
        synchronized (lock) {
            lock.wait();
        }
    }
}
Use this class like this then:
Create a ThreadEvent:
ThreadEvent resultsReady = new ThreadEvent();
In the method this is waiting for results:
resultsReady.await();
And in the method that is creating the results after all the results have been created:
resultsReady.signal();
EDIT:
(Sorry for editing this post, but this code has a very bad race condition and I don't have enough reputation to comment)
You can only use this if you are 100% sure that signal() is called after await(). This is the one big reason why you cannot use Java objects the way you would use, e.g., Windows events.
If the code runs in this order:
Thread 1: resultsReady.signal();
Thread 2: resultsReady.await();
then thread 2 will wait forever. This is because Object.notify() only wakes up one of the currently waiting threads; a thread that starts waiting later is not awoken. This is very different from how I expect events to work, where an event is signalled until a) waited for or b) explicitly reset.
Note: Most of the time, you should use notifyAll(), but this is not relevant to the "wait forever" problem above.
Try the CountDownLatch class from the java.util.concurrent package, which provides higher-level synchronization mechanisms that are far less error prone than any of the low-level stuff.
You could do it using an Exchanger object shared between the two threads:
private Exchanger<String> myDataExchanger = new Exchanger<String>();
// Wait for thread's output
String data;
try {
    data = myDataExchanger.exchange("");
} catch (InterruptedException e1) {
    // Handle Exceptions
}
And in the second thread:
try {
    myDataExchanger.exchange(data);
} catch (InterruptedException e) {
}
As others have said, do not take this light-hearted and just copy-paste code. Do some reading first.
The Future interface from the java.util.concurrent package is designed to provide access to results calculated in another thread.
Take a look at FutureTask and ExecutorService for a ready-made way of doing this kind of thing.
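For instance, a minimal hedged sketch of FutureTask used with a plain Thread; the Callable body is just a stand-in for the real computation, and the enclosing method is assumed to handle or declare InterruptedException and ExecutionException:

import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

FutureTask<Integer> task = new FutureTask<Integer>(new Callable<Integer>() {
    public Integer call() throws Exception {
        return 42; // stand-in for the value produced in the other thread
    }
});

new Thread(task).start();    // FutureTask is itself a Runnable
Integer result = task.get(); // blocks until call() has returned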
I'd strongly recommend reading Java Concurrency In Practice to anyone interested in concurrency and multithreading. It obviously concentrates on Java, but there is plenty of meat for anybody working in other languages too.
If you want something quick and dirty, you can just add a Thread.sleep() call within your while loop. If the database library is something you can't change, then there is really no other easy solution. Polling the database until it is ready, with a wait period, won't kill the performance.
while (!dbthread.isReady()) {
    Thread.sleep(250);
}
Hardly something that you could call elegant code, but gets the work done.
In case you can modify the database code, then using a mutex as proposed in other answers is better.
This applies to all languages:
You want to have an event/listener model. You create a listener to wait for a particular event. The event would be created (or signaled) in your worker thread. This will block the thread until the signal is received instead of constantly polling to see if a condition is met, like the solution you currently have.
Your situation is one of the most common causes of deadlocks: make sure you signal the other thread regardless of errors that may have occurred. For example, if your application throws an exception and never calls the method that signals the other thread that things have completed, the other thread will never 'wake up'.
I suggest that you look into the concepts of using events and event handlers to better understand this paradigm before implementing your case.
Alternatively, you can use a blocking function call using a mutex, which will cause the thread to wait for the resource to be free. To do this you need good thread synchronization, such as:
Thread-A Locks lock-a
Run thread-B
Thread-B waits for lock-a
Thread-A unlocks lock-a (causing Thread-B to continue)
Thread-A waits for lock-b
Thread-B completes and unlocks lock-b
You could read from a blocking queue in one thread and write to it in another thread.
Since
join() has been ruled out,
you are already using CountDownLatch, and
Future.get() is already proposed by other experts,
you can consider other alternatives:
invokeAll from ExecutorService (see the sketch after this list)
invokeAll(Collection<? extends Callable<T>> tasks)
Executes the given tasks, returning a list of Futures holding their status and results when all complete.
ForkJoinPool or newWorkStealingPool from Executors (since the Java 8 release)
Creates a work-stealing thread pool using all available processors as its target parallelism level.
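A small, self-contained sketch of the invokeAll alternative; the Callable bodies just sleep to simulate work, and nothing here comes from the original question:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InvokeAllDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        List<Callable<String>> tasks = new ArrayList<Callable<String>>();
        for (int i = 1; i <= 3; i++) {
            final int id = i;
            tasks.add(new Callable<String>() {
                public String call() throws Exception {
                    Thread.sleep(500); // pretend to do some work
                    return "task " + id + " done";
                }
            });
        }

        // invokeAll() blocks until every task has completed.
        List<Future<String>> results = pool.invokeAll(tasks);
        for (Future<String> f : results) {
            System.out.println(f.get()); // the tasks are already finished here
        }
        pool.shutdown();
    }
}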
Can this idea apply? If you use CountDownLatches or Semaphores it works perfectly, but if you are looking for the easiest answer for an interview, I think this can apply.