I have a situation where I have 2 blocking queues. The first I insert some tasks that I execute. When each task completes, it adds a task to the second queue, where they are executed.
So my first queue is easy: I just check to make sure it's not empty and execute, else I interrupt():
public void run() {
    try {
        if (!taskQueue1.isEmpty()) {
            SomeTask task = taskQueue1.poll();
            doTask(task);
            taskQueue2.add(task);
        }
        else {
            Thread.currentThread().interrupt();
        }
    }
    catch (InterruptedException ex) {
        ex.printStackTrace();
    }
}
For the second one I do the following, which, as you can tell, doesn't work:
public void run() {
    try {
        SomeTask2 task2 = taskQueue2.take();
        doTask(task2);
    }
    catch (InterruptedException ex) {
    }
    Thread.currentThread().interrupt();
}
How would you solve it so that the second thread doesn't block forever on take(), yet finishes only when it knows there are no more items to be added? It would be good if the second thread could perhaps see the first blocking queue, and if that queue was empty and the second queue was also empty, it would interrupt itself.
I could also use a Poison object, but would prefer something else.
NB: This isn't the exact code, just something I wrote here:
You make it sound as though the thread processing the first queue knows that there are no more tasks coming as soon as its queue is drained. That sounds suspicious, but I'll take you at your word and propose a solution anyway.
Define an AtomicInteger visible to both threads. Initialize it to positive one.
Define the first thread's operation as follows:
Loop on Queue#poll().
If Queue#poll() returns null, call AtomicInteger#decrementAndGet() on the shared integer.
If AtomicInteger#decrementAndGet() returned zero, interrupt the second thread via Thread#interrupt(). (This handles the case where no items ever arrived.)
In either case, exit the loop.
Otherwise, process the extracted item, call AtomicInteger#incrementAndGet() on the shared integer, add the extracted item to the second thread's queue, and continue the loop.
Define the second thread's operation as follows:
Loop blocking on BlockingQueue#take().
If BlockingQueue#take() throws InterruptedException, catch the exception, call Thread.currentThread().interrupt(), and exit the loop.
Otherwise, process the extracted item.
Call AtomicInteger#decrementAndGet() on the shared integer.
If AtomicInteger#decrementAndGet() returned zero, exit the loop.
Otherwise, continue the loop.
Make sure you understand the idea before trying to write the actual code. The contract is that the second thread continues waiting on more items from its queue until the count of expected tasks reaches zero. At that point, the producing thread (the first one) will no longer push any new items into the second thread's queue, so the second thread knows that it's safe to stop servicing its queue.
The screwy case arises when no tasks ever arrive at the first thread's queue. Since the second thread only decrements and tests the count after it processes an item, if it never gets a chance to process any items, it won't ever consider stopping. We use thread interruption to handle that case, at the cost of another conditional branch in the first thread's loop termination steps. Fortunately, that branch will execute only once.
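For reference, here is a rough, self-contained sketch of that counting design. SomeTask, doTask() and doTask2() are placeholders standing in for your own types and work, queue1 is left unfilled here, and error handling is pared down; treat it as an illustration of the contract above rather than a drop-in implementation:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class TwoQueueDemo {
    // Placeholder task type and work methods, just so the sketch compiles.
    static class SomeTask { }
    static void doTask(SomeTask t)  { /* first-stage work */ }
    static void doTask2(SomeTask t) { /* second-stage work */ }

    public static void main(String[] args) {
        final AtomicInteger pending = new AtomicInteger(1);          // starts at one, per the steps above
        final BlockingQueue<SomeTask> queue1 = new LinkedBlockingQueue<SomeTask>();
        final BlockingQueue<SomeTask> queue2 = new LinkedBlockingQueue<SomeTask>();

        final Thread second = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        SomeTask task = queue2.take();
                        doTask2(task);
                        if (pending.decrementAndGet() == 0) {
                            break;                                   // producer finished and queue2 is drained
                        }
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();              // interrupted: no items ever arrived
                }
            }
        });
        second.start();

        Thread first = new Thread(new Runnable() {
            public void run() {
                while (true) {
                    SomeTask task = queue1.poll();                   // queue1 would be filled elsewhere
                    if (task == null) {
                        if (pending.decrementAndGet() == 0) {
                            second.interrupt();                      // nothing was ever produced
                        }
                        break;
                    }
                    doTask(task);
                    pending.incrementAndGet();                       // one more item the consumer must see
                    queue2.add(task);
                }
            }
        });
        first.start();
    }
}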
There are many designs that could work here. I merely described one that introduced only one additional entity—the shared atomic integer—but even then, it's fiddly. I think that using a poison pill would be much cleaner, though I do concede that neither Queue#add() nor BlockingQueue#put() accepts null as a valid element (due to Queue#poll()'s return value contract). It would otherwise be easy to use null as a poison pill.
I can't figure out what you are actually trying to do here, but I can say that the interrupt() in your first run() method is either pointless or wrong.
If you are running the run() method in your own Thread object, then that thread is about to exit anyway, so there's no point interrupting it.
If you are running the run() method in an executor with a thread pool, then you most likely don't want to kill the thread or shut down the executor at all ... at that point. And if you do want to shutdown the executor, then you should call one of its shutdown methods.
For instance, here's a version that does what you seem to be doing, without all of the interrupt stuff and without thread creation/destruction churn.
public class TaskExecutor {
    private ExecutorService executor = new ThreadPoolExecutor(...);

    public void submitTask1(final SomeTask task) {
        executor.submit(new Runnable() {
            public void run() {
                doTask(task);
                submitTask2(task);
            }
        });
    }

    public void submitTask2(final SomeTask task) {
        executor.submit(new Runnable() {
            public void run() {
                doTask2(task);
            }
        });
    }

    public void shutdown() {
        executor.shutdown();
    }
}
If you want separate queuing for the tasks, simply create and use two different executors.
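For example, a sketch of that two-executor variant, reusing the SomeTask, doTask() and doTask2() placeholders from the code above; each single-threaded executor gives each stage its own internal queue:
public class TwoStageTaskExecutor {
    // One single-threaded executor (and hence one queue) per stage.
    private final ExecutorService stage1 = Executors.newSingleThreadExecutor();
    private final ExecutorService stage2 = Executors.newSingleThreadExecutor();

    public void submitTask1(final SomeTask task) {
        stage1.submit(new Runnable() {
            public void run() {
                doTask(task);
                submitTask2(task);
            }
        });
    }

    public void submitTask2(final SomeTask task) {
        stage2.submit(new Runnable() {
            public void run() {
                doTask2(task);
            }
        });
    }

    public void shutdown() {
        stage1.shutdown();
        stage2.shutdown();
    }
}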
During the course of my program execution, a number of threads are started. The number of threads varies depending on user-defined settings, but they are all executing the same method with different variables.
In some situations, a cleanup is required mid-execution. Part of this is stopping all the threads; I don't want them to stop immediately, though, so I just set a variable that they check for, which terminates them. The problem is that it can be up to half a second before a thread stops. However, I need to be sure that all threads have stopped before the cleanup can continue. The cleanup is executed from another thread, so technically I need that thread to wait for the other threads to finish.
I have thought of several ways of doing this, but they all seem to be overly complex. I was hoping there would be some method that can wait for a group of threads to complete. Does anything like this exist?
Just join them one by one:
for (Thread thread : threads) {
    thread.join();
}
(You'll need to do something with InterruptedException, and you may well want to provide a time-out in case things go wrong, but that's the basic idea...)
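For example, one way to handle the interruption and bound the total wait might look like this (the 10-second deadline is an arbitrary choice for illustration):
long deadline = System.currentTimeMillis() + 10000;
for (Thread thread : threads) {
    long remaining = deadline - System.currentTimeMillis();
    if (remaining <= 0) {
        break;                                   // gave up waiting; handle however suits you
    }
    try {
        thread.join(remaining);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();      // preserve the interrupt status and stop waiting
        break;
    }
}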
If you are using Java 1.5 or higher, you can try CyclicBarrier. You can pass the cleanup operation as its constructor parameter, and just call barrier.await() on all threads when there is a need for cleanup.
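A minimal sketch of that idea, assuming the number of worker threads (numberOfWorkers here) is known when the barrier is created:
final CyclicBarrier barrier = new CyclicBarrier(numberOfWorkers, new Runnable() {
    public void run() {
        // cleanup code; runs once, in the last thread to arrive at the barrier
    }
});

// In each worker thread, when it notices the "stop" variable and is about to finish:
try {
    barrier.await();
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
} catch (BrokenBarrierException e) {
    // another party was interrupted or timed out; handle as appropriate
}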
Have you seen the Executor classes in java.util.concurrent? You could run your threads through an ExecutorService. It gives you a single object you can use to cancel the threads or wait for them to complete.
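For instance, a sketch along those lines (numberOfWorkers and workItems stand in for your own thread count and tasks, and the 30-second time-out is arbitrary):
ExecutorService executor = Executors.newFixedThreadPool(numberOfWorkers);
for (Runnable work : workItems) {
    executor.submit(work);
}
executor.shutdown();                                 // stop accepting new tasks
try {
    if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
        executor.shutdownNow();                      // timed out: interrupt whatever is still running
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}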
Define a utility method (or methods) yourself:
public static void waitFor(Collection<? extends Thread> c) throws InterruptedException {
    for (Thread t : c) t.join();
}
Or, if you have an array:
public static void waitFor(Thread[] ts) throws InterruptedException {
    waitFor(Arrays.asList(ts));
}
Alternatively you could look at using a CyclicBarrier in the java.util.concurrent library to implement an arbitrary rendezvous point between multiple threads.
If you control the creation of the Threads (submission to an ExecutorService) then it appears you can use an ExecutorCompletionService
see "ExecutorCompletionService? Why do we need one if we have invokeAll?" for various answers there.
If you don't control thread creation, here is an approach that allows you to join the threads "one by one as they finish" (and know which one finishes first, etc.), inspired by the ruby ThreadWait class.
Basically, by spinning up "watcher threads" that report when the other threads terminate, you can know when the "next" thread out of many terminates.
You'd use it something like this:
JoinThreads join = new JoinThreads(threads);
for (int i = 0; i < threads.size(); i++) {
    Thread justJoined = join.joinNextThread();
    System.out.println("Done with a thread, just joined=" + justJoined);
}
And the source:
public static class JoinThreads {
    java.util.concurrent.LinkedBlockingQueue<Thread> doneThreads =
        new LinkedBlockingQueue<Thread>();

    public JoinThreads(List<Thread> threads) {
        for (Thread t : threads) {
            final Thread joinThis = t;
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        joinThis.join();
                        doneThreads.add(joinThis);
                    }
                    catch (InterruptedException e) {
                        // "should" never get here, since we control this thread and don't call interrupt on it
                    }
                }
            }).start();
        }
    }

    Thread joinNextThread() throws InterruptedException {
        return doneThreads.take();
    }
}
The nice part of this is that it works with generic Java threads; without modification, any thread can be joined. The caveat is that it requires some extra thread creation. Also, this particular implementation "leaves threads behind" if you don't call joinNextThread() the full number of times, and it doesn't have a "close" method, etc. Comment here if you'd like a more polished version created. You could also use this same type of pattern with Futures instead of Thread objects, etc.
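For completeness, a sketch of that Futures-based variant using an ExecutorCompletionService, assuming tasks is a List<Callable<Void>> of your work items and a pool of four threads is acceptable:
ExecutorService executor = Executors.newFixedThreadPool(4);
CompletionService<Void> completion = new ExecutorCompletionService<Void>(executor);
for (Callable<Void> task : tasks) {
    completion.submit(task);
}
try {
    for (int i = 0; i < tasks.size(); i++) {
        Future<Void> justFinished = completion.take();   // blocks until the next task completes
        System.out.println("Done with a task, just finished=" + justFinished);
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
executor.shutdown();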
I have this code:
public class Nit extends Thread {
    public void run() {
        try {
            synchronized (this) {
                this.wait();
            }
            System.out.println("AAA");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        Nit n = new Nit();
        n.start();
        synchronized (n) {
            n.notify();
        }
    }
}
When I run it from cmd it never exits, as if it were in an infinite loop, and I don't understand why. The only thing I can think of is that Nit n is still waiting, but I don't get why.
You are observing a race condition. You notify before the wait happens. Therefore the wait sits there and waits forever.
If you invoked this code often enough, you might see it pass sometimes, when the new thread happens to advance faster than the main thread. One way to make the example work: try adding a call to Thread.sleep(1000) or so before calling notify(). Alternatively, even a println() call on the main thread (before the notify()) might change the timing enough.
Beyond that: such subtleties are the main reason why you should avoid using "low level" primitives such as wait/notify. Instead, use the powerful abstractions (like queues) that the standard APIs have to offer.
The notify method tells the scheduler to pick a thread to notify, choosing from only those threads that are currently waiting on the same lock that notify was called on.
In this case the n thread doesn't start waiting until after the notification has already happened, so nothing ever wakes the thread up from waiting. You may have assumed that waiting threads will see notifications made before they started waiting, or that the JVM would have to give the n thread CPU time before the main thread proceeds past the call to start, but those assumptions aren't valid.
Introduce a condition flag as an instance member of Nit:
public class Nit extends Thread {
    boolean notified = false;
and change Nit's run method to check it:
synchronized (this) {
    while (!notified) {
        wait();
    }
}
Then add a line to the main method so that the main thread can set the flag:
synchronized (n) {
    n.notified = true;
    n.notify();
}
This way the notify can still happen before n starts waiting, but in that case n will check the flag, see it's true already, and skip waiting.
See Oracle's guarded blocks tutorial:
Note: Always invoke wait inside a loop that tests for the condition being waited for.
Also the API documentation (see Thread.join) discourages the practice of locking on thread objects.
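Putting those pieces together, the whole example would look something like this (kept as a Thread subclass and still locking on the thread object only to stay close to the original code):
public class Nit extends Thread {
    boolean notified = false;

    public void run() {
        try {
            synchronized (this) {
                while (!notified) {
                    this.wait();                 // the loop guards against missed and spurious wake-ups
                }
            }
            System.out.println("AAA");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        Nit n = new Nit();
        n.start();
        synchronized (n) {
            n.notified = true;                   // set the condition before notifying
            n.notify();
        }
    }
}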
I have a couple of objects which implement the Runnable interface, and I execute them in separate threads. Essentially, in the run() method of the Runnable object I do some network activities, which include calls to methods that block while waiting for input (from the network). Note that I do not have any deliberate pauses, i.e. Thread.sleep() calls. Any pause is caused by calls to methods that may block.
These Runnable objects are under the control of a GUI, and one function I wish to provide to the user is the ability to end the thread executing my Runnable objects; however, I'm not able to understand how to do this.
One obvious means is to call the interrupt() method of the Runnable object's Thread, but how is this call propagated through to the Runnable object? For example, I cannot use try-catch: catching InterruptedException in the Runnable object does not seem to be allowed; my IDE (NetBeans) complains that InterruptedException is never thrown in the run() method.
My code is below, stripped for brevity.
The following lines are executed in the GUI code in the GUI thread:
digiSearch = new DigiSearch(hostIP,this);
digiSearchThread = new Thread(digiSearch);
digiSearchThread.start();
The following is my Runnable class and where I would like/need to capture the interruption of its executing thread.
public class DigiSearch implements Runnable {
    private String networkAdapterIP;
    private DigiList digiList;

    public DigiSearch(String ipAddress, DigiList digiList) {
        networkAdapterIP = ipAddress;
        this.digiList = digiList;
    }

    @Override
    public void run() {
        try {
            /*
             * Do some network and other activities here which includes calling some blocking methods.
             * However I would like to interrupt the run() method if possible by calling Thread.interrupt()
             */
        } catch (Exception ex) {
            digiList.digiListException(ex);
        } catch (Throwable t) {
            System.out.println("Search thread interrupted");
        }
    }
}
Could someone enlighten me on how I can achieve this or perhaps resolve my misunderstanding of interrupting threads?
Do you have any blocking methods that throw IOException? If so, this is probably your InterruptedException placeholder. Many of these methods were written before InterruptedException was introduced, and so, rather than update the interface and break legacy code, they wrap the InterruptedException in an IOException.
If this is not the case, you are kinda stuck. For example, if you write a Runnable that runs an infinite loop that just does work and never sleeps, interrupting its thread will not result in an InterruptedException. It is the responsibility of the Runnable to regularly check Thread.interrupted().
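A sketch of what that cooperative check could look like in run(); doOneUnitOfWork() is a hypothetical blocking step assumed to declare InterruptedException, and isInterrupted() is used rather than Thread.interrupted() so the check itself doesn't clear the flag:
public void run() {
    while (!Thread.currentThread().isInterrupted()) {
        try {
            doOneUnitOfWork();                       // hypothetical method doing one chunk of the network work
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();      // restore the flag so the loop condition sees it and exits
        }
    }
    System.out.println("Search thread interrupted");
}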
A couple of points to note here:
1) While I agree it is useful to have a feature for the user to stop execution of a thread, I recommend thinking about the action that the thread is already doing. Is it possible to roll back the action? Is it possible to ignore the action and just stop execution?
2) Thread.stop() and Thread.destroy() etc are deprecated methods (http://docs.oracle.com/javase/6/docs/api/)
So how does one normally interrupt thread execution? Enter volatile state variables.
public class MyClass implements Runnable {
    private volatile boolean isAlive = true;

    /**
     * Request thread stop by calling requestThreadStop() externally.
     */
    public void requestThreadStop() {
        isAlive = false;
    }

    @Override
    public void run() {
        while (isAlive) {
            // Do all your thread work
            // if isAlive is modified, the next iteration will not happen
        }
    }
}
For many use cases, the above implementation works. However, if the work inside the run() method loop is only a single iteration and can block for a significant amount of time, the user has to wait until the operation completes.
Is there a way to silently discard the execution of a thread almost immediately once the user requests termination from the GUI? Maybe. You will have to explore using thread pools for that. Using an ExecutorService, you get hooks via the shutdown() and shutdownNow() methods.
To avoid repetition, you can find more about this feature of thread pools in this previous Stack Overflow post: How to stop the execution of Executor ThreadPool in java?
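As a rough sketch of how that could look with the DigiSearch example from the question (the executor and Future names here are purely illustrative):
ExecutorService executor = Executors.newSingleThreadExecutor();
Future<?> searchFuture = executor.submit(new DigiSearch(hostIP, this));

// ... later, when the user asks to stop:
searchFuture.cancel(true);   // interrupts the running task, if it is at an interruptible point
executor.shutdownNow();      // interrupts anything still running and discards queued tasks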
I'm loosely following a tutorial on Java NIO to create my first multi-threading, networking Java application. The tutorial is basically about creating an echo-server and a client, but at the moment I'm just trying to get as far as a server receiving messages from the clients and logging them to the console. By searching the tutorial page for "EchoServer" you can see the class that I base most of the relevant code on.
My problem is (at least I think it is) that I can't find a way to initialize the queue of messages to be processed so that it can be used as I want to.
The application is running on two threads: a server thread, which listens for connections and socket data, and a worker thread which processes data received by the server thread. When the server thread has received a message, it calls processData(byte[] data) on the worker, where the data is added to a queue:
1. public void processData(byte[] data) {
2. synchronized(queue) {
3. queue.add(new String(data));
4. queue.notify();
5. }
6. }
In the worker thread's run() method, I have the following code:
7. while (true) {
8. String msg;
9.
10. synchronized (queue) {
11. while (queue.isEmpty()) {
12. try {
13. queue.wait();
14. } catch (InterruptedException e) { }
15. }
16. msg = queue.poll();
17. }
18.
19. System.out.println("Processed message: " + msg);
20. }
I have verified in the debugger that the worker thread gets to line 13 but doesn't proceed to line 16 when the server starts. I take that as a sign of a successful wait. I have also verified that the server thread gets to line 4 and calls notify() on the queue. However, the worker thread doesn't seem to wake up.
In the javadoc for wait(), it is stated that
The current thread must own this object's monitor.
Given my inexperience with threads I am not exactly certain what that means, but I have tried instantiating the queue from the worker thread with no success.
Why does my thread not wake up? How do I wake it up correctly?
Update:
As #Fly suggested, I added some log calls to print out System.identityHashCode(queue) and sure enough the queues were different instances.
This is the entire Worker class:
public class Worker implements Runnable {
    Queue<String> queue = new LinkedList<String>();

    public void processData(byte[] data) { ... }

    @Override
    public void run() { ... }
}
The worker is instantiated in the main method and passed to the server as follows:
public static void main(String[] args)
{
    Worker w = new Worker();
    // Give names to threads for debugging purposes
    new Thread(w, "WorkerThread").start();
    new Thread(new Server(w), "ServerThread").start();
}
The server saves the Worker instance to a private field and calls processData() on that field. Why do I not get the same queue?
Update 2:
The entire code for the server and worker threads is now available here.
I've placed the code from both files in the same paste, so if you want to compile and run the code yourself, you'll have to split them up again. Also, there's a bunch of calls to Log.d(), Log.i(), Log.w() and Log.e() - those are just simple logging routines that construct a log message with some extra information (timestamp and such) and output to System.out and System.err.
I'm going to guess that you are getting two different queue objects, because you are creating a whole new Worker instance. You didn't post the code that starts the Worker, but assuming that it also instantiates and starts the Server, then the problem is on the line where you assign this.worker = new Worker(); instead of assigning it to the Worker parameter.
public Server(Worker worker) {
    this.clients = new ArrayList<ClientHandle>();
    this.worker = new Worker(); // <------ THIS SHOULD BE this.worker = worker;
    try {
        this.start();
    } catch (IOException e) {
        Log.e("An error occurred when trying to start the server.", e,
                this.getClass());
    }
}
The thread for the Worker is probably using the worker instance passed to the Server constructor, so the Server needs to assign its own worker reference to that same Worker object.
You might want to use LinkedBlockingQueue instead, it internally handles the multithreading part, and you can focus more on logic. For example :
// a shared instance somewhere in your code
LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<String>();
in one of your thread
public void processData(byte[] data) {
    queue.offer(new String(data));
}
and in your other thread
while (running) { // private class member, set to false to exit loop
    String msg = queue.poll(500, TimeUnit.MILLISECONDS);
    if (msg == null) {
        // queue was empty
        Thread.yield();
    } else {
        System.out.println("Processed message: " + msg);
    }
}
Note: for the sake of completeness, the method poll throws an InterruptedException that you may handle as you see fit. In this case, the while loop could be surrounded by the try...catch so as to exit if the thread is interrupted.
I'm assuming that queue is an instance of some class that implements the Queue interface, and that (therefore) the poll() method doesn't block.
In this case, you simply need to instantiate a single queue object that can be shared by the two threads. The following will do the trick:
Queue<String> queue = new LinkedList<String>();
The LinkedList class is not thread-safe, but provided that you always access and update the queue instance in a synchronized(queue) block, this will take care of thread-safety.
I think that the rest of the code is correct. You appear to be doing the wait / notify correctly. The worker thread should get and print the message.
If this isn't working, then the first thing to check is whether the two threads are using the same queue object. The second thing to check is whether processData is actually being called. A third possibility is that some other code is adding or removing queue entries, and doing it the wrong way.
notify() calls are lost if there is no thread waiting when notify() is called. So if one thread calls notify() and another thread only calls wait() afterwards, the waiting thread will block forever.
You want to use a semaphore instead. Unlike condition variables, release()/increment() calls are not lost on semaphores.
Start the semaphore's count at zero. When you add to the queue increase it. When you take from the queue decrease it. You will not get lost wake-up calls this way.
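A sketch of that approach applied to the queue from the question (the class and method names are illustrative; the permit count simply mirrors the number of queued messages):
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.Semaphore;

public class SemaphoreQueue {
    private final Queue<String> queue = new LinkedList<String>();
    private final Semaphore available = new Semaphore(0);

    // Producer side (the server thread's processData):
    public void processData(byte[] data) {
        synchronized (queue) {
            queue.add(new String(data));
        }
        available.release();                     // remembered even if nobody is waiting yet
    }

    // Consumer side (the worker thread's loop):
    public void consumeLoop() throws InterruptedException {
        while (true) {
            available.acquire();                 // blocks until at least one item has been added
            String msg;
            synchronized (queue) {
                msg = queue.poll();
            }
            System.out.println("Processed message: " + msg);
        }
    }
}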
Update
To clear up some confusion regarding condition variables and semaphores.
There are two differences between condition variables and semaphores.
Condition variables, unlike semaphores, are associated with a lock. You must acquire the lock before you call wait() and notify(). Semaphores do not have this restriction. Also, wait() calls release the lock.
notify() calls are lost on condition variables, meaning, if you call notify() and no thread is sleeping with a call to wait(), then the notify() is lost. This is not the case with semaphores. The ordering of acquire() and release() calls on semaphores does not matter because the semaphore maintains a count. This is why they are sometimes called counting semaphores.
In the javadoc for wait(), it is stated that
The current thread must own this object's monitor.
Given my inexperience with threads I am not exactly certain what that
means, but I have tried instantiating the queue from the worker thread
with no success.
They use really bizarre and confusing terminology. As a general rule of thumb, "object's monitor" in Java speak means "object's lock". Every object in Java has, inside it, a lock and one condition variable (wait()/notify()). So what that line means is: before you call wait() or notify() on an object (in your case the queue object), you must acquire the lock with synchronized(object){} first. Being "inside" the monitor in Java speak means holding the lock via synchronized(). The terminology has been adopted from research papers and applied to Java concepts, so it is a bit confusing since these words mean something slightly different from what they originally meant.
The code seems to be correct.
Do both threads use the same queue object? You can check this by object id in a debugger.
Does changing notify() to notifyAll() help? There could be another thread that invoked wait() on the queue.
OK, after some more hours of pointlessly looking around the net, I decided to just screw around with the code for a while and see what I could get working. This worked:
private static BlockingQueue<String> queue;

private BlockingQueue<String> getQueue() {
    if (queue == null) {
        queue = new LinkedBlockingQueue<String>();
    }
    return queue;
}
As Yanick Rochon pointed out the code could be simplified slightly by using a BlockingQueue instead of an ordinary Queue, but the change that made the difference was that I implemented the Singleton pattern.
As this solves my immediate problem to get the app working, I'll call this the answer. Large amounts of kudos should go to #Fly and others for pointing out that the Queue instances might not be the same - without that I would never have figured this out. However, I'm still very curious on why I have to do it this way, so I will ask a new question about that in a moment.
I have a ThreadPoolExecutor that seems to be lying to me when I call getActiveCount(). I haven't done a lot of multithreaded programming however, so perhaps I'm doing something incorrectly.
Here's my TPE
@Override
public void afterPropertiesSet() throws Exception {
    BlockingQueue<Runnable> workQueue;
    int maxQueueLength = threadPoolConfiguration.getMaximumQueueLength();
    if (maxQueueLength == 0) {
        workQueue = new LinkedBlockingQueue<Runnable>();
    } else {
        workQueue = new LinkedBlockingQueue<Runnable>(maxQueueLength);
    }
    pool = new ThreadPoolExecutor(
            threadPoolConfiguration.getCorePoolSize(),
            threadPoolConfiguration.getMaximumPoolSize(),
            threadPoolConfiguration.getKeepAliveTime(),
            TimeUnit.valueOf(threadPoolConfiguration.getTimeUnit()),
            workQueue,
            // Default thread factory creates normal-priority,
            // non-daemon threads.
            Executors.defaultThreadFactory(),
            // Run any rejected task directly in the calling thread.
            // In this way no records will be lost due to rejection
            // however, no records will be added to the workQueue
            // while the calling thread is processing a Task, so set
            // your queue-size appropriately.
            //
            // This also means MaxThreadCount+1 tasks may run
            // concurrently. If you REALLY want a max of MaxThreadCount
            // threads don't use this.
            new ThreadPoolExecutor.CallerRunsPolicy());
}
In this class I also have a DAO that I pass into my Runnable (FooWorker), like so:
@Override
public void addTask(FooRecord record) {
    if (pool == null) {
        throw new FooException(ERROR_THREAD_POOL_CONFIGURATION_NOT_SET);
    }
    pool.execute(new FooWorker(context, calculator, dao, record));
}
FooWorker runs record (the only non-singleton) through a state machine via calculator then sends the transitions to the database via dao, like so:
public void run() {
    calculator.calculate(record);
    dao.save(record);
}
Once my main thread is done creating new tasks I try and wait to make sure all threads finished successfully:
while (pool.getActiveCount() > 0) {
    recordHandler.awaitTermination(terminationTimeout,
            terminationTimeoutUnit);
}
What I'm seeing from output logs (which are presumably unreliable due to the threading) is that getActiveCount() is returning zero too early, and the while() loop is exiting while my last threads are still printing output from calculator.
Note that I've also tried calling pool.shutdown() and then using awaitTermination, but the next time my job runs, the pool is still shut down.
My only guess is that inside a thread, when I send data into the dao (since it's a singleton created by Spring in the main thread...), Java is considering the thread inactive since (I assume) it's processing in/waiting on the main thread.
Intuitively, based only on what I'm seeing, that's my guess. But... Is that really what's happening? Is there a way to "do it right" without putting a manually incremented variable at the top of run() and a decrement at the end to track the number of threads?
If the answer is "don't pass in the dao", then wouldn't I have to "new" a DAO for every thread? My process is already a (beautiful, efficient) beast, but that would really suck.
As the JavaDoc of getActiveCount states, it's an approximate value: you should not base any major business logic decisions on this.
If you want to wait for all scheduled tasks to complete, then you should simply use
pool.shutdown();
pool.awaitTermination(terminationTimeout, terminationTimeoutUnit);
If you need to wait for a specific task to finish, you should use submit() instead of execute() and then check the Future object for completion (either using isDone() if you want to do it non-blocking or by simply calling get() which blocks until the task is done).
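For example, a sketch of the submit()/Future variant of addTask(), reusing the names from the question:
Future<?> future = pool.submit(new FooWorker(context, calculator, dao, record));

// ... later, block until that particular task has finished:
try {
    future.get();                            // returns null for a Runnable, or rethrows the task's exception
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
} catch (ExecutionException e) {
    // the task threw; e.getCause() is the original exception
}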
The documentation suggests that the method getActiveCount() on ThreadPoolExecutor is not an exact number:
getActiveCount
public int getActiveCount()
Returns the approximate number of threads that are actively executing tasks.
Returns: the number of threads
Personally, when I am doing multithreaded work such as this, I use a variable that I increment as I add tasks, and decrement as I grab their output.
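For example, a rough sketch of that counting idea with an AtomicInteger, reusing calculator, dao, record and pool from the question (record would need to be final, or effectively final, to be captured by the anonymous class; a CountDownLatch would also work if the task count is known up front):
final AtomicInteger inFlight = new AtomicInteger();

// When submitting each task:
inFlight.incrementAndGet();
pool.execute(new Runnable() {
    public void run() {
        try {
            calculator.calculate(record);
            dao.save(record);
        } finally {
            inFlight.decrementAndGet();      // always decrement, even if the task throws
        }
    }
});

// When waiting for everything submitted so far to finish:
while (inFlight.get() > 0) {
    try {
        Thread.sleep(50);                    // simple polling wait, kept deliberately basic
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
}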