I am using an ExecutorService for a connection task as below:
ExecutorService executor = Executors.newSingleThreadExecutor();
Future<ApplicationConnection> future = (Future<ApplicationConnection>) executor.submit(new ConnectThread(crf, connoptions));
connection = future.get(300000, TimeUnit.SECONDS);
executor.shutdownNow();
The call() method calls a .connect() method (proprietary API). This connect method spawns various thread pools etc. My concern is that if the future times out and kills the executor, will the threads that may already have been spawned by calling the .connect() method in the future also end? I know that killing a thread will also kill any child threads, but does this follow the same logic?
You are right in your assumption: if the Future times out, some hanging threads will remain. Even worse, shutdownNow() will not forcibly shut down your pool thread (not to mention the proprietary API threads); it stops accepting new jobs and, at best, interrupts the running tasks. The ExecutorService thread pool will only terminate all of its threads once all running tasks finish.
What you can do is to try canceling the future and interrupting it. First handle InterruptedException inside your future:
class ConnectThread implements Callable<ApplicationConnection> {
    public ApplicationConnection call() throws Exception {
        try {
            return proprietaryApi.connect();
        } catch (InterruptedException e) {
            proprietaryApi.cleanUp();
            throw e;
        }
    }
}
Now simply run:
future.cancel(true);
However, your proprietary API might not handle InterruptedException (it may not rethrow it from connect()), and you might not have access to any cleanUp() method.
In these circumstances, just... forget about it. That Future will eventually terminate and clean up after itself, even though you are no longer waiting for it. Of course, this might lead to various scalability issues.
BTW, if the only thing you want to achieve is limiting how long a given method may run, consider TimeLimiter from Guava.
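A minimal sketch of that approach (assuming a Guava version where SimpleTimeLimiter.create(ExecutorService) is available; the 5-minute timeout is just an example):
TimeLimiter limiter = SimpleTimeLimiter.create(Executors.newSingleThreadExecutor());
try {
    ApplicationConnection connection =
            limiter.callWithTimeout(() -> proprietaryApi.connect(), 5, TimeUnit.MINUTES);
} catch (TimeoutException e) {
    // connect() took longer than 5 minutes; the worker task was cancelled with interruption
} catch (ExecutionException | InterruptedException e) {
    // connect() itself failed, or the waiting thread was interrupted
}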
As per the javadoc of shutdownNow():
Attempts to stop all actively executing tasks, halts the processing of
waiting tasks, and returns a list of the tasks that were awaiting
execution. There are no guarantees beyond best-effort attempts to stop
processing actively executing tasks. For example, typical
implementations will cancel via Thread.interrupt(), so any task that
fails to respond to interrupts may never terminate.
Related
I have an ExecutorService that runs a few threads.
What I am trying to accomplish is to execute, and then wait for all threads to terminate. To give you more background, every thread1 connects to a website.
This is what I came up with:
public static void terminateExecutor(ExecutorService taskExecutor) {
    taskExecutor.shutdown();
    try {
        taskExecutor.awaitTermination(2, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
        System.out.println("Some tasks were interrupted!"); // This gets printed
    }
}
Now, strangely enough, the main thread that uses the ExecutorService terminates, but the thread1s in it don't.
I noticed this because thread1 threw an error (the main thread at this point was already dead) telling me that it didn't find the URL specified (so I guess it's something related to connections).
Is it possible that awaitTermination doesn't terminate the thread1 because it's trying (and, it seems, retrying) to connect to an invalid link?
I cannot stop the thread1 in any other way (or at least to my knowledge I can't), because there isn't any kind of loop.
EDIT:
I get thread1 by creating a new class and feeding it to the executor.
for (....) {
    String urlToVisit = globalUrl + links.get(i);
    Thread thread1 = new MagicalThread(urlToVisit, 2).getThread();
    executor.execute(thread1);
}
terminateExecutor(executor.getExecutor());
From the Javadoc (emphasis mine):
Blocks until all tasks have completed execution after a shutdown request
You need to call shutdown() before calling awaitTermination, otherwise it does nothing meaningful.
The executor uses interruption to let the threads know it's time to quit. If your tasks are using blocking I/O then they will be blocked and can't check the interrupt flag. There is no ability for the blocked task to respond to the interruption in the way that happens with sleep or wait, where interruption causes the threads to wake up and throw an InterruptedException.
If you set a timeout on the socket then, once the socket times out, the task can check for interruption. Also you can have the task respond to interrupt by closing the socket. See https://www.javaspecialists.eu/archive/Issue056.html
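For instance, a socket configured with timeouts (the host, port, and 5-second values are only illustrative, and the calls throw IOException) gives the task regular opportunities to notice the interrupt flag:
Socket socket = new Socket();
socket.connect(new InetSocketAddress(host, port), 5_000); // give up connecting after 5 seconds
socket.setSoTimeout(5_000); // a blocked read() throws SocketTimeoutException after 5 seconds
// in the read loop, catch SocketTimeoutException and check Thread.currentThread().isInterrupted()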
Be aware that implementing this in a threadpool is more involved than in the example given in the linked article. Nothing about the executor lets the pool call methods on a task besides run. One way to do it would be to put a reference to the socket in a ThreadLocal. Then you could make a ThreadFactory for the pool to use to subclass Thread with a method that overrides the interrupt method on the thread to get the socket from the ThreadLocal and close it.
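A rough sketch of that idea (hypothetical names; here the socket is kept in a field on the custom Thread rather than a ThreadLocal, because interrupt() runs on the calling thread, not the worker):
class SocketThread extends Thread {
    private volatile Socket socket; // set by the task before doing blocking I/O

    SocketThread(Runnable r) { super(r); }

    void setSocket(Socket s) { this.socket = s; }

    @Override
    public void interrupt() {
        try {
            if (socket != null) socket.close(); // unblocks a thread stuck in read()
        } catch (IOException ignored) {
        } finally {
            super.interrupt();
        }
    }
}

ExecutorService executor = Executors.newFixedThreadPool(5, SocketThread::new);
// inside a task: ((SocketThread) Thread.currentThread()).setSocket(socket);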
When
taskExecutor.awaitTermination(2, TimeUnit.SECONDS);
returns, it doesn't guarantee that the ExecutorService has terminated. Look at its return value:
[Returns] true if this executor terminated and false if the timeout elapsed before termination
You don't check this value, but I'll bet it's returning false if the thing you're running in the ExecutorService is still running.
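A minimal adjustment to the asker's terminateExecutor that surfaces this:
taskExecutor.shutdown();
try {
    if (!taskExecutor.awaitTermination(2, TimeUnit.SECONDS)) {
        System.out.println("Timed out; some tasks are still running");
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}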
Currently, I'm making sure my tasks have finished before moving on like so:
ExecutorService pool = Executors.newFixedThreadPool(5);

public Set<Future> EnqueueWork(StreamWrapper stream) {
    Set<Future> futureObjs = new HashSet<>();
    util.setData(stream);
    Callable callable = util;
    Future future = pool.submit(callable);
    futureObjs.add(future);
    pool.shutdown();
    try {
        pool.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    Node.sendTCP(Node.getNodeByHostname(StorageTopology.getNextPeer()), Coordinator.prepareForTransport(stream));
    return futureObjs;
}
However, because of some other threading on my socket, it's possible that multiple calls are made to EnqueueWork - I'd like to make sure the calls to .submit have completed in the current thread, without shutting down the pool for subsequent threads coming in.
Is this possible?
You can check by invoking the isDone() method on all the Future objects in futureObjs; you need to make sure isDone() is called in a loop. Calling get() on each Future is another option: since get() is a blocking call, it returns only after the task has completed and the result is ready. But do you really want to keep the pool open after all the tasks are done?
I agree with one of the comments: it seems odd that your executor can be used by different threads. Usually an executor is private to an instance of some class, but anyhow.
What you can do, from the docs, is to check:
getActiveCount() - Returns the approximate number of threads that are actively executing tasks.
NOTE: This is a blocking method; it takes a lock on the workers of your thread pool and blocks until it has counted everything.
And also check:
getQueue() - Returns the task queue used by this executor. Access to the
task queue is intended primarily for debugging and monitoring.
This queue may be in active use. Retrieving the task queue
does not prevent queued tasks from executing.
If your queue is empty and the activeCount is 0, all your tasks should have finished. I say should because getActiveCount says "approximate". Looking at the impl, this is most likely because the worker internally has a flag indicating that it is locked (in use). There is in theory a slight race between executing and the worker being done and marking itself so.
A better approach would in fact be to track the futures: you would have to check the queue and that all futures are done.
However I think what you really need is to reverse your logic. Instead of the current thread trying to work out if another thread has submitted work in the meantime, you should have the other thread call isShutdown() and simply not submit a new task in that case.
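Roughly, using the names from the question's code:
// in the submitting thread: skip the submit once shutdown has been requested
if (!pool.isShutdown()) {
    futureObjs.add(pool.submit(callable));
}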
You are approaching this issue from the wrong direction. If you need to know whether or not your tasks are finished, that means you have a dependency A->B. The executor is the wrong place to enforce that dependency, just as you don't ask the engine of your car "are we there yet?".
Java offers several features to ensure that a certain state has been reached before starting a new execution path. One of them is the invokeAll method of the ExecutorService, which returns only when all the tasks that have been submitted are completed.
pool.invokeAll(listOfAllMyCallables);
// if you reach this point all callables are completed
You have already added the Futures to a set. Just add the code block below to get the status of each Future task by calling get() with a timeout.
In my example the timeout is 60 seconds; you can change it as per your requirements.
Sample code:
try {
    for (Future future : futureObjs) {
        System.out.println("future.status = " + future.get(60000, TimeUnit.MILLISECONDS));
    }
} catch (Exception err) {
    err.printStackTrace();
}
Other useful posts:
How to forcefully shutdown java ExecutorService
How to wait for completion of multiple tasks in Java?
What is the best way for a worker thread to signal that a graceful shutdown should be initiated?
I have a fixed size thread pool which works through a continuous set of tasks, each lasting no more than a few seconds. During normal operation this works well and chugs along with its workload.
The problem I am having is when an exception is thrown in one of the threads. If this happens I would like to bring the whole thing down and have been unable to get this working correctly.
Current approach
The naive approach that I have been using is to have a static method in the "Supervisor" class which shuts down the thread pool using the standard shutdown() and awaitTermination() approach. This is then called by any of the "Worker" classes if they encounter a problem. This was done rather than propagating the exception because execute() requires a Runnable and the run() method cannot throw checked exceptions.
Here is some pseudo code:
// Finds work to do and passes them on to workers
class Supervisor {
    ThreadPoolExecutor exec;

    static main() {
        exec = new FixedThreadPool(...);
        forever {
            exec.execute(new Worker(next available task));
        }
    }

    static stopThreadPool() {
        exec.shutdown();
        if (!exec.awaitTermination(timeout_value)) {
            print "Timed out waiting on terminate"
        }
    }
}

class Worker {
    run() {
        try {
            // Work goes here
        } catch () {
            Supervisor.stopThreadPool()
        }
    }
}
The effect that I am seeing is that the threads do pause for a while but then I see the timeout message and they all resume their processing. This pattern continues until I manually shut it down. If I put a call to stopThreadPool() in the main method after having broken out of the loop, the shutdown happens correctly as expected.
The approach is clearly wrong because it doesn't work, but it also feels like the design is not right.
To reiterate the question: What is the best way for a worker thread to signal that a graceful shutdown should be initiated?
Additional information
The questions I have looked at on SO have been of two types:
"How do I kill a thread in a thread pool?"
"How do I know all my threads are finished?"
That's not what I'm after. They also seem to exclusively talk about a finite set of tasks whereas I am dealing with a continuous feed.
I have read about an alternative approach using exec.submit() and Futures which puts the onus on the supervisor class to check that everything's ok but I don't know enough about it to know if it's a better design. The exception case is, well ... exceptional and so I wouldn't want to add work/complexity to the normal case unnecessarily.
(Minor side note: This is a work project and there are other people involved. I'm saying "I" in the question for simplicity.)
You are not that far from the correct solution; the problem is that you need to handle the interruption caused by the shutdown call properly. So your thread's run method should look like this:
public void run() {
    try {
        while (!Thread.currentThread().isInterrupted()) {
            doSomeWork();
        }
    } catch (Exception e) {
        myExecutor.shutdown();
    }
}
Note that I explicitly used the shutdown() without awaitTermination() because otherwise the waiting thread is the one that keeps the Executor from properly terminating, because one thread is still waiting. Perfect single-thread deadlock. ;)
The check for interruption is, by the way, the hint for how to stop a thread gracefully: get the run method to end, either by setting a running boolean to false or by interrupting; the thread will die a moment later.
To check if all of your threads have terminated (= are just about to end their run method), you can use a CountDownLatch for a simple case or the CyclicBarrier/Phaser class for more complex cases.
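A minimal sketch of the CountDownLatch case (WORKER_COUNT and doSomeWork() are placeholders, and await() throws InterruptedException that you must declare or handle):
CountDownLatch done = new CountDownLatch(WORKER_COUNT);
Runnable worker = () -> {
    try {
        while (!Thread.currentThread().isInterrupted()) {
            doSomeWork();
        }
    } finally {
        done.countDown(); // signal even if the loop exits via an exception
    }
};
// submit WORKER_COUNT copies of worker, then:
done.await(); // returns once every worker has left its run() method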
There are 2 problems here:
1. If you intend to force a shutdown on any exception in a worker, why use shutdown() and its awaitTermination() counterpart? Just force it with shutdownNow() and you should be good; shutdown() does a graceful shutdown.
2. Try to break your for loop when such a thing happens. The best way to do it is to have a try/catch in your for loop around the execute call: when an exception happens in a worker, throw an unchecked exception, catch it in the for loop, terminate the loop, and call your method to force the executor to shut down. This is a cleaner approach. Alternatively, consider installing a handler in your executor to do this, as sketched below.
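A rough sketch of the handler idea (this assumes tasks are handed in via execute(), as in the question, so the Throwable actually reaches afterExecute; with submit() it would be wrapped in the returned Future instead):
ThreadPoolExecutor exec = new ThreadPoolExecutor(3, 3, 0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>()) {
    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        if (t != null) {
            shutdownNow(); // a worker failed: drop queued tasks and interrupt the rest
        }
    }
};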
I have a thread which may get stuck and keep running forever. Thus after a certain amount of time, I would like it to stop executing, go to the finally method to do cleanup, and then die. How would I go about doing this safely? Thanks.
My first thought on how to do this was to make a child thread, have it sleep, and then do the cleanup. But the parent thread is still trying to run, and when it can't, it outputs an error.
Refactor your code into a Callable and use an ExecutorService to get a Future. Then use get with a timeout, which throws a TimeoutException if not done by then. See https://stackoverflow.com/a/2275596/53897 for a full example.
You need to set timeouts for your blocking calls. If there are no timeouts, abstract the call and time it out that way.
You could create one thread to poll the task for its completion status, and cancel it if it has exceeded some limit. The task itself would still require yet another thread. I'd do this by creating tasks which carry a staleness value: poll all tasks periodically and, if they are stale, cancel them.
Suggestion 1: If you put your code in a try block with a wait() statement, you can catch InterruptedException, which will then fall through to your finally block. Another thread will have to interrupt() your thread to cause the interruption whenever circumstances require it.
Suggestion 2: I'm only just a beginner with Java, but if the thread is getting stuck you may be able to throw a custom exception inside your try/finally block.
(1)
The best solution is to send your data with a timeout. It should look something like:
try {
    mySendingDataLibraryApi.sendData(data, timeout /*, timeUnit */);
    // some new APIs also allow configuring the time unit of the required timeout;
    // older APIs typically just use milliseconds
} catch (TimeoutException e) {
    doCleanup(); // your cleanup method
}
(2)
If this isn't applicable because the API you're using doesn't expose such configuration, the second-best solution is to use an interruptible sendData method from the API and interrupt the executing thread. This relies on such an interruptible method being provided, and I wouldn't count much on it existing if a timed method isn't provided either... Anyway, the code in the thread that executes the task would look like:
class MySendingDataRunnable implements Runnable {

    @Override
    public void run() {
        try {
            mySendingDataLibraryApi.sendDataInterruptibly(data);
        } catch (InterruptedException e) {
            doCleanup(); // your cleanup method
            // here either re-throw it wrapped in an unchecked exception,
            // or restore the interrupted state with Thread.currentThread().interrupt();
        }
    }
}
The code in the calling thread should use an ExecutorService and the Future instance returned by its Future<?> submit(Runnable task) method, in order to wait the desired time and cancel the task with the mayInterruptIfRunning argument set to true:
final ExecutorService executor = Executors.newSingleThreadExecutor();
final Future<?> future = executor.submit(new MySendingDataRunnable());
try {
    final Object noResult = future.get(60, TimeUnit.SECONDS); // no result for a Runnable
} catch (InterruptedException e) {
    // here again either re-throw or restore the interrupted state
} catch (ExecutionException e) {
    // some applicative exception has occurred and should be handled
} catch (TimeoutException e) {
    future.cancel(true); // *** here you actually cancel the task after the time is out
}
(3)
If the API you use provides neither of these features (timed / interruptible methods), you'll have to use your creativity! That one line of blocking code of yours must be blocking on some resource. Try to reach that resource and shut it down or disconnect from it, implicitly causing the task to terminate. A typical example is closing a network connection.
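For example (a sketch; the socket is a hypothetical reference you would have to expose from the sending code):
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
scheduler.schedule(() -> {
    try {
        socket.close(); // makes the blocked read/write in the worker fail immediately
    } catch (IOException ignored) {
    }
}, 60, TimeUnit.SECONDS);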
Note: the above solutions only provide a way of actually cancelling the task and freeing the thread for further tasks. The thread might still be alive, though. Killing the thread is usually not something that you do when a task is completed (or failed, for that matter). It is acceptable when you've created some thread(s) for specific task(s) which aren't supposed to be executed ever again. In such cases you use the above ExecutorService and invoke its shutdownNow() method. And even shutdownNow() only makes a best effort and typically depends on the actual task being interruptible...
Here's a detailed article (somewhat old, but still relevant).
I am using an ExecutorService (a ThreadPoolExecutor) to run (and queue) a lot of tasks. I am attempting to write some shut down code that is as graceful as possible.
ExecutorService has two ways of shutting down:
I can call ExecutorService.shutdown() and then ExecutorService.awaitTermination(...).
I can call ExecutorService.shutdownNow().
According to the JavaDoc, the shutdown command:
Initiates an orderly shutdown in which previously submitted
tasks are executed, but no new tasks will be accepted.
And the shutdownNow command:
Attempts to stop all actively executing tasks, halts the
processing of waiting tasks, and returns a list of the tasks that were
awaiting execution.
I want something in between these two options.
I want to call a command that:
a. Completes the currently active task or tasks (like shutdown).
b. Halts the processing of waiting tasks (like shutdownNow).
For example: suppose I have a ThreadPoolExecutor with 3 threads. It currently has 50 tasks in the queue with the first 3 actively running. I want to allow those 3 active tasks to complete but I do not want the remaining 47 tasks to start.
I believe I can shut down the ExecutorService this way by keeping a list of Future objects around and then calling cancel on all of them. But since tasks are being submitted to this ExecutorService from multiple threads, there would not be a clean way to do this.
I'm really hoping I'm missing something obvious or that there's a way to do it cleanly.
Thanks for any help.
I ran into this issue recently. There may be a more elegant approach, but my solution is to first call shutdown(), then pull out the BlockingQueue being used by the ThreadPoolExecutor and call clear() on it (or else drain it to another Collection for storage). Finally, calling awaitTermination() allows the thread pool to finish what's currently on its plate.
For example:
public static void shutdownPool(boolean awaitTermination) throws InterruptedException {
    // call shutdown to prevent new tasks from being submitted
    executor.shutdown();
    // get a reference to the queue
    final BlockingQueue<Runnable> blockingQueue = executor.getQueue();
    // clear the queue
    blockingQueue.clear();
    // or else copy its contents here with a while loop and remove()
    // wait for active tasks to be completed
    if (awaitTermination) {
        executor.awaitTermination(SHUTDOWN_TIMEOUT, TimeUnit.SECONDS);
    }
}
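If you want to keep the unstarted tasks rather than discard them, a minimal variation (a sketch) drains the queue instead of clearing it:
// instead of blockingQueue.clear():
List<Runnable> unstarted = new ArrayList<>();
blockingQueue.drainTo(unstarted); // thread-safe; keeps the tasks for possible re-submission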
This method would be implemented in the directing class wrapping the ThreadPoolExecutor with the reference executor.
It's important to note the following from the ThreadPoolExecutor.getQueue() javadoc:
Access to the task queue is intended primarily for debugging and
monitoring. This queue may be in active use. Retrieving the task queue
does not prevent queued tasks from executing.
This highlights the fact that additional tasks may be polled from the BlockingQueue while you drain it. However, all BlockingQueue implementations are thread-safe according to that interface's documentation, so this shouldn't cause problems.
shutdownNow() is exactly what you need. You've missed the first word, Attempts, and the entire second paragraph of its javadoc:
There are no guarantees beyond best-effort attempts to stop processing actively executing tasks. For example, typical implementations will cancel via Thread.interrupt(), so any task that fails to respond to interrupts may never terminate.
So, only tasks which are checking Thread#isInterrupted() on a regular basis (e.g. in a while (!Thread.currentThread().isInterrupted()) loop or something), will be terminated. But if you aren't checking on that in your task, it will still keep running.
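For example, a task written like this (doOneUnitOfWork() is a placeholder) stops promptly once shutdownNow() interrupts its thread:
Runnable cooperative = () -> {
    while (!Thread.currentThread().isInterrupted()) {
        doOneUnitOfWork(); // must not block indefinitely without honoring interrupts
    }
};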
You can wrap each submitted task with a little extra logic
Runnable wrapper = new Runnable() {
    public void run() {
        // skip the original task if shutdown has already been requested
        if (executorService.isShutdown())
            throw new Error("shutdown");
        task.run();
    }
};
executorService.submit(wrapper);
The overhead of the extra check is negligible. After the executor is shut down, the wrappers will still be executed, but the original tasks won't.