In my Callable code I use signalling to notify another thread of a task's various ending behaviours. The Callable objects are queued up as FutureTasks in an Executor. They may also be cancelled after being queued up.
Now, my problem is that I rely on the tasks at least being started for my signalling to work, but it looks like the Executor simply skips a task if it was marked as cancelled before it got a chance to run.
So, is there a way to guarantee that a task is always started, and always cancelled (via InterruptedException) while running?
Also, can you check whether a task failed without ever having started?
You can probably subclass FutureTask and override its done() method to perform the signalling. According to the documentation, done() is called even if the task has been cancelled.
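That idea might be sketched like this (the class name and the latch are illustrative, not part of the library); because done() also fires when cancel() is called on a task that never ran, the signal is not lost when the Executor skips it:

```java
import java.util.concurrent.*;

// Sketch: a FutureTask subclass whose done() hook performs the signalling.
// done() runs exactly once, whether the task completed, failed, or was
// cancelled before it ever started.
class SignallingFutureTask<V> extends FutureTask<V> {
    private final CountDownLatch finished = new CountDownLatch(1);

    SignallingFutureTask(Callable<V> callable) {
        super(callable);
    }

    @Override
    protected void done() {
        // Signal here; isCancelled() distinguishes cancellation from completion.
        finished.countDown();
    }

    void awaitDone() throws InterruptedException {
        finished.await();
    }
}
```

Inside done(), isCancelled() tells you whether the task was cancelled, and (if it ran) get() tells you whether it completed or threw.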
Related
I am considering an implementation of an ExecutorService to run a series of tasks. I plan to use its internal queue to have a few tasks waiting for their turn to run. Is there some way to interrupt the task (the Runnable) that is currently running in an ExecutorService thread, and keep the thread alive to run the next task? Or is it only possible to call .shutdown() and then create a new ExecutorService?
I have found this and wanted to know if there are any other solutions.
Instead of interfering with the threads, you may want to have a Task class (extending or wrapping the Runnable) that implements an interrupt mechanism (e.g. a boolean flag).
While your task executes, it needs to check this flag periodically; if the flag is set, the task should stop what it is doing. You might want to return a specific result at that point that tells your code the task was cancelled successfully.
If a user decides that he no longer requires the results from this task, you set this flag. However, the task might already have completed by then, so you still need to handle the case where the result exists but the user no longer cares about it.
An interrupt at the thread level does not guarantee that the thread stops working; it only works if the thread is in a state where it can receive an interrupt.
Also, you should not interfere with the threads of the ExecutorService directly, as you might unintentionally stop a different task or stop the ExecutorService from working properly.
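The flag mechanism described above might be sketched like this (the names are illustrative); an AtomicBoolean stands in for the boolean flag so the cross-thread write is visible:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of a cooperatively cancellable task: the flag is checked between
// units of work, and the task records that it stopped early.
class CancellableTask implements Runnable {
    private final AtomicBoolean cancelled = new AtomicBoolean(false);
    volatile boolean wasCancelled = false;   // the "specific result"

    void cancel() {
        cancelled.set(true);
    }

    @Override
    public void run() {
        for (int step = 0; step < 100; step++) {
            if (cancelled.get()) {       // periodic check of the flag
                wasCancelled = true;     // report cooperative cancellation
                return;
            }
            // ... do one unit of work ...
        }
    }
}
```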
Why would you want to kill that task and continue with the next one? If it is a matter of time limits, you can specify a timeout in the method that executes the tasks, so that tasks running longer than declared are automatically cancelled. E.g.:
ExecutorService executor = Executors.newSingleThreadExecutor();
executor.invokeAll(Arrays.asList(new Task()), 60, TimeUnit.SECONDS); // Timeout of 60 seconds.
executor.shutdown();
If any of the tasks takes longer than 60 seconds, its Future will be cancelled and Future.get() will throw a CancellationException that you must catch.
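A minimal sketch of that behaviour (the helper method and class name are mine): a task still running at the deadline gets cancelled by the timed invokeAll, and get() on its Future then throws CancellationException.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch: run one task under a timeout; report how it ended.
class TimeoutDemo {
    static String runWithTimeout(Callable<String> task, long timeoutSeconds)
            throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            List<Future<String>> futures =
                    executor.invokeAll(List.of(task), timeoutSeconds, TimeUnit.SECONDS);
            try {
                return futures.get(0).get();
            } catch (CancellationException e) {
                return "cancelled";     // task exceeded the timeout
            } catch (ExecutionException e) {
                return "failed";        // task threw an exception
            }
        } finally {
            executor.shutdown();
        }
    }
}
```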
I have scheduled some tasks in a ScheduledExecutorService. On shutdown, I'd like to cancel them, and release a database lock once they are all done.
How can I wait for the cancelled tasks to finish processing?
Attempted solutions
according to my tests, future.cancel() does not block until the task has stopped executing
invoking future.get() after future.cancel() results in an immediate CancellationException
I'd rather not wait for shutdown of the entire executor, as it is shared among components
How can I wait for the cancelled tasks to finish processing?
The problem with this is that cancellation is done on a best effort basis. The javadoc states
Attempts to cancel execution of this task. This attempt will fail if
the task has already completed, has already been cancelled, or could
not be cancelled for some other reason.
Typically, cancellation is implemented with interruption and interruption is a convention. Nothing guarantees that it will be implemented correctly. So even if you do send a cancel, nothing guarantees that the underlying task (if already running) will fulfill the cancellation.
There's no reliable way to implement your use case.
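Given that limitation, a best-effort cooperative workaround (not part of the Future API; the names below are mine) is to have each task release a latch in a finally block. On shutdown, try to remove the task from the pool's queue first, and only wait on the latch if removal fails:

```java
import java.util.concurrent.*;

// Best-effort sketch: the latch is released in a finally block, so it fires
// whether the task finishes normally or is stopped by an interrupt.
class TrackedTask implements Runnable {
    final CountDownLatch finished = new CountDownLatch(1);
    private final Runnable delegate;

    TrackedTask(Runnable delegate) {
        this.delegate = delegate;
    }

    @Override
    public void run() {
        try {
            delegate.run();
        } finally {
            finished.countDown();   // released even after an interrupt
        }
    }
}
```

On shutdown, for each task: if ThreadPoolExecutor.remove(task) returns true, the task never started and there is nothing to wait for; otherwise it is running (or done), so task.finished.await() blocks until its finally block has run. This only works for tasks passed directly to execute() on a ThreadPoolExecutor (submit() wraps them), and it remains best-effort, consistent with the caveat above.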
I am using an ExecutorService (a ThreadPoolExecutor) to run (and queue) a lot of tasks. I am attempting to write some shut down code that is as graceful as possible.
ExecutorService has two ways of shutting down:
I can call ExecutorService.shutdown() and then ExecutorService.awaitTermination(...).
I can call ExecutorService.shutdownNow().
According to the JavaDoc, the shutdown command:
Initiates an orderly shutdown in which previously submitted
tasks are executed, but no new tasks will be accepted.
And the shutdownNow command:
Attempts to stop all actively executing tasks, halts the
processing of waiting tasks, and returns a list of the tasks that were
awaiting execution.
I want something in between these two options.
I want to call a command that:
a. Completes the currently active task or tasks (like shutdown).
b. Halts the processing of waiting tasks (like shutdownNow).
For example: suppose I have a ThreadPoolExecutor with 3 threads. It currently has 50 tasks in the queue with the first 3 actively running. I want to allow those 3 active tasks to complete but I do not want the remaining 47 tasks to start.
I believe I could shut down the ExecutorService this way by keeping a list of Future objects around and then calling cancel on all of them. But since tasks are submitted to this ExecutorService from multiple threads, there would be no clean way to do this.
I'm really hoping I'm missing something obvious or that there's a way to do it cleanly.
Thanks for any help.
I ran into this issue recently. There may be a more elegant approach, but my solution is to first call shutdown(), then pull out the BlockingQueue being used by the ThreadPoolExecutor and call clear() on it (or else drain it to another Collection for storage). Finally, calling awaitTermination() allows the thread pool to finish what's currently on its plate.
For example:
public static void shutdownPool(boolean awaitTermination) throws InterruptedException {
    // call shutdown to prevent new tasks from being submitted
    executor.shutdown();
    // get a reference to the queue
    final BlockingQueue<Runnable> blockingQueue = executor.getQueue();
    // clear the queue
    // (or else copy its contents out here with a while loop and remove())
    blockingQueue.clear();
    // wait for active tasks to be completed
    if (awaitTermination) {
        executor.awaitTermination(SHUTDOWN_TIMEOUT, TimeUnit.SECONDS);
    }
}
This method would be implemented in the directing class wrapping the ThreadPoolExecutor with the reference executor.
It's important to note the following from the ThreadPoolExecutor.getQueue() javadoc:
Access to the task queue is intended primarily for debugging and
monitoring. This queue may be in active use. Retrieving the task queue
does not prevent queued tasks from executing.
This highlights the fact that additional tasks may be polled from the BlockingQueue while you drain it. However, all BlockingQueue implementations are thread-safe according to that interface's documentation, so this shouldn't cause problems.
The shutdownNow() is exactly what you need. You've missed the 1st word Attempts and the entire 2nd paragraph of its javadoc:
There are no guarantees beyond best-effort attempts to stop processing actively executing tasks. For example, typical implementations will cancel via Thread.interrupt(), so any task that fails to respond to interrupts may never terminate.
So, only tasks that check Thread#isInterrupted() on a regular basis (e.g. in a while (!Thread.currentThread().isInterrupted()) loop or similar) will be terminated. If your task never checks it, the task will simply keep running.
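For instance, a task along these lines (a sketch; the counter is only there to show progress) stops promptly when shutdownNow() interrupts its thread:

```java
import java.util.concurrent.*;

// Sketch of a task that honours shutdownNow(): it checks the thread's
// interrupt flag between units of work, so Thread.interrupt() stops it.
class InterruptibleWorker implements Runnable {
    volatile int unitsDone = 0;

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            unitsDone++;            // one unit of work
            try {
                Thread.sleep(10);   // blocking calls also respond to interrupt
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore the flag and exit
                return;
            }
        }
    }
}
```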
You can wrap each submitted task with a little extra logic:
Runnable wrapper = new Runnable() {
    public void run() {
        if (executorService.isShutdown())
            throw new Error("shutdown");
        task.run();
    }
};
executorService.submit(wrapper);
The overhead of the extra check is negligible. After the executor is shut down, the wrappers will still be executed, but the original tasks won't.
If I submit some tasks to an Executor using invokeAll, am I guaranteed that the submitted thread sees all the side effects of the task executions, even if I don't call get() on each of the returned Futures?
From a practical point of view, it would seem that this would be a useful guarantee, but I don't see anything in the javadoc.
More precisely, do all actions in the body of a Callable submitted to an executor happen-before the return from the invokeAll() call?
It's annoying to uselessly call get() on each future when in fact the return type is Void and no exceptions are thrown - all the work happens as side effects.
From the documentation of ExecutorService:
Actions in a thread prior to the submission of a Runnable or Callable
task to an ExecutorService happen-before any actions taken by that
task, which in turn happen-before the result is retrieved via
Future.get().
As I read this, there is a memory barrier on task submission, so potentially you'd only need to call get() on the last task in your list, but not the others.
However, since calling get() is the only way to determine whether the task completed or threw, I would still call it on every Future, regardless of memory guarantees.
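To illustrate that recommendation (the class and field names are mine): calling get() on every Future returned by invokeAll both observes each task's outcome and, per the quoted guarantee, makes the tasks' writes to plain fields safely visible afterwards.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch: tasks write to a plain (non-volatile) array; get() on each Future
// establishes the happens-before edge before the array is read.
class SideEffectDemo {
    static int[] runAll() throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        final int[] results = new int[2];     // plain array, side-effect target
        List<Callable<Void>> tasks = List.of(
                () -> { results[0] = 1; return null; },
                () -> { results[1] = 2; return null; });
        try {
            for (Future<Void> f : pool.invokeAll(tasks)) {
                f.get();                      // observe completion of each task
            }
        } finally {
            pool.shutdown();
        }
        return results;
    }
}
```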
If invokeAny() promises that no tasks are still executing when it returns, then all side effects will indeed be visible.
For invokeAny() to know that all tasks are done, it must have synchronized with those threads, meaning the method's return happens after the tasks complete (and after everything that happens within each task). However, the API of ExecutorService and Future.cancel() does not explicitly say what happens when you cancel a running task; in particular, whether cancel() waits to return until the task has stopped running. The fact that isDone() must return true after calling cancel() does suggest that a cancelled task is no longer executing by the time you observe it as done.
One more thing to watch out for: when using invokeAny(), you will not know whether a given task ever started execution without inspecting its Future object.
I need to force the release of resources when a task is interrupted. To that end, I implemented this solution. With it, when I call shutdown(), all the tasks in ScheduledThreadPoolExecutor.getQueue() are forced correctly, but some of my jobs still kept their resources. Looking closely at the behaviour, I figured out why: when a task is executing, it is no longer present in the ScheduledThreadPoolExecutor queue (I know it sounds obvious). The problem is that I need to force the release of resources for all the jobs, whether queued or executing.
So, how can I get the jobs that are currently executing? Or do you have a better idea?
You can maintain a list of all the Futures you create when you submit the jobs.
Use this list to cancel all futures.
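A sketch of that bookkeeping (all names here are illustrative): record each Future at submission time, then cancel them all with interruption so running jobs are reached as well as queued ones.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch: the tracker remembers every Future, so cancelAll() can reach
// jobs that are already executing, not just those still in the queue.
class JobTracker {
    private final List<Future<?>> futures = new CopyOnWriteArrayList<>();
    private final ScheduledThreadPoolExecutor executor =
            new ScheduledThreadPoolExecutor(2);

    Future<?> submit(Runnable job) {
        Future<?> f = executor.submit(job);
        futures.add(f);        // remember queued *and* running jobs
        return f;
    }

    void cancelAll() {
        for (Future<?> f : futures) {
            f.cancel(true);    // true => interrupt jobs that are running
        }
        executor.shutdown();
    }
}
```

The cancelled jobs still need an interruption policy (checking the interrupt flag, releasing resources in a finally block) for the cancellation to actually free anything.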
Don't you want to call
executor.shutdownNow()
that will attempt to cancel currently running tasks (using Thread.interrupt, so you'll need to implement an 'interruption policy' in each task that honours the interrupt flag).
from the javadoc
Attempts to stop all actively executing tasks, halts the processing of waiting tasks, and returns a list of the tasks that were awaiting execution.
There are no guarantees beyond best-effort attempts to stop processing actively executing tasks. For example, typical implementations will cancel via Thread.interrupt, so any task that fails to respond to interrupts may never terminate.
This will return a list of the waiting tasks, so you can always put them back onto a 'wait list' rather than lose them completely. You might also want to follow that up with an 'await termination' to avoid runaway code. For example, executor.awaitTermination(...).
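Putting those two steps together might look like this (a sketch; the timeout value is arbitrary):

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch: shutdownNow() hands back the tasks that never started, so they
// can be kept on a "wait list" instead of being lost; awaitTermination
// then guards against runaway tasks that ignore the interrupt.
class DrainDemo {
    static List<Runnable> drain(ExecutorService executor)
            throws InterruptedException {
        List<Runnable> notStarted = executor.shutdownNow();
        if (!executor.awaitTermination(5, TimeUnit.SECONDS)) {
            // some running task did not respond to Thread.interrupt()
        }
        return notStarted;   // candidates for resubmission elsewhere
    }
}
```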
tempus-fugit has some handy classes for handling this. You just call
shutdown(executor).waitingForShutdown(millis(400));
see here for details.
Also, about the solution you outline in the blog post: I'm not sure it's quite right. Future.cancel(false) will only stop the task from being scheduled. If you were to update the example in the blog to allow interruption (i.e. cancel(true)), it would be (more or less) equivalent to shutdownNow. That is to say, it will call interrupt on the underlying task, which (if you've implemented an interruption policy) will stop it processing. As for cleaning up after interruption, you just need to make sure that you handle that appropriately within the interruption policy implementation. The upshot is that I think you can cancel and clean up correctly using shutdownNow (or cancel(true)).