ScheduledThreadPoolExecutor.remove : Is it safe to use - java

I am trying to schedule a bunch of tasks to execute periodically. Under certain conditions some tasks need to be stopped from being scheduled, so I remove them from the internal queue of the ThreadPoolExecutor. I do that from within the task itself.
Below is my approach. I am not sure whether removing the task from the thread pool executor service from inside the task can cause any problem (look at the synchronized method named 'removeTask'). Is there a better way to accomplish what I am trying to do here?
public class SchedulerDaemon {
private ScheduledExecutorService taskScheduler;
private ScheduledFuture taskResult1, taskResult2;
private Task1 task1;
private Task2 task2;
public SchedulerDaemon(Task1 task1, Task2 task2)
{
this.task1 = task1;
this.task2 = task2;
}
public void start() {
if(taskScheduler == null) {
taskScheduler = new ScheduledThreadPoolExecutor(1);
taskResult1 = taskScheduler.scheduleAtFixedRate(new TaskWrapper(task1), 60000, 60000, TimeUnit.MILLISECONDS);
taskResult2 = taskScheduler.scheduleAtFixedRate(new TaskWrapper(task2), 60000, 60000, TimeUnit.MILLISECONDS);
}
}
public void stop() {
if(taskScheduler != null) {
taskScheduler.shutdown();
taskResult1.cancel(false);
taskResult2.cancel(false);
taskScheduler = null;
taskResult1 = null;
taskResult2 = null;
}
}
public synchronized void removeTask( TaskWrapper task){
((ScheduledThreadPoolExecutor) taskScheduler).remove(task);
}
class TaskWrapper implements Runnable {
private final Callable<Boolean> myTask;
public TaskWrapper(Callable<Boolean> task) {
myTask = task;
}
@Override
public void run() {
try {
boolean keepRunningTask = myTask.call();
if(!keepRunningTask) {
//Should this cause any problem??
removeTask(this);
}
} catch (Exception e) {
//the task threw an exception remove it from execution queue
//Should this cause any problem??
removeTask(this);
}
}
}
}
public class Task1 implements Callable<Boolean> {
public Boolean call() {
if(<something>)
return true;
else
return false;
}
}
public class Task2 implements Callable<Boolean> {
public Boolean call() {
if(<something>)
return true;
else
return false;
}
}

Whenever you schedule a task, a ScheduledFuture object is returned:
ScheduledFuture<?> future = schedulerService.scheduleAtFixedRate(new AnyTask(), initialDelay, period, TimeUnit.MILLISECONDS);
Use this Future object to cancel the task.
try this
future.cancel(true);
from JavaDocs
/**
* Attempts to cancel execution of this task. This attempt will
* fail if the task has already completed, has already been cancelled,
* or could not be cancelled for some other reason. If successful,
* and this task has not started when <tt>cancel</tt> is called,
* this task should never run. If the task has already started,
* then the <tt>mayInterruptIfRunning</tt> parameter determines
* whether the thread executing this task should be interrupted in
* an attempt to stop the task.
*
* <p>After this method returns, subsequent calls to {@link #isDone} will
* always return <tt>true</tt>. Subsequent calls to {@link #isCancelled}
* will always return <tt>true</tt> if this method returned <tt>true</tt>.
*
* @param mayInterruptIfRunning <tt>true</tt> if the thread executing this
* task should be interrupted; otherwise, in-progress tasks are allowed
* to complete
* @return <tt>false</tt> if the task could not be cancelled,
* typically because it has already completed normally;
* <tt>true</tt> otherwise
*/
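In other words, for a repeating task you can keep a reference to the ScheduledFuture returned by scheduleAtFixedRate and let the task cancel itself when it decides to stop, instead of reaching into the executor's internal queue with remove(). A minimal sketch of that idea follows; the SelfCancellingTask name and the 60-second period are only illustrative, not taken from the question.
import java.util.concurrent.Callable;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

class SelfCancellingTask implements Runnable {
    private final Callable<Boolean> delegate;
    private final AtomicReference<ScheduledFuture<?>> futureRef = new AtomicReference<>();

    SelfCancellingTask(Callable<Boolean> delegate) {
        this.delegate = delegate;
    }

    void setFuture(ScheduledFuture<?> future) {
        futureRef.set(future);
    }

    @Override
    public void run() {
        try {
            if (!delegate.call()) {
                cancelSelf();   // the task asked to stop: prevent further scheduled runs
            }
        } catch (Exception e) {
            cancelSelf();       // the task failed: stop scheduling it
        }
    }

    private void cancelSelf() {
        ScheduledFuture<?> f = futureRef.get();
        if (f != null) {
            f.cancel(false);    // false: let the current run finish normally
        }
    }
}

// Usage sketch:
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
SelfCancellingTask wrapper = new SelfCancellingTask(task1);
ScheduledFuture<?> future = scheduler.scheduleAtFixedRate(wrapper, 60, 60, TimeUnit.SECONDS);
wrapper.setFuture(future);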

Cancelling a task by force is dangerous; that is why Thread.stop was deprecated and marked for removal from Java.
As an alternative, you should have a shared flag in your thread:
the task periodically asks whether it may keep running and, if the answer is no, it simply returns. It may seem clumsy, but it is the safe way.
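A minimal sketch of that cooperative-flag approach; the StoppableTask name and the doSomeWork() placeholder are invented here, not taken from the question.
import java.util.concurrent.atomic.AtomicBoolean;

class StoppableTask implements Runnable {
    private final AtomicBoolean keepRunning = new AtomicBoolean(true);

    void requestStop() {
        keepRunning.set(false);   // ask the task to finish at its next check
    }

    @Override
    public void run() {
        while (keepRunning.get() && !Thread.currentThread().isInterrupted()) {
            doSomeWork();         // placeholder for one small unit of work
        }
        // we simply fall out of the loop and return; nothing is stopped by force
    }

    private void doSomeWork() {
        // ...
    }
}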

Does Runnable run() create a thread every time I call it?

I wrote a class which I use as follows:
EventWrapperBuilder.newWrapperBuilder().
addSync(this::<some_method>).
addSync(this::<some_method>).
addSync(this::<some_method>).
addAsync(() -> <some_method>, Duration.ofSeconds(10)).
GET();
My code is the following:
private final List<EventWrapper> _wrappers = new ArrayList<>();
public EventWrapperBuilder addSync(final Runnable task)
{
_wrappers.add(new EventWrapper(task, Duration.ZERO));
return this;
}
public EventWrapperBuilder addAsync(final Runnable task, final Duration duration)
{
_wrappers.add(new EventWrapper(task, duration));
return this;
}
/**
* @return {@code List} of all {@code Future}
*/
public List<Future<?>> GET()
{
final List<Future<?>> list = new ArrayList<>();
for (final EventWrapper wrapper : getWrappers())
{
if (!wrapper.getDuration().isZero())
{
list.add(ThreadPoolManager.getInstance().scheduleEvent(wrapper.getTask(), wrapper.getDuration().toMillis()));
}
else
{
wrapper.getTask().run();
}
}
return list;
}
/**
* @param builder
* @return {@code EventWrapperBuilder}
*/
public EventWrapperBuilder COMBINE(final EventWrapperBuilder builder)
{
_wrappers.addAll(builder.getWrappers());
return this;
}
/**
* @return {@code List} of all {@code EventWrapper}
*/
public List<EventWrapper> getWrappers()
{
return _wrappers;
}
//@formatter:off
private static record EventWrapper (Runnable getTask, Duration getDuration) {}
//@formatter:on
public static EventWrapperBuilder newWrapperBuilder()
{
return new EventWrapperBuilder();
}
My question is: do I create a new thread every time I execute this when it is instant, i.e. when the EventWrapper's duration is zero?
I obviously know that
list.add(ThreadPoolManager.getInstance().scheduleEvent(wrapper.getTask(), wrapper.getDuration().toMillis()));
creates a thread and executes it after the scheduled time, but
wrapper.getTask().run();
runs immediately without a new thread, right?
I don't want my code to create threads, and therefore become heavy, when it executes run().
No, calling run() of the Runnable interface will not spawn a new thread.
On the other hand, when you wrap the Runnable in a Thread and call start(), the JVM will spawn a new thread and execute the Runnable in its context.
In your code, only the async tasks will run in separate threads, since the thread pool manages their execution.
In your code, calling run() will execute the Runnable in the current thread, while calling start() will run the Runnable in a new thread.
https://docs.oracle.com/javase/7/docs/api/java/lang/Runnable.html#run()
https://docs.oracle.com/javase/7/docs/api/java/lang/Thread.html#start()
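A tiny sketch that makes the difference observable by printing the name of the executing thread; the class and thread names are only for illustration.
public class RunVsStart {
    public static void main(String[] args) {
        Runnable task = () ->
                System.out.println("running on: " + Thread.currentThread().getName());

        task.run();                          // prints "running on: main" - no new thread
        new Thread(task, "worker").start();  // prints "running on: worker" - new thread spawned
    }
}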

Processing tasks in parallel and sequentially Java

In my program, the user can trigger different tasks via an interface, and these tasks take some time to process. Therefore they are executed by threads. So far I have implemented it with an executor with one thread that executes all tasks one after the other. But now I would like to parallelize everything a little bit.
That is, I would like to run tasks in parallel, except when they have the same path; then I want to run them sequentially. For example, I have 10 threads in my pool, and when a task comes in, it should be assigned to the worker which is currently processing a task with the same path. If no task with the same path is currently being processed by a worker, then the task should be processed by a currently free worker.
Additional info: a task is any type of task that is executed on a file in the local file system, for example renaming a file. Therefore, each task has the attribute path. And I don't want to execute two tasks on the same file at the same time, so tasks with the same path should be performed sequentially.
Here is my sample code but there is work to do:
One of my problems is that I need a safe way to check whether a worker is currently running and to get the path of the task it is currently processing. By safe I mean that no simultaneous-access or other threading problems occur.
public class TasksOrderingExecutor {
public interface Task extends Runnable {
//Task code here
String getPath();
}
private static class Worker implements Runnable {
private final LinkedBlockingQueue<Task> tasks = new LinkedBlockingQueue<>();
//some variable or mechanic to give the actual path of the running tasks??
private volatile boolean stopped;
void schedule(Task task) {
tasks.add(task);
}
void stop() {
stopped = true;
}
@Override
public void run() {
while (!stopped) {
try {
Task task = tasks.take();
task.run();
} catch (InterruptedException ie) {
// perhaps, handle somehow
}
}
}
}
private final Worker[] workers;
private final ExecutorService executorService;
/**
* @param queuesNr nr of concurrent task queues
*/
public TasksOrderingExecutor(int queuesNr) {
Preconditions.checkArgument(queuesNr >= 1, "queuesNr >= 1");
executorService = new ThreadPoolExecutor(queuesNr, queuesNr, 0, TimeUnit.SECONDS, new SynchronousQueue<>());
workers = new Worker[queuesNr];
for (int i = 0; i < queuesNr; i++) {
Worker worker = new Worker();
executorService.submit(worker);
workers[i] = worker;
}
}
public void submit(Task task) {
Worker worker = getWorker(task);
worker.schedule(task);
}
public void stop() {
for (Worker w : workers) w.stop();
executorService.shutdown();
}
private Worker getWorker(Task task) {
//check here if a running worker with a specific path exists? If yes return it, else return a free worker. How do I check if a worker is currently running?
return workers[task.getPath() //HERE I NEED HELP//];
}
}
Seems like you have a pair of problems:
You want to check the status of tasks submitted to an executor service
You want to run tasks in parallel, and possibly prioritize them
Future
For the first problem, capture the Future object returned when you submit a task to an executor service. You can check the Future object for its completion status.
Future< Task > future = myExecutorService.submit( someTask ) ;
…
boolean isCancelled = future.isCancelled() ; // Returns true if this task was cancelled before it completed normally.
boolean isDone = future.isDone(); // Returns true if this task completed.
The Future is of a type, and that type can be your Task class itself. Calling Future::get yields the Task object. You can then interrogate that Task object for its contained file path.
Task task = future.get() ;
String path = task.getPath() ; // Access field via getter from your `Task` object.
Executors
Rather than instantiating new ThreadPoolExecutor, use the Executors utility class to instantiate an executor service on your behalf. Instantiating ThreadPoolExecutor directly is not needed for most common scenarios, as mentioned in the first line of its Javadoc.
ExecutorService es = Executors.newFixedThreadPool( 3 ) ; // Instantiate an executor service backed by a pool of three threads.
For the second problem, use an executor service backed by a thread pool rather than a single thread. The executor service automatically assigns the submitted task to an available thread.
As for grouping or prioritizing, use multiple executor services. You can instantiate more than one. You can have as many executor services as you want, provided you do not overload the demand on your deployment machine for CPU cores and memory (think about your maximum simultaneous usage).
ExecutorService esSingleThread = Executors.newSingleThreadExecutor() ;
ExecutorService esMultiThread = Executors.newCachedThreadPool() ;
One executor service might be backed by a single thread to limit the demands on the deployment computer, while others might be backed by a thread pool to get more work done. You can use these multiple executor services as your multiple queues. No need for you to be managing queues and workers as seen in the code of your Question. Executors were invented to further simplify working with multiple threads.
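For the "same path must run sequentially" requirement specifically, one way to apply this idea is to keep one single-thread executor service per path: tasks for the same path are serialized on that one thread, while different paths run in parallel. A rough sketch, assuming it is acceptable to keep one thread per distinct path alive; the PerPathExecutor name is invented.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class PerPathExecutor {
    // one single-thread executor per path: same path => sequential, different paths => parallel
    private final Map<String, ExecutorService> executorsByPath = new ConcurrentHashMap<>();

    void submit(String path, Runnable task) {
        executorsByPath
                .computeIfAbsent(path, p -> Executors.newSingleThreadExecutor())
                .submit(task);
    }

    void shutdownAll() {
        executorsByPath.values().forEach(ExecutorService::shutdown);
    }
}
Note that this never retires idle per-path executors, so it is only a starting point, not a complete solution.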
Concurrency
You said:
And I don't want to execute two tasks on the same file at the same time, so such tasks with the same paths should be performed sequentially.
You need a better way to handle the concurrency conflict than just scheduling tasks on threads.
Java has ways to manage concurrent access to files. Search to learn more, as this has been covered on Stack Overflow already.
Perhaps I have not understood fully your needs, so do comment if I am off-base.
It seems that you need some sort of "Task Dispatcher" that executes or holds some tasks depending on some identifier (here the Path of the file the task is applied to).
You could use something like this :
public class Dispatcher<I> implements Runnable {
/**
* The executor used to execute the submitted task
*/
private final Executor executor;
/**
* Map of the pending tasks
*/
private final Map<I, Deque<Runnable>> pendingTasksById = new HashMap<>();
/**
* set containing the id that are currently executed
*/
private final Set<I> runningIds = new HashSet<>();
/**
* Action to be executed by the dispatcher
*/
private final BlockingDeque<Runnable> actionQueue = new LinkedBlockingDeque<>();
public Dispatcher(Executor executor) {
this.executor = executor;
}
/**
* Task in the same group will be executed sequentially (but not necessarily in the same thread)
* @param id the id of the group the task belongs to
* @param task the task to execute
*/
public void submitTask(I id, Runnable task) {
actionQueue.addLast(() -> {
if (canBeLaunchedDirectly(id)) {
executeTask(id, task);
} else {
addTaskToPendingTasks(id, task);
ifPossibleLaunchPendingTaskForId(id);
}
});
}
@Override
public void run() {
while (!Thread.currentThread().isInterrupted()) {
try {
actionQueue.takeFirst().run();
} catch (InterruptedException e) {
Thread.currentThread().interrupt(); // restore the interrupt status
break;
}
}
}
private void addTaskToPendingTasks(I id, Runnable task) {
this.pendingTasksById.computeIfAbsent(id, i -> new LinkedList<>()).add(task);
}
/**
* @param id an id of a group
* @return true if a task of the group with the provided id is currently executed
*/
private boolean isRunning(I id) {
return runningIds.contains(id);
}
/**
* @param id an id of a group
* @return an optional containing the first pending task of the group,
* an empty optional if no such task is available
*/
private Optional<Runnable> getFirstPendingTask(I id) {
final Deque<Runnable> pendingTasks = pendingTasksById.get(id);
if (pendingTasks == null) {
return Optional.empty();
}
assert !pendingTasks.isEmpty();
final Runnable result = pendingTasks.removeFirst();
if (pendingTasks.isEmpty()) {
pendingTasksById.remove(id);
}
return Optional.of(result);
}
private boolean canBeLaunchedDirectly(I id) {
return !isRunning(id) && pendingTasksById.get(id) == null;
}
private void executeTask(I id, Runnable task) {
this.runningIds.add(id);
executor.execute(() -> {
try {
task.run();
} finally {
actionQueue.addLast(() -> {
runningIds.remove(id);
ifPossibleLaunchPendingTaskForId(id);
});
}
});
}
private void ifPossibleLaunchPendingTaskForId(I id) {
if (isRunning(id)) {
return;
}
getFirstPendingTask(id).ifPresent(r -> executeTask(id, r));
}
}
To use it, you need to launch it in a separate thread (or you can adapt it for a cleaner solution) like this :
final Dispatcher<Path> dispatcher = new Dispatcher<>(Executors.newCachedThreadPool());
new Thread(dispatcher).start();
dispatcher.submitTask(path, task1);
dispatcher.submitTask(path, task2);
This is a basic example; you might need to keep a reference to the thread, and even better, wrap all of that in a class.
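For example, a rough sketch (names invented) of keeping the dispatcher thread inside such a class, reusing the Dispatcher defined above:
import java.nio.file.Path;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class DispatcherService {
    private final ExecutorService workers = Executors.newCachedThreadPool();
    private final Dispatcher<Path> dispatcher = new Dispatcher<>(workers);
    private final Thread dispatcherThread = new Thread(dispatcher, "dispatcher");

    void start() {
        dispatcherThread.start();
    }

    void submit(Path path, Runnable task) {
        dispatcher.submitTask(path, task);
    }

    void stop() {
        dispatcherThread.interrupt();  // the dispatcher loop exits when it is interrupted
        workers.shutdown();
    }
}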
All you need is a hash map of actors, with the file path as the key. Different actors run in parallel, while a concrete actor handles its tasks sequentially.
Your solution is wrong because the Worker class uses the blocking operation take() but is executed in a limited thread pool, which may lead to thread starvation (a kind of deadlock). Actors do not block while waiting for the next message.
import org.df4j.core.dataflow.ClassicActor;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.*;
public class TasksOrderingExecutor {
public static class Task implements Runnable {
private final String path;
private final String task;
public Task(String path, String task) {
this.path = path;
this.task = task;
}
//Task code here
String getPath() {
return path;
}
@Override
public void run() {
System.out.println(path+"/"+task+" started");
try {
Thread.sleep(500);
} catch (InterruptedException e) {
}
System.out.println(path+"/"+task+" stopped");
}
}
static class Worker extends ClassicActor<Task> {
@Override
protected void runAction(Task task) throws Throwable {
task.run();
}
}
private final ExecutorService executorService;
private final Map<String,Worker> workers = new HashMap<String,Worker>(){
@Override
public Worker get(Object key) {
return super.computeIfAbsent((String) key, (k) -> {
Worker res = new Worker();
res.setExecutor(executorService);
res.start();
return res;
});
}
};
/**
* @param queuesNr nr of concurrent task queues
*/
public TasksOrderingExecutor(int queuesNr) {
executorService = ForkJoinPool.commonPool();
}
public void submit(Task task) {
Worker worker = getWorker(task);
worker.onNext(task);
}
public void stop() throws InterruptedException {
for (Worker w : workers.values()) {
w.onComplete();
}
executorService.shutdown();
executorService.awaitTermination(10, TimeUnit.SECONDS);
}
private Worker getWorker(Task task) {
//check here if a running worker with a specific path exists? If yes return it, else return a free worker. How do I check if a worker is currently running?
return workers.get(task.getPath());
}
public static void main(String[] args) throws InterruptedException {
TasksOrderingExecutor orderingExecutor = new TasksOrderingExecutor(20);
orderingExecutor.submit(new Task("path1", "task1"));
orderingExecutor.submit(new Task("path1", "task2"));
orderingExecutor.submit(new Task("path2", "task1"));
orderingExecutor.submit(new Task("path3", "task1"));
orderingExecutor.submit(new Task("path2", "task2"));
orderingExecutor.stop();
}
}
The execution log shows that tasks with the same key are executed sequentially and tasks with different keys are executed in parallel:
path3/task1 started
path2/task1 started
path1/task1 started
path3/task1 stopped
path2/task1 stopped
path1/task1 stopped
path2/task2 started
path1/task2 started
path2/task2 stopped
path1/task2 stopped
I used my own actor library DF4J, but any other actor library can be used.

Get result from FutureTask after canceling it

Consider a long-running computation inside a Callable instance.
And consider that the result of this computation has a precision that depends on the computation time, i.e. if the task is cancelled it should return whatever has been computed so far (for example, think of a pipeline that keeps refining irrational numbers).
It is desirable to implement this paradigm using the standard Java concurrency utilities, e.g.
Callable<ValuableResult> task = new Callable<>() { ... };
Future<ValuableResult> future = Executors.newSingleThreadExecutor().submit(task);
try {
return future.get(timeout, TimeUnit.SECONDS);
} catch (TimeoutException te) {
future.cancel(true);
// HERE! Get what was computed so far
}
It seems that without fully reimplementing Future and ThreadPoolExecutor this issue cannot be solved. Are there any convenient existing tools for that in Java 1.7?
Instead of canceling it through the Future's API, tell it to finish through a mechanism of your own (such as a long that you pass into the constructor, which tells it how long to run before returning normally; or an AtomicBoolean you set to true).
Keep in mind that once the task actually starts, cancel(true) doesn't magically stop it. All it does then is interrupt the thread. There are a few methods that check this flag and throw InterruptedException, but otherwise you'll have to check the interrupted status manually. So, given that you need to code that cooperative mechanism anyway, why not just make it one that better suits your requirements?
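A sketch of that idea, assuming the computation can publish intermediate values as it refines them; the refine() step and the Double result type are placeholders, and timeout is the same variable as in the question's snippet.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;

ExecutorService pool = Executors.newSingleThreadExecutor();
AtomicBoolean stopRequested = new AtomicBoolean(false);
AtomicReference<Double> bestSoFar = new AtomicReference<>(0.0);  // latest intermediate value

Future<Double> future = pool.submit(() -> {
    double value = 0.0;
    while (!stopRequested.get() && !Thread.currentThread().isInterrupted()) {
        value = refine(value);    // hypothetical step that improves the result
        bestSoFar.set(value);     // publish the intermediate result
    }
    return value;
});

try {
    return future.get(timeout, TimeUnit.SECONDS);
} catch (TimeoutException te) {
    stopRequested.set(true);      // ask the task to stop at its next iteration
    return bestSoFar.get();       // use whatever was computed so far
}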
Well, it seems to me that the simplest way in this case is to prepare some final result-wrapper object which will be captured inside the Callable instance:
final ValuableResultWrapper wrapper = new ValuableResultWrapper();
final CountDownLatch latch = new CountDownLatch(1);
Callable<ValuableResultWrapper> task = new Callable<>() {
...
wrapper.setValue(...); // here we set what we have computed so far
latch.countDown();
return wrapper;
...
};
Future<ValuableResultWrapper> future = Executors.newSingleThreadExecutor().submit(task);
try {
return future.get(timeout, TimeUnit.SECONDS);
} catch (TimeoutException te) {
future.cancel(true);
// HERE! Get what was computed so far
latch.await();
return wrapper;
}
UPD: in such an implementation (which becomes too complicated) we have to introduce some kind of latch (a CountDownLatch in my example) to be sure that the task has completed before we execute return wrapper;
CompletionService is more powerful than a plain FutureTask, and in many cases it is more suitable. I took some ideas from it to solve the problem. Besides, the implementation ExecutorCompletionService is simpler than FutureTask, just a few lines of code, and it is easy to read. So I modified that class to get the partly computed result. A satisfying solution for me; after all, it looks simple and clear.
Demo code:
CompletionService<List<DeviceInfo>> completionService =
new MyCompletionService<>(Executors.newCachedThreadPool());
Future task = completionService.submit(detector);
try {
LogHelper.i(TAG, "result 111: " );
Future<List<DeviceInfo>> result = completionService.take();
LogHelper.i(TAG, "result: " + result.get());
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
}
This is the class code:
import java.util.concurrent.AbstractExecutorService;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Callable;
import java.util.concurrent.CancellationException;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executor;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RunnableFuture;
import java.util.concurrent.TimeUnit;
/**
* This is a CompletionService like java.util.concurrent.ExecutorCompletionService, but we can get the partly computed result
* from the FutureTask returned by submit, even if we cancel or interrupt it.
* Besides, CompletionService ensures that the FutureTask is done when we obtain it from the take or poll method.
*/
public class MyCompletionService<V> implements CompletionService<V> {
private final Executor executor;
private final AbstractExecutorService aes;
private final BlockingQueue<Future<V>> completionQueue;
/**
* FutureTask extension to enqueue upon completion.
*/
private static class QueueingFuture<V> extends FutureTask<Void> {
QueueingFuture(RunnableFuture<V> task,
BlockingQueue<Future<V>> completionQueue) {
super(task, null);
this.task = task;
this.completionQueue = completionQueue;
}
private final Future<V> task;
private final BlockingQueue<Future<V>> completionQueue;
protected void done() { completionQueue.add(task); }
}
private static class DoneFutureTask<V> extends FutureTask<V> {
private Object outcome;
DoneFutureTask(Callable<V> task) {
super(task);
}
DoneFutureTask(Runnable task, V result) {
super(task, result);
}
@Override
protected void set(V v) {
super.set(v);
outcome = v;
}
@Override
public V get() throws InterruptedException, ExecutionException {
try {
return super.get();
} catch (CancellationException e) {
return (V)outcome;
}
}
}
private RunnableFuture<V> newTaskFor(Callable<V> task) {
return new DoneFutureTask<V>(task);
}
private RunnableFuture<V> newTaskFor(Runnable task, V result) {
return new DoneFutureTask<V>(task, result);
}
/**
* Creates an MyCompletionService using the supplied
* executor for base task execution and a
* {@link LinkedBlockingQueue} as a completion queue.
*
* @param executor the executor to use
* @throws NullPointerException if executor is {@code null}
*/
public MyCompletionService(Executor executor) {
if (executor == null)
throw new NullPointerException();
this.executor = executor;
this.aes = (executor instanceof AbstractExecutorService) ?
(AbstractExecutorService) executor : null;
this.completionQueue = new LinkedBlockingQueue<Future<V>>();
}
/**
* Creates an MyCompletionService using the supplied
* executor for base task execution and the supplied queue as its
* completion queue.
*
* @param executor the executor to use
* @param completionQueue the queue to use as the completion queue
* normally one dedicated for use by this service. This
* queue is treated as unbounded -- failed attempted
* {@code Queue.add} operations for completed tasks cause
* them not to be retrievable.
* @throws NullPointerException if executor or completionQueue are {@code null}
*/
public MyCompletionService(Executor executor,
BlockingQueue<Future<V>> completionQueue) {
if (executor == null || completionQueue == null)
throw new NullPointerException();
this.executor = executor;
this.aes = (executor instanceof AbstractExecutorService) ?
(AbstractExecutorService) executor : null;
this.completionQueue = completionQueue;
}
public Future<V> submit(Callable<V> task) {
if (task == null) throw new NullPointerException();
RunnableFuture<V> f = newTaskFor(task);
executor.execute(new QueueingFuture<V>(f, completionQueue));
return f;
}
public Future<V> submit(Runnable task, V result) {
if (task == null) throw new NullPointerException();
RunnableFuture<V> f = newTaskFor(task, result);
executor.execute(new QueueingFuture<V>(f, completionQueue));
return f;
}
public Future<V> take() throws InterruptedException {
return completionQueue.take();
}
public Future<V> poll() {
return completionQueue.poll();
}
public Future<V> poll(long timeout, TimeUnit unit)
throws InterruptedException {
return completionQueue.poll(timeout, unit);
}
}

ExecutorService.execute() does not return the thread type

I have something like this
public static void runThread(Thread t){
ExecutorService threadExecutor = Executors.newSingleThreadExecutor();
threadExecutor.execute(t);
}
if I do Thread.currentThread(), then I get back weblogic.work.ExecuteThread or sometimes java.lang.Thread (I used Weblogic as my AppServer), but if I do
public static void runThread(Thread t){
//ExecutorService threadExecutor = Executors.newSingleThreadExecutor();
//threadExecutor.execute(t);
t.start();
}
then when I do Thread.currentThread(), I get back com.my.thread.JSFExecutionThread, which is the Thread that I passed in, and this is what I want. Is there a way to make ExecutorService#execute() use the correct Thread type, like Thread#start() does? The thing is that I want to use ExecutorService because I want to leverage shutdown() and shutdownNow().
EDIT
Is there anything wrong with this implementation?
/**
* Run {@code Runnable runnable} with {@code ExecutorService}
* @param runnable {@code Runnable}
* @return
*/
public static ExecutorService runThread(Thread t){
ExecutorService threadExecutor = Executors.newSingleThreadExecutor(
new ExecutionThreadFactory(t));
threadExecutor.execute(t);
return threadExecutor;
}
private static class ExecutionThreadFactory implements ThreadFactory{
private JSFExecutionThread jsfThread;
ExecutionThreadFactory(Thread t){
if(t instanceof JSFExecutionThread){
jsfThread = (JSFExecutionThread)t;
}
}
@Override
public Thread newThread(Runnable r) {
if(jsfThread != null){
return jsfThread;
}else{
return new Thread(r);
}
}
}
Is there anything wrong with this implementation?
Yes.
First, the ExecutorService manages the lifetime of each Thread from the time the ThreadFactory creates it until the executor is done with it... and the punchline: a Thread is not reusable; once it has terminated it cannot be started again.
Second
public Thread newThread(Runnable r) {
if(jsfThread != null){
return jsfThread;
}else{
return new Thread(r);
}
}
This code violates the contract of ThreadFactory.newThread by not arranging for the Runnable r to be the work executed by the returned jsfThread.
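If the goal is to keep the custom thread type while still getting shutdown()/shutdownNow(), a sketch that respects the ThreadFactory contract is to submit the work as a plain Runnable and let the factory wrap whatever Runnable the pool hands it. This assumes JSFExecutionThread has a constructor that accepts a Runnable, which may not match the real class:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

public static ExecutorService runTask(Runnable task) {
    // wrap the pool's internal worker Runnable, not the task itself
    ThreadFactory factory = r -> new JSFExecutionThread(r);
    ExecutorService executor = Executors.newSingleThreadExecutor(factory);
    executor.execute(task);   // submit the work as a Runnable, never as a pre-built Thread
    return executor;          // the caller can still call shutdown()/shutdownNow()
}
With this, code inside task runs on a JSFExecutionThread, so Thread.currentThread() reports the expected type (under the stated constructor assumption).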

Why is UncaughtExceptionHandler not called by ExecutorService?

I've stumbled upon a problem that can be summarized as follows:
When I create the thread manually (i.e. by instantiating java.lang.Thread) the UncaughtExceptionHandler is called appropriately. However, when I use an ExecutorService with a ThreadFactory, the handler is never invoked. What did I miss?
public class ThreadStudy {
private static final int THREAD_POOL_SIZE = 1;
public static void main(String[] args) {
// create uncaught exception handler
final UncaughtExceptionHandler exceptionHandler = new UncaughtExceptionHandler() {
@Override
public void uncaughtException(Thread t, Throwable e) {
synchronized (this) {
System.err.println("Uncaught exception in thread '" + t.getName() + "': " + e.getMessage());
}
}
};
// create thread factory
ThreadFactory threadFactory = new ThreadFactory() {
@Override
public Thread newThread(Runnable r) {
// System.out.println("creating pooled thread");
final Thread thread = new Thread(r);
thread.setUncaughtExceptionHandler(exceptionHandler);
return thread;
}
};
// create Threadpool
ExecutorService threadPool = Executors.newFixedThreadPool(THREAD_POOL_SIZE, threadFactory);
// create Runnable
Runnable runnable = new Runnable() {
@Override
public void run() {
// System.out.println("A runnable runs...");
throw new RuntimeException("Error in Runnable");
}
};
// create Callable
Callable<Integer> callable = new Callable<Integer>() {
@Override
public Integer call() throws Exception {
// System.out.println("A callable runs...");
throw new Exception("Error in Callable");
}
};
// a) submitting Runnable to threadpool
threadPool.submit(runnable);
// b) submit Callable to threadpool
threadPool.submit(callable);
// c) create a thread for runnable manually
final Thread thread_r = new Thread(runnable, "manually-created-thread");
thread_r.setUncaughtExceptionHandler(exceptionHandler);
thread_r.start();
threadPool.shutdown();
System.out.println("Done.");
}
}
I expect: Three times the message "Uncaught exception..."
I get: The message once (triggered by the manually created thread).
Reproduced with Java 1.6 on Windows 7 and Mac OS X 10.5.
Because the exception does not go uncaught.
The Thread that your ThreadFactory produces is not given your Runnable or Callable directly. Instead, the Runnable that you get is an internal Worker class, for example see ThreadPoolExecutor$Worker. Try System.out.println() on the Runnable given to newThread in your example.
This Worker catches any RuntimeExceptions from your submitted job.
You can get the exception in the ThreadPoolExecutor#afterExecute method.
Exceptions which are thrown by tasks submitted to ExecutorService#submit get wrapped into an ExecutionException and are rethrown by the Future.get() method. This is because the executor considers the exception part of the result of the task.
If you however submit a task via the execute() method, which originates from the Executor interface, the UncaughtExceptionHandler is notified.
Quote from the book Java Concurrency in Practice (page 163); hope this helps:
Somewhat confusingly, exceptions thrown from tasks make it to the uncaught
exception handler only for tasks submitted with execute; for tasks submitted
with submit, any thrown exception, checked or not, is considered to be part of the
task’s return status. If a task submitted with submit terminates with an exception,
it is rethrown by Future.get, wrapped in an ExecutionException.
Here is the example:
public class Main {
public static void main(String[] args){
ThreadFactory factory = new ThreadFactory(){
@Override
public Thread newThread(Runnable r) {
// TODO Auto-generated method stub
final Thread thread =new Thread(r);
thread.setUncaughtExceptionHandler( new Thread.UncaughtExceptionHandler() {
@Override
public void uncaughtException(Thread t, Throwable e) {
// TODO Auto-generated method stub
System.out.println("in exception handler");
}
});
return thread;
}
};
ExecutorService pool=Executors.newSingleThreadExecutor(factory);
pool.execute(new TestTask());
}
private static class TestTask implements Runnable {
@Override
public void run() {
// TODO Auto-generated method stub
throw new RuntimeException();
}
}
}
I use execute to submit the task, and the console outputs "in exception handler".
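For the submit case, the exception can instead be retrieved from the returned Future; a short sketch reusing pool and TestTask from the example above (with java.util.concurrent.Future and ExecutionException imported):
Future<?> future = pool.submit(new TestTask());
try {
    future.get();   // blocks until the task finishes
} catch (ExecutionException e) {
    // the RuntimeException thrown by the task is wrapped here
    System.out.println("caught via Future.get(): " + e.getCause());
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}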
I just browsed through my old questions and thought I might share the solution I implemented in case it helps someone (or I missed a bug).
import java.lang.Thread.UncaughtExceptionHandler;
import java.util.concurrent.Callable;
import java.util.concurrent.Delayed;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.FutureTask;
import java.util.concurrent.RunnableScheduledFuture;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
/**
* @author Mike Herzog, 2009
*/
public class ExceptionHandlingExecuterService extends ScheduledThreadPoolExecutor {
/** My ExceptionHandler */
private final UncaughtExceptionHandler exceptionHandler;
/**
* Encapsulating a task and enable exception handling.
* <p>
* <i>NB:</i> We need this since {@link ExecutorService}s ignore the
* {@link UncaughtExceptionHandler} of the {@link ThreadFactory}.
*
* @param <V> The result type returned by this FutureTask's get method.
*/
private class ExceptionHandlingFutureTask<V> extends FutureTask<V> implements RunnableScheduledFuture<V> {
/** Encapsulated Task */
private final RunnableScheduledFuture<V> task;
/**
* Encapsulate a {@link Callable}.
*
* @param callable
* @param task
*/
public ExceptionHandlingFutureTask(Callable<V> callable, RunnableScheduledFuture<V> task) {
super(callable);
this.task = task;
}
/**
* Encapsulate a {@link Runnable}.
*
* @param runnable
* @param result
* @param task
*/
public ExceptionHandlingFutureTask(Runnable runnable, RunnableScheduledFuture<V> task) {
super(runnable, null);
this.task = task;
}
/*
* (non-Javadoc)
* @see java.util.concurrent.FutureTask#done() The actual exception
* handling magic.
*/
@Override
protected void done() {
// super.done(); // does nothing
try {
get();
} catch (ExecutionException e) {
if (exceptionHandler != null) {
exceptionHandler.uncaughtException(null, e.getCause());
}
} catch (Exception e) {
// never mind cancelation or interruption...
}
}
@Override
public boolean isPeriodic() {
return this.task.isPeriodic();
}
@Override
public long getDelay(TimeUnit unit) {
return task.getDelay(unit);
}
@Override
public int compareTo(Delayed other) {
return task.compareTo(other);
}
}
/**
* @param corePoolSize The number of threads to keep in the pool, even if
* they are idle.
* @param eh Receiver for unhandled exceptions. <i>NB:</i> The thread
* reference will always be <code>null</code>.
*/
public ExceptionHandlingExecuterService(int corePoolSize, UncaughtExceptionHandler eh) {
super(corePoolSize);
this.exceptionHandler = eh;
}
@Override
protected <V> RunnableScheduledFuture<V> decorateTask(Callable<V> callable, RunnableScheduledFuture<V> task) {
return new ExceptionHandlingFutureTask<V>(callable, task);
}
@Override
protected <V> RunnableScheduledFuture<V> decorateTask(Runnable runnable, RunnableScheduledFuture<V> task) {
return new ExceptionHandlingFutureTask<V>(runnable, task);
}
}
In addition to Thilo's answer: I've written a post about this behavior, in case someone wants it explained a bit more verbosely: https://ewirch.github.io/2013/12/a-executor-is-not-a-thread.html.
Here are some excerpts from the article:
A Thread is capable of processing only one Runnable in general. When the Thread.run() method exits, the Thread dies. The ThreadPoolExecutor implements a trick to make a Thread process multiple Runnables: it uses its own Runnable implementation. The threads are started with a Runnable implementation which fetches other Runnables (your Runnables) from the ExecutorService and executes them: ThreadPoolExecutor -> Thread -> Worker -> YourRunnable. When an uncaught exception occurs in your Runnable implementation, it ends up in the finally block of Worker.run(). In this finally block the Worker class tells the ThreadPoolExecutor that it “finished” the work. The exception has not yet arrived at the Thread class, but the ThreadPoolExecutor has already registered the worker as idle.
And here’s where the fun begins. The awaitTermination() method will be invoked when all Runnables have been passed to the Executor. This happens so quickly that probably none of the Runnables has finished its work yet. A Worker will switch to “idle” if an exception occurs, before the exception reaches the Thread class. If the situation is similar for the other threads (or if they finished their work), all Workers signal “idle” and awaitTermination() returns. The main thread reaches the code line where it checks the size of the collected exception list. And this may happen before any (or some) of the Threads had the chance to call the UncaughtExceptionHandler. It depends on the order of execution whether, and how many, exceptions will be added to the list of uncaught exceptions before the main thread reads it.
A very unexpected behavior. But I won’t leave you without a working solution. So let’s make it work.
We are lucky that the ThreadPoolExecutor class was designed for extensibility. There is an empty protected method afterExecute(Runnable r, Throwable t). This is invoked directly after the run() method of our Runnable, before the worker signals that it has finished the work. The correct solution is to extend the ThreadPoolExecutor to handle uncaught exceptions:
public class ExceptionAwareThreadPoolExecutor extends ThreadPoolExecutor {
private final List<Throwable> uncaughtExceptions =
Collections.synchronizedList(new LinkedList<Throwable>());
@Override
protected void afterExecute(final Runnable r, final Throwable t) {
if (t != null) uncaughtExceptions.add(t);
}
public List<Throwable> getUncaughtExceptions() {
return Collections.unmodifiableList(uncaughtExceptions);
}
}
There is a little bit of a workaround.
In your run method, you can catch every exception, and later on do something like this (ex: in a finally block)
Thread.getDefaultUncaughtExceptionHandler().uncaughtException(Thread.currentThread(), ex);
//or, same effect:
Thread.currentThread().getUncaughtExceptionHandler().uncaughtException(Thread.currentThread(), ex);
This will "ensure a firing" of the current exception as thrown to your uncoughtExceptionHandler (or to the defualt uncought exception handler).
You can always rethrow catched exceptions for pool worker.
