How to use MDC with thread pools in Java?

In our software we extensively use MDC to track things like session IDs and user names for web requests. This works fine while running in the original thread.
However, there are a lot of things that need to be processed in the background. For that we use the java.util.concurrent.ThreadPoolExecutor and java.util.Timer classes along with some self-rolled async execution services. All these services manage their own thread pools.
This is what Logback's manual has to say about using MDC in such an environment:
A copy of the mapped diagnostic context can not always be inherited by worker threads from the initiating thread. This is the case when java.util.concurrent.Executors is used for thread management. For instance, newCachedThreadPool method creates a ThreadPoolExecutor and like other thread pooling code, it has intricate thread creation logic.
In such cases, it is recommended that MDC.getCopyOfContextMap() is invoked on the original (master) thread before submitting a task to the executor. When the task runs, as its first action, it should invoke MDC.setContextMap() to associate the stored copy of the original MDC values with the new Executor-managed thread.
This would be fine, but it is very easy to forget to add those calls, and there is no easy way to recognize the problem until it is too late. The only sign with Log4j is missing MDC info in the logs, and with Logback you get stale MDC info (since a thread in the thread pool inherits its MDC from the first task that was run on it). Both are serious problems in a production system.
I don't see anything special about our situation, yet I could not find much about this problem on the web. Apparently this is not something many people bump up against, so there must be a way to avoid it. What are we doing wrong here?

Yes, this is a common problem I've run into as well. There are a few workarounds (like manually setting it, as described), but ideally you want a solution that
Sets the MDC consistently;
Avoids tacit bugs where the MDC is incorrect but you don't know it; and
Minimizes changes to how you use thread pools (e.g. subclassing Callable with MyCallable everywhere, or similar ugliness).
Here's a solution that I use that meets these three needs. Code should be self-explanatory.
(As a side note, this executor can be created and fed to Guava's MoreExecutors.listeningDecorator(), if you use Guava's ListenableFuture.)
import org.slf4j.MDC;
import java.util.Map;
import java.util.concurrent.*;
/**
* An SLF4J MDC-compatible {@link ThreadPoolExecutor}.
* <p/>
* In general, MDC is used to store diagnostic information (e.g. a user's session id) in per-thread variables, to facilitate
* logging. However, although MDC data is passed to thread children, this doesn't work when threads are reused in a
* thread pool. This is a drop-in replacement for {@link ThreadPoolExecutor} that sets MDC data appropriately before each task.
* <p/>
* Created by jlevy.
* Date: 6/14/13
*/
public class MdcThreadPoolExecutor extends ThreadPoolExecutor {
final private boolean useFixedContext;
final private Map<String, Object> fixedContext;
/**
* Pool where task threads take MDC from the submitting thread.
*/
public static MdcThreadPoolExecutor newWithInheritedMdc(int corePoolSize, int maximumPoolSize, long keepAliveTime,
TimeUnit unit, BlockingQueue<Runnable> workQueue) {
return new MdcThreadPoolExecutor(null, corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
}
/**
* Pool where task threads take fixed MDC from the thread that creates the pool.
*/
@SuppressWarnings("unchecked")
public static MdcThreadPoolExecutor newWithCurrentMdc(int corePoolSize, int maximumPoolSize, long keepAliveTime,
TimeUnit unit, BlockingQueue<Runnable> workQueue) {
return new MdcThreadPoolExecutor(MDC.getCopyOfContextMap(), corePoolSize, maximumPoolSize, keepAliveTime, unit,
workQueue);
}
/**
* Pool where task threads always have a specified, fixed MDC.
*/
public static MdcThreadPoolExecutor newWithFixedMdc(Map<String, Object> fixedContext, int corePoolSize,
int maximumPoolSize, long keepAliveTime, TimeUnit unit,
BlockingQueue<Runnable> workQueue) {
return new MdcThreadPoolExecutor(fixedContext, corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
}
private MdcThreadPoolExecutor(Map<String, Object> fixedContext, int corePoolSize, int maximumPoolSize,
long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue) {
super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
this.fixedContext = fixedContext;
useFixedContext = (fixedContext != null);
}
@SuppressWarnings("unchecked")
private Map<String, Object> getContextForTask() {
return useFixedContext ? fixedContext : MDC.getCopyOfContextMap();
}
/**
* All executions will have MDC injected. {@code ThreadPoolExecutor}'s submission methods ({@code submit()} etc.)
* all delegate to this.
*/
@Override
public void execute(Runnable command) {
super.execute(wrap(command, getContextForTask()));
}
public static Runnable wrap(final Runnable runnable, final Map<String, Object> context) {
return new Runnable() {
@Override
public void run() {
Map previous = MDC.getCopyOfContextMap();
if (context == null) {
MDC.clear();
} else {
MDC.setContextMap(context);
}
try {
runnable.run();
} finally {
if (previous == null) {
MDC.clear();
} else {
MDC.setContextMap(previous);
}
}
}
};
}
}
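A brief usage sketch of the class above (the pool parameters, the "sessionId" key, and the println are illustrative assumptions, not part of the original answer):
MdcThreadPoolExecutor pool = MdcThreadPoolExecutor.newWithInheritedMdc(
        2, 4, 30, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
MDC.put("sessionId", "abc-123");
pool.execute(() -> System.out.println(MDC.get("sessionId"))); // prints abc-123 on the worker thread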

We have run into a similar problem. You might want to extend ThreadPoolExecutor and override the beforeExecute/afterExecute methods to make the MDC calls you need around each task; see the sketch below.
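A minimal sketch of that idea (the class name and pool sizing are assumptions, not from the answer above). Note that beforeExecute/afterExecute run on the worker thread and cannot see the submitter's MDC by themselves, so this sketch only uses afterExecute to clear stale context between tasks; copying the submitter's MDC still requires wrapping at submit time, as in the accepted answer.
import org.slf4j.MDC;
import java.util.concurrent.*;

public class MdcClearingThreadPoolExecutor extends ThreadPoolExecutor {

    public MdcClearingThreadPoolExecutor(int poolSize) {
        super(poolSize, poolSize, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        MDC.clear(); // drop whatever MDC the finished task left on this worker thread
    }
}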

IMHO the best solution is to:
use ThreadPoolTaskExecutor
implement your own TaskDecorator
use it: executor.setTaskDecorator(new LoggingTaskDecorator());
The decorator can look like this:
private final class LoggingTaskDecorator implements TaskDecorator {
@Override
public Runnable decorate(Runnable task) {
// web thread
Map<String, String> webThreadContext = MDC.getCopyOfContextMap();
return () -> {
// work thread
try {
// TODO: is this thread safe?
MDC.setContextMap(webThreadContext);
task.run();
} finally {
MDC.clear();
}
};
}
}
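A small programmatic wiring sketch for the decorator above (the pool size and the "userId" key are illustrative, and it assumes the decorator class is visible from the calling code):
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setCorePoolSize(4);
executor.setTaskDecorator(new LoggingTaskDecorator());
executor.initialize();
executor.submit(() -> System.out.println(MDC.get("userId"))); // sees the web thread's MDC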

This is how I do it with fixed thread pools and executors:
ExecutorService executor = Executors.newFixedThreadPool(4);
Map<String, String> mdcContextMap = MDC.getCopyOfContextMap();
In the threading part:
executor.submit(() -> {
MDC.setContextMap(mdcContextMap);
// my stuff
});

If you face this problem in a Spring environment where you run tasks with the @Async annotation, you can decorate the tasks using the TaskDecorator approach.
A sample of how to do it is provided here:
Spring 4.3: Using a TaskDecorator to copy MDC data to @Async threads
I faced this issue and the article above helped me tackle it, so I am sharing it here.
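A minimal sketch of the approach described in that article (Spring 4.3+ assumed; the class name and pool size are illustrative):
import java.util.Map;
import java.util.concurrent.Executor;
import org.slf4j.MDC;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableAsync
public class AsyncMdcConfig {

    @Bean
    public Executor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setTaskDecorator(runnable -> {
            Map<String, String> context = MDC.getCopyOfContextMap(); // captured on the web thread
            return () -> {
                try {
                    if (context != null) {
                        MDC.setContextMap(context);
                    }
                    runnable.run();
                } finally {
                    MDC.clear();
                }
            };
        });
        executor.initialize();
        return executor;
    }
}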

Similar to the previously posted solutions, the newTaskFor methods for Runnable and Callable can be overridden in order to wrap the argument (see the accepted solution) when creating the RunnableFuture; a sketch follows.
Note: consequently, the executor service's submit method must be called instead of the execute method, since only submit goes through newTaskFor.
For ScheduledThreadPoolExecutor, the decorateTask methods would be overridden instead.
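A minimal sketch of that approach (the class name, pool parameters, and the MDC-copying logic are assumptions modelled on the accepted answer; a recent SLF4J is assumed, where getCopyOfContextMap() returns Map<String, String>):
import org.slf4j.MDC;
import java.util.Map;
import java.util.concurrent.*;

public class MdcTaskForExecutor extends ThreadPoolExecutor {

    public MdcTaskForExecutor(int poolSize) {
        super(poolSize, poolSize, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
    }

    @Override
    protected <T> RunnableFuture<T> newTaskFor(Callable<T> callable) {
        Map<String, String> context = MDC.getCopyOfContextMap(); // captured on the submitting thread
        return super.newTaskFor(() -> {
            Map<String, String> previous = MDC.getCopyOfContextMap();
            if (context == null) MDC.clear(); else MDC.setContextMap(context);
            try {
                return callable.call();
            } finally {
                if (previous == null) MDC.clear(); else MDC.setContextMap(previous);
            }
        });
    }

    @Override
    protected <T> RunnableFuture<T> newTaskFor(Runnable runnable, T value) {
        return newTaskFor(Executors.callable(runnable, value)); // reuse the Callable wrapper
    }
}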

Another variation, similar to existing answers here, is to implement ExecutorService and allow a delegate to be passed to it. Using generics, it can still expose the actual delegate in case one wants to get some stats (as long as no other modification methods are used).
Reference code:
https://github.com/project-ncl/pnc/blob/master/common/src/main/java/org/jboss/pnc/common/concurrent/MDCThreadPoolExecutor.java
https://github.com/project-ncl/pnc/blob/master/common/src/main/java/org/jboss/pnc/common/concurrent/MDCWrappers.java
import org.slf4j.MDC;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.concurrent.*;
import java.util.function.Consumer;

public class MDCExecutorService<D extends ExecutorService> implements ExecutorService {
private final D delegate;
public MDCExecutorService(D delegate) {
this.delegate = delegate;
}
@Override
public void shutdown() {
delegate.shutdown();
}
@Override
public List<Runnable> shutdownNow() {
return delegate.shutdownNow();
}
@Override
public boolean isShutdown() {
return delegate.isShutdown();
}
@Override
public boolean isTerminated() {
return delegate.isTerminated();
}
@Override
public boolean awaitTermination(long timeout, TimeUnit unit) throws InterruptedException {
return delegate.awaitTermination(timeout, unit);
}
@Override
public <T> Future<T> submit(Callable<T> task) {
return delegate.submit(wrap(task));
}
@Override
public <T> Future<T> submit(Runnable task, T result) {
return delegate.submit(wrap(task), result);
}
@Override
public Future<?> submit(Runnable task) {
return delegate.submit(wrap(task));
}
@Override
public <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks) throws InterruptedException {
return delegate.invokeAll(wrapCollection(tasks));
}
@Override
public <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks, long timeout, TimeUnit unit) throws InterruptedException {
return delegate.invokeAll(wrapCollection(tasks), timeout, unit);
}
@Override
public <T> T invokeAny(Collection<? extends Callable<T>> tasks) throws InterruptedException, ExecutionException {
return delegate.invokeAny(wrapCollection(tasks));
}
@Override
public <T> T invokeAny(Collection<? extends Callable<T>> tasks, long timeout, TimeUnit unit) throws InterruptedException, ExecutionException, TimeoutException {
return delegate.invokeAny(wrapCollection(tasks), timeout, unit);
}
@Override
public void execute(Runnable command) {
delegate.execute(wrap(command));
}
public D getDelegate() {
return delegate;
}
/* Copied from https://github.com/project-ncl/pnc/blob/master/common/src/main/java/org/jboss/pnc/common/concurrent/MDCWrappers.java */
private static Runnable wrap(final Runnable runnable) {
final Map<String, String> context = MDC.getCopyOfContextMap();
return () -> {
Map previous = MDC.getCopyOfContextMap();
if (context == null) {
MDC.clear();
} else {
MDC.setContextMap(context);
}
try {
runnable.run();
} finally {
if (previous == null) {
MDC.clear();
} else {
MDC.setContextMap(previous);
}
}
};
}
private static <T> Callable<T> wrap(final Callable<T> callable) {
final Map<String, String> context = MDC.getCopyOfContextMap();
return () -> {
Map previous = MDC.getCopyOfContextMap();
if (context == null) {
MDC.clear();
} else {
MDC.setContextMap(context);
}
try {
return callable.call();
} finally {
if (previous == null) {
MDC.clear();
} else {
MDC.setContextMap(previous);
}
}
};
}
private static <T> Consumer<T> wrap(final Consumer<T> consumer) {
final Map<String, String> context = MDC.getCopyOfContextMap();
return (t) -> {
Map previous = MDC.getCopyOfContextMap();
if (context == null) {
MDC.clear();
} else {
MDC.setContextMap(context);
}
try {
consumer.accept(t);
} finally {
if (previous == null) {
MDC.clear();
} else {
MDC.setContextMap(previous);
}
}
};
}
private static <T> Collection<Callable<T>> wrapCollection(Collection<? extends Callable<T>> tasks) {
Collection<Callable<T>> wrapped = new ArrayList<>();
for (Callable<T> task : tasks) {
wrapped.add(wrap(task));
}
return wrapped;
}
}

Related

How to wait until a space becomes available in a threadpool [duplicate]

This question already has answers here:
ThreadPoolExecutor Block When its Queue Is Full?
(10 answers)
I am trying to code a solution in which a single thread produces I/O-intensive tasks that can be performed in parallel. Each task has significant in-memory data, so I want to be able to limit the number of tasks that are pending at any moment.
If I create ThreadPoolExecutor like this:
ThreadPoolExecutor executor = new ThreadPoolExecutor(numWorkerThreads, numWorkerThreads,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>(maxQueue));
Then the executor.submit(callable) throws RejectedExecutionException when the queue fills up and all the threads are already busy.
What can I do to make executor.submit(callable) block when the queue is full and all threads are busy?
EDIT:
I tried this:
executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
And it somewhat achieves the effect I want, but in an inelegant way: rejected tasks are run in the calling thread, which blocks the calling thread from submitting more.
EDIT: (5 years after asking the question)
To anyone reading this question and its answers, please don't take the accepted answer as one correct solution. Please read through all answers and comments.
I have done this same thing. The trick is to create a BlockingQueue where the offer() method is really a put(). (You can use whatever base BlockingQueue implementation you want.)
public class LimitedQueue<E> extends LinkedBlockingQueue<E>
{
public LimitedQueue(int maxSize)
{
super(maxSize);
}
@Override
public boolean offer(E e)
{
// turn offer() and add() into blocking calls (unless interrupted)
try {
put(e);
return true;
} catch(InterruptedException ie) {
Thread.currentThread().interrupt();
}
return false;
}
}
Note that this only works for thread pools where corePoolSize == maxPoolSize, so be careful there (see comments).
The currently accepted answer has a potentially significant problem - it changes the behavior of ThreadPoolExecutor.execute such that if you have a corePoolSize < maxPoolSize, the ThreadPoolExecutor logic will never add additional workers beyond the core.
From ThreadPoolExecutor.execute(Runnable):
if (isRunning(c) && workQueue.offer(command)) {
int recheck = ctl.get();
if (! isRunning(recheck) && remove(command))
reject(command);
else if (workerCountOf(recheck) == 0)
addWorker(null, false);
}
else if (!addWorker(command, false))
reject(command);
Specifically, that last 'else' block will never be hit.
A better alternative is to do something similar to what OP is already doing - use a RejectedExecutionHandler to do the same put logic:
public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
try {
if (!executor.isShutdown()) {
executor.getQueue().put(r);
}
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new RejectedExecutionException("Executor was interrupted while the task was waiting to put on work queue", e);
}
}
There are some things to watch out for with this approach, as pointed out in the comments (referring to this answer):
If corePoolSize==0, then there is a race condition where all threads in the pool may die before the task is visible
Using an executor implementation that wraps the queued tasks (not applicable to ThreadPoolExecutor) will result in issues unless the handler also wraps them the same way.
Keeping those gotchas in mind, this solution will work for most typical ThreadPoolExecutors, and will properly handle the case where corePoolSize < maxPoolSize.
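For illustration, this is roughly how such a handler could be wired up (the pool and queue sizes are assumptions):
ThreadPoolExecutor executor = new ThreadPoolExecutor(
        2, 8, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(100));
executor.setRejectedExecutionHandler((r, e) -> {
    try {
        if (!e.isShutdown()) {
            e.getQueue().put(r); // block until space becomes available
        }
    } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        throw new RejectedExecutionException("Interrupted while waiting to enqueue task", ie);
    }
});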
Here is how I solved this on my end:
(note: this solution does block the thread that submits the Callable, so it prevents RejectedExecutionException from being thrown)
public class BoundedExecutor extends ThreadPoolExecutor{
private final Semaphore semaphore;
public BoundedExecutor(int bound) {
super(bound, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
semaphore = new Semaphore(bound);
}
/**Submits task to execution pool, but blocks while number of running threads
* has reached the bound limit
*/
public <T> Future<T> submitButBlockIfFull(final Callable<T> task) throws InterruptedException{
semaphore.acquire();
return submit(task);
}
@Override
protected void afterExecute(Runnable r, Throwable t) {
super.afterExecute(r, t);
semaphore.release();
}
}
How about using the CallerBlocksPolicy class if you are using spring-integration?
This class implements the RejectedExecutionHandler interface, which is a handler for tasks that cannot be executed by a ThreadPoolExecutor.
You can use this policy like this.
executor.setRejectedExecutionHandler(new CallerBlocksPolicy());
The main difference between CallerBlocksPolicy and CallerRunsPolicy is whether it blocks or runs the task in the caller thread.
Please refer to this code.
I know this is an old question, but I had a similar issue: new tasks were produced very quickly, and if there were too many of them an OutOfMemoryError occurred because the existing tasks were not completed fast enough.
In my case Callables are submitted and I need the result, hence I need to store all the Futures returned by executor.submit(). My solution was to put the Futures into a BlockingQueue with a maximum size. Once that queue is full, no more tasks are generated until some are completed (elements removed from the queue). In pseudo-code:
final ExecutorService executor = Executors.newFixedThreadPool(numWorkerThreads);
final LinkedBlockingQueue<Future> futures = new LinkedBlockingQueue<>(maxQueueSize);
try {
Thread taskGenerator = new Thread() {
@Override
public void run() {
while (reader.hasNext()) {
Callable task = generateTask(reader.next());
Future future = executor.submit(task);
try {
// if queue is full blocks until a task
// is completed and hence no future tasks are submitted.
futures.put(future);
} catch (InterruptedException ex) {
Thread.currentThread().interrupt();
}
}
executor.shutdown();
}
};
taskGenerator.start();
// read from queue as long as task are being generated
// or while Queue has elements in it
while (taskGenerator.isAlive()
|| !futures.isEmpty()) {
Future future = futures.take();
// do something
}
} catch (InterruptedException ex) {
Thread.currentThread().interrupt();
} catch (ExecutionException ex) {
throw new MyException(ex);
} finally {
executor.shutdownNow();
}
I had a similar problem and I implemented it by using the beforeExecute/afterExecute hooks from ThreadPoolExecutor:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
/**
* Blocks current task execution if there is not enough resources for it.
* Maximum task count usage controlled by maxTaskCount property.
*/
public class BlockingThreadPoolExecutor extends ThreadPoolExecutor {
private final ReentrantLock taskLock = new ReentrantLock();
private final Condition unpaused = taskLock.newCondition();
private final int maxTaskCount;
private volatile int currentTaskCount;
public BlockingThreadPoolExecutor(int corePoolSize, int maximumPoolSize,
long keepAliveTime, TimeUnit unit,
BlockingQueue<Runnable> workQueue, int maxTaskCount) {
super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
this.maxTaskCount = maxTaskCount;
}
/**
* Executes task if there is enough system resources for it. Otherwise
* waits.
*/
@Override
protected void beforeExecute(Thread t, Runnable r) {
super.beforeExecute(t, r);
taskLock.lock();
try {
// Wait while we do not have enough capacity for this job
while (currentTaskCount >= maxTaskCount) {
try {
unpaused.await();
} catch (InterruptedException e) {
t.interrupt();
}
}
currentTaskCount++;
} finally {
taskLock.unlock();
}
}
/**
* Signalling that one more task is welcome
*/
@Override
protected void afterExecute(Runnable r, Throwable t) {
super.afterExecute(r, t);
taskLock.lock();
try {
currentTaskCount--;
unpaused.signalAll();
} finally {
taskLock.unlock();
}
}
}
This should be good enough for you. Btw, the original implementation was task-size based, because one task could be 100 times larger than another, and submitting two huge tasks was killing the box, while running one big one and plenty of small ones was okay. If your I/O-intensive tasks are roughly the same size you could use this class; otherwise just let me know and I'll post the size-based implementation.
P.S. You will want to check the ThreadPoolExecutor javadoc. It's a really nice user guide from Doug Lea about how it can easily be customized.
I have implemented a solution following the decorator pattern and using a semaphore to control the number of executed tasks. You can use it with any Executor and:
Specify the maximum of ongoing tasks
Specify the maximum timeout to wait for a task execution permit (if the timeout passes and no permit is acquired, a RejectedExecutionException is thrown)
import static java.util.concurrent.TimeUnit.MILLISECONDS;
import java.time.Duration;
import java.util.Objects;
import java.util.concurrent.Executor;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.Semaphore;
import javax.annotation.Nonnull;
public class BlockingOnFullQueueExecutorDecorator implements Executor {
private static final class PermitReleasingDecorator implements Runnable {
@Nonnull
private final Runnable delegate;
@Nonnull
private final Semaphore semaphore;
private PermitReleasingDecorator(@Nonnull final Runnable task, @Nonnull final Semaphore semaphoreToRelease) {
this.delegate = task;
this.semaphore = semaphoreToRelease;
}
@Override
public void run() {
try {
this.delegate.run();
}
finally {
// however execution goes, release permit for next task
this.semaphore.release();
}
}
@Override
public final String toString() {
return String.format("%s[delegate='%s']", getClass().getSimpleName(), this.delegate);
}
}
@Nonnull
private final Semaphore taskLimit;
@Nonnull
private final Duration timeout;
@Nonnull
private final Executor delegate;
public BlockingOnFullQueueExecutorDecorator(@Nonnull final Executor executor, final int maximumTaskNumber, @Nonnull final Duration maximumTimeout) {
this.delegate = Objects.requireNonNull(executor, "'executor' must not be null");
if (maximumTaskNumber < 1) {
throw new IllegalArgumentException(String.format("At least one task must be permitted, not '%d'", maximumTaskNumber));
}
this.timeout = Objects.requireNonNull(maximumTimeout, "'maximumTimeout' must not be null");
if (this.timeout.isNegative()) {
throw new IllegalArgumentException("'maximumTimeout' must not be negative");
}
this.taskLimit = new Semaphore(maximumTaskNumber);
}
@Override
public final void execute(final Runnable command) {
Objects.requireNonNull(command, "'command' must not be null");
try {
// attempt to acquire permit for task execution
if (!this.taskLimit.tryAcquire(this.timeout.toMillis(), MILLISECONDS)) {
throw new RejectedExecutionException(String.format("Executor '%s' busy", this.delegate));
}
}
catch (final InterruptedException e) {
// restore interrupt status
Thread.currentThread().interrupt();
throw new IllegalStateException(e);
}
this.delegate.execute(new PermitReleasingDecorator(command, this.taskLimit));
}
@Override
public final String toString() {
return String.format("%s[availablePermits='%s',timeout='%s',delegate='%s']", getClass().getSimpleName(), this.taskLimit.availablePermits(),
this.timeout, this.delegate);
}
}
I think it is as simple as using an ArrayBlockingQueue instead of a LinkedBlockingQueue.
Ignore me... that's totally wrong. ThreadPoolExecutor calls Queue#offer, not put, which is what would have the effect you require.
You could extend ThreadPoolExecutor and provide an implementation of execute(Runnable) that calls put in place of offer.
That doesn't seem like a completely satisfactory answer I'm afraid.
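A rough sketch of that idea (class name and sizes are assumptions). Because execute() never calls super.execute(), the worker threads have to be pre-started, and the pool should be fixed-size (corePoolSize == maximumPoolSize) since no extra workers will ever be created:
import java.util.concurrent.*;

public class PutBasedExecutor extends ThreadPoolExecutor {

    public PutBasedExecutor(int poolSize, int queueCapacity) {
        super(poolSize, poolSize, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(queueCapacity));
        prestartAllCoreThreads(); // workers must already exist, since execute() bypasses addWorker
    }

    @Override
    public void execute(Runnable command) {
        try {
            getQueue().put(command); // blocks when the queue is full
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RejectedExecutionException("Interrupted while waiting for queue space", e);
        }
    }
}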

How to manage return values of an unknown amount of Callables using ExecutorService?

I want to create a singleton-ExecutorService with a fixed threadpool size. Another thread will feed that ExecutorService with Callables and I want to parse the result of the Callables (optimally) immediately after the execution is done.
I am really uncertain how to implement this properly.
My initial thought was a method in the singleton-ES, which adds a Callable to the ExecutorService via "submit(callable)" and stores the resulting Future inside a HashMap or ArrayList inside the singleton. Another thread would check the Futures for results within a given interval.
But somehow this solution does not "feel right" and I didn't find a solution for this usecase elsewhere, so I am asking you guys before I code something I regret later.
How would you approach this problem?
I am looking forward to your responses!
import java.util.concurrent.*;
public class PostProcExecutor extends ThreadPoolExecutor {
// adjust the constructor to your desired threading policy
public PostProcExecutor(int corePoolSize, int maximumPoolSize,
long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue) {
super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
}
@Override
protected <T> RunnableFuture<T> newTaskFor(Callable<T> callable) {
return new FutureTask<T>(callable) {
@Override
protected void done()
{
if(!isCancelled()) try {
processResult(get());
} catch(InterruptedException ex) {
throw new AssertionError("on complete task", ex);
} catch(ExecutionException ex) {
// no result available
}
}
};
}
protected void processResult(Object o)
{
System.out.println("Result "+o);// do your post-processing here
}
}
Use an ExecutorCompletionService. This way you can get the result of the Callable(s) as soon as they are ready. The take method of the completion service blocks, waiting for each task to be done.
Here is an example from the java doc:
void solve(Executor e,
Collection<Callable<Result>> solvers)
throws InterruptedException, ExecutionException {
CompletionService<Result> ecs
= new ExecutorCompletionService<Result>(e);
for (Callable<Result> s : solvers)
ecs.submit(s);
int n = solvers.size();
for (int i = 0; i < n; ++i) {
Result r = ecs.take().get();
if (r != null)
use(r);
}
}
You can use MoreExecutors.listeningDecorator(Executors.newFixedThreadPool(THREAD_NUMBER)) to create the service, and use Guava's ListenableFuture to process the result immediately; you can also write your own listener for the future's result.
ListeningExecutorService service = MoreExecutors.listeningDecorator(Executors.newFixedThreadPool(10));
ListenableFuture<Explosion> explosion = service.submit(new Callable<Explosion>() {
public Explosion call() {
return pushBigRedButton();
}
});
Futures.addCallback(explosion, new FutureCallback<Explosion>() {
// we want this handler to run immediately after we push the big red button!
public void onSuccess(Explosion explosion) {
walkAwayFrom(explosion);
}
public void onFailure(Throwable thrown) {
battleArchNemesis(); // escaped the explosion!
}
});
You can use ExecutorCompletionService to implement it.
The following steps may help.
Determine the number of available processors using Runtime.getRuntime().availableProcessors(). Let's keep the value in the variable availableProcessors.
Initialize the ExecutorService, e.g. service = Executors.newFixedThreadPool(availableProcessors).
Initialize the ExecutorCompletionService; assuming the result from each Callable is an Integer array (Integer[]): ExecutorCompletionService<Integer[]> completionService = new ExecutorCompletionService<>(service).
Use completionService.submit to submit each task.
Use completionService.take().get() to get each result of a task (callable).
Based on the above steps you can get the results of all the callables and apply whatever logic you need; a consolidated sketch follows.
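A consolidated sketch of those steps (the Integer[] result type follows the example above; process() is a hypothetical post-processing method, and the classes are from java.util.concurrent):
void runAll(List<Callable<Integer[]>> tasks) throws InterruptedException, ExecutionException {
    int availableProcessors = Runtime.getRuntime().availableProcessors();
    ExecutorService service = Executors.newFixedThreadPool(availableProcessors);
    CompletionService<Integer[]> completionService = new ExecutorCompletionService<>(service);
    for (Callable<Integer[]> task : tasks) {
        completionService.submit(task);
    }
    for (int i = 0; i < tasks.size(); i++) {
        Integer[] result = completionService.take().get(); // blocks until the next task completes
        process(result); // hypothetical post-processing
    }
    service.shutdown();
}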

Java Executor partial shutdown

Let's say we have one classic Executor in the application. Many parts of the application use this executor for computations; each computation can be cancelled, and for that I can call shutdown() or shutdownNow() on the Executor.
But I want to shut down only part of the tasks in the Executor. Sadly, I don't have access to the Future objects; they are a private part of the computation implementation (the computation is actually backed by the actor framework jetlang).
I want something like an Executor wrapper that I could pass to the computation and that would be backed by the real Executor. Something like this:
// main application executor
Executor applicationExecutor = Executors.newCachedThreadPool();
// starting computation
Executor computationExecutor = new ExecutorWrapper(applicationExecutor);
Computation computation = new Computation(computationExecutor);
computation.start();
// cancelling computation
computation.cancel();
// shutting down only computation tasks
computationExecutor.shutdown();
// applicationExecutor remains running and happy
Or any other idea?
For those who want a happy ending: here is the final solution, partially based on Ivan Sopov's answer. Luckily, jetlang only uses the Executor interface (not ExecutorService) to run its tasks, so I made a wrapper class that supports stopping only the tasks created through this wrapper.
static class StoppableExecutor implements Executor {
final ExecutorService executor;
final List<Future<?>> futures = Lists.newArrayList();
volatile boolean stopped;
public StoppableExecutor(ExecutorService executor) {
this.executor = executor;
}
void stop() {
this.stopped = true;
synchronized (futures) {
for (Iterator<Future<?>> iterator = futures.iterator(); iterator.hasNext();) {
Future<?> future = iterator.next();
if (!future.isDone() && !future.isCancelled()) {
System.out.println(future.cancel(true));
}
}
futures.clear();
}
}
@Override
public void execute(Runnable command) {
if (!stopped) {
synchronized (futures) {
Future<?> newFuture = executor.submit(command);
for (Iterator<Future<?>> iterator = futures.iterator(); iterator.hasNext();) {
Future<?> future = iterator.next();
if (future.isDone() || future.isCancelled())
iterator.remove();
}
futures.add(newFuture);
}
}
}
}
Using this is pretty straightforward:
ExecutorService service = Executors.newFixedThreadPool(5);
StoppableExecutor executor = new StoppableExecutor(service);
// doing some actor stuff with executor instance
PoolFiberFactory factory = new PoolFiberFactory(executor);
// stopping tasks only created on executor instance
// executor service is happily running other tasks
executor.stop();
That's all. Works nice.
How about having your Computation be a Runnable (and run using the provided Executor) until a boolean flag is set? Something along the lines of:
public class Computation implements Runnable
{
    private volatile boolean stopped;
    public void run() {
        while (!stopped) {
            // do magic
        }
    }
    public void cancel() { stopped = true; }
}
What you are doing is essentially stopping the thread. However, it does not get garbage-collected, but is instead re-used because it is managed by the Executor. Look up "what is the proper way to stop a thread?".
EDIT: please note the code above is quite primitive in the sense it assumes the body of the while loop takes a short amount of time. If it does not, the check will be executed infrequently and you will notice a delay between canceling a task and it actually stopping.
Something like this?
You may do partial shutdown:
for (Future<?> future : %ExecutorServiceWrapperInstance%.getFutures()) {
if (%CONDITION%) {
future.cancel(true);
}
}
Here is the code:
package com.sopovs.moradanen;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
public class ExecutorServiceWrapper implements ExecutorService {
private final ExecutorService realService;
private List<Future<?>> futures = new ArrayList<Future<?>>();
public ExecutorServiceWrapper(ExecutorService realService) {
this.realService = realService;
}
@Override
public void execute(Runnable command) {
realService.execute(command);
}
@Override
public void shutdown() {
realService.shutdown();
}
@Override
public List<Runnable> shutdownNow() {
return realService.shutdownNow();
}
@Override
public boolean isShutdown() {
return realService.isShutdown();
}
@Override
public boolean isTerminated() {
return realService.isTerminated();
}
@Override
public boolean awaitTermination(long timeout, TimeUnit unit) throws InterruptedException {
return realService.awaitTermination(timeout, unit);
}
@Override
public <T> Future<T> submit(Callable<T> task) {
Future<T> future = realService.submit(task);
synchronized (this) {
futures.add(future);
}
return future;
}
public synchronized List<Future<?>> getFutures() {
return Collections.unmodifiableList(futures);
}
@Override
public <T> Future<T> submit(Runnable task, T result) {
Future<T> future = realService.submit(task, result);
synchronized (this) {
futures.add(future);
}
return future;
}
@Override
public Future<?> submit(Runnable task) {
Future<?> future = realService.submit(task);
synchronized (this) {
futures.add(future);
}
return future;
}
@Override
public <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks) throws InterruptedException {
List<Future<T>> future = realService.invokeAll(tasks);
synchronized (this) {
futures.addAll(future);
}
return future;
}
@Override
public <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks, long timeout, TimeUnit unit)
throws InterruptedException {
List<Future<T>> future = realService.invokeAll(tasks, timeout, unit);
synchronized (this) {
futures.addAll(future);
}
return future;
}
@Override
public <T> T invokeAny(Collection<? extends Callable<T>> tasks) throws InterruptedException, ExecutionException {
// don't know what to do here; maybe this method is not needed by the framework,
// then just throw new NotImplementedException();
return realService.invokeAny(tasks);
}
@Override
public <T> T invokeAny(Collection<? extends Callable<T>> tasks, long timeout, TimeUnit unit)
throws InterruptedException, ExecutionException, TimeoutException {
// don't know what to do here; maybe this method is not needed by the framework,
// then just throw new NotImplementedException();
return realService.invokeAny(tasks, timeout, unit);
}
}

Is there an ExecutorService that uses the current thread?

What I am after is a compatible way to configure the use of a thread pool or not. Ideally the rest of the code should not be impacted at all. I could use a thread pool with 1 thread but that isn't quite what I want. Any ideas?
ExecutorService es = threads == 0 ? new CurrentThreadExecutor() : Executors.newFixedThreadPool(threads);
// es.execute / es.submit / new ExecutorCompletionService(es) etc
Java 8 style:
Executor e = Runnable::run;
You can use Guava's MoreExecutors.newDirectExecutorService(), or MoreExecutors.directExecutor() if you don't need an ExecutorService.
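For example (a minimal sketch; Guava 18+ assumed for newDirectExecutorService):
ExecutorService direct = MoreExecutors.newDirectExecutorService();
Future<String> f = direct.submit(() -> "ran on " + Thread.currentThread().getName());
// f is already complete here, because the task ran in the calling thread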
If including Guava is too heavy-weight, you can implement something almost as good:
public final class SameThreadExecutorService extends ThreadPoolExecutor {
private final CountDownLatch signal = new CountDownLatch(1);
private SameThreadExecutorService() {
super(1, 1, 0, TimeUnit.DAYS, new SynchronousQueue<Runnable>(),
new ThreadPoolExecutor.CallerRunsPolicy());
}
@Override public void shutdown() {
super.shutdown();
signal.countDown();
}
public static ExecutorService getInstance() {
return SingletonHolder.instance;
}
private static class SingletonHolder {
static ExecutorService instance = createInstance();
}
private static ExecutorService createInstance() {
final SameThreadExecutorService instance
= new SameThreadExecutorService();
// The executor has one worker thread. Give it a Runnable that waits
// until the executor service is shut down.
// All other submitted tasks will use the RejectedExecutionHandler
// which runs tasks using the caller's thread.
instance.submit(new Runnable() {
@Override public void run() {
boolean interrupted = false;
try {
while (true) {
try {
instance.signal.await();
break;
} catch (InterruptedException e) {
interrupted = true;
}
}
} finally {
if (interrupted) {
Thread.currentThread().interrupt();
}
}
}});
return Executors.unconfigurableExecutorService(instance);
}
}
Here's a really simple Executor (not ExecutorService, mind you) implementation that only uses the current thread. Stealing this from "Java Concurrency in Practice" (essential reading).
public class CurrentThreadExecutor implements Executor {
public void execute(Runnable r) {
r.run();
}
}
ExecutorService is a more elaborate interface, but could be handled with the same approach.
I wrote an ExecutorService based on the AbstractExecutorService.
/**
* Executes all submitted tasks directly in the same thread as the caller.
*/
public class SameThreadExecutorService extends AbstractExecutorService {
//volatile because can be viewed by other threads
private volatile boolean terminated;
@Override
public void shutdown() {
terminated = true;
}
@Override
public boolean isShutdown() {
return terminated;
}
@Override
public boolean isTerminated() {
return terminated;
}
@Override
public boolean awaitTermination(long theTimeout, TimeUnit theUnit) throws InterruptedException {
shutdown(); // TODO ok to call shutdown? what if the client never called shutdown???
return terminated;
}
@Override
public List<Runnable> shutdownNow() {
return Collections.emptyList();
}
@Override
public void execute(Runnable theCommand) {
theCommand.run();
}
}
I had to use the same "CurrentThreadExecutorService" for testing purposes and, although all suggested solutions were nice (particularly the one mentioning the Guava way), I came up with something similar to what Peter Lawrey suggested here.
As mentioned by Axelle Ziegler here, unfortunately Peter's solution won't actually work because of the check introduced in ThreadPoolExecutor on the maximumPoolSize constructor parameter (i.e. maximumPoolSize can't be <=0).
In order to circumvent that, I did the following:
private static ExecutorService currentThreadExecutorService() {
CallerRunsPolicy callerRunsPolicy = new ThreadPoolExecutor.CallerRunsPolicy();
return new ThreadPoolExecutor(0, 1, 0L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>(), callerRunsPolicy) {
@Override
public void execute(Runnable command) {
callerRunsPolicy.rejectedExecution(command, this);
}
};
}
You can use the RejectedExecutionHandler to run the task in the current thread.
public static final ThreadPoolExecutor CURRENT_THREAD_EXECUTOR = new ThreadPoolExecutor(0, 0, 0, TimeUnit.DAYS, new SynchronousQueue<Runnable>(), new RejectedExecutionHandler() {
public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
r.run();
}
});
You only need one of these ever.

Best ways to handle maximum execution time for threads (in Java)

So, I'm curious: how do you handle setting a maximum execution time for threads when running in a thread pool?
I have several techniques, but I'm never quite satisfied with them, so I figured I'd ask the community how they go about it.
How about:
Submit your Callable to the ExecutorService and keep a handle to the returned Future.
ExecutorService executorService = ... // Create ExecutorService.
Callable<Result> callable = new MyCallable(); // Create work to be done.
Future<Result> fut = executorService.submit(callable);
Wrap the Future in an implementation of Delayed whereby Delayed's getDelay(TimeUnit) method returns the maximum execution time for the work in question.
public class DelayedImpl<T> implements Delayed {
    private final long maxExecTimeMillis;
    private final Future<T> future;
    public DelayedImpl(long maxExecTimeMillis, Future<T> future) {
        this.maxExecTimeMillis = maxExecTimeMillis;
        this.future = future;
    }
    public long getDelay(TimeUnit timeUnit) {
        return timeUnit.convert(maxExecTimeMillis, TimeUnit.MILLISECONDS);
    }
    public int compareTo(Delayed other) {
        // required because Delayed extends Comparable<Delayed>
        return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
    }
    public Future<T> getFuture() {
        return future;
    }
}
DelayedImpl impl = new DelayedImpl(3000L, fut); // Max exec. time == 3000ms.
Add the `DelayedImpl` to a `DelayQueue`.
Queue<DelayedImpl> queue = new DelayQueue<DelayedImpl>();
queue.add(impl);
Have a thread repeatedly take() from the queue and check whether each DelayedImpl's Future is complete by calling isDone(); If not then cancel the task.
new Thread(new Runnable() {
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                DelayedImpl impl = queue.take(); // blocking take
                if (!impl.getFuture().isDone()) {
                    impl.getFuture().cancel(true);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}).start();
The main advantage to this approach is that you can set a different maximum execution time per task and the delay queue will automatically return the task with the smallest amount of execution time remaining.
Normally, I just regularly poll a control object from the threaded code. Something like:
interface ThreadControl {
boolean shouldContinue();
}
class Timer implements ThreadControl {
    private final long startMillis = System.currentTimeMillis();
    private final long maxTimeMillis;
    Timer(long maxTimeMillis) { this.maxTimeMillis = maxTimeMillis; }
    public boolean shouldContinue() {
        // returns false once max_time has elapsed
        return System.currentTimeMillis() - startMillis < maxTimeMillis;
    }
}
class MyTask implements Runnable {
    private final ThreadControl tc;
    public MyTask(ThreadControl tc) {
        this.tc = tc;
    }
    public void run() {
        while (true) {
            // do stuff
            if (!tc.shouldContinue())
                break;
        }
    }
}
Adamski:
I believe that your implementation of the Delayed Interface requires some adjustment in order to work properly. The return value of 'getDelay()' should return a negative value if the amount of time elapsed from the instantiation of the object has exceeded the maximum lifetime. To achieve that, you need to store the time when the task was created (and presumably started). Then each time 'getDelay()' is invoked, calculate whether or not the maximum lifetime of the thread has been exceeded. As in:
class DelayedImpl<T> implements Delayed {
private Future<T> task;
private final long maxExecTimeMinutes = MAX_THREAD_LIFE_MINUTES;
private final long startInMillis = System.currentTimeMillis();
private DelayedImpl(Future<T> task) {
this.task = task;
}
public long getDelay(TimeUnit unit) {
return unit.convert((startInMillis + maxExecTimeMinutes*60*1000) - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
}
public int compareTo(Delayed o) {
Long thisDelay = getDelay(TimeUnit.MILLISECONDS);
Long thatDelay = o.getDelay(TimeUnit.MILLISECONDS);
return thisDelay.compareTo(thatDelay);
}
public Future<T> getTask() {
return task;
}
}
