Best ways to handle maximum execution time for threads (in Java)

So, I'm curious: how do you handle setting a maximum execution time for threads when running in a thread pool?
I have several techniques, but I'm never quite satisfied with them, so I figured I'd ask the community how they go about it.

How about:
Submit your Callable to the ExecutorService and keep a handle to the returned Future.
ExecutorService executorService = ... // Create ExecutorService.
Callable<Result> callable = new MyCallable(); // Create work to be done.
Future<Result> fut = executorService.submit(callable);
Wrap the Future in an implementation of Delayed whereby Delayed's getDelay(TimeUnit) method returns the maximum execution time for the work in question.
public class DelayedImpl<T> implements Delayed {
    private final long maxExecTimeMillis;
    private final Future<T> future;

    public DelayedImpl(long maxExecTimeMillis, Future<T> future) {
        this.maxExecTimeMillis = maxExecTimeMillis;
        this.future = future;
    }

    public long getDelay(TimeUnit timeUnit) {
        return timeUnit.convert(maxExecTimeMillis, TimeUnit.MILLISECONDS);
    }

    public int compareTo(Delayed other) {
        // Required because Delayed extends Comparable<Delayed>; DelayQueue uses this ordering.
        return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
    }

    public Future<T> getFuture() {
        return future;
    }
}
DelayedImpl<Result> impl = new DelayedImpl<Result>(3000L, fut); // Max exec. time == 3000ms.
Add the `DelayedImpl` to a `DelayQueue`.
DelayQueue<DelayedImpl<Result>> queue = new DelayQueue<DelayedImpl<Result>>();
queue.add(impl);
Have a thread repeatedly take() from the queue and check whether each DelayedImpl's Future is complete by calling isDone(); if not, cancel the task.
new Thread(new Runnable() {
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                DelayedImpl<Result> impl = queue.take(); // Perform blocking take.
                if (!impl.getFuture().isDone()) {
                    impl.getFuture().cancel(true);
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // Preserve interrupt status and exit.
        }
    }
}).start();
The main advantage to this approach is that you can set a different maximum execution time per task and the delay queue will automatically return the task with the smallest amount of execution time remaining.

Normally, I just have the threaded code regularly poll a control object. Something like:
interface ThreadControl {
boolean shouldContinue();
}
class Timer implements ThreadControl {
public boolean shouldContinue() {
// returns false if max_time has elapsed
}
}
class MyTask implements Runnable {
private final ThreadControl tc;
public MyTask(ThreadControl tc) {
this.tc = tc;
}
public void run() {
while (true) {
// do stuff
if (!tc.shouldContinue())
break;
}
}
}
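For reference, a minimal sketch of what the Timer implementation might look like, assuming the maximum time is supplied at construction (the field names are illustrative):
class Timer implements ThreadControl {
    private final long maxTimeMillis;                             // maximum allowed run time
    private final long startMillis = System.currentTimeMillis();  // when this control was created

    Timer(long maxTimeMillis) {
        this.maxTimeMillis = maxTimeMillis;
    }

    public boolean shouldContinue() {
        // false once maxTimeMillis has elapsed since construction
        return System.currentTimeMillis() - startMillis < maxTimeMillis;
    }
}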

Adamski:
I believe that your implementation of the Delayed interface requires some adjustment in order to work properly: getDelay() should return a negative value once the time elapsed since the object was created exceeds the maximum lifetime. To achieve that, you need to store the time when the task was created (and presumably started). Then, each time getDelay() is invoked, calculate whether or not the maximum lifetime of the thread has been exceeded. As in:
class DelayedImpl<T> implements Delayed {
private Future<T> task;
private final long maxExecTimeMinutes = MAX_THREAD_LIFE_MINUTES;
private final long startInMillis = System.currentTimeMillis();
private DelayedImpl(Future<T> task) {
this.task = task;
}
public long getDelay(TimeUnit unit) {
return unit.convert((startInMillis + maxExecTimeMinutes*60*1000) - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
}
public int compareTo(Delayed o) {
Long thisDelay = getDelay(TimeUnit.MILLISECONDS);
Long thatDelay = o.getDelay(TimeUnit.MILLISECONDS);
return thisDelay.compareTo(thatDelay);
}
public Future<T> getTask() {
return task;
}
}


How to wait until a space becomes available in a threadpool [duplicate]

This question was closed as a duplicate of: ThreadPoolExecutor Block When its Queue Is Full?
I am trying to code a solution in which a single thread produces I/O-intensive tasks that can be performed in parallel. Each task has significant in-memory data, so I want to be able to limit the number of tasks that are pending at any given moment.
If I create ThreadPoolExecutor like this:
ThreadPoolExecutor executor = new ThreadPoolExecutor(numWorkerThreads, numWorkerThreads,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>(maxQueue));
Then the executor.submit(callable) throws RejectedExecutionException when the queue fills up and all the threads are already busy.
What can I do to make executor.submit(callable) block when the queue is full and all threads are busy?
EDIT:
I tried this:
executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
It somewhat achieves the effect I want, but in an inelegant way (rejected tasks are run in the calling thread, which blocks the calling thread from submitting more).
EDIT: (5 years after asking the question)
To anyone reading this question and its answers, please don't take the accepted answer as one correct solution. Please read through all answers and comments.
I have done this same thing. The trick is to create a BlockingQueue where the offer() method is really a put(). (You can use whatever base BlockingQueue implementation you want.)
public class LimitedQueue<E> extends LinkedBlockingQueue<E>
{
public LimitedQueue(int maxSize)
{
super(maxSize);
}
@Override
public boolean offer(E e)
{
// turn offer() and add() into blocking calls (unless interrupted)
try {
put(e);
return true;
} catch(InterruptedException ie) {
Thread.currentThread().interrupt();
}
return false;
}
}
Note that this only works for a thread pool where corePoolSize==maxPoolSize, so be careful there (see comments).
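For example, a pool whose submit() blocks once maxQueue tasks are waiting could be wired up like this (a sketch under the corePoolSize == maxPoolSize constraint; numWorkerThreads and maxQueue are the question's names, someCallable is a placeholder):
ThreadPoolExecutor executor = new ThreadPoolExecutor(
        numWorkerThreads, numWorkerThreads,     // core == max, as required for this trick
        0L, TimeUnit.MILLISECONDS,
        new LimitedQueue<Runnable>(maxQueue));  // offer() now blocks via put()

executor.submit(someCallable);                  // blocks instead of throwing when the queue is full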
The currently accepted answer has a potentially significant problem - it changes the behavior of ThreadPoolExecutor.execute such that if you have a corePoolSize < maxPoolSize, the ThreadPoolExecutor logic will never add additional workers beyond the core.
From ThreadPoolExecutor.execute(Runnable):
if (isRunning(c) && workQueue.offer(command)) {
int recheck = ctl.get();
if (! isRunning(recheck) && remove(command))
reject(command);
else if (workerCountOf(recheck) == 0)
addWorker(null, false);
}
else if (!addWorker(command, false))
reject(command);
Specifically, that last 'else' block will never be hit.
A better alternative is to do something similar to what the OP is already doing: use a RejectedExecutionHandler to do the same put logic:
public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
try {
if (!executor.isShutdown()) {
executor.getQueue().put(r);
}
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new RejectedExecutionException("Executor was interrupted while the task was waiting to put on work queue", e);
}
}
There are some things to watch out for with this approach, as pointed out in the comments (referring to this answer):
If corePoolSize==0, then there is a race condition where all threads in the pool may die before the task is visible
Using an executor implementation that wraps the queued tasks (not the case for plain ThreadPoolExecutor) will result in issues unless the handler also wraps them the same way.
Keeping those gotchas in mind, this solution will work for most typical ThreadPoolExecutors, and will properly handle the case where corePoolSize < maxPoolSize.
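Installing that handler might look like this (a sketch; BlockingRejectionHandler is a made-up name for a class wrapping the rejectedExecution method above):
ThreadPoolExecutor executor = new ThreadPoolExecutor(
        2, 8,                                   // corePoolSize < maxPoolSize works with this approach
        60L, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(100),
        new BlockingRejectionHandler());        // hypothetical wrapper around the handler above

// or, after construction:
// executor.setRejectedExecutionHandler(new BlockingRejectionHandler());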
Here is how I solved this on my end:
(Note: this solution blocks the thread that submits the Callable, so it prevents a RejectedExecutionException from being thrown.)
public class BoundedExecutor extends ThreadPoolExecutor{
private final Semaphore semaphore;
public BoundedExecutor(int bound) {
super(bound, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
semaphore = new Semaphore(bound);
}
/** Submits a task to the execution pool, but blocks if the number of in-flight tasks
* is at the bound limit
*/
public <T> Future<T> submitButBlockIfFull(final Callable<T> task) throws InterruptedException{
semaphore.acquire();
return submit(task);
}
@Override
protected void afterExecute(Runnable r, Throwable t) {
super.afterExecute(r, t);
semaphore.release();
}
}
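Usage might then look like this (a sketch; the bound of 10 is arbitrary, and the call site must handle InterruptedException):
BoundedExecutor executor = new BoundedExecutor(10);

// Blocks once 10 tasks are in flight instead of throwing RejectedExecutionException.
Future<String> result = executor.submitButBlockIfFull(() -> "done");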
How about using the CallerBlocksPolicy class if you are using spring-integration?
This class implements the RejectedExecutionHandler interface, which is a handler for tasks that cannot be executed by a ThreadPoolExecutor.
You can use this policy like this.
executor.setRejectedExecutionHandler(new CallerBlocksPolicy(maxWait)); // maxWait: how long (ms) to block before giving up
The main difference between CallerBlocksPolicy and CallerRunsPolicy is whether the caller thread blocks or runs the task itself.
Please refer to this code.
I know this is an old question, but I had a similar issue: creating new tasks was very fast, and if too many accumulated an OutOfMemoryError occurred because the existing tasks were not completed fast enough.
In my case Callables are submitted and I need the results, hence I need to store all the Futures returned by executor.submit(). My solution was to put the Futures into a BlockingQueue with a maximum size. Once that queue is full, no more tasks are generated until some are completed (elements removed from the queue). In pseudo-code:
final ExecutorService executor = Executors.newFixedThreadPool(numWorkerThreads);
final LinkedBlockingQueue<Future> futures = new LinkedBlockingQueue<>(maxQueueSize);
try {
Thread taskGenerator = new Thread() {
@Override
public void run() {
while (reader.hasNext()) {
Callable task = generateTask(reader.next());
Future future = executor.submit(task);
try {
// if queue is full blocks until a task
// is completed and hence no future tasks are submitted.
futures.put(future);
} catch (InterruptedException ex) {
Thread.currentThread().interrupt();
}
}
executor.shutdown();
}
};
taskGenerator.start();
// read from queue as long as task are being generated
// or while Queue has elements in it
while (taskGenerator.isAlive()
|| !futures.isEmpty()) {
Future future = futures.take();
// do something
}
} catch (InterruptedException ex) {
Thread.currentThread().interrupt();
} catch (ExecutionException ex) {
throw new MyException(ex);
} finally {
executor.shutdownNow();
}
I had a similar problem and implemented it using the beforeExecute/afterExecute hooks of ThreadPoolExecutor:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
/**
* Blocks task execution if there are not enough resources for it.
* Maximum task count usage controlled by maxTaskCount property.
*/
public class BlockingThreadPoolExecutor extends ThreadPoolExecutor {
private final ReentrantLock taskLock = new ReentrantLock();
private final Condition unpaused = taskLock.newCondition();
private final int maxTaskCount;
private volatile int currentTaskCount;
public BlockingThreadPoolExecutor(int corePoolSize, int maximumPoolSize,
long keepAliveTime, TimeUnit unit,
BlockingQueue<Runnable> workQueue, int maxTaskCount) {
super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
this.maxTaskCount = maxTaskCount;
}
/**
* Executes the task if there are enough system resources for it;
* otherwise waits.
*/
@Override
protected void beforeExecute(Thread t, Runnable r) {
super.beforeExecute(t, r);
taskLock.lock();
try {
// Spin while we will not have enough capacity for this job
while (maxTaskCount < currentTaskCount) {
try {
unpaused.await();
} catch (InterruptedException e) {
t.interrupt();
}
}
currentTaskCount++;
} finally {
taskLock.unlock();
}
}
/**
* Signalling that one more task is welcome
*/
@Override
protected void afterExecute(Runnable r, Throwable t) {
super.afterExecute(r, t);
taskLock.lock();
try {
currentTaskCount--;
unpaused.signalAll();
} finally {
taskLock.unlock();
}
}
}
This should be good enough for you. Btw, the original implementation was task-size based, because one task could be 100 times larger than another, and submitting two huge tasks was killing the box, while running one big one and plenty of small ones was okay. If your I/O-intensive tasks are roughly the same size you could use this class; otherwise just let me know and I'll post the size-based implementation.
P.S. You will want to check the ThreadPoolExecutor javadoc. It's a really nice user guide from Doug Lea about how it can easily be customized.
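Construction might look like this (a sketch; the sizes are arbitrary, with maxTaskCount deliberately smaller than the pool so the hooks actually throttle):
BlockingThreadPoolExecutor executor = new BlockingThreadPoolExecutor(
        8, 8,                                  // core and max pool size
        0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>(),   // the hooks, not the queue, do the limiting
        4);                                    // maxTaskCount: at most 4 tasks execute at once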
I have implemented a solution following the decorator pattern and using a semaphore to control the number of executed tasks. You can use it with any Executor and:
Specify the maximum number of ongoing tasks
Specify the maximum timeout to wait for a task execution permit (if the timeout passes and no permit is acquired, a RejectedExecutionException is thrown)
import static java.util.concurrent.TimeUnit.MILLISECONDS;
import java.time.Duration;
import java.util.Objects;
import java.util.concurrent.Executor;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.Semaphore;
import javax.annotation.Nonnull;
public class BlockingOnFullQueueExecutorDecorator implements Executor {
private static final class PermitReleasingDecorator implements Runnable {
@Nonnull
private final Runnable delegate;
@Nonnull
private final Semaphore semaphore;
private PermitReleasingDecorator(@Nonnull final Runnable task, @Nonnull final Semaphore semaphoreToRelease) {
this.delegate = task;
this.semaphore = semaphoreToRelease;
}
@Override
public void run() {
try {
this.delegate.run();
}
finally {
// however execution goes, release permit for next task
this.semaphore.release();
}
}
@Override
public final String toString() {
return String.format("%s[delegate='%s']", getClass().getSimpleName(), this.delegate);
}
}
@Nonnull
private final Semaphore taskLimit;
@Nonnull
private final Duration timeout;
@Nonnull
private final Executor delegate;
public BlockingOnFullQueueExecutorDecorator(@Nonnull final Executor executor, final int maximumTaskNumber, @Nonnull final Duration maximumTimeout) {
this.delegate = Objects.requireNonNull(executor, "'executor' must not be null");
if (maximumTaskNumber < 1) {
throw new IllegalArgumentException(String.format("At least one task must be permitted, not '%d'", maximumTaskNumber));
}
this.timeout = Objects.requireNonNull(maximumTimeout, "'maximumTimeout' must not be null");
if (this.timeout.isNegative()) {
throw new IllegalArgumentException("'maximumTimeout' must not be negative");
}
this.taskLimit = new Semaphore(maximumTaskNumber);
}
@Override
public final void execute(final Runnable command) {
Objects.requireNonNull(command, "'command' must not be null");
try {
// attempt to acquire permit for task execution
if (!this.taskLimit.tryAcquire(this.timeout.toMillis(), MILLISECONDS)) {
throw new RejectedExecutionException(String.format("Executor '%s' busy", this.delegate));
}
}
catch (final InterruptedException e) {
// restore interrupt status
Thread.currentThread().interrupt();
throw new IllegalStateException(e);
}
this.delegate.execute(new PermitReleasingDecorator(command, this.taskLimit));
}
@Override
public final String toString() {
return String.format("%s[availablePermits='%s',timeout='%s',delegate='%s']", getClass().getSimpleName(), this.taskLimit.availablePermits(),
this.timeout, this.delegate);
}
}
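Usage might look like this (a sketch; the limits are arbitrary):
Executor pool = Executors.newFixedThreadPool(4);

// Allow at most 100 in-flight tasks; wait up to 30 seconds for a permit before rejecting.
Executor bounded = new BlockingOnFullQueueExecutorDecorator(pool, 100, Duration.ofSeconds(30));

bounded.execute(() -> System.out.println("work"));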
I think it is as simple as using an ArrayBlockingQueue instead of a LinkedBlockingQueue.
Ignore me... that's totally wrong. ThreadPoolExecutor calls Queue#offer, not put, which is what would have the effect you require.
You could extend ThreadPoolExecutor and provide an implementation of execute(Runnable) that calls put in place of offer.
That doesn't seem like a completely satisfactory answer I'm afraid.

Scheduled executor: poll for result at fixed rate and exit if timeout or result valid

Problem:
I have a requirement to call a DAO method at a fixed rate, say every 10 seconds, and then check whether the result is valid. If it is, exit; otherwise keep calling the method every 10 seconds until I get a valid result or a defined timeout (say 2 minutes) is over.
Approaches:
I want to keep the task and scheduler logic separate, and write the task in such a way that it can be reused by different classes with a similar requirement.
One way I can think of is to define a new poller task:
public abstract class PollerTask<T> implements Runnable {
abstract public boolean isValid(T result);
abstract public T task();
private T result;
private volatile boolean complete;
public boolean isComplete() {
return complete;
}
public T getResult() {
return result;
}
@Override
final public void run() {
result = task();
if (complete = isValid(result)) {
//may be stop scheduler ??
}
}
}
The user simply needs to provide implementations of task() and isValid().
Then we can define a separate class that takes the polling frequency and timeout, creates a scheduled executor, and submits this task:
public class PollerTaskExecutor {
private int pollingFreq;
private int timeout;
private ScheduledExecutorService executor;
private ScheduledExecutorService terminator;
private ExecutorService condition;
private volatile boolean done;
private ScheduledFuture future;
public PollerTaskExecutor(int pollingFreq, int timeout) {
this.pollingFreq = pollingFreq;
this.timeout = timeout;
executor = Executors.newSingleThreadScheduledExecutor();
terminator = Executors.newSingleThreadScheduledExecutor();
condition = Executors.newSingleThreadExecutor();
}
public void submitTaskForPolling(final PollerTask pollerTask) {
future = executor.scheduleAtFixedRate(pollerTask, 0, pollingFreq, TimeUnit.SECONDS);
terminator.schedule(new Runnable() {
@Override
public void run() {
complete();
}
}, timeout, TimeUnit.SECONDS);
condition.execute(new Runnable() {
@Override
public void run() {
if (pollerTask.isComplete()) {
complete();
}
}
});
}
public boolean isDone() {
return done;
}
public void complete() {
future.cancel(false);
executor.shutdown();
terminator.shutdown();
condition.shutdown();
done = true;
}
}
Now the user can wait until pollerExecutor.isDone() returns true and get the result.
I had to use three executors for the following purposes:
an executor to run the task at a fixed interval
an executor to stop everything when the timeout is over
an executor to stop everything if a valid result is obtained before the timeout.
Can someone please suggest a better approach? This seems complicated for such a trivial task.
Make it a self-scheduling task. In pseudo code:
public class PollingTaskRunner {
...
CountDownLatch doneWait = new CountDownLatch(1);
volatile boolean done;
PollingTaskRunner(Runnable pollingTask, int frequency, int period) {
...
endTime = now + period;
executor.schedule(this, 0);
}
run() {
try {
pollingTask.run();
} catch (Exception e) {
...
}
if (pollingTask.isComplete() || now + frequency > endTime) {
done = true;
doneWait.countDown();
executor.shutdown();
} else {
executor.schedule(this, frequency);
}
}
await() {
doneWait.await();
}
isDone() {
return done;
}
}
It is not that complicated, but add plenty of debug statements the first time you run/test this so you know what is going on. Once it is running as intended, it is easy to re-use the pattern.
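A concrete Java sketch of that pseudo-code, assuming the PollerTask class from the question (the single-thread scheduler and second-based timing are my own choices):
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PollingTaskRunner implements Runnable {
    private final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
    private final CountDownLatch doneWait = new CountDownLatch(1);
    private final PollerTask<?> pollingTask;   // the PollerTask from the question
    private final long frequencySeconds;
    private final long endTimeMillis;
    private volatile boolean done;

    public PollingTaskRunner(PollerTask<?> pollingTask, long frequencySeconds, long periodSeconds) {
        this.pollingTask = pollingTask;
        this.frequencySeconds = frequencySeconds;
        this.endTimeMillis = System.currentTimeMillis() + periodSeconds * 1000;
        executor.schedule(this, 0, TimeUnit.SECONDS);   // kick off the first poll
    }

    @Override
    public void run() {
        try {
            pollingTask.run();
        } catch (Exception e) {
            // log and decide whether to keep polling
        }
        if (pollingTask.isComplete() || System.currentTimeMillis() + frequencySeconds * 1000 > endTimeMillis) {
            done = true;
            doneWait.countDown();
            executor.shutdown();
        } else {
            executor.schedule(this, frequencySeconds, TimeUnit.SECONDS);   // re-schedule itself
        }
    }

    public void await() throws InterruptedException {
        doneWait.await();
    }

    public boolean isDone() {
        return done;
    }
}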
A slightly simpler approach: you don't need a separate executor service for the terminator; you could simply push the terminator task onto the same executor.
Even simpler: have PollerTask place its result in a BlockingQueue, then have the PollingTaskRunner do a timed poll on that BlockingQueue. Whenever control returns from the poll, call ScheduledFuture.cancel, because the task has either succeeded or timed out.
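A minimal sketch of that variant, assuming a Result type, a dao and an isValid check as in the question, and a calling method that may throw InterruptedException (all names are illustrative):
BlockingQueue<Result> results = new ArrayBlockingQueue<>(1);
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

// The scheduled task hands over its result only when it is valid.
ScheduledFuture<?> future = scheduler.scheduleAtFixedRate(() -> {
    Result r = dao.fetch();          // illustrative DAO call
    if (isValid(r)) {
        results.offer(r);
    }
}, 0, 10, TimeUnit.SECONDS);

// Wait up to the overall timeout for a valid result, then stop polling either way.
Result result = results.poll(2, TimeUnit.MINUTES);   // null means the timeout expired
future.cancel(false);
scheduler.shutdown();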

How to set priority of java.util.Timer

How do you set the thread priority of a Timer in Java? This is the code I found in the project I am working on, and I do not think it is working:
public static Timer createNamedTimer(boolean isDaemon,
final String threadName, final int priority) {
Timer timer = new Timer(isDaemon);
timer.schedule(new TimerTask() {
public void run() {
Thread.currentThread().setName("TimerThread: " + threadName);
Thread.currentThread().setPriority(priority);
}
}, 0);
return timer;
}
AFAIK, for a Timer the only way you can change the priority is the way you are doing it.
If you need a better option, you can use a ThreadFactory to create the threads and set their priority.
class SimpleThreadFactory implements ThreadFactory {
    private final int threadPriority;

    SimpleThreadFactory(int threadPriority) {
        this.threadPriority = threadPriority;
    }

    public Thread newThread(Runnable r) {
        Thread t = new Thread(r);
        t.setPriority(threadPriority);
        return t;
    }
}
Then you can pass the factory to Java's Executors framework to do what you want; IMHO this is a much better approach.
Why do I say it would be a better approach?
The Timer class's JavaDoc mentions ScheduledThreadPoolExecutor and notes that this class is effectively a more versatile replacement for the Timer/TimerTask combination.
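For example (a sketch using the factory above with its priority constructor; the priority and delay are arbitrary):
ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor(new SimpleThreadFactory(Thread.MIN_PRIORITY));

// Roughly what Timer.schedule(task, 5000) would do, but on a low-priority thread.
scheduler.schedule(() -> System.out.println("low-priority tick"), 5, TimeUnit.SECONDS);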
The suggested solution won't likely work for tasks that are repeated more than once, because between invocations another task sharing the same thread may have changed the priority. Therefore, for repeating tasks you must set the priority at execution time, every time. This potential issue exists with or without the Executors framework.
One solution is to create a wrapper class that does prep work for you to ensure consistency. For example:
AnyClass.java:
private static void exampleUsage()
{
try { launchMaxPriorityTask(() -> System.out.println("What a fancy task.")).join(); }
catch (Throwable ignored) {}
}
private static Thread launchMaxPriorityTask(Runnable task)
{
final Thread customThread = new Thread(new Task("MaxPriority", Thread.MAX_PRIORITY, task));
customThread.start();
return customThread;
}
Task.java:
public class Task implements Runnable
{
private final String name;
private final int priority;
private final Runnable task;
public Task(String name, int priority, Runnable task)
{
if (null == task) throw new NullPointerException("no task provided");
this.name = name; this.priority = priority; this.task = task;
}
/**
* run() is made final here to prevent any deriving classes
* accidentally ruining the expected behavior
*/
@Override public final void run()
{
final Thread thread = Thread.currentThread();
// cache the current state to restore settings and be polite
final String prevName = thread.getName();
final int prevPriority = thread.getPriority();
// set our thread's config
thread.setName(name);
thread.setPriority(priority);
try { task.run(); } catch (Throwable ignored) {}
// restore previous thread config
thread.setPriority(prevPriority);
thread.setName(prevName);
}
}
This is naturally a minimalist example of what can be accomplished with this sort of setup.

Callback when a periodic task is cancelled and done

I have two tasks: the first task (work) is recurring, and the second task (cleanup) releases some resources. The cleanup task should run exactly once, after the recurring work task has completed and will not be run again.
My first instinct was something like this:
ScheduledExecutorService service = ...;
ScheduledFuture<?> future = service.scheduleAtFixedRate(work, ...);
// other stuff happens
future.cancel(false);
cleanup.run();
The problem here is that cancel() returns immediately. So if work happens to be running, then cleanup will overlap it.
Ideally I would use something like Guava's Futures.addCallback(ListenableFuture future, FutureCallback callback). (Guava 15 may have something like that).
In the meantime, how can I fire a callback when the future is cancelled and work is no longer running?
This is the solution that I've come up with. It seems to be pretty simple, but I still assume there's a more common and/or elegant solution out there. I'd really like to see one in a library like Guava...
First I create a wrapper to impose mutual exclusion on my Runnables:
private static final class SynchronizedRunnable implements Runnable {
private final Object monitor;
private final Runnable delegate;
private SynchronizedRunnable(Object monitor, Runnable delegate) {
this.monitor = monitor;
this.delegate = delegate;
}
@Override
public void run() {
synchronized (monitor) {
delegate.run();
}
}
}
Then I create a wrapper to fire my callback on successful invocations of cancel:
private static final class FutureWithCancelCallback<V> extends ForwardingFuture.SimpleForwardingFuture<V> {
private final Runnable callback;
private FutureWithCancelCallback(Future<V> delegate, Runnable callback) {
super(delegate);
this.callback = callback;
}
@Override
public boolean cancel(boolean mayInterruptIfRunning) {
boolean cancelled = super.cancel(mayInterruptIfRunning);
if (cancelled) {
callback.run();
}
return cancelled;
}
}
Then I roll it all together in my own method:
private Future<?> scheduleWithFixedDelayAndCallback(ScheduledExecutorService service, Runnable work, long initialDelay, long delay, TimeUnit unit, Runnable cleanup) {
Object monitor = new Object();
Runnable monitoredWork = new SynchronizedRunnable(monitor, work);
Runnable monitoredCleanup = new SynchronizedRunnable(monitor, cleanup);
Future<?> rawFuture = service.scheduleAtFixedRate(monitoredWork, initialDelay, delay, unit);
Future<?> wrappedFuture = new FutureWithCancelCallback(rawFuture, monitoredCleanup);
return wrappedFuture;
}
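Calling it then looks much like the original snippet (a sketch, reusing the work and cleanup runnables from the question):
ScheduledExecutorService service = Executors.newSingleThreadScheduledExecutor();
Future<?> future = scheduleWithFixedDelayAndCallback(
        service, work, 0, 10, TimeUnit.SECONDS, cleanup);
// other stuff happens
future.cancel(false);   // cleanup runs once any in-flight work iteration has finished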
I'll give it another shot then. Either you may enhance the command or you may wrap the executed Runnable/Callable. Look at this:
public static class RunnableWrapper implements Runnable {
private final Runnable original;
private final Lock lock = new ReentrantLock();
public RunnableWrapper(Runnable original) {
this.original = original;
}
public void run() {
lock.lock();
try {
this.original.run();
} finally {
lock.unlock();
}
}
public void awaitTermination() {
lock.lock();
try {
} finally {
lock.unlock();
}
}
}
So you can change your code to
ScheduledExecutorService service = ...;
RunnableWrapper wrapper = new RunnableWrapper(work);
ScheduledFuture<?> future = service.scheduleAtFixedRate(wrapper, ...);
// other stuff happens
future.cancel(false);
wrapper.awaitTermination();
cleanup.run();
After calling cancel, either work is no longer running and awaitTermination() returns immediately, or it is running and awaitTermination() blocks until it's done.
Why don't you do
// other stuff happens
future.cancel(false);
service.shutdown();
service.awaitTermination(1, TimeUnit.DAYS);
cleanup.run();
This will tell your executor service to shutdown, thus allowing you to wait for the possibly running work to be finished.

ThreadPoolExecutor with ArrayBlockingQueue

I started reading more about ThreadPoolExecutor from the Java docs, as I am using it in one of my projects. Can anyone explain what this line actually means? I know what each parameter stands for, but I wanted to understand it in a more general/layman way from some of the experts here.
ExecutorService service = new ThreadPoolExecutor(10, 10, 1000L,
TimeUnit.MILLISECONDS, new ArrayBlockingQueue<Runnable>(10, true), new
ThreadPoolExecutor.CallerRunsPolicy());
Updated:
Problem statement:
Each thread uses a unique ID between 1 and 1000, and the program has to run for 60 minutes or more, so within those 60 minutes all the IDs may get used up and I need to reuse them. This is the program I wrote using the above executor.
class IdPool {
private final LinkedList<Integer> availableExistingIds = new LinkedList<Integer>();
public IdPool() {
for (int i = 1; i <= 1000; i++) {
availableExistingIds.add(i);
}
}
public synchronized Integer getExistingId() {
return availableExistingIds.removeFirst();
}
public synchronized void releaseExistingId(Integer id) {
availableExistingIds.add(id);
}
}
class ThreadNewTask implements Runnable {
private IdPool idPool;
public ThreadNewTask(IdPool idPool) {
this.idPool = idPool;
}
public void run() {
Integer id = idPool.getExistingId();
someMethod(id);
idPool.releaseExistingId(id);
}
// Does this method need to be synchronized or not?
private synchronized void someMethod(Integer id) {
System.out.println("Task: " +id);
// and do other calculations, whatever you need to do in your program
}
}
public class TestingPool {
public static void main(String[] args) throws InterruptedException {
int size = 10;
int durationOfRun = 60;
IdPool idPool = new IdPool();
// create thread pool with given size
ExecutorService service = new ThreadPoolExecutor(size, size, 500L, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<Runnable>(size), new ThreadPoolExecutor.CallerRunsPolicy());
// queue some tasks
long startTime = System.currentTimeMillis();
long endTime = startTime + (durationOfRun * 60 * 1000L);
// Running it for 60 minutes
while(System.currentTimeMillis() <= endTime) {
service.submit(new ThreadNewTask(idPool));
}
// wait for termination
service.shutdown();
service.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
}
}
My question is: is this code right as far as performance is concerned? And is there anything else I can do to make it more accurate? Any help will be appreciated.
[First, I apologize: this is a response to a previous answer, but I wanted formatting.]
Except in reality, you DON'T block when an item is submitted to a ThreadPoolExecutor with a full queue. The reason for this is that ThreadPoolExecutor calls the BlockingQueue.offer(T item) method which by definition is a non-blocking method. It either adds the item and returns true, or does not add (when full) and returns false. The ThreadPoolExecutor then calls the registered RejectedExecutionHandler to deal with this situation.
From the javadoc:
Executes the given task sometime in the future. The task may execute
in a new thread or in an existing pooled thread. If the task cannot be
submitted for execution, either because this executor has been
shutdown or because its capacity has been reached, the task is handled
by the current RejectedExecutionHandler.
By default, the ThreadPoolExecutor.AbortPolicy() is used which throws a RejectedExecutionException from the "submit" or "execute" method of the ThreadPoolExecutor.
try {
executorService.execute(new Runnable() { ... });
}
catch (RejectedExecutionException e) {
// the queue is full, and you're using the AbortPolicy as the
// RejectedExecutionHandler
}
However, you can use other handlers to do something different, such as ignore the error (DiscardPolicy) or run it in the thread which called the "execute" or "submit" method (CallerRunsPolicy). This example lets whichever thread calls the "submit" or "execute" method run the requested task when the queue is full (this means at any given time you could have one additional task running on top of what's in the pool itself):
ExecutorService service = new ThreadPoolExecutor(..., new ThreadPoolExecutor.CallerRunsPolicy());
If you want to block and wait, you could implement your own RejectedExecutionHandler which blocks until there's a slot available on the queue (this is a rough sketch, I have not compiled or run this, but you should get the idea):
public class BlockUntilAvailableSlot implements RejectedExecutionHandler {
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
if (e.isTerminated() || e.isShutdown()) {
return;
}
boolean submitted = false;
while (! submitted) {
if (Thread.currentThread().isInterrupted()) {
// be a good citizen and do something nice if we were interrupted
// anywhere other than during the sleep method.
}
try {
e.execute(r);
submitted = true;
}
catch (RejectedExecutionException ree) {
try {
// Sleep for a little bit, and try again.
Thread.sleep(100L);
}
catch (InterruptedException ie) {
; // do you care if someone called Thread.interrupt?
// if so, do something nice here, and maybe just silently return.
}
}
}
}
}
It's creating an ExecutorService which manages a pool of threads. Both the initial (core) and maximum number of threads in the pool is 10 in this case. When a thread in the pool becomes idle for 1 second (1000 ms) it would be killed (the idle timer); however, because the max and core pool sizes are the same, this never happens (it always keeps 10 threads around and never runs more than 10 threads).
It uses an ArrayBlockingQueue with 10 slots to manage the execution requests, so when the queue is full (after 10 tasks have been enqueued), it will block the caller.
If a task is rejected (which in this case would be due to the service shutting down, since tasks will be queued or the caller blocked when the queue is full), then the offered Runnable will be executed on the caller's thread.
Consider semaphores; they are meant for this purpose. Please check the code below, which uses a semaphore. I am not sure if this is what you want, but it will block if there are no more permits to acquire. Also, is the ID important to you?
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Semaphore;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
class ThreadNewTask implements Runnable {
private Semaphore idPool;
public ThreadNewTask(Semaphore idPool) {
this.idPool = idPool;
}
public void run() {
// Integer id = idPool.getExistingId();
try {
idPool.acquire();
someMethod(0);
} catch (InterruptedException e) {
e.printStackTrace();
} finally {
idPool.release();
}
// idPool.releaseExistingId(id);
}
// Does this method need to be synchronized or not?
private void someMethod(Integer id) {
System.out.println("Task: " + id);
// and do other calculations, whatever you need to do in your program
}
}
public class TestingPool {
public static void main(String[] args) throws InterruptedException {
int size = 10;
int durationOfRun = 60;
Semaphore idPool = new Semaphore(100);
// IdPool idPool = new IdPool();
// create thread pool with given size
ExecutorService service = new ThreadPoolExecutor(size, size, 500L,
TimeUnit.MILLISECONDS, new ArrayBlockingQueue<Runnable>(size),
new ThreadPoolExecutor.CallerRunsPolicy());
// queue some tasks
long startTime = System.currentTimeMillis();
long endTime = startTime + (durationOfRun * 60 * 1000L);
// Running it for 60 minutes
while (System.currentTimeMillis() <= endTime) {
service.submit(new ThreadNewTask(idPool));
}
// wait for termination
service.shutdown();
service.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
}
}
Another solution is to hack the underlying queue so that offer is replaced with a timed offer with a very large timeout (up to about 292 years, which can be considered infinite).
// helper method
private static boolean interruptibleInfiniteOffer(BlockingQueue<Runnable> q, Runnable r) {
try {
return q.offer(r, Long.MAX_VALUE, TimeUnit.NANOSECONDS); // infinite == ~292 years
} catch (InterruptedException e) {
return false;
}
}
// fixed size pool with blocking (instead of rejecting) if bounded queue is full
public static ThreadPoolExecutor getFixedSizePoolWithLimitedWaitingQueue(int nThreads, int maxItemsInTheQueue) {
BlockingQueue<Runnable> queue = maxItemsInTheQueue == 0
? new SynchronousQueue<>() { public boolean offer(Runnable r) { return interruptibleInfiniteOffer(this, r);} }
: new ArrayBlockingQueue<>(maxItemsInTheQueue) { public boolean offer(Runnable r) { return interruptibleInfiniteOffer(this, r);} };
return new ThreadPoolExecutor(nThreads, nThreads, 0, TimeUnit.MILLISECONDS, queue);
}
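Usage could then be (a sketch; the pool and queue sizes are arbitrary):
ThreadPoolExecutor pool = getFixedSizePoolWithLimitedWaitingQueue(4, 100);

// execute() now blocks (via the overridden offer) once 100 tasks are waiting in the queue.
pool.execute(() -> System.out.println("work"));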
