I'm new to the java.util.concurrent package, so I'd like to ask for advice. I need to evaluate some operations within a thread pool, but I also need to be able to cancel the concurrent evaluation. So I tried to wrap the ThreadPoolExecutor in the Future<T> interface. Here is what I tried to do:
public class PooledFutureTask implements Future<List<Integer>> {
ExecutorService executor = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
private boolean cancelled;
List<Integer> result;
public PooledFutureTask(List<Runnable> runnables) {
for (Runnable r : runnables)
executor.execute(r);
}
@Override
public boolean cancel(boolean mayInterruptIfRunning) {
if (executor.shutdownNow().size() > 0)
return cancelled = true;
return executor.isTerminated() || executor.isShutdown();
}
@Override
public boolean isCancelled() {
return cancelled;
}
@Override
public boolean isDone() {
return executor.isTerminated();
}
@Override
public List<Integer> get() throws InterruptedException, ExecutionException {
while(true) {
if (executor.isShutdown())
return result;
}
}
@Override
public List<Integer> get(long timeout, TimeUnit unit)
throws InterruptedException, ExecutionException, TimeoutException {
if (executor.awaitTermination(timeout, TimeUnit.MILLISECONDS))
throw new TimeoutException();
return result;
}
}
But I'm not sure about this. Could you point out where I went wrong?
I'm using Java 7.
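For reference, a minimal sketch (not from the original post) of what java.util.concurrent already provides for this case: submit Callable tasks to an ExecutorService and keep the Future objects it returns; get() blocks without spinning, and cancel()/shutdownNow() take care of cancellation. The tasks list is a placeholder introduced for illustration; imports from java.util and java.util.concurrent are assumed.
ExecutorService executor = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
List<Callable<Integer>> tasks = ...; // the operations to evaluate (placeholder)
List<Future<Integer>> futures = new ArrayList<Future<Integer>>();
for (Callable<Integer> task : tasks)
    futures.add(executor.submit(task));
// collect the results; each get() blocks until that task completes
List<Integer> result = new ArrayList<Integer>();
for (Future<Integer> f : futures)
    result.add(f.get());
// to cancel instead: interrupt whatever has not finished yet
for (Future<Integer> f : futures)
    f.cancel(true);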
When a command is scheduled at a fixed rate on a ScheduledExecutorService, it returns a ScheduledFuture, which can also be cancelled.
But "cancel" does not guarantee that the command is no longer executing after cancel returns, for example because the command was already in the middle of execution when "cancel" was called.
For most use cases this is enough. But I have to deal with a use case where the current thread needs to block after cancel, if the command is already in progress, and wait until the command is done. In other words, the thread which called cancel should not go forward while the command is still executing. Cancelling with mayInterruptIfRunning=true is also not suitable, because I do not want to break executions already in progress; I just need to wait for normal completion.
I could not find a way to achieve these requirements with the standard JDK classes. Question 1: Am I wrong, and does this kind of functionality already exist?
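One possible workaround with plain JDK classes (a sketch added here, valid only under the assumption that the scheduler is single-threaded): after cancelling, submit a no-op "barrier" task to the same executor and wait for it. Because tasks on a single-threaded scheduler run one after another, the barrier cannot start until any in-progress run has finished. scheduledFuture and scheduler are placeholder names for the future returned by scheduleAtFixedRate and the ScheduledExecutorService it came from.
scheduledFuture.cancel(false); // stop future periodic runs, do not interrupt the current one
Future<?> barrier = scheduler.submit(new Runnable() {
    @Override
    public void run() {
        // no-op: only marks the point after any in-progress run
    }
});
barrier.get(); // returns once the current run (if any) has completed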
So I decided to implement it myself:
import java.util.concurrent.*;
public class GracefullyStoppingScheduledFutureDecorator implements ScheduledFuture {
/**
* @return the scheduled future with a special implementation of the "cancel" method,
* which, in addition to the standard implementation,
* provides a strong guarantee that the command is not in the middle of execution when "cancel" returns
*/
public static ScheduledFuture schedule(Runnable command, long initialDelay, long period, TimeUnit unit, ScheduledExecutorService scheduler) {
CancellableCommand cancellableCommand = new CancellableCommand(command);
ScheduledFuture future = scheduler.scheduleAtFixedRate(cancellableCommand, initialDelay, period, unit);
return new GracefullyStoppingScheduledFutureDecorator(future, cancellableCommand);
}
private GracefullyStoppingScheduledFutureDecorator(ScheduledFuture targetFuture, CancellableCommand command) {
this.targetFuture = targetFuture;
this.runnable = command;
}
private final ScheduledFuture targetFuture;
private final CancellableCommand runnable;
@Override
public boolean cancel(boolean mayInterruptIfRunning) {
runnable.cancel();
return targetFuture.cancel(mayInterruptIfRunning);
}
@Override
public long getDelay(TimeUnit unit) {
return targetFuture.getDelay(unit);
}
@Override
public int compareTo(Delayed o) {
return targetFuture.compareTo(o);
}
@Override
public boolean isCancelled() {
return targetFuture.isCancelled();
}
@Override
public boolean isDone() {
return targetFuture.isDone();
}
@Override
public Object get() throws InterruptedException, ExecutionException {
return targetFuture.get();
}
@Override
public Object get(long timeout, TimeUnit unit) throws InterruptedException, ExecutionException, TimeoutException {
return targetFuture.get(timeout, unit);
}
private static class CancellableCommand implements Runnable {
private final Object monitor = new Object();
private final Runnable target;
private boolean cancelled = false;
private CancellableCommand(Runnable target) {
this.target = target;
}
public void cancel() {
synchronized (monitor) {
cancelled = true;
}
}
@Override
public void run() {
synchronized (monitor) {
if (!cancelled) {
target.run();
}
}
}
}
}
Question 2: Could anybody find errors in the code above?
There is a hypothetical deadlock which can be reproduced by the following scenario:
Thread T1 holds monitor M1.
The scheduled task is executing on thread T2 (holding its monitor M2) and wants to enter M1, so T2 has to wait until T1 exits monitor M1.
T1 decides to cancel the task, but because monitor M2 is held by the task itself, we have a deadlock.
The scenario above is most likely unrealistic, but to protect against all possible cases I decided to rewrite the code in a lock-free manner:
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;
public class GracefullyStoppingScheduledFuture {
/**
* @return the scheduled future with a special implementation of the "cancel" method,
* which, in addition to the standard implementation,
* provides a strong guarantee that the command is not in the middle of execution when "cancel" returns
*/
public static GracefullyStoppingScheduledFuture scheduleAtFixedRate(Runnable command, long initialDelay, long period, TimeUnit unit, ScheduledExecutorService scheduler) {
CancellableCommand cancellableCommand = new CancellableCommand(command);
ScheduledFuture future = scheduler.scheduleAtFixedRate(cancellableCommand, initialDelay, period, unit);
return new GracefullyStoppingScheduledFuture(future, cancellableCommand);
}
private GracefullyStoppingScheduledFuture(ScheduledFuture targetFuture, CancellableCommand command) {
this.targetFuture = targetFuture;
this.runnable = command;
}
private final ScheduledFuture targetFuture;
private final CancellableCommand runnable;
public void cancelAndBeSureOfTermination(boolean mayInterruptIfRunning) throws InterruptedException, ExecutionException {
try {
targetFuture.cancel(mayInterruptIfRunning);
} finally {
runnable.cancel();
}
}
private static class CancellableCommand implements Runnable {
private static final int NOT_EXECUTING = 0;
private static final int IN_PROGRESS = 1;
private static final int CANCELLED_WITHOUT_OBSTRUCTION = 2;
private static final int CANCELLED_IN_MIDDLE_OF_PROGRESS = 3;
private final AtomicInteger state = new AtomicInteger(NOT_EXECUTING);
private final AtomicReference<Thread> executionThread = new AtomicReference<>();
private final CompletableFuture<Void> cancellationFuture = new CompletableFuture<>();
private final Runnable target;
private CancellableCommand(Runnable target) {
this.target = target;
}
public void cancel() throws ExecutionException, InterruptedException {
if (executionThread.get() == Thread.currentThread()) {
// cancel method was called from target by itself
state.set(CANCELLED_IN_MIDDLE_OF_PROGRESS);
return;
}
while (true) {
if (state.get() == CANCELLED_WITHOUT_OBSTRUCTION) {
return;
}
if (state.get() == CANCELLED_IN_MIDDLE_OF_PROGRESS) {
cancellationFuture.get();
return;
}
if (state.compareAndSet(NOT_EXECUTING, CANCELLED_WITHOUT_OBSTRUCTION)) {
return;
}
if (state.compareAndSet(IN_PROGRESS, CANCELLED_IN_MIDDLE_OF_PROGRESS)) {
cancellationFuture.get();
return;
}
}
}
@Override
public void run() {
if (!state.compareAndSet(NOT_EXECUTING, IN_PROGRESS)) {
notifyWaiters();
return;
}
try {
executionThread.set(Thread.currentThread());
target.run();
} finally {
executionThread.set(null);
if (!state.compareAndSet(IN_PROGRESS, NOT_EXECUTING)) {
notifyWaiters();
}
}
}
private void notifyWaiters() {
if (state.get() == CANCELLED_WITHOUT_OBSTRUCTION) {
// no need to notify anything
return;
}
// someone waits for cancelling
cancellationFuture.complete(null);
return;
}
}
}
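A minimal usage sketch of the class above (scheduler and job are placeholder names introduced here; cancelAndBeSureOfTermination may throw InterruptedException/ExecutionException):
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
Runnable job = new Runnable() {
    @Override
    public void run() {
        // periodic work
    }
};
GracefullyStoppingScheduledFuture future =
        GracefullyStoppingScheduledFuture.scheduleAtFixedRate(job, 0, 1, TimeUnit.SECONDS, scheduler);
// ... later, from another thread:
future.cancelAndBeSureOfTermination(false); // returns only when no run is in progress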
Precondition:
I am implementing chat in my application and trying to implement a scheduling mechanism. When the user doesn't have an internet connection, the message is put into a scheduled queue.
Problem explanation:
I have a single thread executor
private ExecutorService executor = Executors.newSingleThreadExecutor();
And I have a scheduler for my tasks:
public class MessageScheduler {
private ConcurrentLinkedQueue<MessageTask> queue;
public MessageScheduler() {
queue = new ConcurrentLinkedQueue<>();
}
public void schedule(MessageTask messageTask) {
queue.add(messageTask);
}
public ArrayList<MessageTask> getScheduled() {
return new ArrayList<>(queue);
}
public boolean isEmpty() {
return queue.isEmpty();
}
public MessageTask pop() {
return queue.peek();
}
public void removeHead() {
queue.poll();
}
}
MessageTask has a completion listener:
public interface OnMessageCompleteListener {
public void onMessageSendingComplete(MessageTask task);
public void onMessageSendingError(MessageTask task);
}
In MessageTask I notify the listener when the task is complete:
@Override
public void run() {
// Async call
listener.onMessageSendingComplete(thisInstance);
}
My problem is that when I notify a listener via onMessageSendingComplete(), I need to execute the scheduled MessageTasks in the same method:
@Override
public void onMessageSendingComplete(MessageTask task) {
scheduler.removeHead();
if(!scheduler.isEmpty()){
MessageTask readyToExecute = scheduler.pop();
// The MessageTask may not be finished yet.
// Will the executor still be busy, or not?
execute(readyToExecute);
}
}
execute method :
public void execute(MessageTask messageTask) {
if (!executorIsBusy && isNetworkAvailable) {
messageTask.setOnCompleteListener(this);
executor.execute(messageTask);
executorIsBusy = true;
} else {
scheduler.schedule(messageTask);
}
}
Where do I need to set executorIsBusy to false so that the ExecutorService will be ready to execute again?
Resolved.
After some thinking I found a solution.
I changed
@Override
public void onMessageSendingComplete(MessageTask task) {
scheduler.removeHead();
if(!scheduler.isEmpty()){
MessageTask readyToExecute = scheduler.pop();
// The MessageTask may not be finished yet.
// Will the executor still be busy, or not?
execute(readyToExecute);
}
}
to
@Override
public void onMessageSendingComplete(MessageTask task) {
scheduler.removeHead();
if(!scheduler.isEmpty()){
MessageTask readyToExecute = scheduler.pop();
readyToExecute.setOnCompleteListener(this);
executor.execute(readyToExecute);
} else {
executorIsBusy = false;
}
}
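One detail worth noting (my addition, not part of the original answer): executorIsBusy is written from the listener callback on the executor thread and read from whatever thread calls execute(), so it should at least be volatile, or an AtomicBoolean so the check-and-set is atomic. A sketch of the latter, reusing the fields from the post:
private final AtomicBoolean executorIsBusy = new AtomicBoolean(false);

public void execute(MessageTask messageTask) {
    // compareAndSet atomically claims the executor if it is currently free
    if (isNetworkAvailable && executorIsBusy.compareAndSet(false, true)) {
        messageTask.setOnCompleteListener(this);
        executor.execute(messageTask);
    } else {
        scheduler.schedule(messageTask);
    }
}

@Override
public void onMessageSendingComplete(MessageTask task) {
    scheduler.removeHead();
    if (!scheduler.isEmpty()) {
        MessageTask readyToExecute = scheduler.pop();
        readyToExecute.setOnCompleteListener(this);
        executor.execute(readyToExecute);
    } else {
        executorIsBusy.set(false); // executor is idle again, the next execute() may use it directly
    }
}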
ThreadPoolExecutor doc says
If corePoolSize or more threads are running, the Executor always
prefers queuing a request rather than adding a new thread.
If there are more than corePoolSize but less than maximumPoolSize
threads running, a new thread will be created only if the queue is
full.
Is there a way to make the executor prefer new thread creation until the max is reached, even if there are more than core-size threads running, and only then start queuing? Tasks would get rejected if the queue reached its maximum size. It would be nice if the timeout setting kicked in and removed threads down to core size after a busy burst has been handled. I see the reason behind preferring to queue, since it allows for throttling; however, this customization would additionally allow the queue to act mainly as a list of tasks yet to be run.
There is no way to get this exact behavior from a ThreadPoolExecutor.
But here are a couple of solutions:
Consider:
If fewer than corePoolSize threads are running, a new thread will be created for every item submitted until corePoolSize threads are running.
A new thread will only be created if the queue is full and fewer than maximumPoolSize threads are running.
So, wrap a ThreadPoolExecutor in a class which monitors how fast items are being submitted. Then change the core pool size to a higher value when many items are being submitted. This will cause a new thread to be created each time a new item is submitted.
When the submission burst is done, the core pool size needs to be manually reduced again so the threads can naturally time out. If you're worried the busy burst could end abruptly, causing the manual method to fail, be sure to use allowCoreThreadTimeOut. (A sketch of this approach follows the two options below.)
Create a fixed thread pool, and allowCoreThreadTimeOut(true)
Unfortunately this uses more threads during low submission bursts and keeps no idle threads during periods of zero traffic.
Use the 1st solution if you have the time, need, and inclination as it will handle a wider range of submission frequency and so is a better solution in terms of flexibility.
Otherwise use the 2nd solution.
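A rough sketch of the first solution (the wrapper class and its names are illustrative, not an existing API; imports from java.util.concurrent are assumed). It grows the core size while tasks are piling up and relies on allowCoreThreadTimeOut to shrink the pool again after the burst:
public class BurstAwareExecutor {
    private final ThreadPoolExecutor delegate;

    public BurstAwareExecutor(int coreSize, int maxSize) {
        delegate = new ThreadPoolExecutor(coreSize, maxSize, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());
        // lets the extra "core" threads created below die off once the burst is over
        delegate.allowCoreThreadTimeOut(true);
    }

    public synchronized void execute(Runnable task) {
        // tasks are queueing up and there is headroom: raise the core size so the
        // next submissions start new threads instead of being queued
        if (!delegate.getQueue().isEmpty()
                && delegate.getCorePoolSize() < delegate.getMaximumPoolSize()) {
            delegate.setCorePoolSize(delegate.getCorePoolSize() + 1);
        }
        delegate.execute(task);
    }
}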
Just do what Executors.newFixedThreadPool does and set core and max to the same value. Here's the newFixedThreadPool source from Java 6:
public static ExecutorService newFixedThreadPool(int nThreads) {
return new ThreadPoolExecutor(nThreads, nThreads,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>());
}
What you can do if you have an existing one:
ThreadPoolExecutor tpe = ... ;
tpe.setCorePoolSize(tpe.getMaximumPoolSize());
Edit: As William points out in the comments, this means that all threads are core threads, so none of the threads will time out and terminate. To change this behavior, just use ThreadPoolExecutor.allowCoreThreadTimeOut(true). This will make it so that the threads can time out and be swept away when the executor isn't in use.
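For example (assuming tpe is the existing pool; allowCoreThreadTimeOut requires a non-zero keep-alive, which newFixedThreadPool sets to 0):
tpe.setKeepAliveTime(60, TimeUnit.SECONDS);
tpe.allowCoreThreadTimeOut(true);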
It seems that your preference is minimal latency during times of low activity. For that I would just set the corePoolSize to the max and let the extra threads hang around. During high-activity times these threads will be there anyway. During low-activity times their existence won't have much impact. You can set the core thread timeout if you want them to die, though.
That way all the threads will always be available to execute a task as soon as possible.
CustomBlockingQueue
The idea below: the queue's offer() always returns false, so the ThreadPoolExecutor keeps creating threads up to maximumPoolSize; once the pool is saturated, the RejectedExecutionHandler puts the task on the wrapped queue via customOffer().
package com.gunjan;
import java.util.concurrent.BlockingQueue;
public abstract class CustomBlockingQueue<E> implements BlockingQueue<E> {
public BlockingQueue<E> blockingQueue;
public CustomBlockingQueue(BlockingQueue blockingQueue) {
this.blockingQueue = blockingQueue;
}
@Override
final public boolean offer(E e) {
// always refuse direct offers so the pool creates new threads up to maximumPoolSize
return false;
}
final public boolean customOffer(E e) {
// called from the RejectedExecutionHandler once the pool is saturated
return blockingQueue.offer(e);
}
}
ThreadPoolBlockingQueue
package com.gunjan;
import java.util.Collection;
import java.util.Iterator;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
public class ThreadPoolBlockingQueue<E> extends CustomBlockingQueue<E> {
public ThreadPoolBlockingQueue(BlockingQueue blockingQueue) {
super(blockingQueue);
}
@Override
public E remove() {
return this.blockingQueue.remove();
}
@Override
public E poll() {
return this.blockingQueue.poll();
}
@Override
public E element() {
return this.blockingQueue.element();
}
@Override
public E peek() {
return this.blockingQueue.peek();
}
@Override
public int size() {
return this.blockingQueue.size();
}
@Override
public boolean isEmpty() {
return this.blockingQueue.isEmpty();
}
@Override
public Iterator<E> iterator() {
return this.blockingQueue.iterator();
}
@Override
public Object[] toArray() {
return this.blockingQueue.toArray();
}
@Override
public <T> T[] toArray(T[] a) {
return this.blockingQueue.toArray(a);
}
@Override
public boolean containsAll(Collection<?> c) {
return this.blockingQueue.containsAll(c);
}
@Override
public boolean addAll(Collection<? extends E> c) {
return this.blockingQueue.addAll(c);
}
@Override
public boolean removeAll(Collection<?> c) {
return this.blockingQueue.removeAll(c);
}
@Override
public boolean retainAll(Collection<?> c) {
return this.blockingQueue.retainAll(c);
}
@Override
public void clear() {
this.blockingQueue.clear();
}
@Override
public boolean add(E e) {
return this.blockingQueue.add(e);
}
@Override
public void put(E e) throws InterruptedException {
this.blockingQueue.put(e);
}
@Override
public boolean offer(E e, long timeout, TimeUnit unit) throws InterruptedException {
return this.blockingQueue.offer(e, timeout, unit);
}
@Override
public E take() throws InterruptedException {
return this.blockingQueue.take();
}
@Override
public E poll(long timeout, TimeUnit unit) throws InterruptedException {
return this.blockingQueue.poll(timeout, unit);
}
@Override
public int remainingCapacity() {
return this.blockingQueue.remainingCapacity();
}
@Override
public boolean remove(Object o) {
return this.blockingQueue.remove(o);
}
@Override
public boolean contains(Object o) {
return this.blockingQueue.contains(o);
}
@Override
public int drainTo(Collection<? super E> c) {
return this.blockingQueue.drainTo(c);
}
@Override
public int drainTo(Collection<? super E> c, int maxElements) {
return this.blockingQueue.drainTo(c, maxElements);
}
}
RejectedExecutionHandlerImpl
package com.gunjan;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
public class RejectedExecutionHandlerImpl implements RejectedExecutionHandler {
@Override
public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
boolean inserted = ((CustomBlockingQueue) executor.getQueue()).customOffer(r);
if (!inserted) {
throw new RejectedExecutionException();
}
}
}
CustomThreadPoolExecutorTest
package com.gunjan;
import java.util.concurrent.*;
public class CustomThreadPoolExecutorTest {
public static void main(String[] args) throws InterruptedException {
LinkedBlockingQueue linkedBlockingQueue = new LinkedBlockingQueue<Runnable>(500);
CustomBlockingQueue customLinkedBlockingQueue = new ThreadPoolBlockingQueue<Runnable>(linkedBlockingQueue);
ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(5, 100, 60, TimeUnit.SECONDS,
customLinkedBlockingQueue, new RejectedExecutionHandlerImpl());
for (int i = 0; i < 750; i++) {
try {
threadPoolExecutor.submit(new Runnable() {
@Override
public void run() {
try {
Thread.sleep(1000);
System.out.println(threadPoolExecutor);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
});
} catch (RejectedExecutionException e) {
e.printStackTrace();
}
}
threadPoolExecutor.shutdown();
threadPoolExecutor.awaitTermination(Integer.MAX_VALUE, TimeUnit.MINUTES);
System.out.println(threadPoolExecutor);
}
}
Let's have one classic Executor in an application. Many parts of the application use this executor for computations; each computation can be cancelled, and for that I can call shutdown() or shutdownNow() on the Executor.
But I want to shut down only part of the tasks in the Executor. Sadly, I can't get access to the Future objects; they are a private part of the computation implementation (the computation is actually backed by the actor framework jetlang).
I want something like an Executor wrapper which I could pass to the computation and which would be backed by the real Executor. Something like this:
// main application executor
Executor applicationExecutor = Executors.newCachedThreadPool();
// starting computation
Executor computationExecutor = new ExecutorWrapper(applicationExecutor);
Computation computation = new Computation(computationExecutor);
computation.start();
// cancelling computation
computation.cancel();
// shutting down only computation tasks
computationExecutor.shutdown();
// applicationExecutor remains running and happy
Or any other idea?
For those who want a happy ending: here is the final solution, partially based on Ivan Sopov's answer. Luckily jetlang uses only the Executor interface (not ExecutorService) to run its tasks, so I made a wrapper class which supports stopping only the tasks created through this wrapper.
static class StoppableExecutor implements Executor {
final ExecutorService executor;
final List<Future<?>> futures = Lists.newArrayList();
volatile boolean stopped; // written by stop() and read by execute(), possibly on different threads
public StoppableExecutor(ExecutorService executor) {
this.executor = executor;
}
void stop() {
this.stopped = true;
synchronized (futures) {
for (Iterator<Future<?>> iterator = futures.iterator(); iterator.hasNext();) {
Future<?> future = iterator.next();
if (!future.isDone() && !future.isCancelled()) {
System.out.println(future.cancel(true));
}
}
futures.clear();
}
}
@Override
public void execute(Runnable command) {
if (!stopped) {
synchronized (futures) {
Future<?> newFuture = executor.submit(command);
for (Iterator<Future<?>> iterator = futures.iterator(); iterator.hasNext();) {
Future<?> future = iterator.next();
if (future.isDone() || future.isCancelled())
iterator.remove();
}
futures.add(newFuture);
}
}
}
}
Using this is pretty straightforward:
ExecutorService service = Executors.newFixedThreadPool(5);
StoppableExecutor executor = new StoppableExecutor(service);
// doing some actor stuff with executor instance
PoolFiberFactory factory = new PoolFiberFactory(executor);
// stopping tasks only created on executor instance
// executor service is happily running other tasks
executor.stop();
That's all. Works nicely.
How about having your Computation be a Runnable (and run using the provided Executor) until a boolean flag is set? Something along the lines of :
public class Computation implements Runnable
{
volatile boolean stopped;
public void run(){
while(!stopped){
//do magic
}
}
public void cancel(){stopped=true;}
}
What you are doing is essentially stopping the thread. However, it does not get garbage-collected, but is instead re-used because it is managed by the Executor. Look up "what is the proper way to stop a thread?".
EDIT: please note the code above is quite primitive in the sense it assumes the body of the while loop takes a short amount of time. If it does not, the check will be executed infrequently and you will notice a delay between canceling a task and it actually stopping.
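A variant sketch that also reacts to interruption, for the case where the loop body itself blocks (only the flag name is taken from the code above):
public void run() {
    while (!stopped && !Thread.currentThread().isInterrupted()) {
        // do one bounded chunk of work so the flags are re-checked frequently
    }
}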
Something like this?
You may do partial shutdown:
for (Future<?> future : %ExecutorServiceWrapperInstance%.getFutures()) {
if (%CONDITION%) {
future.cancel(true);
}
}
Here is the code:
package com.sopovs.moradanen;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
public class ExecutorServiceWrapper implements ExecutorService {
private final ExecutorService realService;
private List<Future<?>> futures = new ArrayList<Future<?>>();
public ExecutorServiceWrapper(ExecutorService realService) {
this.realService = realService;
}
@Override
public void execute(Runnable command) {
realService.execute(command);
}
@Override
public void shutdown() {
realService.shutdown();
}
@Override
public List<Runnable> shutdownNow() {
return realService.shutdownNow();
}
@Override
public boolean isShutdown() {
return realService.isShutdown();
}
@Override
public boolean isTerminated() {
return realService.isTerminated();
}
@Override
public boolean awaitTermination(long timeout, TimeUnit unit) throws InterruptedException {
return realService.awaitTermination(timeout, unit);
}
@Override
public <T> Future<T> submit(Callable<T> task) {
Future<T> future = realService.submit(task);
synchronized (this) {
futures.add(future);
}
return future;
}
public synchronized List<Future<?>> getFutures() {
return Collections.unmodifiableList(futures);
}
@Override
public <T> Future<T> submit(Runnable task, T result) {
Future<T> future = realService.submit(task, result);
synchronized (this) {
futures.add(future);
}
return future;
}
@Override
public Future<?> submit(Runnable task) {
Future<?> future = realService.submit(task);
synchronized (this) {
futures.add(future);
}
return future;
}
@Override
public <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks) throws InterruptedException {
List<Future<T>> future = realService.invokeAll(tasks);
synchronized (this) {
futures.addAll(future);
}
return future;
}
@Override
public <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks, long timeout, TimeUnit unit)
throws InterruptedException {
List<Future<T>> future = realService.invokeAll(tasks, timeout, unit);
synchronized (this) {
futures.addAll(future);
}
return future;
}
@Override
public <T> T invokeAny(Collection<? extends Callable<T>> tasks) throws InterruptedException, ExecutionException {
//don't know what to do here. Maybe this method is not needed by the framework
//then just throw new NotImplementedException();
return realService.invokeAny(tasks);
}
@Override
public <T> T invokeAny(Collection<? extends Callable<T>> tasks, long timeout, TimeUnit unit)
throws InterruptedException, ExecutionException, TimeoutException {
//don't know what to do here. Maybe this method is not needed by the framework
//then just throw new NotImplementedException();
return realService.invokeAny(tasks, timeout, unit);
}
}
Consider a Swing application with a JList or JTable: when the selection changes, a SwingWorker is started, loads related data from the database and updates the UI. This works fine and the UI is responsive.
But if the user is quickly changing the selected row (holding key-up/down), I want to be sure that the last selected row is the one that is loaded last, and I also don't want to query the DB in vain. So what I want is a single-threaded Executor with a LIFO queue of size 1: submitting a task to it removes any previously submitted task, it executes at most one task at a time, and it keeps at most one task waiting for execution.
I couldn't find anything like this in java.util.concurrent, so I wrote my own Executor. Was I right in doing that, or am I missing something from the concurrent package? Is the solution acceptable, or are there better ways of achieving what I want?
public class SingleLIFOExecutor implements Executor
{
private final ThreadPoolExecutor executor;
private Runnable lastCommand;
public SingleLIFOExecutor()
{
executor = new ThreadPoolExecutor(0, 1, 0, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<Runnable>(1));
}
@Override
public void execute(Runnable command)
{
executor.remove(lastCommand);
lastCommand = command;
executor.execute(command);
}
}
And here's an example showing how it could be used:
final Executor executor = new SingleLIFOExecutor();
JList jList = createMyList();
jList.addListSelectionListener(new ListSelectionListener()
{
@Override
public void valueChanged(ListSelectionEvent e)
{
if (!e.getValueIsAdjusting())
{
executor.execute(new MyWorker());
}
}
});
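For what it's worth, a close approximation seems possible with plain JDK classes as well: a pool of at most one thread, a one-slot queue, and ThreadPoolExecutor.DiscardOldestPolicy, which drops the oldest waiting task and retries when the slot is already taken, so the newest submission wins. A sketch (imports from java.util.concurrent assumed):
Executor executor = new ThreadPoolExecutor(
        0, 1,                                          // at most one worker thread
        1, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(1),           // at most one waiting task
        new ThreadPoolExecutor.DiscardOldestPolicy()); // discard the old waiting task, keep the new one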
A LinkedBlockingDeque still behaves as a FIFO queue when used with a ThreadPoolExecutor, because the executor adds tasks with offer(), which appends to the tail.
So instead I used a wrapper and used it with the ThreadPoolExecutor:
package util;
import java.util.Collection;
import java.util.Iterator;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.TimeUnit;
/**
* LIFO BlockingQueue to be used with the ExecutorService.
* @author Daniel
* @param <T>
*/
public class LinkedBlockingStack<T> implements BlockingQueue<T>{
private final LinkedBlockingDeque<T> stack = new LinkedBlockingDeque<T>();
@Override
public T remove() {
return stack.remove();
}
@Override
public T poll() {
return stack.poll();
}
@Override
public T element() {
return stack.element();
}
@Override
public T peek() {
return stack.peek();
}
@Override
public int size() {
return stack.size();
}
@Override
public boolean isEmpty() {
return stack.isEmpty();
}
@Override
public Iterator<T> iterator() {
return stack.iterator();
}
@Override
public Object[] toArray() {
return stack.toArray();
}
@Override
public <S> S[] toArray(final S[] a) {
return stack.toArray(a);
}
@Override
public boolean containsAll(final Collection<?> c) {
return stack.containsAll(c);
}
@Override
public boolean addAll(final Collection<? extends T> c) {
return stack.addAll(c);
}
@Override
public boolean removeAll(final Collection<?> c) {
return stack.removeAll(c);
}
@Override
public boolean retainAll(final Collection<?> c) {
return stack.retainAll(c); // was delegating to removeAll, which inverted the semantics
}
@Override
public void clear() {
stack.clear();
}
@Override
public boolean add(final T e) {
return stack.offerFirst(e); //Used offerFirst instead of add.
}
@Override
public boolean offer(final T e) {
return stack.offerFirst(e); //Used offerFirst instead of offer.
}
@Override
public void put(final T e) throws InterruptedException {
stack.put(e);
}
@Override
public boolean offer(final T e, final long timeout, final TimeUnit unit)
throws InterruptedException {
return stack.offerFirst(e, timeout, unit); // offerFirst keeps the LIFO semantics, matching offer() above
}
@Override
public T take() throws InterruptedException {
return stack.take();
}
@Override
public T poll(final long timeout, final TimeUnit unit)
throws InterruptedException {
return stack.poll(timeout, unit); // honour the timeout instead of returning immediately
}
@Override
public int remainingCapacity() {
return stack.remainingCapacity();
}
@Override
public boolean remove(final Object o) {
return stack.remove(o);
}
@Override
public boolean contains(final Object o) {
return stack.contains(o);
}
@Override
public int drainTo(final Collection<? super T> c) {
return stack.drainTo(c);
}
@Override
public int drainTo(final Collection<? super T> c, final int maxElements) {
return stack.drainTo(c, maxElements);
}
}
BlockingDeque I believe is what you want. It supports stacks.
What I have in my code:
private transient final ExecutorService threadPool=
new ThreadPoolExecutor(3, 10,10,
TimeUnit.MILLISECONDS,
new LinkedBlockingDeque<Runnable>());
This was the solution I implemented, works great for the problem I tried to solve :)
/**
* A "Single Last-In-First-Out Executor".
* <p>
* It maintains a queue of <b>one</b> task and only one task may execute simultaneously,
* submitting a new task to {@link #execute(Runnable)} will discard any previously submitted task that has not yet started.
*/
public class SingleLIFOExecutor implements Executor
{
private final ThreadPoolExecutor executor;
private Runnable lastCommand;
public SingleLIFOExecutor()
{
executor = new ThreadPoolExecutor(0, 1, 0, MILLISECONDS, new ArrayBlockingQueue<Runnable>(1));
}
/**
* @see java.util.concurrent.Executor#execute(java.lang.Runnable)
*/
@Override
public void execute(Runnable command)
{
executor.remove(lastCommand);
lastCommand = command;
executor.execute(command);
}
}