I have a cached thread pool where new tasks are spawned in a rather unpredictable manner. These tasks don't generate any results (they are Runnables rather than Callables).
I would like an action to be executed whenever the pool has no active workers.
However, I don't want to shut down the pool (and then use awaitTermination), because I would have to reinitialize it when a new task arrives (which could happen unpredictably, even during the shutdown).
I came up with the following possible approaches:
Have an extra thread (outside the pool) which is spawned whenever a new task is submitted AND the ThreadPoolExecutor has no active workers. It would then repeatedly poll getActiveCount() until it returns 0 and, once it does, execute the desired action.
Have some thread-safe queue (which one?), where the Future of every newly submitted task is added. Whenever there's at least one entry in the queue, spawn an extra thread (outside the pool) which waits until the queue is empty and then executes the desired action.
Use a PriorityBlockingQueue with the pool and assign the worker tasks higher priority than the task (now from inside the pool) which executes the desired action.
My question:
Is there some cleaner solution that uses a nice synchronization object (like CountDownLatch, which however cannot be used here, because I don't know the number of tasks in advance)?
If I were you, I would implement a decorator for your thread pool that keeps track of the scheduled tasks and slightly modifies the tasks that are run. This way, whenever a Runnable is scheduled, you can instead schedule another, decorated Runnable which is capable of tracking its own completion.
This decorator would look something like:
class RunnableDecorator implements Runnable {

    private final Runnable delegate;

    // this task counter must be incremented whenever
    // a task is scheduled by the thread pool
    private final AtomicInteger taskCounter;

    // Constructor omitted

    @Override
    public void run() {
        try {
            delegate.run();
        } finally {
            if (taskCounter.decrementAndGet() == 0) {
                // spawn idle action
            }
        }
    }
}
Of course, the thread pool has to increment the counter every time a task is scheduled, so this logic must be added to the thread pool rather than to the Runnable. It is then up to you to decide whether you want to run the idle action in the same thread, or whether you want to hand a reference to the executing thread pool so it can run the action as a new task. If you decide on the latter, note that the completion of the idle action would then trigger another idle action; you might therefore also provide a method for a sort of raw scheduling that bypasses the counter. You could also apply the decoration inside the thread pool's work queue, which however makes it harder to provide this kind of raw scheduling.
This approach is non-blocking and does not mess with your code base too much. Note that the thread pool does not run the idle action when it is first created, even though it is empty by definition at that point.
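For illustration, a minimal sketch of such a decorating executor, assuming the idle action is handed in as a plain Runnable (the class and field names are my own, and the decoration is inlined as a lambda instead of a separate RunnableDecorator class):

import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicInteger;

class IdleAwareExecutor implements Executor {

    private final Executor delegate;
    private final Runnable idleAction;
    private final AtomicInteger taskCounter = new AtomicInteger();

    IdleAwareExecutor(Executor delegate, Runnable idleAction) {
        this.delegate = delegate;
        this.idleAction = idleAction;
    }

    @Override
    public void execute(Runnable command) {
        // count the task before it is handed to the pool
        taskCounter.incrementAndGet();
        delegate.execute(() -> {
            try {
                command.run();
            } finally {
                // the last task to finish triggers the idle action
                if (taskCounter.decrementAndGet() == 0) {
                    idleAction.run();
                }
            }
        });
    }
}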
If you look at the source behind Executors.newCachedThreadPool(), you can see how it's created with a ThreadPoolExecutor. Using that, override the execute() and afterExecute() methods to add a counter. This way the increment and decrement logic is isolated in one location. For example:
ExecutorService executor = new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS,
        new SynchronousQueue<Runnable>()) {

    private final AtomicInteger counter = new AtomicInteger(0);

    @Override
    public void execute(Runnable r) {
        counter.incrementAndGet();
        super.execute(r);
    }

    @Override
    public void afterExecute(Runnable r, Throwable t) {
        if (counter.decrementAndGet() == 0) {
            // thread pool is idle - do something
        }
        super.afterExecute(r, t);
    }
};
I have an ExecutorService to execute my tasks concurrently. Most of these tasks are simple actions that take ~300 ms each to complete. But a few of these tasks are background processing queues that take in new sub-tasks all the time and execute them in order. These background tasks remain active as long as there are normal tasks running.
The thread pool is generated through one of the Executors' factory methods (I don't know which yet) with a user-specified thread count. My fear is that the following situation might happen: there are fewer threads than there are background queues. At a given moment, all background queues are working, blocking all the threads of the ExecutorService. No normal tasks would then be started and the program would hang forever.
Is there a possibility this might happen and how can I avoid it? I'm thinking of a possibility to interrupt the background tasks to leave the place to the normal ones.
The goal is to limit the number of threads in my application because Google said having a lot of threads is bad and having them idle for most of the time is bad too.
There are ~10000 tasks that are going to be submitted in a very short amount of time at the beginning of the program's execution. Around 50 background task queues are needed, and most of their time will be spent waiting for a background job to do.
Don't mix long-running tasks with short-running tasks in the same ExecutorService.
Use two different ExecutorService instances with the right pool sizes. Even if you set the size to 50 for the background pool with its long-running tasks, the performance of the pool is not optimal, since the number of available cores (2, 4, 8, etc.) is nowhere near that number.
I would create two separate ExecutorService instances, each initialized with Runtime.getRuntime().availableProcessors()/2, as sketched below.
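A rough sketch of that split (the variable names and the Math.max guard are mine; adjust the sizes to your needs):

int halfCores = Math.max(1, Runtime.getRuntime().availableProcessors() / 2);

// pool for the short (~300 ms) normal tasks
ExecutorService normalTasks = Executors.newFixedThreadPool(halfCores);

// separate pool for the long-running background queues, so they cannot starve the normal tasks
ExecutorService backgroundQueues = Executors.newFixedThreadPool(halfCores);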
Have a look at the posts below for more details on effectively utilizing the available cores:
How to implement simple threading with a fixed number of worker threads
Dynamic Thread Pool
You can have an unlimited number of threads; check out the cached thread pool:
Creates a thread pool that creates new threads as needed, but will
reuse previously constructed threads when they are available. These
pools will typically improve the performance of programs that execute
many short-lived asynchronous tasks. Calls to execute will reuse
previously constructed threads if available. If no existing thread is
available, a new thread will be created and added to the pool. Threads
that have not been used for sixty seconds are terminated and removed
from the cache. Thus, a pool that remains idle for long enough will
not consume any resources. Note that pools with similar properties but
different details (for example, timeout parameters) may be created
using ThreadPoolExecutor constructors.
Another option is to create two different pools and reserve one for priority tasks.
The solution is to have the background tasks stop, rather than sit idle, when there is no work, and to restart them when new tasks arrive.
public class BackgroundQueue implements Runnable {

    private final ExecutorService service;
    private final Queue<Runnable> tasks = new ConcurrentLinkedQueue<>();
    private final AtomicBoolean running = new AtomicBoolean(false);
    private Future<?> future;

    public BackgroundQueue(ExecutorService service) {
        this.service = Objects.requireNonNull(service);
        // Create a Future that immediately returns null, so awaitQueueTermination()
        // works even before the first task has been submitted
        FutureTask<Void> f = new FutureTask<>(() -> null);
        f.run();
        future = f;
    }

    public void awaitQueueTermination() throws InterruptedException, ExecutionException {
        do {
            future.get();
        } while (!tasks.isEmpty() || running.get());
    }

    public synchronized void submit(Runnable task) {
        tasks.add(task);
        // start a worker only if none is currently running
        if (running.compareAndSet(false, true))
            future = service.submit(this);
    }

    @Override
    public void run() {
        // keep draining the queue; once it is empty, flip running back to false and exit
        while (!running.compareAndSet(tasks.isEmpty(), false)) {
            tasks.remove().run();
        }
    }
}
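A possible way to use this class; the pool size and the sub-task bodies here are just placeholders:

public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    BackgroundQueue background = new BackgroundQueue(pool);

    // sub-tasks submitted to the same BackgroundQueue run in order,
    // but the worker thread is released whenever the queue runs dry
    background.submit(() -> System.out.println("first sub-task"));
    background.submit(() -> System.out.println("second sub-task"));

    background.awaitQueueTermination(); // blocks until the queue is drained
    pool.shutdown();
}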
Currently I am experimenting with concurrency in Java/JavaFX. Printing must run in a different thread, otherwise it freezes the JavaFX application thread for a couple of seconds. Right now my printing is done with this simplified example:
public void print(PrintContent pt) {
    setPrintContent(pt);
    Thread thread = new Thread(this);
    thread.start();
}

@Override
public void run() {
    // send content to printer
}
With this code I am sending many print jobs to my printer in parallel. As a result I get an error telling me that my printer can only handle one print job at a time. Since I know that threads cannot be reused, I would like to know whether there is a way to queue up the jobs so that my printer only handles one print job at a time.
Thank you very much for your effort and your time.
Use a single threaded executor to execute the print jobs. It will create one (and only one) background thread and queue the jobs:
// it might be better not to make this static; but you need to ensure there is
// only one instance of this executor:
private static final Executor PRINT_QUEUE = Executors.newSingleThreadExecutor();
// ...
public void print(PrintContent pt) {
    PRINT_QUEUE.execute(() -> {
        // send content to printer
    });
}
~~> WAY 1
You can implement your own BlockingQueue (there are very useful tutorials on this) or use a default implementation from the Java libraries.
So, after reading the above, you add a method to your class like:
public void addJob(Object job) throws InterruptedException {
    queue.put(job);
}
Secondly, you implement a Thread that runs an infinite while loop. Inside it you call:
queue.take();
When the queue is empty, this thread blocks until a new object is added, so you don't have to worry about wasting CPU time.
Finally, you can set an upper bound, so that the queue can hold at most, say, 27 items.
Note that in case of thread failure you have to recreate the thread manually. A small sketch of this approach follows.
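Putting WAY 1 together, a small sketch under the assumption that each job is a Runnable and that 27 is the upper bound mentioned above (the class name PrintQueue is my own):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PrintQueue {

    // bounded queue: put() blocks when 27 jobs are already waiting
    private final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(27);

    public PrintQueue() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    // take() blocks while the queue is empty, so no CPU time is wasted
                    queue.take().run();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // worker asked to stop
            }
        });
        worker.setDaemon(true); // don't keep the JVM alive just for the print queue
        worker.start();
    }

    public void addJob(Runnable job) throws InterruptedException {
        queue.put(job);
    }
}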
~~> WAY 2 (Better Approach)
You can use an ExecutorService:
ExecutorService executorService1 = Executors.newSingleThreadExecutor();
From documentation:
Creates an Executor that uses a single worker thread operating off an
unbounded queue. (Note however that if this single thread terminates
due to a failure during execution prior to shutdown, a new one will
take its place if needed to execute subsequent tasks.) Tasks are
guaranteed to execute sequentially, and no more than one task will be
active at any given time.
With the method below you can retrieve a result once the job has completed successfully.

Future<Object> future = executorService.submit(new Callable<Object>() {
    public Object call() throws Exception {
        System.out.println("Asynchronous Callable");
        return "Callable Result";
    }
});
System.out.println("future.get() = " + future.get());

If you submit a Runnable instead of a Callable, future.get() returns null once the job has finished successfully.
Remember to call executorService.shutdown(), because the active threads inside this ExecutorService may prevent the JVM from shutting down.
Full tutorial here
We have an application which processes items and, on each iteration, starts a thread to run an update against another database. It is not hugely important what happens on that other thread; it is a very straightforward update.
Our original intention was (by using a thread) to make sure the main processing is not held up by initializing a connection to this other DB and running the update.
Yesterday we had an issue where (for a yet unknown reason) the database slowed down and the number of parallel threads went through the roof, resulting in 1000+ connections to this DB. So we realized we need more control over the threads.
I need a lib or tool for our software which can:
1) Put threads / jobs / tasks (anything; we can rewrite the code if required, we have Thread objects at the minute) into a queue-like system.
2) We can define how many threads are running at most at the same time.
3) After a thread has finished, it is removed from the queue so the GC can collect all the entities involved.
I was doing a fair bit of reading and I found ExecutorService (Executors.newFixedThreadPool(5)), but my problem is that it seems to fail requirement 3), because according to the javadocs:
The threads in the pool will exist until it is explicitly shutdown.
Which, I believe, means that if the app keeps adding threads, the threads will exist until the application is restarted (or until I shut down and reinstantiate the ExecutorService, but that seems like a hack to me).
Am I right in thinking Executors.newFixedThreadPool(5) fails my requirement 3)?
Am I actually approaching the problem from the right end? Do I need threads or something different?
The sentence you are afraid of:
The threads in the pool will exist until it is explicitly shutdown
describes only pools created by calls to Executors.newFixedThreadPool(). To keep more control over the thread pool's behavior, use the ThreadPoolExecutor constructor explicitly, e.g.:
new ThreadPoolExecutor(1,                      // core (minimal) pool size
        10,                                    // maximum pool size
        30, TimeUnit.SECONDS,                  // idle threads above the core die after 30 seconds
        new ArrayBlockingQueue<Runnable>(100)) // work queue; ArrayBlockingQueue requires a capacity (100 is just an example)
Here you need to understand the difference between a Thread and a Runnable/Callable task. The meaning of "The threads in the pool will exist until it is explicitly shutdown" is that, if you use Executors.newFixedThreadPool(5), there will be 5 threads in the pool at any point in time. The work that you want these threads to do is submitted as tasks (Runnable/Callable). So essentially, at most 5 threads will be executing via this thread pool at any point in time, which in your case means at most 5 connections.
The threads will stay there (waiting for other tasks to run), but they won't hold on to all the data that you put there. When a thread in the thread pool has executed a task, it will take the next task and no longer reference the previous one.
So your fears are baseless, unless you explicitly keep references to the tasks.
Use the ScheduledExecutorService with a fixed pool of threads for however many connections you need.
Have a BlockingQueue that you put requests in; the worker threads wait on the queue and process the requests as they appear.
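As a rough sketch of that idea, assuming 5 is the connection cap you want (the class and method names are only illustrative):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

class UpdateDispatcher {

    // a fixed number of worker threads; extra requests wait in the executor's internal queue
    private final ScheduledExecutorService workers = Executors.newScheduledThreadPool(5);

    void submitUpdate(Runnable update) {
        workers.submit(update); // runs as soon as one of the 5 threads is free
    }
}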
The ExecutorService is the way to go. It provides you with an interface (Future) to the state of the underlying task, the ability to detect exceptions, and a way to return a value from the completed task.
Here is a simple example of how to use ExecutorService and the Future interface.
public class Updater implements Callable<Boolean> {

    public Updater() { }

    @Override
    public Boolean call() throws Exception {
        System.out.println("Hello, World!");
        return true;
    }
}

public class Main {

    public Main() { }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        boolean again = true;
        Future<Boolean> update = null;
        do {
            if (again) {
                update = pool.submit(new Updater());
                again = false; // don't submit another update until this one has finished
            }
            /* Do other work while waiting for the update to finish */
            if (update.isDone()) { // may be because of completion or an exception
                try {
                    again = update.get(); // would block if the Updater were still running
                } catch (ExecutionException ee) { // thrown by get() if an exception occurred in Updater.call()
                    again = false;
                    ee.printStackTrace();
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    again = false;
                }
            }
        } while (true);
    }
}
The example above will start an update to your database if the last update succeeded without an exception. This way you are controlling how many threads are trying to connect, and catching any errors that are causing the update to fail.
I have a question about Java threads. Here is my scenario:
I have a thread calling a method that could take a while. The thread stays in that method until the result arrives. If I send another request to that method in the same way, there are now two threads running (provided the first has not returned its result yet). But I want to give priority to the last thread and don't want the results from the previously started threads. So how can I get rid of the earlier threads when I don't have a stop method?
The standard design pattern is to use a flag in the thread that can be set to stop it:

public class MyThread extends Thread {

    private volatile boolean running = true;

    // Thread.stop() is final (and deprecated), so give the method a different name
    public void requestStop() {
        running = false;
    }

    @Override
    public void run() {
        while (running) {
            // do your things
        }
    }
}

This way you can gracefully terminate the thread, i.e. without throwing an InterruptedException.
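For completeness, a sketch of how the calling code could retire the old worker and hand priority to a new one, using the requestStop() method from the class above:

MyThread current = new MyThread();
current.start();

// a newer request arrives: retire the old worker and start a fresh one
current.requestStop();   // the old run() loop exits after its current iteration
current = new MyThread();
current.start();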
The best way really depends on what that method does. If it waits on something, chances are an interrupt will result in an InterruptedException which you handle and cleanly exit. If it's doing something busy, it won't:
class Scratchpad {

    public static void main(String[] a) {
        Thread t = new Thread(new Runnable() {
            public void run() { doWork(); }
        });
        t.start();
        try {
            Thread.sleep(50);
        } catch (InterruptedException ie) {}
        t.interrupt(); // has no effect here: doWork() never blocks or checks the interrupt flag
    }

    private static void doWork() {
        // busy work that neither blocks nor checks Thread.interrupted()
        for (long i = 1; i != 0; i *= 5);
    }
}
In the case above, the only viable solution really is a flag variable to break out of the loop early on a cancel, à la @inflagranti's answer above.
Another option for event-driven architectures is the poison pill: if your method is waiting on a blocking queue for a new item, then you can have a global constant item called the "poison pill"; when it is consumed (dequeued), you kill the thread:
try {
    while (true) {
        SomeType next = queue.take();
        if (next == POISON_PILL) {
            return;
        }
        consume(next);
    }
} catch //...
EDIT:
It looks like what you really want is an executor service. When you submit a job to an executor service, you get back a Future which you can use to track results and cancel the job.
You can interrupt a Thread; its execution chain will throw an InterruptedException most of the time (see the special cases in the documentation).
If you just want to slow down the other thread and not have it exit, you can take some other approach...
For one thing, just like the exit flag, you can have a de-prioritize variable that, when set, puts your thread to sleep for 100 ms on each iteration. This effectively stops it while your other thread works; when you re-prioritize it, it goes back to full speed.
However, this is a little sloppy. Since you only ever want one thing running, but you want it to remember to process the others when the priority one is done, you may want to place your processing into a class with a .process() method that is called repeatedly. When you wish to suspend processing of that request, you simply stop calling .process() on that object for a while.
In this way you can implement a stack of such objects, and your thread would just execute stack.peek().process() every iteration, so pushing a new, more important task onto the stack would automatically stop any previous task from operating.
This leads to much more flexible scheduling: for instance, you could have process() return false if there is nothing for it to do, at which point your scheduler might go to the next item on the stack and try its process() method, giving you serious multitasking ability in a single thread without overtaxing your resources (the network, I'm guessing). A bare-bones sketch of this idea follows.
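A bare-bones sketch of that idea, with an invented Task interface and scheduler class purely for illustration:

import java.util.ArrayDeque;
import java.util.Deque;

interface Task {
    // return true if some work was done, false if there was nothing to do
    boolean process();
}

class SingleThreadScheduler implements Runnable {

    private final Deque<Task> stack = new ArrayDeque<>();
    private volatile boolean running = true;

    synchronized void push(Task task) {
        // a newly pushed task automatically pre-empts the tasks below it
        stack.push(task);
    }

    void shutdown() {
        running = false;
    }

    @Override
    public void run() {
        while (running) {
            boolean didWork = false;
            synchronized (this) {
                // try the most important (most recently pushed) task first,
                // falling through to older ones if it has nothing to do
                for (Task task : stack) {
                    if (task.process()) {
                        didWork = true;
                        break;
                    }
                }
            }
            if (!didWork) {
                try {
                    Thread.sleep(10); // nothing to do anywhere; avoid busy-spinning
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
    }
}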
There is a setPriority(int) method on Thread. You can set the first thread's priority like this:

Thread t = new Thread(yourRunnable);
t.start();
t.setPriority(Thread.MIN_PRIORITY); // the range goes from 1 (MIN_PRIORITY) to 10 (MAX_PRIORITY)

But this won't kill your thread. If you have only two threads using your runnable, then this is a good solution. But if you create threads in a loop and you always set the priority of the last thread to minimum, you will get a lot of threads.
If this is what your application is going to do, take a look at a thread pool. There is no class literally called ThreadPool in the Java API (the closest is java.util.concurrent.ThreadPoolExecutor), so you may have to create the priority-managing part yourself.
A thread pool is another thread that manages all your other threads the way you want. You can set a maximum number of running threads, and within that pool you can implement a system that manages the thread priorities automatically. For example, you can make older threads gain more priority, so that you can properly end them.
So, if you know how to work with a thread pool, it can be very interesting.
According to the java.lang.Thread API, you should use the interrupt() method and check the isInterrupted() flag while you're doing a time-consuming cancelable operation (a small sketch follows the list below). This approach deals with several kinds of waiting situations:
1. The wait(), join() and sleep() methods will throw an InterruptedException after you invoke the interrupt() method.
2. If the thread is blocked in a java.nio.channels.Selector, it will finish the selector operation.
3. If you're waiting on I/O, the thread will receive a ClosedByInterruptException, but in this case your I/O facility must implement the InterruptibleChannel interface.
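A minimal sketch of that pattern, with the loop body left as a placeholder for the real work:

// a worker that honours interrupt() by polling the interrupt flag
Runnable cancelableWork = () -> {
    while (!Thread.currentThread().isInterrupted()) {
        // ... do one chunk of the time-consuming operation ...
    }
    // clean up and return once interrupt() has been called on this thread
};

Thread worker = new Thread(cancelableWork);
worker.start();
// later, from another thread:
worker.interrupt();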
If it's not possible to interrupt this action in a generic way, you could simply abandon the previous thread and get results from a new one. You can do this by means of java.util.concurrent.Future and java.util.concurrent.ExecutorService.
Consider the following code snippet:
public class RequestService<Result> {

    private ExecutorService executor = Executors.newFixedThreadPool(3);
    private Future<Result> result;

    public Future<Result> doRequest() {
        if (result != null) {
            result.cancel(true); // abandon the previous request
        }
        result = executor.submit(new Callable<Result>() {
            public Result call() throws Exception {
                // do your long-running service call here
                return null; // placeholder so the snippet compiles
            }
        });
        return result;
    }
}
The Future object here represents the result of the service call. If you invoke the doRequest method one more time, it attempts to cancel the previous task and then submits a new request. As the thread pool contains more than one thread, you won't have to wait until the previous request is cancelled; the new request is submitted immediately and the method returns a new Future for it.
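For illustration, a possible way of using it, assuming the service returns a String (so the call() placeholder above would have to return one):

public static void main(String[] args) throws Exception {
    RequestService<String> service = new RequestService<>();

    Future<String> first = service.doRequest();
    // a newer request arrives before the first one has finished:
    Future<String> second = service.doRequest(); // cancels the first task, submits a new one

    System.out.println("latest result: " + second.get()); // only the latest request is awaited
}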
I have an Executors.newFixedThreadPool(1) that I send several different tasks to (all implementing Runnable), and they get queued up and run sequentially, correct? What is the best way to only allow one of each task to be either running or queued up at any one time? I want to ignore all tasks sent to the ExecutorService that are already in the queue.
I have an Executors.newFixedThreadPool(1) that I send several different tasks to (all implementing Runnable), and they get queued up and run sequentially, correct?
Yes, by default the underlying thread pool executor is created with a LinkedBlockingQueue. Since you have only one worker thread and the queue is used in a FIFO manner, tasks will be executed in order.
What is the best way to only allow one of each task to be either running or queued up at one time? I want to ignore all tasks sent to the ExecutorService that are already in the queue.
The easiest way I can think of is to create your own ExecutorService which extends ThreadPoolExecutor. Then override the execute() method such that you call BlockingQueue#contains(Object) prior to delegating to the super class's execute.
public class LimitedExecutorService extends ThreadPoolExecutor {

    public LimitedExecutorService(final int nThreads) {
        super(nThreads, nThreads, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
    }

    @Override
    public void execute(Runnable command) {
        if (!this.getQueue().contains(command)) {
            super.execute(command);
        } else {
            // reject: an equal task is already queued
        }
    }
}
Note: Many people will argue that you should not extend ThreadPoolExecutor, but should instead implement ExecutorService and have your class contain a ThreadPoolExecutor to which you delegate (i.e. composition).
The thread pool does not guarantee task ordering.
As for the second part, you can set a rejected execution handler such as ThreadPoolExecutor.DiscardPolicy on the ThreadPoolExecutor.