I have a fixedThreadPool that I am using to run a bunch of worker threads to achieve parallel execution of a task with many components.
When all threads have finished, I retrieve their results (which are quite large) using a method (getResult) and write them to a file.
Ultimately, to save memory and be able to see intermediate results, I'd like each thread to write its result to the file as soon as it finishes execution and then free its memory.
Ordinarily, I'd add code to that effect at the end of the run() method. However, certain other objects in this class also call these threads, but DO NOT want them to write their results to file - instead they use their results to perform other calculations, which are eventually written to file.
So, I was wondering if it's possible to attach a callback function to the event of a thread finishing using the ExecutorService. That way, I can immediately retrieve its result and free the memory in that scenario, but not break the code when those threads are used in other scenarios.
Is such a thing possible?
If using Google Guava is an option, you could utilize the ListenableFuture interface in the following manner:
Convert an ExecutorService to a ListeningExecutorService via MoreExecutors.listeningDecorator(existingExecutorService)
The submit(Callable<V>) method of ListeningExecutorService has been narrowed to return a ListenableFuture, which is a subinterface of Future.
ListenableFuture has an addListener() method so you can register a callback to be run when the future is completed, as sketched below.
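For illustration, here is a minimal sketch of those steps, assuming Guava is on the classpath; the "big result" string and the file-writing comment stand in for your real worker and its large result.

import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ListenableFutureSketch {
    public static void main(String[] args) {
        ExecutorService existingExecutorService = Executors.newFixedThreadPool(4);
        ListeningExecutorService service = MoreExecutors.listeningDecorator(existingExecutorService);

        // Submit the long-running work; the lambda stands in for your worker's large result.
        ListenableFuture<String> future = service.submit(() -> "big result");

        // Futures.addCallback is a convenience wrapper around ListenableFuture#addListener.
        Futures.addCallback(future, new FutureCallback<String>() {
            @Override
            public void onSuccess(String result) {
                // Write the result to file here, then let it go out of scope so it can be freed.
                System.out.println("got: " + result);
            }

            @Override
            public void onFailure(Throwable t) {
                t.printStackTrace();
            }
        }, MoreExecutors.directExecutor());

        service.shutdown();
    }
}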
In Java 8+ you can add a callback for when a task completes by using CompletableFuture, as in the following, where t is the result of your long-running computation:
CompletableFuture.supplyAsync(() -> {
    T t = new T();
    // do something
    return t;
}).thenAccept(t -> {
    // process t
});
If you want to use callbacks with just the plain Future API (the pre-CompletableFuture approach), you could do something like the following (lambdas are used here only for brevity):
int x = 10;
ExecutorService fixedThreadPool = Executors.newFixedThreadPool(x);
Future<T> result = fixedThreadPool.submit(() -> {
    T t = new T();
    // do calculation
    return t;
});
fixedThreadPool.submit(() -> {
    long minutesToWait = 5;
    T t = null;
    try {
        t = result.get(minutesToWait, TimeUnit.MINUTES);
    } catch (InterruptedException | ExecutionException | TimeoutException e) {
        LOGGER.error(e);
    }
    if (t != null) {
        // process t
    }
});
ExecutorService#submit returns a Future<T>, which lets you retrieve the result, and the Future#get method will block execution until the computation has completed. Example:
ExecutorService executor = Executors.newFixedThreadPool(10);
Future<Long> future = executor.submit(new Callable<Long>(){
@Override
public Long call() throws Exception {
long sum = 0;
for (long i = 0; i <= 10000000L; i++) {
sum += i;
}
return sum;
}
});
Long result = future.get();
System.out.println(result);
So, I was wondering if it's possible to attach a callback function to the event of a thread finishing using the ExecutorService.
Not directly, no, but there are a couple of ways you could accomplish this. The easiest way that comes to mind is to wrap your Runnable in another Runnable that does the reaping of the results.
So you'd do something like:
threadPool.submit(new ResultPrinter(myRunnable));
...
private static class ResultPrinter implements Runnable {
private final MyRunnable myRunnable;
public ResultPrinter(MyRunnable myRunnable) {
this.myRunnable = myRunnable;
}
public void run() {
myRunnable.run();
Results results = myRunnable.getResults();
// print results;
}
}
Project Loom
Project Loom will hopefully bring new features to the concurrency facilities of Java. Experimental builds are available now, based on early-access Java 17, and the Loom team is soliciting feedback. For more info, see any of the most recent videos and articles by members of the team such as Ron Pressler or Alan Bateman. Loom has evolved, so study the most recent resources.
One convenient feature of Project Loom is making ExecutorService AutoCloseable. This means we can use try-with-resources syntax to automatically shut down an executor service. The flow of control blocks at the end of the try block until all the submitted tasks are done/failed/canceled; after that, the executor service is automatically closed. This simplifies our code and makes our intent to wait for tasks to complete obvious from the code's visual structure.
Another important feature of Project Loom is virtual threads (a.k.a. fibers). Virtual threads are lightweight in terms of both memory and CPU.
Regarding memory, each virtual thread gets a stack that grows and shrinks as needed.
Regarding CPU, each of many virtual threads rides on top of any of several platform/kernel threads. This makes blocking very cheap. When a virtual thread blocks, it is “parked” (set aside) so that another virtual thread may continue to execute on the “real” platform/kernel thread.
Being lightweight means we can have many virtual threads at a time, millions even.
➥ The challenge of your Question is to react immediately when a submitted task is ready to return its result, without waiting for all the other tasks to finish. This is much simpler with Project Loom technology.
Just call get on each Future on yet another thread
Because we have nearly endless numbers of threads, and because blocking is so very cheap, we can submit a task that simply calls Future#get to wait for a result on every Future returned by every Callable we submit to an executor service. The call to get blocks, waiting until the Callable from whence it came has finished its work and returned a result.
Normally, we would want to avoid assigning a Future#get call to a conventional background thread. That thread would halt all further work until the blocked get method returns. But with Project Loom, that blocking call is detected, and its thread is “parked”, so other threads may continue. And when that blocked-call eventually returns, that too is detected by Loom, causing the no-longer-blocked-task’s virtual thread to soon be scheduled for further execution on a “real” thread. All this parking and rescheduling happens rapidly and automatically, with no effort on our part as Java programmers.
To demonstrate, the results of my tasks are stuffed into a concurrent map. To show that this is happening as soon as results are available, I override the put method on the ConcurrentSkipListMap class to do a System.out.println message.
The full example app is shown below, but the 3 key lines are as follows. Notice how we instantiate a Callable that sleeps a few seconds and then returns the current moment as an Instant object. As we submit each of those Callable objects, we get back a Future object. For each returned Future, we submit another task to our same executor service that merely calls Future#get, waits for a result, and eventually posts that result to our results map.
final Callable < Instant > callable = new TimeTeller( nth );
final Future < Instant > future = executorService.submit( callable ); // Submit first task: a `Callable`, an instance of our `TimeTeller` class.
executorService.submit( ( ) -> results.put( nth , future.get() ) ); // Submit second task: a `Runnable` that merely waits for our first task to finish, and put its result into a map.
Caveat: I am no expert on concurrency. But I believe my approach here is sound.
Caveat: Project Loom is still in the experimental stage, and is subject to change in both its API and its behavior.
package work.basil.example.callbacks;
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.*;
public class App
{
public static void main ( String[] args )
{
App app = new App();
app.demo();
}
private void demo ( )
{
System.out.println( "INFO - Starting `demo` method. " + Instant.now() );
int limit = 10;
ConcurrentNavigableMap < Integer, Instant > results = new ConcurrentSkipListMap <>()
{
@Override
public Instant put ( Integer key , Instant value )
{
System.out.println( "INFO - Putting key=" + key + " value=" + value + " at " + Instant.now() );
return super.put( key , value );
}
};
try (
ExecutorService executorService = Executors.newVirtualThreadExecutor() ;
)
{
for ( int i = 0 ; i < limit ; i++ )
{
final Integer nth = Integer.valueOf( i );
final Callable < Instant > callable = new TimeTeller( nth );
final Future < Instant > future = executorService.submit( callable ); // Submit first task: a `Callable`, an instance of our `TimeTeller` class.
executorService.submit( ( ) -> results.put( nth , future.get() ) ); // Submit second task: a `Runnable` that merely waits for our first task to finish, and put its result into a map.
}
}
// At this point flow-of-control blocks until:
// (a) all submitted tasks are done/failed/canceled, and
// (b) the executor service is automatically closed.
System.out.println( "INFO - Ending `demo` method. " + Instant.now() );
System.out.println( "limit = " + limit + " | count of results: " + results.size() );
System.out.println( "results = " + results );
}
record TimeTeller(Integer id) implements Callable < Instant >
{
@Override
public Instant call ( ) throws Exception
{
// To simulate work that involves blocking, sleep a random number of seconds.
Duration duration = Duration.ofSeconds( ThreadLocalRandom.current().nextInt( 1 , 55 ) );
System.out.println( "id = " + id + " ➠ duration = " + duration );
Thread.sleep( duration );
return Instant.now();
}
}
}
When run:
INFO - Starting `demo` method. 2021-03-07T07:51:03.406847Z
id = 1 ➠ duration = PT27S
id = 2 ➠ duration = PT4S
id = 4 ➠ duration = PT6S
id = 5 ➠ duration = PT16S
id = 6 ➠ duration = PT34S
id = 7 ➠ duration = PT33S
id = 8 ➠ duration = PT52S
id = 9 ➠ duration = PT17S
id = 0 ➠ duration = PT4S
id = 3 ➠ duration = PT41S
INFO - Putting key=2 value=2021-03-07T07:51:07.443580Z at 2021-03-07T07:51:07.444137Z
INFO - Putting key=0 value=2021-03-07T07:51:07.445898Z at 2021-03-07T07:51:07.446173Z
INFO - Putting key=4 value=2021-03-07T07:51:09.446220Z at 2021-03-07T07:51:09.446623Z
INFO - Putting key=5 value=2021-03-07T07:51:19.443060Z at 2021-03-07T07:51:19.443554Z
INFO - Putting key=9 value=2021-03-07T07:51:20.444723Z at 2021-03-07T07:51:20.445132Z
INFO - Putting key=1 value=2021-03-07T07:51:30.443793Z at 2021-03-07T07:51:30.444254Z
INFO - Putting key=7 value=2021-03-07T07:51:36.445371Z at 2021-03-07T07:51:36.445865Z
INFO - Putting key=6 value=2021-03-07T07:51:37.442659Z at 2021-03-07T07:51:37.443087Z
INFO - Putting key=3 value=2021-03-07T07:51:44.449661Z at 2021-03-07T07:51:44.450056Z
INFO - Putting key=8 value=2021-03-07T07:51:55.447298Z at 2021-03-07T07:51:55.447717Z
INFO - Ending `demo` method. 2021-03-07T07:51:55.448194Z
limit = 10 | count of results: 10
results = {0=2021-03-07T07:51:07.445898Z, 1=2021-03-07T07:51:30.443793Z, 2=2021-03-07T07:51:07.443580Z, 3=2021-03-07T07:51:44.449661Z, 4=2021-03-07T07:51:09.446220Z, 5=2021-03-07T07:51:19.443060Z, 6=2021-03-07T07:51:37.442659Z, 7=2021-03-07T07:51:36.445371Z, 8=2021-03-07T07:51:55.447298Z, 9=2021-03-07T07:51:20.444723Z}
Related
I want to create an ExecutorService in Java which, when given a task, will stop and discard its current task (if there is one) and execute the given task. When a new task is given to this ExecutorService, it is always because the previous tasks became irrelevant and are not worth executing anymore.
Is there a builtin way in Java to do this or should I resort to implementing this behavior myself? Or is there another approach which works better in this case?
This is an interesting problem. It took me a bit deeper into the core ExecutorService implementation. Thanks!
Solving without ExecutorService
From what you have mentioned, you will have at most one thread executing tasks and at most one task pending, because we are interested only in the last submitted task. Do you really need an ExecutorService for this?
You can just hold the next task in a static AtomicReference field. Since we are interested only in the latest task, task producers may simply replace the object in the AtomicReference, and the task consumer can take from this field as soon as the current task execution is done (a sketch follows the list below). The field must be:
static because there should be only one instance of this field
AtomicReference since multiple threads may be trying to set the next task.
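A minimal sketch of that idea (all names are made up; a real consumer would block instead of spin-looping):

import java.util.concurrent.atomic.AtomicReference;

public class LatestTaskOnly {

    // Only the most recently submitted task is kept; an older pending task is simply overwritten.
    private static final AtomicReference<Runnable> NEXT_TASK = new AtomicReference<>();

    static void submit(Runnable task) {
        NEXT_TASK.set(task); // producers just replace whatever was pending
    }

    public static void main(String[] args) throws InterruptedException {
        Thread consumer = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                Runnable task = NEXT_TASK.getAndSet(null); // take the latest task, if any
                if (task != null) {
                    task.run();
                } else {
                    Thread.onSpinWait(); // busy-waits for brevity only
                }
            }
        });
        consumer.setDaemon(true);
        consumer.start();

        submit(() -> System.out.println("task 1 (may be skipped if task 2 arrives first)"));
        submit(() -> System.out.println("task 2 (latest submission wins)"));

        Thread.sleep(200); // give the consumer a moment before the demo exits
    }
}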
Solving using ExecutorService
However, if you still want to go the ExecutorService way, you can try this out. Create a ThreadPoolExecutor with only one thread (core and maximum) and give it a BlockingQueue implementation that "forgets" all its elements as soon as a new one is added.
Here is a sample test code that submits new tasks 100 times. If the previous task has not been taken up for execution, it is discarded. If it has been, then it is executed and the new one is queued.
public class OnlyOneTask{
public static void main( String[] args ){
ExecutorService svc = null;
/* A BlockingQueue that immediately "forgets" all tasks it had as soon as a new one is "offered". */
BlockingQueue<Runnable> Q = new ArrayBlockingQueue<Runnable>( 1 ) {
private static final long serialVersionUID = 1L;
/* Forget the current task(s) and add the new one
* TODO These 2 steps may need synchronization. */
public boolean offer( Runnable e) {
clear();
return super.offer( e );
}
};
try {
/* A ThreadPoolExecutor that uses the queue we created above. */
svc = new ThreadPoolExecutor( 1, 1, 5000, TimeUnit.MILLISECONDS, Q );
for( int i = 0; i < 100; i++ ) {
/* Our simple task. */
int id = i;
Runnable r = () -> {
System.out.print( "|" + id + "|" );
};
svc.submit( r );
/* A delay generator. Otherwise, tasks will be cleared too fast. */
System.out.print( " " );
}
}
finally {
svc.shutdown();
try{
svc.awaitTermination( 10, TimeUnit.SECONDS );
}
catch( InterruptedException e ){
e.printStackTrace();
}
}
}
}
This sample class is only to give a sense of what I thought will work. You will certainly need to improve upon the following drawbacks in this implementation:
The first task is executed anyway because it is immediately picked up by the ExecutorService. (This is why the next point becomes important.)
Interruptibility/cancellability has to be built into the running tasks, if necessary.
Another way using ExecutorService and Future.cancel()
This is actually the simplest, if you are checking thread interruption in the task. This is basically the same as above but instead of clear()ing the queue, we simply use Future.cancel() to indicate that we don't need to execute the last task.
public static void main( String[] args ){
ExecutorService svc = null;
try {
/* A single thread executor is enough. */
svc = Executors.newSingleThreadExecutor();
Future<?> f = null;
for( int i = 0; i < 100; i++ ) {
int id = i;
/* Our simple task. */
Runnable r = () -> {
/* If the thread has been interrupted (by the Future.cancel() call), then return from here. */
if( Thread.currentThread().isInterrupted() ) return;
System.out.print( "|" + id + "|" );
};
if( f != null ) f.cancel( true );
f = svc.submit( r );
/* A pseudo delay generator. */
System.out.print( " " );
}
}
finally {
svc.shutdown();
try{
svc.awaitTermination( 10, TimeUnit.SECONDS );
}
catch( InterruptedException e ){
e.printStackTrace();
}
}
}
I'm just exploring method scheduleAtFixedRate of class ScheduledExecutorService in Java.
Here is my suspicious code:
ScheduledExecutorService scheduledExecutorService = Executors.newScheduledThreadPool(5);
Runnable command = () -> {
System.out.println("Yo");
try {
Thread.sleep(4000);
} catch (InterruptedException e) {
e.printStackTrace();
}
};
scheduledExecutorService.scheduleAtFixedRate(command, 0, 1, TimeUnit.SECONDS);
I expected that every 1 second the scheduledExecutorService would try to take a new thread from the pool and start it.
API says: "scheduledExecutorService creates and executes a periodic action that becomes enabled first after the given initial delay, and subsequently with the given period. /(unimportant deleted)/ If any execution of this task takes longer than its period, then subsequent executions may start late, but will not concurrently execute."
Result - every new thread starts every 4 seconds.
So, the questions:
What's the catch - Does Thread.sleep() stop all threads or nuance in this behavior - "If any execution of this task takes longer than its period, then subsequent executions may start late, but will not concurrently execute"?
If "will not concurrently execute" is true in this situation - why do we need this pool of several threads if every thread will start after execution of previous thread?
Is there any simple valid example of usage of scheduleAtFixedRate, where one thread starts while previous still executes?
The answer is in the quote you provided. The executor waits until the task finishes before launching this task again. It prevents concurrent execution of many instances of one task - in most cases this behaviour is wanted. In your case the executor starts a task, then waits the 1 second of delay, then waits 3 more seconds until the current task is done, and only then starts this task again (it does not necessarily start a new thread; it may run the task in the same thread).
Your code does not use the thread pool at all - you can get exactly the same result using a single-thread executor.
If you want to get this behaviour:
I expected that every 1 second the scheduledExecutorService would try to take a new thread from the pool and start it.
Then you may write it like this:
ScheduledExecutorService scheduledExecutorService = Executors.newScheduledThreadPool(5);
Runnable command = () -> {
System.out.println("Yo");
try {
Thread.sleep(4000);
} catch (InterruptedException e) {
e.printStackTrace();
}
};
Runnable commandRunner = () -> {
scheduledExecutorService.schedule(command, 0, TimeUnit.SECONDS);
};
scheduledExecutorService.scheduleAtFixedRate(commandRunner, 0, 1, TimeUnit.SECONDS);
(It's better to create a single-threaded ScheduledExecutorService that runs commandRunner, and a separate thread-pool-based ExecutorService that commandRunner uses to execute command.)
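A minimal sketch of that arrangement (pool size and sleep time are arbitrary): a single-threaded scheduler fires the trigger every second, while a separate pool actually runs the slow command, so executions can overlap.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class OverlappingRunsDemo {
    public static void main(String[] args) {
        // One thread only fires the trigger every second...
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // ...and a separate pool actually runs the (slow) command, so runs may overlap.
        ExecutorService workers = Executors.newFixedThreadPool(5);

        Runnable command = () -> {
            System.out.println("Yo " + Thread.currentThread().getName());
            try {
                Thread.sleep(4000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        scheduler.scheduleAtFixedRate(() -> workers.submit(command), 0, 1, TimeUnit.SECONDS);
    }
}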
What's the catch - Does Thread.sleep() stop all threads or nuance in this behavior - "If any execution of this task takes longer than its period, then subsequent executions may start late, but will not concurrently execute"?
I didn't quite understand what you mean here. But, essentially speaking, in the code that you have shared, Thread.sleep() is just making each execution of the task take 4 seconds, which is longer than the set period of 1 second. Thus, subsequent executions will not start after 1 second, but only after the ~4 seconds of the previous execution have elapsed.
If "will not concurrently execute" is true in this situation - why do
we need this pool of several threads if every thread will start after
execution of previous thread?
You may want to schedule other tasks (which do a different job) in the same executor; those may run in parallel to the code you have shared. Your current code only needs 1 thread in the pool, though, since you are scheduling only one job (Runnable).
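For example, a small sketch (the job bodies are made up): two different periodic jobs sharing one scheduled pool can run at the same time, even though neither job ever overlaps with itself.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TwoJobsDemo {
    public static void main(String[] args) {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);
        // Two different periodic jobs: each never overlaps with itself,
        // but the two jobs can run concurrently on the two pool threads.
        pool.scheduleAtFixedRate(
                () -> System.out.println("job A on " + Thread.currentThread().getName()),
                0, 1, TimeUnit.SECONDS);
        pool.scheduleAtFixedRate(
                () -> System.out.println("job B on " + Thread.currentThread().getName()),
                0, 1, TimeUnit.SECONDS);
    }
}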
Is there any simple valid example of usage of scheduleAtFixedRate, where one thread starts while the previous one still executes?
As stated in the documentation, concurrent execution will not happen for the job that you scheduled at fixed rate (with the current code)
public class Poll {
ScheduledFuture<?> future;
static int INIT_DELAY = 1;
static int REPEAT_PERIOD = 2;
static int MAX_TRIES = 3;
int tries = 1;
Runnable task = () -> {
System.out.print( tries + ": " + Thread.currentThread().getName() + " " );
if ( ++tries > MAX_TRIES ) {
future.cancel( false );
}
};
void poll() {
ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
future = executor.scheduleAtFixedRate( task, INIT_DELAY, REPEAT_PERIOD, TimeUnit.SECONDS );
System.out.println( "Start: " + tries + ": " + Thread.currentThread().getName() + " " );
try {
future.get();
} catch ( InterruptedException | ExecutionException e ) {
System.out.println( e.getMessage() );
} catch ( CancellationException e ) {
System.out.println( "Regular End Of Scheduled Task as Designed.");
} finally {
executor.shutdown();
executor.shutdownNow();
}
System.out.println( "Return The Result." );
}
// The Driver
public static void main( String[] args ) {
new Poll().poll();
}
}
I have a ThreadPoolExecutor with one thread that will be used for batch processing. Before assigning a new task to the executor I have to wait for the earlier task to complete; I was doing this by depending on the count of active jobs, but looking in detail I found that the executor doesn't execute the task instantly.
The problem this causes for me is that I am ready to give it the next batch, but the first task has not yet started, so the count of active jobs is 0.
How can I get the task to run instantly? I am also OK with any other executor or way that this can be done.
You should probably use the submit method of ExecutorService to schedule your tasks. Here is a working program that uses a single-thread executor to run 10 tasks. I cast to ThreadPoolExecutor to monitor the thread pool state. You can wait for a single task by calling get on its corresponding Future instance, or wait for all the tasks by invoking awaitTermination. If you don't need a result from the Future, just use Void. Hope it helps.
public class Main {
static class TimingCallable implements Callable<Long> {
static int MIN_WAIT = 200;
@Override
public Long call() {
long start = System.currentTimeMillis();
try {
Thread.sleep(MIN_WAIT + new Random().nextInt(300));
} catch (InterruptedException e) {
//DO NOTHING
}
return System.currentTimeMillis() - start;
}
}
public static void main(String[] args) throws InterruptedException, ExecutionException {
ExecutorService executor = Executors.newFixedThreadPool(1);
@SuppressWarnings("unchecked")
Future<Long>[] futureResults = new Future[10];
for(int i =0; i < futureResults.length; i++) {
futureResults[i] = executor.submit(new TimingCallable());
System.out.println(String.format("ActiveCount after submitting %d tasks: ", i+1) + ((ThreadPoolExecutor)executor).getActiveCount());
System.out.println(String.format("Queue size after submitting %d tasks: ", i+1) + ((ThreadPoolExecutor)executor).getQueue().size());
}
Thread.sleep(2000);
System.out.println("ActiveCount after 2 seconds: " + ((ThreadPoolExecutor)executor).getActiveCount());
System.out.println("Queue size after 2 seconds: " + ((ThreadPoolExecutor)executor).getQueue().size());
for(int i =0; i < futureResults.length; i++) {
if (futureResults[i].isDone()) {
System.out.println(String.format("%d task is done with execution time: ", i) + futureResults[i].get());
}
} //Waiting for the last task to finish
System.out.println("Waiting for the last task result: " + futureResults[9].get());
executor.shutdown();
executor.awaitTermination(10, TimeUnit.SECONDS);
}
}
If you have only one thread executing, just use a LinkedBlockingQueue for storing jobs; only once the thread is done with the current execution will it pick up another task.
ThreadPoolExecutor executor = new ThreadPoolExecutor(1, 1,1, TimeUnit.MINUTES, new LinkedBlockingQueue<Runnable>());
You can also have different strategies if you restrict the queue size:
http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ThreadPoolExecutor.html
Read the "Rejected tasks" section.
Single thread pool executor service
Apparently you want to run multiple tasks immediately but in the order submitted.
Quite easy: use an executor service backed by a single thread. The executor buffers up the tasks while waiting on earlier ones to complete. With only a single thread in the thread pool, only one task at a time can be executed, so they will be done sequentially in the order submitted.
The Executors class provides a choice of a few different thread pools backing an executor service. You want Executors.newSingleThreadExecutor().
ExecutorService es = Executors.newSingleThreadExecutor() ;
Submit a series of Runnable or Callable objects. Each represents a task to be executed.
es.submit( ( ) -> System.out.println( "Hello. " + Instant.now() ) ) ;
es.submit( ( ) -> System.out.println( "Bonjour. " + Instant.now() ) ) ;
es.submit( ( ) -> System.out.println( "Aloha. " + Instant.now() ) ) ;
es.submit( ( ) -> System.out.println( "Ciào. " + Instant.now() ) ) ;
es.submit( ( ) -> System.out.println( "Shwmai. " + Instant.now() ) ) ;
Optionally, you can capture the Future object returned by each call to submit if you want to track completion of the tasks. (not shown in code above)
See this code run live at IdeOne.com.
Hello. 2019-11-29T09:10:13.426987Z
Bonjour. 2019-11-29T09:10:13.472719Z
Aloha. 2019-11-29T09:10:13.473177Z
Ciào. 2019-11-29T09:10:13.473479Z
Shwmai. 2019-11-29T09:10:13.473974Z
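As noted above, you can also capture the Future returned by each submit call if you want to track completion; a minimal sketch of that variation (the task bodies are trivial placeholders):

import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TrackCompletion {
    public static void main(String[] args) throws Exception {
        ExecutorService es = Executors.newSingleThreadExecutor();
        List<Future<?>> futures = new ArrayList<>();
        futures.add(es.submit(() -> System.out.println("Hello. " + Instant.now())));
        futures.add(es.submit(() -> System.out.println("Bonjour. " + Instant.now())));
        for (Future<?> f : futures) {
            f.get(); // blocks until that particular task has finished
        }
        es.shutdown();
    }
}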
I would like to know if there's a way to monitor the life of a thread but I'll explain the context of what I'm doing so maybe there's a better way to do this.
Basically I have x threads that are working on a queue and processing it; if a thread gets an acceptable result, it goes into a solutions queue, otherwise the data is either discarded or processed further.
My problem is that in my main thread I have a loop like while(!solutions_results.isEmpty()) and it saves the data (right now it prints to a file, but later maybe a database). The obvious problem is that once it clears the solutions queue it's done and finishes working, even though the other threads are still putting data into the queue.
I'm not sure of the best way to deal with this (maybe have a dedicated thread that only saves the solutions queue?), but I was thinking that if I could somehow monitor whether the other threads are done, then there'd be no chance of more data going into the solutions queue.
If there's a better way to do this, please let me know; otherwise, is there a way to tell once the other threads are done? (I can't wait for the executor to completely finish before running this process because it can get quite large and I don't want it to just sit in memory; ideally I want to process results close to as they come in, but it's not time dependent.)
If you use the ExecutorService to run your thread jobs then you can use the awaitTermination() method to know when all of the threads have finished:
ExecutorService pool = Executors.newFixedThreadPool(10);
pool.submit(yourSolutionsRunnable);
pool.submit(yourSolutionsRunnable);
...
// once you've submitted your last job you can do
pool.shutdown();
Then you can wait for all of the jobs submitted to finish:
pool.awaitTermination(Integer.MAX_VALUE, TimeUnit.MILLISECONDS);
This would get more complicated if your threads need to keep running after submitting their solutions. If you edit your question and make this more apparent I'll edit my answer.
Edit:
Oh, I see you want to process some results along the way but not stop until all of the threads are done.
You can use the pool.isTerminated() test which will tell you if all of the jobs have completed. So your loop would look something like:
// this is the main thread so waiting for solutions in a while(true) loop is ok
while (true) {
// are all the workers done?
if (pool.isTerminated()) {
// if there are results process one last time
if (!solutions_results.isEmpty()) {
processTheSolutions();
}
break;
} else {
if (solutions_results.isEmpty()) {
// wait a bit to not spin, you could also use a wait/notify here
Thread.sleep(1000);
} else {
processTheSolutions();
}
}
}
Edit:
You could also have two thread pools, one for generating the solutions and another one processing. Your main thread could then wait for the worker pool to empty and then wait for the solutions processing pool. The worker pool would submit the solutions (if any) into the solutions pool. You could just have 1 thread in the solutions processing pool or more as necessary.
ExecutorService workerPool = Executors.newFixedThreadPool(10);
final ExecutorService solutionsPool = Executors.newFixedThreadPool(1);
workerPool.submit(workerThatPutsSolutionsIntoSolutionsPool);
...
// once you've submitted your last worker you can do
workerPool.shutdown();
workerPool.awaitTermination(Integer.MAX_VALUE, TimeUnit.MILLISECONDS);
// once the workers have finished you shutdown the solutions pool
solutionsPool.shutdown();
// and then wait for it to finish
solutionsPool.awaitTermination(Integer.MAX_VALUE, TimeUnit.MILLISECONDS);
I don't know much about the behavior requirements that you're dealing with but if you want the main thread to block until all your child threads are complete you should take a look at the join method of the Thread class.
http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/Thread.html#join()
Just run a loop inside your main thread that calls the join method on each one of your child threads and when it exits the loop you can be sure that all threads have finished working.
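A minimal sketch of that join loop, assuming you kept references to the child threads (the worker body is a placeholder):

import java.util.ArrayList;
import java.util.List;

public class JoinAllWorkers {
    public static void main(String[] args) throws InterruptedException {
        List<Thread> workers = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            Thread t = new Thread(() -> {
                // ... do the work, put acceptable results into the solutions queue ...
            });
            workers.add(t);
            t.start();
        }
        for (Thread t : workers) {
            t.join(); // blocks until that worker has finished
        }
        // All workers are done here; nothing more will be added to the solutions queue.
    }
}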
Just keep a list of your active threads. You'd want it synchronized to keep it from being trashed if you add/remove threads simultaneously. Or use something like java.util.concurrent.ConcurrentLinkedQueue, which can deal with multiple threads itself. Add each thread to the list when you start it. Each thread should remove itself from the list right before it stops. When the list is empty, all your threads are done.
Edit: the timing is important. First, the main thread has to put the working threads into the list. If they put themselves into the list, the main thread could check the list at a time when some threads have removed themselves from the list and all the rest, though started, have not yet begun executing--and so not yet put themselves in the list. It would then think everything was done when it wasn't. Second, the main thread must put each worker thread on the list before it starts it. Otherwise, the thread might finish and make its attempt to remove itself from the list before the main thread adds it to the list. Then the list will never become empty and the program will never finish.
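A minimal sketch of that bookkeeping, using ConcurrentLinkedQueue as suggested (the worker body is a placeholder):

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class ActiveThreadTracker {

    private static final Queue<Thread> ACTIVE = new ConcurrentLinkedQueue<>();

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 4; i++) {
            Thread worker = new Thread(() -> {
                try {
                    // ... do the work, put acceptable results into the solutions queue ...
                } finally {
                    ACTIVE.remove(Thread.currentThread()); // last thing each worker does
                }
            });
            ACTIVE.add(worker); // the main thread registers the worker BEFORE starting it
            worker.start();
        }
        while (!ACTIVE.isEmpty()) {
            // work is still in flight; drain the solutions queue here instead of just sleeping
            Thread.sleep(100);
        }
        System.out.println("all workers finished");
    }
}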
Maybe java.util.concurrent.ExecutorCompletionService would be useful here.
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CancellationException;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class monitor_life_of_threads
{
/**
* First, convert all of your threads to instances of Callable (easy to do), and have them each return an instance of some class (I'm using a small Result class wrapping an Integer below, just as
* an example).
* This will help simplify things.
*/
public static void main ( String args[] )
{
final monitor_life_of_threads worker = new monitor_life_of_threads();
worker.executeCallablesAndUseResults ();
System.exit ( 0 );
}
private void executeCallablesAndUseResults ()
{
List < Callable < Result >> list = new ArrayList <> ();
populateInputList ( list );
try
{
doWork ( list );
}
catch ( InterruptedException e )
{
e.printStackTrace ();
}
catch ( ExecutionException e )
{
e.printStackTrace ();
}
catch ( CancellationException e )
{
/*
* Could be called if a Callable throws an InterruptedException, and if it's not caught, it can cause Future.get to hang.
*/
e.printStackTrace ();
}
catch ( Exception defaultException )
{
defaultException.printStackTrace ();
}
}
private void doWork ( Collection < Callable < Result >> callables ) throws InterruptedException, ExecutionException
{
ExecutorService executorService = Executors.newCachedThreadPool ();
CompletionService < Result > ecs = new ExecutorCompletionService < > ( executorService );
for ( Callable < Result > callable : callables )
ecs.submit ( callable );
for ( int i = 0, n = callables.size (); i < n; ++i )
{
Result r = ecs.take ().get ();
if ( r != null )
use ( r ); // This way you don't need a second queue.
}
executorService.shutdown ();
}
private void use ( Result result )
{
// Write result to database, output file, etc.
System.out.println ( "result = " + result );
}
private List < Callable < Result >> populateInputList ( List < Callable < Result >> list )
{
list.add ( new Callable < Result > () {
@Override
public Result call () throws Exception
{
// Do some number crunching, then return a 5.
return new Result ( 5 );
}
} );
list.add ( new Callable < Result > () {
@Override
public Result call () throws Exception
{
// Do some number crunching, then return an 8.
return new Result ( 8 );
}
} );
list.add ( new Callable < Result > () {
@Override
public Result call () throws Exception
{
// Do some number crunching, but fail and so return null.
return null;
}
} );
return list;
}
}
class Result
{
private Integer i;
Result ( Integer i)
{
this.i = i;
}
public String toString ()
{
return Integer.toString ( i );
}
}
I'm trying to figure out how to correctly use Java's Executors. I realize submitting tasks to an ExecutorService has its own overhead. However, I'm surprised to see it is as high as it is.
My program needs to process huge amount of data (stock market data) with as low latency as possible. Most of the calculations are fairly simple arithmetic operations.
I tried to test something very simple: "Math.random() * Math.random()"
The simplest test runs this computation in a simple loop. The second test does the same computation inside an anonymous Runnable (this is supposed to measure the cost of creating new objects). The third test passes the Runnable to an ExecutorService (this measures the cost of introducing executors).
I ran the tests on my dinky laptop (2 cpus, 1.5 gig ram):
(in milliseconds)
simpleCompuation:47
computationWithObjCreation:62
computationWithObjCreationAndExecutors:422
(about once out of four runs, the first two numbers end up being equal)
Notice that executors take far, far more time than executing on a single thread. The numbers were about the same for thread pool sizes between 1 and 8.
Question: Am I missing something obvious or are these results expected? These results tell me that any task I pass in to an executor must do some non-trivial computation. If I am processing millions of messages, and I need to perform very simple (and cheap) transformations on each message, I still may not be able to use executors...trying to spread computations across multiple CPUs might end up being costlier than just doing them in a single thread. The design decision becomes much more complex than I had originally thought. Any thoughts?
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
public class ExecServicePerformance {
private static int count = 100000;
public static void main(String[] args) throws InterruptedException {
//warmup
simpleCompuation();
computationWithObjCreation();
computationWithObjCreationAndExecutors();
long start = System.currentTimeMillis();
simpleCompuation();
long stop = System.currentTimeMillis();
System.out.println("simpleCompuation:"+(stop-start));
start = System.currentTimeMillis();
computationWithObjCreation();
stop = System.currentTimeMillis();
System.out.println("computationWithObjCreation:"+(stop-start));
start = System.currentTimeMillis();
computationWithObjCreationAndExecutors();
stop = System.currentTimeMillis();
System.out.println("computationWithObjCreationAndExecutors:"+(stop-start));
}
private static void computationWithObjCreation() {
for(int i=0;i<count;i++){
new Runnable(){
@Override
public void run() {
double x = Math.random()*Math.random();
}
}.run();
}
}
private static void simpleCompuation() {
for(int i=0;i<count;i++){
double x = Math.random()*Math.random();
}
}
private static void computationWithObjCreationAndExecutors()
throws InterruptedException {
ExecutorService es = Executors.newFixedThreadPool(1);
for(int i=0;i<count;i++){
es.submit(new Runnable() {
@Override
public void run() {
double x = Math.random()*Math.random();
}
});
}
es.shutdown();
es.awaitTermination(10, TimeUnit.SECONDS);
}
}
Using executors is about utilizing CPUs and/or CPU cores, so if you create a thread pool that makes the best use of the available CPUs, you have to have as many threads as CPUs/cores.
You are right, creating new objects costs too much. So one way to reduce the expense is to use batches. If you know the kind and amount of computations to do, you create batches: think of thousand(s) of computations done in one executed task. You create batches for each thread. As soon as a computation is done (java.util.concurrent.Future), you create the next batch. Even the creation of new batches can be done in parallel (4 CPUs -> 3 threads for computation, 1 thread for batch provisioning). In the end, you may end up with more throughput, but with higher memory demands (batches, provisioning).
Edit: I changed your example and I let it run on my little dual-core x200 laptop.
provisioned 2 batches to be executed
simpleCompuation:14
computationWithObjCreation:17
computationWithObjCreationAndExecutors:9
As you see in the source code, I took the batch provisioning and executor lifecycle out of the measurement, too. That's more fair compared to the other two methods.
See the results by yourself...
import java.util.List;
import java.util.Vector;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
public class ExecServicePerformance {
private static int count = 100000;
public static void main( String[] args ) throws InterruptedException {
final int cpus = Runtime.getRuntime().availableProcessors();
final ExecutorService es = Executors.newFixedThreadPool( cpus );
final Vector< Batch > batches = new Vector< Batch >( cpus );
final int batchComputations = count / cpus;
for ( int i = 0; i < cpus; i++ ) {
batches.add( new Batch( batchComputations ) );
}
System.out.println( "provisioned " + cpus + " batches to be executed" );
// warmup
simpleCompuation();
computationWithObjCreation();
computationWithObjCreationAndExecutors( es, batches );
long start = System.currentTimeMillis();
simpleCompuation();
long stop = System.currentTimeMillis();
System.out.println( "simpleCompuation:" + ( stop - start ) );
start = System.currentTimeMillis();
computationWithObjCreation();
stop = System.currentTimeMillis();
System.out.println( "computationWithObjCreation:" + ( stop - start ) );
// Executor
start = System.currentTimeMillis();
computationWithObjCreationAndExecutors( es, batches );
es.shutdown();
es.awaitTermination( 10, TimeUnit.SECONDS );
// Note: Executor#shutdown() and Executor#awaitTermination() requires
// some extra time. But the result should still be clear.
stop = System.currentTimeMillis();
System.out.println( "computationWithObjCreationAndExecutors:"
+ ( stop - start ) );
}
private static void computationWithObjCreation() {
for ( int i = 0; i < count; i++ ) {
new Runnable() {
@Override
public void run() {
double x = Math.random() * Math.random();
}
}.run();
}
}
private static void simpleCompuation() {
for ( int i = 0; i < count; i++ ) {
double x = Math.random() * Math.random();
}
}
private static void computationWithObjCreationAndExecutors(
ExecutorService es, List< Batch > batches )
throws InterruptedException {
for ( Batch batch : batches ) {
es.submit( batch );
}
}
private static class Batch implements Runnable {
private final int computations;
public Batch( final int computations ) {
this.computations = computations;
}
@Override
public void run() {
int countdown = computations;
while ( countdown-- > -1 ) {
double x = Math.random() * Math.random();
}
}
}
}
This is not a fair test of the thread pool, for the following reasons:
You are not taking advantage of the pooling at all because you only have 1 thread.
The job is so simple that the pooling overhead can't be justified. A floating-point multiplication on a CPU with an FPU only takes a few cycles.
Consider the following extra steps the thread pool has to do besides object creation and running the job:
Put the job in the queue
Remove the job from queue
Get the thread from the pool and execute the job
Return the thread to the pool
When you have a real job and multiple threads, the benefit of the thread pool will be apparent.
The 'overhead' you mention is nothing to do with ExecutorService, it is caused by multiple threads synchronizing on Math.random, creating lock contention.
So yes, you are missing something (and the 'correct' answer below is not actually correct).
Here is some Java 8 code to demonstrate 8 threads running a simple function in which there is no lock contention:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.DoubleFunction;
import com.google.common.base.Stopwatch;
public class ExecServicePerformance {
private static final int repetitions = 120;
private static int totalOperations = 250000;
private static final int cpus = 8;
private static final List<Batch> batches = batches(cpus);
private static DoubleFunction<Double> performanceFunc = (double i) -> {return Math.sin(i * 100000 / Math.PI); };
public static void main( String[] args ) throws InterruptedException {
printExecutionTime("Synchronous", ExecServicePerformance::synchronous);
printExecutionTime("Synchronous batches", ExecServicePerformance::synchronousBatches);
printExecutionTime("Thread per batch", ExecServicePerformance::asynchronousBatches);
printExecutionTime("Executor pool", ExecServicePerformance::executorPool);
}
private static void printExecutionTime(String msg, Runnable f) throws InterruptedException {
long time = 0;
for (int i = 0; i < repetitions; i++) {
Stopwatch stopwatch = Stopwatch.createStarted();
f.run(); //remember, this is a single-threaded synchronous execution since there is no explicit new thread
time += stopwatch.elapsed(TimeUnit.MILLISECONDS);
}
System.out.println(msg + " exec time: " + time);
}
private static void synchronous() {
for ( int i = 0; i < totalOperations; i++ ) {
performanceFunc.apply(i);
}
}
private static void synchronousBatches() {
for ( Batch batch : batches) {
batch.synchronously();
}
}
private static void asynchronousBatches() {
CountDownLatch cb = new CountDownLatch(cpus);
for ( Batch batch : batches) {
Runnable r = () -> { batch.synchronously(); cb.countDown(); };
Thread t = new Thread(r);
t.start();
}
try {
cb.await();
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
}
private static void executorPool() {
final ExecutorService es = Executors.newFixedThreadPool(cpus);
for ( Batch batch : batches ) {
Runnable r = () -> { batch.synchronously(); };
es.submit(r);
}
es.shutdown();
try {
es.awaitTermination( 10, TimeUnit.SECONDS );
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
}
private static List<Batch> batches(final int cpus) {
List<Batch> list = new ArrayList<Batch>();
for ( int i = 0; i < cpus; i++ ) {
list.add( new Batch( totalOperations / cpus ) );
}
System.out.println("Batches: " + list.size());
return list;
}
private static class Batch {
private final int operationsInBatch;
public Batch( final int ops ) {
this.operationsInBatch = ops;
}
public void synchronously() {
for ( int i = 0; i < operationsInBatch; i++ ) {
performanceFunc.apply(i);
}
}
}
}
Result timings for 120 tests of 25k operations (ms):
Synchronous exec time: 9956
Synchronous batches exec time: 9900
Thread per batch exec time: 2176
Executor pool exec time: 1922
Winner: Executor Service.
I don't think this is at all realistic since you're creating a new executor service every time you make the method call. Unless you have very strange requirements that seems unrealistic - typically you'd create the service when your app starts up, and then submit jobs to it.
If you try the benchmarking again but initialise the service as a field, once, outside the timing loop; then you'll see the actual overhead of submitting Runnables to the service vs. running them yourself.
But I don't think you've grasped the point fully - Executors aren't meant to be there for efficiency, they're there to make co-ordinating and handing off work to a thread pool simpler. They will always be less efficient than just invoking Runnable.run() yourself (since at the end of the day the executor service still needs to do this, after doing some extra housekeeping beforehand). It's when you are using them from multiple threads needing asynchronous processing, that they really shine.
Also consider that you're looking at the relative time difference of a basically fixed cost (Executor overhead is the same whether your tasks take 1ms or 1hr to run) compared to a very small variable amount (your trivial runnable). If the executor service takes 5ms extra to run a 1ms task, that's not a very favourable figure. If it takes 5ms extra to run a 5 second task (e.g. a non-trivial SQL query), that's completely negligible and entirely worth it.
So to some extent it depends on your situation - if you have an extremely time-critical section, running lots of small tasks, that don't need to be executed in parallel or asynchronously then you'll get nothing from an Executor. If you're processing heavier tasks in parallel and want to respond asynchronously (e.g. a webapp) then Executors are great.
Whether they are the best choice for you depends on your situation, but really you need to try the tests with realistic representative data. I don't think it would be appropriate to draw any conclusions from the tests you've done unless your tasks really are that trivial (and you don't want to reuse the executor instance...).
Math.random() actually synchronizes on a single Random number generator. Calling Math.random() results in significant contention for the number generator. In fact the more threads you have, the slower it's going to be.
From the Math.random() javadoc:
This method is properly synchronized to allow correct use by more than one thread. However, if many threads need to generate pseudorandom numbers at a great rate, it may reduce contention for each thread to have its own pseudorandom-number generator.
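In practice that means giving each thread its own generator, for example via ThreadLocalRandom; a small sketch of the changed inner computation:

import java.util.concurrent.ThreadLocalRandom;

public class NoContentionRandom {
    public static void main(String[] args) {
        // ThreadLocalRandom keeps one generator per thread, so there is no shared lock
        // to contend on, unlike Math.random(), which funnels every thread through one Random.
        Runnable task = () -> {
            double x = ThreadLocalRandom.current().nextDouble()
                    * ThreadLocalRandom.current().nextDouble();
            System.out.println("x = " + x); // use the value so the work is not dead code
        };
        task.run();
    }
}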
Firstly there's a few issues with the microbenchmark. You do a warm up, which is good. However, it is better to run the test multiple times, which should give a feel as to whether it has really warmed up and the variance of the results. It also tends to be better to do the test of each algorithm in separate runs, otherwise you might cause deoptimisation when an algorithm changes.
The task is very small, although I'm not entirely sure how small, so any "number of times faster" figure is pretty meaningless. In multithreaded situations, it will touch the same volatile locations, so the threads could cause really bad performance (use a Random instance per thread). Also, a run of 47 milliseconds is a bit short.
Certainly going to another thread for a tiny operation is not going to be fast. Split tasks up into bigger sizes if possible. JDK 7 looks as if it will have a fork-join framework, which attempts to support fine-grained tasks from divide-and-conquer algorithms by preferring to execute tasks on the same thread in order, with larger tasks pulled out by idle threads.
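For what it's worth, a rough sketch of that fork-join style, using the ForkJoinPool / RecursiveTask API that eventually shipped in JDK 7 (the threshold and the summing work are arbitrary): small chunks are computed inline on the same thread, larger ones are split, and idle threads steal the forked halves.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {

    private static final int THRESHOLD = 10_000;
    private final long from;
    private final long to;

    SumTask(long from, long to) {
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {           // small enough: just do it on this thread
            long sum = 0;
            for (long i = from; i < to; i++) {
                sum += i;
            }
            return sum;
        }
        long mid = (from + to) / 2;             // otherwise split in two
        SumTask left = new SumTask(from, mid);
        SumTask right = new SumTask(mid, to);
        left.fork();                            // the left half may be stolen by an idle thread
        return right.compute() + left.join();   // the right half runs on this thread
    }

    public static void main(String[] args) {
        long sum = new ForkJoinPool().invoke(new SumTask(0, 10_000_000));
        System.out.println("sum = " + sum);
    }
}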
Here are results on my machine (OpenJDK 8 on 64-bit Ubuntu 14.0, Thinkpad W530)
simpleCompuation:6
computationWithObjCreation:5
computationWithObjCreationAndExecutors:33
There's certainly overhead. But remember what these numbers are: milliseconds for 100k iterations. In your case, the overhead was about 4 microseconds per iteration. For me, the overhead was about a quarter of a microsecond.
The overhead is synchronization, internal data structures, and possibly a lack of JIT optimization due to complex code paths (certainly more complex than your for loop).
The tasks that you'd actually want to parallelize would be worth it, despite the quarter microsecond overhead.
FYI, this would be a very bad computation to parallelize. I upped the thread pool to 8 threads (the number of cores):
simpleCompuation:5
computationWithObjCreation:6
computationWithObjCreationAndExecutors:38
It didn't make it any faster. This is because Math.random() is synchronized.
The fixed thread pool's ultimate purpose is to reuse already created threads. So the performance gains come from not having to create a new thread every time a task is submitted. Hence the stop time must be taken inside the submitted task, within the last statement of the run method.
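In other words, something along these lines (a rough sketch; the point is only that the elapsed time is captured in the last statement of the task itself):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class MeasureInsideTask {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        final long submittedAt = System.nanoTime();
        pool.submit(() -> {
            double x = Math.random() * Math.random();
            // Last statement of the task: measure here, after the work has really been done.
            System.out.println("x = " + x + ", finished after "
                    + (System.nanoTime() - submittedAt) / 1_000 + " microseconds");
        });
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}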
You need to somehow group execution, in order to submit larger portions of computation to each thread (e.g. build groups based on stock symbol).
I got the best results in similar scenarios by using the Disruptor. It has a very low per-job overhead. Still, it's important to group jobs; naive round-robin usually creates many cache misses.
see http://java-is-the-new-c.blogspot.de/2014/01/comparision-of-different-concurrency.html
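To illustrate the grouping point, here is a hedged sketch (Quote, the symbols, and the per-group work are made-up placeholders): collect the small messages per stock symbol and submit one task per group instead of one task per message.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class GroupBySymbol {

    record Quote(String symbol, double price) { } // hypothetical message type

    public static void main(String[] args) {
        List<Quote> incoming = List.of(
                new Quote("AAPL", 101.2), new Quote("MSFT", 55.1), new Quote("AAPL", 101.4));

        // Group the tiny messages so that each submitted task does a meaningful chunk of work.
        Map<String, List<Quote>> bySymbol = new HashMap<>();
        for (Quote q : incoming) {
            bySymbol.computeIfAbsent(q.symbol(), s -> new ArrayList<>()).add(q);
        }

        ExecutorService pool = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        bySymbol.forEach((symbol, quotes) ->
                pool.submit(() -> System.out.println(symbol + ": processing " + quotes.size()
                        + " quotes on " + Thread.currentThread().getName())));
        pool.shutdown();
    }
}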
In case it is useful to others, here are test results with a realistic scenario - use ExecutorService repeatedly until the end of all tasks - on a Samsung Android device.
Simple computation (MS): 102
Use threads (MS): 31049
Use ExecutorService (MS): 257
Code:
ExecutorService executorService = Executors.newFixedThreadPool(1);
int count = 100000;
//Simple computation
Instant instant = Instant.now();
for (int i = 0; i < count; i++) {
double x = Math.random() * Math.random();
}
Duration duration = Duration.between(instant, Instant.now());
Log.d("ExecutorPerformanceTest", "Simple computation (MS): " + duration.toMillis());
//Use threads
instant = Instant.now();
for (int i = 0; i < count; i++) {
new Thread(() -> {
double x = Math.random() * Math.random();
}
).start();
}
duration = Duration.between(instant, Instant.now());
Log.d("ExecutorPerformanceTest", "Use threads (MS): " + duration.toMillis());
//Use ExecutorService
instant = Instant.now();
for (int i = 0; i < count; i++) {
executorService.execute(() -> {
double x = Math.random() * Math.random();
}
);
}
duration = Duration.between(instant, Instant.now());
Log.d("ExecutorPerformanceTest", "Use ExecutorService (MS): " + duration.toMillis());
I've faced a similar problem, but Math.random() was not the issue.
The problem is having many small tasks that take just a few milliseconds to complete. It is not much, but a lot of small tasks run in series ends up being a lot of time, and I needed to parallelize.
So, the solution I found, and it might work for those of you facing this same problem: do not use any of the executor services. Instead create your own long living Threads and feed them tasks.
Here is an example, just as an idea; don't copy-paste it blindly, as I am using Kotlin and translating to Java in my head. The concept is what's important:
First, the Thread, a Thread that can execute a task and then continue there waiting for the next one:
import java.util.concurrent.Callable;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Semaphore;

public class Worker extends Thread {

    // volatile: these fields are written by the submitting thread and read by the worker thread
    private volatile Callable<?> task;
    private volatile CountDownLatch latch;
    private final Semaphore semaphore;

    public Worker(Semaphore semaphore) {
        this.semaphore = semaphore;
    }

    public void run() {
        while (true) {
            try {
                semaphore.acquire(); // this will block, the while(true) won't go crazy
            } catch (InterruptedException e) {
                return; // lets the worker be shut down by interrupting it
            }
            if (task == null) continue;
            try {
                task.call();
            } catch (Exception e) {
                e.printStackTrace();
            }
            if (latch != null) latch.countDown();
            task = null;
        }
    }

    public void setTask(Callable<?> task) {
        this.task = task;
    }

    public void setCountDownLatch(CountDownLatch latch) {
        this.latch = latch;
    }
}
There is two things here that need explanation:
the Semaphore: gives you control over how many tasks and when they are executed by this thread
the CountDownLatch: is the way to notify someone else that a task was completed
So this is how you would use this Worker, first just a simple example:
Semaphore semaphore = new Semaphore(0); // initially the semaphore is closed
Worker worker = new Worker(semaphore);
worker.start();
worker.setTask( .. your callable task .. );
semaphore.release(); // this will allow one task to be processed by the worker
Now a more complicated example, with two Threads and waiting for both to complete using the CountDownLatch:
Semaphore semaphore1 = new Semaphore(0);
Worker worker1 = new Worker(semaphore1);
worker1.start();
Semaphore semaphore2 = new Semaphore(0);
Worker worker2 = new Worker(semaphore2);
worker2.start();
// same countdown latch for both workers, with a counter of 2
CountDownLatch countDownLatch = new CountDownLatch(2);
worker1.setCountDownLatch(countDownLatch);
worker2.setCountDownLatch(countDownLatch);
worker1.setTask( .. your callable task .. );
worker2.setTask( .. your callable task .. );
semaphore1.release();
semaphore2.release();
countDownLatch.await(); // this will block until 2 tasks have been completed
And after that code runs you could just add more tasks to the same threads and reuse them. That's the whole point of this, reusing the threads instead of creating new ones.
It is unpolished as f*** but hopefully this gives you an idea. For me this was an improvement compared to no multithreading, and it was much, much better than any executor service with any number of threads in the pool.