I have a .csv file containing over 70 million lines. Each line is used to generate a Runnable that is then executed by a thread pool; the Runnable inserts a record into MySQL.
What's more, I want to record a position in the csv file so a RandomAccessFile can seek back to it. The position is written to a file, and I want to write it only once all the tasks in the thread pool have finished, so ThreadPoolExecutor.shutdown() is invoked. But when more lines arrive, I need a thread pool again. How can I reuse the current thread pool instead of making a new one?
The code is as follows:
public static boolean processPage() throws Exception {
    long pos = getPosition();
    long start = System.currentTimeMillis();
    raf.seek(pos);
    if (pos == 0)
        raf.readLine();
    for (int i = 0; i < PAGESIZE; i++) {
        String lineStr = raf.readLine();
        if (lineStr == null)
            return false;
        String[] line = lineStr.split(",");
        final ExperienceLogDO log = CsvExperienceLog.generateLog(line);
        //System.out.println("userId: "+log.getUserId()%512);
        pool.execute(new Runnable() {
            public void run() {
                try {
                    experienceService.insertExperienceLog(log);
                } catch (BaseException e) {
                    e.printStackTrace();
                }
            }
        });
        long end = System.currentTimeMillis();
    }
    BufferedWriter resultWriter = new BufferedWriter(
            new OutputStreamWriter(new FileOutputStream(new File(
                    RESULT_FILENAME), true)));
    resultWriter.write("\n");
    resultWriter.write(String.valueOf(raf.getFilePointer()));
    resultWriter.close();
    long time = System.currentTimeMillis() - start;
    System.out.println(time);
    return true;
}
Thanks!
As stated in the documentation, you cannot reuse an ExecutorService that has been shut down. I'd recommend against any workarounds, since (a) they may not work as expected in all situations; and (b) you can achieve what you want using standard classes.
You must either
instantiate a new ExecutorService; or
not terminate the ExecutorService.
The first solution is easily implemented, so I won't detail it.
For the second, since you want to execute an action once all the submitted tasks have finished, take a look at ExecutorCompletionService and use it instead. It wraps an ExecutorService, which still does the thread management, but each runnable gets wrapped in something that tells the ExecutorCompletionService when it has finished, so the service can report back to you:
ExecutorService executor = ...;
ExecutorCompletionService ecs = new ExecutorCompletionService(executor);

for (int i = 0; i < totalTasks; i++) {
    ... ecs.submit(...); ...
}

for (int i = 0; i < totalTasks; i++) {
    ecs.take();
}
The method take() on the ExecutorCompletionService class will block until a task has finished (either normally or abruptly). It will return a Future, so you can check the results if you wish.
I hope this can help you, since I didn't completely understand your problem.
Create and group all tasks and submit them to the pool with invokeAll, which returns only once every task in the batch has completed.
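A minimal sketch of that approach, assuming the per-line work can be wrapped in Callable tasks (the batch size and the work inside the lambda are placeholders):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InvokeAllSketch {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Group one batch of tasks; the same pool can be reused for the next batch.
        List<Callable<Void>> batch = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            batch.add(() -> {
                // the per-line work (e.g. the MySQL insert) would go here
                return null;
            });
        }

        // invokeAll blocks until every task in this batch has completed.
        List<Future<Void>> results = pool.invokeAll(batch);

        // Safe to record the file position now; the pool was never shut down,
        // so it can accept the next batch of tasks.
        pool.shutdown();
    }
}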
After calling shutdown on an ExecutorService, no new task will be accepted. This means you have to create a new ExecutorService for each round of tasks.
However, Java 8 introduced ForkJoinPool.awaitQuiescence. If you can switch from a normal ExecutorService to a ForkJoinPool, you can use this method to wait until no more tasks are running in the ForkJoinPool without having to call shutdown. This means you can fill a ForkJoinPool with tasks, wait until it is empty (quiescent), then fill it with tasks again, and so on.
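A rough sketch of that pattern, assuming the per-line work can be expressed as plain Runnables submitted to a ForkJoinPool (the round count, batch size, and timeout are illustrative):

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;

public class QuiescenceSketch {
    public static void main(String[] args) {
        ForkJoinPool pool = new ForkJoinPool();

        for (int round = 0; round < 3; round++) {
            for (int i = 0; i < 1000; i++) {
                pool.execute(() -> {
                    // per-line work (e.g. the MySQL insert) goes here
                });
            }
            // Block until the pool has no running or queued tasks,
            // then record the file position and start the next round.
            pool.awaitQuiescence(1, TimeUnit.HOURS);
        }

        pool.shutdown();
    }
}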
Related
I'm wondering whether there is any advantage to keeping the same threads over the course of the execution of an object, rather than creating new Thread objects each time. I have an object for which a single (frequently used) method is parallelized using local Thread variables, such that every time the method is called, new Threads (and Runnables) are instantiated. Because the method is called so frequently, a single execution may instantiate upwards of a hundred thousand Thread objects, even though there are never more than a few (~4-6) active at any given time.
Following is a cut down example of how this method is currently implemented, to give a sense of what I mean. For reference, n is of course the pre-determined number of threads to use, whereas this.dataStructure is a (thread-safe) Map which serves as the input to the computation, as well as being modified by the computation. There are other inputs involved, but as they are not relevant to this question, I've omitted their usage. I've also omitted exception handling for the same reason.
Runnable[] tasks = new Runnable[n];
Thread[] threads = new Thread[n];
ArrayBlockingQueue<MyObject> inputs = new ArrayBlockingQueue<>(this.dataStructure.size());
inputs.addAll(this.dataStructure.values());

for (int i = 0; i < n; i++) {
    tasks[i] = () -> {
        while (true) {
            MyObject input = inputs.poll(1L, TimeUnit.MICROSECONDS);
            if (input == null) return;
            // run computations over this.dataStructure
        }
    };
    threads[i] = new Thread(tasks[i]);
    threads[i].start();
}

for (int i = 0; i < n; i++)
    threads[i].join();
Because these Threads (and their runnables) always execute the same way using a single ArrayBlockingQueue as input, an alternative to this would be to just "refill the queue" every time the method is called and just re-start the same Threads. This is easily implemented, but I'm unsure as to whether it would make any difference one way or the other. I'm not too familiar with concurrency, so any help is appreciated.
PS.: If there is a more elegant way to handle the polling, that would also be helpful.
It is not possible to start a Thread more than once, but conceptually, the answer to your question is yes.
This is normally accomplished with a thread pool. A thread pool is a set of Threads which rarely actually terminate. Instead, an application passes its task to the thread pool, which picks a Thread in which to run it. The thread pool then decides whether the Thread should be terminated or reused after the task completes.
Java has some classes which make use of thread pools quite easy: ExecutorService and CompletableFuture.
ExecutorService usage typically looks like this:
ExecutorService executor = Executors.newCachedThreadPool();

for (int i = 0; i < n; i++) {
    tasks[i] = () -> {
        while (true) {
            MyObject input = inputs.poll(1L, TimeUnit.MICROSECONDS);
            if (input == null) return;
            // run computations over this.dataStructure
        }
    };
    executor.submit(tasks[i]);
}

// Doesn't interrupt or halt any tasks. Will wait for them all to finish
// before terminating its threads.
executor.shutdown();
executor.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
Executors has other methods which can create thread pools, like newFixedThreadPool() and newWorkStealingPool(). You can decide for yourself which one best suits your needs.
CompletableFuture use might look like this:
Runnable[] tasks = new Runnable[n];
CompletableFuture<?>[] futures = new CompletableFuture<?>[n];

for (int i = 0; i < n; i++) {
    tasks[i] = () -> {
        while (true) {
            MyObject input = inputs.poll(1L, TimeUnit.MICROSECONDS);
            if (input == null) return;
            // run computations over this.dataStructure
        }
    };
    futures[i] = CompletableFuture.runAsync(tasks[i]);
}

CompletableFuture.allOf(futures).get();
The disadvantage of CompletableFuture is that the tasks cannot be canceled or interrupted. (Calling cancel will mark the task as completing with an exception instead of completing successfully, but the task will not be interrupted.)
By definition, you cannot restart a thread. According to the documentation:
It is never legal to start a thread more than once. In particular, a thread may not be restarted once it has completed execution.
Nevertheless a thread is a valuable resource, and there are implementations to reuse threads. Have a look at the Java Tutorial about Executors.
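For illustration, a minimal sketch of that reuse using the standard Executors factory (the pool size and task bodies are arbitrary): one pool is created once and its worker threads are reused for every submitted task.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ReuseSketch {
    public static void main(String[] args) throws InterruptedException {
        // One pool, created once; its worker threads are reused for every task.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        for (int i = 0; i < 20; i++) {
            final int id = i;
            pool.execute(() ->
                    System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}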
I have a piece of Java code which does the following:
Opens a file with data in format {A,B,C}; each file has approx. 5,000,000 lines.
For each line in the file, calls a service that gives a column D and appends it to {A,B,C} as {A,B,C,D}.
Writes this entry into a chunked writer that eventually groups together 10,000 lines and writes the chunk back to a remote location.
Right now the code takes 32 hours to execute. This process would then get repeated on another file, which hypothetically takes another 32 hours, but we need these processes to run daily.
Step 2 is further complicated by the fact that sometimes the service does not have D but is designed to fetch D from its super data store, so it throws a transient exception asking you to wait. We have retries to handle this, so an entry could technically be retried 5 times with a max delay of 60,000 millis. So we could be looking at 5,000,000 * 5 requests in the worst case.
The combinations of {A,B,C} are unique, so result D can't be cached and reused; a fresh request has to be made to get D every time.
I've tried adding threads like this:
temporaryFile = File.createTempFile(key, ".tmp");
Files.copy(stream, temporaryFile.toPath(),
        StandardCopyOption.REPLACE_EXISTING);
reader = new BufferedReader(new InputStreamReader(new
        FileInputStream(temporaryFile), StandardCharsets.UTF_8));
String entry;
while ((entry = reader.readLine()) != null) {
    final String finalEntry = entry;
    service.execute(() -> {
        try {
            processEntry(finalEntry);
        } catch (Exception e) {
            log.error("something");
        }
    });
    count++;
}
Here the processEntry method abstracts the implementation details explained above, and the threads are defined as
ExecutorService service = Executors.newFixedThreadPool(10);
The problem I'm having is that the first set of threads spins up, but the process doesn't wait until all threads finish their work and all 5,000,000 lines are complete. So the task that used to wait for completion for 32 hours now ends in under a minute, which messes up our system's state. Are there any alternative ways to do this? How can I make the process wait on all threads completing?
If you want to take tasks as they complete, you need an ExecutorCompletionService. This acts as a BlockingQueue that will allow you to poll for tasks as and when they finish.
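A small sketch of that idea; the entries and the body of the submitted task are placeholders standing in for the question's processEntry work:

import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CompletionSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(10);
        CompletionService<Void> completion = new ExecutorCompletionService<>(pool);

        int submitted = 0;
        for (String entry : new String[] {"a,b,c", "d,e,f"}) {   // stands in for the file lines
            completion.submit(() -> {
                // processEntry(entry) would go here
                return null;
            });
            submitted++;
        }

        // take() blocks until the next task finishes, so after this loop
        // every submitted task is known to be complete.
        for (int i = 0; i < submitted; i++) {
            completion.take().get();   // get() rethrows any task exception
        }

        pool.shutdown();
    }
}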
Another solution is to shut the executor down and then wait for it to terminate:
ExecutorService service = Executors.newFixedThreadPool(10);
service.shutdown();
while (!service.isTerminated()) {}
One alternative is to use a latch to wait for all the tasks to complete before you shut down the executor on the main thread.
Initialize a CountDownLatch with 1.
After you exit the loop that submits the tasks, you call latch.await();
In each task you start, you have a callback to the starting class to let it know when the task has finished.
Note that in the starting class the callback method has to be synchronized.
In the starting class you use this callback to keep a count of completed tasks.
Also inside the callback, when all tasks have completed, you call latch.countDown() so the main thread can continue, let's say, shutting down the executor and exiting.
This shows the main concept, it can be implemented with more detail and more control on the completed tasks if necessary.
It would be something like this:
public class StartingClass {
    CountDownLatch latch = new CountDownLatch(1);
    ExecutorService service = Executors.newFixedThreadPool(10);
    BufferedReader reader;
    Path stream;
    int count = 0;
    int completed = 0;

    public void runTheProcess() throws Exception {
        File temporaryFile = File.createTempFile(key, ".tmp");
        Files.copy(stream, temporaryFile.toPath(),
                StandardCopyOption.REPLACE_EXISTING);
        reader = new BufferedReader(new InputStreamReader(new
                FileInputStream(temporaryFile), StandardCharsets.UTF_8));
        String entry;
        while ((entry = reader.readLine()) != null) {
            final String finalEntry = entry;
            service.execute(new Task(this, finalEntry));
            count++;
        }
        latch.await();
        service.shutdown();
    }

    public synchronized void processEntry(String entry) {
    }

    public synchronized void taskCompleted() {
        completed++;
        if (completed == count) {
            latch.countDown();
        }
    }

    // This can be put in a different file.
    public static class Task implements Runnable {
        StartingClass startingClass;
        String finalEntry;

        public Task(StartingClass startingClass, String finalEntry) {
            this.startingClass = startingClass;
            this.finalEntry = finalEntry;
        }

        @Override
        public void run() {
            try {
                startingClass.processEntry(finalEntry);
                startingClass.taskCompleted();
            } catch (Exception e) {
                //log.error("something");
            }
        }
    }
}
Note that you need to close the file. Also, the shutting down of the executor could be written to wait a few seconds before forcing a shutdown.
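Such a graceful-then-forced shutdown might look roughly like this (the grace period is an arbitrary choice):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

final class PoolShutdown {
    // Shut down the pool, waiting up to the given grace period before forcing it.
    static void shutdownGracefully(ExecutorService service, long graceSeconds) {
        service.shutdown();                       // stop accepting new tasks
        try {
            if (!service.awaitTermination(graceSeconds, TimeUnit.SECONDS)) {
                service.shutdownNow();            // interrupt tasks still running
            }
        } catch (InterruptedException e) {
            service.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }
}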
The problem I'm having is the first set of threads spin up but the process doesn't wait until all threads finish their work and all 5000000 lines are complete.
When you are running jobs using an ExecutorService they are added into the service and are run in the background. To wait for them to complete you need to wait for the service to terminate:
ExecutorService service = Executors.newFixedThreadPool(10);
// submit jobs to the service here
// after the last job has been submitted, we immediately shutdown the service
service.shutdown();
// then we can wait for it to terminate as the jobs run in the background
service.awaitTermination(Long.MAX_VALUE, TimeUnit.MILLISECONDS);
Also, if there is a crap-ton of lines in these files, I would recommend that you use a bounded queue for the jobs so that you don't blow out memory effectively caching all of the lines in the file. This only works if the files stay around and don't go away.
// this is the same as a newFixedThreadPool(10) but with a queue of 100
ThreadPoolExecutor threadPool = new ThreadPoolExecutor(10, 10,
        0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>(100));
// set a rejected execution handler so we block the caller once the queue is full
threadPool.setRejectedExecutionHandler(new RejectedExecutionHandler() {
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        try {
            executor.getQueue().put(r);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
        }
    }
});
Write this entry into a chunkedwriter that eventually groups together 10000 lines to write back chunk to a remote location
As each A,B,C job finishes, if it needs to be processed in a second step, then I would also recommend looking into an ExecutorCompletionService, which allows you to chain different thread pools together so that as lines finish they immediately start the 2nd phase of processing.
If instead this chunkedWriter is just a single thread then I'd recommend sharing a BlockingQueue<Result> and having the executor threads put to the queue once the lines are done and the chunkedWriter taking from the queue and doing the chunking and writing of the results. In this situation, indicating to the writer thread that it is done needs to be handled carefully -- maybe with some sort of END_RESULT constant put to the queue by the main thread waiting for the service to terminate.
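A rough sketch of that single-writer arrangement; Result, END_RESULT and the chunk size are illustrative placeholders, not types from the question:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ChunkWriterSketch {
    // Result is a stand-in for the {A,B,C,D} record produced by the workers.
    static class Result {
        final String line;
        Result(String line) { this.line = line; }
    }

    static final Result END_RESULT = new Result(null);   // poison pill

    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<Result> queue = new LinkedBlockingQueue<>(1000);

        // Single writer thread: drains the queue and flushes in chunks of 10000 lines.
        Thread writer = new Thread(() -> {
            List<Result> chunk = new ArrayList<>();
            try {
                for (Result r = queue.take(); r != END_RESULT; r = queue.take()) {
                    chunk.add(r);
                    if (chunk.size() == 10000) {
                        // write the chunk to the remote location here
                        chunk.clear();
                    }
                }
                // flush the final partial chunk here
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        writer.start();

        // Worker threads would call queue.put(new Result(line)) as each line finishes.
        // Once the pool has terminated, the main thread signals completion:
        queue.put(END_RESULT);
        writer.join();
    }
}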
I am developing a program that can send http requests to fetch documents.
I have filled a queue with all the request items:
Queue<RequestItem> requestItems = buildRequest4Docs();
Then,
int threadNum = requestItems.size();
//ExecutorService exs = Executors.newFixedThreadPool(threadNum);
for (int i = 0; i < threadNum; i++) {
    ResponseInterface response = new CMSGOResponse();
    RequestTask task = new RequestTask(requestItems.poll(), this, response);
    task.run();
    //exs.execute(new RequestTask(requestItems.poll(), this, response));
}
//exs.shutdown();
I am confused here: in the for loop, do the tasks run simultaneously, or do they run one by one?
Thanks!
The way you have it now, the tasks will be executed one by one. If you uncomment the lines that are currently commented out, and comment out the lines RequestTask task = new RequestTask(requestItems.poll(), this, response); and task.run();, you will get concurrent execution.
So for the concurrent execution it has to look like this:
int threadNum = requestItems.size();
ExecutorService exs = Executors.newFixedThreadPool(threadNum);
for (int i = 0; i < threadNum; i++) {
    ResponseInterface response = new CMSGOResponse();
    exs.execute(new RequestTask(requestItems.poll(), this, response));
}
exs.shutdown();
while (!exs.isTerminated()) {
    try {
        exs.awaitTermination(1L, TimeUnit.DAYS);
    }
    catch (InterruptedException e) {
        // you may or may not care here, but if you truly want to
        // wait for the pool to shutdown, just ignore the exception
        // otherwise you'll have to deal with the exception and
        // make a decision to drop out of the loop or something else.
    }
}
In addition to that, I suggest that you do not tie the number of threads created by the ExecutorService to the number of tasks you have to work on. Tying it to the number of processors of the host system is usually a better approach. To get the number of processors use Runtime.getRuntime().availableProcessors().
Into an executor service initialized like this you then put the items of your queue. That works nicely without fetching the total size up front; instead, you poll the Queue until it no longer returns data.
The final result of my proposals could look like this:
final int threadNum = Runtime.getRuntime().availableProcessors();
final ExecutorService exs = Executors.newFixedThreadPool(threadNum);
while (true) {
    final RequestItem requestItem = requestItems.poll();
    if (requestItem == null) {
        break;
    }
    final ResponseInterface response = new CMSGOResponse();
    exs.execute(new RequestTask(requestItem, this, response));
}
exs.shutdown();
I am confused here: in the for loop, do the tasks run simultaneously, or do they run one by one?
With the code you've posted, they'll run one-by-one, because (assuming RequestTask is a subclass of Thread) you've called run. You should call start. Now that you've said RequestTask implements Runnable, the correct code wouldn't call start (it doesn't have one!) but rather new Thread(task).start();. (But it looks like you've now received a good answer regarding the ExecutorService, which is another way to do it.)
Assuming you start them on different threads instead, then yes, they'll all run in parallel (as much as they can on the hardware, etc.).
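For the Runnable case, a self-contained illustration of the run() vs start() difference (deliberately not using the question's classes):

public class RunVsStart {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () ->
                System.out.println("running on " + Thread.currentThread().getName());

        task.run();                // runs on the main thread, sequentially

        Thread t = new Thread(task);
        t.start();                 // run() executes on a new thread, concurrently
        t.join();
    }
}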
Currently you are running your threads sequentially. Well, you have two ways to run threads (assuming that RequestTask extends Thread).
I. Either create a thread object and call its start() method:
RequestTask task = new RequestTask(requestItems.poll(), this, response);
task.start(); // run() method will be called, you don't need to call it
II. Or create an ExecutorService:
ExecutorService pool = Executors.newFixedThreadPool(poolSize);
//....
for (int i = 0; i < threadNum; i++) {
    ResponseInterface response = new CMSGOResponse();
    RequestTask task = new RequestTask(requestItems.poll(), this, response);
    pool.execute(task);
}
You are running them one by one in the current thread. You need to use the ExecutorService to run them concurrently.
I am confused here: in the for loop, do the tasks run simultaneously, or do they run one by one?
The tasks will be executed in the same thread, i.e. one by one, since you are calling run() rather than start(); that does not run the task in a new thread.
int threadNum = requestItems.size();
ExecutorService exs = Executors.newFixedThreadPool(threadNum);

ResponseInterface response = new CMSGOResponse();
RequestTask task = new RequestTask(requestItems.poll(), this, response);
exs.execute(task);

exs.shutdown();
In the above case the task will be executed in a new thread, and as soon as you assign 10 different tasks to the ExecutorService they will be executed asynchronously in different threads.
I usually tend to create my Threads (or classes implementing Runnable) first, THEN launch them with the start() method.
In your case, since RequestTask implements Runnable, you could add a start() method like this:
public class RequestTask implements Runnable {

    Thread t;
    boolean running;

    public RequestTask() {
        t = new Thread(this);
    }

    public void start() {
        running = true; // you could use a setter
        t.start();
    }

    public void run() {
        while (running) {
            // your code goes here
        }
    }
}
Then:
int threadNum = requestItems.size();
RequestTask[] rta = new RequestTask[threadNum];

// Create the so-called Threads ...
for (int i = 0; i < threadNum; i++) {
    rta[i] = new RequestTask(requestItems.poll(), this, new CMSGOResponse());
}

// ... THEN launch them
for (int i = 0; i < threadNum; i++) {
    rta[i].start();
}
I am writing a thread pool utility in my multithreading program. I just need to validate that the following methods are correct and return the right values for me. I am using a LinkedBlockingQueue with a size of 1. Also, the Javadoc always uses the phrase 'method will return approximate number', so I doubt whether the following conditions are correct.
public boolean isPoolIdle() {
    return myThreadPool.getActiveCount() == 0;
}

public int getAcceptableTaskCount() {
    // initially poolSize is 0 (after the pool executes something it starts to change)
    if (myThreadPool.getPoolSize() == 0) {
        return myThreadPool.getCorePoolSize() - myThreadPool.getActiveCount();
    }
    return myThreadPool.getPoolSize() - myThreadPool.getActiveCount();
}

public boolean isPoolReadyToAcceptTasks() {
    return myThreadPool.getActiveCount() < myThreadPool.getCorePoolSize();
}
Please let me know your thoughts and suggestions.
UPDATE
An interesting thing is that if the pool tells me there are 3 threads available from the getAcceptableTaskCount method and I pass 3 tasks to the pool, sometimes one task gets rejected and is handled by the RejectedExecutionHandler, while at other times the pool handles all the tasks I passed. I am wondering why the pool rejects tasks, since I am passing tasks according to the available thread count.
--------- implementation of Gray's answer ---------
class MyTask implements Runnable {

    @Override
    public void run() {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("exec");
    }
}

@Test
public void testTPool() {
    ExecutorService pool = Executors.newFixedThreadPool(5);
    List<Future<MyTask>> list = new ArrayList<Future<MyTask>>();
    for (int i = 0; i < 5; i++) {
        MyTask t = new MyTask();
        list.add(pool.submit(t, t));
    }
    for (int i = 0; i < list.size(); i++) {
        Future<MyTask> t = list.get(i);
        System.out.println("Result -" + t.isDone());
        MyTask m = new MyTask();
        list.add(pool.submit(m, m));
    }
}
This prints Result -false in the console, meaning that the task is not complete.
From your comments:
i need to know that if pool is idle or pool can accept the tasks. if pool can accept, i need to know how much free threads in the pool. if it is 5 i will send 5 tasks to the pool to do the processing.
I don't think that you should be doing the pool accounting yourself. For your thread pool if you use Executors.newFixedThreadPool(5) then you can submit as many tasks as you want and it will only run them in 5 threads.
so i get the first most 5 tasks from the vector and assign them to the pool.ignore the other tasks in the vector since they may be update / remove from a separate cycle
Ok, I see. So you want to maximize parallelization while at the same time not pre-loading jobs? I would think that something like the following pseudo code would work:
int numThreads = 5;
ExecutorService threadPool = Executors.newFixedThreadPool(numThreads);
List<Future<MyJob>> futures = new ArrayList<Future<MyJob>>();

// submit the initial jobs
for (int i = 0; i < numThreads; i++) {
    MyJob myJob = getNextBestJob();
    futures.add(threadPool.submit(myJob, myJob));
}

// the list is growing so we use for i
for (int i = 0; i < futures.size(); i++) {
    // wait for a job to finish; Future.get() blocks until it is done
    MyJob myJob = futures.get(i).get();
    // process the job somehow
    // get the next best job now that the previous one finished
    MyJob nextJob = getNextBestJob();
    if (nextJob != null) {
        // submit the next job unless we are done
        futures.add(threadPool.submit(nextJob, nextJob));
    }
}
However, I don't quite understand how the thread count would change. If you edit your question with some more details I can tweak my response.
The setup:
I am in the process of changing the way a program works under the hood. The current version works like this:
public void threadWork( List<MyCallable> workQueue )
{
    ExecutorService pool = Executors.newFixedThreadPool(someConst);
    List<Future<myOutput>> returnValues = new ArrayList<Future<myOutput>>();
    List<myOutput> finishedStuff = new ArrayList<myOutput>();

    for( int i = 0; i < workQueue.size(); i++ )
    {
        returnValues.add( pool.submit( workQueue.get(i) ) );
    }

    while( !returnValues.isEmpty() )
    {
        try
        {
            // Future.get() waits for a value from the callable
            finishedStuff.add( returnValues.remove(0).get() );
        }
        catch(Throwable iknowthisisbaditisjustanexample){}
    }

    doLotsOfThings(finishedStuff);
}
But the new system is going to use a private inner Runnable to call a synchronized method that writes the data into a global variable. My basic setup is:
public void threadReports( List<String> workQueue )
{
    ExecutorService pool = Executors.newFixedThreadPool(someConst);
    List<MyRunnable> runnables = new ArrayList<MyRunnable>();

    for ( int i = 0; i < modules.size(); i++ )
    {
        runnables.add( new MyRunnable( workQueue.get(i) ) );
        pool.submit( runnables.get(i) );
    }

    while( !runnables.isEmpty() )
    {
        try
        {
            runnables.remove(0).wait(); // I realized that this wouldn't work
        }
        catch(Throwable iknowthisisbaditisjustanexample){}
    }

    doLotsOfThings(finishedStuff); // finishedStuff is the global the Runnables write to
}
If you read my comment in the try of the second piece of code you will notice that I don't know how to use wait(). I had thought it was basically like thread.join() but after reading the documentation I see it is not.
I'm okay with changing some structure as needed, but the basic system of taking work, using runnables, having the runnables write to a global variable, and using a threadpool are requirements.
The Question
How can I wait for the threadpool to be completely finished before I doLotsOfThings()?
You should call ExecutorService.shutdown() and then ExecutorService.awaitTermination.
...
pool.shutdown();
if (pool.awaitTermination(<long>, <TimeUnit>)) {
    // finished before timeout
    doLotsOfThings(finishedStuff);
} else {
    // Timeout occurred.
}
Try this:
pool.shutdown();
pool.awaitTermination(WHATEVER_TIMEOUT, TimeUnit.SECONDS);
Have you considered using the Fork/Join framework that is now included in Java 7? If you do not want to use Java 7 yet, you can get the jar for it here.
public void threadReports( List<String> workQueue )
{
    ExecutorService pool = Executors.newFixedThreadPool(someConst);
    Set<Future<?>> futures = new HashSet<Future<?>>();

    for ( int i = 0; i < modules.size(); i++ )
    {
        futures.add(pool.submit(runnables.get(i)));
    }

    while( !futures.isEmpty() )
    {
        Set<Future<?>> removed = new HashSet<Future<?>>();
        for(Future<?> f : futures) {
            try
            {
                f.get(100, TimeUnit.MILLISECONDS);
            }
            catch(Throwable iknowthisisbaditisjustanexample){}
            if(f.isDone()) removed.add(f);
        }
        for(Future<?> f : removed) futures.remove(f);
    }

    doLotsOfThings(finishedStuff); // finishedStuff is the global the Runnables write to
}
shutdown is a lifecycle method of the ExecutorService and renders the executor unusable after the call. Creating and destroying thread pools in a method is as bad as creating/destroying threads: it pretty much defeats the purpose of using a thread pool, which is to reduce the overhead of thread creation by enabling transparent reuse.
If possible, you should maintain your ExecutorService lifecycle in sync with your application. - create when first needed, shutdown when your app is closing down.
To achieve your goal of executing a bunch of tasks and waiting for them, the ExecutorService provides the method invokeAll(Collection<? extends Callable<T>> tasks) (and the version with timeout if you want to wait a specific period of time.)
Using this method and some of the points mentioned above, the code in question becomes:
public void threadReports( List<String> workQueue ) throws InterruptedException {
    // assumes MyRunnable implements Callable<MyRunnable> so it can be passed to invokeAll
    List<MyRunnable> runnables = new ArrayList<MyRunnable>(workQueue.size());
    for (String work : workQueue) {
        runnables.add(new MyRunnable(work));
    }
    // Executor is obtained from some applicationContext that takes care of lifecycle mgnt
    // invokeAll(...) will block and return when all callables are executed
    List<Future<MyRunnable>> results = applicationContext.getExecutor().invokeAll(runnables);
    // I wouldn't use a global variable unless you have a VERY GOOD reason for that,
    // b/c all the threads of the pool doing work will be contending for the lock on that variable.
    // doLotsOfThings(finishedStuff);
    // Note that the List of Futures holds the individual results of each execution.
    // That said, the preferred way to harvest your results would be:
    doLotsOfThings(results);
}
PS: Not sure why threadReports is void. It could/should return the calculation of doLotsOfThings to achieve a more functional design.