Reusing ThreadPoolExecutor vs Creating and Disposing Ad Hoc? - java

I am building a multithreaded process that has a couple of stages. Each stage iterates through an unknown number of objects (hundreds of thousands, from a buffered query result set or a text file) and kicks off a runnable or callable for each object, but all runnables/callables must complete before moving on to the next stage.
I do not want to use a latch or any kind of synchronizer, because I don't want to hurt throughput; I suspect the latch's internal counter will slow things down. I also don't want to use a list of futures with invokeAll(), because I want runnables to start executing immediately as I iterate through them.
However, creating a ThreadPoolExecutor for each stage, looping through and submitting all the runnables, and then shutting it down for each stage seems to be a functional solution...
public void runProcess() {
    ResultSet rs = someDbConnection.executeQuery(someSQL);
    ExecutorService stage1Executor = Executors.newFixedThreadPool(9);
    while (rs.next()) {
        // SUBMIT UNKNOWN # OF RUNNABLES FOR STAGE 1
    }
    rs.close();
    stage1Executor.shutdown();

    rs = someDbConnection.executeQuery(moreSQL);
    ExecutorService stage2Executor = Executors.newFixedThreadPool(9);
    while (rs.next()) {
        // SUBMIT UNKNOWN # OF RUNNABLES FOR STAGE 2
    }
    rs.close();
    stage2Executor.shutdown();
}
However, I know that threads, thread pools, and anything else involved in concurrency are expensive to construct and destroy. Or maybe it's not that big a deal and I'm just being overly cautious about performance, since concurrency has overhead no matter what. Is there a more efficient way of doing this? Some kind of wait-for-completion operation I don't know about?

If you destroy the thread pool and initialize a new one for every stage, it will likely cost you much more than using a CountDownLatch!
Further, calling stage1Executor.shutdown() does not guarantee that all currently running tasks will have finished before the new ExecutorService is up and running. Even calling shutdownNow() cannot guarantee it (and you probably wouldn't want to call shutdownNow() anyway, because you want your tasks to finish executing).
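For illustration, a latch-per-stage version of your loop might look roughly like this. It is only a sketch: it assumes you know (or first count) how many tasks the stage has, the task bodies are placeholders, and the pool size is taken from your example.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class StageLatchSketch {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(9);  // one pool reused across stages
        List<Runnable> stage1Tasks = Arrays.asList(              // placeholder work items
                () -> System.out.println("stage 1, task A"),
                () -> System.out.println("stage 1, task B"));

        // One latch per stage; every task counts down when it finishes.
        CountDownLatch stage1Done = new CountDownLatch(stage1Tasks.size());
        for (Runnable task : stage1Tasks) {
            pool.execute(() -> {
                try {
                    task.run();
                } finally {
                    stage1Done.countDown();
                }
            });
        }
        stage1Done.await();   // block here until every stage-1 task has finished
        // ... submit stage 2 to the same pool with a fresh latch ...
        pool.shutdown();
    }
}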
As Donald Knuth once said:
premature optimization is the root of all evil.
So even if you are not persuaded by me, better to listen to him :)

Setting up and tearing down a handful of thread pools is negligible. Try it out in a loop in a test.
Using a countdown latch is fine, but it may just duplicate work that ThreadPoolExecutor already does internally, and it couples your tasks to your execution framework. Not a fan of this approach.
As for the original code, ExecutorService has an awaitTermination method, so you can wait until your work is done before moving to the next stage.
For my money, your pseudo code is fine. Just replace executor.shutdown() with shutdownAndAwaitTermination(ExecutorService); the source for that is in the ExecutorService javadoc: http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ExecutorService.html
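For reference, the two-phase shutdown from that javadoc page boils down to something like this (the timeouts are arbitrary and worth tuning for your workload):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

final class PoolUtils {
    // Two-phase shutdown, following the pattern in the ExecutorService javadoc:
    // stop accepting new tasks, wait for running ones, then force-cancel stragglers.
    static void shutdownAndAwaitTermination(ExecutorService pool) {
        pool.shutdown();                                       // no new tasks accepted
        try {
            if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
                pool.shutdownNow();                            // cancel still-running tasks
                if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
                    System.err.println("Pool did not terminate");
                }
            }
        } catch (InterruptedException ie) {
            pool.shutdownNow();                                // re-cancel if we were interrupted
            Thread.currentThread().interrupt();                // preserve the interrupt status
        }
    }
}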

Related

Writing from Future of CachedThreadPool. Is my implementation incorrect?

I need help with my multithreading code.
I have a Callable class which returns a value. I have a cached thread pool to submit ~60,000 tasks. I collect all the Futures in a List. After the ExecutorService has shut down, I loop through the list of Futures and write the returned values using a BufferedWriter. Is this the correct way to implement it?
ExecutorService execService = Executors.newCachedThreadPool();
List<Future<ValidationDataObject<String, Boolean>>> futureList = new ArrayList<>();
for (int i = 0; i < emailArrayList.size(); i++) {
    String emailAddress = emailArrayList.get(i);
    ValidateEmail validateEmail = new ValidateEmail(emailAddress);
    Future<ValidationDataObject<String, Boolean>> future =
            execService.submit(validateEmail);
    futureList.add(future);
}
execService.shutdown();

for (Future<ValidationDataObject<String, Boolean>> future : futureList) {
    ValidationDataObject<String, Boolean> validationObject = future.get();
    bufferedWriter.write(validationObject.getEmailAddress() + "|"
            + validationObject.getIsValid());
    bufferedWriter.newLine();
    bufferedWriter.flush();
}

if (execService.isTerminated()) bufferedWriter.close();
Should I be using a synchronized block for the bufferedWriter? I'm thinking it doesn't need to be synchronized because I'm only using the bufferedWriter from the main thread, right?
I have a cachedThreadPool to submit ~60,000 tasks.
Off the bat, a cached thread pool plus 60k tasks is a red flag. That can end up starting a huge number of threads, which I doubt you really want. You should use a fixed thread pool and vary the number of threads until you achieve a good balance of throughput without overwhelming your server. Maybe start with 2x the number of CPUs and then adjust depending on the server load.
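For example (the 2x factor is only a starting point to tune, not a rule; the right size depends on how CPU-bound the validation work is):

// Size the pool relative to the machine instead of letting a cached pool grow unbounded.
int nThreads = 2 * Runtime.getRuntime().availableProcessors();
ExecutorService execService = Executors.newFixedThreadPool(nThreads);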
You might also consider using a fixed-size queue, which limits the number of outstanding tasks, although 60k tasks is fine unless those objects are heavy.
I collect all the Futures in a List. After the ExecutorService has shut down, I loop through the list of Futures and write the returned values using a BufferedWriter. Is this the correct way to implement it?
Yes, that's a good pattern. You don't show the writer being created but it is certainly fine for the main thread to own that.
Should I be using a synchronized block for the bufferedWriter? I'm thinking it doesn't need to be synchronized because I'm only using the bufferedWriter from the main thread, right?
Right. No other threads are using it so that's fine. It is a very typical pattern to have a writer thread managing the output of a multi-thread application.
One final comment: you might want to look at ExecutorCompletionService, which lets you process tasks as they finish instead of having to wait for them in order. You might require the output to be in order, in which case this isn't helpful, but it's good technology to know about anyway.
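A rough sketch of that variant, reusing the types from your snippet (exception handling omitted for brevity; note that results now come back in completion order, not submission order):

CompletionService<ValidationDataObject<String, Boolean>> completionService =
        new ExecutorCompletionService<>(execService);
for (String emailAddress : emailArrayList) {
    completionService.submit(new ValidateEmail(emailAddress));
}
for (int i = 0; i < emailArrayList.size(); i++) {
    // take() blocks until the *next finished* task, whichever one that is
    ValidationDataObject<String, Boolean> validationObject = completionService.take().get();
    bufferedWriter.write(validationObject.getEmailAddress() + "|"
            + validationObject.getIsValid());
    bufferedWriter.newLine();
}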
Apart from the fact that executor.shutdown() will most likely not do what you believe it to do (it simply stops the executor from accepting new tasks; it does not wait for all already-submitted tasks to terminate), your code looks fine.
You are right, there is no need for synchronization with respect to the writer, as you access it only from a single thread.
There are things that can be improved, though. Firstly, you are not doing much exception handling: Future.get() will throw an ExecutionException if the Callable hit an exception (and it can also throw InterruptedException while you wait).
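Something along these lines, for example (how you react to a failed task is up to you; I/O exceptions are left to the enclosing method, as in your snippet):

for (Future<ValidationDataObject<String, Boolean>> future : futureList) {
    try {
        ValidationDataObject<String, Boolean> validationObject = future.get();
        bufferedWriter.write(validationObject.getEmailAddress() + "|"
                + validationObject.getIsValid());
        bufferedWriter.newLine();
    } catch (ExecutionException e) {
        // The Callable threw; e.getCause() is the original exception.
        e.getCause().printStackTrace();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();   // restore the interrupt flag and stop waiting
        break;
    }
}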
I'm not certain how large the deviations in execution time of your Callables are. If there are notable deviations, consider the following case: say we submit Callables A, B and C and receive FutA, FutB and FutC. Calling get() blocks until the calculation behind that Future is finished, so you might be waiting on FutA while FutB and FutC are already finished and ready for writing. The worst case here is that processing of A delays writing for all 60,000 tasks.
I think I would go for another approach, where every Callable gets a reference to the same ConcurrentLinkedQueue and, instead of returning its result via a Future, writes the result into that queue. In this scenario, the ordering of the results does not depend on the ordering of the Callables but on the time at which each Callable finishes. Whether or not this results in a speedup depends on your setting (especially the time it takes to write a result and the deviation in execution times of the Callables).
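A sketch of that idea (the ValidateEmail constructor taking the queue is hypothetical, and here the queue is only drained after the pool terminates; a BlockingQueue plus a dedicated writer thread would let you write while workers are still running):

ConcurrentLinkedQueue<ValidationDataObject<String, Boolean>> results =
        new ConcurrentLinkedQueue<>();
for (String emailAddress : emailArrayList) {
    // hypothetical variant of ValidateEmail that appends its result to the shared queue
    execService.submit(new ValidateEmail(emailAddress, results));
}
execService.shutdown();
execService.awaitTermination(1, TimeUnit.HOURS);   // wait for every task to finish

// Results now sit in completion order rather than submission order.
for (ValidationDataObject<String, Boolean> v : results) {
    bufferedWriter.write(v.getEmailAddress() + "|" + v.getIsValid());
    bufferedWriter.newLine();
}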

ForkJoinTask completion handler

I have a long-running calculation that I have split up with Java's ForkJoinTask.
Java's FutureTask provides a template method done(). Overriding this method allows for "registering a completion handler".
Is it possible to register a completion handler for a ForkJoinTask?
I am asking because I don't want to have blocking threads in my application - but my application will have a blocking thread as soon as I retrieve the calculation result via calls like result = forkJoinPool.invoke(myForkJoinTask) or result = forkJoinPool.submit(myForkJoinTask).get().
I think you mean "lock-free" programming (http://en.wikipedia.org/wiki/Non-blocking_algorithm)? While FutureTask.get() may block the current thread (and thus leave a CPU idle), ForkJoinTask.get() (or join()) tries to keep the CPU busy.
This works well if you are able to split your problem into many small pieces (ForkJoinTasks). If one FJTask is internally waiting for the result of another task that is not ready yet, it tries to pick up some other task from its ForkJoinPool and executes that task in the meantime.
As long as all your tasks are CPU-bound, this works fine: all your CPUs are kept busy.
It does NOT work if any of your tasks waits for some external event (e.g. sending a REST call to the Mars rover). Also, the problem should form a DAG, otherwise you may get a deadlock. But as long as you only join tasks that you forked earlier in the same task, it works well - even better if you join the task you forked last.
So it is not too bad to call get() or join() within or between your tasks.
You mentioned a completion handler to solve the problem. If you are implementing the ForkJoinTask yourself, have a look at RecursiveTask or even RecursiveAction. You implement compute(), and you can easily forward the result of each task to some collector at the end of your compute() method instead of returning it.
But you have to consider that your collector will be called concurrently! For adding values or counting completions, have a look at java.util.concurrent.atomic. Avoid synchronized blocks; otherwise all your tasks end up waiting on that single bottleneck and only one CPU keeps working.
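To make the shape concrete, here is a toy RecursiveAction that forwards its partial results to an AtomicLong collector instead of returning them (the summing "work" and the threshold are purely illustrative):

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;
import java.util.concurrent.atomic.AtomicLong;

// Each leaf task pushes its partial result into a shared atomic collector
// at the end of compute() instead of returning it up the join chain.
class SumAction extends RecursiveAction {
    private final long from, to;              // the half-open range [from, to)
    private final AtomicLong collector;

    SumAction(long from, long to, AtomicLong collector) {
        this.from = from;
        this.to = to;
        this.collector = collector;
    }

    @Override
    protected void compute() {
        if (to - from <= 1000) {               // small enough: do the work directly
            long sum = 0;
            for (long i = from; i < to; i++) {
                sum += i;
            }
            collector.addAndGet(sum);          // concurrent-safe "completion handling"
        } else {                               // otherwise split and fork
            long mid = (from + to) / 2;
            invokeAll(new SumAction(from, mid, collector),
                      new SumAction(mid, to, collector));
        }
    }
}

// Usage:
AtomicLong total = new AtomicLong();
new ForkJoinPool().invoke(new SumAction(0, 1_000_000, total));
System.out.println(total.get());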
That said, I think propagating the results involves more problems than returning them (since the ForkJoinPool handles that for you). In addition, it becomes difficult to decide (and to communicate to the outside) at which point the final result is done.

Add the first element to a ConcurrentLinkedQueue atomically

I want to use a ConcurrentLinkedQueue in an atomic lock-free manner:
Several concurrent threads push events into the queue and some other thread will process them. The queue is not bounded and I don't want any thread to wait or get locked. The reading side, however, may notice that the queue has become empty. In a lock-free implementation the reading thread must not block; it just ends its task and proceeds with other tasks (i.e. via an ExecutorService). Thus the writer pushing the first new event into an empty queue must become aware of it and should restart the reader (i.e. by submitting a new Runnable to the ExecutorService) to process the queue. Any further threads submitting a second or third event don't need to care, as they may assume a reader has already been prepared/submitted.
Unfortunately the add() method of ConcurrentLinkedQueue always returns true. Asking the queue via isEmpty() before or after adding the event won't help either, as it is not atomic.
Should I use an additional AtomicInteger to monitor the queue size(), or is there some smarter solution for that?
Dieter.
I don't quite understand why you wouldn't just use an ExecutorService directly for this. It uses a BlockingQueue internally and takes care of all of the signaling itself.
// single-threaded pool; raise the thread count if one processing thread isn't enough
ExecutorService threadPool = Executors.newFixedThreadPool(1);
for (Job job : jobsToDo) {
    threadPool.submit(new MyJobProcessor(job));
}
Unless you have good reasons, I would not rewrite the same logic myself.
If you are trying to make use of dormant threads somehow, I would strongly recommend not bothering. Threads are relatively cheap, so assigning a thread to process your queued tasks is fine. Re-using threads is unnecessary and looks like premature optimization to me.
Using an AtomicInteger to resolve the submit contention is more efficient than locks or synchronized blocks.
Here is an example of how it can be implemented in Java.
Also, there is a more efficient structure for a multi-producer / single-consumer queue than ConcurrentLinkedQueue.
Example of using it for actor implementations.
Another example.
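For completeness, a sketch of the AtomicInteger idea (class and method names are made up; handle() stands in for your real event processing):

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.atomic.AtomicInteger;

// The counter tracks queued-but-unprocessed events. Only the producer that
// moves it from 0 to 1 schedules a drain task, so at most one reader runs at a time.
class EventChannel<E> {
    private final ConcurrentLinkedQueue<E> queue = new ConcurrentLinkedQueue<>();
    private final AtomicInteger pending = new AtomicInteger();
    private final ExecutorService executor;

    EventChannel(ExecutorService executor) {
        this.executor = executor;
    }

    void publish(E event) {
        queue.offer(event);                        // enqueue first...
        if (pending.getAndIncrement() == 0) {      // ...then wake a reader only on 0 -> 1
            executor.execute(this::drain);
        }
    }

    private void drain() {
        do {
            handle(queue.poll());                  // never null here: the offer preceded the increment
        } while (pending.decrementAndGet() > 0);   // keep going while more events were published;
    }                                              // the next publish schedules the next drain

    private void handle(E event) {
        System.out.println("processing " + event); // placeholder for real processing
    }
}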

Java - phasing threads

I'm implementing a parallel, performance-critical algorithm with multiple threads. I assign all threads some data to work on. When all those threads have finished working on their data, I assign all threads new data, and the cycle continues. (This is what I refer to as thread "clocking", since it's somewhat similar to CPU clocking.)
What I came up with so far is using a master thread that stores an integer. At the beginning of each cycle, I set the integer to the number of slave threads. When a slave thread is done, it decrements the master thread's integer. Once that integer reaches zero, I start a new cycle.
Is this a good approach, or are there more efficient ways of doing the same thing?
You'd be better off using a Phaser (if you have Java 7), or CyclicBarrier for Java 5+.
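For illustration, a CyclicBarrier version of the "clocking" cycle might look roughly like this (fixed worker and cycle counts and trivial "work", just to show the shape; the barrier action is where you would hand out the next batch of data):

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class ClockedWorkers {
    static final int WORKERS = 4;
    static final int CYCLES = 3;

    public static void main(String[] args) {
        // The barrier action runs once per cycle, after all workers have arrived;
        // this is where you would assign the next chunk of data.
        CyclicBarrier barrier = new CyclicBarrier(WORKERS,
                () -> System.out.println("--- cycle finished, assigning new data ---"));

        for (int w = 0; w < WORKERS; w++) {
            final int id = w;
            new Thread(() -> {
                try {
                    for (int cycle = 0; cycle < CYCLES; cycle++) {
                        System.out.println("worker " + id + " processing its data for cycle " + cycle);
                        barrier.await();   // wait until every worker has finished this cycle
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}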
I would recommend looking at the newer classes in the java.util.concurrent package, especially ThreadPoolExecutor. You might be reinventing the wheel if you haven't looked beyond java.lang.Thread.
Well. See CyclicBarrier (JavaDoc)
A better way is to use Thread.join(). In your main thread, call join() on all the threads you have started; the main thread will then wait until all joined threads are finished.
See for example http://javahowto.blogspot.com/2007/05/when-to-join-threads.html
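A minimal sketch of that (currentCycleWork is a hypothetical batch of work items; interruption handling is omitted):

List<Thread> threads = new ArrayList<>();
for (Runnable work : currentCycleWork) {   // one plain Thread per work item in this cycle
    Thread t = new Thread(work);
    t.start();
    threads.add(t);
}
for (Thread t : threads) {
    t.join();                              // blocks until that thread has finished
}
// every thread of this cycle is done; set up the next cycle here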
An ExecutorService can do this for you.
ExecutorService executor = Executors.newFixedThreadPool(10);
List<Callable<Void>> tasks;
do {
    tasks = getNextTasksToExecute();
    executor.invokeAll(tasks); // blocks until every task in this batch has completed
} while (tasks.size() > 0);
This creates a thread pool with 10 threads. It then calls getNextTasksToExecute(), which you should implement yourself to return the next batch of tasks that need doing, executes those tasks in parallel in the thread pool, and keeps looping until getNextTasksToExecute() returns no more tasks.
Edit:
Code not tested, think there may be a compile error, but you can figure that out.

ScheduledThreadPoolExecutor and corePoolSize 0?

I'd like to have a ScheduledThreadPoolExecutor which also stops the last thread if there is no work to do, and which creates threads (and keeps them alive for some time) when there are new tasks. But once there is no more work to do, it should again discard all threads.
I naively created it as new ScheduledThreadPoolExecutor(0), but as a consequence no thread is ever created and no scheduled task is ever executed.
Can anybody tell me if I can achieve my goal without writing my own wrapper around the ScheduledThreadpoolExecutor?
Thanks in advance!
Actually you can do it, but it's non-obvious:
Create a new ScheduledThreadPoolExecutor
In the constructor set the core threads to the maximum number of threads you want
Set the keepAliveTime of the executor
And finally, allow the core threads to time out
m_Executor = new ScheduledThreadPoolExecutor(16);
m_Executor.setKeepAliveTime(5, TimeUnit.SECONDS);
m_Executor.allowCoreThreadTimeOut(true);
This works only on Java 6 and later, though.
I suspect that nothing provided in java.util.concurrent will do this for you, just because if you need a scheduled execution service, then you often have recurring tasks to perform. If you have a recurring task, then it usually makes more sense to just keep the same thread around and use it for the next recurrence of the task, rather than tearing down your thread and having to build a new one at the next recurrence.
Of course, a scheduled executor could be used for inserting delays between non-recurring tasks, or it could be used in cases where resources are so scarce and recurrence is so infrequent that it makes sense to tear down all your threads until new work arrives. So, I can see cases where your proposal would definitely make sense.
To implement this, I would consider trying to wrap a cached thread pool from Executors.newCachedThreadPool together with a single-threaded scheduled executor service (i.e. new ScheduledThreadPoolExecutor(1)). Tasks could be scheduled via the scheduled executor service, but the scheduled tasks would be wrapped in such a way that rather than having your single-threaded scheduled executor execute them, the single-threaded executor would hand them over to the cached thread pool for actual execution.
That compromise would give you a maximum of one thread running when there is absolutely no work to do, and it would give you as many threads as you need (within the limits of your system, of course) when there is lots of work to do.
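Roughly the shape being described (class and method names here are made up for illustration): the single scheduled thread only fires the trigger, while the real work runs on the cached pool, which shrinks back to zero threads when idle.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class HandOffScheduler {
    private final ScheduledExecutorService timer = Executors.newScheduledThreadPool(1);
    private final ExecutorService workers = Executors.newCachedThreadPool();

    // Schedule the trigger on the single timer thread, but hand the actual task
    // over to the cached pool so long-running work never blocks the timer.
    void schedule(final Runnable task, long delay, TimeUnit unit) {
        timer.schedule(new Runnable() {
            @Override
            public void run() {
                workers.execute(task);
            }
        }, delay, unit);
    }

    void shutdown() {
        timer.shutdown();
        workers.shutdown();
    }
}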
Reading the ThreadPoolExecutor javadocs might suggest that Alex V's solution is okay. However, doing so will result in unnecessarily creating and destroying threads, nothing like a cached thread pool. The ScheduledThreadPool is not designed to work with a variable number of threads. Having looked at the source, I'm sure you'll end up spawning a new thread almost every time you submit a task. Joe's solution should work even if you are ONLY submitting delayed tasks.
P.S. I'd monitor your threads to make sure you're not wasting resources with your current implementation.
This problem is a known bug in ScheduledThreadPoolExecutor (Bug ID 7091003) and has been fixed in Java 7u4. Though looking at the patch, the fix is that "at least one thread is started even if corePoolSize is 0."
