I have a long-running calculation that I have split up with Java's ForkJoinTask.
Java's FutureTask provides a template method done(). Overriding this method allows for "registering a completion handler".
Is it possible to register a completion handler for a ForkJoinTask?
I am asking because I don't want to have blocking threads in my application - but my application will have a blocking thread as soon as I retrieve the calculation result via calls to result = ForkJoinPool.invoke(myForkJoinTask) or result = ForkJoinPool.submit(myForkJoinTask).get().
I think you mean "lock-free" programming (http://en.wikipedia.org/wiki/Non-blocking_algorithm)? While FutureTask.get() may block the current thread (and thus leave a CPU idling), ForkJoinTask.get() (or join()) tries to keep the CPU busy.
This works well if you are able to split your problem into many small pieces (ForkJoinTasks). If one task internally waits for the result of another task that is not ready yet, it tries to pick up some other pending task from its ForkJoinPool and executes that task in the meantime.
As long as all your tasks are CPU bound, this works fine: all your CPUs are kept busy.
It does NOT work if any of your tasks waits for some external event (e.g. a REST call to the Mars rover). Also, the task dependencies should form a DAG, otherwise you may run into a deadlock. But as long as you only join tasks you forked before in the same task, it works well. It works even better if you join the task you forked last.
So it is not too bad to call get() or join() within/between your tasks.
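To make that last point concrete, here is a minimal sketch (not taken from the question's code) of a RecursiveTask that forks one half of the work, computes the other half itself, and joins the forked task last:
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sketch: sums an array by splitting it in half, forking one half and
// computing the other half directly, then joining the forked task last.
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000;
    private final long[] data;
    private final int from, to;

    SumTask(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) >>> 1;
        SumTask left = new SumTask(data, from, mid);
        left.fork();                                        // fork the first half
        long right = new SumTask(data, mid, to).compute();  // compute the second half directly
        return right + left.join();                         // join the task forked last
    }
}

// usage: long total = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));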
You mentioned a completion handler to solve the problem. If you are implementing the ForkJoinTask yourself, you can have a look at RecursiveTask or even RecursiveAction. You implement compute() and can easily forward the result of each task to some collector at the end of your compute() method instead of returning it.
But you have to consider that your collector will be called concurrently! For adding values or counting completions, have a look at java.util.concurrent.atomic. Avoid using synchronized blocks; otherwise all your tasks have to wait at this single bottleneck and only one CPU keeps working.
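For illustration, a minimal sketch of that collector idea, using a RecursiveAction together with an AtomicLong as the collector (the class and field names here are made up):
import java.util.concurrent.RecursiveAction;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: instead of returning partial results, every leaf task adds its
// result to a shared, lock-free collector.
class CollectingSumTask extends RecursiveAction {
    private static final int THRESHOLD = 10_000;
    private final long[] data;
    private final int from, to;
    private final AtomicLong collector;          // updated concurrently, no synchronized needed

    CollectingSumTask(long[] data, int from, int to, AtomicLong collector) {
        this.data = data;
        this.from = from;
        this.to = to;
        this.collector = collector;
    }

    @Override
    protected void compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            collector.addAndGet(sum);            // lock-free accumulation
        } else {
            int mid = (from + to) >>> 1;
            invokeAll(new CollectingSumTask(data, from, mid, collector),
                      new CollectingSumTask(data, mid, to, collector));
        }
    }
}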
That said, I think propagating the results this way involves more problems than returning them (since the ForkJoinPool handles the latter for you). In addition, it becomes difficult to decide (and to communicate to the outside) at which point your final result is complete.
In my Java application I have a Runnable such as:
this.runner = new Runnable() {
    @Override
    public void run() {
        // do something that takes roughly 5 seconds.
    }
};
I need to run this roughly every 30 seconds (although this can vary) in a separate thread. The nature of the code is such that I can run it and forget about it (whether it succeeds or fails). I do this as follows as a single line of code in my application:
(new Thread(this.runner)).start()
Now, this works fine. However, I'm wondering if there is any sort of cleanup I should be doing on each of the thread instances after they finish running? I am doing CPU profiling of this application in VisualVM and I can see that, over the course of an hour of runtime, a lot of threads are being created. Is this concern valid, or is everything OK?
N.B. The reason I start a new Thread instead of simply defining this.runner as a Thread is that I sometimes need to run this.runner twice simultaneously (before the first run call has finished), and I can't do that if this.runner were a Thread, since a single Thread object can only be started once.
Java objects that need to be "cleaned up" or "closed" after use conventionally implement the AutoCloseable interface. This makes it easy to do the clean up using try-with-resources. The Thread class does not implement AutoCloseable, and has no "close" or "dispose" method. So, you do not need to do any explicit clean up.
However
(new Thread(this.runner)).start()
is not guaranteed to immediately start computation of the Runnable. You might not care whether it succeeds or fails, but I guess you do care whether it runs at all. And you might want to limit the number of these tasks running concurrently; you might want only one to run at once, for example. So you might want to join() the thread (or, perhaps, join with a timeout). Joining the thread will ensure that the thread completes its computation. Joining the thread with a timeout increases the chance that the thread starts its computation (because the current thread will be suspended, freeing a CPU that might run the other thread).
However, creating multiple threads to perform regular or frequent tasks is not recommended. You should instead submit tasks to a thread pool. That will enable you to control the maximum amount of concurrency, and can provide you with other benefits (such as prioritising different tasks), and amortises the expense of creating threads.
You can configure a thread pool to use a fixed-length (bounded) task queue and to have submitting threads execute submitted tasks themselves when the queue is full. By doing that you can guarantee that tasks submitted to the thread pool are (eventually) executed. The documentation of ThreadPoolExecutor.execute(Runnable) says it
Executes the given task sometime in the future
which suggests that the implementation guarantees it will eventually run all submitted tasks, even if you do not take those specific measures to ensure submitted tasks are executed.
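As a sketch of such a configuration (the pool size of 2 and the queue bound of 10 are arbitrary assumptions, not requirements):
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class BackgroundRunner {
    // Small pool with a bounded queue; when the queue is full the submitting
    // thread runs the task itself (CallerRunsPolicy), so no task is rejected.
    private final ThreadPoolExecutor executor = new ThreadPoolExecutor(
            2, 2,                                   // core and maximum pool size
            0L, TimeUnit.MILLISECONDS,              // keep-alive for threads above core size
            new ArrayBlockingQueue<>(10),           // bounded task queue
            new ThreadPoolExecutor.CallerRunsPolicy());

    void runLater(Runnable runner) {
        executor.execute(runner);                   // replaces (new Thread(runner)).start()
    }
}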
I also recommend looking at the Concurrency API; it provides numerous predefined building blocks for general use. With an ExecutorService you can call shutdown() after submitting tasks: the executor then stops accepting new tasks, executes the previously submitted ones, and finally terminates.
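A minimal sketch of that shutdown sequence (the pool size and the timeout are arbitrary):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownSketch {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        executor.submit(() -> System.out.println("task ran"));  // submit tasks as needed
        executor.shutdown();                                    // stop accepting new tasks
        // wait (with a bound) for the already submitted tasks to finish
        executor.awaitTermination(1, TimeUnit.MINUTES);
    }
}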
For a short introduction:
https://www.baeldung.com/java-executor-service-tutorial
After doing lots of searching on Java, I really am very confused over the following questions:
Why would I choose an asynchronous method over a multi-threaded method?
Java futures are supposed to be non-blocking. What does non-blocking mean? Why call it non-blocking when the method to extract information from a Future--i.e., get()--is blocking and will simply halt the entire thread till the method is done processing? Perhaps a callback method that rings the church bell of completion when processing is complete?
How do I make a method async? What is the method signature?
public List<T> databaseQuery(String query, String[] args) {
    String preparedQuery = QueryBaker(query, args);
    List<T> listOfNumbers = DB_Exec(preparedQuery); // time-consuming task
    return listOfNumbers;
}
How would this fictional method become a non-blocking method? Or, if you prefer, please provide a simple synchronous method and an asynchronous version of it.
Why would I choose an asynchronous method over a multi-threaded method?
Asynchronous methods allow you to reduce the number of threads. Instead of tying up a thread in a blocking call, you can issue an asynchronous call and then be notified later when it completes. This frees up the thread to do other processing in the meantime.
It can be more convoluted to write asynchronous code, but the benefit is improved performance and memory utilization.
Java futures are supposed to be non-blocking. What does non-blocking mean? Why call it non-blocking when the method to extract information from a Future--i.e., get()--is blocking and will simply halt the entire thread till the method is done processing ? Perhaps a callback method that rings the church bell of completion when processing is complete?
Check out CompletableFuture, which was added in Java 8. It is much more useful than a plain Future. For one, it lets you chain all kinds of callbacks and transformations onto futures. You can set up code that will run once the future completes. This is much better than blocking in a get() call, as you surmise.
For instance, given asynchronous read and write methods like so:
CompletableFuture<ByteBuffer> read();
CompletableFuture<Integer> write(ByteBuffer bytes);
You could read from a file and write to a socket like so:
file.read()
    .thenCompose(bytes -> socket.write(bytes))
    .thenAccept(count -> log.write("Wrote {} bytes to socket.", count))
    .exceptionally(exception -> {
        log.error("Input/output error.", exception);
        return null;
    });
How do I make a method async? What is the method signature?
You would have it return a future.
public CompletableFuture<List<T>> databaseQuery(String query, String[] args);
It's then the responsibility of the method to perform the work in some other thread and avoid blocking the current thread. Sometimes you will have worker threads ready to go. If not, you could use the ForkJoinPool, which makes background processing super easy.
public CompletableFuture<List<T>> databaseQuery(String query, String[] args) {
    CompletableFuture<List<T>> future = new CompletableFuture<>();
    Executor executor = ForkJoinPool.commonPool();
    executor.execute(() -> {
        String preparedQuery = QueryBaker(query, args);
        List<T> list = DB_Exec(preparedQuery); // time-consuming task
        future.complete(list);
    });
    return future;
}
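If you do not need to complete the future manually, CompletableFuture.supplyAsync does the same thing more compactly (still relying on the question's hypothetical QueryBaker and DB_Exec helpers):
public CompletableFuture<List<T>> databaseQuery(String query, String[] args) {
    return CompletableFuture.supplyAsync(() -> {
        String preparedQuery = QueryBaker(query, args);
        return DB_Exec(preparedQuery);   // time-consuming task, run on the common pool
    });
}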
Why would I choose an asynchronous method over a multi-threaded method?
They sound like the same thing to me, except that asynchronous sounds like it will use one thread in the background.
Java futures are supposed to be non-blocking?
Non-blocking operations often use a Future, but the object itself is blocking, though only when you wait on it.
What does non-blocking mean?
The current thread doesn't wait/block.
Why call it non-blocking when the method to extract information from a Future<SomeObject>, i.e. get(), is blocking
You called it non-blocking. Starting the operation in the background is non-blocking, but if you need the results, blocking is the easiest way to obtain them.
and will simply halt the entire thread till the method is done processing?
Correct, it will do that.
Perhaps a callback method that rings the church bell of completion when processing is complete?
You can use a CompletableFuture, or you can just add whatever you want to do at the end to the task itself. You only need to block on things which have to be done in the current thread.
You need to return a Future and do something else while you wait, otherwise there is no point in using a non-blocking operation; you may as well execute it in the current thread, as that is simpler and more efficient.
You have the synchronous version already, the asynchronous version would look like
public Future<List<T>> databaseQuery(String query, String[] args) {
    return executor.submit(() -> {
        String preparedQuery = QueryBaker(query, args);
        List<T> listOfNumbers = DB_Exec(preparedQuery); // time-consuming task
        return listOfNumbers;
    });
}
I'm not a guru on multithreading but I'm gonna try to answer these questions for my sake as well
Why would I choose an asynchronous method over a multi-threaded method? (My problem: I believe I read too much and now I am myself confused)
Multi-threading means working with multiple threads; there isn't much else to it. One interesting point is that, on a single CPU core, multiple threads cannot run in a truly parallel fashion, so execution is divided into small time slices to give the illusion of working in parallel.
1. One example where multithreading would be useful is in real-time multiplayer games, where each thread corresponds to one user. User A would use thread A and User B would use thread B. Each thread could track its user's activity, and data could be shared between the threads.
2. Another example would be waiting for a long HTTP call. Say you're designing a mobile app and the user clicks download for a file of 5 gigabytes. If you don't use multithreading, the user would be stuck on that page, unable to perform any action until the HTTP call completes.
It's important to note that, for a developer, multithreading is just one way of designing code. It adds complexity and isn't always necessary.
Now for Async vs Sync, Blocking vs Non-blocking
These are some definitions I found from http://doc.akka.io/docs/akka/2.4.2/general/terminology.html
Asynchronous vs. Synchronous
A method call is considered synchronous if the caller cannot make progress until the method returns a value or throws an exception. On the other hand, an asynchronous call allows the caller to progress after a finite number of steps, and the completion of the method may be signalled via some additional mechanism (it might be a registered callback, a Future, or a message).
A synchronous API may use blocking to implement synchrony, but this is not a necessity. A very CPU intensive task might give a similar behavior as blocking. In general, it is preferred to use asynchronous APIs, as they guarantee that the system is able to progress. Actors are asynchronous by nature: an actor can progress after a message send without waiting for the actual delivery to happen.
Non-blocking vs. Blocking
We talk about blocking if the delay of one thread can indefinitely delay some of the other threads. A good example is a resource which can be used exclusively by one thread using mutual exclusion. If a thread holds on to the resource indefinitely (for example accidentally running an infinite loop) other threads waiting on the resource can not progress. In contrast, non-blocking means that no thread is able to indefinitely delay others.
Non-blocking operations are preferred to blocking ones, as the overall progress of the system is not trivially guaranteed when it contains blocking operations.
I find that async vs sync refers more to the intent of the call whereas blocking vs non-blocking refers to the result of the call. However, it wouldn't be wrong to say usually asynchronous goes with non-blocking and synchronous goes with blocking.
Java futures are supposed to be non-blocking? What does non-blocking mean? Why call it non-blocking when the method to extract information from a Future<SomeObject>, i.e. get(), is blocking and will simply halt the entire thread till the method is done processing? Perhaps a callback method that rings the church bell of completion when processing is complete?
Non-blocking means the call does not block the thread that invokes the method.
Futures were introduced in Java to represent the result of a call that may not have completed yet. Going back to the HTTP file example, say you call a method like the following:
Future<BigData> future = server.getBigFile(); // getBigFile would be an asynchronous method
System.out.println("This line prints immediately");
The call to getBigFile would return immediately and execution would proceed to the next line of code. You would later be able to retrieve the contents of the future (or be notified that the contents are ready). Libraries and frameworks like Netty, Akka, and Play use futures extensively.
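To sketch the "be notified" variant: assuming getBigFile returned a CompletableFuture<BigData> rather than a plain Future, and given made-up display and showError methods, you could attach the follow-up work instead of blocking:
server.getBigFile()                                       // CompletableFuture<BigData>
      .thenAccept(bigData -> display(bigData))            // runs once the download completes
      .exceptionally(error -> { showError(error); return null; });
System.out.println("This line also prints immediately");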
How do I make a method Async? What is the method signature?
I would say it depends on what you want to do.
If you want to quickly build something, you would use high-level constructs like futures, actor models, etc., which enable you to program efficiently in a multithreaded environment without making too many mistakes.
On the other hand if you just want to learn, I would say it's better to start with low level multithreading programming with mutexes, semaphores, etc.
Examples like these are easy to find on Google if you search for "java asynchronous example" along with any of the keywords I have mentioned.
Let me know if you have any other questions!
Normally when one uses Java 8's parallelStream(), the result is execution via the default, common fork-join pool (i.e. ForkJoinPool.commonPool()).
That is clearly undesirable, however, if one has work that is far from CPU bound, e.g. work that may be waiting on IO much of the time. In such cases one will want to use a separate pool, sized according to other criteria (e.g. how much of the time the tasks are likely to actually be using the CPU).
There's no obvious means of getting parallelStream() to use a different pool, but there is a way as detailed here.
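For reference, that approach is usually sketched roughly like this, where items, slowIoCall, Result, and the pool size are placeholders (checked exceptions from get() omitted):
ForkJoinPool ioPool = new ForkJoinPool(32);               // sized for IO-heavy work
List<Result> results = ioPool.submit(() ->
        items.parallelStream()
             .map(item -> slowIoCall(item))               // the not-CPU-bound work
             .collect(Collectors.toList())
).get();                                                  // blocks the caller; see the downside below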
Unfortunately, that approach entails invoking the terminal operation on the parallel stream from a fork-join pool thread. The downside of this is that if the target fork-join pool is completely busy with existing work, the whole execution will wait on it while doing absolutely nothing. Thus the pool can become a bottleneck worse than single-threaded execution. By contrast, when one uses parallelStream() in the "normal" fashion, ForkJoinPool.common.externalHelpComplete() or ForkJoinPool.common.tryExternalUnpush() are used to let the calling thread from outside the pool help with the processing.
Does anyone know of a way to both get parallelStream() to use a non-default fork-join pool and have a calling thread from outside the fork-join pool help in the processing of this work (but not the rest of the fork-join pool's work)?
You can use awaitQuiescence on the pool to help out. However, you can't select which task(s) you will help with; it will just take the next pending task from the pool, so if there are other pending tasks you might end up executing those before getting to your own.
ForkJoinPool forkJoinPool = new ForkJoinPool(1);
// make all threads busy:
forkJoinPool.submit(() -> LockSupport.parkNanos(Long.MAX_VALUE));
// submit our task (may contain your stream operation)
ForkJoinTask<Thread> task = forkJoinPool.submit(() -> Thread.currentThread());
// help out
while(!task.isDone()) // use zero timeout to execute one task only
    forkJoinPool.awaitQuiescence(0, TimeUnit.NANOSECONDS);
System.out.println(Thread.currentThread()==task.get());
will print true.
whereas
ForkJoinPool forkJoinPool = new ForkJoinPool(1);
// make all threads busy:
forkJoinPool.submit(() -> LockSupport.parkNanos(Long.MAX_VALUE));
// overload:
forkJoinPool.submit(() -> LockSupport.parkNanos(Long.MAX_VALUE));
// submit our task (may contain your stream operation)
ForkJoinTask<Thread> task = forkJoinPool.submit(() -> Thread.currentThread());
// help out
while(!task.isDone())
    forkJoinPool.awaitQuiescence(0, TimeUnit.NANOSECONDS);
System.out.println(Thread.currentThread()==task.get());
will hang forever as it attempts to execute the second blocking task.
Nevertheless, it lets the initiating thread help process the pool’s pending tasks, which raises the chance of its own task getting executed, as long as there are no infinite tasks (the example above is extreme and only chosen for demonstration).
But note that the entire relationship between the Fork/Join framework and the Stream API is an implementation detail anyway.
I am trying to use a ForkJoinPool to parallelize my CPU-intensive calculations.
My understanding of a ForkJoinPool is that it continues to work as long as any task is available to be executed. Unfortunately I frequently observed worker threads idling/waiting, so not all CPUs are kept busy. Sometimes I even observed additional worker threads.
I did not expect this, as I strictly tried to use non-blocking tasks.
My observation is very similar to those of ForkJoinPool seems to waste a thread.
After a lot of debugging inside ForkJoinPool I have a guess:
I used invokeAll() to distribute work over a list of subtasks. After invokeAll() has executed the first task itself, it starts joining the other ones. This works fine as long as the next task to join is on top of the execution queue. Unfortunately I had submitted additional tasks asynchronously without joining them, and I expected the fork/join framework to execute those tasks first and then turn back to joining any remaining tasks.
But it does not seem to work this way. Instead the worker thread stalls, calling wait() until the task it is waiting for becomes ready (presumably executed by another worker thread). I did not verify this, but it seems to be a general flaw of calling join().
ForkJoinPool provides an asyncMode, but this is a global parameter and cannot be set for individual submissions. Still, I would like to see my asynchronously forked tasks executed soon.
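For completeness, asyncMode can only be chosen when the pool is constructed, roughly like this:
// asyncMode = true switches the whole pool to FIFO scheduling for forked,
// never-joined tasks; it cannot be chosen per submission.
ForkJoinPool pool = new ForkJoinPool(
        Runtime.getRuntime().availableProcessors(),
        ForkJoinPool.defaultForkJoinWorkerThreadFactory,
        null,        // no custom UncaughtExceptionHandler
        true);       // asyncMode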
So, why does ForkJoinTask.doJoin() not simply execute any available task on top of its queue until the joined task becomes ready (either executed by itself or stolen by another worker)?
Since nobody else seems to understand my question, I will try to explain what I found after some nights of debugging:
The current implementation of ForkJoinTasks works well if all fork/join calls are strictly paired. Illustrating a fork with an opening bracket and a join with a closing one, a perfect binary fork/join pattern may look like this:
{([][]) ([][])} {([][]) ([][])}
If you use invokeAll() you may also submit a list of subtasks like this:
{([][][][]) ([][][][]) ([][][][])}
What I did, however, looks like this pattern:
{([) ([)} ... ]]
You may argue this looks ill-formed or is a misuse of the fork/join framework. But the only constraint is that the tasks' completion dependencies are acyclic, otherwise you may run into a deadlock. As long as my [] tasks do not depend on the () tasks, I don't see any problem with it. The offending ]]s just express that I do not wait for them explicitly; they may finish some day, and it does not matter to me (at that point).
Indeed the current implementation is able to execute my interlocked tasks, but only by spawning additional helper threads, which is quite inefficient.
The flaw seems to be in the current implementation of join(): joining a ) expects to see its corresponding ( on top of its execution queue, but finds a [ instead and is perplexed. Instead of simply executing [] to get rid of it, the current thread suspends (calling wait()) until someone else comes around to execute the unexpected task. This causes a drastic performance breakdown.
My primary intent was to put additional work onto the queue to prevent the worker threads from suspending when the queue runs empty. Unfortunately the opposite happens :-(
You are dead right about join(). I wrote this article two years ago that points out the problem with join().
As I said there, the framework cannot execute newly submitted requests until it finishes the earlier ones. And each worker thread cannot steal until its current request finishes, which results in the wait().
The additional threads you see are "continuation threads." Since join() eventually issues a wait(), these threads are needed so the entire framework doesn't stall.
You’re not using this framework for the very narrow purpose for which it was intended.
The framework started life as an experiment in the 2000 research paper. It has been modified since then, but the basic design, fork-and-join on large arrays, remains the same. The basic purpose is to teach undergraduates how to walk down the leaves of a balanced tree. When people use it for anything other than simple array processing, weird things happen. What it is doing in Java 7 is beyond me; that is the subject of the article.
The problems only get worse in Java 8, where it is the engine that drives all parallel stream work. Have a read of part two of that article. The lambda interest lists are filled with reports of thread stalls, stack overflows, and out-of-memory errors.
You use it at your own risk when you don’t use it for pure recursive decomposition of large data structures. Even then, the excessive threads it creates can cause havoc. I’m not going to pursue this discussion any further.