How to make the main thread wait until asynchronous methods have finished? - java

I have a Service class with a start() method:
public void start() {
    for (int i = 0; i < companiesList.size(); i++) {
        asychronous.someAsynchronous(...);
    }
    log.info("Start method has finished");
}
I have an Asynchronous class with a someAsynchronous() method:
@Async("threadPoolTaskExecutor")
public CompletableFuture<Void> someAsynchronous(some_parameters) {
    // do some stuff
    return null;
}
The log.info() shows up before the someAsynchronous() calls have finished.
How do I make log.info() wait until the someAsynchronous() calls in the loop have finished? Btw: the asynchronous threads are still running after the loop finishes.

The CompletableFuture<Void> calls are scheduled for execution, but once all of them have been started in separate threads, the for-loop finishes and the log is printed, possibly before any of them has completed. That is the point of asynchronous processing: you don't block on their results or on how long they take to execute.
To achieve what you want, you have to check periodically whether all of them have finished before you proceed to the log output.
// Add the executions to a List
List<CompletableFuture<Void>> futures = new ArrayList<>();
for (int i = 0; i < companiesList.size(); i++) {
    futures.add(asychronous.someAsynchronous(...));
}

// Periodic check
Iterator<CompletableFuture<Void>> iterator = futures.iterator();
while (iterator.hasNext()) {
    CompletableFuture<Void> future = iterator.next();  // get the next one
    if (future.isDone()) {                             // if finished...
        // ...                                         // ... do an action
        iterator.remove();                             // ... and remove it from the Iterator
    }
    if (!iterator.hasNext()) {                         // if you reach the end
        iterator = futures.iterator();                 // ... iterate over the remaining Futures again
    }
}
log.info("Start method has finished");
Note this method doesn't finish until all of the executions are done.
Edit: Thanks to @Kayaman, who suggested a single dedicated method that replaces the whole Iterator logic. The futures must be passed as an array:
// Add the executions to a List
List<CompletableFuture<Void>> futures = new ArrayList<>();
for (int i = 0; i < companiesList.size(); i++) {
    futures.add(asychronous.someAsynchronous(...));
}

// Wait for all of the futures to complete
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
log.info("Start method has finished");
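Note that for CompletableFuture.allOf(...).join() to work, someAsynchronous() must return an actual future rather than null: allOf throws a NullPointerException if any element of the array is null. A minimal sketch of the @Async method, assuming Spring's @Async and a single illustrative parameter (the parameter and the body are placeholders, not the question's real logic):
@Async("threadPoolTaskExecutor")
public CompletableFuture<Void> someAsynchronous(String companyId) {
    // ... do some stuff (placeholder)
    // return a completed future so the caller has something to wait on
    return CompletableFuture.completedFuture(null);
}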

Related

Is ordered execution expected on the single thread

Since the service is single-threaded, the monkey1 series of tasks will always be executed before the monkey2 tasks, so we can expect monkey1 to always be greater than monkey2, can't we?
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;

public class MonkeyCounter {
    private static AtomicInteger monkey1 = new AtomicInteger(0);
    private static AtomicLong monkey2 = new AtomicLong(0);

    public static void main(String[] args) {
        ExecutorService service = null;
        try {
            service = Executors.newSingleThreadExecutor();
            for (int i = 0; i < 100; i++)
                service.submit(() -> monkey1.getAndIncrement());
            for (int i = 0; i < 100; i++)
                service.submit(() -> monkey2.incrementAndGet());
            System.out.println(monkey1 + " " + monkey2);
        } finally {
            if (service != null) service.shutdown();
        }
    }
}
Javadoc on Executors.newSingleThreadExecutor:
Creates an Executor that uses a single worker thread operating off an unbounded queue. (Note however that if this single thread terminates due to a failure during execution prior to shutdown, a new one will take its place if needed to execute subsequent tasks.) Tasks are guaranteed to execute sequentially, and no more than one task will be active at any given time. Unlike the otherwise equivalent newFixedThreadPool(1) the returned executor is guaranteed not to be reconfigurable to use additional threads.
Tasks are placed in a queue. The queue is FIFO, so with a single thread, monkey1 >= monkey2 is guaranteed as long as none of the increments fails (they can be equal, for example when the print runs before any task or after all of them).
Bear in mind that the printed values of monkey1 and monkey2 are otherwise nondeterministic, because you do not wait for the jobs to complete.
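If you do want deterministic values, you can shut the executor down and wait for it to terminate before printing. A minimal sketch reusing the fields from the question (the timeout value is arbitrary):
service.shutdown();  // stop accepting new tasks; queued increments still run
try {
    if (service.awaitTermination(10, TimeUnit.SECONDS)) {
        System.out.println(monkey1 + " " + monkey2);  // now prints "100 100"
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}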
After Holger corrected me: with Executors.newSingleThreadExecutor(),
tasks are guaranteed to execute sequentially.
In the following example, even though each task in the first batch blocks for 5 seconds,
the whole first batch completes before the second batch, whose tasks do nothing but call println.
For example:
import java.util.concurrent.*;

public class ExecutorServiceTests {
    public static void main(String[] args) {
        java.util.concurrent.ExecutorService service = Executors.newSingleThreadExecutor();
        for (int i = 0; i < 5; i++) {
            service.submit(() -> {
                block(5000);
                System.out.println("execution1");
            });
        }
        for (int i = 0; i < 5; i++) {
            service.submit(() -> {
                System.out.println("execution2");
            });
        }
        service.shutdown();
    }

    private static void block(int ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Output:
execution1
execution1
execution1
execution1
execution1
execution2
execution2
execution2
execution2
execution2

Java parallel tasks, only executing once

This code of mine is not executing tasks in parallel;
it only executes the code once in this case (whatever is in the for loop, but it should run twice):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class mqDirect {
    public static void main(String args[]) throws Exception {
        int parallelism = 2;
        ExecutorService executorService = Executors.newFixedThreadPool(parallelism);
        Semaphore semaphore = new Semaphore(parallelism);
        for (int i = 0; i < 1; i++) {
            try {
                semaphore.acquire();
                // snip ... do stuff..
                semaphore.release();
            } catch (Throwable throwable) {
                semaphore.release();
            }
            executorService.shutdownNow();
        }
    }
}
In Java, the basic way to make code run in parallel is to create a Thread with a new Runnable as a constructor parameter; you then need to start it.
There are many tutorials that can help you get this to happen properly.
As your code stands, you are merely creating an ExecutorService (and not using it), creating a Semaphore (which should be done in the thread, but isn't), performing some processing, and then shutting down the executor.
BTW: shutdownNow() is probably not what you want; you should just use shutdown().
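A minimal, self-contained sketch of that Thread-plus-Runnable pattern (the class name and task body are illustrative, unrelated to the MQ code in the question):
public class TwoThreads {
    public static void main(String[] args) throws InterruptedException {
        // the same Runnable can be handed to several Threads
        Runnable task = () -> System.out.println("running in " + Thread.currentThread().getName());
        Thread first = new Thread(task);
        Thread second = new Thread(task);
        first.start();
        second.start();
        // wait for both to finish before main exits
        first.join();
        second.join();
    }
}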
OK, so I found this good tutorial:
http://programmingexamples.wikidot.com/threadpoolexecutor
And I have done something like this:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class mqDirect {
    int poolSize = 2;
    int maxPoolSize = 2;
    long keepAliveTime = 10;
    ThreadPoolExecutor threadPool = null;
    final ArrayBlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(5);

    public mqDirect() {
        threadPool = new ThreadPoolExecutor(poolSize, maxPoolSize,
                keepAliveTime, TimeUnit.SECONDS, queue);
    }

    public void runTask(Runnable task) {
        threadPool.execute(task);
        System.out.println("Task count.." + queue.size());
    }

    public void shutDown() {
        threadPool.shutdown();
    }

    public static void main(String args[]) throws Exception {
        mqDirect mtpe = new mqDirect();
        // start first one
        mtpe.runTask(new Runnable() {
            public void run() {
                for (int i = 0; i < 2; i++) {
                    try {
                        System.out.println("First Task");
                        runMqTests();
                        Thread.sleep(1000);
                    } catch (InterruptedException ie) {
                    }
                }
            }
        });
        // start second one
        /*
         * try{ Thread.sleep(500); }catch(InterruptedException
         * ie){}
         */
        mtpe.runTask(new Runnable() {
            public void run() {
                for (int i = 0; i < 2; i++) {
                    try {
                        System.out.println("Second Task");
                        runMqTests();
                        Thread.sleep(1000);
                    } catch (InterruptedException ie) {
                    }
                }
            }
        });
        mtpe.shutDown();
        // runMqTests();
    }

    // placeholder: runMqTests() is the question's own MQ test method, not shown here
    private static void runMqTests() {
    }
}
And it works!
But the problem is the duplicated code ... runMqTests() is the same task, so is there a way to make it run in parallel without duplicating the code?
The example I based this on assumes each task is different.
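One way to avoid that duplication: a Runnable is just an object, so you can build the task once and submit it as many times as you want it to run. A sketch assuming the mqDirect/mtpe setup and the static runMqTests() method from the code above (lambda syntax requires Java 8+):
// build the task once...
Runnable mqTask = () -> {
    for (int i = 0; i < 2; i++) {
        try {
            runMqTests();
            Thread.sleep(1000);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
    }
};
// ...and submit it once per parallel execution you want
mtpe.runTask(mqTask);
mtpe.runTask(mqTask);
mtpe.shutDown();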
This code I have is not executing tasks in parallel; it only executes the code once in this case (whatever is in the for loop, but it should run twice):
Just because you instantiate an ExecutorService instance doesn't mean that things magically run in parallel. You actually need to use that object for more than just shutting it down.
If you want the code in the loop to run on the threads in the service, then you need to do something like:
int parallelism = 2;
ExecutorService executorService = Executors.newFixedThreadPool(parallelism);
for (int i = 0; i < parallelism; i++) {
    executorService.submit(() -> {
        // the code you want to be run by the threads in the executor-service
        // ...
    });
}
// once you have submitted all of the jobs, you can shut it down
executorService.shutdown();
// you might want to call executorService.awaitTermination(...) here
It is important to note that this will run your code in the service, but there is no guarantee that it will run "in parallel". That depends on your number of processors and the race conditions inherent in threading. For example, the first task might start, run, and finish before the second one even starts. That's the nature of threaded programs, which are asynchronous by design.
If, however, you have at least 2 cores, and the code you submit to the executor-service takes a long time to run, then most likely the tasks will be running at the same time at some point.
Lastly, as @OldCurmudgeon points out, you should call shutdown() on the service, which lets jobs already submitted to the service run, as opposed to shutdownNow(), which cancels queued jobs and also calls Thread.interrupt() on any running jobs.
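As a follow-up to the awaitTermination(...) hint in the snippet above, a typical shutdown sequence looks roughly like this (the timeout is arbitrary):
executorService.shutdown();   // already-submitted jobs keep running
try {
    if (!executorService.awaitTermination(1, TimeUnit.MINUTES)) {
        // still not finished after the timeout; interrupt whatever is left
        executorService.shutdownNow();
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}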
Hope this helps.

ExecutorService.invokeAll and shutdown

So I have some Callable tasks, sensitive to interruption, which I submit to the ExecutorService using invokeAll. After 5 seconds, from another method, I call executorService.shutdownNow(), after which I call awaitTermination, which returns true, so all seems good. The problem is that the executor never terminates.
From my logging I know that each one of my tasks finished.
Nevertheless, invokeAll still blocks on f.get() when i is equal to the number of threads of the executor:
The following code is taken from AbstractExecutorService, plus some logging I added.
@Override
public <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks) throws InterruptedException {
    if (tasks == null) throw new NullPointerException();
    ArrayList<Future<T>> futures = new ArrayList<Future<T>>(tasks.size());
    boolean done = false;
    try {
        List<Callable<T>> list = new ArrayList<Callable<T>>();
        for (Callable<T> t : tasks) {
            list.add(t);
            RunnableFuture<T> f = newTaskFor(t);
            futures.add(f);
            execute(f);
        }
        for (int i = 0, size = futures.size(); i < size; i++) {
            Future<T> f = futures.get(i);
            if (!f.isDone()) {
                log.info("Future %s is not done! Task %s", i, list.get(i));
                try {
                    log.info("Get from future %s", i);
                    // NEXT LINE BLOCKS FOR i = NUMBER OF THREADS
                    f.get();
                    log.info("Got result from future %s", i);
                } catch (CancellationException ignore) {
                } catch (ExecutionException ignore) {
                }
            }
        }
        log.info("Obtained all!");
        done = true;
        return futures;
    } finally {
        if (!done)
            for (int i = 0, size = futures.size(); i < size; i++)
                futures.get(i).cancel(true);
    }
}
Am I not supposed to use invokeAll with shutdown? I guess not; after all, they are in the same class. Why does it block only when i equals the number of threads of the executor?
Yes, you're not supposed to use invokeAll with shutdown. At least this is what I understand; correct me if I'm wrong.
The shutdownNow method:
public List<Runnable> shutdownNow() {
    ...
    checkShutdownAccess();
    advanceRunState(STOP);
    interruptWorkers();
    tasks = drainQueue();
    ...
}
The only things it does are interrupt the worker threads and remove the rest of the runnables from the work queue (see drainQueue). shutdownNow/shutdown do not modify the futures inside our invokeAll call.
So what happens in my case is this: for an executor with N threads, I invoke 300 jobs, each of which takes more than 1 minute. After 5 seconds I cancel (interrupt the worker threads), so N threads are interrupted (futures 0 to N-1). What happens with the rest of the futures? Nothing: the next call to f.get() (see the corresponding line in the question) will block, and you're stuck there. This explains why I'm always blocked at i = number of threads.
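If the underlying goal is "run these tasks, but give up after 5 seconds", one alternative that avoids mixing invokeAll with shutdownNow is the timed overload invokeAll(tasks, timeout, unit), which cancels every task that has not completed when the timeout elapses. A minimal sketch (buildTasks() is a hypothetical helper that creates the jobs; the calling method is assumed to declare InterruptedException and ExecutionException):
List<Callable<Void>> tasks = buildTasks();   // hypothetical: creates the 300 jobs
// blocks for at most about 5 seconds; unfinished tasks are cancelled on return
List<Future<Void>> futures = executorService.invokeAll(tasks, 5, TimeUnit.SECONDS);
for (Future<Void> f : futures) {
    if (!f.isCancelled()) {
        f.get();   // the task completed within the timeout
    }
}
executorService.shutdown();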

In Java, how to pass objects back to the main thread from worker threads?

In Java, how do I pass objects back to the main thread from worker threads? Take the following code as an example:
public static void main(String[] args) {
    String[] inputs;
    Result[] results;
    Thread[] workers = new WorkerThread[numThreads];
    for (int i = 0; i < numThreads; i++) {
        workers[i] = new WorkerThread(i, inputs[i], results[i]);
        workers[i].start();
    }
    ....
}
....
class WorkerThread extends Thread {
    String input;
    int name;
    Result result;

    WorkerThread(int name, String input, Result result) {
        super(name + "");
        this.name = name;
        this.input = input;
        this.result = result;
    }

    public void run() {
        result = Processor.process(input);
    }
}
How do I pass the result back to main's results[i]?
How about passing this to WorkerThread,
workers[i] = new WorkerThread(i, inputs[i], results[i], this);
so that it could do
mainThread.results[i] = Processor.process(inputs[i]);
Why don't you use Callables and an ExecutorService?
main(String[] args) {
    String[] inputs;
    Future<Result>[] results;
    // executor is e.g. Executors.newFixedThreadPool(numThreads)
    for (int i = 0; i < inputs.length; i++) {
        results[i] = executor.submit(new Worker(inputs[i]));
    }
    for (int i = 0; i < inputs.length; i++) {
        Result r = results[i].get();
        // do something with the result
    }
}
@Thilo's and @Erickson's answers are the best ones; there are existing APIs that do this kind of thing simply and reliably.
But if you want to stick with your current approach of doing it by hand, then the following change to your code may be sufficient:
for (int i = 0; i < numThreads; i++) {
    results[i] = new Result();
    ...
    workers[i] = new WorkerThread(i, inputs[i], results[i]);
    workers[i].start();
}
...

public void run() {
    Result tmp = Processor.process(input);
    this.result.updateFrom(tmp);
    // ... where the updateFrom method copies the state of tmp into
    // the Result object that was passed from the main thread.
}
Another approach is to replace Result[] in the main program with Result[][] and pass each child thread a one-element Result[] that it can update with its result object (a lightweight holder).
However, there is an important gotcha when implementing this at a low level: the main thread needs to call Thread.join on all of the child threads before attempting to retrieve the results. If you don't, there is a risk that the main thread will occasionally see stale values in the Result objects. The join also ensures that the main thread doesn't try to access a Result before the corresponding child thread has completed it.
The main thread will need to wait for the worker threads to complete before getting the results. One way to do this is for the main thread to wait for each worker thread to terminate before attempting to read the result. A thread terminates when its run() method completes.
For example:
for (int i = 0; i < workers.length; i++) {
    workers[i].join();           // wait for the worker thread to terminate
    Result result = results[i];  // get the worker thread's result
    // process the result here...
}
You still have to arrange for the worker thread's result to be inserted into the results[] array somehow. As one possibility, you could do this by passing the array and an index into each worker thread and having the worker thread assign the result before terminating, as sketched below.
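A minimal sketch of that array-plus-index idea, reusing the Result and Processor types from the question (field names are illustrative):
class WorkerThread extends Thread {
    private final String input;
    private final Result[] results;
    private final int index;

    WorkerThread(int index, String input, Result[] results) {
        this.index = index;
        this.input = input;
        this.results = results;
    }

    @Override
    public void run() {
        // each worker writes into the slot it owns; main must join() it before reading the slot
        results[index] = Processor.process(input);
    }
}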
Some typical solutions would be:
Hold the result in the worker thread's instance (be it a Runnable or a Thread). This is similar to the use of the Future interface.
Use a BlockingQueue, passed to the worker threads at construction, into which they can place their results.
Simply use the ExecutorService and Callable interfaces to get a Future which can be asked for the result.
It looks like your goal is to perform the computation in parallel and then, once all results are available, let the main thread continue and use them.
If that's the case, implement your parallel computation as a Callable rather than a Thread. Pass the collection of tasks to the invokeAll() method of an ExecutorService. This method blocks until all the tasks have completed, and then your main thread can continue.
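A minimal sketch of that approach, reusing Processor and Result from the question (invokeAll and get() declare checked exceptions, so call this from a method that handles or declares them):
ExecutorService executor = Executors.newFixedThreadPool(numThreads);
List<Callable<Result>> tasks = new ArrayList<>();
for (String input : inputs) {
    tasks.add(() -> Processor.process(input));   // each Callable returns its Result
}
// invokeAll blocks until every task has completed
List<Future<Result>> futures = executor.invokeAll(tasks);
for (Future<Result> future : futures) {
    Result result = future.get();   // does not block: the task is already done
    // use the result on the main thread
}
executor.shutdown();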
I think I have a better solution: why don't you make your worker threads put their results into a LinkedBlockingQueue that is passed to them when they are done, and have your main method pick the results up from the queue like this:
while (true) {
    linkedListBlockingQueue.take();
    // TODO: fill in whatever you want to do with each result
    // if a specific kind of object is returned / a countdown latch has finished, exit
}
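A slightly fuller sketch of that queue-based hand-off, assuming the main thread knows how many results to expect (take() declares InterruptedException; names are illustrative):
BlockingQueue<Result> resultQueue = new LinkedBlockingQueue<>();
for (int i = 0; i < numThreads; i++) {
    final String input = inputs[i];
    new Thread(() -> {
        // each worker computes its result and hands it back through the queue
        resultQueue.add(Processor.process(input));
    }).start();
}
for (int i = 0; i < numThreads; i++) {
    Result result = resultQueue.take();   // blocks until the next result arrives
    // process the result here...
}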

ThreadPoolExecutor Utility methods

I am writing a thread pool utility in my multithreaded program. I just need to validate that the following methods are correct and that they return the right values for me. I am using a LinkedBlockingQueue with a size of 1. I also refer to the Javadoc, and it always uses the phrase "method will return an approximate number", so I doubt whether the following conditions are correct.
public boolean isPoolIdle() {
    return myThreadPool.getActiveCount() == 0;
}

public int getAcceptableTaskCount() {
    // initially poolSize is 0 (after the pool executes something it starts to change)
    if (myThreadPool.getPoolSize() == 0) {
        return myThreadPool.getCorePoolSize() - myThreadPool.getActiveCount();
    }
    return myThreadPool.getPoolSize() - myThreadPool.getActiveCount();
}

public boolean isPoolReadyToAcceptTasks() {
    return myThreadPool.getActiveCount() < myThreadPool.getCorePoolSize();
}
Please let me know your thoughts and suggestions.
UPDATE
The interesting thing was that if the pool tells me there are 3 threads available via getAcceptableTaskCount and I then pass 3 tasks to the pool, sometimes one task gets rejected and is handled by the RejectedExecutionHandler, and sometimes the pool handles all the tasks I passed. I am wondering why the pool rejects tasks when I am submitting them according to the available thread count.
--------- implementation of Gray's answer ---------
class MyTask implements Runnable {
    @Override
    public void run() {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("exec");
    }
}

@Test
public void testTPool() {
    ExecutorService pool = Executors.newFixedThreadPool(5);
    List<Future<MyTask>> list = new ArrayList<Future<MyTask>>();
    for (int i = 0; i < 5; i++) {
        MyTask t = new MyTask();
        list.add(pool.submit(t, t));
    }
    // note: this loop submits a new task for every entry it visits, so the list keeps growing
    for (int i = 0; i < list.size(); i++) {
        Future<MyTask> t = list.get(i);
        System.out.println("Result -" + t.isDone());
        MyTask m = new MyTask();
        list.add(pool.submit(m, m));
    }
}
This prints Result -false in the console, meaning that the task is not complete.
From your comments:
I need to know whether the pool is idle or whether the pool can accept tasks. If the pool can accept, I need to know how many free threads are in the pool. If it is 5, I will send 5 tasks to the pool for processing.
I don't think that you should be doing the pool accounting yourself. For your thread pool, if you use Executors.newFixedThreadPool(5), then you can submit as many tasks as you want and it will only run them on 5 threads.
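A minimal sketch of that point (the task body and the count of 20 are purely illustrative): submit more tasks than threads and let the executor queue the extras.
ExecutorService pool = Executors.newFixedThreadPool(5);
for (int i = 0; i < 20; i++) {
    final int taskId = i;
    // at most 5 tasks run at once; the other 15 wait in the executor's queue
    pool.submit(() -> System.out.println("task " + taskId + " on " + Thread.currentThread().getName()));
}
pool.shutdown();   // previously submitted tasks still run to completion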
So I get at most the first 5 tasks from the vector and assign them to the pool, ignoring the other tasks in the vector since they may be updated/removed by a separate cycle.
Ok, I see. So you want to maximize parallelism while not pre-loading jobs? I would think that something like the following pseudocode would work:
int numThreads = 5;
ExecutorService threadPool = Executors.newFixedThreadPool(numThreads);
List<Future<MyJob>> futures = new ArrayList<Future<MyJob>>();
// submit the initial jobs
for (int i = 0; i < numThreads; i++) {
    MyJob myJob = getNextBestJob();
    futures.add(threadPool.submit(myJob, myJob));
}
// the list is growing, so we use an indexed for loop
for (int i = 0; i < futures.size(); i++) {
    // wait for a job to finish
    MyJob myJob = futures.get(i).get();
    // process the job somehow
    // get the next best job now that the previous one finished
    MyJob nextJob = getNextBestJob();
    if (nextJob != null) {
        // submit the next job unless we are done
        futures.add(threadPool.submit(nextJob, nextJob));
    }
}
However, I don't quite understand how the thread count would change. If you edit your question with some more details, I can tweak my response.
